[Yahoo-eng-team] [Bug 1696264] Re: Create OpenStack client environment scripts in Installation Guide INCOMPLETE - doesn't state path for file

2017-07-11 Thread wingwj
** Also affects: keystone
   Importance: Undecided
   Status: New

** Changed in: keystone
   Status: New => In Progress

** Changed in: keystone
 Assignee: (unassigned) => wingwj (wingwj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1696264

Title:
  Create OpenStack client environment scripts in Installation Guide
  INCOMPLETE - doesn't state path for file

Status in OpenStack Identity (keystone):
  In Progress
Status in openstack-manuals:
  In Progress

Bug description:

  This bug tracker is for errors with the documentation; use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [X] This doc is inaccurate in this way: It says to create a file but
  doesn't say where... so I can't.  So I'm stuck.

  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: input and output.

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 15.0.0 on 2017-06-02 17:52
  SHA: 3d8b9d021dd5f85bb0f0232a0475271d7f5537fc
  Source: https://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/install-guide/source/keystone-openrc.rst
  URL: https://docs.openstack.org/ocata/install-guide-ubuntu/keystone-openrc.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1696264/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1345905] [NEW] Fail to use shareable image created from a volume-booted VM

2014-07-20 Thread wingwj
Public bug reported:

If an image is created from a volume-booted instance and then shared, it
can't be used by other tenants, because the underlying snapshot is owned
only by the original tenant in cinder.

The problem can be reproduced with these steps:

1. Create one boot-able volume from an image.
2. Create one instance with this volume.
3. Create an image from this instance.
4. Share the image to other tenants, like tenant B.
5. Create a new instance from this shared image by tenant B.
6. An error (failed to get snapshot) will be raised.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1345905

Title:
  Fail to use shareable image created from a volume-booted VM

Status in OpenStack Compute (Nova):
  New

Bug description:
  If an image is created from a volume-booted instance and then shared,
  it can't be used by other tenants, because the underlying snapshot is
  owned only by the original tenant in cinder.

  The problem can be reproduced with these steps:

  1. Create one boot-able volume from an image.
  2. Create one instance with this volume.
  3. Create an image from this instance.
  4. Share the image to other tenants, like tenant B.
  5. Create a new instance from this shared image by tenant B.
  6. An error (failed to get snapshot) will be raised.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1345905/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334923] [NEW] No need to backup instance when 'rotation' equals to 0

2014-06-26 Thread wingwj
Public bug reported:

Nova will delete all backup images according to the 'backup_type' parameter
when 'rotation' equals 0.

But with the current logic, Nova will generate one new backup, and then
delete it.

So it's odd to snapshot a useless backup first, IMO.
We need to add a new branch here: if 'rotation' equals 0, skip the backup
and just rotate.

The related discussion can be found here:
http://lists.openstack.org/pipermail/openstack-dev/2014-June/038760.html
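
A minimal sketch of the proposed branch (the helper names
_do_snapshot/_rotate_backups are illustrative, not nova's exact methods):

def backup_instance(self, context, instance, image_id, backup_type, rotation):
    """Back up an instance, keeping at most 'rotation' backups."""
    if rotation == 0:
        # Nothing should be kept: skip the pointless snapshot and only
        # rotate away the existing backups of this backup_type.
        self._rotate_backups(context, instance, backup_type, rotation)
        return
    self._do_snapshot(context, instance, image_id)
    self._rotate_backups(context, instance, backup_type, rotation)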

** Affects: nova
 Importance: Undecided
 Assignee: wingwj (wingwj)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => wingwj (wingwj)

** Description changed:

  Nova will delete all backup images according the 'backup_type' parameter
  when 'rotation' equals to 0.
  
  But according the logic now, Nova will generate one new backup, and then
  delete it..
  
- So It's weird to snapshot a useless backup firstly IMO. 
+ So It's weird to snapshot a useless backup firstly IMO.
  We need to add one new branch here: if 'rotation' is equal to 0, no need to 
backup, just rotate it.
  
- Relate discussions can be found here:
+ The related discussion can be found here:
  http://lists.openstack.org/pipermail/openstack-dev/2014-June/038760.html

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334923

Title:
  No need to backup instance when 'rotation' equals to 0

Status in OpenStack Compute (Nova):
  New

Bug description:
  Nova will delete all backup images according to the 'backup_type'
  parameter when 'rotation' equals 0.

  But with the current logic, Nova will generate one new backup, and
  then delete it.

  So it's odd to snapshot a useless backup first, IMO.
  We need to add a new branch here: if 'rotation' equals 0, skip the backup
  and just rotate.

  The related discussion can be found here:
  http://lists.openstack.org/pipermail/openstack-dev/2014-June/038760.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1334923/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334929] [NEW] Need to add a new task_state between 'SCHEDULING' and 'BLOCK_DEVICE_MAPPING'

2014-06-26 Thread wingwj
Public bug reported:

Now there's a 'None' task_state between 'SCHEDULING' and
'BLOCK_DEVICE_MAPPING'.

So if a compute node is rebooted during that stage, all building VMs on it
will stay in the 'None' task_state forever.
That state is uninformative and makes it hard to locate problems.

Therefore, we need to add a new task_state (like 'STARTING_BUILD') for
that period.

This new task_state also needs to be added to the docs.

The related discussion could be found here:
http://lists.openstack.org/pipermail/openstack-dev/2014-June/038638.html
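
A hedged sketch of the proposal: record a concrete task_state as soon as the
compute node takes over, so a reboot at that point no longer leaves the VM
with task_state 'None'. 'STARTING_BUILD' is the proposed constant; the update
helper shown is illustrative of nova's compute-manager style.

from nova.compute import task_states

def _start_building(self, context, instance):
    # Runs right after the scheduler hands the build to this compute node,
    # before block-device-mapping begins.
    self._instance_update(context, instance['uuid'],
                          task_state=task_states.STARTING_BUILD)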

** Affects: nova
 Importance: Undecided
 Assignee: wingwj (wingwj)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => wingwj (wingwj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334929

Title:
  Need to add a new task_state between 'SCHEDULING' and
  'BLOCK_DEVICE_MAPPING'

Status in OpenStack Compute (Nova):
  New

Bug description:
  Now there's a 'None' task_state between 'SCHEDULING' and
  'BLOCK_DEVICE_MAPPING'.

  So if a compute node is rebooted during that stage, all building VMs on
  it will stay in the 'None' task_state forever.
  That state is uninformative and makes it hard to locate problems.

  Therefore, we need to add a new task_state (like 'STARTING_BUILD')
  for that period.

  This new task_state also needs to be added to the docs.

  The related discussion could be found here:
  http://lists.openstack.org/pipermail/openstack-dev/2014-June/038638.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1334929/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1325148] [NEW] Remove useless codes for server_group

2014-05-30 Thread wingwj
Public bug reported:

Some code for 'extract_members' in 'create_server_group' was written during
the original blueprint patch's review
(https://review.openstack.org/#/c/62557/29).

But after a design change, members can no longer be specified in the create
API (compare PS29 with PS30).
Some of that code was kept after the feature merged,
so it should be removed.

** Affects: nova
 Importance: Undecided
 Assignee: wingwj (wingwj)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => wingwj (wingwj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1325148

Title:
  Remove useless codes for server_group

Status in OpenStack Compute (Nova):
  New

Bug description:
  Some code for 'extract_members' in 'create_server_group' was written
  during the original blueprint patch's review
  (https://review.openstack.org/#/c/62557/29).

  But after a design change, members can no longer be specified in the
  create API (compare PS29 with PS30).
  Some of that code was kept after the feature merged,
  so it should be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1325148/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1324348] Re: Server_group shouldn't have same policies in it

2014-05-29 Thread wingwj
** Also affects: tempest
   Importance: Undecided
   Status: New

** Changed in: tempest
 Assignee: (unassigned) => wingwj (wingwj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1324348

Title:
  Server_group shouldn't have same policies in it

Status in OpenStack Compute (Nova):
  In Progress
Status in Tempest:
  In Progress

Bug description:
  Several identical policies can currently be put into one server_group.
  That doesn't make sense; duplicate policies should be ignored.

  

  stack@devaio:~$ nova server-group-create --policy affinity --policy affinity wjsg1
  +--------------------------------------+-------+----------------------------+---------+----------+
  | Id                                   | Name  | Policies                   | Members | Metadata |
  +--------------------------------------+-------+----------------------------+---------+----------+
  | 4f6679b7-f6b1-4d1e-92cd-1a54e1fe0f3d | wjsg1 | [u'affinity', u'affinity'] | []      | {}       |
  +--------------------------------------+-------+----------------------------+---------+----------+
  stack@devaio:~$

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1324348/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1324348] [NEW] Server_group shouldn't have same policies in it

2014-05-28 Thread wingwj
Public bug reported:

Several identical policies can currently be put into one server_group.
That doesn't make sense; duplicate policies should be ignored.



stack@devaio:~$ nova server-group-create --policy affinity --policy affinity wjsg1
+--------------------------------------+-------+----------------------------+---------+----------+
| Id                                   | Name  | Policies                   | Members | Metadata |
+--------------------------------------+-------+----------------------------+---------+----------+
| 4f6679b7-f6b1-4d1e-92cd-1a54e1fe0f3d | wjsg1 | [u'affinity', u'affinity'] | []      | {}       |
+--------------------------------------+-------+----------------------------+---------+----------+
stack@devaio:~$
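
A minimal sketch of the dedup fix, assuming the validation lives in the
server_groups API extension; the helper name is illustrative:

def _normalize_policies(policies):
    # Drop duplicates while preserving order, e.g.
    # ['affinity', 'affinity'] -> ['affinity'].
    seen = set()
    unique = []
    for policy in policies:
        if policy not in seen:
            seen.add(policy)
            unique.append(policy)
    return unique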

** Affects: nova
 Importance: Undecided
 Assignee: wingwj (wingwj)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => wingwj (wingwj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1324348

Title:
  Server_group shouldn't have same policies in it

Status in OpenStack Compute (Nova):
  New

Bug description:
  Several identical policies can currently be put into one server_group.
  That doesn't make sense; duplicate policies should be ignored.

  

  stack@devaio:~$ nova server-group-create --policy affinity --policy affinity wjsg1
  +--------------------------------------+-------+----------------------------+---------+----------+
  | Id                                   | Name  | Policies                   | Members | Metadata |
  +--------------------------------------+-------+----------------------------+---------+----------+
  | 4f6679b7-f6b1-4d1e-92cd-1a54e1fe0f3d | wjsg1 | [u'affinity', u'affinity'] | []      | {}       |
  +--------------------------------------+-------+----------------------------+---------+----------+
  stack@devaio:~$

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1324348/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1288636] [NEW] Deleted flavor influences new flavor with same id in 'flavor-show' under psql

2014-03-06 Thread wingwj
Public bug reported:

When you create a new flavor with a deleted flavorid, the result in
mysql will show the new record. That's OK.

But if you run the same commands under psql, you'll find the result
is always the old, deleted one.

That means that under psql the flavorid stays occupied by the first record
that ever used it, even after that flavor has been deleted.

More info can be found here: http://paste.openstack.org/show/72752/
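
A hedged sketch of the likely fix direction: when querying with
read_deleted="yes", prefer the live row over soft-deleted ones by sorting on
the 'deleted' column. model_query/models follow nova's sqlalchemy helpers
loosely; treat this as illustrative, not the merged patch.

from sqlalchemy import asc

def flavor_get_by_flavor_id(context, flavor_id):
    result = (model_query(context, models.InstanceTypes, read_deleted="yes").
              filter_by(flavorid=flavor_id).
              # Live rows (deleted == 0) sort before soft-deleted ones.
              order_by(asc(models.InstanceTypes.deleted),
                       asc(models.InstanceTypes.id)).
              first())
    if not result:
        raise exception.FlavorNotFound(flavor_id=flavor_id)
    return result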

** Affects: nova
 Importance: Undecided
 Assignee: wingwj (wingwj)
 Status: New


** Tags: flavor postgresql

** Changed in: nova
 Assignee: (unassigned) => wingwj (wingwj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1288636

Title:
  Deleted flavor influences new flavor with same id in 'flavor-show'
  under psql

Status in OpenStack Compute (Nova):
  New

Bug description:
  When you create a new flavor with a deleted flavorid, the result in
  mysql will show the new record. That's OK.

  But if you run the same commands under psql, you'll find the result
  is always the old, deleted one.

  That means that under psql the flavorid stays occupied by the first
  record that ever used it, even after that flavor has been deleted.

  More info can be found here: http://paste.openstack.org/show/72752/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1288636/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1287492] [NEW] Add one configuration item for cache-using in libvirt/hypervisor

2014-03-03 Thread wingwj
Public bug reported:

We hit one scenario that requires disabling the cache on the Linux hypervisor.

But some code in libvirt/driver.py (including the
'suspend'/'snapshot' actions) is hard-coded.

For example:
---
def suspend(self, instance):
    """Suspend the specified instance."""
    dom = self._lookup_by_name(instance['name'])
    self._detach_pci_devices(
        dom, pci_manager.get_instance_pci_devs(instance))
    dom.managedSave(0)

So we need to add a configuration option for it in nova.conf, so the
operator can choose.
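
A hedged sketch of the proposed option; the option name is hypothetical and
the flag mapping is one plausible reading of "close the cache" (libvirt's
VIR_DOMAIN_SAVE_BYPASS_CACHE):

import libvirt
from oslo.config import cfg

libvirt_opts = [
    cfg.BoolOpt('libvirt_bypass_cache_on_save',
                default=False,
                help='Bypass the host page cache during managedSave() '
                     'in suspend/snapshot operations.'),
]

CONF = cfg.CONF
CONF.register_opts(libvirt_opts)

def suspend(self, instance):
    """Suspend the specified instance."""
    dom = self._lookup_by_name(instance['name'])
    self._detach_pci_devices(
        dom, pci_manager.get_instance_pci_devs(instance))
    # Branch on the new option instead of hard-coding flags=0.
    flags = (libvirt.VIR_DOMAIN_SAVE_BYPASS_CACHE
             if CONF.libvirt_bypass_cache_on_save else 0)
    dom.managedSave(flags)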

** Affects: nova
 Importance: Undecided
 Assignee: wingwj (wingwj)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => wingwj (wingwj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1287492

Title:
  Add one configuration item for cache-using in libvirt/hypervisor

Status in OpenStack Compute (Nova):
  New

Bug description:
  We hit one scenario that requires disabling the cache on the Linux
  hypervisor.

  But some code in libvirt/driver.py (including the
  'suspend'/'snapshot' actions) is hard-coded.

  For example:
  ---
  def suspend(self, instance):
      """Suspend the specified instance."""
      dom = self._lookup_by_name(instance['name'])
      self._detach_pci_devices(
          dom, pci_manager.get_instance_pci_devs(instance))
      dom.managedSave(0)

  So we need to add a configuration option for it in nova.conf, so the
  operator can choose.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1287492/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1284938] [NEW] EC2 authorize/revoke_rules in ec2 api broken with neutron

2014-02-25 Thread wingwj
Public bug reported:

Authorizing/revoking security_group_rules in the EC2 API doesn't work with
neutron.

The code in cloud.py only works with nova-network:

--
def _rule_dict_last_step(...):
    ...
    source_security_group = db.security_group_get_by_name(
        context.elevated(),
        source_project_id,
        source_security_group_name)
    ...
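
A hedged sketch of the fix direction, not the merged patch: resolve the source
group through nova's security-group API abstraction, which has a
neutron-backed driver, instead of reading nova's own DB directly (the .get()
signature here is approximate):

source_security_group = self.security_group_api.get(
    context.elevated(), name=source_security_group_name)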



** Affects: nova
 Importance: Undecided
 Assignee: wingwj (wingwj)
 Status: New


** Tags: ec2 havana-backport-potential

** Changed in: nova
 Assignee: (unassigned) => wingwj (wingwj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1284938

Title:
  EC2 authorize/revoke_rules in ec2 api broken with neutron

Status in OpenStack Compute (Nova):
  New

Bug description:
  Authorizing/revoking security_group_rules in the EC2 API doesn't work
  with neutron.

  The code in cloud.py only works with nova-network:

  --
  def _rule_dict_last_step(...):
      ...
      source_security_group = db.security_group_get_by_name(
          context.elevated(),
          source_project_id,
          source_security_group_name)
      ...

  

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1284938/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1283987] [NEW] Query Deadlock when creating 200 servers at once in sqlalchemy

2014-02-24 Thread wingwj
Public bug reported:

Query Deadlock when creating 200 servers at once in sqlalchemy.



This bug occurred while I was testing this bug: 
https://bugs.launchpad.net/nova/+bug/1270725

The original info is logged here:
http://paste.openstack.org/show/61534/

--

After checking the error log, we can see that the deadlocking function
is 'all()' in the sqlalchemy framework.


Previously, we used the '@retry_on_dead_lock' decorator to retry requests
when a deadlock occurs.

But it only covers session deadlocks (query/flush/execute); it
doesn't cover some 'Query' actions in sqlalchemy.


So, we need to add the same protection for 'all()' in sqlalchemy.
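
A minimal sketch of the idea, assuming a retry decorator similar to the
'@retry_on_dead_lock' named above; the wrapper below is illustrative, not the
actual oslo patch:

import functools
import time

from sqlalchemy.exc import OperationalError

def retry_on_deadlock(fn, retries=5, delay=0.5):
    """Retry a callable when the database reports a deadlock."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        for attempt in range(retries):
            try:
                return fn(*args, **kwargs)
            except OperationalError as e:
                # Crude detection; oslo matches driver-specific error codes.
                if 'deadlock' not in str(e).lower() or attempt == retries - 1:
                    raise
                time.sleep(delay)
    return wrapper

# Protect Query.all() the same way the session methods are protected,
# given some SQLAlchemy query object 'query':
#     instances = retry_on_deadlock(query.all)()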

** Affects: nova
 Importance: Undecided
 Assignee: wingwj (wingwj)
 Status: New

** Affects: oslo
 Importance: Undecided
 Assignee: wingwj (wingwj)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => wingwj (wingwj)

** Also affects: oslo
   Importance: Undecided
   Status: New

** Changed in: oslo
 Assignee: (unassigned) => wingwj (wingwj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1283987

Title:
  Query Deadlock when creating 200 servers at once in sqlalchemy

Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  New

Bug description:
  Query Deadlock when creating 200 servers at once in sqlalchemy.

  

  This bug occurred while I was testing this bug: 
  https://bugs.launchpad.net/nova/+bug/1270725

  The original info is logged here:
  http://paste.openstack.org/show/61534/

  --

  After checking the error log, we can see that the deadlocking function
  is 'all()' in the sqlalchemy framework.

  
  Previously, we used the '@retry_on_dead_lock' decorator to retry
  requests when a deadlock occurs.

  But it only covers session deadlocks (query/flush/execute); it
  doesn't cover some 'Query' actions in sqlalchemy.

  
  So, we need to add the same protection for 'all()' in sqlalchemy.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1283987/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1283364] [NEW] Items in quota-class shouldn't be negative int values

2014-02-21 Thread wingwj
Public bug reported:

Items in quota-class shouldn't be negative int values.

The validation in quotas is correct.

---

You can test this issue via the API:



POST http://10.250.10.245:8774/v2/1eb5c0b6adbc4ad8a8797e65df2778f5/os-quota-class-sets/defaults

Body = {
    "quota_class_set": {
        "ram": -51200
    }
}

---

The response is shown below:

{
    "quota_class_set": {
        "metadata_items": 128,
        "injected_file_content_bytes": 10240,
        "ram": -51200,
        "floating_ips": 10,
        "security_group_rules": 20,
        "instances": 10,
        "key_pairs": 100,
        "injected_files": 5,
        "cores": 20,
        "fixed_ips": -1,
        "injected_file_path_bytes": 255,
        "security_groups": 10
    }
}
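
A hedged sketch of the missing check, mirroring the validation the regular
quota-sets API already performs; the helper name is illustrative:

from webob import exc

def _validate_quota_limit(key, limit):
    try:
        limit = int(limit)
    except (ValueError, TypeError):
        raise exc.HTTPBadRequest(
            explanation="Quota '%s' must be an integer." % key)
    # -1 conventionally means 'unlimited'; anything below that is invalid.
    if limit < -1:
        raise exc.HTTPBadRequest(
            explanation="Quota '%s' must be -1 or greater." % key)
    return limit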

** Affects: nova
 Importance: Undecided
 Assignee: wingwj (wingwj)
 Status: New


** Tags: havana-backport-potential quota

** Changed in: nova
 Assignee: (unassigned) => wingwj (wingwj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1283364

Title:
  Items in quota-class shouldn't be negative int values

Status in OpenStack Compute (Nova):
  New

Bug description:
  Items in quota-class shouldn't be negative int values.

  The validation in quotas is correct.

  ---

  You can test this issue via the API:

  

  POST http://10.250.10.245:8774/v2/1eb5c0b6adbc4ad8a8797e65df2778f5/os-quota-class-sets/defaults

  Body = {
      "quota_class_set": {
          "ram": -51200
      }
  }

  ---

  The response is shown below:

  {
      "quota_class_set": {
          "metadata_items": 128,
          "injected_file_content_bytes": 10240,
          "ram": -51200,
          "floating_ips": 10,
          "security_group_rules": 20,
          "instances": 10,
          "key_pairs": 100,
          "injected_files": 5,
          "cores": 20,
          "fixed_ips": -1,
          "injected_file_path_bytes": 255,
          "security_groups": 10
      }
  }

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1283364/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1270725] Re: Quota Deadlock when creating 200 servers at once under Psql

2014-02-19 Thread wingwj
Link to oslo patch

** Also affects: oslo
   Importance: Undecided
   Status: New

** Changed in: oslo
 Assignee: (unassigned) => wingwj (wingwj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1270725

Title:
  Quota Deadlock when creating 200 servers at once under Psql

Status in OpenStack Compute (Nova):
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  New

Bug description:
  When you create 200 servers at once under psql, it's easy to hit a
  deadlock exception in the quota process (reserve/commit).

  Like this:
  http://paste.openstack.org/show/61534/

  The '@retry_on_dead_lock' function does not match psql's deadlock
  errors.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1270725/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280534] [NEW] Incorrect answer of 'flavor show' if deleted flavors with the same flavor_id exist

2014-02-14 Thread wingwj
Public bug reported:

'Flavor show' can't give the right answer if deleted flavors with the
same flavor_id exist in the db.

The result of 'flavor list' is right; it shows the new flavor's
contents.

But when you execute 'flavor show flavor_id', the output shows only the old
deleted record; the new flavor's info is hidden and never returned.

That means that once a flavor_id has been assigned, 'flavor show' will
always show the first record, even after you delete it and create a new
flavor with the same flavor_id.

Moreover, if there are several deleted flavors with the same flavor_id, the
result shows only the first deleted record.


More test information can be found here:
http://paste.openstack.org/show/65716/

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: flavor havana-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1280534

Title:
  Incorrect answer of 'flavor show' if deleted flavors with the same
  flavor_id exist

Status in OpenStack Compute (Nova):
  New

Bug description:
  'Flavor show' can't give the right answer if deleted flavors with the
  same flavor_id exist in the db.

  The result of 'flavor list' is right; it shows the new flavor's
  contents.

  But when you execute 'flavor show flavor_id', the output shows only the
  old deleted record; the new flavor's info is hidden and never returned.

  That means that once a flavor_id has been assigned, 'flavor show' will
  always show the first record, even after you delete it and create a new
  flavor with the same flavor_id.

  Moreover, if there are several deleted flavors with the same flavor_id,
  the result shows only the first deleted record.

  
  More test information can be found here:
  http://paste.openstack.org/show/65716/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1280534/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279719] [NEW] Fail to createVM with extra_spec using ComputeCapabilitiesFilter

2014-02-13 Thread wingwj
Public bug reported:

Creating a VM whose flavor has extra_specs fails with
ComputeCapabilitiesFilter; the scheduler never finds a suitable host.



Here's the test steps:

1. Create an aggregate, and set its metadata, like ssd=True.

2. Add one host to this aggregate.

3. Create a new flavor, and set extra_specs like ssd=True.

4. Create a new VM using this flavor.

5. Creation failed due to no valid hosts.

-
Let's look at the code:
In ComputeCapabilitiesFilter, hosts' capabilities are matched against the
flavor's extra_specs.

In Grizzly there was a periodic task named '_report_driver_status()' that
reported hosts' capabilities.
But in Havana that task was removed, so the capabilities are never updated
and the value is always 'None'.

So if you boot a VM with extra_specs, every host is filtered out,
and the 'no valid host' exception is raised.
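
A hedged sketch of why every host is rejected: with the Grizzly-era reporting
task gone, host_state.capabilities stays 'None', so no extra_spec can ever
match. The class shape follows nova's filter API loosely; details are
illustrative:

class ComputeCapabilitiesFilter(object):

    def host_passes(self, host_state, filter_properties):
        instance_type = filter_properties.get('instance_type') or {}
        extra_specs = instance_type.get('extra_specs') or {}
        # Never populated in Havana, so this is always empty.
        capabilities = getattr(host_state, 'capabilities', None) or {}
        for key in extra_specs:
            if key not in capabilities:  # e.g. 'ssd' is never reported
                return False
        return True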

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1279719

Title:
  Fail to createVM with extra_spec using ComputeCapabilitiesFilter

Status in OpenStack Compute (Nova):
  New

Bug description:
  Creating a VM whose flavor has extra_specs fails with
  ComputeCapabilitiesFilter; the scheduler never finds a suitable host.

  

  Here's the test steps:

  1. Create an aggregate, and set its metadata, like ssd=True.

  2. Add one host to this aggregate.

  3. Create a new flavor, and set extra_specs like ssd=True.

  4. Create a new VM using this flavor.

  5. Creation failed due to no valid hosts.

  -
  Let's look at the code:
  In ComputeCapabilitiesFilter, hosts' capabilities are matched against the
  flavor's extra_specs.

  In Grizzly there was a periodic task named '_report_driver_status()' that
  reported hosts' capabilities.
  But in Havana that task was removed, so the capabilities are never
  updated and the value is always 'None'.

  So if you boot a VM with extra_specs, every host is filtered out,
  and the 'no valid host' exception is raised.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1279719/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1278696] [NEW] 'DescribeInstances' in ec2 shows wrong image-message to instance booting from volumes

2014-02-10 Thread wingwj
Public bug reported:

'DescribeInstances' in the EC2 API shows a wrong image ID ('ami-0002')
for an instance booted from a volume:

n781dba539a84:~ # euca-describe-instances i-003a
RESERVATION  r-mdcwadip  c1c092e1c88f4027aeb203d50d63135b
INSTANCE     i-003a  ami-0002  wjvm1  running  None  (c1c092e1c88f4027aeb203d50d63135b, n781dba57c996)  0  m1.small  2014-02-10T07:00:40.000Z  nova  20.20.16.12  ebs
BLOCKDEVICE  /dev/vda  vol-000d  false

More info can be found here: http://paste.openstack.org/show/64087/

--

The code in ec2utils.py:

def glance_id_to_id(context, glance_id):
    """Convert a glance id to an internal (db) id."""
    if glance_id is None:
        return
    try:
        return db.s3_image_get_by_uuid(context, glance_id)['id']
    except exception.NotFound:
        return db.s3_image_create(context, glance_id)['id']

--

The image_ref (as glance_id in this function) of an instance booted from a
volume is an empty string (''), not None.
The guard above therefore has no effect.
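
A minimal sketch of the fix direction: treat any falsy glance_id (None, or the
empty string a volume-booted instance carries) as "no image":

def glance_id_to_id(context, glance_id):
    """Convert a glance id to an internal (db) id."""
    if not glance_id:  # covers both None and ''
        return
    try:
        return db.s3_image_get_by_uuid(context, glance_id)['id']
    except exception.NotFound:
        return db.s3_image_create(context, glance_id)['id']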

** Affects: nova
 Importance: Undecided
 Assignee: wingwj (wingwj)
 Status: New


** Tags: ec2

** Changed in: nova
 Assignee: (unassigned) => wingwj (wingwj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1278696

Title:
  'DescribeInstances' in ec2 shows wrong image-message to instance
  booting from volumes

Status in OpenStack Compute (Nova):
  New

Bug description:
  'DescribeInstances' in the EC2 API shows a wrong image ID ('ami-0002')
  for an instance booted from a volume:

  n781dba539a84:~ # euca-describe-instances i-003a
  RESERVATION  r-mdcwadip  c1c092e1c88f4027aeb203d50d63135b
  INSTANCE     i-003a  ami-0002  wjvm1  running  None  (c1c092e1c88f4027aeb203d50d63135b, n781dba57c996)  0  m1.small  2014-02-10T07:00:40.000Z  nova  20.20.16.12  ebs
  BLOCKDEVICE  /dev/vda  vol-000d  false

  More info can be found here: http://paste.openstack.org/show/64087/

  --

  The code in ec2utils.py:

  def glance_id_to_id(context, glance_id):
      """Convert a glance id to an internal (db) id."""
      if glance_id is None:
          return
      try:
          return db.s3_image_get_by_uuid(context, glance_id)['id']
      except exception.NotFound:
          return db.s3_image_create(context, glance_id)['id']

  --

  The image_ref (as glance_id in this function) of an instance booted from
  a volume is an empty string (''), not None.
  The guard above therefore has no effect.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1278696/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1272844] [NEW] Fails to 'modify_image_attribute()' for ec2

2014-01-25 Thread wingwj
Public bug reported:

The params of modify_image_attribute() in cloud.py don't match the
EC2 API.

-

I checked the EC2 API; the URL looks like:
https://ec2.amazonaws.com/?Action=ModifyImageAttribute
&ImageId=ami-61a54008
&LaunchPermission.Remove.1.UserId=

And when I tested with euca2ools, modify_image_attribute() failed
again:

TypeError: 'modify_image_attribute() takes exactly 5 non-keyword
arguments (3 given)'

--

Here is the definition of modify_image_attribute():

def modify_image_attribute(self, context, image_id, attribute,
                           operation_type, **kwargs)

And I printed out the params sent here; the value of args is:
args={'launch_permission': {'add': {'1': {'group': u'all'}}}, 'image_id': u'ami-0004'}

--

So the question is: are the params used in modify_image_attribute()
correct?
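
A hedged sketch of a signature that accepts what euca2ools actually sends
(keyword arguments only); this is a suggestion, and the helper at the end is
illustrative, not the merged fix:

def modify_image_attribute(self, context, image_id, attribute=None,
                           operation_type=None, **kwargs):
    # euca2ools delivers, e.g.:
    #   {'launch_permission': {'add': {'1': {'group': u'all'}}},
    #    'image_id': u'ami-0004'}
    # so 'attribute' and 'operation_type' must be derived from kwargs
    # when they are not passed positionally.
    if attribute is None and 'launch_permission' in kwargs:
        attribute = 'launchPermission'
        operation_type = ('add' if 'add' in kwargs['launch_permission']
                          else 'remove')
    return self._modify_image_launch_permission(
        context, image_id, operation_type, kwargs)  # illustrative helper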

** Affects: nova
 Importance: Undecided
 Assignee: wingwj (wingwj)
 Status: New


** Tags: ec2

** Changed in: nova
 Assignee: (unassigned) => wingwj (wingwj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1272844

Title:
  Fails to 'modify_image_attribute()' for ec2

Status in OpenStack Compute (Nova):
  New

Bug description:
  The params of modify_image_attribute() in cloud.py don't match the
  EC2 API.

  -

  I checked the EC2 API; the URL looks like:
  https://ec2.amazonaws.com/?Action=ModifyImageAttribute
  &ImageId=ami-61a54008
  &LaunchPermission.Remove.1.UserId=

  And when I tested with euca2ools, modify_image_attribute() failed
  again:

  TypeError: 'modify_image_attribute() takes exactly 5 non-keyword
  arguments (3 given)'

  --

  Here is the definition of modify_image_attribute():

  def modify_image_attribute(self, context, image_id, attribute,
                             operation_type, **kwargs)

  And I printed out the params sent here; the value of args is:
  args={'launch_permission': {'add': {'1': {'group': u'all'}}}, 'image_id': u'ami-0004'}

  --

  So the question is: are the params used in modify_image_attribute()
  correct?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1272844/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1272205] [NEW] VM stucks in 'RESIZE' if nova-scheduler failed

2014-01-24 Thread wingwj
Public bug reported:

A VM gets stuck in 'RESIZE' if nova-scheduler fails.

The status is not recovered even if 'nova-scheduler' is restarted.
You'd have to execute 'reset-state' manually to recover.

We need to detect this situation at the API layer and roll the VM state
back to ACTIVE.

More detailed info can be found here:
http://paste.openstack.org/show/61803/

** Affects: nova
 Importance: Undecided
 Assignee: wingwj (wingwj)
 Status: New


** Tags: havana-backport-potential

** Changed in: nova
 Assignee: (unassigned) => wingwj (wingwj)

** Tags added: havana-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1272205

Title:
  VM stucks in 'RESIZE' if nova-scheduler failed

Status in OpenStack Compute (Nova):
  New

Bug description:
  A VM gets stuck in 'RESIZE' if nova-scheduler fails.

  The status is not recovered even if 'nova-scheduler' is restarted.
  You'd have to execute 'reset-state' manually to recover.

  We need to detect this situation at the API layer and roll the VM
  state back to ACTIVE.

  More detailed info can be found here:
  http://paste.openstack.org/show/61803/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1272205/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1271821] [NEW] Target 'host' in 'evacuate' should not be the original server's host

2014-01-22 Thread wingwj
Public bug reported:

The 'evacuate' function aims to help administrators/operators evacuate
servers when a compute node fails.

So we need to add a protection here: the target host must not be the
original host.
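
A minimal sketch of the proposed guard in the evacuate API path; the method
shape and exception type are illustrative:

from webob import exc

def _evacuate(self, req, id, body):
    context = req.environ['nova.context']
    host = body['evacuate']['host']
    instance = self.compute_api.get(context, id)
    if host == instance['host']:
        raise exc.HTTPBadRequest(
            explanation="The target host can't be the same as "
                        "the instance's current host.")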

** Affects: nova
 Importance: Undecided
 Assignee: wingwj (wingwj)
 Status: New


** Tags: havana-backport-potential

** Changed in: nova
 Assignee: (unassigned) => wingwj (wingwj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1271821

Title:
  Target 'host' in 'evacuate' should not be the original server's host

Status in OpenStack Compute (Nova):
  New

Bug description:
  The 'evacuate' function aims to help administrators/operators evacuate
  servers when a compute node fails.

  So we need to add a protection here: the target host must not be the
  original host.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1271821/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1270675] [NEW] 'Unshelve' doesn't record instances' host-attributes into DB

2014-01-19 Thread wingwj
Public bug reported:

When a VM is shelve-offloaded, it no longer belongs to any host,
and the 'host' attribute shows 'None' if you execute 'nova show $id'.
That's correct.

But 'unshelve' doesn't record the 'host' attribute even after the instance
has been spawned on a host.
Therefore, if you operate on it later, nova-api raises an exception
because it can't find the VM's host.

Visit here for more information:
http://paste.openstack.org/show/61521/
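
A hedged sketch of the fix direction in the unshelve path: persist the chosen
host once the instance has been spawned. The update helper is illustrative of
nova's compute-manager style, not the exact patch:

from nova.compute import vm_states

def _finish_unshelve(self, context, instance, node):
    # Record where the instance actually landed so later operations can
    # route to the right compute host.
    self._instance_update(context, instance['uuid'],
                          host=self.host, node=node,
                          vm_state=vm_states.ACTIVE, task_state=None)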

** Affects: nova
 Importance: Undecided
 Assignee: wingwj (wingwj)
 Status: New


** Tags: unshelve

** Description changed:

- When one VM was shelve-offloaded, the VM won't belong to any hosts. 
+ When one VM was shelve-offloaded, the VM won't belong to any hosts.
  And the 'host' attribute will be 'None' if you execute 'nova show $id'. 
That's correct.
  
  But 'unshelve' doesn't record 'host' attributes even if the instance has 
already spawned in one host.
  Therefore, if you operate it later, the nova-api will raise an exception 
because it can't find the VM's host.
  
  Visit here for more information:
+ http://paste.openstack.org/show/61521/

** Tags added: unshelve

** Changed in: nova
 Assignee: (unassigned) => wingwj (wingwj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1270675

Title:
  'Unshelve' doesn't record instances' host-attributes into DB

Status in OpenStack Compute (Nova):
  New

Bug description:
  When a VM is shelve-offloaded, it no longer belongs to any host,
  and the 'host' attribute shows 'None' if you execute 'nova show $id'.
  That's correct.

  But 'unshelve' doesn't record the 'host' attribute even after the
  instance has been spawned on a host.
  Therefore, if you operate on it later, nova-api raises an exception
  because it can't find the VM's host.

  Visit here for more information:
  http://paste.openstack.org/show/61521/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1270675/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1270725] [NEW] Quota Deadlock when creating 200 servers at once under psql

2014-01-19 Thread wingwj
Public bug reported:

When you create 200 servers at once under psql, it's easy to hit a
deadlock exception in the quota process (reserve/commit).

Like this:
http://paste.openstack.org/show/61534/

The '@retry_on_dead_lock' function does not match psql's deadlock errors.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: db deadlock

** Tags added: db deadlock

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1270725

Title:
  Quota Deadlock when creating 200 servers at once under psql

Status in OpenStack Compute (Nova):
  New

Bug description:
  When you create 200 servers at once under psql, it's easy to hit a
  deadlock exception in the quota process (reserve/commit).

  Like this:
  http://paste.openstack.org/show/61534/

  The '@retry_on_dead_lock' function does not match psql's deadlock
  errors.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1270725/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 892415] Re: Hung reboots periodic task should only act on it's own instances

2013-10-15 Thread wingwj
** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/892415

Title:
  Hung reboots periodic task should only act on it's own instances

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Currently the hung-reboots periodic task queries for *all* instances
  in the database. It really should only query for instances
  belonging to its own host.

  This should use the `instance_get_all_by_host(context, self.host)`
  pattern used elsewhere.
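
  A minimal sketch of the suggested pattern; 'self.host' is the compute
  manager's own hostname:

  instances = db.instance_get_all_by_host(context, self.host)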

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/892415/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp