[Yahoo-eng-team] [Bug 1265452] [NEW] cache lock for image not consistent

2014-01-02 Thread jichencom
Public bug reported:

According to this bug https://bugs.launchpad.net/nova/+bug/1256306

For one image in the _base dir, 03d8e206-6500-4d91-b47d-ee74897f9b4e,
two locks were created:

-rw-r--r-- 1 nova nova 0 Oct 4 20:40 nova-03d8e206-6500-4d91-b47d-
ee74897f9b4e

-rw-r--r-- 1 nova nova 0 Oct 4 20:40 nova-
_var_lib_nova_instances__base_03d8e206-6500-4d91-b47d-ee74897f9b4e

Locks are generally used to protect concurrent data access, so these two
locks cannot provide the expected mutual exclusion.
In the current code, fetch_image from Glance uses the lock nova-x, while
the copy of the image from _base to the target directory uses
nova_var_lib_xxx.
Should they use the same lock?
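
These lock files come from named, file-backed locks: the lock name maps
directly to a lock file, so two different names yield two independent locks.
A minimal sketch of that behavior (helper names are illustrative, not nova's
actual lockutils API):

```python
import threading

# Registry of named locks: each distinct name yields a distinct,
# independent lock, just as each lock name maps to its own lock file.
_locks = {}

def get_named_lock(name):
    return _locks.setdefault(name, threading.Lock())

image_id = "03d8e206-6500-4d91-b47d-ee74897f9b4e"
fetch_lock = get_named_lock("nova-" + image_id)
copy_lock = get_named_lock("nova-_var_lib_nova_instances__base_" + image_id)

# The two code paths use different lock names, so holding one does
# not block the other -- there is no mutual exclusion between them.
fetch_lock.acquire()
copy_still_available = copy_lock.acquire(blocking=False)
copy_lock.release()
fetch_lock.release()
print(copy_still_available)  # True: the second lock was never held
```

Because fetch and copy lock different names, they can run concurrently
against the same cached image, which is exactly the inconsistency this
report describes.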

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1265452

Title:
  cache lock for image not consistent

Status in OpenStack Compute (Nova):
  New

Bug description:
  According to this bug https://bugs.launchpad.net/nova/+bug/1256306

  For one image in the _base dir, 03d8e206-6500-4d91-b47d-ee74897f9b4e,
  two locks were created:

  -rw-r--r-- 1 nova nova 0 Oct 4 20:40 nova-03d8e206-6500-4d91-b47d-
  ee74897f9b4e

  -rw-r--r-- 1 nova nova 0 Oct 4 20:40 nova-
  _var_lib_nova_instances__base_03d8e206-6500-4d91-b47d-ee74897f9b4e

  Locks are generally used to protect concurrent data access, so these two
  locks cannot provide the expected mutual exclusion.
  In the current code, fetch_image from Glance uses the lock nova-x, while
  the copy of the image from _base to the target directory uses
  nova_var_lib_xxx.
  Should they use the same lock?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1265452/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1265466] [NEW] Nova boot fails and raises NoValidHost when using a specific aggregate

2014-01-02 Thread ChenZheng
Public bug reported:

1. In nova.conf, set
scheduler_default_filters=AggregateInstanceExtraSpecsFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter
and restart the compute and scheduler services
2. Create an aggregate and add a host to it with metadata test_meta=1
3. Modify an existing flavor to add the same key:value test_meta=1
4. Boot an instance with this flavor; the request fails with:
| fault | {u'message': u'NV-67B7376 No valid host was found. ', u'code': 500,
  u'details': u'  File /usr/lib/python2.6/site-packages/nova/scheduler/filter_scheduler.py,
  line 107, in schedule_run_instance
  raise exception.NoValidHost(reason=) |

The related feature is documented here:
http://docs.openstack.org/grizzly/openstack-compute/admin/content//host-aggregates.html
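
AggregateInstanceExtraSpecsFilter passes a host only when the flavor's
extra_specs are satisfied by the metadata of an aggregate the host belongs
to. A simplified sketch of that matching (equality only; the real filter
also handles scoped keys and operators, which is one common reason for
unexpected NoValidHost results):

```python
def host_passes(flavor_extra_specs, aggregate_metadata):
    """Return True if every flavor extra_spec key/value is satisfied
    by the host aggregate's metadata (simplified equality matching)."""
    for key, value in flavor_extra_specs.items():
        if aggregate_metadata.get(key) != value:
            return False
    return True

# Mirrors the reproduction steps: aggregate metadata test_meta=1
# and flavor extra_specs test_meta=1 -- this host should pass.
print(host_passes({"test_meta": "1"}, {"test_meta": "1"}))  # True
print(host_passes({"test_meta": "1"}, {}))                  # False
```

If the metadata matches as above yet NoValidHost is still raised, the
rejection likely comes from another enabled filter or from a key-scoping
mismatch rather than from the equality check itself.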

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1265466

Title:
  Nova boot fails and raises NoValidHost when using a specific aggregate

Status in OpenStack Compute (Nova):
  New

Bug description:
  1. In nova.conf, set
  scheduler_default_filters=AggregateInstanceExtraSpecsFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter
  and restart the compute and scheduler services
  2. Create an aggregate and add a host to it with metadata test_meta=1
  3. Modify an existing flavor to add the same key:value test_meta=1
  4. Boot an instance with this flavor; the request fails with:
  | fault | {u'message': u'NV-67B7376 No valid host was found. ', u'code': 500,
    u'details': u'  File /usr/lib/python2.6/site-packages/nova/scheduler/filter_scheduler.py,
    line 107, in schedule_run_instance
    raise exception.NoValidHost(reason=) |

  The related feature is documented here:
  
http://docs.openstack.org/grizzly/openstack-compute/admin/content//host-aggregates.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1265466/+subscriptions



[Yahoo-eng-team] [Bug 1265465] [NEW] xenapi: auto disk config drops boot partition flag

2014-01-02 Thread John Garbutt
Public bug reported:

When the XenAPI driver resizes a boot partition, it does not take care
to add back the boot partition flag.

With PV images this is not really needed, because Xen does not care
whether the partition is bootable, but for HVM images it stops the image
from booting.
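
A fix would re-apply the flag right after the partition is recreated. As a
sketch, the command nova would need to run could be built like this (the
device path, partition number, and use of parted are illustrative
assumptions, not the driver's actual code):

```python
def make_set_boot_cmd(device, partition_number):
    """Build a parted command line that re-enables the boot flag on a
    partition, e.g. after it was recreated during an auto disk-config
    resize."""
    return ["parted", "--script", device,
            "set", str(partition_number), "boot", "on"]

cmd = make_set_boot_cmd("/dev/xvda", 1)
print(" ".join(cmd))  # parted --script /dev/xvda set 1 boot on
```

Running this (via the driver's usual execute helper) after the resize would
restore bootability for HVM guests while being harmless for PV guests.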

** Affects: nova
 Importance: Medium
 Status: New


** Tags: xenserver

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1265465

Title:
  xenapi: auto disk config drops boot partition flag

Status in OpenStack Compute (Nova):
  New

Bug description:
  When the XenAPI driver resizes a boot partition, it does not take care
  to add back the boot partition flag.

  With PV images this is not really needed, because Xen does not care
  whether the partition is bootable, but for HVM images it stops the
  image from booting.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1265465/+subscriptions



[Yahoo-eng-team] [Bug 1265472] [NEW] nicira: db integrity error on dhcp port operations

2014-01-02 Thread Armando Migliaccio
Public bug reported:

This stacktrace has been observed during dhcp port operations after
change 648f787d80530d34159220d56591dfa46a86:

2014-01-01 10:25:03.086 29167 ERROR neutron.openstack.common.rpc.amqp 
[req-5efe812b-3f99-4712-b301-c93cb1f5e893 888bff1f89bd4cdab3ea4d1d9e4a1c34 
0f1987d130294a938bf3d6c04ca843dc] Exception during message handling
2014-01-01 10:25:03.086 29167 TRACE neutron.openstack.common.rpc.amqp Traceback 
(most recent call last):
2014-01-01 10:25:03.086 29167 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/neutron/neutron/openstack/common/rpc/amqp.py, line 438, in 
_process_data
2014-01-01 10:25:03.086 29167 TRACE neutron.openstack.common.rpc.amqp 
**args)
2014-01-01 10:25:03.086 29167 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/neutron/neutron/common/rpc.py, line 45, in dispatch
2014-01-01 10:25:03.086 29167 TRACE neutron.openstack.common.rpc.amqp 
neutron_ctxt, version, method, namespace, **kwargs)
2014-01-01 10:25:03.086 29167 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/neutron/neutron/openstack/common/rpc/dispatcher.py, line 172, in 
dispatch
2014-01-01 10:25:03.086 29167 TRACE neutron.openstack.common.rpc.amqp 
result = getattr(proxyobj, method)(ctxt, **kwargs)
2014-01-01 10:25:03.086 29167 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/neutron/neutron/db/dhcp_rpc_base.py, line 195, in get_dhcp_port
2014-01-01 10:25:03.086 29167 TRACE neutron.openstack.common.rpc.amqp 
'create_port')
2014-01-01 10:25:03.086 29167 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/neutron/neutron/db/dhcp_rpc_base.py, line 53, in _port_action
2014-01-01 10:25:03.086 29167 TRACE neutron.openstack.common.rpc.amqp 
return plugin.create_port(context, port)
2014-01-01 10:25:03.086 29167 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/neutron/neutron/plugins/nicira/NeutronPlugin.py, line 1172, in 
create_port
2014-01-01 10:25:03.086 29167 TRACE neutron.openstack.common.rpc.amqp with 
context.session.begin(subtransactions=True):
2014-01-01 10:25:03.086 29167 TRACE neutron.openstack.common.rpc.amqp   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 643, 
in begin
2014-01-01 10:25:03.086 29167 TRACE neutron.openstack.common.rpc.amqp 
nested=nested)
2014-01-01 10:25:03.086 29167 TRACE neutron.openstack.common.rpc.amqp   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 236, 
in _begin
2014-01-01 10:25:03.086 29167 TRACE neutron.openstack.common.rpc.amqp 
self._assert_is_active()
2014-01-01 10:25:03.086 29167 TRACE neutron.openstack.common.rpc.amqp   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 213, 
in _assert_is_active
2014-01-01 10:25:03.086 29167 TRACE neutron.openstack.common.rpc.amqp % 
self._rollback_exception
2014-01-01 10:25:03.086 29167 TRACE neutron.openstack.common.rpc.amqp 
InvalidRequestError: This Session's transaction has been rolled back due to a 
previous exception during flush. To begin a new transaction with this Session, 
first issue Session.rollback(). Original exception was: (IntegrityError) (1452, 
'Cannot add or update a child row: a foreign key constraint fails 
(`neutron_nvp`.`neutron_nsx_port_mappings`, CONSTRAINT 
`neutron_nsx_port_mappings_ibfk_1` FOREIGN KEY (`neutron_id`) REFERENCES 
`ports` (`id`) ON DELETE CASCADE)') 'INSERT INTO neutron_nsx_port_mappings 
(neutron_id, nsx_switch_id, nsx_port_id) VALUES (%s, %s, %s)' 
('b79d2734-29e6-4494-9325-c50c647d6a56', 
'cb841161-bcee-49df-a881-d143667115c4', '5e660371-797a-449c-890d-b63c02364bea')

It looks like the plugin is attempting to add a mapping for a Neutron
port that no longer exists. Even though this is unlikely to cause a
failure in API operations, it should be addressed so that it does not
disrupt the DHCP agent's normal processing.
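
The failure mode can be reproduced in miniature with an in-memory SQLite
schema: inserting a mapping whose neutron_id no longer exists in ports
violates the foreign key, mirroring the MySQL 1452 error in the trace (IDs
shortened for readability):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enable FK enforcement
conn.execute("CREATE TABLE ports (id TEXT PRIMARY KEY)")
conn.execute("""CREATE TABLE neutron_nsx_port_mappings (
    neutron_id TEXT REFERENCES ports(id) ON DELETE CASCADE,
    nsx_switch_id TEXT, nsx_port_id TEXT)""")

conn.execute("INSERT INTO ports VALUES ('b79d2734')")
conn.execute("DELETE FROM ports WHERE id = 'b79d2734'")  # port deleted

try:
    # The plugin's late INSERT: the parent row is gone, so the
    # foreign key constraint fails.
    conn.execute("INSERT INTO neutron_nsx_port_mappings VALUES "
                 "('b79d2734', 'cb841161', '5e660371')")
    integrity_error = False
except sqlite3.IntegrityError:
    integrity_error = True
print(integrity_error)  # True
```

The fix direction implied by the report is to verify the port still exists
(or handle the IntegrityError) inside the same transaction before inserting
the mapping.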

** Affects: neutron
 Importance: Undecided
 Assignee: Armando Migliaccio (armando-migliaccio)
 Status: New


** Tags: nicira

** Changed in: neutron
 Assignee: (unassigned) = Armando Migliaccio (armando-migliaccio)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1265472

Title:
  nicira: db integrity error on dhcp port operations

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This stacktrace has been observed during dhcp port operations after
  change 648f787d80530d34159220d56591dfa46a86:

  2014-01-01 10:25:03.086 29167 ERROR neutron.openstack.common.rpc.amqp 
[req-5efe812b-3f99-4712-b301-c93cb1f5e893 888bff1f89bd4cdab3ea4d1d9e4a1c34 
0f1987d130294a938bf3d6c04ca843dc] Exception during message handling
  2014-01-01 10:25:03.086 29167 TRACE neutron.openstack.common.rpc.amqp 
Traceback (most recent call last):
  2014-01-01 10:25:03.086 29167 TRACE neutron.openstack.common.rpc.amqp   File 

[Yahoo-eng-team] [Bug 1265481] [NEW] mysql lock wait timeout on subnet_create

2014-01-02 Thread Salvatore Orlando
Public bug reported:

Traceback: http://paste.openstack.org/show/59586/
Occurred during testing with parallelism enabled: 
http://logs.openstack.org/20/57420/40/experimental/check-tempest-dsvm-neutron-isolated-parallel/c95f8e0

It's worth noting that the tests being executed here are particularly
stressful for neutron IPAM.

Setting importance to Medium pending triage.
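
A common mitigation for lock wait timeouts under heavy concurrency is
retrying the DB operation with backoff. A generic sketch of that pattern
(the exception type, retry count, and delays are illustrative, not
neutron's actual code):

```python
import time

class LockWaitTimeout(Exception):
    """Stand-in for the DB driver's lock-wait-timeout error."""

def retry_on_lock_timeout(func, retries=3, delay=0.01):
    """Call func(), retrying with linear backoff when the database
    reports a lock wait timeout; re-raise after the final attempt."""
    for attempt in range(retries):
        try:
            return func()
        except LockWaitTimeout:
            if attempt == retries - 1:
                raise
            time.sleep(delay * (attempt + 1))

calls = {"n": 0}
def flaky_subnet_create():
    calls["n"] += 1
    if calls["n"] < 3:
        raise LockWaitTimeout()  # contention on the first two tries
    return "subnet-created"

print(retry_on_lock_timeout(flaky_subnet_create))  # subnet-created
```

Retries only paper over the contention, of course; shortening the
transactions IPAM holds open would address the root cause.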

** Affects: neutron
 Importance: Medium
 Status: New


** Tags: db neutron-parallel

** Tags removed: parallel-testing
** Tags added: neutron-parallel

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1265481

Title:
  mysql lock wait timeout on subnet_create

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Traceback: http://paste.openstack.org/show/59586/
  Occurred during testing with parallelism enabled: 
http://logs.openstack.org/20/57420/40/experimental/check-tempest-dsvm-neutron-isolated-parallel/c95f8e0

  It's worth noting that the tests being executed here are particularly
  stressful for neutron IPAM.

  Setting importance to Medium pending triage.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1265481/+subscriptions



[Yahoo-eng-team] [Bug 1265493] [NEW] RFE - Adding comments to each rule added to iptables

2014-01-02 Thread ofer blaut
Public bug reported:

Version
===
Havana on rhel

Description
===

Each time a VM is created, new rules are added to iptables; the same
happens when security groups or floating IPs are modified.

It would be very helpful to add a comment to each rule explaining its
purpose and which sub-component added it.
This would help with debugging the system.
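
iptables already supports this through the comment match module
(`-m comment --comment "..."`, limited to 256 characters). A sketch of a
helper that tags each generated rule with its origin (the rule text and
component names are illustrative):

```python
def rule_with_comment(rule, component, purpose):
    """Append an iptables comment naming the sub-component that added
    the rule and why; iptables caps comments at 256 characters."""
    comment = f"{component}: {purpose}"[:256]
    return f'{rule} -m comment --comment "{comment}"'

r = rule_with_comment(
    "-A neutron-openvswi-i1234 -p tcp --dport 22 -j RETURN",
    "security-group", "allow SSH per default group rule")
print(r)
```

With such tagging, `iptables -L` output would show directly which
sub-component (security groups, floating IPs, etc.) produced each rule.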

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1265493

Title:
  RFE - Adding comments to each rule added to iptables

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Version
  ===
  Havana on rhel

  Description
  ===

  Each time a VM is created, new rules are added to iptables; the same
  happens when security groups or floating IPs are modified.

  It would be very helpful to add a comment to each rule explaining its
  purpose and which sub-component added it.
  This would help with debugging the system.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1265493/+subscriptions



[Yahoo-eng-team] [Bug 1265494] [NEW] OpenStack Nova: Unpause after host reboot fails

2014-01-02 Thread Tzach Shefi
Public bug reported:

Description of problem:
Unpausing an instance fails if the host has rebooted.

Version-Release number of selected component (if applicable):
RHEL: release 6.5 (Santiago)
openstack-nova-api-2013.2.1-1.el6ost.noarch
openstack-nova-compute-2013.2.1-1.el6ost.noarch
openstack-nova-scheduler-2013.2.1-1.el6ost.noarch
openstack-nova-common-2013.2.1-1.el6ost.noarch
openstack-nova-console-2013.2.1-1.el6ost.noarch
openstack-nova-conductor-2013.2.1-1.el6ost.noarch
openstack-nova-novncproxy-2013.2.1-1.el6ost.noarch
openstack-nova-cert-2013.2.1-1.el6ost.noarch

How reproducible:
Every time 

Steps to Reproduce:
1. Boot an instance 
2. Pause that instance
3. Reboot host
4. Unpause instance  

Actual results:
Cannot unpause; the instance is stuck in status PAUSED with power state Shutdown.

Expected results:
Instance should unpause, return to running state

Additional info:

virsh list --all --managed-save
The ID is missing for the paused instance (pausecirros); its state is shut off.

[root@orange-vdse ~(keystone_admin)]# virsh list --all --managed-save
 Id    Name            State
----------------------------------
 1     instance-0003   running
 2     instance-0002   running
 -     instance-0001   shut off

[root@orange-vdse ~(keystone_admin)]# nova list   (note the nova status is PAUSED)
+--------------------------------------+---------------+--------+------------+-------------+-----------------+
| ID                                   | Name          | Status | Task State | Power State | Networks        |
+--------------------------------------+---------------+--------+------------+-------------+-----------------+
| ebe310c2-d715-45e5-83b6-32717af1ac90 | cirros        | ACTIVE | None       | Running     | net=192.168.1.4 |
| 3ef89feb-414f-4524-b806-f14044efdb14 | pausecirros   | PAUSED | None       | Shutdown    | net=192.168.1.5 |
| 8bcae041-2f92-4ae2-a2c2-ee59b067ac76 | suspendcirros | ACTIVE | None       | Running     | net=192.168.1.2 |
+--------------------------------------+---------------+--------+------------+-------------+-----------------+


When testing without rebooting the host, the instance (cirros) keeps its
ID and state (1/paused) and unpauses correctly.

[root@orange-vdse ~(keystone_admin)]# virsh list --all --managed-save
 Id    Name            State
----------------------------------
 1     instance-0003   paused
 2     instance-0002   running
 -     instance-0001   shut off

+--------------------------------------+---------------+--------+------------+-------------+-----------------+
| ID                                   | Name          | Status | Task State | Power State | Networks        |
+--------------------------------------+---------------+--------+------------+-------------+-----------------+
| ebe310c2-d715-45e5-83b6-32717af1ac90 | cirros        | PAUSED | None       | Paused      | net=192.168.1.4 |
| 3ef89feb-414f-4524-b806-f14044efdb14 | pausecirros   | PAUSED | None       | Shutdown    | net=192.168.1.5 |
| 8bcae041-2f92-4ae2-a2c2-ee59b067ac76 | suspendcirros | ACTIVE | None       | Running     | net=192.168.1.2 |
+--------------------------------------+---------------+--------+------------+-------------+-----------------+

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1265494

Title:
  OpenStack Nova: Unpause after host reboot fails

Status in OpenStack Compute (Nova):
  New

Bug description:
  Description of problem:
  Unpausing an instance fails if the host has rebooted.

  Version-Release number of selected component (if applicable):
  RHEL: release 6.5 (Santiago)
  openstack-nova-api-2013.2.1-1.el6ost.noarch
  openstack-nova-compute-2013.2.1-1.el6ost.noarch
  openstack-nova-scheduler-2013.2.1-1.el6ost.noarch
  openstack-nova-common-2013.2.1-1.el6ost.noarch
  openstack-nova-console-2013.2.1-1.el6ost.noarch
  openstack-nova-conductor-2013.2.1-1.el6ost.noarch
  openstack-nova-novncproxy-2013.2.1-1.el6ost.noarch
  openstack-nova-cert-2013.2.1-1.el6ost.noarch

  How reproducible:
  Every time 

  Steps to Reproduce:
  1. Boot an instance 
  2. Pause that instance
  3. Reboot host
  4. Unpause instance  

  Actual results:
  Cannot unpause; the instance is stuck in status PAUSED with power state Shutdown.

  Expected results:
  Instance should unpause, return to running state

  Additional info:

  virsh list --all --managed-save
  The ID is missing for the paused instance (pausecirros); its state is shut off.

  [root@orange-vdse ~(keystone_admin)]# virsh list --all --managed-save
   Id    Name            State
  ----------------------------------
   1     instance-0003   running
   2     instance-0002   running
   - 

[Yahoo-eng-team] [Bug 1265495] [NEW] Error reading SSH protocol banner

2014-01-02 Thread Salvatore Orlando
Public bug reported:

This appears similar to bug 1210664 which is now marked as released.

Once parallel testing is enabled (running with the patches for blueprint 
neutron-parallel-testing), this error appears frequently.
One example here: 
http://logs.openstack.org/20/57420/40/experimental/check-tempest-dsvm-neutron-isolated-parallel/40bee04/console.html.gz

Please note that the manifestation of this error is similar to bug
1253896, but the error is different, and possibly the root cause as
well, as it seems the connection on port 22 is established successfully.

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: tempest
 Importance: Undecided
 Status: New


** Tags: neutron neutron-parallel

** Tags added: neutron neutron-parallel

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1265495

Title:
  Error reading SSH protocol banner

Status in OpenStack Neutron (virtual network service):
  New
Status in Tempest:
  New

Bug description:
  This appears similar to bug 1210664 which is now marked as released.

  Once parallel testing is enabled (running with the patches for blueprint 
neutron-parallel-testing), this error appears frequently.
  One example here: 
http://logs.openstack.org/20/57420/40/experimental/check-tempest-dsvm-neutron-isolated-parallel/40bee04/console.html.gz

  Please note that the manifestation of this error is similar to bug
  1253896, but the error is different, and possibly the root cause as
  well, as it seems the connection on port 22 is established
  successfully.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1265495/+subscriptions



[Yahoo-eng-team] [Bug 1265498] [NEW] Router over quota error with parallel testing

2014-01-02 Thread Salvatore Orlando
Public bug reported:

With parallel testing enabled, an error has been observed [1]. It seems
the router quota is exceeded, which is compatible with a scenario where
several tests creating routers are concurrently executed, and full
tenant isolation is not enabled.

There does not seem to be any issue on the neutron side; this error is
probably due to tempest tests which need to be made more robust, or
perhaps it will simply go away with full isolation.

[1] http://logs.openstack.org/85/64185/1/experimental/check-tempest-
dsvm-neutron-isolated-parallel/706d454/console.html.gz

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: tempest
 Importance: Undecided
 Status: New


** Tags: neutron-parallel

** Also affects: tempest
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1265498

Title:
  Router over quota error with parallel testing

Status in OpenStack Neutron (virtual network service):
  New
Status in Tempest:
  New

Bug description:
  With parallel testing enabled, an error has been observed [1]. It
  seems the router quota is exceeded, which is compatible with a
  scenario where several tests creating routers are concurrently
  executed, and full tenant isolation is not enabled.

  There does not seem to be any issue on the neutron side; this error is
  probably due to tempest tests which need to be made more robust, or
  perhaps it will simply go away with full isolation.

  [1] http://logs.openstack.org/85/64185/1/experimental/check-tempest-
  dsvm-neutron-isolated-parallel/706d454/console.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1265498/+subscriptions



[Yahoo-eng-team] [Bug 1265501] [NEW] TestAttachInterfaces fails on neutron w/parallelism

2014-01-02 Thread Salvatore Orlando
Public bug reported:

Failure instance: http://logs.openstack.org/85/64185/1/experimental
/check-tempest-dsvm-neutron-isolated-
parallel/94ca5ac/console.html.gz#_2013-12-27_13_37_15_639

This failure is the third most frequent with parallel testing (after the
port quota check error and the timeout due to the SSH protocol banner
error).

The problem seems to be that, when neutron is enabled, the operation does
not complete within the expected time (apparently 5 seconds for this
test).

Possible approaches:
- increase the timeout
- enable multiple neutron API workers (which might have side effects, as
pointed out by other contributors)
- address the issue in the nova/neutron interface
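
The first approach is just polling longer for the attach to become
visible. A sketch of such a bounded wait (the timeout and interval values
are illustrative, not tempest's actual helper):

```python
import time

def wait_for(predicate, timeout=5.0, interval=0.01):
    """Poll predicate() until it returns True or the timeout elapses.
    Returns True on success and False on timeout; the test's current
    5-second window is what the attach overruns with neutron."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

# Simulate an attach that becomes visible after a short delay.
visible_at = time.monotonic() + 0.05
print(wait_for(lambda: time.monotonic() >= visible_at))  # True
```

Raising `timeout` lets the test tolerate the slower neutron path, at the
cost of slower failure detection when something is genuinely broken.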

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: tempest
 Importance: Undecided
 Status: New


** Tags: neutron-parallel

** Also affects: neutron
   Importance: Undecided
   Status: New

** Also affects: nova
   Importance: Undecided
   Status: New

** Tags added: neutron-parallel

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1265501

Title:
  TestAttachInterfaces fails on neutron w/parallelism

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  New

Bug description:
  Failure instance: http://logs.openstack.org/85/64185/1/experimental
  /check-tempest-dsvm-neutron-isolated-
  parallel/94ca5ac/console.html.gz#_2013-12-27_13_37_15_639

  This failure is the third most frequent with parallel testing (after the
  port quota check error and the timeout due to the SSH protocol banner
  error).

  The problem seems to be that, when neutron is enabled, the operation does
  not complete within the expected time (apparently 5 seconds for this
  test).

  Possible approaches:
  - increase the timeout
  - enable multiple neutron API workers (which might have side effects, as
  pointed out by other contributors)
  - address the issue in the nova/neutron interface

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1265501/+subscriptions



[Yahoo-eng-team] [Bug 1265505] [NEW] VM is reachable, but cannot SSH into it

2014-01-02 Thread Salvatore Orlando
Public bug reported:

Tempest experimental job with parallelism manifested the following failure:
http://logs.openstack.org/85/64185/1/experimental/check-tempest-dsvm-neutron-isolated-parallel/03017fd/console.html.gz

Ping succeeds, but SSH connection fails.
Possible causes are:
- the SSH service did not start on the target VM
- the L3 agent did not configure the floating IP correctly: it added the
IP but not the SNAT/DNAT rules, so the ping succeeds because it is just
pinging an additional IP on the gateway.

Triaging is needed.
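
To separate the two causes during triage, one could check whether the L3
agent actually installed a DNAT rule for the floating IP in the router
namespace (e.g. by scanning `iptables -t nat -S` output). A sketch with
illustrative rule text and hypothetical IPs:

```python
def has_dnat_for(floating_ip, nat_rules):
    """Return True if any NAT rule DNATs traffic destined to
    floating_ip (matched as a simple substring for this sketch)."""
    return any("-j DNAT" in rule and floating_ip in rule
               for rule in nat_rules)

# Hypothetical `iptables -t nat -S` line from the router namespace.
sample_rules = [
    "-A neutron-l3-agent-PREROUTING -d 172.24.4.228/32 "
    "-j DNAT --to-destination 10.0.0.4",
]
print(has_dnat_for("172.24.4.228", sample_rules))  # True
print(has_dnat_for("172.24.4.230", sample_rules))  # False
```

If the DNAT rule is present, suspicion shifts to the first cause: sshd
never came up inside the guest.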

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: neutron-parallel

** Also affects: neutron
   Importance: Undecided
   Status: New

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1265505

Title:
  VM is reachable, but cannot SSH into it

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Tempest experimental job with parallelism manifested the following failure:
  
http://logs.openstack.org/85/64185/1/experimental/check-tempest-dsvm-neutron-isolated-parallel/03017fd/console.html.gz

  Ping succeeds, but SSH connection fails.
  Possible causes are:
  - the SSH service did not start on the target VM
  - the L3 agent did not configure the floating IP correctly: it added the
  IP but not the SNAT/DNAT rules, so the ping succeeds because it is just
  pinging an additional IP on the gateway.

  Triaging is needed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1265505/+subscriptions



[Yahoo-eng-team] [Bug 1265512] [NEW] VMware: unnecessary session termination

2014-01-02 Thread Gary Kotton
Public bug reported:

In some cases, the session with the VC is terminated and then restarted.
This can happen, for example, when the user runs:
nova list (and there are no running VMs)
In addition to the restart of the session, the operation also waits 2 seconds.
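
The likely fix direction is to reuse an active session instead of tearing
it down and paying the reconnect plus the 2-second wait. A minimal caching
sketch (class and method names hypothetical, not the actual VMware driver
API):

```python
class SessionManager:
    """Cache one session and recreate it only when it has actually
    become inactive (hypothetical sketch of session reuse)."""

    def __init__(self, create_session):
        self._create = create_session
        self._session = None
        self.creates = 0  # count of real (expensive) session creations

    def get(self):
        if self._session is None or not self._session.get("active"):
            self._session = self._create()
            self.creates += 1
        return self._session

mgr = SessionManager(lambda: {"active": True})
mgr.get()  # first call establishes the session
mgr.get()  # e.g. 'nova list' with no VMs: the session is reused
print(mgr.creates)  # 1
```

The key point is the activity check: the session is re-established only
when it is genuinely gone, not on every idle-looking operation.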

** Affects: nova
 Importance: High
 Assignee: Gary Kotton (garyk)
 Status: New


** Tags: grizzly-backport-potential havana-backport-potential vmware

** Changed in: nova
 Assignee: (unassigned) = Gary Kotton (garyk)

** Changed in: nova
Milestone: None = icehouse-2

** Changed in: nova
   Importance: Undecided = High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1265512

Title:
  VMware: unnecessary session termination

Status in OpenStack Compute (Nova):
  New

Bug description:
  In some cases, the session with the VC is terminated and then restarted.
  This can happen, for example, when the user runs:
  nova list (and there are no running VMs)
  In addition to the restart of the session, the operation also waits 2 seconds.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1265512/+subscriptions



[Yahoo-eng-team] [Bug 1265563] [NEW] keypairs cannot have an '@' sign in the name

2014-01-02 Thread Matthias Runge
Public bug reported:

When importing a keypair and naming it something like f...@host.bar.com,
you get the error Unable to import keypair, while the message from the
API is much clearer:

DEBUG:urllib3.connectionpool:POST 
/v2/6ebbe9474cf84bfbb42b5962b6b7e79f/os-keypairs HTTP/1.1 400 108
RESP: [400] CaseInsensitiveDict({'date': 'Thu, 02 Jan 2014 16:31:28 GMT', 
'content-length': '108', 'content-type': 'application/json; charset=UTF-8', 
'x-compute-request-id': 'req-86527df6-1dd6-4232-964d-0401332baa78'})
RESP BODY: {badRequest: {message: Keypair data is invalid: Keypair name 
contains unsafe characters, code: 400}}


message: Keypair data is invalid: Keypair name contains unsafe characters, 
code: 400}}

We should surface this message to the user, or prevent the issue in the first place.
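
Horizon could also validate the name client-side before calling the API.
Nova rejects names containing characters outside a safe set; the exact set
below is an assumption inferred from the "unsafe characters" error message,
not quoted from nova's source:

```python
import re

# Assumed safe set mirroring nova's "unsafe characters" rejection:
# letters, digits, spaces, underscores, and hyphens.
SAFE_NAME = re.compile(r"^[a-zA-Z0-9 _-]+$")

def keypair_name_is_safe(name):
    """Client-side pre-check for keypair names (assumed rules)."""
    return bool(SAFE_NAME.match(name))

print(keypair_name_is_safe("my-key_1"))          # True
print(keypair_name_is_safe("foo@host.bar.com"))  # False: '@' and '.'
```

With such a check, the import form could show a specific message (letters,
digits, spaces, '_' and '-' only, under the assumed rules) instead of the
generic "Unable to import keypair".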

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1265563

Title:
  keypairs cannot have an '@' sign in the name

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When importing a keypair and naming it something like f...@host.bar.com,
  you get the error Unable to import keypair, while the message from the
  API is much clearer:

  DEBUG:urllib3.connectionpool:POST 
/v2/6ebbe9474cf84bfbb42b5962b6b7e79f/os-keypairs HTTP/1.1 400 108
  RESP: [400] CaseInsensitiveDict({'date': 'Thu, 02 Jan 2014 16:31:28 GMT', 
'content-length': '108', 'content-type': 'application/json; charset=UTF-8', 
'x-compute-request-id': 'req-86527df6-1dd6-4232-964d-0401332baa78'})
  RESP BODY: {badRequest: {message: Keypair data is invalid: Keypair name 
contains unsafe characters, code: 400}}

  
  message: Keypair data is invalid: Keypair name contains unsafe characters, 
code: 400}}

  We should surface this message to the user, or prevent the issue in the first place.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1265563/+subscriptions



[Yahoo-eng-team] [Bug 1265607] [NEW] Instance.refresh() sends new info_cache objects

2014-01-02 Thread Dan Smith
Public bug reported:

If an older node does an Instance.refresh() it will fail because
conductor will overwrite the info_cache field with a new
InstanceInfoCache object. This happens during the LifecycleEvent handler
in nova-compute.

** Affects: nova
 Importance: Undecided
 Assignee: Dan Smith (danms)
 Status: Confirmed


** Tags: unified-objects

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1265607

Title:
  Instance.refresh() sends new info_cache objects

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  If an older node does an Instance.refresh() it will fail because
  conductor will overwrite the info_cache field with a new
  InstanceInfoCache object. This happens during the LifecycleEvent
  handler in nova-compute.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1265607/+subscriptions



[Yahoo-eng-team] [Bug 1265618] [NEW] image_snapshot_pending state breaks havana nodes

2014-01-02 Thread Dan Smith
Public bug reported:

Icehouse introduced a state called image_snapshot_pending which havana
nodes do not understand. If they call save with
expected_task_state=image_snapshot they will crash on the new state.

2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp   File "/opt/upstack/nova-havana/local/lib/python2.7/site-packages/nova/compute/manager.py", line 2341, in _snapshot_instance
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp     update_task_state)
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp   File "/opt/upstack/nova-havana/local/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1386, in snapshot
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp     update_task_state(task_state=task_states.IMAGE_PENDING_UPLOAD)
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp   File "/opt/upstack/nova-havana/local/lib/python2.7/site-packages/nova/compute/manager.py", line 2338, in update_task_state
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp     instance.save(expected_task_state=expected_state)
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp   File "/opt/upstack/nova-havana/local/lib/python2.7/site-packages/nova/objects/base.py", line 139, in wrapper
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp     ctxt, self, fn.__name__, args, kwargs)
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp   File "/opt/upstack/nova-havana/local/lib/python2.7/site-packages/nova/conductor/rpcapi.py", line 497, in object_action
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp     objmethod=objmethod, args=args, kwargs=kwargs)
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp   File "/opt/upstack/nova-havana/local/lib/python2.7/site-packages/nova/rpcclient.py", line 85, in call
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp     return self._invoke(self.proxy.call, ctxt, method, **kwargs)
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp   File "/opt/upstack/nova-havana/local/lib/python2.7/site-packages/nova/rpcclient.py", line 63, in _invoke
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp     return cast_or_call(ctxt, msg, **self.kwargs)
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp   File "/opt/upstack/nova-havana/local/lib/python2.7/site-packages/nova/openstack/common/rpc/proxy.py", line 126, in call
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp     result = rpc.call(context, real_topic, msg, timeout)
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp   File "/opt/upstack/nova-havana/local/lib/python2.7/site-packages/nova/openstack/common/rpc/__init__.py", line 139, in call
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp     return _get_impl().call(CONF, context, topic, msg, timeout)
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp   File "/opt/upstack/nova-havana/local/lib/python2.7/site-packages/nova/openstack/common/rpc/impl_kombu.py", line 816, in call
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp     rpc_amqp.get_connection_pool(conf, Connection))
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp   File "/opt/upstack/nova-havana/local/lib/python2.7/site-packages/nova/openstack/common/rpc/amqp.py", line 574, in call
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp     rv = list(rv)
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp   File "/opt/upstack/nova-havana/local/lib/python2.7/site-packages/nova/openstack/common/rpc/amqp.py", line 539, in __iter__
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp     raise result
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp UnexpectedTaskStateError_Remote: Unexpected task state: expecting (u'image_snapshot',) but the actual state is image_snapshot_pending
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp Traceback (most recent call last):
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp   File "/opt/upstack/nova/nova/conductor/manager.py", line 576, in _object_dispatch
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp     return getattr(target, method)(context, *args, **kwargs)
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp   File "/opt/upstack/nova/nova/objects/base.py", line 152, in wrapper
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp     return fn(self, ctxt, *args, **kwargs)
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp   File "/opt/upstack/nova/nova/objects/instance.py", line 459, in save
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp     columns_to_join=_expected_cols(expected_attrs))
2014-01-02 11:58:46.766 TRACE nova.openstack.common.rpc.amqp
2014-01-02 11:58:46.766 TRACE

[Yahoo-eng-team] [Bug 1265619] [NEW] Refactor the loadbalancing views

2014-01-02 Thread George Peristerakis
Public bug reported:

There are several view classes with overloaded function declarations that
really do nothing. I suggest removing these functions.

** Affects: horizon
 Importance: Undecided
 Status: New

** Description changed:

  There are several view classes with overloaded function declarations that
- really do nothing.
+ really do nothing. I suggest removing these functions.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1265619

Title:
  Refactor the loadbalancing views

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  There are several view classes with overloaded function declarations that
  really do nothing. I suggest removing these functions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1265619/+subscriptions



[Yahoo-eng-team] [Bug 1265620] [NEW] admin_project does not redirect to

2014-01-02 Thread Matthew D. Wood
Public bug reported:

The /admin_project view will not accept an empty redirect, e.g.
next=''.

This results in an internal server error.

** Affects: horizon
 Importance: Undecided
 Assignee: Matthew D. Wood (woodm1979)
 Status: Invalid

** Changed in: horizon
 Assignee: (unassigned) => Matthew D. Wood (woodm1979)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1265620

Title:
  admin_project does not redirect to 

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  The /admin_project view will not accept an empty redirect, e.g.
  next=''.

  This results in an internal server error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1265620/+subscriptions



[Yahoo-eng-team] [Bug 1265620] Re: admin_project does not redirect to

2014-01-02 Thread Matthew D. Wood
My mistake. This is really from internal code at our company.
Please disregard this bug.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1265620

Title:
  admin_project does not redirect to 

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  The /admin_project view will not accept an empty redirect, e.g.
  next=''.

  This results in an internal server error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1265620/+subscriptions



[Yahoo-eng-team] [Bug 1265626] [NEW] Resource usage, global object store usage fails when project is deleted but data is available

2014-01-02 Thread Michiel Muhlenbaumer
Public bug reported:

When requesting "Global Object Store Usage" in the Dashboard under
"Resource Usage" while a project has been deleted, the request fails
with a 404 in error_log:

NotFound: Could not find project, b1849da6f313414793b53fdbc6871177.
(HTTP 404)

and with "Error: Unable to retrieve statistics." shown via the AJAX popup.

The expected behaviour would be for that project's line to be omitted
from the overview; instead, the whole page hard-fails.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1265626

Title:
  Resource usage, global object store usage fails when project is
  deleted but data is available

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When requesting "Global Object Store Usage" in the Dashboard under
  "Resource Usage" while a project has been deleted, the request fails
  with a 404 in error_log:

  NotFound: Could not find project, b1849da6f313414793b53fdbc6871177.
  (HTTP 404)

  and with "Error: Unable to retrieve statistics." shown via the AJAX
  popup.

  The expected behaviour would be for that project's line to be omitted
  from the overview; instead, the whole page hard-fails.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1265626/+subscriptions



[Yahoo-eng-team] [Bug 1258691] Re: don't ignore H306 while running tests

2014-01-02 Thread nikhil komawar
This has been resolved by https://review.openstack.org/#/c/62321/ .
However, that merge proposal did not link to the bug.

** Changed in: glance
   Status: Confirmed => Fix Released

** Changed in: glance
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1258691

Title:
  don't ignore H306 while running tests

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed

Bug description:
  we should ensure that the imports are in alphabetical order and remove
  H306 ignore flag from tox.ini

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1258691/+subscriptions



[Yahoo-eng-team] [Bug 1265641] [NEW] Can not create pbr directory due to permission denied

2014-01-02 Thread Ken'ichi Ohmichi
Public bug reported:

A gate test failed due to permission denied when creating pbr directory:

http://logs.openstack.org/34/59934/7/gate/gate-nova-
python26/0297942/console.html

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1265641

Title:
  Can not create pbr directory due to permission denied

Status in OpenStack Compute (Nova):
  New

Bug description:
  A gate test failed due to permission denied when creating pbr
  directory:

  http://logs.openstack.org/34/59934/7/gate/gate-nova-
  python26/0297942/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1265641/+subscriptions



[Yahoo-eng-team] [Bug 1265670] [NEW] Changing cache_time doesn't work

2014-01-02 Thread Brant Knudson
Public bug reported:


The [assignment].cache_time value is loaded at import-time. This means that it 
gets the default value rather than the value that the user configured because 
the user config isn't read until CONF() is called (in keystone-all).
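A minimal, self-contained sketch of this import-time pitfall (the names below are hypothetical, not keystone's actual code): a decorator argument is evaluated when the module is imported, so a value configured afterwards is never seen.

```python
# Hypothetical config dict standing in for oslo.config's CONF; in keystone
# the real values only become available once CONF() runs in keystone-all.
CONFIG = {"cache_time": 600}  # the shipped default

def cached(ttl):
    """Toy caching decorator that just records its TTL on the function."""
    def wrap(fn):
        fn.ttl = ttl
        return fn
    return wrap

# The decorator argument is evaluated at import time, so the default sticks.
@cached(ttl=CONFIG["cache_time"])
def list_assignments():
    return []

# User configuration applied later (after import) is never seen above.
CONFIG["cache_time"] = 30

# One possible fix: defer the lookup until the value is actually needed.
@cached(ttl=lambda: CONFIG["cache_time"])
def list_assignments_fixed():
    return []
```

Here list_assignments.ttl stays at the default 600 even after the config changes, while the deferred version reads 30 at call time.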

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1265670

Title:
  Changing cache_time doesn't work

Status in OpenStack Identity (Keystone):
  New

Bug description:
  
  The [assignment].cache_time value is loaded at import-time. This means that 
it gets the default value rather than the value that the user configured 
because the user config isn't read until CONF() is called (in keystone-all).

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1265670/+subscriptions



[Yahoo-eng-team] [Bug 1265701] [NEW] Need use assertTrue/assertFalse replace assertEqual boolean value

2014-01-02 Thread Eric Guo
Public bug reported:

unittest.TestCase provides assertTrue/assertFalse for asserting boolean
values, but some test code uses assertEqual to compare against booleans.
Using assertTrue/assertFalse is more readable, clearer, and shorter.
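As a small illustration of the pattern being discussed (standalone example, not code from any of the affected projects):

```python
import unittest

class BooleanAssertExample(unittest.TestCase):
    """Contrasts the two assertion styles described in the report."""

    def test_with_assert_equal(self):
        flag = 2 > 1
        # Works, but compares against a literal boolean
        self.assertEqual(flag, True)

    def test_with_assert_true(self):
        flag = 2 > 1
        # Clearer intent, shorter, and a more direct failure message
        self.assertTrue(flag)
```

Run with `python -m unittest` in the module's directory; both tests pass, but the second reads as a statement of intent rather than a comparison.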

** Affects: glance
 Importance: Undecided
 Status: New

** Affects: keystone
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: oslo
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1265701

Title:
  Need use assertTrue/assertFalse replace assertEqual  boolean value

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Identity (Keystone):
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in Oslo - a Library of Common OpenStack Code:
  New

Bug description:
  unittest.TestCase provides assertTrue/assertFalse for asserting boolean
  values, but some test code uses assertEqual to compare against booleans.
  Using assertTrue/assertFalse is more readable, clearer, and shorter.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1265701/+subscriptions



[Yahoo-eng-team] [Bug 1265701] Re: Need use assertTrue/assertFalse replace assertEqual boolean value

2014-01-02 Thread Eric Guo
** Also affects: oslo
   Importance: Undecided
   Status: New

** Also affects: glance
   Importance: Undecided
   Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1265701

Title:
  Need use assertTrue/assertFalse replace assertEqual  boolean value

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Identity (Keystone):
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in Oslo - a Library of Common OpenStack Code:
  New

Bug description:
  unittest.TestCase has methods assertTrue/assertFalse to assert boolean value. 
But some test codes
  used assertEqual to compare with boolean value.   Using  
assertTrue/assertFalse is more readable and clear.
  aslo less code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1265701/+subscriptions



[Yahoo-eng-team] [Bug 1265716] [NEW] Wrong variable names addFixedIp and networkId in multinic-add-fixed-ip-req.json

2014-01-02 Thread Shuangtai Tian
Public bug reported:

{
    "addFixedIp": {
        "networkId": 1
    }
}

The variable names should be add_fixed_ip and network_id.
These were not updated in abaadf09ed86ef68 ("Adds V3 API samples for
cells and multinic").
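For comparison, the request body with the expected snake_case names would presumably look like:

```
{
    "add_fixed_ip": {
        "network_id": 1
    }
}
```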

** Affects: nova
 Importance: Undecided
 Assignee: Shuangtai Tian (shuangtai-tian)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Shuangtai Tian (shuangtai-tian)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1265716

Title:
  Wrong variable names  addFixedIp and networkId in multinic-add-
  fixed-ip-req.json

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  {
      "addFixedIp": {
          "networkId": 1
      }
  }

  The variable names should be add_fixed_ip and network_id.
  These were not updated in abaadf09ed86ef68 ("Adds V3 API samples for
  cells and multinic").

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1265716/+subscriptions



[Yahoo-eng-team] [Bug 1265719] [NEW] Create image with api v2 return 500 error

2014-01-02 Thread renminmin
Public bug reported:

I tried to use the glance API v2 to create an image in two steps:
1. Create the image metadata and get the image id:
curl -i -X POST -H "X-Auth-Token: $token" -H "content-type: application/json" \
-d '{"name": "image-8", "type": "kernel", "foo": "bar", "disk_format": "aki", "container_format": "aki", "protected": false, "tags": ["test", "image"], "visibility": "public", "min_ram": 1, "min_disk": 1}' \
http://192.168.0.100:9292/v2/images

glance image-show image-8

+--+--+
| Property | Value|
+--+--+
| Property 'foo'   | bar  |
| Property 'type'  | kernel   |
| container_format | aki  |
| created_at   | 2014-01-03T06:21:37  |
| deleted  | False|
| disk_format  | aki  |
| id   | a97139a5-1942-45ac-91e9-37e1febf7627 |
| is_public| True |
| min_disk | 1|
| min_ram  | 1|
| name | image-8  |
| owner| 25adaa8f93ee4199b6a362c45745231d |
| protected| False|
| status   | queued   |
| updated_at   | 2014-01-03T06:21:37  |
+--+--+

The image status is "queued", waiting for the image data.

2. Update the locations of image-8 with the PATCH method:

curl -i -X PATCH -H "X-Auth-Token: $1" \
-H "content-type: application/openstack-images-v2.1-json-patch" \
-d '[{"op": "add", "path": "/locations/1", "value": {"url": "file:///var/lib/glance/images/cirros-0.3.1-x86_64-uec", "metadata": {}}}]' \
http://192.168.0.100:9292/v2/images/a97139a5-1942-45ac-91e9-37e1febf7627

Response:

HTTP/1.1 500 Internal Server Error
Content-Type: text/plain
Content-Length: 0
Date: Fri, 03 Jan 2014 06:22:01 GMT
Connection: close


Expected response:

HTTP/1.1 200 OK
Content-Length: 496
Content-Type: application/json; charset=UTF-8
X-Openstack-Request-Id: req-755954a9-e375-4e43-a77e-1db2d430619e
Date: Fri, 03 Jan 2014 06:35:43 GMT

{"status": "active", "name": "image-8", "tags": ["test", "image"],
"container_format": "aki", "created_at": "2014-01-03T06:21:37Z",
"disk_format": "aki", "updated_at": "2014-01-03T06:35:43Z",
"visibility": "public", "self": "/v2/images/a97139a5-1942-45ac-91e9-37e1febf7627",
"protected": false, "id": "a97139a5-1942-45ac-91e9-37e1febf7627",
"file": "/v2/images/a97139a5-1942-45ac-91e9-37e1febf7627/file",
"min_disk": 1, "foo": "bar", "type": "kernel",
"min_ram": 1, "schema": "/v2/schemas/image"}


I debugged the glance code. I think the problem is in the _check_quota()
function in glance/quota/__init__.py: after the image is created its size
is None, yet _check_quota() computes the consumed space from the image
meta size.
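A simplified sketch of the failure mode being described (hypothetical helper functions, not glance's actual implementation): a freshly created v2 image has no data yet, so its size is None, and doing arithmetic on it raises a TypeError that surfaces as a 500.

```python
def check_quota_broken(used_bytes, quota_bytes, image_size):
    # A queued v2 image has size None until data is uploaded;
    # this addition then raises TypeError.
    return used_bytes + image_size <= quota_bytes

def check_quota_fixed(used_bytes, quota_bytes, image_size):
    # Treat an unknown (None) size as zero until real image data arrives.
    return used_bytes + (image_size or 0) <= quota_bytes
```

The fixed variant simply skips the unknown size when counting consumed space, which matches the report's suggestion that the quota check should not use the meta size of a queued image.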

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1265719

Title:
  Create image with api v2 return 500 error

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  I tried to use the glance API v2 to create an image in two steps:
  1. Create the image metadata and get the image id:
  curl -i -X POST -H "X-Auth-Token: $token" -H "content-type: application/json" \
  -d '{"name": "image-8", "type": "kernel", "foo": "bar", "disk_format": "aki", "container_format": "aki", "protected": false, "tags": ["test", "image"], "visibility": "public", "min_ram": 1, "min_disk": 1}' \
  http://192.168.0.100:9292/v2/images

  glance image-show image-8

  +--+--+
  | Property | Value|
  +--+--+
  | Property 'foo'   | bar  |
  | Property 'type'  | kernel   |
  | container_format | aki  |
  | created_at   | 2014-01-03T06:21:37  |
  | deleted  | False|
  | disk_format  | aki  |
  | id   | a97139a5-1942-45ac-91e9-37e1febf7627 |
  | is_public| True |
  | min_disk | 1|
  | min_ram  | 1|
  | name | image-8  |
  | owner| 25adaa8f93ee4199b6a362c45745231d |
  | protected| False|
  | status   | queued   |
  | updated_at   | 2014-01-03T06:21:37  |
  +--+--+

  image status is queued and