[Yahoo-eng-team] [Bug 1322952] [NEW] Uptime in Overview and Instance tab change if system time changes.

2014-05-24 Thread vigneshvar
Public bug reported:

If the system time of the server running Horizon is changed, the uptime of
instances in the Overview and Instances tabs changes accordingly.

Steps to Reproduce

1) Launch an instance and leave it running for a few minutes.
2) Check whether the uptime has changed.
3) Log in to the server where Horizon is running and change the system time.
4) Scenario 1: Set the clock back by less than the current uptime.
 Eg: If the current uptime is 1 hour, decrease the time by only 30 minutes.
 The uptime shows 30 minutes after refreshing the page.
5) Scenario 2: Set the clock back by more than the current uptime.
 Eg: If the current uptime is 1 hour, decrease the time by 2 hours 30 minutes.
 The uptime shows 0 minutes on the Instances page, and sometimes the instance
disappears from the Overview page.
6) Scenario 3: Set the clock forward by a few minutes.
 Eg: If the current uptime is 1 hour, increase the time by 30 minutes.
 The uptime shows 1 hour 30 minutes.

The above bug may affect metering and billing if uptime is used in the
calculation. Scenario 3 (if the system time is accidentally set to the
future) can in fact overcharge end users, which would break the SLA.
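The behaviour in all three scenarios follows directly from computing uptime as a difference of wall-clock timestamps. A minimal sketch (hypothetical, not Horizon's actual code) reproducing the symptoms:

```python
from datetime import datetime, timedelta

def uptime(launched_at, now=None):
    """Hypothetical uptime calculation: wall-clock 'now' minus launch time.

    Because the result is a difference of timestamps, any change to the
    server's system time shifts the displayed uptime directly.
    """
    now = now or datetime.utcnow()
    delta = now - launched_at
    # A negative delta (clock moved to before the launch time) collapses to
    # zero, matching Scenario 2 where the instance shows 0 minutes of uptime.
    return max(delta, timedelta(0))

launched = datetime(2014, 5, 24, 12, 0, 0)
print(uptime(launched, now=datetime(2014, 5, 24, 13, 0, 0)))   # 1:00:00
# Clock set back 30 minutes: uptime appears to shrink (Scenario 1).
print(uptime(launched, now=datetime(2014, 5, 24, 12, 30, 0)))  # 0:30:00
# Clock set back past the launch time: uptime clamps to zero (Scenario 2).
print(uptime(launched, now=datetime(2014, 5, 24, 10, 30, 0)))  # 0:00:00
```

A clock-change-proof uptime would need a monotonic reference on the compute host rather than a wall-clock subtraction in the dashboard.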

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1322952

Title:
  Uptime in Overview and Instance tab change if system time changes.

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If the system time of the server running Horizon is changed, the uptime of
  instances in the Overview and Instances tabs changes accordingly.

  Steps to Reproduce

  1) Launch an instance and leave it running for a few minutes.
  2) Check whether the uptime has changed.
  3) Log in to the server where Horizon is running and change the system time.
  4) Scenario 1: Set the clock back by less than the current uptime.
   Eg: If the current uptime is 1 hour, decrease the time by only 30 minutes.
   The uptime shows 30 minutes after refreshing the page.
  5) Scenario 2: Set the clock back by more than the current uptime.
   Eg: If the current uptime is 1 hour, decrease the time by 2 hours 30 minutes.
   The uptime shows 0 minutes on the Instances page, and sometimes the instance
  disappears from the Overview page.
  6) Scenario 3: Set the clock forward by a few minutes.
   Eg: If the current uptime is 1 hour, increase the time by 30 minutes.
   The uptime shows 1 hour 30 minutes.

  The above bug may affect metering and billing if uptime is used in the
  calculation. Scenario 3 (if the system time is accidentally set to the
  future) can in fact overcharge end users, which would break the SLA.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1322952/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1297088] Re: unit test of test_delete_ports_by_device_id always failed

2014-05-24 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1297088

Title:
  unit test of test_delete_ports_by_device_id always failed

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  I found that in test_db_plugin.py, the test test_delete_ports_by_device_id
  always fails; the error log is below:

  INFO [neutron.api.v2.resource] delete failed (client error): Unable to complete operation on subnet 9579ede3-4bc4-43ea-939c-42c9ab027a53. One or more ports have an IP allocation from this subnet.
  INFO [neutron.api.v2.resource] delete failed (client error): Unable to complete operation on network 5f2ec397-31c7-4e92-acda-79d6093636ba. There are one or more ports still in use on the network.

  Traceback (most recent call last):
    File "neutron/tests/unit/test_db_plugin.py", line 1681, in test_delete_ports_by_device_id
      expected_code=webob.exc.HTTPOk.code)
    File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__
      self.gen.throw(type, value, traceback)
    File "neutron/tests/unit/test_db_plugin.py", line 567, in subnet
      self._delete('subnets', subnet['subnet']['id'])
    File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__
      self.gen.throw(type, value, traceback)
    File "neutron/tests/unit/test_db_plugin.py", line 534, in network
      self._delete('networks', network['network']['id'])
    File "neutron/tests/unit/test_db_plugin.py", line 450, in _delete
      self.assertEqual(res.status_int, expected_code)
    File "/home/jenkins/workspace/gate-neutron-python26/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py", line 321, in assertEqual
      self.assertThat(observed, matcher, message)
    File "/home/jenkins/workspace/gate-neutron-python26/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py", line 406, in assertThat
      raise mismatch_error
  MismatchError: 409 != 204

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1297088/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1322934] [NEW] Remove '---' in default text of Instance Boot Source dropdown in Launch Instance dialog

2014-05-24 Thread Omri Gazitt
Public bug reported:

In bug https://bugs.launchpad.net/horizon/+bug/1302256 it was suggested
that the style for default text for dropdowns should not contain leading
and trailing '---'.

Therefore, the '--- Select source ---' text in Project -> Instances ->
Launch instance -> Instance Boot Source should read 'Select source'
instead.

** Affects: horizon
 Importance: Undecided
 Assignee: Omri Gazitt (ogazitt)
 Status: New


** Tags: low-hanging-fruit ux

** Attachment added: "Screen Shot 2014-05-24 at 4.55.52 PM.png"
   
https://bugs.launchpad.net/bugs/1322934/+attachment/4119117/+files/Screen%20Shot%202014-05-24%20at%204.55.52%20PM.png

** Changed in: horizon
 Assignee: (unassigned) => Omri Gazitt (ogazitt)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1322934

Title:
  Remove '---' in default text of Instance Boot Source dropdown in
  Launch Instance dialog

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In bug https://bugs.launchpad.net/horizon/+bug/1302256 it was
  suggested that the style for default text for dropdowns should not
  contain leading and trailing '---'.

  Therefore, the '--- Select source ---' text in Project -> Instances ->
  Launch instance -> Instance Boot Source should read 'Select source'
  instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1322934/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1322926] [NEW] Hyper-V driver volumes are attached incorrectly when multiple iSCSI servers are present

2014-05-24 Thread Alessandro Pilotti
Public bug reported:

Hyper-V can change the order of the mounted drives when rebooting a host, so
passthrough disks can be assigned to the wrong instance, resulting in a
critical scenario.

** Affects: nova
 Importance: Critical
 Status: Triaged


** Tags: hyper-v

** Tags added: hyper-v

** Changed in: nova
   Status: New => Triaged

** Changed in: nova
   Importance: Undecided => Critical

** Changed in: nova
Milestone: None => juno-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1322926

Title:
  Hyper-V driver volumes are attached incorrectly when multiple iSCSI
  servers are present

Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  Hyper-V can change the order of the mounted drives when rebooting a host,
  so passthrough disks can be assigned to the wrong instance, resulting in a
  critical scenario.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1322926/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1322921] [NEW] hypervisor-servers command always search by wildcard as '%hypervisor_hostname%'

2014-05-24 Thread Yohei Matsuhashi
Public bug reported:

I searched for the servers on a specific hypervisor. However, the result
includes servers on other hypervisors that match the wildcard pattern
'%hypervisor_hostname%'.

I found this bug with the following commands:

admin@controller:~$ nova hypervisor-servers 10-0-0-1
+--------------------------------------+---------------+---------------+---------------------+
| ID                                   | Name          | Hypervisor ID | Hypervisor Hostname |
+--------------------------------------+---------------+---------------+---------------------+
| db52fd93-cc80-4d5e-852c-b113dec35fbf | instance-00a0 | 1             | 10-0-0-10           |
| 5b15fa8a-66d8-4db1-bb0e-c52fc3a030f3 | instance-00a1 | 1             | 10-0-0-10           |
| 2b492995-007d-4435-8f6b-037ea57188dc | instance-00a2 | 2             | 10-0-0-11           |
| 45b18880-c0f1-4b8b-a21d-80f9dd2566ff | instance-00a3 | 2             | 10-0-0-11           |
+--------------------------------------+---------------+---------------+---------------------+
admin@controller:~$ nova hypervisor-servers 10-0-0-11
+--------------------------------------+---------------+---------------+---------------------+
| ID                                   | Name          | Hypervisor ID | Hypervisor Hostname |
+--------------------------------------+---------------+---------------+---------------------+
| 2b492995-007d-4435-8f6b-037ea57188dc | instance-00a2 | 2             | 10-0-0-11           |
| 45b18880-c0f1-4b8b-a21d-80f9dd2566ff | instance-00a3 | 2             | 10-0-0-11           |
+--------------------------------------+---------------+---------------+---------------------+


This bug is in the compute API v2 extensions, at
/v2/{tenant_id}/os-hypervisors/{hypervisor_hostname}/servers

admin@controller:~$ curl -H "X-Auth-Token:MIIL" "http://localhost:8774/v2/771be698aba4431daf41c8012df97e7b/os-hypervisors/10-0-0-1/servers"
{"hypervisors": [{"id": 1, "hypervisor_hostname": "10-0-0-10", "servers":
[{"uuid": "db52fd93-cc80-4d5e-852c-b113dec35fbf", "name": "instance-00a0"},
{"uuid": "5b15fa8a-66d8-4db1-bb0e-c52fc3a030f3", "name": "instance-00a1"}]},
{"id": 2, "hypervisor_hostname": "gtestcompute-172-16-227-11", "servers":
[{"uuid": "2b492995-007d-4435-8f6b-037ea57188dc", "name": "instance-00a2"},
{"uuid": "45b18880-c0f1-4b8b-a21d-80f9dd2566ff", "name": "instance-00a3"}]}]}
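The behaviour above is consistent with the hostname being matched as a substring (a SQL LIKE '%hypervisor_hostname%' lookup) rather than by equality. A small illustrative sketch, with hypothetical data and function names, contrasting the two:

```python
# Two hypervisors, mirroring the hostnames in the report's output.
hypervisors = {"10-0-0-10": ["instance-00a0", "instance-00a1"],
               "10-0-0-11": ["instance-00a2", "instance-00a3"]}

def servers_like(query):
    # Current (buggy) behaviour: substring match, as with LIKE '%query%'.
    # The query '10-0-0-1' is a substring of both hostnames, so both match.
    return {h: s for h, s in hypervisors.items() if query in h}

def servers_exact(query):
    # Expected behaviour: only the hypervisor whose hostname matches exactly.
    return {h: s for h, s in hypervisors.items() if query == h}

print(sorted(servers_like("10-0-0-1")))   # ['10-0-0-10', '10-0-0-11']
print(sorted(servers_exact("10-0-0-1")))  # []
```

With an exact comparison, querying '10-0-0-1' would return nothing (no such host exists), and '10-0-0-11' would return only that host's two instances.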

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1322921

Title:
  hypervisor-servers command always search by wildcard as
  '%hypervisor_hostname%'

Status in OpenStack Compute (Nova):
  New

Bug description:
  I searched for the servers on a specific hypervisor. However, the result
  includes servers on other hypervisors that match the wildcard pattern
  '%hypervisor_hostname%'.

  I found this bug with the following commands:

  admin@controller:~$ nova hypervisor-servers 10-0-0-1
  +--------------------------------------+---------------+---------------+---------------------+
  | ID                                   | Name          | Hypervisor ID | Hypervisor Hostname |
  +--------------------------------------+---------------+---------------+---------------------+
  | db52fd93-cc80-4d5e-852c-b113dec35fbf | instance-00a0 | 1             | 10-0-0-10           |
  | 5b15fa8a-66d8-4db1-bb0e-c52fc3a030f3 | instance-00a1 | 1             | 10-0-0-10           |
  | 2b492995-007d-4435-8f6b-037ea57188dc | instance-00a2 | 2             | 10-0-0-11           |
  | 45b18880-c0f1-4b8b-a21d-80f9dd2566ff | instance-00a3 | 2             | 10-0-0-11           |
  +--------------------------------------+---------------+---------------+---------------------+
  admin@controller:~$ nova hypervisor-servers 10-0-0-11
  +--------------------------------------+---------------+---------------+---------------------+
  | ID                                   | Name          | Hypervisor ID | Hypervisor Hostname |
  +--------------------------------------+---------------+---------------+---------------------+
  | 2b492995-007d-4435-8f6b-037ea57188dc | instance-00a2 | 2             | 10-0-0-11           |
  | 45b18880-c0f1-4b8b-a21d-80f9dd2566ff | instance-00a3 | 2             | 10-0-0-11           |
  +--------------------------------------+---------------+---------------+---------------------+

  
  This bug is in the compute API v2 extensions, at
  /v2/{tenant_id}/os-hypervisors/{hypervisor_hostname}/servers

  admin@controller:~$ curl -H "X-Auth-Token:MIIL" "http://localhost:8774/v2/771be698aba4431daf41c8012df97e7b/os-hypervisors/10-0-0-1/servers"
  {"hypervisors": [{"id": 1, "hypervisor_hostname": "10-0-0-10", "servers":
  [{"uuid": "db52fd93-cc80-4d5e-852c-b113dec35fbf", "name": "instance-00a0"},
  {"uuid": "5b15fa8a-66d8-4db1-bb0e-c52fc3a030f3", "name": "instance-00a1"}]},
  {"id": 2, "hypervisor_hostname": "gtestcompute-172-16-227-11", "servers":
  [{"uuid": "2b492995-007d-4435-8f6b-037ea57188dc", "name": "instance-00a2"},
  {"uuid": "45b18880-c0f1-4b8b-a21d-80f9dd2566ff", "name": "instance-00a3"}]}]}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1322921/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290468] Re: AttributeError: 'NoneType' object has no attribute '_sa_instance_state'

2014-05-24 Thread Vladimir Kuklin
Here is the traceback:

http://paste.openstack.org/show/81339/

Steps to reproduce:

1) Deploy OpenStack with nova-network and some kind of shared storage on the
compute nodes (in our case, Ceph).
2) Launch a VM.
3) Associate a floating IP with the instance.
4) Try to do a nova live-migration.

Expected result:

The instance migrates to the other node and transitions to the ACTIVE state.

Actual result:

The instance hangs forever in the MIGRATING state.


However, the instance is already on the other node, so this looks like a
floating IP migration finalization issue, as the instance migrates seamlessly
without a floating IP.



** Also affects: fuel
   Importance: Undecided
   Status: New

** Changed in: fuel
   Importance: Undecided => Critical

** Also affects: fuel/5.0.x
   Importance: Critical
   Status: New

** Changed in: fuel/5.0.x
Milestone: None => 5.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1290468

Title:
  AttributeError: 'NoneType' object has no attribute
  '_sa_instance_state'

Status in Cinder:
  New
Status in Fuel: OpenStack installer that works:
  New
Status in Fuel for OpenStack 5.0.x series:
  New
Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released

Bug description:
  Dan Smith was seeing this in some nova testing:

  http://paste.openstack.org/show/73043/

  Looking at logstash, this is showing up a lot since 3/7, which is when
  lazy translation was enabled in Cinder:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQXR0cmlidXRlRXJyb3I6IFxcJ05vbmVUeXBlXFwnIG9iamVjdCBoYXMgbm8gYXR0cmlidXRlIFxcJ19zYV9pbnN0YW5jZV9zdGF0ZVxcJ1wiIEFORCBmaWxlbmFtZTpsb2dzKnNjcmVlbi1jLWFwaS50eHQiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTQ0NzI5Nzg4MDV9

  
https://review.openstack.org/#/q/status:merged+project:openstack/cinder+branch:master+topic:bug/1280826,n,z

  Logstash shows a 99% success rate when this error appears, so right now it
  looks to be more cosmetic than functional, but it can't stay like this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1290468/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290468] Re: AttributeError: 'NoneType' object has no attribute '_sa_instance_state'

2014-05-24 Thread Vladimir Kuklin
Looks like we also hit this issue in nova. Look into related bug in the
Fuel project.

https://bugs.launchpad.net/fuel/+bug/1317548


** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1290468

Title:
  AttributeError: 'NoneType' object has no attribute
  '_sa_instance_state'

Status in Cinder:
  New
Status in Fuel: OpenStack installer that works:
  New
Status in Fuel for OpenStack 5.0.x series:
  New
Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released

Bug description:
  Dan Smith was seeing this in some nova testing:

  http://paste.openstack.org/show/73043/

  Looking at logstash, this is showing up a lot since 3/7, which is when
  lazy translation was enabled in Cinder:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQXR0cmlidXRlRXJyb3I6IFxcJ05vbmVUeXBlXFwnIG9iamVjdCBoYXMgbm8gYXR0cmlidXRlIFxcJ19zYV9pbnN0YW5jZV9zdGF0ZVxcJ1wiIEFORCBmaWxlbmFtZTpsb2dzKnNjcmVlbi1jLWFwaS50eHQiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTQ0NzI5Nzg4MDV9

  
https://review.openstack.org/#/q/status:merged+project:openstack/cinder+branch:master+topic:bug/1280826,n,z

  Logstash shows a 99% success rate when this error appears, so right now it
  looks to be more cosmetic than functional, but it can't stay like this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1290468/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1322847] [NEW] Simultaneous requests for creating an instance result in recopying the image to _base before it is cached

2014-05-24 Thread Alexei Karve
Public bug reported:

Simultaneous requests for creating an instance result in unnecessary
recopying of the image (with a ".part" extension, one copy per request) to
the _base directory before it is cached. fetch_func_sync needs to check
again, inside the synchronized block, whether an earlier copy has already
completed, as indicated below in
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/imagebackend.py

def cache(self, fetch_func, filename, size=None, *args, **kwargs):
"""Creates image from template.

Ensures that template and image not already exists.
Ensures that base directory exists.
Synchronizes on template fetching.

:fetch_func: Function that creates the base image
 Should accept `target` argument.
:filename: Name of the file in the image directory
:size: Size of created image in bytes (optional)
"""
@utils.synchronized(filename, external=True, lock_path=self.lock_path)
def fetch_func_sync(target, *args, **kwargs):
if not os.path.exists(target):
fetch_func(target=target, *args, **kwargs)
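The suggested fix is the classic check-then-lock-then-recheck pattern. A minimal sketch, using a plain threading.Lock as a stand-in for utils.synchronized; the function signatures are illustrative, not nova's actual code:

```python
import os
import threading

_lock = threading.Lock()  # stand-in for @utils.synchronized(filename, ...)

def fetch_func_sync(target, fetch_func):
    """Fetch the base image unless an earlier request already completed it.

    Illustrative sketch of the report's suggestion, not nova's actual code.
    """
    if os.path.exists(target):   # fast path: image already cached
        return
    with _lock:
        # Re-check inside the lock: a request that acquired the lock first
        # may have finished the copy while this one was waiting.
        if not os.path.exists(target):
            fetch_func(target=target)
```

The key point is that the existence check runs after the lock is acquired, so requests queued behind the first fetch skip the redundant copy instead of each writing their own ".part" file.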

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1322847

Title:
  Simultaneous requests for creating an instance result in recopying the
  image to _base before it is cached

Status in OpenStack Compute (Nova):
  New

Bug description:
  Simultaneous requests for creating an instance result in unnecessary
  recopying of the image with ".part" extension to _base directory (one
  for each request) before it is cached. The fetch_func_sync needs to
  check again inside the synchronized whether the earlier copy was
  completed as indicated below in
  
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/imagebackend.py

  def cache(self, fetch_func, filename, size=None, *args, **kwargs):
  """Creates image from template.

  Ensures that template and image not already exists.
  Ensures that base directory exists.
  Synchronizes on template fetching.

  :fetch_func: Function that creates the base image
   Should accept `target` argument.
  :filename: Name of the file in the image directory
  :size: Size of created image in bytes (optional)
  """
  @utils.synchronized(filename, external=True, lock_path=self.lock_path)
  def fetch_func_sync(target, *args, **kwargs):
  if not os.path.exists(target):
  fetch_func(target=target, *args, **kwargs)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1322847/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp