[Yahoo-eng-team] [Bug 1988011] [NEW] The allocation ratio of ram change by placement does not work

2022-08-28 Thread HYSong
Public bug reported:

I found that Nova checks the RAM allocation ratio stored in the
compute_nodes table in order to decide whether the destination node has
enough memory when live-migrating an instance to a target host.

In some cases, the RAM ratio in Nova differs from the one in Placement
after I change allocation_ratio through the Placement API. And live
migration still fails even after I increase the RAM ratio via Placement
when memory is short.

I think the ratio in Nova should stay in sync with the value in Placement.
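
As a quick check, a diagnostic sketch along these lines (the endpoint,
token, and resource-provider uuid are placeholders) reads the MEMORY_MB
allocation_ratio straight from the Placement API, for comparison with the
value Nova caches in the compute_nodes table:

import requests

PLACEMENT = "http://placement.example.com"        # placeholder endpoint
TOKEN = "..."                                     # placeholder admin token
RP_UUID = "11111111-2222-3333-4444-555555555555"  # placeholder provider uuid

resp = requests.get(
    f"{PLACEMENT}/resource_providers/{RP_UUID}/inventories",
    headers={"X-Auth-Token": TOKEN},
)
resp.raise_for_status()
ratio = resp.json()["inventories"]["MEMORY_MB"]["allocation_ratio"]
print("Placement MEMORY_MB allocation_ratio:", ratio)
# If this differs from ram_allocation_ratio in Nova's compute_nodes record
# for the same host, the live-migration memory check uses the stale value.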

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1988011

Title:
  The allocation ratio of ram change by placement does not work

Status in OpenStack Compute (nova):
  New

Bug description:
  I found that Nova checks the RAM allocation ratio stored in the
  compute_nodes table in order to decide whether the destination node has
  enough memory when live-migrating an instance to a target host.

  In some cases, the RAM ratio in Nova differs from the one in Placement
  after I change allocation_ratio through the Placement API. And live
  migration still fails even after I increase the RAM ratio via Placement
  when memory is short.

  I think the ratio in Nova should stay in sync with the value in
  Placement.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1988011/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1982988] [NEW] keystonemiddleware invalid unknown uuid

2022-07-27 Thread HYSong
Public bug reported:


I found that there are too many "Invalid uuid" warnings in nova and
cinder logs.

The full trace:
/var/lib/openstack/lib/python3.8/site-packages/pycadf/identifier.py:71: 
UserWarning: Invalid uuid: unknown. To ensure interoperability, identifiers 
should be a valid uuid.
  warnings.warn(('Invalid uuid: %s. To ensure interoperability, ')

service_info.uuid is initialized with "unknown" in
keystonemiddleware(https://opendev.org/openstack/keystonemiddleware/src/commit/2bda844bb219df355d74b5c5b21f86244921a1c2/keystonemiddleware/audit/_api.py#L250),
but that value isn't handled well in some cases.
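
A minimal guard sketch (hypothetical helper, not the keystonemiddleware
code) of the kind of handling that would avoid the warning: pass the
value through only when it is uuid-like, otherwise substitute a
generated uuid.

from oslo_utils import uuidutils

def safe_service_id(candidate):
    # "unknown" (or any other non-uuid string) is what triggers the pycadf
    # UserWarning; fall back to a generated uuid instead.
    if uuidutils.is_uuid_like(candidate):
        return candidate
    return uuidutils.generate_uuid()

print(safe_service_id("unknown"))  # prints a freshly generated uuid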

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1982988

Title:
  keystonemiddleware invalid unknown uuid

Status in OpenStack Identity (keystone):
  New

Bug description:

  I found that there are too many "Invalid uuid" warnings in nova and
  cinder logs.

  The full trace:
  /var/lib/openstack/lib/python3.8/site-packages/pycadf/identifier.py:71: 
UserWarning: Invalid uuid: unknown. To ensure interoperability, identifiers 
should be a valid uuid.
warnings.warn(('Invalid uuid: %s. To ensure interoperability, ')

  service_info.uuid is initialized with "unknown" in
  keystonemiddleware(https://opendev.org/openstack/keystonemiddleware/src/commit/2bda844bb219df355d74b5c5b21f86244921a1c2/keystonemiddleware/audit/_api.py#L250),
  but that value isn't handled well in some cases.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1982988/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1962644] [NEW] local volume driver can not execute blockResize when extend volume

2022-03-01 Thread HYSong
Public bug reported:

The disk is not resized inside the VM when I extend a volume with the
local volume driver,

and I found that the extend_volume function is not defined in
LibvirtVolumeDriver.

How about returning requested_size from extend_volume, so that
blockResize is executed in the libvirt driver?
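
A minimal sketch of the suggestion (not the in-tree code), following the
driver interface in nova/virt/libvirt/volume/volume.py: several other
volume drivers signal a finished host-side extend by returning the new
size, which makes the libvirt driver call blockResize on the guest disk.

from nova.virt.libvirt.volume import volume as libvirt_volume

class LibvirtVolumeDriver(libvirt_volume.LibvirtBaseVolumeDriver):
    """Local (libvirt-managed) volume driver."""

    def extend_volume(self, connection_info, instance, requested_size):
        # The local backend needs no host-side work; returning the requested
        # size tells the caller to run blockResize for the attached disk.
        return requested_size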

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

  The disk is not resized in VMs when I extend volume with local volume
  driver,
  
  and I found that the function of extend_volume is not defined in
  LibvirtVolumeDriver.
  
- How about return the requested_size in the function of extend_volume in order 
to execute 
-  
- blockResize in Libvirt driver.
+ How about return the requested_size in the function of extend_volume in
+ order to execute blockResize in Libvirt driver.

** Description changed:

  The disk is not resized in VMs when I extend volume with local volume
  driver,
  
  and I found that the function of extend_volume is not defined in
  LibvirtVolumeDriver.
  
- How about return the requested_size in the function of extend_volume in
+ How about return requested_size in the function of extend_volume in
  order to execute blockResize in Libvirt driver.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1962644

Title:
  local volume driver can not execute blockResize when extend volume

Status in OpenStack Compute (nova):
  New

Bug description:
  The disk is not resized inside the VM when I extend a volume with the
  local volume driver,

  and I found that the extend_volume function is not defined in
  LibvirtVolumeDriver.

  How about returning requested_size from extend_volume, so that
  blockResize is executed in the libvirt driver?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1962644/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1956432] [NEW] Old allocation of VM is not deleted after evacuating

2022-01-05 Thread HYSong
Public bug reported:

I found that the old instance allocation in Placement is not deleted after
executing an evacuation,
which leads to wrong resource usage info for the old compute node.


-

MariaDB [placement]> select * from allocations where consumer_id='4c6c29e7-a1f0-4dac-a3ef-a98b5598abe9';
+---------------------+------------+-------+----------------------+--------------------------------------+-------------------+------+
| created_at          | updated_at | id    | resource_provider_id | consumer_id                          | resource_class_id | used |
+---------------------+------------+-------+----------------------+--------------------------------------+-------------------+------+
| 2022-01-05 08:23:19 | NULL       | 18315 |                   11 | 4c6c29e7-a1f0-4dac-a3ef-a98b5598abe9 |                 2 |    1 |
| 2022-01-05 08:23:19 | NULL       | 18318 |                   11 | 4c6c29e7-a1f0-4dac-a3ef-a98b5598abe9 |                 1 |  512 |
| 2022-01-05 08:23:19 | NULL       | 18321 |                   11 | 4c6c29e7-a1f0-4dac-a3ef-a98b5598abe9 |                 0 |    1 |
| 2022-01-05 08:23:19 | NULL       | 18324 |                   33 | 4c6c29e7-a1f0-4dac-a3ef-a98b5598abe9 |                 0 |    1 |
| 2022-01-05 08:23:19 | NULL       | 18327 |                   33 | 4c6c29e7-a1f0-4dac-a3ef-a98b5598abe9 |                 1 |  512 |
| 2022-01-05 08:23:19 | NULL       | 18330 |                   33 | 4c6c29e7-a1f0-4dac-a3ef-a98b5598abe9 |                 2 |    1 |
+---------------------+------------+-------+----------------------+--------------------------------------+-------------------+------+
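
For reference, the same data can be read through the Placement API. A
diagnostic sketch (endpoint and token are placeholders) that counts how
many resource providers still hold allocations for this consumer:

import requests

PLACEMENT = "http://placement.example.com"   # placeholder endpoint
TOKEN = "..."                                # placeholder admin token
CONSUMER = "4c6c29e7-a1f0-4dac-a3ef-a98b5598abe9"

resp = requests.get(
    f"{PLACEMENT}/allocations/{CONSUMER}",
    headers={"X-Auth-Token": TOKEN},
)
resp.raise_for_status()
providers = resp.json().get("allocations", {})
# After a clean evacuation only the destination provider should remain; the
# two providers above (ids 11 and 33) reproduce the leak.
print(f"consumer {CONSUMER} has allocations on {len(providers)} providers")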

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1956432

Title:
  Old allocation of VM is not deleted after evacuating

Status in OpenStack Compute (nova):
  New

Bug description:
  I found that the old instance allocation in Placement is not deleted
  after executing an evacuation,
  which leads to wrong resource usage info for the old compute node.

  
  -

  MariaDB [placement]> select * from allocations where consumer_id='4c6c29e7-a1f0-4dac-a3ef-a98b5598abe9';
  +---------------------+------------+-------+----------------------+--------------------------------------+-------------------+------+
  | created_at          | updated_at | id    | resource_provider_id | consumer_id                          | resource_class_id | used |
  +---------------------+------------+-------+----------------------+--------------------------------------+-------------------+------+
  | 2022-01-05 08:23:19 | NULL       | 18315 |                   11 | 4c6c29e7-a1f0-4dac-a3ef-a98b5598abe9 |                 2 |    1 |
  | 2022-01-05 08:23:19 | NULL       | 18318 |                   11 | 4c6c29e7-a1f0-4dac-a3ef-a98b5598abe9 |                 1 |  512 |
  | 2022-01-05 08:23:19 | NULL       | 18321 |                   11 | 4c6c29e7-a1f0-4dac-a3ef-a98b5598abe9 |                 0 |    1 |
  | 2022-01-05 08:23:19 | NULL       | 18324 |                   33 | 4c6c29e7-a1f0-4dac-a3ef-a98b5598abe9 |                 0 |    1 |
  | 2022-01-05 08:23:19 | NULL       | 18327 |                   33 | 4c6c29e7-a1f0-4dac-a3ef-a98b5598abe9 |                 1 |  512 |
  | 2022-01-05 08:23:19 | NULL       | 18330 |                   33 | 4c6c29e7-a1f0-4dac-a3ef-a98b5598abe9 |                 2 |    1 |
  +---------------------+------------+-------+----------------------+--------------------------------------+-------------------+------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1956432/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1945401] [NEW] scheduler can not filter node by storage backend

2021-09-28 Thread HYSong
Public bug reported:


If my aggregate has the Ceph-backed node cmp01 and the FC-SAN-backed node
cmp02, and I create a Ceph-backed VM01 on cmp01 and then migrate it,

the migration will fail if the scheduler does not filter out cmp02, or if
I set the target node to cmp02.


--Traceback--
oslo_messaging.rpc.client.RemoteError: Remote error: 
ClientException Unable to create attachment for volume 
(Invalid input received: Connector doesn't have required information: wwpns). 
(HTTP 500) 


I think Nova needs to pre-check the target node I set, or filter compute
nodes by their available storage backend when selecting a destination.
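
A sketch of a custom scheduler filter (hypothetical, not an in-tree
filter) that would do such filtering, assuming operators tag host
aggregates and flavors with a "storage_backend" key:

from nova.scheduler import filters
from nova.scheduler.filters import utils

class StorageBackendFilter(filters.BaseHostFilter):
    """Reject hosts whose aggregates lack the requested storage backend."""

    def host_passes(self, host_state, spec_obj):
        wanted = spec_obj.flavor.extra_specs.get("storage_backend")
        if not wanted:
            return True  # nothing requested; keep the legacy behaviour
        # Collect the "storage_backend" values across the host's aggregates.
        backends = utils.aggregate_values_from_key(
            host_state, "storage_backend")
        return wanted in backends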

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1945401

Title:
  scheduler can not filter node by storage backend

Status in OpenStack Compute (nova):
  New

Bug description:
  
  If my aggregate has the Ceph-backed node cmp01 and the FC-SAN-backed node
  cmp02, and I create a Ceph-backed VM01 on cmp01 and then migrate it,

  the migration will fail if the scheduler does not filter out cmp02, or
  if I set the target node to cmp02.

  
  --Traceback--
  oslo_messaging.rpc.client.RemoteError: Remote error: 
  ClientException Unable to create attachment for volume 
  (Invalid input received: Connector doesn't have required information: wwpns). 
  (HTTP 500) 

  
  I think Nova needs to pre-check the target node I set, or filter compute
  nodes by their available storage backend when selecting a destination.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1945401/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1942501] [NEW] Loading flavor list takes several seconds when creating VM

2021-09-02 Thread HYSong
Public bug reported:


Getting the list of flavors can take 40 seconds in our case, with 500
flavors.

The API `http://xxx.com/api/nova/flavors/?get_extras=true&is_public=true`
in the OpenStack dashboard calls
`/v2.1/{project_id}/flavors/{flavor_id}/os-extra_specs`
one flavor at a time, just in order to get the extra-specs info.


I think this can be optimized in Horizon.
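
One possible mitigation (a sketch against python-novaclient, not
Horizon's actual code) is to issue the per-flavor extra-specs requests
concurrently instead of one by one:

from concurrent.futures import ThreadPoolExecutor

def flavors_with_extras(nova):
    """Return {flavor_id: extra_specs}, fetching the specs in parallel."""
    flavors = nova.flavors.list(is_public=True)
    with ThreadPoolExecutor(max_workers=20) as pool:
        # flavor.get_keys() is the per-flavor GET .../os-extra_specs call;
        # running the calls concurrently bounds the total wall-clock time.
        extras = pool.map(lambda f: (f.id, f.get_keys()), flavors)
    return dict(extras)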

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1942501

Title:
  Loading flavor list takes several seconds  when creating VM

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  
  Getting the list of flavors can take 40 seconds in our case, with 500
  flavors.

  The API `http://xxx.com/api/nova/flavors/?get_extras=true&is_public=true`
  in the OpenStack dashboard calls
  `/v2.1/{project_id}/flavors/{flavor_id}/os-extra_specs`
  one flavor at a time, just in order to get the extra-specs info.

  I think this can be optimized in Horizon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1942501/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1932268] Re: soft deleted instance is deleted when error restoring

2021-06-17 Thread HYSong
I suggest executing 'instance.deleted_at = None' only after
self.driver.restore(instance) and self._power_on(context, instance) have
finished, so the VM will not be deleted when restoring fails.
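
A sketch of the suggested ordering (names follow nova/compute/manager.py;
this is a hedged sketch, not the merged change): keep deleted_at set until
the risky calls have succeeded, so a failed restore no longer produces the
state that _reclaim_queued_deletes reaps.

instance.task_state = task_states.RESTORING
instance.save(expected_task_state=[None])

self.driver.restore(instance)        # may raise; deleted_at left untouched
self._power_on(context, instance)    # may raise; deleted_at left untouched

# Only clear the deletion marker once the restore has actually succeeded.
instance.deleted_at = None
instance.vm_state = vm_states.ACTIVE
instance.task_state = None
instance.save(expected_task_state=task_states.RESTORING)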

** Changed in: nova
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1932268

Title:
  soft deleted instance is deleted when error restoring

Status in OpenStack Compute (nova):
  New

Bug description:
  
  The SOFT_DELETED instance will be deleted when restoring the instance
  fails.

  restore instance:
  instance.task_state = task_states.RESTORING
  instance.deleted_at = None

  If `self.driver.restore(instance)` or `self._power_on(context,
  instance)` in `nova/compute/manager.py` fails,
  instance.task_state will revert to None due to `@reverts_task_state`.

  The instance will then match the filter in the _reclaim_queued_deletes
  periodic task and will be deleted incorrectly.

  filters = {'vm_state': vm_states.SOFT_DELETED,
 'task_state': None,
 'host': self.host}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1932268/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1932268] [NEW] soft deleted instance is deleted when error restoring

2021-06-17 Thread HYSong
Public bug reported:


The SOFT_DELETED instance will be deleted when restoring the instance
fails.

restore instance:
instance.task_state = task_states.RESTORING
instance.deleted_at = None

If `self.driver.restore(instance)` or `self._power_on(context,
instance)` in `nova/compute/manager.py` fails,
instance.task_state will revert to None due to `@reverts_task_state`.

The instance will then match the filter in the _reclaim_queued_deletes
periodic task and will be deleted incorrectly.

filters = {'vm_state': vm_states.SOFT_DELETED,
   'task_state': None,
   'host': self.host}

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1932268

Title:
  soft deleted instance is deleted when error restoring

Status in OpenStack Compute (nova):
  New

Bug description:
  
  The SOFT_DELETED instance will be deleted when restoring the instance
  fails.

  restore instance:
  instance.task_state = task_states.RESTORING
  instance.deleted_at = None

  If `self.driver.restore(instance)` or `self._power_on(context,
  instance)` in `nova/compute/manager.py` fails,
  instance.task_state will revert to None due to `@reverts_task_state`.

  The instance will then match the filter in the _reclaim_queued_deletes
  periodic task and will be deleted incorrectly.

  filters = {'vm_state': vm_states.SOFT_DELETED,
 'task_state': None,
 'host': self.host}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1932268/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1924257] [NEW] nova get_capabilities should not use host capabilities

2021-04-15 Thread HYSong
Public bug reported:


Nova-compute will report Broadwell-IBRS's features when get_capabilities
uses the host capabilities, but the VM's real features depend on the
domain-capabilities model (Skylake-Client-IBRS here) when cpu_mode is
configured to host-model in nova.conf.

I think this is unreasonable: Nova uses the host capabilities to compare
CPUs during live migration, but the VM does not actually carry them. This
is likely to cause live-migration errors because of incompatible features.
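
The mismatch can be observed directly through libvirt. A short sketch
(assuming libvirt-python and a local qemu:///system connection) compares
the model Nova's get_capabilities() parses with the model a host-model
guest actually receives:

import xml.etree.ElementTree as ET

import libvirt

conn = libvirt.open("qemu:///system")

# What nova's get_capabilities() parses: the *host* CPU model.
host = ET.fromstring(conn.getCapabilities())
print("virsh capabilities model:", host.findtext("host/cpu/model"))

# What a host-model guest really gets: the domain-capabilities model.
dom = ET.fromstring(conn.getDomainCapabilities())
print("virsh domcapabilities host-model:",
      dom.findtext("cpu/mode[@name='host-model']/model"))
# On cmp004 these print Broadwell-IBRS and Skylake-Client-IBRS respectively,
# which is exactly the discrepancy shown by the outputs below.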

---
root@cmp004:~# qemu-system-x86_64 --version
QEMU emulator version 2.11.1(Debian 1:2.11+dfsg-1.4~u16.04+mcp2)
Copyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers

root@cmp004:~# libvirtd -V
libvirtd (libvirt) 4.0.0

openstack version: queens

root@cmp004:~# virsh domcapabilities
[XML output stripped by the mail archive; the <cpu><mode name='host-model'>
section reports model Skylake-Client-IBRS and vendor Intel, followed by a
list of required CPU features.]

root@cmp004:~# virsh capabilities
[XML output stripped by the mail archive; the host <cpu> section reports
arch x86_64, model Broadwell-IBRS and vendor Intel, followed by a long
list of CPU features.]

root@ctl01:~# nova hypervisor-show c8f34226-c2e9-4c09-bdbe-aaaff1a1d370
+---+--+
| Property  | Value|
+---+--+
| cpu_info_arch | x86_64   |
| cpu_info_features | ["pge", "avx", "xsaveopt", "clflush",|
|   | "sep", "rtm", "tsc_adjust", "tsc-|
|   | deadline", "dtes64", "invpcid", "tsc",   |
|   | "fsgsbase", "xsave", "smap", "vmx",  |
|   | "erms", "xtpr", "cmov", "hle", "smep",   |
|   | "ssse3", "est", "pat", "monitor", "smx", |
|   | "pbe", "lm", "msr", "adx",   |
|   | "3dnowprefetch", "nx", "fxsr",   |
|   | "syscall", "tm", "sse4.1", "pae",|
|   | "sse4.2", "pclmuldq", "cx16", "pcid",|
|   | "fma", "vme", "popcnt", "mmx",   |
|   | "osxsave", "cx8", "mce", "de", "rdtscp", |
|   | "ht", "dca", "lahf_lm", "abm", "rdseed", |
|   | "pdcm", "mca", "pdpe1gb", "apic", "sse", |
|   | "f16c", "pse", "ds", "invtsc", "pni",|
|   | "tm2", "avx2", "aes", "sse2", "ss",  |
|   | "ds_cpl", "arat", "bmi1", "bmi2",|
|   | "acpi", "spec-ctrl", "fpu", "ssbd",  |
|   | "pse36", "mtrr", "movbe", "rdrand",  |
|   | "x2apic"]|
| cpu_info_model| Broadwell-IBRS   |
| service_host  | cmp004   |
| service_id| 5e04fa07-db8a-4e84-a895-411c704b9d64 |
+---+--+

root@cmp004:~# ps -ef |grep instance-000124b5
root 16456 10674  0 17:24 pts/12   00:00:00 grep --color=auto 
instance-000124b5
libvirt+ 18623 1  7 16:15 ?00:04:59 qemu-system-x86_64 -enable-kvm 
-cpu 
Skylake-Client-IBRS,ss=on,vmx=on,hypervisor=on,tsc_adjust=on,ssbd=on,pdpe1gb=on,mpx=off,xsavec=off,xgetbv1=off
 ...

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1924257

Title:
  nova get_capabilities should not use host capabilities

Status in OpenStack Compute (nova):
  New

Bug description:
  
  Nova-compute will report Broadwell-IBRS's features when get_capabilities
  uses the host capabilities, but the VM's real features depend on the
  domain-capabilities model (Skylake-Client-IBRS here) when cpu_mode is
  configured to host-model in nova.conf.

  I think this is unreasonable: Nova uses the host capabilities to compare
  CPUs during live migration, but the VM does not actually carry them.
  This is likely to cause live-migration errors because of incompatible
  features.

  
---
  root@cmp004:~# qemu-system-x86_64 --version
  QEMU emulator version 2.11.1(Debian 1:2.11+dfsg-1.4~u16.04+mcp2)
  Copyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers

  root@cmp004:~# libvirtd -V
  libvirtd (libvirt) 4.0.0

  openstack version: queens

[Yahoo-eng-team] [Bug 1919401] [NEW] the function of _resize_instance lack of exception handling

2021-03-16 Thread HYSong
Public bug reported:

Env info:
openstack version: rocky
storage back-end: ceph
hypervisor: qemu/KVM

Sample traceback:
==
[req-69c94c9a-6ee4-4936-8ce5-9a23b7aea89a 00b865b2a29e47f8b57a62ac624bdfa4 
9edd1f98bf2f47e885f7077a066c83dd - default default] 
[instance: 642ab2df-4dc2-4ca8-9bbd-ab19c72352df] 
Setting instance vm_state to ERROR: OSError: [Errno 39] Directory not empty
Traceback (most recent call last):
  File 
"/var/lib/openstack/lib/python2.7/site-packages/nova/compute/manager.py", line 
8333, in _error_out_instance_on_exception
yield
  File 
"/var/lib/openstack/lib/python2.7/site-packages/nova/compute/manager.py", line 
4693, in _resize_instance
timeout, retry_interval)
  File 
"/var/lib/openstack/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", 
line 8668, in migrate_disk_and_power_off
shared_storage)
  File "/var/lib/openstack/lib/python2.7/site-packages/oslo_utils/excutils.py", 
line 220, in __exit__
self.force_reraise()
  File "/var/lib/openstack/lib/python2.7/site-packages/oslo_utils/excutils.py", 
line 196, in force_reraise
six.reraise(self.type_, self.value, self.tb)
  File 
"/var/lib/openstack/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", 
line 8622, in migrate_disk_and_power_off
os.rename(inst_base, inst_base_resize)
OSError: [Errno 39] Directory not empty


Description:
==
1. A VM resize fails after the inst_base_resize dir has been created by
`os.rename(inst_base, inst_base_resize)` in the function of
migrate_disk_and_power_off. The `_error_out_instance_on_exception` context
manager around _resize_instance just catches exceptions and cannot roll
the dir back.

2. Execute the `openstack server set` command to recover the VM status to
active.

3. Resizing the VM fails again, with the exception in the sample traceback:
the `os.rename(inst_base, inst_base_resize)` operation fails because the
inst_base_resize dir still contains console.log.

4. Should `self._cleanup_remote_migration` be executed before
`os.rename(inst_base, inst_base_resize)`? Is there any way to improve the
exception handling in the function of _resize_instance? One possible guard
is sketched below.
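
Regarding point 4, one possible guard (a hedged sketch, not the in-tree
fix; the instance path is illustrative) is to clear a stale *_resize
directory before the rename, so a retried resize cannot hit "Directory
not empty" again:

import os
import shutil

inst_base = "/var/lib/nova/instances/642ab2df-4dc2-4ca8-9bbd-ab19c72352df"
inst_base_resize = inst_base + "_resize"

if os.path.exists(inst_base_resize):
    # Leftover from an earlier aborted resize (e.g. a stray console.log);
    # remove it so os.rename() can succeed on the retry.
    shutil.rmtree(inst_base_resize)
os.rename(inst_base, inst_base_resize)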

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1919401

Title:
  the function of _resize_instance lack of exception handling

Status in OpenStack Compute (nova):
  New

Bug description:
  Env info:
  openstack version: rocky
  storage back-end: ceph
  hypervisor: qemu/KVM

  Sample traceback:
  ==
  [req-69c94c9a-6ee4-4936-8ce5-9a23b7aea89a 00b865b2a29e47f8b57a62ac624bdfa4 
9edd1f98bf2f47e885f7077a066c83dd - default default] 
  [instance: 642ab2df-4dc2-4ca8-9bbd-ab19c72352df] 
  Setting instance vm_state to ERROR: OSError: [Errno 39] Directory not empty
  Traceback (most recent call last):
File 
"/var/lib/openstack/lib/python2.7/site-packages/nova/compute/manager.py", line 
8333, in _error_out_instance_on_exception
  yield
File 
"/var/lib/openstack/lib/python2.7/site-packages/nova/compute/manager.py", line 
4693, in _resize_instance
  timeout, retry_interval)
File 
"/var/lib/openstack/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", 
line 8668, in migrate_disk_and_power_off
  shared_storage)
File 
"/var/lib/openstack/lib/python2.7/site-packages/oslo_utils/excutils.py", line 
220, in __exit__
  self.force_reraise()
File 
"/var/lib/openstack/lib/python2.7/site-packages/oslo_utils/excutils.py", line 
196, in force_reraise
  six.reraise(self.type_, self.value, self.tb)
File 
"/var/lib/openstack/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", 
line 8622, in migrate_disk_and_power_off
  os.rename(inst_base, inst_base_resize)
  OSError: [Errno 39] Directory not empty

  
  Description:
  ==
  1. A VM resize fails after the inst_base_resize dir has been created by
  `os.rename(inst_base, inst_base_resize)` in the function of
  migrate_disk_and_power_off. The `_error_out_instance_on_exception`
  context manager around _resize_instance just catches exceptions and
  cannot roll the dir back.

  2. Execute the `openstack server set` command to recover the VM status
  to active.

  3. Resizing the VM fails again, with the exception in the sample
  traceback: the `os.rename(inst_base, inst_base_resize)` operation fails
  because the inst_base_resize dir still contains console.log.

  4. Should `self._cleanup_remote_migration` be executed before
  `os.rename(inst_base, inst_base_resize)`? Is there any way to improve
  the exception handling in the function of _resize_instance?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1919401/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1893720] [NEW] horizon project display incomplete

2020-08-31 Thread HYSong
Public bug reported:

The project list in the OpenStack dashboard displays incompletely
because there are plenty of projects.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1893720

Title:
  horizon project display incomplete

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The project list in the OpenStack dashboard displays incompletely
  because there are plenty of projects.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1893720/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1884486] [NEW] subnet create with wrong project

2020-06-22 Thread HYSong
Public bug reported:

If we choose to create a subnet while creating a network in the
dashboard, the subnet's default project will be the current project
instead of the project selected when creating the network. This causes
the network and subnet to belong to different projects, and VM creation
fails.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1884486

Title:
  subnet create with wrong project

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If we choose to create a subnet while creating a network in the
  dashboard, the subnet's default project will be the current project
  instead of the project selected when creating the network. This causes
  the network and subnet to belong to different projects, and VM creation
  fails.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1884486/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1869853] [NEW] flavor is filtered only in current page

2020-03-31 Thread HYSong
Public bug reported:

I want to search for a flavor in the dashboard,
but the filter only applies to the current page.
This makes it hard to tell whether a flavor exists in Horizon.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1869853

Title:
  flavor is filtered only in current page

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I want to search for a flavor in the dashboard,
  but the filter only applies to the current page.
  This makes it hard to tell whether a flavor exists in Horizon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1869853/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1865125] [NEW] block_device_mapping redundancy data when mq not routed

2020-02-28 Thread HYSong
Public bug reported:

The volumes_attached field contains some repeated volume IDs when
executing 'openstack server show <server>',

and the same redundant rows exist in the block_device_mapping table.

During this period, a "not routed" error occurred in MQ, and I failed to
resize this server.

I don't know whether MQ caused this redundant data.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1865125

Title:
  block_device_mapping redundancy data when mq not routed

Status in OpenStack Compute (nova):
  New

Bug description:
  The volumes_attached field contains some repeated volume IDs when
  executing 'openstack server show <server>',

  and the same redundant rows exist in the block_device_mapping table.

  During this period, a "not routed" error occurred in MQ, and I failed
  to resize this server.

  I don't know whether MQ caused this redundant data.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1865125/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1861964] [NEW] availability_zone in request_specs not updated when live migrate vm from one availability zone to another

2020-02-04 Thread HYSong
Public bug reported:

availability_zone in request_specs is not updated when live-migrating a
VM from one availability zone to another; if the old aggregate is deleted
or the host is removed from it, VM migration will fail.

Steps to reproduce
==

1) create aggregate aggregate01 with AZ zone01, and add host cmp001;
2) create aggregate aggregate02 with AZ zone02, and add host cmp002;
3) create flavor test_flavor01, and set properties matching aggregate01;
4) create instance test_vm01 using test_flavor01, with --availability-zone set
to zone01;
5) check that OS-EXT-AZ:availability_zone in 'openstack server show test_vm01'
is set to zone01, and that the VM's availability_zone in request_specs is set
to zone01 too;
6) live migrate test_vm01 to cmp002;
7) check that OS-EXT-AZ:availability_zone in 'openstack server show test_vm01'
is set to zone02, but the VM's availability_zone in request_specs is still
zone01;
8) remove cmp001 from aggregate01, or delete aggregate01;
9) live migrating test_vm01 without setting a host now fails, because
availability_zone in request_specs is still zone01 (see the sketch below).
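
A sketch of the kind of sync that would address this (names follow
nova.objects and nova.availability_zones; this is a hedged sketch, not an
in-tree change): after a successful live migration, refresh the
RequestSpec's availability_zone from the instance's current host.

from nova import availability_zones
from nova import objects

def sync_request_spec_az(context, instance):
    # Look up the AZ of the host the instance now runs on and store it back
    # into the RequestSpec consulted by later scheduling decisions.
    reqspec = objects.RequestSpec.get_by_instance_uuid(context, instance.uuid)
    current_az = availability_zones.get_host_availability_zone(
        context, instance.host)
    if reqspec.availability_zone != current_az:
        reqspec.availability_zone = current_az
        reqspec.save()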

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1861964

Title:
  availability_zone in request_specs not updated when live migrate vm
  from one availability zone to another

Status in OpenStack Compute (nova):
  New

Bug description:
  availability_zone in request_specs is not updated when live-migrating a
  VM from one availability zone to another; if the old aggregate is
  deleted or the host is removed from it, VM migration will fail.

  Steps to reproduce
  ==

  1) create aggregate aggregate01 with AZ zone01, and add host cmp001;
  2) create aggregate aggregate02 with AZ zone02, and add host cmp002;
  3) create flavor test_flavor01, and set properties matching aggregate01;
  4) create instance test_vm01 using test_flavor01, with --availability-zone
  set to zone01;
  5) check that OS-EXT-AZ:availability_zone in 'openstack server show
  test_vm01' is set to zone01, and that the VM's availability_zone in
  request_specs is set to zone01 too;
  6) live migrate test_vm01 to cmp002;
  7) check that OS-EXT-AZ:availability_zone in 'openstack server show
  test_vm01' is set to zone02, but the VM's availability_zone in
  request_specs is still zone01;
  8) remove cmp001 from aggregate01, or delete aggregate01;
  9) live migrating test_vm01 without setting a host now fails, because
  availability_zone in request_specs is still zone01.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1861964/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1858019] [NEW] The flavor id is not limited when creating a flavor

2020-01-01 Thread HYSong
Public bug reported:

when creating a flavor by 'openstack flavor create --id <id> --vcpus <vcpus>
--ram <ram> --disk <disk> <name>',
the id parameter is not restricted. It can lead to ambiguities when the id is
set to an existing flavor's name.
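
A minimal validation sketch (hypothetical helper, not the Nova API code)
of the check that would remove the ambiguity:

def check_flavor_id(candidate_id, existing_flavors):
    """Reject a user-supplied flavor id equal to another flavor's name.

    existing_flavors: iterable of (id, name) pairs already stored.
    """
    for fid, name in existing_flavors:
        if candidate_id == name:
            raise ValueError("flavor id %r collides with existing flavor "
                             "name %r" % (candidate_id, name))

# check_flavor_id("m1.small", [("42", "m1.small")]) would raise ValueError.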

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1858019

Title:
  The flavor id is not limited when creating a flavor

Status in OpenStack Compute (nova):
  New

Bug description:
  when creating a flavor by 'openstack flavor create --id <id> --vcpus
  <vcpus> --ram <ram> --disk <disk> <name>',
  the id parameter is not restricted. It can lead to ambiguities when the
  id is set to an existing flavor's name.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1858019/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp