[Yahoo-eng-team] [Bug 2009448] [NEW] Create VM from volume having no "volume_image_metadata" should not be allowed

2023-03-06 Thread Amit Uniyal
Public bug reported:


Description
===
VM creation from a volume with no image data should not be allowed.
Right now, we can create a bootable volume with no image, and then we can 
create a VM from the volume. The VM will get created, but it won't boot as 
there is no image to boot from.

Nova should validate this request and reject it with an appropriate
error message.
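
A minimal sketch of the kind of check Nova could add; the function and
where it would hook in are assumptions for illustration, not Nova's
actual code:

    from nova import exception

    def validate_boot_volume(volume):
        # Reject boot-from-volume requests when the volume carries no
        # image metadata, since there is nothing to boot from. (Sketch
        # only; the real check would live in Nova's block device
        # mapping validation.)
        if not volume.get('volume_image_metadata'):
            raise exception.InvalidVolume(
                reason='volume has no volume_image_metadata to boot from')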

Steps to reproduce
==

$ openstack volume create --bootable --size 1 vol3
$ openstack server create --network public --flavor 1 vm_3 --volume vol3 --wait

$ openstack console log show vm_3   # it won't return any log
$ openstack console url show vm_3   # check via vnc


Expected result
===
Nova should reject the request and tell the user why it is invalid.

Actual result
=
The VM gets created without any issue, but the user can't use it.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2009448

Title:
  Create VM from volume having no "volume_image_metadata" should not be
  allowed

Status in OpenStack Compute (nova):
  New

Bug description:

  Description
  ===
  VM creation from a volume with no image data should not be allowed.
  Right now, we can create a bootable volume with no image, and then we can 
create a VM from the volume. The VM will get created, but it won't boot as 
there is no image to boot from.

  Nova should validate this request and reject it with an appropriate
  error message.

  Steps to reproduce
  ==

  $ openstack volume create --bootable --size 1 vol3
  $ openstack server create --network public --flavor 1 vm_3 --volume vol3 --wait

  $ openstack console log show vm_3   # it won't return any log
  $ openstack console url show vm_3   # check via vnc

  
  Expected result
  ===
  Nova should reject the request and tell the user why it is invalid.

  Actual result
  =
  The VM gets created without any issue, but the user can't use it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2009448/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2011564] [NEW] launchpad links do not work on these pages

2023-03-14 Thread Amit Uniyal
Public bug reported:

Description
===
launchpad links do not work on the following pages:
https://specs.openstack.org/openstack/nova-specs/specs/wallaby/template.html
..
https://specs.openstack.org/openstack/nova-specs/specs/2023.1/template.html


For example:
https://blueprints.launchpad.net/nova/+spec/example
https://blueprints.launchpad.net/nova/+spec/awesome-thing
https://review.opendev.org/q/status:open+project:openstack/nova-specs+message:apiimpact

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: doc

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2011564

Title:
  launchpad links do not work on these pages

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  launchpad links do not work on the following pages:
  https://specs.openstack.org/openstack/nova-specs/specs/wallaby/template.html
  ..
  https://specs.openstack.org/openstack/nova-specs/specs/2023.1/template.html

  
  For example:
  https://blueprints.launchpad.net/nova/+spec/example
  https://blueprints.launchpad.net/nova/+spec/awesome-thing
  https://review.opendev.org/q/status:open+project:openstack/nova-specs+message:apiimpact

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2011564/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2011567] [NEW] Cycle theme page is empty

2023-03-14 Thread Amit Uniyal
Public bug reported:

here: https://specs.openstack.org/openstack/nova-specs/

Under nova project plans -> Priorities

https://specs.openstack.org/openstack/nova-specs/priorities/ussuri-priorities.html
...
https://specs.openstack.org/openstack/nova-specs/priorities/2023.1-priorities.html


Since the Ussuri cycle, the theme page has not been filled in.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: doc

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2011567

Title:
  Cycle theme page is empty

Status in OpenStack Compute (nova):
  New

Bug description:
  here: https://specs.openstack.org/openstack/nova-specs/

  Under nova project plans -> Priorities

  https://specs.openstack.org/openstack/nova-specs/priorities/ussuri-priorities.html
  ...
  https://specs.openstack.org/openstack/nova-specs/priorities/2023.1-priorities.html

  
  Since the Ussuri cycle, the theme page has not been filled in.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2011567/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2012365] [NEW] Issues with nova-manage volume_attachment subcommand

2023-03-21 Thread Amit Uniyal
Public bug reported:

Downstream bug report from Red Hat Bugzilla against Train:
https://bugzilla.redhat.com/show_bug.cgi?id=2161733

1 - Fix handling of instance locking
Add a context manager for locking and unlocking the instance during the
volume attachment refresh command (see the sketch after this list).

2 - Disconnect the volume from the correct host
Verify that the instance is attached to the correct compute host before
removing the volume connection.
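
A minimal sketch of such a context manager, assuming Nova's compute API
lock/unlock calls; the helper name and wiring are illustrative only:

    from contextlib import contextmanager

    @contextmanager
    def locked_instance(compute_api, context, instance, reason):
        # Lock the instance for the duration of the refresh and restore
        # the original lock state on the way out, even on failure.
        initially_locked = instance.locked
        if not initially_locked:
            compute_api.lock(context, instance, reason=reason)
        try:
            yield instance
        finally:
            if not initially_locked:
                compute_api.unlock(context, instance)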

** Affects: nova
 Importance: Undecided
 Assignee: Amit Uniyal (auniyal)
 Status: New


** Tags: nova-manage

** Tags added: nova-manage

** Changed in: nova
 Assignee: (unassigned) => Amit Uniyal (auniyal)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2012365

Title:
  Issues with nova-manage volume_attachment subcommand

Status in OpenStack Compute (nova):
  New

Bug description:
  Downstream bug report from Red Hat Bugzilla against Train:
  https://bugzilla.redhat.com/show_bug.cgi?id=2161733

  1 - Fix handling of instance locking
  Add a context manager for locking and unlocking the instance during
  the volume attachment refresh command.

  2 - Disconnect the volume from the correct host
  Verify that the instance is attached to the correct compute host
  before removing the volume connection.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2012365/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2012873] [NEW] [nova][DOC] stackalytics links are not updated in openstack wiki

2023-03-26 Thread Amit Uniyal
Public bug reported:

here https://wiki.openstack.org/wiki/Nova/CoreTeam

The links below should be updated:
Last 30 Days:
https://stackalytics.com/report/contribution/nova/30
to
https://www.stackalytics.io/report/contribution?module=nova-group&project_type=openstack&days=30

Last 90 Days
https://stackalytics.com/report/contribution/nova/90
to
https://www.stackalytics.io/report/contribution?module=nova-group&project_type=openstack&days=90

Last 180 Days
https://stackalytics.com/report/contribution/nova/180
to
https://www.stackalytics.io/report/contribution?module=nova-group&project_type=openstack&days=180

** Affects: nova
 Importance: Low
 Status: New


** Tags: doc

** Changed in: nova
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2012873

Title:
  [nova][DOC] stackalytics links are not updated in openstack wiki

Status in OpenStack Compute (nova):
  New

Bug description:
  here https://wiki.openstack.org/wiki/Nova/CoreTeam

  The links below should be updated:
  Last 30 Days:
  https://stackalytics.com/report/contribution/nova/30
  to
  https://www.stackalytics.io/report/contribution?module=nova-group&project_type=openstack&days=30

  Last 90 Days
  https://stackalytics.com/report/contribution/nova/90
  to
  https://www.stackalytics.io/report/contribution?module=nova-group&project_type=openstack&days=90

  Last 180 Days
  https://stackalytics.com/report/contribution/nova/180
  to
  https://www.stackalytics.io/report/contribution?module=nova-group&project_type=openstack&days=180

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2012873/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2019078] [NEW] cleanup up dangling volume attachment

2023-05-10 Thread Amit Uniyal
Public bug reported:

A live migration failure created a scenario where the volume is deleted
from the system but its BDM is still present in the Nova DB.


Nova does not support deleting a volume that is attached to a VM.

On reboot, Nova failed to find the volume listed as attached in the DB,
and the instance ended up in an error state.

To remove the volume that has already been deleted, the operator has to
shut down the VM and then delete the BDM from the DB manually.
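
A sketch of the check such a cleanup could perform, assuming a
cinderclient handle; the helper itself is hypothetical:

    from cinderclient import exceptions as cinder_exc

    def find_dangling_bdms(cinder, bdms):
        # Yield block device mappings whose volume no longer exists on
        # the Cinder side; these are the candidates for cleanup.
        for bdm in bdms:
            try:
                cinder.volumes.get(bdm.volume_id)
            except cinder_exc.NotFound:
                yield bdm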


Environment
===
Train release


Logs & Configs
==
https://paste.opendev.org/show/bnGkFSsConbbLpynbdfp/


@gibi filed a blueprint for the same:
https://blueprints.launchpad.net/nova/+spec/nova-manage-cleanup-dangling-volume-attachments

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2019078

Title:
  cleanup up dangling volume attachment

Status in OpenStack Compute (nova):
  New

Bug description:
  A live migration failure created a scenario where the volume is
  deleted from the system but its BDM is still present in the Nova DB.

  
  Nova does not support deleting a volume that is attached to a VM.

  On reboot, Nova failed to find the volume listed as attached in the
  DB, and the instance ended up in an error state.

  To remove the volume that has already been deleted, the operator has
  to shut down the VM and then delete the BDM from the DB manually.

  
  Environment
  ===
  Train release

  
  Logs & Configs
  ==
  https://paste.opendev.org/show/bnGkFSsConbbLpynbdfp/

  
  @gibi filed a blueprint for the same:
  https://blueprints.launchpad.net/nova/+spec/nova-manage-cleanup-dangling-volume-attachments

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2019078/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2033601] [NEW] refactor volumeAttachment creation

2023-08-30 Thread Amit Uniyal
Public bug reported:

The volumeAttachment create API request is built inline in multiple
functional tests. Move the volume attachment helper into
integrated_helpers (a sketch follows the search link below).

https://github.com/search?q=repo%3Aopenstack%2Fnova+volumeAttachment+language%3APython+path%3A%2F%5Enova%5C%2Ftests%5C%2Ffunctional%5C%2F%2F&type=code
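
A sketch of what the shared helper could look like in
integrated_helpers; the method name and placement are assumptions:

    def _create_volume_attachment(self, server_id, volume_id):
        # POST the volumeAttachment body that many functional tests
        # currently build inline.
        body = {'volumeAttachment': {'volumeId': volume_id}}
        return self.api.api_post(
            '/servers/%s/os-volume_attachments' % server_id, body)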

** Affects: nova
 Importance: Undecided
 Assignee: Amit Uniyal (auniyal)
 Status: New


** Tags: low-hanging-fruit

** Changed in: nova
 Assignee: (unassigned) => Amit Uniyal (auniyal)

** Tags added: low-

** Tags removed: low-
** Tags added: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2033601

Title:
  refactor volumeAttachment creation

Status in OpenStack Compute (nova):
  New

Bug description:
  The volumeAttachment create API request is built inline in multiple
  functional tests. Move the volume attachment helper into
  integrated_helpers.

  
https://github.com/search?q=repo%3Aopenstack%2Fnova+volumeAttachment+language%3APython+path%3A%2F%5Enova%5C%2Ftests%5C%2Ffunctional%5C%2F%2F&type=code

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2033601/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2035095] [NEW] CI:test_live_migration_with_trunk failing frequently on nova-live-migration job

2023-09-11 Thread Amit Uniyal
Public bug reported:

tests:
https://9d880f4dac5b6d1509a3-d490a441310dc4e25f1212d07e075dda.ssl.cf1.rackcdn.com/893744/1/check/nova-live-migration/8e97128/testr_results.html
https://291f4451bebc670e507b-a999ae1d5baedde86711d4f3bf719537.ssl.cf1.rackcdn.com/873648/23/check/nova-live-migration/8634c7c/testr_results.html
https://d6c736fcc9a860f59461-fbb3a5107d50e8d0a9c9940ac7f8a1de.ssl.cf5.rackcdn.com/894288/1/check/nova-live-migration/acaf4a4/testr_results.html
https://f0b27972d169a4e6104a-40416aec901d1e1b0fbe6fedfed92f1f.ssl.cf5.rackcdn.com/877446/22/check/nova-live-migration/975e3fc/testr_results.html
https://e19c202f51d149771e8a-51988972a6d6f0f30aafba2bfab9c470.ssl.cf2.rackcdn.com/891289/3/check/nova-live-migration/9812cc6/testr_results.html


Error backtrace:

`
Traceback (most recent call last):
  File "/opt/stack/tempest/tempest/common/utils/__init__.py", line 89, in wrapper
    return func(*func_args, **func_kwargs)
  File "/opt/stack/tempest/tempest/common/utils/__init__.py", line 70, in wrapper
    return f(*func_args, **func_kwargs)
  File "/opt/stack/tempest/tempest/api/compute/admin/test_live_migration.py", line 292, in test_live_migration_with_trunk
    self.assertTrue(
  File "/usr/lib/python3.10/unittest/case.py", line 687, in assertTrue
    raise self.failureException(msg)
AssertionError: False is not true
`

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: gate-failure

** Tags added: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2035095

Title:
  CI:test_live_migration_with_trunk  failing frequently on nova-live-
  migration job

Status in OpenStack Compute (nova):
  New

Bug description:
  tests:
  
https://9d880f4dac5b6d1509a3-d490a441310dc4e25f1212d07e075dda.ssl.cf1.rackcdn.com/893744/1/check/nova-live-migration/8e97128/testr_results.html
  
https://291f4451bebc670e507b-a999ae1d5baedde86711d4f3bf719537.ssl.cf1.rackcdn.com/873648/23/check/nova-live-migration/8634c7c/testr_results.html
  
https://d6c736fcc9a860f59461-fbb3a5107d50e8d0a9c9940ac7f8a1de.ssl.cf5.rackcdn.com/894288/1/check/nova-live-migration/acaf4a4/testr_results.html
  
https://f0b27972d169a4e6104a-40416aec901d1e1b0fbe6fedfed92f1f.ssl.cf5.rackcdn.com/877446/22/check/nova-live-migration/975e3fc/testr_results.html
  
https://e19c202f51d149771e8a-51988972a6d6f0f30aafba2bfab9c470.ssl.cf2.rackcdn.com/891289/3/check/nova-live-migration/9812cc6/testr_results.html

  
  Error backtrace:

  `
  Traceback (most recent call last):
    File "/opt/stack/tempest/tempest/common/utils/__init__.py", line 89, in wrapper
      return func(*func_args, **func_kwargs)
    File "/opt/stack/tempest/tempest/common/utils/__init__.py", line 70, in wrapper
      return f(*func_args, **func_kwargs)
    File "/opt/stack/tempest/tempest/api/compute/admin/test_live_migration.py", line 292, in test_live_migration_with_trunk
      self.assertTrue(
    File "/usr/lib/python3.10/unittest/case.py", line 687, in assertTrue
      raise self.failureException(msg)
  AssertionError: False is not true
  `

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2035095/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2036867] [NEW] refactor test: use project id as constant variable in all places

2023-09-20 Thread Amit Uniyal
Public bug reported:

This is not a bug per se; the same PROJECT_ID constant is defined in many places.

ex:
fixtures/nova.py:75:PROJECT_ID = '6f70656e737461636b20342065766572'
functional/api_samples_test_base.py:25:PROJECT_ID = "6f70656e737461636b20342065766572"


For the full list, grep the tests for 6f70656e737461636b20342065766572.
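
A sketch of the deduplication, assuming a shared constants module (the
module path is illustrative):

    # nova/tests/constants.py (hypothetical location)
    PROJECT_ID = '6f70656e737461636b20342065766572'

    # callers then import it instead of redefining it locally:
    # from nova.tests import constants
    # PROJECT_ID = constants.PROJECT_ID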

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit testing

** Tags added: low-hanging-fruit testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2036867

Title:
  refactor test: use project id as constant variable in all places

Status in OpenStack Compute (nova):
  New

Bug description:
  This is not a bug per se; the same PROJECT_ID constant is defined in many places.

  ex:
  fixtures/nova.py:75:PROJECT_ID = '6f70656e737461636b20342065766572'
  functional/api_samples_test_base.py:25:PROJECT_ID = "6f70656e737461636b20342065766572"

  
  For the full list, grep the tests for 6f70656e737461636b20342065766572.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2036867/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2048184] [NEW] VM booted from volume ends up in Error state on reboot

2024-01-04 Thread Amit Uniyal
Public bug reported:

Description
===

A VM created as boot-from-volume went to error state on reboot.

Reason: the volume attachment is getting deleted on the Cinder side.
https://paste.opendev.org/show/bgMKY9lwrIatIJ6Acl0H/
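
One way to confirm the Cinder-side state before rebooting is to list the
volume attachments with python-cinderclient; a sketch, assuming an
authenticated session and the server's UUID already exist:

    from cinderclient import client

    # 'session' is an authenticated keystoneauth1 session and
    # 'server_id' the instance UUID; both are assumed to exist.
    cinder = client.Client('3.27', session=session)
    for attachment in cinder.attachments.list(
            search_opts={'instance_id': server_id}):
        print(attachment.id, attachment.status)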

Steps to reproduce
==
1 - Create a VM:

   $ openstack server create --flavor 1 --network private --image cirros --boot-from-volume 1 vm1
   Wait for the VM to become ACTIVE.

2 - Reboot VM


   $ openstack server reboot vm1

Expected result
===
Server should not go to error state

Actual result
=
Server went to error state

Environment
===
devstack setup


Logs & Configs
==

error from compute log: https://paste.opendev.org/show/b365smdQWNA7A0MwK3s5/
full compute logs: attached

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: "error logs"
   
https://bugs.launchpad.net/bugs/2048184/+attachment/5736759/+files/compute_logs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2048184

Title:
  VM booted from volume ends up in Error state on reboot

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===

  A VM created as boot-from-volume went to error state on reboot.

  Reason: the volume attachment is getting deleted on the Cinder side.
  https://paste.opendev.org/show/bgMKY9lwrIatIJ6Acl0H/

  Steps to reproduce
  ==
  1 - Create a VM:

     $ openstack server create --flavor 1 --network private --image cirros --boot-from-volume 1 vm1
     Wait for the VM to become ACTIVE.

  2 - Reboot VM

  
 $ openstack server reboot vm1

  Expected result
  ===
  Server should not go to error state

  Actual result
  =
  Server went to error state

  Environment
  ===
  devstack setup

  
  Logs & Configs
  ==

  error from compute log: https://paste.opendev.org/show/b365smdQWNA7A0MwK3s5/
  full compute logs: attached

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2048184/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2062523] [NEW] evacuation: running VM shown as stopped after compute service starts on the first host

2024-04-19 Thread Amit Uniyal
Public bug reported:

Description
===
After evacuation, when the original host/node becomes serviceable again
(i.e. nova is running properly on the host), the VM status in the nova
DB automatically changes from 'ACTIVE' to 'SHUTOFF'.

This is because, on initialization, nova checks for all instances that
were evacuated from this host, deletes the local copies from the host,
and also updates their status in the DB (see the sketch below).
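
A simplified sketch of that startup cleanup, loosely modeled on
ComputeManager._destroy_evacuated_instances; _get_done_evacuations is a
hypothetical helper standing in for the real migration lookup:

    def destroy_evacuated_instances(self, context):
        # For every instance evacuated away from this host, drop the
        # stale local copy. The reported bug is that the instance's
        # visible state in the DB can also flip from ACTIVE to SHUTOFF
        # as a side effect.
        for instance, migration in self._get_done_evacuations(context):
            self.driver.destroy(context, instance,
                                network_info=None, block_device_info=None)
            migration.status = 'completed'
            migration.save()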

Steps to reproduce (reproduced 100%)


- create a VM (you must have multinode setup to run evacuation)
- get server-host
openstack server list --long

- server-host -  stop service
systemctl stop devstack@n-cpu

- server-host - force down the compute service
openstack compute service set --down <host> nova-compute

- stop VM
openstack server stop vm

- look for VM power state - stuck at powering off
openstack server list --long

- open logs api/cpu in both nodes
- evacuate VM
 openstack server evacuate --host=<dest-host> <server> --os-compute-api-version 2.29

- server moved to new host and become active

- original server-host: start the compute service (systemctl)

- look for VM status 
openstack server list --long


Expected result
===
After service start at original host, it should not affect VM in any way.

Actual result
=
openstack server list shows:

VM went to SHUTOFF, task state = None, power state = NOSTATE; it stays
on the expected new host.

The VM can still be used, though; one can log in to the server
(verified with virsh).

Environment
===

Nova: current master or future 2024.2

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: db evacuate

** Tags added: evacuate

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2062523

Title:
  evacuation: running VM shown as stopped after compute service starts
  on the first host

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  After evacuation, when the original host/node becomes serviceable
  again (i.e. nova is running properly on the host), the VM status in
  the nova DB automatically changes from 'ACTIVE' to 'SHUTOFF'.

  This is because, on initialization, nova checks for all instances
  that were evacuated from this host, deletes the local copies from the
  host, and also updates their status in the DB.

  Steps to reproduce (reproduced 100%)
  

  - create a VM (you must have multinode setup to run evacuation)
  - get server-host
  openstack server list --long

  - server-host -  stop service
  systemctl stop devstack@n-cpu

  - server-host - force down the compute service
  openstack compute service set --down <host> nova-compute

  - stop VM
  openstack server stop vm

  - look for VM power state - stuck at powering off
  openstack server list --long

  - open logs api/cpu in both nodes
  - evacuate VM
   openstack server evacuate --host=<dest-host> <server> --os-compute-api-version 2.29

  - server moved to new host and become active

  - original server-host: start the compute service (systemctl)

  - look for VM status 
  openstack server list --long

  
  Expected result
  ===
  After service start at original host, it should not affect VM in any way.

  Actual result
  =
  openstack server list shows:

  VM went to SHUTOFF, task state = None, power state = NOSTATE; it
  stays on the expected new host.

  The VM can still be used, though; one can log in to the server
  (verified with virsh).

  Environment
  ===

  Nova: current master or future 2024.2

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2062523/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1990809] [NEW] multinode setup, devstack scheduler fails to start after controller restart

2022-09-25 Thread Amit Uniyal
Public bug reported:

In a multinode devstack setup, the nova scheduler fails to start after a reboot.


Steps to reproduce
==

1 - deploy multinode devstack
https://docs.openstack.org/devstack/latest/guides/multinode-lab.html

2 - Verify all compute nodes are listed and setup is working as expected
$ openstack compute service list

create vm, assign floating IP and access VM

3 - Restart compute nodes, and controller node
$ sudo init 6

4 - Once controller and all other nodes are rebooted, check whether all nova 
services are running
$ openstack compute service list
  
$ sudo systemctl status devstack@n-*


Expected result
===
$ sudo systemctl status devstack@n-*

All services should be running


$ openstack compute service list

openstack commands should run without issue.


Actual result
=
nova-scheduler fails to start with the following error:

Sep 26 04:59:14 multinodesetupcontroller nova-scheduler[926]: ERROR nova     self._init_plugins(extensions)
Sep 26 04:59:14 multinodesetupcontroller nova-scheduler[926]: ERROR nova   File "/usr/local/lib/python3.8/dist-packages/stevedore/driver.py", line 113, in _init_plugins
Sep 26 04:59:14 multinodesetupcontroller nova-scheduler[926]: ERROR nova     raise NoMatches('No %r driver found, looking for %r' %
Sep 26 04:59:14 multinodesetupcontroller nova-scheduler[926]: ERROR nova stevedore.exception.NoMatches: No 'nova.scheduler.driver' driver found, looking for 'filter_scheduler'
Sep 26 04:59:14 multinodesetupcontroller nova-scheduler[926]: ERROR nova
Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: INFO oslo_service.periodic_task [-] Skipping periodic task _discover_hosts_in_cells because its interval is negative
Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: WARNING stevedore.named [-] Could not load filter_scheduler
Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: CRITICAL nova [-] Unhandled error: stevedore.exception.NoMatches: No 'nova.scheduler.driver' driver found, looking for 'filter_scheduler'
Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova Traceback (most recent call last):
Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova   File "/usr/local/bin/nova-scheduler", line 10, in <module>
Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova     sys.exit(main())
Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova   File "/opt/stack/nova/nova/cmd/scheduler.py", line 47, in main
Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova     server = service.Service.create(binary='nova-scheduler',
Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova   File "/opt/stack/nova/nova/service.py", line 252, in create
Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova     service_obj = cls(host, binary, topic, manager,
Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova   File "/opt/stack/nova/nova/service.py", line 116, in __init__
Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova     self.manager = manager_class(host=self.host, *args, **kwargs)
Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova   File "/opt/stack/nova/nova/scheduler/manager.py", line 60, in __init__
Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova     self.driver = driver.DriverManager(
Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova   File "/usr/local/lib/python3.8/dist-packages/stevedore/driver.py", line 54, in __init__
Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova     super(DriverManager, self).__init__(
Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova   File "/usr/local/lib/python3.8/dist-packages/stevedore/named.py", line 89, in __init__
Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova     self._init_plugins(extensions)
Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova   File "/usr/local/lib/python3.8/dist-packages/stevedore/driver.py", line 113, in _init_plugins
Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova     raise NoMatches('No %r driver found, looking for %r' %
Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova stevedore.exception.NoMatches: No 'nova.scheduler.driver' driver found, looking for 'filter_scheduler'
Sep 26 05:09:16 multinodesetupcontroller nova-scheduler[11226]: ERROR nova
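
The failing lookup can be reproduced in isolation with stevedore; this
sketch raises the same NoMatches when the 'filter_scheduler' entry
point is not registered (for example after a stale or partially broken
reinstall):

    from stevedore import driver

    # Raises stevedore.exception.NoMatches, exactly as in the log
    # above, when the entry point is missing from the installed nova
    # package.
    mgr = driver.DriverManager(
        namespace='nova.scheduler.driver',
        name='filter_scheduler',
        invoke_on_load=False,
    )
    print(mgr.driver)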



$ openstack compute service list
HttpException: 500: Server Error for url: http://22.0.2.5/compute/v2.1/os-services, Internal Server Error

$ sudo systemctl status devstack@n-sch
● devstack@n-sch.service - Devstack devstack@n-sch.service
 Loaded: loaded (/etc/systemd/sys