[Yahoo-eng-team] [Bug 1869707] [NEW] Unit test error on aarch64

2020-03-30 Thread Kevin Zhao
Public bug reported:


nova.tests.unit.virt.libvirt.test_driver.LibvirtConnTestCase.test_get_guest_config_machine_type_from_config
---

Captured traceback:
~~~
Traceback (most recent call last):

  File "/opt/stack/nova/nova/tests/unit/virt/libvirt/test_driver.py", line 
7325, in test_get_guest_config_machine_type_from_config
self.assertEqual(cfg.os_mach_type, "fake_machine_type")

  File 
"/opt/stack/nova/.tox/py36/lib/python3.6/site-packages/testtools/testcase.py", 
line 415, in assertEqual
self.assertThat(observed, matcher, message)

  File 
"/opt/stack/nova/.tox/py36/lib/python3.6/site-packages/testtools/testcase.py", 
line 502, in assertThat
raise mismatch_error

testtools.matchers._impl.MismatchError: 'virt' !=
'fake_machine_type'


Captured pythonlogging:
~~~
2020-03-30 11:03:04,756 WARNING [os_brick.initiator.connectors.remotefs] 
Connection details not present. RemoteFsClient may not initialize properly.
2020-03-30 11:03:04,762 INFO [nova.virt.libvirt.host] kernel doesn't support 
AMD SEV


nova.tests.unit.virt.libvirt.test_driver.LibvirtConnTestCase.test_get_guest_config_one_scsi_volume_with_configdrive
---

Captured traceback:
~~~
Traceback (most recent call last):

  File "/opt/stack/nova/nova/tests/unit/virt/libvirt/test_driver.py", line 
4927, in test_get_guest_config_one_scsi_volume_with_configdrive
self.assertEqual('hda', cfg.devices[2].target_dev)

  File 
"/opt/stack/nova/.tox/py36/lib/python3.6/site-packages/testtools/testcase.py", 
line 415, in assertEqual
self.assertThat(observed, matcher, message)

  File 
"/opt/stack/nova/.tox/py36/lib/python3.6/site-packages/testtools/testcase.py", 
line 502, in assertThat
raise mismatch_error

testtools.matchers._impl.MismatchError: 'hda' != 'sdd'


Captured pythonlogging:
~~~
2020-03-30 11:03:04,484 WARNING [os_brick.initiator.connectors.remotefs] 
Connection details not present. RemoteFsClient may not initialize properly.
2020-03-30 11:03:05,017 INFO [nova.virt.libvirt.host] kernel doesn't support 
AMD SEV


nova.tests.unit.virt.libvirt.test_driver.LibvirtConnTestCase.test_get_guest_config_boot_from_volume_with_configdrive


Captured traceback:
~~~
Traceback (most recent call last):

  File "/opt/stack/nova/nova/tests/unit/virt/libvirt/test_driver.py", line 
4999, in test_get_guest_config_boot_from_volume_with_configdrive
self.assertEqual('hda', cfg.devices[1].target_dev)

  File 
"/opt/stack/nova/.tox/py36/lib/python3.6/site-packages/testtools/testcase.py", 
line 415, in assertEqual
self.assertThat(observed, matcher, message)

  File 
"/opt/stack/nova/.tox/py36/lib/python3.6/site-packages/testtools/testcase.py", 
line 502, in assertThat
raise mismatch_error

testtools.matchers._impl.MismatchError: 'hda' != 'sdd'


Captured pythonlogging:
~~~
2020-03-30 11:03:27,767 WARNING [os_brick.initiator.connectors.remotefs] 
Connection details not present. RemoteFsClient may not initialize properly.
2020-03-30 11:03:27,791 INFO [nova.virt.libvirt.host] kernel doesn't support 
AMD SEV


nova.tests.unit.virt.libvirt.test_driver.LibvirtConnTestCase.test_sev_enabled_host_extra_spec_no_machine_type
-

Captured traceback:
~~~
Traceback (most recent call last):

  File "/opt/stack/nova/nova/tests/unit/virt/libvirt/test_driver.py", line 
3101, in test_sev_enabled_host_extra_spec_no_machine_type
"for SEV to work", str(exc))

  File 
"/opt/stack/nova/.tox/py36/lib/python3.6/site-packages/testtools/testcase.py", 
line 415, in assertEqual
self.assertThat(observed, matcher, message)

  File 
"/opt/stack/nova/.tox/py36/lib/python3.6/site-packages/testtools/testcase.py", 
line 502, in assertThat
raise mismatch_error

testtools.matchers._impl.MismatchError: !=:
reference = "Machine type 'pc' is not compatible with image fake_image 
(150d530b-1c57-4367-b754-1f1b5237923d): q35 type is required for SEV to work"
actual= "Machine type 'virt' is not compatible with image fake_image 
(150d530b-1c57-4367-b754-1f1b5237923d): q35 type is required for SEV to work"


Captured pythonlogging:
~~~
2020-03-30 11:03:34,823 WARNING [os_brick.initiator.connectors.remotefs] 
Connection details not present. RemoteFsClient may not initialize properly.
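A note on the failures above: the tests expect the configured machine type ('fake_machine_type') and x86-style disk names ('hda'), while on an aarch64 host the driver returns 'virt' and 'sdd', so the assertions are host-dependent. The sketch below is illustrative only (default_machine_type is a hypothetical helper, not Nova's code) and shows how pinning the reported architecture with mock makes such a test deterministic on any machine:

# Hypothetical stand-in for arch-dependent default selection; Nova's real
# logic in the libvirt driver is more involved.
import platform
from unittest import mock

def default_machine_type(arch=None):
    arch = arch or platform.machine()
    return 'virt' if arch == 'aarch64' else 'pc'

# Pinning the reported architecture keeps the expectation deterministic,
# no matter which machine runs the test suite.
with mock.patch('platform.machine', return_value='x86_64'):
    assert default_machine_type() == 'pc'

with mock.patch('platform.machine', return_value='aarch64'):
    assert default_machine_type() == 'virt'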

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a 

[Yahoo-eng-team] [Bug 1868121] [NEW] block info get cdrom error on non x86 platform

2020-03-19 Thread Kevin Zhao
Public bug reported:

nova.tests.unit.virt.libvirt.test_blockinfo.LibvirtBlockInfoTest.test_get_disk_bus_for_device_type_cdrom_with_q35_image_meta


Captured traceback:
~~~
Traceback (most recent call last):
  File "/opt/stack/nova/nova/tests/unit/virt/libvirt/test_blockinfo.py", 
line 839, in test_get_disk_bus_for_device_type_cdrom_with_q35_image_meta
self.assertEqual('sata', bus)
  File 
"/opt/stack/nova/.tox/py36/lib/python3.6/site-packages/testtools/testcase.py", 
line 411, in assertEqual
self.assertThat(observed, matcher, message)
  File 
"/opt/stack/nova/.tox/py36/lib/python3.6/site-packages/testtools/testcase.py", 
line 498, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: 'sata' != 'scsi'
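The mismatch reflects arch-dependent defaults for CD-ROM devices: the test expects 'sata' (the q35/x86 choice), while a non-x86 host such as aarch64 ends up with 'scsi'. A simplified, illustrative sketch of that kind of selection (not nova.virt.libvirt.blockinfo itself):

# Simplified, illustrative mapping of default cdrom buses per architecture;
# Nova's real decision also looks at image properties and the machine type.
DEFAULT_CDROM_BUS = {
    'x86_64': 'sata',   # with a q35 machine type
    'aarch64': 'scsi',
}

def disk_bus_for_cdrom(arch):
    return DEFAULT_CDROM_BUS.get(arch, 'scsi')

assert disk_bus_for_cdrom('x86_64') == 'sata'
assert disk_bus_for_cdrom('aarch64') == 'scsi'  # what the test observed here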

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1868121

Title:
  block info get cdrom error on non x86 platform

Status in OpenStack Compute (nova):
  New

Bug description:
  
nova.tests.unit.virt.libvirt.test_blockinfo.LibvirtBlockInfoTest.test_get_disk_bus_for_device_type_cdrom_with_q35_image_meta
  


  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "/opt/stack/nova/nova/tests/unit/virt/libvirt/test_blockinfo.py", 
line 839, in test_get_disk_bus_for_device_type_cdrom_with_q35_image_meta
  self.assertEqual('sata', bus)
File 
"/opt/stack/nova/.tox/py36/lib/python3.6/site-packages/testtools/testcase.py", 
line 411, in assertEqual
  self.assertThat(observed, matcher, message)
File 
"/opt/stack/nova/.tox/py36/lib/python3.6/site-packages/testtools/testcase.py", 
line 498, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: 'sata' != 'scsi'

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1868121/+subscriptions



[Yahoo-eng-team] [Bug 1867075] [NEW] Arm64: Instance with Configure Drive attach volume failed

2020-03-11 Thread Kevin Zhao
h(*args, **kwargs)
ERROR nova.virt.block_device [instance: 22bdc0a6-1c0c-43fa-8c64-66735b6a6cb6]   
File "/usr/local/lib/python3.6/dist-packages/libvirt.py", line 593, in 
attachDeviceFlags
ERROR nova.virt.block_device [instance: 22bdc0a6-1c0c-43fa-8c64-66735b6a6cb6]   
  if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', 
dom=self)
ERROR nova.virt.block_device [instance: 22bdc0a6-1c0c-43fa-8c64-66735b6a6cb6] 
libvirt.libvirtError: Requested operation is not valid: Domain already contains 
a disk with that address
ERROR nova.virt.block_device [instance: 22bdc0a6-1c0c-43fa-8c64-66735b6a6cb6]


[libvirt domain XML for instance-00f8 (22bdc0a6-1c0c-43fa-8c64-66735b6a6cb6) was attached here, but the mail archive stripped the XML markup. The recoverable details are: Nova instance "cirros-test" created 2020-03-12 01:35:31, flavor 512 MB RAM / 1 vCPU (524288 KiB memory), owner admin/admin, hvm guest using firmware /usr/share/AAVMF/AAVMF_CODE.fd with NVRAM /var/lib/libvirt/qemu/nvram/instance-00f8_VARS.fd, CPU model cortex-a57, emulator /usr/bin/qemu-system-aarch64, an RNG backed by /dev/urandom, and security labels libvirt-22bdc0a6-1c0c-43fa-8c64-66735b6a6cb6 / +64055:+123.]
  

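For diagnosing the "already contains a disk with that address" failure, a small script like the following (a sketch, assuming the libvirt-python bindings are installed and enough privileges to reach the libvirt socket) lists every disk target and drive address of the domain above, so the conflicting address is visible before another attach is attempted:

# Diagnostic sketch: enumerate the disks of a running domain to spot a
# clashing target/address. 'instance-00f8' is the domain from this report.
import xml.etree.ElementTree as ET

import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-00f8')
root = ET.fromstring(dom.XMLDesc(0))

for disk in root.findall('./devices/disk'):
    target = disk.find('target')
    address = disk.find('address')
    print(target.get('dev'), target.get('bus'),
          address.attrib if address is not None else None)

conn.close()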

** Affects: nova
 Importance: Undecided
 Assignee: Kevin Zhao (kevin-zhao)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Kevin Zhao (kevin-zhao)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1867075

Title:
  Arm64: Instance with Configure Drive attach volume failed

Status in OpenStack Compute (nova):
  New

Bug description:
  Arm64.

  Image: cirros-0.5.1
  hw_cdrom_bus='scsi', hw_disk_bus='scsi', hw_machine_type='virt', 
hw_rng_model='virtio', hw_scsi_model='virtio-scsi', 
os_command_line=''console=ttyAMA0''

  Boot a vm.
  Create a volume: openstack volume create --size 1 test

  Attach:
  openstack server add volume cirros-test test

  Error:
  DEBUG nova.virt.libvirt.guest [None req-8dfbf677-50bb-42be-869f-52c9ac638d59 
admin admin] attach device xml: [disk XML omitted here: the mail archive stripped the markup; the only recoverable value is the volume UUID b9abb789-1c55-4210-ab5c-78b0e3619405]
  ror: Requested operation is not valid: Domain already contains a disk with 
that address
  ERROR nova.virt.block_device [instance: 22bdc0a6-1c0c-43fa-8c64-66735b6a6cb6] 
Traceback (most recent call last):
  ERROR nova.virt.block_device [instance: 22bdc0a6-1c0c-43fa-8c64-66735b6a6cb6] 
  File "/opt/stack/nova/nova/virt/block_device.py", line 599, in _volume_attach
  ERROR nova.virt.block_device [instance: 22bdc0a6-1c0c-43fa-8c64-66735b6a6cb6] 
device_type=self['device_type'], encryption=encryption

[Yahoo-eng-team] [Bug 1864661] Re: Miss qrouter namespace after the router create and set network gateway/subnet

2020-02-25 Thread Kevin Zhao
Found the root cause: with a Kolla deployment, the ip netns namespaces show up
inside the Kolla container, which is a different behavior than before.
Will mark it as invalid.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1864661

Title:
  Miss qrouter namespace after the router create and set network
  gateway/subnet

Status in neutron:
  Invalid

Bug description:
  Train release.
  create private and public network and then configure router.

  openstack network create --provider-physical-network physnet1
  --provider-network-type flat --external public

  openstack subnet create --allocation-pool
  start=10.101.133.194,end=10.101.133.222 --network public --subnet-
  range 10.101.133.192/27 --gateway 10.101.133.193   public-subnet

  ip addr add 10.101.133.193/27 dev eth0
  openstack network create private 
  openstack subnet pool create shared-default-subnetpool-v4 
--default-prefix-length 26 --pool-prefix 10.100.0.0/24 --share --default -f 
value -c id
  openstack subnet create --ip-version 4 --subnet-pool 
shared-default-subnetpool-v4 --network private private-subnet

  openstack router create admin-router
  openstack router set --external-gateway public admin-router 
  openstack router add subnet admin-router private-subnet

  =
  ip netns list:
  returns nothing.

  
  l3_agent log:2020-02-25 14:29:49.380 20 INFO neutron.common.config [-] 
Logging enabled!
  2020-02-25 14:29:49.381 20 INFO neutron.common.config [-] 
/var/lib/kolla/venv/bin/neutron-l3-agent version 15.0.1
  2020-02-25 14:29:50.206 20 INFO neutron.agent.l3.agent 
[req-340b2ea3-b816-4a1b-bc14-0a0d9178cab9 - - - - -] Agent HA routers count 0
  2020-02-25 14:29:50.208 20 INFO neutron.agent.agent_extensions_manager 
[req-340b2ea3-b816-4a1b-bc14-0a0d9178cab9 - - - - -] Loaded agent extensions: []
  2020-02-25 14:29:50.248 20 INFO eventlet.wsgi.server [-] (20) wsgi starting 
up on http:/var/lib/neutron/keepalived-state-change
  2020-02-25 14:29:50.310 20 INFO neutron.agent.l3.agent [-] L3 agent started
  2020-02-25 14:29:55.314 20 INFO oslo.privsep.daemon 
[req-799123e9-6cad-46d3-a03d-265ffcf31ff6 - - - - -] Running privsep helper: 
['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', 
'--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', 
'/tmp/tmp5y8x4u6q/privsep.sock']
  2020-02-25 14:29:56.710 20 INFO oslo.privsep.daemon 
[req-799123e9-6cad-46d3-a03d-265ffcf31ff6 - - - - -] Spawned new privsep daemon 
via rootwrap
  2020-02-25 14:29:56.496 32 INFO oslo.privsep.daemon [-] privsep daemon 
starting
  2020-02-25 14:29:56.506 32 INFO oslo.privsep.daemon [-] privsep process 
running with uid/gid: 0/0
  2020-02-25 14:29:56.511 32 INFO oslo.privsep.daemon [-] privsep process 
running with capabilities (eff/prm/inh): 
CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
  2020-02-25 14:29:56.512 32 INFO oslo.privsep.daemon [-] privsep daemon 
running as pid 32
  2020-02-25 14:45:05.540 20 INFO neutron.agent.l3.agent [-] Starting router 
update for 9b9639b9-f1d4-4e55-a34d-3734389aeedf, action 3, priority 1, 
update_id 7fe46b58-852b-461b-b9f4-febfadf59343. Wait time elapsed: 0.001
  2020-02-25 14:45:19.815 20 INFO neutron.agent.l3.agent [-] Finished a router 
update for 9b9639b9-f1d4-4e55-a34d-3734389aeedf, update_id 
7fe46b58-852b-461b-b9f4-febfadf59343. Time elapsed: 14.275
  2020-02-25 14:45:19.817 20 INFO neutron.agent.l3.agent [-] Starting router 
update for 9b9639b9-f1d4-4e55-a34d-3734389aeedf, action 3, priority 1, 
update_id e71e2ed5-07bc-4148-a111-fbc362ac9e7b. Wait time elapsed: 7.905
  2020-02-25 14:45:23.282 20 INFO neutron.agent.linux.interface [-] Device 
qg-bba540a4-b5 already exists
  2020-02-25 14:45:28.490 20 INFO neutron.agent.l3.agent [-] Finished a router 
update for 9b9639b9-f1d4-4e55-a34d-3734389aeedf, update_id 
e71e2ed5-07bc-4148-a111-fbc362ac9e7b. Time elapsed: 8.672
  ~
  ~

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1864661/+subscriptions



[Yahoo-eng-team] [Bug 1864661] [NEW] Miss qrouter namespace after the router create and set network gateway/subnet

2020-02-25 Thread Kevin Zhao
Public bug reported:

Train release.
Create the private and public networks, then configure the router.

openstack network create --provider-physical-network physnet1
--provider-network-type flat --external public

openstack subnet create --allocation-pool
start=10.101.133.194,end=10.101.133.222 --network public --subnet-range
10.101.133.192/27 --gateway 10.101.133.193   public-subnet

ip addr add 10.101.133.193/27 dev eth0
openstack network create private 
openstack subnet pool create shared-default-subnetpool-v4 
--default-prefix-length 26 --pool-prefix 10.100.0.0/24 --share --default -f 
value -c id
openstack subnet create --ip-version 4 --subnet-pool 
shared-default-subnetpool-v4 --network private private-subnet

openstack router create admin-router
openstack router set --external-gateway public admin-router 
openstack router add subnet admin-router private-subnet

=
ip netns list:
returns nothing.


l3_agent log:2020-02-25 14:29:49.380 20 INFO neutron.common.config [-] Logging 
enabled!
2020-02-25 14:29:49.381 20 INFO neutron.common.config [-] 
/var/lib/kolla/venv/bin/neutron-l3-agent version 15.0.1
2020-02-25 14:29:50.206 20 INFO neutron.agent.l3.agent 
[req-340b2ea3-b816-4a1b-bc14-0a0d9178cab9 - - - - -] Agent HA routers count 0
2020-02-25 14:29:50.208 20 INFO neutron.agent.agent_extensions_manager 
[req-340b2ea3-b816-4a1b-bc14-0a0d9178cab9 - - - - -] Loaded agent extensions: []
2020-02-25 14:29:50.248 20 INFO eventlet.wsgi.server [-] (20) wsgi starting up 
on http:/var/lib/neutron/keepalived-state-change
2020-02-25 14:29:50.310 20 INFO neutron.agent.l3.agent [-] L3 agent started
2020-02-25 14:29:55.314 20 INFO oslo.privsep.daemon 
[req-799123e9-6cad-46d3-a03d-265ffcf31ff6 - - - - -] Running privsep helper: 
['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', 
'--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', 
'/tmp/tmp5y8x4u6q/privsep.sock']
2020-02-25 14:29:56.710 20 INFO oslo.privsep.daemon 
[req-799123e9-6cad-46d3-a03d-265ffcf31ff6 - - - - -] Spawned new privsep daemon 
via rootwrap
2020-02-25 14:29:56.496 32 INFO oslo.privsep.daemon [-] privsep daemon starting
2020-02-25 14:29:56.506 32 INFO oslo.privsep.daemon [-] privsep process running 
with uid/gid: 0/0
2020-02-25 14:29:56.511 32 INFO oslo.privsep.daemon [-] privsep process running 
with capabilities (eff/prm/inh): 
CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
2020-02-25 14:29:56.512 32 INFO oslo.privsep.daemon [-] privsep daemon running 
as pid 32
2020-02-25 14:45:05.540 20 INFO neutron.agent.l3.agent [-] Starting router 
update for 9b9639b9-f1d4-4e55-a34d-3734389aeedf, action 3, priority 1, 
update_id 7fe46b58-852b-461b-b9f4-febfadf59343. Wait time elapsed: 0.001
2020-02-25 14:45:19.815 20 INFO neutron.agent.l3.agent [-] Finished a router 
update for 9b9639b9-f1d4-4e55-a34d-3734389aeedf, update_id 
7fe46b58-852b-461b-b9f4-febfadf59343. Time elapsed: 14.275
2020-02-25 14:45:19.817 20 INFO neutron.agent.l3.agent [-] Starting router 
update for 9b9639b9-f1d4-4e55-a34d-3734389aeedf, action 3, priority 1, 
update_id e71e2ed5-07bc-4148-a111-fbc362ac9e7b. Wait time elapsed: 7.905
2020-02-25 14:45:23.282 20 INFO neutron.agent.linux.interface [-] Device 
qg-bba540a4-b5 already exists
2020-02-25 14:45:28.490 20 INFO neutron.agent.l3.agent [-] Finished a router 
update for 9b9639b9-f1d4-4e55-a34d-3734389aeedf, update_id 
e71e2ed5-07bc-4148-a111-fbc362ac9e7b. Time elapsed: 8.672
~
~

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1864661

Title:
  Miss qrouter namespace after the router create and set network
  gateway/subnet

Status in neutron:
  New

Bug description:
  Train release.
  Create the private and public networks, then configure the router.

  openstack network create --provider-physical-network physnet1
  --provider-network-type flat --external public

  openstack subnet create --allocation-pool
  start=10.101.133.194,end=10.101.133.222 --network public --subnet-
  range 10.101.133.192/27 --gateway 10.101.133.193   public-subnet

  ip addr add 10.101.133.193/27 dev eth0
  openstack network create private 
  openstack subnet pool create shared-default-subnetpool-v4 
--default-prefix-length 26 --pool-prefix 10.100.0.0/24 --share --default -f 
value -c id
  openstack subnet create --ip-version 4 --subnet-pool 
shared-default-subnetpool-v4 --network private private-subnet

  openstack router create admin-router
  openstack router set --external-gateway public admin-router 
  openstack router add subnet admin-router private-subnet

  =
  ip netns list:
  returns nothing.

 

[Yahoo-eng-team] [Bug 1864588] [NEW] Cpu model is not correct on Aarch64/Qemu/Custom mode

2020-02-24 Thread Kevin Zhao
Public bug reported:

Background:
We'd like to set up Nova and devstack upstream CI, which should support
launching VMs with the qemu virt_type. But host-passthrough mode (the default
on AArch64) doesn't work with qemu, so we have to use "custom" mode and
specify a CPU model.

However, on AArch64 libvirt does not return the list of available CPU models.
Ref: https://libvirt.org/html/libvirt-libvirt-host.html#virConnectGetCPUModelNames

So we should use the models specified in the config file, and add sensible
defaults for AArch64.

Nova-cpu.conf:
[libvirt]
live_migration_uri = qemu+ssh://stack@%s/system
cpu_mode = custom
virt_type = qemu
cpu_model = cortex-a57


WARNING nova.virt.libvirt.driver [-] The libvirt driver is not tested on 
qemu/aarch64 by the OpenStack project and thus its quality can not be ensured. 
For more information, see: 
https://docs.openstack.org/nova/latest/user/support-matrix.html
WARNING nova.virt.libvirt.driver [-] Running Nova with a libvirt version less 
than 5.0.0 is deprecated. The required minimum version of libvirt will be 
raised to 5.0.0 in the next release.
WARNING nova.virt.libvirt.driver [-] Running Nova with a QEMU version less than 
4.0.0 is deprecated. The required minimum version of QEMU will be raised to 
4.0.0 in the next release.
ERROR oslo_service.service [-] Error starting thread.: 
nova.exception.InvalidCPUInfo: Configured CPU model: cortex-a57 is not correct, 
or your host CPU arch does not suuport this model. Please correct your config 
and try again.
ERROR oslo_service.service Traceback (most recent call last):
ERROR oslo_service.service   File 
"/usr/local/lib/python3.6/dist-packages/oslo_service/service.py", line 810, in 
run_service
ERROR oslo_service.service service.start()
ERROR oslo_service.service   File "/opt/stack/nova/nova/service.py", line 158, 
in start
ERROR oslo_service.service self.manager.init_host()
ERROR oslo_service.service   File "/opt/stack/nova/nova/compute/manager.py", 
line 1394, in init_host
ERROR oslo_service.service self.driver.init_host(host=self.host)
ERROR oslo_service.service   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 715, in init_host
ERROR oslo_service.service self._check_cpu_compatibility()
ERROR oslo_service.service   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 747, in 
_check_cpu_compatibility
ERROR oslo_service.service raise exception.InvalidCPUInfo(msg)
ERROR oslo_service.service nova.exception.InvalidCPUInfo: Configured CPU model: 
cortex-a57 is not correct, or your host CPU arch does not suuport this model. 
Please correct your config and try again.
ERROR oslo_service.service
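The traceback boils down to a membership check: virConnectGetCPUModelNames() returns no models for this host, so any configured cpu_model fails the comparison. A simplified illustration of why an empty model list rejects even a valid model such as cortex-a57 (not Nova's exact _check_cpu_compatibility):

# Simplified illustration of the failing check: with an empty model list
# from libvirt, every configured cpu_model is rejected.
class InvalidCPUInfo(Exception):
    pass

def check_cpu_compatibility(configured_model, models_from_libvirt):
    if configured_model and configured_model not in models_from_libvirt:
        raise InvalidCPUInfo(
            "Configured CPU model: %s is not correct, or your host CPU "
            "arch does not support this model." % configured_model)

try:
    check_cpu_compatibility('cortex-a57', [])  # aarch64: libvirt returns []
except InvalidCPUInfo as exc:
    print(exc)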

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1864588

Title:
  Cpu model is not correct on Aarch64/Qemu/Custom mode

Status in OpenStack Compute (nova):
  New

Bug description:
  Background:
  We'd like to set up Nova and devstack upstream CI, which should support
  launching VMs with the qemu virt_type. But host-passthrough mode (the default
  on AArch64) doesn't work with qemu, so we have to use "custom" mode and
  specify a CPU model.

  However, on AArch64 libvirt does not return the list of available CPU models.
  Ref: https://libvirt.org/html/libvirt-libvirt-host.html#virConnectGetCPUModelNames

  So we should use the models specified in the config file, and add sensible
  defaults for AArch64.

  Nova-cpu.conf:
  [libvirt]
  live_migration_uri = qemu+ssh://stack@%s/system
  cpu_mode = custom
  virt_type = qemu
  cpu_model = cortex-a57


  WARNING nova.virt.libvirt.driver [-] The libvirt driver is not tested on 
qemu/aarch64 by the OpenStack project and thus its quality can not be ensured. 
For more information, see: 
https://docs.openstack.org/nova/latest/user/support-matrix.html
  WARNING nova.virt.libvirt.driver [-] Running Nova with a libvirt version less 
than 5.0.0 is deprecated. The required minimum version of libvirt will be 
raised to 5.0.0 in the next release.
  WARNING nova.virt.libvirt.driver [-] Running Nova with a QEMU version less 
than 4.0.0 is deprecated. The required minimum version of QEMU will be raised 
to 4.0.0 in the next release.
  ERROR oslo_service.service [-] Error starting thread.: 
nova.exception.InvalidCPUInfo: Configured CPU model: cortex-a57 is not correct, 
or your host CPU arch does not suuport this model. Please correct your config 
and try again.
  ERROR oslo_service.service Traceback (most recent call last):
  ERROR oslo_service.service   File 
"/usr/local/lib/python3.6/dist-packages/oslo_service/service.py", line 810, in 
run_service
  ERROR oslo_service.service service.start()
  ERROR oslo_service.service   File "/opt/stack/nova/nova/service.py", line 
158, in start
  ERROR oslo_service.service self.manager.init_host()
  ERROR oslo_service.service   File "/opt/stack/nova/nova/compute/manager.py", 
line 1394, in 

[Yahoo-eng-team] [Bug 1864014] [NEW] Upgrade from Rocky to Stein, router namespace disappear

2020-02-20 Thread Kevin Zhao
Public bug reported:

Upgraded an all-in-one deployment from Rocky to Stein.
The upgrade finished, but the router namespace disappeared.


Before:
ip netns list
qrouter-79658dd5-e3b4-4b13-a361-16d696ed1d1c (id: 1)
qdhcp-4a183162-64f5-49f9-a615-7c0fd63cf2a8 (id: 0)

After:
ip netns list

After about 1 minute the dhcp namespace appears and the dhcp-agent logs no
errors, but the qrouter namespace is still missing until the l3-agent docker
container is restarted manually.

l3-agent error after upgrade:
2020-02-20 02:57:07.306 12 INFO neutron.common.config [-] Logging enabled!
2020-02-20 02:57:07.308 12 INFO neutron.common.config [-] 
/var/lib/kolla/venv/bin/neutron-l3-agent version 14.0.4
2020-02-20 02:57:08.616 12 INFO neutron.agent.l3.agent 
[req-95654890-dab3-4106-b56d-c2685fb96f29 - - - - -] Agent HA routers count 0
2020-02-20 02:57:08.619 12 INFO neutron.agent.agent_extensions_manager 
[req-95654890-dab3-4106-b56d-c2685fb96f29 - - - - -] Loaded agent extensions: []
2020-02-20 02:57:08.657 12 INFO eventlet.wsgi.server [-] (12) wsgi starting up 
on http:/var/lib/neutron/keepalived-state-change
2020-02-20 02:57:08.710 12 INFO neutron.agent.l3.agent [-] L3 agent started
2020-02-20 02:57:10.716 12 INFO oslo.privsep.daemon 
[req-681aad3f-ae14-4315-b96d-5e95225cdf92 - - - - -] Running privsep helper: 
['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', 
'--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', 
'/tmp/tmpg8Ihqa/privsep.sock']
2020-02-20 02:57:11.750 12 INFO oslo.privsep.daemon 
[req-681aad3f-ae14-4315-b96d-5e95225cdf92 - - - - -] Spawned new privsep daemon 
via rootwrap
2020-02-20 02:57:11.614 29 INFO oslo.privsep.daemon [-] privsep daemon starting
2020-02-20 02:57:11.622 29 INFO oslo.privsep.daemon [-] privsep process running 
with uid/gid: 0/0
2020-02-20 02:57:11.627 29 INFO oslo.privsep.daemon [-] privsep process running 
with capabilities (eff/prm/inh): 
CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
2020-02-20 02:57:11.628 29 INFO oslo.privsep.daemon [-] privsep daemon running 
as pid 29
2020-02-20 02:57:14.449 12 INFO neutron.agent.l3.agent [-] Starting router 
update for 79658dd5-e3b4-4b13-a361-16d696ed1d1c, action 3, priority 2, 
update_id 49908db7-8a8c-410f-84a7-9e95a3dede16. Wait time elapsed: 0.000
2020-02-20 02:57:24.160 12 ERROR neutron.agent.linux.utils [-] Exit code: 4; 
Stdin: # Generated by iptables_manager

2020-02-20 02:57:26.388 12 ERROR neutron.agent.l3.router_info 
self.process_floating_ip_address_scope_rules()
2020-02-20 02:57:26.388 12 ERROR neutron.agent.l3.router_info File 
"/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2020-02-20 02:57:26.388 12 ERROR neutron.agent.l3.router_info self.gen.next()
2020-02-20 02:57:26.388 12 ERROR neutron.agent.l3.router_info File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/iptables_manager.py",
 line 438, in defer_apply
2020-02-20 02:57:26.388 12 ERROR neutron.agent.l3.router_info raise 
l3_exc.IpTablesApplyException(msg)
2020-02-20 02:57:26.388 12 ERROR neutron.agent.l3.router_info 
IpTablesApplyException: Failure applying iptables rules
2020-02-20 02:57:26.388 12 ERROR neutron.agent.l3.router_info
2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent [-] Failed to process 
compatible router: 79658dd5-e3b4-4b13-a361-16d696ed1d1c: 
IpTablesApplyException: Failure applying iptables rules
2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent Traceback (most recent 
call last):
2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/l3/agent.py", 
line 723, in _process_routers_if_compatible
2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent 
self._process_router_if_compatible(router)
2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/l3/agent.py", 
line 567, in _process_router_if_compatible
2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent 
self._process_added_router(router)
2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/l3/agent.py", 
line 575, in _process_added_router
2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent ri.process()
2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/common/utils.py", line 
161, in call
2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent self.logger(e)
2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_utils/excutils.py", line 
220, in __exit__
2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent self.force_reraise()
2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent File 

[Yahoo-eng-team] [Bug 1863058] [NEW] Arm64 CI for Nova

2020-02-13 Thread Kevin Zhao
Public bug reported:

Linaro has donated a cluster for OpenStack CI on Arm64.
The cluster is now ready:
https://opendev.org/openstack/project-config/src/branch/master/nodepool/nl03.openstack.org.yaml#L414

We'd like to set up CI for Nova first.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1863058

Title:
  Arm64 CI for Nova

Status in OpenStack Compute (nova):
  New

Bug description:
  Linaro has donated a cluster for OpenStack CI on Arm64.
  The cluster is now ready:
https://opendev.org/openstack/project-config/src/branch/master/nodepool/nl03.openstack.org.yaml#L414

  We'd like to set up CI for Nova first.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1863058/+subscriptions



[Yahoo-eng-team] [Bug 1844123] [NEW] Unable to trigger IPv6 Prefix Delegation

2019-09-16 Thread Kevin Zhao
Public bug reported:

Following the guide:
https://docs.openstack.org/neutron/pike/admin/config-ipv6.html#configuring-the-dibbler-server

I ran devstack and configured the Dibbler server, but I could not trigger the
prefix delegation (PD) process.

Base OS: Ubuntu 18.04

The devstack local conf:
==
ADMIN_PASSWORD=
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD

HOST_IP=192.168.100.19
HOST_IPV6=2604:1380:4111:3e00::13
SERVICE_HOST=$HOST_IP
MYSQL_HOST=$HOST_IP
RABBIT_HOST=$HOST_IP
GLANCE_HOSTPORT=$HOST_IP:9292

## Neutron options
Q_USE_SECGROUP=True
FLOATING_RANGE="139.178.86.48/28"
IPV4_ADDRS_SAFE_TO_USE="11.100.0.0/24"
Q_FLOATING_ALLOCATION_POOL=start=139.178.86.51,end=139.178.86.62
PUBLIC_NETWORK_GATEWAY="139.178.86.49"
PUBLIC_INTERFACE=bond0

disable_service tempest

# Open vSwitch provider networking configuration
Q_USE_PROVIDERNET_FOR_PUBLIC=True
OVS_PHYSICAL_BRIDGE=br-ex
PUBLIC_BRIDGE=br-ex
OVS_BRIDGE_MAPPINGS=public:br-ex

LOGFILE=/opt/stack/logs/stack.sh.log

And here is my ip addr output:
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 2604:1380:4111:3e08::1/61 scope global deprecated
   valid_lft forever preferred_lft 0sec
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: enaqcom8070i0:  mtu 1500 qdisc noop state DOWN group 
default qlen 1000
link/ether 8c:fd:f0:0c:71:79 brd ff:ff:ff:ff:ff:ff
3: enp1s0f0:  mtu 1500 qdisc mq master 
bond0 state UP group default qlen 1000
link/ether 98:03:9b:9c:b9:4c brd ff:ff:ff:ff:ff:ff
4: enp1s0f1:  mtu 1500 qdisc mq state UP group 
default qlen 1000
link/ether 98:03:9b:9c:b9:4d brd ff:ff:ff:ff:ff:ff
inet 192.168.100.19/24 brd 192.168.100.255 scope global enp1s0f1
   valid_lft forever preferred_lft forever
inet6 fe80::9a03:9bff:fe9c:b94d/64 scope link
   valid_lft forever preferred_lft forever
5: bond0:  mtu 1500 qdisc noqueue 
master ovs-system state UP group default qlen 1000
link/ether 98:03:9b:9c:b9:4c brd ff:ff:ff:ff:ff:ff
inet 10.32.36.34/28 brd 10.32.36.47 scope global bond0
   valid_lft forever preferred_lft forever
inet6 fe80::9a03:9bff:fe9c:b94c/64 scope link
   valid_lft forever preferred_lft forever
6: virbr0:  mtu 1500 qdisc noqueue state 
DOWN group default qlen 1000
link/ether 52:54:00:d2:50:fd brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
   valid_lft forever preferred_lft forever
7: virbr0-nic:  mtu 1500 qdisc fq_codel master virbr0 
state DOWN group default qlen 1000
link/ether 52:54:00:d2:50:fd brd ff:ff:ff:ff:ff:ff
23: tap0a3e1687-4b:  mtu 1450 qdisc fq_codel 
state UNKNOWN group default qlen 1000
link/ether fe:16:3e:17:69:19 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc16:3eff:fe17:6919/64 scope link
   valid_lft forever preferred_lft forever
27: ovs-system:  mtu 1500 qdisc noop state DOWN group 
default qlen 1000
link/ether f2:04:a9:6f:6c:65 brd ff:ff:ff:ff:ff:ff
28: br-int:  mtu 1450 qdisc noop state DOWN group default 
qlen 1000
link/ether 72:49:08:66:d2:42 brd ff:ff:ff:ff:ff:ff
29: br-ex:  mtu 1500 qdisc noqueue state 
UNKNOWN group default qlen 1000
link/ether 98:03:9b:9c:b9:4c brd ff:ff:ff:ff:ff:ff
inet 139.178.86.50/28 brd 139.178.86.63 scope global br-ex
   valid_lft forever preferred_lft forever
inet 139.178.86.49/28 scope global secondary br-ex
   valid_lft forever preferred_lft forever
inet6 2604:1380:4111:3e08::2/61 scope global
   valid_lft forever preferred_lft forever
inet6 2001:db8::2/64 scope global
   valid_lft forever preferred_lft forever
inet6 2604:1380:4111:3e00::13/127 scope global
   valid_lft forever preferred_lft forever
inet6 fe80::346f:5aff:fee5:f145/64 scope link
   valid_lft forever preferred_lft forever
30: br-tun:  mtu 1500 qdisc noop state DOWN group default 
qlen 1000
link/ether 42:2e:13:28:eb:47 brd ff:ff:ff:ff:ff:ff
35: tap3ba8f9d3-8f:  mtu 1450 qdisc fq_codel 
master ovs-system state UNKNOWN group default qlen 1000
link/ether fe:16:3e:ae:0d:28 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc16:3eff:feae:d28/64 scope link
   valid_lft forever preferred_lft forever
==

I set up dibbler server 1.0.1 and installed the dibbler-client.

cat /etc/dibbler/server.conf
#
# Example server configuration file
#
# This config. file is considered all-purpose as it instructs server
# to provide almost every configuratio
#

# Logging level range: 1(Emergency)-8(Debug)
log-level 8

# Don't log full date
log-mode short

# Uncomment this line to call script every time a response is sent
script "/var/lib/dibbler/pd-server.sh"

# set preference of this server to 0 (higher = more prefered)
preference 0

iface 

[Yahoo-eng-team] [Bug 1740824] [NEW] Make UEFI as the default properties for AArch64

2018-01-01 Thread Kevin Zhao
Public bug reported:

Nowadays UEFI is the essential boot method for AArch64, so make it the
default. Images that boot with an external kernel/initrd are not affected by
this change.
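A minimal sketch of what the proposed default could look like, assuming the decision hangs off the guest architecture with an image-property override (default_firmware_type is a hypothetical helper, not Nova's implementation):

# Hypothetical sketch of the proposal: UEFI for AArch64 guests unless the
# image explicitly asks for something else.
def default_firmware_type(arch, image_props):
    explicit = image_props.get('hw_firmware_type')
    if explicit:
        return explicit
    return 'uefi' if arch == 'aarch64' else 'bios'

assert default_firmware_type('aarch64', {}) == 'uefi'
assert default_firmware_type('x86_64', {}) == 'bios'
assert default_firmware_type('aarch64', {'hw_firmware_type': 'bios'}) == 'bios'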

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1740824

Title:
  Make UEFI as the default properties for AArch64

Status in OpenStack Compute (nova):
  New

Bug description:
  Nowadays UEFI is the essential boot method for AArch64, so make it the
  default. Images that boot with an external kernel/initrd are not affected by
  this change.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1740824/+subscriptions



[Yahoo-eng-team] [Bug 1711093] [NEW] Unit test get_disk_mapping_rescue_with_config error in AArch64

2017-08-16 Thread Kevin Zhao
Public bug reported:

nova.tests.unit.virt.libvirt.test_blockinfo.LibvirtBlockInfoTest.test_get_disk_mapping_rescue_with_config
-

Captured traceback:
~~~
Traceback (most recent call last):
  File "nova/tests/unit/virt/libvirt/test_blockinfo.py", line 265, in 
test_get_disk_mapping_rescue_with_config
self.assertEqual(expect, mapping)
  File 
"/opt/stack/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 411, in assertEqual
self.assertThat(observed, matcher, message)
  File 
"/opt/stack/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: !=:
reference = {'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'},
 'disk.config.rescue': {'bus': 'ide', 'dev': 'hda', 'type': 'cdrom'},
 'disk.rescue': {'boot_index': '1',
 'bus': 'virtio',
 'dev': 'vda',
 'type': 'disk'},
 'root': {'boot_index': '1', 'bus': 'virtio', 'dev': 'vda', 'type': 'disk'}}
actual= {'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'},
 'disk.config.rescue': {'bus': 'scsi', 'dev': 'sda', 'type': 'cdrom'},
 'disk.rescue': {'boot_index': '1',
 'bus': 'virtio',
 'dev': 'vda',
 'type': 'disk'},
 'root': {'boot_index': '1', 'bus': 'virtio', 'dev': 'vda', 'type': 'disk'}}




'type': 'disk', 'boot_index': '1'},

** Affects: nova
 Importance: Undecided
     Assignee: Kevin Zhao (kevin-zhao)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Kevin Zhao (kevin-zhao)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1711093

Title:
  Unit test get_disk_mapping_rescue_with_config error in AArch64

Status in OpenStack Compute (nova):
  New

Bug description:
  
nova.tests.unit.virt.libvirt.test_blockinfo.LibvirtBlockInfoTest.test_get_disk_mapping_rescue_with_config
  
-

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "nova/tests/unit/virt/libvirt/test_blockinfo.py", line 265, in 
test_get_disk_mapping_rescue_with_config
  self.assertEqual(expect, mapping)
File 
"/opt/stack/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 411, in assertEqual
  self.assertThat(observed, matcher, message)
File 
"/opt/stack/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: !=:
  reference = {'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'},
   'disk.config.rescue': {'bus': 'ide', 'dev': 'hda', 'type': 'cdrom'},
   'disk.rescue': {'boot_index': '1',
   'bus': 'virtio',
   'dev': 'vda',
   'type': 'disk'},
   'root': {'boot_index': '1', 'bus': 'virtio', 'dev': 'vda', 'type': 
'disk'}}
  actual= {'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'},
   'disk.config.rescue': {'bus': 'scsi', 'dev': 'sda', 'type': 'cdrom'},
   'disk.rescue': {'boot_index': '1',
   'bus': 'virtio',
   'dev': 'vda',
   'type': 'disk'},
   'root': {'boot_index': '1', 'bus': 'virtio', 'dev': 'vda', 'type': 
'disk'}}
  
  

  
  'type': 'disk', 'boot_index': '1'},

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1711093/+subscriptions



[Yahoo-eng-team] [Bug 1710766] [NEW] video type error when launching instance in AArch64

2017-08-14 Thread Kevin Zhao
compute.manager 
[instance: 66569a7e-2695-4de7-868b-5645a6d87926] self._encoded_xml, 
errors='ignore')
Aug 15 02:53:18 debian nova-compute[2699]: ERROR nova.compute.manager 
[instance: 66569a7e-2695-4de7-868b-5645a6d87926]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
Aug 15 02:53:18 debian nova-compute[2699]: ERROR nova.compute.manager 
[instance: 66569a7e-2695-4de7-868b-5645a6d87926] self.force_reraise()
Aug 15 02:53:18 debian nova-compute[2699]: ERROR nova.compute.manager 
[instance: 66569a7e-2695-4de7-868b-5645a6d87926]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
Aug 15 02:53:18 debian nova-compute[2699]: ERROR nova.compute.manager 
[instance: 66569a7e-2695-4de7-868b-5645a6d87926] six.reraise(self.type_, 
self.value, self.tb)
Aug 15 02:53:18 debian nova-compute[2699]: ERROR nova.compute.manager 
[instance: 66569a7e-2695-4de7-868b-5645a6d87926]   File 
"/opt/stack/nova/nova/virt/libvirt/guest.py", line 139, in launch
Aug 15 02:53:18 debian nova-compute[2699]: ERROR nova.compute.manager 
[instance: 66569a7e-2695-4de7-868b-5645a6d87926] return 
self._domain.createWithFlags(flags)
Aug 15 02:53:18 debian nova-compute[2699]: ERROR nova.compute.manager 
[instance: 66569a7e-2695-4de7-868b-5645a6d87926]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 186, in doit
Aug 15 02:53:18 debian nova-compute[2699]: ERROR nova.compute.manager 
[instance: 66569a7e-2695-4de7-868b-5645a6d87926] result = 
proxy_call(self._autowrap, f, *args, **kwargs)
Aug 15 02:53:18 debian nova-compute[2699]: ERROR nova.compute.manager 
[instance: 66569a7e-2695-4de7-868b-5645a6d87926]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 144, in 
proxy_call
Aug 15 02:53:18 debian nova-compute[2699]: ERROR nova.compute.manager 
[instance: 66569a7e-2695-4de7-868b-5645a6d87926] rv = execute(f, *args, 
**kwargs)
Aug 15 02:53:18 debian nova-compute[2699]: ERROR nova.compute.manager 
[instance: 66569a7e-2695-4de7-868b-5645a6d87926]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 125, in execute
Aug 15 02:53:18 debian nova-compute[2699]: ERROR nova.compute.manager 
[instance: 66569a7e-2695-4de7-868b-5645a6d87926] six.reraise(c, e, tb)
Aug 15 02:53:18 debian nova-compute[2699]: ERROR nova.compute.manager 
[instance: 66569a7e-2695-4de7-868b-5645a6d87926]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 83, in tworker
Aug 15 02:53:18 debian nova-compute[2699]: ERROR nova.compute.manager 
[instance: 66569a7e-2695-4de7-868b-5645a6d87926] rv = meth(*args, **kwargs)
Aug 15 02:53:18 debian nova-compute[2699]: ERROR nova.compute.manager 
[instance: 66569a7e-2695-4de7-868b-5645a6d87926]   File 
"/usr/local/lib/python2.7/dist-packages/libvirt.py", line 1092, in 
createWithFlags
Aug 15 02:53:18 debian nova-compute[2699]: ERROR nova.compute.manager 
[instance: 66569a7e-2695-4de7-868b-5645a6d87926] if ret == -1: raise 
libvirtError ('virDomainCreateWithFlags() failed', dom=self)
Aug 15 02:53:18 debian nova-compute[2699]: ERROR nova.compute.manager 
[instance: 66569a7e-2695-4de7-868b-5645a6d87926] libvirtError: unsupported 
configuration: this QEMU does not support 'cirrus' video device
Aug 15 02:53:18 debian nova-compute[2699


Libvirt version : 3.4.0
Qemu version: 2.9.0

** Affects: nova
 Importance: Undecided
 Assignee: Kevin Zhao (kevin-zhao)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Kevin Zhao (kevin-zhao)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1710766

Title:
  video type error when launching instance in AArch64

Status in OpenStack Compute (nova):
  New

Bug description:
  Aug 15 02:53:17 debian nova-compute[2699]: INFO os_vif [None 
req-4d438048-b9f9-4879-869e-441e577af0a1 admin admin] Successfully plugged vif 
VIFBridge(active=False,address=fa:16:3e:c9:32:61,bridge_name='qbredc3936d-c5',has_traffic_filtering=True,id=edc3936d-c542-430f-82f2-46e560f2774a,network=Network(e43bd212-675b-4c24-a714-437b57de70b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedc3936d-c5')
  Aug 15 02:53:18 debian nova-compute[2699]: ERROR nova.virt.libvirt.guest 
[None req-4d438048-b9f9-4879-869e-441e577af0a1 admin admin] Error launching a 
defined domain with XML: 
  Aug 15 02:53:18 debian nova-compute[2699]: [domain XML omitted here: the mail archive stripped the markup; it names instance-0002 with UUID 66569a7e-2695-4de7-868b-5645a6d87926]

[Yahoo-eng-team] [Bug 1502028] Re: cannot attach a volume when using multiple ceph backends

2016-08-22 Thread Kevin Zhao
-22 03:30:04.949 TRACE nova.virt.block_device [instance: bfa9db55-e55b-4aaa-9dab-7d2ebf85c009] self._disconnect_volume(connection_info, disk_dev)
2016-08-22 03:30:04.949 TRACE nova.virt.block_device [instance: bfa9db55-e55b-4aaa-9dab-7d2ebf85c009]   File "/srv/nova/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2016-08-22 03:30:04.949 TRACE nova.virt.block_device [instance: bfa9db55-e55b-4aaa-9dab-7d2ebf85c009]     self.force_reraise()
2016-08-22 03:30:04.949 TRACE nova.virt.block_device [instance: bfa9db55-e55b-4aaa-9dab-7d2ebf85c009]   File "/srv/nova/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2016-08-22 03:30:04.949 TRACE nova.virt.block_device [instance: bfa9db55-e55b-4aaa-9dab-7d2ebf85c009]     six.reraise(self.type_, self.value, self.tb)
2016-08-22 03:30:04.949 TRACE nova.virt.block_device [instance: bfa9db55-e55b-4aaa-9dab-7d2ebf85c009]   File "/srv/nova/local/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1225, in attach_volume
2016-08-22 03:30:04.949 TRACE nova.virt.block_device [instance: bfa9db55-e55b-4aaa-9dab-7d2ebf85c009]     guest.attach_device(conf, persistent=True, live=live)
2016-08-22 03:30:04.949 TRACE nova.virt.block_device [instance: bfa9db55-e55b-4aaa-9dab-7d2ebf85c009]   File "/srv/nova/local/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 296, in attach_device
2016-08-22 03:30:04.949 TRACE nova.virt.block_device [instance: bfa9db55-e55b-4aaa-9dab-7d2ebf85c009]     self._domain.attachDeviceFlags(device_xml, flags=flags)
2016-08-22 03:30:04.949 TRACE nova.virt.block_device [instance: bfa9db55-e55b-4aaa-9dab-7d2ebf85c009]   File "/srv/nova/local/lib/python2.7/site-packages/eventlet/tpool.py", line 186, in doit
2016-08-22 03:30:04.949 TRACE nova.virt.block_device [instance: bfa9db55-e55b-4aaa-9dab-7d2ebf85c009]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2016-08-22 03:30:04.949 TRACE nova.virt.block_device [instance: bfa9db55-e55b-4aaa-9dab-7d2ebf85c009]   File "/srv/nova/local/lib/python2.7/site-packages/eventlet/tpool.py", line 144, in proxy_call
2016-08-22 03:30:04.949 TRACE nova.virt.block_device [instance: bfa9db55-e55b-4aaa-9dab-7d2ebf85c009]     rv = execute(f, *args, **kwargs)
2016-08-22 03:30:04.949 TRACE nova.virt.block_device [instance: bfa9db55-e55b-4aaa-9dab-7d2ebf85c009]   File "/srv/nova/local/lib/python2.7/site-packages/eventlet/tpool.py", line 125, in execute
2016-08-22 03:30:04.949 TRACE nova.virt.block_device [instance: bfa9db55-e55b-4aaa-9dab-7d2ebf85c009]     six.reraise(c, e, tb)
2016-08-22 03:30:04.949 TRACE nova.virt.block_device [instance: bfa9db55-e55b-4aaa-9dab-7d2ebf85c009]   File "/srv/nova/local/lib/python2.7/site-packages/eventlet/tpool.py", line 83, in tworker
2016-08-22 03:30:04.949 TRACE nova.virt.block_device [instance: bfa9db55-e55b-4aaa-9dab-7d2ebf85c009]     rv = meth(*args, **kwargs)
2016-08-22 03:30:04.949 TRACE nova.virt.block_device [instance: bfa9db55-e55b-4aaa-9dab-7d2ebf85c009]   File "/srv/nova/local/lib/python2.7/site-packages/libvirt.py", line 560, in attachDeviceFlags
2016-08-22 03:30:04.949 TRACE nova.virt.block_device [instance: bfa9db55-e55b-4aaa-9dab-7d2ebf85c009]     if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', dom=self)
2016-08-22 03:30:04.949 TRACE nova.virt.block_device [instance: bfa9db55-e55b-4aaa-9dab-7d2ebf85c009] libvirtError: internal error: unable to execute QEMU command 'device_add': Property 'scsi-hd.drive' can't find value 'drive-scsi0-0-0-1'


** Changed in: nova
   Status: Expired => Confirmed

** Changed in: nova
 Assignee: (unassigned) => Kevin Zhao (kevin-zhao)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1502028

Title:
  cannot attach a volume when using multiple ceph backends

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  1. Exact version of Nova/OpenStack you are running: Kilo Stable

  2. Relevant log files:

  I'm testing attaching ceph RADOS block devices to a VM; however, I've hit
  an issue when the VM and the volumes are on different ceph clusters.

  <--error message-->
  2015-09-24 11:32:31 13083 DEBUG nova.virt.libvirt.config [req-b9bbd744-cf75-477b-b6a6-ea5b72f6181f 9504f2c4fe6b4b34a1bb0330f2faba35 0788824d5d1f46f2b014597ba8dc0585] Generated XML [disk XML omitted here: the mail archive stripped the markup; the only recoverable value is the serial 727c5319-1926-44ac-ba52-de55485faf2b]

[Yahoo-eng-team] [Bug 1604677] [NEW] Got messy code via console log in AArch64

2016-07-20 Thread Kevin Zhao
Public bug reported:

Description
===
After launching an instance on AArch64 via Nova, running nova console-log
returns garbled output, because the console type is not correct.
==
1.Using devstack to deploy openstack. Using default local.conf.

2.Upload the aarch64 image with glance.
$ source ~/devstack/openrc admin admin
$ glance image-create --name image-arm64.img --disk-format qcow2 
--container-format bare --visibility public --file 
images/image-arm64-wily.qcow2 --progress
$ glance image-create --name image-arm64.vmlinuz --disk-format aki 
--container-format aki --visibility public --file 
images/image-arm64-wily.vmlinuz --progress
$ glance image-create --name image-arm64.initrd --disk-format ari 
--container-format ari --visibility public --file 
images/image-arm64-wily.initrd --progress
$ IMAGE_UUID=$(glance image-list | grep image-arm64.img | awk '{ print $2 }')
$ IMAGE_KERNEL_UUID=$(glance image-list | grep image-arm64.vmlinuz | awk '{ 
print $2 }')
$ IMAGE_INITRD_UUID=$(glance image-list | grep image-arm64.initrd | awk '{ 
print $2 }')
$ glance image-update --kernel-id ${IMAGE_KERNEL_UUID} --ramdisk-id 
${IMAGE_INITRD_UUID} ${IMAGE_UUID}
3.Set the scsi model:
$ glance image-update --property hw_disk_bus --property 
hw_scsi_model=virtio-scsi ${IMAGE_UUID}

4.nova add keypair
$ nova keypair-add default --pub-key ~/.ssh/id_rsa.pub

5.Launch the instance:
$ image=$(nova image-list | egrep "image-arm64.img"'[^-]' | awk '{ print $2 }')
$ nova boot --flavor m1.small --image ${image} --key-name default test-arm64

6.nova console-log 

Expected result
===
Get the console information

Actual result
=
Garbled output or no console information.

The default kernel command line is not correct for AArch64:
"console=tty0 console=ttyS0" is not supported on AArch64.

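The arch-sensitive part is the serial console device name: AArch64 guests use the PL011 UART (ttyAMA0), so a command line built for ttyS0 sends the console output nowhere useful. An illustrative sketch only (console_arg is a hypothetical helper):

# Illustrative only: map the architecture to its usual serial console device
# so the kernel command line points console output somewhere readable.
SERIAL_CONSOLE = {
    'x86_64': 'ttyS0',
    'aarch64': 'ttyAMA0',
}

def console_arg(arch):
    return 'console=%s' % SERIAL_CONSOLE.get(arch, 'ttyS0')

print(console_arg('aarch64'))  # console=ttyAMA0, which works on AArch64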
Environment
===
1. Exact version of OpenStack you are running. See the following
  list for all releases: http://docs.openstack.org/releases/
   Nova development, commit code: 3e96b0fde010c3f800a539eec5376c3c379c8594

2. Which hypervisor did you use?
Libvirt+KVM
$ kvm --version
QEMU emulator version 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.1), Copyright (c) 
2003-2008 Fabrice Bellard
$ libvirtd --version
libvirtd (libvirt) 1.3.1

2. Which storage type did you use?
   In the host file system,all in one physics machine.
stack@u202154:/opt/stack/nova$ df -hl
Filesystem Size Used Avail Use% Mounted on
udev 7.8G 0 7.8G 0% /dev
tmpfs 1.6G 21M 1.6G 2% /run
/dev/sda2 917G 12G 859G 2% /
tmpfs 7.9G 0 7.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/sda1 511M 888K 511M 1% /boot/efi
cgmfs 100K 0 100K 0% /run/cgmanager/fs
tmpfs 1.6G 0 1.6G 0% /run/user/1002

3. Which networking type did you use?
   nova-network

4. Environment information:
   Architecture : AARCH64
   OS: Ubuntu 16.04

Detailed log info is in the accessory.
The guest xml is:

[guest domain XML omitted: the mail archive stripped the markup. The recoverable details are: domain c1be4539-43ba-4c88-b725-cdaf0fbccf8e (instance-0015), 4194304 KiB memory, 2 vCPUs, Nova instance "test-cirros" created 2016-07-20 06:09:32 with flavor 4096 MB RAM / 40 GB disk / 2 vCPUs, owner admin/admin, hvm guest booting kernel /opt/stack/data/nova/instances/c1be4539-43ba-4c88-b725-cdaf0fbccf8e/kernel and ramdisk /opt/stack/data/nova/instances/c1be4539-43ba-4c88-b725-cdaf0fbccf8e/ramdisk with command line "root=/dev/vda console=ttyAMA0".]


** Affects: nova
 Importance: Undecided
     Assignee: Kevin Zhao (kevin-zhao)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Kevin Zhao (kevin-zhao)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1604677

Title:
  Got messy code via console log in AArch64

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  After launching an instance on AArch64 via Nova, running nova console-log
  returns garbled output, because the console type is not correct.
  ==
  1.Using devstack to deploy openstack. Using default local.conf.

  2.Upload the aarch64 image with glance.
  $ source ~/devstack/openrc admin admin
  $ glance image-create --name image-arm64.img --disk-format qcow2 
--container-format bare --visibility public --file 
images/image-arm64-wily.qcow2 --progress
  $ glance image-create --name image-arm64.vmlinuz --disk-format aki 
--container-format aki --visibility public --file 
images/image-arm64-wily.vmlinuz --progress
  $ glance image-create --name image-arm64.initrd --disk-format ari 
--container-format ari --visibility public --file 
images/image-arm64-wily.initrd

[Yahoo-eng-team] [Bug 1598370] [NEW] Got AttributeError when launching instance in Aarch64

2016-07-02 Thread Kevin Zhao
5d7-af4c-2cc6e8b966bd] obj.parse_dom(c)
2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: 
c8ea40f1-2877-45d7-af4c-2cc6e8b966bd] AttributeError: 'NoneType' object has no 
attribute 'parse_dom'
2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: 
c8ea40f1-2877-45d7-af4c-2cc6e8b966bd] 
2016-07-02 06:57:08.647 INFO nova.compute.manager 
[req-c8805971-7d8a-4775-ae95-7ac62b284487 admin admin] [instance: 
c8ea40f1-2877-45d7-af4c-2cc6e8b966bd] Terminating instance
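For context, this error shape appears when a parser factory returns None for an element it does not recognize and the caller then invokes parse_dom() on that result. A contrived, self-contained illustration of the pattern only (Nova's real parsing lives in nova/virt/libvirt/config.py and is not reproduced here):

# Contrived illustration of the failure pattern; device_for_tag and
# DiskDevice are made up for this example.
class DiskDevice(object):
    def parse_dom(self, node):
        return node

def device_for_tag(tag):
    known = {'disk': DiskDevice}   # deliberately incomplete mapping
    cls = known.get(tag)           # None for an unhandled tag
    return cls() if cls else None

obj = device_for_tag('unknown-device')
try:
    obj.parse_dom(None)
except AttributeError as exc:
    print(exc)  # 'NoneType' object has no attribute 'parse_dom'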


Environment
===
1. Exact version of OpenStack you are running. See the following
  list for all releases: http://docs.openstack.org/releases/
   Nova development, commit code: 23153952a979c93a414705744b0f8ba4bde18f75

2. Which hypervisor did you use?
Libvirt+KVM
$ kvm --version
QEMU emulator version 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.1), Copyright (c) 
2003-2008 Fabrice Bellard
$ libvirtd --version
libvirtd (libvirt) 1.3.1

2. Which storage type did you use?
   In the host file system,all in one physics machine.
stack@u202154:/opt/stack/nova$ df -hl
Filesystem  Size  Used Avail Use% Mounted on
udev7.8G 0  7.8G   0% /dev
tmpfs   1.6G   21M  1.6G   2% /run
/dev/sda2   917G   12G  859G   2% /
tmpfs   7.9G 0  7.9G   0% /dev/shm
tmpfs   5.0M 0  5.0M   0% /run/lock
tmpfs   7.9G 0  7.9G   0% /sys/fs/cgroup
/dev/sda1   511M  888K  511M   1% /boot/efi
cgmfs   100K 0  100K   0% /run/cgmanager/fs
tmpfs   1.6G 0  1.6G   0% /run/user/1002


3. Which networking type did you use?
   nova-network

4. Environment information:
   Architecture : AARCH64
   OS: Ubuntu 16.04

Detailed log info is in the accessory.
The guest xml is also in the log info.

** Affects: nova
 Importance: Undecided
     Assignee: Kevin Zhao (kevin-zhao)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Kevin Zhao (kevin-zhao)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1598370

Title:
  Got AttributeError  when launching instance in Aarch64

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  Using nova to create an instance on AArch64 fails with AttributeError
  "'NoneType' object has no attribute 'parse_dom'" right after
  "_get_guest_xml".
  Steps to reproduce
  ==
  1.Using devstack to deploy openstack. Using default local.conf.

  2.Upload the aarch64 image with glance.
  $ source ~/devstack/openrc admin admin
  $ glance image-create --name image-arm64.img --disk-format qcow2 
--container-format bare --visibility public --file 
images/image-arm64-wily.qcow2 --progress
  $ glance image-create --name image-arm64.vmlinuz --disk-format aki 
--container-format aki --visibility public --file 
images/image-arm64-wily.vmlinuz --progress
  $ glance image-create --name image-arm64.initrd --disk-format ari 
--container-format ari --visibility public --file 
images/image-arm64-wily.initrd --progress
  $ IMAGE_UUID=$(glance image-list | grep image-arm64.img | awk '{ print $2 }')
  $ IMAGE_KERNEL_UUID=$(glance image-list | grep image-arm64.vmlinuz | awk '{ 
print $2 }')
  $ IMAGE_INITRD_UUID=$(glance image-list | grep image-arm64.initrd | awk '{ 
print $2 }')
  $ glance image-update --kernel-id ${IMAGE_KERNEL_UUID} --ramdisk-id 
${IMAGE_INITRD_UUID} ${IMAGE_UUID}
  3.Set the scsi model:
  $ glance image-update --property hw_disk_bus --property 
hw_scsi_model=virtio-scsi ${IMAGE_UUID}

  4.nova add keypair
  $ nova keypair-add default --pub-key ~/.ssh/id_rsa.pub

  5.Launch the instance:
  $ image=$(nova image-list | egrep "image-arm64.img"'[^-]' | awk '{ print $2 
}')
  $ nova boot --flavor m1.small --image ${image} --key-name default test-arm64

  6.See the n-cpu log, we can get the error information.

  Expected result
  ===
  Spawning guest successfully.

  Actual result
  =
  Got the error log information as below:
  2016-07-02 06:57:08.645 ERROR nova.compute.manager 
[req-c8805971-7d8a-4775-ae95-7ac62b284487 admin admin] [instance: 
c8ea40f1-2877-45d7-af4c-2cc6e8b966bd] Instance failed to spawn
  2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: 
c8ea40f1-2877-45d7-af4c-2cc6e8b966bd] Traceback (most recent call last):
  2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: 
c8ea40f1-2877-45d7-af4c-2cc6e8b966bd]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2063, in _build_resources
  2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: 
c8ea40f1-2877-45d7-af4c-2cc6e8b966bd] yield resources
  2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: 
c8ea40f1-2877-45d7-af4c-2cc6e8b966bd]   File 
"/opt/stack/nova/nova/compute/manager.py", line 1907, in _build_and_run_instance
  2016-07-02 06:57:08.645 TRACE nova.compute.manager [instance: 
c8ea40f

[Yahoo-eng-team] [Bug 1591966] Re: Running Unit Tests got 5 errors in Aarch64

2016-07-02 Thread Kevin Zhao
** Changed in: nova
   Status: In Progress => Fix Released

** Changed in: nova
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1591966

Title:
  Running Unit Tests got 5 errors in Aarch64

Status in OpenStack Compute (nova):
  Fix Committed

Bug description:
  Description
  ===
  Using nova to create an instance on AArch64, the disk.config is a 'cdrom'
  device on the 'scsi' bus. After instance creation, logging into the instance
  shows that the cdrom device is not visible.

  Steps to reproduce
  ==
  1.Using devstack to deploy openstack. Using default local.conf.

  2.Enter into the nova directory
  $ cd /opt/stack/nova
  $ tox -e py27

  Expected result
  ===
  After running the tests, return:

  ==
  Totals
  ==
  Ran: 13479 tests in 662. sec.
   - Passed: 13423
   - Skipped: 56
   - Expected Fail: 0
   - Unexpected Success: 0
   - Failed: 0
  Sum of execute time for each test: 4490.9027 sec.

  ==
  Worker Balance
  ==
   - Worker 0 (1685 tests) => 0:09:26.657728
   - Worker 1 (1687 tests) => 0:09:37.397435
   - Worker 2 (1684 tests) => 0:09:32.077593
   - Worker 3 (1684 tests) => 0:09:26.830573
   - Worker 4 (1684 tests) => 0:09:23.826421
   - Worker 5 (1685 tests) => 0:09:32.041672
   - Worker 6 (1685 tests) => 0:09:15.550618
   - Worker 7 (1685 tests) => 0:09:23.697743
  __ summary 
__

py27: commands succeeded
congratulations :)

  Actual result
  =
  Got 5 errors while running the tests.
  The test cases are:
  
nova.tests.unit.virt.libvirt.test_blockinfo.LibvirtBlockInfoTest.test_get_disk_mapping_cdrom_configdrive
  
nova.tests.unit.virt.libvirt.test_blockinfo.LibvirtBlockInfoTest.test_get_disk_mapping_simple_configdrive
  
nova.tests.unit.virt.libvirt.test_driver.LibvirtConnTestCase.test_xml_disk_bus_ide
  
nova.tests.unit.virt.libvirt.test_driver.LibvirtConnTestCase.test_xml_disk_bus_ide_and_virtio
  
nova.tests.unit.virt.libvirt.test_driver.LibvirtConnTestCase.test_get_guest_config_with_configdrive

  The detailed information is in the attached log file.
  Environment
  ===
  1. Exact version of OpenStack you are running. See the following
list for all releases: http://docs.openstack.org/releases/
 Nova development, commit code: 6e2e1dc912199e057e5c3a5e07d39f26cbbbdd5b

  2. Which hypervisor did you use?
  Libvirt+KVM
  $ kvm --version
  QEMU emulator version 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.1), Copyright 
(c) 2003-2008 Fabrice Bellard
  $ libvirtd --version
  libvirtd (libvirt) 1.3.1

  2. Which storage type did you use?
     In the host file system, all-in-one physical machine.
  stack@u202154:/opt/stack/nova$ df -hl
  Filesystem  Size  Used Avail Use% Mounted on
  udev        7.8G     0  7.8G   0% /dev
  tmpfs       1.6G   61M  1.6G   4% /run
  /dev/sda2   917G   41G  830G   5% /
  tmpfs       7.9G     0  7.9G   0% /dev/shm
  tmpfs       5.0M     0  5.0M   0% /run/lock
  tmpfs       7.9G     0  7.9G   0% /sys/fs/cgroup
  /dev/sda1   511M  888K  511M   1% /boot/efi
  cgmfs       100K     0  100K   0% /run/cgmanager/fs
  tmpfs       1.6G     0  1.6G   0% /run/user/1002
  tmpfs       1.6G     0  1.6G   0% /run/user/1000
  tmpfs       1.6G     0  1.6G   0% /run/user/0

  3. Which networking type did you use?
 nova-network

  4. Environment information:
 Architecture : AARCH64
 OS: Ubuntu 16.04

  Detailed log info is in the attachment.
  The guest xml is also included in the log.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1591966/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591966] [NEW] Running Unit Tests got 5 errors in Aarch64

2016-06-13 Thread Kevin Zhao
Public bug reported:

Description
===
Using nova to create an instance on AArch64, the disk.config is a 'cdrom'
device on the 'scsi' bus. After instance creation, logging into the instance
shows that the cdrom device is not visible.

Steps to reproduce
==
1.Using devstack to deploy openstack. Using default local.conf.

2.Enter into the nova directory
$ cd /opt/stack/nova
$ tox -e py27

Expected result
===
After running the tests, return:

==
Totals
==
Ran: 13479 tests in 662. sec.
 - Passed: 13423
 - Skipped: 56
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 0
Sum of execute time for each test: 4490.9027 sec.

==
Worker Balance
==
 - Worker 0 (1685 tests) => 0:09:26.657728
 - Worker 1 (1687 tests) => 0:09:37.397435
 - Worker 2 (1684 tests) => 0:09:32.077593
 - Worker 3 (1684 tests) => 0:09:26.830573
 - Worker 4 (1684 tests) => 0:09:23.826421
 - Worker 5 (1685 tests) => 0:09:32.041672
 - Worker 6 (1685 tests) => 0:09:15.550618
 - Worker 7 (1685 tests) => 0:09:23.697743
__ summary 
__

  py27: commands succeeded
  congratulations :)

Actual result
=
Got 5 errors while running the tests.
The test cases are:
nova.tests.unit.virt.libvirt.test_blockinfo.LibvirtBlockInfoTest.test_get_disk_mapping_cdrom_configdrive
nova.tests.unit.virt.libvirt.test_blockinfo.LibvirtBlockInfoTest.test_get_disk_mapping_simple_configdrive
nova.tests.unit.virt.libvirt.test_driver.LibvirtConnTestCase.test_xml_disk_bus_ide
nova.tests.unit.virt.libvirt.test_driver.LibvirtConnTestCase.test_xml_disk_bus_ide_and_virtio
nova.tests.unit.virt.libvirt.test_driver.LibvirtConnTestCase.test_get_guest_config_with_configdrive
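
Failures like these typically come from unit tests that hard-code x86 defaults
(ide buses, pc machine types) while the code under test derives those defaults
from the host architecture, which is aarch64 here. Below is a minimal sketch
of the pattern and of the usual remedy (pinning the detected architecture in
the test); the helper names are hypothetical, not nova's test code:

import platform
from unittest import mock

def default_machine_type():
    # Stand-in for arch-dependent production logic.
    return 'virt' if platform.machine() == 'aarch64' else 'pc'

def test_default_machine_type_assumes_x86():
    # Without the patch this assertion fails on an aarch64 host, which is
    # the same failure pattern as the tests listed above.
    with mock.patch('platform.machine', return_value='x86_64'):
        assert default_machine_type() == 'pc'

test_default_machine_type_assumes_x86()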

The detailed information is in the attached log file.
Environment
===
1. Exact version of OpenStack you are running. See the following
  list for all releases: http://docs.openstack.org/releases/
   Nova development, commit code: 6e2e1dc912199e057e5c3a5e07d39f26cbbbdd5b

2. Which hypervisor did you use?
Libvirt+KVM
$ kvm --version
QEMU emulator version 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.1), Copyright (c) 
2003-2008 Fabrice Bellard
$ libvirtd --version
libvirtd (libvirt) 1.3.1

2. Which storage type did you use?
   In the host file system, all-in-one physical machine.
stack@u202154:/opt/stack/nova$ df -hl
Filesystem  Size  Used Avail Use% Mounted on
udev        7.8G     0  7.8G   0% /dev
tmpfs       1.6G   61M  1.6G   4% /run
/dev/sda2   917G   41G  830G   5% /
tmpfs       7.9G     0  7.9G   0% /dev/shm
tmpfs       5.0M     0  5.0M   0% /run/lock
tmpfs       7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/sda1   511M  888K  511M   1% /boot/efi
cgmfs       100K     0  100K   0% /run/cgmanager/fs
tmpfs       1.6G     0  1.6G   0% /run/user/1002
tmpfs       1.6G     0  1.6G   0% /run/user/1000
tmpfs       1.6G     0  1.6G   0% /run/user/0

3. Which networking type did you use?
   nova-network

4. Environment information:
   Architecture : AARCH64
   OS: Ubuntu 16.04

Detailed log info is in the attachment.
The guest xml is also included in the log.

** Affects: nova
 Importance: Undecided
     Assignee: Kevin Zhao (kevin-zhao)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Kevin Zhao (kevin-zhao)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1591966

Title:
  Running Unit Tests got 5 errors in Aarch64

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  Using nova to create an instance on AArch64, the disk.config is a 'cdrom'
  device on the 'scsi' bus. After instance creation, logging into the instance
  shows that the cdrom device is not visible.

  Steps to reproduce
  ==
  1.Using devstack to deploy openstack. Using default local.conf.

  2.Enter into the nova directory
  $ cd /opt/stack/nova
  $ tox -e py27

  Expected result
  ===
  After running the tests, return:

  ==
  Totals
  ==
  Ran: 13479 tests in 662. sec.
   - Passed: 13423
   - Skipped: 56
   - Expected Fail: 0
   - Unexpected Success: 0
   - Failed: 0
  Sum of execute time for each test: 4490.9027 sec.

  ==
  Worker Balance
  ==
   - Worker 0 (1685 tests) => 0:09:26.657728
   - Worker 1 (1687 tests) => 0:09:37.397435
   - Worker 2 (1684 tests) => 0:09:32.077593
   - Worker 3 (1684 tests) => 0:09:26.830573
   - Worker 4 (1684 tests) => 0:09:23.826421
   - Worker 5 (1685 tests) => 0:09:32.041672
   - Worker 6 (1685 tests) => 0:09:15.550618
   - Worker 7 (1685 tests) => 0:09:23.697743
  __ summary 
__

py27: commands succeeded
congratulations :)

  Actual result
  =
  Got

[Yahoo-eng-team] [Bug 1591827] [NEW] Scsi device can't be recognized in Guest in aarch64

2016-06-12 Thread Kevin Zhao
Public bug reported:

Description
===
Using nova to create an instance on AArch64, the disk.config is a 'cdrom'
device on the 'scsi' bus. After instance creation, logging into the instance
shows that the cdrom device is not visible.

Steps to reproduce
==
1.Using devstack to deploy openstack. Using default local.conf.

2.Upload the aarch64 image with glance.
$ source ~/devstack/openrc admin admin
$ glance image-create --name image-arm64.img --disk-format qcow2 
--container-format bare --visibility public --file 
images/image-arm64-wily.qcow2 --progress
$ glance image-create --name image-arm64.vmlinuz --disk-format aki 
--container-format aki --visibility public --file 
images/image-arm64-wily.vmlinuz --progress
$ glance image-create --name image-arm64.initrd --disk-format ari 
--container-format ari --visibility public --file 
images/image-arm64-wily.initrd --progress
$ IMAGE_UUID=$(glance image-list | grep image-arm64.img | awk '{ print $2 }')
$ IMAGE_KERNEL_UUID=$(glance image-list | grep image-arm64.vmlinuz | awk '{ 
print $2 }')
$ IMAGE_INITRD_UUID=$(glance image-list | grep image-arm64.initrd | awk '{ 
print $2 }')
$ glance image-update --kernel-id ${IMAGE_KERNEL_UUID} --ramdisk-id 
${IMAGE_INITRD_UUID} ${IMAGE_UUID} 
3.Set the scsi model:
$ glance image-update --property hw_scsi_model=virtio-scsi ${IMAGE_UUID}

4.nova add keypair
$ nova keypair-add default --pub-key ~/.ssh/id_rsa.pub

5.Launch the instance:
$ image=$(nova image-list | egrep "image-arm64.img"'[^-]' | awk '{ print $2 }')
$ nova boot --flavor m1.small --image ${image} --key-name default test-arm64

6.After creation, use ssh to log into the instance. In the guest:
$ ls -l /dev
Then we can see that the cdrom does not exist.

Expected result
===
After spawning the instance, log into the guest:
$ ls -l /dev
We can see that the cdrom is there.

Actual result
=
The xml for the disk.config is generated by nova, but it does not work in the
guest.
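
For the guest to see a SCSI config drive, the generated domain XML needs both
the cdrom disk targeted at the scsi bus and a virtio-scsi controller for it to
attach to. The sketch below is illustrative only (the source path and element
layout are assumptions, not the XML nova produced in this report):

import xml.etree.ElementTree as ET

devices = ET.Element('devices')

disk = ET.SubElement(devices, 'disk', type='file', device='cdrom')
ET.SubElement(disk, 'source', file='/path/to/disk.config')  # illustrative path
ET.SubElement(disk, 'target', dev='sda', bus='scsi')
ET.SubElement(disk, 'readonly')

# Without a virtio-scsi controller the scsi target has nothing to attach to,
# which matches the "cdrom not visible in the guest" symptom.
ET.SubElement(devices, 'controller', type='scsi', model='virtio-scsi',
              index='0')

print(ET.tostring(devices, encoding='unicode'))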

Environment
===
1. Exact version of OpenStack you are running. See the following
  list for all releases: http://docs.openstack.org/releases/
   Nova development, commit code: 279f1a9bf65c4b904e01d26f0619a62ed99fc4d3

2. Which hypervisor did you use?
Libvirt+KVM
$ kvm --version
QEMU emulator version 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.1), Copyright (c) 
2003-2008 Fabrice Bellard
$ libvirtd --version
libvirtd (libvirt) 1.3.1

2. Which storage type did you use?
   In the host file system, all-in-one physical machine.
stack@u202154:/opt/stack/nova$ df -hl
Filesystem  Size  Used Avail Use% Mounted on
udev        7.8G     0  7.8G   0% /dev
tmpfs       1.6G   61M  1.6G   4% /run
/dev/sda2   917G   41G  830G   5% /
tmpfs       7.9G     0  7.9G   0% /dev/shm
tmpfs       5.0M     0  5.0M   0% /run/lock
tmpfs       7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/sda1   511M  888K  511M   1% /boot/efi
cgmfs       100K     0  100K   0% /run/cgmanager/fs
tmpfs       1.6G     0  1.6G   0% /run/user/1002
tmpfs       1.6G     0  1.6G   0% /run/user/1000
tmpfs       1.6G     0  1.6G   0% /run/user/0

3. Which networking type did you use?
   nova-network

4. Environment information:
   Architecture : AARCH64
   OS: Ubuntu 16.04

Detailed log info is in the attachment.
The guest xml is also included in the log.

** Affects: nova
 Importance: Undecided
 Assignee: Kevin Zhao (kevin-zhao)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1591827

Title:
  Scsi device can't be recognized in Guest in aarch64

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  Using nova to create an instance on AArch64, the disk.config is a 'cdrom'
  device on the 'scsi' bus. After instance creation, logging into the instance
  shows that the cdrom device is not visible.

  Steps to reproduce
  ==
  1.Using devstack to deploy openstack. Using default local.conf.

  2.Upload the aarch64 image with glance.
  $ source ~/devstack/openrc admin admin
  $ glance image-create --name image-arm64.img --disk-format qcow2 
--container-format bare --visibility public --file 
images/image-arm64-wily.qcow2 --progress
  $ glance image-create --name image-arm64.vmlinuz --disk-format aki 
--container-format aki --visibility public --file 
images/image-arm64-wily.vmlinuz --progress
  $ glance image-create --name image-arm64.initrd --disk-format ari 
--container-format ari --visibility public --file 
images/image-arm64-wily.initrd --progress
  $ IMAGE_UUID=$(glance image-list | grep image-arm64.img | awk '{ print $2 }')
  $ IMAGE_KERNEL_UUID=$(glance image-list | grep image-arm64.vmlinuz | awk '{ 
print $2 }')
  $ IMAGE_INITRD_UUID=$(glance image-list | grep image-arm64.initrd | awk '{ 
print $2 }')
  $ glance image-update --kernel-id ${IMAGE_KERNEL_UUID} --ramdisk-id 
${IMAGE_INITRD_UUID} ${IMAGE_UUID} 
  3.Set the scsi model:
  $ glance image-update --property hw_scsi_model=virtio-scsi ${IMAGE_UUID}

  4.nova add keypair
  $ nova keypair-add default --pub-

[Yahoo-eng-team] [Bug 1585893] [NEW] Launch instance got libvirtError for qemu unsupported IDE bus in AARCH64

2016-05-26 Thread Kevin Zhao
/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 186, in 
doit
 TRACE nova.compute.manager [instance: 188aa5bc-173c-46ec-b872-6bacb512911e]
 result = proxy_call(self._autowrap, f, *args, **kwargs)
 TRACE nova.compute.manager [instance: 188aa5bc-173c-46ec-b872-6bacb512911e]   
File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 144, in 
proxy_call
 TRACE nova.compute.manager [instance: 188aa5bc-173c-46ec-b872-6bacb512911e]
 rv = execute(f, *args, **kwargs)
 TRACE nova.compute.manager [instance: 188aa5bc-173c-46ec-b872-6bacb512911e]   
File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 125, in 
execute
 TRACE nova.compute.manager [instance: 188aa5bc-173c-46ec-b872-6bacb512911e]
 six.reraise(c, e, tb)
 TRACE nova.compute.manager [instance: 188aa5bc-173c-46ec-b872-6bacb512911e]   
File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 83, in 
tworker
 TRACE nova.compute.manager [instance: 188aa5bc-173c-46ec-b872-6bacb512911e]
 rv = meth(*args, **kwargs)
 TRACE nova.compute.manager [instance: 188aa5bc-173c-46ec-b872-6bacb512911e]   
File "/usr/local/lib/python2.7/dist-packages/libvirt.py", line 1065, in 
createWithFlags
 TRACE nova.compute.manager [instance: 188aa5bc-173c-46ec-b872-6bacb512911e]
 if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', 
dom=self)
 TRACE nova.compute.manager [instance: 188aa5bc-173c-46ec-b872-6bacb512911e] 
libvirtError: unsupported configuration: IDE controllers are unsupported for 
this QEMU binary or machine type
 TRACE nova.compute.manager [instance: 188aa5bc-173c-46ec-b872-6bacb512911e]
 INFO nova.compute.manager [req-75325207-6c1b-481d-b188-a66c0a64eb89 admin 
admin] [instance: 188aa5bc-173c-46ec-b872-6bacb512911e] Terminating instance

Environment
===
1. Exact version of OpenStack you are running. See the following
  list for all releases: http://docs.openstack.org/releases/
   Nova development, commit code: 9a05d38f48ef0f630c5e49e332075b273cee38b9
   

2. Which hypervisor did you use?
Libvirt+KVM
$ kvm --version
QEMU emulator version 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.1), Copyright (c) 
2003-2008 Fabrice Bellard
$ libvirtd --version
libvirtd (libvirt) 1.3.1

2. Which storage type did you use?
   In the host file system, all-in-one physical machine.
stack@u202154:/opt/stack/nova$ df -hl
Filesystem  Size  Used Avail Use% Mounted on
udev        7.8G     0  7.8G   0% /dev
tmpfs       1.6G   61M  1.6G   4% /run
/dev/sda2   917G   41G  830G   5% /
tmpfs       7.9G     0  7.9G   0% /dev/shm
tmpfs       5.0M     0  5.0M   0% /run/lock
tmpfs       7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/sda1   511M  888K  511M   1% /boot/efi
cgmfs       100K     0  100K   0% /run/cgmanager/fs
tmpfs       1.6G     0  1.6G   0% /run/user/1002
tmpfs       1.6G     0  1.6G   0% /run/user/1000
tmpfs       1.6G     0  1.6G   0% /run/user/0

3. Which networking type did you use?
   nova-network

4. Environment information:
   Architecture : AARCH64
   OS: Ubuntu 16.04

Detailed log info is in the attachment.
The guest xml is also included in the log.

** Affects: nova
 Importance: Undecided
 Assignee: Kevin Zhao (kevin-zhao)
 Status: New


** Tags: aarch64

** Attachment added: "Detailed log info"
   https://bugs.launchpad.net/bugs/1585893/+attachment/4670546/+files/log

** Changed in: nova
 Assignee: (unassigned) => Kevin Zhao (kevin-zhao)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1585893

Title:
  Launch instance got libvirtError for qemu unsupported  IDE bus in
  AARCH64

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  After setting up the nova development environment with devstack on an
  aarch64 machine, upload the image with glance and then use nova to launch an
  instance. Launching fails with the error "libvirtError: unsupported
  configuration: IDE controllers are unsupported for this QEMU binary or
  machine type".
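
  This error typically means some disk (often the config-drive cdrom) was
  placed on the ide bus, which the AArch64 'virt' machine type does not
  provide. Below is a minimal illustrative sketch of the arch-dependent choice
  that avoids it; pick_cdrom_bus() is a hypothetical helper, not nova's actual
  blockinfo code:

  def pick_cdrom_bus(arch):
      if arch == 'aarch64':
          return 'scsi'   # needs a virtio-scsi controller in the guest XML
      return 'ide'        # acceptable default for x86 machine types

  print(pick_cdrom_bus('aarch64'))  # scsi
  print(pick_cdrom_bus('x86_64'))   # ide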
   
  Steps to reproduce
  ==
  1.Using devstack to deploy openstack. Using default local.conf.

  2.Upload the aarch64 image with glance.
  $ source ~/devstack/openrc admin admin
  $ glance image-create --name image-arm64.img --disk-format qcow2 
--container-format bare --visibility public --file 
images/image-arm64-wily.qcow2 --progress
  $ glance image-create --name image-arm64.vmlinuz --disk-format aki 
--container-format aki --visibility public --file 
images/image-arm64-wily.vmlinuz --progress
  $ glance image-create --name image-arm64.initrd --disk-format ari 
--container-format ari --visibility public --file 
images/image-arm64-wily.initrd --progress
  $ IMAGE_UUID=$(glance image-list | grep image-arm64.img | awk '{ print $2 }')
  $ IMAGE_KERNEL