[Yahoo-eng-team] [Bug 1216706] Re: Migration 211 doesn't downgrade with MySQL

2013-10-28 Thread Jeffrey Zhang
This bug has been fixed by
https://review.openstack.org/#/c/43634/. Because of an unexpected
commit message, that review was not automatically linked to this bug.

In any case, this bug should be closed manually.

** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1216706

Title:
  Migration 211 doesn't downgrade with MySQL

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Due to fkey constraint:

  Cannot drop index 'uniq_aggregate_metadata0aggregate_id0key0deleted':
  needed in a foreign key constraint

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1216706/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 951156] Re: Limit/marker behavior for list servers/flavors/images does not paginate correctly

2015-03-12 Thread Jeffrey Zhang
I think wangpan is right.

>Daryls-MacBook-Pro:tempest dwalleck$ curl -ik -H "X-Auth-Token: 
>6885b39a-b301-444e-93cc-77d6d2ffb48f" 
>https://127.0.0.1/v2/658803/flavors?limit=1&marker=6
>[1] 39431
>Daryls-MacBook-Pro:tempest dwalleck$ HTTP/1.1 200 OK

Check the quoted output above: there is a `[1] 39431` line that the
other runs don't have. That means the shell ran the text before the
`&` in the background, so the marker=6 parameter was never sent to
the server at all.

This bug should be marked closed or invalid.
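For reference, quoting the URL keeps the shell from interpreting the
`&` (the token and endpoint below are copied from the report):

  curl -ik -H "X-Auth-Token: 6885b39a-b301-444e-93cc-77d6d2ffb48f" \
    "https://127.0.0.1/v2/658803/flavors?limit=1&marker=6"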

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/951156

Title:
  Limit/marker behavior for list servers/flavors/images does not
  paginate correctly

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When the marker parameter is passed, the expected behavior is that
  only items that logically appear "after" the marker are returned. The
  system currently returns the item passed as the marker as well, which
  causes recursive loops when trying to paginate through the results,
  as using the marker does not advance the list.

  curl -ik -H "X-Auth-Token: 6885b39a-b301-444e-93cc-77d6d2ffb48f" 
https://127.0.0.1/v2/658803/flavors?limit=1
  HTTP/1.1 200 OK
  Date: Fri, 09 Mar 2012 21:24:55 GMT
  Content-Length: 369
  Content-Type: application/json
  X-Compute-Request-Id: req-aeb76f85-23c6-43b2-ab43-3ef0a80099a5
  Server: Jetty(8.0.y.z-SNAPSHOT)

  {"flavors": [{"id": "6", "links": [{"href":
  "https://127.0.0.1/v2/658803/flavors/6";, "rel": "self"}, {"href":
  "https://127.0.0.1/658803/flavors/6";, "rel": "bookmark"}], "name":
  "8GB instance"}], "flavors_links": [{"href":
  "https://127.0.0.1/v2/658803/flavors?limit=1&marker=6";, "rel":
  "next"}]}

  Daryls-MacBook-Pro:tempest dwalleck$ curl -ik -H "X-Auth-Token: 
6885b39a-b301-444e-93cc-77d6d2ffb48f" 
https://127.0.0.1/v2/658803/flavors?limit=1&marker=6
  [1] 39431
  Daryls-MacBook-Pro:tempest dwalleck$ HTTP/1.1 200 OK
  Date: Fri, 09 Mar 2012 21:25:30 GMT
  Content-Length: 369
  Content-Type: application/json
  X-Compute-Request-Id: req-057a57f9-0b6b-42b0-935a-75d5e4e1bda7
  Server: Jetty(8.0.y.z-SNAPSHOT)

  {"flavors": [{"id": "6", "links": [{"href":
  "https://127.0.0.1/v2/658803/flavors/6";, "rel": "self"}, {"href":
  "https://127.0.0.1/658803/flavors/6";, "rel": "bookmark"}], "name":
  "8GB instance"}], "flavors_links": [{"href":
  "https://127.0.0.1/v2/658803/flavors?limit=1&marker=6";, "rel":
  "next"}]}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/951156/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1466387] [NEW] wrong error message when fixed_ip is not valid in nova boot

2015-06-18 Thread Jeffrey Zhang
Public bug reported:

When an invalid v4-fixed-ip is supplied, the error message does not
report the offending address.

$  nova boot test --image cirros --flavor micro-1 --nic 
net-id=38e30067-20d7-44ae-97a2-0653ce53ac0a,v4-fixed-ip=1999.0.0.0
ERROR (BadRequest): Invalid fixed IP address (None) (HTTP 400) (Request-ID: 
req-3415c608-5b2f-4049-b682-e0a37dbc1072)

The expected message should be

ERROR (BadRequest): Invalid fixed IP address (1999.0.0.0) (HTTP 400)
(Request-ID: req-3415c608-5b2f-4049-b682-e0a37dbc1072)
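A minimal sketch of the fix pattern (not nova's actual code; the
function name and the use of netaddr are assumptions), reporting the
caller-supplied string instead of the failed parse result:

  import netaddr

  def validate_fixed_ip(addr):
      # hypothetical validator: return a normalized IPv4 address, or
      # raise with the raw input so "(None)" never leaks to the user
      try:
          return str(netaddr.IPAddress(addr, version=4))
      except (netaddr.AddrFormatError, ValueError):
          raise ValueError('Invalid fixed IP address (%s)' % addr)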

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1466387

Title:
  wrong error message when fixed_ip is not valid in nova boot

Status in OpenStack Compute (Nova):
  New

Bug description:
  When an invalid v4-fixed-ip is supplied, the error message does not
  report the offending address.

  $  nova boot test --image cirros --flavor micro-1 --nic 
net-id=38e30067-20d7-44ae-97a2-0653ce53ac0a,v4-fixed-ip=1999.0.0.0
  ERROR (BadRequest): Invalid fixed IP address (None) (HTTP 400) (Request-ID: 
req-3415c608-5b2f-4049-b682-e0a37dbc1072)

  The expected message should be

  ERROR (BadRequest): Invalid fixed IP address (1999.0.0.0) (HTTP 400)
  (Request-ID: req-3415c608-5b2f-4049-b682-e0a37dbc1072)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1466387/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1207308] Re: Instance host name is not same as the specified name

2013-09-24 Thread Jeffrey Zhang
You should check your cloud-init and nova metadata API configuration.
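For reference, from inside the guest you can check what hostname the
metadata service hands to cloud-init (a hedged example; the endpoint
path is the standard EC2-style metadata API, the output is
illustrative):

  $ curl http://169.254.169.254/latest/meta-data/hostname
  myinstance.novalocal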

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1207308

Title:
  Instance host name is not same as the specified name

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  While launching an instance, the instance name provided in
  GUI/Horizon doesn't match the hostname inside the guest.

  Steps to reproduce:

  1. Launch an instance and specify an instance name.
  2. After the instance is launched, log in to it using ssh and check
  the hostname.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1207308/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1910322] [NEW] when using virtio-scsi + scsi bus, aarch64 vm failed to attach a new volume

2021-01-05 Thread Jeffrey Zhang
Public bug reported:

This issue happens only in an AArch64 environment; an x86 environment
is fine.

This issue may be related to bug #1686116, but I am using an OpenStack
release that already includes that patch.

When the VM uses an image with the following properties

```
openstack image set --property hw_scsi_model=virtio-scsi \
  --property hw_disk_bus=scsi \
  <image>
```

the volume cannot be attached dynamically, and nova raises the
following error message:

```
attach device xml (the disk element markup was stripped by the mail
archive; only the volume serial survives):

  d95a29d6-3bbc-491e-9e4d-c0450b2ba7a8

[req-0c7d2fb1-cf65-45a0-9969-5ab3e38465e0 6aa4f1437eb74404b29e6873d42f9698 
d2ab8602cef7428cb956984c9564dadb - default default] 
[instance: 3634a70e-3c49-48a1-9325-2e40c11781e9] Failed to attach volume at 
mountpoint: /dev/sdc: libvirtError: Requested operation is not valid: Domain 
already contains a disk with that address
Traceback (most recent call last):
  File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", 
line 1829, in attach_volume
guest.attach_device(conf, persistent=True, live=live)
  File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", 
line 338, in attach_device
self._domain.attachDeviceFlags(device_xml, flags=flags)
  File "/var/lib/kolla/venv/lib/python2.7/site-packages/eventlet/tpool.py", 
line 190, in doit
result = proxy_call(self._autowrap, f, *args, **kwargs)
  File "/var/lib/kolla/venv/lib/python2.7/site-packages/eventlet/tpool.py", 
line 148, in proxy_call
rv = execute(f, *args, **kwargs)
  File "/var/lib/kolla/venv/lib/python2.7/site-packages/eventlet/tpool.py", 
line 129, in execute
six.reraise(c, e, tb)
  File "/var/lib/kolla/venv/lib/python2.7/site-packages/eventlet/tpool.py", 
line 83, in tworker
rv = meth(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 605, in 
attachDeviceFlags
if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', 
dom=self)
libvirtError: Requested operation is not valid: Domain already contains a disk 
with that address
```

By the way, I have already configured num_pcie_ports [0] to 15.

[0]
https://opendev.org/openstack/nova/src/branch/master/nova/conf/libvirt.py#L807

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1910322

Title:
  when using virtio-scsi + scsi bus, aarch64 vm failed to attach a new
  volume

Status in OpenStack Compute (nova):
  New

Bug description:
  This issue happens only in an AArch64 environment; an x86
  environment is fine.

  This issue may be related to bug #1686116, but I am using an
  OpenStack release that already includes that patch.

  When the VM uses an image with the following properties

  ```
  openstack image set --property hw_scsi_model=virtio-scsi \
    --property hw_disk_bus=scsi \
    <image>
  ```

  the volume cannot be attached dynamically, and nova raises the
  following error message:

  ```
  attach device xml (the disk element markup was stripped by the mail
  archive; only the volume serial survives):

  d95a29d6-3bbc-491e-9e4d-c0450b2ba7a8

  [req-0c7d2fb1-cf65-45a0-9969-5ab3e38465e0 6aa4f1437eb74404b29e6873d42f9698 
d2ab8602cef7428cb956984c9564dadb - default default] 
  [instance: 3634a70e-3c49-48a1-9325-2e40c11781e9] Failed to attach volume at 
mountpoint: /dev/sdc: libvirtError: Requested operation is not valid: Domain 
already contains a disk with that address
  Traceback (most recent call last):
File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", 
line 1829, in attach_volume
  guest.attach_device(conf, persistent=True, live=live)
File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", 
line 338, in attach_device
  self._domain.attachDeviceFlags(device_xml, flags=flags)
File "/var/lib/kolla/venv/lib/python2.7/site-packages/eventlet/tpool.py", 
line 190, in doit
  result = proxy_call(self._autowrap, f, *args, **kwargs)
File "/var/lib/kolla/venv/lib/python2.7/site-packages/eventlet/tpool.py", 
line 148, in proxy_call
  rv = execute(f, *args, **kwargs)
File "/var/lib/kolla/venv/lib/python2.7/site-packages/eventlet/tpool.py", 
line 129, in execute
  six.reraise(c, e, tb)
File "/var/lib/kolla/venv/lib/python2.7/site-packages/eventlet/tpool.py", 
line 83, in tworker
  rv = meth(*args, **kwargs)
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 605, in 
attachDeviceFlags
  if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', 
dom=self)
  libvirtError: Requested operation is not valid: Domain already contains a 
disk with that address
  ```

  By the way, I have already configured num_pcie_ports [0] to 15.

  [0]
  https://opendev.org/openstack/nova/src/branch/master/nova/conf/libvirt.py#L807

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1910322/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to   

[Yahoo-eng-team] [Bug 1914011] [NEW] need a global ovsdb_timeout variable to easy configuration

2021-02-01 Thread Jeffrey Zhang
Public bug reported:

ovsdb_timeout needs to be increased in some cases; for more info, see
[0].


[0] https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1849732
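For reference, a hedged example of the option this refers to (in
neutron's OVS agent it lives in the [OVS] group; the file path and
value here are illustrative):

  # /etc/neutron/plugins/ml2/openvswitch_agent.ini
  [OVS]
  ovsdb_timeout = 30

A single global variable would let deployment tooling set this in one
place instead of per service.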

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1914011

Title:
  need a global ovsdb_timeout variable to easy configuration

Status in OpenStack Compute (nova):
  New

Bug description:
  ovsdb_timeout needs to be increased in some cases; for more info,
  see [0].

  
  [0] https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1849732

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1914011/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1914011] Re: need a global ovsdb_timeout variable to easy configuration

2021-02-01 Thread Jeffrey Zhang
** Project changed: nova => kolla-ansible

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1914011

Title:
  need a global ovsdb_timeout variable to easy configuration

Status in kolla-ansible:
  New

Bug description:
  ovsdb_timeout needs to be increased in some cases; for more info,
  see [0].

  
  [0] https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1849732

To manage notifications about this bug go to:
https://bugs.launchpad.net/kolla-ansible/+bug/1914011/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2056513] [NEW] live migration will failed if the vm enabled nic multi queue and vcpu is live resize to a max value

2024-03-07 Thread Jeffrey Zhang
Public bug reported:

Reproduce:

* Launch a 2-vCPU VM with NIC multi-queue enabled, and configure a
maximum vCPU count of 4 (the vcpu element in the domain XML was
stripped by the archive; a hedged reconstruction follows these steps)

* Increase the vCPU count from 2 to 4:

virsh setvcpus XXX 4

* Live-migrate the instance:

nova live-migration XXX
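A hedged reconstruction of the vcpu element from the first step
(libvirt domain XML; the current/maximum split is the point here, and
the attribute spelling is standard libvirt):

  <vcpu current='2'>4</vcpu>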

You will get the following log in nova-compute.log:

2024-03-08 11:32:05.795 8 INFO nova.compute.manager 
[req-1c0a7415-d81b-480a-a1f9-06b9afdb2b55 - - - - -] [None] [instance: 
5a317564-9e87-4313-a8a3-c661434c5bc1] During sync_power_state the instance has 
a pending task (migrating). Skip.
2024-03-08 11:32:06.673 8 INFO nova.compute.manager 
[req-1c0a7415-d81b-480a-a1f9-06b9afdb2b55 - - - - -] [None] [instance: 
5a317564-9e87-4313-a8a3-c661434c5bc1] VM Resumed (Lifecycle Event)
2024-03-08 11:32:06.843 8 ERROR nova.virt.libvirt.driver [-] [instance: 
5a317564-9e87-4313-a8a3-c661434c5bc1] Live Migration failure: internal error: 
qemu unexpectedly closed the monitor: 2024-03-08T03:32:03.383294Z qemu-kvm: 
warning: CPU(s) not present in any NUMA nodes: CPU 2 [socket-id: 2, core-id: 0, 
thread-id: 0], CPU 3 [socket-id: 3, core-id: 0, thread-id: 0]
2024-03-08T03:32:03.383370Z qemu-kvm: warning: All CPU(s) up to maxcpus should 
be described in NUMA config, ability to start up with partial NUMA mappings is 
obsoleted and will be removed in future
2024-03-08T03:32:05.584824Z qemu-kvm: get_pci_config_device: Bad config data: 
i=0x9a read: 5 device: 9 cmask: ff wmask: 0 w1cmask:0
2024-03-08T03:32:05.584971Z qemu-kvm: Failed to load PCIDevice:config
2024-03-08T03:32:05.585014Z qemu-kvm: Failed to load virtio-net:virtio
2024-03-08T03:32:05.585072Z qemu-kvm: error while loading state for instance 
0x0 of device ':00:03.0/virtio-net'
2024-03-08T03:32:05.587982Z qemu-kvm: load of migration failed: Invalid 
argument: libvirtError: internal error: qemu unexpectedly closed the monitor: 
2024-03-08T03:32:03.383294Z qemu-kvm: warning: CPU(s) not present in any NUMA 
nodes: CPU 2 [socket-id: 2, core-id: 0, thread-id: 0], CPU 3 [socket-id: 3, 
core-id: 0, thread-id: 0]
2024-03-08 11:32:06.863 8 INFO nova.compute.manager 
[req-1c0a7415-d81b-480a-a1f9-06b9afdb2b55 - - - - -] [None] [instance: 
5a317564-9e87-4313-a8a3-c661434c5bc1] During sync_power_state the instance has 
a pending task (migrating). Skip.
2024-03-08 11:32:07.019 8 ERROR nova.virt.libvirt.driver [-] [instance: 
5a317564-9e87-4313-a8a3-c661434c5bc1] Migration operation has aborted


# root cause

On the destination node, nova regenerates the NIC multi-queue count
from the current vCPU count (4), which no longer matches the original
domain XML (2), so qemu fails to start.

# fix

Use the original queue count rather than generating a new one on the
destination node.
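A hedged sketch of the mismatched element (libvirt domain XML for a
virtio-net interface; the values are illustrative):

  <!-- source domain: queues generated when the VM had 2 vCPUs -->
  <driver name='vhost' queues='2'/>
  <!-- regenerated on the destination from the resized vCPU count -->
  <driver name='vhost' queues='4'/>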

** Affects: nova
 Importance: Undecided
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2056513

Title:
  live migration will failed if the vm enabled nic multi queue and vcpu
  is live resize to a max value

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Reproduce:

  * Launch a 2-vCPU VM with NIC multi-queue enabled, and configure a
  maximum vCPU count of 4 (the vcpu element in the domain XML was
  stripped by the archive)

  * Increase the vCPU count from 2 to 4:

  virsh setvcpus XXX 4

  * Live-migrate the instance:

  nova live-migration XXX

  You will get the following log in nova-compute.log:

  2024-03-08 11:32:05.795 8 INFO nova.compute.manager 
[req-1c0a7415-d81b-480a-a1f9-06b9afdb2b55 - - - - -] [None] [instance: 
5a317564-9e87-4313-a8a3-c661434c5bc1] During sync_power_state the instance has 
a pending task (migrating). Skip.
  2024-03-08 11:32:06.673 8 INFO nova.compute.manager 
[req-1c0a7415-d81b-480a-a1f9-06b9afdb2b55 - - - - -] [None] [instance: 
5a317564-9e87-4313-a8a3-c661434c5bc1] VM Resumed (Lifecycle Event)
  2024-03-08 11:32:06.843 8 ERROR nova.virt.libvirt.driver [-] [instance: 
5a317564-9e87-4313-a8a3-c661434c5bc1] Live Migration failure: internal error: 
qemu unexpectedly closed the monitor: 2024-03-08T03:32:03.383294Z qemu-kvm: 
warning: CPU(s) not present in any NUMA nodes: CPU 2 [socket-id: 2, core-id: 0, 
thread-id: 0], CPU 3 [socket-id: 3, core-id: 0, thread-id: 0]
  2024-03-08T03:32:03.383370Z qemu-kvm: warning: All CPU(s) up to maxcpus 
should be described in NUMA config, ability to start up with partial NUMA 
mappings is obsoleted and will be removed in future
  2024-03-08T03:32:05.584824Z qemu-kvm: get_pci_config_device: Bad config data: 
i=0x9a read: 5 device: 9 cmask: ff wmask: 0 w1cmask:0
  2024-03-08T03:32:05.584971Z qemu-kvm: Failed to load PCIDevice:config
  2024-03-08T03:32:05.585014Z qemu-kvm: Failed to load virtio-net:virtio
  2024-03-08T03:32:05.585072Z qemu-kvm: error while loading state for instance 
0x0 of device ':00:03.0/virtio-net'
  2024-03-08T03:32:05.587982Z qemu-kvm: load of migration failed: Invalid 
argument: libvirtError: internal error: qemu unexpectedly closed the monitor: 
2024-03-08T03:32:03.383294Z qemu-kvm: warning: CPU(s) not present in any NUMA 
nodes: CPU 2 [socket-id: 2, core-id: 0, thread-id: 0], CPU 3 [soc

[Yahoo-eng-team] [Bug 1517839] Re: Make CONF.set_override with parameter enforce_type=True by default

2018-04-18 Thread Jeffrey Zhang
This is an automated cleanup. This bug report has been closed because
it is older than 18 months and there is no open code change to fix it.
After this time it is unlikely that the circumstances which led to the
observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (OCATA, PIKE, QUEENS,
ROCKY).
  Valid example: CONFIRMED FOR: OCATA


** Changed in: kolla
   Importance: Low => Undecided

** Changed in: kolla
   Status: Confirmed => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1517839

Title:
  Make CONF.set_override with parameter enforce_type=True by default

Status in Cinder:
  In Progress
Status in cloudkitty:
  Fix Released
Status in Designate:
  Fix Released
Status in OpenStack Backup/Restore and DR (Freezer):
  Fix Committed
Status in Glance:
  Invalid
Status in OpenStack Heat:
  Fix Released
Status in Ironic:
  Fix Released
Status in Karbor:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in kolla:
  Expired
Status in Magnum:
  Fix Released
Status in Manila:
  Fix Released
Status in Murano:
  Fix Released
Status in neutron:
  Won't Fix
Status in OpenStack Compute (nova):
  Fix Released
Status in octavia:
  New
Status in oslo.config:
  Fix Released
Status in oslo.messaging:
  Fix Released
Status in Quark: Money Reinvented:
  New
Status in Rally:
  Fix Released
Status in senlin:
  Fix Released
Status in tacker:
  Fix Released
Status in tempest:
  Fix Released
Status in watcher:
  Fix Released

Bug description:
  1. Problems:
     oslo_config provides the method CONF.set_override [1], which
developers usually use to change a config option's value in tests.
That's convenient.
     By default the parameter enforce_type=False, so the override's
type and value are not checked at all. With enforce_type=True, both
are checked. In production code (at runtime), oslo_config always
checks a config option's value.
     In short, we test and run code in different ways, so there is a
gap: a config option with a wrong type or an invalid value can pass
tests when enforce_type=False in consuming projects. That means some
invalid or wrong tests are in our code base.

     [1]
  https://github.com/openstack/oslo.config/blob/master/oslo_config/cfg.py#L2173
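  A minimal sketch of the gap (using the oslo.config API of that era;
  the enforce_type parameter has since been removed upstream):

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opt(cfg.IntOpt('workers', default=1))

    # Old default: a wrong-typed override silently passes in tests...
    CONF.set_override('workers', 'not-a-number', enforce_type=False)

    # ...while enforce_type=True raises ValueError, matching what
    # oslo.config enforces on real config files at runtime.
    CONF.set_override('workers', 'not-a-number', enforce_type=True)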

  2. Proposal
     1) Fix violations when enforce_type=True in each project.

    2) Make method CONF.set_override with  enforce_type=True by default
  in oslo_config

   You can find more details and comments  in
  https://etherpad.openstack.org/p/enforce_type_true_by_default

  3. How to find violations in your projects.

     1. Run tox -e py27

     2. then modify oslo.config with enforce_type=True
    cd .tox/py27/lib64/python2.7/site-packages/oslo_config
    edit cfg.py with enforce_type=True

  -def set_override(self, name, override, group=None, enforce_type=False):
  +def set_override(self, name, override, group=None, enforce_type=True):

    3. Run tox -e py27 again, you will find violations.

  
  The current state is that oslo.config makes enforce_type True by
default and deprecates the parameter; it will be removed in the future.
The current work is to remove usage of enforce_type in consuming
projects. We can list its usage with
http://codesearch.openstack.org/?q=enforce_type&i=nope&files=&repos=

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1517839/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1802467] [NEW] when using ceph, the vm image in nova will be cloned from image and result in lower availablility

2018-11-09 Thread Jeffrey Zhang
Public bug reported:

When using Ceph for nova and glance, the VM disk in nova is cloned
from the glance image, so the parent of the nova image points into the
glance pool. This lowers availability: when the glance pool is broken,
the VM disk is affected too.

For higher availability, we can flatten the image from its parent.
Then the nova images no longer depend on the glance pool.
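For reference, flattening copies all parent data into the clone so it
no longer references the parent (a hedged example using the standard
rbd CLI; the pool and image names are illustrative):

  rbd flatten vms/<instance-uuid>_disk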

** Affects: nova
 Importance: Undecided
 Assignee: Jeffrey Zhang (jeffrey4l)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1802467

Title:
  when using ceph,  the vm image in nova will be cloned from image and
  result in lower availablility

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  When using Ceph for nova and glance, the VM disk in nova is cloned
  from the glance image, so the parent of the nova image points into
  the glance pool. This lowers availability: when the glance pool is
  broken, the VM disk is affected too.

  For higher availability, we can flatten the image from its parent.
  Then the nova images no longer depend on the glance pool.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1802467/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1644534] [NEW] Get dnsmasq: bad command line options error message in neutron-dhcp-agent

2016-11-24 Thread Jeffrey Zhang
Public bug reported:

Got the following error message in Kolla's gate:

2016-11-23 12:06:55.376 7 DEBUG neutron.agent.linux.utils [-] Running command: 
['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 
'exec', 'qdhcp-0a9e36c5-2d55-4e6c-b4ba-67fd13e7681e', 'ip', 'addr', 'show', 
'tapbe705a89-82'] create_process 
/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/utils.py:92
2016-11-23 12:06:55.595 7 DEBUG neutron.agent.ovsdb.native.vlog [-] [POLLIN] on 
fd 8 __log_wakeup 
/var/lib/kolla/venv/lib/python2.7/site-packages/ovs/poller.py:202
2016-11-23 12:06:55.676 7 DEBUG neutron.agent.linux.utils [-] Exit code: 0 
execute 
/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/utils.py:149
2016-11-23 12:06:55.677 7 DEBUG neutron.agent.linux.utils [-] Unable to access 
/var/lib/neutron/dhcp/0a9e36c5-2d55-4e6c-b4ba-67fd13e7681e/pid 
get_value_from_file 
/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/utils.py:216
2016-11-23 12:06:55.678 7 DEBUG neutron.agent.linux.utils [-] Running command: 
['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 
'exec', 'qdhcp-0a9e36c5-2d55-4e6c-b4ba-67fd13e7681e', 'dnsmasq', '--no-hosts', 
'--no-resolv', '--strict-order', '--except-interface=lo', 
'--pid-file=/var/lib/neutron/dhcp/0a9e36c5-2d55-4e6c-b4ba-67fd13e7681e/pid', 
'--dhcp-hostsfile=/var/lib/neutron/dhcp/0a9e36c5-2d55-4e6c-b4ba-67fd13e7681e/host',
 
'--addn-hosts=/var/lib/neutron/dhcp/0a9e36c5-2d55-4e6c-b4ba-67fd13e7681e/addn_hosts',
 
'--dhcp-optsfile=/var/lib/neutron/dhcp/0a9e36c5-2d55-4e6c-b4ba-67fd13e7681e/opts',
 
'--dhcp-leasefile=/var/lib/neutron/dhcp/0a9e36c5-2d55-4e6c-b4ba-67fd13e7681e/leases',
 '--dhcp-match=set:ipxe,175', '--enable-ra', '--ra-param=tapbe705a89-82,0,0', 
'--bind-interfaces', '--interface=tapbe705a89-82', 
'--dhcp-range=set:tag0,10.0.0.0,static,86400s', 
'--dhcp-option-force=option:mtu,1450', '--dhcp-lease-max=256', 
'--conf-file=/etc/neutron/dnsmasq.conf', '--domain=op
 enstacklocal'] create_process 
/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/utils.py:92
2016-11-23 12:06:55.950 7 ERROR neutron.agent.linux.utils [-] Exit code: 1; 
Stdin: ; Stdout: ; Stderr: 
dnsmasq: bad command line options: try --help

2016-11-23 12:06:55.951 7 ERROR neutron.agent.dhcp.agent [-] Unable to enable 
dhcp for 0a9e36c5-2d55-4e6c-b4ba-67fd13e7681e.
2016-11-23 12:06:55.951 7 ERROR neutron.agent.dhcp.agent Traceback (most recent 
call last):
2016-11-23 12:06:55.951 7 ERROR neutron.agent.dhcp.agent   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/dhcp/agent.py", 
line 115, in call_driver
2016-11-23 12:06:55.951 7 ERROR neutron.agent.dhcp.agent getattr(driver, 
action)(**action_kwargs)
2016-11-23 12:06:55.951 7 ERROR neutron.agent.dhcp.agent   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/dhcp.py", 
line 216, in enable
2016-11-23 12:06:55.951 7 ERROR neutron.agent.dhcp.agent 
self.spawn_process()
2016-11-23 12:06:55.951 7 ERROR neutron.agent.dhcp.agent   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/dhcp.py", 
line 437, in spawn_process
2016-11-23 12:06:55.951 7 ERROR neutron.agent.dhcp.agent 
self._spawn_or_reload_process(reload_with_HUP=False)
2016-11-23 12:06:55.951 7 ERROR neutron.agent.dhcp.agent   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/dhcp.py", 
line 451, in _spawn_or_reload_process
2016-11-23 12:06:55.951 7 ERROR neutron.agent.dhcp.agent 
pm.enable(reload_cfg=reload_with_HUP)
2016-11-23 12:06:55.951 7 ERROR neutron.agent.dhcp.agent   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/external_process.py",
 line 94, in enable
2016-11-23 12:06:55.951 7 ERROR neutron.agent.dhcp.agent 
run_as_root=self.run_as_root)
2016-11-23 12:06:55.951 7 ERROR neutron.agent.dhcp.agent   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py",
 line 882, in execute
2016-11-23 12:06:55.951 7 ERROR neutron.agent.dhcp.agent 
log_fail_as_error=log_fail_as_error, **kwargs)
2016-11-23 12:06:55.951 7 ERROR neutron.agent.dhcp.agent   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/utils.py", 
line 147, in execute
2016-11-23 12:06:55.951 7 ERROR neutron.agent.dhcp.agent raise 
ProcessExecutionError(msg, returncode=returncode)
2016-11-23 12:06:55.951 7 ERROR neutron.agent.dhcp.agent ProcessExecutionError: 
Exit code: 1; Stdin: ; Stdout: ; Stderr: 
2016-11-23 12:06:55.951 7 ERROR neutron.agent.dhcp.agent dnsmasq: bad command 
line options: try --help
2016-11-23 12:06:55.951 7 ERROR neutron.agent.dhcp.agent 
2016-11-23 12:06:55.951 7 ERROR neutron.agent.dhcp.agent 
2016-11-23 12:06:55.951 7 INFO neutron.agent.dhcp.agent [-] Finished network 
0a9e36c5-2d55-4e6c-b4ba-67fd13e7681e dhcp configuration


For more info, please see [0].

[0] http://logs.openstack.org/89/400789/4/check/gate-kolla-ansible-dsvm-
deploy-cent

[Yahoo-eng-team] [Bug 1682060] Re: empty nova service and hypervisor list

2017-07-11 Thread Jeffrey Zhang
** Also affects: kolla-ansible/ocata
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1682060

Title:
  empty nova service and hypervisor list

Status in kolla-ansible:
  Fix Released
Status in kolla-ansible ocata series:
  New
Status in OpenStack Compute (nova):
  Fix Released
Status in openstack-manuals:
  Fix Released

Bug description:
  In current master, openstack compute service list and openstack hypervisor 
list (same issue with nova cli) result in an empty list.
  If I check the database, services are registered in the database.

  [root@controller tools]# docker exec -ti kolla_toolbox mysql -unova 
-pYVDa3l8vA57Smnbu9Q5qdgSKJckNxP3Q3rYvVxsD  -h 192.168.100.10 nova -e "SELECT * 
from services WHERE topic = 'compute'";
  
+-+-++++--+-+--+--+-+-+-+-+-+
  | created_at  | updated_at  | deleted_at | id | host   | 
binary   | topic   | report_count | disabled | deleted | disabled_reason | 
last_seen_up| forced_down | version |
  
+-+-++++--+-+--+--+-+-+-+-+-+
  | 2017-04-12 09:12:10 | 2017-04-12 09:14:33 | NULL   |  9 | controller | 
nova-compute | compute |   13 |0 |   0 | NULL| 
2017-04-12 09:14:33 |   0 |  17 |
  
+-+-++++--+-+--+--+-+-+-+-+-+
  [root@controller tools]# openstack compute service list --long

  [root@controller tools]# 
  [root@controller tools]# openstack hypervisor list --long

  [root@controller tools]#

  Logs from kolla deploy gates
  http://logs.openstack.org/08/456108/1/check/gate-kolla-dsvm-deploy-
  centos-source-centos-7-nv/9cf1e73/

  Environment:
  - source code
  - OS: centos/ubuntu/oraclelinux
  - Deployment type: kolla-ansible

  Please let me know if more info is needed or if there is a workaround.
  Regards

To manage notifications about this bug go to:
https://bugs.launchpad.net/kolla-ansible/+bug/1682060/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1440493] Re: Crash with python-memcached==1.5.4

2017-08-23 Thread Jeffrey Zhang
For keystonemiddleware (ksm), the memcache client pool is not enabled
by default; you need to add:

[keystone_authtoken]
...
memcache_use_advanced_pool = true

After enabling this, keystonemiddleware raised exactly the same error
message as in the bug description.

Env: OpenStack Ocata.

** Changed in: keystonemiddleware
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1440493

Title:
  Crash with python-memcached==1.5.4

Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Identity (keystone) kilo series:
  Fix Released
Status in keystonemiddleware:
  New

Bug description:
  There's some magic going on at line:
  
https://github.com/openstack/keystone/blob/2014.2.2/keystone/common/cache/_memcache_pool.py#L46

  This magic is broken by the fact that python-memcached added a
  super(...) initialization at
  https://github.com/linsomniac/python-memcached/blob/master/memcache.py#L218
  
https://github.com/linsomniac/python-memcached/commit/45403325e0249ff0f61d6ae449a7daeeb7e852e5

  Due to this change, keystone can no longer work with the latest
  python-memcached version:

  Traceback (most recent call last):
    File "keystone/common/wsgi.py", line 223, in __call__
      result = method(context, **params)
    File "keystone/identity/controllers.py", line 76, in create_user
      self.assignment_api.get_project(default_project_id)
    File "dogpile/cache/region.py", line 1040, in decorate
      should_cache_fn)
    File "dogpile/cache/region.py", line 651, in get_or_create
      async_creator) as value:
    File "dogpile/core/dogpile.py", line 158, in __enter__
      return self._enter()
    File "dogpile/core/dogpile.py", line 91, in _enter
      value = value_fn()
    File "dogpile/cache/region.py", line 604, in get_value
      value = self.backend.get(key)
    File "dogpile/cache/backends/memcached.py", line 149, in get
      value = self.client.get(key)
    File "keystone/common/cache/backends/memcache_pool.py", line 35, in _run_method
      with self.client_pool.acquire() as client:
    File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
      return self.gen.next()
    File "keystone/common/cache/_memcache_pool.py", line 97, in acquire
      conn = self.get(timeout=self._connection_get_timeout)
    File "eventlet/queue.py", line 293, in get
      return self._get()
    File "keystone/common/cache/_memcache_pool.py", line 155, in _get
      conn = ConnectionPool._get(self)
    File "keystone/common/cache/_memcache_pool.py", line 120, in _get
      conn = self._create_connection()
    File "keystone/common/cache/_memcache_pool.py", line 149, in _create_connection
      return _MemcacheClient(self.urls, **self._arguments)
    File "memcache.py", line 228, in __init__
      super(Client, self).__init__()
  TypeError: super(type, obj): obj must be an instance or subtype of type

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1440493/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1664495] [NEW] Got ImportError: No module named setup when migrate db

2017-02-14 Thread Jeffrey Zhang
Public bug reported:

ENV: neutron newton branch deployed using Kolla.

When running

neutron-db-manage --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head

I get the following:

2017-02-14 09:23:22.518158 | INFO:__main__:Writing out command to execute
2017-02-14 09:23:22.518175 | Traceback (most recent call last):
2017-02-14 09:23:22.518202 |   File 
"/var/lib/kolla/venv/bin/neutron-db-manage", line 10, in 
2017-02-14 09:23:22.518215 | sys.exit(main())
2017-02-14 09:23:22.518251 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/db/migration/cli.py", 
line 686, in main
2017-02-14 09:23:22.518278 | return_val |= bool(CONF.command.func(config, 
CONF.command.name))
2017-02-14 09:23:22.518316 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/db/migration/cli.py", 
line 205, in do_upgrade
2017-02-14 09:23:22.518334 | run_sanity_checks(config, revision)
2017-02-14 09:23:22.518400 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/db/migration/cli.py", 
line 670, in run_sanity_checks
2017-02-14 09:23:22.518414 | script_dir.run_env()
2017-02-14 09:23:22.518447 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/alembic/script/base.py", line 
407, in run_env
2017-02-14 09:23:22.518466 | util.load_python_file(self.dir, 'env.py')
2017-02-14 09:23:22.518501 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/alembic/util/pyfiles.py", line 
93, in load_python_file
2017-02-14 09:23:22.518603 | module = load_module_py(module_id, path)
2017-02-14 09:23:22.518647 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/alembic/util/compat.py", line 
79, in load_module_py
2017-02-14 09:23:22.518677 | mod = imp.load_source(module_id, path, fp)
2017-02-14 09:23:22.518719 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/env.py",
 line 24, in 
2017-02-14 09:23:22.518742 | from neutron.db.migration.models import head  
# noqa
2017-02-14 09:23:22.518780 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/db/migration/models/head.py",
 line 65, in 
2017-02-14 09:23:22.518806 | 
utils.import_modules_recursively(os.path.dirname(models.__file__))
2017-02-14 09:23:22.518846 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/common/utils.py", line 
865, in import_modules_recursively
2017-02-14 09:23:22.518867 | 
modules.extend(import_modules_recursively(dir_))
2017-02-14 09:23:22.518907 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/common/utils.py", line 
861, in import_modules_recursively
2017-02-14 09:23:22.518924 | importlib.import_module(module)
2017-02-14 09:23:22.518975 |   File 
"/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
2017-02-14 09:23:22.518993 | __import__(name)
2017-02-14 09:23:22.519010 | ImportError: No module named setup

[0] 
http://logs.openstack.org/07/433507/1/check/gate-kolla-dsvm-deploy-centos-source-centos-7-nv/80744bf/console.html#_2017-02-14_09_23_22_503742
[1] 
https://github.com/openstack/kolla/blob/stable/newton/docker/neutron/neutron-server/extend_start.sh#L6

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1664495

Title:
  Got ImportError: No module named setup when migrate db

Status in neutron:
  New

Bug description:
  ENV: neutron newton branch deployed using Kolla.

  when running

  neutron-db-manage --config-file /etc/neutron/neutron.conf
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head

  I get the following:

  2017-02-14 09:23:22.518158 | INFO:__main__:Writing out command to execute
  2017-02-14 09:23:22.518175 | Traceback (most recent call last):
  2017-02-14 09:23:22.518202 |   File 
"/var/lib/kolla/venv/bin/neutron-db-manage", line 10, in 
  2017-02-14 09:23:22.518215 | sys.exit(main())
  2017-02-14 09:23:22.518251 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/db/migration/cli.py", 
line 686, in main
  2017-02-14 09:23:22.518278 | return_val |= bool(CONF.command.func(config, 
CONF.command.name))
  2017-02-14 09:23:22.518316 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/db/migration/cli.py", 
line 205, in do_upgrade
  2017-02-14 09:23:22.518334 | run_sanity_checks(config, revision)
  2017-02-14 09:23:22.518400 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/db/migration/cli.py", 
line 670, in run_sanity_checks
  2017-02-14 09:23:22.518414 | script_dir.run_env()
  2017-02-14 09:23:22.518447 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/alembic/script/base.py", line 
407, in run_env
  2017-02-14 09:23:22.518466 | util.load_python_file(self.dir, 'env.py')
  2017-02-14 09:23:22.518501 |   File 
"/var/lib/kolla/venv/lib/python

[Yahoo-eng-team] [Bug 1664495] Re: Got ImportError: No module named setup when migrate db

2017-02-14 Thread Jeffrey Zhang
Neutron 9.2.0 fixed this issue.

Marking this as invalid.

** Changed in: neutron
   Status: New => Invalid

** Description changed:

- ENV: deploy neutron newton branch by using Kolla.
+ ENV: deploy neutron 9.0.0 tag by using Kolla.
  
  when running
  
- neutron-db-manage --config-file /etc/neutron/neutron.conf --config-
+ neutron-db-manage --config-file /etc/neutron/neutron.conf --config-
  file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head
  
  Get following
  
  2017-02-14 09:23:22.518158 | INFO:__main__:Writing out command to execute
  2017-02-14 09:23:22.518175 | Traceback (most recent call last):
  2017-02-14 09:23:22.518202 |   File 
"/var/lib/kolla/venv/bin/neutron-db-manage", line 10, in 
  2017-02-14 09:23:22.518215 | sys.exit(main())
  2017-02-14 09:23:22.518251 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/db/migration/cli.py", 
line 686, in main
  2017-02-14 09:23:22.518278 | return_val |= bool(CONF.command.func(config, 
CONF.command.name))
  2017-02-14 09:23:22.518316 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/db/migration/cli.py", 
line 205, in do_upgrade
  2017-02-14 09:23:22.518334 | run_sanity_checks(config, revision)
  2017-02-14 09:23:22.518400 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/db/migration/cli.py", 
line 670, in run_sanity_checks
  2017-02-14 09:23:22.518414 | script_dir.run_env()
  2017-02-14 09:23:22.518447 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/alembic/script/base.py", line 
407, in run_env
  2017-02-14 09:23:22.518466 | util.load_python_file(self.dir, 'env.py')
  2017-02-14 09:23:22.518501 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/alembic/util/pyfiles.py", line 
93, in load_python_file
  2017-02-14 09:23:22.518603 | module = load_module_py(module_id, path)
  2017-02-14 09:23:22.518647 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/alembic/util/compat.py", line 
79, in load_module_py
  2017-02-14 09:23:22.518677 | mod = imp.load_source(module_id, path, fp)
  2017-02-14 09:23:22.518719 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/env.py",
 line 24, in 
  2017-02-14 09:23:22.518742 | from neutron.db.migration.models import head 
 # noqa
  2017-02-14 09:23:22.518780 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/db/migration/models/head.py",
 line 65, in 
  2017-02-14 09:23:22.518806 | 
utils.import_modules_recursively(os.path.dirname(models.__file__))
  2017-02-14 09:23:22.518846 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/common/utils.py", line 
865, in import_modules_recursively
  2017-02-14 09:23:22.518867 | 
modules.extend(import_modules_recursively(dir_))
  2017-02-14 09:23:22.518907 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/common/utils.py", line 
861, in import_modules_recursively
  2017-02-14 09:23:22.518924 | importlib.import_module(module)
  2017-02-14 09:23:22.518975 |   File 
"/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
  2017-02-14 09:23:22.518993 | __import__(name)
  2017-02-14 09:23:22.519010 | ImportError: No module named setup
  
  [0] 
http://logs.openstack.org/07/433507/1/check/gate-kolla-dsvm-deploy-centos-source-centos-7-nv/80744bf/console.html#_2017-02-14_09_23_22_503742
  [1] 
https://github.com/openstack/kolla/blob/stable/newton/docker/neutron/neutron-server/extend_start.sh#L6

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1664495

Title:
  Got ImportError: No module named setup when migrate db

Status in neutron:
  Invalid

Bug description:
  ENV: deploy neutron 9.0.0 tag by using Kolla.

  when running

  neutron-db-manage --config-file /etc/neutron/neutron.conf
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head

  Get following

  2017-02-14 09:23:22.518158 | INFO:__main__:Writing out command to execute
  2017-02-14 09:23:22.518175 | Traceback (most recent call last):
  2017-02-14 09:23:22.518202 |   File 
"/var/lib/kolla/venv/bin/neutron-db-manage", line 10, in 
  2017-02-14 09:23:22.518215 | sys.exit(main())
  2017-02-14 09:23:22.518251 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/db/migration/cli.py", 
line 686, in main
  2017-02-14 09:23:22.518278 | return_val |= bool(CONF.command.func(config, 
CONF.command.name))
  2017-02-14 09:23:22.518316 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/db/migration/cli.py", 
line 205, in do_upgrade
  2017-02-14 09:23:22.518334 | run_sanity_checks(config, revision)
  2017-02-14 09:23:22.518400 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/db/migration/cli.py", 
line 670, in run_sanity_checks
  2017-02-14 09:23:22.518414 | script_dir.run_env()
  2017-02-14 09:23:22.518447 |   File 
"/var/

[Yahoo-eng-team] [Bug 1666029] [NEW] simple_cell_setup should not exist with 1 when there is no compute hosts

2017-02-19 Thread Jeffrey Zhang
Public bug reported:

When deploying an ironic environment, there will be no compute nodes
until ironic nodes are added, even though nova-compute is running. The
user may also not start nova-compute at all for some reason during a
fresh deployment.

So simple_cell_setup should stop returning 1 when there are no compute
hosts.
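For reference, the exit status in question (the command is real, the
transport URL is a placeholder):

  nova-manage cell_v2 simple_cell_setup --transport-url rabbit://...
  echo $?   # 1 when no compute hosts are found, failing fresh deploys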

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1666029

Title:
  simple_cell_setup should not exist with 1 when there is no compute
  hosts

Status in OpenStack Compute (nova):
  New

Bug description:
  When deploying an ironic environment, there will be no compute nodes
  until ironic nodes are added, even though nova-compute is running.
  The user may also not start nova-compute at all for some reason
  during a fresh deployment.

  So simple_cell_setup should stop returning 1 when there are no
  compute hosts.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1666029/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1603458] Re: Cannot Delete loadbalancers due to undeleteable pools

2017-03-26 Thread Jeffrey Zhang
** Changed in: neutron-lbaas-dashboard
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1603458

Title:
  Cannot Delete loadbalancers due to undeleteable pools

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in neutron:
  Invalid
Status in Neutron LBaaS Dashboard:
  Confirmed

Bug description:
  To delete an LBaaSv2 loadbalancer, you must remove all the members
  from the pool, then delete the pool, then delete the listener, then
  you can delete the loadbalancer. Currently in Horizon you can do all
  of those except delete the pool. Since you can't delete the pool, you
  can't delete the listener, and therefore can't delete the
  loadbalancer.

  Either deleting the listener should trigger the pool delete too (since
  they're 1:1) or the Horizon Wizard for Listener should have a delete
  pool capability.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1603458/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1678577] [NEW] nova live migration failed in some case

2017-04-02 Thread Jeffrey Zhang
Public bug reported:

env: nova 15.0.2 + libvirt + kvm + centos

In some situations, the nova request spec becomes:

{"nova_object.version": "1.8", "nova_object.changes": ["instance_uuid",
"requested_destination", "retry", "num_instances", "pci_requests",
"limits", "availability_zone", "force_nodes", "image", "instance_group",
"force_hosts", "numa_topology", "flavor", "project_id",
"scheduler_hints", "ignore_hosts"], "nova_object.name": "RequestSpec",
"nova_object.data": {"requested_destination": null, "instance_uuid":
"ca01b22b-d2d4-4291-96bd-ff6111f1f88b", "retry": {"nova_object.version":
"1.1", "nova_object.changes": ["num_attempts", "hosts"],
"nova_object.name": "SchedulerRetries", "nova_object.data":
{"num_attempts": 1, "hosts": {"nova_object.version": "1.16",
"nova_object.changes": ["objects"], "nova_object.name":
"ComputeNodeList", "nova_object.data": {"objects":
[{"nova_object.version": "1.16", "nova_object.changes": ["host",
"hypervisor_hostname"], "nova_object.name": "ComputeNode",
"nova_object.data": {"host": "control01", "hypervisor_hostname":
"control01"}, "nova_object.namespace": "nova"}]},
"nova_object.namespace": "nova"}}, "nova_object.namespace": "nova"},
"num_instances": 1, "pci_requests": {"nova_object.version": "1.1",
"nova_object.name": "InstancePCIRequests", "nova_object.data":
{"instance_uuid": "ca01b22b-d2d4-4291-96bd-ff6111f1f88b", "requests":
[]}, "nova_object.namespace": "nova"}, "limits": {"nova_object.version":
"1.0", "nova_object.changes": ["memory_mb", "vcpu", "disk_gb",
"numa_topology"], "nova_object.name": "SchedulerLimits",
"nova_object.data": {"vcpu": null, "memory_mb": 245427, "disk_gb": 8371,
"numa_topology": null}, "nova_object.namespace": "nova"},
"availability_zone": null, "force_nodes": null, "image":
{"nova_object.version": "1.8", "nova_object.changes": ["min_disk",
"container_format", "min_ram", "disk_format", "properties"],
"nova_object.name": "ImageMeta", "nova_object.data": {"min_disk": 1,
"container_format": "bare", "min_ram": 0, "disk_format": "raw",
"properties": {"nova_object.version": "1.16", "nova_object.name":
"ImageMetaProps", "nova_object.data": {}, "nova_object.namespace":
"nova"}}, "nova_object.namespace": "nova"}, "instance_group": null,
"force_hosts": null, "numa_topology": null, "ignore_hosts": null,
"flavor": {"nova_object.version": "1.1", "nova_object.name": "Flavor",
"nova_object.data": {"disabled": false, "root_gb": 1, "name": "m1.tiny",
"flavorid": "a70249ef-5ea9-49cb-b35f-ab4732064981", "deleted": false,
"created_at": "2017-03-22T08:13:48Z", "ephemeral_gb": 0, "updated_at":
null, "memory_mb": 256, "vcpus": 1, "extra_specs": {}, "swap": 0,
"rxtx_factor": 1.0, "is_public": true, "deleted_at": null,
"vcpu_weight": 0, "id": 119}, "nova_object.namespace": "nova"},
"project_id": "f3c6d500b267432c858c588800b49653", "scheduler_hints":
{}}, "nova_object.namespace": "nova"}


Check the retry part:

retry": {"nova_object.version": "1.1", "nova_object.changes":
["num_attempts", "hosts"], "nova_object.name": "SchedulerRetries",
"nova_object.data": {"num_attempts": 1, "hosts": {"nova_object.version":
"1.16", "nova_object.changes": ["objects"], "nova_object.name":
"ComputeNodeList", "nova_object.data": {"objects":
[{"nova_object.version": "1.16", "nova_object.changes": ["host",
"hypervisor_hostname"], "nova_object.name": "ComputeNode",
"nova_object.data": {"host": "control01", "hypervisor_hostname":
"control01"}, "nova_object.namespace": "nova"}]}

It has control01 as the host even though the instance is on control02.

When live-migrating this VM from control02 to control01, an error
shows up in "migration-list"; after checking the nova-scheduler logs,
I got:


2017-04-02 14:01:47.010 6 DEBUG nova.filters 
[req-191c8f6e-010b-42f6-acc6-c84c689f649c 2442cfcb9d5c4daf8d90af8bcfe30df7 
8eb03bbcdfd84f68b88a7fbaa74e2327 - - -] Starting with 1 host(s) 
get_filtered_objects 
/var/lib/kolla/venv/lib/python2.7/site-packages/nova/filters.py:70
2017-04-02 14:01:47.010 6 INFO nova.scheduler.filters.retry_filter 
[req-191c8f6e-010b-42f6-acc6-c84c689f649c 2442cfcb9d5c4daf8d90af8bcfe30df7 
8eb03bbcdfd84f68b88a7fbaa74e2327 - - -] Host [u'control01', u'control01'] 
fails.  Previously tried hosts: [[u'control01', u'control01']]


I think the root cause is the retry part, and I still do not know how
it happened.

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: "nova-scheduler.log"
   
https://bugs.launchpad.net/bugs/1678577/+attachment/4852537/+files/nova-scheduler.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1678577

Title:
  nova live migration failed in some case

Status in OpenStack Compute (nova):
  New

Bug description:
  env: nova 15.0.2 + libvirt + kvm + centos

  In some situations, the nova request spec becomes:

  {"nova_object.version": "1.8", "nova_object.changes":
  ["instance_uuid", "requested_destination", "retry", "num_instances",
  "pci_requests

[Yahoo-eng-team] [Bug 1682344] [NEW] can not create port in admin dashboard

2017-04-12 Thread Jeffrey Zhang
Public bug reported:

Env: ocata + source

Got the following error when creating a port:

[Thu Apr 13 13:54:16.39 2017] [:error] [pid 19] subnet is: 
[u'f755786f-7e5e-4aef-b1bc-90f6f0ea8748']
[Thu Apr 13 13:54:16.401356 2017] [:error] [pid 19] Internal Server Error: 
/admin/networks/85f6ceeb-09c9-4fed-ac11-679ee2a6f643/ports/create
[Thu Apr 13 13:54:16.401372 2017] [:error] [pid 19] Traceback (most recent call 
last):
[Thu Apr 13 13:54:16.401386 2017] [:error] [pid 19]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/django/core/handlers/base.py", 
line 132, in get_response
[Thu Apr 13 13:54:16.401389 2017] [:error] [pid 19] response = 
wrapped_callback(request, *callback_args, **callback_kwargs)
[Thu Apr 13 13:54:16.401391 2017] [:error] [pid 19]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/horizon/decorators.py", line 
36, in dec
[Thu Apr 13 13:54:16.401394 2017] [:error] [pid 19] return 
view_func(request, *args, **kwargs)
[Thu Apr 13 13:54:16.401411 2017] [:error] [pid 19]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/horizon/decorators.py", line 
52, in dec
[Thu Apr 13 13:54:16.401414 2017] [:error] [pid 19] return 
view_func(request, *args, **kwargs)
[Thu Apr 13 13:54:16.401416 2017] [:error] [pid 19]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/horizon/decorators.py", line 
36, in dec
[Thu Apr 13 13:54:16.401418 2017] [:error] [pid 19] return 
view_func(request, *args, **kwargs)
[Thu Apr 13 13:54:16.401419 2017] [:error] [pid 19]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/horizon/decorators.py", line 
84, in dec
[Thu Apr 13 13:54:16.401421 2017] [:error] [pid 19] return 
view_func(request, *args, **kwargs)
[Thu Apr 13 13:54:16.401436 2017] [:error] [pid 19]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/django/views/generic/base.py", 
line 71, in view
[Thu Apr 13 13:54:16.401438 2017] [:error] [pid 19] return 
self.dispatch(request, *args, **kwargs)
[Thu Apr 13 13:54:16.401439 2017] [:error] [pid 19]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/django/views/generic/base.py", 
line 89, in dispatch
[Thu Apr 13 13:54:16.401457 2017] [:error] [pid 19] return handler(request, 
*args, **kwargs)
[Thu Apr 13 13:54:16.401460 2017] [:error] [pid 19]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/django/views/generic/edit.py", 
line 205, in get
[Thu Apr 13 13:54:16.401463 2017] [:error] [pid 19] form = self.get_form()
[Thu Apr 13 13:54:16.401474 2017] [:error] [pid 19]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/horizon/forms/views.py", line 
168, in get_form
[Thu Apr 13 13:54:16.401503 2017] [:error] [pid 19] return 
form_class(self.request, **self.get_form_kwargs())
[Thu Apr 13 13:54:16.401506 2017] [:error] [pid 19]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/openstack_dashboard/dashboards/admin/networks/ports/forms.py",
 line 98, in __init__
[Thu Apr 13 13:54:16.401508 2017] [:error] [pid 19] subnet_choices = 
self._get_subnet_choices(kwargs['initial'])
[Thu Apr 13 13:54:16.401511 2017] [:error] [pid 19]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/openstack_dashboard/dashboards/admin/networks/ports/forms.py",
 line 151, in _get_subnet_choices
[Thu Apr 13 13:54:16.401513 2017] [:error] [pid 19] for subnet in 
network.subnets]
[Thu Apr 13 13:54:16.401528 2017] [:error] [pid 19] AttributeError: 'unicode' 
object has no attribute 'id'
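
The loop at forms.py line 151 assumes every entry in network.subnets is
a subnet object, but the first debug line above shows a list of bare
unicode subnet IDs. A minimal sketch of a tolerant version, assuming
the standard Horizon API wrappers (api.neutron.network_get and
subnet_get); the helper name and flow are illustrative, not the actual
fix:

from openstack_dashboard import api


def subnet_choices(request, network_id):
    network = api.neutron.network_get(request, network_id)
    choices = []
    for subnet in network.subnets:
        # The "subnet is: [u'...']" debug line shows this list can hold
        # bare unicode IDs instead of subnet objects, which is what
        # raises AttributeError on `.id` in the traceback above.
        if isinstance(subnet, basestring):
            subnet = api.neutron.subnet_get(request, subnet)
        choices.append((subnet.id, subnet.name_or_id))
    return choices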

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1682344

Title:
  cannot create port in admin dashboard

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Env: ocata + source

  Got the following error when creating a port.

  [Thu Apr 13 13:54:16.39 2017] [:error] [pid 19] subnet is: 
[u'f755786f-7e5e-4aef-b1bc-90f6f0ea8748']
  [Thu Apr 13 13:54:16.401356 2017] [:error] [pid 19] Internal Server Error: 
/admin/networks/85f6ceeb-09c9-4fed-ac11-679ee2a6f643/ports/create
  [Thu Apr 13 13:54:16.401372 2017] [:error] [pid 19] Traceback (most recent 
call last):
  [Thu Apr 13 13:54:16.401386 2017] [:error] [pid 19]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/django/core/handlers/base.py", 
line 132, in get_response
  [Thu Apr 13 13:54:16.401389 2017] [:error] [pid 19] response = 
wrapped_callback(request, *callback_args, **callback_kwargs)
  [Thu Apr 13 13:54:16.401391 2017] [:error] [pid 19]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/horizon/decorators.py", line 
36, in dec
  [Thu Apr 13 13:54:16.401394 2017] [:error] [pid 19] return 
view_func(request, *args, **kwargs)
  [Thu Apr 13 13:54:16.401411 2017] [:error] [pid 19]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/horizon/decorators.py", line 
52, in dec
  [Thu Apr 13 13:54:16.401414 2017] [:error] [pid 19]

[Yahoo-eng-team] [Bug 1694420] Re: AttributeError: 'NoneType' object has no attribute 'port_security_enabled

2017-06-03 Thread Jeffrey Zhang
** Also affects: kolla-ansible
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1694420

Title:
  AttributeError: 'NoneType' object has no attribute
  'port_security_enabled

Status in kolla:
  New
Status in kolla-ansible:
  New
Status in neutron:
  In Progress

Bug description:
  Hi, I'm seeing kolla gates failing with this error:
  AttributeError: 'NoneType' object has no attribute 'port_security_enabled'

  Instances fail to deploy while retrieving the port in
  openvswitch_agent.

  I think it may be related to this recent change
  https://review.openstack.org/#/c/466158/

  2017-05-30 10:05:23.865 7 ERROR neutron.agent.rpc 
[req-3d80325e-9430-46d2-ace7-b5a6ad358d77 - - - - -] Failed to get details for 
device c22edc09-6451-4b78-9160-399fd549314e
  2017-05-30 10:05:23.865 7 ERROR neutron.agent.rpc Traceback (most recent call 
last):
  2017-05-30 10:05:23.865 7 ERROR neutron.agent.rpc   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/rpc.py", line 
219, in get_devices_details_list_and_failed_devices
  2017-05-30 10:05:23.865 7 ERROR neutron.agent.rpc 
self.get_device_details(context, device, agent_id, host))
  2017-05-30 10:05:23.865 7 ERROR neutron.agent.rpc   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/rpc.py", line 
257, in get_device_details
  2017-05-30 10:05:23.865 7 ERROR neutron.agent.rpc 
'port_security_enabled': port_obj.security.port_security_enabled,
  2017-05-30 10:05:23.865 7 ERROR neutron.agent.rpc AttributeError: 'NoneType' 
object has no attribute 'port_security_enabled'
  2017-05-30 10:05:23.865 7 ERROR neutron.agent.rpc

  http://logs.openstack.org/93/463593/17/check/gate-kolla-dsvm-deploy-
  centos-source-centos-7-nv/f61667d/logs/kolla/neutron/neutron-
  openvswitch-agent.txt.gz#_2017-05-30_10_05_23_865
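
  The failing line in rpc.py dereferences port_obj.security
  unconditionally, and the traceback shows that relationship coming
  back as None. A minimal sketch of a guard that would avoid the crash
  (the default of True is an assumption mirroring the port-security
  extension's usual default, not the upstream fix):

  def get_port_security_enabled(port_obj, default=True):
      # Sketch only: tolerate a missing security binding instead of
      # dereferencing it unconditionally.
      security = getattr(port_obj, 'security', None)
      if security is None:
          return default
      return security.port_security_enabled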

  
  Environment:

  Source code from master
  Distributions affected: centos, ubuntu, oraclelinux

  
  Regards

To manage notifications about this bug go to:
https://bugs.launchpad.net/kolla/+bug/1694420/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1606777] [NEW] neutron lbaas dashboard has wrong port validation

2016-07-26 Thread Jeffrey Zhang
Public bug reported:

In the port validation, even if I enter a pure number, it always shows
an error.
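
For reference, a minimal sketch of what the port check should accept
(illustrative only; the dashboard's actual validator is on the Angular
side):

def is_valid_port(value):
    # Sketch: a valid entry is a pure integer in the TCP/UDP port
    # range 1-65535, so e.g. is_valid_port('8080') should pass.
    try:
        port = int(value)
    except (TypeError, ValueError):
        return False
    return 1 <= port <= 65535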

** Affects: neutron-lbaas-dashboard
 Importance: Undecided
 Status: New

** Attachment added: "090.png"
   https://bugs.launchpad.net/bugs/1606777/+attachment/4707900/+files/090.png

** Project changed: neutron => neutron-lbaas-dashboard

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1606777

Title:
  neutron lbaas dashboard has wrong port validation

Status in Neutron LBaaS Dashboard:
  New

Bug description:
  In the port validation, even if I enter a pure number, it always
  shows an error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron-lbaas-dashboard/+bug/1606777/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1619367] [NEW] neutron db upgrade raise error

2016-09-01 Thread Jeffrey Zhang
Public bug reported:

Here are the logs:

2016-09-01 14:46:26.838319 | Traceback (most recent call last):
2016-09-01 14:46:26.838344 |   File 
"/var/lib/kolla/venv/bin/neutron-db-manage", line 10, in 
2016-09-01 14:46:26.838356 | sys.exit(main())
2016-09-01 14:46:26.838389 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/db/migration/cli.py", 
line 686, in main
2016-09-01 14:46:26.838413 | return_val |= bool(CONF.command.func(config, 
CONF.command.name))
2016-09-01 14:46:26.838448 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/db/migration/cli.py", 
line 207, in do_upgrade
2016-09-01 14:46:26.838464 | desc=branch, sql=CONF.command.sql)
2016-09-01 14:46:26.838500 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/db/migration/cli.py", 
line 108, in do_alembic_command
2016-09-01 14:46:26.838522 | getattr(alembic_command, cmd)(config, *args, 
**kwargs)
2016-09-01 14:46:26.838563 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/alembic/command.py", line 174, 
in upgrade
2016-09-01 14:46:26.838578 | script.run_env()
2016-09-01 14:46:26.838610 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/alembic/script/base.py", line 
407, in run_env
2016-09-01 14:46:26.838629 | util.load_python_file(self.dir, 'env.py')
2016-09-01 14:46:26.838663 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/alembic/util/pyfiles.py", line 
93, in load_python_file
2016-09-01 14:46:26.838682 | module = load_module_py(module_id, path)
2016-09-01 14:46:26.838715 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/alembic/util/compat.py", line 
79, in load_module_py
2016-09-01 14:46:26.838734 | mod = imp.load_source(module_id, path, fp)
2016-09-01 14:46:26.838772 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/env.py",
 line 120, in <module>
2016-09-01 14:46:26.838787 | run_migrations_online()
2016-09-01 14:46:26.838828 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/env.py",
 line 114, in run_migrations_online
2016-09-01 14:46:26.838848 | context.run_migrations()
2016-09-01 14:46:26.838867 |   File "<string>", line 8, in run_migrations
2016-09-01 14:46:26.838908 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/alembic/runtime/environment.py",
 line 797, in run_migrations
2016-09-01 14:46:26.838927 | self.get_context().run_migrations(**kw)
2016-09-01 14:46:26.838962 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/alembic/runtime/migration.py", 
line 312, in run_migrations
2016-09-01 14:46:26.838976 | step.migration_fn(**kw)
2016-09-01 14:46:26.839039 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/versions/newton/contract/b12a3ef66e62_add_standardattr_to_qos_policies.py",
 line 60, in upgrade
2016-09-01 14:46:26.839059 | existing_server_default=False)
2016-09-01 14:46:26.839077 |   File "<string>", line 8, in alter_column
2016-09-01 14:46:26.839095 |   File "<string>", line 3, in alter_column
2016-09-01 14:46:26.839129 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/alembic/operations/ops.py", 
line 1414, in alter_column
2016-09-01 14:46:26.839146 | return operations.invoke(alt)
2016-09-01 14:46:26.839178 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/alembic/operations/base.py", 
line 318, in invoke
2016-09-01 14:46:26.839194 | return fn(self, operation)
2016-09-01 14:46:26.839228 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/alembic/operations/toimpl.py", 
line 53, in alter_column
2016-09-01 14:46:26.839240 | **operation.kw
2016-09-01 14:46:26.839273 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/alembic/ddl/mysql.py", line 
67, in alter_column
2016-09-01 14:46:26.839289 | else existing_autoincrement
2016-09-01 14:46:26.839319 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/alembic/ddl/impl.py", line 
118, in _exec
2016-09-01 14:46:26.839341 | return conn.execute(construct, *multiparams, 
**params)
2016-09-01 14:46:26.839374 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", 
line 914, in execute
2016-09-01 14:46:26.839392 | return meth(self, multiparams, params)
2016-09-01 14:46:26.839427 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/sqlalchemy/sql/ddl.py", line 
68, in _execute_on_connection
2016-09-01 14:46:26.839450 | return connection._execute_ddl(self, 
multiparams, params)
2016-09-01 14:46:26.839483 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", 
line 968, in _execute_ddl
2016-09-01 14:46:26.839495 | compiled
2016-09-01 14:46:26.839530 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", 
line 1146, in _execute_context
2016-09-01 14:46:26.839541 | context)
2016-09-01 14:46:26.839578 |   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/s