[Yahoo-eng-team] [Bug 1730898] [NEW] Cannot resize instances when they were created in parallel on the dashboard.

2017-11-07 Thread sapd
Public bug reported:

Hi. I'm using the OpenStack Ocata version.
When I create two or more instances from the dashboard (by filling in the Count option),
I can't resize or cold-migrate those instances afterwards.

The debug log from nova-scheduler:

2017-11-08 10:29:52.392 55 DEBUG nova.scheduler.filter_scheduler
[req-7706a6e2-66c0-4126-8bd0-ffc7af6881c9 2d407d7560e24de09e205d3989161d98
c31732225df74453a6f0155442a9e449 - default default] There are 1 hosts available
but 5 instances requested to build. select_destinations
/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py:101

I think this bug also exists in the Pike version.
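The log suggests the resize is being scheduled with the stale instance count (5) from the original multi-create request spec, so even a single-instance move is rejected. A toy illustration of the failing check (hypothetical, simplified; not nova's actual code):

```python
# Toy model of the scheduler's host-count check; names are simplified
# and hypothetical, not the real FilterScheduler implementation.
def select_destinations(num_instances, hosts):
    if len(hosts) < num_instances:
        raise RuntimeError(
            "There are %d hosts available but %d instances requested to build."
            % (len(hosts), num_instances))
    return hosts[:num_instances]

# A resize moves only one instance, but if the request spec still carries
# num_instances=5 from the original multi-create, scheduling fails even
# though a single host would be enough:
try:
    select_destinations(num_instances=5, hosts=["compute-1"])
except RuntimeError as exc:
    print(exc)
```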

Thanks

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: nova-placement nova-scheduler resize

** Tags added: resize

** Tags added: nova-placement nova-scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1730898

Title:
  Cannot resize instances when they were created in parallel on the dashboard.

Status in OpenStack Compute (nova):
  New

Bug description:
  Hi. I'm using the OpenStack Ocata version.
  When I create two or more instances from the dashboard (by filling in the Count option),
  I can't resize or cold-migrate those instances afterwards.

  The debug log from nova-scheduler:

  2017-11-08 10:29:52.392 55 DEBUG nova.scheduler.filter_scheduler
  [req-7706a6e2-66c0-4126-8bd0-ffc7af6881c9 2d407d7560e24de09e205d3989161d98
  c31732225df74453a6f0155442a9e449 - default default] There are 1 hosts available
  but 5 instances requested to build. select_destinations
  /usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py:101

  I think this bug also exists in the Pike version.

  Thanks

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1730898/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1729445] Re: Potential IndexError if using the CachingScheduler and not getting alternates

2017-11-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/517134
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=a9d92553b36e96daf6c8e6fca609f608e84ba336
Submitter: Zuul
Branch: master

commit a9d92553b36e96daf6c8e6fca609f608e84ba336
Author: Matt Riedemann 
Date:   Wed Nov 1 19:48:31 2017 -0400

Fix return type in FilterScheduler._legacy_find_hosts

The FilterScheduler._schedule method should be returning
a list of list of selected hosts. When include_alternatives
is False in _legacy_find_hosts, it was only returning back
a list of hosts, which would result in an IndexError when
select_destinations() tries to take the first entry from each
item in the list.

Change-Id: Ia6c87900605d3604beb74b942b0e30575b814112
Closes-Bug: #1729445


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1729445

Title:
  Potential IndexError if using the CachingScheduler and not getting
  alternates

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  If we're using the CachingScheduler and we're not getting alternates,
  maybe because conductor is old, we'll get an IndexError because we're
  not returning a list of list of selected hosts, we're just returning a
  list of selected hosts here:

  
https://github.com/openstack/nova/blob/f974e3c3566f379211d7fdc790d07b5680925584/nova/scheduler/filter_scheduler.py#L342

  And the IndexError would happen here:

  
https://github.com/openstack/nova/blob/f974e3c3566f379211d7fdc790d07b5680925584/nova/scheduler/filter_scheduler.py#L120

  We obviously don't have a test covering this scenario.
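A simplified sketch of the shape mismatch (hypothetical names; not the actual nova objects):

```python
class Selection:
    """Stand-in for a scheduler host selection (hypothetical)."""
    def __init__(self, host):
        self.host = host

def pick_primaries(selections):
    # select_destinations() takes the first entry from each item in the list
    return [sel[0] for sel in selections]

# Correct shape from _schedule(): one inner list per requested instance,
# holding the chosen host followed by any alternates.
good = [[Selection("host-a"), Selection("host-b")]]
assert pick_primaries(good)[0].host == "host-a"

# Buggy shape from _legacy_find_hosts() without alternates: a flat list of
# selections. Indexing into each element then blows up (here as a TypeError;
# in nova's code path it surfaced as an IndexError).
bad = [Selection("host-a")]
try:
    pick_primaries(bad)
except TypeError:
    print("flat list of hosts, not a list of lists")
```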

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1729445/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1730896] [NEW] [qos] some errors in the QoS configuration doc

2017-11-07 Thread Zachary Ma
Public bug reported:

1. Inaccurate information is displayed when a QoS policy, rule, etc. is created.
2. There is a wrong command to update the security group rules:
$ openstack network qos rule set --max-kbps 2000 --max-burst-kbps 200 \
--ingress bw-limiter 92ceb52f-170f-49d0-9528-976e2fee2d6f

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: qos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1730896

Title:
  [qos] some errors in the QoS configuration doc

Status in neutron:
  New

Bug description:
  1. Inaccurate information is displayed when a QoS policy, rule, etc. is created.
  2. There is a wrong command to update the security group rules:
  $ openstack network qos rule set --max-kbps 2000 --max-burst-kbps 200 \
  --ingress bw-limiter 92ceb52f-170f-49d0-9528-976e2fee2d6f

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1730896/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1730890] Re: apiv2. owner isn't set for new images

2017-11-07 Thread Jack Ivanov
The following should be defined (in glance-api.conf):

[paste_deploy]
# ...
flavor = keystone


** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1730890

Title:
  apiv2. owner isn't set for new images

Status in Glance:
  Invalid

Bug description:
  After switching to API v2, the owner is not set for new images.

  You can see "owner is None". This affects other components, including
  Horizon, which use the owner to determine the project. The owner should
  be set even if no `--owner` is specified.

  # openstack image create "cirros-$RANDOM" --file cirros-0.3.5-x86_64-disk.img \
      --disk-format qcow2 --container-format bare
  +------------------+------------------------------------------------------+
  | Field            | Value                                                |
  +------------------+------------------------------------------------------+
  | checksum         | f8ab98ff5e73ebab884d80c9dc9c7290                     |
  | container_format | bare                                                 |
  | created_at       | 2017-11-08T06:33:35Z                                 |
  | disk_format      | qcow2                                                |
  | file             | /v2/images/5b815302-0ee6-442a-91d8-212e1035d91b/file |
  | id               | 5b815302-0ee6-442a-91d8-212e1035d91b                 |
  | min_disk         | 0                                                    |
  | min_ram          | 0                                                    |
  | name             | cirros-32648                                         |
  | owner            | None                                                 |
  | protected        | False                                                |
  | schema           | /v2/schemas/image                                    |
  | size             | 13267968                                             |
  | status           | active                                               |
  | tags             |                                                      |
  | updated_at       | 2017-11-08T06:33:35Z                                 |
  | virtual_size     | None                                                 |
  | visibility       | shared                                               |
  +------------------+------------------------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1730890/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1730892] [NEW] Nova Image Resize Generating Errors

2017-11-07 Thread Xuanzhou Perry Dong
Public bug reported:

Description
===
When the flavor disk size is larger than the image size, Nova tries to grow
the image disk to match the flavor disk size. In the process it also calls
resize2fs to resize the image's file system for raw-format images, but this
generates an error because resize2fs must be run on a partition block device
rather than on the whole disk device (which includes the boot sector,
partition table, etc.).

Steps to Reproduce
==

1. Set the following configuration for nova-compute:
use_cow_images = False
force_raw_images = True

2. nova boot --image cirros-0.3.5-x86_64-disk --nic net-
id=6f0df6a5-8848-427b-8222-7b69d5602fe4 --flavor m1.tiny test_vm

The following error log is generated:

Nov 08 14:42:51 devstack01 nova-compute[10609]: DEBUG nova.virt.disk.api [None
req-771fa44d-46ce-4486-9ed7-7a89ddb735ed demo admin] Unable to determine label
for image  with error Unexpected error while running command.
Nov 08 14:42:51 devstack01 nova-compute[10609]: Command: e2label
/opt/stack/data/nova/instances/73b49be7-0b5f-4a9d-a187-bbbf7fd59961/disk
Nov 08 14:42:51 devstack01 nova-compute[10609]: Exit code: 1
Nov 08 14:42:51 devstack01 nova-compute[10609]: Stdout: u''
Nov 08 14:42:51 devstack01 nova-compute[10609]: Stderr: u"e2label: Bad magic
number in super-block while trying to open
/opt/stack/data/nova/instances/73b49be7-0b5f-4a9d-a187-bbbf7fd59961/disk\nCouldn't
find valid filesystem superblock.\n". Cannot resize. {{(pid=10609)
is_image_extendable /opt/stack/nova/nova/virt/disk/api.py:254}}

Expected Result
===
The wrong command should not be executed, and no error logs should be generated.

Actual Result
=
Error logs are generated (see above).

Environment
===
1. OpenStack Nova
stack@devstack01:/opt/stack/nova$ git log -1
commit 232458ae4e83e8b218397e42435baa9f1d025b68
Merge: 650c9f3 9d400c3
Author: Jenkins 
Date:   Tue Oct 10 06:27:52 2017 +

Merge "rp: Move RP._get|set_aggregates() to module scope"

2. Hypervisor
Libvirt + QEMU
stack@devstack01:/opt/stack/nova$ dpkg -l | grep libvirt
ii  libvirt-bin             3.6.0-1ubuntu5~cloud0  amd64  programs for the libvirt library
ii  libvirt-clients         3.6.0-1ubuntu5~cloud0  amd64  Programs for the libvirt library
ii  libvirt-daemon          3.6.0-1ubuntu5~cloud0  amd64  Virtualization daemon
ii  libvirt-daemon-system   3.6.0-1ubuntu5~cloud0  amd64  Libvirt daemon configuration files
ii  libvirt-dev:amd64       3.6.0-1ubuntu5~cloud0  amd64  development files for the libvirt library
ii  libvirt0:amd64          3.6.0-1ubuntu5~cloud0  amd64  library for interfacing with different virtualization systems
stack@devstack01:/opt/stack/nova$ dpkg -l | grep qemu
ii  ipxe-qemu               1.0.0+git-20150424.a25a16d-1ubuntu1  all    PXE boot firmware - ROM images for qemu
ii  qemu-block-extra:amd64  1:2.10+dfsg-0ubuntu3~cloud0          amd64  extra block backend modules for qemu-system and qemu-utils
ii  qemu-kvm                1:2.10+dfsg-0ubuntu1~cloud0          amd64  QEMU Full virtualization
ii  qemu-slof               20151103+dfsg-1ubuntu1               all    Slimline Open Firmware -- QEMU PowerPC version
ii  qemu-system             1:2.10+dfsg-0ubuntu3~cloud0          amd64  QEMU full system emulation binaries
ii  qemu-system-arm         1:2.10+dfsg-0ubuntu1~cloud0          amd64  QEMU full system emulation binaries (arm)
ii  qemu-system-common      1:2.10+dfsg-0ubuntu3~cloud0          amd64  QEMU full system emulation binaries (common files)
ii  qemu-system-mips        1:2.10+dfsg-0ubuntu1~cloud0          amd64  QEMU full system emulation binaries (mips)
ii  qemu-system-misc        1:2.10+dfsg-0ubuntu1~cloud0          amd64  QEMU full system emulation binaries (miscellaneous)
ii  qemu-system-ppc         1:2.10+dfsg-0ubuntu1~cloud0          amd64  QEMU full system emulation binaries (ppc)
ii  qemu-system-s390x       1:2.10+dfsg-0ubuntu1~cloud0          amd64  QEMU full system emulation binaries (s390x)
ii  qemu-system-sparc       1:2.10+dfsg-0ubuntu1~cloud0          amd64  QEMU full system emulation binaries (sparc)
ii  qemu-system-x86         1:2.10+dfsg-0ubuntu1~cloud0          amd64  QEMU full system emulation binaries (x86)
ii  qemu-utils

[Yahoo-eng-team] [Bug 1730890] [NEW] apiv2. owner isn't set for new images

2017-11-07 Thread Jack Ivanov
Public bug reported:

After switching to API v2, the owner is not set for new images.

You can see "owner is None". This affects other components, including
Horizon, which use the owner to determine the project. The owner should
be set even if no `--owner` is specified.

# openstack image create "cirros-$RANDOM" --file cirros-0.3.5-x86_64-disk.img \
    --disk-format qcow2 --container-format bare
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | f8ab98ff5e73ebab884d80c9dc9c7290                     |
| container_format | bare                                                 |
| created_at       | 2017-11-08T06:33:35Z                                 |
| disk_format      | qcow2                                                |
| file             | /v2/images/5b815302-0ee6-442a-91d8-212e1035d91b/file |
| id               | 5b815302-0ee6-442a-91d8-212e1035d91b                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | cirros-32648                                         |
| owner            | None                                                 |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 13267968                                             |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2017-11-08T06:33:35Z                                 |
| virtual_size     | None                                                 |
| visibility       | shared                                               |
+------------------+------------------------------------------------------+

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1730890

Title:
  apiv2. owner isn't set for new images

Status in Glance:
  New

Bug description:
  After switching to API v2, the owner is not set for new images.

  You can see "owner is None". This affects other components, including
  Horizon, which use the owner to determine the project. The owner should
  be set even if no `--owner` is specified.

  # openstack image create "cirros-$RANDOM" --file cirros-0.3.5-x86_64-disk.img \
      --disk-format qcow2 --container-format bare
  +------------------+------------------------------------------------------+
  | Field            | Value                                                |
  +------------------+------------------------------------------------------+
  | checksum         | f8ab98ff5e73ebab884d80c9dc9c7290                     |
  | container_format | bare                                                 |
  | created_at       | 2017-11-08T06:33:35Z                                 |
  | disk_format      | qcow2                                                |
  | file             | /v2/images/5b815302-0ee6-442a-91d8-212e1035d91b/file |
  | id               | 5b815302-0ee6-442a-91d8-212e1035d91b                 |
  | min_disk         | 0                                                    |
  | min_ram          | 0                                                    |
  | name             | cirros-32648                                         |
  | owner            | None                                                 |
  | protected        | False                                                |
  | schema           | /v2/schemas/image                                    |
  | size             | 13267968                                             |
  | status           | active                                               |
  | tags             |                                                      |
  | updated_at       | 2017-11-08T06:33:35Z                                 |
  | virtual_size     | None                                                 |
  | visibility       | shared                                               |
  +------------------+------------------------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1730890/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1730845] [NEW] [RFE] support a port-behind-port API

2017-11-07 Thread Omer Anson
Public bug reported:

This RFE requests a unified API for a port-behind-port behaviour. This
behaviour has a few use-cases:

* MACVLAN - Identify that a port is behind a port using Allowed Address
Pairs, and identifying the behaviour based on MAC.

* HA Proxy behind Amphora - Identify that a port is behind a port using
Allowed Address Pairs and identifying the behaviour based on IP.

* Trunk Port (VLAN aware VMs) - Identify that a port is behind a port
using the Trunk Port API and identifying the behaviour based on VLAN
tags.

This RFE proposes to extend the Trunk Port API to support the first two
use-cases. The rationale is that in an SDN environment, it makes more
sense to explicitly state the intent, rather than have the
implementation infer the intent by matching Allowed Address Pairs and
other existing ports.

This will allow implementations to handle these use cases in a simpler,
more flexible, and more robust manner than is done today.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1730845

Title:
  [RFE] support a port-behind-port API

Status in neutron:
  New

Bug description:
  This RFE requests a unified API for a port-behind-port behaviour. This
  behaviour has a few use-cases:

  * MACVLAN - Identify that a port is behind a port using Allowed
  Address Pairs, and identifying the behaviour based on MAC.

  * HA Proxy behind Amphora - Identify that a port is behind a port
  using Allowed Address Pairs and identifying the behaviour based on IP.

  * Trunk Port (VLAN aware VMs) - Identify that a port is behind a port
  using the Trunk Port API and identifying the behaviour based on VLAN
  tags.

  This RFE proposes to extend the Trunk Port API to support the first
  two use-cases. The rationale is that in an SDN environment, it makes
  more sense to explicitly state the intent, rather than have the
  implementation infer the intent by matching Allowed Address Pairs and
  other existing ports.

  This will allow implementations to handle these use cases in a simpler,
  more flexible, and more robust manner than is done today.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1730845/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1720160] Re: cloud-init wait for waagent on Azure CentOS 7.4 - no sshd start

2017-11-07 Thread Scott Moser
Closing per comment from Maik.

If you find this is not working, please feel free to re-open.

Scott

** Changed in: cloud-init
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1720160

Title:
  cloud-init wait for waagent on Azure CentOS 7.4 - no sshd start

Status in cloud-init:
  Fix Released
Status in cloud-init package in CentOS:
  Unknown

Bug description:
  Hello,
  after updating a CentOS 7.3 VM on Azure to CentOS 7.4, you cannot connect via
  ssh because cloud-init tries to start waagent and the boot process hangs, so
  sshd is never started.

  We installed a fresh CentOS 7.4 in the Azure cloud to provide a base
  image template for our company, and this also happens in that VM.

  ###
  # yum info cloud-init:
  Name: cloud-init
  Arch: x86_64
  Version : 0.7.9
  Release : 9.el7.centos.2
  Size: 2.1 M
  Repo: installed
  From repo   : base

  In CentOS 7.3 the cloud-init version is 0.7.5-10.el7.centos.1
  waagent is package version 2.2.14-1.el7 in both CentOS versions, which is
  internally updated to 2.2.17 by waagent itself.

  ###
  To debug the failure, I had to install rlogin before the update:

  yum remove firewalld -y
  yum install rsh-server -y
  systemctl enable rsh.socket
  systemctl enable rlogin.socket
  systemctl enable rexec.socket
  echo "root:123" | chpasswd
  echo "+ root" > ~/.rlogin
  cat << EOF >> /etc/securetty
  rsh
  rexec
  rlogin
  EOF

  reboot

  yum update -y
  reboot

  ###
  To unblock the boot process, I connected via rlogin and killed the waagent start:

  # ps -ef | grep "waagent\|cloud"
  root   993 1  0 14:52 ?00:00:02 /usr/bin/python 
/usr/bin/cloud-init init
  root  1134   993  0 14:52 ?00:00:00 /bin/systemctl start 
waagent.service
  root  1337  1222  0 15:56 pts/200:00:00 grep --color=auto 
waagent\|cloud

  # kill 1134

  Then cloud-init does its magic, and on the next reboot sshd starts without
  any trouble.

  ###
  To make the VM fail again, you can clear the config and reboot:
  yum remove cloud-init WALinuxAgent -y
  rm -f /etc/waagent.con*
  rm -fr /etc/cloud/
  rm -fr /var/lib/cloud/
  rm -fr /var/lib/waagent/
  rm -fr /var/log/waagent.lo*
  rm -fr /var/log/cloud-init*
  yum install cloud-init WALinuxAgent -y

  cp -a /etc/waagent.conf /etc/waagent.conf.rpmsave
  sed -i -e "s/Provisioning.Enabled.*/Provisioning.Enabled=n/g" 
/etc/waagent.conf
  sed -i -e "s/Provisioning.UseCloudInit.*/Provisioning.UseCloudInit=y/g" 
/etc/waagent.conf
  sed -i -e "s/Logs.Verbose.*/Logs.Verbose=y/g" /etc/waagent.conf

  cp -a /etc/cloud/cloud.cfg /etc/cloud/cloud.cfg.rpmsave
  cat << EOF >> /etc/cloud/cloud.cfg

  # From cloud-init docs
  datasource:
Azure:
  agent_command: [service, waagent, start]

  debug:
verbose: True

  EOF

  diff /etc/waagent.conf.rpmsave /etc/waagent.conf
  diff /etc/cloud/cloud.cfg.rpmsave /etc/cloud/cloud.cfg

  reboot

  ###
  I don't know why the system hangs.
  Can you please review this?

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1720160/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1730834] [NEW] Ironic compute node doesn't take over nodes with instance when the owner compute node is down

2017-11-07 Thread Min Sun
Public bug reported:

   Description
   ===
   Ironic compute node doesn't take over nodes with instance when the owner 
compute node is down

   Steps to reproduce
   ==
   1. I have two ironic compute nodes & two BM nodes
   2. BM1 is controlled by node1, BM2 is controlled by node2
   3. I boot an instance on BM1
   4. Stop nova-compute service on node1
   5. node2 doesn't take over BM1 although node1 is not available
   
   Expected result
   ===
   What I expect is that when node1 is down, node2 can take over BM1.
   
   Actual result
   =
   node2 doesn't take over BM1.
   And all nova operations on BM1 fail because the nova-compute service on
   node1 is down.
   
   Environment
   ===
   openstack-nova-common-14.0.3-9.el7ost.noarch
   openstack-nova-novncproxy-14.0.3-9.el7ost.noarch
   openstack-nova-scheduler-14.0.3-9.el7ost.noarch
   python-novaclient-6.0.0-1.el7ost.noarch
   openstack-nova-conductor-14.0.3-9.el7ost.noarch
   openstack-nova-api-14.0.3-9.el7ost.noarch
   python-nova-14.0.3-9.el7ost.noarch
   openstack-nova-cert-14.0.3-9.el7ost.noarch
   openstack-nova-console-14.0.3-9.el7ost.noarch
   openstack-nova-compute-14.0.3-9.el7ost.noarch
   openstack-nova-common-14.0.3-9.el7ost.noarch

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

- I met some trouble in my ironic env:
- 1. I have two ironic compute nodes & two BM nodes
- 2. BM1 is controlled by node1, BM2 is controlled by node2
- 3. I boot an instance on BM1
- 4. Stop nova-compute service on node1
- 5. node2 doesn't take over BM1 although node1 is not available
+Description
+===
+Ironic compute node doesn't take over nodes with instance when the owner 
compute node is down
  
- What I expect is that when node1 id down node2 can take over BM1.
- 
- nova version:
- openstack-nova-common-14.0.3-9.el7ost.noarch
- openstack-nova-novncproxy-14.0.3-9.el7ost.noarch
- openstack-nova-scheduler-14.0.3-9.el7ost.noarch
- python-novaclient-6.0.0-1.el7ost.noarch
- openstack-nova-conductor-14.0.3-9.el7ost.noarch
- openstack-nova-api-14.0.3-9.el7ost.noarch
- python-nova-14.0.3-9.el7ost.noarch
- openstack-nova-cert-14.0.3-9.el7ost.noarch
- openstack-nova-console-14.0.3-9.el7ost.noarch
- openstack-nova-compute-14.0.3-9.el7ost.noarch
- openstack-nova-common-14.0.3-9.el7ost.noarch
+Steps to reproduce
+==
+1. I have two ironic compute nodes & two BM nodes
+2. BM1 is controlled by node1, BM2 is controlled by node2
+3. I boot an instance on BM1
+4. Stop nova-compute service on node1
+5. node2 doesn't take over BM1 although node1 is not available
+
+Expected result
+===
+What I expect is that when node1 is down, node2 can take over BM1.
+
+Actual result
+=
+node2 doesn't take over BM1.
+And all nova operations on BM1 fail because the nova-compute service on
node1 is down.
+
+Environment
+===
+openstack-nova-common-14.0.3-9.el7ost.noarch
+openstack-nova-novncproxy-14.0.3-9.el7ost.noarch
+openstack-nova-scheduler-14.0.3-9.el7ost.noarch
+python-novaclient-6.0.0-1.el7ost.noarch
+openstack-nova-conductor-14.0.3-9.el7ost.noarch
+openstack-nova-api-14.0.3-9.el7ost.noarch
+python-nova-14.0.3-9.el7ost.noarch
+openstack-nova-cert-14.0.3-9.el7ost.noarch
+openstack-nova-console-14.0.3-9.el7ost.noarch
+openstack-nova-compute-14.0.3-9.el7ost.noarch
+openstack-nova-common-14.0.3-9.el7ost.noarch

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1730834

Title:
  Ironic compute node doesn't take over nodes with instance when the
  owner compute node is down

Status in OpenStack Compute (nova):
  New

Bug description:
 Description
 ===
 Ironic compute node doesn't take over nodes with instance when the owner 
compute node is down

 Steps to reproduce
 ==
 1. I have two ironic compute nodes & two BM nodes
 2. BM1 is controlled by node1, BM2 is controlled by node2
 3. I boot an instance on BM1
 4. Stop nova-compute service on node1
 5. node2 doesn't take over BM1 although node1 is not available
 
 Expected result
 ===
 What I expect is that when node1 is down, node2 can take over BM1.
 
 Actual result
 =
 node2 doesn't take over BM1.
 And all nova operations on BM1 fail because the nova-compute service on
node1 is down.
 
 Environment
 ===
 openstack-nova-common-14.0.3-9.el7ost.noarch
 openstack-nova-novncproxy-14.0.3-9.el7ost.noarch
 openstack-nova-scheduler-14.0.3-9.el7ost.noarch
 python-novaclient-6.0.0-1.el7ost.noarch
 openstack-nova-conductor-14.0.3-9.el7ost.noarch
 openstack-nova-api-14.0.3-9.el7ost.noarch

[Yahoo-eng-team] [Bug 1730800] [NEW] UnknownConnectionError

2017-11-07 Thread zhi xu
Public bug reported:

[root@controller ~]# openstack server create --flavor m1.nano --image 
cirros-0.3.0-i386 --security-group permitall --key-name mykey provider-instance 
   Unexpected API Error. Please report this 
at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
 (HTTP 500) 
(Request-ID: req-87716e72-824e-4618-9854-4be8ea96166b)
[root@controller ~]# 
[root@controller ~]# 
[root@controller ~]# !cat
cat /var/log/nova/nova-api.log 

2017-11-08 09:29:32.956 1938 INFO nova.api.openstack.wsgi 
[req-a6350065-4421-4e8f-b9a4-a65913a1d568 78688b319571447080aea1e62f59ecb5 
15e6f6ed6c9240168b85b561fcc8bc0f - default default] HTTP exception thrown: 
Flavor m1.nano could not be found.
2017-11-08 09:29:32.959 1938 INFO nova.osapi_compute.wsgi.server 
[req-a6350065-4421-4e8f-b9a4-a65913a1d568 78688b319571447080aea1e62f59ecb5 
15e6f6ed6c9240168b85b561fcc8bc0f - default default] 192.168.99.6 "GET 
/v2.1/flavors/m1.nano HTTP/1.1" status: 404 len: 500 time: 0.0435319
2017-11-08 09:29:32.982 1938 INFO nova.osapi_compute.wsgi.server 
[req-8a0509c7-3178-4458-94ab-7a10299d0134 78688b319571447080aea1e62f59ecb5 
15e6f6ed6c9240168b85b561fcc8bc0f - default default] 192.168.99.6 "GET 
/v2.1/flavors HTTP/1.1" status: 200 len: 586 time: 0.0198040
2017-11-08 09:29:32.999 1938 INFO nova.osapi_compute.wsgi.server 
[req-49747957-6405-4370-81ac-c9c4eaf80708 78688b319571447080aea1e62f59ecb5 
15e6f6ed6c9240168b85b561fcc8bc0f - default default] 192.168.99.6 "GET 
/v2.1/flavors/0 HTTP/1.1" status: 200 len: 752 time: 0.0131040
2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions 
[req-87716e72-824e-4618-9854-4be8ea96166b 78688b319571447080aea1e62f59ecb5 
15e6f6ed6c9240168b85b561fcc8bc0f - default default] Unexpected exception in API 
method: UnknownConnectionError: Unexpected exception for api_server = 
http://controller:9292/v2/images/6c861554-35b2-49eb-8dbf-a2a51bc1d07a: No 
connection adapters were found for 'api_server = 
http://controller:9292/v2/images/6c861554-35b2-49eb-8dbf-a2a51bc1d07a'
2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/extensions.py", line 336, 
in wrapped
2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 108, 
in wrapper
2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 108, 
in wrapper
2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 108, 
in wrapper
2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 108, 
in wrapper
2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 108, 
in wrapper
2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 108, 
in wrapper
2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 108, 
in wrapper
2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/compute/servers.py", line 
553, in create
2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions 
**create_kwargs)
2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/hooks.py", line 154, in inner
2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions rv = 
f(*args, **kwargs)
2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 1635, in create
2017-11-08 09:29:33.189 1938 ERROR nova.api.openstack.extensions tags=tags)
2017-11-08 
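The quoted URL in the error begins with the literal text "api_server = ", which suggests the [glance] api_servers option in nova.conf was set with that stray text pasted into the value (a guess from the log, not confirmed). python-requests rejects any URL whose scheme it cannot map to a connection adapter with exactly this message, without ever touching the network:

```python
# Reproduce the "No connection adapters were found" error with a URL that
# starts with stray text instead of a scheme. The URL below is illustrative.
import requests

try:
    requests.get("api_server = http://controller:9292/v2/images/abc")
except requests.exceptions.InvalidSchema as exc:
    print(exc)  # No connection adapters were found for ...
```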

[Yahoo-eng-team] [Bug 1730761] [NEW] Volumes not cleaned up on rescheduled VM deploys

2017-11-07 Thread Tyler Blakeslee
Public bug reported:

Description:
When attempting to deploy an instance, if the instance fails to deploy on 
multiple hosts, the volumes are not cleaned up after the last deploy fails. The 
issue appears to only affect instances which fail an initial deploy and are 
rescheduled, and still ultimately fail for either NoValidHost or 
MaxRetriesExceeded. Instances which fail the initial deploy are cleaned up 
correctly.

Steps to Reproduce:
 - Perform a virtual machine deploy that fails an initial deploy and attempts 
to reschedule.
 - Raise an error during the reschedule.

Expected Result:
 - Volumes should be cleaned up similarly to when exceptions are raised in
nova.compute.manager.ComputeManager._build_and_run_instance().

Actual Result:
 - Volumes are not cleaned up and remain even after the virtual machine deploy 
fails.

Additional Notes:
In nova.conductor.manager.ComputeTaskManager.build_instances(), an exception 
can be raised by the call to self._schedule_instances(). If this occurs, the 
exception is caught, but only the networks are cleaned up. No cleanup of the
volumes is performed.
 - 
https://github.com/openstack/nova/blob/master/nova/conductor/manager.py#L551-L565
In nova.compute.manager.ComputeManager._do_build_and_run_instance(), if an 
exception is raised by the call to self._build_and_run_instance() the volumes 
are cleaned up by a call to self._cleanup_volumes()
 - 
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L1854-L1866
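The gap described above can be sketched with a toy model (hypothetical names and signatures; not nova's real code). The proposed shape mirrors _do_build_and_run_instance(): clean up volumes in the same except block that already cleans up networks:

```python
# Record which cleanups ran, to show the proposed ordering.
cleaned = []

def _cleanup_allocated_networks(instance):
    cleaned.append(("network", instance))

def _cleanup_volumes(instance):
    cleaned.append(("volume", instance))

def _schedule_instances(instances):
    # Simulate the reschedule failing, e.g. NoValidHost / MaxRetriesExceeded.
    raise RuntimeError("NoValidHost")

def build_instances(instances):
    try:
        _schedule_instances(instances)
    except Exception:
        for inst in instances:
            _cleanup_allocated_networks(inst)  # existing behaviour
            _cleanup_volumes(inst)             # the missing step (proposed)
        raise

try:
    build_instances(["vm-1"])
except RuntimeError:
    pass

print(cleaned)  # [('network', 'vm-1'), ('volume', 'vm-1')]
```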

Environment:
 - nova version: openstack-nova-16.0.0

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1730761

Title:
  Volumes not cleaned up on rescheduled VM deploys

Status in OpenStack Compute (nova):
  New

Bug description:
  Description:
  When attempting to deploy an instance, if the instance fails to deploy on 
multiple hosts, the volumes are not cleaned up after the last deploy fails. The 
issue appears to only affect instances which fail an initial deploy and are 
rescheduled, and still ultimately fail for either NoValidHost or 
MaxRetriesExceeded. Instances which fail the initial deploy are cleaned up 
correctly.

  Steps to Reproduce:
   - Perform a virtual machine deploy that fails an initial deploy and attempts 
to reschedule.
   - Raise an error during the reschedule.

  Expected Result:
   - Volumes should be cleaned up similarly to when exceptions are raised in 
nova.compute.manager.ComputeManager._build_and_run_instance().

  Actual Result:
   - Volumes are not cleaned up and remain even after the virtual machine 
deploy fails.

  Additional Notes:
  In nova.conductor.manager.ComputeTaskManager.build_instances(), an exception 
can be raised by the call to self._schedule_instances(). If this occurs, the 
exception is caught, but only the networks are cleaned up. No clean up of the 
volumes is performed.
   - 
https://github.com/openstack/nova/blob/master/nova/conductor/manager.py#L551-L565
  In nova.compute.manager.ComputeManager._do_build_and_run_instance(), if an 
exception is raised by the call to self._build_and_run_instance() the volumes 
are cleaned up by a call to self._cleanup_volumes().
   - 
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L1854-L1866

  Environment:
   - nova version: openstack-nova-16.0.0

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1730761/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1730756] [NEW] Creating a VM with a non-ASCII name fails with a Unicode error

2017-11-07 Thread Victor Stinner
Public bug reported:

Creating a VM with a non-ASCII name in nova with the libvirt driver
fails with a UnicodeEncodeError.

Instance failed to spawn: UnicodeEncodeError: 'ascii' codec can't encode character u'\u266b' in position 294: ordinal not in range(128)
Traceback (most recent call last):
  File "/opt/stack/nova/nova/compute/manager.py", line 2208, in _build_resources
    yield resources
  File "/opt/stack/nova/nova/compute/manager.py", line 2001, in _build_and_run_instance
    block_device_info=block_device_info)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2821, in spawn
    destroy_disks_on_failure=True)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5289, in _create_domain_and_network
    destroy_disks_on_failure)
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5259, in _create_domain_and_network
    post_xml_callback=post_xml_callback)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5170, in _create_domain
    guest = libvirt_guest.Guest.create(xml, self._host)
  File "/opt/stack/nova/nova/virt/libvirt/guest.py", line 129, in create
    encodeutils.safe_decode(xml))
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)
  File "/opt/stack/nova/nova/virt/libvirt/guest.py", line 125, in create
    guest = host.write_instance_config(xml)
  File "/opt/stack/nova/nova/virt/libvirt/host.py", line 826, in write_instance_config
    domain = self.get_connection().defineXML(xml)
  File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 186, in doit
    result = proxy_call(self._autowrap, f, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 144, in proxy_call
    rv = execute(f, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 125, in execute
    six.reraise(c, e, tb)
  File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 83, in tworker
    rv = meth(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3817, in defineXML
    ret = libvirtmod.virDomainDefineXML(self._o, xml)
UnicodeEncodeError: 'ascii' codec can't encode character u'\u266b' in position 294: ordinal not in range(128)
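As a self-contained illustration (plain Python, not nova's actual patch): on
Python 2, handing a unicode XML document containing non-ASCII characters to a
bytes-expecting C binding triggers an implicit ASCII encode, which is exactly
the failure above. Encoding explicitly to UTF-8 first (the default behavior of
oslo_utils.encodeutils.safe_encode) avoids it:

```python
# Illustration only (not the nova fix): reproduce the ASCII-encode
# failure and show the explicit UTF-8 encode that avoids it.

xml = u'<domain><name>instance \u266b</name></domain>'

def to_libvirt_bytes(xml):
    """Encode the domain XML to UTF-8 bytes before handing it to a
    bytes-expecting binding, instead of relying on an implicit encode."""
    return xml.encode('utf-8')

# The implicit conversion shown in the traceback is equivalent to
#     xml.encode('ascii')
# which raises UnicodeEncodeError on u'\u266b'.
```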

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1730756

Title:
  Creating a VM with a non-ASCII name fails with a Unicode error

Status in OpenStack Compute (nova):
  New

Bug description:
  Creating a VM with a non-ASCII name in nova with the libvirt driver
  fails with a UnicodeEncodeError.

  Instance failed to spawn: UnicodeEncodeError: 'ascii' codec can't encode character u'\u266b' in position 294: ordinal not in range(128)
  Traceback (most recent call last):
    File "/opt/stack/nova/nova/compute/manager.py", line 2208, in _build_resources
      yield resources
    File "/opt/stack/nova/nova/compute/manager.py", line 2001, in _build_and_run_instance
      block_device_info=block_device_info)
    File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2821, in spawn
      destroy_disks_on_failure=True)
    File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5289, in _create_domain_and_network
      destroy_disks_on_failure)
    File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
      self.force_reraise()
    File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
      six.reraise(self.type_, self.value, self.tb)
    File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5259, in _create_domain_and_network
      post_xml_callback=post_xml_callback)
    File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5170, in _create_domain
      guest = libvirt_guest.Guest.create(xml, self._host)
    File "/opt/stack/nova/nova/virt/libvirt/guest.py", line 129, in create
      encodeutils.safe_decode(xml))
    File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
      self.force_reraise()
    File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
      six.reraise(self.type_, self.value, self.tb)
    File "/opt/stack/nova/nova/virt/libvirt/guest.py", line 125, in create
      guest = host.write_instance_config(xml)
    File "/opt/stack/nova/nova/virt/libvirt/host.py", line 826, in write_instance_config
      domain = self.get_connection().defineXML(xml)
    File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 186, in doit
      result = proxy_call(self._autowrap, f, *args, **kwargs)
    File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 144, in proxy_call
      rv = execute(f, *args, **kwargs)
    File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 125, in execute
      six.reraise(c, e, tb)
    File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 83, in tworker
      rv = meth(*args, **kwargs)
    File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3817, in defineXML
      ret = libvirtmod.virDomainDefineXML(self._o, xml)
  UnicodeEncodeError: 'ascii' codec can't encode character u'\u266b' in position 294: ordinal not in range(128)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1730756/+subscriptions

[Yahoo-eng-team] [Bug 1730730] [NEW] AllocationCandidates.get_by_filters returns garbage with only sharing providers

2017-11-07 Thread Eric Fried
Public bug reported:

If my placement database is set up with only sharing providers (no
"compute nodes"), the results are broken.

Steps to reproduce
==================
Here's one example:

SS1 has inventory in IPV4_ADDRESS, SRIOV_NET_VF, and DISK_GB.
SS2 has inventory in just DISK_GB.

Both are associated with the same aggregate; both have the
MISC_SHARES_VIA_AGGREGATE trait.

I make a request for resources in all three classes (in amounts that can
be satisfied by those inventories).

Expected result
===============
It is unclear what the expected result is.  There is a school of thought that 
we are only dealing with compute hosts right now, so we should never get back a 
candidate that doesn't include a compute host.  In that case, this scenario 
should yield *zero* candidates.

On the other hand, in the long-term vision of placement, there should be
no reason not to support scenarios where allocations are made *only*
against sharing providers (as long as they're in the same aggregate for
a given candidate).  In that case, this scenario should yield two
candidates:

One that gets all its resources from SS1;
One that gets DISK_GB from SS2, and IPV4_ADDRESS and SRIOV_NET_VF from SS1.

Actual result
=============
The actual result is three candidates:

One that gets all its resources from SS1 (cool);
One that gets DISK_GB from SS2 and IPV4_ADDRESS from SS1 (not cool - 
SRIOV_NET_VF isn't in here!)
One that gets DISK_GB from SS2 and SRIOV_NET_VF from SS1 (not cool - 
IPV4_ADDRESS isn't in here!)

I will post a functional test to demonstrate this.
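To make the two-candidate expectation concrete, here is a small brute-force
sketch (plain Python, nothing like placement's actual SQL) that only keeps
assignments in which every requested resource class is served by a provider
with inventory in that class -- the property the two broken candidates
violate:

```python
from itertools import product

# Brute-force sketch of valid allocation candidates: each requested
# resource class must be assigned to a provider that actually has
# inventory in it. Inventories mirror the SS1/SS2 scenario above.

inventories = {
    'SS1': {'IPV4_ADDRESS', 'SRIOV_NET_VF', 'DISK_GB'},
    'SS2': {'DISK_GB'},
}
requested = ['IPV4_ADDRESS', 'SRIOV_NET_VF', 'DISK_GB']

def candidates(inventories, requested):
    providers = sorted(inventories)
    results = set()
    # Try every way of assigning each requested class to a provider,
    # keeping only assignments where every class is actually served.
    for combo in product(providers, repeat=len(requested)):
        if all(requested[i] in inventories[p] for i, p in enumerate(combo)):
            results.add(tuple(zip(requested, combo)))
    return results
```

For this scenario the sketch yields exactly two candidates (everything from
SS1, or DISK_GB from SS2 with the rest from SS1); the "not cool" results in
the actual output drop a requested class entirely and would be filtered out
here.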

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: placement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1730730

Title:
  AllocationCandidates.get_by_filters returns garbage with only sharing
  providers

Status in OpenStack Compute (nova):
  New

Bug description:
  If my placement database is set up with only sharing providers (no
  "compute nodes"), the results are broken.

  Steps to reproduce
  ==================
  Here's one example:

  SS1 has inventory in IPV4_ADDRESS, SRIOV_NET_VF, and DISK_GB.
  SS2 has inventory in just DISK_GB.

  Both are associated with the same aggregate; both have the
  MISC_SHARES_VIA_AGGREGATE trait.

  I make a request for resources in all three classes (in amounts that
  can be satisfied by those inventories).

  Expected result
  ===============
  It is unclear what the expected result is.  There is a school of thought that 
we are only dealing with compute hosts right now, so we should never get back a 
candidate that doesn't include a compute host.  In that case, this scenario 
should yield *zero* candidates.

  On the other hand, in the long-term vision of placement, there should
  be no reason not to support scenarios where allocations are made
  *only* against sharing providers (as long as they're in the same
  aggregate for a given candidate).  In that case, this scenario should
  yield two candidates:

  One that gets all its resources from SS1;
  One that gets DISK_GB from SS2, and IPV4_ADDRESS and SRIOV_NET_VF from SS1.

  Actual result
  =============
  The actual result is three candidates:

  One that gets all its resources from SS1 (cool);
  One that gets DISK_GB from SS2 and IPV4_ADDRESS from SS1 (not cool - 
SRIOV_NET_VF isn't in here!)
  One that gets DISK_GB from SS2 and SRIOV_NET_VF from SS1 (not cool - 
IPV4_ADDRESS isn't in here!)

  I will post a functional test to demonstrate this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1730730/+subscriptions



[Yahoo-eng-team] [Bug 1575935] Re: Rebuild should also accept a configdrive

2017-11-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/517436
Committed: 
https://git.openstack.org/cgit/openstack/python-ironicclient/commit/?id=07290220762e023f1e197a7d854228d4d09fce7a
Submitter: Zuul
Branch: master

commit 07290220762e023f1e197a7d854228d4d09fce7a
Author: Mathieu Gagné 
Date:   Thu Nov 2 16:45:52 2017 -0400

Add ability to provide configdrive when rebuilding with OSC

Ironic introduces the API microversion 1.35 which allows
configdrive to be provided when setting the node's provisioning state
to "rebuild".

This change adds the ability to provide a config-drive
when rebuilding a node.

Closes-bug: #1575935
Change-Id: I950ac35bcde97b0f93225f80f989d42c5519faf2


** Changed in: python-ironicclient
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1575935

Title:
  Rebuild should also accept a configdrive

Status in Ironic:
  Fix Released
Status in OpenStack Compute (nova):
  Invalid
Status in python-ironicclient:
  Fix Released

Bug description:
  Users desire the ability to rebuild pre-existing hosts and update the
  configuration drive, especially in CI environments.

  
https://github.com/openstack/ironic/blob/master/ironic/api/controllers/v1/node.py#L518

  Presently does not pass a submitted configuration drive.  Compared
  with Line 516.

  That being said, logic further down in the deployment (both legacy
  iscsi deployment and full disk deployment) processes should be checked
  to ensure that nothing else is broken, however this is standard
  behavior PRESENTLY because this is how nova submits requests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1575935/+subscriptions



[Yahoo-eng-team] [Bug 1730637] [NEW] PortBindingFailed_Remote (HTTP 500)

2017-11-07 Thread Christos Tsalidis
Public bug reported:

Hi all,

Description
===========

I am running Newton on CentOS 7 and currently have one active instance.
This instance is connected to two vxlan networks (int and vxlannetwork)
and I would like to connect it to two additional networks, vlan and gre.
The networks are created, but after I create the port and try to attach
the interface I get an error. See below.


Steps to reproduce
==================

# nova list
+--------------------------------------+--------+--------+------------+-------------+---------------------------------------+
| ID                                   | Name   | Status | Task State | Power State | Networks                              |
+--------------------------------------+--------+--------+------------+-------------+---------------------------------------+
| 354caa3d-bddb-4396-ac0a-ef12e06b14e3 | cirros | ACTIVE | -          | Running     | int=10.0.0.5; vxlannetwork=10.0.10.12 |
+--------------------------------------+--------+--------+------------+-------------+---------------------------------------+

When I create a port on the vlan network and try to attach the interface,
I get the following error:

# neutron net-list
+--------------------------------------+--------------+----------------------------------------------------+
| id                                   | name         | subnets                                            |
+--------------------------------------+--------------+----------------------------------------------------+
| 625914e4-c490-4808-a309-8591c97d62ea | ext          | 6dc90b28-4b84-456f-bc95-1b3161c77ab5 10.0.100.0/24 |
| 7974307b-d4e8-4b6b-9354-62ffb0d148e2 | vxlannetwork | e0494841-73dd-4c35-ad07-b8b7a0ae6fdb 10.0.10.0/24  |
| afd58b60-64c4-4ed3-9da9-c107bf40f593 | vlannetwork  | 2549725d-9847-4d03-a9ab-02be5da3907b 10.0.20.0/24  |
| da177468-f3d8-45ef-a9d7-4ad03960dfab | int          | 934d2862-bdb0-4369-a290-3e8dae7c 10.0.0.0/24       |
| de00479b-0777-4589-b20c-efe0858a9041 | grenetwork   | e9ad3685-3935-4c5c-bc6a-64d8fe0f09c3 10.0.30.0/24  |
+--------------------------------------+--------------+----------------------------------------------------+

# neutron port-create afd58b60-64c4-4ed3-9da9-c107bf40f593
Created a new port:
+-----------------------+----------------------------------------------------------------------------------+
| Field                 | Value                                                                            |
+-----------------------+----------------------------------------------------------------------------------+
| admin_state_up        | True                                                                             |
| allowed_address_pairs |                                                                                  |
| binding:host_id       |                                                                                  |
| binding:profile       | {}                                                                               |
| binding:vif_details   | {}                                                                               |
| binding:vif_type      | unbound                                                                          |
| binding:vnic_type     | normal                                                                           |
| created_at            | 2017-11-06T19:00:40Z                                                             |
| description           |                                                                                  |
| device_id             |                                                                                  |
| device_owner          |                                                                                  |
| extra_dhcp_opts       |                                                                                  |
| fixed_ips             | {"subnet_id": "2549725d-9847-4d03-a9ab-02be5da3907b", "ip_address": "10.0.20.9"} |
| id                    | 2795d640-adfe-417e-a598-68b7336b19fa                                             |
| mac_address           | fa:16:3e:cc:27:f9                                                                |
| name                  |                                                                                  |
| network_id            | afd58b60-64c4-4ed3-9da9-c107bf40f593                                             |
| project_id            | 2dda4fa3451947808fec2b15ace75719                                                 |
| revision_number       | 4                                                                                |
| security_groups       | 68c3297c-82b9-4a03-9f7c-6e9d904d143a                                             |
| status                | DOWN                                                                             |
| tenant_id             | 2dda4fa3451947808fec2b15ace75719                                                 |
| updated_at

[Yahoo-eng-team] [Bug 1377781] Re: VMWare: should use ShutDownGuest to do grace OS shutdown and then force power off if timeout passed

2017-11-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/494169
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=24aaf8752db25e1e71f4a14502f0ea3b1ab1b7de
Submitter: Zuul
Branch: master

commit 24aaf8752db25e1e71f4a14502f0ea3b1ab1b7de
Author: Thomas Kaergel 
Date:   Fri Mar 4 10:25:45 2016 +0100

VMware: add support for graceful shutdown of instances

Change-Id: I40643e9d358be89c87a0311b1c1fd7718ec75361
Closes-Bug: #1377781
Co-Authored-By: David Rabel 


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1377781

Title:
  VMWare: should use ShutDownGuest to do grace OS shutdown and then
  force power off if timeout passed

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  With the current VMware driver, power_off explicitly calls
  PowerOffVM_Task.

  In this case, if a virtual machine is writing to disk when it receives
  a Power Off command, data corruption may occur.

  The SDK actually provides another method, ShutdownGuest, which issues
  a command to the guest operating system asking it to perform a clean
  shutdown of all services.

  So the suggestion is to call ShutdownGuest first and wait for some
  interval; if the power state is still up after that, force a power off.
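
  The suggested flow can be sketched as follows (a hypothetical session
  object stands in for the real vSphere API bindings; only the
  ShutdownGuest/PowerOffVM_Task names come from the report, the rest are
  placeholders):

```python
import time

# Sketch of the graceful-shutdown flow: ask the guest OS to shut down
# cleanly, poll the power state until a deadline, and only issue a hard
# power-off if the guest never goes down. The session object and its
# method names are stand-ins, not the real oslo.vmware API.

def power_off(session, vm_ref, timeout=60, interval=1.0,
              clock=time.monotonic, sleep=time.sleep):
    session.shutdown_guest(vm_ref)        # clean, in-guest shutdown
    deadline = clock() + timeout
    while clock() < deadline:
        if session.get_power_state(vm_ref) == 'poweredOff':
            return 'clean'
        sleep(interval)
    session.power_off_vm_task(vm_ref)     # hard power-off as a fallback
    return 'forced'
```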

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1377781/+subscriptions



[Yahoo-eng-team] [Bug 1730605] [NEW] neutron qos bindlimit by ovs is not accurate

2017-11-07 Thread Zachary Ma
Public bug reported:

1. openstack version: pike
2. neutron --version 6.5.0
3. ovs-vsctl (Open vSwitch) 2.7.2
4. iperf3-3.1.3-1.fc24.x86_64.rpm

egress bw-limiter is never accurate

ingress bw-limiter is also not accurate,
but with ovs 2.5, ingress is accurate!

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: qos

** Tags added: qos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1730605

Title:
  neutron qos bindlimit by ovs is not accurate

Status in neutron:
  New

Bug description:
  1. openstack version: pike
  2. neutron --version 6.5.0
  3. ovs-vsctl (Open vSwitch) 2.7.2
  4. iperf3-3.1.3-1.fc24.x86_64.rpm

  egress bw-limiter is never accurate

  ingress bw-limiter is also not accurate,
  but with ovs 2.5, ingress is accurate!

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1730605/+subscriptions
