[Yahoo-eng-team] [Bug 1775563] Re: Install and configure a compute node for Ubuntu in nova

2018-06-13 Thread jichenjc
Marking as Invalid based on the above comments.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1775563

Title:
  Install and configure a compute node for Ubuntu in nova

Status in OpenStack Compute (nova):
  Invalid

Bug description:

  This bug tracker is for errors with the documentation; use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [x] This doc is inaccurate in this way: __

  [keystone_authtoken]
  # ...
  auth_uri = http://controller:5000
  auth_url = http://controller:35357
  memcached_servers = controller:11211
  auth_type = password
  project_domain_name = default
  user_domain_name = default
  project_name = service
  username = nova
  password = NOVA_PASS

  
  The line auth_url = http://controller:35357 must be changed to
  auth_url = http://controller:5000 for this to work.
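
  As a quick way to confirm which endpoint actually serves the identity
  API, a minimal sketch (assuming Python 3 and that the host resolves the
  "controller" name used in the config above):

      import json
      import urllib.request

      # Probe both endpoints mentioned above; whichever returns an identity
      # version document is the one auth_url should point at.
      for url in ("http://controller:5000/v3", "http://controller:35357/v3"):
          try:
              with urllib.request.urlopen(url, timeout=5) as resp:
                  doc = json.load(resp)
                  print(url, "->", doc.get("version", {}).get("status"))
          except OSError as exc:
              print(url, "-> unreachable:", exc)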

  
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 16.1.4.dev15 on 2018-06-04 05:37
  SHA: 2c9c4a09cb5fd31ccff368315534eaa788e90e67
  Source: 
https://git.openstack.org/cgit/openstack/nova/tree/doc/source/install/compute-install-ubuntu.rst
  URL: https://docs.openstack.org/nova/pike/install/compute-install-ubuntu.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1775563/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1776797] [NEW] test_convert_image_with_errors fails with OSError: [Errno 2] No such file or directory

2018-06-13 Thread Corey Bryant
Public bug reported:

test_convert_image_with_errors fails with OSError: [Errno 2] No such
file or directory

See traceback here: https://paste.ubuntu.com/p/bQ6Z9QPXCY/

where args = ['qemu-img', 'convert', '-t', 'none', '-O', 'raw', '-f',
'qcow2', '/path/that/does/not/exist', '/other/path/that/does/not/exist']

It seems the execute mock is incorrect.
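
A minimal sketch of the kind of test the report implies (illustrative
only, not nova's actual test code; convert_image and its signature here
are hypothetical stand-ins): patch the execute helper so qemu-img is
never really run, then assert on the arguments it would have received.

    from unittest import mock

    def convert_image(source, dest, out_format, in_format, execute):
        # Hypothetical stand-in for the code under test: it only builds the
        # command line and delegates running it to the injected helper.
        execute('qemu-img', 'convert', '-t', 'none',
                '-O', out_format, '-f', in_format, source, dest)

    def test_convert_image_with_mocked_execute():
        fake_execute = mock.Mock()
        convert_image('/path/that/does/not/exist',
                      '/other/path/that/does/not/exist',
                      'raw', 'qcow2', execute=fake_execute)
        # The real binary is never invoked, so the missing paths cannot
        # raise OSError; we only check the command that would have run.
        fake_execute.assert_called_once_with(
            'qemu-img', 'convert', '-t', 'none', '-O', 'raw', '-f', 'qcow2',
            '/path/that/does/not/exist', '/other/path/that/does/not/exist')

    test_convert_image_with_mocked_execute()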

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1776797

Title:
  test_convert_image_with_errors fails with OSError: [Errno 2] No such
  file or directory

Status in OpenStack Compute (nova):
  New

Bug description:
  test_convert_image_with_errors fails with OSError: [Errno 2] No such
  file or directory

  See traceback here: https://paste.ubuntu.com/p/bQ6Z9QPXCY/

  where args = ['qemu-img', 'convert', '-t', 'none', '-O', 'raw', '-f',
  'qcow2', '/path/that/does/not/exist',
  '/other/path/that/does/not/exist']

  It seems the execute mock is incorrect.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1776797/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1775797] Re: The mac table size of neutron bridges (br-tun, br-int, br-*) is too small by default and eventually makes openvswitch explode

2018-06-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/573696
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=1f8378e0ac4b8c3fc4670144e6efc51940d796ad
Submitter: Zuul
Branch: master

commit 1f8378e0ac4b8c3fc4670144e6efc51940d796ad
Author: Slawek Kaplonski 
Date:   Fri Jun 8 15:37:39 2018 +0200

[OVS] Add mac-table-size to be set on each ovs bridge

By default, the number of MAC addresses which OVS stores in memory
is quite low: 2048.

Any eviction of a MAC learning table entry triggers revalidation.
Such revalidation is very costly, so it causes high CPU usage by the
ovs-vswitchd process.

To work around this problem, a higher value of the mac-table-size
option can be set for each bridge. Then this revalidation will happen
less often and CPU usage will be lower.
This patch adds a config option for neutron-openvswitch-agent to allow
users to tune this setting on bridges managed by the agent.
By default this value is set to 5, which should be enough for most
systems.

Change-Id: If628f52d75c2b5fec87ad61e0219b3286423468c
Closes-Bug: #1775797


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1775797

Title:
  The mac table size of neutron bridges (br-tun, br-int, br-*) is too
  small by default and eventually makes openvswitch explode

Status in neutron:
  Fix Released

Bug description:
  Description of problem:

  the CPU utilization of ovs-vswitchd is high without DPDK enabled

   PID USER  PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
  1512 root  10 -10 4352840 793864  12008 R  1101  0.3  15810:26 ovs-vswitchd

  At the same time we were observing failures to send packets (ICMP)
  over the VXLAN tunnel; we think this might be related to the high CPU
  usage.

  
  --- Reproducer and analysis on ovs side done by Jiri Benc:

  Reproducer:

  Create an ovs bridge:

  --
  ovs-vsctl add-br ovs0
  ip l s ovs0 up
  --

  Save this to a file named "reproducer.py":

  --
  #!/usr/bin/python
  import sys
  from scapy.all import *

  # Build <count> random (MAC, IP) pairs, then replay them forever so the
  # bridge keeps learning new addresses and overflowing its MAC table.
  data = [(str(RandMAC()), str(RandIP())) for i in range(int(sys.argv[1]))]

  s = conf.L2socket(iface="ovs0")
  while True:
      for mac, ip in data:
          p = Ether(src=mac, dst=mac) / IP(src=ip, dst=ip)
          s.send(p)
  --

  Run the reproducer:

  ./reproducer.py 5000

  
  
  The problem is how flow revalidation works in ovs. There are several 
'revalidator' threads launched. They should normally sleep (modulo waking every 
0.5 second just to do nothing) and they wake if anything of interest happens 
(udpif_revalidator => poll_block). On every wake up, each revalidator thread 
checks whether flow revalidation is needed and if it is, it does the 
revalidation.

  The revalidation is very costly with high number of flows. I also
  suspect there's a lot of contention between the revalidator threads.

  The flow revalidation is triggered by many things. What is of interest
  for us is that any eviction of a MAC learning table entry triggers
  revalidation.

  The reproducer script repeatedly sends the same 5000 packets, all of
  them with a different MAC address. This causes constant overflows of
  the MAC learning table and constant revalidation. The revalidator
  threads are being immediately woken up and are busy looping the
  revalidation.

  Which is exactly the pattern from the customers' data: there are
  16000+ flows and the packet capture shows that the packets are
  repeating every second.

  A quick fix is to increase the MAC learning table size:

  ovs-vsctl set bridge <bridge> other-config:mac-table-size=5

  This should lower the CPU usage substantially; allow a few seconds
  for things to settle down.
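
  A minimal sketch of applying the same workaround to every bridge at
  once, assuming ovs-vsctl is on PATH and sufficient privileges; the
  table size used here is an assumption, not the value from the patch:

      import subprocess

      MAC_TABLE_SIZE = "50000"  # assumption: pick a value suited to your scale

      # Raise the MAC learning table size on every OVS bridge, mirroring the
      # manual ovs-vsctl command quoted above.
      for br in subprocess.check_output(["ovs-vsctl", "list-br"], text=True).split():
          subprocess.check_call(
              ["ovs-vsctl", "set", "bridge", br,
               "other-config:mac-table-size=%s" % MAC_TABLE_SIZE])
          print("set mac-table-size=%s on %s" % (MAC_TABLE_SIZE, br))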

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1775797/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1776778] [NEW] Floating IPs broken after upgrade to Centos 7.5 - DNAT not working

2018-06-13 Thread Arjun Baindur
Public bug reported:

Since upgrading to CentOS 7.5, floating IP functionality has been
completely broken. Packets arrive inbound to qrouter from the fip
namespace via the rfp interface, but are not DNAT'd or routed, as we see
nothing going out the qr- interface. Outbound packets leaving the VM are
fine, but all responses are again dropped inbound to qrouter after
arriving on rfp. It appears the DNAT rules in the "-t nat" iptables
within qrouter are not being hit (packet counters are zero).

SNAT functionality works when we remove the floating IP from the VM (the
VM can then ping outbound), so the problem seems isolated to DNAT /
qrouter receiving packets from the fip namespace.

We are able to reproduce this 100% consistently, whenever we update our
working centos 7.2 / centos 7.4 hosts to 7.5. Nothing changes except a
"yum update". All routes, rules, iptables are identical on a working
older host vs. broken centos 7.5 host.

I added some basic rules to log packets at the top of the PREROUTING
chain in the raw, mangle, and nat tables, filtering either by my source
IP or by all packets on the rfp ingress interface (-i). While the packet
counters increment for raw and mangle, they remain at 0 for nat,
indicating the nat table is not invoked for PREROUTING.
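
To reproduce that check, a minimal sketch (assuming root and the ip and
iptables tools; the namespace name below is the one from this report and
should be replaced with your own):

    import subprocess

    NETNS = "qrouter-f48d5536-eefa-4410-b17b-1b3d14426323"  # replace as needed

    # Dump the PREROUTING chain of each table inside the router namespace;
    # if the nat counters stay at zero while raw/mangle increment, the nat
    # table is not being traversed, matching the symptom described above.
    for table in ("raw", "mangle", "nat"):
        out = subprocess.check_output(
            ["ip", "netns", "exec", NETNS,
             "iptables", "-t", table, "-L", "PREROUTING", "-v", "-n", "-x"],
            text=True)
        print("---", table)
        print(out)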

Floating IP = 10.8.17.52, Fixed IP = 192.168.94.9.

[root@centos7-neutron-template ~]# ip netns exec 
qrouter-f48d5536-eefa-4410-b17b-1b3d14426323 tcpdump -l -evvvnn -i 
rfp-f48d5536-e
tcpdump: listening on rfp-f48d5536-e, link-type EN10MB (Ethernet), capture size 
262144 bytes
13:42:00.345440 7a:3b:f1:c7:5d:4e > aa:24:89:9e:c8:f0, ethertype IPv4 (0x0800), 
length 98: (tos 0x0, ttl 62, id 1832, offset 0, flags [DF], proto ICMP (1), 
length 84)
10.4.165.22 > 10.8.17.52: ICMP echo request, id 5771, seq 1, length 64
13:42:01.344047 7a:3b:f1:c7:5d:4e > aa:24:89:9e:c8:f0, ethertype IPv4 (0x0800), 
length 98: (tos 0x0, ttl 63, id 1833, offset 0, flags [DF], proto ICMP (1), 
length 84)
10.4.165.22 > 10.8.17.52: ICMP echo request, id 5771, seq 2, length 64
13:42:02.398300 7a:3b:f1:c7:5d:4e > aa:24:89:9e:c8:f0, ethertype IPv4 (0x0800), 
length 98: (tos 0x0, ttl 63, id 1834, offset 0, flags [DF], proto ICMP (1), 
length 84)
10.4.165.22 > 10.8.17.52: ICMP echo request, id 5771, seq 3, length 64
13:42:03.344345 7a:3b:f1:c7:5d:4e > aa:24:89:9e:c8:f0, ethertype IPv4 (0x0800), 
length 98: (tos 0x0, ttl 63, id 1835, offset 0, flags [DF], proto ICMP (1), 
length 84)
10.4.165.22 > 10.8.17.52: ICMP echo request, id 5771, seq 4, length 64
^C
4 packets captured
4 packets received by filter
0 packets dropped by kernel
[root@centos7-neutron-template ~]# ip netns exec 
qrouter-f48d5536-eefa-4410-b17b-1b3d14426323 tcpdump -l -evvvnn -i 
qr-295f9857-21
tcpdump: listening on qr-295f9857-21, link-type EN10MB (Ethernet), capture size 
262144 bytes

***CRICKETS***

[root@centos7-neutron-template ~]# ip netns exec 
qrouter-f48d5536-eefa-4410-b17b-1b3d14426323 ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: rfp-f48d5536-e:  mtu 1500 qdisc noqueue 
state UP group default qlen 1000
link/ether aa:24:89:9e:c8:f0 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 169.254.106.114/31 scope global rfp-f48d5536-e
   valid_lft forever preferred_lft forever
inet6 fe80::a824:89ff:fe9e:c8f0/64 scope link
   valid_lft forever preferred_lft forever
59: qr-295f9857-21:  mtu 1500 qdisc noqueue 
state UNKNOWN group default qlen 1000
link/ether fa:16:3e:3d:f1:12 brd ff:ff:ff:ff:ff:ff
inet 192.168.94.1/24 brd 192.168.94.255 scope global qr-295f9857-21
   valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe3d:f112/64 scope link
   valid_lft forever preferred_lft forever

[root@centos7-neutron-template ~]# ip netns exec 
qrouter-f48d5536-eefa-4410-b17b-1b3d14426323 ip route
169.254.106.114/31 dev rfp-f48d5536-e proto kernel scope link src 
169.254.106.114
192.168.94.0/24 dev qr-295f9857-21 proto kernel scope link src 192.168.94.1
[root@centos7-neutron-template ~]# ip netns exec 
qrouter-f48d5536-eefa-4410-b17b-1b3d14426323 ip rule
0:  from all lookup local
32766:  from all lookup main
32767:  from all lookup default
57481:  from 192.168.94.9 lookup 16
3232259585: from 192.168.94.1/24 lookup 3232259585
[root@centos7-neutron-template ~]# ip netns exec 
qrouter-f48d5536-eefa-4410-b17b-1b3d14426323 ip route show table 16
default via 169.254.106.115 dev rfp-f48d5536-e
[root@centos7-neutron-template ~]# ip netns exec 
qrouter-f48d5536-eefa-4410-b17b-1b3d14426323 ip neighbor
169.254.106.115 dev rfp-f48d5536-e lladdr 7a:3b:f1:c7:5d:4e STALE
192.168.94.4 dev qr-295f9857-21 lladdr fa:16:3e:cf:a1:08 PERMANENT
192.168.94.13 dev qr-295f9857-21 lladdr fa:16:3e:91:37:54 PERMANENT
192.168.94.2 dev qr-295f9857-21 lladdr fa:16:3e:b2:18:5e PERMANENT
192.168.94.9 dev 

[Yahoo-eng-team] [Bug 1776587] Re: Configure neutron services on compute and controller node., keystone listens port 5000 but document instructs to configure the service at 35357

2018-06-13 Thread johnpham
hi @brian-haley,

The document that I was looking at was for the Queens release, but it still uses
port 35357. Can you please leave me a link to the document where this was fixed?
I'll need it in the future.

Thanks in advance, really appreciate it!

** Changed in: neutron
   Status: Invalid => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1776587

Title:
  Configure neutron services on compute and controller node., keystone
  listens port 5000 but document instructs to configure the service at
  35357

Status in neutron:
  Incomplete

Bug description:
  Hi everyone,

  I was following the document to install and configure the neutron
  services on a controller and a compute node. The document specified
  that the services authenticate at http://controller:35357

  "https://docs.openstack.org/neutron/queens/install/compute-install-
  ubuntu.html"

  The services weren't working and "openstack network agent list" returned an
  empty table.
  However, I realised the keystone service only listens on port 5000 (/v3).

  I also checked whether any process is listening on port 35357 on the
  controller, but got nothing:
  "lsof -i :35357" returned no output.

  This might be an issue with the document, but I am not 100% positive,
  so it would be great if someone can check and confirm this.

  setup: 2 nodes running Ubuntu 16.04
  ---
  Release: 12.0.3.dev25 on 2018-06-09 01:18
  SHA: 9eef1db160521076d8243f1980e681f0f04ecbc6
  Source: 
https://git.openstack.org/cgit/openstack/neutron/tree/doc/source/install/compute-install-ubuntu.rst
  URL: 
https://docs.openstack.org/neutron/queens/install/compute-install-ubuntu.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1776587/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1776743] [NEW] Allocation healer should ignore deleted instances

2018-06-13 Thread Mathieu Gagné
Public bug reported:

When running this command:

  nova-manage placement heal_allocations

It fails with an InstanceNotFound exception when trying to heal deleted
instances.

The healer should ignore deleted instances because there is nothing to
heal for them.
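
A minimal sketch of the intended behaviour (illustrative only; the
helper names are hypothetical, not nova's code): a deleted instance
raises InstanceNotFound and is simply skipped.

    class InstanceNotFound(Exception):
        """Stand-in for nova.exception.InstanceNotFound."""

    def heal_allocations(instance_uuids, load_instance, heal_one):
        # load_instance/heal_one are injected callables; instances that no
        # longer exist are skipped, since there is nothing to heal for them.
        healed, skipped = 0, 0
        for uuid in instance_uuids:
            try:
                instance = load_instance(uuid)
            except InstanceNotFound:
                skipped += 1
                continue
            heal_one(instance)
            healed += 1
        return healed, skipped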

** Affects: nova
 Importance: High
 Assignee: Matt Riedemann (mriedem)
 Status: Triaged


** Tags: nova-manage

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1776743

Title:
  Allocation healer should ignore deleted instances

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  When running this command:

nova-manage placement heal_allocations

  It fails with an InstanceNotFound exception when trying to heal
  deleted instances.

  The healer should ignore deleted instances because there is nothing to
  heal for them.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1776743/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1776701] [NEW] ec2: xenial unnecessary openstack datasource probes during discovery

2018-06-13 Thread Chad Smith
Public bug reported:

Now that the OpenStack datasource is detected in the init-local timeframe,
that discovery can occur before the Ec2 datasource. As a result, cloud-init
integration tests started failing because of unexpected WARNINGs in
cloud-init.log [1].

The unexpected warning message is emitted by OpenStack.get_data probing
for the OpenStack metadata service when it should instead check whether
the environment is EC2 and return False early to avoid wasting cycles.

Unexpected warning: ['2018-06-10 01:11:08,101 - util.py[WARNING]: No
active metadata service found']
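
A minimal sketch of the early-exit pattern being described (cloud-init's
real platform detection differs; the DMI field and strings checked here
are assumptions for illustration only):

    def read_dmi_product_name():
        # Illustrative: read one platform identifier; the exact DMI fields
        # and values cloud-init inspects are different.
        try:
            with open("/sys/class/dmi/id/product_name") as f:
                return f.read().strip().lower()
        except OSError:
            return ""

    def should_probe_openstack_metadata():
        # Return False early on a platform that is clearly not OpenStack,
        # so the costly (and WARNING-producing) metadata probe is skipped.
        product = read_dmi_product_name()
        return not ("amazon" in product or product.startswith("ec2"))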

References:
[1] failed jenkins integration test: 
https://jenkins.ubuntu.com/server/view/cloud-init,%20curtin,%20streams/job/cloud-init-integration-ec2-x/23/console

** Affects: cloud-init
 Importance: High
 Assignee: Chad Smith (chad.smith)
 Status: In Progress

** Changed in: cloud-init
   Importance: Undecided => High

** Changed in: cloud-init
 Assignee: (unassigned) => Chad Smith (chad.smith)

** Changed in: cloud-init
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1776701

Title:
  ec2: xenial unnecessary openstack datasource probes during discovery

Status in cloud-init:
  In Progress

Bug description:
  Now that the OpenStack datasource is detected in the init-local
  timeframe, that discovery can occur before the Ec2 datasource. As a
  result, cloud-init integration tests started failing because of
  unexpected WARNINGs in cloud-init.log [1].

  The unexpected warning message is emitted by OpenStack.get_data
  probing for the OpenStack metadata service when it should instead
  check whether the environment is EC2 and return False early to avoid
  wasting cycles.

  Unexpected warning: ['2018-06-10 01:11:08,101 - util.py[WARNING]: No
  active metadata service found']

  References:
  [1] failed jenkins integration test: 
https://jenkins.ubuntu.com/server/view/cloud-init,%20curtin,%20streams/job/cloud-init-integration-ec2-x/23/console

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1776701/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1776697] [NEW] Horizon Instance Launch displays duplicate instances on instances page

2018-06-13 Thread wondernath
Public bug reported:

Steps to reproduce:
1. Navigate to Project > Compute > Instances.
2. Launch Instance with the following info:
- Instance Name : test
- availability zone : nova
- count : 1
- source : Image : cirros-0.3.5-x86_64-disk
- flavor : m1.tiny
3. Click on Launch Instance

Expected:
One instance should be launched.

Actual:
One instance is launched; however, multiple (duplicate) instance rows are displayed.
On refreshing or re-navigating to the Instances page, only one instance is displayed.

Notes :
- tested against devstack master
- Screenshot attached

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: horizon-core

** Attachment added: "Horizon_display_duplicate_instances_row.PNG"
   
https://bugs.launchpad.net/bugs/1776697/+attachment/5152138/+files/Horizon_display_duplicate_instances_row.PNG

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1776697

Title:
  Horizon Instance Launch displays duplicate instances on instances page

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps to reproduce:
  1. Navigate to Project > Compute > Instances.
  2. Launch Instance with the following info:
  - Instance Name : test
  - availability zone : nova
  - count : 1
  - source : Image : cirros-0.3.5-x86_64-disk
  - flavor : m1.tiny
  3. Click on Launch Instance

  Expected:
  One instance should be launched.

  Actual:
  One instance is launched; however, multiple (duplicate) instance rows are displayed.
  On refreshing or re-navigating to the Instances page, only one instance is displayed.

  Notes :
  - tested against devstack master
  - Screenshot attached

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1776697/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1776587] Re: Configure neutron services on compute and controller node., keystone listens port 5000 but document instructs to configure the service at 35357

2018-06-13 Thread Brian Haley
This has already been fixed.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1776587

Title:
  Configure neutron services on compute and controller node., keystone
  listens port 5000 but document instructs to configure the service at
  35357

Status in neutron:
  Invalid

Bug description:
  Hi everyone,

  I was following the document to install and configure the neutron
  services on a controller and a compute node. The document specified
  that the services authenticate at http://controller:35357

  "https://docs.openstack.org/neutron/queens/install/compute-install-
  ubuntu.html"

  The services weren't working and "openstack network agent list" returned an
  empty table.
  However, I realised the keystone service only listens on port 5000 (/v3).

  I also checked whether any process is listening on port 35357 on the
  controller, but got nothing:
  "lsof -i :35357" returned no output.

  This might be an issue with the document, but I am not 100% positive,
  so it would be great if someone can check and confirm this.

  setup: 2 nodes running Ubuntu 16.04
  ---
  Release: 12.0.3.dev25 on 2018-06-09 01:18
  SHA: 9eef1db160521076d8243f1980e681f0f04ecbc6
  Source: 
https://git.openstack.org/cgit/openstack/neutron/tree/doc/source/install/compute-install-ubuntu.rst
  URL: 
https://docs.openstack.org/neutron/queens/install/compute-install-ubuntu.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1776587/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1776684] Re: MultipleCreateTestJSON.test_multiple_create intermittently fails for cells v1 due to server name check change

2018-06-13 Thread Matt Riedemann
Restored the tempest revert: https://review.openstack.org/#/c/575132/

** Also affects: tempest
   Importance: Undecided
   Status: New

** Changed in: tempest
   Status: New => In Progress

** Changed in: tempest
 Assignee: (unassigned) => Matt Riedemann (mriedem)

** Changed in: nova
   Status: Confirmed => Invalid

** Changed in: tempest
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1776684

Title:
  MultipleCreateTestJSON.test_multiple_create intermittently fails for
  cells v1 due to server name check change

Status in OpenStack Compute (nova):
  Invalid
Status in tempest:
  In Progress

Bug description:
  http://logs.openstack.org/75/486475/38/gate/nova-cells-v1/2c2a566/job-
  output.txt.gz#_2018-06-13_13_26_17_075329

  2018-06-13 13:26:17.110798 | primary | Captured traceback:
  2018-06-13 13:26:17.110851 | primary | ~~~
  2018-06-13 13:26:17.110932 | primary | Traceback (most recent call last):
  2018-06-13 13:26:17.03 | primary |   File 
"tempest/api/compute/servers/test_multiple_create.py", line 43, in 
test_multiple_create
  2018-06-13 13:26:17.111218 | primary | self.assertEqual(set(['VM-1', 
'VM-2']), server_names)
  2018-06-13 13:26:17.111435 | primary |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 411, in assertEqual
  2018-06-13 13:26:17.111542 | primary | self.assertThat(observed, 
matcher, message)
  2018-06-13 13:26:17.111758 | primary |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
  2018-06-13 13:26:17.111830 | primary | raise mismatch_error
  2018-06-13 13:26:17.111992 | primary | 
testtools.matchers._impl.MismatchError: set(['VM-2', 'VM-1']) != set([u'VM'])

  This started failing after this change was made:

  https://review.openstack.org/#/c/569199/

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22self.assertEqual(set(%5B'VM-1'%2C%20'VM-2'%5D)%2C%20server_names)%5C%22%20AND%20tags%3A%5C%22console%5C%22=7d

  8 hits in 24 hours

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1776684/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1776684] Re: MultipleCreateTestJSON.test_multiple_create intermittently fails for cells v1 due to server name check change

2018-06-13 Thread Matt Riedemann
** No longer affects: tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1776684

Title:
  MultipleCreateTestJSON.test_multiple_create intermittently fails for
  cells v1 due to server name check change

Status in OpenStack Compute (nova):
  Invalid
Status in tempest:
  In Progress

Bug description:
  http://logs.openstack.org/75/486475/38/gate/nova-cells-v1/2c2a566/job-
  output.txt.gz#_2018-06-13_13_26_17_075329

  2018-06-13 13:26:17.110798 | primary | Captured traceback:
  2018-06-13 13:26:17.110851 | primary | ~~~
  2018-06-13 13:26:17.110932 | primary | Traceback (most recent call last):
  2018-06-13 13:26:17.03 | primary |   File 
"tempest/api/compute/servers/test_multiple_create.py", line 43, in 
test_multiple_create
  2018-06-13 13:26:17.111218 | primary | self.assertEqual(set(['VM-1', 
'VM-2']), server_names)
  2018-06-13 13:26:17.111435 | primary |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 411, in assertEqual
  2018-06-13 13:26:17.111542 | primary | self.assertThat(observed, 
matcher, message)
  2018-06-13 13:26:17.111758 | primary |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
  2018-06-13 13:26:17.111830 | primary | raise mismatch_error
  2018-06-13 13:26:17.111992 | primary | 
testtools.matchers._impl.MismatchError: set(['VM-2', 'VM-1']) != set([u'VM'])

  This started failing after this change was made:

  https://review.openstack.org/#/c/569199/

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22self.assertEqual(set(%5B'VM-1'%2C%20'VM-2'%5D)%2C%20server_names)%5C%22%20AND%20tags%3A%5C%22console%5C%22=7d

  8 hits in 24 hours

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1776684/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1776684] Re: MultipleCreateTestJSON.test_multiple_create intermittently fails for cells v1 due to server name check change

2018-06-13 Thread Matt Riedemann
This is likely the reason for the failure; cells v1 doesn't honor the
multi-create instance name contract:

https://github.com/openstack/nova/blob/dd87118acc2bf57f235cb287ed0ee12736263ecb/nova/compute/api.py#L1419
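
For context, a small sketch of the naming contract the tempest test
asserts (illustrative, not nova's implementation): with min_count > 1
each server gets the requested name plus a -N suffix, which cells v1
apparently does not honor.

    def multi_create_names(base_name, count):
        # A single instance keeps the requested name as-is; multiple
        # instances get -1, -2, ... appended, which is what the tempest
        # assertion set(['VM-1', 'VM-2']) relies on.
        if count <= 1:
            return [base_name]
        return ["%s-%d" % (base_name, i) for i in range(1, count + 1)]

    assert multi_create_names("VM", 1) == ["VM"]
    assert multi_create_names("VM", 2) == ["VM-1", "VM-2"]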

** Also affects: tempest
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1776684

Title:
  MultipleCreateTestJSON.test_multiple_create intermittently fails for
  cells v1 due to server name check change

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  http://logs.openstack.org/75/486475/38/gate/nova-cells-v1/2c2a566/job-
  output.txt.gz#_2018-06-13_13_26_17_075329

  2018-06-13 13:26:17.110798 | primary | Captured traceback:
  2018-06-13 13:26:17.110851 | primary | ~~~
  2018-06-13 13:26:17.110932 | primary | Traceback (most recent call last):
  2018-06-13 13:26:17.03 | primary |   File 
"tempest/api/compute/servers/test_multiple_create.py", line 43, in 
test_multiple_create
  2018-06-13 13:26:17.111218 | primary | self.assertEqual(set(['VM-1', 
'VM-2']), server_names)
  2018-06-13 13:26:17.111435 | primary |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 411, in assertEqual
  2018-06-13 13:26:17.111542 | primary | self.assertThat(observed, 
matcher, message)
  2018-06-13 13:26:17.111758 | primary |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
  2018-06-13 13:26:17.111830 | primary | raise mismatch_error
  2018-06-13 13:26:17.111992 | primary | 
testtools.matchers._impl.MismatchError: set(['VM-2', 'VM-1']) != set([u'VM'])

  This started failing after this change was made:

  https://review.openstack.org/#/c/569199/

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22self.assertEqual(set(%5B'VM-1'%2C%20'VM-2'%5D)%2C%20server_names)%5C%22%20AND%20tags%3A%5C%22console%5C%22=7d

  8 hits in 24 hours

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1776684/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1776684] [NEW] MultipleCreateTestJSON.test_multiple_create intermittently fails for cells v1 due to server name check change

2018-06-13 Thread Matt Riedemann
Public bug reported:

http://logs.openstack.org/75/486475/38/gate/nova-cells-v1/2c2a566/job-
output.txt.gz#_2018-06-13_13_26_17_075329

2018-06-13 13:26:17.110798 | primary | Captured traceback:
2018-06-13 13:26:17.110851 | primary | ~~~
2018-06-13 13:26:17.110932 | primary | Traceback (most recent call last):
2018-06-13 13:26:17.03 | primary |   File 
"tempest/api/compute/servers/test_multiple_create.py", line 43, in 
test_multiple_create
2018-06-13 13:26:17.111218 | primary | self.assertEqual(set(['VM-1', 
'VM-2']), server_names)
2018-06-13 13:26:17.111435 | primary |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 411, in assertEqual
2018-06-13 13:26:17.111542 | primary | self.assertThat(observed, 
matcher, message)
2018-06-13 13:26:17.111758 | primary |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
2018-06-13 13:26:17.111830 | primary | raise mismatch_error
2018-06-13 13:26:17.111992 | primary | 
testtools.matchers._impl.MismatchError: set(['VM-2', 'VM-1']) != set([u'VM'])

This started failing after this change was made:

https://review.openstack.org/#/c/569199/

http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22self.assertEqual(set(%5B'VM-1'%2C%20'VM-2'%5D)%2C%20server_names)%5C%22%20AND%20tags%3A%5C%22console%5C%22=7d

8 hits in 24 hours

** Affects: nova
 Importance: High
 Status: Confirmed


** Tags: cells gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1776684

Title:
  MultipleCreateTestJSON.test_multiple_create intermittently fails for
  cells v1 due to server name check change

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  http://logs.openstack.org/75/486475/38/gate/nova-cells-v1/2c2a566/job-
  output.txt.gz#_2018-06-13_13_26_17_075329

  2018-06-13 13:26:17.110798 | primary | Captured traceback:
  2018-06-13 13:26:17.110851 | primary | ~~~
  2018-06-13 13:26:17.110932 | primary | Traceback (most recent call last):
  2018-06-13 13:26:17.03 | primary |   File 
"tempest/api/compute/servers/test_multiple_create.py", line 43, in 
test_multiple_create
  2018-06-13 13:26:17.111218 | primary | self.assertEqual(set(['VM-1', 
'VM-2']), server_names)
  2018-06-13 13:26:17.111435 | primary |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 411, in assertEqual
  2018-06-13 13:26:17.111542 | primary | self.assertThat(observed, 
matcher, message)
  2018-06-13 13:26:17.111758 | primary |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
  2018-06-13 13:26:17.111830 | primary | raise mismatch_error
  2018-06-13 13:26:17.111992 | primary | 
testtools.matchers._impl.MismatchError: set(['VM-2', 'VM-1']) != set([u'VM'])

  This started failing after this change was made:

  https://review.openstack.org/#/c/569199/

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22self.assertEqual(set(%5B'VM-1'%2C%20'VM-2'%5D)%2C%20server_names)%5C%22%20AND%20tags%3A%5C%22console%5C%22=7d

  8 hits in 24 hours

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1776684/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1776678] [NEW] Horizon password change is throwing Unauthorized error

2018-06-13 Thread wondernath
Public bug reported:

Steps to reproduce:
1. Log into horizon as admin
2. navigate to settings > change password page
3. enter current password and new passwords
4. click on Change button

Expected:
Password should be updated

Actual:
An Unauthorized error is thrown

Notes:
- Run against devstack - master
- Screenshot attached.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: horizon-core

** Attachment added: "Horizon_password_change_unauthorized.PNG"
   
https://bugs.launchpad.net/bugs/1776678/+attachment/5152109/+files/Horizon_password_change_unauthorized.PNG

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1776678

Title:
  Horizon password change is throwing Unauthorized error

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps to reproduce:
  1. Log into horizon as admin
  2. navigate to settings > change password page
  3. enter current password and new passwords
  4. click on Change button

  Expected:
  Password should be updated

  Actual:
  An Unauthorized error is thrown

  Notes:
  - Run against devstack - master
  - Screenshot attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1776678/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1773342] Re: hyper-v: Unused images are always deleted

2018-06-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/570571
Committed: 
https://git.openstack.org/cgit/openstack/compute-hyperv/commit/?id=7caef58be03cee448d5845628b59ddd669511b87
Submitter: Zuul
Branch: master

commit 7caef58be03cee448d5845628b59ddd669511b87
Author: Lucian Petrut 
Date:   Fri May 25 14:26:19 2018 +0300

Avoid cleaning up cached images when configured not to

This change ensures that we're honoring the "remove_unused_base_images"
config option, allowing deployers to disable the auto-removal of
old images.

Change-Id: I49a0a83ab34ca0b9da5d589aaa9006d169275b15
Closes-Bug: #1773342


** Changed in: compute-hyperv
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1773342

Title:
  hyper-v: Unused images are always deleted

Status in compute-hyperv:
  Fix Released
Status in OpenStack Compute (nova):
  New

Bug description:
  The Hyper-V driver will always delete unused images, ignoring the
  "remove_unused_base_images" config option.

  One workaround would be to set
  "remove_unused_original_minimum_age_seconds" to a really large value
  (e.g. 2^30). Setting it to -1 won't help either.
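
  A minimal sketch of the guard the fix adds, in spirit (illustrative
  only, not the actual compute-hyperv code): the image cache cleanup
  returns early when removal is disabled.

      from collections import namedtuple

      Conf = namedtuple("Conf", "remove_unused_base_images")

      def age_and_verify_cached_images(conf, unused_images, remove_image):
          # Honor the operator's choice: when removal of unused base images
          # is disabled, do nothing instead of unconditionally deleting them.
          if not conf.remove_unused_base_images:
              return
          for image in unused_images:
              remove_image(image)

      age_and_verify_cached_images(Conf(False), ["base1"], remove_image=print)  # no-op
      age_and_verify_cached_images(Conf(True), ["base1"], remove_image=print)   # removes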

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1773342/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1776668] [NEW] the placement version discovery doc at / doesn't have a status field, it should

2018-06-13 Thread Chris Dent
Public bug reported:

Version discovery docs are supposed to have a status field:
http://specs.openstack.org/openstack/api-
wg/guidelines/microversion_specification.html#version-discovery

Placement's does not. This was probably an oversight resulting from
casual attention to detail since placement only has one version.

This is easily fixable and easily backportable, so I'll get on that.

This is causing problems for at least mnaser when trying to write his
own client code.
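
For reference, an illustrative version document shape carrying the
status field the guideline requires (the id, microversion bounds and
links below are placeholders, not placement's actual values):

    import json

    version_doc = {
        "versions": [{
            "id": "v1.0",
            "status": "CURRENT",      # the field this bug says is missing
            "min_version": "1.0",     # placeholder microversion bounds
            "max_version": "1.x",     # placeholder
            "links": [{"href": "", "rel": "self"}],
        }]
    }
    print(json.dumps(version_doc, indent=2))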

** Affects: nova
 Importance: Medium
 Assignee: Chris Dent (cdent)
 Status: Triaged


** Tags: placement queens-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1776668

Title:
  the placement version discovery doc at / doesn't have a status field,
  it should

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  Version discovery docs are supposed to have a status field:
  http://specs.openstack.org/openstack/api-
  wg/guidelines/microversion_specification.html#version-discovery

  Placement's does not. This was probably an oversight resulting from
  casual attention to detail since placement only has one version.

  This is easily fixable and easily backportable, so I'll get on that.

  This is causing problems for at least mnaser when trying to write his
  own client code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1776668/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1716834] Re: Network Topology graph "twitches"

2018-06-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/543231
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=e5dae9b35349c53fa6184e415e254094893cab1f
Submitter: Zuul
Branch: master

commit e5dae9b35349c53fa6184e415e254094893cab1f
Author: Ameed Ashour 
Date:   Sun Feb 11 09:52:52 2018 -0500

Network Topology graph "twitches"

On each periodic update the graph is redrawn, which causes visual
glitching, depending on the graph complexity.

This patch resolves the issue by comparing the previously received data
with the newly received data: the graph is only redrawn when the data
has changed; otherwise it is left untouched.

Change-Id: I813dfb329f46cda9afacce89c9a8b84eb2827115
Closes-Bug: #1716834
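
A sketch of the compare-before-redraw idea described above (Horizon's
actual code is JavaScript/d3; this Python version only illustrates the
pattern):

    class TopologyView:
        def __init__(self, draw):
            self._draw = draw      # callable that actually redraws the graph
            self._previous = None  # last data that was rendered

        def update(self, data):
            # Redraw only when freshly fetched data differs from what is
            # already on screen; identical data leaves the graph untouched,
            # which avoids the periodic "twitch".
            if data == self._previous:
                return False
            self._previous = data
            self._draw(data)
            return True

    view = TopologyView(draw=lambda d: print("redraw", d))
    view.update({"nodes": 3})   # redraws
    view.update({"nodes": 3})   # no redraw
    view.update({"nodes": 4})   # redraws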


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1716834

Title:
  Network Topology graph "twitches"

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The Network Topology graph goes through periodic updates to
  synchronize server data. On each update, the graph is redrawn which
  causes visual glitching. Depending on the graph complexity, this may
  manifest anywhere from a slight 'twitch' to a full blown node
  reorganization which may completely change the way the graph looks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1716834/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1750121] Re: Dynamic routing: adding speaker to agent fails

2018-06-13 Thread Launchpad Bug Tracker
This bug was fixed in the package neutron-dynamic-routing -
2:12.0.0-0ubuntu1.1

---
neutron-dynamic-routing (2:12.0.0-0ubuntu1.1) bionic; urgency=medium

  * d/p/fix-failure-when-adding-a-speaker-to-an-agent.patch: Cherry-picked
from upstream stable/queens branch to ensure adding speaker to agent
doesn't fail (LP: #1750121).
  * d/gbp.conf: Create stable/queens branch.

 -- Corey Bryant   Wed, 25 Apr 2018 09:16:30
-0400

** Changed in: neutron-dynamic-routing (Ubuntu Bionic)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1750121

Title:
  Dynamic routing: adding speaker to agent fails

Status in Ubuntu Cloud Archive:
  Fix Committed
Status in Ubuntu Cloud Archive pike series:
  Fix Committed
Status in Ubuntu Cloud Archive queens series:
  Fix Committed
Status in neutron:
  Fix Released
Status in neutron-dynamic-routing package in Ubuntu:
  Fix Released
Status in neutron-dynamic-routing source package in Artful:
  Fix Committed
Status in neutron-dynamic-routing source package in Bionic:
  Fix Released
Status in neutron-dynamic-routing source package in Cosmic:
  Fix Released

Bug description:
  SRU details for Ubuntu
  --
  [Impact]
  See "Original description" below.

  [Test Case]
  See "Original description" below.

  [Regression Potential]
  Low. This is fixed upstream in corresponding stable branches.

  
  Original description
  
  When following 
https://docs.openstack.org/neutron-dynamic-routing/latest/contributor/testing.html
 everything works fine because the speaker is scheduled to the agent 
automatically (in contrast to what the docs say). But if I remove the speaker 
from the agent and add it again with

  $ neutron bgp-dragent-speaker-remove 0159fc0a-22de-4995-8fad-8fb8835a4d86 
bgp-speaker
  $ neutron bgp-dragent-speaker-add 0159fc0a-22de-4995-8fad-8fb8835a4d86 
bgp-speaker

  the following error is seen in the log:

  Feb 17 10:56:30 test-node01 neutron-bgp-dragent[18999]: ERROR
  neutron_dynamic_routing.services.bgp.agent.bgp_dragent [None req-
  da9a22ae-52a2-4be7-a3e8-2dc2dc970fdd admin admin] Call to driver for
  BGP Speaker d2aa5935-30c2-4369-83ee-b3a0ff77cc49 add_bgp_peer has
  failed with exception 'auth_type'.

  The same thing happens when there are multiple agents and one tries to
  add the speaker to one of the other agents.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1750121/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1776621] [NEW] Scale: when periodic pool size is small and there is a lot of load the compute service goes down

2018-06-13 Thread Gary Kotton
Public bug reported:

When the nova power sync pool is exhausted the compute service will go
down. This results in scale and performance tests failing.

2018-06-12 19:58:48.871 30126 WARNING oslo.messaging._drivers.impl_rabbit 
[req-196321bb-a11a-4e6e-a80a-544ecd093986 c3de6d9ec02c494d978330d8f1a64da1 
d37803befc35418981f1f0b6dceec696 - default default] Unexpected error during 
heartbeart thread processing, retrying...: error: [Errno 104] Connection reset 
by peer
2018-06-12 19:58:48.872 30126 WARNING oslo.messaging._drivers.impl_rabbit 
[req-196321bb-a11a-4e6e-a80a-544ecd093986 c3de6d9ec02c494d978330d8f1a64da1 
d37803befc35418981f1f0b6dceec696 - default default] Unexpected error during 
heartbeart thread processing, retrying...: error: [Errno 104] Connection reset 
by peer
2018-06-12 19:58:54.793 30126 WARNING oslo.messaging._drivers.impl_rabbit 
[req-196321bb-a11a-4e6e-a80a-544ecd093986 c3de6d9ec02c494d978330d8f1a64da1 
d37803befc35418981f1f0b6dceec696 - default default] Unexpected error during 
heartbeart thread processing, retrying...: error: [Errno 104] Connection reset 
by peer
2018-06-12 21:37:23.805 30126 DEBUG oslo_concurrency.lockutils 
[req-196321bb-a11a-4e6e-a80a-544ecd093986 c3de6d9ec02c494d978330d8f1a64da1 
d37803befc35418981f1f0b6dceec696 - default default] Lock "compute_resources" 
released by "nova.compute.resource_tracker._update_available_resource" :: held 
6004.943s inner 
/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:288
2018-06-12 21:37:23.807 30126 ERROR nova.compute.manager 
[req-196321bb-a11a-4e6e-a80a-544ecd093986 c3de6d9ec02c494d978330d8f1a64da1 
d37803befc35418981f1f0b6dceec696 - default default] Error updating resources 
for node domain-c7.fd3d2358-cc8d-4773-9fef-7a2713ac05ba.: MessagingTimeout: 
Timed out waiting for a reply to message ID 1eb4b1b40f0f4c66b0266608073717e8

root@controller01:/var/log/nova# vi nova-conductor.log.1
2018-06-12 20:48:10.161 6328 ERROR nova.conductor.manager 
[req-77b5e1d7-a4b7-468e-98af-dfdfbf2fad7f 1b5d8da24b39464cb6736d122ccc0665 
eb361d7bc9bd40059a2ce2848c985772 - default default] Failed to schedule 
instances: NoValidHost_Remote: No valid host was found. There are not enough 
hosts available.
Traceback (most recent call last):

  File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
226, in inner
return func(*args, **kwargs)

  File "/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 153, 
in select_destinations
allocation_request_version, return_alternates)

  File "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", 
line 93, in select_destinations
allocation_request_version, return_alternates)

  File "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", 
line 245, in _schedule
claimed_instance_uuids)

  File "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", 
line 282, in _ensure_sufficient_hosts
raise exception.NoValidHost(reason=reason)

NoValidHost: No valid host was found. There are not enough hosts available.
2018-06-12 20:48:10.161 6328 ERROR nova.conductor.manager Traceback (most 
recent call last):
2018-06-12 20:48:10.161 6328 ERROR nova.conductor.manager File 
"/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 1118, in 
schedule_and_build_instances
2018-06-12 20:48:10.161 6328 ERROR nova.conductor.manager instance_uuids, 
return_alternates=True)
2018-06-12 20:48:10.161 6328 ERROR nova.conductor.manager File 
"/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 718, in 
_schedule_instances
2018-06-12 20:48:10.161 6328 ERROR nova.conductor.manager 
return_alternates=return_alternates)
2018-06-12 20:48:10.161 6328 ERROR nova.conductor.manager File 
"/usr/lib/python2.7/dist-packages/nova/scheduler/utils.py", line 727, in wrapped
2018-06-12 20:48:10.161 6328 ERROR nova.conductor.manager return func(*args, 
**kwargs)
2018-06-12 20:48:10.161 6328 ERROR nova.conductor.manager File 
"/usr/lib/python2.7/dist-packages/nova/scheduler/client/__init__.py", line 53, 
in select_destinations
2018-06-12 20:48:10.161 6328 ERROR nova.conductor.manager instance_uuids, 
return_objects, return_alternates)
2018-06-12 20:48:10.161 6328 ERROR nova.conductor.manager File 
"/usr/lib/python2.7/dist-packages/nova/scheduler/client/__init__.py", line 37, 
in __run_method
2018-06-12 20:48:10.161 6328 ERROR nova.conductor.manager return 
getattr(self.instance, __name)(*args, **kwargs)
2018-06-12 20:48:10.161 6328 ERROR nova.conductor.manager File 
"/usr/lib/python2.7/dist-packages/nova/scheduler/client/query.py", line 42, in 
select_destinations
2018-06-12 20:48:10.161 6328 ERROR nova.conductor.manager instance_uuids, 
return_objects, return_alternates)
2018-06-12 20:48:10.161 6328 ERROR nova.conductor.manager File 
"/usr/lib/python2.7/dist-packages/nova/scheduler/rpcapi.py", line 158, in 
select_destinations
2018-06-12 20:48:10.161 6328 ERROR nova.conductor.manager return 
cctxt.call(ctxt, 

[Yahoo-eng-team] [Bug 1749574] Re: [tracking] removal and migration of pycrypto

2018-06-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/560292
Committed: 
https://git.openstack.org/cgit/openstack/trove/commit/?id=46a031e76544572562eaf3e757a0ff488c3389f2
Submitter: Zuul
Branch: master

commit 46a031e76544572562eaf3e757a0ff488c3389f2
Author: Zhao Chao 
Date:   Wed Apr 11 12:39:05 2018 +0800

Switch to cryptography from pycrypto

PyCrypto hasn't been actively developed for quite a while; cryptography
is recommended instead. This patch does this migration, but still keeps
pycrypto as a fallback solution.

Random generation is also migrated to os.urandom, as the cryptography
documentation suggests:
https://cryptography.io/en/latest/random-numbers/

Closes-Bug: #1749574

Change-Id: I5c0c1a238023c116af5a84d899e629f1c7c3513f
Co-Authored-By: Fan Zhang 
Signed-off-by: Zhao Chao 
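
For illustration of the kind of migration described (a sketch, not
trove's actual helpers; assumes the cryptography package is installed):
random bytes come from os.urandom and symmetric encryption goes through
cryptography instead of PyCrypto.

    import os
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def random_bytes(length=32):
        # os.urandom is the CSPRNG source the cryptography docs recommend.
        return os.urandom(length)

    def aes_ctr_encrypt(key, nonce, plaintext):
        # Plays the role Crypto.Cipher.AES (CTR mode) used to play, via the
        # cryptography package instead (illustrative, not trove's code).
        cipher = Cipher(algorithms.AES(key), modes.CTR(nonce),
                        backend=default_backend())
        encryptor = cipher.encryptor()
        return encryptor.update(plaintext) + encryptor.finalize()

    key, nonce = random_bytes(32), random_bytes(16)
    print(aes_ctr_encrypt(key, nonce, b"example payload"))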


** Changed in: trove
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1749574

Title:
  [tracking] removal and migration of pycrypto

Status in Barbican:
  In Progress
Status in Compass:
  New
Status in daisycloud:
  New
Status in OpenStack Backup/Restore and DR (Freezer):
  New
Status in Fuel for OpenStack:
  New
Status in OpenStack Compute (nova):
  Triaged
Status in openstack-ansible:
  Fix Committed
Status in OpenStack Global Requirements:
  New
Status in pyghmi:
  Fix Committed
Status in Solum:
  Fix Released
Status in Tatu:
  New
Status in OpenStack DBaaS (Trove):
  Fix Released

Bug description:
  trove
  tatu
  barbican
  compass
  daisycloud
  freezer
  fuel
  nova
  openstack-ansible - https://review.openstack.org/544516
  pyghmi - https://review.openstack.org/569073
  solum

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1749574/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1749574] Re: [tracking] removal and migration of pycrypto

2018-06-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/574244
Committed: 
https://git.openstack.org/cgit/openstack/solum/commit/?id=4bb3f91e8afc1e60f674d3e191a5945dfe909bd2
Submitter: Zuul
Branch: master

commit 4bb3f91e8afc1e60f674d3e191a5945dfe909bd2
Author: zhurong 
Date:   Mon Jun 11 20:57:58 2018 +0800

Remove pycrypto dependency

Change-Id: I49f912974840c7afe516bc451e1f1d4ba72c7479
Closes-Bug: #1749574


** Changed in: solum
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1749574

Title:
  [tracking] removal and migration of pycrypto

Status in Barbican:
  In Progress
Status in Compass:
  New
Status in daisycloud:
  New
Status in OpenStack Backup/Restore and DR (Freezer):
  New
Status in Fuel for OpenStack:
  New
Status in OpenStack Compute (nova):
  Triaged
Status in openstack-ansible:
  Fix Committed
Status in OpenStack Global Requirements:
  New
Status in pyghmi:
  Fix Committed
Status in Solum:
  Fix Released
Status in Tatu:
  New
Status in OpenStack DBaaS (Trove):
  In Progress

Bug description:
  trove
  tatu
  barbican
  compass
  daisycloud
  freezer
  fuel
  nova
  openstack-ansible - https://review.openstack.org/544516
  pyghmi - https://review.openstack.org/569073
  solum

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1749574/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1776596] Re: [QUEENS] Promotion Jobs failing at overcloud deployment with AttributeError: 'IronicNodeState' object has no attribute 'failed_builds'

2018-06-13 Thread Tony Breeds
It looks like when we backported https://review.openstack.org/#/c/573248
to queens (and pike) we missed the fact that the Ironic Host Manager is
still in queens and needs an update that wasn't needed on master because
we removed it in https://review.openstack.org/#/c/565805/1
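
A minimal sketch of the kind of follow-up the comment implies
(illustrative only; the real classes live in nova's scheduler host
managers): make sure the Ironic node state initializes the
failed_builds attribute that the backported change reads.

    class HostState:
        def __init__(self):
            # Attribute the backported change expects on every host state.
            self.failed_builds = 0

    class IronicNodeState(HostState):
        def __init__(self):
            # Initializing via the parent (or setting the attribute here)
            # avoids: AttributeError: 'IronicNodeState' object has no
            # attribute 'failed_builds'
            super().__init__()

    assert IronicNodeState().failed_builds == 0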

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1776596

Title:
  [QUEENS] Promotion Jobs failing at overcloud deployment with
  AttributeError: 'IronicNodeState' object has no attribute
  'failed_builds'

Status in OpenStack Compute (nova):
  New
Status in tripleo:
  New

Bug description:
  Queens overcloud deployment in all ovb promotion jobs is failing with
  AttributeError: 'IronicNodeState' object has no attribute
  'failed_builds'.

  Logs:-
  
https://logs.rdoproject.org/openstack-periodic/periodic-tripleo-ci-centos-7-ovb-1ctlr_1comp-featureset002-queens-upload/556a09f/undercloud/home/jenkins/failed_deployment_list.log.txt.gz
  
https://logs.rdoproject.org/openstack-periodic/periodic-tripleo-ci-centos-7-ovb-1ctlr_1comp-featureset002-queens-upload/556a09f/undercloud/var/log/nova/nova-scheduler.log.txt.gz#_2018-06-13_01_08_25_689
  
https://logs.rdoproject.org/openstack-periodic/periodic-tripleo-ci-centos-7-ovb-1ctlr_1comp-featureset020-queens/3909a7f/undercloud/home/jenkins/failed_deployment_list.log.txt.gz

  This is happening with a cherry-picked patch in nova:-
  https://review.openstack.org/#/c/573239/

  In master it's not seen probably because of:-
  https://review.openstack.org/#/c/565805/ (Remove IronicHostManager and
  baremetal scheduling options)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1776596/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp