[Yahoo-eng-team] [Bug 2002288] Re: Install and configure for Ubuntu in horizon

2023-01-18 Thread Akihiro Motoki
*** This bug is a duplicate of bug 1892279 ***
https://bugs.launchpad.net/bugs/1892279

Similar bugs were reported and fixed in stable/yoga and later branches up to master.
A stable/xena backport has been proposed.

** This bug has been marked a duplicate of bug 1892279
   Install and configure for Red Hat Enterprise Linux and CentOS in horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/2002288

Title:
  Install and configure for Ubuntu in horizon

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Error in text: "OPENSTACK_KEYSTONE_URL = "http://%s/identity/v3"; %
  OPENSTACK_HOST". After "%s" should be ":5000"

  Link: https://docs.openstack.org/horizon/xena/install/install-
  
ubuntu.html#:~:text=OPENSTACK_KEYSTONE_URL%20%3D%20%22http%3A//%25s/identity/v3%22%20%25%20OPENSTACK_HOST
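
  For reference, a minimal sketch of the corrected setting the reporter is
  proposing for local_settings.py (hedged: this reflects the bug report, not
  a confirmed documentation change; "controller" is the install guide's
  example hostname):

  # /etc/openstack-dashboard/local_settings.py (sketch)
  OPENSTACK_HOST = "controller"
  # With the ":5000" the reporter says is missing after "%s":
  OPENSTACK_KEYSTONE_URL = "http://%s:5000/identity/v3" % OPENSTACK_HOST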

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/2002288/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2003196] [NEW] [FT] Error in "test_arp_spoof_doesnt_block_ipv6"

2023-01-18 Thread Rodolfo Alonso
Public bug reported:

Logs:
https://404fa55dc27f44e0606d-9f131354b122204fb24a7b43973ed8e6.ssl.cf5.rackcdn.com/860639/22/check/neutron-functional-with-uwsgi/6a57e16/testr_results.html

Snippet: https://paste.opendev.org/show/bBI2hbch48bCHE9XgKYq/

** Affects: neutron
 Importance: Undecided
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2003196

Title:
  [FT] Error in "test_arp_spoof_doesnt_block_ipv6"

Status in neutron:
  New

Bug description:
  Logs:
  
  https://404fa55dc27f44e0606d-9f131354b122204fb24a7b43973ed8e6.ssl.cf5.rackcdn.com/860639/22/check/neutron-functional-with-uwsgi/6a57e16/testr_results.html

  Snippet: https://paste.opendev.org/show/bBI2hbch48bCHE9XgKYq/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2003196/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1996638] Re: horizon-nodejs16/18-run-test job failing in Ubuntu Jammy (22.04)

2023-01-18 Thread Vishal Manchanda
Fixed by https://review.opendev.org/c/openstack/horizon/+/865453

** Changed in: horizon
 Assignee: (unassigned) => Vishal Manchanda (vishalmanchanda)

** Changed in: horizon
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1996638

Title:
  horizon-nodejs16/18-run-test job failing in Ubuntu Jammy (22.04)

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  As per the 2023.1 cycle's community-wide goal of migrating the CI/CD to
  Ubuntu Jammy (22.04), we are testing all jobs in advance before the
  actual migration happens on m-1 (Nov 18).

  The horizon-nodejs16-run-test, horizon-selenium-headless, and
  horizon-integration-tests jobs are failing on Jammy:
  https://zuul.opendev.org/t/openstack/build/bbdfff9573894d469d7394a4da4c4274/log/job-output.txt#2564-2584

  I guess we are hitting bug
  https://bugs.launchpad.net/ubuntu/+source/snapd/+bug/1951491/.
  I have already discussed this topic with the openstack-infra team; please refer to
  https://meetings.opendev.org/irclogs/%23openstack-infra/%23openstack-infra.2022-11-09.log.html#t2022-11-09T17:27:06

  Right now, if we migrate the nodeset of the "horizon-nodejs16-run-test"
  job from focal to jammy, it will break, and all other horizon plugins
  that run this job will break as well [1].

  [1] https://codesearch.openstack.org/?q=horizon-nodejs-jobs&i=nope&literal=nope&files=&excludeFiles=&repos=

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1996638/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2002535] Re: [NFS] Server resize failed when image volume cache enabled

2023-01-18 Thread Sofia Enriquez
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2002535

Title:
  [NFS] Server resize failed when image volume cache enabled

Status in Cinder:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  Description: When resizing an instance, an error is thrown to the
  user:

  Error resizing server: 17a238f3-01be-490c-97c8-35f47c804624
  Error resizing server

  Steps to reproduce:
  1. Create a bootable volume from an existing image:
  openstack volume create --image cirros-0.5.2-x86_64-disk --size 1 --bootable volboot
  2. Create an instance attached to that volume:
  openstack server create --flavor m1.small --volume volboot --network public server1
  3. Resize the server:
  openstack server resize --flavor m1.medium --wait server1

  In the nova-compute logfile, we can see an error that refers to the disk
  format:

  Jan 10 20:29:09 e2e-os-pstorenfs105 nova-compute[3362105]: ERROR oslo_messaging.rpc.server libvirt.libvirtError: internal error: process exited while connecting to monitor: 2023-01-10T12:29:08.542757Z qemu-system-x86_64: -blockdev {"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":"libvirt-1-storage","backing":null}: Image is not in qcow2 format

  
  After a first analysis, it seems that during the creation of the volume,
  qemu-img converts the qcow2 file format to raw and never converts it back
  to qcow2. The attachment information then mismatches and the server resize
  fails.

  It's an NFS environment.
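
  A small diagnostic sketch of the mismatch check described above (hedged:
  the example path is an assumption, not taken from this report); it asks
  qemu-img for the on-disk format so it can be compared with the format
  recorded for the attachment:

  import json
  import subprocess

  def actual_disk_format(path):
      """Return the format qemu-img detects, e.g. 'raw' or 'qcow2'."""
      out = subprocess.run(
          ["qemu-img", "info", "--output=json", path],
          check=True, capture_output=True, text=True)
      return json.loads(out.stdout)["format"]

  # Example (path is an assumption for illustration):
  # actual_disk_format("/var/lib/nova/mnt/<hash>/volume-<uuid>")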

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/2002535/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1996274] Re: Horizon: Monitoring > Overview error display AttributeError at /monitoring/

2023-01-18 Thread Vishal Manchanda
This looks like a bug in monasca-ui; please report it there [1].

[1] https://storyboard.openstack.org/#!/project/openstack/monasca-ui

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1996274

Title:
  Horizon:  Monitoring > Overview  error display AttributeError at
  /monitoring/

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  I'm trying to view the Overview tab for Monitoring. I get errors like the
  one in the attachment:

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1996274/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2002951] Re: OOM kills python / mysqld in various nova devstack jobs

2023-01-18 Thread Sylvain Bauza
FWIW, I created another change that ran this test *earlier*, and it
worked:

https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_362/870924/2/check/nova-ceph-multistore/3626391/testr_results.html

That being said, this test took more than 181 seconds, so I created a new
revision to find out how long creating the cached image takes and how much
memory the cached image uses:

https://review.opendev.org/c/openstack/tempest/+/870913/2/tempest/api/compute/admin/test_aaa_volume.py#90

I'm still waiting for the results, but I think we need to modify this test
to avoid caching this way if we can, or maybe to run it differently.
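
For reference, a tiny timing helper of the kind such a revision might use (a
sketch, not the actual code in the linked tempest change):

  import contextlib
  import time

  @contextlib.contextmanager
  def timed(label):
      """Print how long the wrapped block took, in seconds."""
      start = time.monotonic()
      try:
          yield
      finally:
          print("%s: %.1fs" % (label, time.monotonic() - start))

  # e.g. inside the test:
  # with timed("image cache warm-up"):
  #     ...create the server that triggers the image caching...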


** Also affects: tempest
   Importance: Undecided
   Status: New

** Also affects: glance
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2002951

Title:
  OOM kills python / mysqld in various nova devstack jobs

Status in Glance:
  New
Status in OpenStack Compute (nova):
  Confirmed
Status in tempest:
  New

Bug description:
  The following tests exited without returning a status
  and likely segfaulted or crashed Python:

  * tempest.api.compute.admin.test_volume.AttachSCSIVolumeTestJSON.test_attach_scsi_disk_with_config_drive[id-777e468f-17ca-4da4-b93d-b7dbf56c0494]

  
  And in the syslog: 
https://zuul.opendev.org/t/openstack/build/f5aa5edd4d354c2685fc1f3e13d0ef77/log/controller/logs/syslog.txt#3688

  Jan 13 22:31:13 np0032729364 kernel: Out of memory: Killed process 114509 (python) total-vm:4966188kB, anon-rss:3914748kB, file-rss:5080kB, shmem-rss:0kB, UID:1002 pgtables:9764kB oom_score_adj:0

  Example run:
  https://zuul.opendev.org/t/openstack/build/f5aa5edd4d354c2685fc1f3e13d0ef77
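
  A small sketch of how kernel records like the one above can be pulled out
  of a downloaded syslog (the regex matches the quoted lines; the helper
  itself is illustrative, not part of the logsearch tool):

  import re

  OOM = re.compile(
      r"Out of memory: Killed process (?P<pid>\d+) \((?P<name>\S+)\) "
      r"total-vm:(?P<total_vm_kb>\d+)kB")

  def oom_kills(path):
      """Yield (pid, process name, total-vm kB) for each OOM kill record."""
      with open(path) as syslog:
          for line in syslog:
              match = OOM.search(line)
              if match:
                  yield match.group("pid", "name", "total_vm_kb")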

  I see this happening in multiple jobs in the last 10 days:
  * nova-ceph-multistore 14x
  * nova-multi-cell 1x
  * nova-next 1x

  $ logsearch log --result FAILURE --project openstack/nova --branch master --file controller/logs/syslog.txt 'kernel: Out of memory: Killed process' --days 10
  [..snip..]
  Searching logs:
  
  ece0cf2ce71c4a8790a0a36529dd0a8e:/home/gibi/.cache/logsearch/ece0cf2ce71c4a8790a0a36529dd0a8e/controller/logs/syslog.txt:3774:Jan 14 22:57:33 np0032733292 kernel: Out of memory: Killed process 115024 (python) total-vm:4981004kB, anon-rss:3904068kB, file-rss:5320kB, shmem-rss:0kB, UID:1002 pgtables:9376kB oom_score_adj:0

  f5aa5edd4d354c2685fc1f3e13d0ef77:/home/gibi/.cache/logsearch/f5aa5edd4d354c2685fc1f3e13d0ef77/controller/logs/syslog.txt:3688:Jan 13 22:31:13 np0032729364 kernel: Out of memory: Killed process 114509 (python) total-vm:4966188kB, anon-rss:3914748kB, file-rss:5080kB, shmem-rss:0kB, UID:1002 pgtables:9764kB oom_score_adj:0

  1447c6274e924e068578ca260c9ac2a6:/home/gibi/.cache/logsearch/1447c6274e924e068578ca260c9ac2a6/controller/logs/syslog.txt:3824:Jan 13 21:34:13 np0032729237 kernel: Out of memory: Killed process 114489 (python) total-vm:4975072kB, anon-rss:3954804kB, file-rss:5312kB, shmem-rss:0kB, UID:1002 pgtables:9400kB oom_score_adj:0

  446a5a73b22d432295820e5b8083a2f9:/home/gibi/.cache/logsearch/446a5a73b22d432295820e5b8083a2f9/controller/logs/syslog.txt:5103:Jan 13 10:04:25 np0032720733 kernel: Out of memory: Killed process 48920 (mysqld) total-vm:5233384kB, anon-rss:300872kB, file-rss:0kB, shmem-rss:0kB, UID:116 pgtables:2652kB oom_score_adj:0

  fae1fbe258134dd8ba060cb743707247:/home/gibi/.cache/logsearch/fae1fbe258134dd8ba060cb743707247/controller/logs/syslog.txt:6686:Jan 13 09:44:04 np0032720410 kernel: Out of memory: Killed process 47404 (mysqld) total-vm:5208828kB, anon-rss:278080kB, file-rss:0kB, shmem-rss:0kB, UID:116 pgtables:2572kB oom_score_adj:0

  1bbcaa703b7d42c7a266fde3a6acca65:/home/gibi/.cache/logsearch/1bbcaa703b7d42c7a266fde3a6acca65/controller/logs/syslog.txt:3717:Jan 13 03:41:39 np0032719591 kernel: Out of memory: Killed process 114777 (python) total-vm:4954352kB, anon-rss:4001500kB, file-rss:5124kB, shmem-rss:0kB, UID:1002 pgtables:9416kB oom_score_adj:0

  7d9ca42edc5e4bdeb17be8e8045c6468:/home/gibi/.cache/logsearch/7d9ca42edc5e4bdeb17be8e8045c6468/controller/logs/syslog.txt:3828:Jan 12 22:06:40 np0032716841 kernel: Out of memory: Killed process 114731 (python) total-vm:4964792kB, anon-rss:4055532kB, file-rss:5072kB, shmem-rss:0kB, UID:1002 pgtables:9212kB oom_score_adj:0

  bcb7bc3b478586906c31c6558b13:/home/gibi/.cache/logsearch/bcb7bc3b478586906c31c6558b13/controller/logs/syslog.txt:3769:Jan 12 20:17:35 np0032714959 kernel: Out of memory: Killed process 114973 (python) total-vm:4971976kB, anon-rss:3855572kB, file-rss:5356kB, shmem-rss:0kB, UID:1002 pgtables:9696kB oom_score_adj:0

  7572c2bf5e6547c0a1fc6b0f180a2e1f:/home/gibi/.cache/logsearch/7572c2bf5e6547c0a1fc6b0f180a2e1f/controller/logs/syslog.txt:3805:Jan 12 17:44

[Yahoo-eng-team] [Bug 1998202] Re: [RFE] Allow shared resource providers for physical and tunnelled networks

2023-01-18 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/866127
Committed: 
https://opendev.org/openstack/neutron/commit/70a86637e71eb67e7785f090ebc1045040bca897
Submitter: "Zuul (22348)"
Branch: master

commit 70a86637e71eb67e7785f090ebc1045040bca897
Author: Rodolfo Alonso Hernandez 
Date:   Tue Nov 29 19:43:09 2022 +0100

Allow shared resources between physical and tunnelled networks

Since [1] it is possible to create resource providers for tunnelled
networks and model the available bandwidth for overlay networks.

In some deployments, the admin uses the same physical network interface
for a physical network and the tunnelled networks. In that case it is
complicated to correctly split the bandwidth between the inventories
of the resource providers. This patch adds the ability to create a
resource provider with traits of both the physical network and the
tunnelled networks. E.g.:

  $ openstack resource provider trait list $brex
  +--------------------------------+
  | name                           |
  +--------------------------------+
  | CUSTOM_NETWORK_TUNNEL_PROVIDER |
  | CUSTOM_PHYSNET_PUBLIC          |
  | ...                            |
  +--------------------------------+

With this resource provider, a request coming from a port in the
physical network or an overlay network is provided by the same
inventory.

[1]https://review.opendev.org/c/openstack/neutron/+/860639

What is missing and will be added in the next patches:
* Tempest tests, that will be pushed to the corresponding repository.

Closes-Bug: #1998202
Change-Id: I801e770fd8b4d6504f5407108e1d1fd848f96c09
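
For illustration, a hedged sketch of putting both traits on one resource
provider through the placement REST API (the endpoint, token, and UUID are
placeholders; the trait names come from the commit message above):

  import requests

  PLACEMENT = "http://controller/placement"  # assumption: your placement endpoint
  TOKEN = "..."                              # assumption: a valid auth token
  RP_UUID = "..."                            # the br-ex resource provider UUID
  HEADERS = {"X-Auth-Token": TOKEN,
             "OpenStack-API-Version": "placement 1.6"}

  # Read the current traits and the generation required for the update.
  url = "%s/resource_providers/%s/traits" % (PLACEMENT, RP_UUID)
  body = requests.get(url, headers=HEADERS).json()
  traits = set(body["traits"]) | {"CUSTOM_PHYSNET_PUBLIC",
                                  "CUSTOM_NETWORK_TUNNEL_PROVIDER"}
  requests.put(url, headers=HEADERS,
               json={"resource_provider_generation":
                         body["resource_provider_generation"],
                     "traits": sorted(traits)})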


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1998202

Title:
  [RFE] Allow shared resource providers for physical and tunnelled
  networks

Status in neutron:
  Fix Released

Bug description:
  This RFE is an extension of
  https://bugs.launchpad.net/neutron/+bug/1991965.

  The goal of this RFE is to allow sharing the same resource provider
  between one physical network and the tunnelled networks. This is a very
  common deployment configuration: there is one physical network and one
  device connected to this physical network bridge. This is used both
  for the VLAN tagged traffic (physical network) and as VTEP/TEP for the
  tunnelled traffic. In this configuration, the resource provider used
  for the physical network (e.g. br-ex) must be shared with the
  tunnelled traffic.

  Related bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2097932

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1998202/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2003253] [NEW] Nova fails to live migrate SRIOV VM with error "libvirtError: Requested operation is not valid: PCI device 0000:5e:05.6 is in use by driver QEMU, domain instance-....."

2023-01-18 Thread Radoslaw Smigielski
Public bug reported:

On two SRIOV computes with Mellanox ConnectX-5 NICs, we can create SRIOV VMs
with no problems.
When we create several of these SRIOV VMs and start live migrating them, at
some point we hit the error below:

2023-01-17 08:09:04.413 7 INFO nova.virt.libvirt.driver [req-f128d0fc-fab7-43e0-b5c3-7d039ed3122c 7280f3f5a7cd430f9ab5310b3e8acb27 6e24c3394ab14ec2823d991ff3bd4371 - default default] Attaching vif 26ab618c-186b-402e-b8d1-0c0f9e57d8cf to instance 37
2023-01-17 08:09:04.433 7 ERROR nova.virt.libvirt.driver [req-f128d0fc-fab7-43e0-b5c3-7d039ed3122c 7280f3f5a7cd430f9ab5310b3e8acb27 6e24c3394ab14ec2823d991ff3bd4371 - default default] [instance: dc84de60-274b-4694-b73b-9aa237d9561b] attaching network adapter failed.: libvirtError: Requested operation is not valid: PCI device 0000:5e:05.6 is in use by driver QEMU, domain instance-002e
2023-01-17 08:09:04.433 7 ERROR nova.virt.libvirt.driver [instance: dc84de60-274b-4694-b73b-9aa237d9561b] Traceback (most recent call last):
2023-01-17 08:09:04.433 7 ERROR nova.virt.libvirt.driver [instance: dc84de60-274b-4694-b73b-9aa237d9561b]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2139, in attach_interface
2023-01-17 08:09:04.433 7 ERROR nova.virt.libvirt.driver [instance: dc84de60-274b-4694-b73b-9aa237d9561b]     guest.attach_device(cfg, persistent=True, live=live)
2023-01-17 08:09:04.433 7 ERROR nova.virt.libvirt.driver [instance: dc84de60-274b-4694-b73b-9aa237d9561b]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 305, in attach_device
2023-01-17 08:09:04.433 7 ERROR nova.virt.libvirt.driver [instance: dc84de60-274b-4694-b73b-9aa237d9561b]     self._domain.attachDeviceFlags(device_xml, flags=flags)
2023-01-17 08:09:04.433 7 ERROR nova.virt.libvirt.driver [instance: dc84de60-274b-4694-b73b-9aa237d9561b]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 190, in doit
2023-01-17 08:09:04.433 7 ERROR nova.virt.libvirt.driver [instance: dc84de60-274b-4694-b73b-9aa237d9561b]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2023-01-17 08:09:04.433 7 ERROR nova.virt.libvirt.driver [instance: dc84de60-274b-4694-b73b-9aa237d9561b]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 148, in proxy_call
2023-01-17 08:09:04.433 7 ERROR nova.virt.libvirt.driver [instance: dc84de60-274b-4694-b73b-9aa237d9561b]     rv = execute(f, *args, **kwargs)
2023-01-17 08:09:04.433 7 ERROR nova.virt.libvirt.driver [instance: dc84de60-274b-4694-b73b-9aa237d9561b]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 129, in execute
2023-01-17 08:09:04.433 7 ERROR nova.virt.libvirt.driver [instance: dc84de60-274b-4694-b73b-9aa237d9561b]     six.reraise(c, e, tb)
2023-01-17 08:09:04.433 7 ERROR nova.virt.libvirt.driver [instance: dc84de60-274b-4694-b73b-9aa237d9561b]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 83, in tworker
2023-01-17 08:09:04.433 7 ERROR nova.virt.libvirt.driver [instance: dc84de60-274b-4694-b73b-9aa237d9561b]     rv = meth(*args, **kwargs)
2023-01-17 08:09:04.433 7 ERROR nova.virt.libvirt.driver [instance: dc84de60-274b-4694-b73b-9aa237d9561b]   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 605, in attachDeviceFlags
2023-01-17 08:09:04.433 7 ERROR nova.virt.libvirt.driver [instance: dc84de60-274b-4694-b73b-9aa237d9561b]     if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', dom=self)
2023-01-17 08:09:04.433 7 ERROR nova.virt.libvirt.driver [instance: dc84de60-274b-4694-b73b-9aa237d9561b] libvirtError: Requested operation is not valid: PCI device 0000:5e:05.6 is in use by driver QEMU, domain instance-002e
2023-01-17 08:09:04.433 7 ERROR nova.virt.libvirt.driver [instance: dc84de60-274b-4694-b73b-9aa237d9561b]
2023-01-17 08:09:04.437 7 ERROR nova.compute.manager [req-f128d0fc-fab7-43e0-b5c3-7d039ed3122c 7280f3f5a7cd430f9ab5310b3e8acb27 6e24c3394ab14ec2823d991ff3bd4371 - default default] [instance: dc84de60-274b-4694-b73b-9aa237d9561b] Unexpected error during post live migration at destination host.: InterfaceAttachFailed: Failed to attach network adapter device to dc84de60-274b-4694-b73b-9aa237d9561b

What seems to be happening is that, on two different SRIOV computes,
virtual functions with the same PCI address are in use by two VMs. When
we migrate one of those VMs to the second compute, where the same PCI
virtual function is already in use, we run into the PCI device conflict
above.

The end result is pretty bad. On the target compute:
1. The VM that was originally running there is still running.
2. For the migrated VM, the libvirt domain is running, but the NIC backed by
the virtual function is not connected, so the VM has no connectivity.


In general, the problem seems to be that Nova does not check that the PCI
devices (SRIOV virtual functions) are unique across all SRIOV-capable
computes, so more than one VM can get PCI devices with the same, conflicting
address.
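
For illustration, a hedged diagnostic sketch (requires libvirt-python; this is
not part of Nova) that lists the PCI hostdev addresses already attached to
running domains on a compute, which is where the conflict above would show up:

  import xml.etree.ElementTree as ET

  import libvirt

  def attached_pci_addresses(uri="qemu:///system"):
      """Map PCI address -> domain name for hostdevs of running guests."""
      conn = libvirt.open(uri)
      in_use = {}
      for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
          root = ET.fromstring(dom.XMLDesc())
          for addr in root.findall("devices/hostdev[@type='pci']/source/address"):
              pci = "%04x:%02x:%02x.%x" % tuple(
                  int(addr.get(key), 16)
                  for key in ("domain", "bus", "slot", "function"))
              in_use[pci] = dom.name()
      return in_use

  # A hit like "0000:5e:05.6" in attached_pci_addresses() on the target
  # compute would predict the libvirtError quoted above.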