[Yahoo-eng-team] [Bug 1950894] Re: live_migration_permit_post_copy mode does not work

2021-11-26 Thread Erlon R. Cruz
** Project changed: nova => charm-nova-compute

** Summary changed:

- live_migration_permit_post_copy mode does not work
+ live-migration-permit-post-copy mode does not work
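
Since the bug now tracks the charm, the equivalent knob is the charm
configuration option rather than a direct nova.conf edit (a hedged
sketch; the application name nova-compute is an assumption about the
deployment):

  juju config nova-compute live-migration-permit-post-copy=true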

** Description changed:

  Description
  ===
  Some customers have noted that some VMs never complete a
  live migration. The VM's memory copy keeps oscillating
- around 1-10% but never completes. After changing 
- live_migration_permit_post_copy = True, we expected this to
+ around 1-10% but never completes. After changing
+ live-migration-permit-post-copy = True, we expected this to
  converge and migrate successfully as this feature describes it
  should.
  
  Workaround 1: It's possible to complete the process if you log into the source
  host and run the QMP command[1]:
  
  virsh qemu-monitor-command instance-0026 '{"execute":"migrate-start-postcopy"}'
  
- 
- Workaround 2: The migration finishes if you run 'nova live-migration-force-complete'
- 
+ Workaround 2: The migration finishes if you run 'nova live-migration-
+ force-complete'
  
  I believe this can also be a libvirt bug, given that I don't see any
  "migrate-start-postcopy" in the nova/libvirt logs[4] until after I
  manually triggered it via the command above, at 2021-11-12
  19:14:08.053+[4].
- 
  
  Steps to reproduce
  ==
  
  * Set up an OpenStack deployment with live_migration_permit_post_copy=False
  * Create a large VM (8+ CPUs) and install stress-ng
  * Run stress-ng:
-   nohup stress-ng --vm 4 --vm-bytes 10% --vm-method write64 --vm-addr-method pwr2 -t 1h &
+   nohup stress-ng --vm 4 --vm-bytes 10% --vm-method write64 --vm-addr-method pwr2 -t 1h &
  * Migrate the VM, and check the source host logs for messages like:
-   'Migration running for \d+ secs, memory \d+% remaining'
-   This should be oscillating like describing and migration not completing
+   'Migration running for \d+ secs, memory \d+% remaining'
+   This should be oscillating like describing and migration not completing
  * Complete or cancel the above migration, set live_migration_permit_post_copy=True,
-   restart nova services on the computes, and re-do the operation
- 
+   restart nova services on the computes, and re-do the operation
  
  Expected result
  ===
  Migration should complete 100% of the time
  
  Actual result
  =
  The migration does not complete and the VM's memory copy never finishes.
  
  Environment
  ===
  1. Exact version of OpenStack you are running[8]
  
  21.2.1-0ubuntu1
  
- 
  2. Which hypervisor did you use[8]?
  
  qemu-kvm: 4.2-3ubuntu6.18
  libvirt-daemon: 6.0.0-0ubuntu8.14
  
- 
  3. Which storage type did you use?
  
  Shared Ceph
- 
  
  4. Which networking type did you use?
  
  OpenvSwitch L3HA
  
  Logs & Configs
  ==
- 
  
  [1] QMP Commands: https://gist.github.com/sombrafam/5e8e991058001c2b3843c0d08b4cd7d1
  [2] Migration (completed manually with workaround 1) logs: https://gist.github.com/sombrafam/b74497150ae4ae32494ac5735189e149
  [3] nova-compute.log src: https://gist.github.com/sombrafam/b74497150ae4ae32494ac5735189e149
  [4] libvirt.log src: https://gist.github.com/sombrafam/69f05404d7097265140e1578ea50c00c
  [5] Migration list: https://gist.github.com/sombrafam/39b72e242e27b6a3123603db1faa7b19
  [6] Nova.conf dst host: https://gist.github.com/sombrafam/ad43b268e7f4b69e7da513a0f7a0095f
  [7] Nova.conf src host: https://gist.github.com/sombrafam/ab27b40e577fbe56d741f01e811f3a18
  [8] Package versions: https://gist.github.com/sombrafam/0622792d82750b2141b45580b625b69f
  [9] VM info: https://gist.github.com/sombrafam/57eaa4c4ba4b141dec9659ee01f25b6d

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1950894

Title:
  live-migration-permit-post-copy mode does not work

Status in OpenStack Nova Compute Charm:
  New


[Yahoo-eng-team] [Bug 1950894] [NEW] live_migration_permit_post_copy mode does not work

2021-11-14 Thread Erlon R. Cruz
Public bug reported:

Description
===
Some customers have noted that some VMs never complete a live migration:
the VM's memory copy keeps oscillating around 1-10% remaining but never
finishes. After setting live_migration_permit_post_copy = True, we
expected the migration to converge and complete successfully, as this
feature is meant to ensure.
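
For reference, the option lives in the [libvirt] section of nova.conf (a
minimal sketch of the relevant setting, per the nova configuration
reference):

  [libvirt]
  live_migration_permit_post_copy = True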

Workaround 1: It's possible to complete the process if you log into the source
host and run the QMP command[1]:

virsh qemu-monitor-command instance-0026 '{"execute":"migrate-start-postcopy"}'
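
To confirm that post-copy was actually negotiated before forcing it, the
migration capabilities can be queried on the source (a hedged check,
reusing the domain name above as an example):

  virsh qemu-monitor-command instance-0026 --pretty '{"execute":"query-migrate-capabilities"}'
  # Expect an entry like {"state": true, "capability": "postcopy-ram"}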


Workaround 2: The migration finishes if you run 'nova live-migration-force-complete'.
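
The force-complete call needs the ID of the in-progress migration (hedged
usage; the UUID and ID are placeholders):

  nova server-migration-list <server-uuid>
  nova live-migration-force-complete <server-uuid> <migration-id>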


I believe this can also be a libvirt bug, given that I don't see any
"migrate-start-postcopy" in the nova/libvirt logs[4] until after I
manually triggered it via the command above, at 2021-11-12
19:14:08.053+[4].


Steps to reproduce
==

* Set up an OpenStack deployment with live_migration_permit_post_copy=False
* Create a large VM (8+ CPUs) and install stress-ng
* Run stress-ng:
  nohup stress-ng --vm 4 --vm-bytes 10% --vm-method write64 --vm-addr-method pwr2 -t 1h &
* Migrate the VM, and check the source host logs for messages like:
  'Migration running for \d+ secs, memory \d+% remaining'
  The remaining percentage should keep oscillating as described, with the
  migration never completing (see the monitoring sketch after this list)
* Complete or cancel the above migration, set live_migration_permit_post_copy=True,
  restart nova services on the computes, and redo the operation
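
To watch the copy phase from the source host while the migration runs,
the following is a minimal sketch (the domain name is an example):

  # Shows time elapsed and memory remaining for the active migration job
  virsh domjobinfo instance-0026
  # Or follow nova-compute's own progress messages
  tail -f /var/log/nova/nova-compute.log | grep 'Migration running for'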


Expected result
===
Migration should complete 100% of the time

Actual result
=
The migration does not complete and the VM's memory copy never finishes.

Environment
===
1. Exact version of OpenStack you are running[8]

21.2.1-0ubuntu1


2. Which hypervisor did you use[8]?

qemu-kvm: 4.2-3ubuntu6.18
libvirt-daemon: 6.0.0-0ubuntu8.14


3. Which storage type did you use?

Shared Ceph


4. Which networking type did you use?

OpenvSwitch L3HA

Logs & Configs
==


[1] QMP Commands: https://gist.github.com/sombrafam/5e8e991058001c2b3843c0d08b4cd7d1
[2] Migration (completed manually with workaround 1) logs: https://gist.github.com/sombrafam/b74497150ae4ae32494ac5735189e149
[3] nova-compute.log src: https://gist.github.com/sombrafam/b74497150ae4ae32494ac5735189e149
[4] libvirt.log src: https://gist.github.com/sombrafam/69f05404d7097265140e1578ea50c00c
[5] Migration list: https://gist.github.com/sombrafam/39b72e242e27b6a3123603db1faa7b19
[6] Nova.conf dst host: https://gist.github.com/sombrafam/ad43b268e7f4b69e7da513a0f7a0095f
[7] Nova.conf src host: https://gist.github.com/sombrafam/ab27b40e577fbe56d741f01e811f3a18
[8] Package versions: https://gist.github.com/sombrafam/0622792d82750b2141b45580b625b69f
[9] VM info: https://gist.github.com/sombrafam/57eaa4c4ba4b141dec9659ee01f25b6d

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1950894

Title:
  live_migration_permit_post_copy mode does not work

Status in OpenStack Compute (nova):
  New


[Yahoo-eng-team] [Bug 1943266] Re: Duplicated ARP responses from ovn-metadata namespaces

2021-09-27 Thread Erlon R. Cruz
FYI, this bug was fixed upstream in the v21.06.0 branch; this is the
patch:

https://github.com/ovn-org/ovn/commit/578238b36073256c524a4c2b6ed7521f73aa0019

** Changed in: networking-ovn
   Status: New => Confirmed

** Changed in: networking-ovn
 Assignee: (unassigned) => Erlon R. Cruz (sombrafam)

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1943266

Title:
  Duplicated ARP responses from ovn-metadata namespaces

Status in networking-ovn:
  Confirmed

Bug description:
  When OpenStack instances are connected to an external network, an
  ovn-metadata namespace is created on each compute that has VMs attached
  to that network. Because the ovn-metadata namespace has interfaces with
  the same MAC address on all computes, external switches may ARP-query
  for the IP and receive multiple responses on different ports, triggering
  network error alerts.

  [ubuntu@sombrafam-bastion(kvm):~/internal_git/stsstack-bundles/openstack]$ sudo arping -c 1 10.5.150.0
  ARPING 10.5.150.0
  42 bytes from fa:16:3e:d3:10:01 (10.5.150.0): index=0 time=1.678 msec
  42 bytes from fa:16:3e:d3:10:01 (10.5.150.0): index=1 time=2.143 msec

  --- 10.5.150.0 statistics ---
  1 packets transmitted, 2 packets received,   0% unanswered (1 extra)
  rtt min/avg/max/std-dev = 1.678/1.911/2.143/0.232 ms


  Reproducer: https://paste.ubuntu.com/p/nbnhvTM9d4/

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-ovn/+bug/1943266/+subscriptions




[Yahoo-eng-team] [Bug 1944619] [NEW] Instances with SRIOV ports lose access after failed live migrations

2021-09-22 Thread Erlon R. Cruz
Public bug reported:

If a live migration fails for an instance with an SRIOV port during the
'_pre_live_migration' hook, the instance loses access to the network and
leaves behind duplicated port bindings in the database.

The instance regains connectivity on the source host only after a reboot
(we don't know of another way to restore connectivity). As a side effect
of this behavior, the pre-live-migration cleanup hook also fails with:

PCI device 0000:3b:10.0 is in use by driver QEMU
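
The leftover bindings can be spotted directly in the Neutron database (a
hedged query against the ml2 schema; the port UUID is a placeholder):

  # Expect one row per host; a stale duplicate row indicates the leak
  mysql -e "SELECT port_id, host, status FROM ml2_port_bindings \
            WHERE port_id='<port-uuid>';" neutron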

[How to reproduce]

- Create an environment with SRIOV (our case uses switchdev[1])
- Create 1 VM
- Provoke a failure in the _pre_live_migration process, for example by pre-creating a directory under /var/lib/nova/instances/ (see the sketch after this list)
- Check the VM's connectivity
- Check the logs for: libvirt.libvirtError: Requested operation is not valid: PCI device 0000:03:04.1 is in use by driver QEMU, domain instance-0001
  Full stack trace: [2]
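
A minimal way to provoke the failure, per the example above (hedged; this
assumes pre_live_migration trips over a pre-existing instance directory,
and the UUID is a placeholder):

  # On the destination host, before starting the live migration:
  sudo mkdir /var/lib/nova/instances/<instance-uuid>
  nova live-migration <instance-uuid>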

[Expected]

VM connectivity is restored even if it gets a brief disconnection

[Observed]
VM loses connectivity, which is only restored after the VM status is set
to ERROR and the VM is power-cycled

[1] https://paste.ubuntu.com/p/PzBM7y6Dbr/
[2] https://paste.ubuntu.com/p/ThQmDYtdSS/

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

- If for some reason a live migration fails for an instance with an SRIOV port
- during the '_pre_live_migration' hook. The instance will lose access to the
- network and leave behind duplicated port bindings on the database.
+ If for some reason a live migration fails for an instance with an SRIOV
+ port during the '_pre_live_migration' hook. The instance will lose
+ access to the network and leave behind duplicated port bindings on the
+ database.
  
- The instance re-gains connectivity on the source host after a reboot (don't
- know if there's another way to restore connectivity). As a side effect of this
- behavior, the pre-live migration cleanup hook also fails with:
+ The instance re-gains connectivity on the source host after a reboot
+ (don't know if there's another way to restore connectivity). As a side
+ effect of this behavior, the pre-live migration cleanup hook also fails
+ with:
  
  PCI device 0000:3b:10.0 is in use by driver QEMU
  
  [How to reproduce]
  
- Create an environment with SRIOV, (our case uses switchdev[1])
- Create 1 VM
- Provoke a failure in the _pre_live_migration process (for example creating a directory /var/lib/nova/instances/)
- Check the VM's connectivity
- Check the logs for: libvirt.libvirtError: Requested operation is not valid: PCI device 0000:03:04.1 is in use by driver QEMU, domain instance-0001
+ - Create an environment with SRIOV, (our case uses switchdev[1])
+ - Create 1 VM
+ - Provoke a failure in the _pre_live_migration process (for example creating a directory /var/lib/nova/instances/)
+ - Check the VM's connectivity
+ - Check the logs for: libvirt.libvirtError: Requested operation is not valid: PCI device 0000:03:04.1 is in use by driver QEMU, domain instance-0001
  Full-stack trace[2]
  
  [Expected]
  
  VM connectivity is restored even if it gets a brief disconnection
  
  [Observed]
  VM loses connectivity which is only is restored after the VM status is set to ERROR and the VM is power recycled
  
- 
- 
  [1] https://paste.ubuntu.com/p/PzBM7y6Dbr/
  [2] https://paste.ubuntu.com/p/ThQmDYtdSS/

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1944619

Title:
  Instances with SRIOV ports lose access after failed live migrations

Status in neutron:
  New

Bug description:
  If for some reason a live migration fails for an instance with an
  SRIOV port during the '_pre_live_migration' hook. The instance will
  lose access to the network and leave behind duplicated port bindings
  on the database.

  The instance re-gains connectivity on the source host after a reboot
  (don't know if there's another way to restore connectivity). As a side
  effect of this behavior, the pre-live migration cleanup hook also
  fails with:

  PCI device :3b:10.0 is in use by driver QEMU

  [How to reproduce]

  - Create an environment with SRIOV, (our case uses switchdev[1])
  - Create 1 VM
  - Provoke a failure in the _pre_live_migration process (for example creating 
a directory /var/lib/nova/instances/)
  - Check the VM's connectivity
  - Check the logs for: libvirt.libvirtError: Requested operation is not valid: 
PCI device :03:04.1 is in use by driver QEMU, domain instance-0001
  Full-stack trace[2]

  [Expected]

  VM connectivity is restored even if it gets a brief disconnection

  [Observed]
  VM loses connectivity which is only is restored after the VM status is set to 
ERROR and the VM is power recycled

  [1] https://paste.ubuntu.com/p/PzBM7y6Dbr/
  [2] https://paste.ubuntu.com/p/ThQmDYtdSS/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1944619/+subscriptions
[Yahoo-eng-team] [Bug 1943266] Re: Duplicated ARP responses from ovn-metadata namespaces

2021-09-17 Thread Erlon R. Cruz
** Description changed:

- When OpenStack instances are connected to an external network, an ovn-etadata-namespace is created in each compute that has VMs attached to that
- network. Because the ovn-metadata namespace has interfaces with the same mac address in all computers, external switches might ARP query for the IP
-  and receive multiple responses in different ports triggering network error alerts.
+ When OpenStack instances are connected to an external network, an ovn-etadata-namespace is created in each compute that has VMs attached to that
+ network. Because the ovn-metadata namespace has interfaces with the same mac address in all computers, external switches might ARP query for the IP
+  and receive multiple responses in different ports triggering network error alerts.
+ 
+ Reproducer: https://paste.ubuntu.com/p/nbnhvTM9d4/

** Description changed:

  When OpenStack instances are connected to an external network, an ovn-etadata-namespace is created in each compute that has VMs attached to that
  network. Because the ovn-metadata namespace has interfaces with the same mac address in all computers, external switches might ARP query for the IP
   and receive multiple responses in different ports triggering network error alerts.
  
+ [ubuntu@sombrafam-bastion(kvm):~/internal_git/stsstack-bundles/openstack]$ sudo arping -c 1 10.5.150.0
+ ARPING 10.5.150.0
+ 42 bytes from fa:16:3e:d3:10:01 (10.5.150.0): index=0 time=1.678 msec
+ 42 bytes from fa:16:3e:d3:10:01 (10.5.150.0): index=1 time=2.143 msec
+ 
+ --- 10.5.150.0 statistics ---
+ 1 packets transmitted, 2 packets received,   0% unanswered (1 extra)
+ rtt min/avg/max/std-dev = 1.678/1.911/2.143/0.232 ms
+ 
+ 
  Reproducer: https://paste.ubuntu.com/p/nbnhvTM9d4/

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1943266

Title:
  Duplicated ARP responses from ovn-metadata namespaces

Status in networking-ovn:
  New
Status in neutron:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-ovn/+bug/1943266/+subscriptions




[Yahoo-eng-team] [Bug 1832021] Re: Checksum drop of metadata traffic on isolated networks with DPDK

2021-03-01 Thread Erlon R. Cruz
** Description changed:

+ [Impact]
+ 
  When an isolated network uses provider networks for tenants (meaning
  without virtual routers: DVR or a network node), metadata access occurs
  in the qdhcp ip netns rather than the qrouter netns.
  
  The following options are set in the dhcp_agent.ini file:
  force_metadata = True
  enable_isolated_metadata = True
  
  VMs on the provider tenant network are unable to access metadata, as
  packets are dropped due to checksum errors.
  
- When we added the following in the qdhcp netns, VMs regained access to
- metadata:
+ [Test Plan]
  
-  iptables -t mangle -A OUTPUT -o ns-+ -p tcp --sport 80 -j CHECKSUM --checksum-fill
+ 1. Create an OpenStack deployment with DPDK options enabled and 'enable-local-dhcp-and-metadata: true' in neutron-openvswitch. A sample, simple 3-node bundle can be found here[1].
  
- It seems this setting was recently removed from the qrouter netns [0]
- but it never existed in the qdhcp to begin with.
+ 2. Create an external flat network and subnet:
  
- [0] https://review.opendev.org/#/c/654645/
+ openstack network show dpdk_net || \
+   openstack network create --provider-network-type flat \
+     --provider-physical-network physnet1 dpdk_net \
+     --external
  
- Related LP Bug #1831935
- See https://bugs.launchpad.net/charm-neutron-openvswitch/+bug/1831935/comments/10
+ openstack subnet show dpdk_net || \
+   openstack subnet create --allocation-pool start=10.230.58.100,end=10.230.58.200 \
+     --subnet-range 10.230.56.0/21 --dhcp --gateway 10.230.56.1 \
+     --dns-nameserver 10.230.56.2 \
+     --ip-version 4 --network dpdk_net dpdk_subnet
+ 
+ 
+ 3. Create an instance attached to that network. The instance must have a flavor that uses huge pages.
  
+ openstack flavor create --ram 8192 --disk 50 --vcpus 4 m1.dpdk
+ openstack flavor set m1.dpdk --property hw:mem_page_size=large
  
+ openstack server create --wait --image xenial --flavor m1.dpdk --key-name testkey --network dpdk_net i1
  
+ 4. Log into the instance host and check the instance console. The
+ instance will hang during boot and show the following message:
  
+ 2020-11-20 09:43:26,790 - openstack.py[DEBUG]: Failed reading optional
+ path http://169.254.169.254/openstack/2015-10-15/user_data due to:
+ HTTPConnectionPool(host='169.254.169.254', port=80): Read timed out.
+ (read timeout=10.0)
  
+ 5. Apply the fix on all computes, restart the DHCP agents on all
+ computes, and create the instance again.
  
+ 6. No errors should be shown and the instance boots quickly.
+ 
+ 
+ [Where problems could occur]
+ 
+ * This change is only exercised when datapath_type and ovs_use_veth are
+ set; those settings are mostly used in DPDK environments. The core of
+ the fix is to toggle off the checksum offload done by the DHCP namespace
+ interfaces. This adds some packet-processing overhead for DHCP traffic,
+ but given that DHCP does not move much data, this should be a minor
+ problem.
+ 
+ * Future changes to the syntax of the ethtool command could cause
+ regressions
+ 
+ 
+ [Other Info]
+ 
+  * None
+ 
+ 
+ [1] https://gist.github.com/sombrafam/e0741138773e444960eb4aeace6e3e79
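
For illustration, the kind of per-interface toggle the fix applies inside
the DHCP namespace looks like this (a hedged sketch; the namespace and
interface names are examples, not taken from the actual patch):

  # Disable TX checksum offload on the qdhcp tap interface
  ip netns exec qdhcp-<network-id> ethtool -K <tap-interface> tx off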

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1832021

Title:
  Checksum drop of metadata traffic on isolated networks with DPDK

Status in OpenStack neutron-openvswitch charm:
  Fix Released
Status in Ubuntu Cloud Archive:
  New
Status in neutron:
  Fix Released


[Yahoo-eng-team] [Bug 1856175] [NEW] Horizon domain managing member dropdown menu does not work

2019-12-12 Thread Erlon R. Cruz
Public bug reported:

- Create an additional domain in OpenStack, and add users to this domain
- Configure horizon to display the domain selector in the login screen
- Set the admin context
- Go to Identity > Domains (should list all the domains)
- Click the dropdown of the default domain > Manage Members[1]: users are shown[2]
- Click the dropdown of a created domain > Manage Members[3]: users are not shown![4]

[1] https://imgbbb.com/image/Le4inv
[2] https://imgbbb.com/image/Le44Br
[3] https://imgbbb.com/image/Le4fLU
[4] https://imgbbb.com/image/Le4S3W

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1856175

Title:
  Horizon domain managing member dropdown menu does not work

Status in OpenStack Dashboard (Horizon):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1856175/+subscriptions



[Yahoo-eng-team] [Bug 1620028] Re: Nova issue - InternalError: (1049, u"Unknown database 'nova_api'")

2016-09-14 Thread Erlon R. Cruz
I'm getting this error in a gate job:
http://logs.openstack.org/16/369516/6/check/gate-tempest-dsvm-full-devstack-plugin-nfs-nv/cd3409a/logs/devstacklog.txt.gz

gerrit link: https://review.openstack.org/#/c/369516/

** Changed in: nova
   Status: Invalid => New

** Project changed: nova => devstack-plugin-sheepdog

** Project changed: devstack-plugin-sheepdog => devstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1620028

Title:
  Nova issue - InternalError: (1049, u"Unknown database 'nova_api'")

Status in devstack:
  New

Bug description:
  Hi all,

  I ran stack.sh with devstack today. Devstack still installed
  successfully, but when I checked the stack.sh log I found an error:
  InternalError: (1049, u"Unknown database 'nova_api'")

  The detailed log is attached here:
  http://paste.openstack.org/show/566648/

  and full stack.sh log:
  https://drive.google.com/file/d/0B7Fzz6EvT2F9T0tVUHUtdk55SVE/view

  Thanks

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1620028/+subscriptions



[Yahoo-eng-team] [Bug 1491586] [NEW] Boot from image (Creates a new volume) does not work

2015-09-02 Thread Erlon R. Cruz
Public bug reported:

The instance creation fails with:

Invalid input for field/attribute delete_on_termination. Value: 1. 1 is
not of type 'boolean', 'string' (HTTP 400) (Request-ID:
req-7cd57330-cbfc-40ab-9622-084edc2f4d57)
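
The 400 comes from the API schema: delete_on_termination in
block_device_mapping_v2 must be a JSON boolean or string, while the
dashboard sent the integer 1. A CLI call that sends the field correctly
(hedged example; the image UUID and flavor are placeholders):

  # shutdown=remove maps to delete_on_termination=True
  nova boot --flavor m1.small \
    --block-device source=image,id=<image-uuid>,dest=volume,size=20,shutdown=remove,bootindex=0 \
    bfv-test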

** Affects: horizon
 Importance: Undecided
 Assignee: Erlon R. Cruz (sombrafam)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1491586

Title:
  Boot from image (Creates a new volume) does not work

Status in OpenStack Dashboard (Horizon):
  In Progress


To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1491586/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp