[Yahoo-eng-team] [Bug 2053163] [NEW] VM hard reboot fails on Live Migration Abort with node having Two numa sockets

2024-02-14 Thread keerthivasan
ot;nova_object.version": "1.3", "nova_object.data": {"cells":
[{"nova_object.name": "InstanceNUMACell", "nova_object.namespace":
"nova", "nova_object.version": "1.6", "nova_object.data": {"id": 0,
"cpuset": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15],
"pcpuset": [], "cpuset_reserved": null, "memory": 81920, "pagesize":
1048576, "cpu_pinning_raw": null, "cpu_policy": null,
"cpu_thread_policy": null}, "nova_object.changes": ["cpuset_reserved",
"id", "pcpuset", "pagesize", "cpu_pinning_raw", "cpu_policy",
"cpu_thread_policy", "memory", "cpuset"]}], "emulator_threads_policy":
null}, "nova_object.changes": ["emulator_threads_policy", "cells"]},
"old_numa_topology": {"nova_object.name": "InstanceNUMATopology",
"nova_object.namespace": "nova", "nova_object.version": "1.3",
"nova_object.data": {"cells": [{"nova_object.name": "InstanceNUMACell",
"nova_object.namespace": "nova", "nova_object.version": "1.6",
"nova_object.data": {"id": 1, "cpuset": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
10, 11, 12, 13, 14, 15], "pcpuset": [], "cpuset_reserved": null,
"memory": 81920, "pagesize": 1048576, "cpu_pinning_raw": null,
"cpu_policy": null, "cpu_thread_policy": null}, "nova_object.changes":
["id", "pagesize"]}], "emulator_threads_policy": null},
"nova_object.changes": ["emulator_threads_policy", "cells"]}

The old NUMA cell is 1; the new NUMA cell is 0.

#trigger abort

Feb 13 20:59:00 cdc-appblx095-36 nova-compute[638201]: 2024-02-13
20:59:00.991 638201 ERROR nova.virt.libvirt.driver [None
req-05850c05-ba5b-40ae-a37c-5ccdde8ded47
4807f132b7bb47bbabbe50de9bd974c8 b61fc56101024f498d4d95e863c7333f - -
default default] [instance: 4b115eb3-59f7-4e27-b877-2e326ef017b3]
Migration operation has aborted

After the abort, the instance NUMA topology was updated to NUMA cell 0, which
belongs to the destination host.

| {"nova_object.name": "InstanceNUMATopology", "nova_object.namespace":
"nova", "nova_object.version": "1.3", "nova_object.data": {"cells":
[{"nova_object.name": "InstanceNUMACell", "nova_object.namespace":
"nova", "nova_object.version": "1.6", "nova_object.data": {"id": 0,
"cpuset": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15],
"pcpuset": [], "cpuset_reserved": null, "memory": 81920, "pagesize":
1048576, "cpu_pinning_raw": null, "cpu_policy": null,
"cpu_thread_policy": null}, "nova_object.changes": ["cpu_thread_policy",
"cpuset_reserved", "cpu_pinning_raw", "cpuset", "cpu_policy", "memory",
"pagesize", "pcpuset", "id"]}], "emulator_threads_policy": null},
"nova_object.changes": ["emulator_threads_policy", "cells"]} |

The migration context is not deleted.
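
For illustration, a minimal sketch of the rollback this report expects, assuming
nova's Instance.revert_migration_context() and drop_migration_context() helpers;
this is not nova's actual abort handler, only an outline of the intended
behaviour:

# Hedged sketch, not nova's real abort path: on a live-migration abort the
# instance should fall back to the NUMA topology recorded before the migration
# and the stale migration context should be removed.
def rollback_numa_on_abort(instance):
    if instance.migration_context is not None:
        # restore the fields (including numa_topology) saved in the context
        instance.revert_migration_context()
        # drop the context so a later hard reboot does not regenerate the XML
        # from the destination-side topology
        instance.drop_migration_context()
        instance.save()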


Expected result
===
The NUMA topology of the VM should be rolled back to its original state. Without
that rollback, subsequent hard reboots of the VM fail because no resources are
available on the recorded NUMA node.

Actual result
=
After the abort, the VM keeps the new NUMA topology calculated from the
destination NUMA details.

Environment
===

Using the Antelope release on Ubuntu, kernel 6.5.0-15-generic #15~22.04.1-Ubuntu
SMP PREEMPT_DYNAMIC Fri Jan 12 18:54:30 UTC 2 x86_64 x86_64 x86_64 GNU/Linux

** Affects: nova
 Importance: Undecided
 Assignee: keerthivasan (keerthivassan86)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => keerthivasan (keerthivassan86)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2053163

Title:
  VM hard reboot fails on Live Migration Abort with node having Two
  numa sockets

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  When a live migration is aborted, the new NUMA topology calculated for the
destination is applied to the instance on the source host, even though the
instance keeps running on the source. A later hard reboot, which regenerates the
domain XML, then uses this updated NUMA topology, which points at a cell with no
available resources, and the VM fails to recover.

  
  Steps to reproduce (100% reproducible)
  ==

  
  As part of this, compu

[Yahoo-eng-team] [Bug 2052473] Re: Live migration post-copy not working as Expected

2024-02-06 Thread keerthivasan
** Changed in: nova
   Status: Invalid => New

** Changed in: nova
 Assignee: (unassigned) => keerthivasan (keerthivassan86)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2052473

Title:
  Live migration post-copy not working as Expected

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  I am trying to enable the live migration post-copy feature using the config
below, but I am seeing a "post-copy is not supported" error.

  
  block_migration_flag = 
VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_NON_SHARED_INC
  cpu_mode = custom
  cpu_model_extra_flags = 
-ds,-acpi,+ss,-ht,-tm,-pbe,-dtes64,-monitor,-ds_cpl,+vmx,-smx,-est,-tm2,-xtpr,+pdcm,-dca,+tsc_adjust,-intel-pt,+md-clear,+stibp,+ssbd,+pdpe1gb,-invtsc,-hle,-rtm,-mpx,-xsavec,-xgetbv1
  cpu_models = Skylake-Client-IBRS
  live_migration_bandwidth = 900
  live_migration_downtime = 100
  live_migration_flag = 
VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE
  live_migration_permit_post_copy = True
  live_migration_timeout_action=force_complete


  Steps to reproduce
  ==

  KVM hypervisor

  Using Openstack Antelope base version

  qemu-system-x86_64 --version
  QEMU emulator version 6.2.0 (Debian 1:6.2+dfsg-2ubuntu6.16)
  Copyright (c) 2003-2021 Fabrice Bellard and the QEMU Project developers

  libvirtd (libvirt) 8.0.0

  # Create a general VM once the config is set
  # Perform a live migration, either with block migration or without

  After the pre-migration phase, the migration is triggered successfully, but
  while memory is being copied a "post-copy is not supported" error appears.


  Expected result
  ===
  Migration should be successful

  Actual result
  =

  compute.log

  Feb 05 22:16:02 cdc-appblx095-37 nova-compute[1156821]: 2024-02-05 
22:16:02.988 1156821 INFO nova.virt.libvirt.migration [None 
req-f6650a9f-9465-40ca-9981-fff470751fc7 4807f132b7bb47bbabbe50de9bd974c8 
b61fc56101024f498d4d95e863c7333f - - default default] [instance: 
31fcf3ba-c0b1-4c74-afdd-685ba45a11f0] Increasing downtime to 10 ms after 0 sec 
elapsed time
  Feb 05 22:16:03 cdc-appblx095-37 nova-compute[1156821]: 2024-02-05 
22:16:03.069 1156821 INFO nova.virt.libvirt.driver [None 
req-f6650a9f-9465-40ca-9981-fff470751fc7 4807f132b7bb47bbabbe50de9bd974c8 
b61fc56101024f498d4d95e863c7333f - - default default] [instance: 
31fcf3ba-c0b1-4c74-afdd-685ba45a11f0] Migration running for 0 secs, memory 100% 
remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes 
processed=0, remaining=0, total=0).
  Feb 05 22:16:03 cdc-appblx095-37 nova-compute[1156821]: 2024-02-05 
22:16:03.571 1156821 DEBUG nova.virt.libvirt.migration [None 
req-f6650a9f-9465-40ca-9981-fff470751fc7 4807f132b7bb47bbabbe50de9bd974c8 
b61fc56101024f498d4d95e863c7333f - - default default] [instance: 
31fcf3ba-c0b1-4c74-afdd-685ba45a11f0] Current 10 elapsed 1 steps [(0, 10), 
(960, 19), (1920, 28), (2880, 37), (3840, 46), (4800, 55), (5760, 64), (6720, 
73), (7680, 82), (8640, 91), (9600, 100)] update_downtime 
/openstack/venvs/nova-27.4.0/lib/python3.10/site-packages/nova/virt/libvirt/migration.py:512
  Feb 05 22:16:03 cdc-appblx095-37 nova-compute[1156821]: 2024-02-05 
22:16:03.572 1156821 DEBUG nova.virt.libvirt.migration [None 
req-f6650a9f-9465-40ca-9981-fff470751fc7 4807f132b7bb47bbabbe50de9bd974c8 
b61fc56101024f498d4d95e863c7333f - - default default] [instance: 
31fcf3ba-c0b1-4c74-afdd-685ba45a11f0] Downtime does not need to change 
update_downtime 
/openstack/venvs/nova-27.4.0/lib/python3.10/site-packages/nova/virt/libvirt/migration.py:525
  Feb 05 22:16:04 cdc-appblx095-37 nova-compute[1156821]: 2024-02-05 
22:16:04.074 1156821 DEBUG nova.virt.libvirt.migration [None 
req-f6650a9f-9465-40ca-9981-fff470751fc7 4807f132b7bb47bbabbe50de9bd974c8 
b61fc56101024f498d4d95e863c7333f - - default default] [instance: 
31fcf3ba-c0b1-4c74-afdd-685ba45a11f0] Current 10 elapsed 1 steps [(0, 10), 
(960, 19), (1920, 28), (2880, 37), (3840, 46), (4800, 55), (5760, 64), (6720, 
73), (7680, 82), (8640, 91), (9600, 100)] update_downtime 
/openstack/venvs/nova-27.4.0/lib/python3.10/site-packages/nova/virt/libvirt/migration.py:512
  Feb 05 22:16:04 cdc-appblx095-37 nova-compute[1156821]: 2024-02-05 
22:16:04.075 1156821 DEBUG nova.virt.libvirt.migration [None 
req-f6650a9f-9465-40ca-9981-fff470751fc7 4807f132b7bb47bbabbe50de9bd974c8 
b61fc56101024f498d4d95e863c7333f - - default default] [instance: 
31fcf3ba-c0b1-4c74-afdd-685ba45a11f0] Downtime does not need to change 
update_downtime 
/openstack/venvs/nova-27.4.0/lib/python3.10/site-packages/nova/virt/libvirt/migration.py:525
  Feb 05 22:16:04 cdc-appblx095-37 nova-compute[1156821]: 2024-02-05 
22:16:04.577 1156821 DEBUG nova.virt.libvirt.migration [None 
req-f6650a9f-9465-40ca-9981-fff470751fc7 4807f132b7bb47bbabbe50

[Yahoo-eng-team] [Bug 2052473] Re: Live migration post-copy not working as Expected

2024-02-06 Thread keerthivasan
Thanks @sean for your input; after setting the `vm.unprivileged_userfaultfd`
sysctl it is working as expected. Changed the status of this bug to Invalid.
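
For reference, a minimal sketch (an addition, not part of the original report) of
checking the kernel knob that post-copy depends on: QEMU uses userfaultfd for
post-copy, and on kernels that have the vm.unprivileged_userfaultfd sysctl it
must be set to 1 for an unprivileged QEMU process to use it.

# Hedged sketch: /proc/sys/vm/unprivileged_userfaultfd mirrors the
# vm.unprivileged_userfaultfd sysctl; a value of 1 allows unprivileged
# processes (such as QEMU run by libvirt) to create userfaultfd handles.
from pathlib import Path

def unprivileged_userfaultfd_enabled():
    path = Path("/proc/sys/vm/unprivileged_userfaultfd")
    return path.exists() and path.read_text().strip() == "1"

if __name__ == "__main__":
    print("unprivileged userfaultfd allowed:", unprivileged_userfaultfd_enabled())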

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2052473

Title:
  Live migration post-copy not working as Expected

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Description
  ===
  I am trying to enable the live migration post-copy feature using the config
below, but I am seeing a "post-copy is not supported" error.

  
  block_migration_flag = 
VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_NON_SHARED_INC
  cpu_mode = custom
  cpu_model_extra_flags = 
-ds,-acpi,+ss,-ht,-tm,-pbe,-dtes64,-monitor,-ds_cpl,+vmx,-smx,-est,-tm2,-xtpr,+pdcm,-dca,+tsc_adjust,-intel-pt,+md-clear,+stibp,+ssbd,+pdpe1gb,-invtsc,-hle,-rtm,-mpx,-xsavec,-xgetbv1
  cpu_models = Skylake-Client-IBRS
  live_migration_bandwidth = 900
  live_migration_downtime = 100
  live_migration_flag = 
VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE
  live_migration_permit_post_copy = True
  live_migration_timeout_action=force_complete


  Steps to reproduce
  ==

  KVM hypervisor

  Using Openstack Antelope base version

  qemu-system-x86_64 --version
  QEMU emulator version 6.2.0 (Debian 1:6.2+dfsg-2ubuntu6.16)
  Copyright (c) 2003-2021 Fabrice Bellard and the QEMU Project developers

  libvirtd (libvirt) 8.0.0

  # Create a general VM once the config is set
  # Perform a live migration, either with block migration or without

  After the pre-migration phase, the migration is triggered successfully, but
  while memory is being copied a "post-copy is not supported" error appears.


  Expected result
  ===
  Migration should be successful

  Actual result
  =

  compute.log

  Feb 05 22:16:02 cdc-appblx095-37 nova-compute[1156821]: 2024-02-05 
22:16:02.988 1156821 INFO nova.virt.libvirt.migration [None 
req-f6650a9f-9465-40ca-9981-fff470751fc7 4807f132b7bb47bbabbe50de9bd974c8 
b61fc56101024f498d4d95e863c7333f - - default default] [instance: 
31fcf3ba-c0b1-4c74-afdd-685ba45a11f0] Increasing downtime to 10 ms after 0 sec 
elapsed time
  Feb 05 22:16:03 cdc-appblx095-37 nova-compute[1156821]: 2024-02-05 
22:16:03.069 1156821 INFO nova.virt.libvirt.driver [None 
req-f6650a9f-9465-40ca-9981-fff470751fc7 4807f132b7bb47bbabbe50de9bd974c8 
b61fc56101024f498d4d95e863c7333f - - default default] [instance: 
31fcf3ba-c0b1-4c74-afdd-685ba45a11f0] Migration running for 0 secs, memory 100% 
remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes 
processed=0, remaining=0, total=0).
  Feb 05 22:16:03 cdc-appblx095-37 nova-compute[1156821]: 2024-02-05 
22:16:03.571 1156821 DEBUG nova.virt.libvirt.migration [None 
req-f6650a9f-9465-40ca-9981-fff470751fc7 4807f132b7bb47bbabbe50de9bd974c8 
b61fc56101024f498d4d95e863c7333f - - default default] [instance: 
31fcf3ba-c0b1-4c74-afdd-685ba45a11f0] Current 10 elapsed 1 steps [(0, 10), 
(960, 19), (1920, 28), (2880, 37), (3840, 46), (4800, 55), (5760, 64), (6720, 
73), (7680, 82), (8640, 91), (9600, 100)] update_downtime 
/openstack/venvs/nova-27.4.0/lib/python3.10/site-packages/nova/virt/libvirt/migration.py:512
  Feb 05 22:16:03 cdc-appblx095-37 nova-compute[1156821]: 2024-02-05 
22:16:03.572 1156821 DEBUG nova.virt.libvirt.migration [None 
req-f6650a9f-9465-40ca-9981-fff470751fc7 4807f132b7bb47bbabbe50de9bd974c8 
b61fc56101024f498d4d95e863c7333f - - default default] [instance: 
31fcf3ba-c0b1-4c74-afdd-685ba45a11f0] Downtime does not need to change 
update_downtime 
/openstack/venvs/nova-27.4.0/lib/python3.10/site-packages/nova/virt/libvirt/migration.py:525
  Feb 05 22:16:04 cdc-appblx095-37 nova-compute[1156821]: 2024-02-05 
22:16:04.074 1156821 DEBUG nova.virt.libvirt.migration [None 
req-f6650a9f-9465-40ca-9981-fff470751fc7 4807f132b7bb47bbabbe50de9bd974c8 
b61fc56101024f498d4d95e863c7333f - - default default] [instance: 
31fcf3ba-c0b1-4c74-afdd-685ba45a11f0] Current 10 elapsed 1 steps [(0, 10), 
(960, 19), (1920, 28), (2880, 37), (3840, 46), (4800, 55), (5760, 64), (6720, 
73), (7680, 82), (8640, 91), (9600, 100)] update_downtime 
/openstack/venvs/nova-27.4.0/lib/python3.10/site-packages/nova/virt/libvirt/migration.py:512
  Feb 05 22:16:04 cdc-appblx095-37 nova-compute[1156821]: 2024-02-05 
22:16:04.075 1156821 DEBUG nova.virt.libvirt.migration [None 
req-f6650a9f-9465-40ca-9981-fff470751fc7 4807f132b7bb47bbabbe50de9bd974c8 
b61fc56101024f498d4d95e863c7333f - - default default] [instance: 
31fcf3ba-c0b1-4c74-afdd-685ba45a11f0] Downtime does not need to change 
update_downtime 
/openstack/venvs/nova-27.4.0/lib/python3.10/site-packages/nova/virt/libvirt/migration.py:525
  Feb 05 22:16:04 cdc-appblx095-37 nova-compute[1156821]: 2024-02-05 
22:16:04.577 1156821 DEBUG nova.virt.libvirt.migration [None 
req-f6650a9f-9465-40ca-9981-fff470751fc7 

[Yahoo-eng-team] [Bug 2052473] [NEW] Live migration post-copy not working as Expected

2024-02-05 Thread keerthivasan
Public bug reported:

Description
===
I am trying to enable the live migration post-copy feature using the config
below, but I am seeing a "post-copy is not supported" error.


block_migration_flag = 
VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_NON_SHARED_INC
cpu_mode = custom
cpu_model_extra_flags = 
-ds,-acpi,+ss,-ht,-tm,-pbe,-dtes64,-monitor,-ds_cpl,+vmx,-smx,-est,-tm2,-xtpr,+pdcm,-dca,+tsc_adjust,-intel-pt,+md-clear,+stibp,+ssbd,+pdpe1gb,-invtsc,-hle,-rtm,-mpx,-xsavec,-xgetbv1
cpu_models = Skylake-Client-IBRS
live_migration_bandwidth = 900
live_migration_downtime = 100
live_migration_flag = 
VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE
live_migration_permit_post_copy = True
live_migration_timeout_action=force_complete


Steps to reproduce
==

KVM hypervisor

Using Openstack Antelope base version

qemu-system-x86_64 --version
QEMU emulator version 6.2.0 (Debian 1:6.2+dfsg-2ubuntu6.16)
Copyright (c) 2003-2021 Fabrice Bellard and the QEMU Project developers

libvirtd (libvirt) 8.0.0

# Create a general VM once the config is set
# Perform a live migration, either with block migration or without

After the pre-migration phase, the migration is triggered successfully, but
while memory is being copied a "post-copy is not supported" error appears.


Expected result
===
Migration should be successful

Actual result
=

compute.log

Feb 05 22:16:02 cdc-appblx095-37 nova-compute[1156821]: 2024-02-05 22:16:02.988 
1156821 INFO nova.virt.libvirt.migration [None 
req-f6650a9f-9465-40ca-9981-fff470751fc7 4807f132b7bb47bbabbe50de9bd974c8 
b61fc56101024f498d4d95e863c7333f - - default default] [instance: 
31fcf3ba-c0b1-4c74-afdd-685ba45a11f0] Increasing downtime to 10 ms after 0 sec 
elapsed time
Feb 05 22:16:03 cdc-appblx095-37 nova-compute[1156821]: 2024-02-05 22:16:03.069 
1156821 INFO nova.virt.libvirt.driver [None 
req-f6650a9f-9465-40ca-9981-fff470751fc7 4807f132b7bb47bbabbe50de9bd974c8 
b61fc56101024f498d4d95e863c7333f - - default default] [instance: 
31fcf3ba-c0b1-4c74-afdd-685ba45a11f0] Migration running for 0 secs, memory 100% 
remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes 
processed=0, remaining=0, total=0).
Feb 05 22:16:03 cdc-appblx095-37 nova-compute[1156821]: 2024-02-05 22:16:03.571 
1156821 DEBUG nova.virt.libvirt.migration [None 
req-f6650a9f-9465-40ca-9981-fff470751fc7 4807f132b7bb47bbabbe50de9bd974c8 
b61fc56101024f498d4d95e863c7333f - - default default] [instance: 
31fcf3ba-c0b1-4c74-afdd-685ba45a11f0] Current 10 elapsed 1 steps [(0, 10), 
(960, 19), (1920, 28), (2880, 37), (3840, 46), (4800, 55), (5760, 64), (6720, 
73), (7680, 82), (8640, 91), (9600, 100)] update_downtime 
/openstack/venvs/nova-27.4.0/lib/python3.10/site-packages/nova/virt/libvirt/migration.py:512
Feb 05 22:16:03 cdc-appblx095-37 nova-compute[1156821]: 2024-02-05 22:16:03.572 
1156821 DEBUG nova.virt.libvirt.migration [None 
req-f6650a9f-9465-40ca-9981-fff470751fc7 4807f132b7bb47bbabbe50de9bd974c8 
b61fc56101024f498d4d95e863c7333f - - default default] [instance: 
31fcf3ba-c0b1-4c74-afdd-685ba45a11f0] Downtime does not need to change 
update_downtime 
/openstack/venvs/nova-27.4.0/lib/python3.10/site-packages/nova/virt/libvirt/migration.py:525
Feb 05 22:16:04 cdc-appblx095-37 nova-compute[1156821]: 2024-02-05 22:16:04.074 
1156821 DEBUG nova.virt.libvirt.migration [None 
req-f6650a9f-9465-40ca-9981-fff470751fc7 4807f132b7bb47bbabbe50de9bd974c8 
b61fc56101024f498d4d95e863c7333f - - default default] [instance: 
31fcf3ba-c0b1-4c74-afdd-685ba45a11f0] Current 10 elapsed 1 steps [(0, 10), 
(960, 19), (1920, 28), (2880, 37), (3840, 46), (4800, 55), (5760, 64), (6720, 
73), (7680, 82), (8640, 91), (9600, 100)] update_downtime 
/openstack/venvs/nova-27.4.0/lib/python3.10/site-packages/nova/virt/libvirt/migration.py:512
Feb 05 22:16:04 cdc-appblx095-37 nova-compute[1156821]: 2024-02-05 22:16:04.075 
1156821 DEBUG nova.virt.libvirt.migration [None 
req-f6650a9f-9465-40ca-9981-fff470751fc7 4807f132b7bb47bbabbe50de9bd974c8 
b61fc56101024f498d4d95e863c7333f - - default default] [instance: 
31fcf3ba-c0b1-4c74-afdd-685ba45a11f0] Downtime does not need to change 
update_downtime 
/openstack/venvs/nova-27.4.0/lib/python3.10/site-packages/nova/virt/libvirt/migration.py:525
Feb 05 22:16:04 cdc-appblx095-37 nova-compute[1156821]: 2024-02-05 22:16:04.577 
1156821 DEBUG nova.virt.libvirt.migration [None 
req-f6650a9f-9465-40ca-9981-fff470751fc7 4807f132b7bb47bbabbe50de9bd974c8 
b61fc56101024f498d4d95e863c7333f - - default default] [instance: 
31fcf3ba-c0b1-4c74-afdd-685ba45a11f0] Current 10 elapsed 2 steps [(0, 10), 
(960, 19), (1920, 28), (2880, 37), (3840, 46), (4800, 55), (5760, 64), (6720, 
73), (7680, 82), (8640, 91), (9600, 100)] update_downtime 
/openstack/venvs/nova-27.4.0/lib/python3.10/site-packages/nova/virt/libvirt/migration.py:512
Feb 05 22:16:04 cdc-appblx095-37 nova-compute[1156821]: 2024-02-05 22:16:04.577 
1156821 DEBUG nova.virt.libvirt.migration [None 

[Yahoo-eng-team] [Bug 2008935] [NEW] Yoga, Live migration bandwidth not applies to disks

2023-03-01 Thread keerthivasan
Public bug reported:

Description
===
I am trying to live migrate with the --block-migration flag to move local disks
between compute nodes. I am setting "live_migration_bandwidth = 900" (MiB/s) and
can see the value applied by libvirt, but copying the disks takes much longer and
clearly does not use that bandwidth. The memory copy is much faster and the
bandwidth is applied properly there.

Steps to reproduce
==

VM spec (pinned VM) with OS-FLV-EXT-DATA:ephemeral 600 GB and a 10 GB root disk.
A migration with --block-migration copies the disks first, before the actual
memory transfer.

from libvirt disk targets

 Target   Source
--
 vda  /var/lib/nova/instances/84b43962-5623-42c2-9ecd-26e09753dead/disk
 vdb  /var/lib/nova/instances/84b43962-5623-42c2-9ecd-26e09753dead/disk.eph0

blockjob info for vdb


Block Copy: [ 12 %]Bandwidth limit: 943718400 bytes/s (900.000
MiB/s) ( applied bandwidth )

The actual speed is 59.3 Mb/s when checked with iftop, so it is clear that the
configured bandwidth is not applied to the disk transfer.
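
A minimal sketch (an illustration, not from the report) of reading the active
block-copy job through the libvirt Python bindings, to compare the bandwidth cap
libvirt reports with the throughput observed in iftop; libvirt-python and the
domain name are assumptions:

# Hedged sketch: blockJobInfo() returns the job type, cur/end progress and the
# bandwidth cap libvirt believes is applied (MiB/s with the default flags).
import libvirt

def block_job_bandwidth(domain_name, disk="vdb"):
    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByName(domain_name)
        info = dom.blockJobInfo(disk, 0)
        return info.get("bandwidth") if info else None
    finally:
        conn.close()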

Expected result
===
Disk copy should be done using the applied bandwidth

Actual result
=

It is clear that the configured bandwidth is not applied to the disk transfer;
the copy finished only after 40 minutes:

 13G    disk.eph0


Environment
===

Openstack version Yoga across both source & destination compute nodes

Logs & Configs
==

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

  Description
  ===
- I am trying to live migrate using --block-migration flag to move local disks 
between compute nodes. As part of the I am setting 
"live_migration_bandwidth=900" (MiB), able to see value applied by the libvirt, 
but copying disks take very longer, definitely not using the above bandwidth. 
Memory copy is ver faster & bandwidth applied properly 
+ I am trying to live migrate using --block-migration flag to move local disks 
between compute nodes. As part of the I am setting 
"live_migration_bandwidth=900" (MiB), able to see value applied by the libvirt, 
but copying disks take very longer, definitely not using the above bandwidth. 
Memory copy is ver faster & bandwidth applied properly
  
  Steps to reproduce
  ==
  
- VM spec ( Pinned vm ) with disk  OS-FLV-EXT-DATA:ephemeral 600 & root disk 
10. 
+ VM spec ( Pinned vm ) with disk  OS-FLV-EXT-DATA:ephemeral 600 & root disk 10.
  As part of migration using --block-migration copies disk first before the 
actual memory transfer
  
  from libvirt disk targets
  
-  Target   Source
+  Target   Source
  
--
-  vda  /var/lib/nova/instances/84b43962-5623-42c2-9ecd-26e09753dead/disk
-  vdb  
/var/lib/nova/instances/84b43962-5623-42c2-9ecd-26e09753dead/disk.eph0
- 
+  vda  /var/lib/nova/instances/84b43962-5623-42c2-9ecd-26e09753dead/disk
+  vdb  
/var/lib/nova/instances/84b43962-5623-42c2-9ecd-26e09753dead/disk.eph0
  
  blockjob info for vdb
  
  
  Block Copy: [ 12 %]Bandwidth limit: 943718400 bytes/s (900.000
  MiB/s) ( applied bandwidth )
  
  Actual speed is 59.3Mb while checking with iftop. It is clear that
  actual bandwidth is not applied for the disk transfer
  
- 
  Expected result
  ===
  Disk copy should be done using the applied bandwidth
  
- 
  Actual result
  =
  
-  It is clear that actual bandwidth is not applied for the disk transfer
+ It is clear that actual bandwidth is not applied for the disk transfer.
+ It copied only after 40 min
+ 
+  13Gdisk.eph0
  
  
  Environment
  ===
  
  Openstack version Yoga across both source & destination compute nodes
  
  Logs & Configs
  ==

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2008935

Title:
  Yoga, Live migration bandwidth not applies to disks

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  I am trying to live migrate using --block-migration flag to move local disks 
between compute nodes. As part of the I am setting 
"live_migration_bandwidth=900" (MiB), able to see value applied by the libvirt, 
but copying disks take very longer, definitely not using the above bandwidth. 
Memory copy is ver faster & bandwidth applied properly

  Steps to reproduce
  ==

  VM spec ( Pinned vm ) with disk  OS-FLV-EXT-DATA:ephemeral 600 & root disk 10.
  As part of migration using --block-migration copies disk first before the 
actual memory transfer

  from libvirt disk targets

   Target   Source
  
--
   vda  /var/lib/nova/instances/84b43962-5623-42c2-9ecd-26e09753dead/disk
   vdb  

[Yahoo-eng-team] [Bug 1990082] [NEW] Scheduler is not choosing host based on higher weight value

2022-09-18 Thread keerthivasan
Public bug reported:

Description:
===


Observing OpenStack scheduling behaviour, I can see a pattern where scheduling
does not strictly follow the weight values; instead a random host is picked from
the list, which violates the weighting behaviour.

Configuration Nova:
===

ram_weight_multiplier = 5.0
host_subset_size = 4

Steps To reproduce:
===

I am testing this is in nova-23.2.0 ( Wallaby ) version based on
openstack-ansible

1) Create an aggregate and make sure it has 5 hosts in it
2) Make sure instance extra specs are used for scheduling, so that the VMs are
   placed on these 5 hosts from the aggregate
3) Create 3 VMs in parallel
4) The weighed_hosts object lists all available weighed hosts (already sorted in
   descending order)
5) We are setting host_subset_size to 4 currently
6) chosen_host = random.choice(weighed_subset)  -> since the list is already
   sorted, why randomize rather than pick the first host, weighed_subset[0],
   which has the highest weight?


Expected Results:
===
1) The host with the higher weight should be picked


Actual result
=

1) A host is picked at random from the sorted, weighted subset


I am not sure of the purpose of picking a random host in the code below:

https://github.com/openstack/nova/blob/stable/yoga/nova/scheduler/manager.py#L645
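
For context, a minimal sketch (not nova's exact code) of the subset behaviour
being questioned: the scheduler keeps the top host_subset_size entries of the
weight-sorted list and picks one of them at random, which spreads concurrent
requests across several well-weighted hosts instead of piling them all onto the
single top host. Setting host_subset_size = 1 restores the strict highest-weight
choice.

import random

def choose_host(weighed_hosts, host_subset_size=4):
    # weighed_hosts is assumed to be sorted by weight, descending
    subset = weighed_hosts[:max(1, host_subset_size)]
    # a random pick inside the subset avoids races when several requests are
    # scheduled in parallel against the same weight ordering
    return random.choice(subset)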

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: wallaby-rc-potential

** Tags added: wallaby-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1990082

Title:
  Scheduler is not choosing host based on higher weight value

Status in OpenStack Compute (nova):
  New

Bug description:
  Description:
  ===

  
  As part of the openstack scheduling behaviour, able to observe this pattern 
where scheduling is not happening based on weight values, instead it is picking 
a random host from the list, thus violates the weighting behaviour

  Configuration Nova:
  ===

  ram_weight_multiplier = 5.0
  host_subset_size = 4

  Steps To reproduce:
  ===

  I am testing this is in nova-23.2.0 ( Wallaby ) version based on
  openstack-ansible

  1) Create aggregate & make sure it has 5 hosts in it
  2) Please make sure we are using instance extra spec for scheduling, to make 
sure vm's to use these 5 hosts from the aggregate
  3) create 3 vm's in parallel
  4) weighed_hosts obj list's all available weight hosts ( already sorted based 
on descending order ]
  5) We are setting host_subset_size as 4 currently
  6) 
   
  chosen_host = random.choice(weighed_subset)-> [Since this is 
already sorted , why are we randomizing the behaviour rather than picking the 
first host ,weighed_subset[0] having higher weights]
  

  Expected Results:
  ===
  1) Host with higher weights need's to be picked

  
  Actual result
  =

  1) It is picking based on random order from the sorted weighted list

  
  Not sure the purpose of picking random host on the below one

  
https://github.com/openstack/nova/blob/stable/yoga/nova/scheduler/manager.py#L645

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1990082/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1753676] [NEW] Live migration not working as Expected when Restarting nova-compute service while migration

2018-03-06 Thread keerthivasan selvaraj
Public bug reported:

Description
===

Environment: Ubuntu 16.04
Openstack Version: Pike

I am trying to live migrate a VM (block migration) from one compute node to
another. Everything looks good unless I restart the nova-compute service: the
live migration keeps running underneath via libvirt, but once the VM reaches the
destination the database is not updated properly.


Steps to reproduce:
===

nova.conf ( libvirt setting on both compute nodes )

[libvirt]
live_migration_bandwidth=1200
live_migration_downtime=100
live_migration_downtime_steps =3
live_migration_downtime_delay=10
live_migration_flag = 
VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE
virt_type = kvm
inject_password = False
disk_cachemodes = network=writeback
live_migration_uri = "qemu+tcp://nova@%s/system"
live_migration_tunnelled = False
block_migration_flag = 
VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_NON_SHARED_INC


(default OpenStack live migration configuration: pre-copy with no tunnelling)
Source VM: boot from volume, with one ephemeral disk (160 GB)


Trying to migrate the VM from compute1 to compute2; below is my source VM.

| OS-EXT-SRV-ATTR:host | compute1   
 |
| OS-EXT-SRV-ATTR:hostname | testcase1-all-ephemernal-boot-from-vol 
  |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute1   
 |
| OS-EXT-SRV-ATTR:instance_name| instance-0153

1) nova live-migration --block-migrate  compute2


[req-48a3df61-3974-46ac-8019-c4c4a0f8a8c8 4a8150eb246a4450829331e993f8c3fd 
f11a5d3631f14c4f879a2e7dddb96c06 - default default] pre_live_migration data is 
LibvirtLiveMigrateData(bdms=,block_migration=True,disk_available_mb=6900736,disk_over_commit=,filename='tmpW5ApOS',graphics_listen_addr_spice=x.x.x.x,graphics_listen_addr_vnc=127.0.0.1,image_type='default',instance_relative_path='504028fc-1381-42ca-ad7c-def7f749a722',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=True,migration=,serial_listen_addr=None,serial_listen_ports=,supported_perf_events=,target_connect_addr=)
 pre_live_migration 
/openstack/venvs/nova-16.0.6/lib/python2.7/site-packages/nova/compute/manager.py:5437


Migration started; the data and memory transfer is visible (using iftop).

Data transfer between compute nodes using iftop 

  <=
  4.94Gb  4.99Gb  5.01Gb

Restarted the nova-compute service on the source compute node (the node the VM
is migrating from).

The live migration keeps going; once it completes, below is the total data
transfer (using iftop):

TX: cum:   17.3MB   peak:   2.50Mb  

rates:   11.1Kb  7.11Kb   463Kb
RX:97.7GB   4.97Gb  

 3.82Kb  1.93Kb  1.87Gb
TOTAL: 97.7GB   4.97Gb

Once the migration completes, the virsh domain can be seen running on the
destination compute node:

root@compute2:~# virsh list --all
 IdName   State

 3 instance-0153  running

From the nova-compute.log:

Instance  has been moved to another host compute1(compute1). There
are allocations remaining against the source host that might need to be
removed: {u'resources': {u'VCPU': 8, u'MEMORY_MB': 23808, u'DISK_GB':
180}}. _remove_deleted_instances_allocations
/openstack/venvs/nova-16.0.6/lib/python2.7/site-
packages/nova/compute/resource_tracker.py:123

nova-compute still shows 0 allocated vCPUs (but an 8-core VM is there):

Total usable vcpus: 56, total allocated vcpus: 0
_report_final_resource_view /openstack/venvs/nova-16.0.6/lib/python2.7
/site-packages/nova/compute/resource_tracker.py:792

nova show  (the nova DB still shows the source hostname; it is not updated
with the new compute node)

  OS-EXT-SRV-ATTR:host | compute1   
  |
| OS-EXT-SRV-ATTR:hostname | testcase1-all-ephemernal-boot-from-vol 
  |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute1   
 |
| OS-EXT-SRV-ATTR:instance_name| instance-0153


Entire vm data is still present on both compute nodes.

After restarting the nova-compute service on the destination machine, I got the
warning below from nova-compute:

2018-03-05 11:19:05.942 5791 WARNING nova.compute.manager [-] [instance:

[Yahoo-eng-team] [Bug 1735687] [NEW] Not able to list the compute_nodes mapped under single cell using nova-manage

2017-12-01 Thread keerthivasan selvaraj
Public bug reported:


Description
===
Currently I am not able to list the compute nodes present under a single cell
using nova-manage. The only way to check the available compute nodes under a
specific cell is from the host_mappings table.
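
For illustration, a minimal sketch of that workaround, reading the mapping
directly from the nova_api database with SQLAlchemy; the connection URL is an
assumption and the table/column names are as commonly deployed:

from sqlalchemy import create_engine, text

# Assumed connection URL; point it at your nova_api database.
engine = create_engine("mysql+pymysql://nova:secret@dbhost/nova_api")

query = text("""
    SELECT hm.id, hm.host, cm.name AS cell_name
    FROM host_mappings hm
    JOIN cell_mappings cm ON hm.cell_id = cm.id
    WHERE cm.uuid = :cell_uuid
""")

with engine.connect() as conn:
    rows = conn.execute(query, {"cell_uuid": "1794f657-d8e9-41ad-bff7-01b284b55a9b"})
    for row in rows:
        print(row.id, row.host, row.cell_name)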

Steps to reproduce
==

* nova-manage cell_v2 list_hosts --cell_uuid 
* No method found


Expected result
===
nova-manage cell_v2 list_hosts --cell_uuid 1794f657-d8e9-41ad-bff7-01b284b55a9b

+----+----------+-----------+
| Id |   Host   | Cell Name |
+----+----------+-----------+
| 2  | compute2 |   cell2   |
+----+----------+-----------+

Actual result
=
No such method (list_hosts) found from nova-manage

Environment
===

Using Devstack stable/pike, nova-manage (16.0.3 ).

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1735687

Title:
  Not able to list the compute_nodes mapped under single cell using
  nova-manage

Status in OpenStack Compute (nova):
  New

Bug description:

  Description
  ===
  Currently I am not able to list the compute_nodes present under single cell 
using nova-manage.  Only way to check the available compute_nodes under 
specific cell is from the host_mappings table.

  Steps to reproduce
  ==

  * nova-manage cell_v2 list_hosts --cell_uuid 
  * No method found

  
  Expected result
  ===
  nova-manage cell_v2 list_hosts --cell_uuid 
1794f657-d8e9-41ad-bff7-01b284b55a9b

  ++--+---+
  | Id |   Host   | Cell Name |
  ++--+---+
  | 2  | compute2 |   cell2   |
  ++--+---+

  Actual result
  =
  No such method (list_hosts) found from nova-manage

  Environment
  ===

  Using Devstack stable/pike, nova-manage (16.0.3 ).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1735687/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1663600] Re: Showing forced_down value for the compute nodes.

2017-02-12 Thread keerthivasan selvaraj
** Also affects: python-novaclient
   Importance: Undecided
   Status: New

** Changed in: python-novaclient
 Assignee: (unassigned) => keerthivasan selvaraj (keerthiv)

** Changed in: python-novaclient
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1663600

Title:
  Showing forced_down value for the compute nodes.

Status in OpenStack Compute (nova):
  In Progress
Status in python-novaclient:
  In Progress

Bug description:
  Currently, no ways to identify whether specific compute_node was
  really forced_down or not.

  There may be the possibility that nova-compute service down. We know
  only after seeing the compute logs.

  Steps to Reproduce:
  

  1) Forced_down the compute_node.
  2) Execute nova hypervisor-list ( saying state is down ). State will even 
down, if nova-compute service not able to start.

  
  Actual Output:

  +--+---+---+--+
  | ID   | Hypervisor hostname   | State | Status   |
  +--+---+---+--+
  | 1| compute1.hostname.com | down  | enabled  |
  ---

  Expected Output:

  
+++---+-+-+
  | ID | Hypervisor hostname| State | Status  | 
Forced_down |
  
+++---+-+-+
  | 1  | compute1.hostname.com  | down  | enabled | yes 
|
  
+++---+-+-+

  Forced_down = True ( value will be yes )
  Forced_down = False ( value will be no )

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1663600/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1663600] [NEW] Showing forced_down value for the compute nodes.

2017-02-10 Thread keerthivasan selvaraj
Public bug reported:

Currently there is no way to identify whether a specific compute node was really
forced down or not.

It may also be that the nova-compute service is simply down; we only know after
checking the compute logs.


Actual Output:

+----+-----------------------+-------+---------+
| ID | Hypervisor hostname   | State | Status  |
+----+-----------------------+-------+---------+
| 1  | compute1.hostname.com | down  | enabled |
+----+-----------------------+-------+---------+

Expected Output:

+----+------------------------+-------+---------+-------------+
| ID | Hypervisor hostname    | State | Status  | Forced_down |
+----+------------------------+-------+---------+-------------+
| 1  | compute1.hostname.com  | down  | enabled | yes         |
+----+------------------------+-------+---------+-------------+

Forced_down = True ( value will be yes )
Forced_down = False ( value will be no )
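
For reference, a minimal sketch (an addition, not from the report) of reading
forced_down per compute service with python-novaclient; forced_down is exposed
by the os-services API from compute microversion 2.11, and the credentials below
are assumptions:

from keystoneauth1 import loading, session
from novaclient import client

# Assumed credentials; fill in for your cloud.
loader = loading.get_plugin_loader("password")
auth = loader.load_from_options(
    auth_url="http://keystone:5000/v3",
    username="admin", password="secret", project_name="admin",
    user_domain_name="Default", project_domain_name="Default",
)
nova = client.Client("2.11", session=session.Session(auth=auth))

for svc in nova.services.list(binary="nova-compute"):
    # forced_down is reported at microversion >= 2.11
    print(svc.host, svc.state, svc.status, getattr(svc, "forced_down", None))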

** Affects: nova
 Importance: Undecided
 Assignee: keerthivasan selvaraj (keerthiv)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => keerthivasan selvaraj (keerthiv)

** Changed in: nova
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1663600

Title:
  Showing forced_down value for the compute nodes.

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Currently, no ways to identify whether specific compute_node was
  really forced_down or not.

  There may be the possibility that nova-compute service down. We know
  only after seeing the compute logs.

  
  Actual Output:

  +--+---+---+--+
  | ID   | Hypervisor hostname   | State | Status   |
  +--+---+---+--+
  | 1| compute1.hostname.com | down  | enabled  |
  ---

  Expected Output:

  
+++---+-+-+
  | ID | Hypervisor hostname| State | Status  | 
Forced_down |
  
+++---+-+-+
  | 1  | compute1.hostname.com  | down  | enabled | yes 
|
  
+++---+-+-+

  Forced_down = True ( value will be yes )
  Forced_down = False ( value will be no )

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1663600/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1593120] [NEW] Unable to get OAuth request token

2016-06-16 Thread keerthivasan selvaraj
Public bug reported:

I am using Keystone (Liberty) and trying to use the OAuth functionality inside
Keystone.

References:

https://specs.openstack.org/openstack/keystone-specs/api/v3/identity-
api-v3-os-oauth1-ext.html

I am following the above link for reference to use OAuth

I am able to create a consumer, but while creating the request_token I get an
invalid signature error (unauthorized).

 == > /usr/lib/python2.7/site-
packages/keystoneclient/v3/contrib/oauth1/request_tokens.py

After the OAuth signing, I got the header and endpoint below:

headers:

{u'Authorization': u'OAuth oauth_nonce="16761963350708363801466058910",
oauth_timestamp="1466058910", oauth_version="1.0",
oauth_signature_method="HMAC-SHA1",
oauth_consumer_key="7fc8945e648248e9a5694bee3a141ac0",
oauth_callback="oob",
oauth_signature="i4UDLi75qcTKu%2FClW7KNtIl1SI4%3D"', u'requested-
project-id': u'7908bbde268348a9991ecdfa76fda577'}

endpoint:

'/OS-OAUTH1/request_token'


Error:

keystoneclient.exceptions.Unauthorized: Invalid signature (Disable debug
mode to suppress these details.) (HTTP 401)
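
For illustration, a minimal sketch of the OS-OAUTH1 flow being attempted, using
the managers under keystoneclient.v3.contrib.oauth1; the authenticated session
and the project id are assumptions, and oauthlib must be installed for the
oauth1 extension to load:

from keystoneclient.v3 import client as ks_client

# 'admin_session' is an assumed, already authenticated keystoneauth session.
keystone = ks_client.Client(session=admin_session)

consumer = keystone.oauth1.consumers.create(description="demo consumer")
request_token = keystone.oauth1.request_tokens.create(
    consumer_key=consumer.id,
    consumer_secret=consumer.secret,
    project_id="7908bbde268348a9991ecdfa76fda577",
)
# the request token would then be authorized and exchanged for an access token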

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: federation keystone oauth

** Attachment added: "python for getting request_token"
   https://bugs.launchpad.net/bugs/1593120/+attachment/4684752/+files/oauth-1.py

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1593120

Title:
  Unable to get OAuth request token

Status in OpenStack Identity (keystone):
  New

Bug description:
  I am using keystone(Liberty), trying to use OAuth functionality inside
  keystone.

  References:

  https://specs.openstack.org/openstack/keystone-specs/api/v3/identity-
  api-v3-os-oauth1-ext.html

  I am following the above link for reference to use OAuth

  Able to create consumer, while creating request_token i am getting
  invalid signature error.(unauthorized)

   == > /usr/lib/python2.7/site-
  packages/keystoneclient/v3/contrib/oauth1/request_tokens.py

  After the Oauth sign, i got the header,endpoint as below,

  headers:

  {u'Authorization': u'OAuth
  oauth_nonce="16761963350708363801466058910",
  oauth_timestamp="1466058910", oauth_version="1.0",
  oauth_signature_method="HMAC-SHA1",
  oauth_consumer_key="7fc8945e648248e9a5694bee3a141ac0",
  oauth_callback="oob",
  oauth_signature="i4UDLi75qcTKu%2FClW7KNtIl1SI4%3D"', u'requested-
  project-id': u'7908bbde268348a9991ecdfa76fda577'}

  endpoint:

  '/OS-OAUTH1/request_token'

  
  Error:

  keystoneclient.exceptions.Unauthorized: Invalid signature (Disable
  debug mode to suppress these details.) (HTTP 401)

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1593120/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1579749] [NEW] Unable to create a Egress rule in security group using python-novaclient

2016-05-09 Thread keerthivasan selvaraj
Public bug reported:

Description
===
I am not able to create egress security group rules using python-novaclient;
by default it creates an ingress rule. I also try to list the rules inside the
security group.

Steps to reproduce
==
Executing the commands below creates an ingress rule in the default security
group.


sec_group = nova.security_groups.find(name="default")
nova.security_group_rules.create(sec_group.id, ip_protocol="udp", from_port="5201", to_port="5201")

nova.security_group_rules._list() 


Expected result
===
I want to create an egress rule using the create method above, but the method
has no provision for specifying the rule direction.

Actual result
=
It creates an ingress rule in the security group.
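
As a point of comparison (an addition, not from the report): the nova security
group rules API only creates ingress rules, so there is no direction argument in
the novaclient method above; an egress rule can be expressed through Neutron,
where the direction is explicit. A minimal sketch with python-neutronclient,
assuming an authenticated keystoneauth1 session and an existing security group
UUID:

from neutronclient.v2_0 import client as neutron_client

# 'sess' and 'sec_group_id' are assumptions (an authenticated session and an
# existing security group UUID).
neutron = neutron_client.Client(session=sess)
neutron.create_security_group_rule({
    "security_group_rule": {
        "security_group_id": sec_group_id,
        "direction": "egress",
        "protocol": "udp",
        "port_range_min": 5201,
        "port_range_max": 5201,
    }
})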

Environment
===

Packstack Mitaka (Rdo)

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: nova nova-manage security-groups

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1579749

Title:
  Unable to create a Egress rule in security group  using  python-
  novaclient

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  I am not able to create Egress security rules using python-novaclient,by 
default it creates Ingress security rule and also try to list the rules inside 
security group.

  Steps to reproduce
  ==
  Execute the below command, will create Ingress rule in the default security 
group.

  
  sec_group = nova.security_groups.find(name = "default")
  
nova.security_group_rules.create(sec_group.id,ip_protocol="udp",from_port="5201",to_port="5201")

  nova.security_group_rules._list() 

  
  Expected result
  ===
  I want to create Egress rule using the above create method, but no provision 
for adding the rule type in the method.

  Actual result
  =
  It create a rule with ingress type in the security group.

  Environment
  ===

  Packstack Mitaka (Rdo)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1579749/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1494615] [NEW] Opening the workflow in new window is not proper

2015-09-11 Thread keerthivasan selvaraj
Public bug reported:

I am using Kilo. Whenever I try to open a workflow in the Horizon dashboard
(e.g. Launch Instance) in a separate tab, it does not load properly.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: dashboard-core horizon-core workflow

** Attachment added: "launcg-bug.png"
   
https://bugs.launchpad.net/bugs/1494615/+attachment/4461189/+files/launcg-bug.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1494615

Title:
  Opening the workflow in new window is not proper

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I am using kilo , when ever i try to open the workflows in horizon
  dashboards like( Eg: Launch Instances) in a separate tab, it is not
  loading properly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1494615/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp