[Yahoo-eng-team] [Bug 2058908] [NEW] fix auto_scheduler_network understanding dhcp_agents_per_network

2024-03-25 Thread Sahid Orentino
Public bug reported:

When using a routed provider network, a condition bypasses the
dhcp_agents_per_network option. As a result, in an environment with 3 agents
and dhcp_agents_per_network=2, for a given network already properly handled
by 2 agents, restarting the third agent makes it start handling the network
as well, which results in 3 agents handling the network.

The issue is in the auto_scheduler_network function.
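
A minimal, self-contained sketch (toy names, not the actual neutron code) of
the guard the report implies is missing: before an agent schedules itself
onto a network, compare the number of agents already hosting it against
dhcp_agents_per_network.

    DHCP_AGENTS_PER_NETWORK = 2

    def auto_schedule_network(network_id, agent, agents_hosting):
        # Respect the quota even when an agent restarts and re-announces
        # itself to the server.
        if len(agents_hosting[network_id]) >= DHCP_AGENTS_PER_NETWORK:
            return False  # network already served by enough agents
        agents_hosting[network_id].add(agent)
        return True

    hosting = {"net-1": {"agent-1", "agent-2"}}
    # Restarting a third agent must not add it to an already-served network.
    assert auto_schedule_network("net-1", "agent-3", hosting) is False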

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2058908

Title:
  fix auto_scheduler_network understanding dhcp_agents_per_network

Status in neutron:
  New

Bug description:
  When using a routed provider network, a condition bypasses the
  dhcp_agents_per_network option. As a result, in an environment with 3
  agents and dhcp_agents_per_network=2, for a given network already properly
  handled by 2 agents, restarting the third agent makes it start handling
  the network as well, which results in 3 agents handling the network.

  The issue is in the auto_scheduler_network function.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2058908/+subscriptions




[Yahoo-eng-team] [Bug 2051729] [NEW] issue in dhcp stale devices cleaning process on enable action

2024-01-30 Thread Sahid Orentino
Public bug reported:

When the driver enable method is called, the cleanup_stale_devices function
is invoked to remove stale devices within the namespace. The method
cleanup_stale_devices examines the ports in the network to prevent the
unintentional removal of legitimate devices.

In a multisegment context, the initial device created might be deleted
during the second iteration. This occurs because the network variable
used in the loop is not a singular reference to the same object,
resulting in its ports not being updated by the ones created during
previous iterations.
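
A minimal, self-contained sketch (toy structures, not the actual neutron
code) of the aliasing problem described above: each iteration works on its
own copy of the network, so ports created during earlier iterations are
invisible to the stale-device check.

    import copy

    def cleanup_stale_devices(network, devices):
        # Keep only devices backed by a port this copy of the network knows.
        expected = set(network["ports"])
        return [d for d in devices if d in expected]

    base = {"ports": []}
    devices = []
    for segment in ("seg-0", "seg-1"):
        network = copy.deepcopy(base)   # a fresh copy every iteration
        port = f"tap-{segment}"
        network["ports"].append(port)   # only this copy sees the new port
        devices.append(port)
        devices = cleanup_stale_devices(network, devices)

    print(devices)  # ['tap-seg-1'] -- the device created for seg-0 was dropped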

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2051729

Title:
  issue in dhcp stale devices cleaning process on enable action

Status in neutron:
  New

Bug description:
  When the driver enable method is called, the cleanup_stale_devices
  function is invoked to remove stale devices within the namespace. The
  method cleanup_stale_devices examines the ports in the network to prevent
  the unintentional removal of legitimate devices.

  In a multisegment context, the initial device created might be deleted
  during the second iteration. This occurs because the network variable
  used in the loop is not a singular reference to the same object,
  resulting in its ports not being updated by the ones created during
  previous iterations.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2051729/+subscriptions




[Yahoo-eng-team] [Bug 2051690] [NEW] when removing net from agent, dnsmasq constantly tries to restart

2024-01-30 Thread Sahid Orentino
Public bug reported:

When removing a network from an agent, dnsmasq constantly tries to be
revived.

This has been observed when using multisegment. The external process
monitor is not properly unregistered for that service.

This is because the correct helper to get the process identifier is not
used when unregistering.
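
A minimal sketch (toy classes, hypothetical helper names) of the symmetry
the external process monitor requires: the identifier used to unregister
dnsmasq must come from the same helper used to register it, otherwise the
monitor keeps reviving the process.

    def monitored_uuid(network_id, segment_id=None):
        # With multisegment networks the segment is part of the identifier.
        return network_id if segment_id is None else f"{network_id}-{segment_id}"

    class ProcessMonitor:
        def __init__(self):
            self._monitored = {}

        def register(self, uuid, service):
            self._monitored[(uuid, service)] = True

        def unregister(self, uuid, service):
            # A mismatched uuid makes this a silent no-op: the entry stays
            # and the monitor keeps respawning the service.
            self._monitored.pop((uuid, service), None)

    pm = ProcessMonitor()
    pm.register(monitored_uuid("net-1", "seg-0"), "dnsmasq")
    pm.unregister("net-1", "dnsmasq")   # bug: bare network id, wrong helper
    assert pm._monitored                # dnsmasq is still being revived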

** Affects: neutron
 Importance: Undecided
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2051690

Title:
  when removing net from agent, dnsmasq constantly tries to restart

Status in neutron:
  In Progress

Bug description:
  When removing a network from an agent, dnsmasq constantly tries to be
  revived.

  This has been observed when using multisegment. The external process
  monitor is not properly unregistered for that service.

  This is because the correct helper to get the process identifier is not
  used when unregistering.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2051690/+subscriptions




[Yahoo-eng-team] [Bug 2049615] [NEW] multisegments: cleaning DHCP process for segment 0 should happen first

2024-01-17 Thread Sahid Orentino
Public bug reported:

With the new support of multi-segments, some code has been added to clean
the old DHCP setup for a network. That cleanup should happen first and
target segment index == 0.

As the list of segments for a given network does not come ordered by
segment index, the process can end up in a situation where the network
setup for segment index 1 comes before index 0, which means it will be
destroyed by the cleanup, resulting in a missing setup.
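
A minimal, runnable sketch (toy helpers, not the actual neutron code) of the
ordering fix suggested above: sorting the segments by their index before
processing guarantees the cleanup tied to segment 0 always runs before
higher-index segments are set up.

    def cleanup_old_dhcp_setup(segment):
        print(f"cleaning old DHCP setup (index {segment['segment_index']})")

    def setup_dhcp_for_segment(segment):
        print(f"setting up DHCP for {segment['id']}")

    segments = [
        {"id": "seg-b", "segment_index": 1},   # order as returned by the DB
        {"id": "seg-a", "segment_index": 0},
    ]

    for segment in sorted(segments, key=lambda s: s["segment_index"]):
        if segment["segment_index"] == 0:
            cleanup_old_dhcp_setup(segment)    # guaranteed to happen first
        setup_dhcp_for_segment(segment)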

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2049615

Title:
  multisegments: cleaning DHCP process for segment 0 should happen first

Status in neutron:
  New

Bug description:
  With the new support of multi-segments, some code has been added to clean
  the old DHCP setup for a network. That cleanup should happen first and
  target segment index == 0.

  As the list of segments for a given network does not come ordered by
  segment index, the process can end up in a situation where the network
  setup for segment index 1 comes before index 0, which means it will be
  destroyed by the cleanup, resulting in a missing setup.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2049615/+subscriptions




[Yahoo-eng-team] [Bug 2018398] [NEW] Wrong AZ gets shown when adding a new compute node

2023-05-03 Thread Sahid Orentino
Public bug reported:

On a deployment with multiple availability zones, when the operator adds a
new compute host, the service gets registered as part of
“default_availability_zone”.

This is undesirable behavior for users, as they see a new AZ appear that
may not be related to the deployment, during the time window before the
host finally gets configured with its correct AZ.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2018398

Title:
  Wrong AZ gets shown when adding a new compute node

Status in OpenStack Compute (nova):
  New

Bug description:
  On a deployment with multiple availability zones, when the operator adds
  a new compute host, the service gets registered as part of
  “default_availability_zone”.

  This is undesirable behavior for users, as they see a new AZ appear that
  may not be related to the deployment, during the time window before the
  host finally gets configured with its correct AZ.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2018398/+subscriptions




[Yahoo-eng-team] [Bug 1619002] Re: Networking API v2.0 in Networking API Reference missing information

2023-04-07 Thread Sahid Orentino
Load balancer has been deprecated and removed; I guess we can close this
one.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1619002

Title:
  Networking API v2.0 in Networking API Reference missing information

Status in neutron:
  Won't Fix

Bug description:
  In extensions, the loadbalancer object also has vip_port_id
  (http://developer.openstack.org/api-ref/networking/v2/index.html?expanded=show-load-balancer-details-detail#id3)
  - this does not appear in the documentation.
  ---
  Release: 0.4.1.dev4 on 'Sat Aug 27 19:31:24 2016, commit adef52e'
  SHA: 
  Source: Can't derive source file URL
  URL: http://developer.openstack.org/api-ref/networking/v2/index.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1619002/+subscriptions




[Yahoo-eng-team] [Bug 2013045] Re: CI: MacvtapAgentTestCase

2023-03-29 Thread Sahid Orentino
*** This bug is a duplicate of bug 2012510 ***
https://bugs.launchpad.net/bugs/2012510

dup 2012510

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2013045

Title:
  CI: MacvtapAgentTestCase

Status in neutron:
  Invalid

Bug description:
  ft1.1: neutron.tests.functional.plugins.ml2.drivers.macvtap.agent.test_macvtap_neutron_agent.MacvtapAgentTestCase.test_get_all_devices
  testtools.testresult.real._StringException: Traceback (most recent call last):
    File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", line 182, in func
      return f(self, *args, **kwargs)
    File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/plugins/ml2/drivers/macvtap/agent/test_macvtap_neutron_agent.py", line 47, in test_get_all_devices
      self.assertEqual(set([macvtap.link.address]),
    File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional-gate/lib/python3.10/site-packages/testtools/testcase.py", line 394, in assertEqual
      self.assertThat(observed, matcher, message)
    File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional-gate/lib/python3.10/site-packages/testtools/testcase.py", line 481, in assertThat
      raise mismatch_error
  testtools.matchers._impl.MismatchError: {'3a:83:7e:60:34:b6'} != {'66:81:56:14:7d:0d'}

  https://zuul.opendev.org/t/openstack/build/235c115c538f4f84b839f15b628339b6/logs

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2013045/+subscriptions




[Yahoo-eng-team] [Bug 2013045] [NEW] CI: MacvtapAgentTestCase

2023-03-28 Thread Sahid Orentino
Public bug reported:

ft1.1: neutron.tests.functional.plugins.ml2.drivers.macvtap.agent.test_macvtap_neutron_agent.MacvtapAgentTestCase.test_get_all_devices
testtools.testresult.real._StringException: Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", line 182, in func
    return f(self, *args, **kwargs)
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/plugins/ml2/drivers/macvtap/agent/test_macvtap_neutron_agent.py", line 47, in test_get_all_devices
    self.assertEqual(set([macvtap.link.address]),
  File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional-gate/lib/python3.10/site-packages/testtools/testcase.py", line 394, in assertEqual
    self.assertThat(observed, matcher, message)
  File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional-gate/lib/python3.10/site-packages/testtools/testcase.py", line 481, in assertThat
    raise mismatch_error
testtools.matchers._impl.MismatchError: {'3a:83:7e:60:34:b6'} != {'66:81:56:14:7d:0d'}

https://zuul.opendev.org/t/openstack/build/235c115c538f4f84b839f15b628339b6/logs

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2013045

Title:
  CI: MacvtapAgentTestCase

Status in neutron:
  New

Bug description:
  ft1.1: neutron.tests.functional.plugins.ml2.drivers.macvtap.agent.test_macvtap_neutron_agent.MacvtapAgentTestCase.test_get_all_devices
  testtools.testresult.real._StringException: Traceback (most recent call last):
    File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", line 182, in func
      return f(self, *args, **kwargs)
    File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/plugins/ml2/drivers/macvtap/agent/test_macvtap_neutron_agent.py", line 47, in test_get_all_devices
      self.assertEqual(set([macvtap.link.address]),
    File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional-gate/lib/python3.10/site-packages/testtools/testcase.py", line 394, in assertEqual
      self.assertThat(observed, matcher, message)
    File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional-gate/lib/python3.10/site-packages/testtools/testcase.py", line 481, in assertThat
      raise mismatch_error
  testtools.matchers._impl.MismatchError: {'3a:83:7e:60:34:b6'} != {'66:81:56:14:7d:0d'}

  https://zuul.opendev.org/t/openstack/build/235c115c538f4f84b839f15b628339b6/logs

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2013045/+subscriptions




[Yahoo-eng-team] [Bug 1918145] Re: Slownesses on neutron API with many RBAC rules

2023-01-06 Thread Sahid Orentino
I think one of the first steps we can take is to remove the ORDER BY, as it
creates the temporary filesort that you have mentioned in #9.

I may be missing something, but does an ORDER BY on UUID bring any kind of
value?

A second step would be to understand why the possible key object_id is not
used.

There is also another point: we can notice that we filter per action, but I
think that we do not have an index on it; maybe we could also investigate
that point.


** Changed in: neutron
   Status: Fix Released => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1918145

Title:
  Slownesses on neutron API with many RBAC rules

Status in neutron:
  Confirmed

Bug description:
  * Summary: Slownesses on neutron API with many RBAC rules

  * High level description: Sharing several networks or security groups
  with projects drastically increases API response time on some routes
  (/networks or /server/detail).

  For quite some time we have been observing that response times are
  increasing (slowly but surely) on /networks calls. We have increased the
  number of Neutron workers, but in vain.

  Lately, we're observing that it's getting worse (response time from 5 to
  370 seconds). We discarded possible bottlenecks one by one (our service
  endpoint performance, neutron API configuration, etc).
  But we have found that some calls in the DB take a lot of time. It seems
  they are stuck in the mariadb database (10.3.10). So we have captured the
  slow queries in mysql.

  An example for /server/detail:
  http://paste.openstack.org/show/803334/

  We can see that there are more than 2 million rows examined, and around
  1657 returned.

  An example for /networks:
  http://paste.openstack.org/show/803337/
  Rows_sent: 517  Rows_examined: 223519

  * Pre-conditions:
  Database tables size:
  table:
  -   networkrbacs 16928 rows
  -   securitygrouprbacs 1691 rows
  -   keystone.project 1713 rows

  Control plane nodes are shared with some others services:
  - RMQ
  - mariadb
  - Openstack APIs
  - DHCP agents

  It seems the code of those lines is based on
  https://github.com/openstack/neutron-lib/blob/698e4c8daa7d43018a71122ec5b0cd5b17b55141/neutron_lib/db/model_query.py#L120

  * Step-by-step reproduction steps:

  - Create a lot of projects (at least 1000)
  - Create a SG in admin account
  - Create fake networks (vlan, vxlan) with associated
  - Share the SG and all networks with all projects

  * Expected output: lower response time, less than 5 seconds
  (approximatively).

  * Actual output: May lead to gateway timeout.

  * Version:
    ** OpenStack version Stein releases for all components (neutron 14.2.0).
    ** CentOS 7.4 with kolla containers
    ** kolla-ansible for stein release

  * Environment: We operate all services in Openstack except for Cinder.

  * Perceived severity: Medium

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1918145/+subscriptions




[Yahoo-eng-team] [Bug 1994967] [NEW] Evacuating instances should be stopped at virt-driver level

2022-10-27 Thread Sahid Orentino
Public bug reported:

The current behavior for an evacuated instance at the destination node is
to have the virt driver start the virtual machine, then, if needed, a
compute API call to stop the instance.

A cleaner solution would be to have the virt driver API handle an expected
state when spawning on the host.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1994967

Title:
  Evacuating instances should be stopped at virt-driver level

Status in OpenStack Compute (nova):
  New

Bug description:
  The current behavior for an evacuated instance at the destination node is
  to have the virt driver start the virtual machine, then, if needed, a
  compute API call to stop the instance.

  A cleaner solution would be to have the virt driver API handle an
  expected state when spawning on the host.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1994967/+subscriptions




[Yahoo-eng-team] [Bug 1959750] [NEW] potential performance issue when scheduling network segments

2022-02-02 Thread Sahid Orentino
Public bug reported:

During some investigations regarding segments we noticed potential
performance issues related to the current algorithm that schedules network
segments on hosts.

When an agent reports a change in segments, the process goes to the
function `auto_schedule_new_network_segments` with the list of the segments
that this host handles.

This function retrieves the networks related to the segments; then we can
notice that the algorithm runs a double for loop which iterates per network
and per segment to schedule the network segments on all hosts.

    for network_id in network_ids:
        for segment in segments:
            self._schedule_network(
                payload.context, network_id, dhcp_notifier,
                candidate_hosts=segment['hosts'])

Depending on the design chosen, in a setup that has a hundred segments per
host with a hundred networks, and potentially segments that share the same
list of hosts, we will end up calling _schedule_network 10,000 times, with
duplication.

To avoid such duplication and unnecessary calls of _schedule_network for
the same hosts, we may want to provide a data structure that stores, for
each network, the hosts already scheduled.

    for network_id in network_ids:
        for segment in segments:
            if not _already_scheduled(network_id, segment['hosts']):
                self._schedule_network(
                    payload.context, network_id, dhcp_notifier,
                    candidate_hosts=segment['hosts'])

With this same scenario, and by using such an algorithm, we reduce the
number of calls to the number of networks, i.e. 100.
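
A hedged sketch (toy names, not the actual neutron code) of one possible
shape for that data structure: a per-network set of hosts already passed to
_schedule_network.

    from collections import defaultdict

    scheduled_hosts = defaultdict(set)

    def _already_scheduled(network_id, hosts):
        return set(hosts) <= scheduled_hosts[network_id]

    def schedule_all(network_ids, segments):
        calls = 0
        for network_id in network_ids:
            for segment in segments:
                if _already_scheduled(network_id, segment["hosts"]):
                    continue
                scheduled_hosts[network_id].update(segment["hosts"])
                calls += 1          # stands in for self._schedule_network()
        return calls

    networks = [f"net-{i}" for i in range(100)]
    segments = [{"hosts": ["host-a", "host-b"]} for _ in range(100)]
    print(schedule_all(networks, segments))   # 100 instead of 10,000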

Thanks,
s.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1959750

Title:
  potential performance issue when scheduling network segments

Status in neutron:
  New

Bug description:
  During some investigations regarding segments we noticed potential
  performance issues related to the current algorithm that schedules
  network segments on hosts.

  When an agent reports a change in segments, the process goes to the
  function `auto_schedule_new_network_segments` with the list of the
  segments that this host handles.

  This function retrieves the networks related to the segments; then we can
  notice that the algorithm runs a double for loop which iterates per
  network and per segment to schedule the network segments on all hosts.

      for network_id in network_ids:
          for segment in segments:
              self._schedule_network(
                  payload.context, network_id, dhcp_notifier,
                  candidate_hosts=segment['hosts'])

  Depending on the design chosen, in a setup that has a hundred segments
  per host with a hundred networks, and potentially segments that share the
  same list of hosts, we will end up calling _schedule_network 10,000
  times, with duplication.

  To avoid such duplication and unnecessary calls of _schedule_network for
  the same hosts, we may want to provide a data structure that stores, for
  each network, the hosts already scheduled.

      for network_id in network_ids:
          for segment in segments:
              if not _already_scheduled(network_id, segment['hosts']):
                  self._schedule_network(
                      payload.context, network_id, dhcp_notifier,
                      candidate_hosts=segment['hosts'])

  With this same scenario, and by using such an algorithm, we reduce the
  number of calls to the number of networks, i.e. 100.

  Thanks,
  s.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1959750/+subscriptions




[Yahoo-eng-team] [Bug 1835037] Re: Upgrade from bionic-rocky to bionic-stein failed migrations.

2019-07-23 Thread Sahid Orentino
I also proposed a fix for nova since 'nova-manage cellv2 update_cell' is
bugged for cell0.

  https://review.opendev.org/#/c/672045/

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1835037

Title:
  Upgrade from bionic-rocky to bionic-stein failed migrations.

Status in OpenStack nova-cloud-controller charm:
  In Progress
Status in OpenStack Compute (nova):
  New

Bug description:
  We were trying to upgrade from rocky to stein using the charm
  procedure described here:

  https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-upgrade-openstack.html

  and we got into this problem,

  
  2019-07-02 09:56:44 ERROR juju-log online_data_migrations failed
  Running batches of 50 until complete
  Error attempting to run
  9 rows matched query populate_user_id, 0 migrated
  +---------------------------------------------+--------------+-----------+
  | Migration                                   | Total Needed | Completed |
  +---------------------------------------------+--------------+-----------+
  | create_incomplete_consumers                 | 0            | 0         |
  | delete_build_requests_with_no_instance_uuid | 0            | 0         |
  | fill_virtual_interface_list                 | 0            | 0         |
  | migrate_empty_ratio                         | 0            | 0         |
  | migrate_keypairs_to_api_db                  | 0            | 0         |
  | migrate_quota_classes_to_api_db             | 0            | 0         |
  | migrate_quota_limits_to_api_db              | 0            | 0         |
  | migration_migrate_to_uuid                   | 0            | 0         |
  | populate_missing_availability_zones         | 0            | 0         |
  | populate_queued_for_delete                  | 0            | 0         |
  | populate_user_id                            | 9            | 0         |
  | populate_uuids                              | 0            | 0         |
  | service_uuids_online_data_migration         | 0            | 0         |
  +---------------------------------------------+--------------+-----------+
  Some migrations failed unexpectedly. Check log for details.

  What should we do to get this fixed?

  Regards,

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-nova-cloud-controller/+bug/1835037/+subscriptions



[Yahoo-eng-team] [Bug 1831986] Re: fwaas_v2 - unable to associate port with firewall (PXC strict mode)

2019-06-07 Thread Sahid Orentino
Missing primary keys for firewall_group_port_associations_v2 and
firewall_policy_rule_associations_v2. The workaround is to change the
mode used [0].

  juju config percona-cluster pxc-strict-mode=PERMISSIVE

[0]
https://bugs.launchpad.net/ubuntu/+source/octavia/+bug/1826875/comments/3
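
A hedged sketch of the underlying fix, as an Alembic migration (the
revision identifiers are placeholders; the column pairs are assumptions
inferred from the association tables named above and from the DELETE
statement in the traceback below): adding explicit primary keys satisfies
PXC strict mode without changing the data.

    from alembic import op

    # placeholder revision identifiers, for illustration only
    revision = "addfwaasassocpks"
    down_revision = None

    def upgrade():
        op.create_primary_key(
            "pk_firewall_group_port_associations_v2",
            "firewall_group_port_associations_v2",
            ["firewall_group_id", "port_id"],
        )
        op.create_primary_key(
            "pk_firewall_policy_rule_associations_v2",
            "firewall_policy_rule_associations_v2",
            ["firewall_policy_id", "firewall_rule_id"],
        )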

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => Sahid Orentino (sahid-ferdjaoui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1831986

Title:
  fwaas_v2 - unable to associate port with firewall (PXC strict mode)

Status in Ubuntu Cloud Archive:
  New
Status in neutron:
  New
Status in neutron-fwaas package in Ubuntu:
  New

Bug description:
  Impacts both Stein and Rocky (although rocky does not enable v2 just
  yet).

  542 a9761fa9124740028d0c1d70ff7aa542] DBAPIError exception wrapped from
  (pymysql.err.InternalError) (1105, 'Percona-XtraDB-Cluster prohibits use
  of DML command on a table (neutron.firewall_group_port_associations_v2)
  without an explicit primary key with pxc_strict_mode = ENFORCING or
  MASTER') [SQL: 'DELETE FROM firewall_group_port_associations_v2 WHERE
  firewall_group_port_associations_v2.firewall_group_id =
  %(firewall_group_id_1)s'] [parameters: {'firewall_group_id_1':
  '85a277d0-ebaf-4a5d-9d45-6a74b8f54372'}] (Background on this error at:
  http://sqlalche.me/e/2j85): pymysql.err.InternalError: (1105,
  'Percona-XtraDB-Cluster prohibits use of DML command on a table
  (neutron.firewall_group_port_associations_v2) without an explicit primary
  key with pxc_strict_mode = ENFORCING or MASTER')
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters Traceback (most recent call last):
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters     context)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 509, in do_execute
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters     cursor.execute(statement, parameters)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/pymysql/cursors.py", line 165, in execute
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters     result = self._query(query)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/pymysql/cursors.py", line 321, in _query
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters     conn.query(q)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 860, in query
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters     self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 1061, in _read_query_result
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters     result.read()
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 1349, in read
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters     first_packet = self.connection._read_packet()
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 1018, in _read_packet
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters     packet.check_error()
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 384, in check_error
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters     err.raise_mysql_exception(self._data)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python3/dist-packages/pymysql/err.py", line 107, in raise_mysql_exception
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters     raise errorclass(errno, errval)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters pymysql.err.InternalError: (1105, 'Percona-XtraDB-Cluster prohibits use of DML command on a table (neutron.firewall_group_port_associations_v2) without an explicit primary key with pxc_strict_mode = ENFORCING or MASTER')
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters

  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: neutron-server 2:1

[Yahoo-eng-team] [Bug 1667736] Re: gate-neutron-fwaas-dsvm-functional failure after recent localrc change

2019-03-21 Thread Sahid Orentino
** Changed in: cloud-archive
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1667736

Title:
  gate-neutron-fwaas-dsvm-functional failure after recent localrc change

Status in Ubuntu Cloud Archive:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron-fwaas package in Ubuntu:
  Fix Released

Bug description:
  eg. http://logs.openstack.org/59/286059/1/check/gate-neutron-fwaas-
  dsvm-functional/a0f2285/console.html

  2017-02-24 15:27:58.187720 | + /opt/stack/new/neutron-fwaas/neutron_fwaas/tests/contrib/gate_hook.sh:main:26 : source /opt/stack/new/devstack/localrc
  2017-02-24 15:27:58.187833 | /opt/stack/new/neutron-fwaas/neutron_fwaas/tests/contrib/gate_hook.sh: line 26: /opt/stack/new/devstack/localrc: No such file or directory

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1667736/+subscriptions



[Yahoo-eng-team] [Bug 1806079] Re: revert use of stestr in stable/pike

2019-03-21 Thread Sahid Orentino
The patch which reverts the problematic change in Nova has been released
in our packages for version 2:16.1.6-0ubuntu1~cloud0 [0]. Let's mark
this bug has Fix Released [1].

For upstream Nova, the community are against the revert, it should
probably be marked as won't fix.

[0] 
https://git.launchpad.net/~ubuntu-server-dev/ubuntu/+source/nova/commit/?h=stable/pike=f42d697d606bd1ceff54cce665fe80641956f932
[1] 
https://git.launchpad.net/~ubuntu-server-dev/ubuntu/+source/nova/commit/?h=stable/pike=21f71d906812a80ab3d1d96d22b04cf5744ed35c

** Changed in: cloud-archive
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1806079

Title:
  revert use of stestr in stable/pike

Status in Ubuntu Cloud Archive:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The following commit changed dependencies of nova in the stable/pike
  branch and switched it to use stestr. There aren't any other projects
  (as far as I can tell) that use stestr in pike. This causes issues,
  for example, the Ubuntu cloud archive for pike doesn't have stestr. If
  possible I think this should be reverted.

  
  commit 5939ae995fdeb2746346ebd81ce223e4fe891c85
  Date:   Thu Jul 5 16:09:17 2018 -0400

  Backport tox.ini to switch to stestr
  
  The pike branch was still using ostestr (instead of stestr) which makes
  running tests significantly different from queens or master. To make
  things behave the same way this commit backports most of the tox.ini
  from queens so that pike will behave the same way for running tests.
  This does not use the standard backport mechanism because it involves a
  lot of different commits over time. It's also not a functional change
  for nova itself, so the proper procedure is less important here.
  
  Change-Id: Ie207afaf8defabc1d1eb9332f43a9753a00f784d

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1806079/+subscriptions



[Yahoo-eng-team] [Bug 1815844] Re: iscsi multipath dm-N device only used on first volume attachment

2019-02-27 Thread Sahid Orentino
Basically the issue is related to 'find_multipaths "yes"' in
/etc/multipath.conf. The patch I proposed fixes the issue but adds more
complexity to an algorithm which is already a bit tricky, so let's see
whether upstream is going to accept it.

At least we should document that using multipath requires multipathd to be
configured with:

   find_multipaths "no"

I'm re-adding charm-nova-compute to this bug so we add a note about it in
the doc of the option.

** Changed in: charm-nova-compute
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1815844

Title:
  iscsi multipath dm-N device only used on first volume attachment

Status in OpenStack nova-compute charm:
  New
Status in OpenStack Compute (nova):
  Invalid
Status in os-brick:
  New

Bug description:
  With nova-compute from cloud:xenial-queens and use-multipath=true
  iscsi multipath is configured and the dm-N devices used on the first
  attachment but subsequent attachments only use a single path.

  The back-end storage is a Purestorage array.
  The multipath.conf is attached
  The issue is easily reproduced as shown below:

  jog@pnjostkinfr01:~⟫ openstack volume create pure2 --size 10 --type pure
  +-+--+
  | Field   | Value|
  +-+--+
  | attachments | []   |
  | availability_zone   | nova |
  | bootable| false|
  | consistencygroup_id | None |
  | created_at  | 2019-02-13T23:07:40.00   |
  | description | None |
  | encrypted   | False|
  | id  | e286161b-e8e8-47b0-abe3-4df411993265 |
  | migration_status| None |
  | multiattach | False|
  | name| pure2|
  | properties  |  |
  | replication_status  | None |
  | size| 10   |
  | snapshot_id | None |
  | source_volid| None |
  | status  | creating |
  | type| pure |
  | updated_at  | None |
  | user_id | c1fa4ae9a0b446f2ba64eebf92705d53 |
  +-+--+

  jog@pnjostkinfr01:~⟫ openstack volume show pure2
  ++--+
  | Field  | Value|
  ++--+
  | attachments| []   |
  | availability_zone  | nova |
  | bootable   | false|
  | consistencygroup_id| None |
  | created_at | 2019-02-13T23:07:40.00   |
  | description| None |
  | encrypted  | False|
  | id | e286161b-e8e8-47b0-abe3-4df411993265 |
  | migration_status   | None |
  | multiattach| False|
  | name   | pure2|
  | os-vol-host-attr:host  | cinder@cinder-pure#cinder-pure   |
  | os-vol-mig-status-attr:migstat | None |
  | os-vol-mig-status-attr:name_id | None |
  | os-vol-tenant-attr:tenant_id   | 9be499fd1eee48dfb4dc6faf3cc0a1d7 |
  | properties |  |
  | replication_status | None |
  | size   | 10   |
  | snapshot_id| None |
  | source_volid   | None |
  | status | available|
  | type   | pure |
  | updated_at | 2019-02-13T23:07:41.00   |
  | user_id| 

[Yahoo-eng-team] [Bug 1815844] Re: iscsi multipath dm-N device only used on first volume attachment

2019-02-22 Thread Sahid Orentino
Patch proposed against os-brick here [0]

[0] https://review.openstack.org/#/c/638639/

** Also affects: os-brick
   Importance: Undecided
   Status: New

** Changed in: os-brick
 Assignee: (unassigned) => Sahid Orentino (sahid-ferdjaoui)

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1815844

Title:
  iscsi multipath dm-N device only used on first volume attachment

Status in OpenStack nova-compute charm:
  Invalid
Status in OpenStack Compute (nova):
  Invalid
Status in os-brick:
  New

Bug description:
  With nova-compute from cloud:xenial-queens and use-multipath=true
  iscsi multipath is configured and the dm-N devices used on the first
  attachment but subsequent attachments only use a single path.

  The back-end storage is a Purestorage array.
  The multipath.conf is attached
  The issue is easily reproduced as shown below:

  jog@pnjostkinfr01:~⟫ openstack volume create pure2 --size 10 --type pure
  +-+--+
  | Field   | Value|
  +-+--+
  | attachments | []   |
  | availability_zone   | nova |
  | bootable| false|
  | consistencygroup_id | None |
  | created_at  | 2019-02-13T23:07:40.00   |
  | description | None |
  | encrypted   | False|
  | id  | e286161b-e8e8-47b0-abe3-4df411993265 |
  | migration_status| None |
  | multiattach | False|
  | name| pure2|
  | properties  |  |
  | replication_status  | None |
  | size| 10   |
  | snapshot_id | None |
  | source_volid| None |
  | status  | creating |
  | type| pure |
  | updated_at  | None |
  | user_id | c1fa4ae9a0b446f2ba64eebf92705d53 |
  +-+--+

  jog@pnjostkinfr01:~⟫ openstack volume show pure2
  ++--+
  | Field  | Value|
  ++--+
  | attachments| []   |
  | availability_zone  | nova |
  | bootable   | false|
  | consistencygroup_id| None |
  | created_at | 2019-02-13T23:07:40.00   |
  | description| None |
  | encrypted  | False|
  | id | e286161b-e8e8-47b0-abe3-4df411993265 |
  | migration_status   | None |
  | multiattach| False|
  | name   | pure2|
  | os-vol-host-attr:host  | cinder@cinder-pure#cinder-pure   |
  | os-vol-mig-status-attr:migstat | None |
  | os-vol-mig-status-attr:name_id | None |
  | os-vol-tenant-attr:tenant_id   | 9be499fd1eee48dfb4dc6faf3cc0a1d7 |
  | properties |  |
  | replication_status | None |
  | size   | 10   |
  | snapshot_id| None |
  | source_volid   | None |
  | status | available|
  | type   | pure |
  | updated_at | 2019-02-13T23:07:41.00   |
  | user_id| c1fa4ae9a0b446f2ba64eebf92705d53 |
  ++--+

  Add the volume to an instance:
  jog@pnjostkinfr01:~⟫ openstack server add volume T1 pure2
  jog@pnjostkinfr01:~⟫ openstack server s

[Yahoo-eng-team] [Bug 1727260] [NEW] Nova assumes that a volume is fully detached from the compute if the volume is not defined in the instance's libvirt definition

2017-10-25 Thread sahid
Public bug reported:

During a volume detach operation, Nova compute attempts to remove the
volume from libvirt for the instance before proceeding to remove the
storage lun from the underlying compute host. If Nova discovers that the
volume was not found in the instance's libvirt definition then it
ignores that error condition and returns (after issuing a warning
message "Ignoring DiskNotFound exception while detaching").

However, under certain failure scenarios it may be that, although the
libvirt definition for the volume has been removed for the instance, the
associated storage lun on the compute server has not been fully cleaned up
yet.

** Affects: nova
 Importance: Undecided
 Assignee: sahid (sahid-ferdjaoui)
 Status: New


** Tags: libvirt ocata-backport-potential

** Changed in: nova
 Assignee: (unassigned) => sahid (sahid-ferdjaoui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1727260

Title:
   Nova assumes that a volume is fully detached from the compute if the
  volume is not defined in the instance's libvirt definition

Status in OpenStack Compute (nova):
  New

Bug description:
  During a volume detach operation, Nova compute attempts to remove the
  volume from libvirt for the instance before proceeding to remove the
  storage lun from the underlying compute host. If Nova discovers that
  the volume was not found in the instance's libvirt definition then it
  ignores that error condition and returns (after issuing a warning
  message "Ignoring DiskNotFound exception while detaching").

  However, under certain failure scenarios it may be that, although the
  libvirt definition for the volume has been removed for the instance, the
  associated storage lun on the compute server has not been fully cleaned
  up yet.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1727260/+subscriptions



[Yahoo-eng-team] [Bug 1715317] Re: Hybrid bridge should permanently keep MAC entries

2017-09-13 Thread sahid
https://review.openstack.org/#/c/501132/

** Also affects: os-vif
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1715317

Title:
  Hybrid bridge should permanently keep MAC entries

Status in OpenStack Compute (nova):
  Incomplete
Status in os-vif:
  New

Bug description:
  The linux bridge installed for the particular vif type ovs-hybrid should
  be configured to persistently keep the MAC learned from the RARP packets
  sent by QEMU when starting on the destination node, to avoid any break of
  the datapath during a live-migration.

  That issue can be seen when using the opflex plugin.

  https://github.com/noironetworks/python-opflex-agent/commit/3163b9a2668f29dd1e52e9757b8c25ef48822765

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1715317/+subscriptions



[Yahoo-eng-team] [Bug 1715374] [NEW] Reloading compute with SIGHUP prevents instances from booting

2017-09-06 Thread sahid
Public bug reported:

When trying to boot a new instance on a compute node where nova-compute
received SIGHUP (the SIGHUP is used as a trigger for reloading mutable
options), it always fails.

  == nova/compute/manager.py ==
  def cancel_all_events(self):
      if self._events is None:
          LOG.debug('Unexpected attempt to cancel events during shutdown.')
          return
      our_events = self._events
      # NOTE(danms): Block new events
      self._events = None    # <--- Set self._events to "None"
      ...
  =============================

This will cause a NovaException when prepare_for_instance_event() is
called. It is the cause of the failure of the network allocation.

  == nova/compute/manager.py ==
  def prepare_for_instance_event(self, instance, event_name):
      ...
      if self._events is None:
          # NOTE(danms): We really should have a more specific error
          # here, but this is what we use for our default error case
          raise exception.NovaException('In shutdown, no new events '
                                        'can be scheduled')
  =============================

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1715374

Title:
  Reloading compute with SIGHUP prevents instances from booting

Status in OpenStack Compute (nova):
  New

Bug description:
  When trying to boot a new instance on a compute node where nova-compute
  received SIGHUP (the SIGHUP is used as a trigger for reloading mutable
  options), it always fails.

    == nova/compute/manager.py ==
    def cancel_all_events(self):
        if self._events is None:
            LOG.debug('Unexpected attempt to cancel events during shutdown.')
            return
        our_events = self._events
        # NOTE(danms): Block new events
        self._events = None    # <--- Set self._events to "None"
        ...
    =============================

  This will cause a NovaException when prepare_for_instance_event() is
  called. It is the cause of the failure of the network allocation.

    == nova/compute/manager.py ==
    def prepare_for_instance_event(self, instance, event_name):
        ...
        if self._events is None:
            # NOTE(danms): We really should have a more specific error
            # here, but this is what we use for our default error case
            raise exception.NovaException('In shutdown, no new events '
                                          'can be scheduled')
    =============================

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1715374/+subscriptions



[Yahoo-eng-team] [Bug 1715317] [NEW] Hybrid bridge should permanently keep MAC entries

2017-09-06 Thread sahid
Public bug reported:

The linux bridge installed for the particular vif type ovs-hybrid should be
configured to persistently keep the MAC learned from the RARP packets sent
by QEMU when starting on the destination node, to avoid any break of the
datapath during a live-migration.

That issue can be seen when using the opflex plugin.

  https://github.com/noironetworks/python-opflex-agent/commit/3163b9a2668f29dd1e52e9757b8c25ef48822765
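
A hedged sketch of the idea (the bridge name is hypothetical, and os-vif
drives this through its own privileged helpers rather than a raw
subprocess): setting the bridge ageing time to 0 makes learned MAC entries
permanent, so the entry learned from QEMU's RARP packets is not aged out.

    import subprocess

    def make_bridge_macs_permanent(bridge):
        # ageing_time 0 == learned FDB entries on this bridge never expire
        subprocess.run(
            ["ip", "link", "set", bridge, "type", "bridge",
             "ageing_time", "0"],
            check=True,
        )

    make_bridge_macs_permanent("qbr-hypothetical")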

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1715317

Title:
  Hybrid bridge should permanently keep MAC entries

Status in OpenStack Compute (nova):
  New

Bug description:
  The linux bridge installed for the particular vif type ovs-hybrid should
  be configured to persistently keep the MAC learned from the RARP packets
  sent by QEMU when starting on the destination node, to avoid any break of
  the datapath during a live-migration.

  That issue can be seen when using the opflex plugin.

  https://github.com/noironetworks/python-opflex-agent/commit/3163b9a2668f29dd1e52e9757b8c25ef48822765

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1715317/+subscriptions



[Yahoo-eng-team] [Bug 1686116] [NEW] domain xml not well defined when using virtio-scsi disk bus

2017-04-25 Thread sahid
Public bug reported:

When using virtio-scsi we should be able to attach up to 256 devices, but
because the XML device definition does not specify which controller to use,
nor the address on that controller, we are currently able to attach no more
than 6 disks.

Steps to reproduce the issue:

- glance image-update --property hw_scsi_model=virtio-scsi <image>, to create the virtio-scsi controller
- glance image-update --property hw_disk_bus=scsi <image>, so disks will be using scsi

Start an instance with more than 6 disks/volumes.

** Affects: nova
 Importance: Undecided
 Assignee: sahid (sahid-ferdjaoui)
 Status: In Progress


** Tags: libvirt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1686116

Title:
  domain xml not well defined when using virtio-scsi disk bus

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  When using virtio-scsi we should be able to attach up to 256 devices
  but because the XML device definition do not specify which controller
  to use and place on that on that one, we are currently able to attach
  no more than 6 disks.

  step to reproduce the issue:

  - glance image-update --property hw_scsi_model=virtio-scsi " to 
creates the virtio-scsi controller
  - glance image-update --property hw_disk_bus=scsi " disks will be 
using scsi

  Start instance with more than 6 disks/volumes

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1686116/+subscriptions



[Yahoo-eng-team] [Bug 1685226] [NEW] uninitialized local variable ‘sibling_set’ referenced before assignment.

2017-04-21 Thread sahid
Public bug reported:

The code uses a variable which may not be defined if sibling_sets (plural)
is empty.

    # NOTE(sfinucan): If siblings weren't available and we're using PREFER
    # (implicitly or explicitly), fall back to linear assignment across
    # cores
    if (instance_cell.cpu_thread_policy !=
            fields.CPUThreadAllocationPolicy.REQUIRE and
            not pinning):
        pinning = list(zip(sorted(instance_cell.cpuset),
                           itertools.chain(*sibling_set)))  # <- not defined if sibling_sets is empty

So far the only path I could see to end up in a situation where
sibling_sets is empty at this step would be if two instances get scheduled
"at the same time" on the same host, where we could consider that all the
checks ensuring that the host cell provides enough cpus to handle the
request have been accepted.

Even if that can happen only in such a circumstance, we should fix the
issue.

[0] https://github.com/openstack/nova/blob/master/nova/virt/hardware.py#L882
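
A minimal, runnable sketch (toy data, not the nova code) of the failure
mode and one possible guard: when sibling_sets is empty the loop body never
binds sibling_set, so the fallback must not reference it.

    import itertools

    def linear_fallback(cpuset, sibling_sets):
        pinning = None
        for sibling_set in sibling_sets:
            pass  # the per-sibling-set assignment attempts would go here
        if not pinning and sibling_sets:
            # Only safe because sibling_sets is known to be non-empty here.
            pinning = list(zip(sorted(cpuset),
                               itertools.chain(*sibling_sets)))
        return pinning

    print(linear_fallback({0, 1}, []))        # None instead of an exception
    print(linear_fallback({0, 1}, [[2, 3]]))  # [(0, 2), (1, 3)]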

** Affects: nova
 Importance: Undecided
     Assignee: sahid (sahid-ferdjaoui)
 Status: New


** Tags: numa

** Changed in: nova
 Assignee: (unassigned) => sahid (sahid-ferdjaoui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1685226

Title:
  uninitialized local variable ‘sibling_set’ referenced before
  assignment.

Status in OpenStack Compute (nova):
  New

Bug description:
  The code uses a variable which may not be defined if sibling_sets
  (plural) is empty.

      # NOTE(sfinucan): If siblings weren't available and we're using PREFER
      # (implicitly or explicitly), fall back to linear assignment across
      # cores
      if (instance_cell.cpu_thread_policy !=
              fields.CPUThreadAllocationPolicy.REQUIRE and
              not pinning):
          pinning = list(zip(sorted(instance_cell.cpuset),
                             itertools.chain(*sibling_set)))  # <- not defined if sibling_sets is empty

  So far the only path I could see to end up in a situation where
  sibling_sets is empty at this step would be if two instances get
  scheduled "at the same time" on the same host, where we could consider
  that all the checks ensuring that the host cell provides enough cpus to
  handle the request have been accepted.

  Even if that can happen only in such a circumstance, we should fix the
  issue.

  [0] https://github.com/openstack/nova/blob/master/nova/virt/hardware.py#L882

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1685226/+subscriptions



[Yahoo-eng-team] [Bug 1653718] [NEW] Target host in nova DB got updated to new compute while migration failed

2017-01-03 Thread sahid
Public bug reported:

During live-migration, if the process goes into an unpredictable situation,
for example if QEMU suffers an issue, Nova could still consider the
migration to have succeeded even if it has not.

The VM on the source node can still be registered even if stopped. In the
worst scenario, the operator could start it, and so two running VMs could
share the same disk.

We should fix that issue and not consider a migration to have succeeded if
it was not the case. For that we should handle the return of migrateToURI*.

[1] http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/driver.py#n6303
[2] http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/migration.py#n223
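
A hedged sketch (not the actual nova code) of that last point: treat the
outcome of migrateToURI3() as authoritative instead of assuming success
once the call returns.

    import libvirt

    def live_migrate(dom, dest_uri, params, flags):
        try:
            ret = dom.migrateToURI3(dest_uri, params, flags)
        except libvirt.libvirtError:
            # Surfaced to the caller: the migration must be marked failed,
            # and the instance must stay registered on the source host.
            raise
        if ret != 0:
            # A non-zero return also means the migration did not succeed.
            raise RuntimeError("live migration reported failure")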

** Affects: nova
 Importance: Undecided
 Assignee: sahid (sahid-ferdjaoui)
 Status: In Progress


** Tags: libvirt live-migration

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1653718

Title:
  Target host in nova DB got updated to new compute while migration
  failed

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  During live-migration, if the process goes into an unpredictable
  situation, for example if QEMU suffers an issue, Nova could still
  consider the migration to have succeeded even if it has not.

  The VM on the source node can still be registered even if stopped. In the
  worst scenario, the operator could start it, and so two running VMs could
  share the same disk.

  We should fix that issue and not consider a migration to have succeeded
  if it was not the case. For that we should handle the return of
  migrateToURI*.

  [1] http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/driver.py#n6303
  [2] http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/migration.py#n223

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1653718/+subscriptions



[Yahoo-eng-team] [Bug 1628449] [NEW] Exception when live block migration multiple ephemerals

2016-09-28 Thread sahid
Public bug reported:

When block live migrating an instance with multiple ephemeral disks, a
FlavorDiskSmallerThanImage exception is raised.


Steps to Reproduce:
1) Created a flavor with support for two ephemeral disks.

~~~
[root@allinone9 ~(keystone_admin)]# nova flavor-create 2ephemeral-disks 6 512 1 1 --ephemeral 2
~~~

2) Spawned an instance using the created flavor.

~~~
[root@allinone9 ~(keystone_admin)]# nova boot --flavor 2ephemeral-disks --image cirros --ephemeral size=1 --ephemeral size=1 internal1
~~~

3) Instance spawned successfully.

~~~
[root@allinone9 ~(keystone_admin)]# nova list --field name,status,host | grep -i internal1
| 08619d2d-e3a2-4f67-a959-33cfbc08d153 | internal1 | ACTIVE | allinone9 |
~~~

4) Verified that the two extra ephemeral disks are connected to the
instance.

~~~
[root@allinone9 ~(keystone_admin)]# virsh domblklist 4
Target     Source
------------------------------------------------
vda        /var/lib/nova/instances/08619d2d-e3a2-4f67-a959-33cfbc08d153/disk
vdb        /var/lib/nova/instances/08619d2d-e3a2-4f67-a959-33cfbc08d153/disk.eph0
vdc        /var/lib/nova/instances/08619d2d-e3a2-4f67-a959-33cfbc08d153/disk.eph1
~~~

5) Tried to perform the block migration, but it ends with the same error:

[root@allinone9 ~(keystone_admin)]# nova live-migration 08619d2d-e3a2-4f67-a959-33cfbc08d153 compute1-9 --block-migrate

~~~
From: /var/log/nova/nova-compute.log

2016-09-26 08:53:12.033 3958 ERROR nova.compute.manager [req-f24d49f7-4d8e-4683-bcc0-952254764fca b09d7a1af46d42398c79a1dc0da02954 ca23990ed6c846b0b8d588fb5e304aeb - - -] [instance: 08619d2d-e3a2-4f67-a959-33cfbc08d153] Pre live migration failed at compute1-9

2016-09-26 08:53:12.033 3958 ERROR nova.compute.manager [instance: 08619d2d-e3a2-4f67-a959-33cfbc08d153]
2016-09-26 08:53:12.033 3958 ERROR nova.compute.manager [instance: 08619d2d-e3a2-4f67-a959-33cfbc08d153] FlavorDiskSmallerThanImage: Flavor's disk is too small for requested image. Flavor disk is 1073741824 bytes, image is 2147483648 bytes.
~~~

That error is caused by two mistakes:

... LINE ~ 6588 in libvirt.py (method
libvirt._create_images_and_backing)

image = self.image_backend.image(instance,
                                 instance_disk,
                                 CONF.libvirt.images_type)
if cache_name.startswith('ephemeral'):
    image.cache(fetch_func=self._create_ephemeral,
                fs_label=cache_name,
                os_type=instance.os_type,
                filename=cache_name,
                size=info['virt_disk_size'],                    # (a)
                ephemeral_size=instance.flavor.ephemeral_gb)    # (b)
elif cache_name.startswith('swap'):
    inst_type = instance.get_flavor()
    swap_mb = inst_type.swap

...


(a) The argument 'size' does not exist in _create_ephemeral.
(b) We should report here the actual size of the ephemeral disk (which is what
was asked by the user during boot), instead of the total size allowed by the
flavor for ephemeral disks.
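
A hedged sketch of how the call could look once fixed, wrapped as a
standalone function for clarity; the per-disk size coming from
info['virt_disk_size'] and the GB conversion are assumptions based on the
snippet above, not the final patch:

from oslo_utils import units

def cache_ephemeral(image, create_ephemeral, cache_name, os_type,
                    virt_disk_size):
    # (a) the stray 'size' kwarg is dropped (it does not exist in
    #     _create_ephemeral);
    # (b) ephemeral_size is the per-disk size in GB, not the flavor total
    image.cache(fetch_func=create_ephemeral,
                fs_label=cache_name,
                os_type=os_type,
                filename=cache_name,
                ephemeral_size=virt_disk_size // units.Gi)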

** Affects: nova
 Importance: Undecided
 Assignee: sahid (sahid-ferdjaoui)
 Status: New


** Tags: newton-backport-potential

** Changed in: nova
 Assignee: (unassigned) => sahid (sahid-ferdjaoui)

** Tags added: newton-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1628449

Title:
  Exception when live block migration multiple ephemerals

Status in OpenStack Compute (nova):
  New

Bug description:
  When block-live-migrating an instance with multiple ephemeral disks, an
  exception FlavorDiskSmallerThanImage is raised.

  
  Steps to Reproduce:
  1) Created a flavor with two ephemeral disks.

  ~~~
  [root@allinone9 ~(keystone_admin)]# nova flavor-create 2ephemeral-disks 6 512 
1 1 --ephemeral 2
  ~~~

  2) Spawned an instance using the created flavor.

  ~~~
  [root@allinone9 ~(keystone_admin)]# nova boot --flavor 2ephemeral-disks 
--image cirros --ephemeral size=1 --ephemeral size=1 internal1
  ~~~

  3) Instance spawned successfully.

  ~~~
  [root@allinone9 ~(keystone_admin)]# nova list --field name,status,host | grep 
-i internal1
  | 08619d2d-e3a2-4f67-a959-33cfbc08d153 | internal1 | ACTIVE | allinone9 |
  ~~~

  4) Verified that the two extra ephemeral disks are attached to the
  instance.

  ~~~
  [root@allinone9 ~(keystone_admin)]# virsh domblklist 4
  Target Source
  ------------------------------------------------
  vda    /var/lib/nova/instances/08619d2d-e3a2-4f67-a959-33cfbc08d153/disk
  vdb    /var/lib/nova/instances/08619d2d-e3a2-4f67-a959-33cfbc08d153/disk.eph0
  vdc    /var/lib/nova/instances/08619d2d-e3a2-4f67-a959-33cfbc08d153/disk.eph1
  ~~~

  5) Tried to perform the block migration, but it ends with the same error.

[Yahoo-eng-team] [Bug 1614054] [NEW] Incorrect host cpu is given to emulator threads when cpu_realtime_mask flag is set

2016-08-17 Thread sahid
Public bug reported:

Description of problem:
When using the cpu_realtime and cpu_realtime_mask flags to create a new
instance, the 'cpuset' of the 'emulatorpin' option uses the id of the
vCPU, which is incorrect. The id of the host CPU should be used here.

e.g.
  <cputune>
    <emulatorpin cpuset='0'/>   ### the cpuset should be '2' here, when
                                ### cpu_realtime_mask=^0.
  </cputune>

How reproducible:
Boot a new instance with a cpu_realtime_mask flavor.

Steps to Reproduce:
1. Create RT flavor
nova flavor-create m1.small.performance 6 2048 20 2
nova flavor-key m1.small.performance set hw:cpu_realtime=yes
nova flavor-key m1.small.performance set hw:cpu_realtime_mask=^0
nova flavor-key m1.small.performance set hw:cpu_policy=dedicated
2. Boot an instance with this flavor
3. Check the XML of the new instance

Actual results:
  <cputune>
    <emulatorpin cpuset='0'/>
  </cputune>

Expected results:
  <cputune>
    <emulatorpin cpuset='2'/>
  </cputune>

** Affects: nova
 Importance: Undecided
 Assignee: sahid (sahid-ferdjaoui)
 Status: In Progress


** Tags: liberty-backport-potential libvirt

** Changed in: nova
 Assignee: (unassigned) => sahid (sahid-ferdjaoui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1614054

Title:
  Incorrect host cpu is given to emulator threads when cpu_realtime_mask
  flag is set

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description of problem:
  When using the cpu_realtime and cpu_realtime_mask flags to create a new
  instance, the 'cpuset' of the 'emulatorpin' option uses the id of the
  vCPU, which is incorrect. The id of the host CPU should be used here.

  e.g.
    <cputune>
      <emulatorpin cpuset='0'/>   ### the cpuset should be '2' here,
                                  ### when cpu_realtime_mask=^0.
    </cputune>


  How reproducible:
  Boot a new instance with a cpu_realtime_mask flavor.

  Steps to Reproduce:
  1. Create RT flavor
  nova flavor-create m1.small.performance 6 2048 20 2
  nova flavor-key m1.small.performance set hw:cpu_realtime=yes
  nova flavor-key m1.small.performance set hw:cpu_realtime_mask=^0
  nova flavor-key m1.small.performance set hw:cpu_policy=dedicated
  2. Boot an instance with this flavor
  3. Check the XML of the new instance

  Actual results:
    <cputune>
      <emulatorpin cpuset='0'/>
    </cputune>

  Expected results:
    <cputune>
      <emulatorpin cpuset='2'/>
    </cputune>


To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1614054/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1614019] [NEW] Instances lose its serial ports during soft-reboot after live-migration

2016-08-17 Thread sahid
Public bug reported:

Instances lose their serial ports during soft-reboot if they experienced a
live-migration just before. Therefore we cannot access the instance through
the serial console after a soft-reboot.

That is because the method post_live_migration, which defines the domain
XML in libvirt on the destination host, calls the method
get_guest_config instead of just retrieving the domain XML of the
migrated, running guest.
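
As a minimal sketch of that direction, using the raw libvirt Python
bindings rather than Nova's code, one could persist the migrated, running
guest's own XML instead of regenerating it:

import libvirt

def persist_migrated_guest(conn, instance_name):
    dom = conn.lookupByName(instance_name)
    # the running guest's XML still carries the serial ports that were
    # allocated on the destination host during the migration
    xml = dom.XMLDesc(libvirt.VIR_DOMAIN_XML_MIGRATABLE)
    conn.defineXML(xml)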

** Affects: nova
 Importance: Undecided
 Assignee: sahid (sahid-ferdjaoui)
 Status: In Progress


** Tags: libvirt

** Changed in: nova
 Assignee: (unassigned) => sahid (sahid-ferdjaoui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1614019

Title:
  Instances lose its serial ports during soft-reboot after live-
  migration

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Instances lose their serial ports during soft-reboot if they experienced
  a live-migration just before. Therefore we cannot access the instance
  through the serial console after a soft-reboot.

  That is because the method post_live_migration, which defines the
  domain XML in libvirt on the destination host, calls the method
  get_guest_config instead of just retrieving the domain XML of the
  migrated, running guest.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1614019/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1587014] [NEW] Serial ports lost after hard-reboot

2016-05-30 Thread sahid
Public bug reported:

After I executed "nova reboot --hard ", we could not access the
VM's serial console.

This is because during the hard-reboot process the driver destroys the
guest without undefining the domain. So when we recreate the guest and
call 'get_guest_xml', the serial ports are still defined in the XML, and
the process does not try to re-acquire them on the host.
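
A minimal sketch of the idea, again with the raw libvirt bindings and not
Nova's code: undefine the domain while tearing the guest down, so that the
regenerated XML has to re-acquire the serial ports on the host.

import libvirt

def destroy_for_hard_reboot(conn, instance_name):
    dom = conn.lookupByName(instance_name)
    dom.destroy()
    # without this, the stale XML (including its serial ports) survives
    # the reboot and get_guest_xml never re-acquires ports on the host
    dom.undefine()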

** Affects: nova
 Importance: Undecided
 Assignee: sahid (sahid-ferdjaoui)
 Status: New


** Tags: console libvirt

** Changed in: nova
 Assignee: (unassigned) => sahid (sahid-ferdjaoui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1587014

Title:
  Serial ports lost after hard-reboot

Status in OpenStack Compute (nova):
  New

Bug description:
  After I executed "nova reboot --hard ", we could not access the
  VM's serial console.

  This is because during the hard-reboot process the driver destroys the
  guest without undefining the domain. So when we recreate the guest and
  call 'get_guest_xml', the serial ports are still defined in the XML,
  and the process does not try to re-acquire them on the host.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1587014/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1577725] [NEW] Expired token passed to neutron return 500 instead of 401

2016-05-03 Thread sahid
Public bug reported:

If we pass Nova an about-to-expire token, Nova passes this token to
neutronclient to manipulate networks. neutronclient can raise an
unauthorized exception. This exception is not understood by Nova and is
converted to a 500, which leaves novaclient no chance to retry the
request with a re-generated token.

We should convert the unauthorized exception from Neutron to a 401
returned to Nova clients.

https://github.com/openstack/python-
novaclient/blob/master/novaclient/client.py#L438
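
A hedged sketch of that conversion, assuming neutronclient's Unauthorized
exception and a webob-style 401; the decorator name is illustrative:

import webob.exc
from neutronclient.common import exceptions as neutron_exc

def translate_neutron_auth_errors(func):
    """Map neutronclient auth failures to a 401 the caller can retry."""
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except neutron_exc.Unauthorized:
            raise webob.exc.HTTPUnauthorized()
    return wrapper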

** Affects: nova
 Importance: Undecided
 Assignee: sahid (sahid-ferdjaoui)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => sahid (sahid-ferdjaoui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1577725

Title:
  Expired token passed to neutron return 500 instead of 401

Status in OpenStack Compute (nova):
  New

Bug description:
  If we pass Nova an about-to-expire token, Nova passes this token to
  neutronclient to manipulate networks. neutronclient can raise an
  unauthorized exception. This exception is not understood by Nova and is
  converted to a 500, which leaves novaclient no chance to retry the
  request with a re-generated token.

  We should convert the unauthorized exception from Neutron to a 401
  returned to Nova clients.

  https://github.com/openstack/python-
  novaclient/blob/master/novaclient/client.py#L438

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1577725/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567461] Re: Possible race when allocating local port for serial console

2016-04-11 Thread sahid
** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1567461

Title:
  Possible race when allocating local port for serial console

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Nova binds a port to verify its availability, but immediately after the
  socket is closed another instance can have tested that same port, so
  the method can return the same port for two different instances.

  We should not let that situation happen.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1567461/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567461] [NEW] Possible race when allocating local port for serial console

2016-04-07 Thread sahid
Public bug reported:

Nova binds a port to verify its availability, but immediately after the
socket is closed another instance can have tested that same port, so the
method can return the same port for two different instances.

We should not let that situation happen.
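
A minimal sketch of a race-free direction (names assumed): keep the
probing socket open and hand it to the caller, so a concurrent caller
cannot be given the same port.

import socket

def acquire_free_port(host, port_range):
    """Return (port, bound socket); closing the socket releases the port."""
    for port in port_range:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            sock.bind((host, port))
        except OSError:
            sock.close()
            continue
        return port, sock
    raise RuntimeError("no free port available in the given range")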

** Affects: nova
 Importance: Undecided
 Assignee: sahid (sahid-ferdjaoui)
 Status: In Progress


** Tags: console

** Changed in: nova
 Assignee: (unassigned) => sahid (sahid-ferdjaoui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1567461

Title:
  Possible race when allocating local port for serial console

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Nova binds a port to verify its availability, but immediately after the
  socket is closed another instance can have tested that same port, so
  the method can return the same port for two different instances.

  We should not let that situation happen.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1567461/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1543149] [NEW] Reserve host pages on compute nodes

2016-02-08 Thread sahid
Public bug reported:

In some use cases we may want to prevent Nova from using an amount of
hugepages on compute nodes (for example when using ovs-dpdk). We should
provide an option 'reserved_memory_pages' which gives a way to define the
amount of pages we want to reserve for third-party components.
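
A hedged sketch of what such an option could look like with oslo.config;
the option name comes from the paragraph above, while the value format is
an assumption, not the final implementation:

from oslo_config import cfg

reserved_pages_opt = cfg.MultiStrOpt(
    'reserved_memory_pages',
    default=[],
    help='Memory pages to reserve per NUMA node for third-party '
         'components, e.g. "node:0,size:2048,count:64"')

cfg.CONF.register_opt(reserved_pages_opt)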

** Affects: nova
 Importance: Undecided
 Assignee: sahid (sahid-ferdjaoui)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => sahid (sahid-ferdjaoui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1543149

Title:
  Reserve host pages on compute nodes

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  In some use cases we may want to prevent Nova from using an amount of
  hugepages on compute nodes (for example when using ovs-dpdk). We should
  provide an option 'reserved_memory_pages' which gives a way to define
  the amount of pages we want to reserve for third-party components.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1543149/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542303] [NEW] When using realtime guests we should to avoid using QGA

2016-02-05 Thread sahid
Public bug reported:

When running in realtime mode we should tend toward very minimal hardware
support for the guest, and so disable support for the QEMU guest agent.

** Affects: nova
 Importance: Undecided
 Assignee: sahid (sahid-ferdjaoui)
 Status: New


** Tags: libvirt

** Changed in: nova
 Assignee: (unassigned) => sahid (sahid-ferdjaoui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1542303

Title:
  When using realtime guests we should to avoid using QGA

Status in OpenStack Compute (nova):
  New

Bug description:
  When running in realtime mode we should tend toward very minimal
  hardware support for the guest, and so disable support for the QEMU
  guest agent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1542303/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542302] [NEW] We should to initialize request_spec to handle expected exception

2016-02-05 Thread sahid
Public bug reported:

In nova/conductor/manager.py, in the method build_instances, populate_retry
only does arithmetic without interacting with third parties and can throw
an *expected* exception, whereas 'build_request_spec()' cannot. So we
should initialize request_spec as soon as possible, since it is used in
the case where that *expected* exception is raised.
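
A self-contained sketch of the ordering fix described above; the callables
are illustrative stand-ins, not Nova's exact signatures:

class MaxRetriesExceeded(Exception):
    pass

def build_instances(build_request_spec, populate_retry, set_vm_state):
    # build_request_spec cannot raise the expected exception, so run it
    # first: request_spec is then always bound inside the except block
    request_spec = build_request_spec()
    try:
        populate_retry()  # pure arithmetic, may raise MaxRetriesExceeded
    except MaxRetriesExceeded as exc:
        set_vm_state(request_spec, exc)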

** Affects: nova
 Importance: Undecided
 Assignee: sahid (sahid-ferdjaoui)
 Status: New


** Tags: conductor

** Changed in: nova
 Assignee: (unassigned) => sahid (sahid-ferdjaoui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1542302

Title:
  We should to initialize request_spec to handle expected exception

Status in OpenStack Compute (nova):
  New

Bug description:
  In nova/conductor/manager.py, in the method build_instances,
  populate_retry only does arithmetic without interacting with third
  parties and can throw an *expected* exception, whereas
  'build_request_spec()' cannot. So we should initialize request_spec as
  soon as possible, since it is used in the case where that *expected*
  exception is raised.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1542302/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1527497] [NEW] Unbound local variable request_spec

2015-12-18 Thread sahid
Public bug reported:

When building an instance, a MaxRetryExceeded exception can be raised,
and its handler references an unbound local variable request_spec.

in nova/conductor/manager.py, method build_instance()

** Affects: nova
 Importance: Low
 Assignee: sahid (sahid-ferdjaoui)
 Status: New


** Tags: conductor

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1527497

Title:
  Unbound local variable request_spec

Status in OpenStack Compute (nova):
  New

Bug description:
  When building an instance, a MaxRetryExceeded exception can be raised,
  and its handler references an unbound local variable request_spec.

  in nova/conductor/manager.py, method build_instance()

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1527497/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496854] [NEW] libvirt: CPU tune bw policy not available in some linux kernels

2015-09-17 Thread sahid
Public bug reported:

In some circumstances, mostly related to latency, the Linux kernel may have
been built with the cgroup configuration CONFIG_CGROUP_SCHED not defined,
which makes it impossible to boot virtual machines.

We should verify that this cgroup is properly mounted on the host:

  by default, if nothing has been requested, we can just skip the "cpu
shares" default configuration; if a request was intended, we should
raise an exception to let the scheduler try another host.
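
A hedged sketch of such a check; the cgroup mount path is an assumption
and varies across distributions:

import os

def host_supports_cpu_shares(cgroup_cpu_root='/sys/fs/cgroup/cpu'):
    # CONFIG_CGROUP_SCHED exposes cpu.shares under the mounted cpu
    # controller when it is built into the kernel
    return os.path.exists(os.path.join(cgroup_cpu_root, 'cpu.shares'))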

** Affects: nova
 Importance: Low
 Assignee: sahid (sahid-ferdjaoui)
 Status: In Progress


** Tags: libvirt

** Changed in: nova
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1496854

Title:
  libvirt: CPU tune bw policy not available in some linux kernels

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  In some circumstances, mostly related to latency, the Linux kernel may
  have been built with the cgroup configuration CONFIG_CGROUP_SCHED not
  defined, which makes it impossible to boot virtual machines.

  We should verify that this cgroup is properly mounted on the host:

  by default, if nothing has been requested, we can just skip the "cpu
  shares" default configuration; if a request was intended, we should
  raise an exception to let the scheduler try another host.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1496854/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459144] [NEW] Enhance VMware to support VirtualVmxnet3 as network type

2015-05-27 Thread sahid
Public bug reported:

Some devices may need to support VirtualVmxnet3 as a network type. We
should make sure the VMware driver can handle that case.

** Affects: nova
 Importance: Low
 Assignee: sahid (sahid-ferdjaoui)
 Status: Fix Committed


** Tags: juno-backport-potential kilo-backport-potential

** Changed in: nova
   Importance: Undecided => Low

** Changed in: nova
   Status: New => Fix Released

** Changed in: nova
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1459144

Title:
  Enhance VMware to support VirtualVmxnet3 as network type

Status in OpenStack Compute (Nova):
  Fix Committed

Bug description:
  Some devices may need to support VirtualVmxnet3 as a network type. We
  should make sure the VMware driver can handle that case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1459144/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451801] [NEW] Console tokens are not correctly cleaned when destroy instance from host

2015-05-05 Thread sahid
Public bug reported:

There are two cases when we need to clean console tokens for an
instance:

1/ When destroying the instance
2/ When migrating the instance

For case 1, the current code base does not clean tokens when only RDP or
only the serial console is enabled.
  http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py#n917

For case 2, the current code base does not clean tokens when only the
serial console is enabled.
  
http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py#n5413


Nice to have: One private method to clean tokens
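
A hedged sketch of that private helper; delete_tokens_for_instance is the
consoleauth RPC API call, while _console_enabled is an assumed predicate
covering vnc/spice/rdp/serial:

def _clean_instance_console_tokens(self, context, instance):
    """Single place to clean console auth tokens for an instance."""
    if self._console_enabled():  # assumed: any of vnc/spice/rdp/serial
        self.consoleauth_rpcapi.delete_tokens_for_instance(
            context, instance.uuid)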

** Affects: nova
 Importance: Low
 Assignee: sahid (sahid-ferdjaoui)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1451801

Title:
  Console tokens are not correctly cleaned when destroy instance from
  host

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  There are two cases when we need to clean console tokens for an
  instance:

  1/ When destroying the instance
  2/ When migrating the instance

  For case 1, the current code base does not clean tokens when only RDP
  or only the serial console is enabled.

http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py#n917

  For case 2, the current code base does not clean tokens when only the
  serial console is enabled.

http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py#n5413

  
  Nice to have: One private method to clean tokens

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1451801/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439256] Re: Small pages memory are not take into account when not explicitly requested

2015-04-01 Thread sahid
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1439256

Title:
  Small pages memory are not take into account when not explicitly
  requested

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Guests using small pages (the default) on a compute node are not taken
  into account when calculating the available small-pages memory [1]. As
  a consequence, when booting an instance with an explicit small-pages
  request, the accounting of available resources is corrupted.

  Two solutions are possible to fix the issue.

  1/
  Associate a NUMA topology with every guest and set the default page_size
  to MEMPAGES_SMALL when nothing has been requested by the user.  ** This
  also implies that when using libvirt the default virt-type option should
  be KVM **

  A couple of small changes are needed in hardware.py:
  - make the method 'numa_get_constraints' return a NUMATopology in all cases.
  - make the method '_numa_get_pagesize_constraints' return MEMPAGES_SMALL
  instead of None when nothing is requested.

  2/
  Disallow requesting a small memory page size, i.e. remove all of the code
  which handles that case, since the information reported to the host is not
  correctly updated, and let the default behavior handle that case.

  [1]
  http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/hardware.py#n1087

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1439256/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439257] Re: Small pages memory are not take into account when not explicitly requested

2015-04-01 Thread sahid
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1439257

Title:
  Small pages memory are not take into account when not explicitly
  requested

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Guests using small pages (the default) on a compute node are not taken
  into account when calculating the available small-pages memory [1]. As
  a consequence, when booting an instance with an explicit small-pages
  request, the accounting of available resources is corrupted.

  Two solutions are possible to fix the issue.

  1/
  Associate a NUMA topology with every guest and set the default page_size
  to MEMPAGES_SMALL when nothing has been requested by the user.  ** This
  also implies that when using libvirt the default virt-type option should
  be KVM **

  A couple of small changes are needed in hardware.py:
  - make the method 'numa_get_constraints' return a NUMATopology in all cases.
  - make the method '_numa_get_pagesize_constraints' return MEMPAGES_SMALL
  instead of None when nothing is requested.

  2/
  Disallow requesting a small memory page size, i.e. remove all of the code
  which handles that case, since the information reported to the host is not
  correctly updated, and let the default behavior handle that case.

  [1]
  http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/hardware.py#n1087

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1439257/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439262] Re: Small pages memory are not take into account when not explicitly requested

2015-04-01 Thread sahid
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1439262

Title:
  Small pages memory are not take into account when not explicitly
  requested

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Guests using small pages (the default) on a compute node are not taken
  into account when calculating the available small-pages memory [1]. As
  a consequence, when booting an instance with an explicit small-pages
  request, the accounting of available resources is corrupted.

  Two solutions are possible to fix the issue.

  1/
  Associate a NUMA topology with every guest and set the default page_size
  to MEMPAGES_SMALL when nothing has been requested by the user.  ** This
  also implies that when using libvirt the default virt-type option should
  be KVM **

  A couple of small changes are needed in hardware.py:
  - make the method 'numa_get_constraints' return a NUMATopology in all cases.
  - make the method '_numa_get_pagesize_constraints' return MEMPAGES_SMALL
  instead of None when nothing is requested.

  2/
  Disallow requesting a small memory page size, i.e. remove all of the code
  which handles that case, since the information reported to the host is not
  correctly updated, and let the default behavior handle that case.

  [1]
  http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/hardware.py#n1087

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1439262/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439251] Re: Small pages memory are not take into account when not explicitly requested

2015-04-01 Thread sahid
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1439251

Title:
  Small pages memory are not take into account when not explicitly
  requested

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Guests using small pages (the default) on a compute node are not taken
  into account when calculating the available small-pages memory [1]. As
  a consequence, when booting an instance with an explicit small-pages
  request, the accounting of available resources is corrupted.

  Two solutions are possible to fix the issue.

  1/
  Associate a NUMA topology with every guest and set the default page_size
  to MEMPAGES_SMALL when nothing has been requested by the user.  ** This
  also implies that when using libvirt the default virt-type option should
  be KVM **

  A couple of small changes are needed in hardware.py:
  - make the method 'numa_get_constraints' return a NUMATopology in all cases.
  - make the method '_numa_get_pagesize_constraints' return MEMPAGES_SMALL
  instead of None when nothing is requested.

  2/
  Disallow requesting a small memory page size, i.e. remove all of the code
  which handles that case, since the information reported to the host is not
  correctly updated, and let the default behavior handle that case.

  [1]
  http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/hardware.py#n1087

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1439251/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439255] Re: Small pages memory are not take into account when not explicitly requested

2015-04-01 Thread sahid
** Changed in: nova
   Status: New => Incomplete

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1439255

Title:
  Small pages memory are not take into account when not explicitly
  requested

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Guests using small pages (the default) on a compute node are not taken
  into account when calculating the available small-pages memory [1]. As
  a consequence, when booting an instance with an explicit small-pages
  request, the accounting of available resources is corrupted.

  Two solutions are possible to fix the issue.

  1/
  Associate a NUMA topology with every guest and set the default page_size
  to MEMPAGES_SMALL when nothing has been requested by the user.  ** This
  also implies that when using libvirt the default virt-type option should
  be KVM **

  A couple of small changes are needed in hardware.py:
  - make the method 'numa_get_constraints' return a NUMATopology in all cases.
  - make the method '_numa_get_pagesize_constraints' return MEMPAGES_SMALL
  instead of None when nothing is requested.

  2/
  Disallow requesting a small memory page size, i.e. remove all of the code
  which handles that case, since the information reported to the host is not
  correctly updated, and let the default behavior handle that case.

  [1]
  http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/hardware.py#n1087

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1439255/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439259] Re: Small pages memory are not take into account when not explicitly requested

2015-04-01 Thread sahid
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1439259

Title:
  Small pages memory are not take into account when not explicitly
  requested

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Guests using small pages (the default) on a compute node are not taken
  into account when calculating the available small-pages memory [1]. As
  a consequence, when booting an instance with an explicit small-pages
  request, the accounting of available resources is corrupted.

  Two solutions are possible to fix the issue.

  1/
  Associate a NUMA topology with every guest and set the default page_size
  to MEMPAGES_SMALL when nothing has been requested by the user.  ** This
  also implies that when using libvirt the default virt-type option should
  be KVM **

  A couple of small changes are needed in hardware.py:
  - make the method 'numa_get_constraints' return a NUMATopology in all cases.
  - make the method '_numa_get_pagesize_constraints' return MEMPAGES_SMALL
  instead of None when nothing is requested.

  2/
  Disallow requesting a small memory page size, i.e. remove all of the code
  which handles that case, since the information reported to the host is not
  correctly updated, and let the default behavior handle that case.

  [1]
  http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/hardware.py#n1087

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1439259/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439256] [NEW] Small pages memory are not take into account when not explicitly requested

2015-04-01 Thread sahid
Public bug reported:

Guests using small pages (the default) on a compute node are not taken
into account when calculating the available small-pages memory [1]. As a
consequence, when booting an instance with an explicit small-pages
request, the accounting of available resources is corrupted.

Two solutions are possible to fix the issue.

1/
Associate a NUMA topology with every guest and set the default page_size
to MEMPAGES_SMALL when nothing has been requested by the user.  ** This
also implies that when using libvirt the default virt-type option should
be KVM **

A couple of small changes are needed in hardware.py:
- make the method 'numa_get_constraints' return a NUMATopology in all cases.
- make the method '_numa_get_pagesize_constraints' return MEMPAGES_SMALL
instead of None when nothing is requested.

2/
Disallow requesting a small memory page size, i.e. remove all of the code
which handles that case, since the information reported to the host is not
correctly updated, and let the default behavior handle that case.

[1]
http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/hardware.py#n1087
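
A hedged, self-contained sketch of the '_numa_get_pagesize_constraints'
change under option 1/; the constant value mirrors nova's symbolic
small-pages marker and is an assumption:

MEMPAGES_SMALL = -1  # symbolic marker for "small pages" (assumed value)

def numa_get_pagesize_constraints(requested_pagesize):
    # default to small pages so every guest is accounted against the
    # host's small-pages pool instead of being ignored
    if requested_pagesize is None:
        return MEMPAGES_SMALL
    return requested_pagesize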

** Affects: nova
 Importance: Medium
 Assignee: sahid (sahid-ferdjaoui)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1439256

Title:
  Small pages memory are not take into account when not explicitly
  requested

Status in OpenStack Compute (Nova):
  New

Bug description:
  Guests using small pages (the default) on a compute node are not taken
  into account when calculating the available small-pages memory [1]. As
  a consequence, when booting an instance with an explicit small-pages
  request, the accounting of available resources is corrupted.

  Two solutions are possible to fix the issue.

  1/
  Associate a NUMA topology with every guest and set the default page_size
  to MEMPAGES_SMALL when nothing has been requested by the user.  ** This
  also implies that when using libvirt the default virt-type option should
  be KVM **

  A couple of small changes are needed in hardware.py:
  - make the method 'numa_get_constraints' return a NUMATopology in all cases.
  - make the method '_numa_get_pagesize_constraints' return MEMPAGES_SMALL
  instead of None when nothing is requested.

  2/
  Disallow requesting a small memory page size, i.e. remove all of the code
  which handles that case, since the information reported to the host is not
  correctly updated, and let the default behavior handle that case.

  [1]
  http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/hardware.py#n1087

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1439256/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439257] [NEW] Small pages memory are not take into account when not explicitly requested

2015-04-01 Thread sahid
Public bug reported:

Guests using small pages (the default) on a compute node are not taken
into account when calculating the available small-pages memory [1]. As a
consequence, when booting an instance with an explicit small-pages
request, the accounting of available resources is corrupted.

Two solutions are possible to fix the issue.

1/
Associate a NUMA topology with every guest and set the default page_size
to MEMPAGES_SMALL when nothing has been requested by the user.  ** This
also implies that when using libvirt the default virt-type option should
be KVM **

A couple of small changes are needed in hardware.py:
- make the method 'numa_get_constraints' return a NUMATopology in all cases.
- make the method '_numa_get_pagesize_constraints' return MEMPAGES_SMALL
instead of None when nothing is requested.

2/
Disallow requesting a small memory page size, i.e. remove all of the code
which handles that case, since the information reported to the host is not
correctly updated, and let the default behavior handle that case.

[1]
http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/hardware.py#n1087

** Affects: nova
 Importance: Medium
 Assignee: sahid (sahid-ferdjaoui)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1439257

Title:
  Small pages memory are not take into account when not explicitly
  requested

Status in OpenStack Compute (Nova):
  New

Bug description:
  Guests using small pages (the default) on a compute node are not taken
  into account when calculating the available small-pages memory [1]. As
  a consequence, when booting an instance with an explicit small-pages
  request, the accounting of available resources is corrupted.

  Two solutions are possible to fix the issue.

  1/
  Associate a NUMA topology with every guest and set the default page_size
  to MEMPAGES_SMALL when nothing has been requested by the user.  ** This
  also implies that when using libvirt the default virt-type option should
  be KVM **

  A couple of small changes are needed in hardware.py:
  - make the method 'numa_get_constraints' return a NUMATopology in all cases.
  - make the method '_numa_get_pagesize_constraints' return MEMPAGES_SMALL
  instead of None when nothing is requested.

  2/
  Disallow requesting a small memory page size, i.e. remove all of the code
  which handles that case, since the information reported to the host is not
  correctly updated, and let the default behavior handle that case.

  [1]
  http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/hardware.py#n1087

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1439257/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439255] [NEW] Small pages memory are not take into account when not explicitly requested

2015-04-01 Thread sahid
Public bug reported:

Guests using small pages (the default) on a compute node are not taken
into account when calculating the available small-pages memory [1]. As a
consequence, when booting an instance with an explicit small-pages
request, the accounting of available resources is corrupted.

Two solutions are possible to fix the issue.

1/
Associate a NUMA topology with every guest and set the default page_size
to MEMPAGES_SMALL when nothing has been requested by the user.  ** This
also implies that when using libvirt the default virt-type option should
be KVM **

A couple of small changes are needed in hardware.py:
- make the method 'numa_get_constraints' return a NUMATopology in all cases.
- make the method '_numa_get_pagesize_constraints' return MEMPAGES_SMALL
instead of None when nothing is requested.

2/
Disallow requesting a small memory page size, i.e. remove all of the code
which handles that case, since the information reported to the host is not
correctly updated, and let the default behavior handle that case.

[1]
http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/hardware.py#n1087

** Affects: nova
 Importance: Medium
 Assignee: sahid (sahid-ferdjaoui)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1439255

Title:
  Small pages memory are not take into account when not explicitly
  requested

Status in OpenStack Compute (Nova):
  New

Bug description:
  Guests using small pages (the default) on a compute node are not taken
  into account when calculating the available small-pages memory [1]. As
  a consequence, when booting an instance with an explicit small-pages
  request, the accounting of available resources is corrupted.

  Two solutions are possible to fix the issue.

  1/
  Associate a NUMA topology with every guest and set the default page_size
  to MEMPAGES_SMALL when nothing has been requested by the user.  ** This
  also implies that when using libvirt the default virt-type option should
  be KVM **

  A couple of small changes are needed in hardware.py:
  - make the method 'numa_get_constraints' return a NUMATopology in all cases.
  - make the method '_numa_get_pagesize_constraints' return MEMPAGES_SMALL
  instead of None when nothing is requested.

  2/
  Disallow requesting a small memory page size, i.e. remove all of the code
  which handles that case, since the information reported to the host is not
  correctly updated, and let the default behavior handle that case.

  [1]
  http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/hardware.py#n1087

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1439255/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439247] [NEW] Small pages memory are not take into account when not explicitly requested

2015-04-01 Thread sahid
Public bug reported:

Guests using small pages (the default) on a compute node are not taken
into account when calculating the available small-pages memory [1]. As a
consequence, when booting an instance with an explicit small-pages
request, the accounting of available resources is corrupted.

Two solutions are possible to fix the issue.

1/
Associate a NUMA topology with every guest and set the default page_size
to MEMPAGES_SMALL when nothing has been requested by the user.  ** This
also implies that when using libvirt the default virt-type option should
be KVM **

A couple of small changes are needed in hardware.py:
- make the method 'numa_get_constraints' return a NUMATopology in all cases.
- make the method '_numa_get_pagesize_constraints' return MEMPAGES_SMALL
instead of None when nothing is requested.

2/
Disallow requesting a small memory page size, i.e. remove all of the code
which handles that case, since the information reported to the host is not
correctly updated, and let the default behavior handle that case.

[1]
http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/hardware.py#n1087

** Affects: nova
 Importance: Medium
 Assignee: sahid (sahid-ferdjaoui)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1439247

Title:
  Small pages memory are not take into account when not explicitly
  requested

Status in OpenStack Compute (Nova):
  New

Bug description:
  Guests using small pages (the default) on a compute node are not taken
  into account when calculating the available small-pages memory [1]. As
  a consequence, when booting an instance with an explicit small-pages
  request, the accounting of available resources is corrupted.

  Two solutions are possible to fix the issue.

  1/
  Associate a NUMA topology with every guest and set the default page_size
  to MEMPAGES_SMALL when nothing has been requested by the user.  ** This
  also implies that when using libvirt the default virt-type option should
  be KVM **

  A couple of small changes are needed in hardware.py:
  - make the method 'numa_get_constraints' return a NUMATopology in all cases.
  - make the method '_numa_get_pagesize_constraints' return MEMPAGES_SMALL
  instead of None when nothing is requested.

  2/
  Disallow requesting a small memory page size, i.e. remove all of the code
  which handles that case, since the information reported to the host is not
  correctly updated, and let the default behavior handle that case.

  [1]
  http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/hardware.py#n1087

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1439247/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439254] [NEW] Small pages memory are not take into account when not explicitly requested

2015-04-01 Thread sahid
Public bug reported:

Guests using small pages (the default) on a compute node are not taken
into account when calculating the available small-pages memory [1]. As a
consequence, when booting an instance with an explicit small-pages
request, the accounting of available resources is corrupted.

Two solutions are possible to fix the issue.

1/
Associate a NUMA topology with every guest and set the default page_size
to MEMPAGES_SMALL when nothing has been requested by the user.  ** This
also implies that when using libvirt the default virt-type option should
be KVM **

A couple of small changes are needed in hardware.py:
- make the method 'numa_get_constraints' return a NUMATopology in all cases.
- make the method '_numa_get_pagesize_constraints' return MEMPAGES_SMALL
instead of None when nothing is requested.

2/
Disallow requesting a small memory page size, i.e. remove all of the code
which handles that case, since the information reported to the host is not
correctly updated, and let the default behavior handle that case.

[1]
http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/hardware.py#n1087

** Affects: nova
 Importance: Medium
 Assignee: sahid (sahid-ferdjaoui)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1439254

Title:
  Small pages memory are not take into account when not explicitly
  requested

Status in OpenStack Compute (Nova):
  New

Bug description:
  Guests using small pages (the default) on a compute node are not taken
  into account when calculating the available small-pages memory [1]. As
  a consequence, when booting an instance with an explicit small-pages
  request, the accounting of available resources is corrupted.

  Two solutions are possible to fix the issue.

  1/
  Associate a NUMA topology with every guest and set the default page_size
  to MEMPAGES_SMALL when nothing has been requested by the user.  ** This
  also implies that when using libvirt the default virt-type option should
  be KVM **

  A couple of small changes are needed in hardware.py:
  - make the method 'numa_get_constraints' return a NUMATopology in all cases.
  - make the method '_numa_get_pagesize_constraints' return MEMPAGES_SMALL
  instead of None when nothing is requested.

  2/
  Disallow requesting a small memory page size, i.e. remove all of the code
  which handles that case, since the information reported to the host is not
  correctly updated, and let the default behavior handle that case.

  [1]
  http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/hardware.py#n1087

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1439254/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439251] [NEW] Small pages memory are not take into account when not explicitly requested

2015-04-01 Thread sahid
Public bug reported:

Guests using small pages (the default) on a compute node are not taken
into account when calculating the available small-pages memory [1]. As a
consequence, when booting an instance with an explicit small-pages
request, the accounting of available resources is corrupted.

Two solutions are possible to fix the issue.

1/
Associate a NUMA topology with every guest and set the default page_size
to MEMPAGES_SMALL when nothing has been requested by the user.  ** This
also implies that when using libvirt the default virt-type option should
be KVM **

A couple of small changes are needed in hardware.py:
- make the method 'numa_get_constraints' return a NUMATopology in all cases.
- make the method '_numa_get_pagesize_constraints' return MEMPAGES_SMALL
instead of None when nothing is requested.

2/
Disallow requesting a small memory page size, i.e. remove all of the code
which handles that case, since the information reported to the host is not
correctly updated, and let the default behavior handle that case.

[1]
http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/hardware.py#n1087

** Affects: nova
 Importance: Medium
 Assignee: sahid (sahid-ferdjaoui)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1439251

Title:
  Small pages memory are not take into account when not explicitly
  requested

Status in OpenStack Compute (Nova):
  New

Bug description:
  Guests using small pages (the default) on a compute node are not taken
  into account when calculating the available small-pages memory [1]. As
  a consequence, when booting an instance with an explicit small-pages
  request, the accounting of available resources is corrupted.

  Two solutions are possible to fix the issue.

  1/
  Associate a NUMA topology with every guest and set the default page_size
  to MEMPAGES_SMALL when nothing has been requested by the user.  ** This
  also implies that when using libvirt the default virt-type option should
  be KVM **

  A couple of small changes are needed in hardware.py:
  - make the method 'numa_get_constraints' return a NUMATopology in all cases.
  - make the method '_numa_get_pagesize_constraints' return MEMPAGES_SMALL
  instead of None when nothing is requested.

  2/
  Disallow requesting a small memory page size, i.e. remove all of the code
  which handles that case, since the information reported to the host is not
  correctly updated, and let the default behavior handle that case.

  [1]
  http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/hardware.py#n1087

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1439251/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439262] [NEW] Small pages memory are not take into account when not explicitly requested

2015-04-01 Thread sahid
Public bug reported:

Guests using small pages (the default) on a compute node are not taken
into account when calculating the available small-pages memory [1]. As a
consequence, when booting an instance with an explicit small-pages
request, the accounting of available resources is corrupted.

Two solutions are possible to fix the issue.

1/
Associate a NUMA topology with every guest and set the default page_size
to MEMPAGES_SMALL when nothing has been requested by the user.  ** This
also implies that when using libvirt the default virt-type option should
be KVM **

A couple of small changes are needed in hardware.py:
- make the method 'numa_get_constraints' return a NUMATopology in all cases.
- make the method '_numa_get_pagesize_constraints' return MEMPAGES_SMALL
instead of None when nothing is requested.

2/
Disallow requesting a small memory page size, i.e. remove all of the code
which handles that case, since the information reported to the host is not
correctly updated, and let the default behavior handle that case.

[1]
http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/hardware.py#n1087

** Affects: nova
 Importance: Medium
 Assignee: sahid (sahid-ferdjaoui)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1439262

Title:
  Small pages memory are not take into account when not explicitly
  requested

Status in OpenStack Compute (Nova):
  New

Bug description:
  Guests using small pages (the default) on a compute node are not taken
  into account when calculating the available small-pages memory [1]. As
  a consequence, when booting an instance with an explicit small-pages
  request, the accounting of available resources is corrupted.

  Two solutions are possible to fix the issue.

  1/
  Associate a NUMA topology with every guest and set the default page_size
  to MEMPAGES_SMALL when nothing has been requested by the user.  ** This
  also implies that when using libvirt the default virt-type option should
  be KVM **

  A couple of small changes are needed in hardware.py:
  - make the method 'numa_get_constraints' return a NUMATopology in all cases.
  - make the method '_numa_get_pagesize_constraints' return MEMPAGES_SMALL
  instead of None when nothing is requested.

  2/
  Disallow requesting a small memory page size, i.e. remove all of the code
  which handles that case, since the information reported to the host is not
  correctly updated, and let the default behavior handle that case.

  [1]
  http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/hardware.py#n1087

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1439262/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439259] [NEW] Small pages memory is not taken into account when not explicitly requested

2015-04-01 Thread sahid
Public bug reported:

Guests using small pages (the default) on a compute node are not taken into
account when calculating the available small-pages memory [1]. As a
consequence, when booting an instance with an explicit small-pages request,
the computation of available resources is corrupted.

In order to fix the issue, two solutions are available.

1/
Associate a NUMA topology with every guest and set the default page_size to 
MEMPAGES_SMALL when nothing has been requested by the user.  ** This also 
implies that when using libvirt the default virt-type option should be KVM **

A couple of small changes are needed in hardware.py:
- make the method 'numa_get_constraints' return a NUMATopology in all cases;
- make the method '_numa_get_pagesize_constraints' return MEMPAGES_SMALL 
instead of None when nothing is requested.

2/
Disallow requests for a small memory page size; that is, remove all of the 
code handling that case, since the information reported to the host is not 
correctly updated, and let the default behavior handle it.

[1]
http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/hardware.py#n1087

** Affects: nova
 Importance: Medium
 Assignee: sahid (sahid-ferdjaoui)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1439259

Title:
  Small pages memory is not taken into account when not explicitly
  requested

Status in OpenStack Compute (Nova):
  New

Bug description:
  Guests using small pages (the default) on a compute node are not taken
  into account when calculating the available small-pages memory [1]. As
  a consequence, when booting an instance with an explicit small-pages
  request, the computation of available resources is corrupted.

  In order to fix the issue, two solutions are available.

  1/
  Associate a NUMA topology with every guest and set the default page_size to 
MEMPAGES_SMALL when nothing has been requested by the user.  ** This also 
implies that when using libvirt the default virt-type option should be KVM **

  A couple of small changes are needed in hardware.py:
  - make the method 'numa_get_constraints' return a NUMATopology in all cases;
  - make the method '_numa_get_pagesize_constraints' return MEMPAGES_SMALL 
instead of None when nothing is requested.

  2/
  Disallow requests for a small memory page size; that is, remove all of the 
code handling that case, since the information reported to the host is not 
correctly updated, and let the default behavior handle it.

  [1]
  http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/hardware.py#n1087

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1439259/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404839] [NEW] NUMA topology from image meta data is bugged

2014-12-22 Thread sahid
Public bug reported:

NUMA properties are retrieved from image metadata through the method
'numa_get_constraints', which expects the 'properties' dict of
image_meta.

In part of the code, this method is called with an image metadata object
instead of the properties dict.


To fix it, we should always pass the whole image object.
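
A hedged sketch of the inconsistency; both call sites are illustrative:

    # Buggy: an image metadata object is passed where the method
    # expects the bare 'properties' dict.
    hardware.numa_get_constraints(flavor, image_meta)

    # Fix per the report: always pass the whole image and let
    # numa_get_constraints extract image['properties'] itself.
    hardware.numa_get_constraints(flavor, image)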

** Affects: nova
 Importance: High
 Assignee: sahid (sahid-ferdjaoui)
 Status: New

** Changed in: nova
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1404839

Title:
  NUMA topology from image meta data is bugged

Status in OpenStack Compute (Nova):
  New

Bug description:
  NUMA properties are retrieved from image metadata through the method
  'numa_get_constraints', which expects the 'properties' dict of
  image_meta.

  In part of the code, this method is called with an image metadata
  object instead of the properties dict.

  
  To fix it, we should always pass the whole image object.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1404839/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399573] [NEW] allow configuring the disk driver IO policy

2014-12-05 Thread sahid
Public bug reported:

libvirt allows configuring the disk I/O policy with io=native or
io=threads, which, according to this email, clearly improves performance:

https://www.redhat.com/archives/libvirt-users/2011-June/msg4.html

We should offer a way to configure this, as we do for
disk_cachemode
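
For illustration, this is the libvirt guest disk element such an option
would control; a nova.conf option (name hypothetical, mirroring
disk_cachemode) would feed the io attribute:

    <disk type='file' device='disk'>
      <!-- the io attribute accepts 'native' or 'threads' -->
      <driver name='qemu' type='qcow2' io='native'/>
    </disk>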

** Affects: nova
 Importance: Wishlist
 Assignee: sahid (sahid-ferdjaoui)
 Status: In Progress


** Tags: libvirt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1399573

Title:
  allow configuring the disk driver IO policy

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  libvirt allows configuring the disk I/O policy with io=native or
  io=threads, which, according to this email, clearly improves performance:

  https://www.redhat.com/archives/libvirt-users/2011-June/msg4.html

  We should offer a way to configure this, as we do for
  disk_cachemode

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1399573/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1375379] [NEW] console: wrong check when verifying the server response

2014-09-29 Thread sahid
Public bug reported:

When trying to connect to a console with internal_access_path, if the
server does not respond with 200 we should raise an exception, but the
current code does not ensure this.

https://github.com/openstack/nova/blob/master/nova/console/websocketproxy.py#L68


The method 'find' returns -1 on failure, not False or 0.
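
A short sketch of the wrong and right checks; the payload is
illustrative, only the str.find() semantics matter:

    data = "HTTP/1.1 200 OK"

    # Wrong: find() returns -1 (truthy) on failure, and may return 0
    # (falsy) on a match at the start of the string, so truthiness
    # cannot tell success from failure.
    if data.find("200"):
        pass

    # Correct: compare against -1 explicitly.
    if data.find("200") == -1:
        raise Exception("Connection to console host failed")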

** Affects: nova
 Importance: Undecided
 Assignee: sahid (sahid-ferdjaoui)
 Status: New


** Tags: console

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1375379

Title:
  console: wrong check when verifying the server response

Status in OpenStack Compute (Nova):
  New

Bug description:
  When trying to connect to a console with internal_access_path, if the
  server does not respond with 200 we should raise an exception, but the
  current code does not ensure this.

  https://github.com/openstack/nova/blob/master/nova/console/websocketproxy.py#L68

  The method 'find' returns -1 on failure, not False or 0.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1375379/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374414] [NEW] console: websocketproxy needs to handle token from path

2014-09-26 Thread sahid
Public bug reported:

Currently, websocketproxy looks for a valid token in the cookie, which is
how novnc works, but we should also look at the path.

This breaks authentication for other clients, like the full websocket
client used for the serial console feature.

  https://gist.github.com/sahid/894c31f306bebacb2207
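
A hedged sketch of the fallback to the URL; the helper name is
hypothetical and the real proxy code differs:

    import urlparse  # Python 2, as used by Nova at the time

    def _get_token(self):
        # novnc passes the token in a cookie; other websocket clients,
        # such as the serial console client, put it in the URL.
        token = self._token_from_cookie()  # hypothetical helper
        if not token:
            query = urlparse.urlparse(self.path).query
            token = urlparse.parse_qs(query).get('token', ['']).pop()
        return token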

** Affects: nova
 Importance: High
 Assignee: sahid (sahid-ferdjaoui)
 Status: New


** Tags: console

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1374414

Title:
  console: websocketproxy needs to handle token from path

Status in OpenStack Compute (Nova):
  New

Bug description:
  Currently, websocketproxy looks for a valid token in the cookie, which
  is how novnc works, but we should also look at the path.

  This breaks authentication for other clients, like the full websocket
  client used for the serial console feature.

https://gist.github.com/sahid/894c31f306bebacb2207

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1374414/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1369563] [NEW] Keep tracking image association when creating a volume from an image

2014-09-15 Thread sahid
Public bug reported:

When booting an instance with a volume created from an image, we should
reference in the instance the image used to create the volume.

Currently, nova show displays:
...
| image| Attempt to boot from volume - no image 
supplied  |
...

Resources:
 - 
http://docs.openstack.org/user-guide/content/create_volume_from_image_and_boot.html

** Affects: nova
 Importance: Wishlist
 Assignee: sahid (sahid-ferdjaoui)
 Status: New


** Tags: compute

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1369563

Title:
  Keep tracking image association when creating a volume from an image

Status in OpenStack Compute (Nova):
  New

Bug description:
  When booting an instance with a volume created from an image, we
  should reference in the instance the image used to create the volume.

  Currently, nova show displays:
  ...
  | image| Attempt to boot from volume - no 
image supplied  |
  ...

  Resources:
   - 
http://docs.openstack.org/user-guide/content/create_volume_from_image_and_boot.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1369563/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1366832] [NEW] serial console, ports are not released

2014-09-08 Thread sahid
Public bug reported:

When booting an instance with the serial console activated, port(s) are 
allocated but never released, since the code responsible for freeing the 
port(s) is called after the domain has been undefined from libvirt.
Also, since the domain is already undefined, calling the method 
'_lookup_by_name' raises a DomainNotFound exception, which makes it 
impossible to correctly finish the deletion process.
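
A minimal ordering sketch; the method names are hypothetical, the point
is only that the ports must be freed while the domain still exists:

    def destroy(self, context, instance):
        # Free the serial console ports first, while the domain can
        # still be looked up.
        self._release_serial_console_ports(instance)  # hypothetical
        self._undefine_domain(instance)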

** Affects: nova
 Importance: High
 Assignee: sahid (sahid-ferdjaoui)
 Status: New


** Tags: libvirt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1366832

Title:
  serial console, ports are not released

Status in OpenStack Compute (Nova):
  New

Bug description:
  When booting an instance with serial console activated, port(s) are allocated 
but never released since the code responsible to freeing port(s) is called 
after the domain is undefined from libvirt.
  Also since the domain is already undefined, when calling the method 
'_lookup_by_name' an exception DomainnotFound is raised which makes not 
possible to correctly finish the deleting process

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1366832/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361611] [NEW] console/virt stop returning arbitrary dicts in driver API

2014-08-26 Thread sahid
Public bug reported:

There is a general desire to stop returning/passing arbitrary dicts in
the virt driver API. In this report we would like to create typed
objects for consoles, which drivers will use to return values to the
compute manager.
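
A hedged sketch of what such typed objects could look like; the class
names and fields are illustrative:

    class Console(object):
        """Typed object replacing an arbitrary console dict."""
        def __init__(self, host, port):
            self.host = host
            self.port = port

    class VNCConsole(Console):
        def __init__(self, host, port, internal_access_path=None):
            super(VNCConsole, self).__init__(host, port)
            self.internal_access_path = internal_access_path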

** Affects: nova
 Importance: Undecided
 Assignee: sahid (sahid-ferdjaoui)
 Status: New


** Tags: api virt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361611

Title:
  console/virt stop returning arbitrary dicts in driver API

Status in OpenStack Compute (Nova):
  New

Bug description:
  There is a general desire to stop returning/passing arbitrary dicts in
  the virt driver API. In this report we would like to create typed
  objects for consoles, which drivers will use to return values to the
  compute manager.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361611/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337359] Re: The io_ops_filter is not working while instance is rebuilding.

2014-08-18 Thread sahid
** Changed in: nova
 Assignee: sahid (sahid-ferdjaoui) => (unassigned)

** Changed in: nova
   Importance: Low => Undecided

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1337359

Title:
  The io_ops_filter is not working while instance is rebuilding.

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I am trying to control the host's concurrent I/O operations. I set the
  following properties in nova.conf:

  
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,IoOpsFilter
  max_io_ops_per_host=2

  But I can still schedule an instance at the host which has two
  instances is rebuilding.

  The task state of rebuilding instances is REBUILD_SPAWNING =
  "rebuild_spawning", but io_workload in stats.py is:

  @property
  def io_workload(self):
      """Calculate an I/O based load by counting I/O heavy operations."""

      def _get(state, state_type):
          key = "num_%s_%s" % (state_type, state)
          return self.get(key, 0)

      num_builds = _get(vm_states.BUILDING, "vm")
      num_migrations = _get(task_states.RESIZE_MIGRATING, "task")
      num_rebuilds = _get(task_states.REBUILDING, "task")
      num_resizes = _get(task_states.RESIZE_PREP, "task")
      num_snapshots = _get(task_states.IMAGE_SNAPSHOT, "task")
      num_backups = _get(task_states.IMAGE_BACKUP, "task")

      return (num_builds + num_rebuilds + num_resizes + num_migrations +
              num_snapshots + num_backups)


  The I/O heavy operations do not include the rebuild_spawning state.
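
  For illustration, counting that state would be a one-line addition to
  the sum above (hedged; note the bug was ultimately closed as Invalid):

      num_rebuild_spawns = _get(task_states.REBUILD_SPAWNING, "task")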

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1337359/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1358316] [NEW] disk: add support to resize ntfs

2014-08-18 Thread sahid
Public bug reported:

We should add support to disk.api for resizing an NTFS file system after
extending the size of the image.
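
A hedged sketch of what this could look like in disk/api.py; ntfsresize
is the real tool from ntfsprogs/ntfs-3g, while the helper name and exact
flags shown are illustrative:

    def resize_ntfs(image, size):
        # -f skips the interactive consistency prompt; size accepts
        # suffixes such as '10G'.
        utils.execute('ntfsresize', '-f', '--size', size, image,
                      run_as_root=True)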

** Affects: nova
 Importance: Undecided
 Assignee: sahid (sahid-ferdjaoui)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1358316

Title:
  disk: add support to resize ntfs

Status in OpenStack Compute (Nova):
  New

Bug description:
  We should add support to disk.api for resizing an NTFS file system
  after extending the size of the image.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1358316/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257594] Re: Unshelving an instance uses original image not shelved image

2014-08-05 Thread sahid
** Changed in: tempest
 Assignee: sahid (sahid-ferdjaoui) => (unassigned)

** Changed in: tempest
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1257594

Title:
  Unshelving an instance uses original image not shelved image

Status in OpenStack Compute (Nova):
  Fix Released
Status in Tempest:
  Invalid

Bug description:
  When unshelving a shelved instance that has been offloaded to glance it 
doesn't actually use the image stored in glance.
  It actually uses the image that the instance was booted up with in the first 
place.

  This seems a bit crazy to me so it would be great if someone could
  replicate.

  Note: This is with stable/havana but looking at master I don't see
  anything that would mean that this actually works in master either

  Please tell me I'm wrong and I have some messed up setup..

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1257594/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1314548] [NEW] neutron_driver - wrap_check_secturity_groups_policy is already defined

2014-04-30 Thread sahid
Public bug reported:

The local method 'wrap_check_secturity_groups_policy' in the module
network/neutron_driver.py is already defined in compute_api. We should
use that definition.

** Affects: nova
 Importance: Low
 Assignee: sahid (sahid-ferdjaoui)
 Status: In Progress


** Tags: low-hanging-fruit network

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1314548

Title:
  neutron_driver - wrap_check_secturity_groups_policy is already defined

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  The local method 'wrap_check_secturity_groups_policy' in the module
  network/neutron_driver.py is already defined in compute_api. We should
  use that definition.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1314548/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1311137] [NEW] GlanceImageService client needs to handle image_id equal to None

2014-04-22 Thread sahid
Public bug reported:

When we use the methods 'show', 'download', 'details', and 'delete', we need
to check the value of image_id to avoid an unnecessary call to the API, in:
nova/image/glance.py
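
A minimal sketch of the guard, using nova's existing ImageNotFound
exception; the method body is elided:

    from nova import exception

    def show(self, context, image_id):
        if not image_id:
            # Avoid a pointless round-trip to the Glance API.
            raise exception.ImageNotFound(image_id=image_id)
        ...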

** Affects: nova
 Importance: Low
 Assignee: sahid (sahid-ferdjaoui)
 Status: New

** Description changed:

  When we use methods show, download, details, delete we need to
  check the value of image_id to avoid an unnecessary call to the api. in:
- nova/glance.py
+ nova/image/glance.py

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1311137

Title:
  GlanceImageService client needs to handle image_id equal to None

Status in OpenStack Compute (Nova):
  New

Bug description:
  When we use the methods 'show', 'download', 'details', and 'delete',
  we need to check the value of image_id to avoid an unnecessary call to
  the API, in: nova/image/glance.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1311137/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1307416] [NEW] Unshelving an instance needs exception handling

2014-04-14 Thread sahid
Public bug reported:

There are some cases not handled when we unshelve an instance in the
conductor.

   nova/conductor/manager.py#823

If the key shelved_image_id is not defined, this will raise an unhandled 
KeyError.
Also, when shelved_image_id is set to None, the error is not correctly 
handled and the message can be confusing.
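
A hedged sketch of the defensive lookup; sys_meta stands for the
instance's system metadata dict, and the exception type is generic:

    image_id = sys_meta.get('shelved_image_id')
    if image_id is None:
        # Fail with an explicit message instead of a bare KeyError.
        raise exception.NovaException(
            _('Unshelve attempted but shelved_image_id is missing'))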

** Affects: nova
 Importance: Low
 Assignee: sahid (sahid-ferdjaoui)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1307416

Title:
  Unshelving an instance needs exception handling

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  There are some cases not handled when we unshelve an instance in the
  conductor.

 nova/conductor/manager.py#823

  If the key shelved_image_id is not defined, this will raise an unhandled 
KeyError.
  Also, when shelved_image_id is set to None, the error is not correctly 
handled and the message can be confusing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1307416/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1301340] [NEW] Remove duplicate code with aggregate filters

2014-04-02 Thread sahid
Public bug reported:

Some filters use the same logic to handle per-aggregate options. We should
create a helper to remove this duplicated code and make it easier to
implement new aggregate-based filters (see the sketch after the list below).

Filters that need to be addressed:
 * AggregateRamFilter
 * AggregateCoreFilter
 * AggregateTypeAffinityFilter
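
A hedged sketch of such a helper; the function name and host_state
attributes are illustrative:

    def aggregate_values_from_key(host_state, key_name):
        """Collect the values a metadata key takes across all
        aggregates the host belongs to."""
        return set(aggr.metadata[key_name]
                   for aggr in getattr(host_state, 'aggregates', [])
                   if key_name in aggr.metadata)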

** Affects: nova
 Importance: Wishlist
 Assignee: sahid (sahid-ferdjaoui)
 Status: In Progress


** Tags: scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1301340

Title:
  Remove duplicate code with aggregate filters

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Some filters use the same logic to handle per-aggregate options. We
  should create a helper to remove this duplicated code and make it
  easier to implement new aggregate-based filters.

  Filters that need to be addressed:
   * AggregateRamFilter
   * AggregateCoreFilter
   * AggregateTypeAffinityFilter

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1301340/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1300775] [NEW] Scheduler, performance impact when dealing with aggregates

2014-04-01 Thread sahid
Public bug reported:

During scheduling, if we use a filter that needs data from aggregates, as 
CoreFilterAggregate and RamFilterAggregate do, the filter retrieves metadata 
from the database for every host, which can create a performance impact when 
there are many hosts.
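
A hedged sketch of one possible mitigation, querying the aggregate
metadata once per scheduling request instead of once per host; every
name below is illustrative:

    def filter_all(self, host_states, filter_properties):
        metadata_by_host = self._aggregate_metadata_get_all()  # one DB hit
        return [h for h in host_states
                if self._passes(h, metadata_by_host.get(h.host, {}))]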

** Affects: nova
 Importance: Medium
 Assignee: sahid (sahid-ferdjaoui)
 Status: New


** Tags: scheduler

** Changed in: nova
 Assignee: (unassigned) => sahid (sahid-ferdjaoui)

** Description changed:

  During a scheduling if we use filter that needs to get data from aggregates 
like CoreFilterAggregate, RamFilterAggregate does.
- The filter retrieves metadata from the database for every host and can create 
a performance impact if we have several hosts.
+ The filter retrieves metadata from the database for every host and can 
creates a performance impact if we have several hosts.

** Tags added: scheduler

** Changed in: nova
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1300775

Title:
  Scheduler, performance impact when dealing with aggregates

Status in OpenStack Compute (Nova):
  New

Bug description:
  During scheduling, if we use a filter that needs data from aggregates, as 
CoreFilterAggregate and RamFilterAggregate do, the filter retrieves metadata 
from the database for every host, which can create a performance impact when 
there are many hosts.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1300775/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1298975] [NEW] libvirt.finish_migration is too large and not tested

2014-03-28 Thread sahid
Public bug reported:

This method needs to be split into several small methods, and each
method has to be tested.

A possible solution could be (see the sketch below):
  * determine the disk size from instance properties
  * methods to convert a disk from qcow2 to raw and from raw to qcow2
  * a method to resize the disk
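
A hedged decomposition sketch; the signature is abbreviated and all
helper names are illustrative:

    def finish_migration(self, context, migration, instance, disk_info,
                         network_info, image_meta, resize_instance):
        size = self._disk_size_from_instance(instance, disk_info)
        path = self._disk_qcow2_to_raw(disk_info['path'])
        self._disk_resize(path, size)
        self._disk_raw_to_qcow2(path)
        # Each small helper above can now be unit tested in isolation.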

** Affects: nova
 Importance: Wishlist
 Assignee: sahid (sahid-ferdjaoui)
 Status: In Progress


** Tags: libvirt testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1298975

Title:
  libvirt.finish_migration is too large and not tested

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  This method needs to be spitted in several small methods then each
  methods has to be tested.

  A possible solution could be:
* determines the disk size from instance properties
* methods to convert disk from qcow2 to raw and raw to qcow2
* method to resize the disk

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1298975/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1298976] [NEW] Be sure converted image will be restored

2014-03-28 Thread sahid
Public bug reported:

In the libvirt driver, during the disk resize process, if an image is qcow2 
and partition-less, the process converts the image to raw.

After extending, we should restore the original format in all cases,
not only if 'use_cow_images' is configured to True.

** Affects: nova
 Importance: Low
 Assignee: sahid (sahid-ferdjaoui)
 Status: New


** Tags: libvirt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1298976

Title:
  Be sure converted image will be restored

Status in OpenStack Compute (Nova):
  New

Bug description:
  In the libvirt driver, during the disk resize process, if an image is 
qcow2 and partition-less, the process converts the image to raw.

  After extending, we should restore the original format in all cases,
  not only if 'use_cow_images' is configured to True.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1298976/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1298981] [NEW] Skip resizing disk if the parameter resize_instance is False

2014-03-28 Thread sahid
Public bug reported:

In the libvirt driver, the driver.finish_migration method is called with an
extra parameter, 'resize_instance'. It should be used to determine whether
it is necessary to resize the disks.
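
The guard this suggests is a one-liner (hedged; the signature is
abbreviated and the helper name is hypothetical):

    def finish_migration(self, context, migration, instance, disk_info,
                         network_info, image_meta, resize_instance):
        if resize_instance:
            self._resize_disks(instance, disk_info)  # hypothetical helper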

** Affects: nova
 Importance: Wishlist
 Assignee: sahid (sahid-ferdjaoui)
 Status: New


** Tags: libvirt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1298981

Title:
  Skip resizing disk if the parameter resize_instance is False

Status in OpenStack Compute (Nova):
  New

Bug description:
  In the libvirt driver, the driver.finish_migration method is called with an 
extra parameter, 'resize_instance'. It should be used to determine whether
  it is necessary to resize the disks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1298981/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1293794] Re: memcached_servers timeout causes poor API response time

2014-03-20 Thread sahid
Actually, after some investigation, it looks like we use version 1.48,
and this version accepts a 'socket_timeout' parameter in the client
constructor.

We can add an option to configure it.
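
For illustration, wiring that parameter through; the option name below
is hypothetical, and socket_timeout is in seconds:

    import memcache

    client = memcache.Client(
        CONF.memcached_servers,
        socket_timeout=CONF.memcache_socket_timeout)  # hypothetical option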

** Changed in: nova
   Importance: Undecided => Wishlist

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
 Assignee: (unassigned) => sahid (sahid-ferdjaoui)

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1293794

Title:
  memcached_servers timeout causes poor API response time

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  In nova.conf, when configured for HA by setting the memcached_servers
  parameter to several memcached servers in the nova API cluster, e.g.:

  memcached_servers=192.168.50.11:11211,192.168.50.12:11211,192.168.50.13:11211

  If there are memcached servers on this list that are down, the time it
  takes to complete Nova API requests increases from <1 second to 3-6
  seconds.

  It seems to me that Nova should protect itself from such performance
  degradation in an HA scenario.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1293794/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1293433] Re: tempest test.py new_url not defined

2014-03-17 Thread sahid
It looks like it is not related to nova.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1293433

Title:
  tempest test.py new_url not defined

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When I run some tempest tests, I get the following error. It seems that
  in /opt/stack/tempest/tempest/test.py the variable might not be defined.

  Traceback (most recent call last):
    File 
"/opt/stack/tempest/tempest/api/compute/flavors/test_flavors_negative.py", line 
52, in test_get_flavor_details
      self.execute(self._schema_file)
    File "/opt/stack/tempest/tempest/test.py", line 519, in execute
      resp, resp_body = client.send_request(method, new_url,
  UnboundLocalError: local variable 'new_url' referenced before assignment

   >> begin captured logging <<
  tempest.test: DEBUG: Open schema file: 
/opt/stack/tempest/etc/schemas/compute/flavors/flavors_list.json
  tempest.test: DEBUG: {u'url': u'flavors/detail', u'http-method': u'GET', 
u'name': u'list-flavors-with-detail', u'json-schema': {u'type': u'object', 
u'properties': {u'minRam': {u'type': u'integer', u'results': {u'gen_none': 400, 
u'gen_string': 400}}, u'minDisk': {u'type': u'integer', u'results': 
{u'gen_none': 400, u'gen_string': 400}
  tempest.test: DEBUG: Open schema file: 
/opt/stack/tempest/etc/schemas/compute/flavors/flavor_details.json
  tempest.test: INFO: Executing get-flavor-details
  tempest.test: DEBUG: {u'url': u'flavors/%s', u'http-method': u'GET', u'name': 
u'get-flavor-details', u'resources': [{u'expected_result': 404, u'name': 
u'flavor'}]}
  >> end captured logging <<

  --
  Ran 1 test in 0.142s

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1293433/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1293444] [NEW] filter: aggregate image props isolation needs a strict option

2014-03-17 Thread sahid
Public bug reported:

The filter AggregateImagePropertiesIsolation needs an option so that an
image without the key does not satisfy the request.

Strict isolation False:

           |  key=foo  |  key=xxx  |  empty
-----------+-----------+-----------+--------
  key=foo  |  True     |  False    |  True
  key=bar  |  False    |  False    |  True
  empty    |  True     |  True     |  True

Strict isolation True:

           |  key=foo  |  key=xxx  |  empty
-----------+-----------+-----------+--------
  key=foo  |  True     |  False    |  False
  key=bar  |  False    |  False    |  False
  empty    |  False    |  False    |  False
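
A hedged sketch of where the option would act in the filter; the option
name is hypothetical:

    if key not in image_props:
        # Strict: an image lacking the key never matches the host;
        # non-strict: a missing key is treated as a match.
        return not CONF.aggregate_image_properties_isolation_strict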

** Affects: nova
 Importance: Wishlist
 Assignee: sahid (sahid-ferdjaoui)
 Status: New

** Changed in: nova
   Importance: Undecided => Wishlist

** Changed in: nova
 Assignee: (unassigned) => sahid (sahid-ferdjaoui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1293444

Title:
  filter: aggregate image props isolation needs a strict option

Status in OpenStack Compute (Nova):
  New

Bug description:
  The filter AggregateImagePropertiesIsolation needs an option so that
  an image without the key does not satisfy the request.

  Strict isolation False:

             |  key=foo  |  key=xxx  |  empty
  -----------+-----------+-----------+--------
    key=foo  |  True     |  False    |  True
    key=bar  |  False    |  False    |  True
    empty    |  True     |  True     |  True
[Yahoo-eng-team] [Bug 1291161] Re: Need a property in glance metadata to indicate the vm id when create vm snapshot

2014-03-12 Thread sahid
This needs a blueprint to be accepted.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291161

Title:
  Need a property in glance metadata to indicate the vm id when create
  vm snapshot

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  In order to manage VM snapshots in glance conveniently, we need to know 
which images in glance are captured from VMs.
  So we need to add a new property to the glance metadata when creating the VM 
snapshot, for example: server_id = <vm uuid>. This new property will help to 
filter the images when using glance image-list.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291161/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1268891] Re: Possible livelock on sqlachemy.api.retry_on_deadlock

2014-02-22 Thread sahid
Following a discussion on the review, it looks like this change is not
relevant.

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1268891

Title:
  Possible livelock on sqlachemy.api.retry_on_deadlock

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  A random interval is necessary to prevent colliding
  transactions from continuously bumping into each other
  without making progress.

  https://git.openstack.org/cgit/openstack/nova/tree/nova/db/sqlalchemy/api.py
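
  For illustration, a retry decorator with random jitter (hedged: the
  exception import path and the sleep bounds are assumptions, and the
  review ultimately judged the change unnecessary):

      import functools
      import random
      import time

      from oslo.db import exception as db_exc  # assumed import path

      def retry_on_deadlock(f):
          @functools.wraps(f)
          def wrapped(*args, **kwargs):
              while True:
                  try:
                      return f(*args, **kwargs)
                  except db_exc.DBDeadlock:
                      # Random sleep so colliding transactions do not
                      # retry in lock-step.
                      time.sleep(random.uniform(0.01, 0.2))
          return wrapped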

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1268891/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257594] Re: Unshelving an instance uses original image not shelved image

2014-02-18 Thread sahid
The tempest test needs also to be fixed:

https://review.openstack.org/#/c/74406/

** Also affects: tempest
   Importance: Undecided
   Status: New

** Changed in: tempest
 Assignee: (unassigned) => sahid (sahid-ferdjaoui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1257594

Title:
  Unshelving an instance uses original image not shelved image

Status in OpenStack Compute (Nova):
  In Progress
Status in Tempest:
  In Progress

Bug description:
  When unshelving a shelved instance that has been offloaded to glance it 
doesn't actually use the image stored in glance.
  It actually uses the image that the instance was booted up with in the first 
place.

  This seems a bit crazy to me so it would be great if someone could
  replicate.

  Note: This is with stable/havana but looking at master I don't see
  anything that would mean that this actually works in master either

  Please tell me I'm wrong and I have some messed up setup..

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1257594/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260667] Re: Tox fails to build environment because of MySQL-Python version

2014-02-02 Thread sahid
** Changed in: nova
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260667

Title:
  Tox fails to build environment because of MySQL-Python version

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  While tox builds the environment, it tries to install the package
  MySQL-python version 1.2.4, but the build fails with this error:

  Traceback (most recent call last):
    File "<string>", line 16, in <module>
    File "/opt/stack/nova/.tox/py27/build/MySQL-python/setup.py", line 18, in 
<module>
      metadata, options = get_config()
    File "setup_posix.py", line 43, in get_config
      libs = mysql_config("libs_r")
    File "setup_posix.py", line 25, in mysql_config
      raise EnvironmentError("%s not found" % (mysql_config.path,))
  EnvironmentError: mysql_config not found

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260667/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1270088] [NEW] disk/api.py: resize2fs needs tests + better log

2014-01-17 Thread sahid
Public bug reported:

In disk/api.py the method resize2fs does not have a test.

Also, the method first uses e2fsck to check whether the file system is
consistent. If that program fails, no information is logged and the
current algorithm tries to do the resize anyway. Likewise, if resize2fs
fails, no information is logged.

We need to add tests for this function and log every error returned.
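
A hedged sketch of the logging we would want; the flags and accepted
exit codes are illustrative:

    def resize2fs(image):
        try:
            # -f forces a check, -p repairs what can be fixed safely.
            utils.execute('e2fsck', '-fp', image,
                          check_exit_code=[0, 1, 2])
        except processutils.ProcessExecutionError as exc:
            LOG.debug("Checking the file system with e2fsck failed, "
                      "aborting the resize: %s", exc)
            return
        try:
            utils.execute('resize2fs', image)
        except processutils.ProcessExecutionError as exc:
            LOG.debug("Resizing the file system with resize2fs "
                      "failed: %s", exc)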

** Affects: nova
 Importance: Undecided
 Assignee: sahid (sahid-ferdjaoui)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => sahid (sahid-ferdjaoui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1270088

Title:
  disk/api.py: resize2fs needs tests + better log

Status in OpenStack Compute (Nova):
  New

Bug description:
  In disk/api.py the method resize2fs does not have a test.

  Also, the method first uses e2fsck to check whether the file system is
  consistent. If that program fails, no information is logged and the
  current algorithm tries to do the resize anyway. Likewise, if
  resize2fs fails, no information is logged.

  We need to add tests for this function and log every error returned.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1270088/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1270238] [NEW] libvirt driver doesn't support disk re-size down

2014-01-17 Thread sahid
Public bug reported:

Currently, the libvirt driver doesn't support resizing a disk down.

During a resize down everything appears to run well and the instance is 
updated to the new flavor with the new disk size,
but in reality the disk is not resized and keeps its original size.

We need to add support for resizing down:
1. resize the file system
2. resize the image

For step one we have to be sure we work with only one partition and
that we don't erase data.

What should we do to support NTFS?
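
A hedged ordering sketch for the shrink path, assuming a single-partition
ext file system; the commands and size handling are illustrative:

    # 1. Shrink the file system first, inside the image.
    utils.execute('e2fsck', '-fp', image)
    utils.execute('resize2fs', image, new_size)
    # 2. Only then shrink the image itself.
    utils.execute('qemu-img', 'resize', image, new_size)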

** Affects: nova
 Importance: Undecided
 Assignee: sahid (sahid-ferdjaoui)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => sahid (sahid-ferdjaoui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1270238

Title:
  libvirt driver doesn't support disk re-size down

Status in OpenStack Compute (Nova):
  New

Bug description:
  Currently, the libvirt driver doesn't support resizing a disk down.

  During a resize down everything appears to run well and the instance is 
updated to the new flavor with the new disk size,
  but in reality the disk is not resized and keeps its original size.

  We need to add support for resizing down:
  1. resize the file system
  2. resize the image

  For step one we have to be sure we work with only one partition
  and that we don't erase data.

  What should we do to support NTFS?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1270238/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260667] [NEW] Tox fails to build environment because of MySQL-Python version

2013-12-13 Thread sahid
Public bug reported:

While tox builds the environment, it tries to install the package
MySQL-python version 1.2.4, but the build fails with this error:

Traceback (most recent call last):
  File "<string>", line 16, in <module>
  File "/opt/stack/nova/.tox/py27/build/MySQL-python/setup.py", line 18, in 
<module>
    metadata, options = get_config()
  File "setup_posix.py", line 43, in get_config
    libs = mysql_config("libs_r")
  File "setup_posix.py", line 25, in mysql_config
    raise EnvironmentError("%s not found" % (mysql_config.path,))
EnvironmentError: mysql_config not found

** Affects: nova
 Importance: Undecided
 Assignee: sahid (sahid-ferdjaoui)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => sahid (sahid-ferdjaoui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260667

Title:
  Tox fails to build environment because of MySQL-Python version

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  While tox builds the environment, it tries to install the package
  MySQL-python version 1.2.4, but the build fails with this error:

  Traceback (most recent call last):
    File "<string>", line 16, in <module>
    File "/opt/stack/nova/.tox/py27/build/MySQL-python/setup.py", line 18, in 
<module>
      metadata, options = get_config()
    File "setup_posix.py", line 43, in get_config
      libs = mysql_config("libs_r")
    File "setup_posix.py", line 25, in mysql_config
      raise EnvironmentError("%s not found" % (mysql_config.path,))
  EnvironmentError: mysql_config not found

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260667/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 872489] Re: utils.execute throws exception.ProcessExecutionError but it is not handled in many cases

2013-09-23 Thread sahid
** Changed in: openstack-qa
   Status: In Progress => Fix Released

** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/872489

Title:
  utils.execute throws exception.ProcessExecutionError but it is not
  handled in many cases

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack QA:
  Fix Released

Bug description:
  utils.execute throws exception.ProcessExecutionError, but it is not handled 
in many cases.
  These exceptions must be handled and intermediate state must be rolled back.

  Examples (see the sketch after the list below):
  In IptablesManager.apply(), utils.execute is used. When an 
exception.ProcessExecutionError is raised during the loop, the remaining 
procedures are not performed. 

  - initialize_gateway_device()
  - LinuxBridgeInterfaceDriver.ensure_bridge()
  - bind_floating_ip()
  - unbind_floating_ip()
  - ensure_metadata_ip()
  - release_dhcp()
  - update_dhcp()
  - update_ra()
  - LinuxBridgeInterfaceDriver.ensure_vlan()
  - LinuxOVSInterfaceDriver.plug()
  - _device_exists()
  - _stop_dnsmasq()
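
  A hedged sketch of the handling pattern (the command and the rollback
  helper are illustrative; excutils lived in nova.openstack.common at
  the time):

      from nova.openstack.common import excutils

      try:
          utils.execute('ip', 'link', 'set', dev, 'up', run_as_root=True)
      except exception.ProcessExecutionError:
          with excutils.save_and_reraise_exception():
              # Roll back any intermediate state before propagating.
              _rollback_device_setup(dev)  # hypothetical helper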

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/872489/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1180783] Re: Clean up code in Cisco Nexus plugin's _get_all_segmentation_ids

2013-09-21 Thread sahid
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1180783

Title:
  Clean up code in Cisco Nexus plugin's _get_all_segmentation_ids

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  The implementation of the _get_all_segmentation_ids method in the Cisco Nexus 
plugin's virt_phy_sw_v2 module can be cleaned up.  The six lines in the current 
implementation can be replaced with a single line.  This was brought up in the 
review for blueprint cisco-plugin-exception-handling.  The comment in that 
review explains:
    Unrelated to this change, but can we file a bug to clean this up? 
Replace 170-175 with:
       return ','.join(str(v_id) for v_id in cdb.get_ovs_vlans() if 
int(v_id) > 0)
  Also, in the __init__ method for the VirtualPhysicalSwitchModelV2 class in the 
same module, the '\n' can be removed from this line:
  LOG.debug(_("Loaded device plugin %s\n"),

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1180783/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp