[Yahoo-eng-team] [Bug 1475722] Re: Never use MagicMock

2016-09-21 Thread sonu
** Also affects: python-designateclient
   Importance: Undecided
   Status: New

** Changed in: python-designateclient
 Assignee: (unassigned) => sonu (sonu-bhumca11)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475722

Title:
  Never use MagicMock

Status in Aodh:
  New
Status in Barbican:
  In Progress
Status in Ceilometer:
  New
Status in Cinder:
  New
Status in Designate:
  New
Status in Glance:
  New
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Identity (keystone):
  In Progress
Status in keystonemiddleware:
  In Progress
Status in Mistral:
  In Progress
Status in Murano:
  In Progress
Status in neutron:
  New
Status in OpenStack Compute (nova):
  New
Status in Panko:
  New
Status in python-barbicanclient:
  In Progress
Status in python-ceilometerclient:
  New
Status in python-designateclient:
  New
Status in python-heatclient:
  In Progress
Status in python-mistralclient:
  In Progress
Status in python-muranoclient:
  In Progress
Status in python-neutronclient:
  In Progress
Status in python-novaclient:
  New
Status in python-openstackclient:
  Fix Released
Status in OpenStack SDK:
  Fix Committed
Status in python-senlinclient:
  New
Status in python-swiftclient:
  In Progress
Status in python-troveclient:
  In Progress
Status in Rally:
  New
Status in senlin:
  New
Status in OpenStack Object Storage (swift):
  New
Status in tacker:
  New
Status in tempest:
  New
Status in OpenStack DBaaS (Trove):
  New

Bug description:
  They magically allow things (including magic/dunder-method calls) to pass,
  which can mask real test failures. This is bad.

  Any usage should be replaced with the Mock class, with the required
  attributes set on it explicitly.
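
  For illustration (not part of the original report), a typical replacement
  could look like the sketch below; the client/get_port names are made up:

    import mock

    # Before: MagicMock silently satisfies almost anything, including
    # magic/dunder methods, so a broken assertion can still "pass".
    client = mock.MagicMock()

    # After: use Mock with an explicit spec and explicit attributes, so any
    # unexpected attribute access raises AttributeError and stays visible.
    client = mock.Mock(spec=['get_port'])
    client.get_port.return_value = {'id': 'fake-port-id'}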

To manage notifications about this bug go to:
https://bugs.launchpad.net/aodh/+bug/1475722/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475722] Re: Never use MagicMock

2016-09-21 Thread sonu
** Also affects: designate
   Importance: Undecided
   Status: New

** Changed in: designate
 Assignee: (unassigned) => sonu (sonu-bhumca11)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1475722

Title:
  Never use MagicMock

Status in Barbican:
  New
Status in Ceilometer:
  New
Status in Cinder:
  New
Status in Designate:
  New
Status in Glance:
  New
Status in heat:
  New
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Identity (keystone):
  New
Status in Mistral:
  In Progress
Status in Murano:
  In Progress
Status in Panko:
  New
Status in python-ceilometerclient:
  New
Status in python-heatclient:
  In Progress
Status in python-mistralclient:
  In Progress
Status in python-muranoclient:
  In Progress
Status in python-neutronclient:
  In Progress
Status in python-openstackclient:
  Fix Released
Status in OpenStack SDK:
  Fix Committed
Status in python-swiftclient:
  In Progress
Status in python-troveclient:
  In Progress
Status in Rally:
  New
Status in OpenStack Object Storage (swift):
  New
Status in OpenStack DBaaS (Trove):
  New

Bug description:
  They magically allow things (including magic/dunder-method calls) to pass,
  which can mask real test failures. This is bad.

  Any usage should be replaced with the Mock class, with the required
  attributes set on it explicitly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1475722/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1551065] [NEW] IPv6 LLADR is configured on router even if IPv6 is disabled on system

2016-02-28 Thread Sonu
Public bug reported:

Environment:

(a) On openstack master
(b) CVR (centralized router)
(c) L3 HA (L3 HA with min 2 nodes)

Problem Description:

Even if we disable IPv6 system-wide on the network controller in
/etc/sysctl.conf, the router namespace created by the L3 agent still has an
IPv6 link-local address (LLADR) configured on its interfaces.

This behavior is undesirable because environments that do not use IPv6 still
see Neighbor Discovery and Multicast Listener Report packets in their network.
With L3 HA it causes an additional problem: these ND and Multicast Listener
Report packets result in unnecessary MAC/port redirection, which may lead to
packet loss.

Possible Solution:

While creating the router namespace, check whether IPv6 is disabled
system-wide and set the ipv6 sysctl flag appropriately inside the namespace.
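
A rough sketch of the proposed check (illustrative only; the helper names and
the exact integration point in the L3 agent are assumptions):

  import subprocess

  def ipv6_disabled_on_host():
      # The host-wide setting written via /etc/sysctl.conf.
      with open('/proc/sys/net/ipv6/conf/default/disable_ipv6') as f:
          return f.read().strip() == '1'

  def disable_ipv6_in_namespace(ns_name):
      # Namespaces get their own sysctl tree, so the host setting does not
      # propagate; disable IPv6 inside the new router namespace explicitly.
      subprocess.check_call(['ip', 'netns', 'exec', ns_name, 'sysctl', '-w',
                             'net.ipv6.conf.all.disable_ipv6=1'])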

Severity : Low

** Affects: neutron
 Importance: Undecided
 Assignee: Sonu (sonu-sudhakaran)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551065

Title:
  IPv6 LLADR is configured on router even if IPv6 is disabled on system

Status in neutron:
  New

Bug description:
  Environment:

  (a) On openstack master
  (b) CVR (centralized router)
  (c) L3 HA (L3 HA with min 2 nodes)

  Problem Description:

  Even if we disable IPv6 system-wide on the network controller in
  /etc/sysctl.conf, the router namespace created by the L3 agent still has an
  IPv6 link-local address (LLADR) configured on its interfaces.

  This behavior is undesirable because environments that do not use IPv6 still
  see Neighbor Discovery and Multicast Listener Report packets in their
  network. With L3 HA it causes an additional problem: these ND and Multicast
  Listener Report packets result in unnecessary MAC/port redirection, which
  may lead to packet loss.

  Possible Solution:

  While creating the router namespace, check whether IPv6 is disabled
  system-wide and set the ipv6 sysctl flag appropriately inside the namespace.

  Severity : Low

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1551065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368661] Re: Unit tests sometimes fail because of stale pyc files

2015-12-10 Thread sonu
** Also affects: oslo.log
   Importance: Undecided
   Status: New

** Changed in: oslo.log
   Status: New => In Progress

** Changed in: oslo.log
 Assignee: (unassigned) => sonu (sonu-bhumca11)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368661

Title:
  Unit tests sometimes fail because of stale pyc files

Status in congress:
  Fix Released
Status in Gnocchi:
  Invalid
Status in Ironic:
  Fix Released
Status in Magnum:
  Fix Released
Status in Mistral:
  Fix Released
Status in Monasca:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in oslo.concurrency:
  In Progress
Status in oslo.log:
  In Progress
Status in oslo.service:
  Fix Committed
Status in python-cinderclient:
  Fix Released
Status in python-congressclient:
  In Progress
Status in python-cueclient:
  In Progress
Status in python-glanceclient:
  In Progress
Status in python-heatclient:
  Fix Committed
Status in python-keystoneclient:
  Fix Committed
Status in python-magnumclient:
  Fix Released
Status in python-mistralclient:
  Fix Committed
Status in python-neutronclient:
  In Progress
Status in Python client library for Sahara:
  Fix Committed
Status in python-solumclient:
  In Progress
Status in python-swiftclient:
  In Progress
Status in python-troveclient:
  Fix Committed
Status in Python client library for Zaqar:
  Fix Committed
Status in Solum:
  In Progress
Status in OpenStack Object Storage (swift):
  New
Status in Trove:
  Fix Released
Status in zaqar:
  In Progress

Bug description:
  Because Python creates .pyc files during tox runs, certain changes in the
  tree, such as deleting files or switching branches, can cause spurious
  errors. This can be suppressed by setting PYTHONDONTWRITEBYTECODE=1 in
  tox.ini.
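
  For reference (not part of the original report), the suppression is a
  one-line addition to the [testenv] section of tox.ini, for example:

    [testenv]
    setenv = PYTHONDONTWRITEBYTECODE=1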

To manage notifications about this bug go to:
https://bugs.launchpad.net/congress/+bug/1368661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368661] Re: Unit tests sometimes fail because of stale pyc files

2015-12-10 Thread sonu
** Also affects: oslo.cache
   Importance: Undecided
   Status: New

** Changed in: oslo.cache
   Status: New => In Progress

** Changed in: oslo.cache
 Assignee: (unassigned) => sonu (sonu-bhumca11)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368661

Title:
  Unit tests sometimes fail because of stale pyc files

Status in congress:
  Fix Released
Status in Gnocchi:
  Invalid
Status in Ironic:
  Fix Released
Status in Magnum:
  Fix Released
Status in Mistral:
  Fix Released
Status in Monasca:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in oslo.cache:
  In Progress
Status in oslo.concurrency:
  In Progress
Status in oslo.log:
  In Progress
Status in oslo.service:
  Fix Committed
Status in python-cinderclient:
  Fix Released
Status in python-congressclient:
  In Progress
Status in python-cueclient:
  In Progress
Status in python-glanceclient:
  In Progress
Status in python-heatclient:
  Fix Committed
Status in python-keystoneclient:
  Fix Committed
Status in python-magnumclient:
  Fix Released
Status in python-mistralclient:
  Fix Committed
Status in python-neutronclient:
  In Progress
Status in Python client library for Sahara:
  Fix Committed
Status in python-solumclient:
  In Progress
Status in python-swiftclient:
  In Progress
Status in python-troveclient:
  Fix Committed
Status in Python client library for Zaqar:
  Fix Committed
Status in Solum:
  In Progress
Status in OpenStack Object Storage (swift):
  New
Status in Trove:
  Fix Released
Status in zaqar:
  In Progress

Bug description:
  Because Python creates .pyc files during tox runs, certain changes in the
  tree, such as deleting files or switching branches, can cause spurious
  errors. This can be suppressed by setting PYTHONDONTWRITEBYTECODE=1 in
  tox.ini.

To manage notifications about this bug go to:
https://bugs.launchpad.net/congress/+bug/1368661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368661] Re: Unit tests sometimes fail because of stale pyc files

2015-12-10 Thread sonu
** Also affects: python-cueclient
   Importance: Undecided
   Status: New

** Changed in: python-cueclient
   Status: New => In Progress

** Changed in: python-cueclient
 Assignee: (unassigned) => sonu (sonu-bhumca11)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368661

Title:
  Unit tests sometimes fail because of stale pyc files

Status in congress:
  Fix Released
Status in Gnocchi:
  Invalid
Status in Ironic:
  Fix Released
Status in Magnum:
  Fix Released
Status in Mistral:
  Fix Released
Status in Monasca:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in oslo.service:
  Fix Committed
Status in python-cinderclient:
  Fix Released
Status in python-congressclient:
  In Progress
Status in python-cueclient:
  In Progress
Status in python-glanceclient:
  In Progress
Status in python-heatclient:
  Fix Committed
Status in python-keystoneclient:
  Fix Committed
Status in python-magnumclient:
  Fix Released
Status in python-mistralclient:
  Fix Committed
Status in python-neutronclient:
  In Progress
Status in Python client library for Sahara:
  Fix Committed
Status in python-solumclient:
  In Progress
Status in python-swiftclient:
  In Progress
Status in python-troveclient:
  Fix Committed
Status in Python client library for Zaqar:
  Fix Committed
Status in Solum:
  In Progress
Status in OpenStack Object Storage (swift):
  New
Status in Trove:
  Fix Released
Status in zaqar:
  In Progress

Bug description:
  Because Python creates .pyc files during tox runs, certain changes in the
  tree, such as deleting files or switching branches, can cause spurious
  errors. This can be suppressed by setting PYTHONDONTWRITEBYTECODE=1 in
  tox.ini.

To manage notifications about this bug go to:
https://bugs.launchpad.net/congress/+bug/1368661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368661] Re: Unit tests sometimes fail because of stale pyc files

2015-12-10 Thread sonu
** Also affects: oslo.concurrency
   Importance: Undecided
   Status: New

** Changed in: oslo.concurrency
   Status: New => In Progress

** Changed in: oslo.concurrency
 Assignee: (unassigned) => sonu (sonu-bhumca11)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368661

Title:
  Unit tests sometimes fail because of stale pyc files

Status in congress:
  Fix Released
Status in Gnocchi:
  Invalid
Status in Ironic:
  Fix Released
Status in Magnum:
  Fix Released
Status in Mistral:
  Fix Released
Status in Monasca:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in oslo.concurrency:
  In Progress
Status in oslo.service:
  Fix Committed
Status in python-cinderclient:
  Fix Released
Status in python-congressclient:
  In Progress
Status in python-cueclient:
  In Progress
Status in python-glanceclient:
  In Progress
Status in python-heatclient:
  Fix Committed
Status in python-keystoneclient:
  Fix Committed
Status in python-magnumclient:
  Fix Released
Status in python-mistralclient:
  Fix Committed
Status in python-neutronclient:
  In Progress
Status in Python client library for Sahara:
  Fix Committed
Status in python-solumclient:
  In Progress
Status in python-swiftclient:
  In Progress
Status in python-troveclient:
  Fix Committed
Status in Python client library for Zaqar:
  Fix Committed
Status in Solum:
  In Progress
Status in OpenStack Object Storage (swift):
  New
Status in Trove:
  Fix Released
Status in zaqar:
  In Progress

Bug description:
  Because Python creates .pyc files during tox runs, certain changes in the
  tree, such as deleting files or switching branches, can cause spurious
  errors. This can be suppressed by setting PYTHONDONTWRITEBYTECODE=1 in
  tox.ini.

To manage notifications about this bug go to:
https://bugs.launchpad.net/congress/+bug/1368661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368661] Re: Unit tests sometimes fail because of stale pyc files

2015-12-08 Thread sonu
** Also affects: oslo.service
   Importance: Undecided
   Status: New

** Changed in: oslo.service
 Assignee: (unassigned) => sonu (sonu-bhumca11)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368661

Title:
  Unit tests sometimes fail because of stale pyc files

Status in congress:
  Fix Released
Status in Ironic:
  Fix Released
Status in Magnum:
  Fix Released
Status in Mistral:
  Fix Released
Status in Monasca:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in oslo.service:
  New
Status in python-cinderclient:
  Fix Committed
Status in python-congressclient:
  In Progress
Status in python-glanceclient:
  In Progress
Status in python-heatclient:
  Fix Committed
Status in python-keystoneclient:
  Fix Committed
Status in python-magnumclient:
  Fix Released
Status in python-mistralclient:
  New
Status in python-neutronclient:
  In Progress
Status in Python client library for Sahara:
  Fix Committed
Status in python-swiftclient:
  In Progress
Status in python-troveclient:
  Fix Committed
Status in Python client library for Zaqar:
  Fix Committed
Status in Trove:
  Fix Released
Status in zaqar:
  In Progress

Bug description:
  Because Python creates .pyc files during tox runs, certain changes in the
  tree, such as deleting files or switching branches, can cause spurious
  errors. This can be suppressed by setting PYTHONDONTWRITEBYTECODE=1 in
  tox.ini.

To manage notifications about this bug go to:
https://bugs.launchpad.net/congress/+bug/1368661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368661] Re: Unit tests sometimes fail because of stale pyc files

2015-12-08 Thread sonu
** Changed in: oslo.service
   Status: New => In Progress

** Also affects: python-solumclient
   Importance: Undecided
   Status: New

** Changed in: python-solumclient
   Status: New => In Progress

** Changed in: python-solumclient
 Assignee: (unassigned) => sonu (sonu-bhumca11)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368661

Title:
  Unit tests sometimes fail because of stale pyc files

Status in Ceilometer:
  New
Status in congress:
  Fix Released
Status in Ironic:
  Fix Released
Status in Magnum:
  Fix Released
Status in Mistral:
  Fix Released
Status in Monasca:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in oslo.service:
  In Progress
Status in python-cinderclient:
  Fix Committed
Status in python-congressclient:
  In Progress
Status in python-glanceclient:
  In Progress
Status in python-heatclient:
  Fix Committed
Status in python-keystoneclient:
  Fix Committed
Status in python-magnumclient:
  Fix Released
Status in python-mistralclient:
  New
Status in python-neutronclient:
  In Progress
Status in Python client library for Sahara:
  Fix Committed
Status in python-solumclient:
  In Progress
Status in python-swiftclient:
  In Progress
Status in python-troveclient:
  Fix Committed
Status in Python client library for Zaqar:
  Fix Committed
Status in Trove:
  Fix Released
Status in zaqar:
  In Progress

Bug description:
  Because Python creates .pyc files during tox runs, certain changes in the
  tree, such as deleting files or switching branches, can cause spurious
  errors. This can be suppressed by setting PYTHONDONTWRITEBYTECODE=1 in
  tox.ini.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1368661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368661] Re: Unit tests sometimes fail because of stale pyc files

2015-12-08 Thread sonu
** Also affects: solum
   Importance: Undecided
   Status: New

** Changed in: solum
   Status: New => In Progress

** Changed in: solum
 Assignee: (unassigned) => sonu (sonu-bhumca11)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368661

Title:
  Unit tests sometimes fail because of stale pyc files

Status in Ceilometer:
  New
Status in congress:
  Fix Released
Status in Ironic:
  Fix Released
Status in Magnum:
  Fix Released
Status in Mistral:
  Fix Released
Status in Monasca:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in oslo.service:
  In Progress
Status in python-cinderclient:
  Fix Committed
Status in python-congressclient:
  In Progress
Status in python-glanceclient:
  In Progress
Status in python-heatclient:
  Fix Committed
Status in python-keystoneclient:
  Fix Committed
Status in python-magnumclient:
  Fix Released
Status in python-mistralclient:
  New
Status in python-neutronclient:
  In Progress
Status in Python client library for Sahara:
  Fix Committed
Status in python-solumclient:
  In Progress
Status in python-swiftclient:
  In Progress
Status in python-troveclient:
  Fix Committed
Status in Python client library for Zaqar:
  Fix Committed
Status in Solum:
  In Progress
Status in Trove:
  Fix Released
Status in zaqar:
  In Progress

Bug description:
  Because Python creates .pyc files during tox runs, certain changes in the
  tree, such as deleting files or switching branches, can cause spurious
  errors. This can be suppressed by setting PYTHONDONTWRITEBYTECODE=1 in
  tox.ini.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1368661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1465086] Re: tox doesn't work under proxy

2015-12-03 Thread sonu
** Also affects: designate
   Importance: Undecided
   Status: New

** Changed in: designate
 Assignee: (unassigned) => sonu (sonu-bhumca11)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1465086

Title:
  tox doesn't work under proxy

Status in Barbican:
  In Progress
Status in Ceilometer:
  Won't Fix
Status in cloudkitty:
  Won't Fix
Status in Designate:
  In Progress
Status in Gnocchi:
  Won't Fix
Status in Magnum:
  Fix Released
Status in neutron:
  In Progress
Status in python-magnumclient:
  Fix Released
Status in senlin:
  Fix Committed

Bug description:
  When a development environment is behind a proxy, tox fails like this,
  even if the proxy environment variables are set:

  $ tox -epep8
  pep8 create: /home/system/magnum/.tox/pep8
  pep8 installdeps: -r/home/system/magnum/requirements.txt, 
-r/home/system/magnum/test-requirements.txt
  ERROR: invocation failed (exit code 1), logfile: 
/home/system/magnum/.tox/pep8/log/pep8-1.log
  ERROR: actionid: pep8
  msg: getenv
  cmdargs: [local('/home/system/magnum/.tox/pep8/bin/pip'), 'install', '-U', 
'-r/home/system/magnum/requirements.txt', 
'-r/home/system/magnum/test-requirements.txt']
  env: {'PATH': 
'/home/system/magnum/.tox/pep8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games',
 'VIRTUAL_ENV': '/home/system/magnum/.tox/pep8', 'PYTHONHASHSEED': '2857363521'}

  Collecting Babel>=1.3 (from -r /home/system/magnum/requirements.txt (line 1))
Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=3, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=2, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=1, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=0, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=3, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=2, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=1, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=0, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Could not find a version that satisfies the requirement Babel>=1.3 (from -r 
/home/system/magnum/requirements.txt (line 1)) (from versions: )
  No matching distribution found for Babel>=1.3 (from -r 
/home/system/magnum/requirements.txt (line 1))

  ERROR: could not install deps [-r/home/system/magnum/requirements.txt, 
-r/home/system/magnum/test-requirements.txt]; v = 
InvocationError('/home/system/magnum/.tox/pep8/bin/pip install -U 
-r/home/system/magnum/requirements.txt 
-r/home/system/magnum/test-requirements.txt (see 
/home/system/magnum/.tox/pep8/log/pep8-1.log)', 1)
  ___ summary 

  ERROR:   pep8: could not install deps 
[-r/home/system/magnum/requirements.txt, 
-r/home/system/magnum/test-requirements.txt]; v = 
InvocationError('/home/system/magnum/.tox/pep8/bin/pip install -U 
-r/home/system/magnum/requirements.txt 
-r/home/system/magnum/test-requirements.txt (see 
/home/system/magnum/.tox/pep8/log/pep8-1.log)', 1)
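
  One common remedy (not part of the original report): tox filters the
  environment by default, so the proxy variables have to be passed through
  explicitly in tox.ini, for example:

    [testenv]
    passenv = http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY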

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1465086/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1465086] Re: tox doesn't work under proxy

2015-12-03 Thread sonu
** Also affects: python-designateclient
   Importance: Undecided
   Status: New

** Changed in: python-designateclient
 Assignee: (unassigned) => sonu (sonu-bhumca11)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1465086

Title:
  tox doesn't work under proxy

Status in Barbican:
  In Progress
Status in Ceilometer:
  Won't Fix
Status in cloudkitty:
  Won't Fix
Status in Designate:
  In Progress
Status in Gnocchi:
  Won't Fix
Status in Magnum:
  Fix Released
Status in neutron:
  In Progress
Status in python-designateclient:
  In Progress
Status in python-magnumclient:
  Fix Released
Status in senlin:
  Fix Committed

Bug description:
  When a development environment is behind a proxy, tox fails like this,
  even if the proxy environment variables are set:

  $ tox -epep8
  pep8 create: /home/system/magnum/.tox/pep8
  pep8 installdeps: -r/home/system/magnum/requirements.txt, 
-r/home/system/magnum/test-requirements.txt
  ERROR: invocation failed (exit code 1), logfile: 
/home/system/magnum/.tox/pep8/log/pep8-1.log
  ERROR: actionid: pep8
  msg: getenv
  cmdargs: [local('/home/system/magnum/.tox/pep8/bin/pip'), 'install', '-U', 
'-r/home/system/magnum/requirements.txt', 
'-r/home/system/magnum/test-requirements.txt']
  env: {'PATH': 
'/home/system/magnum/.tox/pep8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games',
 'VIRTUAL_ENV': '/home/system/magnum/.tox/pep8', 'PYTHONHASHSEED': '2857363521'}

  Collecting Babel>=1.3 (from -r /home/system/magnum/requirements.txt (line 1))
Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=3, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=2, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=1, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=0, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=3, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=2, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=1, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Retrying (Retry(total=0, connect=None, read=None, redirect=None)) after 
connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name 
or service not known'))': /simple/babel/
Could not find a version that satisfies the requirement Babel>=1.3 (from -r 
/home/system/magnum/requirements.txt (line 1)) (from versions: )
  No matching distribution found for Babel>=1.3 (from -r 
/home/system/magnum/requirements.txt (line 1))

  ERROR: could not install deps [-r/home/system/magnum/requirements.txt, 
-r/home/system/magnum/test-requirements.txt]; v = 
InvocationError('/home/system/magnum/.tox/pep8/bin/pip install -U 
-r/home/system/magnum/requirements.txt 
-r/home/system/magnum/test-requirements.txt (see 
/home/system/magnum/.tox/pep8/log/pep8-1.log)', 1)
  ___ summary 

  ERROR:   pep8: could not install deps 
[-r/home/system/magnum/requirements.txt, 
-r/home/system/magnum/test-requirements.txt]; v = 
InvocationError('/home/system/magnum/.tox/pep8/bin/pip install -U 
-r/home/system/magnum/requirements.txt 
-r/home/system/magnum/test-requirements.txt (see 
/home/system/magnum/.tox/pep8/log/pep8-1.log)', 1)

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1465086/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1220234] Re: *.pyc should be removed before run test in tox.ini

2015-11-24 Thread sonu
** Also affects: designate
   Importance: Undecided
   Status: New

** Changed in: designate
 Assignee: (unassigned) => sonu (sonu-bhumca11)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1220234

Title:
  *.pyc should be removed before run test in tox.ini

Status in Designate:
  New
Status in OpenStack Identity (keystone):
  Invalid
Status in neutron:
  Invalid

Bug description:
  reproduce:

  $ git rm neutron/scheduler/l3_agent_scheduler.py
  $ tox -epy27 test_agent_scheduler
  ...
  Ran 486 (+116) tests in 63.833s (-82.301s)
  PASSED (id=225)
  ...
py27: commands succeeded

  The *.pyc files are removed before running tests in run_tests.sh, so
  ./run_tests.sh will fail, but tox does not do the same, so it still
  succeeds.

  After adding the following line after tox.ini#L44:
  /usr/bin/find . -type f -name "*.pyc" -delete

  $ tox -epy27 test_agent_scheduler
  ...
  ImportError: No module named l3_agent_scheduler
  ...
  Ran 488 (+481) tests in 10.087s (+9.636s)
  FAILED (id=231, failures=245 (+238))
  ...
  ERROR:   py27: commands failed

  The absolute path of find may cause problems, but if I don't specify it,
  there is a warning:
  WARNING:test command found but not installed in testenv
cmd: /usr/bin/find
env: /home/zqfan/openstack/neutron/.tox/py27
  Maybe forgot to specify a dependency?

  but tox still behaves correctly despite the warning.
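
  For reference (not part of the original report), the warning can be avoided
  by declaring find as an allowed external command in tox.ini, keeping the
  project's existing test command after the find line, for example:

    [testenv]
    whitelist_externals = find
    commands =
      find . -type f -name "*.pyc" -delete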

To manage notifications about this bug go to:
https://bugs.launchpad.net/designate/+bug/1220234/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1511782] [NEW] securitygroup rule and member updates not applied correctly

2015-10-30 Thread Sonu
Public bug reported:

Summary:
When using enhanced RPC, the security group rules and members are updated
after the call to update the port filter. This affects firewall drivers that
have no need for a defer_apply-based implementation.

Description:

In SecurityGroupAgentRpc.refresh_firewall, if we use enhanced_rpc, the rules
and members are updated after the calls to update_port_filter(...). This
works fine for the iptables-based firewall driver, since it overrides the
'filter_defer_apply_on' and 'filter_defer_apply_off' methods to defer the
iptables commands.

Due to this, firewall drivers that do not override the
filter_defer_apply_on/off methods miss applying the new rules, since the rule
updates happen after the update_port_filter call into the driver.

Symptoms:
A rule update or a security group member update is not processed by the
firewall driver immediately.

Environment:
OpenStack master with the Hyper-V security groups driver and enhanced_rpc set
to True.
This applies to any firewall driver that chooses not to implement the
defer_apply* related methods.
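
A minimal sketch of the kind of driver affected (illustrative only; it mirrors
the filter_defer_apply_on/off hooks of neutron's FirewallDriver interface, and
_program_rules is a hypothetical placeholder for the driver's own logic):

  class ExampleFirewallDriver(object):
      """A driver with no native defer/commit mechanism of its own."""

      def __init__(self):
          self.ports = {}

      def update_port_filter(self, port):
          self.ports[port['device']] = port
          self._program_rules(port)

      def filter_defer_apply_on(self):
          # Nothing to defer for this driver today.
          pass

      def filter_defer_apply_off(self):
          # With enhanced_rpc the rule/member updates are stored only after
          # update_port_filter has run, so a driver like this must re-apply
          # its filters here or it misses the update described above.
          for port in self.ports.values():
              self._program_rules(port)

      def _program_rules(self, port):
          pass  # placeholder for the real rule-programming logic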

** Affects: neutron
 Importance: Undecided
 Assignee: Sonu (sonu-sudhakaran)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Sonu (sonu-sudhakaran)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1511782

Title:
  securitygroup rule and member updates not applied correctly

Status in neutron:
  New

Bug description:
  Summary:
  When using enhanced RPC, the security group rules and members are updated
  after the call to update the port filter. This affects firewall drivers
  that have no need for a defer_apply-based implementation.

  Description:

  In SecurityGroupAgentRpc.refresh_firewall, if we use enhanced_rpc, the rules
  and members are updated after the calls to update_port_filter(...). This
  works fine for the iptables-based firewall driver, since it overrides the
  'filter_defer_apply_on' and 'filter_defer_apply_off' methods to defer the
  iptables commands.

  Due to this, firewall drivers that do not override the
  filter_defer_apply_on/off methods miss applying the new rules, since the
  rule updates happen after the update_port_filter call into the driver.

  Symptoms:
  A rule update or a security group member update is not processed by the
  firewall driver immediately.

  Environment:
  OpenStack master with the Hyper-V security groups driver and enhanced_rpc
  set to True.
  This applies to any firewall driver that chooses not to implement the
  defer_apply* related methods.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1511782/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505571] [NEW] VM delete operation fails with 'Connection to neutron failed - Read timeout' error

2015-10-13 Thread Sonu
Public bug reported:

Problem description:
With a series of VM delete operations in OpenStack (4000 VMs) on KVM compute
nodes, VM instances go into the ERROR state.
The error shown in the Horizon UI is
"ConnectionFailed: Connection to neutron failed:
HTTPConnectionPool(host='192.168.0.1', port=9696): Read timed out. (read
timeout=30)"

This happens because neutron takes more than 30 seconds (actually around 80
seconds) to delete one port, and nova sets the instance into the ERROR state
because the default timeout for all neutron APIs is set to 30 seconds in nova.
This can be worked around by increasing the timeout to 120 in nova.conf, but
that cannot be recommended as the solution.

cat /etc/nova/nova.conf | grep url_timeout
url_timeout = 120
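
For completeness (not part of the original report, and assuming the option
lives in the [neutron] group of nova.conf), the workaround looks like this:

  [neutron]
  url_timeout = 120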

** Affects: neutron
 Importance: Undecided
 Assignee: Sonu (sonu-sudhakaran)
 Status: New


** Tags: delete read timeout

** Changed in: neutron
 Assignee: (unassigned) => Sonu (sonu-sudhakaran)

** Tags added: delete

** Tags added: read timeout

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505571

Title:
  VM delete operation fails with 'Connection to neutron failed - Read
  timeout' error

Status in neutron:
  New

Bug description:
  Problem description:
  With a series of VM delete operations in OpenStack (4000 VMs) on KVM compute
  nodes, VM instances go into the ERROR state.
  The error shown in the Horizon UI is
  "ConnectionFailed: Connection to neutron failed:
  HTTPConnectionPool(host='192.168.0.1', port=9696): Read timed out. (read
  timeout=30)"

  This happens because neutron takes more than 30 seconds (actually around 80
  seconds) to delete one port, and nova sets the instance into the ERROR state
  because the default timeout for all neutron APIs is set to 30 seconds in
  nova.
  This can be worked around by increasing the timeout to 120 in nova.conf, but
  that cannot be recommended as the solution.

  cat /etc/nova/nova.conf | grep url_timeout
  url_timeout = 120

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505571/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1259292] Re: Some tests use assertEqual(observed, expected) , the argument order is wrong

2015-09-28 Thread sonu
** Also affects: python-designateclient
   Importance: Undecided
   Status: New

** Changed in: python-designateclient
 Assignee: (unassigned) => sonu (sonu-bhumca11)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1259292

Title:
  Some tests use assertEqual(observed, expected) , the argument order is
  wrong

Status in Ceilometer:
  Invalid
Status in Cinder:
  Fix Released
Status in congress:
  New
Status in Designate:
  New
Status in Glance:
  Fix Committed
Status in heat:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in Keystone:
  In Progress
Status in Manila:
  In Progress
Status in Mistral:
  In Progress
Status in murano:
  Confirmed
Status in OpenStack Compute (nova):
  In Progress
Status in python-ceilometerclient:
  Invalid
Status in python-cinderclient:
  Fix Released
Status in python-designateclient:
  New
Status in python-mistralclient:
  New
Status in Python client library for Zaqar:
  In Progress
Status in Sahara:
  Fix Released
Status in zaqar:
  In Progress

Bug description:
  The test cases will produce a confusing error message if the tests
  ever fail, so this is worth fixing.
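
  A short illustration of the pattern being fixed (a standalone toy test; the
  values are made up):

    import unittest

    class ExampleTest(unittest.TestCase):
        def test_argument_order(self):
            observed = 'ERROR'  # pretend this came from the code under test
            # Wrong: assertEqual(observed, expected) -- on failure the report
            # claims 'ERROR' was the expected value, which is confusing.
            # Right: expected value first, observed value second.
            self.assertEqual('ACTIVE', observed)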

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1259292/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1259292] Re: Some tests use assertEqual(observed, expected) , the argument order is wrong

2015-09-28 Thread sonu
** Also affects: designate
   Importance: Undecided
   Status: New

** Changed in: designate
 Assignee: (unassigned) => sonu (sonu-bhumca11)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1259292

Title:
  Some tests use assertEqual(observed, expected) , the argument order is
  wrong

Status in Ceilometer:
  Invalid
Status in Cinder:
  Fix Released
Status in Designate:
  New
Status in Glance:
  Fix Committed
Status in heat:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in Keystone:
  In Progress
Status in Manila:
  In Progress
Status in Mistral:
  In Progress
Status in murano:
  Confirmed
Status in OpenStack Compute (nova):
  In Progress
Status in python-ceilometerclient:
  Invalid
Status in python-cinderclient:
  Fix Released
Status in python-mistralclient:
  New
Status in Python client library for Zaqar:
  In Progress
Status in Sahara:
  Fix Released
Status in zaqar:
  In Progress

Bug description:
  The test cases will produce a confusing error message if the tests
  ever fail, so this is worth fixing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1259292/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1473291] [NEW] nova compute on hyperv don't wait for vif plugged event from neutron

2015-07-10 Thread Sonu
Public bug reported:

1. Exact version of Nova/OpenStack you are running: 
Juno/stable

2. Reproduce steps:

* Launch a VM on a Hyper-V cloud.
* Note the boot time of the Virtual Machine in nova-compute.log.
* Note the port up status time in neutron DB for the port of the VM.

The boot time of the Virtual machine is earlier than the port UP status,
which should not be the case.

Expected result:
* The boot time of the Virtual Machine should be later than port UP status.
* All port rules are applied before the VM is booted and presented to the user.

Actual result:
* The VM boots before the port rules are applied by neutron, which results in
the VM not getting an IP, missing rules needed to communicate, etc.

4. Description
When the port binding is complete, neutron uses its notifier to send the port
UP event to Nova so that Nova can power on or resume the VM instance. On the
Hyper-V compute driver, Nova does not wait for the port UP event from neutron
and boots the instance immediately once disk preparation is done.
This causes VM instances to boot without the proper security rules and hence
results in VMs not getting an IP or the connectivity desired by the user. To
prevent this, nova-compute should wait for the vif-plugged event from neutron
and only then power on the instance.
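
A rough sketch of the intended flow (illustrative only; it is modelled on the
virtapi wait_for_instance_event helper other virt drivers use, and the
_create_instance_disks/_plug_vifs/_power_on helper names are assumptions):

  def _spawn_with_vif_plug_wait(self, context, instance, network_info):
      events = [('network-vif-plugged', vif['id']) for vif in network_info]
      # Register the expected neutron events, prepare the VM inside the
      # context manager, and only power on once the port UP (vif-plugged)
      # notifications have arrived, so the security rules are in place.
      with self.virtapi.wait_for_instance_event(instance, events,
                                                deadline=300):
          self._create_instance_disks(instance)
          self._plug_vifs(instance, network_info)
      self._power_on(instance)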

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1473291

Title:
  nova compute on hyperv don't wait for vif plugged event from neutron

Status in OpenStack Compute (Nova):
  New

Bug description:
  1. Exact version of Nova/OpenStack you are running: 
  Juno/stable

  2. Reproduce steps:

  * Launch a VM on a Hyper-V cloud.
  * Note the boot time of the Virtual Machine in nova-compute.log.
  * Note the port up status time in neutron DB for the port of the VM.

  The boot time of the Virtual machine is earlier than the port UP
  status, which should not be the case.

  Expected result:
  * The boot time of the Virtual Machine should be later than port UP status.
  * All port rules are applied before the VM is booted and presented to the 
user.

  Actual result:
  * The VM boots before the port rules are applied by neutron, which results
  in the VM not getting an IP, missing rules needed to communicate, etc.

  4. Description
  When the port binding is complete, neutron uses its notifier to send the
  port UP event to Nova so that Nova can power on or resume the VM instance.
  On the Hyper-V compute driver, Nova does not wait for the port UP event from
  neutron and boots the instance immediately once disk preparation is done.
  This causes VM instances to boot without the proper security rules and hence
  results in VMs not getting an IP or the connectivity desired by the user. To
  prevent this, nova-compute should wait for the vif-plugged event from
  neutron and only then power on the instance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1473291/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458770] [NEW] OvsdbMonitor module should respawn monitor only on specific failures

2015-05-26 Thread Sonu
Public bug reported:

As of today, the OvsdbMonitor used by the neutron-openvswitch-agent re-spawns
the monitor whenever any output appears on the stderr of the child monitor
process, irrespective of its severity or relevance.
It is not ideal to restart the child monitor process when the messages are not
fatal, for example warnings from a plugin driver that echoes warnings to
stderr.

If such messages appear periodically, there are two side effects:
a) Too-frequent monitor restarts could result in db.sock contention.
b) Frequent restarts would result in the neutron agent missing interface
additions.

Ideally, only fatal errors that would affect the monitoring or change
detection on bridges, ports and interfaces should result in a restart of the
monitor, unlike the behaviour today.
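
A small sketch of the proposed filtering (illustrative; the severity keywords
and the respawn hook are assumptions):

  FATAL_KEYWORDS = ('ERR', 'EMER', 'FATAL')  # assumed severity markers

  def should_respawn(stderr_line):
      # Respawn the child monitor only on genuinely fatal output; ignore
      # warnings and other chatter echoed to stderr by plugin drivers.
      return any(keyword in stderr_line.upper() for keyword in FATAL_KEYWORDS)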

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1458770

Title:
  OvsdbMonitor module should respawn monitor only on specific failures

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  As of today, the OvsdbMonitor used by the neutron-openvswitch-agent
  re-spawns the monitor whenever any output appears on the stderr of the
  child monitor process, irrespective of its severity or relevance.
  It is not ideal to restart the child monitor process when the messages are
  not fatal, for example warnings from a plugin driver that echoes warnings
  to stderr.

  If such messages appear periodically, there are two side effects:
  a) Too-frequent monitor restarts could result in db.sock contention.
  b) Frequent restarts would result in the neutron agent missing interface
  additions.

  Ideally, only fatal errors that would affect the monitoring or change
  detection on bridges, ports and interfaces should result in a restart of
  the monitor, unlike the behaviour today.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1458770/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444966] [NEW] Clean up operation not performed properly by nova .

2015-04-16 Thread sonu
Public bug reported:


While I associate a floating IP with an instance and then remove the fixed IP
associated with that instance, the fixed IP as well as the floating IP are
removed, but when I list the floating IPs for the current tenant the floating
IP list still contains the old data. The floating IP list still shows the
attachment of the fixed IP to the instance from which it was removed; that is,
the required clean-up operations were not performed properly.

I reproduced this bug using the following steps:-

vyom@vyom:~$ nova list
+--------------------------------------+-----------------+--------+------------+-------------+--------------------------------+
| ID                                   | Name            | Status | Task State | Power State | Networks                       |
+--------------------------------------+-----------------+--------+------------+-------------+--------------------------------+
| 7d4a97df-f0e0-4a15-bd1f-1e0d3e76c434 | instance3       | ACTIVE | -          | Running     | private=10.0.0.18              |
| 973cc64d-ee93-4419-8e1f-9b5f413eed6f | test_instance22 | ACTIVE | -          | Running     | private=10.0.0.201, 172.24.4.5 |
+--------------------------------------+-----------------+--------+------------+-------------+--------------------------------+

vyom@vyom:~$ nova floating-ip-list
+-------------+--------------------------------------+------------+--------+
| Ip          | Server Id                            | Fixed Ip   | Pool   |
+-------------+--------------------------------------+------------+--------+
| 172.24.4.10 | -                                    | -          | public |
| 172.24.4.16 | -                                    | -          | public |
| 172.24.4.15 | -                                    | -          | public |
| 172.24.4.11 | -                                    | -          | public |
| 172.24.4.7  | -                                    | -          | public |
| 172.24.4.12 | -                                    | -          | public |
| 172.24.4.5  | 973cc64d-ee93-4419-8e1f-9b5f413eed6f | 10.0.0.201 | public |
| 172.24.4.13 | -                                    | -          | public |
| 172.24.4.9  | -                                    | -          | public |
| 172.24.4.14 | -                                    | -          | public |
+-------------+--------------------------------------+------------+--------+

* Associating floating IP 172.24.4.10 to instance3:
vyom@vyom:~$ nova floating-ip-associate 7d4a97df-f0e0-4a15-bd1f-1e0d3e76c434 172.24.4.10

vyom@vyom:~$ nova list
+--------------------------------------+-----------------+--------+------------+-------------+--------------------------------+
| ID                                   | Name            | Status | Task State | Power State | Networks                       |
+--------------------------------------+-----------------+--------+------------+-------------+--------------------------------+
| 7d4a97df-f0e0-4a15-bd1f-1e0d3e76c434 | instance3       | ACTIVE | -          | Running     | private=10.0.0.18, 172.24.4.10 |
| 973cc64d-ee93-4419-8e1f-9b5f413eed6f | test_instance22 | ACTIVE | -          | Running     | private=10.0.0.201, 172.24.4.5 |
+--------------------------------------+-----------------+--------+------------+-------------+--------------------------------+

vyom@vyom:~$ nova floating-ip-list
+-------------+--------------------------------------+------------+--------+
| Ip          | Server Id                            | Fixed Ip   | Pool   |
+-------------+--------------------------------------+------------+--------+
| 172.24.4.10 | 7d4a97df-f0e0-4a15-bd1f-1e0d3e76c434 | 10.0.0.18  | public |
| 172.24.4.16 | -                                    | -          | public |
| 172.24.4.15 | -                                    | -          | public |
| 172.24.4.11 | -                                    | -          | public |
| 172.24.4.7  | -                                    | -          | public |
| 172.24.4.12 | -                                    | -          | public |
| 172.24.4.5  | 973cc64d-ee93-4419-8e1f-9b5f413eed6f | 10.0.0.201 | public |
| 172.24.4.13 | -                                    | -          | public |
| 172.24.4.9  | -                                    | -          | public |
| 172.24.4.14 | -                                    | -          | public |
+-------------+--------------------------------------+------------+--------+

* Removing fixed IP from instance instance3:

vyom@vyom:~$ nova remove-fixed-ip 7d4a97df-f0e0-4a15-bd1f-1e0d3e76c434 10.0.0.18


vyom@vyom:~$ nova list
+--------------------------------------+-----------------+--------+------------+-------------+--------------------------------+
| ID                                   | Name            | Status | Task State | Power State | Networks                       |