[Yahoo-eng-team] [Bug 1699060] Re: Impossible to define policy rule based on domain ID

2017-06-20 Thread Valeriy Ponomaryov
** Also affects: aodh
   Importance: Undecided
   Status: New

** Also affects: panko
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1699060

Title:
  Impossible to define policy rule based on domain ID

Status in Aodh:
  New
Status in Ceilometer:
  New
Status in Cinder:
  New
Status in Glance:
  New
Status in heat:
  New
Status in Manila:
  New
Status in Murano:
  New
Status in neutron:
  New
Status in OpenStack Compute (nova):
  New
Status in Panko:
  New
Status in watcher:
  New

Bug description:
  We have a common approach to setting rules for each API using the
  policy.json file. At the moment it is not possible to use "domain_id"
  in policy rules, only "project_id" and "user_id". This is becoming very
  important because Keystone API v3 is used more and more.
  The only service that supports rules with "domain_id" is Keystone itself.

  As a result, we should be able to use the following rules:
  "admin_or_domain_owner": "is_admin:True or domain_id:%(domain_id)s",
  "domain_owner": "domain_id:%(domain_id)s",

  like this:

  "volume:get": "rule:domain_owner",

  or

  "volume:get": "rule:admin_or_domain_owner",

  Right now, we always get a 403 error with such rules.
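
  A minimal sketch of how such a rule evaluates in oslo.policy, and of why
  services currently answer 403: the credentials are compared against the
  target dict, and most services never put "domain_id" into the target
  (the rule name and values below are illustrative, not any service's
  actual policy):

  from oslo_config import cfg
  from oslo_policy import policy

  enforcer = policy.Enforcer(cfg.CONF)
  enforcer.register_default(policy.RuleDefault(
      'volume:get', 'domain_id:%(domain_id)s'))

  creds = {'user_id': 'u1', 'project_id': 'p1', 'domain_id': 'd1'}

  # Target built the way most services build it -- no domain_id, so the
  # check cannot match and enforce() returns False (the API answers 403):
  print(enforcer.enforce('volume:get', {'project_id': 'p1'}, creds))

  # Target that carries the owning domain -- the rule works as intended:
  print(enforcer.enforce('volume:get', {'domain_id': 'd1'}, creds))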

To manage notifications about this bug go to:
https://bugs.launchpad.net/aodh/+bug/1699060/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1699060] Re: Impossible to define policy rule based on domain ID

2017-06-20 Thread Valeriy Ponomaryov
** Also affects: ceilometer
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1699060

Title:
  Impossible to define policy rule based on domain ID

Status in Aodh:
  New
Status in Ceilometer:
  New
Status in Cinder:
  New
Status in Glance:
  New
Status in heat:
  New
Status in Manila:
  New
Status in Murano:
  New
Status in neutron:
  New
Status in OpenStack Compute (nova):
  New
Status in Panko:
  New
Status in watcher:
  New

Bug description:
  We have a common approach to setting rules for each API using the
  policy.json file. At the moment it is not possible to use "domain_id"
  in policy rules, only "project_id" and "user_id". This is becoming very
  important because Keystone API v3 is used more and more.
  The only service that supports rules with "domain_id" is Keystone itself.

  As a result, we should be able to use the following rules:
  "admin_or_domain_owner": "is_admin:True or domain_id:%(domain_id)s",
  "domain_owner": "domain_id:%(domain_id)s",

  like this:

  "volume:get": "rule:domain_owner",

  or

  "volume:get": "rule:admin_or_domain_owner",

  Right now, we always get a 403 error with such rules.

To manage notifications about this bug go to:
https://bugs.launchpad.net/aodh/+bug/1699060/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1699060] Re: Impossible to define policy rule based on domain ID

2017-06-20 Thread Valeriy Ponomaryov
** Also affects: watcher
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1699060

Title:
  Impossible to define policy rule based on domain ID

Status in Cinder:
  New
Status in Glance:
  New
Status in heat:
  New
Status in Manila:
  New
Status in Murano:
  New
Status in neutron:
  New
Status in OpenStack Compute (nova):
  New
Status in watcher:
  New

Bug description:
  We have a common approach to setting rules for each API using the
  policy.json file. At the moment it is not possible to use "domain_id"
  in policy rules, only "project_id" and "user_id". This is becoming very
  important because Keystone API v3 is used more and more.
  The only service that supports rules with "domain_id" is Keystone itself.

  As a result, we should be able to use the following rules:
  "admin_or_domain_owner": "is_admin:True or domain_id:%(domain_id)s",
  "domain_owner": "domain_id:%(domain_id)s",

  like this:

  "volume:get": "rule:domain_owner",

  or

  "volume:get": "rule:admin_or_domain_owner",

  Right now, we always get a 403 error with such rules.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1699060/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1699060] [NEW] Impossible to define policy rule based on domain ID

2017-06-20 Thread Valeriy Ponomaryov
Public bug reported:

We have a common approach to setting rules for each API using the
policy.json file. At the moment it is not possible to use "domain_id"
in policy rules, only "project_id" and "user_id". This is becoming very
important because Keystone API v3 is used more and more.
The only service that supports rules with "domain_id" is Keystone itself.

As a result, we should be able to use the following rules:
"admin_or_domain_owner": "is_admin:True or domain_id:%(domain_id)s",
"domain_owner": "domain_id:%(domain_id)s",

like this:

"volume:get": "rule:domain_owner",

or

"volume:get": "rule:admin_or_domain_owner",

Right now, we always get a 403 error with such rules.

** Affects: cinder
 Importance: Undecided
 Status: New

** Affects: glance
 Importance: Undecided
 Status: New

** Affects: heat
 Importance: Undecided
 Status: New

** Affects: manila
 Importance: Undecided
 Status: New

** Affects: murano
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: policy

** Also affects: manila
   Importance: Undecided
   Status: New

** Description changed:

  We have common approach to set rules for each API using policy.json file.
  And for the moment, it is not possible to use "domain_id" in policy rules,
  only "project_id" and "user_id". It becomes very important because Keystone API v3
  is used more and more.
  The only service that supports rules with "domain_id" is Keystone itself.
+ 
+ As a result we should be able to use following rules:
+ "admin_or_domain_owner": "is_admin:True or domain_id:%(domain_id)s",
+ "domain_owner": "domain_id:%(domain_id)s",
+ 
+ like this:
+ 
+ "volume:get": "rule:domain_owner",
+ 
+ or
+ 
+ "volume:get": "rule:admin_or_domain_owner",

** Tags added: policy

** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

** Also affects: glance
   Importance: Undecided
   Status: New

** Also affects: murano
   Importance: Undecided
   Status: New

** Also affects: heat
   Importance: Undecided
   Status: New

** Description changed:

  We have common approach to set rules for each API using policy.json file.
  And for the moment, it is not possible to use "domain_id" in policy rules,
  only "project_id" and "user_id". It becomes very important because Keystone API v3
  is used more and more.
  The only service that supports rules with "domain_id" is Keystone itself.
  
  As a result we should be able to use following rules:
  "admin_or_domain_owner": "is_admin:True or domain_id:%(domain_id)s",
  "domain_owner": "domain_id:%(domain_id)s",
  
  like this:
  
  "volume:get": "rule:domain_owner",
  
  or
  
  "volume:get": "rule:admin_or_domain_owner",
+ 
+ Right now, we always get 403 error having such rules.

** Description changed:

  We have common approach to set rules for each API using policy.json file.
  And for the moment, it is not possible to use "domain_id" in policy rules,
- only "project_id" and "user_id". It becomes very important because Keystone API v3
- is used more and more.
+ only "project_id" and "user_id". It becomes very important because Keystone API v3 is used more and more.
  The only service that supports rules with "domain_id" is Keystone itself.
  
  As a result we should be able to use following rules:
  "admin_or_domain_owner": "is_admin:True or domain_id:%(domain_id)s",
  "domain_owner": "domain_id:%(domain_id)s",
  
  like this:
  
  "volume:get": "rule:domain_owner",
  
  or
  
  "volume:get": "rule:admin_or_domain_owner",
  
  Right now, we always get 403 error having such rules.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1699060

Title:
  Impossible to define policy rule based on domain ID

Status in Cinder:
  New
Status in Glance:
  New
Status in heat:
  New
Status in Manila:
  New
Status in Murano:
  New
Status in neutron:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  We have a common approach to setting rules for each API using the
  policy.json file. At the moment it is not possible to use "domain_id"
  in policy rules, only "project_id" and "user_id". This is becoming very
  important because Keystone API v3 is used more and more.
  The only service that supports rules with "domain_id" is Keystone itself.

  As a result, we should be able to use the following rules:
  "admin_or_domain_owner": "is_admin:True or domain_id:%(domain_id)s",
  "domain_owner": "domain_id:%(domain_id)s",

  like this:

  "volume:get": "rule:domain_owner",

  or

  "volume:get": "rule:admin_or_domain_owner",

  Right now, we always get a 403 error with such rules.

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 1659391] Re: Server list API does not show existing servers if cell service disabled and default cell not configured

2017-02-02 Thread Valeriy Ponomaryov
Updated description. Bug is valid.

** Changed in: nova
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1659391

Title:
  Server list API does not show existing servers if cell service
  disabled and default cell not configured

Status in OpenStack Compute (nova):
  New

Bug description:
  After the merge of commit [1], the command "nova list --all-" started
  returning an empty list even when servers exist. Reverting this change
  makes the API work again.
  This can happen when we disable cell services and do not configure a
  default one. But the "list" operation should always show all scheduled
  servers.

  Steps to reproduce:
  1) install the latest nova that contains commit [1], without configuring
  a cell service and without creating a default cell.
  2) create a VM
  3) run any of the following commands:
  $ nova list --all-
  $ openstack server list --all
  $ openstack server show %name-of-server%
  $ nova show %name-of-server%

  Expected: we see the data of the server created in step 2.
  Actual: an empty list from the "list" command or a "NotFound" error from
  the "show" command.

  [1] https://review.openstack.org/#/c/396775/
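
  A hedged reproduction of step 3 through python-novaclient (the
  credentials below are placeholders; the all_tenants search option
  requests the tenant-spanning listing):

  from keystoneauth1 import loading, session
  from novaclient import client

  auth = loading.get_plugin_loader('password').load_from_options(
      auth_url='http://controller:5000/v3',            # placeholder
      username='admin', password='secret',             # placeholders
      project_name='admin', user_domain_id='default',
      project_domain_id='default')
  nova = client.Client('2', session=session.Session(auth=auth))

  # Expected: the VM created in step 2; actual (with the bug): []
  print(nova.servers.list(search_opts={'all_tenants': 1}))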

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1659391/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1659391] [NEW] Server list API does not show existing servers

2017-01-25 Thread Valeriy Ponomaryov
Public bug reported:

After the merge of commit [1], the command "nova list --all-" started
returning an empty list even when servers exist. Reverting this change
makes the API work again.

Steps to reproduce:
1) install the latest nova that contains commit [1]
2) create a VM
3) run any of the following commands:
$ nova list --all-
$ openstack server list --all
$ openstack server show %name-of-server%
$ nova show %name-of-server%

Expected: we see the data of the server created in step 2.
Actual: an empty list from the "list" command or a "NotFound" error from the "show" command.

[1] https://review.openstack.org/#/c/396775/

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

  After merge of commit [1] command "nova list --all-" started returning
  empty list when servers exist. Revert of this change makes API work
  again.
  
+ Steps to reproduce:
+ 1) install latest nova that contains commit [1]
+ 2) create VM
+ 3) run any of following commands:
+ $ nova list --all-
+ $ openstack server list --all
+ $ openstack server show %name-of-server%
+ $ nova show %name-of-server%
+ 
  [1] https://review.openstack.org/#/c/396775/

** Description changed:

  After merge of commit [1] command "nova list --all-" started returning
  empty list when servers exist. Revert of this change makes API work
  again.
  
  Steps to reproduce:
  1) install latest nova that contains commit [1]
  2) create VM
  3) run any of following commands:
  $ nova list --all-
  $ openstack server list --all
  $ openstack server show %name-of-server%
  $ nova show %name-of-server%
  
+ Expected: we see data of server we created on second step.
+ Actual: empty list on "list" command or "NotFound" error on "show" command.
+ 
  [1] https://review.openstack.org/#/c/396775/

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1659391

Title:
  Server list API does not show existing servers

Status in OpenStack Compute (nova):
  New

Bug description:
  After the merge of commit [1], the command "nova list --all-" started
  returning an empty list even when servers exist. Reverting this change
  makes the API work again.

  Steps to reproduce:
  1) install the latest nova that contains commit [1]
  2) create a VM
  3) run any of the following commands:
  $ nova list --all-
  $ openstack server list --all
  $ openstack server show %name-of-server%
  $ nova show %name-of-server%

  Expected: we see the data of the server created in step 2.
  Actual: an empty list from the "list" command or a "NotFound" error from
  the "show" command.

  [1] https://review.openstack.org/#/c/396775/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1659391/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1633535] Re: Cinder fails to attach second volume to Nova VM

2016-12-02 Thread Valeriy Ponomaryov
** Changed in: manila
   Status: Confirmed => Invalid

** Changed in: manila
 Assignee: Valeriy Ponomaryov (vponomaryov) => (unassigned)

** Changed in: manila
Milestone: ocata-1 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1633535

Title:
  Cinder fails to attach second volume to Nova VM

Status in Cinder:
  In Progress
Status in ec2-api:
  Fix Released
Status in Manila:
  Invalid
Status in OpenStack Compute (nova):
  New
Status in tempest:
  Fix Released

Bug description:
  Cinder fails to attach a second volume to a Nova VM. This second volume
  gets "in-use" status, but does not have any attachments. Also, such a
  volume cannot be detached from the VM [4]. The test gerrit change [2]
  proves that the commit to Cinder [3] is the cause of the bug.
  Also, the bug was reproduced even before the merge of [3] by the
  "gate-rally-dsvm-cinder" CI job [4], but, I assume, no one paid
  attention to it.

  Local testing shows that if the bug appears, the volume never gets
  attached and the list of attachments stays empty. Waiting between the
  'create' (until 'available' status) and 'attach' commands does not
  help at all.

  How to reproduce:
  1) Create a VM
  2) Create a volume
  3) Attach the volume (2) to the VM (1)
  4) Create a second volume
  5) Try to attach the second volume (4) to the VM (1) - it will fail.
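
  A hedged sketch of steps 3 and 5 through python-novaclient (the session
  setup is as in the sketch for bug 1659391 above; the IDs are
  placeholders):

  from novaclient import client

  nova = client.Client('2', session=sess)  # sess: a keystoneauth1 session

  server_id = 'SERVER-UUID'       # VM from step 1 (placeholder)
  nova.volumes.create_server_volume(server_id, 'VOLUME-1-UUID')  # step 3: ok
  # Step 5: with this bug, the volume goes "in-use" but the attachment
  # list stays empty and the volume cannot be detached afterwards.
  nova.volumes.create_server_volume(server_id, 'VOLUME-2-UUID')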

  [Tempest] Also, the fact that the Cinder gates passed with [3] means
  that tempest has no test that attaches more than one volume to a single
  Nova VM. That is a tempest bug too, and it should be addressed.

  [Manila] In the scope of the Manila project, one of its drivers is
  broken - the Generic driver, which uses Cinder as a backend.

  [1] http://logs.openstack.org/64/386364/1/check/gate-manila-tempest-dsvm-postgres-generic-singlebackend-ubuntu-xenial-nv/eef11b0/logs/screen-m-shr.txt.gz?level=TRACE#_2016-10-14_15_15_19_898

  [2] https://review.openstack.org/387915

  [3] https://github.com/openstack/cinder/commit/6f174b412696bfa6262a5bea3ac42f45efbbe2ce ( https://review.openstack.org/385122 )

  [4] http://logs.openstack.org/22/385122/1/check/gate-rally-dsvm-cinder/b0332e2/rally-plot/results.html.gz#/CinderVolumes.create_snapshot_and_attach_volume/failures

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1633535/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1578978] [NEW] Limit summary view on project/overview page is broken when amount of charts is changed

2016-05-06 Thread Valeriy Ponomaryov
Public bug reported:

Commit [1] introduced a bug: wrong compilation of HTML tags in the template [2].
It improperly handles chart elements when their amount does not satisfy the
criterion "(amount-1) % 2 = 0". The "bug" is an unclosed "div" element that
leads to improper interpretation of elements on the page.
Also, this template always shows 6 elements on each row and one more on a new
one. This template should not split charts into rows explicitly; the page fits
them nicely without it.

For the moment it looks like 2 rows aligned to the left on a wide screen.

[1] https://github.com/openstack/horizon/commit/3a2564e3
[2] https://github.com/openstack/horizon/blob/3a2564e3/horizon/templates/horizon/common/_limit_summary.html

** Affects: horizon
 Importance: Undecided
     Assignee: Valeriy Ponomaryov (vponomaryov)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1578978

Title:
  Limit summary view on project/overview page is broken when amount of
  charts is changed

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Commit [1] introduced a bug: wrong compilation of HTML tags in the
  template [2]. It improperly handles chart elements when their amount
  does not satisfy the criterion "(amount-1) % 2 = 0". The "bug" is an
  unclosed "div" element that leads to improper interpretation of
  elements on the page.
  Also, this template always shows 6 elements on each row and one more on
  a new one. This template should not split charts into rows explicitly;
  the page fits them nicely without it.

  For the moment it looks like 2 rows aligned to the left on a wide screen.

  [1] https://github.com/openstack/horizon/commit/3a2564e3
  [2] https://github.com/openstack/horizon/blob/3a2564e3/horizon/templates/horizon/common/_limit_summary.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1578978/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1177924] Re: Use testr instead of nose as the unittest runner.

2016-04-20 Thread Valeriy Ponomaryov
** Also affects: manila-ui
   Importance: Undecided
   Status: New

** Changed in: manila-ui
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1177924

Title:
  Use testr instead of nose as the unittest runner.

Status in Ceilometer:
  Fix Released
Status in Cinder:
  Fix Released
Status in django-openstack-auth:
  New
Status in Glance:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Triaged
Status in OpenStack Identity (keystone):
  Fix Released
Status in manila-ui:
  New
Status in python-ceilometerclient:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-heatclient:
  Fix Released
Status in python-keystoneclient:
  Fix Released
Status in OpenStack Object Storage (swift):
  Triaged
Status in OpenStack DBaaS (Trove):
  Triaged

Bug description:
  We want to start using testr as our test runner instead of nose so
  that we can start running tests in parallel. For the projects that
  have switched, we have seen improvements in test speed and quality.

  As part of getting set for that, we need to start using testtools and
  fixtures to provide the plumbing and test isolation needed for
  automatic parallelization. The work can be done piecemeal - with
  testtools and fixtures being added first, and then tox/run_tests being
  ported to use testr/subunit instead of nose.
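
  A minimal sketch of the target pattern (an illustrative test, not taken
  from any project): a testtools test case using a fixture for per-test
  isolation, which is what makes parallel runs under testr safe:

  import fixtures
  import testtools

  class ExampleTest(testtools.TestCase):
      def test_tempdir_is_isolated(self):
          # The fixture is created and cleaned up per test, so parallel
          # workers never share state.
          tmp = self.useFixture(fixtures.TempDir())
          self.assertTrue(tmp.path)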

  This work was semi-tracked during Grizzly with the
  https://blueprints.launchpad.net/openstack-ci/+spec/grizzly-testtools
  blueprint. I am opening this bug so that we can track migration to
  testr on a per-project basis.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1177924/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1548807] [NEW] neutron lbaas DB support is broken on PostgreSQL

2016-02-23 Thread Valeriy Ponomaryov
Public bug reported:

Recent change [1] to the neutron-lbaas project broke PostgreSQL support.

Also, this project is installed in every Devstack CI job and breaks
every PostgreSQL job of other projects simply by being installed.

Here is the error:

logs: http://logs.openstack.org/95/283495/1/check/gate-manila-tempest-dsvm-neutron-postgres-lvm-multibackend/a27485f/logs/devstacklog.txt.gz#_2016-02-23_10_40_31_494

paste: http://paste.openstack.org/show/487887/

raw:

oslo_db.exception.DBError: (psycopg2.ProgrammingError) constraint
"lbaas_listeners_ibfk_2" of relation "lbaas_listeners" does not exist

[1] https://review.openstack.org/#/c/218560/
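
A hedged illustration of the failure class (not the actual migration
code): "lbaas_listeners_ibfk_2" is MySQL's auto-generated foreign-key
name, so an Alembic migration that hard-codes it cannot work on
PostgreSQL, which names its constraints differently:

from alembic import op

def upgrade():
    # Works on MySQL only -- PostgreSQL has no constraint by this name,
    # hence the ProgrammingError above.  A portable migration has to
    # create the constraint with an explicit name or reflect the real
    # name per dialect before dropping it.
    op.drop_constraint('lbaas_listeners_ibfk_2', 'lbaas_listeners',
                       type_='foreignkey')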

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: db lbaas postgres

** Tags added: db lbaas postgres

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1548807

Title:
  neutron lbaas DB support is broken on PostgreSQL

Status in neutron:
  New

Bug description:
  Recent change [1] to the neutron-lbaas project broke PostgreSQL support.

  Also, this project is installed in every Devstack CI job and breaks
  every PostgreSQL job of other projects simply by being installed.

  Here is the error:

  logs: http://logs.openstack.org/95/283495/1/check/gate-manila-tempest-dsvm-neutron-postgres-lvm-multibackend/a27485f/logs/devstacklog.txt.gz#_2016-02-23_10_40_31_494

  paste: http://paste.openstack.org/show/487887/

  raw:

  oslo_db.exception.DBError: (psycopg2.ProgrammingError) constraint
  "lbaas_listeners_ibfk_2" of relation "lbaas_listeners" does not exist

  [1] https://review.openstack.org/#/c/218560/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1548807/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1546723] [NEW] dnsmasq processes inherit system mounts that should not be inherited

2016-02-17 Thread Valeriy Ponomaryov
Public bug reported:

See paste [1] - it lists the mounts that each dnsmasq process holds.
The ones that have "alpha", "betta" and "gamma" in their names are ZFS
filesystems. And it is impossible to unmount them. In the case of ZFS it
means we cannot "destroy" the ZFS filesystems in that list because they
are "busy". To be able to destroy a ZFS dataset we need to either
terminate the dnsmasq processes or hack them to unmount those mounts.

It happens when we create the dataset first and then spawn the dnsmasq
process.

The problem was found in the Manila project with its new ZFSonLinux share
driver [2] running Neutron on the same host.

So, this bug is expected to affect many filesystems.

Expected behaviour: each dnsmasq process should hold only the mounts it
requires, not blocking all others while it is alive.

[1] http://paste.openstack.org/show/487325/

[2] https://review.openstack.org/#/c/277192/
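
A hedged sketch of the expected behaviour (illustrative, not neutron's
actual spawning code): starting a long-lived helper such as dnsmasq in
its own private mount namespace, so it does not pin every filesystem
mounted in the parent at fork time (Linux only, requires root):

import subprocess

subprocess.Popen(['unshare', '--mount', '--propagation', 'private',
                  'dnsmasq', '--no-daemon',
                  '--conf-file=/etc/dnsmasq.conf'])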

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: dnsmasq

** Tags added: dnsmasq

** Description changed:

  See paste [1] - there is list of mounts that each dnsmasq process holds.
  The ones that have "alpha", "betta" and "gamma" words in names are ZFS
  filesystems. And it is impossible to unmount them. In case of ZFS it
  means we cannot "destroy" ZFS filesystems that are in that list. To be
  able to destroy ZFS dataset we need either terminate dnsmasq processes
  or hack them to unmount those mounts.
  
  It happens when we create dataset first then spawn dnsmasq process.
  
- Problem was found in Manila project with its new share driver ZFSonLinux
- [2] running neutron on same host.
+ Problem was found in Manila project with its new ZFSonLinux share driver
+ [2] running Neutron on same host.
  
  Expected behaviour: each dnsmasq process should hold only required for
  them mounts not blocking all other while it is alive.
  
  [1] http://paste.openstack.org/show/487325/
  
  [2] https://review.openstack.org/#/c/277192/

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1546723

Title:
  dnsmasq processes inherit system mounts that should not be inherited

Status in neutron:
  New

Bug description:
  See paste [1] - it lists the mounts that each dnsmasq process holds.
  The ones that have "alpha", "betta" and "gamma" in their names are ZFS
  filesystems. And it is impossible to unmount them. In the case of ZFS
  it means we cannot "destroy" the ZFS filesystems in that list because
  they are "busy". To be able to destroy a ZFS dataset we need to either
  terminate the dnsmasq processes or hack them to unmount those mounts.

  It happens when we create the dataset first and then spawn the dnsmasq
  process.

  The problem was found in the Manila project with its new ZFSonLinux
  share driver [2] running Neutron on the same host.

  So, this bug is expected to affect many filesystems.

  Expected behaviour: each dnsmasq process should hold only the mounts it
  requires, not blocking all others while it is alive.

  [1] http://paste.openstack.org/show/487325/

  [2] https://review.openstack.org/#/c/277192/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1546723/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1530847] [NEW] TypeError attaching volume to instance

2016-01-04 Thread Valeriy Ponomaryov
Public bug reported:

The Manila project has been facing the following error since "2016,
January 3, ~16:00+" in Zuul's timezone:

2016-01-04 11:47:11.087 ERROR nova.api.openstack.extensions [req-d3ea820b-5b7e-4174-aa92-60b5d6283ee9 nova service] Unexpected exception in API method
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions Traceback (most recent call last):
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/api/openstack/extensions.py", line 478, in wrapped
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions     return f(*args, **kwargs)
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/api/validation/__init__.py", line 73, in wrapper
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/api/openstack/compute/volumes.py", line 283, in create
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions     volume_id, device)
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/compute/api.py", line 235, in wrapped
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions     return func(self, context, target, *args, **kwargs)
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/compute/api.py", line 224, in inner
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions     return function(self, context, instance, *args, **kwargs)
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/compute/api.py", line 205, in inner
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions     return f(self, context, instance, *args, **kw)
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/compute/api.py", line 3082, in attach_volume
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions     disk_bus, device_type)
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/compute/api.py", line 3055, in _attach_volume
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions     device_type=device_type)
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/compute/rpcapi.py", line 813, in reserve_block_device_name
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions     volume_bdm = cctxt.call(ctxt, 'reserve_block_device_name', **kw)
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 158, in call
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions     retry=self.retry)
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, in _send
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions     timeout=timeout, retry=retry)
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 464, in send
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions     retry=retry)
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 455, in _send
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions     raise result
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions TypeError: __init__() takes at most 2 arguments (3 given)
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions Traceback (most recent call last):
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 143, in _dispatch_and_reply
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions     executor_callback))
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 189, in _dispatch
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions     executor_callback)
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 130, in _do_dispatch
2016-01-04 11:47:11.087 26423 ERROR nova.api.openstack.extensions     result = func(ctxt, **new_args)
2016-01-04 11:47:11.087 26423 ERROR
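
A hedged illustration of the failure mode behind the TypeError above:
oslo.messaging re-instantiates a remote exception from its serialized
arguments, so an exception class whose __init__ accepts fewer arguments
than were serialized dies exactly like this (the class below is
illustrative, not nova's actual exception):

class DeviceIsBusy(Exception):
    def __init__(self, message=None):   # accepts at most one argument
        super(DeviceIsBusy, self).__init__(message)

# What the deserializer effectively does when two args were serialized;
# on Python 2: TypeError: __init__() takes at most 2 arguments (3 given)
DeviceIsBusy('vdb', 'iSCSI')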

[Yahoo-eng-team] [Bug 1505374] Re: Unit tests failing with oslo.policy 0.12.0

2015-10-13 Thread Valeriy Ponomaryov
** Also affects: manila
   Importance: Undecided
   Status: New

** Changed in: manila
Milestone: None => mitaka-1

** Changed in: manila
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1505374

Title:
  Unit tests failing with oslo.policy 0.12.0

Status in Keystone:
  In Progress
Status in Manila:
  In Progress

Bug description:
  
  oslo.policy 0.12.0 was released recently, and this caused a couple of
  keystone unit tests to fail. The new release has a change to use
  requests rather than urllib, and keystone's unit tests were assuming
  that oslo.policy was implemented using urllib (by mocking the response).

  failing tests:

   keystone.tests.unit.test_policy.PolicyTestCase.test_enforce_http_true
   keystone.tests.unit.test_policy.PolicyTestCase.test_enforce_http_false

  Keystone doesn't need to test these internal implementation details of
  oslo.policy; let's just assume it works as designed, since oslo.policy
  has its own tests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1505374/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495523] [NEW] router-interface-add fails with error 500 on PostgreSQL

2015-09-14 Thread Valeriy Ponomaryov
Public bug reported:

If PostgreSQL is used as the DB backend, then Neutron fails with error
code 500 on the CLI command "router-interface-add":

2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters Traceback (most recent call last):
2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters     context)
2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 450, in do_execute
2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters     cursor.execute(statement, parameters)
2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters ProgrammingError: column "agents.id" must appear in the GROUP BY clause or be used in an aggregate function
2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters LINE 1: SELECT agents.id AS agents_id, agents.agent_type AS agents_a...

Manila CI Tempest job with PostgreSQL errors:

http://logs.openstack.org/01/218801/9/check/gate-manila-tempest-dsvm-neutron-postgres/76739da/logs/devstacklog.txt.gz#_2015-09-14_11_37_13_009

http://logs.openstack.org/01/218801/9/check/gate-manila-tempest-dsvm-neutron-postgres/76739da/logs/screen-q-svc.txt.gz?level=TRACE#_2015-09-14_11_37_13_976
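
A minimal sketch of the query shape PostgreSQL rejects here (the model
and DSN are illustrative, not neutron's actual code): the whole entity
is selected while only one of its columns appears in GROUP BY, which
MySQL tolerates but PostgreSQL refuses:

from sqlalchemy import Column, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Agent(Base):
    __tablename__ = 'agents'
    id = Column(String(36), primary_key=True)
    agent_type = Column(String(255))

engine = create_engine('postgresql://user:pass@localhost/neutron')  # placeholder DSN
session = sessionmaker(bind=engine)()

# Emits "SELECT agents.id, agents.agent_type ... GROUP BY agents.agent_type";
# PostgreSQL raises ProgrammingError because agents.id is neither grouped
# nor aggregated:
session.query(Agent).group_by(Agent.agent_type).all()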

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: db

** Tags added: db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495523

Title:
  router-interface-add fails with error 500 on PostgreSQL

Status in neutron:
  New

Bug description:
  If PostgreSQL is used as the DB backend, then Neutron fails with error
  code 500 on the CLI command "router-interface-add":

  2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters Traceback (most recent call last):
  2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
  2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters     context)
  2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 450, in do_execute
  2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters     cursor.execute(statement, parameters)
  2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters ProgrammingError: column "agents.id" must appear in the GROUP BY clause or be used in an aggregate function
  2015-09-14 11:37:13.976 25772 ERROR oslo_db.sqlalchemy.exc_filters LINE 1: SELECT agents.id AS agents_id, agents.agent_type AS agents_a...

  Manila CI Tempest job with PostgreSQL errors:

  http://logs.openstack.org/01/218801/9/check/gate-manila-tempest-dsvm-neutron-postgres/76739da/logs/devstacklog.txt.gz#_2015-09-14_11_37_13_009

  http://logs.openstack.org/01/218801/9/check/gate-manila-tempest-dsvm-neutron-postgres/76739da/logs/screen-q-svc.txt.gz?level=TRACE#_2015-09-14_11_37_13_976

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1495523/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1493576] Re: Incorrect usage of python-novaclient

2015-09-09 Thread Valeriy Ponomaryov
** Also affects: cinder
   Importance: Undecided
   Status: New

** Description changed:

  All projects should use only `novaclient.client` as entry point. It designed with some version checks and backward compatibility.
  Direct import of versioned client object(i.e. novaclient.v2.client) is a way to "shoot yourself in the foot".
  
  Python-novaclient's doc: http://docs.openstack.org/developer/python-novaclient/api.html
  
  Horizon: https://github.com/openstack/horizon/blob/69d6d50ef4a26e2629643ed35ebd661e82e10586/openstack_dashboard/api/nova.py#L31
  Manila: https://github.com/openstack/manila/blob/master/manila/compute/nova.py#L23
+ Cinder: https://github.com/openstack/cinder/blob/master/cinder/compute/nova.py

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1493576

Title:
  Incorrect usage of python-novaclient

Status in Cinder:
  New
Status in OpenStack Dashboard (Horizon):
  New
Status in Manila:
  New

Bug description:
  All projects should use only `novaclient.client` as the entry point. It
  is designed with version checks and backward compatibility.
  Direct import of a versioned client object (i.e. novaclient.v2.client)
  is a way to "shoot yourself in the foot".

  Python-novaclient's doc: http://docs.openstack.org/developer/python-novaclient/api.html

  Horizon: https://github.com/openstack/horizon/blob/69d6d50ef4a26e2629643ed35ebd661e82e10586/openstack_dashboard/api/nova.py#L31
  Manila: https://github.com/openstack/manila/blob/master/manila/compute/nova.py#L23
  Cinder: https://github.com/openstack/cinder/blob/master/cinder/compute/nova.py
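
  A hedged example of the recommended usage (the auth values are
  placeholders): version negotiation goes through novaclient.client
  instead of importing the versioned module directly:

  from keystoneauth1 import loading, session
  from novaclient import client   # supported entry point

  auth = loading.get_plugin_loader('password').load_from_options(
      auth_url='http://controller:5000/v3', username='admin',
      password='secret', project_name='admin',
      user_domain_id='default', project_domain_id='default')
  nova = client.Client('2', session=session.Session(auth=auth))

  # Fragile (what this bug flags):
  #   from novaclient.v2 import client as v2_client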

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1493576/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458945] Re: Use graduated oslo.policy instead of oslo-incubator code

2015-07-24 Thread Valeriy Ponomaryov
** Changed in: manila
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1458945

Title:
  Use graduated oslo.policy instead of oslo-incubator code

Status in Barbican:
  Fix Released
Status in Ceilometer:
  Fix Released
Status in Cinder:
  In Progress
Status in congress:
  Fix Committed
Status in Glance:
  Fix Released
Status in heat:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Ironic:
  Fix Released
Status in Keystone:
  Fix Released
Status in MagnetoDB:
  Confirmed
Status in Magnum:
  New
Status in Manila:
  Invalid
Status in Mistral:
  Invalid
Status in murano:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress
Status in Rally:
  Invalid
Status in Sahara:
  Fix Released
Status in OpenStack Object Storage (swift):
  Invalid
Status in Trove:
  Invalid

Bug description:
  The Policy code is now managed as a library, named oslo.policy.

  If there is a CVE level defect, deploying a fix should require
  deploying a new version of the library, not syncing each individual
  project.

  All the projects in the OpenStack ecosystem that are using the policy
  code from oslo-incubator should use the new library.

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1458945/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371022] Re: Idle client connections can persist indefinitely

2015-07-22 Thread Valeriy Ponomaryov
** Also affects: manila
   Importance: Undecided
   Status: New

** Changed in: manila
 Assignee: (unassigned) => Valeriy Ponomaryov (vponomaryov)

** Changed in: manila
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1371022

Title:
  Idle client connections can persist indefinitely

Status in Cinder:
  Fix Released
Status in Glance:
  Fix Released
Status in Glance juno series:
  New
Status in Glance kilo series:
  Fix Committed
Status in Manila:
  In Progress

Bug description:
  Idle client socket connections can persist forever, e.g.:

  $ nc localhost 8776
  [never returns]
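
  A hedged sketch of the mitigation direction (a toy WSGI app, not any
  service's actual code): eventlet's WSGI server accepts a socket_timeout,
  so an idle client like the nc above gets dropped instead of persisting:

  import eventlet
  from eventlet import wsgi

  def app(environ, start_response):
      start_response('200 OK', [('Content-Type', 'text/plain')])
      return [b'ok\n']

  # Idle connections are closed after 900 seconds of inactivity:
  wsgi.server(eventlet.listen(('127.0.0.1', 8776)), app, socket_timeout=900)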

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1371022/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361360] Re: Eventlet green threads not released back to the pool leading to choking of new requests

2015-07-22 Thread Valeriy Ponomaryov
** Also affects: manila
   Importance: Undecided
   Status: New

** Changed in: manila
 Assignee: (unassigned) => Valeriy Ponomaryov (vponomaryov)

** Changed in: manila
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361360

Title:
  Eventlet green threads not released back to the pool leading to
  choking of new requests

Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  Fix Released
Status in Cinder juno series:
  Fix Released
Status in Glance:
  Fix Released
Status in Glance icehouse series:
  Fix Committed
Status in heat:
  Fix Released
Status in Keystone:
  Fix Released
Status in Keystone icehouse series:
  Confirmed
Status in Keystone juno series:
  Fix Committed
Status in Manila:
  In Progress
Status in neutron:
  Fix Released
Status in neutron icehouse series:
  Fix Released
Status in neutron juno series:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix
Status in Sahara:
  Confirmed

Bug description:
  Currently reproduced on Juno milestone 2, but this issue should be
  reproducible in all releases since its inception.

  It is possible to choke OpenStack API controller services using the
  wsgi+eventlet library by simply not closing the client socket
  connection. Whenever a request is received by any OpenStack API
  service, for example the nova api service, the eventlet library
  creates a green thread from the pool and starts processing the
  request. Even after the response is sent to the caller, the green
  thread is not returned back to the pool until the client socket
  connection is closed. This way, any malicious user can send many API
  requests to the API controller node, determine the wsgi pool size
  configured for the given service, then send that many requests to the
  service and, after receiving the response, wait there infinitely doing
  nothing, leading to disrupting services for other tenants. Even when
  service providers have enabled the rate limiting feature, it is
  possible to choke the API services with a group (many tenants) attack.

  The following program illustrates choking of nova-api services (but
  this problem is omnipresent in all other OpenStack API services using
  wsgi+eventlet).

  Note: I have explicitly set the wsgi_default_pool_size default value to
  10 in nova/wsgi.py in order to reproduce this problem.
  After you run the program below, you should try to invoke the API.

  import time
  import requests
  from multiprocessing import Process

  def request(number):
      # Port is important here
      path = 'http://127.0.0.1:8774/servers'
      try:
          response = requests.get(path)
          print "RESPONSE %s-%d" % (response.status_code, number)
          # during this sleep time, check whether the client socket
          # connection is released on the API controller node
          time.sleep(1000)
          print "Thread %d complete" % number
      except requests.exceptions.RequestException as ex:
          print "Exception occurred %d-%s" % (number, str(ex))

  if __name__ == '__main__':
      processes = []
      for number in range(40):
          p = Process(target=request, args=(number,))
          p.start()
          processes.append(p)
      for p in processes:
          p.join()

  


  Presently, the wsgi server allows persistent connections if you
  configure keepalive to True, which is the default.
  In order to close the client socket connection explicitly after the
  response is sent and read successfully by the client, you simply have
  to set keepalive to False when you create a wsgi server.
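
  A minimal sketch of that setting (a toy app, not a real service):

  import eventlet
  from eventlet import wsgi

  def app(environ, start_response):
      start_response('200 OK', [('Content-Type', 'text/plain')])
      return [b'ok\n']

  # keepalive=False closes the client socket once the response is read,
  # returning the green thread to the pool immediately:
  wsgi.server(eventlet.listen(('127.0.0.1', 8774)), app, keepalive=False)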

  Additional information: By default eventlet passes “Connection: keepalive”
  if keepalive is set to True when a response is sent to the client. But it
  doesn’t have the capability to set the timeout and max parameters.
  For example:
  Keep-Alive: timeout=10, max=5

  Note: After we disable keepalive in all the OpenStack API services
  using the wsgi library, it might impact existing applications built on
  the assumption that OpenStack API services use persistent connections.
  They might need to modify their applications if reconnection logic is
  not in place, and they might also see slower performance, as the HTTP
  connection will need to be re-established for every request.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1361360/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https

[Yahoo-eng-team] [Bug 1458945] Re: Use graduated oslo.policy instead of oslo-incubator code

2015-05-27 Thread Valeriy Ponomaryov
** No longer affects: manila

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1458945

Title:
  Use graduated oslo.policy instead of oslo-incubator code

Status in OpenStack Key Management (Barbican):
  New
Status in OpenStack Telemetry (Ceilometer):
  New
Status in Cinder:
  New
Status in OpenStack Congress:
  New
Status in Designate:
  New
Status in OpenStack Dashboard (Horizon):
  New
Status in MagnetoDB - key-value storage service for OpenStack:
  New
Status in OpenStack Magnum:
  New
Status in Mistral:
  New
Status in Murano:
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New
Status in Rally:
  Invalid
Status in Openstack Database (Trove):
  Invalid

Bug description:
  The Policy code is now managed as a library, named oslo.policy.

  If there is a CVE level defect, deploying a fix should require
  deploying a new version of the library, not syncing each individual
  project.

  All the projects in the OpenStack ecosystem that are using the policy
  code from oslo-incubator should use the new library.

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1458945/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1415835] [NEW] VM boot is broken with providing port-id from Neutron

2015-01-29 Thread Valeriy Ponomaryov
Public bug reported:

Commit https://review.openstack.org/#/c/124059/ has introduced a bug
where Nova cannot boot a VM.

Steps to reproduce:

1) Create a port in Neutron
2) Boot a VM without a security group, but with the port:

nova --debug boot tt --image=25a15f92-6bbe-43d6-8da5-b015966a4bd1 --flavor=100 --nic port-id=01e02c22-6ea3-4fe6-8cfe-407a06b634a0

...

REQ: curl -i 'http://172.18.198.52:8774/v2/35b86f321c03497fbfa1c0fdf98a3426/servers' -X POST -H "Accept: application/json" -H "Content-Type: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Project-Id: demo" -H "X-Auth-Token: {SHA1}696ac31a35c12934a64485459b0a95a48a9ab4dd" -d '{"server": {"name": "tt", "imageRef": "25a15f92-6bbe-43d6-8da5-b015966a4bd1", "flavorRef": "100", "max_count": 1, "min_count": 1, "networks": [{"port": "01e02c22-6ea3-4fe6-8cfe-407a06b634a0"}]}}'

...

Trace as a result:

2015-01-29 12:14:03.338 ERROR nova.compute.manager [-] Instance failed network setup after 1 attempt(s)
2015-01-29 12:14:03.338 TRACE nova.compute.manager Traceback (most recent call last):
2015-01-29 12:14:03.338 TRACE nova.compute.manager   File "/opt/stack/nova/nova/compute/manager.py", line 1677, in _allocate_network_async
2015-01-29 12:14:03.338 TRACE nova.compute.manager     dhcp_options=dhcp_options)
2015-01-29 12:14:03.338 TRACE nova.compute.manager   File "/opt/stack/nova/nova/network/neutronv2/api.py", line 457, in allocate_for_instance
2015-01-29 12:14:03.338 TRACE nova.compute.manager     raise exception.SecurityGroupNotAllowedTogetherWithPort()
2015-01-29 12:14:03.338 TRACE nova.compute.manager SecurityGroupNotAllowedTogetherWithPort: It's not allowed to specify security groups if port_id is provided on instance boot. Neutron should be used to configure security groups on port.
2015-01-29 12:14:03.338 TRACE nova.compute.manager
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/poll.py", line 115, in wait
    listener.cb(fileno)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 214, in main
    result = function(*args, **kwargs)
  File "/opt/stack/nova/nova/compute/manager.py", line 1677, in _allocate_network_async
    dhcp_options=dhcp_options)
  File "/opt/stack/nova/nova/network/neutronv2/api.py", line 457, in allocate_for_instance
    raise exception.SecurityGroupNotAllowedTogetherWithPort()
SecurityGroupNotAllowedTogetherWithPort: It's not allowed to specify security groups if port_id is provided on instance boot. Neutron should be used to configure security groups on port.
Removing descriptor: 19
2015-01-29 12:14:03.529 DEB

2015-01-29 12:14:03.710 INFO nova.virt.libvirt.driver [-] [instance: c4892579-e32b-44ca-b8c7-72f3e04c6800] Using config drive
2015-01-29 12:14:03.763 ERROR nova.compute.manager [-] [instance: c4892579-e32b-44ca-b8c7-72f3e04c6800] Instance failed to spawn
2015-01-29 12:14:03.763 TRACE nova.compute.manager [instance: c4892579-e32b-44ca-b8c7-72f3e04c6800] Traceback (most recent call last):
2015-01-29 12:14:03.763 TRACE nova.compute.manager [instance: c4892579-e32b-44ca-b8c7-72f3e04c6800]   File "/opt/stack/nova/nova/compute/manager.py", line 2303, in _build_resources
2015-01-29 12:14:03.763 TRACE nova.compute.manager [instance: c4892579-e32b-44ca-b8c7-72f3e04c6800]     yield resources
2015-01-29 12:14:03.763 TRACE nova.compute.manager [instance: c4892579-e32b-44ca-b8c7-72f3e04c6800]   File "/opt/stack/nova/nova/compute/manager.py", line 2173, in _build_and_run_instance
2015-01-29 12:14:03.763 TRACE nova.compute.manager [instance: c4892579-e32b-44ca-b8c7-72f3e04c6800]     flavor=flavor)
2015-01-29 12:14:03.763 TRACE nova.compute.manager [instance: c4892579-e32b-44ca-b8c7-72f3e04c6800]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2309, in spawn
2015-01-29 12:14:03.763 TRACE nova.compute.manager [instance: c4892579-e32b-44ca-b8c7-72f3e04c6800]     admin_pass=admin_password)
2015-01-29 12:14:03.763 TRACE nova.compute.manager [instance: c4892579-e32b-44ca-b8c7-72f3e04c6800]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2783, in _create_image
2015-01-29 12:14:03.763 TRACE nova.compute.manager [instance: c4892579-e32b-44ca-b8c7-72f3e04c6800]     content=files, extra_md=extra_md, network_info=network_info)
2015-01-29 12:14:03.763 TRACE nova.compute.manager [instance: c4892579-e32b-44ca-b8c7-72f3e04c6800]   File "/opt/stack/nova/nova/api/metadata/base.py", line 159, in __init__
2015-01-29 12:14:03.763 TRACE nova.compute.manager [instance: c4892579-e32b-44ca-b8c7-72f3e04c6800]     ec2utils.get_ip_info_for_instance_from_nw_info(network_info)
2015-01-29 12:14:03.763 TRACE nova.compute.manager [instance: c4892579-e32b-44ca-b8c7-72f3e04c6800]   File "/opt/stack/nova/nova/api/ec2/ec2utils.py", line 152, in get_ip_info_for_instance_from_nw_info
2015-01-29 12:14:03.763 TRACE nova.compute.manager [instance: c4892579-e32b-44ca-b8c7-72f3e04c6800]     fixed_ips = nw_info.fixed_ips()
2015-01-29 12:14:03.763 TRACE nova.compute.manager [instance:

[Yahoo-eng-team] [Bug 1368910] Re: intersphinx requires network access which sometimes fails

2014-10-02 Thread Valeriy Ponomaryov
** Changed in: python-manilaclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368910

Title:
  intersphinx requires network access which sometimes fails

Status in Cinder:
  In Progress
Status in Manila:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in The Oslo library incubator:
  Fix Released
Status in python-manilaclient:
  Fix Released

Bug description:
  The intersphinx module requires internet access, and periodically
  causes docs jobs to fail.

  This module also prevents docs from being built without internet
  access.

  Since we don't actually use intersphinx for much (if anything), let's
  just remove it.
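
  A hedged sketch of the removal in a project's Sphinx conf.py (the
  remaining extension list is illustrative):

  # conf.py
  extensions = [
      'sphinx.ext.autodoc',
      # 'sphinx.ext.intersphinx',  # removed: pulls object inventories
      #                            # over the network at build time
  ]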

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1368910/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1284677] Re: Python 3: do not use 'unicode()'

2014-08-03 Thread Valeriy Ponomaryov
** Changed in: manila
   Status: In Progress => Fix Committed

** Changed in: manila
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1284677

Title:
  Python 3: do not use 'unicode()'

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Manila:
  Fix Released
Status in Python client library for Glance:
  Fix Committed
Status in Tuskar:
  Fix Released

Bug description:
  The unicode() function is Python 2-specific; we should use
  six.text_type() instead.
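
  A minimal before/after sketch of the replacement:

  import six

  value = 42
  # Python 2 only:
  #   text = unicode(value)
  # Portable across Python 2 and 3:
  text = six.text_type(value)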

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1284677/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1284677] Re: Python 3: do not use 'unicode()'

2014-07-11 Thread Valeriy Ponomaryov
** Also affects: manila
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1284677

Title:
  Python 3: do not use 'unicode()'

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Manila:
  New
Status in Python client library for Glance:
  Fix Committed
Status in Tuskar:
  Fix Released

Bug description:
  The unicode() function is Python 2-specific; we should use
  six.text_type() instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1284677/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1190149] Re: Token auth fails when token is larger than 8k

2014-06-16 Thread Valeriy Ponomaryov
** No longer affects: manila

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1190149

Title:
  Token auth fails when token is larger than 8k

Status in Cinder:
  Fix Released
Status in Cinder havana series:
  Fix Committed
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance havana series:
  Fix Committed
Status in Orchestration API (Heat):
  Fix Released
Status in heat havana series:
  Fix Committed
Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Object Storage (Swift):
  Fix Released
Status in Openstack Database (Trove):
  Fix Released

Bug description:
  The following tests fail when there are 8 or more endpoints registered with 
keystone 
  tempest.api.compute.test_auth_token.AuthTokenTestJSON.test_v3_token 
  tempest.api.compute.test_auth_token.AuthTokenTestXML.test_v3_token

  Steps to reproduce:
  - run devstack with the following services (the heat h-* APIs push the 
endpoint count over the threshold)

ENABLED_SERVICES=g-api,g-reg,key,n-api,n-crt,n-obj,n-cpu,n-sch,horizon,mysql,rabbit,sysstat,tempest,s-proxy,s-account,s-container,s-object,cinder,c-api,c-vol,c-sch,n-cond,heat,h-api,h-api-cfn,h-api-cw,h-eng,n-net
  - run the failing tempest tests, eg
testr run test_v3_token
  - results in the following errors:
  ERROR: tempest.api.compute.test_auth_token.AuthTokenTestJSON.test_v3_token
  tags: worker-0
  --
  Traceback (most recent call last):
File "tempest/api/compute/test_auth_token.py", line 48, in test_v3_token
  self.servers_v3.list_servers()
File "tempest/services/compute/json/servers_client.py", line 138, in 
list_servers
  resp, body = self.get(url)
File "tempest/common/rest_client.py", line 269, in get
  return self.request('GET', url, headers)
File "tempest/common/rest_client.py", line 394, in request
  resp, resp_body)
File "tempest/common/rest_client.py", line 443, in _error_checker
  resp_body = self._parse_resp(resp_body)
File "tempest/common/rest_client.py", line 327, in _parse_resp
  return json.loads(body)
File "/usr/lib64/python2.7/json/__init__.py", line 326, in loads
  return _default_decoder.decode(s)
File "/usr/lib64/python2.7/json/decoder.py", line 366, in decode
  obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib64/python2.7/json/decoder.py", line 384, in raw_decode
  raise ValueError("No JSON object could be decoded")
  ValueError: No JSON object could be decoded
  ==
  ERROR: tempest.api.compute.test_auth_token.AuthTokenTestXML.test_v3_token
  tags: worker-0
  --
  Traceback (most recent call last):
File "tempest/api/compute/test_auth_token.py", line 48, in test_v3_token
  self.servers_v3.list_servers()
File "tempest/services/compute/xml/servers_client.py", line 181, in 
list_servers
  resp, body = self.get(url, self.headers)
File "tempest/common/rest_client.py", line 269, in get
  return self.request('GET', url, headers)
File "tempest/common/rest_client.py", line 394, in request
  resp, resp_body)
File "tempest/common/rest_client.py", line 443, in _error_checker
  resp_body = self._parse_resp(resp_body)
File "tempest/common/rest_client.py", line 519, in _parse_resp
  return xml_to_json(etree.fromstring(body))
File "lxml.etree.pyx", line 2993, in lxml.etree.fromstring 
(src/lxml/lxml.etree.c:63285)
File "parser.pxi", line 1617, in lxml.etree._parseMemoryDocument 
(src/lxml/lxml.etree.c:93571)
File "parser.pxi", line 1495, in lxml.etree._parseDoc 
(src/lxml/lxml.etree.c:92370)
File "parser.pxi", line 1011, in lxml.etree._BaseParser._parseDoc 
(src/lxml/lxml.etree.c:89010)
File "parser.pxi", line 577, in 
lxml.etree._ParserContext._handleParseResultDoc (src/lxml/lxml.etree.c:84711)
File "parser.pxi", line 676, in lxml.etree._handleParseResult 
(src/lxml/lxml.etree.c:85816)
File "parser.pxi", line 627, in lxml.etree._raiseParseError 
(src/lxml/lxml.etree.c:85308)
  XMLSyntaxError: None
  Ran 2 tests in 2.497s (+0.278s)
  FAILED (id=214, failures=2)

  - run keystone endpoint-delete on endpoints until only 7 endpoints remain
  - failing tests should now pass
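
  The 8k figure matches eventlet's default per-line header limit. A hedged
  sketch of the kind of workaround this points at, assuming the affected
  API services serve WSGI through eventlet (whose module-level default
  MAX_HEADER_LINE is 8192 bytes):

    # Sketch only: raise the header-line cap before starting the WSGI
    # server so a large X-Auth-Token still fits in one header line.
    import eventlet.wsgi

    eventlet.wsgi.MAX_HEADER_LINE = 16384  # eventlet default is 8192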

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1190149/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1190149] Re: Token auth fails when token is larger than 8k

2014-04-24 Thread Valeriy Ponomaryov
** Also affects: manila
   Importance: Undecided
   Status: New

** Changed in: manila
 Assignee: (unassigned) => Valeriy Ponomaryov (vponomaryov)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1190149

Title:
  Token auth fails when token is larger than 8k

Status in Cinder:
  Fix Released
Status in Cinder havana series:
  Fix Committed
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance havana series:
  Fix Committed
Status in Orchestration API (Heat):
  Fix Released
Status in heat havana series:
  Fix Committed
Status in OpenStack Identity (Keystone):
  Fix Released
Status in Manila:
  New
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Object Storage (Swift):
  Fix Released
Status in Openstack Database (Trove):
  Fix Released

Bug description:
  The following tests fail when there are 8 or more endpoints registered with 
keystone 
  tempest.api.compute.test_auth_token.AuthTokenTestJSON.test_v3_token 
  tempest.api.compute.test_auth_token.AuthTokenTestXML.test_v3_token

  Steps to reproduce:
  - run devstack with the following services (the heat h-* APIs push the 
endpoint count over the threshold)

ENABLED_SERVICES=g-api,g-reg,key,n-api,n-crt,n-obj,n-cpu,n-sch,horizon,mysql,rabbit,sysstat,tempest,s-proxy,s-account,s-container,s-object,cinder,c-api,c-vol,c-sch,n-cond,heat,h-api,h-api-cfn,h-api-cw,h-eng,n-net
  - run the failing tempest tests, eg
testr run test_v3_token
  - results in the following errors:
  ERROR: tempest.api.compute.test_auth_token.AuthTokenTestJSON.test_v3_token
  tags: worker-0
  --
  Traceback (most recent call last):
File "tempest/api/compute/test_auth_token.py", line 48, in test_v3_token
  self.servers_v3.list_servers()
File "tempest/services/compute/json/servers_client.py", line 138, in 
list_servers
  resp, body = self.get(url)
File "tempest/common/rest_client.py", line 269, in get
  return self.request('GET', url, headers)
File "tempest/common/rest_client.py", line 394, in request
  resp, resp_body)
File "tempest/common/rest_client.py", line 443, in _error_checker
  resp_body = self._parse_resp(resp_body)
File "tempest/common/rest_client.py", line 327, in _parse_resp
  return json.loads(body)
File "/usr/lib64/python2.7/json/__init__.py", line 326, in loads
  return _default_decoder.decode(s)
File "/usr/lib64/python2.7/json/decoder.py", line 366, in decode
  obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib64/python2.7/json/decoder.py", line 384, in raw_decode
  raise ValueError("No JSON object could be decoded")
  ValueError: No JSON object could be decoded
  ==
  ERROR: tempest.api.compute.test_auth_token.AuthTokenTestXML.test_v3_token
  tags: worker-0
  --
  Traceback (most recent call last):
File "tempest/api/compute/test_auth_token.py", line 48, in test_v3_token
  self.servers_v3.list_servers()
File "tempest/services/compute/xml/servers_client.py", line 181, in 
list_servers
  resp, body = self.get(url, self.headers)
File "tempest/common/rest_client.py", line 269, in get
  return self.request('GET', url, headers)
File "tempest/common/rest_client.py", line 394, in request
  resp, resp_body)
File "tempest/common/rest_client.py", line 443, in _error_checker
  resp_body = self._parse_resp(resp_body)
File "tempest/common/rest_client.py", line 519, in _parse_resp
  return xml_to_json(etree.fromstring(body))
File "lxml.etree.pyx", line 2993, in lxml.etree.fromstring 
(src/lxml/lxml.etree.c:63285)
File "parser.pxi", line 1617, in lxml.etree._parseMemoryDocument 
(src/lxml/lxml.etree.c:93571)
File "parser.pxi", line 1495, in lxml.etree._parseDoc 
(src/lxml/lxml.etree.c:92370)
File "parser.pxi", line 1011, in lxml.etree._BaseParser._parseDoc 
(src/lxml/lxml.etree.c:89010)
File "parser.pxi", line 577, in 
lxml.etree._ParserContext._handleParseResultDoc (src/lxml/lxml.etree.c:84711)
File "parser.pxi", line 676, in lxml.etree._handleParseResult 
(src/lxml/lxml.etree.c:85816)
File "parser.pxi", line 627, in lxml.etree._raiseParseError 
(src/lxml/lxml.etree.c:85308)
  XMLSyntaxError: None
  Ran 2 tests in 2.497s (+0.278s)
  FAILED (id=214, failures=2)

  - run keystone endpoint-delete on endpoints until only 7 endpoints remain
  - failing tests should now pass

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1190149/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1306177] [NEW] wrong event expectation in _attachInputHandlers

2014-04-10 Thread Valeriy Ponomaryov
Public bug reported:

file horizon/static/horizon/js/horizon.quota.js

The 'data-progress-indicator-for' handling relies on the 'keyup' event.
In Google Chrome, however, the value of a number input can also be changed
with the increment/decrement arrows, which fires only a 'change' event and
no 'keyup' at all.

The error shows up in the gigabyte quota fields of the Cinder volume
creation dialog (see attachment).

** Affects: horizon
 Importance: Undecided
 Assignee: Valeriy Ponomaryov (vponomaryov)
 Status: In Progress

** Attachment added: horizon_quota.png
   
https://bugs.launchpad.net/bugs/1306177/+attachment/4080085/+files/horizon_quota.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1306177

Title:
  wrong event expectation in _attachInputHandlers

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  file horizon/static/horizon/js/horizon.quota.js

  The 'data-progress-indicator-for' handling relies on the 'keyup' event.
  In Google Chrome, however, the value of a number input can also be changed
  with the increment/decrement arrows, which fires only a 'change' event and
  no 'keyup' at all.

  The error shows up in the gigabyte quota fields of the Cinder volume
  creation dialog (see attachment).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1306177/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1253497] Re: Replace uuidutils.generate_uuid() with str(uuid.uuid4())

2014-02-28 Thread Valeriy Ponomaryov
** Changed in: manila
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1253497

Title:
  Replace uuidutils.generate_uuid() with str(uuid.uuid4())

Status in Project Barbican:
  Confirmed
Status in BillingStack:
  In Progress
Status in Cinder:
  In Progress
Status in Climate:
  In Progress
Status in Designate:
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in Ironic (Bare Metal Provisioning):
  Fix Released
Status in Manila:
  Fix Released
Status in Murano Project:
  Fix Committed
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released
Status in OpenStack Data Processing (Savanna):
  Fix Released
Status in Staccato VM Image And Data Transfer Service:
  In Progress
Status in Taskflow for task-oriented systems.:
  In Progress
Status in Trove - Database as a Service:
  Fix Released
Status in Tuskar:
  Fix Committed

Bug description:
  http://lists.openstack.org/pipermail/openstack-
  dev/2013-November/018980.html

  
  > Hi all,
  >
  > We had a discussion of the modules that are incubated in Oslo.
  >
  > https://etherpad.openstack.org/p/icehouse-oslo-status
  >
  > One of the conclusions we came to was to deprecate/remove uuidutils in
  > this cycle.
  >
  > The first step into this change should be to remove generate_uuid() from
  > uuidutils.
  >
  > The reason is that 1) generating the UUID string seems trivial enough to
  > not need a function and 2) string representation of uuid4 is not what we
  > want in all projects.
  >
  > To address this, a patch is now on gerrit.
  > https://review.openstack.org/#/c/56152/
  >
  > Each project should directly use the standard uuid module or implement its
  > own helper function to generate uuids if this patch gets in.
  >
  > Any thoughts on this change? Thanks.
  

  Unfortunately it looks like that change went through before I caught up on
  email. Shouldn't we have removed its use in the downstream projects (at
  least integrated projects) before removing it from Oslo?

  Doug
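
  A minimal before/after sketch of the substitution (the incubated import
  path below is illustrative; it varied per project):

    # Before, via the oslo-incubator helper:
    #     from nova.openstack.common import uuidutils
    #     request_id = uuidutils.generate_uuid()

    # After, standard library only:
    import uuid

    request_id = str(uuid.uuid4())
    print(request_id)  # prints a random 36-character UUID string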

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1253497/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1266962] Re: Remove set_time_override in timeutils

2014-02-19 Thread Valeriy Ponomaryov
** Changed in: manila
   Status: In Progress => Fix Released

** Changed in: manila
 Milestone: None => icehouse-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1266962

Title:
  Remove set_time_override in timeutils

Status in OpenStack Telemetry (Ceilometer):
  Fix Committed
Status in Cinder:
  In Progress
Status in Gantt:
  New
Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Ironic (Bare Metal Provisioning):
  In Progress
Status in OpenStack Identity (Keystone):
  In Progress
Status in Manila:
  Fix Released
Status in OpenStack Message Queuing Service (Marconi):
  Fix Committed
Status in OpenStack Compute (Nova):
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  In Progress
Status in Messaging API for OpenStack:
  Fix Released
Status in Python client library for Keystone:
  Fix Released
Status in Python client library for Nova:
  Fix Committed
Status in Tuskar:
  Fix Released

Bug description:
  set_time_override was written as a helper function to mock utcnow in
  unittests.

  However, we now use mock or fixture to mock our objects, so
  set_time_override has become obsolete (see the sketch after the list
  below).

  We should first remove all usage of set_time_override from downstream
  projects before deleting it from oslo.

  List of attributes and functions to be removed from timeutils:
  * override_time
  * set_time_override()
  * clear_time_override()
  * advance_time_delta()
  * advance_time_seconds()
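
  A minimal runnable sketch of the replacement pattern, pinning utcnow with
  mock in a test (the utcnow() stand-in and the test names here are made up
  for illustration):

    import datetime
    import unittest

    import mock  # on Python 3, unittest.mock offers the same API

    def utcnow():
        # Stand-in for a project's timeutils.utcnow()
        return datetime.datetime.utcnow()

    FROZEN = datetime.datetime(2014, 1, 1, 12, 0, 0)

    class FrozenTimeTest(unittest.TestCase):
        @mock.patch('%s.utcnow' % __name__, return_value=FROZEN)
        def test_utcnow_is_frozen(self, mock_utcnow):
            self.assertEqual(FROZEN, utcnow())

    if __name__ == '__main__':
        unittest.main()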

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1266962/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp