[Yahoo-eng-team] [Bug 1397882] [NEW] api/test_auth.py:test_user_only.py and test_user_id_only.py

2014-12-01 Thread Dingyx
Public bug reported:

The function name does not match the function body.
For test_user_only:

self.request.headers['X_USER_ID'] = 'testuserid'

should instead be:

self.request.headers['X_USER'] = 'testuser'

For test_user_id_only, the same applies the other way around.
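
As an illustration, a minimal self-contained sketch (hypothetical test class, not the actual nova test file) of what the two tests would look like once each one sets only the header its name refers to:

    import unittest


    class FakeRequest(object):
        def __init__(self):
            self.headers = {}


    class AuthHeaderNamingTest(unittest.TestCase):
        def setUp(self):
            self.request = FakeRequest()

        def test_user_only(self):
            # only X_USER is set, matching the test name
            self.request.headers['X_USER'] = 'testuser'
            self.assertEqual('testuser', self.request.headers['X_USER'])
            self.assertNotIn('X_USER_ID', self.request.headers)

        def test_user_id_only(self):
            # only X_USER_ID is set, matching the test name
            self.request.headers['X_USER_ID'] = 'testuserid'
            self.assertEqual('testuserid', self.request.headers['X_USER_ID'])
            self.assertNotIn('X_USER', self.request.headers)


    if __name__ == '__main__':
        unittest.main()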

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1397882

Title:
  api/test_auth.py:test_user_only.py and test_user_id_only.py

Status in OpenStack Compute (Nova):
  New

Bug description:
  The function name does not match the function body.
  For test_user_only:

  self.request.headers['X_USER_ID'] = 'testuserid'

  should instead be:

  self.request.headers['X_USER'] = 'testuser'

  For test_user_id_only, the same applies the other way around.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1397882/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1397836] Re: openstack-dashboard : Depends: python: (>= 2.7.1-0ubuntu2) but it is not installable Recommends: openstack-dashboard-ubuntu-theme but it is not installed

2014-12-01 Thread Julie Pichon
Looks to be packaging related, let's try and update the target project.

** Project changed: horizon => ubuntu

** Package changed: ubuntu => horizon (Ubuntu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1397836

Title:
  openstack-dashboard : Depends: python: (>= 2.7.1-0ubuntu2) but it is
  not installable Recommends: openstack-dashboard-ubuntu-theme but it is
  not installed

Status in horizon package in Ubuntu:
  New

Bug description:
   packages have unmet dependencies:
   openstack-dashboard : Depends: python: (>= 2.7.1-0ubuntu2) but it is not installable
                         Recommends: openstack-dashboard-ubuntu-theme but it is not installed

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/horizon/+bug/1397836/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1397890] [NEW] Missing primary key constraint at endpoint_group.id column

2014-12-01 Thread Ilya Pekelny
Public bug reported:

Most tables should have a primary key, and each table can have only ONE
primary key. The PRIMARY KEY constraint uniquely identifies each record
in a database table. The endpoint_group table has no primary key, but the
project_endpoint_group table defines a constraint that references the
endpoint_group.id column. Such a migration can't be applied on any SQL
backend except SQLite.
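
As a rough illustration, a hedged sketch (sqlalchemy-migrate style; table and column names taken from this report, everything else assumed) of a migration that adds the missing primary key so that references to endpoint_group.id have a valid target on non-SQLite backends:

    import sqlalchemy as sa
    from migrate.changeset.constraint import PrimaryKeyConstraint


    def upgrade(migrate_engine):
        meta = sa.MetaData(bind=migrate_engine)
        endpoint_group = sa.Table('endpoint_group', meta, autoload=True)
        # add the missing PRIMARY KEY on endpoint_group.id
        PrimaryKeyConstraint(endpoint_group.c.id).create()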

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1397890

Title:
  Missing primary key constraint at endpoint_group.id column

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Most tables should have a primary key, and each table can have only
  ONE primary key. The PRIMARY KEY constraint uniquely identifies each
  record in a database table. The endpoint_group has no primary key. But
  project_endpoint_group table provides a primary key constraint pointed
  to endpoint_group.id column. Such a migration can't be applied with
  any sql backend except SQLite.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1397890/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1397894] [NEW] Migration helpers designed to sync database couldn't be used with shared engine.

2014-12-01 Thread Ilya Pekelny
Public bug reported:

When a test case (or any real application) needs to run database
migrations and then use the migrated database, the engine has to be
shared between the migration helpers and the application (or test case).
The helpers need access to the shared engine so that they apply
migrations to the same database the application uses.
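
To make the idea concrete, a hedged sketch (hypothetical helper, not existing Keystone code) of a fixture that creates one engine, hands it to the migration helpers, and then reuses the very same engine for the code under test:

    import sqlalchemy as sa


    def build_migrated_engine(connection_url, run_migrations):
        """Create an engine, migrate it, and return it for the app/test to reuse.

        ``run_migrations`` is any callable that accepts a shared Engine and
        applies the migration repository against it.
        """
        engine = sa.create_engine(connection_url)
        run_migrations(engine)   # helpers operate on the shared engine...
        return engine            # ...and the test case reuses the same one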

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1397894

Title:
  Migration helpers designed to sync database couldn't be used with
  shared engine.

Status in OpenStack Identity (Keystone):
  New

Bug description:
  When test cases or any real app need to provide database migration and
  use synced database required to share engine between helpers and app
  (test case). The helpers need to have access to share engine to apply
  migrations to the same database as application uses.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1397894/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1397899] [NEW] Database indexes missing or have inappropriate names.

2014-12-01 Thread Ilya Pekelny
Public bug reported:

Database indexes are missing from the model classes, or the names
provided in the migrations are inappropriate.
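
For illustration, a hedged sketch (illustrative table and column names, not actual Keystone models) of keeping the model and the migration in agreement by declaring the index on the model with an explicit name:

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()


    class Token(Base):
        __tablename__ = 'token'
        id = sa.Column(sa.String(64), primary_key=True)
        expires = sa.Column(sa.DateTime)
        # the explicit name lets the migration create the same index verbatim
        __table_args__ = (
            sa.Index('ix_token_expires', 'expires'),
        )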

** Affects: keystone
 Importance: Undecided
 Assignee: Ilya Pekelny (i159)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1397899

Title:
  Database indexes missing or have inappropriate names.

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  Database indexes are missing from the model classes, or the names
  provided in the migrations are inappropriate.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1397899/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1397471] Re: No valid host was found after installing DevStack

2014-12-01 Thread Timur Sufiev
You're welcome :). Closing as Invalid, because it's not a Horizon issue.

** Changed in: horizon
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1397471

Title:
  No valid host was found after installing DevStack

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Followed the DevStack installation steps from
  http://docs.openstack.org/developer/devstack/

  VM: Ubuntu: 12.04

  
  nova availability-zone-list  is empty. There is enough free memory on the 
machine.

  amogh@amogh-VirtualBox:~/devstack$ nova availability-zone-list
  +------+--------+
  | Name | Status |
  +------+--------+
  +------+--------+

  amogh@amogh-VirtualBox:~/devstack$ free -m
               total       used       free     shared    buffers     cached
  Mem:          6066       3078       2987          0        243       1166
  -/+ buffers/cache:       1669       4397
  Swap:         1996          0       1996

  amogh@amogh-VirtualBox:~/devstack$ df -m
  Filesystem 1M-blocks  Used Available Use% Mounted on
  /dev/sda1  48299 33588 12236  74% /
  udev3021 1  3021   1% /dev
  tmpfs607 1   606   1% /run
  none   5 0 5   0% /run/lock
  none3034 1  3033   1% /run/shm
  cgroup  3034 0  3034   0% /sys/fs/cgroup
  amogh@amogh-VirtualBox:~/devstack$

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1397471/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1397903] [NEW] Hardcoded initial database version

2014-12-01 Thread Ilya Pekelny
Public bug reported:

Migration repositories provide a hardcoded initial version value, or are
even missing it entirely. We need a single automated way to obtain the
real initial version from any migration repository.
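
As one possible direction, a hedged sketch (assuming the standard sqlalchemy-migrate layout of numerically prefixed scripts under a versions/ directory) of deriving the initial version instead of hardcoding it:

    import os
    import re


    def initial_version(repo_path):
        """Return the version just before the first migration script in the repo."""
        versions_dir = os.path.join(repo_path, 'versions')
        numbers = [int(match.group(1))
                   for name in os.listdir(versions_dir)
                   for match in [re.match(r'^(\d+)_.*\.py$', name)]
                   if match]
        # the repository starts one version below its first migration script
        return min(numbers) - 1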

** Affects: keystone
 Importance: Undecided
 Assignee: Ilya Pekelny (i159)
 Status: In Progress

** Changed in: keystone
   Status: New => In Progress

** Changed in: keystone
 Assignee: (unassigned) => Ilya Pekelny (i159)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1397903

Title:
  Hardcoded initial database version

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  Migration repositories provide a hardcoded initial version value, or
  are even missing it entirely. We need a single automated way to obtain
  the real initial version from any migration repository.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1397903/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1397917] [NEW] Clarify vCPUs Usage legend for hypervisors

2014-12-01 Thread Julie Pichon
Public bug reported:

The vCPUs usage chart is confusing because it can end up with values
like "Used 5 of 4".

(see e.g. https://launchpadlibrarian.net/155236888/Nova_bug_1202965.png
which was posted on bug 1202965)

This may be improved in Nova as part of bug 1202965, but if possible I
think we should try to make the meaning clearer in Horizon. Chatting
with a Nova developer, and based on the discussion on that other bug as
well, the actual meaning is: "This hypervisor has 4 cores and is running
VMs with a total of 5 vCPUs." - which is a bit too long for the chart
legend itself.

I'd like to suggest adding a star (*) after the small chart legend and
putting this full sentence as a legend below the chart series. It may
not be the best UI but hopefully it will help with reducing confusion -
I'm including the UX tag on this so UX folks can hopefully add their
input as well.

https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/admin/hypervisors/templates/hypervisors/index.html

** Affects: horizon
 Importance: Low
 Status: New


** Tags: ux

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1397917

Title:
  Clarify vCPUs Usage legend for hypervisors

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The vCPUs usage chart is confusing because it can end up with values
  like "Used 5 of 4".

  (see e.g.
  https://launchpadlibrarian.net/155236888/Nova_bug_1202965.png which
  was posted on bug 1202965)

  This may be improved in Nova as part of bug 1202965, but if possible I
  think we should try to make the meaning clearer in Horizon. Chatting
  with a Nova developer, and based on the discussion on that other bug
  as well, the actual meaning is: "This hypervisor has 4 cores and is
  running VMs with a total of 5 vCPUs." - which is a bit too long for
  the chart legend itself.

  I'd like to suggest adding a star (*) after the small chart legend and
  putting this full sentence as a legend below the chart series. It may
  not be the best UI but hopefully it will help with reducing confusion
  - I'm including the UX tag on this so UX folks can hopefully add their
  input as well.

  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/admin/hypervisors/templates/hypervisors/index.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1397917/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1397926] [NEW] create dependency between http service and horizon

2014-12-01 Thread Dafna Ron
Public bug reported:

Can we create a dependency between the horizon packages and the http service so
that after a package update of the horizon packages the http service is restarted
as well? The reason I think this should be done is that I updated the horizon
packages and, after the upgrade, some modules failed to load properly until the
http service was restarted, so I think it would be good practice to restart http
as part of the package update.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1397926

Title:
  create dependency between http service and horizon

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Can we create a dependency between the horizon packages and the http service
  so that after a package update of the horizon packages the http service is
  restarted as well? The reason I think this should be done is that I updated
  the horizon packages and, after the upgrade, some modules failed to load
  properly until the http service was restarted, so I think it would be good
  practice to restart http as part of the package update.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1397926/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1397926] Re: create dependency between http service and horizon

2014-12-01 Thread Julie Pichon
Hi Dafna, this looks like a packaging issue that should be reported at
the distribution level downstream. Thanks!

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1397926

Title:
  create dependency between http service and horizon

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Can we create a dependency between the horizon packages and the http service
  so that after a package update of the horizon packages the http service is
  restarted as well? The reason I think this should be done is that I updated
  the horizon packages and, after the upgrade, some modules failed to load
  properly until the http service was restarted, so I think it would be good
  practice to restart http as part of the package update.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1397926/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1397956] [NEW] Incorrect available free space when datastore_regex is used for vcenter

2014-12-01 Thread Roman Podoliaka
Public bug reported:

When vCenter is used as hypervisor, datastore_regex option is ignored
when calculating free space available (which affects nova hypervisor-
stats/Horizon and scheduling of new instances).

datastore_regex value should be passed down the stack when the
datastores are selected.
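
As a rough illustration, a hedged sketch (hypothetical data structures and helper name, not the actual VMware driver API) of applying datastore_regex when summing up the available capacity, so the reported stats only cover datastores the driver is allowed to use:

    import re


    def free_space_mb(datastores, datastore_regex=None):
        """Sum free space only over datastores whose names match the regex."""
        pattern = re.compile(datastore_regex) if datastore_regex else None
        return sum(ds['free_mb'] for ds in datastores
                   if pattern is None or pattern.match(ds['name']))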

** Affects: nova
 Importance: Undecided
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: Confirmed


** Tags: vmware

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

** Changed in: nova
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1397956

Title:
  Incorrect available free space when datastore_regex is used for
  vcenter

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  When vCenter is used as hypervisor, datastore_regex option is ignored
  when calculating free space available (which affects nova hypervisor-
  stats/Horizon and scheduling of new instances).

  datastore_regex value should be passed down the stack when the
  datastores are selected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1397956/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1308958] Re: Neutron net-list returns all networks for user in multiple tenants

2014-12-01 Thread Ann Kamyshnikova
It seems that the wrong command was used to show all networks that belong
to the current tenant. I ran

neutron net-list -- --tenant_id TENANT_ID

and it correctly shows only the list of networks that belong to that
tenant. The --os-tenant-id flag sets the authentication tenant ID; it is
not the same as listing with a filter on tenant_id. Here is an example of
running the commands with the --debug flag:
http://paste.openstack.org/show/142516/. It shows that 'neutron --os-
tenant-id TENANT_ID net-list' sends the request without specifying
tenant_id, while 'neutron net-list -- --tenant_id TENANT_ID' specifies it.
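
For completeness, a hedged sketch with python-neutronclient (credentials and endpoint are placeholders) showing the equivalent explicit filter on tenant_id that the CLI applies with 'neutron net-list -- --tenant_id ...':

    from neutronclient.v2_0 import client

    TENANT_ID = '0dc52bffe50d47f7a42674969bd29f3c'

    neutron = client.Client(username='demo',
                            password='secret',
                            tenant_id=TENANT_ID,
                            auth_url='http://controller:5000/v2.0')

    # filter explicitly on tenant_id instead of relying on the auth tenant
    networks = neutron.list_networks(tenant_id=TENANT_ID)['networks']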

** Changed in: neutron
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1308958

Title:
  Neutron net-list returns all networks for user in multiple tenants

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  I have a user, who belongs to multiple tenants.

  When executing neutron net-list (specifying the tenant id), neutron
  returns all networks for all of the tenants my user belongs to; I
  would expect it to only return the networks for the specified tenant.

  e.g.

  neutron --os-tenant-id 0dc52bffe50d47f7a42674969bd29f3c net-list
  +--------------------------------------+------------+-----------------------------------------------------+
  | id                                   | name       | subnets                                             |
  +--------------------------------------+------------+-----------------------------------------------------+
  | 11e304ec-5b67-4980-aa57-da10d0f057a6 | Content    | 3d550793-2da9-4354-9243-0a071a5aa5d8 172.16.0.0/24  |
  | 3942eef0-8fe8-4ec1-aa3b-77a4c40ab1fc | Internal   | 479785e7-246d-473a-8cb1-4730240342b3 192.168.0.0/24 |
  | 3aed9b6b-387b-4b9d-a9e4-a4bdeab349b7 | Internal   | d6ab13ff-2de4-44f9-ac07-b4bb998d2b72 192.168.0.0/24 |
  | 3d4883f9-7b3d-4ef1-a293-419127bc958c | Content    | 22c7d766-ea8b-4e42-9830-82fe8b239b3f 172.16.0.0/24  |
  | 5bab1a18-34fa-400e-a357-cb4d16e4b0b2 | Content    | aaa60d54-dd84-4a39-9fee-dc928ef1b532 172.16.0.0/24  |
  | 6edaf1b2-bbd1-4ae4-b3a4-faea5ebf3732 | Internal   | be944439-ecea-4006-9fca-c4402c461360 192.168.0.0/24 |
  | 71533970-1cb6-415c-9845-0e850f08526b | Internal   | c6efc50b-17ba-4dc4-9602-12e4a5dff9a7 192.168.0.0/24 |
  | 937d50a0-c07a-49e5-8d5e-277a21a79a60 | ext_net    |                                                     |
  | 9b3cb15d-099d-4673-97b6-fbcd9181962f | Management | 0ddb260e-1f30-4def-8304-19733a90c860 10.20.76.0/24  |
  | 9c534554-7d5d-47d8-8305-28af162c9c52 | Content    | a73f7e75-d1eb-4f96-b25a-ba2d832c7c76 172.16.0.0/24  |
  | a2031601-6a01-4986-b984-98eb0701f393 | Management | 803a6c01-a78b-47a8-bc51-e4e698283128 10.20.78.0/24  |
  | ac9af807-8205-4649-80c4-962202a6ac8c | Management | 08650fa9-7fe4-481a-a0ab-357455e658ad 10.20.77.0/24  |
  +--------------------------------------+------------+-----------------------------------------------------+

  The problem is in a multi-tenant environment, I deploy multiple
  networks with the same names.  This means I cannot look up networks by
  name, but must always use the unique ID.  This makes
  templating/scripting more challenging.

  If I were to execute  'nova --os-tenant-id
  0dc52bffe50d47f7a42674969bd29f3c list' as the same user, this will
  only list the instances in the specified tenant.

  Neutron should behave in the same way.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1308958/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1396544] Re: Default `target={}` value leaks into subsequent `policy.check()` calls

2014-12-01 Thread Thierry Carrez
Confirmed class D

** Information type changed from Private Security to Public

** Changed in: ossa
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1396544

Title:
  Default `target={}` value leaks into subsequent `policy.check()` calls

Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in OpenStack Dashboard (Horizon) icehouse series:
  New
Status in OpenStack Dashboard (Horizon) juno series:
  New
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  Because a mutable dictionary is used as the default value of the `target`
  argument, the first target calculated from scratch in the POLICY_CHECK
  function will be reused for all subsequent calls to POLICY_CHECK made
  with 2 arguments. The wrong `target` can lead either to a reduced set of
  operations on an entity for a given user, or to an enlarged one. The
  latter case is a security breach from a cloud operator's point of view.
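
  To illustrate the underlying Python pitfall, a minimal self-contained sketch (hypothetical function names, not Horizon's actual policy code): a mutable default argument is created once at function definition time, so whatever the first call stores in it leaks into every later call that omits the argument.

      def policy_check_broken(action, project_id, target={}):
          # the same dict object is reused across calls that omit `target`
          target.setdefault('project_id', project_id)
          return dict(target)

      def policy_check_fixed(action, project_id, target=None):
          # build a fresh dict per call instead of sharing a single default
          if target is None:
              target = {}
          target.setdefault('project_id', project_id)
          return dict(target)

      # the broken variant leaks the first project_id into the second call:
      assert policy_check_broken('get', 'tenant-a')['project_id'] == 'tenant-a'
      assert policy_check_broken('get', 'tenant-b')['project_id'] == 'tenant-a'
      # the fixed variant does not:
      assert policy_check_fixed('get', 'tenant-b')['project_id'] == 'tenant-b'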

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1396544/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1390124] Re: No validation between client's IdP and Keystone IdP

2014-12-01 Thread Thierry Carrez
Confirmed Class B1

** Information type changed from Private Security to Public

** Changed in: ossa
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1390124

Title:
  No validation between client's IdP and Keystone IdP

Status in OpenStack Identity (Keystone):
  Triaged
Status in OpenStack Security Advisories:
  Won't Fix
Status in OpenStack Security Notes:
  In Progress

Bug description:
  With today's configuration there is no strict link between a federated
  assertion issued by a trusted IdP and an IdP configured inside
  Keystone. Hence, a user has the ability to choose a mapping and
  possibly gain unauthorized access.

  Proposed solution: take the IdP identifier included in the assertion
  issued by the IdP and validate that it matches the IdP configured in
  Keystone, i.e. that both values are equal.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1390124/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398086] [NEW] nova servers pagination does not work with deleted marker

2014-12-01 Thread Alvaro Lopez
Public bug reported:

Nova does not paginate correctly if the marker is a deleted server.

I am trying to get all of the servers for a given tenant. In total (i.e.
active, delete, error, etc.) there are 405 servers.

If I query the API without a marker and with a limit larger (for example, 500)
than the total number of servers I get all of them, i.e. the following query
correctly returns 405 servers:

curl (...) http://cloud.example.org:8774/v1.1/foo/servers?changes-
since=2014-01-01&limit=500

However, if I try to paginate over them, doing:

curl (...) http://cloud.example.org:8774/v1.1/foo/servers?changes-
since=2014-01-01&limit=100

I get the first 100 with a link to the next page. If I try to follow it:

curl (...) http://cloud.example.org:8774/v1.1/foo/servers?changes-
since=2014-01-01&limit=100&marker=foobar

I am always getting a badRequest error saying that the marker is not found. I
guess this is because of these lines in nova/db/sqlalchemy/api.py

2000 # paginate query
2001 if marker is not None:
2002 try:
2003 marker = _instance_get_by_uuid(context, marker, 
session=session)
2004 except exception.InstanceNotFound:
2005 raise exception.MarkerNotFound(marker)

The function _instance_get_by_uuid gets the machines that are not
deleted, therefore it fails to locate the marker if it is a deleted
server.
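
A hedged sketch (not the actual Nova patch; it assumes the context object supports a read_deleted mode, as nova's RequestContext does) of how the marker lookup could tolerate soft-deleted instances so pagination can continue past them:

    def _get_marker_instance(context, session, marker_uuid):
        """Look up the pagination marker, including soft-deleted instances."""
        marker_context = context.elevated(read_deleted='yes')
        try:
            return _instance_get_by_uuid(marker_context, marker_uuid,
                                         session=session)
        except exception.InstanceNotFound:
            raise exception.MarkerNotFound(marker_uuid)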

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1398086

Title:
  nova servers pagination does not work with deleted marker

Status in OpenStack Compute (Nova):
  New

Bug description:
  Nova does not paginate correctly if the marker is a deleted server.

  I am trying to get all of the servers for a given tenant. In total
  (i.e. active, delete, error, etc.) there are 405 servers.

  If I query the API without a marker and with a limit larger (for example, 500)
  than the total number of servers I get all of them, i.e. the following query
  correctly returns 405 servers:

  curl (...) http://cloud.example.org:8774/v1.1/foo/servers
  ?changes-since=2014-01-01&limit=500

  However, if I try to paginate over them, doing:

  curl (...) http://cloud.example.org:8774/v1.1/foo/servers
  ?changes-since=2014-01-01&limit=100

  I get the first 100 with a link to the next page. If I try to follow
  it:

  curl (...) http://cloud.example.org:8774/v1.1/foo/servers
  ?changes-since=2014-01-01&limit=100&marker=foobar

  I am always getting a badRequest error saying that the marker is not found. 
I
  guess this is because of these lines in nova/db/sqlalchemy/api.py

  2000 # paginate query
  2001 if marker is not None:
  2002 try:
  2003 marker = _instance_get_by_uuid(context, marker, 
session=session)
  2004 except exception.InstanceNotFound:
  2005 raise exception.MarkerNotFound(marker)

  The function _instance_get_by_uuid gets the machines that are not
  deleted, therefore it fails to locate the marker if it is a deleted
  server.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1398086/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398094] [NEW] cloud-init-0.7.4-2.el6.noarch requires rsyslog rather than the general service syslog

2014-12-01 Thread Evan
Public bug reported:

When installing the cloud-init-0.7.4-2.el6.noarch package, the
dependency on rsyslog prevents our install.

We replace rsyslog with syslog-ng, and the rest of the OS does not have a
problem because all of the OS packages require 'syslog' rather than the
specific rsyslog package.

All of the syslog packages (logd, rsyslog, syslog-ng) provide 'syslog',
so any one of them will work.

The cloud-init-0.7.4-2.el6.noarch package should only require 'syslog'.

** Affects: cloud-init
 Importance: Undecided
 Status: New


** Tags: cloud-init rsyslog syslog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1398094

Title:
  cloud-init-0.7.4-2.el6.noarch requires rsyslog rather than the general
  service syslog

Status in Init scripts for use on cloud images:
  New

Bug description:
  When installing the cloud-init-0.7.4-2.el6.noarch package, the
  dependency on rsyslog prevents our install.

  We replace rsyslog with syslog-ng, and the rest of the OS does not have
  a problem because all of the OS packages require 'syslog' rather than
  the specific rsyslog package.

  All of the syslog packages (logd, rsyslog, syslog-ng) provide 'syslog',
  so any one of them will work.

  The cloud-init-0.7.4-2.el6.noarch package should only require 'syslog'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1398094/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1396362] [NEW] Support UID and GID specification in user and group definitions

2014-12-01 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Hi, it would be awesome if the Group and User directives had attributes
for setting UID and GIDs.

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
Support UID and GID specification in user and group definitions
https://bugs.launchpad.net/bugs/1396362
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to cloud-init.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1396362] Re: Support UID and GID specification in user and group definitions

2014-12-01 Thread Robie Basak
Thank you for taking the time to report this bug and helping to make
Ubuntu better.

Sounds like this is a wishlist item that needs to go upstream in the
first instance.

** Package changed: cloud-init (Ubuntu) => cloud-init

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1396362

Title:
  Support UID and GID specification in user and group definitions

Status in Init scripts for use on cloud images:
  New

Bug description:
  Hi, it would be awesome if the Group and User directives had
  attributes for setting UID and GIDs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1396362/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1396849] Re: internalURL and adminURL of endpoints should not be visible to ordinary user

2014-12-01 Thread Morgan Fainberg
Based on the ML topic, and that admin/internal URL is not universal (nor
clearly isolated) this is not something that we can likely fix without
breaking the API contract. We could look at changing the format of the
catalog, but I think this is a much, much, bigger topic. Many actions
need access to the different interfaces to succeed.

Second, if someone does not have the endpoint in the catalog it doesn't
prevent them from accessing/using the endpoint if they know it a priori.
This is not something that I expect we will change. This should be
handled in policy enforcement (currently policy.json).

Longer term we are looking at providing endpoint binding - in theory we
could expand this to cover the differing interfaces *where* possible.
Feel free to comment at https://review.openstack.org/#/c/123726/ on the
token constraint specification which will include the ability to
restrict the user from accessing a specific endpoint if they are not
authorized to do-so.

** Changed in: keystone
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1396849

Title:
  internalURL and adminURL of endpoints should not be visible to
  ordinary user

Status in OpenStack Identity (Keystone):
  Won't Fix

Bug description:
  If an ordinary user sends a get-token request to Keystone, the
  internalURL and adminURL of the endpoints will also be returned. This
  exposes the internal high-privilege access addresses to the ordinary
  user and creates the risk that a malicious user could attack or hijack
  the system.

  The request to get a token for an ordinary user:
  curl -d '{"auth":{"passwordCredentials":{"username": "huawei", "password":
  "2014"},"tenantName":"huawei"}}' -H "Content-type: application/json"
  http://localhost:5000/v2.0/tokens

  the response:
  {"access": {"token": {"issued_at": "2014-11-27T02:30:59.218772",
                        "expires": "2014-11-27T03:30:59Z",
                        "id": "b8684d2b68ab49d5988da9197f38a878",
                        "tenant": {"description": "normal Tenant", "enabled": true,
                                   "id": "7ed3351cd58349659f0bfae002f76a77",
                                   "name": "huawei"},
                        "audit_ids": ["Ejn3BtaBTWSNtlj7beE9bQ"]},
              "serviceCatalog": [
                {"endpoints": [{"adminURL": "http://10.67.148.27:8774/v2/7ed3351cd58349659f0bfae002f76a77",
                                "region": "regionOne",
                                "internalURL": "http://10.67.148.27:8774/v2/7ed3351cd58349659f0bfae002f76a77",
                                "id": "170a3ae617a1462c81bffcbc658b7746",
                                "publicURL": "http://10.67.148.27:8774/v2/7ed3351cd58349659f0bfae002f76a77"}],
                 "endpoints_links": [], "type": "compute", "name": "nova"},
                {"endpoints": [{"adminURL": "http://10.67.148.27:9696",
                                "region": "regionOne",
                                "internalURL": "http://10.67.148.27:9696",
                                "id": "7c0f28aa4710438bbd84fd25dbe4daa6",
                                "publicURL": "http://10.67.148.27:9696"}],
                 "endpoints_links": [], "type": "network", "name": "neutron"},
                {"endpoints": [{"adminURL": "http://10.67.148.27:9292",
                                "region": "regionOne",
                                "internalURL": "http://10.67.148.27:9292",
                                "id": "576f41fc8ef14b4f90e516bb45897491",
                                "publicURL": "http://10.67.148.27:9292"}],
                 "endpoints_links": [], "type": "image", "name": "glance"},
                {"endpoints": [{"adminURL": "http://10.67.148.27:8777",
                                "region": "regionOne",
                                "internalURL": "http://10.67.148.27:8777",
                                "id": "77d464e146f242aca3c50e10b6cfdaa0",
                                "publicURL": "http://10.67.148.27:8777"}],
                 "endpoints_links": [], "type": "metering", "name": "ceilometer"},
                {"endpoints": [{"adminURL": "http://10.67.148.27:6385",
                                "region": "regionOne",
                                "internalURL": "http://10.67.148.27:6385",
                                "id": "1b8177826e0c426fa73e5519c8386589",
                                "publicURL": "http://10.67.148.27:6385"}],
                 "endpoints_links": [], "type": "baremetal", "name": "ironic"},
                {"endpoints": [{"adminURL": "http://10.67.148.27:35357/v2.0",
                                "region": "regionOne",
                                "internalURL": "http://10.67.148.27:5000/v2.0",
                                "id": "435ae249fd2a427089cb4bf2e6c0b8e9",
                                "publicURL": "http://10.67.148.27:5000/v2.0"}],
                 "endpoints_links": [], "type": "identity", "name": "keystone"}],
              "user": {"username": "huawei", "roles_links": [],
                       "id": "a88a40a635334e5da2ac3523d9780ed3",
                       "roles": [{"name": "_member_"}], "name": "huawei"},
              "metadata": {"is_admin": 0,
                           "roles": ["73b0a1ac6b0c48cb90205c53f2b9e48d"]}}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1396849/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398165] [NEW] Unable to update a region description to None

2014-12-01 Thread Lance Bragstad
Public bug reported:

The region table doesn't allow for nullable descriptions [1]. The
catalog Manager checks if region['description'] is set in the request
and, if the user hasn't provided a description for the region, the
Manager sets it to an empty string [2]. If the user creates a region
with a description and then later tries to update the description to be
None, or an empty string, the request will fail because validation
against the description field will fail:

 Invalid input for field 'description'. The value is 'None'.

The user should be able to pass None, or null in JSON, to Keystone in a
region request. Region descriptions are documented as being optional.

[1] 
https://github.com/openstack/keystone/blob/2d829b4d9a886909735daa0f8a9419c8ba8d3f87/keystone/common/validation/parameter_types.py#L40-L42
[2] 
https://github.com/openstack/keystone/blob/2d829b4d9a886909735daa0f8a9419c8ba8d3f87/keystone/catalog/core.py#L103-L106
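
A hedged sketch of the kind of JSON Schema change this implies (names assumed; the real definitions live in the parameter_types.py and catalog schema files linked above): allow null alongside string for the description so an update can clear it.

    # before: description must be a string, so null is rejected
    description_string_only = {'type': 'string'}

    # after: accept either a string or null
    nullable_description = {'type': ['string', 'null']}

    region_update_request_body = {
        'type': 'object',
        'properties': {
            'description': nullable_description,
            'url': {'type': ['string', 'null']},
            'parent_region_id': {'type': ['string', 'null']},
        },
        'additionalProperties': True,
    }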

** Affects: keystone
 Importance: Undecided
 Assignee: David Stanek (dstanek)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1398165

Title:
  Unable to update a region description to None

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  The region table doesn't allow for nullable descriptions [1]. The
  catalog Manager checks if region['description'] is set in the request
  and, if the user hasn't provided a description for the region, the
  Manager sets it to an empty string [2]. If the user creates a region
  with a description and then later tries to update the description to
  be None, or an empty string, the request will fail because validation
  against the description field will fail:

   Invalid input for field 'description'. The value is 'None'.

  The user should be able to pass None, or null in JSON, to Keystone in
  a region request. Region descriptions are documented as being optional.

  [1] 
https://github.com/openstack/keystone/blob/2d829b4d9a886909735daa0f8a9419c8ba8d3f87/keystone/common/validation/parameter_types.py#L40-L42
  [2] 
https://github.com/openstack/keystone/blob/2d829b4d9a886909735daa0f8a9419c8ba8d3f87/keystone/catalog/core.py#L103-L106

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1398165/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1397504] Re: In LBaas, delete the pool at the same time direct delete member is not reasonable

2014-12-01 Thread Eugene Nikanorov
I think we will not be fixing this issue in LBaaS v1.

It could, however, be brought up for discussion in the community for
LBaaS v2.

** Tags added: api

** Changed in: neutron
   Importance: Undecided => Low

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1397504

Title:
  In LBaas, delete the pool at the same time direct delete member is not
  reasonable

Status in OpenStack Neutron (virtual network service):
  Won't Fix

Bug description:
  From the user's point of view, a member is an object independent of the
  pool, so deleting a pool that still has members bound to it should not
  silently delete those members as well. The members should be deleted
  first and then the pool, or at the very least the delete should prompt
  the user about it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1397504/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1396650] Re: Duplicated tests for Catalog

2014-12-01 Thread Steve Martinelli
test_backend.py does as it suggests: it creates an instance of
catalog_api (or whatever is comparable, e.g. identity_api) and attempts
to test functions that are common to all backends.

test_catalog.py should be testing a larger flow: it hits the endpoint
(testing the routers.py file), then the controller (testing the
controller.py file), and finally the backend. So I think there is merit
to having both exist.

Also, we generally don't file bugs for tests (unless they are failing),
more so for refactoring.

If you disagree, re-open the bug.

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1396650

Title:
  Duplicated tests for Catalog

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  About testing the catalog backends (KVS, SQL and Templated): right now
  we have two different test files doing the same things.

  keystone/tests/test_backend.py - which has a lot of tests related to
  Identity, Policy, Token, ... and Catalog
  keystone/tests/test_catalog.py - which has a few tests only for the
  catalog backends. Those tests are not enough to test everything in the
  catalog.

  In my opinion it is not a good idea to have different test files for
  the same purpose, because some people could implement tests in only one
  of the files, not in both. It is also a pain to maintain.

  I propose to remove all the tests that we have right now in
  keystone/tests/test_catalog.py (please forget class
  V2CatalogTestCase(rest.RestfulTestCase)) and move the tests related
  with Catalog that, right now, we have in
  keystone/tests/test_backend.py to keystone/tests/test_catalog.py

  What do you think?

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1396650/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1396405] Re: why neutron agent use loop to detect the port's change, not use rpc call ?

2014-12-01 Thread Eugene Nikanorov
This is a question, not a bug.

The OVS agent monitors OVS directly because OVS is the source of the real
devices created by the compute service (nova).
Nova can't talk to Neutron agents over RPC, which is why that approach is
not applicable here.

After encountering a new device on OVS, the OVS agent wires the port
according to the network the port is plugged into and reports to the
Neutron server that the port is ready.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1396405

Title:
  why neutron agent use loop to detect the port's change, not use rpc
  call ?

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  I found that every neutron agent has a loop to detect changes to the
  hypervisor's ports and to create ports for instances.
  But I also found that the neutron plugin sends an RPC call via
  AgentNotifierAPI; if the agent implemented a function in the RPC
  callback, it could get the message and then create the port.

  So I am puzzled: why does the agent not use an RPC callback function to
  create the port, but uses a loop to do it instead?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1396405/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1396965] Re: Add capability to detach root device volume of an instance, when in shutoff state

2014-12-01 Thread melanie witt
** Changed in: nova
   Importance: Undecided => Wishlist

** Changed in: nova
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1396965

Title:
  Add capability to detach root device volume of an instance, when in
  shutoff state

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  Currently we cannot detach the root device volume, even if the instance
  is in shutoff state. The following error occurs:
  +++
  ERROR (Forbidden): Can't detach root device volume (HTTP 403) (Request-ID: 
req-57159c1c-5835-4a44-8e41-1b822b92127e)
  +++

  When the instance is in shutoff state, this operation should be allowed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1396965/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398190] [NEW] Nuage: Manual sync should not start if Neutron sync is in progress

2014-12-01 Thread Sayaji Patil
Public bug reported:

There are two ways to run sync in the Nuage plugin:

1) Run it as part of Neutron
2) Run it as a standalone tool

So when the Neutron sync cycle is in progress, one should not be
able to run sync using the standalone tool, and vice versa.
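
As one way to picture the guard (a hedged sketch with assumed file paths and helper names, not the Nuage plugin code, and assuming both sync paths run on the same host), a shared non-blocking lock that the standalone tool refuses to start without:

    import fcntl
    import sys

    LOCK_PATH = '/var/lock/nuage-sync.lock'  # assumed location


    def acquire_sync_lock():
        """Exit if another Nuage sync (in-Neutron or standalone) holds the lock."""
        lock_file = open(LOCK_PATH, 'w')
        try:
            fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except IOError:
            sys.exit('Another Nuage sync is already in progress; try again later.')
        return lock_file  # keep the handle open for the duration of the sync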

** Affects: neutron
 Importance: Undecided
 Assignee: Sayaji Patil (sayaji15)
 Status: New


** Tags: nuage

** Tags added: nuage

** Changed in: neutron
 Assignee: (unassigned) => Sayaji Patil (sayaji15)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1398190

Title:
  Nuage: Manual sync should not start if Neutron sync is in progress

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  There are two ways to run sync in the Nuage plugin:

  1) Run it as part of Neutron
  2) Run it as a standalone tool

  So when the Neutron sync cycle is in progress, one should not be
  able to run sync using the standalone tool, and vice versa.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1398190/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398224] [NEW] Change “Modify Users” to “Manage Members” in the Project Panel

2014-12-01 Thread Pieter
Public bug reported:

Change “Modify Users” to “Manage Members” in the Project and Domain
Panels in-row Actions.  This allows us to be consistent with the tab
headings in both panels.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1398224

Title:
  Change “Modify Users” to “Manage Members” in the Project Panel

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Change “Modify Users” to “Manage Members” in the Project and Domain
  Panels in-row Actions.  This allows us to be consistent with the tab
  headings in both panels.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1398224/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1391694] Re: Warning message about missing policy.d folder during Sahara start

2014-12-01 Thread melanie witt
Just noticed this in nova:

2014-12-02 00:32:49.506 DEBUG nova.openstack.common.fileutils 
[req-a6efc5a7-df21-4d4e-9d37-37761c633416 demo demo] Reloading cached file 
/etc/nova/policy.json from (pid=8046) read_cached_file 
/opt/stack/nova/nova/openstack/common/fileutils.py:62
2014-12-02 00:32:49.510 DEBUG nova.openstack.common.policy 
[req-a6efc5a7-df21-4d4e-9d37-37761c633416 demo demo] Rules successfully 
reloaded from (pid=8046) _load_policy_file 
/opt/stack/nova/nova/openstack/common/policy.py:267
2014-12-02 00:32:49.511 WARNING nova.openstack.common.policy 
[req-a6efc5a7-df21-4d4e-9d37-37761c633416 demo demo] Can not find policy 
directories policy.d

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
   Importance: Undecided => Low

** Changed in: nova
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1391694

Title:
  Warning message about missing policy.d folder during Sahara start

Status in OpenStack Compute (Nova):
  Confirmed
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  Confirmed

Bug description:
  2014-11-11 16:14:05.786 403 WARNING sahara.openstack.common.policy [-]
  Can not find policy directories policy.d

  Example: https://sahara.mirantis.com/logs/31/133131/2/check/gate-
  sahara-integration-vanilla-1/9ca6d41/console.html

  Policy library from oslo searches for policy in directories specified
  by 'policy_dirs' parameter and warns if directory doesn't exist.
  Default value is ['policy.d'].

  Need to check what other projects do about this. I have never seen
  such warnings in other openstack projects.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1391694/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398244] [NEW] DVR flows don't work for IPv6 subnets

2014-12-01 Thread Xu Han Peng
Public bug reported:

Current DVR flows on the integration bridge only work for IPv4 because of
the proto option:


self.int_br.add_flow(table=constants.DVR_TO_SRC_MAC,
                     priority=2,
                     proto='ip',
                     dl_vlan=local_vlan,
                     nw_dst=ip_subnet,
                     actions="strip_vlan,mod_dl_src:%s,"
                             "output:%s" %


We need to change the proto to ipv6 when the subnet is IPv6.
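
A hedged sketch (assumed helper name, not the merged Neutron patch) of selecting the flow protocol from the subnet's IP version before installing the DVR flow:

    import netaddr


    def dvr_flow_proto(ip_subnet):
        """Return the OVS flow match protocol for the given subnet CIDR."""
        # 'ip' matches IPv4 traffic; 'ipv6' matches IPv6 traffic in OVS flows
        return 'ipv6' if netaddr.IPNetwork(ip_subnet).version == 6 else 'ip'

    # e.g. proto=dvr_flow_proto(ip_subnet) would be passed to add_flow() above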

** Affects: neutron
 Importance: Undecided
 Assignee: Xu Han Peng (xuhanp)
 Status: New


** Tags: dvr ipv6

** Changed in: neutron
 Assignee: (unassigned) => Xu Han Peng (xuhanp)

** Tags added: dvr ipv6

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1398244

Title:
  DVR flows don't work for IPv6 subnets

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Current DVR flows on the integration bridge only work for IPv4 because
  of the proto option:

  
  self.int_br.add_flow(table=constants.DVR_TO_SRC_MAC,
                       priority=2,
                       proto='ip',
                       dl_vlan=local_vlan,
                       nw_dst=ip_subnet,
                       actions="strip_vlan,mod_dl_src:%s,"
                               "output:%s" %

  
  We need to change the proto to ipv6 when the subnet is IPv6.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1398244/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398086] Re: nova servers pagination does not work with deleted marker

2014-12-01 Thread Deliang Fan
In my opinion, it's not a bug because the deleted vm should not be
queried.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1398086

Title:
  nova servers pagination does not work with deleted marker

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Nova does not paginate correctly if the marker is a deleted server.

  I am trying to get all of the servers for a given tenant. In total
  (i.e. active, delete, error, etc.) there are 405 servers.

  If I query the API without a marker and with a limit larger (for example, 500)
  than the total number of servers I get all of them, i.e. the following query
  correctly returns 405 servers:

  curl (...) http://cloud.example.org:8774/v1.1/foo/servers
  ?changes-since=2014-01-01&limit=500

  However, if I try to paginate over them, doing:

  curl (...) http://cloud.example.org:8774/v1.1/foo/servers
  ?changes-since=2014-01-01&limit=100

  I get the first 100 with a link to the next page. If I try to follow
  it:

  curl (...) http://cloud.example.org:8774/v1.1/foo/servers
  ?changes-since=2014-01-01&limit=100&marker=foobar

  I am always getting a badRequest error saying that the marker is not found. 
I
  guess this is because of these lines in nova/db/sqlalchemy/api.py

  2000 # paginate query
  2001 if marker is not None:
  2002 try:
  2003 marker = _instance_get_by_uuid(context, marker, 
session=session)
  2004 except exception.InstanceNotFound:
  2005 raise exception.MarkerNotFound(marker)

  The function _instance_get_by_uuid gets the machines that are not
  deleted, therefore it fails to locate the marker if it is a deleted
  server.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1398086/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1352609] Re: RTNETLINK answers: File exists

2014-12-01 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1352609

Title:
  RTNETLINK answers: File exists

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  Running into this bug intermittently.

  2014-08-04 23:24:43.520 4038 ERROR neutron.agent.dhcp_agent [-] Unable to 
enable dhcp for 415a0839-eb05-4e7a-907c-413c657f4bf5.
  2014-08-04 23:24:43.520 4038 TRACE neutron.agent.dhcp_agent Traceback (most 
recent call last):
  2014-08-04 23:24:43.520 4038 TRACE neutron.agent.dhcp_agent   File 
/usr/lib/python2.7/dist-packages/neutron/agent/dhcp_agent.py, line 127, in 
call_driver
  2014-08-04 23:24:43.520 4038 TRACE neutron.agent.dhcp_agent 
getattr(driver, action)(**action_kwarg2014-08-04 23:24:43.520 4038 TRACE 
neutron.agent.dhcp_agent   File 
/usr/lib/python2.7/dist-packages/neutron/agent/dhcp_agent.py, line 127, in 
call_driver
  2014-08-04 23:24:43.520 4038 TRACE neutron.agent.dhcp_agent 
getattr(driver, action)(**action_kwargs)
  2014-08-04 23:24:43.520 4038 TRACE neutron.agent.dhcp_agent   File 
/usr/lib/python2.7/dist-packages/neutron/agent/linux/dhcp.py, line 166, in 
enable
  2014-08-04 23:24:43.520 4038 TRACE neutron.agent.dhcp_agent 
reuse_existing=True)
  2014-08-04 23:24:43.520 4038 TRACE neutron.agent.dhcp_agent   File 
/usr/lib/python2.7/dist-packages/neutron/agent/linux/dhcp.py, line 832, in 
setup
  2014-08-04 23:24:43.520 4038 TRACE neutron.agent.dhcp_agent 
namespace=network.namespace)
  2014-08-04 23:24:43.520 4038 TRACE neutron.agent.dhcp_agent   File 
/usr/lib/python2.7/dist-packages/neutron/agent/linux/interface.py, line 178, 
in plug
  2014-08-04 23:24:43.520 4038 TRACE neutron.agent.dhcp_agent 
namespace2=namespace)
  2014-08-04 23:24:43.520 4038 TRACE neutron.agent.dhcp_agent   File 
/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py, line 129, in 
add_veth
  2014-08-04 23:24:43.520 4038 TRACE neutron.agent.dhcp_agent 
self._as_root('', 'link', tuple(args))
  2014-08-04 23:24:43.520 4038 TRACE neutron.agent.dhcp_agent   File 
/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py, line 70, in 
_as_root
  2014-08-04 23:24:43.520 4038 TRACE neutron.agent.dhcp_agent namespace)
  2014-08-04 23:24:43.520 4038 TRACE neutron.agent.dhcp_agent   File 
/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py, line 81, in 
_execute
  2014-08-04 23:24:43.520 4038 TRACE neutron.agent.dhcp_agent 
root_helper=root_helper)
  2014-08-04 23:24:43.520 4038 TRACE neutron.agent.dhcp_agent   File 
/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py, line 76, in 
execute
  2014-08-04 23:24:43.520 4038 TRACE neutron.agent.dhcp_agent raise 
RuntimeError(m)
  2014-08-04 23:24:43.520 4038 TRACE neutron.agent.dhcp_agent RuntimeError:
  2014-08-04 23:24:43.520 4038 TRACE neutron.agent.dhcp_agent Command: ['sudo', 
'/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'link', 'add', 
'tapec90379e-17', 'type', 'veth', 'peer', 'name', 'ns-ec90379e-17', 'netns', 
'qdhcp-415a0839-eb05-4e7a-907c-413c657f4bf5']
  2014-08-04 23:24:43.520 4038 TRACE neutron.agent.dhcp_agent Exit code: 2
  2014-08-04 23:24:43.520 4038 TRACE neutron.agent.dhcp_agent Stdout: ''
  2014-08-04 23:24:43.520 4038 TRACE neutron.agent.dhcp_agent Stderr: 
'RTNETLINK answers: File exists\n'
  2014-08-04 23:24:43.520 4038 TRACE neutron.agent.dhcp_agent

  ip netns seems broken:

  $ sudo ip netns exec qrouter-42ce8973-6a4c-4eb8-9678-4aec5532d7b6 ip route
  seting (sic) the network namespace 
qrouter-42ce8973-6a4c-4eb8-9678-4aec5532d7b6 failed: Invalid argument

  This is a MAAS juju deployed openstack using charmstore charms. This
  is in a smoosh environment where neutron-gateway is hulked smashed on
  juju node 0.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1352609/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1341243] Re: The instance's status is ACTIVE even though the TAP device is DOWN and it doesn't have an IP address

2014-12-01 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete = Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1341243

Title:
  The instance's status is ACTIVE even though the TAP device is DOWN and
  it doesn't have an IP address

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  During scale tests (80 instances), a few of the created instances show
  status ACTIVE (nova list/show) even though their TAP devices are DOWN
  and they don't have an IP address.

  Version - Icehouse with RHEL7
  GRE+ML2 - All-In-One+ Compute node

  openstack-nova-cert-2014.1-7.el7ost.noarch
  openstack-neutron-openvswitch-2014.1-35.el7ost.noarch
  openstack-nova-compute-2014.1-7.el7ost.noarch
  openstack-neutron-2014.1-35.el7ost.noarch
  openstack-neutron-ml2-2014.1-35.el7ost.noarch

  nova list
  19d932b1-02fa-44bb-ade3-425d7d74baad | host1 | stress8-26 | 
private8=192.168.8.69 | ACTIVE

  neutron port list
  1d98786a-b4ce-4bb2-83d1-aa53c36fb047 |  | fa:16:3e:90:c2:26 | 
{subnet_id: cff025fc-ac5c-4931-b019-927bcc9b0cb0, ip_address: 
192.168.8.69}

  ip a | grep 1d98786a-b4
  569: qbr1d98786a-b4: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc noqueue 
state UP
  570: qvo1d98786a-b4: BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP mtu 1500 qdisc 
pfifo_fast master ovs-system state UP qlen 1000
  571: qvb1d98786a-b4: BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP mtu 1500 qdisc 
pfifo_fast master qbr1d98786a-b4 state UP qlen 1000
  575: tap1d98786a-b4: BROADCAST,MULTICAST mtu 1500 qdisc pfifo_fast master 
qbr1d98786a-b4 state DOWN qlen 500

  Steps to reproduce:

  1. ifconfig tap1d98786a-b4 down
  2. nova show instance id
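
  A small operator-side check (a sketch assuming a Linux host and the
  "tap" + first 11 characters of the port UUID naming visible in the
  `ip a` output above) that exposes the mismatch described here:

  def tap_operstate(port_id):
      # Read the kernel operstate ('up'/'down') of the port's tap device.
      tap = 'tap' + port_id[:11]
      with open('/sys/class/net/%s/operstate' % tap) as f:
          return f.read().strip()

  # Port from the report; on the affected host this prints 'down' even
  # though `nova show` still reports the instance as ACTIVE.
  print(tap_operstate('1d98786a-b4ce-4bb2-83d1-aa53c36fb047'))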

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1341243/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398266] [NEW] pretty_tox unnecessarily uses bash

2014-12-01 Thread YAMAMOTO Takashi
Public bug reported:

A recent change (commit 0d5a11d9c722870f9c5e31a993219c7e240b4e19)
introduced a bash dependency for something that can easily be done
without using any bash-specific feature.

** Affects: neutron
 Importance: Undecided
 Assignee: YAMAMOTO Takashi (yamamoto)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1398266

Title:
  pretty_tox unnecessarily uses bash

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  A recent change (commit 0d5a11d9c722870f9c5e31a993219c7e240b4e19)
  introduced a bash dependency for something that can easily be done
  without using any bash-specific feature.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1398266/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398270] [NEW] test_floating_ips internal server error while processing your request

2014-12-01 Thread Joshua Harlow
Public bug reported:

http://logs.openstack.org/58/136958/9/check/gate-tempest-dsvm-neutron-
src-taskflow-icehouse/692967d/logs/screen-q-svc.txt.gz

pythonlogging:'': {{{
2014-12-02 04:02:35,490 1953 DEBUG[tempest.common.rest_client] Request 
(FloatingIPTestJSON:test_floating_ip_delete_port): 500 POST 
http://127.0.0.1:9696/v2.0/floatingips 51.120s
Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': 'omitted'}
Body: {floatingip: {floating_network_id: 
3b4e3acc-e97a-4e2f-ac35-4151ab41ffe8}}
Response - Headers: {'status': '500', 'content-length': '88', 'connection': 
'close', 'date': 'Tue, 02 Dec 2014 04:02:35 GMT', 'content-type': 
'application/json; charset=UTF-8', 'x-openstack-request-id': 
'req-f3d9c304-2e15-4ea9-bd82-3d45fa491b2b'}
Body: {NeutronError: Request Failed: internal server error while 
processing your request.}
}}}

Traceback (most recent call last):
  File tempest/api/network/test_floating_ips.py, line 125, in 
test_floating_ip_delete_port
floating_network_id=self.ext_net_id)
  File tempest/services/network/network_client_base.py, line 151, in _create
resp, body = self.post(uri, post_data)
  File tempest/services/network/network_client_base.py, line 74, in post
return self.rest_client.post(uri, body, headers)
  File tempest/common/rest_client.py, line 249, in post
return self.request('POST', url, extra_headers, headers, body)
  File tempest/common/rest_client.py, line 451, in request
resp, resp_body)
  File tempest/common/rest_client.py, line 547, in _error_checker
raise exceptions.ServerFault(message)
ServerFault: Got server fault
Details: {NeutronError: Request Failed: internal server error while 
processing your request.}
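
A hedged reproduction of the request the test issues (endpoint, token and
network UUID are placeholders copied from the log above, not working
credentials); replaying it is enough to hit the server-side rollback shown
below:

    import json
    import requests

    NEUTRON_URL = 'http://127.0.0.1:9696'
    TOKEN = 'omitted'                                    # placeholder
    EXT_NET_ID = '3b4e3acc-e97a-4e2f-ac35-4151ab41ffe8'  # from the log

    body = {'floatingip': {'floating_network_id': EXT_NET_ID}}
    resp = requests.post(NEUTRON_URL + '/v2.0/floatingips',
                         headers={'Content-Type': 'application/json',
                                  'Accept': 'application/json',
                                  'X-Auth-Token': TOKEN},
                         data=json.dumps(body))
    print(resp.status_code)  # 500 in the failing run
    print(resp.text)         # {"NeutronError": "Request Failed: ..."}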

2014-12-02 04:02:35.481 28507 INFO requests.packages.urllib3.connectionpool [-] 
Starting new HTTP connection (1): 127.0.0.1
2014-12-02 04:02:35.481 28507 ERROR neutron.api.v2.resource [-] create failed
2014-12-02 04:02:35.481 28507 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
2014-12-02 04:02:35.481 28507 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/resource.py, line 87, in resource
2014-12-02 04:02:35.481 28507 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
2014-12-02 04:02:35.481 28507 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/base.py, line 448, in create
2014-12-02 04:02:35.481 28507 TRACE neutron.api.v2.resource obj = 
obj_creator(request.context, **kwargs)
2014-12-02 04:02:35.481 28507 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/services/l3_router/l3_router_plugin.py, line 
107, in create_floatingip
2014-12-02 04:02:35.481 28507 TRACE neutron.api.v2.resource 
initial_status=q_const.FLOATINGIP_STATUS_DOWN)
2014-12-02 04:02:35.481 28507 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/db/l3_db.py, line 649, in create_floatingip
2014-12-02 04:02:35.481 28507 TRACE neutron.api.v2.resource 
context.session.add(floatingip_db)
2014-12-02 04:02:35.481 28507 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 470, 
in __exit__
2014-12-02 04:02:35.481 28507 TRACE neutron.api.v2.resource self.rollback()
2014-12-02 04:02:35.481 28507 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py, line 
60, in __exit__
2014-12-02 04:02:35.481 28507 TRACE neutron.api.v2.resource 
compat.reraise(exc_type, exc_value, exc_tb)
2014-12-02 04:02:35.481 28507 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 467, 
in __exit__
2014-12-02 04:02:35.481 28507 TRACE neutron.api.v2.resource self.commit()
2014-12-02 04:02:35.481 28507 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 377, 
in commit
2014-12-02 04:02:35.481 28507 TRACE neutron.api.v2.resource 
self._prepare_impl()
2014-12-02 04:02:35.481 28507 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 357, 
in _prepare_impl
2014-12-02 04:02:35.481 28507 TRACE neutron.api.v2.resource 
self.session.flush()
2014-12-02 04:02:35.481 28507 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/openstack/common/db/sqlalchemy/session.py, 
line 597, in _wrap
2014-12-02 04:02:35.481 28507 TRACE neutron.api.v2.resource return f(*args, 
**kwargs)
2014-12-02 04:02:35.481 28507 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/openstack/common/db/sqlalchemy/session.py, 
line 836, in flush
2014-12-02 04:02:35.481 28507 TRACE neutron.api.v2.resource return 
super(Session, self).flush(*args, **kwargs)
2014-12-02 04:02:35.481 28507 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 1919, 
in 

[Yahoo-eng-team] [Bug 1398267] [NEW] when restarting the vpn and l3 agent, the firewall rules are applied to all tenants' routers.

2014-12-01 Thread yangzhenyu
Public bug reported:

Hi all:
   When the vpn and l3 agents are restarted, the firewall rules are applied
to all tenants' routers.
   Steps:
   1. Create a network and a router in tenants A and B.
   2. Create a firewall in tenant A.
   3. Restart the vpn and l3 agent services.
   4. ip netns exec qrouter-B_router_uuid iptables -L -t filter -vn

Then I find the firewall rules in the chains neutron-l3-agent-FORWARD and
neutron-vpn-agen-FORWARD.

So I debugged the code and added the following in
neutron/services/firewall/agents/l3reference/firewall_l3_agent.py:

    def _process_router_add(self, ri):
        """On router add, get fw with rules from plugin and update driver."""
        LOG.debug(_("Process router add, router_id: '%s'"), ri.router['id'])
        routers = []
        routers.append(ri.router)
        router_info_list = self._get_router_info_list_for_tenant(
            routers,
            ri.router['tenant_id'])
        if router_info_list:
            # Get the firewall with rules
            # for the tenant the router is on.
            ctx = context.Context('', ri.router['tenant_id'])
            fw_list = self.fwplugin_rpc.get_firewalls_for_tenant(ctx)
            LOG.debug(_("Process router add, fw_list: '%s'"),
                      [fw['id'] for fw in fw_list])
            for fw in fw_list:
                if fw['tenant_id'] == ri.router['tenant_id']:  # added check
                    self._invoke_driver_for_sync_from_plugin(
                        ctx,
                        router_info_list,
                        fw)

My neutron version is icehouse.
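
Stated as a standalone sketch (plain dicts standing in for the objects the
agent handles, not the actual neutron code), the proposed change boils down
to filtering the plugin's firewall list by the router's tenant:

    def firewalls_for_router_tenant(fw_list, router):
        # Keep only the firewalls that belong to the router's tenant.
        return [fw for fw in fw_list
                if fw['tenant_id'] == router['tenant_id']]

    fw_list = [{'id': 'fw-a', 'tenant_id': 'tenant-a'},
               {'id': 'fw-b', 'tenant_id': 'tenant-b'}]
    router = {'id': 'r1', 'tenant_id': 'tenant-a'}
    # Only tenant A's firewall survives for tenant A's router.
    print(firewalls_for_router_tenant(fw_list, router))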

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  Hi all:
-when restart the vpn and l3 agent, the firewall rule apply to all 
tenants' router. 
-step:
-1. Create network and router in A and B tenant.
-2. Create a firewall in A tenant.
-3. Restart vpn and l3 agent serivce.
-4. ip netns exec qrouter-B_router_uuid iptables -L -t filter -vn
- Then i find the firewall rule in chain neutron-l3-agent-FORWARD and 
neutron-vpn-agen-FORWARD.
+    when restart the vpn and l3 agent, the firewall rule apply to all tenants' 
router.
+    step:
+    1. Create network and router in A and B tenant.
+    2. Create a firewall in A tenant.
+    3. Restart vpn and l3 agent serivce.
+    4. ip netns exec qrouter-B_router_uuid iptables -L -t filter -vn Then I 
find the firewall rule in chain neutron-l3-agent-FORWARD and 
neutron-vpn-agen-FORWARD.
  
- so I  debug the code,and add some code in 
neutron/services/firewall/agents/l3reference/firewall_l3_agent.py :
- 
-  def _process_router_add(self, ri):
- On router add, get fw with rules from plugin and update driver.
- LOG.debug(_(Process router add, router_id: '%s'), ri.router['id'])
- routers = []
- routers.append(ri.router)
- router_info_list = self._get_router_info_list_for_tenant(
- routers,
- ri.router['tenant_id'])
- if router_info_list:
- # Get the firewall with rules
- # for the tenant the router is on.
- ctx = context.Context('', ri.router['tenant_id'])
- fw_list = self.fwplugin_rpc.get_firewalls_for_tenant(ctx)
- LOG.debug(_(Process router add, fw_list: '%s'),
-   [fw['id'] for fw in fw_list])
- for fw in fw_list:
- +++if fw['tenant_id'] == ri.router['tenant_id']:
-self._invoke_driver_for_sync_from_plugin(
- ctx,
- router_info_list,
-  fw)
+ so I  debug the code,and add some code in
+ neutron/services/firewall/agents/l3reference/firewall_l3_agent.py :
+ 
+  def _process_router_add(self, ri):
+ On router add, get fw with rules from plugin and update driver.
+ LOG.debug(_(Process router add, router_id: '%s'), ri.router['id'])
+ routers = []
+ routers.append(ri.router)
+ router_info_list = self._get_router_info_list_for_tenant(
+ routers,
+ ri.router['tenant_id'])
+ if router_info_list:
+ # Get the firewall with rules
+ # for the tenant the router is on.
+ ctx = context.Context('', ri.router['tenant_id'])
+ fw_list = self.fwplugin_rpc.get_firewalls_for_tenant(ctx)
+ LOG.debug(_(Process router add, fw_list: '%s'),
+   [fw['id'] for fw in fw_list])
+ for fw in fw_list:
+ +if fw['tenant_id'] == ri.router['tenant_id']:
+    self._invoke_driver_for_sync_from_plugin(
+ ctx,
+ router_info_list,
+  fw)
+ 
+ My neutron version is icehouse.

** Description changed:

  Hi all:
     when restart the vpn and l3 agent, the firewall rule apply to all tenants' 
router.
     step:
     1. Create network and router in A and B 

[Yahoo-eng-team] [Bug 1397658] Re: mistake in creating the panel by the command systempanel

2014-12-01 Thread Lin Hua Cheng
*** This bug is a duplicate of bug 1325099 ***
https://bugs.launchpad.net/bugs/1325099

This is fixed now in master

** This bug has been marked a duplicate of bug 1325099
   Index.html is not correctly created when using startpanel command

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1397658

Title:
  mistake in creating the panel by the command systempanel

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  after we create a panel with:
  python manage.py startpanel mytestpanel -d
  openstack_dashboard.dashboards.admin --target auto
  the contents of index.html in
  dashboards/admin/mytestpanel/templates/mytestpanel are:

  {% extends 'admin/base.html' %}
  {% load i18n %}
  {% block title %}{% trans "Mytestpanel" %}{% endblock %}

  {% block page_header %}
    {% include "horizon/common/_page_header.html" with title=_("Mytestpanel") %}
  {% endblock page_header %}

  {% block admin_main %}
  {% endblock %}

  Actually, the first line should be {% extends 'base.html' %}, without
  the admin/ prefix.
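
  With that fix, the head of the generated index.html would read (a sketch
  based on the description above; the rest of the template is unchanged):

  {% extends 'base.html' %}
  {% load i18n %}
  {% block title %}{% trans "Mytestpanel" %}{% endblock %}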

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1397658/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp