[Yahoo-eng-team] [Bug 1480131] [NEW] Volume_Attachment_ID uses Volume_ID

2015-07-31 Thread Maurice Schreiber
Public bug reported:

Version: Kilo Stable

Problem Description: querying nova for volume attachments returns the wrong 
volume_attachment_id.
I receive the volume_id instead of the volume_attachment_id.

Example:

curl -g -H "X-Auth-Token: $ADMIN_TOKEN" -X GET
https://compute:8774/v2/(tenant_id)/servers/56293904-9384-48f8-9329-c961056583f1
/os-volume_attachments

{"volumeAttachments": [{"device": "/dev/vdb", "serverId":
"56293904-9384-48f8-9329-c961056583f1", "id": "a75bec42-77b5-42ff-
90e5-e505af14b84a", "volumeId": "a75bec42-77b5-42ff-
90e5-e505af14b84a"}]}


Having a look at the database directly, I see the real volume_attachment_id:

select (id, volume_id, instance_uuid) from volume_attachment where
volume_id='a75bec42-77b5-42ff-90e5-e505af14b84a';

(9cb82021-e77e-495f-8ade-524bc5ccf68c,a75bec42-77b5-42ff-
90e5-e505af14b84a,56293904-9384-48f8-9329-c961056583f1)


Cinder API gets it right, though.


Further Impact:
Horizon uses the returned volume_attachment_id to query for volume details.
That is wrong and only works now because of nova's broken behaviour.
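To make the mismatch concrete, here is a minimal sketch of the mapping I would expect (the helper name and row layout are mine, not nova's code):

```python
# Sketch: build the os-volume_attachments entry from a volume_attachment
# DB row. The "id" field should carry the attachment id, not the volume id.
def format_attachment(row):
    attachment_id, volume_id, instance_uuid = row
    return {
        "id": attachment_id,     # expected: the real volume_attachment_id
        "volumeId": volume_id,
        "serverId": instance_uuid,
    }

# The row from the volume_attachment table above:
row = ("9cb82021-e77e-495f-8ade-524bc5ccf68c",
       "a75bec42-77b5-42ff-90e5-e505af14b84a",
       "56293904-9384-48f8-9329-c961056583f1")
entry = format_attachment(row)
assert entry["id"] != entry["volumeId"]  # the buggy nova response has them equal
```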

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1480131

Title:
  Volume_Attachment_ID uses Volume_ID

Status in OpenStack Compute (nova):
  New

Bug description:
  Version: Kilo Stable

  Problem Description: querying nova for volume attachments returns the wrong 
volume_attachment_id.
  I receive the volume_id instead of the volume_attachment_id.

  Example:

  curl -g -H "X-Auth-Token: $ADMIN_TOKEN" -X GET
  
https://compute:8774/v2/(tenant_id)/servers/56293904-9384-48f8-9329-c961056583f1
  /os-volume_attachments

  {"volumeAttachments": [{"device": "/dev/vdb", "serverId":
  "56293904-9384-48f8-9329-c961056583f1", "id": "a75bec42-77b5-42ff-
  90e5-e505af14b84a", "volumeId": "a75bec42-77b5-42ff-
  90e5-e505af14b84a"}]}

  
  Having a look at the database directly, I see the real volume_attachment_id:

  select (id, volume_id, instance_uuid) from volume_attachment where
  volume_id='a75bec42-77b5-42ff-90e5-e505af14b84a';

  (9cb82021-e77e-495f-8ade-524bc5ccf68c,a75bec42-77b5-42ff-
  90e5-e505af14b84a,56293904-9384-48f8-9329-c961056583f1)

  
  Cinder API gets it right, though.

  
  Further Impact:
  Horizon uses the returned volume_attachment_id to query for volume details.
  That is wrong and only works now because of nova's broken behaviour.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1480131/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1480131] Re: Volume_Attachment_ID uses Volume_ID

2015-07-31 Thread Maurice Schreiber
** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1480131

Title:
  Volume_Attachment_ID uses Volume_ID

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  Version: Kilo Stable

  Problem Description: querying nova for volume attachments returns the wrong 
volume_attachment_id.
  I receive the volume_id instead of the volume_attachment_id.

  Example:

  curl -g -H "X-Auth-Token: $ADMIN_TOKEN" -X GET
  
https://compute:8774/v2/(tenant_id)/servers/56293904-9384-48f8-9329-c961056583f1
  /os-volume_attachments

  {"volumeAttachments": [{"device": "/dev/vdb", "serverId":
  "56293904-9384-48f8-9329-c961056583f1", "id": "a75bec42-77b5-42ff-
  90e5-e505af14b84a", "volumeId": "a75bec42-77b5-42ff-
  90e5-e505af14b84a"}]}

  
  Having a look at the database directly, I see the real volume_attachment_id:

  select (id, volume_id, instance_uuid) from volume_attachment where
  volume_id='a75bec42-77b5-42ff-90e5-e505af14b84a';

  (9cb82021-e77e-495f-8ade-524bc5ccf68c,a75bec42-77b5-42ff-
  90e5-e505af14b84a,56293904-9384-48f8-9329-c961056583f1)

  
  Cinder API gets it right, though.

  
  Further Impact:
  Horizon uses the returned volume_attachment_id to query for volume details.
  That is wrong and only works now because of nova's broken behaviour.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1480131/+subscriptions



[Yahoo-eng-team] [Bug 1497957] [NEW] Cannot re-trigger delete action on instances in state 'deleting'

2015-09-21 Thread Maurice Schreiber
Public bug reported:

Version: Kilo Stable

TEST CASE: Instance is hanging in state 'deleting'. I click on terminate to 
re-trigger the deletion.
EXPECTED BEHAVIOR: server DELETE request is sent to nova
OBSERVED BEHAVIOR: several GET requests are made and the UI displays: "Error: 
You are not allowed to terminate instance: [instance-id]"


Using the command-line clients for re-triggering the termination is obviously 
possible.
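For reference, a sketch of the server DELETE request Horizon should issue when terminate is clicked again (the helper and the endpoint/tenant values are placeholders, not Horizon code):

```python
# Sketch: the nova API call Horizon should make to re-trigger termination.
def build_delete_request(endpoint, tenant_id, server_id, token):
    url = "%s/v2/%s/servers/%s" % (endpoint, tenant_id, server_id)
    headers = {"X-Auth-Token": token}
    return ("DELETE", url, headers)

method, url, headers = build_delete_request(
    "https://compute:8774", "tenant", "instance-id", "$ADMIN_TOKEN")
```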

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1497957

Title:
  Cannot re-trigger delete action on instances in state 'deleting'

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Version: Kilo Stable

  TEST CASE: Instance is hanging in state 'deleting'. I click on terminate to 
re-trigger the deletion.
  EXPECTED BEHAVIOR: server DELETE request is sent to nova
  OBSERVED BEHAVIOR: several GET requests are made and the UI displays: "Error: 
You are not allowed to terminate instance: [instance-id]"

  
  Using the command-line clients for re-triggering the termination is obviously 
possible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1497957/+subscriptions



[Yahoo-eng-team] [Bug 1501211] [NEW] subnet quota usage count wrong

2015-09-30 Thread Maurice Schreiber
Public bug reported:

The newly introduced disabling of the subnet create button
(https://review.openstack.org/#/c/121935/) does not respect the right
quota usages.

Apparently all subnets visible within a domain (including those of
shared networks) are counted against the subnet quota when deciding
whether creation of new subnets should be allowed.

But only the subnets of the current project should count (whether to
count shared networks is arguable).

Illustrating example:

I have got a domain 'Colors' with projects 'Blue' and 'Green'.

Project Blue has 10 subnets; project Green has 0.
The subnet quota in both projects is 10; nevertheless, the 'Create Subnet'
button is disabled in both projects (but it shouldn't be in project Green).
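A toy sketch of the counting I would expect (my own helper, not Horizon's quota code):

```python
# Sketch: enable/disable the create-subnet button based on the current
# project's own subnet count, not every subnet visible in the domain.
def create_subnet_allowed(visible_subnets, project_id, quota):
    own = sum(1 for s in visible_subnets if s["tenant_id"] == project_id)
    return own < quota

subnets = [{"tenant_id": "blue"}] * 10          # project Blue owns 10 subnets
assert create_subnet_allowed(subnets, "green", quota=10)      # Green: 0 of 10
assert not create_subnet_allowed(subnets, "blue", quota=10)   # Blue: 10 of 10
```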

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1501211

Title:
  subnet quota usage count wrong

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The newly introduced disabling of the subnet create button
  (https://review.openstack.org/#/c/121935/) does not respect the right
  quota usages.

  Apparently all subnets within a domain (including the ones of shared
  networks) are counted to decide against the subnet quota if creation
  of new subnets should be allowed.

  But it should only be the subnets of the current project (counting the
  shared networks is arguable).

  Illustrating example:

  I have got a domain 'Colors' with projects 'Blue' and 'Green'.

  In project Blue are 10 subnets, in project Green 0.
  Subnet Quota in both projects is 10 - nevertheless the 'Create Subnet' Button 
is disabled in both projects (but it shouldn't in project Green).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1501211/+subscriptions



[Yahoo-eng-team] [Bug 1520507] [NEW] Edit project checks wrong usage

2015-11-27 Thread Maurice Schreiber
Public bug reported:

Release: Kilo

Given my token is scoped to project A
and I am project admin in project B

I go to Identity > Projects > Edit Project B
I click 'Save'
I get the error: 'Quota value(s) cannot be less than the current usage 
value(s):...' for volume usages

The problem is: Horizon checks that the quota >= usage before saving and
it wrongly takes the usage of the project you are scoped to instead of
the project you are editing.
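A toy sketch of the check I would expect (my own helper and data shapes, not Horizon's code):

```python
# Sketch: validate the new quota against the usage of the project being
# edited, not the project the token is scoped to.
def quota_error(new_quota, usage_by_project, edited_project_id):
    usage = usage_by_project.get(edited_project_id, 0)
    if new_quota < usage:
        return "Quota value(s) cannot be less than the current usage value(s)"
    return None

usage = {"project-a": 8, "project-b": 2}
# Editing project B with quota 5 must compare against B's usage (2), not A's (8).
assert quota_error(5, usage, "project-b") is None
assert quota_error(5, usage, "project-a") is not None
```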

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1520507

Title:
  Edit project checks wrong usage

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Release: Kilo

  Given my token is scoped to project A
  and I am project admin in project B

  I go to Identity > Projects > Edit Project B
  I click 'Save'
  I get the error: 'Quota value(s) cannot be less than the current usage 
value(s):...' for volume usages

  The problem is: Horizon checks that the quota >= usage before saving
  and it wrongly takes the usage of the project you are scoped to
  instead of the project you are editing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1520507/+subscriptions



[Yahoo-eng-team] [Bug 1520570] [NEW] Unable to retrieve limits on unlimited quota

2015-11-27 Thread Maurice Schreiber
Public bug reported:

Release: Kilo

I have some compute quota set to -1 (tested with keypairs and cores) or some 
network quota set to -1 (tested with floating IPs).
When I click the (new in Kilo) launch instance button, I get the error 'Unable 
to retrieve limits' and cannot launch an instance, because all flavors 
are disabled.

Seems to be a reincarnation of
https://bugs.launchpad.net/horizon/+bug/1098480, the 'old' launch
instance button still works.
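A minimal sketch of the handling I would expect (my own helper; -1 meaning "unlimited" is the quota convention here):

```python
# Sketch: -1 means "unlimited" in compute/network quotas; remaining
# capacity must not be computed as a plain difference.
def remaining(limit, used):
    if limit == -1:
        return float("inf")   # unlimited: never disable flavors for this
    return max(limit - used, 0)

assert remaining(-1, 100) == float("inf")
assert remaining(10, 4) == 6
```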

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1520570

Title:
  Unable to retrieve limits on unlimited quota

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Release: Kilo

  I got some compute quota set to -1 (tested with keypairs and cores) or some 
network quota set to -1 (tested with floating ip).
  When I click the (new in Kilo) launch instance button, I get the error 
'Unable to retrieve limits' and I'm not able to launch an instance, because all 
flavors are disabled.

  Seems to be a reincarnation of
  https://bugs.launchpad.net/horizon/+bug/1098480, the 'old' launch
  instance button still works.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1520570/+subscriptions



[Yahoo-eng-team] [Bug 1520598] [NEW] several launch instance hits cause several requests

2015-11-27 Thread Maurice Schreiber
Public bug reported:

Release: Kilo

I have the 'new' launch instance dialog. After I clicked the launch
instance button it takes a while to get the response, during this time
it is possible to click the button again and another request will be
triggered (resulting in multiple instances being created).

Maybe https://bugs.launchpad.net/horizon/+bug/1461641 would be enough to
acknowledge that the request was sent, but I would rather have the
button disabled after the first submission to protect users from
accidentally creating multiple instances.
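What I mean is a simple in-flight guard, sketched here in Python (the real fix would be client-side JavaScript; names are mine):

```python
# Sketch of a submit guard: ignore further clicks while the first
# launch request is in flight.
class SubmitGuard:
    def __init__(self):
        self.in_flight = False

    def submit(self, send):
        if self.in_flight:
            return False      # button effectively disabled
        self.in_flight = True
        send()
        return True

    def done(self):
        # call when the create-server response (success or error) arrives
        self.in_flight = False

sent = []
guard = SubmitGuard()
guard.submit(lambda: sent.append("POST /servers"))
guard.submit(lambda: sent.append("POST /servers"))  # double-click: ignored
assert sent == ["POST /servers"]
```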

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1520598

Title:
  several launch instance hits cause several requests

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Release: Kilo

  I have the 'new' launch instance dialog. After I clicked the launch
  instance button it takes a while to get the response, during this time
  it is possible to click the button again and another request will be
  triggered (resulting in multiple instances being created).

  Maybe https://bugs.launchpad.net/horizon/+bug/1461641 would be enough
  to acknowledge that the request was sent, but I would rather have the
  button disabled after the first submission to provide the users from
  accidental creation of multiple instances.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1520598/+subscriptions



[Yahoo-eng-team] [Bug 1538625] [NEW] glance-api.conf kilo limit

2016-01-27 Thread Maurice Schreiber
Public bug reported:

The example glance-api.conf is missing the api_limit_max and
limit_param_default [DEFAULT] parameters.

This got fixed in Liberty but was not backported to Kilo, where these
parameters were already present, too.
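For reference, a sketch of what the Kilo example config is missing; the values shown are the option defaults as I know them from the code, please verify against the Kilo tree:

```ini
[DEFAULT]
# Maximum permissible number of items that could be returned by a request.
api_limit_max = 1000
# Default value for the number of items returned by a request if not
# specified explicitly in the request.
limit_param_default = 25
```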

I want to take care of the implementation myself.

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1538625

Title:
  glance-api.conf kilo limit

Status in Glance:
  New

Bug description:
  The example glance-api.conf is missing the api_limit_max and
  limit_param_default [DEFAULT] parameters.

  This got fixed in Liberty but was not backported to Kilo, where these
  parameters were already present, too.

  I want to take care of the implementation myself.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1538625/+subscriptions



[Yahoo-eng-team] [Bug 1649532] [NEW] private flavors globally visible

2016-12-13 Thread Maurice Schreiber
Public bug reported:

I have project A with user Anna, who has a role representing nova admin 
assigned (needed to allow creation of private flavors).
I have project B with user Ben, who has a role representing nova admin assigned 
(needed to allow creation of private flavors).
Anna has no permission on project B.
Ben has no permission on project A.

Anna creates a private flavor 'A_private', gives flavor access to
project A.

Expected behaviour: only Anna (or any other nova admin in project A) can
perform actions on this flavor.

Issue: Ben can perform all sorts of actions on the private flavor
'A_private' (read, delete, manage access, manage extra specs).

Observed in Mitaka, but I haven't seen any updates related to this, so
this should be the same in master. Please correct me if I'm wrong.
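The access rule I would expect, as a toy sketch (my own helper and data shapes, not nova's flavor code):

```python
# Sketch: a private flavor should be visible/usable only to projects on
# its access list; public flavors are visible to everyone.
def can_access(flavor, project_id):
    if flavor["is_public"]:
        return True
    return project_id in flavor["access_projects"]

a_private = {"is_public": False, "access_projects": {"project-a"}}
assert can_access(a_private, "project-a")       # Anna's project
assert not can_access(a_private, "project-b")   # Ben's project: expected denial
```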

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

  I have project A with user Anna, who has a role representing nova admin 
assigned (needed to allow creation of private flavors).
  I have project B with user Ben, who has a role representing nova admin 
assigned (needed to allow creation of private flavors).
  Anna has no permission on project B.
  Ben has no permission on project A.
  
  Anna creates a private flavor 'A_private', gives flavor access to
  project A.
  
  Expected behaviour: only Anna (or any other nova admin in project A) can
  perform actions on this flavor.
  
  Issue: Ben can perform all sort of actions on the private flavor
  'A_private' (read, delete, manage access, manage extra specs).
+ 
+ Observed in Mitaka, but I haven't seen any updates related to this, so
+ this should be the same in master. Please correct me if I'm wrong.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1649532

Title:
  private flavors globally visible

Status in OpenStack Compute (nova):
  New

Bug description:
  I have project A with user Anna, who has a role representing nova admin 
assigned (needed to allow creation of private flavors).
  I have project B with user Ben, who has a role representing nova admin 
assigned (needed to allow creation of private flavors).
  Anna has no permission on project B.
  Ben has no permission on project A.

  Anna creates a private flavor 'A_private', gives flavor access to
  project A.

  Expected behaviour: only Anna (or any other nova admin in project A)
  can perform actions on this flavor.

  Issue: Ben can perform all sort of actions on the private flavor
  'A_private' (read, delete, manage access, manage extra specs).

  Observed in Mitaka, but I haven't seen any updates related to this, so
  this should be the same in master. Please correct me if I'm wrong.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1649532/+subscriptions



[Yahoo-eng-team] [Bug 1653932] [NEW] network router:external field not exported

2017-01-04 Thread Maurice Schreiber
Public bug reported:

Hi, I want to use the network RBAC feature to give 'access_as_external'
to a target project so that this project is able to allocate floating
IPs from that external network (without being owner or admin).

Let's say the external network has two subnets.

So far so good, but if I want to select a specific subnet on floating IP
allocation, that is not possible: I don't see the subnets (I see their
IDs, but those are useless for deciding which subnet to allocate the
floating IP from), and I can't grant access in the policy based on the
'router:external' field of the network.
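What I would like to express in policy.json looks roughly like this (a hypothetical rule, modelled on neutron's field-check syntax; it cannot work today because the router:external field is not exported to the policy engine for these requests):

```json
{
    "external_net": "field:networks:router:external=True",
    "get_subnet": "rule:admin_or_owner or rule:shared or rule:external_net"
}
```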

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1653932

Title:
  network router:external field not exported

Status in neutron:
  New

Bug description:
  Hi, I want to use the network RBAC feature to give
  'access_as_external' to a target project so that this project is able
  to allocate floating IPs from that external network (without being
  owner or admin).

  Let's say the external network has two subnets.

  So far so good, but if I want to select a specific subnet on floating IP
  allocation, that is not possible: I don't see the subnets (I see their
  IDs, but those are useless for deciding which subnet to allocate the
  floating IP from), and I can't grant access in the policy based on the
  'router:external' field of the network.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1653932/+subscriptions



[Yahoo-eng-team] [Bug 1661189] [NEW] calls to cinder always in user context

2017-02-02 Thread Maurice Schreiber
Public bug reported:

My user is not Admin in Cinder. On attaching a volume nova tries to
update the volume admin metadata, but this fails:

"Policy doesn't allow volume:update_volume_admin_metadata to be
performed."

I would have expected that this call from nova to cinder happens in
context of an elevated service user, and not with the user's
context/token.
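nova's request context has an elevated() helper for exactly this; a toy illustration of what I expected to happen (stub classes modelled loosely on it, not nova's real code):

```python
# Toy illustration: the call to cinder should run under an elevated
# (admin) context instead of the end user's token.
class Context:
    def __init__(self, roles):
        self.roles = list(roles)

    def elevated(self):
        # loosely modelled after nova.context.RequestContext.elevated()
        c = Context(self.roles)
        if "admin" not in c.roles:
            c.roles.append("admin")
        return c

def update_volume_admin_metadata(ctx):
    # stand-in for the cinder policy check that rejects my user
    if "admin" not in ctx.roles:
        raise PermissionError(
            "Policy doesn't allow volume:update_volume_admin_metadata")
    return "ok"

user_ctx = Context(["member"])
assert update_volume_admin_metadata(user_ctx.elevated()) == "ok"
```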

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1661189

Title:
  calls to cinder always in user context

Status in OpenStack Compute (nova):
  New

Bug description:
  My user is not Admin in Cinder. On attaching a volume nova tries to
  update the volume admin metadata, but this fails:

  "Policy doesn't allow volume:update_volume_admin_metadata to be
  performed."

  I would have expected that this call from nova to cinder happens in
  context of an elevated service user, and not with the user's
  context/token.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1661189/+subscriptions



[Yahoo-eng-team] [Bug 1662477] [NEW] rbac shared network add_router_interface fails for non-admin

2017-02-07 Thread Maurice Schreiber
Public bug reported:

We are on mitaka and use rbac to share private networks.
We defined in the neutron policy that non-admin users can attach router 
interfaces, but this fails on shared networks, because rbac is not taken into 
account here: 
https://github.com/openstack/neutron/blob/a0e0e8b6686b847a4963a6aa6a3224b5768544e6/neutron/api/v2/attributes.py#L372

The related error, that led me to that line is this:
http://paste.openstack.org/show/597918/

And this is still present in master:
https://github.com/openstack/neutron/blob/1c5bf09a03b0fe463ba446d2a19087be7a0504a7/neutron/api/v2/attributes.py#L372

I'm happy to give more details, if needed.
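A toy sketch of the visibility check I would expect (my own helper and data shapes, not neutron's code):

```python
# Sketch: network visibility should also consult RBAC entries, not only
# ownership and the legacy "shared" flag.
def network_visible(network, project_id, rbac_entries):
    if network["tenant_id"] == project_id or network.get("shared"):
        return True
    return any(e["object_id"] == network["id"]
               and e["action"] == "access_as_shared"
               and e["target_tenant"] in (project_id, "*")
               for e in rbac_entries)

net = {"id": "net-1", "tenant_id": "owner", "shared": False}
rbac = [{"object_id": "net-1", "action": "access_as_shared",
         "target_tenant": "project-x"}]
assert network_visible(net, "project-x", rbac)      # shared via RBAC
assert not network_visible(net, "project-y", rbac)  # no RBAC grant
```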

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1662477

Title:
  rbac shared network add_router_interface fails for non-admin

Status in neutron:
  New

Bug description:
  We are on mitaka and use rbac to share private networks.
  We defined in the neutron policy that non-admin users can attach router 
interfaces, but this fails on shared networks, because rbac is not taken into 
account here: 
https://github.com/openstack/neutron/blob/a0e0e8b6686b847a4963a6aa6a3224b5768544e6/neutron/api/v2/attributes.py#L372

  The related error, that led me to that line is this:
  http://paste.openstack.org/show/597918/

  And this is still present in master:
  
https://github.com/openstack/neutron/blob/1c5bf09a03b0fe463ba446d2a19087be7a0504a7/neutron/api/v2/attributes.py#L372

  I'm happy to give more details, if needed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1662477/+subscriptions



[Yahoo-eng-team] [Bug 1666501] [NEW] add_router_interface does not export port_id

2017-02-21 Thread Maurice Schreiber
Public bug reported:

In order to be able to control via policy
"add_router_interface:port_id", who can add a router interface by
providing a port (instead of a subnet), the export of the port_id field
is needed.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1666501

Title:
  add_router_interface does not export port_id

Status in neutron:
  New

Bug description:
  In order to be able to control via policy
  "add_router_interface:port_id", who can add a router interface by
  providing a port (instead of a subnet), the export of the port_id
  field is needed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1666501/+subscriptions



[Yahoo-eng-team] [Bug 1632263] [NEW] swift backend not working with disabled swift auto account creation

2016-10-11 Thread Maurice Schreiber
Public bug reported:

Hi,

we have our swift proxy configured with account_autocreate = false
(which is the default), i.e. a new project does not automatically have a
swift account, but it needs to be created via PUT on the swift auth url.

In combination with multi-tenant swift backend use in glance that means, that 
new projects cannot create images. Image creation in-transparently fails with a 
500 internal server error.
This is visible in the logs as "Failed to add container to Swift.
Got error from Swift: Container PUT failed [...] 404 Not Found".

We are running mitaka.

I think glance should try to create the swift account on that error.
It could even ask swift whether the account_autocreate option is set (via 
/info, see 
http://developer.openstack.org/api-ref/object-storage/index.html#list-activated-capabilities).

If account creation is not an option, a clearer error would be good, to
tell the end user that a swift account is missing.

Thanks,
Maurice

P.S. This also holds true when you only have one swift container to
store all your images, but it's not as big a problem because swift
account creation then is a one-time only task.
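A toy sketch of the retry behaviour I am suggesting (the fake client below is mine, not glance_store's real swift API):

```python
# Sketch: on a 404 from the container PUT, glance could PUT the account
# first and then retry the container creation.
class FakeSwift:
    def __init__(self):
        self.account_exists = False
        self.containers = set()

    def put_account(self):
        self.account_exists = True

    def put_container(self, name):
        if not self.account_exists:
            raise LookupError("404 Not Found")  # account missing
        self.containers.add(name)

def ensure_container(swift, name):
    try:
        swift.put_container(name)
    except LookupError:
        swift.put_account()       # create the missing swift account
        swift.put_container(name)

swift = FakeSwift()
ensure_container(swift, "glance_images")
assert "glance_images" in swift.containers
```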

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1632263

Title:
  swift backend not working with disabled swift auto account creation

Status in Glance:
  New

Bug description:
  Hi,

  we have our swift proxy configured with account_autocreate = false
  (which is the default), i.e. a new project does not automatically have
  a swift account, but it needs to be created via PUT on the swift auth
  url.

  In combination with the multi-tenant swift backend in glance, that means 
new projects cannot create images: image creation fails opaquely with a 
500 internal server error.
  This is visible in the logs as "Failed to add container to Swift.
  Got error from Swift: Container PUT failed [...] 404 Not Found".

  We are running mitaka.

  I think glance should try to create the swift account on that error.
  It could even ask swift whether the account_autocreate option is set (via 
/info, see 
http://developer.openstack.org/api-ref/object-storage/index.html#list-activated-capabilities).

  If account creation is not an option, a clearer error would be good,
  to tell the end user that a swift account is missing.

  Thanks,
  Maurice

  P.S. This also holds true when you only have one swift container to
  store all your images, but it's not as big a problem because swift
  account creation then is a one-time only task.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1632263/+subscriptions
