[Yahoo-eng-team] [Bug 1968343] [NEW] Security Group Rule create with forged integer security_group_id causes exceptions

2022-04-08 Thread Andrew Karpow
Public bug reported:

Assuming a project xyz has security groups, the following POST request
fails with an HTTP 500 ValueError:

/v2.0/security-group-rules
{
"security_group_rule": {
"direction": "egress",
"ethertype": "IPv4",
"port_range_max": 443,
"port_range_min": 443,
"project_id": "xyz",
"protocol": "tcp",
"remote_ip_prefix": "34.231.24.224/32",
"security_group_id": 0
}
}

The ValueError is raised by Python's uuid module with `badly formed
hexadecimal UUID string`.
This is because the prior validation, _check_security_group in
securitygroups_db.py, uses
sg_obj.SecurityGroup.objects_exist(context, id=id), which yields true on
MySQL, e.g.:

MariaDB [neutron]> SELECT count(*) FROM securitygroups WHERE securitygroups.id IN (0);
+----------+
| count(*) |
+----------+
|       15 |
+----------+
1 row in set, 46 warnings (0.001 sec)

MariaDB [neutron]> SHOW WARNINGS LIMIT 1;
+---------+------+--------------------------------------------------------------------------+
| Level   | Code | Message                                                                  |
+---------+------+--------------------------------------------------------------------------+
| Warning | 1292 | Truncated incorrect DOUBLE value: '77dd53b2-59c0-4208-b03c-9f9f65bf9a28' |
+---------+------+--------------------------------------------------------------------------+
1 row in set (0.000 sec)

Thus the validation succeeds, and the code path continues until the id
is converted to a UUID, which raises the unexpected exception.
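A format check before the database lookup would turn the forged value into a clean 400 response instead of a 500. A minimal sketch of such a guard (hypothetical helper written for illustration; oslo.utils ships a similar uuidutils.is_uuid_like):

```python
import uuid

# Hypothetical guard, not neutron's actual code: reject non-UUID values
# before objects_exist() is called, so MySQL never gets a chance to
# coerce the integer 0 against the string id column, and the later
# uuid.UUID() conversion never raises.

def is_uuid_like(value):
    """Return True only for strings in canonical UUID form."""
    if not isinstance(value, str):
        return False  # e.g. the forged integer 0 from the request body
    try:
        return str(uuid.UUID(value)) == value.lower()
    except ValueError:
        return False
```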

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1968343

Title:
  Security Group Rule create with forged integer security_group_id
  causes exceptions

Status in neutron:
  New

Bug description:
  Assuming a project xyz has security groups, the following POST request
  fails with an HTTP 500 ValueError:

  /v2.0/security-group-rules
  {
"security_group_rule": {
"direction": "egress",
"ethertype": "IPv4",
"port_range_max": 443,
"port_range_min": 443,
"project_id": "xyz",
"protocol": "tcp",
"remote_ip_prefix": "34.231.24.224/32",
"security_group_id": 0
}
  }

  The ValueError is raised by Python's uuid module with `badly formed
  hexadecimal UUID string`.
  This is because the prior validation, _check_security_group in
  securitygroups_db.py, uses
  sg_obj.SecurityGroup.objects_exist(context, id=id), which yields true
  on MySQL, e.g.:

  MariaDB [neutron]> SELECT count(*) FROM securitygroups WHERE securitygroups.id IN (0);
  +----------+
  | count(*) |
  +----------+
  |       15 |
  +----------+
  1 row in set, 46 warnings (0.001 sec)

  MariaDB [neutron]> SHOW WARNINGS LIMIT 1;
  +---------+------+--------------------------------------------------------------------------+
  | Level   | Code | Message                                                                  |
  +---------+------+--------------------------------------------------------------------------+
  | Warning | 1292 | Truncated incorrect DOUBLE value: '77dd53b2-59c0-4208-b03c-9f9f65bf9a28' |
  +---------+------+--------------------------------------------------------------------------+
  1 row in set (0.000 sec)

  Thus the validation succeeds, and the code path continues until the
  id is converted to a UUID, which raises the unexpected exception.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1968343/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1941537] [NEW] Neutron Policy Engine issues with PUT/Update

2021-08-25 Thread Andrew Karpow
Public bug reported:

We are using a policy that looks like this:

"network_device": "field:port:device_owner=~^network:",
"update_port:fixed_ips": "not rule:network_device",

The idea is to protect special ports (identified by device_owner) from
being updated while still allowing users to create custom ports.

This causes the following error in the policy engine if a client tries
to update the fixed_ips of a port:

DEBUG neutron.policy [] Unable to find requested field: device_owner in target: {
'id': 'abc', 
'network_id': 'abc', 
'tenant_id': 'abc', 
'status': 'ACTIVE', 
'project_id': 'abc', 
'fixed_ips': [{'subnet_id': 'abc', 'ip_address': '10.180.128.89'}], 
'attributes_to_update': ['fixed_ips']
} neutron/policy.py:395


When using PUT/update, the policy engine target is populated with data
from the database, but only for attributes that meet the conditions in
policy_enforcement.py:54, such as "required_by_policy" or "primary_key".
The definition of the port attribute "device_owner" meets none of these
conditions and is therefore filtered out of the target dict.

But this is not the case for the other operations (GET, DELETE and
CREATE). This seems like unintended behaviour; shouldn't all
attributes annotated with "enforce_policy" be pulled into the target
dict?

From doc/source/contributor/internals/policy.rst:
* If an attribute of a resource might be subject to authorization checks
  then the ``enforce_policy`` attribute should be set to ``True``...
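The behaviour argued for above can be sketched as follows (helper name and attribute-map shape invented for illustration; this is not neutron's actual enforcement code):

```python
# Hypothetical sketch: when building the policy target for an update,
# copy every attribute flagged enforce_policy -- not only those with
# required_by_policy or primary_key -- from the stored resource into
# the target dict, so field-based rules can evaluate it.

def build_update_target(attr_map, db_resource, request_body):
    """Merge stored attributes needed for policy checks into the target."""
    target = dict(request_body)
    target['attributes_to_update'] = list(request_body)
    for name, spec in attr_map.items():
        needed = (spec.get('enforce_policy')
                  or spec.get('required_by_policy')
                  or spec.get('primary_key'))
        if needed and name not in target and name in db_resource:
            # e.g. device_owner would now be visible to rules such as
            # "field:port:device_owner=~^network:"
            target[name] = db_resource[name]
    return target
```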

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: policy

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1941537

Title:
  Neutron Policy Engine issues with PUT/Update

Status in neutron:
  New

Bug description:
  We are using a policy that looks like this:

  "network_device": "field:port:device_owner=~^network:",
  "update_port:fixed_ips": "not rule:network_device",

  The idea is to protect special ports (identified by device_owner) from
  being updated while still allowing users to create custom ports.

  This causes the following error in the policy engine if a client
  tries to update the fixed_ips of a port:

  DEBUG neutron.policy [] Unable to find requested field: device_owner in target: {
  'id': 'abc', 
  'network_id': 'abc', 
  'tenant_id': 'abc', 
  'status': 'ACTIVE', 
  'project_id': 'abc', 
  'fixed_ips': [{'subnet_id': 'abc', 'ip_address': '10.180.128.89'}], 
  'attributes_to_update': ['fixed_ips']
  } neutron/policy.py:395

  
  When using PUT/update, the policy engine target is populated with
  data from the database, but only for attributes that meet the
  conditions in policy_enforcement.py:54, such as "required_by_policy"
  or "primary_key". The definition of the port attribute "device_owner"
  meets none of these conditions and is therefore filtered out of the
  target dict.

  But this is not the case for the other operations (GET, DELETE and
  CREATE). This seems like unintended behaviour; shouldn't all
  attributes annotated with "enforce_policy" be pulled into the target
  dict?

  From doc/source/contributor/internals/policy.rst
  * If an attribute of a resource might be subject to authorization checks
then the ``enforce_policy`` attribute should be set to ``True``...

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1941537/+subscriptions



[Yahoo-eng-team] [Bug 1829261] [NEW] duplicate quota entry for project_id/resource causes inconsistent behaviour

2019-05-15 Thread Andrew Karpow
Public bug reported:

We experienced inconsistent behaviour in one of our clusters when
setting the quota for a tenant (security_group was set to 3, but 2 was
always returned):

$ curl -g -i -X PUT http://127.0.0.1:9696/v2.0/quotas/3e0fd3f8e9ec449686ef26a16a284265 -H "X-Auth-Token: $OS_AUTH_TOKEN" -d '{"quota": {"security_group": 3}}'
HTTP/1.1 200 OK
Content-Type: application/json
X-Openstack-Request-Id: req-c6f01da8-1373-4968-b78f-87d7698cde15
Date: Wed, 15 May 2019 14:13:29 GMT
Transfer-Encoding: chunked

{"quota": {"subnet": 1, "network": 1, "floatingip": 22, "l7policy": 11,
"subnetpool": 0, "security_group_rule": 110, "listener": 11, "member":
880, "pool": 22, "security_group": 2, "router": 2, "rbac_policy": 5,
"port": 550, "loadbalancer": 11, "healthmonitor": 11}}

After some research, we found a duplicate entry with the same
project_id and resource in the quotas table:

$ SELECT project_id, resource, count(*) AS qty FROM quotas GROUP BY project_id, resource HAVING count(*) > 1
            project_id            |    resource    | qty
----------------------------------+----------------+-----
 3e0fd3f8e9ec449686ef26a16a284265 | security_group |   2
(1 row)

Deleting one of the duplicate entries restored the correct behaviour.
The duplicate could have been caused by a race condition or by backup
leftovers.

I would suggest adding a migration that adds a unique constraint on
(project_id, resource); does that sound reasonable?
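To illustrate the suggested constraint, here is a minimal sqlite3 demonstration (hypothetical simplified schema; in neutron the change would be an Alembic migration on the real quotas table). With UNIQUE (project_id, resource) in place, the second row for the same pair is rejected instead of becoming the silent duplicate described above:

```python
import sqlite3

# Simplified, hypothetical quotas table with the proposed constraint.
conn = sqlite3.connect(':memory:')
conn.execute("""
    CREATE TABLE quotas (
        id TEXT PRIMARY KEY,
        project_id TEXT,
        resource TEXT,
        "limit" INTEGER,
        UNIQUE (project_id, resource)
    )
""")
conn.execute("INSERT INTO quotas VALUES "
             "('q1', '3e0fd3f8e9ec449686ef26a16a284265', 'security_group', 2)")
try:
    # Same (project_id, resource) pair: the database now refuses it.
    conn.execute("INSERT INTO quotas VALUES "
                 "('q2', '3e0fd3f8e9ec449686ef26a16a284265', 'security_group', 3)")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False
```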

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1829261

Title:
  duplicate quota entry for project_id/resource causes inconsistent
  behaviour

Status in neutron:
  New

Bug description:
  We experienced inconsistent behaviour in one of our clusters when
  setting the quota for a tenant (security_group was set to 3, but 2
  was always returned):

  $ curl -g -i -X PUT http://127.0.0.1:9696/v2.0/quotas/3e0fd3f8e9ec449686ef26a16a284265 -H "X-Auth-Token: $OS_AUTH_TOKEN" -d '{"quota": {"security_group": 3}}'
  HTTP/1.1 200 OK
  Content-Type: application/json
  X-Openstack-Request-Id: req-c6f01da8-1373-4968-b78f-87d7698cde15
  Date: Wed, 15 May 2019 14:13:29 GMT
  Transfer-Encoding: chunked

  {"quota": {"subnet": 1, "network": 1, "floatingip": 22, "l7policy":
  11, "subnetpool": 0, "security_group_rule": 110, "listener": 11,
  "member": 880, "pool": 22, "security_group": 2, "router": 2,
  "rbac_policy": 5, "port": 550, "loadbalancer": 11, "healthmonitor":
  11}}

  After some research, we found a duplicate entry with the same
  project_id and resource in the quotas table:

  $ SELECT project_id, resource, count(*) AS qty FROM quotas GROUP BY project_id, resource HAVING count(*) > 1
              project_id            |    resource    | qty
  ----------------------------------+----------------+-----
   3e0fd3f8e9ec449686ef26a16a284265 | security_group |   2
  (1 row)

  Deleting one of the duplicate entries restored the correct behaviour.
  The duplicate could have been caused by a race condition or by backup
  leftovers.

  I would suggest adding a migration that adds a unique constraint on
  (project_id, resource); does that sound reasonable?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1829261/+subscriptions



[Yahoo-eng-team] [Bug 1811873] [NEW] get_l3_agent_with_min_routers fails with postgresql backend

2019-01-15 Thread Andrew Karpow
Public bug reported:

We have our own L3 agent that uses the generic neutron function
get_l3_agent_with_min_routers, which raises the following exception
when using a PostgreSQL backend:

2019-01-15 16:34:33,091.091 59 ERROR neutron.api.v2.resource   File "/var/lib/openstack/local/lib/python2.7/site-packages/neutron/db/l3_agentschedulers_db.py", line 469, in get_l3_agent_with_min_routers
2019-01-15 16:34:33,091.091 59 ERROR neutron.api.v2.resource     context, agent_ids)
2019-01-15 16:34:33,091.091 59 ERROR neutron.api.v2.resource   File "/var/lib/openstack/local/lib/python2.7/site-packages/neutron/objects/agent.py", line 102, in get_l3_agent_with_min_routers
2019-01-15 16:34:33,091.091 59 ERROR neutron.api.v2.resource     res = query.filter(agent_model.Agent.id.in_(agent_ids)).first()
2019-01-15 16:34:33,091.091 59 ERROR neutron.api.v2.resource   File "/var/lib/openstack/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2778, in first
2019-01-15 16:34:33,091.091 59 ERROR neutron.api.v2.resource     ret = list(self[0:1])
2019-01-15 16:34:33,091.091 59 ERROR neutron.api.v2.resource   File "/var/lib/openstack/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2570, in __getitem__
2019-01-15 16:34:33,091.091 59 ERROR neutron.api.v2.resource     return list(res)
2019-01-15 16:34:33,091.091 59 ERROR neutron.api.v2.resource   File "/var/lib/openstack/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2878, in __iter__
2019-01-15 16:34:33,091.091 59 ERROR neutron.api.v2.resource     return self._execute_and_instances(context)
2019-01-15 16:34:33,091.091 59 ERROR neutron.api.v2.resource   File "/var/lib/openstack/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2901, in _execute_and_instances
2019-01-15 16:34:33,091.091 59 ERROR neutron.api.v2.resource     result = conn.execute(querycontext.statement, self._params)
2019-01-15 16:34:33,091.091 59 ERROR neutron.api.v2.resource   File "/var/lib/openstack/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 948, in execute
2019-01-15 16:34:33,091.091 59 ERROR neutron.api.v2.resource     return meth(self, multiparams, params)
2019-01-15 16:34:33,091.091 59 ERROR neutron.api.v2.resource   File "/var/lib/openstack/local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py", line 269, in _execute_on_connection
2019-01-15 16:34:33,091.091 59 ERROR neutron.api.v2.resource     return connection._execute_clauseelement(self, multiparams, params)
2019-01-15 16:34:33,091.091 59 ERROR neutron.api.v2.resource   File "/var/lib/openstack/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1060, in _execute_clauseelement
2019-01-15 16:34:33,091.091 59 ERROR neutron.api.v2.resource     compiled_sql, distilled_params
2019-01-15 16:34:33,091.091 59 ERROR neutron.api.v2.resource   File "/var/lib/openstack/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1200, in _execute_context
2019-01-15 16:34:33,091.091 59 ERROR neutron.api.v2.resource     context)
2019-01-15 16:34:33,091.091 59 ERROR neutron.api.v2.resource   File "/var/lib/openstack/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1409, in _handle_dbapi_exception
2019-01-15 16:34:33,091.091 59 ERROR neutron.api.v2.resource     util.raise_from_cause(newraise, exc_info)
2019-01-15 16:34:33,091.091 59 ERROR neutron.api.v2.resource   File "/var/lib/openstack/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
2019-01-15 16:34:33,091.091 59 ERROR neutron.api.v2.resource     reraise(type(exception), exception, tb=exc_tb, cause=cause)
2019-01-15 16:34:33,091.091 59 ERROR neutron.api.v2.resource   File "/var/lib/openstack/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context
2019-01-15 16:34:33,091.091 59 ERROR neutron.api.v2.resource     context)
2019-01-15 16:34:33,091.091 59 ERROR neutron.api.v2.resource   File "/var/lib/openstack/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 507, in do_execute
2019-01-15 16:34:33,091.091 59 ERROR neutron.api.v2.resource     cursor.execute(statement, parameters)
2019-01-15 16:34:33,091.091 59 ERROR neutron.api.v2.resource DBError: (psycopg2.ProgrammingError) column "agents.agent_type" must appear in the GROUP BY clause or be used in an aggregate function
2019-01-15 16:34:33,091.091 59 ERROR neutron.api.v2.resource LINE 1: SELECT agents.id AS agents_id, agents.agent_type AS agents_a...
2019-01-15 16:34:33,091.091 59 ERROR neutron.api.v2.resource                ^
2019-01-15 16:34:33,091.091 59 ERROR neutron.api.v2.resource  [SQL: 'SELECT agents.id AS agents_id, agents.agent_type AS agents_agent_type, agents."binary" AS agents_binary, agents.topic AS agents_topic, agents.host AS agents_host, agents.availability_zone AS agents_availability_zone, agents.admin_state_up AS agents_admin_state_up, agents.created_at AS
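PostgreSQL, unlike MySQL, rejects a SELECT that returns ungrouped columns (agents.agent_type, agents.host, ...) while grouping. One portable rewrite keeps the aggregate self-contained, selecting only the grouped column and the count, and then fetches the full agent row by id. A minimal sketch of that idea (sqlite3 with a hypothetical, heavily simplified schema, not neutron's actual tables or fix):

```python
import sqlite3

# Toy schema standing in for neutron's agents / routerl3agentbindings tables.
conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE agents (id TEXT PRIMARY KEY, agent_type TEXT, host TEXT);
    CREATE TABLE routerl3agentbindings (router_id TEXT, l3_agent_id TEXT);
    INSERT INTO agents VALUES ('a1', 'L3 agent', 'node1'), ('a2', 'L3 agent', 'node2');
    INSERT INTO routerl3agentbindings VALUES ('r1', 'a1'), ('r2', 'a1'), ('r3', 'a2');
""")

def agent_with_min_routers(conn, agent_ids):
    """Return the full row of the candidate agent hosting the fewest routers."""
    placeholders = ','.join('?' for _ in agent_ids)
    # Step 1: aggregate selects only the grouped column and the count,
    # which is valid on PostgreSQL as well as MySQL.
    row = conn.execute(
        f"SELECT agents.id, count(b.router_id) AS n "
        f"FROM agents LEFT JOIN routerl3agentbindings b "
        f"ON b.l3_agent_id = agents.id "
        f"WHERE agents.id IN ({placeholders}) "
        f"GROUP BY agents.id ORDER BY n ASC LIMIT 1",
        agent_ids).fetchone()
    if row is None:
        return None
    # Step 2: no aggregate here, so every agent column may be selected freely.
    return conn.execute("SELECT * FROM agents WHERE id = ?", (row[0],)).fetchone()
```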