On Wed, Aug 10, 2011 at 7:35 PM, BGB <cr88...@gmail.com> wrote:

> not all code may be from trusted sources.
> consider, say, code comes from the internet.
>
> what is a "good" way of enforcing security in such a case?
>

Object capability security is probably the very best approach available
today - in terms of a wide variety of criteria such as flexibility,
performance, precision, visibility, awareness, simplicity, and usability.

In this model, ability to send a message to an object is sufficient proof
that you have rights to use it - there are no passwords, no permissions
checks, etc. The security discipline involves controlling who has access to
which objects - i.e. there are a number of patterns, such as 'revocable
forwarders', where you'll provide an intermediate object that allows you to
audit and control access to another object. You can read about several of
these patterns on the erights wiki [1].
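To make the forwarder pattern concrete, here is a minimal Python sketch (the names `make_revocable` and `Revoked` are my own illustration, not an API from the erights wiki):

```python
class Revoked(Exception):
    """Raised when a revoked capability is exercised."""

def make_revocable(target):
    """Return (proxy, revoke). The proxy forwards every attribute
    access to `target`; calling revoke() severs that access for all
    holders of the proxy, without touching any other capability."""
    cell = {"target": target}

    class Proxy:
        def __getattr__(self, name):
            t = cell["target"]
            if t is None:
                raise Revoked(name)
            return getattr(t, name)

    def revoke():
        cell["target"] = None

    return Proxy(), revoke
```

Handing out the proxy grants use of the target; keeping `revoke` to yourself lets you audit the grant and cut it off later.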

Access to FFI and such would be regulated through objects. This leaves the
issue of deciding: how do we decide which objects untrusted code should get
access to? Disabling all of FFI is often too extreme.

My current design: FFI is a network of registries. Plugins and services
publish FFI objects (modules) to these registries. Different registries are
associated with different security levels, and there might be connections
between them based on relative trust and security. A single FFI plugin
might provide similar objects at multiple security levels - e.g. access to
HTTP service might be provided at a low security level for remote addresses,
but at a high security level that allows for local (127, 192.168, 10.0.0,
etc.) addresses. One reason to favor plugin-based FFI is that it is easy to
develop security policy for high-level features compared to low-level
capabilities. (E.g. access to generic 'local storage' is lower security
level than access to 'filesystem'.)
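A rough sketch of that registry idea (the names `Registry`, `publish`, `lookup`, and the numeric levels are illustrative assumptions, not my actual design):

```python
from enum import IntEnum

class Level(IntEnum):
    """Illustrative security levels; higher means more trusted."""
    LOW = 1
    HIGH = 3

class Registry:
    """FFI modules are published with a minimum security level;
    lookup succeeds only if the caller's clearance meets it."""
    def __init__(self):
        self._modules = {}

    def publish(self, name, module, level):
        self._modules[name] = (module, level)

    def lookup(self, name, clearance):
        module, level = self._modules[name]
        if clearance < level:
            raise PermissionError(f"{name} requires level {level!r}")
        return module
```

In this sketch an HTTP plugin would publish a remote-only client under `Level.LOW` and a client that also reaches local addresses under `Level.HIGH`.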

Beyond security, my design aims to solve other difficult problems
involving code migration [2], multi-process and distributed extensibility
(easy to publish modules to registries even from other processes or servers;
similar to web-server CGI), smooth transitions from legacy, extreme
resilience and self-healing (multiple fallbacks per FFI dependency), and
policy and configuration management [3].

[1] http://wiki.erights.org/wiki/Walnut/Secure_Distributed_Computing
[2] http://wiki.erights.org/wiki/Unum
[3] http://c2.com/cgi/wiki?PolicyInjection


>
> the second thing seems to be the option of moving the code to a local
> toplevel where its ability to see certain things is severely limited.
>

Yes, this is equivalent to controlling which 'capabilities' are available in
a given context. Unfortunately, developers lack 'awareness' - i.e. it is not
explicit in code that certain capabilities are needed by a given library, so
failures occur much later when the library is actually loaded. This is part
of why I eventually abandoned dynamic scopes (where 'dynamic scope' would
include the toplevel [4]).

[4] http://c2.com/cgi/wiki?ExplicitManagementOfImplicitContext
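The 'awareness' point can be made concrete: when a library takes its capabilities as explicit parameters, a missing capability fails at wiring time rather than deep inside a later call. A hypothetical sketch, not any particular library:

```python
class Cache:
    """Requires a storage capability up front, so a caller that lacks
    one fails at construction time, not on some later code path.
    With a dynamic scope, the same library would load silently and
    only fail when it first touched the ambient storage."""
    def __init__(self, storage):
        if storage is None:
            raise TypeError("Cache needs a storage capability")
        self._storage = storage

    def put(self, key, value):
        self._storage[key] = value

    def get(self, key):
        return self._storage.get(key)
```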


> simply disabling compiler features may not be sufficient


It is also a bad idea. You end up with 2^N languages for N switches. That's
hell to test and verify. Libraries developed for different sets of switches
will consequently prove buggy when people try to compose them. It also means
even more documentation to manage.



>
> anything still visible may be tampered with, for example, suppose a global
> package is made visible in the new toplevel, and the untrusted code decides
> to define functions in a system package, essentially overwriting the
> existing functions


Indeed. Almost every language built for security makes heavy use of
immutable objects. They're easier to reason about. For example, rather than
replacing the function in the package, you would be forced to create a new
record that is the same as the old one but replaces one of the functions.

Access to mutable state is more tightly controlled - i.e. an explicit
capability to inject a new stage in a pipeline, rather than implicit access
to a variable. We don't lose any flexibility, but the 'path of least
resistance' is much more secure.
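For instance, treating a package as an immutable record (a Python sketch; `MappingProxyType` provides a read-only view):

```python
from types import MappingProxyType

def make_package(**functions):
    """A read-only mapping of names to functions: untrusted code
    holding it cannot overwrite entries in place."""
    return MappingProxyType(dict(functions))

def with_function(pkg, name, fn):
    """'Replacing' a function builds a fresh package; every existing
    holder of the old package is unaffected."""
    updated = dict(pkg)
    updated[name] = fn
    return MappingProxyType(updated)
```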



> an exposed API function may indirectly give untrusted code "unexpected
> levels of power" if it, by default, has unhindered access to the system,
> placing additional burden on library code not to perform operations which
> may be exploitable


This is why whitelisting, rather than blacklisting, should be the rule for
security.
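In object-capability terms, whitelisting usually takes the shape of a facet that exposes only the named operations and denies everything else by default. An illustrative sketch (note that Python only approximates this; in a real ocap language the wrapped reference itself would be unreachable):

```python
class Facet:
    """Exposes only an explicit whitelist of the target's attributes.
    Anything not listed is denied - there is no list of 'bad' names
    to keep up to date."""
    def __init__(self, target, allowed):
        self._target = target
        self._allowed = frozenset(allowed)

    def __getattr__(self, name):
        # Only called for names not found by normal lookup, i.e. all
        # attempts to reach the target's own attributes.
        if name not in self._allowed:
            raise PermissionError(f"not whitelisted: {name}")
        return getattr(self._target, name)
```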


> assigning through a delegated object may in-turn move up and assign the
> variable in a delegated-to object (at the VM level there are multiple
> assignment operators to address these different cases, namely which object
> will have a variable set in...).


The security problem isn't delegation, but rather the fact that this
chaining is 'implicit' so developers easily forget about it and thus leave
security holes.

A library of security patterns could help out. E.g. you could ensure your
revocable forwarders and facet-pattern constructors also provide barriers
against propagation of assignment.
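One such barrier: a forwarder that passes reads through but refuses all writes, so an assignment made through it can never chain up into a delegated-to object. Again a Python approximation of the VM-level idea:

```python
class ReadOnlyForwarder:
    """Forwards attribute reads to the target but bars every write,
    so assignment through the forwarder cannot propagate upward."""
    def __init__(self, target):
        # Bypass our own __setattr__ to store the target reference.
        object.__setattr__(self, "_target", target)

    def __getattr__(self, name):
        return getattr(object.__getattribute__(self, "_target"), name)

    def __setattr__(self, name, value):
        raise PermissionError(f"assignment barred: {name}")
```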


>
> could a variation of, say, the Unix security model, be applied at the VM
> level?
>

Within the VM, this has been done before, e.g. Java introduced thread
capabilities. But the Unix security model is neither simple nor flexible nor
efficient, especially for fine-grained delegation. I cannot recommend it.
But if you do pursue this route: it has been done before, and there's a lot
of material you can learn from. Look up LambdaMOO, for example.

Regards,

David
_______________________________________________
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc
