well, ok, this is currently mostly about my own language, but I figured it might be relevant/interesting.

the basic idea is this:
not all code may come from trusted sources.
consider, say, code that comes from the internet.

what is a "good" way of enforcing security in such a case?


the first obvious thing seems to be to disallow any features which could directly circumvent security. say, the code is marked untrusted, and the first step would likely be to disable access to things like raw pointers and the C FFI.

the second thing seems to be the option of moving the code to a local toplevel where its ability to see certain things is severely limited.

both of these pose problems:
simply disabling compiler features may not be sufficient, since there may be insecure ways of using the language which go beyond anything addressable by enabling/disabling particular features in the compiler.

anything still visible may be tampered with. for example, suppose a global package is made visible in the new toplevel, and the untrusted code decides to define functions in a system package, essentially overwriting the existing functions. this is useful, say, for writing program mods, but may be a bad thing from a security perspective.
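a minimal sketch of the tampering problem, in Python since my own language isn't shown here; the package is modeled as a plain mutable namespace (a dict), and all the names are hypothetical:

```python
# a "system package" modeled as a mutable namespace (dict)
system_pkg = {
    "read_file": lambda path: "file contents of " + path,
}

def trusted_caller(path):
    # trusted code looks the function up through the shared package
    return system_pkg["read_file"](path)

# untrusted code can still *see* the package, so it can redefine members:
def evil_read_file(path):
    return "stolen: " + path  # could just as well exfiltrate the data

system_pkg["read_file"] = evil_read_file

# the trusted caller now silently runs the untrusted replacement:
print(trusted_caller("/etc/passwd"))  # -> stolen: /etc/passwd
```

the same late-binding lookup that makes mods convenient is exactly what lets the override happen.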

a partial option is to give untrusted code its own "shadowed" packages, but this poses other problems.

similarly, an exposed API function may indirectly give untrusted code "unexpected levels of power" if it, by default, has unhindered access to the system; this places an additional burden on library code not to perform operations which may be exploitable.

consider something trivial like:

function getTopVar(name) { return top[name]; }

which, if exposed under a visibility-based security scheme as part of a library package (with full system access), suddenly kills the whole security model. essentially, it would amount to trying to write "watertight" code to avoid potential "security leaks".


another security worry is created by, among other things, the semantics of object delegation (at least in my language), where assigning through a delegated object may in turn walk up and assign the variable in a delegated-to object (at the VM level there are multiple assignment operators to address these different cases, namely regarding which object the variable will be set in).

this in turn compromises the ability to simply use delegation to give each module its own local toplevel: effectively, the toplevel, and any referenced scopes, would need to be cloned to avoid them being modified, ...
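a sketch of why that is, again in Python; this assumes the assign-walks-up semantics described above (an assignment to an inherited name sets it in the delegated-to object), with hypothetical names:

```python
class Scope:
    """a scope/object that delegates to a parent scope."""
    def __init__(self, parent=None):
        self.vars = {}
        self.parent = parent

    def get(self, name):
        s = self
        while s is not None:
            if name in s.vars:
                return s.vars[name]
            s = s.parent
        raise NameError(name)

    def set(self, name, value):
        # walk up: if a delegated-to scope already defines the name,
        # the assignment lands *there*, not locally
        s = self
        while s is not None:
            if name in s.vars:
                s.vars[name] = value
                return
            s = s.parent
        self.vars[name] = value  # otherwise define locally

toplevel = Scope()
toplevel.vars["print_fn"] = "trusted print"

sandbox = Scope(parent=toplevel)   # "local toplevel" delegating to the real one
sandbox.set("print_fn", "evil print")

print(toplevel.vars["print_fn"])   # -> evil print (the shared toplevel mutated)
print("print_fn" in sandbox.vars)  # -> False (nothing was set locally)
```

so delegation alone gives visibility isolation but not mutation isolation, hence the need to clone the toplevel and any referenced scopes.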


so, I am left idly wondering:
could a variation of, say, the Unix security model, be applied at the VM level?

in this case, any running code would have, say, a UID (UserID, which here refers more to the origin of the code than to the actual user) and a GID (GroupID). VM objects, variables, and methods would themselves (often implicitly) have access rights assigned to them (for example: Read/Write/Execute/Special).

possible advantages:
could be reasonably secure without going through contortions;
lessens the problem of unintended levels of power;
reasonably transparent at the language level (for the most part);
...

disadvantages:
have to implement VM level security checking in many places;
there are many cases where static validation will not work, and where runtime checks would be needed (a possible performance issue);
may add additional memory costs (several types of memory objects will now have to remember their owner and access rights, ...);
could in some cases require funky attributes: "$[mode(0xF51),setuid] public function foo() ...";
...

uncertain:
a means could be provided for a program to request "trust" from the user, probably in a manner vaguely similar to UAC ("User Account Control") in Windows, or to digital-signing requests. if one allows for a system similar to digital signing, there is a potential risk of forged/stolen keys (effectively requiring user vigilance); the main alternative would be for the user to authorize modules individually if they require certain features (more similar to installing apps on Android);
...


probable semantics:
declaration permissions (for variables/functions/classes) are likely to be applied in a manner similar to lexical scoping, where a declaration will retain the UID in effect at the time of its creation; the same will likely hold for functions and methods. this is also likely to be the model employed for static security checks.

run-time security is likely to follow a model similar to dynamic scoping, and will likely be applied on a per-thread basis (the current UID and GID then being a part of the current VM context).
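a small sketch of this lexical-vs-dynamic split, assuming the semantics above (Python stand-in; `current_uid`, `with_uid`, and `Declaration` are hypothetical names):

```python
import threading

# the *current* UID lives in per-thread VM context (dynamic, like dynamic scope)
_ctx = threading.local()

def current_uid():
    return getattr(_ctx, "uid", 0)   # default: UID 0, "system"

def with_uid(uid, fn, *args):
    """run fn with a different UID in effect on this thread, then restore."""
    prev = current_uid()
    _ctx.uid = uid
    try:
        return fn(*args)
    finally:
        _ctx.uid = prev

class Declaration:
    """a declaration captures the UID in effect at creation time (lexical)."""
    def __init__(self, name):
        self.name = name
        self.owner_uid = current_uid()

d_trusted = Declaration("sysFunc")                    # created as UID 0
d_untrusted = with_uid(1000, Declaration, "modFunc")  # created as UID 1000

print(d_trusted.owner_uid, d_untrusted.owner_uid)     # -> 0 1000
```

the per-thread context means an untrusted callback invoked from trusted code still runs with its caller-established (reduced) UID rather than silently inheriting full rights.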

will likely be applied to:
functions/methods, which will mostly care about execute permissions on themselves;
objects, which will care both about access to themselves and to individual members.

say, object:
Read, needed to read from any field;
Write, needed to assign to any field;
Execute, needed to call any ordinary method (special cases may require RX).

field/method:
Read, get value from a field, get a function/method handle;
Write, assign to a field, replace/override a method;
Execute, call a method (N/A for fields).

block/lambda:
Read/Write: N/A
Execute: call lambda.
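the check these tables imply is essentially the Unix one: pick the owner, group, or "other" permission bits depending on who is asking, then test the requested access against them. a sketch, assuming a conventional 9-bit rwxrwxrwx layout (the actual flag layout below uses 12 bits, so this is only the lower portion):

```python
R, W, X = 4, 2, 1  # permission bits, as in Unix

def allowed(mode, owner_uid, owner_gid, uid, gid, want):
    """return True if (uid, gid) may perform 'want' on an object
    owned by (owner_uid, owner_gid) with the given mode."""
    if uid == owner_uid:
        bits = (mode >> 6) & 7     # owner bits
    elif gid == owner_gid:
        bits = (mode >> 3) & 7     # group bits
    else:
        bits = mode & 7            # "other" bits
    return (bits & want) == want

# e.g. a method owned by UID 1 / GID 1 with mode rwxr-x--x (0o751):
mode = 0o751
print(allowed(mode, 1, 1, 1, 1, X))    # owner may execute -> True
print(allowed(mode, 1, 1, 2, 1, W))    # group member may not write -> False
print(allowed(mode, 1, 1, 3, 3, X))    # others may execute -> True
print(allowed(mode, 1, 1, 3, 3, R))    # others may not read -> False
```

this is the check that would run, per the above, for object access, member access, and method/lambda calls.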

probably, for now, I will pack all of this into a 32-bit value, say:
12 bits, access flags; 10 bits, owner UID; 10 bits, owner GID.
setuid/setgid would probably be stored with the modifier flags (currently a 64-bit value present in fields/methods/blocks, yes I really do have this many modifier flags... sadly...).
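the packing itself is straightforward; a sketch assuming the layout just described (flags in the low 12 bits, UID next, GID on top; the exact bit order is my assumption):

```python
FLAG_BITS, UID_BITS, GID_BITS = 12, 10, 10

def pack(flags, uid, gid):
    """pack access flags + owner UID + owner GID into one 32-bit value."""
    assert flags < (1 << FLAG_BITS)
    assert uid < (1 << UID_BITS) and gid < (1 << GID_BITS)
    return (gid << (FLAG_BITS + UID_BITS)) | (uid << FLAG_BITS) | flags

def unpack(v):
    flags = v & ((1 << FLAG_BITS) - 1)
    uid = (v >> FLAG_BITS) & ((1 << UID_BITS) - 1)
    gid = (v >> (FLAG_BITS + UID_BITS)) & ((1 << GID_BITS) - 1)
    return flags, uid, gid

v = pack(0xF51, 42, 7)
print(hex(v), unpack(v))  # round-trips to (0xF51, 42, 7)
```

the cost per object is then one 32-bit word, which is about the cheapest the "remember your owner and access rights" overhead can get.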


any thoughts?...


_______________________________________________
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc
