On Thu, Mar 11, 2010 at 6:57 PM, Jonathan Leto <[email protected]> wrote:
> Howdy,
>
>> As another example, say we want to restrict access in a sandbox to a
>> handful of objects. If Lorito permits pointer arithmetic, it could
>> become very tricky to guess where things will wind up pointing.
>>
>> Both of these examples leave us in a bad situation: either permit
>> possibly unsafe operations, or explain to users what hoops they have
>> to jump through to get their bytecode to validate as safe.
>
> I am very interested in making our security layer as robust as
> possible. Would the problems that you describe be mitigated by having
> runloops that have certain opcodes removed? That way, if some funny
> business happens via a security hole, the worst a malicious attacker
> could do is generate a missing opcode error, instead of possibly
> running arbitrary code.
Not really. My fear (which has been allayed) was that ops might be too
low level for security features to distinguish between behaviours. As
levels get lower, different behaviours start to use the same
operations, so preventing malicious behaviours by omitting their
required ops would also prevent perfectly acceptable behaviours.

For example, suppose all syscall-based operations compiled down to a
series of ops: a preamble, a syscall op, and a postamble. The obvious
way to prevent filesystem manipulation by omitting ops would be to
omit the syscall op. Unfortunately, that would also prevent perfectly
acceptable things like reading from already open filehandles.

The problem is one of level. I want to be able to communicate with the
security system at the level of behaviours, but I want to communicate
with the JIT at a level lower than that, where the behaviour described
by a sequence of ops might not be readily discernible. If both
communications used the same language (op set), their interests would
conflict. The two-level approach described by Allison solves this.
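
To make the granularity point concrete, here is a minimal sketch in
plain C. It is not actual Parrot/Lorito code; the op names and the
lowerings are invented for illustration. Two different behaviours,
opening a new file and reading from an already open filehandle, lower
to the same preamble/syscall/postamble sequence, so an op-level
validator that simply rejects the syscall op ends up blocking both:

/* Sketch only: invented op set, not Lorito. Shows why filtering at
 * the op level cannot tell behaviours apart once they share ops. */
#include <stdio.h>
#include <stddef.h>

typedef enum { OP_PREAMBLE, OP_SYSCALL, OP_POSTAMBLE, OP_RET } op_t;

/* Hypothetical lowerings of two different behaviours: after
 * compilation they look identical at the op level. */
static const op_t open_new_file[]    = { OP_PREAMBLE, OP_SYSCALL, OP_POSTAMBLE, OP_RET };
static const op_t read_open_handle[] = { OP_PREAMBLE, OP_SYSCALL, OP_POSTAMBLE, OP_RET };

/* Naive op-level sandbox: reject any bytecode containing OP_SYSCALL. */
static int validate(const op_t *ops, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (ops[i] == OP_SYSCALL)
            return 0;  /* rejected */
    return 1;          /* accepted */
}

int main(void)
{
    printf("open_new_file:    %s\n",
           validate(open_new_file,
                    sizeof open_new_file / sizeof open_new_file[0])
               ? "accepted" : "rejected");
    printf("read_open_handle: %s\n",
           validate(read_open_handle,
                    sizeof read_open_handle / sizeof read_open_handle[0])
               ? "accepted" : "rejected");
    /* Both are rejected, even though only the first should be: at
     * this level the validator sees ops, not behaviours, which is why
     * the security layer needs to talk at a higher level than the one
     * the JIT consumes. */
    return 0;
}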
