Matthew Keeter writes:

> I’m currently embedding Python in a C / C++ application that evaluates
> user-provided scripts.
>
> Obviously, this is terribly unsafe: user-provided scripts can execute
> arbitrary malicious actions, and there’s no good way to sandbox Python
> in a desktop context.
>
> If I were to replace Python with Guile, is there a way to sandbox it
> so that arbitrary (perhaps malicious) user-provided scripts can be run
> safely?
>
> Regards,
> Matt

I think there's nothing in Guile that provides sandboxing currently.

A path towards it is possible, though: a limited subset of Guile in a
capability-security-based environment could probably provide the
features desired.  See the Rees Thesis:

  http://mumble.net/~jar/pubs/secureos/secureos.html

Wingo has written about it with respect to Guile:

  http://wingolog.org/archives/2011/03/19/bart-and-lisa-hacker-edition
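
To make the "capability security through lexical scope" idea a bit
more concrete, here's a toy illustration (the names are made up, only
the core procedures are real Guile): a capability is just a procedure
you hand to the user's code, and whatever that code closes over is all
it can ever reach.

  ;; Toy illustration of capabilities via lexical scope.  The user's
  ;; procedure is called with exactly the capabilities we choose to
  ;; pass it; there is no ambient authority to go looking for.
  (use-modules (rnrs io ports))   ; for get-string-all

  (define (make-read-capability dir)
    ;; A "capability": a procedure that can only read files directly
    ;; under DIR.  (basename strips any ../ trickery; a real version
    ;; would be far more careful.)
    (lambda (name)
      (call-with-input-file (string-append dir "/" (basename name))
        get-string-all)))

  (define (run-user-script user-proc)
    ;; The user code receives one capability and nothing else.
    (user-proc (make-read-capability "/tmp/sandbox")))

Of course this only helps once the user code can't simply call
open-input-file itself, which is where a restricted default
environment comes in.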

I have thought about how this could be achieved in the Guile-verse.  My
suspicion is that the best way to achieve it is to provide a new
language layer on the compiler tower which is "mostly scheme", but only
exposes a number of deemed-safe operators by default, and provides a
mechanism to add further procedures to the default environment.
Everything from there on out takes the "capability security through
lexical scope and the lambda calculus" approach described in the Rees
Thesis.
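
Sketched out (very much untested, and just a sketch of the shape, not
a real security boundary: there are no time or memory limits, and any
whitelisted binding that closes over something dangerous hands that
authority straight to the user code), it might look like evaluating
user forms against a module that contains only bindings we explicitly
copy in:

  ;; Rough sketch only; make-restricted-env, grant! and
  ;; eval-restricted are invented names.
  (define %deemed-safe
    ;; Syntactic keywords and procedures copied from the root module.
    '(quote lambda if let + - * / = < > car cdr cons list map))

  (define (make-restricted-env)
    (let ((env (make-module)))          ; fresh module with no imports
      (for-each
       (lambda (name)
         (module-define! env name
                         (module-ref (resolve-module '(guile)) name)))
       %deemed-safe)
      env))

  ;; "Adding further procedures to the default environment" is then
  ;; just more definitions -- i.e. granting extra capabilities.
  (define (grant! env name value)
    (module-define! env name value))

  (define (eval-restricted form env)
    (eval form env))

Used something like this:

  (define env (make-restricted-env))
  (grant! env 'read-file (make-read-capability "/tmp/sandbox"))
  (eval-restricted '(read-file "notes.txt") env)
  (eval-restricted '(open-input-file "/etc/passwd") env)
  ;; => error: unbound variable open-input-file

A proper layer on the language tower would also want to control
macros, the reader, and so on, but that's the general idea.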

However, this doesn't exist in Guile at present.  I'd love to see it
exist, though.

 - Chris
