I've made another attempt at Python sandboxing, which does something
I've not seen tried before: using the 'ast' module to do static
analysis of the untrusted code before it's executed, in order to
prevent most of the sneaky tricks that have been used to break out of
past sandbox attempts.

In short, I'm turning Python's usual "gentleman's agreement", that
you should not access names and attributes marked as private by a
leading underscore, into a rigidly enforced rule: try to access
anything starting with an underscore and your code will not be run.
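
To illustrate the idea, here's a minimal sketch of the technique
(just an illustration, not the actual checker from the repository,
which covers more node types and details):

    import ast

    def check_untrusted(source):
        """Reject source that touches any underscore-prefixed name
        or attribute, before the code is ever executed."""
        tree = ast.parse(source)
        for node in ast.walk(tree):
            if isinstance(node, ast.Name) and node.id.startswith('_'):
                raise ValueError('underscore name: ' + node.id)
            if (isinstance(node, ast.Attribute)
                    and node.attr.startswith('_')):
                raise ValueError('underscore attribute: ' + node.attr)
        return tree

    # A classic escape like this is rejected at the static-analysis
    # stage, before exec() ever sees it:
    # check_untrusted("().__class__.__bases__[0].__subclasses__()")

Because the check runs on the parse tree rather than at runtime, the
untrusted code never gets a chance to reach internals like __class__
or __globals__ in the first place.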

Anyway, the code is at https://github.com/jribbens/unsafe
It requires Python 3.4 or later (it could probably be made to work on
Python 2.7 as well, but it would need some changes).

I would be very interested to see if anyone can manage to break it.
Bugs that are trivially fixable are of course welcome, but the real
question is: is this approach basically sound, or is it fundamentally
unworkable?