Cameron Laird wrote:
In article <[EMAIL PROTECTED]>,
Michael Spencer  <[EMAIL PROTECTED]> wrote:
                        .
                        .
                        .

Right - the crux of the problem is how to identify dangerous objects. My point is that if such a test is possible, then safe exec is very easily implemented within current Python. If it is not, then it is essentially impossible.




I'll suggest yet another perspective: add another indirection.
As the virtual machine becomes more available to introspection,
it might become natural to define a *very* restricted interpreter
which we can all agree is safe, PLUS a means to extend that
specific instance of the VM with, say, new definitions of bindings
for particular AST nodes. Then the developer has the means to
"build out" his own VM in a way he can judge useful and safe for
his own situation. Rather than the Java there-is-one-"safe"-for-
all approach, Pythoneers would have the tools to create safety.

That does sound good. And evolutionary, because the very restricted VM could be implemented today (in Python), and subsequently PyPy (or whatever) could optimize it.
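
To make that concrete, the bare-bones core might be no more than something like the sketch below - written against the current ast module purely for illustration; SafeInterpreter, Unsafe and the bindings table are names I'm inventing here, not anything that exists:

import ast

class Unsafe(Exception):
    """Raised when an expression uses a node type with no binding."""

class SafeInterpreter:
    """Evaluate an expression by walking its AST, but only through an
    explicit table of node handlers; anything else is rejected."""

    def __init__(self):
        # The "bindings" for particular AST nodes: node type -> handler.
        # Out of the box, only constant expressions are accepted.
        self.bindings = {
            ast.Expression: lambda node: self.visit(node.body),
            ast.Constant:   lambda node: node.value,
        }

    def visit(self, node):
        handler = self.bindings.get(type(node))
        if handler is None:
            raise Unsafe("no binding for %s" % type(node).__name__)
        return handler(node)

    def eval(self, source):
        return self.visit(ast.parse(source, mode="eval"))

interp = SafeInterpreter()
print(interp.eval("42"))              # a constant: accepted
try:
    interp.eval("__import__('os')")
except Unsafe as exc:
    print("rejected:", exc)           # Call has no binding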


The safe eval recipe I referred to earlier in the thread is IMO a trivial example of this approach. Of course, its restrictions are extreme - it accepts only constant expressions - but it is straightforwardly extensible to any subset of the language.
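
For instance, continuing the hypothetical SafeInterpreter/Unsafe sketch above (again, invented names, and not the recipe itself), letting arithmetic through is just a matter of registering bindings for BinOp and a small whitelist of operators:

import ast
import operator

# Extending the hypothetical SafeInterpreter sketch from above:
# the developer "builds out" the VM by registering further bindings.
ALLOWED_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
               ast.Mult: operator.mul, ast.Div: operator.truediv}

def bind_arithmetic(interp):
    def visit_binop(node):
        op = ALLOWED_OPS.get(type(node.op))
        if op is None:
            raise Unsafe("operator not allowed")
        return op(interp.visit(node.left), interp.visit(node.right))
    interp.bindings[ast.BinOp] = visit_binop

interp = SafeInterpreter()
bind_arithmetic(interp)
print(interp.eval("2 + 3 * 4"))       # 14: arithmetic is now bound
# interp.eval("open('x')") still raises Unsafe: no binding for Call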

The limitation that I see with this approach is that it is not, in general, the syntax that is safe or unsafe (with the notable exception of 'import' and its relatives). Rather, it is the library objects, especially the built-ins, that present the main source of risk.

So, if I understand your suggestion, it would require assessing the safety of the built-in objects, as well as providing an interpreter that could control access to them, possibly with fine-grained control at the attribute level.
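
Just to illustrate the shape of that - emphatically a sketch and not a working sandbox (namespace tricks like this are known to be escapable), with AttributeGuard and safe_names being names I'm making up - the evaluated code might be handed a curated namespace with attribute-level filtering:

import math

class AttributeGuard:
    """Expose only a whitelisted set of attributes of the wrapped object."""
    def __init__(self, obj, allowed):
        self._obj = obj
        self._allowed = frozenset(allowed)
    def __getattr__(self, name):
        if name not in self._allowed:
            raise AttributeError("access to %r is not permitted" % name)
        return getattr(self._obj, name)

# The evaluated code sees only a curated namespace: the real builtins
# hidden, and library objects filtered attribute by attribute.
safe_names = {
    "__builtins__": {},
    "math": AttributeGuard(math, ["sqrt", "pi"]),
}

print(eval("math.sqrt(2)", safe_names))   # permitted
# eval("math.exp(1)", safe_names) raises AttributeError: not whitelisted

The hard part, of course, is deciding what belongs on those whitelists in the first place.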

M




-- http://mail.python.org/mailman/listinfo/python-list
