"Michael Sparks" <[EMAIL PROTECTED]> wrote

> Yes, there are a tiny set of scenarios where doing eval(raw_input(...))
> could be a problem. The idea that its always a gaping security hole is
> completely bogus.

The number of scenarios is not tiny, but the likelihood of attack by
that route is small. However, we live in a world where ever-increasing
numbers of people are deliberately trying to find such opportunities
and exploit them. For example, in my own organisation we have over
100,000 users, we have basic spyware logging their PC activity, and we
see over 1,000 attempted attacks per month - and that's just the
employees! Not all of that is malicious; some of it is just accidental
mis-typing/clicking etc. But some of it is deliberate attempts to
access things they shouldn't, or just to see if they can break it - it
can be boring working the night shift in a call centre! :-)

The problem is real, even if not enormous, and all programmers have
a duty to learn how to avoid it. That includes not using such open
doors to vandalism as eval() etc. While very few people would trash
their own computer, there are plenty of employees happy to trash the
company computer, especially since it often leads to an easy few
hours off until the tech guys fix it!

> The scenario's raised I've never once seen happen.

As I say, we see it happen many times every month.

>   * Scenario A (and only that scenario) is hardly a risk considering
>     in >99% of cases where the user can type something in response to
>     eval(raw_input(...)) they have FAR more ways of causing problems.

This is true: eval() is not the main risk in that scenario. But it
does still constitute a risk wherever its input can be read from
stdin.
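
To make that concrete, here is a minimal sketch (Python 2, to match
the raw_input() in the thread) of how an innocent-looking prompt
becomes an open door. The malicious input shown in the comment is
only an illustration:

# The prompt suggests a number is expected, but eval() will run
# whatever expression arrives on stdin. A user (or a script piped
# into the program) could type, for example:
#
#     __import__('os').system('rm -rf ~')    # or any other command
#
# and eval() would execute it with the program's privileges.
value = eval(raw_input("Enter a number: "))
print "You entered:", value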

> Denouncing a piece of code as a gaping security hole without
> discussing the context is irresponsible.

No, neglecting to mention that it is a gaping security hole would
be irresponsible. It would, however, be good to add context about
exactly when and how it is dangerous. In the case of eval() that
is *anywhere* untrusted or indeterminate input can be supplied.
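
For comparison, a sketch of the safer habit: convert the input
explicitly and reject anything else rather than evaluating it. (In
later Python versions ast.literal_eval() is another option for
parsing plain literals safely.)

# Ask for a number and accept nothing but a number; bad input is
# reported and re-prompted instead of being executed.
while True:
    reply = raw_input("Enter a number: ")
    try:
        value = int(reply)
        break
    except ValueError:
        print "That wasn't a whole number:", repr(reply)
print "You entered:", value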

> After all piece of code is never a security risk by itself. It's how
> that code is deployed and used that _can_ be.

Hmmm, I'm not sure I buy that. It's a bit like saying a gun is not a
safety risk, it's only how it's used that is. But the very presence of
the gun poses a risk that it will be abused. The same goes for risky
code: if it makes a breach possible then it is itself a risk. If the
risk matures it becomes an issue, but by then it may be too late to
deal with!

-- 
Alan Gauld
Author of the Learn to Program web site
http://www.freenetpages.co.uk/hp/alan.gauld 


