Nicholas Clark wrote:
> 
> On Fri, Aug 29, 2003 at 05:30:37PM +0200, Leopold Toetsch wrote:
>> I think, we need a general solution for freeze, dump and clone. As
>> shown
> 
> I don't know if this is relevant here, but I'll mention it in case.
> For perl5 there isn't a single good generic clone system. Probably the
> best (in terms of quality of what it can cope with, if not speed) is
> Storable.
> Storable provides a dclone method which internally serialises the passed
> in reference, and then deserialises a new reference. This is a lot of
> extra effort, but it does work. Obviously this fails Parrot's aim:
> speed.
> 
> I don't know whether this is relevant either:
> The internal representation of Storable's serialisation is a stream of
> tagged data. There was a discussion on p5p a while back about whether
> it would be possible to make a callback interface to the tags, so that
> people could write custom deserialisers (eg vetting the data as it came
> in). This sort of interface for hooking deserialisation would be
> useful. Likewise, being able to hook into the traversal/serialisation
> interface to provide custom output (eg XML when the standard
> serialisation is compressed YAML) would save people having to reinvent
> traversal/introspection routines (eg perl 5, where Data::Dumper and
> Storable in core each implement separate traversal routines).
> 
> I suspect that this is relevant:
> You can't trust your data deserialiser. It can do evil to you before
> it returns.
> 
> I forget who worked this attack out, but Jonathan Stowe presented a talk
> on it at YAPC::EU in Amsterdam. The specific attack form on Storable is
> as follows, but it should generalise to any system capable of
> deserialising objects:
> 
> You create a serialisation which holds a hash. The hash happens to
> have 2 identical keys (the serialisation format does not prevent this).
> The second key is innocent. When Storable writes this key into the hash
> it's deserialising, perl's hash API automatically frees any previous
> value for that key. This is how hashes are supposed to work.
> 
> Obviously, while deserialising there should never have been a previous
> value for that key (a legitimate file generated from a real hash could
> not have had repeating keys). The problem is that things like perl let
> you (de)serialise objects, and objects have destructors that can run
> arbitrary code. So the attacker makes the value associated with the
> first copy of the key an object whose attributes cause it to do
> something "interesting". The suggestion for attacking a Perl 5 CGI was
> to use a CGITempFile object. The destructor looks like this:
> 
> sub DESTROY {
>     my($self) = @_;
>     unlink $$self;              # get rid of the file
> }
> 
> The attacker can craft a bogus CGITempFile object that refers to any
> file on the system, and when this object is destroyed it will attempt to
> delete that file at whatever privilege level the CGI runs at. And
> because that object is getting destroyed inside the deserialise routine
> of Storable, this all happens without the user written code getting any
> chance to inspect the data. And even Storable can't do anything about
> it, because by the time it encounters the repeated hash key, it has
> already deserialised this time-bomb. How does it defuse it?

The simplest solution *I* can think of is to have Storable copy the
taint flag from the input string/stream onto every single string that
it produces.

Taint checking doesn't solve *all* security problems, of course, but it
can catch many of them, and it certainly would catch this one (if $$self
were tainted).
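The core of the attack quoted above can be sketched in Python terms.
This is a hedged analogue, not the Storable mechanics: `TimeBomb` stands
in for CGITempFile, and the naive deserialiser stands in for Storable's
hash-building loop. The point it demonstrates is real in both languages:
overwriting a repeated key drops the previous value, and dropping it
runs that value's destructor immediately, inside the deserialiser.

```python
fired = []

class TimeBomb:
    """Stands in for an object with a dangerous destructor
    (CGITempFile in the Perl case)."""
    def __del__(self):
        fired.append("boom")        # imagine: unlink $$self

def naive_deserialise(pairs):
    """A naive deserialiser that writes pairs straight into a dict."""
    d = {}
    for k, v in pairs:
        d[k] = v                    # repeated key: old value freed HERE
    return d

def attacker_stream():
    # Two identical keys; the first value is the time-bomb.
    yield ("key", TimeBomb())
    yield ("key", "innocent")

result = naive_deserialise(attacker_stream())

# The destructor has already run, inside the deserialiser, before the
# caller had any chance to inspect the data.
print(fired)    # -> ['boom']
print(result)   # -> {'key': 'innocent'}
```

Note that the returned structure looks entirely innocent afterwards;
nothing in `result` betrays that arbitrary code already ran.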

> Simple solutions such as checking the hash keys are unique don't work
> either. All the attacker does then is increase the abstraction
> slightly. Put the time-bomb as the first element in an array. Put one
> of these bogus hashes as the second element. Fine, you can realise
> that you've
> got bad keys in the bogus hash and never build it. But at this
> point the time-bomb object already exists. You'd have to validate the
> entire serialised stream before continuing. And if deserialisers for
> objects are allowed to fail, then you're still stuffed, because the
> attacker then crafts a time-bomb object, and a second malformed object
> that is known to cause its class's deserialiser to fail.

One defense against following a bomb with malformed data might be to
have Storable save up the SV*s and the names with which to bless them,
and only do the blessing *after* the data is fully deserialized, as a
last step before returning it to the user.  This way, if there's
malformed data, no destructors get called.  The user still needs to
validate the returned data, though, and rebless anything which might
result in an evil destructor being called.
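A minimal sketch of that "bless last" idea, translated into Python
(the `Plain` placeholder, `registry`, and `validate` callback are
inventions for illustration; Storable's real internals would collect
SV*s and class names instead). Phase 1 builds only inert containers, so
nothing with a destructor exists yet; phase 2 validates; only phase 3
constructs the real objects.

```python
class Plain:
    """Inert placeholder: class name plus fields, and no __del__."""
    def __init__(self, classname, fields):
        self.classname, self.fields = classname, fields

def deserialise(records, registry, validate):
    # Phase 1: build placeholders only -- nothing dangerous exists yet.
    pending = [Plain(name, fields) for name, fields in records]
    # Phase 2: validate while it is still safe to throw everything away.
    for p in pending:
        if not validate(p):
            raise ValueError("malformed record: %s" % p.classname)
    # Phase 3: only now construct the real, destructor-bearing objects.
    return [registry[p.classname](**p.fields) for p in pending]

class TempFile:
    """Stands in for a class with a dangerous destructor."""
    deleted = []
    def __init__(self, path):
        self.path = path
    def __del__(self):
        TempFile.deleted.append(self.path)      # imagine: unlink

registry = {"TempFile": TempFile}
validate = lambda p: (p.classname in registry
                      and set(p.fields) == {"path"})

# A malformed second record aborts deserialisation before any TempFile
# has been constructed, so no destructor can fire.
bad = [("TempFile", {"path": "/etc/passwd"}), ("Unknown", {"x": 1})]
try:
    deserialise(bad, registry, validate)
except ValueError:
    pass
print(TempFile.deleted)     # -> [] : the time-bomb was never armed
```

The trade-off is that class deserialisation hooks which need to run
*during* reading (eg to decide how to parse what follows) don't fit
this two-phase shape without more work.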

Another defense is to run deserialization and validation inside of a
Safe object.  Make sure that if the object fails to validate, it is
completely destructed before we exit the Safe compartment.

These defenses still aren't perfect: one could embed a time-bomb inside
a circular reference loop, then cut the loop off so it's not reachable
from the main data structure. It doesn't get destructed right away (the
ref loop keeps it alive), and it isn't visible to someone validating
the returned data structure. Only when the program is exiting, during
global destruction, does the destructor get called. You'd have to exit
with _exit to avoid that, I think. Or maybe turn off potentially bad
opcodes with the ops.pm pragma.
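The circular-reference time-bomb is easy to demonstrate in Python terms
(again an analogue: Python's cycle collector plays the role that global
destruction, or a DoD pass, plays in the Perl discussion). The bomb
lives in a cycle that is never returned, so reference counting can't
free it and a validator walking the returned data never sees it; it
fires only when the cycle collector finally runs.

```python
import gc

gc.disable()                     # make collection timing explicit

fired = []

class TimeBomb:
    def __del__(self):
        fired.append("boom")

def deserialise_with_hidden_cycle():
    bomb = TimeBomb()
    bomb.me = bomb               # the cycle keeps the bomb alive...
    return {"visible": "innocent data"}   # ...but it is never returned

data = deserialise_with_hidden_cycle()

# At validation time nothing has fired: the cycle keeps the bomb alive,
# and it isn't reachable from `data`, so validation finds nothing.
assert fired == []

gc.collect()                     # later (or at interpreter shutdown)...
print(fired)    # -> ['boom']
```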

(In perl6, a DoD run before we leave the Safe should clean up that
circular reference loop, right?)

> I presume that parrot is going to be able to de-serialise objects. In
> which case we are exposed to this sort of attack.

For backwards compatibility with perl5, parrot will quite likely support
taint checking, safe.pm, and ops.pm.

-- 
$a=24;split//,240513;s/\B/ => /for@@=qw(ac ab bc ba cb ca
);{push(@b,$a),($a-=6)^=1 for 2..$a/6x--$|;print "[EMAIL PROTECTED]
]\n";((6<=($a-=6))?$a+=$_[$a%6]-$a%6:($a=pop @b))&&redo;}
