> HTML::Scrubber is not really broken.  The problem is that the
> documentation leads the user to do broken things, as was shown with
> Planet Plagger.  It is possible to make a secure HTML::Scrubber config,
> but you need to default deny everything and then only allow a select
> list of tags and attributes - and you need to really think about that
> list.  The underlying problem, which I suspect HTML::Stripscripts shares
> is that HTML::Parser thinks that the attribute "foo=bar" is different
> than the attribute "foo.=bar" (RSnake covers this kind of evasion in his
> document fairly well) and your browser thinks that everything
> non-alphanumeric before the equals sign is junk.
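
For reference, a default-deny HTML::Scrubber setup along the lines
described above might look something like this (a minimal sketch using
the documented allow/rules/default methods; the tag and attribute list
here is only an example, not a vetted whitelist):

    use HTML::Scrubber;

    # Deny every tag and attribute by default, then allow a short,
    # hand-picked set of tags and attributes.
    my $scrubber = HTML::Scrubber->new(
        allow => [ qw( p br b i em strong ul ol li ) ],
    );
    $scrubber->rules(
        a => { href => 1, '*' => 0 },      # only href survives on <a>
    );
    $scrubber->default( 0, { '*' => 0 } ); # deny unlisted tags and attributes

    my $clean = $scrubber->scrub($untrusted_html);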

HTML::StripScripts::Parser takes a default-deny-everything approach and
reconstructs the HTML fed to it, so unless the input makes sense as
HTML, it doesn't make it through into the reconstructed output.
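
Roughly, the default usage looks like this (a sketch only; the
constructor options and the filtered_document() accessor are as
documented in HTML::StripScripts, so check the docs for your version):

    use HTML::StripScripts::Parser;

    # Default deny: anything not on the module's built-in whitelist is
    # filtered out, and the output is rebuilt from the parsed structure
    # rather than copied from the input.
    my $hss = HTML::StripScripts::Parser->new(
        { Context => 'Flow' },      # what kind of fragment to expect
        strict_comment => 1,
        strict_names   => 1,
    );

    $hss->parse($untrusted_html);
    $hss->eof;
    my $clean = $hss->filtered_document;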

I tried out loads of different forms of XSS attacks from RSnake's site,
and they were all neutered by StripScripts, including the 'foo.=' form.
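
As a concrete example of the kind of test I mean (the payload below is
illustrative rather than one of RSnake's verbatim):

    # Attribute-name evasion: a browser reads "onerror.=" as "onerror",
    # while a parser may see "onerror." as a different attribute name.
    my $payload = q{<img src="x" onerror.="alert(document.cookie)">};

    my $hss = HTML::StripScripts::Parser->new( { Context => 'Flow' } );
    $hss->parse($payload);
    $hss->eof;
    print $hss->filtered_document, "\n";   # the onerror handler is stripped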

> So without actually 
> sitting down and auditing HTML::Stripscripts I'd say it probably _can_
> be used safely, but odds are most people won't.

Again, without having audited it either, but based on my experience of
configuring and using it, I would say that with the default settings of
HTML::StripScripts::Parser you'd be pretty safe, but that you could
configure it so that it would NOT be safe.
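
For instance (and this is a hypothetical loosened configuration, not
something I've exploited - the option names are from the
HTML::StripScripts docs, and whether any particular combination is
actually dangerous is exactly what would need auditing):

    # Switching on every knob that widens what gets through the filter.
    my $loose = HTML::StripScripts::Parser->new(
        {
            Context     => 'Document',
            AllowSrc    => 1,   # allow src attributes (off-site images etc.)
            AllowHref   => 1,   # allow links
            AllowRelURL => 1,   # allow relative URLs as well
        },
    );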

Clint
