The validation strategy you cite is all well and good when you *have* "a set of tightly constrained known good values." It's not useful in the general case.
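
When you do have such a constrained set, though, the check itself is trivial. A minimal sketch in Java (the class name, pattern, and length limit here are hypothetical, purely for illustration):

import java.util.regex.Pattern;

public final class WhitelistValidator {
    // Hypothetical whitelist: letters, digits, spaces, and a little
    // punctuation. Anything outside this set fails validation.
    private static final Pattern ALLOWED =
            Pattern.compile("[A-Za-z0-9 .,'-]{1,100}");

    public static boolean isValid(String input) {
        // matches() anchors the pattern to the whole string, so a single
        // disallowed character anywhere causes rejection.
        return input != null && ALLOWED.matcher(input).matches();
    }
}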

Your concerns with respect to XSS should only present a problem if you need to render untrusted HTML (as is often the case with web-based email applications, for example). Unless you need to preserve user-submitted HTML, though, the correct answer is, as Greg said, to HTML-escape all user-supplied data (or at least, all user-supplied data you haven't previously sanitized via strategies such as the ones you referenced).

If you do that, the browser will never see anything harmful in a context it treats as anything other than text (i.e. it will never try to interpret such data as markup), and therefore you won't be vulnerable.
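
For illustration, here is a bare-bones escaper (a sketch only; in practice you'd lean on whatever your framework already provides, e.g. JSTL's <c:out>, which escapes XML by default):

public final class HtmlEscaper {
    // Escape the characters that let user data break out of a text
    // context: & < > " '
    public static String escape(String s) {
        StringBuilder sb = new StringBuilder(s.length());
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            switch (c) {
                case '&':  sb.append("&amp;");  break;
                case '<':  sb.append("&lt;");   break;
                case '>':  sb.append("&gt;");   break;
                case '"':  sb.append("&quot;"); break;
                case '\'': sb.append("&#39;");  break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }
}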

L.

egetchell wrote:
Greg,

Thanks for the reply.

The common approach for mitigating XSS is to provide a blacklist of XSS-enabling
characters; examples would include "<", ">", "%3f", etc.  However,
these filters are easily bypassed by clever encoding constructs, so the
blacklist concept quickly fails and the site is open to attack.
By inverting the solution and supplying only the allowed set of characters,
the site remains secure no matter what clever encoding scheme someone dreams
up. The OWASP group provides some pretty extensive documentation around this. Here is a direct link to some common validation strategies:
http://www.owasp.org/index.php/Data_Validation#Data_Validation_Strategies

Their document, as a whole, is a very interesting read.


Greg Lindholm wrote:
Sorry, I've never heard of whitelisting of allowable characters as being a
"normal" approach. <Remainder Removed>


