I've urged over and over that we keep two problems separate: the
literally unfixable one of protecting sites such as MySpace and
LiveJournal from improperly sanitized or sandboxed user-generated
content, and the fixable one of making our own codebase track
javascript: subject principals.  So I'm good with making progress, even
if it requires brute-force checks to classify cases that should not
work as they did in Netscape 2-4.

Breaking pnglets or doodlepad
(https://bugzilla.mozilla.org/show_bug.cgi?id=148967), on the other
hand, with the claim that such sites should use data: anyway, seems
wrong to me.  Those techniques predate data: URIs being supported well,
or at all.  If, with correct propagation of principals through every
URI load path where the URI could be javascript:, pnglets and doodlepad
would "just work", why should we break them?
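
For anyone who hasn't seen the idiom in question: roughly, pnglets-style
code builds the image bytes in script and hands them to the image via a
javascript: URL, because there was no workable data: alternative at the
time.  A sketch, where makePngString is a hypothetical stand-in for the
PNG-building code in those libraries:

  <script>
    // Hypothetical stand-in: assemble the raw PNG bytes as a string in
    // script, chunk by chunk, the way pnglets does client-side.
    function makePngString() {
      return "";  // placeholder for the real PNG byte string
    }

    var img = new Image();
    // Netscape-era idiom: the javascript: URL is evaluated in the page,
    // and its string result is loaded as the image data.
    img.src = "javascript:makePngString()";
    document.body.appendChild(img);

    // The data: form such sites are being told to switch to:
    // img.src = "data:image/png;base64," + btoa(makePngString());
  </script>

The javascript: URL here is written by the page itself, so with sound
subject-principal tracking it is the page's own code running, same as
the data: form would be.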

The web platform components being used here are over ten years old, but
that doesn't mean "time to change".  It could mean "leave working" or
"still important".  If our principals tracking is sound, we should
maximize cases that work, not look for cases to break by brute-force
checks, with our message being "change your content".

/be

