On Thursday, August 25, 2005, at 08:16 AM, James M Snell wrote:
Good points, but it's more than just the handling of human-readable content. That's one use case, but there are others. Consider, for example, if I was producing a feed that contained javascript and CSS styles that would otherwise be unwise for an online aggregator to try to display (e.g. the now famous Platypus prank... http://diveintomark.org/archives/2003/06/12/how_to_consume_rss_safely). Typically, aggregators and feed readers are (rightfully) recommended to strip scripts and styles from the content in order to reliably display the information. But it is foreseeable that applications could be built that rely on these kinds of mechanisms within the feed content. For example, I may want to create a feed that provides the human interaction for a workflow process -- each entry contains a form that uses javascript for validation and perhaps some CSS styles for formatting.
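
To make that stripping concrete, here's a minimal Python sketch (purely illustrative; a real reader needs proper whitelisting, not this) that drops script/style elements and on*/style attributes before display:

from html.parser import HTMLParser

DROP_TAGS = {'script', 'style'}   # elements removed wholesale
DROP_ATTRS = {'style'}            # attributes removed from kept elements

class Sanitizer(HTMLParser):
    """Rebuilds markup, skipping dangerous elements and attributes."""

    def __init__(self):
        super().__init__(convert_charrefs=False)
        self.out = []
        self.skip = 0  # > 0 while inside a dropped element

    def handle_starttag(self, tag, attrs):
        if tag in DROP_TAGS:
            self.skip += 1
        elif not self.skip:
            kept = [(k, v) for k, v in attrs
                    if k not in DROP_ATTRS and not k.startswith('on')]
            self.out.append('<%s%s>' % (tag,
                ''.join(' %s="%s"' % (k, v or '') for k, v in kept)))

    def handle_endtag(self, tag):
        if tag in DROP_TAGS:
            self.skip = max(0, self.skip - 1)
        elif not self.skip:
            self.out.append('</%s>' % tag)

    def handle_data(self, data):
        if not self.skip:
            self.out.append(data)

    def handle_entityref(self, name):
        if not self.skip:
            self.out.append('&%s;' % name)

    def handle_charref(self, name):
        if not self.skip:
            self.out.append('&#%s;' % name)

def sanitize(html):
    s = Sanitizer()
    s.feed(html)
    s.close()
    return ''.join(s.out)

# sanitize('<p onclick="evil()">hi<script>bad()</script></p>')
# returns '<p>hi</p>'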

For a use case like that workflow example, you'd either need to use a less sophisticated feed reader that didn't strip anything out (and only use it to subscribe to fully trusted feeds, like internal feeds), or a more sophisticated feed reader that let you turn off the stripping of "potentially dangerous" stuff, or configure exactly what was, or better yet wasn't, stripped (perhaps on a feed-by-feed basis, as sketched below).
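
Sketching that feed-by-feed knob, reusing the illustrative sanitize() above (the feed URL and function names are invented for the example):

# Fully trusted (e.g. internal) feeds skip stripping; everything
# else gets sanitized. A real reader would expose this in its UI.
TRUSTED_FEEDS = {'http://intranet.example.com/workflow.atom'}

def render_content(feed_url, html):
    if feed_url in TRUSTED_FEEDS:
        return html          # trusted: keep scripts and styles intact
    return sanitize(html)    # everyone else: strip before display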

Whether or not to strip should be controlled from the client side, so I don't see any point in providing a mechanism for the publisher to hint about whether to strip things out. That would probably only benefit malicious publishers at the expense of brain-dead clients:

<entry>
        ...
        <ext:keep-potentially-dangerous-stuff value="true" />
        <content ... ><script ... >TriggerExploitThatErasesDrive('C:');</script></content>
</entry>
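
A client built along the lines of the render_content() sketch above would, rightly, sanitize that entry no matter what the hint claims.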
