William F Hammond wrote:
> The experiment begun around 2001 of "punishing" bad
> documents in application/xhtml+xml seems to have led to that mime type
> not being much used.

That has more to do with the fact that for a number of years it wasn't supported in the browsers used by 90+% of users.

> So user agents need to learn how to recognize the good and the bad
> in both mimetypes.

Recognize and do what with it?

> Otherwise you have Gresham's Law: the bad documents will drive out the
> good.

Perhaps you should clearly state your definitions of "bad" and "good" in this case? I'd also like to know, given those definitions, why it's bad for the "bad" documents to drive out the "good", and how you think your proposal will prevent that from happening.

> If it has a preamble beginning with "^<?xml " or a sensible
> xhtml DOCTYPE declaration or a first element "<html xmlns=...>",
> then handle it as xhtml unless and until it proves to be non-compliant
> xhtml (e.g., not well-formed xml, unquoted attributes, munged handling
> of xml namespaces, ...).  At the point it proves to be bad xhtml, reload
> it and treat it as "regular" html.
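[The quoted proposal amounts to a sniff-then-fallback algorithm. A minimal sketch of it, using only the Python standard library; the hint patterns, the 1024-byte sniff window, and the function names are my own illustrative assumptions, not anything specified in the proposal.]

```python
import re
import xml.etree.ElementTree as ET

# Heuristic hints that a document claims to be XHTML (illustrative, not normative):
XHTML_HINTS = (
    re.compile(r'^\s*<\?xml\s'),                           # XML declaration
    re.compile(r'<!DOCTYPE\s+html\s+PUBLIC\s+"-//W3C//DTD XHTML', re.I),
    re.compile(r'<html\s[^>]*xmlns='),                     # root element with xmlns
)

def looks_like_xhtml(document: str) -> bool:
    """Sniff the first bytes for any of the XHTML hints."""
    head = document[:1024]
    return any(p.search(head) for p in XHTML_HINTS)

def handle(document: str) -> str:
    """Return which parser a UA following the proposal would end up using."""
    if looks_like_xhtml(document):
        try:
            ET.fromstring(document)   # strict well-formedness check
            return "xhtml"
        except ET.ParseError:
            return "html"             # the proposed "reload as regular html" step
    return "html"
```

Note that the fallback branch is exactly where the proposal's 1-2 second penalty comes from: the document must be parsed (or fetched) a second time with the lenient HTML parser.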

What's the benefit? This seems to give the worst of both worlds, as well as a poor user experience.

> So most bogus xhtml will then be 1 or 2 seconds slower than good xhtml.
> Astute content providers will notice that and then do something about it.
> It provides a feedback mechanism for making the web become better.

In the meantime, it punishes users for things outside their control by degrading their experience. It also hands a competitive advantage to UAs that ignore your proposal.

Sounds like an unstable equilibrium to me, even if attainable.

-Boris
