On 16/10/2008, at 10:11 PM, Danny Ayers wrote:

1) If the site has a normal representation there (i.e., a home page), it could be big, which would be an impediment to clients getting the metadata
quickly

Fair point.

(or at all, in resource-constrained use cases).

I don't get that - do you have an example of such a use case?

E.g., a mobile device / printer / remote sensor / etc. might have to download an entire HTML homepage to get the metadata it needs. True, it could drop the connection, but that's nasty and there'd still be packets in flight.
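
Concretely (made-up numbers, example.com standing in for a real site):

   GET / HTTP/1.1
   Host: example.com

   HTTP/1.1 200 OK
   Content-Type: text/html
   Content-Length: 50000

   ...50K of homepage markup, all of it overhead if the only thing you wanted was the metadata link.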


Remember, conneg can't be used to get something fundamentally different; it
needs to be a representation of the *same* resource.

Yep, but I don't think that's particularly relevant - usual conneg
rules apply, but representations of the root namespace resource MAY
contain a link to the metadata doc.

Of course. I just think it's stretching it a bit to say that a 10K HTML file and three lines of RDF (for example) are representations of the same resource...
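
In conneg terms that'd be something like (completely made-up example):

   GET / HTTP/1.1
   Host: example.com
   Accept: text/html

   -> 200 OK, text/html, ~10K of homepage

   GET / HTTP/1.1
   Host: example.com
   Accept: application/rdf+xml

   -> 200 OK, application/rdf+xml

   <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
     <rdf:Description rdf:about="http://example.com/">
       ...a few triples of site metadata...
     </rdf:Description>
   </rdf:RDF>

Both come back from the same URI, so technically they're representations of the same resource; they just don't have much to do with each other.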


2) The step of indirection is a deal-killer for some users.

For example..?

It's the extra round-trip time; subjecting all of your users to that is a big deal to performance-minded people, especially when you consider things like users on high-latency, low-bandwidth, high-loss links, or the extra bandwidth cost of doing it on a very popular site, etc. This topic occupied a *lot* of time in the P3P discussions, and it still comes up with a lot of people considering this issue today.
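
To spell out the two patterns (paths illustrative only):

   # link from the root: two round trips for every client
   GET /           -> 200, homepage containing <link rel="..." href="/metadata">
   GET /metadata   -> 200, the metadata itself

   # well-known location: one round trip
   GET /site-meta  -> 200, the metadata (or at least the pointers to it)

On a high-latency link that extra round trip can easily dominate the whole exchange.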


Given that the whole idea here is to make this a slam-dunk solution
for the problem (so as to avoid creating any *other* new well-known
locations), it has to have as few points of friction as possible.

Do you happen to know if robots.txt has any extension points (or could
be viably revised)? (Got a presentation to prep last minute or I'd go
look :-)

I looked at that, but the situation is really muddy; AIUI some parsers will choke on unrecognised content. I actually started out assuming robots.txt, but seeing as there isn't even a decent spec for it...
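
For reference, the closest thing to an extension point today is that some crawlers tolerate extra fields like the de-facto Sitemap: one; a metadata field would presumably look similar, but it's purely hypothetical:

   User-agent: *
   Disallow: /private/

   Sitemap: http://example.com/sitemap.xml
   Site-Meta: http://example.com/site-meta   # hypothetical field - exactly the sort of line older parsers may choke on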


What I'm *really* wondering at this point is if XML itself is too complex -- i.e., should this be a line-oriented format? One pre-draft reviewer
already suggested as much.
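
For the sake of argument (field names and URIs entirely made up), the same links as

   <site-meta>
     <link rel="describedby" href="http://example.com/metadata.rdf"/>
     <link rel="license" href="http://example.com/license.html"/>
   </site-meta>

could just be

   describedby http://example.com/metadata.rdf
   license http://example.com/license.html

which is a lot easier to emit and parse without an XML toolkit.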

That sounds reasonable, though it would be good if an agent could make
some sense of the doc without prior knowledge - which raises a point: the
current proposed format doesn't have an XML namespace, which pre-empts
any chance of follow-your-nose discovery (a la GRDDL).
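
E.g. with a namespace in place, a GRDDL-aware agent could follow its nose from something like (element names and the transform URI are made up; only the grddl namespace is real):

   <site-meta xmlns="http://example.com/ns/site-meta"
              xmlns:grddl="http://www.w3.org/2003/g/data-view#"
              grddl:transformation="http://example.com/site-meta2rdf.xsl">
     ...
   </site-meta>

to a transform that yields RDF, without having been told about the format in advance.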

Yeah, I'm trying to see how far I can get in 2008 without a namespace :)

Question out of the blue -- can GRDDL do dispatch on a media type? If not, why not?

Cheers and thanks,


--
Mark Nottingham     http://www.mnot.net/

