Re: Second Last Call: draft-hammer-hostmeta-16.txt (Web Host Metadata) to Proposed Standard -- feedback

2011-07-14 Thread Peter Saint-Andre
Mark, I will work with the authors on proposed wording to describe the
context in which this technology is most applicable, and post again once
we've done that.

On 7/13/11 5:35 AM, Mark Nottingham wrote:
 Personally, I think Informational is most appropriate (and probably easier), 
 but a paragraph or two of context, as well as corresponding adjustments, 
 would work as well. 
 
 Cheers,
 
 
 On 13/07/2011, at 5:36 AM, Peter Saint-Andre wrote:
 
 On 6/21/11 11:08 PM, Mark Nottingham wrote:
 Generally, it's hard for me to be enthusiastic about this proposal,
 for a few reasons. That doesn't mean it shouldn't be published, but I
 do question the need for it to be Standards Track as a general
 mechanism.

 How about publishing it on the standards track but not as a general
 mechanism (i.e., why not clarify when it is and is not appropriate)?

 Clearly, both service providers (Google, Yahoo, etc.) and spec authors
 (draft-hardjono-oauth-dynreg-00, draft-hardjono-oauth-umacore-00) have
 found hostmeta somewhat useful in certain contexts.

 RFC 2026 says:

   A Proposed Standard specification is generally stable, has resolved
   known design choices, is believed to be well-understood, has received
   significant community review, and appears to enjoy enough community
   interest to be considered valuable.

 and:

   Usually, neither implementation nor operational experience is
   required for the designation of a specification as a Proposed
   Standard.  However, such experience is highly desirable, and will
   usually represent a strong argument in favor of a Proposed Standard
   designation.

 The spec seems to be stable at this point, it's received significant
 review, people seem to understand what it does and how it works, it's
 had both implementation and operational experience, and it appears to
 enjoy enough community interest to be considered valuable in certain
 contexts. I also think it has resolved the design choices and solved the
 requirements that it set out to solve, although you might be right that
 it doesn't solve all of the problems that a more generic metadata
 framework would need to solve.

 As a result, it seems like a fine candidate for Proposed Standard, i.e.,
 an entry-level document on the standards track that might be modified or
 even retracted based on further experience.

 Mostly, it's because I haven't really seen much discussion of it as a
 general component of the Web / Internet architecture; AFAICT all of
 the interest in it and discussion of it has happened in more
 specialised / vertical places. 

 Again, perhaps we need to clarify that it is not necessarily a general
 component of the web architecture, although it can be used to solve more
 specific problems.

 The issues below are my concerns;
 they're not insurmountable, but I would have expected to see some
 discussion of them to date on lists like this one and/or the TAG list
 for something that's to be an Internet Standard.


 * XRD -- XRD is an OASIS spec that's used by OpenID and OAuth. Maybe
 I'm just scarred by WS-*, but it seems very over-engineered for what
 it does. I understand that the communities had reasons for using it
 to leverage an existing user base for their specific use cases, but
 I don't see any reason to generalise such a beast into a generic
 mechanism.

 As discussed in responses to your message, XRD seems to have been an
 appropriate tool for the job in this case. Whether XRD, too, is really a
 general component of the web architecture is another question.

 * Precedence -- In my experience one of the most difficult parts of a
 metadata framework like this is specifying the combination of
 metadata from multiple sources in a way that's usable, complete and
 clear. Hostmeta only briefly mentions precedence rules in the
 introduction.

 That could be something to work on if and when folks try to advance this
 technology to the next maturity level (currently Draft Standard).

 * Scope of hosts -- The document doesn't crisply define what a host
 is.

 This seems at least somewhat well-defined:

   a host is not a single resource but the entity
   controlling the collection of resources identified by Uniform
   Resource Identifiers (URI) with a common URI host [RFC3986].

 That is, it references Section 3.2.2 of RFC 3986, which defines host
 with some precision (albeit perhaps not crisply).

 * Context of metadata -- I've become convinced that the most
 successful uses of .well-known URIs are those that have commonality
 of use; i.e., it makes sense to define a .well-known URI when most of
 the data returned is applicable to a particular use case or set of
 use cases. This is why robots.txt works well, as do most other
 currently-deployed examples of well-known URIs.

 Defining a bucket for potentially random, unassociated metadata in a
 single URI is, IMO, asking for trouble; if it is successful, it could
 cause administrative issues on the server (as potentially many
 parties will need control of a single file, for different uses --
 tricky when ordering is important for precedence), and if the file
 gets big, it will cause performance issues for some use cases.

Re: Second Last Call: draft-hammer-hostmeta-16.txt (Web Host Metadata) to Proposed Standard -- feedback

2011-07-13 Thread Mark Nottingham
Personally, I think Informational is most appropriate (and probably easier), 
but a paragraph or two of context, as well as corresponding adjustments, would 
work as well. 

Cheers,


On 13/07/2011, at 5:36 AM, Peter Saint-Andre wrote:

 On 6/21/11 11:08 PM, Mark Nottingham wrote:
 Generally, it's hard for me to be enthusiastic about this proposal,
 for a few reasons. That doesn't mean it shouldn't be published, but I
 do question the need for it to be Standards Track as a general
 mechanism.
 
 How about publishing it on the standards track but not as a general
 mechanism (i.e., why not clarify when it is and is not appropriate)?
 
 Clearly, both service providers (Google, Yahoo, etc.) and spec authors
 (draft-hardjono-oauth-dynreg-00, draft-hardjono-oauth-umacore-00) have
 found hostmeta somewhat useful in certain contexts.
 
 RFC 2026 says:
 
   A Proposed Standard specification is generally stable, has resolved
   known design choices, is believed to be well-understood, has received
   significant community review, and appears to enjoy enough community
   interest to be considered valuable.
 
 and:
 
   Usually, neither implementation nor operational experience is
   required for the designation of a specification as a Proposed
   Standard.  However, such experience is highly desirable, and will
   usually represent a strong argument in favor of a Proposed Standard
   designation.
 
 The spec seems to be stable at this point, it's received significant
 review, people seem to understand what it does and how it works, it's
 had both implementation and operational experience, and it appears to
 enjoy enough community interest to be considered valuable in certain
 contexts. I also think it has resolved the design choices and solved the
 requirements that it set out to solve, although you might be right that
 it doesn't solve all of the problems that a more generic metadata
 framework would need to solve.
 
 As a result, it seems like a fine candidate for Proposed Standard, i.e.,
 an entry-level document on the standards track that might be modified or
 even retracted based on further experience.
 
 Mostly, it's because I haven't really seen much discussion of it as a
 general component of the Web / Internet architecture; AFAICT all of
 the interest in it and discussion of it has happened in more
 specialised / vertical places. 
 
 Again, perhaps we need to clarify that it is not necessarily a general
 component of the web architecture, although it can be used to solve more
 specific problems.
 
 The issues below are my concerns;
 they're not insurmountable, but I would have expected to see some
 discussion of them to date on lists like this one and/or the TAG list
 for something that's to be an Internet Standard.
 
 
 * XRD -- XRD is an OASIS spec that's used by OpenID and OAuth. Maybe
 I'm just scarred by WS-*, but it seems very over-engineered for what
 it does. I understand that the communities had reasons for using it
 to leverage an existing user base for their specific use cases, but
 I don't see any reason to generalise such a beast into a generic
 mechanism.
 
 As discussed in responses to your message, XRD seems to have been an
 appropriate tool for the job in this case. Whether XRD, too, is really a
 general component of the web architecture is another question.
 
 * Precedence -- In my experience one of the most difficult parts of a
 metadata framework like this is specifying the combination of
 metadata from multiple sources in a way that's usable, complete and
 clear. Hostmeta only briefly mentions precedence rules in the
 introduction.
 
 That could be something to work on if and when folks try to advance this
 technology to the next maturity level (currently Draft Standard).
 
 * Scope of hosts -- The document doesn't crisply define what a host
 is.
 
 This seems at least somewhat well-defined:
 
   a host is not a single resource but the entity
   controlling the collection of resources identified by Uniform
   Resource Identifiers (URI) with a common URI host [RFC3986].
 
 That is, it references Section 3.2.2 of RFC 3986, which defines host
 with some precision (albeit perhaps not crisply).
 
 * Context of metadata -- I've become convinced that the most
 successful uses of .well-known URIs are those that have commonality
 of use; i.e., it makes sense to define a .well-known URI when most of
 the data returned is applicable to a particular use case or set of
 use cases. This is why robots.txt works well, as do most other
 currently-deployed examples of well-known URIs.
 
 Defining a bucket for potentially random, unassociated metadata in a
 single URI is, IMO, asking for trouble; if it is successful, it could
 cause administrative issues on the server (as potentially many
 parties will need control of a single file, for different uses --
 tricky when ordering is important for precedence), and if the file
 gets big, it will cause performance issues for some use cases.
 
 It would be helpful to hear from folks who have deployed hostmeta
 whether they have run into any operational issues of the kind you
 describe here.

Re: Second Last Call: draft-hammer-hostmeta-16.txt (Web Host Metadata) to Proposed Standard -- feedback

2011-07-13 Thread Peter Saint-Andre
On 7/13/11 5:35 AM, Mark Nottingham wrote:
 Personally, I think Informational is most appropriate (and probably
 easier), but a paragraph or two of context, as well as corresponding
 adjustments, would work as well.

Personally I'm not wedded to Standards Track for this document, and
neither are the authors. I just want to make sure that there's a good
reason to change it from Standards Track to Informational. IMHO, the
argument that "this document doesn't solve the problem in the most
generic way" would
prevent us from publishing a rather large number of specifications on
the Standards Track. There's nothing evil about scoping a document so
that it solves a more particular problem.

Peter

-- 
Peter Saint-Andre
https://stpeter.im/




Re: Second Last Call: draft-hammer-hostmeta-16.txt (Web Host Metadata) to Proposed Standard -- feedback

2011-07-12 Thread Peter Saint-Andre
On 6/21/11 11:08 PM, Mark Nottingham wrote:
 Generally, it's hard for me to be enthusiastic about this proposal,
 for a few reasons. That doesn't mean it shouldn't be published, but I
 do question the need for it to be Standards Track as a general
 mechanism.

How about publishing it on the standards track but not as a general
mechanism (i.e., why not clarify when it is and is not appropriate)?

Clearly, both service providers (Google, Yahoo, etc.) and spec authors
(draft-hardjono-oauth-dynreg-00, draft-hardjono-oauth-umacore-00) have
found hostmeta somewhat useful in certain contexts.

RFC 2026 says:

   A Proposed Standard specification is generally stable, has resolved
   known design choices, is believed to be well-understood, has received
   significant community review, and appears to enjoy enough community
   interest to be considered valuable.

and:

   Usually, neither implementation nor operational experience is
   required for the designation of a specification as a Proposed
   Standard.  However, such experience is highly desirable, and will
   usually represent a strong argument in favor of a Proposed Standard
   designation.

The spec seems to be stable at this point, it's received significant
review, people seem to understand what it does and how it works, it's
had both implementation and operational experience, and it appears to
enjoy enough community interest to be considered valuable in certain
contexts. I also think it has resolved the design choices and solved the
requirements that it set out to solve, although you might be right that
it doesn't solve all of the problems that a more generic metadata
framework would need to solve.

As a result, it seems like a fine candidate for Proposed Standard, i.e.,
an entry-level document on the standards track that might be modified or
even retracted based on further experience.

 Mostly, it's because I haven't really seen much discussion of it as a
 general component of the Web / Internet architecture; AFAICT all of
 the interest in it and discussion of it has happened in more
 specialised / vertical places. 

Again, perhaps we need to clarify that it is not necessarily a general
component of the web architecture, although it can be used to solve more
specific problems.

 The issues below are my concerns;
 they're not insurmountable, but I would have expected to see some
 discussion of them to date on lists like this one and/or the TAG list
 for something that's to be an Internet Standard.
 
 
 * XRD -- XRD is an OASIS spec that's used by OpenID and OAuth. Maybe
 I'm just scarred by WS-*, but it seems very over-engineered for what
 it does. I understand that the communities had reasons for using it
 to leverage an existing user base for their specific use cases, but
 I don't see any reason to generalise such a beast into a generic
 mechanism.

As discussed in responses to your message, XRD seems to have been an
appropriate tool for the job in this case. Whether XRD, too, is really a
general component of the web architecture is another question.

 * Precedence -- In my experience one of the most difficult parts of a
 metadata framework like this is specifying the combination of
 metadata from multiple sources in a way that's usable, complete and
 clear. Hostmeta only briefly mentions precedence rules in the
 introduction.

That could be something to work on if and when folks try to advance this
technology to the next maturity level (currently Draft Standard).

 * Scope of hosts -- The document doesn't crisply define what a host
 is.

This seems at least somewhat well-defined:

   a host is not a single resource but the entity
   controlling the collection of resources identified by Uniform
   Resource Identifiers (URI) with a common URI host [RFC3986].

That is, it references Section 3.2.2 of RFC 3986, which defines host
with some precision (albeit perhaps not crisply).
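
To make "a common URI host" concrete, here is a toy sketch (mine, not the
draft's) of the scope rule in RFC 3986 terms: two URIs fall under the same
host-meta document exactly when their host components match. Scheme and
port play no part, which is arguably part of the imprecision:

   from urllib.parse import urlsplit

   def same_hostmeta_scope(uri_a, uri_b):
       # Compare only the host subcomponent of the authority
       # (RFC 3986, Section 3.2.2); scheme and port are ignored.
       return urlsplit(uri_a).hostname == urlsplit(uri_b).hostname

   # True, despite the differing scheme and port:
   same_hostmeta_scope("http://example.com/a", "https://example.com:8443/b")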

 * Context of metadata -- I've become convinced that the most
 successful uses of .well-known URIs are those that have commonality
 of use; i.e., it makes sense to define a .well-known URI when most of
 the data returned is applicable to a particular use case or set of
 use cases. This is why robots.txt works well, as do most other
 currently-deployed examples of well-known URIs.
 
 Defining a bucket for potentially random, unassociated metadata in a
 single URI is, IMO, asking for trouble; if it is successful, it could
 cause administrative issues on the server (as potentially many
 parties will need control of a single file, for different uses --
 tricky when ordering is important for precedence), and if the file
 gets big, it will cause performance issues for some use cases.

It would be helpful to hear from folks who have deployed hostmeta
whether they have run into any operational issues of the kind you
describe here.
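
For anyone who wants to poke at a deployment, retrieval is a single GET of
a fixed path. A minimal sketch, with example.com standing in for a real
host (/.well-known/host-meta is the path the draft registers):

   import urllib.request

   # Fetch the host-wide metadata document for a host.
   url = "http://example.com/.well-known/host-meta"
   with urllib.request.urlopen(url) as resp:
       print(resp.read().decode("utf-8"))  # the XRD document, if deployed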

 * Chattiness -- the basic model for resource-specific metadata in
 hostmeta requires at least two requests; one to get the hostmeta
 document, and one to get the resource-specific metadata after
 interpolating the URI of interest into a template.

Re: Second Last Call: draft-hammer-hostmeta-16.txt (Web Host Metadata) to Proposed Standard -- feedback

2011-07-05 Thread Eran Hammer-Lahav
Hannes,

None of the current OAuth WG documents addresses discovery in any way, so
clearly there will be no use of XRD. But the OAuth community, before the work
moved to the IETF, had multiple proposals for it. In addition, people have
suggested using host-meta and XRD for discovery purposes multiple times on the
IETF OAuth WG list.

The idea that XRD was reused without merit is both misleading and 
mean-spirited. Personally, I'm sick of it, especially coming from standards 
professionals.

XRD was largely developed by the same people who worked on host-meta. XRD 
predated host-meta and was designed to cover the wider use case. Host-meta was 
an important use case when developing XRD in its final few months. It was done
in OASIS out of respect for the proper standards process, in which the body
that originated a work (XRDS) gets to keep it.

I challenge anyone to find any faults with the IPR policy or process used to 
develop host-meta in OASIS.

XRD is one of the simplest XML formats I have seen. I bet most of the people 
bashing it now have never bothered to read it. At least some of these people
were personally invited by me to comment on XRD while it was still in
development and chose to dismiss it.

XRD was designed in a very open process with plenty of community feedback and 
it was significantly simplified based on that feedback. In addition, host-meta 
further simplifies it by profiling it down, removing some of the more complex 
elements like Subject and Alias (which are very useful in other contexts). XRD 
is nothing more than a cleaner version of HTML LINK elements with literally a 
handful of new elements based on well-defined and widely supported
requirements. Its entire semantic meaning is based on the IETF link relation
registry RFC (RFC 5988).
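
For reference, here is roughly what a complete host-meta document looks
like. This is a hand-written sample of mine (the link targets are
hypothetical; "copyright" is a registered relation type, and "lrdd" is the
one the draft uses for resource-specific lookups), parsed with nothing more
than the standard library:

   import xml.etree.ElementTree as ET

   sample = """<?xml version="1.0" encoding="UTF-8"?>
   <XRD xmlns="http://docs.oasis-open.org/ns/xri/xrd-1.0">
     <Link rel="copyright" href="http://example.com/copyright"/>
     <Link rel="lrdd" template="http://example.com/lrdd?uri={uri}"/>
   </XRD>"""

   XRD_NS = "{http://docs.oasis-open.org/ns/xri/xrd-1.0}"
   for link in ET.fromstring(sample.encode("utf-8")).findall(XRD_NS + "Link"):
       print(link.get("rel"), link.get("href") or link.get("template"))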

There is something very disturbing going on these days in how people treat
XML-based formats, especially ones from OASIS.

When host-meta's predecessor, site-meta, was originally proposed a few years
ago, Mark Nottingham proposed an XML format not that different from XRD. There
is nothing wrong with JSON taking over as a simpler alternative; I personally
much prefer JSON. But it would be reckless and counterproductive to ignore a
decade of work on XML formats just because it is no longer cool. It feels like
we are back in high school.

If you have technical arguments against host-meta, please share. But if your 
objections are based on changing trends, a dislike of XML, or anything OASIS,
grow up.
up.

EHL



From: Hannes Tschofenig <hannes.tschofe...@gmx.net>
Date: Sun, 3 Jul 2011 00:36:29 -0700
To: Mark Nottingham <m...@mnot.net>
Cc: Hannes Tschofenig <hannes.tschofe...@gmx.net>, IETF <ietf@ietf.org>,
 Eran Hammer-Lahav <e...@hueniverse.com>, oauth WG <oa...@ietf.org>
Subject: Re: Second Last Call: draft-hammer-hostmeta-16.txt (Web Host
 Metadata) to Proposed Standard -- feedback

I also never really understood why XRD was re-used.

Btw, XRD is not used by any of the current OAuth WG documents, see 
http://datatracker.ietf.org/wg/oauth/


On Jun 22, 2011, at 8:08 AM, Mark Nottingham wrote:

* XRD -- XRD is an OASIS spec that's used by OpenID and OAuth. Maybe I'm just 
scarred by WS-*, but it seems very over-engineered for what it does. I 
understand that the communities had reasons for using it to leverage an 
existing user base for their specific use cases, but I don't see any reason to 
generalise such a beast into a generic mechanism.




Re: Second Last Call: draft-hammer-hostmeta-16.txt (Web Host Metadata) to Proposed Standard -- feedback

2011-07-03 Thread Hannes Tschofenig
I also never really understood why XRD was re-used. 

Btw, XRD is not used by any of the current OAuth WG documents, see 
http://datatracker.ietf.org/wg/oauth/


On Jun 22, 2011, at 8:08 AM, Mark Nottingham wrote:

 * XRD -- XRD is an OASIS spec that's used by OpenID and OAuth. Maybe I'm just 
 scarred by WS-*, but it seems very over-engineered for what it does. I 
 understand that the communities had reasons for using it to leverage an 
 existing user base for their specific use cases, but I don't see any reason 
 to generalise such a beast into a generic mechanism.



RE: Second Last Call: draft-hammer-hostmeta-16.txt (Web Host Metadata) to Proposed Standard -- feedback

2011-06-22 Thread Paul E. Jones
Mark,

 Generally, it's hard for me to be enthusiastic about this proposal, for
 a few reasons. That doesn't mean it shouldn't be published, but I do
 question the need for it to be Standards Track as a general mechanism.

I believe standards track is appropriate, since the objective is to define
procedures that are interoperable and the specification defines a set of
procedures that would be implemented by multiple software products.

 Mostly, it's because I haven't really seen much discussion of it as a
 general component of the Web / Internet architecture; AFAICT all of the
 interest in it and discussion of it has happened in more specialised /
 vertical places. The issues below are my concerns; they're not
 insurmountable, but I would have expected to see some discussion of them
 to date on lists like this one and/or the TAG list for something that's
 to be an Internet Standard.

You might be right that more discussion has happened off the apps-discuss
list, but I would not equate that with not being a component of the web
architecture.  On the contrary, host-meta has a lot of utility and is an
important building block for the web architecture.  With host-meta, it is
possible to advertise information in a standard way, discover services, etc.
Some of the latter is not fully defined, but cannot be defined without this
standard in place.

 * XRD -- XRD is an OASIS spec that's used by OpenID and OAuth. Maybe I'm
 just scarred by WS-*, but it seems very over-engineered for what it
 does. I understand that the communities had reasons for using it to
 leverage an existing user base for their specific use cases, but I
 don't see any reason to generalise such a beast into a generic
 mechanism.

XRD is not complicated.  It's an XML document spec with about seven elements
defined.  In order to convey metadata, one must have some format defined and
XRD is as good as any other.  I don't think the use of XRD should be
considered a negative aspect.  OpenID uses (through Yadis) a precursor to
XRD called XRDS. I'm not sure about OAuth's usage of XRD.  Either way, does
this matter?

 * Precedence -- In my experience one of the most difficult parts of a
 metadata framework like this is specifying the combination of metadata
 from multiple sources in a way that's usable, complete and clear.
 Hostmeta only briefly mentions precedence rules in the introduction.

I assume you are referring to the processing rules in 1.1.1?  How would you
propose strengthening that text?
 
 * Scope of hosts -- The document doesn't crisply define what a host
 is.

This might be deliberate and not really a fault of this document.  The
hostname that we are all used to using for a host may or may not refer
to a physical host.  It might refer to a virtual host or a virtually hosted
domain.  In any case, this term is consistent with the term used in the HTTP
spec and the Host: header line.

 * Context of metadata -- I've become convinced that the most successful
 uses of .well-known URIs are those that have commonality of use; i.e.,
 it makes sense to define a .well-known URI when most of the data
 returned is applicable to a particular use case or set of use cases.
 This is why robots.txt works well, as do most other currently-deployed
 examples of well-known URIs.
 
 Defining a bucket for potentially random, unassociated metadata in a
 single URI is, IMO, asking for trouble; if it is successful, it could
 cause administrative issues on the server (as potentially many parties
 will need control of a single file, for different uses -- tricky when
 ordering is important for precedence), and if the file gets big, it will
 cause performance issues for some use cases.

Not all of the use cases are defined, but the host-meta document provides
some examples, such as finding the author of a web page, copyright
information, etc.  There has also been discussion of finding a user's
identity provider.  Each of these examples fits well within the host-meta
framework.  It builds upon the web linking (RFC 5988) work you did in a
logical and consistent way, and I see these as complementary documents.
To your concern: host-meta is flexible, but the functionality is bounded.
 
 * Chattiness -- the basic model for resource-specific metadata in
 hostmeta requires at least two requests; one to get the hostmeta
 document, and one to get the resource-specific metadata after
 interpolating the URI of interest into a template.

This is true, but the web is all about establishing links to other
information.  I view this as a good thing about host-meta: it provides a
very simple syntax with a way to use well-defined link relation types to
discover other information.
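
For illustration, here is roughly what that two-request flow looks like;
a sketch under my reading of the draft (the host is a placeholder, "lrdd"
is the relation type the draft uses for resource-specific lookups, and the
{uri} substitution is the draft's template mechanism with the target URI
percent-encoded):

   import urllib.parse
   import urllib.request
   import xml.etree.ElementTree as ET

   XRD_NS = "{http://docs.oasis-open.org/ns/xri/xrd-1.0}"

   def resource_metadata(host, resource_uri, rel="lrdd"):
       # Request 1: the host-wide document at the well-known location.
       hm_url = "http://%s/.well-known/host-meta" % host
       xrd = ET.fromstring(urllib.request.urlopen(hm_url).read())
       # Find a Link carrying a URI template for the desired relation.
       for link in xrd.findall(XRD_NS + "Link"):
           template = link.get("template")
           if link.get("rel") == rel and template:
               target = template.replace(
                   "{uri}", urllib.parse.quote(resource_uri, safe=""))
               # Request 2: the resource-specific metadata document.
               return urllib.request.urlopen(target).read()
       return None  # no matching template advertised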
 
 For some use cases, this might be appropriate; however, for many others
 (most that I have encountered), it's far too chatty. Many use cases find
 the latency of one extra request unacceptable, much less two. Many use
 cases require fetching metadata for a number of distinct resources; in
 

Re: Second Last Call: draft-hammer-hostmeta-16.txt (Web Host Metadata) to Proposed Standard -- feedback

2011-06-22 Thread Mark Nottingham

On 23/06/2011, at 2:04 AM, Paul E. Jones wrote:

 Mark,
 
 Generally, it's hard for me to be enthusiastic about this proposal, for
 a few reasons. That doesn't mean it shouldn't be published, but I do
 question the need for it to be Standards Track as a general mechanism.
 
 I believe standards track is appropriate, since the objective is to define
 procedures that are interoperable and the specification defines a set of
 procedures that would be implemented by multiple software products.

That can be said of pretty much every specification that comes along; does this 
imply that you think everything should be standards track?

At the end of the day, it's standards track if the IESG says it is. They asked 
for feedback on the Last Call, and I gave mine. It's not the end of the world 
if this becomes Standards Track, but I felt that it shouldn't pass without 
comment.


 Mostly, it's because I haven't really seen much discussion of it as a
 general component of the Web / Internet architecture; AFAICT all of the
 interest in it and discussion of it has happened in more specialised /
 vertical places. The issues below are my concerns; they're not
 insurmountable, but I would have expected to see some discussion of them
 to date on lists like this one and/or the TAG list for something that's
 to be an Internet Standard.
 
 You might be right that more discussion has happened off the apps-discuss
 list, but I would not equate that with not being a component of the web
 architecture.  

... and I didn't equate it with that either; I said it was concerning that it 
hadn't been discussed broadly.


 On the contrary, host-meta has a lot of utility and is an
 important building block for the web architecture.  With host-meta, it is
 possible to advertise information in a standard way, discover services, etc.
 Some of the latter is not fully defined, but cannot be defined without this
 standard in place.

"A lot of utility" and "an important building block" are completely
subjective, of course. I'd agree with a statement that it's an important
building block of OAuth, for example, but it seems quite premature to call it
an important building block of the Web architecture.


 * XRD -- XRD is an OASIS spec that's used by OpenID and OAuth. Maybe I'm
 just scarred by WS-*, but it seems very over-engineered for what it
 does. I understand that the communities had reasons for using it to
 leverage an existing user base for their specific use cases, but I
 don't see any reason to generalise such a beast into a generic
 mechanism.
 
 XRD is not complicated.  It's an XML document spec with about seven elements
 defined.  In order to convey metadata, one must have some format defined and
 XRD is as good as any other.  I don't think the use of XRD should be
 considered a negative aspect.  OpenID uses (through Yadis) a precursor to
 XRD called XRDS. I'm not sure about OAuth's usage of XRD.  Either way, does
 this matter?

Choosing your foundations well matters greatly.


 * Precedence -- In my experience one of the most difficult parts of a
 metadata framework like this is specifying the combination of metadata
 from multiple sources in a way that's usable, complete and clear.
 Hostmeta only briefly mentions precedence rules in the introduction.
 
 I assume you are referring to the processing rules in 1.1.1?  How would you
 propose strengthening that text?

It's not a matter of strengthening the text; it's a matter of agreeing upon and 
defining an algorithm. As it sits, the document doesn't do much more than wave 
its hands about precedence.
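
To put a point on it, an actual specification would have to pin down rules
like the following. This is a hypothetical merge policy of my own, not
something the draft defines, and it is precisely the kind of choice that is
currently left open:

   def merge_links(host_links, resource_links):
       # Hypothetical policy, not from the draft: a resource-specific link
       # overrides a host-wide link with the same rel; within each source,
       # the first occurrence wins. Links are dicts with at least a "rel".
       merged = {}
       for link in resource_links + host_links:  # resource-specific first
           merged.setdefault(link["rel"], link)
       return list(merged.values())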


 * Scope of hosts -- The document doesn't crisply define what a host
 is.
 
 This might be deliberate and not really a fault of this document.  The
 hostname that we are all used to using for a host may or may not refer
 to a physical host.  It might refer to a virtual host or a virtually hosted
 domain.

You use "might" a lot here. Do you know what it is, or are you just speculating?


 In any case, this term is consistent with the term used in the HTTP
 spec and the Host: header line.

The Host header field conveys a host and a port, where this document seems to 
attach a very ephemeral concept to the term.


 * Context of metadata -- I've become convinced that the most successful
 uses of .well-known URIs are those that have commonality of use; i.e.,
 it makes sense to define a .well-known URI when most of the data
 returned is applicable to a particular use case or set of use cases.
 This is why robots.txt works well, as do most other currently-deployed
 examples of well-known URIs.
 
 Defining a bucket for potentially random, unassociated metadata in a
 single URI is, IMO, asking for trouble; if it is successful, it could
 cause administrative issues on the server (as potentially many parties
 will need control of a single file, for different uses -- tricky when
 ordering is important for precedence), and if the file gets big, it will
 cause performance issues for some use cases.
 
 Not all of the use cases are defined, but the host-meta document provides
 some examples, such as finding the author of a web page, copyright
 information, etc.  There has also been discussion of finding a user's
 identity provider.  Each of these examples fits well within the host-meta
 framework.  It builds upon the web linking (RFC 5988) work you did in a
 logical and consistent way, and I see these as complementary documents.
 To your concern: host-meta is flexible, but the functionality is bounded.