I can see reasonable uses for this, like marking a feed of local disk errors
as not of general interest. I would not be surprised to see RSS/Atom catch
on for system monitoring.

Search engines see this all the time -- just because it is HTML doesn't
mean it is the primary content on the site. Log analysis reports are
one good example.

/robots.txt is one approach. Wouldn't hurt to have a recommendation on
whether Atom clients should honor that.
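For anyone unfamiliar, the idea is just a stanza like this (the feed path
/updates/binary.atom is a made-up example):

```
User-agent: *
Disallow: /updates/binary.atom
```

Whether a feed fetcher counts as a "robot" for these purposes is exactly the
kind of thing a recommendation could pin down.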

A long time ago, I proposed a Robots PI, similar to the Robots meta tag.
That would get around the "only webmaster can edit" problem with /robots.txt.
The Robots PI did not catch on, but I've still got the proposal somewhere.
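Roughly, it would have looked something like this at the top of the document
(the PI target name and the meta-tag-style values here are my reconstruction
from memory, not the actual proposal text):

```xml
<?robots noindex,nofollow?>
<feed xmlns="http://www.w3.org/2005/Atom">
  ...
</feed>
```

The advantage over /robots.txt is that the author of the document can set it
without touching the server root.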

wunder

--On August 24, 2005 11:25:12 PM -0700 James M Snell <[EMAIL PROTECTED]> wrote:

> 
> Up to this point, the vast majority of use cases for Atom feeds has been the 
> traditional syndicated-content case: a bunch of content updates that are 
> designed to be distributed and aggregated within feed readers, online 
> aggregators, etc.  But with Atom providing a much more flexible content model 
> that allows for data that may not be suitable for display within a feed 
> reader or online aggregator, I'm wondering what the best way would be for a 
> publisher to indicate that a feed should not be aggregated?
> 
> For example, suppose I build an application that depends on an Atom feed 
> containing binary content (e.g. a software update feed).  I don't really want 
> aggregators pulling and indexing that feed and attempting to display it 
> within a traditional feed reader.  What can I do?
> 
> Does the following work?
> 
> <feed>
>   ...
>   <x:aggregate>no</x:aggregate>
> </feed>
> 
> Should I use a processing instruction instead?
> 
> <?dont-aggregate ?>
> <feed>
>   ...
> </feed>
> 
> I dunno. What do you all think?  Am I just being silly or does any of this 
> actually make a bit of sense?
> 
> - James
> 
> 



--
Walter Underwood
Principal Software Architect, Verity
