Re: RDF Update Feeds + URI time travel on HTTP-level

2009-11-24 Thread Peter Ansell
2009/11/25 Michael Nelson :
> In practice, agent-driven CN is rarely done (I can only guess as to why). In
> practice, you get either server-driven (as defined in RFC 2616) or
> transparent CN (introduced in RFC 2616 (well, RFC 2068 actually), but really
> defined in RFCs 2295 & 2296).  See:
> http://httpd.apache.org/docs/2.3/content-negotiation.html

My guess is that it relies on users making decisions that they aren't
generally qualified, or concerned enough, to make. Language is basically
a constant taken from the user's operating system configuration, and
format differences do not affect users enough to warrant giving them a
choice between, for example, XHTML and HTML, or JPG and PNG. I think
browser designers see CN as a good thing for them, but as basically
irrelevant to users, and hence they decide it is easiest to just
automate the process using server-driven or transparent negotiation.

Similar reasoning explains why Apache goes so far to break ties between
what are likely unintentional mix-ups with equal q/qs value
combinations: it reduces confusion for the user. The fact that
server-driven and transparent CN rely on the server for part of the
decision (qs) makes it perfectly fine, in my opinion, for the server to
make the tie-breaking decision as well. There is basically no reason why
the choice the server makes will be inconvenient for users, as they have
already said, through the Accept-* headers, that both formats or
languages were acceptable in some way. Combined with the server's
knowledge, the tie breaker will only choose one slightly better format
over another decent format, resulting in a win-win scenario according to
the user's declared preferences. As long as the server sends back the
real Content-Type it chose, I am happy.
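
To make the tie-breaking concrete, here is a rough Python sketch of a
server-side chooser that combines the client's q values with the server's
qs values; the variants, the numbers and the final tie-break are invented
for illustration and are much cruder than what Apache actually does:

    # Rough sketch of server-driven negotiation with a q*qs score.
    # Variants and quality values are invented; Apache's real algorithm
    # (RFC 2295/2296 plus its own rules) is considerably more elaborate.

    def parse_accept(header):
        """Parse an Accept-style header into {value: q}."""
        prefs = {}
        for part in header.split(","):
            fields = part.strip().split(";")
            value, q = fields[0].strip(), 1.0
            for param in fields[1:]:
                name, _, val = param.strip().partition("=")
                if name == "q":
                    q = float(val)
            prefs[value] = q
        return prefs

    def choose_variant(accept_header, variants):
        """variants: {media_type: qs} as configured on the server."""
        client = parse_accept(accept_header)
        # Equal q*qs scores fall back to string order here; a real server
        # would apply further tie-breakers (charset, length, etc.).
        score, best = max((client.get(mt, 0.0) * qs, mt)
                          for mt, qs in variants.items())
        return best if score > 0 else None

    # The client accepts both; the server thinks XHTML is slightly better.
    print(choose_variant("text/html;q=1.0, application/xhtml+xml;q=1.0",
                         {"text/html": 0.9, "application/xhtml+xml": 1.0}))
    # -> application/xhtml+xml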

Cheers,

Peter



Re: RDF Update Feeds + URI time travel on HTTP-level

2009-11-24 Thread Danny Ayers
What Damian said. I keep all my treasures in Subversion; it seems to work.



-- 
http://danny.ayers.name



Re: RDF Update Feeds + URI time travel on HTTP-level

2009-11-24 Thread Danny Ayers
Good man, I couldn't help thinking there was a paper in that...

2009/11/22 Herbert Van de Sompel :
> hi all,
> (thanks Chris, Richard, Danny)
>
> In light of the current discussion, I would like to provide some
> clarifications regarding "Memento: Time Travel for the Web", i.e. the idea of
> introducing HTTP content negotiation in the datetime dimension:
> (*) Some extra pointers:
> - For those who prefer browsing slides over reading a paper, there is
> http://www.slideshare.net/hvdsomp/memento-time-travel-for-the-web
> - Around mid next week, a video recording of a presentation I gave on
> Memento should be available at http://www.oclc.org/research/dss/default.htm
> - The Memento site is at http://www.mementoweb.org. Of special interest may
> be the proposed HTTP interactions for (a) web servers with internal archival
> capabilities such as content management systems, version control systems,
> etc (http://www.mementoweb.org/guide/http/local/) and (b) web servers
> without internal archival capabilities
> (http://www.mementoweb.org/guide/http/remote/).
> (*) The overall motivation for the work is the integration of archived
> resources into regular web navigation by making them available via their
> original URIs. The archived resources we have focused on in our experiments
> so far are those kept by
> (a) Web Archives such as the Internet Archive, Webcite, archive-it.org and
> (b) Content Management Systems such as wikis, CVS, ...
> The reason I pinged Chris Bizer about our work is that we thought that our
> proposed approach could also be of interest in the LoD environment.
>  Specifically, the ability to get to prior descriptions of LoD resources by
> doing datetime content negotiation on their URI seemed appealing; e.g. what
> was the dbpedia description for the City of Paris on March 20 2008? This
> ability would, for example, allow analysis of (the evolution of) data over
> time. The requirement that is currently being discussed in this thread
> (which I interpret to be about approaches to selectively get updates for a
> certain LoD database) is not one I had considered using Memento for,
> thinking this was more in the realm of feed technologies such as Atom (as
> suggested by Ed Summers), or the pre-REST OAI-PMH
> (http://www.openarchives.org/OAI/openarchivesprotocol.html).
> (*) Regarding some issues that were brought up in the discussion so far:
> - We use an X header because that seems to be best practice when doing
> experimental work. We would very much like to eventually migrate to a real
> header, e.g. Accept-Datetime.
> - We are definitely considering and interested in some way to formalize our
> proposal in a specification document. We felt that the I-D/RFC path would
> have been the appropriate one, but are obviously open to other approaches.
> - As suggested by Richard, there is a bootstrapping problem, as there is
> with many new paradigms that are introduced. I trust LoD developers fully
> understand this problem. Actually, the problem is not only at the browser
> level but also at the server level. We are currently working on a Firefox
> plug-in that, when ready, will be available through the regular channels.
> And we have successfully (and experimentally) modified the Mozilla code
> itself to be able to demonstrate the approach. We are very interested in
> getting support in other browsers, natively or via plug-ins. We also have
> some tools available to help with initial deployment
> (http://www.mementoweb.org/tools/). One is a plug-in for the MediaWiki
> platform; when installed, the wiki natively supports datetime content
> negotiation and redirects a client to the history page that was active at
> the datetime requested in the X-Accept-Datetime header. We just started a
> Google group for developers interested in making Memento happen for their
> web servers, content management systems, etc.
> (http://groups.google.com/group/memento-dev/).
> (*) Note that the proposed solution also leverages the OAI-ORE specification
> (fully compliant with LoD best practice) as a mechanism to support discovery
> of archived resources.
> I hope this helps to get a better understanding of what Memento is about,
> and what its current status is. Let me end by stating that we would very
> much like to get these ideas broadly adopted. And we understand we will need
> a lot of help to make that happen.
> Cheers
> Herbert
> ==
> Herbert Van de Sompel
> Digital Library Research & Prototyping
> Los Alamos National Laboratory, Research Library
> http://public.lanl.gov/herbertv/
> tel. +1 505 667 1267
>
>
>
>
>



-- 
http://danny.ayers.name



Re: RDF Update Feeds + URI time travel on HTTP-level

2009-11-24 Thread Michael Nelson

On Wed, 25 Nov 2009, Mark Baker wrote:


Herbert,

On Tue, Nov 24, 2009 at 6:10 PM, Herbert Van de Sompel
 wrote:

Just to let you know that our response to some issues re Memento raised here
and on Pete Johnston's blog post
(http://efoundations.typepad.com/efoundations/2009/11/memento-and-negotiating-on-time.html) is
now available at:
http://www.cs.odu.edu/~mln/memento/response-2009-11-24.html


Regarding the suggestion to use the Link header, I was thinking the
same thing.  But the way you describe it being used is different than
how I would suggest it be used.  Instead of providing a link to each
available representation, the server would just provide a single link
to the timegate.  The client could then GET the timegate URI and find
either the list of URIs (along with date metadata), or some kind of
form-like declaration that would permit it to specify the date/time
for which it desires a representation (e.g. Open Search).  Perhaps
this is what you meant by "timemap", I can't tell, though I don't see
a need for the use of the Accept header in that case if the client can
either choose or construct a URI for the desired archived
representation.


Hi Mark,

What you describe is really close to what RFC 2616 calls "Agent-driven 
Negotiation", which is how CN exists in the absence of "Accept-*" request 
headers.


12.2 Agent-driven Negotiation

   With agent-driven negotiation, selection of the best representation
   for a response is performed by the user agent after receiving an
   initial response from the origin server. Selection is based on a list
   of the available representations of the response included within the
   header fields or entity-body of the initial response, with each
   representation identified by its own URI. Selection from among the
   representations may be performed automatically (if the user agent is
   capable of doing so) or manually by the user selecting from a
   generated (possibly hypertext) menu.

   ...

RFC 2295 also permits this style of CN via the "TCN: List" response header 
semantics (generally sent with a 300 or 406 response).


But the "TCN: Choice" approach is introduced as an optimization.  The idea 
is that if you know you prefer .en, .pdf and .gz then tell the server when 
making your original request and it will do its best to honor those 
requests.


We think adding an orthogonal dimension for CN will be similar: if you 
know you prefer .en, .pdf, .gz and .20091031, then tell the server when 
making your original request and it will do its best to honor those 
requests.
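
As a sketch of what such a request might look like on the wire from a
client (Python with the requests library; the URI is a placeholder, and
X-Accept-Datetime is the experimental Memento header discussed in this
thread):

    # Sketch: one server-driven request carrying preferences in four
    # dimensions -- format, language, encoding and datetime.  The URI is a
    # placeholder; X-Accept-Datetime is the experimental Memento header.
    import requests

    resp = requests.get(
        "http://example.org/report",
        headers={
            "Accept": "application/pdf",
            "Accept-Language": "en",
            "Accept-Encoding": "gzip",
            "X-Accept-Datetime": "Sat, 31 Oct 2009 00:00:00 GMT",
        },
    )
    # The server does its best across all four dimensions; the response
    # headers (and, for Memento, any redirect Location) tell us what it chose.
    print(resp.status_code, resp.headers.get("Content-Type"))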


In practice, agent-driven CN is rarely done (I can only guess as to why). 
In practice, you get either server-driven (as defined in RFC 2616) or 
transparent CN (introduced in RFC 2616 (well, RFC 2068 actually), but 
really defined in RFCs 2295 & 2296).  See: 
http://httpd.apache.org/docs/2.3/content-negotiation.html


 Apache supports 'server driven' content negotiation, as defined in the
 HTTP/1.1 specification. It fully supports the Accept, Accept-Language,
 Accept-Charset and Accept-Encoding request headers. Apache also supports
 'transparent' content negotiation, which is an experimental negotiation
 protocol defined in RFC 2295 and RFC 2296. It does not offer support for
 'feature negotiation' as defined in these RFCs.

In practice, Apache goes out of its way to not send back a 300 response: 
it will look at your Accept-* request preferences, compute the q-values, 
and apply an elaborate tie-breaking scheme to always try to send back 
*something* (and not a 300).


Having said all of the above, the client can always force the server to 
send back a 300, along with a full list of possible URIs.  To do this, you 
simply have to send a "Negotiate: 1.0" request header, and the server will 
send back a 300/"TCN: List" response.  In the case of Memento, the URI 
list is the time map.
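
And a sketch of that forced list response (same assumptions: Python +
requests, placeholder URI, a server behaving the way Apache is described
above):

    # Sketch: ask for the variant list instead of a pre-chosen variant.
    # Per the behaviour described above, "Negotiate: 1.0" lets the server
    # answer with a 300 / "TCN: List" response; the URI is a placeholder.
    import requests

    resp = requests.get("http://example.org/report",
                        headers={"Negotiate": "1.0"},
                        allow_redirects=False)

    if resp.status_code == 300 and resp.headers.get("TCN", "").lower() == "list":
        # The Alternates header and/or the body enumerate the variant URIs;
        # in Memento's case that list would be the time map.
        print(resp.headers.get("Alternates"))
        print(resp.text)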


So while I think you are describing agent-driven CN (or something very 
similar), I also think it would be desirable to go the full monty and 
define the appropriate Accept header and allow server-driven & 
transparent CN.  Agent-driven CN would still remain available for clients 
that wish to use it.


regards,

Michael



As for the "current state" issue, you're right that it isn't a general
constraint of Web architecture.  I was assuming we were talking only
about the origin server.  Of course, any Web component can be asked
for a representation of any resource, and they are free to answer
those requests in whatever way suits their purpose, including
providing historical versions.

Mark.




Michael L. Nelson m...@cs.odu.edu http://www.cs.odu.edu/~mln/
Dept of Computer Science, Old Dominion University, Norfolk VA 23529
+1 757 683 6393 +1 757 683 4900 (f)

Re: RDF Update Feeds + URI time travel on HTTP-level

2009-11-24 Thread Michael Nelson

Hi Erik,

Thanks for your response.  I'm just going to cherry pick a few bits from 
it:



As an aside, which may or may not be related to Memento, do you think
there is a useful distinction to be made between web archives which
preserve the actual bytestream of an HTTP response made at a certain
time (e.g., the Internet Archive) and CMSs that preserve the general
content, but allow headers, advertisements, and so on to change (e.g.,
Wikipedia).

To see what I mean, visit:

http://en.wikipedia.org/w/index.php?title=World_Wide_Web&oldid=9419736

and then:

http://web.archive.org/web/20050213030130/en.wikipedia.org/wiki/World_Wide_Web

I am not sure what the relationship is between these two resources.


I'm not 100% sure either.  I think this is a difficult problem in web 
archiving in general.  The Wikipedia link with current content substituted 
is not exactly the 2005 version, but the IA version isn't really what a 
user would have seen in 2005 either (at least in terms of presentation).


And:

http://web.archive.org/web/20080103014411/http://www.cnn.com/

for example gives me at least a pop-up ad that is relative to today, not 
Jan 2008 (there may be better examples where "today's" content is 
in-lined, but the point remains the same).


As an aside, the Zoetrope project (http://doi.acm.org/10.1145/1498759.1498837) 
took an entirely different approach to this problem in its archives (see 
pp. 246-247).  It basically took DOM dumps from the client and saved 
them, rather than using a crawler-based URI approach.



My confusion on this issue stems, I believe, from a longstanding
confusion that I have had with the 302 Found response.

My understanding of 302 Found has always been that, if I visit R and
receive a 302 Found with Location R', my browser should continue to
consider R the canonical version and use it for all further requests.
If I bookmark after having been redirected from R to R', it is in fact R
which should be bookmarked, and not R'. If I use my browser to send
that link to a friend, my browser should send R, not R'. I believe
that this is the meaning given to 302 Found in [3].

I am aware that browsers do not implement what I consider to be the
correct behavior here, but it is the way that I understand the
definition of 302 Found.

Perhaps somebody could help me out by clarifying this for me?


Firefox will attempt to do the right thing, but it depends on the client 
maintaining state about the original URI.  If you dereference R, then get 
302'd to R', a reload in Firefox will be on R and not R'.


Obviously, if you email or share or probably even bookmark R', then this 
client-side state will be lost and 3rd party reloads will be relative to 
R' (in fact, that might be what you *want* to occur).  But at least within 
a session, Firefox (and possibly other browsers) will reload with respect 
to the original URI.
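
A minimal sketch of that client-side state (Python + requests, placeholder
URIs): remember R even after being redirected to R', and treat R as the
thing to reload, bookmark or share:

    # Sketch: keep the original URI (R) across a 302 to R', the way a
    # Memento-aware client would need to.  URIs are placeholders.
    import requests

    R = "http://example.org/page"            # what the user asked for
    resp = requests.get(R, allow_redirects=False)

    if resp.status_code == 302:
        R_prime = resp.headers["Location"]   # the version actually served
        archived = requests.get(R_prime)
        # Render archived.text, but reload/bookmark/share against R:
        # the redirect is session state, not a change of identity.
        print("rendered:", R_prime, "| canonical:", R)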


Although it is not explicit in the current paper or presentation, we're 
planning on some method for having R' "point" back to R, so that 
Memento-aware clients can know the original URI.  We're not sure 
syntactically how it should be done (a value in the "Alternates" response 
header maybe?), but semantically we want R' to point to R.  This


regards,

Michael



best,
Erik Hetzner




Michael L. Nelson m...@cs.odu.edu http://www.cs.odu.edu/~mln/
Dept of Computer Science, Old Dominion University, Norfolk VA 23529
+1 757 683 6393 +1 757 683 4900 (f)




Re: RDF Update Feeds + URI time travel on HTTP-level

2009-11-24 Thread Mark Baker
Herbert,

On Tue, Nov 24, 2009 at 6:10 PM, Herbert Van de Sompel
 wrote:
> Just to let you know that our response to some issues re Memento raised here
> and on Pete Johnston's blog post
> (http://efoundations.typepad.com/efoundations/2009/11/memento-and-negotiating-on-time.html) is
> now available at:
> http://www.cs.odu.edu/~mln/memento/response-2009-11-24.html

Regarding the suggestion to use the Link header, I was thinking the
same thing.  But the way you describe it being used is different than
how I would suggest it be used.  Instead of providing a link to each
available representation, the server would just provide a single link
to the timegate.  The client could then GET the timegate URI and find
either the list of URIs (along with date metadata), or some kind of
form-like declaration that would permit it to specify the date/time
for which it desires a representation (e.g. Open Search).  Perhaps
this is what you meant by "timemap", I can't tell, though I don't see
a need for the use of the Accept header in that case if the client can
either choose or construct a URI for the desired archived
representation.
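
A sketch of that flow (Python + requests; the "timegate" relation name and
the URIs are assumptions made up for illustration, not an agreed-upon
vocabulary):

    # Sketch of the Link-header variant suggested above: the origin server
    # advertises a single timegate link and the client takes it from there.
    # The "timegate" rel name and all URIs are hypothetical.
    import requests

    resp = requests.get("http://example.org/page")
    # requests parses the Link response header into resp.links, keyed by rel.
    timegate = resp.links.get("timegate", {}).get("url")

    if timegate:
        # The timegate might return a list of dated URIs, or an
        # OpenSearch-like template the client fills in with a datetime.
        listing = requests.get(timegate)
        print(listing.headers.get("Content-Type"))
        print(listing.text[:500])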

As for the "current state" issue, you're right that it isn't a general
constraint of Web architecture.  I was assuming we were talking only
about the origin server.  Of course, any Web component can be asked
for a representation of any resource, and they are free to answer
those requests in whatever way suits their purpose, including
providing historical versions.

Mark.



Re: RDF Update Feeds + URI time travel on HTTP-level

2009-11-24 Thread Erik Hetzner
Hi -

At Mon, 23 Nov 2009 21:02:37 -0700,
Herbert Van de Sompel wrote:
> 
> On Nov 23, 2009, at 4:59 PM, Erik Hetzner wrote:
> > I think this is a very compelling argument.
> 
> Actually, I don't think it is. The issue was also brought up (in a
> significantly more tentative manner) in Pete Johnston's blog entry on
> eFoundations
> (http://efoundations.typepad.com/efoundations/2009/11/memento-and-negotiating-on-time.html
> ). Tomorrow, we will post a response that will try to show that the
> "current state" issue is - as far as we can see - not quite as
> "written in stone" as suggested above in the specs that matter in
> this case, i.e. the Architecture of the World Wide Web and RFC 2616.
> Both are interestingly vague about this.

Thanks for your response, both in email and in your posted document
[1]. As with your paper, I have read it & found a lot to consider
and a lot that I agree with.

> > On the other hand, there is, nothing I can see that prevents one
> > URI from representing another URI as it changes through time. This
> > is already the case with, e.g.,
> > , which
> > represents the URI  at all times. So this URI
> > could, perhaps, be a target for X-Accept-Datetime headers.
> 
> That is actually what we do in Memento (see our paper
> http://arxiv.org/abs/0911.1112), and we recognize two cases, here:
> 
> (1) If the web server does not keep track of its own archival
> versions, then we must rely on archival versions that are stored
> elsewhere, i.e. in Web Archives. In this case, the original server
> who receives the request can redirect the client to a resource like
> the one you mention above, i.e. a resource that stands for archived
> versions of another resource. Note that this redirect is a simple
> redirect like the ones that happen all the time on the Web. This is
> not a redirect that is part of a datetime content negotiation flow,
> rather a redirect that occurs because the server has detected an X-
> Accept-Datetime header. Now, we don't want to overload the existing
>  as you suggest,
> but rather choose to introduce a special-purpose resource that we
> call a TimeGate
> . And we
> indeed introduce this resource as a target for datetime content
> negotiation.
> 
> (2) If the web server does keep track of its own archival versions
> (think CMS), then it can handle requests for old versions "locally"
> as it has all the information that is required to do so. In this
> case, we could also introduce a special-purpose, distinct, TimeGate
> on this server, and have the original resource redirect to it. That
> would make this case in essence the same as (1) above. This,
> however, seemed like a bit of overkill and we felt that the original
> resource and the TimeGate could coincide, meaning datetime content
> negotiation occurs directly against the original resource. In that case,
> the URI that represents the resource as it evolves over time is the
> URI of the resource itself. It stands for past and present versions.
> The present version is delivered (200 OK) from that URI itself
> (business as usual), while archived versions are delivered from other
> resources via content negotiation (302 with a Location different from
> the original URI).
> 
> In both (1) and (2) the original resource plays a role in the
> framework, either because it redirects to an external TimeGate that
> performs the datetime content negotiation, or because it performs the
> datetime content negotiation itself. And we actually think it is
> quite essential that this original resource is involved. It is the URI
> of the original resource by which the resource has been known as it
> evolved over time. It makes sense to be able to use that URI to try
> and get to its past versions. And by "get", I don't mean search for
> it, but rather use the network to get there. After all, we all go by
> the same name irrespective of the day you talk to us. Or we have the
> same Linked Data URI irrespective of the day it is dereferenced. Why
> would we suddenly need a new URI when we want to see what the LoD
> description for any of us was, say, a year ago? Why should we prevent
> this same URI from helping us get to prior versions?

I agree completely that a user should be able to discover - if
possible - archived web content, either on the original
server or in a web archive, for a given URI, starting from a point of
knowing only the original URI. I know that the UK National Archives
have built a system for redirecting 404s on UK government web sites to
their web archive [2], and I was very happy to see your work
attempting to standardize something similar, although more general.

As an aside, which may or may not be related to Memento, do you think
there is a useful distinction to be made between web archives which
preserve the actual bytestream of an HTTP response made at a certain
time (e.g., the Internet Archive) and CMSs that preserve the general
content, but allow headers, advertisements, and so on to change (e.g.,
Wikipedia).

Re: Ontology Wars? Concerned

2009-11-24 Thread Danny Ayers
Hi Nathan,

A good question; the way it gets answered, as far as I can see, depends
on what you're after.

Glad to see you're thinking linked data.

But people really do try to overthink it when it comes to ontologies,
in my opinion:  ideally the best ontologies/vocabs will win -

- rubbish.

The ontologies/vocabularies on machines will always be poor
reflections of the things they try to describe. There is a huge amount
of software around these days (and has been for many years) that tries
to describe things. The advantage that the Web languages have is that
they can work in a big distributed environment. Put a marker down (a
URI) for a concept or a dog and it's reusable.

When it comes to multiple ontologies - yes, it's a reality. In
practice maybe it means lots of different clauses in the query - but
that depends on how far you want to ask. What are you interested in?
Certainly good practice says as a publisher of information you should
use existing terms wherever appropriate rather than invent (c'mon,
should I call the creature next to me a wurble or a dog?). But
everyone can make up their own terms, and there's nothing wrong with
that.

To answer your subject line, the only way we can avoid ontology wars
is by making the field flat, globally. I think we have that now, at
least in principle.

Cheers,
Danny.

-- 
http://danny.ayers.name



Re: RDF Update Feeds + URI time travel on HTTP-level

2009-11-24 Thread Herbert Van de Sompel

On Nov 23, 2009, at 9:02 PM, Herbert Van de Sompel wrote:

On Nov 23, 2009, at 4:59 PM, Erik Hetzner wrote:

At Mon, 23 Nov 2009 00:40:33 -0500,
Mark Baker wrote:


On Sun, Nov 22, 2009 at 11:59 PM, Peter Ansell > wrote:
It should be up to resource creators to determine when the nature of a
resource changes across time. A web architecture that requires every
single edit to have a different identifier is a large hassle and
likely won't catch on if people find that they can work fine with a
system that evolves constantly using semi-constant identifiers, rather
than through a series of mandatory time based checkpoints.


You seem to have read more into my argument than was there, and
created a strawman; I agree with the above.

My claim is simply that all HTTP requests, no matter the headers, are
requests upon the current state of the resource identified by the
Request-URI, and therefore, a request for a representation of the
state of "Resource X at time T" needs to be directed at the URI for
"Resource X at time T", not "Resource X".


I think this is a very compelling argument.


Actually, I don't think it is.  The issue was also brought up (in a
significantly more tentative manner) in Pete Johnston's blog entry on
eFoundations (http://efoundations.typepad.com/efoundations/2009/11/memento-and-negotiating-on-time.html
). Tomorrow, we will post a response that will try to show that the
"current state" issue is - as far as we can see - not quite as
"written in stone" as suggested above in the specs that matter in
this case, i.e. the Architecture of the World Wide Web and RFC 2616.
Both are interestingly vague about this.






Just to let you know that our response to some issues re Memento  
raised here and on Pete Johnston's blog post (http://efoundations.typepad.com/efoundations/2009/11/memento-and-negotiating-on-time.html 
) is now available at:


http://www.cs.odu.edu/~mln/memento/response-2009-11-24.html

We have also submitted this as an inline comment on Pete's blog, but
comments require approval and that has not happened yet.


Greetings

Herbert Van de Sompel


==
Herbert Van de Sompel
Digital Library Research & Prototyping
Los Alamos National Laboratory, Research Library
http://public.lanl.gov/herbertv/
tel. +1 505 667 1267







Re: RDF Update Feeds

2009-11-24 Thread Michael Hausenblas

FWIW, I had a quick look at the current caching support in LOD datasets [1]
- not very encouraging, to be honest.
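
For anyone who wants to repeat the exercise, a rough sketch of the kind of
probe involved (Python + requests; the URI is just an example, and only
the usual HTTP/1.1 freshness/validator headers are inspected):

    # Sketch: probe a Linked Data URI for HTTP caching support by looking
    # at the standard freshness and validator headers.
    import requests

    resp = requests.head("http://dbpedia.org/resource/Paris",
                         headers={"Accept": "application/rdf+xml"},
                         allow_redirects=True)

    for header in ("Cache-Control", "Expires", "ETag", "Last-Modified", "Vary"):
        print(header + ":", resp.headers.get(header, "-- not set --"))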

Cheers,
  Michael

[1] http://webofdata.wordpress.com/2009/11/23/linked-open-data-http-caching/

-- 
Dr. Michael Hausenblas
LiDRC - Linked Data Research Centre
DERI - Digital Enterprise Research Institute
NUIG - National University of Ireland, Galway
Ireland, Europe
Tel. +353 91 495730
http://linkeddata.deri.ie/
http://sw-app.org/about.html

> From: Michael Hausenblas 
> Date: Sat, 21 Nov 2009 11:19:18 +
> To: Hugh Glaser , Georgi Kobilarov
> 
> Cc: Linked Data community 
> Subject: Re: RDF Update Feeds
> Resent-From: Linked Data community 
> Resent-Date: Sat, 21 Nov 2009 11:19:57 +
> 
> Georgi, Hugh,
> 
>>> Could be very simple by expressing: "Pull our update-stream once per
>>> seconds/minute/hour in order to be *enough* up-to-date".
> 
> Ah, Georgi, I see. You seem to emphasise the quantitative side, whereas I
> just want to flag what kind of source it is. I agree that  "Pull our
> update-stream once per seconds/minute/hour in order to be *enough*
> up-to-date" should be available; however, I think that the information
> regular/irregular vs. how frequently updates occur should be made available as
> well. My main use case is motivated by the LOD application-writing area. I
> figured that I quite often have written code that essentially does the same thing:
> based on the type of data source it either gets a live copy of the data or
> uses already locally available data. Now, given that dataset publishers would
> declare the characteristics of their datasets in terms of dynamics, one could
> write such a LOD cache quite easily, I guess, abstracting the necessary
> steps and hence offering a reusable solution. I'll follow up on this one
> soon via a blog post with a concrete example.
> 
> My main question would be: what do we gain if we explicitly represent these
> characteristics, compared to what HTTP provides in terms of caching [1]? One
> might want to argue that the 'built-in' features are sort of too
> fine-grained and there is a need for a data-source-level solution.
> 
>> in our semantic sitemaps, and these suggestions seem very similar.
>> Eg
>> http://dotac.rkbexplorer.com/sitemap.xml
>> (And I think these frequencies may correspond to "normal" sitemaps.)
>> So a naïve approach, if you want RDF, would be to use something very similar
>> (and simple).
>> Of course I am probably known for my naivety, which is often misplaced.
> 
> Hugh, of course you're right (as often ;). Technically, this sort of
> information ('changefreq') is available via sitemaps. Essentially, one could
> lift this to RDF in a straightforward way, if desired. If you look closely at
> what I propose, however, then you'll see that I aim at a sort of qualitative
> description which could drive my LOD cache (along with the other information
> I already have from the void:Dataset).
> 
> Now, before I continue to argue here on a purely theoretical level, lemme
> implement a demo and come back once I have something to discuss ;)
> 
> 
> Cheers,
>   Michael
> 
> [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html
> 
> -- 
> Dr. Michael Hausenblas
> LiDRC - Linked Data Research Centre
> DERI - Digital Enterprise Research Institute
> NUIG - National University of Ireland, Galway
> Ireland, Europe
> Tel. +353 91 495730
> http://linkeddata.deri.ie/
> http://sw-app.org/about.html
> 
> 
> 
>> From: Hugh Glaser 
>> Date: Fri, 20 Nov 2009 18:29:17 +
>> To: Georgi Kobilarov , Michael Hausenblas
>> 
>> Cc: Linked Data community 
>> Subject: Re: RDF Update Feeds
>> 
>> Sorry if I have missed something, but...
>> We currently put things like
>> monthly
>> daily
>> never
>> in our semantic sitemaps, and these suggestions seem very similar.
>> Eg
>> http://dotac.rkbexplorer.com/sitemap.xml
>> (And I think these frequencies may correspond to "normal" sitemaps.)
>> So a naïve approach, if you want RDF, would be to use something very similar
>> (and simple).
>> Of course I am probably known for my naivety, which is often misplaced.
>> Best
>> Hugh
>> 
>> On 20/11/2009 17:47, "Georgi Kobilarov"  wrote:
>> 
>>> Hi Michael,
>>> 
>>> nice write-up on the wiki! But I think the vocabulary you're proposing is
>>> too generally descriptive. Dataset publishers, once offering update
>>> feeds, should not only say whether their datasets are "dynamic", but
>>> rather how dynamic they are.
>>> 
>>> Could be very simple by expressing: "Pull our update-stream once per
>>> seconds/minute/hour in order to be *enough* up-to-date".
>>> 
>>> Makes sense?
>>> 
>>> Cheers,
>>> Georgi 
>>> 
>>> --
>>> Georgi Kobilarov
>>> www.georgikobilarov.com
>>> 
 -Original Message-
 From: Michael Hausenblas [mailto:michael.hausenb...@deri.org]
 Sent: Friday, November 20, 2009 4:01 PM
 To: Georgi Kobilarov
 Cc: Linked Data community
 Subject: Re: RDF Update Feeds
 
 
 Georgi, All,
 
 I like