As far as I understand it, clients are supposed to submit only simple queries 
to servers in order to retrieve subsets of the data, queries like "?subject 
<ns:property> ?object", and then download this snapshot (subset) of the data, 
in this case the list of triples for the property <ns:property>. The data is 
stored locally, and the client can then issue full SPARQL queries against 
that local copy of the data it just downloaded.
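To make my mental model concrete, here is a toy sketch of what I imagine the server side does: it answers only single triple patterns, nothing more. (The data and names here are made up by me for illustration, not taken from the actual interface.)

```python
# Toy model of a Triple Pattern Fragments server: it answers only a
# single triple pattern, not full SPARQL. Triples are (s, p, o) tuples;
# None plays the role of a variable in the pattern.

TRIPLES = [
    ("ex:alice", "ns:property", "ex:bob"),
    ("ex:alice", "ns:name", '"Alice"'),
    ("ex:bob", "ns:property", "ex:carol"),
]

def fragment(s=None, p=None, o=None):
    """Return every triple matching one pattern (None = wildcard)."""
    return [t for t in TRIPLES
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# The client asks for the fragment "?subject <ns:property> ?object"
# and gets back all triples with that predicate, to query locally:
matches = fragment(p="ns:property")
print(matches)
```

So the server-side work is just pattern matching; everything else (joins, filters, aggregation) would happen on the client over the downloaded triples.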
Is this correct? Is this how "Linked Data Fragments" works? Doesn't this 
generate a lot of traffic, with a lot of data moving around, and very little 
improvement over just querying a local SPARQL endpoint?

I really can't understand the benefits. Haaaalp!

For reference: http://linkeddatafragments.org
