Right, but the auth distinction could be made along similar lines to http/https - it's orthogonal. I do think at some point we end up caching graphs (and their provenance too, I hope), but the bit I like about danbri's Gremlin play is that it's a really stateless wander. Starting from the assumption that the graph is public, we go back and forth as we choose.
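[Editor's aside: the "stateless wander" could be sketched roughly as below. All names here are illustrative assumptions, not a real Gremlin or Clerezza API; a real walker would issue HTTP GETs with RDF content negotiation, which is stubbed out via an injected `fetch` function.]

```python
# Sketch of a stateless Linked Data walk: each step dereferences the
# current node's URI, looks at the triples that come back, and picks
# the next node -- no server-side session, no local store required.

def walk(start_uri, predicate, steps, fetch):
    """Follow `predicate` edges outward from start_uri for up to `steps` hops.

    `fetch(uri)` is assumed to return an iterable of (s, p, o) triples
    obtained by dereferencing `uri` (in practice, an HTTP GET); it is
    injected so the walk itself carries no state between hops.
    """
    node = start_uri
    path = [node]
    for _ in range(steps):
        triples = fetch(node)
        targets = [o for (s, p, o) in triples if s == node and p == predicate]
        if not targets:
            break
        node = targets[0]  # a real crawler might branch or backtrack here
        path.append(node)
    return path

# Toy "web": each URI dereferences to the triples asserted about it.
WEB = {
    "http://example.org/a": [("http://example.org/a", "knows", "http://example.org/b")],
    "http://example.org/b": [("http://example.org/b", "knows", "http://example.org/c")],
    "http://example.org/c": [],
}

print(walk("http://example.org/a", "knows", 5, WEB.__getitem__))
# -> ['http://example.org/a', 'http://example.org/b', 'http://example.org/c']
```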
You know I'm a big fan of SPARQL, but the Gremlin approach really does seem to render a lot of it redundant. OK, maybe while you're walking the Web you might want to pass the data into a local SQL DB (for example), but being able to walk the paths could be really useful. SPARQL 1.1 has property paths, which look good on paper, but say you've set up your eCommerce site: being able to walk it with data spectacles on looks good to me.

On 12 May 2011 14:57, Reto Bachmann-Gmuer <[email protected]> wrote:
> Hi,
>
> The .in and .out seem to be equivalent to / and /- in graphnode. Now, for
> navigating the web of data, we could simply add a virtual graph that
> dereferences named resources in a triple pattern, adding the triples to
> a cache. A simple solution for authority would be the MSG. This
> wouldn't prevent me from saying that you know me, and for this triple
> to be in the virtual graph as if you had asserted it, but it would
> prevent me from linking two named resources without having authority
> (i.e. control of resolution of the URI space) over at least one of them.
> Another approach would be to limit authority to the non-symmetric CBD
> (expanding only objects, but not subjects) of the dereferenced
> resource.
>
> Cheers,
> Reto
>
> On Thu, May 12, 2011 at 2:10 PM, Danny Ayers <[email protected]> wrote:
>> Hi Henry,
>>
>> I did have problems seeing the relevance of your work on friendly RDF
>> syntax to the Clerezza project; while it's good work, the tie-in isn't
>> obvious. But I just had a demo of Gremlin from danbri, and now I think
>> there's a way of pulling this stuff together.
>> Gremlin is a little language for graph traversal which allows you to
>> walk the Web of data, node by node. The key part is that as you are
>> going through the graph, HTTP GETs are taking place. Get that into
>> your Friendly code and it's a winner!
>>
>> The way I imagine it working is using the command-line bits to visit
>> the parts of the published data from the point of view of a client - a
>> browser or crawler, hopefully more intelligent things too.
>>
>> Check Dan's blog post (and the addendums in comments), I think you'll
>> like this:
>>
>> http://danbri.org/words/2011/05/10/675
>>
>> Cheers,
>> Danny.
>>
>> --
>> http://danny.ayers.name
>>
>

-- 
http://danny.ayers.name
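[Editor's aside: Reto's "non-symmetric CBD" authority rule - trust only triples whose subject is the dereferenced resource, so a document can expand its objects but not assert links between foreign resources - could be sketched as a filter applied to each fetched graph. Names below are illustrative, not an actual Clerezza API.]

```python
# Sketch of the authority filter Reto describes: when dereferencing a
# resource, keep only triples whose subject is that resource. The
# document at a URI can then describe what it has authority over, but
# cannot inject arbitrary links between two resources it doesn't control.

def authoritative_triples(uri, fetched_triples):
    """Return the subset of fetched triples the document at `uri` may assert."""
    return [t for t in fetched_triples if t[0] == uri]

fetched = [
    ("http://alice.example/me", "knows", "http://bob.example/me"),  # kept: alice's doc speaks for alice
    ("http://bob.example/me", "knows", "http://carol.example/me"),  # dropped: subject is a foreign resource
]
print(authoritative_triples("http://alice.example/me", fetched))
# -> [('http://alice.example/me', 'knows', 'http://bob.example/me')]
```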
