Re: how to consume linked data

2009-10-10 Thread rick

Kingsley & All:

OK, it took a few weeks to put my thoughts together, but I think you'll 
enjoy reading this new post, called "Linked Data: Interpretants and 
Interpretation".


http://phaneron.rickmurphy.org/?p=36

See below for additional comments.

Kingsley Idehen wrote:

rick wrote:

Danny & All:

Of course there's been a lot of work on this subject over the years. 
You can find one nice piece on analogical reasoning here [1].


But for linked data to become useful it's important to refine our 
understanding of web architecture beginning with the language of 
resources. I'm currently working up a piece for my blog on this topic. 
Stay tuned.

Amen re. resources!


Amen.

Information- and non-information-resource terminology is second only to 
RDF/XML among the historic impediments to comprehending the essence of 
the Semantic Web Project vision.


I agree that resource terminology remains an impediment even after 
browsing the TAG discussions and understanding the background of some 
key W3C recommendations. I hope Interpretants and Interpretation 
furthers the long-standing discussion of resources and URIs in the W3C.


If by the essence of the Semantic Web project vision you mean the 2001 
Scientific American article, you'll find Interpretants and 
Interpretation speaks directly to an updated RDF model theory to support 
that article.


I don't intend the suggestions in Interpretants and Interpretation as a 
criticism. It took me more than a few years of careful research and a 
lot of help from the very smart folks who got us where we are today to 
understand these issues.


That being said, I have recommended that the US and UK governments 
engage in creating a Linked Data roadmap. One essential element of 
that roadmap would include a lifecycle for how linked data publishers 
and linked data consumers could work towards the goals of linked data 
without the oversight of a directed authority.


1. http://www.jfsowa.com/pubs/analog.htm






--
Rick

cell: 703-201-9129
web:  http://www.rickmurphy.org
blog: http://phaneron.rickmurphy.org



Re: how to consume linked data

2009-09-27 Thread John Graybeal
I find this answer valuable, but unsatisfying. To me this is the
fundamental weak spot in the whole chain of semantic web/linked data.

I do appreciate the tremendous flexibility, generality, simplicity,
novelty, and cool factor in the semantic web/linked data frameworks.
But having done everything you can with that, for effective
interoperability people doing similar things (i.e., making similar
resources) will need to build and label them in known compatible ways.

I think it is entirely analogous to folksonomy searching (e.g.,
Google searches of free text, more or less) vs. controlled-vocabulary
searching (e.g., using metadata standards with controlled
vocabularies). At scale, the former will stay in the lead and be
increasingly powerful; but the latter will always be necessary for
more deterministic, consistent, and targeted results. Well, at least
until computers are Really, Really smart.


John

On Sep 26, 2009, at 3:08 AM, Olaf Hartig wrote:


Hey Danny,

On Friday 25 September 2009 22:51:37 Danny Ayers wrote:

2009/9/25 Juan Sequeda juanfeder...@gmail.com:

Linked Data is out there. Now it's time to develop smart (personalized)
software agents to consume the data and give it back to humans.

I don't disagree, but I do think the necessary agents aren't smart,
just stupid bots (aka Web services a la Fielding).

These stupid bots are able to discover and make use of data from a wide
variety of sources on the Web. I'm still convinced this enables
interestingly novel applications. And let's not forget, these
applications enable users to retain full control over the authoritative
source of the data they provide.

This is a big step.

It is more a question of why so few of these applications have come up
yet. I agree with Kjetil here. Tools are missing that bring developers
(who don't know all the technical details) on board. One possible
approach to this is:

try also using SQUIN (www.squin.org)

Thanks, not seen before.

... which is a query service (currently still in pre-alpha) that is
based on the functionality of the SemWeb Client Lib. An application
simply sends a SPARQL query. This query is executed over the Web of
Linked Data using the link traversal query execution approach as
implemented in the SemWeb Client Lib. The result is returned to the app,
which may visualize or process it. Hence, the app developer does not
need to bother with traversing RDF links, RDF/XML vs. RDFa, etc.

Another important issue in consuming LD is the filtering of data, as you
mention in your original question. Indeed, we need approaches to
filtering automatically during the discovery of data. Unfortunately, for
many filter criteria (e.g. reliability, timeliness, trustworthiness) we
do not even know very well how we might filter automatically, given we
have the data.

Greetings,
Olaf





---
John Graybeal
Marine Metadata Interoperability Project: http://marinemetadata.org
grayb...@marinemetadata.org







Re: how to consume linked data

2009-09-27 Thread Adrian Walker
Hi Kjetil --

You wrote...

*I think there is a critical piece of technology that is missing in our
arsenal, namely a (free software) programming stack that makes a large group
of developers, who are likely to have little prior understanding of semweb,
to go "yeah, I can do that."*

How about being more ambitious?  In the above, change a large group of *
developers* to a large group of *non-programmers*.

That would get you Social Media Meets Linked Data.

Here's a step in that direction:

  www.reengineeringllc.com/demo_agents/RDFQueryLangComparison1.agent

There's also a short paper


www.reengineeringllc.com/A_Wiki_for_Business_Rules_in_Open_Vocabulary_Executable_English.pdf

and the technology is online at the same site.

Cheers,   -- Adrian

Internet Business Logic
A Wiki and SOA Endpoint for Executable Open Vocabulary English over SQL and
RDF
Online at www.reengineeringllc.com. Shared use is free.

Adrian Walker
Reengineering


On Fri, Sep 25, 2009 at 6:53 AM, Kjetil Kjernsmo kje...@kjernsmo.net wrote:

 On Friday 25. September 2009 10:15:34 you wrote:
  sorry if I sound negative, I reckon the semweb is a done deal now, the
  many-eyeballs arrived.

 Thanks for asking the right questions, Danny, I believe it is critical for
 the success that someone does!

  but - where should we take it?

 What I'd like to do with it is to solve problems for people when combining
 data sets that cannot be solved by conventional means; i.e., today the
 number of people who are interested in a particular combination of datasets
 goes down whereas the cost generally goes up, so it doesn't scale.

 I think there is a critical piece of technology that is missing in our
 arsenal, namely a (free software) programming stack that makes a large
 group of developers, who are likely to have little prior understanding of
 semweb, to go "yeah, I can do that."

 I think the work done by the Drupal folks is a right step in this
 direction, for the kind of stuff that people use a CMS for. But I think
 that we also need a stack, probably built around the MVC pattern, that can
 be used for more generic purposes.

 I haven't got anywhere with my ideas on this topic though...


 Kjetil




Re: how to consume linked data

2009-09-26 Thread Olaf Hartig
Hey Danny,

On Friday 25 September 2009 22:51:37 Danny Ayers wrote:
 2009/9/25 Juan Sequeda juanfeder...@gmail.com:
  Linked Data is out there. Now it's time to develop smart (personalized)
  software agents to consume the data and give it back to humans.

 I don't disagree, but I do think the necessary agents aren't smart,
 just stupid bots (aka Web services a la Fielding).

These stupid bots are able to discover and make use of data from a wide 
variety of sources on the Web. I'm still convinced this enables 
interestingly novel applications. And let's not forget, these applications 
enable users to retain full control over the authoritative source of the 
data they provide.
This is a big step.

It is more a question of why so few of these applications have come up yet. I 
agree with Kjetil here. Tools are missing that bring developers (who don't 
know all the technical details) on board. One possible approach to this is:

  try also using SQUIN (www.squin.org)

 Thanks, not seen before.

... which is a query service (currently still in pre-alpha) that is based on 
the functionality of the SemWeb Client Lib. An application simply sends a 
SPARQL query. This query is executed over the Web of Linked Data using the 
link traversal query execution approach as implemented in the SemWeb Client 
Lib. The result is returned to the app, which may visualize or process it. 
Hence, the app developer does not need to bother with traversing RDF links, 
RDF/XML vs. RDFa, etc.
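
A minimal sketch may make the idea concrete. SQUIN and the SemWeb Client Lib
are Java; the sketch below instead uses Python with rdflib, follows only
rdfs:seeAlso links, and its seed URI and query are invented for illustration.
It shows the shape of link traversal query execution, not SQUIN's actual API:

# A minimal sketch of link-traversal query execution in Python/rdflib.
# Illustrative only: the seed URI and query below are made up.
from rdflib import Graph, URIRef
from rdflib.namespace import RDFS

def traverse_and_query(seed_uri, sparql, max_docs=10):
    graph = Graph()
    seen, frontier = set(), [seed_uri]
    while frontier and len(seen) < max_docs:
        uri = frontier.pop(0)
        if uri in seen:
            continue
        seen.add(uri)
        try:
            graph.parse(uri)          # dereference the URI, keep its triples
        except Exception:
            continue                  # skip unreachable or unparsable docs
        for _, _, o in graph.triples((None, RDFS.seeAlso, None)):
            if isinstance(o, URIRef):
                frontier.append(str(o))   # follow rdfs:seeAlso links
    return graph.query(sparql)

# Hypothetical usage: collect names reachable from a FOAF profile.
for row in traverse_and_query(
        "http://example.org/people/alice",
        "SELECT ?name WHERE { ?s <http://xmlns.com/foaf/0.1/name> ?name }"):
    print(row.name)

A real engine decides which links are relevant to the query rather than
following everything within reach.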

Another important issue in consuming LD is the filtering of data, as you 
mention in your original question. Indeed, we need approaches to filtering 
automatically during the discovery of data. Unfortunately, for many filter 
criteria (e.g. reliability, timeliness, trustworthiness) we do not even know 
very well how we might filter automatically, given we have the data.
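
One criterion that is implementable today is provenance: only dereference
sources the application already trusts. A small sketch, with an illustrative
whitelist; a traversal loop like the one above would consult it before each
graph.parse(uri):

# Filter during discovery by provenance: fetch only from trusted hosts.
# The whitelist is illustrative; content-level criteria such as
# timeliness or trustworthiness remain the open problem noted above.
from urllib.parse import urlparse

TRUSTED_HOSTS = {"dbpedia.org", "data.gov.uk"}

def worth_fetching(uri):
    return urlparse(uri).hostname in TRUSTED_HOSTS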

Greetings,
Olaf




Re: how to consume linked data

2009-09-25 Thread Graham Klyne

Dan Brickley wrote:

This doc-typing idiom never got heavily used in FOAF, beyond the type
PersonalProfileDocument, which FOAF defines. Mostly we just linked
FOAF files together (initially with seeAlso and IFPs, lately using
URIs more explicitly).

I think there are many other reasons why characterising typical RDF
document patterns makes sense, related to the frustration of dealing
with documents when all you know is they have triples in them. We
don't have good mechanisms for doing so yet, i.e. for characterising
these higher level patterns. But various folk are heading in the same
direction, some using SPARQL, others OWL or XForms, or DC Application
Profile definitions.

Without some hints about what we're pointing at with our links,
crawlers don't have much to go on. Merely knowing that the information
at the other end of the link is more RDF, or that it describes a
thing of a certain type, might not be enough. There are a lot of
things you might want to know about a person, or a place, and at many
different levels of detail. Apps running in, e.g., a mobile/handheld
environment can't afford to speculatively download everything.


Interesting... I'm doing work at the moment with CIDOC-CRM 
(http://cidoc.ics.forth.gr/) and its expression in OWL 
(http://www8.informatik.uni-erlangen.de/IMMD8/Services/cidoc-crm/versions.html). 
Something I've noticed is that the extension/refinement mechanism provided by 
CIDOC-CRM is based on what they call Types (though I think it's more like 
skos:Concept), so the core properties tend to be very predictable. There 
are some areas where I've used new properties to capture finer-grained 
information, but they tend to be at the margins (e.g. putting numeric values on 
date-ranges) rather than in the core (e.g. this object was made in this time 
period).


Maybe there's scope for using SKOS in a doc-typing idiom?

#g
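
To make Graham's closing suggestion concrete: one possible reading of a SKOS
doc-typing idiom, sketched in Python with rdflib, is to link a document to a
skos:Concept via dcterms:subject, so a crawler can judge relevance before
fetching. All URIs below are made up.

# One possible SKOS doc-typing idiom, with invented URIs: point at a
# document's subject concept so crawlers can judge relevance up front.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

SKOS = Namespace("http://www.w3.org/2004/02/skos/core#")
DCT = Namespace("http://purl.org/dc/terms/")

g = Graph()
doc = URIRef("http://example.org/data/finds.rdf")        # hypothetical doc
topic = URIRef("http://example.org/concepts/BronzeAge")  # hypothetical concept

g.add((topic, RDF.type, SKOS.Concept))
g.add((topic, SKOS.prefLabel, Literal("Bronze Age", lang="en")))
g.add((doc, DCT.subject, topic))

print(g.serialize(format="turtle"))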





Re: how to consume linked data

2009-09-25 Thread Danny Ayers
Many thanks for responses, stuff to think about.

Yihong got to the root of my question: "...miss the main purpose why we
want to have data linked in the first place".

why are places like itsy, youtube and redtube (yup, pr0n still lives)
more compelling, given what we know?

people *are* getting the data out, but it seems to me there's a gap
between that and stuff that actually improves people's quality of
life.

sorry if I sound negative, I reckon the semweb is a done deal now, the
many-eyeballs arrived.

but - where should we take it?




-- 
http://danny.ayers.name



Re: how to consume linked data

2009-09-25 Thread Kjetil Kjernsmo
On Friday 25. September 2009 10:15:34 you wrote:
 sorry if I sound negative, I reckon the semweb is a done deal now, the
 many-eyeballs arrived.

Thanks for asking the right questions, Danny, I believe it is critical for 
the success that someone does!

 but - where should we take it?

What I'd like to do with it is to solve problems for people when combining 
data sets that cannot be solved by conventional means; i.e., today the 
number of people who are interested in a particular combination of datasets 
goes down whereas the cost generally goes up, so it doesn't scale. 

I think there is a critical piece of technology that is missing in our 
arsenal, namely a (free software) programming stack that makes a large 
group of developers, who are likely to have little prior understanding of 
semweb, to go "yeah, I can do that." 

I think the work done by the Drupal folks is a right step in this 
direction, for the kind of stuff that people use a CMS for. But I think 
that we also need a stack, probably built around the MVC pattern, that can 
be used for more generic purposes. 

I haven't got anywhere with my ideas on this topic though...


Kjetil



Re: how to consume linked data

2009-09-25 Thread Juan Sequeda
Linked Data is out there. Now it's time to develop smart (personalized)
software agents to consume the data and give it back to humans.

try also using SQUIN (www.squin.org)

Juan Sequeda, Ph.D Student
Dept. of Computer Sciences
The University of Texas at Austin
www.juansequeda.com
www.semanticwebaustin.org


On Fri, Sep 25, 2009 at 11:56 AM, Leo Sauermann leo.sauerm...@dfki.de wrote:

 Uh, I thought the answer to Danny's question is semwebclient by Olaf Hartig
 and others.

 http://www4.wiwiss.fu-berlin.de/bizer/ng4j/semwebclient/

 In general, I thought that Olaf Hartig would be the first contact for such
 things...

 best
 Leo


 It was Danny Ayers who said at the right time 24.09.2009 09:59 the
 following words:

  The human reading online texts has a fair idea of what is and what
 isn't relevant, but how does this work for the Web of data? Should we
 have tools to just suck in any nearby triples, drop them into a model,
 assume that there's enough space for the irrelevant stuff, filter
 later?

 How do we do (in software) things like directed search without the human
 agent?

 I'm sure we can get to the point of - analogy -  looking stuff up in
 Wikipedia & picking relevant links, but we don't seem to have the user
 stories for the bits linked data enables. Or am I just
 imagination-challenged?

 Cheers,
 Danny.





 --
 _
 Dr. Leo Sauermann   
 http://www.dfki.de/~sauermann
 Deutsches Forschungszentrum fuer Kuenstliche Intelligenz DFKI GmbH
 Trippstadter Strasse 122
 P.O. Box 2080   Fon:   +43 6991 gnowsis
 D-67663 Kaiserslautern  Fax:   +49 631 20575-102
 Germany Mail:  leo.sauerm...@dfki.de

 Geschaeftsfuehrung:
 Prof.Dr.Dr.h.c.mult. Wolfgang Wahlster (Vorsitzender)
 Dr. Walter Olthoff
 Vorsitzender des Aufsichtsrats:
 Prof. Dr. h.c. Hans A. Aukes
 Amtsgericht Kaiserslautern, HRB 2313
 _





Re: how to consume linked data

2009-09-25 Thread Danny Ayers
2009/9/25 Kjetil Kjernsmo kje...@kjernsmo.net:
 On Friday 25. September 2009 10:15:34 you wrote:
 sorry if I sound negative, I reckon the semweb is a done deal now, the
 many-eyeballs arrived.

 Thanks for asking the right questions, Danny, I believe it is critical for
 the success that someone does!

Thanks, but I'm not even sure they are the right questions.

 but - where should we take it?

 What I'd like to do with it is to solve problems for people when combining
 data sets that cannot be solved by conventional means; i.e., today the
 number of people who are interested in a particular combination of datasets
 goes down whereas the cost generally goes up, so it doesn't scale.

Yes, but please bear with me now - do we have to wait for another
generation arriving on the Web? There must be ways we can kick-start
this stuff.

 I think there is a critical piece of technology that is missing in our
 arsenal, namely a (free software) programming stack that makes a large
 group of developers, who are likely to have little prior understanding of
 semweb, to go "yeah, I can do that."

Like bengee's ARC2 stack - PHP?

 I think the work done by the Drupal folks is a right step in this
 direction, for the kind of stuff that people use a CMS for. But I think
 that we also need a stack, probably built around the MVC pattern, that can
 be used for more generic purposes.

Absolutely. If we can re-use existing patterns we can get people involved.

 I haven't got anywhere with my ideas on this topic though...

Me neither :)

I should insert a Star Trek quote here, but can't think of one.

Cheers,
Danny.



-- 
http://danny.ayers.name



Re: how to consume linked data

2009-09-25 Thread Danny Ayers
2009/9/25 Juan Sequeda juanfeder...@gmail.com:
 Linked Data is out there. Now it's time to develop smart (personalized)
 software agents to consume the data and give it back to humans.

I don't disagree, but I do think the necessary agents aren't smart,
just stupid bots (aka Web services a la Fielding).

 try also using SQUIN (www.squin.org)

Thanks, not seen before.


-- 
http://danny.ayers.name



Re: how to consume linked data

2009-09-25 Thread Danny Ayers
Olaf, comments?

2009/9/25 Leo Sauermann leo.sauerm...@dfki.de:
 Uh, I thought the answer to Danny's question is semwebclient by Olaf Hartig
 and others.

 http://www4.wiwiss.fu-berlin.de/bizer/ng4j/semwebclient/

 In general, I thought that Olaf Hartig would be the first contact for such
 things...

 best
 Leo


 It was Danny Ayers who said at the right time 24.09.2009 09:59 the following
 words:

 The human reading online texts has a fair idea of what is and what
 isn't relevant, but how does this work for the Web of data? Should we
 have tools to just suck in any nearby triples, drop them into a model,
 assume that there's enough space for the irrelevant stuff, filter
 later?

 How do we do (in software) things like directed search without the human
 agent?

 I'm sure we can get to the point of - analogy -  looking stuff up in
 Wikipedia & picking relevant links, but we don't seem to have the user
 stories for the bits linked data enables. Or am I just
 imagination-challenged?

 Cheers,
 Danny.




 --
 _
 Dr. Leo Sauermann       http://www.dfki.de/~sauermann
 Deutsches Forschungszentrum fuer Kuenstliche Intelligenz DFKI GmbH
 Trippstadter Strasse 122
 P.O. Box 2080           Fon:   +43 6991 gnowsis
 D-67663 Kaiserslautern  Fax:   +49 631 20575-102
 Germany                 Mail:  leo.sauerm...@dfki.de

 Geschaeftsfuehrung:
 Prof.Dr.Dr.h.c.mult. Wolfgang Wahlster (Vorsitzender)
 Dr. Walter Olthoff
 Vorsitzender des Aufsichtsrats:
 Prof. Dr. h.c. Hans A. Aukes
 Amtsgericht Kaiserslautern, HRB 2313
 _





-- 
http://danny.ayers.name



Re: how to consume linked data

2009-09-25 Thread Toby Inkster

On 25 Sep 2009, at 07:41, Graham Klyne wrote:

Interesting... I'm doing work at the moment with CIDOC-CRM
(http://cidoc.ics.forth.gr/) and its expression in OWL
(http://www8.informatik.uni-erlangen.de/IMMD8/Services/cidoc-crm/versions.html).


Have you seen Simon Reinhardt's recent OWL2 version?

http://bloody-byte.net/rdf/cidoc-crm/index.html

--
Toby A Inkster
mailto:m...@tobyinkster.co.uk
http://tobyinkster.co.uk






Re: how to consume linked data

2009-09-24 Thread rick

Danny & All:

Of course there's been a lot of work on this subject over the years. You 
can find one nice piece on analogical reasoning here [1].


But for linked data to become useful it's important to refine our 
understanding of web architecture beginning with the language of 
resources. I'm currently working up a piece for my blog on this topic. 
Stay tuned.


That being said, I have recommended that the US and UK governments 
engage in creating a Linked Data roadmap. One essential element of that 
roadmap would include a lifecycle for how linked data publishers and 
linked data consumers could work towards the goals of linked data 
without the oversight of a directed authority.


1. http://www.jfsowa.com/pubs/analog.htm

--
Rick

cell: 703-201-9129
web:  http://www.rickmurphy.org
blog: http://phaneron.rickmurphy.org

Danny Ayers wrote:

The human reading online texts has a fair idea of what is and what
isn't relevant, but how does this work for the Web of data? Should we
have tools to just suck in any nearby triples, drop them into a model,
assume that there's enough space for the irrelevant stuff, filter
later?

How do we do (in software) things like directed search without the human agent?

I'm sure we can get to the point of - analogy -  looking stuff up in
Wikipedia & picking relevant links, but we don't seem to have the user
stories for the bits linked data enables. Or am I just
imagination-challenged?

Cheers,
Danny.
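
Danny's first option, suck in any nearby triples and filter later, is at
least easy to make concrete. A minimal sketch in Python with rdflib, with
invented source URLs: accumulate indiscriminately, then filter only at
query time.

# "Suck in any nearby triples, drop them into a model, filter later",
# taken literally. The source URLs are invented for illustration.
from rdflib import Graph

g = Graph()
for url in ["http://example.org/a.rdf", "http://example.org/b.rdf"]:
    try:
        g.parse(url)       # accumulate everything within reach
    except Exception:
        pass               # ignore sources that fail to load

# Filtering happens only at query time, once we know what we need:
for s, label in g.query("""
    SELECT ?s ?label WHERE {
        ?s <http://www.w3.org/2000/01/rdf-schema#label> ?label .
        FILTER LANGMATCHES(LANG(?label), "en")
    }"""):
    print(s, label)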







Re: how to consume linked data

2009-09-24 Thread Kingsley Idehen

Danny Ayers wrote:

The human reading online texts has a fair idea of what is and what
isn't relevant, but how does this work for the Web of data? Should we
have tools to just suck in any nearby triples, drop them into a model,
assume that there's enough space for the irrelevant stuff, filter
later?

How do we do (in software) things like directed search without the human agent?

I'm sure we can get to the point of - analogy -  looking stuff up in
Wikipedia & picking relevant links, but we don't seem to have the user
stories for the bits linked data enables. Or am I just
imagination-challenged?

Cheers,
Danny.

  
I think users have to discover, comprehend, and then exploit (consume or 
extend the reference chain). This is the vital sequence.


fwiw, here is how I tell the story to general observers:

Today, you put a resource URL in your browser and get either of the 
following:

- Rendered Page
- Markup behind the Page

Linked Data simply adds the ability to see a resource description 
(metadata). The description honors the Web's core architecture by 
providing links for each component of the description.


That's it.  All the other smart stuff simply happens behind the scenes 
and shows up in the resource description.
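
The mechanics behind this story are plain HTTP content negotiation: one URL,
two representations, selected by the Accept header. A sketch using Python's
standard library; the DBpedia URI is just a familiar example, and servers
typically 303-redirect the data request to the description document.

# One URL, two views: the Accept header selects page or description.
from urllib.request import Request, urlopen

url = "http://dbpedia.org/resource/Tim_Berners-Lee"  # a familiar example

# A browser effectively asks for HTML and gets the rendered page...
page = urlopen(Request(url, headers={"Accept": "text/html"}))

# ...a Linked Data client asks for RDF and gets the resource description.
data = urlopen(Request(url, headers={"Accept": "application/rdf+xml"}))
print(data.headers.get("Content-Type"))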


--


Regards,

Kingsley Idehen       Weblog: http://www.openlinksw.com/blog/~kidehen
President & CEO
OpenLink Software     Web: http://www.openlinksw.com