Re: Alternatives to OWL for linked data?

2009-07-27 Thread Axel Rauschmayer

You were asking about description logic programming; well, OWL 2 RL:

http://www.w3.org/TR/owl2-profiles/#OWL_2_RL

is exactly that: it is a manifestation of DLP. It has a Direct Semantics
'side', compatible with OWL 2 DL, and a rule based 'side', described by
the rule set:

http://www.w3.org/TR/owl2-profiles/#Reasoning_in_OWL_2_RL_and_RDF_Graphs_using_Rules

This rule set can be used for the forward or backward chaining approach
(or a combination thereof) that you describe. I have heard rumours
and/or statements about implementations coming up from various vendors. I
have, actually, a purely proof-of-concept, stupid-simple implementation
doing brute-force forward chaining:

http://www.ivan-herman.net/Misc/2008/owlrl/

Just to show what happens. And I am sure other implementations that I
do not yet know about will come to the fore.



Cool stuff. How would backward chaining work? Would it be invoked via  
SPARQL? Is listing all properties of a given resource still possible?


Axel

--
axel.rauschma...@ifi.lmu.de
http://www.pst.ifi.lmu.de/~rauschma/






Re: Alternatives to OWL for linked data?

2009-07-27 Thread Ivan Herman


Axel Rauschmayer wrote:
 You were asking about description logic programming; well, OWL 2 RL:

 http://www.w3.org/TR/owl2-profiles/#OWL_2_RL

 is exactly that: it is a manifestation of DLP. It has a Direct Semantics
 'side', compatible with OWL 2 DL, and a rule based 'side', described by
 the rule set:

 http://www.w3.org/TR/owl2-profiles/#Reasoning_in_OWL_2_RL_and_RDF_Graphs_using_Rules


 This rule set can be used for the forward or backward chaining approach
 (or a combination thereof) that you describe. I have heard rumours
 and/or statements about implementations coming up from various vendors. I
 have, actually, a purely proof-of-concept, stupid-simple implementation
 doing brute-force forward chaining:

 http://www.ivan-herman.net/Misc/2008/owlrl/

 Just to show what happens. And I am sure other implementations that I do
 not yet know about will come to the fore.
 
 
 Cool stuff. How would backward chaining work? Would it be invoked via
 SPARQL? Is listing all properties of a given resource still possible?
 

At this moment, your guess is as good as mine:-) We will have to see what
solutions implementers come up with.

_Conceptually_ one could say that the SPARQL query is done on the
deductive closure (via OWL RL) of the data set. But taking it literally
(i.e., expanding the graph with, say, forward chaining, and running the
query on top of it) is probably not efficient, so I expect implementers
to come up with cool tricks:-)
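
To make the "expand, then query" reading concrete, here is a minimal,
purely illustrative sketch (Python with rdflib; the namespace, the data
and the expand() helper are invented for the example, and this is not the
owlrl code linked above). It applies two of the OWL 2 RL rules -- prp-trp
for owl:TransitiveProperty and prp-inv1/prp-inv2 for owl:inverseOf -- by
brute-force forward chaining until nothing new is derived, and then runs
an ordinary SPARQL query over the expanded graph:

from rdflib import Graph, Namespace
from rdflib.namespace import RDF, OWL

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.partOf, RDF.type, OWL.TransitiveProperty))
g.add((EX.hasPart, OWL.inverseOf, EX.partOf))
g.add((EX.engine, EX.partOf, EX.car))
g.add((EX.piston, EX.partOf, EX.engine))

def expand(graph):
    # Brute-force forward chaining: apply the rules until a fixpoint is reached.
    while True:
        new = set()
        # prp-trp: ?p a owl:TransitiveProperty . ?x ?p ?y . ?y ?p ?z  =>  ?x ?p ?z
        for p in graph.subjects(RDF.type, OWL.TransitiveProperty):
            for x, y in graph.subject_objects(p):
                for z in graph.objects(y, p):
                    new.add((x, p, z))
        # prp-inv1 / prp-inv2: ?p owl:inverseOf ?q . ?x ?p ?y  =>  ?y ?q ?x (and vice versa)
        for p, q in graph.subject_objects(OWL.inverseOf):
            for x, y in graph.subject_objects(p):
                new.add((y, q, x))
            for x, y in graph.subject_objects(q):
                new.add((y, p, x))
        added = new - set(graph)
        if not added:
            return graph
        for t in added:
            graph.add(t)

expand(g)
# Plain SPARQL over the deductive closure: both ex:engine and ex:piston show up.
for row in g.query(
        "SELECT ?part WHERE { <http://example.org/car> <http://example.org/hasPart> ?part }"):
    print(row.part)

Listing all properties of a given resource then simply amounts to querying
(or iterating over) the closure; a backward chainer would instead evaluate
the same rules on demand, driven by the query.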

Ivan


 Axel
 

-- 

Ivan Herman, W3C Semantic Web Activity Lead
Home: http://www.w3.org/People/Ivan/
mobile: +31-641044153
PGP Key: http://www.ivan-herman.net/pgpkey.html
FOAF: http://www.ivan-herman.net/foaf.rdf




Alternatives to OWL for linked data?

2009-07-24 Thread Axel Rauschmayer
I'm currently reading Allemang and Hendler's brilliant book "Semantic Web
for the Working Ontologist". It really drove home the point that OWL is
not a good fit when using RDF for *data* (names are generally not unique,
open-world assumption, ...).


But what is the alternative? For my applications, I have the following  
requirements:


- Properties: transitivity, inverse, sub-properties.
- Resources, classes: equivalence. For my purposes, equivalence is a  
way of implementing the topic merging in topic maps [1].

- Constraints for integrity checking.
- Schema declaration: partially overlaps with constraints, serves for  
documentation and for providing default values for properties.
- Computed property values: for example, one property value being the  
concatenation of two other property values etc.
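
As an aside, the first two bullets above map directly onto standard
RDFS/OWL vocabulary. A small illustrative sketch (rdflib, with invented
example terms) of how those declarations would look:

from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS, OWL

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.partOf, RDF.type, OWL.TransitiveProperty))      # transitivity
g.add((EX.hasPart, OWL.inverseOf, EX.partOf))              # inverse
g.add((EX.directPartOf, RDFS.subPropertyOf, EX.partOf))    # sub-property
g.add((EX.topicA, OWL.sameAs, EX.topicB))                  # resource equivalence ("merging")
g.add((EX.Car, OWL.equivalentClass, EX.Automobile))        # class equivalence
print(g.serialize(format="turtle"))

It is the remaining bullets (constraints, defaults, computed values) that
I don't see an off-the-shelf answer for.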


The difficulty, it seems to me, is to find something universal that
fulfills these requirements and is still easy to understand. Inference,
when used for transitivity and equivalence, is simple, but when it comes
to editing RDF, it can confound the user: Why can some triples be
replaced, but others not? Why do I have to replace the triples of a
different instance if I want to replace the triples in my instance?


While it's not necessarily easier to understand for end users, I've
always found Prolog easy to understand, whereas OWL is more of a
challenge.


So what solutions are out there? I would prefer description logic  
programming to OWL. Does Prolog-like backward-chaining make sense for  
RDF? If so, how would it be combined with SPARQL; or would it replace  
it? Or maybe something frame-based?
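
To make the Prolog analogy concrete: a backward chainer would treat the
triple store as a set of ground facts and evaluate rules on demand, driven
by the query. A deliberately tiny, illustrative sketch (plain Python; the
facts, the ex:hasAncestor rules and all names are invented, and a real
system would need indexing and termination handling, e.g. tabling):

import itertools

FACTS = {
    ("ex:alice", "ex:hasParent", "ex:bob"),
    ("ex:bob", "ex:hasParent", "ex:carol"),
}

# Rules as (head, body): the head holds if every body pattern can be proven.
RULES = [
    (("?x", "ex:hasAncestor", "?y"), [("?x", "ex:hasParent", "?y")]),
    (("?x", "ex:hasAncestor", "?z"), [("?x", "ex:hasParent", "?y"),
                                      ("?y", "ex:hasAncestor", "?z")]),
]

_fresh = itertools.count()

def is_var(term):
    return term.startswith("?")

def walk(term, env):
    # Follow variable bindings to the current value of a term.
    while is_var(term) and term in env:
        term = env[term]
    return term

def unify(a, b, env):
    # Unify two triple patterns under the current bindings; None on failure.
    env = dict(env)
    for x, y in zip(a, b):
        x, y = walk(x, env), walk(y, env)
        if x == y:
            continue
        if is_var(x):
            env[x] = y
        elif is_var(y):
            env[y] = x
        else:
            return None
    return env

def rename(rule):
    # Give a rule's variables fresh names so they cannot clash with the query's.
    head, body = rule
    n = next(_fresh)
    fresh = lambda t: t + "#" + str(n) if is_var(t) else t
    return tuple(map(fresh, head)), [tuple(map(fresh, p)) for p in body]

def prove(goals, env):
    # Yield every binding environment under which all goals hold (SLD style).
    if not goals:
        yield env
        return
    goal, rest = goals[0], goals[1:]
    for fact in FACTS:                      # try to match a ground fact
        e = unify(goal, fact, env)
        if e is not None:
            yield from prove(rest, e)
    for rule in RULES:                      # or reduce the goal via a rule
        head, body = rename(rule)
        e = unify(goal, head, env)
        if e is not None:
            yield from prove(body + rest, e)

# "Who are ex:alice's ancestors?" -- answered on demand, Prolog style.
for env in prove([("ex:alice", "ex:hasAncestor", "?who")], {}):
    print(walk("?who", env))                # ex:bob, then ex:carol

Something like this could sit behind a SPARQL endpoint -- each basic graph
pattern becoming a goal to prove -- or replace it for rule-heavy queries.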


Am I making sense? I would appreciate any pointers, hints and insights.

Axel

[1] http://www.topicmaps.org/xtm/index.html#desc-merging

--
axel.rauschma...@ifi.lmu.de
http://www.pst.ifi.lmu.de/~rauschma/






Re: Alternatives to OWL for linked data?

2009-07-24 Thread Axel Rauschmayer

Thanks, looks interesting. I've also found related work:
https://www.uni-koblenz-landau.de/koblenz/fb4/institute/IFI/AGStaab/Research/systeme/NetworkedGraphs/

But there does not seem to be a library one can use with, say, Sesame.

On Jul 24, 2009, at 15:43, Martin Hepp (UniBW) wrote:


Did you look at SPIN?

http://spinrdf.org/

That should allow you to do a lot with data without leaving the now
mainstream Semantic Web technology stack (as long as a small
fragment of OWL is sufficient for you).


Best
Martin


Axel Rauschmayer wrote:
I'm currently reading Allemang and Hendler's brilliant book "Semantic
Web for the Working Ontologist". It really drove home the point that OWL
is not a good fit when using RDF for *data* (names are generally not
unique, open-world assumption, ...).


But what is the alternative? For my applications, I have the  
following requirements:


- Properties: transitivity, inverse, sub-properties.
- Resources, classes: equivalence. For my purposes, equivalence is  
a way of implementing the topic merging in topic maps [1].

- Constraints for integrity checking.
- Schema declaration: partially overlaps with constraints, serves  
for documentation and for providing default values for properties.
- Computed property values: for example, one property value being  
the concatenation of two other property values etc.


The difficulty, it seems to me, is to find something universal that
fulfills these requirements and is still easy to understand.
Inference, when used for transitivity and equivalence, is simple,
but when it comes to editing RDF, it can confound the user: Why
can some triples be replaced, but others not? Why do I have to replace
the triples of a different instance if I want to replace the
triples in my instance?


While it's not necessarily easier to understand for end users, I've
always found Prolog easy to understand, whereas OWL is more of a
challenge.


So what solutions are out there? I would prefer description logic  
programming to OWL. Does Prolog-like backward-chaining make sense  
for RDF? If so, how would it be combined with SPARQL; or would it  
replace it? Or maybe something frame-based?


Am I making sense? I would appreciate any pointers, hints and  
insights.


Axel

[1] http://www.topicmaps.org/xtm/index.html#desc-merging



--
--
martin hepp
e-business & web science research group
universitaet der bundeswehr muenchen

e-mail:  mh...@computer.org
phone:   +49-(0)89-6004-4217
fax: +49-(0)89-6004-4620
www: http://www.unibw.de/ebusiness/ (group)
   http://www.heppnetz.de/ (personal)
skype:   mfhepp twitter: mfhepp

Check out GoodRelations for E-Commerce on the Web of Linked Data!
=

Webcast:
http://www.heppnetz.de/projects/goodrelations/webcast/

Recipe for Yahoo SearchMonkey:
http://tr.im/rAbN

Talk at the Semantic Technology Conference 2009: Semantic Web-based  
E-Commerce: The GoodRelations Ontology

http://tinyurl.com/semtech-hepp

Overview article on Semantic Universe:
http://tinyurl.com/goodrelations-universe

Project page:
http://purl.org/goodrelations/

Resources for developers:
http://www.ebusiness-unibw.org/wiki/GoodRelations

Tutorial materials:
CEC'09 2009 Tutorial: The Web of Data for E-Commerce: A Hands-on  
Introduction to the GoodRelations Ontology, RDFa, and Yahoo!  
SearchMonkey http://tr.im/grcec09




--
axel.rauschma...@ifi.lmu.de
http://www.pst.ifi.lmu.de/~rauschma/






Re: Alternatives to OWL for linked data?

2009-07-24 Thread Paul Houle
On Fri, Jul 24, 2009 at 9:30 AM, Axel Rauschmayer a...@rauschma.de wrote:


 While it's not necessarily easier to understand for end users, I've always
 found Prolog easy to understand, whereas OWL is more of a challenge.

 So what solutions are out there? I would prefer description logic
 programming to OWL. Does Prolog-like backward-chaining make sense for RDF?
 If so, how would it be combined with SPARQL; or would it replace it? Or
 maybe something frame-based?

 Am I making sense? I would appreciate any pointers, hints and insights.



 I've got some projects in the pipe that are primarily based on Dbpedia
and Freebase,  but I'm incorporating data from other sources as well.  The
core of this is a system called Isidore,  a specialized system for
handling generic databases.
 My viewpoint is that there are certain kinds of reasoning that are best
done in a specialized way;  for instance,  the handling of identities,
names and categories (category here includes the Dbpedia ontology and
Freebase types,  as well as internally generated ones).  For example,  a
common task is looking up an object by name.  Last time I looked,  there
were about 10k Wikipedia articles whose names differed only by
capitalization;  most of the time you want name-lookups to be
case-insensitive,  but you still want addressability for the strange cases.
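
As a toy illustration of that lookup behaviour -- all data below is
invented -- one index keyed on the case-folded name for the default lookup,
plus an exact index to keep the strange cases addressable, is essentially
enough:

from collections import defaultdict

labels = {
    "http://dbpedia.org/resource/Jaguar": "Jaguar",
    "http://dbpedia.org/resource/JAGUAR": "JAGUAR",   # differs only by capitalization
    "http://dbpedia.org/resource/Jaguar_Cars": "Jaguar Cars",
}

by_folded = defaultdict(list)   # case-insensitive index (the common case)
by_exact = {}                   # exact index, to keep the "strange cases" addressable
for uri, label in labels.items():
    by_folded[label.casefold()].append(uri)
    by_exact[label] = uri

def lookup(name, exact=False):
    if exact:
        return [by_exact[name]] if name in by_exact else []
    return by_folded.get(name.casefold(), [])

print(lookup("jaguar"))                 # .../Jaguar and .../JAGUAR
print(lookup("JAGUAR", exact=True))     # only the oddly-capitalized article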

Wikipedia also has a treasure trove of information about disambiguation.
The projects I do are about specific problem domains,  say animals,  cars,
or video games:  I can easily qualify a search for "Jaguar" against a
problem domain and get the right Dbpedia resource.

The core of identity,  naming and category information is small:  it's
easy to handle and easy to construct from Dbpedia and Freebase dumps.  From
the core it's possible to identify a problem domain and import data from
Dbpedia,  Freebase and other sources to construct a working database.

---

You might say that this is too specialized,  but this is the way the
brain works.  It's got specific modules for understanding particular problem
domains (faces,  people,  space,  etc.).  It's not so bad because the number
of modules that you need is finite.  Persons and Places represent a large
fraction of Dbpedia,  so reasoning about people and GIS can get you a lot of
mileage.  Freebase has a particularly rich collection of data about musical
recordings.

I'm not sure if systems like OWL,  etc. are really the answer -- we might
need something more like Cyc (or our own brain) that has a lot of specialized
knowledge about the world embedded in it.



I see reification as an absolute requirement.  Underlying this is the
fact that generic databases are full of junk.  I'm attracted to Prolog-like
systems (Datalog?),  but conventional logic systems are easily killed by
contradictory information.  This becomes a scalability limitation unless
you've got a system that is naturally robust to junk data.  You've also got
to be able to do conventional filtering:  you've got to be able to say
"Triple A is wrong",  "I don't trust triples from source B",  "Source C uses
predicate D incorrectly",  "Don't believe anything that E says about subject
F".  To deal with the (existing and emerging) semspam threat,  we'll also
need the same kind of probabilistic filtering that's used for e-mail and
blog comments.  (Take a look at the Dbpedia external links table if you
don't believe me.)
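
A rough sketch of what that kind of filtering could look like, assuming
each statement is kept as a quad with its source (one way to get the
reification-style "statements about statements" this needs); every rule
and data item below is invented:

quads = [
    ("ex:Jaguar", "rdf:type",   "ex:Animal", "src:A"),
    ("ex:Jaguar", "rdf:type",   "ex:Car",    "src:B"),
    ("ex:Elvis",  "ex:livesOn", "ex:Moon",   "src:E"),
]

blocked_triples        = {("ex:Jaguar", "rdf:type", "ex:Car")}  # "triple A is wrong"
distrusted_sources     = {"src:B"}                              # "I don't trust triples from source B"
bad_predicate_use      = {("src:C", "ex:population")}           # "source C uses predicate D incorrectly"
blocked_source_subject = {("src:E", "ex:Elvis")}                # "don't believe anything E says about F"

def trusted(quads):
    # Keep only the statements that survive every filter.
    for s, p, o, src in quads:
        if (s, p, o) in blocked_triples:
            continue
        if src in distrusted_sources:
            continue
        if (src, p) in bad_predicate_use:
            continue
        if (src, s) in blocked_source_subject:
            continue
        yield (s, p, o, src)

print(list(trusted(quads)))   # only the src:A statement survives

Probabilistic, spam-style scoring would replace these hard sets with
per-source and per-statement scores,  but the shape is the same.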

The biggest challenge I see in generic databases is fiction.  Wikipedia
has a shocking amount of information about fiction:  this is both an
opportunity and a danger.  For one thing,  people love fiction -- a G.P.A.I.
certainly needs to be able to appreciate fiction in order to appreciate the
human experience.  On the other hand,  any system that does reasoning about
physics needs to tell the difference between

http://en.wikipedia.org/wiki/Minkowski_space

and

http://en.wikipedia.org/wiki/Minovsky_Physics#Minovsky_Physics

Also,  really it's all fiction when it comes down to it.  When a robocop
shows up at the scene of a fight,  it's going to hear contradictory stories
about who punched who first.  It's got to be able to listen to contradictory
stories and keep them apart,  and not fall apart like a computer from a bad
sci-fi movie.

---

Microtheories?  Nonmonotonic logic?

Perhaps.

You can go ahead and write standards and write papers about systems that
ignore the problems above,  but you're not going to make systems that work,
 on an engineering basis,  unless you confront them.