Re: Owning URIs (Was: Yet Another LOD cloud browser)

2009-05-20 Thread David Huynh

Kingsley Idehen wrote:

David Huynh wrote:

Sherman Monroe wrote:


To be more specific, these days a news reporter can say foobar.com
on TV and expect that to mean something to most of the audience.
That's a marvel. Something more than just the string foobar.com is
transferred. It's the expectation that if anyone in the audience
were to type foobar.com into any web browser, then they would be
seeing information served by the authority associated with some
topic or entity called foobar as socially defined. And 99% of the
audience would be seeing the same information. What's the
equivalent, or analogue, of that on the SW?


I just want to make sure the analogies are aligned properly and are 
salient. The WWW contains only nouns (no sentences). If I have an 
interest or service I want to share with others, then I post a 
webpage and /share its URL/ with you. In the SW, things are centered 
around the crowd: if I have something to say about an interest, 
service, place, person, etc., then I /reference its URL/ in my 
statements. So the SW contains sentences that can be browsed. Type 
the URL in the WWW browser, you get /the thing/ being shared. Type 
the URI in the SW browser, you get the /things people say about the 
thing/.
I didn't quite express myself clearly. If you were to take the 
previous sentence ("I didn't quite express myself clearly") and 
encode it in RDF, what would you get? It certainly is something that 
I said about the thing, the thing being vaguely what I tried to 
explain before (how do you mint a URI for that?). The point is that 
using RDF, or whatever other non-natural-language structured data 
representation, you cannot practically represent the things people 
say about the thing in the majority of real-life cases. You can only 
express a very tiny subset of what can be said in natural language. 
This affects how people conceptualize and use this medium. If I hear 
a URI on TV, would I be motivated enough to type it into some browser 
when what I get back looks like an engineering spec sheet, but 
worse--with different rows from different sources, forcing me to 
derive the big picture myself,

  urn:sdajfdadjfai324829083742983:sherman_monroe
  name: Sherman Monroe (according to foo.com)
  age: __ (according to bar.com)
  age: ___ (according to bar2.com)
  nationality: __ (according to baz.com)
  ...
rather than, say, a natural language essay that conveys a coherent 
opinion, or a funny video?


David




David,

When you see a URI (a URL is a URI to me) on the TV, or hear one 
mentioned on the TV or Radio, you now have the option to interact with 
a variety of representations associated with the aforementioned Thing 
identified by the URI. You have representational choices that didn't 
exist until now. Choice is inherently optional :-)

Beware the paradox of choices :-)

http://www.amazon.com/Paradox-Choice-Why-More-Less/dp/0060005696/ref=sr_1_2?ie=UTF8&qid=1242800143&sr=8-2


A URI by definition cannot presuppose representation. This is the 
heart of the matter.


The Semantic Web Project isn't about a new Web distinct from the 
ubiquitous World Wide Web. I think that sentiment and thinking faded a 
long time ago.


If you are used to seeing a nice-looking HTML-based Web page when you 
place URIs in a browser or click on them, then there's nothing wrong 
with that; always interact with a Web resource using the 
representation that best suits the kind of interaction at hand. Thus, 
someone else may want to know what data was contextualized by the 
nice-looking HTML representation (the data behind and around the page), 
and on that basis seek a different representation via the same URI, one 
that unveils the kind of descriptive granularity delivered by an 
Entity-Attribute-Value graph (e.g., RDF).


The revolution is about choice via negotiated representations in a 
manner that's unobtrusive to the Web in its current form.  Nobody has 
to change how they use the Web, we are just adding options to an 
evolving medium.
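The "negotiated representations" described here are ordinary HTTP content negotiation: one URI, several representations, selected by the client's Accept header. A hypothetical exchange (host, paths, and the 303 pattern are purely illustrative; real servers vary in the details):

```http
GET /id/foobar HTTP/1.1
Host: example.org
Accept: application/rdf+xml

HTTP/1.1 303 See Other
Location: http://example.org/data/foobar.rdf
```

The same GET with Accept: text/html would be redirected to an HTML page about the same thing, which is exactly the "choice" being argued for: nothing about the URI itself presupposes which representation you get.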


You've forced my hand, I need to make a movie once and for all :-)

It's not forcing, just nudging :) It'll be a win for all.

David




Re: RDF: a suitable NLP KB representation (Was: Owning URIs (Was: Yet Another LOD cloud browser))

2009-05-20 Thread Dan Brickley

On 20/5/09 07:44, David Huynh wrote:

Sherman Monroe wrote:

 That's when I was turned on
to Frame Semantics, which I immediately praised; it is by far the most
expressive and elegant knowledge representation framework for NL I
have come across (although it's been 3 or 4 years since I really
looked). In short, frame semantics sees every sentence as a scene
(like a movie scene), in which the nouns all play roles.
E.g. a boy eating is involved in a ConsumeFood scene, and the actors
are the boy, the utensil he uses, the food, the chair he sits in. So I
chose frame semantics as the KB model for the Cypher grammar parser output.

Thanks, Sherman, for your story. I had a history with Semantic Web
technologies, too, since 2001. Data on the Web is inevitable. I just
want to figure out ahead of time what it will actually be like.


This set off lightbulbs for me. I went back to RDF and saw that, lo
and behold, frames can be represented as RDF: the scene types being
classes, a scene instance (i.e. the thing representing a complete
sentence) being the subject, the property being the role, and the
object being the thing playing that role, e.g.:

EatFrame023 rdf:type mlo:EatFrame
EatFrame023 mlo:eater someschema:URIForJohn
EatFrame023 utensil someschema:JohnFavoriteSpoon
EatFrame023 mlo:seatedAt _:anonChair
EatFrame023 foaf:location someschema:JohnsLivingRoom
EatFrame023 someschema:time _:01122
EatFrame023 truthval "false"^^booleanValueType

dbpedia:Heroes(Series) rdf:type dbpedia:TVShow
dbpedia:Heroes(Series) dbpedia:showtime _:01122

_:01122 rdf:type types:TimeSpan
_:01122 types:startHour "20"^^num:PositiveInteger
_:01122 types:startMinutes "00"^^num:PositiveInteger
_:01122 types:endHour "21"^^num:PositiveInteger
_:01122 types:endMinutes "00"^^num:PositiveInteger
_:01122 types:timezone "EST"

This says: /No, John didn't eat a sandwich in a chair in his living
room using his favorite spoon, during the TV show Heroes/. Do you
still believe RDF is incapable of expressing complex NL statements?

Yes, I still believe. :)


Skeptical? Me too, here. You have to be pretty careful with negations 
expressed over representations that are shipped around in RDF triples.


 EatFrame023 rdf:type mlo:EatFrame
 EatFrame023 mlo:eater someschema:URIForJohn
 EatFrame023 utensil someschema:JohnFavoriteSpoon
 EatFrame023 mlo:seatedAt _:anonChair
 EatFrame023 foaf:location someschema:JohnsLivingRoom
 EatFrame023 someschema:time _:01122
 EatFrame023 truthval "false"^^booleanValueType

What happens when you add properties to EatFrame023, or when you remove 
properties from the description? If the above is true, can I take it, 
omit "EatFrame023 utensil someschema:JohnFavoriteSpoon", and pass it 
along to some downstream system in a context that suggests it remains a 
true description?
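Dan's worry can be made concrete. An RDF graph is just a set of triples, and any subset is also a well-formed graph; if the negation lives in one removable triple, dropping that triple silently flips the apparent meaning. A sketch (prefixes and the truthval property are assumptions, following Sherman's example above):

```turtle
@prefix ex:  <http://example.org/> .
@prefix mlo: <http://example.org/mlo/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

# Full description: the eating event is explicitly marked as not having happened.
ex:EatFrame023 a mlo:EatFrame ;
    mlo:eater ex:John ;
    ex:truthval "false"^^xsd:boolean .

# What a downstream system sees if the truthval triple is dropped in transit:
# a description indistinguishable from an assertion that John did eat.
ex:EatFrame024 a mlo:EatFrame ;
    mlo:eater ex:John .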


Detail aside: yes, event-centric descriptions are pretty seductive. In a 
world where everything is constantly changing, at least a true 
description of an event seems like something that is timelessly true. But 
they can be super-slippery to compute with, especially in a world of 
partial, fragmented descriptions. They're also hard to manage w.r.t. 
identification: given two descriptions of an event or frame, how do you 
know whether they refer to the same one or to different ones? etc.


My dabbling with event modelling came through 
http://www.ilrt.bris.ac.uk/discovery/harmony/docs/abc/abc_draft.html 
where we tried to explore the resolution of some tensions between Dublin 
Core and other metadata vocabularies by articulating everything in terms 
of events. It is very appealing, but ultimately I think the problem with 
describing everything in terms of a giant "what happened?" log is that 
it doesn't directly tell you what the state of the world is at any 
point. And that's what many information consumers (human or mechanical) 
more often need.




/What
happens if all the world's databases (e.g. Oracle, MySQL, etc.)
could be directly connected to one another in a large global network,
all sharing one massive, distributed schema, and people were able to
send queries to that network using an Esperanto for SQL?/ The ability
of RDF to represent (not sentences but) rows and columns of any
database schema imaginable means it can deliver this vision, and the
value tied to it.



And look what happened to Esperanto... After one century, 2 million
speakers, or 0.025% of the world population.

[...]

Media are notoriously hard to understand, from what I can understand. If
we were to say that television was radio but just with images, then we
would be missing something huge. Or that printing was writing but just
much faster. Or that writing was speech just recorded on paper.


Or that SPARQL is just "Esperanto for computers"? The metaphor is nice 
but ... just that. As you point out, even non-metaphorical conceptual 
shortcuts can mislead us. Sometimes metaphors can inspire us and show a 
glimpse of the bigger picture. But I don't think they so often help us 
predict what'll actually 

Re: ANN: GoodRelations - E-Commerce on the Web of Data - New Datasets and Applications

2009-05-20 Thread Martin Hepp (UniBW)

Hi Libby,



That's rather fabulous! Can you give some information about how often 
this dataset is updated, and what's its geographical and product type 
reach?

Thanks! This particular data set is a rather static collection and has a 
bias towards US products. It will soon be complemented by a more dynamic 
and European-centric second data set.


In the long run, we will have to convince professional providers of 
commodity master data (e.g. GS1) to release their data following our 
structure. Currently, this is not possible due to licensing restrictions 
(there are look-up services like GEPIR, but none of them allows 
redistribution of the data).


The upcoming second data set will be based on a community process, i.e., 
shop owners enter labels for EAN/UPCs in a Wiki.


Since EAN/UPCs are (in theory) never reused, the current data set 
should be pretty reliable, though not necessarily very complete.


I see the main benefit of the current data set in
- using it as a showcase of how small businesses can fetch product master 
data from the Semantic Web, and
- showing how data on the same commodity from multiple sources can be 
easily linked on the basis of having the same


http://purl.org/goodrelations/v1.html#hasEAN_UCC-13

property value.



Individual commodity descriptions can be retrieved as follows:

http://openean.kaufkauf.net/id/EanUpc_UPC/EAN

Example:

http://openean.kaufkauf.net/id/EanUpc_0001067792600


This seems to give me multiple product descriptions - am I 
misunderstanding?

The whole data set is currently divided into 100 RDF files (soon to be 
changed to 1000), which are served via a somewhat complicated .htaccess 
configuration.


The reason is that the large amount of instance data would otherwise 
require 1 million very small files (a few triples each), which may cause 
problems with several file systems. Also, since we want as much of our 
data as possible to stay within OWL DL (I know not everybody in the 
community shares that goal), this would cause a lot of redundancy due to 
ontology imports / header data in each single file.
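Martin doesn't show the rewrite rules, but the chunking idea can be sketched as a hypothetical .htaccess fragment that maps each item URI onto one of 100 chunk files keyed by the first two digits of the EAN/UPC (the file names and paths here are invented for illustration, not the actual kaufkauf.net configuration):

```apache
RewriteEngine On
# Serve the chunk covering this EAN/UPC's prefix range, e.g.
#   /id/EanUpc_0001067792600  ->  /chunks/ean_00.rdf
RewriteRule ^id/EanUpc_([0-9]{2})[0-9]*$ /chunks/ean_$1.rdf [L]
```

One rule per digit-width covers the whole identifier space, which is why the client gets back the requested item's triples plus whatever else happens to live in the same chunk.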


But as far as I can see, the current approach should not have major side 
effects - you get back additional triples, but the size of the files 
being served is limited. Currently, we serve 4 MB file chunks. We will 
shortly reduce that to 400 - 800 KB. That seems reasonable to me.


Best
Martin




Libby




begin:vcard
fn:Martin Hepp
n:Hepp;Martin
org:Bundeswehr University Munich;E-Business and Web Science Research Group
adr:;;Werner-Heisenberg-Web 39;Neubiberg;;D-85577;Germany
email;internet:mh...@computer.org
tel;work:+49 89 6004 4217
tel;pager:skype: mfhepp
url:http://www.heppnetz.de
version:2.1
end:vcard



Re: Owning URIs (Was: Yet Another LOD cloud browser)

2009-05-20 Thread Kingsley Idehen

David Huynh wrote:

[...]

It's not forcing, just nudging :) It'll be a win for all.

David



David,

Okay, so you've successfully nudged me :-)

Here is the first cut (others will follow as this was done in haste, but 
demonstrates the essence of the matter).


1. YouTube -- http://www.youtube.com/watch?v=CweYtyw7fnY
2.  Vimeo -- http://vimeo.com/4736569


--


Regards,

Kingsley Idehen   Weblog: http://www.openlinksw.com/blog/~kidehen
President & CEO 
OpenLink Software Web: http://www.openlinksw.com








Re: DBpedia user, who are you?

2009-05-20 Thread Davide Palmisano

Georgi Kobilarov wrote:

Hi all,
  

Dear folks,

I'm Davide Palmisano, an Asemantics[1] senior researcher and I'm very 
pleased to reply to Georgi's questions.

I'm currently doing some planning for the future roadmap of DBpedia, and
therefore gathering requirements and use cases.

So I'm wondering: 
- Who is using DBpedia today or has evaluated it in the past,
  

Currently we are using DBpedia within two distinct main scenarios.

The first is related to the EU project NoTube[2], where we are 
planning to use DBpedia as the main knowledge core to build Semantic 
Web based user profiles in order to make personalized TV content 
recommendations. This research project mainly aims to produce 
innovative algorithms for content discovery.


The second, partly covered by an NDA so I cannot be more precise, is 
an ambitious project called 99ways[3] that we will present at the next 
SemWeb09, where we are planning to make intensive use of DBpedia. For 
example, we are currently building an autocompletion service that takes 
a substring as input and returns a list of DBpedia URIs grouped by their 
most representative skos:subject. How we calculate the most 
representative skos:subject for each URI is the key point of the 
overall algorithm.
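Davide doesn't show the query, but the lookup behind such an autocompletion service might be sketched in SPARQL roughly like this (the property choices and the example substring are assumptions; ranking the "most representative" subject would happen outside the query):

```sparql
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>

SELECT ?uri ?subject
WHERE {
  ?uri rdfs:label ?label ;
       skos:subject ?subject .
  # hypothetical user input: everything whose label starts with "hero"
  FILTER regex(str(?label), "^hero", "i")
}
```

Grouping the result rows by ?subject then yields the candidate clusters from which the most representative one is picked.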
- What are you doing with it or how would you like to use it, 
  

Oops, same as for the previous question :)

- How would you like to see it evolve?
  
Grow, grow and grow! Joking aside, the first real and important 
evolution that comes to mind is partially related to the uptime and 
scalability of the system. Improving the scalability of the SPARQL 
endpoint backend would be the key task to allow the resolution of very 
frequent and complex SPARQL queries.

Especially interested in usage of DBpedia (and Linked Data) within
organizations or even commercial scenarios. 


Please let me know, either on-list or off-list (and state in case you
don't want that information to be disclosed).

Thanks,
Georgi
  

all the best,

Davide

--
Georgi Kobilarov
Freie Universität Berlin
www.georgikobilarov.com


  



--


Davide Palmisano
Head of Research and Development
Asemantics Srl - www.asemantics.com
Circonvallazione Trionfale 27
00195 ROMA Italy
skype id: davidepalmisano
mobile: +393396101142





Re: DBpedia user, who are you?

2009-05-20 Thread Toby Inkster
On Wed, 2009-05-20 at 13:04 +0200, Georgi Kobilarov wrote:

 - What are you doing with it or how would you like to use it, 

Linked railway data project - right now, I'm just querying dbpedia to
find an alternative URI for each station, so that I can provide
ovterms:similarTo links from one URI for the station to another. But in
the future I'd hope to also pull data about each station from dbpedia.

libre.fm - not using dbpedia yet, but it's being considered as a place
to find album art, artist biographies, etc.

 - How would you like to see it evolve?

Batch queries would be nice. Imagine that I want to find a particular
piece of information about multiple resources - e.g. the number of
platforms a train station has, for each station in a list of 1000
that I've got.

Right now I have two options, a single SPARQL query like:

SELECT ?s ?o
WHERE { ?s dbpedia:platforms ?o }

which might return hundreds of results for subjects I'm not interested
in; or querying each subject individually. The former is not the best
use of bandwidth, especially for properties like rdfs:label where there
are likely to be far more irrelevant results than relevant; and the
latter can't be especially efficient in terms of dbpedia's CPU time.
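A middle ground that works with plain SPARQL today is to enumerate the wanted subjects inside a FILTER, so only the listed stations come back in a single round trip (the station URIs here are illustrative guesses, and the dbpedia: prefix binding is assumed):

```sparql
PREFIX dbpedia: <http://dbpedia.org/property/>

SELECT ?s ?o
WHERE {
  ?s dbpedia:platforms ?o .
  # restrict to exactly the subjects on my list
  FILTER (?s = <http://dbpedia.org/resource/London_Waterloo_station> ||
          ?s = <http://dbpedia.org/resource/Clapham_Junction_railway_station>)
}
```

For a list of 1000 stations the FILTER gets unwieldy, which is presumably why a first-class batch interface would still be nicer.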

The other thing I'd like to see is some sort of better reasoning about
redirects. I don't know quite how, but there must be something better
than what we have now.

-- 
Toby Inkster t...@g5n.co.uk




Re: DBpedia user, who are you?

2009-05-20 Thread Hugh Glaser
We use dbpedia as part of the linked data world when computing networks and
service details of things that we know have dbpedia entries; we also use the
sameAs information.

For example, see the "Description" bit of
http://www.rkbexplorer.com/detail/?uri=http://southampton.rkbexplorer.com/id/person-02686
or
http://www.rkbexplorer.com/explorer/#display=person-{http://southampton.rkbexplorer.com/id/person-02686}

So, given the nature of this world, it is not obvious at all that it is
being used (unless you look at the raw data), because there is simply
(as with other sites) a sameAs link to a URI there.
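The sameAs usage Hugh describes boils down to lookups of the following shape against the raw data (a sketch; the person URI is the one from the links above):

```sparql
PREFIX owl: <http://www.w3.org/2002/07/owl#>

SELECT ?equiv
WHERE {
  # find every URI asserted to denote the same person,
  # including any dbpedia entry
  <http://southampton.rkbexplorer.com/id/person-02686> owl:sameAs ?equiv .
}
```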

Future?
Just want to be able to continue using it?

Best
Hugh

On 20/05/2009 12:04, Georgi Kobilarov georgi.kobila...@gmx.de wrote:

 Hi all,
 
 I'm currently doing some planning for the future roadmap of DBpedia, and
 therefore gathering requirements and use cases.
 
 So I'm wondering:
 - Who is using DBpedia today or has evaluated it in the past,
 - What are you doing with it or how would you like to use it,
 - How would you like to see it evolve?
 
 Especially interested in usage of DBpedia (and Linked Data) within
 organizations or even commercial scenarios.
 
 Please let me know, either on-list or off-list (and state in case you
 don't want that information to be disclosed).
 
 Thanks,
 Georgi
 
 --
 Georgi Kobilarov
 Freie Universität Berlin
 www.georgikobilarov.com
 
 
 




Re: DBpedia user, who are you?

2009-05-20 Thread Yves Raimond
Hello!

On Wed, May 20, 2009 at 12:04 PM, Georgi Kobilarov
georgi.kobila...@gmx.de wrote:
 Hi all,

 I'm currently doing some planning for the future roadmap of DBpedia, and
 therefore gathering requirements and use cases.

 So I'm wondering:
 - Who is using DBpedia today or has evaluated it in the past,
 - What are you doing with it or how would you like to use it,
 - How would you like to see it evolve?

 Especially interested in usage of DBpedia (and Linked Data) within
 organizations or even commercial scenarios.

 Please let me know, either on-list or off-list (and state in case you
 don't want that information to be disclosed).


Glad to contribute to that :-) We are using DBpedia in quite a lot of
services at the BBC, as detailed in our ESWC paper [1]. I am also
using it in almost all the services hosted at dbtune.org.

Wrt. future plans, here are a couple of things that would be great
to have in future versions of dbpedia:
1) Query by example. You submit a bunch of DBpedia resources, and it
returns a SPARQL query selecting them and resources with similar
properties.
2) Live update from Wikipedia (but it seems quite close to being real, now)
3) An interface for submitting out-going links, instead of having to
ping the dbpedia list each time

Cheers,
y

[1] http://www.georgikobilarov.com/publications/2009/eswc2009-bbc-dbpedia.pdf

 Thanks,
 Georgi

 --
 Georgi Kobilarov
 Freie Universität Berlin
 www.georgikobilarov.com






Re: ANN: GoodRelations - E-Commerce on the Web of Data - New Datasets and Applications

2009-05-20 Thread Martin Hepp (UniBW)

Hi Steve,
as I replied to Libby (but did not include all mailing lists): The whole 
data set is served from currently 100 smaller files, which will be 
broken down to 1000 files shortly. For various reasons however, we don't 
want to serve one file per element, because that will create a huge 
overhead - the individual data sets are rather small (a few triples per 
item). Having one million micro-files is hard to manage. Also, since we 
want to stay within OWL DL, we would have to duplicate proper ontology 
header meta-data a million times.


Thus, we use a (rather large) set of rules in the .htaccess file to 
serve that part of the data set that contains the element you are 
actually looking for. You will receive a few more triples than you need, 
but simply discard those ;-)


Martin

Steve Harris wrote:

Very cool resource.

On 20 May 2009, at 10:18, Libby Miller wrote:

Individual commodity descriptions can be retrieved as follows:

http://openean.kaufkauf.net/id/EanUpc_UPC/EAN

Example:

http://openean.kaufkauf.net/id/EanUpc_0001067792600


This seems to give me multiple product descriptions - am I 
misunderstanding?


Yeah, looks like it returns the entire document that the particular 
EAN appears in.


Not very linked data friendly (you'll end up with a large proportion 
of repeated triples in identical graphs, with different graph URIs), 
but certainly better than nothing.


- Steve



--
--
martin hepp
e-business & web science research group
universitaet der bundeswehr muenchen

e-mail: mh...@computer.org
phone:  +49-(0)89-6004-4217
fax:+49-(0)89-6004-4620
www:http://www.unibw.de/ebusiness/ (group)
http://www.heppnetz.de/ (personal)
skype:  mfhepp 



Check out the GoodRelations vocabulary for E-Commerce on the Web of Data!


Webcast explaining the Web of Data for E-Commerce:
-
http://www.heppnetz.de/projects/goodrelations/webcast/

Tool for registering your business:
--
http://www.ebusiness-unibw.org/tools/goodrelations-annotator/

Overview article on Semantic Universe:
-
http://www.semanticuniverse.com/articles-semantic-web-based-e-commerce-webmasters-get-ready.html

Project page and resources for developers:
-
http://purl.org/goodrelations/

Upcoming events:
---
Full-day tutorial at ESWC 2009: The Web of Data for E-Commerce in One Day: A 
Hands-on Introduction to the GoodRelations Ontology, RDFa, and Yahoo! 
SearchMonkey

http://www.eswc2009.org/program-menu/tutorials/70

Talk at the Semantic Technology Conference 2009: Semantic Web-based E-Commerce: 
The GoodRelations Ontology

http://www.semantic-conference.com/session/1881/




Re: ANN: GoodRelations - E-Commerce on the Web of Data - New Datasets and Applications

2009-05-20 Thread Damian Steer
Steve Harris wrote:
 Very cool resource.
 
 On 20 May 2009, at 10:18, Libby Miller wrote:
 Individual commodity descriptions can be retrieved as follows:

 http://openean.kaufkauf.net/id/EanUpc_UPC/EAN

 Example:

 http://openean.kaufkauf.net/id/EanUpc_0001067792600

 This seems to give me multiple product descriptions - am I
 misunderstanding?
 
 Yeah, looks like it returns the entire document that the particular EAN
 appears in.
 
 Not very linked data friendly (you'll end up with a large proportion of
 repeated triples in identical graphs, with different graph URIs), but
 certainly better than nothing.

<thought entertained="minimal">
If the Location header was set in the response I guess that might help.
</thought>

Damian



Re: ANN: GoodRelations - E-Commerce on the Web of Data - New Datasets and Applications

2009-05-20 Thread Yves Raimond
Hello!

 Not very linked data friendly (you'll end up with a large proportion of
 repeated triples in identical graphs, with different graph URIs), but
 certainly better than nothing.

Just jumping on that - is that an issue? I would think not, as you may
want to repeat information across different views. For example
(slightly biased :-) ), you may want to repeat broadcast information
in a schedule view, instead of asking the user agent to manually fetch
a hundred URIs.

Cheers,
y



Re: ANN: GoodRelations - E-Commerce on the Web of Data - New Datasets and Applications

2009-05-20 Thread Steve Harris
Alternatively you could put that data in a RDF store, and just serve  
up the fragments using a wrapped CONSTRUCT query.


That's what we do for qdos.com, eg
  http://qdos.com/user/Steve-Harris/18b6f60b41e05aaa418565ebfe901d6b/rdfxml
and it's pretty efficient, more efficient than storing 1000 separate  
files as XML.


The downside is that the RDF is not very pretty to look at, but it  
could be with a better RDF/XML serialiser.
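The wrapped CONSTRUCT Steve mentions can be sketched as a per-resource query template; here it is instantiated with the example EAN URI from earlier in the thread (the template itself is an assumption, not qdos.com's actual code):

```sparql
CONSTRUCT {
  <http://openean.kaufkauf.net/id/EanUpc_0001067792600> ?p ?o .
}
WHERE {
  <http://openean.kaufkauf.net/id/EanUpc_0001067792600> ?p ?o .
}
```

The server substitutes the requested URI into both patterns and serialises the resulting graph, so each response contains only that one item's triples rather than a whole multi-item chunk.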


- Steve

On 20 May 2009, at 14:59, Martin Hepp (UniBW) wrote:


[...]


--
Steve Harris
Garlik Limited, 2 Sheen Road, Richmond, TW9 1AE, UK
+44(0)20 8973 2465  http://www.garlik.com/
Registered in England and Wales 535 7233 VAT # 849 0517 11
Registered office: Thames House, Portsmouth Road, Esher, Surrey, KT10  
9AD





Re: ANN: GoodRelations - E-Commerce on the Web of Data - New Datasets and Applications

2009-05-20 Thread Steve Harris

On 20 May 2009, at 15:48, Yves Raimond wrote:


Hello!

Not very linked data friendly (you'll end up with a large proportion of
repeated triples in identical graphs, with different graph URIs), but
certainly better than nothing.


Just jumping on that - is that an issue? I would think not, as you may
want to repeat information across different views. For example
(slightly biased :-) ), you may want to repeat broadcast information
in a schedule view, instead of asking the user agent to manually fetch
a hundred URIs.


It's not ideal if you're crawling that data - you'd end up with many
thousands of copies of an (almost?) identical document, but with no
obvious clue that they're the same.


I didn't look at what was going on in the HTTP, but using the right  
40x forwards it could make it clear to the client what's happening.


- Steve

--
Steve Harris
Garlik Limited, 2 Sheen Road, Richmond, TW9 1AE, UK
+44(0)20 8973 2465  http://www.garlik.com/
Registered in England and Wales 535 7233 VAT # 849 0517 11
Registered office: Thames House, Portsmouth Road, Esher, Surrey, KT10  
9AD





Re: DBpedia user, who are you?

2009-05-20 Thread Toby Inkster
On Wed, 2009-05-20 at 14:57 +0100, Yves Raimond wrote:
 3) An interface for submitting out-going links, instead of having to
 ping the dbpedia list each time

Ooh!! This can be done?!

Ping!

http://ontologi.es/rail/links_dbpedia.ttl

-- 
Toby Inkster t...@g5n.co.uk




Re: DBpedia user, who are you?

2009-05-20 Thread Hugh Glaser
Nice work.
However :-)
It should be
@prefix : <http://ontologi.es/rail/stations/gb/> .
not
@prefix : <http://ontologi.es/rail/station/gb/> .

Cheers
Hugh

On 20/05/2009 16:15, Toby Inkster t...@g5n.co.uk wrote:

On Wed, 2009-05-20 at 14:57 +0100, Yves Raimond wrote:
 3) An interface for submitting out-going links, instead of having to
 ping the dbpedia list each time

Ooh!! This can be done?!

Ping!

http://ontologi.es/rail/links_dbpedia.ttl

--
Toby Inkster t...@g5n.co.uk






Re: [semanticweb] ANN: GoodRelations Service Update 2009-05-05 - Please refresh your caches!

2009-05-20 Thread Azamat

Hi, Martin,
I find your work on business ontologies to have some useful commercial 
prospects. I suggest posting your GoodRelations ontology under the heading of 
Ontology Standards and Industry Standards, a special committee of the emerging 
non-profit international research organization: 
International Body for Ontology and Semantics Standards (IBOSS), 
http://www.standardontology.org (in the stage of development). 
Any researchers, developers, institutions, or legal entities in possession of 
high-quality ontology content or semantic applications are also welcome.
Azamat Abdoullaev
IBOSS Group 
  - Original Message - 
  From: Martin Hepp (UniBW) 
  To: semantic-web at W3C ; semantic...@yahoogroups.com ; public-lod@w3.org 
  Sent: Wednesday, May 20, 2009 11:36 AM
  Subject: [semanticweb] ANN: GoodRelations Service Update 2009-05-05 - Please 
refresh your caches!





  Dear all:
  We just released a service update of the GoodRelations ontology for
  e-commerce.

  The ontology is available at

  http://purl.org/goodrelations/v1

  If you want to explicitly fetch the OWL file or the HTML documentation,
  you may also use

  http://purl.org/goodrelations/v1.owl

  or

  http://purl.org/goodrelations/v1.html

  Please replace all local copies of GoodRelations by this new file.

  The service update is designed to be fully backwards-compatible. Only a
  few changes may require small modifications of current applications or
  data sets. Those few potentially incompatible changes are as follows:

  - gr:description is now deprecated; we suggest using rdfs:comment instead.
  - gr:isListPrice is now deprecated and replaced by the more powerful
  gr:priceType property.

  The remaining changes are vocabulary extensions or improvements that
  help increase the compatibility with Semantic Web applications. For
  example, we
  - changed cardinality recommendations for opening hours and business
  functions, which simplifies the usage of GoodRelations in RDFa;
  - added PayPal as a payment method, and
  - added labels to all elements.

  A complete change log is at

  http://tinyurl.com/q8yln9

  A big thank you for the many valuable suggestions for improvement!

  Best

  Martin

  --
  martin hepp
  e-business & web science research group
  universitaet der bundeswehr muenchen

  e-mail: mh...@computer.org
  phone: +49-(0)89-6004-4217
  fax: +49-(0)89-6004-4620
  www: http://www.unibw.de/ebusiness/ (group)
  http://www.heppnetz.de/ (personal)
  skype: mfhepp

  Check out the GoodRelations vocabulary for E-Commerce on the Web of Data!
  

  Webcast explaining the Web of Data for E-Commerce:
  -
  http://www.heppnetz.de/projects/goodrelations/webcast/

  Tool for registering your business:
  --
  http://www.ebusiness-unibw.org/tools/goodrelations-annotator/

  Overview article on Semantic Universe:
  -
  
http://www.semanticuniverse.com/articles-semantic-web-based-e-commerce-webmasters-get-ready.html

  Project page and resources for developers:
  -
  http://purl.org/goodrelations/

  Upcoming events:
  ---
  Full-day tutorial at ESWC 2009: The Web of Data for E-Commerce in One
  Day: A Hands-on Introduction to the GoodRelations Ontology, RDFa, and
  Yahoo! SearchMonkey

  http://www.eswc2009.org/program-menu/tutorials/70

  Talk at the Semantic Technology Conference 2009: Semantic Web-based
  E-Commerce: The GoodRelations Ontology

  http://www.semantic-conference.com/session/1881/


Re: ANN: GoodRelations - E-Commerce on the Web of Data - New Datasets and Applications

2009-05-20 Thread Kingsley Idehen

Steve Harris wrote:
Alternatively you could put that data in a RDF store, and just serve 
up the fragments using a wrapped CONSTRUCT query.


That's what we do for qdos.com, eg
  
http://qdos.com/user/Steve-Harris/18b6f60b41e05aaa418565ebfe901d6b/rdfxml
and it's pretty efficient, more efficient than storing 1000 separate 
files as XML.


The downside is that the RDF is not very pretty to look at, but it 
could be with a better RDF/XML serialiser.


- Steve
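[The "wrapped CONSTRUCT" Steve describes is essentially a parameterised
query template: the requested URI is substituted in and the resulting graph
is serialised back to the client. A minimal sketch, using an EAN URI from
this thread as the placeholder - not the actual qdos.com implementation:

CONSTRUCT { <http://openean.kaufkauf.net/id/EanUpc_0001067792600> ?p ?o }
WHERE     { <http://openean.kaufkauf.net/id/EanUpc_0001067792600> ?p ?o }

A DESCRIBE over the same URI achieves a similar effect, with the exact set
of triples returned left to the endpoint.]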

Steve,

The data is already in an RDF Store. Of course, you can add yours etc. :-)

Martin should have sent links like:

1. http://tr.im/lThV -- the whole ontology
2. http://tr.im/lTiC -- sampling of Products or Service Model instance data


Kingsley


On 20 May 2009, at 14:59, Martin Hepp (UniBW) wrote:


Hi Steve,
as I replied to Libby (but did not include all mailing lists): The 
whole data set is currently served from 100 smaller files, which will 
be broken down into 1000 files shortly. For various reasons, however, we 
don't want to serve one file per element, because that will create a 
huge overhead - the individual data sets are rather small (a few 
triples per item). Having one million micro-files is hard to manage. 
Also, since we want to stay within OWL DL, we would have to duplicate 
proper ontology header meta-data a million times.


Thus, we use a (rather large) set of rules in the .htaccess file to 
serve that part of the data set that contains the element you are 
actually looking for. You will receive a few more triples than you 
need, but simply discard those ;-)


Martin
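[For illustration, the .htaccess mechanism Martin describes might look
roughly like the following; the chunk file names and ID ranges here are
invented, not the actual kaufkauf.net rules:

RewriteEngine On
# Route each block of EAN/UPC identifiers to the data file containing it,
# so that one file serves many related IDs.
RewriteRule ^id/EanUpc_000([0-9]+)$ /data/part-000.owl [L]
RewriteRule ^id/EanUpc_001([0-9]+)$ /data/part-001.owl [L]

Each served file can then carry the OWL DL ontology header once, rather
than once per item.]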

Steve Harris wrote:

Very cool resource.

On 20 May 2009, at 10:18, Libby Miller wrote:

Individual commodity descriptions can be retrieved as follows:

http://openean.kaufkauf.net/id/EanUpc_<UPC/EAN>

Example:

http://openean.kaufkauf.net/id/EanUpc_0001067792600


This seems to give me multiple product descriptions - am I 
misunderstanding?


Yeah, looks like it returns the entire document that the particular 
EAN appears in.


Not very linked data friendly (you'll end up with a large proportion 
of repeated triples in identical graphs, with different graph URIS), 
but certainly better than nothing.


- Steve








--


Regards,

Kingsley Idehen   Weblog: http://www.openlinksw.com/blog/~kidehen
President & CEO 
OpenLink Software Web: http://www.openlinksw.com








Re: URI lifecycle (Was: Owning URIs)

2009-05-20 Thread Kingsley Idehen

David Booth wrote:

Hi John,

Re: The URI Lifecycle in Semantic Web Architecture:
http://dbooth.org/2009/lifecycle/

On Tue, 2009-05-19 at 10:46 -0700, John Graybeal wrote:
  
*Very* interesting paper, for the content and for the links.   
Addresses many a topic I've been trying to sort out.


If I may ask for a clarification on a few key points at the beginning:

1) At what point does 'minting' occur?  (a) When I think of the URI,  
(b) when I first write it down as a string in some file, (c) when I  
'serve' it in some formal way, (d) when I make a statement that  
references it, or (e) ...? You define it as 'establishing the  
association between the URI and the resource it denotes', but how does  
the process of establishing that association occur, exactly? It all  
seems a little imprecise with respect to real-world resources.



The simplest answer is that the URI is minted when the URI owner
publishes its URI declaration, since it is the URI declaration that
establishes the association between the URI and the resource it denotes.

  
2) Am I correct in thinking the URI owner is just the person who has  
the authority to create a URI (and optionally provide an initial set  
of statements about it)? In the SW, the idea of someone having the  
authority to link their URI to the actual resource -- Earth's moon  
for example -- is confusing, since many people will mint URIs meant to  
refer to the Earth's moon; I think they all have that authority, in  
some sense. (AWWW focused more on the actual URI and information  
resources, where there is an implicit association, often through  
dereferencing.)



In simple terms, the URI owner is the owner of the domain from which the
URI is allocated, or the owner's delegate.  For example, if John owns
domain foo.example.com then John is the owner of all URIs allocated
within that domain, such as http://foo.example.com/bar/whiz/bang .
However, John could delegate minting authority to all or part of his URI
space.  For example, John could delegate minting authority for all URIs
matching http://foo.example.com/lucinda/* to Lucinda.  
  

David,

What about describing this in terms of: Data Space or URI Space ownership?

You are describing functionality that should be integral to any Data 
Space or URI Space platform that plugs into the Internet ?


Kingsley

The notion of URI ownership is defined in the AWWW section 2.2.2.1:
http://www.w3.org/TR/webarch/#uri-ownership

  
3) Can you define a core assertion?  If I can improve my assertions  
to clarify that I meant the Earth moon we all know about, as opposed  
to some other 'Earth moon', is that not allowed per R1? How do we know  
when an improvement makes the original concept more useful, as opposed  
to erroneous for some users? (Note your suggestion later that it's OK  
when expectations are properly set, a la SKOS.)



The core assertions are merely those that are provided in the URI
declaration and serve to define the association between the URI and a
resource.  They do so by constraining the permissible interpretations
for that URI.  (An interpretation in RDF semantics lingo maps URIs to
resources.)  In the end the question of whether a change in a URI
declaration will be helpful or harmful to your users is a judgement
call.  In theory, any change to the core assertions has the potential of
invalidating some user's code.  However, in practice some changes are
far less likely to cause problems than others, because they don't affect
the set of permissible interpretations -- at least not in a way that
matters.  For example, in the moon example at
http://dbooth.org/2007/uri-decl/#example
changing the rdfs:seeAlso assertion is unlikely to break users' code
because it doesn't really constrain the resource identity of the URI
http://dbooth.org/2007/moon/ .  


One can think of the core assertions as constraining the set of
permissible interpretations for that URI.  There will always be some
ambiguity about what resource the URI denotes -- this is inescapable --
but the core assertions clearly delineate that ambiguity.  This is
further explained in a companion paper, Denotation as a Two-Step
Mapping in Semantic Web Architecture:
http://dbooth.org/2009/denotation/

  
The paper is a nice encapsulation of many of the idiosyncrasies of the  
current state of the social practice. Thanks



You're welcome.  And thanks very much for your comments!


  


--


Regards,

Kingsley Idehen   Weblog: http://www.openlinksw.com/blog/~kidehen
President & CEO 
OpenLink Software Web: http://www.openlinksw.com








Re: DBpedia user, who are you?

2009-05-20 Thread Kingsley Idehen

Yves Raimond wrote:

Hello!

On Wed, May 20, 2009 at 12:04 PM, Georgi Kobilarov
georgi.kobila...@gmx.de wrote:
  

Hi all,

I'm currently doing some planning for the future roadmap of DBpedia, and
therefore gathering requirements and use cases.

So I'm wondering:
- Who is using DBpedia today or has evaluated it in the past,
- What are you doing with it or how would you like to use it,
- How would you like to see it evolve?

Especially interested in usage of DBpedia (and Linked Data) within
organizations or even commercial scenarios.

Please let me know, either on-list or off-list (and state in case you
don't want that information to be disclosed).




Glad to contribute to that :-) We are using DBpedia in quite a lot of
services at the BBC, as detailed in our ESWC paper [1]. I am also
using it in almost all the services hosted at dbtune.org.

Wrt. future plans, here are a couple of things that would be very
great to have in future versions of dbpedia:
1) Query by example. You submit a bunch of DBpedia resources, and it
returns a SPARQL query selecting them and resources with similar
properties.
2) Live update from Wikipedia (but it seems quite close to being real, now)
3) An interface for submitting out-going links, instead of having to
ping the dbpedia list each time

Cheers,
y

[1] http://www.georgikobilarov.com/publications/2009/eswc2009-bbc-dbpedia.pdf
  
Re. pinger services for SPARUL-type effects, the availability of a 
FOAF+SSL-based DBpedia SPARQL endpoint will make this feasible. And for 
those that don't have WebIDs (URIs), an OAuth-based SPARQL endpoint will do.




Kingsley
  

Thanks,
Georgi

--
Georgi Kobilarov
Freie Universität Berlin
www.georgikobilarov.com







  



--


Regards,

Kingsley Idehen   Weblog: http://www.openlinksw.com/blog/~kidehen
President & CEO 
OpenLink Software Web: http://www.openlinksw.com








Re: ANN: GoodRelations - E-Commerce on the Web of Data - New Datasets and Applications

2009-05-20 Thread Kingsley Idehen

Steve Harris wrote:


On 20 May 2009, at 16:38, Kingsley Idehen wrote:

Steve Harris wrote:
Alternatively you could put that data in a RDF store, and just serve 
up the fragments using a wrapped CONSTRUCT query.


That's what we do for qdos.com, eg
 http://qdos.com/user/Steve-Harris/18b6f60b41e05aaa418565ebfe901d6b/rdfxml 

and it's pretty efficient, more efficient than storing 1000 separate 
files as XML.


The downside is that the RDF is not very pretty to look at, but it 
could be with a better RDF/XML serialiser.


- Steve

Steve,

The data is already in an RDF Store. Of course, you can add yours 
etc. :-)


Martin should have sent links like:

1. http://tr.im/lThV -- the whole ontology
2. http://tr.im/lTiC -- sampling of Products or Service Model 
instance data


Cool, how do I, e.g., get the immediate data around 
http://openean.kaufkauf.net/id/EanUpc_0001067792600 out? It's not 
obvious how you request it.


Do I just issue SPARQL to somewhere? The SPARQL link at the bottom 
goes to a page about SPARQL.

I poked around for a bit and tried
http://lod.openlinksw.com/sparql?query=DESCRIBE+%3Chttp%3A%2F%2Fopenean.kaufkauf.net%2Fid%2FEanUpc_0001067792600%3Eoutput=n3 


but it gives an error, so I'm not sure what I did wrong.

Steve,

Still hot staging a few things. Nothing wrong with your command, we just 
need to complete some data re-organization work on this particular instance.


Check back in a day or so :-)

Also, we are adding this Linked Commerce Data to the LOD cloud, so do 
expect a published dump of the static data from this emerging space. 
Re. new data about new business entities, check PingTheSemanticWeb.


Kingsley


- Steve


On 20 May 2009, at 14:59, Martin Hepp (UniBW) wrote:


Hi Steve,
as I replied to Libby (but did not include all mailing lists): The 
whole data set is currently served from 100 smaller files, which 
will be broken down into 1000 files shortly. For various reasons 
however, we don't want to serve one file per element, because that 
will create a huge overhead - the individual data sets are rather 
small (a few triples per item). Having one million micro-files is 
hard to manage. Also, since we want to stay within OWL DL, we would 
have to duplicate proper ontology header meta-data a million times.


Thus, we use a (rather large) set of rules in the .htaccess file to 
serve that part of the data set that contains the element you are 
actually looking for. You will receive a few more triples than you 
need, but simply discard those ;-)


Martin

Steve Harris wrote:

Very cool resource.

On 20 May 2009, at 10:18, Libby Miller wrote:

Individual commodity descriptions can be retrieved as follows:

http://openean.kaufkauf.net/id/EanUpc_<UPC/EAN>

Example:

http://openean.kaufkauf.net/id/EanUpc_0001067792600


This seems to give me multiple product descriptions - am I 
misunderstanding?


Yeah, looks like it returns the entire document that the 
particular EAN appears in.


Not very linked data friendly (you'll end up with a large 
proportion of repeated triples in identical graphs, with different 
graph URIs), but certainly better than nothing.


- Steve

















--


Regards,

Kingsley Idehen   Weblog: http://www.openlinksw.com/blog/~kidehen
President  CEO 
OpenLink Software Web: http://www.openlinksw.com








VoCamp Sunnyvale 2009

2009-05-20 Thread Peter Mika

(Apologies if you receive multiple copies of this message)

===

VoCamp Sunnyvale 2009

June 18-19, 2009, Sunnyvale, California

http://vocamp.org/wiki/VoCampSunnyvale2009

===

VoCamps are informal events for people interested in solving the 
practical problems of the Semantic Web in a social setting, with a 
particular focus on topics related to vocabularies and semantic 
interoperability in general.


Following on the success of the previous four events, the next VoCamp 
will take place in the week of the Semantic Technology Conference 
(SemTech) at Yahoo! in Sunnyvale, California.


VoCamps are free for participants. Sign up by going to the website at 
vocamp.org. Attendance is limited to 30 people at the moment, so be quick.


Co-organizers:

Peter Mika (Yahoo!)
Melinda Chung (Yahoo!)
Evan Goer (Yahoo!)

Links:

http://vocamp.org/wiki/VoCampSunnyvale2009
http://semtech2009.com/session/2134/



Re: DBpedia user, who are you?

2009-05-20 Thread Juan Sequeda
At Turn2Live.com we will start (soon) to consume music data about artists
from LOD and obviously from DBpedia. We are at a very early stage. Hope to
have demos soon!

Juan Sequeda, Ph.D Student
Dept. of Computer Sciences
The University of Texas at Austin
www.juansequeda.com
www.semanticwebaustin.org


On Wed, May 20, 2009 at 1:04 PM, Georgi Kobilarov
georgi.kobila...@gmx.de wrote:

 Hi all,

 I'm currently doing some planning for the future roadmap of DBpedia, and
 therefore gathering requirements and use cases.

 So I'm wondering:
 - Who is using DBpedia today or has evaluated it in the past,
 - What are you doing with it or how would you like to use it,
 - How would you like to see it evolve?

 Especially interested in usage of DBpedia (and Linked Data) within
 organizations or even commercial scenarios.

 Please let me know, either on-list or off-list (and state in case you
 don't want that information to be disclosed).

 Thanks,
 Georgi

 --
 Georgi Kobilarov
 Freie Universität Berlin
 www.georgikobilarov.com





Re: URI lifecycle (Was: Owning URIs)

2009-05-20 Thread David Booth
Hi John,

Re: The URI Lifecycle in Semantic Web Architecture:
http://dbooth.org/2009/lifecycle/

On Tue, 2009-05-19 at 10:46 -0700, John Graybeal wrote:
 *Very* interesting paper, for the content and for the links.   
 Addresses many a topic I've been trying to sort out.
 
 If I may ask for a clarification on a few key points at the beginning:
 
 1) At what point does 'minting' occur?  (a) When I think of the URI,  
 (b) when I first write it down as a string in some file, (c) when I  
 'serve' it in some formal way, (d) when I make a statement that  
 references it, or (e) ...? You define it as 'establishing the  
 association between the URI and the resource it denotes', but how does  
 the process of establishing that association occur, exactly? It all  
 seems a little imprecise with respect to real-world resources.

The simplest answer is that the URI is minted when the URI owner
publishes its URI declaration, since it is the URI declaration that
establishes the association between the URI and the resource it denotes.

 
 2) Am I correct in thinking the URI owner is just the person who has  
 the authority to create a URI (and optionally provide an initial set  
 of statements about it)? In the SW, the idea of someone having the  
 authority to link their URI to the actual resource -- Earth's moon  
 for example -- is confusing, since many people will mint URIs meant to  
 refer to the Earth's moon; I think they all have that authority, in  
 some sense. (AWWW focused more on the actual URI and information  
 resources, where there is an implicit association, often through  
 dereferencing.)

In simple terms, the URI owner is the owner of the domain from which the
URI is allocated, or the owner's delegate.  For example, if John owns
domain foo.example.com then John is the owner of all URIs allocated
within that domain, such as http://foo.example.com/bar/whiz/bang .
However, John could delegate minting authority to all or part of his URI
space.  For example, John could delegate minting authority for all URIs
matching http://foo.example.com/lucinda/* to Lucinda.  

The notion of URI ownership is defined in the AWWW section 2.2.2.1:
http://www.w3.org/TR/webarch/#uri-ownership

 
 3) Can you define a core assertion?  If I can improve my assertions  
 to clarify that I meant the Earth moon we all know about, as opposed  
 to some other 'Earth moon', is that not allowed per R1? How do we know  
 when an improvement makes the original concept more useful, as opposed  
 to erroneous for some users? (Note your suggestion later that it's OK  
 when expectations are properly set, a la SKOS.)

The core assertions are merely those that are provided in the URI
declaration and serve to define the association between the URI and a
resource.  They do so by constraining the permissible interpretations
for that URI.  (An interpretation in RDF semantics lingo maps URIs to
resources.)  In the end the question of whether a change in a URI
declaration will be helpful or harmful to your users is a judgement
call.  In theory, any change to the core assertions has the potential of
invalidating some user's code.  However, in practice some changes are
far less likely to cause problems than others, because they don't affect
the set of permissible interpretations -- at least not in a way that
matters.  For example, in the moon example at
http://dbooth.org/2007/uri-decl/#example
changing the rdfs:seeAlso assertion is unlikely to break users' code
because it doesn't really constrain the resource identity of the URI
http://dbooth.org/2007/moon/ .  

One can think of the core assertions as constraining the set of
permissible interpretations for that URI.  There will always be some
ambiguity about what resource the URI denotes -- this is inescapable --
but the core assertions clearly delineate that ambiguity.  This is
further explained in a companion paper, Denotation as a Two-Step
Mapping in Semantic Web Architecture:
http://dbooth.org/2009/denotation/
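
[For illustration, a URI declaration along the lines of the moon example
might look like this in Turtle; the ex: properties are invented here, and
only rdfs:seeAlso is taken from the cited example:

@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/terms#> .

<http://dbooth.org/2007/moon/>
    a ex:Moon ;                                    # core assertion
    ex:orbits <http://example.org/earth> ;         # core assertion
    rdfs:seeAlso <http://example.org/moon-data> .  # ancillary; changing it
                                                   # won't break users' code

The core assertions constrain which resource the URI can denote; the
ancillary ones merely point to further information.]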

 
 The paper is a nice encapsulation of many of the idiosyncrasies of the  
 current state of the social practice. Thanks

You're welcome.  And thanks very much for your comments!


-- 
David Booth, Ph.D.
Cleveland Clinic (contractor)

Opinions expressed herein are those of the author and do not necessarily
reflect those of Cleveland Clinic.




Re: Dereferencing a URI vs querying a SPARQL endpoint

2009-05-20 Thread Hugh Glaser
I think the semantic sitemap may already have what you want.
We use slicing=subject-object for our datasets; see, for example,
http://acm.rkbexplorer.com/sitemap.xml.

http://sw.deri.org/2007/07/sitemapextension/
Says:

The sc:linkedDataPrefix and sc:sparqlEndpointLocation tags can have an 
optional slicing attribute that takes a value from the list of slicing methods 
below.

slicing=subject
The description of a resource X includes the triples whose subject is X.

slicing=subject-object
The description of a resource X includes the triples whose subject or object is 
X.

slicing=cbd
The description of a resource X includes its Concise Bounded Description.

slicing=scbd
The description of a resource X includes its Symmetric Concise Bounded 
Description.

slicing=msgs
The description of a resource X includes all the Minimal Self-Contained Graphs 
involving X.
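
[Concretely, a sitemap.xml fragment using this extension might look like
the following; the /id/ and /sparql/ paths are assumptions for
illustration - see the rkbexplorer sitemap above for a real instance:

<sc:dataset>
  <sc:datasetLabel>ACM dataset</sc:datasetLabel>
  <sc:linkedDataPrefix slicing="subject-object">http://acm.rkbexplorer.com/id/</sc:linkedDataPrefix>
  <sc:sparqlEndpointLocation slicing="subject-object">http://acm.rkbexplorer.com/sparql/</sc:sparqlEndpointLocation>
</sc:dataset>

The slicing attribute tells a client which query against the endpoint
reproduces what dereferencing a URI under the prefix returns.]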



On 20/05/2009 17:59, Daniel Schwabe dschw...@inf.puc-rio.br wrote:

Dear all,

while designing Explorator [1], where one can explore one or more triple 
repositories that provide SPARQL enpoints (as well as direct URI 
dereferencing), I found the following question, to which I don't really know 
the answer...

For the sake of this discussion, I'm considering only such sites, i.e., those 
that provide SPARQL endpoints.
For a given URI r, is there any relation between the triples I get when I 
dereference it directly, as opposed to querying the SPARQL endpoint for all 
triples (r, ?p, ?o)? Should there be (I could also get (?s, ?p, r), for 
example)?
For sites such as dbpedia I believe that I get the same set of triples. But I 
believe this is not a general behavior.
Should there be a good practice about this for LoD sites that provide SPARQL 
endpoints?
At the very least, perhaps this could also be described in the semantic 
sitemap.xml, no?

Cheers
D

[1] http://www.tecweb.inf.puc-rio.br/explorator



Re: Dereferencing a URI vs querying a SPARQL endpoint

2009-05-20 Thread Hugh Glaser
Sorry, I'll try harder :-)
I understand that what you are asking is something like this.
For some sites (including rkbexplorer), when you resolve a URI, it constructs a 
SPARQL query and returns the result of the query.
This might be all the triples with the subject, or object, or both, or 
something more complex that takes into account b-nodes.
So it might be nice if somewhere, such as the sitemap.xml, this query was 
documented.
I think this is exactly what the slicing is trying to do, but instead of 
publishing the actual query, it names the common (obvious) ones to use.
So a slicing of subject would tell you that you could do the query you say 
below on the appropriate SPARQL endpoint and get exactly the same thing you 
would get by resolving the URI.
So I still think that answers your question, but I'm sure you can tell me if it 
doesn't :-)
And others will say if I am wrong.
Best
Hugh
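
[In other words, for a site declaring slicing=subject, dereferencing a
resource r should yield exactly the graph this query returns from its
endpoint; <http://example.org/r> is a placeholder URI:

CONSTRUCT { <http://example.org/r> ?p ?o }
WHERE     { <http://example.org/r> ?p ?o }

# slicing="subject-object" additionally includes triples with r as object:
# CONSTRUCT { ?s ?p ?o }
# WHERE { ?s ?p ?o .
#         FILTER(?s = <http://example.org/r> || ?o = <http://example.org/r>) }
]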

On 20/05/2009 20:47, Daniel Schwabe dschw...@inf.puc-rio.br wrote:

Dan and Hugh,
let me be more specific.
I'm not really advocating that only *one* direction should be returned
(or even both directions).
I am asking a more general question (to which I don't think Hugh really
gave an answer either) which is, is there any query that returns the
same triples as the ones you get when you dereference a URI, in a site
that also provides a SPARQL endpoint?
In the affirmative case, I am suggesting that the corresponding query be
documented in the sitemap.xml document.
Does this make sense?

Cheers
D

On 20/05/2009 14:15, Dan Brickley wrote:
 On 20/5/09 18:59, Daniel Schwabe wrote:
 Dear all,

 while designing Explorator [1], where one can explore one or more triple
 repositories that provide SPARQL enpoints (as well as direct URI
 dereferencing), I found the following question, to which I don't really
 know the answer...

 For the sake of this discussion, I'm considering only such sites, i.e.,
 those that provide SPARQL endpoints.
 For a given URI r, is there any relation between the triples I get when
 I dereference it directly, as opposed to querying the SPARQL endpoint for
 all triples (r, ?p, ?o)? Should there be (I could also get (?s, ?p, r),
 for example)?
 For sites such as dbpedia I believe that I get the same set of triples.
 But I believe this is not a general behavior.
 Should there be a good practice about this for LoD sites that provide
 SPARQL endpoints?
 At the very least, perhaps this could also be described in the semantic
 sitemap.xml, no?

 In general, I'd be wary of doing anything that assumes the direction a
 property is named in is important.

 Taking the old MCF example,
 http://www.w3.org/TR/NOTE-MCF-XML-970624/#sec2.1

 the_songlines eg:author bruce_chatwin .

 where eg:author has a domain of Document and a range of Person.

 Exactly the same information could be conveyed in data where the
 property naming direction was reversed. And case by case, different
 natural languages and application environments will favour slightly
 one direction over the other. Here we could as well have had

 bruce_chatwin eg:wrote the_songlines .

 or eg:book or eg:pub or eg:xyz, with domain Person, range Document.

 As it happens in English, the word "author" doesn't have a natural and
 obvious inverse here, but that's incidental. The point is that both
 forms tell you just as much about the person as about the document,
 regardless of property naming and direction. The form using
 eg:author seems to be document-centric, but in fact it should
 equally support UI layers that are concerned with the person or the
 document. It would be disappointing if a UI that was presenting info
 about Bruce Chatwin were to miss out that he was the author of
 the_songlines, simply because somewhere along the line a schema writer
 chose to deploy a property "author" rather than "wrote"...
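Dan's point can be sketched in code: a store that indexes triples by subject *and* object lets a person-centric view find the Chatwin/Songlines link whichever direction the schema chose. All names here (eg:author, eg:wrote, the node labels) are illustrative only:

```python
# Index triples in both directions so lookups are direction-agnostic.
from collections import defaultdict

class TripleIndex:
    def __init__(self):
        self.by_subject = defaultdict(list)
        self.by_object = defaultdict(list)

    def add(self, s, p, o):
        triple = (s, p, o)
        self.by_subject[s].append(triple)
        self.by_object[o].append(triple)

    def about(self, node):
        # Everything that mentions `node`, as subject or as object.
        return self.by_subject[node] + self.by_object[node]

idx = TripleIndex()
idx.add("the_songlines", "eg:author", "bruce_chatwin")

# A person-centric UI still finds the document, despite the
# document-centric property direction:
assert idx.about("bruce_chatwin") == [("the_songlines", "eg:author", "bruce_chatwin")]
```

The same `about()` call would return the link if the data had used the reversed form ("bruce_chatwin eg:wrote the_songlines") instead.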

 cheers,

 Dan


 [1] http://www.tecweb.inf.puc-rio.br/explorator






Re: Dereferencing a URI vs querying a SPARQL endpoint

2009-05-20 Thread Pierre-Antoine Champin
I would expect that a DESCRIBE query to the SPARQL endpoint return what
I get when dereferencing the URI.
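Concretely (with a hypothetical URI), that expectation is:

```sparql
# If dereferencing http://example.org/resource/r returns graph G,
# this query against the site's endpoint would also return G.
DESCRIBE <http://example.org/resource/r>
```

Worth noting, though, that the SPARQL specification deliberately leaves the shape of a DESCRIBE result to the implementation, so this convention would only pin down behaviour per site, not across sites.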

  pa

Daniel Schwabe wrote:
 Dan and Hugh,
 let me be more specific.
 I'm not really advocating that only *one* direction should be returned
 (or even both directions).
 I am asking a more general question (to which I don't think Hugh really
 gave an answer either) which is, is there any query that returns the
 same triples as the ones you get when you dereference a URI, in a site
 that also provides a SPARQL endpoint?
 In the affirmative case, I am suggesting that the corresponding query be
 documented in the sitemap.xml document.
 Does this make sense?
 
 Cheers
 D
 
 On 20/05/2009 14:15, Dan Brickley wrote:
 [ . . . ]
 





Re: Dereferencing a URI vs querying a SPARQL endpoint

2009-05-20 Thread Peter Ansell
Hi,

If you have a dataset that is very large and highly interlinked on
particular URIs, the DESCRIBE response may be too large to reasonably
transmit to a user over the internet (and to expect a SPARQL endpoint
to give out in one chunk). This assumes the typical DESCRIBE
behaviour that SPARQL vendors implement, which picks out r ?p1 ?o
(forward) and ?s ?p2 r (reverse).

If you know that you want both forward and reverse behaviour, then you
should probably utilise a SPARQL endpoint and page through the possible
results with OFFSET and LIMIT until you don't get any more results.
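A minimal sketch of that paging loop, with the endpoint interaction stubbed out as `run_query` (a stand-in, not a real API: it takes a query string and returns a list of result rows):

```python
# Page through a SPARQL endpoint's results with LIMIT/OFFSET until a
# short (or empty) page signals the end.

def make_query(uri, limit, offset):
    # Forward direction only; a second pass with "?s ?p <uri>" would
    # cover the reverse relations.
    return ("SELECT ?p ?o WHERE { <%s> ?p ?o } LIMIT %d OFFSET %d"
            % (uri, limit, offset))

def fetch_all(run_query, uri, page_size=100):
    """Collect every result row for `uri`, one page at a time."""
    rows, offset = [], 0
    while True:
        page = run_query(make_query(uri, page_size, offset))
        rows.extend(page)
        if len(page) < page_size:   # short page: nothing more to fetch
            return rows
        offset += page_size
```

In a real client, `run_query` would issue the query over HTTP and parse the response; the loop itself is independent of how the rows are fetched.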

In relation to the Bio2RDF results, the URI that you dereference with
the federated queries is a mixture of what you could get at a
particular set of endpoints, with some forward and some reverse
relations, configured so that the system won't go down just from the
weight of someone trying to effectively do DESCRIBE
http://bio2rdf.org/taxon:9606. That URI is linked to in a few
hundred thousand places, but still only has a few forward construct
triples that come out of the taxonomy database. In this case, the
direction of the relationship is important in real-world terms because
of the size of the relationship.

Insisting that whenever someone wants to get information about a
taxonomy identifier (or some other classification method) they
also have to get everything else possibly related to it would produce a
mountain of information. This is why [1] [2] [3] etc. are available
for people wanting to get more related links (although there may be
slow endpoints that make each of those quite long operations).

Admittedly, the results for resolving Bio2RDF URIs come from multiple
endpoints, so if you just focused on a single Bio2RDF SPARQL endpoint
you would get reasonable results from DESCRIBE most of the time.

Cheers,

Peter

[1] http://qut.bio2rdf.org/pageoffset1/links/taxon:9606
[2] http://qut.bio2rdf.org/pageoffset2/links/taxon:9606
[3] http://qut.bio2rdf.org/pageoffset3/links/taxon:9606

2009/5/21 Pierre-Antoine Champin swlists-040...@champin.net:
 I would expect that a DESCRIBE query to the SPARQL endpoint return what
 I get when dereferencing the URI.

  pa



Re: URI lifecycle (Was: Owning URIs)

2009-05-20 Thread Hugh Glaser
Hi David,
On 20/05/2009 06:01, David Booth da...@dbooth.org wrote:
 
 A last comment, which I know we have discussed, and you possibly disagree:
 Community expropriation of a URI
 Might have meant something else.
 One of the problems is that many authors will not discharge their Statement
 Author Responsibilities, but will assume that the URI is the one they want.
 Over time, this may mean that the general SW uses a URI in a way other than
 the URI owner intends, to the extent that it becomes irrelevant what was the
 original meaning (there are many parallels for this in natural language, and
 indeed it is the social process that causes language to change).
 [ . . . ]
 
 Yes, that's a great topic for discussion.  It is clear that semantic
 drift is a natural part of natural language: a word that meant one thing
 years ago may mean something quite different now.  As humans we can
 usually deal with this semantic drift by knowing the context in which a
 word is used, though it can cause real life misunderstandings sometimes.
 
 However, I think our use of URIs in RDF is different from our use of
 words in natural language, in two important ways:
 
  - RDF is designed for machine processing -- not just human
 communication -- and machines are not so good at understanding context
 and resolving ambiguity; and
 
  - with URI declarations there is a simple, feasible, low-cost mechanism
 available that can be used to anchor the semantics of a URI.
 
 In short, although semantic web architecture could be designed to permit
 unrestricted semantic drift, I think it is a better design -- better
 serving the semantic web community as a whole -- to adopt an
 architecture that permits the semantics of each URI to be anchored, by
 use of a URI declaration.
Absolutely.
But your paper is not about architecture.
The architecture, as you say, permits the semantics of each URI to be
anchored.
One of the good things about your paper is that it is about the stuff
that is not enforced by the architecture: it addresses what might be
called the social processes and what the responsibilities might be, and
works hard to avoid confusion between them.
So if one was to envisage ways in which the consequences of failure to
adhere to the responsibilities might have a significant impact, and how that
impact might be accommodated or challenged, then I think it can be useful to
study it.
I happen to think that people and hence agents will simply assume they know
what URIs mean without checking the anchor, in the same way they use words
without checking the dictionary. If I was marking this email up in RDFa, I
would be much more likely to guess, or simply go and use the URIs you had
used to mark up your email, rather than check each one back at base - I
would never be able to do anything if I checked every word in the
dictionary.
In fact, how much of all the RDFa that is now being generated gets checked?
I do take your point that a lot of this is happening with machines, but even
they will make the same mistake when choosing a URI.
Best
Hugh  
 
 For more explanation see: Why URI Declarations? A comparison of
 architectural approaches
 http://dbooth.org/2008/irsw/
 
 
 --
 David Booth, Ph.D.
 Cleveland Clinic (contractor)
 
 Opinions expressed herein are those of the author and do not necessarily
 reflect those of Cleveland Clinic.
 
 




Re: Dereferencing a URI vs querying a SPARQL endpoint

2009-05-20 Thread Kingsley Idehen

Pierre-Antoine Champin wrote:

I would expect that a DESCRIBE query to the SPARQL endpoint return what
I get when dereferencing the URI.

  pa
  

Daniel,

Is this your problem:

Linked Data Servers publish URIs. The mechanism that delivers these URIs 
tends to vary, since they are the product of URL-rewrite rules that may 
or may not be associated with SPARQL queries; and when they are 
SPARQL-query based, you may be dealing with a CONSTRUCT or a DESCRIBE.


Ideally, you would like to be able to discern, via SPARQL, what SPARQL 
query pattern sits behind the rewrite rule for a given dereferenceable 
URI.


Please confirm yay or nay.

Kingsley

Daniel Schwabe wrote:
[ . . . ]


--


Regards,

Kingsley Idehen   Weblog: http://www.openlinksw.com/blog/~kidehen
President  CEO 
OpenLink Software Web: http://www.openlinksw.com








Re: Dereferencing a URI vs querying a SPARQL endpoint

2009-05-20 Thread Daniel Schwabe

Kingsley Idehen wrote:

Pierre-Antoine Champin wrote:

I would expect that a DESCRIBE query to the SPARQL endpoint return what
I get when dereferencing the URI.

  pa
  

Daniel,

Is this your problem:

Linked Data Servers publish URIs. The mechanism that delivers these 
URIs tends to vary, since they are the product of URL-rewrite rules 
that may or may not be associated with SPARQL queries; and when they are 
SPARQL-query based, you may be dealing with a CONSTRUCT or a DESCRIBE.
If a URI actually refers to an RDF document, I would imagine there is no 
URL rewriting involved; it resolves to the document itself. And for 
SPARQL-based ones, who knows what I may be dealing with? Peter already 
gave an example showing that you may get something that is neither a 
CONSTRUCT nor a DESCRIBE...




Ideally, you would like to be able to discern, via SPARQL, what SPARQL 
query pattern sits behind the rewrite rule for a given 
dereferenceable URI.
Basically yes, although I'm not even requiring being able to do it 
directly via SPARQL (though that would actually be nice)...
Also, I'd be curious to know which is more efficient - dereferencing or 
issuing the query through the endpoint.


Cheers
D