Re: Ontology to link food and diseases

2015-05-05 Thread Bernard Vatant
Hi Marco

This is a very touchy domain, where vocabularies and data should be
carefully wrapped with provenance, source, time stamp and authority. More
than anywhere else, beware of any positivist, single-viewpoint, truth-based
approach ...
The examples you give are not facts, but just statements which should be
backed by literature. Exceptions and different viewpoints exist, etc.
Bear in mind that, at the end of the day, these statements will feed
algorithms. And if you make them public, they may end up in the Google
Knowledge Graph ...

See http://bvatant.blogspot.fr/2015/02/statements-are-only-statements.html
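
For instance, a claim like "gluten must be avoided by people affected by
coeliac disease" could be wrapped in a named graph carrying its source,
authority and date. A minimal sketch in TriG, with purely illustrative
example.org URIs:

@prefix ex:   <http://example.org/> .
@prefix dct:  <http://purl.org/dc/terms/> .
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

# The statement itself, isolated in its own named graph
ex:claim1 {
  ex:Gluten ex:contraindicatedFor ex:CoeliacDisease .
}

# Provenance about the statement, not about gluten itself
ex:claim1 dct:source ex:someClinicalReview ;
  prov:wasAttributedTo ex:someAuthority ;
  dct:date "2015-05-05"^^xsd:date .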


2015-05-03 23:20 GMT+02:00 Marco Brandizi :

>  Hi all,
>
> I'm looking for an ontology/controlled vocabulary/alike that links food
> ingredients/substances/dishes to human diseases/conditions, like
> intolerances, allergies, diabetes etc.
>
> Examples of information I'd like to find coded (please assume they're
> true, I'm no expert):
>   - gluten must be avoided by people affected by coeliac disease
>   - omega-3 is good for people with high cholesterol
>   - sugar should be avoided by people with diabetes risk
>
> I would also like linked data about commercial food products, but even an
> ontology without 'instances' would be useful.
>
> So far, I've found a fair amount of literature (e.g., [1-3]) and vocabularies
> like AGROVOC [4], but nothing like the above.
>
> Thanks in advance for any help!
> Marco
>
> [1] http://fruct.org/publications/abstract14/files/Kol_21.pdf
> [2] http://www.researchgate.net/publication/224331263_FOODS_A_Food-Oriented_Ontology-Driven_System
> [3] http://www.hindawi.com/journals/tswj/aip/475410/
> [4] http://tinyurl.com/ndtdhwn
>
>  --
>
> ===
> Marco Brandizi, PhD
> http://www.marcobrandizi.info
>
> Functional Genomics Group - Sr Software Engineer
> http://www.ebi.ac.uk/microarray
>
> European Bioinformatics Institute (EMBL-EBI)
> European Molecular Biology Laboratory
> Wellcome Trust Genome Campus, Hinxton, Cambridge CB10 1SD, United Kingdom
>
> Office V2-26, Phone: +44 (0)1223 492 613, Fax: +44 (0)1223 492 620
>
>


-- 

*Bernard Vatant*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
http://google.com/+BernardVatant

*Mondeca*
35 boulevard de Strasbourg 75010 Paris
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
--


Re: Government ontology?

2015-04-07 Thread Bernard Vatant
So, you know the way forward

- Pick the best of what is already there
- Extend and improve
- Push the result into LOV

... et voilà :)

2015-04-08 0:03 GMT+02:00 Daniel Schwabe :

> Hi Bernard,
> yes, we did look at LoV. Unfortunately, the ones reported there don't
> quite fit our needs, but we may borrow from some.
> Best
> Daniel
>
> On Apr 7, 2015, at 18:49  - 07/04/15, Bernard Vatant <
> bernard.vat...@mondeca.com> wrote:
>
> Hi Daniel
>
> Did you explore http://lov.okfn.org/dataset/lov/vocabs?tag=Government
>
> I suppose you did, but just in case ...
>
> 2015-04-07 23:08 GMT+02:00 Daniel Schwabe :
>
>> Hi,
>> I'm looking for an ontology describing political bodies in Government,
>> e.g., Parliament, Congress, Senate, etc...
>> It needs to describe the relation between a person and an office,
>> legislature, geographical base (state, district, county, ...), etc...
>> Any pointers will be greatly appreciated!
>>
>>
>> Daniel Schwabe  Dept. de Informatica, PUC-Rio
> Tel: +55-21-3527 1500 r. 4356    R. M. de S. Vicente, 225
>> Fax: +55-21-3527 1530   Rio de Janeiro, RJ 22453-900, Brasil
>> http://www.inf.puc-rio.br/~dschwabe
>>
>
>


-- 

*Bernard Vatant*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
http://google.com/+BernardVatant

*Mondeca*
35 boulevard de Strasbourg 75010 Paris
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
--


Re: Government ontology?

2015-04-07 Thread Bernard Vatant
Hi Daniel

Did you explore http://lov.okfn.org/dataset/lov/vocabs?tag=Government

I suppose you did, but just in case ...

2015-04-07 23:08 GMT+02:00 Daniel Schwabe :

> Hi,
> I'm looking for an ontology describing political bodies in Government,
> e.g., Parliament, Congress, Senate, etc...
> It needs to describe the relation between a person and an office,
> legislature, geographical base (state, district, county, ...), etc...
> Any pointers will be greatly appreciated!
>
>
> Daniel Schwabe  Dept. de Informatica, PUC-Rio
> Tel: +55-21-3527 1500 r. 4356    R. M. de S. Vicente, 225
> Fax: +55-21-3527 1530   Rio de Janeiro, RJ 22453-900, Brasil
> http://www.inf.puc-rio.br/~dschwabe
>
>
>
>
>


-- 

*Bernard Vatant*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
http://google.com/+BernardVatant

*Mondeca*
35 boulevard de Strasbourg 75010 Paris
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
--


Re: How do you explore a SPARQL Endpoint?

2015-01-25 Thread Bernard Vatant
Hi Pavel

Very interesting discussion, thanks for the follow-up. Some quick answers
below, but I'm currently writing a blog post which will go into more detail
on the notion of Data Patterns, a term I pushed last week on the
DC Architecture list, where it seems to have gained some traction.
See https://www.jiscmail.ac.uk/cgi-bin/webadmin?A1=ind1501&L=dc-architecture
for the discussion.

2015-01-25 22:53 GMT+01:00 Pavel Klinov :

>
> On Fri, Jan 23, 2015 at 11:28 AM, Bernard Vatant
>  wrote:
> > Hi Pavel
> >
> > Maybe what you are missing is that RDF data, by design, do not need a
> > schema.
>
> Right, I am aware of that. I think it's important to separate the
> absence of the schema from the absence of an explicit representation
> of the schema.
>

Well indeed, but there is no fine line of separation, since there is
neither a standard nor even a consensual definition of "schema" here :)

> Most of real-life datasets have some structure...


Indeed, because they are generally transformed from structured data
(databases, XML, whatever). But structure does not mean schema. I would
rather say patterns than structure. Structure carries the implicit notion
of a global, consistent architecture. Pattern is more generic, and better
fitted to denote regularities that can happen at various levels of
granularity, and not necessarily everywhere in the data (because of
heterogeneous sources, for example).


> ... which reflects what the
> data is all about. Knowing such structure is useful (and often
> necessary) to be able to write meaningful queries and that's, I think,
> what the initial question was.


Certainly, and I would rephrase the question: how do you find out the data
patterns in a dataset?
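
As a first probe in that direction, standard SPARQL 1.1 aggregates can
already surface simple patterns. A sketch counting which type/predicate
combinations occur, and how often:

SELECT ?type ?p (COUNT(*) AS ?n)
WHERE { ?x a ?type ; ?p ?o }
GROUP BY ?type ?p
ORDER BY DESC(?n)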


> When such structure exists, I'd say
> that the dataset has an *implicit* schema (or a conceptual model, if
> you will).


Well, that's where I don't follow. If data, as it happens more and more, is
gathered from heterogeneous sources, the very notion of a conceptual model
is jumping to conclusions. In natural languages, patterns often precede the
grammar describing them, even if the patterns described in the grammar at
some point become prescriptive rules. Data should be looked at the same way.


> What is absent is an explicit representation of the schema,
> or the conceptual model, in terms of RDFS, OWL, or SKOS axioms.
>

When the dataset gathers various sources and various vocabularies, such a
schema actually does not exist.


> Then, of course, you're right that the RDF spec doesn't mandate that
> the schema is explicitly represented (put in other words, that the
> structure is explicitly modeled). Which is fine.
>

Nobody can disagree on that :)


> However, when the schema *is* represented explicitly, knowing it is a
> huge help to users which otherwise know little about the data.


OK, but the question is : which is a good format for exposing this
structure?
RDFS/OWL ontology/vocabulary, Application Profiles, RDF Shapes / whatever
it will be named, or ... ?


> It's especially important for public data endpoints. Several projects, e.g.
> ProLOD++, aimed at analyzing the structure of LOD would benefit from
> being able to request the schema from those datasets, which i)
> represent it explicitly and ii) manage it separately from the data and
> thus can service such requests efficiently. As I said above, ii) also
> makes sense for other reasons.
>

Agreed

> What is missing is a simple protocol for asking "is your data's
> structure modeled explicitly? If yes, please give me the schema
> triples".


Assuming the "schema" is expressed in some idiom of RDF ...
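
If it is, a single CONSTRUCT could already play the role of that protocol.
A rough sketch, assuming the schema is stated with the usual RDFS terms:

PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
CONSTRUCT { ?s ?p ?o }
WHERE {
  VALUES ?p { rdfs:domain rdfs:range rdfs:subClassOf rdfs:subPropertyOf }
  ?s ?p ?o .
}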

> Or at least the vocabulary used in the data. Instead,
> everyone just comes up with their own exploratory SPARQL queries,
> which would seem like unnecessary work if there were a simpler
> question to ask.
>

Sure.


> Cheers,
> Pavel
>
> PPS. It'd also be correct to claim that even when a structure exists,
> realistic data can be messy and not fit into it entirely. We've seen
> stuff like literals in the range of object properties, etc. It's a
> separate issue having to do with validation, for which there's an
> ongoing effort at W3C. However, it doesn't generally hinder writing
> queries which is what we're discussing here.
>

Well, I don't see it as a separate issue. All the raging debate around RDF
Shapes is not (yet) about validation, but about the definition of what a
shape/structure/schema can be.



>  > Since the very notion of schema for RDF data has no meaning at all,
> > and the absence of schema is a bit frightening, people tend to give it a
> lot
> > of possible meanings, depending on your closed world or open world
> > assumption, otherw

Re: How do you explore a SPARQL Endpoint?

2015-01-23 Thread Bernard Vatant
Hi Pavel

Maybe what you are missing is that RDF data, by design, do not need a
schema. Since the very notion of schema for RDF data has no meaning at all,
and the absence of a schema is a bit frightening, people tend to give it a
lot of possible meanings, depending on your closed-world or open-world
assumption, in other words on whether the "schema" will be used for some
kind of inference or validation. The use of "Schema" in RDFS has done
nothing to clarify this, and the use of "Ontology" in OWL added a layer of
confusion. I tend to say "vocabulary" to name the set of types and
predicates used by a dataset (as in Linked Open Vocabularies), which is a
minimal commitment to how it is considered by the dataset owner, bearing in
mind that this "vocabulary" is generally a mix of imported terms from SKOS,
FOAF, Dublin Core ... and home-made ones. Which is completely OK with the
spirit of RDF.
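
That minimal "vocabulary" can even be listed mechanically. A sketch using
SPARQL 1.1 string functions to extract the namespaces of all predicates
used in a dataset:

SELECT DISTINCT (REPLACE(STR(?p), "[^/#]*$", "") AS ?ns)
WHERE { ?s ?p ?o }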

The brand new LDOM [1], or whatever it ends up being named at the end of
the day, might clarify the situation, or muddy those waters a bit more :)

[1] http://spinrdf.org/ldomprimer.html

2015-01-23 10:37 GMT+01:00 Pavel Klinov :

> Alright, so this isn't an answer and I might be saying something
> totally silly (since I'm not a Linked Data person, really).
>
> If I re-phrase this question as the following: "how do I extract a
> schema from a SPARQL endpoint?", then it seems to pop up quite often
> (see, e.g., [1]). I understand that the original question is a bit
> more general but it's fair to say that knowing the schema is a huge
> help for writing meaningful queries.
>
> As an outsider, I'm quite surprised that there's still no commonly
> accepted (i'm avoiding "standard" here) way of doing this. People
> either hope that something like VoID or LOV vocabularies are being
> used, or use 3-party tools, or write all sorts of ad hoc SPARQL
> queries themselves, looking for types, object properties,
> domains/ranges etc-etc. There are also papers written on this subject.
>
> At the same time, the database engines which host datasets often (not
> always) manage the schema separately from the data. There're good
> reasons for that. One reason, for example, is to be able to support
> basic reasoning over the data, or integrity validation. Just because
> in RDF the schema language and the data language are the same, so
> schema and data triples can be interleaved, it need not be (and often
> is not) managed that way.
>
> Yet, there's no standard way of requesting the schema from the
> endpoint, and I don't quite understand why. There's the SPARQL 1.1
> Service Description, which could, in theory, cover it, but it doesn't.
> Servicing such schema extraction requests doesn't have to be mandatory
> so the endpoints which don't have their schemas right there don't have
> to sift through the data. Also, schemas are typically quite small.
>
> I guess there's some problem with this which I'm missing...
>
> Thanks,
> Pavel
>
> [1]
> http://answers.semanticweb.com/questions/25696/extract-ontology-schema-for-a-given-sparql-endpoint-data-set
>
> On Thu, Jan 22, 2015 at 3:09 PM, Juan Sequeda 
> wrote:
> > Assume you are given a URL for a SPARQL endpoint. You have no idea what
> data
> > is being exposed.
> >
> > What do you do to explore that endpoint? What queries do you write?
> >
> > Juan Sequeda
> > +1-575-SEQ-UEDA
> > www.juansequeda.com
>
>


-- 

*Bernard Vatant*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
http://google.com/+BernardVatant

*Mondeca*
35 boulevard de Strasbourg 75010 Paris
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
--


Re: How do you explore a SPARQL Endpoint?

2015-01-22 Thread Bernard Vatant
Interesting to note that the answers so far converge on looking first for
types and predicates bottom-up from the data, and not on queries looking
for a declared model layer using RDFS or OWL, such as:

SELECT DISTINCT ?class
WHERE { {?class a owl:Class} UNION {?class a rdfs:Class}}

SELECT DISTINCT ?property ?domain ?range
WHERE { {?property rdfs:domain ?domain} UNION {?property rdfs:range ?range}}

Which means, globally, you don't expect the SPARQL endpoint to expose a
formal model along with the data.
That said, if the model is exposed with the data, the values of rdf:type
will contain e.g., rdfs:Class and owl:Class ...

Of course in the ideal situation where you have an ontology, the following
would bring its elements.

SELECT DISTINCT ?o ?x ?type
WHERE {?x rdf:type ?type.
?x rdfs:isDefinedBy ?o.
?o a owl:Ontology }

It's worth trying, because if the dataset you query is really big, it will
be faster to look first for a declared model than to ask for all distinct
rdf:type values.
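
To run it as-is, the last query just needs the standard W3C prefix
declarations. The complete version:

PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX owl:  <http://www.w3.org/2002/07/owl#>

SELECT DISTINCT ?o ?x ?type
WHERE {
  ?x rdf:type ?type .
  ?x rdfs:isDefinedBy ?o .
  ?o a owl:Ontology .
}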


2015-01-22 15:23 GMT+01:00 Alfredo Serafini :

> Hi
>
> the most basic query is the usual query for concepts, something like:
>
> SELECT DISTINCT ?concept
> WHERE {
> ?uri a ?concept.
> }
>
> then, given a specific concept, you can infer from the data what the
> predicates/properties for its instances are:
> SELECT DISTINCT ?prp
> WHERE {
> [ a <concept> ] ?prp [] .
> }
>
> and so on...
>
> Apart from other more complex queries (here we are of course omitting a lot
> of important things), these two "patterns" are usually the most useful as a
> starting point, for me.
>
>
>
>
> 2015-01-22 15:09 GMT+01:00 Juan Sequeda :
>
>> Assume you are given a URL for a SPARQL endpoint. You have no idea what
>> data is being exposed.
>>
>> What do you do to explore that endpoint? What queries do you write?
>>
>> Juan Sequeda
>> +1-575-SEQ-UEDA
>> www.juansequeda.com
>>
>
>


-- 

*Bernard Vatant*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
http://google.com/+BernardVatant

*Mondeca*
35 boulevard de Strasbourg 75010 Paris
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
--


Re: How do you explore a SPARQL Endpoint?

2015-01-22 Thread Bernard Vatant
Hi Juan

My strategy is as follows

Q1 : Which types are used?
SELECT DISTINCT ?type
WHERE {?x rdf:type ?type}

Q2 : Which predicates are used?
SELECT DISTINCT ?p
WHERE {?x ?p ?o}

Q3 : Which predicates are used by instances of a type :foo, found in Q1,
that I'm interested in?
SELECT DISTINCT ?p
WHERE {?x a :foo . ?x ?p ?o}
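
A possible Q4, useful on big endpoints: rank the types found in Q1 by
instance count, to decide which :foo is worth exploring first (standard
SPARQL 1.1 aggregates):

SELECT ?type (COUNT(?x) AS ?n)
WHERE {?x rdf:type ?type}
GROUP BY ?type
ORDER BY DESC(?n)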


2015-01-22 15:09 GMT+01:00 Juan Sequeda :

> Assume you are given a URL for a SPARQL endpoint. You have no idea what
> data is being exposed.
>
> What do you do to explore that endpoint? What queries do you write?
>
> Juan Sequeda
> +1-575-SEQ-UEDA
> www.juansequeda.com
>



-- 

*Bernard Vatant*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
http://google.com/+BernardVatant

*Mondeca*
35 boulevard de Strasbourg 75010 Paris
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
--


Re: Politics ontology

2014-12-29 Thread Bernard Vatant
Hello

Did you have a look at
http://lov.okfn.org/dataset/lov/details/vocabularySpace_Government.html

Best regards

2014-12-28 3:18 GMT+01:00 Jamshaid Ashraf :

> Hi,
>
> I am looking for an ontology that covers the politics domain. It should have
> the basic constructs to cover politics, politicians, their affiliation and
> government.
>
> Please suggest ontologies that fully or partially cover the above-mentioned
> concepts.
>
> Regards
> Jamshaid Ashraf
>
>
>


-- 

*Bernard Vatant*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
http://google.com/+BernardVatant

*Mondeca*
35 boulevard de Strasbourg 75010 Paris
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
--


Re: Debates of the European Parliament as LOD

2014-11-06 Thread Bernard Vatant
+1

http://purl.org/linkedpolitics/vocabulary/404
http://purl.org/linkedpolitics/vocabulary/eu/plenary/404

Those vocabularies SHOULD indeed be available from their URI namespace,
and of course they are candidates for inclusion in LOV as soon as they are.

Looking forward to this!

2014-11-06 14:31 GMT+01:00 Ghislain Atemezing 
:

> Dear Laura,
>
> Thanks again  for this great dataset !
>
> Please note that this is a first version; we hope you will try it out and
> send us your feedback!
>
>
> Regarding the vocabulary used for the dataset, it would be great to have
> it inserted into LOV [a].
> Currently in LOV, only two vocabularies seem to deal with politics [1].
> So, it will be useful for other publishers to reuse the one you implemented.
> When the dereferencing issue is solved [2],  it will be a pleasure for us
> to have the vocabulary in the LOV catalogue.
>
> Best,
> Ghislain
>
> [a] http://lov.okfn.org/dataset/lov/
> [1] http://lov.okfn.org/dataset/lov/search?q=politics
> [2]
> http://validator.linkeddata.org/vapour?uri=http%3A%2F%2Fpurl.org%2Flinkedpolitics%2Fvocabulary%2F&defaultResponse=dontmind&userAgent=vapour.sourceforge.net
> <http://validator.linkeddata.org/vapour?uri=http://purl.org/linkedpolitics/vocabulary/&defaultResponse=dontmind&userAgent=vapour.sourceforge.net>
>



-- 

*Bernard Vatant*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
http://google.com/+BernardVatant

*Mondeca*
35 boulevard de Strasbourg 75010 Paris
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
--


Re: How to model valid time of resource properties?

2014-10-16 Thread Bernard Vatant
Hi all

Quickly browsing this thread, it seems to me that the Property Reification
Vocabulary [1] provides exactly what is needed without going through the
burden of explicit reification or named graphs, and I did not see it
mentioned, unless I missed something.
It would be cool if this vocabulary could move up to some standard status
(it's a work in progress; the last version is Feb 2011). Certainly at
least one of the great brains behind it is lurking here :)

http://purl.org/ontology/prv/core#
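
The pattern it captures boils down to something like the following generic
sketch (hypothetical ex: terms, not the actual PRV vocabulary): the
shortcut property is unfolded into a resource that can carry a validity
interval.

@prefix ex:  <http://example.org/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

# Shortcut form, with no room for a valid time:
# ex:alice ex:worksFor ex:acme .

# Unfolded form, where the relationship becomes a resource:
ex:employment1 a ex:Employment ;
  ex:employee  ex:alice ;
  ex:employer  ex:acme ;
  ex:validFrom "2010-01-01"^^xsd:date ;
  ex:validTo   "2014-06-30"^^xsd:date .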


2014-10-16 0:02 GMT+02:00 John Walker :

>   Hi
>
>
> On October 15, 2014 at 2:59 PM Kingsley Idehen 
> wrote:
>
>  On 10/15/14 8:36 AM, Frans Knibbe | Geodan wrote:
>
>  On 2014-10-13 14:14, John Walker wrote:
>
>
>  Hi Frans,
>
>  See this example:
>  http://patterns.dataincubator.org/book/qualified-relation.html
>
>
> Thank you John! Strangely enough, I had not come across the Linked Data
> Patterns book before. But I can see it is a valuable resource with
> solutions for many common problems. And it looks pretty too! I am sure it
> will come in handy for problems that I haven't stumbled upon yet.
>
> A nice thing about this solution is that it doesn't need any extensions of
> core technologies. I do see some downsides, though:
>
> Let's assume I want to publish data about people, as in the examples. A
> person can have common properties defined by the FOAF vocabulary, like
> foaf:age or foaf:based_near. Properties like these are likely to change. If
> I want to record the time in which a statement is valid I would have to
> create a class for that relationship and add properties to that class that
> will allow me to associate a start time and an end time with the class. But
> by doing that I would not only be forced to create my own vocabulary, I
> would also replace common web wide semantics with my own semantics. Or
> would it still be possible to relate the original property with the custom
> class somehow?
>
>  Personally I would not use this approach for foaf:age and
> foaf:based_near as these capture a certain snapshot/state of (the
> information about) a resource. Having some representation where the
> foaf:age triple could be entailed could lead to having multiple conflicting
> statements with no easy way to find the truth.
>
> Having a clear understanding of the questions you want to ask of your
> knowledge base should help steer modelling choices.
>
>  In the cases known to me that require the recording of history of
> resources, *all* resource properties (except for the identifier) are
> things that can change in time. If this pattern would be applied, it would
> have to be applied to all properties, leading to vocabularies exploding and
> becoming unwieldy, as described in the Discussion paragraph.
>
> I think that the desire to annotate statements with things like valid time
> is very common. Wouldn't it be funny if the best solution to a such a
> common and relatively straightforward requirement is to create large custom
> vocabularies?
>
>  If you want to be able to capture historical states of a resource, using
> named graphs to provide that context would be my first thought.
>
> If that resource consists of just one triple, then RDF reification of that
> statement would also work as Kingsley mentions.
>
>
> Regards,
> Frans
>
>
> Frans,
>
> How about reified RDF statements?
>
> I think discounting RDF reification vocabulary is yet another act of
> premature optimization, in regards to the Semantic Web meme :)
>
> Some examples:
>
> [1] http://bit.ly/utterances-since-sept-11-2014 -- List of statements
> made from a point in time.
> [2] http://linkeddata.uriburner.com/c/8EPG33 -- About Connotation
>
> --
> Regards,
>
> Kingsley Idehen
> Founder & CEO
> OpenLink Software
> Company Web: http://www.openlinksw.com
> Personal Weblog 1: http://kidehen.blogspot.com
> Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
> Twitter Profile: https://twitter.com/kidehen
> Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
> LinkedIn Profile: http://www.linkedin.com/in/kidehen
> Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this
>
>
>
>



-- 

*Bernard Vatant*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
http://google.com/+BernardVatant

*Mondeca*
35 boulevard de Strasbourg 75010 Paris
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
--


Re: ANN: DBpedia Version 2014 released

2014-09-10 Thread Bernard Vatant
Thanks Kingsley

Not sure I have understood all of your explanations yet (will try again
later), but in any case you surely know how to speak to the LOV-Bot: he got
it 100%.
DBpedia is in the queue to be LOV-ed (it should show up later today).

2014-09-10 16:00 GMT+02:00 Kingsley Idehen :

> On 9/10/14 7:47 AM, Kingsley Idehen wrote:
>
>> On 9/10/14 6:01 AM, Bernard Vatant wrote:
>>
>>> Hi all
>>>
>>> Following an off-list answer to Kingsley on the G+ LOV community
>>> conversation [1]
>>>
>>> - The current state of affairs requires following the ov:defines predicates
>>> in the ontology description to each of its elements to get the full
>>> description of the ontology content (classes and properties). I tried to
>>> replace those by owl:imports predicates and submit the file to Protégé, but
>>> after more than one hour it was still struggling with importing the 3480
>>> elements from their URIs, with so many queries on the DBpedia servers.
>>> Clearly not the good solution.
>>>
>>> - So I tried otherwise, sending to the SPARQL endpoint this very basic
>>> query.
>>>
>>> CONSTRUCT {?s ?p ?o}
>>>
>>> WHERE {   ?s rdfs:isDefinedBy <http://dbpedia.org/ontology/>.
>>>   ?s ?p ?o}
>>>
>>> This is a compact URI for this query result (in RDF/XML)
>>> http://bit.ly/1xHzpv5 which I successfully submitted to either Protégé
>>> or the LOV-Bot.
>>> So, seems to me if the ontology namespace had a conneg to such a query
>>> it would be all we need.
>>> Or, if you keep the things as they are, we will take internally in LOV
>>> such a URI to feed the LOV-Bot.
>>>
>>> [1] https://plus.google.com/+BernardVatant/posts/jVVSVbxuDfq
>>>
>>
>> Bernard,
>>
>> Alternatively, we can implement the following, which basically leverages
>> the much underutilized <http://www.w3.org/2007/05/powder-s#describedby>
>> relation as a mechanism for incorporating an external (outside quad store)
>> ontology terms description document into the DBpedia Ontology description:
>>
>> ## DBpedia Ontology Fix
>>
>> # Ontology IRI: <http://dbpedia.org/ontology/>
>> # Named Graph IRI: <http://dbpedia.org/ontology/definitions#>
>> # Ontology Definitions Document URLs: <http://dbpedia.org/ontology/
>> data/definitions.ttl>,
>> # <http://dbpedia.org/ontology/data/definitions.jsonld>, etc..
>> # URL Re-write rule:
>> # for all lookups requests for: <http://dbpedia.org/ontology/>
>> # resolve to (subject to Accept: headers), a SPARQL URL for:
>> # DESCRIBE <http://dbpedia.org/ontology/> FROM <
>> http://dbpedia.org/ontology/definitions#>
>> #
>> # HTTP/WebDAV accessible Docs, generated by internal indirection of
>> SPARQL DESCRIBE or CONSTRUCT:
>> # <http://dbpedia.org/ontology/data/definitions.ttl>,
>> # <http://dbpedia.org/ontology/data/definitions.jsonld>, etc..
>>
>>  INSERT
>>   {GRAPH <http://dbpedia.org/ontology/definitions#>
>>  {
>>   ?s rdfs:isDefinedBy <http://dbpedia.org/ontology/>.
>> <http://dbpedia.org/ontology/> <http://open.vocab.org/terms/defines> ?s.
>> <http://dbpedia.org/ontology/> a owl:Ontology .
>>   ?s <http://www.w3.org/2007/05/powder-s#describedby> <
>> http://dbpedia.org/ontology/data/definitions.ttl> .
>> <http://dbpedia.org/ontology/data/definitions.ttl> <
>> http://open.vocab.org/terms/describes> ?s .
>>   }
>>   }
>>   WHERE
>>   {GRAPH <http://dbpedia.org/ontology/definitions#>
>>   {
>>   {?s rdfs:subClassOf ?o}
>>   UNION
>>   {?s rdfs:subPropertyOf ?o}
>>   UNION
>>   {?s owl:equivalentClass ?o}
>>   UNION
>>   {?s owl:equivalentProperty ?o}
>>   UNION
>>   {?s a ?o}
>>       }
>>}
>>
>> ETA for this going live:  next 20 - 60 minutes.
>>
>> Kingsley
>>
>
> Bernard,
>
> Done.
>
> You can now lookup <http://dbpedia.org/ontology/> and retrieve the entire
> DBpedia ontology, in your preferred document type.
>
>
> --
> Regards,
>
> Kingsley Idehen
> Founder & CEO
> OpenLink Software
> Company Web: http://www.openlinksw.com
> Personal Weblog 1: http://kidehen.blogspot.com
> Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
> Twitter Profile: https://twitter.com/kidehen
> Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
> LinkedIn Profile: http://www.linkedin.com/in/kidehen
> Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this
>
>
>


-- 

*Bernard Vatant*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
http://google.com/+BernardVatant

*Mondeca*
35 boulevard de Strasbourg 75010 Paris
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
--


Re: ANN: DBpedia Version 2014 released

2014-09-10 Thread Bernard Vatant
Hi all

Following an off-list answer to Kingsley on the G+ LOV community
conversation [1]

- The current state of affairs requires following the ov:defines predicates
in the ontology description to each of its elements to get the full
description of the ontology content (classes and properties). I tried to
replace those by owl:imports predicates and submit the file to Protégé, but
after more than one hour it was still struggling with importing the 3480
elements from their URIs, with so many queries on the DBpedia servers.
Clearly not the good solution.

- So I tried otherwise, sending to the SPARQL endpoint this very basic
query.

CONSTRUCT {?s ?p ?o}
WHERE {
  ?s rdfs:isDefinedBy <http://dbpedia.org/ontology/> .
  ?s ?p ?o
}

This is a compact URI for this query result (in RDF/XML),
http://bit.ly/1xHzpv5, which I successfully submitted to both Protégé and
the LOV-Bot.
So, it seems to me that if the ontology namespace had content negotiation
to such a query, it would be all we need.
Or, if you keep things as they are, we will use such a URI internally in
LOV to feed the LOV-Bot.

[1] https://plus.google.com/+BernardVatant/posts/jVVSVbxuDfq

2014-09-09 23:51 GMT+02:00 Kingsley Idehen :

>  On 9/9/14 9:43 AM, Bernard Vatant wrote:
>
>  Understood, great!
>
>  Please ping us when this is done, and/or suggest the ontology URI to
> http://lov.okfn.org/dataset/lov/suggest/
>
>  Looking forward to a LOV-ed DBpedia ontology :)
>
>
> Done.
>
> [1] http://dbpedia.org/c/9BV46KCN - DBpedia Ontology
>
> [2] http://dbpedia.org/c/9336LB2 - Class description
>
> [3] http://dbpedia.org/ontology/PoloLeague -- HTTP URI that identifies a
> class.
>
> --
> Regards,
>
> Kingsley Idehen   
> Founder & CEO
> OpenLink Software
> Company Web: http://www.openlinksw.com
> Personal Weblog 1: http://kidehen.blogspot.com
> Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
> Twitter Profile: https://twitter.com/kidehen
> Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
> LinkedIn Profile: http://www.linkedin.com/in/kidehen
> Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this
>
>


-- 

*Bernard Vatant*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
http://google.com/+BernardVatant

*Mondeca*
35 boulevard de Strasbourg 75010 Paris
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
--


Re: ANN: DBpedia Version 2014 released

2014-09-09 Thread Bernard Vatant
Understood, great!

Please ping us when this is done, and/or suggest the ontology URI to
http://lov.okfn.org/dataset/lov/suggest/

Looking forward to a LOV-ed DBpedia ontology :)




2014-09-09 15:34 GMT+02:00 Kingsley Idehen :

>  On 9/9/14 8:17 AM, Bernard Vatant wrote:
>
>  Hi Kingsley
>
>  Not sure how to understand your answer. I have no problem with the
> ontology elements' URIs, such as [1], and the various conneg formats; my
> problem is about the URI of the whole ontology re. metadata etc.
>
> 2014-09-09 13:46 GMT+02:00 Kingsley Idehen :
>
>
>> Bernard,
>>
>> As you can see from the announcement, this is a little transition period
>> between the combination of DBpedia datasets and Linked Data deployment, via
>> the DBpedia endpoint.
>>
>> This is simply a case of transition. The problem will be fixed, promptly.
>> The URIs below should aid tracking and problem comprehension. Also, these
>> URIs won't change following resolution of the TCN issues. Might even be
>> fixed before you read this response.
>>
>> [1] http://dbpedia.org/ontology/TimePeriod
>> [2] http://linkeddata.uriburner.com/c/9CU7GSFP -- the problem (which is
>> basically TCN and legacy issues with text/rdf+n3 and text/turtle).
>>
>>
>> Kingsley
>>
>>
> Bernard,
>
> Part 2 of the matter is what you need, I should have been clearer. It
> boils down to the following which will be applied shortly:
>
> ## DBpedia Ontology Fix
>
> # Given the following:
> # Ontology IRI: <http://dbpedia.org/ontology/>
> <http://dbpedia.org/ontology/>
> # Named Graph IRI: <http://dbpedia.org/resource/classes#>
> <http://dbpedia.org/resource/classes#>
> # Apply the SPARQL 1.1 below.
> # Then apply a Re-write rule:
> # for all lookups requests for: <http://dbpedia.org/ontology/>
> <http://dbpedia.org/ontology/>
> # resolve to (based on dynamic content negotiation), a SPARQL URL for:
> # DESCRIBE <http://dbpedia.org/ontology/> <http://dbpedia.org/ontology/>
> FROM <http://dbpedia.org/resource/classes#>
> <http://dbpedia.org/resource/classes#> .
>
>  INSERT
>  {GRAPH  <http://dbpedia.org/resource/classes#>
> <http://dbpedia.org/resource/classes#>
> {
>  ?s rdfs:isDefinedBy  <http://dbpedia.org/ontology/>
> <http://dbpedia.org/ontology/>.
>  <http://dbpedia.org/ontology/> <http://dbpedia.org/ontology/>
> <http://open.vocab.org/terms/defines>
> <http://open.vocab.org/terms/defines> ?s.
>  <http://dbpedia.org/ontology/> <http://dbpedia.org/ontology/>
> a owl:Ontology .
>  ?s <http://www.w3.org/2007/05/powder-s#describedby>
> <http://www.w3.org/2007/05/powder-s#describedby>
> <http://dbpedia.org/ontology/> <http://dbpedia.org/ontology/> .
>  <http://dbpedia.org/ontology/> <http://dbpedia.org/ontology/>
> <http://open.vocab.org/terms/describes>
> <http://open.vocab.org/terms/describes> ?s .
>  }
>  }
>  WHERE
>  {GRAPH <http://ti.arc.nasa.gov/m/profile/shawn/tfmontology/tfmBJ1.owl>
> <http://ti.arc.nasa.gov/m/profile/shawn/tfmontology/tfmBJ1.owl>
>  {
>  {?s rdfs:subClassOf ?o}
>  UNION
>  {?s rdfs:subPropertyOf ?o}
>  UNION
>  {?s owl:equivalentClass ?o}
>  UNION
>  {?s owl:equivalentProperty ?o}
>  UNION
>  {?s a ?o}
>  }
>   }
>
>
>
> --
> Regards,
>
> Kingsley Idehen   
> Founder & CEO
> OpenLink Software
> Company Web: http://www.openlinksw.com
> Personal Weblog 1: http://kidehen.blogspot.com
> Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
> Twitter Profile: https://twitter.com/kidehen
> Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
> LinkedIn Profile: http://www.linkedin.com/in/kidehen
> Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this
>
>


-- 

*Bernard Vatant*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
http://google.com/+BernardVatant

*Mondeca*
35 boulevard de Strasbourg 75010 Paris
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
--


Re: ANN: DBpedia Version 2014 released

2014-09-09 Thread Bernard Vatant
Hi Kingsley

Not sure how to understand your answer. I have no problem with the ontology
elements' URIs, such as [1], and the various conneg formats; my problem is
about the URI of the whole ontology re. metadata etc.

2014-09-09 13:46 GMT+02:00 Kingsley Idehen :


> Bernard,
>
> As you can see from the announcement, this is a little transition period
> between the combination of DBpedia datasets and Linked Data deployment, via
> the DBpedia endpoint.
>
> This is simply a case of transition. The problem will be fixed, promptly.
> The URIs below should aid tracking and problem comprehension. Also, these
> URIs won't change following resolution of the TCN issues. Might even be
> fixed before you read this response.
>
> [1] http://dbpedia.org/ontology/TimePeriod
> [2] http://linkeddata.uriburner.com/c/9CU7GSFP -- the problem (which is
> basically TCN and legacy issues with text/rdf+n3 and text/turtle).
>
>
> Kingsley
>
>


Re: ANN: DBpedia Version 2014 released

2014-09-09 Thread Bernard Vatant
Hi

I would happily add the DBpedia ontology to the Linked Open Vocabularies
database, but I'm afraid to say that this version is still published in a
completely non-LOV-able way.

- The ontology namespace http://dbpedia.org/ontology/ is dereferenceable,
but what it yields is neither the ontology content nor even proper
metadata beyond owl:versionInfo "4.0-SNAPSHOT". Big deal :) In short, the
description you GET from this URI is basically useless.

- To get the ontology content, you have to make your way through the
documentation at http://wiki.dbpedia.org/Ontology2014, find out that the
ontology file is available at
http://downloads.dbpedia.org/2014/dbpedia_2014.owl.bz2 in the form of an
archive that you have to open to get to the RDF content itself. Which does
not provide more metadata than the above, BTW. Nor links to the prior
version or to the changes.

I'm really sad to see DBpedia setting such a bad-practice example for the
publication of its ontology. This point has already been made for previous
versions; I was hoping to find some improvement in this version.

I really think that it should be peanuts to fix this given the tremendous
task force and technical infrastructure supporting DBpedia. If there are
technical reasons preventing this ontology to be properly published at its
namespace, I'm curious to hear about them.

Best regards


2014-09-09 11:07 GMT+02:00 Christian Bizer :

> Hi all,
>
>
>
> we are happy to announce the release of DBpedia 2014.
>
>
> ...
>
>
>
> 2. the DBpedia ontology is enlarged and the number of infobox to ontology
> mappings has risen, leading to richer and cleaner data.
>
>
>
> The English version of the DBpedia knowledge base currently describes 4.58
> million things, out of which 4.22 million are classified in a consistent
> ontology (http://wiki.dbpedia.org/Ontology2014) ...
>

>
>
>
> --
>
> Prof. Dr. Christian Bizer
>
> Data and Web Science Group
>
> University of Mannheim, Germany
> ch...@informatik.uni-mannheim.de
>
> www.bizer.de
>
>
>



-- 

*Bernard Vatant*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
http://google.com/+BernardVatant

*Mondeca*
35 boulevard de Strasbourg 75010 Paris
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
--


Re: Call for Linked Research

2014-07-28 Thread Bernard Vatant
Hi Sarven

On point 2 : Publish your progress and work following the Linked Data
design principles. Create a URI for everything that is of some value to you
and may be to others e.g., hypothesis, workflow steps, variables,
provenance, results etc.

For such publishing to be really interoperable, all this should rely on
shared vocabularies. This important point is not obvious in your call.
Which vocabularies would you suggest?
The Semanticscience Integrated Ontology is a good candidate for this:
http://lov.okfn.org/dataset/lov/details/vocabulary_sio.html
https://code.google.com/p/semanticscience/wiki/SIO


2014-07-28 18:22 GMT+02:00 Sarven Capadisli :

> On 2014-07-28 16:16, Paul Houle wrote:
>
>> I'd add to all of this publishing the raw data,  source code,  and
>> industrialized procedures so that results are truly reproducible,  as
>> few results in science actually are.
>>
>
>  On Mon, Jul 28, 2014 at 9:01 AM, Sarven Capadisli 
>> wrote:
>>
>>> 2. Publish your progress and work following the Linked Data design
>>> principles. Create a URI for everything that is of some value to you and
>>> may
>>> be to others e.g., hypothesis, workflow steps, variables, provenance,
>>> results etc.
>>>
>>
>
> Agreed, but I think point 2 covers that. It was not my intention to give a
> complete coverage of the scientific method. Covering reproducibility is a
> given. It also goes for making sure that all of the publicly funded
> research material is accessible and free. And, one should not have to go
> through a 3rd party service ("gatekeepers") to get a hold of someone else's
> knowledge.
>
> If we can not have open and free access to someone else's research, or
> reproduce (within reasonable amount of effort), IMO, that "research" *does
> not exist*. That may not be a popular opinion out there, but I fail to see
> how such inaccessible work would qualify as scientific. Having to create an
> account on a publisher's site, and pay for the material, is not what I
> consider accessible. Whether that payment is withdrawn directly from my
> account or indirectly from the institution I'm with (which still comes out
> of my pocket).
>
> Any way, this is discussed in great detail elsewhere by a lot of smart
> folks. Like I said, I had different intentions in my proposal i.e., DIY.
> Control your own publishing on the Web. If you must, hand out a copy e.g.,
> PDF, to fulfil your h-index high-score.
>
> -Sarven
> http://csarven.ca/#i
>
>


-- 

*Bernard Vatant*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
http://google.com/+BernardVatant

*Mondeca*
35 boulevard de Strasbourg 75010 Paris
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
--


Call for participation : Vocabulary Carnival at SEMANTiCS 2014, Leipzig

2014-07-28 Thread Bernard Vatant
Apologies for cross-posting

Call for Participation : * Vocabulary Carnival at SEMANTiCS 2014*
http://www.semantics.cc/vocarnival

The Vocabulary Carnival at SEMANTiCS 2014 is a unique opportunity for
vocabulary publishers to showcase and share their work, meet the growing
community of vocabulary publishers and users, and build useful semantic,
technical and social links.

*What kind of vocabularies do we expect?*
• Any kind! For this event we use a very open definition of what a
vocabulary is. Ontologies, classifications, thesauri, concept and metadata
schemes, whatever their format, in RDF or not, are all welcome.

*What is your benefit of submitting?*
• attention: make people aware of your work
• feedback: a room full of other vocabulary creators will guarantee
expert feedback
• linking: discover links from your vocabulary to others on-site

*How to submit your Vocabulary to the Carnival?*
1. Make sure your vocabulary is accessible on the Web through a public URI.
2. Communicate your intention to participate at http://tinyurl.com/vocarnival
by joining and posting your vocabulary link and writing “See you at the
Carnival in Leipzig”, or send an email to bernard.vat...@mondeca.com with
the subject “Vocabulary Carnival”.
3. Register for SEMANTiCS: http://www.semantics.cc/registration/
4. Submissions will be handled on a first come, first served basis.

*Are there any technical requirements?*
No, not really. We expect your vocabulary (terminology, taxonomy, ontology,
etc.) to be hosted on the Web. If it is not Linked Data or uploaded to
http://lov.okfn.org, you can get technical help and advice at the
conference.

*At the SEMANTiCS 2014 Conference:*
• Prepare a poster (max format A0) presenting your vocabulary:
description, purpose, history, links to other vocabularies, datasets using
it, link to its publication page ...
• Present your poster in the dedicated space at the SEMANTiCS conference.
• Brace yourself to participate in the Vocabulary Minute Madness, where
every vocabulary will have one minute to convince of its usefulness and
quality. Sporting your vocabulary colors on this occasion is optional, but
will be much appreciated.
• An independent jury will select the best vocabulary poster and
presentation for the coveted position of SEMANTiCS Vocabulary Carnival
Prince.

*Vocabulary Carnival and LOV*

• If your vocabulary is already recorded at http://lov.okfn.org, check
its record to see if everything is OK. Ping the LOV curators if something
is missing or inaccurate, or if you brush up a brand new version for the
Carnival. If you think your vocabulary is LOV-able but not yet recorded,
submit its URI at http://lov.okfn.org/dataset/lov/suggest/
• If your vocabulary does not yet meet the technical requirements to be
included in LOV, and you wish it could, we can help you achieve that
during the Carnival.
*About the Vocabulary Carnival*
The Vocabulary Carnival will be hosted by the SEMANTiCS conference, Sep 4-5
2014, in Leipzig, Germany, in coordination with the Linked Open
Vocabularies project and the support of Mondeca.

More at http://www.semantics.cc/vocarnival
SEMANTiCS: http://www.semantics.cc/
Linked Open Vocabularies: http://lov.okfn.org
Mondeca: http://www.mondeca.com
Contact: Bernard Vatant, Senior Consultant, Mondeca, coordinator of
the LOV project. bernard.vat...@mondeca.com

-- 

*Bernard Vatant*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
 Skype : bernard.vatant
http://google.com/+BernardVatant

*Mondeca*
35 boulevard de Strasbourg 75010 Paris
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
--


Re: [help] semanticweb.org admin

2014-07-21 Thread Bernard Vatant
According to http://whois.net/whois/semanticweb.org the DNS registrant is
Stefan Decker, based somewhere in Galway, Ireland :)

2014-07-21 15:40 GMT+02:00 Gannon Dick :

> Apparently SemanticWeb.Org is run by a Mr./Ms(?) W. Ki. First name Wi.
> He/she is quite friendly, but prefers to speak RDF, which I often mistake
> for a Pidgin Klingon variant. Maybe it's just me.
>
> In any case, here is community residence home page:
> http://semanticweb.org/wiki/Main_Page
>
> I suspect the community may be located in the Lake Wobegon District of
> Erewhon, Minnesota because everybody seems to have above average expertise
> and Wi Ki makes executive decisions.
>
> HTH
>
> --Gannon
>
>
> 
> On Mon, 7/21/14, Maxim Kolchin  wrote:
>
>  Subject: Re: [help] semanticweb.org admin
>  To: "Stéphane Corlosquet" 
>  Cc: a.l.gent...@dcs.shef.ac.uk, "public-lod@w3.org" 
>  Date: Monday, July 21, 2014, 7:25 AM
>
>  Hi Annalisa,
>
>  Were you able to contact someone? I've sent emails to all the people
>  mentioned here, but no one responded to me.
>
>  Thank you in advance!
>  Maxim Kolchin
>  PhD Student
>  ITMO University (National Research
>  University)
>  E-mail: kolchin...@gmail.com
>  Tel.: +7 (911) 199-55-73
>
>
>  On Wed, Mar 26, 2014 at 6:17 PM, Stéphane
>  Corlosquet
>  
>  wrote:
>  > Knud Möller was the main
>  developer of this site:
>  > http://www.linkedin.com/in/knudmoeller /
>  https://twitter.com/knudmoeller
>  >
>  >
>  >
>  On Wed, Mar 26, 2014 at 9:34 AM, Anna Lisa Gentile
>  > 
>  wrote:
>  >>
>  >> Hi
>  guys, just a quick question.
>  >> Does
>  anyone know who to contact for technical questions about
>  >> http://data.semanticweb.org ?
>  >> The admin contact ad...@data.semanticweb.org
>  seems unreachable atm.
>  >> Thanks
>  you!
>  >> Annalisa
>  >>
>  >> --
>  >> Anna Lisa Gentile
>  >> Research Associate
>  >> Department of Computer Science
>  >> University of Sheffield
>  >> http://staffwww.dcs.shef.ac.uk/people/A.L.Gentile
>  >> office: +44 (0)114 222 1876
>  >
>  >
>  >
>  >
>  >
>  --
>  > Steph.
>
>
>


-- 

*Bernard Vatant*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
http://google.com/+BernardVatant

*Mondeca*
35 boulevard de Strasbourg 75010 Paris
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
--


Re: Encoding an incomplete date as xsd:dateTime

2014-06-25 Thread Bernard Vatant
Hi all

Are you aware of the Library of Congress Extended Date/Time Format (EDTF)?
There was an interesting presentation at DC 2013 about its implementation
in the real world:
http://dcevents.dublincore.org/IntConf/dc-2013/paper/view/183
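
Note also that XSD itself has truncated date types, xsd:gYear and
xsd:gYearMonth, so a year-only birth date needs no invented month and day.
A small Turtle sketch, with a hypothetical ex:birthYear property:

@prefix ex:  <http://example.org/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

ex:someone ex:birthYear "1997"^^xsd:gYear .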

Bernard


2014-06-25 16:00 GMT+02:00 Paul Houle :

> I've been thinking about date representations a lot lately.  Even if
> you're going to cobble something together out of the various XSD
> types,  it still helps to have a theory.
>
> A better underlying data type for dates is a time interval or set of
> time intervals.
>
> This represents the fact that many "events" happen over a time
> interval (such as a meeting or movie show time),  that we often only
> know a year or a day,  that things are measured on idiosyncratic time
> basis such as the fiscal years of various organizations,  that there
> are both practical and theoretical limits on both the precision and
> accuracy of time measurements.
>
> Intervals have their charms,  but if you include interval sets you can
> also represent concepts such as "Monday", "June 25" and "the third
> Tuesday of the month".
>
> Of course,  it creates trouble that there is no total ordering over
> intervals/interval sets,  but that's a fundamental problem to any
> flexible time representation.
>
> On Mon, Feb 10, 2014 at 9:37 AM, Heiko Paulheim
>  wrote:
> > Hi all,
> >
> > xsd:dateTime and xsd:date are used frequently for encoding dates in RDF,
> > e.g., for birthdays in the vcard ontology [1]. Is there any best
> practice to
> > encode incomplete date information, e.g., if only the birth *year* of a
> > person is known?
> >
> > As far as I can see, the XSD spec enforces the provision of all date
> > components [2], but "1997-01-01" seems like a semantically wrong way of
> > expressing that someone is born in 1997, but the author does not know
> > exactly when.
> >
> > Thanks,
> > Heiko
> >
> > [1] http://www.w3.org/2006/vcard/ns
> > [2] http://www.w3.org/TR/xmlschema-2/#dateTime
> > [3] http://www.w3.org/TR/xmlschema-2/#date
> >
> > --
> > Dr. Heiko Paulheim
> > Research Group Data and Web Science
> > University of Mannheim
> > Phone: +49 621 181 2646
> > B6, 26, Room C1.08
> > D-68159 Mannheim
> >
> > Mail: he...@informatik.uni-mannheim.de
> > Web: www.heikopaulheim.com
> >
> >
>
>
>
> --
> Paul Houle
> Expert on Freebase, DBpedia, Hadoop and RDF
> (607) 539 6254paul.houle on Skype   ontolo...@gmail.com
>
>


-- 

*Bernard Vatant*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
http://google.com/+BernardVatant

*Mondeca*
35 boulevard de Strasbourg 75010 Paris
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
--


Incorrect lang tags Re: Princeton WordNet RDF

2014-04-16 Thread Bernard Vatant
John

Looking at the data in more detail, it appears that the lang tags
systematically use ISO 639-2 codes (3-letter codes), even when an ISO
639-1 code exists and should be used, as per *BCP 47
<https://tools.ietf.org/html/bcp47>*.
See e.g.,
http://www.w3.org/RDF/Validator/rdfval?URI=http%3A%2F%2Fwordnet-rdf.princeton.edu%2Fwn31%2F109637345-n.rdf
The W3C validator is right, except when it is not up to date with the
latest ISO 639 values, as in:
Error: {W116} ISO-639 does not define language: 'zsm'. [Line = 53, Column =
50]

Nope, there is such a code in ISO 639-3 :)
See http://www.lingvoj.org/languages/tag-zsm.html
and source http://www-01.sil.org/iso639-3/documentation.asp?id=zsm
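
The rule of thumb from BCP 47 is to always use the shortest ISO 639 code
available. A small illustration with a hypothetical ex:entry:

@prefix ex:   <http://example.org/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

ex:entry rdfs:label "ville"@fra .  # wrong: a 639-1 code exists for French
ex:entry rdfs:label "ville"@fr .   # right per BCP 47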

Hope you can fix this easily!

Bernard

2014-04-16 15:30 GMT+02:00 John P. McCrae :

> Princeton University in collaboration with the Cognitive Interaction
> Technology
> Excellence Center of Bielefeld University are proud to announce the first
> RDF version of WordNet 3.1, now available at:
>
>  http://wordnet-rdf.princeton.edu/
>
> This version, based on the current development of the WordNet project,
> intends to be a nucleus for the Linguistic Linked Open Data cloud and the
> global
> WordNet projects. The data are accessible in five formats (HTML+RDFa,
> RDF/XML,
> Turtle, N-Triples and JSON-LD) as well as by querying a SPARQL endpoint.
> The model is itself based on the *lemon* model and follows the guidelines
> of the W3C OntoLex Community Group.
>
> We have incorporated direct links to the previous W3C
> WordNets, UBY, Lexvo.org, VerbNet as well as translations collected
> by the Open Multilingual WordNet Project. Furthermore, we include links
> within the resource for previous versions of WordNets to further enable
> linking. We are interested in incorporating any resources that are linked
> to
> WordNet and would greatly appreciate suggestions.
>
> Regards,
> John P. McCrae, Christiane Fellbaum & Philipp Cimiano
>



-- 

*Bernard Vatant*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
http://google.com/+BernardVatant

*Mondeca*
35 boulevard de Strasbourg 75010 Paris
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
--


Re: Princeton WordNet RDF

2014-04-16 Thread Bernard Vatant
Indeed! Well done, and it deserves a matching good ontology as the icing on
the cake :)

Beyond the recipes pointed out by Martin, for inclusion in LOV it lacks a
good old owl:Ontology element with minimal metadata, as described in
http://lov.okfn.org/dataset/lov/Recommendations_Vocabulary_Design.pdf.
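
Concretely, something as small as this sketch (all values illustrative)
would do:

@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix dct: <http://purl.org/dc/terms/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

<http://wordnet-rdf.princeton.edu/ontology>
    a owl:Ontology ;
    dct:title "WordNet RDF ontology"@en ;
    dct:description "OntoLex/lemon-based ontology for WordNet 3.1 RDF."@en ;
    dct:modified "2014-04-16"^^xsd:date ;
    owl:versionInfo "3.1" .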

Best regards

Bernard


2014-04-16 16:36 GMT+02:00 martin.h...@ebusiness-unibw.org <
martin.h...@ebusiness-unibw.org>:

> Hi,
> thanks - well done! You could make it a little better by deploying the
> ontology at
>
> http://wordnet-rdf.princeton.edu/ontology
>
> according to current best practices (HTML for humans, RDF in various
> syntaxes for machines). Currently, only RDF/XML is served, even if you
> explicitly request text/html, you end up with RDF/XML, which most browsers
> cannot handle well.
>
> Example:
>
> curl -I -H "Accept: text/html"  http://wordnet-rdf.princeton.edu/ontology
>
> returns:
>
> HTTP/1.1 200 OK
> Date: Wed, 16 Apr 2014 14:20:37 GMT
> Server: Apache/2.2.15 (Red Hat)
> Content-Length: 31660
> Connection: close
> Content-Type: application/rdf+xml
>
> It would be nice if you would apply the recipes for HTML and RDF from
> http://www.w3.org/TR/swbp-vocab-pub/.
>
> Also, as far as I can see, the JSON deployment could be improved by
> implementing
>
> http://www.w3.org/TR/json-ld/#interpreting-json-as-json-ld,
>
> as a IRI to a valid JSON-LD document in a HTTP Link Header field is
> currently missing.
>
> Again, thanks for your valuable work!
>
> Martin
> ---
> martin hepp
> e-business & web science research group
> universitaet der bundeswehr muenchen
>
> e-mail:  martin.h...@unibw.de
> phone:   +49-(0)89-6004-4217
> fax: +49-(0)89-6004-4620
> www: http://www.unibw.de/ebusiness/ (group)
>  http://www.heppnetz.de/ (personal)
> skype:   mfhepp
> twitter: mfhepp
>
> Check out GoodRelations for E-Commerce on the Web of Linked Data!
> =
> * Project Main Page: http://purl.org/goodrelations/
>
>
>
>
> On 16 Apr 2014, at 15:30, John P. McCrae 
> wrote:
>
> > Princeton University in collaboration with the Cognitive Interaction
> Technology
> > Excellence Center of Bielefeld University are proud to announce the first
> > RDF version of WordNet 3.1, now available at:
> >
> >  http://wordnet-rdf.princeton.edu/
> >
> > This version, based on the current development of the WordNet project,
> > intends to be a nucleus for the Linguistic Linked Open Data cloud and
> the global
> > WordNet projects. The data are accessible in five formats (HTML+RDFa,
> RDF/XML,
> > Turtle, N-Triples and JSON-LD) as well as by querying a SPARQL endpoint.
> > The model is itself based on the lemon model and follows the guidelines
> > of the W3C OntoLex Community Group.
> >
> > We have incorporated direct links to the previous W3C
> > WordNets, UBY, Lexvo.org, VerbNet as well as translations collected
> > by the Open Multilingual WordNet Project. Furthermore, we include links
> > within the resource for previous versions of WordNets to further enable
> > linking. We are interested in incorporating any resources that are
> linked to
> > WordNet and would greatly appreciate suggestions.
> >
> > Regards,
> > John P. McCrae, Christiane Fellbaum & Philipp Cimiano
>
>
>


-- 

*Bernard Vatant*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
http://google.com/+BernardVatant

*Mondeca*
35 boulevard de Strasbourg 75010 Paris
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
--


httpRange-14 is an empty Turtle ...

2014-04-11 Thread Bernard Vatant
... according to http://dbpedia.org/data/HTTPRange-14.n3

We already knew it was turtles all the way down, but this is *news*.

Thanks to danbri for the pointer discovered in a side discussion about a
raging debate (too serious for a Friday evening) [1]

Have a great week-end.

[1] https://plus.google.com/+BernardVatant/posts/aRnC6wnZ9J5
-- 

*Bernard Vatant*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
 Skype : bernard.vatant
http://google.com/+BernardVatant

*Mondeca*
35 boulevard de Strasbourg 75010 Paris
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
--


Re: Five Stars of Linked Data Vocabulary Use

2014-04-11 Thread Bernard Vatant
Hi Pascal

Interesting piece, forwarded to the Linked Open Vocabularies community [1].
Since there is no mention of the LOV project in the article, except
indirectly through VOAF, which was developed for this project, I wonder if
you were aware of the proposal I made two years ago [2]. At first sight, a
Five-Star dataset on the scale you propose would use only Five-Star
vocabularies as defined by [2]. I've not looked in detail at whether the
matching would work for the other levels.

[1] https://plus.google.com/u/0/+BernardVatant/posts/7kcmJ7ryGU2
[2]
http://bvatant.blogspot.fr/2012/02/is-your-linked-data-vocabulary-5-star_9588.html


2014-04-11 3:45 GMT+02:00 Pascal Hitzler :

> An opinion piece re. linked data quality and reusability:
>
> http://www.semantic-web-journal.net/content/five-
> stars-linked-data-vocabulary-use
>
> (Semantic Web journal)
>
> All comments and feedback welcome.
>
> Best Regards,
>
> Pascal.
> --
> Prof. Dr. Pascal Hitzler
> Dept. of Computer Science, Wright State University, Dayton, OH
> pas...@pascal-hitzler.de   http://www.pascal-hitzler.de
> Semantic Web Textbook: http://www.semantic-web-book.org
> Semantic Web Journal: http://www.semantic-web-journal.net
>
>
>


-- 

*Bernard Vatant*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
http://google.com/+BernardVatant

*Mondeca*
35 boulevard de Strasbourg 75010 Paris
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
--


Re: Using OWL ontology as data

2014-03-03 Thread Bernard Vatant
Hi Julien

Protégé will indeed not let you create an owl:ObjectProperty with range
owl:Class, because that puts you in OWL Full.
It will let you go the other way round, e.g. create an annotation property
that you attach to DBpedia classes:
oll:hasLemma a owl:AnnotationProperty.
dbonto:City   oll:hasLemma "ville"@fr.

Not sure that's the way you want to go, though
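
Spelled out with prefixes so it parses as a standalone Turtle document (the
oll namespace URI is hypothetical):

@prefix owl:    <http://www.w3.org/2002/07/owl#> .
@prefix oll:    <http://example.org/oll#> .   # hypothetical namespace
@prefix dbonto: <http://dbpedia.org/ontology/> .

# Annotation axioms are ignored by the OWL DL semantics,
# so this stays within OWL DL.
oll:hasLemma a owl:AnnotationProperty .
dbonto:City oll:hasLemma "ville"@fr .
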


2014-03-03 15:57 GMT+01:00 Julien Plu :

> Hello,
>
> I am trying to create an ontology to lexicalize the French language. In my
> vocabulary I link a lemma to its meaning, which comes from the DBpedia
> ontology; more precisely, a meaning is a DBpedia class. For example:
>
> oll:ville_1
> a oll:Lemma ;
> oll:hasMeaning dbonto:City .
>
> Unfortunately, when I try my ontology with Protégé, it tells me this
> instance is an error. The property "oll:hasMeaning" has a range which is
> "owl:Class".
>
> So my question is: is it possible to do that? If yes, the error certainly
> comes from a bug in Protégé; if no, in which way can I define this?
>
> Don't hesitate to ask if I was not clear enough.
>
> Thanks in advance for any help.
>
> Best.
>
> Julien.
>



-- 

*Bernard Vatant*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
http://google.com/+BernardVatant

*Mondeca*
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
--


Re: LOD publishing question

2014-02-03 Thread Bernard Vatant
description.
>> >
>> > Good luck!
>> > Hugh
>> >
>> > On 28 Jan 2014, at 04:12, WILDER, COLIN 
>> wrote:
>> >
>> > > Another question to you very helpful people-
>> > >
>> > > 
>> > >
>> > > Our LOD working group is having trouble publishing our data (see
>> email below) in RDF form. Our programmer, a master's student, who is
>> working under the supervision of myself and a computer science professor,
>> has mapped sample data into RDF, has the triplestore on a D2RQ server
>> (software) on our server and has set up a SPARQL end-point on the latter.
>> But he has been unsuccessful so far getting 3 candidate semantic web search
>> engines (Falcons, Swoogle and Sindice) to  be able to find our data when he
>> puts a test query in to them. He has tried communicating with the people
>> who run these, but to little avail. Any suggestions about sources of
>> information, pointers, best practices for this actual process of publishing
>> LOD? Or, if you know of problems with any of those three search engines and
>> would suggest a different candidate, that would be great too.
>> > >
>> > > Thanks again,
>> > >
>> > > Colin Wilder
>> > >
>> > >
>> > > From: WILDER, COLIN [mailto:wilde...@mailbox.sc.edu]
>> > > Sent: Thursday, January 16, 2014 11:51 AM
>> > > To: 'public-lod@w3.org'
>> > > Subject: LOD for historical humanities information about people and
>> texts
>> > >
>> > > To the many people who have kindly responded to my recent email:
>> > >
>> > > Thanks for your suggestions and clarifying questions. To explain a
>> bit better, we have a data curation platform called RL, which is a large,
>> complex web-based MySQL database designed for users to be able to simply
>> input, store and share data about social and textual networks with each
>> other, or to share it globally in RL's data commons. The data involved are
>> individual data items, such as info about one person's name, age, a book
>> title, a specific social relationship, etc. The entity types (in the
>> ordinary-language sense of actors and objects, not in the database tabular
>> sense) can be seen at http://tundra.csd.sc.edu/rol/browse.php. The data
>> commons in RL is basically a subset of user data that users have elected
>> (irrevocably) to share with all other users of the system. NB there is a
>> lot of dummy data in the data commons right now because of testing.
>> > >
>> > > We are designing an expansion of RL's functionality so as to publish
>> data from the data commons as LOD, so I am doing some preliminary work to
>> assess feasibility and fit by matching up our entity types with
>> RDF vocabularies. Here is what I have so far. First are the entity(ies) and
>> relationships, followed by the appropriate vocabularies:
>> > >
>> > > 1.   Persons, social relations: FOAF, BIO. The "Catalogus
>> Professorum Lipsiensis" or CPL(
>> http://svn.aksw.org/papers/2010/ISWC_CP/public.pdf) looks enormously
>> useful for connecting academics (people), their relations and their books.
>>  But, I cannot seem to get any info page or specification page to load,
>> making me worry that it's dead.
>> > > 2.   Membership in organizations: ORG
>> > > 3.   Enrollment in an academic course (e.g. a lecture course):
>> ??? maybe use a RDF container or RDF collection type of resource to list
>> all students enrolled in a certain course?
>> > > 4.   Travel: ??? We are trying to encode trips, in which one or
>> more people leave one place at one time and arrive at another place at
>> another time. This thus links people, places and times.
>> > > 5.   Texts - i.e. old editions of books and manuscripts: Dublin
>> Core, Bibframe. Use FRBR to distinguish sub- and pre-edition levels of
>> manuscripts, works and ideas.
>> > > 6.   Relationship among texts, including intertexts and
>> citations: Bibliographic ontology (Bibo)
>> > > 7.   Collections of texts in historical library catalogs, e.g.
>> from centuries ago: the DC Collection AP. Maybe also the Bibliographic
>> Reference Ontology (BiRO)?
>> > >
>> > > My understanding is that the Linked Open Vocabulary cloud (LOV) is a
>> useful tool for finding relevant ontologies. The Vocabulary of Interlinked
>> Datasets (VoID) seems more like underlying infrastructure - the tool to
>> translate and link data items in a dataset written in one vocabulary to
>> data items in a set written in another.
>> > >
>> > > Any further help or clarifications are much appreciated. Thanks again-
>> > >
>> > > Colin
>> > >
>> > >
>> > > 
>> > > Dr. Colin F. Wilder
>> > > Associate Director
>> > > Center for Digital Humanities (website; projects page)
>> > > Thomas Cooper Library, University of South Carolina
>> > > 1322 Greene St., Columbia, SC 29208
>> > > Phones: office (803) 777-2810 & mobile (603) 831-3998
>> > > Emails: wilde...@mailbox.sc.edu & colinwil...@gmail.com
>> > > open office hours (use week view in upper right)
>> > > frango ut patefaciam
>> >
>> > --
>> > Hugh Glaser
>> >20 Portchester Rise
>> >Eastleigh
>> >SO50 4QS
>> > Mobile: +44 75 9533 4155, Home: +44 23 8061 5652
>> >
>> >
>> >
>> >
>>
>> --
>> Hugh Glaser
>>20 Portchester Rise
>>Eastleigh
>>SO50 4QS
>> Mobile: +44 75 9533 4155, Home: +44 23 8061 5652
>>
>>
>>
>
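
For what it's worth, one standard discovery aid for a setup like the one
Colin describes is a VoID description published next to the endpoint. A
sketch, where all URIs below are hypothetical:

@prefix void:    <http://rdfs.org/ns/void#> .
@prefix dcterms: <http://purl.org/dc/terms/> .

<http://tundra.csd.sc.edu/rol/void#dataset>
    a void:Dataset ;
    dcterms:title "RL data commons" ;
    void:sparqlEndpoint <http://tundra.csd.sc.edu/rol/sparql> ;
    void:exampleResource <http://tundra.csd.sc.edu/rol/resource/example> .

Crawlers that understand VoID can then find the endpoint and sample
resources without any out-of-band communication.
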


-- 

*Bernard Vatant*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
http://google.com/+BernardVatant

*Mondeca*
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
--


Re: Is the same video but in different encodings the owl:sameAs?

2013-12-05 Thread Bernard Vatant
Hi all

Reading the thread, I was also thinking about a FRBR-ish approach.
Maybe you will not use the exact FRBR classes, but the spirit of it
See
http://bvatant.blogspot.fr/2013/07/frbr-and-beyond-its-abstraction-all-way.html


2013/12/5 Damian Steer 

>
> On 5 Dec 2013, at 13:52, Thomas Steiner  wrote:
>
> > Dear Public-LOD,
> >
> > Thank you all for your very helpful replies. Following your joint
> > arguments, owl:sameAs is _not_ an option then.
>
> You could use dc:hasFormat to link them:
>
> "A related resource that is substantially the same as the pre-existing
> described resource, but in another format." [1]
>
> <http://ex.org/video.mp4> dc:hasFormat <http://ex.org/video.ogv> .
>
> 
>
> > The most reasonable
> > thing to do seems to introduce some sort of proxy object, on top of
> > which statements can be made.
>
> I prefer this. It feels FRBR-ish [2][3] although that's not quite right.
> (Are the individual videos items, and the proxy object a manifestation?)
>
> Damian
>
> [1] <http://dublincore.org/documents/dcmi-terms/#terms-hasFormat>
> [2] <
> https://en.wikipedia.org/wiki/Functional_Requirements_for_Bibliographic_Records
> >
> [3] <http://vocab.org/frbr/core.html>
>



-- 

*Bernard Vatant*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
http://google.com/+BernardVatant

*Mondeca*
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
--


Re: YASGUI: Web-based SPARQL client with bells ‘n wistles

2013-08-20 Thread Bernard Vatant
Hello Barry

I had a reminder today that I never answered the question below, and I
am very late indeed !

Properties and classes of all vocabularies in LOV are aggregated in a
triple store, whose SPARQL endpoint is at
http://lov.okfn.org/endpoint/lov_aggregator

This is quite "raw data" but you should find everything you need in there.
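
For instance, a first cut at the autocompletion feed Barry asks about below
could be a query of this shape (the exact typing of terms in the aggregator
is an assumption):

PREFIX owl:  <http://www.w3.org/2002/07/owl#>
PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT DISTINCT ?term ?label
WHERE {
  { ?term a owl:Class } UNION { ?term a rdf:Property }
  OPTIONAL { ?term rdfs:label ?label }
}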

Otherwise you can also use the new API
http://lov.okfn.org/dataset/lov/api/v1/vocabs
which for each vocabulary provides the prefix and a link to the latest
version stored.

Hope that helps

Bernard

From: Barry Norton
Date: Sat, 06 Jul 2013 11:27:46 +0100

Bernard, does LOV keep a cache of properties and classes?

I'd really like to see resource auto-completion in Web-based tools like
YASGUI, but a cache is clearly needed for this to be feasible.

Barry


Re: Linked Data Glossary is published!

2013-06-27 Thread Bernard Vatant
Hi Bernadette

Great job. What about a publication of the glossary as linked data? In SKOS
for example :)
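
A glossary entry is tiny in SKOS. A sketch, with a hypothetical namespace
and a paraphrased definition:

@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ldg:  <http://example.org/ld-glossary#> .   # hypothetical

ldg:linked-data a skos:Concept ;
    skos:prefLabel "Linked Data"@en ;
    skos:definition "A set of best practices for publishing and interlinking structured data on the Web."@en .
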

Bernard

*Bernard Vatant
*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
 Skype : bernard.vatant
Blog : the wheel and the hub <http://bvatant.blogspot.com>
Linked Open Vocabularies : lov.okfn.org

*Mondeca**  **   *
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
--
Meet us during the European Open Data Week <http://opendataweek.org> in
Marseille (June 25-28)




2013/6/27 Bernadette Hyland 

> Hi,
> On behalf of the editors, I'm pleased to announce the publication of the
> peer-reviewed *Linked Data Glossary* published as a W3C Working Group
> Note effective 27-June-2013.[1]
>
> We hope this document serves as a useful glossary containing terms defined
> and used to describe Linked Data, and its associated vocabularies and best
> practices for publishing structured data on the Web.
>
> The LD Glossary is intended to help foster constructive discussions
> between the Web 2.0 and 3.0 developer communities, encouraging all of us
> appreciate the application of different technologies for different use
> cases.  We hope the glossary serves as a useful starting point in your
> discussions about data sharing on the Web.
>
> Finally, the editors are grateful to David Wood for contributing the
> initial glossary terms from Linking Government 
> Data<http://www.springer.com/computer/database+management+%26+information+retrieval/book/978-1-4614-1766-8>,
> (Springer 2011). The editors wish to also thank members of the Government
> Linked Data Working Group <http://www.w3.org/2011/gld/> with special
> thanks to the reviewers and contributors: Thomas Baker, Hadley Beeman,
> Richard Cyganiak, Michael Hausenblas, Sandro Hawke, Benedikt Kaempgen,
> James McKinney, Marios Meimaris, Jindrich Mynarz and Dave Reynolds who
> diligently iterated the W3C Linked Data Glossary in order to create a
> foundation of terms upon which to discuss and better describe the Web of
> Data.  If there is anyone that the editors inadvertently overlooked in this
> list, please accept our apologies.
>
> Thank you one & all!
>
> Sincerely,
> Bernadette 
> Hyland<http://3roundstones.com/about-us/leadership-team/bernadette-hyland/>,
> 3 Round Stones <http://3roundstones.com/> Ghislain 
> Atemezing<http://www.eurecom.fr/%7Eatemezin>,
> EURECOM <http://www.eurecom.fr> Michael Pendleton, US Environmental
> Protection Agency <http://www.epa.gov> Biplav Srivastava, 
> IBM<http://www.ibm.com/in/research/>
>
> W3C Government Linked Data Working Group
> Charter: http://www.w3.org/2011/gld/
>
> [1] http://www.w3.org/TR/ld-glossary/
>


Re: Are Topic Maps Linked Data?

2013-06-23 Thread Bernard Vatant
Back in 2001-2002 we had quite a lot of passionate interaction between the
Topic Maps and RDF working groups
My preferred presentation at that time was the one by Nikita Ogievetsky
wearing his "Semantic Web Glasses", various versions of the concept are
still on line at http://www.cogx.com/?si=urn:cogx:resource:swg.
Lars Marius Garshol also made quite good comparisons of the two piles of
standards, see http://www.garshol.priv.no/blog/92.html

Now when the Linked Data "brand" started around 2006, unfortunately Topic
Maps were already more or less in a deadlock (for all sorts of reasons
off-topic here - no pun), so the question "Are Topic Maps Linked Data?" is
a sort of de facto anachronism.

That said, well, yes, of course, Topic Maps is a technology meant to link
data. It was even its core business. Jim Mason [1] (if I remember
correctly) used to say that XML was SGML with good marketing, maybe Linked
Data is Topic Maps with good marketing :)

Bernard

[1] http://www.open-std.org/jtc1/sc34old/repository/0688.pdf

*Bernard Vatant
*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
 Skype : bernard.vatant
Blog : the wheel and the hub <http://bvatant.blogspot.com>
Linked Open Vocabularies : lov.okfn.org

*Mondeca**  **   *
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
--
Meet us during the European Open Data Week <http://opendataweek.org> in
Marseille (June 25-28)




2013/6/23 rich...@light.demon.co.uk 

> Didn't Steve Pepper do an analysis which mapped Topic Maps to RDF a decade
> or so back?
>
> Richard Light
> Sent from my phone
>
> - Reply message -
> From: "Dan Brickley" 
> To: "public-lod" 
> Subject: Are Topic Maps Linked Data?
> Date: Sun, Jun 23, 2013 15:04
>
>
> Just wondering,
>
> Dan
>


RDF, Linked Data etc : please ping me when it's over ...

2013-06-19 Thread Bernard Vatant
I guess I'm not the only one : I'm about to put a filter rule on my inbox

"from public-lod" AND (contains "RDF" and "Linked Data") => trash

No one having a decent full-time job and normal life can have the bandwidth
(not even speaking of the will or interest) to follow those threads. It's
too bad because there is certainly a lot of amazing stuff I miss.

So please ping me when it's over, and if someone can write a summary and
possibly draw useful conclusions, please do so and post it on a stable URI
where everything could be parsed in a single piece of document.

Note : anyone willing to do that is both a saint and a fool :)

Have fun

Bernard


*Bernard Vatant
*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
Blog : the wheel and the hub <http://bvatant.blogspot.com>
Linked Open Vocabularies : lov.okfn.org

*Mondeca**  **   *
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
--
Meet us during the European Open Data Week <http://opendataweek.org> in
Marseille (June 25-28)


Re: RDF and CIDOC CRM

2013-06-14 Thread Bernard Vatant
Hi all

I'm a bit lost with all those avatars of the CIDOC-CRM ontology published under
various URIs, under various namespaces and confusing redirections

In LOV [1]  we have registered two versions and two different namespaces, a
version 5.01 in OWL [2] and the more recent one 5.04 in RDFS [3],
dereferencing to [6].
The draft mentioned by Kingsley [4] is a more recent version 5.1; the
xml:base it declares is yet another one [5], which actually dereferences to
[6] as [3] does. And the "Erlangen manifestation" [7] mentioned by Richard
is yet another avatar, apparently also of version 5.04.

You are lost already? Imagine the poor linked data publisher wanting to use
the latest, authoritative version of the ontology ...

Since none of those vocabularies in their RDF form exposes clear
provenance metadata, and more recent versions do not mention the previous
one(s), you have to look at the comments in [4]:

"This is the encoding approved by CRM-SIG in the meeting 21/11/2012 as the
current version for the CIDOC CRM namespace. Note that this is NOT a
definition of the CIDOC CRM, but an encoding derived from the authoritative
release of the CIDOC CRM v5.1 (draft) May 2013 on
http://www.cidoc-crm.org/official_release_cidoc.html";

And from this html page I understand that indeed the 5.04 version is the
current official one, 5.1 is just a draft, so after all both namespaces [3]
and [5]  redirecting to the current "official" version might be a feature
and not a bug, but the above comment in the draft about "the current
version for the CIDOC CRM namespace" is confusing at least ...

If editors of those various versions are around, could they please step
forward and clarify what should be used as of today as the authoritative
URI and namespace for this important ontology, so that potential users do
not need, beyond mastering RDF technologies, a degree in hermeneutics :)

Thanks for your time

Bernard


[1] http://lov.okfn.org/dataset/lov/details/vocabulary_crm.html
[2] http://purl.org/NET/cidoc-crm/core
[3] http://www.cidoc-crm.org/rdfs/cidoc-crm
[4] http://www.cidoc-crm.org/rdfs/cidoc_crm_v5.1-draft-2013May.rdfs
[5] http://www.cidoc-crm.org/cidoc-crm/
[6] http://www.cidoc-crm.org/rdfs/5.0.4/cidoc-crm.rdf
[7] http://erlangen-crm.org/current/


*Bernard Vatant
*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
Blog : the wheel and the hub <http://bvatant.blogspot.com>
Linked Open Vocabularies : lov.okfn.org

*Mondeca**  **   *
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
--

Mondeca is selected to present at ReInvent Law,
London<http://reinventlawlondon.com/> on
June 14th

Meet us during the European Open Data Week <http://opendataweek.org> in
Marseille (June 25-28)


Re: Ending the Linked Data debate -- PLEASE VOTE *NOW*!

2013-06-14 Thread Bernard Vatant
Some speak about "linked data", and others speak about "linked" and "data".
How can they possibly agree?

This is really a very old debate, and it can go forever
"A white horse is not a horse"
http://www.thezensite.com/ZenEssays/Philosophical/Horse.html

Bernard


2013/6/14 Gregg Reynolds 

> On Thu, Jun 13, 2013 at 12:20 PM, David Booth  wrote:
> >  Original Message 
> > Subject: Ending the Linked Data debate -- PLEASE VOTE *NOW*!
> > Date: Thu, 13 Jun 2013 13:19:27 -0400
> > From: David Booth 
> > To: community, Linked 
>
> >
> > In normal usage within the Semantic Web community,
> > does the term "Linked Data" imply the use of RDF?
> >
> > PLEASE VOTE NOW at
>
> Hate to rain on your parade, but I can't resist, since I've spent the
> past two years researching survey design, validity, etc. which I
> pretty much hated all the way, but you've innocently given me a chance
> to use some of that knowledge.  The likelihood that this will question
> will produce valid data that can be unambiguously interpreted is
> pretty close to zero.  It's a pretty well-established fact that even
> the simplest questions - e.g. how many children do you have? - will be
> misinterpreted by an astonishingly large number of respondents
> (approaching 50% if I recall).  In this case, given the intrinsic
> ambiguity of the question ("normal", "imply", etc.) and the high
> degree of education and intelligence of the respondents, I predict
> that if 50 people respond there will be at least 51 different
> interpretations of the question.  In other words they are all highly
> likely to be responding to different questions.  Which means you won't
> be able to draw any valid conclusions.
>
> Here's an obvious example:  is "normal usage" descriptive or
> evaluative?  In other words, does it refer to the fact of how people
> do use it, or to a norm of how they ought to use it?  Somebody
> strongly committed one way or the other could claim that "normal"
> usage is just the usage they favor - people who don't in fact use it
> that way are weirdos and deviants, even if they're in the majority.
> So your question is inherently ambiguous, and that's not counting
> problems with "Semantic Web community", etc.
>
> Besides, you omitted the "Refused to answer" option. ;)
>
> -Gregg
>
>


Re: Linking to non-RDF datasets

2013-03-12 Thread Bernard Vatant
Hi Alasdair

Some results from http://lov.okfn.org/dataset/lov/search/#s=dataset

http://purl.org/dc/dcmitype/Dataset is quite generic, but does not have any
attached properties.
http://purl.org/ctic/dcat#Dataset is a subclass of the above, is intended
to represent datasets in a catalogue, and is not limited to RDF datasets

... and many more I'll let you explore
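
For instance, a non-RDF download could be described along these lines with
the W3C DCAT draft Alasdair mentions (all example.org URIs hypothetical):

@prefix dcat:    <http://www.w3.org/ns/dcat#> .
@prefix dcterms: <http://purl.org/dc/terms/> .

<http://example.org/dataset/compound-structures>
    a dcat:Dataset ;
    dcterms:title "Compound structures (SDF dump)" ;
    dcat:distribution [
        a dcat:Distribution ;
        dcat:downloadURL <http://example.org/dump/structures.sdf.gz> ;
        dcterms:format "chemical/x-mdl-sdfile"
    ] .
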

Hope that helps

Bernard


2013/3/12 Alasdair J G Gray 

> Hi All,
>
> We are making extensive use of the VoID vocabulary [1] in the Open PHACTS
> project [2] to describe our datasets.  We are currently deciding how to
> model a recurring use case of needing to describe non-RDF datasets and
> manage linksets to them.
>
> In the VoID vocabulary, a dataset is defined to be [3]
>
> A set of RDF triples that are published, maintained or aggregated by a
> single provider.
>
> Since all predicates are defined with a domain/range of void:Dataset, this
> would mean that it would be incorrect to use them for any dataset that is
> not a set of RDF triples. However, this usage is becoming common.
>
> Should we go ahead and use the predicates despite this inaccurate
> interpretation of the non-RDF dataset?
>
> Is there another vocabulary that allows for the modelling of linksets that
> does not restrict the dataset to a set of RDF triples? I am aware of DCAT
> [4] but do not see suitable linking predicates.
>
> Should we develop a set of super-properties that do not have the
> domain/range restrictions?
>
> Thanks,
>
> Alasdair
>
> [1] http://www.w3.org/TR/void/
> [2] http://www.openphacts.org/
> [3] http://vocab.deri.ie/void#Dataset
> [4] http://www.w3.org/TR/vocab-dcat/
>
>
>   Dr Alasdair J G Gray
> Research Associate
> alasdair.g...@manchester.ac.uk 
> +44 161 275 0145
>
> http://www.cs.man.ac.uk/~graya/
>
> Please consider the environment before printing this email.
>
>
>
>


-- 
*Bernard Vatant
*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
Blog : the wheel and the hub <http://blog.hubjects.com/>

*Mondeca**  **   *
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
--

Meet us at Documation <http://www.documation.fr/> in Paris, March 20-21


Re: How can I express containment/composition?

2013-02-21 Thread Bernard Vatant
> A Country consists of Provinces
> A Province consists of Municipalities
>
> I thought this should be straightforward because this is a common and
> logical kind of relationship, but I could not find a vocabulary which
> allows
> be to make this kind of statement. Perhaps I am bad at searching, or maybe
> I
> did not use the right words.
>
> I did find this document:
> http://www.w3.org/2001/sw/BestPractices/OEP/SimplePartWhole/ ("Simple
> part-whole relations in OWL Ontologies"). It explains that OWL has no
> direct
> support for this kind of relationship and it goes on to give examples on
> how
> one can create ontologies that do support the relationship in one way or
> the
> other.
>
> Is there a ready to use ontology/vocabulary out there that can help me
> express containment/composition?
>
> Thanks in advance,
> Frans
>
>
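
The W3C note referenced above boils down to patterns like the following; a
sketch with hypothetical names, using a transitive part-of property:

@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/admin#> .   # hypothetical

ex:partOf a owl:ObjectProperty, owl:TransitiveProperty .

# Every municipality is part of some province ...
ex:Municipality rdfs:subClassOf
    [ a owl:Restriction ;
      owl:onProperty ex:partOf ;
      owl:someValuesFrom ex:Province ] .

# ... and every province part of some country.
ex:Province rdfs:subClassOf
    [ a owl:Restriction ;
      owl:onProperty ex:partOf ;
      owl:someValuesFrom ex:Country ] .
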
-- 
*Bernard Vatant
*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
Blog : the wheel and the hub <http://blog.hubjects.com/>

*Mondeca**  **   *
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
--

Meet us at Documation <http://www.documation.fr/> in Paris, March 20-21


Re: I've built www.vocabs.org - A community driven website that allows you to build RDF vocabularies

2013-02-15 Thread Bernard Vatant
Hi Luca

Welcome to the vocabularies galaxy. I cc the public-vocabs list, which
might be more relevant for this topic.

Seems to me the best way to learn to write (a little) is to read (a lot).
Books, apps, tutorials and so on are fine. But above all, read vocabularies
to figure out from examples done by people who know just a bit more than
you do. You have more than 300 examples listed in the Linked Open
Vocabularies data base [1], including the famous wine ontology developed
along with the OWL recommandation [2]
As for your idea of collaborative construction, I think it's worth a try;
you'll see how it flies. The vocabularies are a critical part of the linked
data ecosystem (their genetic code, sort of), raising complex technical and
social issues, and a global governance model still to be invented. Next
Dublin Core conference in September will focus on this issue [3]

Regarding the complexity of building and publishing a vocabulary, after
years of struggling with Protégé and other ontology editors, I've come to
the point that if you're not building a complex ontology with thousands of
axioms, but a basic vocabulary with typically 10-20 classes and a similar
number of properties, without fancy logical constructs, you just need a
good text editor and learn a bit of Turtle, and for publication rely on
public stylesheets or cool services such as Parrot.
The only tricky (just a bit) part is the server configuration for
content negotiation, but that's not a very big deal either.

It figures that we (the linked data vocabularies community) definitely
should provide a good tutorial on "Publish your simple vocabularies using
Turtle". I should put that on my backburner, actually.

Best regards

[1] http://lov.okfn.org/dataset/lov
[2] http://lov.okfn.org/dataset/lov/details/vocabulary_vin.html
[3] http://dcevents.dublincore.org/IntConf/dc-2013

2013/2/14 Luca Matteis 

> Dear all,
>
> It's my first time here, but I've been attracted to the Linked data
> initiative for quite a while now. A couple of weeks ago I needed to build
> my first RDF vocabulary.. I cannot tell you how hard this process was for
> an RDF newbie as myself. I had to read a couple of books, and read a lot
> all over the web before I could get a grasp of it all.
>
> Even after understanding the linked-data context, and how the technologies
> involved worked, I was still left with a set of tools that I thought were
> pretty limited. I had to download apps, that did or didn't work. And learn
> various different programming APIs to generate the RDF that I wanted. I can
> only imagine the difficulty a non-techie person would have when trying to
> build a vocabulary.
>
> Another issue that I confronted when looking for existing vocabularies,
> was that most of the time they were created by a single entity (a group of
> people) that knows about the lexicon of the subject. I think this is quite
> limited as well. A vocabulary should be open and agreed upon a group of
> people. It should be community-driven. It should be crowd-sourced and
> validated, the same way correct answers are validated on Stackoverflow.
>
> So in a couple of days I built http://www.vocabs.org/ that does exactly
> this. It allows people, with very little technical experience, to start
> creating vocabularies (entirely through the web-interface). Not only that,
> but different users can then join and comment, and add new vocabulary
> terms. An example of this: http://www.vocabs.org/term/WineOntology (*hint*
> click "download" at the top).
>
> I was just wondering what the Semantic community thinks of this idea. I
> hope it's clear what I'm trying to achieve here, but maybe a better
> explanation would be here: http://www.vocabs.org/about
>
> Thanks!
>



-- 
*Bernard Vatant
*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
 Skype : bernard.vatant
Blog : the wheel and the hub <http://blog.hubjects.com/>

*Mondeca**  **   *
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
--

Meet us at Documation <http://www.documation.fr/> in Paris, March 20-21


Re: Content negotiation for Turtle files

2013-02-06 Thread Bernard Vatant
Thanks Kingsley!

Was about to answer but you beat me at it :)

But Richard, could you elaborate on this view that hand-written and
machine-processible data would not fit together?

I don't feel like "people are still writing far too many Linked Data
examples and resources by hand". On the opposite seems to me we have seen
so far too many linked data produced by (more or less dumb or smart)
programs, without their human "productors" (so to speak) always checking
too much for quality in the process, provided they can proudly announce
that they have produced so many billions of triples ... so many, actually,
that nobody will ever be able to assess their quality whatsoever :)

Of course migrating automagically heaps of legacy data and making them
available as linked data is great, but as Kingsley puts it, linked data are
not only about machines talking to machines, it's also about enabling
people to talk to machines as simply as possible, and the other way round.
That's where Turtle fits.

Bernard


2013/2/6 Kingsley Idehen 

>  On 2/6/13 6:45 AM, Richard Light wrote:
>
>
> On 06/02/2013 10:59, Bernard Vatant wrote:
>
> More ??? Well, I was heading the other way round actually for sake of
> simplicity. As said before I've used RDF/XML for years despite all
> criticisms, and was happy with it (the devil you know etc). What I
> understand of the current trend is that to ease RDF and linked data
> adoption we should promote now this simple, both human-readable and
> machine-friendly publication syntax (Turtle). And having tried it for a
> while, I now begin to be convinced enough as to adopt it in publication -
> thanks to continuing promotion by Kingsley among others :)
>
> And now you tell me I should still bother to provide n other formats,
> RDF/XML and more. I thought I was about to simplify my life, you tell me I
> have to make the simple things, *plus* the more complex ones as before.
> Hmm.
>
> Well I for one would make a plea to keep RDF/XML in the portfolio. Turtle
> is only machine-processible if you happen to have a Turtle parser in your
> tool box.
>
> I'm quite happily processing Linked Data resources as XML, using only XSLT
> and a forwarder which adds Accept headers to an HTTP request. It thereby
> allows me to grab and work with LD content (including SPARQL query results)
> using the standard XSLT document() function.
>
> In a web development context, JSON would probably come second for me as a
> practical proposition, in that it ties in nicely with widely-supported
> javascript utilities.
>
> To me, Turtle is symptomatic of a world in which people are still writing
> far too many Linked Data examples and resources by hand, and want something
> that is easier to hand-write than RDF/XML.  I don't really see how that
> fits in with the promotion of the idea of machine-processible web-based
> data.
>
> Richard
> --
> *Richard Light*
>
>
> If people can't express data by hand we are on a futile mission. The era
> of overbearing applications placing artificial barriers between users and
> their data is over. The same applies to overbearing schemas and
> database management systems.
>
> This isn't about technology for programmers. Its about technology for
> everyone. Just as everyone is able to write on a piece of paper today, as a
> mechanism for expressing and sharing data, information, and knowledge.
>
>
> It is absolutely mandatory that folks be able to express triple based
> statements (propositions) by hand. This is the key to making Linked Data
> and the broader Semantic Web vision a natural reality.
>
> We have to remember that content negotiation (implicit or explicit) is a
> part of this whole deal.
>
> Vapour was built at a time when RDF/XML was the default format of choice.
> That's no longer the case, but it doesn't mean RDF/XML is dead either, its
> just means its no longer the default. As I've said many times, RDF/XML is
> the worst and best thing that ever happened to the Semantic Web vision.
> Sadly, the worst aspect has dominated the terrain for years and created
> artificial inertia by way of concept obfuscation.
>
> If your consumer prefers data in RDF/XML format then it can do one of the
> following:
>
> 1. Locally transform the Turtle to RDF/XML -- assuming this is all you can
> de-reference from a given URI
> 2. Transform the Turtle to RDF/XML via a transformation service (these
> exist and they are RESTful) -- if your user agent can't perform the
> transformation.
>
> The subtleties of Linked Data are best understood via Turtle.
>
> --
>
> Regards,
>
> Kingsley Idehen   
> Founder & CEO
> OpenLink Software
> Company Web:

Re: Content negotiation for Turtle files

2013-02-06 Thread Bernard Vatant
Hi Chris

2013/2/6 Chris Beer 

> Bernard, Ivan
>
> (At last! Something I can speak semi-authoritatively on ;P )
>
> @ Bernard - no - there is no reason to go back if you do not want to, and
> every reason to serve both formats plus more.
>

More ??? Well, I was heading the other way round actually for sake of
simplicity. As said before I've used RDF/XML for years despite all
criticisms, and was happy with it (the devil you know etc). What I
understand of the current trend is that to ease RDF and linked data
adoption we should promote now this simple, both human-readable and
machine-friendly publication syntax (Turtle). And having tried it for a
while, I now begin to be convinced enough as to adopt it in publication -
thanks to continuing promotion by Kingsley among others :)

And now you tell me I should still bother to provide n other formats,
RDF/XML and more. I thought I was about to simplify my life, you tell me I
have to make the simple things, *plus* the more complex ones as before.
Hmm.


> Your comment about UA's complaining about a content negotiation issue is
> key to what you're trying to do here. I'd like to provide some clear
> guidance or suggestions back, but first, if possible, can you please post
> the http request headers for the four (and any others you have) user
> agents you've used to attempt to request your rdf+xml files and which have
> either choked or accepted the .ttl file.


I can try to find out how to do that, though I remind you that I can discuss
languages, ontologies, syntax and semantics of data at will, but when it
comes to protocols and Webby things it's not really my story, so I don't
promise anything.

AND : there's NO rdf+xml file in that case, only text/turtle. And that's
exactly the point : can/should one do that, or not? Do I have to pass the
message to adopters : publish RDF in Turtle, it's a very cool an simple
syntax (oh but BTW don't forget to add HTML documentation, and also
RDF/XML, and JSON, and multilingual variants, and proper content
negotiation ...) ... well, OK, let's be clear about it if we have to do
that ... but it looks like a non-starter for adoption of Turtle.


> Extra points if you can also post
> the server's response headers.
>

Same remark as above.

Thanks for your time

Bernard

-- 
*Bernard Vatant
*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
Blog : the wheel and the hub <http://blog.hubjects.com/>

*Mondeca**  **   *
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
--

Meet us at Documation <http://www.documation.fr/> in Paris, March 20-21


Re: Content negotiation for Turtle files

2013-02-06 Thread Bernard Vatant
Thanks all for your precious help!

... which takes me back to my first options, the ones I had set before
looking at Vapour results which misled me - more below.

AddType  text/turtle;charset=utf-8   .ttl
AddType  application/rdf+xml         .rdf

Plus Rewrite for html etc.
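
The rewrite part can stay small too. A sketch assuming Apache mod_rewrite
(the ontology.html target is hypothetical, and these are not necessarily
the exact rules deployed here):

RewriteEngine On
# Browsers asking for HTML get the documentation page
RewriteCond %{HTTP_ACCEPT} text/html
RewriteRule ^ontology$ /ontology.html [R=303,L]
# Everything else gets the Turtle file, matching the 303 shown below
RewriteRule ^ontology$ /ontology_v2.0.ttl [R=303,L]
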

I now get this on cURL

curl -IL http://www.lingvoj.org/ontology
HTTP/1.1 303 See Other
Date: Wed, 06 Feb 2013 09:28:45 GMT
Server: Apache
Location: http://www.lingvoj.org/ontology_v2.0.ttl
Content-Type: text/html; charset=iso-8859-1

HTTP/1.1 200 OK
Date: Wed, 06 Feb 2013 09:28:45 GMT
Server: Apache
Last-Modified: Wed, 06 Feb 2013 09:19:34 GMT
ETag: "60172428-5258-4d50ad316b5b2"
Accept-Ranges: bytes
Content-Length: 21080
Content-Type: text/turtle; charset=utf-8

... to which Kingsley should not frown anymore (hopefully)

But what I still don't understand is the answer of Vapour when requesting
RDF/XML :

   - 1st request while dereferencing resource URI without specifying the
   desired content type (HTTP response code should be 303 (redirect)):
   Passed
   - 2nd request while dereferencing resource URI without specifying the
   desired content type (Content type should be 'application/rdf+xml'):
   Failed
   - 2nd request while dereferencing resource URI without specifying the
   desired content type (HTTP response code should be 200): Passed

Of course this request is bound to fail somewhere since there is no RDF/XML
file, but the second bullet point is confusing : why should the content
type be 'application/rdf+xml' when the desired content type is not
specified?

And should not a "Linked Data validator" handle the case where there is no
RDF/XML file, but only Turtle or n3?

The not-so-savvy linked data publisher (me), as long as he sees something
flashing RED in the results, thinks he has not made things right, and is
led to try blind tricks just to have everything green (such as
contradictory mime type declarations).

At least if the validator does not handle this case it should say so. The
current answer does not help adoption of Turtle, to say the least!

Hoping someone behind Vapour is lurking here and will answer :)

Thanks again for your time

Bernard


Content negotiation for Turtle files

2013-02-05 Thread Bernard Vatant
Hello all

Back in 2006, I thought I had understood, with the help of folks around here,
how to configure my server for content negotiation at lingvoj.org.
Both vocabulary and instances were published in RDF/XML.

I updated the ontology last week, and since after years of happy living
with RDF/XML, people eventually convinced me that it was a bad, prehistoric and
ugly syntax, I decided to be trendy and published the new version in Turtle
at http://www.lingvoj.org/ontology_v2.0.ttl

The vocabulary URI is still the same : http://www.lingvoj.org/ontology, and
the namespace  http://www.lingvoj.org/ontology# (cool URI don't change)

Then I turned to Vapour to test this new publication, and found out that to
be happy with the vocabulary URI it has to find some answer when requesting
application/rdf+xml. But since I have no more RDF/XML file for this
version, what should I do?
I turned to best practices document at http://www.w3.org/TR/swbp-vocab-pub,
but it does not provide examples with Turtle, only RDF/XML.

So I blindly put the following in the .htaccess:
AddType application/rdf+xml .ttl
I found it a completely stupid and dirty trick ... but amazingly it makes
Vapour happy.

But now Firefox chokes on http://www.lingvoj.org/ontology_v2.0.ttl because
it seems to expect an XML file. Chrome does not have this issue.
The LOV-Bot says there is a content negotiation issue and can't get the
file. So does Parrot.

I feel dumb, but I'm certainly not the only one, I've stumbled upon a
certain number of vocabularies published in Turtle for which the conneg
does not seem to be perfectly clear either.

What do I miss, folks? Should I forget about it, and switch back to good
ol' RDF/XML?

Bernard

-- 
*Bernard Vatant
*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
Blog : the wheel and the hub <http://blog.hubjects.com/>

*Mondeca**  **   *
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
--

Meet us at Documation <http://www.documation.fr/> in Paris, March 20-21


Searching for LOV responsibles : the Drake equation

2013-01-31 Thread Bernard Vatant
Hello all

Apologies for cross-posting, but Kingsley started it :)
And I've reduced the cc list to a minimum ...

I had a couple of pieces of feedback these days from people who understood that the
open spreadsheet referenced below [1] was aimed either at claiming prefixes
for vocabularies, or submitting vocabularies to LOV, whatever.
Please refer to the original discussion on public-vocab [2] to understand
what it is about, but I want to clarify it again here: this initiative is
an experiment to identify for each of the 300+ current vocabularies
gathered in LOV, a responsible person as of today for the availability,
content, past and foreseeable future management of the vocabulary.
This information is different from what can be found in the vocabulary
metadata or documentation if any, in particular for vocabularies published
years ago.

Assuming that :
- Such responsible people do exist for a reasonable proportion (p1) of the
vocabularies.
- Among those, a reasonable proportion (p2) do lurk on either public-lod,
public-vocabs or semantic-web list, or wherever this message will be pushed
via social networks.
- Among those, a reasonable proportion (p3) is ready to raise a hand saying
: yes, that's me!
- Among those, a reasonable proportion (p4) considers the proposed method
as a sensible way to do so.
- Among those, a reasonable proportion (p5) will actually do so.

The above assumptions lead to a number of answers:
N = p1 × p2 × p3 × p4 × p5 × V
where V is the number of vocabularies in the LOV cloud

Similar to the Drake equation [3]. Unlike aliens, though, some vocabulary
responsibles have already given signs of life. In the first week,
18 people have been listed, representing 39 vocabularies ... out of more
than 300.
This shows at least that all the above factors are strictly positive, which
is a good start ... even if it's only a start.
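
To put numbers on it: with V ≈ 300 and 39 vocabularies covered after one
week, the coverage so far evaluates to about 39/300 ≈ 0.13, which leaves
plenty of room for more aliens.
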
In short, more aliens are welcome to show up at [1]

I've set an arbitrary deadline of one month for this experiment. Basically,
at the end of February, we'll make the counts and try to evaluate the value
of each of the above factors. I'm afraid the critical one is p1, actually,
but I would be happy to be proven wrong.

Last word : if you want to show up for a vocabulary not yet listed at [1]
feel free to do so by adding an entry in the spreadsheet, but at the same
time please submit it to LOV using the "suggest" form at [4], and don't
forget to read [5] beforehand to make sure your vocabulary is LOV-able.

Thanks for your attention!

Bernard

[1] http://bit.ly/WB0ad5
[2] http://lists.w3.org/Archives/Public/public-vocabs/2013Jan/0125.html
[3] http://en.wikipedia.org/wiki/Drake_equation
[4] http://lov.okfn.org/dataset/lov/suggest/
[5] http://lov.okfn.org/dataset/lov/Recommendations_Vocabulary_Design.pdf

2013/1/22 Kingsley Idehen 

>  On 1/22/13 11:45 AM, Bernard Vatant wrote:
>
> ACTION
>
> Make a list of "globally adopted schemas" (vocabularies)  and put a *
> responsible* agent name/email/URI whatever Web identifier in front of it
> https://docs.google.com/spreadsheet/ccc?key=0AiYc9tLJbL4SdHByWkRYUkYxZU5qS1lQOE5FV0hiNlE#gid=0
> Free to edit by anyone. If you are* currently responsible* for a
> vocabulary, put your name and contact email address.
> Let's take a month to see what we can gather. A month from now I will mail
> all declared responsible to have confirmation, lock the document, and add
> this information to LOV vocabularies description.
>
>
> Best
>
> Bernard
>
>
> FYI
>
> --
>
> Regards,
>
> Kingsley Idehen   
> Founder & CEO
> OpenLink Software
> Company Web: http://www.openlinksw.com
> Personal Weblog: http://www.openlinksw.com/blog/~kidehen
> Twitter/Identi.ca handle: @kidehen
> Google+ Profile: https://plus.google.com/112399767740508618350/about
> LinkedIn Profile: http://www.linkedin.com/in/kidehen
>
>
>
>
>
> --
> Master Visual Studio, SharePoint, SQL, ASP.NET, C# 2012, HTML5, CSS,
> MVC, Windows 8 Apps, JavaScript and much more. Keep your skills current
> with LearnDevNow - 3,200 step-by-step video tutorials by Microsoft
> MVPs and experts. ON SALE this month only -- learn more at:
> http://p.sf.net/sfu/learnnow-d2d
> ___________
> Dbpedia-discussion mailing list
> dbpedia-discuss...@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/dbpedia-discussion
>
>


-- 
*Bernard Vatant
*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
 Skype : bernard.vatant
Blog : the wheel and the hub <http://blog.hubjects.com/>

*Mondeca**  **   *
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
--

Meet us at the SIIA Information Industry
Summit<http://www.siia.net/iis/2013> in
NY, January 30-31

Re: [Virtuoso-users] WebSchemas, Schema.org and W3C

2013-01-23 Thread Bernard Vatant
Hi Alexey

(limiting the cc list to avoid noise)

2013/1/23 Alexey Zakhlestin 

> Is it attempt to reimplement http://prefix.cc/ ?
>
Not at all. http://prefix.cc/ is a precious resource, and that has nothing
to do with prefix wars. Prefixes in the spreadsheet are simply informative,
they are the ones used in the LOV data base and web site. Most of the time
they are the ones chosen by the vocabulary editors, but the LOV
infrastructure needs a 1-1 correspondance so sometimes we have to differ
from the vocabulary publishers.
More at http://lov.okfn.org/dataset/lov/about/#lovdataset "Note on
prefixes".

The point of this spreadsheet is to clarify current responsibilities re. the
listed vocabularies.
More context on the public-vocabs list at
http://lists.w3.org/Archives/Public/public-vocabs/2013Jan/0125.html
BTW I suggest people interested to follow-up on public-vocabs forum rather
than on either public-lod or semantic-web.

Best

Bernard


> On 23 Jan 2013 06:35, "Kingsley Idehen"  wrote:
>
>>  On 1/22/13 11:45 AM, Bernard Vatant wrote:
>>
>> ACTION
>>
>> Make a list of "globally adopted schemas" (vocabularies)  and put a *
>> responsible* agent name/email/URI whatever Web identifier in front of it
>> https://docs.google.com/spreadsheet/ccc?key=0AiYc9tLJbL4SdHByWkRYUkYxZU5qS1lQOE5FV0hiNlE#gid=0
>> Free to edit by anyone. If you are* currently responsible* for a
>> vocabulary, put your name and contact email address.
>> Let's take a month to see what we can gather. A month from now I will
>> mail all declared responsible to have confirmation, lock the document, and
>> add this information to LOV vocabularies description.
>>
>>
>> Best
>>
>> Bernard
>>
>>
>> FYI
>>
>> --
>>
>> Regards,
>>
>> Kingsley Idehen  
>> Founder & CEO
>> OpenLink Software
>> Company Web: http://www.openlinksw.com
>> Personal Weblog: http://www.openlinksw.com/blog/~kidehen
>> Twitter/Identi.ca handle: @kidehen
>> Google+ Profile: https://plus.google.com/112399767740508618350/about
>> LinkedIn Profile: http://www.linkedin.com/in/kidehen
>>
>>
>>
>>
>>
>> --
>> Master Visual Studio, SharePoint, SQL, ASP.NET, C# 2012, HTML5, CSS,
>> MVC, Windows 8 Apps, JavaScript and much more. Keep your skills current
>> with LearnDevNow - 3,200 step-by-step video tutorials by Microsoft
>> MVPs and experts. ON SALE this month only -- learn more at:
>> http://p.sf.net/sfu/learnnow-d2d
>> ___
>> Virtuoso-users mailing list
>> virtuoso-us...@lists.sourceforge.net
>> https://lists.sourceforge.net/lists/listinfo/virtuoso-users
>>
>>


-- 
*Bernard Vatant
*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
 Skype : bernard.vatant
Blog : the wheel and the hub <http://blog.hubjects.com/>


*Mondeca**  **   *
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>


Re: Linked Data Dogfood circa. 2013

2013-01-08 Thread Bernard Vatant
Hi Sarven

Spot on, and could have been asked (and applied) for many years indeed!

Just a reminder that XML folks have forced presenters at XML conferences to
eat their dog food for at least 10 years.
I remember at least having submitted to XML Europe 2002 in this format, and
maybe even to XML 2001.
For the former at least, the CFP is still on-line
http://xml.coverpages.org/XMLEuropeConference2002CFP.html
"A version of the paper MUST also be written in XML for publication in the
CD ROM and Web site conference proceedings. XML authoring software will be
provided. Further instructions will follow upon acceptance."

Bernard

2013/1/8 Sarven Capadisli 

> On 01/04/2013 02:02 AM, Kingsley Idehen wrote:
>
>> On 1/3/13 7:50 PM, Sarven Capadisli wrote:
>>
>>> On 01/04/2013 12:34 AM, Bernard Vatant wrote:
>>>
>>>> Dog Food People
>>>> http://data.semanticweb.org/person/
>>>>
>>>
>>> [Off topic]
>>>
>>> Given that the people in that list originally published their papers
>>> using anything but machine friendly Web practices, would anyone care
>>> to enlighten me the dogfood bit in "Dog Food People"?
>>>
>>> I do think that data.semanticweb.org is doing a great thing i.e., they
>>> are the ones dogfooding! It is unfortunately a limited "patch" to a
>>> problem that the Semantic Web / Linked Data community is too careless
>>> to tackle head on!
>>>
>>> -Sarven
>>>
>>>
>>>
>>>  [On topic]
>>
>> So why don't we all make a concerted effort in 2013 to clean up these
>> kinds of issues. Basically, let's make dogfooding meaningful since its
>> the ultimate demonstrator of technology utility :-)
>>
>
> Enter Sarven's Rant:
>
> I acknowledge that sometimes it is indeed complicated (e.g., too many
> variables) to eat our own dogfood in every corner, when faced with all the
> business' needs, for whatever reasons they may be. I don't wish to debate
> whether the solutions that SW/LD offers are realistic enough or can be
> fulfilled or not. We use the tools we can to get the work done. Everyone
> does what they can.
>
> However, what's frustrating to see, not to mention the ongoing facepalms,
> is the situation that the self-proclaimed SW/LD conferences and academia
> puts themselves into.
>
> Every year, the conferences primarily requests the research work to be
> submitted in PDF/Word and then maybe the source in LaTeX for camera-ready
> versions which is then handed off to the (print) publishers. Similarly,
> that's what happens in academia too.
>
> That is precisely what data.semanticweb.org is "patching", the bits the
> SW/LD community shouldn't be intentionally breaking in the first place. It
> is not good enough to get some metadata when all is said and done. What's
> that? The leftovers from the conferences and publishers? Gee, thanks, but no
> thanks, we can do better. We ought to do better.
>
> All the value in research work is locked up in desktop-friendly formats
> (yes, Google showing a preview of the PDFs is a hack too) or maybe gets
> printed for other researchers to consume from. But, wait a minute, wasn't
> the SW/LD community fighting to get machines to do something? You know it
> exactly as I do. We need to get a hold of all that awesome information in
> those papers; from hypotheses, results, claims, conclusions, references,
> to.. in a way that we can point at it.
>
> Is the current situation seriously something we can't fix or improve on?
> Is the community at the mercy of cool conferences, organizations,
> publishers, academia or die hard followers of archaic methods?
>
> How about a possible solution to steer towards in the right direction:
>
> * SW/LD conferences ask research work to be submitted *first* in a
> machine-friendly format (e.g., HTML+RDFa); if you are responsible for its
> organization, please make it right and first serve the *Web* community.
> Otherwise, you are the ones that's holding things back!
>
> * If you are an author, researcher, or whatever, put your word where your
> mouth is and *first* publish it on the Web yourself and make sure it is
> machine friendly. Give its URI to the conference or your academic
> institution. Otherwise, you are the ones that's holding things back!
>
> * If you are an SW/LD academic supervisor, ask and encourage your students
> to publish or submit their work in such fashion. Otherwise, you are the
> ones that's holding things back!
>
> Do your bit :)
>
> -Sarven
>
>
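
To make "HTML+RDFa first" concrete, a minimal sketch of a marked-up paper
(the vocabulary choice is illustrative; bibo is one option among others):

<html>
  <body vocab="http://purl.org/dc/terms/"
        prefix="bibo: http://purl.org/ontology/bibo/">
    <article resource="#paper" typeof="bibo:AcademicArticle">
      <h1 property="title">A Linked Data Paper</h1>
      <p>By <span property="creator">A. Researcher</span></p>
      <p property="abstract">One-paragraph abstract of the work.</p>
    </article>
  </body>
</html>
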


-- 
*Bernard Vatant
*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
Blog : the wheel and the hub <http://blog.hubjects.com/>


*Mondeca**  **   *
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>


Re: Linked Data Dogfood circa. 2013

2013-01-07 Thread Bernard Vatant
Awesome debate indeed ...
It reminded me of an old story, so I made a little post about it. Enjoy :)

http://blog.hubjects.com/2013/01/everybody-knows-what-semantic-dog-looks.html

2013/1/8 Kingsley Idehen 

> On 1/7/13 5:57 PM, Hugh Glaser wrote:
>
>> Thanks Kingsley.
>> On 7 Jan 2013, at 14:04, Kingsley Idehen  wrote:
>>
>>  On 1/6/13 7:05 PM, Hugh Glaser wrote:
>>>
>>>> See my comments above.  What is an Application in your world view?
>>>>>
>>>> Interesting and useful comment. You used the word - tell me what you
>>>> mean please.
>>>> Best
>>>> Hugh
>>>>
>>> As you know, definitions are subjective. If you can define what an
>>> Application (in the context of Linked Data consumption) is, I can then
>>> attempt to answer your question about specific applications in light of
>>> such clarity.
>>>
>>
> 
>



-- 
*Bernard Vatant
*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
Blog : the wheel and the hub <http://blog.hubjects.com/>


*Mondeca**  **   *
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>


Re: Linked Data Dogfood circa. 2013

2013-01-04 Thread Bernard Vatant
2013/1/4 Melvin Carvalho 

"2013 is the year to get serious about linked data!"

+100!

Let 2013 be indeed the year of serious gardening of the Data (and
Vocabularies) Commons

Reminder :
http://blog.hubjects.com/2012/03/lov-stories-part-2-gardeners-and.html

-- 
*Bernard Vatant
*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
 Skype : bernard.vatant
Blog : the wheel and the hub <http://blog.hubjects.com/>


*Mondeca**  **   *
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>


Re: Personae

2013-01-03 Thread Bernard Vatant
Hi all

Who Am I! ... a nice catch indeed, I was not aware of it either, but it's
now integrated in LOV so that no one can miss it any more :)
http://lov.okfn.org/dataset/lov/details/vocabulary_wai.html

@Luis : the list of creators/contributors is not complete; I simply added
those I could find at Dog Food People http://data.semanticweb.org/person/
I need URIs to add the other ones, we can forge some in LOV namespace, but
if they had "preferredURI" (see other thread ...) it would be better. Can
you provide some?

Regards

Bernard


2013/1/3 Boris Villazon-Terrazas 

> Hi Alex
>
> I think Michael is working on sth similar [1].
>
> @Luis, nice work [2]  how is it possible I wasn't aware of that?
>
> Best
>
> Boris
>
> [1] http://dvcs.w3.org/hg/gld/raw-file/1835a71f6a5f/people/index.html
> [2] http://vocab.ctic.es/wai
>
>
> On 03/01/2013 17:11, Luis Polo wrote:
>
>> Hi Alex,
>>
>> The WAI ontology may suit your modeling requirements [1]. It is an
>> extension of the FOAF ontology introducing the notion of
>> "person-as-role" (i.e., qua-individuals). Hope this helps!
>>
>> Best,
>>
>> p.
>>
>> [1] http://vocab.ctic.es/wai
>>
>>
>> 2013/1/3 Kingsley Idehen :
>>
>>> On 1/3/13 10:18 AM, Alexander Dutton wrote:
>>>
>>>> Hi all,
>>>>
>>>> Here's a modelling question.
>>>>
>>>> Suppose I advertise a vacancy (or publish a webpage, or write a book),
>>>> and that thing is in some way linked to me (contact person, author).
>>>> Suppose further that I want to go by another name (e.g. I'm Iain [M]
>>>> Banks), or don't want people to contact me by phone.
>>>>
>>>> I can quite easily create a foaf:Person, exposing just the details that
>>>> are pertinent in the context in which it will appear. However, there's
>>>> no way to link it to me.
>>>>
>>>> What I'd like is:
>>>>
>>>> :vacancy oo:contact :persona .
>>>>
>>>> :persona a persona:Persona ;
>>>> foaf:name "Mr Dutton" ;
>>>> v:email <mailto:alex+recruitment@example.org> ;
>>>> persona:personaOf :alex .
>>>>
>>>> :alex a foaf:Person ;
>>>> foaf:name "Alexander Dutton" ;
>>>> foaf:mbox <mailto:a...@example.org> .
>>>>
>>>> Even better if I can use it to present two talks at a conference wearing
>>>> two different hats:
>>>>
>>>> :talk-one a ex:Talk ;
>>>> rdfs:label "Science!" ;
>>>> ex:givenBy :alexQuaEmployedByAperture .
>>>>
>>>> :alexQuaEmployedByAperture a persona:Persona ;
>>>> foaf:name "Alexander Dutton" ;
>>>> persona:personaOf :alex ;
>>>> persona:qua [
>>>>   a org:Membership ;
>>>>   org:member :alex ;
>>>>   org:organization :aperture ;
>>>>   org:role [
>>>> a skos:Concept ;
>>>> skos:prefLabel "eccentric scientist"
>>>>   ]
>>>> ] .
>>>>
>>>> etc.
>>>>
>>>> Does this exist? Has anyone done it differently/better? Should I just
>>>> get on and make it?
>>>>
>>>> Yours questioningly,
>>>>
>>>> Alex
>>>>
>>>>
>>>>
>>>>
>>>>  I suggest you just crack on with your modelling. Once done, publish it.
>>> Ultimately, it can be cross referenced with other related ontologies, as
>>> you
>>> or others discover them.
>>>
>>> What you shouldn't do is procrastinate on the basis of an existing
>>> ontology
>>> :-)
>>>
>>> Happy New Year to everyone !
>>>
>>> --
>>>
>>> Regards,
>>>
>>> Kingsley Idehen
>>> Founder & CEO
>>> OpenLink Software
>>> Company Web: http://www.openlinksw.com
>>> Personal Weblog: http://www.openlinksw.com/blog/~kidehen
>>> Twitter/Identi.ca handle: @kidehen
>>> Google+ Profile: https://plus.google.com/112399767740508618350/about
>>> LinkedIn Profile: http://www.linkedin.com/in/kidehen
>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>
>


-- 
Bernard Vatant
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
Blog : the wheel and the hub <http://blog.hubjects.com/>


Mondeca
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>


Re: canonicURI property

2013-01-03 Thread Bernard Vatant
Hi folks

Sorry to differ with Kingsley on this, but this is an old trap :)

On 1/3/13 11:19 AM, SERVANT Francois-Paul wrote:
>
>>
>> what property should be used to write in RDF links such as those denoted
>> by ? Is it con:preferredURI?
>>
>
Although con:preferredURI is a priori dedicated to agents, I guess you can
extend its use to other resources, since the domain is left open in this
vocabulary. If the creator TBL is lurking, he can confirm his intentions :)


> Why is the object of con:preferredURI a string and not a resource?
>>
>
Because the preferred URI value is what it is: a URI, hence an rdf:Literal,
and not the resource named/identified by this literal.
con:preferredURI is a simple rdf:Property because the contact vocabulary is
expressed in RDFS, but it's clear from its definition ...

  <rdf:Property rdf:about="http://www.w3.org/2000/10/swap/pim/contact#preferredURI">
    <rdfs:comment>A string which is the URI a person, organization, etc,
prefers that people use for them.</rdfs:comment>
    <rdfs:label>preferred</rdfs:label>
  </rdf:Property>

... that if this vocabulary were to be translated into OWL, it would become
an owl:DatatypeProperty with range xsd:anyURI
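
For illustration, a rough Turtle sketch of that hypothetical OWL recast (my
reading, not the published vocabulary):

@prefix con:  <http://www.w3.org/2000/10/swap/pim/contact#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

con:preferredURI a owl:DatatypeProperty ;
    rdfs:label "preferred" ;
    rdfs:range xsd:anyURI .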


> I have in a linked data set URIs in my namespace that are owl:sameAs, and
>> among them one which is a "canonical one". When dereferencing one of these
>> URIs, I want to state in the returned RDF something like:
>> :OneOfThoseURIs x:canonicURI :TheCanonicOne.
>> and then have triples about :TheCanonicOne
>>
>
You can't do that, because :TheCanonicOne is an rdf:Literal, which cannot be
in subject position (so far ...)


>  My goal is to make clear that the preferredURI (the one that should be
>> used - and the one that actually is used in the returned RDF) is
>> :TheCanonicOne. Of course:
>>
>> x:canonicURI rdfs:subPropertyOf owl:sameAs.
>>
>
Of course not! This is the trap. You confuse the URI (the string) with the
resource it identifies.

What you mean is that all sameAs resources share the preferred URI. For
example

IF
:x  con:preferredURI  'myNiceURI'

THEN
( :y  con:preferredURI  'myNiceURI' ) <=> ( :y  owl:sameAs  :x )

A system can rely on the preferredURI value, e.g., to use it as the
rdf:about value in an RDF/XML document. But that's all. If you have owl:sameAs
declarations, all sameAs URIs would be equivalent in rdf:about, with the
same semantics. preferredURI is akin to skos:prefLabel, no more, no less.
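
To make the distinction concrete, a small sketch (the default namespace ':'
being hypothetical):

@prefix :     <http://example.org/ns#> .
@prefix con:  <http://www.w3.org/2000/10/swap/pim/contact#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .

# the preferred URI travels as a literal, much like a label
:x  con:preferredURI  "myNiceURI" .
:x  skos:prefLabel    "My nice concept"@en .
# whereas owl:sameAs relates two resources, never a resource and a string
:x  owl:sameAs  :y .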

Best regards

Bernard


Bernard Vatant
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
 Skype : bernard.vatant
Blog : the wheel and the hub <http://blog.hubjects.com/>


Mondeca
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>


Re: Breaking news: GoodRelations now fully integrated with schema.org!

2012-11-12 Thread Bernard Vatant
Dan, Martin, all

This breaking news made me unearth a couple of questions I already
discussed with you regarding the (more or less declared) "soft" semantics
of schema.org, and how both http://schema.org/docs/schemaorg.owl and
http://schema.rdfs.org interpret those semantics a bit harder than
they should, in particular regarding domains and ranges of properties.

I now take for granted from your message that :
- The reference file for schema.org's declared semantics is
http://schema.org/docs/schema_org_rdfa.html (rather than the outdated, or
at least not clearly dated, OWL file at http://schema.org/docs/schemaorg.owl)
- It declares explicitly schema.org types (classes) as instances of
rdfs:Class, and attached properties as instances of rdf:Property.
- It uses rdfs:subClassOf for the type hierarchy. There is no use of
rdfs:subPropertyOf.
- It uses the specific properties http://schema.org/domain and
http://schema.org/range to attach properties to classes.

The latter is the most interesting and innovative feature. It would be
good to document in the file the implied semantics of those properties,
which is weaker than that of rdfs:domain and rdfs:range, as
implicitly (explicitly?) stated in http://schema.org/docs/datamodel.html.
And maybe it would be wise to rename them, since confusion is
likely to occur (the more so since http://schema.rdfs.org has interpreted
them abusively as rdfs:domain and rdfs:range). Why not call them the same
as in the HTML pages, "expectedOnType" and "expectedValueType", since that
is really what they mean.
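
For instance, here is a rough sketch of what the pattern amounts to in
triples (I picked the example property from the GoodRelations batch; the
exact terms in the RDFa file may differ):

@prefix schema: <http://schema.org/> .
@prefix rdf:    <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .

schema:acceptedOffer a rdf:Property ;
    schema:domain schema:Order ;   # i.e., "expected on type"
    schema:range  schema:Offer .   # i.e., "expected value type"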

Side question to Martin. Is there any issue in formally mapping the OWL
classes and properties of GoodRelations to their schema.org equivalents,
which do not even rely on RDFS semantics? I'm pretty sure you have thought
about it and I would be happy to have your take on this.

Another point: since you now declare that the RDF expression of
schema.org is the root of it, why not publish a proper RDF schema that
could be retrieved from the http://schema.org/ namespace through content
negotiation, as any other vocabulary conformant to SW publishing best
practices? BTW, for example, we would be happy to have such a thing in
order to integrate schema.org seamlessly in LOV. So far we use the
http://schema.rdfs.org source, but this is really suboptimal; we would
like to get rid of it and insert the real stuff.

I submitted the page to the W3C RDFa validator at
http://www.w3.org/2012/pyRdfa/Validator.html; it's happy with the file and
produces a very clean n3 file, the kind it would be cool to have in the
above-said content negotiation.

Best

Bernard




2012/11/9 Dan Brickley 

>
> This latest build of schema.org uses a different approach to previous
> updates. Earlier versions (apart from health/medicine) were relatively
> small, and could be hand coded. With Good Relations, the approach we
> took was to use an import system that reads schema definitions
> expressed in HTML+RDFa/RDFS and generates the site as an aggregation
> of these 'layers'. In other words, schema.org is built by a system
> that reads a collection of schema definitions expressed using W3C
> standards. The public site is also now more standards-friendly, aiming
> for 'Polyglot' HTML that works as HTML5 and XHTML, and you can find an
> RDFa view of the overall schema at
> http://schema.org/docs/schema_org_rdfa.html
>
>
> I'm really happy to see Good Relations go live, and look forward to
> catching up on the other contributions that are in the queue. The
> approach will be to express each of these in HTML/RDFa/RDFS and make
> some test sites on Appspot that show each proposal 'in place', and in
> combination with other proposals. Since schemas tend to overlap in
> coverage, this is really important for improving the quality and
> integration of schema.org as we grow. While it took us a little while
> to get this mechanism in place, I'm glad we now have this
> standards-based machinery in place that will help us scale up the
> collaboration around schema.org.
>
> Thanks again to all involved,
>
> Dan
>
>


-- 
Bernard Vatant
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
 Skype : bernard.vatant
Blog : the wheel and the hub <http://blog.hubjects.com/>


Mondeca
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>


Re: referencing a concept scheme as the code list of some referrer's property

2012-08-29 Thread Bernard Vatant
>>>> skos:ConceptScheme prioritises domain conventions over common and shared
>>>> (better: to-be-shared) RDFS/OWL patterns.
>>>>
>>>> I found something similar in the Data Structure Definition of Data
>>>> Cubes. Dave (cc) will understand, as we had some discussion about this
>>>> topic ;-)
>>>>
>>>> I strictly believe it is a better strategy to convince each domain
>>>> inheritance of a single global standard with only a few indispensable
>>>> options.
>>>>
>>>> Following each of the acquainted domain patterns leads to structural
>>>> weakness of this one global standard, as everything can be expressed
>>>> using multiple patterns even though one pattern could fit all. In the open
>>>> world, each reasoner needs to understand all those domain patterns.
>>>> This is a quite obscure requirement.
>>>>
>>>> Dave et al. is conciliatory with SDMX and weakens RDFS/OWL by this.
>>>> SKOS is conciliatory with the ISO thesaurus people and weakens RDFS/OWL
>>>> by this.
>>>> The same happens more and more in any domain, and may be it is too late
>>>> to stop this, or even roll this back.
>>>>
>>>> I am quite sad about this.
>>>> Unfortunately, W3C has no clear governance in this question (@Sandro).
>>>> Sometimes I feel a working draft becomes a recommendation only by public
>>>> rating in some domain which has no understanding of the power of pure
>>>> RDFS/OWL.
>>>>
>>>> Sorry if I have taken so much of your time, if you have read until here.
>>>>
>>>> Finally I quote some lines from Neil Young's "Ambulance Blues", which may
>>>> talk about myself:
>>>>
>>>> "And I still can hear him say:
>>>> You're all just pissin' in the wind
>>>> You don't know it but you are".
>>>>
>>>> Think I even know it.
>>>>
>>>> Best regards,
>>>> Thomas
>>>>
>>>>
>>>>
>>>> [1] http://innoq.github.com/led/
>>>> [2] http://www.w3.org/TR/vocab-data-cube/
>>>> [3] https://github.com/innoq/iqvoc/
>>>> [4] http://www.w3.org/TR/2009/REC-skos-reference-20090818/#L1101
>>>>
>>>> Am 22.08.2012 21:15, schrieb Antoine Isaac:
>>>>
>>>>> Dear Thomas,
>>>>>
>>>>> I'm ccing public-esw-t...@w3.org. Perhaps this was the one you were
>>>>> looking for!
>>>>>
>>>>> (1)& (2)
>>>>> You probably mean, if a ConceptScheme could be defined as a class, of
>>>>> which the concepts of a given concept scheme are instances?
>>>>> That would be the way to proceed, if you want to use the concept
>>>>> scheme directly as the range of a property.
>>>>> This is has never been suggested for inclusion in SKOS. In fact it is
>>>>> not forbidden, either. You can assert rdf:type statements between
>>>>> concepts and a concept scheme, if you want.
>>>>> You can also define an adhoc sub-class of skos:Concept (say,
>>>>> ex:ConceptOfSchemeX), which includes all concepts that related to a
>>>>> specific concept scheme (ex:SchemeX) by skos:inScheme statements. This
>>>>> is quite easy using OWL. And then you can use this new class as the
>>>>> rdf:range.
>>>>>
>>>>> The possibility of these two options makes it less obvious, why there
>>>>> should be a specific feature in SKOS to represent what you want.
>>>>> But more fundamentally, it was perhaps never discussed, because it's
>>>>> neither a 100% SKOS problem, nor a simple one.
>>>>> It's a bit like the link between a document and a subject concept:
>>>>> there could have been a skos:subject property, but it was argued that
>>>>> Dublin Core's dc:subject was good enough.
>>>>> But it's maybe even worse than that :-) There are indeed discussions
>>>>> in the Dublin Core Architecture community about represent the link
>>>>> between a property and a concept scheme directly, similar to what you
>>>>> want. This is what is called vocabulary/value "encoding schemes" there
>>>>> [1].
>>>>> But the existence of this feature at a quite deep, data-model level,
>>>>> rather confirms for me that it is something that clearly couldn't be
>>>>> tackled at the time SKOS was made a standard. One can view this
>>>>> problem as one of modeling RDFS/OWL properties, rather than
>>>>> representing concepts, no?
>>>>>
>>>>>
>>>>> (3)
>>>>> I'm not sure I get the question. If they exist, such mapping
>>>>> properties could be very difficult to semantically define. Would a
>>>>> concept scheme be broader, equivalent, narrower than another one?
>>>>> Rather, I'd say that the property you're after indicates that some
>>>>> concepts from these two concept schemes are connected. For this I
>>>>> think one could use general linkage properties between datasets, such
>>>>> as voiD's linksets [2].
>>>>>
>>>>> I hope that helps,
>>>>>
>>>>> Antoine
>>>>>
>>>>> [1] http://dublincore.org/documents/profile-guidelines/, search
>>>>> "Statement template: subject"
>>>>> [2] http://vocab.deri.ie/void
>>>>>
>>>>>
>>>
>>>
>>>
>>
>
>


-- 
Bernard Vatant
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
 Skype : bernard.vatant
Blog : the wheel and the hub <http://blog.hubjects.com/>


Mondeca
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>


Re: referencing a concept scheme as the code list of some referrer's property

2012-08-23 Thread Bernard Vatant
Regards,
> > Thomas
> >
> >
> >  Original Message 
> > Subject: referencing a concept scheme as the code list of some
> > referrer's property
> > Date: Sun, 19 Aug 2012 10:50:05 +0200
> > From: Thomas Bandholtz 
> > To: public-swd...@w3.org
> >
> >
> > Hi SKOS,
> >
> > I came across several examples where concept schemes are referenced as
> > the code list of some property.
> > Usually this is done by two statements:
> >
> > ex:property rdfs:range skos:Concept ;
> > ex:codeList < [some concept scheme] > .
> >
> > E.g. Data Cubes and geonames use this pattern, but one uses qb:codeList
> > to point to the scheme, the other gn:featureClass.
> >
> > Regarding this, I have two questions:
> >
> > (1) does someone remember a discussion why concept schemes should not be
> > expressed as subclasses of skos:Concept?
> > If subclassing  would have been used, any concept scheme could be
> > referenced in a single rdfs:type statement of the concept and a single
> > rdfs:range statement of the referrer.
> >
> > (2) if there are sufficient reasons to insist in the current patterns,
> > shouldn't SKOS be extended by a standard property to be used by
> > referrers when they point to a concept scheme along with  a rdfs:range
> > statement?
> >
> > And I add
> > (3) why do we not have mapping properties to link concept schemes from
> > different providers?
> > This cannot be inferred from a given concept mapping, as mapping of some
> > concepts does not imply mappings of their entire schemes.
> >
> > Best regards,
> > Thomas
> >
>
>
> --
> Thomas Bandholtz
> Principal Consultant
>
> innoQ Deutschland GmbH
> Krischerstr. 100,
> D-40789 Monheim am Rhein, Germany
> http://www.innoq.com
> thomas.bandho...@innoq.com
> +49 178 4049387
>
> http://innoq.com/de/themen/linked-data (German)
> https://github.com/innoq/iqvoc/wiki/Linked-Data (English)
>
>
>


-- 
Bernard Vatant
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
Blog : the wheel and the hub <http://blog.hubjects.com/>


Mondeca
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>


Re: Is there a general "preferred" property?

2012-07-18 Thread Bernard Vatant
Nathan

Interesting discussion indeed, at least allowing me to discover
con:preferredURI, which I had missed so far ... although I was looking for
something like that, and it was just under my nose in LOV :)
http://lov.okfn.org/dataset/lov/search/#s=preferred

If I parse correctly the definition of con:preferredURI ("A string which is
the URI a person, organization, etc, prefers that people use for them."), it
applies only to some agent able to express its preference about how
he/she/it should be identified. The domain is open, but if I were to close
it I would declare it to be foaf:Agent.
This is quite different from skos:prefLabel, which expresses the preference
of a community of vocabulary users about how some concept should be named
(a practice coming from the library/thesaurus community). The borderline
cases are authorities: when LoC uses skos:prefLabel in their authority files
for people or organizations, they don't ask those people or organizations if
they agree (many of them not being in a position to answer anyway ...).

Seems we lack some x:prefURI expressing the same type of preference as
skos:prefLabel.
With of course con:preferredURI rdfs:subPropertyOf x:prefURI

And a general property x:hasURI

x:hasURI   x:preferred   x:prefURI

Meaning that :

ex:foo   x:hasURI   'bar'

entails

<bar>   owl:sameAs   ex:foo

Not sure of notations here; what I mean by <bar> is the resource of which
the URI is the string 'bar'

And while we are at it x:altURI would be nice to have also :)
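
Putting the whole proposal together in Turtle (all the x: names being
hypothetical, of course):

@prefix x:    <http://example.org/x#> .
@prefix con:  <http://www.w3.org/2000/10/swap/pim/contact#> .
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

x:hasURI  a rdf:Property .
x:prefURI a rdf:Property ; rdfs:subPropertyOf x:hasURI .
x:altURI  a rdf:Property ; rdfs:subPropertyOf x:hasURI .

# the meta-property linking a property to its "preferred" variant
x:hasURI x:preferred x:prefURI .
con:preferredURI rdfs:subPropertyOf x:prefURI .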

Bernard

2012/7/17 Nathan 

> Good point and question! I had assumed preferred by the owner of the
> object, just as you have a con:preferredURI for yourself.
>
> The approach again comes from you, same approach as
> link:listDocumentProperty (which now appears to have dropped from the link:
> ontology?)
>
> Cheers,
>
> Nathan
>
>
> Tim Berners-Lee wrote:
>
>> Interesting to go meta on this with x:preferred .
>>
>> What would be the meaning of "preferred" -- "preferred by the object
>> itself or
>> the owner of the object itself"?
>>
>> In other words, I wouldn't use it to store in a local store my preferred
>> names
>> for people, that would be an abuse of the property.
>>
>> Tim
>>
>> On 2012-07 -15, at 19:42, Nathan wrote:
>>
>>  Essentially what I'm looking for is something like
>>>
>>>  foaf:nick x:preferred foaf:preferredNick .
>>>  rdfs:label x:preferred foaf:preferredLabel .
>>>  owl:sameAs x:preferred x:canonical .
>>>
>>> It's nice to have con:preferredURI and skos:prefLabel, but what I'm
>>> really looking for is a way to let machines know that x value is preferred.
>>>
>>> Anybody know if such a property exists yet?
>>>
>>> Cheers,
>>>
>>> Nathan
>>>
>>>
>>>
>>
>>
>
>


-- 
Bernard Vatant
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
Blog : the wheel and the hub <http://blog.hubjects.com/>


Mondeca
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>


Re: Introducing the Knowledge Graph: things, not strings

2012-05-16 Thread Bernard Vatant
Adrian

Don't dream of accessing the Google Knowledge Graph and querying it through a
SPARQL endpoint as you do for DBpedia. Like every piece of Google's critical
technological infrastructure, I'm afraid it will be well hidden under the
hood, and accessible only through the search interface. If they ever expose
the Graph objects through an API as they do for Gmaps, now THAT would be
really great news.

Kingsley says they have Freebase; yes, but Freebase stores only 22 million
entities according to their own stats, which makes up less than 5% of the
overall figure, since Google claims 500 million nodes in the Knowledge
Graph, and growing. So I guess they also have DBpedia and VIAF and
Geonames and you name it ... whatever open and structured data they can put
their hands on. Linked data stuff, whatever the format.

Bernard


2012/5/17 Adrian Walker 

> Hi All,
>
> Nice videos etc, but has anyone found a link to actually *use* Knowledge
> Graph ?
>
> If it's not online yet, one wonders why Google chose to pre-announce it.
>
> Thanks, -- Adrian
>
> Internet Business Logic
> A Wiki and SOA Endpoint for Executable Open Vocabulary English Q/A over
> SQL and RDF
> Online at www.reengineeringllc.com
> Shared use is free, and there are no advertisements
>
> Adrian Walker
> Reengineering
>
>
> On Wed, May 16, 2012 at 4:05 PM, Kingsley Idehen 
> wrote:
>
>> On 5/16/12 4:02 PM, Melvin Carvalho wrote:
>>
>>> Big thumbs up (at least in principle) from google on linked data
>>>
>>> http://googleblog.blogspot.de/2012/05/introducing-knowledge-graph-things-not.html
>>>
>>
>> +1000...
>>
>> It's getting real interesting. Google and Facebook as massive Linked Data
>> Spaces, awesome!
>>
>> --
>>
>> Regards,
>>
>> Kingsley Idehen
>> Founder&  CEO
>> OpenLink Software
>> Company Web: http://www.openlinksw.com
>> Personal Weblog: http://www.openlinksw.com/blog/~kidehen
>> Twitter/Identi.ca handle: @kidehen
>> Google+ Profile: https://plus.google.com/112399767740508618350/about
>> LinkedIn Profile: http://www.linkedin.com/in/kidehen
>>
>>
>>
>>
>>
>>
>>
>


-- 
Bernard Vatant
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
Linked Open Vocabularies <http://labs.mondeca.com/dataset/lov>


Mondeca
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>


Re: Introducing the Knowledge Graph: things, not strings

2012-05-16 Thread Bernard Vatant
... To put it in a longer way.

Yes this is great news, although it's not completely news, we had quite a
few hints of it by Google in the past months.

But what is just unfair is Google presenting this as if they had invented
it. Apart from a quick allusion to DBpedia and Freebase, no mention of the
collective and converging efforts of so many libraries, museums,
governments, research centers, standard bodies, associations and
institutions, thousands of wikipedians, topic mappers, classifiers,
documentalists ... (apologies to those I forget, too many of them) ... who
have dedicated countless days and nights to build structured data and put
them on the Web. For those who do not know this background story, Google
will show off as the Only One able to organize and make sense of the messy
Web. I wish they were able to acknowledge at least that they are leveraging
all this work.
Neither have they built the core data, nor invented the underlying
concepts. They just bring more power and visibility.

Bernard

2012/5/17 David Wood 

> On May 16, 2012, at 17:45, Bernard Vatant wrote:
>
> Thanks to all who patiently ploughed and sowed this ground since those
> dark ages when Google was nothing but an idea.
> Now the grain is ripe and it's a great time for them to harvest ... hope
> we are left with some crumbs to pick up as a reward for our efforts :)
>
>
> Hmm, yes.  Will SemWeb researchers feel about Google's Knowledge Graph the
> way hypertext researchers feel about the Web? I hope not.
>
> Still, Kingsley is right, too.  We are certainly busier than we have ever
> been, with no clear end in sight.  That's positive.
>
> Regards,
> Dave
>
>
> Bernard
>
> 2012/5/16 Kingsley Idehen 
>
>> On 5/16/12 4:02 PM, Melvin Carvalho wrote:
>>
>>> Big thumbs up (at least in principle) from google on linked data
>>>
>>> http://googleblog.blogspot.de/2012/05/introducing-knowledge-graph-things-not.html
>>>
>>
>> +1000...
>>
>> It's getting real interesting. Google and Facebook as massive Linked Data
>> Spaces, awesome!
>>
>> --
>>
>> Regards,
>>
>> Kingsley Idehen
>> Founder&  CEO
>> OpenLink Software
>> Company Web: http://www.openlinksw.com
>> Personal Weblog: http://www.openlinksw.com/blog/~kidehen
>> Twitter/Identi.ca handle: @kidehen
>> Google+ Profile: https://plus.google.com/112399767740508618350/about
>> LinkedIn Profile: http://www.linkedin.com/in/kidehen
>>
>>
>>
>>
>>
>>
>>
>
>
> --
> Bernard Vatant
> Vocabularies & Data Engineering
> Tel :  + 33 (0)9 71 48 84 59
> Skype : bernard.vatant
> Linked Open Vocabularies <http://labs.mondeca.com/dataset/lov>
>
> 
> Mondeca
> 3 cité Nollez 75018 Paris, France
> www.mondeca.com
> Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
>
>
>


-- 
Bernard Vatant
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
Linked Open Vocabularies <http://labs.mondeca.com/dataset/lov>


Mondeca
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>


Re: Introducing the Knowledge Graph: things, not strings

2012-05-16 Thread Bernard Vatant
Thanks to all who patiently ploughed and sowed this ground since those
dark ages when Google was nothing but an idea.
Now the grain is ripe and it's a great time for them to harvest ... hope we
are left with some crumbs to pick up as a reward for our efforts :)

Bernard

2012/5/16 Kingsley Idehen 

> On 5/16/12 4:02 PM, Melvin Carvalho wrote:
>
>> Big thumbs up (at least in principle) from google on linked data
>>
>> http://googleblog.blogspot.de/2012/05/introducing-knowledge-graph-things-not.html
>>
>
> +1000...
>
> It's getting real interesting. Google and Facebook as massive Linked Data
> Spaces, awesome!
>
> --
>
> Regards,
>
> Kingsley Idehen
> Founder&  CEO
> OpenLink Software
> Company Web: http://www.openlinksw.com
> Personal Weblog: http://www.openlinksw.com/blog/~kidehen
> Twitter/Identi.ca handle: @kidehen
> Google+ Profile: https://plus.google.com/112399767740508618350/about
> LinkedIn Profile: http://www.linkedin.com/in/kidehen
>
>
>
>
>
>
>


-- 
Bernard Vatant
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
Linked Open Vocabularies <http://labs.mondeca.com/dataset/lov>


Mondeca
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>


Re: Question on "moving" linked data sets

2012-04-20 Thread Bernard Vatant
Antoine

> In fact it seems that the dcterms:replaces option considers two resources
> (one that replaces the other).


Indeed. The "bnf" resource replaces, by all means of the term, the "stitch"
one.


> Which in turn hints that you're considering that the URIs denote the URIs
> themselves (or a 'concept-with-a-URI'), and not the resource (concept).


There I don't follow you. Let's say I have both URIs stitch:x and bnf:y.
They both identify resources, which happen to be instances of skos:Concept.
When I read   stitch:x  dcterms:isReplacedBy   bnf:y
I understand : whenever you used the resource stitch:x (in your linked
data, vocabularies, index ...), please now use bnf:y.
That does not mean only dumbly changing the URI in your application; it
means also that if you want to figure out the current definition of the
concept, trust what you'll find when you dereference bnf:y. It might be
strictly the same description as was once found at stitch:x, or it might
have changed (for example the concept has been moved in the RAMEAU
hierarchy, or its label or definition slightly modified, etc.). And since
from the stitch side you don't know about it, you can't assert any
owl:sameAs for sure. The BNF description can keep owl:sameAs links to
assert that indeed, for this very concept, the semantics has not changed
since stitch, and get rid of this sameAs if ever the concept changes.
Actually, quite a lot of RAMEAU concepts have changed since the stitch
publication. Maybe (certainly) some concepts present in stitch are not
present any more in RAMEAU. In that case bnf:y would return 404, and that's
OK. It means stitch:x has been replaced by nothing in the bnf namespace.
Of course, this does not look like a good practice, but I'm afraid it just
shows the plain fact that RAMEAU has not (yet) got a clean deprecation
mechanism, unless I am missing recent developments (Romain can correct me
if I am wrong). I'm sure it will have one some day soon :)
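
To sum up the pattern in triples (a sketch; the stitch: and bnf: prefixes
are just shorthand for the two namespaces at hand):

@prefix stitch:  <http://stitch.cs.vu.nl/vocabularies/rameau/> .
@prefix bnf:     <http://data.bnf.fr/ark:/12148/> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix owl:     <http://www.w3.org/2002/07/owl#> .

stitch:x dcterms:isReplacedBy bnf:y .
# asserted from the BnF side only, and dropped if the concept ever changes:
bnf:y    owl:sameAs           stitch:x .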

Bernard

-- 
Bernard Vatant
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
 Skype : bernard.vatant
Linked Open Vocabularies <http://labs.mondeca.com/dataset/lov>


Mondeca
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>


Re: Question on "moving" linked data sets

2012-04-19 Thread Bernard Vatant
Hello Antoine

My take on this would be to use dcterms:isReplacedBy links rather than
owl:sameAs.
Descriptions of the concepts by BNF might change in the future, and although
the original identifier is the same, the description might be out of sync
at some point.
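
Concretely, with the URIs from the example below (a minimal sketch):

@prefix dcterms: <http://purl.org/dc/terms/> .

<http://stitch.cs.vu.nl/vocabularies/rameau/ark:/12148/cb14521343b>
    dcterms:isReplacedBy <http://data.bnf.fr/ark:/12148/cb14521343b> .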

Bernard

On 19 April 2012 at 16:23, Antoine Isaac wrote:

> Dear all,
>
> We have a question on an what to do when a linked data set is "moved" from
> one namespace to the other. We searched for recipes to apply, but did not
> really find anything 'official'  around...
>  The VU university of Amsterdam has published a Linked Data SKOS
> representation of RAMEAU [1] as a prototype, several years ago. For example
> we have
> http://stitch.cs.vu.nl/vocabularies/rameau/ark:/12148/cb14521343b
>
> Recently, BnF implemented its own production service for RAMEAU. The
> previous concept is at:
> http://data.bnf.fr/ark:/12148/cb14521343b
> (see RDF at http://data.bnf.fr/14521343/web_semantique/rdf.xml )
>
> The production service makes the prototype obsolete. Our issue is how to
> properly "transition" from one to the other. Several services are using the
> URIs of the prototype. For example at the Library of Congress:
> http://id.loc.gov/authorities/subjects/sh2002000569
>
> We can ask for the people we know to change their links. But identifying
> the users of URIs seems too manual, error-prone a process. And of course in
> general we do not want links to be broken.
>
> Currently we have done the following:
>
> - a 301 "moved permanently" redirection from the 
> stitch.cs.vu.nl/rameauprototype to
> data.bnf.fr.
>
> - an owl:sameAs statement between the prototype URIs and the production
> ones, so that a client searching for data on the old URI gets data that
> enables it to make the connection with the original resource (URI) it was
> seeking data about.
>
> Does that seem ok? What should we do, otherwise?
>
> Thanks for any feedback you could have,
>
> Antoine Isaac (VU Amsterdam side)
> Romain Wenz (BnF side)
>
> [1] RAMEAU is a vocabulary (thesaurus) used by the National Library of
> France (BnF) for describing books.
>
>


-- 
Bernard Vatant
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
 Skype : bernard.vatant
Linked Open Vocabularies <http://labs.mondeca.com/dataset/lov>


Mondeca
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>


Re: ANN: Nature Publishing Group Linked Data Platform

2012-04-05 Thread Bernard Vatant
Hello Tony

Amazing work indeed.  I have a little LOV echo to the big LOD call of
Kingsley :)

At http://ns.nature.com/docs/terms/ I get only the vocabulary OWLDoc, no
conneg to some rdf file?
Is this rdf file available somewhere?

Thanks

Bernard


On 5 April 2012 at 13:25, Kingsley Idehen wrote:

> On 4/5/12 5:17 AM, Hammond, Tony wrote:
>
>> ** Apologies for cross-posting **
>>
>> Hi:
>>
>> We just wanted to share this news from yesterday's NPG press release [1]:
>>
>> "Nature Publishing Group (NPG) today is pleased to join the linked
>> data
>> community by opening up access to its publication data via a linked data
>> platform. NPG's Linked Data Platform is available at
>> http://data.nature.com.
>>
>> The platform includes more than 20 million Resource Description
>> Framework (RDF) statements, including primary metadata for more than
>> 450,000
>> articles published by NPG since 1869. In this first release, the datasets
>> include basic citation information (title, author, publication date, etc)
>> as
>> well as NPG specific ontologies. These datasets are being released under
>> an
>> open metadata license, Creative Commons Zero (CC0), which permits maximal
>> use/re-use of this data.
>>
>> NPG's platform allows for easy querying, exploration and extraction of
>> data and relationships about articles, contributors, publications, and
>> subjects. Users can run web-standard SPARQL Protocol and RDF Query
>> Language
>> (SPARQL) queries to obtain and manipulate data stored as RDF. The platform
>> uses standard vocabularies such as Dublin Core, FOAF, PRISM, BIBO and OWL,
>> and the data is integrated with existing public datasets including
>> CrossRef
>> and PubMed.
>>
>> More information about NPG's Linked Data Platform is available at
>> http://developers.nature.com/docs.
>> Sample queries can be found at
>> http://data.nature.com/query. "
>>
>> Cheers,
>>
>> Tony
>>
>> [1] http://www.nature.com/press_releases/linkeddata.html
>>
>
> Great stuff!
>
> BTW -- do you also expose an RDF dump (directly or via a VoiD graph) ?
> Naturally, I would also like to add this dataset to the LOD cloud cache we
> maintain.
>
> --
>
> Regards,
>
> Kingsley Idehen
> Founder&  CEO
> OpenLink Software
> Company Web: http://www.openlinksw.com
> Personal Weblog: http://www.openlinksw.com/blog/~kidehen
> Twitter/Identi.ca handle: @kidehen
> Google+ Profile: https://plus.google.com/112399767740508618350/about
> LinkedIn Profile: http://www.linkedin.com/in/kidehen
>
>

-- 
Bernard Vatant
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
Linked Open Vocabularies <http://labs.mondeca.com/dataset/lov>


Mondeca
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>


Re: Proposal to amend the httpRange-14 resolution

2012-04-04 Thread Bernard Vatant
Hello David

Now that this conversation has turned a bit less noisy :)
What I have written recently is along the lines of the distinction you
propose between definition and description, and the process you are
envisioning [1].
Kingsley has an amazing and enthusiastic faith in the power of the Web's
architecture, but this is not only technical, it is about a social process.
Agreed, there is no way to disambiguate a URI once and for all, but as in
natural languages there is a never-ending quest towards accuracy and
disambiguation.
And indeed this has to start with the URI owner providing the first
description of the resource, acting as a definition; further descriptions,
either provided by the URI owner or by other sources, can be compared to
the definition to figure out if they bring extra information, or other
perspectives on the resource, or if they are inconsistent with what the URI
owner asserts in the definition.

The fact that, as Pat Hayes and others have correctly pointed out, over 99.9%
of URIs on the Web do not provide such definitions does not prevent us from
pushing the provision of such definitions as a best practice.

If you are a URI owner, and if you want your URIs to play nicely and be a
reliable reference in the Semantic Web, don't take the risk of seeing third
parties provide various and probably inconsistent descriptions of what your
URIs mean, based or not on debatable and varied interpretations of the
semantics of HTTP GET answers.

Best

Bernard

[1] http://blog.hubjects.com/2012/03/beyond-httprange-14-addiction.html


On 4 April 2012 at 03:00, David Booth wrote:

> Hi Kingsley,
>
> On Tue, 2012-04-03 at 15:01 -0400, Kingsley Idehen wrote:
> > On 4/3/12 1:46 PM, David Booth wrote:
> [ . . . ]
> > > This use of URI definitions helps to anchor the "meaning" of the URI,
> so
> > > that it does not drift uncontrollably.
> [ . . . ]
> >
> > But once on the Web the user really [loses] control. There is no such
> > thing as real stability per se. Only when you have system faults can one
> > at least pivot accordingly. Thus, you only get the aforementioned
> > behavior in the context of a specific system and its associated rules.
>
> I think you're right that we can never get total semantic stability in
> an absolute sense.  But if we establish a commonly followed convention
> in which the URI owner's URI definition is used when making statements
> involving a URI, then the semantic drift will at least be substantially
> limited.  Again, this does not require *everyone* to follow the
> convention.  But the more that do follow it, the more effective it
> becomes in making the web a sort of self-describing dictionary.
>
>
> --
> David Booth, Ph.D.
> http://dbooth.org/
>
> Opinions expressed herein are those of the author and do not necessarily
> reflect those of his employer.
>
>
>
-- 
Bernard Vatant
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
 Skype : bernard.vatant
Linked Open Vocabularies <http://labs.mondeca.com/dataset/lov>


Mondeca
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>


Re: Change Proposal for HttpRange-14

2012-03-26 Thread Bernard Vatant
All

Like many others, it seems, I had sworn to myself: nevermore HttpRange-14.
But I will also bite the bullet.
Here goes ... Sorry, I have a hard time following who said what in
all those entangled threads, so I answer to ideas more than to people.

> There is no need for anyone to even talk about "information resources".


YES! I've come over the years to a very radical position on this, which is
that we have created for ourselves a huge non-issue with those notions of
"information resource" and "non-information resource". Please show any
application making use of this distinction, or which would break if we got
rid of this distinction.
And in any case, if there is a distinction, it is about how the URI behaves
in the HTTP protocol (what it accesses), which should be kept independent
of what the URI denotes. The debate will never end as long as those two
aspects are mixed, as they are in the current httpRange-14 resolution as
well as in various change proposals (hence those interminable threads).


> The important point about http-range-14, which unfortunately it itself
> does not make clear, is that the 200-level code is a signal that the URI
> *denotes* whatever it *accesses* via the HTTP internet architecture.
>

> The proposal is that URI X denotes what the publisher of X says it denotes,
> whether it returns 200 or not.
>

This is the only position which makes sense to me. What the URI is intended
to denote can be only derived from explicit descriptions, whatever the way
you access those descriptions. And assume that if there is no such
description, the URI is intended to provide access to somewhere, but not to
denote *some* *thing*. It's just actionable in the protocol, and clients do
whatever they want with what they get. It's the way the (non-semantic) Web
works, and it's OK.


> And what if the publisher simply does not say anything about what the URI
> denotes?


Then nobody knows, and actually nobody cares what the URI denotes; or say
that all users implicitly agree it is the same thing, but it does not break
any system to ignore what it is. Or, again, show me counter-examples ...

> After all, something like 99.999% of the URIs on the planet lack this
> information.


Which means that for the Web to work so far, knowing what a URI denotes is
useless. But it's useful for the Semantic Web. So let's say that a URI is
useful for, or is part of, the Semantic Web if some description(s) of it
can be found. And we're done.


> What, if anything, can be concluded about what they denote?


Nothing, and let's face it.


> The http-range-14 rule provides an answer to this which seems reasonably
> intuitive.


Wonder if it can be the same Pat Hayes writing this as the one who wrote
six years ago "In Defence of Ambiguity" :)
http://www.ibiblio.org/hhalpin/irw2006/presentations/HayesSlides.pdf
Quote (from the conclusion)
"WebArch http-range-14 seems to presume that if a URI accesses  something
directly (not via an http redirect), then the URI must refer  to what it
accesses.
This decision is so bad that it is hard to list all the mistakes in it, but
here are a few :
- It presumes, wrongly, that the distinction between access and reference
is based on the distinction between accessible and inaccessible referents.
 ... [see above link for full list]

Pat, has your position changed on this?


> What would be your answer? Or do you think there should not be any
> 'default' rule in such cases?
>

I would say so, because such a rule is basically useless. As useless as to
wonder what a phone number denotes. A phone number allows you to access a
point in a network given the phone infrastructure and protocols, it does
not denote anything except in specific contexts where it's used explicitly
as an identifier e.g., to uniquely identify people, organizations or
services. Otherwise it works just like a phone number should do.

Best regards

Bernard

-- 
Bernard Vatant
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
 Skype : bernard.vatant
Linked Open Vocabularies <http://labs.mondeca.com/dataset/lov>


Mondeca
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>


Re: owl:sameAs temptation

2012-03-07 Thread Bernard Vatant
Hi Sarven

You might be interested in the way I've mapped the Geonames feature codes,
which are modelled as instances of a subclass of skos:Concept (hence OWL
individuals), to equivalent classes in other ontologies. See [1] and [2].

The rationale is that most of the time, when you assert that an owl:Thing T
is "equivalent to" some owl:Class C, it means that being of rdf:type C is
equivalent to having T as a value of some "typing" property. For example,
being an instance of the class "BlueThing" is equivalent to having "Blue"
as the value of some "hasColor" property. This can be modelled as in [2]
using an owl:hasValue restriction, avoiding the owl:sameAs temptation and
keeping all your ontology in safe OWL-DL land, this way :

<owl:Class rdf:about="http://example.org/BlueThing">
  <rdfs:label>Blue Thing</rdfs:label>
  <owl:equivalentClass>
    <owl:Restriction>
      <owl:onProperty rdf:resource="http://example.org/hasColor"/>
      <owl:hasValue rdf:resource="http://example.org/Blue"/>
    </owl:Restriction>
  </owl:equivalentClass>
</owl:Class>

<rdf:Description rdf:about="http://example.org/Blue">
  <rdfs:label xml:lang="en">Blue</rdfs:label>
  <rdfs:label xml:lang="fr">Bleu</rdfs:label>
</rdf:Description>

Hope this helps

Bernard

[1] http://www.geonames.org/ontology/ontology_v3.01.rdf
[2] http://www.geonames.org/ontology/mappings_v3.01.rdf


On 7 March 2012 at 07:43, Sarven Capadisli wrote:

> Hi,
>
> I'm sure this is talked somewhere, I'd love a pointer if you know any:
>
> I often see resources of type owl:Class get paired with resources of type
> owl:Thing using owl:sameAs. As far as I understand, this is incorrect since
> domain and range of owl:sameAs should be owl:Thing.
>
> I'm tempted to change my resource that is a skos:Concept
> skos:exactMatch'ed with a resource of type owl:Thing, and use owl:sameAs.
> Sort of like "everyone else is doing it, it should be okay", and "don't
> need to fear the thought police".
>
> However, I don't wish to do that with a clear conscience, hence, I'd
> appreciate it if anyone can shed some light here for me and help me
> understand to make an informed decision based on reason (no pun intended).
>
> Related to this, I was wondering whether it makes sense to claim a
> resource to be of type owl:Class as well as of type owl:Thing, where may be
> appropriate, or one could get away with it e.g., a country. If this is
> okay, I imagine it is okay to use owl:sameAs for the subject at hand and
> point to yet another thing.
>
> Thanks all.
>
> -Sarven
>
>


-- 
Bernard Vatant
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
Linked Open Vocabularies <http://labs.mondeca.com/dataset/lov>


Mondeca
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>


Re: How do OBO ontologies work on the LOD?

2012-02-22 Thread Bernard Vatant
>>> - Pete
>>>
>>> --
>>>
>>> 
>>> Pete DeVries
>>> Department of Entomology
>>> University of Wisconsin - Madison
>>> 445 Russell Laboratories
>>> 1630 Linden Drive
>>> Madison, WI 53706
>>> Email: pdevr...@wisc.edu
>>> TaxonConcept <http://www.taxonconcept.org/>  &  
>>> GeoSpecies<http://about.geospecies.org/> Knowledge
>>> Bases
>>> A Semantic Web, Linked Open Data <http://linkeddata.org/>  Project
>>>
>>> ------
>>>
>>
>>
>
>
> --
>
> 
> Pete DeVries
> Department of Entomology
> University of Wisconsin - Madison
> 445 Russell Laboratories
> 1630 Linden Drive
> Madison, WI 53706
> Email: pdevr...@wisc.edu
> TaxonConcept <http://www.taxonconcept.org/>  &  
> GeoSpecies<http://about.geospecies.org/> Knowledge
> Bases
> A Semantic Web, Linked Open Data <http://linkeddata.org/>  Project
>
> --
>
>


-- 
Bernard Vatant
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
Linked Open Vocabularies <http://labs.mondeca.com/dataset/lov>


Mondeca
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>


Re: URIs for languages

2012-02-17 Thread Bernard Vatant
tain a namespace such as http://sharedname.org/lang/ that could be
> redirected from lexvo to future-lexvo domains/URLs.
>
> [Lars - your message came in just as I was about to press . I'm
> confused by your reply. What about the problems with LOC lang ids that
> Gerard pointed out? Is that what you meant by "If only they could do
> ISO 3166 countries as well..."?]
>
> Best,
> Scott
>
> On Thu, Feb 16, 2012 at 8:21 PM, Gerard de Melo 
> wrote:
> > Hi Bernard,
> >
> >
> > I think now we should forget about URIs published by pionneer projects
> such
> > as OASIS TC, lingvoj.org and lexvo.org, and stick to URIs published by
> > genuine authority Library of Congress which is as close to the primary
> > source as can be. So if you want to use a URI for Ancient Greek as
> defined
> > by ISO 639-2, please use http://id.loc.gov/vocabulary/iso639-2/grc.
> >
> > BTW Lars Marius, hello, what do you think? URIs at id.loc.gov are really
> > what we were dreaming to achieve in 2001, right?
> >
> >
> > Now of course I may be a bit biased here, but I do not believe that the
> > id.loc.gov service solves
> > all of the problems. This is from the Lexvo.org FAQ [1]:
> >
> > The advantage of using those URIs is that they are maintained by the
> Library
> > of Congress. However, there are also several issues to consider. First of
> > all, ISO 639-2 is orders of magnitude smaller than ISO 639-3 and for
> example
> > lacks an adequate code for Cantonese, which is spoken by over 60 million
> > speakers.
> > More importantly, the LOC's URIs do not describe languages per se but
> rather
> > describe code-mediated conceptualizations of languages. This implies, for
> > instance, that the French language (<http://lexvo.org/id/iso639-3/fra>)
> has
> > two different counterparts at the LOC,
> > <http://id.loc.gov/vocabulary/iso639-2/fra> and
> > <http://id.loc.gov/vocabulary/iso639-2/fre>, which each have slightly
> > different properties.
> > Finally, connecting your data to Lexvo.org's information is likely to be
> > more useful in practical applications. It offers information about the
> > languages themselves, e.g. where they are spoken, while the LOC mostly
> > provides information about the codes, e.g. when the codes were created
> and
> > updated and what kind of code they are.
> > In practice, you can also use both codes simultaneously in your data.
> > However, you need to be very careful to make sure that you are asserting
> > that a publication is written in French rather than in some concept of
> > French created on January, 1, 1970 in the United States.
> >
> >
> > Best,
> > Gerard
> >
> > [1] http://www.lexvo.org/linkeddata/faq.html
> >
> > --
> > Gerard de Melo [dem...@icsi.berkeley.edu]
> > http://www.icsi.berkeley.edu/~demelo/
>



-- 
Bernard Vatant
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
Linked Open Vocabularies <http://labs.mondeca.com/dataset/lov>


Mondeca
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>


Re: URIs for languages

2012-02-16 Thread Bernard Vatant
Hi all

As creator and curator of lingvoj.org, I think I can give some explanations
of such mysteries :)

Lingvoj.org URIs included quite an arbitrary set of URIs for languages; gory
details of the story, starting in 2007, can be found on the lingvoj.org main
page. To be in line with BCP 47 and, e.g., values used in xml:lang tags, the
lingvoj.org URIs are based on ISO 639-1 (2-letter) codes when available.
For Japanese the lingvoj.org URI is therefore http://www.lingvoj.org/lang/ja,
which is actually redirected to http://lexvo.org/id/iso639-3/jpn since 2010,
for reasons explained on the same page.
For Ancient Greek there is no 2-letter code, hence the 3-letter code "grc"
is used (either ISO 639-2 or 639-3 in this case).
Lexvo.org URIs are all based on ISO 639-3 3-letter codes, which is simpler.

Now this is part of a story which started even earlier, more than ten years
ago in the OASIS Published Subjects Technical Committee, with URIs such as
http://psi.oasis-open.org/iso/639/#grc (BTW still in use inside Mondeca
software). Lars Marius Garshol, its editor, is in cc.

I think now we should forget about URIs published by pioneer projects such
as the OASIS TC, lingvoj.org and lexvo.org, and stick to URIs published by a
genuine authority, the Library of Congress, which is as close to the primary
source as can be. So if you want to use a URI for Ancient Greek as defined
by ISO 639-2, please use http://id.loc.gov/vocabulary/iso639-2/grc.
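
For instance, to state the language of a document, one option is
dcterms:language (the document URI is of course made up):

@prefix dcterms: <http://purl.org/dc/terms/> .

<http://example.org/doc/123>
    dcterms:language <http://id.loc.gov/vocabulary/iso639-2/grc> .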

BTW Lars Marius, hello, what do you think? URIs at id.loc.gov are really
what we were dreaming to achieve in 2001, right?

Bernard



2012/2/16 M. Scott Marshall 

> I was planning to give the example URI for the Japanese language
> (stemming out of work at the Biohackathon 2011):
> http://lexvo.org/id/iso639-3/jpn
>
> BTW, I wasn't able to use the simpler URI scheme below for jpn as you
> had done with grc:
> http://www.lingvoj.org/lang/jpn
> ?
>
> -Scott
>
> On Thu, Feb 16, 2012 at 5:26 PM, Barry Norton 
> wrote:
> >
> > http://www.lingvoj.org/lang/grc
> >
> > Barry
> >
> >
> >
> >
> > On 16/02/2012 16:15, Jordanous, Anna wrote:
> >
> > Hi LOD list,
> >
> > I am looking for URIs to use  to represent particular languages
> (primarily
> > Ancient Greek, Arabic, English and Spanish). This is to represent what
> > language a document is written in, in an RDF triple. I thought it would
> be
> > obvious how to refer to the language itself, but I am struggling.
> >
> > I would like to use something like the ISO 639 standard for languages. To
> > distinguish between Ancient Greek and Modern Greek, I have to use the
> > ISO-639-2 set of language codes. http://www.loc.gov/standards/iso639-2/ (The
> > codes are grc and gre respectively)
> >
> > http://downlode.org/Code/RDF/ISO-639/ is an RDF representation of ISO
> 639
> > but it doesn’t include Ancient Greek as it only includes ISO-639-1
> > languages.
> >
> > As far as I see, I have the following options e.g. for Arabic
> > Use the
> > http://www.loc.gov/standards/iso639-2/php/langcodes_name.php?code_ID=22
> >
> http://www.loc.gov/standards/iso639-2/php/langcodes-keyword.php?SearchTerm=ara&SearchType=iso_639_2
> > http://www.loc.gov/standards/iso639-2#ara
> >
> >
> > This really must be simpler – what am I missing? Any comments welcomed.
> > Thanks for your help
> > anna
> >
> > ---
> > Anna Jordanous
> > Research Associate
> > Centre for e-Research
> > King's College London
> > Tel: +44 (0) 20 7848 1988
> >
> >
> >
> >
> >
>
>
>
> --
> M. Scott Marshall
> http://staff.science.uva.nl/~marshall
>
>


-- 
Bernard Vatant
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
Linked Open Vocabularies <http://labs.mondeca.com/dataset/lov>


Mondeca
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>


Re: [Ann] LODStats - Real-time Data Web Statistics

2012-02-03 Thread Bernard Vatant
Hello Richard

> All in all, almost half of the vocabularies used in LOD do not meet a
> minimal quality requirement : being published at their namespace.
>
> Now, if there was a list of these, annotated with some stats (used in how
> many datasets? occurring in how many triples?), then we could start at the
> top of the list, and sort it out with the various publishers involved.
>

Indeed! That's the purpose of what I started in the GDocs ... I just sent
you editing rights :)

That is work we have already started with Pierre-Yves inside the LOV
ecosystem : pinging the vocabulary curators when they rely on
not-so-reliable namespaces (either their own, or those of
vocabularies they re-use but don't maintain). The objective is to
improve the overall quality of the vocabulary ecosystem, one vocabulary at
a time :)

It is a patient but important task. You're welcome to participate. It is
actually 80% social and 20% technical :)

Best

Bernard

-- 
Bernard Vatant
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
Linked Open Vocabularies <http://labs.mondeca.com/dataset/lov>


Mondeca
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>


Re: [Ann] LODStats - Real-time Data Web Statistics

2012-02-02 Thread Bernard Vatant
Hello all

I've started comparing http://stats.lod2.eu/vocabularies with what we have
in store in LOV.

A few preliminary stats are available. Those who prefer raw data can go
directly to the shared GDocs (waiting for better formats)
https://docs.google.com/spreadsheet/ccc?key=0AiYc9tLJbL4SdEhvMlJjSmJELVhqVk9RUzBIWEhBMUE
Public access is read-only; if you want edit rights, just ask.
Pretty much a sandbox / work in progress; provisional but interesting
figures nevertheless. Three sheets are available :

1. LOV in LOD : vocabularies extracted by LODStats and already present in
LOV : 54 so far
2. LOV w/o LOD : vocabularies in LOV not yet used in LOD (at least not
extracted by LODStats) : 137
(figures to be consolidated since there are 189 vocs in LOV altogether -
duplicates to double-check)
3. LOD w/o LOV : vocabularies extracted by LODStats and not (yet) present
in LOV : 150

Figures 1 and 2 show that there is still a large majority of unused
vocabularies in LOV. This is useful information. Does that mean they are
useless? Time will tell ...

Figure 3 is more challenging. I've looked at each of those 150 URIs and, as
of today they can be distributed as follows :

Fewer than 50 are proper dereferenceable vocabularies, hence "LOV-able".
That means a challenging to-do list for LOV curators, which should lead
the figures in 1 and 3 to meet somewhere around 100 with a little effort,
but be patient, this is human-checked. If you want some of those to be
added in priority, use the suggest facility at
http://labs.mondeca.com/dataset/lov/suggest/

More than 60 are either 404, time out or access denied, which does not come
as a surprise, but is nevertheless a big issue. It means that data using
those vocabularies are relying on semantics no one can check.

The rest are dereferenceable, but to various types of resources more or less
close to one or several vocabularies, not published following good
practices ; in a word, not in a LOV-able state.

All in all, almost half of the vocabularies used in LOD do not meet a
minimal quality requirement : being published at their namespace.

Conclusion : Quality, Quality, Quality please !
Double-check the vocabularies you use, publish them properly if they are in
your namespace etc etc.
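
To make "published at their namespace" concrete, here is a minimal Turtle
sketch of what a GET on a namespace URI should return ; everything under
the made-up example.org namespace is a placeholder :

@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix ex:   <http://example.org/ns#> .

# The ontology header, served at the namespace URI itself ...
ex: a owl:Ontology ;
    rdfs:label "Example vocabulary"@en ;
    owl:versionInfo "0.1" .

# ... and each term, tied back to the namespace it lives in, so data
# consumers can check the semantics they rely on.
ex:SomeClass a rdfs:Class ;
    rdfs:isDefinedBy ex: ;
    rdfs:label "Some class"@en .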

Bernard


2012/2/2 Bernard Vatant 

> Hello Sören
>
> Great work! Of course as you can imagine I jumped right away to
> http://stats.lod2.eu/vocabularies.
> Interesting to see the broad figures (205 vocabularies) vs 189 harvested
> as of today at http://labs.mondeca.com/dataset/lov
> So I would like to compare, see the overlap ... and complete LOV as needed
> :)
>
> Do you have the vocabularies and datasets using them available in a single
> file? (preferably RDF of course!)
>
> Thanks
>
> Bernard
>
>
>
> 2012/2/2 Sören Auer 
>
>> Dear all,
>>
>> We are happy to announce the first public *release of LODStats*.
>>
>> LODStats is a statement-stream-based approach for gathering
>> comprehensive statistics about datasets adhering to the Resource
>> Description Framework (RDF). LODStats was implemented in Python and
>> integrated into the CKAN dataset metadata registry [1]. Thus it helps to
>> obtain a comprehensive picture of the current state of the Data Web.
>>
>> More information about LODStats (including its open-source
>> implementation) is available from:
>>
>> http://aksw.org/projects/LODStats
>>
>> A demo installation collecting statistics from all LOD datasets
>> registered on CKAN is available from:
>>
>> http://stats.lod2.eu
>>
>> We would like to thank the AKSW research group [2] and LOD2 project [3]
>> members for their suggestions. The development of LODStats was supported by
>> the FP7 project LOD2 (GA no. 257943).
>>
>> On behalf of the LODStats team,
>>
>> Sören Auer, Jan Demter, Michael Martin, Jens Lehmann
>>
>> [1] http://ckan.net
>> [2] http://aksw.org
>> [3] http://lod2.eu
>>
>>
>
>
> --
> *Bernard Vatant*
> Vocabularies & Data Engineering
> Tel :  + 33 (0)9 71 48 84 59
> Skype : bernard.vatant
> Linked Open Vocabularies <http://labs.mondeca.com/dataset/lov>
>
> 
> *Mondeca*
> 3 cité Nollez 75018 Paris, France
> www.mondeca.com
> Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>
>
>


-- 
*Bernard Vatant*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
Linked Open Vocabularies <http://labs.mondeca.com/dataset/lov>


*Mondeca*
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>


Re: [Ann] LODStats - Real-time Data Web Statistics

2012-02-02 Thread Bernard Vatant
Hello Sören

Great work! Of course as you can imagine I jumped right away to
http://stats.lod2.eu/vocabularies.
Interesting to see the broad figures (205 vocabularies) vs 189 harvested as
of today at http://labs.mondeca.com/dataset/lov
So I would like to compare, see the overlap ... and complete LOV as needed
:)

Do you have the vocabularies and datasets using them available in a single
file? (preferably RDF of course!)

Thanks

Bernard


2012/2/2 Sören Auer 

> Dear all,
>
> We are happy to announce the first public *release of LODStats*.
>
> LODStats is a statement-stream-based approach for gathering
> comprehensive statistics about datasets adhering to the Resource
> Description Framework (RDF). LODStats was implemented in Python and
> integrated into the CKAN dataset metadata registry [1]. Thus it helps to
> obtain a comprehensive picture of the current state of the Data Web.
>
> More information about LODStats (including its open-source
> implementation) is available from:
>
> http://aksw.org/projects/LODStats
>
> A demo installation collecting statistics from all LOD datasets
> registered on CKAN is available from:
>
> http://stats.lod2.eu
>
> We would like to thank the AKSW research group [2] and LOD2 project [3]
> members for their suggestions. The development of LODStats was supported by
> the FP7 project LOD2 (GA no. 257943).
>
> On behalf of the LODStats team,
>
> Sören Auer, Jan Demter, Michael Martin, Jens Lehmann
>
> [1] http://ckan.net
> [2] http://aksw.org
> [3] http://lod2.eu
>
>


-- 
*Bernard Vatant*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
Linked Open Vocabularies <http://labs.mondeca.com/dataset/lov>


*Mondeca*
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>


Re: Modelling colors

2012-01-26 Thread Bernard Vatant
Hi Melvin

There are a few resources in the LOV database which might be of interest
http://labs.mondeca.com/dataset/lov/search/#s=color
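
Failing a perfect match there, a minimal Turtle sketch ; everything under
the ex: prefix is a made-up placeholder, not an existing vocabulary :

@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/vocab#> .

# Hypothetical property, pending a better match found through LOV
ex:hasColor a rdf:Property ;
    rdfs:label "has color"@en .

# Pointing at a DBpedia color resource keeps the value in the linked data space
<http://example.org/things/my-car> ex:hasColor <http://dbpedia.org/resource/Red> .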

Bernard

2012/1/26 Melvin Carvalho 

> I see hasColor a lot in the OWL documentation but I was trying to work
> out a way to say something has a certain color.
>
> I understand linked open colors was a joke
>
> Anyone know of an ontology with color or hasColor as a predicate?
>
>


-- 
*Bernard Vatant*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
Linked Open Vocabularies <http://labs.mondeca.com/dataset/lov>


*Mondeca*
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>


Re: Recommendations for Documenting EoL.org Content Partners

2012-01-18 Thread Bernard Vatant
Hi Peter

What about something like :

<http://dbpedia.org/resource/EOL> rdf:type <http://www.w3.org/ns/org#OrganizationalCollaboration>
<http://dbpedia.org/resource/EOL> <http://www.w3.org/ns/org#hasMember> <http://eol.org/partner/159>
<http://eol.org/content/159>  foaf:page <http://eol.org/content_partners/159>

Bernard

2012/1/18 Peter DeVries 

> Hi All,
>
> If you were to recommend how to markup the content providers listed on
> this page how would you do it?
>
> http://eol.org/content_partners
>
> Would you use SIOC, DOAP or some other vocabulary?
>
> What some would like is the ability to cite a content partner using just a
> URI.
>
> For example:
>
> 
>
> I would appreciate any suggestion or comments :-)
>
> Thanks,
>
> - Pete
>
>
> 
> Pete DeVries
> Department of Entomology
> University of Wisconsin - Madison
> 445 Russell Laboratories
> 1630 Linden Drive
> Madison, WI 53706
> Email: pdevr...@wisc.edu
> TaxonConcept <http://www.taxonconcept.org/>  &  
> GeoSpecies<http://about.geospecies.org/> Knowledge
> Bases
> A Semantic Web, Linked Open Data <http://linkeddata.org/>  Project
>
> --
>



-- 
*Bernard Vatant*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
Linked Open Vocabularies <http://labs.mondeca.com/dataset/lov>


*Mondeca*
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>


Fwd: status and problems on sematicweb.org

2012-01-12 Thread Bernard Vatant
Trying the lod list since apparently the message did not make it to semweb
list ...

-- Forwarded message --
From: Bernard Vatant 
Date: 2012/1/12
to Semantic Web 

Hi all

A related issue is that under semanticweb.org domain or subdomains are
living several vocabularies (ontologies), some of which are used in the
linked data space, either by published data sets or other vocabularies
relying on them.

But their status is variable. Examples :

http://data.semanticweb.org/ns/swc/ontology is alive and well so far
and re-used e.g., by http://online-presence.net/opo/ns

http://proton.semanticweb.org/2005/04/protons# is alive and well so far
and re-used e.g., by http://www.bbc.co.uk/ontologies/sport/

But

http://www.semanticweb.org/ontologies/2009/2/HumanEmotions.owl is 404,
although http://kdo.render-project.eu/kdo declares that it imports it
Actually http://semanticweb.org/ontologies/ itself is 404

http://knowledgeweb.semanticweb.org/semanticportal/OWL/Documentation_Ontology.owl
is 404,
although http://lsdis.cs.uga.edu/projects/semdis/opus# declares many
mappings to it

http://data.semanticweb.org/ns/misc is 404
although it is used by http://data.semanticweb.org/ns/swc/ontology

And that's only what I can quickly discover using the results provided by
the LOV bot which explores the Linked Open Vocabularies space.

What is the bottom line of this? Data and vocabularies publishers take for
granted that published vocabularies can be re-used at will and rely on
them. But when a re-used vocabulary goes off-line, not only do we have 404s in
the linked data web, but the semantics of dependent vocabularies is affected.

semanticweb.org is just an example. Unfortunately it's not the only one. It
seems that vocabulary publishers are often not aware of their long-term
responsibility. In the LOV project we have even had answers from some
people mentioned as vocabulary creators who were not even aware that their
vocabulary was actually still used ...

But given its singular place in the semantic web space, one could think
that semanticweb.org should show off good practices ...

Best

Bernard


2012/1/12 Markus Krötzsch 

> Hi Yuri,
>
> let us take this to one mailing list semantic-...@w3.org, as this is the
> list that is most involved (please drop the others when you reply).
>
> As the technical maintainer of the site, I largely agree with your
> assessment. In spite of the very high visibility of the site (and perceived
> authority), the active editing community is not big. This is a problem
> especially given the significant and continued spam attacks that the site
> is under due to its high visibility (I just recently changed the captcha
> system and rolled back thousands of edits, yet it seems they are already
> breaking through again, though in smaller numbers).
>
> I do not want to blame anybody for the state of affairs: most of us do not
> have the time to contribute significant content to such sites. However,
> given the extraordinary visibility of the site, we should all perceive this
> as a major problem (to the extent that we attach our work to the label
> "semantic web" in any way).
>
> So what can be done?
>
> (1) Freeze the wiki. A weaker version of this is: allow users only to edit
> after they were manually added to a group of trusted users (all humans
> welcome). This would require somebody to manage these permissions but would
> allow existing projects/communities to continue to use the site.
>
> (2) Re-enforce spam protection on the wiki. Maybe this could be done, but
> the site is targeted pretty heavily. Standard captchas like ReCaptcha are
> thus getting broken (spammers do have an effective infrastructure for
> this), but maybe non-standard captchas could work better. This is a task
> for the technical maintainers (i.e., me and the folks at AIFB Karlsruhe
> where the site is hosted).
>
> (3) Clean the wiki. Whether frozen or not, there is a lot of spam already.
> Something needs to be done to get rid of it. This requires (easy but
> tedious) manual effort. Some stakeholders need to be found to provide basic
> workforce (e.g., by hiring a student to help with spam deletion).
>
> (4) Restore the wiki. Update the main pages (about technologies and active
> projects) to reflect a current and/or timeless state that we would like new
> readers to see. This again needs somebody to push it, and for writing pages
> about topics like SPARQL one would need some expertise. This is a challenge
> for the community.
>
> I am willing to invest /some/ time here to help with the above, but (3)
> and (4) requires support from more people. On the other hand, there are
> probably hardly more than 20 or 30 *essential* content pages that we are
> talking about here, plus many pages about projects and people that one
> should ask the stakeholders t

Re: ANN: Modular Unified Tagging Ontology (MUTO)

2011-11-18 Thread Bernard Vatant
Hi folks

Maybe a good way to capture the fact that MUTO has used previous works, but
with significant changes making it difficult to assert equivalences at element
level such as equivalentClass etc, would be to assert links at vocabulary
(ontology) level, using for example the property
http://www.w3.org/2000/10/swap/pim/doc#derivedFrom

<http://purl.org/muto/core>  doc:derivedFrom  <http://www.holygoat.co.uk/projects/tags/>
<http://purl.org/muto/core>  doc:derivedFrom  <http://moat-project.org/ns>

etc.

Such assertions could be added to other cross-vocabulary links at e.g.,
http://labs.mondeca.com/dataset/lov/details/vocabulary_muto.html. Actually
"derivedFrom" and "derivativeWork" should be mentioned in VOAF.

BTW if the honorable creator of http://www.w3.org/2000/10/swap/pim/doc is
following this thread (he might/should) he could take the opportunity to revisit
the definitions of doc:derivedFrom and doc:derivativeWork, inverse
properties with the same definition "A work wholey or partly used in the
creation of this one." Guess it is OK for the former, but not the latter :)

Best

Bernard

2011/11/18 Steffen Lohmann 

> On 17.11.2011 20:03, Richard Cyganiak wrote:
>
>> Hi Steffen,
>>
>> On 17 Nov 2011, at 14:34, Steffen Lohmann wrote:
>>
>>> MUTO should thus not be considered as yet another tagging ontology but
>>> as a unification of existing approaches.
>>>
>> I'm curious why you decided not to include mappings (equivalentClass,
>> subProperty etc) to the existing approaches.
>>
>
> Good point, Richard. I thought about it but finally decided to separate
> these alignments from the core ontology - therefore the "MUTO Mappings
> Module" (http://muto.socialtagging.org/core/v1.html#Modules).
>
> SIOC and SKOS can be nicely reused but aligning MUTO with the nine
> reviewed tagging ontologies is challenging and would result in a number of
> inconsistencies. This is mainly due to a different conceptual understanding
> of tagging and folksonomies in the various ontologies. To give some
> examples:
>
> - Are tags with same labels merged in the ontology (i.e. are they one
> instance)?
> - Is the number of tags per tagging limited to one or not?
> - In case of semantic tagging: Are single tags or complete taggings
> disambiguated?
> - How are the creators of taggings linked?
> - Are tags from private taggings visible to other users or not?
>
> Apart from that, I would run the risk that MUTO is no longer OWL Lite/DL
> which I consider important for a tagging ontology (reasoning of
> folksonomies).
>
> The current version of the MUTO Mappings Module provides alignments to
> Newman's popular TAGS ontology (mainly for compatibility reasons). Have a
> look at it and you'll get an idea of the difficulties in correctly aligning
> MUTO with existing tagging ontologies.
>
>
> Best,
> Steffen
>
> --
> Steffen Lohmann - DEI Lab
> Computer Science Department, Universidad Carlos III de Madrid
> Avda de la Universidad 30, 28911 Leganés, Madrid (Spain), Office: 22A20
> Phone: +34 916 24-9419, 
> http://www.dei.inf.uc3m.es/slohmann/
>
>
>
>


-- 
*Bernard Vatant*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant
Linked Open Vocabularies <http://labs.mondeca.com/dataset/lov>


*Mondeca*
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>


Re: Address Bar URI

2011-10-18 Thread Bernard Vatant
Hi Michael

Let me try to write down your case as I understand it, trying to avoid
Capitalized Buzzwords ;-)
Seems a good idea to me, although it introduces yet another level of
indirection in the picture, but maybe we need it.

We have three different types of animals to identify by URI

1. Something known as 'foo' in the "real" (or not) world :
http://example.org/thing/foo
2. A generic information resource binding the various representations of
'foo' on my server(s) : http://example.org/resource/foo
3. Representations/renderings of 'foo' in various formats (html, rdf, xml,
json, ...) / languages etc : http://example.org/resource/foo.html

The first URI is used in RDF descriptions of the thing, that I get for
example at http://example.org/resource/foo.rdf
The second URI is not used in the RDF descriptions whatsoever. It's a webby
trick enabling easy copy-paste, caching, display in the address bar, whatever
deals with Web conversation only interested in information resources. It's my
IR proxy to 1.

The conneg for 1 is a systematic 303 to 2, whatever the query.
The conneg for 2 serves the desired type of representation, without redirect.

Using 2 in Web dialogue avoids confusion : the URI in the browser is not
misleading. You've asked for an IR, here it is, and in the format you've
asked.
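
Summed up as a Turtle sketch, reusing the made-up example.org URIs above ;
foaf:isPrimaryTopicOf is only one plausible way to tie 1 to 2 :

@prefix foaf: <http://xmlns.com/foaf/0.1/> .

# 1. the thing itself (non-information resource), 303-redirected to 2
<http://example.org/thing/foo>
    foaf:isPrimaryTopicOf <http://example.org/resource/foo> .

# 2. is the generic information resource ; its concrete representations
# (e.g. /resource/foo.html) are only exposed via the Content-Location header.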

Do I get your point correctly?

Bernard

2011/10/18 Michael Smethurst 

> **
>
> Hi Richard
>
> (Again top post courtesy of webmail. sorry)
>
> I'm saying dbpedia is missing the concept of a *generic* information
> resource URI and it's that URI that should show up in the address bar and be
> used in link targets. Ignoring the linked data aspect for a moment if you
> publish your data in various serialisations like:
>
> - /foo.html
> - /foo.xhtml-mp (mobile profile xhtml for feature (non-smart) phones)
> - /foo.json
> - /foo.xml
>
> you want to allow people to copy and paste the address bar into email /
> twitter etc and for someone clicking the resulting link to get back an
> appropriate representation (depending on their accept headers + a bit of
> messy device detection in the case of the html and xhtml-mp)
>
> So you need a generic IR URI that does the conneg / device detection and
> sends back the appropriate serialisation without a redirect. The generic IR
> URI (/foo) stays in the address bar and the full location (/foo.json etc) is
> only exposed in the content location header (not in the address bar)
>
> All links then target the generic IR resource (not the NIR and NOT the
> specific representation (.html etc))
>
> So link targets are to generic ir uri and the address bar always shows the
> generic ir uri. Which gives you two benefits:
> - you only expose one set of uris to crawlers (google etc)
> - the uri in the address bar becomes universally sharable with copy + paste
>
> It's reasonable / necessary to expect publishers to take a conneg / device
> detection hit for every request because you want your content shared and the
> ability to send back an appropriate representation and it's all nicely
> cachable (even in cdn mode) with varies
>
> It's not reasonable / necessary to expect publishers to take an uncachable
> 303 hit for every request
>
> When you start writing rdf you just need the ability to talk about
> something that can't be sent down the wires. So you add in the nir uri. If
> someone requests the nir then:
>
> nir > 303 > *generic* ir > conneg > ir representation (url only exposed as
> location header)
>
> lots of linked data seems to do the 303 and conneg as one step but they're
> not happening for the same reason. the job of the conneg is to return an
> appropriate representation from the ir; the job of the 303 is to say "i
> can't send you that but here's some information that will hopefully be
> useful". conneg is needed regardless of whether you're doing linked data and
> linked data only adds in the 303 when the nir is requested. i think the two
> steps tend to get conflated in linked data publishing patterns and we should
> attempt to separate them
>
> hth
> michael
>
>

-- 
*Bernard Vatant*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant



*Mondeca*
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>


Re: Press.net News Ontology

2011-09-08 Thread Bernard Vatant
Adding to Bob's list with which I fully agree

In http://data.press.net/ontology/stuff/ the namespace
http://www.w3.org/TR/owl-time/ used for the time ontology is not correct.
http://www.w3.org/TR/owl-time/Instant is 404. Bing.

The time ontology is indeed specified by http://www.w3.org/TR/owl-time
But the namespace is http://www.w3.org/2006/time#

... speak about good URI practice in W3C specs ;-)
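
With the correct namespace, a one-triple Turtle sketch (the subject URI is
made up) :

@prefix time: <http://www.w3.org/2006/time#> .

# time:Instant, in the namespace that actually dereferences
<http://example.org/instants/i1> a time:Instant .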

Bob is using cute prefixes pns, pna etc.
I'm using them as "recommended prefixes" at
http://labs.mondeca.com/dataset/lov/ where I just started adding them.
(not quite sure if they are in the right vocabulary space, though ...)

Best

Bernard

2011/9/8 Bob Ferris 

> Hi Jarred,
>
> at a first glance, here are my remarks:
>
> 1. pne:Event, pne:sub_event seem to be a bit duplicated. I guess,
> event:Event, event:sub_event are enough.
>
> 2. pne:title can be replaced by, e.g., dc:title.
>
> 3. pns:Person can be replaced by foaf:Person.
>
> 4. pns:Organization can be replaced by foaf:Organization.
>
> 5. pns:worksFor can be replaced by rel:employedBy [1].
>
> 6. pns:Location can be replaced by geo:SpatialThing.
>
> 7. Re. the tagging terms, I would recommend to have a look at the Tag
> Ontology [2] or similar (see, e.g., [3])
>
> 8. Re. biographical events I would recommend to have a look at the Bio
> Vocabulary [4], e.g., bio:birth/bio:death.
>
> 9. pns:label can be replaced by dc:title (or rdfs:label).
>
> 10. pns:comment can be replaced by dc:description (or rdfs:comment).
>
> 11. pns:describedBy can be replaced by wdrs:describedby [5].
>
> 12. Re. bibliographic terms I would recommend to have a look at the Bibo
> Ontology [6], e.g., bibo:Image (or foaf:Image), or the FRBR Vocabulary [7],
> e.g., frbr:Text.
>
> 13. pna:hasThumbnail can be replaced by foaf:thumbnail.
>
> ...
>
> Please help us to create 'shared understanding' by reutilising terms of
> existing Semantic Web ontologies.
>
> Cheers,
>
>
> Bob
>
>
> [1] http://purl.org/vocab/relationship/employedBy
> [2] http://www.holygoat.co.uk/projects/tags/
> [3] http://answers.semanticweb.com/questions/1566/ontologyvocabulary-and-design-patterns-for-tags-and-tagged-data
> [4] http://purl.org/vocab/bio/0.1/
> [5] http://www.w3.org/2007/05/powder-s#describedby
> [6] http://purl.org/ontology/bibo/
> [7] http://purl.org/vocab/frbr/core#
>
>
> On 9/8/2011 3:48 PM, Jarred McGinnis wrote:
>
>> Hello all,
>>
>> The Press Association has just published our first draft of a 'news'
>> ontology 
>> (_http://data.press.net/**ontology_<http://data.press.net/ontology_>).
>> For each of the ontologies
>> documented, we've included the motivation for the ontologies as well as
>> some of the design decisions behind it. Also, you can get the rdf or ttl
>> by adding the extension. For example,
>> http://data.press.net/ontology/asset.rdf gives
>>
>> you the ontology described at http://data.press.net/ontology/asset/ .
>>
>> Have a look at the ontology and tell us what you think. We think it is
>> pretty good but feel free to point out our mistakes. We will fix it. Ask
>> why we did it one way and not another. We will give you an answer.
>>
>> Paul Wilton of Ontoba has been working with us at the PA and has spelled
>> out a lot of the guiding principles of this work at
>> http://www.ontoba.com/blog.
>>
>> The reasons behind this work were talked about at SemTech 2011 San
>> Francisco:
>> http://semtech2011.semanticweb.com/sessionPop.cfm?confid=62&proposalid=4134
>>
>> Looking forward to hearing from you,
>>
>> *Jarred McGinnis, PhD*
>>
>
>
>


-- 
*Bernard Vatant*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant



*Mondeca*
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>


Re: Press.net News Ontology

2011-09-08 Thread Bernard Vatant
Hello Stéphane

Any idea when rNews will be available as an RDFS or OWL vocabulary?
So far we have at least a URI for it, http://dev.iptc.org/rnewsowl, but no
description :)

Bernard

2011/9/8 Stéphane Corlosquet 

> Hi Jarred,
>
> It seems to me that your work is similar or at least related to rNews [1].
> I'm curious to know if you've looked at rNews when building the
> News Ontology. Do they complement each other, or are we re-inventing the
> wheel?
>
> Steph.
>
> [1] http://dev.iptc.org/rNews
>
>
> On Thu, Sep 8, 2011 at 9:48 AM, Jarred McGinnis <
> jarred.mcgin...@pressassociation.com> wrote:
>
>>
>> Hello all,
>>
>>
>> The Press Association has just published our first draft of a 'news'
>> ontology (http://data.press.net/ontology). For each of the ontologies
>> documented, we've included the motivation for the ontologies as well as some
>> of the design decisions behind it. Also, you can get the rdf or ttl by
>> adding the extension. For example, http://data.press.net/ontology/asset.rdf
>> gives you the ontology described at
>> http://data.press.net/ontology/asset/ .
>>
>>
>> Have a look at the ontology and tell us what you think. We think it is
>> pretty good but feel free to point out our mistakes. We will fix it. Ask why
>> we did it one way and not another. We will give you an answer.
>>
>>
>> Paul Wilton of Ontoba has been working with us at the PA and has spelled
>> out a lot of the guiding principles of this work at
>> http://www.ontoba.com/blog.
>>
>>
>> The reasons behind this work were talked about at SemTech 2011 San
>> Francisco:
>> http://semtech2011.semanticweb.com/sessionPop.cfm?confid=62&proposalid=4134
>> 
>>
>>
>> Looking forward to hearing from you,
>>
>>
>> *Jarred McGinnis, PhD*
>>
>> *Research Manager, Semantic Technologies*
>>
>> *PRESS ASSOCIATION*
>>
>> *www.pressassociation.com*
>>
>> jarred.mcgin...@pressassociation.com
>>
>> T: +44 (0) 2079 637 198
>> Extension: (7198)
>> M: +44 (0) 7816 286 852 
>>
>>
>> Registered Address: The Press Association Limited, 292 Vauxhall
>> Bridge Road, London, SW1V 1AE. Registered in England No. 5946902
>>
>>
>> This email is from the Press Association. For more information, see
>> www.pressassociation.com. This email may contain confidential
>> information. Only the addressee is permitted to read, copy, distribute or
>> otherwise use this email or any attachments. If you have received it in
>> error, please contact the sender immediately. Any opinion expressed in this
>> email is personal to the sender and may not reflect the opinion of the Press
>> Association. Any email reply to this address may be subject to interception
>> or monitoring for operational reasons or for lawful business practices.
>>
>
>


-- 
*Bernard Vatant*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant



*Mondeca*
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>


Re: Question: Authoritative URIs for Geo locations? Multi-lingual labels?

2011-09-08 Thread Bernard Vatant
Hi all

2011/9/8 Sarven Capadisli 

> Here is a nice:
>
> <http://dbpedia.org/resource/Montreal> owl:sameAs
> <http://sws.geonames.org/6077244/> .
>

A nice abuse of owl:sameAs indeed :)
http://sws.geonames.org/6077244/ is Montréal *Post Office* since its feature
code is S.PO, called simply "Montréal" out of laziness by the data curator ...
and mistaken for the city by some dumb script not leveraging geonames
classification to sort out homonyms.


> http://sws.geonames.org/6077244/ doesn't provide more labels however.
>

Indeed. No one has taken the time so far to name Montréal Post Office in
other languages.
But http://sws.geonames.org/6077243/ which is indeed the *City* of Montréal,
does provide quite a bunch of them.

So data curation is needed :
- Specify the name(s) of http://sws.geonames.org/6077244/ on geonames side
to ease disambiguation (will do it)
- Correct the DBpedia dumb matching (I have no power on that one)
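
In other words, the corrected link should presumably read :

@prefix owl: <http://www.w3.org/2002/07/owl#> .

# The *City* of Montréal, not the post office
<http://dbpedia.org/resource/Montreal> owl:sameAs <http://sws.geonames.org/6077243/> .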

Cheers

Bernard


>
> -Sarven
>
> On Thu, 2011-09-08 at 15:49 +0100, Paul Wilton wrote:
> > Hi Scott
> > http://www.geonames.org is a good source of global Geospatial RDF
> > linked data - it is a very large global dataset
> >
> >
> > For the UK:  http://data.ordnancesurvey.co.uk  is a good option
> >
> >
> > freebase also has a large global geospatial dataset
> >
> >
> >
> > cheers
> > Paul
> >
> >
> >
> > On Thu, Sep 8, 2011 at 3:38 PM, M. Scott Marshall
> >  wrote:
> > It seems that dbpedia is a de facto source of URIs for
> > geographical
> > place names. I would expect to find a more specialized source.
> > I think
> > that I saw one mentioned here in the last few months. Are
> > there
> > alternatives that are possibly more fine-grained or designed
> > specifically for geo data? With multi-lingual labels? Perhaps
> > somebody
> > has kept track of the options on a website?
> >
> > -Scott
> >
> > --
> > M. Scott Marshall
> > http://staff.science.uva.nl/~marshall
> >
> > On Thu, Sep 8, 2011 at 3:07 PM, Sarven Capadisli
> >  wrote:
> > > On Thu, 2011-09-08 at 14:01 +0100, Sarven Capadisli wrote:
> > >> On Thu, 2011-09-08 at 14:07 +0200, Karl Dubost wrote:
> > >> > # Using RDFa (not implemented in browsers)
> > >> >
> > >> >
> > >> > <div xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
> > >> > id="places-rdfa">
> > >> > <span about="http://www.dbpedia.org/resource/Montreal"
> > >> > geo:lat_long="45.5,-73.67">Montréal, Canada</span>
> > >> > <span about="http://www.dbpedia.org/resource/Paris"
> > >> > geo:lat_long="48.856578,2.351828">Paris, France</span>
> > >> > </div>
> > >> >
> > >> > * Issue: Latitude and Longitude not separated
> > >> >   (have to parse them with regex in JS)
> > >> > * Issue: xmlns with 
> > >> >
> > >> >
> > >> > # Question
> > >> >
> > >> > On RDFa vocabulary, I would really like a solution with
> > geo:lat and geo:long, Ideas?
> > >>
> > >> Am I overlooking something obvious here? There is lat, long
> > properties
> >     >> in wgs84 vocab. So,
> > >>
> > >> <div about="http://dbpedia.org/resource/Montreal">
> > >> <span property="geo:lat"
> > >>   content="45.5"
> > >>   datatype="xsd:float"></span>
> > >> <span property="geo:long"
> > >>   content="-73.67"
> > >>   datatype="xsd:float"></span>
> > >> Montreal
> > >> </div>
> > >>
> > >> Tabbed for readability. You might need to get rid of
> > whitespace.
> > >>
> > >> -Sarven
> > >
> > > Better yet:
> > >
> > > <div about="http://dbpedia.org/resource/Montreal">
> > > ...
> > >
> > >
> > > -Sarven
> >
> >
> >
>
>
>
>
>


-- 
*Bernard Vatant*
Vocabularies & Data Engineering
Tel :  + 33 (0)9 71 48 84 59
Skype : bernard.vatant



*Mondeca*
3 cité Nollez 75018 Paris, France
www.mondeca.com
Follow us on Twitter : @mondecanews <http://twitter.com/#%21/mondecanews>


Re: Get your dataset on the next LOD cloud diagram

2011-07-13 Thread Bernard Vatant
Re. availability, just a reminder of SPARQL Endpoints Status service
http://labs.mondeca.com/sparqlEndpointsStatus/index.html
As of today 80% (192/240) endpoints registered at CKAN are up and running.
Monitor the grey dots (still alive?) for candidate passed-away datasets ...

Bernard

2011/7/13 Leigh Dodds :
> Hi,
>
> On 12 July 2011 18:45, Pablo Mendes  wrote:
>> Dear fellow Linked Open Data publishers and consumers,
>> We are in the process of regenerating the next LOD cloud diagram and
>> associated statistics [1].
>> ...
>
> This email prompted a discussion about how the data collection or
> diagram could be improved or updated. As CKAN is an open platform and
> anyone can add additional tags to datasets, why doesn't everyone who
> is interested in seeing a particular improvement or alternate view of
> the data just go ahead and do it? There's no need to require all this
> to be done by one team on a fixed schedule.
>
> Some light co-ordination between people doing similar analyses would
> be worthwhile, but it wouldn't be hard to, e.g. tag datasets based on
> whether their Linked Data or SPARQL endpoint is available regularly,
> whether they're currently maintained, or (my current bug bear) whether
> the data dumps they publish parse with more than one tool chain.
>
> It'd be nice to see many different aspects of the cloud being explored.
>
> Cheers,
>
> L.
>
> --
> Leigh Dodds
> Programme Manager, Talis Platform
> Mobile: 07850 928381
> http://kasabi.com
> http://talis.com
>
> Talis Systems Ltd
> 43 Temple Row
> Birmingham
> B2 5LS
>
>



-- 
Bernard Vatant
Senior Consultant
Vocabulary & Data Integration
Tel:       +33 (0) 971 488 459
Mail:     bernard.vat...@mondeca.com

Mondeca
3, cité Nollez 75018 Paris France
Web:    http://www.mondeca.com
Blog:    http://mondeca.wordpress.com




Re: Schema.org in RDF ...

2011-06-07 Thread Bernard Vatant
Kingsley, you lost me once again :(

From the URI you provide I follow my nose to
http://uriburner.com/describe/?url=http%3A%2F%2Fschema.org%2FPerson%23this

Which as it says provides a description of the resource identified by
http://schema.org/Person#this, including the following triple :

<http://schema.org/Person#this>  rdf:type  <http://www.w3.org/2000/01/rdf-schema#Class>

AFAIK, http://schema.org/Person#this is no more declared as an RDFS class
than http://schema.org/Person. Actually since <http://schema.org/Person> is
currently an information resource per its answer to http GET, I wonder what
http://schema.org/Person#this actually identifies, since there is no actual
#this anchor in the page.

Tweaking a new URI to make explicit the semantics of http://schema.org/Person is
OK, but this new URI has to be in a namespace you control etc.

Bernard



2011/6/7 Kingsley Idehen 

>
> Here is an example of an updated tweak [1] of what we did with Google's
> initial foray into this realm combined with recent developments at:
> schema.rdfs.org.
>
> Note, anyone can yank out this data, tweak, and then share (ideally via Web
> in pure Linked Data form). I'll be sending an archive to Michael and Co.
> post hitting send button re. this mail.
>
> Links:
>
> 1.
> http://uriburner.com/describe/?url=http%3A%2F%2Fschema.rdfs.org%2Fall&p=2&lp=4&first=&op=0&gp=2
>
>


Re: Schema.org in RDF ...

2011-06-07 Thread Bernard Vatant
Hi Michael

I just repeated what some people-who-know-better around assumed  ...
For myself I'm sure of nothing, in particular regarding the future :)
And that's exactly why it seems to me that assertions published today should
not preempt (possible) semantics of tomorrow, but rely on semantics as they
stand : http://schema.org/Person is an information resource, not a
rdfs:Class.

In the solution I propose, whenever the event you expect happens, just add
owl:equivalentClass and owl:equivalentProperty to your descriptions.
If it does not happen as you wish, nothing is broken. If people at
schema.org change their mind and throw away everything, you get rid of the
dcterms:source and your descriptions stay alive and backward compatible for
people in the RDF world. Et voilà.

Bernard

2011/6/7 Michael Hausenblas 

> Something I don't understand. If I read well all savvy discussions so far,
>> publishers behind http://schema.org URIs are unlikely to ever provide any
>> RDF description,
>>
>
> What makes you so sure about that not one day in the (near?) future the
> Schema.org URIs will serve RDF or JSON, FWIW, additionally to HTML? ;)
>
>
> Cheers,
>Michael
> --
> Dr. Michael Hausenblas, Research Fellow
> LiDRC - Linked Data Research Centre
> DERI - Digital Enterprise Research Institute
> NUIG - National University of Ireland, Galway
> Ireland, Europe
> Tel. +353 91 495730
> http://linkeddata.deri.ie/
> http://sw-app.org/about.html
>
> On 7 Jun 2011, at 08:44, Bernard Vatant wrote:
>
>  Hi all
>>
>> Something I don't understand. If I read well all savvy discussions so far,
>> publishers behind http://schema.org URIs are unlikely to ever provide any
>> RDF description, so why are those URIs declared as identifiers of RDFS
>> classes in the http://schema.rdfs.org/all.rdf. For all I can see,
>> http://schema.org/Person is the URI of an information resource, not of a
>> class.
>> So I would rather have expected mirroring of the schema.org URIs by
>> schema.rdfs.org URIs, the latter fully dereferenceable proper RDFS classes
>> making explicit the semantics of the former, while keeping the reference to the
>> source in some dcterms:source element.
>>
>> Example, instead of ...
>>
>> http://schema.org/Person";>
>> http://www.w3.org/2000/01/rdf-schema#Class"/>
>> Person
>> A person (alive, dead, undead, or
>> fictional).
>> http://schema.org/Thing"/>
>> http://schema.org/Person"/>
>> 
>>
>> where I see a clear abuse of rdfs:isDefinedBy, since if you dereference
>> the said URI, you don't find any explicit RDF definition ...
>>
>> I would rather have the following
>>
>> http://schema.rdfs.org/Person";>
>> http://www.w3.org/2000/01/rdf-schema#Class"/>
>> Person
>> A person (alive, dead, undead, or
>> fictional).
>> http://schema.rdfs.org/Thing"/>
>> http://schema.org/Person"/>
>> 
>>
>> To the latter declaration, one could safely add statements like
>>
>> schema.rdfs:Person rdfs:subClassOf  foaf:Person
>>
>> etc
>>
>> Or do I miss the point?
>>
>> Bernard
>>
>> 2011/6/3 Michael Hausenblas 
>>
>> http://schema.rdfs.org
>>
>> ... is now available - we're sorry for the delay ;)
>>
>> Cheers,
>>   Michael
>> --
>> Dr. Michael Hausenblas, Research Fellow
>> LiDRC - Linked Data Research Centre
>> DERI - Digital Enterprise Research Institute
>> NUIG - National University of Ireland, Galway
>> Ireland, Europe
>> Tel. +353 91 495730
>> http://linkeddata.deri.ie/
>> http://sw-app.org/about.html
>>
>>
>>
>>
>>
>> --
>> Bernard Vatant
>> Senior Consultant
>> Vocabulary & Data Integration
>> Tel:   +33 (0) 971 488 459
>> Mail: bernard.vat...@mondeca.com
>> 
>> Mondeca
>> 3, cité Nollez 75018 Paris France
>> Web:http://www.mondeca.com
>> Blog:http://mondeca.wordpress.com
>> 
>>
>
>


-- 
Bernard Vatant
Senior Consultant
Vocabulary & Data Integration
Tel:   +33 (0) 971 488 459
Mail: bernard.vat...@mondeca.com

Mondeca
3, cité Nollez 75018 Paris France
Web:http://www.mondeca.com
Blog:http://mondeca.wordpress.com



Re: Schema.org in RDF ...

2011-06-07 Thread Bernard Vatant
Hi all

Something I don't understand. If I read well all savvy discussions so far,
publishers behind http://schema.org URIs are unlikely to ever provide any
RDF description, so why are those URIs declared as identifiers of RDFS
classes in the http://schema.rdfs.org/all.rdf. For all I can see,
http://schema.org/Person is the URI of an information resource, not of a
class.
So I would rather have expected mirroring of the schema.org URIs by
schema.rdfs.org URIs, the latter fully dereferenceable proper RDFS classes
making explicit the semantics of the former, while keeping the reference to the
source in some dcterms:source element.

Example, instead of ...

http://schema.org/Person";>
http://www.w3.org/2000/01/rdf-schema#Class"/>
Person
A person (alive, dead, undead, or fictional).
http://schema.org/Thing"/>
http://schema.org/Person"/>


where I see a clear abuse of rdfs:isDefinedBy, since if you dereference the
said URI, you don't find any explicit RDF definition ...

I would rather have the following

http://schema.rdfs.org/Person";>
http://www.w3.org/2000/01/rdf-schema#Class"/>
Person
A person (alive, dead, undead, or fictional).
http://schema.rdfs.org/Thing"/>
http://schema.org/Person"/>


To the latter declaration, one could safely add statements like

schema.rdfs:Person rdfs:subClassOf  foaf:Person

etc
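
Or, spelled out as a Turtle sketch, where the schemar: prefix is just an
illustrative shorthand for the schema.rdfs.org namespace :

@prefix rdfs:    <http://www.w3.org/2000/01/rdf-schema#> .
@prefix foaf:    <http://xmlns.com/foaf/0.1/> .
@prefix schemar: <http://schema.rdfs.org/> .

schemar:Person rdfs:subClassOf foaf:Person .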

Or do I miss the point?

Bernard

2011/6/3 Michael Hausenblas 

>
> http://schema.rdfs.org
>
> ... is now available - we're sorry for the delay ;)
>
> Cheers,
>Michael
> --
> Dr. Michael Hausenblas, Research Fellow
> LiDRC - Linked Data Research Centre
> DERI - Digital Enterprise Research Institute
> NUIG - National University of Ireland, Galway
> Ireland, Europe
> Tel. +353 91 495730
> http://linkeddata.deri.ie/
> http://sw-app.org/about.html
>
>
>


-- 
Bernard Vatant
Senior Consultant
Vocabulary & Data Integration
Tel:   +33 (0) 971 488 459
Mail: bernard.vat...@mondeca.com

Mondeca
3, cité Nollez 75018 Paris France
Web:http://www.mondeca.com
Blog:http://mondeca.wordpress.com



Re: How many instances of foaf:Person are there in the LOD Cloud?

2011-04-13 Thread Bernard Vatant
Thanks everybody !

Could not imagine that this simple question would trigger so much activity.
Actually my naive "quest" was to figure how many people had actively
published, and possibly still maintain a FOAF profile for themselves, vs the
number of profiles stored and maintained in a proprietary social system, vs
a profile computed out of their activity on the web for any purpose.
Browsing all the answers makes me wonder. I was not aware of so many sources
of FOAF information (to tell the truth a great majority of domains quoted by
Michael in his 25 top list were totally unknown to me until today). The
number I had in mind when asking was rather about FOAF profiles actively
maintained by some "primary topic" aware of what FOAF is and deliberately
using it to be present in the social semantic web. I suppose this number
really represents a microscopic part of the millions announced, but I do not
know more about it at the end of this day. Except that most FOAF information
is certainly produced without the people who are the subjects of the triples even being aware
of it, or even knowing that FOAF exists at all (supposing they are living,
real people).
Actually it's quite easy to produce FOAF out of any social application data
with an open API. So the millions I read about are simply an image of the
millions of users of social software using open API, plus the growing number
of people for which "public" data is available such as people listed in
Wikipedia and Freebase.

So tonight I would put my question differently : among those millions of FOAF
profiles, how do I discover those whose primary source is their primary
topic, expressing themselves natively in FOAF, vs the ocean of second-hand
remashed / remixed information, captured with or without clear approval
of their subjects, and eventually released in FOAF syntax in the Cloud ...

Bernard


2011/4/13 Kingsley Idehen 

> On 4/13/11 6:54 AM, Michael Brunnbauer wrote:
>
>> I could not find working bridges for last.fm and flikr but
>> semantictweet.com
>> is really working again - interesting:-)
>>
> We've always had Sponger Cartridges (bridges) for last.fm and flickr. In
> addition there are cartridges for Crunchbase, Amazon, and many others. Of
> course, the context of Bernard's quest ultimately determines the relevance
> of these data sources :-)
>
>
> --
>
> Regards,
>
> Kingsley Idehen
> President&  CEO
> OpenLink Software
> Web: http://www.openlinksw.com
> Weblog: http://www.openlinksw.com/blog/~kidehen
> Twitter/Identi.ca: kidehen
>
>
>
>
>
>
>


-- 
Bernard Vatant
Senior Consultant
Vocabulary & Data Integration
Tel:   +33 (0) 971 488 459
Mail: bernard.vat...@mondeca.com

Mondeca
3, cité Nollez 75018 Paris France
Web:http://www.mondeca.com
Blog:http://mondeca.wordpress.com



Re: How many instances of foaf:Person are there in the LOD Cloud?

2011-04-13 Thread Bernard Vatant
Thanks Michael

That's exactly the kind of figures I was looking for.
And I was not aware of www.foaf-search.net ... shame on me!
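
In SPARQL 1.1 terms (endpoint permitting), the distinction Michael makes
below between type assertions and distinct persons boils down to counting
distinct subjects ; a rough sketch :

PREFIX foaf: <http://xmlns.com/foaf/0.1/>

# Distinct subjects typed foaf:Person ; owl:sameAs aliases still count
# separately, so this is only an upper bound on "real" persons.
SELECT (COUNT(DISTINCT ?p) AS ?persons)
WHERE { ?p a foaf:Person }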

Bernard

2011/4/13 Michael Brunnbauer 

>
> re
>
> BTW: The note on http://wiki.foaf-project.org/w/DataSources that the
> Billion
> Triples Challenge 2009 contains "40 million FOAFs" is a bit misleading. If
> you
> follow the link you can see that there are 39 mio "X a foaf:Person"
> assertions
> in the dataset which boils down to much less distinct foaf:Persons. We have
> ca. 40 mio "X a foaf:Person" assertions and ca. 3.5 mio distinct
> foaf:Persons.
>
> Regards,
>
> Michael Brunnbauer
>
> On Wed, Apr 13, 2011 at 10:54:12AM +0200, Michael Brunnbauer wrote:
> >
> > re
> >
> > On Wed, Apr 13, 2011 at 10:15:46AM +0200, Bernard Vatant wrote:
> > > Just trying to figure what is the size of personal information
> available as
> > > LOD vs billions of person profiles stored by Google, Amazon, Facebook,
> > > LinkedIn, unameit ... in proprietary formats.
> >
> > At www.foaf-search.net, we have ca. 3.5 mio instances of foaf:Person.
> >
> > The biggest chunk out there is probably livejournal.com with more than
> 25mio
> > users which we cannot index all right now (we have 221090 of them).
> >
> > Another big one is hi5.com but the FOAF is quite broken so we don't
> crawl it.
> >
> > See also:
> >
> > http://www.w3.org/wiki/FoafSites
> > http://wiki.foaf-project.org/w/DataSources
> >
> > Regards,
> >
> > Michael Brunnbauer
> >
> > --
> > ++  Michael Brunnbauer
> > ++  netEstate GmbH
> > ++  Geisenhausener Straße 11a
> > ++  81379 München
> > ++  Tel +49 89 32 19 77 80
> > ++  Fax +49 89 32 19 77 89
> > ++  E-Mail bru...@netestate.de
> > ++  http://www.netestate.de/
> > ++
> > ++  Sitz: München, HRB Nr.142452 (Handelsregister B München)
> > ++  USt-IdNr. DE221033342
> > ++  Geschäftsführer: Michael Brunnbauer, Franz Brunnbauer
> > ++  Prokurist: Dipl. Kfm. (Univ.) Markus Hendel
>
> --
> ++  Michael Brunnbauer
> ++  netEstate GmbH
> ++  Geisenhausener Straße 11a
> ++  81379 München
> ++  Tel +49 89 32 19 77 80
> ++  Fax +49 89 32 19 77 89
> ++  E-Mail bru...@netestate.de
> ++  http://www.netestate.de/
> ++
> ++  Sitz: München, HRB Nr.142452 (Handelsregister B München)
> ++  USt-IdNr. DE221033342
> ++  Geschäftsführer: Michael Brunnbauer, Franz Brunnbauer
> ++  Prokurist: Dipl. Kfm. (Univ.) Markus Hendel
>



-- 
Bernard Vatant
Senior Consultant
Vocabulary & Data Integration
Tel:   +33 (0) 971 488 459
Mail: bernard.vat...@mondeca.com

Mondeca
3, cité Nollez 75018 Paris France
Web:http://www.mondeca.com
Blog:http://mondeca.wordpress.com



How many instances of foaf:Person are there in the LOD Cloud?

2011-04-13 Thread Bernard Vatant
Hello all

Just trying to figure what is the size of personal information available as
LOD vs billions of person profiles stored by Google, Amazon, Facebook,
LinkedIn, unameit ... in proprietary formats.

Any hint of the proportion of "living" people vs historical characters is
also welcome.

Any idea?

Bernard


-- 
Bernard Vatant
Senior Consultant
Vocabulary & Data Integration
Tel:   +33 (0) 971 488 459
Mail: bernard.vat...@mondeca.com

Mondeca
3, cité Nollez 75018 Paris France
Web:http://www.mondeca.com
Blog:http://mondeca.wordpress.com



Re: LOV - Linked Open Vocabularies

2011-04-01 Thread Bernard Vatant
Hi Bob

I certainly know this URI like anybody on this list, and I pretty well know
that there is a bunch of classes and properties used to describe DBpedia
resources, in the http://dbpedia.org/ontology/ namespace, such as
http://dbpedia.org/ontology/birthPlace

At this specific URI I get the following description in n3

@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix dbpedia-owl: <http://dbpedia.org/ontology/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
dbpedia-owl:birthPlace rdf:type owl:ObjectProperty .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
dbpedia-owl:birthPlace rdfs:domain dbpedia-owl:Person ;
    rdfs:range dbpedia-owl:Place ;
    rdfs:label "birth place"@en ;
    rdfs:comment "where the person was born"@en .

Which is cool. I have the description of a property, and references to domain and
range classes, so I can follow my nose from there

But at http://dbpedia.org/ontology/ itself I get

@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
<http://dbpedia.org/ontology/> rdf:type owl:Ontology ;
    owl:versionInfo "Version 3.5"@en .

Missing a triple such as

<http://dbpedia.org/ontology/> rdfs:isDefinedBy foo

I've got at least a version number :)

So where is the RDF file containing the whole ontology?

Bernard


2011/4/1 Bob Ferris 

> Hi Bernard,
>
>
> On 4/1/2011 3:59 PM, Bernard Vatant wrote:
>
>> Maybe I missed something, but can someone tell me what the URI of the
>> ontology of dbpedia is?
>>
>
> please have a look at http://wiki.dbpedia.org/Ontology
>
> Cheers,
>
>
> Bob
>
>


-- 
Bernard Vatant
Senior Consultant
Vocabulary & Data Engineering
Tel:   +33 (0) 971 488 459
Mail: bernard.vat...@mondeca.com

Mondeca
3, cité Nollez 75018 Paris France
Web:http://www.mondeca.com
Blog:http://mondeca.wordpress.com



Re: LOV - Linked Open Vocabularies

2011-04-01 Thread Bernard Vatant
Maybe I missed something, but can someone tell me what the URI of the
ontology of dbpedia is?

Bernard

2011/3/30 Benedikt Kaempgen 

> Hello,
>
> Maybe I missed something, but can someone tell me why the ontology of
> dbpedia is not listed on LOV [1]?
>
> [1] http://labs.mondeca.com/dataset/lov/index.html
>
> Regards,
>
> Benedikt
>
> --
> AIFB, Karlsruhe Institute of Technology (KIT)
> Phone: +49 721 608-47946
> Email: benedikt.kaemp...@kit.edu
> Web: http://www.aifb.kit.edu/web/Hauptseite/en
>
>
>
> -Original Message-
> From: semantic-web-requ...@w3.org [mailto:semantic-web-requ...@w3.org] On
> Behalf Of Kingsley Idehen
> Sent: Tuesday, March 29, 2011 1:09 AM
> To: Pierre-Yves Vandenbussche
> Cc: public-lod@w3.org; SW-forum; semantic...@yahoogroups.com;
> info...@listes.irisa.fr
> Subject: Re: LOV - Linked Open Vocabularies
>
> On 3/28/11 5:37 PM, Pierre-Yves Vandenbussche wrote:
>
>Kingsley,
>
>I've just added rdfs:isDefinedBy property to vocabularies which
> accept content negotiation.
>Example here:
> http://labs.mondeca.com/dataset/lov/details/vocabulary_voaf.html
>
>
> Okay, a few more things though. For instance, what type of Entity is
> Identified by this URI: http://labs.mondeca.com/vocab/voaf#VocabularySpace?
>
>
> Effect of entity ambiguity shows here:
>
> http://uriburner.com/describe/?url=http%3A%2F%2Flabs.mondeca.com%2Fvocab%2Fvoaf%23VocabularySpace .
>
> Also, we have an Entity ID (URI based Named Ref):
> http://labs.mondeca.com/dataset/lov/lov#CITY, and we (hopefully most
> Linked
> Data folk) kinda know said Entities representation (in the form of a linked
> data graph pictorial) is accessible from the Address (URL):
> http://labs.mondeca.com/dataset/lov/lov, but for absolute clarity (human
> and
> machines) you should add a wdrs:describedby relation of the form:
>
> <http://labs.mondeca.com/dataset/lov/lov#CITY>  wdrs:describedby
> <http://labs.mondeca.com/dataset/lov/lov>  .
>
> Good job!
>
> Kingsley
>
>
>
>
>regards,
>
>
>
>Pierre-Yves Vandenbussche
>Research & Development
>Mondeca
>3, cité Nollez 75018 Paris France
>Tel. +33 (0)1 44 92 35 07 - fax +33 (0)1 44 92 02 59
>Mail: pierre-yves.vandenbuss...@mondeca.com
> Website: www.mondeca.com <http://www.mondeca.com/>
>Blog: Leçons de choses <http://mondeca.wordpress.com/>
>
>
>
>On Mon, Mar 28, 2011 at 4:57 PM, Kingsley Idehen
>  wrote:
>
>
>On 3/28/11 10:39 AM, Pierre-Yves Vandenbussche wrote:
>
>Hello all,
>
>We are pleased to announce the Linked Open
> Vocabularies initiative [1].
>
>The web of data is based on datasets publication.
> When building a dataset some questions arise: which existing vocabularies
> will be the best-suited for my needs? To facilitate this task we propose
> the
> Linked Open Vocabularies (LOV)  dataset [1]. It identifies the defined
> vocabularies for data description but also the relationships between these
> vocabularies.
>The work within the LOV is not exhaustive but, by
> suggesting us some vocabulary modifications and/or creations, we could
> improve this dataset.
>You could access this dataset via an RDF/XML file
> [2] and via a SPARQL Endpoint [3].
>
>
>
>[1] http://labs.mondeca.com/dataset/lov/index.html
>[2] http://labs.mondeca.com/dataset/lov/lov.rdf
>[3] http://labs.mondeca.com/endpoint/lov
>
>
>
>Pierre-Yves Vandenbussche, Bernard Vatant, Lise
> Rozat
>Research & Development
>Mondeca
>3, cité Nollez 75018 Paris France
> Website: www.mondeca.com <http://www.mondeca.com/>
>Lab: Mondeca Labs <http://labs.mondeca.com/>
>
>
>
>
>Nice!
>
>See:
>
> http://linkeddata.uriburner.com/describe/?url=http%3A%2F%2Flabs.mondeca.com%2Fdataset%2Flov%2Flov%23LOV
>
>Would be nice if you also added isDefinedBy relations so
> that one can FYN between TBox and ABox with ease :-)
>
>
>
>--
>
>Regards,

Re: [ANN] Linked Open Colors

2011-04-01 Thread Bernard Vatant
Ola Sergio

Very cool ... and could be actually useful, so maybe less a joke than it
seems

Bernard


2011/4/1 Sergio Fernández 

> Hi,
>
> for giving some color to the semantic web folks, we are happy to
> announce the release of the "Linked Open Colors" dataset [1]. The Linked
> Open Colors project offers tons of facts about colors, all readily
> available as Linked Open Data, linking with other relevant datasets
> such as dbpedia.
>
> The dataset and its publication mechanisms have been pedantically
> checked, and we expect no errors in the triples; if you do find some,
> please let us know.
>
> This project is highly inspired by Linked Open Numbers project [2].
> Happy April Fools' Day!
>
> Cheers,
>
> [1] http://purl.org/colors
> [2] http://km.aifb.kit.edu/projects/numbers/
>
> --
> Carlos Tejo, Iván Mínguez and Sergio Fernández
>
>


-- 
Bernard Vatant
Senior Consultant
Vocabulary & Data Engineering
Tel:   +33 (0) 971 488 459
Mail: bernard.vat...@mondeca.com

Mondeca
3, cité Nollez 75018 Paris France
Web:http://www.mondeca.com
Blog:http://mondeca.wordpress.com



SWEET Ontologies

2011-03-09 Thread Bernard Vatant
Hello all

I am wondering about the use, reuse, reusability of the SWEET ontologies in
the LOD Cloud
http://sweet.jpl.nasa.gov/

Any dataset using one of them?
Any vocabulary relying on or extending one of them?

Pointers welcome

Bernard

-- 
Bernard Vatant
Senior Consultant
Vocabulary & Data Engineering
Tel:   +33 (0) 971 488 459
Mail: bernard.vat...@mondeca.com

Mondeca
3, cité Nollez 75018 Paris France
Web:http://www.mondeca.com
Blog:http://mondeca.wordpress.com



Re: Proposal to assess the quality of Linked Data sources

2011-02-25 Thread Bernard Vatant
Hi Annika

- "A vocabulary is said to be established, if it is one of the 100 most
>> popular vocabularies stated on pre x.cc" - uhm, as the results from
>> Richard's evaluation have, this is quite arguable
>>
> It's a practical way to determine it (which I can use for the
> implementation of the formalism). Another way would be to compare many
> documents from many data sources and to find out which vocabularies are
> most popular.


I'm particularly interested in this aspect of vocabulary selection.
Regarding popularity, I fully go along with Bob regarding prefix.cc in which
all sorts of biases can be introduced. I think the popularity is better
measured by the use of vocabularies in CKAN datasets, as indicated by
"format-*" tags. See http://ckan.net/tag/?page=F and for example
http://ckan.net/tag/format-bibo or http://ckan.net/tag/format-foaf.

Another approach I'm currently working on is the one you can find at
http://labs.mondeca.com/dataset/lov. The description of interlinked
vocabularies (using the VOAF vocabulary) provides an indication of
popularity at the level of the vocabulary itself. From this dataset (still
far from exhaustive, of course) you can see which vocabularies are reused,
extended, or used for annotation by other ones. I think the density of links
between a vocabulary and other ones gives a good indicator of its
"establishment", in combination with the number of datasets actually using it.
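
For concreteness, here is a minimal Turtle sketch of the kind of
link-density reading I have in mind (the entries are illustrative, not
verbatim extracts from the dataset):

@prefix voaf: <http://www.mondeca.com/foaf/voaf#> .

<http://purl.org/ontology/mo/>        voaf:reliesOn <http://xmlns.com/foaf/0.1/> .
<http://purl.org/vocab/bio/0.1/>      voaf:reliesOn <http://xmlns.com/foaf/0.1/> .
<http://purl.org/vocab/relationship/> voaf:reliesOn <http://xmlns.com/foaf/0.1/> .

Several incoming voaf:reliesOn links already say more about the
"establishment" of FOAF than its rank on a prefix lookup service.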

Best

Bernard


-- 
Bernard Vatant
Senior Consultant
Vocabulary & Data Engineering
Tel:   +33 (0) 971 488 459
Mail: bernard.vat...@mondeca.com

Mondeca
3, cité Nollez 75018 Paris France
Web:http://www.mondeca.com
Blog:http://mondeca.wordpress.com



Re: Introducing Vocabularies of a Friend (VOAF)

2011-01-25 Thread Bernard Vatant
Hello all

Points taken. I have somewhat changed the headings and introduction at
http://www.mondeca.com/foaf/voaf-doc.html
to make more explicit what it is about (hopefully).

I did not change (yet) either VOAF acronym or namespace. To tell the truth,
my first idea was LOV for Linked Open Vocabularies, but I guess some would
have found that pun confusing too.
Sorry to keep on pushing puns and portmanteau(s?), from the "Semantopic Map"
(back in 2001, maybe some folks here remember it, it's offline now) to
"hubjects" ... Maybe it's not a good idea after all.

So if I sum up the feedback so far:
- there is no question the dataset is worth it
- the introduction is a bit confusing (changed a couple of things, let's see
if it's better or worse)
- the name is totally confusing for some not-so-dumb people, so go figure
what happens to not-so-smart ones :)

I'm open to all suggestions to change to something better. Is LOV a good
idea?
Other proposals :

LV or LVoc : Linked Vocabularies
WOV : Web of Vocabularies
...

Bernard



2011/1/25 Kingsley Idehen 

>  On 1/25/11 11:59 AM, William Waites wrote:
>
> * [2011-01-25 11:21:45 -0500] Kingsley Idehen  
>  écrit:
>
> ] Hmm. Is it the Name or Description that's important?
> ]
> ] But what about discerning meaning from the VOAF graph?
>
> Humans looking at documents and trying to understand a system
> do so in a very different way from machines. While what you
> suggest might be strictly true according to the way RDF and
> formal logic work, it isn't the way humans work (otherwise
> the strong AI project of the past half-century might have
> succeeded by now). So we should try arrange things in a way
> that is both consistent with what the machines want and as
> easy as possible for humans to understand. That Hugh, an
> expert in the domain, had trouble figuring it out due to
> poetic references to well known concepts suggests that there
> is some room for improvement.
>
> Cheers,
> -w
>
>
> Yes, but does a human say: you lost me at VOAF due to FOAF? I think they do
> read the docs, at least the opening paragraph :-)
>
> --
>
> Regards,
>
> Kingsley Idehen   
> President & CEO
> OpenLink Software
> Web: http://www.openlinksw.com
> Weblog: http://www.openlinksw.com/blog/~kidehen 
> <http://www.openlinksw.com/blog/%7Ekidehen>
> Twitter/Identi.ca: kidehen
>
>
-- 
Bernard Vatant
Senior Consultant
Vocabulary & Data Engineering
Tel:   +33 (0) 971 488 459
Mail: bernard.vat...@mondeca.com

Mondeca
3, cité Nollez 75018 Paris France
Web:http://www.mondeca.com
Blog:http://mondeca.wordpress.com



SWEET (but not friendly) ontologies

2011-01-21 Thread Bernard Vatant
Hello all

Gathering vocabularies for the growing VOAF dataset [1] leads to the
discovery of a lot of good linking and reusing practice (good news), but
also makes obvious, in comparison, some data islands, apparently isolated
from everything else in the Cloud.

The SWEET ontologies developed by NASA [2] [3] seem to be in that case. We
have there a set of about 200 interlinked ontologies for Earth and
Environment sciences, neither relying on any external namespace, nor bearing
any kind of metadata (creator, date, publisher, rights ...) of the kind we
are used to in "friendly" vocabularies. SWEET ontologies don't seem to be
used in any VOAF vocabulary or CKAN package I've met so far. And the
homepage does not even have a contact email to cc this message :(

I've heard that NASA uses those ontologies internally, but could not find
any pointer to that kind of use.

This is really a sad observation. Given the size of the work and the
reliable organization backing this effort, those ontologies should be linked
to and from many other vocabularies!

So, if anyone has used one of the SWEET vocabularies in a dataset or
extended one in some vocabulary, please send pointers!
And if someone behind SWEET ontologies is lurking on this list, I would be
happy to make contact :)

Bernard

[1] http://www.mondeca.com/foaf/voaf-vocabs.rdf
[2] http://sweet.jpl.nasa.gov/
[3] http://sweet.jpl.nasa.gov/2.1/

-- 
Bernard Vatant
Senior Consultant
Vocabulary & Data Engineering
Tel:   +33 (0) 971 488 459
Mail: bernard.vat...@mondeca.com

Mondeca
3, cité Nollez 75018 Paris France
Web:http://www.mondeca.com
Blog:http://mondeca.wordpress.com



Fwd: Vocabulary of a Friend (VOAF)

2011-01-19 Thread Bernard Vatant
t)
> at both the vocabulary and term level.


I don't do it for now, but vaguely intend to, as said above, using the
lexvo.org terms glue. Strangely enough, we lack a predicate to assert that a
class or property belongs to a vocabulary: the equivalent of skos:inScheme.
What would you use? dcterms:isPartOf? In any case it's a more ambitious task.
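
Just to fix ideas, a minimal sketch in Turtle of two candidate patterns (the
choice of predicate being exactly the open question here):

@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix rdfs:    <http://www.w3.org/2000/01/rdf-schema#> .

<http://xmlns.com/foaf/0.1/knows> dcterms:isPartOf <http://xmlns.com/foaf/0.1/> .
# or, reusing a W3C builtin:
<http://xmlns.com/foaf/0.1/knows> rdfs:isDefinedBy <http://xmlns.com/foaf/0.1/> .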


> I certainly find it easier to say that FOAF (as a vocabulary / project)
> "relies on" the Dublin Core
> community's vocabulary to provide detailed descriptions of documents
> and bibliographic content, or that it "relies on" SIOC when there's a
> need to describe eg. forums or bulletin boards.


Hmm. Interesting. Introducing facets or contexts in which "relies on"
applies. I have to mull this over.


> Expressing 'relies'
> links between terms is harder. I like to add mappings; but I don't
> like to add dependencies. So my guess is VOAF will be easier to use as
> a kind of 'vocabulary buddylist' than at the term level, and for
> terms, we might turn directly to things like subClassOf /
> subPropertyOf...
>

Indeed. But the terminological glue is something to think about.


> ps. feel  free to migrate this to public-lod
>

Done :)

Bernard


-- 
Bernard Vatant
Senior Consultant
Vocabulary & Data Engineering
Tel:   +33 (0) 971 488 459
Mail: bernard.vat...@mondeca.com

Mondeca
3, cité Nollez 75018 Paris France
Web:http://www.mondeca.com
Blog:http://mondeca.wordpress.com



Re: Introducing Vocabularies of a Friend (VOAF)

2011-01-19 Thread Bernard Vatant
Hi Christopher

 I can't help but feel that calling it VOAF is just going to muddy the
> waters. "Friendly vocabularies for the linked data Web"
> doesn't help clarify either. It's cute, but I strongly suggest you at the
> very least make this 'tag line' far more clear.
>

I agree the current documentation is too sketchy and potentially misleading
as is. I have put effort mainly into the dataset itself so far, but you're
right, it has to be better documented.

Regarding the name, well, the pun is here to stay I'm afraid. I've had
positive feedback from Dan Brickley about it, so I already feel it's too
late to change now.


> Frankly calling something 'voaf' when people will hear it mixed in with
> 'foaf' is just making the world more confusing.
>

Actually I've not thought much (not at all) about how people would pronounce
or hear it. I principally communicate with vocabularies (and people using
them) through written stuff, and very rarely speak about them. I barely know
how to pronounce OWL, always feel like a fool when I have to, and will
eventually spell it O.W.L. - as every other French native would do. If I had
to speak about VOAF, I think I would also spell it V.O.A.F.


> I had a lot of confusion until I found out the "SHOCK" vocab people were
> talking about was spelled SIOC.
>

Interesting, I was confused exactly the other way round. I've read a lot
(and written a bit) about SIOC since it's been around, but realized only two
days ago how it was pronounced, when I actually heard someone "speaking"
about it the "right" way ... and at first thought it was something
else.


> One other minor suggestion;
> voaf:Vocabulary → rdfs:subClassOf → void:Dataset
>
> might be a mistake because void:Dataset is defined as "A set of RDF
> triples that are published, maintained or aggregated by a single provider."
>

Not a bug, but a feature. It's exactly what a voaf:Vocabulary is.

and it may be that you would want to define non-RDF vocabs using this.
>

You might want to do that, but I don't, and I'm the vocabulary creator
(right?), so I can insist on the fact that this is really meant to describe
*RDF* vocabularies, and cast this intention in the stone of formal
semantics.
If you want to describe other kinds of vocabularies the same way, feel free
to use or create something else. Or extend voaf:Vocabulary to a more generic
class. It's an open world, let a thousand flowers blossom :)


> I see no value in making this restriction.
>

The value I see is to keep this vocabulary use focused on what it was meant
for.

Best

Bernard

-- 
Bernard Vatant
Senior Consultant
Vocabulary & Data Engineering
Tel:   +33 (0) 971 488 459
Mail: bernard.vat...@mondeca.com

Mondeca
3, cité Nollez 75018 Paris France
Web:http://www.mondeca.com
Blog:http://mondeca.wordpress.com



Re: Introducing Vocabularies of a Friend (VOAF)

2011-01-19 Thread Bernard Vatant
Hello Stephane

The question makes sense of course, although the initial focus of VOAF was
on ontology-like vocabularies.
Instances of both owl:Ontology and skos:ConceptScheme could be instances of
voaf:Vocabulary.
Actually we could define:
- voaf:OWLVocabulary as a subclass of both owl:Ontology and voaf:Vocabulary
- voaf:SKOSVocabulary as a subclass of both skos:ConceptScheme and
voaf:Vocabulary
then restrict the domain of voaf:classNumber and voaf:propertyNumber to the
former, and add properties such as voaf:conceptNumber to the latter.
I put that on the backburner, but it's clearly one of the many possible
extensions.
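
In Turtle, that extension would look something like this (a sketch of a
possible future version, not part of the published vocabulary):

@prefix voaf: <http://www.mondeca.com/foaf/voaf#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .

voaf:OWLVocabulary  rdfs:subClassOf voaf:Vocabulary , owl:Ontology .
voaf:SKOSVocabulary rdfs:subClassOf voaf:Vocabulary , skos:ConceptScheme .

voaf:classNumber    rdfs:domain voaf:OWLVocabulary .
voaf:propertyNumber rdfs:domain voaf:OWLVocabulary .
voaf:conceptNumber  rdfs:domain voaf:SKOSVocabulary .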

Bernard

2011/1/17 Stephane Fellah 

> Bernard,
>
> Thanks for your answer. Another question I was wondering: Can we extend the
> VOAF ontology to describe SKOS taxonomies ? Does this question make sense to
> you ?  In the case of SKOS, we have only the notion of concepts not classes
> and properties.
>


-- 
Bernard Vatant
Senior Consultant
Vocabulary & Data Engineering
Tel:   +33 (0) 971 488 459
Mail: bernard.vat...@mondeca.com

Mondeca
3, cité Nollez 75018 Paris France
Web:http://www.mondeca.com
Blog:http://mondeca.wordpress.com



Re: Introducing Vocabularies of a Friend (VOAF)

2011-01-15 Thread Bernard Vatant
Hi Stephane

2011/1/15 Stephane Fellah 

> Sounds like a very interesting initiative. Based on my understanding, I think
> it should be possible to write a tool that reads any OWL document and
> generates a VOAF document.


Indeed I've been thinking along those lines. The current dataset is
handcrafted, as a prototype should be, but I'm now thinking about ways to
generate the VOAF description automagically from the OWL or RDFS files. The
devil is in the details, though. Some information you can't really get by
conventional parsing of the graph, such as which namespaces are used, to
populate the voaf:reliesOn property. Those you can get by ad hoc syntactic
scripts, but vocabularies are published using a variety of syntaxes.


> May be Swoogle could be a good starting point, but not sure how the API can
> provide the list of ontology namespaces through the REST API.


I don't know either, but I'm sure someone will find a way :)


> The imports section would correspond to the imports statement. The tools
> would count the number of classes and properties in the ontology namespace.
> It would be interesting to aggregate all this information and see which
> vocabularies have the most influence using SNA algorithms.


You are welcome to play along those lines. I think there are a lot of
opportunities and things to discover. This is just the beginning of the
story.

Best

Bernard



-- 
Bernard Vatant
Senior Consultant
Vocabulary & Data Engineering
Tel:   +33 (0) 971 488 459
Mail: bernard.vat...@mondeca.com

Mondeca
3, cité Nollez 75018 Paris France
Web:http://www.mondeca.com
Blog:http://mondeca.wordpress.com



Re: Introducing Vocabularies of a Friend (VOAF)

2011-01-15 Thread Bernard Vatant
Hi Egon

2011/1/15 Egon Willighagen 

> Hi Bernard,
>
> Maybe it's just Saturday morning, but what exactly is the goal of your
> VOAF effort? What problems with existing ontologies does it address?
> Just curious, as it sounds interesting...
>

There are no "problems" with existing ontologies. VOAF just aims at easing
their discovery. I think Kingsley has shown by a few applications the
potential of such an interlinking. I've very often the question : what
vocabularies can I use for my data, and how can I discover them. VOAF is a
tool addressing this question.

It can help bring to light which vocabularies have a good practice of
reusing and relying on existing ones, and which reinvent everything in their
own namespace. Having quick access to creators and publishers (those data
are still largely incomplete in the current dataset) also links the
vocabularies to the communities creating them, etc.

As a next step, for example, I would like to add terminological links
between vocabularies using the same term, using the lexvo.org ontology, such
as:
<http://lexvo.org/id/term/eng/event>  lvont:means  <http://purl.org/vocab/bio/0.1/Event> .

<http://lexvo.org/id/term/eng/event>  lvont:means  <http://purl.org/dc/dcmitype/Event> .

<http://lexvo.org/id/term/eng/event>  lvont:means  <http://www.aktors.org/ontology/portal#Event> .

This might help curators of those various ontologies to wonder whether they
really needed to duplicate the class in their own vocabulary, and a data
curator wanting to publish data about events to look up those various
flavours of "Event" before making her choice ...

And as Kingsley says this is just the beginning of what you can imagine ...

Bernard


-- 
Bernard Vatant
Senior Consultant
Vocabulary & Data Engineering
Tel:   +33 (0) 971 488 459
Mail: bernard.vat...@mondeca.com

Mondeca
3, cité Nollez 75018 Paris France
Web:http://www.mondeca.com
Blog:http://mondeca.wordpress.com



Introducing Vocabularies of a Friend (VOAF)

2011-01-14 Thread Bernard Vatant
Hello all

I'm pleased to announce the first publication of
"Vocabularies of a Friend (VOAF) - Friendly vocabularies for the linked data
Web"

Data sets published in the framework of the Linked Open Data Cloud are
relying on a variety of RDFS vocabularies or OWL ontologies.
The aim of VOAF is to provide information on such vocabularies, and in
particular how they rely on each other.
VOAF defines a network of vocabularies the same way FOAF is used to define
networks of people"
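
As a minimal sketch in Turtle (the entries are illustrative, not verbatim
extracts from the dataset):

@prefix voaf: <http://www.mondeca.com/foaf/voaf#> .

<http://xmlns.com/foaf/0.1/> a voaf:Vocabulary .
<http://purl.org/dc/terms/>  a voaf:Vocabulary .

<http://xmlns.com/foaf/0.1/> voaf:reliesOn <http://purl.org/dc/terms/> .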

VOAF is of course a clear homage to FOAF, which is the hub of the network:
more than half of the listed vocabularies rely on it one way or another.
I asked Dan Brickley a couple of days ago if he minded this friendly hack.
Having received no answer from him, I just went ahead, following the adage
"Qui ne dit mot consent" (silence gives consent).

More at http://www.mondeca.com/foaf/voaf-doc.html

The VOAF dataset is available as linked data at
http://www.mondeca.com/foaf/voaf-vocabs.rdf

This is a work in progress, still a bit sketchy, which hopefully will
benefit from the community feedback.

In particular, I've tried to link the vocabularies to the CKAN datasets
using the Tag ontology, for example making explicit the link from
http://xmlns.com/foaf/0.1 to http://ckan.net/tag/format-foaf.
Instead of re-inventing a specific attribute, I've reused the Lexvo, MOAT
and Tag ontologies as in the following, which might be a bit convoluted for
the purpose at hand.

<rdf:Description rdf:about="http://xmlns.com/foaf/0.1">
...
  <tags:taggedWithTag>
    <tags:Tag rdf:about="http://ckan.net/tag/format-foaf">
      <tags:name>format-foaf</tags:name>
    </tags:Tag>
  </tags:taggedWithTag>
...
</rdf:Description>

I've made sense of quite a bunch of CKAN tags in terms of the corresponding
vocabulary used, but there are still quite a few vocabularies without
corresponding CKAN tags. If people in charge of CKAN tags see anything I've
missed, please feel free to push it to me.

And of course any feedback on whatever you would like to see
added/modified/deleted is welcome.

Thanks for your attention.

Bernard



-- 
Bernard Vatant
Senior Consultant
Vocabulary & Data Engineering
Tel:   +33 (0) 971 488 459
Mail: bernard.vat...@mondeca.com

Mondeca
3, cité Nollez 75018 Paris France
Web:http://www.mondeca.com
Blog:http://mondeca.wordpress.com



Re: Concurrent namespaces for Creative Commons ontology

2010-12-21 Thread Bernard Vatant
Interesting ... it figures that the authoritative namespace for cc has
indeed been http://creativecommons.org/ns# for at least two years and
certainly more.
But since there is no mention of the new namespace whatsoever at
http://web.resource.org/cc/, people can keep using the old one without being
aware of any change. I would be curious to have the people behind the Music
Ontology, for example, explain why they keep using the old namespace.

Bernard

2010/12/21 Søren Roug 

>  According to footnote 8 of
> http://wiki.creativecommons.org/images/d/d6/Ccrel-1.0.pdf :
>
>
>
> 8 ... CC initially used the http://web.resource.org/cc/ namespace,
> migrating to http://creativecommons.org/ns# for superior human interaction
> with the vocabulary when it became apparent RDFa would facilitate this. In
> 2004 the Dublin Core Metadata Initiative approved a "license" refinement of
> its "rights" term (see
> http://dublincore.org/usage/decisions/2004/2004-01.Rights-terms.shtml).
> Had http://purl.org/dc/terms/license existed in 2002, CC would not have
> defined http://web.resource.org/cc/license.
>
> Thanks to the extensibility properties of RDF,
> http://creativecommons.org/ns#license describes its relationship
>
> to each of these other properties.
>
> --
> Sincerely yours / Med venlig hilsen, Søren Roug 
>
> European Environment Agency, Kongens Nytorv 6, DK-1050 Copenhagen K
> Tel: +45 2368 3660 Jabber: r...@jabber.eea.europa.eu
>
> *This email was delivered using 100% recycled electrons. Please try to
> keep it that way.***
>
>
>
>
>
>
>
> *From:* public-lod-requ...@w3.org [mailto:public-lod-requ...@w3.org] *On
> Behalf Of *Bernard Vatant
> *Sent:* 21 December 2010 10:26
> *To:* Linking Open Data
>
> *Cc:* m...@aaronsw.com; nat...@yergler.net
> *Subject:* Concurrent namespaces for Creative Commons ontology
>
>
>
> Hi folks
>
>
> It seems that there are two concurrent publications and namespaces for the
> Creative Commons Rights Expression Language.
>
> http://creativecommons.org/schema.rdf uses http://creativecommons.org/ns#
> http://web.resource.org/cc/schema.rdf uses http://web.resource.org/cc/
>
> The first one looks at first sight more reliable since it is maintained by
> Creative Commons folks themselves
> It is apparently the namespace underlying CC tag on CKAN packages
> http://ckan.net/tag/format-cc
> Datasets under this tag indeed use it, such as Eurostat or Geospecies
> (and BTW it would be great if CKAN tag pages could make explicit the
> vocabulary namespace underlying the tag, if any)
>
> OTOH I found the second one to be used by a bunch of more or less famous
> ontologies such as:
>
> BIO http://purl.org/vocab/bio/0.1
> Music Ontology http://purl.org/ontology/mo/
> FRBR http://purl.org/vocab/frbr/core
> Review Ontology http://purl.org/stuff/rev
> Talis Address Schema http://schemas.talis.com/2005/address/schema
> VANN http://purl.org/vocab/vann
>
> So I wonder ... I cc people behind both vocabularies, maybe they can do
> something about it?
>
> Best
>
> Bernard
>
> --
> Bernard Vatant
> Senior Consultant
> Vocabulary & Data Engineering
> Tel:   +33 (0) 971 488 459
> Mail: bernard.vat...@mondeca.com
> 
> Mondeca
> 3, cité Nollez 75018 Paris France
> Web:http://www.mondeca.com
> Blog:http://mondeca.wordpress.com
> 
>



-- 
Bernard Vatant
Senior Consultant
Vocabulary & Data Engineering
Tel:   +33 (0) 971 488 459
Mail: bernard.vat...@mondeca.com

Mondeca
3, cité Nollez 75018 Paris France
Web:http://www.mondeca.com
Blog:http://mondeca.wordpress.com



Concurrent namespaces for Creative Commons ontology

2010-12-21 Thread Bernard Vatant
Hi folks

It seems that there are two concurrent publications and namespaces for the
Creative Commons Rights Expression Language.

http://creativecommons.org/schema.rdf uses http://creativecommons.org/ns#
http://web.resource.org/cc/schema.rdf uses http://web.resource.org/cc/

The first one looks at first sight more reliable since it is maintained by
Creative Commons folks themselves
It is apparently the namespace underlying CC tag on CKAN packages
http://ckan.net/tag/format-cc
Datasets under this tag indeed use it, such as Eurostat or Geospecies
(and BTW it would be great if CKAN tag pages could make explicit the
vocabulary namespace underlying the tag, if any)

OTOH I found the second one to be used by a bunch of more or less famous
ontologies such as:

BIO http://purl.org/vocab/bio/0.1
Music Ontology http://purl.org/ontology/mo/
FRBR http://purl.org/vocab/frbr/core
Review Ontology http://purl.org/stuff/rev
Talis Address Schema http://schemas.talis.com/2005/address/schema
VANN http://purl.org/vocab/vann

So I wonder ... I cc people behind both vocabularies, maybe they can do
something about it?
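
One possibility, sketched in Turtle, would be for at least one of the two
publications to bridge explicitly to the other (an illustration of what
could be asserted, not of what either file currently contains):

@prefix owl: <http://www.w3.org/2002/07/owl#> .

<http://web.resource.org/cc/license>
    owl:equivalentProperty <http://creativecommons.org/ns#license> .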

Best

Bernard

-- 
Bernard Vatant
Senior Consultant
Vocabulary & Data Engineering
Tel:   +33 (0) 971 488 459
Mail: bernard.vat...@mondeca.com

Mondeca
3, cité Nollez 75018 Paris France
Web:http://www.mondeca.com
Blog:http://mondeca.wordpress.com



Re: Looking for metalex ontology

2010-11-30 Thread Bernard Vatant
Tim

Thanks for taking the time to drill down to those gory details. To follow up
with Rinke ... ouch indeed :)

BTW it seems to me (now that you eventually led me to the OWL file) there is
another way this ontology does not follow Linked Data best practices: it
does not rely on any other vocabulary, although many classes and properties
it defines could easily be found in what I call the Linked Open Vocabularies
(FOAF, Dublin Core, BIBO, FRBR etc.)
Regarding this last point, I'm in the process of gathering in a single
dataset how the vocabularies used in the LOD cloud rely on each other. Stay
tuned, publication in the days to come, bandwidth permitting.

Best

Bernard

2010/11/30 Tim Berners-Lee 

> Bernard,
>
> You have been tripped up by abuse of content negotiation.
>
> Their document says they do conneg.
>
> cwm http://www.metalex.eu/metalex/1.0
> gives you data, as cwm only asks for RDF.
>
> Following it by hand
> $ curl -H Accept:application/rdf+xml http://www.metalex.eu/metalex/1.0
> <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
> <html><head>
> <title>303 See Other</title>
> </head><body>
> <h1>See Other</h1>
> <p>The answer to your request is located
> <a href="http://svn.metalex.eu/svn/MetaLexWS/branches/latest/metalex-cen.owl">here</a>.</p>
> <hr>
> <address>Apache/2.2.11 (Ubuntu) DAV/2 SVN/1.5.4 mod_jk/1.2.26
> PHP/5.2.6-3ubuntu4.6 with Suhosin-Patch mod_python/3.3.1 Python/2.6.2
> mod_ruby/1.2.6 Ruby/1.8.7(2008-08-11) mod_ssl/2.2.11 OpenSSL/0.9.8g
> mod_perl/2.0.4 Perl/v5.10.0 Server at www.metalex.eu Port 80</address>
> </body></html>
> $
>
> *** Note here we get a 303, which means that what we are being redirected
> to is NOT the ontology, but may be relevant.  The chain of custody is
> broken: the site does not assert that what follows is the ontology.  But
> let us follow it anyway:
>
> Following the 303
>
> $ curl -H Accept:application/rdf+xml
> http://svn.metalex.eu/svn/MetaLexWS/branches/latest/metalex-cen.owl
> <?xml version="1.0"?>
> <!DOCTYPE rdf:RDF [
>     <!ENTITY owl "http://www.w3.org/2002/07/owl#" >
>     <!ENTITY owl11 "http://www.w3.org/2006/12/owl11#" >
> [...]
> <rdf:RDF xmlns:metalex="http://www.metalex.eu/metalex/2008-05-02#"
>  xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
>  xmlns:owl="http://www.w3.org/2002/07/owl#">
>
> [...]
>
> *** Note that here you do get some RDF.
>
> Tabulator can read that.  Each term, as you explore, is marked by a red
> dot to indicate that it could not be looked up on the web.  Because:
> ** You do not get information about the namespace
>   xmlns:metalex="http://www.metalex.eu/metalex/2008-05-02#"
> instead of the one you originally asked about!
>
> So after all that you can see what they are getting  at and how they are
> thinking.
> But their linked data is seriously and needlessly broken.
>
> To fix it, they should just serve the ontology with 200 from
> http://www.metalex.eu/metalex/1.0
> and fix the namespace in it to be that. No conneg.
>
>
> *** Within Firefox, however, even with Tabulator (so accepting RDF or
> HTML), one is redirected to:
>
> http://www.cen.eu/cen/Sectors/Sectors/ISSS/CEN%20Workshop%20Agreements/Pages/MLX%20CWAs.aspx
> which is a sort of a home page about the document, with copyright stuff,
> but it is not the ontology.
>
> So they are using the same URI for two documents with very different
> information, which is architecturally bad and practically messed you up.
>
> Note that John Sheridan (cc'd) and colleagues have put the UK laws online
> with lots of RDF -- you could sync up with them if you haven't
>
> Moral: point tabulator at it and if it doesn't work, fix it.
> Tim
>
> PS: Their copyright
>
> "CWAs are CEN copyright. Those made available for downloading are provided
> on the condition that they may not be modified, re-distributed, sold or
> repackaged in any form without the prior consent of CEN, and are only for
> the use of the person downloading them.  For additional copyright
> information, please refer to the statements on the cover pages of the CWAs
> concerned."   sounds as though if it applies to the ontology, which isn't
> obvious
>
> On 2010-11 -29, at 14:41, Bernard Vatant wrote:
>
> Hi all
>
> According to http://www.ckan.net/tag/format-metalex there are two datasets
> in the LOD cloud relying on metalex ontology.
>
> But they provide different URIs for this ontology ...
>
> http://www.best-project.nl/rechtspraak.ttl says:
> void:vocabulary <http://www.metalex.eu/schema>
>
> http://www.legislation.gov.uk/ukpga/1985/67/section/6/data.rdf says:
> xmlns:metalex="http://www.metalex.eu/metalex/2008-05-02#"
>
> ... and both are dead links ...
>
> OTOH http://www.metalex.eu/documentation/ says:
>
> "The namespace of the CEN MetaLex XML Schema and OWL

Looking for metalex ontology

2010-11-29 Thread Bernard Vatant
Hi all

According to http://www.ckan.net/tag/format-metalex there are two datasets
in the LOD cloud relying on metalex ontology.

But they provide different URIs for this ontology ...

http://www.best-project.nl/rechtspraak.ttl says:
void:vocabulary <http://www.metalex.eu/schema>

http://www.legislation.gov.uk/ukpga/1985/67/section/6/data.rdf says:
xmlns:metalex="http://www.metalex.eu/metalex/2008-05-02#"

... and both are dead links ...

OTOH http://www.metalex.eu/documentation/ says:

"The namespace of the CEN MetaLex XML Schema and OWL specification is
http://www.metalex.eu/metalex/1.0"

... which redirects to ... well ...

Too bad because this metalex ontology looks really interesting :)

Pointer, someone?

Bernard

-- 
Bernard Vatant
Senior Consultant
Vocabulary & Data Engineering
Tel:   +33 (0) 971 488 459
Mail: bernard.vat...@mondeca.com

Mondeca
3, cité Nollez 75018 Paris France
Web:http://www.mondeca.com
Blog:http://mondeca.wordpress.com



Re: survey: who uses the triple foaf:name rdfs:subPropertyOf rdfs:label?

2010-11-12 Thread Bernard Vatant
Hi Dan

For the record, here is what happened to the geonames ontology re. this issue.

Answering to the first publication of geonames ontology in october 2006, Tim
Berners-Lee himself asked for the "geonames:name" attribute to be declared
as a subproperty of rdfs:label to make Tabulator able to use it. And in
order to make DL tools also happy the trick was to have a "Full" ontology
declaring the subproperties of rdfs:label and importing a "Lite" ontology.
I'm afraid I can find now neither on which list this conversation took
place, nor who suggested the trick.

It was done so until version 2.0, see
http://www.geonames.org/ontology/ontology_v2.0_Full.rdf

I changed it from version 2.1, by declaring the various geonames naming
properties as subproperties of either skos:prefLabel or skos:altLabel,
kicking the issue out towards the SKOS outfield, and getting rid of this
cumbersome splitting of the ontology into a "Full" and "Lite" part.
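
Concretely, the declarations look like this (a sketch in Turtle, from
memory, so to be checked against the published file):

@prefix gn:   <http://www.geonames.org/ontology#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .

gn:name          rdfs:subPropertyOf skos:prefLabel .
gn:alternateName rdfs:subPropertyOf skos:altLabel .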

That can't be done for foaf:name I'm afraid, but it would be interesting to
know if Tabulator uses subproperty declarations in the case of foaf:name.

Best

Bernard


2010/11/12 Dan Brickley 

> Dear all,
>
> The FOAF RDFS/OWL document currently includes the triple
>
>  foaf:name rdfs:subPropertyOf rdfs:label .
>
> This is one of several things that OWL DL oriented tools (eg.
> http://www.mygrid.org.uk/OWL/Validator) don't seem to like, since it
> mixes application schemas with the W3C builtins.
>
> So for now, pure fact-finding. I would like to know if anyone is
> actively using this triple, eg. for Linked Data browsers. If we can
> avoid this degenerating into a thread about the merits or otherwise of
> description logic, I would be hugely grateful.
>
> So -
>
> 1. do you have code / applications that checks to see if a property is
> "rdfs:subPropertyOf rdfs:label" ?
> 2. do you have any scope to change this behaviour (eg. it's a web
> service under your control, rather than shipping desktop software )
> 3. would you consider checking for ?x rdf:type foaf:LabelProperty or
> other idioms instead (or rather, as well).
> 4. would you object if the triple "foaf:name rdfs:subPropertyOf
> rdfs:label " is removed from future version of the main FOAF RDFS/OWL
> schema? (it could be linked elsewhere, mind)
>
> Thanks in advance,
>
> Dan
>
>


-- 
Bernard Vatant
Senior Consultant
Vocabulary & Data Engineering
Tel:   +33 (0) 971 488 459
Mail: bernard.vat...@mondeca.com

Mondeca
3, cité Nollez 75018 Paris France
Web:http://www.mondeca.com
Blog:http://mondeca.wordpress.com



Re: Best Way to Extend the Geo Vocabulary to include an "error" or "extent" radius in meters

2010-10-07 Thread Bernard Vatant
Hi Peter

Something like the example below, but I suspect that this might not make it
> a real geo:Point?
>

Barely. The old maths teacher in me frowns at points having a radius :)


>   <geo:Point>
>     <geo:lat>55.701</geo:lat>
>     <geo:long>12.552</geo:long>
>     <geo:radius>10</geo:radius>
>   </geo:Point>
>

What about something like the following, since the radius is not really a
property of the point ...

<ex:Circle>
  <ex:center>
    <geo:Point>
      <geo:lat>55.701</geo:lat>
      <geo:long>12.552</geo:long>
    </geo:Point>
  </ex:center>
  <ex:radius>10</ex:radius>
</ex:Circle>
namespaces ad libitum of course

Cheers

Bernard


-- 
Bernard Vatant
Senior Consultant
Vocabulary & Data Engineering
Tel:   +33 (0) 971 488 459
Mail: bernard.vat...@mondeca.com

Mondeca
3, cité Nollez 75018 Paris France
Web:http://www.mondeca.com
Blog:http://mondeca.wordpress.com



Re: Best practice for permanently moved resources?

2010-08-12 Thread Bernard Vatant
Hi Kjetil

You might be interested in what has been done for lingvoj.org language URIs
(which you have used in a project, if I remember well), now redirected to
lexvo.org. See http://www.lingvoj.org/

There are not many explanations of the rationale and method, but your
message reminds me it's on my to-do list to document it further.

At http://www.lingvoj.org/lingvo/fr.rdf you get the following descriptions:

<rdf:Description rdf:about="http://www.lingvoj.org/lingvo/fr.rdf">
  <dcterms:isReplacedBy rdf:resource="http://www.lexvo.org/data/iso639-3/fra"/>
</rdf:Description>

This provides the RDF document replacing the current one.

<rdf:Description rdf:about="http://www.lingvoj.org/lang/fr">
  <rdfs:isDefinedBy rdf:resource="http://www.lexvo.org/data/iso639-3/fra"/>
  <owl:sameAs rdf:resource="http://lexvo.org/id/iso639-3/fra"/>
</rdf:Description>

This provides the new URI and the new document where it is defined.

Re. conneg, I've set up a simple redirect for the html pages.

I of course welcome any feedback about this method.

Best

Bernard



2010/8/12 Kjetil Kjernsmo 

> Hi all!
>
> Cool URIs don't change, but cool content does, so the problem surfaces
> that I need to permanently redirect now and then. I discussed this problem
> in a meetup yesterday, and it turns out that people have found dbpedia
> problematic to use because it is too much of a moving target; when a URI
> changes because the underlying concepts change, there's a need for more
> 301s.
>
> The problem is then that I need to record the relation between the old
> and the new URI somehow. As of now, it seems that the easiest way to do
> this would be to do something like:
>
> <http://example.org/old> ex:permanently_moved_to <http://example.org/new>
>
> and if the former is dereferenced, the server will 301 redirect to the
> latter.
> Has anyone done something like that, or have other useful experiences
> relevant to this problem?
>
> Cheers,
>
> Kjetil
>
>


-- 
Bernard Vatant
Senior Consultant
Vocabulary & Data Engineering
Tel:   +33 (0) 971 488 459
Mail: bernard.vat...@mondeca.com

Mondeca
3, cité Nollez 75018 Paris France
Web:http://www.mondeca.com
Blog:http://mondeca.wordpress.com


