Re: Size a linked open data set

2016-07-06 Thread Nandana Mihindukulasooriya
Dear Jean-Claude,

I'm not sure exactly what you meant by the "number of distinct resources in
a dataset". Is it "the total number of distinct subjects", including both
IRIs and blank nodes? That is what your first query counts. Your second
query, by contrast, counts the number of triples in the dataset. You can also
count the total number of distinct resources or IRIs taking into account the
subjects, predicates, and objects of all triples. The VoID vocabulary defines
some of those statistics: https://www.w3.org/TR/void/#statistics
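
For example, a minimal sketch of that last count (it may well time out on a
public endpoint, as you observed):

SELECT (COUNT(DISTINCT ?iri) AS ?distinctIRIs)
WHERE {
  { ?iri ?p ?o } UNION { ?s ?iri ?o } UNION { ?s ?p ?iri }
  FILTER(isIRI(?iri))
}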

Loupe, a tool that we built to explore datasets, provides some of those
statistics for the DBpedia (FR) 2015-04 dataset:
http://loupe.linkeddata.es/loupe/summary.jsp?dataset=frdb

At the moment, we are creating a new version with the DBpedia 2015-10
datasets, and we will be happy to share those statistics with you in
advance. Please feel free to contact us if you don't find the information
you need in the current online version.

Best Regards,
Nandana

Ontology Engineering Group (OEG)
Universidad Politécnica de Madrid
Madrid, Spain

On Wed, Jul 6, 2016 at 1:49 PM, Jean-Claude Moissinac <
jean-claude.moissi...@telecom-paristech.fr> wrote:

> Hello
>
> In my work, I need to know the number of distinct resources in a dataset.
> For example, with dbpedia-fr, I'm trying
> select count(distinct ?r) where { ?r ?p ?l }
>
> And I'm always getting a timeout error message
> While with
> select count(?r) where { ?r ?p ?l }
> I'm getting
> 185404575
>
> Is this a good way to determine such a size?
>
> --
> Jean-Claude Moissinac
>
>


Re: Loupe - a tool for inspecting and exploring datasets

2015-10-10 Thread Nandana Mihindukulasooriya
Hi John,

Thanks! Yes, we see the value in using Loupe with private SPARQL endpoints,
and we are looking at possible models for supporting them. We will update
the thread with the details after the ISWC conference.

Best Regards,
Nandana

On Fri, Oct 9, 2015 at 12:45 PM, John Walker  wrote:

> Hi Nandana
>
> Nice tool, is it possible to use on private SPARQL endpoints?
>
> John
>
>
>


Re: Loupe - a tool for inspecting and exploring datasets

2015-10-10 Thread Nandana Mihindukulasooriya
Hi Olaf,

On Fri, Oct 9, 2015 at 7:22 PM, Olaf Görlitz <olaf.goerl...@gmail.com>
wrote:

> very nice, indeed. Are you planning to make it available as open source,
> so that one could also install it locally for private datasets?
>

This work was supported by a Spanish national project called 4V (Volume,
Velocity, Variety and Veracity in innovative data management) and we are
now in the process of releasing it as open source, with proper licensing
etc. according to the project guidelines. It will take a bit of time, but
our goal is to release it as an open source project.


> +1 for using ElasticSearch and Docker
>
> However, what is your experience with using ElasticSearch for triple
> indexing? Why did you not use a triple store?
>

Well, I use a triple store (OpenLink Virtuoso Open Source) in the indexing
phase, but for storing the indexed information I use Elasticsearch. Each
dataset has its own index with a set of predefined document type mappings.
The main reason for using Elasticsearch was that it was much easier and
faster to do auto-completion, term searches, and page generation with ES.
Also, scaling over a cluster of machines is more transparent in ES. Overall,
my experience with Elasticsearch has been very positive. In addition, I use
an in-memory cache (Ehcache), mainly to optimize results that are paged etc.
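
To give an idea, the kind of aggregation queries run against the triple
store during that indexing phase look roughly like the following (an
illustrative sketch, not Loupe's exact queries):

SELECT ?p (COUNT(*) AS ?triples)
WHERE { ?s ?p ?o }
GROUP BY ?p
ORDER BY DESC(?triples)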

Best Regards,
Nandana

>
> On 08.10.2015 at 18:09, Nandana Mihindukulasooriya wrote:
>
>> Hi all,
>>
>> We are developing a tool called Loupe [ http://loupe.linkeddata.es ] for
>> inspecting and exploring datasets to understand which vocabularies
>> (classes and properties) are used in a dataset and which triple patterns
>> are common. Loupe has some similarities to LODStats, Aether,
>> ProLOD++, etc., but it provides the ability to dig into more detail. It
>> also connects the information provided directly to the data, so that
>> one can see the triples that correspond to those numbers. At the
>> moment, it indexes 2+ billion triples from datasets including DBpedia
>> (17 languages), Wikidata, LinkedBrainz, BioModels, etc.
>>
>> It's easier to describe what information Loupe provides using an
>> example. If we take the DBpedia dataset, it first provides a summary
>> with the number of triples, distinct subjects and objects, their
>> composition (IRIs, blank nodes, literals), etc., and a summary of the
>> other information presented below. http://tinyurl.com/loupe-dbpedia
>>
>> The class explorer provides the list of the 941 classes used, the number
>> of instances of each class, the number of classes in each namespace, etc.
>> It also allows you to search for classes. http://tinyurl.com/dbpedia-classes
>>
>> If we select a concrete class such as dbo:Person, it shows the 13,128
>> distinct properties associated with instances of dbo:Person and the
>> probability that a given property is found in an instance. It also
>> provides a list of 438 other types that are declared on dbo:Person
>> instances, which can be equivalent classes, superclasses, subclasses,
>> etc. http://tinyurl.com/dbo-person
>>
>> The property explorer provides a list of 60,347 properties with the
>> number of triples and the number of properties in each namespace, etc. It
>> also allows searching. http://tinyurl.com/dbpedia-properties
>>
>> Again, if we select a concrete property such as dbprop:name, it looks at
>> all the triples that contain the given property and analyzes the subjects
>> and objects of those triples. For subjects, it looks at IRI / blank node
>> counts and also their types. For objects, it does the same but
>> additionally analyzes literals (numeric values, integers, averages, min,
>> max, etc.). http://tinyurl.com/dbp-name
>>
>> The triple pattern explorer allows you to search the 3,807,196 abstract
>> triple patterns. http://tinyurl.com/dbpedia-triple-patterns
>> Or you can select a pattern you are interested in, for instance, which
>> properties connect dbo:Politician to dbo:Criminal:
>> http://tinyurl.com/politician-criminal
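>>
>> As a sketch, the pattern behind that last example in SPARQL could look
>> roughly like this (illustrative only, not necessarily the exact query
>> Loupe runs):
>>
>> SELECT DISTINCT ?p
>> WHERE {
>>   ?politician a dbo:Politician .
>>   ?criminal   a dbo:Criminal .
>>   ?politician ?p ?criminal .
>> }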
>>
>> In all these cases, the numbers are directly linked to the corresponding
>> triples.
>>
>> That's a glimpse of Loupe. We would like to know whether it is useful for
>> your use cases so that we can keep improving it. It's still in its early
>> stages, so any feedback on improvements is more than welcome. If you are
>> interested, we will be doing a demo [1] at ISWC 2015.
>>
>> Best Regards,
>> Nandana Mihindukulasooriya
>> María Poveda Villalón
>> Raúl García Castro
>> Asunción Gómez Pérez
>>
>> [1] Nandana Mihindukulasooriya, María Poveda Villalón, Raúl García
>> Castro, and Asunción Gómez Pérez. "Loupe - An Online Tool for Inspecting
>> Datasets in the Linked Data Cloud", Demo at The 14th International
>> Semantic Web Conference, Bethlehem, USA, 2015.
>>
>


Re: Loupe - a tool for inspecting and exploring datasets

2015-10-09 Thread Nandana Mihindukulasooriya
Thanks a lot Kingsley / Ghislain for your feedback !!

On Fri, Oct 9, 2015 at 10:41 AM, Ghislain Atemezing <
auguste.atemez...@eurecom.fr> wrote:
>
> That's a glimpse of Loupe. We would like to know whether it is useful for
> your use cases so that we can keep improving it. It's still in its early
> stages, so any feedback on improvements is more than welcome. If you are
> interested, we will be doing a demo [1] at ISWC 2015.
>
>
> I see your work as also close to the LOD Laundromat,
> http://lodlaundromat.org/.
>

Yes. We were looking at LOD Laundromat as a way of harvesting more datasets
for Loupe and also at the possibility of using its LDF endpoints. Maybe there
are other ways to integrate Loupe better with LOD Laundromat, and we will
surely look into that.

> I was looking at the “provenance” tab, and was a bit confused. For example,
> in the case of DBpedia French [1], the source is the endpoint and the type
> is “RDF Dump”, while one would expect “SPARQL endpoint”. Also, the creator
> of this instance is you. Maybe it would be nice to use the DUV metadata [2]
> in the future to fill in this section?
>

Yes. It's a bit confusing. That provenance information was about the
internal index we created for the tool, not about the dataset itself.
But we are in the process of changing that section so that the provenance
explorer describes the provenance of the dataset itself, using the info
we extract from the dataset. We will definitely look into how we can use
DUV for that. We are also planning to include more explorers, such as one
for extracting language metadata, etc.

> In the “ontology explorer” for this dataset, it only shows the OWL
> vocabulary. Is this something related to the generic query used in this
> particular case?
>

We extract both OWL and RDFS vocabularies, but there is a small glitch in
the generation of that page. We will fix it soon. Thanks a lot for pointing
it out!

Best Regards,
Nandana


Re: Loupe - a tool for inspecting and exploring datasets

2015-10-09 Thread Nandana Mihindukulasooriya
Hi Pierre-Yves,

On Fri, Oct 9, 2015 at 11:09 AM, Pierre-Yves Vandenbussche <
py.vandenbuss...@gmail.com> wrote:
>
> very nice initiative.
>

Thanks!

Is there any API, or dump to get access to your data? I would be very
> interested to use the information of triple patterns.
>

At the moment we don't have an API for that, but we could add it to our roadmap.


> When trying to add a dataset I got a field validation error ("must match
> pattern") on email, dump and endpoint although they are all valid.
>

Sorry about that; you should be able to add the information now. Anyway, we
will write to you off-list to ask more about your use case with the triple
patterns and also about the dataset that you wanted to include.

Best Regards,
Nandana


Re: Loupe - a tool for inspecting and exploring datasets

2015-10-08 Thread Nandana Mihindukulasooriya
Thanks a lot Alvaro for your feedback! There was a small error in the
dataset search and it's fixed now.

The correct URL should've been
http://loupe.linkeddata.es/loupe/summary.jsp?dataset=engdbp

Best Regards,
Nandana


On Thu, Oct 8, 2015 at 6:18 PM, Alvaro Graves <alv...@graves.cl> wrote:

> Hi Nandana,
>
> I got several 500 errors when trying to use it, for example
> http://loupe.linkeddata.es/loupe/dataset?dataset=DBpedia%20(English)
>
> Alvaro Graves-Fuenzalida, PhD
> Web: http://graves.cl - Twitter: @alvarograves
>
> On Thu, Oct 8, 2015 at 9:09 AM, Nandana Mihindukulasooriya <
> nmihi...@fi.upm.es> wrote:
>
>> Hi all,
>>
>> We are developing a tool called Loupe [ http://loupe.linkeddata.es ] for
>> inspecting and exploring datasets to understand which vocabularies
>> (classes and properties) are used in a dataset and which triple patterns
>> are common. Loupe has some similarities to LODStats, Aether, ProLOD++,
>> etc., but it provides the ability to dig into more detail. It also connects
>> the information provided directly to the data, so that one can see the
>> triples that correspond to those numbers. At the moment, it indexes 2+
>> billion triples from datasets including DBpedia (17 languages), Wikidata,
>> LinkedBrainz, BioModels, etc.
>>
>> It's easier to describe what information Loupe provides using an example.
>> If we take the DBpedia dataset, it first provides a summary with the number
>> of triples, distinct subjects and objects, their composition (IRIs, blank
>> nodes, literals), etc., and a summary of the other information presented
>> below.  http://tinyurl.com/loupe-dbpedia
>>
>> The class explorer provides the list of the 941 classes used, the number
>> of instances of each class, the number of classes in each namespace, etc.
>> It also allows you to search for classes. http://tinyurl.com/dbpedia-classes
>>
>> If we select a concrete class such as dbo:Person, it shows the 13,128
>> distinct properties associated with instances of dbo:Person and the
>> probability that a given property is found in an instance. It also provides
>> a list of 438 other types that are declared on dbo:Person instances, which
>> can be equivalent classes, superclasses, subclasses, etc.
>> http://tinyurl.com/dbo-person
>>
>> The property explorer provides a list of 60,347 properties with the number
>> of triples and the number of properties in each namespace, etc. It also
>> allows searching. http://tinyurl.com/dbpedia-properties
>>
>> Again, if we select a concrete property such as dbprop:name, it looks at
>> all the triples that contain the given property and analyzes the subjects
>> and objects of those triples. For subjects, it looks at IRI / blank node
>> counts and also their types. For objects, it does the same but additionally
>> analyzes literals (numeric values, integers, averages, min, max, etc.).
>> http://tinyurl.com/dbp-name
>>
>> The triple pattern explorer allows you to search the 3,807,196 abstract
>> triple patterns. http://tinyurl.com/dbpedia-triple-patterns
>> Or you can select a pattern you are interested in, for instance, which
>> properties connect dbo:Politician to dbo:Criminal:
>> http://tinyurl.com/politician-criminal
>>
>> In all these cases, the numbers are directly linked to the corresponding
>> triples.
>>
>> That's a glimpse of Loupe. We would like to know whether it is useful for
>> your use cases so that we can keep improving it. It's still in its early
>> stages, so any feedback on improvements is more than welcome. If you are
>> interested, we will be doing a demo [1] at ISWC 2015.
>>
>> Best Regards,
>> Nandana Mihindukulasooriya
>> María Poveda Villalón
>> Raúl García Castro
>> Asunción Gómez Pérez
>>
>> [1] Nandana Mihindukulasooriya, María Poveda Villalón, Raúl García
>> Castro, and Asunción Gómez Pérez. "Loupe - An Online Tool for Inspecting
>> Datasets in the Linked Data Cloud", Demo at The 14th International Semantic
>> Web Conference, Bethlehem, USA, 2015.
>>
>
>


Discovering a query endpoint associated with a given Linked Data resource

2015-08-26 Thread Nandana Mihindukulasooriya
Hi,

Is there a standard or widely used way of discovering a query endpoint
(SPARQL/LDF) associated with a given Linked Data resource?

I know that a client can use follow-your-nose and related link
traversal approaches such as [1], but I wonder if it is possible to have
a hybrid approach in which dereferenceable Linked Data resources
optionally advertise query endpoint(s) in a standard way so that
clients can perform queries on related data.

To clarify the use case a bit, when a client dereferences a resource URI it
gets a set of triples (an RDF graph) [2]. In some cases, the returned graph
could be a subgraph of a named graph or the default graph of an RDF
dataset. The client wants to discover a query endpoint that exposes the
relevant dataset, if one is available.

For example, something like the following using the search link relation
[3].

--
HEAD /resource/Sri_Lanka HTTP/1.1
Host: dbpedia.org
--
200 OK
Link: <http://dbpedia.org/sparql>; rel="search"; type="sparql",
      <http://fragments.dbpedia.org/2014/en#dataset>; rel="search"; type="ldf"
... other headers ...
--

Best Regards,
Nandana

[1]
http://swsa.semanticweb.org/sites/g/files/g524521/f/201507/DissertationOlafHartig_0.pdf
[2] http://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/#section-rdf-graph
[3] http://www.iana.org/assignments/link-relations/link-relations.xhtml


Re: Discovering a query endpoint associated with a given Linked Data resource

2015-08-26 Thread Nandana Mihindukulasooriya
Hi Heiko,

Thanks a lot for the pointer to the paper.

In your experiment, were you able to get some insight into *why* data
publishers are not providing VoID descriptions when it is applicable to do
so (leaving out single FOAF documents etc.)?

[[Approaches using proposed methods such as VoID and the provenance
vocabulary are scarcely in use (and sometimes not implemented according to
the specification), they lead to a valid SPARQL endpoint in less than 1% of
all cases.]]

Also, did you find many occasions where a VoID description is actually
available but not discoverable according to the VoID spec (such as the case
you mention of the description not being at the root level but at another
level)? For instance, http://dbpedia.org/void/Dataset exists, but it is not
at http://dbpedia.org/.well-known/void and the resources don't provide a
back-link.

Best Regards,
Nandana


On Wed, Aug 26, 2015 at 12:05 PM, Heiko Paulheim 
he...@informatik.uni-mannheim.de wrote:

 Hi all,

 two years ago, we conducted an empirical experiment to find out how
 promising the different approaches to discover SPARQL endpoints are. The
 results were rather disappointing, see [1].

 Executive summary: rather than trying to find VoID descriptions (which
 rarely exist), querying catalogues like datahub seems more promising
 (higher recall at least, precision is lower).

 Hth.

 Best,
 Heiko

 [1] http://www.heikopaulheim.com/docs/iswc2013_poster.pdf




 On 26.08.2015 at 11:50, Nandana Mihindukulasooriya wrote:

 Thanks all for the pointers.

 Yes, it seems it is quite rare in practice. I tried several hosts that
 provide Linked Data resources but couldn't find ones that provide a VoID
 description in .well-known/void.

 I guess there is a higher technical barrier to making that description
 available at the given location compared to providing that information in
 the response in most cases. So probably the pragmatic thing to do would be
 to include this information either in the content or as a Link relation
 header, using the VoID properties, when the resource is dereferenced.

 So I can use the void:inDataset back-link mechanism [1] and point to a
 VoID description that will have the necessary information about the query
 endpoints.

 --
 dbpedia:Sri_Lanka void:inDataset _:DBpedia .
 _:DBpedia a void:Dataset ;
     void:sparqlEndpoint <http://dbpedia.org/sparql> ;
     void:uriLookupEndpoint <http://fragments.dbpedia.org/2014/en?subject=> .
 --
 or

 --
 Link: <http://dbpedia.org/void/Dataset>;
 rel="http://rdfs.org/ns/void#inDataset"
 --

 Best Regards,
 Nandana

 [1] http://www.w3.org/TR/void/#discovery-links

 On Wed, Aug 26, 2015 at 11:05 AM, Miel Vander Sande 
 miel.vandersa...@ugent.be wrote:

 Hi Nandana,

 I guess VoID would be the best fit

  In the case of LDF you could use

  ... void:uriLookupEndpoint
  <http://fragments.dbpedia.org/2014/en?subject=>

  But whether these exist in practice? Probably not. I'd leave it up to the
  dereferencing publisher to provide this triple in the response, rather than
  doing the .well-known thing.

 Best,

 Miel

 On 26 Aug 2015, at 10:57, Víctor Rodríguez Doncel 
 vrodrig...@fi.upm.es wrote:

 
  Well, you might try to look in this folder location:
  .well-known/void
  And possibly find a void:sparqlEndpoint.
 
  But this would be too good to be true.
 
  Regards,
  Víctor
 
  On 26/08/2015 10:45, Nandana Mihindukulasooriya wrote:
  Hi,
 
  Is there a standard or widely used way of discovering a query endpoint
 (SPARQL/LDF) associated with a given Linked Data resource?
 
  I know that a client can use follow-your-nose and related link
 traversal approaches such as [1], but I wonder if it is possible to have
 a hybrid approach in which dereferenceable Linked Data resources
 optionally advertise query endpoint(s) in a standard way so that
 clients can perform queries on related data.
 
  To clarify the use case a bit, when a client dereferences a resource
 URI it gets a set of triples (an RDF graph) [2]. In some cases, the
 returned graph could be a subgraph of a named graph or the default graph
 of an RDF dataset. The client wants to discover a query endpoint that
 exposes the relevant dataset, if one is available.
 
  For example, something like the following using the search link
 relation [3].
 
  --
  HEAD /resource/Sri_Lanka HTTP/1.1
  Host: dbpedia.org
  --
  200 OK
  Link: <http://dbpedia.org/sparql>; rel="search"; type="sparql",
  <http://fragments.dbpedia.org/2014/en#dataset>; rel="search"; type="ldf"
  ... other headers ...
  --
 
  Best Regards,
  Nandana
 
  [1]
 http://swsa.semanticweb.org/sites/g/files/g524521/f/201507/DissertationOlafHartig_0.pdf
  [2]
 http://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/#section-rdf-graph
  [3]
 http://www.iana.org/assignments/link-relations/link-relations.xhtml

[Deadline Extension: 27 July 2015] 1st International Workshop on Linked Data Repair and Certification (ReCert 2015)

2015-07-20 Thread Nandana Mihindukulasooriya
*** Submission Deadline Extended: 27 July, 2015 ***

1st International Workshop on Linked Data Repair and Certification (ReCert
2015)
at the 8th International Conference on Knowledge Capture (K-CAP 2015)
October 7, 2015 - Palisades, NY, USA

The ReCert 2015 workshop is focused on Linked Data Repair and
Certification. Its main aim is to raise awareness of dataset repair and
certification techniques for Linked Data and to promote approaches to
assess, monitor, maintain, improve, and certify Linked Data quality.

Workshop website: http://recert-ld.github.io/2015/
Extended Submission Deadline: Monday 27th of July, 2015

We invite submissions of original research results related to the focus
areas of the workshop. The workshop accepts long research papers (maximum
10 pages, LNCS style) that present mature work, or short papers presenting
novel ideas and ongoing work (maximum 4 pages, LNCS style).

Papers can be submitted through the EasyChair submission page:
https://easychair.org/conferences/?conf=recert2015

Accepted papers will be published online as CEUR Workshop Proceedings.

---
Topics of interest
---
Quality models for Linked Data
Linked Data quality assessment
Linked Data diagnosis
Linked Data repair
Linked Data quality certification
Architectures and Services for Linked Data repair and certification
User-driven quality repair and certification
Benchmarking of repair and certification approaches
Representation of provenance and licensing towards trusted certification
Representation of quality reports and certification information
Linked Data quality in service level agreements
Trust and reputation management of Linked Data publishers

-
Important dates
-
Paper Submission: 27 July, 2015 (Extended)
Acceptance Notification: August 10, 2015
Camera-ready Version: September 7, 2015
Workshop: October 7, 2015

--
Organizing Committee
--
Nandana Mihindukulasooriya nmihi...@fi.upm.es
Dr. Víctor Rodríguez Doncel vrod...@fi.upm.es
Dr. Raúl García Castro rgar...@fi.upm.es


Re: How do you explore a SPARQL Endpoint?

2015-01-22 Thread Nandana Mihindukulasooriya
Maybe not just looking at the classes and properties but also looking at
their frequencies using counts can give a better idea of what sort of data
is exposed. If VoID information is available, it certainly helps. Tools such
as http://data.aalto.fi/visu also help. A similar approach is described in [1].
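
For instance, a minimal sketch of such a frequency query (assuming the
endpoint supports SPARQL 1.1 aggregates):

SELECT ?class (COUNT(?s) AS ?instances)
WHERE { ?s a ?class }
GROUP BY ?class
ORDER BY DESC(?instances)
LIMIT 20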

Best Regards,
Nandana

[1] - http://ceur-ws.org/Vol-782/PresuttiEtAl_COLD2011.pdf

On Thu, Jan 22, 2015 at 4:25 PM, Bernard Vatant bernard.vat...@mondeca.com
wrote:

 Interesting to note that the answers so far are converging towards looking
 first for types and predicates bottom-up from the data, rather than
 queries looking for a declared model layer using RDFS or OWL, such as e.g.,

 SELECT DISTINCT ?class
 WHERE { {?class a owl:Class} UNION {?class a rdfs:Class}}

 SELECT DISTINCT ?property ?domain ?range
 WHERE { {?property rdfs:domain ?domain} UNION {?property rdfs:range
 ?range}}

 Which means globally you don't think the SPARQL endpoint will expose a
 formal model along with the data.
 That said, if the model is exposed with the data, the values of rdf:type
 will contain e.g., rdfs:Class and owl:Class ...

 Of course in the ideal situation where you have an ontology, the following
 would bring its elements.

 SELECT DISTINCT ?o ?x ?type
 WHERE {?x rdf:type ?type.
 ?x rdfs:isDefinedBy ?o.
 ?o a owl:Ontology }

 It's worth trying, because if the dataset you query is really big, it will
 be faster to look first for a declared model than to ask for all distinct
 rdf:type values.


 2015-01-22 15:23 GMT+01:00 Alfredo Serafini ser...@gmail.com:

 Hi

 the most basic query is the usual query for concepts, something like:

 SELECT DISTINCT ?concept
 WHERE {
 ?uri a ?concept.
 }

 then, given a specific concept, you can infer from the data what the
 predicates/properties for it are:

 SELECT DISTINCT ?prp
 WHERE {
 [] ?prp <a-concept> .
 }

 and so on...

 Apart from other more complex queries (here we are of course omitting a lot
 of important things), these two patterns are usually the most useful as a
 starting point, for me.




 2015-01-22 15:09 GMT+01:00 Juan Sequeda juanfeder...@gmail.com:

 Assume you are given a URL for a SPARQL endpoint. You have no idea what
 data is being exposed.

 What do you do to explore that endpoint? What queries do you write?

 Juan Sequeda
 +1-575-SEQ-UEDA
 www.juansequeda.com





 --

 *Bernard Vatant*
 Vocabularies & Data Engineering
 Tel :  + 33 (0)9 71 48 84 59
 Skype : bernard.vatant
 http://google.com/+BernardVatant
 
 *Mondeca*
 35 boulevard de Strasbourg 75010 Paris
 www.mondeca.com
 Follow us on Twitter : @mondecanews http://twitter.com/#%21/mondecanews
 --



Re: Real-world concept URIs

2014-07-17 Thread Nandana Mihindukulasooriya
Hi Pieter,

If we still stick with URIs (as a name but not a locator) [1] but with a
different scheme, say "things:" or something, your solution will still work
the same, right? There are already URN/DOI-to-URL resolvers [3]; similarly,
but rather than using a service, your URIs identifying real-world things
would use a convention to resolve them to information resources by
converting, say, things:{foobar} to http://{foobar} when one has to do a
lookup.

In my opinion, it could probably have been an alternative solution to the
httpRange-14 [4,5] issue, providing a clear separation of information
resources and real-world things. However, the challenge is to have everyone
agree to this convention, and as we have so many real-world things already
named using HTTP URIs, I am not sure whether it would be a practical
solution right now.

Luca,
In addition to what Pieter said, sometimes we have to add licences to the
information resource so that the information about the Zebra is given under
a free open licence [for the Zebra you will have to pay :)], plus the
history of the document (not the Zebra), the owner of the document (not the
Zebra), etc. So IMO there is a need to identify and name the information
resource and the real-world thing separately.

Best Regards,
Nandana

[1] - http://tools.ietf.org/html/rfc3986#section-1.1.3
[2] - http://www.iana.org/assignments/uri-schemes/uri-schemes.xhtml
[3] - http://nbn-resolving.de/
[4] - http://norman.walsh.name/2005/06/19/httpRange-14
[5] - https://plus.google.com/109693896432057207496/posts/Q7pCA6yqNtS

On Wed, Jul 16, 2014 at 5:29 PM, Pieter Colpaert pieter.colpa...@ugent.be
wrote:

 Hi list,

 Short version:

 I want real-world concepts to be able to have a URI without "http://".
 You cannot transfer any real-world concept over an Internet protocol
 anyway. Why I would consider changing this can be

  * If you don't agree, why?
  * If you do agree, should we change the definition of a URI? Will this
 break existing Linked Data infrastructure?



Re: Real-world concept URIs

2014-07-17 Thread Nandana Mihindukulasooriya
Hi Pieter,

On Thu, Jul 17, 2014 at 7:36 PM, Pieter Colpaert pieter.colpa...@ugent.be
wrote:

  Hi Nandana,

 Thank you a lot for your clear reply!


 On 2014-07-17 19:17, Nandana Mihindukulasooriya wrote:

 Hi Pieter,

  If we still stick with URIs (as a name but not a locator) [1] but with a
 different scheme, say "things:" or something, your solution will still work
 the same, right? There are already URN/DOI-to-URL resolvers [3]; similarly,
 but rather than using a service, your URIs identifying real-world things
 would use a convention to resolve them to information resources by
 converting, say, things:{foobar} to http://{foobar} when one has to do a
 lookup.


  Correct! "thing:" could be the protocol of the real world: thing:A can
  shake hands with thing:B, and http://A can serve the fact that thing:A
  shook hands with thing:B over HTTP. I like it!


Apparently this has been considered and discarded with the argument of
building the Semantic Web on top of what was already available back then.
http://www.w3.org/DesignIssues/HTTP-URI.html#L920

Along with the URI collision that David Booth pointed out, we have
the Indirect Identification use case (i.e., the context defines what the
URI identifies). Probably this works well for humans but not so well for
machines.
http://www.w3.org/TR/webarch/#indirect-identification

Best Regards,
Nandana


LDP4j is launched!

2014-06-02 Thread Nandana Mihindukulasooriya
LDP4j (http://www.ldp4j.org/) is an open source Java-based framework for
the development of read-write Linked Data applications based on the W3C
Linked Data Platform (LDP) 1.0 specification. LDP4j is available under the
Apache 2.0 licence.

The LDP4j framework provides both client and server components for handling
LDP communication, hiding the complexity of the protocol details from
application developers and letting them focus on implementing their
application-specific business logic. In addition, it provides a set of
middleware services for requirements beyond the scope of the LDP
specification.
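
As an illustration, the kind of protocol exchange such a framework handles
looks roughly like this (a sketch following the LDP 1.0 spec, with a
hypothetical example.org container; this is not LDP4j-specific API):

--
POST /container/ HTTP/1.1
Host: example.org
Content-Type: text/turtle
Slug: my-resource

<> a <http://example.org/vocab#Thing> .
--
201 Created
Location: http://example.org/container/my-resource
Link: <http://www.w3.org/ns/ldp#Resource>; rel="type"
--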

Getting Started - http://www.ldp4j.org/#/learn/start
Source code - https://github.com/ldp4j/ldp4j

Please try it out; any feedback is welcome!

Linked Data Platform (http://www.w3.org/2012/ldp/) is a W3C initiative
with the mission of producing a W3C Recommendation for an HTTP-based
(RESTful) protocol for read/write Linked Data applications. After two years
of constructive discussions within the LDP working group and several rounds
of public comments, the specification is ready to become a W3C Candidate
Recommendation soon.

Best Regards,
The LDP4j team
https://twitter.com/LDP4j


Re: There's No Money in Linked Data

2013-06-07 Thread Nandana Mihindukulasooriya
Hi,

On Fri, Jun 7, 2013 at 11:10 AM, Andrea Splendiani 
andrea.splendi...@iscb.org wrote:

  I think the issue is not whether there is money or not in linked data,
  but: how much money is in linked data?

  A lot of money has been injected by research funds, maybe governments and
  maybe even industry.
  Is the business generated of less, more, or just about the same value?


As has been mentioned several times in this thread, Linked Data for
Enterprise Application Integration has generated a lot of business and
there is a lot of money in it. One good example would be the IBM Jazz
products [2], which are based on Linked Data [1].

Best Regards,
Nandana

[1] - https://jazz.net/story/about/about-jazz-platform.jsp
[2] - https://jazz.net/products/


Re: There's No Money in Linked Data

2013-06-07 Thread Nandana Mihindukulasooriya
Hi,

On Fri, Jun 7, 2013 at 11:52 AM, Andrea Splendiani 
andrea.splendi...@iscb.org wrote:

 Hi,

  but let me play the devil's advocate. And actually this is something I
  have been asked from time to time by people not strictly in the linked-data
  crowd.
  How do you quantify the business generated?


I think this is a hard question to answer. How much money has the web
generated? Anyway, I like your analogy of highways: Linked Data is an
enabler, and people build many different businesses around it.


  Is there some success story with a quantification of business savings due
  to adopting a Linked-Data infrastructure?
  Or perhaps some estimate of revenues/sales from linked-data based
  products?


I think if some companies using Linked Data based technologies can publish
something along those lines (maybe some have already done so, but I don't
recall seeing any with numbers), it would definitely help the industrial
adoption of Linked Data, as others could evaluate the benefits and risks of
using Linked Data based on some empirical evidence.

Best Regards,
Nandana

  Then there are a few other considerations to be made.

  I'm not familiar with Jazz products, but looking at the website it is a
  complete suite of things. Is Linked Data used as a flagship product for a
  suite where the other components are what people are willing to pay for?


  Also, where do we trace the boundary of what is linked data and what is
  not? Beyond a strict technical definition, if somebody takes a linked-data
  resource, puts it into a graph database and doesn't use RDF/SPARQL
  anymore... we could argue that the value (eventually) generated is still at
  some point due to the availability of linked-data resources. Or not?

  Again, I'm playing the devil's advocate here; I don't need to be
  convinced. But I'm expressing some reflections that were actual comments
  from people, to which I didn't really know what to answer, except that they
  should think of linked-data like highways...

 best,
 Andrea


  On 07/Jun/2013, at 10:31, Nandana Mihindukulasooriya 
  nandana@gmail.com
  wrote:

 Hi,

  On Fri, Jun 7, 2013 at 11:10 AM, Andrea Splendiani 
  andrea.splendi...@iscb.org wrote:

  I think the issue is not whether there is money or not in linked data,
  but: how much money is in linked data?

  A lot of money has been injected by research funds, maybe governments and
  maybe even industry.
  Is the business generated of less, more, or just about the same value?


  As has been mentioned several times in this thread, Linked Data for
  Enterprise Application Integration has generated a lot of business and
  there is a lot of money in it. One good example would be the IBM Jazz
  products [2], which are based on Linked Data [1].

 Best Regards,
 Nandana

 [1] - https://jazz.net/story/about/about-jazz-platform.jsp
 [2] - https://jazz.net/products/
