Hi,
On 07/25/2014 12:39 AM, Mike Liebhold wrote:
I recall earlier versions of the LOD Cloud diagram included Freebase; I
don't see it here, nor the Google Knowledge Graph either.
Am I missing something?
It might be because of bugs in their Linked Data API.
I've sent a mail on
Hi,
vocab.cc has a comprehensive list of vocabularies, with a keyword
API which could be used for autocomplete in YASGUI.
Cheers,
Andreas.
On 06/07/13 12:27, Barry Norton wrote:
Bernard, does LOV keep a cache of properties and classes?
I'd really like to see resource auto-completion in
Hi,
On 06/07/13 15:39, Barry Norton wrote:
Ah wait, is this just classes and properties across the whole BTC set
though?
Can I use this to say I'm typing ':example a foaf:Per...' and get Person
as the top class within the foaf namespace?
the data is there, but we probably need to add prefix
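The completion Barry asks for ('foaf:Per...' yielding Person) is essentially prefix matching over a cached term list. A minimal sketch, assuming a locally cached list of class/property CURIEs (the cache contents and function name here are made up for illustration):

```python
# Hypothetical sketch: auto-complete a typed fragment against a cached
# list of class/property CURIEs, e.g. from a vocabulary catalogue.
def complete(partial, terms):
    """Return cached terms starting with the typed fragment."""
    return sorted(t for t in terms if t.startswith(partial))

# Made-up cache; a real editor would populate this from a term service.
cache = ["foaf:Person", "foaf:PersonalProfileDocument",
         "foaf:Project", "foaf:name", "dcterms:title"]

print(complete("foaf:Per", cache))
# -> ['foaf:Person', 'foaf:PersonalProfileDocument']
```

A real integration would also need the prefix-to-namespace mapping, as noted above, so that 'foaf:' resolves to the FOAF namespace before matching.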
Hi,
On 29/10/12 20:00, Vishal Sinha wrote:
I want to model an RDF graph such that I can later query by temporal
factors, for example:
- How the graph changed between 20th October 2012 and 30th October 2012.
I want to see all updates.
- Snapshot of a particular node on 20th July 2012, 25th July
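One common way to support both queries is to timestamp each triple with the interval during which it was asserted; "changes between two dates" and "snapshot at a date" then become simple filters. A minimal sketch (my own assumption, not a standard vocabulary or library):

```python
from datetime import date

# Sketch: each entry is (s, p, o, added, removed-or-None).
class TemporalGraph:
    def __init__(self):
        self.entries = []

    def add(self, s, p, o, on):
        self.entries.append((s, p, o, on, None))

    def remove(self, s, p, o, on):
        # Close the validity interval of a matching live triple.
        self.entries = [
            (s_, p_, o_, a,
             on if (s_, p_, o_) == (s, p, o) and r is None else r)
            for s_, p_, o_, a, r in self.entries]

    def changes(self, start, end):
        """All additions/removals dated inside [start, end]."""
        out = []
        for s, p, o, a, r in self.entries:
            if start <= a <= end:
                out.append(("added", s, p, o, a))
            if r is not None and start <= r <= end:
                out.append(("removed", s, p, o, r))
        return out

    def snapshot(self, node, on):
        """Triples about `node` that were live on the given date."""
        return [(s, p, o) for s, p, o, a, r in self.entries
                if s == node and a <= on and (r is None or on < r)]

g = TemporalGraph()
g.add("ex:n1", "ex:status", "draft", date(2012, 7, 10))
g.remove("ex:n1", "ex:status", "draft", date(2012, 10, 25))
g.add("ex:n1", "ex:status", "final", date(2012, 10, 25))

print(g.changes(date(2012, 10, 20), date(2012, 10, 30)))
print(g.snapshot("ex:n1", date(2012, 7, 20)))
```

In an RDF store the same idea is usually realised with one named graph per change set or per snapshot date, which keeps the data queryable with plain SPARQL.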
Hello,
we are happy to announce that the Billion Triples Challenge 2012
Dataset [1] was published yesterday.
The Billion Triple Challenge 2012 dataset consists of over a billion
triples. This year we used several seed sets: DBpedia, Datahub and
Tim Berners-Lee's FOAF file.
The Semantic
Scott,
On 09/08/2011 04:38 PM, M. Scott Marshall wrote:
It seems that DBpedia is a de facto source of URIs for geographical
place names. I would expect to find a more specialized source. I think
that I saw one mentioned here in the last few months. Are there
alternatives that are possibly more
Hi,
On 08/09/2011 02:24 PM, Hugh Williams wrote:
The http://dbpedia.org/sparql endpoint has both rate limiting on the number
of connections/sec you can make and restrictions on result-set size and
query time, as per the following settings:
[SPARQL] ResultSetMaxRows = 2000
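With a ResultSetMaxRows cap like the one above, a common client-side workaround is to page through results with LIMIT/OFFSET rather than asking for everything at once. A hedged sketch (it assumes the query carries a deterministic ORDER BY, and the function names are mine, not part of any endpoint's API):

```python
# Sketch: decorate a SELECT query with LIMIT/OFFSET pages so no single
# request exceeds the endpoint's ResultSetMaxRows cap.
PAGE = 2000  # matches the ResultSetMaxRows setting quoted above

def paged_queries(query, expected_total):
    """Yield one LIMIT/OFFSET-decorated query per page."""
    for offset in range(0, expected_total, PAGE):
        yield f"{query} LIMIT {PAGE} OFFSET {offset}"

for q in paged_queries(
        "SELECT ?s WHERE { ?s a foaf:Person } ORDER BY ?s", 5000):
    print(q)
```

Each generated query would then be sent over HTTP with whatever client you use, ideally with a short pause between requests given the connections/sec limit.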
Hi Christopher,
On 06/22/2011 10:14 AM, Christopher Gutteridge wrote:
Right now queries to data.southampton.ac.uk (e.g.
http://data.southampton.ac.uk/products-and-services/CupCake.rdf ) are made live,
but this is not efficient. My colleague, Dave Challis, has prepared a SPARQL
endpoint which
Hi Martin,
first let me say that I do think crawlers should follow basic politeness
rules (contact info in User-Agent, adhere to the Robot Exclusion Protocol).
However, I am delighted that people actually start consuming Linked Data,
and we should encourage that.
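Both politeness rules mentioned above are cheap to implement; Python's standard library even ships a robots.txt parser. A small sketch (the User-Agent string and robots.txt content are made-up examples):

```python
from urllib.robotparser import RobotFileParser

# Rule 1: identify yourself and give contact info in the User-Agent.
USER_AGENT = "ExampleLDCrawler/0.1 (+mailto:admin@example.org)"

# Rule 2: honour the Robot Exclusion Protocol. A real crawler would
# fetch robots.txt from the target host; here we parse example lines.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

print(rp.can_fetch(USER_AGENT, "http://example.org/data.rdf"))
print(rp.can_fetch(USER_AGENT, "http://example.org/private/x.rdf"))
```

Adding a delay between requests to the same host would cover the rate-limiting concern raised elsewhere in this thread.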
On 06/22/2011 11:42 AM, Martin
Hi Martin,
On 06/22/2011 09:08 PM, Martin Hepp wrote:
Please make a survey among typical Web site owners on how many of them have
1. access to this level of server configuration and
2. the skills necessary to implement these recommendations.
Agreed.
But the case we're discussing there's
to configure and
control the details of the crawling process. LDSpider is multi-threaded
and can be used to collect small to medium-sized datasets up to tens of
millions of triples.
The project is a co-operation between Andreas Harth at AIFB and Juergen Umbrich
at DERI. Aidan Hogan (DERI) and Robert Isele
Hi Dan,
Dan Brickley wrote:
OK, you goaded me into writing up what I was thinking about...
sorry didn't mean to be irritating - I'm very interested in the topic
myself, hence my reply.
So to take David's example,
Box 1: A journey exploring information about presidents, their kids
and