Dear all,
The Smart Data Analytics group [1] is happy to announce SANSA 0.2 - the
second release of the Scalable Semantic Analytics Stack. SANSA employs
distributed computing for semantic technologies in order to allow
scalable machine learning, inference and querying capabilities for large
knowledge bases.
Organization:
Jens Lehmann, University of Leipzig, Germany
Heiko Paulheim, University of Mannheim, Germany
Vojtěch Svátek, University of Economics, Prague, Czech Republic
Johanna Völker, University of Mannheim, Germany
--
Dr. Jens Lehmann
Head of AKSW group, University of Leipzig
Homepage: http
-learner-1-0/
Kind regards,
Lorenz Bühmann, Jens Lehmann and Patrick Westphal
[1] http://aksw.org
[2] http://ore-tool.net
[3] http://aksw.org/Projects/RDFUnit.html
[4] http://dl-learner.org/community/carcinogenesis/
[5] https://github.com/AKSW/DL-Learner-Protege-Plugin
[6] http
=
Call for Papers
1st International Workshop on Geospatial Linked Data (GeoLD 2014)
in conjunction with the annual SEMANTiCS conference
1st September 2014, Leipzig, Germany
http://geold.geoknow.eu
=
July 31, 2014
* Camera Ready Paper: Aug 01, 2014
Please submit at https://www.easychair.org/conferences/?conf=semantics2014.
Committee:
Sebastian Hellmann, Conference Chair
Christian Dirschl, Industry Chair
Andreas Blumauer, Industry Chair
Agata Filipowska, Scientific Chair
Harald Sack, Scientific Chair
--
Dr. Jens Lehmann
AKSW Group, Department of Computer Science, University of Leipzig
Hom
Dear all,
the new DBpedia overview article has been accepted at the Semantic Web
Journal!
The updated final version is available here:
http://svn.aksw.org/papers/2013/SWJ_DBpedia/public.pdf
Kind regards,
Jens
Am 24.06.2013 18:03, schrieb Jens Lehmann:
>
> Dear all,
>
> we ar
Camera ready version: April 15th, 2014
Workshop: May 25th or 26th, 2014
Organization:
Johanna Völker, University of Mannheim, Germany
Jens Lehmann, University of Leipzig, Germany
Heiko Paulheim, University of Mannheim, Germany
Harald Sack, University of Potsdam, Germany
Vojtěch Svátek, University of
regards,
Jens
--
Dr. Jens Lehmann
AKSW/MOLE Group, Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc
--
Kind regards,
Jens
--
Dr. Jens Lehmann
AKSW/MOLE Group, Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc
previous releases.
Many thanks to Sarven Capadisli, Michael Hausenblas and Tom Heath.
Kind regards,
Jens
--
Dr. Jens Lehmann
Head of AKSW/MOLE group, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc
ple of days for all DBpedia modules:
http://maven.aksw.org/repository/snapshots/org/dbpedia/extraction/
It's not tested, but should work.
Kind regards,
Jens
--
Dr. Jens Lehmann
AKSW/MOLE Group, Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key:
Hello Kingsley,
On 24.06.2011 18:08, Kingsley Idehen wrote:
> On 6/24/11 4:38 PM, Jens Lehmann wrote:
>
> Re. Linked Data do remember what's already in place (as part of the hot
> staging of this whole thing) at: http://dbpedia-live.openlinksw.com/live .
>
> When
ate measure
because there are several reasons why the size can change apart from
more extracted information):
1.8G   ./1.0
2.5G   ./2.0
7.6G   ./3.0rc
5.1G   ./3.0
6.0G   ./3.1
6.4G   ./3.2
7.3G   ./3.3
21G    ./3.4
32G    ./3.5
35G    ./3.5.1
34G    ./3.6
Kind regards,
Jens
--
Dr.
inkeddata.org/ldow2011/papers/ldow2011-paper02-coppens.pdf),
which would however introduce additional complexity.
Input/opinions on those issues are welcome (if there is a best practice
for this case, please let us know).
Kind regards,
Jens
--
Dr. Jens Lehmann
AKSW/MOLE Group, Department of C
ing we are doing is open for discussion and we welcome
suggestions.
I'll post other replies on the DBpedia mailing list to avoid too much
cross mailing list traffic.
Kind regards,
Jens
--
Dr. Jens Lehmann
AKSW/MOLE Group, Department of Computer Science, University of Leipzig
Homepage: ht
partners at the FU Berlin and
OpenLink as well as the LOD2 project [3] for their support.
Kind regards,
Jens
[1] http://aksw.org
[2] http://live.dbpedia.org
[3] http://lod2.eu
--
Dr. Jens Lehmann
AKSW/MOLE Group, Department of Computer Science, University of Leipzig
Homepage: http://www.jens
> dataset is damaged.
This is probably the same problem that has previously been reported on
the list. The fixed German files are here:
http://downloads.dbpedia.org/3.6/de/fixed_truncated_files/
Kind regards,
Jens
--
Dr. Jens Lehmann
AKSW/MOLE Group, Department of Computer Science, Universit
so available on the download server for future
reference:
http://downloads.dbpedia.org/3.6/de/fixed_truncated_files/
Kind regards,
Jens
--
Dr. Jens Lehmann
AKSW/MOLE Group, Department of Computer Science, University of Leipzig
Homepage: http://www.jens
issued to DBpedia to see how
> the system behaves.
>
> Is such a set publicly available? Would it be possible to have one?
An anonymous log excerpt for DBpedia 3.5.1 is available here:
ftp://download.openlinksw.com/support/dbpedia/
Kind regards,
Jens
--
Dr. Jens Lehmann
Head of AKS
ily coincide.
Kind regards,
Jens
[1] http://linkedgeodata.org
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc
y work when you use the form at http://dbpedia.org/sparql?
(You could also test via something like
wget -S -O- --header='Accept: application/sparql-results+xml' 'http://dbpedia.org/sparql?query=YOUR QUERY'.)
Kind regards,
Jens
--
Dipl. Inf. Jens Lehmann
Department of Computer
y=SELECT+%3fs+%3fo+%7b+%3fs+a+%3fo+.+%7d+LIMIT+5&id=cardio
> <http://202.73.13.50:56001/sparql?query=SELECT+%3fs+%3fo+%7b+%3fs+a+%3fo+.+%7d+LIMIT+5&id=cardio>
You can use a tool like SILK:
http://www4.wiwiss.fu-berlin.de/bizer/silk/
Kind regards,
Jens
--
Dipl. Inf. Jens Lehman
whether to include
checksums in the next release:
https://sourceforge.net/tracker/?func=detail&aid=3005725&group_id=190976&atid=935523
For now, I computed the md5sum for all_languages.tar:
http://downloads.dbpedia.org/3.5.1/all_languages.tar.md5
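For anyone who wants to check their download against that file, here is a minimal verification sketch in Python. It assumes the usual md5sum file layout ("<hex digest>  <filename>"); the function names are just illustrative.

```python
import hashlib

def md5_of(path, chunk_size=1 << 20):
    """Compute the MD5 hex digest of a file, reading it in chunks
    so large archives don't have to fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(archive_path, md5_file_path):
    """Compare a file's MD5 against the digest stored in a .md5 file."""
    with open(md5_file_path) as f:
        expected = f.read().split()[0]  # first token is the hex digest
    return md5_of(archive_path) == expected
```

On a Unix system, "md5sum -c all_languages.tar.md5" performs the same check.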
Kind regards,
Jens
--
Dipl. Inf. Jens
d be fixed now.
Kind regards,
Jens
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http
Hello,
Tom Morris wrote:
> On Tue, Apr 20, 2010 at 1:45 AM, Jens Lehmann
> wrote:
>
>> For some link datasets in DBpedia, there is no proper update mechanism
>> included in the DBpedia SVN repository. In such cases, the link data
>> sets are copied from the previous
it...
SVN can be accessed as follows:
http://sourceforge.net/scm/?type=svn&group_id=190976
However, the SVN contains the extraction framework (and not the data
sets generated by it), so you won't find another Geonames link file there.
Kind regards,
Jens
--
Dipl. Inf. Jens Lehman
n
efficiently compute the links between the two data sets to the SVN
repository, such that it can be run regularly.
Kind regards,
Jens
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://j
re:
http://downloads.dbpedia.org/$release/$language/$extractor_$lang.$format.bz2
(We might change it in the future, but it has been stable for a while.)
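Given that pattern, a download URL can be assembled mechanically. A small Python sketch (the extractor and format names are whatever a given release actually ships, e.g. the geo_de.nt style files listed elsewhere on this list):

```python
def dump_url(release, lang, extractor, fmt="nt"):
    """Build a DBpedia dump URL following the
    $release/$language/${extractor}_$lang.$format.bz2 pattern."""
    return ("http://downloads.dbpedia.org/"
            f"{release}/{lang}/{extractor}_{lang}.{fmt}.bz2")
```

For example, dump_url("3.6", "de", "geo") points at the German geo coordinates file of release 3.6.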
Kind regards,
Jens
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.o
/Documentation
In essence, you need to import the Wikipedia dumps, checkout the DBpedia
SVN and run the appropriate extract_*.php file. (It depends on what you
want to achieve.)
Kind regards,
Jens
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http:
Hello,
Jens Lehmann schrieb:
> Hi all,
>
> while the new DBpedia live extraction framework is in place, there is
> now a discussion regarding additional annotations made in doc subpages
> of Wikipedia infoboxes.
The discussion now takes place at:
http://en.wik
"name" does not stand for the name of the person, but rather for a
team in which the person played). It is not clear yet whether and when
the issue will be fixed.
Kind regards,
Jens
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jen
)#DBpedia_Template_Annotations
[2]http://jens-lehmann.org/files/2009_dbpedia_live_extraction.pdf
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc
ctor"). The result is printed to
stdout.
As mentioned previously, extract_full.php should be used for producing a
complete DBpedia release (but you need to use import.php to download the
corresponding Wikipedia dumps before and import them in MySQL databases).
Kind regards,
Jens
--
Dipl. In
Hello,
Alex schrieb:
> Hello,
>
> PHP Fatal error: Class 'ValidateExtractionResult' not found in
> C:\Users\Alex\Do
>
> cuments\DBpedia\extraction\extractors\ExtractorContainer.php on line 32
>
> From my previous correspondence with Jens Lehmann, I believ
angelog [2]).
Kind regards,
Jens
[1] http://www.mpi-inf.mpg.de/yago-naga/yago/
[2] http://wiki.dbpedia.org/ChangeLog
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.j
er, at the moment we have some urgent items on our ToDo list, so we
cannot make any promise on whether and when we implement it.
Kind regards,
Jens
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-l
currently store all types of an entity (Place, Area,
PopulatedPlace) for an entity and not just the most specific one
(PopulatedPlace), you could also calculate the depth by just counting
the number of classes. This wo
ato/kes2008-AKS_Track.pdf
[5]http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?tp=&arnumber=245&isnumber=190
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www
Hello,
Paul Houle schrieb:
> Jens Lehmann wrote:
> I ran it through a converter last night and got a document that,
> like yours, contained blank nodes. These are implicit in the RDF-XML,
> but need to be named in order to be serialized as NT. That's one
> su
Hello,
Paul Houle schrieb:
>
> Any chance we could get the OWL ontology in NT as well?
It can be converted of course:
http://downloads.dbpedia.org/3.2/en/dbpedia-ontology.nt
Kind regards,
Jens
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Ho
map of the Northeastern US.
The OpenStreetMap project [1] faces the same problems and solves them
using "ways" for large objects and "nodes" for small objects. You might
also be interested in our new LinkedGeoData effort [2].
Kind regards,
Jens
[1] http://www.ope
Hello,
Michael Haas schrieb:
> Jens Lehmann wrote:
[...]
>
> Are you going to use the OAI harvester which requires a password?
Yes.
> I'm currently using the DBPedia framework to extract the Company
> Infoboxes using the LiveWikipedia collection. What new approach ar
hich will use a different mechanism
than the current mapping based extractor. However, we will take your
changes into account if possible.
Kind regards,
Jens
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: ht
oach, i.e.
there is a manually maintained set of mapping rules. For instance the
attribute $foo could be mapped to
http://dbpedia.org/ontology/$differentFromFoo. The first approach
extracts more data and the latter one extracts cleaner data. In
particular different spellings of an attribute are often
Hello,
Petite Escalope schrieb:
> Hello,
> I need to make the profile of all countries of the world (area,
> population, currency, etc...)
>
> So I would like to have informations countained in infosboxes of all
> wikipedia country pages. (look at this exemple:
> http://en.wikipedia.org/wiki/
addresses, i.e. you can
ignore this message.
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc
--
Re
ove "tricks", but ...)
That is URL encoding. There should be a urldecode() method available for
your programming language to reverse the encoding process.
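In Python, for instance, the standard library covers both directions (a minimal illustration):

```python
from urllib.parse import quote, unquote

encoded = "Beijing%20Zoo"   # %20 is the percent-encoded space
decoded = unquote(encoded)  # reverses the URL encoding
again = quote(decoded)      # re-encodes, giving "Beijing%20Zoo" back
```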
Kind regards,
Jens
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http
g.de> .
There are no requirements apart from PHP 5 (command line), an internet
connection and Wikipedia not being offline. I just tested it and can
confirm that it works. Can you try again? If it does not work, can you
make sure that http://en.wikipedia.org/wiki/Special:Export/Leipzig is
not
DefaultGraph("http://dbpedia.org");
ResultSet rs = qe.execSelect();
ResultSetFormatter.out(System.out, rs, yourSparqlQuery);
qe.close();
Kind regards,
Jens
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: ht
rks up
to a URL length of 4092.
Kind regards,
Jens
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-leh
for POST or PUT requests
>* Illegal character in hostname; underscores are not allowed
[...]
>
> Where am i doing wrong?
Did you try to fix all the issues reported by Virtuoso above (by setting
further curl parameters)?
Kind regards,
Jens
--
Dipl. Inf. Jens Lehmann
Department of Comput
Hello,
Jens Lehmann schrieb:
> Hello,
>
> $headers = array("Content-Type: ".$this->contentType);
> $c = curl_init();
> curl_setopt($c, CURLOPT_RETURNTRANSFER, 1);
> curl_setopt($c, CURLOPT_URL, $url);
curl_setopt($c, CURLOPT_HTTPHEADER, $headers);
lication/sparql-results+xml",
"application/sparql-results+json", "text/rdf+n3" etc. depending on what
you need. The result of your query is in $contents.
Kind regards,
Jens
--
Dipl. Inf. Jens Lehmann
D
plain text description of an
entity.
I guess you need this for a statistical analysis of Wikipedia.(?)
Kind regards,
Jens
--
Dipl. Inf. Jens Lehmann
Department of Com
that you won't have a guarantee
to get the distance this way: Say you have a complete graph with three
vertices a, b, root. If you ask for the distance between a and b using
your method it would return 2 instead of one. (There are also other
counterexamples.) The reason is that going towards the
h works well
for distances up to 3/4. (You can try SPARQL queries against the
official DBpedia endpoint to test this.)
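One way to run such tests is to generate, for each candidate distance k, a basic graph pattern with k triple patterns and ASK whether it matches. A Python sketch of the query builder (the helper and variable names are illustrative, and only directed paths are covered; undirected distance needs both orientations per edge):

```python
def path_ask_query(a, b, k):
    """Build a SPARQL ASK query testing whether resources a and b
    are connected by a directed path of exactly k edges."""
    # intermediate nodes ?v1 .. ?v{k-1} between the two fixed resources
    nodes = [f"<{a}>"] + [f"?v{i}" for i in range(1, k)] + [f"<{b}>"]
    triples = " . ".join(
        f"{nodes[i]} ?p{i} {nodes[i + 1]}" for i in range(k))
    return f"ASK {{ {triples} }}"
```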
Kind regards,
Jens
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.j
ming the
Wikipedia database dump is loaded). I'm not sure whether this is a good
starting point, but you are of course free to have a look at it. :-)
Kind regards,
Jens
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehm
me both in extending
> my SPARQL knowledge and in finding a better and simpler solution for
> the problem i'm trying to solve.
For general advice on SPARQL documentation, tutorials etc., this
probably isn't the right group (please ask at the W3C Semantic Web
mailing list, but make sur
ki/import.php and then
run your extractor on all articles using /extraction/extract_dataset.php.
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-le
a DBpedia
extractor (which is not too hard).
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc
e ontology (in this case the range "integer" of a
property). We are working on providing a strict data set.
Kind regards,
Jens
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.
> inconsistencies, unless of course you check all the inferred types for
> the instances?
Due to the amount of data any reasoning tasks are challenging, but not
impossible (maybe a challenge for approximate, incomplete inference
engines; reasoning with large ABoxes etc.).
K
consistency checking?
> If not, I would opt for removing the restrictions.
What is the added value in removing the restrictions?
Kind regards,
Jens
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-l
that's the one. There were a few other cases of that
> elsewhere as well.
The range issue is (hopefully) fixed now.
Kind regards,
Jens
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG
you could get some undesired
> inferences.
In the future, there will be a user interface for specifying
domains/ranges. (Georgi is working on it.) We hope that the quality of
the schema will increase over time.
Kind regards,
Jens
--
Dipl. Inf. Jens Lehmann
Department of Computer Scie
action: a loose one
and a strict one. In the strict one, it is guaranteed that the data
complies with the ranges specified in the ontology schema. Currently, only
the loose (probably inconsistent) one is provided.
Kind regards,
Jens
[1] http://www.w3.org/TR/owl-guide/#owl_unionOf
--
Dipl. Inf. Jens
t extract information from the textual context of
articles in the sense that it tries to understand the semantics of the
text. It extracts e.g. links between articles and uses the first part of
the text as abstracts for the article etc., but does not use NLP parsers
(yet).
Kind regards,
Jens
se),
Virtuoso, and Sesame:
http://www4.wiwiss.fu-berlin.de/benchmarks-200801/
Kind regards,
Jens
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.or
the fact that RDFS inferencing in Virtuoso is only
done if rdf:type is explicitly given in the query.
Kind regards,
Jens
[1] http://www.mpi-inf.mpg.de/~suchanek/downloads/yago/
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.o
r "YAGO". It is at the
bottom of the second list. (It was also loaded in the official SPARQL
endpoint already.)
The data sets might be updated soon, because there are apparently still
some encoding issues.
Kind regards,
Jens
[1] http://wiki.dbpedia.org/Downloads31
--
Dipl. Inf
.nt
geo_zh.nt
geo_sv.nt
geo_ru.nt
geo_pt.nt
geo_pl.nt
geo_no.nt
geo_nl.nt
geo_ja.nt
geo_it.nt
geo_fr.nt
geo_fi.nt
geo_es.nt
geo_en.nt
geo_de.nt
flickr_en.nt
homepage_fr.nt
homepage_en.nt
homepage_de.nt
externallinks_en.nt
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of
Kobilarov, Christian Becker, the OpenLink team, and all
other contributors for their DBpedia support.
Kind regards,
Jens Lehmann
[1] http://wiki.dbpedia.org/Downloads
[2] http://wiki.dbpedia.org/Changelog
[3] http://aksw.org
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University
gt;> The
>> Beijing_Zoo page [2] does not have an infobox, so you won't get much
>> building specific information there. You could watch out for common
>> Wikipedia infoboxes related to buildings (if those exist) to find
>> typical properties.
>
> I fea
.dbpedia.org/Downloads
[2] http://en.wikipedia.org/wiki/Beijing_Zoo
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc
ix this issue soon.
Kind regards,
Jens
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc
a few cases manually. If you have any further additions to
this file, drop me a message.
Kind regards,
Jens
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens
uded in the next release.
Kind regards,
Jens
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_lehmann.asc
edia webservice at runtime. You can do one request per second.
Technically, you should use pattern matching functions on the Wiki
markup. Also try to use a common vocabulary for references.
If you are very fast to implement this, it may go into the next DBpedia
release, which is currently in prepar
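A rough sketch of both suggestions above, regex-based matching on the wiki markup and the one-request-per-second limit, in Python (the regex is deliberately simplistic and the helper names are made up; real infobox markup with nested templates needs more care):

```python
import re
import time

# matches "{{Infobox Zoo" or "{{Infobox_Zoo" and captures the template name
INFOBOX_NAME = re.compile(r"\{\{Infobox[ _]([^|}\n]+)", re.IGNORECASE)

def infobox_names(wiki_markup):
    """Return the infobox template names found in raw wiki markup."""
    return [name.strip() for name in INFOBOX_NAME.findall(wiki_markup)]

def polite_map(urls, fetch, delay=1.0):
    """Apply fetch() to each URL, pausing between calls so the
    endpoint sees at most one request per `delay` seconds."""
    results = []
    for i, url in enumerate(urls):
        if i:
            time.sleep(delay)
        results.append(fetch(url))
    return results
```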
ted manually, but I do not know how
accurate the WordNet links are in general. Apart from this, you can
(with moderate effort) contribute to DBpedia and improve the WordNet
extractor if you like.
Kind regards,
Jens
[1] http://wiki.dbpedia.org/Downloads30
--
Dipl. Inf. Jens Lehmann
Department
mport the dumps and extract the data sets. Of
course, you can choose to extract only the data sets and language
versions you need to reduce the runtime of the extraction script.
Kind regards,
Jens
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://w
ring up a few
results, e.g. this one:
ftp://ftp.cs.wisc.edu/machine-learning/shavlik-group/ilp07wip/ilp07_damato.pdf
Kind regards,
Jens
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann
) serve you as a
test bed for assessing Virtuoso performance and stabilising further.
Kind regards,
Jens
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: h
Hello,
Kingsley Idehen wrote:
> Jens Lehmann wrote:
>>
>
> We are completing a similar test here based on my response earlier this
> week.
Was the test successful?
> I assume you are trying to load the Yago Class Hierarchy? If so, let us
> finish our investigation
t? If I understand correctly, there shouldn't be any significant
performance drawback for SPARQL queries that do not explicitly ask for
inference support (using "sparql define input:inference $ruleset").
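Opting in is then just a matter of prefixing the query with that pragma. A trivial helper (the ruleset name used below is a hypothetical placeholder for whatever name the rules were registered under in Virtuoso):

```python
def with_inference(sparql_query, ruleset):
    """Prefix a SPARQL query with Virtuoso's inference pragma so the
    endpoint applies the given rule set when answering it."""
    return f'define input:inference "{ruleset}"\n{sparql_query}'
```

For example, with_inference("SELECT ?s WHERE { ?s a ?o }", "urn:example:ruleset") produces a query that is answered with the rule set applied, while the unprefixed query is not.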
Kind regards,
Jens
--
Dipl. Inf. Jens Lehmann
Department of Computer Scienc
Hello,
Omid Rouhani wrote:
> Hi,
>
> I'm curious about the "Cleanded Wikipedia Category Class (CWCC)
> Hierarchy" dataset.
> I read the quite short description available at
> "http://wiki.dbpedia.org/Downloads#cleandedwikipediacategoryclass(cwcc)hierarchy",
> however is there any other documenta
tp://www4.wiwiss.fu-berlin.de/benchmarks-200801/ for comparisons
between them.
Jens
[*] http://sourceforge.net/projects/virtuoso
--
Dipl. Inf. Jens Lehmann
Department of Computer Science, University of Leipzig
Homepage: http://www.jens-lehmann.org
GPG Key: http://jens-lehmann.org/jens_le
Hello,
Michael K. Bergman schrieb:
> Hi Richard,
>
> Richard Cyganiak wrote:
>
>> Interesting project. Just a word of caution: Note that DBpedia isn't
>> an ontology. From an ontology point of view, DBpedia is just a bunch
>> of instances, without a nice well-engineered class hierarchy. Two
Chris Bizer, Richard
Cyganiak, Georgi Kobilarov, the OpenLink team, and many other
contributors for their DBpedia support.
Best regards,
Jens Lehmann
[1] http://wiki.dbpedia.org/Downloads
[2] http://wiki.dbpedia.org/Changelog
[3] http://aksw.org
--
Dipl. Inf. Jens Lehmann
Department of Com