Yes. You only got the assertions between each individual and its type.
The type system is a DAG (mostly a tree, but there are a few nodes with
multiple parents). In order to get the relationships between types (subclass),
you also need to load the ontology, which comes in an OWL file.
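A rough sketch of what reading those subclass links out of an OWL file looks like, using only the standard library. The inline snippet stands in for the real dbpedia.owl, and the helper names are made up for illustration:

```python
# Hypothetical sketch: pull rdfs:subClassOf links out of an OWL (RDF/XML)
# document such as dbpedia.owl. The tiny inline sample below stands in
# for the real ontology file.
import xml.etree.ElementTree as ET

RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"
RDFS = "{http://www.w3.org/2000/01/rdf-schema#}"
OWL = "{http://www.w3.org/2002/07/owl#}"

sample = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                    xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
                    xmlns:owl="http://www.w3.org/2002/07/owl#">
  <owl:Class rdf:about="http://dbpedia.org/ontology/Actor">
    <rdfs:subClassOf rdf:resource="http://dbpedia.org/ontology/Artist"/>
  </owl:Class>
  <owl:Class rdf:about="http://dbpedia.org/ontology/Artist">
    <rdfs:subClassOf rdf:resource="http://dbpedia.org/ontology/Person"/>
  </owl:Class>
</rdf:RDF>"""

def subclass_map(rdf_xml):
    """Map each class URI to the set of its direct superclass URIs."""
    parents = {}
    for cls in ET.fromstring(rdf_xml).iter(OWL + "Class"):
        uri = cls.get(RDF + "about")
        for sup in cls.iter(RDFS + "subClassOf"):
            parents.setdefault(uri, set()).add(sup.get(RDF + "resource"))
    return parents

def ancestors(parents, uri):
    """Walk the (possibly multi-parent) DAG upward from one class."""
    seen, stack = set(), [uri]
    while stack:
        for p in parents.get(stack.pop(), ()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen
```

Since a few nodes have multiple parents, walking ancestors as a DAG (with the `seen` set) rather than a tree is the safe choice.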
Cheers,
(apologies for cross posting)
Hi all,
We would like to announce a maintenance release of DBpedia Spotlight v0.6 -
Shedding Light on the Web of Documents. DBpedia Spotlight looks for ~3.5M
things of ~320 types in text provided as input, and tries to link them to
their global unique identifiers in
Maybe also a link on /Downloads37?
(from a cell phone)
On Jul 1, 2012 7:58 PM, Jona Christopher Sahnwaldt j...@sahnwaldt.de
wrote:
Hi Mike,
you can download the latest OWL file here:
http://mappings.dbpedia.org/server/ontology/dbpedia.owl
It contains the latest version of all classes and
SnipCFP: (Submission deadline in ~4 weeks, Keynote speaker: Fabian M.
Suchanek, Topics: NLP, Linked Data, IR, ML)
[Apologies for cross-posting]
Workshop on Web of Linked Entities (WoLE2012)
http://wole2012.eurecom.fr
In conjunction with the 11th
Hi Fuyuko, done. Christopher has given you rights. Happy mapping!
On Fri, Jun 22, 2012 at 10:54 AM, Fuyuko Matsumura
fuyuko.matsum...@gmail.com wrote:
Hi,
I'm planning to modify the mapping wiki of DBpedia Japanese.
I'd appreciate it if you could authorize my account as Editor.
My
.
On Fri, Jun 1, 2012 at 6:20 PM, Pablo Mendes pablomen...@gmail.com
wrote:
This sounds good to me. Marking something to be parsed as you'd
parse
English numbers is easy enough to get.
Cheers,
Pablo
On Fri, Jun 1, 2012 at 6:06 PM, Jona Christopher
Perhaps intermediate node mapping?
http://mappings.dbpedia.org/index.php/How_to_edit_DBpedia_Mappings#Intermediate_Node_Mapping
Cheers,
Pablo
On Thu, Jun 7, 2012 at 4:49 PM, Marco Fossati hell.j@gmail.com wrote:
Hi Jona,
Thank you for the quick bug fix.
However, I tested the two
Hi Felix,
What happens from command line?
hg clone
http://dbpedia.hg.sourceforge.net:8000/hgroot/dbpedia/extraction_framework
Also, I'd personally suggest going with IntelliJ IDEA since the scala
plugin there seems to work better. I think Jona uses Eclipse, though.
Did you try the trick
it because there is no project yet.
Thanks,
Felix
From: Pablo Mendes [mailto:pablomen...@gmail.com]
Sent: Wednesday, June 6, 2012 14:47
To: Burkhardt, Felix
Cc: dbpedia-discussion@lists.sourceforge.net
Subject: Re: [Dbpedia-discussion] Checking out the DBpedia Extraction
Framework
Hi Felix
work better on DBPedia.
Also we have projects that require NER and ontological classification and
I have the feeling DBPedia is just the right thing for this ;-)
Cheers,
Felix
From: Pablo Mendes [mailto:pablomen...@gmail.com]
Sent: Wednesday, June 6, 2012 14:47
To: Burkhardt, Felix
Cc
[Apologies for cross-posting]
Workshop on Web of Linked Entities (WoLE2012)http://wole2012.eurecom.fr
In conjunction with the 11th International Semantic Web Conference
(ISWC2012), Boston, 11/12 November 2012
Most of the
Separation of schema and instances is a good solution for this symptom. The
root of the problem seems to be that when the amount of data grows, just
returning everything you know about something is not going to cut it. A
solution for that seems to be still missing.
Perhaps we need a mechanism of
Add a configuration value decimalSeparator whose value may be "dot" or
"comma" ("," or "."). Bit hard to read... We would also need a
configuration value groupSeparator.
+1 to this. Accepted values:
- dot or .
- comma or ,
- space or " "
(it is the case that groupSeparators are spaces sometimes)
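A minimal sketch of how the two proposed settings could drive parsing: strip group separators, normalize the decimal separator, then hand the string to float(). The function name and defaults are illustrative, not part of the framework:

```python
# Hypothetical sketch of the proposed decimalSeparator / groupSeparator
# settings: normalize a raw infobox number string before parsing it.
def parse_number(raw, decimal_separator=".", group_separator=","):
    s = raw.strip().replace(group_separator, "")
    if decimal_separator != ".":
        s = s.replace(decimal_separator, ".")
    return float(s)

# "1,234.5" with the defaults (dot decimal, comma groups)
# "1.234,5" with decimal_separator="," and group_separator="."
# "1 234,5" with decimal_separator="," and group_separator=" "
```

All three example styles above normalize to the same value, which is the point of making both separators configurable per language.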
Portuguese will load what 3.8 generates, and generate compatible releases
from that point on.
On Thu, May 24, 2012 at 10:10 AM, Marco Amadori marco.amad...@gmail.com wrote:
2012/5/24 Alexandru Todor bakara...@gmail.com:
The main branch should set the general direction. I think if you switch
I think I have seen the same report for Portuguese.
Dimitris, does it work for Greek?
Cheers
Pablo
On May 18, 2012 10:04 PM, Roland Cornelissen rol...@metamatter.nl wrote:
Hi,
I tried to use the MappingTool (for Dutch) and noticed that the
OntologyBrowser does not show any classes etc.
Am I
+1 on the proposed solution
+1 on the beer*
Also, -ambiguous can contain triples where the property is defined as
ObjectProperty and the value is a String. I've heard that there is a tool
out there called DBpedia Spotlight (or something like that) that could be
used to disambiguate these links
Hi Aleksander, (cc. discussion list for the DBpedia Spotlight project)
The links are provided here
http://wiki.dbpedia.org/spotlight/isemantics2011/evaluation
The content of each article is not directly reproduced in our pages because
the copyright notice of New York Times does not allow us to
Hi Christoph,
You downloaded the instance labels. I think you want the ontology labels.
Here: http://downloads.dbpedia.org/3.7/dbpedia_3.7.owl.bz2
Cheers,
Pablo
On Mon, May 14, 2012 at 5:13 PM, Christoph Lauer dbpe...@online.ms wrote:
Hello everyone,
I'm writing a program which tries to
Hi Marco,
This smells like another thread:
http://www.mail-archive.com/dbpedia-discussion@lists.sourceforge.net/msg02767.html
Would it be possible to emulate Wikipedia renderer engine behavior? It
is written in PHP, so it should be a piece of cake to implement it in
powerful Scala.
So... are
Marco,
My pointer to the thread was with reference to the Sweble parser usage.
Rendering pages the way Wikipedia's PHP engine does means either using
their code or implementing template resolution. We've been doing the first;
in that thread we suggested doing the second.
Fixing this in the code
, 2012 at 6:18 PM, Pablo Mendes pablomen...@gmail.com
wrote:
Hi Marco, Jona,
Shouldn't the mapping be the king or this could be wrong in other ways?
Absolutely!!! User-generated content is our business.
Let me repeat, with added emphasis:
When there is a link, then the object
Hi all,
We have been watching with excitement as the international DBpedia
community races to higher coverage and a more homogeneous RDF view of the
facts on Wikipedia. New labels are being added to the ontology for
multi-language support, and dozens of mappings are being added so that more
Hi Kingsley,
You're right that there's much to be embellished there. It was put together
in a self-imposed least-effort-max-effect regimen.
It would be really nice if we could easily trace and aggregate the
contributions by country, but that would probably go over my effort
budget. Unless
Google Spreadsheet.
The issue is not storing, but getting it from the wiki.
Cheers
Pablo
On May 4, 2012 8:50 PM, Kingsley Idehen kide...@openlinksw.com wrote:
On 5/4/12 2:17 PM, Pablo Mendes wrote:
Hi Kingsley,
You're right that there's much to be embellished there. It was put together
be very
pleased to have some hints or examples of the right command.
cheers,
roberto
On 18/04/2012 10:07, Pablo Mendes wrote:
Well in your message you were trying to run Index, not Server. Can you
please try again and copy+paste the command line and results?
On Apr 17, 2012 11:17 AM
://dbpedia.hg.sourceforge.net/hgweb/dbpedia/lookup/file/475a32257232/src/main/scala/org/dbpedia/lookup/util/DBpedia2Lucene.scala
On Sun, Apr 15, 2012 at 8:39 PM, Roberto Mirizzi
roberto.miri...@gmail.com
wrote:
On 15/04/2012 20.24, Pablo Mendes wrote:
It seems
, rather than for testing out some
prototype.
Cheers,
Pablo
On Sun, Apr 15, 2012 at 8:06 PM, Roberto Mirizzi
roberto.miri...@gmail.com wrote:
Why is it so slow in providing results (20, 30 seconds and more)? :-(
Last time I used it, it was great...
cheers,
roberto
On 21/03/2012 16.53, Pablo
:
On 15/04/2012 20.24, Pablo Mendes wrote:
It seems to have gotten too popular. :)
We have been trying to shift resources around, as many of our servers
are a bit under pressure. Hopefully we'll manage to find a solution soon.
I'd recommend considering hosting your own mirror
Hi all,
If you are a student (BSc,MSc,PhD) and would like to get some Google
funding to spend the northern summer or the southern winter coding away
with us from the DBpedia Spotlight project, I just wanted to remind you
that the deadline is this Friday (less than 2 days).
You can apply at:
Hi Jaidev,
I assume you are a prospective GSoC applicant for DBpedia Spotlight? If so,
please discuss your application at dbp-spotlight-developers. Please check
this and other important information from our page:
http://www.google-melange.com/gsoc/org/google/gsoc2012/dbpediaspotlight
Answering
Hi David,
What about downloading with wget?
Cheers,
Pablo
On Thu, Mar 29, 2012 at 5:33 PM, David Gösenbauer
david.goesenba...@gmail.com wrote:
Hi dbpedia-community!
I'm experiencing heavy problems trying to get the extraction framework
to run. The step I'm stuck at is downloading the
/me wonders if just removing spaces and stopping on the first char that is
not either a number or a number separator wouldn't fix at least the most
obvious ones?
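That heuristic, sketched out (function name and the separator set are illustrative only):

```python
# Sketch of the heuristic floated above: drop spaces, then read characters
# until the first one that is neither a digit nor a number separator.
def leading_number(text, separators=".,"):
    s = text.replace(" ", "")
    out = []
    for ch in s:
        if ch.isdigit() or ch in separators:
            out.append(ch)
        else:
            break
    return "".join(out)
```

It would indeed catch the most obvious cases (trailing units, footnote markers), while genuinely ambiguous values would still need the full parser.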
On Wed, Mar 28, 2012 at 9:25 AM, Dimitris Kontokostas jimk...@gmail.com wrote:
Some issues are sticky and coming back over and over
Luis,
For some reason I assumed he was talking about DBpedia French already. But
I shouldn't have. You are right, we should point out that there is more
data and a SPARQL endpoint available for French here:
http://fr.dbpedia.org/
Cheers,
Pablo
2012/3/24 Luis Daniel Ibáñez González
Thanks for the report. It's restarting.
On Wed, Mar 21, 2012 at 4:31 PM, Nemanja Vukosavljevic
nemanja.vukosavlje...@gmail.com wrote:
Hi guys,
It looks like that Dbpedia lookup service is down but only the Keyword
Search API. It gives back HTTP 500 error. On the other hand, Prefix Search
Hi Amit,
We have been trying to set up an instance of DBpedia to continuously
extract data from Wikipedia dumps/updates. While
We would like to do the same for the DBpedia Portuguese. If you can share
any code, it would be much appreciated.
Cheers
Pablo
On Mar 19, 2012 10:38 AM, Amit Kumar
few mails in the last week about the same. If you are
facing some particular issue in particular with DBpedia Portuguese, do let
me know. If we have faced the same, we would let you know.
Regards
Amit
On 3/19/12 3:45 PM, Pablo Mendes pablomen...@gmail.com wrote:
Hi Amit,
We have been
Dear fellow DBpedians,
I am very excited to announce that DBpedia Spotlight has been selected for
the Google Summer of Code 2012!!!
If you know energetic students (BSc,MSc,PhD) interested in working with
DBpedia, text processing, and semantics, please encourage them to apply!
/Mapping:Infobox_Automobile_generation
[2]
http://mappings.dbpedia.org/server/mappings/en/extractionSamples/Mapping:Infobox_aircraft_type
On Mon, Mar 12, 2012 at 23:24, Pablo Mendes pablomen...@gmail.com wrote:
Hi emijrp,
If by underdeveloped you mean where does DBpedia need more data, then
you
should take a look
Marco,
I have not found the usernames Marco Fossati, MarcoFossati, Michele
Barbera or MicheleBarbera.
Please send us their usernames on mappings.dbpedia.org so that we can give
them rights to edit. I did find MarcoAmadori, though, and I have given
rights to that user. Please confirm that it is
Hi emijrp,
If by underdeveloped you mean where does DBpedia need more data, then you
should take a look at the mappings statistics for the language of your
interest:
http://mappings.dbpedia.org/index.php/Mapping_Statistics
If by underdeveloped you mean where does DBpedia need some coding, then
I
Nemanja,
Thank you for your message. Our machine running the lookup is under heavy
stress. We're currently investigating the issue. We'll let you know once we
manage to work it out.
Cheers,
Pablo
On Wed, Feb 29, 2012 at 9:20 AM, Nemanja Vukosavljevic
nemanja.vukosavlje...@gmail.com wrote:
Hi
I've copied and pasted your command and it worked for me. Firewall problem
on your side?
Cheers,
pablo
On Wed, Feb 29, 2012 at 2:46 PM, amulya rattan talk2amu...@gmail.com wrote:
Hi Mohamed,
Thanks for the response. I have been doing:
hg clone
Done. Happy mapping!
On Thu, Feb 16, 2012 at 2:57 PM, haytham alfeel haytham_alfe...@hotmail.com
wrote:
Hi,
My name is Haytham Al-Feel. I am a staff member at Faculty of Computers
and Information, Fayoum University, Egypt and I am now a Post-Doc at the
Corporate Semantic Web Research
Ciao Riccardo,
Thanks for offering help. Please register at mappings.dbpedia.org and send
us your username. After we give you rights, you will be able to map
infoboxes of Organization pages to their corresponding Class/Properties in
the DBpedia Ontology. You can then check out how many mappings
Hi Jean,
In general, there is no real standard way to fill in infobox values. It can
contain numbers, links and even other templates. This lack of regularity
makes parsing it a pain.
For example, in English, the field is filled like this:
|lat_deg = 50 |lat_min = 44 |lat_sec = 02.37
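Assuming the three fields have been pulled out of the template source, the arithmetic itself is simple. This regex-based sketch is illustrative, not the extraction framework's actual parser:

```python
# Illustrative only: pull |lat_deg= / |lat_min= / |lat_sec= fields out of
# infobox wikitext and combine them into decimal degrees.
import re

def dms_to_decimal(wikitext):
    fields = dict(re.findall(r"\|\s*lat_(deg|min|sec)\s*=\s*([\d.]+)", wikitext))
    deg = float(fields.get("deg", 0))
    minute = float(fields.get("min", 0))
    sec = float(fields.get("sec", 0))
    return deg + minute / 60 + sec / 3600
```

The painful part, as noted above, is everything this sketch ignores: fields that contain links, nested templates, or hemisphere markers instead of plain numbers.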
works fine.
Is there another way to get to this information (ref_counts.nt and
surface_forms.nt)
Cheers,
Batica
--- On Mon, 1/30/12, Pablo Mendes pablomen...@gmail.com wrote:
From: Pablo Mendes pablomen...@gmail.com
Subject: Re: [Dbpedia-discussion] DBpedia Lookup lucene index
The source code is here, under the same license as the extraction
framework. Kudos to Max Jakob for (re)implementing it.
http://dbpedia.hg.sourceforge.net/hgweb/dbpedia/lookup/
The string to URI associations are available from the DBpedia Spotlight
lexicalizations dataset:
Enno,
Anja beat me to it. :) You should be good to go.
GoogleTranslate
welcome and happy mapping!
/GoogleTranslate
Cheers,
Pablo
On Sun, Jan 29, 2012 at 3:35 PM, Enno Meijers
enno.meij...@bibliotheek.nl wrote:
Hello,
I'm looking into the extraction framework and mappings
Hi Imil,
Just adding to Mariano's advice: in case having the data offline is enough
for you, you could also download the extraction framework and extract the
triples yourself.
http://dbpedia.hg.sourceforge.net/hgweb/dbpedia/extraction_framework
http://wiki.dbpedia.org/Documentation
Cheers,
Benjamin,
Are you guys associated with the GeneWiki project?
http://en.wikipedia.org/wiki/Gene_Wiki
If not, talking to them might be a good way to start. A lot of gene data is
pulled into Wikipedia by bots such as
http://en.wikipedia.org/wiki/User:ProteinBoxBot
Cheers,
Pablo
On Wed, Jan 11,
projects) flowing out as linked data via
DBpedia.
-Ben
On Jan 20, 2012, at 4:08 AM, Pablo Mendes pablomen...@gmail.com wrote:
Apparently the only configuration that works with the maven scala plugin is
the one performed within the pom.xml using the jvmArgs tag. There are
examples in our repo.
Best
Pablo
On Dec 23, 2011 11:12 AM, Jairo Sarabia jairo.sara...@appstylus.com
wrote:
Hello from Appstylus S.L. in Spain,
I
it to work. In the
meanwhile, if some one already has the working values, it would be a big
help.
Plus, do you know anyone running the DEF on Hadoop?
Thanks
Amit
On 12/1/11 4:39 PM, Pablo Mendes pablomen...@gmail.com wrote:
Hi Amit,
I tried giving jvm options such as -Xmx to the 'mvn
hope this helps identify weak spots in Portuguese dbpedia/wikipedia
or at least can be redirected by someone to someone that cares (Pablo
Mendes?). At the moment I don't know if there are more cases like this
or how many of the articles are like this, I'll go on testing and report
back.
Best
Done. Happy mapping!
2011/11/13 Václav Zeman p...@palmovka.cz
Hi,
Please, could you assign rights for cs mapping editing to the account
matouj10? This member belongs to our Czech DBpedia team.
Thanks
Vaclav Zeman
Robert,
My guess is that it is a problem with parsing templates when they are in
property values, as you already seem to have found out.
About your initial question:
Where is that raw wikipedia infobox dataset??
I imagine that people decided to abort outputting templates within the
values as
Hi Mariano,
I don't have answers for everything, but here goes my 2c. (split by subject)
MAPPING GUIDE
is there any policy for creating DBpedia classes or properties?. For
example, we missed the class BullFighter, we checked there was no
other similar class, and we created it.
The only
Hi Mariano,
I don't have answers for everything, but here goes my 2c. (split by subject)
STATISTICS
In the statistics page (e.g. Spanish at
http://mappings.dbpedia.org/server/statistics/es or
http://mappings.dbpedia.org/server/templatestatistics/es/INFOBOXNAME)
we get information
about the
Hi Mariano,
I don't have answers for everything, but here goes my 2c. (split by subject)
EXTRACTION
It seems that the extraction process reads the properties found in the
infobox instances, without checking if those properties are in the
infobox definition. is that so?
I think so.
All wiki
Dimitris,
We could also create an I18n FAQ page for similar questions. Maybe the
Spanish guys can gather all their questions in a page (i.e. [2]) and we
could help them write the answers :-)
Great idea! A sort of one act of kindness generates another, or pay it
forward. Mariano, Oscar, can
the problem is amplified.
Apart from fixing the parsers, we could also create tools to find such
errors and point them to Wikipedia editors
Dimitris
On Mon, Nov 7, 2011 at 10:36 PM, Pablo Mendes pablomen...@gmail.com wrote:
Hi all,
First thing, thanks to Zsíros for pointing out the error
Hi all,
First thing, thanks to Zsíros for pointing out the error, to the DBpedia
co-founder Sören for his quick response - can we assign bugs to you too? :P
- and to our i18n pioneer Dimitris for looking deeper into the issue.
Dimitris has a point there. That is not a valid number. However, maybe
Hi Lushan,
Thanks for reporting the issue.
You can try to find the answer by comparing the versions of the Wikipedia
page as of 3.6 release and as of 3.7 release. The exact date is on the
mapping wiki. If the infobox changed, this could explain the difference,
and adjusting the mapping on the
You have now been made editor.
Happy mappings!
Cheers
Pablo
On Oct 31, 2011 7:57 PM, Arup Sarkar tella...@gmail.com wrote:
Hello Mr. Mendes,
Thanks for the reply.
My username is Arup and user ID is 233.
Arup...
On Mon, Oct 31, 2011 at 7:43 PM, Pablo Mendes pablomen...@gmail.com wrote
I meant: happy mapping! :)
Cheers
Pablo
On Oct 31, 2011 8:43 PM, Pablo Mendes pablomen...@gmail.com wrote:
Mariya,
It sounds like your question is more Jena related than DBpedia related. You
may get better help at jena-us...@incubator.apache.org
Have you tried that?
Also, I assume you're pointing Jena to the same SPARQL endpoint? Or are you
loading data from a file?
Best,
Pablo
On Wed, Oct 26, 2011
Christian,
The data under http://dbpedia.org gets updated about twice a year, when the
entire Wikipedia is re-extracted completely from scratch. However, the data
from http://live.dbpedia.org gets updated virtually instantly.
The data is extracted by the DBpedia Extraction Framework (
Do you have too many quotes ('') around Louisiana?
(unfortunately I can't test now as the endpoint seems to be down)
Best
Pablo
On Oct 23, 2011 6:39 PM, Lushan Han lush...@umbc.edu wrote:
Hi,
I have been trying to run the following sparql query on
http://dbpedia.org/sparql but failed to get
+1 on enabling CORS.
On Wed, Oct 19, 2011 at 4:43 PM, Florian florian.hei...@gmail.com wrote:
Hi,
I am playing around with the DBpedia SPARQL Endpoint and tried to get
information by using ajax.
I wasn't able to get any results.
It seems that there is no cross-origin resource sharing
Hi Tommy,
You can also try to look at how DBpedia Spotlight extracts paragraphs from
Wikipedia using the DBpedia Extraction Framework.
relation suggestion is done. Is it
similar to the spotlight solution?
About collecting feedback: absolutely. The only issue now is that we
are not identifying the users in any way which should be changed to
get sensible data.
All the best
Mihály
On 5 October 2011 17:22, Pablo Mendes pablomen
Hi Mihály,
This is truly awesome! You have read my mind.
Please take a look at these two related ideas below.
Human-powered data fusion: round trip (in/ex)ternal data reuse in Wikipedia
It may be worth requesting this fix at the Yago list? David, if you would
like to contribute the fix, I can help to get it pulled to the repo.
Best,
Pablo
On Oct 4, 2011 1:06 AM, Kingsley Idehen kide...@openlinksw.com wrote:
On 10/3/11 6:57 PM, David Butler wrote:
Thanks Kingsley, much
12:03 AM, David Butler david.william.but...@gmail.com
wrote:
Hi Pablo,
I wouldn't mind contributing a fix, but I'm not too familiar with where
the
YAGO mailing list or source code is. Can you point me in the right
direction?
Thanks,
David
On Tue, Oct 4, 2011 at 1:44 AM, Pablo Mendes
are
looking for an optimal auxiliary tool. DBpedia is just the simplest solution
for us.
Václav
From: Pablo Mendes [mailto:pablomen...@gmail.com]
Sent: Friday, September 30, 2011 9:39 AM
To: Václav Zeman
Cc: dbpedia-discussion@lists.sourceforge.net
Subject
Hi all,
We are happy to announce the release of DBpedia Spotlight v0.5 - Shedding
Light on the Web of Documents.
DBpedia Spotlight is a tool for annotating mentions of DBpedia entities and
concepts in text, providing a solution for linking unstructured information
sources to the Linked Open Data
Dimitris,
Cool!
Maybe we could test Sweble first as the new AbstractExtractor, since it
seems to be the weakest link? If it works for that, then it could be
gradually introduced in the core to substitute SimpleWikiParser.
Alessio, if you take the challenge, please keep us updated about your
Tania,
I'm unsure what you mean by corpus.
There are several datasets available from
http://wiki.dbpedia.org/Downloads37
Look at the column that says de (for Deutschland).
And Wikipedia (hyper)text can be downloaded from:
http://de.wikipedia.org/wiki/Hilfe:Download
I'm assuming what you mean by
Congrats to Ghislain and the Eurecom team for having already made progress
on the DBpedia in French mappings! :)
I can see you are already catching up with Italian.
http://mappings.dbpedia.org/sprint/
Cheers,
Pablo
On Wed, Aug 24, 2011 at 12:30 PM, Pablo Mendes pablomen...@gmail.com wrote
I cannot reproduce.
This works for me:
http://dbpedia.org/page/John_Paul_Jones_%28musician%29
Cheers,
Pablo
On Mon, Sep 5, 2011 at 3:55 PM, Yves Raimond yves.raim...@gmail.com wrote:
Hello!
I spotted a few missing DBpedia URIs, both on the currently live
dataset and on DBpedia live, for
Jürgen,
I hear you. Unfortunately our availability is limited by that of
zedat.fu-berlin.de
The services will be available as soon as possible.
Cheers,
Pablo
Research Associate
http://wbsg.de
Freie Universität Berlin
On Thu, Aug 18, 2011 at 11:12 AM, Jürgen Jakobitsch
'yes, i also consider DBpedia buggy in this sense (hence the crossposting)'
Just a small note.
I think you mean that the SPARQL engine behind a particular deployment of
DBpedia is behaving differently from what you would desire. Although there
are bugs in DBpedia, this is not one of them. :) I
I believe these libraries are in the AKSW repo. Also, I think the project
uses Maven2. So it's probably a good idea to confirm these two suspicions
via the documentation for developers in wiki.dbpedia.org
Cheers
Pablo
On Jul 10, 2011 3:13 AM, Tommy Chheng tommy.chh...@gmail.com wrote:
I'm trying
Andy, (cc. OC)
Thanks for the idea! If Open Corporates offered this linkset to DBpedia, it
would be a nice candidate for featuring in the next release of the LOD Cloud
Diagram (http://www4.wiwiss.fu-berlin.de/lodcloud/state/).
I feel this would have good impact on their adoption, and could be
Cool!
Is there some language-specific magic that needs to be done? Are Max's
improvements on the configurability going to show up in the live branch as
well?
I'd like to try a live DBpedia Portuguese if my colleagues from Brazil would
be able to help.
Also, any help on how to obtain the
I take the point of view that Linked Data are claims, rather than facts.
Claims are made by different people/datasources, possibly conflicting, and
the consumer decides what/who to believe. I think that both dbpedia.org and
live.dbpedia.org should provide claims about the same URIs, without
I like this solution. Especially if a request to
http://live.dbpedia.org/{resource|page}/{Thing} returns triples about
http://dbpedia.org/{resource|page}/{Thing}
Cheers,
Pablo
On
Maybe it didn't have the infobox at the time the dump was generated? I see
some infobox changes in Sept2010
http://en.wikipedia.org/w/index.php?title=Toyota_Prius&limit=500&action=history
You can look at the Wikipedia Dump from Oct2010 to know for sure.
Cheers,
Pablo
On Jun 15, 2011 10:34 PM,
If by dbpedia2 you mean http://dbpedia.org/property/, then that's (unmapped)
information coming straight from the infobox, with the same information
entered in the Wikipedia page. For the companies that use an Infobox mapped
in the wiki at mappings.dbpedia.org you should also see a
Prateek
On 5/19/11 5:51 PM, Pablo Mendes wrote:
Hi Prateek,
My guess is that you're seeing the is [property] of links automatically
generated by the web interface. Birth place is a property of Person, with
range on Place. So to get those properties you need to get instances of
Person.
See
Hello everyone,
This e-mail is a call for participation for a working group to
internationalize DBpedia to the Lusosphere (
http://en.wikipedia.org/wiki/Lusosphere).
We are actively collaborating with the DBpedia Internationalization Team (
http://wiki.dbpedia.org/Internationalization) to include
Guillermo,
I believe this is stored in a MySQL database that is populated/read by the
mappings wiki. It would be great to have this in a MongoDB or other store
that would be convenient to store a mapping as a document with fields or
object with attributes that we could then query under many
Hi Joachim, Baran,
The page at http://de.dbpedia.org/ indicates that Sebastian Krebs (
sebastian.kr...@fu-berlin.de) may be a good person to ask.
Cheers,
Pablo
On Tue, Apr 26, 2011 at 11:26 AM, baran_H baran...@gmail.com wrote:
Hello Dimitris,
can this be the same problem which i formulated
and store a copy of the data for our convenience. So it would be
a bit rude to make a reference to them as the system against which a man
hopelessly fights.
On Tue, Apr 26, 2011 at 12:49 PM, baran_H baran...@gmail.com wrote:
On Tue, 26 Apr 2011 12:20:17 +0200, Pablo Mendes pablomen...@gmail.com
I think a script can do the job just fine:
it will have the redirects.nt as a look-up table and replace all
occurrences in the extraction dumps.
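A sketch of that script. The naive whitespace split assumes well-behaved N-Triples lines; a real run over the dumps needs a proper parser, and the predicate URI below is made up:

```python
# Sketch: load redirects.nt as a look-up table, then rewrite every
# redirected URI token in an extraction dump line.
def load_redirects(lines):
    """redirects.nt lines look like: <source> <redirectPredicate> <target> ."""
    table = {}
    for line in lines:
        parts = line.split()
        if len(parts) >= 4:
            table[parts[0]] = parts[2]
    return table

def resolve(line, table):
    """Replace any redirected URI with its redirect target."""
    return " ".join(table.get(tok, tok) for tok in line.split())
```

One pass with this table performs the identity resolution at post-processing time; chains of redirects would need the look-up applied until a fixed point.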
cheers,
Dimitris
On Fri, Apr 15, 2011 at 4:10 PM, Pablo Mendes pablomen...@gmail.com wrote:
I like the second approach ... if we could use
Maybe what Dimitris says is that this query would indeed be answered if:
- redirects were treated as sameAs and inference was used (works for this
but not all cases)
- the framework used redirects to do identity resolution at extraction time
Also, I should point out that you can probably sort
Hey guys,
I can verify that http://dbpedia.org/sparql/ is CORS enabled, but it seems
that the dereferenceable URIs aren't:
http://dbpedia.org/resource/Berlin
http://dbpedia.org/resource/Berlin/ seems not yet to be CORS-enabled.
(from http://enable-cors.org/)
I could always use a DESCRIBE query
José,
I tried this:
http://www.google.de/search?sourceid=chrome&ie=UTF-8&q=unescape+Unicode+characters+perl
And the top result gave me this:
my $unescaped2 = Unicode::Escape::unescape($str4);
http://search.cpan.org/~itwarrior/Unicode-Escape-0.0.2/lib/Unicode/Escape.pm
Hi Darren,
As far as I know, all the relationships between the 272 classes
(including subClassOf) coming from DBpedia are in the DBpedia Ontology
file.
http://downloads.dbpedia.org/3.6/dbpedia_3.6.owl.bz2
The relationships between classes and instances, on the other hand, should
be in the DBpedia