Hi Edi
On Fri, Feb 6, 2015 at 7:22 PM, Edi Bice edi_b...@yahoo.com.invalid wrote:
Quick question. Does the disambiguation-mlt engine work with the FST Linking
engine?
Quick answer. Currently not :(
P.S. For the gory details:
Chain 1: lang, sent, tok, pos, chunker, dbpedia-link, disamb
Hello,
I just subscribed to the mailing list but have been using Stanbol for a few
months now.
Kudos to all – it is a great product and I hope it will keep getting better.
Quick question. Does the disambiguation-mlt engine work with the FST Linking
engine?
Thanks,
Edi
P.S
Hi
The disambiguation API is something we already talked about in the past.
I think that since we already have 4 disambiguation engines (dbpedia-mlt,
freebase neo4j, coreference, aida) we could try to find the best API
fitting the requirements for all the already implemented disambiguation
Hi all
On Thu, Oct 23, 2014 at 5:29 PM, Rafa Haro rh...@apache.org wrote:
1. Extend Aida-Light to support other datasets. We would need to check
how much the disambiguation algorithms are coupled with the information
provided by YAGO and try to convert them to a generic approach
Hi Rupert!
Nice to see great feedback on this topic! I wanted to comment on part of your
previous email:
On 24 October 2014 at 12:31:39, Rupert Westenthaler
(rupert.westentha...@gmail.com) wrote:
I fear that each disambiguation approach will come with its own data
model. I agree that it is probably difficult to find
a valid architecture for any disambiguation approach, but focusing only on
Aida-Light I think it is also worth providing an alternative data management
solution. It will affect the performance for sure
quite tough. It may prevent a lot of people
from giving it a try or experimenting with it. Actually, Chalitha had serious
problems finding a machine for testing it. I agree that it is probably
difficult to find a valid architecture for any disambiguation approach, but
focusing only on Aida-Light I think
Hi all,
As you know, for GSOC I have integrated the YAGO
knowledge base and the Aida-light disambiguation server
with Stanbol. But there are many improvements that can be
applied to the current disambiguation engine. For example, the current
engine only works with the YAGO site, but it would be more useful
the project outcome as a baseline, I can suggest a first list of possible
improvements:
1. Extend Aida-Light to support other datasets. We would need to check how
much the disambiguation algorithms are coupled with the information provided by
YAGO and try to convert them to a generic approach.
2.
be included in the trunk when it is mature enough.
Hi all,
This is my current progress with GSOC work. After discussing
with my mentor Rafa, we have decided to use aidalight [1] as the
disambiguation service. Aidalight can be directly integrated
with Stanbol because it is licensed under the Apache License 2.0.
I have first converted
Hi all,
To give this a start I created STANBOL-1183 [1] and added a first
suggestion for a disambiguation API.
* the `Entity Disambiguation Context` is tailored towards the Session
(local) disambiguation usage scenario.
* the `DisambiguationData` resembles the class with the same name
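Based only on the names floated in this thread (`Entity Disambiguation Context`, `DisambiguationData`, suggestions per mention), here is a rough sketch of what such a shared API could look like. All identifiers and signatures are illustrative guesses, not the actual STANBOL-1183 proposal:

```java
import java.util.*;

public class DisambiguationApiSketch {

    /** One candidate entity for a mention, with a confidence in [0, 1]. */
    record Suggestion(String entityUri, double confidence) {}

    /** Suggestions grouped by the surface mention they belong to. */
    interface DisambiguationData {
        Map<String, List<Suggestion>> suggestions();
    }

    /** A pluggable strategy: each engine rescores and picks suggestions. */
    interface DisambiguationEngine {
        Map<String, Suggestion> disambiguate(DisambiguationData data);
    }

    /** Trivial baseline: keep the highest-confidence candidate per mention. */
    static class MaxConfidenceEngine implements DisambiguationEngine {
        public Map<String, Suggestion> disambiguate(DisambiguationData data) {
            Map<String, Suggestion> best = new HashMap<>();
            data.suggestions().forEach((mention, cands) ->
                cands.stream()
                     .max(Comparator.comparingDouble(Suggestion::confidence))
                     .ifPresent(s -> best.put(mention, s)));
            return best;
        }
    }

    public static void main(String[] args) {
        DisambiguationData data = () -> Map.of("Paris", List.of(
                new Suggestion("dbpedia:Paris", 0.7),
                new Suggestion("dbpedia:Paris,_Texas", 0.3)));
        Suggestion s = new MaxConfidenceEngine().disambiguate(data).get("Paris");
        System.out.println(s.entityUri()); // dbpedia:Paris
    }
}
```

A common interface of this shape would let the dbpedia-mlt, freebase, coreference and aida engines plug in as different `DisambiguationEngine` implementations.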
at 7:16 PM, Dileepa Jayakody
dileepajayak...@gmail.com wrote:
On Thu, Oct 3, 2013 at 10:21 PM, Rafa Haro rh...@zaizi.com wrote:
Hi fellas,
With http://svn.apache.org/r1528907 the GSoC projects' source code has
been committed in a new branch that we have called disambiguation. As you
might know
Hi fellas,
With http://svn.apache.org/r1528907 the GSoC projects' source code has
been committed in a new branch that we have called disambiguation. As
you might know, this year there were two proposals for Stanbol, both
related to disambiguation engines. Dileepa Jayakody has developed
Hi Andreas
On Tue, Sep 17, 2013 at 1:28 PM, Andreas Kuckartz a.kucka...@ping.de wrote:
Dileepa Jayakody:
I have successfully implemented and tested the foaf disambiguation engine
with the help of the Stanbol community, including Rupert, Rafa and Andreas.
+1 and thanks to Rupert and Rafa from
Hi all
Thanks Rupert.
I think I can help with this task to create the Java API for disambiguation
engines, with classes to deal with suggestions, normalize values (between 0
and 1), generate new confidences from old confidences and weights, etc.
Regards
On Thu, Sep 26, 2013 at 1:14 PM, Rupert
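The helper classes described above (normalizing values into [0, 1], and deriving new confidences from old confidences and weights) might look something like this minimal sketch; the class and method names are invented for illustration:

```java
import java.util.Arrays;

public class ConfidenceUtil {

    /** Scale raw engine scores into [0, 1] by dividing by the maximum. */
    public static double[] normalize(double[] raw) {
        double max = Arrays.stream(raw).max().orElse(0.0);
        if (max <= 0.0) return new double[raw.length];
        double[] out = new double[raw.length];
        for (int i = 0; i < raw.length; i++) {
            out[i] = raw[i] / max;
        }
        return out;
    }

    /** Blend an old confidence with a new one using a weight in [0, 1]. */
    public static double combine(double oldConf, double newConf, double weight) {
        return (1.0 - weight) * oldConf + weight * newConf;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(normalize(new double[]{2.0, 4.0, 1.0})));
        // [0.5, 1.0, 0.25]
        System.out.println(combine(1.0, 0.0, 0.25)); // 0.75
    }
}
```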
Hi mates,
On 26/09/13 13:14, Rupert Westenthaler wrote:
Also note that there is also the freebase disambiguation engine from
Antonio (STANBOL-1157) and I also noticed that both the foaf and the
freebase disambiguation engine do share some code with the
disambiguation-mlt engine. So maybe
to Graph Importer tool [2]: a tool to import the Freebase data dump
into a Neo4j graph (at this moment) through the Tinkerpop Blueprints API.
Following the project, I have successfully implemented and tested a first
version of the Freebase entity disambiguation engine with the help of
Rupert and Rafa.
The target
Hi All,
I have successfully implemented and tested the foaf disambiguation engine
with the help of the Stanbol community, including Rupert, Rafa and Andreas.
The engine's main functionality is to increase the confidence of
Entity-Annotations identified by previous engines, by using 2 fundamental
Dileepa Jayakody:
mvn clean install
That succeeds but at the beginning I see this warning:
[WARNING] Some problems were encountered while building the effective
model for
org.apache.stanbol:org.apache.stanbol.enhancer.engines.disambiguation.foaf:bundle:1.0-SNAPSHOT
[WARNING]
at 11:06 PM, Dileepa Jayakody
dileepajayak...@gmail.com wrote:
Hi All,
As the third milestone of my project I will describe my initial design of
the FOAF Co-reference based entity disambiguation engine here.
The main disambiguation technique used here is FOAF co-reference
Hi Reto,
Thanks a lot for the pointer.
I will look at the Clerezza RDF smusher to get a better idea.
Regards,
Dileepa
On Mon, Aug 26, 2013 at 6:17 PM, Reto Bachmann-Gmür r...@wymiwyg.comwrote:
Hi Dileepa
The basic co-reference rule to be used is:
{?p a owl:IFP. ?a ?p ?x. ?b ?p ?x} => {?a owl:sameAs ?b}
will describe my initial design of
the FOAF Co-reference based entity disambiguation engine here.
The main disambiguation technique used here is FOAF co-reference. This
aims to merge multiple fise:EntityAnnotations identified by different
surface mentions in the text to a single FOAF entity
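The co-reference idea above (merging annotations that denote the same FOAF entity because they share a value for an inverse functional property) can be illustrated with a tiny in-memory "smusher". This is a simplified sketch, not the engine's or Clerezza's actual code:

```java
import java.util.*;

public class IfpSmusher {
    /** Maps an entity to its owl:sameAs representative. */
    private final Map<String, String> sameAs = new HashMap<>();
    /** Maps "property|value" to the first entity seen with that pair. */
    private final Map<String, String> ifpIndex = new HashMap<>();

    /** Follow owl:sameAs links to the canonical representative. */
    public String canonical(String entity) {
        String next = sameAs.get(entity);
        return next == null ? entity : canonical(next);
    }

    /** Assert triple (entity, ifp, value); merge entities on a collision. */
    public void add(String entity, String ifp, String value) {
        String prev = ifpIndex.putIfAbsent(ifp + "|" + value, entity);
        if (prev != null) {
            String a = canonical(entity), b = canonical(prev);
            if (!a.equals(b)) sameAs.put(a, b); // record a owl:sameAs b
        }
    }

    public static void main(String[] args) {
        IfpSmusher s = new IfpSmusher();
        // foaf:mbox is inverse functional: same mailbox => same person
        s.add("ex:alice1", "foaf:mbox", "mailto:alice@example.org");
        s.add("ex:alice2", "foaf:mbox", "mailto:alice@example.org");
        System.out.println(
            s.canonical("ex:alice1").equals(s.canonical("ex:alice2"))); // true
    }
}
```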
the Wikilinks extended dataset [1] and store it
in a Jena TDB database, in order to take advantage of the contained
information to be used in some tasks, like disambiguation.
Moreover a service has been created (along with the parser tool) in order
to query the data and retrieve information about Wikilink
I had in mind for my
project.
The next task is to identify and define the set of foaf properties which are
going to be used as keys in the disambiguation algorithm. This task also
includes developing an EntityProcessor to filter foaf entities further by
allowing only the entities which have
milestone (midterm evaluation) the following tasks need to be
done:
1. Convert wiki-links data dump to RDF
* Wiki-links contains a lot of disambiguation information which we want
to incorporate into the Entityhub Freebase site.
* The wiki-link data dump will be converted to RDF to be easier
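As a hedged illustration of that conversion step, one wiki-links mention record could be emitted as N-Triples roughly like this. The predicate URIs below are invented placeholders, not the vocabulary actually used for the Entityhub Freebase site:

```java
public class MentionToRdf {
    /** Emit N-Triples for one mention record (illustrative vocabulary). */
    public static String toNTriples(String pageUrl, String surfaceForm,
                                    String targetUrl) {
        String mention = "<" + pageUrl + "#mention-1>";
        return mention + " <http://example.org/vocab/surfaceForm> \""
                + surfaceForm + "\" .\n"
                + mention + " <http://example.org/vocab/linksTo> <"
                + targetUrl + "> .\n";
    }

    public static void main(String[] args) {
        System.out.print(toNTriples(
                "http://example.com/some-page",
                "Barack Obama",
                "http://en.wikipedia.org/wiki/Barack_Obama"));
    }
}
```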
Dileepa Jayakody:
In the foaf-wiki site [1] there are many datasource projects but many
of them are out of date.
If possible please take a few minutes to update that Wiki page.
Can I please have your opinions on finalizing a dataset for my
project?
The main criteria in my opinion should be:
: FOAF co-reference based disambiguation, as the first
milestone I'm developing an EntityHub ReferencedSite for a foaf data-set.
With help from Rupert and others I was able to index a sample foaf dataset
using the genericrdf indexing tool and set up a referenced-site. foaf-data
can be filtered
-2012/
[2] http://code.google.com/p/ldspider/
Thanks,
Dileepa
Hi all
For my project: Freebase Entity Disambiguation in Stanbol, for the first
milestone for next Friday 28 June, I've integrated the Freebase data dump
as an EntityHub ReferencedSite in Stanbol, using the Freebase indexing tool
(Readme file) and following Rupert's indications (and freebase
Hi Rafa et al,
On Tue, Jun 18, 2013 at 7:57 PM, Rafa Haro rh...@zaizi.com wrote:
Hi Dileepa,
Hi Dileepa,
On 18/06/13 13:20, Dileepa Jayakody wrote:
Hi All,
After going through a lot of documentation on Stanbol and Entity
Disambiguation, I started trying out the Stanbol EntityHub indexing tool
[1] to create a site for foaf-dataset. I found a sufficient foaf dataset in
N-Quad format
/enhancement-engines/entityhublinking, disambiguation-mlt
/entityhub/site/managed
Greatly appreciate your pointers to relevant areas of the codebase that I
should be more focused on.
Thanks,
Dileepa
On Tue, Jun 11, 2013 at 2:36 PM, Dileepa Jayakody dileepajayak...@gmail.com
wrote:
Hi Rafa
On Tue, Jun 11, 2013 at 2:16 PM, Rafa Haro rh
Hi Dileepa,
On 11/06/13 07:07, Dileepa Jayakody wrote:
My suggestion on integrating foaf-search [3] would basically need to do an
on-the-fly retrieval of data, but as you have pointed out it could impose a
performance hit. But foaf-search looks promising with a big index of FOAF
data.
A
Hi Dileepa,
Congratulations again on your GSOC proposal. It's quite clear and well
explained. Please find some thoughts about your mail inline:
On 10/06/13 11:56, Dileepa Jayakody wrote:
Hi All,
I have started working on my GSOC project : FOAF co-reference based entity
disambiguation
how Freebase compares to the Wikidata project
[4] that only recently entered phase 2.
Designing disambiguation in a way that it can be applied to other
datasets would be for sure a great bonus. But given the good results
one can get with Freebase I would even be very interested if the
results would only work on Freebase ^^
Following Rupert's idea, I agree that maybe the best
interested in the disambiguation problem, so I would like to
prepare a proposal for GSoC about this topic.
If you already have some experience with Apache Stanbol, this
would be for sure a big help for a GSoC project.
I have been following last mails about disambiguation and WebID
protocol
Apache Stanbol.
Currently, I've been assigned to a project that involves different
technologies like Apache Stanbol and Apache ManifoldCF. So, related to
Stanbol, I'm interested in the disambiguation problem and I would like to
prepare a proposal for GSoC about this topic.
I have been following
for disambiguation purposes: disambiguation-mlt [1], developed by
Kritarth Anand as part of a GSOC project supervised by Rupert, and
DBpedia Spotlight [2], contributed by Pablo Mendes and Iavor Jelev as
part of the Early Adopters programme [3] and currently in the trunk
integrated within
Hi Rafa
Great to see some movement on the disambiguation topic.
On Wed, Jan 30, 2013 at 12:16 PM, Rafa Haro rh...@zaizi.com wrote:
We wanted to propose in the list a first approach to a roadmap for
disambiguation in Stanbol. In our opinion, a high-level list of tasks that
should be done
Thanks Rupert for your valuable feedback and contributions!
On 30/01/13 16:03, Rupert Westenthaler wrote:
As those algorithms will be the main source for requirements on the
disambiguation index model, we might need to investigate this while
designing the disambiguation index model.
It would
Hi Rupert,
Thanks. Now it is working perfectly.
By the way, is the pos-tagger model for Spanish installed in Stanbol? I
want to know if it is possible to filter the disambiguation just for nouns.
Thanks
On 19/12/12 12:27, Rupert Westenthaler wrote:
enhancer.engines.linking.suggestions=20
Hi all,
I have been trying to use disambiguation-mlt engine with the new
EntityHub Linking Engine for Spanish. My goal is to link and
disambiguate with any kind of entity within the EntityHub, not only with
Named Entities. So, I have configured a new Enhancement Chain including
only language