Hi Lydia, all,
Many thanks for the feedback on the system!
For those who just want to play with the system, you can try it here:
https://woolnet.dcc.uchile.cl/
The survey itself is here:
https://forms.gle/sCNqrAtJo98388iC6
(Just in case someone wishes to answer the survey, it might be best to
Hi everyone,
Cristóbal Torres, in CC, has been working over the past few months on a
new web application for Wikidata called WoolNet. The application helps
users to find and visualise connections between Wikidata entities. We
are pleased to share this application with you, and would love to
Hi Diego,
Thanks for the pointer; this is very cool! We would be happy to share
experiences. (It's very impressive how many points you are able to
render, and how these resize at different scales!)
Indeed it seems we were not so original with the name. :)
It seems both systems offer two
Hi all,
Benjamín, in CC, is an undergraduate student who has been working on a
system and interface called "Wikidata Atlas". The system allows the user
to search for different types of entities (with geo-coordinates) on
Wikidata and visualise them on a world map.
The system is available
Hi all,
Francisca, in CC, is an undergraduate student who has been working over
the past few months on a new template-based Question Answering (QA) tool for
Wikidata called Templet, which is available here:
https://templet.dcc.uchile.cl/
We would be *very* grateful if you could help us to evaluate
Hi Adam,
On 2020-07-13 13:41, Adam Sanchez wrote:
Hi,
I have to launch 2 million queries against a Wikidata instance.
I have loaded Wikidata in Virtuoso 7 (512 RAM, 32 cores, SSD disks with RAID 0).
The queries are simple, just 2 types.
select ?s ?p ?o {
  ?s ?p ?o .
  filter (?s = ?param)
}
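For lookups of this shape, binding the subject directly (e.g. via a VALUES block) is usually much faster than a FILTER over all triples, since the store can then use its subject index. A sketch of batching the 2 million lookups this way, in Python; the batch size and entity IRIs are illustrative assumptions, not details from the thread:

```python
# Sketch: rewrite "?s ?p ?o FILTER(?s = X)" lookups as VALUES-bound queries
# and batch many subjects per request. Batch size and IRIs are illustrative.
def build_batched_queries(subject_iris, batch_size=100):
    """Return one SPARQL query string per batch of subject IRIs."""
    queries = []
    for i in range(0, len(subject_iris), batch_size):
        batch = subject_iris[i:i + batch_size]
        values = " ".join(f"<{iri}>" for iri in batch)
        queries.append(
            "SELECT ?s ?p ?o WHERE {\n"
            f"  VALUES ?s {{ {values} }}\n"
            "  ?s ?p ?o .\n"
            "}"
        )
    return queries

# 200 example subjects -> 2 queries of 100 subjects each
subjects = [f"http://www.wikidata.org/entity/Q{i}" for i in range(1, 201)]
queries = build_batched_queries(subjects)
```

With 2 million subjects this reduces the workload to roughly 20,000 requests, and each request touches only the bound subjects rather than scanning every triple.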
ata filtered by
sitelinks (perhaps also allowing other high-degree or high-PageRank
nodes to pass the filter). At least I know I would use such a dump.
Best,
Aidan
On 2019-12-19 6:46, Lydia Pintscher wrote:
On Tue, Dec 17, 2019 at 7:16 PM Aidan Hogan wrote:
Hey all,
As someone who likes t
Hey all,
As someone who likes to use Wikidata in their research, and likes to
give students projects relating to Wikidata, I am finding it more and
more difficult to (recommend to) work with recent versions of Wikidata
due to the increasing dump sizes, where even the truthy version now
costs
Hey all,
Andra recently mentioned finding laureates in Wikidata, and it
reminded me that some weeks ago I was trying to come up with a SPARQL
query to find all Nobel Prize Winners in Wikidata.
What I ended up with was:
SELECT ?winner
WHERE {
  ?winner wdt:P166 ?prize .
  ?prize
t?
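The query above is cut off, but for context, such queries are commonly completed with a property path over subclass-of, since the individual prizes (Physics, Peace, etc.) are modelled as subclasses of a general Nobel Prize item. A sketch holding the query as a string; that wd:Q7191 denotes the Nobel Prize item, and that P279* alone suffices under the current modelling, are assumptions worth double-checking:

```python
# Sketch only: one common completion of the truncated query above.
# Assumptions: wd:Q7191 is the "Nobel Prize" item, and the awarded prizes
# (e.g. Nobel Prize in Physics) reach it via subclass-of (P279).
NOBEL_WINNERS_QUERY = """\
SELECT ?winner
WHERE {
  ?winner wdt:P166 ?prize .      # award received
  ?prize wdt:P279* wd:Q7191 .    # prize is a (subclass of the) Nobel Prize
}
"""
```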
2018-03-08 21:02 GMT+01:00 Aidan Hogan <aid...@gmail.com>:
Hi Joachim!
Understood, yes! The ability to select entities with any value
for some property is something I agree would be a useful next
step, and
Hi all,
With a couple of students we are working on various topics relating to
the dynamics of RDF and Wikidata. The public dumps in RDF cover the past
couple of months:
https://dumps.wikimedia.org/wikidatawiki/entities/
I'm wondering is there a way to get access to older dumps or perhaps
s, Joachim
-Original Message-
From: Wikidata [mailto:wikidata-boun...@lists.wikimedia.org] On behalf of
Aidan Hogan
Sent: Wednesday, 7 March 2018 06:16
To: wikidata@lists.wikimedia.org
Subject: Re: [Wikidata] GraFa: Faceted browser for RDF/Wikidata [thanks!]
Hi Joachim,
On 14-02-2018 7:3
eswc14.pdf
... and the code is on github to replicate:
https://github.com/ahmadassaf/KBE
Raphaël
On 07/03/2018 at 05:53, Aidan Hogan wrote:
Hi all,
Tomás and I would like to share a paper that might be of interest to
the community. It presents some preliminary results of a work looking
...@lists.wikimedia.org] On behalf of
Aidan Hogan
Sent: Thursday, 8 February 2018 21:33
To: Discussion list for the Wikidata project.
Cc: José Ignacio .
Subject: Re: [Wikidata] GraFa: Faceted browser for RDF/Wikidata [thanks!]
Hi all,
On behalf of José and myself, we would really like to thank the people
,
and in what order. The results are based on asking users (students) to
rate some prototypes of generated info-boxes.
Tomás Sáez, Aidan Hogan "Automatically Generating Wikipedia Infoboxes
from Wikidata". In the Proceedings of the Wiki Workshop at WWW 2018,
Lyon, France, Apri
the system soon with some evaluation results. Once it's ready we will of
course share a link with you.
Best,
Aidan and José
Forwarded Message
Subject: Re: GraFa: Faceted browser for RDF/Wikidata [feedback requested]
Date: Mon, 15 Jan 2018 11:47:18 -0300
From: Aidan Hogan &
,
José & Aidan
On 09-01-2018 14:18, Aidan Hogan wrote:
Hey all,
A Masters student of mine (José Moreno in CC) has been working on a
faceted navigation system for (large-scale) RDF datasets called "GraFa".
The system is available here loaded with a recent version of W
Hey all,
A Masters student of mine (José Moreno in CC) has been working on a
faceted navigation system for (large-scale) RDF datasets called "GraFa".
The system is available here loaded with a recent version of Wikidata:
http://grafa.dcc.uchile.cl/
Hopefully it is more or less
Hi Stas,
On 21-04-2017 15:04, Stas Malyshev wrote:
Hi!
You agree with me that this query:
`select distinct ?g { graph ?g {?s ?p ?o} }` seems to be a valid SPARQL
query, but throws an error in WDQS service [1].
It is a valid SPARQL query, but what you're essentially asking is to
download
This looks great Lydia, thanks!!
The descriptions look like enough for me to catch the idea and explain
it to a student.
If such a student is interested, we will let you know. :)
Best!
Aidan
On 13-04-2017 12:44, Lydia Pintscher wrote:
On Thu, Apr 13, 2017 at 4:44 PM, Aidan Hogan <
Hi all,
So at my university the undergraduate students must complete a
three-month project towards writing a short final thesis. Generally this
work doesn't need to involve research, but it should result in a final
demonstrable outcome, meaning a tool, an application, something like that.
The students are
,
Aidan
On 02.03.2017 21:56, Aidan Hogan wrote:
Hi all,
Is there a list somewhere of instances where the "mainstream" media has
used Wikidata as a source?
I found two thus far:
[1] BuzzFeed checking if 2016 really was a bad year (relatively
speaking) in terms of celebrity deaths.
[2
Hi all,
Is there a list somewhere of instances where the "mainstream" media has
used Wikidata as a source?
I found two thus far:
[1] BuzzFeed checking if 2016 really was a bad year (relatively
speaking) in terms of celebrity deaths.
[2] Le Monde checking if 27 is really a dangerous year
Hi Stas,
I think in terms of the dump, /replacing/ the Turtle dump with the
N-Triples dump would be a good option. (Not sure if that's what you were
suggesting?)
As you already mentioned, N-Triples is easier to process with typical
unix command-line tools and scripts, etc. But also any (RDF
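As a concrete illustration of why the line-based N-Triples format is convenient: each line is one complete triple, so a dump can be streamed without a full RDF parser. A small Python sketch; the sample lines and the predicate IRI counted here are illustrative, not taken from the thread:

```python
# Sketch: each N-Triples line is a self-contained triple, so simple jobs
# (counting, filtering) need only line-by-line scanning, no RDF parser.
def count_predicate(lines, predicate_iri):
    """Count triples whose predicate matches, scanning N-Triples lines."""
    needle = f"<{predicate_iri}>"
    count = 0
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # the predicate is the second whitespace-delimited term
        parts = line.split(None, 2)
        if len(parts) == 3 and parts[1] == needle:
            count += 1
    return count

sample = [
    '<http://www.wikidata.org/entity/Q42> <http://www.w3.org/2000/01/rdf-schema#label> "Douglas Adams"@en .',
    '<http://www.wikidata.org/entity/Q42> <http://www.wikidata.org/prop/direct/P31> <http://www.wikidata.org/entity/Q5> .',
]
n = count_predicate(sample, "http://www.wikidata.org/prop/direct/P31")  # -> 1
```

The same job on the Turtle dump would need a real parser, since Turtle triples can span lines and abbreviate prefixes.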
number of wiki subjects in all
languages).
I have no idea yet how to write the SQL/SPARQL for this, but rankable Q*
identifiers, new Q* identifiers and Google would be places I'd begin if
I did. What do you think?
Cheers, Scott
On Sun, Aug 7, 2016 at 2:02 PM, Aidan Hogan <aid...@gmail.com>:
cking it for Wikidata was
the lack of load-balancing support in the open source version, not
the performance of a single instance.
Best regards,
Markus
On 06.08.2016 18:19, Aidan Hogan wrote:
Hey all,
Recently we wrote a paper discussing the query perfo
rticularly easy, nor intuitive.)
Cheers,
Aidan
On 06.08.2016 at 18:19, Aidan Hogan wrote:
Hey all,
Recently we wrote a paper discussing the query performance for Wikidata,
comparing different possible representations of the knowledge-base in Postgres
(a relational database), Neo4J (a graph
On 06-08-2016 18:48, Stas Malyshev wrote:
Hi!
There's a brief summary in the paper of the models used. In terms of all
the "gory" details of how everything was generated, (hopefully) all of
the relevant details supporting the paper should be available here:
On 06-08-2016 17:56, Stas Malyshev wrote:
Hi!
On a side note, the results we presented for BlazeGraph could improve
dramatically if one could isolate queries that timed out. Once one query
in a sequence timed out (we used server-side timeouts), we observed that
a run of queries would then