Dropping my two cents here: I'm wondering about the Wikidata Linked Data
Fragments (LDF) service [1] usage.
LDF [2] is nice because it shifts the computation burden to the client,
at the cost of less expressive SPARQL queries, IIRC.
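For what it's worth, here is a minimal sketch of a single triple-pattern request against the LDF service; the endpoint URL (https://query.wikidata.org/bigdata/ldf) and the subject/predicate/object parameters follow the generic Triple Pattern Fragments convention and are my assumptions, so double-check them against the service documentation:

    # Sketch only: fetch one triple-pattern fragment from the (assumed) Wikidata LDF endpoint.
    import requests

    LDF_ENDPOINT = "https://query.wikidata.org/bigdata/ldf"  # assumed URL

    def fetch_fragment(subject=None, predicate=None, obj=None):
        """Request a single triple-pattern fragment; joins happen on the client."""
        params = {}
        if subject:
            params["subject"] = subject
        if predicate:
            params["predicate"] = predicate
        if obj:
            params["object"] = obj
        response = requests.get(
            LDF_ENDPOINT,
            params=params,
            headers={"Accept": "text/turtle"},
            timeout=30,
        )
        response.raise_for_status()
        return response.text

    # One page of the fragment for "instance of (P31) human (Q5)".
    print(fetch_fragment(
        predicate="http://www.wikidata.org/prop/direct/P31",
        obj="http://www.wikidata.org/entity/Q5",
    )[:500])

A full SPARQL query would then be answered by the client stitching together many such fragment requests, which is exactly where the computation burden moves.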
I think it would be a good idea to forward simple queries to
already have properties for most of these links? I'm not sure what
you're asking as I have little knowledge of the context of the situation...
lectrician1,
On Thu, Jul 29, 2021 at 8:56 AM Marco Fossati <foss...@spaziodati.eu> wrote:
[Cross-posting from the Wikidata chat]
Hi
[Please pardon me if you have already read this on the Wikidata chat]
Hello folks,
TL;DR: what do you think of the 3 validation criteria below?
I'm excited to let you know
Hi everyone,
---
TL;DR: soweego 2 is on its way.
Here's the Project Grant proposal:
https://meta.wikimedia.org/wiki/Grants:Project/Hjfocs/soweego_2
---
Does the name
Hi everyone,
Benno (in CC) has recently announced this tool:
https://tools.wmflabs.org/wdumps/
I haven't checked it out yet, but it sounds related to Aidan's inquiry.
Hope this helps.
Cheers,
Marco
On 12/18/19 8:01 AM, Edgard Marx wrote:
+1
On Tue, Dec 17, 2019, 19:14 Aidan Hogan
Hi Denny,
Thanks for publishing your Colab notebook!
I went through it and would like to share my first thoughts here. We can
then move further discussion somewhere else.
1. In general, how can we compare datasets with totally different
timestamps? Wikidata is alive, Freebase is dead, and
Hey Sebastian,
On 9/20/19 10:22 AM, Sebastian Hellmann wrote:
Not much of Freebase did end up in Wikidata.
Dropping here some pointers to shed light on the migration of Freebase
to Wikidata, since I was partially involved in the process:
1. WikiProject [1];
2. the paper behind it [2];
3.
That's just awesome, Denny.
Unity is strength.
Wishing you all the best.
Marco
On 19/09/19 18:56, Denny Vrandečić wrote:
Hello all,
Over the last few years, more and more research teams all around the
world have started to use Wikidata. Wikidata is becoming a fundamental
resource [1]. That
linkedin.com/in/thadguidry/
On Wed, Aug 28, 2019 at 11:22 AM Marco Fossati <foss...@spaziodati.eu> wrote:
Hi everyone,
Wearing the soweego project lead hat, I'm pleased to announce that the
Wikimedia Foundation has approved the *soweego 1.1* proposal:
https://met
Hi everyone,
Wearing the soweego project lead hat, I'm pleased to announce that the
Wikimedia Foundation has approved the *soweego 1.1* proposal:
https://meta.wikimedia.org/wiki/Grants:Project/Rapid/Hjfocs/soweego_1.1
The main goal is to put together different machine learning algorithms
and
Hi everyone,
TL;DR: soweego version 1 is out!
https://soweego.readthedocs.io/
Like it? Star it!
The soweego team is delighted to announce the release of *version 1* [1]!
If you like it, why don't you click on the Star button?
Dear all,
---
TL;DR: soweego version 1 will be released soon. In the meanwhile, why
don't you consider endorsing the next steps?
https://meta.wikimedia.org/wiki/Grants:Project/Rapid/Hjfocs/soweego_1.1
Hi Maarten,
On 9/22/18 13:28, Maarten Dammers wrote:
The equivalent property and equivalent class are used, but not that
much. Did anyone already try a structured approach with reporting? I'm
considering parsing popular ontology descriptions and producing reports
of what is linked to what so
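Just to illustrate the kind of report I have in mind, a rough sketch could start from a SPARQL query over the mapping properties; P1628 (equivalent property) and P1709 (equivalent class) are my assumptions for the relevant property IDs, so please verify them before reusing this:

    # Sketch: count, per external ontology host, how many Wikidata entities declare equivalents.
    from collections import Counter
    from urllib.parse import urlparse

    import requests

    ENDPOINT = "https://query.wikidata.org/sparql"
    QUERY = """
    SELECT ?entity ?external WHERE {
      VALUES ?mapping { wdt:P1628 wdt:P1709 }  # equivalent property / equivalent class (assumed IDs)
      ?entity ?mapping ?external .
    }
    """

    rows = requests.get(
        ENDPOINT,
        params={"query": QUERY, "format": "json"},
        headers={"User-Agent": "mapping-report-sketch/0.1 (mailing list example)"},
        timeout=120,
    ).json()["results"]["bindings"]

    # Group the mappings by the external ontology's host, e.g. schema.org or purl.org.
    per_ontology = Counter(urlparse(r["external"]["value"]).netloc for r in rows)
    for host, count in per_ontology.most_common(20):
        print(f"{count:6d}  {host}")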
Hi Niklas,
On 8/10/18 11:23, Daniel Kinzler wrote:
You could go for a straight up import of the XML dump.
If you are using MediaWiki Vagrant [1] for your Wikibase instance, I've
contributed a couple of walkthroughs [2,3].
In my experience, however, the XML import resulted in lots of missing data,
Sounds good, thank you Daniel and Stas.
Best,
Marco
On 10/26/17 19:20, Stas Malyshev wrote:
Hi!
Thanks a lot Stas for this present.
Could you please share any pointers on how to integrate it into other
tools?
It's the same API as before, wbsearchentities. If you need additional
profiles -
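For reference, a plain call to wbsearchentities looks roughly like the sketch below; the parameters shown are the standard public ones, while any extra search profile mentioned above would be an additional parameter whose exact name I am not sure of, so it is left out:

    # Sketch: query the Wikidata wbsearchentities API for prefix matches.
    import requests

    API = "https://www.wikidata.org/w/api.php"

    def search_entities(term, language="en", limit=5):
        params = {
            "action": "wbsearchentities",
            "search": term,
            "language": language,
            "limit": limit,
            "format": "json",
        }
        response = requests.get(API, params=params, timeout=30)
        response.raise_for_status()
        return response.json().get("search", [])

    for hit in search_entities("Douglas Adams"):
        print(hit["id"], hit.get("label", ""), "-", hit.get("description", ""))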
Thanks a lot Stas for this present.
Could you please share any pointers on how to integrate it into other tools?
Cheers,
Marco
On 10/25/17 22:22, Stas Malyshev wrote:
The Wikidata and Search Platform teams are happy to announce that Wikidata
prefix search
is now using new and improved
Hi everyone,
Remember the Wikidata primary sources tool?
https://www.wikidata.org/wiki/Wikidata:Primary_sources_tool
While the StrepHit team is building its next version, I'd like to
invite you to have a look at a new project proposal.
The main goal is to add a high volume of identifiers to
Hi everyone,
As a data quality addict, I've been investigating the coverage of
external identifiers linked to Wikidata items about people.
Given the numbers on SQID [1] and some SPARQL queries [2, 3], it seems
that even the second most used ID (VIAF) only covers circa *25%* of
people items.
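For anyone who wants to reproduce that kind of figure, here is a sketch of one way to compute it (not necessarily the queries in [2, 3]); P31/Q5 (instance of human) and P214 (VIAF ID) are the identifiers I assume here, and a count over all people items can easily hit the endpoint timeout, so treat it as illustrative:

    # Sketch: estimate VIAF coverage of people items via the public SPARQL endpoint.
    import requests

    ENDPOINT = "https://query.wikidata.org/sparql"
    QUERY = """
    SELECT (COUNT(?person) AS ?people) (COUNT(?viaf) AS ?withViaf) WHERE {
      ?person wdt:P31 wd:Q5 .              # instance of human
      OPTIONAL { ?person wdt:P214 ?viaf }  # VIAF ID, if present
    }
    """

    response = requests.get(
        ENDPOINT,
        params={"query": QUERY, "format": "json"},
        headers={"User-Agent": "viaf-coverage-sketch/0.1 (mailing list example)"},
        timeout=300,
    )
    response.raise_for_status()
    row = response.json()["results"]["bindings"][0]
    people = int(row["people"]["value"])
    with_viaf = int(row["withViaf"]["value"])
    print(f"VIAF coverage: {with_viaf / people:.1%} of {people:,} people items")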
Hi everyone,
I asked a related question during the presentation at Wikimania: my
understanding is that Ontolex/Lemon [1, 2] was used to model the
Wikidata lexicographical items.
WordNet was discarded, as it has an older model.
Best,
Marco
[1] https://www.w3.org/2016/05/ontolex/
[2]
Hi Antonin,
On 8/7/17 20:36, Antonin Delpeuch (lists) wrote:
Does anybody know an alternative to CrowdFlower that can be used for
free with volunteer workers?
There you go: https://crowdcrafting.org/
Hope this helps you keep up with your great work on openrefine.
I believe entity
Hi everyone,
The StrepHit team [1] has submitted an official uplift proposal for the
primary sources tool [2].
This is part of a Wikimedia project grant [3], which has 2 big goals:
1. to improve the reference coverage of Wikidata statements;
2. to standardize the data release workflow for
Thanks for the report, Markus.
I will file it in the primary sources tool issues.
Best,
Marco
On 28 Dec 2016 3:33 PM, "Markus Kroetzsch" <markus.kroetz...@tu-dresden.de>
wrote:
Hi Marco,
On 27.12.2016 14:27, Marco Fossati wrote:
> Hi Markus and thanks for thi
I was about to mention the StrepHit renewal proposal.
Thanks Lydia for doing that faster than me! :-)
Best,
Marco
On 12/20/16 19:56, Lydia Pintscher wrote:
On Tue, Dec 20, 2016 at 7:40 PM, Gerard Meijssen
wrote:
Hoi,
Please consider, it has been said all too often
Hi Markus and thanks for this major SQID update.
On 12/16/16 13:32, Markus Kroetzsch wrote:
== Known issues ==
* Some statements cannot be rejected in Primary Sources. This problem
affects both SQID and the Wikidata gadget in the same way. It seems to
be a bug in the PS web service, which we
Hi Thad,
The examples you mentioned are facts automatically extracted from
natural language texts.
It looks like those facts are not incorrect on their own: for instance,
the first 3 people listed in https://www.wikidata.org/wiki/Q11629 seem
to be painters indeed.
In my opinion, what is
, Marco Fossati <foss...@spaziodati.eu> wrote:
Hi John, Navino,
the primary sources tool uses the QuickStatements syntax for
large-scale non-curated dataset imports, see:
https://www.wikidata.org/wiki/Wikidata:Dat
Hi John, Navino,
the primary sources tool uses the QuickStatements syntax for large-scale
non-curated dataset imports, see:
https://www.wikidata.org/wiki/Wikidata:Data_donation#3._Work_with_the_Wikidata_community_to_import_the_data
https://www.wikidata.org/wiki/Wikidata:Primary_sources_tool
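As a concrete illustration of the QuickStatements syntax (my recollection of the version 1 tab-separated format: item, property, value, optionally followed by S-prefixed reference properties such as S854 for "reference URL"; please check the QuickStatements help page before any real import), a dataset could be serialized like this:

    # Sketch: serialize a few curated statements into QuickStatements v1 commands.
    rows = [
        # (item, property, value, reference URL) - toy data for illustration only
        ("Q42", "P106", "Q36180", "https://example.org/source-record/1"),
    ]

    def to_quickstatements(triples):
        lines = []
        for item, prop, value, ref_url in triples:
            # S854 attaches a "reference URL" reference to the claim.
            lines.append("\t".join([item, prop, value, "S854", f'"{ref_url}"']))
        return "\n".join(lines)

    print(to_quickstatements(rows))
    # Q42 <TAB> P106 <TAB> Q36180 <TAB> S854 <TAB> "https://example.org/source-record/1"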
-- Forwarded message --
From: "Marco Fossati" <foss...@fbk.eu>
Date: 11 Nov 2016 1:23 PM
Subject: Fwd: Re: [wikicite-discuss] Entity tagging and fact extraction
(from a scholarly publisher perspective)
To: "Marco Fossati" <foss...@spaziodati.eu>
Cc:
, the notion of the StrepHit functionality with improved methods
to add them to Wikidata makes sense. Maybe it is even time to reconsider
many of the notions of Primary Sources.
Thanks,
GerardM
On 24 July 2016 at 15:33, Marco Fossati <foss...@spaziodati.eu>
[Begging pardon if you have already read this in the Wikidata project chat]
Hi everyone,
If you care about data quality, you probably know that high quality is
synonymous with references to trusted sources.
That's why the primary sources tool is out there as a Wikidata gadget:
Thanks for the heads-up, Lydia.
I assume future contributors won't have to sign a Google Contributor
License Agreement, right?
Cheers,
Marco
On 7/12/16 17:11, Lydia Pintscher wrote:
Hey folks :)
Based on requests here Denny and I have worked on getting the Primary
Sources Tool code moved
Hi Satya,
the knowledge base produced by StrepHit could be queried by a QA system,
pretty much like any other structured knowledge base.
Not sure what you want to know though, could you please expand?
Cheers,
Marco
On 6/16/16 18:51, Satya Gadepalli wrote:
Can this be used as factoid QA System?
thx
[Feel free to blame me if you read this more than once]
To whom it may interest,
Full of delight, I would like to announce the first beta release of
*StrepHit*:
https://github.com/Wikidata/StrepHit
TL;DR: StrepHit is an intelligent reading agent that understands text
and translates it into
Hi Lydia,
On 6/15/16 07:42, Lydia Pintscher wrote:
Lydia - can you assign someone to come up to speed at whatever level
Denny requires to feel comfortable making the transfer?
I will take care of it with Denny in the next days.
Repasting part of a previous message with the list of
Hi Tom and thanks Lydia for the clarification,
that request for comments (RFC) [1] aims at gathering feedback both on
the primary sources tool and the available datasets (especially StrepHit
[2]), which are closely intertwined: the dataset is in the tool, so
people can play with both in one
Sorry, I forgot to rename the "digest" subject, fixed now.
On 5/30/16 16:06, Marco Fossati wrote:
Hi Markus,
this is a known issue:
https://github.com/google/primarysources/issues/94
It seems to be related to the front-end: @Thomas, this and
https://github.com/google/primarysources/
Hi Markus,
this is a known issue:
https://github.com/google/primarysources/issues/94
It seems to be related to the front-end: @Thomas, this and
https://github.com/google/primarysources/issues/107 are blocking the
usage of the tool.
Would it be possible for you to investigate them?
Cheers,
Hi Tom,
FYI, the primary sources tool is not dead: besides Freebase, it will
also cater for other datasets.
The StrepHit team will take care of it in the next few months, as per
one of the project goals [1].
The code repository is owned by Google, and the StrepHit team will
collaborate with
I couldn't wait for a detailed description of the primary sources tool.
Thanks a lot to the authors for mentioning the StrepHit soccer dataset!
Cheers,
Marco
On 2/19/16 13:00, wikidata-requ...@lists.wikimedia.org wrote:
Date: Thu, 18 Feb 2016 11:07:41 -0600
From: Maximilian
Hi Magnus,
>I was aware of the Sourcerer tool: I'm concerned with those references
>coming from Wikipedia articles though, since they stem from inside a
>Wikimedia project, and I want to make sure that everything comes from
>the outside.
>
The Sourcerer references do NOT come from Wikipedia! I
Here is the link for the online streaming:
https://youtu.be/uvfd_HmPOrc
Cheers,
Marco
2016-01-11 16:11 GMT+01:00 Marco Fossati <foss...@spaziodati.eu>:
> Dear all,
>
> This is a kind reminder for the upcoming StrepHit IEG project kick-off
> seminar.
> Schedule: 15 J
://www.openstreetmap.org/way/67197096
The seminar will be streamed online, a link will be shared as soon as it is
available.
See you in Trento!
Cheers,
Marco
2015-12-23 17:03 GMT+01:00 Marco Fossati <foss...@spaziodati.eu>:
> [Begging pardon if you read this multiple times]
>
> Hi everyo
Hi Dario,
Date: Wed, 23 Dec 2015 08:04:33 -0800
> From: Dario Taraborelli
> To: "Discussion list for the Wikidata project."
>
> Subject: Re: [Wikidata] [ANNOUNCEMENT] StrepHit IEG project kick-off
> seminar
> Message-ID:
the gold-standard hub of
the Open Data landscape.
Link:
https://meta.wikimedia.org/wiki/Grants:IEG/StrepHit:_Wikidata_Statements_Validation_via_References
Speaker's bio: Marco Fossati is a researcher with a double background in
Natural Languages and Information Technologies. He works at the Data
Dear all,
I have no words to say how happy I am: StrepHit [1] has been selected
as an IEG project [2]!!!
I'd like to express my gratitude to all the community members that have
provided feedback and endorsements.
Thanks, thanks, thanks for believing in the idea.
Cheers!
--
Marco Fossati
Hi everyone,
I was just wondering whether any Wikidatan will be present at the
upcoming Google Summer of Code Mentor summit:
https://sites.google.com/site/gsoc2015ms/
If so, it would be cool to meet there, just ping me before the summit.
Cheers,
--
Marco Fossati
http://about.me/marco.fossati
on the *endorse* blue
button on the project page.
Looking forward to your updates.
Cheers,
--
Marco Fossati
http://about.me/marco.fossati
Twitter: @hjfocs
Skype: hell_j
Hi Denny, Thomas,
I would like to thank you both for your support in making the StrepHit
soccer dataset available! I owe you some hectolitres of beer :-)
There is one thing that was mentioned during our summer discussions and
that I sadly forgot: shall the Freebase ontology mappings be added
to your updates.
Cheers!
On 9/9/15 11:39, Marco Fossati wrote:
Hi Markus, everyone,
The project proposal is currently in active development.
I would like to focus now on the dissemination of the idea and the
engagement of the Wikidata community.
Hence, I would love to gather feedback
Dear Marco,
Sounds interesting, but the project page still has a lot of gaps. Will
you notify us again when you are done? It is a bit tricky to endorse a
proposal that is not finished yet;-)
Markus
On 04.09.201
Lydia Pintscher<lydia.pintsc...@wikimedia.de>
Thank you for working on this, Marco. This is a great step forward. I
wish you good luck for the IEG proposal!
Thanks @Lydia for your encouragement!
Cheers,
--
Marco Fossati
http://about.me/marco.fossati
Twitter: @hjfocs
S
l will also include an investigation phase to select a
set of authoritative sources, see the first task in the proposal work
package:
https://meta.wikimedia.org/wiki/Grants:IEG/StrepHit:_Wikidata_Statements_Validation_via_References#Work_Package
I'll expand on this.
Cheers,
--
Marco Fossati
h
/Wikidata:Primary_sources_tool
--
Marco Fossati
http://about.me/marco.fossati
Twitter: @hjfocs
Skype: hell_j
Billion+ RDF triples culled
from across the LOD Cloud (if you look up Wikidata URIs that are objects
of owl:sameAs relations you'll end up in Wikidata's own Linked Data Space)
[2] http://wikidata.metaphacts.com/sparql -- another endpoint I
discovered yesterday.
--
Marco Fossati
http://about.me
Hi Vladimir,
On 24 Feb 2015 08:38, Vladimir Alexiev <vladimir.alex...@ontotext.com>
wrote:
Excellent, thanks!!
1. Do you have a description how does this work?
I can't even find your presentation from Dublin 7 Feb
The paper describing the approach is under review at a top conference, I
I am pleased to announce the 3.4 release of the Italian DBpedia chapter.
It includes links to Wikidata and exhaustive type coverage through DBTax.
Check out the blog post here:
http://it.dbpedia.org/2015/02/dbpedia-italiana-release-3-4-wikidata-e-dbtax/?lang=en
Cheers!
--
Marco Fossati
http
Marco and friends,
Any chance you're streaming this to the web, Marco? If so, what's the URL please?
Thanks, Scott
On Mon, Nov 3, 2014 at 9:45 AM, Marco Fossati <hell.j@gmail.com> wrote:
Hi folks,
This is just to let you know that I will run live demos for the tutorial
at the Unicode conference
.
Cheers!
[1] http://mappings.dbpedia.org
[2] dbpedia-discuss...@lists.sourceforge.net
[3] http://mappings.dbpedia.org/index.php/OntologyClass:Person
[4] http://it.dbpedia.org/resource/Joey_Ramone
Thanks and regards,
Micru
--
Marco Fossati
http://about.me/marco.fossati
Twitter: @hjfocs
Skype: hell_j