Hi all,
The Wikidata-Toolkit Java library has been doing that for a while, but I
think there have been some changes in the RDF format that have not been
reflected in Wikidata-Toolkit yet.
https://github.com/Wikidata/Wikidata-Toolkit/
There is an example Java application taking a JSON dump
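For a rough idea of what that involves, independently of the toolkit,
here is a minimal Python sketch of streaming a JSON dump (the dump file
name is a placeholder):

    import gzip
    import json

    # Stream a Wikidata JSON dump: one entity per line, wrapped in a
    # JSON array ("[" on the first line, "]" on the last).
    with gzip.open("wikidata-all.json.gz", "rt", encoding="utf-8") as f:
        for line in f:
            line = line.strip().rstrip(",")
            if not line or line in ("[", "]"):
                continue  # skip the array brackets and blank lines
            entity = json.loads(line)
            # Example: print the English label of each entity, if any
            label = entity.get("labels", {}).get("en", {}).get("value")
            print(entity["id"], label)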
Hi,
If you are only fetching data via the API, then you should only be
making GET requests, right? In that case, did you try setting the
"origin=*" GET parameter? That should be enough to set the appropriate
CORS headers on the response.
See:
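As a quick check, a Python sketch along these lines should show the CORS
header coming back once "origin=*" is set (Q42 is just an example item):

    import requests

    # Anonymous GET to the Wikidata API with origin=*, which should make
    # the server include CORS headers in the response.
    resp = requests.get(
        "https://www.wikidata.org/w/api.php",
        params={
            "action": "wbgetentities",
            "ids": "Q42",
            "format": "json",
            "origin": "*",  # needed for anonymous cross-origin requests
        },
    )
    print(resp.headers.get("Access-Control-Allow-Origin"))  # expected: *
    print(resp.json()["entities"]["Q42"]["labels"]["en"]["value"])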
I do not know what the process looks like to request a ban; have you
contacted wikidata-ow...@lists.wikimedia.org? It seems to be the address
of the mailing list "owner" according to the email headers.
I agree this volume of email looks disproportionate for a single event
which is only remotely
I agree with you, Jan!
Antonin
On 19/09/2021 10:10, Jan Ainali wrote:
> I find all these academic call for papers/abstracts/submissions emails
> on this mailing list a bit spammy.
>
> I would be okay with them if the person mailing introduced it with a
> sentence or two about why they believe it to
o say, only
> those after the date of Stas' post.
>
> Thad
> https://www.linkedin.com/in/thadguidry/
> https://calendly.com/thadguidry/
>
>
> On Sun, Aug 15, 2021 at 1:47 AM Antonin Delpe
Hi Thad,
This suggestion box does not use ElasticSearch; it uses a simple prefix
search on labels and aliases, run directly against the SQL database, I
think. ElasticSearch is only used when you go to Special:Search.
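(For reference, that suggestion box is served by the "wbsearchentities"
module; here is a small Python sketch of its prefix behaviour, assuming
that is indeed what the box calls:)

    import requests

    # Prefix match on labels and aliases, as used by the suggestion box.
    resp = requests.get(
        "https://www.wikidata.org/w/api.php",
        params={
            "action": "wbsearchentities",
            "search": "univers",  # a prefix, not a full-text query
            "language": "en",
            "format": "json",
        },
    )
    for match in resp.json()["search"]:
        print(match["id"], match.get("label"))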
Best,
Antonin
On 15/08/2021 04:25, Thad Guidry wrote:
> I thought that ","
Hi all,
For what it's worth, it's supported by OpenRefine:
https://www.wikidata.org/wiki/Wikidata:Tools/OpenRefine/Editing/Schema_alignment#Dates
But only if you upload your edits directly via OpenRefine - of course
not via QuickStatements export.
Best,
Antonin
On 13/02/2021 16:06, Olaf Simons
I agree - it would be great if help pages like
https://www.wikidata.org/w/api.php?action=help&modules=wbsearchentities
could also mention parameters such as "uselang", even if those apply to
all of MediaWiki.
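For instance, a Python sketch of what I mean (if I understand correctly,
"uselang" should localise the descriptions returned by wbsearchentities):

    import requests

    # "language" controls which language is searched; "uselang" is the
    # MediaWiki-wide display language for the response.
    resp = requests.get(
        "https://www.wikidata.org/w/api.php",
        params={
            "action": "wbsearchentities",
            "search": "Berlin",
            "language": "en",
            "uselang": "de",  # descriptions should come back in German
            "format": "json",
        },
    )
    print(resp.json()["search"][0])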
I filed a Phabricator ticket here: https://phabricator.wikimedia.org/T265734
Antonin
On
[1]: https://phabricator.wikimedia.org/T257405
[2]: https://github.com/wetneb/openrefine-wikibase/issues/83
On 08/07/2020 14:10, Antonin Delpeuch (lists) wrote:
> Hi,
>
> This change is now live! If you cannot reconcile to Wikidata anymore,
> delete the Wikidata reconciliation service and add it again with the new
>
2020 00:22, Antonin Delpeuch (lists) wrote:
> Hi,
>
> The upcoming domain name migration on the Wikimedia Toolforge means
> that OpenRefine users need to update their Wikidata reconciliation
> service to the new endpoint:
>
> https://wdreconcile.toolforge.org/en/api
>
&
Hi,
The upcoming domain name migration on the Wikimedia Toolforge means
that OpenRefine users need to update their Wikidata reconciliation
service to the new endpoint:
https://wdreconcile.toolforge.org/en/api
(replacing "en" with any other Wikimedia language code as needed).
The new home page of
Hi,
I wonder if there is any guidance about how to poll the recent changes
feed of a MediaWiki instance (in particular of a Wikibase one) to keep
up with its stream of edits? Specifically, how to do this responsibly
(without hammering the server) and how to ensure that all changes are
seen by
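To make it concrete, here is the sort of loop I have in mind, using the
"recentchanges" module with continuation and maxlag (the endpoint, start
date and delays are placeholders):

    import time
    import requests

    API = "https://www.wikidata.org/w/api.php"  # any MediaWiki API
    session = requests.Session()
    session.headers["User-Agent"] = "rc-poller/0.1 (contact@example.org)"

    params = {
        "action": "query",
        "list": "recentchanges",
        "rcprop": "title|ids|timestamp",
        "rcdir": "newer",  # oldest first, so we can resume cleanly
        "rcstart": "2020-01-01T00:00:00Z",  # placeholder starting point
        "rclimit": "500",
        "maxlag": "5",  # back off when the servers are lagged
        "format": "json",
    }

    while True:
        data = session.get(API, params=params).json()
        for change in data.get("query", {}).get("recentchanges", []):
            print(change["timestamp"], change["title"])
        if "continue" in data:
            params.update(data["continue"])  # resume; no change is skipped
            time.sleep(1)  # be gentle between batches
        else:
            time.sleep(30)  # caught up: wait before polling again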
Hi!
Have you ever used OpenRefine and wished its features were documented a
bit better? The OpenRefine team is looking for contractors to help write
a proper documentation of the tool.
https://openrefine.org/blog/2020/04/23/documentation-hire.html
The job is fully remote and can be compatible
eet! Thanks for the reminder Antonin.
> Will there also be a longlist of resulting requests that can't be
> prioritized but are unlikely to be a focus for the core team?
>
>
>
> On Mon., Feb. 10, 2020, 2:16 p.m. Antonin Delpeuch (lists),
> <li...@antonin.delpeuch
Hi all,
Just a reminder that OpenRefine needs your feedback to decide how it
should evolve in the coming years. We have a very short survey that you
can take to let us know what you think:
https://docs.google.com/forms/d/e/1FAIpQLSd-Z00h433Y0pIutYyeW98C0Yss6p7RFisVIkkM8uxUtEpmRw/viewform
In the
On 11/10/2019 16:11, Lydia Pintscher wrote:
> We hadn't looked at edits other than terms as part of the change but
> it's something I agree we should have. I'll bump it up and see what we
> can do.
Awesome! I understand that this might require complicated refactoring on
your side; it's probably
In Phabricator terms, this corresponds to the following tickets:
https://phabricator.wikimedia.org/T191885
https://phabricator.wikimedia.org/T67846
Antonin
On 02/10/2019 09:52, Antonin Delpeuch (lists) wrote:
> Hi Léa,
>
> Sorry, my question was not very clear. Let us take th
> # Does that answer your question or did you mean something else? :)
> # Cheers,
> # Léa
>
> On Mon, 30 Sep 2019 at 16:23, Antonin Delpeuch (lists)
> <li...@antonin.delpeuch.eu> wrote:
>
> This is great news! Thank you so much for working on thi
This is great news! Thank you so much for working on this!
It must be hard to figure out how to generate summaries for a wide range
of edit shapes, but it is definitely useful.
I cannot tell from the Phab ticket whether you considered reusing
existing summaries from atomic actions (such as
For anyone considering publishing in an Elsevier journal like this one,
it might be worth reading up a bit about this publisher:
http://thecostofknowledge.com/
Antonin
On 7/24/19 10:56 AM, Andy Mabbett wrote:
> Possibly of interest to those of you working on lexemes?
>
> --
> Andy Mabbett
>
Hi Stas,
Many thanks for writing this down! It is very useful to have a clear
statement like this from the dev team.
Given the sustainability concerns that you mention, I think the way
forward for the community could be to hold an RFC to determine a stricter
admissibility criterion for scholarly
Hi Thomas,
Great that you discovered this! In my experience the tool can indeed be
used to surface issues with our data that would be harder to discover in
other ways.
The idea of keeping the tool in sync with Wikidata is to encourage
users to fix the data directly there (probably by fixing the
On 12/11/18 7:38 AM, Jakob Voß wrote:
> A more formal document (e.g. JSON Schema) may help to detect when
> implementation and documentation get out of sync.
One way to generate such a JSON schema would be to use Wikidata-Toolkit's
implementation, which can produce one via Jackson. It
By the way, what is the difference between this "gsrsearch" and the
"srsearch" parameter from the docs?
https://www.wikidata.org/w/api.php?action=help&modules=query%2Bsearch
The documentation only mentions "srsearch" but the example is formulated
with "gsrsearch".
Antonin
On 12/1/18 2:38 AM, Lucas
Hi Olivier,
Yes, this can be done by adding a new column from reconciled values.
Terms can be accessed as follows, with a syntax similar to QuickStatements:
- Len for the English label
- Dfi for the Finnish description
- Apt for the Portuguese aliases
More info here:
On 06/05/2018 10:37, Ettore RIZZA wrote:
> More simply, there's still a long way to go until Wikidata imports
> all the data contained in Wikipedia infoboxes (or equivalent data
> from other sources), let alone the rest.
>
>
> This surprises me. Are there any statistics somewhere on
Hi,
It is possible to use templates in Listeria table cells by tweaking
your SPARQL query so that it returns the appropriate wikicode:
https://www.wikidata.org/wiki/User:Pintoch/orgid
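For instance, a sketch of the idea in SPARQL (run here via Python; the
{{orgid}} template, P2427 and the university class are just
illustrations):

    import requests

    # CONCAT builds the wikicode that Listeria prints in the cell.
    query = """
    SELECT ?item ?wikicode WHERE {
      ?item wdt:P31 wd:Q3918 ;   # instances of university
            wdt:P2427 ?grid .    # GRID identifier
      BIND(CONCAT("{{orgid|", ?grid, "}}") AS ?wikicode)
    } LIMIT 10
    """
    resp = requests.get(
        "https://query.wikidata.org/sparql",
        params={"query": query, "format": "json"},
        headers={"User-Agent": "listeria-wikicode-demo/0.1"},
    )
    for row in resp.json()["results"]["bindings"]:
        print(row["item"]["value"], row["wikicode"]["value"])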
If you have templates which depend on multiple variables in your SPARQL
query, I suppose you could take
On 21/02/2018 13:55, Andy Mabbett wrote:
>
> Consider, for instance:
>
> https://www.wikidata.org/wiki/Q3596440
>
> an "instance of" a "telephone numbering plan" (Q103903)
Hmm, this becomes a bit tricky - maybe it's actually a legitimate
identifier? I don't see any error in the chain…
>
Overall, how do we deal with this duplication of information (on the
item about the identifier and on the corresponding Wikidata property)?
We do need to have items about unique identifiers (because they can have
sitelinks) so would it make sense to make sure every Wikidata property
for an ID is
Hi Andy,
Thanks, there seems to be quite a lot of work to do in this area indeed!
On 20/02/2018 19:49, Andy Mabbett wrote:
> As an example, I created 'KoreaMed Unique Identifier':
>
> https://www.wikidata.org/wiki/Q47489994
>
> How could we improve that? What additional properties might we
On 19/12/2017 10:17, Marco Fossati wrote:
> The format is the same as WDQS [4], although it is a subset of the data
> model, for the sake of simplicity.
Great - looking forward to reading the specs when they are made available.
Antonin
Fantastic! Thank you so much Thomas!
I completed the survey and would like to add a few points:
- OpenRefine is dropping Java 7
- for the RDF export feature, maybe we could repurpose it to export
statements to the Primary Sources Tool. I think the new version of the
PST is expected to ingest
Awww… this is awesome! It works really well, I can't wait to see this
deployed.
This is going to give a huge boost to the OpenRefine reconciliation service.
Where can I learn about the internals of this jewel? (which search
engine, what metrics are used to rank items, and so on).
Antonin
On
I like the idea of storing tables in Commons, but for now I am still
using Wikidata to store the lists I upload, because:
* tabular data is not integrated with WDQS as far as I know
* the tabular data format is quite poor compared to things like
https://www.w3.org/TR/tabular-metadata/
* it is not
Hi Marco,
I agree that many of these lists and tables could be harvested (with
some care, of course).
However, I don't think that the information they contain should go to
the Wikidata item they are associated with. This Wikidata item mostly
exists to store inter-language links, but is poorly
On 16/10/2017 14:16, Antonin Delpeuch (lists) wrote:
> Thanks Ettore for spotting that!
>
> Wikidata types (P31) only make sense when you consider the "subclass of"
> (P279) property that we use to build the ontology (except in a few cases
> where the community
Thanks Ettore for spotting that!
Wikidata types (P31) only make sense when you consider the "subclass of"
(P279) property that we use to build the ontology (except in a few cases
where the community has decided not to use any subclass for a particular
type).
So, to retrieve all items of a
In general, I think it would be great to store inside Wikidata the graph
of relations between identifiers. Something like:
VIAF linksTo ISNI
VIAF linksTo GND
…
GRID linksTo ISNI
arXiv linksTo DOI
Last time I looked, there was no simple way to do that. So for
WikiProject Universities we have used
Hi!
That reminds me of the crowdsourcing extension that LODrefine has - it
lets you crowdsource the manual part of the reconciliation process. But
it uses CrowdFlower for that (which is quite pricy). It'd be great if
Wikidata Game could evolve into a decent Wikimedia-focused alternative
to this
> On Mon, Jul 17, 2017 at 1:13 PM, Antonin Delpeuch (lists)
> <li...@antonin.delpeuch.eu> wrote:
>
> Hi,
>
> I would be interested to run an introd
On 06/07/2017 16:41, Lydia Pintscher wrote:
> I am not sure I understand what you mean exactly. Do you mean that
> when you are on the file page
> (http://structured-commons.wmflabs.org/wiki/File:LighthouseinDublin.jpg)
> you see the data from the data page
>
Awesome!
I wonder if there are any plans to display Wikibase's statements on
Commons' side? Currently all I can see is a "MediaInfo:M13" link which
does not really showcase all the awesome data that is hidden behind it! :)
Antonin
On 06/07/2017 15:10, Léa Lacroix wrote:
> Hello all,
>
> As you
Hi,
Is there any API call that does a string search for entities on their
labels and aliases, across all languages?
(As a test case, it should return results both for "Universität
Toulouse" and "university of toulouse" without changing any other
parameter.)
I have tried:
-
Hi John,
At the moment, Wikidata is not the place where you will be able to find
a comprehensive database of citations in Wikipedia.
These citations can be downloaded from other places:
-
https://figshare.com/articles/Wikipedia_Scholarly_Article_Citations/1299540
(only covering citations with
Hi Mek,
One simple first step would be to write a bot that would add the "full
work available at" (P953) property to items with OL IDs when the full
text can be downloaded from OL.
This could potentially be part of a Wikidata version of the OAbot.
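A rough pywikibot sketch of that step (the item and URL are made up, and
a real bot would check existing claims and attach references):

    import pywikibot

    site = pywikibot.Site("wikidata", "wikidata")
    repo = site.data_repository()

    # Hypothetical target item and Open Library full-text URL
    item = pywikibot.ItemPage(repo, "Q180736")
    item.get()
    if "P953" not in item.claims:
        claim = pywikibot.Claim(repo, "P953")  # full work available at URL
        claim.setTarget("https://openlibrary.org/books/OL1234567M")
        item.addClaim(claim, summary="add full-text link from Open Library")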
Is there any API endpoint which, given an OL
efine/
>>
>> -Thad
>>
>>
>> On Thu, Jan 26, 2017 at 11:18 AM AMIT KUMAR JAISWAL
>> <amitkumarj...@gmail.com> wrote:
>>
>> Hey Alina,
>>
>> Thanks for letting us know about this.
>>
>
ould help in terms of synchronisation, I believe.
>
> Cheers,
> Magnus
>
>
>
> On Thu, Jan 26, 2017 at 4:44 PM Antonin Delpeuch (lists)
> <li...@antonin.delpeuch.eu> wrote:
>
> Hi Magnus,
>
> Mi
> "types" only? At what rate are
> you hitting the tool? Do you have an example query, preferably one
> that breaks?
>
> Please note that this is not an "official" WMF service, only parts
> of the API are implemented, and there are currently other technical
>
Hi,
I'm also very interested in this. How did you configure your OpenRefine
to use Wikidata? (Even if it does not currently work, I am interested in
the setup.)
There is currently an open issue (with a nice bounty) to improve the
integration of Wikidata in OpenRefine: