Yes, this sounds like a sensible approach to me.
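
If it helps, the two cases can also be handled in a single query
instead of merging two result sets afterwards. Here is a rough sketch
(untested at the scale of 5000 items, and not the exact query from the
earlier short link): for each item it takes the statement with the
latest "point in time" (P585) qualifier where one exists, and falls
back to the undated statement otherwise.

SELECT ?item ?population ?date WHERE {
  VALUES ?item { wd:Q38 wd:Q128323 }
  ?item p:P1082 ?statement.
  ?statement ps:P1082 ?population.
  OPTIONAL { ?statement pq:P585 ?date. }
  # treat undated statements as very old so that any dated one wins
  BIND(COALESCE(?date, "1000-01-01T00:00:00Z"^^xsd:dateTime) AS ?sortDate)
  FILTER NOT EXISTS {
    ?item p:P1082 ?otherStatement.
    ?otherStatement pq:P585 ?otherDate.
    FILTER(?otherDate > ?sortDate)
  }
}

If an item carries several undated population statements you may still
get more than one row for it, so it is worth spot-checking the output.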

On 19 March 2020 16:36:07 CET, Zsolt Ero <zsolt....@gmail.com> wrote:
>Thanks! I was able to get these queries running by sending them in
>batches of 100, with a 5-second sleep between batches.
>
>Now I've downloaded the population data and merged the results, so I
>think I've solved this.
>
>Just to clarify:
>1. I need to run both the "simple" and the "latest" query and merge
>the results.
>2. In about 95% of the cases the "latest" query works and returns the
>most recent data.
>3. In the remaining 5% of the cases only the "simple" query works, for
>example: https://www.wikidata.org/wiki/Q128323
>
>Is that correct?
>
>Zsolt
>
>On Thu, 19 Mar 2020 at 11:10, Benno Fünfstück
><benno.fuenfstu...@mailbox.tu-dresden.de> wrote:
>>
>> You can just insert all those items into the `VALUES` statement at
>> the top of the query.
>>
>> Here is a query that only selects data where the "point in time"
>> qualifier is present, and then only gives you the latest version:
>> https://w.wiki/KkM
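>>
>> In case the short link is hard to read on its own, the shape of such
>> a query is roughly the following (a sketch, not necessarily the exact
>> query behind the link):
>>
>> SELECT ?item ?population ?date WHERE {
>>   VALUES ?item { wd:Q38 wd:Q148 wd:Q884 }
>>   ?item p:P1082 ?statement.
>>   ?statement ps:P1082 ?population;
>>              pq:P585 ?date.
>>   # drop a value if the same item has a statement with a later
>>   # "point in time"
>>   FILTER NOT EXISTS {
>>     ?item p:P1082/pq:P585 ?otherDate.
>>     FILTER(?otherDate > ?date)
>>   }
>> }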
>>
>> Note that this query won't return any results for items where "point
>> in time" is not specified on any statement. It's unclear to me how
>> that case should be dealt with, from a semantic point of view: what
>> can we do if we don't know what point in time the data refers to?
>>
>> Regards,
>> Benno
>>
>> On 18.03.20 23:40, Zsolt Ero wrote:
>> > Thanks! There are about 5000 item IDs in the Natural Earth dataset;
>> > what would be the best way to get them all? Also, how can I get the
>> > latest data? For example, in your query Italy shows 2016, but there
>> > are 2017 and 2020 values in there as well.
>> >
>> > On Wed, 18 Mar 2020 at 23:34, Lucas Werkmeister
>> > <m...@lucaswerkmeister.de> wrote:
>> >> If you have the item IDs already, the query is relatively simple:
>> >>
>> >> SELECT ?item ?population WHERE {
>> >>   VALUES ?item { wd:Q38 wd:Q148 wd:Q884 }
>> >>   ?item wdt:P1082 ?population.
>> >> }
>> >>
>> >> https://w.wiki/KjA
>> >>
>> >> You can add more values for ?item (and spread them across several
>> >> lines as well); the three above are just an example.
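>> >>
>> >> For example, with a couple more items (the extra Q-IDs here are
>> >> just placeholders):
>> >>
>> >> SELECT ?item ?population WHERE {
>> >>   VALUES ?item {
>> >>     wd:Q38 wd:Q148 wd:Q884
>> >>     wd:Q30 wd:Q183
>> >>   }
>> >>   ?item wdt:P1082 ?population.
>> >> }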
>> >>
>> >> Cheers,
>> >> Lucas
>> >>
>> >> On 18.03.20 22:58, Zsolt Ero wrote:
>> >>> Hi,
>> >>>
>> >>> I'm contributing to an open-source scraper for COVID-19 data, and
>> >>> we are looking to download population data from Wikidata for
>> >>> regions around the world.
>> >>>
>> >>> First, we'd like to get province / state / county items, but
>> >>> later on probably at a much finer granularity. We have Wikidata
>> >>> Q IDs from Natural Earth; we just don't know how to get the
>> >>> population data from Wikidata without scraping. I've seen that
>> >>> there is either a 71 GB gzipped JSON archive or the query service
>> >>> at https://query.wikidata.org/.
>> >>>
>> >>> What I'm looking for would be very simple, just {"Q1234567":
>> >>> population} pairs in JSON. I guess the query service would be
>> >>> ideal, but I have no idea how to use it (even after looking at
>> >>> the tutorial).
>> >>>
>> >>> Can you help me write this very simple query?
>> >>>
>> >>> Zsolt
>> >>>
_______________________________________________
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata
