[Wikidata] Tool for consuming left-over data from import

2017-08-04 Thread André Costa
Hi all!

As part of the Connected Open Heritage project, Wikimedia Sverige has been
migrating Wiki Loves Monuments datasets from Wikipedias to Wikidata.

In the course of doing this we keep a note of the data which we fail to
migrate. For each of these left-over bits we know which item and which
property it belongs to as well as the source field and language from the
Wikipedia list. An example would be a "type of building" field where we
could not match the text to an item on Wikidata but know that the target
property is P31.

We have created dumps of these (such as
https://tools.wmflabs.org/coh/_total_se-ship_new.json; don't worry, this one
is tiny) but are now looking for an easy way for users to consume them.

Does anyone know of a tool which could do this today? The Wikidata game
only allows (AFAIK) yes/no/skip, whereas here you would want something
like /invalid/skip. And if not, are there any tools which, with a bit of
forking, could be made to do it?
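
To make this concrete, here is a very rough sketch (in Python) of the kind of
review loop I have in mind. The JSON field names below are only illustrative;
the actual dump structure may differ, and a real tool would of course also
write confirmed matches back via the API.

import requests

DUMP_URL = "https://tools.wmflabs.org/coh/_total_se-ship_new.json"
leftovers = requests.get(DUMP_URL).json()

# Hypothetical structure: one entry per left-over value, with the target item,
# target property, the raw text and its origin (source field + language).
for entry in leftovers:
    print("{item} / {property}: {value!r} (from {field}@{lang})".format(**entry))
    answer = input("matching Q-id / i(nvalid) / s(kip)? ").strip().lower()
    if answer in ("", "s", "i"):
        continue  # skipped or flagged invalid; a real tool would log this
    # otherwise `answer` is the Q-id the reviewer matched the text to, and the
    # tool could add the claim, e.g. via wbcreateclaim or pywikibot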

We have only published a few dumps but there are more to come. I would also
imagine that this, or a similar, format could be useful for other
imports/template harvests where some fields are more easily handled by
humans.

Any thoughts and suggestions are welcome.
Cheers,
André
André Costa | Senior Developer, Wikimedia Sverige | andre.co...@wikimedia.se
| +46 (0)733-964574

Support free knowledge, become a member of Wikimedia Sverige.
Read more at blimedlem.wikimedia.se
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] New step towards structured data for Commons is now available: federation

2017-07-06 Thread André Costa
Nice!

Will the connection back to the image be included in the RDF? The /entity/
path was not available, so I couldn't check what is there now.

Cheers,
André



On 6 Jul 2017 15:10, "Léa Lacroix"  wrote:

> Hello all,
>
> As you may know, WMF, WMDE and volunteers are working together on the
> structured data for Commons project.
> We’re currently working on a lot of technical groundwork for this project.
> One big part of that is allowing the use of Wikidata’s items and properties
> to describe media files on Commons. We call this feature federation. We
> have now developed the necessary code for it and you can try it out on a
> test system and give feedback.
>
> We have one test wiki that represents Commons
> (http://structured-commons.wmflabs.org) and another one simulating Wikidata
> (http://federated-wikidata.wmflabs.org). You can see an example where the
> statements use items and properties from the faked Wikidata. Feel free to
> try it by adding statements to some of the files on the test system.
> (You might need to create some items on http://federated-wikidata.wmflabs.org
> if they don't exist yet. We have created a few for testing.)
> If you have any questions or concern, please let us know.
> Thanks,
>
> --
> Léa Lacroix
> Project Manager Community Communication for Wikidata
>
> Wikimedia Deutschland e.V.
> Tempelhofer Ufer 23-24
> 10963 Berlin
> www.wikimedia.de
>
> Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
>
> Registered in the register of associations of the Amtsgericht
> Berlin-Charlottenburg under number 23855 Nz. Recognised as charitable by
> the Finanzamt für Körperschaften I Berlin, tax number 27/029/42207.
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Three folders about Wikidata

2016-02-23 Thread André Costa
The three originals are in Swedish but are in turn based on a single one in
German.

--
André Costa
GLAM Developer
Wikimedia Sverige
On 23 Feb 2016 12:33, "Gerard Meijssen" <gerard.meijs...@gmail.com> wrote:

> Hoi,
> What was the original language. German ?
> Thanks,
>  GerardM
>
> On 23 February 2016 at 09:54, Romaine Wiki <romaine.w...@gmail.com> wrote:
>
>> Hi all,
>>
>> Currently I am working on translating three folders about Wikidata
>> (aiming at GLAMs, businesses and research) to English, and later Dutch and
>> French (so they can be used in Belgium, France and the Netherlands).
>>
>> I translated the texts of these folders here:
>> https://be.wikimedia.org/wiki/User:Romaine/Wikidata
>>
>> The texts of the folders are not completely reviewed, but if anyone as
>> native speaker wants to look at them and fix some grammar/etc, feel free to
>> do that.
>>
>> Greetings,
>> Romaine
>>
>> ___
>> Wikidata mailing list
>> Wikidata@lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>>
>>
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Wikidata Propbrowse

2016-02-15 Thread André Costa
Would it be possible to set the language used to search with? Whilst I most
often use English on Wikidata, I'm sure a lot of people don't.

/André
On 14 Feb 2016 22:03, "Markus Kroetzsch" wrote:

> On 14.02.2016 18:03, Hay (Husky) wrote:
>
>> On Sun, Feb 14, 2016 at 4:40 PM, Markus Kroetzsch
>>  wrote:
>>
> >>> I suspect that https://query.wikidata.org can count how many times each
> >>> property is used.
> >>>
>>> Amazingly, you can (I was surprised):
>>>
>>>
>>> https://query.wikidata.org/#SELECT%20%3FanyProp%20%28count%28*%29%20as%20%3Fcount%29%0AWHERE%20{%0A%20%20%20%20%3Fpid%20%3FanyProp%20%3FsomeValue%20.%0A}%0AGROUP%20BY%20%3FanyProp%0AORDER%20BY%20DESC%28%3Fcount%29
>>> 
>>>
>> That's a really nice find! Any idea how to filter the query so you
>> only get the property statements?
>>
>
> I would just filter this in code; a more complex SPARQL query is just
> getting slower. Here is a little example Python script that gets all the
> data you need:
>
>
> https://github.com/Wikidata/WikidataClassBrowser/blob/master/helpers/python/fetchPropertyStatitsics.py
>
> I intend to use this in our upcoming new class/property browser as well.
> Maybe it would actually make sense to merge the two applications at some
> point (the focus of our tool are classes and their connection to
> properties, as in the existing Miga tool, but a property browser is an
> integral part of this).
>
> Markus
>
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
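
For readability, the URL-encoded query above decodes to the plain SPARQL shown
below. A minimal sketch of running it from Python against the public endpoint,
doing the "filter in code" step Markus mentions (the query may be slow or time
out; endpoint URL and user agent are just the usual public-service conventions):

import requests

QUERY = """
SELECT ?anyProp (COUNT(*) AS ?count)
WHERE { ?pid ?anyProp ?someValue . }
GROUP BY ?anyProp
ORDER BY DESC(?count)
"""

response = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "property-usage-count-example/0.1"},
)
response.raise_for_status()

for row in response.json()["results"]["bindings"]:
    predicate = row["anyProp"]["value"]
    # Keep only direct-statement properties, i.e. filter in code rather than
    # making the SPARQL query itself more complex.
    if predicate.startswith("http://www.wikidata.org/prop/direct/"):
        print(predicate.rsplit("/", 1)[-1], row["count"]["value"])
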
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Photographers' Identities Catalog (& WikiData)

2015-12-14 Thread André Costa
I'm planning to bring a few of the datasets into mix'n'match (@Magnus this
is the one I asked about on Twitter) in January, but not all of them are
suitable, and I believe separating KulturNav into multiple datasets on
mix'n'match makes more sense and makes it more likely that they get matched.

Some of the early adopters of KulturNav have been working with WMSE to
facilitate bi-directional matching. This is done on a dataset-by-dataset
level since different institutions are responsible for different datasets.
My hope is that mix'n'match will help in this area as well, even as a tool
for the institutions' own staff, who are often interested in matching entries
to Wikipedia (which most of the time means Wikidata).

@John: There are processes for matching KulturNav identifiers to Wikidata
entities. Only afterwards are details imported, mainly to source statements
[1] and [2]. There are some (not so user-friendly) stats at [3].

Cheers,
André

[1]
https://www.wikidata.org/wiki/Wikidata:Requests_for_permissions/Bot/L_PBot_2
[2]
https://www.wikidata.org/wiki/Wikidata:Requests_for_permissions/Bot/L_PBot_3
[3] https://tools.wmflabs.org/lp-tools/misc/data/
--
André Costa
GLAM developer
Wikimedia Sverige

Magnus Manske, 13/12/2015 11:24:

>
> Since no one mentioned it, there is a tool to do the matching to WD much
> more efficiently:
> https://tools.wmflabs.org/mix-n-match/

+1

___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Photographers' Identities Catalog (& WikiData)

2015-12-09 Thread André Costa
In case you haven't come across it before
http://kulturnav.org/1f368832-7649-4386-97b6-ae40cce8752b is the entry
point to the Swedish database of (primarily early) photographers curated by
the Nordic Museum in Stockholm.

It's not that well integrated into Wikidata yet but the plan is to fix that
during early 2016. That would also allow a variety of photographs on
Wikimedia Commons to be linked to these entries.

Cheers,
André

André Costa | GLAM developer, Wikimedia Sverige | andre.co...@wikimedia.se |
 +46 (0)733-964574

Support free knowledge, become a member of Wikimedia Sverige.
Read more at blimedlem.wikimedia.se

On 9 December 2015 at 02:44, David Lowe <davidl...@nypl.org> wrote:

> Thanks, Tom.
> I'll have to look at this specific case when I'm back at work tomorrow, as
> it does seem you found something in error.
> As for my process: with WD, I queried out the label, description & country
> of citizenship, dob & dod of everyone with occupation: photographer.
> After some cleaning, I can get the WD data formatted like my own (Name,
> Nationality, Dates). I can then do a simple match, where everything matches
> exactly. For the remainder, I then match names and dates- without
> Nationality, which is often very "soft" information. For those that pass a
> smell test (one is "English" the other is "British") I pass those along,
> too. For those with greater discrepancies, I look still closer. For those
> with still greater discrepancies, I manually, individually query my
> database for anyone with the same last name & same first initial to catch
> misspellings or different transliterations. I also occasionally put my
> entire database into open refine to catch instances where, for instance, a
> Chinese name has been given as FamilyName, GivenName in one source, and
> GivenName, FamilyName in another.
> In short, this is scrupulously- and manually- checked data. I'm not savvy
> enough to let an algorithm make my mistakes for me! But let me know if this
> seems to be more than bad luck of the draw- finding the conflicting data
> you found.
> I have also to say, I may suppress the Niepce Museum collection, as it's
> from a really crappy list of photographers in their collection which I
> found many years ago, and can no longer find. I don't want to blame them
> for the discrepancy, but that might be the source. I don't know.
> As I start to query out places of birth & death from WD in the next days,
> I expect to find more discrepancies. (Just today, I found dozens of folks
> whom ULAN gendered one way, and WD another- but were undeniably the same
> photographer. )
> Thanks,
> David
>
>
> On Tuesday, December 8, 2015, Tom Morris <tfmor...@gmail.com> wrote:
>
>> Can you explain what "indexing" means in this context?  Is there some
>> type of matching process?  How are duplicates resolved, if at all? Was the
>> Wikidata info extracted from a dump or one of the APIs?
>>
>> When I looked at the first person I picked at random, Pierre Berdoy
>> (ID:269710), I see that both Wikidata and Wikipedia claim that he was born
>> in Biarritz while the NYPL database claims he was born in Nashua, NH.  So,
>> it would appear that there are either two different people with the same
>> name, born in different places, or the birth place is wrong.
>>
>>
>> http://mgiraldo.github.io/pic/?=2028247=269710|42.7575,-71.4644
>> https://www.wikidata.org/wiki/Q3383941
>>
>> Tom
>>
>>
>>
>>
>> On Tue, Dec 8, 2015 at 7:10 PM, David Lowe <davidl...@nypl.org> wrote:
>>
>>> Hello all,
>>> The Photographers' Identities Catalog (PIC) is an ongoing project of
>>> visualizing photo history through the lives of photographers and photo
>>> studios. I have information on 115,000 photographers and studios as of
>>> tonight. It is still under construction, but as I've almost completed an
>>> initial indexing of the ~12,000 photographers in WikiData, I thought I'd
>>> share it with you. We (the New York Public Library) hope to launch it
>>> officially in mid to late January. This represents about 12 years worth of
>>> my work of researching in NYPL's photography collection, censuses and
>>> business directories, and scraping or indexing trusted websites, databases,
>>> and published biographical dictionaries pertaining to photo history.
>>> Again, please bear in mind that our programmer is still hard at work
>>> (and I continue to refine and add to the data*), but we welcome your
>>> feedback, questions, critiques, etc. To see the WikiData photographers,
>>> s

Re: [Wikidata] Wikidata Analyst, a tool to comprehensively analyze quality of Wikidata

2015-12-09 Thread André Costa
Nice tool!

To understand the statistics better: if a claim has two sources, one
Wikipedia and one other, how does that show up in the statistics?

The reason I'm wondering is that I would normally care whether a claim is
sourced or not (but not by how many sources) and whether it is sourced only
by Wikipedias or by anything else.

E.g.
1) an item with 10 claims, each sourced, is "better" than one with 10
claims where one claim has 10 sources.
2) a statement with a wiki source + another source is "better" than one with
just a wiki source, and just as "good" as one without the wiki source.

Also, is a wiki ref/source Wikipedia only, or any Wikimedia project? Whilst
(last I checked) the others were only 70,000 refs compared to the 21 million
from Wikipedia, they might be significant for certain domains and are just
as "bad".
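
To illustrate the distinction I care about, here is a rough sketch that
classifies each claim on an item as unsourced, sourced only via a Wikipedia
import (P143 "imported from Wikimedia project"), or backed by at least one
other kind of reference. Q42 is just an arbitrary example item, and the
heuristic is only as good as the P143 convention:

import requests

resp = requests.get(
    "https://www.wikidata.org/w/api.php",
    params={"action": "wbgetclaims", "entity": "Q42", "format": "json"},
)
resp.raise_for_status()

for prop, claims in resp.json()["claims"].items():
    for claim in claims:
        refs = claim.get("references", [])
        if not refs:
            verdict = "unsourced"
        elif all(set(ref["snaks"]) <= {"P143"} for ref in refs):
            verdict = "only wiki-sourced"
        else:
            verdict = "has a non-wiki source"
        print(prop, verdict)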

Cheers,
André
On 9 Dec 2015 10:37, "Gerard Meijssen"  wrote:

> Hoi,
> What would be nice is to have an option to understand progress from one
> dump to the next like you can with the Statistics by Magnus. Magnus also
> has data on sources but this is more global.
> Thanks,
>  GerardM
>
> On 8 December 2015 at 21:41, Markus Krötzsch <
> mar...@semantic-mediawiki.org> wrote:
>
>> Hi Amir,
>>
>> Very nice, thanks! I like the general approach of having a stand-alone
>> tool for analysing the data, and maybe pointing you to issues. Like a
>> dashboard for Wikidata editors.
>>
>> What backend technology are you using to produce these results? Is this
>> live data or dumped data? One could also get those numbers from the SPARQL
>> endpoint, but performance might be problematic (since you compute averages
>> over all items; a custom approach would of course be much faster but then
>> you have the data update problem).
>>
>> An obvious feature request would be to display entity ids as links to the
>> appropriate page, and maybe with their labels (in a language of your
>> choice).
>>
>> But overall very nice.
>>
>> Regards,
>>
>> Markus
>>
>>
>> On 08.12.2015 18:48, Amir Ladsgroup wrote:
>>
>>> Hey,
>>> There have been several discussions regarding quality of information in
>>> Wikidata. I wanted to work on quality of wikidata but we don't have any
>>> source of good information to see where we are ahead and where we are
>>> behind. So I thought the best thing I can do is to make something to
>>> show people how exactly sourced our data is with details. So here we
>>> have *http://tools.wmflabs.org/wd-analyst/index.php*
>>>
>>> You can give only a property (let's say P31) and it gives you the four
>>> most used values + analyze of sources and quality in overall (check this
>>> out )
>>>   and then you can see about ~33% of them are sources which 29.1% of
>>> them are based on Wikipedia.
>>> You can give a property and multiple values you want. Let's say you want
>>> to compare P27:Q183 (Country of citizenship: Germany) and P27:Q30 (US)
>>> Check this out
>>> . And
>>> you can see US biographies are more abundant (300K over 200K) but German
>>> biographies are more descriptive (3.8 description per item over 3.2
>>> description over item)
>>>
>>> One important note: Compare P31:Q5 (a trivial statement) 46% of them are
>>> not sourced at all and 49% of them are based on Wikipedia **but* *get
>>> this statistics for population properties (P1082
>>> ) It's not a
>>> trivial statement and we need to be careful about them. It turns out
>>> there are slightly more than one reference per statement and only 4% of
>>> them are based on Wikipedia. So we can relax and enjoy these
>>> highly-sourced data.
>>>
>>> Requests:
>>>
>>>   * Please tell me whether do you want this tool at all
>>>   * Please suggest more ways to analyze and catch unsourced materials
>>>
>>> Future plan (if you agree to keep using this tool):
>>>
>>>   * Support more datatypes (e.g. date of birth based on year,
>>> coordinates)
>>>   * Sitelink-based and reference-based analysis (to check how much of
>>> articles of, let's say, Chinese Wikipedia are unsourced)
>>>
>>>   * Free-style analysis: There is a database for this tool that can be
>>> used for way more applications. You can get the most unsourced
>>> statements of P31 and then you can go to fix them. I'm trying to
>>> build a playground for this kind of tasks)
>>>
>>> I hope you like this and rock on!
>>> 
>>> Best
>>>
>>>
>>> ___
>>> Wikidata mailing list
>>> Wikidata@lists.wikimedia.org
>>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>>>
>>>
>>
>> ___
>> Wikidata mailing list
>> Wikidata@lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>>
>
>
> ___

Re: [Wikidata] Photographers' Identities Catalog (& WikiData)

2015-12-09 Thread André Costa
Happy to be of use. There are also ones for:
* Swedish photo studios [1]
* Norwegian photographers [2]
* Norwegian photo studios [3]
I'm less familiar with these though and don't have a timeline for Wikidata
integration.

Cheers,
André

[1] http://kulturnav.org/deb494a0-5457-4e5f-ae9b-e1826e0de681
[2] http://kulturnav.org/508197af-6e36-4e4f-927c-79f8f63654b2
[3] http://kulturnav.org/7d2a01d1-724c-4ad2-a18c-e799880a0241
--
André Costa
GLAM developer
Wikimedia Sverige
On 9 Dec 2015 15:07, "David Lowe" <davidl...@nypl.org> wrote:

> Thanks, André! I don't know that I've found that before. Great to get
> country (or region) specific lists like this.
> D
>
> On Wednesday, December 9, 2015, André Costa <andre.co...@wikimedia.se>
> wrote:
>
>> In case you haven't come across it before
>> http://kulturnav.org/1f368832-7649-4386-97b6-ae40cce8752b is the entry
>> point to the Swedish database of (primarily early) photographers curated by
>> the Nordic Museum in Stockholm.
>>
>> It's not that well integrated into Wikidata yet but the plan is to fix
>> that during early 2016. That would also allow a variety of photographs on
>> Wikimedia Commons to be linked to these entries.
>>
>> Cheers,
>> André
>>
>> André Costa | GLAM developer, Wikimedia Sverige |
>> andre.co...@wikimedia.se | +46 (0)733-964574
>>
>> Support free knowledge, become a member of Wikimedia Sverige.
>> Read more at blimedlem.wikimedia.se
>>
>> On 9 December 2015 at 02:44, David Lowe <davidl...@nypl.org> wrote:
>>
>>> Thanks, Tom.
>>> I'll have to look at this specific case when I'm back at work tomorrow,
>>> as it does seem you found something in error.
>>> As for my process: with WD, I queried out the label, description &
>>> country of citizenship, dob & dod of everyone with occupation:
>>> photographer. After some cleaning, I can get the WD data formatted like my
>>> own (Name, Nationality, Dates). I can then do a simple match, where
>>> everything matches exactly. For the remainder, I then match names and
>>> dates- without Nationality, which is often very "soft" information. For
>>> those that pass a smell test (one is "English" the other is "British") I
>>> pass those along, too. For those with greater discrepancies, I look still
>>> closer. For those with still greater discrepancies, I manually,
>>> individually query my database for anyone with the same last name & same
>>> first initial to catch misspellings or different transliterations. I also
>>> occasionally put my entire database into open refine to catch instances
>>> where, for instance, a Chinese name has been given as FamilyName, GivenName
>>> in one source, and GivenName, FamilyName in another.
>>> In short, this is scrupulously- and manually- checked data. I'm not
>>> savvy enough to let an algorithm make my mistakes for me! But let me know
>>> if this seems to be more than bad luck of the draw- finding the conflicting
>>> data you found.
>>> I have also to say, I may suppress the Niepce Museum collection, as it's
>>> from a really crappy list of photographers in their collection which I
>>> found many years ago, and can no longer find. I don't want to blame them
>>> for the discrepancy, but that might be the source. I don't know.
>>> As I start to query out places of birth & death from WD in the next
>>> days, I expect to find more discrepancies. (Just today, I found dozens of
>>> folks whom ULAN gendered one way, and WD another- but were undeniably the
>>> same photographer. )
>>> Thanks,
>>> David
>>>
>>>
>>> On Tuesday, December 8, 2015, Tom Morris <tfmor...@gmail.com> wrote:
>>>
>>>> Can you explain what "indexing" means in this context?  Is there some
>>>> type of matching process?  How are duplicates resolved, if at all? Was the
>>>> Wikidata info extracted from a dump or one of the APIs?
>>>>
>>>> When I looked at the first person I picked at random, Pierre Berdoy
>>>> (ID:269710), I see that both Wikidata and Wikipedia claim that he was born
>>>> in Biarritz while the NYPL database claims he was born in Nashua, NH.  So,
>>>> it would appear that there are either two different people with the same
>>>> name, born in different places, or the birth place is wrong.
>>>>
>>>>
>>>> http://mgiraldo.github.io/pic/?=2028247=269710|42.7575,-71.4644
>>>> https://www.wikidata.org/wiki/Q3383941

[Wikidata] Source statistics

2015-09-07 Thread André Costa
Hi all!

I'm wondering if there is a way (SQL, API, tool or otherwise) of finding
out how often a particular source is used on Wikidata.

The background is a collaboration with two GLAMs where we have used their
open (and CC0) datasets to add and/or source statements on Wikidata for
items on which they can be considered an authority. Now I figured it would
be nice to give them back a number for just how big the impact was.

While I can find out how many items should be affected, I couldn't find an
easy way, short of analysing each of these, to find out how many statements
were affected.

Any suggestions would be welcome.

Some details: Each reference is a P248 claim + P577 claim (where the latter
may change)
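
For concreteness, the kind of count I am after would be something like the
sketch below against the query service, counting statements whose reference
block contains a P248 claim pointing at the GLAM's dataset item (Q12345678 is
just a placeholder; I have no idea yet how well such a query performs):

import requests

QUERY = """
SELECT (COUNT(DISTINCT ?statement) AS ?statements)
WHERE {
  ?statement prov:wasDerivedFrom ?ref .
  ?ref pr:P248 wd:Q12345678 .
}
"""

resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "source-usage-count-example/0.1"},
)
resp.raise_for_status()
print(resp.json()["results"]["bindings"][0]["statements"]["value"])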

Cheers,
André / Lokal_Profil
André Costa | GLAM-tekniker, Wikimedia Sverige | andre.co...@wikimedia.se |
+46 (0)733-964574

Support free knowledge, become a member of Wikimedia Sverige.
Read more at blimedlem.wikimedia.se
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata-l] novalue in qualifiers or references

2015-04-26 Thread André Costa
Could you not add the last active date as a qualifier to the somevalue
death date?

In general, uncertainty in dates is not so easily entered. Born 1969 or
1970 cannot be entered as 1969 with decade precision, since that becomes the
1960s (at least that is what is shown to readers), so the only legitimate way
of entering it is 20th century (bringing the uncertainty from 2 to 100 years).

In general being able to model dates as between X and Y (as for numbers)
would be nice.
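
As a concrete illustration of the data-model side, the sketch below shows
(roughly) the Wikibase time value that "born 1969 or 1970" would ideally map
to, using the before/after fields that exist in the data model for exactly
this kind of range but which, as far as I know, the UI and most tools ignore:

# Rough sketch of a Wikibase time datavalue, field names as in the JSON data model.
date_of_birth = {
    "time": "+1969-00-00T00:00:00Z",
    "timezone": 0,
    "before": 0,
    "after": 1,       # up to one unit (here: one year) later, i.e. 1969 or 1970
    "precision": 9,   # 7 = century, 8 = decade, 9 = year
    "calendarmodel": "http://www.wikidata.org/entity/Q1985727",  # proleptic Gregorian
}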

Um.. sorry for the sidetrack from somevalue which sidetracked from the
novalue discussion.

/André

--
André Costa
GLAM-tekniker
Wikimedia Sverige
On 26 Apr 2015 10:23, Thomas Douillard thomas.douill...@gmail.com wrote:

 For the unknown date case, I also used some imprecise dates in the past:
 if you set a date with a precision of the century around the last time it
 was known active, for example, you get something semantically correct and
 that is probably easier to handle in queries (although the way to handle
 imprecise or overlapping date intervals in date comparison for the query
 engine is probably not known yet :) I'm curious to know)

 2015-04-26 9:29 GMT+02:00 Stas Malyshev smalys...@wikimedia.org:

 Hi!

  It would make sense to have a bot run and add dates of novalue for dob
  dod where we know that people must be dead.

 That would actually be opposite of what we want, since novalue would
 mean they were not born and are not dead. I think you meant unknown
 for date of death, in which case it does make sense.

 --
 Stas Malyshev
 smalys...@wikimedia.org

 ___
 Wikidata-l mailing list
 Wikidata-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikidata-l



 ___
 Wikidata-l mailing list
 Wikidata-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikidata-l


___
Wikidata-l mailing list
Wikidata-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata-l


Re: [Wikidata-l] Mapillary property

2015-03-05 Thread André Costa
Sorry for the delay. The promised Commons links:

* Template for subst-ing:
https://commons.wikimedia.org/wiki/Template:Mapillary
* Gadget for navigating Mapillary from image:
https://commons.wikimedia.org/wiki/User:Peterneubauer/mapillary.js (you
have to stick importScript( 'User:Peterneubauer/mapillary.js' ); in your
common.js to try it out)
* Example image:
https://commons.wikimedia.org/wiki/File:Camino_de_Oriente.jpg

And a prototype for finding and uploading Mapillary images to commons:
https://tools.wmflabs.org/mapillary-commons/wlm-maps/#15/55.6086/13.0152

End of aside

That said, I would agree that adding it to the geohack would make more sense
than adding a specific Mapillary property. The few times when you have a
specific Mapillary image in mind, it's probably worth sticking it on Commons
as well.

/André

 André Costa | GLAM-tekniker, Wikimedia Sverige | andre.co...@wikimedia.se |
 +46 (0)733-964574

Support free knowledge, become a member of Wikimedia Sverige.
Read more at blimedlem.wikimedia.se

On 3 March 2015 at 15:41, Neil Harris n...@tonal.clara.co.uk wrote:

 On 02/03/15 16:07, André Costa wrote:

 An aside:

 There is an easy template for Mapillary images on commons along with a
 gadget which allows you to navigate Mapillary from that commons image. On
 mobile now but can send the links once I'm on a laptop, if noone beats me
 to it.

 As a sidenote the template is intended to be used (substed) together with
 Magnus' url2commons tool.

 --
 André Costa
 GLAM-tekniker
 Wikimedia Sverige
 On 2 Mar 2015 13:56, Andy Mabbett a...@pigsonthewing.org.uk wrote:


 Another thought: since Mapillary seem to allow linking by geocoordinate:
 for example,

 https://www.mapillary.com/map/im/bbox/55.53640250425626/55.
 89726132569528/12.658309936523438/13.616867065429688

 might it be also worth adding Mapillary as an option to Magnus' geohack
 page?

 Neil



 ___
 Wikidata-l mailing list
 Wikidata-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikidata-l

___
Wikidata-l mailing list
Wikidata-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata-l


Re: [Wikidata-l] Mapillary property

2015-03-02 Thread André Costa
An aside:

There is an easy template for Mapillary images on Commons along with a
gadget which allows you to navigate Mapillary from that Commons image. On
mobile now but can send the links once I'm on a laptop, if no one beats me
to it.

As a side note, the template is intended to be used (substed) together with
Magnus' url2commons tool.

--
André Costa
GLAM-tekniker
Wikimedia Sverige
On 2 Mar 2015 13:56, Andy Mabbett a...@pigsonthewing.org.uk wrote:

 On 2 March 2015 at 10:06, Jo winfi...@gmail.com wrote:

  There you go:
  [...]

 That's the link to edit the section; to read it, use:


 https://www.wikidata.org/wiki/Wikidata:Property_proposal/References#Mapillary

 --
 Andy Mabbett
 @pigsonthewing
 http://pigsonthewing.org.uk

 ___
 Wikidata-l mailing list
 Wikidata-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikidata-l

___
Wikidata-l mailing list
Wikidata-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata-l


Re: [Wikidata-tech] [Wikidata-l] BREAKING CHANGE: Wikidata API changing top upper-case IDs.

2013-09-12 Thread André Costa
For consistency, the Wikipedia API should probably also return an upper-case
wikibase_item for the pageprops query [1].

/André

[1]
https://en.wikipedia.org/w/api.php?action=query&prop=pageprops&format=json&redirects=&titles=Wikidata
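
In the meantime, a small sketch of staying safe on the client side by
normalising whatever case the API returns (the response shape is the standard
pageprops one; Q2013 is the item for the "Wikidata" article used in the query
above):

import requests

resp = requests.get(
    "https://en.wikipedia.org/w/api.php",
    params={
        "action": "query",
        "prop": "pageprops",
        "format": "json",
        "redirects": "",
        "titles": "Wikidata",
    },
)
resp.raise_for_status()

for page in resp.json()["query"]["pages"].values():
    item_id = page.get("pageprops", {}).get("wikibase_item")
    if item_id:
        # normalise, so the client works whether the API says "q2013" or "Q2013"
        print(item_id.upper())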


On 10 September 2013 12:11, Daniel Kinzler daniel.kinz...@wikimedia.de wrote:

 Hi all.


 With today's deployment, the Wikibase API modules used on wikidata.org will
 change from using lower-case IDs (q12345) to upper-case IDs (Q12345). This is
 done for consistency with the way IDs are shown in the UI and used in URLs.

 The API will continue to accept entity IDs in lower-case as well as
 upper-case.
 Any bot or other client that has no property or item IDs hardcoded or
 configured
 in lower case should be fine.

 If however your code looks for some specific item or property in the output
 returned from the API, and it's using a lower-case ID to do so, it may now
 fail
 to match the respective ID.

 There is potential for similar problems with Lua code, depending on how
 the data
 structure is processed by Lua. We are working to minimize the impact there.

 Sorry for the short notice.

 Please test your code against test.wikidata.org and let us know if you
 find any
 issues.


 Thanks,
 Daniel


 PS: issue report on bugzilla:
 https://bugzilla.wikimedia.org/show_bug.cgi?id=53894

 ___
 Wikidata-l mailing list
 wikidat...@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikidata-l

___
Wikidata-tech mailing list
Wikidata-tech@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata-tech


Re: [Wikidata-l] BREAKING CHANGE: Wikidata API changing top upper-case IDs.

2013-09-12 Thread André Costa
For consistency, the Wikipedia API should probably also return an upper-case
wikibase_item for the pageprops query [1].

/André

[1]
https://en.wikipedia.org/w/api.php?action=query&prop=pageprops&format=json&redirects=&titles=Wikidata


On 10 September 2013 12:11, Daniel Kinzler daniel.kinz...@wikimedia.de wrote:

 Hi all.


 With today's deployment, the Wikibase API modules used on wikidata.org will
 change from using lower-case IDs (q12345) to upper-case IDs (Q12345). This is
 done for consistency with the way IDs are shown in the UI and used in URLs.

 The API will continue to accept entity IDs in lower-case as well as
 upper-case.
 Any bot or other client that has no property or item IDs hardcoded or
 configured
 in lower case should be fine.

 If however your code looks for some specific item or property in the output
 returned from the API, and it's using a lower-case ID to do so, it may now
 fail
 to match the respective ID.

 There is potential for similar problems with Lua code, depending on how
 the data
 structure is processed by Lua. We are working to minimize the impact there.

 Sorry for the short notice.

 Please test your code against test.wikidata.org and let us know if you
 find any
 issues.


 Thanks,
 Daniel


 PS: issue report on bugzilla:
 https://bugzilla.wikimedia.org/show_bug.cgi?id=53894

 ___
 Wikidata-l mailing list
 Wikidata-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikidata-l

___
Wikidata-l mailing list
Wikidata-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata-l


Re: [Wikidata-l] Recent API changes?

2013-07-26 Thread André Costa
Thanks for fixing =)


On 25 July 2013 14:40, Daniel Kinzler daniel.kinz...@wikimedia.de wrote:

 That's a bug, thanks for reporting!

 https://bugzilla.wikimedia.org/show_bug.cgi?id=52020

 -- daniel

  On 25.07.2013 13:54, André Costa wrote:
  Did the recent API bugfixes touch the sitelinks/urls property (for
  wbgetentities)? Because this no longer returns the urls but instead
 returns
  the same result as sitelinks
 
  [1]
 
  https://www.wikidata.org/w/api.php?action=wbgetentities&format=xml&ids=Q1&props=sitelinks%2Furls
  [2]
 
  https://www.wikidata.org/w/api.php?action=wbgetentities&format=xml&ids=Q1&props=sitelinks
 
  /Lokal_Profil
 
 
 
  ___
  Wikidata-l mailing list
  Wikidata-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikidata-l
 


 ___
 Wikidata-l mailing list
 Wikidata-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikidata-l

___
Wikidata-l mailing list
Wikidata-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata-l