[Wikidata] Upcoming short presentation in London

2015-06-22 Thread Navino Evans
Hi all,

I will be doing a short presentation about Wikidata at an ISKO conference
in London, "Knowledge Organization – making a difference"
<http://www.iskouk.org/content/knowledge-organization-making-difference>
(13th - 14th July)

The title of the talk is "Wikidata: the potential of structured data to
enable applications such as Histropedia"

Unfortunately it's a bit misleading as I've changed the focus of the talk,
but missed the boat for changing the title. It's actually going to be
entirely about Wikidata, with a brief mention of a range of third party
uses (the description in the programme will reflect this though).

The organisers have asked that I spread the word about the conference, so
please do let anyone know who may be interested in attending :)

Best wishes,


Navino
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Upcoming short presentation in London

2015-06-22 Thread Navino Evans
Thanks Gerard! will do :)

Best wishes,

Nav

On 22 June 2015 at 20:38, Gerard Meijssen  wrote:

> Hoi,
> Have fun :) and do the good job :) you have been doing :)
> Thanks,
>  GerardM
>
> On 22 June 2015 at 14:07, Navino Evans  wrote:
>
>> Hi all,
>>
>> I will be doing a short presentation about Wikidata at an ISKO conference
>> in London Knowledge Organization – making a difference
>> <http://www.iskouk.org/content/knowledge-organization-making-difference>
>> (13th - 14th July)
>>
>> The title of the talk is "Wikidata: the potential of structured data to
>> enable applications such as Histropedia"
>>
>> Unfortunately it's a bit misleading as I've changed the focus of the
>> talk, but missed the boat for changing the title. It's actually going to be
>> entirely about Wikidata, with a brief mention of a range of third party
>> uses (the description in the programme will reflect this though).
>>
>> The organisers have asked that I spread the word about the conference, so
>> please do let anyone know who may interested in attending :)
>>
>> Best wishes,
>>
>>
>> Navino
>>
>> ___
>> Wikidata mailing list
>> Wikidata@lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>>
>>
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
>


-- 
___

The Timeline of Everything

www.histropedia.com

Twitter <https://twitter.com/Histropedia> | Facebook
<https://www.facebook.com/Histropedia> | Google +
<https://plus.google.com/u/0/b/104484373317792180682/104484373317792180682/posts>
| LinkedIn <http://www.linkedin.com/company/histropedia-ltd>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Units are live! \o/

2015-09-09 Thread Navino Evans
So much exciting news lately! Many congratulations! :)

On 9 September 2015 at 22:29, David Cuenca Tudela  wrote:

> Just created "area" after two years of waiting! yay! congratulations! \o/
>
> On Wed, Sep 9, 2015 at 11:08 PM, Lydia Pintscher <
> lydia.pintsc...@wikimedia.de> wrote:
>
>> On Wed, Sep 9, 2015 at 9:49 PM, Lydia Pintscher
>>  wrote:
>> > Hey everyone :)
>> >
>> > As promised we just enabled support for quantities with units on
>> > Wikidata. So from now on you'll be able to store fancy things like the
>> > height of a mountain or the boiling point of an element.
>> >
>> > Quite a few properties have been waiting on unit support before they
>> > are created. I assume they will be created in the next hours and then
>> > you can go ahead and add all of the measurements.
>>
>> For anyone who is curious: Here is the list of properties already
>> created since unit support is available:
>>
>> https://www.wikidata.org/w/index.php?title=Special:ListProperties/quantity&limit=50&offset=103
>> and here is the list of properties that were waiting on unit support:
>> https://www.wikidata.org/wiki/Wikidata:Property_proposal/Pending/2
>> Those should change over the next hours/days.
>>
>>
>> Cheers
>> Lydia
>>
>> --
>> Lydia Pintscher - http://about.me/lydia.pintscher
>> Product Manager for Wikidata
>>
>> Wikimedia Deutschland e.V.
>> Tempelhofer Ufer 23-24
>> 10963 Berlin
>> www.wikimedia.de
>>
>> Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
>>
>> Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg
>> unter der Nummer 23855 Nz. Als gemeinnützig anerkannt durch das
>> Finanzamt für Körperschaften I Berlin, Steuernummer 27/681/51985.
>>
>> ___
>> Wikidata mailing list
>> Wikidata@lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>>
>
>
>
> --
> Etiamsi omnes, ego non
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
>


-- 
___

The Timeline of Everything

www.histropedia.com

Twitter | Facebook | Google + | LinkedIn

___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


[Wikidata] Wikidata helpers in Edinburgh for Repository Fringe 2016 ?

2016-06-01 Thread Navino Evans
Hi all,

Just posting this on the off chance of finding some Wikidatans based in or
around Edinburgh who would be free to help out in a practical Wikidata
editing session on 1st August?

Here's a link to the event: http://rfringe16.blogs.edina.ac.uk/

The session is basically an introduction to Wikidata, a demo of some of the
coolest things about it, and then a practical session to teach everyone how
to edit.


We're expecting an audience of up to 64 people, and currently have two
people with Wikidata knowledge, so a couple more would be very handy for the
practical part.

Cheers :)

Nav



-- 
___

The Timeline of Everything

www.histropedia.com

Twitter | Facebook | Google + | LinkedIn

___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


[Wikidata] Render sparql queries using the Histropedia timeline engine

2016-08-10 Thread Navino Evans
Hi all,



At long last, we’re delighted to announce that you can now render SPARQL
queries using the Histropedia timeline engine \o/


Histropedia Wikidata Query Viewer
<http://histropedia.com/showcase/wikidata-viewer.html>


Unlike the main Histropedia site, this tool renders timelines with data
directly from live Wikidata queries. It lets you map query variables to the
values used to render the timeline. A few notable extra features compared
with the built-in timeline view on the Wikidata Query Service:

*Precision* - You can render each event according to the precision of the
date (as long as you add date precision to your query). It will default to
day precision if you leave this out.

*Rank* – The events on the timeline have a rank defined by the order of
your SPARQL query results. You can also choose a query variable to use for
rank, but it’s not really needed if you use ORDER BY in your query to
control the order of results. Higher ranked events are placed more
prominently on the timeline.

*URL* – You can choose whichever URL you like from your query results,
which will be opened in a new tab when you double click on an event on the
timeline.

*Automatic colour code / filter* – You can choose any variable in your
SPARQL query to use for colour coding and filtering. From what I could tell
from the preview, this seems to be the same as the new map layers feature
that is close to launch on the Wikidata Query Service (which looks awesome
by the way!)

It’s also similar to the ‘group by property’ feature on Magnus’ Listeria
tool, but using an arbitrary variable from the SPARQL results instead of a
Wikidata property.
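
For anyone who finds a concrete query easier to follow, here's a rough
sketch of the kind of query these mappings are meant for (assuming the
standard prefixes the Wikidata Query Service provides by default; the
variable names and the choice of mathematicians are just illustrative, not
something the tool requires):

SELECT ?person ?personLabel ?dob ?precision ?article ?countryLabel WHERE {
  ?person wdt:P106 wd:Q170790 .                  # mathematicians, as an example
  ?person p:P569/psv:P569 ?dobValue .            # full date-of-birth value node
  ?dobValue wikibase:timeValue ?dob ;
            wikibase:timePrecision ?precision .  # 9 = year, 10 = month, 11 = day
  OPTIONAL {
    ?article schema:about ?person ;
             schema:isPartOf <https://en.wikipedia.org/> .  # a URL to open on double click
  }
  OPTIONAL { ?person wdt:P27 ?country . }        # e.g. for colour coding / filtering
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
ORDER BY ?dob
LIMIT 500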


*Some cool examples:*

Note: click on the droplet icon (top right) to see the colour code key and
filter options


   - Discoveries about planetary systems, colour coded by type of object
   <http://tinyurl.com/zlqupz9> (only items with an image and discoverer)
   - Whose birthday is today? colour coded by country of citizenship
   <http://tinyurl.com/hla7nqb>
   - Oil paintings at the Louvre, colour coded by creator
   <http://tinyurl.com/zu7cygv>
   - Descendants of Alfred the Great, colour coded by religion, in Japanese
   <http://tinyurl.com/h75utbg> – Note: select ‘no value’ in the filter
   panel for a fun edit list of people missing a religion statement :)

More examples are available in the dropdown list on the query input page
<http://www.histropedia.com/showcase/wikidata-viewer.html> in the tool.




The tool has been created by myself and fellow Histropedia co-founder Sean,
using our newly released JavaScript library. We are only just learning to
code, and it’s a very early-stage app, so please let me know if anything
breaks!


You can find more info on the JS library (called HistropediaJS) in this
announcement on the Histropedia mailing list
<https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!topic/histropedia-i/5_9_nBqvMx0>




Cheers!



Navino
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Render sparql queries using the Histropedia timeline engine

2016-08-10 Thread Navino Evans
Cheers Dan! :-)

I actually still need to sort out a bug reporting system, but I've got that
one noted down for now.

I'll add a link to the app soon for a better way to report things.

Best,

Navino

On 10 Aug 2016 21:36, "Dan Garry"  wrote:

Neat! It's always exciting to see awesome things like this built on top of
the Wikidata Query Service.

One small piece of feedback: the timelines look quite blurry
<https://i.imgur.com/k56aeFD.png> on my computer. Should I file this as a
bug report somewhere? :-)

Thanks, and keep up the great work!

Dan

On 10 August 2016 at 12:49, Navino Evans  wrote:

> Hi all,
>
>
>
> At long last, we’re delighted to announce you can now render sparql
> queries using the Histropedia timeline engine \o/
>
>
> Histropedia WikidataQuery Viewer
> <http://histropedia.com/showcase/wikidata-viewer.html>
>
>
>
> Unlike the main Histropedia site this tool renders timelines with data
> directly from live Wikidata queries. It lets you map query variables to
> values used to render the timeline. A few notable extra features compared
> with the built in timeline view on the Wikidata query service:
>
> *Precision* - You can render each event according to the precision of the
> date (as long as you add date precision to your query). It will default to
> day precision if you leave this out.
>
> *Rank *– The events on the timeline have a rank defined by the order of
> your sparql query results. You can also choose a query variable to use for
> rank, but it’s not really needed if you use ORDER BY in your query to
> control the order of results. Higher ranked events are placed more
> prominently on the timeline.
>
> *URL* – You can choose whichever URL you like from your query results,
> which will be opened in a new tab when you double click on an event on the
> timeline.
>
> *Automatic colour code / filter* – You can choose any variable in your
> sparql query to use for colour coding and filtering. From what I could tell
> from the preview, this seems to be the same as the new map layers feature
> that is close to launch on the Wikidata Query service (which looks awesome
> by the way!)
>
> Also similar to the ‘group by property’ feature on Magnus’ Listeria tool,
> but using an arbitrary variable from the sparql results instead of a
> Wikidata property.
>
>
> *Some cool examples:*
>
> Note: click on the droplet icon (top right) to see the colour code key and
> filter options
>
>
>- Discoveries about planetary systems, colour coded by type of object
><http://tinyurl.com/zlqupz9> (only items with an image and discoverer)
>- Who's birthday is today? colour coded by country of citizenship
><http://tinyurl.com/hla7nqb>
>- Oil paintings at the Louvre, colour coded by creator
><http://tinyurl.com/zu7cygv>
>- Descendants of Alfred the Great, colour coded by religion, in
>Japanese <http://tinyurl.com/h75utbg> – Note: select ‘no value’ in the
>filter panel for a fun edit list of people missing religion statement
>:)
>
> More examples on a dropdown list from the query input page
> <http://www.histropedia.com/showcase/wikidata-viewer.html> in the tool.
>
>
>
>
> The tool has been created by myself and fellow Histropedia co-founder Sean
> using our newly released JavaScript library. We are only just learning to
> code, and it’s a very early stage app so please let me know if anything
> breaks!
>
>
> You can find more info on the JS library (called HistropediaJS) on this
> announcement from the Histropedia mailing list
> <https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!topic/histropedia-i/5_9_nBqvMx0>
>
>
>
>
> Cheers!
>
>
>
> Navino
>
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
>


-- 
Dan Garry
Lead Product Manager, Discovery
Wikimedia Foundation

___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Render sparql queries using the Histropedia timeline engine

2016-08-11 Thread Navino Evans
Yay, fun indeed!
I can see all of the BCE dates are out by one on the timeline; I'll get
that fixed.

Thanks a lot for updating the JSON spec and filling me in on the details,
that's cleared a few things up :-)
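
In case it's useful to anyone else hitting the same thing, here's a tiny
sketch of the gap (assuming the standard Query Service prefixes, and using
Julius Caesar, Q1048, purely as an example of a BCE date). The RDF results
use astronomical numbering, while the Wikibase JSON for the same statement
uses traditional numbering, so BCE years differ by one between the two:

SELECT ?dob ?astronomicalYear ?traditionalYear WHERE {
  wd:Q1048 wdt:P569 ?dob .                         # Julius Caesar's date of birth
  BIND(YEAR(?dob) AS ?astronomicalYear)            # what SPARQL/RDF reports, e.g. -99
  BIND(?astronomicalYear - 1 AS ?traditionalYear)  # what the JSON shows, e.g. -100 (100 BCE)
}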

On 11 Aug 2016 12:40, "Daniel Kinzler"  wrote:

> Hi Navino!
>
> Thank you for your awesome work!
>
> Since this has caused some confusion again recently, I want to caution you
> about
> a major gotcha regarding dates in RDF and JSON: they use different
> conventions
> to represent years BCE. I just updated our JSON spec to reflect that
> reality,
> see <https://www.mediawiki.org/wiki/Wikibase/DataModel/JSON#time>.
>
> There is a lot of confusion about this issue throughout the linked data
> web, since the convention changed between XSD 1.0 (which uses -0044 to
> represent 44 BCE, and -0001 to represent 1 BCE) and XSD 1.1 (which uses
> -0043 to represent 44 BCE, and +0000 to represent 1 BCE). Our JSON uses
> the traditional numbering (1 BCE is -0001), while RDF uses the
> astronomical numbering (1 BCE is +0000).
>
> Yay, fun.
>
> Am 10.08.2016 um 21:49 schrieb Navino Evans:
> > Hi all,
> >
> >
> >
> > At long last, we’re delighted to announce you can now render sparql
> queries
> > using the Histropedia timeline engine \o/
> >
> >
> > Histropedia WikidataQuery Viewer
> > <http://histropedia.com/showcase/wikidata-viewer.html>
>
>
> --
> Daniel Kinzler
> Senior Software Developer
>
> Wikimedia Deutschland
> Gesellschaft zur Förderung Freien Wissens e.V.
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] A property to exemplify SPARQL queries associated with a property

2016-08-24 Thread Navino Evans
>
> If you could store queries, you could also store queries for each item
> that is about a list of things, so that the query returns exactly the
> things that should be in the list ... could be useful.


This also applies to a huge number of Wikipedia categories (the
non-subjective ones). It would be extremely useful to have queries
describing them attached to the Wikidata items for the categories.
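
As a small illustration (a sketch only, assuming the usual Query Service
prefixes): a category like "1879 births" is fully described by a query
along the lines of

SELECT ?person WHERE {
  ?person wdt:P31 wd:Q5 ;    # instance of human
          wdt:P569 ?dob .    # date of birth
  FILTER(YEAR(?dob) = 1879)
}

and attaching something like that to the category's Wikidata item would
make the membership rule machine-readable.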

On 24 August 2016 at 02:31, Ananth Subray  wrote:

> मा
> --
> From: Stas Malyshev 
> Sent: ‎24-‎08-‎2016 12:33 AM
> To: Discussion list for the Wikidata project.
> 
> Subject: Re: [Wikidata] A property to exemplify SPARQL queries associated
> witha property
>
> Hi!
>
> > Relaying a question from a brief discussion on Twitter [1], I am curious
> > to hear how people feel about the idea of creating a "SPARQL query
> > example" property for properties, modeled after "Wikidata property
> > example" [2]?
>
> Might be nice, but we need a good way to present the query in the UI
> (see below).
>
> > This would allow people to discover queries that exemplify how the
> > property is used in practice. Does the approach make sense or would it
> > stretch too much the scope of properties of properties? Are there better
> > ways to reference SPARQL examples and bring them closer to their source?
>
> I think it may be a good idea to start thinking about some way of
> storing queries on Wikidata maybe? On one hand, they are just strings,
> on the other hand, they are code - like CSS or Javascript - and storing
> them just as strings may be inconvenient. Maybe .sparql file extension
> handler like we have for .js and .json and so on?
>
> --
> Stas Malyshev
> smalys...@wikimedia.org
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
>


-- 
___

The Timeline of Everything

www.histropedia.com

Twitter | Facebook | Google + | LinkedIn

___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Dynamic Lists, Was: Re: List generation input

2016-09-15 Thread Navino Evans
Hi Jan,

The main issue that comes up for me with Listeria is with the 'section by
property' feature. There is currently no control over how it deals with
multiple values, so a simple list of people sectioned by occupation can
lead to very misleading results.
Every item appears only once on the list, so someone with two occupations
will just end up in one section or the other.

Ideally you need some way to specify how to choose between multiple values.
Some possible options that would help a lot:

   1. Just take the most recent value (according to a time qualifier; see
   the sketch below)
   2. Just take the highest ranked value
   3. Use an arbitrary variable from the SPARQL query instead of just a
   property
   4. Allow items to appear in multiple sections


The idea is that the list-creating user can choose between a set of possible
methods for consolidating multiple values, with a sensible default applied
when no selection has been made.
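
To make option 1 concrete, here's a rough SPARQL sketch of "most recent
value according to a time qualifier", using position held (P39) with its
start time (P580) qualifier purely as an example (the property choices are
mine, just for illustration):

SELECT ?person ?position WHERE {
  ?person p:P39 ?statement .
  ?statement ps:P39 ?position ;
             pq:P580 ?start .            # start time qualifier
  FILTER NOT EXISTS {                    # keep only the latest start time per person
    ?person p:P39 ?newer .
    ?newer pq:P580 ?newerStart .
    FILTER(?newerStart > ?start)
  }
}
LIMIT 100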

Cheers,

Nav



On 15 September 2016 at 13:09, Jan Dittrich 
wrote:

> ​
>>
>> I found people opposed to Listeria lists (in article namespace) for two
>> main reasons:
>>
>
> Thanks! This is very helpful for me.
>
> The wikitext overwriting is a good point and it is easy to understand that
> this leads to confusion.
>
> For handcrafted lists: For now, we only have some rather vague ideas how
> to supplement lists with custom data, but it is something that turned out
> to be very important for users and I'll continue working on this.
>
> Jan
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
>


-- 
___

The Timeline of Everything

www.histropedia.com

Twitter | Facebook | Google + | LinkedIn

___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Data import hub, data preparation instructions and import workflow for muggles

2016-11-21 Thread Navino Evans
I've just added some more to the page in the previously 'coming soon' Self
import section
<https://www.wikidata.org/wiki/User:John_Cummings/wikidataimport_guide#Option_2:_Self_import>,
as it seemed like this is actually the place where QuickStatements and
mix'n'match should come in.
I've tried to keep the details out, and just give a guide to choosing which
tool/approach to use for a particular situation. The mechanics of using the
tools etc. should all be on other pages, I presume.

Cheers,

Nav


On 21 November 2016 at 16:24, john cummings 
wrote:

> Hi Magnus
>
> I've avoided mentioning those for now as I know you are working on new
> tools, also I'm not very good at using them so wouldn't write good
> instructions :) I hope that once this is somewhere 'proper' that others
> with more knowledge can add this information in.
>
> My main idea with this is to break up the steps so people can collaborate
> on importing datasets and also learn skills along the workflow over time
> rather than having to learn everything in one go.
>
> Thanks
>
> John
>
> On 21 November 2016 at 17:11, Magnus Manske 
> wrote:
>
>> There are other options to consider:
>> * Curated import/sync via mix'n'match
>> * Batch-based import via QuickStatements (also see rewrite plans at
>> https://www.wikidata.org/wiki/User:Magnus_Manske/quick_statements2 )
>>
>> On Mon, Nov 21, 2016 at 3:11 PM john cummings 
>> wrote:
>>
>>> Dear all
>>>
>>>
>>> Myself and Navino Evans have been working on a bare bone as possible
>>> workflow and instructions for making importing data into Wikidata available
>>> to muggles like me. We have written instructions up to the point where
>>> people would make a request on the 'bot requests' page to import the data
>>> into Wikidata.
>>>
>>>
>>> Please take a look and share your thoughts
>>>
>>>
>>> https://www.wikidata.org/wiki/User:John_Cummings/Dataimporthub
>>>
>>> https://www.wikidata.org/wiki/User:John_Cummings/wikidataimport_guide
>>>
>>>
>>> Thanks very much
>>>
>>>
>>> John
>>> ___
>>> Wikidata mailing list
>>> Wikidata@lists.wikimedia.org
>>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>>>
>>
>> ___
>> Wikidata mailing list
>> Wikidata@lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>>
>>
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
>


-- 

*nav...@histropedia.com *

@NavinoEvans <https://twitter.com/NavinoEvans>

-

   www.histropedia.com

Twitter <https://twitter.com/Histropedia> | Facebook
<https://www.facebook.com/Histropedia> | Google +
<https://plus.google.com/+Histropedia>
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Data import hub, data preparation instructions and import workflow for muggles

2016-11-22 Thread Navino Evans
Thanks Marco!

Do you know if there's a system in place yet for adding new data to the
Primary Sources Tool?  I thought it was still only covering Freebase data
at the moment, but it should be in the import guide for sure if it can be
used for new data sets already.

Cheers,

Navino




On 22 November 2016 at 09:43, Marco Fossati  wrote:

> Hi John, Navino,
>
> the primary sources tool uses the QuickStatements syntax for large-scale
> non-curated dataset imports, see:
> https://www.wikidata.org/wiki/Wikidata:Data_donation#3._Work
> _with_the_Wikidata_community_to_import_the_data
> https://www.wikidata.org/wiki/Wikidata:Primary_sources_tool
>
> Best,
>
> Marco
>
> On 11/21/16 21:39, Navino Evans wrote:
>
>> I've just added some more to the page in the previously 'coming
>> soon' Self import
>> <https://www.wikidata.org/wiki/User:John_Cummings/wikidataim
>> port_guide#Option_2:_Self_import> section,
>> as it seemed like this is actually the place where QuickStatements and
>> mix'n'match should come in.
>> I've tried to keep the details out, and just give a guide to choosing
>> which tool/approach to use for a particular situation. The mechanics of
>> using the tools etc should all be on other pages I presume.
>>
>> Cheers,
>>
>> Nav
>>
>>
>> On 21 November 2016 at 16:24, john cummings > <mailto:mrjohncummi...@gmail.com>> wrote:
>>
>> Hi Magnus
>>
>> I've avoided mentioning those for now as I know you are working on
>> new tools, also I'm not very good at using them so wouldn't write
>> good instructions :) I hope that once this is somewhere 'proper'
>> that others with more knowledge can add this information in.
>>
>> My main idea with this is to break up the steps so people can
>> collaborate on importing datasets and also learn skills along the
>> workflow over time rather than having to learn everything in one go.
>>
>> Thanks
>>
>> John
>>
>> On 21 November 2016 at 17:11, Magnus Manske
>> mailto:magnusman...@googlemail.com>>
>> wrote:
>>
>> There are other options to consider:
>> * Curated import/sync via mix'n'match
>> * Batch-based import via QuickStatements (also see rewrite plans
>> at https://www.wikidata.org/wiki/User:Magnus_Manske/quick_state
>> ments2
>> <https://www.wikidata.org/wiki/User:Magnus_Manske/quick_stat
>> ements2> )
>>
>> On Mon, Nov 21, 2016 at 3:11 PM john cummings
>> mailto:mrjohncummi...@gmail.com>>
>> wrote:
>>
>> Dear all
>>
>>
>> Myself and Navino Evans have been working on a bare bone as
>> possible workflow and instructions for making importing data
>> into Wikidata available to muggles like me. We have written
>> instructions up to the point where people would make a
>> request on the 'bot requests' page to import the data into
>> Wikidata.
>>
>>
>> Please take a look and share your thoughts
>>
>>
>> https://www.wikidata.org/wiki/User:John_Cummings/Dataimporth
>> ub
>> <https://www.wikidata.org/wiki/User:John_Cummings/Dataimport
>> hub>
>>
>> https://www.wikidata.org/wiki/User:John_Cummings/wikidataimp
>> ort_guide
>> <https://www.wikidata.org/wiki/User:John_Cummings/wikidataim
>> port_guide>
>>
>>
>> Thanks very much
>>
>>
>> John
>>
>> ___
>> Wikidata mailing list
>> Wikidata@lists.wikimedia.org
>> <mailto:Wikidata@lists.wikimedia.org>
>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>> <https://lists.wikimedia.org/mailman/listinfo/wikidata>
>>
>>
>> ___
>> Wikidata mailing list
>> Wikidata@lists.wikimedia.org <mailto:Wikidata@lists.wikimedia.org
>> >
>> https://lists.wikimedia.org/mailman/listinfo/wikidata
>> <https://lists.wikimedia.org/mailman/listinfo/wikidata>
>>
>>
>>
>> ___
>> Wikidata mailing list
>> Wikidata@lists.wikime

Re: [Wikidata] Data import hub, data preparation instructions and import workflow for muggles

2016-11-25 Thread Navino Evans
Many thanks for the info Marco :)

I'll get in touch for an API token when I get some time to try that out.

Best,

Navino

On 22 November 2016 at 20:28, Marco Fossati  wrote:

> Hi Navino,
>
> Currently, there is an (undocumented and untested) API endpoint accepting
> POST requests as QuickStatements datasets:
> https://github.com/Wikidata/primarysources/tree/master/backe
> nd#import-statements
> If you want to try it, feel free to privately ping me for an API token.
>
> As a side note, the primary sources tool is undergoing a Wikimedia
> Foundation grant renewal request to give it a radical uplift:
> https://meta.wikimedia.org/wiki/Grants:IEG/StrepHit:_Wikidat
> a_Statements_Validation_via_References/Renewal
>
> Best,
>
> Marco
>
> On 11/22/16 14:00, Navino Evans wrote:
>
>> Thanks Marco!
>>
>> Do you know if there's a system in place yet for adding new data to the
>> Primary Sources Tool?  I thought it was still only covering Freebase
>> data at the moment, but it should be in the import guide for sure if it
>> can be used for new data sets already.
>>
>> Cheers,
>>
>> Navino
>>
>>
>>
>>
>> On 22 November 2016 at 09:43, Marco Fossati > <mailto:foss...@spaziodati.eu>> wrote:
>>
>> Hi John, Navino,
>>
>> the primary sources tool uses the QuickStatements syntax for
>> large-scale non-curated dataset imports, see:
>> https://www.wikidata.org/wiki/Wikidata:Data_donation#3._Work
>> _with_the_Wikidata_community_to_import_the_data
>> <https://www.wikidata.org/wiki/Wikidata:Data_donation#3._
>> Work_with_the_Wikidata_community_to_import_the_data>
>> https://www.wikidata.org/wiki/Wikidata:Primary_sources_tool
>> <https://www.wikidata.org/wiki/Wikidata:Primary_sources_tool>
>>
>> Best,
>>
>> Marco
>>
>> On 11/21/16 21:39, Navino Evans wrote:
>>
>> I've just added some more to the page in the previously 'coming
>> soon' Self import
>> <https://www.wikidata.org/wiki/User:John_Cummings/wikidataim
>> port_guide#Option_2:_Self_import
>> <https://www.wikidata.org/wiki/User:John_Cummings/wikidataim
>> port_guide#Option_2:_Self_import>>
>> section,
>> as it seemed like this is actually the place where
>> QuickStatements and
>> mix'n'match should come in.
>> I've tried to keep the details out, and just give a guide to
>> choosing
>> which tool/approach to use for a particular situation. The
>> mechanics of
>> using the tools etc should all be on other pages I presume.
>>
>> Cheers,
>>
>> Nav
>>
>>
>> On 21 November 2016 at 16:24, john cummings
>> mailto:mrjohncummi...@gmail.com>
>> <mailto:mrjohncummi...@gmail.com
>> <mailto:mrjohncummi...@gmail.com>>> wrote:
>>
>> Hi Magnus
>>
>> I've avoided mentioning those for now as I know you are
>> working on
>> new tools, also I'm not very good at using them so wouldn't
>> write
>> good instructions :) I hope that once this is somewhere
>> 'proper'
>> that others with more knowledge can add this information in.
>>
>> My main idea with this is to break up the steps so people can
>> collaborate on importing datasets and also learn skills
>> along the
>> workflow over time rather than having to learn everything in
>> one go.
>>
>> Thanks
>>
>> John
>>
>> On 21 November 2016 at 17:11, Magnus Manske
>> > <mailto:magnusman...@googlemail.com>
>> <mailto:magnusman...@googlemail.com
>> <mailto:magnusman...@googlemail.com>>>
>> wrote:
>>
>> There are other options to consider:
>> * Curated import/sync via mix'n'match
>> * Batch-based import via QuickStatements (also see
>> rewrite plans
>> at
>> https://www.wikidata.org/wiki/User:Magnus_Manske/quick_state
>> ments2
>> <https://www.wikidata.org/wiki/User:Magnus_Manske/quick_stat
>> ements2>
>>
>> <https://www.wikidata.org/wiki/User:Magnus_Manske/quick_stat
>> ements2

[Wikidata] New features for Wikidata Query Timeline tool

2016-12-13 Thread Navino Evans
Hi all,

We’ve just released some cool new features for the Histropedia Wikidata
Query Timeline tool.

Here's a quick overview of what's new, along with some examples (more
examples are available in the drop-down menu on the query input page):

*1. Vertical spacing controls + auto-fit mode*
A new control panel on the timeline for controlling the space between rows
of events.
This includes an 'auto' mode that keeps everything visible as you zoom and
scroll (set to ‘on’ by default for small timelines).
*Example: *


   - Heritage structures in London - should load in auto mode. Try zooming
   in to an area with your mouse wheel to see it auto-adjust. You can click
   on the arrows in the new panel (right of screen) to adjust the spacing
   manually.



*2. 'Colour scale' colour coding*
This is created automatically if you choose to colour code by a variable
that returns a number (e.g. population, height, etc).
*Examples:*
*Click the colour code icon to see the colour code key*


   - Discovery of the chemical elements – colour coded by atomic number
   (shows that heavy elements are discovered much later than light ones)
   - Things located within 20km of the Statue of Liberty - colour coded by
   distance from the Statue of Liberty (light colours are closest)



*3. Multiple filters*
You can now have multiple filter options on a timeline (previously only
allowed a single filter option)
*Examples:*
*Click on the filter icon to see the available filters.*


   - The Louvre Collections - filter by creator, genre, movement, material
   used and room

   - People born on this day – filter by gender, occupation, education,
   cause of death and ethnic group



*4. Automatic detection of timeline data from the SPARQL query*
Just paste a SPARQL query into the input box on the query input page and
click 'generate timeline'. Any query that works with the built-in timeline
view on the Wikidata Query Service should work automatically.
You can then optionally use the 'map variables' section to add *date
precision*, *colour codes* and *filters*, or to override anything that
automatic detection got wrong.
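
To give a feel for what the automatic detection picks up, this is roughly
the shape of the query behind the chemical elements example in point 2
above (a sketch under the usual Query Service prefixes; the real example
may differ in detail):

SELECT ?element ?elementLabel ?discovery ?atomicNumber WHERE {
  ?element wdt:P31 wd:Q11344 ;        # instance of chemical element
           wdt:P575 ?discovery ;      # time of discovery or invention
           wdt:P1086 ?atomicNumber .  # numeric variable -> colour scale
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
ORDER BY ?discovery

The ?discovery date is detected for the timeline, and choosing
?atomicNumber as the colour code variable gives the automatic colour scale
described in point 2.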



*5. New help popups*
There are now lots of new instructions on the query input page (just click
on the little help icons). This includes info on how to add colour codes
and filters to a timeline.



Let me know if you have any feedback or suggestions!


Cheers :)


Navino

-- 

*nav...@histropedia.com *

@NavinoEvans 
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Label gaps on Wikidata - (SPARQL help needed. SERVICE wikibase:label)

2017-02-24 Thread Navino Evans
Hi Rick,

Is this what you're after? http://tinyurl.com/z7ru9yr

Once you run the query there is a download drop-down menu, just above the
query results on the right hand side of the screen - it has a range of
options including CSV.

Hope that helps!

Nav




On 24 February 2017 at 02:25, Rick Labs  wrote:

> Thanks Stas & especially Kingsley for the example:
>
> # All subclasses of a class example
> # here all subclasses of P279 Organization (Q43229)
> SELECT ?item ?label ?itemDescription ?itemAltLabel
> WHERE
> {
>  ?item wdt:P279 wd:Q43229;
>rdfs:label ?label .
>  # SERVICE wikibase:label { bd:serviceParam wikibase:language
> "en,de,fr,ja,cn,ru,es,sv,pl,nl,sl,ca,it" }
>  FILTER (LANG(?label) = "en")
> }
> ORDER BY ASC(LCASE(?itemLabel))
>
> When I pull the FILTER line out of above I have almost what I need - "the
> universe" of all sub classes of organization (regardless of language).  I
> want all subclasses in the output, not just those available currently with
> an English label.
>
> In the table output, is it possible to get: a column for language code,
> and get the description to show up  (if available for that row)? That would
> be very helpful prior to my manual operations.
>
> Can I easily export the results table to CSV or Excel?  I can filter and
> sort easily from there provided I have the hooks.
>
> Thanks very much!
>
> Rick
>
> .
>
>
>
>
>
> On 2/23/2017 1:22 PM, Kingsley Idehen wrote:
>
> On 2/23/17 12:59 PM, Stas Malyshev wrote:
>
> Hi!
>
> On 2/23/17 7:20 AM, Thad Guidry wrote:
>
> In Freebase we had a parameter %lang=all
>
> Does the SPARQL label service have something similar ?
>
> Not as such, but you don't need it if you want all the labels, just do:
>
> ?item rdfs:label ?label
>
> and you'd get all labels. No need to invoke service for that, the
> service is for when you have specific set of languages you're interested
> in.
>
>
> Yep.
>
> Example at: http://tinyurl.com/h2sbvhd
>
> --
> Regards,
>
> Kingsley Idehen   
> Founder & CEO
> OpenLink Software   (Home Page: http://www.openlinksw.com)
>
> Weblogs (Blogs):
> Legacy Blog: http://www.openlinksw.com/blog/~kidehen/
> Blogspot Blog: http://kidehen.blogspot.com
> Medium Blog: https://medium.com/@kidehen
>
> Profile Pages:
> Pinterest: https://www.pinterest.com/kidehen/
> Quora: https://www.quora.com/profile/Kingsley-Uyi-Idehen
> Twitter: https://twitter.com/kidehen
> Google+: https://plus.google.com/+KingsleyIdehen/about
> LinkedIn: http://www.linkedin.com/in/kidehen
>
> Web Identities (WebID):
> Personal: http://kingsley.idehen.net/dataspace/person/kidehen#this
> : 
> http://id.myopenlink.net/DAV/home/KingsleyUyiIdehen/Public/kingsley.ttl#this
>
>
>
> ___
> Wikidata mailing 
> listWikidata@lists.wikimedia.orghttps://lists.wikimedia.org/mailman/listinfo/wikidata
>
>
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
>


-- 

*nav...@histropedia.com *

@NavinoEvans 

-

   www.histropedia.com

Twitter | Facebook | Google +

___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Label gaps on Wikidata - (SPARQL help needed. SERVICE wikibase:label)

2017-02-25 Thread Navino Evans
On 24 February 2017 at 22:00, Rick Labs  wrote:

> Nav,
>
> YES!!! that's it! Your SPARQL works perfectly, exactly what I wanted.
>
> Thanks very much. Just had to learn how to get the CVS into Excel as
> UTF-8, not hard. Can finally see what objects people want immediately below
> "Organizations", worldwide. (yes, whats evolved is pretty darn "chaotic")
> Very much appreciated.
>
> Rick


Excellent!! Very happy to help. Best of luck cleaning up the chaos :)


-- 

*nav...@histropedia.com *

@NavinoEvans 

-

   www.histropedia.com

Twitter | Facebook | Google +

___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


[Wikidata] Grant application for extending work with UNESCO data

2017-04-05 Thread Navino Evans
Hi all,


If you have a moment spare, please take a look at this WMF grant application
<https://meta.wikimedia.org/wiki/Grants:Project/Wikimedian_in_Residence_at_UNESCO_2017-2018>.
Feedback and endorsements will be greatly appreciated!

The application is to extend John Cummings' brilliant work making UNESCO
content available on Wikimedia projects, and includes a portion allocated
to me to continue the work on importing data from UNESCO and partner
agencies into Wikidata. It also covers improvements to the workflow and
documentation for the data import process, building on our previous work
getting the following pages together:

Wikidata:Data Import Hub
<https://www.wikidata.org/wiki/Wikidata:Data_Import_Hub>

Wikidata:Data donation
<https://www.wikidata.org/wiki/Wikidata:Data_donation>

Wikidata:Data Import Guide
<https://www.wikidata.org/wiki/Wikidata:Data_Import_Guide>

Wikidata:Partnerships and data imports
<https://www.wikidata.org/wiki/Wikidata:Partnerships_and_data_imports>



As well as the data import work described above, the main goals are:

   1. UNESCO’s publication workflows incorporate sharing open license
   content on Wikimedia projects.
   2. Support other Intergovernmental Organisations and the wider public to
   share content on Wikimedia projects.
   3. Support Wikimedia contributors to easily discover and use UNESCO
   content and the documentation produced.



Many thanks!


Nav
-- 

*nav...@histropedia.com *

@NavinoEvans 
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


Re: [Wikidata] Grant application for extending work with UNESCO data

2017-04-07 Thread Navino Evans
Thanks Fariz!

It will include a lot of data about heritage sites and the other UNESCO
inscription programmes (e.g. the Memory of the World register). Quite a bit
of groundwork has already been covered but there is still lots to do.
The other data of interest will come from the UNESCO Institute for
Statistics and other UN agencies, e.g. the Global Education Monitoring
Report, World Water Development Report and World Science Report.
The actual form that the statistical data would take in Wikidata is very
much dependent on the feedback from the community and what can work within
the Wikidata model.

Cheers,

Nav



On 5 Apr 2017 13:53, "Fariz Darari"  wrote:

Hello Nav,

the project sounds interesting! What kind of UNESCO data will be imported
to Wikidata? Will it be something like UNESCO heritage sites or anything
else?

Regards,
Fariz

On Wed, Apr 5, 2017 at 12:22 PM, Navino Evans 
wrote:

> Hi all,
>
>
> If you have a moment spare, please take a look at this WMF grant
> application
> <https://meta.wikimedia.org/wiki/Grants:Project/Wikimedian_in_Residence_at_UNESCO_2017-2018>.
> Feedback and endorsements will be greatly appreciated!
>
> The application is to extend John Cummings' brilliant work making UNESCO
> content available on Wikimedia projects, and includes a portion allocated
> to me to continue the work on importing data from UNESCO and partner
> agencies into Wikidata. It also covers improvements to the workflow and
> documentation for the data import process, building on our previous work
> getting the following pages together:
>
> Wikidata:Data Import Hub
> <https://www.wikidata.org/wiki/Wikidata:Data_Import_Hub>
>
> Wikidata:Data donation
> <https://www.wikidata.org/wiki/Wikidata:Data_donation>4
>
> Wikidata:Data Import Guide
> <https://www.wikidata.org/wiki/Wikidata:Data_Import_Guide>
>
> Wikidata:Partnerships and data imports
> <https://www.wikidata.org/wiki/Wikidata:Partnerships_and_data_imports>
>
>
> As well as the data import work described above, the main goals are:
>
>1. UNESCO’s publication workflows incorporate sharing open license
>content on Wikimedia projects.
>2. Support other Intergovernmental Organisations and the wider public
>to share content on Wikimedia projects.
>3. Support Wikimedia contributors to easily discover and use UNESCO
>content and the documentation produced.
>
>
>
> Many thanks!
>
>
> Nav
> --
>
> *nav...@histropedia.com *
>
> @NavinoEvans <https://twitter.com/NavinoEvans>
>
>
> ___
> Wikidata mailing list
> Wikidata@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikidata
>
>

___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata
___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata


[Wikidata] Action plan for improving the data import process

2017-12-13 Thread Navino Evans
Hi all,

Following on from a session I co-hosted at WikidataCon, we've put together
a project page aimed at creating an action plan for improving the data
import process:

Data import processes map (outlines what we have already, and what is still
needed)
Discussion page (discussion points and suggestions)

The main objective is to use this area to build a picture of what already
exists and what's missing in the data import process. This can then be used
to create actionable tasks (which we're proposing be managed on
Phabricator).

If you have anything to add please go ahead and edit the project page
and/or join the discussion.

We want to get as many community members as possible involved in these
early stages of planning, so please spread the word to anyone you think
will be interested in the data import process! :)


Many thanks,

Nav



-- 

*nav...@histropedia.com *

@NavinoEvans 

-

   www.histropedia.com

Twitter | Facebook | Google +

___
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata