[Wikidata] LD4 Wikibase Working Hour May 27: The history, present and future of WBStack & WBaas (Wikibase as a service)

2021-05-20 Thread j s

Please join us for the next installment of the LD4 Wikibase Working Hour!

When: Thursday, 27 May 2021, 1 PM Eastern US (time zone converter)
Registration: Please fill in this form to register.

The May 2021 Working Hour will feature two presentations: 

The history, present and future of WBStack & WBaas (Wikibase as a service): a 
presentation by Adam Shorland
Adam will give a broad overview of WBStack and WBaas, looking at why it exists, 
what it does, how it does it, and where it wants to go.
Adam Shorland (Addshore) is Wikidata & Wikibase Tech Lead and has worked at 
Wikimedia Germany for more than six years.

Testing WBStack at Columbia University Libraries with Timothy Ryan Mendenhall
Mendenhall will give a brief demonstration of an exploration of WBStack at 
CUL, aimed at resolving identifiers within Columbia University Libraries.
Timothy Ryan Mendenhall is a Metadata Librarian at Columbia University 
Libraries.


About the LD4 Wikibase Working Hour: 
The LD4 Wikibase Working Hour seeks to create a space for GLAM professionals 
experimenting with Wikibase/WBStack implementations (Wikibase is the software 
that Wikidata is based on) to learn collaboratively and share tips, tools, and 
resources. For more details, see the project page. Sign up via this link for 
LD4 Wikidata Affinity Group, Wikidata Working Hour, and Wikibase Working Hour 
announcements.








[Wikidata] LD4 Wikibase Working Hour, 30 April 2021, 11AM EDT: Data modeling for Rhizome's Wikibase instance

2021-04-27 Thread j s

On behalf of the LD4 Wikibase Working Hour team, please join us.
— Jackie



Please join us for the next installment of the LD4 Wikibase Working Hour!

When: 30 April 2021, 11AM-12PM Eastern (US) (time zone converter)
Registration: Please fill in this form to register.

The April Working Hour will feature Lozana Rossenova discussing her work 
developing a custom data model for Rhizome's Wikibase instance.

Dr Lozana Rossenova is a digital humanities researcher and designer. From 2016 
to 2021, Rossenova was a PhD candidate in Digital Archive Design at the 
Centre for the Study of the Networked Image (London South Bank University), in 
collaboration with Rhizome, an international born-digital art organisation 
based in New York. Rossenova is an active member of the Wikidata and Wikibase 
open source development communities, and a Steering Committee member of 
OpenRefine, an open source data management tool with wide adoption in heritage, 
research and digital humanities communities.

About the LD4 Wikibase Working Hour: 
The LD4 Wikibase Working Hour seeks to create a space for GLAM professionals 
experimenting with Wikibase/WBStack implementations (Wikibase is the software 
that Wikidata is based on) to learn collaboratively and share tips, tools, and 
resources. For more details, see the project page. To sign up for monthly 
announcements, please fill out this form.

-- 
Christine Fernsebner Eslao
Metadata Management
Harvard Library Information & Technical Services
625 Massachusetts Avenue
Cambridge MA 02139
 
es...@fas.harvard.edu 


[Wikidata] Fwd: Guidelines for entering label in various languages, e.g. Chinese

2021-02-22 Thread j s
Thank you, Gerard. I have used the tool you alluded to as an end-user, changing 
the display language from English to Spanish, to Chinese traditional, or to 
Chinese simplified.

But my question about describing an entity in Wikidata is on the data-recording 
side. At the moment, Chinese (zh) appears to represent both spoken and written 
Chinese, yet the scripts recorded under it are quite all over the map. I cannot 
figure out what the best practices are. Would you mind pointing me to the 
guidelines if you are aware of any?

Thank you very much.

—Jackie


> Begin forwarded message:
> 
> 
> Date: Sun, 21 Feb 2021 10:23:13 +0100
> From: Gerard Meijssen 
> To: Discussion list for the Wikidata project 
> Subject: Re: [Wikidata] Guidelines for entering label in various languages, 
> e.g. Chinese
>  
> 
> Hoi,
> For Wikipedia we have a tool that provides to the reader the simplified or
> traditional representation based on the user's preference. We could do that
> for Wikidata if we cared to. The same is true for Commons: its search
> engine works for any language based on the availability of Wikidata labels
> in other languages, and we could serve more than half the world's population
> if we cared to.
> Thanks,
>  GerardM
> 
> On Fri, 19 Feb 2021 at 21:09, j s  wrote:
> 
>> Hi,
>> 
>> I was alerted that there are many properties without a label in zh-Hant 
>> <https://www.wikidata.org/w/index.php?search=-haslabel:zh-hant&title=Special:Search&profile=advanced&fulltext=1&advancedSearch-current=%7B%7D&ns120=1>.
>> When I reviewed the search result, I was surprised to see Chinese
>> traditional (zh-Hant) and simplified scripts (zh-Hans) were both recorded
>> under the label Chinese (zh).
>> 
>> Does anyone know if there are guidelines for entering data under different
>> language tags, subtags for script and region? Thank you for your help.
>> 
>> ---
>> Jackie Shieh
>> Descriptive Data Management
>> Discovery Services Division
>> 
>> LibrariesArchives.si.edu
>> library.si.edu
>> 
>> 
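For anyone who wants to check which Chinese label variants an entity actually 
carries, the wbgetentities API module accepts a list of language codes. A 
minimal sketch in Python; the requests library and the choice of P31 
("instance of") as a test property are illustrative assumptions, not anything 
prescribed in this thread:

import requests

API = "https://www.wikidata.org/w/api.php"

def chinese_labels(entity_id):
    # Fetch only the zh, zh-hans and zh-hant labels for one item or property.
    params = {
        "action": "wbgetentities",
        "ids": entity_id,
        "props": "labels",
        "languages": "zh|zh-hans|zh-hant",
        "format": "json",
    }
    entity = requests.get(API, params=params).json()["entities"][entity_id]
    return {code: lab["value"] for code, lab in entity.get("labels", {}).items()}

# P31 is only an illustrative choice; any Q or P identifier works.
print(chinese_labels("P31"))

A zh-hans or zh-hant code missing from the result is exactly the kind of gap 
the search linked above surfaces.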



[Wikidata] Guidelines for entering label in various languages, e.g. Chinese

2021-02-19 Thread j s
Hi,

I was alerted that there are many properties without a label in zh-Hant. When I 
reviewed the search result, I was surprised to see that Chinese traditional 
(zh-Hant) and simplified (zh-Hans) scripts were both recorded under the label 
Chinese (zh).

Does anyone know if there are guidelines for entering data under different 
language tags, subtags for script and region? Thank you for your help.

---
Jackie Shieh
Descriptive Data Management
Discovery Services Division 
 
LibrariesArchives.si.edu
library.si.edu


Re: [Wikidata] Entity data serialized in JSON

2020-10-26 Thread j s
Thanks, and apologies; I should have included screenshots when describing 
what I experienced.

I am wondering what causes the default display in the Firefox browser: for one 
item (below), a user must expand each property to see its data:
https://www.wikidata.org/wiki/Special:EntityData/Q151580.json

while for others the data are displayed uncollapsed. Is it the Firefox browser 
or the data source that triggers the collapsing of the data for Q151580?
https://www.wikidata.org/wiki/Special:EntityData/Q2833128.json

Thanks!

—Jackie




> 
> 
> Date: Fri, 23 Oct 2020 11:01:41 -0500
> From: Thad Guidry 
> To: Discussion list for the Wikidata project 
> Subject: Re: [Wikidata] Entity data serialized in JSON
> 
> 
> "Collapse All" on the menu row for the JSON display.
> Then expand 2 levels with the dropdown arrows... you will then see they
> have the same top-level structure... labels, descriptions, aliases, claims,
> sitelinks  (essentially, the Wikidata data model
> <https://www.mediawiki.org/wiki/Wikibase/DataModel> )
> entities
>   Q151580
>     pageid 152997
>     ns 0
>     title "Q151580"
>     lastrevid 1292352369
>     modified "2020-10-15T23:04:00Z"
>     type "item"
>     id "Q151580"
>     labels {…}
>     descriptions {…}
>     aliases {…}
>     claims {…}
>     sitelinks {…}
> 
> entities
>   Q2833128
>     pageid 2713503
>     ns 0
>     title "Q2833128"
>     lastrevid 1276579382
>     modified "2020-09-13T09:02:49Z"
>     type "item"
>     id "Q2833128"
>     labels {…}
>     descriptions {…}
>     aliases {…}
>     claims {…}
>     sitelinks {…}
> 
> Thad
> https://www.linkedin.com/in/thadguidry/
> 
> 
> On Fri, Oct 23, 2020 at 9:02 AM j s  wrote:
> 
>> 
>> Does anyone know why the JSON returns for the two entities below display
>> so differently in Firefox?
>> 
>> https://www.wikidata.org/wiki/Special:EntityData/Q151580.json
>> vs.
>> https://www.wikidata.org/wiki/Special:EntityData/Q2833128.json
>> 
>> Is this something worth noting? Thank you.
>> 
>> —Jackie
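Thad's observation can also be verified outside the browser, which would 
suggest the collapsing is a behaviour of Firefox's JSON viewer rather than a 
difference in the data. A short Python sketch, with requests as an assumed 
HTTP library:

import requests

for qid in ("Q151580", "Q2833128"):
    url = f"https://www.wikidata.org/wiki/Special:EntityData/{qid}.json"
    entity = requests.get(url).json()["entities"][qid]
    # Both items expose the same top-level keys from the Wikibase data model:
    # labels, descriptions, aliases, claims, sitelinks, plus page metadata
    # such as pageid, lastrevid and modified.
    print(qid, sorted(entity))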


[Wikidata] Fwd: Any API available for extraction based on Q#?

2020-10-23 Thread j s
Hi Fariz,

This is simply marvelous… 

The JSON serialization returns just the Wikibase properties and values, which 
may take less effort for extracting the values we need at the moment. The TTL 
returns much more information, which we should be able to do more with at a 
later time.

Very exciting. Thank you so very much! 

---
Jackie Shieh
Descriptive Data Management
Discovery Services Division 



> Begin forwarded message:
> 
> Date: Fri, 23 Oct 2020 10:38:56 +0700
> From: Fariz Darari  
> To: Discussion list for the Wikidata project  
> Subject: Re: [Wikidata] Any API available for extraction based on Q#?
> 
> 
> Hello Jackie,
> 
> not necessarily an answer, but you could get description/statements of Q#
> via: https://www.wikidata.org/wiki/Special:EntityData/Q#.ttl
> 
> So, for instance, the (turtle syntax) statements of Indonesia would be:
> https://www.wikidata.org/wiki/Special:EntityData/Q252.ttl
> 
> Regards,
> Fariz
> 
> On Wed, Oct 21, 2020 at 7:23 PM j s  wrote:
> 
>> Hi,
>> 
>> I am extremely new to SPARQL and have yet to find how I could feed a file
>> containing a series of Q item numbers in order to extract statements for
>> our needs.
>> 
>> I suspect some of you may have succeeded in this. Would you recommend an
>> API tool (or combination of tools) to extract all statements from a
>> file that contains a series of Q item numbers? I must preface this by
>> saying that I am a metadata person, not a software developer.
>> 
>> Thank you very much for any pointers!
>> 
>> With regards,
>> --Jackie
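Fariz's Special:EntityData pattern extends naturally to a batch job over a 
file of Q numbers. A minimal Python sketch; the file name qids.txt and the 
requests library are assumptions for illustration:

import requests

# Assumed input: a file with one Q number per line, e.g. "Q252".
with open("qids.txt") as f:
    qids = [line.strip() for line in f if line.strip()]

for qid in qids:
    url = f"https://www.wikidata.org/wiki/Special:EntityData/{qid}.json"
    data = requests.get(url).json()
    # If qid is a redirect, the entity sits under its canonical ID,
    # so take whatever single entity the response contains.
    entity = next(iter(data["entities"].values()))
    for prop, statements in sorted(entity["claims"].items()):
        print(qid, prop, len(statements))

This prints, for each item, every property ID and the number of statements 
recorded under it; swapping .json for .ttl in the URL returns the Turtle 
serialization instead.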



[Wikidata] Any API available for extraction based on Q#?

2020-10-21 Thread j s
Hi,

I am extremely new to SPARQL and have yet to find how I could feed a file
containing a series of Q item numbers in order to extract statements for
our needs.

I suspect some of you may have succeeded in this. Would you recommend an
API tool (or combination of tools) to extract all statements from a file
that contains a series of Q item numbers? I must preface this by saying that
I am a metadata person, not a software developer.

Thank you very much for any pointers!

With regards,
--Jackie
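
One SPARQL-based approach, sketched here as a possibility rather than a 
recommendation from the thread: paste the Q numbers into a VALUES clause and 
send the query to the Wikidata Query Service. The example Q numbers, the 
requests library, and the restriction to truthy (wdt:) statements are 
illustrative choices:

import requests

ENDPOINT = "https://query.wikidata.org/sparql"

# Example Q numbers; in practice, read them from your file.
qids = ["Q252", "Q151580"]
values = " ".join(f"wd:{q}" for q in qids)

query = f"""
SELECT ?item ?property ?value WHERE {{
  VALUES ?item {{ {values} }}
  ?item ?property ?value .
  # Keep only the "truthy" direct-statement properties.
  FILTER(STRSTARTS(STR(?property), "http://www.wikidata.org/prop/direct/"))
}}
"""

resp = requests.get(
    ENDPOINT,
    params={"query": query, "format": "json"},
    headers={"User-Agent": "qid-batch-sketch/0.1"},
)
for row in resp.json()["results"]["bindings"]:
    print(row["item"]["value"], row["property"]["value"], row["value"]["value"])

Very long lists of Q numbers would need to be sent in chunks, since the query 
service limits query size and runtime.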