Hi,
I'm preparing an image donation of some 350 picture books from 1810 to 1880 
(taken from the collection 
http://www.geheugenvannederland.nl/?/en/collecties/prentenboeken_van_1810_tot_1950)
For every book I've constructed an XML file describing the pages (metadata). So 
e.g. for a book of 20 pages I have an XML file with 20 records. I can upload these 
in the normal way via the GWToolset web interface, also assigning a Commons 
category to the book.
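To make the setup concrete, here is a minimal sketch of reading one book's page records from such an XML file. The element names ("record", "title", "page") are placeholders of my own; the real names depend on the schema your JSON mapping file expects.

```python
# Hypothetical sketch: parse one book's per-page metadata records.
# Element names below are assumptions, not the actual GWToolset schema.
import xml.etree.ElementTree as ET

def read_records(xml_text):
    """Return a list of per-page metadata dicts from a book's XML."""
    root = ET.fromstring(xml_text)
    return [
        {child.tag: (child.text or "").strip() for child in record}
        for record in root.iter("record")
    ]

sample = """<records>
  <record><title>Prentenboek p. 1</title><page>1</page></record>
  <record><title>Prentenboek p. 2</title><page>2</page></record>
</records>"""

pages = read_records(sample)
print(len(pages))        # 2
print(pages[0]["page"])  # 1
```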

For 1 book that's doable, but for 350 books I would need to upload 350 XML 
files, one by one, via the GWToolset web interface (using the same JSON mapping 
file for all uploads). That would take me a lot of time (and it's rather boring)...

So I'm wondering if / how I could automate this. Is there a more 
direct/efficient way?

I can imagine doing some command-line interfacing (Pywikibot??), with the 
XML file, the JSON mapping, and the target Commons category name as input 
parameters. Would that be an option?
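The batch-driver part of that idea could be sketched as below. There is no official GWToolset command-line client that I know of, so this only builds the per-book job list (XML file + shared mapping + category); the function that actually performs each upload (`upload_book`, or a Pywikibot script) is left as a placeholder and would need to be written against whatever interface turns out to be available.

```python
# Hypothetical batch driver: pair each of the 350 book XMLs with the shared
# JSON mapping file and a per-book Commons category. The actual upload step
# is NOT implemented here -- it depends on the tooling chosen.
import tempfile
from pathlib import Path

def build_jobs(xml_dir, mapping_file, category_for):
    """One job dict per book XML found in xml_dir."""
    return [
        {
            "xml": str(xml_path),
            "mapping": mapping_file,
            "category": category_for(xml_path.stem),
        }
        for xml_path in sorted(Path(xml_dir).glob("*.xml"))
    ]

# Small self-contained demo with two dummy book files:
with tempfile.TemporaryDirectory() as d:
    for name in ("book_001.xml", "book_002.xml"):
        Path(d, name).write_text("<records/>")
    jobs = build_jobs(d, "mapping.json", lambda stem: f"Category:{stem}")

print([j["category"] for j in jobs])
# -> ['Category:book_001', 'Category:book_002']
```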

Any tricks, tips & directions are very welcome.


Met vriendelijke groet / With kind regards

Olaf Janssen

Wikipedia & open data coordinator

Koninklijke Bibliotheek - National Library of the Netherlands
olaf.jans...@kb.nl
+31 (0)70 3140 388
@ookgezellig
www.slideshare.net/OlafJanssenNL


Koninklijke Bibliotheek, National Library of the Netherlands
Prins Willem-Alexanderhof 5 | 2595 BE Den Haag
Postbus 90407 | 2509 LK Den Haag | (070) 314 09 11 | www.kb.nl
_______________________________________________
Glamtools mailing list
Glamtools@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/glamtools