Alex Brollo, 28/11/2013 11:38:
I (painfully) opened a bug in Bugzilla asking for the djvulibre binaries
to be installed on Labs. Any news about it? Were they already installed and I
asked needlessly? Are they, as I presume, a necessary tool for
Wikisource?
What's missing? https://bugzilla.wikimedia.o
I'll try and see, thanks Nemo; I apologize if I raised a needless alarm. I
hate Git, Bugzilla, the new settings of pywikipedia/pywikibot and the whole
lot of recent changes. I hope for a KISS "revolution" in these
procedures as soon as possible.
Alex
2013/11/28 Federico Leva (Nemo)
> Alex Bro
Yes, djvulibre is running on Labs! I apologize again for the false alarm. But
I'd like to take this opportunity to ask you about something related to
djvu file management.
I'll try to test some routines to manage both the image and text layers of our
it.source djvu files. My question: do I have to download th
Alex Brollo, 28/11/2013 20:36:
I'll try to test some routines to manage both the image and text layers of
our it.source djvu files. My question: do I have to download them from
Commons, or is there a way to access them/a copy of them
in some folder on Labs without any need of downloading (pa
I feel uncomfortable downloading large files just to use a little
bit of data... I presume the djvu files are saved as "bundled" files; is there
any news about saving them as "indirect" files, i.e. as single pages plus an
index file and some rather small pieces? Who could give me some detail
about dj
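The bundled-to-indirect conversion Alex asks about exists in djvulibre as djvmcvt. A minimal sketch, assuming djvmcvt is installed on the machine; the file and directory names below are placeholders:

```python
# Sketch: converting a bundled DjVu into an "indirect" one (an index
# file plus one small file per page) using djvulibre's djvmcvt tool.
# Usage pattern: djvmcvt -i <bundled.djvu> <output-dir> <index-name>
import subprocess


def djvmcvt_indirect_cmd(bundled, out_dir, index_name="index.djvu"):
    """Build the djvmcvt command line for a bundled -> indirect split."""
    return ["djvmcvt", "-i", bundled, out_dir, index_name]


def convert(bundled, out_dir):
    # check=True raises CalledProcessError if djvmcvt fails.
    subprocess.run(djvmcvt_indirect_cmd(bundled, out_dir), check=True)
```

The output directory then holds one small component file per page, so a tool can fetch only the pages it needs.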
There is already a lot of data in the img_metadata field of the image table. I
hope all the data you are looking for is in it.
Thomas
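As a note for readers of the archive: the metadata Thomas mentions is also exposed through the standard MediaWiki API (action=query, prop=imageinfo), so it can be read without touching the database replicas. A small sketch; the file title and the sample response shape are illustrative:

```python
# Sketch: fetching a file's stored metadata via the MediaWiki API
# rather than reading img_metadata from the image table directly.
from urllib.parse import urlencode

API = "https://commons.wikimedia.org/w/api.php"


def imageinfo_query_url(title):
    """Build an imageinfo query URL asking for the metadata blob."""
    params = {
        "action": "query",
        "titles": title,          # e.g. "File:Example.djvu" (placeholder)
        "prop": "imageinfo",
        "iiprop": "metadata|size|mime",
        "format": "json",
    }
    return API + "?" + urlencode(params)


def extract_metadata(api_response):
    """Pull the metadata list out of a decoded imageinfo response."""
    pages = api_response["query"]["pages"]
    page = next(iter(pages.values()))
    return page["imageinfo"][0]["metadata"]
```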
Le 28 nov. 2013 à 23:57, Alex Brollo a écrit :
> I feel uncomfortable thinking to upload large files just to use a little bit
> of data... I presume that djvu ar
Thanks Thomas, but I'm looking for something much subtler: I need the
mapped OCR text with every possible detail - i.e. at least the
output of djvutxt, djvudump and djvused - and obviously a copy of the djvu
file.
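The three tools Alex names can be driven from a script once a copy of the file is on Labs. A sketch, assuming the djvulibre binaries are on PATH; the form-feed page separator in djvutxt output is an assumption worth checking against real files:

```python
# Sketch: calling djvutxt / djvudump / djvused on a local djvu file
# and splitting djvutxt output into per-page chunks.
import subprocess


def run_tool(tool, path, *extra):
    """Run one of the djvulibre tools and return its stdout as text."""
    out = subprocess.run([tool, *extra, path],
                         capture_output=True, check=True)
    return out.stdout.decode("utf-8", errors="replace")


def pages_from_djvutxt(text):
    # Assumption: djvutxt separates pages with a form feed (\f);
    # empty trailing chunks are dropped.
    return [p for p in text.split("\f") if p.strip()]
```

For example, `run_tool("djvudump", "book.djvu")` gives the component structure, and `run_tool("djvused", "book.djvu", "-e", "print-txt")` gives the positioned (mapped) text layer.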
At present I can't follow the Wikidata adventure nor the metadata flow - I
focus
For these use cases I think that downloading the file is the best way to do it.
It's very quick because the connection between Labs and the other Wikimedia
clusters is very good.
Thomas
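For the download itself, Commons originals are served from upload.wikimedia.org under a hashed path (first hex digit, then first two digits, of the MD5 of the underscored title). A sketch of that convention; the title below is a placeholder:

```python
# Sketch: building the direct download URL for a Commons original,
# then fetching it to local disk (e.g. on Labs).
import hashlib
from urllib.parse import quote
from urllib.request import urlretrieve


def commons_file_url(title):
    """Direct URL for an original file; `title` is the file name
    without the 'File:' prefix, e.g. 'Some book.djvu'."""
    name = title.replace(" ", "_")
    h = hashlib.md5(name.encode("utf-8")).hexdigest()
    return ("https://upload.wikimedia.org/wikipedia/commons/"
            f"{h[0]}/{h[:2]}/{quote(name)}")


def fetch(title, dest):
    urlretrieve(commons_file_url(title), dest)  # network call
```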
Le 29 nov. 2013 à 18:25, Alex Brollo a écrit :
> Thanks Thomas, but I'm looking for something much subtler: I
OK, I'll do that. I hate to move many MB around the web without a real and
strong need, but I hope to build some tools to help users while
contributing, and this, IMHO, is one of the best justifications for using
bandwidth and server time.
Alex
2013/11/29 Thomas Tanon
> For these use cases I think