https://bugzilla.wikimedia.org/show_bug.cgi?id=18046
--- Comment #5 from ThomasV <thoma...@gmx.de> 2009-03-21 08:05:15 UTC ---

I think you do not want to fetch and discard a 20 MB DjVu file every time you extract a single page from it, and I suppose that doing this at upload time would require a schema change. Perhaps the text extraction should be performed on the scaler servers?

Please let me know if there is anything I can do to help or speed this up. I think having this feature is important for our project: currently our contributors need to ask robot owners to do the preprocessing for them, and it would free them from that dependence. I am willing to spend time on this if it can help.

-- 
Configure bugmail: https://bugzilla.wikimedia.org/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are on the CC list for the bug.
_______________________________________________
Wikibugs-l mailing list
Wikibugs-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikibugs-l
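
[Editor's note: the per-page extraction ThomasV describes could be sketched like this. It is a hypothetical wrapper, not the actual MediaWiki implementation; it assumes the DjVuLibre `djvutxt` tool, whose `--page=pagespec` option extracts the text layer of selected pages. The function names and the idea of running it on a scaler-style server are illustrative assumptions.]

```python
import shutil
import subprocess

def djvu_page_text_command(djvu_path, page):
    # Build the djvutxt invocation for a single page. djvutxt ships with
    # DjVuLibre; --page takes a page spec such as "5" or "1-3".
    return ["djvutxt", f"--page={page}", djvu_path]

def extract_page_text(djvu_path, page):
    # Run djvutxt and return the page's text layer. This would run on the
    # server that already holds the file (e.g. a scaler), so the full 20 MB
    # DjVu file is never fetched just to read one page's text.
    if shutil.which("djvutxt") is None:
        raise RuntimeError("djvutxt not found; install DjVuLibre")
    result = subprocess.run(
        djvu_page_text_command(djvu_path, page),
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```

For example, `extract_page_text("book.djvu", 5)` would return the OCR/text layer of page 5 only, which is what a Wikisource contributor needs when proofreading a single page.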