On Sun, Aug 15, 2010 at 6:46 PM, ThomasV <thoma...@gmx.de> wrote:
> Magnus Manske wrote:
>> On Sat, Aug 14, 2010 at 8:49 PM, Thomas Voegtlin <thoma...@gmx.de> wrote:
>>>>> Also, in on_body_scroll, you could avoid the for loop: divide
>>>>> $('#body').position()['scrollTop'] by the height of an image
>>>>
>>>> 'fraid not - sometimes the rendered text runs longer than the image,
>>>> so the "row" can be higher than the image. Example:
>>>> http://toolserver.org/~magnus/book2scroll/index.html
>>>> (scroll down and you'll see it)
>>>
>>> Hmm, you are right; I had a "pure scan" version in mind.
>>>
>>> But it would be nice to have a version that does not load
>>> the text, just in order to see if the WMF servers are fast
>>> enough to provide the same fluidity as the Google Books
>>> interface.
>>
>> I don't think the text retrieval is the slow step here...
>
> No, but the for loop in the scroll handler makes it a bit slow.
>
> Another problem occurs when you are viewing page p and p-1 is not
> loaded yet: if you scroll up, then at the moment p-1 loads, the size
> of its container div increases, and the text you are viewing (page p)
> is pushed towards the bottom. On the Dictionary of National Biography
> this offset can be quite big, so you lose track of the text you are
> viewing.
>
> I don't really know how to solve this; but it seems to me that using
> divs of variable size is part of the problem here too.
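[The layout-shift problem Thomas describes above can in principle be handled by compensating the scroll position when a page container above the viewport grows. A minimal sketch in plain JavaScript; the function and parameter names are hypothetical and not from book2scroll's actual code:]

```javascript
// When a lazily loaded page above the current scroll position grows
// (e.g. from 0 to its rendered height), shift scrollTop by the same
// delta so the text the reader is viewing stays in place.
// Sketch only -- assumes the caller knows each page's top offset and
// old/new heights; a real handler would read these from the DOM.
function compensateScroll(scrollTop, pageTop, oldHeight, newHeight) {
  // Only pages whose container starts above the viewport push the
  // visible content downwards when they grow.
  if (pageTop < scrollTop) {
    return scrollTop + (newHeight - oldHeight);
  }
  return scrollTop;
}
```

[In a jQuery setting this would be applied right after the page's HTML is injected, e.g. by setting `$('#body').scrollTop(compensateScroll(...))` in the load callback.]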
I tried to solve that by fixing the div height to the same as the image
div and using overflow-y to give each page its own "sub-scroll".
However, that does not seem to work with "display:table-cell" divs, and
altering that breaks the entire layout. I suppose I could go back to
good ol' tables, but that would be a shame...

>> I've switched to specifying widths rounded to 100s; however, the API
>> still gives me one-off images (599 instead of 600 px). I could hack
>> the API thumbnail URL, though. Better yet, I can probably skip that
>> step entirely after the first one...
>
> I can see that too (599 instead of 600); but that's not a problem,
> because the filename does not change; it is "600px-"

>> Why load a giant text and then hack around on broken HTML, when I can
>> just query each page individually? It's not really slow, at least not
>> in Google Chrome.
>
> Oh, that was in order to display the text without headers, footers,
> and page breaks; but I guess it's OK to show headers, because they are
> in the scans too. (Here I'm not talking about the headers that you
> hide with your button; I mean the other elements in this field:
> running title, references, etc.)

Yes, I know what you mean, but they're not really in the way...

Anyway, I've added a permalink. Also, I've fiddled with the scrolling
for loop; it should be much quicker now.

Cheers,
Magnus

_______________________________________________
Wikisource-l mailing list
Wikisource-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikisource-l