On Mon, 2002-09-16 at 10:12, Raffaele Belardi wrote:
> I should have been more precise...
> 
> The method you suggest is definitely correct, but I would like a tool 
> that follows the links recursively: for example, index.html contains 
> links to chapter1.html and chapter2.html. I'd like a tool that generates 
> a single PDF file containing index.html, chapter1.html and chapter2.html 
> (in that order :-)).
> 
> I'm asking too much, aren't I?

Yes, unfortunately. Adobe Web Capture (the "official" Adobe tool) is
needed to traverse Web sites and produce single PDFs.
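
For what it's worth, once the pages are fetched locally (wget -r will
handle that part), the traverse-and-concatenate step is easy enough to
script yourself. Below is a rough Python sketch, not a tested tool: it
walks the links depth-first starting from index.html, collects the pages
in reading order, and writes one big HTML file that you can then feed to
any HTML-to-PDF converter. The file names are just placeholders.

    from html.parser import HTMLParser
    from pathlib import Path


    class LinkCollector(HTMLParser):
        """Record the href of every local <a> link, in document order."""

        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            href = dict(attrs).get("href") if tag == "a" else None
            # Keep only relative links to local files; skip full URLs
            # and in-page anchors.
            if href and "://" not in href and not href.startswith("#"):
                self.links.append(href.split("#")[0])


    def collect(page, seen):
        """Depth-first: the page itself, then everything it links to."""
        page = page.resolve()
        if page in seen or not page.exists():
            return []
        seen.add(page)
        parser = LinkCollector()
        parser.feed(page.read_text(encoding="utf-8", errors="replace"))
        pages = [page]
        for href in parser.links:
            pages += collect(page.parent / href, seen)
        return pages


    def build_book(index="index.html", output="book.html"):
        # index.html first, then the chapters in the order they appear.
        pages = collect(Path(index), set())
        combined = "\n".join(
            p.read_text(encoding="utf-8", errors="replace") for p in pages
        )
        Path(output).write_text(combined, encoding="utf-8")
        print("wrote", output, "from", len(pages), "pages")


    if __name__ == "__main__":
        build_book()

Something like htmldoc (--webpage mode) should manage the final HTML to
PDF conversion, though I haven't verified that against this output.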

The irony is that there _is_ a Linux tool which will do this traversal,
with far more control over what it picks up than Web Capture has ... but
it produces the wrong file format. It's Plucker:

http://www.plkr.org/

All joking apart, for grabbing Web pages and reading them offline
without having to print out reams of text, Plucker plus a decent
handheld is the way to go.

Sitescooper does the same thing and can convert to PDF, but it is
famously difficult to set up (I haven't tried it):

http://www.sitescooper.org/

Alastair
