Aren't there ways of downloading whole websites onto your local machine? Kind of "follow all links from <url> but stop when going outside of site <site>". I'm hopelessly ignorant, but this seems like something that must exist.

Yes, and I was seriously considering writing a wget script to
do exactly that before the topic came up here (a sketch of the
kind of invocation I had in mind follows the list below). But
that wouldn't work as well as the approach I suggested:

- just staying on the site isn't sufficient to avoid the
   dynamic areas of Trac/the wiki, so one would have to
   choose the URLs to fetch carefully, and update that list
   whenever anything interesting changes in the wiki link
   structure

- the wiki doesn't provide Last-Modified headers, so
   wget would retrieve the whole lot again on every run
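
For illustration, the kind of wget call I had in mind was roughly
the following (just a sketch; <site> and the excluded paths are
placeholders that would have to be filled in and maintained by
hand, which is exactly the first problem above):

    wget --mirror --no-parent --convert-links \
         --exclude-directories=<dynamic trac paths> \
         http://<site>/wiki

And because of the missing Last-Modified headers, the -N
(--timestamping) part of --mirror can't skip unchanged pages,
which is the second problem.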

Recording wiki dumps server-side and pushing them into a
repository would instead allow us to use the same incremental
update/pull mechanism we already use for the source code.
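
For example, assuming the dumps ended up in a darcs repository
next to the source (the repository name and URL below are made
up), keeping a local wiki mirror current would just be another
pull:

    darcs get http://<site>/wiki-dumps    # initial checkout
    cd wiki-dumps && darcs pull           # incremental updates later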

Claus

