Hi,

I wonder whether there is a simple, scriptable way of putting pages
into the cache under a given URI.

The reason for my wish to do this is as follows: I have a private
laptop on which I have been using wwwoffle for the last couple of
months (and before that, I'd been using wwwoffle since pre-2.0 on
another machine that, alas, is no longer with us); now, however, I
have moved to another place, and that laptop has no connection to the
internet.  If I want to have some webpage at home, currently I just
carry it home on a floppy disk (usually just the output of `lynx
-dump'), copy it to the hard disk, and read or store it.  This works
but has the disadvantage that I cannot follow links to other pages and
that I cannot see whether I already have some page a link points to.

So it would be helpful if I could just do something like `curl -D - URI'
on one computer, copy the resulting files to the correct location in
the wwwoffle spool on the other computer, and ideally update the
"lasttime" index there, too.  From looking at the files in the spool,
I think the part after the D or U prefix is just an MD5 hash of the
URI, so this should be easy to script; however, I suspect (on the
basis of my hazy memory of old changelogs) that the URI is first put
through some canonicalization, to avoid caching variants with, e.g.,
both ~ and %7E in them.
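To make the question concrete, here is the sort of script I have in
mind -- purely a sketch, and it assumes the spool filename is a plain
hex MD5 of the URI with no canonicalization (the hash encoding and the
spool path are exactly the parts I am guessing at):

```shell
# ASSUMPTION: the spool filename is simply the hex MD5 of the URI,
# with no canonicalization applied first (the very point I'm unsure of).
URI='http://www.example.org/~user/page.html'
HASH=$(printf '%s' "$URI" | md5sum | awk '{print $1}')

# The U<hash> file seems to hold the URI itself; D<hash> would hold the
# cached headers+body, fetched on the connected machine with, e.g.,
#   curl -s -D - "$URI" > "D$HASH"
# and carried over on floppy.
printf '%s\n' "$URI" > "U$HASH"

# Both files would then go into the spool, something like
# /var/spool/wwwoffle/http/www.example.org/ (that path is a guess
# from memory).
echo "created U$HASH"
```

If the hash really is that simple, the whole transfer could be one
small script on each side; if wwwoffle encodes the MD5 differently or
normalizes the URI first, that is precisely what I would like to know.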

I have looked at the FAQ (but this Q, though now definitely A, is
probably not F) and the mailing list archive at
<http://www.mail-archive.com/> (which I think should be mentioned in
the welcome message after subscribing to this list) and done some
googling, but haven't found anything useful.

Thanks for any light you may shed,

Albert.


