Hi,

a couple of questions:

When refreshing pages, one can specify whether or not to fetch images
and other files referenced in the page.  Is it also possible to recurse
to depth x when *monitoring* a page?  I do not see how to do that.  How
can I tell WWWOFFLE to monitor a page, with all its images, to recursion
depth 2, for example?

I have a local HTML file with links that I refresh from time to time.
I have stored it locally under http://localhost:8080/local/refresh.html.
What I would like WWWOFFLE to do is monitor that page daily to depth 1.
The problem is that WWWOFFLE apparently cannot request pages from
itself.  Is there a way to trick the program into doing what I want?  I
tried using another host name that also resolves to 127.0.0.1, but that
did not work; WWWOFFLE even gave me an error and suggested reporting it
to the author if the problem recurred.

How do I request password-protected pages such as
http://quotes.ubs.com/myquotes/ ?  I tried fetching the page normally,
then entering the password and refetching the resulting page, as well as
the usual suspect http://user:[EMAIL PROTECTED]/myquotes/.
Unfortunately, neither approach yielded the desired result.  What am I
doing wrong?
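For reference, my understanding is that the user:password@host URL form
is just shorthand for an "Authorization: Basic" request header carrying
the base64-encoded credentials.  A minimal Python sketch of that
encoding (the user name and password here are placeholders, not my real
credentials):

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    """Build the Authorization header value that a
    http://user:password@host/ URL is shorthand for."""
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

# Example with placeholder credentials:
print(basic_auth_header("user", "secret"))  # Basic dXNlcjpzZWNyZXQ=
```

So if WWWOFFLE strips or fails to forward that header, the server would
keep answering with its login page, which would explain what I am
seeing.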

My last question concerns the possibility of adding files to the
WWWOFFLE cache without downloading them piece by piece.  Some websites,
such as http://go.to/wingnus, offer the complete site as a single
compressed file, which is of course faster and much more convenient than
spidering through all the pages.  Some homepages are even available on
CD-ROM (I know of one HTML tutorial).  I was wondering whether it is
possible to merge those files into the WWWOFFLE cache.  Is there a tool
or a procedure for doing that?
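In case it helps explain what I am after: since I do not know WWWOFFLE's
internal cache layout, the best workaround I can picture is unpacking
such an archive locally and generating a page of links over it, which
could then serve as a starting point for browsing offline.  A rough
Python sketch, assuming the download is a zip file (the archive and
destination names are placeholders):

```python
import pathlib
import zipfile

def make_link_page(archive: str, dest: str) -> str:
    """Unpack a downloaded site archive and write a local HTML page
    linking to every HTML file found in it.  Returns the path of the
    generated link page.  (archive/dest are placeholder names.)"""
    out = pathlib.Path(dest)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(out)
    links = sorted(p for p in out.rglob("*.htm*") if p.is_file())
    body = "\n".join(
        f'<a href="{p.as_uri()}">{p.relative_to(out)}</a><br>' for p in links
    )
    index = out / "index-links.html"
    index.write_text(f"<html><body>\n{body}\n</body></html>")
    return str(index)
```

But this only gets the files onto disk next to the cache, not into it,
which is why I am asking whether a proper import tool exists.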

Thank you for your help.

Regards

Rolf