On Friday, 9 November 2001 at 22:23, Felix Karpfen wrote:

> This may be a configuration failure on my part.
>
> Or it might even be a bug in WWWOFFLE.
>
> I use the WWWOFFLE monitor option for a daily download of the
> headline pages of several well-known newspapers. Sometimes, due to
> other priorities, they remain unread until the same URLs have been
> monitored the next day.
>
> If I click on the relevant URL in the Previous Time Online pages, I
> still get the latest downloaded page. Since I only run -purge once
> a week, I presume that the missing page is still somewhere in the
> cache.
>
> But how do I find it?
If the pages you downloaded the day before have the same URLs as the
newer ones, the old pages are overwritten and deleted. No chance of
recovering them from the cache. You need to use wget or a similar tool
to save them; htdig won't help.

With some (most?) newspapers (sueddeutsche.de, for example), however,
only the overview page(s) get overwritten, because the article URLs
change each day. In that case, I think, you will get the most benefit
from using htdig or similar tools.

> Do I really need to bite the bullet and learn how to use htdig?
>
> Felix Karpfen

Christian

-- 
* Christian Knoke              +49 4852 92248 *
* D-25541 Brunsbuettel    Wurtleutetweute 49  *
* * * * * * * * * * * * * * * * * * * * * * * *
* Ceterum censeo Microsoft esse dividendum.   *
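P.S. The wget suggestion above could look something like the sketch
below: save each day's headline page under a date-stamped directory so
the next day's monitored download cannot overwrite it. The URL and
archive path are illustrative assumptions, not anything from WWWOFFLE
itself.

```shell
#!/bin/sh
# Hypothetical daily-archive sketch (example URL and paths, adjust to taste).
URL="http://www.sueddeutsche.de/"
ARCHIVE="$HOME/news-archive"

# One subdirectory per day, e.g. ~/news-archive/2001-11-10
STAMP=$(date +%Y-%m-%d)
mkdir -p "$ARCHIVE/$STAMP"

# -q: quiet; -p: also fetch page requisites (images, stylesheets);
# -P: save everything under the date-stamped directory.
wget -q -p -P "$ARCHIVE/$STAMP" "$URL" || echo "fetch failed (offline?)"
```

Run from cron once a day and yesterday's front page stays readable no
matter what WWWOFFLE overwrites or purges.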
