Hi, I'm not sure whether this is a feature request or a bug report. Wget does not collect all page requisites of a given URL: many sites reference resources from their cascading style sheets, but wget does not fetch those resources as page requisites.
An example:

---
$ wget -q -p -k -nc -x --convert-links \
    http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/496901
$ find . -name "*.css"
./aspn.activestate.com/ASPN/static/aspn.css
$ grep "url(" ./aspn.activestate.com/ASPN/static/aspn.css
  list-style-image: url(/ASPN/img/dot_A68C53_8x8_.gif);
  background-image: url(/ASPN/img/ads/ASPN_banner_bg.gif);
  background-image: url('/ASPN/img/ads/ASPN_komodo_head.gif');
  background-image: url('/ASPN/img/ads/ASPN_banner_bottom.gif');
$ find . -name "ASPN_banner_bg.gif" || echo "not found"
---

A solution for this problem would be to parse all collected *.css files for lines matching "url(.*)" and to download the referenced files as well.

Best regards

Marc Schoechlin

--
I prefer non-proprietary document-exchange.
http://sector7g.wurzel6.de/pdfcreator/
http://www.prooo-box.org/
Contact me via jabber: [EMAIL PROTECTED]
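P.S.: The parsing step I have in mind could be sketched roughly like this (a Python illustration, not wget code; the `css_requisites` helper and the regex are my own approximation of CSS url() syntax, not a full parser):

```python
import re
from urllib.parse import urljoin

# Rough approximation of the CSS url(...) notation. A real implementation
# would also need to handle @import rules, escapes, and comments.
CSS_URL_RE = re.compile(r"""url\(\s*['"]?([^'")]+)['"]?\s*\)""")

def css_requisites(css_text, base_url):
    """Return absolute URLs of all resources referenced via url() in css_text,
    resolved relative to the stylesheet's own URL."""
    return [urljoin(base_url, ref) for ref in CSS_URL_RE.findall(css_text)]

css = """
list-style-image: url(/ASPN/img/dot_A68C53_8x8_.gif);
background-image: url('/ASPN/img/ads/ASPN_banner_bg.gif');
"""
for url in css_requisites(css, "http://aspn.activestate.com/ASPN/static/aspn.css"):
    print(url)
```

Wget could then fetch each of the returned URLs just like any other page requisite.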