Hi Phil!

Without more info (wget's verbose or even debug output, the full command
line, ...) I find it hard to tell what is happening.
However, I have had very good success with wget and google.
So, some hints:
1. protect the Google URL by enclosing it in double quotes, so the shell
does not interpret the '?' and '&' characters in the query string
2. remember to span hosts (-H) and allow only certain domains (-D);
otherwise, wget will only download Google pages
And lastly - but you obviously did so - think about restricting the
recursion depth (-l).
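Putting those hints together, a command line might look like the sketch
below. The query URL, the allowed domain, and the depth of 2 are just
examples; note also that Google often returns 403 to wget's default
User-Agent, so sending a browser-like one with -U may be part of the fix.

```shell
# Sketch only - the query, domain list, and depth are example values.
# -r -l 2 : recurse, at most two levels deep
# -H      : span hosts (follow links off google.com to the result sites)
# -D ...  : ...but only into the listed domains
# -U ...  : browser-like User-Agent (Google may 403 wget's default)
# The URL is double-quoted so the shell leaves '?' and '&' alone.
wget -r -l 2 -H -D example.com \
     -U "Mozilla/5.0" \
     "http://www.google.com/search?q=deepwater+oil"
```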

Hope that helps a bit
Jens

> I have been trying to wget several levels deep from a Google search page
> (e.g., http://www.google.com/search?q=deepwater+oil). But on the very first
> page, wget returns a 403 Forbidden error and stops. Anyone know how I can
> get around this?
> 
> Regards, Phil 
> Philip E. Lewis, P.E.
> [EMAIL PROTECTED]
> 
> 

