From: Gekko

> [...] returns the first page it downloads only, and does not continue
> to download the other links, while omitting the -O - allows the
> downloading to work.
   That's right.  In recursive HTTP operation, wget expects to read its
own output files to find the links to follow.  It's not designed to read
its one-and-only "-O" output file to find links while it is writing that
file.  It would not be impossible to arrange this sort of thing, but it
would be complicated, and it's not obvious that it would be particularly
useful.

   Why would you want to do this?  It should be relatively easy to get
the same effect with a normal "wget -r" command and a shell script to go
through the resulting files and "cat" them into a single mess.  I still
don't know why you'd want to do it, however.

   Thanks for including the wget and OS info in the question.  It's a
rare thing to get all the useful info around here.

------------------------------------------------------------------------

   Steven M. Schweda                        [EMAIL PROTECTED]
   382 South Warwick Street                 (+1) 651-699-9818
   Saint Paul  MN  55105-2547
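   For what it's worth, a minimal sketch of that workaround might look
like the script below.  The actual "wget -r" invocation is shown
commented out (it needs network access, and "example.com" here is just a
placeholder URL); the two printf lines stand in for the tree of HTML
files that a real recursive fetch would leave behind.  The find/cat
pipeline at the end is the part that glues everything into one file.

```shell
#!/bin/sh
# Sketch: run a normal recursive fetch into a scratch directory, then
# concatenate the resulting HTML files into a single output file.
set -e
dir=$(mktemp -d)

# The real step would be something like (placeholder URL, not run here):
#   wget -q -r -P "$dir" http://example.com/

# Simulate the directory tree a recursive fetch would produce:
mkdir -p "$dir/example.com"
printf '<html>page one</html>\n' > "$dir/example.com/index.html"
printf '<html>page two</html>\n' > "$dir/example.com/other.html"

# Cat every downloaded HTML file, in a stable order, into one file:
find "$dir" -type f \( -name '*.html' -o -name '*.htm' \) -print0 \
  | sort -z \
  | xargs -0 cat > combined.html

rm -rf "$dir"
```

Whether the result is useful is another matter, but it gets you one big
file without asking wget to re-read its own "-O" output in mid-write.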