Tim Ruehsen <tim.rueh...@gmx.de> writes:

> the changes in recur.c are not acceptable. They circumvent too many
> checks like host-spanning, excludes and even --https-only.
I suppose it depends on what you consider the semantics to be. Generally, I look at it this way: if I've specified to download http://x/y/z, and http://x/y/z redirects to http://a/b/c, then as long as http://x/y/z passes the tests I've specified, the page should be downloaded; the fact that it's redirected to http://a/b/c is incidental. So most checks *should* be circumvented.

I'd make exceptions for --https-only, which is presumably placing a requirement on *how* the pages should be fetched, and probably for the robots check, since that's a policy statement by the server.

Dale
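To make the distinction concrete, here is a minimal sketch (not actual Wget code; all names are hypothetical) of a redirect-acceptance check under that reading: filters like host-spanning and excludes were already applied to the original URL, so only the "how" constraint (--https-only) and server policy (robots) are re-checked against the redirect target.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical options struct -- stands in for Wget's real one. */
typedef struct {
    bool https_only;            /* user passed --https-only */
} options;

static bool is_https (const char *url)
{
    return strncmp (url, "https://", 8) == 0;
}

/* Stand-in for a real robots.txt lookup. */
static bool robots_allowed (const char *url)
{
    (void) url;
    return true;
}

/* Decide whether the target of a redirect should be fetched,
   assuming the ORIGINAL url already passed the user's filters. */
bool accept_redirect (const options *opt, const char *target)
{
    /* --https-only constrains HOW pages are fetched, so it still
       applies to the redirect target.  */
    if (opt->https_only && !is_https (target))
        return false;

    /* robots.txt is the server's policy statement, so honor it too.  */
    if (!robots_allowed (target))
        return false;

    /* Host-spanning, excludes, etc. are intentionally NOT re-checked:
       the user asked for the original URL, and the redirect target is
       incidental.  */
    return true;
}
```

Under this sketch, a redirect from http://x/y/z to http://a/b/c would be followed even across hosts, but a redirect to a plain-http target would still be refused when --https-only is in effect.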