If you are trying to spider an FTP site, you would be better off just using
plain wget or a purpose-built multi-threaded spidering tool like Heritrix.
The network overhead of setting up new connections would likely dwarf any
benefit from parallelism, especially with many small files, which is the
only case where you stand a chance of being CPU- rather than
bandwidth-limited anyway. You also avoid attempting to download files that
don't exist.  Keep it simple.
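
For example, assuming the server allows anonymous FTP and recursive
retrieval, something along these lines (untested; the host and path are
placeholders) should mirror the whole tree over a single connection:

    wget -m ftp://ftp.example.org/pub/somedir/

The -m (mirror) option turns on recursion and timestamping, so re-running
it only fetches files that have changed since the last pass.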
