urllib has a "hole" in its timeout protection. Setting "socket.setdefaulttimeout" makes urllib time out if a site doesn't accept a TCP connection within the specified time. But if the site accepts the TCP connection and then never sends any HTTP headers, the read inside urllib's "open" takes about 20 minutes to time out.
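Here is a minimal local sketch of the hole (the server, port, and timeout values are made up for illustration): a throwaway server accepts the TCP connection but never sends a byte, so a connect-phase timeout never fires and the stall only shows up at read time. An explicit per-socket timeout on the read side makes it fail fast instead:

```python
import socket
import threading

# "Stalled" server: accepts the TCP connection but never sends
# a byte -- a local stand-in for the behavior described above.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def hold_open():
    conn, _ = server.accept()
    threading.Event().wait(5)   # hold the connection open, send nothing
    conn.close()

threading.Thread(target=hold_open, daemon=True).start()

# The connect phase succeeds immediately, so a connect-only timeout
# never fires; the hang happens when we try to read the response.
client = socket.create_connection(("127.0.0.1", port), timeout=1.0)
try:
    client.sendall(b"GET / HTTP/1.0\r\n\r\n")
    client.recv(1024)
    result = "read returned"
except socket.timeout:
    result = "read timed out"
print(result)  # → read timed out
```

The point is that the timeout has to cover the recv as well as the connect; a connect-only timeout leaves exactly the gap described above.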
Some web servers actually exhibit this behavior, and many of them seem to belong to British universities and nonprofits. With these, requesting "http://example.com" opens a TCP connection on which nothing is ever sent, while "http://www.example.com" yields a proper web page. Even Firefox doesn't time this out properly: try "http://soton.ac.uk" in Firefox, and be prepared for a long wait.

There was some active work in the urllib timeout area last summer. What happened to that?

				John Nagle