Peter Hansen wrote: How have you proven that it is not *that* program which is at fault?
It would surprise me: even if it consumes much CPU time, memory and other resources, each instance returns all resources when it exits.
I have not. All I know is that on WinXP, the program uses 100% CPU at times and consumes more RAM than is available (the page file grows to 700 or 800 MB). It runs OK for a few hours and then produces a 'not enough resources' error, and the machine is generally unusable. On Linux, it has no impact whatsoever on resources. Granted, the Linux machine is much more robust, but one wouldn't expect this great a difference. I can rewrite it so that it's pure Python (no calling nmap) if you think that would be a good idea. Perhaps that would at least remove nmap from the equation.
I wrote a very simple and small fake_nmap that just looks at the IP address and prints "open", "closed" or "filtered" to stdout. When I run your Python program (Python 2.4 on Windows XP, like you), the CPU is utilized 100% (about half of it goes to csrss.exe, whatever that may be); about half of the CPU time is spent in the kernel. The system stays usable (at least for now; it's been running for about 5 minutes), but memory use is increasing, slowly but steadily.
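For reference, a stand-in along those lines can be very small. This is only a sketch: the real fake_nmap's IP-to-state mapping and exact output are my assumptions; all the caller needs is one state word on stdout.

```python
import sys

def port_state(ip):
    """Map an IP address to a canned state (here: by its last octet)."""
    last_octet = int(ip.split('.')[-1])
    # Arbitrary mapping, just enough to exercise all three branches.
    return ('open', 'closed', 'filtered')[last_octet % 3]

if __name__ == '__main__':
    # Mimic nmap only to the extent the caller parses: print one state word.
    print(port_state(sys.argv[1]))
```

Because the fake does no real probing, any CPU or memory growth seen while running it comes from the launching side (threads, shells, pipes), not from nmap itself.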
The task manager shows, in addition to a number of fake_nmap.exe processes, a number of cmd.exe processes. I don't understand where these come from: I know os.system() uses the shell, but isn't os.popen() supposed to open the process directly? It seems there are a lot more instances of cmd.exe than of fake_nmap.exe; no idea what that tells us.
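As far as I can tell, on Windows os.popen() also routes the command through the shell (cmd.exe /c ...), which would explain the extra cmd.exe processes. The subprocess module (new in Python 2.4) can launch the child directly when given an argument list instead of a command string. A minimal sketch, using the Python interpreter itself as a portable stand-in for fake_nmap:

```python
import subprocess
import sys

# Launch the child directly (no intermediate shell) and capture its stdout.
# sys.executable stands in for fake_nmap.exe so the example runs anywhere.
proc = subprocess.Popen(
    [sys.executable, '-c', "print('open')"],
    stdout=subprocess.PIPE,
)
output = proc.communicate()[0].decode().strip()
print(output)
```

With a list argument and the default shell=False, no cmd.exe is created at all, which should cut the process count roughly in half.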
Also, it takes quite some time before "256 threads running incl. main" is printed the first time, so I think the system needs all that time to create all the threads. It would be normal for memory use to keep increasing until all threads are created, but I'm fairly certain memory use is still increasing now.
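One way to check whether thread creation itself accounts for the delay is to time it in isolation. A sketch (the count of 256 mirrors the original program; the sleep is a placeholder for one nmap probe, and the exact message format is my guess):

```python
import threading
import time

def worker():
    time.sleep(0.1)  # placeholder for the work one nmap probe would do

start = time.time()
threads = [threading.Thread(target=worker) for _ in range(256)]
for t in threads:
    t.start()
# active_count() includes the main thread, hence "incl. main".
print("%d threads running incl. main" % threading.active_count())
print("startup took %.2f s" % (time.time() - start))
for t in threads:
    t.join()
```

If the startup figure here is small while the real program takes much longer to print its message, the time is going somewhere other than thread creation (for instance, into spawning the child processes).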
--
"Codito ergo sum"
Roel Schroeven