On 6/4/07, Jani M. <[EMAIL PROTECTED]> wrote:
> You are correct that performance (CPU usage) might be worse with the threaded workers. However, scalability is definitely much better.
No, it's just the opposite. Using prefork won't save CPU, but it will save memory, meaning you can run more perl interpreters.
> With threaded processes, I can easily have the same machine run 5x the number of threads, with some 400-500 perl interpreters available. This is very helpful when you need to be able to serve a large number of concurrent slow(ish) clients.
The normal way to do this is to have your mod_perl server separate from your static file server, usually by doing a reverse proxy to the mod_perl server. Then the static server handles doling out bytes to slow clients. A few variations of this setup are discussed in the docs.
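A minimal sketch of that setup, assuming Apache's mod_proxy on the front-end static server and a mod_perl back end listening on localhost port 8080 (the port, path, and hostname are illustrative, not from the thread):

```apache
# Front-end (static) server config: serve files directly,
# hand dynamic requests to the mod_perl back end.
ProxyRequests Off                            # reverse proxy only, never a forward proxy
ProxyPass        /app http://localhost:8080/app
ProxyPassReverse /app http://localhost:8080/app   # rewrite redirects from the back end
```

The front end then buffers responses and dribbles bytes out to slow clients, so the heavyweight mod_perl processes are freed up almost immediately.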
> Are you sure that this would help in this case? From what I can see, the cause of the segfaults appears to be memory corruption, which is likely to happen earlier than the actual segfault. If that's the case, wouldn't the backtrace be unlikely to show anything useful?
Honestly, the person who has done the most work on debugging thread crashes is Torsten. His advice on how to debug it will be better than mine. It does seem like people usually solve them through backtrace analysis, though.
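For reference, the usual backtrace workflow is to run Apache in single-process mode under gdb so the crash is caught directly (the binary and config paths below are illustrative, not from the thread):

```
# Allow core dumps from the shell that starts Apache
ulimit -c unlimited

# -X runs a single worker process in the foreground, so gdb
# attaches to the process that will actually segfault
gdb --args /usr/local/apache2/bin/httpd -X
# inside gdb:
#   (gdb) run
#   (gdb) bt full     # after the segfault, print a full backtrace
```

Even with delayed corruption, the backtrace at least shows which structure was trashed, which narrows down where to look earlier in the request.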
> The reason I ask is that setting up a full-blown debugging environment could be a bit tricky. I might be able to compile a debug-enabled version of mod_perl, but doing the same for Apache and/or Perl could be a problem. Or are there debug-enabled versions available from Debian somewhere?
I don't use Debian, so I couldn't tell you that. Compiling perl, apache, and mod_perl is not very difficult on a Linux system, though. It takes a little while to compile, but there's no trick to it.

- Perrin
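A debug-enabled build from source usually amounts to something like the following sketch (the version numbers and `/opt/debug` prefix are illustrative; `MP_MAINTAINER=1` enables mod_perl's debug flags and expects a maintainer-mode Apache):

```
# Perl with ithreads and debugging symbols
cd perl-5.8.8
./Configure -des -Dusethreads -Doptimize='-g' -Dprefix=/opt/debug
make && make install

# Apache with the worker MPM and maintainer (debug) mode
cd ../httpd-2.2.4
./configure --prefix=/opt/debug/apache2 --with-mpm=worker --enable-maintainer-mode
make && make install

# mod_perl built against the debug perl and apache
cd ../mod_perl-2.0.3
/opt/debug/bin/perl Makefile.PL MP_APXS=/opt/debug/apache2/bin/apxs MP_MAINTAINER=1
make && make install
```

Keeping everything under one private prefix means the debug stack can live alongside the distribution's packages without interfering.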