Re: How to get a core dump

2004-11-08 Thread Marc Gracia
Hi,
Well, it's only been 2 hours of online testing, but it seems that the
recompilation and upgrade to apache-1.3.33 I did with -g (to get debug
info in the core dumps) solved the problem.

I used exactly the same configure options for apache, mod_ssl and mod_perl
(the same shell script, in fact).

I don't know what caused the problem in the first place, but now everything
seems OK.

One thing I did notice: the first time I compiled everything in a shell with
the en_US locale, and this time I used one with en_US.UTF8.
Can the locale of the shell used to compile apache+mod_perl affect the
final executable in some way?
The server has to deal with UTF8 data coming from/going to a SQLServer
backend...
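
For context, a minimal sketch of a static mod_perl 1.x / apache 1.3 build with
debugging symbols; the actual shell script and configure options used in the
thread are not shown, so the flags below are only illustrative:

cd mod_perl-1.29
perl Makefile.PL \
    APACHE_SRC=../apache_1.3.33/src \
    DO_HTTPD=1 USE_APACI=1 EVERYTHING=1 \
    PERL_DEBUG=1        # adds -g so core dumps carry usable symbols
make && make install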

On Fri, 2004-11-05 at 17:46, Marc Gracia wrote:
 Many thanks Stas and Glenn,
 I'll try all this and will get back...
 
 On Fri, 2004-11-05 at 13:38, Marc Gracia wrote:
  Hi everybody.
  I have a problem on a production cluster with a somewhat big mod_perl
  app, and I just cannot get any clue of what is happening.
  
  The problem is that the servers just exit with a Segmentation fault,
  randomly.
  The problem is rare; it happens 10-20 times a day on each of the 6
  frontends, which globally process about 1,000,000 hits daily.
  The global stats show about 30 Internal Server Errors daily. I don't
  know if a segfault can cause an Internal Server Error on the client (I
  suppose not: if the server dies, it cannot send the 500), but the numbers
  don't match anyway.
  
  I think a core dump would help me understand why it segfaults, but I don't
  know how to make apache produce one; I've tried a lot of recipes found on
  the internet without any success. Making things more complicated, the
  problem only happens on the production systems (presumably only on some
  pages...), so I cannot reproduce it on my test system.
  
  So, my question is: is there any way to force apache to dump a core
  file? I suppose I'm forgetting something, but I'm really desperate...
  
  A secondary question: some of the servers turn all UTF8 strings into
  garbage when a segfault happens (it looks like double-encoded UTF8; the
  page shows 3 or 4 chars for every UTF8 char...). The only way to fix
  this is to reboot the machine completely.
  Is that related to this same problem? Or is it an obscure UTF8 perl/Apache
  problem?
  
  Many thanks.
  I'm using mod_perl 1.29 with apache 1.3.31.
  My perl config:
  
  Summary of my perl5 (revision 5.0 version 8 subversion 0) configuration:
    Platform:
      osname=linux, osvers=2.4.20-2.48smp, archname=i386-linux-thread-multi
      config_args='-des -Doptimize=-O2 -march=i386 -mcpu=i686 -g
        -Dmyhostname=localhost [EMAIL PROTECTED] -Dcc=gcc -Dcf_by=Red Hat, Inc.
        -Dinstallprefix=/usr -Dprefix=/usr -Darchname=i386-linux
        -Dvendorprefix=/usr -Dsiteprefix=/usr
        -Dotherlibdirs=/usr/lib/perl5/5.8.0 -Duseshrplib -Dusethreads
        -Duseithreads -Duselargefiles -Dd_dosuid -Dd_semctl_semun -Di_db
        -Ui_ndbm -Di_gdbm -Di_shadow -Di_syslog -Dman3ext=3pm -Duseperlio
        -Dinstallusrbinperl -Ubincompat5005 -Uversiononly -Dpager=/usr/bin/less
        -isr'
      hint=recommended, useposix=true, d_sigaction=define
      usethreads=define, use5005threads=undef, useithreads=define,
      usemultiplicity=define, useperlio=define, uselargefiles=define,
      usesocks=undef, use64bitint=undef, use64bitall=undef,
      bincompat5005=undef
    Compiler:
      cc='gcc', ccflags='-D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS
        -DDEBUGGING -fno-strict-aliasing -I/usr/local/include
        -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm'
      cppflags='-D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS -DDEBUGGING
        -fno-strict-aliasing -I/usr/local/include -I/usr/include/gdbm'
      gccversion='3.2.2 20030213 (Red Hat Linux 8.0 3.2.2-1)'
      intsize=4, longsize=4, ptrsize=4, doublesize=8, byteorder=1234
      d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=12
      ivtype='long', ivsize=4, nvtype='double', nvsize=8, lseeksize=8
      alignbytes=4, prototype=define
    Linker and Libraries:
      ld='gcc', ldflags=' -L/usr/local/lib'
      libpth=/usr/local/lib /lib /usr/lib
      libs=-lnsl -lgdbm -ldb -ldl -lm -lpthread -lc -lcrypt -lutil
      libc=/lib/libc-2.3.1.so, so=so, useshrplib=true, libperl=libperl.so
      gnulibc_version='2.3.1'
    Dynamic Linking:
      dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef
      ccdlflags='-rdynamic
        -Wl,-rpath,/usr/lib/perl5/5.8.0/i386-linux-thread-multi/CORE'
      cccdlflags='-fPIC'
  
  Characteristics of this binary (from libperl):
    Compile-time options: DEBUGGING MULTIPLICITY USE_ITHREADS
      USE_LARGE_FILES PERL_IMPLICIT_CONTEXT
    Locally applied patches:
      MAINT18379

Re: How to get a core dump

2004-11-08 Thread Marc Gracia
Well, the core was finally produced.
When I call gdb with the httpd executable and the core file, gdb shows
this:

Core was generated by `/usr/eBD/bin/httpd'.
Program terminated with signal 11, Segmentation fault.
Reading symbols from /lib/tls/libpthread.so.0...done.
Loaded symbols for /lib/tls/libpthread.so.0
Reading symbols from /lib/tls/libm.so.6...done.
Loaded symbols for /lib/tls/libm.so.6
..
..
..
Loaded symbols for
/usr/eBD/perl//i386-linux-thread-multi/auto/DBD/ODBC/ODBC.so
Reading symbols from /usr//lib/libodbc.so.1...done.
Loaded symbols for /usr//lib/libodbc.so.1
Reading symbols from /usr/lib/gconv/ISO8859-1.so...done.
Loaded symbols for /usr/lib/gconv/ISO8859-1.so
#0  0x4018de75 in Perl_hv_free_ent (my_perl=0x85ba6d8, hv=0xd103fd0,
entry=0xcfdb698) at hv.c:1592
1592    hv.c: No such file or directory.
        in hv.c
(gdb)

Is this hv.c error just because gdb couldn't find that source file?
Or is this really the origin of the segfault?
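
The "No such file or directory" message only means gdb cannot locate the Perl
source tree that hv.c belongs to; it says nothing by itself about the cause of
the crash. A minimal sketch of pointing gdb at the sources and getting a fuller
trace (the source path and core file name are illustrative, not taken from the
thread):

gdb /usr/eBD/bin/httpd core
(gdb) directory /usr/src/perl-5.8.0   # wherever the perl build tree lives
(gdb) list                            # now the offending hv.c line can be shown
(gdb) bt full                         # backtrace with local variables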



On Fri, 2004-11-05 at 13:38, Marc Gracia wrote:
 Hi everybody.
 I have a problem on a production cluster with a somewhat big mod_perl
 app, and I just cannot get any clue of what is happening.
 [...]

Re: How to get a core dump

2004-11-08 Thread Marc Gracia
Sorry, here is the backtrace...

#0  0x4018de75 in Perl_hv_free_ent (my_perl=0x85ba6d8, hv=0xd103fd0,
entry=0xcfdb698) at hv.c:1592
#1  0x4018e11b in S_hfreeentries (my_perl=0x85ba6d8, hv=0x85ba6d8) at
hv.c:1681
#2  0x4018e182 in Perl_hv_undef (my_perl=0x85ba6d8, hv=0xd103fd0) at
hv.c:1707
#3  0x401a4367 in Perl_sv_clear (my_perl=0x85ba6d8, sv=0xd103fd0) at
sv.c:5051
#4  0x401a49e4 in Perl_sv_free (my_perl=0x85ba6d8, sv=0xd103fd0) at
sv.c:5226
#5  0x4019be83 in do_clean_all (my_perl=0x85ba6d8, sv=0xd103fd0) at
sv.c:422
#6  0x4019bba2 in S_visit (my_perl=0x85ba6d8, f=0x4019be40
do_clean_all)
at sv.c:314
#7  0x4019bee3 in Perl_sv_clean_all (my_perl=0x85ba6d8) at sv.c:440
#8  0x4012fa04 in perl_destruct (my_perl=0x85ba6d8) at perl.c:796
#9  0x400be0f3 in perl_shutdown (s=0x809979c, p=0xa1d9fd4) at
mod_perl.c:294
#10 0x400c0ca1 in perl_child_exit (s=0x809979c, p=0xa1d9fd4) at
mod_perl.c:965
#11 0x400c0958 in perl_child_exit_cleanup (data=0xd103fd0) at
mod_perl.c:933
#12 0x08051976 in run_cleanups (c=0xa1da164) at alloc.c:1936
#13 0x08050104 in ap_clear_pool (a=0xa1d9fd4) at alloc.c:650
#14 0x08050178 in ap_destroy_pool (a=0xa1d9fd4) at alloc.c:680
#15 0x0805ea10 in clean_child_exit (code=0) at http_main.c:519
#16 0x08061ef5 in child_main (child_num_arg=7) at http_main.c:4558
#17 0x08062659 in make_child (s=0x809979c, slot=7, now=1099922742)
at http_main.c:5051
#18 0x080629bc in perform_idle_server_maintenance () at http_main.c:5236
#19 0x08063068 in standalone_main (argc=1, argv=0xb0c4) at
http_main.c:5499
#20 0x080636cf in main (argc=1, argv=0xb0c4) at http_main.c:5767
#21 0x42015574 in __libc_start_main () from /lib/tls/libc.so.6

On Fri, 2004-11-05 at 13:38, Marc Gracia wrote:
 Hi everybody.
 I have a problem on a production cluster with a somewhat big mod_perl
 app, and I just cannot get any clue of what is happening.
 [...]

Re: How to get a core dump

2004-11-08 Thread Marc Gracia
Yes, I've just taked in account that it happens at clean-up time, at
perl shutdown.
So I suppose that's nothing wrong with this.
Don't know why, I supposed it was related to the GTopLimit automatic
server kill. Now I see that it can be true.

My intention was to find out whether these core dumps were related to
the sudden change in UTF8 encoding, which is the main problem I have
now. Once it happens, the only solution is to reboot the entire
machine.
Can a segfault like that lead to glibc or iconv library corruption?
The fact that a reboot is needed makes me suspect that glibc is
involved. Is that assumption right?

Many thanks again.

On Mon, 2004-11-08 at 15:48, David Hodgkinson wrote:
 On 8 Nov 2004, at 14:39, Marc Gracia wrote:
  So, my question is: is there any way to force apache to dump a core
  file? I suppose I'm forgetting something, but I'm really desperate...
 
 Yes.
 
 As root, you need to do the ulimit magic and then start the server.
 
 My question is: do you *really* need to debug this? For sure, the process
 has got its knickers in a twist, but this is happening at cleanup time
 anyway and not during any request.
 
 I'm sure you have better things to worry about.
 
 Cheers,
 
 Dave
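
For reference, a minimal sketch of the "ulimit magic" Dave refers to, assuming
a Linux box and Apache 1.3; the core directory path is illustrative:

# as root, raise the core file size limit in the shell that starts Apache,
# so the child processes inherit it
ulimit -c unlimited

# give the children somewhere writable to dump into; in Apache 1.3 this is
# the CoreDumpDirectory directive in httpd.conf:
#   CoreDumpDirectory /var/tmp/apache-cores
mkdir -p /var/tmp/apache-cores
chown nobody /var/tmp/apache-cores    # use the User the children run as

apachectl start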
 





Re: Segmentation Fault problem

2004-05-15 Thread Marc Gracia (Oasyssoft)

On Fri, 2004-05-14 at 20:56, Stas Bekman wrote:

Marc Gracia (Oasyssoft) wrote:
 Oops... Sorry, after all the gdb and strace output I forgot to send the perl -V.
 
 Summary of my perl5 (revision 5.0 version 8 subversion 0) configuration:
[...]

Thanks.

Have you tried a more recent 5.8.x perl? 5.8.4 has been out for quite some time 
already. I'm not sure it will help, since this seems to be a 
glibc bug, but if you are already building from scratch, why not go with the 
best available version.



I'll try it... but I'd prefer not to modify this system any further for now.


 The statically linked perl is a good idea... but will this hit the memory
 footprint of the running httpd processes? There will be a lot of them on
 those machines.

You mean you will have several independent Apache/mod_perl instances running 
on the same machine? If so, yes, you will use more memory, as there will be no 
sharing of perl's libperl.so. Though I'm thinking: if you use a static perl 
libperl.a but a dynamic mod_perl libperl.so (which is confusingly named the 
same way in mp1), you may not be affected at all. Then again, I guess that 
doesn't work, since mod_perl's shared lib gets loaded by Apache, and not 
through ld.so, so the system will not know to reuse the already loaded instance.



There is something else you can try though. Force an early resolution of all 
symbols when the program loads (which is the default behavior on MacOSX and a 
few other platforms). For perl xs modules you do that by setting the env var 
RTLD_NOW=1. That won't work for perl itself, though. For perl itself (or any 
other app that links to shared libs) you will need to set the env var 
LD_BIND_NOW=1. Let us know whether that trick has worked.



I'll try it now and will get back to you with the results...


Also this seems to be an interesting util, which I haven't tried yet.
http://www.linuxforum.com/man/prelink.8.php


Aha!! You think like me: I'm also a fanatical gentoo user, and gentoo makes intensive use of this utility.
When I suspected a dynamic link resolution problem I tried it (RedHat 9 includes the utility, and the latest rpm seems to use it... I didn't know that).
Perl's libperl and mod_perl's libperl get prelinked OK, but something is definitely wrong with the glibc, because prelink aborts while trying to prelink it.

Also, I can't recompile the glibc srpms. I'm sure it's because I've replaced the default RedHat kernel with the aggressive Wolk kernel, and there is some incompatibility.

Many thanks for the help, and sorry for this now off-topic post :)




Marc Gracia
Promotion Manager

e-mail: [EMAIL PROTECTED]
tel: +34 675 508 820 
fax: +34 938 721 549


C/Muralla del Carme, 10
08240 Manresa
BARCELONA
SPAIN


Re: Segmentation Fault problem

2004-05-15 Thread Marc Gracia (Oasyssoft)

On Fri, 2004-05-14 at 20:56, Stas Bekman wrote:


There is something else you can try though. Force an early resolution of all 
symbols when the program loads (which is the default behavior on MacOSX and a 
few other platforms). For perl xs modules you do that by setting the env var 
RTLD_NOW=1. That won't work for perl itself, though. For perl itself (or any 
other app that links to shared libs) you will need to set the env var 
LD_BIND_NOW=1. Let us know whether that trick has worked.

GREAT! IT WORKED!!!
I set up both variables and VOILA!!

Many, many thanks; you've made me the happiest man in my town, and maybe saved me from being fired (those machines should have been in production a month ago... :)
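
For the record, a minimal sketch of setting the two variables mentioned above
before starting the server (the startup command is illustrative; the variable
names are as given in the thread):

export LD_BIND_NOW=1   # eager symbol resolution for httpd and its shared libs
export RTLD_NOW=1      # named above for perl XS modules
apachectl start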

-- 
Marc Gracia (Oasyssoft) [EMAIL PROTECTED]

Segmentation Fault problem

2004-05-14 Thread Marc Gracia

Hi,
I have a problem that has been driving me mad for some time.
We have just set up a web farm to support our application, which runs entirely on mod_perl.
Until now we used a traditional apache+vhosts setup to serve our customers, but as it became unmanageable, we started this new system to serve them better.
The basic structure is a reverse proxy as a frontend that redirects requests to a bunch of different machines, each one running a bunch of apaches on different ports, one per customer.
All these little apaches run as non-root users on ports  5, to better protect one customer from another.

Well, once it was all set up, everything seemed to go well. But on one page that uses the Mail::Sendmail module to send an e-mail, the server crashed with a segmentation fault.
After tracing all we could through the perl modules, we found that the server crashed
when Mail::Sendmail tried to open the network socket.
Then we did a little test and set up a program that just opened a socket, and as soon as the page was called, the server segfaulted...
The same test works perfectly outside mod_perl...
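
As a minimal way to isolate the failing call outside the web server (the
backtrace below shows the crash inside glibc's getprotobyname()), the same
lookup can be exercised from a plain shell and compared with the behavior
inside a trivial mod_perl handler; this one-liner is only an illustration,
not the exact test used:

perl -e 'print scalar getprotobyname("tcp"), "\n"'   # prints the protocol number (6) from a plain shell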

The server is a fully updated RedHat 9
custom WOLK 2.4 kernel (2.4.20-wolk4.9s)
perl-5.8 (first tried with stock RedHat, then recompiled my own rpm with no threads, with the same results)
apache-1.3.29 (EAPI + no-EXPAT options; tested activating and deactivating those options, with no success)
mod_ssl
mod_perl 1.29 (I've tested 1.27 and 1.28 also, with the same results)

Then I started debugging apache, to see what was happening...

gdb httpd 
(gdb) run -X 
Then I clicked on the fatal page that uses Mail::Sendmail:

Program received signal SIGSEGV, Segmentation fault.
0x1555ef8e in do_lookup_versioned () from /lib/ld-linux.so.2
(gdb) where
#0 0x1555ef8e in do_lookup_versioned () from /lib/ld-linux.so.2
#1 0x1555e156 in _dl_lookup_versioned_symbol_internal () from /lib/ld-linux.so.2
#2 0x15561e03 in fixup () from /lib/ld-linux.so.2
#3 0x15561cc0 in _dl_runtime_resolve () from /lib/ld-linux.so.2
#4 0x156e60a8 in getprotobyname_r@@GLIBC_2.1.2 () from /lib/libc.so.6
#5 0x156e5f5f in getprotobyname () from /lib/libc.so.6
#6 0x15d234eb in Perl_pp_gprotoent () at pp_sys.c:4856
#7 0x15d23299 in Perl_pp_gpbyname () at pp_sys.c:4823
#8 0x15cd08d2 in Perl_runops_debug () at dump.c:1414
#9 0x15c8c54e in S_call_body (myop=0x3fffdc40, is_eval=0) at perl.c:2069
#10 0x15c8c1fd in Perl_call_sv (sv=0x15d72d54, flags=4) at perl.c:1987
#11 0x157c82ad in perl_call_handler (sv=0x914d048, r=0x985fffc, args=0x0) at mod_perl.c:1661
#12 0x157c7a90 in perl_run_stacked_handlers (hook=0x914d048 4?\024\t\001, r=0x985fffc, handlers=0x9121560)
 at mod_perl.c:1374
#13 0x157c60da in perl_handler (r=0x9121560) at mod_perl.c:914
#14 0x08054d0c in ap_invoke_handler ()
#15 0x0806b2aa in process_request_internal ()
#16 0x0806b307 in ap_process_request ()
#17 0x08061b5d in child_main ()
#18 0x08061d30 in make_child ()
#19 0x08061eaf in startup_children ()
#20 0x080625a8 in standalone_main ()
#21 0x08062e61 in main ()
#22 0x15602917 in __libc_start_main () from /lib/libc.so.6

(gdb) quit

As the program fails in the getprotobyname glibc function, I suppose the problem is one of RedHat's infamously buggy glibcs, or else an incompatibility with my current WOLK kernel...

The strace confirms that the problem seems to be related to the /etc/protocols file (used by getprotobyname):

# strace httpd -X
...
...
brk(0) = 0x9bdc000
brk(0x9bdd000) = 0x9bdd000
brk(0) = 0x9bdd000
brk(0x9bde000) = 0x9bde000
brk(0) = 0x9bde000
brk(0x9bdf000) = 0x9bdf000
time(NULL) = 1084532294
open("/etc/protocols", O_RDONLY) = 9
fcntl64(9, F_GETFD) = 0
fcntl64(9, F_SETFD, FD_CLOEXEC) = 0
fstat64(9, {st_mode=S_IFREG|0744, st_size=2168, ...}) = 0
old_mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x1556d000
read(9, "ip\t0\tIP\nicmp\t1\tICMP\t\t\nigmp\t2\tIGM"..., 4096) = 2168
--- SIGSEGV (Segmentation fault) @ 0 (0) ---
+++ killed by SIGSEGV +++

First I suspected a permission problem, as all our apaches run as normal users, but changing the permissions of that file didn't help. Looking at the strace, the file seems to open OK in all cases.
To rule out silly parsing problems I removed all comments and blank lines from that file, with no difference.
I also downgraded glibc to the RedHat default (glibc-2.3.2-11.9) from the upgraded glibc-2.3.2-27. No success...
I'm stuck on this... I suspect a permissions problem, but due to the nature of the system (httpd.conf heavily customized in real time based on username) I can't test it easily. I also suspect a DNS problem (Red Hat has caused me some strange problems in previous versions) or a multiple-interface problem (every machine has a public IP and a private IP). It could also be the custom kernel.

While I do some more tests, can someone help me with this?
I'm really desperate...