[squid-users] Re: negotiate_kerberos_auth helpers stay busy
Hi Markus,

yes, you are right. But why does squid use more and more memory until it dies because no more memory is available on the server? When I run the same squid 3.2.1 with the same configuration but with basic authentication only, the memory usage is constant and it runs stably. As soon as I add the negotiate_kerberos_auth helper I have to restart squid after some hours because it is out of memory.

Running pmap shows the growing amount of memory (last line of my copy):

28076:   (squid-1) -f /etc/squid/squid-C.conf
Address   Kbytes    Mode   Offset    Device  Mapping
00400000     4088   r-x--  00000000  0fd:4   squid
009fe000      148   rw---  003fe000  0fd:4   squid
00a23000     1288   rw---  00000000  000:0   [ anon ]
00c22000      444   rw---  00422000  0fd:4   squid
0163f000  2403608   rw---  00000000  000:0   [ anon ]

When I then run "cat /proc/28076/smaps" I see that the heap of squid is growing:

0163f000-94185000 rw-p 00000000 00:00 0          [heap]
Size:            2403608 kB
Rss:             2402648 kB
Pss:             2402648 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:   2402648 kB
Referenced:      2229704 kB
Anonymous:       2402648 kB
AnonHugePages:    403456 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB

What can cause this problem, which only occurs when using kerberos authentication?

Klaus

Hi Klaus,

If I did not make an error it is the following line:

service_principal = xstrdup(optarg);

and it is not part of a loop and should not create a leak. It gets freed when the helper exits.

Markus

"Klaus Walter" wrote in message news:1c3bf830b7785fbff9f25e2486d5a...@myway.de...
> Hi Markus,
>
> here the output of valgrind with source code lines from the
> negotiate_kerberos_auth helper that comes with squid 3.2.1:
>
> ==6124== 14 bytes in 1 blocks are definitely lost in loss record 1 of 1
> ==6124==    at 0x4A069EE: malloc (vg_replace_malloc.c:270)
> ==6124==    by 0x403384: xmalloc (xalloc.cc:116)
> ==6124==    by 0x40359B: xstrdup (xstring.cc:27)
> ==6124==    by 0x401ADE: main (negotiate_kerberos_auth.cc:251)
>
> I also tested the latest helper that comes with squid 3.3.8 with valgrind:
>
> ==2745== 14 bytes in 1 blocks are definitely lost in loss record 1 of 1
> ==2745==    at 0x4A069EE: malloc (vg_replace_malloc.c:270)
> ==2745==    by 0x403724: xmalloc (xalloc.cc:116)
> ==2745==    by 0x40393B: xstrdup (xstring.cc:27)
> ==2745==    by 0x401A8E: main (negotiate_kerberos_auth.cc:250)
>
> I hope this helps.
>
> By the way:
> Because of bug 3848 in squid 3.3.x I can't use this squid version at the
> moment, but should I use the latest helper together with my squid 3.2.1?
>
> Klaus
>
>> Hi Klaus,
>>
>> Thank you for the valgrind output. Could you compile and link the helper
>> with -g to get the source code lines?
>>
>> Thank you
>> Markus
[squid-users] Re: negotiate_kerberos_auth helpers stay busy
Hi Markus,

here the output of valgrind with source code lines from the negotiate_kerberos_auth helper that comes with squid 3.2.1:

==6124== 14 bytes in 1 blocks are definitely lost in loss record 1 of 1
==6124==    at 0x4A069EE: malloc (vg_replace_malloc.c:270)
==6124==    by 0x403384: xmalloc (xalloc.cc:116)
==6124==    by 0x40359B: xstrdup (xstring.cc:27)
==6124==    by 0x401ADE: main (negotiate_kerberos_auth.cc:251)

I also tested the latest helper that comes with squid 3.3.8 with valgrind:

==2745== 14 bytes in 1 blocks are definitely lost in loss record 1 of 1
==2745==    at 0x4A069EE: malloc (vg_replace_malloc.c:270)
==2745==    by 0x403724: xmalloc (xalloc.cc:116)
==2745==    by 0x40393B: xstrdup (xstring.cc:27)
==2745==    by 0x401A8E: main (negotiate_kerberos_auth.cc:250)

I hope this helps.

By the way: because of bug 3848 in squid 3.3.x I can't use this squid version at the moment, but should I use the latest helper together with my squid 3.2.1?

Klaus

Hi Klaus,

Thank you for the valgrind output. Could you compile and link the helper with -g to get the source code lines?

Thank you
Markus
Re: [squid-users] Re: negotiate_kerberos_auth helpers stay busy
Hi Markus,

thank you very much for your help. I changed MAX_AUTHTOKEN_LEN to 64 KB and recompiled squid. After that there are no longer any hanging negotiate_kerberos_auth helper processes. This problem is fixed now.

For the memory problem I tried to debug with valgrind, and I hope I did it the right way:

./negotiate_kerberos_auth_test proxyk1.spb.de testuser | valgrind --log-file=/tmp/negotiate_kerberos_auth.val --leak-check=full --show-reachable=yes -v ./negotiate_kerberos_auth -r -s GSS_C_NO_NAME

Here is the last part of the output:

==26714== HEAP SUMMARY:
==26714==     in use at exit: 14 bytes in 1 blocks
==26714==   total heap usage: 59 allocs, 58 frees, 6,863 bytes allocated
==26714==
==26714== Searching for pointers to 1 not-freed blocks
==26714== Checked 318,728 bytes
==26714==
==26714== 14 bytes in 1 blocks are definitely lost in loss record 1 of 1
==26714==    at 0x4A069EE: malloc (vg_replace_malloc.c:270)
==26714==    by 0x403384: xmalloc (in /usr/lib64/squid/negotiate_kerberos_auth)
==26714==    by 0x40359B: xstrdup (in /usr/lib64/squid/negotiate_kerberos_auth)
==26714==    by 0x401ADE: main (in /usr/lib64/squid/negotiate_kerberos_auth)
==26714==
==26714== LEAK SUMMARY:
==26714==    definitely lost: 14 bytes in 1 blocks
==26714==    indirectly lost: 0 bytes in 0 blocks
==26714==      possibly lost: 0 bytes in 0 blocks
==26714==    still reachable: 0 bytes in 0 blocks
==26714==         suppressed: 0 bytes in 0 blocks
==26714==
==26714== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 6 from 6)
--26714--
--26714-- used_suppression: 4 U1004-ARM-_dl_relocate_object
--26714-- used_suppression: 2 glibc-2.5.x-on-SUSE-10.2-(PPC)-2a
==26714==
==26714== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 6 from 6)

It shows some lost bytes. Please tell me what I should do to provide further information.
Klaus

Hi Klaus,

If the token is too big then you need to update the source here:

src/auth/UserRequest.h:#define MAX_AUTHTOKEN_LEN 32768

I am curious about the memory leak, as I ran the helper through valgrind myself (I noticed some underlying Kerberos library leaks, but no leaks in the helper itself). Can you run valgrind against the helper if I give you the instructions?

Markus

"Klaus Walter" wrote in message news:20130802163540.207140vakmhtv...@webmail.mnet-online.de...

Hi Amos,

2013/07/30 08:48:04 kid1| Starting new negotiateauthenticator helpers...
2013/07/30 08:48:04 kid1| helperOpenServers: Starting 1/500 'negotiate_kerberos_auth' processes
2013/07/30 08:48:04 kid1| ipcCreate: fork: (12) Cannot allocate memory

That is bad; it is unrelated to the helpers getting locked up, though. How much RAM is the Squid worker process using at the time this appears? Starting helpers with fork() requires Squid to be allocated virtual memory 2x that being used at the time by the worker process. And how much memory is currently in use by each of those 8 BUSY helpers?

So it seems that we have two different problems: the first problem is that our kerberos credentials from AD that are greater than 32 KB cause hanging helpers (see my last email). The other problem seems to be a memory leak in kerberos authentication. I also tested the latest negotiate_kerberos_auth helper coming with squid 3.3.8 together with my older squid, but the memory usage is also increasing until there is no free memory available.
Here you can see the memory usage when squid cannot allocate more memory:

USER     PID    %CPU %MEM      VSZ     RSS TTY STAT START  TIME COMMAND
root      8027   0.0  0.0    74932     292 ?   Ss   Aug01  0:00 squid -f /etc/squid/squid-C.conf
squid    27950   0.8 67.8 13404452 8262560 ?   S    Aug01  6:34  \_ (squid-1) -f /etc/squid/squid-C.conf
squid    27951   0.0  0.0    76828    1864 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27952   0.0  0.0    76828    1516 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27953   0.0  0.0    72484     988 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27954   0.0  0.0    72484     992 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27955   0.0  0.0    72484     992 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27956   0.0  0.0    76704    1788 ?   S    Aug01  0:01  \_ (ext_kerberos_ldap_group_acl)
squid    27957   0.0  0.0    72484     988 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27958   0.0  0.0    72484     988 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27959   0.0  0.0    72484     988 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27960   0.0  0.0    72484     988 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27961   0.0  0.0    76704    1800 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27962   0.0  0.0    72484     988 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27963   0.0  0.0    72484     988
Re: [squid-users] negotiate_kerberos_auth helpers stay busy
Hi Amos,

2013/07/30 08:48:04 kid1| Starting new negotiateauthenticator helpers...
2013/07/30 08:48:04 kid1| helperOpenServers: Starting 1/500 'negotiate_kerberos_auth' processes
2013/07/30 08:48:04 kid1| ipcCreate: fork: (12) Cannot allocate memory

That is bad; it is unrelated to the helpers getting locked up, though. How much RAM is the Squid worker process using at the time this appears? Starting helpers with fork() requires Squid to be allocated virtual memory 2x that being used at the time by the worker process. And how much memory is currently in use by each of those 8 BUSY helpers?

So it seems that we have two different problems: the first problem is that our kerberos credentials from AD that are greater than 32 KB cause hanging helpers (see my last email). The other problem seems to be a memory leak in kerberos authentication. I also tested the latest negotiate_kerberos_auth helper coming with squid 3.3.8 together with my older squid, but the memory usage is also increasing until there is no free memory available.
Here you can see the memory usage when squid cannot allocate more memory:

USER     PID    %CPU %MEM      VSZ     RSS TTY STAT START  TIME COMMAND
root      8027   0.0  0.0    74932     292 ?   Ss   Aug01  0:00 squid -f /etc/squid/squid-C.conf
squid    27950   0.8 67.8 13404452 8262560 ?   S    Aug01  6:34  \_ (squid-1) -f /etc/squid/squid-C.conf
squid    27951   0.0  0.0    76828    1864 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27952   0.0  0.0    76828    1516 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27953   0.0  0.0    72484     988 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27954   0.0  0.0    72484     992 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27955   0.0  0.0    72484     992 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27956   0.0  0.0    76704    1788 ?   S    Aug01  0:01  \_ (ext_kerberos_ldap_group_acl)
squid    27957   0.0  0.0    72484     988 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27958   0.0  0.0    72484     988 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27959   0.0  0.0    72484     988 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27960   0.0  0.0    72484     988 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27961   0.0  0.0    76704    1800 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27962   0.0  0.0    72484     988 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27963   0.0  0.0    72484     988 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27964   0.0  0.0    72484     988 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27965   0.0  0.0    72484     988 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27966   0.0  0.0    76660    1512 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27967   0.0  0.0    72484     988 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27968   0.0  0.0    72484     988 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27969   0.0  0.0    72484     988 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27970   0.0  0.0    72484     992 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27971   0.0  0.0    72484     992 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27972   0.0  0.0    72484     988 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27973   0.0  0.0    72484     988 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27974   0.0  0.0    72484     992 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27975   0.0  0.0    72484     988 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27976   0.0  0.0    76660    1668 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27977   0.0  0.0    72484     988 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27978   0.0  0.0    72484     988 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27979   0.0  0.0    72484     988 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27980   0.0  0.0    72484     988 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27981   0.0  0.0    72484     988 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27982   0.0  0.0    72484     988 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27983   0.0  0.0    72484     988 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27984   0.0  0.0    72484     992 ?   S    Aug01  0:00  \_ (ext_kerberos_ldap_group_acl)
squid    27985   0
Re: Re: [squid-users] negotiate_kerberos_auth helpers stay busy
Hi,

I am running squid 3.2.1 on CentOS 6.3 with kerberos authentication using negotiate_kerberos_auth. Generally this is working fine, but after some time more and more helper instances stay busy and cannot finish the given request. Therefore squid starts new helper processes to have enough working helpers for kerberos authentication.

This release of Squid is quite old now. Are you able to upgrade your proxy to the current stable release and see if the problem disappears? (Today that would be 3.3.8.) You can find recent version packages for CentOS at http://wiki.squid-cache.org/KnowledgeBase/CentOS.

I would like to update squid to version 3.3.8, but there is still the problem that URLs that contain an IP address are not forwarded to cache peers (see bug 3848). Therefore it is not usable for me. But I tested squid 3.3.8 on my test system and there was exactly the same problem with kerberos authentication.

This is going on until squid has no more memory for the helpers:

2013/07/30 08:48:04 kid1| Starting new negotiateauthenticator helpers...
2013/07/30 08:48:04 kid1| helperOpenServers: Starting 1/500 'negotiate_kerberos_auth' processes
2013/07/30 08:48:04 kid1| ipcCreate: fork: (12) Cannot allocate memory

That is bad; it is unrelated to the helpers getting locked up, though. How much RAM is the Squid worker process using at the time this appears? Starting helpers with fork() requires Squid to be allocated virtual memory 2x that being used at the time by the worker process. And how much memory is currently in use by each of those 8 BUSY helpers?

Negotiate Authenticator Statistics:
program: /usr/lib64/squid/negotiate_kerberos_auth
number active: 39 of 500 (0 shutting down)
requests sent: 11141
replies received: 11133
queue length: 0
avg service time: 4 msec

#  FD  PID    # Requests  Flags  Time      Offset  Request
1  19  31373   753        B R    3887.019  0
1  37  31390   755        B R    3637.061  0
1  39  31391  2539        B R    2053.518  0
1  41  31392    78        B R    3859.365  0
1  43  31393   807        B R    2008.036  0
1  57  31396   415        B R    2003.899  0
1  63  31397   363        B R    1975.126  0
1  95  31401   329        B R    1944.980  0
1  29  31491  1891                  0.009  0       (none)
1  77  31492   813                  0.011  0       (none)
1  88  31493   578                  0.009  0       (none)

The first eight helper processes are busy and will never return to the normal state until squid is restarted. Gradually more and more helpers stay in the busy state. strace shows me that these helpers are blocked in a read call:

read(0, "r", 1) = 1
read(0, "r", 1) = 1
read(0, "7", 1) = 1
read(0, "+", 1) = 1
read(0, "a", 1) = 1
read(0, "G", 1) = 1
read(0,

After this the process never continues.

That does not look blocked to me. The value arriving is changing, just eee ssslloowwwlllyyy, one byte per I/O cycle to be exact. Since kerberos credentials can be up to 32 KB large it's easy to see why they are stuck in BUSY state for such long times.

But Microsoft says that there can be kerberos credentials up to 64 KB (http://support.microsoft.com/kb/327825/en-us). And I think that's the problem. Our Active Directory actually has kerberos credentials up to 40 KB, and I found out that exactly these credentials put the kerberos helper into the busy state. strace shows me that the helper process reads credential bytes from squid up to 32 KB, then it waits at the socket for the rest and stays busy, but squid does not transmit any bytes beyond 32 KB. Therefore the helper never finishes the authentication and is no longer usable. Users with kerberos credentials greater than 32 KB cannot use squid and cause blocked helper processes. Is there a solution for such long kerberos credentials?

I cannot find any error messages in cache.log, even if I switch on debugging for the helper.

At this rate of I/O in the helper it is unlikely that they will be able to send a message to cache.log in any reasonable time.

Thank you for help!
Klaus
[squid-users] negotiate_kerberos_auth helpers stay busy
Hi,

I am running squid 3.2.1 on CentOS 6.3 with kerberos authentication using negotiate_kerberos_auth. Generally this is working fine, but after some time more and more helper instances stay busy and cannot finish the given request. Therefore squid starts new helper processes to have enough working helpers for kerberos authentication. This goes on until squid has no more memory for the helpers:

2013/07/30 08:48:04 kid1| Starting new negotiateauthenticator helpers...
2013/07/30 08:48:04 kid1| helperOpenServers: Starting 1/500 'negotiate_kerberos_auth' processes
2013/07/30 08:48:04 kid1| ipcCreate: fork: (12) Cannot allocate memory
2013/07/30 08:48:04 kid1| WARNING: Cannot run '/usr/lib64/squid/negotiate_kerberos_auth' process.

The problem can only be solved by restarting squid. squidclient mgr:negotiateauthenticator shows the problem (I left out the large kerberos requests waiting to be finished):

Negotiate Authenticator Statistics:
program: /usr/lib64/squid/negotiate_kerberos_auth
number active: 39 of 500 (0 shutting down)
requests sent: 11141
replies received: 11133
queue length: 0
avg service time: 4 msec

#  FD   PID    # Requests  Flags  Time      Offset  Request
1   19  31373   753        B R    3887.019  0
1   37  31390   755        B R    3637.061  0
1   39  31391  2539        B R    2053.518  0
1   41  31392    78        B R    3859.365  0
1   43  31393   807        B R    2008.036  0
1   57  31396   415        B R    2003.899  0
1   63  31397   363        B R    1975.126  0
1   95  31401   329        B R    1944.980  0
1   29  31491  1891                  0.009  0       (none)
1   77  31492   813                  0.011  0       (none)
1   88  31493   578                  0.009  0       (none)
1   99  31494   430                  0.009  0       (none)
1  111  31512   320                  0.010  0       (none)
1  115  31513   200                  0.018  0       (none)
1  117  31514   158                  0.014  0       (none)
1  119  31515   122                  0.013  0       (none)
1  121  31516    99                  0.011  0       (none)
1  123  31517    82                  0.014  0       (none)
1  125  31518    66                  0.012  0       (none)
1  118  31519    58                  0.010  0       (none)
1  113  32414    44                  0.013  0       (none)
1  116  32415    36                  0.015  0       (none)
1  124    367    29                  0.014  0       (none)
1  128    368    28                  0.015  0       (none)
1  137    375    24                  0.012  0       (none)
1  138    376    21                  0.015  0       (none)
1  140    377    16                  0.040  0       (none)
1  142    378    15                  0.036  0       (none)
1  144    379    14                  0.033  0       (none)
1  139   3490    11                  0.037  0       (none)
1  143   3491    10                  0.036  0       (none)
1  146   3495     8                  0.037  0       (none)
1  148   3496     7                  0.046  0       (none)
1  150   3497     6                  0.047  0       (none)
1  145   3498     5                  0.047  0       (none)
1  149   3499     4                  0.041  0       (none)
1  152   3500     3                  0.104  0       (none)
1  154   3501     2                  0.105  0       (none)
1  156   3502     2                  0.089  0       (none)

Flags key:
B = BUSY
C = CLOSING
R = RESERVED
S = SHUTDOWN PENDING
P = PLACEHOLDER

The first eight helper processes are busy and will never return to the normal state until squid is restarted. Gradually more and more helpers stay in the busy state. strace shows me that these helpers are blocked in a read call:

read(0, "r", 1) = 1
read(0, "r", 1) = 1
read(0, "7", 1) = 1
read(0, "+", 1) = 1
read(0, "a", 1) = 1
read(0, "G", 1) = 1
read(0,

After this the process never continues. I cannot find any error messages in cache.log, even if I switch on debugging for the helper.

Thank you for help!
Klaus