Re: new patch new ideas (squid+epoll)
Hi, oops. Forgot to post the attachment. Here goes.

Henrik Nordstrom wrote:
> Can we please have the patch clean again.. (preferably as a MIME
> attachment). I don't seem to have the original message left in my
> archives, and trying to recover the patch from the inline discussion is
> a bit cumbersome due to linewraps and other destructive email
> formatting effects..
>
> Regards
> Henrik
>
> On Thu, 6 Nov 2003, David Nicklay wrote:
>> Hi, I finally took some time to test this. Gonzalo's patch seems to
>> address a lot of outstanding issues with comm_epoll, including:
>> - CPU usage problem with idle clients (both local and remote)
>> - epoll kernel table being out of sync with the fde table
>> - missed epoll table updates (happens on my boxes but not Gonzalo's)
>> Could someone apply this to the squid-3.0 HEAD? Kudos to Gonzalo.
>>
>> Gonzalo Arana wrote:
>>> Hi, (yes, it's me again :-( ). I have this new patch which
>>> 1) issues epoll_ctl(EPOLL_CTL_(ADD|MOD|DEL)) according to whether
>>>    handler == NULL or not, AND based also on past epoll monitoring
>>>    state (in commSetSelect).
>>> 2) does some profiling if it was enabled at compile time
>>>    (PROF_(start|stop)), and updates statistics counters (these
>>>    haven't been checked).
>>> 3) removes interest when no handler is defined for an event (but
>>>    read_pending is set if it applies).
>>> 4) returns COMM_TIMEOUT when no event has been detected.
>>> The last two 'features' were already in my original patch.
>>> BTW, I get the 100% CPU problem when the web client (wget) runs on
>>> the same machine as squid. Running wget on another machine gives me
>>> less than 40% CPU usage. Hope this solves/explains the CPU usage
>>> behaviour you had been seeing (English is not my native language, so
>>> sorry if that is not correct). I'm posting this to you since you are
>>> the original author of comm_epoll.cc, and I don't want to add noise
>>> to the squid devel-list.
Thank you very much in advance,

diff -bur squid-3.0-PRE3-20031008-CVS/src/comm_epoll.cc squid-3.0-PRE3-20031008-EPOLL-FIXED/src/comm_epoll.cc
--- squid-3.0-PRE3-20031008-CVS/src/comm_epoll.cc	Sun Aug  3 18:38:15 2003
+++ squid-3.0-PRE3-20031008-EPOLL-FIXED/src/comm_epoll.cc	Fri Oct 17 13:01:05 2003
@@ -94,6 +94,15 @@
     }
 }
 
+static const char* epolltype_atoi(int x) {
+    switch(x) {
+    case EPOLL_CTL_ADD: return "EPOLL_CTL_ADD";
+    case EPOLL_CTL_DEL: return "EPOLL_CTL_DEL";
+    case EPOLL_CTL_MOD: return "EPOLL_CTL_MOD";
+    default: return "UNKNOWN_EPOLLCTL_OP";
+    }
+}
+
 /*
  * comm_setselect
  *
@@ -106,81 +115,37 @@
                void *client_data, time_t timeout)
 {
     fde *F = &fd_table[fd];
-    int change = 0;
-    int events = 0;
-    int pollin = 0;
-    int pollout = 0;
-
+    int epoll_ctl_type = 0;
     struct epoll_event ev;
 
     assert(fd >= 0);
     assert(F->flags.open);
     debug(5, DEBUG_EPOLL ? 0 : 8) ("commSetSelect(fd=%d,type=%u,handler=%p,client_data=%p,timeout=%ld)\n",
         fd, type, handler, client_data, timeout);
 
-    if (F->read_handler != NULL)
-        pollin = 1;
-
-    if (F->write_handler != NULL)
-        pollout = 1;
+    ev.events = 0;
+    ev.data.fd = fd;
 
     if (type & COMM_SELECT_READ) {
-        if (F->read_handler != handler)
-            change = 1;
-
-        if (handler == NULL)
-            pollin = 0;
-        else
-            pollin = 1;
-
+        if (handler)
+            ev.events |= EPOLLIN | EPOLLHUP | EPOLLERR;
         F->read_handler = handler;
-
         F->read_data = client_data;
     }
 
     if (type & COMM_SELECT_WRITE) {
-        if (F->write_handler != handler)
-            change = 1;
-
-        if (handler == NULL)
-            pollout = 0;
-        else
-            pollout = 1;
-
+        if (handler)
+            ev.events |= EPOLLOUT | EPOLLHUP | EPOLLERR;
         F->write_handler = handler;
-
         F->write_data = client_data;
     }
 
-    if (pollin)
-        events |= EPOLLIN;
-
-    if (pollout)
-        events |= EPOLLOUT;
-
-    if (events)
-        events |= EPOLLHUP | EPOLLERR;
-
-    ev.data.fd = fd;
-
-    ev.events = events;
-
-    if (events) {
-        if (epoll_ctl(kdpfd, EPOLL_CTL_MOD, fd, &ev) < 0) {
-            if (errno == ENOENT) {
-                debug(5, 4) ("commSetSelect: epoll_ctl(,EPOLL_CTL_MOD,,) failed on fd=%d: entry does not exist\n", fd);
-
-                if (epoll_ctl(kdpfd, EPOLL_CTL_ADD, fd, &ev) < 0)
-                    debug(5, 1) ("commSetSelect: cpoll_ctl(,EPOLL_CTL_ADD,,) failed on fd=%d!: %s\n", fd, xstrerror());
-            } else {
-                debug(5, 1) ("commSetSelect: cpoll_ctl(,EPOLL_CTL_MOD,,) failed on fd=%d!: %s\n", fd, xstrerror());
-            }
-        }
-    } else if (change) {
-        if (epoll_ctl(kdpfd, EPOLL_CTL_DEL, fd, &ev) < 0) {
-            if (errno != ENOENT)
-                debug(5, 1) ("commSetSelect: cpoll_ctl(,EPOLL_CTL_DEL,,) failed on fd=%d!: %s\n", fd, xstrerror());
-            else
-                debug(5, 4) ("commSetSelect: epoll_ctl(,EPOLL_CTL_DEL,,) failed on fd=%d: entry does not exist\n", fd);
Re: new patch new ideas (squid+epoll)
Just wondering if this belongs to Bug #796 (http://www.squid-cache.org/bugs/show_bug.cgi?id=796). If it does, please attach the patch there and write a comment. Don't forget to get the attributions correct if the patch is not uploaded by the author.

Regards
Henrik

On Thu, 6 Nov 2003, David Nicklay wrote:
> Hi, I finally took some time to test this. Gonzalo's patch seems to
> address a lot of outstanding issues with comm_epoll. [...]
Re: new patch new ideas (squid+epoll)
This is part of the same bug. The patch I just submitted from Gonzalo is an improved fix for this problem.

Henrik Nordstrom wrote:
> Just wondering if this belongs to Bug #796
> (http://www.squid-cache.org/bugs/show_bug.cgi?id=796). If it does,
> please attach the patch there and write a comment. [...]

-- 
David Nicklay
Location: CNN Center - SE0811A
Office: 404-827-2698  Cell: 404-545-6218
ICAP X-Authenticated-User support
Hello,

Here's a patch for Squid ICAP 2.5 from CVS to add the X-Authenticated-User header described in http://www.ietf.org/internet-drafts/draft-stecher-icap-subid-00.txt

This mimics the X-Client-IP header to provide more information regarding the original request to the ICAP server. It works with both REQMOD and RESPMOD (a small change was needed in REQMOD to keep the original authentication when generating the new request, so RESPMOD could use it too when chained). It adds two configuration options:
- icap_send_auth_user to enable or disable it;
- icap_auth_scheme to specify the auth scheme to report to the ICAP server, since squid's authentication mechanism doesn't know this.

I think this can be useful for others as well, and I'd be glad to know what you think of it.

Gaël.
-- 
Gaël Roualland -+- [EMAIL PROTECTED]

Index: src/cf.data.pre
===================================================================
RCS file: /cvsroot/squid/squid/src/cf.data.pre,v
retrieving revision 1.49.2.33.2.12
diff -u -r1.49.2.33.2.12 cf.data.pre
--- src/cf.data.pre	15 Oct 2003 22:20:48 -	1.49.2.33.2.12
+++ src/cf.data.pre	7 Nov 2003 18:38:57 -
@@ -2587,6 +2587,39 @@
 	This adds the header X-Client-IP to ICAP requests.
 DOC_END
 
+NAME: icap_send_auth_user
+TYPE: onoff
+IFDEF: HS_FEAT_ICAP
+COMMENT: on|off
+LOC: Config.icapcfg.send_auth_user
+DEFAULT: off
+DOC_START
+	This adds the header X-Authenticated-User to ICAP requests
+	if proxy access is authenticated.
+DOC_END
+
+NAME: icap_auth_scheme
+TYPE: string
+IFDEF: HS_FEAT_ICAP
+LOC: Config.icapcfg.auth_scheme
+DEFAULT: Local://%u
+DOC_START
+	Authentication scheme to pass to ICAP requests if
+	icap_send_auth_user is enabled. The first occurrence of %u
+	is replaced by the authenticated user name. If no %u is found,
+	the username is added at the end of the scheme.
+
+	See http://www.ietf.org/internet-drafts/draft-stecher-icap-subid-00.txt,
+	section 3.4 for details on this.
+
+	Examples:
+
+	icap_auth_scheme Local://%u
+	icap_auth_scheme LDAP://ldap-server/cn=%u,dc=company,dc=com
+	icap_auth_scheme WinNT://nt-domain/%u
+	icap_auth_scheme Radius://radius-server/%u
+DOC_END
+
 NAME: icap_service
 TYPE: icap_service_type
 IFDEF: HS_FEAT_ICAP
Index: src/icap_common.c
===================================================================
RCS file: /cvsroot/squid/squid/src/Attic/icap_common.c,v
retrieving revision 1.1.2.22
diff -u -r1.1.2.22 icap_common.c
--- src/icap_common.c	4 Nov 2003 18:22:33 -	1.1.2.22
+++ src/icap_common.c	7 Nov 2003 18:38:57 -
@@ -622,3 +622,41 @@
     }
     return bw;
 }
+
+/*
+ * icapAddAuthUserHeader
+ *
+ * Builds and adds the X-Authenticated-User header to an ICAP request's headers.
+ */
+void
+icapAddAuthUserHeader(MemBuf * mb, auth_user_request_t * auth_user_request)
+{
+    char *user = authenticateUserRequestUsername(auth_user_request);
+    char *authuser;
+    size_t len, userlen, schemelen, userofslen;
+    char *userofs;
+
+    if (user == NULL) {
+	debug(81, 5) ("icapAddAuthUserHeader: NULL username\n");
+	return;
+    }
+    userlen = strlen(user);
+    schemelen = strlen(Config.icapcfg.auth_scheme);
+    len = userlen + schemelen + 1;
+    authuser = xcalloc(len, 1);
+
+    if ((userofs = strstr(Config.icapcfg.auth_scheme, "%u")) == NULL) {
+	/* simply add user at end of string */
+	snprintf(authuser, len, "%s%s", Config.icapcfg.auth_scheme, user);
+    } else {
+	userofslen = userofs - Config.icapcfg.auth_scheme;
+	xmemcpy(authuser, Config.icapcfg.auth_scheme, userofslen);
+	xmemcpy(authuser + userofslen, user, userlen);
+	xmemcpy(authuser + userofslen + userlen,
+	    userofs + 2, schemelen - (userofslen + 2) + 1);
+    }
+
+    memBufPrintf(mb, "X-Authenticated-User: %s\r\n", base64_encode(authuser));
+    xfree(authuser);
+}
Index: src/icap_reqmod.c
===================================================================
RCS file: /cvsroot/squid/squid/src/Attic/icap_reqmod.c,v
retrieving revision 1.1.2.22
diff -u -r1.1.2.22 icap_reqmod.c
--- src/icap_reqmod.c	6 Nov 2003 22:26:28 -	1.1.2.22
+++ src/icap_reqmod.c	7 Nov 2003 18:38:57 -
@@ -288,6 +288,11 @@
     request->my_addr = icap->request->my_addr;
     request->my_port = icap->request->my_port;
     request->class = icap->request->class;
+    if (icap->request->auth_user_request != NULL) {
+	/* Copy authentication info into the new request */
+	request->auth_user_request = icap->request->auth_user_request;
+	authenticateAuthUserRequestLock(request->auth_user_request);
+    }
     icapReqModInterpretHttpRequest(icap, request);
     xfree(inbuf);
 }
@@ -609,6 +614,8 @@
     memBufAppend(mb, crlf, 2);
     if (Config.icapcfg.send_client_ip)
 	memBufPrintf(mb, "X-Client-IP: %s\r\n", client_addr);
+    if (Config.icapcfg.send_auth_user && icap->request->auth_user_request != NULL)
+
RE: Little questions
On Fri, 7 Nov 2003, Gonzalo A. Arana wrote:

> 1) make statCPUUsage public? (I guess this is the best). If so, I would
> need to know CPU usage with better resolution than 1 minute. Is it
> possible (using the squid API rather than system calls)?

Won't give you very accurate information anyway..

> uh 2) try to check if compression is taking too much time in some other
> way? If so, how? Like record the number of consecutive calls to
> comm_select that didn't result in a timeout, or some other
> ugly/unreliable hack like this?
>
>> My instinctive reaction is to run the compression and decompression in
>> a separate thread. If the queue to the compression/decompression
>> engine is large, decrease the compression level used.
>
> Good instinct :-) Squid-3 does not have an API for running jobs
> asynchronously, right? Another question: is ICAP (consequently, running
> compression in a separate process) suited for this kind of
> transformation? Thank you very much,

Regards
Henrik
RE: Little questions
On Fri, 7 Nov 2003, Gonzalo A. Arana wrote:

> Squid-3 does not have an API for running jobs asynchronously, right?

Not really, no. But if you limit yourself to platforms having threading support, it is a normal inter-thread message problem. To my knowledge all platforms have threading support today.

> Another question: is ICAP (consequently, running compression in a
> separate process) suited for this kind of transformation?

ICAP would work for Content-Encoding but not for Transfer-Encoding, at least not without some support from the core, as it must maintain the chunked stream state so message delimiting can be done correctly. But the concept of using a separate process does work, as always. The tricky part is efficient inter-process communication.

Regards
Henrik
RE: Little questions
On Sat, 2003-11-08 at 08:18, Gonzalo A. Arana wrote:

>> My instinctive reaction is to run the compression and decompression in
>> a separate thread. If the queue to the compression/decompression
>> engine is large, decrease the compression level used.
>
> Good instinct :-) Squid-3 does not have an API for running jobs
> asynchronously, right?

Not as such. However, a little generalisation of the aufs or diskd queueing mechanisms will likely do what you need. There are other constraints you'll need to address though. I suspect you'll want (from my arch repository [EMAIL PROTECTED]):

- my squid--diskio--3.0 branch (separates out all the storage layout stuff from the actual disk I/O, which will ease generalisation of the threaded engine)
- my squid--mempools--3.0 branch (removes global variable manipulation from mempool allocs and frees, which allows allocations from separate pools to be thread safe, but not those from the same pool. You'll want to have a sync layer over whatever allocator your compressor/decompressor uses).

Rob
-- 
GPG key available at: http://members.aardvark.net.au/lifeless/keys.txt