Re: [squid-users] memory hit ratio
On Friday 07 November 2003 08:36 pm, [EMAIL PROTECTED] wrote:

> Could someone please give me an idea of what I am doing wrong in
> squid.conf? I can't get a higher "Request Memory Hit Ratio":

I wouldn't worry about the Memory Hit Ratio - you're better off watching the Request and Byte Hit Ratios (both of which look OK). If you really want to increase the Memory Hit Ratio, about the only way to do it is to increase the cache_mem setting. But again, it's probably not worth worrying about.

> Also, I am getting a lot of this:
> This IP does not belong to our network and is not in our Squid ACLs -
> does this mean that they are trying to use our Squid?

Looks like it. Don't worry - Squid denied the request (at least in the example you provided). If you don't want the requests to reach Squid at all, then block the Squid port with iptables.

Adam
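The two suggestions above translate into configuration roughly as follows; the cache size, port and subnet are illustrative assumptions, not values from the original post:

  # squid.conf: enlarge the in-memory cache of hot objects (default 8 MB)
  cache_mem 128 MB

and on the firewall side:

  # iptables: reject outside connections to Squid's port (3128 assumed),
  # letting only the local 192.168.0.0/24 network through
  iptables -A INPUT -p tcp --dport 3128 ! -s 192.168.0.0/24 -j REJECT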
[squid-users] memory hit ratio
Could someone please give me an idea of what I am doing wrong in squid.conf? I can't get a higher "Request Memory Hit Ratio":

  Average HTTP requests per minute since start: 1470.9
  Average ICP messages per minute since start:  0.0
  Select loop called: 68814496 times, 7.975 ms avg
  Cache information for squid:
    Request Hit Ratios:        5min: 53.2%, 60min: 52.4%
    Byte Hit Ratios:           5min: 28.1%, 60min: 27.5%
    Request Memory Hit Ratios: 5min: 9.3%,  60min: 9.0%
    Request Disk Hit Ratios:   5min: 34.7%, 60min: 30.2%
    Storage Swap size: 19382004 KB
    Storage Mem size:  131096 KB
    Mean Object Size:  12.11 KB

The PC is a Pentium IV 2.0 GHz with 512 MB RAM, running Red Hat 9.0.

Also, I am getting a lot of this:

  Cache Clients:
  Address: 68.73.68.230
  Name: 68.73.68.230
  Currently established connections: 0
  ICP Requests  0
  HTTP Requests 1
  TCP_DENIED    1 100%

This IP does not belong to our network and is not in our Squid ACLs - does this mean that they are trying to use our Squid? Almost 25% of the Cache Client list reported by cachemgr are IPs from outside our network.

Thanks in advance,
Rolando
RE: [squid-users] TCP_MISS/000
On Fri, 7 Nov 2003, Alfredo Aguirre wrote:

> > Can you get to that website without using squid?
>
> yes, I can see it without using squid

Maybe ECN? (see the Linux section of the Squid FAQ)

Regards
Henrik
Re: [squid-users] Squid crashes right after start!
On Fri, 7 Nov 2003 [EMAIL PROTECTED] wrote:

> seconds later squid is dead!
> But it doesn't show any error message. What did I do wrong?

Any clues in cache.log and/or your system's messages file?

Regards
Henrik
RE: [squid-users] TCP_MISS/000
> Can you get to that website without using squid?

Yes, I can see it without using squid.

> The server is responding - I can get to it from here - although I simply
> get a completely black page with nothing at all on it :(

I also get the same problem on www.legatek.com.mx, and when I turn off squid I can see both.

> You need to find out whether it's a squid problem or a connectivity
> problem, so check whether a client can connect direct without using squid.
> The other thing to check is whether the squid proxy itself can access the
> server using /usr/local/squid/bin/client

Nope, the squid client cannot access it from here :( Any suggestion?

Thanks in advance,
Alfredo
RE: [squid-users] Squid crashes right after start!
> finally I got samba 2.2.8a running.
> But if I gonna start squid now, it crashes after about 10 sec

Is there anything in cache.log?

Adam
[squid-users] Squid crashes right after start!
Hi there,

finally I got Samba 2.2.8a running:

  wbinfo -t   -> secret is good
  wbinfo -g   -> shows all groups
  wbinfo -u   -> shows all users
  wbinfo -a   -> succeeded with both messages!
  wb_auth -d  -> returns OK!
  wb_group -d -> returns OK!

But if I start Squid now, it crashes after about 10 seconds. I use squid-2.5.STABLE3.tar.gz. I configured squid.conf and start it with /usr/local/squid/sbin/squid start. If I check with ps -ae | grep squid it shows that squid is running; if I check a few seconds later, squid is dead! But it doesn't show any error message. What did I do wrong? Can someone please help me?

Thanks in advance!

Regards,
Tommy

Hansgrohe, Inc.
Information Service
1492 Bluegrass Lakes Parkway
Alpharetta, GA 30004
phone (+001) 678 - 762 - 6994
Re: [squid-users] TCP_MISS/000
On Fri, 7 Nov 2003, Alfredo Aguirre wrote:

> I've got this problem: whenever the machines inside my network try to
> reach this site they get a connection timeout, and in my squid log I see
> this, which I don't know what it could mean:
>
> TCP_MISS/000 0 GET http://www.ibope.com.mx/ - NONE/- -

This most likely means the client aborted before Squid managed to find the IP address to connect to.

Regards
Henrik
Re: [squid-users] TCP_MISS/000
On Friday 07 November 2003 7:34 pm, Alfredo Aguirre wrote:

> Hi guys,
>
> I've got this problem: whenever the machines inside my network try to
> reach this site they get a connection timeout, and in my squid log I see
> this, which I don't know what it could mean:
>
> TCP_MISS/000 0 GET http://www.ibope.com.mx/ - NONE/- -

Can you get to that website without using squid?

The server is responding - I can get to it from here - although I simply get a completely black page with nothing at all on it :(

You need to find out whether it's a squid problem or a connectivity problem, so check whether a client can connect directly without using squid. The other thing to check is whether the squid proxy itself can access the server, using /usr/local/squid/bin/client.

Antony.

--
This email is intended for the use of the individual addressee(s) named above and may contain information that is confidential, privileged or unsuitable for overly sensitive persons with low self-esteem, no sense of humour, or irrational religious beliefs. If you have received this email in error, you are required to shred it immediately, add some nutmeg, three egg whites and a dessertspoonful of caster sugar. Whisk until soft peaks form, then place in a warm oven for 40 minutes. Remove promptly and let stand for 2 hours before adding some decorative kiwi fruit and cream. Then notify me immediately by return email and eat the original message.

Please reply to the list; please don't CC me.
[squid-users] TCP_MISS/000
Hi guys,

I've got this problem: whenever the machines inside my network try to reach this site they get a connection timeout, and in my squid log I see this, which I don't know what it could mean:

  TCP_MISS/000 0 GET http://www.ibope.com.mx/ - NONE/- -

I'm using squid 2.5 STABLE4.

Thanks in advance,
Alfredo
[squid-users] TCP_HIT with Surrogate-Control?
No matter what number I set max-age to in Surrogate-Control, I get TCP_REFRESH_HIT. I have to unset the Surrogate-Control header to get a TCP_HIT. Can someone tell me how to set the headers to get a TCP_HIT with Surrogate-Control?

These are my Apache headers that result in TCP_REFRESH_HIT:

  HTTP/1.1 200 OK
  Date: Fri, 07 Nov 2003 18:56:59 GMT
  Server: Apache/1.3.28 (Unix)
  Cache-Control: max-age=86400
  Expires: Sat, 08 Nov 2003 18:56:59 GMT
  Surrogate-Control: max-age=3600, content="ESI/1.0"
  Last-Modified: Wed, 05 Nov 2003 20:20:23 GMT
  Connection: close
  Content-Type: text/html

I am using squid/3.0-PRE3-20031106 in accelerator mode. Thanks in advance.
RE: [squid-users] ip_wccp problems
Try putting it in /lib/modules/2.4.22/kernel/net/ipv4 - that's where my ip_wccp module ended up when I compiled it during kernel compilation.

The ip_wccp-2_4_18.patch that is available from http://squid.visolve.com/developments/wccpv2.htm can be applied to the kernel, and it makes this much, much easier. At least I've found it so.

Regards,
Steve Fischer

-----Original Message-----
From: Scott Baker [mailto:[EMAIL PROTECTED]]
Sent: Friday, November 07, 2003 1:03 PM
To: [EMAIL PROTECTED]
Subject: [squid-users] ip_wccp problems

(quoted original trimmed; the full message appears below in this digest)
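A rough sketch of the patch route (paths, the patch filename and the kernel config option name are assumptions; check the visolve page for the exact details):

  # Assumed tree and filename -- adjust to your setup
  cd /usr/src/linux-2.4.22
  patch -p1 < /path/to/ip_wccp-2_4_18.patch
  make menuconfig      # enable the new WCCP option as a module (m)
  make dep && make modules && make modules_install
  modprobe ip_wccp

The advantage over hand-compiling ip_wccp.c is that the module is built with exactly the flags and version stamp of the running kernel, which avoids the "couldn't find the kernel version" error seen below.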
Re: [squid-users] HTTPS Reverse proxy with HTML contents rewrite
On Fri, 7 Nov 2003, Merton Campbell Crockett wrote:

> It's only sent across the Internet to the client in encrypted form. Now,
> that doesn't mean it won't be slow, as each http request will be
> redirected to the https port. But the content won't be retrieved from the
> internal server except when an https request is made.

Most often it is the client->server data you want to protect most, as this may contain login information, session keys, credit card numbers etc. If the client continuously connects first using http, and is then told by the server to use https, then there is no protection of the client->server data, allowing for a wide variety of attacks.

> You could return a permanently moved status to the http request. If
> you're lucky, the browser will "remember" this and translate all http
> requests to https requests.

Unfortunately not very effective: these redirects act on specific URLs only, including the full query string.

It should also be noted that the warning you get when trying to access an http:// URL from a page loaded via https:// is a very valid warning. http:// does not provide any protection of sensitive information or authentication of the server. As soon as you leave the https:// session you are basically on your own wrt security, and a determined cracker will have little problem hijacking your session.

Regards
Henrik
Re: [squid-users] ip_wccp problems
On Fri, 7 Nov 2003, Scott Baker wrote:

> [EMAIL PROTECTED] linux]# modprobe ip_wccp
> /lib/modules/2.4.22/kernel/net/ipv4/ip_wccp.o: couldn't find the kernel
> version the module was compiled for

Hmm.. what version of ip_wccp.c is this? The version and changelog are shown at the top of the file.

Regards
Henrik
Re: [squid-users] Processing of squid?
On Sat, 8 Nov 2003, Kusuma Samorthong wrote:

> I want to know: is this really how Squid processes requests?

It is one way of describing the basic operations of Squid, but there are many different aspects depending on what you are looking at.

The processing of Squid is best described as a large set of state machines, all driven forward by the central select loop monitoring file descriptor activity or timer events. At some points in the code new state machines are born in response to different events; httpAccept() is one such place, DNS lookups another. It is almost impossible to correctly draw one diagram showing all of these and their interactions, but it is possible to describe the general flow of a single request. The following is based on the Squid-2.5 code base:

* httpAccept() accepts a new client connection
* clientReadRequests() reads and parses a request
* clientAccessCheck() verifies the http_access rules
* redirectStart() calls the redirector helper (if any)
* clientCheckNoCache() verifies the no_cache rules
* clientProcessRequest() starts processing the request

From here it depends on whether the request was a cache miss / hit / expired object. I'll focus on the miss case.

* clientProcessMiss() starts retrieving the cache miss and also starts
  reading of the reply (done within the clientCreateStoreEntry() call)
* fwdStart() prepares for selection of a forwarding path
* peerSelect() selects the possible paths where to forward the request
* fwdConnectStart() connects via the selected path
* fwdDispatch() initiates forwarding of the request via the server
  connection

And from here it depends on the type of path selected and protocol requested. The rest can be found relatively easily in the code, and the programmers guide also gives valuable information.

Regards
Henrik
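The select-loop model described above can be illustrated with a tiny sketch (Squid itself is written in C; this Python fragment only mirrors the idea of a central loop dispatching ready file descriptors to per-connection handlers):

```python
import select
import socket

def run_once(handlers, timeout=1.0):
    """One iteration of a select-style event loop: wait until some of the
    registered descriptors are readable, then drive each ready 'state
    machine' (its handler) one step forward."""
    ready, _, _ = select.select(list(handlers), [], [], timeout)
    return [handlers[fd](fd) for fd in ready]

# Demo: a socketpair stands in for an accepted client connection.
client, server = socket.socketpair()
client.send(b"GET / HTTP/1.0\r\n\r\n")
result = run_once({server: lambda s: s.recv(4096)})
client.close()
server.close()
```

After the demo, `result` holds the bytes the "client" sent, showing the handler only runs once its descriptor becomes readable.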
Re: [squid-users] HTTPS Reverse proxy with HTML contents rewrite
On Fri, 7 Nov 2003, Henrik Nordstrom wrote:

> On Fri, 7 Nov 2003, Merton Campbell Crockett wrote:
>
> > Apache using mod_rewrite would be the answer. The [port 80 virtual
> > host] would need to have a redirect to 443. The problem will be the
> > annoying notices about leaving or entering a protected site.
>
> I don't see how this can be the answer.
>
> Sure, it makes things appear to work, but the https encryption becomes
> virtually useless as all/most requests are sent first unencrypted using
> HTTP (including any sensitive request details) and then repeated using
> https.

It's only sent across the Internet to the client in encrypted form. Now, that doesn't mean it won't be slow, as each http request will be redirected to the https port. But the content won't be retrieved from the internal server except when an https request is made.

You could return a permanently moved status to the http request. If you're lucky, the browser will "remember" this and translate all http requests to https requests.

Merton Campbell Crockett

--
BEGIN:vcard
VERSION:3.0
FN:Merton Campbell Crockett
ORG:General Dynamics Advanced Information Systems; Intelligence and Exploitation Systems
N:Crockett;Merton;Campbell
EMAIL;TYPE=internet:[EMAIL PROTECTED]
TEL;TYPE=work,voice,msg,pref:+1(805)497-5045
TEL;TYPE=fax,work:+1(805)497-5050
TEL;TYPE=cell,voice,msg:+1(805)377-6762
END:vcard
Re: [squid-users] no_cache help
Why is it that, when I put the address of mywebserver.pt in the browser's "do not use a proxy server for these addresses" list, nothing is transferred to Temporary Internet Files?

A.O

----- Original Message -----
From: "Marc Elsen" <[EMAIL PROTECTED]>
To: "Adaíl Oliveira" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Friday, November 07, 2003 2:17 PM
Subject: Re: [squid-users] no_cache help

> "Adaíl Oliveira" wrote:
> >
> > Hi,
> > I have squid running on a machine, but I don't want to cache one
> > web server that I have on the intranet. I put in squid.conf:
> >
> >   acl webserver dst mywebserver.pt
> >   no_cache deny webserver
> >
> > But when I visit the website this doesn't work, and many files are
> > created in my browser's Temporary Internet Files.
> >
> > Can you help me?
>
> Browser cache(s) are independent from squid and by definition
> only under the control of the browser.
>
> M.
>
> --
> 'Love is truth without any future.
> (M.E. 1997)
[squid-users] ip_wccp problems
I compiled my own kernel (2.4.22) and the ip_wccp module on my RedHat 9 system as follows:

  gcc -D__KERNEL__ -I/usr/src/linux-2.4.22/include -Wall
      -Wstrict-prototypes -Wno-trigraphs -O2 -fno-strict-aliasing
      -fno-common -fomit-frame-pointer -pipe -mpreferred-stack-boundary=2
      -march=i686 -nostdinc -iwithprefix include -DKBUILD_BASENAME=sched
      -fno-omit-frame-pointer -c -o ip_wccp.o ip_wccp.c

So now I have ip_wccp.o, which I put in my /lib/modules/ directory, but it will NOT load to save my life.

  [EMAIL PROTECTED] linux]# modprobe ip_wccp
  /lib/modules/2.4.22/kernel/net/ipv4/ip_wccp.o: couldn't find the kernel
    version the module was compiled for
  /lib/modules/2.4.22/kernel/net/ipv4/ip_wccp.o: insmod
    /lib/modules/2.4.22/kernel/net/ipv4/ip_wccp.o failed
  /lib/modules/2.4.22/kernel/net/ipv4/ip_wccp.o: insmod ip_wccp failed

  [EMAIL PROTECTED] linux]# insmod /lib/modules/2.4.22/kernel/net/ipv4/ip_wccp.o
  /lib/modules/2.4.22/kernel/net/ipv4/ip_wccp.o: couldn't find the kernel
    version the module was compiled for

  [EMAIL PROTECTED] linux]# uname -a
  Linux localhost.localdomain 2.4.22 #5 Wed Nov 5 09:37:47 PST 2003
    i686 i686 i386 GNU/Linux

Am I missing something here?

---
Scott Baker - Webster Internet
Network Engineer - RHCE
[EMAIL PROTECTED] - 503.266.8253
RE: [squid-users] no_cache help
> Why is it that, when I put the address of mywebserver.pt in the
> browser's "do not use a proxy server for these addresses" list, nothing
> is transferred to Temporary Internet Files?

Any decision about what is and is not put in Temporary Internet Files is made by IE, not Squid. If you want to know why IE does or does not do something, ask Microsoft.

Adam
Re: [squid-users] HTTPS Reverse proxy with HTML contents rewrite
On Fri, 7 Nov 2003, Henrik Nordstrom wrote:

> On Fri, 7 Nov 2003, Merton Campbell Crockett wrote:
>
> > Apache using mod_rewrite would be the answer. The [port 80 virtual
> > host] would need to have a redirect to 443. The problem will be the
> > annoying notices about leaving or entering a protected site.
>
> I don't see how this can be the answer.
>
> Sure, it makes things appear to work, but the https encryption becomes
> virtually useless as all/most requests are sent first unencrypted using
> HTTP (including any sensitive request details) and then repeated using
> https.

Further, no Apache is needed for the above. A plain browser redirect sent by Squid does the same, easily configured via a redirector helper.

Regards
Henrik
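Such a redirector helper is only a few lines in any scripting language. A minimal sketch follows (host names are examples; the "301:" reply convention is the standard Squid redirector protocol for asking Squid to send an HTTP redirect):

```python
#!/usr/bin/env python
# Squid redirector sketch: Squid writes "URL client/fqdn ident method"
# lines to our stdin; we answer "301:<https-url>" to bounce http
# requests to https, or an empty line to leave the URL untouched.
import sys

def rewrite(line):
    url = line.split()[0]
    if url.startswith("http://"):
        return "301:https://" + url[len("http://"):]
    return ""  # empty reply: no change

if __name__ == "__main__":
    for request in sys.stdin:
        sys.stdout.write(rewrite(request) + "\n")
        sys.stdout.flush()  # Squid expects one unbuffered reply per request
```

Hook it up with a `redirect_program /usr/local/bin/https-redirect.py` line in squid.conf (path assumed).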
Re: [squid-users] HTTPS Reverse proxy with HTML contents rewrite
On Fri, 7 Nov 2003, Merton Campbell Crockett wrote:

> Apache using mod_rewrite would be the answer. The [port 80 virtual
> host] would need to have a redirect to 443. The problem will be the
> annoying notices about leaving or entering a protected site.

I don't see how this can be the answer.

Sure, it makes things appear to work, but the https encryption becomes virtually useless as all/most requests are sent first unencrypted using HTTP (including any sensitive request details) and then repeated using https.

Regards
Henrik
[squid-users] Processing of squid?
I found a description of Squid's processing on a web site. It goes as follows:

1. Squid initialization opens connections (sockets) for HTTP and ICP traffic.
2. A client starts a connection on the HTTP port; the connection is established and a new client fd is assigned.
3. The client sends a request; Squid receives the request and parses it - all (HTTP) headers, the method, etc.
4. Squid processes the request and checks whether the object is there or not, i.e. whether it is a HIT in the cache or a MISS.
5. If it was a MISS, find the proper server cache where to send the request (from the client).
6. Open a socket and try to connect to the selected server.
7. Create a request for the server and then send it to the server by writing it to the server socket.
8. When the response comes from the server, process the response.
9. Send the data back to the client.
10. Write the object to the cache (memory) or to disk.

I attach a picture of this processing. I want to know: is this really how Squid processes requests?

Regards,
Ae
Re: [squid-users] Experience of big squid setups?
On Fri, 7 Nov 2003, Kinkie wrote:

> Antony Stone <[EMAIL PROTECTED]> writes:
>
> > Hi.
> >
> > Does anyone here have experience of setting up a (multi-)squid install
> > to handle tens/hundreds of millions of HTTP requests per day? I guess
> > I'm talking ISP-level network sizes here, but I'm sure squid can do it
> > (given the right hardware and the right amount of bandwidth) - can
> > anyone point to an installation and say "yes, we're doing that with
> > squid"?
>
> 25M+/day here.

About the same here: around 24 million requests a day, spread across three boxes.

--
Joel Jaeggli    Unix Consulting    [EMAIL PROTECTED]
GPG Key Fingerprint: 5C6E 0104 BAF0 40B0 5BD3 C38B F000 35AB B67F 56B2
Re: [squid-users] Experience of big squid setups?
Antony Stone <[EMAIL PROTECTED]> writes:

> Hi.
>
> Does anyone here have experience of setting up a (multi-)squid install
> to handle tens/hundreds of millions of HTTP requests per day? I guess
> I'm talking ISP-level network sizes here, but I'm sure squid can do it
> (given the right hardware and the right amount of bandwidth) - can
> anyone point to an installation and say "yes, we're doing that with
> squid"?

25M+/day here.

--
kinkie (kinkie-squid [at] kinkie [dot] it)
Random fortune, unrelated to the message:
"Necessity is the mother of invention" is a silly proverb. "Necessity is the mother of futile dodges" is much nearer the truth.
        -- Alfred North Whitehead
[squid-users] Experience of big squid setups?
Hi.

Does anyone here have experience of setting up a (multi-)squid install to handle tens/hundreds of millions of HTTP requests per day? I guess I'm talking ISP-level network sizes here, but I'm sure squid can do it (given the right hardware and the right amount of bandwidth) - can anyone point to an installation and say "yes, we're doing that with squid"?

Antony.

--
Feeling bad at breakfast because you don't have a hangover is evidence of a complex emotional life it can take many years to perfect.
        - Pete McCarthy, The Road to McCarthy
Re: [squid-users] Web servers in different ports
On Friday 07 November 2003 4:39 pm, Eicke wrote:

> Hi folks,
>
> I use ipfw to forward all http (port 80) requests to the squid server.
> I would like all http and https requests, independent of port, to be
> forwarded to squid. I changed the ipfw rule to:
>
>   ipfw add 1 fwd 127.0.0.1,3128 tcp from 192.168.5.0/27 to any
>     80,443,8180,8080,8000 via xl0
>
> I am trying to access an https (443) server, but it did not work.

https is not the same as http. To proxy https you must tell the browser to use the proxy - you can't do it by NAT.

Antony.

--
If the human brain were so simple that we could understand it, we'd be so simple that we couldn't.

Please reply to the list; please don't CC me.
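Telling browsers to use the proxy can be automated with a PAC file rather than per-machine settings; the address and port below are illustrative examples, not values from the thread:

  // proxy.pac -- served to browsers; values are assumptions
  function FindProxyForURL(url, host) {
      return "PROXY 192.168.5.1:3128; DIRECT";
  }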
[squid-users] Traffic Accounting per user
Hi!

I think these are not exactly what you want, but you might take a look at squid2mysql and Squid's delay pools.

squid2mysql (http://evc.fromru.com/squid2mysql/features.html) allows you to define download limits per user based on a period (daily, monthly, etc.).

Delay pools are a native Squid feature, and allow you to control bandwidth based on ACLs (src, for instance).

Regards,
Carlos.

-----

Hello,

I am maintaining a local network with internet access over a Squid proxy. The problem is: Squid divides the bandwidth on a per-connection basis. If user A started 10 downloads (maybe using a download manager) and user B started only 1 download, then B would only get 1/11 of the available bandwidth. What I want is to divide the bandwidth on an IP basis, so that users A and B would equally get 50 percent of the full bandwidth. I googled a long time, but didn't find anything... :-(

Thanks a lot,
Matthias
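For the per-IP bandwidth question specifically, a class 2 delay pool in squid.conf is the usual tool. Note that it caps each source IP at a fixed rate rather than dynamically splitting the link 50/50 between active users; the numbers here are illustrative only:

  # One pool; class 2 = one aggregate bucket plus one bucket per client IP
  delay_pools 1
  delay_class 1 2
  # aggregate unrestricted (-1/-1); each client IP limited to 16 KB/s
  delay_parameters 1 -1/-1 16000/16000
  delay_access 1 allow all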
[squid-users] Web servers in different ports
Hi folks,

I use ipfw to forward all http (port 80) requests to the squid server. I would like all http and https requests, independent of port, to be forwarded to squid. I changed the ipfw rule to:

  ipfw add 1 fwd 127.0.0.1,3128 tcp from 192.168.5.0/27 to any
    80,443,8180,8080,8000 via xl0

I am trying to access an https (443) server, but it did not work. Could you help me?

Regards,
Eicke.
Re: [squid-users] HTTPS Reverse proxy with HTML contents rewrite
On Fri, 7 Nov 2003, Henrik Nordstrom wrote:

> On Fri, 7 Nov 2003, Souchon Yann wrote:
>
> > Apache with mod_proxy can make this very simply.
>
> So can Squid-2.5 and later.
>
> > But I have a problem that Apache can't help me with.
> >
> > The links on the HTML pages (WEB SERVER) are not relative, but full.
>
> Ouch.. for this (without touching the application or web server) you
> need a content rewriting proxy. Squid is not such.

Apache using mod_rewrite would be the answer. The [port 80 virtual host] would need to have a redirect to 443. The problem will be the annoying notices about leaving or entering a protected site.

A possibility is to rewrite the original request to go to a second, internal virtual host on your DMZ system. This host uses PHP to retrieve the content from the internal system, replaces all occurrences of http: with https:, and returns the content to the original virtual host, which returns it to the external client.

Merton Campbell Crockett
Re: [squid-users] Enabling SSLGateway
On Fri, 7 Nov 2003, Richard Barrett wrote:

> My objective is to use Squid as an https reverse proxy front-ending an
> Apache server. Squid is happily terminating incoming https connections
> from the browser and making http requests to the Apache server. But I
> want the requests from Squid to Apache to also use https, so that Squid
> is functioning as a transparent https 'gateway'.

The easiest way of doing this in Squid-2.5 is to use the Apache as a "parent" to your Squid with the ssl cache_peer flag:

  cache_peer your.apache.server parent 80 0 no-query ssl
  never_direct allow all

Another way is to use a redirector helper to rewrite the accelerated URLs to https://...

In Squid-3 this is considerably easier and works a bit better (except for the fact that Squid-3 is still under development and is somewhat of a moving target..)

Regards
Henrik
Re: [squid-users] HTTPS Reverse proxy with HTML contents rewrite
On Fri, 7 Nov 2003, Souchon Yann wrote:

> Apache with mod_proxy can make this very simply.

So can Squid-2.5 and later.

> But I have a problem that Apache can't help me with.
>
> The links on the HTML pages (WEB SERVER) are not relative, but full.

Ouch.. for this (without touching the application or web server) you need a content rewriting proxy. Squid is not such a thing.

It should be possible to extend Squid-3 to do this using the new internal client-streams API interfaces used by ESI, but it is not an easy task and in fact cannot solve all problems. In our reverse proxy product (eMARA) we work around this with a small custom rewriting engine running as a parent to Squid.

It is strongly advised to look into whether it would be possible to convince the web application to use the public Internet URL names on accesses via the reverse proxy, rather than trying to have the reverse proxy modify the content. Some applications do this automatically if they receive a correct Host header; others require a new virtual server to be defined.

Regards
Henrik
[squid-users] Enabling SSLGateway
I am using squid-2.5.STABLE4-20031107 with the ssl-2.5 patch applied. All has built and works at a basic level.

My objective is to use Squid as an https reverse proxy front-ending an Apache server. Squid is happily terminating incoming https connections from the browser and making http requests to the Apache server. But I want the requests from Squid to Apache to also use https, so that Squid is functioning as a transparent https 'gateway'.

Although I have searched the squid-users list archive and studied the comments/entries in squid.conf after applying the patch, I have not identified what exactly tells Squid to use https rather than http to talk to the back-end Apache server.

Can some kind soul enlighten me, or point me at some published material on this point? I am not averse to reading code, but am under the gun on timescale and really need to get this working.

Thanks

Richard
---
Richard Barrett    http://www.openinfo.co.uk
Re: [squid-users] HTTPS Reverse proxy with HTML contents rewrite
I did not ask for the addresses... :-)

But if, fortunately, the _name_ internal-server.domain.com translated to the same address (not the internal one) as the _name_ external-server.domain.com in public DNS... your problem would only be a matter of mapping http to https, and that could be handled using Apache :-) I am unfortunately not a Squid expert, so certainly Squid would do it as well... But perhaps you do not control these parameters...

Regards,
Claus

----- Original Message -----
From: "Souchon Yann" <[EMAIL PROTECTED]>

> > And unfortunately internal-server.domain.com does _not_
> > translate to external-server.domain.com in public DNS ???
>
> The internal-server isn't reachable from the internet, and the SQUID
> PROXY is in a DMZ. Nobody can access the internal network from the
> internet.
>
> For security, I would like to use SSL between the CLIENTS and the
> PROXY SQUID.
>
> Yann
RE: [squid-users] HTTPS Reverse proxy with HTML contents rewrite
> And unfortunately internal-server.domain.com does _not_
> translate to external-server.domain.com in public DNS ???

The internal-server isn't reachable from the internet, and the SQUID PROXY is in a DMZ. Nobody can access the internal network from the internet.

For security, I would like to use SSL between the CLIENTS and the PROXY SQUID.

Yann
Re: [squid-users] HTTPS Reverse proxy with HTML contents rewrite
And unfortunately internal-server.domain.com does _not_ translate to external-server.domain.com in public DNS ???

Regards,
Claus

----- Original Message -----
> The PROXY SQUID should rewrite all HTML contents from
> http://internal-server.domain.com to https://external-server
RE: [squid-users] HTTPS Reverse proxy with HTML contents rewrite
> From: "Souchon Yann" <[EMAIL PROTECTED]>
> > I would like to know if Squid (stable or devel) can help me with this
> > architecture:
> >
> > CLIENTS <-1-> PROXY SQUID <-2-> WEB SERVER
> >
> > Protocol on 1: HTTPS
> > Protocol on 2: HTTP

> Is there only 1 WebServer?

Yes, I have only 1 WEB SERVER.

> Do the clients use PROXY SQUID also for other connections?

No, the PROXY SQUID is used only for access to the internal WEB SERVER from the internet.

> If the clients connect to the internet ... how does this happen?

The CLIENTS browse a public URL, e.g. https://external-server.domain.com. The PROXY SQUID should rewrite all HTML contents from http://internal-server.domain.com to https://external-server.domain.com.

Regards,
Yann
Re: [squid-users] HTTPS Reverse proxy with HTML contents rewrite
From: "Souchon Yann" <[EMAIL PROTECTED]> > I would like to know if Squid (stable or devel) can help me for this > architecture : > > CLIENTS <-1-> PROXY SQUID <-2-> WEB SERVER > > Protocol on 1 : HTTPS > Protocol on 2 : HTTP Is there only 1 WebServer ? Do the clients use PROXY SQUID also for other connections? If the clients connect to the internet ... how does this happen? Regards, Claus
RE: [squid-users] Squidalyzer or comparable?
I've modified the squid2common Perl script, and currently I'm using it to break up access.log into a comma-delimited text file. With a little work I've been able to create another file using the squidGuard blacklists to categorize the domains. My output file contains Date, Time, IP, Requested URL, Category, and HTTP Status, which I can then import into Excel, etc. for analysis. If you're interested, get in touch with me off-list and I can send you the files. Mike -Original Message- From: Tan Jun Min [mailto:[EMAIL PROTECTED] Sent: Thursday, November 06, 2003 11:50 PM To: Li Wei Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED] Subject: Re: [squid-users] Squidalyzer or comparable? Try awstats http://awstats.sourceforge.net/ On Friday 07 November 2003 02:42 pm, Li Wei wrote: > Sarg may be a good choice for your No.2 purpose. > > - Original Message - > From: <[EMAIL PROTECTED]> > To: <[EMAIL PROTECTED]> > Sent: Friday, November 07, 2003 8:49 AM > Subject: [squid-users] Squidalyzer or comparable? > > > I am VERY new to Linux and Squid. > > > > I've set up Squid on a Red Hat 9.0 box and am currently using Basic > > Squid Authentication. > > > > I would like to be able to track web site vs user info using > > something like Squidalizer. Webalizer is installed and running but > > does not appear to be able to show user/site info. I've started > > playing with Squidalizer, but, as a newbie, it is currently beyond > > my capabilities. I got bogged down with the GD Graphics library > > install. I was able to get the required mySql table set up OK. > > > > My questions are: > > > > 1. Is there a Red Hat 9 RPM for Squidalizer? I've Google/Linux > > searched and haven't found one. > > > > 2. Is there an alternative to Squidalizer? Again, I'm really only > > interested in seeing: user, web site visited, time/date of visit, > > and workstation IP. I don't need graphs, performance statistics, > > etc. > > > > 3. Has anyone configured Webalizer to show the items in 2 above? 
> > > > Thanks in advance, > > Mark Cooper > > Columbus, Ohio USA > > > > BTW, I'm leaving Friday at noon for a long weekend away from my > > computer and will not be back until sometime late Monday. If anyone > > asks for more info, I'll be able to respond Monday evening. I'm > > still looking into the list of scripts at > > http://www.squid-cache.org/scripts/ , but I'm hoping to have Plan B > > moving along in case I strike out.
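Mike's modified script isn't shown here, but the general idea of flattening Squid's native access.log into comma-separated fields can be sketched in a few lines of awk. (This is an illustration, not Mike's actual script; the sample log line and the /tmp paths are made up.)

```shell
# Write one sample line in Squid's native access.log format, then
# flatten it to Timestamp,ClientIP,HTTPStatus,URL (comma-delimited).
printf '1068132264.745    3 192.168.5.9 TCP_MISS/503 1011 GET http://192.168.2.1/ - NONE/- -\n' \
  > /tmp/sample_access.log
awk '{ split($4, tag, "/");                        # $4 is e.g. "TCP_MISS/503"
       printf "%s,%s,%s,%s\n", $1, $3, tag[2], $7  # time, client IP, status, URL
     }' /tmp/sample_access.log > /tmp/sample_access.csv
cat /tmp/sample_access.csv
```

The resulting CSV can then be opened directly in Excel; categorizing each URL against the squidGuard blacklists would be an extra lookup step on top of this.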
[squid-users] TAG: no_cache help
Hi, I have Squid running on a machine, but I don't want to cache one WebServer that I have on the Intranet. I put the tag: acl webserver dst mywebserver.pt no_cache deny webserver But when I visit the website this doesn't work, and many files are created in the Temporary Internet Files of my browser. Can you help me? [EMAIL PROTECTED] ESTG - (http://www.estg.iplei.pt)
[squid-users] HTTPS Reverse proxy with HTML contents rewrite
Hello, I would like to know if Squid (stable or devel) can help me with this architecture : CLIENTS <-1-> PROXY SQUID <-2-> WEB SERVER Protocol on 1 : HTTPS Protocol on 2 : HTTP Apache with mod_proxy can do this very simply. But I have a problem that Apache can't solve. The links in the HTML pages (WEB SERVER) are not relative, but full. Example : - href="http://internal-server/test/foo/resources/" is full - href="/test/foo/resources/" is relative The CLIENTS on the internet can't resolve the host internal-server. Can Squid, or Squid with modules, rewrite all links in the HTML contents of each page ? Thank you for your help, Yann
RE: [squid-users] no_cache help
> I have a squid running in a machine but i don´t want make > cache of one WebServer that i have in Intranet. > I put the Tag: acl webserver dst mywebserver.pt > no_cache deny webserver It looks like you are using the hostname of your webserver. The dst acl type uses the IP address, not the hostname. Change mywebserver.pt to the IP address of the web server, run squid -k reconfigure, and everything should work fine. Adam
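As a concrete sketch of Adam's fix (the IP address below is a placeholder standing in for whatever mywebserver.pt actually resolves to):

```conf
# squid.conf fragment -- replace 192.168.0.10 with the web server's real IP
acl webserver dst 192.168.0.10
no_cache deny webserver
```

After editing squid.conf, `squid -k reconfigure` picks up the change without a full restart.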
Re: [squid-users] no_cache help
"Adaíl Oliveira" wrote: > > Hi, > I have Squid running on a machine, but I don't want to cache one > WebServer that I have on the Intranet. > I put the tag: acl webserver dst mywebserver.pt > no_cache deny webserver > > But when I visit the website this doesn't work, and many files are created in > the Temporary Internet Files of my browser. > > Can you help me? Browser cache(s) are independent of Squid and by definition only under the control of the browser. M. > > -- > [EMAIL PROTECTED] ESTG - (http://www.estg.iplei.pt) -- 'Love is truth without any future. (M.E. 1997)
[squid-users] no_cache help
Hi, I have Squid running on a machine, but I don't want to cache one WebServer that I have on the Intranet. I put the tag: acl webserver dst mywebserver.pt no_cache deny webserver But when I visit the website this doesn't work, and many files are created in the Temporary Internet Files of my browser. Can you help me? -- [EMAIL PROTECTED] ESTG - (http://www.estg.iplei.pt)
Re: [squid-users] Access http server
Then you are hitting your Squid server, not the web server you are looking for. Squid will not accept talking to itself, as this would generate a recursive loop with no positive result (squid talks to itself, which would then connect again to itself, which would then again connect to itself, which would then again connect to itself... etc). Regards Henrik On Fri, 7 Nov 2003, Eicke wrote: > I tried to telnet to port 80 and the result is the following: > > HTTP/1.0 400 Bad Request > > Could you help me? > - Original Message - > From: "Henrik Nordstrom" <[EMAIL PROTECTED]> > To: "Eicke" <[EMAIL PROTECTED]> > Cc: <[EMAIL PROTECTED]> > Sent: Thursday, November 06, 2003 6:10 PM > Subject: Re: [squid-users] Access http server > > > > On Thu, 6 Nov 2003, Eicke wrote: > > > > > 1068132264.745 3 192.168.5.9 TCP_MISS/503 1011 GET http://192.168.2.1/ - > > > NONE/- - > > > > This indicates your Squid could not reach the requested server. It is not > > an access control problem within Squid. > > > > Can you from the Squid server connect to the 192.168.2.1 server on port > > 80? > > > >telnet 192.168.2.1 80 > >GET / HTTP/1.0 > >[blank line] > > > > If this does not work from the Squid server there is no chance Squid > > running on the Squid server will be able to do the same. > > > > Regards > > Henrik
Re: [squid-users] Access http server
I tried to telnet to port 80 and the result is the following: HTTP/1.0 400 Bad Request Server: Squid/2.4.STABLE7 Mime-Version: 1.0 Date: Fri, 07 Nov 2003 11:54:55 GMT Content-Type: text/html Content-Length: 841 Expires: Fri, 07 Nov 2003 11:54:55 GMT X-Squid-Error: ERR_INVALID_REQ 0 X-Cache: MISS from 192.168.2.1 Proxy-Connection: close ERROR: The requested URL could not be retrieved While trying to process the request, the following error was encountered: Invalid Request Some aspect of the HTTP Request is invalid. Possible problems: Missing or unknown request method Missing URL Missing HTTP Identifier (HTTP/1.0) Request is too large Content-Length missing for POST or PUT requests Illegal character in hostname; underscores are not allowed Your cache administrator is webmaster. Generated Fri, 07 Nov 2003 11:54:55 GMT by 192.168.2.1 (Squid/2.4.STABLE7) Could you help me? - Original Message - From: "Henrik Nordstrom" <[EMAIL PROTECTED]> To: "Eicke" <[EMAIL PROTECTED]> Cc: <[EMAIL PROTECTED]> Sent: Thursday, November 06, 2003 6:10 PM Subject: Re: [squid-users] Access http server > On Thu, 6 Nov 2003, Eicke wrote: > > > 1068132264.745 3 192.168.5.9 TCP_MISS/503 1011 GET http://192.168.2.1/ - > > NONE/- - > > This indicates your Squid could not reach the requested server. It is not > an access control problem within Squid. > > Can you from the Squid server connect to the 192.168.2.1 server on port > 80? > >telnet 192.168.2.1 80 >GET / HTTP/1.0 >[blank line] > > If this does not work from the Squid server there is no chance Squid > running on the Squid server will be able to do the same. > > Regards > Henrik
Re: [squid-users] NTLM Problem with Squid2.5 Stable 4
Hi John, At 00.12 07/11/2003, macross292 wrote: We are currently running Squid 2.5 Stable 4 on Windows 2000, and have a problem with NTLM authentication. During busy times the squid service stops authenticating people and we need to restart the service to get it working again. This has been automated so that a script connects with a local user and attempts to fetch a page every minute and restarts the service if this request fails. This has shown that the restart is occurring approx every hour during business hours and then doesn't restart till the next morning. How busy is your Squid? Squid for Windows currently can't handle more than 90-100 concurrent clients without rebuilding the Microsoft C Runtime Library MSVCRT.DLL from source. And what is in your cache.log? Regards Guido - Guido Serassio Acme Consulting S.r.l. Via Gorizia, 69 10136 - Torino - ITALY Tel. : +39.011.3249426 Fax. : +39.011.3293665 Email: [EMAIL PROTECTED] WWW: http://www.acmeconsulting.it/
Re: [squid-users] Need to change my squid access log dir
On Fri, 7 Nov 2003, Taiwo Akinosho wrote: > I did try that but squid starts and stops shortly after. > I suspect a permission problem but I don't know > where to start. See cache.log. > Can someone help me with a step by step > process on creating the new location and enable squid > to write to it. 1. Create the new directory. 2. Make sure your cache_effective_user has write permission to this new directory. 3. Change the Squid log file paths to use the new directory and restart Squid. If there are problems, look in cache.log. If you suspect problems writing to cache.log, start Squid manually with the -N command line flag. This will prevent Squid from going into the background and allows catastrophic errors to be printed in the terminal window from which you started Squid. You should also be able to find these errors in your system's syslog messages file. Regards Henrik
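Henrik's steps 1 and 2 amount to something like the following. This is a sketch using a scratch directory; on a real system the path would be something like /var/log/squid, and you would chown it to whatever user cache_effective_user names (often "squid" -- both the path and user here are assumptions, not values from the thread).

```shell
# Create the new log directory and make it writable, then verify
# by creating a file in it (as Squid itself would at startup).
logdir="$(mktemp -d)/squid-logs"       # stand-in for the real path
mkdir -p "$logdir"
chmod 750 "$logdir"                    # plus "chown squid:squid" on a real box
touch "$logdir/access.log"             # succeeds only if the dir is writable
ls -ld "$logdir"
```

Step 3 is then a matter of pointing the log directives (cache_access_log, cache_log, cache_store_log) in squid.conf at the new directory and restarting Squid.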
RE: [squid-users] RE: Sample squid.conf
On Fri, 7 Nov 2003, Milind Nanal wrote: > Does the router show that there is hash assignment to the cache server? > > But I am not aware of the hash assignment. To see the hash assignment you need to query your router for the status of WCCPv2. This status display shows which WCCPv2 cache servers the router knows about, and how traffic is assigned to these servers. Regards Henrik
RE: [squid-users] RE: Sample squid.conf
Does the router show that there is hash assignment to the cache server? But I am not aware of the hash assignment. Regards, Milind -Original Message- From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] Sent: Friday, November 07, 2003 1:10 PM To: Milind Nanal Cc: Henrik Nordstrom; [EMAIL PROTECTED] Subject: RE: [squid-users] RE: Sample squid.conf On Fri, 7 Nov 2003, Milind Nanal wrote: > Are there any extra parameters required in the squid.conf file? If so, > let me know. From what I can remember, none besides the wccpv2 router address and the normal parameters required for interception caching. Regards Henrik -Original Message- From: Milind Nanal [mailto:[EMAIL PROTECTED] Sent: Friday, November 07, 2003 12:32 PM To: Henrik Nordstrom Cc: [EMAIL PROTECTED] Subject: RE: [squid-users] RE: Sample squid.conf Henrik, WCCPv2 GRE support is activated in the OS kernel. One redirection rule is applied so that packets arriving on port 80 are redirected to port 3128. But I am not aware of the hash assignment. Are there any extra parameters required in the squid.conf file? If so, let me know. Also tell me if any other details are required from my side. Regards, Milind -Original Message- From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] Sent: Thursday, November 06, 2003 5:24 PM To: Milind Nanal Cc: [EMAIL PROTECTED] Subject: RE: [squid-users] RE: Sample squid.conf On Thu, 6 Nov 2003, Milind Nanal wrote: > I have added OS kernel support for WCCPv2. When I checked wccp status on > router, it shows that router identifies the cache engine's presence over > there. > > But still ..problem > > When squid service is started cache.log file generates. > Below is the cache.log generated by squid. > > 2003/11/06 14:34:23| Ready to serve requests. > 2003/11/06 14:35:23| Incoming WCCP v2 I_SEE_YOU length 176. > 2003/11/06 14:35:23| Incoming WCCP2_I_SEE_YOU received id = 50862. > 2003/11/06 14:35:23| Incoming WCCP2_I_SEE_YOU member change = 1 tmp=66. 
> > No data logs in access.log & store.log files. > Further guidance is required.. Have you activated the WCCPv2 GRE support in your OS (usually involves loading a kernel module or setting up a GRE tunnel if your GRE implementation supports WCCPv2) Have you enabled interception of port 80 packets on the proxy server by firewall/nat rules in the proxy server OS? Does the router show that there is hash assignment to the cache server? Regards Henrik
[squid-users] Need to change my squid access log dir
Hello, Some time back, I wrote about squid eating up my hard drive space. I got a reply from Henrik on changing the path of my access log. I did try that but squid starts and stops shortly after. I suspect a permission problem but I don't know where to start. Can someone help me with a step-by-step process on creating the new location and enabling squid to write to it. Thanks a lot.
Re: [squid-users] Squid FD_SETSIZE
On Fri, 7 Nov 2003 [EMAIL PROTECTED] wrote: > I re-compiled Squid (2.4.STABLE7 on Solaris 8) with system file descriptors > set to 4096 and Squid now confirms that it is using 4096 on startup. But > during the configure, I noticed a message that said "checking Default > FD_SETSIZE value 1024". > > Is this going to limit Squid to using only 1024 fds? What is it saying? Depends on your OS. On Solaris the default FD_SETSIZE is automatically overridden. What you need to worry about is what Squid says about the number of filedescriptors it will use when starting. Regards Henrik
Re: [squid-users] ncsa password expiry
On Fri, 7 Nov 2003, melvin melvin wrote: > Is there a way to set a password expiration date for users and force them to > change their passwords after a specified time? I have already set up a > change password utility but the problem is users will not change their > passwords if they are not told to do so, thus i would like to set a password > expiry for the users. It should not be too hard to add this information to the NCSA helper if required. You might also consider using one of the other authentication helpers connecting to a backend database supporting password change expiration. Regards Henrik
Re: [squid-users] Squid + Radius Auth
On Fri, 7 Nov 2003, Joel Andersson wrote: > Can I set up Squid to be a reverse proxy that requires login with a RADIUS challenge? The basic HTTP authentication scheme in Squid only supports static or semi-static passwords, and does not support challenging the user with a server challenge. > Is it possible to do this authentication on a web page instead of the standard IE > authentication pop-up ? Yes, in a reverse proxy this is possible by using cookie-based authentication running on a web server accessible by your reverse proxy and a custom external_acl to verify the cookie (and return the username to Squid). Regards Henrik
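A rough squid.conf sketch of the cookie approach Henrik describes (the helper path, login URL, and acl names are all hypothetical; the helper itself, which must validate the Cookie header and answer OK/ERR per the external_acl helper protocol, is not shown):

```conf
# Hand the Cookie header to an external helper that validates the session
external_acl_type session %{Cookie} /usr/local/squid/libexec/check_session
acl validsession external session
http_access allow validsession
# Send visitors without a valid session to the login page instead of a 403
deny_info https://login.example.com/ validsession
http_access deny all
```

The login page would set the session cookie after a successful RADIUS exchange, which keeps the challenge/response dialogue entirely out of Squid.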
Re: [squid-users] Can i bypass authentication for an application running on a pc
On Fri, 7 Nov 2003, Matthew Richards wrote: > Can anyone please tell me if it is possible to have a user challenged to > authenticate by squid but > have an application on the same computer access the Internet without being > challenged? Yes. If your application sends an identifiable User-Agent then this can be used to allow any application identifying itself as this User-Agent access without authentication by using the browser acl type. Please note however that this can be abused by your users to bypass the authentication by reconfiguring their browser to identify itself as this User-Agent. There is no means whereby the proxy can securely verify that the indicated User-Agent string is really your application and not something else claiming to be your application. But assuming your application only visits a handful of sites you can set up access controls to only allow this User-Agent to bypass authentication for these sites, or, if the User-Agent is not identifiable, even allow all access to these sites without authentication. Regards Henrik
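In squid.conf terms, the arrangement Henrik describes looks roughly like this (the "MyApp" User-Agent string, the site list, and the `authusers` acl are placeholders for your own values):

```conf
# Let anything claiming to be MyApp reach its handful of sites without
# authentication; all other traffic must authenticate as usual.
acl myapp browser ^MyApp/
acl myapp_sites dstdomain .update.example.com .license.example.com
http_access allow myapp myapp_sites
http_access allow authusers
http_access deny all
```

The browser acl matches a regular expression against the User-Agent header, which is exactly why the caveat about spoofing applies: any client can send that header.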
RE: [squid-users] RE: Sample squid.conf
On Fri, 7 Nov 2003, Milind Nanal wrote: > Are there any extra parameters required in the squid.conf file? If so, > let me know. From what I can remember, none besides the wccpv2 router address and the normal parameters required for interception caching. Regards Henrik
RE: [squid-users] RE: Sample squid.conf
Henrik, WCCPv2 GRE support is activated in the OS kernel. One redirection rule is applied so that packets arriving on port 80 are redirected to port 3128. But I am not aware of the hash assignment. Are there any extra parameters required in the squid.conf file? If so, let me know. Also tell me if any other details are required from my side. Regards, Milind -Original Message- From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] Sent: Thursday, November 06, 2003 5:24 PM To: Milind Nanal Cc: [EMAIL PROTECTED] Subject: RE: [squid-users] RE: Sample squid.conf On Thu, 6 Nov 2003, Milind Nanal wrote: > I have added OS kernel support for WCCPv2. When I checked wccp status on > router, it shows that router identifies the cache engine's presence over > there. > > But still ..problem > > When squid service is started cache.log file generates. > Below is the cache.log generated by squid. > > 2003/11/06 14:34:23| Ready to serve requests. > 2003/11/06 14:35:23| Incoming WCCP v2 I_SEE_YOU length 176. > 2003/11/06 14:35:23| Incoming WCCP2_I_SEE_YOU received id = 50862. > 2003/11/06 14:35:23| Incoming WCCP2_I_SEE_YOU member change = 1 tmp=66. > > No data logs in access.log & store.log files. > Further guidance is required.. Have you activated the WCCPv2 GRE support in your OS (usually involves loading a kernel module or setting up a GRE tunnel if your GRE implementation supports WCCPv2) Have you enabled interception of port 80 packets on the proxy server by firewall/nat rules in the proxy server OS? Does the router show that there is hash assignment to the cache server? Regards Henrik
[squid-users] Squid FD_SETSIZE
I re-compiled Squid (2.4.STABLE7 on Solaris 8) with system file descriptors set to 4096 and Squid now confirms that it is using 4096 on startup. But during the configure, I noticed a message that said "checking Default FD_SETSIZE value 1024". Is this going to limit Squid to using only 1024 fds? What is it saying? TIA Jeff -- Jeff Richards Technical Consultant Unix Enterprise Services [EMAIL PROTECTED] Tel: +61 2 6219 8125
Re: [squid-users] Squidalyzer or comparable?
Try awstats http://awstats.sourceforge.net/ On Friday 07 November 2003 02:42 pm, Li Wei wrote: > Sarg may be a good choice for your No.2 purpose. > > - Original Message - > From: <[EMAIL PROTECTED]> > To: <[EMAIL PROTECTED]> > Sent: Friday, November 07, 2003 8:49 AM > Subject: [squid-users] Squidalyzer or comparable? > > > I am VERY new to Linux and Squid. > > > > I've set up Squid on a Red Hat 9.0 box and am currently using Basic Squid > > Authentication. > > > > I would like to be able to track web site vs user info using something > > like Squidalizer. Webalizer is installed and running but does not appear > > to be able to show user/site info. I've started playing with Squidalizer, > > but, as a newbie, it is currently beyond my capabilities. I got bogged > > down with the GD Graphics library install. I was able to get the required > > mySql table set up OK. > > > > My questions are: > > > > 1. Is there a Red Hat 9 RPM for Squidalizer? I've Google/Linux searched > > and haven't found one. > > > > 2. Is there an alternative to Squidalizer? Again, I'm really only > > interested in seeing: user, web site visited, time/date of visit, and > > workstation IP. I don't need graphs, performance statistics, etc. > > > > 3. Has anyone configured Webalizer to show the items in 2 above? > > > > Thanks in advance, > > Mark Cooper > > Columbus, Ohio USA > > > > BTW, I'm leaving Friday at noon for a long weekend away from my computer > > and will not be back until sometime late Monday. If anyone asks for more > > info, I'll be able to respond Monday evening. I'm still looking into the > > list of scripts at http://www.squid-cache.org/scripts/ , but I'm hoping > > to have Plan B moving along in case I strike out.