RE: [squid-users] squid and swf files

2006-02-24 Thread Gregori Parker
I had the mime_table commented out, so I uncommented it, pointed it to the 
correct file, and replaced in mime.conf "+download" with "+view"...it seems to 
have fixed the problem for the time being.

Also, please disregard all my other messages (epoll, cachemgr, etc) - all is 
well now.  Well, except for peering...I don't think my all-sibling setup is 
doing a damn thing.  I'm going to try eliminating peering and then leave this 
cluster alone for awhile.

Peace -- Gregori 
 


-Original Message-
From: Mark Elsen [mailto:[EMAIL PROTECTED] 
Sent: Friday, February 24, 2006 3:40 PM
To: Gregori Parker
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] squid and swf files

> I've been getting reports of problems with squid and swf files.  After
> doing some testing, I found that a link like
> http://my.squid.cache/directory/something.swf would work fine in Mozilla
> but not in Internet Explorer - IE says something about downloading in
> the status bar and then hangs for a long while.  I researched this a
> bit, and found reports that this issue can be fixed on Apache by
> sticking "AddType application/x-shockwave-flash .swf" in the conf file.
>
> I noticed that squid/etc/mime.conf has the following line:
>
> \.swf$ application/x-shockwave-flash anthony-unknown.gif - image
> +download
>
> But then I read somewhere else that mime.conf only applies to ftp/gopher
> and other non-http traffic...so,
>
>

   Where is somewhere ?


  Since my name is nobody :
 --

 From squid.conf.default :

#  TAG: mime_table
#   Pathname to Squid's MIME table. You shouldn't need to change
#   this, but the default file contains examples and formatting
#   information if you do.
#
#Default:
# mime_table /etc/squid/mime.conf


So it's highly unlikely that Squid does not use this info
for 'http' operations.
Are you using the default setting for this value, and/or
is the specified file readable by the squid effective user?

M.



Re: [squid-users] Problem with intercept squid and boinc

2006-02-24 Thread Henrik Nordstrom
Fri 2006-02-24 at 19:04 -0300, Oliver Schulze L. wrote:
> I have visited the troubled URL in Firefox:
> 
> http://setiboincdata.ssl.berkeley.edu/sah_cgi/file_upload_handler
> 
> And it seems to look at the user-agent and output a special
> message if you're using a web browser.
> 
> Maybe squid is changing some headers that setiboinc needs ...

If you send me access.log details with "log_mime_hdrs on" from the
actual use of this server (where the 100 problem was seen) then I can
easily investigate if this is a broken web server, but I pretty much
suspect it is broken.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Always TCP_REFRESH_MISS with www.heise.de

2006-02-24 Thread Chris Robertson

Martin Schröder wrote:

> On 2006-02-24 23:58:01 +0100, Martin Schröder wrote:
>> setup is a default squid as transparent proxy on OpenBSD. It
>> works for most hosts, but for www.heise.de it nearly always does
>> a TCP_REFRESH_MISS/200 like this:
From http://www.squid-cache.org/Doc/FAQ/FAQ-6.html#ss6.7...

TCP_REFRESH_MISS:
The requested object was cached but STALE. The IMS query returned the 
new content.



>> 1140821829.506233 192.168.17.2 TCP_REFRESH_MISS/200 466 GET
>> http://www.heise.de/tp/r4/icons/frame/eol.gif - DIRECT/193.99.144.85 image/gif
>>
>> Last-Modified: Tue, 09 Nov 2004 10:37:18 GMT
>> Mozilla shows expiry as 13.04.2006 06:50:54
>>
>> Any idea why this is not cached?



> More info from LiveHeaders:
> ---
> http://www.heise.de/icons/ho/heise.gif
>
> GET /icons/ho/heise.gif HTTP/1.1
> Host: www.heise.de
> User-Agent: Mozilla/5.0 (X11; U; Linux i686; de-AT; rv:1.7.3) Gecko/20040913
> Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
> Accept-Language: de-DE,de;q=0.8,en-GB;q=0.5,en;q=0.3
> Accept-Encoding: gzip,deflate
> Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
> Keep-Alive: 300
> Connection: keep-alive
> Cookie: RMID=8666070e3dd6df80; personality=kind&p&uid&8517&un&Martin%20Schr%F6der&md5&e8b552aa6f099fa3c4d5ff38cb3fb80e
> If-Modified-Since: Mon, 26 Dec 2005 10:07:01 GMT
> If-None-Match: "13a298-e68-43afc0c5"
> Cache-Control: max-age=0

Your browser is requesting fresh content (max-age=0).  Try requesting 
the content (without hitting reload) from another computer and see if 
your results are any different.



> HTTP/1.0 200 OK
> Date: Fri, 24 Feb 2006 23:35:54 GMT
> Server: Apache/1.3.34
> Cache-Control: max-age=2592000
> Expires: Sun, 26 Mar 2006 23:35:54 GMT
> Last-Modified: Mon, 26 Dec 2005 10:07:01 GMT
> Etag: "2b6a83-e68-43afc0c5"
> Accept-Ranges: bytes
> Content-Length: 3688
> Content-Type: image/gif
> X-Cache: MISS from gryphon.oneiros.de
> Connection: keep-alive
> ---

The server "chose" to return a 200 instead of a 304 (which would have 
shown up as a TCP_REFRESH_HIT).



> Thanks in advance
>Martin


Chris
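A side note on the captured headers above: the request carries If-None-Match "13a298-e68-43afc0c5" while the server answers with Etag "2b6a83-e68-43afc0c5", so the validators do not match and a 304 would be impossible regardless of max-age=0. A minimal sketch of the revalidation decision an origin server makes (hypothetical code, not Squid's or Apache's; the resource metadata is copied from the trace):

```python
from email.utils import parsedate_to_datetime

# Metadata of the served resource, copied from the response in the trace.
RESOURCE = {
    "etag": '"2b6a83-e68-43afc0c5"',
    "last_modified": "Mon, 26 Dec 2005 10:07:01 GMT",
}

def revalidate(request_headers):
    """Return 304 if the client's validators still match, else 200."""
    inm = request_headers.get("If-None-Match")
    if inm is not None:
        # ETag comparison takes precedence over If-Modified-Since.
        return 304 if inm == RESOURCE["etag"] else 200
    ims = request_headers.get("If-Modified-Since")
    if ims is not None and parsedate_to_datetime(ims) >= parsedate_to_datetime(RESOURCE["last_modified"]):
        return 304
    return 200
```

With the trace's mismatched If-None-Match, revalidate() returns 200, matching the TCP_REFRESH_MISS/200 seen in the log.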


Re: [squid-users] no auth for one domain?

2006-02-24 Thread Terry Dobbs
The dstdomain workaround works perfectly. I had a training site users needed
to access that contained WMPlayer streams, and users couldn't hear the
background speech and would get prompted for the userid/passwd.


I did the following. First, add an ACL for the domain:
acl NTLM_Bypass dstdomain foobar.com

Then allow the domain access before the authorized users:
http_access allow NTLM_Bypass
http_access allow AuthorizedUsers
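The ordering matters because Squid evaluates http_access rules top-down and the first matching rule wins. A toy model of that evaluation (not Squid's actual code; the rule names are taken from the message, and note that in real Squid an auth ACL triggers a 407 challenge rather than a plain boolean test):

```python
def http_access(rules, request):
    """Top-down, first-match evaluation, as Squid does for http_access.
    rules: list of (action, predicate) pairs."""
    for action, matches in rules:
        if matches(request):
            return action
    # Squid's real default is the opposite of the last rule's action;
    # a plain deny is assumed here for simplicity.
    return "deny"

rules = [
    ("allow", lambda r: r["host"].endswith("foobar.com")),  # NTLM_Bypass
    ("allow", lambda r: r.get("authenticated", False)),     # AuthorizedUsers
]
```

A request for training.foobar.com is allowed by the first rule before authentication is ever consulted, which is exactly why the bypass line must come first.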


- Original Message - 
From: "nairb rotsak" <[EMAIL PROTECTED]>

To: "Mark Elsen" <[EMAIL PROTECTED]>
Cc: 
Sent: Friday, February 24, 2006 3:57 PM
Subject: Re: [squid-users] no auth for one domain?



We ended up using AD Group policy to not go through
the proxy for that site... not ideal, but just to make
sure I understand the other way to do it

You can put the http_access with the acl before the
http_access allow_ntlm and it should work?

--- Mark Elsen <[EMAIL PROTECTED]> wrote:


> Is it possible to have my ntlm users go around 1
> domain?  We can't seem to get a state web site
> (which uses a weird front end to its client... but
> it ends up on the web) to go through the proxy.
> When we sniff the traffic locally, it is popping up
> a 407, but there isn't any way to log in.
>
> I tried to put an acl and http_access higher in
> the list in the .conf, but that didn't seem to matter?
>

It would have been more productive to show the line which you put
for that domain in squid.conf; offhand, it should probably
resemble something like this :

acl ntlm_go_around dstdomain name-excluded-domain
...

http_access allow ntlm_go_around
http_access allow ntlm_users (provided proxy
AUTH ACL is named 'ntlm_users')

M.




__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around
http://mail.yahoo.com


--
No virus found in this incoming message.
Checked by AVG Free Edition.
Version: 7.1.375 / Virus Database: 268.1.0/269 - Release Date: 2/24/2006









Re: [squid-users] Always TCP_REFRESH_MISS with www.heise.de

2006-02-24 Thread Martin Schröder
On 2006-02-25 00:30:29 +0100, Mark Elsen wrote:
>  It should do according to :
> 
>   
> http://www.ircache.net/cgi-bin/cacheability.py?query=http%3A%2F%2Fwww.heise.de%2Ftp%2Fr4%2Ficons%2Fframe%2Feol.gif&descend=on
> 
>   Does this work when the browser is set to use the proxy directly,
> through proxy settings?

No, no changes.

Also:

1140821386.945225 192.168.17.2 TCP_REFRESH_HIT/304 179 GET 
http://www.heise.de/tp/r4/icons/frame/eul.gif - DIRECT/193.99.144.85 -
1140821387.027241 192.168.17.2 TCP_REFRESH_MISS/200 471 GET 
http://www.heise.de/tp/r4/icons/frame/eur.gif - DIRECT/193.99.144.85 image/gif

http://www.ircache.net/cgi-bin/cacheability.py?query=http%3A%2F%2Fwww.heise.de%2Ftp%2Fr4%2Ficons%2Fframe%2Feul.gif&descend=on
http://www.ircache.net/cgi-bin/cacheability.py?query=http%3A%2F%2Fwww.heise.de%2Ftp%2Fr4%2Ficons%2Fframe%2Feur.gif&descend=on

Puzzling.

Try it yourself: http://www.heise.de/tp/r4/artikel/22/22128/1.html

Best
Martin
-- 
http://www.tm.oneiros.de


Re: [squid-users] Always TCP_REFRESH_MISS with www.heise.de

2006-02-24 Thread Martin Schröder
On 2006-02-24 23:58:01 +0100, Martin Schröder wrote:
> setup is a default squid as transparent proxy on OpenBSD. It
> works for most hosts, but for www.heise.de it nearly always does
> a TCP_REFRESH_MISS/200 like this:
> 
> 1140821829.506233 192.168.17.2 TCP_REFRESH_MISS/200 466 GET 
> http://www.heise.de/tp/r4/icons/frame/eol.gif - DIRECT/193.99.144.85 image/gif
> 
> Last-Modified: Tue, 09 Nov 2004 10:37:18 GMT
> Mozilla shows expiry as 13.04.2006 06:50:54
> 
> Any idea why this is not cached?

More info from LiveHeaders:
---
http://www.heise.de/icons/ho/heise.gif

GET /icons/ho/heise.gif HTTP/1.1
Host: www.heise.de
User-Agent: Mozilla/5.0 (X11; U; Linux i686; de-AT; rv:1.7.3) Gecko/20040913
Accept: 
text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Accept-Language: de-DE,de;q=0.8,en-GB;q=0.5,en;q=0.3
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Cookie: RMID=8666070e3dd6df80; 
personality=kind&p&uid&8517&un&Martin%20Schr%F6der&md5&e8b552aa6f099fa3c4d5ff38cb3fb80e
If-Modified-Since: Mon, 26 Dec 2005 10:07:01 GMT
If-None-Match: "13a298-e68-43afc0c5"
Cache-Control: max-age=0

HTTP/1.0 200 OK
Date: Fri, 24 Feb 2006 23:35:54 GMT
Server: Apache/1.3.34
Cache-Control: max-age=2592000
Expires: Sun, 26 Mar 2006 23:35:54 GMT
Last-Modified: Mon, 26 Dec 2005 10:07:01 GMT
Etag: "2b6a83-e68-43afc0c5"
Accept-Ranges: bytes
Content-Length: 3688
Content-Type: image/gif
X-Cache: MISS from gryphon.oneiros.de
Connection: keep-alive
---

Thanks in advance
Martin
-- 
http://www.tm.oneiros.de


Re: [squid-users] squid and swf files

2006-02-24 Thread Mark Elsen
> I've been getting reports of problems with squid and swf files.  After
> doing some testing, I found that a link like
> http://my.squid.cache/directory/something.swf would work fine in Mozilla
> but not in Internet Explorer - IE says something about downloading in
> the status bar and then hangs for a long while.  I researched this a
> bit, and found reports that this issue can be fixed on Apache by
> sticking "AddType application/x-shockwave-flash .swf" in the conf file.
>
> I noticed that squid/etc/mime.conf has the following line:
>
> \.swf$ application/x-shockwave-flash anthony-unknown.gif - image
> +download
>
> But then I read somewhere else that mime.conf only applies to ftp/gopher
> and other non-http traffic...so,
>
>

   Where is somewhere ?


  Since my name is nobody :
 --

 From squid.conf.default :

#  TAG: mime_table
#   Pathname to Squid's MIME table. You shouldn't need to change
#   this, but the default file contains examples and formatting
#   information if you do.
#
#Default:
# mime_table /etc/squid/mime.conf


So it's highly unlikely that Squid does not use this info
for 'http' operations.
Are you using the default setting for this value, and/or
is the specified file readable by the squid effective user?

M.


Re: [squid-users] Always TCP_REFRESH_MISS with www.heise.de

2006-02-24 Thread Mark Elsen
> Hi,
> setup is a default squid as transparent proxy on OpenBSD. It
> works for most hosts, but for www.heise.de it nearly always does
> a TCP_REFRESH_MISS/200 like this:
>
> 1140821829.506233 192.168.17.2 TCP_REFRESH_MISS/200 466 GET 
> http://www.heise.de/tp/r4/icons/frame/eol.gif - DIRECT/193.99.144.85 image/gif
>
> Last-Modified: Tue, 09 Nov 2004 10:37:18 GMT
> Mozilla shows expiry as 13.04.2006 06:50:54
>
> Any idea why this is not cached?
>

 It should do according to :

  
http://www.ircache.net/cgi-bin/cacheability.py?query=http%3A%2F%2Fwww.heise.de%2Ftp%2Fr4%2Ficons%2Fframe%2Feol.gif&descend=on

  Does this work when the browser is set to use the proxy directly,
through proxy settings?

M.


[squid-users] Always TCP_REFRESH_MISS with www.heise.de

2006-02-24 Thread Martin Schröder
Hi,
setup is a default squid as transparent proxy on OpenBSD. It
works for most hosts, but for www.heise.de it nearly always does
a TCP_REFRESH_MISS/200 like this:

1140821829.506233 192.168.17.2 TCP_REFRESH_MISS/200 466 GET 
http://www.heise.de/tp/r4/icons/frame/eol.gif - DIRECT/193.99.144.85 image/gif

Last-Modified: Tue, 09 Nov 2004 10:37:18 GMT
Mozilla shows expiry as 13.04.2006 06:50:54

Any idea why this is not cached?

Thanks in advance
Martin
-- 
http://www.tm.oneiros.de


[squid-users] squid and swf files

2006-02-24 Thread Gregori Parker
I've been getting reports of problems with squid and swf files.  After
doing some testing, I found that a link like
http://my.squid.cache/directory/something.swf would work fine in Mozilla
but not in Internet Explorer - IE says something about downloading in
the status bar and then hangs for a long while.  I researched this a
bit, and found reports that this issue can be fixed on Apache by
sticking "AddType application/x-shockwave-flash .swf" in the conf file.

I noticed that squid/etc/mime.conf has the following line:

\.swf$ application/x-shockwave-flash anthony-unknown.gif - image
+download

But then I read somewhere else that mime.conf only applies to ftp/gopher
and other non-http traffic...so,

Is there something I can do to make squid handle .swf files consistently
between browsers?
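For context, the mime.conf line quoted above pairs a regular expression with a content type (plus an icon and transfer options). A rough illustration of how such a table could be consulted, assuming first-match regex lookup (Squid's actual matcher, and its restriction to non-HTTP protocols, may differ):

```python
import re

# First-match table, modeled on mime.conf's "regex content-type" columns.
MIME_TABLE = [
    (r"\.swf$", "application/x-shockwave-flash"),
    (r"\.gif$", "image/gif"),
]

def lookup(path, default="application/octet-stream"):
    """Return the content type of the first pattern matching the path."""
    for pattern, ctype in MIME_TABLE:
        if re.search(pattern, path, re.IGNORECASE):
            return ctype
    return default
```

Under this model, /directory/something.swf maps to application/x-shockwave-flash, and anything unmatched falls back to the default type.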





Re: [squid-users] Problem with intercept squid and boinc

2006-02-24 Thread Oliver Schulze L.

I have visited the troubled URL in Firefox:

http://setiboincdata.ssl.berkeley.edu/sah_cgi/file_upload_handler

And it seems to look at the user-agent and output a special
message if you're using a web browser.

Maybe squid is changing some headers that setiboinc needs ...

Will do some more tests.

Tks
Oliver



Henrik Nordstrom wrote:

> Wed 2006-02-22 at 10:16 -0300, Oliver Schulze L. wrote:
>
>> and in the problematic squid server I see:
>> 1140566460.404   2060 192.168.2.90 TCP_MISS/100 123 POST
>> http://setiboincdata.ssl.berkeley.edu/sah_cgi/file_upload_handler -
>> DIRECT/66.28.250.125 -
>>
>> What does TCP_MISS/100 mean? As I see, the correct value should be
>> TCP_MISS/200
>
> Correct. You should never see a 100 response code in Squid. This
> indicates there is something upstream which malfunctions and sends a 100
> Continue to your Squid even if the HTTP standard forbids this. Squid is
> HTTP/1.0, and 100 Continue requires HTTP/1.1.
>
> Something upstream ranges from
>
>   Parent proxy
>   Another intercepting proxy
>   The origin server
>
> Regards
> Henrik


--
Oliver Schulze L.
<[EMAIL PROTECTED]>
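Henrik's point (a 1xx code should never appear in Squid's log) can be checked mechanically when scanning access.log. A small, hypothetical helper for the native log's result field:

```python
def parse_result(field):
    """Split Squid's native-log result field, e.g. "TCP_MISS/100"."""
    tag, _, status = field.partition("/")
    return tag, int(status)

def is_suspect(field):
    """Flag informational (1xx) statuses, which an HTTP/1.0 proxy
    such as Squid 2.5 should never log."""
    _, status = parse_result(field)
    return 100 <= status < 200
```

Running is_suspect over the result column would single out the TCP_MISS/100 entries discussed above while leaving TCP_MISS/200 alone.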



[squid-users] FW: WCCP: Web Cache ID 0.0.0.0

2006-02-24 Thread Shoebottom, Bryan
Ryan,

I ended up opening a ticket with Cisco regarding the issue, and it is a bug in
WCCPv1: if you do a "show ip wccp web-cache view" you will see the IPs of your
cache(s), although "show ip wccp web-cache detail" will show 0.0.0.0 for any
connected cache.  This will not be fixed; their solution is to use WCCPv2.
Keep in mind that there is no performance issue here, it is simply cosmetic.

I posted this to the group just in case anyone else is looking for more info on 
WCCP and squid.

Thanks,
 Bryan Shoebottom


From: Ryan Sumida [mailto:[EMAIL PROTECTED] 
Sent: February 24, 2006 3:35 PM
To: Shoebottom, Bryan
Subject: WCCP: Web Cache ID 0.0.0.0


Hi Bryan, 
I read your posts on the Squid-Users list and was wondering if you fixed the 
problem with WCCP web cache IP showing 0.0.0.0.  I'm having the exact same 
problems as you posted with a very similar setup.  I've been stuck with this 
problem for almost 2 weeks now and it's driving me nuts.  =[  Any advice would 
help. 

Thank you, 

Ryan Sumida
Network Engineer, Network Services
Information Technology Services
California State University, Long Beach


Re: [squid-users] no auth for one domain?

2006-02-24 Thread nairb rotsak
We ended up using AD Group policy to not go through
the proxy for that site... not ideal, but just to make
sure I understand the other way to do it

You can put the http_access with the acl before the
http_access allow_ntlm and it should work?

--- Mark Elsen <[EMAIL PROTECTED]> wrote:

> > Is it possible to have my ntlm users go around 1
> > domain?  We can't seem to get a state web site
> > (which uses a weird front end to its client... but
> > it ends up on the web) to go through the proxy.
> > When we sniff the traffic locally, it is popping up
> > a 407, but there isn't any way to log in.
> >
> > I tried to put an acl and http_access higher in
> > the list in the .conf, but that didn't seem to matter?
> >
> 
> It would have been more productive to show the line which you put
> for that domain in squid.conf; offhand, it should probably
> resemble something like this :
> 
> acl ntlm_go_around dstdomain name-excluded-domain
> ...
> 
> http_access allow ntlm_go_around
> http_access allow ntlm_users (provided proxy
> AUTH ACL is named 'ntlm_users')
> 
> M.
> 




Re: [squid-users] low squid performance?

2006-02-24 Thread Tomasz Kolaj
Hello,

I found in archive:
http://www.squid-cache.org/mail-archive/squid-dev/200212/0119.html

How is it possible to get 2833 requests/second on a 2xP3 1.4GHz box? Is it true?
My result is poor in comparison to his ;) (my max is 135 requests/second at 95%
processor usage with logging turned off).

Regards,
-- 
Tomasz


Re: [squid-users] reading logs

2006-02-24 Thread Christoph Haas
On Friday 24 February 2006 18:24, Tomas Palfi wrote:
> That's a very good point, some people are faster than the others by
> nature, and some use more paper than the others :)  But it still leaves
> me with the fact of how do I determine from the logs how long per day a
> person spent browsing the net?!

I need to repeat myself: you can't! Unless you install cameras at the desks
and monitor what the users do, you can only make assumptions.

That's similar to the question "web marketing" people ask: how much time do
users spend on our web site? You just can't know. There is no "login" and
"logoff" on web sites. And web sites that actually contain a "logoff"
button have it as a fake, or perhaps to clear cookie session information
for your own security. But that doesn't mean that you will know what a
user does. HTTP is stateless. A user requests a document. Squid delivers
the document. Job done.

Clear now?
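(That said, if one accepts an arbitrary assumption, a rough estimate is possible: treat any gap longer than some idle cutoff as the end of a "session" and sum session lengths per user. This is purely a heuristic sketch, not a Squid feature, and the cutoff value is invented:

```python
from collections import defaultdict

IDLE_GAP = 300  # seconds of silence that end a "session"; arbitrary choice

def browsing_seconds(entries):
    """entries: iterable of (unix_timestamp, user) pairs from access.log.
    Returns an estimated total browsing time per user, in seconds."""
    per_user = defaultdict(list)
    for ts, user in entries:
        per_user[user].append(ts)
    totals = {}
    for user, stamps in per_user.items():
        stamps.sort()
        total, start, prev = 0.0, stamps[0], stamps[0]
        for ts in stamps[1:]:
            if ts - prev > IDLE_GAP:   # idle gap: close the session
                total += prev - start
                start = ts
            prev = ts
        totals[user] = total + (prev - start)
    return totals
```

The number it produces is only as good as the idle cutoff, which is exactly the point above: it is an assumption, not a measurement.)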

 Christoph
-- 
~
~
".signature" [Modified] 1 line --100%--1,48 All


Re: [squid-users] Updating Block Lists

2006-02-24 Thread Terry Dobbs
From my experience you have to reload squid to get it to recognize any
changes...

squid -k reconfigure never worked for me, but I use the command
/etc/init.d/squid reload and it seems to work as desired.


- Original Message - 
From: "Joseph Zappacosta" <[EMAIL PROTECTED]>

To: 
Sent: Friday, February 24, 2006 11:11 AM
Subject: [squid-users] Updating Block Lists



Hello,
I have come into a situation where we are using squid to block adult web
sites.  I searched and searched, and none of the previous posts seem to apply
to my situation.  SquidGuard is not installed, but there exists a list in
the /etc directory called xxxsites.  I tested it, and these are the sites
that are being blocked.  However, when I add sites to the list, they are
not then blocked.  Do I need some type of command to reinstate the list?


Thanks,

--
Joseph Zappacosta
Reading Public Library (Reading Consortium)
100 South Fifth Street
Reading, Pennsylvania 19602
Voice : 610-655-6350
FAX : 610-478-9035





[squid-users] cachemgr.cgi

2006-02-24 Thread Gregori Parker
I've followed all the FAQs and searched all the various threads, but I
can't get cachemgr working.  I was happy with SNMP for a while, but now I
realize I need to examine some metrics that are only available in
cachemgr.

I don't have apache on the squid servers, so I've been trying to move
the cachemgr.cgi file to another server that does have apache.  I set up
all the aliases and chmoded the thing to 755, but I still get 500
errors.

Error message:
Premature end of script headers: cachemgr.cgi

Thanks in advance for any advice!




RE: [squid-users] reading logs

2006-02-24 Thread Tomas Palfi
Hi Christoph,

That's a very good point, some people are faster than the others by
nature, and some use more paper than the others :)  But it still leaves
me with the fact of how do I determine from the logs how long per day a
person spent browsing the net?!

I am using an external authenticator to pull user names and passwords from AD,
which inevitably get logged in access.log.  My analyser goes through and
builds a database indexed by user and date, containing what each user
visited during that day.  But how much time did they spend doing it?  Have
they spent 2 or 3 hours browsing some naughty sites, or 10 mins on something
business related?

If you know how to add up the total browsing time from the access.log,
please let me know.

Thanks

Tomas


--
tp




-Original Message-
From: Christoph Haas [mailto:[EMAIL PROTECTED] 
Sent: 24 February 2006 13:50
To: squid-users@squid-cache.org
Subject: Re: [squid-users] reading logs

On Friday 24 February 2006 12:53, Tomas Palfi wrote:
> From the access.log file, which field or parameter can I use to
> determine how long the users stayed online or browsed the pages?

HTTP is stateless. Squid can record how long it took to deliver the page.
But then the page stays on the user's screen. How should Squid be able to
know what the user does then, such as how slowly or quickly he reads or how
much time he spent on the toilet? He may even have logged off, gone outside
and been run over by a truck. You won't know.

> PRIVACY & CONFIDENTIALITY



Kindly
 Christoph
-- 
~
~
".signature" [Modified] 1 line --100%--1,48 All

___

This e-mail has been scanned by Messagelabs
___

PRIVACY & CONFIDENTIALITY

This e-mail is private and confidential.  If you have, or suspect you have 
received this message in error please notify the sender as soon as possible and 
remove from your system.  You may not copy, distribute or take any action in 
reliance on it. Thank you for your co-operation.

Please note that whilst best efforts are made, neither the company nor the 
sender accepts any responsibility for viruses and it is your responsibility to 
scan the email and attachments (if any).

This e-mail has been automatically scanned for viruses by MessageLabs.


Re: [squid-users] 0 means no limit??

2006-02-24 Thread Henrik Nordstrom
Fri 2006-02-24 at 09:56 +0530, mohinder garg wrote:

> when you say less than 1 sec. Isn't it exactly zero?

No, it's up to a second. Squid only checks connection-related timeouts
once per second, so all timeouts are +/- 1 second.
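A small simulation of such once-per-second scanning (illustrative only, not Squid's actual event loop) shows why a timeout can fire up to a second late:

```python
def fire_times(deadlines, horizon):
    """Simulate a checker that wakes at t = 1, 2, ..., horizon and fires
    every deadline that has already passed. Returns {deadline: tick}."""
    fired = {}
    pending = sorted(deadlines)
    for tick in range(1, horizon + 1):
        # Fire everything whose deadline is at or before this tick.
        while pending and pending[0] <= tick:
            fired[pending.pop(0)] = tick
        if not pending:
            break
    return fired
```

Every deadline fires at the next whole-second tick, so it is never early and always less than one second late, matching the "+/- 1 second" behaviour described above.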

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


[squid-users] Updating Block Lists

2006-02-24 Thread Joseph Zappacosta

Hello,
I have come into a situation where we are using squid to block adult
web sites.  I searched and searched, and none of the previous posts seem to
apply to my situation.  SquidGuard is not installed, but there exists a
list in the /etc directory called xxxsites.  I tested it, and these are
the sites that are being blocked.  However, when I add sites to the
list, they are not then blocked.  Do I need some type of command to
reinstate the list?


Thanks,

--
Joseph Zappacosta
Reading Public Library (Reading Consortium)
100 South Fifth Street
Reading, Pennsylvania 19602
Voice : 610-655-6350
FAX : 610-478-9035 



RE: [squid-users] Filtering log data

2006-02-24 Thread David
Hi again,

Henrik, thank you for your swift reply. The customlog patch will certainly
be useful.

I have one question regarding the "logformat" line. In the examples I've
seen, some things seem to be quoted, others not. What is the rule for that?

Also, is it possible to include a condition so only requests of certain mime
types are logged? If so, how is it done? I checked the documentation and
didn't find anything. Is there a separate setting in the squid.conf maybe?

Thank you,

/David



> Hello,
> 
> I am using Squid to collect log data as part of a user study.
> 
> My problem is that logging headers ("log_mime_hdrs") together with the 
> "regular" logging that takes place generates huge amounts of data. I
> am trying to minimize that load.
> 
> I am exclusively interested in logging:
> 
> - document requests (i.e not image mime types)
> - one specific line in the http request header
> 
> Are there some settings which would allow me to filter out these values?

http://devel.squid-cache.org/old_projects.html#customlog

Regards
Henrik



Re: [squid-users] reading logs

2006-02-24 Thread Christoph Haas
On Friday 24 February 2006 12:53, Tomas Palfi wrote:
> From the access.log file, which field or parameter can I use to
> determine how long the users stayed online or browsed the pages?

HTTP is stateless. Squid can record how long it took to deliver the page.
But then the page stays on the user's screen. How should Squid be able to
know what the user does then, such as how slowly or quickly he reads or how
much time he spent on the toilet? He may even have logged off, gone outside
and been run over by a truck. You won't know.

> PRIVACY & CONFIDENTIALITY



Kindly
 Christoph
-- 
~
~
".signature" [Modified] 1 line --100%--1,48 All


Re: [squid-users] cache log error

2006-02-24 Thread Mark Elsen
> I am experiencing slow browsing and found these errors in cache.log:
>
>
> 2006/02/24 04:35:02| parseHttpRequest: Unsupported method
> 'recipientid=104&sessionid=6314
>
> '
> 2006/02/24 04:35:02| clientReadRequest: FD 28 Invalid Request
> 2006/02/24 04:35:02| parseHttpRequest: Unsupported method
> 'recipientid=104&sessionid=6314
>
> '
> 2006/02/24 04:35:02| clientReadRequest: FD 36 Invalid Request
>
>
   You possibly have :

 - buggy browsers
 - malware trying to escape to the Net using http (more likely)

 Identify these clients by looking at the IP addresses logged in access.log

 M.


Re: [squid-users] reading logs

2006-02-24 Thread Mark Elsen
> Hi all,
>
> From the access.log file, which field or from what parameter can I
> determine how long the users stayed on line or browsed the pages.
>

 - Check out the log analysis tools available at:

http://www.squid-cache.org/Scripts/

Some of them may not do exactly what you want, though many
have closely related features.

M.


Re: [squid-users] Help needed transparent proxy doesnt work

2006-02-24 Thread Mark Elsen
>
> Dear all,
>
> My browser works if I give it the proxy address with port 80 or 3128, but
> doesn't work via default gateway. I gave the following command on the
> command prompt of windows and got the following output.
>
> C:\> telnet 192.168.0.29 80
>
>
>

  - Squid by default does not operate as a transparent proxy, even if, for instance,
your network (and routing) parameters are correct.
For setting up transparent proxying, check the FAQ for numerous examples
covering different cases.
But, before thinking about that, read:
 
http://squidwiki.kinkie.it/SquidFaq/InterceptionProxy?highlight=%28intercept%29#head-1cf13b27d5a6f8c523a4582d38a8cfaaacafb896

first; then take a long walk.

M.


Re: [squid-users] User Authentication webpage examples

2006-02-24 Thread Mark Elsen
>
>
> I'm still in search for an answer to a previous question I need
> answered. I'm using ntlm helpers to authenticate my domain users for
> using the proxy. For people that are not authenticated I want them to be
> redirected to a webpage with a form to be filled out requesting access.
> It would be emailed to the admin for approval. I understand I can use
> the error page directive , but I'm looking to see any examples of what
> some of you are using for this.
>

 - Check out this thread:

  
http://www.squid-cache.org/mail-archive/squid-users/200602/0478.html

 M.


Re: [squid-users] low squid performance?

2006-02-24 Thread Tomasz Kolaj
On Thursday, 23 February 2006 16:17, Matus UHLAR - fantomas wrote:
> On 23.02 14:25, Tomasz Kolaj wrote:
> > On Thursday, 23 February 2006 11:32, you wrote:
> > > On 22.02 23:13, Tomasz Kolaj wrote:
> > > > I observed too low performance. On a 2x 64bit Xeon 2.8GHz, 2GB
> > > > DDR2, 2x WD Raptor box, Squid 2.5.STABLE12 can answer at most 120
> > > > requests/s.  115 r/s means 97-98% usage of the first processor. The second is
> > > > unusable for squid :/. I have two cache_dirs (aufs), one per disk.
> > >
> > > Maybe you have too many ACL's?
> >
> > I pasted my squid.conf in one of my last posts. I have many addresses
> > blocked in the file spywaredomains.txt
>
> sorry - the thread was broken and I didn't see it. (b)lame mailers who
> break threads by not using References: or at least In-Reply-To: headers...

OK, possibly my mistake:
-- cut --
aragorn ~ # cat /etc/squid/squid.conf | grep -v "^#" | tr -s '\n'

http_port 82.160.43.14:3128
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
cache_mem 612 MB
maximum_object_size 8192 KB
maximum_object_size_in_memory 8 KB
cache_replacement_policy heap GDSF
memory_replacement_policy heap GDSF
cache_dir aufs /var/cache/squid/dysk1 3 32 256
cache_dir aufs /var/cache/squid/dysk2 3 32 256
cache_access_log none
cache_store_log none
mime_table /etc/squid/mime.conf
redirect_children 15
request_header_max_size 20 KB
refresh_pattern -i (.*jpg$|.*gif$|.*png$) 0 50% 28800
refresh_pattern -i (.*html$|.*htm|.*shtml|.*php) 0 20% 1440
refresh_pattern .   0   20% 4320
half_closed_clients off
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl our_networks src 82.160.43.0/24 82.160.129.0/24
acl SSL_ports port 443 563
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 901 # SWAT
acl purge method PURGE
acl CONNECT method CONNECT
cache_mgr admin
http_access allow manager localhost
http_access allow manager our_networks
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
acl mGG dstdomain .adserver.gadugadu.pl .adserver.gadu-gadu.pl
redirector_access deny !mGG
redirector_bypass on
redirect_program /home/gg_rewrite
acl spywaredomains dstdomain src "/etc/squid/spywaredomains.txt"
http_access deny spywaredomains
http_access allow our_networks
http_access allow localhost
http_access deny all
http_reply_access allow all
icp_access allow all
cache_mgr [EMAIL PROTECTED]
visible_hostname w3cache.abp.pl
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
dns_testnames onet.pl wp.pl microsoft.com abp.pl
logfile_rotate 10
append_domain .abp.pl
forwarded_for off
log_icp_queries off
cachemgr_passwd [cut] all
buffered_logs on
coredump_dir /var/cache/squid
store_dir_select_algorithm least-load
-- cut --

> > acl spywaredomains dstdomain src "/etc/squid/spywaredomains.txt"
> > http_access deny spywaredomains
> >
> > but when I remove it from the config, squid still generates much processor
> > time.
> > What about epoll? I applied the patch for squid_2.5 for tests.
>
> I don't think that would help you much. Maybe using external redirector
> (SquidGuard?) instead of squid itself would help - it may reside on another
> CPU, while squid it one-CPU-process.

External redirector? But I'm only redirecting a few requests (to the 
gadu-gadu ad server).
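For context, a redirect_program is simply an executable that reads one request per line on stdin (URL, client, ident, method) and writes a replacement URL - or an empty line for "no change" - on stdout. A sketch of what a helper like /home/gg_rewrite might look like; the matched substring and replacement URL here are purely illustrative assumptions, not the actual rewrite logic:

```python
import sys

# Hypothetical replacement target -- stands in for whatever
# /home/gg_rewrite actually returns for ad-server requests.
BLOCK_URL = "http://127.0.0.1/blank.gif"

def rewrite(line):
    # Squid passes: URL ip/fqdn ident method
    parts = line.split()
    if not parts:
        return ""
    url = parts[0]
    if "adserver.gadu" in url:
        return BLOCK_URL
    return ""  # empty reply means "leave the URL unchanged"

def main():
    for line in sys.stdin:
        sys.stdout.write(rewrite(line) + "\n")
        sys.stdout.flush()  # squid waits for one reply per request

if __name__ == "__main__":
    main()
```

With redirector_access limiting which requests reach the helper (as in the config above), a helper this small should add negligible CPU load.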

squid compiled with options:
aragorn ~ # squid -v
Squid Cache: Version 2.5.STABLE12
configure options:  --prefix=/usr --bindir=/usr/bin --exec-prefix=/usr 
--sbindir=/usr/sbin --localstatedir=/var --mandir=/usr/share/man 
--sysconfdir=/etc/squid --libexecdir=/usr/lib/squid 
--enable-auth=basic,digest,ntlm --enable-removal-policies=lru,heap 
--enable-linux-netfilter --enable-truncate --with-pthreads --enable-epoll 
--enable-time-hack --disable-follow-x-forwarded-for 
--host=x86_64-pc-linux-gnu --disable-snmp --enable-ssl --enable-underscores 
--enable-storeio='diskd,coss,aufs,null' --enable-async-io


with flags:
CFLAGS="-march=nocona -O3 -pipe -fomit-frame-pointer -ffast-math 
-funroll-all-loops"
CXXFLAGS="${CFLAGS} -fno-enforce-eh-specs"
LDFLAGS="-Wl,-O1 -Wl,-Bdirect -Wl,-hashvals -Wl,-zdynsort"

Regards,
-- 
Tomasz


[squid-users] User Authentication webpage examples

2006-02-24 Thread Nick Duda


I'm still searching for an answer to a previous question. I'm using the
ntlm helpers to authenticate my domain users for use of the proxy. For
people who are not authenticated, I want them to be redirected to a web
page with a form to fill out requesting access, which would then be
emailed to the admin for approval. I understand I can use the error page
directive, but I'd like to see examples of what some of you are using
for this.

Thanks,
Nick




[squid-users] help on external_acl_type

2006-02-24 Thread Remy Almeida
Hi
My query.sql file
cat /etc/squid/query.sql
USE proxy;
select userid FROM userid;

my squid.conf file
external_acl_type mysql concurrency=20 ttl=5 %LOGIN /etc/squid/query.sql
acl dbauth external mysql
http_access deny dbauth

error in cache.log file

2006/02/24 17:33:23| WARNING: mysql #1 (FD 11) exited
2006/02/24 17:33:23| WARNING: mysql #2 (FD 12) exited
2006/02/24 17:33:23| WARNING: mysql #3 (FD 13) exited
2006/02/24 17:33:23| WARNING: mysql #4 (FD 14) exited
2006/02/24 17:33:23| WARNING: mysql #5 (FD 15) exited
2006/02/24 17:33:23| WARNING: mysql #6 (FD 16) exited
2006/02/24 17:33:23| WARNING: mysql #7 (FD 17) exited
2006/02/24 17:33:23| WARNING: mysql #8 (FD 18) exited
2006/02/24 17:33:23| WARNING: mysql #9 (FD 19) exited
2006/02/24 17:33:23| WARNING: mysql #10 (FD 20) exited
2006/02/24 17:33:23| WARNING: mysql #11 (FD 21) exited
FATAL: Too few mysql processes are running
Squid Cache (Version 2.5.STABLE3): Terminated abnormally.
CPU Usage: 0.740 seconds = 0.190 user + 0.550 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 681
Memory usage for squid via mallinfo():
total space in arena:   16384 KB
Ordinary blocks:16326 KB  4 blks
Small blocks:   0 KB  1 blks
Holding blocks: 0 KB  0 blks
Free Small blocks:  0 KB
Free Ordinary blocks:  58 KB
Total in use:   16326 KB 100%
Total free:58 KB 0%
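For what it's worth, external_acl_type expects its last argument to be an executable helper program, not an SQL file: squid starts the helper and feeds it %LOGIN lines on stdin, so pointing it at a plain .sql file makes every helper exit immediately, which matches the "exited" / "Too few mysql processes" messages above. A minimal sketch of such a helper in Python - the lookup set is a placeholder, and the table/column names in the comment are only guessed from the query.sql shown:

```python
import sys

def handle(line, user_exists):
    # With concurrency=N, squid sends "<channel-id> <login>" and
    # expects "<channel-id> OK" or "<channel-id> ERR" in reply.
    parts = line.strip().split(None, 1)
    if len(parts) != 2:
        return (parts[0] + " ERR") if parts else "ERR"
    channel, login = parts
    return "%s %s" % (channel, "OK" if user_exists(login) else "ERR")

def main():
    # A real helper would query MySQL here, e.g.
    #   SELECT 1 FROM userid WHERE userid = %s
    # (schema guessed from the query.sql above).
    known = {"remy", "admin"}  # placeholder for the DB lookup
    for line in sys.stdin:
        print(handle(line, lambda u: u in known))
        sys.stdout.flush()     # squid needs an immediate reply

if __name__ == "__main__":
    main()
```

Point external_acl_type at a script like this (made executable) instead of the .sql file.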


Can anyone help me?

Thanks & Regards,
Remy Almeida
NIO System Admin
Ph Office: +91-0832-2450421
Cell: 9822586093




[squid-users] Help needed transparent proxy doesnt work

2006-02-24 Thread Muhammad Bilal Ahmad

Dear all,

My browser works if I give it the proxy address with port 80 or 3128, but
it doesn't work via the default gateway. I issued the following command at
the Windows command prompt and got this output:

C:\> telnet 192.168.0.29 80






HTTP/1.0 400 Bad Request
Server: squid/2.5.STABLE5
Mime-Version: 1.0
Date: Fri, 24 Feb 2006 12:05:34 GMT
Content-Type: text/html
Content-Length: 1219
Expires: Fri, 24 Feb 2006 12:05:34 GMT
X-Squid-Error: ERR_INVALID_REQ 0
X-Cache: MISS from proxy
Proxy-Connection: close

ERROR: The requested URL could not be retrieved

While trying to process the request:

 www.yahoo.com

The following error was encountered:

 Invalid Request

Some aspect of the HTTP Request is invalid. Possible problems:

 * Missing or unknown request method
 * Missing URL
 * Missing HTTP Identifier (HTTP/1.0)
 * Request is too large
 * Content-Length missing for POST or PUT requests
 * Illegal character in hostname; underscores are not allowed

Your cache administrator is [EMAIL PROTECTED].

Generated Fri, 24 Feb 2006 12:05:34 GMT by proxy (squid/2.5.STABLE5)

Connection to host lost.

C:\>










If anybody can help me I would be very grateful.


Thanks a lot

Kind Regards
M Bilal Ahmad
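A side note on the test above: a bare `telnet host 80` followed by no complete HTTP request will always produce a 400 Invalid Request, so that output only proves squid is reachable, not that interception works. For Squid 2.5, transparent proxying needs both the httpd_accel options and a redirect rule on the Linux gateway. A sketch, assuming eth0 is the LAN-facing interface and squid listens on 3128 (adjust both to your setup):

```
# squid.conf (Squid 2.5.x interception mode)
http_port 3128
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on

# On the Linux gateway: send LAN port-80 traffic to squid
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
         -j REDIRECT --to-port 3128
```

If squid runs on a separate box from the gateway, a DNAT rule towards the squid host is needed instead of the local REDIRECT.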




[squid-users] reading logs

2006-02-24 Thread Tomas Palfi
Hi all,

From the access.log file, which field or parameter can I use to determine
how long users stayed online or browsed pages?

Thanks

Tomas

--
tp
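The native access.log does not record "time online" directly; it logs one line per request, and the second field is that request's elapsed time in milliseconds. Assuming the default native log format, per-client activity can be approximated by summing that field - a sketch:

```python
import sys
from collections import defaultdict

def sum_elapsed(lines):
    # Native format: timestamp elapsed-ms client code/status bytes ...
    totals = defaultdict(int)
    for line in lines:
        parts = line.split()
        if len(parts) < 3:
            continue
        try:
            totals[parts[2]] += int(parts[1])  # key on client address
        except ValueError:
            continue  # skip malformed lines
    return dict(totals)

if __name__ == "__main__":
    for client, ms in sorted(sum_elapsed(sys.stdin).items()):
        print("%s %.1f s" % (client, ms / 1000.0))
```

Note this sums transfer time, not idle reading time, so it is a lower bound on actual "browsing time"; grouping request timestamps into sessions would give a better estimate.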







[squid-users] cache log error

2006-02-24 Thread ammads
I am experiencing very slow browsing and found these errors in cache.log:


2006/02/24 04:35:02| parseHttpRequest: Unsupported method
'recipientid=104&sessionid=6314

'
2006/02/24 04:35:02| clientReadRequest: FD 28 Invalid Request
2006/02/24 04:35:02| parseHttpRequest: Unsupported method
'recipientid=104&sessionid=6314

'
2006/02/24 04:35:02| clientReadRequest: FD 36 Invalid Request




AW: AW: [squid-users] Squid 2.5.STABLE9 and Kernel 2.6.11 SMP

2006-02-24 Thread Christian Herzberg
Hi,

I didn't, but you are right. I will try.

Thanks
Christian

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Thursday, 23 February 2006 15:51
To: Christian Herzberg
Cc: squid-users@squid-cache.org
Subject: Re: AW: [squid-users] Squid 2.5.STABLE9 and Kernel 2.6.11 SMP


On Tue, 2006-02-21 at 12:42 +0100, Christian Herzberg wrote:
> Hi Bart,
> 
> Squid isn't crashing. Squid is waiting for whatever. You can wait an 
> hour without any response from squid. After a restart of squid 
> everything is working fine. The same squid on a system with kernel 2.4 
> has been working for the last 2 years without any problem.

Have you tried upgrading your Squid?

Not that I know of any "hanging" issues in 2.5.STABLE9, but it's still worth
a try.

Regards
Henrik


Re: [squid-users] Solutions for transparent + proxy_auth?

2006-02-24 Thread Matus UHLAR - fantomas
> > I think educating users (yes, there are 2 different passwords) would be
> > most effective.

On 23.02 10:01, Steve Brown wrote:
> Believe me, I wish I could.  But these are sales people, and as I
> said, some of them aren't very bright.

I do, but I think you understand that educating users is the best option.

> > 1. give users the same password for mail and proxy and probably fetch
> > them from the same source like LDAP (Win2000 Domain).
> 
> Thought about that, but I won't want to have to maintain it.  Its a hassle.

If you can't easily use the same password source for squid as for mail,
then of course skip it.

> > 2. give users SeaMonkey for both browsing and mail, set it up to
> > remember passwords, fill it with proxy and mail password, give users
> > only the master password.
> 
> Multiple users may use the same computer.  We don't want them reading each
> others email, number one, and number two, they would wind up giving out
> someone else's email address as their own.  Like I said, not very bright.

That requires multiple user profiles on those computers. You only have to
set up more accounts on them, which you probably need to do anyway...

> > 3. set up FF (and probably M$IE too) to use proxy on localhost - this
> > way you will avoid interception and its problems and still give users
> > benefit of local proxy server.
> 
I posted earlier about why this won't work. Firefox is too easy to get
around on OS X.

However, I would think of it this way: if they can get around the proxy
setting, they CAN remember more than one password (and present that
argument to the boss).

> > I recommend using encrypted connections to protect your passwords, so
> > you might need SSL patch to squid: http://devel.squid-cache.org/ssl/, at
> > least for 1. and 3.
> 
> Thanks, this was going to be my next question. ;-)

good :) though I'm not sure if this is the right SSL patch for squid...

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
   One OS to rule them all, One OS to find them, 
One OS to bring them all and into darkness bind them 


[squid-users] Is there a linux based technology like riverbed?

2006-02-24 Thread khalid
It's also like Squid, except that it caches all files to save bandwidth
in a point-to-point setup.