Re: [squid-users] looking for testers: google maps/earth/youtube caching

2007-11-26 Thread Tek Bahadur Limbu

Hi Adrian,

Adrian Chadd wrote:

I don't know if people understood my last email about the StoreUrlRewrite
changes I've made to squid-2.HEAD, so I'll just be really clear this time
around.


http://www.squid-cache.org/mail-archive/squid-users/200711/0490.html


I read it and I think I understand your email. At least I understand 
its mission, which is to make non-cacheable stuff get cached!





I've implemented some changes to Squid-2.HEAD which will allow certain stuff
to be cached which couldn't be cached in the past. The first two things I'm going
to try to cement support for are Google Maps/Earth (web only) and YouTube.

So, I'm looking for testers who are willing to run squid-2.HEAD snapshots
and work with me to evaluate and fine-tune my squid extensions to support
this.




Who is interested? Come on, after the amount of "How do you cache youtube?"
questions from the mailing lists and search results hitting the squidproxy
blog over the last few months -some- of you have to be interested.




I'm saying right now that I'm willing to spend the time and effort to work
with people for free to get this stuff tested and debugged. It doesn't benefit
me - I'm not getting paid -at all- to do this.


I am interested. Let me study it in more detail. For the time being, if 
I need help, you will be there, won't you?


Thanking you...






Adrian




--

With best regards and good wishes,

Yours sincerely,

Tek Bahadur Limbu

System Administrator

(TAG/TDG Group)
Jwl Systems Department

Worldlink Communications Pvt. Ltd.

Jawalakhel, Nepal

http://www.wlink.com.np

http://teklimbu.wordpress.com


Re: [squid-users] looking for testers: google maps/earth/youtube caching

2007-11-26 Thread Adrian Chadd
On Mon, Nov 26, 2007, Tek Bahadur Limbu wrote:

 I'm saying right now that I'm willing to spend the time and effort to work
 with people for free to get this stuff tested and debugged. It doesn't 
 benefit
 me - I'm not getting paid -at all- to do this.
 
 I am interested. Let me study it in more detail. For the time being, if 
 I need help, you will be there, won't you?

Sure. Just trial Squid-2.HEAD on your caches first and let me know if
that breaks anything. Once Squid-2.HEAD is stable for you then we'll
be able to do the extra magic to get some maps and youtube caching
going.
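
A minimal sketch of fetching and building a snapshot (the configure
prefix is illustrative; keep whatever options you normally build with):

  wget http://www.squid-cache.org/Versions/v2/HEAD/squid-HEAD.snapshot.tar.gz
  tar xzf squid-HEAD.snapshot.tar.gz
  cd squid-2.HEAD-*
  ./configure --prefix=/usr/local/squid
  make && make install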



Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -


[squid-users] High CPU usage when cache full

2007-11-26 Thread John Moylan
Hi,

I have three memory-only caches set up with 7GB of memory each (the
machines have 12GB of physical memory each). Throughput is fairly high
and this setup works well in reducing the number of requests for
smaller files from my backend storage, with lower latency than a disk
and memory solution. However, the caches on the machines fill up
every 2-3 days and Squid's CPU usage subsequently goes up to 100%
(these are all dual SMP machines and system load average remains
around 0.7). FDs, the number of connections and swap are all fine
when the CPU goes up, so the culprit is more than likely cache
replacement.

I am using heap GDSF as the policy. The maximum size in memory is set
to 96 KB. I am using squid-2.6.STABLE6-4.el5 on Linux 2.6.
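
A sketch of the directives behind a setup like that (squid 2.6 names;
sizes illustrative, and the null cache_dir for a memory-only cache
assumes squid was built with null in --enable-storeio):

  cache_mem 7168 MB
  memory_replacement_policy heap GDSF
  maximum_object_size_in_memory 96 KB
  cache_dir null /tmp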

Is there anything I can do to improve expensive cache replacement
apart from stopping and starting Squid every day?

J


[squid-users] Cgi files are not opening

2007-11-26 Thread Tarak Ranjan

Hi List,
Here is the URL I'm trying to open:

While trying to retrieve the URL: http://www.hp.com/cgi-bin/sbso/exit.cgi?

Here is my log:

1196077494.260    781 192.168.1.210 TCP_MISS/200 1241 GET http://hphqglobal.112.2o7.net/b/ss/hphqglobal,hphqna,hphqsmbrollup,hphqsmbmktg/1/G.9p2/s7231792547102? - DIRECT/216.52.17.134 image/gif
1196077494.533   3140 192.168.1.231 TCP_MISS/200 66735 GET http://www.glakes.org/images/chart3-2007.jpg - DIRECT/206.117.182.197 image/jpeg
1196077494.877    568 192.168.1.34 TCP_MISS/200 388 POST http://mail.google.com/mail/channel/bind? - DIRECT/209.85.137.19 text/html
1196077495.464    288 192.168.1.109 TCP_MISS/200 723 GET http://mail.liqwidkrystal.com/? - DIRECT/203.92.57.226 application/x-javascript
1196077495.497    576 192.168.1.34 TCP_MISS/200 388 POST http://mail.google.com/mail/channel/bind? - DIRECT/209.85.137.83 text/html
1196077495.661   4945 192.168.1.231 TCP_MISS/200 58993 GET http://www.glakes.org/images/placements-top.jpg - DIRECT/206.117.182.197 image/jpeg
1196077495.954    953 192.168.1.210 TCP_MISS/200 3386 GET http://sales.liveperson.net/hc/43836137/? - DIRECT/130.94.77.118 application/x-javascript
1196077496.336    561 192.168.1.34 TCP_MISS/200 388 POST http://mail.google.com/mail/channel/bind? - DIRECT/209.85.137.18 text/html
1196077497.546   1792 192.168.1.210 TCP_DENIED/403 1472 GET http://www.hp.com/cgi-bin/sbso/exit.cgi? - DIRECT/15.216.110.22 text/html


Please let me know what has to be allowed to access this URL.
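
(One hedged guess: a deny rule matching query/CGI URLs placed above the
allow rules would produce exactly this TCP_DENIED/403. The ACL below is
hypothetical, not taken from the actual config:

  acl cgi_urls url_regex -i cgi-bin \?
  http_access deny cgi_urls

If something like that is present, an allow for the site placed before
it would let the URL through.)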

--


Tarak

Online Learning|Certifications|Learning Solutions :
www.liqwidkrystal.com




Re: [squid-users] High CPU usage when cache full

2007-11-26 Thread Tek Bahadur Limbu

Hi John,

John Moylan wrote:

Hi,

I have three memory only caches set up 7GB of memory each (the
machines have 12GB of physical memory each). Throughput is fairly high
and this setup works well in reducing the number of requests for
smaller files from my backend storage with lower latency that a disk
and mem. solution. 


Do you have statistics regarding fetching from memory and disk? How much 
is the performance gain when using a memory cache only?



However, the caches on the machines fill up

every 2-3 days and Squid's CPU usage subsequently goes up to 100%
(These are all dual SMP machines and system load average remains
around 0.7). FD's, the number of connections and swap are all fine
when the CPU goes up so the culprit is more than likely to be cache
replacement.

I am using heap GDSF as the policy. The maximum size in memory is set
to 96 KB.


Have you tried the LFUDA or the default LRU memory replacement policies?

 I am using squid-2.6.STABLE6-4.el5 on Linux 2.6.

Try upgrading to the latest version of squid.

http://www.squid-cache.org/Versions/v2/2.6/squid-2.6.STABLE16.tar.gz

It probably contains some improvements over STABLE6.



Is there anything I can do to improve expensive cache replacement
apart from stopping and starting Squid every day?


By the way, which Linux distro are you using?

Can you post the output of squidclient mgr:info or the relevant parts 
of your squid.conf?


Thanking you...




J






--

With best regards and good wishes,

Yours sincerely,

Tek Bahadur Limbu

System Administrator

(TAG/TDG Group)
Jwl Systems Department

Worldlink Communications Pvt. Ltd.

Jawalakhel, Nepal

http://www.wlink.com.np

http://teklimbu.wordpress.com


Re: [squid-users] question about filesystems and directories for cache.

2007-11-26 Thread Matias Lopez Bergero
Tony Dodd wrote:
 Matias Lopez Bergero wrote:
 Hello,

 snip

 I've been reading the wiki and the mailing list to learn which is the
 best filesystem to use. For now I have chosen ext3 based on comments on
 the list; I have also passed the nodev,nosuid,noexec,noatime flags in
 fstab in order to get better security and faster performance.

 snip

 Hi Matias,

 I'd personally recommend against ext3, and point you towards reiserfs.
 ext3 is horribly slow for many small files being read/written at the
 same time.  I'd also recommend maximizing your disk throughput, by
 splitting the raid, and having a cache-dir on each disk; though of
 course, you'll lose redundancy in the event of a disk failure.

 I wrote a howto that revolves around maximizing squid performance,
 take a look at it, you may find it helpful:
 http://blog.last.fm/2007/08/30/squid-optimization-guide
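
 A sketch of the split-RAID layout described above (squid.conf; mount
 points and sizes are illustrative):

   cache_dir aufs /cache1 50000 16 256
   cache_dir aufs /cache2 50000 16 256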

Thank you
I'll try that!

Regards,
Matías.


Re: [squid-users] looking for testers: google maps/earth/youtube caching

2007-11-26 Thread Andreas Pettersson

Adrian Chadd wrote:

I don't know if people understood my last email about the StoreUrlRewrite
changes I've made to squid-2.HEAD, so I'll just be really clear this time
around.
  


I must have missed that one..


So, I'm looking for testers who are willing to run squid-2.HEAD snapshots
and work with me to evaluate and fine-tune my squid extensions to support
this.
  


Count me in.
Do I need to cvs it or are we talking about the daily auto-generated 
release tar.gz-file from here?

http://www.squid-cache.org/Versions/v2/2.6/


--
Andreas




Re: [squid-users] looking for testers: google maps/earth/youtube caching

2007-11-26 Thread Adrian Chadd
On Mon, Nov 26, 2007, Andreas Pettersson wrote:

 Do I need to cvs it or are we talking about the daily auto-generated 
 release tar.gz-file from here?
 http://www.squid-cache.org/Versions/v2/2.6/

http://www.squid-cache.org/Versions/v2/HEAD/squid-HEAD.snapshot.tar.gz -
this isn't Squid-2.6, this is the development Squid-2 branch.



adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -


[squid-users] video.nationalgeographic.com

2007-11-26 Thread dhottinger
I seem to be unable to access any videos on
video.nationalgeographic.com when behind my transparent proxy.  I'm
running squid version 2.5.STABLE14 and yes, I know it is outdated, but
I also use SmartFilter from Secure Computing, and the version I am
using isn't compatible with any newer versions of squid.  When
accessing the site, I get entries like this in my access log:

1196093274.044     38 10.40.20.20 TCP_REFRESH_HIT/200 40922 GET  
http://video.nationalgeographic.com/video/player/media/us-astronomy-apvin/us-astronomy-apvin_150x100.jpg - DIRECT/207.24.89.108 image/jpeg ALLOW Global Allow List [Accept: */*\r\nReferer: http://video.nationalgeographic.com/video/player/flash/society1_0.swf\r\nx-flash-version: 9,0,47,0\r\nCache-Control: no-transform\r\nUA-CPU: x86\r\nAccept-Encoding: gzip, deflate\r\nUser-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 2.0.50727; .NET CLR 1.1.4322; .NET CLR 3.0.04506.30)\r\nHost: video.nationalgeographic.com\r\nConnection: Keep-Alive\r\nCookie: s_cc=true; s_sq=natgeonews%253D%252526pid%25253Dhttp%2525253A//news.nationalgeographic.com/news%252526pidt%25253D1%252526oid%25253Djavascript%2525253AvideoPlayer%25252528%25252527http%2525253A//video.nationalgeographic.com/video/player/news/%25252527%25252529%252526ot%25253DA%252526oi%25253D448; s_nr=1196093247485\r\n] [HTTP/1.1 200 OK\r\nDate: Tue, 20 Nov 2007 12:54:48 GMT\r\nServer: Apache/2.0.52 (Red Hat)\r\nLast-Modified: Mon, 29 Oct 2007 18:40:30 GMT\r\nETag: 944ae-9e65-43da608e60380\r\nAccept-Ranges: bytes\r\nCache-Control: max-age=900\r\nExpires: Tue, 20 Nov 2007 13:09:48 GMT\r\nContent-Type: image/jpeg\r\nContent-length: 40549\r\nConnection: close\r\nAge:  
650\r\n\r]


The error message on National Geographic's webpage just says: "We're
sorry but the video player is taking a long time to load.  Please come
back later or wait for it to load."  Then nothing happens.  Is anyone
else experiencing any issues or have any ideas?


thanks,

ddh

--
Dwayne Hottinger
Network Administrator
Harrisonburg City Public Schools

rarely do people communicate, they just take turns talking



RE: [squid-users] Authenticating with Samba for logging username in Squid access log

2007-11-26 Thread Leach, Shane - MIS Laptop
Chris,

My http_access lines are below:

http_access allow manager localhost 
http_access deny manager 
http_access deny !Safe_ports 
http_access deny CONNECT !SSL_ports 
acl MyNetwork src 10.1.0.0/255.255.0.0 
http_access allow MyNetwork

Please let me know if you need to see any other lines.  I will review
the link you sent.

Thanks for the help.

Shane

-Original Message-
From: Chris Robertson [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, November 21, 2007 3:49 PM
To: Squid Users
Subject: Re: [squid-users] Authenticating with Samba for logging
username in Squid access log

Leach, Shane - MIS Laptop wrote:
 Good morning.
  
 I have successfully followed the steps in the walk-through
 http://mkeadle.org/?p=13 http://mkeadle.org/?p=13
  
 However, now, I am interested in how to get the username to appear in 
 the access log.  I have been unable to find any information on this.
  
 Can you provide assistance?  Otherwise, if there is a better way to 
 accomplish my goal, please let me know.  I am still open to other 
 options.
  
 Thank you for the assistance.
  
 Shane
   

The referenced article was a bit sparse on where to insert the ACL and
http_access line...  My guess is you are allowing the traffic without
authentication, but without seeing your http_access lines (in order)
it's impossible to say.

Have a look at the FAQ section on ACLs to see if that helps you solve
this issue:  http://wiki.squid-cache.org/SquidFaq/SquidAcl

Chris




RE: [squid-users] Authenticating with Samba for logging usernamein Squid access log

2007-11-26 Thread Leach, Shane - MIS Laptop
Henrik,

Below is some sections of squid.conf as requested:

logformat squid %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt
#logformat squid %ru %ul %un %ea
#logformat squidmime %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt [%>h] [%<h]
logformat common %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st %Ss:%Sh
logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh

auth_param basic program /usr/lib/squid/squid_ldap_auth -R -b dc=domain,dc=com -D cn=Administrator,dc=domain,dc=com -w password -f sAMAccountName=%s -h 10.1.0.207
auth_param basic children 5
auth_param basic realm DOMAIN.COM
auth_param basic credentialsttl 5 minute

acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager
# Deny requests to unknown ports
http_access deny !Safe_ports
# Deny CONNECT to other than SSL ports
http_access deny CONNECT !SSL_ports
acl MyNetwork src 10.1.0.0/255.255.0.0
http_access allow MyNetwork

Thank you for your assistance.

Shane

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 21, 2007 2:54 PM
To: Leach, Shane - MIS Laptop
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Authenticating with Samba for logging
usernamein Squid access log

On ons, 2007-11-21 at 09:28 -0600, Leach, Shane - MIS Laptop wrote:
 Good morning.
  
 I have successfully followed the steps in the walk-through
 http://mkeadle.org/?p=13 http://mkeadle.org/?p=13
  
 However, now, I am interested in how to get the username to appear in 
 the access log.  I have been unable to find any information on this.

If you followed the above you should already have the username in
access.log..

So what do your squid.conf look like now? In particular auth_param and
http_access directives..

Regards
Henrik


[squid-users] squid log analysis

2007-11-26 Thread Paul Cocker
One of the things I want to draw from the squid logs is how often user X
visited site or domain Y. I've set up Webalizer and was using it to
create some reports on traffic, but it doesn't appear to have the
functionality necessary to create such a report.

Are there any such log analysis tools available to me?

Paul Cocker
IT Systems Administrator







RE: [squid-users] squid3 WindowsUpdate failed

2007-11-26 Thread Alex Rousskov
On Sat, 2007-11-17 at 15:30 +, Jorge Bastos wrote:

 For now I'm going to leave this as fixed.
 With the Debian 3.0.RC1-2, Luigi added the resume patch as I requested and
 it seems to work; I may have done the test wrong the other time...
 I'm going to watch this for some days and if I notice something I'll warn you.
 I don't know about feedback from other users.

Great. Thanks for posting this update.

If anybody is still having Windows Update problems with a recent Squid3
daily snapshot, please file a bug report.

Thank you,

Alex.


 
 
 -Original Message-
 From: Alex Rousskov [mailto:[EMAIL PROTECTED] 
 Sent: terça-feira, 6 de Novembro de 2007 15:19
 To: Jorge Bastos
 Cc: squid-users@squid-cache.org
 Subject: RE: [squid-users] squid3 WindowsUpdate failed
 
 
 On Tue, 2007-11-06 at 09:24 +, Jorge Bastos wrote:
  Alex,
  The only ACL i have in squid.conf is:
  
  ---
  acl all_cache src 0.0.0.0/0.0.0.0
  no_cache deny all_cache
  ---
 
 OK, thanks.
 
  I'm one of the people who's having these problems.
  Now I'm using 3.0.PRE6 until this is fixed.
 
 Can you help us troubleshoot the problem? Can you run the latest Squid3
 daily snapshot and collect full debugging (debug_options ALL,9) logs
 when Windows Update is malfunctioning?
 
 Thank you,
 
 Alex.
 
  -Original Message-
  From: Alex Rousskov [mailto:[EMAIL PROTECTED] 
  Sent: segunda-feira, 5 de Novembro de 2007 16:31
  To: Amos Jeffries
  Cc: John Mok; squid-users@squid-cache.org
  Subject: Re: [squid-users] squid3 WindowsUpdate failed
  
  On Sun, 2007-11-04 at 19:30 +1300, Amos Jeffries wrote:
   I have just had the opportunity to do WU on a customer's box and
   managed to reproduce one of the possible WU failures.
   
   This one was using WinXP, and the old WindowsUpdate (NOT 
   MicrosoftUpdate, that remains untested). With squid configured to
   permit client access to:
   
   # Windows Update / Microsoft Update
   #
   redir.metaservices.microsoft.com
   images.metaservices.microsoft.com
   c.microsoft.com
   windowsupdate.microsoft.com
   #
   # WinXP / Win2k
   .update.microsoft.com
   download.windowsupdate.com
   # Win Vista
   .download.windowsupdate.com
   # Win98
   wustat.windows.com
   crl.microsoft.com
   
   AND also CONNECT access to www.update.microsoft.com:443
   
   PROBLEM:
  The client box detects a needed update,
  then during the Download Updates phase it says ...failed! and
   stops.
   
   CAUSE:
   
   This was caused by a bug in squid reading the ACL:
  download.windowsupdate.com
 ...
  .download.windowsupdate.com
   
 - squid would detect that download.windowsupdate.com was a
   subdomain 
   of .download.windowsupdate.com  and .download.windowsupdate.com would
   be 
   culled off the ACL as unneeded.
   
 - That culled bit held the wildcard letting v4.download.* and 
   www.download.* be retrieved later in the process.
   
 - BUT, specifying JUST .download.windowsupdate.com would cause 
   download.windowsupdate.com/fubar to FAIL under the same circumstances.
   
   During the WU process, requests for the application at 
   www.download.windowsupdate.com/fubar and K/Q updates at 
   v(3|4|5).download.windowsupdate.com/fubar2
   would result in a 403 and thus the FAIL.
   
   
   SOLUTION:
 Changing the wildcard match to an explicit form fixes this and WU 
   succeeds again.
   OR,
 Changing the wildcard to .windowsupdate.com also fixes the problem
   for this test.
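   
   A minimal sketch of that second workaround (ACL name illustrative,
   domain list abridged from the one above):
   
 acl windowsupdate dstdomain .windowsupdate.com .update.microsoft.com
 http_access allow windowsupdate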
  
  Can other folks experiencing Windows Update troubles with Squid3 confirm
  that their setup does not have the same ACL problem?
  
  In general, if we do not find a way to get more information about the
  Windows Update problem, we would have to assume it does not exist in
  most environments and release Squid3 STABLE as is. If you want the
  problem fixed before the stable Squid3 release, please help us reproduce
  or debug the problem.
  
  Thank you,
  
  Alex.
  
  
 
 



Re: [squid-users] looking for testers: google maps/earth/youtube caching

2007-11-26 Thread Gleidson Antonio Henriques

Count on me!

I'm downloading the tarball now.

Thanks for your help with this rich feature!

Gleidson Antonio Henriques

- Original Message - 
From: Adrian Chadd [EMAIL PROTECTED]

To: Andreas Pettersson [EMAIL PROTECTED]
Cc: Adrian Chadd [EMAIL PROTECTED]; squid-users@squid-cache.org
Sent: Monday, November 26, 2007 1:10 PM
Subject: Re: [squid-users] looking for testers: google maps/earth/youtube 
caching




On Mon, Nov 26, 2007, Andreas Pettersson wrote:


Do I need to cvs it or are we talking about the daily auto-generated
release tar.gz-file from here?
http://www.squid-cache.org/Versions/v2/2.6/


http://www.squid-cache.org/Versions/v2/HEAD/squid-HEAD.snapshot.tar.gz -
this isn't Squid-2.6, this is the development Squid-2 branch.



adrian

--
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid 
Support - 




Re: [squid-users] looking for testers: google maps/earth/youtube caching

2007-11-26 Thread Andreas Pettersson

Adrian Chadd wrote:

On Mon, Nov 26, 2007, Andreas Pettersson wrote:

  
Do I need to cvs it or are we talking about the daily auto-generated 
release tar.gz-file from here?

http://www.squid-cache.org/Versions/v2/2.6/



http://www.squid-cache.org/Versions/v2/HEAD/squid-HEAD.snapshot.tar.gz -
this isn't Squid-2.6, this is the development Squid-2 branch.
  


Ok, thanks.
2 questions:

1. Do your changes incorporate patches that allow overriding 
the-nocache-header-which-exact-name-I-just-cannot-recall-at-the-moment 
that only squid3 can override?


2. I get these failures when trying to compile:

store_key_md5.o(.text+0x12a): In function `storeKeyPrivate':
/tmp/squid2head/squid-2.HEAD-20071126/src/store_key_md5.c:102: undefined 
reference to `MD5Init'
store_key_md5.o(.text+0x139):/tmp/squid2head/squid-2.HEAD-20071126/src/store_key_md5.c:103: 
undefined reference to `MD5Update'
store_key_md5.o(.text+0x148):/tmp/squid2head/squid-2.HEAD-20071126/src/store_key_md5.c:104: 
undefined reference to `MD5Update'
store_key_md5.o(.text+0x162):/tmp/squid2head/squid-2.HEAD-20071126/src/store_key_md5.c:105: 
undefined reference to `MD5Update'
store_key_md5.o(.text+0x16f):/tmp/squid2head/squid-2.HEAD-20071126/src/store_key_md5.c:106: 
undefined reference to `MD5Final'

store_key_md5.o(.text+0x1d2): In function `storeKeyPublic':
/tmp/squid2head/squid-2.HEAD-20071126/src/store_key_md5.c:116: undefined 
reference to `MD5Init'
store_key_md5.o(.text+0x1e1):/tmp/squid2head/squid-2.HEAD-20071126/src/store_key_md5.c:117: 
undefined reference to `MD5Update'
store_key_md5.o(.text+0x1fb):/tmp/squid2head/squid-2.HEAD-20071126/src/store_key_md5.c:118: 
undefined reference to `MD5Update'
store_key_md5.o(.text+0x208):/tmp/squid2head/squid-2.HEAD-20071126/src/store_key_md5.c:119: 
undefined reference to `MD5Final'

store_key_md5.o(.text+0x245): In function `storeKeyPublicByRequestMethod':
/tmp/squid2head/squid-2.HEAD-20071126/src/store_key_md5.c:143: undefined 
reference to `MD5Init'
store_key_md5.o(.text+0x254):/tmp/squid2head/squid-2.HEAD-20071126/src/store_key_md5.c:144: 
undefined reference to `MD5Update'
store_key_md5.o(.text+0x26e):/tmp/squid2head/squid-2.HEAD-20071126/src/store_key_md5.c:145: 
undefined reference to `MD5Update'
store_key_md5.o(.text+0x29d):/tmp/squid2head/squid-2.HEAD-20071126/src/store_key_md5.c:158: 
undefined reference to `MD5Final'
store_key_md5.o(.text+0x2d2):/tmp/squid2head/squid-2.HEAD-20071126/src/store_key_md5.c:155: 
undefined reference to `MD5Update'
store_key_md5.o(.text+0x2fb):/tmp/squid2head/squid-2.HEAD-20071126/src/store_key_md5.c:156: 
undefined reference to `MD5Update'
store_key_md5.o(.text+0x30c):/tmp/squid2head/squid-2.HEAD-20071126/src/store_key_md5.c:158: 
undefined reference to `MD5Final'
store_key_md5.o(.text+0x32a):/tmp/squid2head/squid-2.HEAD-20071126/src/store_key_md5.c:147: 
undefined reference to `MD5Update'
store_key_md5.o(.text+0x353):/tmp/squid2head/squid-2.HEAD-20071126/src/store_key_md5.c:148: 
undefined reference to `MD5Update'
store_key_md5.o(.text+0x375):/tmp/squid2head/squid-2.HEAD-20071126/src/store_key_md5.c:150: 
undefined reference to `MD5Update'
store_key_md5.o(.text+0x38f):/tmp/squid2head/squid-2.HEAD-20071126/src/store_key_md5.c:151: 
undefined reference to `MD5Update'

wccp2.o(.text+0x208): In function `wccp2_update_md5_security':
/tmp/squid2head/squid-2.HEAD-20071126/src/wccp2.c:472: undefined 
reference to `MD5Init'
wccp2.o(.text+0x21a):/tmp/squid2head/squid-2.HEAD-20071126/src/wccp2.c:473: 
undefined reference to `MD5Update'
wccp2.o(.text+0x229):/tmp/squid2head/squid-2.HEAD-20071126/src/wccp2.c:474: 
undefined reference to `MD5Update'
wccp2.o(.text+0x235):/tmp/squid2head/squid-2.HEAD-20071126/src/wccp2.c:475: 
undefined reference to `MD5Final'

wccp2.o(.text+0x1579): In function `wccp2HandleUdp':
/tmp/squid2head/squid-2.HEAD-20071126/src/wccp2.c:514: undefined 
reference to `MD5Init'
wccp2.o(.text+0x158b):/tmp/squid2head/squid-2.HEAD-20071126/src/wccp2.c:515: 
undefined reference to `MD5Update'
wccp2.o(.text+0x159a):/tmp/squid2head/squid-2.HEAD-20071126/src/wccp2.c:516: 
undefined reference to `MD5Update'
wccp2.o(.text+0x15a6):/tmp/squid2head/squid-2.HEAD-20071126/src/wccp2.c:517: 
undefined reference to `MD5Final'

*** Error code 1

Stop in /tmp/squid2head/squid-2.HEAD-20071126/src.
*** Error code 1

Stop in /tmp/squid2head/squid-2.HEAD-20071126/src.
*** Error code 1

Stop in /tmp/squid2head/squid-2.HEAD-20071126/src.
*** Error code 1

Stop in /tmp/squid2head/squid-2.HEAD-20071126.
anp:/tmp/squid2head/squid-2.HEAD-20071126#


--
Andreas




Re: [squid-users] squid log analysis

2007-11-26 Thread Falk Husemann


On 26.11.2007, at 17:59, Paul Cocker wrote:

One of the things I want to draw from the squid logs is how often  
user X

visited site or domain Y. I've setup Webalizer and was using it to
create some reports on traffic, but it doesn't appear to have the
functionality necessary to create such a report.


I'd suggest SARG. It's able to tell you that, but for ALL sites, not  
just for user X and site Y. It shows that for all X and all their Y's.



Greets,
Falk Husemann


[squid-users] Allowing only ntlm clients

2007-11-26 Thread shacky
Hi.

I'm configuring a Squid proxy with the ntlm authentication.
Is there a way to allow the Internet access only from the clients
connected to the Active Directory domain?

Thank you very much!
Bye.


Re: [squid-users] looking for testers: google maps/earth/youtube caching

2007-11-26 Thread Andreas Pettersson

Andreas Pettersson wrote:
1. Does your changes incorporate patches that allows overriding 
the-nocache-header-which-exact-name-I-just-cannot-recall-at-the-moment 
that only squid3 can override?


Sorry for the noise. I've now read up on what your changes include, and 
this question got answered.

http://wiki.squid-cache.org/Features/StoreUrlRewrite
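
For anyone following along, a sketch of what that page describes. The
directive names are from the wiki; the helper and its URL pattern are
illustrative assumptions, not the actual tested extension:

  storeurl_rewrite_program /usr/local/bin/store_url_rewrite.pl
  storeurl_rewrite_children 5
  acl store_rewrite_list dstdomain .youtube.com
  storeurl_access allow store_rewrite_list
  storeurl_access deny all

And a matching helper, which reads one request per line and prints the
key to store the object under (echoing the URL back means "no rewrite"):

  #!/usr/bin/perl
  # hypothetical storeurl helper: map every get_video URL for the same
  # video_id onto one canonical cache key
  $| = 1;                 # helpers must run unbuffered
  while (<STDIN>) {
      chomp;
      my ($url) = split;  # first field on each input line is the URL
      if ($url =~ m{^http://[^/]+/get_video\?video_id=([^&]+)}) {
          print "http://youtube-video.SQUIDINTERNAL/get_video?video_id=$1\n";
      } else {
          print "$url\n"; # unchanged: store under the original URL
      }
  }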

--
Andreas




Re: [squid-users] Allowing only ntlm clients

2007-11-26 Thread Isnard Jaquet
Hi,

Yes, there are different ways.

Set the authentication scheme to use only NTLM, and set the rules to
allow only traffic that matches the authentication ACL.

Example:

auth_param ntlm program /usr/local/bin/ntlm_auth
--helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 50
auth_param ntlm keep_alive on

acl auth_user proxy_auth REQUIRED
http_access allow auth_user
http_access deny all

Regards,

Isnard

On Mon, 2007-11-26 at 19:09 +0100, shacky wrote:
 Hi.
 
 I'm configuring a Squid proxy with the ntlm authentication.
 Is there a way to allow the Internet access only from the clients
 connected to the Active Directory domain?
 
 Thank you very much!
 Bye.



[squid-users] problem with squid 2.6

2007-11-26 Thread Federico Lopez Sarmiento
Hi again, list.
This time I have an issue and I don't know why it happens. What I do
know is that this has happened to me other times, and I could only
resolve it by reinstalling squid (apt-get purge squid + apt-get
install squid).
I'm running Debian 4.

neurus:/etc/squid# uname -a
Linux neurus 2.6.18-4-686 #1 SMP Mon Mar 26 17:17:36 UTC 2007 i686 GNU/Linux


I was running squid perfectly but decided to configure it to reject
unwanted pages, so in squid.conf I added two lines (the ones marked
with *).
When I saved the changes and reloaded squid, I found out that it
won't run, whatever I do.

neurus:/etc/squid# tail -n 1785 squid.conf | head -n 18
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
*acl FILTRADAS dstdomain /etc/squid/filtradas.squid
*http_access deny FILTRADAS
delay_pools 1
delay_class 1 1
delay_parameters 1 12000/16000 8000/1
acl LAN src 192.168.0.0/24
delay_access 1 allow LAN

# Example rule allowing access from your local networks. Adapt
# to list your (internal) IP networks from where browsing should
# be allowed
acl our_networks src 192.168.0.0/24
http_access allow our_networks
http_access allow localhost

neurus:/etc/squid# ls -lh
total 152K
-rw-r--r-- 1 root root0 2007-11-26 13:01 filtradas.squid
-rw--- 1 root root 146K 2007-11-26 13:01 squid.conf
neurus:/etc/squid# chmod 777 filtradas.squid
neurus:~# cd /etc/init.d
neurus:/etc/init.d# squid start
2007/11/26 13:05:55| aclParseAclLine: WARNING: empty ACL: acl
FILTRADAS dstdomain /etc/squid/filtradas.squid
neurus:/etc/squid# cat squid.conf | grep http_port
#  TAG: http_port
#   rather than the http_port number.
#   internal address:port in http_port. This way Squid will only be
http_port 8080

neurus:/etc/init.d# nmap localhost

Starting Nmap 4.11 ( http://www.insecure.org/nmap/ ) at 2007-11-26 13:06 ART
Interesting ports on localhost (127.0.0.1):
Not shown: 1675 closed ports
PORT    STATE    SERVICE
22/tcp  filtered ssh
25/tcp  open smtp
80/tcp  open http
111/tcp open rpcbind
113/tcp open auth

Nmap finished: 1 IP address (1 host up) scanned in 1.276 seconds
neurus:/etc/init.d#
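
(Two hedged guesses: the WARNING fires because filtradas.squid is zero
bytes -- the ls output above shows size 0 -- so putting at least one
domain in it, e.g.

  echo .example.com >> /etc/squid/filtradas.squid

with an illustrative domain, would make the ACL non-empty. Also,
"squid start" from /etc/init.d runs the squid binary from $PATH with a
stray argument; the init script would be invoked as
"/etc/init.d/squid start".)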


I thought, damn... maybe I misconfigured something, I should do a
rollback. I did it, and guess what? The proxy doesn't run.
If anyone can give me some help with this I would really appreciate it.
Again, sorry for my bad English.
Best regards.

Federico.


Re: [squid-users] looking for testers: google maps/earth/youtube caching

2007-11-26 Thread Adrian Chadd
On Mon, Nov 26, 2007, Andreas Pettersson wrote:

 2. I get these failures when trying to compile:
 
 store_key_md5.o(.text+0x12a): In function `storeKeyPrivate':
 /tmp/squid2head/squid-2.HEAD-20071126/src/store_key_md5.c:102: undefined 
 reference to `MD5Init'

[snip]

There are some changes going on in squid-2.HEAD and squid-3 revolving around
how the md5 libraries are compiled in. Compile with --enable-openssl so
it includes the openssl md5 implementation and all should be fine.
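
A sketch (keep whatever other configure options you already use; the
bracketed placeholder stands for them):

  ./configure --enable-openssl [your other options]
  make clean && make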



Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -


Re: [squid-users] read_timeout and fwdServerClosed: re-forwarding

2007-11-26 Thread Chris Hostetter

: Tip: fwdReforwardableStatus() I think is the function which implements
: the behaviour you're seeing. That and fwdCheckRetry.

My C Fu isn't strong enough for me to feel confident that I would even 
know what to look for if I started digging into the code ... I mainly just 
wanted to clarify that:
  a) this is expected behavior
  b) there isn't a(n existing) config option available to change this behavior

: You could set the HTTP Gateway timeout to return 0 so the request
: isn't forwarded and see if that works, or the n_tries check in
: fwdCheckRetry().

I'm not sure I understand ...  are you saying there is a squid option 
to set an explicit gateway timeout value? (such that origin requests which 
take longer than X cause squid to return a 504 to the client) ... This 
would be ideal -- the only reason I was even experimenting with read_timeout 
was because I haven't found any documentation of anything like this. (but 
since the servers I'm dealing with don't write anything until the entire 
response is ready I figured I could make do with the read_timeout)

: I could easily make the 10 retry count a configurable parameter.

That might be prudent.  It seems like strange behavior to have hardcoded 
in squid.

: The feature, IIRC, was to work around transient network issues which
: would bring up error pages in a traditional forward-proxying setup.

But in situations like that, wouldn't the normal behavior of a long 
read_timeout (I believe the default is 15 minutes) be sufficient?

: Hm, what about retry_on_error ? Does that do anything in an accelerator
: setup?

It might do something, but I'm not sure what :) ... even when i set it 
explicitly to off squid still retries when the read_timeout is exceeded.


Perhaps I'm approaching things the wrong way -- I set out with some 
specific goals in mind, did some experimenting with various options to try 
and reach that goal, and then asked questions when i encountered behavior I 
couldn't explain.  Let me back up and describe my goals, and perhaps 
someone can offer some insight into the appropriate way to achieve 
them

I'm the middle man between origin servers which respond to every request 
by dynamically generating (relatively small) responses, and clients that 
make GET requests to these servers but are only willing to wait around 
for a short amount of time (on the order of 100s of milliseconds) to get 
the responses before they abort the connection.  The clients would rather 
get no response (or an error) than wait around for a long time -- the 
servers meanwhile would rather the clients got stale responses than no 
responses (or error responses).  My goal, using squid as an accelerator, 
is to maximize the satisfaction of both the clients and the servers.

In the event that a request is not in the cache at all, and an origin 
server takes too long to send a response, using the quick_abort 0 option 
in squid does exactly what I hoped it would: squid continues to wait 
around for the response so that it is available in the cache for future 
requests.
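
For reference, a sketch of the knobs discussed in this thread -- in
squid.conf "quick_abort" is spelled as the quick_abort_min/max/pct
directives, and quick_abort_min -1 KB keeps squid fetching after a
client aborts (which exact directive/value the poster used isn't
shown); values illustrative:

  quick_abort_min -1 KB
  read_timeout 15 minutes
  refresh_stale_hit 60 seconds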

In the event that stale content is already in the cache, and the origin 
server is down and won't accept any connections, squid does what I'd 
hoped it would: returns the stale content even though it can't be 
validated (albeit, without a proper warning, see bug#2119)

The problem I'm running into is figuring out a way to get the analogous 
behavior when the origin server is up but taking too long to respond 
to the validation requests.   Ideally (in my mind) squid would have a 
force_stale_response_after XX milliseconds option, such that if squid 
has a stale response available in the cache, it will return immediately 
once XX milliseconds have elapsed since the client connected.  Any in 
progress validation requests would still be completed/cached if they met 
the conditions of the quick_abort option just as if the client had 
aborted the connection without receiving any response.

Is there a way to get behavior like this (or close to it) from squid?


read_timeout was the only option I could find that seemed to relate to 
how long squid would wait for an origin server once connected -- but it 
has the retry problems previously discussed.  Even if it didn't retry, and 
returned the stale content as soon as the read_timeout was exceeded, 
I'm guessing it wouldn't wait for the fresh response from the origin 
server to cache it for future requests.

FWIW: The refresh_stale_hit option seemed like a promising mechanism for
ensuring that when concurrent requests come in, all but one would get 
a stale response while waiting for a fresh response to be cached (which 
could help minimize the number of clients that give up while waiting 
for a fresh response) -- but it doesn't seem to work as advertised (see 
bug#2126).



-Hoss


Re: [squid-users] Allowing only ntlm clients

2007-11-26 Thread shacky
 If you set the authentication scheme to use only ntlm and set the rule
 to allow only traffic that matches that acl.

Yes, but I want users not to be allowed to surf the Internet
from a computer that isn't connected to the Active Directory domain.
For example, I don't want users to use their laptops even if they
enter their username and password in the proxy authentication.

Thank you!


Re: [squid-users] Allowing only ntlm clients

2007-11-26 Thread Adrian Chadd
On Tue, Nov 27, 2007, shacky wrote:
  If you set the authentication scheme to use only ntlm and set the rule
  to allow only traffic that matches that acl.
 
 Yes, but I want users not to be allowed to surf the Internet
 from a computer that isn't connected to the Active Directory domain.
 For example, I don't want users to use their laptops even if they
 enter their username and password in the proxy authentication.
 

The question then is: how can a computer authenticate another computer?
Squid doesn't care (at the moment); it's just passed credentials.

Normally you'd actually prevent an entire computer from connecting to the
network. Enterprises do this via a variety of means, including stuff like
802.1x. Drop them in a separate VLAN if you don't recognise the computer
and disallow that VLAN access to the proxy (and other resources.)



Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -


Re: [squid-users] Allowing only ntlm clients

2007-11-26 Thread Leonardo Rodrigues Magalhães


   If you have ONLY the 'auth_param ntlm' lines and do NOT have 
'auth_param basic', there will be no username/password prompt.


   To get the username/password prompt window, you would have to 
configure a 'basic' authenticator. Configuring NTLM only, you would 
probably achieve what you're looking for.



shacky wrote:

If you set the authentication scheme to use only ntlm and set the rule
to allow only traffic that matches that acl.



Yes, but I want users not to be allowed to surf the Internet
from a computer that isn't connected to the Active Directory domain.
For example, I don't want users to use their laptops even if they
enter their username and password in the proxy authentication.

  


--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
[EMAIL PROTECTED]
My SPAMTRAP, do not email it






Re: [squid-users] Have squid display webpage when user authenticated

2007-11-26 Thread Chris Robertson

Reid wrote:

-
I would like to have a webpage display when a user is first authenticated on my 
squid proxy.

For example, they start by configuring their browser for the proxy, and then go to 
yahoo.com.
But before they see yahoo.com, the proxy will first display a page says You 
are surfing via a
proxy.. click here to continue to your page... From that point on they will 
not be asked again.

Is this possible with squid?
  


Squid 2.6 ships with an external ACL called the session helper that is 
designed to do just this.  Try man squid_session on your proxy, or see 
http://linuxreviews.org/man/squid_session/ for more details.
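
A sketch based on the example in that man page (paths, the splash URL,
and the format tag -- %LOGIN vs %SRC -- are things to adjust to your
setup):

  external_acl_type session ttl=300 negative_ttl=0 children=1 concurrency=200 %SRC /usr/lib/squid/squid_session -t 7200
  acl session external session
  http_access deny !session
  deny_info http://your.server/proxy-splash.html session

The helper marks a client as "seen" on its first request, so only the
very first request hits the deny_info splash page; subsequent requests
pass through.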



Thank you
  


Chris


Re: [squid-users] Authenticating with Samba for logging username in Squid access log

2007-11-26 Thread Chris Robertson

Leach, Shane - MIS Laptop wrote:

Chris,

My http_access lines are below:

http_access allow manager localhost 
http_access deny manager 
http_access deny !Safe_ports 
http_access deny CONNECT !SSL_ports 
acl MyNetwork src 10.1.0.0/255.255.0.0 
http_access allow MyNetwork


Please let me know if you need to see any other lines.  I will review
the link you sent.

Thanks for the help.

Shane
  


Assuming you have the auth_param lines from the how-to in your 
squid.conf above the lines included in your email, you should be able to 
replace "http_access allow MyNetwork" with the following two lines...


acl NTLMUsers proxy_auth REQUIRED  # Create an ACL that requires valid 
authentication
http_access allow MyNetwork NTLMUsers  # Allow access to authenticated 
users from my network


Chris


Re: [squid-users] problem with squid 2.6

2007-11-26 Thread Tony Dodd

Try running squid in debug mode:

squid -X

Also, check your cache.log and see if anything relevant is in there.

--
Tony Dodd, Systems Administrator

Last.fm | http://www.last.fm
Karen House 1-11 Baches Street
London N1 6DL

check out my music taste at:
http://www.last.fm/user/hawkeviper


Re: [squid-users] question about filesystems and directories for cache.

2007-11-26 Thread Chris Robertson

Tony Dodd wrote:

Matias Lopez Bergero wrote:

Hello,


snip


I've been reading the wiki and the mailing list to learn which is the
best filesystem to use. For now I have chosen ext3 based on comments on
the list; I have also passed the nodev,nosuid,noexec,noatime flags in
fstab in order to get better security and faster performance.


snip

Hi Matias,

I'd personally recommend against ext3, and point you towards reiserfs. 
ext3 is horribly slow for many small files being read/written at the 
same time.  I'd also recommend maximizing your disk throughput, by 
splitting the raid, and having a cache-dir on each disk; though of 
course, you'll lose redundancy in the event of a disk failure.


I wrote a howto that revolves around maximizing squid performance, 
take a look at it, you may find it helpful: 
http://blog.last.fm/2007/08/30/squid-optimization-guide




Hi Tony,

First of all, thanks for sharing the write-up.  There are a number of 
high-load squid installations (Wikipedia, and Flickr are two of the 
largest I know of), but not much information on what tweaks to make in 
the interest of performance.


After perusing your posting, I'm wondering if you would run 
"squidclient -p 80 mgr:info | grep method".  I'm making the assumption 
that your squid is listening on port 80, so please change the argument 
to -p if needed.  Your configuration options included --enable-poll, 
but with a 2.6 kernel and 2.6 sources, I would be surprised if you are 
not actually using epoll.  It might be a superfluous compile option.


Cache digests are not the only method of sharing between peers.  ICP is 
an alternative and I have read that multicast works well for scaling 
beyond a handful of peers.  I can't seem to find the posting now that I 
want to reference it.  I'd trust your experience over my memory of 
someone else's posting, but I thought I would raise the possibility.


I'm surprised you had to specify your hosts file in your squid.conf.  
/etc/hosts is the default.


Lastly, I'd be wary of specifying dns_nameservers as a squid.conf 
option.  Squid will use the servers specified in /etc/resolv.conf if 
this option is not specified.  Now you have to maintain name servers in 
two locations.


Chris


Re: [squid-users] question about filesystems and directories for cache.

2007-11-26 Thread Tony Dodd

Chris Robertson wrote:
First of all, thanks for sharing the write-up.  There are a number of 
high-load squid installations (Wikipedia, and Flikr are two of the 
largest I know of), but not much information on what tweaks to make in 
the interest of performance.
No problem. =]  I encountered the same problem when trying to figure out 
how to get more performance so I figured once I'd cracked it, the least 
I could do was document it for the other people having the same issue 
(and to give myself a reference for later).


After perusing your posting, I'm wondering if you would run a 
squidclient -p 80 mgr:info |grep method.  I'm making the assumption 
that your squid is listening on port 80, so please change the argument 
to -p if needed.  Your configuration options included --enable-poll, 
but with a 2.6 kernel and 2.6 sources, I would be surprised if you are 
not actually using epoll.  It might be a superfluous compile option.

[EMAIL PROTECTED] ~]# squidclient -p 8081 mgr:info |grep method
   IO loop method: poll
Cache digests are not the only method of sharing between peers.  ICP 
is an alternative and I have read that multicast works well for 
scaling beyond a handful of peers.  I can't seem to find the posting 
now that I want to reference it.  I'd trust your experience over my 
memory of someone else's posting, but I thought I would raise the 
possibility.
I was under the impression that when utilizing cache peering, it worked 
better if the squids had a digest of what was on X squid server, before 
asking for it.  I could be wrong on that though - Adrian, care to 
comment on this one?  It's now redundant in my situation though, as 
every peering mechanism is slower than going back to parent in our use case.
I'm surprised you had to specify your hosts file in your squid.conf.  
/etc/hosts is the default.
There are a couple of bugs in squid that seem to cause issues if you 
don't actually specify the hosts file within the squid conf... worst 
case, it's an extra line of config to parse on startup.


Lastly, I'd be wary of specifying dns_nameservers as a squid.conf 
option.  Squid will use the servers specified in /etc/resolv.conf if 
this option is not specified.  Now you have to maintain name servers 
in two locations.
Same goes here; DNS lookups were taking 200-1000ms without specifying 
dns_nameservers in the config (the nameservers specified there are the 
same ones within /etc/resolv.conf), now they're sub 1ms.  There isn't 
much chance of us re-ip-ing internally, so it's a pretty safe config 
option for us.  I definitely agree that it could cause problems for 
people using public DNS resolution though.


--
Tony Dodd, Systems Administrator

Last.fm | http://www.last.fm
Karen House 1-11 Baches Street
London N1 6DL

check out my music taste at:
http://www.last.fm/user/hawkeviper 



Re: [squid-users] question about filesystems and directories for cache.

2007-11-26 Thread Adrian Chadd
 not actually using epoll.  It might be a superfluous compile option.

 [EMAIL PROTECTED] ~]# squidclient -p 8081 mgr:info |grep method
IO loop method: poll

Try --enable-epoll and see if your caches are faster?
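
A sketch of the re-test (reuse your existing configure options with
epoll swapped in, then verify):

  ./configure --enable-epoll [your other options]
  make && make install
  squidclient -p 8081 mgr:info | grep method   # hope for: epoll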




Adrian



[squid-users] expires time problem between apache and squid

2007-11-26 Thread 千千阙歌
Hello, squid-users!

Redhat Linux AS 4: squid2.6.STABLE16 + apache 2.0.61

1. I use apache to control cache time:

  <IfModule mod_expires.c>
    ExpiresActive on
    ExpiresByType text/html A60
    ExpiresByType text/css A1296000
    ExpiresByType image/gif A1296000
    ExpiresByType image/jpeg A1296000
    ExpiresByType application/x-shockwave-flash A1296000
    ExpiresByType application/x-javascript A1296000
    ExpiresDefault A60
  </IfModule>

  Then I found that squid can't cache, and when I changed A60 to A61, squid 
worked well!
  
2. If I use squid to control the cache time, it also doesn't work!

  refresh_pattern -i ^http://media.mydomain.com/ 1 100% 1 override-lastmod 
override-expire reload-into-ims  (with 1 minute, it can't cache)

  refresh_pattern -i ^http://media.hexun.com/ 2 100% 2 override-lastmod 
override-expire reload-into-ims  (when I set 2 minutes, it works!)

  I really don't know why!
By the way: on my intranet, all servers run ntpdate against the NTP 
server every few minutes!