[squid-users] Out of memory problem in huge cache

2008-09-24 Thread Itzcak Pechtalt
Hi,

I have Squid 2.6 running on RedHat Linux with 8 GB memory, and
configured according to the Squid wiki recommendations at
http://wiki.squid-cache.org/SquidFaq/SquidMemory#head-09818ad4cb8a1dfea1f51688c41bdf4b79a69991

However, it crashes every couple of hours with a "FATAL: xmalloc: Unable
to allocate xxx bytes!" message.
I wonder if I made a configuration mistake or it's a memory leak problem.

Any help will be appreciated.

Following configuration and error details:

Squid version 2.6 STABLE 12
cache_dir size 100GB
cache_mem size 1.995 GB
memory_pools_limit 100 MB

So 2 GB for cache_mem, plus 100 GB of disk * 10 MB per GB of disk (from the
wiki above), comes to about 3 GB. In addition there are about 5 GB left
for other system use, so it should be enough.

My Linux kernel supports a 4 GB memory space per process (tested), so the
problem is not a per-process memory limit.

I found in Bugzilla a memory leak bug related to an http_access deny acl
rule, but my cache is configured with http_access allow all (it's a
private network and there are no security issues).

Some prints from cache.log
2008/09/21 22:40:31| Swap maxSize 10240 KB, estimated 7876923 objects
2008/09/21 22:40:31| Target number of buckets: 393846
2008/09/21 22:40:31| Using 524288 Store buckets
2008/09/21 22:40:31| Max Mem  size: 1997824 KB
2008/09/21 22:40:31| Max Swap size: 10240 KB
2008/09/21 22:43:12| Finished rebuilding storage from disk.
2008/09/21 22:43:12|   14085510 Entries scanned
2008/09/21 22:43:12|   14084558 Objects loaded.
2008/09/21 22:43:12|   201 Duplicate URLs purged.
2008/09/21 22:43:12|   751 Swapfile clashes avoided.
2008/09/21 22:43:12|   Took 161.5 seconds (87200.4 objects/sec).
2008/09/21 22:43:17|   Completed Validation Procedure
2008/09/21 22:43:17|   Validated 14084555 Entries
2008/09/21 22:43:17|   store_swap_size = 100657756k

FATAL: xmalloc: Unable to allocate 65535 bytes!

Squid Cache (Version 2.6.STABLE12-20070429): Terminated abnormally.
CPU Usage: 5939.813 seconds = 2462.689 user + 3477.124 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 1
Memory usage for squid via mallinfo():
total space in arena:  -117092 KB
Ordinary blocks:   -129142 KB 286436 blks
Small blocks:   0 KB  0 blks
Holding blocks: 83308 KB 11 blks
Free Small blocks:  0 KB
Free Ordinary blocks:   12049 KB
Total in use:  -45834 KB 136%
Total free: 12049 KB -35%

thanks

Itzcak


Re: [squid-users] running squid on VERY low-end devices

2008-09-24 Thread John Doe
 i have written a small external helper in perl and it seems to do the 
 trick.
 
 
 #!/usr/bin/microperl
 
 while (<STDIN>) {
 
 ($second, $minute, $hour, $dayOfMonth, $month, $yearOffset, 
 $dayOfWeek, 
 $dayOfYear, $daylightSavings) = localtime();
 $year = 1900 + $yearOffset;
 
 if ($year < 2008) {
 print "ERR\n";
 } else {
 print "OK\n";
 }
 }
 
 
 i know perl isn't the most optimized language, but as the external 
 acl 
 is started on squid startup and keeps running until squid exits, i don't think 
 that will be a problem.
 
 anyway, i would love to see something similar in C that could be compiled 
 and run with less memory than perl requires ... unfortunately i can't code 
 even a 
 'hello world' in C !!! But i'll try that, shouldn't be too difficult.

Here's a basic C example:

#include <stdio.h>
#include <time.h>

#define INPUTSIZE 256

int main(int argc, char *argv[]) {

  char input[INPUTSIZE];
  struct tm *local;
  time_t t;
  int year;

  /* one OK/ERR answer per input line received from squid */
  while (fgets(input, sizeof(input), stdin)) {
    t = time(NULL);
    local = localtime(&t);
    year = local->tm_year + 1900;
    if (year < 2008) { printf("ERR\n"); } else { printf("OK\n"); }
    fflush(stdout);
  }

  return 0;
}

JD

PS: I put a small INPUTSIZE and do not check for overflow...
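
For completeness, here is roughly how a helper like this gets wired into
squid.conf (a minimal sketch; the helper path, ACL name and children count
are just examples, and %SRC is there only because external_acl_type requires
at least one format token - this helper ignores its input anyway). Compile
the C version with something like gcc -O2 -o check_year check_year.c first:

external_acl_type check_year children=3 %SRC /usr/local/bin/check_year
acl year_ok external check_year
http_access deny !year_ok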


  



[squid-users] Custom header based authentication module

2008-09-24 Thread Christoph Rabel

Hi!

I am rather new to squid and I hope my question is fine here.

We have a single sign on service in our company, which essentially sets
a custom header after authentication.
Now, we would like to use this header for squid authentication too.

So, a request which has a valid ssoheader should be considered
authenticated and allowed to access the internet. A user without such a
header (or an invalid one) should be redirected to the login page.

I understand from the documentation that I have to implement a custom auth
module which checks the credentials, but it also says
(http://wiki.squid-cache.org/SquidFaq/ProxyAuthentication) that the
/Authorization/ request header is given to this module, and if the
header is not present, a 407 is sent.

To condense my question: Is it possible to specify which header
information is given to the auth module? And to specify that no 407 but
a redirect is sent?

Another thing that bothers me is SSL requests. What happens when the
proxy encounters a request for an https site? Can it access the cookie
anyway?

Thanks, Christoph




[squid-users] Object becomes STALE: refresh_pattern min and max

2008-09-24 Thread BUI18
Hi -

I have a squid box with tons of disk for the cache_dir
(hundreds of GB).  I use wget to perform some pre-fetching of large
video files.  I've set the min and max age to 5 days and 7 days (in
minutes).  And although I have plenty of disk space available, I still
receive TCP_REFRESH_MISS for files that had been pre-fetched and later
accessed the same day.  Does anyone know why Squid would consider them
STALE?  I thought that setting the min value of the refresh_pattern for
the video file would guarantee freshness.  Not only does the cache
consider it STALE, it then goes and pre-fetches a new copy even though
I know that the video file has not changed.  Any help would be greatly
appreciated.  Thanks.


  


[squid-users] How to Cache aspx Pages?

2008-09-24 Thread BUI18
Hi -  I need to cache aspx pages.  I have read through the squid FAQ for 
caching dynamic content and have tried the following configuration; 
however, it does not seem to cache aspx pages, with or without query 
strings.

A sample link may look like this (this is not a working link) -- 
http://www.domain.com/junk.aspx?id=12345

My current relevant configurations from squid.conf:

# Allows dynamic content with query strings in the path
acl junkname urlpath_regex -i \?
cache allow junkname
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
refresh_pattern -i \.aspx$  1440  90%  2880  ignore-reload

I get a status of TCP_MISS in access.log when I request the page.  Am I missing 
something?  Do I need an entry for http_access as well?

Does anyone see a problem with the above configuration parameters?  Does 
anyone know how to properly cache aspx pages?

Thanks in advance.



  


Re: [squid-users] Object becomes STALE: refresh_pattern min and max

2008-09-24 Thread Itzcak Pechtalt
On Wed, Sep 24, 2008 at 1:39 PM, BUI18 [EMAIL PROTECTED] wrote:
 Hi -

 I have a squid box with tons of disk for the cache_dir
 (hundreds of GB).  I use wget to perform some pre-fetching of large
 video files.  I've set the min and max age to 5 days and 7 days (in
 minutes).  And although I have plenty of disk space available, I still
 receive TCP_REFRESH_MISS for files that had been pre-fetched and later
 accessed the same day.  Does anyone know why Squid would consider them
 STALE?  I thought that setting the min value of the refresh_pattern for
 the video file would guarantee freshness.  Not only does the cache
 consider it STALE, it then goes and pre-fetches a new copy even though
 I know that the video file has not changed.  Any help would be greatly
 appreciated.  Thanks.





Hi,
Check if the video URL changes from request to request. With YouTube
videos, even if the main URL is the same, there is a request ID in the URL
which changes per request.

Itzcak


Re: [squid-users] Object becomes STALE: refresh_pattern min and max

2008-09-24 Thread Michael Alger
On Wed, Sep 24, 2008 at 03:39:16AM -0700, BUI18 wrote:
 I have a squid box with tons of disk for the cache_dir
 (hundreds of GB).  I use wget to perform some pre-fetching of large
 video files.  I've set the min and max age to 5 days and 7 days (in
 minutes).  And although I have plenty of disk space available, I still
 receive TCP_REFRESH_MISS for files that had been pre-fetched and later
 accessed the same day.  Does anyone know why Squid would consider them
 STALE?  I thought that setting the min value of the refresh_pattern for
 the video file would guarantee freshness.  Not only does the cache
 consider it STALE, it then goes and pre-fetches a new copy even though
 I know that the video file has not changed.  Any help would be greatly
 appreciated.  Thanks.

The fact that it's doing TCP_REFRESH_xxx means squid does have a
cached copy which it considers potentially stale. So it's sending an
If-Modified-Since request to the origin server. The origin is then
either saying "yes, it's been modified since you retrieved it --
here's a new one", or it has no idea how to handle IMS and is
sending the whole object regardless.

What Expires: header is the server sending? You can use the -S
switch with wget to show the server response headers when you're
doing your pre-fetch. If they look fine, maybe keep a copy of them
and compare later in the day when squid decides it needs a new one.

I assume you're setting the min and max age via refresh_pattern
lines? Remember that these are only used in the absence of an
Expires header.
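
For reference, a refresh_pattern encoding that 5-day min / 7-day max (in
minutes) would look something like the line below; the \.vid extension is an
assumption based on later posts in this thread:

refresh_pattern -i \.vid$ 7200 100% 10080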


[squid-users] help with page faults

2008-09-24 Thread Nick Duda
What is the best way for me to figure out why, every week, Squid seems to spike 
on page faults, causing everyone using the proxy server to experience 
slow browsing until the squid process is re-cycled? I run SNMP 
monitoring against squid and use the cacti squid templates, so I know when page 
faults happen.

I use squid 2.6 STABLE19. Pretty standard install. Runs on RHEL.

Regards,
Nick Duda



Re: [squid-users] Object becomes STALE: refresh_pattern min and max

2008-09-24 Thread BUI18
Hi - Thanks for responding.  The URL for the video file never changes.

I did some more checking in the Squid logs and this is what I noticed:

File Properties of video file (Pacific Daylight Time (PDT))

Created On: Monday, September 22, 2008, 8:59:35 AM

Modified On: Monday, September 22, 2008, 8:59:35 AM

Accessed On: Today, September 24, 2008, 3:53:12 AM

***
Wget Grabs File (Time in India Standard Time (IST))

--04:38:35--  http://ftp.mydomain.com/websites/data/myvideofile.vid
 => `/WGET/Temp/myvideofile.vid'
04:38:54 (93.91 KB/s) - `/WGET/Temp/myvideofile.vid' saved [1791244/1791244]

The access.log confirms initial pre-fetch by wget.

1222124934.241  18968 192.168.200.4 TCP_MISS/200 1791684 GET 
http://ftp.mydomain.com/websites/data/myvideofile.vid - DIRECT/69.43.136.41 
video/jpeg

UTC = Mon, 22 Sep 2008 23:08:54 GMT

The store.log shows a write from memory to disk:

1222124934.241 SWAPOUT 00 00057B65 1E18E35BDC9307C6BC3FBEFD5B4120A3  200 
1222124765 1222099175 -1 video/jpeg 1791244/1791244 GET 
http://ftp.mydomain.com/websites/data/myvideofile.vid

UTC = Mon, 22 Sep 2008 23:08:54 GMT

***

Then Store.log shows release or removal from cache:

1222253725.068 RELEASE 00 00057B65 605FAC36E93B0CDE81902BBC6C5EC71A  200 
1222124765 1222099175 -1 video/jpeg 1791244/-279 GET 
http://ftp.mydomain.com/websites/data/myvideofile.vid

UTC = Wed, 24 Sep 2008 10:55:25 GMT

Notice the -1 for the expiration header (I do not set one on the object).  My min 
age is 5 days, so I'm not sure why the object would be released from cache in 
less than 2 days.

If the object was released from cache, when the user tried to access the file, 
Squid reports TCP_REFRESH_MISS, which to me means that it was found in cache 
but when it sends an If-Modified-Since request, it thinks that the file has been 
modified (which it was not, as seen by the lastmod date indicated in the 
store.log below).

***

User accessed file (access.log):

1222253742.005  17275 192.168.200.52 TCP_REFRESH_MISS/200 1791688 GET 
http://ftp.mydomain.com/websites/data/myvideofile.vid - DIRECT/69.43.136.41 
video/jpeg

UTC = Wed, 24 Sep 2008 10:55:42 GMT

Then store.log shows a write to disk

1222253742.005 SWAPOUT 00 00088336 1E18E35BDC9307C6BC3FBEFD5B4120A3  200 
1222253755 1222099175 -1 video/jpeg 
1791244/1791244 GET http://ftp.mydomain.com/websites/data/myvideofile.vid

UTC = Wed, 24 Sep 2008 10:55:42 GMT
datehdr: Wed, 24 Sep 2008 10:55:55 GMT
lastmod: Mon, 22 Sep 2008 15:59:35 GMT

Anyone with ideas on why this behavior occurs?

thanks





- Original Message 
From: Itzcak Pechtalt [EMAIL PROTECTED]
To: Squid Users squid-users@squid-cache.org
Sent: Wednesday, September 24, 2008 4:35:59 AM
Subject: Re: [squid-users] Object becomes STALE: refresh_pattern min and max

On Wed, Sep 24, 2008 at 1:39 PM, BUI18 [EMAIL PROTECTED] wrote:
 Hi -

 I have a squid box with tons of disk for the cache_dir
 (hundreds of GB).  I use wget to perform some pre-fetching of large
 video files.  I've set the min and max age to 5 days and 7 days (in
 minutes).  And although I have plenty of disk space available, I still
 receive TCP_REFRESH_MISS for files that had been pre-fetched and later
 accessed the same day.  Does anyone know why Squid would consider them
 STALE?  I thought that setting the min value of the refresh_pattern for
 the video file would guarantee freshness.  Not only does the cache
 consider it STALE, it then goes and pre-fetches a new copy even though
 I know that the video file has not changed.  Any help would be greatly
 appreciated.  Thanks.





Hi,
Check if the video URL changes from request to request. With YouTube
videos, even if the main URL is the same, there is a request ID in the URL
which changes per request.

Itzcak



  


Re: [squid-users] Object becomes STALE: refresh_pattern min and max

2008-09-24 Thread BUI18
Hi -

I went through the same thinking you described below.

I checked the Expires header from the server and we do not set one.  I checked 
via the Fiddler web debug tool.  I also verified with the dev guys here that there 
is no Expires header.  I have set the min and max via refresh_pattern because of 
the absence of the Expires header, thinking that Squid would keep it FRESH.

I recently posted the details of the Squid logs, but will re-post them here so 
that the thread will follow properly (it may help others when searching).

Hoping someone could explain the behavior I see below.


File Properties of video file (Pacific Daylight Time (PDT))

Created On: Monday, September 22, 2008, 8:59:35 AM

Modified On: Monday, September 22, 2008, 8:59:35 AM

Accessed On: Today, September 24, 2008, 3:53:12 AM

***
Wget Grabs File (Time in India Standard Time (IST))

--04:38:35--  http://ftp.mydomain.com/websites/data/myvideofile.vid
=> `/WGET/Temp/myvideofile.vid'
04:38:54 (93.91 KB/s) - `/WGET/Temp/myvideofile.vid' saved [1791244/1791244]

The access.log confirms initial pre-fetch by wget.

1222124934.241  18968 192.168.200.4 TCP_MISS/200 1791684 GET 
http://ftp.mydomain.com/websites/data/myvideofile.vid - DIRECT/69.43.136.41 
video/jpeg

UTC = Mon, 22 Sep 2008 23:08:54 GMT

The store.log shows a write from memory to disk:

1222124934.241 SWAPOUT 00 00057B65 1E18E35BDC9307C6BC3FBEFD5B4120A3  200 
1222124765 1222099175 -1 video/jpeg 1791244/1791244 GET 
http://ftp.mydomain.com/websites/data/myvideofile.vid

UTC = Mon, 22 Sep 2008 23:08:54 GMT

***

Then Store.log shows release or removal from cache:

1222253725.068 RELEASE 00 00057B65 605FAC36E93B0CDE81902BBC6C5EC71A  200 
1222124765 1222099175 -1 video/jpeg 1791244/-279 GET 
http://ftp.mydomain.com/websites/data/myvideofile.vid

UTC = Wed, 24 Sep 2008 10:55:25 GMT

Notice
the -1 for the expiration header (I do not set one on the object).  My min
age is 5 days, so I'm not sure why the object would be released from
cache in less than 2 days.

If the object was released from
cache, when the user tried to access the file, Squid reports
TCP_REFRESH_MISS, which to me means that it was found in cache but when
it sends an If-Modified-Since request, it thinks that the file has been
modified (which it was not, as seen by the lastmod date indicated in the
store.log below).

***

User accessed file (access.log):

1222253742.005  17275 192.168.200.52 TCP_REFRESH_MISS/200 1791688 GET 
http://ftp.mydomain.com/websites/data/myvideofile.vid - DIRECT/69.43.136.41 
video/jpeg

UTC = Wed, 24 Sep 2008 10:55:42 GMT

Then store.log shows a write to disk

1222253742.005 SWAPOUT 00 00088336 1E18E35BDC9307C6BC3FBEFD5B4120A3  200 
1222253755 1222099175 -1 video/jpeg 
1791244/1791244 GET http://ftp.mydomain.com/websites/data/myvideofile.vid

UTC = Wed, 24 Sep 2008 10:55:42 GMT
datehdr: Wed, 24 Sep 2008 10:55:55 GMT
lastmod: Mon, 22 Sep 2008 15:59:35 GMT






- Original Message 
From: Michael Alger [EMAIL PROTECTED]
To: squid-users@squid-cache.org
Sent: Wednesday, September 24, 2008 4:49:38 AM
Subject: Re: [squid-users] Object becomes STALE: refresh_pattern min and max

On Wed, Sep 24, 2008 at 03:39:16AM -0700, BUI18 wrote:
 I have a squid box with tons of disk for the cache_dir
 (hundreds of GB).  I use wget to perform some pre-fetching of large
 video files.  I've set the min and max age to 5 days and 7 days (in
 minutes).  And although I have plenty of disk space available, I still
 receive TCP_REFRESH_MISS for files that had been pre-fetched and later
 accessed the same day.  Does anyone know why Squid would consider them
 STALE?  I thought that setting the min value of the refresh_pattern for
 the video file would guarantee freshness.  Not only does the cache
 consider it STALE, it then goes and pre-fetches a new copy even though
 I know that the video file has not changed.  Any help would be greatly
 appreciated.  Thanks.

The fact that it's doing TCP_REFRESH_xxx means squid does have a
cached copy which it considers potentially stale. So it's sending an
If-Modified-Since request to the origin server. The origin is then
either saying "yes, it's been modified since you retrieved it --
here's a new one", or it has no idea how to handle IMS and is
sending the whole object regardless.

What Expires: header is the server sending? You can use the -S
switch with wget to show the server response headers when you're
doing your pre-fetch. If they look fine, maybe keep a copy of them
and compare later in the day when squid decides it needs a new one.

I assume you're setting the min and max age via refresh_pattern
lines? Remember that these are only used in the absence of an
Expires header.



  


Re: [squid-users] Out of memory problem in huge cache

2008-09-24 Thread Amos Jeffries

Itzcak Pechtalt wrote:

Hi,

I have Squid 2.6 running on RedHat Linux with 8 GB memory, and
configured according to Squid wiki recomendations in
http://wiki.squid-cache.org/SquidFaq/SquidMemory#head-09818ad4cb8a1dfea1f51688c41bdf4b79a69991

However, it crashs every couple of hours with FATAL: xmalloc: Unable
to allocate xxx bytes! message
I wonder if I made something bad in configuration or it's a memory leak problem.



Is your squid built with 64-bit support and --with-large-files, to 
support >2GB total memory allocation?


Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


Re: [squid-users] Custom header based authentication module

2008-09-24 Thread Amos Jeffries

Christoph Rabel wrote:

Hi!

I am rather new to squid and I hope my question is fine here.

We have a single sign on service in our company, which essentially sets
a custom header after authentication.
Now, we would like to use this header for squid authentication too.

So, a request which has a valid ssoheader should be considered
authenticated and allowed to access the internet. A user without such a
header (or an invalid one) should be redirected to the login page.

I understand from the documentation that I have to implement a custom auth
module which checks the credentials, but it also says
(http://wiki.squid-cache.org/SquidFaq/ProxyAuthentication) that the
/Authorization/ request header is given to this module, and if the
header is not present, a 407 is sent.

To condense my question: Is it possible to specify which header
information is given to the auth module? And to specify that no 407 but
a redirect is sent?


Not for auth modules. They only use the regular Proxy-Authentication: 
headers. Maybe WWW-Authentication: header in accelerators.


For checking custom headers you need to make your authenticator an 
external_acl_type helper. And pass it the custom request header by name.
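
As a rough sketch (the header name, helper path and login URL here are made 
up for illustration), the squid.conf side can look like this; %{X-SSO-Header} 
hands the named request header to the helper, and deny_info with a URL makes 
squid send a redirect to the login page instead of a 407:

external_acl_type sso_check ttl=60 %{X-SSO-Header} /usr/local/bin/check_sso
acl sso_ok external sso_check
deny_info https://sso.example.com/login sso_ok
http_access deny !sso_ok
# ... normal allow rules follow

The helper receives the header value one line per lookup and answers OK or ERR.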




Another thing that bothers me is SSL requests. What happens when the
proxy encounters a request for an https site? Can it access the cookie
anyway?


Depends on how Squid receives the HTTPS request.
a) as a plain URL for squid to handle. Okay, squid has access to all the 
headers etc.


b) as a CONNECT tunnel setup request. Squid has access to the destination 
hostname and port, and very little else. The sslbump feature coming in 3.1 
has been designed to get around those limits, but it has its own issues with 
privacy, doing a man-in-the-middle attack on your users.



Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


Re: [squid-users] How to Cache aspx Pages?

2008-09-24 Thread Amos Jeffries

BUI18 wrote:

Hi -  I need to cache aspx pages.  I have read through the squid FAQ for 
caching dynamic content and have tried the following configuration; 
however, it does not seem to cache aspx pages, with or without query 
strings.

A sample link may look like this (this is not a working link) -- 
http://www.domain.com/junk.aspx?id=12345

My current relevant configurations from squid.conf:

# Allows dynamic content with query strings in the path
acl junkname urlpath_regex -i \?
cache allow junkname
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
refresh_pattern -i \.aspx$  1440  90%  2880  ignore-reload

I get a status of TCP_MISS in access.log when I request the page.  Am I missing 
something?  Do I need an entry for http_access as well?

Does anyone see a problem with the above configuration parameters?  Does 
anyone know how to properly cache aspx pages?



Um, if you have no cache_peer entries it's safe to remove the QUERY acl 
bits altogether, replacing them with a new refresh_pattern 
"(/cgi-bin/|\?) 0 0% 0" just above the '.' one.
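
A sketch of the resulting refresh_pattern block, combining the lines already 
posted with the suggested replacement (order matters, since the first matching 
pattern wins):

refresh_pattern -i \.aspx$      1440  90%   2880  ignore-reload
refresh_pattern (/cgi-bin/|\?)  0     0%    0
refresh_pattern .               0     20%   4320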


Well, first check the cacheability engine to see if the pages have 
expiry information. If Cache-Control or Expires: headers are set, 
refresh_pattern has no effect.


Expect to possibly see a MISS on the first request; it's the second request 
immediately after that should be a HIT.


Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


Re: [squid-users] help with page faults

2008-09-24 Thread Amos Jeffries

Nick Duda wrote:

What is the best way for me to figure out why, every week, Squid seems to spike 
on page faults, causing everyone using the proxy server to experience 
slow browsing until the squid process is re-cycled? I run SNMP 
monitoring against squid and use the cacti squid templates, so I know when page 
faults happen.



Quite possibly squid restarting itself after a fatal error. The cache 
re-scan and recovery can slow general access down.


Check cache.log around the time of the fault. If possible or necessary, you 
may need to raise the debug_options "ALL,n" level (n) up as close to 9 
as you can for full info on what squid is doing. It may cause very large 
logs to accumulate very quickly though.


Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


Re: [squid-users] Object becomes STALE: refresh_pattern min and max

2008-09-24 Thread Amos Jeffries

BUI18 wrote:

Hi - Thanks for responding.  The URL for the video file never changes.



What release of Squid?

Did you check the Expires header properly from the transfer rather than 
from the (apparently untrustworthy) info in the store log?




I did some more checking in the Squid logs and this is what I noticed:

File Properties of video file (Pacific Daylight Time (PDT))

Created On: Monday, September 22, 2008, 8:59:35 AM

Modified On: Monday, September 22, 2008, 8:59:35 AM

Accessed On: Today, September 24, 2008, 3:53:12 AM

***
Wget Grabs File (Time in India Standard Time (IST))

--04:38:35--  http://ftp.mydomain.com/websites/data/myvideofile.vid
 => `/WGET/Temp/myvideofile.vid'
04:38:54 (93.91 KB/s) - `/WGET/Temp/myvideofile.vid' saved [1791244/1791244]

The access.log confirms initial pre-fetch by wget.

1222124934.241  18968 192.168.200.4 TCP_MISS/200 1791684 GET 
http://ftp.mydomain.com/websites/data/myvideofile.vid - DIRECT/69.43.136.41 
video/jpeg

UTC = Mon, 22 Sep 2008 23:08:54 GMT

The store.log shows a write from memory to disk:

1222124934.241 SWAPOUT 00 00057B65 1E18E35BDC9307C6BC3FBEFD5B4120A3  200 
1222124765 1222099175 -1 video/jpeg 1791244/1791244 GET 
http://ftp.mydomain.com/websites/data/myvideofile.vid

UTC = Mon, 22 Sep 2008 23:08:54 GMT

***

Then Store.log shows release or removal from cache:

1222253725.068 RELEASE 00 00057B65 605FAC36E93B0CDE81902BBC6C5EC71A  200 
1222124765 1222099175 -1 video/jpeg 1791244/-279 GET 
http://ftp.mydomain.com/websites/data/myvideofile.vid

UTC = Wed, 24 Sep 2008 10:55:25 GMT

Notice the -1 for the expiration header (I do not set one on the object).  My min 
age is 5 days, so I'm not sure why the object would be released from cache in 
less than 2 days.

If the object was released from cache, when the user tried to access the file, 
Squid reports TCP_REFRESH_MISS, which to me means that it was found in cache 
but when it sends an If-Modified-Since request, it thinks that the file has been 
modified (which it was not, as seen by the lastmod date indicated in the 
store.log below).

***

User accessed file (access.log):

1222253742.005  17275 192.168.200.52 TCP_REFRESH_MISS/200 1791688 GET 
http://ftp.mydomain.com/websites/data/myvideofile.vid - DIRECT/69.43.136.41 
video/jpeg

UTC = Wed, 24 Sep 2008 10:55:42 GMT

Then store.log shows a write to disk

1222253742.005 SWAPOUT 00 00088336 1E18E35BDC9307C6BC3FBEFD5B4120A3  200 
1222253755 1222099175 -1 video/jpeg 
1791244/1791244 GET http://ftp.mydomain.com/websites/data/myvideofile.vid

UTC = Wed, 24 Sep 2008 10:55:42 GMT
datehdr: Wed, 24 Sep 2008 10:55:55 GMT
lastmod: Mon, 22 Sep 2008 15:59:35 GMT


Anyone with ideas on why this behavior occurs?

thanks





- Original Message 
From: Itzcak Pechtalt [EMAIL PROTECTED]
To: Squid Users squid-users@squid-cache.org
Sent: Wednesday, September 24, 2008 4:35:59 AM
Subject: Re: [squid-users] Object becomes STALE: refresh_pattern min and max

On Wed, Sep 24, 2008 at 1:39 PM, BUI18 [EMAIL PROTECTED] wrote:

Hi -

I have a squid box with tons of disk for the cache_dir
(hundreds of GB).  I use wget to perform some pre-fetching of large
video files.  I've set the min and max age to 5 days and 7 days (in
minutes).  And although I have plenty of disk space available, I still
receive TCP_REFRESH_MISS for files that had been pre-fetched and later
accessed the same day.  Does anyone know why Squid would consider them
STALE?  I thought that setting the min value of the refresh_pattern for
the video file would guarantee freshness.  Not only does the cache
consider it STALE, it then goes and pre-fetches a new copy even though
I know that the video file has not changed.  Any help would be greatly
appreciated.  Thanks.






Hi,
Check if the video URL changes from request to request. With YouTube
videos, even if the main URL is the same, there is a request ID in the URL
which changes per request.

Itzcak



  



--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


RE: [squid-users] Tproxy iptables rules issue

2008-09-24 Thread Ritter, Nicholas
As Amos said, we are close to a solution, I am in the middle of load
testing it to make sure things are ok.

Load testing has been delayed a little more than planned because I
am having that same annoying GRE tunnel problem I noted back in May. It
appears that the GRE tunnel setup on the Linux server running Squid is
not the same when the WCCP router identifier is not on the same logical
IP subnet...either that or there is a screwy problem between IOS version
numbers.

As soon as I finish the load testing, I will update the wiki article,
because it is way off now.

Nick

 -Original Message-
 From: Amos Jeffries [mailto:[EMAIL PROTECTED] 
 Sent: Tuesday, September 23, 2008 11:42 PM
 To: Dan Letkeman
 Cc: Ritter, Nicholas; squid-users
 Subject: Re: [squid-users] Tproxy iptables rules issue
 
  Did you ever get this going?  I have successfully setup a 
  squid2.6/tproxy/iptables server, and I have successfully setup a 
  squid2.6/wccp server and now I'm trying to combine both of 
 them, but I 
  think the iptables commands i'm trying are wrong.  Do you have any 
  suggestions?
 
 Squid 2.6 does not have Tproxy v4.1+ support. Nick was 
 testing a 3-HEAD Squid server.
 
 We just got it going yesterday :-) the patch to Squid-3 is in 
 HEAD now.
 Though a few alterations to the kernel side of TPROXY were 
 also needed, which may not have been added to the Balabit 
 side quite yet.
 
 The How-to about kernel patching is still awaiting a few 
 adjustments due in shortly.
 
 
 Amos
 
 
  Thanks,
  Dan.
 
  On Fri, May 30, 2008 at 3:58 PM, Ritter, Nicholas 
  [EMAIL PROTECTED] wrote:
  What exactly are the redirection rules for wccp/iptables 1.4/squid 
  2.6/tproxy look like? I have browsed the Internet plus 
 messed with it 
  for a while now and found that the README rules don't 
 fully work, and 
  the examples on the Internet don't fully work.
 
  Symptomatically, I see the router redirecting via the GRE 
 tunnel, the 
  squid box sees the gre packets (2.6 kernel), but ifconfig does not 
  show the GRE interface counters incrementing, and the 
 squid service 
  run in debug mode shows no transactions. Something is wrong with 
  either my iptables rules or my GRE tunnel setup. I don't 
 think it is 
  the GRE tunnel because I set it up the same exact way as I did the 
  non-tproxy squid boxes that I have in the same setup which 
 are working.
 
  Any help would be a appreciated. I can provide my rule 
 setup, etc. if 
  needed. My knowledge and direct interaction is limited 
 with iptables, 
  which is one more reason why I think the problem is there. 
 BTW - my 
  system log does show the tproxy module loading.
 
  Nick
 
 
 
 
 
 


Re: [squid-users] Custom header based authentication module

2008-09-24 Thread Christoph Rabel

Amos Jeffries wrote:

Christoph Rabel wrote:

To condense my question: Is it possible to specify which header
information is given to the auth module? And to specify that no 407 but
a redirect is sent?


Not for auth modules. They only use the regular Proxy-Authentication: 
headers. Maybe WWW-Authentication: header in accelerators.


For checking custom headers you need to make your authenticator an 
external_acl_type helper. And pass it the custom request header by name.

Ok, just looked that up in the manual, looks doable ;-)


Another thing that bothers me is SSL requests. What happens when the
proxy encounters a request for an https site? Can it access the cookie
anyway?

Depends on how Squid receives the HTTPS request.
a) as a plain URL for squid to handle. Okay, squid has access to all 
the headers etc.


b) as a CONNECT tunnel setup request. Squid has access to the destination 
hostname and port, and very little else. The sslbump feature coming in 3.1 
has been designed to get around those limits, but it has its own issues 
with privacy, doing a man-in-the-middle attack on your users.

Hmm, hmm...

Because authentication by the proxy is done in plain text, the security 
department requests that we find another solution. It should not be 
possible to simply sniff out all the passwords. So we thought that we could 
use the SSO cookie we already have, but I fear that it is not possible 
to do this.


Let me rephrase my question:

How do other people handle the need for secure proxy authentication? Is 
there some kind of trick or browser extension or whatever? We have to 
support IE 7.


Thanks, Christoph


[squid-users] Squid-Authentification

2008-09-24 Thread pooner

Hi,

I'm trying to authenticate squid users against an HSQL DB.
I'm using a PHP script which I found. It seems to be okay: squid was
working with it, and authentication was OK. Now I have another script which
connects to the database and returns a result value (0=ok, 1=err).
If I modify the PHP script to check the return value, authentication doesn't
work for users from the database, but it works for the test users (fo and
hello) from the script. I get the pages I want after authenticating
with the script users, but not with the DB users. The squid access.log then
says TCP_DENIED.

If I test it on the command line, it's okay for both.
Example:
./squid_helper.php
fo bar
OK
root roter
OK
hello world
OK

I don't know where the error is.
Here is the Script:
#!/usr/bin/php
<?php
if (!defined('STDIN')) {
    define('STDIN', fopen('php://stdin', 'r'));
}
while (!feof(STDIN)) {
    $line = trim(fgets(STDIN));
    $fields = explode(' ', $line);
    $username = rawurldecode($fields[0]); // RFC 1738 decoding
    $password = rawurldecode($fields[1]); // RFC 1738 decoding
    $last_line = exec("cd /root/bin/Ausgabe && java Ausgabe "
        .$username." ".$password, $output, $retval);

//  fwrite(STDOUT, $retval);
    if ($username == 'hello'
            and $password == 'world') {
        fwrite(STDOUT, "OK\n");
        //fwrite(STDOUT, "eins\n");
    } else if ($username == 'fo'
            and $password == 'bar') {
        fwrite(STDOUT, "OK\n");
        //fwrite(STDOUT, "zwei\n");
    } else if ($retval == 0) {
        fwrite(STDOUT, "OK\n");
        //fwrite(STDOUT, "drei\n");
    } else {
        fwrite(STDOUT, "ERR\n");
    }

//  if ($retval == 0) { fwrite(STDOUT, "OK\n"); }
//  else { fwrite(STDOUT, "ERR\n"); }
}
?>
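
(For reference, a basic auth helper like this is normally hooked into squid 
along these lines - the path and realm are only examples:

auth_param basic program /usr/local/bin/squid_helper.php
auth_param basic children 5
auth_param basic realm proxy
acl authed proxy_auth REQUIRED
http_access allow authed

squid writes one "username password" line to the helper per authentication 
attempt and expects a single OK or ERR line back, which is what the script 
above emits.)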

Do you have any ideas?


-- 
View this message in context: 
http://www.nabble.com/Squid-Authentification-tp19649606p19649606.html
Sent from the Squid - Users mailing list archive at Nabble.com.



[squid-users] FTP Read Only

2008-09-24 Thread Jonathan Chretien

Hello All.

OS: Fedora 8
Squid: squid-2.6.STABLE19-1.fc8

I have a strange problem this morning.

I was able via Windows Explorer or Internet Explorer to access our company FTP 
site by using ftp://username:[EMAIL PROTECTED] or ftp://[EMAIL PROTECTED]

This morning, if I try to access this FTP site, I get a message from Microsoft 
indicating that I cannot access this FTP Site because my proxy server is 
configured for Read-Only, and if I need to upload files, I need to use 
another proxy.

I installed the latest updates on my Fedora 8 machine last week, and I don't 
know if the problem is related to that, because this is the first time since the 
update that I've needed to use it; I know that before the update it was working. 
The problem is that I had last used it many weeks before the update, so I don't 
know whether it was still working just one minute after the update or not.

I have nothing special in my access.log; I just have the entries indicating that I'm 
accessing the ftp site.
In cache.log, I have absolutely nothing.

Can someone help me with this problem ?

I have no idea what to check.

Thanks

Jonathan





Re: [squid-users] Object becomes STALE: refresh_pattern min and max

2008-09-24 Thread Michael Alger
On Wed, Sep 24, 2008 at 05:29:52AM -0700, BUI18 wrote:
 I went through the same thinking you described below.
 
 I checked the Expires header from the server and we do not set
 one.  I checked via the Fiddler web debug tool.  I also verified with
 the dev guys here that there is no Expires header.  I have set the min
 and max via refresh_pattern because of the absence of the Expires
 header, thinking that Squid would keep it FRESH.
 
 Notice the -1 for the expiration header (I do not set one on the
 object).  My min age is 5 days so I'm not sure why the object
 would be released from cache in less than 2 days.
 
 If the object was released from cache, when the user tried to
 access the file, Squid reports TCP_REFRESH_MISS, which to me means
 that it was found in cache but when it sends an If-Modified-Since
 request, it thinks that the file has been modified (which it was
 not, as seen by the lastmod date indicated in the store.log below).

Interesting that it's caching the file for 2 days. What are the full
headers returned with the object? Any other cache control headers?

Is there any chance you have a conflicting refresh_pattern, so the
freshness rules being applied aren't the ones you're expecting? It may
be worth doing some tests with very small max ages to confirm it's
matching the right rule.
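
For example, a deliberately tiny throwaway rule like the one below (the \.vid
extension is an assumption based on the rest of this thread) would make a
mismatch show up within minutes rather than days:

refresh_pattern -i \.vid$ 0 0% 10

If objects still go stale on a different schedule than this predicts, the
requests are matching some other refresh_pattern line.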


[squid-users] round robin question

2008-09-24 Thread jeff donovan

greetings

How could I go about load balancing two or more transparent proxy  
squid servers ?

No caching involved. This is strictly for access.

i thought about dns round robin, but that didn't make sense since i am  
forwarding all connections to a single interface.


any insight would be helpful

-j



Re: [squid-users] round robin question

2008-09-24 Thread Kinkie
On Wed, Sep 24, 2008 at 5:16 PM, jeff donovan [EMAIL PROTECTED] wrote:
 greetings

 How could I go about load balancing two or more transparent proxy squid
 servers ?
 No caching involved. This is strictly for access.

 i thought about dns round robin, but that didn't make sense since i am
 forwarding all connections to a single interface.

 any insight would be helpful


So both instances are running on the same (bridging?) system?
Can you give some more details?



-- 
 /kinkie


Re: [squid-users] running squid on VERY low-end devices

2008-09-24 Thread Leonardo Rodrigues Magalhães



John Doe escreveu:
anyway, i would love to see something similar in C that could be compiled 
and run with less memory than perl requires ... unfortunately i can't code even a 
'hello world' in C !!! But i'll try that, shouldn't be too difficult.



Here's a basic C example:

#include <stdio.h>
#include <time.h>

#define INPUTSIZE 256

int main(int argc, char *argv[]) {

  char input[INPUTSIZE];
  struct tm *local;
  time_t t;
  int year;

  /* one OK/ERR answer per input line received from squid */
  while (fgets(input, sizeof(input), stdin)) {
    t = time(NULL);
    local = localtime(&t);
    year = local->tm_year + 1900;
    if (year < 2008) { printf("ERR\n"); } else { printf("OK\n"); }
    fflush(stdout);
  }

  return 0;
}

JD

PS: I put a small INPUTSIZE and do not check for overflow...

  


   Nice John, thanks very much for that !! Got it compiled and running, 
no problem. Anyway, i found it interesting that squid is reporting a high 
average service time for this do-almost-nothing external ACL (from 
cachemgr.cgi, external acl stats):


External ACL Statistics: verifica-ano
Cache size: 7
program: /etc/squid/verifica-ano
number running: 3 of 3
requests sent: 269
replies received: 269
queue length: 0
--- avg service time: 263.38 msec


from the other machine, which uses squid_ldap_group as the external acl, i have:

External ACL Statistics: ldap_group
Cache size: 965
program: /usr/bin/squid_ldap_group
number running: 5 of 5
requests sent: 6991
replies received: 6991
queue length: 0
--- avg service time: 8.26 msec



The second one, as i said, does LDAP connections and stuff, and is 
running pretty fast.


i would expect the check-year helper to run faster, despite the fact it's 
working just fine !!!


   any hint on that ???


--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
[EMAIL PROTECTED]
My SPAMTRAP, do not email it






[squid-users] latency issues squid2.7 WCCP

2008-09-24 Thread Ryan Goddard

Squid 2.7.STABLE1-20080528 on Debian Linux 2.6.19.7
running on quad dual-core 2.6GHz Opterons with 32 GB RAM; 8x140GB disk 
partitions

using WCCP L2 redirects transparently from a Cisco 4948 GigE switch

Server has one GigE NIC for the incoming redirects and two GigE NICs for 
outbound http requests.
Using IPTables to port forward HTTP to Squid; no ICP, auth, etc.; 
strictly a web cache using heap/LFUDA replacement, 
and 16GB of memory allocated, with memory pools on, no limit.

Used in an ISP environment, accommodating approx. 8k predominantly cable 
modem customers during peak.


The issue we're experiencing is some web pages taking in excess of 20 
seconds to load, and marked latency for customers 
running web-based speed tests, etc.
cache.log and access.log aren't indicating any errors or timeouts; the 
system operates 96 DNS instances and 32k file descriptors 
(neither has gotten maxed out yet).
General Runtime Info from Cachemgr taken during pre-peak usage:
Start Time:    Tue, 23 Sep 2008 18:07:37 GMT
Current Time:  Tue, 23 Sep 2008 21:00:49 GMT

Connection information for squid:
   Number of clients accessing cache:  3382
   Number of HTTP requests received:   2331742
   Number of ICP messages received:    0
   Number of ICP messages sent:        0
   Number of queued ICP replies:       0
   Request failure ratio:              0.00
   Average HTTP requests per minute since start:  13463.4
   Average ICP messages per minute since start:   0.0
   Select loop called: 11255153 times, 0.923 ms avg
Cache information for squid:
   Request Hit Ratios:         5min: 42.6%, 60min: 40.0%
   Byte Hit Ratios:            5min: 21.2%, 60min: 18.6%
   Request Memory Hit Ratios:  5min: 18.3%, 60min: 17.2%
   Request Disk Hit Ratios:    5min: 33.6%, 60min: 33.3%
   Storage Swap size:          952545580 KB
   Storage Mem size:           8237648 KB
   Mean Object Size:           40.43 KB
   Requests given to unlinkd:  0
Median Service Times (seconds)  5 min / 60 min:
   HTTP Requests (All):   0.19742  0.12106
   Cache Misses:          0.27332  0.17711
   Cache Hits:            0.08265  0.03622
   Near Hits:             0.27332  0.16775
   Not-Modified Replies:  0.02317  0.00865
   DNS Lookups:           0.09535  0.04854
   ICP Queries:           0.0      0.0
Resource usage for squid:
   UP Time:    10391.501 seconds
   CPU Time:   4708.150 seconds
   CPU Usage:  45.31%
   CPU Usage, 5 minute avg:   33.29%
   CPU Usage, 60 minute avg:  33.36%
   Process Data Segment Size via sbrk(): 1041332 KB
   Maximum Resident Size: 0 KB
   Page faults with physical i/o: 4
Memory usage for squid via mallinfo():
   Total space in arena:  373684 KB
   Ordinary blocks:       372642 KB  809 blks
   Small blocks:          0 KB       0 blks
   Holding blocks:        216088 KB  21 blks
   Free Small blocks:     0 KB
   Free Ordinary blocks:  1041 KB
   Total in use:          588730 KB  100%
   Total free:            1041 KB    0%
   Total size:            589772 KB
Memory accounted for:
   Total accounted:     11355185 KB
   memPoolAlloc calls:  439418241
   memPoolFree calls:   378603777
File descriptor usage for squid:
   Maximum number of file descriptors:   32000
   Largest file desc currently in use:   9171
   Number of file desc currently in use: 8112
   Files queued for open:                2
   Available number of file descriptors: 23886
   Reserved number of file descriptors:  100
   Store Disk files open:                175
   IO loop method: epoll
Internal Data Structures:
   23570637 StoreEntries
   532260 StoreEntries with MemObjects
   531496 Hot Object Cache Items
   23561001 on-disk objects

Generated Tue, 23 Sep 2008 21:00:47 GMT, by 
cachemgr.cgi/[EMAIL PROTECTED]



TCPDUMP shows packets traversing all interfaces as expected; bandwidth 
to both upstream providers isn't being maxed, 
and when Squid is shut down, http traffic loads much faster and without 


Where/what else can I look at for the cause of the latency?  It becomes 
significantly worse during peak use - but as 
we're not being choked on bandwidth, and things greatly improve when I 
shut down squid, that narrows it to something 
on the server.  Is the amount of activity overloading a single squid 
process?  I'm not seeing any I/O errors in the logs and haven't

found any evidence the kernel is under distress.
Any pointers are greatly appreciated.
thanks
-Ryan





Re: [squid-users] Out of memory problem in huge cache

2008-09-24 Thread Itzcak Pechtalt
My Squid isn't built with 64-bit support.
My RedHat Linux supports up to 4GB of memory per process.
I tested it with a short test program which tries to allocate memory
until it fails,
and only after 4GB of allocations does it fail to allocate more.

thanks

Itzcak

On Wed, Sep 24, 2008 at 3:56 PM, Amos Jeffries [EMAIL PROTECTED] wrote:
 Itzcak Pechtalt wrote:

 Hi,

 I have Squid 2.6 running on RedHat Linux with 8 GB memory, and
 configured according to Squid wiki recomendations in

 http://wiki.squid-cache.org/SquidFaq/SquidMemory#head-09818ad4cb8a1dfea1f51688c41bdf4b79a69991

 However, it crashes every couple of hours with a "FATAL: xmalloc: Unable
 to allocate xxx bytes!" message.
 I wonder if I made a configuration mistake or it's a memory leak
 problem.


 Is your squid built with 64-bit support and --with-large-files to support
 >2GB total memory allocation?

 Amos
 --
 Please use Squid 2.7.STABLE4 or 3.0.STABLE9



Re: [squid-users] How to limit duration of SSL connections

2008-09-24 Thread Jason Voorhees
Right now I don't have access to squid's logs, but I remember that there 
wasn't enough usable information in them.


I'd block SSL connections to numeric IPs, but there are some websites 
(including Hotmail among others) that clients connect to under the same 
pattern.


I'll post again to the list when I have access.log on my PC again.

Amos Jeffries escribió:

Hi friends:

I'm running Squid on a server running Debian Etch for a customer of mine.
I'm using sarg to generate reports of each user behind Squid accessing the
Internet.
There are a lot of restrictions on non-work-related websites like
online music, webchats, MSN, Yahoo, hi5, and other sites good for
wasting time. All of my rules are blocking them perfectly, except that
some users - I do not know how - connect to random IP addresses on port
443 using (I assume) SSL tunnels.

Those connections are too long; they have a duration of 1 minute, 1 hour,
even 5 or 8 hours, as I see in my sarg reports.

I was working with a bash script that parses access.log and detects
those IP addresses to block them later, but the same users always find
different IP addresses to bypass Squid.
I believe they're using some kind of tunneling software like hopster,
ultrasurf, freegate or who knows what!


Sounds a bit like skype. What does access.log show for one of these
connections?
ie "CONNECT 1.2.3.4:443 HTTP/1.0" ??
  http://wiki.squid-cache.org/KnowledgeBase/FilteringChat


I'm not allowing any traffic to pass through my firewall; users can only reach
the Internet through Squid.

Is there a way to detect this kind of tunneling software? I was
thinking of limiting the duration of SSL connections, since a normal SSL
request over https takes just a few seconds, right?


No, a connection takes as long as the client's web browser needs it to. I've
had sessions with my bank in excess of an hour at times.


Is squid able to
limit how long an SSL connection can be?


Looks like you want client_lifetime, but take note of the WARNING...
http://www.squid-cache.org/Versions/v3/3.0/cfgman/client_lifetime.html
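
(For illustration, it is a one-line directive in squid.conf, e.g.:

client_lifetime 2 hours

but per that WARNING, setting it too low will also cut off legitimate
long-lived connections, not just the tunnels.)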



Thanks,... and sorry .. My english isn't good



Amos



Re: [squid-users] running squid on VERY low-end devices

2008-09-24 Thread John Doe
 Nice John, thanks very much for that !! Got it compiled and
 running, no problem. Anyway, i found it interesting that squid is
 reporting a high average service time for this do-almost-nothing
 external ACL (from cachemgr.cgi, external acl stats):
 any hint on that ???

Did you compare with the perl version...?

First, try it from the shell...
Tested without the while loop and it looks fast (on my workstation):
  real0m0.001s
  user0m0.000s
  sys 0m0.001s
With one fgets, I get in the worst case:
  real0m0.002s
  user0m0.000s
  sys 0m0.002s

If it is also fast, maybe it is the fflush that is slow...

JD


  



[squid-users] Video players not proxy aware

2008-09-24 Thread Steven Engebretson
Good morning,

I am having an issue with squid 2.6 stable 19.

Many video players on the Mac are not proxy aware, and fail to show a login 
screen for the new video window.  Is there a simple way to resolve this from 
squid, rather than putting an ACL in for each domain with an allow?  We do 
require all users to authenticate prior to access.


-Steven E.



Re: [squid-users] Custom header based authentication module

2008-09-24 Thread Amos Jeffries

Christoph Rabel wrote:

Amos Jeffries wrote:

Christoph Rabel wrote:

To condense my question: Is it possible to specify which header
information is given to the auth module? And to specify that no 407 but
a redirect is sent?


Not for auth modules. They only use the regular Proxy-Authentication: 
headers. Maybe WWW-Authentication: header in accelerators.


For checking custom headers you need to make your authenticator an 
external_acl_type helper. And pass it the custom request header by name.

Ok, just looked that up in the manual, looks doable ;-)


Another thing that bothers me is SSL requests. What happens when the
proxy encounters a request for an https site? Can it access the cookie
anyway?

Depends on how Squid receives the HTTPS request.
a) as a plain URL for squid to handle. Okay, squid has access to all 
the headers etc.


b) as a CONNECT tunnel setup request. Squid has access to the destination 
hostname and port, and very little else. The sslbump feature coming in 3.1 
has been designed to get around those limits, but it has its own issues 
with privacy, doing a man-in-the-middle attack on your users.

Hmm, hmm...

Because authentication by the proxy is done in plain text, the security 
department requests that we find another solution. It should not be 
possible to simply sniff out all the passwords. So we thought that we could 
use the SSO cookie we already have, but I fear that it is not possible 
to do this.


Squid can handle digest authentication for proxy auth.
However, proxy-auth is not being done when you use your custom 
headers. They should not include the password in plain text anyway, in case 
they leak.




Let me rephrase my question:

How do other people handle the need for secure proxy authentication? Is 
there some kind of trick or browser extension or whatever? We have to 
support IE 7.


Most don't care and use basic auth. It only occurs between squid and the 
client anyway (usually internal network stuff).


Those who do care use digest auth (an encrypted hash of the password),
or HTTPS (TLS) between the client and squid. Or both, for the very paranoid.
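
A minimal digest setup sketch (the helper path and password file are
examples; digest_pw_auth is the helper shipped with squid):

auth_param digest program /usr/lib/squid/digest_pw_auth /etc/squid/digest_passwd
auth_param digest realm proxy
acl authed proxy_auth REQUIRED
http_access allow authed

With digest, the browser sends a hash computed from the password and a nonce,
so the password itself never crosses the wire in the clear.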

Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


[squid-users] HDD Configuration Recommendations

2008-09-24 Thread Chris Nighswonger
Hi all,
  I'm preparing to move my squid to new hardware. I have two 500GB
SATA HDD's in the new box which will be used to store squid's cache
on. Any suggestions on the best raid config for these guys so as to
maximize performance?

Regards,
Chris

--
Christopher Nighswonger
Faculty Member
Network  Systems Director
Foundations Bible College  Seminary
www.foundations.edu
www.fbcradio.org


Re: [squid-users] FTP Read Only

2008-09-24 Thread Amos Jeffries

Jonathan Chretien wrote:

Hello All.

OS: Fedora 8
Squid: squid-2.6.STABLE19-1.fc8

I have a strange problem this morning.

I was able via Windows Explorer or Internet Explorer to access our company FTP 
site by using ftp://username:[EMAIL PROTECTED] or ftp://[EMAIL PROTECTED]

This morning, if I try to access this FTP site, I get a message from Microsoft 
indicating that I cannot access this FTP Site because my proxy server is 
configured for Read-Only, and if I need to upload files, I need to use 
another proxy.

I installed the latest updates on my Fedora 8 machine last week, and I don't 
know if the problem is related to that, because this is the first time since the 
update that I've needed to use it; I know that before the update it was working. 
The problem is that I had last used it many weeks before the update, so I don't 
know whether it was still working just one minute after the update or not.

I have nothing special in my access.log; I just have the entries indicating that I'm 
accessing the ftp site.
In cache.log, I have absolutely nothing.

Can someone help me with this problem ?



Set debug_options 9,5 in your squid.conf and reconfigure to see the FTP 
stuff going past.
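
(In squid.conf terms that is a section,level pair; section 9 is squid's FTP
code. So for example:

debug_options ALL,1 9,5

then run squid -k reconfigure.)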


Sounds like IE is replacing whatever error message squid produced to tell you 
about the problem.


Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE9


RE: [squid-users] Insert Header or Footer into retrieved pages?

2008-09-24 Thread Rodre Ghorashi-Zadeh

Hello,

The ICAP solution seems good; however, there are no open source/GPL ICAP servers out 
there, except for the one at http://icap-server.sourceforge.net , which 
hasn't been maintained since 2002 and is based on Python 2.1. I downloaded and 
installed/set it up, and although I could put requests through it, it wasn't 
modifying the responses by injecting the .gif URL. Has anyone been able to get 
this working? Does anyone know where I can get the reference ICAP server 
mentioned here: http://wiki.squid-cache.org/Features/ICAP with a 404 URL of 
http://www.icap-forum.org/spec/icap-server10.tar.gz ? Can someone offer up a 
different solution to just inject a simple HTML header into the pages returned 
via the squid proxy?

~Rodre



RE: [squid-users] HDD Configuration Recommendations

2008-09-24 Thread Jorge Bastos
I advise RAID5, whether it's software or hardware RAID.


 -Original Message-
 From: Chris Nighswonger [mailto:[EMAIL PROTECTED]
 Sent: quarta-feira, 24 de Setembro de 2008 18:18
 To: Squid Users
 Subject: [squid-users] HDD Configuration Recommendations
 
 Hi all,
   I'm preparing to move my squid to new hardware. I have two 500GB
 SATA HDD's in the new box which will be used to store squid's cache
 on. Any suggestions on the best raid config for these guys so as to
 maximize performance?
 
 Regards,
 Chris
 
 --
 Christopher Nighswonger
 Faculty Member
 Network  Systems Director
 Foundations Bible College  Seminary
 www.foundations.edu
 www.fbcradio.org



RE: [squid-users] HDD Configuration Recommendations

2008-09-24 Thread Jorge Bastos
Forgot to say: you'll need one more disk.

 -Original Message-
 From: Chris Nighswonger [mailto:[EMAIL PROTECTED]
 Sent: quarta-feira, 24 de Setembro de 2008 18:18
 To: Squid Users
 Subject: [squid-users] HDD Configuration Recommendations
 
 Hi all,
   I'm preparing to move my squid to new hardware. I have two 500GB
 SATA HDD's in the new box which will be used to store squid's cache
 on. Any suggestions on the best raid config for these guys so as to
 maximize performance?
 
 Regards,
 Chris
 
 --
 Christopher Nighswonger
 Faculty Member
 Network  Systems Director
 Foundations Bible College  Seminary
 www.foundations.edu
 www.fbcradio.org



RE: [squid-users] Insert Header or Footer into retrieved pages?

2008-09-24 Thread Alex Rousskov
On Wed, 2008-09-24 at 10:26 -0700, Rodre Ghorashi-Zadeh wrote:

 Does anyone know where I can get the reference icap server mentioned
 here: http://wiki.squid-cache.org/Features/ICAP with a 404 URL of
 http://www.icap-forum.org/spec/icap-server10.tar.gz ?

Tried the Internet Archive? If you cannot find it anywhere, please let
me know and I will try to dig up a copy. I do not know whether I have
one though.

 Can someone offer up a different solution to just inject a simple html
 header into the pages returned via the squid proxy?

You can also wait for eCAP work to be completed. I am supposed to commit
the missing bits by September 29.

HTH,

Alex.




Re: [squid-users] Object becomes STALE: refresh_pattern min and max

2008-09-24 Thread BUI18
My Squid version is 2.6.STABLE14.

Here's my refresh_pattern from squid.conf

#Suggested default:
refresh_pattern ^ftp:        1440  20%   10080
refresh_pattern ^gopher:     1440  0%    1440

#The following line will ignore a client no-cache header
#refresh_pattern -i \.vid$   0     90%   2880   ignore-reload
refresh_pattern -i \.vid$    7200  100%  10080  ignore-reload

refresh_pattern .            0     20%   4320

A link to the file looks something like this -- 
http://ftp.mydomain.com/websites/data/myvideofile.vid

I have to set up a station to grab the headers, but I can tell you that they do 
not seem out of the ordinary.

There is one cache-control header:  Pragma: no-cache

I believe I handle this with the ignore-reload option.

Our server is an IIS server running on Windows 2003.

I also ran a test with min and max ages of 0 and 1 respectively, and it seems to 
work.  I received a TCP_REFRESH_HIT, which is what I would have expected, as 
these files do not change.

Please let me know if you have any other ideas on how to track down why it 
would be released from cache before the min age, with no expiration set on the object.

Open to any suggestions.
Thanks




- Original Message 
From: Michael Alger [EMAIL PROTECTED]
To: squid-users@squid-cache.org
Sent: Wednesday, September 24, 2008 8:09:50 AM
Subject: Re: [squid-users] Object becomes STALE: refresh_pattern min and max

On Wed, Sep 24, 2008 at 05:29:52AM -0700, BUI18 wrote:
 I went through the same thinking you described below.
 
 I checked the Expires header from the server and we do not set
 one.  I checked via the Fiddler web debug tool.  I also verified with
 the dev guys here that there is no Expires header.  I have set the min
 and max via refresh_pattern because of the absence of the Expires
 header, thinking that Squid would keep it FRESH.
 
 Notice the -1 for the expiration header (I do not set one on the
 object).  My min age is 5 days so I'm not sure why the object
 would be released from cache in less than 2 days.
 
 If the object was released from cache, when the user tried to
 access the file, Squid reports TCP_REFRESH_MISS, which to me means
 that it was found in cache but when it sends an If-Modified-Since
 request, it thinks that the file has been modified (which it was
 not, as seen by the lastmod date indicated in the store.log below).

Interesting that it's caching the file for 2 days. What are the full
headers returned with the object? Any other cache control headers?

Is there any chance you have a conflicting refresh_pattern, so the
freshness rules being applied aren't the ones you're expecting? It may
be worth doing some tests with very small max ages to confirm it's
matching the right rule.



  


Re: [squid-users] Insert Header or Footer into retrieved pages?

2008-09-24 Thread Philipp Rusch - New Vision-IT

Alex Rousskov schrieb:

On Wed, 2008-09-24 at 10:26 -0700, Rodre Ghorashi-Zadeh wrote:

  

Does anyone know where I can get the reference icap server mentioned
here: http://wiki.squid-cache.org/Features/ICAP with a 404 URL of
http://www.icap-forum.org/spec/icap-server10.tar.gz ?



Tried the Internet Archive? If you cannot find it anywhere, please let
me know and I will try to dig up a copy. I do not know whether I have
one though.

  

Can someone offer up a different solution to just inject a simple html
header into the pages returned via the squid proxy?



You can also wait for eCAP work to be completed. I am supposed to commit
the missing bits by September 29.

HTH,

Alex.



  

Hello Rodre,

http://www.icap-forum.org/documents/other/icap-server10.zip

they changed the URL ...

to Alex: that is great news about eCAP, we appreciate your work !

Regards from Germany,
Philipp Rusch



RE: [squid-users] Insert Header or Footer into retrieved pages?

2008-09-24 Thread Rodre Ghorashi-Zadeh


 You can also wait for eCAP work to be completed. I am supposed to commit
 the missing bits by September 29.

 HTH,

 Alex.



   
 Hello Rodre,
 
 http://www.icap-forum.org/documents/other/icap-server10.zip
 
 they changed the URL ...
 
Hello
 
Thanks for the responses guys. I will experiment with this ICAP server as I
pass some time before the eCAP code is committed to the repo.
 
~Rod



Re: [squid-users] Insert Header or Footer into retrieved pages?

2008-09-24 Thread Amos Jeffries
 Alex Rousskov schrieb:
 On Wed, 2008-09-24 at 10:26 -0700, Rodre Ghorashi-Zadeh wrote:


 Does anyone know where I can get the reference icap server mentioned
 here: http://wiki.squid-cache.org/Features/ICAP with a 404 URL of
 http://www.icap-forum.org/spec/icap-server10.tar.gz ?


 Tried the Internet Archive? If you cannot find it anywhere, please let
 me know and I will try to dig up a copy. I do not know whether I have
 one though.


 Can someone offer up a different solution to just inject a simple html
 header into the pages returned via the squid proxy?


 You can also wait for eCAP work to be completed. I am supposed to commit
 the missing bits by September 29.

 HTH,

 Alex.




 Hello Rodre,

 http://www.icap-forum.org/documents/other/icap-server10.zip

 they changed the URL ...

Ah, thanks. Wiki updated.


 to Alex: that is great news about eCAP, we appreciate your work !

 Regards from Germany,
 Philipp Rusch