Re: [squid-users] adding a parameter to a URL / Problem in the url_redirect program

2008-07-08 Thread Shaine

Hi Henrik,

Thanks for all the valuable information; it was really helpful.
I had url_rewrite_concurrency enabled in my squid.conf; as you said, that is
why everything was shifted one step. Now it is disabled and the client IP is
back in its own place, the second position. I wrote a Perl script that reads
STDIN and looks up a value in an SQLite database.

It works fine on the command line; I tested it and it behaves as I expected.
But when I configure it as the url_rewrite_program in Squid, Squid panics
and crashes:

2008/07/09 17:14:34| WARNING: redirector #3 (FD 8) exited
2008/07/09 17:14:34| WARNING: redirector #2 (FD 7) exited
2008/07/09 17:14:34| WARNING: redirector #5 (FD 10) exited
2008/07/09 17:14:34| WARNING: redirector #1 (FD 6) exited
2008/07/09 17:14:34| WARNING: redirector #6 (FD 11) exited
2008/07/09 17:14:34| WARNING: redirector #4 (FD 9) exited
2008/07/09 17:14:34| Too few redirector processes are running
FATAL: The redirector helpers are crashing too rapidly, need help!

I have attached my Perl code below.

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# no buffered output, auto flush (required for a squid redirector)
$| = 1;

my $dbargs = { AutoCommit => 0, PrintError => 1 };

# NOTE: a relative dbname is resolved against the current working directory,
# which is different when the helper is started by squid than when it is
# run by hand from the shell.  An absolute path is safer here.
my $dbh = DBI->connect("dbi:SQLite:dbname=List", "", "", $dbargs);

unless ($dbh) {
    # Exiting makes squid restart the helper; squid gives up with
    # "helpers are crashing too rapidly" if this keeps happening.
    warn "cannot open database: $DBI::errstr\n";
    exit 1;
}

my $sth = $dbh->prepare(
    "SELECT * FROM ACCOUNTING WHERE IPAddress = ? ORDER BY DATETIME DESC");

while (<STDIN>) {
    chomp;

    # url_rewrite_program input: URL client-ip/fqdn ident method ...
    my ($url, $client) = split / /;
    my ($ip) = split m{/}, $client;

    $sth->execute($ip);

    my $IntValue;
    if (my $ref = $sth->fetchrow_hashref()) {
        $IntValue = $ref->{'CalliId'};
    }
    $IntValue = "NA" unless defined $IntValue;

    # Every input line must produce exactly one output line.
    if ($url !~ m#VALUE#) {
        if ($url =~ m#\?#) {
            $url .= "&VALUE=" . $IntValue;
        } else {
            $url .= "?VALUE=" . $IntValue;
        }
        print $url . "\n";
    } else {
        print "\n";
    }
}

$sth->finish();
$dbh->commit();
$dbh->disconnect();


Why does it get a FATAL error when run under Squid? What is wrong with the
script? Can you help me please?

Many Thanks
Shaine.

 



Henrik Nordstrom-5 wrote:
> 
> On mån, 2008-07-07 at 05:49 -0700, Shaine wrote:
>> I did it the same way, but the client IP doesn't come in the second
>> position. It's in the third.
> 
> It's the second..
> 
> http://www.squid-cache.org/ 127.0.0.1/localhost.localdomain - GET -
> myip=127.0.0.1 myport=3128
> 
> unless you have enabled url_rewrite_concurrency, in which case all
> parameters are shifted one step due to the request identifier added in
> front... but then the URL is the second..
> 
> 0 http://www.squid-cache.org/ 127.0.0.1/localhost.localdomain - GET -
> myip=127.0.0.1 myport=3128
> 
> Regards
> Henrik
> 
>  
> 
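For reference, a minimal do-nothing rewriter sketch for the concurrent input
format Henrik describes above (field layout taken from his second example;
without url_rewrite_concurrency there is no leading ID and a bare newline is
enough):

#!/usr/bin/perl
# input:  <channel-ID> <URL> <client-ip/fqdn> <ident> <method> [key=value ...]
use strict;
use warnings;
$| = 1;                 # squid needs unbuffered replies

while (my $line = <STDIN>) {
    chomp $line;
    my ($id, $url, $client, $ident, $method) = split / /, $line;
    # no rewrite: echo the channel ID followed by an empty result
    print "$id\n";
}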




Re: [squid-users] https site access problem!!!

2008-07-08 Thread Shiva Raman
Thanks for the reply.

The problem has been resolved. It was indeed a problem outside Squid
itself: the redirector_program was failing when validating the SSL
site.

Regards

Shiva Raman

On 7/4/08, Henrik Nordstrom <[EMAIL PROTECTED]> wrote:
> On fre, 2008-07-04 at 17:33 +0530, Shiva Raman wrote:
>> Thanks for the reply. Following are the logs generated while trying to
>> access secure.icicidirect.com
>>
>>
>> [EMAIL PROTECTED] logs]# tail -f access.log |grep secure.icicidirect.com
>> 1215164529.907    641 10.1.3.37 TCP_MISS/200 39 CONNECT
>> secure.icicidirect.com:443 - DIRECT/203.27.235.22 -
>> 1215164529.943 31 10.1.3.37 TCP_MISS/200 39 CONNECT
>> secure.icicidirect.com:443 - DIRECT/203.27.235.22 -
>
> Which matches your openssl results. Squid succeeded in connecting, but
> the connection was closed after only a couple of bytes had been
> exchanged.
>
> I think the evidence is pretty strong that the problem is somewhere
> outside Squid.
>
>  - Firewall
>  - The server may have blacklisted your server IP
>  - Other networking issue
>  - Some device trying to intercept port 443.
>
> Regards
> Henrik
>
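(For reference, the check Henrik refers to can be repeated from the Squid box
with openssl's s_client, using the host from the log above:

  openssl s_client -connect secure.icicidirect.com:443

If the handshake is cut off after only a few bytes there, the problem is
clearly outside Squid.)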


Re: [squid-users] https pages

2008-07-08 Thread Amos Jeffries
> I'm sorry for the delay in my response. I am using Windows Firewall on the
> Squid machine, and have added port 443 to the exceptions. I have even
> tried disabling the firewall and HTTPS still fails, and I get the same in
> the access.log.
> I should also mention that I have tried accessing HTTPS pages using the
> proxy from the proxy server itself. This does work as expected, but HTTPS
> from other machines connecting to the proxy server still fail. Does this
> suggest that the problem is with Squid or elsewhere?
> Oh, and I guess I should also mention that I'm now using 2.7.STABLE3
> (standard).
> Thanks in advance for any help.

Ah, Windows Firewall. I'll refrain from stating my true feelings about
that 'smart' firewall. Failure even without the WFW on shows it's somewhere
else. Maybe in multiple places.

What was your squid config again?

Amos


>
> - Original Message 
>> From: Henrik Nordstrom <[EMAIL PROTECTED]>
>> To: Michael Johnston <[EMAIL PROTECTED]>
>> Cc: Squid Users 
>> Sent: Monday, June 9, 2008 10:56:28 AM
>> Subject: Re: [squid-users] https pages
>>
>> On mån, 2008-06-09 at 05:21 -0700, Michael Johnston wrote:
>> > When I disabled the "friendly error messages" option in IE, the
>> message was
>> the same: "Internet Explorer cannot display the webpage"
>> > And in Netscape, I get an alert saying: "The document contains no
>> data"
>> >
>> > > Anything in Squid access.log?
>> > >
>> > This is what shows up in the access log:
>> > 1213013343.566 CLIENT.EXTERNAL.IP TCP_MISS/200 39 CONNECT
>> www.google.com:443 - DIRECT/72.14.205.104 -
>> > 1213013984.055 CLIENT.EXTERNAL.IP  TCP_MISS/200 39 CONNECT
>> www.yahoo.com:443 - DIRECT/209.191.93.52 -
>>
>> Does your firewall allow the Squid server to go out on port 443?
>>
>> Regards
>> Henrik
>
>
>
>
>




Re: [squid-users] Updated Benchmark?

2008-07-08 Thread Amos Jeffries
> I noticed the pages about benchmarking are a little bit old,
>
> e.g. http://old.squid-cache.org/Benchmarking/
>
>
> Are there any decent benchmarks?
>

There is not yet anything comprehensive that is newer than the stats you found.
The Measurement Factory is working on a new TestBed, which will hopefully lead
to the availability of new data for current releases on modern hardware.

> For example, with what kind of config can Squid saturate a 100M network?
>

I've found it possible for 2 clients to saturate a 100M network here by
requesting the same object from Squid's cache at once. That takes no special
config; delay_pools can push that threshold up from 2 to many clients. But it
depends on your local situation more than anything.
I suspect you meant something else though...

Amos



Re: [squid-users] can't get squid to cache

2008-07-08 Thread Amos Jeffries
>
> Hey guys,
>
> I've got a proprietary web application that we use as a back end for
> other applications, and I want to do some aggressive caching using squid,
> as a test, to reduce the load on the back end.
>
> I spent 2-3 days on googling and reading the archives, but nothing I do
> or try seems to help!  :(
>
> Here's the original back-end request (for one image in that app):
>
> 
> [EMAIL PROTECTED] ~]$ wget -S --spider
> http://10.94.206.34:8000/stats_components/collapseon.gif
> --00:19:07--  http://10.94.206.34:8000/stats_components/collapseon.gif
>  => `collapseon.gif'
> Connecting to 10.94.206.34:8000... connected.
> HTTP request sent, awaiting response...
> HTTP/1.0 200 OK
> Content-Type: image/gif
> Content-Length: 64
> Length: 64 [image/gif]
> 200 OK
> 
>
> As you can see, it's missing all cache-control headers, as well as the
> Expires and Last-Modified headers.
>
> This is how my squid config is (now running on 2.6STABLE16, tried on
> 3.0RC1 too):
>
> 
> hierarchy_stoplist cgi-bin
> acl QUERY urlpath_regex cgi-bin
>
> shutdown_lifetime 1 second
>
> acl all src 0.0.0.0/0.0.0.0
> cache allow all
>
> #400GB disk cache
> cache_dir ufs /usr/local/squid/cache 409600 16 256
>
> maximum_object_size 5 MB
> cache_mem 1024 MB
> cache_swap_low 90
> cache_swap_high 95
> maximum_object_size_in_memory 512 KB
>
> cache_replacement_policy heap LFUDA
> memory_replacement_policy heap LFUDA
>
> http_port 8000 vhost vport
> cache_peer 10.94.206.34 parent 8000 0 no-query originserver
>
> http_access allow all
>
> minimum_expiry_time 3600 seconds
> refresh_pattern . 3600 100% 3600 ignore-no-cache ignore-reload
> override-expire override-lastmod

These ignore and override options have no effect when the control headers
are missing, as you noted from your app.

>
> access_log /var/log/squid/access.log squid
> cache_log /var/log/squid/cache.log
> cache_store_log /var/log/squid/store.log

Um, since you have store.log enabled, check it to see what Squid is saving
to the cache.

>
> strip_query_terms off
> 
>
> This was the most aggressive config I could find, and I expected the
> refresh_pattern line to force squid to cache..
>
> But all my access.log file keeps saying is:
> 
> 121215.645  1 127.0.0.1 TCP_MISS/200 200 HEAD
> http://localhost:8000/stats_components/collapseon.gif -
> FIRST_UP_PARENT/10.94.206.34 image/gif
> 121217.096 92 127.0.0.1 TCP_MISS/200 200 HEAD
> http://localhost:8000/stats_components/collapseon.gif -
> FIRST_UP_PARENT/10.94.206.34 image/gif
> 121217.940  1 127.0.0.1 TCP_MISS/200 200 HEAD
> http://localhost:8000/stats_components/collapseon.gif -
> FIRST_UP_PARENT/10.94.206.34 image/gif
> 121218.718  2 127.0.0.1 TCP_MISS/200 200 HEAD
> http://localhost:8000/stats_components/collapseon.gif -
> FIRST_UP_PARENT/10.94.206.34 image/gif
> 
>
> And in the store.log:
> 
> 121215.645 RELEASE -1  98DDCD4857BAF3122EE99EB25E4C3800  200
>  -1-1-1 image/gif 64/0 HEAD
> http://localhost:8000/stats_components/collapseon.gif
> 121217.096 RELEASE -1  A3AE2ED993B031DBD93CF74E2BD64BC5  200
>  -1-1-1 image/gif 64/0 HEAD
> http://localhost:8000/stats_components/collapseon.gif
> 121217.940 RELEASE -1  FFFE8387EBAB471EC045EFA51F9AE472  200
>  -1-1-1 image/gif 64/0 HEAD
> http://localhost:8000/stats_components/collapseon.gif
> 121218.718 RELEASE -1  EA490C98564ABDF390D216E2C3DC210E  200
>  -1-1-1 image/gif 64/0 HEAD
> http://localhost:8000/stats_components/collapseon.gif
> 
>
>
> Does anyone have any ideas why Squid won't cache the requests?

Squid can cache _objects_, but HEAD requests and GET requests are
different: a GET response contains the object, a HEAD response does not.

If you can get the front-end apps (or even your testing spider) to pull
the full object into the cache with a GET, I suspect the MISSes would reduce.
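As a rough illustration, dropping --spider from the earlier wget test makes
it issue a full GET through the accelerator (the URL is the one from your
logs; -O /dev/null just discards the body):

  wget -S -O /dev/null http://localhost:8000/stats_components/collapseon.gif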

Amos



[squid-users] can't get squid to cache

2008-07-08 Thread Angelo Hongens


Hey guys,

I've got a proprietary web application that we use as a back end for
other applications, and I want to do some aggressive caching using squid,
as a test, to reduce the load on the back end.

I spent 2-3 days on googling and reading the archives, but nothing I do
or try seems to help!  :(

Here's the original back-end request (for one image in that app):


[EMAIL PROTECTED] ~]$ wget -S --spider
http://10.94.206.34:8000/stats_components/collapseon.gif
--00:19:07--  http://10.94.206.34:8000/stats_components/collapseon.gif
=> `collapseon.gif'
Connecting to 10.94.206.34:8000... connected.
HTTP request sent, awaiting response...
   HTTP/1.0 200 OK
   Content-Type: image/gif
   Content-Length: 64
Length: 64 [image/gif]
200 OK


As you can see, it's missing all cache-control headers, as well as the
Expires and Last-Modified headers.

This is how my squid config is (now running on 2.6STABLE16, tried on
3.0RC1 too):


hierarchy_stoplist cgi-bin
acl QUERY urlpath_regex cgi-bin

shutdown_lifetime 1 second

acl all src 0.0.0.0/0.0.0.0
cache allow all

#400GB disk cache
cache_dir ufs /usr/local/squid/cache 409600 16 256

maximum_object_size 5 MB
cache_mem 1024 MB
cache_swap_low 90
cache_swap_high 95
maximum_object_size_in_memory 512 KB

cache_replacement_policy heap LFUDA
memory_replacement_policy heap LFUDA

http_port 8000 vhost vport
cache_peer 10.94.206.34 parent 8000 0 no-query originserver

http_access allow all

minimum_expiry_time 3600 seconds
refresh_pattern . 3600 100% 3600 ignore-no-cache ignore-reload
override-expire override-lastmod

access_log /var/log/squid/access.log squid
cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log

strip_query_terms off


This was the most aggressive config I could find, and I expected the
refresh_pattern line to force squid to cache..

But all my access.log file keeps saying is:

121215.645  1 127.0.0.1 TCP_MISS/200 200 HEAD
http://localhost:8000/stats_components/collapseon.gif -
FIRST_UP_PARENT/10.94.206.34 image/gif
121217.096 92 127.0.0.1 TCP_MISS/200 200 HEAD
http://localhost:8000/stats_components/collapseon.gif -
FIRST_UP_PARENT/10.94.206.34 image/gif
121217.940  1 127.0.0.1 TCP_MISS/200 200 HEAD
http://localhost:8000/stats_components/collapseon.gif -
FIRST_UP_PARENT/10.94.206.34 image/gif
121218.718  2 127.0.0.1 TCP_MISS/200 200 HEAD
http://localhost:8000/stats_components/collapseon.gif -
FIRST_UP_PARENT/10.94.206.34 image/gif


And in the store.log:

121215.645 RELEASE -1  98DDCD4857BAF3122EE99EB25E4C3800  200
-1-1-1 image/gif 64/0 HEAD
http://localhost:8000/stats_components/collapseon.gif
121217.096 RELEASE -1  A3AE2ED993B031DBD93CF74E2BD64BC5  200
-1-1-1 image/gif 64/0 HEAD
http://localhost:8000/stats_components/collapseon.gif
121217.940 RELEASE -1  FFFE8387EBAB471EC045EFA51F9AE472  200
-1-1-1 image/gif 64/0 HEAD
http://localhost:8000/stats_components/collapseon.gif
121218.718 RELEASE -1  EA490C98564ABDF390D216E2C3DC210E  200
-1-1-1 image/gif 64/0 HEAD
http://localhost:8000/stats_components/collapseon.gif



Does anyone have any ideas why Squid won't cache the requests?

--


Met vriendelijke groet,

Angelo Hongens
The Netherlands


Re: [squid-users] Squid 2.7 access log and url_rewrite_program

2008-07-08 Thread Henrik Nordstrom
On tis, 2008-07-08 at 16:47 -0400, Chris Woodfield wrote:

> I've noticed that squid 2.7STABLE3 logs incoming URLs differently than  
> 2.6 did when using a url_rewrite_program. It appears that under 2.6,  
> the URL logged was pre-rewrite, under 2.7 it's the URL returned by the  
> rewriter. This presents problems as I have the potential for a large  
> number of incoming URL hostnames being rewritten to the same origin  
> hostname, and with the current 2.7 logging I can't tell what the  
> incoming hostnames were.
> 
> Was this an expected change? If so, can I have the old behavior back? :)

Not expected, but now that I read the change log again it's obvious..

File a bug so we have some place to keep a lasting discussion about this.
Not sure today what the solution will look like.

http://bugs.squid-cache.org/

Regards
Henrik




Re: [squid-users] reverse proxy with domains

2008-07-08 Thread Thomas E. Maleshafske

Chris Robertson wrote:

Thomas E. Maleshafske wrote:

Henrik Nordstrom wrote:


You could simplify even further

never_direct allow all
http_access allow all

with the never_direct rule being optional.. (implied by accel mode on
the http_port).

Regards
Henrik
  
But if you're in a hosting environment, a very quick and effective way 
of taking a client offline for one reason or another is to comment out 
their acl. It could be that they forgot to pay their renewal or something 
of that nature, and you give them a grace period to fix it.

Doing it this way has its benefits, but I see your point too.

V/r
Thomas E. Maleshafske


Alternatively, just create an ACL for the delinquent customer's 
domain, with a matching http_access deny.


Chris

Very true.
Less typing, though, with a simple "#" :)  At this point it is all a 
matter of preference and what the administrator feels is easier for him 
to manage and visualize.  The good thing is that I found a way of doing 
it for my situation.


I appreciate everyone's thoughts and ideas.

V/R
Thomas E. Maleshafske
http://www.maleshafske.com
Helping People Take Control Over their Computer!





[squid-users] Squid 2.7 access log and url_rewrite_program

2008-07-08 Thread Chris Woodfield

Hi,

I've noticed that squid 2.7STABLE3 logs incoming URLs differently than  
2.6 did when using a url_rewrite_program. It appears that under 2.6,  
the URL logged was pre-rewrite, under 2.7 it's the URL returned by the  
rewriter. This presents problems as I have the potential for a large  
number of incoming URL hostnames being rewritten to the same origin  
hostname, and with the current 2.7 logging I can't tell what the  
incoming hostnames were.


Was this an expected change? If so, can I have the old behavior back? :)

-C


Re: [squid-users] prefer_direct configuration

2008-07-08 Thread Henrik Nordstrom
On tis, 2008-07-08 at 12:59 -0500, Dean Weimer wrote:
> I am trying to setup a new proxy server at a remote location which has
> both a T1 link to our main office and a DSL connection to the
> internet.  The DSL connection has a much larger download than the T1
> so it's preferable to use it for web browsing, but I would like to be
> able to have the proxy server automatically route traffic through the
> T1 and use our proxy servers here as parents in the event that the DSL
> would fail and the T1 line is still up.

For this it's best if you use some link monitoring and reconfigure Squid
accordingly.

Trying to do it by "automatic failover" will not perform very well, as
there is no link monitoring in Squid and the failover procedure repeats
for each request..

But by setting connect_timeout reasonably low you can make it almost
work without link monitoring..
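A minimal sketch of that "almost works" variant, reusing the peers from your
message (the timeout value is only illustrative):

  # give up on the direct/DSL path quickly, then fall back to the parents
  connect_timeout 5 seconds
  prefer_direct on
  cache_peer 10.50.20.5 parent 8080 8181
  cache_peer 10.50.20.4 parent 8080 8181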

Regards
Henrik




Re: [squid-users] reply_body_max_size + delay_pools

2008-07-08 Thread Chris Robertson

Heinrich Harrer wrote:

HI,

Is it possible to create an ACL to use with delay_pools to limit
bandwidth using reply_body_max_size?

Ex.
reply_body_max_size 10485760 allow all

This only denies a download larger than the limit. I want to slow down
the connection (if the object is not in the cache), not deny it.

Any suggestion?

Squid version is 2.7x.
  


Just use delay pools, and set the initial bucket size to the max object 
size you don't want to limit.  This will have the added benefit of 
preventing someone from circumventing your reply_body_max_size slowdown 
by grabbing lots of little bits of a large file.


delay_pools 1
delay_class 1 2
delay_access 1 allow all
# The first MByte of non-cached traffic is delay free, and the bucket
# refills at 32000 bytes/s (roughly 256kbps).
delay_parameters 1 -1/-1 32000/1048576


Chris


Re: [squid-users] reply_body_max_size + delay_pools

2008-07-08 Thread Henrik Nordstrom
On tis, 2008-07-08 at 14:22 -0300, Heinrich Harrer wrote:

> Is it possible to create an ACL to use with delay_pools to limit
> bandwidth using reply_body_max_size?

This is possible with Squid-2.HEAD (what will eventually become 2.8 when
ready for release), where delay pools can be reassigned based on the
response size.

Regards
Henrik




Re: [squid-users] filtering on FQDN

2008-07-08 Thread Henrik Nordstrom
On tis, 2008-07-08 at 16:34 +0100, Robin Clayton wrote:

> can I filter on a source that is the windows machine name rather than the 
> source IP? 

If the Windows machine is registered in DNS, yes. If you are using MS AD
then this is most likely the case.

See the srcdomain acl.
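A minimal sketch (the machine name pc01.example.local is only a placeholder;
srcdomain matches the name obtained by reverse DNS lookup of the client
address):

  acl pc01 srcdomain pc01.example.local
  http_access deny pc01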

Regards
Henrik




RE: [squid-users] Integrating squid with OpenSSL:very slow response

2008-07-08 Thread Henrik Nordstrom
On tis, 2008-07-08 at 16:35 +0530, Geetha_Priya wrote:

> I guess that is a better approach. We will look into it. We also
> verified that the browser is able to reach web servers directly through
> squid. The problem arises only when our proxy comes between the client and
> squid. Squid does not send subsequent requests for other objects [like
> images in a page] when it gets requests through the proxy.

Then it doesn't get those requests from the proxy, or the proxy perhaps
fails to deliver the responses to those requests.

Each component of a page (HTML, stylesheets, images, flash files etc.)
is a separate HTTP request. Squid does not know what a page is, only
individual HTTP requests.

Regards
Henrik




Re: [squid-users] Problem with logrotate and compress

2008-07-08 Thread Chris Robertson

Sergio Belkin wrote:

Hi, I am using CentOS 5.1 and I have a weird problem with Squid log
rotation. I have the following squid file in /etc/logrotate.d:

Recently I reduced the size parameter.

/var/log/squid/access.log {
weekly
missingok
rotate 10
compress
create 0660 squid squid
missingok
size 200M
postrotate
  /usr/sbin/squid -k rotate
endscript
}
/var/log/squid/cache.log {
weekly
rotate 5
copytruncate
compress
notifempty
missingok
}

/var/log/squid/store.log {
  missingok
  weekly
  compress
  size 200M
  create 0660 root squid
  rotate 4
# This script asks squid to rotate its logs on its own.
# Restarting squid is a long process and it is not worth
# doing it just to rotate logs
postrotate
  /usr/sbin/squid -k rotate
endscript
}

The strange thing is that I get the following files:

-rw-r- 1 squid squid 3.4M Jul  8 09:37 access.log
-rw-r- 1 squid squid 116K Jul  8 09:31 access.log.0
-rw-rw 1 squid squid    0 Jul  8 09:31 access.log.1
-rw-r- 1 squid squid   20 Jul  8 09:31 access.log.1.gz
-rw-r--r-- 1 squid squid  28M Jul  8 04:22 access.log.2.gz
-rw-r- 1 squid squid  39M Jul  8 09:31 access.log.3
-rw-r--r-- 1 root  root   29M Jul  7 15:58 access.log.3.gz
-rw-r--r-- 1 squid squid 252M Jul  8 04:22 access.log.4
-rw-r- 1 squid squid 1.9K Jul  8 09:31 cache.log
-rw-r- 1 squid squid    0 Jul  8 09:31 cache.log.0
-rw-r- 1 squid squid 3.1K Jul  8 09:28 cache.log.1
-rw-r- 1 squid squid  367 Jul  8 09:31 cache.log.1.gz
-rw-r- 1 squid squid  367 Jul  8 04:22 cache.log.2.gz
-rw-r--r-- 1 root  root   12K Jun 11 15:40 squid.out
-rw-r- 1 squid squid 1.1M Jul  8 09:37 store.log
-rw-rw 1 root  squid    0 Jul  8 09:31 store.log.0
-rw-r- 1 squid squid 2.9M Jul  8 09:31 store.log.1.gz
-rw-r- 1 squid squid  42K Jul  8 09:31 store.log.2
-rw-rw 1 root  root  323M May 26 04:21 store.log.2.gz
-rw-rw 1 root  root  329M May 16 04:25 store.log.3.gz
-rw-rw 1 root  root  357M May  8 04:23 store.log.4.gz


I don't understand why logrotate compresses the old log files but doesn't
delete the old uncompressed files. Any ideas? (I've also modified the
create parameter so that squid owns the access logs, and run
logrotate /etc/logrotate.d/squid by hand to see if the problem repeats.)

Thanks in advance
  


Check your squid.conf file for the logfile_rotate directive.  With your 
setup it should be set as "logfile_rotate 0".
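A minimal sketch of the relevant squid.conf line (with 0, "squid -k rotate"
simply closes and reopens the log files and leaves the renaming and
compressing to logrotate):

  logfile_rotate 0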


Chris


Re: [squid-users] reverse proxy with domains

2008-07-08 Thread Chris Robertson

Thomas E. Maleshafske wrote:

Henrik Nordstrom wrote:


You could simplify even further

never_direct allow all
http_access allow all

with the never_direct rule being optional.. (implied by accel mode on
the http_port).

Regards
Henrik
  
But if you're in a hosting environment, a very quick and effective way of 
taking a client offline for one reason or another is to comment out 
their acl. It could be that they forgot to pay their renewal or something 
of that nature, and you give them a grace period to fix it.


Doing it this way has its benefits, but I see your point too.

V/r
Thomas E. Maleshafske


Alternatively, just create an ACL for the delinquent customer's domain, 
with a matching http_access deny.


Chris
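A minimal sketch of that approach (the domain is only a placeholder for the
delinquent customer's site):

  acl suspended_customer dstdomain .customer-example.com
  http_access deny suspended_customer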


[squid-users] prefer_direct configuration

2008-07-08 Thread Dean Weimer
I am trying to set up a new proxy server at a remote location which has both a 
T1 link to our main office and a DSL connection to the internet.  The DSL 
connection has a much larger download capacity than the T1, so it's preferable 
to use it for web browsing, but I would like to have the proxy server 
automatically route traffic through the T1 and use our proxy servers here as 
parents in the event that the DSL fails and the T1 line is still up.

I have added the proxy servers at our main office using the cache peer entries, 
and defined the icp_port.

cache_peer 10.50.20.5 parent 8080 8181
cache_peer 10.50.20.4 parent 8080 8181
icp_port 8181

Then added the prefer_direct on entry.
prefer_direct on

I tested by manually entering a false route on the remote proxy server for one 
website. It does load, but only after waiting for a timeout for each and every 
request (packet traces appear to show 4 attempts for each before falling back 
to the parent cache). Since this covers not only the html files but also the 
requests for each image, any subsequent links from the same web site continue 
to follow this behavior. The end result is that a small page with a few images 
takes anywhere from 2 to 4 minutes to complete.
Is there a way to adjust the timeouts, and perhaps have it cache the path for a 
period of time after having one failure before trying again?
Or is my method of testing flawed, and causing this behavior?
 
Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co



[squid-users] reply_body_max_size + delay_pools

2008-07-08 Thread Heinrich Harrer
HI,

Is it possible to create an ACL to use with delay_pools to limit
bandwidth using reply_body_max_size?

Ex.
reply_body_max_size 10485760 allow all

This only denies a download larger than the limit. I want to slow down
the connection (if the object is not in the cache), not deny it.

Any suggestion?

Squid version is 2.7x.


[squid-users] Updated Benchmark?

2008-07-08 Thread Roy M.
I noticed the pages about benchmarking are a little bit old,

e.g. http://old.squid-cache.org/Benchmarking/


Are there any decent benchmarks?

For example, with what kind of config can Squid saturate a 100M network?



Thanks.


[squid-users] Too many queued ntlmauthenticator requests

2008-07-08 Thread David Jiménez Peris


On a Linux Ubuntu 8.04 server with Squid 3.0.STABLE1 and Samba 3.0.28a,
configured to authenticate against a Windows 2003 Server Active
Directory using Samba winbind ntlm_auth, the ntlm_auth processes keep
slowly getting locked in the "RESERVED" state until Squid emits the error
"Too many queued ntlmauthenticator requests" and restarts.

Is this the expected behavior of squid?

Below is the output of cachemgr3.cgi:

NTLM Authenticator Statistics:
program: /usr/bin/ntlm_auth
number running: 10 of 10
requests sent: 11616
replies received: 11616
queue length: 0
avg service time: 0 msec

#    FD   PID     # Requests   # Deferred Requests   Flags   Time    Offset   Request
1    9    14035   39           0                     R       0.003   0        (none)
2    10   14036   291          0                     R       0.002   0        (none)
3    11   14037   79           0                     R       0.004   0        (none)
4    12   14038   34           0                     R       0.003   0        (none)
5    13   14039   3306         0                     R       0.004   0        (none)
6    14   14040   6292         0                             0.151   0        (none)
7    15   14041   1251         0                             0.050   0        (none)
8    16   14042   23           0                     R       0.023   0        (none)
9    17   14043   3            0                     R       0.023   0        (none)
10   18   14044   298          0                             0.137   0        (none)

Flags key:

  B = BUSY
  C = CLOSING
  R = RESERVED or DEFERRED
  S = SHUTDOWN
  P = PLACEHOLDER


Generated Tue, 08 Jul 2008 15:33:18 GMT, by cachemgr3.cgi/3.0.STABLE1



Best regards
David JP



[squid-users] filtering on FQDN

2008-07-08 Thread Robin Clayton
Hi Guys,

can I filter on a source that is the windows machine name rather than the 
source IP? 

would turning

Re: [squid-users] Squid deny access to some part of website (SOLVED)

2008-07-08 Thread Alexandre augusto
Hi all,

I was having trouble accessing some sites through Squid. After posting a 
message here and getting help, I was certain that my proxy was working 
properly.

My problem was related to a DNS failure to resolve some domains.

Part of the website was hosted on another domain, and a problem with the DNS 
server I was using didn't let Squid find the servers hosting the jpg and flash 
files.

Thank you for the help Leonardo

Best regards

Alexandre




Re: [squid-users] https pages

2008-07-08 Thread Michael Johnston
I'm sorry for the delay in my response. I am using Windows Firewall on the 
Squid machine, and have added port 443 to the exceptions. I have even tried 
disabling the firewall and HTTPS still fails, and I get the same in the 
access.log.
I should also mention that I have tried accessing HTTPS pages using the proxy 
from the proxy server itself. This does work as expected, but HTTPS from other 
machines connecting to the proxy server still fail. Does this suggest that the 
problem is with Squid or elsewhere?
Oh, and I guess I should also mention that I'm now using 2.7.STABLE3 (standard).
Thanks in advance for any help.

- Original Message 
> From: Henrik Nordstrom <[EMAIL PROTECTED]>
> To: Michael Johnston <[EMAIL PROTECTED]>
> Cc: Squid Users 
> Sent: Monday, June 9, 2008 10:56:28 AM
> Subject: Re: [squid-users] https pages
> 
> On mån, 2008-06-09 at 05:21 -0700, Michael Johnston wrote:
> > When I disabled the "friendly error messages" option in IE, the message was 
> the same: "Internet Explorer cannot display the webpage"
> > And in Netscape, I get an alert saying: "The document contains no data"
> > 
> > > Anything in Squid access.log?
> > >
> > This is what shows up in the access log:
> > 1213013343.566    CLIENT.EXTERNAL.IP TCP_MISS/200 39 CONNECT 
> www.google.com:443 - DIRECT/72.14.205.104 -
> > 1213013984.055    CLIENT.EXTERNAL.IP  TCP_MISS/200 39 CONNECT 
> www.yahoo.com:443 - DIRECT/209.191.93.52 -
> 
> Does your firewall allow the Squid server to go out on port 443?
> 
> Regards
> Henrik






[squid-users] Store rebuild above 100%

2008-07-08 Thread john . horne
Hello,

We had a power problem this morning, and one of our web cache/proxy
servers was restarted. The server seems to be working fine, but I
noticed in the cache.log file entries such as this:

==
2008/07/08 13:25:51| Store rebuilding is 132.9% complete
2008/07/08 13:25:55| WARNING: newer swaplog entry for dirno 0, fileno
002BF15B
2008/07/08 13:25:57| WARNING: newer swaplog entry for dirno 1, fileno
00329812
2008/07/08 13:25:57| WARNING: newer swaplog entry for dirno 0, fileno
002C6731
2008/07/08 13:26:06| Store rebuilding is 133.8% complete
2008/07/08 13:26:06| WARNING: newer swaplog entry for dirno 0, fileno
002EF536
2008/07/08 13:26:06| WARNING: newer swaplog entry for dirno 0, fileno
002EF53F
2008/07/08 13:26:08| WARNING: newer swaplog entry for dirno 1, fileno
0034D347
==

We are running squid version 2.6 stable 17. I have tried recreating the
swap directories using 'squid -z -D -F', but this seems to have made no
difference. Anyone have an idea about the above errors? I am wondering
if the rebuild is actually going to complete given that it is already
above 100%!



Thanks,

John.


[squid-users] Problem with logrotate and compress

2008-07-08 Thread Sergio Belkin
Hi, I am using CentOS 5.1 and I have a weird problem with Squid log
rotation. I have the following squid file in /etc/logrotate.d:

Recently I reduced the size parameter.

/var/log/squid/access.log {
weekly
missingok
rotate 10
compress
create 0660 squid squid
missingok
size 200M
postrotate
  /usr/sbin/squid -k rotate
endscript
}
/var/log/squid/cache.log {
weekly
rotate 5
copytruncate
compress
notifempty
missingok
}

/var/log/squid/store.log {
  missingok
  weekly
  compress
  size 200M
  create 0660 root squid
  rotate 4
# This script asks squid to rotate its logs on its own.
# Restarting squid is a long process and it is not worth
# doing it just to rotate logs
postrotate
  /usr/sbin/squid -k rotate
endscript
}

The strange thing is that I get the following files:

-rw-r- 1 squid squid 3.4M Jul  8 09:37 access.log
-rw-r- 1 squid squid 116K Jul  8 09:31 access.log.0
-rw-rw 1 squid squid    0 Jul  8 09:31 access.log.1
-rw-r- 1 squid squid   20 Jul  8 09:31 access.log.1.gz
-rw-r--r-- 1 squid squid  28M Jul  8 04:22 access.log.2.gz
-rw-r- 1 squid squid  39M Jul  8 09:31 access.log.3
-rw-r--r-- 1 root  root   29M Jul  7 15:58 access.log.3.gz
-rw-r--r-- 1 squid squid 252M Jul  8 04:22 access.log.4
-rw-r- 1 squid squid 1.9K Jul  8 09:31 cache.log
-rw-r- 1 squid squid    0 Jul  8 09:31 cache.log.0
-rw-r- 1 squid squid 3.1K Jul  8 09:28 cache.log.1
-rw-r- 1 squid squid  367 Jul  8 09:31 cache.log.1.gz
-rw-r- 1 squid squid  367 Jul  8 04:22 cache.log.2.gz
-rw-r--r-- 1 root  root   12K Jun 11 15:40 squid.out
-rw-r- 1 squid squid 1.1M Jul  8 09:37 store.log
-rw-rw 1 root  squid    0 Jul  8 09:31 store.log.0
-rw-r- 1 squid squid 2.9M Jul  8 09:31 store.log.1.gz
-rw-r- 1 squid squid  42K Jul  8 09:31 store.log.2
-rw-rw 1 root  root  323M May 26 04:21 store.log.2.gz
-rw-rw 1 root  root  329M May 16 04:25 store.log.3.gz
-rw-rw 1 root  root  357M May  8 04:23 store.log.4.gz


I don't understand why logrotate compresses the old log files but doesn't
delete the old uncompressed files. Any ideas? (I've also modified the
create parameter so that squid owns the access logs, and run
logrotate /etc/logrotate.d/squid by hand to see if the problem repeats.)

Thanks in advance
-- 
--
Open Kairos http://www.openkairos.com
Watch More TV http://sebelk.blogspot.com
Sergio Belkin -


RE: [squid-users] Integrating squid with OpenSSL:very slow response

2008-07-08 Thread Geetha_Priya


-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Monday, July 07, 2008 5:36 PM
To: Geetha_Priya
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] Integrating squid with OpenSSL:very slow response

On mån, 2008-07-07 at 14:48 +0530, Geetha_Priya wrote:

> yes we use openssl libraries and created a proxy server that supports 
> persistent connections. Earlier we had wcol as http prefetcher. But we 
> had problems with long urls and less capabilities,  we decided to move 
> to squid. Now we are facing this issue after we configured squid to 
> hear request from our proxy. Hence I am not sure if it is proxy or 
> squid.

>>Time to dig up wireshark and take a look at the traffic I think. Start by 
>>looking at the >>traffic in & out of your proxy..

I guess that is a better approach. We will look into it. We also verified that 
the browser is able to reach web servers directly through Squid. The problem 
arises only when our proxy comes between the client and Squid. Squid does not 
send subsequent requests for other objects [like images in a page] when it 
gets requests through the proxy.

Thanks
Geetha


Regards
Henrik


DISCLAIMER:
This email (including any attachments) is intended for the sole use of the 
intended recipient/s and may contain material that is CONFIDENTIAL AND PRIVATE 
COMPANY INFORMATION. Any review or reliance by others or copying or 
distribution or forwarding of any or all of the contents in this message is 
STRICTLY PROHIBITED. If you are not the intended recipient, please contact the 
sender by email and delete all copies; your cooperation in this regard is 
appreciated.