[squid-users] H/W requirement for Squid to run in bigger scene like ISP

2008-07-14 Thread bijayant kumar
Hello to list,

I am using Squid 3.0.STABLE7. Can I use Squid as a caching server in an 
ISP-like environment? If yes, then what should the hardware (H/W) 
configuration be? I have gone through various articles found via Google.

My observation is that since Squid is single-threaded (not 
multi-threaded) software, there is no need to use a dual/quad core 
processor, and RAM is also not a very important factor, because 
somewhere I read "10 MB of RAM for every 1 GB of cache space on disk". 
So RAM is also OK; I will use 4-8 GB of RAM and it should be fine. I 
think I am thinking in the right direction. Please suggest what the 
hardware requirements should be for a server that can expect 1200-1500 
concurrent connections to Squid at a time.

Thanks & Regards,
Bijayant Kumar



Re: [squid-users] H/W requirement for Squid to run in bigger scene like ISP

2008-07-14 Thread Anna Jonna Armannsdottir
On Mon, 2008-07-14 at 02:25 -0700, bijayant kumar wrote:
> In my observations since Squid is a single threaded(not a multithread)
> s/w, so there is no need to use dual/quad core processor, and RAM is
> also not very important factor because somewhere i read, "10MB RAM for
> every 1 GB of cache space on disk". So RAM is also ok. I will use 4-8
> GB RAM and it should be fine. I think i am going/thinking into right
> direction. Please suggest me, what should be the H/W requirement for
> the server where you can expect 1200-1500 concurrent connection to the
> squid at a time.

I am running Squid 2.5 and I believe this is outdated information. 

CPU usage depends on the configuration. If you configure Squid to use 
just a ufs cache it will not create an extra thread. But in my 
configuration I use an aufs cache, and that creates separate threads 
(or processes) just to take care of the disk I/O. That is an advantage 
if you have a dual-processor machine.
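
As a minimal sketch (path and sizes are examples only), the difference is just the cache_dir type in squid.conf:

```
# ufs: disk I/O runs inside the main Squid process
# cache_dir ufs /var/spool/squid 10000 16 256

# aufs: disk I/O is handed off to separate helper threads
cache_dir aufs /var/spool/squid 10000 16 256
```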

Also with regard to RAM: the rule of 10 MB of RAM per 1 GB of disk 
cache only holds for 32-bit machines. I have calculated that for 64-bit 
machines it is about 14 MB per 1 GB of disk cache. Maybe somebody can 
confirm this.
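
As a quick sanity check of that rule of thumb (the 10 and 14 MB-per-GB figures come from this thread, not from official documentation):

```python
# Rough Squid index-RAM estimate using the rule-of-thumb figures
# quoted in this thread (assumptions, not official numbers):
# ~10 MB RAM per GB of disk cache on 32-bit, ~14 MB per GB on 64-bit.
def index_ram_mb(disk_cache_gb, bits=32):
    per_gb_mb = 10 if bits == 32 else 14
    return disk_cache_gb * per_gb_mb

print(index_ram_mb(100))      # 100 GB disk cache, 32-bit -> 1000
print(index_ram_mb(100, 64))  # 100 GB disk cache, 64-bit -> 1400
```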

Then there are the usual tasks on the machine, like managing logs and 
doing the kernel tasks (swapping, IP firewalling, etc.), and on a 
heavily loaded machine you would probably not want a bottleneck in that 
area.

With today's hardware, which is often dual-processor by default, there 
is very little to be saved by using only one processor. Chances are 
that your users will hate the proxy server if it turns out to be a 
bottleneck during heavy load.

The real challenge is the disk I/O: the disk configuration and how you 
tune the cache file system.

I would like to stress that I am not an expert on Squid, and I would
like some critique or opinions on the above.

-- 
Kindest Regards, Anna Jonna Ármannsdóttir,   %&   A: Because people read 
from top to bottom.
Unix System Administration, Computing Services,   %&   Q: Why is top posting bad?
University of Iceland.




[squid-users] Reverse Proxy, OWA RPCoHTTPS and NTLM authentication passthrough

2008-07-14 Thread Abdessamad BARAKAT

Hi,

I need to reverse proxy an OWA 2007 service, and I have some problems 
with NTLM authentication and the RPC connection. Squid offers an SSL 
service and itself connects to the OWA server over SSL.

The NTLM authentication is done by the OWA server, so I need Squid to 
pass the credentials through without modifying them.

Currently I only get 401 error codes, but when I switch the 
authentication to "Basic authentication" in the Outlook Anywhere 
settings, it works. I really want to get NTLM authentication working, 
so that I don't have to ask all users to change their settings.

The Squid is chrooted.

I have tried the following versions:

- 3.0 STABLE7

- 2.7STABLE3

- 2.6STABLE21

- 2.6STABLE3

My setup (sometimes I need to add acl all or logfile_daemon between 
versions, that's all):


# CHROOT
chroot /usr/local/squid
mime_table /etc/mime.conf
icon_directory /share/icons
error_directory /share/errors/English
unlinkd_program /libexec/unlinkd
cache_dir ufs /var/cache 100 16 256
cache_store_log /var/logs/store.log
access_log /var/logs/access.log squid
pid_filename /var/logs/squid.pid
logfile_daemon /libexec/logfile-daemon


# Define the required extension methods
extension_methods RPC_IN_DATA RPC_OUT_DATA

# Publish the RPCoHTTP service via SSL
https_port 192.168.1.122:8443 cert=/etc/apache2/ssl/webmail.corporate.com.pem defaultsite=webmail.corporate.com
cache_peer 172.16.18.13 parent 443 0 no-query originserver login=PASS ssl sslflags=DONT_VERIFY_PEER name=exchangeServer

acl all src 0.0.0.0/0.0.0.0
acl EXCH dstdomain .corporate.com
cache_peer_access exchangeServer allow EXCH
cache_peer_access exchangeServer deny all
never_direct allow EXCH
# Lock down access to just the Exchange Server!
http_access allow EXCH
http_access deny all
miss_access allow EXCH
miss_access deny all

#no local caching
#maximum_object_size 0 KB
#minimum_object_size 0 KB
#no_cache deny all

#access_log /usr/local/squid/var/logs/access.log squid


Thanks a lot for any tips or information.



Re: [squid-users] H/W requirement for Squid to run in bigger scene like ISP

2008-07-14 Thread Angelo Hongens
Anna Jonna Armannsdottir wrote:
> With todays hardware, wich is often dual processor by default, there are
> very little savings using only one processor. Chances are that Your
> users will hate the proxy server, if it turns out to be a bottleneck
> during heavy load. 
> 

I'm not running any squid machines in production right now (only in VMs 
for testing), but if I were to 'go physical', I would have the same 
question.

All the servers I usually buy have either one or two quad core CPUs, so 
it's more the question: will 8 cores perform better than 4?

If not, I would be wiser to buy a single Xeon X5460 or so, instead of 
two lower-clocked CPUs, right?

-- 


Met vriendelijke groet,

Angelo Höngens


Re: [squid-users] Persistent connect to cache_peer parent question

2008-07-14 Thread Russell Suter



Amos Jeffries wrote:

Russell Suter wrote:
  

Hi,

I have a question regarding persistent connections to a cache_peer 
parent. I have multiple users connecting through a custom-compiled 
Squid 2.6.STABLE17 (also tried 3.0.STABLE7) on a Red Hat EL 4 box in 
front of a commercial web filter appliance. In my squid.conf file, I 
have the cache_peer as:

cache_peer  parent 8084 0 login=*:mxlogic no-query no-digest proxy-only

What seems to happen is that a persistent connection is made to the 
appliance. This in and of itself isn't a problem, except that all of 
the different users show up as the first user that made the initial 
connection. This really jacks up the statistics within the appliance. 
I can get around this with:

server_persistent_connections off

but that is not as efficient as the persistent connection.
Is there any way to get one persistent connection per user to the
cache_peer parent?




Not to my knowledge. Persistent connections are a link-layer artifact
between any given client (i.e. Squid) and a server.

  

To me, the behavior is broken.  Either the single connection
to the cache parent should provide the correct user
credentials, or there should be one persistent connection per
user.  To have multiple requests from different users be
represented by only one user is wrong...

--
Russ

We can't solve problems by using the same kind of
thinking we used when we created them.
   -- Albert Einstein

Russell Suter
MX Logic, Inc.
Phone: 720.895.4481

Your first line of email defense.
http://www.mxlogic.com



Re: [squid-users] H/W requirement for Squid to run in bigger scene like ISP

2008-07-14 Thread Anna Jonna Armannsdottir
On Mon, 2008-07-14 at 13:01 +0200, Angelo Hongens wrote:
> 
> All the servers I usually buy have either one or two quad core cpu's,
> so it's more the question: will 8 cores perform better than 4?
> 
> If not, I would be wiser to buy a single Xeon X5460 or so, instead of
> 2 lower clocked cpu's, right?

In that case we are fine-tuning the CPU power, and if there are 8 cores 
in a Squid server, I would think that at least half of them would 
produce idle heat: an extra load for the cooling system. As you point 
out, the CPU speed is probably important for the part of Squid that 
does not use threading or a separate process.

But the real bottlenecks are in the I/O, both RAM and disk. So if I 
were buying hardware now, I would mostly be looking at I/O speed and 
very little at CPU speed. SCSI disks with large buffers are preferable, 
and if SCSI is not a viable choice, use the fastest SATA disks you can 
find. The Western Digital Raptor used to be the fastest SATA disk; I 
don't know what the fastest one is now.

-- 
Kindest Regards, Anna Jonna Ármannsdóttir,   %&   A: Because people read 
from top to bottom.
Unix System Administration, Computing Services,   %&   Q: Why is top posting bad?
University of Iceland.




Re: [squid-users] FW: Problems with LDAP on Windows XP

2008-07-14 Thread Chris Woodfield
I seem to remember having a similar problem when calling a URL  
rewriter with command-line arguments; I solved it by having squid call  
a shell script instead that had the actual rewriter + arguments on an  
exec line. I later rewrote the helper app to read a config file in  
lieu of command-line arguments.


If it's the same issue, see if you get a similar error when the line  
is just


auth_param basic program "/squid/libexec/squid_ldap_auth"

(even though it won't work without the args...); if so, you may have 
to make a .bat file to launch your helper with the needed arguments, 
and point Squid at that instead.
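
If it comes to that, the wrapper could look something like this (path and arguments are hypothetical; substitute the real helper command line, and point auth_param basic program at the .bat file):

```bat
@echo off
rem Hypothetical wrapper so squid.conf can point at one program with
rem no arguments while the real helper still gets its full options.
"C:\squid\libexec\squid_ldap_auth.exe" -v 3 -b "dc=mydomain,dc=net" -f "uid=%%s" myserver.mydomain.net
```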


-C

On Jul 10, 2008, at 5:12 AM, Duncan Peacock wrote:


Hi There,

I have recently installed Squid 2.6.STABLE21 for i686-PC-winnt. The  
proxy server is currently acting as a chained Proxy and will send  
all client HTTP request to an online web filtering site.


I have managed to configure this correctly but I am having problems  
with LDAP.


I have put the following code into the squid.conf file:

auth_param basic program /squid/libexec/squid_ldap_auth -v 3 -b "dc=mydomain,dc=net" -D uid=ldap user,ou=IT Department,dc=mydomain,dc=net -w  -f uid=%s myserver.mydomain.net

auth_param basic children 5
auth_param basic realm mydomain Server
auth_param basic credentialsttl 5 hours

But I get the following error from the squid.exe txt file:

FATAL: auth_param basic program /squid/libexec/squid_ldap_auth: (2)  
No such file or directory

Any idea what could be wrong?

Thanks

Duncan Peacock





Re: [squid-users] H/W requirement for Squid to run in bigger scene like ISP

2008-07-14 Thread Amos Jeffries

Anna Jonna Armannsdottir wrote:
> On Mon, 2008-07-14 at 13:01 +0200, Angelo Hongens wrote:
>> All the servers I usually buy have either one or two quad core cpu's,
>> so it's more the question: will 8 cores perform better than 4?
>>
>> If not, I would be wiser to buy a single Xeon X5460 or so, instead of
>> 2 lower clocked cpu's, right?
>
> In that case we are fine tuning the CPU power and if there are 8 cores
> in a Squid server, I would think that at least half of them would
> produce idle heat: An extra load for the cooling system. As you point
> out, the CPU speed is probably important for the part of Squid that
> does not use threading or separate process.
>
> But the real bottlenecks are in the I/O, both RAM and DISK. So if I
> was buying HW now, I would mostly be looking at I/O speed and very
> little at CPU speed. SCSI disks with large buffers are preferable, and
> if SCSI is not a viable choice, use the fastest SATA disks you can
> find - Western Digital Raptor used to be the fastest SATA disk, don't
> know what is the fastest SATA disk now.



This whole issue comes up every few weeks.

The last consensus reached was dual-core on a Squid-dedicated machine: 
one core for Squid, one for everything else. With a few GB of RAM and 
fast SATA drives; aufs for Linux, diskd for BSD variants; and many 
spindles preferred over large disk space (2x 100GB instead of 1x 200GB).
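
As an illustrative squid.conf fragment of that consensus (sizes and paths are examples, not recommendations):

```
# two spindles, each with its own cache_dir (aufs on Linux, diskd on BSD)
cache_dir aufs /cache1 90000 16 256
cache_dir aufs /cache2 90000 16 256
# a few GB of RAM; the hot-object memory cache is where Squid is fastest
cache_mem 2048 MB
```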


The old rule-of-thumb memory usage mentioned earlier (10MB per GB of 
disk cache, plus something extra on 64-bit) still holds true. The more 
RAM is available, the larger the in-memory cache can be, and that is 
still where Squid gets its best cache speeds on general web traffic.


Exact tunings are budget dependent.

Amos
--
Please use Squid 2.7.STABLE3 or 3.0.STABLE7


[squid-users] assertion failed

2008-07-14 Thread pritam

Hi All,

Knowing that it is a bug (..?), I need your help here.

My squid is getting restarted often (2-3 times a day) with the 
following messages:


2008/07/14 07:56:47| assertion failed: store_client.c:172: 
"!EBIT_TEST(e->flags, ENTRY_ABORTED)"
2008/07/14 17:18:17| assertion failed: forward.c:109: 
"!EBIT_TEST(e->flags, ENTRY_FWD_HDR_WAIT)"


I have recently updated my squid to 2.7.STABLE3 on two of my servers 
(one on Fedora 6, the other on CentOS 5.1) and also implemented COSS. 
The above problem is seen on only one of my servers (running CentOS 5.1).


My questions are:

Is this related to COSS? Or does it have something to do with the OS 
installed? Or is it related to squid 2.7.STABLE3, since I had no such 
issue before with squid 2.6?

And what would be the best way to get rid of this problem?

Your suggestions will be appreciated.

Regards,

Pritam


Re: [squid-users] When worlds collide

2008-07-14 Thread Tuc at T-B-O-H.NET
> On sön, 2008-07-13 at 10:46 -0400, Tuc at T-B-O-H.NET wrote:
> > Thanks for the reply. It turns out, oddly, that the IP that the
> > system is sending them to doesn't seem to be contactable either.
> > Interestingly, it's generating those "0 0" (return code/bytes) I was
> > seeing recently. So maybe if Squid gets a timeout to a site it
> > causes the 0/0's? When the DNS couldn't resolve I was getting
> > 503/17?? (I forget exactly).
> 
> Probably it's firewalled, only allowing specific IPs access..
> 
> Regards
> Henrik
> 
I heard back from the company today. Yes, they said it is an internal 
(onsite/VPN-only) accessible site. (Yet they use a public IP... and we 
wonder where all the good IPs have gone. ;) )

But shouldn't Squid be returning something other than "0 0"?

Thanks, Tuc


[squid-users] wccp and Cisco router identifier

2008-07-14 Thread Clemente Aguiar
I am in the process of installing a "transparent" squid cache using 
WCCP with a Cisco C2600 router (IOS Version 12.2(46a)).

Everything is working fine except there is something that I don't know
how to change.

The Cisco router identifier is the address that is used for GRE on the
router. Our router has two FastEthernet interfaces, each configured
with an IP, and the router chose one of the IPs at random as the Cisco
router identifier. How can that be changed? (i.e. how can I force
the Cisco router identifier to be a specific IP) 

I searched this list, and somebody said to use a loopback interface on 
the Cisco, as that is much more predictable: the WCCPv2 router ID is 
then always the loopback ID. How is this done?


Clemente



[squid-users] "POST" method doesn't work on Squid 3.0 Stable 7(+ ICAP client) and GreasySpoon (ICAP server)

2008-07-14 Thread Jones
Thanks to the developers' hard work that lets us enjoy Squid. Currently
I am using Squid 3.0.STABLE7 (+ ICAP client) and GreasySpoon (ICAP
server) to customize pages. It works pretty well on Debian.

However, when I browse a page that has a form with the "POST" method,
the page loads repeatedly without finishing, for example
http://www.redbox.com/Titles/Availabletitles.aspx. (The "POST" method
works fine if I disable the ICAP client inside Squid.) I searched the
mail archives for a while, but could not figure out whether this is an
ICAP client problem.

I checked the Squid log and the GreasySpoon access.log and found the following messages:


Squid access.log

1215807898.875 313 128.30.84.19 TCP_MISS/200 751 GET
http://XX/main.php - DIRECT/128.30.2.80 text/html
1215807899.103 185 128.30.84.19 TCP_MISS/200 541 GET
http://XX/script.php - DIRECT/128.30.2.80 text/javascript

GreasySpoon access.log:

[11/Jul/2008:16:09:58 -0400] 1215806998388 0 [REQMOD ] [greasyspoon]
ICAP/200 HTTP/POST http://XXX/confirm.php


However, I don't see any POST http://xxx entries in the squid log,
which might be the reason the page loads repeatedly. (But I can see TCP
packets [TCP segment of a reassembled PDU] that contain the POST method
message.)

Does anyone encounter a similar problem with the ICAP client? I would
appreciate any suggestions or a pointer to related past mail on how to
fix it. Sorry if this is a repeated question.

Thanks for any suggestion.

- Jones




Re: [squid-users] When worlds collide

2008-07-14 Thread Henrik Nordstrom
Mon 2008-07-14 at 11:03 -0400, Tuc at T-B-O-H.NET wrote:
> > 
>   I heard back from the company today. Yes, they said it is an internal
> (onsite/VPN) only accessible site (Yet they use a Public IP.. And we wonder 
> where
> all the good IPs have gone. ;) ).

Heh..

>   But shouldn't Squid be returning something other than "0 0"?

It does if you are a bit patient..

TCP_MISS/0 0 is when the client aborts before anything is known about
the response, i.e. before Squid has timed out the connection to the
unreachable internal server.. The default timeout is 2 minutes.
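
If the two-minute wait itself is the annoyance, the knob involved should be connect_timeout in squid.conf (a sketch; check the default for your Squid version before relying on this):

```
# fail faster when a forwarded connection cannot be established
connect_timeout 30 seconds
```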

Regards
Henrik



[squid-users] Does --enable-ntlm-auth-helpers=fakeauth work with 3.0?

2008-07-14 Thread Jeff Jenkins
Trying to get fakeauth to work with 3.x (I have used the 
3.HEAD-20080711 sources), but I see crashes in fakeauth.


Anyone have this working?

Additionally, where is a definitive guide on getting NTLM auth to work 
with 3.x? I have googled a bunch of 2.x material, but not much 
mentioning 3.x.


Thanks!

-- jrj


[squid-users] Need help with a Reverse proxy situation

2008-07-14 Thread Patson Luk
Hi,

We are using Squid 3.0 as a reverse proxy on our server, and it boosts
performance a lot!

However, we have problems when some of the requests come in as XML
files. Several forums mention that Squid is only HTTP/1.0 compliant, so
those XML requests that come in with HTTP/1.1 chunked transfer-encoding
fail with error code 501 (Not Implemented).

This is quite a dilemma for us, as our products include both the client
(a desktop calendaring app) and the server (a calendar server). The
clients we released use HTTP/1.1 (so we can't switch to 1.0, otherwise
existing users would have problems), and the server itself uses a
framework that forces HTTP/1.1.

I have tried SQUID 2.6 but we still get TCP_DENIED/501 in the access.log

There are several ways I can think of that might fix the problem, but I
don't quite know how to implement them. (Squid and our server are on
the same machine.)

1. Make Squid ignore all PUT/POST requests. But as long as Squid tries
to forward them, it fails.

or

2. Make Squid only catch/forward requests for a certain domain name.
For example, we have the domain names a.com and b.com, both running on
the SAME IP on the SAME machine. Is it possible to configure Squid so
that it only touches/forwards requests that come in for a.com, while
b.com does not go through Squid at all?

or

3. (Least favorite.) Put something on top of Squid that forwards to a
different port based on request type/domain name: e.g. if it's a GET
request, forward to port 83 (with caching), and to port 80 for other
request types. A servlet could probably do it, but I really don't want
to.
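
For what it's worth, option 2 can be sketched on the Squid side with a dstdomain ACL (a.com/b.com are the placeholder names from the question). Note that Squid can only refuse b.com; it cannot let that traffic bypass itself, so b.com would have to be served on a different IP or port:

```
acl siteA dstdomain .a.com
http_access allow siteA
http_access deny all
```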


Many thanks in advance for the help! I am still a newbie at cache and
proxy stuff.


Cheers!!

Patson






Re: [squid-users] wccp and Cisco router identifier

2008-07-14 Thread Adrian Chadd
conf t
int lo0
ip address x.x.x.x 255.255.255.255
end
wri mem

Then probably delete and rebuild your wccp config on the router.



Adrian

2008/7/15 Clemente Aguiar <[EMAIL PROTECTED]>:
> I am in the process of installing a "transparent" squid cache using wcpp
> using a Cisco Router C2600 (IOS Version 12.2(46a))
>
> Everything is working fine except there is something that I don't know
> how to change.
>
> The Cisco router identifier is the address that is used for GRE on the
> router. Our router has two FastEthernet interfaces, each configured
> with an IP, and the router chose one of the IPs at random as the Cisco
> router identifier. How can that be changed? (i.e. how can I force
> the Cisco router identifier to be a specific IP)
>
> I searched in this list and somebody said to use a loopback interface on the 
> Cisco,
> that it would much more predictable as the wccpv2 routerid is then always 
> loopback id.
> How is this done?
>
>
> Clemente
>
>


Re: [squid-users] can't get squid to cache

2008-07-14 Thread Angelo Hongens
Henrik Nordstrom wrote:
> The problem is minimum_expiry_time in your squid.conf:
> 
> minimum_expiry_time 3600 seconds
> refresh_pattern . 3600 100% 3600 ignore-no-cache ignore-reload 
> override-expire override-lastmod
> 
> There is a corner issue with minimum_expiry_time that the expiry time
> needs to be 1 second more for the object to be accepted. But I seriously
> suspect you have misunderstood the meaning of this directive. Most
> likely you want to have it set to 0 or left at the default 60 seconds.
> 
> http://www.squid-cache.org/Versions/v3/3.0/cfgman/minimum_expiry_time.html


I guess you're right, I do not know what minimum_expiry_time means.
Thanks for the link, but I just got the O'Reilly book in the mail; I'm
taking it with me on my holiday, so I can read and get a bit more info
about the big picture.

Hope to be back on the list in August to come back to this issue. Thank
you all for your support so far.


-- 


Met vriendelijke groet,

Angelo Hongens


[squid-users] Fwd: Url redirection to ip

2008-07-14 Thread jason bronson
Is it possible to redirect based on a URL path in Squid? For example, I
have:

63.45.45.45/login/test
63.45.45.45/login/new

63.45.45.45/login/test --> 10.108.111.34
63.45.45.45/login/new  --> 10.108.18.254

So I want Squid's call to be redirected based on the external path that
is seen, and then sent to the correct machine.
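
One way to do this is a redirector/rewriter helper (url_rewrite_program, or redirect_program in older 2.x releases). A minimal sketch under the classic one-line-per-request helper protocol, using the IPs from the question as assumed targets:

```python
#!/usr/bin/env python
# Minimal Squid rewriter sketch: map requests for 63.45.45.45/login/*
# to internal machines. Squid sends lines like:
#   URL client_ip/fqdn ident method
# and expects the (possibly rewritten) URL back on stdout.
import sys

PATH_MAP = {
    "/login/test": "10.108.111.34",   # assumed target from the question
    "/login/new":  "10.108.18.254",   # assumed target from the question
}

def rewrite(url):
    for path, target in PATH_MAP.items():
        if url.startswith("http://63.45.45.45" + path):
            return url.replace("63.45.45.45", target, 1)
    return url  # unchanged: Squid fetches the original URL

if __name__ == "__main__":
    for line in sys.stdin:
        parts = line.split()
        if not parts:
            continue
        sys.stdout.write(rewrite(parts[0]) + "\n")
        sys.stdout.flush()  # helpers must not buffer their replies
```

squid.conf would then point at the script with something like url_rewrite_program /usr/local/bin/rewrite.py (path hypothetical).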


RE: [squid-users] wccp and Cisco router identifier

2008-07-14 Thread Ritter, Nicholas
As far as I am aware, you can't set this in the router. I had the same issue.

-Original Message-
From: Clemente Aguiar [mailto:[EMAIL PROTECTED] 
Sent: Monday, July 14, 2008 11:01 AM
To: squid-users@squid-cache.org
Subject: [squid-users] wccp and Cisco router identifier

I am in the process of installing a "transparent" squid cache using WCCP 
with a Cisco Router C2600 (IOS Version 12.2(46a))

Everything is working fine except there is something that I don't know how to 
change.

The Cisco router identifier is the address that is used for GRE on the router. 
Our router has two FastEthernet interfaces, each configured with an IP, and 
the router chose one of the IPs at random as the Cisco router identifier. How 
can that be changed? (i.e. how can I force the Cisco router identifier to be a 
specific IP) 

I searched in this list and somebody said to use a loopback interface on the 
Cisco, that it would much more predictable as the wccpv2 routerid is then 
always loopback id.
How is this done?


Clemente




Re: [squid-users] Does --enable-ntlm-auth-helpers=fakeauth work with 3.0?

2008-07-14 Thread Jeff Jenkins

I built squid as follows:
$ sudo ./configure --prefix=/usr/local/squid/ntlm --enable-auth=ntlm --enable-ntlm-auth-helpers=fakeauth


When running squid and attempting to get my client to use the squid  
proxy, I see the following errors on the console of the squid machine:


2008/07/14 13:14:06| helperStatefulHandleRead: unexpected read from  
ntlmauthenticator #1, 84 bytes 'TT  
TlRMTVNTUAACCgAKADAGggEATp16pM0 
+8Q06AA==

'
2008/07/14 13:14:06| StatefulHandleRead: no callback data registered
2008/07/14 13:14:06| authenticateNTLMHandleReply: Helper '0x567010'  
crashed!.
2008/07/14 13:14:06| authenticateNTLMHandleReply: Error validating  
user via NTLM. Error returned 'BH Internal error'

2008/07/14 13:14:06| WARNING: ntlmauthenticator #1 (FD 5) exited

I am running 3.HEAD-20080711.  Any ideas why this is crashing?

-- jrj

On Jul 14, 2008, at 10:29 AM, Jeff Jenkins wrote:

Trying to get fakeauth to work with 3.x (have used 3.HEAD-20080711  
sources), but I see crashes in fakeauth.


Anyone have this working?

Additionally, where is a definitive guide on getting NTLM auth to  
work with 3.x?  I have googled a bunch of 2.x stuff, but not much  
mentioning 3.x


Thanks!

-- jrj




Re: [squid-users] wccp and Cisco router identifier

2008-07-14 Thread Michel

> I am in the process of installing a "transparent" squid cache using wcpp
> using a Cisco Router C2600 (IOS Version 12.2(46a))
>
> Everything is working fine except there is something that I don't know
> how to change.
>
> The Cisco router identifier is the address that is used for GRE on the
> router. Our router has two FastEthernet interfaces, each configured
> with an IP, and the router chose one of the IPs at random as the Cisco
> router identifier. How can that be changed? (i.e. how can I force
> the Cisco router identifier to be a specific IP)
>
> I searched in this list and somebody said to use a loopback interface on 
> the
> Cisco,
> that it would much more predictable as the wccpv2 routerid is then always 
> loopback
> id.
> How is this done?
>
>

You can use the interface loopback command to create one, or go into
interface configuration mode and use the loopback subcommand.

Anyway, this might not be the right way; it may be better to configure
the WCCP web-cache service in global config mode and then, on THE
interface you want to use, issue the ip wccp redirect command so that
the reply comes from that interface's IP address.

I don't remember the details, it has been a long time since I did it,
but you will figure it out, right? :)



michel





Tecnologia Internet Matik http://info.matik.com.br
Wireless systems for the broadband provider.
Hosting and personalized email - and of course, in Brazil.




[squid-users] Re: assertion failed

2008-07-14 Thread pritam

pritam wrote:

Hi All,

Knowing that it is a bug ( ..? ) I need yours help here.

My squid is getting restarted often (2, 3 times a day) with following 
messages:


2008/07/14 07:56:47| assertion failed: store_client.c:172: 
"!EBIT_TEST(e->flags, ENTRY_ABORTED)"
2008/07/14 17:18:17| assertion failed: forward.c:109: 
"!EBIT_TEST(e->flags, ENTRY_FWD_HDR_WAIT)"


I have recently updated my squid to 2.7ST3 in two of my servers (one 
in Fedora 6, other in CentOS 5.1) and also implemented COSS. The above 
problem is seen in only one of my server ( running CentOS 5.1).


My questions are;

Is this related to COSS...? Or it has to do something with OS 
Installed...? Or related to squid 2.7ST3, because I had no such issue 
before with squid 2.6.


Sorry, the problem shouldn't be related to the OS installed, as my 
other squid box (running on Fedora) also shows the 'assertion failed' 
error and gets restarted.

Any suggestions for me?


And what could be the best way to get rid of this problem.

Yours' suggestions will be appreciated.

Regards,

Pritam


Re: [squid-users] H/W requirement for Squid to run in bigger scene like ISP

2008-07-14 Thread Chris Woodfield

Hi,

One thing to keep in mind: in my experience, it makes sense not only to  
get fast disks, but also to put as much RAM in the box as you can  
afford. Now *don't* give it all to Squid via the cache_mem setting; let  
the OS use the spare memory for caching disk reads. This will speed  
things up considerably.


Additionally, don't RAID your disks beyond RAID 1, and only do that if  
you have to for reliability requirements. The more individual spindles  
attached to separate cache_dirs, the better. Amos is right that I/O  
trumps CPU here every time.
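As a sketch, the layout Chris describes might look like this in squid.conf
(the paths and sizes are examples only):

```
# one cache_dir per spindle (or RAID1 pair); aufs keeps disk I/O off the main thread
cache_dir aufs /cache1 100000 16 256
cache_dir aufs /cache2 100000 16 256
cache_dir aufs /cache3 100000 16 256
# keep cache_mem modest and let the OS page cache use the remaining RAM
cache_mem 2048 MB
```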


When we swapped out older squid boxes that couldn't take more than 2GB  
of RAM, or more than one disk, and put in 64-bit boxen with 32GB and 3  
cache-dirs (6 drives, paired into three RAID1 devices), we saw things  
improve dramatically despite the fact that the CPUs were actually  
slower. We went from topping out at 5K queries per minute to being  
able to handle ~20K/minute without breaking a sweat. Pretty dramatic  
IMHO.


Hope this helps,

-Chris

On Jul 14, 2008, at 10:04 AM, Amos Jeffries wrote:


Anna Jonna Armannsdottir wrote:

On mán, 2008-07-14 at 13:01 +0200, Angelo Hongens wrote:
All the servers I usually buy have either one or two quad core cpu's,
so it's more the question: will 8 cores perform better than 4?
If not, I would be wiser to buy a single Xeon X5460 or so, instead of
2 lower clocked cpu's, right?

In that case we are fine-tuning the CPU power, and if there are 8  
cores in a Squid server, I would think that at least half of them would
produce idle heat: an extra load for the cooling system. As you point
out, the CPU speed is probably important for the part of Squid that does
not use threading or a separate process. But the real bottlenecks are  
in the I/O, both RAM and disk. So if I was buying HW now, I would  
mostly be looking at I/O speed and very little at CPU speed.
SCSI disks with large buffers are preferable, and if SCSI is not a
viable choice, use the fastest SATA disks you can find - the Western
Digital Raptor used to be the fastest SATA disk; I don't know what the
fastest SATA disk is now.


This whole issue comes up every few weeks.

The last consensus reached was dual-core on a squid dedicated  
machine. One for squid, one for everything else. With a few GB of  
RAM and fast SATA drives. aufs for Linux. diskd for BSD variants.  
With many spindles preferred over large disk space (2x 100GB instead  
of 1x 200GB).


The old rule-of-thumb memory usage mentioned earlier (10MB/GB +  
something extra for 64-bits) still holds true. The more RAM available,  
the larger the in-memory cache can be, and that is still where squid  
gets its best cache speeds on general web traffic.
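The rule of thumb above can be turned into a rough sizing estimate. A quick
sketch (the 10MB-per-GB figure is the thread's rule of thumb; the 1.5x factor
is my own guess at the unspecified "something for 64-bits"):

```python
def squid_ram_estimate_mb(cache_dir_gb, cache_mem_mb=256, is_64bit=True):
    """Rough RAM estimate using the thread's 10 MB-per-GB index rule."""
    index_mb = cache_dir_gb * 10     # in-memory index: ~10 MB per GB of disk cache
    if is_64bit:
        index_mb *= 1.5              # assumed overhead for larger 64-bit pointers
    return index_mb + cache_mem_mb   # plus the hot-object memory cache (cache_mem)

# Example: 2 x 100 GB cache_dirs with a 2 GB cache_mem on a 64-bit box
print(squid_ram_estimate_mb(200, cache_mem_mb=2048))  # -> 5048.0 (MB, before OS page cache)
```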


Exact tunings are budget dependent.

Amos
--
Please use Squid 2.7.STABLE3 or 3.0.STABLE7





[squid-users] Need help with POST problem in Squid 3.0 + GreasySpoon

2008-07-14 Thread Jones
Thanks to the developers' hard work that lets us enjoy Squid. Currently, I am
using Squid 3.0 Stable 7 + ICAP client and GreasySpoon (ICAP
server) to customize pages. It works pretty well on Debian.

However, when I browse a page that has a form with the "POST" method, the
page loads repeatedly without finishing, for example
http://www.redbox.com/Titles/Availabletitles.aspx. (The "POST" method works
fine if I disable the ICAP client inside Squid.) I searched the mail
archives for a while, but cannot figure out whether this is an ICAP client
problem.

I checked the Squid access.log and the GreasySpoon access.log and found the
following messages:

1. Squid access.log:
1215807898.875 313 128.30.84.19 TCP_MISS/200 751 GET http://XX/main.php
- DIRECT/XXX.XXX.XXX.XXX text/html
1215807899.103 185 128.30.84.19 TCP_MISS/200 541 GET
http://XX/script.php - DIRECT/XXX.XXX.XXX.XXX text/javascript

2. GreasySpoon access.log:
[11/Jul/2008:16:09:58 -0400] 1215806998388 0 [REQMOD ] [greasyspoon]
ICAP/200 HTTP/POST http://XXX/confirm.php

However, I don't see any POST http://xxx entries in the squid log, which might
be the reason the page loads repeatedly. (But I can see the TCP packets [TCP
segment of a reassembled PDU], which contain the POST method
message.)

Has anyone encountered a similar problem with the ICAP client? I would
appreciate any suggestions, or a pointer to past related mail explaining
how to fix it. Sorry if this is a repeated question.
Thanks for any suggestion.

Jones
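For reference, a Squid 3.0 ICAP setup along the lines described above might
look roughly like this (the service name, port, and GreasySpoon URL path are
assumptions to check against your install):

```
icap_enable on
# REQMOD service pointing at the local GreasySpoon server
icap_service gs_req reqmod_precache 0 icap://127.0.0.1:1344/greasyspoon
icap_class gs_class gs_req
icap_access gs_class allow all
```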




Re: [squid-users] Fwd: Url redirection to ip

2008-07-14 Thread Michael Alger
On Mon, Jul 14, 2008 at 04:31:35PM -0400, jason bronson wrote:
> Is it possible to redirect based on a URL path in squid example
> 
> I have
> 63.45.45.45/login/test
> 63.45.45.45/login/new
> 
> 63.45.45.45/login/test  --> 10.108.111.34
> 63.45.45.45/login/new  --> 10.108.18.254
> 
> So I want to redirect squid's call based upon its external path
> being seen then send to the correct machine

You need to configure a cache_peer for each backend server you want
to serve from:

cache_peer 10.108.111.34  parent  80  0  name=test no-query no-digest 
originserver
cache_peer 10.108.18.254  parent  80  0  name=new  no-query no-digest 
originserver

The "originserver" option tells squid not to make proxy requests to
it, i.e. to request /foo/bar rather than http://server/foo/bar.

The "name" option lets you refer to the cache_peer with something
other than its IP address, which can make your configuration more
readable and is especially useful if you have multiple servers on
the same IP but a different port.

You then define acls to specify what traffic to allow or disallow to
each of these peers, and apply them with cache_peer_access:

acl test_server_paths url_regex 63\.45\.45\.45/login/test
acl new_server_paths url_regex 63\.45\.45\.45/login/new

cache_peer_access test allow test_server_paths
cache_peer_access test deny all

cache_peer_access new allow new_server_paths
cache_peer_access new deny all

You can probably come up with more efficient rules, but that's the
general approach. The "test" and "new" in the cache_peer_access
lines correspond to the name= assigned to each cache_peer; if you
don't explicitly set a name= you just use the hostname or IP address
of the peer.


Re: [squid-users] Need help with a Reverse proxy situation

2008-07-14 Thread Michael Alger
On Tue, Jul 15, 2008 at 03:39:46AM +0900, Patson Luk wrote:
> We are using SQUID 3.0 as a reverse proxy on our server and it
> boosts the performance a lot!
> 
> However, we have problems when some of the requests are coming in
> as XML files. On several forums there are mentions that SQUID is
> only HTTP 1.0 compliant, hence those XML files that come in with chunked
> transfer-encoding in HTTP/1.1 fail with error code 501 (Not
> Implemented)
> 
> I have tried SQUID 2.6 but we still get TCP_DENIED/501 in the
> access.log
> 
> There are several ways I can think of that might fix the
> problem...but I dun quite know how to implement them :( (SQUID and
> our server are on the same machine)
> 
> 1. Make SQUID to ignore all the PUT/POST request ...but as far as
> SQUID is trying to forward them...it fails

I think you're probably correct in that squid can't deal with it
properly, as HTTP/1.1 support is both very basic and experimental so
far. As far as I can tell, the support in 2.6 is mostly a workaround
to fix specific problems (servers returning chunked encoding in response
to HTTP/1.0 requests), and not a general solution.

I do somewhat wonder why the clients are sending HTTP/1.1 requests
to an HTTP/1.0 server in the first place, but I'm not exactly sure
how "negotiation" occurs, since the client doesn't know the version
of the server until after it has sent its request.

> 2. Make SQUID to only catch/forward requests on certain domain
> name.  For example we have domain name a.com and b.com both runs
> on the SAME IP, SAME machine...is it possible to configure SQUID
> such that it only touches/forwards stuff that comes in as a.com
> but b.com just does not get thru SQUID at all?

I don't think this is possible, as you'd need to bypass squid at the
network layer, before the client makes a connection to squid. At
that point the only things you have to go on are the IP addresses
and ports in the connection request -- the domain name being
requested isn't known until the connection has been established and
the HTTP request is sent.

If possible, I'd try to get another IP address for the system as
that would probably be the cleanest way to handle it, assuming you
can force all the "bad" requests to go to a particular IP address.

> 3. (least favorite) Put some stuff on top of SQUID (that can
> forward to different PORT based on request type/domain name), etc.
> if its a GET request, forward to PORT 83 (with caching) and PORT
> 80 for other request types. A servlet can probably do it...but I
> really dun want to :(

This is probably what you'll need to do. If you're familiar with
Apache it might be worth looking at mod_proxy; possibly it has
better support for chunked encoding. Then again, it might not.
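If you try the Apache route, a minimal mod_proxy sketch might look like this
(the hostname and backend port are placeholders; mod_proxy and mod_proxy_http
must be enabled):

```
<VirtualHost *:80>
    ServerName b.com
    # act as a reverse proxy only; never an open forward proxy
    ProxyRequests Off
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
```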


[squid-users] Need help with POST problem in Squid 3.0 + ICAP client

2008-07-14 Thread Yu Jones
I am using Squid 3.0 Stable 7 + ICAP client and GreasySpoon (ICAP server) to
customize the page. It works pretty well in Debian.

However, when I browse a page that has a form with the "POST" method, the
page loads repeatedly without finishing, such as
http://www.redbox.com/Titles/Availabletitles.aspx. (The "POST" method works
fine if I disable the ICAP client inside Squid.) Checking the Squid log and
the GreasySpoon access.log, I can see POST in the GreasySpoon log, but don't
see any POST http entries in the squid log, which might be the reason the page
loads repeatedly. (But I can see the TCP packets [TCP segment of a reassembled
PDU], which contain the POST method message.)

Has anyone encountered a similar problem with the ICAP client? I would
appreciate any suggestions, or a pointer to past related mail explaining
how to fix it. Sorry if this is a repeated question.

Thanks for any suggestion.

- Jones