Re: [squid-users] maxconn bug ?

2012-03-14 Thread FredB

> > Bit suspicious yes.
> > 
> > Tried apachebench (ab) with concurrency level 10? or anything like
> > that
> > which can guarantee multiple simultaneous connections for the test?
> > 
> > Amos
> 
> Yes, a little script that makes many recursive wget requests, plus browsing
> with Firefox; afterwards I watch access.log and see about 20 connections per
> second. Even with a single Firefox window refreshing 30 tabs I'm not denied,
> and the behaviour is the same with "acl all".
> 
> I tried maxconn (or something like that, I can't remember) a long time
> ago with squid 2.6, and I had to increase maxconn's value to more
> than 5 for comfortable browsing.
> 

Perhaps somebody can try this, and tell me whether I should open a bug?
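
For anyone reproducing this, a minimal sketch of the kind of config under
test, plus an apachebench run that guarantees simultaneous connections (the
addresses and ports here are made up):

acl lan src 192.168.0.0/24
acl toomany maxconn 2
http_access deny lan toomany
http_access allow lan

# 10 concurrent connections through the proxy:
ab -n 200 -c 10 -X 192.168.0.1:3128 http://example.com/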

Thanks


Re: [squid-users] About access.log hourly?

2012-03-14 Thread Ibrahim Lubis
Oh, I see, thanks...

-Original Message-

From: Amos Jeffries
Sent: 14 Mar 2012 02:18:58 GMT
To: squid-users@squid-cache.org
Subject: Re: [squid-users] About access.log hourly?

On 14.03.2012 14:54, Ibrahim Lubis wrote:
> I use cron...

Then the answer is quite simply to set cron to run its rotation command every
15 minutes and bump up your logfile_rotate limit to prevent losing older
logs.
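
For example, a minimal sketch of that setup, assuming the squid binary lives
at /usr/sbin/squid:

# squid.conf: keep a day's worth of 15-minute logs (4 per hour x 24 hours)
logfile_rotate 96

# crontab: rotate the logs every 15 minutes
*/15 * * * * /usr/sbin/squid -k rotate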

Amos


>
> -Original Message-
>
> From: Amos Jeffries
>
> On 13.03.2012 23:09, Ibrahim Lubis wrote:
>> How can I configure squid to create a new access.log file every 15
>> minutes, so that in 1 hour I have 4 different log files?
>
> What are you using to manage the Squid logs: cron? logrotate?
> something else?
>
> Amos



[squid-users] Digest Problem

2012-03-14 Thread FredB
Hi,

I'm trying LDAP and digest auth with squid 3.2.0.16. The authentication seems
to work, but unfortunately I can only navigate once:

1) squid start

2) Open Firefox; first connection denied -> normal
192.168.80.194 - - [14/Mar/2012:09:54:40 +0100] "GET http://www.google.fr/ 
HTTP/1.1" 407 1861 "-" "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.19) 
Gecko/2010091807 Iceweasel/3.0.6 (Debian-3.0.6-3)" TCP_DENIED:HIER_NONE

3) Identification OK with user ftest
192.168.80.194 - ftest [14/Mar/2012:09:54:51 +0100] "GET http://www.google.fr/ 
HTTP/1.1" 200 22083 "-" "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.19) 
Gecko/2010091807 Iceweasel/3.0.6 (Debian-3.0.6-3)" TCP_MISS:HIER_DIRECT

4) Refresh or fetch another website -> denied
192.168.80.194 - - [14/Mar/2012:09:54:51 +0100] "GET 
http://www.google.fr/images/icons/product/chrome-48.png HTTP/1.1" 403 1742 
"http://www.google.fr/"; "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.19) 
Gecko/2010091807 Iceweasel/3.0.6 (Debian-3.0.6-3)" TCP_DENIED:HIER_NONE
192.168.80.194 - - [14/Mar/2012:09:54:51 +0100] "GET 
http://www.google.fr/logos/2012/yoshizawa12-hp.jpg HTTP/1.1" 403 1742 
"http://www.google.fr/"; "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.19) 
Gecko/2010091807 Iceweasel/3.0.6 (Debian-3.0.6-3)" TCP_DENIED:HIER_NONE
192.168.80.194 - - [14/Mar/2012:09:54:51 +0100] "GET 
http://www.google.fr/images/modules/buttons/g-button-chocobo-basic-1.gif 
HTTP/1.1" 403 1742 "http://www.google.fr/"; "Mozilla/5.0 (X11; U; Linux i686; 
en-US; rv:1.9.0.19) Gecko/2010091807 Iceweasel/3.0.6 (Debian-3.0.6-3)" 
TCP_DENIED:HIER_NONE
192.168.80.194 - - [14/Mar/2012:09:54:51 +0100] "GET 
http://www.google.fr/images/modules/buttons/g-button-chocobo-basic-2.gif 
HTTP/1.1" 403 1742 "http://www.google.fr/"; "Mozilla/5.0 (X11; U; Linux i686; 
en-US; rv:1.9.0.19) Gecko/2010091807 Iceweasel/3.0.6 (Debian-3.0.6-3)" 
TCP_DENIED:HIER_NONE
192.168.80.194 - - [14/Mar/2012:09:54:51 +0100] "GET 
http://www.google.fr/extern_js/f/CgJmchICZnIrMEU4ACwrMFo4ACwrMA44ACwrMBc4ACwrMDw4ACwrMFE4ACwrMFk4ACwrMAo4AJoCAmNjLCswmAE4ACwrMBY4ACwrMBk4ACwrMCs4AJoCC2pzX3JlZGlyZWN0LCswQTgALCswTTgALCswTjgALCswUzgALCswVDgALCswaTgALCswkAE4ACwrMJIBOAAsKzCXATgALCswowE4ACwrMKcBOAAsKzDVATgALCsw2AE4ACwrMB04ACwrMFw4ACwrMBg4ACwrMCY4ACyAAmiQAms/VOQ9j5h6dbo.js
 HTTP/1.1" 403 1742 "http://www.google.fr/"; "Mozilla/5.0 (X11; U; Linux i686; 
en-US; rv:1.9.0.19) Gecko/2010091807 Iceweasel/3.0.6 (Debian-3.0.6-3)" 
TCP_DENIED:HIER_NONE
192.168.80.194 - - [14/Mar/2012:09:54:52 +0100] "GET 
http://www.google.fr/images/nav_logo104.png HTTP/1.1" 403 1742 
"http://www.google.fr/"; "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.19) 
Gecko/2010091807 Iceweasel/3.0.6 (Debian-3.0.6-3)" TCP_DENIED:HIER_NONE
192.168.80.194 - - [14/Mar/2012:09:54:52 +0100] "GET 
http://www.google.fr/favicon.ico HTTP/1.1" 403 1742 "-" "Mozilla/5.0 (X11; U; 
Linux i686; en-US; rv:1.9.0.19) Gecko/2010091807 Iceweasel/3.0.6 
(Debian-3.0.6-3)" TCP_DENIED:HIER_NONE
192.168.80.194 - - [14/Mar/2012:09:54:52 +0100] "GET 
http://ssl.gstatic.com/gb/js/sem_24f279c41cbdb53cb15432c98ed5fee2.js HTTP/1.1" 
403 1742 "http://www.google.fr/"; "Mozilla/5.0 (X11; U; Linux i686; en-US; 
rv:1.9.0.19) Gecko/2010091807 Iceweasel/3.0.6 (Debian-3.0.6-3)" 
TCP_DENIED:HIER_NONE
192.168.80.194 - - [14/Mar/2012:09:54:54 +0100] "GET http://www.google.fr/ 
HTTP/1.1" 403 1742 "-" "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.19) 
Gecko/2010091807 Iceweasel/3.0.6 (Debian-3.0.6-3)" TCP_DENIED:HIER_NONE

It's OK only for the first request; for example, if my first page is
www.squid-cache.org I get only the HTML page, without CSS or images.

Squid.conf:

auth_param digest program /usr/lib/squid/digest_ldap_auth -b 
ou=People,dc=ldap,dc=test -h 127.0.0.1:389 -A "description" -l: -e -u "uid"

auth_param digest realm PROXY
auth_param digest children 10
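
A rough way to exercise the helper in isolation, assuming the classic digest
helper stdin/stdout protocol in which squid writes "user":"realm" and the
helper answers with the HA1 hash (or ERR):

/usr/lib/squid/digest_ldap_auth -b ou=People,dc=ldap,dc=test \
    -h 127.0.0.1:389 -A "description" -l: -e -u "uid"
# then type on stdin:
"ftest":"PROXY"
# a hash on stdout means the LDAP lookup side works; ERR points at the helper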

Thanks 


TR: [squid-users] https analyze, squid rpc proxy to rpc proxy ii6 exchange2007 with ntlm

2012-03-14 Thread Clem
Hello,

Ok, so now I know exactly why squid can't forward NTLM credentials and stops
at type 1: it's facing the double-hop issue. NTLM credentials can be sent
over only one hop, and are lost with two hops, as in: client -> squid (hop 1)
-> IIS6 RPC proxy (hop 2) -> Exchange 2007.

That's why connecting directly to my IIS6 RPC proxy works, while connecting
through squid prompts for login/password again and again. We can see this
clearly in the HTTPS analyses.

ISA Server has a workaround for this double-hop issue, as I wrote in
my last mail; I don't know whether squid can act the same way.

At the moment I'm researching whether IIS6 can be configured to resolve this
problem, but I don't want to "break" my Exchange, so I have to test very
carefully.

Regards

Clem



-Original Message-
From: Clem [mailto:clemf...@free.fr] 
Sent: Monday, 12 March 2012 13:20
To: squid-users@squid-cache.org
Subject: TR: [squid-users] https analyze, squid rpc proxy to rpc proxy ii6
exchange2007 with ntlm

Progressing in my NTLM / RPC-over-HTTPS research:

The only reverse proxy that can forward NTLM authentication for Outlook
Anywhere with NTLM auth is ISA, and this article describes which parameters
you must set to make it work:

http://blogs.pointbridge.com/Blogs/enger_erik/Pages/Post.aspx?_ID=17

The main parameters are:

. Accept all users
and
. No delegation, but client may authenticate directly

So the proxy acts "directly" and sends credentials as if it were the client.

I think squid would have to act exactly like ISA for NTLM auth to work; I
don't know if that's possible, since ISA is a Windows proxy server and surely
more comfortable with compatibility.

Regards

Clem

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Thursday, 8 March 2012 14:29
To: Clem
Subject: Re: TR: [squid-users] https analyze, squid rpc proxy to rpc proxy
ii6 exchange2007 with ntlm

On 9/03/2012 2:08 a.m., Clem wrote:
> Ok Amos, so we are back to the same issues; as I told you, I have tested
> all I could with the latest 3.2 beta versions.
>
> So I'm going back to the type-1 NTLM message issue (see my earlier messages
> with this subject).
>
> And my last question was :
>
>> I think the link SQUID ->   IIS6 RPC PROXY is represented by the
>> cache_peer line on my squid.conf, and I don't know if
>> client_persistent_connections
> and
>> server_persistent_connections parameters affect cache_peer too ?

It does.
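
For reference, a sketch of the squid.conf pieces usually involved in letting
connection-oriented (NTLM) auth pass through to a peer; the peer name here is
hypothetical, and connection-auth availability depends on the Squid version:

# keep both hops persistent so the NTLM handshake survives across requests
client_persistent_connections on
server_persistent_connections on

# pass the challenge/response through to the IIS6 RPC proxy peer
cache_peer rpcproxy.example.local parent 443 0 ssl login=PASS connection-auth=on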


Amos



[squid-users] Compile problem on Solaris - Squid 3.19-20120306-r10434 - Ssl::FileLocker and LOCK_EX

2012-03-14 Thread Marcin Jarzab

Hello,

The following problem occurred. If required, I can provide an OVA VM
template with the runtime specified below:


Platform: SunOS solaris11 5.11 snv_151a i86pc i386 i86pc Solaris

configure params:
./configure \
--prefix=/usr \
--program-suffix=custom \
--includedir=${prefix}/include \
--mandir=${prefix}/share/man \
--infodir=${prefix}/share/info \
--sysconfdir=/etc \
--localstatedir=/var \
--libexecdir=${prefix}/lib/squid3-custom \
--srcdir=. \
--disable-maintainer-mode \
--disable-dependency-tracking \
--disable-silent-rules \
--datadir=/usr/share/squid3-custom \
--sysconfdir=/etc/squid3-custom \
--mandir=/usr/share/man \
--with-cppunit-basedir=/usr/local \
--enable-inline \
--enable-async-io=8 \
--enable-storeio=ufs,aufs,diskd \
--enable-removal-policies=lru,heap \
--enable-delay-pools \
--enable-cache-digests \
--enable-underscores \
--enable-icap-client \
--enable-follow-x-forwarded-for \
--enable-auth=basic,digest,ntlm,negotiate \
--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SASL,SMB,YP,DB,POP3,getpwnam,squid_radius_auth,multi-domain-NTLM \
--enable-ntlm-auth-helpers=smb_lm \
--enable-digest-auth-helpers=ldap,password \
--enable-negotiate-auth-helpers=squid_kerb_auth \
--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group \
--enable-arp-acl \
--enable-esi \
--enable-zph-qos \
--disable-translation \
--with-logdir=/var/log/squid3-custom \
--with-pidfile=/var/run/squid3-custom.pid \
--with-filedescriptors=65536 \
--with-large-files \
--with-default-user=proxy \
--enable-linux-netfilter \
--enable-ssl \
--enable-ssl-crtd \
--with-openssl=/usr/include/openssl
---
Make error:
libtool: link: ( cd ".libs" && rm -f "libsslutil.la" && ln -s 
"../libsslutil.la" "libsslutil.la" )
g++ -DHAVE_CONFIG_H  -I../.. -I../../include -I../../src -I../../include 
-I/usr/local/include -I../../libltdl  -I/usr/include/openssl/include 
-I/usr/include/libxml2  -I/usr/include/libxml2 -Wall -Wpointer-arith 
-Wwrite-strings -Wcomments -Werror  -D_REENTRANT -pthreads -m64 -g -O2 
-c -o ssl_crtd.o ssl_crtd.cc
g++ -DHAVE_CONFIG_H  -I../.. -I../../include -I../../src -I../../include 
-I/usr/local/include -I../../libltdl  -I/usr/include/openssl/include 
-I/usr/include/libxml2  -I/usr/include/libxml2 -Wall -Wpointer-arith 
-Wwrite-strings -Wcomments -Werror  -D_REENTRANT -pthreads -m64 -g -O2 
-c -o certificate_db.o certificate_db.cc
certificate_db.cc: In constructor `Ssl::FileLocker::FileLocker(const 
std::string&)':

certificate_db.cc:34: error: `LOCK_EX' undeclared (first use this function)
certificate_db.cc:34: error: (Each undeclared identifier is reported 
only once for each function it appears in.)

certificate_db.cc: In destructor `Ssl::FileLocker::~FileLocker()':
certificate_db.cc:47: error: `LOCK_UN' undeclared (first use this function)
make[3]: *** [certificate_db.o] Error 1
make[3]: Leaving directory 
`/var/installs/squid-3.1.19-20120306-r10434/src/ssl'

make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/var/installs/squid-3.1.19-20120306-r10434/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/var/installs/squid-3.1.19-20120306-r10434/src'
make: *** [all-recursive] Error 1
root@solaris11:/var/installs/squid-3.1.19-20120306-r10434


--
Ph.D. Eng. Marcin Jarzab
m...@agh.edu.pl
http://www.ics.agh.edu.pl/people/mj

Department of Computer Science
AGH University of Science and Technology
Al. Mickiewicza 30, 30-059  Krakow, POLAND
phone: +48 (12) 6173491 (36)
==



RE: [squid-users] Login Popups on Windows XP with squid_kerb_auth and external acl

2012-03-14 Thread Игорь Потапов
I've found the failing component. It's external_acl_type with the %LOGIN
parameter. It starts some kind of authentication if it thinks the user is not
authenticated, and that procedure forces IE on XP to open a login window. I
think that procedure is different from the one used by the squid_kerb_auth
ACL.
How can I help determine the root cause of this issue?


> -Original Message-
> From: Игорь Потапов [mailto:potapo...@vnipigaz.gazprom.ru]
> Sent: Tuesday, March 13, 2012 10:44 AM
> To: squid-users@squid-cache.org
> Subject: [squid-users] Login Popups on Windows XP with squid_kerb_auth and 
> external acl
> 
> Hi.
> squid is 3.1.19 on FreeBSD 8.2 with MIT Kerberos. squid_kerb_auth is in use
> as the only auth scheme. I have an external ACL to check authorization in a
> MySQL db. On machines running XP SP2 with IE8 (Windows Integrated Auth
> enabled), authentication windows sometimes pop up. I think this happens
> when some request is denied by the external auth script. If I hit Cancel,
> the page loads further. On Windows 7 I see no such behavior.
> Config is here: http://pastebin.com/QyCiha8Q Here is the external auth
> script: http://pastebin.com/LiAmniSz I think IE8 on XP sometimes doesn't
> send Authorization and asks for it, or falls back to NTLM. I've made some
> workarounds to disable login windows, but on XP they still appear.
> Can I force IE8 on XP to use only Negotiate/Kerberos?




Re: [squid-users] requests per second

2012-03-14 Thread Kinkie
>> These are some performance stats from network admins who have been willing to 
>> donate the info publicly:
>> http://wiki.squid-cache.org/KnowledgeBase/Benchmarks
>
> How do we post results on the above wiki page?

Instructions on how to apply are on the main wiki page.
Quoting:

To contribute to this Wiki, please register by using the Login link
and then email the wiki administrator to be granted write access. Wiki
editing is restricted to registered users to avoid wiki-spam. If you
are new to this wiki, the SyntaxReference may be handy. You can also
practice wiki editing in the WikiSandBox.


Any contribution is welcome :)

-- 
    /kinkie


[squid-users] DSCP tags on regex acl

2012-03-14 Thread Greg Whynott

Hello,

Just wanted to confirm whether I am doing this properly, as it appears not
to be working. Thanks very much for your time.


The intent is to tag all traffic heading to identified sites with a TOS
value, which our internet routers will see and use to apply a policy route.
We want to send all bulk video traffic to a particular ISP (we have multiple
ISPs).


in the config I put:

acl youtube url_regex -i youtube
tcp_outgoing_tos af22 youtube


My hope was that any URL with "youtube" in it would cause the outgoing
request to be tagged, but this doesn't appear to be happening, as I'm not
seeing any AF22 DSCP tags at the router.



This is squid version 3.1.

take care,
greg





Re: [squid-users] Help with a tcp_miss/200 issue

2012-03-14 Thread James Ashton
Hello again,
 Does anyone else have any ideas on this?

Thank You
James

- Original Message -
From: "James Ashton" 
To: "Amos Jeffries" 
Cc: squid-users@squid-cache.org
Sent: Tuesday, March 13, 2012 8:44:54 AM
Subject: Re: [squid-users] Help with a tcp_miss/200 issue

Thanks Amos,

The web servers reply to squid with these headers

=
Cache-Control     max-age=60
Connection        Keep-Alive
Content-Encoding  gzip
Content-Length    15139
Content-Type      text/html; charset=UTF-8
Date              Tue, 13 Mar 2012 12:42:26 GMT
Expires           Tue, 13 Mar 2012 12:43:26 GMT
Keep-Alive        timeout=15, max=5000
Server            Apache/2.2.15 (CentOS)
Vary              Accept-Encoding,User-Agent
X-Pingback        http://planetphotoshop.com/xmlrpc.php
=


They look good to me...
Do you see anything missing from this?

Thank You
James

- Original Message -
From: "Amos Jeffries" 
To: squid-users@squid-cache.org
Sent: Monday, March 12, 2012 10:39:13 PM
Subject: Re: [squid-users] Help with a tcp_miss/200 issue

On 13.03.2012 03:13, James Ashton wrote:
> Any thoughts guys?
>
> This has me baffled.  I am digging through list archives, but nothing
> relevant so far.
> I figure it has to be a response header issue.  I just don't see it.
>

Could be. You will need to know the headers being sent into Squid 
"squid1.kelbymediagroup.com" from the origin server though. I suspect it 
may be missing a Date: header or something like that, making the original 
non-cacheable. Squid does a lot of fixing-up of details like that on its 
output, to make the output more friendly to downstream clients.
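
One low-effort way to capture exactly what the origin sends, assuming the
origin host is reachable directly from the Squid box:

curl -s -D - -o /dev/null http://planetphotoshop.com/

(-D - dumps the response headers; compare them against what Squid emits.)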


> Using Squid 3.1.8

Or it could be some bug in that particular version. Tried the more 
current .19 release?


Config seems okay.

> #
> visible_hostname squid2.kelbymediagroup.com
> #
> refresh_pattern
> 
> (phpmyadmin|process|register|login|contact|signup|admin|gateway|ajax|account|cart|checkout|members)
> 0 10% 0
> refresh_pattern (blog|feed) 300 20% 4320
> refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 40320 75% 86400
> refresh_pattern -i \.(iso|avi|wav|mp3|mpeg|swf|flv|x-flv)$ 1440 40% 
> 40320
> refresh_pattern -i \.mp4$   1440   90% 43200
> refresh_pattern -i \.(css|js)$ 300 40% 7200
> refresh_pattern -i \.(html|htm)$ 300 40% 7200
> refresh_pattern (/cgi-bin/|\?) 300 20% 4320
> refresh_pattern . 300 40% 40320
> #



Amos

>
>
> - Original Message -
> From: "James Ashton"
>
> Hello all,
>  I am trying to improve caching/acceleration on a series of wordpress 
> sites.
> Almost all objects are being cached at this point other than the page
> HTML itself.
> All I am getting there is TCP_MISS/200 log lines.
>
> The request is a GET for the URL  http://planetphotoshop.com
>
> At the moment my response header is:
>
> Cache-Control     max-age=60
> Cneonction        close
> Connection        keep-alive
> Content-Encoding  gzip
> Content-Length    15339
> Content-Type      text/html; charset=UTF-8
> Date              Fri, 09 Mar 2012 13:58:01 GMT
> Server            Apache/2.2.15 (CentOS)
> Vary              Accept-Encoding
> Via               1.0 squid1.kelbymediagroup.com (squid)
> X-Cache           MISS from squid1.kelbymediagroup.com
> X-Cache-Lookup    MISS from squid1.kelbymediagroup.com:80
> X-Pingback        http://planetphotoshop.com/xmlrpc.php
>
>
> I don't see anything preventing caching.
>
> Any thoughts or ideas?
>
> Thank you in advance for the help.
>
> James



[squid-users] TLS v1.2 support

2012-03-14 Thread Sébastien WENSKE
Hi guys,

OpenSSL 10.01 just released, it seems that it supports TLS v1.2.

What about Squid?

Cheers,
Sebastien W.




[squid-users] RE: TLS v1.2 support

2012-03-14 Thread Sébastien WENSKE
OpenSSL 1.0.1  (not 10.0.1)

-Original Message-
From: Sébastien WENSKE [mailto:sebast...@wenske.fr] 
Sent: mercredi 14 mars 2012 17:14
To: squid-users@squid-cache.org
Subject: [squid-users] TLS v1.2 support

Hi guys,

OpenSSL 10.01 just released, it seems that it supports TLS v1.2.

What about Squid?

Cheers,
Sebastien W.




[squid-users] Caching in 3.2 vs 3.1

2012-03-14 Thread Erik Svensson
Hi,

Objects don't get cached in Squid 3.2; the same transactions and config work in 3.1.

I will show my problem with a simple webserver listening on 127.0.0.1:9990
and sending transactions from curl to a squid listening on 127.0.0.1:9993

3.1 first logs a MISS, since the cache is empty, then a HIT when the
transaction is repeated.
3.2 logs two MISSes.

# /opt/squid-3.1.19/sbin/squid -v
Squid Cache: Version 3.1.19
configure options:  '--prefix=/opt/squid-3.1.19' '--disable-wccp'
'--disable-wccpv2' '--disable-ident-lookups' '--disable-ipv6'
'--with-large-files' --with-squid=/usr/local/src/squid-3.1.19
--enable-ltdl-convenience

# /opt/squid-3.2.0.16/sbin/squid -v
Squid Cache: Version 3.2.0.16
configure options:  '--prefix=/opt/squid-3.2.0.16' '--disable-wccp'
'--disable-wccpv2' '--disable-ident-lookups' '--disable-ipv6'
'--with-large-files' --enable-ltdl-convenience

# cat 3.conf
http_port 127.0.0.1:9993
icp_port 0
cache_mem 128 mb
#cache_dir null /tmp
access_log  /tmp/3/access.log
cache_log   /tmp/3/cache.log
pid_filename /tmp/3/squid.pid
coredump_dir /tmp/3
refresh_pattern . 0 20% 4320
http_access allow all
http_reply_access allow all
shutdown_lifetime 2 seconds

# thttpd -p 9990 -d /tmp   # start thttpd webserver serving files in /tmp

# echo HiHo >/tmp/x # Create a file to serve


# /opt/squid-3.1.19/sbin/squid -f 3.conf

# curl -v -H "Pragma:" -x 127.0.0.1:9993 http://127.0.0.1:9990/x
* About to connect() to 127.0.0.1 port 9993
*   Trying 127.0.0.1... * connected
* Connected to 127.0.0.1 (127.0.0.1) port 9993
> GET http://127.0.0.1:9990/x HTTP/1.1
User-Agent: curl/7.12.1 (i386-redhat-linux-gnu) libcurl/7.12.1
OpenSSL/0.9.7a zlib/1.2.1.2 libidn/0.5.6
Host: 127.0.0.1:9990
Accept: */*

< HTTP/1.0 200 OK
< Server: thttpd/2.25b 29dec2003
< Content-Type: text/plain; charset=iso-8859-1
< Date: Wed, 14 Mar 2012 15:49:39 GMT
< Last-Modified: Wed, 14 Mar 2012 15:47:14 GMT
< Accept-Ranges: bytes
< Content-Length: 5
< X-Cache: MISS from localhost.localdomain
< Via: 1.0 localhost.localdomain (squid/3.1.19)
* HTTP/1.0 connection set to keep alive!
< Connection: keep-alive
HiHo
* Connection #0 to host 127.0.0.1 left intact
* Closing connection #0

# curl -v -H "Pragma:" -x 127.0.0.1:9993 http://127.0.0.1:9990/x
* About to connect() to 127.0.0.1 port 9993
*   Trying 127.0.0.1... * connected
* Connected to 127.0.0.1 (127.0.0.1) port 9993
> GET http://127.0.0.1:9990/x HTTP/1.1
User-Agent: curl/7.12.1 (i386-redhat-linux-gnu) libcurl/7.12.1
OpenSSL/0.9.7a zlib/1.2.1.2 libidn/0.5.6
Host: 127.0.0.1:9990
Accept: */*

< HTTP/1.0 200 OK
< Server: thttpd/2.25b 29dec2003
< Content-Type: text/plain; charset=iso-8859-1
< Date: Wed, 14 Mar 2012 15:49:39 GMT
< Last-Modified: Wed, 14 Mar 2012 15:47:14 GMT
< Accept-Ranges: bytes
< Content-Length: 5
< Age: 8
< X-Cache: HIT from localhost.localdomain
< Via: 1.0 localhost.localdomain (squid/3.1.19)
* HTTP/1.0 connection set to keep alive!
< Connection: keep-alive
HiHo
* Connection #0 to host 127.0.0.1 left intact
* Closing connection #0

# /opt/squid-3.1.19/sbin/squid -f 3.conf -k shutdown

# cat access.log
1331740179.023  2 127.0.0.1 TCP_MISS/200 339 GET
http://127.0.0.1:9990/x - DIRECT/127.0.0.1 text/plain
1331740187.003  0 127.0.0.1 TCP_MEM_HIT/200 346 GET
http://127.0.0.1:9990/x - NONE/- text/plain

# rm access.log


# /opt/squid-3.2.0.16/sbin/squid -f 3.conf

# curl -v -H "Pragma:" -x 127.0.0.1:9993 http://127.0.0.1:9990/x
* About to connect() to 127.0.0.1 port 9993
*   Trying 127.0.0.1... * connected
* Connected to 127.0.0.1 (127.0.0.1) port 9993
> GET http://127.0.0.1:9990/x HTTP/1.1
User-Agent: curl/7.12.1 (i386-redhat-linux-gnu) libcurl/7.12.1
OpenSSL/0.9.7a zlib/1.2.1.2 libidn/0.5.6
Host: 127.0.0.1:9990
Accept: */*

< HTTP/1.1 200 OK
< Server: thttpd/2.25b 29dec2003
< Content-Type: text/plain; charset=iso-8859-1
< Date: Wed, 14 Mar 2012 15:55:29 GMT
< Last-Modified: Wed, 14 Mar 2012 15:47:14 GMT
< Accept-Ranges: bytes
< Content-Length: 5
< X-Cache: MISS from localhost.localdomain
< Via: 1.1 localhost.localdomain (squid/3.2.0.16)
< Connection: keep-alive
HiHo
* Connection #0 to host 127.0.0.1 left intact
* Closing connection #0

# curl -v -H "Pragma:" -x 127.0.0.1:9993 http://127.0.0.1:9990/x
* About to connect() to 127.0.0.1 port 9993
*   Trying 127.0.0.1... * connected
* Connected to 127.0.0.1 (127.0.0.1) port 9993
> GET http://127.0.0.1:9990/x HTTP/1.1
User-Agent: curl/7.12.1 (i386-redhat-linux-gnu) libcurl/7.12.1
OpenSSL/0.9.7a zlib/1.2.1.2 libidn/0.5.6
Host: 127.0.0.1:9990
Accept: */*

< HTTP/1.1 200 OK
< Server: thttpd/2.25b 29dec2003
< Content-Type: text/plain; charset=iso-8859-1
< Date: Wed, 14 Mar 2012 15:55:34 GMT
< Last-Modified: Wed, 14 Mar 2012 15:47:14 GMT
< Accept-Ranges: bytes
< Content-Length: 5
< X-Cache: MISS from localhost.localdomain
< Via: 1.1 localhost.localdomain (squid/3.2.0.16)
< Connection: keep-alive
HiHo
* Connection #0 to host 127.0.0.1 left intact
* Closing connection #0

# /opt/squid-3.2.0.16/sbin/squid -f 3.c

[squid-users] invalid request problem with wireshark capturing

2012-03-14 Thread Mustafa Raji
Hi,
the invalid-request problem still exists on my cache server. I will explain
the problem in more detail here, hoping there is something I don't know about
in squid that will solve it.

As an attempt, I captured one of the invalid requests; the captured packet
details are:

Frame 4139: 110 bytes on wire (880 bits), 110 bytes captured (880 bits)
Arrival Time: Mar 13, 2012 11:53:02.53614 AST
Epoch Time: 1331628782.53614 seconds
Time delta from previous captured frame: 0.008177000 seconds
Time delta from previous displayed frame: 0.008177000 seconds
Time since reference or first frame: 51.377354000 seconds
Frame Number: 4139
Frame Length: 110 bytes (880 bits)
Capture Length: 110 bytes (880 bits)
Frame is marked: False
Frame is ignored: False
Protocols in frame: eth:ip:tcp:http:data
Coloring Rule Name: HTTP
Coloring Rule String: http || tcp.port == 80

Internet Protocol, Src: 192.168.40.3 (192.168.40.3), Dst: 10.10.10.53 (10.10.10.53)
Version: 4
Header length: 20 bytes
Differentiated Services Field: 0x00 (DSCP 0x00: Default; ECN: 0x00)
Total Length: 96
Identification: 0x23e0 (9184)
Flags: 0x02 (Don't Fragment)
Fragment offset: 0
Time to live : 127
Protocol : TCP (6)
Header checksum: 0xdacd [correct]
Source: 10.10.10.53 (10.10.10.53)
Destination: 192.168.40.3 (192.168.40.3)
Transmission Control Protocol, Src Port:49869 (49869), Dst Port: http (80), seq:
Source port: 49869 (49869)
Destination port: http (80)
[Stream index: 240]
Sequence number: 1 (relative sequence number)
[Next sequence number: 57 (relative sequence number)]
Acknowledgement number: 1 (relative ack number)
Header length: 20 bytes
Flags: 0x18 (PSH, ACK)
window size: 17520 (scaled)
Checksum: 0xba28 [validation disabled]
[SEQ/ACK analysis]
*Hypertext Transfer Protocol
  *DATA (56 bytes)
   Data:0569ff24fdd6dbd18ffe4d2f2fffaa9020alae217a53923a..
    [Length: 56]

Squid does recognize the clients' IPs in the access.log file. The policy
routing in MikroTik is done using a dstnat rule: whatever packets come from
any source IP address except the IP of the cache server, with TCP destination
port 80, are redirected to the IP address of the cache on port 80.
To be clear, it's the same as this Linux rule (shown only for clarity, not
actually applied in Linux iptables, because I don't know how else to explain
what I did on the MikroTik router):
iptables -t nat -A PREROUTING -p tcp --dport 80 ! -s 192.168.40.2 -j DNAT 
--to-destination 192.168.40.2:80
where 192.168.40.2 is the IP of the cache server

If the problem is SSL requests being sent to the cache server through port
80: why is that happening? This type of traffic should use port 443, so the
MikroTik rule should not apply to it, and it should go directly to the
internet. I hope I was clear in describing the problem.

Thanks, with my best regards
   





Re: [squid-users] RE: TLS v1.2 support

2012-03-14 Thread Amos Jeffries

On 15.03.2012 05:16, Sébastien WENSKE wrote:

OpenSSL 1.0.1  (not 10.0.1)

-Original Message-
From: Sébastien WENSKE [mailto:sebast...@wenske.fr]
Sent: mercredi 14 mars 2012 17:14
To: squid-users@squid-cache.org
Subject: [squid-users] TLS v1.2 support

Hi guys,

OpenSSL 10.01 just released, it seems that it supports TLS v1.2.



Thanks for the heads-up.



What about Squid?


Squid supports whatever the library you build it with does.

About the only relevance a change like this has is if there are new 
options which we have to map from squid.conf to the OpenSSL API calls 
("NO_TLSv11" or such), or if they make more ABI-breaking alterations 
like the 1.0.0 c->d rewrite did.
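
In practice that means rebuilding Squid against the new library; a sketch,
assuming OpenSSL 1.0.1 was installed under /opt/openssl-1.0.1:

./configure --enable-ssl --with-openssl=/opt/openssl-1.0.1
make && make install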


Amos



Re: [squid-users] Caching in 3.2 vs 3.1

2012-03-14 Thread Amos Jeffries


cc'ing to squid-dev where the people who might know reside


Also, adding "debug_options 11,2" may show something useful in the HTTP 
flow for 3.2.
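
That is a squid.conf directive; a minimal sketch of adding it while keeping
the default verbosity elsewhere (debug section 11 covers the HTTP protocol
flow):

debug_options ALL,1 11,2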


Amos

On 15.03.2012 05:59, Erik Svensson wrote:

Hi,

Objects don't get cached in Squid 3.2; the same transactions and config
work in 3.1.

I will show my problem with a simple webserver listening on 
127.0.0.1:9990
and sending transactions from curl to a squid listening on 
127.0.0.1:9993


3.1 first logs a MISS, since the cache is empty, then a HIT when the
transaction is repeated.
3.2 logs two MISSes.

# /opt/squid-3.1.19/sbin/squid -v
Squid Cache: Version 3.1.19
configure options:  '--prefix=/opt/squid-3.1.19' '--disable-wccp'
'--disable-wccpv2' '--disable-ident-lookups' '--disable-ipv6'
'--with-large-files' --with-squid=/usr/local/src/squid-3.1.19
--enable-ltdl-convenience

# /opt/squid-3.2.0.16/sbin/squid -v
Squid Cache: Version 3.2.0.16
configure options:  '--prefix=/opt/squid-3.2.0.16' '--disable-wccp'
'--disable-wccpv2' '--disable-ident-lookups' '--disable-ipv6'
'--with-large-files' --enable-ltdl-convenience

# cat 3.conf
http_port 127.0.0.1:9993
icp_port 0
cache_mem 128 mb
#cache_dir null /tmp
access_log  /tmp/3/access.log
cache_log   /tmp/3/cache.log
pid_filename /tmp/3/squid.pid
coredump_dir /tmp/3
refresh_pattern . 0 20% 4320
http_access allow all
http_reply_access allow all
shutdown_lifetime 2 seconds

# thttpd -p 9990 -d /tmp   # start thttpd webserver serving files in /tmp

# echo HiHo >/tmp/x  # Create a file to serve


# /opt/squid-3.1.19/sbin/squid -f 3.conf

# curl -v -H "Pragma:" -x 127.0.0.1:9993 http://127.0.0.1:9990/x
* About to connect() to 127.0.0.1 port 9993
*   Trying 127.0.0.1... * connected
* Connected to 127.0.0.1 (127.0.0.1) port 9993

GET http://127.0.0.1:9990/x HTTP/1.1

User-Agent: curl/7.12.1 (i386-redhat-linux-gnu) libcurl/7.12.1
OpenSSL/0.9.7a zlib/1.2.1.2 libidn/0.5.6
Host: 127.0.0.1:9990
Accept: */*

< HTTP/1.0 200 OK
< Server: thttpd/2.25b 29dec2003
< Content-Type: text/plain; charset=iso-8859-1
< Date: Wed, 14 Mar 2012 15:49:39 GMT
< Last-Modified: Wed, 14 Mar 2012 15:47:14 GMT
< Accept-Ranges: bytes
< Content-Length: 5
< X-Cache: MISS from localhost.localdomain
< Via: 1.0 localhost.localdomain (squid/3.1.19)
* HTTP/1.0 connection set to keep alive!
< Connection: keep-alive
HiHo
* Connection #0 to host 127.0.0.1 left intact
* Closing connection #0

# curl -v -H "Pragma:" -x 127.0.0.1:9993 http://127.0.0.1:9990/x
* About to connect() to 127.0.0.1 port 9993
*   Trying 127.0.0.1... * connected
* Connected to 127.0.0.1 (127.0.0.1) port 9993

GET http://127.0.0.1:9990/x HTTP/1.1

User-Agent: curl/7.12.1 (i386-redhat-linux-gnu) libcurl/7.12.1
OpenSSL/0.9.7a zlib/1.2.1.2 libidn/0.5.6
Host: 127.0.0.1:9990
Accept: */*

< HTTP/1.0 200 OK
< Server: thttpd/2.25b 29dec2003
< Content-Type: text/plain; charset=iso-8859-1
< Date: Wed, 14 Mar 2012 15:49:39 GMT
< Last-Modified: Wed, 14 Mar 2012 15:47:14 GMT
< Accept-Ranges: bytes
< Content-Length: 5
< Age: 8
< X-Cache: HIT from localhost.localdomain
< Via: 1.0 localhost.localdomain (squid/3.1.19)
* HTTP/1.0 connection set to keep alive!
< Connection: keep-alive
HiHo
* Connection #0 to host 127.0.0.1 left intact
* Closing connection #0

# /opt/squid-3.1.19/sbin/squid -f 3.conf -k shutdown

# cat access.log
1331740179.023  2 127.0.0.1 TCP_MISS/200 339 GET
http://127.0.0.1:9990/x - DIRECT/127.0.0.1 text/plain
1331740187.003  0 127.0.0.1 TCP_MEM_HIT/200 346 GET
http://127.0.0.1:9990/x - NONE/- text/plain

# rm access.log


# /opt/squid-3.2.0.16/sbin/squid -f 3.conf

# curl -v -H "Pragma:" -x 127.0.0.1:9993 http://127.0.0.1:9990/x
* About to connect() to 127.0.0.1 port 9993
*   Trying 127.0.0.1... * connected
* Connected to 127.0.0.1 (127.0.0.1) port 9993

GET http://127.0.0.1:9990/x HTTP/1.1

User-Agent: curl/7.12.1 (i386-redhat-linux-gnu) libcurl/7.12.1
OpenSSL/0.9.7a zlib/1.2.1.2 libidn/0.5.6
Host: 127.0.0.1:9990
Accept: */*

< HTTP/1.1 200 OK
< Server: thttpd/2.25b 29dec2003
< Content-Type: text/plain; charset=iso-8859-1
< Date: Wed, 14 Mar 2012 15:55:29 GMT
< Last-Modified: Wed, 14 Mar 2012 15:47:14 GMT
< Accept-Ranges: bytes
< Content-Length: 5
< X-Cache: MISS from localhost.localdomain
< Via: 1.1 localhost.localdomain (squid/3.2.0.16)
< Connection: keep-alive
HiHo
* Connection #0 to host 127.0.0.1 left intact
* Closing connection #0

# curl -v -H "Pragma:" -x 127.0.0.1:9993 http://127.0.0.1:9990/x
* About to connect() to 127.0.0.1 port 9993
*   Trying 127.0.0.1... * connected
* Connected to 127.0.0.1 (127.0.0.1) port 9993

GET http://127.0.0.1:9990/x HTTP/1.1

User-Agent: curl/7.12.1 (i386-redhat-linux-gnu) libcurl/7.12.1
OpenSSL/0.9.7a zlib/1.2.1.2 libidn/0.5.6
Host: 127.0.0.1:9990
Accept: */*

< HTTP/1.1 200 OK
< Server: thttpd/2.25b 29dec2003
< Content-Type: text/plain; charset=iso-8859-1
< Date: Wed, 14 Mar 2012 15:55:34 GMT
< Last-Modified: Wed, 14 Mar 2012 15:47:14 GMT
< Accept-Ranges: bytes
< Content-Length: 5
< X-Cache: MISS from localho

Re: [squid-users] DSCP tags on regex acl

2012-03-14 Thread Amos Jeffries

On 15.03.2012 03:24, Greg Whynott wrote:

Hello,

Just wanted to confirm whether I am doing this properly, as it appears not
to be working. Thanks very much for your time.

The intent is to tag all traffic heading to identified sites with a
TOS value, which our internet routers will see and use to apply a policy
route. We want to send all bulk video traffic to a particular ISP (we
have multiple ISPs).

in the config I put:

acl youtube url_regex -i youtube
tcp_outgoing_tos af22 youtube


Did you mean a numeric code such as the hex value 0xaf22, or a tag named 
"af22" internally by the router?


The squid.conf setting value is a numeric bitmap representation. 0x hex 
values are usually easiest to deal with.
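
If the DiffServ class AF22 was the intent: its DSCP codepoint is 20 decimal,
which sits in the top six bits of the TOS byte, giving 0x50. A sketch of the
numeric form:

acl youtube url_regex -i youtube
tcp_outgoing_tos 0x50 youtube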


Amos


Re: [squid-users] invalid request problem with wireshark capturing

2012-03-14 Thread Amos Jeffries

On 15.03.2012 08:03, Mustafa Raji wrote:

Hi,
the invalid-request problem still exists on my cache
server. I will explain the problem in more detail here, hoping
there is something I don't know about in squid that will solve it.

As an attempt, I captured one of the invalid requests; the captured
packet details are:

Frame 4139: 110 bytes on wire (880 bits), 110 bytes captured (880 
bits)

Arrival Time: Mar 13, 2012 11:53:02.53614 AST
Epoch Time: 1331628782.53614 seconds
Time delta from previous captured frame: 0.008177000 seconds
Time delta from previous displayed frame: 0.008177000 seconds
Time since reference or first frame: 51.377354000 seconds
Frame Number: 4139
Frame Length: 110 bytes (880 bits)
Capture Length: 110 bytes (880 bits)
Frame is marked: False
Frame is ignored: False
Protocols in frame: eth:ip:tcp:http:data
Coloring Rule Name: HTTP
Coloring Rule String: http || tcp.port == 80

Internet Protocol, Src: 192.168.40.3 (192.168.40.3), Dst:
10.10.10.53 (10.10.10.53)
Version: 4
Header length: 20 bytes
Differentiated Services Field: 0x00 (DSCP 0x00: Default; ECN: 0x00)
Total Length: 96
Identification: 0x23e0 (9184)
Flags: 0x02 (Don't Fragment)
Fragment offset: 0
Time to live : 127
Protocol : TCP (6)
Header checksum: 0xdacd [correct]
Source: 10.10.10.53 (10.10.10.53)
Destination: 192.168.40.3 (192.168.40.3)
Transmission Control Protocol, Src Port:49869 (49869), Dst Port: http
(80), seq:
Source port: 49869 (49869)
Destination port: http (80)
[Stream index: 240]
Sequence number: 1 (relative sequence number)
[Next sequence number: 57 (relative sequence number)]
Acknowledgement number: 1 (relative ack number)
Header length: 20 bytes
Flags: 0x18 (PSH, ACK)
window size: 17520 (scaled)
Checksum: 0xba28 [validation disabled]
[SEQ/ACK analysis]
*Hypertext Transfer Protocol
  *DATA (56 bytes)
   Data:0569ff24fdd6dbd18ffe4d2f2fffaa9020alae217a53923a..
    [Length: 56]

Squid does recognize the clients' IPs in the access.log file; the
policy routing in MikroTik is done using a dstnat rule: whatever


You seem to be confusing the two systems.

 "NAT" is NAT or NAPT maybe both, (*address translation*).
 Policy routing is a type of routing (*packet delivery*).

They are not the same. Routing cannot do NAT and NAT cannot do routing. 
No more than a postman can change the street your house is on, or the 
house you live in can delivery peoples mail.
 NAT is like the postman who changes the erases address on the letters 
with his friends address then puts them back in a post box.


Please get that straight. You have managed to get it "working" (sort 
of) because of some security vulnerabilities in Squid and HTTP, but it 
will break when anything involving those security holes changes ...



packets come from any source IP address except the IP of the cache
server, with TCP destination port 80, are redirected to the IP address
of the cache on port 80.
To be clear, it's the same as this Linux rule (shown only for clarity,
not actually applied in Linux iptables, because I don't know how else
to explain what I did on the MikroTik router):
iptables -t nat -A PREROUTING -p tcp --dport 80 ! -s 192.168.40.2 -j
DNAT --to-destination 192.168.40.2:80
where 192.168.40.2 is the IP of the cache server

If the problem is SSL requests being sent to the cache server through
port 80: why is that happening? This type of traffic should use port
443, so the MikroTik rule should not apply to this type of traffic


What do you mean? You have found some software abusing port 80 to try 
and sneak things past your firewall security system? Squid breaking such 
abusive things is *good*.


NOTE:
 - traffic *MUST NOT* be sent by browsers etc to the proxy on port 80. 
There is port 3128 for proxy traffic (maybe others >1024 if you want).


 - SSL+HTTP native traffic *MUST NOT* be sent on port 80 either. It 
goes on port 443.


 - SSL traffic (any kind) *MAY* be sent to the proxy on port 3128 so 
long as it is wrapped in a special plain-HTTP CONNECT request.
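
For reference, a sketch of the conventional interception split for the
Linux-equivalent setup quoted above: the router policy-routes (rather than
dstnat's) port-80 packets to the cache box, which then redirects them locally
into a dedicated interception port. The interface name and the 3129 port are
assumptions:

# on the cache box itself (192.168.40.2):
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3129

# squid.conf: keep forward-proxy and intercepted traffic on separate ports
http_port 3128
http_port 3129 intercept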



So what is the problem?


Amos


Re: [squid-users] Compile problem on Solaris - Squid 3.19-20120306-r10434 - Ssl::FileLocker and LOCK_EX

2012-03-14 Thread Amos Jeffries

On 15.03.2012 00:21, Marcin Jarzab wrote:

Hello,

Following problem occured. If it is required I can provide OVA VM
template with the runtime specified bellow:

Platform: SunOS solaris11 5.11 snv_151a i86pc i386 i86pc Solaris




certificate_db.cc:34: error: `LOCK_EX' undeclared (first use this 
function)

certificate_db.cc:34: error: (Each undeclared identifier is reported
only once for each function it appears in.)
certificate_db.cc: In destructor `Ssl::FileLocker::~FileLocker()':
certificate_db.cc:47: error: `LOCK_UN' undeclared (first use this 
function)


Solaris does not support exclusive-access file locking, which is 
required to retain data integrity within the SSL certificate database 
when multiple helper processes are trying to share it.


Therefore Solaris does not support ssl_crtd functionality. Thank you 
for uncovering this.
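
Until that changes, the practical workaround is presumably to build without
the certificate generator helper, i.e. rerun the configure invocation from
the report minus one option:

# rerun configure with the same options as in the report, except drop
#   --enable-ssl-crtd
# so that ssl_crtd and its LOCK_EX-based certificate_db are not built:
./configure --enable-ssl --with-openssl=/usr/include/openssl
make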


Amos



RE: [squid-users] Login Popups on Windows XP with squid_kerb_auth and external acl

2012-03-14 Thread Amos Jeffries

On 15.03.2012 00:51, Игорь Потапов wrote:

I've found the failing component. It's external_acl_type with the %LOGIN
parameter. It starts some kind of authentication if it thinks the user
is not authenticated, and that procedure forces IE on XP to open a login
window. I think that procedure is different from the one used by the
squid_kerb_auth ACL.
How can I help determine the root cause of this issue?



To use authenticated details to check authorization one must first have 
authenticated them successfully.


proxy_auth is a simple test: authenticated, yes/no. It requires 
credentials to be (1) known, at the point and time when the ACL is 
tested.



An external ACL with %LOGIN is a more complex test: authenticated AND 
authorized, yes/no. %LOGIN requires user credentials to be (1) known, (2) 
valid, and (3) current, at the point and time when the external ACL is 
tested.


If they do not meet all three criteria, Squid will attempt to fetch 
credentials which do.
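
For comparison, a minimal sketch of the two styles; the helper path here is
hypothetical:

# authentication only: yes/no on having valid credentials
acl authed proxy_auth REQUIRED

# authentication + authorization: %LOGIN demands known/valid/current
# credentials before the helper is even consulted
external_acl_type mysql_authz ttl=300 %LOGIN /usr/local/bin/check_authz.pl
acl authorized external mysql_authz

http_access allow authorized
http_access deny all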



We have had some trouble in the past (until very recently) with 
external ACLs getting the current+valid parts of the criteria wrong. 
As far as I know these are fixed now in 3.1.19, but you are of course 
welcome to investigate and see if we missed some case that is affecting 
IE8.


Amos





-Original Message-
From: Игорь Потапов

Hi.
squid is 3.1.19 on FreeBSD 8.2 with MIT Kerberos. squid_kerb_auth is
in use as the only auth scheme. I have an external ACL to check
authorization in a MySQL db. On machines running XP SP2 with IE8
(Windows Integrated Auth enabled), authentication windows sometimes
pop up. I think this happens when some request is denied by the
external auth script. If I hit Cancel, the page loads further. On
Windows 7 I see no such behavior.
Config is here: http://pastebin.com/QyCiha8Q Here is the external auth
script: http://pastebin.com/LiAmniSz I think IE8 on XP sometimes
doesn't send Authorization and asks for it, or falls back to NTLM.
I've made some workarounds to disable login windows, but on XP they
still appear.
Can I force IE8 on XP to use only Negotiate/Kerberos?




Re: [squid-users] Caching in 3.2 vs 3.1

2012-03-14 Thread Ben

Hi Amos,

Since last 2-3 months i am testing squid 3.2 with different version till 
current latest version, And i observed that 3.1 is working fantastic 
while we are looking for cache gain / cache hit.


Again, i say squid 3.1 is awesome for people who wants cache hit / 
bandwidth saving.:-)


Regards,
Ben


cc'ing to squid-dev where the people who might know reside


Also, adding "debug_options 11,2" may show something useful in the 
HTTP flow for 3.2.


Amos

On 15.03.2012 05:59, Erik Svensson wrote:

Hi,

Objects don't get cached in Squid 3.2; the same transactions and config
work in 3.1.

I will show my problem with a simple webserver listening on 
127.0.0.1:9990
and sending transactions from curl to a squid listening on 
127.0.0.1:9993


3.1 first logs a MISS, since the cache is empty, then a HIT when the
transaction is repeated.
3.2 logs two MISSes.

# /opt/squid-3.1.19/sbin/squid -v
Squid Cache: Version 3.1.19
configure options:  '--prefix=/opt/squid-3.1.19' '--disable-wccp'
'--disable-wccpv2' '--disable-ident-lookups' '--disable-ipv6'
'--with-large-files' --with-squid=/usr/local/src/squid-3.1.19
--enable-ltdl-convenience

# /opt/squid-3.2.0.16/sbin/squid -v
Squid Cache: Version 3.2.0.16
configure options:  '--prefix=/opt/squid-3.2.0.16' '--disable-wccp'
'--disable-wccpv2' '--disable-ident-lookups' '--disable-ipv6'
'--with-large-files' --enable-ltdl-convenience

# cat 3.conf
http_port 127.0.0.1:9993
icp_port 0
cache_mem 128 mb
#cache_dir null /tmp
access_log  /tmp/3/access.log
cache_log   /tmp/3/cache.log
pid_filename /tmp/3/squid.pid
coredump_dir /tmp/3
refresh_pattern . 0 20% 4320
http_access allow all
http_reply_access allow all
shutdown_lifetime 2 seconds

# thttpd -p 9990 -d /tmp   # start thttpd webserver serving files in /tmp

# echo HiHo >/tmp/x # Create a file to serve


# /opt/squid-3.1.19/sbin/squid -f 3.conf

# curl -v -H "Pragma:" -x 127.0.0.1:9993 http://127.0.0.1:9990/x
* About to connect() to 127.0.0.1 port 9993
*   Trying 127.0.0.1... * connected
* Connected to 127.0.0.1 (127.0.0.1) port 9993

GET http://127.0.0.1:9990/x HTTP/1.1

User-Agent: curl/7.12.1 (i386-redhat-linux-gnu) libcurl/7.12.1
OpenSSL/0.9.7a zlib/1.2.1.2 libidn/0.5.6
Host: 127.0.0.1:9990
Accept: */*

< HTTP/1.0 200 OK
< Server: thttpd/2.25b 29dec2003
< Content-Type: text/plain; charset=iso-8859-1
< Date: Wed, 14 Mar 2012 15:49:39 GMT
< Last-Modified: Wed, 14 Mar 2012 15:47:14 GMT
< Accept-Ranges: bytes
< Content-Length: 5
< X-Cache: MISS from localhost.localdomain
< Via: 1.0 localhost.localdomain (squid/3.1.19)
* HTTP/1.0 connection set to keep alive!
< Connection: keep-alive
HiHo
* Connection #0 to host 127.0.0.1 left intact
* Closing connection #0

# curl -v -H "Pragma:" -x 127.0.0.1:9993 http://127.0.0.1:9990/x
* About to connect() to 127.0.0.1 port 9993
*   Trying 127.0.0.1... * connected
* Connected to 127.0.0.1 (127.0.0.1) port 9993

GET http://127.0.0.1:9990/x HTTP/1.1

User-Agent: curl/7.12.1 (i386-redhat-linux-gnu) libcurl/7.12.1
OpenSSL/0.9.7a zlib/1.2.1.2 libidn/0.5.6
Host: 127.0.0.1:9990
Accept: */*

< HTTP/1.0 200 OK
< Server: thttpd/2.25b 29dec2003
< Content-Type: text/plain; charset=iso-8859-1
< Date: Wed, 14 Mar 2012 15:49:39 GMT
< Last-Modified: Wed, 14 Mar 2012 15:47:14 GMT
< Accept-Ranges: bytes
< Content-Length: 5
< Age: 8
< X-Cache: HIT from localhost.localdomain
< Via: 1.0 localhost.localdomain (squid/3.1.19)
* HTTP/1.0 connection set to keep alive!
< Connection: keep-alive
HiHo
* Connection #0 to host 127.0.0.1 left intact
* Closing connection #0

# /opt/squid-3.1.19/sbin/squid -f 3.conf -k shutdown

# cat access.log
1331740179.023  2 127.0.0.1 TCP_MISS/200 339 GET
http://127.0.0.1:9990/x - DIRECT/127.0.0.1 text/plain
1331740187.003  0 127.0.0.1 TCP_MEM_HIT/200 346 GET
http://127.0.0.1:9990/x - NONE/- text/plain

# rm access.log


# /opt/squid-3.2.0.16/sbin/squid -f 3.conf

# curl -v -H "Pragma:" -x 127.0.0.1:9993 http://127.0.0.1:9990/x
* About to connect() to 127.0.0.1 port 9993
*   Trying 127.0.0.1... * connected
* Connected to 127.0.0.1 (127.0.0.1) port 9993

GET http://127.0.0.1:9990/x HTTP/1.1

User-Agent: curl/7.12.1 (i386-redhat-linux-gnu) libcurl/7.12.1
OpenSSL/0.9.7a zlib/1.2.1.2 libidn/0.5.6
Host: 127.0.0.1:9990
Accept: */*

< HTTP/1.1 200 OK
< Server: thttpd/2.25b 29dec2003
< Content-Type: text/plain; charset=iso-8859-1
< Date: Wed, 14 Mar 2012 15:55:29 GMT
< Last-Modified: Wed, 14 Mar 2012 15:47:14 GMT
< Accept-Ranges: bytes
< Content-Length: 5
< X-Cache: MISS from localhost.localdomain
< Via: 1.1 localhost.localdomain (squid/3.2.0.16)
< Connection: keep-alive
HiHo
* Connection #0 to host 127.0.0.1 left intact
* Closing connection #0

# curl -v -H "Pragma:" -x 127.0.0.1:9993 http://127.0.0.1:9990/x
* About to connect() to 127.0.0.1 port 9993
*   Trying 127.0.0.1... * connected
* Connected to 127.0.0.1 (127.0.0.1) port 9993

GET http://127.0.0.1:9990/x HTTP/1.1

User-Agent: curl/7.12.1 (i386-redhat-linux-gnu) libcurl/7.12.1
OpenSSL/0.9.7a zlib/1.2.