Re: [squid-users] squid/sslbump + IE9

2011-12-02 Thread Amos Jeffries

On 3/12/2011 6:22 a.m., Sean Boran wrote:

Well yes, we are trying to intercept...
I don't see where the "forgery" is: if my proxy CA is trusted and a
cert is generated for that target, signed by that CA, why should the
browser complain?


The "forgery" is that you are creating a certificate claiming to be 
fetched from that website and authorizing you to act as their 
intermediary with complete security clearance, when it is not. It is 
exactly like me presenting someone with a cheque against your bank 
account signed by myself: forgery, by the plain and simple definition 
of the word. This is why the browser complains unless it has explicitly 
been made to trust the CA you use to sign.


I missed the part where you had your signing CA already in the browser 
and read that as the browser not complaining when only presented with 
the plain cert.



And why would FF not complain but IE9 does?


The one complaining does not trust the certificate or some part of its 
CA chain. As others have said, each of the three browser engines uses 
their own CA collections.


Amos


Re: [squid-users] error build squid-3.1.17 with gcc-4.5.3

2011-12-02 Thread Amos Jeffries

On 3/12/2011 12:45 p.m., Jose-Marcio Martins da Cruz wrote:

Pedro Correia Sardinha wrote:

Hello,

When I try to build the latest version as usual with "make all", it gives
me this output (my compiler is gcc-4.5.3):

ftp.cc: In member function 'void
FtpStateData::ftpAcceptDataConnection(const CommAcceptCbParams&)':
ftp.cc:3124:38: error: redeclaration of 'char ntoapeer [75]'
ftp.cc:3076:31: error: 'char ntoapeer [75]' previously declared here
make[3]: *** [ftp.o] Error 1
make[3]: Leaving directory `/usr/local/src/SQUID/squid-3.1.17/src'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/usr/local/src/SQUID/squid-3.1.17/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/usr/local/src/SQUID/squid-3.1.17/src'
make: *** [all-recursive] Error 1

Does anyone have this issue, or a suggestion to fix it?


I saw it here too. Just comment out the second declaration: file src/ftp.cc, 
line 3124.




Sorry folks. Bumping 3.1.18 out in a few hours instead, with that 
regression fixed.


Amos


[squid-users] Error compiling on OpenSuSE 11.3

2011-12-02 Thread Ricardo Rios
I compiled 3.1.15 and 3.1.16 without any problems; today I tried to 
compile the latest version, 3.1.17, and got errors:


./configure CFLAGS=-DNUMTHREADS=128 --with-filedescriptors=16384 
--enable-removal-policies=heap,lru --enable-epoll 
--enable-storeio=ufs,aufs,diskd --enable-async-io=128 --with-pthreads 
--disable-dlmalloc --with-large-files --enable-large-cache-files 
--with-aio --enable-esi --with-dl --enable-ltdl-convenience 
--enable-linux-netfilter --disable-ident-lookups --enable-snmp --enable-htcp



ftp.o -MD -MP -MF $depbase.Tpo -c -o ftp.o ftp.cc &&\
mv -f $depbase.Tpo $depbase.Po
ftp.cc: In member function ‘void 
FtpStateData::ftpAcceptDataConnection(const CommAcceptCbParams&)’:

ftp.cc:3124:38: error: redeclaration of ‘char ntoapeer [75]’
ftp.cc:3076:31: error: ‘char ntoapeer [75]’ previously declared here
make[3]: *** [ftp.o] Error 1
make[3]: Leaving directory `/root/squid-3.1.17/src'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/root/squid-3.1.17/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/root/squid-3.1.17/src'
make: *** [all-recursive] Error 1


Regards


Re: [squid-users] error build squid-3.1.17 with gcc-4.5.3

2011-12-02 Thread Jose-Marcio Martins da Cruz

Pedro Correia Sardinha wrote:

Hello,

When I try to build the latest version as usual with "make all", it gives
me this output (my compiler is gcc-4.5.3):

ftp.cc: In member function 'void
FtpStateData::ftpAcceptDataConnection(const CommAcceptCbParams&)':
ftp.cc:3124:38: error: redeclaration of 'char ntoapeer [75]'
ftp.cc:3076:31: error: 'char ntoapeer [75]' previously declared here
make[3]: *** [ftp.o] Error 1
make[3]: Leaving directory `/usr/local/src/SQUID/squid-3.1.17/src'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/usr/local/src/SQUID/squid-3.1.17/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/usr/local/src/SQUID/squid-3.1.17/src'
make: *** [all-recursive] Error 1

Does anyone have this issue, or a suggestion to fix it?


I saw it here too. Just comment out the second declaration: file src/ftp.cc, line 3124.
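
For reference, a sketch of what that workaround looks like as a patch, assuming the 3.1.17 sources (exact context may differ; MAX_IPSTRLEN is the constant behind the 'char ntoapeer [75]' in the error):

--
--- src/ftp.cc  (3.1.17)
+++ src/ftp.cc  (workaround)
@@ inside FtpStateData::ftpAcceptDataConnection(), around line 3124
-    char ntoapeer[MAX_IPSTRLEN];
+    // char ntoapeer[MAX_IPSTRLEN]; // duplicate of the declaration near line 3076
--

As Amos notes above, 3.1.18 was bumped out with the regression fixed, so upgrading is the cleaner option.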



[squid-users] error build squid-3.1.17 with gcc-4.5.3

2011-12-02 Thread Pedro Correia Sardinha
Hello,

When I try to build the latest version as usual with "make all", it gives
me this output (my compiler is gcc-4.5.3):

ftp.cc: In member function 'void
FtpStateData::ftpAcceptDataConnection(const CommAcceptCbParams&)':
ftp.cc:3124:38: error: redeclaration of 'char ntoapeer [75]'
ftp.cc:3076:31: error: 'char ntoapeer [75]' previously declared here
make[3]: *** [ftp.o] Error 1
make[3]: Leaving directory `/usr/local/src/SQUID/squid-3.1.17/src'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/usr/local/src/SQUID/squid-3.1.17/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/usr/local/src/SQUID/squid-3.1.17/src'
make: *** [all-recursive] Error 1

Does anyone have this issue, or a suggestion to fix it?


[squid-users] Re: Squid 3.1.x and right configuration parameters for tmpfs 8GB

2011-12-02 Thread RW
On Fri, 02 Dec 2011 15:15:59 +1300
Amos Jeffries wrote:

> On 2/12/2011 5:13 a.m., Matus UHLAR - fantomas wrote:
> > On 01.12.11 15:05, Josef Karliak wrote:
> >>  I want to use tmpfs for the squid cache; is 8GB enough or too big?
> >> We have about 3000 computers behind squid. 16GB is sufficient for the
> >> OS, which is why I used 8GB for the squid tmpfs.
> >
> > what is the point of using tmpfs as squid cache? I think using only 
> > memory cache would be much more efficient (unless you are running 
> > 32-bit squid).
> 
> Yes, consider the purpose of why a disk cache is better than RAM
> cache: objects are not erased when Squid or the system restarts.
> 
> ==> tmpfs data is erased when Squid or the system restarts. So why
> bother?

tmpfs is cleared when the system is rebooted or shut down, but it can
survive a daemon restart.

> All you gain from tmpfs is a drop in speed accessing the data, from
> RAM speeds down to disk speeds; whether it is SSD or HDD, it is
> slower than RAM.


That's not really a fundamental difference. Both memory cache and
tmpfs are stored in RAM, optionally backed by swap. Both have the
advantage that there's no need to keep a backing store updated. Either
will force pages out to swap if you set the cache large enough. If you
have swap configured, tmpfs can cache more in memory than memory cache
because it's safe to let it use more: swap usage by tmpfs has very
little impact on the rest of the system.

Memory cache isn't faster because it's in memory per se. It's faster
because there's a lighter interface to the in-memory objects and
because in some configurations it prioritizes smaller objects, giving a
higher hit rate on objects in memory.

I don't think it's necessarily a bad idea to use tmpfs; there may be
many cases where using both tmpfs and memory cache outperforms memory
cache alone.
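
For anyone wanting to try this, a minimal sketch of the tmpfs setup under discussion; the mount point, sizes and cache_dir type are illustrative, not taken from the thread:

--
# /etc/fstab: an 8GB tmpfs for the cache
tmpfs  /var/cache/squid-tmpfs  tmpfs  size=8g  0  0

# squid.conf: point a disk cache at the tmpfs mount, leaving headroom
cache_dir aufs /var/cache/squid-tmpfs 7000 16 256
--

Since the mount is empty after every boot, the swap directories have to be recreated (squid -z, or squid's own startup rebuild) before the cache is usable, which matches Josef's observation later in the thread.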


Re: [squid-users] SECURITY ALERT: Squid Cache: Version 3.2.0.13

2011-12-02 Thread Kevin Wilcox
On 2 December 2011 01:01, Jenny Lee  wrote:

> p4$ host download.windowsupdate.com
> mscom-wui-any.vo.msecnd.net has address 70.37.129.251
> mscom-wui-any.vo.msecnd.net has address 70.37.129.244
>
> p12$ host download.windowsupdate.com
> a26.ms.akamai.net.0.1.cn.akamaitech.net has address 92.123.69.42
> a26.ms.akamai.net.0.1.cn.akamaitech.net has address 92.123.69.8
> a26.ms.akamai.net.0.1.cn.akamaitech.net has address 92.123.69.24
> a26.ms.akamai.net.0.1.cn.akamaitech.net has address 92.123.69.26
> a26.ms.akamai.net.0.1.cn.akamaitech.net has address 92.123.69.41

Note also that this can change very rapidly. I've seen Windows Update
DNS TTLs of 300 seconds and yes, the destinations changed on expiry.

That said, I've had a squid cache for several hundred devices with the
primary destinations of Apple/Windows updates (it's a tech support
group and they're constantly imaging/updating machines) for months and
they Just Work. The proxy is inline, running in intercept mode on
their firewall.
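
A quick way to observe the TTL behaviour described above, assuming dig is available (the second field of each answer line is the remaining TTL in seconds):

--
$ dig +noall +answer download.windowsupdate.com
# repeat the query after the TTL expires and compare the answer set;
# on a CDN-hosted name both the TTL and the target hosts may change
--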

kmw


Re: [squid-users] squid/sslbump + IE9

2011-12-02 Thread Sean Boran
Well yes, we are trying to intercept...
I don't see where the "forgery" is: if my proxy CA is trusted and a
cert is generated for that target, signed by that CA, why should the
browser complain?

And why would FF not complain but IE9 does?

Sean


On 2 December 2011 17:29, Amos Jeffries  wrote:
> On 3/12/2011 4:16 a.m., Sean Boran wrote:
>>
>> Yes, it was added to the Windows cert store. (Tools > Options > Content
>> > Certificates > Trusted Root Certification Authorities).
>>
>> Not all HTTPS websites cause errors either, e.g.
>> https://www.credit-suisse.com is fine.
>
>
> Ouch. Their certificate is permitting any third-party (including your Squid)
> to forge their site credentials.
>
>
> Amos


[squid-users] Configuring a Squid Reverse Proxy for Multiple Outlook Web App/Access Servers

2011-12-02 Thread Sean Massey
I have an Exchange 2007 Environment that I am upgrading to Exchange 2010. I 
have Squid configured as a reverse proxy, and I placed it in front of my 
Exchange 2007 CAS server. Both servers are located in the same Active Directory 
site.

Exchange 2010 does not allow OWA proxying to Exchange 2007 servers in the same 
AD site, and Microsoft requires OWA redirection during the co-existence period 
(fortunately, this is not the case with ActiveSync). Since I have a very 
limited pool of public IP addresses (translation: none to spare), and I need to 
have OWA available for users during the testing phase, I was hoping to 
configure Squid to act as the reverse proxy for both CAS servers.

The issue that I am running into, though, is that when I configure Squid to 
handle both OWA2007 and OWA2010, it will only serve traffic to the first OWA 
item listed in the config, and any traffic addressed to the other OWA site gets 
redirected to the first.

If I list owa2010.domain.local as the first item in the config, and I attempt 
to go to owa2007.domain.local, Squid directs me to the OWA2010 site.

Here is a copy of the configuration that I am testing.

visible_hostname OWA2010.domain.local
extension_methods RPC_IN_DATA RPC_OUT_DATA
https_port 443 cert=/usr/local/squid/certs/cert.crt 
key=/usr/local/squid/certs/cert.nopass.key defaultsite=OWA2010.domain.local
cache_peer 192.168.1.254 parent 443 0 no-query originserver login=PASS ssl 
sslflags=DONT_VERIFY_PEER sslcert=/usr/local/squid/certs/exchange.crt 
sslkey=/usr/local/squid/certs/nopassexchange.key name=owa2010
acl OWA dstdomain OWA2010.domain.local
cache_peer_access owa2010 allow OWA

never_direct allow OWA
http_access allow OWA
miss_access allow OWA

visible_hostname OWA2007.domain.local
extension_methods RPC_IN_DATA RPC_OUT_DATA
https_port 443 cert=/usr/local/squid/certs/cert2.crt 
key=/usr/local/squid/certs/webmail2nopass.key defaultsite=OWA2007.domain.local
cache_peer 192.168.1.1 parent 443 0 no-query originserver login=PASS ssl 
sslflags=DONT_VERIFY_PEER sslcert=/usr/local/squid/certs/exchange.crt 
sslkey=/usr/local/squid/certs/nopassexchange.key name=owa2007
acl OWA2 dstdomain OWA2007.domain.local
cache_peer_access owa2007 allow OWA2

never_direct allow OWA2
http_access allow OWA2
miss_access allow OWA2
I'm not sure what I need to change to make Squid work as a reverse proxy for 
two OWA servers. Can anyone help me find what I'm doing wrong?

I also have this question cross-posted on ServerFault at 
http://serverfault.com/q/336913/91254
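
No answer appears in this thread, but one likely culprit stands out: Squid cannot bind two https_port lines to the same address and port, and visible_hostname can only usefully be set once, so the second OWA block is probably never coming into effect. A sketch of a single-port alternative, assuming one certificate that covers both public names (the combined cert file is hypothetical; this is a guess at a workable layout, not a confirmed fix):

--
visible_hostname OWA2010.domain.local
extension_methods RPC_IN_DATA RPC_OUT_DATA
https_port 443 cert=/usr/local/squid/certs/combined.crt 
key=/usr/local/squid/certs/combined.nopass.key accel 
defaultsite=OWA2010.domain.local vhost

cache_peer 192.168.1.254 parent 443 0 no-query originserver login=PASS ssl 
sslflags=DONT_VERIFY_PEER name=owa2010
cache_peer 192.168.1.1 parent 443 0 no-query originserver login=PASS ssl 
sslflags=DONT_VERIFY_PEER name=owa2007

acl OWA dstdomain OWA2010.domain.local
acl OWA2 dstdomain OWA2007.domain.local
cache_peer_access owa2010 allow OWA
cache_peer_access owa2010 deny all
cache_peer_access owa2007 allow OWA2
cache_peer_access owa2007 deny all
http_access allow OWA
http_access allow OWA2
never_direct allow OWA
never_direct allow OWA2
--

With vhost set, Squid routes on the Host header, so each dstdomain ACL steers requests to the matching CAS server.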


Re: [squid-users] squid dies: ssl_crtd helpers are crashing too rapidly

2011-12-02 Thread Amos Jeffries

On 3/12/2011 4:44 a.m., Sean Boran wrote:

With squid running sslbump in routing mode, and used by a handful of
users, squid is crashing regularly, linked to visiting SSL sites.

Logs
--
2011/11/29 11:39:36| clientNegotiateSSL: Error negotiating SSL connection on FD
45: error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number (1/-1)


Something in your OpenSSL library is incompatible with the SSL or TLS 
version being used by one of the certificates.


Given your helper problems I would not put it past being a corrupted 
local certificate file in the helpers' database.
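
If the database is suspect, recreating it is cheap to try; a sketch, assuming the -s path from the sslcrtd_program line in Sean's config (stop Squid first):

--
mv /var/lib/squid_ssl_db /var/lib/squid_ssl_db.broken
/usr/local/squid/libexec/ssl_crtd -c -s /var/lib/squid_ssl_db
chown -R proxy:proxy /var/lib/squid_ssl_db   # adjust to the user Squid runs as
--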



2011/11/29 11:39:43| WARNING: ssl_crtd #2 (FD 11) exited
2011/11/29 11:39:43| Too few ssl_crtd processes are running (need 1/50)
2011/11/29 11:39:43| Starting new helpers
2011/11/29 11:39:43| helperOpenServers: Starting 1/50 'ssl_crtd' processes
2011/11/29 11:39:43| client_side.cc(3462) sslCrtdHandleReply: "ssl_crtd" helper
return  reply


Major problem. Why is the helper dying on startup?


2011/11/29 11:39:44| WARNING: ssl_crtd #1 (FD 9) exited
2011/11/29 11:39:44| Too few ssl_crtd processes are running (need 1/50)
2011/11/29 11:39:44| storeDirWriteCleanLogs: Starting...
2011/11/29 11:39:44|   Finished.  Wrote 0 entries.
2011/11/29 11:39:44|   Took 0.00 seconds (  0.00 entries/sec).
FATAL: The ssl_crtd helpers are crashing too rapidly, need help!
--

So ssl_crtd is dying, which is one issue, but it's also killing squid, which
is even worse.


As designed. These helpers dying is not as trivial as you seem to think: 
it is happening immediately on starting the helper. Ignoring the crash 
abort in Squid only works if the helpers get some work done between 
crashes. Ignoring startup crashes would lead to the machine CPU(s) being 
overloaded.



Amos


[squid-users] not getting persistent connections to an ssl backend

2011-12-02 Thread rob yates
Hello,

we are trying to set squid up as an SSL reverse proxy in front of an SSL
application. The flow is browser -> ssl -> squid -> ssl -> application.

When we do this we're not seeing persistent connections being used for
the backend connection.  It appears that squid is starting a new SSL
connection for every request vs. keeping one open and using it for
other browser requests.

Is there a way of getting squid configured to maintain and reuse the
persistent connection for different browser requests? We'd ideally
like it to maintain the connection for 5 minutes. We're running squid
2.6; the pertinent bit of squid.conf is below, and we're using the
defaults for everything else.

We're using tcpdump to see that the connection keeps getting
terminated and reopened with every request.

I am happy to upgrade if that is what is needed.

We have changed the pconn_timeout setting but it has no effect.

Certainly appreciate any help,

Thanks,

Rob

https_port 9.32.153.229:443 cert=/etc/pki/tls/certs/www.daily2.crt
key=/etc/pki/tls/private/daily2.key accel
defaultsite=www.daily2.com vhost
https_port 9.32.153.230:443 cert=/etc/pki/tls/certs/apps.daily2.crt
key=/etc/pki/tls/private/daily2.key accel defaultsite=apps.daily2.com
vhost

cache_peer 9.32.154.106 parent 443 0 no-query originserver ssl
sslflags=DONT_VERIFY_PEER name=f5www login=PASS
cache_peer 9.32.154.93 parent 443 0 no-query originserver ssl
sslflags=DONT_VERIFY_PEER name=f5apps login=PASS

acl engage_sites dstdomain www.daily2.com
http_access allow engage_sites
cache_peer_access f5www allow engage_sites

acl engage_sites dstdomain apps.daily2.com
http_access allow engage_sites
cache_peer_access f5apps allow engage_sites
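
For what it's worth, the directives that govern this in squid 2.6 are shown below; a sketch, not a guaranteed fix, since the backend may also be closing the connections itself (worth checking whether its replies carry "Connection: close" in the tcpdump capture):

--
server_persistent_connections on   # default is on, but worth confirming
pconn_timeout 300 seconds          # keep idle server connections for 5 minutes
--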


Re: [squid-users] Squid 3.1.x and right configuration parameters for tmpfs 8GB

2011-12-02 Thread Amos Jeffries

On 2/12/2011 11:10 p.m., Josef Karliak wrote:

  Hi,
  I use a 64-bit machine, an HP DL380 G7. I thought that it would be 
better to use tmpfs (part of the memory). After a reboot it is clean and 
empty, and squid creates the directories again automatically.
  So you recommend using only a little disk capacity and setting caching 
to memory only?


Yes.

Amos



Re: [squid-users] squid/sslbump + IE9

2011-12-02 Thread Amos Jeffries

On 3/12/2011 4:16 a.m., Sean Boran wrote:

Yes, it was added to the Windows cert store. (Tools > Options > Content
> Certificates > Trusted Root Certification Authorities).

Not all HTTPS websites cause errors either, e.g.
https://www.credit-suisse.com is fine.


Ouch. Their certificate is permitting any third-party (including your 
Squid) to forge their site credentials.



Amos


Re: [squid-users] Transparent HTTP Proxy and SSL-BUMP feature

2011-12-02 Thread Amos Jeffries

On 3/12/2011 1:02 a.m., Maret Ludovic wrote:

Hi there!

I want to configure a transparent proxy for HTTP and SSL. HTTP works
pretty well, but I'm stuck with SSL even if I use the ssl-bump feature.

Right now, it almost works if I use two different ports for the http_port
& https_port:

http_port 3129 transparent
https_port 3130 ssl-bump cert=/etc/squid/ssl_cert/partproxy01-test.pem
key=/etc/squid/ssl_cert/private/partproxy01-key-test.pem

HTTP is OK, and I get the warning about a probable man-in-the-middle attack
when I try to access an SSL web site. I just add an exception. And then I
get an error: Invalid URL

In the logs, I found:

1322820580.454 0 10.194.2.63 NONE/400 3625 GET /pki - NONE/- text/html

when I try to access https://www.switch.ch/pki.
Apparently, squid cuts the URL and removes the host.domain part…


No, Squid is not doing anything; that is the problem.
This is how HTTP client->origin request URLs look. The client agent 
thinks it is talking directly to the origin, so it uses the partial URL 
format. This is part of what the "transparent" or "intercept" flags tell 
Squid to look out for and fix up.
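
In config terms, that means the intercepted HTTPS port also needs the interception flag, something like the sketch below. Whether a given Squid accepts ssl-bump combined with interception is version-dependent (as Sean Boran notes elsewhere in the thread, it may simply not be supported in 3.1):

--
http_port  3129 transparent
https_port 3130 transparent ssl-bump cert=/etc/squid/ssl_cert/partproxy01-test.pem
key=/etc/squid/ssl_cert/private/partproxy01-key-test.pem
--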




When I tried to use the CONNECT method and ssl-bump on http_port, I got an
error in the browser: “ssl_error_rx_record_too_long” or
“ERR_SSL_PROTOCOL_ERROR”

Any clues ?


Somewhere in the OpenSSL documentation lies the meaning of those error 
messages.



Amos


Re: [squid-users] SECURITY ALERT: Squid Cache: Version 3.2.0.13

2011-12-02 Thread Amos Jeffries

On 2/12/2011 10:51 p.m., David Touzeau wrote:

On Friday, 2 December 2011 at 15:05 +1300, Amos Jeffries wrote:

Hooray progress :)


On 2/12/2011 5:49 a.m., David Touzeau wrote:

Here it is the log in debug mode :

--
2011/12/01 17:49:14.106 kid1| HTTP Client local=4.26.235.254:80
remote=192.168.1.228:1074 FD 30 flags=33
2011/12/01 17:49:14.106 kid1| HTTP Client REQUEST:
-
GET /v9/windowsupdate/a/selfupdate/WSUS3/x86/Other/wsus3setup.cab?1112011649 
HTTP/1.1
Accept: */*
User-Agent: Windows-Update-Agent
Host: download.windowsupdate.com
Connection: Keep-Alive

K. first problem:
#  host download.windowsupdate.com
...
download.windowsupdate.com.c.footprint.net has address 204.160.124.126
download.windowsupdate.com.c.footprint.net has address 8.27.83.126
download.windowsupdate.com.c.footprint.net has address 8.254.3.254


Client is connecting to server 4.26.235.254 port 80. Which is clearly
not "download.windowsupdate.com" according to the official DNS entries I
can see. It is likely you have another set of IPs entirely, so please
confirm that by running "host download.windowsupdate.com" on the Squid box.

Note that transparent Squid requires the same DNS "view" as the clients
to keep the traffic flowing to the right places. Since it should be in
the same network as the clients for transparent to work anyway this is
not usually a problem. But can appear if you or the client is doing
anything fancy with DNS server configurations.

NP: if 4.26.235.254 happens to be a local WSUS server you need to
configure your local DNS to pass that info on to Squid for the relevant
WSUS hosted domains. You will also benefit from Squid helping to enforce
that MS update traffic stays on-LAN.


Amos

OK

Thanks, this is the story..

I'm using a dedicated server as the DNS server (PowerDNS), which caches
DNS records for a long time.


So you are using a DNS server which caches records past their expiry 
time, and facing that anycast problem Jenny mentioned?


All Squid-3.2 is doing here is making it a whole lot more obvious. It is 
still happening in the background, out of sight, in older Squid, with 
users suffering from broken pages and websites mysteriously disappearing 
(whenever the anycast CDN servers go offline and the DNS system is 
updated, but not for your DNS server).




After setting the server to query the ISP DNS, the issue is resolved.

I think this behaviour should be documented along with this new version.

Is there a way to disable this security check feature?


It is optional (and off by default) on regular forward-proxy traffic.

For the intercepted traffic, "This problem allows any browser script to 
bypass local security and retrieve arbitrary content from any source." 
in the advisory is the best description of its importance we could give 
without handing out bad ideas. Please forgive me for being a bit vague 
on the details, but most transparent proxies out there are still 
vulnerable and will be for a while yet.
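
For the record, the forward-proxy knob in 3.2 is the host_verify_strict directive (shown with its default below); per the reply above, the check on intercepted traffic is not something it can switch off:

--
host_verify_strict off
--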





Sometimes, in companies, the proxy IT team does not have the rights to 
play with the DNS servers.


Understood. But you can still discuss the needs with the DNS admins.
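
Failing that, Squid can at least be pointed at the same resolver the clients use, so both see the same CDN answers; a sketch, with an illustrative resolver address:

--
# squid.conf
dns_nameservers 192.168.1.1
--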

Amos


[squid-users] squid dies: ssl_crtd helpers are crashing too rapidly

2011-12-02 Thread Sean Boran
With squid running sslbump in routing mode, and used by a handful of
users, squid is crashing regularly, linked to visiting SSL sites.

Logs
--
2011/11/29 11:39:36| clientNegotiateSSL: Error negotiating SSL connection on FD
45: error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number (1/-1)
2011/11/29 11:39:43| WARNING: ssl_crtd #2 (FD 11) exited
2011/11/29 11:39:43| Too few ssl_crtd processes are running (need 1/50)
2011/11/29 11:39:43| Starting new helpers
2011/11/29 11:39:43| helperOpenServers: Starting 1/50 'ssl_crtd' processes
2011/11/29 11:39:43| client_side.cc(3462) sslCrtdHandleReply: "ssl_crtd" helper
return  reply
2011/11/29 11:39:44| WARNING: ssl_crtd #1 (FD 9) exited
2011/11/29 11:39:44| Too few ssl_crtd processes are running (need 1/50)
2011/11/29 11:39:44| storeDirWriteCleanLogs: Starting...
2011/11/29 11:39:44|   Finished.  Wrote 0 entries.
2011/11/29 11:39:44|   Took 0.00 seconds (  0.00 entries/sec).
FATAL: The ssl_crtd helpers are crashing too rapidly, need help!
--

So ssl_crtd is dying, which is one issue, but it's also killing squid, which
is even worse.

Initially I thought it might be a lack of ssl_crtd resources, so the
process count was increased from 5 to 50, but that didn't help.

Some config settings:
--
http_port 80 ssl-bump cert=/etc/squid/ssl/www.sample.com.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
sslproxy_flags DONT_VERIFY_PEER
sslcrtd_program /usr/local/squid/libexec/ssl_crtd -s /var/lib/squid_ssl_db -M
4MB
sslcrtd_children 50
--

This has happened with squid 3.1 and currently on 3.2 HEAD.
A bug report has been opened: http://bugs.squid-cache.org/show_bug.cgi?id=3436

Does anyone have a workaround to keep squid running and somehow reset its
runaway ssl children?

Sean


Re: [squid-users] Transparent HTTP Proxy and SSL-BUMP feature

2011-12-02 Thread Sean Boran
I'm not sure you can use sslbump in transparent mode.
I remember reading something to that effect.
There are also articles like this that might help:
https://dvas0004.wordpress.com/2011/03/22/squid-transparent-ssl-interception/

Sean


On 2 December 2011 13:02, Maret Ludovic  wrote:
> Hi there!
>
> I want to configure a transparent proxy for HTTP and SSL. HTTP works
> pretty well, but I'm stuck with SSL even if I use the ssl-bump feature.
>
> Right now, it almost works if I use two different ports for the http_port
> & https_port:
>
> http_port 3129 transparent
> https_port 3130 ssl-bump cert=/etc/squid/ssl_cert/partproxy01-test.pem
> key=/etc/squid/ssl_cert/private/partproxy01-key-test.pem
>
> HTTP is OK, and I get the warning about a probable man-in-the-middle attack
> when I try to access an SSL web site. I just add an exception. And then I
> get an error: Invalid URL
>
> In the logs, I found:
>
> 1322820580.454 0 10.194.2.63 NONE/400 3625 GET /pki - NONE/- text/html
>
> when I try to access https://www.switch.ch/pki.
> Apparently, squid cuts the URL and removes the host.domain part…
>
> When I tried to use the CONNECT method and ssl-bump on http_port, I got an
> error in the browser: “ssl_error_rx_record_too_long” or
> “ERR_SSL_PROTOCOL_ERROR”
>
> Any clues?
>
> Many Thanks
>
> Ludovic


Re: [squid-users] squid/sslbump + IE9

2011-12-02 Thread Sean Boran
Yes, it was added to the Windows cert store. (Tools > Options > Content
> Certificates > Trusted Root Certification Authorities).
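
For scripted rollouts, the same import can be done from an elevated Windows prompt with certutil; the CA file name here is hypothetical:

certutil -addstore -f Root proxy-ca.crt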

Not all HTTPS websites cause errors either, e.g.
https://www.credit-suisse.com is fine.

Sean

On 2 December 2011 15:03, Guy Helmer  wrote:
>
> On Dec 2, 2011, at 3:52 AM, Sean Boran wrote:
>
> > Hi,
> >
> > I'm testing squid v3 with SSL interception  (the interception is to do
> > AV checking with icap) in routing mode.
> > Sslbump/dynamic certs are configured. A self-signed cert is used on
> > the proxy, and installed as a ca on browsers.
> >
> > https to several sites (such as Gmail.com boi.com) works with FF
> > (although FF is initially much slower); but gives errors in IE9
> > "Internet Explorer blocked this website from displaying content with
> > security certificate errors"
> >
> > Clicking on the lock icon shows the certificate with name
> > accounts.google.com and signed by myproxy.com, which is fine. So why
> > is IE not happy?
> >
> > In the squid logs:
> > NONE/000 0 CONNECT accounts.google.com:443 - HIER_NONE/- -
> > TCP_MISS/200 9497 GET https://accounts.google.com/ServiceLogin? -
> > HIER_DIRECT/209.85.148.84 text/html
> > NONE/000 0 CONNECT ssl.google-analytics.com:443 - HIER_NONE/- -
> > NONE/000 0 CONNECT mail.google.com:443 - HIER_NONE/- -
> > NONE/000 0 CONNECT ssl.gstatic.com:443 - HIER_NONE/- -
> > TCP_MISS/200 1301 POST
> > http://safebrowsing.clients.google.com/safebrowsing/downloads
> >
> > Is IE9 fussier than other browsers regarding SSL?
> >
> >
> > Any tips/best practices to get SSL interception running smoothly ? :-)
> >
> > Thanks,
> >
> > Sean
>
> I believe Firefox uses its own certificate store while IE uses the Windows 
> certificate store. Was the self-signed cert added to the Windows cert store?
>
> Guy


Re: [squid-users] Unable to access IIS site through squid3

2011-12-02 Thread Fredrik Eriksson

On 12/02/2011 12:44 AM, Amos Jeffries wrote:

I can't speak for what they know. I only pay attention to the details
directly affecting Squid features on the netfilter lists.


Of course you can't, sorry. I just thought it was a bit strange that, out
of the thousands of sites we visit every day, accessing this particular
one would trigger a Linux bug at our end.



FWIW I'm running the Wheezy kernels here with no such problems. It may
be something particular in your iptables rules affecting the checksum.


We have no iptables rules. We have issues with these two sites

  http://www.usitc.gov/
  http://hts.usitc.gov/

The second one gives us frames, but no content.. err..

In the access.log it can look like this

  1322829005.662  30215 XX.XX.XX.XX TCP_MISS/000 0 GET http://www.usitc.gov/ - 
DIRECT/63.173.254.47 -

I take it you have already tried that one.



Its probably best to take this to the netfilter mailing list now and see
if anyone there has a better clue than me.


Ok, I might do that. I was thinking of trying to contact the admins at
usitc to see what they think, but thought I would try here first. You
have been very responsive, thank you.


Regards
--
Fredrik


[squid-users] Transparent HTTP Proxy and SSL-BUMP feature

2011-12-02 Thread Maret Ludovic
Hi there!

I want to configure a transparent proxy for HTTP and SSL. HTTP works
pretty well, but I'm stuck with SSL even if I use the ssl-bump feature.

Right now, it almost works if I use two different ports for the http_port
& https_port:

http_port 3129 transparent
https_port 3130 ssl-bump cert=/etc/squid/ssl_cert/partproxy01-test.pem
key=/etc/squid/ssl_cert/private/partproxy01-key-test.pem

HTTP is OK, and I get the warning about a probable man-in-the-middle attack
when I try to access an SSL web site. I just add an exception. And then I
get an error: Invalid URL

In the logs, I found:

1322820580.454 0 10.194.2.63 NONE/400 3625 GET /pki - NONE/- text/html

when I try to access https://www.switch.ch/pki.
Apparently, squid cuts the URL and removes the host.domain part…

When I tried to use the CONNECT method and ssl-bump on http_port, I got an
error in the browser: “ssl_error_rx_record_too_long” or
“ERR_SSL_PROTOCOL_ERROR”

Any clues?

Many Thanks

Ludovic




Re: [squid-users] Squid 3.1.x and right configuration parameters for tmpfs 8GB

2011-12-02 Thread Josef Karliak

  Hi,
  I use a 64-bit machine, an HP DL380 G7. I thought that it would be 
better to use tmpfs (part of the memory). After a reboot it is clean and 
empty, and squid creates the directories again automatically.

  So you recommend using only a little disk capacity and setting caching to memory only?
  Thanks
  J.K.

Quoting Matus UHLAR - fantomas:


On 01.12.11 15:05, Josef Karliak wrote:
I want to use tmpfs for the squid cache; is 8GB enough or too big? We have  
about 3000 computers behind squid. 16GB is sufficient for the OS,  
which is why I used 8GB for the squid tmpfs.


what is the point of using tmpfs as squid cache? I think using only  
memory cache would be much more efficient (unless you are running  
32-bit squid).

--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
They that can give up essential liberty to obtain a little temporary
safety deserve neither liberty nor safety. -- Benjamin Franklin, 1759





--
My domain uses SPF (www.openspf.org) and DomainKeys/DKIM (with ADSP)  
policy and checks. If you have problems sending email to me, start  
using the email origin verification methods mentioned above. Thank you.








[squid-users] limiting connection not working 3.1.4

2011-12-02 Thread J. Webster

I have squid 3.1.4, but with this conf the rate limiting to 1Mbps does not
seem to work.
What can I change in the conf / delay parameters?

auth_param basic realm Myname proxy server
auth_param basic credentialsttl 2 hours
auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/squid_passwd
authenticate_cache_garbage_interval 1 hour
authenticate_ip_ttl 2 hours
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 1863 # MSN messenger
acl ncsa_users proxy_auth REQUIRED
acl maxuser max_user_ip -s 2
acl CONNECT method CONNECT
http_access deny manager
http_access allow ncsa_users
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny to_localhost
http_access deny maxuser
http_access allow localhost
http_access deny all
icp_access allow all
http_port 8080
http_port xx.xx.xx.xx:80
hierarchy_stoplist cgi-bin ?
cache_mem 100 MB
maximum_object_size_in_memory 50 KB
cache_replacement_policy heap LFUDA
#cache_dir aufs /var/spool/squid 4 16 256
#cache_dir null /null
maximum_object_size 50 MB
cache_swap_low 90
cache_swap_high 95
access_log /var/log/squid/access.log squid
cache_log /var/log/squid/cache.log
cache_store_log none
buffered_logs on
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320
quick_abort_min 0 KB
quick_abort_max 0 KB
#acl apache rep_header Server ^Apache
#broken_vary_encoding allow apache
half_closed_clients off
visible_hostname MyNameProxyServer
log_icp_queries off
dns_nameservers 208.67.222.222 208.67.220.220
hosts_file /etc/hosts
memory_pools off
client_db off
#coredump_dir /var/spool/squid
delay_pools 1
delay_class 1 2
delay_parameters 1 -1/-1 125000/125000
forwarded_for off
via off
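
One thing that stands out: a delay pool only shapes traffic that a delay_access rule assigns to it, and this config defines the pool but never assigns anything. A sketch of the likely missing lines, assuming the authenticated users are the ones to be limited (the 125000/125000 fill rate already corresponds to 1Mbit/s):

delay_access 1 allow ncsa_users
delay_access 1 deny all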


[squid-users] Re: squid/sslbump + IE9

2011-12-02 Thread Sean Boran
Hi,

I'm testing squid v3 with SSL interception (the interception is to do
AV checking with icap) in routing mode.
Sslbump/dynamic certs are configured. A self-signed cert is used on
the proxy, and installed as a CA on browsers.

https to several sites (such as gmail.com, boi.com) works with FF
(although FF is initially much slower), but gives errors in IE9:
"Internet Explorer blocked this website from displaying content with
security certificate errors"

Clicking on the lock icon shows the certificate with name
accounts.google.com and signed by myproxy.com, which is fine. So why
is IE not happy?

In the squid logs:
 NONE/000 0 CONNECT accounts.google.com:443 - HIER_NONE/- -
TCP_MISS/200 9497 GET https://accounts.google.com/ServiceLogin? -
HIER_DIRECT/209.85.148.84 text/html
NONE/000 0 CONNECT ssl.google-analytics.com:443 - HIER_NONE/- -
 NONE/000 0 CONNECT mail.google.com:443 - HIER_NONE/- -
NONE/000 0 CONNECT ssl.gstatic.com:443 - HIER_NONE/- -
TCP_MISS/200 1301 POST
http://safebrowsing.clients.google.com/safebrowsing/downloads

Is IE9 fussier than other browsers regarding SSL?


Any tips/best practices to get SSL interception running smoothly ? :-)

Thanks,

Sean


Re: [squid-users] SECURITY ALERT: Squid Cache: Version 3.2.0.13

2011-12-02 Thread David Touzeau
On Friday, 2 December 2011 at 15:05 +1300, Amos Jeffries wrote:
> Hooray progress :)
> 
> 
> On 2/12/2011 5:49 a.m., David Touzeau wrote:
> >
> > Here it is the log in debug mode :
> >
> > --
> > 2011/12/01 17:49:14.106 kid1| HTTP Client local=4.26.235.254:80
> > remote=192.168.1.228:1074 FD 30 flags=33
> > 2011/12/01 17:49:14.106 kid1| HTTP Client REQUEST:
> > -
> > GET 
> > /v9/windowsupdate/a/selfupdate/WSUS3/x86/Other/wsus3setup.cab?1112011649 
> > HTTP/1.1
> > Accept: */*
> > User-Agent: Windows-Update-Agent
> > Host: download.windowsupdate.com
> > Connection: Keep-Alive
> 
> K. first problem:
> #  host download.windowsupdate.com
> ...
> download.windowsupdate.com.c.footprint.net has address 204.160.124.126
> download.windowsupdate.com.c.footprint.net has address 8.27.83.126
> download.windowsupdate.com.c.footprint.net has address 8.254.3.254
> 
> 
> Client is connecting to server 4.26.235.254 port 80. Which is clearly 
> not "download.windowsupdate.com" according to the official DNS entries I 
> can see. It is likely you have another set of IPs entirely, so please 
> confirm that by running "host download.windowsupdate.com" on the Squid box.
> 
> Note that transparent Squid requires the same DNS "view" as the clients 
> to keep the traffic flowing to the right places. Since it should be in 
> the same network as the clients for transparent to work anyway this is 
> not usually a problem. But can appear if you or the client is doing 
> anything fancy with DNS server configurations.
> 
> NP: if 4.26.235.254 happens to be a local WSUS server you need to 
> configure your local DNS to pass that info on to Squid for the relevant 
> WSUS hosted domains. You will also benefit from Squid helping to enforce 
> that MS update traffic stays on-LAN.
> 
> 
> Amos

OK

Thanks, this is the story..

I'm using a dedicated server as the DNS server (PowerDNS), which caches
DNS records for a long time.

After setting the server to query the ISP DNS, the issue is resolved.

I think this behaviour should be documented along with this new version.

Is there a way to disable this security check feature?

Sometimes, in companies, the proxy IT team does not have the rights to
play with the DNS servers.



[squid-users] Risposta: Re: [squid-users] Squid (using External ACL) problem with Icap

2011-12-02 Thread Roberto Galluzzi
I tried using the patch and it works perfectly.

Thank you very much!!

>>> Amos Jeffries  02/12/2011 8.54 >>>
On 2/12/2011 4:37 a.m., Roberto Galluzzi wrote:
> Hi,
>
> I'm using Squid 3.1 and SquidGuard with success. Now I want to add 
> SquidClamav 6.
>
> Versions 6.x need ICAP, and I didn't have any problem installing it.
>
> In my Squid configuration I use an External ACL to get the username from a 
> script, but with ICAP enabled I can't surf because the user is empty (in 
> access.log). However, in my script log I see that Squid is using it.
>
> If I use simple authentication (auth_param basic ...) I get the user and everything works.
>
> Nevertheless, I MUST use an External ACL, so I need help in this context.

The problem is that the external_acl_type "user=" tag is not an 
authenticated username, just a label for logging etc. in the current Squid.

There is a temporary workaround patch available in the existing bug report:
http://bugs.squid-cache.org/show_bug.cgi?id=3132 

You can use that while we continue to work on redesigning the auth 
systems to handle this better.


>
> This is part of my configuration:
>
> squid.conf
> -
> (...)
> external_acl_type  children=15 ttl=7200 negative_ttl=60 %SRC 
> %SRC  
> (...)
> icap_enable on
> icap_send_client_ip on
> icap_send_client_username on
> icap_client_username_encode off
> icap_client_username_header X-Authenticated-User
> icap_preview_enable on
> icap_preview_size 1024
> icap_service service_req reqmod_precache bypass=1 
> icap://127.0.0.1:1344/squidclamav
> adaptation_access service_req allow all
> icap_service service_resp respmod_precache bypass=1 
> icap://127.0.0.1:1344/squidclamav
> adaptation_access service_resp allow all
> (...)
> -
>
> If you need other info, ask me without problem.
>
> Thank you
>
> Roberto
>




[squid-users] Problem with Bambuser live through squid?

2011-12-02 Thread Peter Olsson
Does anyone know if it is possible to watch Bambuser live
broadcasts through squid, and whether it should work "out
of the box" or needs special configuration?

We can watch finished Bambuser broadcasts, but live
broadcasts won't start.

www.bambuser.com/broadcasts

Their FAQ states:
"
To watch a broadcast:
Mobile broadcast: TCP 80
Webcam broadcast: TCP 1935
"
So port 1935 might make it impossible, but I'm
wondering if anyone has got it working or knows more
about this problem.

Our squid version is 3.1.16.

Thanks!

-- 
Peter Olsson    p...@leissner.se


Re: [squid-users] SECURITY ALERT: Squid Cache: Version 3.2.0.13

2011-12-02 Thread FredB

> 
> Yes, welcome to the host header forgery mess. I don't know who
> benefited from this but a lot of people got bitten by it.
> 
> I mentioned this first day
> http://bugs.squid-cache.org/show_bug.cgi?id=3325
> 
> Anyone doing ANYCAST will be screwed (and a whole lotta people do
> that).
> 
> p4$ host download.windowsupdate.com
> mscom-wui-any.vo.msecnd.net has address 70.37.129.251
> mscom-wui-any.vo.msecnd.net has address 70.37.129.244
> 
> p12$ host download.windowsupdate.com
> a26.ms.akamai.net.0.1.cn.akamaitech.net has address 92.123.69.42
> a26.ms.akamai.net.0.1.cn.akamaitech.net has address 92.123.69.8
> a26.ms.akamai.net.0.1.cn.akamaitech.net has address 92.123.69.24
> a26.ms.akamai.net.0.1.cn.akamaitech.net has address 92.123.69.26
> a26.ms.akamai.net.0.1.cn.akamaitech.net has address 92.123.69.41
> 
> Jenny

It's strange; how to explain that I don't have this problem?
I am using two Squid 3.2.0.13-2029-r11445 instances (with 
http://bugs.squid-cache.org/attachment.cgi?id=2539 and 
http://bugs.squid-cache.org/attachment.cgi?id=2574) in production.

du -h /var/log/squid/access.log
2.2G -> high traffic

grep SECUR /var/log/squid/cache.log -> Nothing

And no complaints from any user.
Perhaps you are using a transparent proxy like David, or the same option
in squid.conf?