Re: [squid-users] what AV products have ICAP support?

2014-08-18 Thread Amos Jeffries
On 18/08/2014 9:30 p.m., Jason Haar wrote:
> Hi there
> 
> I've been testing out squidclamav as an ICAP service and it works well.
> I was wondering what other AV vendors have (linux) ICAP-capable
> offerings that could similarly be hooked into Squid?
> 
> Thanks
> 

http://www.icap-forum.org/icap?do=products&isServer=checked

Amos


[squid-users] Exception handling

2014-08-18 Thread joseph_jose
Hi, I'm using squid with an eCAP adapter. I just want to know whether there is
any configuration directive available in squid to handle a runtime exception
occurring in the loadable module. Currently I have written code in the adapter
itself to handle exceptions; otherwise, when the loaded eCAP module throws an
exception, squid crashes and reloads. So is there any directive available to
redirect to some error page, instead of 'proxy refusing connection' or
'connection was reset', when an exception occurs?





Re: [squid-users] Re: HTTP/HTTPS transparent proxy doesn't work

2014-08-18 Thread squid




What are the iptables rules for that?
Also look at:
http://wiki.squid-cache.org/EliezerCroitoru/Drafts/SSLBUMP


I recompiled to 3.4.6
and followed everything on your page there.
squid started correctly.
However, it is the same problem. Any https page that I had configured  
does not resolve. It is being redirected by unbound but as soon as it  
hits the proxy, it just gets dropped somehow:


# Generated by iptables-save v1.4.7 on Tue Aug 19 03:14:13 2014
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [5454:2633080]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -j ACCEPT
-A INPUT -s 213.171.217.173/32 -p udp -m udp --dport 161 -m state --state NEW -j ACCEPT
-A INPUT -p udp -m udp --dport 161 -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m tcp --dport 161 -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -m state --state NEW -j ACCEPT
-A INPUT -p udp -m udp --dport 53 -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m tcp --dport 53 -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m tcp --dport 25 -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m tcp --dport 110 -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m tcp --dport 143 -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m tcp --dport 20 -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m tcp --dport 21 -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m tcp --dport 3306 -m state --state NEW -j ACCEPT
-A INPUT -p udp -m udp --dport 3306 -m state --state NEW -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-port-unreachable
-A OUTPUT -o lo -j ACCEPT
-A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -m state --state NEW -j ACCEPT
COMMIT
# Completed on Tue Aug 19 03:14:13 2014
# Generated by iptables-save v1.4.7 on Tue Aug 19 03:14:13 2014
*nat
:PREROUTING ACCEPT [23834173:1866373947]
:POSTROUTING ACCEPT [22194:1519446]
:OUTPUT ACCEPT [22194:1519446]
-A PREROUTING -i eth0 -p tcp -m tcp --dport 443 -j REDIRECT --to-ports 3130
-A POSTROUTING -s 0.0.0.0/32 -o eth0 -j MASQUERADE
COMMIT
# Completed on Tue Aug 19 03:14:13 2014
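
One thing worth checking against the rules above: iptables REDIRECT rewrites
the destination port before the packet reaches the filter INPUT chain, so the
intercepted HTTPS traffic arrives at INPUT with destination port 3130, not
443, and falls through to the final REJECT rule. A minimal sketch of the
accept rule that would be needed, assuming that is what is happening:

-A INPUT -p tcp -m tcp --dport 3130 -m state --state NEW -j ACCEPT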

#acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
#http_access deny to_localhost
external_acl_type time_squid_auth ttl=5 %SRC /usr/local/bin/squidauth
acl interval_auth external time_squid_auth
http_access allow interval_auth
http_access deny all
http_port 80 accel vhost allow-direct
https_port 3130 intercept ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=16MB cert=/usr/local/squid/ssl_cert/myCA.pem
sslcrtd_program /usr/local/squid/libexec/ssl_crtd -s /usr/local/squid/var/lib/ssl_db -M 16MB

sslcrtd_children 10
ssl_bump server-first all
#sslproxy_cert_error allow all
#sslproxy_flags DONT_VERIFY_PEER
hierarchy_stoplist cgi-bin ?
coredump_dir /var/spool/squid
refresh_pattern ^ftp:       1440    20%     10080
refresh_pattern ^gopher:    1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320





Re: [squid-users] server failover/backup

2014-08-18 Thread Mike

On 8/18/2014 6:56 PM, Amos Jeffries wrote:

1) long passwords encrypted with DES.

The NCSA helper in current Squid releases checks the length of DES passwords
and rejects any longer than 8 characters, instead of silently
truncating and accepting bad input.

If your users have long passwords and you encrypted them into the
original file with DES then they need to be upgraded. Logging in with
only the first 8 characters of their password should still work with DES.


Thanks Amos.
That seemed to be the issue.
I did some digging and found we had to use MD5 when recreating the
user/pass file, using "htpasswd -mb /etc/squid/password user pass",
and didn't have to change anything in squid.conf. basic_ncsa_auth
automatically picks up the MD5 hashes in the new file and the issue
is resolved.


Thanks again
Mike
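
A quick way to sanity-check the helper against the rebuilt file directly,
assuming the stock stdin OK/ERR protocol of basic_ncsa_auth (the credentials
below are placeholders; paths as in the configs in this thread):

echo "someuser somepass" | /usr/src/squid-3.4.5/helpers/basic_auth/NCSA/basic_ncsa_auth /etc/squid/passwd
# prints OK when the credentials match an entry, ERR otherwise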



[squid-users] Re: HTTP/HTTPS transparent proxy doesn't work

2014-08-18 Thread agent_js03
Hello again Eliezer,

I have decided to do what you said before and set the code to 302 instead of
200 and now the block page works perfectly. All problems are solved.
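
For reference, a minimal sketch of the kind of REQMOD reply this describes,
per RFC 3507: instead of rewriting the request URL, the ICAP server
encapsulates a complete HTTP 302 response (the null-body offset must equal
the byte length of the encapsulated header block; the value below is only
illustrative):

ICAP/1.0 200 OK
ISTag: i16FID6HcIdc9AbGie8d03f1Ij5dejcj
Encapsulated: res-hdr=0, null-body=142

HTTP/1.1 302 Found
Location: http://192.168.1.145:8089/blockpage.php?category=Banned+URL+Regex&criteria=dog.%2Abiscuits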





Re: [squid-users] Re: HTTP/HTTPS transparent proxy doesn't work

2014-08-18 Thread Eliezer Croitoru
Basically the main issue is that you actually change the request instead
of redirecting.
You should return a full 302 redirect response for the request, which will
result in the client accessing the 192.168.1.145:8089 server by itself.


Eliezer

On 08/19/2014 03:07 AM, agent_js03 wrote:

ICAP/1.0 200 OK
Date: Mon, 18 Aug 2014 23:15:42 GMT
ISTag: i16FID6HcIdc9AbGie8d03f1Ij5dejcj
Encapsulated: req-hdr=0, null-body=545
Server: BaseICAP/1.0 Python/2.7.8

GET
http://192.168.1.145:8089/blockpage.php?category=Banned+URL+Regex&criteria=dog.%2Abiscuits
HTTP/1.1
via: 1.1 localhost (squid/3.2.11)
accept-language: en-US,en;q=0.5
accept-encoding: gzip, deflate
x-forwarded-for: 127.0.0.1
accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
user-agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:30.0) Gecko/20100101
Firefox/30.0
host: search.yahoo.com
cookie: B=c3lrj0t9v516p&b=3&s=90; HP=1
cache-control: max-age=0
surrogate-capability: localhost="Surrogate/1.0 ESI/1.0"




The page 192.168.1.145:8089 is the local php blockpage. The banned URL regex
criteria is the regex dog.*biscuits.

I am not sure what is going on. Here is what works so far: if I do a reqmod
on a non-SSL page and it blocks, then it goes through OK. If I do a respmod
on either a non-SSL page or an SSL-page and feed the content back, it goes
through OK and I see the blockpage. The only thing that doesn't work is if I
do a reqmod and it tries to redirect me to the blockpage. And this only
happens with transparent proxying. When I have my server set up for a manual
proxy, it works fine; the blockpage shows up OK. Why would it behave
differently running as a transparent proxy?




Re: [squid-users] CDN / JS 503 Service Unavailable

2014-08-18 Thread Eliezer Croitoru

On 08/18/2014 10:37 AM, Paul Regan wrote:

@Eliezer - Sorry to say the acl lines made no difference.  Can I use
any of the debugging options to get deeper into this?

Well, it depends on the 503 content:
it can be a network issue or an application-level issue.
Since you can download the file using wget from the proxy machine, it
looks like a proxy settings error, but it is possible that you are
using some wrong settings.


For me and many others it does work with proper proxy settings.
Have you tried removing ufdbguard from the settings?
It might be the reason.

Eliezer


[squid-users] Re: HTTP/HTTPS transparent proxy doesn't work

2014-08-18 Thread agent_js03
Hello Eliezer, thank you for your response.

I have examined the wireshark pcap of this transaction and will now provide
a more detailed run-through of what's going on. As a summary, the problem is
related to SSL; basically what's going on is I am requesting an SSL page,
the and the ICAP server is redirecting to a non-SSL (plain HTTP) page (just
by modifying the request URL). The connection appears to be getting reset as
the client tries to read SSL from the server.

*Here is the full ICAP request:*

REQMOD icap://127.0.0.1:13440/archangel ICAP/1.0
Host: 127.0.0.1:13440
Date: Mon, 18 Aug 2014 23:15:42 GMT
Encapsulated: req-hdr=0, null-body=575
Preview: 0
Allow: 204

GET
https://search.yahoo.com/search;_ylt=A2KLtgzZhPJT85QAm9ebvZx4?p=dog+biscuits&toggle=1&cop=mss&ei=UTF-8&fr=yfp-t-901&fp=1
HTTP/1.1
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:30.0) Gecko/20100101
Firefox/30.0
Host: search.yahoo.com
Cookie: B=c3lrj0t9v516p&b=3&s=90; HP=1
Via: 1.1 localhost (squid/3.2.11)
Surrogate-Capability: localhost="Surrogate/1.0 ESI/1.0"
X-Forwarded-For: 127.0.0.1
Cache-Control: max-age=0

*and here is the full ICAP response:*

ICAP/1.0 200 OK
Date: Mon, 18 Aug 2014 23:15:42 GMT
ISTag: i16FID6HcIdc9AbGie8d03f1Ij5dejcj
Encapsulated: req-hdr=0, null-body=545
Server: BaseICAP/1.0 Python/2.7.8

GET
http://192.168.1.145:8089/blockpage.php?category=Banned+URL+Regex&criteria=dog.%2Abiscuits
HTTP/1.1
via: 1.1 localhost (squid/3.2.11)
accept-language: en-US,en;q=0.5
accept-encoding: gzip, deflate
x-forwarded-for: 127.0.0.1
accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
user-agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:30.0) Gecko/20100101
Firefox/30.0
host: search.yahoo.com
cookie: B=c3lrj0t9v516p&b=3&s=90; HP=1
cache-control: max-age=0
surrogate-capability: localhost="Surrogate/1.0 ESI/1.0"




The page 192.168.1.145:8089 is the local php blockpage. The banned URL regex
criteria is the regex dog.*biscuits.

I am not sure what is going on. Here is what works so far: if I do a reqmod
on a non-SSL page and it blocks, then it goes through OK. If I do a respmod
on either a non-SSL page or an SSL-page and feed the content back, it goes
through OK and I see the blockpage. The only thing that doesn't work is if I
do a reqmod and it tries to redirect me to the blockpage. And this only
happens with transparent proxying. When I have my server set up for a manual
proxy, it works fine; the blockpage shows up OK. Why would it behave
differently running as a transparent proxy?





Re: [squid-users] server failover/backup

2014-08-18 Thread Amos Jeffries
On 19/08/2014 9:09 a.m., Mike wrote:
> Question: when we copy the /etc/squid/passwd file itself from "server 1"
> to "server 2", and use the same squid authentication, why does
> server 2 not accept the usernames and passwords in the file that work on
> server 1?
> Is that file encrypted by server 1?
> Do we need to create a new passwd file from scratch on server 2, and use
> a script to "import" it into that new passwd file from server 1?
> 
> The main differences:
> Server 1 is a 64-bit OS (Fedora 8) using squid version 2.6.STABLE19
> Server 2 is a recently installed 32-bit OS, CentOS 6.5 i686 (due to the
> hardware being 32-bit), with squid 3.4.5.
> 
> Does that 64 versus 32 bit file setup and creation make an impact? Or
> how about the 2.6.x versus 3.4.x?

Two possibilities:

1) long passwords encrypted with DES.

The NCSA helper in current Squid releases checks the length of DES passwords
and rejects any longer than 8 characters, instead of silently
truncating and accepting bad input.

If your users have long passwords and you encrypted them into the
original file with DES then they need to be upgraded. Logging in with
only the first 8 characters of their password should still work with DES.

2) OS-specific hash algorithm was used to encrypt.

Blowfish and SHA1 algorithms are not universally available. An NCSA
helper built against a library missing one of these algorithms
cannot log in users with a password file generated using them.

You may have to migrate users via MD5, or ensure libcrypt is used to
build the new Squid helper.

HTH
Amos
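
As an aside, the hash types are easy to tell apart in the passwd file itself:
crypt()/DES hashes are exactly 13 characters, while Apache MD5 entries carry
an $apr1$ prefix. The entries below are made-up examples:

# DES/crypt(): 2-character salt plus 11 hash characters
olduser:abJ6WmZoxPqRk
# Apache MD5, as produced by htpasswd -m
newuser:$apr1$gZ8cV3Xq$0yq0vUXEGmZ1DldTyHE1Q0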



Re: [squid-users] Re: server failover/backup

2014-08-18 Thread Amos Jeffries
On 19/08/2014 10:48 a.m., Mike wrote:
> On 8/18/2014 4:27 PM, nuhll wrote:
>> Question: why u spam my thread?
> This is an email list. I created a new email to
> squid-users@squid-cache.org for assistance from anyone that uses the
> email list. I was told some time ago that Nabble is not recommended
> since it does not always place messages in the proper thread layout on
> the mailing list, so it's better to use it via email, not the website.
> 

Your first email was created as a reply to the thread
"In-Reply-To: <1408378851794-4667247.p...@n4.nabble.com>"

Amos



Re: [squid-users] Very slow site via squid

2014-08-18 Thread Amos Jeffries
On 18/08/2014 11:48 p.m., babajaga wrote:
> I have a squid 2.7 setup on openWRT, running on a 400Mhz/64MB embedded
> system.
> First of all, a bit slow (which is another issue), but one site is
> especially slow, when accessed via squid:
> 
> 1408356096.498  25061 10.255.228.5 TCP_MISS/200 379 GET
> http://dc73.s290.meetrics.net/bb-mx/submit? - DIRECT/78.46.90.182 image/gif
> 1408356103.801  46137 10.255.228.5 TCP_MISS/200 379 GET
> http://dc73.s290.meetrics.net/bb-mx/submit? - DIRECT/78.46.90.182 image/gif
> 
> Digging deeper, (squid.conf: debug ALL,9) I see this:
> 2014/08/18 11:17:26| commConnectStart: FD 198, dc44.s290.meetrics.net:80
> 2014/08/18 11:18:00| fwdConnectDone: FD 198:
> 'http://dc44.s290.meetrics.net/bb-mx/submit?//oxNGf
> 
> which should explain the slowness.
> 
> Example of http-headers:
> 
> Cache-Control: no-cache,no-store,must-revalidate
> Content-Length: 43
> Content-Type: image/gif
> Date: Mon, 18 Aug 2014 10:04:52 GMT
> Expires: Mon, 18 Aug 2014 10:04:51 GMT
> Pragma: no-cache
> Server: nginx
> X-Cache: MISS from my-embedded-proxy
> X-Cache-Lookup: MISS from my-embedded-proxy:3128
> ---
> Accept: image/png,image/*;q=0.8,*/*;q=0.5
> Accept-Encoding: gzip, deflate
> Accept-Language: de,en-US;q=0.7,en;q=0.3
> Connection: keep-alive
> Cookie: id=721557E9-A0E0-C549-7D6A-B2D622DA4B1F
> DNT: 1
> Host: dc73.s290.meetrics.net
> Referer: http://www.spiegel.de/
> User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:31.0) Gecko/20100101 Firefox/31.0
> 
> I can only suspect something special regarding their DNS.
> Any other idea ?

I agree, it's likely their DNS response times or TCP handshake timeouts.

The latest squid-3.x stable releases may be able to help with this. We
have separated the DNS lookup and TCP handshake operations so the info
about bad connections is stored longer for overall faster transactions.

Also, in my experience the worst slow domains like this are usually
advertising hosts. So blocking their transactions outright (and quickly)
can boost page load time a huge amount. It is worth having a look at
what those requests are for.

Amos
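
A minimal squid.conf sketch of the blocking approach Amos describes, assuming
the meetrics hosts in the log above are tracking beacons one is willing to
drop outright:

acl slow_ads dstdomain .meetrics.net
http_access deny slow_ads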


Re: [squid-users] Re: server failover/backup

2014-08-18 Thread Mike

On 8/18/2014 4:27 PM, nuhll wrote:

Question: why u spam my thread?




This is an email list. I created a new email to 
squid-users@squid-cache.org for assistance from anyone that uses the 
email list. I was told some time ago that Nabble is not recommended
since it does not always place messages in the proper thread layout on
the mailing list, so it's better to use it via email, not the website.






[squid-users] Re: server failover/backup

2014-08-18 Thread nuhll
Question: why u spam my thread?





[squid-users] server failover/backup

2014-08-18 Thread Mike
Question: when we copy the /etc/squid/passwd file itself from "server 1"
to "server 2", and use the same squid authentication, why does
server 2 not accept the usernames and passwords in the file that work on
server 1?

Is that file encrypted by server 1?
Do we need to create a new passwd file from scratch on server 2, and use 
a script to "import" it into that new passwd file from server 1?


The main differences:
Server 1 is a 64-bit OS (Fedora 8) using squid version 2.6.STABLE19
Server 2 is a recently installed 32-bit OS, CentOS 6.5 i686 (due to the
hardware being 32-bit), with squid 3.4.5.


Does that 64 versus 32 bit file setup and creation make an impact? Or 
how about the 2.6.x versus 3.4.x?


The squid.conf specifics, older server 1:

auth_param basic program /usr/lib64/squid/ncsa_auth /etc/squid/passwd
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive on

acl ourCustomers proxy_auth REQUIRED
http_access allow ourCustomers



The squid.conf specifics, newer OS server 2:

auth_param basic program /usr/src/squid-3.4.5/helpers/basic_auth/NCSA/basic_ncsa_auth /etc/squid/passwd

auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive on

acl ourCustomers proxy_auth REQUIRED
http_access allow ourCustomers

http_access deny all


Thanks!
Mike


[squid-users] Re: ONLY Cache certain Websites.

2014-08-18 Thread nuhll
Thanks for no help, but could u please stop spamming then?





Re: [squid-users] Re: ONLY Cache certain Websites.

2014-08-18 Thread Alex Crow


http://www.squid-cache.org/Doc/config/cache/

On 03/08/14 10:25, nuhll wrote:

Seems like "acl all src all" fixed it. Thanks!

One problem is left. Is it possible to only cache certain websites, and just
redirect the rest?



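
For the archive, a minimal sketch of the cache directive behind the link
above, assuming .example.com stands in for the sites that should be cached;
everything else is still proxied, just never stored:

acl cache_these dstdomain .example.com
cache allow cache_these
cache deny all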




Re: [squid-users] Unhandled exception: c

2014-08-18 Thread Alex Crow

Hi,

Anyone have any ideas on this?

Thanks

Alex


Hi Amos,

I spoke too soon. I have this (maybe more informative than the original
error, though).


2014/07/31 11:57:45 kid1| assertion failed: String.cc:201: "len_ + len < 65536"
2014/07/31 11:58:07 kid1| Starting Squid Cache version 3.3.12-20140309-r12678 for x86_64-pc-linux-gnu...

2014/07/31 11:58:07 kid1| Process ID 14375
2014/07/31 11:58:07 kid1| Process Roles: worker
2014/07/31 11:58:07 kid1| With 65535 file descriptors available

This is on 3.3.12 again. I have set up 3.4.x to remove NTLM auth (in
fact all auth) but we are going to try to give our users a break for a 
couple of months until we throw this at them in an attempt to get to 
the bottom of the high CPU usage on 3.4.


Cheers

Alex






[squid-users] Re: ONLY Cache certain Websites.

2014-08-18 Thread nuhll
Just to clarify my problem: I don't use it as a transparent proxy! I
distribute the proxy with my DHCP server and a .pac file, so it gets used on
all machines with "automatic proxy detection".





[squid-users] Very slow site via squid

2014-08-18 Thread babajaga
I have a squid 2.7 setup on openWRT, running on a 400Mhz/64MB embedded
system.
First of all, a bit slow (which is another issue), but one site is
especially slow, when accessed via squid:

1408356096.498  25061 10.255.228.5 TCP_MISS/200 379 GET
http://dc73.s290.meetrics.net/bb-mx/submit? - DIRECT/78.46.90.182 image/gif
1408356103.801  46137 10.255.228.5 TCP_MISS/200 379 GET
http://dc73.s290.meetrics.net/bb-mx/submit? - DIRECT/78.46.90.182 image/gif

Digging deeper, (squid.conf: debug ALL,9) I see this:
2014/08/18 11:17:26| commConnectStart: FD 198, dc44.s290.meetrics.net:80
2014/08/18 11:18:00| fwdConnectDone: FD 198:
'http://dc44.s290.meetrics.net/bb-mx/submit?//oxNGf

which should explain the slowness.

Example of http-headers:

Cache-Control: no-cache,no-store,must-revalidate
Content-Length: 43
Content-Type: image/gif
Date: Mon, 18 Aug 2014 10:04:52 GMT
Expires: Mon, 18 Aug 2014 10:04:51 GMT
Pragma: no-cache
Server: nginx
X-Cache: MISS from my-embedded-proxy
X-Cache-Lookup: MISS from my-embedded-proxy:3128
---
Accept: image/png,image/*;q=0.8,*/*;q=0.5
Accept-Encoding: gzip, deflate
Accept-Language: de,en-US;q=0.7,en;q=0.3
Connection: keep-alive
Cookie: id=721557E9-A0E0-C549-7D6A-B2D622DA4B1F
DNT: 1
Host: dc73.s290.meetrics.net
Referer: http://www.spiegel.de/
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:31.0) Gecko/20100101 Firefox/31.0

I can only suspect something special regarding their DNS.
Any other idea ?












[squid-users] Re: ONLY Cache certain Websites.

2014-08-18 Thread nuhll
What is pnp? Do you mean UPnP? It's enabled. I don't understand RU. If I were
able to read and understand it, why do you think I would post it here? Just so
that you could tell me that's the answer?!





[squid-users] what AV products have ICAP support?

2014-08-18 Thread Jason Haar
Hi there

I've been testing out squidclamav as an ICAP service and it works well.
I was wondering what other AV vendors have (linux) ICAP-capable
offerings that could similarly be hooked into Squid?

Thanks

-- 
Cheers

Jason Haar
Corporate Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1



Re: [squid-users] CDN / JS 503 Service Unavailable

2014-08-18 Thread Paul Regan
Hello

@Eliezer - Sorry to say the acl lines made no difference.  Can I use
any of the debugging options to get deeper into this?

@Amos - Maybe my British sarcasm was lost on an international audience
;) ... This config is an organic one which in parts has existed
through a number of SAs and versions, so thanks for the audit; we
will take a look at each suggestion.

Re performance: only 200-ish users and no issues reported. That's
not to say it can't be improved; people accept what they have as
the norm.

The only problem we have (that we know about!) is the .js caching of
this cloudflare site.

Paul
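
On the debugging question above, a minimal sketch using squid's debug_options
directive; cache.log verbosity is set per section,level pairs, and section 28
(access control) is the usual place to watch allow/deny decisions (section
numbers are worth verifying against the squid wiki for your version):

# keep everything at level 1, raise ACL processing detail
debug_options ALL,1 28,5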



On 17 August 2014 07:03, Amos Jeffries  wrote:
> On 15/08/2014 11:22 p.m., Paul Regan wrote:
>> Urg, thats like standing front of the class for everyone to stare!
>>
>
> If you are not able to take constructive criticism, sysadmin is
> probably not the best line of work for you :-)
>
> I see you seem to have found the problem. So consider these a free audit.
>
>>
>> here you go :
>>
>> cache_effective_user squid
>>
>> url_rewrite_program /usr/sbin/ufdbgclient -l /var/ufdbguard/logs
>> url_rewrite_children 64
>>
>> acl localnet src 
>> acl eu-edge-IP src 
>> acl eu-buscon-edge-IP src 
>> acl eu-inet-dmz src 
>> acl na-subnet src 
>> acl na-inet-dmz src 
>> acl na-buscon-edge-IP src 
>> acl st-buscon-vpc src 
>> acl eu-mfmgt src 
>>
>> acl SSL_ports port 443
>> acl Safe_ports port 80 # http
>> acl Safe_ports port 21 # ftp
>> acl Safe_ports port 443 # https
>> acl Safe_ports port 70 # gopher
>> acl Safe_ports port 210 # wais
>> acl Safe_ports port 1025-65535 # unregistered ports
>> acl Safe_ports port 280 # http-mgmt
>> acl Safe_ports port 488 # gss-http
>> acl Safe_ports port 591 # filemaker
>> acl Safe_ports port 777 # multiling http
>>
>> acl CONNECT method CONNECT
>>
>> hosts_file /etc/hosts
>>
>> dns_nameservers   
>>
>> http_access deny !Safe_ports
>>
>> http_access deny CONNECT !SSL_ports
>>
>> acl infrastructure src
>>
>> http_access allow localhost manager
>> http_access allow infrastructure manager
>> http_access deny manager
>>
>> acl mo-whitelist dstdomain "/etc/squid/mo-whitelist"
>> http_access allow mo-whitelist
>>
>> acl mo-blockedsites dstdomain "/etc/squid/mo-blockedsites"
>> deny_info http://restricted_content_blockedsites.html mo-blockedsites
>> http_access deny mo-blockedsites
>>
>> acl mo-blockedkeywords urlpath_regex "/etc/squid/mo-blockedkeywords"
>> deny_info http://restricted_content_keywords.html mo-blockedkeywords
>> http_access deny mo-blockedkeywords
>>
>> acl mo-nocache dstdomain "/etc/squid/mo-nocache"
>> no_cache deny mo-nocache
>
> The correct name for that directive is "cache"; it has been since Squid-2.4.
> As in, what you should have there is:
>  cache deny mo-nocache
>
>
>>
>> acl mo-blockedIP src "/etc/squid/mo-blockedIP"
>> acl mo-allowURLs dstdomain src "/etc/squid/mo-allowURLs"
>>
>> http_access allow mo-blockedIP mo-allowURLs
>> http_access deny mo-blockedIP
>> deny_info http://restricted_content_blockedip.html mo-blockedIP
>>
>> acl mo-allowNYIP src "/etc/squid/mo-allowNYIP"
>> http_access allow mo-allowNYIP
>>
>> http_access allow na-subnet mo-allowURLs
>> http_access deny na-subnet
>> deny_info http://restricted_content_subnet.html na-subnet
>>
>> http_access allow localnet
>> http_access deny st-buscon-vpc
>> http_access allow eu-edge-IP
>> http_access allow eu-inet-dmz
>> http_access allow eu-buscon-edge-IP
>> http_access allow na-inet-dmz
>> http_access allow na-buscon-edge-IP
>> http_access allow eu-mfmgt
>>
>> acl ftp proto FTP
>> always_direct allow ftp
>>
>> acl purge method PURGE
>> http_access allow purge localhost
>> http_access deny purge
>
> Hmm.. What you have here is a pure forward-proxy configuration.
> If you need to purge things from the cache of a forward-proxy then it is
> caching badly/wrong.
>
> I know that Squid does cache some things badly, but we have taken great
> pains to ensure that those cases are conservative. The wrong cases
> shoudl all take form of dropping things which should have been kept,
> rather than storing things which should have been dropped.
>
> Are you perhaps finding that you need to manually erase content
> permitted into the cache by refresh rules with "override-expire
> ignore-no-store ignore-private"? Ignoring private and no-store in
> particular is very dangerous... think captcha images, usernames in image
> form for embedded session display, company private information, etc.
>
>>
>> http_access allow localhost
>> http_access deny all
>>
>> http_port 8080
>>
>> cache_dir aufs /squid-cache 39322 16 256
>> cache_replacement_policy heap LFUDA
>>
>> cache_swap_low 96
>> cache_swap_high 98
>>
>> cache_mem 256 MB
>>
>> maximum_object_size 64 KB
>
> It's a little unclear why you are limiting cached objects to 64KB while
> refresh patterns also force archive and binary executable types to be
> cached. You have 40.25 GB of cache space available.
>
>> maximum_object_size_in_memory 20 KB
>>
>> quick_abort_min 0 KB
>> quick