Re: [squid-users] New Hungarian translation of the error messages

2012-02-24 Thread Amos Jeffries

On 24/02/2012 4:51 a.m., Gergely Kiss wrote:

Dear Developers and Users,

I noticed that the Hungarian error page translations are incomplete;
most of the pages are only half-translated and the translated strings
are hard for an average user to understand. As a small contribution,
I created a new translation which is complete, more accurate and
easier to understand even for inexperienced users.

You may find the files in the TGZ file attached.

If you find my contribution useful, feel free to merge the files into
the source code.


Regards,
Gergely Kiss


Thanks for the interest in this language. It is incomplete due to the lack 
of a volunteer to check and approve suggested text changes.


The Squid project uses an automated system for translations (see 
http://wiki.squid-cache.org/Translations). I will make an attempt soon 
to convert these templates into it, but I can only go so far as to insert 
them as new suggestions, which then wait for a moderator to check that 
they converted correctly and to approve them.


If you are interested and able to spare some time to be that moderator, 
please register with the translation system as described in 
http://wiki.squid-cache.org/Translations/Basics and send me an email 
with your chosen login name, describing any experience you have with 
translating, and whether you want to continue long term or just long 
enough to catch up on the waiting tasks.
The workload is a day or so up front to get things up-to-date, then a few 
minutes a month to cross-check new suggestions and source code changes. 
There are no time constraints or deadlines; changes are published daily 
in the langpack rolling release.


Amos


Re: [squid-users] HostHeaderForgery on dual stack ipv4/ipv6 machine and ICAP

2012-02-24 Thread Amos Jeffries

On 25/02/2012 11:52 a.m., Guy Helmer wrote:

On Jan 29, 2012, at 5:13 PM, Amos Jeffries wrote:


On 30/01/2012 6:29 a.m., James R. Leu wrote:

Hello,

I'm in the process of implementing an ICAP server, but I'm encountering the
HostHeaderForgery issue quite often when accessing sites that I can reach
over IPv6.  I've read the KB entry about this.  It suggests
that co-locating the NAT device and squid on the same machine,
or enabling EDNS, may resolve the issue.

I'm wondering if my issue is specific to dual stack v4/v6
or to ICAP.  Any suggestions for what I can try to
work around this issue?  If this is specific to
dual stack v4/v6, I'm here to beat my v6 migration
drum and I'm willing to help out to resolve it.

The only relation to IP version is if you have disabled DNS lookups of IPv4 
in Squid. That could make Squid fail to identify the IPv4 destination as valid. 
The latest 3.2 daily snapshot has DNS updates that work faster and obsolete 
that option, so it should not hit this particular aspect.

The only relation ICAP/eCAP/url-rewrite/request_header_access adaptations have 
is if they alter the URL and/or Host header to something that does not sync up. 
Upstream interception proxies might fail the verify and produce the conflict 
error after such alterations.

The most common occurrence now is with websites which force DNS changes 
across the Internet on very short TTLs (er, "load balance" via DNS results). Each 
time the DNS changes IP, Squid and the client race over which picks the change 
up first. DNS stability takes up to the TTL duration to settle, at which point 
these load balancers have de-stabilized the network again with a new IP.  We have 
a patch in testing now; hopefully it will be in mainstream soon.
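You can watch this short-TTL behaviour yourself with dig; the second field 
of each answer line is the advertised TTL in seconds (hypothetical domain):

  dig +noall +answer www.example.com A

A DNS-load-balanced name will often show a TTL of under a minute and a 
changing address set on successive queries.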


Is there a bug # or URL from which I can obtain the patch? I am encountering 
this issue with intercepting traffic to Akamai servers.


3.2.0.15 was released with the fix you need. Also for the Avira problems.


Amos


Re: [squid-users] squid as transparent ssl proxy

2012-02-24 Thread Amos Jeffries

On 25/02/2012 4:25 a.m., Jan Fischbach wrote:

Hi Everyone,

I compiled and configured squid in a way to get a transparent ssl
proxy. With the debug flag on and looking into the access.log, no errors
or warnings are shown. When intercepting http traffic, everything
works fine but there is trouble with ssl.


First of all, let's be clear that "transparent" is just a modifier word; 
it means different things depending on how it's used in a sentence:


 * in "transparent HTTP" means the proxy is not altering anything 
during the relay.
   - CONNECT tunnel *is* transparent in HTTP, the proxy cannot access 
or change anything inside the encrypted tunnel.


 * in "transparent proxy" (TCP layer) it means the proxy is not visible 
from client *and* server end.
- NAT (REDIRECT) is not transparent. The  server is fully aware of 
the proxies presence.


 * in "transparent intercept" it means the client is not aware of the 
interception being done.

  - NAT (REDIRECT) is one form of transparent  intercept.

There are other phrases using the word, but those are the three related 
to your config which are getting confused.


TLS/SSL was designed from the ground up to make transparent interception 
not work. It ensures that either the client or the server is always able 
to detect the interception. This is still true, even with ssl-bump. The 
loophole ssl-bump uses is that many users ignore the warnings, and 
client agents/browsers can be configured to trust the proxy when it is 
detected.


So you can see there is no way to do a "Transparent SSL Proxy". What you 
are aiming for is an SSL interception proxy. A bit pedantic maybe, but we 
have to pay attention to that level of detail to make sense of what the 
config can/can't do.




On the Client (ipad) - safari tells me that it can't establish a safe
connection - nothing more. The access.log shows:

ext/html
1330094808.367  3 172.20.0.113 NONE/400 3563
%BF%18%C6%CC%D5%CB%B5+%C5Eq - NONE/- text/html


Squid is agreeing with Safari. A connection was received from 
172.20.0.113 requesting "%BF%18%C6%CC%D5%CB%B5+%C5Eq". No server was 
contacted, and 3563 bytes forming an HTTP 400 error page were returned.


This is binary data arriving on a plain-HTTP port. We shall get to that 
below.




here is my config and stuff:

ip_forward =1


Chain PREROUTING (policy ACCEPT)
target prot opt source   destination
REDIRECT   tcp  --  anywhere anywheretcp
dpt:www redir ports 3128
REDIRECT   tcp  --  anywhere anywheretcp
dpt:https redir ports 3129


Okay.


and here the squid.conf




acl SSL method CONNECT
ssl_bump allow all


Okay, anything which is eligible for bumping/decryption gets bumped.
You will want to restrict that to your LAN ranges, but it is fine for testing.
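For example, a minimal sketch of that restriction, reusing the localnet 
ACL already defined in your own config:

acl localnet src 172.20.0.0/24
ssl_bump allow localnet
ssl_bump deny all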



## allow users to websites attempting to use certs belonging to other domains
acl BadSite ssl_error SQUID_X509_V_ERR_DOMAIN_MISMATCH
sslproxy_cert_error allow BadSite
sslproxy_cert_error deny all


If you really want to allow, for example, google.com to emit certificates 
for yahoo.com and have them accepted silently...
Use with care. You have just taken full responsibility for any problems 
caused by the above example for *all* websites in existence, and there 
are a lot of really malicious phishing sites out there that would jump at 
this opportunity to spoof the SSL certificates as well as the phished 
site content.


SOLUTION: You should be able to add an ACL to the conditions limiting 
what domains *you*, the proxy admin, trust enough to ignore errors from.
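A rough sketch of that, assuming dstdomain ACLs are usable on this 
directive in your version (the domain is hypothetical; only list domains 
whose mismatches you have personally verified as harmless):

acl BadSite ssl_error SQUID_X509_V_ERR_DOMAIN_MISMATCH
acl TrustedMismatch dstdomain .example.com
sslproxy_cert_error allow BadSite TrustedMismatch
sslproxy_cert_error deny all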




# Squid Transparent http listens to port 3128
http_port 172.20.0.79:3128 intercept

http_port 127.0.0.1:3128 intercept

The comment is incorrect. Only one of these ports receives "intercepted 
HTTP (port 80) traffic".


Which one depends on the order of interface naming/IDs or some other 
magic in the kernel. REDIRECT changes the local IP, sending packets to 
the "primary IP" of the box (whatever that means on your box). This is 
only a problem with REDIRECT; you can use DNAT instead to specify exactly 
which IP:port Squid is listening on and drop one of these listeners.
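A sketch of the DNAT form, assuming the LAN-facing interface is eth0 and 
reusing the 172.20.0.79 address from your config:

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 172.20.0.79:3128
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 172.20.0.79:3129

With that in place, the 127.0.0.1 listeners can be dropped from squid.conf.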


Additional security note:
  port 3128 is a well-known port for proxies to be listening on. There 
are some nasties out there which take advantage of that to cause DoS 
conditions for intercept proxies. You should make the interception 
listening port(s) a random number known only to you, the firewall, and 
the squid config, and add a mangle table rule to prevent any external 
packets reaching it without NAT.
  see 
http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxRedirect for 
the iptables rules about that.
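The core of it is something like the following (13128 standing in for 
your randomly chosen port). Because the mangle table is traversed before 
the nat table, intercepted packets still carry their original port 80/443 
at this point and pass through; only packets aimed directly at the Squid 
port get dropped:

iptables -t mangle -A PREROUTING -p tcp --dport 13128 -j DROP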



# Squid Transparent SSL https listens to port 3129
http_port 172.20.0.79:3129 intercept ssl-bump
cert=/etc/apache2/ssl/server.crt key=/etc/apache2/ssl/server.key

http_port 127.0.0.1:3129 intercept ssl-bump
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
options=ALL cert=/etc/apache2/ssl/apache.pem


Neither of these ports accepts HTTPS traffic.


[squid-users] squid as transparent ssl proxy

2012-02-24 Thread Jan Fischbach
Hi Everyone,

I compiled and configured squid in a way to get a transparent ssl
proxy. With the debug flag on and looking into the access.log, no errors
or warnings are shown. When intercepting http traffic, everything
works fine but there is trouble with ssl.
On the Client (ipad) - safari tells me that it can't establish a safe
connection - nothing more. The access.log shows:

ext/html
1330094808.367  3 172.20.0.113 NONE/400 3563
%BF%18%C6%CC%D5%CB%B5+%C5Eq - NONE/- text/html
1330094809.922  8 172.20.0.113 NONE/400 3546 NONE
error:invalid-request - NONE/- text/html
1330094811.938  2 172.20.0.113 NONE/400 3546 NONE
error:invalid-request - NONE/- text/html
1330094813.603  3 172.20.0.113 NONE/400 3552 NONE
error:invalid-request - NONE/- text/html
1330094813.673  2 172.20.0.113 NONE/400 3552 NONE
error:invalid-request - NONE/- text/html
1330094813.744 10 172.20.0.113 NONE/400 3546 NONE
error:invalid-request - NONE/- text/html


here is my config and stuff:

ip_forward =1


Chain PREROUTING (policy ACCEPT)
target prot opt source   destination
REDIRECT   tcp  --  anywhere anywheretcp
dpt:www redir ports 3128
REDIRECT   tcp  --  anywhere anywheretcp
dpt:https redir ports 3129


Squid Cache: Version 3.1.19
configure options:  '--enable-ssl' '--enable-ssl-crtd'
'--enable-linux-netfilter' --with-squid=/root/squid-3.1.19
--enable-ltdl-convenience

and here the squid.conf


#
# Recommended minimum configuration:
#
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1


acl SSL method CONNECT
ssl_bump allow all


## allow users to websites attempting to use certs belonging to other domains
acl BadSite ssl_error SQUID_X509_V_ERR_DOMAIN_MISMATCH
sslproxy_cert_error allow BadSite
sslproxy_cert_error deny all

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 172.20.0.0/24  # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly
plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost localnet


### MY RULES FOR TRANSPARENT SSL PROXYING

always_direct allow all
ssl_bump allow all


# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost


# Squid Transparent http listens to port 3128
http_port 172.20.0.79:3128 intercept

http_port 127.0.0.1:3128 intercept
# Squid Transparent SSL https listens to port 3129
http_port 172.20.0.79:3129 intercept ssl-bump
cert=/etc/apache2/ssl/server.crt key=/etc/apache2/ssl/server.key

http_port 127.0.0.1:3129 intercept ssl-bump
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
options=ALL cert=/etc/apache2/ssl/apache.pem

sslcrtd_program /usr/local/squid/libexec/ssl_crtd -s /etc/apache2/ssl2 -M 4MB

sslcrtd_children 50


Hope someone can help me.

Best regards,
Jan


Re: [squid-users] mix: memory and hdd

2012-02-24 Thread Amos Jeffries

On 24/02/2012 11:08 p.m., Werner Kapferer wrote:

Hi!

I'd like to split my cache in 2 parts.
Small files (up to a specified size) should be stored ONLY in memory, bigger 
ones ONLY on the hdd.

Is this set by default (by maximum_object_size_in_memory), or will these files 
be saved on the hdd additionally?


That depends on your version of Squid.

The memory cache is always preferred, unless the object is too large to 
fit within your max in-memory limit, or is of unknown (aka "infinitely 
large") size. The only reasons things get kicked out of the memory cache 
are being overly large, being of unknown size, or Squid needing more room 
in the memory cache. In those cases small objects which have not been 
needed in a long while can get pushed out to disk.


 NP: if you have one of the officially supported Squids (2.7, 3.1+) they 
will "promote" the small items back off disk into memory when they 
become needed again (if ever).
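For reference, these are the directives involved; the values and cache 
path here are illustrative, not recommendations:

# largest object kept in the memory cache
maximum_object_size_in_memory 64 KB
# total size of the memory cache
cache_mem 256 MB
# largest object kept on disk
maximum_object_size 512 MB
cache_dir aufs /var/spool/squid 10000 16 256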


You also have the option of the COSS (squid-2.7) or Rock (squid-3.2+) 
storage types for small objects. These act like disk-backed memcache 
storage: as fast as the in-memory cache for recent hot items, they can be 
larger, and no objects are lost on crashes or restarts.
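A sketch of a Rock store for squid-3.2+ (path assumed; Rock is built for 
small objects, so a small max-size cap fits its purpose):

cache_dir rock /var/spool/squid-rock 1000 max-size=32768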


Amos


Re: [squid-users] URL rewrite on Squid 3.1.6 as ReverseProxy for Exchange OWA

2012-02-24 Thread Amos Jeffries

On 24/02/2012 11:52 a.m., Roman Gelfand wrote:

Hi Amos,

I could be wrong, but I understood from your several posts that this
type of configuration is not recommended (either due to security
issues or performance, I don't remember exactly).

Is that right?


*redirect* (using deny_info or a redirector program which does real 3XX 
status redirects) is fine and a built-in feature of HTTP, since what it 
does is inform the client browser/agent to change the URI being 
requested, keeping any state between the server and client synchronized. 
Security, behaviour expectations and working state are all kept predictable.
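A minimal sketch of such a redirect with deny_info (hypothetical domains; 
the "301:" status prefix needs a recent Squid, older versions take a bare 
URL and reply with a 302):

acl old_host dstdomain old.example.com
deny_info 301:http://new.example.com/ old_host
http_access deny old_host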


*rewrite* (using a redirector/rewriter to alter the URL in transit) is 
not recommended, on the grounds of being complex with many breakages 
caused by the client browser/agent being unaware of the URL change. 
Rewrite is at heart a cross-site/XSS attack, in the same way that an 
intercept proxy is a MITM attack. Intending for it to happen does not 
change the side effects or lessen the risks.


Amos



[squid-users] mix: memory and hdd

2012-02-24 Thread Werner Kapferer
Hi!

I'd like to split my cache in 2 parts.
Small files (up to a specified size) should be stored ONLY in memory, bigger 
ones ONLY on the hdd.

Is this set by default (by maximum_object_size_in_memory), or will these files 
be saved on the hdd additionally?

Thanks!
Werner


Re: [squid-users] Squid reverse proxy configuration

2012-02-24 Thread Amos Jeffries

On 24/02/2012 11:35 a.m., Alex Barakov wrote:

Hello.

I want to set up squid as a reverse proxy server for my webserver.

I am using squid with the following config:
http_port 80 accel defaultsite=domain.com vhost
cache_peer IPADDRESS parent 80 0 no-query originserver name=myAccel
acl our_sites dstdomain domain.com
http_access allow our_sites
cache_peer_access myAccel allow our_sites
cache_peer_access myAccel deny all

I am getting the following error:

Service Unavailable

The service is temporarily unavailable. Please try again later.


The configuration is correct for http://domain.com/. (You work for 
Dotster Incorporated?)


What URL domain name were you testing it with?

Probably you were using the www.* form of the domain. Your dstdomain ACL 
is written not to accept any sub-domains. Maybe you wanted ".domain.com" 
to accept all sub-domains, or to list the specific valid ones like 
"www.domain.com" alongside the main domain.


Amos


Re: [squid-users] session helper issue

2012-02-24 Thread Amos Jeffries

On 24/02/2012 11:26 a.m., Andrew Beverley wrote:

On Fri, 2012-02-24 at 02:06 +0400, Vyacheslav Maliev wrote:

Hi!
I configured my splash page as described here:
http://wiki.squid-cache.org/ConfigExamples/Portal/Splash. I chose the
second example (Active Mode) and adapted it to my squid 3.0 version
(there is the squid_session helper instead of ext_session_acl). When I
go to a matched URL the session does not start. Only after I go to a
matched URL and do squid -k reconfigure does everything work.

Is there any method to renew squid automatically?

You are possibly experiencing a bug with the session helper that means
it does not synchronise correctly between squid workers. The patch
discussed in the following thread fixes that. Regardless of whether this
is your problem, you should really use a version that includes the patch
anyway, but I'm not sure if it was back-ported to any 3.0 versions.


It was not. The 3.0 series is no longer maintained for anything beyond 
major security vulnerabilities.


Vyacheslav, please consider an upgrade. At worst, you can build the 3.1.19 
sources and copy the squid_session helper built there into your 3.0 system. 
It should be a drop-in replacement.
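Roughly like this; the configure flag and helper path are assumptions 
based on a typical 3.1 source tree, and the destination should be 
wherever your 3.0 install keeps its helpers:

tar xzf squid-3.1.19.tar.gz
cd squid-3.1.19
./configure --enable-external-acl-helpers="session"
make
cp helpers/external_acl/session/squid_session /usr/local/squid/libexec/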

Amos


Re: [squid-users] Page seems to load for ever

2012-02-24 Thread Amos Jeffries

On 24/02/2012 7:43 a.m., karj wrote:

The problem was the html size of the first page (more than 65 KB).
We don't have any size limits in squid.conf (my question still remains)

Hope that this will help others...


You are right about the size being a problem.

The server at the site is sending a front page object which is about 
100 bytes larger than the size indicated by its Content-Length header. 
Squid will at minimum be cutting off those extra bytes, and may be 
confusing the browser if it waits for a full page to arrive (or a 
timeout) before rendering it.
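You can check for a mismatch like that from the command line 
(hypothetical URL):

curl -s -D headers.txt -o body.bin http://www.example.com/
grep -i content-length headers.txt
wc -c body.bin

If the byte count from wc differs from the Content-Length value, the 
server is at fault.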


Amos