Re: [squid-users] How to configure Squid can improve the performance ?

2018-04-10 Thread Amos Jeffries
On 11/04/18 13:48, 赵 俊 wrote:
> Thanks for reading my Email.
> 
> I have two questions:
> 
> My first question is: what are the maximum concurrent connections and
> the maximum number of new connections Squid supports?
> 


There are 64K ports on an IP address. Your Squid and machine also have a
file descriptor (FDs) limit; it is 64K by default but may be smaller (eg
on Windows it is 256). The smaller of those two numbers is the upper
limit Squid can use.

The ports number is shared between client connections, server
connections and both types of ICAP connections.

The FDs number is shared by the same things as the ports number, as well
as by disk files in use.


You can maybe increase FDs with the squid.conf max_filedescriptors
directive, or if that does not work rebuild Squid with the
--with-filedescriptors= build option. Use the ulimit tool on non-Windows
machines to increase the OS limit before starting Squid.
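
For example (a minimal sketch; the numbers are illustrative and must fit
your hardware and OS limits):

  # shell, before starting Squid (non-Windows):
  ulimit -n 65535

  # squid.conf:
  max_filedescriptors 65535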



> The second question is: how can I configure Squid to improve the maximum
> concurrent connections, maximum new connections, and overall performance?
> 

If available FDs are your limit, you can maybe increase them with the
squid.conf max_filedescriptors option. If that does not work, rebuild
Squid with the --with-filedescriptors= build option. Use the ulimit tool
on non-Windows machines to increase the OS limit before starting Squid.


> I am using version 3.5.27.
> 
> My squid.conf is:
...
> 
> # And finally deny all other access to this proxy
> acl NCACHE method GET
> store_miss deny all

The "store_miss deny all" above will be preventing HTTP objects from
caching. That means every request will consume one extra server
connection and ICAP RESPMOD connection.
 Your Squid will need some amount of less connections if things are
caching. So you may want to remove this.
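
That is, you would simply delete these two lines from squid.conf:

  acl NCACHE method GET
  store_miss deny all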


> via off
> 
> # Squid normally listens to port 3128
> http_port 3128 
> https_port 192.168.XX.XXX:3129 intercept ssl-bump connection-auth=off
> generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
> cert=/usr/local/squid/ssl_cert/myCA.pem
> key=/usr/local/squid/ssl_cert/myCA.pem  options=NO_SSLv3,NO_SSLv2

NP: If cert= and key= are in the same file like this you do not have to
configure key=.

Also, for Squid-3.* add sslflags=NO_DEFAULT_CA to the above port line.
That will free up a lot of memory in OpenSSL for other things that may
need it.
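
Putting both notes together, the port line might look like this (a sketch
based on your posted config; shown wrapped here, but all one line in
squid.conf):

  https_port 192.168.XX.XXX:3129 intercept ssl-bump connection-auth=off
    generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
    cert=/usr/local/squid/ssl_cert/myCA.pem sslflags=NO_DEFAULT_CA
    options=NO_SSLv3,NO_SSLv2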


> 
> acl ssl_step1 at_step SslBump1
> acl ssl_step2 at_step SslBump2
> acl ssl_step3 at_step SslBump3
> 
> ssl_bump peek ssl_step1
> ssl_bump stare ssl_step2
> ssl_bump bump ssl_step3
> 
> sslcrtd_program /usr/local/squid/libexec/ssl_crtd -s
> /usr/local/squid/lib/ssl_db -M 4MB
> sslcrtd_children 8 startup=1 idle=1
> 

ssl_crtd is a little unusual for a helper in that it holds up the TLS
handshake, which is somewhat critical to do fast. So it is probably best
to use more than startup=1 to reduce Squid memory usage and delays.

As a general "rule of thumb" look at your running proxy and see how many
helpers it is needing to start for your normal traffic. Use that as the
startup= value.
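
For example, if the proxy normally runs around 4 helpers (a hypothetical
number; check your own running proxy):

  sslcrtd_children 8 startup=4 idle=1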



The below cache_dir, object_size, cache_mem, and cache_swap directives
are not useful while you have "store_miss deny all" preventing cache
storage from being used.

> #Uncomment and adjust the following to add a disk cache directory.
> cache_dir ufs /usr/local/squid/var/cache/squid 4096 16 256
> minimum_object_size 0 KB
> maximum_object_size 4096 KB
> maximum_object_size_in_memory 4096 KB
> 
> ipcache_size 1024 MB
> ipcache_low 90
> ipcache_high 95
> fqdncache_size 1024 MB
> 
> cache_mem 2048 MB
> cache_swap_low 90
> cache_swap_high 95
> 
> # Leave coredumps in the first cache dir
> coredump_dir /usr/local/squid/var/cache/squid
> 
> #icap
> icap_enable on
> icap_preview_enable on
> icap_preview_size 1024
> icap_send_client_ip on
> adaptation_meta X-Client-Port "%>p"
> icap_206_enable on
> icap_persistent_connections off

Disabling persistence on ICAP connections as above will slow Squid down,
since it has to repeat TCP handshakes *twice* for every single message
through the proxy (once for REQMOD, once for RESPMOD).
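
In other words, unless your ICAP server has known keep-alive problems the
faster setting is:

  icap_persistent_connections on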


> 
> icap_service service_req reqmod_precache 0 icap://192.168.XX.XXX:1344/echo
> icap_service service_res respmod_precache 1 icap://192.168.XX.XXX:1344/echo
> adaptation_access service_res allow all
> adaptation_access service_req allow all
> 

You can maybe improve ICAP connection use by tuning some traffic not to
use adaptation. For example, CONNECT messages are being SSL-Bump'ed, so
they are best not adapted.
For example:
  adaptation_access service_req deny CONNECT
  adaptation_access service_req allow all


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] How to configure Squid can improve the performance ?

2018-04-10 Thread 赵 俊
Thanks for reading my Email.

I have two questions:

My first question is: what are the maximum concurrent connections and the
maximum number of new connections Squid supports?

The second question is: how can I configure Squid to improve the maximum
concurrent connections, maximum new connections, and overall performance?

I am using version 3.5.27.

My squid.conf is:

acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) 
machines

acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

#
# Recommended minimum Access Permission configuration:
#
# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost

# And finally deny all other access to this proxy
acl NCACHE method GET
store_miss deny all
via off

# Squid normally listens to port 3128
http_port 3128
https_port 192.168.XX.XXX:3129 intercept ssl-bump connection-auth=off 
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB 
cert=/usr/local/squid/ssl_cert/myCA.pem key=/usr/local/squid/ssl_cert/myCA.pem  
options=NO_SSLv3,NO_SSLv2

acl ssl_step1 at_step SslBump1
acl ssl_step2 at_step SslBump2
acl ssl_step3 at_step SslBump3

ssl_bump peek ssl_step1
ssl_bump stare ssl_step2
ssl_bump bump ssl_step3

sslcrtd_program /usr/local/squid/libexec/ssl_crtd -s 
/usr/local/squid/lib/ssl_db -M 4MB
sslcrtd_children 8 startup=1 idle=1

#Uncomment and adjust the following to add a disk cache directory.
cache_dir ufs /usr/local/squid/var/cache/squid 4096 16 256
minimum_object_size 0 KB
maximum_object_size 4096 KB
maximum_object_size_in_memory 4096 KB

ipcache_size 1024 MB
ipcache_low 90
ipcache_high 95
fqdncache_size 1024 MB

cache_mem 2048 MB
cache_swap_low 90
cache_swap_high 95

# Leave coredumps in the first cache dir
coredump_dir /usr/local/squid/var/cache/squid

#icap
icap_enable on
icap_preview_enable on
icap_preview_size 1024
icap_send_client_ip on
adaptation_meta X-Client-Port "%>p"
icap_206_enable on
icap_persistent_connections off

icap_service service_req reqmod_precache 0 icap://192.168.XX.XXX:1344/echo
icap_service service_res respmod_precache 1 icap://192.168.XX.XXX:1344/echo
adaptation_access service_res allow all
adaptation_access service_req allow all

#
# Add any of your own refresh_pattern entries above these.
#
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320





___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid is very slow after moving to production environment

2018-04-10 Thread Amos Jeffries
On 11/04/18 07:10, Roberto Carna wrote:
> Thanks to everybody...
> 
> I've reviewed what you told me. I've executed "squid -k parse" and
> everything is OK, and I've restarted the entire Squid server.
> 
> When I use the server with IP#1, it works OK, it is fast... but when I
> change its IP to IP#2 (the IP of the current Squid that I want to
> replace), navigation is very very slow, with just 20/30 concurrent
> users.
> 
> So I think the Squid configuration parameters are OK, because with
> IP#1 the proxy runs perfectly.

Then the issue is probably not with Squid. Something outside Squid is
causing the issue - either the VM itself, or the network setup.

> 
> Why would just an IP change affect the performance of web browsing?

We do not know the answer to that. None of the info so far shows any
sign of such a problem. Something you have not thought to provide yet
contains the clues.

Perhaps taking a look through the available logs (both Squid and
others) might turn up better information and ideas.


> Maybe because of something related to Dansguardian ???
> 

Maybe yes, maybe no. see above.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Secure Web Proxy Stress Testing

2018-04-10 Thread Panagiotis Bariamis
Thank you for the clarification.

On Tue, Apr 10, 2018, 21:11 Alex Rousskov 
wrote:

> On 04/10/2018 11:24 AM, Panagiotis Bariamis wrote:
> > Thank you for your answer, but as far as I can understand this setup is
> > for a regular proxy that just proxies the HTTPS protocol with HTTP CONNECT
> > headers (unencrypted traffic between client and proxy on the HTTP CONNECT
> > request).
>
> Your understanding is incorrect: All the traffic between the client and
> the proxy is encrypted in that test.
>
>
> > Secure web proxy encrypts traffic between client and proxy
>
> Yes, and that is what the Polygraph workload sketch tests. The Squid
> port for that workload is an https_port, not an http_port.
>
>
> > meaning that you have an http connect request inside a tls tunnel.
>
> Yes, if the origin server is talking TLS. Just like a regular HTTP
> proxy, an HTTPS proxy can proxy both plain and encrypted origin server
> traffic. The latter requires a CONNECT tunnel. Whether the origin server
> talks HTTP or HTTPS is a separate variable/issue, unrelated to whether
> the client-proxy communication itself is secured.
>
> Polygraph supports HTTPS proxies and HTTPS servers. IIRC, Polygraph v5
> supports the combination of the two: TLS inside TLS (because HTTP/2
> support essentially required that). I am not sure about Polygraph v4.
> The workload I sketched uses HTTPS proxies and plain origin servers.
>
>
> HTH,
>
> Alex.
>
>
>
> > On Tue, Apr 10, 2018, 17:22 Alex Rousskov wrote:
> >
> > On 04/10/2018 06:31 AM, Panagiotis Bariamis wrote:
> > > Is there any stress testing tool to test with a load of 1k to 5k
> > > simultaneous connections ?
> >
> > Web Polygraph (www.web-polygraph.org) supports HTTPS proxies and can
> > create thousands of concurrent connections. Below is a PGL
> configuration
> > snippet from a recent HTTPS proxy test in our lab.
> >
> > HTH,
> >
> > Alex.
> >
> >
> > SslWrap sslWrap = {
> > ssl_config_file = "openssl.conf";
> > root_certificate = "CA-priv+pub.pem";
> > session_resumption = 70%;
> > session_cache = 100;
> > };
> >
> > Server S = {
> > // no ssl_wraps here unless you want to test TLS inside TLS
> > ...
> > };
> >
> > Proxy P = {
> > addresses = [ ... HTTPS proxy address ... ];
> > ssl_wraps = [ sslWrap ]; // this is an HTTPS proxy
> > };
> >
> > Robot R = {
> > ssl_wraps = [ sslWrap ]; // an HTTPS-capable client
> >
> > origins = S.addresses;
> > http_proxies = P.addresses;
> >
> > ...
> > };
> >
> > use(S,P,R);
> >
>
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid is very slow after moving to production environment

2018-04-10 Thread Roberto Carna
Thanks to everybody...

I've reviewed what you told me. I've executed "squid -k parse" and
everything is OK, and I've restarted the entire Squid server.

When I use the server with IP#1, it works OK, it is fast... but when I
change its IP to IP#2 (the IP of the current Squid that I want to
replace), navigation is very very slow, with just 20/30 concurrent
users.

So I think the Squid configuration parameters are OK, because with
IP#1 the proxy runs perfectly.

Why would just an IP change affect the performance of web browsing?
Maybe because of something related to Dansguardian ???

Thanks and regards !!!

2018-04-10 15:32 GMT-03:00 joseph :
> hi, also lower maximum_object_size_in_memory from 4096 KB to
> 1 MB; anything higher is not wise
>
>
>
> -
> **
> * Crash to the future  
> **
> --
> Sent from: 
> http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-Users-f1019091.html
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid is very slow after moving to production environment

2018-04-10 Thread joseph
hi, also lower maximum_object_size_in_memory from 4096 KB to
1 MB; anything higher is not wise



-
** 
* Crash to the future  
**
--
Sent from: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-Users-f1019091.html
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Secure Web Proxy Stress Testing

2018-04-10 Thread Alex Rousskov
On 04/10/2018 11:24 AM, Panagiotis Bariamis wrote:
> Thank you for your answer, but as far as I can understand this setup is
> for a regular proxy that just proxies the HTTPS protocol with HTTP CONNECT
> headers (unencrypted traffic between client and proxy on the HTTP CONNECT
> request).

Your understanding is incorrect: All the traffic between the client and
the proxy is encrypted in that test.


> Secure web proxy encrypts traffic between client and proxy

Yes, and that is what the Polygraph workload sketch tests. The Squid
port for that workload is an https_port, not an http_port.


> meaning that you have an http connect request inside a tls tunnel. 

Yes, if the origin server is talking TLS. Just like a regular HTTP
proxy, an HTTPS proxy can proxy both plain and encrypted origin server
traffic. The latter requires a CONNECT tunnel. Whether the origin server
talks HTTP or HTTPS is a separate variable/issue, unrelated to whether
the client-proxy communication itself is secured.

Polygraph supports HTTPS proxies and HTTPS servers. IIRC, Polygraph v5
supports the combination of the two: TLS inside TLS (because HTTP/2
support essentially required that). I am not sure about Polygraph v4.
The workload I sketched uses HTTPS proxies and plain origin servers.


HTH,

Alex.



> On Tue, Apr 10, 2018, 17:22 Alex Rousskov wrote:
> 
> On 04/10/2018 06:31 AM, Panagiotis Bariamis wrote:
> > Is there any stress testing tool to test with a load of 1k to 5k
> > simultaneous connections ?
> 
> Web Polygraph (www.web-polygraph.org) supports HTTPS proxies and can
> create thousands of concurrent connections. Below is a PGL configuration
> snippet from a recent HTTPS proxy test in our lab.
> 
> HTH,
> 
> Alex.
> 
> 
> SslWrap sslWrap = {
>     ssl_config_file = "openssl.conf";
>     root_certificate = "CA-priv+pub.pem";
>     session_resumption = 70%;
>     session_cache = 100;
> };
> 
> Server S = {
>     // no ssl_wraps here unless you want to test TLS inside TLS
>     ...
> };
> 
> Proxy P = {
>     addresses = [ ... HTTPS proxy address ... ];
>     ssl_wraps = [ sslWrap ]; // this is an HTTPS proxy
> };
> 
> Robot R = {
>     ssl_wraps = [ sslWrap ]; // an HTTPS-capable client
> 
>     origins = S.addresses;
>     http_proxies = P.addresses;
> 
>     ...
> };
> 
> use(S,P,R);
> 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Secure Web Proxy Stress Testing

2018-04-10 Thread Panagiotis Bariamis
Thank you for your answer, but as far as I can understand this setup is for
a regular proxy that just proxies the HTTPS protocol with HTTP CONNECT headers
(unencrypted traffic between client and proxy on the HTTP CONNECT request).
A secure web proxy encrypts traffic between client and proxy, meaning that you
have an HTTP CONNECT request inside a TLS tunnel.

On Tue, Apr 10, 2018, 17:22 Alex Rousskov 
wrote:

> On 04/10/2018 06:31 AM, Panagiotis Bariamis wrote:
> > Is there any stress testing tool to test with a load of 1k to 5k
> > simultaneous connections ?
>
> Web Polygraph (www.web-polygraph.org) supports HTTPS proxies and can
> create thousands of concurrent connections. Below is a PGL configuration
> snippet from a recent HTTPS proxy test in our lab.
>
> HTH,
>
> Alex.
>
>
> SslWrap sslWrap = {
> ssl_config_file = "openssl.conf";
> root_certificate = "CA-priv+pub.pem";
> session_resumption = 70%;
> session_cache = 100;
> };
>
> Server S = {
> // no ssl_wraps here unless you want to test TLS inside TLS
> ...
> };
>
> Proxy P = {
> addresses = [ ... HTTPS proxy address ... ];
> ssl_wraps = [ sslWrap ]; // this is an HTTPS proxy
> };
>
> Robot R = {
> ssl_wraps = [ sslWrap ]; // an HTTPS-capable client
>
> origins = S.addresses;
> http_proxies = P.addresses;
>
> ...
> };
>
> use(S,P,R);
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Ideas for better caching these popular urls

2018-04-10 Thread Eliezer Croitoru
Hey Omid,

From what I remember, the basic math for verifying that a specific set of
numbers has some kind of pattern requires at least 3 items.
But in the cryptography world it's another story.
I have not researched PlayStation downloads and probably won't do that.
Others might offer some help, but you must understand what you are trying to
predict in these URLs and downloads.
From what I have seen it seems that this CDN "llnwd.net" is very cache
friendly, but you need to know how to handle their traffic.
They don't use any form of ETag headers, but they do provide some pieces of
information in the URLs that can identify something about them.
If they use a ticketing system, as a couple of other CDN providers do, you
would need to know the "ID" of the URL before it is downloaded.
You will need more than just the URLs; you also need the response headers
for them.
I might be able to write an ICAP service that logs request and response
headers and helps cache admins improve their efficiency, but this can
take a while.

All The Bests,
Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



-Original Message-
From: squid-users  On Behalf Of Omid 
Kosari
Sent: Tuesday, April 10, 2018 14:20
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Ideas for better caching these popular urls

Thanks for the reply.

I assume the community, at different scales from small ISPs to large ISPs,
may have common domains like those I highlighted, so they may have the same
issue as mine. So I ignored the common parts.

One of the problems with redbot is that it shows a timeout for big files like

http://gs2.ww.prod.dl.playstation.net/gs2/appkgo/prod/CUSA00900_00/2/f_2df8e321f37e2f5ea3930f6af4e9571144916013ee38893d881890b454b5fed6/f/UP9000-CUSA00900_00-BLOODBORNE00_4.pkg?downloadId=0187=018700e2291bda0f868f=us=ob=aa2cd9c8d1f359feb843ae4a6c99cfcdb6569ca9cc60ad6d28b6f8de3b5fac23=0=23.57.69.81=0027

http://gs2.ww.prod.dl.playstation.net/gs2/ppkgo/prod/CUSA07557_00/25/f_053bab8c9dec6fbc68a0bd9fc58793285ae350ccf7dadacb35b5840228a9d802/f/EP4001-CUSA07557_00-F12017EMASTER000-A0113-V0100_0.pkg?downloadId=0059=005900e22977e62f91a2=ob=0183=8.248.5.254=0032


I assume anyone with a few thousand users may have the same problem, and
maybe they would like to share, for example, their refresh_pattern or StoreID
rules to solve my problem. You know better than me: PlayStation is everywhere ;)

Here is part of the storeid_db file:
^http:\/\/.*\.sonycoment\.loris-e\.llnwd\.net\/(.*?\.pkg)
http://playstation.net.squidinternal/$1
^http:\/\/.*\.playstation\.net\/(.*?\.pkg)
http://playstation.net.squidinternal/$1

Almost all of the huge PlayStation downloads use a 206 status code, but the
file is downloaded from start to end; if I remember correctly, in this
situation Squid will correctly cache the file.



--
Sent from: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-Users-f1019091.html
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Proxy through another proxy possible?

2018-04-10 Thread Amos Jeffries
On 09/04/18 01:06, xpro wrote:
> Thank you. I did get it to work with snippet below
> 
> cache_peer myproxy.com parent 3114 0 no-query default
> never_direct allow all
> 
> 
> can you tell me how I can assign different ports to different outgoing
> proxies?
> 

What do you mean by assign ports?

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid is very slow after moving to production environment

2018-04-10 Thread Eliezer Croitoru
Well, about cloned VMs acting slower than the original...
I have tested it more than once; it's not true, it's a myth.
The only issue I have seen with such cloned systems (I have a very large
cluster of cloned squid instances) is when the admin over-commits the
physical machine.
There is another thing in the hypervisor world that some admins just do not
take into account:
- Squid can heavily load a specific CPU.
- You cannot expect the virtualization platform to "create" cycles that do
not exist.
- You cannot expect the virtualization platform to... make the disks or the
network perform better than they are able to.

I have a fleet of more than 10 hypervisors which run more than 90 VMs, and
of those more than 20 percent have Squid-Cache and other services on them.
The only time I had issues was when one of the VMs, running a java-based
service, took a hit of more than 50k requests per second and brutally
took/claimed more CPU and RAM from the other VMs, and... all the other VMs
just crashed with a kernel panic while this specific VM "controlled" or
"dominated" the hypervisor resources.

All The Bests,
Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



-Original Message-
From: squid-users  On Behalf Of Amos 
Jeffries
Sent: Tuesday, April 10, 2018 13:09
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid is very slow after moving to production 
environment

On 10/04/18 07:58, Roberto Carna wrote:
> Dear Antony, both proxies are virtual machines in the same DMZ... they
> use the same DNS, the same firewall, the same Internet link, the same
> IP but different MAC Address.


FYI: there were issues some years back with cloned VMs operating VERY
much slower than the original image they were cloned from, for no
apparent reason.

If you are making production as a clone of the testing VM you may want to
try a non-clone to see if the problem disappears.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Proxy through another proxy possible?

2018-04-10 Thread Eliezer Croitoru
Hey,

If the snippet works for you then you should be able to use a simple ACL
that will pass all traffic of a certain http_port to a specific proxy.
However, depending on the scenario, there are a couple of things to consider
in terms of the performance of this system.
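
A minimal sketch of that idea (hostnames, ports and peer names here are
illustrative, not from the original mail):

  http_port 3001
  http_port 3002
  acl port1 myportname 3001
  acl port2 myportname 3002
  cache_peer proxyA.example.com parent 3114 0 no-query name=peerA
  cache_peer proxyB.example.com parent 3115 0 no-query name=peerB
  cache_peer_access peerA allow port1
  cache_peer_access peerA deny all
  cache_peer_access peerB allow port2
  cache_peer_access peerB deny all
  never_direct allow all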

All The Bests,
Eliezer


Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


-Original Message-
From: squid-users  On Behalf Of xpro
Sent: Sunday, April 8, 2018 16:07
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Proxy through another proxy possible?

Thank you. I did get it to work with snippet below

cache_peer myproxy.com parent 3114 0 no-query default
never_direct allow all


can you tell me how I can assign different ports to different outgoing 
proxies?


On 04/07/2018 02:30 AM, Amos Jeffries wrote:
> On 07/04/18 18:02, xpro wrote:
>> Would it be done like below?
>>
>> http_port 3001
>> acl port1 myport 3001
>> tcp_outgoing_address myotherproxy.com:3114 port1
>>
>>
>> I want anyone connecting to my proxy using port 3001, to use the the
>> proxy server on myotherproxy.com:3114
> No. tcp_outgoing_address is the IP your Squid uses on its outgoing TCP
> connections.
>
> cache_peer is for configuring destination details about any specific
> peer (upstream server or proxy) to relay messages through.
>   see 
>
> Amos
>
>
>>
>> On 04/07/2018 01:05 AM, Amos Jeffries wrote:
>>> On 07/04/18 11:34, xpro wrote:
 I'm not sure if Squid is the right tool for this. I'm trying to achieve
 the following.

 I would have access to some exclusive proxies, but I would like for a
 limited amount of people to use these proxies without getting the
 original proxy IP. I want them to go through my proxy server and then my
 proxy server would forward them to the proxy I use.


 Would this be possible with Squid?
>>> Of course. I'm not exactly clear on what you mean by original or
>>> exclusive proxies, but HTTP and Squid are certainly able to chain.
>>>
>>> Amos
>>> ___
>>> squid-users mailing list
>>> squid-users@lists.squid-cache.org
>>> http://lists.squid-cache.org/listinfo/squid-users
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org
>> http://lists.squid-cache.org/listinfo/squid-users
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid is very slow after moving to production environment

2018-04-10 Thread Matus UHLAR - fantomas

On 09.04.18 16:53, Roberto Carna wrote:

Dear Periko, so here is what you asked me for:

CPU x 8
RAM x 12 GB
HD x 50 GB

And this is /etc/squid/squid.conf file:



cache_mem 4096 MB


What is squid's real memory usage?
It can be much more than 4G; 4G is only the cache, but squid also uses
buffers and indexes.


memory_replacement_policy lru


I would use heap GDSF here for a better hit ratio, but this should not be
a problem


cache_dir aufs /var/spool/squid 25000 16 256


What's squid's CPU usage?
Here can be a problem: an aufs cache_dir can only be used by one process.
Maybe you should try the rock store for cache_dir


fqdncache_size 4096


I don't see any reason to specify this. Too small an FQDN cache can result
in repeated DNS fetches.


acl manager proto cache_object


Doesn't squid complain here? The "manager" acl is predefined since 3.4 IIRC.
Are you sure squid uses this config file?


auth_param basic program /usr/lib/squid/squid_ldap_auth -b
"dc=company,dc=com,dc=ar" -f "uid=%s" -h ldap.company.com.ar -v 3
auth_param basic children 5


Aren't there too few children? It can result in waiting for an
authentication result before the client is allowed.
What does the squid log say?


acl QUERY urlpath_regex cgi-bin \? \.css \.asp \.aspx
cache deny QUERY


This has been useless for a long time. urlpath_regex causes squid to eat a
lot of CPU. Disable this.


acl gedo dstdomain .gde.gob.ar
always_direct allow gedo


you have no cache peers defined. This is therefore useless.


I've just changed the new proxy to the test environment and it works very
well again... I am lost.


see the limits above. Some of them may be low for a production system.
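
A sketch of the adjustments suggested above (values are illustrative; check
with "squid -k parse" before deploying):

  memory_replacement_policy heap GDSF
  cache_dir rock /var/spool/squid 25000
  auth_param basic children 20
  # and delete these lines:
  #   acl QUERY urlpath_regex cgi-bin \? \.css \.asp \.aspx
  #   cache deny QUERY
  #   always_direct allow gedo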

--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
WinError #9: Out of error messages.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid ipcache and DNS TTL smaller than 60 seconds

2018-04-10 Thread Alex Rousskov
On 04/10/2018 09:19 AM, Amos Jeffries wrote:

> Consider, what would you expect to happen when DNS RRset changes
> _multiple_ times within the same TTL that TCP uses for a SYN-ACK timeout
> and retry?

I would expect that nothing special happens to a good implementation:
The TCP client would not notice the TTL expiration and RRset changes
while dealing with packets on a single TCP connection.

RRset TTL does _not_ mean that the client of a DNS cache cannot use the
answer after the TTL expires. It means that the DNS cache itself should
not return a stale answer to its client after the TTL expires. There is
an architectural boundary between a DNS cache and a client of that DNS
cache. Squid implementation may violate that boundary, but that Squid
problem is not a good (long-term) justification for violating server TTLs.

Connection reuse problems that you have described could be a good
justification for a default minimum TTL of 60 seconds. IMHO, it is not a
valid long-term justification for violating server TTLs when the admin
wants to honor them.


Cheers,

Alex.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid is very slow after moving to production environment

2018-04-10 Thread Matus UHLAR - fantomas

On 10/04/18 07:58, Roberto Carna wrote:

Dear Antony, both proxies are virtual machines in the same DMZ... they
use the same DNS, the same firewall, the same Internet link, the same
IP but different MAC Address.


On 10.04.18 22:09, Amos Jeffries wrote:

FYI: there were issues some years back with cloned VMs operating VERY
much slower than the original image they were cloned from, for no
apparent reason.

If you are making production as a clone of the testing VM you may want to
try a non-clone to see if the problem disappears.


maybe using "linked clones" causes the problem.
--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
(R)etry, (A)bort, (C)ancer
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] https proxy authentication

2018-04-10 Thread Amos Jeffries
On 11/04/18 02:07, Adam Weremczuk wrote:
> Hi Amos,
> 
> 
> On 30/03/18 02:44, Amos Jeffries wrote:
>> So, the big question is why you have this setup of Apache being a
>> reverse-proxy for a Squid forward-proxy?
>>
>> Forward-proxy are supposed to be between clients and reverse-proxies or
>> origins. Not the other way around.
> This is a set up I inherited with not much being documented.
> I think the purpose was to split the functionality as below:
> - direct unauthenticated proxy for every day usage ("proxy")
> - hopping through Apache which provides http authentication for sporadic
> testing use only ("aproxy")

You may want to double-check that and redesign how the proxy is used.
Squid can easily do things like receive traffic on multiple IP:port and
selectively perform authentication only for traffic arriving in one.


>> What are you actually trying to achieve here?
> The big picture is we need to test some code against various proxy
> scenarios (http, https, authenticated, unauthenticated).
> ATM we only have http authentication.
> I would imagine real live proxy setups use encrypted https for
> authentication more often than plain text http.
> Am I correct with my assumption?

No. Actually the preferred HTTP authentication schemes do not send any
confidential things in-channel over the network, so do not require HTTPS
protections.

The Basic and Digest auth schemes which could have benefited normally
have to be sent unprotected over TCP instead.


( Ironically that sad situation is due to the Browser developers behind
a certain "TLS/HTTPS everywhere" campaign refusing for _decades_ to
implement TLS to proxies. Directly counter to our campaign to get them
to use TLS where it is actually most needed. )


> 
> If that's the case then my goal is to get https authentication working
> as well.
> If there is no way I can easily get it to work with the existing config
> I guess I can set up a new Apache hop.
> Authenticating over https only and called e.g. "bproxy".
> Would that make most sense?
> 
> Thanks
> Adam


I think what you are wanting is something like below. Then you just need
your testing to send traffic to the right port:

 # reverse-proxy HTTP
 http_port 80 accel
 acl port80 myportname 80

 # forward-proxy HTTP
 http_port 3128
 acl port3128 myportname 3128

 # reverse-proxy HTTPS
 https_port 443 accel cert=...
 acl port443 myportname 443

 # forward-proxy TLS-explicit
 https_port 8443
 acl port8443 myportname 8443

 auth_param ... your auth setup
 acl auth proxy_auth REQUIRED

 acl noauth ... something to determine non-auth testing.

 # ... http_access rules testing things that do not require auth

 # emulate the "deny all" ending the non-auth checks
 http_access deny noauth

 # requires auth ...
 http_access deny !auth

 # ... rules testing things that require auth credentials.


Depending on what you want your test proxy behaviour to be you can
wrangle up some very cool behaviours with the any-of and all-of ACL
types in recent versions, or various lists of ACLs following one of the
port name ones.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid ipcache and DNS TTL smaller than 60 seconds

2018-04-10 Thread Amos Jeffries

On 11/04/18 02:14, Peter Viskup wrote:
> Squid uses a TTL of 60 seconds for DNS resource records with a TTL smaller
> than that value.
> 
> Some sites can have the DNS TTL set to a lower value due to a high
> availability design (DNS load balancer).
> 
> In RFCs [1][2][3] it is explained that the received TTL can be lowered to
> the upper-bound TTL value of the DNS cache, but not increased.
> 
> Is it possible to change that 60 seconds default somewhere in the
> configuration? Was the 60 seconds default chosen according to some reference?
> 



Please note that Best Practice for DNS records is to use *24 hour* TTLs
as the minimum. Shorter times are provided to allow for clean server
migrations, not for load balancing. RRset rotation is for DNS load
balancing, is enabled in most resolvers by default and does not require
short TTLs to operate. It is also compatible with the behaviour of load
balancing mechanisms in every protocol from TCP itself up the stack (ie
they are designed to account for rotation, not for widespread abusive TTLs).


Since you ask;

One reason Squid sets a minimum is that extremely short TTLs in DNS
conflicts directly with both HTTP persistence mechanisms and the load
balancing performed by Squid itself. The default ensures that for any
given server IP Squid can re-use persistent connections to it for ~60
seconds.
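
(Aside: per the squid.conf documentation, negative_dns_ttl also sets the
lower cache limit on positive DNS lookups. An admin who really wants to
honor shorter TTLs could lower that floor, with the caveats below in mind:

  negative_dns_ttl 10 seconds
)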

NP: These services are actually *worsening* their service times. Squid
and numerous other middleware now have to ignore already set up and
perfectly usable connections in order to perform entire TCP (and TLS)
handshakes all over again for the changed IPs.


Another reason (which no longer applies) was that Squid used to base each
new retry attempt on a new DNS lookup. If the RRset changed on every
retry it could end up trying the same IP from a large set N times in a
row, and failing when a different IP from the same RRset would be fine.
Current Squid does a single lookup and only retries the IPs found there
(think about what that means for TTL). This was explicitly to work around
and counter the breakages caused by those servers you mention doing
short TTLs.

Consider, what would you expect to happen when DNS RRset changes
_multiple_ times within the same TTL that TCP uses for a SYN-ACK timeout
and retry?



> [1] https://tools.ietf.org/html/rfc2181#section-8
> 
> [2] https://tools.ietf.org/html/rfc1035#section-3.2.1
> 
> [3] https://tools.ietf.org/html/rfc7719#section-4
> 

Which states:
 "Some servers are known to ignore the TTL on some RRsets (such as when
the authoritative data has a very short TTL)".

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Secure Web Proxy Stress Testing

2018-04-10 Thread Alex Rousskov
On 04/10/2018 06:31 AM, Panagiotis Bariamis wrote:
> Is there any stress testing tool to test with a load of 1k to 5k
> simultaneous connections ?

Web Polygraph (www.web-polygraph.org) supports HTTPS proxies and can
create thousands of concurrent connections. Below is a PGL configuration
snippet from a recent HTTPS proxy test in our lab.

HTH,

Alex.


SslWrap sslWrap = {
ssl_config_file = "openssl.conf";
root_certificate = "CA-priv+pub.pem";
session_resumption = 70%;
session_cache = 100;
};

Server S = {
// no ssl_wraps here unless you want to test TLS inside TLS
...
};

Proxy P = {
addresses = [ ... HTTPS proxy address ... ];
ssl_wraps = [ sslWrap ]; // this is an HTTPS proxy
};

Robot R = {
ssl_wraps = [ sslWrap ]; // an HTTPS-capable client

origins = S.addresses;
http_proxies = P.addresses;

...
};

use(S,P,R);
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid ipcache and DNS TTL smaller than 60 seconds

2018-04-10 Thread Peter Viskup
Squid uses a TTL of 60 seconds for DNS resource records with a TTL smaller
than that value.

Some sites can have the DNS TTL set to a lower value due to a high
availability design (DNS load balancer).

In RFCs [1][2][3] it is explained that the received TTL can be lowered to
the upper-bound TTL value of the DNS cache, but not increased.

Is it possible to change that 60 seconds default somewhere in the
configuration? Was the 60 seconds default chosen according to some reference?

[1] https://tools.ietf.org/html/rfc2181#section-8

[2] https://tools.ietf.org/html/rfc1035#section-3.2.1

[3] https://tools.ietf.org/html/rfc7719#section-4


Peter
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] https proxy authentication

2018-04-10 Thread Adam Weremczuk

Hi Amos,


On 30/03/18 02:44, Amos Jeffries wrote:

So, the big question is why you have this setup of Apache being a
reverse-proxy for a Squid forward-proxy?

Forward-proxy are supposed to be between clients and reverse-proxies or
origins. Not the other way around.

This is a set up I inherited with not much being documented.
I think the purpose was to split the functionality as below:
- direct unauthenticated proxy for every day usage ("proxy")
- hopping through Apache which provides http authentication for sporadic 
testing use only ("aproxy")

What are you actually trying to achieve here?
The big picture is we need to test some code against various proxy 
scenarios (http, https, authenticated, unauthenticated).

ATM we only have http authentication.
I would imagine real live proxy setups use encrypted https for 
authentication more often than plain text http.

Am I correct with my assumption?

If that's the case then my goal is to get https authentication working 
as well.
If there is no way I can easily get it to work with the existing config 
I guess I can set up a new Apache hop.

Authenticating over https only and called e.g. "bproxy".
Would that make most sense?

Thanks
Adam
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Ideas for better caching these popular urls

2018-04-10 Thread Omid Kosari
Thanks for the reply.

I assume the community, at different scales from small ISPs to large ISPs,
may have common domains like those I highlighted, so they may have the same
issue as mine. So I ignored the common parts.

One of the problems with redbot is that it shows a timeout for big files like

http://gs2.ww.prod.dl.playstation.net/gs2/appkgo/prod/CUSA00900_00/2/f_2df8e321f37e2f5ea3930f6af4e9571144916013ee38893d881890b454b5fed6/f/UP9000-CUSA00900_00-BLOODBORNE00_4.pkg?downloadId=0187=018700e2291bda0f868f=us=ob=aa2cd9c8d1f359feb843ae4a6c99cfcdb6569ca9cc60ad6d28b6f8de3b5fac23=0=23.57.69.81=0027

http://gs2.ww.prod.dl.playstation.net/gs2/ppkgo/prod/CUSA07557_00/25/f_053bab8c9dec6fbc68a0bd9fc58793285ae350ccf7dadacb35b5840228a9d802/f/EP4001-CUSA07557_00-F12017EMASTER000-A0113-V0100_0.pkg?downloadId=0059=005900e22977e62f91a2=ob=0183=8.248.5.254=0032


I assume anyone with a few thousand users may have the same problem, and
maybe they would like to share, for example, their refresh_pattern or StoreID
rules to solve my problem. You know better than me: PlayStation is everywhere ;)

Here is part of the storeid_db file:
^http:\/\/.*\.sonycoment\.loris-e\.llnwd\.net\/(.*?\.pkg)
http://playstation.net.squidinternal/$1
^http:\/\/.*\.playstation\.net\/(.*?\.pkg)
http://playstation.net.squidinternal/$1

Almost all of the huge PlayStation downloads use a 206 status code, but the
file is downloaded from start to end; if I remember correctly, in this
situation Squid will correctly cache the file.
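
For reference, the usual squid.conf knobs for forcing whole-object fetches
on ranged requests (a sketch; it costs extra bandwidth when clients abort):

  # fetch the entire object even when the client sends a Range request
  range_offset_limit -1
  # never abort the server fetch when the client goes away
  quick_abort_min -1 KB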



--
Sent from: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-Users-f1019091.html
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid4 ICAP connection handling

2018-04-10 Thread Peter Viskup
On Mon, Apr 9, 2018 at 4:43 PM, Alex Rousskov <
rouss...@measurement-factory.com> wrote:
> On 04/09/2018 06:03 AM, Peter Viskup wrote:
>> Running Squid 4.0.23 the ICAP connections getting "frozen".
>>
>> proxy:~ $ netstat -ntpa| grep 40620
>> tcp   920144  0 127.0.0.1:40620 127.0.0.1:1344
>> ESTABLISHED 1165/(squid-1)
>> tcp0 2744857 127.0.0.1:1344  127.0.0.1:40620
>> ESTABLISHED 1211/esets_icap
>>
>> # after ICAP service restart
>> proxy:~ $ netstat -ntpa| grep 40620
>> tcp   920144  0 127.0.0.1:40620 127.0.0.1:1344
>> ESTABLISHED 1165/(squid-1)
>> tcp0 2744858 127.0.0.1:1344  127.0.0.1:40620
>> FIN_WAIT1   -
>>
>> # later on - squid still keep the connection open
>> proxy:~ $ netstat -ntpa| grep 40620
>> tcp   920144  0 127.0.0.1:40620 127.0.0.1:1344
>> ESTABLISHED 1165/(squid-1)
>
>> How the ICAP connections are handled?
>
> Is there an HTTP transaction associated with (e.g., waiting for) that
> stuck ICAP connection?

I have not found an HTTP transaction associated with it.

> Can you reproduce this problem with a single HTTP transaction? Or does
> it take many transactions to get Squid into this state? If you can
> easily reproduce, I recommend filing a bug report with an ALL,9 trace of
> the problematic transaction attached.

I can easily reproduce. Will search for the HTTP transaction, but not sure
whether I would be able to trace it.

More information in:
https://bugs.squid-cache.org/show_bug.cgi?id=4844
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Ideas for better caching these popular urls

2018-04-10 Thread Amos Jeffries
On 10/04/18 22:32, Omid Kosari wrote:
> Hello,
> 
> [image: squid-top-domains.JPG]
> 
> This image shows stats from one of my squid boxes. I have a question about
> the highlighted ones. I think they should have better hit ratios because
> they are popular among clients.

There are no URLs in that image. There are only wildcards for top-level
domains and a HIT % over the *entire* domain.

To figure out whether any of them should actually have better HIT ratios
you have to look at the actual URLs and see how much uniqueness exists
there.

Then for the _full_ URLs (scheme, domain, path, *and* ?query portions)
which are not very unique look at the response headers to see why they
are not caching well. The tool at redbot.org can help with that last part.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Ideas for better caching these popular urls

2018-04-10 Thread Omid Kosari
Hello,

[image: squid-top-domains.JPG]

This image shows stats from one of my squid boxes. I have a question about
the highlighted ones. I think they should have better hit ratios because
they are popular among clients.
I have checked a lot of things like calamaris and the logs, and played with
refresh_pattern, StoreID rules etc.

I ask the gurus and the community to please help me get better HITs.

Also I am ready to share specific parts of access.log and other logs if
requested.

Thanks



--
Sent from: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-Users-f1019091.html
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid is very slow after moving to production environment

2018-04-10 Thread Amos Jeffries
On 10/04/18 07:58, Roberto Carna wrote:
> Dear Antony, both proxies are virtual machines in the same DMZ... they
> use the same DNS, the same firewall, the same Internet link, the same
> IP but different MAC Address.


FYI: there were issues some years back with cloned VMs operating VERY
much slower than the original image they were cloned from, for no
apparent reason.

If you are making production as a clone of the testing VM you may want to
try a non-clone to see if the problem disappears.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] How to configure Icap can improve the performance of proxy?

2018-04-10 Thread Amos Jeffries
On 10/04/18 18:11, 赵 俊 wrote:
> My Squid has an ICAP configuration like this:
> 
> 
>  #icap
> icap_enable on
> icap_preview_enable on
> icap_preview_size 1024
> icap_send_client_ip on
> adaptation_meta X-Client-Port "%>p"
> icap_206_enable on
> icap_persistent_connections off
> 
> 
> icap_service service_req reqmod_precache 0 icap://192.168.10.200:1344/echo
> icap_service service_res respmod_precache 1 icap://192.168.10.200:1344/echo
> adaptation_access service_res allow all
> adaptation_access service_req allow all
> 
> 
> When I configured the ICAP parameters of Squid, the number of new
> connections and the number of concurrent connections was less than half
> of what it was with only Squid running.
> So how to configure Icap can improve the performance of proxy?

You cannot improve it much in that respect. ICAP is a networking protocol
for sending HTTP traffic to an external service. It uses ports and
network connections to do that.

AFAIK the best efficiency it can achieve is just under 2x the amount a
normal Squid uses - every inbound client connection adds +1 REQMOD socket
and every outbound server connection adds +1 RESPMOD socket. Even with
pipelining/persistence and caching that is not changed.


eCAP modules do not use the extra network resources. But whether you can
go that way depends on what you are needing it to do.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Proxy through another proxy possible?

2018-04-10 Thread Amos Jeffries
On 10/04/18 13:30, Eliezer Croitoru wrote:
> Hey Amos,
> 
> Would a PROXY protocol based "router" or "load balancer" be fine also?

Anything that acts like a S-NAT would do.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid-users Digest, Vol 44, Issue 8

2018-04-10 Thread Amos Jeffries
On 09/04/18 00:48, kalice caprice wrote:
>> 1) It is only possible to set an IPv6 outgoing when the server being
>> connected to is an IPv6 server address.
> 
> It doesn't matter for me, it is just a way to get a different outbound
> IPv6 address depending on which port the connection is made to, and both
> clients and servers have IPv6.
> I saw a few threads here asking for more or less the same thing except
> that I'm specifying the full address instead of implicit addressing to
> the outbound, this is where I'm stuck.
>> 2) It is only possible for Squid to use an IP address which has been
>> allocated/assigned to the NIC.
> 
> The NIC is a network card if I understood it right.

Yes.

> The IPv6 /64 subnet
> is added to the main interface and the gateway is as well; IPv6 is fully
> working on the server.

The individual IP address being used in tcp_outgoing_address by Squid
has to be assigned to the machine before it can generate any packets
from it. That goes for both IPv4 and IPv6.

If it is unassigned, or assigned to another machine, you get major
problems with packet delivery.

The config you had initially should work okay for IPv6 provided the
Squid machine has been assigned those *:8336, *:b369, and *:5fe0:eba8
addresses.
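
For example (a sketch using the documentation prefix 2001:db8::/64 in
place of your real subnet):

  # Linux shell: assign the address to the NIC first
  ip -6 addr add 2001:db8::8336/64 dev eth0

  # squid.conf: then the address can be used for outgoing connections
  acl port1 myportname 3128
  tcp_outgoing_address 2001:db8::8336 port1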


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users