Re: [squid-users] squid and hosts_file on ipv6 not working fine

2017-09-19 Thread Amos Jeffries

On 20/09/17 03:54, Ahmed Alzaeem wrote:

access.log points to a different address:

1505824835.364 69 12.13.207.211 TCP_TUNNEL/200 78573 CONNECT www.google.com:443 - HIER_DIRECT/2404:6800:4009:802::2004 -


From the Linux terminal I can reach Google from:

2607:f8b0:4006:810::200e google.com



but in Squid itself … no, it doesn't, and it reaches Google using the 
address 2404:6800:4009:802::2004, not 2607:f8b0:4006:810::200e


So I'm sure the hosts_file directive works for IPv4 addresses but not for IPv6.



FYI: "google.com" is not the same domain as "www.google.com". So your 
hosts file line for "www.google.com" is the one that should be used, not 
the "google.com" line.
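That exact-match behaviour can be sketched like this (illustrative Python, not Squid's actual code; the hosts lines are the ones from the mail). Each hostname needs its own line because there is no wildcarding:

```python
# Illustrative sketch of exact-match hosts-file lookup (not Squid's code):
# each hostname must have its own line; there is no wildcarding.

def parse_hosts(text):
    """Build a hostname -> IP map from hosts-file-style text."""
    table = {}
    for line in text.splitlines():
        line = line.split('#', 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        fields = line.split()
        ip, names = fields[0], fields[1:]
        for name in names:
            table[name] = ip
    return table

HOSTS = """\
2607:f8b0:4006:810::200e google.com
2607:f8b0:4006:810::200e www.google.com
"""

table = parse_hosts(HOSTS)
print(table['www.google.com'])        # 2607:f8b0:4006:810::200e
print(table.get('mail.google.com'))   # None: the google.com line does not cover it
```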



Did you restart or reconfigure Squid after making the hosts file changes?

And, if so, does "squid -k parse" show any issues?

And, does the cache manager ipcache report show all these google entries 
with an 'H' flag?


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid and hosts_file on ipv6 not working fine

2017-09-19 Thread Amos Jeffries

On 19/09/17 23:47, --Ahmad-- wrote:

hello folks


Sometimes I need to change domains for certain IPv6 websites.

i use the directive

hosts_file /etc/hosts
and inside it I have:

[root@server ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
2607:f8b0:4006:810::200e google.com
2607:f8b0:4006:810::200e www.google.com
2607:f8b0:4006:810::2003 google.de
2607:f8b0:4006:810::2003 www.google.de




but Squid still doesn't take google.com from that hosts file; it resolves it 
from outside instead.



How are you determining that?

And what does your access.log say for one of the requests that are doing it?


Amos


Re: [squid-users] When the redirect [301, 302, 307] is cached by Squid?

2017-09-19 Thread Amos Jeffries

On 20/09/17 02:00, kAja Ziegler wrote:

Hi all,

   I want to ask why my Squid does not cache 301, 302 and 307 redirects. 
See the anonymised example below. Even if I request the URL multiple times, 
or open it in the browser, I always get a MISS regardless of the return code 
(301, 302 or 307).


302 and 307 are not cached because, as their status descriptions indicate, 
they are *temporary* results. They can only be cached if the server supplies 
explicit details indicating for how long.


301 should be cached unless the object would need revalidation immediately.
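That rule of thumb can be sketched as follows (a simplification of RFC 7234's heuristic-caching rules, not Squid's actual logic): 301 is on the heuristically cacheable list, while 302 and 307 need explicit freshness information such as Cache-Control: max-age or Expires.

```python
# Simplified sketch of RFC 7234 redirect caching (not Squid's actual logic):
# 301 is heuristically cacheable; 302/307 need explicit freshness information.

HEURISTICALLY_CACHEABLE = {200, 203, 204, 206, 300, 301, 404, 405, 410, 414, 501}

def has_explicit_freshness(headers):
    """True if the server said explicitly how long the response stays fresh."""
    cc = headers.get('Cache-Control', '')
    return 'max-age' in cc or 's-maxage' in cc or 'Expires' in headers

def is_cacheable(status, headers):
    return has_explicit_freshness(headers) or status in HEURISTICALLY_CACHEABLE

print(is_cacheable(301, {}))                                # True
print(is_cacheable(307, {}))                                # False
print(is_cacheable(302, {'Cache-Control': 'max-age=60'}))   # True
```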


Amos


Re: [squid-users] very slow squid response

2017-09-19 Thread Amos Jeffries

On 19/09/17 23:54, Iraj Norouzi wrote:

hi Antony
thanks for your reply

i setup squid on ubuntu and centos


Why both?
because I was testing, and because I didn't get the result I wanted


with tproxy and wccp for 6 gb/s traffic


What hardware are you using for that sort of traffic flow?
I use an HP DL360 with two 6-core 3 GHz processors, 64 GB RAM and a 1 TB HDD


but when i try to test squid with 40 mb/s traffic


How are you generating "40 mb/s traffic"?  I'm assuming that your Internet
connection is 6Gbps as stated above, so how are you restricting this down to
40Mbps for testing?
I redirected one class of IP addresses carrying 40 Mb/s of traffic to test 
Squid, and I plan to redirect all of the traffic to Squid once I get 
fast browsing.




Squid is designed to optimize and reduce *bandwidth*. "fast browsing" is 
just a nice side effect of caching. It is a mistake to think that Squid 
will always produce faster browsing.


This is especially true during the initial cache warm-up period where 
DNS and HTTP objects are being fetched for the first time. Speed can 
only come from future fetches being reduced by the cache.


So when you are testing for speed with real traffic, make sure there have 
been at least a few hours for the caches to warm up.


Since you are testing with RAM-only caching right now be aware that 
every time you restart Squid *all* its caches (for all data types) get 
erased back to "cold"/empty.




it response very slow


Numbers please.
Websites load in 1 or 2 seconds with direct browsing, but load in 10 seconds, 
or not at all, through Squid.



while when i use direct browsing i can browse websites very fast


Is the direct traffic still being routed through the Squid server (you say
you're using tproxy, so I assume this is an intercept machine with the
traffic going through it between client and server)?
No, HTTP traffic is redirected to Squid by WCCP and an access-list configured on the Cisco.


Cisco is tunneling the packets with WCCP to the Squid machine, which is 
intercepting the traffic with TPROXY.


So actually "yes", if not something would be terribly broken in your 
WCCP and TPROXY setup.




ip wccp 80 redirect-list wccp
ip wccp 90 redirect-list wccp_to_inside

ip access-list extended wccp
  remark Permit http access from clients
  permit tcp x.x.x.x 0.0.0.255 any eq www
  deny   ip any any
ip access-list extended wccp_to_inside
  permit tcp any eq www x.x.x.x 0.0.0.255
  deny   ip any any


I used tcpdump to trace connection arrival times and there was no problem.


Arrival time where?  From the origin server to Squid?  From Squid to the
client?  What are you actually measuring?
Yes: packets arriving from clients at the Squid interface. From the time 
I press Enter in the browser address bar, tcpdump shows the packets 
arriving immediately.


Packets arriving at Squid from the client is only the first ~1% of 
things that are going on. It would be a big problem if they took 
anything more than a few ms to arrive.


Once the packets arrive there are DNS lookups to do. Delay in DNS 
lookups is the most common cause of overall delays.


Then there is HTTP processing to find a source for the response. If the 
request has never been seen before that means a fair amount of logic to 
select potential upstream servers and attempt connections to them (maybe 
several or even all of them).


Then the server request has to be generated, and wait for a response.
Only when that server response comes back can stuff for the client 
response start to happen.


When the cache is involved the total time could be under 1 ms, or 
somewhere around 50 ms. If there are lots of server-side things to do, 
the time can be hundreds of ms.
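As a rough illustration of where that time goes, here is a toy latency budget for the phases described above (the numbers are invented for the example, not measurements; real values vary widely per network and per site):

```python
# Invented example numbers (ms) for the request phases described above;
# real values vary widely per network and per site.

miss_ms = {
    'client packets arrive': 1,     # the first ~1% of the work
    'dns lookup': 40,               # the most common cause of delay
    'select server + connect': 30,
    'wait for server response': 120,
    'deliver to client': 10,
}
hit_ms = {'client packets arrive': 1, 'cache lookup': 1, 'deliver to client': 10}

print(f"MISS path: ~{sum(miss_ms.values())} ms")  # ~201 ms
print(f"HIT path:  ~{sum(hit_ms.values())} ms")   # ~12 ms
```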






I used watch -d to trace packets matched by the iptables rules, and it was OK.


Please be more specific - what did you measure and what does "OK" mean?
I added an iptables rule to trace one website's packets and saw them in 
kern.log matching the rule; if you need them, tell me and I will send the 
commands I used.


Which packets? As I detailed above, there are a minimum of 2 TCP 
connections involved in delivering a MISS object (client->Squid and 
Squid->server), and potentially many more if there are network issues 
connecting to server(s).





ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev enp3s0f0 table 100


iptables -t mangle -A PREROUTING -d $LOCALIP -j ACCEPT


You also need a rule before the above one which blocks traffic to port 3129:
  "-d $LOCALIP -p tcp --dport 3129 -j REJECT"

iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129


watch -d iptables -t mangle -vnL

Did you compare with and without Squid in place to see what differs?
No, as I said, when I browse directly it works well: packets do not come 
to Squid; they exit from the Cisco to the Internet and arrive at the clients.

Re: [squid-users] When the redirect [301, 302, 307] is cached by Squid?

2017-09-19 Thread Eliezer Croitoru
As you can see in the response headers there are no rules for caching:
< HTTP/1.1 307 Temporary Redirect
< Date: Tue, 19 Sep 2017 12:27:50 GMT
< Server: Apache
< Location: http://test.example.com/img.svg
< Content-Length: 249
< Content-Type: text/html; charset=iso-8859-1
< X-Cache: MISS from 
< X-Cache-Lookup: MISS from :3128
< Connection: keep-alive

If you have a specific service try to use redbot to analyze the response:
https://redbot.org/

It might give you what you need.

All The Bests,
Eliezer

* I do not know if it should be this way or not, since I am missing a couple 
of things from the setup, such as squid.conf.


http://ngtech.co.il/lmgtfy/
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il


From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of kAja Ziegler
Sent: Tuesday, September 19, 2017 17:00
To: squid-users@lists.squid-cache.org
Subject: [squid-users] When the redirect [301, 302, 307] is cached by Squid?

Hi all,

  I want to ask why my Squid does not cache 301, 302 and 307 redirects. See the 
anonymised example below. Even if I request the URL multiple times, or open it in the 
browser, I always get a MISS regardless of the return code (301, 302 or 307).

$ curl -v http://test.example.com/img307.jpg

> GET /img307.jpg HTTP/1.1
> Host: test.example.com
> User-Agent: curl/7.50.1
> Accept: */*
>
< HTTP/1.1 307 Temporary Redirect
< Date: Tue, 19 Sep 2017 12:27:50 GMT
< Server: Apache
< Location: http://test.example.com/img.svg
< Content-Length: 249
< Content-Type: text/html; charset=iso-8859-1
< X-Cache: MISS from 
< X-Cache-Lookup: MISS from :3128
< Connection: keep-alive
<


307 Temporary Redirect

Temporary Redirect
The document has moved to http://test.example.com/img.svg.


My anonymised squid.conf is attached.
Thanks in advance for clarification

  zigi






Re: [squid-users] Squid radius Authentication

2017-09-19 Thread Eliezer Croitoru
Hey Pascal,

I have some experience with wrapper scripts, but I must admit there are a couple 
of things that led me not to use them.
One of the issues was excessive CPU usage, since I was using a bash script as a 
wrapper.
I remember that long ago a sysadmin used something other than basic auth.
They had a WiFi system on the premises and every user could log in to the WiFi 
network using their username and password.
Then they periodically pulled the user => IP mapping from the RADIUS DB and 
applied ACLs based on the client IP, which is unique per username.

If I write a helper I would probably use Go or Ruby.
I was thinking about some way to make the helper generic enough, but if you have 
an idea or sketch I might take it and actually write the helper.
I have seen, but have not used, the following library:
https://github.com/layeh/radius

It might be very helpful.

Eliezer 

Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



-Original Message-
From: Pascal Schäfer [mailto:p.schae...@creapptive.de] 
Sent: Tuesday, September 19, 2017 15:20
To: Eliezer Croitoru ; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid radius Authentication

Hey,

thank you for your reply.
Yes it would be Basic.
I think I will write my own helper as a generic solution, not only for 2
domains/subdomains. Did you have the same problem in the past?

Amos's reply mails helped me a lot in understanding how I can program the
wrapper helper.

Pascal
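
The dispatch idea for such a wrapper helper can be sketched like this (a hedged Python sketch: the user@realm convention, the server names and the check_radius stub are all assumptions; a real helper would do an Access-Request there, e.g. by exec'ing squid_radius_auth with a per-server -h option). Squid's basic-auth helper protocol is one "username password" line per request on stdin, answered with OK or ERR on stdout:

```python
import sys

# Hypothetical mapping: which RADIUS server authenticates which site.
RADIUS_FOR_REALM = {
    'A.domain.com': ('radius-a.example', 1812),
    'B.domain.com': ('radius-b.example', 1812),
}

def choose_server(username):
    """Pick a RADIUS server from a 'user@realm' style name (assumed convention)."""
    realm = username.rsplit('@', 1)[-1] if '@' in username else 'A.domain.com'
    return RADIUS_FOR_REALM.get(realm)

def check_radius(server, username, password):
    # Placeholder: a real helper would send an Access-Request here
    # (e.g. via a RADIUS library or by exec'ing squid_radius_auth -h <server>).
    raise NotImplementedError

def main():
    # Squid basic-auth helper protocol: one "user password" line per request,
    # one "OK"/"ERR" reply line per request.
    for line in sys.stdin:
        try:
            user, password = line.rstrip('\n').split(' ', 1)
            server = choose_server(user)
            ok = server is not None and check_radius(server, user, password)
        except Exception:
            ok = False
        sys.stdout.write('OK\n' if ok else 'ERR\n')
        sys.stdout.flush()

if __name__ == '__main__':
    main()
```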

Am 17.09.2017 um 05:57 schrieb Eliezer Croitoru:
> Hey,
> 
> What kind of authentication do you want/need? Basic?
> Depending on your needs, there might be a helper that you can use.
> If you have only two domains/subdomains it's one thing, but if you have more 
> than these then the program would be different.
> 
> If I get more details I might be able to answer your question, and I 
> may even have a radius authentication helper written somewhere which I can 
> pull.
> 
> Eliezer
> 
> 
> Eliezer Croitoru
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: elie...@ngtech.co.il
> 
> 
> 
> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On 
> Behalf Of Pascal Sch?fer
> Sent: Friday, September 15, 2017 03:53
> To: squid-users@lists.squid-cache.org
> Subject: [squid-users] Squid radius Authentication
> 
> Dear Ladies and Gentlemen,
> 
> I have a question about authentication with a RADIUS server.
> I use Squid as a reverse proxy.
> Is it possible to use two RADIUS servers for different pages or
> subdomains with squid_radius_auth?
> I am thinking about a perhaps special configuration.
> I am trying to use RADIUS server A for website A and RADIUS
> server B for website B. It may be good to know that website A
> is on web server A and website B is on web server B.
> I would like to use one Squid server instead of two Squid servers (and
> two port forwardings).
> 
> An example of my configuration:
> 
> https://A.domain.com/... -> authentication over Radius Server A
> https://B.domain.com/... -> authentication over Radius Server B
> 
> When I searched on Google I didn't find an acceptable answer to my question.
> Should I program such a function on my own, or does someone know a
> configuration that works for my project?
> 
> With best regards
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
> 



Re: [squid-users] SSL_DB/certs are reaching 4MB on squid

2017-09-19 Thread Cherukuri, Naresh
Hello Alex,

Thank you for the quick turnaround.  Here is the screenshot of the size output.

[root@*** ssl_db]# cat size
3620864

Thanks,
Naresh

-Original Message-
From: Alex Rousskov [mailto:rouss...@measurement-factory.com] 
Sent: Tuesday, September 19, 2017 10:10 AM
To: Cherukuri, Naresh; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] SSL_DB/certs are reaching 4MB on squid

On 09/19/2017 08:02 AM, Cherukuri, Naresh wrote:

> My squid ssl_db/certs directory is reaching 4MB. What happens when it 
> reaches 4MB? Does Squid recycle it by itself?

Yes, Squid should evict old certificates in order to cache the new ones while 
maintaining the total cache size at or below the configured 4MB level.


> [root@** ssl_db]# ls -ltr
> [root@ ssl_db]# du -sh certs

The above numbers do not matter to Squid. They should correlate with Squid's 
estimate of the database size, but to know how many bytes Squid actually thinks 
the certificate database is using, do this instead:

  $ cat size


HTH,

Alex.


Re: [squid-users] SSL_DB/certs are reaching 4MB on squid

2017-09-19 Thread Alex Rousskov
On 09/19/2017 08:02 AM, Cherukuri, Naresh wrote:

> My squid ssl_db/certs directory is reaching 4MB. What happens when it
> reaches 4MB? Does Squid recycle it by itself?

Yes, Squid should evict old certificates in order to cache the new ones
while maintaining the total cache size at or below the configured 4MB level.
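
The eviction behaviour can be sketched as a size-bounded cache (an illustration of the idea only, not ssl_crtd's actual algorithm; the ~2 KB per-certificate size is an assumption):

```python
from collections import OrderedDict

class SizeBoundedCertCache:
    """Sketch of a size-bounded cache: oldest entries are evicted
    until the total stays at or below the configured limit."""

    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.entries = OrderedDict()  # name -> size in bytes
        self.total = 0

    def add(self, name, size):
        self.entries[name] = size
        self.total += size
        while self.total > self.max_bytes:
            _, evicted = self.entries.popitem(last=False)  # drop the oldest
            self.total -= evicted

cache = SizeBoundedCertCache(max_bytes=4 * 1024 * 1024)
for i in range(3000):
    cache.add(f'host{i}.example', 2048)   # assume ~2 KB per generated certificate
print(cache.total <= 4 * 1024 * 1024)     # True: stays within the 4MB limit
```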


> [root@** ssl_db]# ls -ltr
> [root@ ssl_db]# du -sh certs

The above numbers do not matter to Squid. They should correlate with Squid's
estimate of the database size, but to know how many bytes Squid actually
thinks the certificate database is using, do this instead:

  $ cat size


HTH,

Alex.


[squid-users] SSL_DB/certs are reaching 4MB on squid

2017-09-19 Thread Cherukuri, Naresh
Hello,

My Squid ssl_db/certs directory is reaching 4MB. What happens when it reaches 
4MB? Does Squid recycle it by itself, or do I have to clear the certs directory 
by removing all *.pem files? Please advise.
sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/ssl_db -M 4MB 
sslcrtd_children 8 startup=1 idle=1

[root@** ssl_db]# ls -ltr
total 200
-rw-r--r--. 1 squid squid  7 Sep 19 09:30 size
drwxr-xr-x. 2 squid squid  69632 Sep 19 09:30 certs
-rw-r--r--. 1 squid squid 123463 Sep 19 09:30 index.txt

[root@ ssl_db]# du -sh certs
3.6Mcerts

Thanks,
Naresh


[squid-users] When the redirect [301, 302, 307] is cached by Squid?

2017-09-19 Thread kAja Ziegler
Hi all,

  I want to ask why my Squid does not cache 301, 302 and 307 redirects. See
the anonymised example below. Even if I request the URL multiple times, or open
it in the browser, I always get a MISS regardless of the return code (301, 302
or 307).

$ curl -v test.example.com/img307.jpg

> GET /img307.jpg HTTP/1.1
> Host: test.example.com
> User-Agent: curl/7.50.1
> Accept: */*
>
< HTTP/1.1 307 Temporary Redirect
< Date: Tue, 19 Sep 2017 12:27:50 GMT
< Server: Apache
< Location: http://test.example.com/img.svg
< Content-Length: 249
< Content-Type: text/html; charset=iso-8859-1
< X-Cache: MISS from 
< X-Cache-Lookup: MISS from :3128
< Connection: keep-alive
<


307 Temporary Redirect

Temporary Redirect
The document has moved to http://test.example.com/img.svg.



My anonymised squid.conf is attached.

Thanks in advance for clarification

  zigi


squid.conf
Description: Binary data


Re: [squid-users] Squid radius Authentication

2017-09-19 Thread Pascal Schäfer
Hey,

thank you for your reply.
Yes it would be Basic.
I think I will write my own helper as a generic solution, not only for 2
domains/subdomains. Did you have the same problem in the past?

Amos's reply mails helped me a lot in understanding how I can program the
wrapper helper.

Pascal

Am 17.09.2017 um 05:57 schrieb Eliezer Croitoru:
> Hey,
> 
> What kind of authentication do you want/need? Basic?
> Depending on your needs, there might be a helper that you can use.
> If you have only two domains/subdomains it's one thing, but if you have more 
> than these then the program would be different.
> 
> If I get more details I might be able to answer your question, and I 
> may even have a radius authentication helper written somewhere which I can 
> pull.
> 
> Eliezer
> 
> 
> Eliezer Croitoru
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: elie...@ngtech.co.il
> 
> 
> 
> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On 
> Behalf Of Pascal Sch?fer
> Sent: Friday, September 15, 2017 03:53
> To: squid-users@lists.squid-cache.org
> Subject: [squid-users] Squid radius Authentication
> 
> Dear Ladies and Gentlemen,
> 
> I have a question about authentication with a RADIUS server.
> I use Squid as a reverse proxy.
> Is it possible to use two RADIUS servers for different pages or
> subdomains with squid_radius_auth?
> I am thinking about a perhaps special configuration.
> I am trying to use RADIUS server A for website A and RADIUS
> server B for website B. It may be good to know that website A
> is on web server A and website B is on web server B.
> I would like to use one Squid server instead of two Squid servers (and
> two port forwardings).
> 
> An example of my configuration:
> 
> https://A.domain.com/... -> authentication over Radius Server A
> https://B.domain.com/... -> authentication over Radius Server B
> 
> When I searched on Google I didn't find an acceptable answer to my question.
> Should I program such a function on my own, or does someone know a
> configuration that works for my project?
> 
> With best regards
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
> 


Re: [squid-users] very slow squid response

2017-09-19 Thread Iraj Norouzi
hi Antony
thanks for your reply
> i setup squid on ubuntu and centos

Why both?
because I was testing, and because I didn't get the result I wanted

> with tproxy and wccp for 6 gb/s traffic

What hardware are you using for that sort of traffic flow?
I use an HP DL360 with two 6-core 3 GHz processors, 64 GB RAM and a 1 TB HDD

> but when i try to test squid with 40 mb/s traffic

How are you generating "40 mb/s traffic"?  I'm assuming that your Internet
connection is 6Gbps as stated above, so how are you restricting this down to
40Mbps for testing?
I redirected one class of IP addresses carrying 40 Mb/s of traffic to test
Squid, and I plan to redirect all of the traffic to Squid once I get
fast browsing.

> it response very slow

Numbers please.
Websites load in 1 or 2 seconds with direct browsing, but load in 10 seconds,
or not at all, through Squid.

> while when i use direct browsing i can browse websites very fast

Is the direct traffic still being routed through the Squid server (you say
you're using tproxy, so I assume this is an intercept machine with the
traffic
going through it between client and server)?
No, HTTP traffic is redirected to Squid by WCCP and an access-list configured on the Cisco.
ip wccp 80 redirect-list wccp
ip wccp 90 redirect-list wccp_to_inside

ip access-list extended wccp
 remark Permit http access from clients
 permit tcp x.x.x.x 0.0.0.255 any eq www
 deny   ip any any
ip access-list extended wccp_to_inside
 permit tcp any eq www x.x.x.x 0.0.0.255
 deny   ip any any

> i used tcpdump for tracing connections arrive time and there was no
problem,

Arrival time where?  From the origin server to Squid?  From Squid to the
client?  What are you actually measuring?
Yes: packets arriving from clients at the Squid interface. From the time I
press Enter in the browser address bar, tcpdump shows the packets arriving immediately.

> i used watch -d for tracing packets match by iptables rules and it was ok,

Please be more specific - what did you measure and what does "OK" mean?
I added an iptables rule to trace one website's packets and saw them in
kern.log matching the rule; if you need them, tell me and I will send the
commands I used.

ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev enp3s0f0 table 100

iptables -t mangle -A PREROUTING -d $LOCALIP -j ACCEPT
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129

watch -d iptables -t mangle -vnL

Did you compare with and without Squid in place to see what differs?
No, as I said, when I browse directly it works well: packets do not come to
Squid; they exit from the Cisco to the Internet and arrive at the clients
from the Cisco. Because of the traffic on the Cisco I can't enable debugging
on it, but when I browse through Squid I get latency, so I suppose the problem
is Squid or the server that Squid is running on.

> i also used iptables trace command for tracing matching iptables rules,
> there was no problem except i had latency on arriving packets on iptables
> rule while tcpdump captured packets fast, it happened when my browsing was
> so slow, at some times that my browsing was fast there was no latency on
> iptables trace log.

That description is too vague to know exactly what you were measuring and
what
results you got.

iptables -t raw -A PREROUTING -s x.x.x.x -j TRACE

iptables -t raw -A OUTPUT -s x.x.x.x -j TRACE

tailf /var/log/kern.log

tcpdump -e -i enp3s0f0 -d x.x.x.x dst port 80

tcpdump -e -i enp3s0f0 -s x.x.x.x src port 80



> i also used tcp and linux enhancement configurations

Details?
net.core.wmem_default=524288
net.core.wmem_max=16777216
net.core.rmem_default=524288
net.core.rmem_max=16777216
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 66560 524288 16777216
net.ipv4.tcp_wmem = 66560 524288 16777216
net.core.somaxconn=4000
net.ipv4.tcp_timestamps=0
net.ipv4.tcp_sack=0
net.ipv4.tcp_fin_timeout=20
net.ipv4.ip_local_port_range=10240 65000
net.ipv4.tcp_keepalive_time = 900
net.ipv4.tcp_keepalive_intvl = 900
net.ipv4.tcp_keepalive_probes = 9
net.core.somaxconn = 5000
net.core.netdev_max_backlog = 8000
net.ipv4.tcp_max_syn_backlog = 8096
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.tcp_tw_reuse = 1

> but nothing happened.
> wccp send packets very well and tcpdump show capturing packets too but
> browsing with squid is very slow.

Firstly, please define "slow" - do you mean it takes a long time for new web
pages / images / etc to appear (but once they start, they arrive quickly)?
It browses in 10 seconds or not at all, for webpages I browse for the first
time as well as ones I have browsed multiple times.
Or do you mean that a continuous stream of data (a "download") arrives more
slowly when going through Squid than going direct (and if so, what are the
different speeds)?
No, just browsing.

Secondly, what are you trying to achieve with Squid - what is its purpose in
your network?
Caching.

[squid-users] squid and hosts_file on ipv6 not working fine

2017-09-19 Thread --Ahmad--
hello folks 


Sometimes I need to change domains for certain IPv6 websites.

i use the directive 

hosts_file /etc/hosts 
and inside it I have:

[root@server ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
2607:f8b0:4006:810::200e google.com
2607:f8b0:4006:810::200e www.google.com
2607:f8b0:4006:810::2003 google.de
2607:f8b0:4006:810::2003 www.google.de 




but Squid still doesn't take google.com from that hosts file; it resolves it
from outside instead.


Is there any directive for IPv6, maybe?


cheers 




[squid-users] Fwd: Re: very slow squid response

2017-09-19 Thread Antony Stone
Hi.

Forwarding private reply back to the list in case it helps anyone reply with 
suggestions.

Iraj - please reply to the list in future.

Antony.

--  Forwarded Message Starts  --

Subject: Re: [squid-users] very slow squid response
Date: Tuesday 19 September 2017 12:34:47
From: Iraj Norouzi 
To: Antony Stone

hi Antony
thanks for your reply
> i setup squid on ubuntu and centos

Why both?
because I was testing, and because I didn't get the result I wanted

> with tproxy and wccp for 6 gb/s traffic

What hardware are you using for that sort of traffic flow?
I use an HP DL360 with two 6-core 3 GHz processors, 64 GB RAM and a 1 TB HDD

> but when i try to test squid with 40 mb/s traffic

How are you generating "40 mb/s traffic"?  I'm assuming that your Internet
connection is 6Gbps as stated above, so how are you restricting this down to
40Mbps for testing?
I redirected one class of IP addresses carrying 40 Mb/s of traffic to test
Squid, and I plan to redirect all of the traffic to Squid once I get
fast browsing.

> it response very slow

Numbers please.
Websites load in 1 or 2 seconds with direct browsing, but load in 10 seconds,
or not at all, through Squid.

> while when i use direct browsing i can browse websites very fast

Is the direct traffic still being routed through the Squid server (you say
you're using tproxy, so I assume this is an intercept machine with the
traffic
going through it between client and server)?
No, HTTP traffic is redirected to Squid by WCCP and an access-list configured on the Cisco.
ip wccp 80 redirect-list wccp
ip wccp 90 redirect-list wccp_to_inside

ip access-list extended wccp
 remark Permit http access from clients
 permit tcp x.x.x.x 0.0.0.255 any eq www
 deny   ip any any
ip access-list extended wccp_to_inside
 permit tcp any eq www x.x.x.x 0.0.0.255
 deny   ip any any

> i used tcpdump for tracing connections arrive time and there was no
problem,

Arrival time where?  From the origin server to Squid?  From Squid to the
client?  What are you actually measuring?
Yes: packets arriving from clients at the Squid interface. From the time I
press Enter in the browser address bar, tcpdump shows the packets arriving immediately.

> i used watch -d for tracing packets match by iptables rules and it was ok,

Please be more specific - what did you measure and what does "OK" mean?
I added an iptables rule to trace one website's packets and saw them in
kern.log matching the rule; if you need them, tell me and I will send the
commands I used.

ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev enp3s0f0 table 100

iptables -t mangle -A PREROUTING -d $LOCALIP -j ACCEPT
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129

watch -d iptables -t mangle -vnL

Did you compare with and without Squid in place to see what differs?
No, as I said, when I browse directly it works well: packets do not come to
Squid; they exit from the Cisco to the Internet and arrive at the clients
from the Cisco. Because of the traffic on the Cisco I can't enable debugging
on it, but when I browse through Squid I get latency, so I suppose the problem
is Squid or the server that Squid is running on.

> i also used iptables trace command for tracing matching iptables rules,
> there was no problem except i had latency on arriving packets on iptables
> rule while tcpdump captured packets fast, it happened when my browsing was
> so slow, at some times that my browsing was fast there was no latency on
> iptables trace log.

That description is too vague to know exactly what you were measuring and
what
results you got.

iptables -t raw -A PREROUTING -s x.x.x.x -j TRACE

iptables -t raw -A OUTPUT -s x.x.x.x -j TRACE

tailf /var/log/kern.log

tcpdump -e -i enp3s0f0 -d x.x.x.x dst port 80

tcpdump -e -i enp3s0f0 -s x.x.x.x src port 80



> i also used tcp and linux enhancement configurations

Details?
net.core.wmem_default=524288
net.core.wmem_max=16777216
net.core.rmem_default=524288
net.core.rmem_max=16777216
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 66560 524288 16777216
net.ipv4.tcp_wmem = 66560 524288 16777216
net.core.somaxconn=4000
net.ipv4.tcp_timestamps=0
net.ipv4.tcp_sack=0
net.ipv4.tcp_fin_timeout=20
net.ipv4.ip_local_port_range=10240 65000
net.ipv4.tcp_keepalive_time = 900
net.ipv4.tcp_keepalive_intvl = 900
net.ipv4.tcp_keepalive_probes = 9
net.core.somaxconn = 5000
net.core.netdev_max_backlog = 8000
net.ipv4.tcp_max_syn_backlog = 8096
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.tcp_tw_reuse = 1

> but nothing happened.
> wccp send packets very well and tcpdump show capturing packets too but
> browsing with squid is very slow.

Firstly, please define "slow" - do you mean it takes a long time for new web
pages / images / etc to appear (but once they start, they arrive quickly)
browse in 10 second or not browsing, webpages that i browse f

Re: [squid-users] very slow squid response

2017-09-19 Thread Antony Stone
On Tuesday 19 September 2017 at 11:34:37, Antony Stone wrote:

> Is the direct traffic still being routed through the Squid server (you say
> you're using tproxy, so I assume this is an intercept machine with the
> traffic going through it between client and server)?

Apologies - with WCCP this is not true.  It would be good to know more about 
your hardware / network / WCCP setup, though.

(As well as the answers to the questions in my previous email.)



Regards,


Antony.

-- 
Because it messes up the order in which people normally read text.
> Why is top-posting such a bad thing?
> > Top-posting.
> > > What is the most annoying way of replying to e-mail?

   Please reply to the list;
 please *don't* CC me.


Re: [squid-users] very slow squid response

2017-09-19 Thread Antony Stone
On Tuesday 19 September 2017 at 11:18:34, Iraj Norouzi wrote:

> hi everybody
> i setup squid on ubuntu and centos

Why both?

> with tproxy and wccp for 6 gb/s traffic

What hardware are you using for that sort of traffic flow?

> but when i try to test squid with 40 mb/s traffic

How are you generating "40 mb/s traffic"?  I'm assuming that your Internet 
connection is 6Gbps as stated above, so how are you restricting this down to 
40Mbps for testing?

> it response very slow

Numbers please.

> while when i use direct browsing i can browse websites very fast

Is the direct traffic still being routed through the Squid server (you say 
you're using tproxy, so I assume this is an intercept machine with the traffic 
going through it between client and server)?

> i used tcpdump for tracing connections arrive time and there was no problem,

Arrival time where?  From the origin server to Squid?  From Squid to the 
client?  What are you actually measuring?

> i used watch -d for tracing packets match by iptables rules and it was ok,

Please be more specific - what did you measure and what does "OK" mean?

Did you compare with and without Squid in place to see what differs?

> i also used iptables trace command for tracing matching iptables rules,
> there was no problem except i had latency on arriving packets on iptables
> rule while tcpdump captured packets fast, it happened when my browsing was
> so slow, at some times that my browsing was fast there was no latency on
> iptables trace log.

That description is too vague to know exactly what you were measuring and what 
results you got.

> i also used tcp and linux enhancement configurations

Details?

> but nothing happened.
> wccp send packets very well and tcpdump show capturing packets too but
> browsing with squid is very slow.

Firstly, please define "slow" - do you mean it takes a long time for new web 
pages / images / etc to appear (but once they start, they arrive quickly), or 
do you mean that a continuous stream of data (a "download") arrives more 
slowly when going through Squid than going direct (and if so, what are the 
different speeds)?

Secondly, what are you trying to achieve with Squid - what is its purpose in 
your network?

> please help me.

Please help us - give us more details about the hardware you're running this 
on, the version of Squid you're using, what WCCP routing / filtering you're 
doing, the measurements you've made and the results you got.


Regards,


Antony.

-- 
We all get the same amount of time - twenty-four hours per day.
How you use it is up to you.

   Please reply to the list;
 please *don't* CC me.


[squid-users] very slow squid response

2017-09-19 Thread Iraj Norouzi
Hi everybody,

I set up Squid on Ubuntu and CentOS, with TPROXY and WCCP, for 6 Gb/s of traffic,
but when I test Squid with 40 Mb/s of traffic it responds very slowly,
while with direct browsing I can browse websites very fast. I used
tcpdump to trace connection arrival times and there was no problem. I also
used watch -d to watch the packets matched by the iptables rules, and that was OK.
I also used the iptables TRACE target to trace which iptables rules matched;
there was no problem, except that packets arrived at the iptables rules with
some latency while tcpdump captured them quickly. That happened when my
browsing was very slow; at the times when my browsing was fast there was no
latency in the iptables trace log.
I also applied TCP and Linux tuning settings, but nothing changed.
WCCP forwards packets fine, and tcpdump shows the captured packets too, but
browsing through Squid is very slow.
Please help me.

Regards,
Iraj Norouzi


Re: [squid-users] Is it a good idea to use Linux swap partition/file with rock storage?

2017-09-19 Thread Amos Jeffries

On 19/09/17 18:25, duanyao wrote:

Hi,

I notice that squid's rock storage uses a large (and fixed) amount of 
shared memory even when it is not accessed. It's estimated at 
110 bytes/slot, so for a 256GB rock storage with 16KB slots, the memory 
requirement is about 1.7GB, which is quite large.
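The estimate can be reproduced with quick arithmetic. A sketch using the figures quoted in the question (the 110-bytes-per-slot overhead is the rough estimate given there, not an exact constant):

```python
# Estimate shared-memory use of the rock cache index from the figures
# quoted above: a 256 GB cache with 16 KB slots at ~110 bytes per slot.
BYTES_PER_SLOT_ESTIMATE = 110  # rough per-slot index overhead from the post

def rock_index_memory(cache_bytes, slot_bytes, per_slot=BYTES_PER_SLOT_ESTIMATE):
    slots = cache_bytes // slot_bytes   # number of slots in the store
    return slots * per_slot             # index bytes held in shared memory

cache = 256 * 1024**3  # 256 GB rock store
slot = 16 * 1024       # 16 KB slot size
mem = rock_index_memory(cache, slot)
print(f"{mem / 1024**3:.2f} GiB")  # 1.72 GiB, matching the ~1.7 GB estimate
```

Note the index size scales with the slot count, so larger slots (at the cost of wasted space for small objects) shrink the index proportionally.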


So my questions are:

1. Is there a way to reduce memory usage of rock storage?



Reducing the cache size is the only thing that will do that.

For the entire time your Squid is running it is adding to the cache 
contents. The rate of growth decreases over time, but will only ever 
stop growing if the cache reaches 100% full.


So going out of your way to make it use less memory during that warm-up 
phase is pointless long-term. The memory *is* needed, and not having it 
available for use at zero advance notice will lead to serious 
performance problems, up to and including a DoS vulnerability in your proxy.


For general memory reduction see the FAQ:



2. On Linux, squid puts its shared memory in /dev/shm, which can be 
backed by a swap partition/file. Is it a good idea to use a swap 
partition/file with rock storage to save some physical memory?




Definitely no. The cache index has an extremely high rate of churn and a 
large number of random-location reads per transaction. If any of it ever 
gets pushed out to a swap disk/file, the proxy's operational speed 
undergoes a performance reduction of 3-4 orders of magnitude, e.g. 50GBps 
-> 2MBps.



3. For rock storage, are the /dev/shm/squid* files frequently and randomly 
written? If the Linux swap is on an SSD, will this cause 
performance/lifetime issues?




See the answer to (2).

Squid stresses disks in ways vastly different from what manufacturers 
optimize the hardware to handle. The HTTP caches have a very high 
write-to-read ratio. No disk actually survives more than a fraction of 
its manufacturer-advertised lifetime. This problem is less visible with 
HDDs due to their naturally long lifetimes.


Specific to your question: due to the churn mentioned in (2), using a 
disk as the storage location for the cache index faces it with the worst of 
both worlds - very high read throughput and even higher write 
throughput. SSDs avoid (some of) the speed problem, but at the cost of 
shorter lifetimes. So the churn is much more relevant and perhaps 
costly in hardware replacements.


YMMV depending on the specific SSD model and how it is designed to cope 
with dead sectors - but it is guaranteed to wear out much faster than 
advertised.


Amos


Re: [squid-users] disable access.log logging on a specific entrys

2017-09-19 Thread Amos Jeffries

>  Original message 
> Von: Amos Jeffries
>
> On 19/09/17 01:45, Verwaiser wrote:
>  >
>  > Does anybody know a solution for this problem?
>  >
>
> What Squid version?
>
> Amos

On 19/09/17 20:56, admin wrote:
> Sorry, I forgot...
>
> Squid version 3.5.21
>

Please try an upgrade. The latest version works fine for me with the 
same config.


Amos


[squid-users] Squid 3.5.27 for Microsoft Windows 64-bit is available

2017-09-19 Thread Rafael Akchurin
Greetings everyone,

Apologies for the huge delay; we would like to announce the availability of the 
CygWin-based build of the Squid proxy
for Microsoft Windows, version 3.5.27 (amd64 only!).

* Original release notes are at 
http://www.squid-cache.org/Versions/v3/3.5/squid-3.5.27-RELEASENOTES.html .
* Ready to use MSI package can be downloaded from http://squid.diladele.com .
* List of open issues for the installer - 
https://github.com/diladele/squid-windows/issues

Many thanks to the Squid developers for making this great software!

Please join our humble efforts to provide a ready-to-run MSI installer for Squid 
on Microsoft Windows, with all required dependencies, at GitHub -
https://github.com/diladele/squid-windows . Report all issues/bugs/feature 
requests at the GitHub project.
Issues about the *MSI installer only* can also be reported to 
supp...@diladele.com .

Best regards,
Rafael Akchurin
Diladele B.V.
https://www.diladele.com



Cloud Guard URL re-writer for Squid proxy

We would also like to introduce our new research project - a cloud-based URL 
rewriter for the Squid proxy. In short, it
is a URL rewriter that integrates with Squid. The rewriter calls into
guard.diladele.com/api/* to process URL rewrite requests.

For now it works on Windows only. We plan to add support for Linux (amd64, 
MIPS, ARM based),
FreeBSD and pfSense if there is enough interest. The project is in 
the beta stage now, so
please use it as much as possible, but on non-production systems. Please direct 
your issues
to supp...@diladele.com.

Signup/Login is available at https://guard.diladele.com/login/ . Please note, 
due to the early stage
of the project, it is only possible to sign up from DE, FR, NL and UK. If you'd 
like to be notified
when the project is available in your country, please join our community 
forum
(https://groups.google.com/d/forum/web-safety) or the MailChimp-hosted
newsletter (http://eepurl.com/vXDPH ).

