Re: [squid-users] decreased requests per second with big file size

2015-10-13 Thread Amos Jeffries
On 14/10/2015 7:48 p.m., Eliezer Croitoru wrote:
> You now got my attention!
> Depends on what you want you might be able to use external logging
> helper for that.
> I am unsure if it is possible to use two access log directives in the
> configuration and Amos or others can answer that.

Yes, it is possible to use multiple access_log directives. But not
multiple logfile_daemon directives.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] decreased requests per second with big file size

2015-10-13 Thread Eliezer Croitoru

You now got my attention!
Depending on what you want, you might be able to use an external logging
helper for that.
I am unsure if it is possible to use two access_log directives in the
configuration; Amos or others can answer that.
It is pretty simple to implement, since the input data will flow like any
access log line, but to a program.

Then you can update some DB live.
There is an option to even use the same log daemon for both logging and 
http interface for the statistics but I will not go this way now.

...
I have tested it and it seems possible to use two access_log directives.
I am unsure how to implement the idea of both access.log and an external
logger combined.

But there is certainly the option of:
"access_log tcp://host:port"
which, if you write a TCP service, will make your life easy.
I am unfamiliar with the logging protocol, but it seems like the wiki can
help with that.


*I am willing to write an example TCP log daemon for Squid in
Go/Ruby/Python for the Squid project if one is not present in these
languages.*
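
For illustration, a minimal sketch of such a receiver in Python (assumptions:
Squid would point at it with something like "access_log tcp://127.0.0.1:9514
squid", and the tcp module simply streams newline-terminated log lines; this
is a sketch, not a finished daemon):

import socketserver

class LogLineHandler(socketserver.StreamRequestHandler):
    # Squid opens one TCP connection and streams formatted log lines over it.
    def handle(self):
        for raw in self.rfile:
            line = raw.decode("utf-8", errors="replace").rstrip("\n")
            if line:
                # Replace this with a DB insert, counter update, etc.
                print("received:", line)

if __name__ == "__main__":
    # Listen where the (hypothetical) access_log tcp:// line points.
    with socketserver.ThreadingTCPServer(("127.0.0.1", 9514), LogLineHandler) as srv:
        srv.serve_forever()

A real daemon would also need to handle reconnects and Squid restarts.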


Eliezer

* Testing 3.5.10 RPMs is in progress.

On 14/10/2015 09:00, Ambadas H wrote:

Hi Eliezer,

It's mostly like a live feed.

I am writing these sites + (a client tracking parameter) to a flat file via
Squid, from where another process reads it & does further processing (e.g.
analyzing the top sites used by any particular client).

And that is why I was working on getting just the URLs entered by clients.


Ambadas


On Tue, Oct 13, 2015 at 2:01 PM, Eliezer Croitoru 
wrote:


Hey Ambadas,

I was wondering if you want it to be something like a "live feed" or just
for logs analyzing?

Eliezer

On 09/10/2015 15:47, Ambadas H wrote:


Hi,

I am using below setup:
Squid proxy 3.5.4.
CentOS 7.1

I am trying to analyze the most used websites by the users via the Squid
proxy.
I just require the first GET request for that particular browsed page,
& not the subsequent GETs of that same page.

Eg:
1) user enters http://google.com in the client (mozilla)
2) client gets page containing some other urls
3) client initiates multiple GETs for same requested page without the user's
knowledge

I myself tried a logic where I assumed if the "Referer" header is present,
then
it's not the first GET but a subsequent one for the same requested page.

I know I can't rely on the "Referer" header to always be present as it's not
mandatory. But
I want to know if my logic is correct? & also if there's any alternative
solution?
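
One way to check that heuristic against real traffic is to log the Referer
header explicitly with a custom logformat; a sketch, with made-up format and
file names:

logformat with_referer %ts.%03tu %>a %rm %ru %{Referer}>h
access_log /var/log/squid/with_referer.log with_referer

Lines whose last field is "-" carry no Referer and are the candidate "first"
requests under that assumption.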



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users





___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] decreased requests per second with big file size

2015-10-13 Thread Ambadas H
Hi Amos,


Got it. Will go through the session helpers & figure out how to do it.


Thanks for the help :)


Ambadas

On Tue, Oct 13, 2015 at 1:25 PM, Amos Jeffries  wrote:

> On 12/10/2015 6:51 p.m., Ambadas H wrote:
> > Hi Amos,
> >
> > Thanks for responding
> >
> > *"You would be better off taking the first use of any domain by a
> client,*
> >
> > *then ignoring other requests for it until there is some long period*
> > *between two of them. The opposite of what session helpers do."*
> >
> > Could you please elaborate a little on the above logic.
>
> That is about as clear as I can explain it sorry. Look at what the
> session helpers do to determine whether two requests are part of the
> same session or not.
>
> You need to start with that *then* figure out how to split each sequence
> of requests now grouped into "session" down into whatever grouping you
> define "page" to be.
>
>
> >
> > My understanding, if not wrong, is to take domain/host of first client
> GET
> > request & don't consider the same if it matches with the subsequent GET
> > requests.
> >
> > In this case there is possibility of multiple unique domains/hosts for
> > single page (Eg. other domain Ads, analytics etc)?
>
> Yes. There is simply no concept of "page" in HTTP.
>
> It is a hard problem to even figure out with any accuracy what requests
> are coming from the same client.
>
> Amos
>
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] decreased requests per second with big file size

2015-10-13 Thread Ambadas H
Hi Eliezer,

Its mostly like a live feed.

I am writing these sites+(a client tracking parameter) to a flat file via
squid, from where another process reads it & does further processing (eg.
analyze top sites used by any particular client).

And that is why i was working on getting just the urls entered by clients.


Ambadas


On Tue, Oct 13, 2015 at 2:01 PM, Eliezer Croitoru 
wrote:

> Hey Ambadas,
>
> I was wondering if you want it to be something like a "live feed" or just
> for logs analyzing?
>
> Eliezer
>
> On 09/10/2015 15:47, Ambadas H wrote:
>
>> Hi,
>>
>> I am using below setup:
>> Squid proxy 3.5.4.
>> CentOS 7.1
>>
>> I am trying to analyze the most used websites by the users via Squid
>> proxy.
>> I just require the first GET request for that particular browsed page page
>> & not the proceeding GETs of that same page.
>>
>> Eg:
>> 1) user enters *http://google.com * in client
>> (mozilla)
>> 2) client gets page containing some other urls
>> 3) client initiates multiple GETs for same requested page without users
>> knowledge
>>
>> I myself tried a logic where I assumed if "Referer" header is present,
>> then
>> its not the first GET but a proceeding one for same requested page.
>>
>> I know i cant rely on "Referer" header to be always present as its not
>> mandatory. But
>> I want to know if my logic is correct? & also if there's any alternative
>> solution?
>>
>>
>>
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org
>> http://lists.squid-cache.org/listinfo/squid-users
>>
>>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Safari 9 vs. SSL Bump

2015-10-13 Thread Amos Jeffries
On 14/10/2015 5:03 p.m., Dan Charlesworth wrote:
> I meant to say “forward secrecy”, which appears to be a list of specific 
> ciphers:
> https://developer.apple.com/library/watchos/technotes/App-Transport-Security-Technote/index.html
> 
> Anyone know how to translate that list of ciphers to use in sslproxy_cipher 
> in squid.conf?


ECDHE means they are all Elliptic Curve ciphers, which are only supported
by Squid-4.
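
As a rough sketch of what that could look like, only for a Squid new enough
to negotiate ECDHE (the cipher selection below is an assumption translated
from Apple's list into OpenSSL names, not a recommendation):

sslproxy_cipher ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256

(In Squid-4 this option family is being reworked; check whether your build
uses sslproxy_cipher or the newer tls_outgoing_options cipher=... form.)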

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Safari 9 vs. SSL Bump

2015-10-13 Thread Dan Charlesworth
I meant to say “forward secrecy”, which appears to be a list of specific 
ciphers:
https://developer.apple.com/library/watchos/technotes/App-Transport-Security-Technote/index.html

Anyone know how to translate that list of ciphers to use in sslproxy_cipher in 
squid.conf?

> On 14 Oct 2015, at 2:39 PM, Dan Charlesworth  wrote:
> 
> ¯\_(ツ)_/¯
> 
> All I really have to go on is those errors com.apple.WebKit.Networking is 
> logging which apparently points to a specific thing it’s missing called 
> “forward transport security”. Only the peek@step1 seems to make it as far as 
> any of squid’s logs.
> 
> No other browsers affected that I can find, not even mobile Safari. The sites 
> that do and don’t fail seems random too.
> 
> Fine: instagram.com, getpocket.com, youtube.com
> 
> Not fine: httpbin.org, news.ycombinator.com, basecamp.com, wikipedia.org, 
> dribbble.com, icloud.com, vimeo.com, reddit.com
> 
>> On 14 Oct 2015, at 2:13 PM, Jason Haar  wrote:
>> 
>> On 14/10/15 16:08, Dan Charlesworth wrote:
>>> I thought that fixed it for a second … 
>>> 
>>> But in reality ssl_bump peek step1 & ssl_bump bump step3 is actually 
>>> splicing everything, it seems.
>>> 
>>> Any other advice? :-)
>> Could this imply be a pinning issue? ie does Safari track the CAs used
>> by those sites - thus causing the problem you see? Certainly matches the
>> symptoms
>> 
>> -- 
>> Cheers
>> 
>> Jason Haar
>> Corporate Information Security Manager, Trimble Navigation Ltd.
>> Phone: +1 408 481 8171
>> PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
>> 
>> ___
>> squid-users mailing list
>> squid-users@lists.squid-cache.org
>> http://lists.squid-cache.org/listinfo/squid-users
> 

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Safari 9 vs. SSL Bump

2015-10-13 Thread Dan Charlesworth
 ¯\_(ツ)_/¯

All I really have to go on is those errors com.apple.WebKit.Networking is 
logging which apparently points to a specific thing it’s missing called 
“forward transport security”. Only the peek@step1 seems to make it as far as 
any of squid’s logs.

No other browsers are affected that I can find, not even mobile Safari. The sites
that do and don’t fail seem random too.

Fine: instagram.com, getpocket.com, youtube.com

Not fine: httpbin.org, news.ycombinator.com, basecamp.com, wikipedia.org, 
dribbble.com, icloud.com, vimeo.com, reddit.com

> On 14 Oct 2015, at 2:13 PM, Jason Haar  wrote:
> 
> On 14/10/15 16:08, Dan Charlesworth wrote:
>> I thought that fixed it for a second … 
>> 
>> But in reality ssl_bump peek step1 & ssl_bump bump step3 is actually 
>> splicing everything, it seems.
>> 
>> Any other advice? :-)
> Could this imply be a pinning issue? ie does Safari track the CAs used
> by those sites - thus causing the problem you see? Certainly matches the
> symptoms
> 
> -- 
> Cheers
> 
> Jason Haar
> Corporate Information Security Manager, Trimble Navigation Ltd.
> Phone: +1 408 481 8171
> PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] TPROXY and IPv6 issues CentOS 7

2015-10-13 Thread Amos Jeffries
On 14/10/2015 7:07 a.m., James White wrote:
> Hi all,
> 
> I operate a squid box which has two http_port setups:
> 
> http_port 3128
> http_port 3129 TPROXY
> 
> I have implemented TPROXY to replace my NAT setup on a CentOS 7 Squid
> 3.3 box. Currently the IPv4 connectivity is working great, the IPv6
> connectivity is broken when going through TPROXY. All IPv6 connections
> timeout and from tests it appears there is a broken IPv6 setup. Using
> test-ipv6.com I get a broken/misconfiguration warning. IPv6
> connections handled by the standard 3128 setup work OK, direct IPv6
> connections outside of the proxy are also OK, TPROXY IPv6 is not
> working properly.
> 
> I have looked at several TPROXY resources and cannot see where I have
> gone wrong or what might be causing the issue. I am using my DD-WRT
> routing with policy routing to pass the traffic to the Squid box which
> then uses further policy routing to push the traffic to the TPROXY
> binding on port 3129.
> 
> DD-WRT firewall/routing rules:
> 
> PROXY_IPV6="2001:470::xx::x"
> CLIENTIFACE="br0"
> FWMARK=3
> 
> ip6tables -t mangle -A PREROUTING -i $CLIENTIFACE -s $PROXY_IPV6 -p tcp
> --dport 80 -j ACCEPT
> ip6tables -t mangle -A PREROUTING -i $CLIENTIFACE -p tcp --dport 80 -j
> MARK --set-mark $FWMARK
> ip6tables -t mangle -A PREROUTING -m mark --mark $FWMARK -j ACCEPT
> ip6tables -t filter -A FORWARD -i $CLIENTIFACE -o $CLIENTIFACE -p tcp
> --dport 80 -j ACCEPT
> 
> ip -f inet6 rule add fwmark $FWMARK table 2
> ip -f inet6 route add default via $PROXY_IPV6 dev $CLIENTIFACE table 2
> 
> 
> Squid box firewall and routing rules:
> 
> ip -f inet6 rule add fwmark 1 lookup 100
> ip -f inet6 route add local default dev eno1 table 100
> 
> ip6tables -t mangle -F
> ip6tables -t mangle -X
> ip6tables -t mangle -N DIVERT
> 
> ip6tables -t mangle -A DIVERT -j MARK --set-mark 1
> ip6tables -t mangle -A DIVERT -j ACCEPT
> ip6tables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
> ip6tables -t mangle -A PREROUTING -p tcp -m tcp --dport 80 -j TPROXY
> --tproxy-mark 0x1/0x1 --on-port 3129
> 
> 
> The following sysctl values are set:
> 
> net.ipv4.ip_forward = 1
> net.ipv4.conf.default.rp_filter = 0
> net.ipv4.conf.all.rp_filter = 0
> net.ipv4.conf.eno1.rp_filter = 0
> 

Double-check the meaning of 0 in those rules. The rp_filter value
meanings changed just prior to 3.x kernels, and no longer do what most
online tutorials say.


> I have defined specific IPv4 and IPv6 addresses for the Squid traffic
> to go over, I had to exclude these with PREROUTING RULES as this broke
> connectivity on LAN clients which use the standard http_port setup of
> 3128. IPv6 connectivity for these clients is OK.

Pause.

How is traffic to --dport 3128 matching "-p tcp -m tcp --dport 80" ?

It seems to me that would be part of your problem. Unless you mean that
these rules had to go on the router. In which case, yes, you do need to
prevent Squid outbound traffic being looped back to it a second time.

> 
> iptables -t mangle -I PREROUTING -p tcp --dport 80 -s 192.168.x.x -j
> ACCEPT
> ip6tables -t mangle -I PREROUTING -p tcp --dport 80 -s
> 2001:470::xx::x -j ACCEPT
> 
> 
> I don't know if I need additional values for any ipv6 config value.
> Nothing is mentioned in the TPROXY Squid wiki article.

Given the likelihood of so-called "privacy addressing" in IPv6 you may
need to make the v6 bypasses use /64 subnets instead of single IPs.
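
For example, the same bypass rule widened to the /64 (prefix shown is only a
placeholder, as above):

ip6tables -t mangle -I PREROUTING -p tcp --dport 80 -s 2001:470:xx::/64 -j ACCEPT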

> 
> 
> Any ideas on what I could be missing?
> 

When debugging, make sure the "via on" directive exists in squid.conf. That
will highlight looping errors that you may have from a misconfigured
TPROXY setup.


Also, make sure that ICMP and path-MTU etc are working. Particularly
from the Squid machine to the Internet.

If you haven't already been through the list and double/triple-checked,
the troubleshooting section of the TPROXY wiki page may have the answer.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Safari 9 vs. SSL Bump

2015-10-13 Thread Jason Haar
On 14/10/15 16:08, Dan Charlesworth wrote:
> I thought that fixed it for a second … 
>
> But in reality ssl_bump peek step1 & ssl_bump bump step3 is actually splicing 
> everything, it seems.
>
> Any other advice? :-)
Could this simply be a pinning issue? i.e. does Safari track the CAs used
by those sites, thus causing the problem you see? It certainly matches the
symptoms.

-- 
Cheers

Jason Haar
Corporate Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SQUID: cache_dir filling up and squid imploding

2015-10-13 Thread Amos Jeffries
On 14/10/2015 3:05 a.m., Nelson Manuel Marques wrote:
> 
> Hi all,
> 
> We have a squid running for quite a few years and with the increase of
> traffic we noticed a bit of I/O hammering on the squid server (local
> disks).
> 
> For some testing, I've made a small 1.2GB tmpfs and pointed cache_dir
> to it so that our cache would be in the 'ramdrive'.
> 
> This did help a lot with I/O, but squid eventually once in a while
> implodes when it fills 1.2GB of the ramdrive (cache_dir is configured
> for 1G, leaving 200MB free).
> 

This sounds like a known bug which was fixed in Squid-3.5.8.

Please try an upgraded package. Eliezer provides more up-to-date
packages for CentOS, which you can find through his repository.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Safari 9 vs. SSL Bump

2015-10-13 Thread Dan Charlesworth
I thought that fixed it for a second … 

But in reality ssl_bump peek step1 & ssl_bump bump step3 is actually splicing 
everything, it seems.

Any other advice? :-)

> On 14 Oct 2015, at 1:51 PM, Amos Jeffries  wrote:
> 
> On 14/10/2015 1:13 p.m., Dan Charlesworth wrote:
>> Throwing this out to the list in case anyone else might be trying to get SSL 
>> Bump to work with the latest version of Safari.
>> 
>> Every other browser on OS X (and iOS) is happy with bumping for pretty much 
>> all HTTPS sites, so long as the proxy’s CA is trusted. 
>> 
>> However Safari throws generic “secure connection couldn’t be established” 
>> errors for many popular HTTPS sites in including:
>> - wikipedia.org
>> - mail.google.com
>> - twitter.com
>> - github.com
>> 
>> But quite a number of others work, such as youtube.com.
>> 
>> This error gets logged to the system whenever it occurs:
>> com.apple.WebKit.Networking: NSURLSession/NSURLConnection HTTP load failed 
>> (kCFStreamErrorDomainSSL, -9802)
>> 
>> Apparently this is related to Apple’s new “App Transport Security” 
>> protections, in particular, the fact that “the server doesn’t support 
>> forward secrecy”. Even though it doesn’t seem to be affecting mobile Safari 
>> on iOS 9 at all.
>> 
>> It’s also notable that Safari seems perfectly happy with legacy server-first 
>> SSL bumping. 
>> 
>> I’m using Squid 3.5.10 and this is my current config: 
>> https://gist.github.com/djch/9b883580c6ee84f31cd1
>> 
>> Anyone have any idea what I can try?
> 
> You can try bump at step3 (roughly equivalent to server-first) instead
> of step2 (aka client-first).
> 
> 
> Amos
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Delay pool with large negative numbers

2015-10-13 Thread Amos Jeffries
On 14/10/2015 11:46 a.m., Chico Venancio wrote:
> I have configured delay pools for a client that delays access to a few
> sites, including youtube and facebook.
> It seems to work for some clients, and has significantly reduced link
> congestion. However, some clients seem to be unaffected by the delay pools.
> 
> The output to squidclient mgr:delay is as follows:
> 
> Sending HTTP request ... done.
> HTTP/1.1 200 OK
> Server: squid/3.4.8
> Mime-Version: 1.0
> Date: Tue, 13 Oct 2015 22:43:28 GMT
> Content-Type: text/plain
> Expires: Tue, 13 Oct 2015 22:43:28 GMT
> Last-Modified: Tue, 13 Oct 2015 22:43:28 GMT
> X-Cache: MISS from proxy-server
> X-Cache-Lookup: MISS from proxy-server:3128
> Via: 1.1 proxy-server (squid/3.4.8)
> Connection: close
> 
> Delay pools configured: 1
> 
> Pool: 1
> Class: 2
> 
> Aggregate:
> Max: 2
> Restore: 1
> Current: -108514139
> 
> Individual:
> Max: 12000
> Restore: 7000
> Current: 87:12000 56:12000 92:12000 123:12000 94:-58135034
> 89:12000 223:12000 55:12000 93:12000
> 
> Memory Used: 1496 bytes
> 
> 
I have searched for answers and some do mention that sometimes the current
bytes in the pool should be negative, but a low negative like -1 or -6. To
me it seems that the delay pools are being ignored...


Sounds like it might be a side effect of a known bug.

Or it could be the fact that delay pools are still 32-bit functionality.
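
For reference, a class 2 pool like the one shown above is normally configured
along these lines (a generic sketch; the ACL name and the aggregate numbers
are placeholders, only the 7000/12000 individual bucket matches the output
above; delay_parameters takes restore/max pairs in bytes per second and bytes):

delay_pools 1
delay_class 1 2
delay_access 1 allow throttled_sites
delay_access 1 deny all
delay_parameters 1 100000/200000 7000/12000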

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ERROR: NAT/TPROXY lookup failed to locate original IPs

2015-10-13 Thread Amos Jeffries
On 14/10/2015 1:43 p.m., SaRaVanAn wrote:
> Hi Amos,
> I have tested squid 3.5.10 in linux kernel 3.16 compiled for debian wheezy.
> But still I am seeing same kind of errors.
> What could be the issue? Is there anything else we need to change?
> 
> *Linux version *
> uname -r
> 3.16.7-ckt11-ram.custom-1.4
> 
> 
> *Squid version*
> /usr/sbin/squid -v
> Squid Cache: Version 3.5.10
> 

Do you also have libcap2? It needs to be present at both build and run
time.

Other than that, all I'm aware of is a mystery issue (probably kernel
related) in Debian Squeeze and Wheezy that nobody ever got figured out,
which disappeared in Jessie.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Safari 9 vs. SSL Bump

2015-10-13 Thread Amos Jeffries
On 14/10/2015 1:13 p.m., Dan Charlesworth wrote:
> Throwing this out to the list in case anyone else might be trying to get SSL 
> Bump to work with the latest version of Safari.
> 
> Every other browser on OS X (and iOS) is happy with bumping for pretty much 
> all HTTPS sites, so long as the proxy’s CA is trusted. 
> 
> However Safari throws generic “secure connection couldn’t be established” 
> errors for many popular HTTPS sites in including:
> - wikipedia.org
> - mail.google.com
> - twitter.com
> - github.com
> 
> But quite a number of others work, such as youtube.com.
> 
> This error gets logged to the system whenever it occurs:
> com.apple.WebKit.Networking: NSURLSession/NSURLConnection HTTP load failed 
> (kCFStreamErrorDomainSSL, -9802)
> 
> Apparently this is related to Apple’s new “App Transport Security” 
> protections, in particular, the fact that “the server doesn’t support forward 
> secrecy”. Even though it doesn’t seem to be affecting mobile Safari on iOS 9 
> at all.
> 
> It’s also notable that Safari seems perfectly happy with legacy server-first 
> SSL bumping. 
> 
> I’m using Squid 3.5.10 and this is my current config: 
> https://gist.github.com/djch/9b883580c6ee84f31cd1
> 
> Anyone have any idea what I can try?

You can try bump at step3 (roughly equivalent to server-first) instead
of step2 (aka client-first).
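
A minimal sketch of that, using the at_step ACL (staring rather than peeking
at step2, since peeking there generally commits Squid to splicing):

acl step1 at_step SslBump1
acl step2 at_step SslBump2
ssl_bump peek step1
ssl_bump stare step2
ssl_bump bump all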


Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] How to allow subdomains in my config.

2015-10-13 Thread Amos Jeffries
On 14/10/2015 12:37 p.m., Leonardo Rodrigues wrote:
> Em 13/10/15 18:14, sebastien.boulia...@cpu.ca escreveu:
>>
>> cache_peer ezproxyx.reseaubiblio.ca parent 80 0 no-query
>> originserver name=ezproxycqlm
>>
>> acl ezproxycqlmacl dstdomain ezproxycqlm.reseaubiblio.ca
>>
>> http_access allow www80 ezproxycqlmacl
>>
>> cache_peer_access ezproxycqlm allow www80 ezproxycqlmacl
>>
>> cache_peer_access ezproxycqlm deny all
>>
>>
> 
> no guessing games would be awesome ... please post your ACL
> definitions as well
> 

He did; it was hidden in the middle.

Sebastien:

Place a '.' as the prefix on the dstdomain.
Like this:
  acl example dstdomain .example.com
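
Applied to the config quoted above that would be something like (whether the
parent domain should be ezproxyx or ezproxycqlm depends on which hostname the
rewritten URLs actually end in):
  acl ezproxycqlmacl dstdomain .ezproxyx.reseaubiblio.ca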


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ERROR: NAT/TPROXY lookup failed to locate original IPs

2015-10-13 Thread SaRaVanAn
Hi Amos,
I have tested squid 3.5.10 in linux kernel 3.16 compiled for debian wheezy.
But still I am seeing same kind of errors.
What could be the issue? Is there anything else we need to change?

*Linux version *
uname -r
3.16.7-ckt11-ram.custom-1.4


*Squid version*
/usr/sbin/squid -v
Squid Cache: Version 3.5.10

Regards,
Saravanan N


Regards,
Saravanan N

On Mon, Oct 12, 2015 at 4:25 AM, Amos Jeffries  wrote:

> On 10/10/2015 12:48 p.m., SaRaVanAn wrote:
> > Hi All,
> > I have compiled squid version 3.5.10 in  debian wheezy 7.1. With the
> > updated version squid+tproxy4 is not working in debian. I am getting the
> > below error if I try to browse any webpage. Also the connection gets
> reset.
> >
>
> Wheezy kernel and system headers do not contain TPROXY support.
>
> I suggest you upgrade to one of the newer Debian releases. Or at least
> use the backports package. Those should contain all that you need to run
> Squid on the outdated Debian system.
>
> Amos
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Safari 9 vs. SSL Bump

2015-10-13 Thread Dan Charlesworth
Throwing this out to the list in case anyone else might be trying to get SSL 
Bump to work with the latest version of Safari.

Every other browser on OS X (and iOS) is happy with bumping for pretty much all 
HTTPS sites, so long as the proxy’s CA is trusted. 

However Safari throws generic “secure connection couldn’t be established” 
errors for many popular HTTPS sites in including:
- wikipedia.org
- mail.google.com
- twitter.com
- github.com

But quite a number of others work, such as youtube.com.

This error gets logged to the system whenever it occurs:
com.apple.WebKit.Networking: NSURLSession/NSURLConnection HTTP load failed 
(kCFStreamErrorDomainSSL, -9802)

Apparently this is related to Apple’s new “App Transport Security” protections, 
in particular, the fact that “the server doesn’t support forward secrecy”. Even 
though it doesn’t seem to be affecting mobile Safari on iOS 9 at all.

It’s also notable that Safari seems perfectly happy with legacy server-first 
SSL bumping. 

I’m using Squid 3.5.10 and this is my current config: 
https://gist.github.com/djch/9b883580c6ee84f31cd1

Anyone have any idea what I can try?
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] How to allow subdomains in my config.

2015-10-13 Thread Leonardo Rodrigues

Em 13/10/15 18:14, sebastien.boulia...@cpu.ca escreveu:


cache_peer ezproxyx.reseaubiblio.ca parent 80 0 no-query 
originserver name=ezproxycqlm


acl ezproxycqlmacl dstdomain ezproxycqlm.reseaubiblio.ca

http_access allow www80 ezproxycqlmacl

cache_peer_access ezproxycqlm allow www80 ezproxycqlmacl

cache_peer_access ezproxycqlm deny all




no guessing games would be awesome ... please post your ACL 
definitions as well



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Delay pool with large negative numbers

2015-10-13 Thread Chico Venancio
I have configured delay pools for a client that delays access to a few
sites, including youtube and facebook.
It seems to work for some clients, and has significantly reduced link
congestion. However, some clients seem to be unaffected by the delay pools.

The output to squidclient mgr:delay is as follows:

Sending HTTP request ... done.
HTTP/1.1 200 OK
Server: squid/3.4.8
Mime-Version: 1.0
Date: Tue, 13 Oct 2015 22:43:28 GMT
Content-Type: text/plain
Expires: Tue, 13 Oct 2015 22:43:28 GMT
Last-Modified: Tue, 13 Oct 2015 22:43:28 GMT
X-Cache: MISS from proxy-server
X-Cache-Lookup: MISS from proxy-server:3128
Via: 1.1 proxy-server (squid/3.4.8)
Connection: close

Delay pools configured: 1

Pool: 1
Class: 2

Aggregate:
Max: 2
Restore: 1
Current: -108514139

Individual:
Max: 12000
Restore: 7000
Current: 87:12000 56:12000 92:12000 123:12000 94:-58135034
89:12000 223:12000 55:12000 93:12000

Memory Used: 1496 bytes


I have searched for answers and some do mention that sometimes the current
bytes in the pool should be negative, but a low negative like -1 or -6. To
me it seems that the delay pools are being ignored...

Thanks for any help.

Chico Venancio
CEO e Diretor de Criação
VM TECH - (98) 9 8800 2743
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] How to allow subdomains in my config.

2015-10-13 Thread Sebastien.Boulianne
Hi,

I searched the docs and the web, but I can't find what I want.
The primary site is http://ezproxyx.reseaubiblio.ca.
After the user is authenticated, he can access many resources / other sites.

In the access.log, I get a TCP_DENIED:
TCP_DENIED/403 4524 GET
http://www.worldbookonline.com.ezproxyx.reseaubiblio.ca/decouverte/home

I would like to allow all subdomains like
http://www.worldbookonline.com.ezproxyx.reseaubiblio.ca.
How can I do that?

cache_peer ezproxyx.reseaubiblio.ca parent 80 0 no-query originserver 
name=ezproxycqlm
acl ezproxycqlmacl dstdomain ezproxycqlm.reseaubiblio.ca
http_access allow www80 ezproxycqlmacl
cache_peer_access ezproxycqlm allow www80 ezproxycqlmacl
cache_peer_access ezproxycqlm deny all

Thanks.

Cheers,

Sébastien

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] TPROXY and IPv6 issues CentOS 7

2015-10-13 Thread James White
Hi all,

I operate a squid box which has two http_port setups:

http_port 3128
http_port 3129 TPROXY

I have implemented TPROXY to replace my NAT setup on a CentOS 7 Squid
3.3 box. Currently the IPv4 connectivity is working great, but the IPv6
connectivity is broken when going through TPROXY. All IPv6 connections
time out, and from tests it appears there is a broken IPv6 setup. Using
test-ipv6.com I get a broken/misconfiguration warning. IPv6
connections handled by the standard 3128 setup work OK, and direct IPv6
connections outside of the proxy are also OK, but TPROXY IPv6 is not
working properly.

I have looked at several TPROXY resources and cannot see where I have
gone wrong or what might be causing the issue. I am using my DD-WRT
routing with policy routing to pass the traffic to the Squid box which
then uses further policy routing to push the traffic to the TPROXY
binding on port 3129.

DD-WRT firewall/routing rules:

PROXY_IPV6="2001:470::xx::x"
CLIENTIFACE="br0"
FWMARK=3

ip6tables -t mangle -A PREROUTING -i $CLIENTIFACE -s $PROXY_IPV6 -p tcp
--dport 80 -j ACCEPT
ip6tables -t mangle -A PREROUTING -i $CLIENTIFACE -p tcp --dport 80 -j
MARK --set-mark $FWMARK
ip6tables -t mangle -A PREROUTING -m mark --mark $FWMARK -j ACCEPT
ip6tables -t filter -A FORWARD -i $CLIENTIFACE -o $CLIENTIFACE -p tcp
--dport 80 -j ACCEPT

ip -f inet6 rule add fwmark $FWMARK table 2
ip -f inet6 route add default via $PROXY_IPV6 dev $CLIENTIFACE table 2


Squid box firewall and routing rules:

ip -f inet6 rule add fwmark 1 lookup 100
ip -f inet6 route add local default dev eno1 table 100

ip6tables -t mangle -F
ip6tables -t mangle -X
ip6tables -t mangle -N DIVERT

ip6tables -t mangle -A DIVERT -j MARK --set-mark 1
ip6tables -t mangle -A DIVERT -j ACCEPT
ip6tables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
ip6tables -t mangle -A PREROUTING -p tcp -m tcp --dport 80 -j TPROXY
--tproxy-mark 0x1/0x1 --on-port 3129


The following sysctl values are set:

net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.eno1.rp_filter = 0

I have defined specific IPv4 and IPv6 addresses for the Squid traffic
to go over, I had to exclude these with PREROUTING RULES as this broke
connectivity on LAN clients which use the standard http_port setup of
3128. IPv6 connectivity for these clients is OK.

iptables -t mangle -I PREROUTING -p tcp --dport 80 -s 192.168.x.x -j
ACCEPT
ip6tables -t mangle -I PREROUTING -p tcp --dport 80 -s
2001:470::xx::x -j ACCEPT


I don't know if I need additional values for any ipv6 config value.
Nothing is mentioned in the TPROXY Squid wiki article.


Any ideas on what I could be missing?

Thanks,

James
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SQUID: cache_dir filling up and squid imploding

2015-10-13 Thread Alex Rousskov
On 10/13/2015 10:17 AM, Nelson Manuel Marques wrote:
> Hi Antony,
> 
> I had actually seen that document and it's "10%". That's why I've left
> 20% also taking in mind the space reserved for 'root'.
> 
> I suppose we have to increase it and go on trial/error until we find a
> safe margin?


Another option is to upgrade and use Rock store that does not force you
to play those guessing games -- the maximum size used for Rock store is
constant and should match your Squid configuration.
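
A sketch of what that might look like (path and sizes are placeholders):

cache_dir rock /mnt/ramcache 1024 max-size=32768

Unlike ufs/aufs, the rock store keeps its footprint at roughly the configured
Mbytes value rather than growing past it.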

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SQUID: cache_dir filling up and squid imploding

2015-10-13 Thread Nelson Manuel Marques
Hi Antony,

I had actually seen that document and it says "10%". That's why I've left
20%, also bearing in mind the space reserved for 'root'.

I suppose we have to increase it and go by trial and error until we find a
safe margin?

NMM


On Tue, 2015-10-13 at 17:42 +0200, Antony Stone wrote:
> On Tuesday 13 October 2015 at 16:37:10, Nelson Manuel Marques wrote:
> 
> > On Tue, 2015-10-13 at 20:22 +0600, Yuri Voinov wrote:
> > > 
> > > Squid has its own in-memory cache, what's the point to put the
> > > disk
> > > cache to the same ?!
> > 
> > The problem here isn't the tmpfs, but instead Squid going 20% over
> > the
> > max size defined in cache_dir, or am I missing something?
> 
> Have you read:
> http://wiki.squid-cache.org/SquidFaq/ConfiguringSquid
> "What cache_dir size should I use?"
> 
> 20% extra may be more than you were expecting, but it's not
> ridiculous.
> 
> 
> Antony.
> 
-- 
Nelson Manuel Marques 
Administrador de Sistemas
Eurotux Informática S.A. | www.eurotux.com
(t) +351 253 680 300


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SQUID: cache_dir filling up and squid imploding

2015-10-13 Thread Antony Stone
On Tuesday 13 October 2015 at 16:37:10, Nelson Manuel Marques wrote:

> On Tue, 2015-10-13 at 20:22 +0600, Yuri Voinov wrote:
> > 
> > Squid has its own in-memory cache, what's the point to put the disk
> > cache to the same ?!
> 
> The problem here isn't the tmpfs, but instead Squid going 20% over the
> max size defined in cache_dir, or am I missing something?

Have you read:
http://wiki.squid-cache.org/SquidFaq/ConfiguringSquid
"What cache_dir size should I use?"

20% extra may be more than you were expecting, but it's not ridiculous.
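
As a worked illustration using the numbers from this thread: if real usage can
reach roughly 120% of the configured cache_dir size, then a 1.2 GB tmpfs only
safely backs a cache_dir of about 1200 / 1.2 = 1000 MB, and a 1.5 GB one about
1500 / 1.2 = 1250 MB, before counting swap.state and filesystem overhead.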


Antony.

-- 
#define SIX 1+5
#define NINE 8+1

int main() {
printf("%d\n", SIX * NINE);
}
- thanks to ECB for bringing this to my attention

   Please reply to the list;
 please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SQUID: cache_dir filling up and squid imploding

2015-10-13 Thread Nelson Manuel Marques
On Tue, 2015-10-13 at 20:22 +0600, Yuri Voinov wrote:
> 
> -BEGIN PGP SIGNED MESSAGE- 
> Hash: SHA256 
>  
> Squid has its own in-memory cache, what's the point to put the disk
> cache to the same ?!

The problem here isn't the tmpfs, but instead Squid going 20% over the
max size defined in cache_dir, or am I missing something?

thanks,
nmm







> 13.10.15 20:05, Nelson Manuel Marques пишет:
> >
> > Hi all,
> >
> > We have a squid running for quite a few years and with the increase of
> > traffic we noticed a bit of I/O hammering on the squid server (local
> > disks).
> >
> > For some testing, I've made a small 1.2GB tmpfs and pointed cache_dir
> > to it so that our cache would be in the 'ramdrive'.
> >
> > This did help a lot with I/O, but squid eventually once in a while
> > implodes when it fills 1.2GB of the ramdrive (cache_dir is configured
> > for 1G, leaving 200MB free).
> >
> > This is on a CentOS based system:
> >
> > [EDIT squid]# rpm -qi squid
> > Name: squid   Relocations: (not relocatable)
> > Version : 3.1.23   Vendor: CentOS
> > Release : 9.el6   Build Date: Fri 24 Jul 2015 09:59:03 AM WEST
> > Install Date: Tue 13 Oct 2015 02:31:20 PM WEST  Build Host: c6b8.bsys.dev.centos.org
> > Group   : System Environment/Daemons   Source RPM: squid-3.1.23-9.el6.src.rpm
> > Size: 6649558  License: GPLv2 and (LGPLv2+ and Public Domain)
> > Signature   : RSA/SHA1, Fri 24 Jul 2015 09:39:20 PM WEST, Key ID 0946fca2c105b9de
> > Packager: CentOS BuildSystem
> > URL : http://www.squid-cache.org
> > Summary : The Squid proxy caching server
> > Description :
> >
> > Anyone can help? Imagine I want to keep working on a ramdrive with
> > 1.5G. How much can I get cache_dir configured with a safe margin so it
> > doesnt fill the space?
> >
> > Any options I should look over again which might help me diagnose if it
> > happens again?
> >
> > Kindest Regards,
> > NMM
> >
> > ___
> > squid-users mailing list
> > squid-users@lists.squid-cache.org
> > http://lists.squid-cache.org/listinfo/squid-users
> 
> -BEGIN PGP SIGNATURE- 
> Version: GnuPG v2 
>  
> iQEcBAEBCAAGBQJWHRO2AAoJENNXIZxhPexGl/gH/jR2oSVvX6AiRIVe+gCB1O9F 
> XB2UODRAAlvowF0wKW0Gccbpa9L3iiOJiYNKnomVMIzqwaRhqLeRXRk11kbwN/KP 
> /pEIfNOOYici9UzDIFb99ZOe/R0lR2YG4udCpCayb2+6GzbcAoX6F/4F+eMTIn7y 
> qgnvTNF9JQ1LEUxRtjSNG889RwA2ZKgw9nhhTP2PVFB1ttzP4BWy0nH5SRPkv/PR 
> VO2yZa5FQLTOyk4tEE8c3/b9DwDUiZKalqBEWdQNB6fQN2NH6dXe9jhEV25gcqI4 
> snS1fc6WTuM6pTVR8irtbSPQyw2jvRbzeEIOYM0z5cydhgqeqJg6eHgvCN3u93g= 
> =QjIl 
> -END PGP SIGNATURE- 
> 
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
-- 
Nelson Manuel Marques 
Administrador de Sistemas
Eurotux Informática S.A. | www.eurotux.com
(t) +351 253 680 300


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SQUID: cache_dir filling up and squid imploding

2015-10-13 Thread Yuri Voinov

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
 
Squid has its own in-memory cache; what's the point of putting the disk
cache in the same place?!

13.10.15 20:05, Nelson Manuel Marques пишет:
>
> Hi all,
>
> We have a squid running for quite a few years and with the increase of
> traffic we noticed a bit of I/O hammering on the squid server (local
> disks).
>
> For some testing, I've made a small 1.2GB tmpfs and pointed cache_dir
> to it so that our cache would be in the 'ramdrive'.
>
> This did help a lot with I/O, but squid eventually once in a while
> implodes when it fills 1.2GB of the ramdrive (cache_dir is configured
> for 1G, leaving 200MB free).
>
> This is on a CentOS based system:
>
> [EDIT squid]# rpm -qi squid
> Name: squidRelocations: (not
> relocatable)
> Version : 3.1.23Vendor: CentOS
> Release : 9.el6 Build Date: Fri 24 Jul 2015
> 09:59:03 AM WEST
> Install Date: Tue 13 Oct 2015 02:31:20 PM WEST  Build Host:
> c6b8.bsys.dev.centos.org
> Group   : System Environment/DaemonsSource RPM: squid-3.1.23
> -9.el6.src.rpm
> Size: 6649558  License: GPLv2 and
> (LGPLv2+ and Public Domain)
> Signature   : RSA/SHA1, Fri 24 Jul 2015 09:39:20 PM WEST, Key ID
> 0946fca2c105b9de
> Packager: CentOS BuildSystem 
> URL : http://www.squid-cache.org
> Summary : The Squid proxy caching server
> Description :
>
> Anyone can help? Imagine I want to keep working on a ramdrive with
> 1.5G. How much can I get cache_dir configured with a safe margin so it
> doesnt fill the space?
>
> Any options I should look over again which might help me diagnose if it
> happens again?
>
> Kindest Regards,
> NMM
>
>
>
>
>
>
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

-BEGIN PGP SIGNATURE-
Version: GnuPG v2
 
iQEcBAEBCAAGBQJWHRO2AAoJENNXIZxhPexGl/gH/jR2oSVvX6AiRIVe+gCB1O9F
XB2UODRAAlvowF0wKW0Gccbpa9L3iiOJiYNKnomVMIzqwaRhqLeRXRk11kbwN/KP
/pEIfNOOYici9UzDIFb99ZOe/R0lR2YG4udCpCayb2+6GzbcAoX6F/4F+eMTIn7y
qgnvTNF9JQ1LEUxRtjSNG889RwA2ZKgw9nhhTP2PVFB1ttzP4BWy0nH5SRPkv/PR
VO2yZa5FQLTOyk4tEE8c3/b9DwDUiZKalqBEWdQNB6fQN2NH6dXe9jhEV25gcqI4
snS1fc6WTuM6pTVR8irtbSPQyw2jvRbzeEIOYM0z5cydhgqeqJg6eHgvCN3u93g=
=QjIl
-END PGP SIGNATURE-

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] SQUID: cache_dir filling up and squid imploding

2015-10-13 Thread Nelson Manuel Marques

Hi all,

We have a squid running for quite a few years and with the increase of
traffic we noticed a bit of I/O hammering on the squid server (local
disks).

For some testing, I've made a small 1.2GB tmpfs and pointed cache_dir
to it so that our cache would be in the 'ramdrive'.

This did help a lot with I/O, but squid eventually once in a while
implodes when it fills 1.2GB of the ramdrive (cache_dir is configured
for 1G, leaving 200MB free).

This is on a CentOS based system:

[EDIT squid]# rpm -qi squid
Name: squidRelocations: (not
relocatable)
Version : 3.1.23Vendor: CentOS
Release : 9.el6 Build Date: Fri 24 Jul 2015
09:59:03 AM WEST
Install Date: Tue 13 Oct 2015 02:31:20 PM WEST  Build Host:
c6b8.bsys.dev.centos.org
Group   : System Environment/DaemonsSource RPM: squid-3.1.23
-9.el6.src.rpm
Size: 6649558  License: GPLv2 and
(LGPLv2+ and Public Domain)
Signature   : RSA/SHA1, Fri 24 Jul 2015 09:39:20 PM WEST, Key ID
0946fca2c105b9de
Packager: CentOS BuildSystem 
URL : http://www.squid-cache.org
Summary : The Squid proxy caching server
Description :

Can anyone help? Imagine I want to keep working on a ramdrive with
1.5G. How much can I get cache_dir configured with a safe margin so it
doesn't fill the space?

Any options I should look over again which might help me diagnose if it
happens again?

Kindest Regards,
NMM





-- 
Nelson Manuel Marques 
Administrador de Sistemas
Eurotux Informática S.A. | www.eurotux.com
(t) +351 253 680 300


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] decreased requests per second with big file size

2015-10-13 Thread Eliezer Croitoru

Hey Ambadas,

I was wondering if you want it to be something like a "live feed" or
just for log analysis?


Eliezer

On 09/10/2015 15:47, Ambadas H wrote:

Hi,

I am using below setup:
Squid proxy 3.5.4.
CentOS 7.1

I am trying to analyze the most used websites by the users via the Squid proxy.
I just require the first GET request for that particular browsed page
& not the subsequent GETs of that same page.

Eg:
1) user enters http://google.com in the client (mozilla)
2) client gets page containing some other urls
3) client initiates multiple GETs for same requested page without the user's
knowledge

I myself tried a logic where I assumed if the "Referer" header is present, then
it's not the first GET but a subsequent one for the same requested page.

I know I can't rely on the "Referer" header to always be present as it's not
mandatory. But
I want to know if my logic is correct? & also if there's any alternative
solution?



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] decreased requests per second with big file size

2015-10-13 Thread Amos Jeffries
On 12/10/2015 6:51 p.m., Ambadas H wrote:
> Hi Amos,
> 
> Thanks for responding
> 
> *"You would be better off taking the first use of any domain by a client,*
> 
> *then ignoring other requests for it until there is some long period*
> *between two of them. The opposite of what session helpers do."*
> 
> Could you please elaborate a little on the above logic.

That is about as clear as I can explain it, sorry. Look at what the
session helpers do to determine whether two requests are part of the
same session or not.

You need to start with that *then* figure out how to split each sequence
of requests now grouped into "session" down into whatever grouping you
define "page" to be.


> 
> My understanding, if not wrong, is to take domain/host of first client GET
> request & don't consider the same if it matches with the subsequent GET
> requests.
> 
> In this case there is possibility of multiple unique domains/hosts for
> single page (Eg. other domain Ads, analytics etc)?

Yes. There is simply no concept of "page" in HTTP.

It is a hard problem to even figure out with any accuracy what requests
are coming from the same client.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] analyze most used websites using squid

2015-10-13 Thread Ambadas H
Hi,

Thanks for responding

*"You would be better off taking the first use of any domain by a client,*

*then ignoring other requests for it until there is some long period*
*between two of them. The opposite of what session helpers do."*

Could you please elaborate a little on the above logic.

My understanding, if not wrong, is to take domain/host of first client GET
request & don't consider the same if it matches with the subsequent GET
requests.

In this case there is possibility of multiple unique domains/hosts for
single page (Eg. other domain Ads, analytics etc)?
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] acl Question

2015-10-13 Thread Amos Jeffries
On 13/10/2015 12:19 p.m., joe wrote:
> ok, again I filtered out most of the squid conf. With this minimal config,
> should I get any static img or anything as a hit or not? Because I don't
> get any. I tested on squid 3.5.8 and up, same thing.

Please continue to use that later version. In the absence of any other
useful information about your Squid or what it is supposed to be doing, the
response below assumes you are using 3.5.8.

Please do supply a description of how your Squid is _supposed_ to be
used, and any access policies you are expecting to be enforced by the proxy.


> 
> via off
> forwarded_for off
> 
> # should be allowed
> acl localnet src 10.2.3.0/24
> acl localnet src 10.2.2.0/24
> acl localnet src 10.3.2.0/24
> acl localnet src 10.4.4.0/24
> 
> #http_access deny all
> acl SSL_ports port 443
> acl Safe_ports port 80# http
> acl Safe_ports port 21# ftp
> acl Safe_ports port 443   # https
> acl Safe_ports port 70# gopher
> acl Safe_ports port 210   # wais
> acl Safe_ports port 1025-65535# unregistered ports
> acl Safe_ports port 280   # http-mgmt
> acl Safe_ports port 488   # gss-http
> acl Safe_ports port 591   # filemaker
> acl Safe_ports port 777   # multiling http
> acl CONNECT method CONNECT
> 
> # STOREID ACCESS LIST 
> acl domaincache dstdomain .dailymotion.com
> 
> cache allow domaincache

This rule allows URLs within the *.dailymotion.com domains to be cached.
The implicit followup prevents any others from being stored.

To let everything be cached properly, just remove the above "cache
allow" rule.


> http_access deny !Safe_ports
> # Deny CONNECT to other than secure SSL ports
> http_access deny CONNECT !SSL_ports
> http_access allow localhost
> http_access allow localnet
> http_access allow manager

Allowing anyone who can access the proxy to view the proxy management
reports and controls.


> # And finally deny all other access to this proxy
> http_access allow all

"allow all" does not deny anything. You have an "open proxy".


> 
> http_port 8079 
> http_port 8080 accel vhost allow-direct

What do you think this is doing?


> 
> store_dir_select_algorithm least-load
> cache_dir aufs /mnt/sdb 50 26 256
> cache_dir aufs /mnt/sdc 50 26 256
> 
> memory_pools off
> memory_pools_limit 4 GB

You disabled memory pools. No need to set a limit for it.

> cache_mem 5 GB
> #maximum_object_size_in_memory 64 KB
> maximum_object_size_in_memory 2048 KB

Meaning you can store 2 of the big objects in memory. The rest has to be
on disk.

This can cause major latency problems when Squid has to export up to 1
million small (1KB+) objects out of memory for one of these big objects.

Then when the big object expires it happens all over again, as up to 1
million small objects get loaded back from disk or network.


> minimum_object_size 1 KB 
> maximum_object_size 3 GB
> cache_swap_low 98
> cache_swap_high 99
> logfile_rotate 0
> cache_store_log none

This is a default. You can remove the cache_store_log line entirely.

> access_log daemon:/var/log/squid3/access.log !CONNECT

You have an open proxy. CONNECT is usually the method used to send abuse
through open proxies. Ignoring it all is a bad idea.

> cache_log /var/log/squid3/cache.log

This should be a default. You can remove the cache_log line entirely.

> 
> # FILES TYPE
> refresh_pattern -i \.(exe|crx|esd)(\?|\/\?) 10080 100% 799000
> override-expire override-lastmod ignore-reload ignore-no-store
> ignore-private ignore-auth ignore-must-revalidate store-stale
> reload-into-ims
> 
> refresh_pattern -i
> \.(3gp|m1v|ace|web(m|p|a)|m2(v|p)|swf|dat|cup|dvr-ms|ram|avi|mk(a|v)|vob|wm(a|v)|flv|x-flv|JPG)
> 10080 100% 129600 override-expire override-lastmod ignore-reload
> ignore-no-store ignore-private ignore-auth ignore-must-revalidate
> store-stale reload-into-ims
> refresh_pattern -i
> \.(m3u8|jp(e?g|e|2)|gif|pn[pg]|bm?|tiff?|ico|mp(e?g|a|e|1|2|3|4)|deb|ad|f4(f|v)|abst|dll)
> 10080 100% 129600 override-expire override-lastmod ignore-reload
> ignore-no-store ignore-private ignore-auth ignore-must-revalidate
> store-stale reload-into-ims
> refresh_pattern -i
> \.(rar|jar|gz|tgz|bz2|iso|7z|asx|mo(d|v)|arj|lha|lzh|zip|tar|pak|cup|ipa|apk)
> 10080 100% 43800 override-expire override-lastmod ignore-reload
> ignore-no-store ignore-private ignore-auth ignore-must-revalidate
> store-stale
> refresh_pattern -i
> \.(rpm|ac4|bin|ms(i|u|p)|og(x|v|a|g)|rm|r(a|p)m|snd|inc|cod|jad|txt) 10080
> 100% 43800 override-expire override-lastmod ignore-reload ignore-no-store
> ignore-private ignore-auth ignore-must-revalidate store-stale
> refresh_pattern -i
> \.(pp(t?x)|s|t)|pdf|rtf|wax|cab|wmx|wpl|cb(r|z|t)|xl(s?x)|do(c?x)|qt|vpx)
> 10080 100% 43800 override-expire override-lastmod ignore-reload
> ignore-no-store ignore-private ignore-auth ignore-must-revalidate
> store-stale

These options do nothing useful