Re: [squid-users] squid with SMP registeration time out when i use 10K opened sessions

2015-09-24 Thread Amos Jeffries
On 25/09/2015 4:09 a.m., Alex Rousskov wrote:
> 
> The attached patch for Squid v3.3.11 changes the port sharing algorithm
> to minimize memory usage (at the expense of registration time). Please
> see the patch preamble for technical details. The patch worked with 3K
> ports (24 workers * 128 http_ports each); the registration lasted less
> than 5 seconds.
> 

Thanks Alex. This is now ported and applied to Squid-4 as rev.14314.

Amos


Re: [squid-users] Optimezed???

2015-09-24 Thread Yuri Voinov

Absolutely.

25.09.15 2:13, Amos Jeffries writes:
> Problems with SSL-Bump are more legal related than technical.




Re: [squid-users] Is it possible to send the connection, starting with the CONNECT, to cache-peer?

2015-09-24 Thread Yuri Voinov

Aha. Good news. That's already something.

25.09.15 1:57, Amos Jeffries writes:
> On 25/09/2015 2:13 a.m., Yuri Voinov wrote:
>>
>> 24.09.15 7:12, Amos Jeffries writes:
>>> On 24/09/2015 2:04 a.m., Yuri Voinov wrote:

 Through assertion and then restarts squid:

 2015/09/23 20:03:25 kid1|   Validated 35899 Entries
 2015/09/23 20:03:25 kid1|   store_swap_size = 1730768.00 KB
 2015/09/23 20:03:26 kid1| storeLateRelease: released 0 objects
 2015/09/23 20:03:26 kid1| assertion failed: PeerConnector.cc:116:
 "peer->use_ssl"
 2015/09/23 20:03:30 kid1| Set Current Directory to /var/cache/squid
 2015/09/23 20:03:30 kid1| Starting Squid Cache version
 3.5.7-20150808-r13884 for x86_64-unknown-cygwin...
 2015/09/23 20:03:30 kid1| Service Name: squid
 2015/09/23 20:03:30 kid1| Process ID 11160
>>
>>> There you go. The peering ACLs are working.
>>
>>> Now you need to fix the ssl_bump rules such that the torproject traffic
>>> does not require bump/decrypt before sending over the insecure peer
>>> connection. Squid does not support re-encrypt.
>> Huh. It works. Thank you, Amos!
>>
>>
>>> Please use 3.5.9 for that part.
>> 3.5.9 does support re-encrypt?
>
> No, but it has better ssl_bump processing and more SNI-related
> functionality that may allow you to avoid having to decrypt in the first
> place.
>
> Amos




Re: [squid-users] Optimezed???

2015-09-24 Thread Yuri Voinov

Heh. The same question I asked earlier.

Condolences. You can try it at your own risk, but that means B1-level
security and the responsibility is entirely yours.

25.09.15 1:32, Jorgeley Junior writes:
> So, if my traffic is more https than http, there's no need to use squid.
> Man, most sites are https, so what's the purpose of using squid?
>
> 2015-09-24 16:13 GMT-03:00 Yuri Voinov :
>
> First. This is potentially dangerous. Can you guarantee your proxy never
> has physical/network access by intruders? HTTPS can contain sensitive data.
> Are you really sure you want problems with users? As a minimum you need to
> protect your proxy at level B2 (per the Orange Book).
>
> Second. Yes, it is dangerous, but possible with SSL Bump, with very aggressive
> cache parameters and in conjunction with the previous point. So this is
> dangerous for many sites - for their functionality and security in general.
>
> Are you still sure you want to do this?
>
> 24.09.15 20:46, Jorgeley Junior writes:
> >>> Can we do that to cache https?
> >>> http_port 3128 ssl-bump generate-host-certificates=on
> >>> dynamic_cert_mem_cache_size=4MB cert=/usr/local/squid/etc/monkey.pem
> >>>
> >>> 2015-09-24 11:24 GMT-03:00 Jorgeley Junior :
> >>>
> >>> Is it not possible to cache https due to the encryption?
> >>>
> >>> 2015-09-18 9:44 GMT-03:00 Antony Stone :
> >>>
> > On Friday 18 September 2015 at 14:27:42, Jorgeley Junior wrote:
> >
> >> is there a way to improve it?
> >
> > Improve what?  The percentage of your traffic which is cached, or the
> > accuracy of the information reported by your monitoring system?
> >
> >
> > If you want to cache more content:
> >
> > 1. Make sure the sites being visited have available content (note that
> > 12.6% of your requests resulted in the remote server saying some
> > variation on "nothing available").
> >
> > 2. Ignore things which are meaningless - such as the 27% of your requests
> > which resulted in 407 Authentication Required - that tells you nothing
> > about whether the user then successfully authenticated and got what they
> > wanted, or didn't, but either way it's a standard response from the server
> > which tells you nothing about the effectiveness of your cache.
> >
> > 3. Make sure your traffic is HTTP instead of HTTPS.
> >
> > 4. Make sure your users are visiting the same sites repeatedly so that
> > content which gets cached gets re-used.
> >
> > 5. Make sure the sites they're visiting are not setting "don't cache" or
> > "already expired" headers (such as is common for news sites, for example)
> > so that the content is cacheable.
> >
> > 6. Run your cache for long enough that it's likely to have a representative
> > proportion of what the users are asking for when you start measuring its
> > effectiveness - if you start from an empty cache and pass requests through
> > it, it's going to take some time for the content to build up so that you
> > see some hits.
> >
> >
> > If you want to improve the information you're getting from the monitoring
> > system, make sure it's telling you how much was cached as a proportion of
> > requests which could have been cached - in other words, leave out HTTPS (36%)
> > and 407 Auth Required (27%), plus anything where the remote server had
> > nothing to provide (13%), and requests where the user's browser already had
> > a cached copy and didn't need to request an update (4%).
> >
> > That throws out 80% of your current statistics, so you concentrate on the
> > data about connections Squid *could* have helped with.
> >
> >> 2015-09-18 8:25 GMT-03:00 Antony Stone:
> >>> On Friday 18 September 2015 at 13:13:27, Jorgeley Junior wrote:
> >>>> hey guys, forgot-me? :(
> >>>
> >>> Surely you can see for yourself how many connections you've had of
> >>> different types?  Here are the most common (all those over 100 instances)
> >>> from your list of 5240 results
> >>>
> > 290 TAG_NONE/503
> > 368 TCP_DENIED/403
> >1421 TCP_DENIED/407
> > 680 TCP_MISS/200
> > 192 TCP_REFRESH_UNMODIFIED/304
> >1896 TCP_TUNNEL/200
> >>>
> >>> So:
> >>>
> >>> 290 (5.5%) got a 503 result (service unavailable)
> >>> 368 (7%) were denied by the remote server with code 403 (forbidden)
> >>> 1421 (27%) were denied by the remote server with code 407 (auth required)
> >>> 680 (13%) were successfully retrieved from the remote servers but were
> >>> not previously in your cache
> >>> 192 (3.6%) were already cached by your browser and didn't need to be
> >>> retrieved
> >>> 1896 (36%) were successful HTTPS tunneled connections, simply being
> >>> forwarded by the proxy

Re: [squid-users] Acl problem

2015-09-24 Thread Amos Jeffries
On 25/09/2015 2:15 a.m., FredB wrote:
> Hi,
> 
> I have a problem with acl and cache_peer 
> 
> I'm trying to allow (and deny for others) a list of destinations, 
> destinations only used by some browsers with this cache_peer
> Something like this
> 
> acl webnoid dstdomain test.fr
> 
> acl browsenoid "/etc/squid/browser"
> 
> cache_peer_access test2 allow browsenoid
> cache_peer_access test2 allow webnoid

For the record "AND" of the above is:

 cache_peer_access test2 allow browsenoid webnoid

though I see you found the all-of ACL anyway. That all-of ACL simplifies the
config for the not-AND exclusion on the other peers, so in your case it is
the better option.
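For reference, a minimal sketch of the combined rules being described, using
the ACL names from this thread (the second peer name "peer3" below is a
made-up placeholder and the ordering is untested here):

 acl noid all-of browsenoid webnoid
 cache_peer_access test2 allow noid
 cache_peer_access test2 deny all
 # the same all-of ACL expresses the not-AND exclusion on another peer
 cache_peer_access peer3 deny noid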

Amos



Re: [squid-users] squid with SMP registeration time out when i use 10K opened sessions

2015-09-24 Thread Amos Jeffries
On 25/09/2015 8:26 a.m., Alex Rousskov wrote:
> On 09/24/2015 02:10 PM, Ahmad Alzaeem wrote:
> 
>> If I use 2K IPs with 2 workers, squid works ok. If I use 10K ports without
>> SMP, squid is ok.
>> With 10K + 2 workers, we have a registration timeout.
> 
> The bigger (workers * ports) product is, the more likely you are to run
> out of the UDS buffer space because unpatched Squid workers request
> sharing of all http_ports at once.
> 
> 
>> error: "net.local.dgram.recvspace" is an unknown key
> 
> Sorry, I do not know what that option is called in your environment.
> 
> 
>> If process #1, I give it 3K ports; if process #2, I give it 3K ports; and
>> so on. Will that succeed?
> 
> I am not sure, but I suspect that you will get different workers
> listening on different ports, without sharing. It is not a configuration
> SMP Squid was designed for, and workers will still send UDS requests to
> share their ports (a worker does not know whether other workers are
> using its port). It does not hurt to try.
> 

I would just add that if you are able to do this then you should also be
able to use a multi-tenant design to scale your Squid horizontally.


PS. Since you are obviously building a custom Squid to get past the 128
listening sockets limit anyway, please do your building with the latest
3.5 series release. The -n option in current 3.5 will let you run a
multi-tenant setup easily.
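As a rough illustration of that multi-tenant idea (the instance names and
config paths below are invented for the example; -n sets the service/instance
name in 3.5):

 squid -n tenant1 -f /etc/squid/tenant1.conf
 squid -n tenant2 -f /etc/squid/tenant2.conf

Each named instance then uses its own IPC socket namespace and can carry its
own subset of http_port lines, so no single coordinator has to hand out
thousands of shared listening sockets.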

Amos



Re: [squid-users] squid with SMP registeration time out when i use 10K opened sessions

2015-09-24 Thread Alex Rousskov
On 09/24/2015 02:10 PM, Ahmad Alzaeem wrote:

> If I use 2K IPs with 2 workers, squid works ok. If I use 10K ports without
> SMP, squid is ok.
> With 10K + 2 workers, we have a registration timeout.

The bigger (workers * ports) product is, the more likely you are to run
out of the UDS buffer space because unpatched Squid workers request
sharing of all http_ports at once.


> error: "net.local.dgram.recvspace" is an unknown key

Sorry, I do not know what that option is called in your environment.
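The "unknown key" error suggests a Linux host; net.local.dgram.recvspace is a
FreeBSD name. On Linux the closest knobs are probably the generic socket-buffer
sysctls shown below, here reusing the value tried earlier in the thread (an
assumption about the environment, not tested advice):

 sysctl -w net.core.rmem_default=1262144
 sysctl -w net.core.rmem_max=1262144
 sysctl -w net.core.wmem_default=1262144
 sysctl -w net.core.wmem_max=1262144
 sysctl -w net.unix.max_dgram_qlen=512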


> If process #1, I give it 3K ports; if process #2, I give it 3K ports; and
> so on. Will that succeed?

I am not sure, but I suspect that you will get different workers
listening on different ports, without sharing. It is not a configuration
SMP Squid was designed for, and workers will still send UDS requests to
share their ports (a worker does not know whether other workers are
using its port). It does not hurt to try.
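For what it is worth, the per-worker split Ahmad describes can be written with
squid.conf conditionals; a sketch only, with placeholder port numbers, and with
the caveat above that those ports would then not be shared between workers:

 workers 2
 if ${process_number} = 1
 http_port 20001
 http_port 20002
 # ... rest of the first block of ports ...
 endif
 if ${process_number} = 2
 http_port 23001
 http_port 23002
 # ... rest of the second block of ports ...
 endif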

Alex.



> -Original Message-
> From: Alex Rousskov [mailto:rouss...@measurement-factory.com]
> Sent: Thursday, September 24, 2015 7:10 PM
> To: squid-users@lists.squid-cache.org
> Cc: Ahmad Alzaeem
> Subject: Re: [squid-users] squid with SMP registeration time out when i use 
> 10K opened sessions
> 
> On 09/24/2015 08:54 AM, Ahmad Alzaeem wrote:
> 
>> If I run it with no SMP and 1 listening ports, it works ok and no problem.
>>
>> If I run squid with 1 listening port with 2 workers ==> kid registration
>> timeout
> 
>> 2015/09/24 14:51:25 kid2| Closing HTTP port [::]:29995
>> 2015/09/24 14:51:25 kid2| Closing HTTP port [::]:29996
>> 2015/09/24 14:51:25 kid2| Closing HTTP port [::]:29997
>> 2015/09/24 14:51:25 kid2| Closing HTTP port [::]:29998
>> 2015/09/24 14:51:25 kid2| Closing HTTP port [::]:2
>> 2015/09/24 14:51:25 kid2| Closing HTTP port [::]:3
> ...
>> FATAL: kid2 registration timed out
> 
>> do we need to increase the timeout, since it takes a long time to load
>> the IPs?
> 
> 
> The existing SMP http_port sharing algorithm needs lots of UDS buffer space 
> to share lots of ports. You may be able to get your configuration working by 
> allocating lots of UDS buffer space (sysctl net.local.dgram.recvspace and 
> such), but it may turn out to be impossible for 10K ports. If there is not 
> enough UDS buffer space, increasing timeout will not help.
> 
> 
> The attached patch for Squid v3.3.11 changes the port sharing algorithm to 
> minimize memory usage (at the expense of registration time). Please see the 
> patch preamble for technical details. The patch worked with 3K ports (24 
> workers * 128 http_ports each); the registration lasted less than 5 seconds.
> 
> I do not recall whether we have tested the patch with 10K ports -- you may 
> need to increase the hard-coded kid registration timeout to handle 10K ports 
> with a patched Squid.
> 
> Sorry, I do not have a patch for other Squid versions at this time.
> 
> 
> HTH,
> 
> Alex.
> 
> 
> 



Re: [squid-users] Optimezed???

2015-09-24 Thread Amos Jeffries
On 25/09/2015 7:13 a.m., Yuri Voinov wrote:
> 
> First. This is potentially dangerous. Can you guarantee your proxy never
> has physical/network access by intruders? HTTPS can contain sensitive
> data. Are you really sure you want problems with users? As a minimum you
> need to protect your proxy at level B2 (per the Orange Book).

No more so than regular HTTP. Particularly now that "TLS everywhere" is
getting popular amongst the big providers, HTTPS sensitivity is being
diluted.

HTTPS messages have the same Cache-Control requirements as unencrypted
HTTP, and Squid obeys them just the same.

What you do have to watch out for is protocol abuse in squid.conf, like
refresh_pattern overrides and ignores. Those are what cause dangerous
trouble, and they do the same with plain HTTP. Proxy admins doing things
like that and breaking HTTP are part of what's making HTTPS popular to
begin with.
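The kind of refresh_pattern abuse being referred to looks like the line below,
shown purely as an illustration of what not to copy blindly (the exact option
set varies between Squid versions):

 refresh_pattern -i \.(jpg|png|gif|zip)$ 10080 90% 43200 override-expire ignore-reload ignore-no-store ignore-private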


> 
> Second. Yes, it is dangerous, but possible with SSL Bump, with very
> aggressive cache parameters and in conjunction with the previous point. So
> this is dangerous for many sites - for their functionality and security,
> in general.
> 

Problems with SSL-Bump are more legal-related than technical.


> You still sure you want to do this?
> 


Amos


Re: [squid-users] squid with SMP registeration time out when i use 10K opened sessions

2015-09-24 Thread Ahmad Alzaeem

Hi Alex,

Thanks for answering me.

As I told you:

If I use 2K IPs with 2 workers, squid works ok. If I use 10K ports without SMP,
squid is ok.

With 10K + 2 workers, we have a registration timeout.

I have already added the key you mentioned below:

net.local.dgram.recvspace = 1262144

But when I do sysctl -p I get:

error: "net.local.dgram.recvspace" is an unknown key

Are there any other tricks I can change with squid?

I can use your version 3.3.11 patch to increase the timeout and handle more
listening ports.

But I have another idea.

What about using the "if else" option?

Like: if process #1, I give it 3K ports; if process #2, I give it 3K ports;
and so on. Will that succeed?

Awaiting your reply about the patch and how to use it.

Many thanks

-Original Message-
From: Alex Rousskov [mailto:rouss...@measurement-factory.com]
Sent: Thursday, September 24, 2015 7:10 PM
To: squid-users@lists.squid-cache.org
Cc: Ahmad Alzaeem
Subject: Re: [squid-users] squid with SMP registeration time out when i use 10K 
opened sessions

On 09/24/2015 08:54 AM, Ahmad Alzaeem wrote:

> If I run it with no SMP and 1 listening ports, it works ok and no problem.
> 
> If I run squid with 1 listening port with 2 workers ==> kid registration
> timeout

> 2015/09/24 14:51:25 kid2| Closing HTTP port [::]:29995
> 2015/09/24 14:51:25 kid2| Closing HTTP port [::]:29996
> 2015/09/24 14:51:25 kid2| Closing HTTP port [::]:29997
> 2015/09/24 14:51:25 kid2| Closing HTTP port [::]:29998
> 2015/09/24 14:51:25 kid2| Closing HTTP port [::]:2
> 2015/09/24 14:51:25 kid2| Closing HTTP port [::]:3
...
> FATAL: kid2 registration timed out

> do we need to increase the timeout, since it takes a long time to load
> the IPs?


The existing SMP http_port sharing algorithm needs lots of UDS buffer space to 
share lots of ports. You may be able to get your configuration working by 
allocating lots of UDS buffer space (sysctl net.local.dgram.recvspace and 
such), but it may turn out to be impossible for 10K ports. If there is not 
enough UDS buffer space, increasing timeout will not help.


The attached patch for Squid v3.3.11 changes the port sharing algorithm to 
minimize memory usage (at the expense of registration time). Please see the 
patch preamble for technical details. The patch worked with 3K ports (24 
workers * 128 http_ports each); the registration lasted less than 5 seconds.

I do not recall whether we have tested the patch with 10K ports -- you may need 
to increase the hard-coded kid registration timeout to handle 10K ports with a 
patched Squid.

Sorry, I do not have a patch for other Squid versions at this time.


HTH,

Alex.




Re: [squid-users] Is it possible to send the connection, starting with the CONNECT, to cache-peer?

2015-09-24 Thread Amos Jeffries
On 25/09/2015 2:13 a.m., Yuri Voinov wrote:
> 
> 24.09.15 7:12, Amos Jeffries writes:
>> On 24/09/2015 2:04 a.m., Yuri Voinov wrote:
>>>
>>> Through assertion and then restarts squid:
>>>
>>> 2015/09/23 20:03:25 kid1|   Validated 35899 Entries
>>> 2015/09/23 20:03:25 kid1|   store_swap_size = 1730768.00 KB
>>> 2015/09/23 20:03:26 kid1| storeLateRelease: released 0 objects
>>> 2015/09/23 20:03:26 kid1| assertion failed: PeerConnector.cc:116:
>>> "peer->use_ssl"
>>> 2015/09/23 20:03:30 kid1| Set Current Directory to /var/cache/squid
>>> 2015/09/23 20:03:30 kid1| Starting Squid Cache version
>>> 3.5.7-20150808-r13884 for x86_64-unknown-cygwin...
>>> 2015/09/23 20:03:30 kid1| Service Name: squid
>>> 2015/09/23 20:03:30 kid1| Process ID 11160
> 
>> There you go. The peering ACLs are working.
> 
>> Now you need to fix the ssl_bump rules such that the torproject traffic
>> does not require bump/decrypt before sending over the insecure peer
>> connection. Squid does not support re-encrypt.
> Huh. It works. Thank you, Amos!
> 
> 
>> Please use 3.5.9 for that part.
> 3.5.9 does support re-encrypt?

No, but it has better ssl_bump processing and more SNI-related
functionality that may allow you to avoid having to decrypt in the first
place.
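A rough sketch of what that can look like in 3.5, splicing the traffic by SNI
instead of bumping it (the domain ACL and rule order here are illustrative
assumptions, not Amos's exact advice):

 acl tor_sites ssl::server_name .torproject.org
 acl step1 at_step SslBump1
 ssl_bump peek step1
 ssl_bump splice tor_sites
 ssl_bump bump all

Spliced connections stay encrypted end to end, so they can be relayed to the
cache_peer as plain CONNECT tunnels without needing the re-encrypt step Squid
lacks.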

Amos


Re: [squid-users] squid config request

2015-09-24 Thread Amos Jeffries
On 25/09/2015 12:55 a.m., sabriasat Nouri wrote:
> Can anyone share a SQUID 3.3.8 config with me? I want that config to
> allow only the IP ranges 197.9.x.x and 197.8.x.x. I want that config to
> disallow access to cgi-bin URLs too, and any good optimisation is
> welcome
> 

The FAQ on access controls is at


You need to share your existing config before anyone can help more than
that.
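For reference, the kind of rules being asked for would look roughly like this
(a sketch only: the /16 masks are a guess at what "197.9.x.x" means, and the
ACL names are invented):

 acl allowed_nets src 197.8.0.0/16 197.9.0.0/16
 acl cgibin urlpath_regex -i /cgi-bin/
 http_access deny cgibin
 http_access allow allowed_nets
 http_access deny all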

Amos


Re: [squid-users] Optimezed???

2015-09-24 Thread Jorgeley Junior
So, if my traffic is more https than http, there's no need to use squid.
Man, most sites are https, so what's the purpose of using squid?

2015-09-24 16:13 GMT-03:00 Yuri Voinov :

>
>
> First. This is potentially dangerous. Can you guarantee your proxy never
> has physical/network access by intruders? HTTPS can contain sensitive data.
> Are you really sure you want problems with users? As a minimum you need to
> protect your proxy at level B2 (per the Orange Book).
>
> Second. Yes, it is dangerous, but possible with SSL Bump, with very aggressive
> cache parameters and in conjunction with the previous point. So this is
> dangerous for many sites - for their functionality and security in general.
>
> Are you still sure you want to do this?
>
> 24.09.15 20:46, Jorgeley Junior writes:
> > Can we do that to cache https?
> > http_port 3128 ssl-bump generate-host-certificates=on
> > dynamic_cert_mem_cache_size=4MB cert=/usr/local/squid/etc/monkey.pem
> >
> > 2015-09-24 11:24 GMT-03:00 Jorgeley Junior 
> :
> >
> >> Is it not possible to cache the https due the encryption?
> >>
> >> 2015-09-18 9:44 GMT-03:00 Antony Stone
>  
> >> :
> >>
> >>> On Friday 18 September 2015 at 14:27:42, Jorgeley Junior wrote:
> >>>
>  there is a way to improve it?
> >>>
> >>> Improve what?  The percentage of your traffic which is cached, or the
> >>> accuracy
> >>> of the information reported by your monitoring system?
> >>>
> >>>
> >>> If you want to cache more content:
> >>>
> >>> 1. Make sure the sites being visited have available content (note that
> >>> 12.6%
> >>> of your requests resulted in the remote server saying some variation on
> >>> "nothing available").
> >>>
> >>> 2. Ignore things which are meaningless - such as the 27% of your
> requests
> >>> which resulted in 407 Authentication Required - that tells you nothing
> >>> about
> >>> whether the user then successfully authenticated and got what they
> >>> wanted, or
> >>> didn't, but either way it's a standard response from the server which
> >>> tells
> >>> you nothing about the effectiveness of your cache.
> >>>
> >>> 3. Make sure your traffic is HTTP instead of HTTPS.
> >>>
> >>> 4. Make sure your users are visiting the same sites repeatedly so that
> >>> content
> >>> which gets cached gets re-used.
> >>>
> >>> 5. Make sure the sites they're visiting are not setting "don't cache"
> or
> >>> "already expired" headers (such as is common for news sites, for
> example)
> >>> so
> >>> that the content is cacheable.
> >>>
> >>> 6. Run your cache for long enough that it's likely to have a
> >>> representative
> >>> proportion of what the users are asking for when you start measuring
> its
> >>> effectiveness - if you start from an empty cache and pass requests
> >>> through it,
> >>> it's going to take some time for the content to build up so that you
> see
> >>> some
> >>> hits.
> >>>
> >>>
> >>> If you want to improve the information you're getting from the
> monitoring
> >>> system, make sure it's telling you how much was cached as a proportion
> of
> >>> requests which could have been cached - in other words, leave out HTTPS
> >>> (36%)
> >>> and 407 Auth Required (27%), plus anything where the remote server had
> >>> nothing
> >>> to provide (13%), and requests where the user's browser already had a
> >>> cached
> >>> copy and didn't need to request an update (4%).
> >>>
> >>> That throws out 80% of your current statistics, so you concentrate on
> the
> >>> data
> >>> about connections Squid *could* have helped with.
> >>>
>  2015-09-18 8:25 GMT-03:00 Antony Stone:
> > On Friday 18 September 2015 at 13:13:27, Jorgeley Junior wrote:
> >> hey guys, forgot-me? :(
> >
> > Surely you can see for yourself how many connections you've had of
> > different types?  Here are the most common (all those over 100
> >>> instances)
> > from your list of 5240 results
> >
> >>> 290 TAG_NONE/503
> >>> 368 TCP_DENIED/403
> >>>1421 TCP_DENIED/407
> >>> 680 TCP_MISS/200
> >>> 192 TCP_REFRESH_UNMODIFIED/304
> >>>1896 TCP_TUNNEL/200
> >
> > So:
> >
> > 290 (5.5%) got a 503 result (service unavailable)
> > 368 (7%) were denied by the remote server with code 403 (forbidden)
> > 1421 (27%) were denied by the remote server with code 407 (auth required)
> > 680 (13%) were successfully retrieved from the remote servers but were
> > not previously in your cache
> > 192 (3.6%) were already cached by your browser and didn't need to be
> > retrieved
> > 1896 (36%) were successful HTTPS tunneled connections, simply being
> > forwarded
> > by the proxy
> >
> > This accounts for 4847 (92.5%) of your 5240 results.
> >
> > As you can see, just measuring HIT and MISS is not the whole picture.
> >
> >
> > Hope that helps,
> >
> >
> > Antony.
> >>>
> >>> --
> >>> "The problem with television is that the pe

Re: [squid-users] Optimezed???

2015-09-24 Thread Yuri Voinov

First. This is potentially dangerous. Can you guarantee your proxy never
has physical/network access by intruders? HTTPS can contain sensitive
data. Are you really sure you want problems with users? As a minimum you
need to protect your proxy at level B2 (per the Orange Book).

Second. Yes, it is dangerous, but possible with SSL Bump, with very
aggressive cache parameters and in conjunction with the previous point. So
this is dangerous for many sites - for their functionality and security,
in general.

Are you still sure you want to do this?

24.09.15 20:46, Jorgeley Junior writes:
> Can we do that to cache https?
> http_port 3128 ssl-bump generate-host-certificates=on
> dynamic_cert_mem_cache_size=4MB cert=/usr/local/squid/etc/monkey.pem
>
> 2015-09-24 11:24 GMT-03:00 Jorgeley Junior :
>
>> Is it not possible to cache the https due the encryption?
>>
>> 2015-09-18 9:44 GMT-03:00 Antony Stone

>> :
>>
>>> On Friday 18 September 2015 at 14:27:42, Jorgeley Junior wrote:
>>>
 there is a way to improve it?
>>>
>>> Improve what?  The percentage of your traffic which is cached, or the
>>> accuracy
>>> of the information reported by your monitoring system?
>>>
>>>
>>> If you want to cache more content:
>>>
>>> 1. Make sure the sites being visited have available content (note that
>>> 12.6%
>>> of your requests resulted in the remote server saying some variation on
>>> "nothing available").
>>>
>>> 2. Ignore things which are meaningless - such as the 27% of your
requests
>>> which resulted in 407 Authentication Required - that tells you nothing
>>> about
>>> whether the user then successfully authenticated and got what they
>>> wanted, or
>>> didn't, but either way it's a standard response from the server which
>>> tells
>>> you nothing about the effectiveness of your cache.
>>>
>>> 3. Make sure your traffic is HTTP instead of HTTPS.
>>>
>>> 4. Make sure your users are visiting the same sites repeatedly so that
>>> content
>>> which gets cached gets re-used.
>>>
>>> 5. Make sure the sites they're visiting are not setting "don't cache" or
>>> "already expired" headers (such as is common for news sites, for
example)
>>> so
>>> that the content is cacheable.
>>>
>>> 6. Run your cache for long enough that it's likely to have a
>>> representative
>>> proportion of what the users are asking for when you start measuring its
>>> effectiveness - if you start from an empty cache and pass requests
>>> through it,
>>> it's going to take some time for the content to build up so that you see
>>> some
>>> hits.
>>>
>>>
>>> If you want to improve the information you're getting from the
monitoring
>>> system, make sure it's telling you how much was cached as a
proportion of
>>> requests which could have been cached - in other words, leave out HTTPS
>>> (36%)
>>> and 407 Auth Required (27%), plus anything where the remote server had
>>> nothing
>>> to provide (13%), and requests where the user's browser already had a
>>> cached
>>> copy and didn't need to request an update (4%).
>>>
>>> That throws out 80% of your current statistics, so you concentrate
on the
>>> data
>>> about connections Squid *could* have helped with.
>>>
 2015-09-18 8:25 GMT-03:00 Antony Stone:
> On Friday 18 September 2015 at 13:13:27, Jorgeley Junior wrote:
>> hey guys, forgot-me? :(
>
> Surely you can see for yourself how many connections you've had of
> different types?  Here are the most common (all those over 100
>>> instances)
> from your list of 5240 results
>
>>> 290 TAG_NONE/503
>>> 368 TCP_DENIED/403
>>>1421 TCP_DENIED/407
>>> 680 TCP_MISS/200
>>> 192 TCP_REFRESH_UNMODIFIED/304
>>>1896 TCP_TUNNEL/200
>
> So:
>
> 290 (5.5%) got a 503 result (service unavailable)
> 368 (7%) were denied by the remote server with code 403 (forbidden)
> 1421 (27%) were denied by the remote server with code 407 (auth required)
> 680 (13%) were successfully retrieved from the remote servers but were
> not previously in your cache
> 192 (3.6%) were already cached by your browser and didn't need to be
> retrieved
> 1896 (36%) were successful HTTPS tunneled connections, simply being
> forwarded
> by the proxy
>
> This accounts for 4847 (92.5%) of your 5240 results.
>
> As you can see, just measuring HIT and MISS is not the whole picture.
>
>
> Hope that helps,
>
>
> Antony.
>>>
>>> --
>>> "The problem with television is that the people must sit and keep their
>>> eyes
>>> glued on a screen; the average American family hasn't time for it."
>>>
>>>  - New York Times, following a demonstration at the 1939 World's Fair.
>>>
>>>Please reply to the
>>> list;
>>>  please *don't*
>>> CC me.

Re: [squid-users] squid with SMP registeration time out when i use 10K opened sessions

2015-09-24 Thread Alex Rousskov
On 09/24/2015 08:54 AM, Ahmad Alzaeem wrote:

> If I run it with no SMP and 1 listening ports, it works ok and no problem.
> 
> If I run squid with 1 listening port with 2 workers ==> kid registration
> timeout

> 2015/09/24 14:51:25 kid2| Closing HTTP port [::]:29995
> 2015/09/24 14:51:25 kid2| Closing HTTP port [::]:29996
> 2015/09/24 14:51:25 kid2| Closing HTTP port [::]:29997
> 2015/09/24 14:51:25 kid2| Closing HTTP port [::]:29998
> 2015/09/24 14:51:25 kid2| Closing HTTP port [::]:2
> 2015/09/24 14:51:25 kid2| Closing HTTP port [::]:3
...
> FATAL: kid2 registration timed out

> do we need to increase the timeout, since it takes a long time to load
> the IPs?


The existing SMP http_port sharing algorithm needs lots of UDS buffer
space to share lots of ports. You may be able to get your configuration
working by allocating lots of UDS buffer space (sysctl
net.local.dgram.recvspace and such), but it may turn out to be
impossible for 10K ports. If there is not enough UDS buffer space,
increasing timeout will not help.


The attached patch for Squid v3.3.11 changes the port sharing algorithm
to minimize memory usage (at the expense of registration time). Please
see the patch preamble for technical details. The patch worked with 3K
ports (24 workers * 128 http_ports each); the registration lasted less
than 5 seconds.

I do not recall whether we have tested the patch with 10K ports -- you
may need to increase the hard-coded kid registration timeout to handle
10K ports with a patched Squid.

Sorry, I do not have a patch for other Squid versions at this time.


HTH,

Alex.

In SMP mode, limit concurrent worker registrations of shared http[s]_ports
to support many listening ports shared by many workers with small UDS buffers.

Initial implementation allowed each worker to send all "give me a shared
listening port" UDS requests to Coordinator nearly at once. Each successful
response (carrying a listening FD) is about 4KB in size. With many workers
and/or many http[s]_ports, it was easy to run out of UDS buffer space
(especially on FreeBSD), leading to worker registration timeouts and other
related problems.

This implementation limits each worker to one listening request at a time.
This effectively limits concurrent responses to only a "few" (up to the number
of workers) at a time. The latter allows SMP Squid to support at least 24
workers and 128 http_ports with default UDS receive buffer space
(net.local.dgram.recvspace=65536).  The total startup delay in that rather
"high-end" configuration is still under 5 seconds.

=== modified file 'src/ipc/SharedListen.cc'
--- src/ipc/SharedListen.cc	2012-12-02 07:23:32 +
+++ src/ipc/SharedListen.cc	2013-12-16 21:35:07 +
@@ -1,51 +1,56 @@
 /*
  * DEBUG: section 54Interprocess Communication
  */
 
 #include "squid.h"
 #include "comm.h"
 #include "base/TextException.h"
 #include "comm/Connection.h"
 #include "globals.h"
 #include "ipc/Port.h"
 #include "ipc/Messages.h"
 #include "ipc/Kids.h"
 #include "ipc/TypedMsgHdr.h"
 #include "ipc/StartListening.h"
 #include "ipc/SharedListen.h"
 #include "tools.h"
 
+#include <list>
 #include <map>
 
 /// holds information necessary to handle JoinListen response
 class PendingOpenRequest
 {
 public:
 Ipc::OpenListenerParams params; ///< actual comm_open_sharedListen() parameters
 AsyncCall::Pointer callback; // who to notify
 };
 
 /// maps ID assigned at request time to the response callback
 typedef std::map<int, PendingOpenRequest> SharedListenRequestMap;
 static SharedListenRequestMap TheSharedListenRequestMap;
 
+/// accumulates delayed requests until they are ready to be sent, in FIFO order
+typedef std::list<PendingOpenRequest> DelayedSharedListenRequests;
+static DelayedSharedListenRequests TheDelayedRequests;
+
 static int
 AddToMap(const PendingOpenRequest &por)
 {
 // find unused ID using linear search; there should not be many entries
 for (int id = 0; true; ++id) {
 if (TheSharedListenRequestMap.find(id) == TheSharedListenRequestMap.end()) {
 TheSharedListenRequestMap[id] = por;
 return id;
 }
 }
 assert(false); // not reached
 return -1;
 }
 
 Ipc::OpenListenerParams::OpenListenerParams()
 {
 memset(this, 0, sizeof(*this));
 }
 
 bool
@@ -83,73 +88,103 @@ Ipc::SharedListenResponse::SharedListenR
 fd(aFd), errNo(anErrNo), mapId(aMapId)
 {
 }
 
 Ipc::SharedListenResponse::SharedListenResponse(const TypedMsgHdr &hdrMsg):
 fd(-1), errNo(0), mapId(-1)
 {
 hdrMsg.checkType(mtSharedListenResponse);
 hdrMsg.getPod(*this);
 fd = hdrMsg.getFd();
 // other conn details are passed in OpenListenerParams and filled out by SharedListenJoin()
 }
 
 void Ipc::SharedListenResponse::pack(TypedMsgHdr &hdrMsg) const
 {
 hdrMsg.setType(mtSharedListenResponse);
 hdrMsg.putPod(*this);
 hdrMsg.putFd(fd);
 }
 
-void Ipc::JoinSharedListen(const OpenListenerParams ¶ms,
-   AsyncCall::Pointer &callback)
-{
-PendingOpenRequest por;
-por.par

Re: [squid-users] Acl problem

2015-09-24 Thread FredB
So stupid: it was just a problem with the webnoid dstdomain acl; ".test.fr"
was needed for some requests.
The all-of ACL is a very nice feature!
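Spelled out, the working combination reads roughly like this (the browser ACL
type for /etc/squid/browser is an assumption, since the type was not visible
in the original post):

 acl webnoid dstdomain .test.fr
 acl browsenoid browser "/etc/squid/browser"
 acl noid all-of webnoid browsenoid
 cache_peer_access test2 allow noid
 cache_peer_access test2 deny all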


[squid-users] squid with SMP registeration time out when i use 10K opened sessions

2015-09-24 Thread Ahmad Alzaeem
Hi support,

I'm using my squid as a proxy for IPv6.

 

I can use 2000 IPs with 2 workers and no problem.

 

The problem is:

If I run it with no SMP and 1 listening ports, it works ok and no problem.

If I run squid with 1 listening port with 2 workers ==> kid registration
timeout.

If I run it with no SMP, it works ok and no problem.

 

If I run it with SMP with 2 workers,

I have a registration timeout.

 

2015/09/24 14:51:25 kid2| Closing HTTP port [::]:29995

2015/09/24 14:51:25 kid2| Closing HTTP port [::]:29996

2015/09/24 14:51:25 kid2| Closing HTTP port [::]:29997

2015/09/24 14:51:25 kid2| Closing HTTP port [::]:29998

2015/09/24 14:51:25 kid2| Closing HTTP port [::]:2

2015/09/24 14:51:25 kid2| Closing HTTP port [::]:3

2015/09/24 14:51:25 kid2| storeDirWriteCleanLogs: Starting...

2015/09/24 14:51:25 kid2|   Finished.  Wrote 0 entries.

2015/09/24 14:51:25 kid2|   Took 0.00 seconds (  0.00 entries/sec).

FATAL: kid2 registration timed out

===

 

I already removed  expanded the options

Here are my options:

 

 

 

]# ls -l /var/run/squid

total 0

srwxr-x--- 1 squid squid 0 Sep 24 14:23 squid-coordinator.ipc

srwxr-x--- 1 squid squid 0 Sep 24 14:47 squid-kid-1.ipc

srwxr-x--- 1 squid squid 0 Sep 24 14:51 squid-kid-2.ipc

[root@li970-79 ~]#

 

 

 

Here is what I have:

[root@li970-79 ~]# squid -v

Squid Cache: Version 3.5.2

Service Name: squid

configure options:  '--prefix=/usr' '--includedir=/include'
'--mandir=/share/man' '--infodir=/share/info' '--sysconfdir=/etc'
'--enable-cachemgr-hostname=Ahmad-Allzaeem' '--localstartedir=/var'
'--libexecdir=/lib/squid' '--disable-maintainer-mode'
'--disable-dependency-tracking' '--disable-silent-rules' '--srcdir=.'
'--datadir=/usr/share/squid' '--sysconfdir=/etc/squid'
'--mandir=/usr/share/man' '--enable-inline' '--enable-async-io=8'
'--enable-storeio=ufs,aufs,diskd,rock' '--enable-removal-policies=lru,heap'
'--enable-delay-pools' '--enable-cache-digests' '--enable-underscores'
'--enable-icap-client' '--enable-follow-x-forwarded-for' '--enable-auth'
'--enable-b@sic-auth-helpers=LDAP,MSNT,NCSA,PAM,SASL,SMB,YP,DB,POP3,getpwnam
,squid_radius_auth,multi-domain-NTLM' '--enable-ntlm-auth-helpers=smfb_lm'
'--enable-digest-auth-helpers=ldap,password'
'--enable-negotiate-auth-helpers=squid_kerb_auth' '--enable-efsi'
'--disable-translation' '--with-logdir=/var/log/squid'
'--with-pidfile=/var/run/squid.pid' '--with-filedescriptors=1311072'
'--with-large-files' '--with-default-user=squid' '--enable-linux-netfilter'
'--enable-ltdl-convenience' '--enable-ssl' '--enable-ssl-crtd'
'--enable-arp-acl' 'CXXFLAGS=-DMAXTCPLISTENPORTS=2' '--with-openssl'
'--enable-snmp' '--with-included-ltdl' '--disable-arch-native

 

 

 

Any help, guys?

 

Do we need to increase the timeout, since it takes a long time to load the
IPs?

 

 



Re: [squid-users] Optimezed???

2015-09-24 Thread Jorgeley Junior
Can we do that to cache https?
http_port 3128 ssl-bump generate-host-certificates=on
dynamic_cert_mem_cache_size=4MB cert=/usr/local/squid/etc/monkey.pem
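For context, that port line normally needs a few companion directives before
Squid will actually bump and cache the traffic; a minimal 3.5-style sketch with
assumed helper paths (and note the warnings elsewhere in this thread before
attempting it):

 sslcrtd_program /usr/local/squid/libexec/ssl_crtd -s /usr/local/squid/var/lib/ssl_db -M 4MB
 sslcrtd_children 5
 acl step1 at_step SslBump1
 ssl_bump peek step1
 ssl_bump bump all

The ssl_db certificate cache directory also has to be initialised once with
"ssl_crtd -c -s <path>" before the helper can use it.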

2015-09-24 11:24 GMT-03:00 Jorgeley Junior :

> Is it not possible to cache the https due the encryption?
>
> 2015-09-18 9:44 GMT-03:00 Antony Stone 
> :
>
>> On Friday 18 September 2015 at 14:27:42, Jorgeley Junior wrote:
>>
>> > there is a way to improve it?
>>
>> Improve what?  The percentage of your traffic which is cached, or the
>> accuracy
>> of the information reported by your monitoring system?
>>
>>
>> If you want to cache more content:
>>
>> 1. Make sure the sites being visited have available content (note that
>> 12.6%
>> of your requests resulted in the remote server saying some variation on
>> "nothing available").
>>
>> 2. Ignore things which are meaningless - such as the 27% of your requests
>> which resulted in 407 Authentication Required - that tells you nothing
>> about
>> whether the user then successfully authenticated and got what they
>> wanted, or
>> didn't, but either way it's a standard response from the server which
>> tells
>> you nothing about the effectiveness of your cache.
>>
>> 3. Make sure your traffic is HTTP instead of HTTPS.
>>
>> 4. Make sure your users are visiting the same sites repeatedly so that
>> content
>> which gets cached gets re-used.
>>
>> 5. Make sure the sites they're visiting are not setting "don't cache" or
>> "already expired" headers (such as is common for news sites, for example)
>> so
>> that the content is cacheable.
>>
>> 6. Run your cache for long enough that it's likely to have a
>> representative
>> proportion of what the users are asking for when you start measuring its
>> effectiveness - if you start from an empty cache and pass requests
>> through it,
>> it's going to take some time for the content to build up so that you see
>> some
>> hits.
>>
>>
>> If you want to improve the information you're getting from the monitoring
>> system, make sure it's telling you how much was cached as a proportion of
>> requests which could have been cached - in other words, leave out HTTPS
>> (36%)
>> and 407 Auth Required (27%), plus anything where the remote server had
>> nothing
>> to provide (13%), and requests where the user's browser already had a
>> cached
>> copy and didn't need to request an update (4%).
>>
>> That throws out 80% of your current statistics, so you concentrate on the
>> data
>> about connections Squid *could* have helped with.
>>
>> > 2015-09-18 8:25 GMT-03:00 Antony Stone:
>> > > On Friday 18 September 2015 at 13:13:27, Jorgeley Junior wrote:
>> > > > hey guys, forgot-me? :(
>> > >
>> > > Surely you can see for yourself how many connections you've had of
>> > > different types?  Here are the most common (all those over 100
>> instances)
>> > > from your list of 5240 results
>> > >
>> > > > > 290 TAG_NONE/503
>> > > > > 368 TCP_DENIED/403
>> > > > >1421 TCP_DENIED/407
>> > > > > 680 TCP_MISS/200
>> > > > > 192 TCP_REFRESH_UNMODIFIED/304
>> > > > >1896 TCP_TUNNEL/200
>> > >
>> > > So:
>> > >
>> > > 290 (5.5%) got a 503 result (service unavailable)
>> > > 368 (7%) were denied by the remote server with code 403 (forbidden)
>> > > 1421 (27%) were denied by the remote server with code 407 (auth required)
>> > > 680 (13%) were successfully retrieved from the remote servers but were
>> > > not previously in your cache
>> > > 192 (3.6%) were already cached by your browser and didn't need to be
>> > > retrieved
>> > > 1896 (36%) were successful HTTPS tunneled connections, simply being
>> > > forwarded
>> > > by the proxy
>> > >
>> > > This accounts for 4847 (92.5%) of your 5240 results.
>> > >
>> > > As you can see, just measuring HIT and MISS is not the whole picture.
>> > >
>> > >
>> > > Hope that helps,
>> > >
>> > >
>> > > Antony.
>>
>> --
>> "The problem with television is that the people must sit and keep their
>> eyes
>> glued on a screen; the average American family hasn't time for it."
>>
>>  - New York Times, following a demonstration at the 1939 World's Fair.
>>
>>Please reply to the
>> list;
>>  please *don't*
>> CC me.
>>
>
>
>
> --
>
>
>


--


Re: [squid-users] Optimezed???

2015-09-24 Thread Jorgeley Junior
Is it not possible to cache https due to the encryption?

2015-09-18 9:44 GMT-03:00 Antony Stone :

> On Friday 18 September 2015 at 14:27:42, Jorgeley Junior wrote:
>
> > there is a way to improve it?
>
> Improve what?  The percentage of your traffic which is cached, or the
> accuracy
> of the information reported by your monitoring system?
>
>
> If you want to cache more content:
>
> 1. Make sure the sites being visited have available content (note that
> 12.6%
> of your requests resulted in the remote server saying some variation on
> "nothing available").
>
> 2. Ignore things which are meaningless - such as the 27% of your requests
> which resulted in 407 Authentication Required - that tells you nothing
> about
> whether the user then successfully authenticated and got what they wanted,
> or
> didn't, but either way it's a standard response from the server which tells
> you nothing about the effectiveness of your cache.
>
> 3. Make sure your traffic is HTTP instead of HTTPS.
>
> 4. Make sure your users are visiting the same sites repeatedly so that
> content
> which gets cached gets re-used.
>
> 5. Make sure the sites they're visiting are not setting "don't cache" or
> "already expired" headers (such as is common for news sites, for example)
> so
> that the content is cacheable.
>
> 6. Run your cache for long enough that it's likely to have a representative
> proportion of what the users are asking for when you start measuring its
> effectiveness - if you start from an empty cache and pass requests through
> it,
> it's going to take some time for the content to build up so that you see
> some
> hits.
>
>
> If you want to improve the information you're getting from the monitoring
> system, make sure it's telling you how much was cached as a proportion of
> requests which could have been cached - in other words, leave out HTTPS
> (36%)
> and 407 Auth Required (27%), plus anything where the remote server had
> nothing
> to provide (13%), and requests where the user's browser already had a
> cached
> copy and didn't need to request an update (4%).
>
> That throws out 80% of your current statistics, so you concentrate on the
> data
> about connections Squid *could* have helped with.
>
> > 2015-09-18 8:25 GMT-03:00 Antony Stone:
> > > On Friday 18 September 2015 at 13:13:27, Jorgeley Junior wrote:
> > > > hey guys, forgot-me? :(
> > >
> > > Surely you can see for yourself how many connections you've had of
> > > different types?  Here are the most common (all those over 100
> instances)
> > > from your list of 5240 results
> > >
> > > > > 290 TAG_NONE/503
> > > > > 368 TCP_DENIED/403
> > > > >1421 TCP_DENIED/407
> > > > > 680 TCP_MISS/200
> > > > > 192 TCP_REFRESH_UNMODIFIED/304
> > > > >1896 TCP_TUNNEL/200
> > >
> > > So:
> > >
> > > 290 (5.5%) got a 503 result (service unavailable)
> > > 368 (7%) were denied by the remote server with code 403 (forbidden)
> > > 1421 (27%) were denied by the remote server with code 407 (auth required)
> > > 680 (13%) were successfully retrieved from the remote servers but were
> > > not previously in your cache
> > > 192 (3.6%) were already cached by your browser and didn't need to be
> > > retrieved
> > > 1896 (36%) were successful HTTPS tunneled connections, simply being
> > > forwarded
> > > by the proxy
> > >
> > > This accounts for 4847 (92.5%) of your 5240 results.
> > >
> > > As you can see, just measuring HIT and MISS is not the whole picture.
> > >
> > >
> > > Hope that helps,
> > >
> > >
> > > Antony.
>
> --
> "The problem with television is that the people must sit and keep their
> eyes
> glued on a screen; the average American family hasn't time for it."
>
>  - New York Times, following a demonstration at the 1939 World's Fair.
>
>Please reply to the
> list;
>  please *don't* CC
> me.
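The per-status counts tabulated above can be regenerated straight from the
proxy's own log with a short pipeline; a sketch assuming the default native
access.log format, in which the fourth field is the result code:

 awk '{print $4}' /var/log/squid/access.log | sort | uniq -c | sort -rn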



--


[squid-users] Acl problem

2015-09-24 Thread FredB
Hi,

I have a problem with acl and cache_peer 

I'm trying to allow (and deny for all others) a list of destinations that are
only used by some browsers with this cache_peer.
Something like this:

acl webnoid dstdomain test.fr

acl browsenoid "/etc/squid/browser"

cache_peer_access test2 allow browsenoid
cache_peer_access test2 allow webnoid
cache_peer_access test2 deny all 

After this, another cache_peer with browsenoid denied -> good.

It's almost right, but the match is OR and I want AND.
If I try test.fr with any browser it is allowed, and I get the same problem
with google if I'm using a browser in browsenoid.

I tried mixed combinations without any success.

How can I do that, if I can?

I tried acl all-of, but without any success:

acl noid all-of webnoid browsenoid
http_access deny noid -> no drop

Regards

Fred

 


Re: [squid-users] Is it possible to send the connection, starting with the CONNECT, to cache-peer?

2015-09-24 Thread Yuri Voinov



24.09.15 7:12, Amos Jeffries writes:
> On 24/09/2015 2:04 a.m., Yuri Voinov wrote:
>>
>> Through assertion and then restarts squid:
>>
>> 2015/09/23 20:03:25 kid1|   Validated 35899 Entries
>> 2015/09/23 20:03:25 kid1|   store_swap_size = 1730768.00 KB
>> 2015/09/23 20:03:26 kid1| storeLateRelease: released 0 objects
>> 2015/09/23 20:03:26 kid1| assertion failed: PeerConnector.cc:116:
>> "peer->use_ssl"
>> 2015/09/23 20:03:30 kid1| Set Current Directory to /var/cache/squid
>> 2015/09/23 20:03:30 kid1| Starting Squid Cache version
>> 3.5.7-20150808-r13884 for x86_64-unknown-cygwin...
>> 2015/09/23 20:03:30 kid1| Service Name: squid
>> 2015/09/23 20:03:30 kid1| Process ID 11160
>
> There you go. The peering ACLs are working.
>
> Now you need to fix the ssl_bump rules such that the torproject traffic
> does not require bump/decrypt before sending over the insecure peer
> connection. Squid does not support re-encrypt.
Huh. It works. Thank you, Amos!
>
>
> Please use 3.5.9 for that part.
3.5.9 does support re-encrypt?
>
>
> Amos




[squid-users] squid config request

2015-09-24 Thread sabriasat Nouri
Can anyone share a SQUID 3.3.8 config with me?
I want that config to allow only the IP ranges 197.9.x.x and 197.8.x.x. I want
that config to disallow access to cgi-bin URLs too, and any good optimisation
is welcome.

Thank you.


Re: [squid-users] help with acl order and deny_info pages

2015-09-24 Thread Amos Jeffries
On 24/09/2015 7:30 p.m., Marko Cupać wrote:
> On Sun, 20 Sep 2015 21:43:26 +1200
> Amos Jeffries  wrote:
> 
>> On 17/09/2015 7:24 p.m., Marko Cupać wrote:
>>> On Thu, 17 Sep 2015 03:00:56 +1200
>>> Amos Jeffries  wrote:
>>>
 On 17/09/2015 12:37 a.m., Marko Cupać wrote:
> Hi,
>
> I'm trying to set up squid in a way that it authenticates users via
> kerberos and grants different levels of web access according to an
> ldap query of MS AD groups. After some trial and error I have
> found an acl order which apparently does not trigger
> reauthentication (auth dialogues in browsers, although I don't
> even provide basic auth).

 What makes you think browser dialog box has anything to do with
 Basic auth? All it means is that the browser does not know what
 credentials will work. The ones tried (if any) have been rejected
 with a challenge response (401/407) for valid ones. It may be the
 browser password manager.

 If you are using only Kerberos auth then users enter their Kerberos
 username and password into the dialog to allow the browser to fetch
 the Kerberos token (or keytab entry) it needs to send to Squid.


> Here's relevant part:
>
> http_access deny !Safe_ports
> http_access deny CONNECT !SSL_ports
> http_access allow localhost manager
> http_access deny manager
> http_access deny to_localhost
> # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
> http_access deny !auth all
> http_access allow !basic_domains !basic_extensions basic_users
> http_reply_access allow !basic_mimetypes basic_users
> http_access allow !advanced_domains !advanced_extensions
> advanced_users http_access allow expert_users all
> # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
> http_access allow localhost
> http_access deny all
>
> I'd like to know which acl triggered the ban, so I've created
> custom error page:
>
> error_directory /usr/local/etc/squid/myerrors
> deny_info ERR_BASIC_EXTENSION basic_extensions
>
> The problem is that my custom error page does not trigger when I
> expect it to (member of basic_users accessing URL with extension
> listed in basic_extensions) - ERR_ACCESS_DENIED is triggered
> instead. I guess this is because of last matching rule which is
> http_access deny all.

 Perhaps.

 But, basic_extensions is never the last listed ACL in a denial
 rule. There is never a deny action associated with the ACL. That
 is why the deny_info response template is not being used.

>
> Is there another way how I can order acls so that I don't trigger
> reauthentication while triggering deny_info?

 Not without the ACL definition details.

 Amos
>>>
>>> Hi Amos,
>>>
>>> thank you for looking into this. Here's complete squid.conf (I
>>> changed just private details such as domain, DN, password etc. in
>>> external_acl_type).
>>>
>>
>> 
>>
>>> auth_param negotiate
>>> program /usr/local/libexec/squid/negotiate_kerberos_auth \ -r -s
>>> GSS_C_NO_NAME
>> 
>>> # ldap query for group membership
>>> external_acl_type adgroups ttl=60 children-startup=2
>>> children-max=10 %LOGIN
>>> \ /usr/local/libexec/squid/ext_ldap_group_acl -R \
>> 
>>
>>
>> These ACLs...
>>
>>> # map ldap groups to squid acls
>>> acl basic_users external adgroups squid_basic
>>> acl advanced_users external adgroups squid_advanced
>>> acl expert_users external adgroups squid_expert
>>
>> ... to here ...
>>
>>
>> 
>>> # require proxy authentication
>>> acl auth proxy_auth REQUIRED
>>
>> ... and the "auth" one will all trigger 407 challenges *if* they are
>> the last ACL on the line. Or if there are no credentials of any kind
>> given in the request.
>>
>>
>>>
>>> # custom error pages
>>> deny_info ERR_BASIC_DOMAIN basic_domains
>>> deny_info ERR_ADVANCED_DOMAIN advanced_domains
>>> deny_info ERR_BASIC_EXTENSION basic_extensions
>>> deny_info ERR_ADVANCED_EXTENSION advanced_extensions
>>>
>> 
>>> # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
>>> http_access deny !auth all
>>
>> Problem #1:
>>  Any client with halfway decent security will not simply broadcast
>> credentials on their first request of a new TCP connection, but will
>> wait for a 407 challenge to indicate both their need and the type of
>> credentials to send.
>>
>> The "all" on this line will prevent that 407 happening. Instead it
>> will simply produce a plain 403 ERR_ACCESS_DENIED for any request
>> lacking (Kerberos) credentials.
>>
>> NP: you can test whether this is your problem with a custom error
>> page:
>>
>>  acl test1 src all
>>  deny_info 499:ERR_ACCESS_DENIED test1
>>  http_access deny !auth test1
>>
>> Your access.log should show the 499 status when its line matches.
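One way to make those deny_info pages fire is to restructure the rules so that
each page's ACL is the last one on an explicit deny line; a sketch using the
ACL names from this thread (whether it preserves the intended policy for the
other user classes is not checked here):

 http_access deny basic_users basic_domains
 http_access deny basic_users basic_extensions
 http_access allow basic_users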
>>
>>
>>> http_access allow !basic_domains !basic_extensions basic_users
>>> http_access allow !advanced_domains !advanced_extensions advanced_users

Re: [squid-users] help with acl order and deny_info pages

2015-09-24 Thread Marko Cupać
On Sun, 20 Sep 2015 21:43:26 +1200
Amos Jeffries  wrote:

> On 17/09/2015 7:24 p.m., Marko Cupać wrote:
> > On Thu, 17 Sep 2015 03:00:56 +1200
> > Amos Jeffries  wrote:
> > 
> >> On 17/09/2015 12:37 a.m., Marko Cupać wrote:
> >>> Hi,
> >>>
> >>> I'm trying to setup squid in a way that it authenticates users via
> >>> kerberos and grants different levels of web access according to
> >>> ldap query of MS AD groups.After some trials and errors I have
> >>> found acl order which apparently does not trigger
> >>> reauthentication (auth dialogues in browsers although I don't
> >>> even provide basic auth).
> >>
> >> What makes you think browser dialog box has anything to do with
> >> Basic auth? All it means is that the browser does not know what
> >> credentials will work. The ones tried (if any) have been rejected
> >> with a challenge response (401/407) for valid ones. It may be the
> >> browser password manager.
> >>
> >> If you are using only Kerberos auth then users enter their Kerberos
> >> username and password into the dialog to allow the browser to fetch
> >> the Kerberos token (or keytab entry) it needs to send to Squid.
> >>
> >>
> >>> Here's relevant part:
> >>>
> >>> http_access deny !Safe_ports
> >>> http_access deny CONNECT !SSL_ports
> >>> http_access allow localhost manager
> >>> http_access deny manager
> >>> http_access deny to_localhost
> >>> # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
> >>> http_access deny !auth all
> >>> http_access allow !basic_domains !basic_extensions basic_users
> >>> http_reply_access allow !basic_mimetypes basic_users
> >>> http_access allow !advanced_domains !advanced_extensions
> >>> advanced_users http_access allow expert_users all
> >>> # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
> >>> http_access allow localhost
> >>> http_access deny all
> >>>
> >>> I'd like to know which acl triggered the ban, so I've created
> >>> custom error page:
> >>>
> >>> error_directory /usr/local/etc/squid/myerrors
> >>> deny_info ERR_BASIC_EXTENSION basic_extensions
> >>>
> >>> The problem is that my custom error page does not trigger when I
> >>> expect it to (member of basic_users accessing URL with extension
> >>> listed in basic_extensions) - ERR_ACCESS_DENIED is triggered
> >>> instead. I guess this is because of last matching rule which is
> >>> http_access deny all.
> >>
> >> Perhaps.
> >>
> >> But, basic_extensions is never the last listed ACL in a denial
> >> rule. There is never a deny action associated with the ACL. That
> >> is why the deny_info response template is not being used.
> >>
> >>>
> >>> Is there another way how I can order acls so that I don't trigger
> >>> reauthentication while triggering deny_info?
> >>
> >> Not without the ACL definition details.
> >>
> >> Amos
> > 
> > Hi Amos,
> > 
> > thank you for looking into this. Here's complete squid.conf (I
> > changed just private details such as domain, DN, password etc. in
> > external_acl_type).
> > 
> 
> 
> 
> > auth_param negotiate
> > program /usr/local/libexec/squid/negotiate_kerberos_auth \ -r -s
> > GSS_C_NO_NAME
> 
> > # ldap query for group membership
> > external_acl_type adgroups ttl=60 children-startup=2
> > children-max=10 %LOGIN
> > \ /usr/local/libexec/squid/ext_ldap_group_acl -R \
> 
> 
> 
> These ACLs...
> 
> > # map ldap groups to squid acls
> > acl basic_users external adgroups squid_basic
> > acl advanced_users external adgroups squid_advanced
> > acl expert_users external adgroups squid_expert
> 
> ... to here ...
> 
> 
> 
> > # require proxy authentication
> > acl auth proxy_auth REQUIRED
> 
> ... and the "auth" one will all trigger 407 challenges *if* they are
> the last ACL on the line. Or if there are no credentials of any kind
> given in the request.
> 
> 
> > 
> > # custom error pages
> > deny_info ERR_BASIC_DOMAIN basic_domains
> > deny_info ERR_ADVANCED_DOMAIN advanced_domains
> > deny_info ERR_BASIC_EXTENSION basic_extensions
> > deny_info ERR_ADVANCED_EXTENSION advanced_extensions
> > 
> 
> > # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
> > http_access deny !auth all
> 
> Problem #1:
>  Any client with halfway decent security will not simply broadcast
> credentials on their first request of a new TCP connection, but will
> wait for a 407 challenge to indicate both their need and the type of
> credentials to send.
> 
> The "all" on this line will prevent that 407 happening. Instead it
> will simply produce a plain 403 ERR_ACCESS_DENIED for any request
> lacking (Kerberos) credentials.
> 
> NP: you can test whether this is your problem with a custom error
> page:
> 
>  acl test1 src all
>  deny_info 499:ERR_ACCESS_DENIED test1
>  http_access deny !auth test1
> 
> Your access.log should show the 499 status when its line matches.
> 
> 
> > http_access allow !basic_domains !basic_extensions basic_users
> > http_access allow !advanced_domains !advanced_extensions
> > advanced_users
> 
> Basically okay. These will trigger 40

Re: [squid-users] AUFS vs. DISKS

2015-09-24 Thread FredB

> 
> If you want to achieve highest performance it is best to resolve that
> process collision issue. The wrongly indexed entries will be causing
> others to get expired earlier and maybe reduce HIT rate on them.
> 
> The (rather large amount of) extra work Squid is doing to cope with
> the
> missing objects is also sucking away CPU and disk I/O cycles that
> would
> be better used serving traffic.
> 
> So its not a big issue generally, but for high performance it can be
> an
> extra latency issue.
> 
> Amos
> 


I agree there is no difference under 400 requests per second (I'm speaking
about load average), but beyond that diskd wins, without messages like that.
When squid reaches 500 requests per second the difference is huge.

Actually, with fast HDDs and diskd I'm just CPU limited; beyond 60% CPU Squid
is slower and there is more latency.

Every day my log file size is approximately 3.5-4 GB.

iostat -dx 5
sdb and sdc = caches

Linux 3.2.0-4-amd64 (proxy1)24/09/2015  _x86_64_(6 CPU)

Device: rrqm/s   wrqm/s r/s w/srkB/swkB/s avgrq-sz 
avgqu-sz   await r_await w_await  svctm  %util
sda   0,01 0,780,880,9458,5149,13   118,47 
0,014,511,467,37   0,36   0,06
sdb   3,2735,83   54,40   21,65   296,05   421,8118,88 
0,17   12,12   10,45   16,30   1,25   9,53
sdc   3,2735,88   54,67   21,50   298,02   421,9718,90 
0,15   11,89   10,30   15,91   1,24   9,45

Device: rrqm/s   wrqm/s r/s w/srkB/swkB/s avgrq-sz 
avgqu-sz   await r_await w_await  svctm  %util
sda   0,00 1,200,001,80 0,00   141,60   157,33 
0,000,000,000,00   0,00   0,00
sdb   0,00   121,20   48,00   27,20   298,40  1334,4043,43 
0,506,71   10,180,59   1,51  11,36
sdc   0,00   111,00   47,40   20,40   220,80   730,4028,06 
0,345,017,170,00   1,01   6,88

Device: rrqm/s   wrqm/s r/s w/srkB/swkB/s avgrq-sz 
avgqu-sz   await r_await w_await  svctm  %util
sda   0,00 1,000,001,40 0,00   352,00   502,86 
0,000,570,000,57   0,57   0,08
sdb   0,00   130,40  118,20   25,40   664,80  1433,6029,23 
1,278,84   10,590,69   1,11  16,00
sdc   0,00   112,60  116,00   24,60   632,80   748,8019,65 
1,017,168,630,23   0,92  12,96

Device: rrqm/s   wrqm/s r/s w/srkB/swkB/s avgrq-sz 
avgqu-sz   await r_await w_await  svctm  %util
sda   0,00 0,400,000,40 0,00 3,2016,00 
0,000,000,000,00   0,00   0,00
sdb   0,00   118,00  108,80   24,80   596,00   759,2020,29 
0,906,778,260,23   1,31  17,44
sdc   0,00   123,40  160,40   24,40   923,20   796,8018,61 
1,407,607,895,74   1,31  24,24

Device: rrqm/s   wrqm/s r/s w/srkB/swkB/s avgrq-sz 
avgqu-sz   await r_await w_await  svctm  %util
sda   0,00 0,400,000,40 0,00 3,2016,00 
0,000,000,000,00   0,00   0,00
sdb   0,00   106,00  102,60   17,60   669,60   788,8024,27 
0,857,068,230,23   1,54  18,48
sdc   0,00   287,40  166,20  416,40   829,60  3232,0013,94 
3,596,169,774,72   0,42  24,24

Device: rrqm/s   wrqm/s r/s w/srkB/swkB/s avgrq-sz 
avgqu-sz   await r_await w_await  svctm  %util
sda   0,00 0,600,000,60 0,00 4,8016,00 
0,000,000,000,00   0,00   0,00
sdb   0,00   259,20   40,40  452,60   320,00  2967,2013,34 
4,879,86   26,538,37   0,55  27,12
sdc   0,00   110,20  101,00   20,60   452,00   739,2019,59 
1,88   15,47   18,430,97   2,39  29,12

Device: rrqm/s   wrqm/s r/s w/srkB/swkB/s avgrq-sz 
avgqu-sz   await r_await w_await  svctm  %util
sda   0,00 0,400,001,60 0,00 8,0010,00 
0,000,500,000,50   0,50   0,08
sdb   0,0080,60   87,20   13,60   668,00   444,0022,06 
1,05   10,57   12,220,00   2,97  29,92
sdc   0,00   104,60  146,406,60   749,60   475,2016,01 
1,82   11,898,57   85,45   1,64  25,12
