Re: [squid-users] Squid3: 100 % CPU load during object caching

2015-07-24 Thread Jens Offenbach
I have made a quick test of Squid 3.3.8 on Ubuntu 15.04 and I get the same 
problem: 100 % CPU usage, 500 KB/sec download rate.
 

Sent: Friday, 24 July 2015, 07:54
From: Jens Offenbach wolle5...@gmx.de
To: Marcus Kool marcus.k...@urlfilterdb.com, Eliezer Croitoru 
elie...@ngtech.co.il, Amos Jeffries squ...@treenet.co.nz, 
squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid3: 100 % CPU load during object caching
It is not easy for me, but I have tested Squid 3.3.8 from the Ubuntu packaging 
on a real physical infrastructure. I get the same results on the physical 
machine (1x Intel(R) Xeon(R) CPU E3-1225 V2 @ 3.20GHz, 32 GB RAM, 1 TB disk) 
where Squid is running: 100 % CPU usage, 500 KB/sec download rate. All machines 
are idle and we have 1 GBit ethernet.

The strace log from the physical test scenario can be found here, but I think 
it does not differ from the virtual test scenario:
http://wikisend.com/download/293856/squid.strace2

@Marcus:
Have you verified that the file does not fit into memory and gets cached on 
disk? On which OS is Squid running? What are your build options of Squid (squid 
-v)? Is it possible that the issue is not part of 3.4.12? Do we have a 
regression?

@Amos, Eliezer
Is someone able to reproduce the disk caching effect?

Regards,
Jens


Sent: Thursday, 23 July 2015, 20:08
From: Marcus Kool marcus.k...@urlfilterdb.com
To: Jens Offenbach wolle5...@gmx.de, Amos Jeffries 
squ...@treenet.co.nz, Eliezer Croitoru elie...@ngtech.co.il, 
squid-users@lists.squid-cache.org
Subject: Re: Aw: Re: [squid-users] Squid3: 100 % CPU load during object caching
The strace output shows this loop:

Squid reads 16K-1 bytes from FD 13 (webserver)
Squid writes 4 times 4K to FD 17 (/var/cache/squid3/00/00/)
Squid writes 4 times 4K to FD 12 (browser)

But this loop does not explain the 100% CPU usage...

Does Squid do a buffer reshuffle when it reads 16K-1 and writes 16K ?
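(For reference, a sketch of how such a trace can be captured; the worker
PID is a placeholder and the flags are standard strace options:

  strace -p <squid-worker-pid> -f -tt -e trace=read,write -o squid.strace

-f follows any forked helpers and -tt adds timestamps to each call.)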

I did the download test with Squid 3.4.12 AUFS on an idle system with a 500 
mbit connection and 1 CPU with 4 cores @ 3.7 GHz.
The first download used 35% of 1 CPU core with a steady download speed of 62 
MB/sec.
The second (cached) download used 50% of 1 CPU core with a steady download 
speed of 87 MB/sec.
I never looked at Squid CPU usage and do not know what is reasonable but it 
feels high.

With respect to the 100% CPU issue of Jens, one factor is that Squid runs in a 
virtual machine.
Squid in a virtual machine cannot be compared with a wget test since Squid 
allocates a lot of memory that the host must manage.
This is a possible explanation for the fact that you see the performance going 
down and up.
Can you do the same test on the host (i.e. not inside a VM)?

Marcus



On 07/23/2015 10:39 AM, Jens Offenbach wrote:
 I have attached strace to Squid and waited until the download rate has 
 decreased to 500 KB/sec.
 I used cache_dir aufs /var/cache/squid3 88894 16 256 max-size=10737418240.
 Here is the download link:
 http://w1.wikisend.com/node-fs/download/6a004a416f65b4cdf7f8eff4ff961199/squid.strace
 I hope it can help you.
 *Sent:* Thursday, 23 July 2015, 13:29
 *From:* Marcus Kool marcus.k...@urlfilterdb.com
 *To:* Jens Offenbach wolle5...@gmx.de, Eliezer Croitoru 
 elie...@ngtech.co.il, Amos Jeffries squ...@treenet.co.nz, 
 squid-users@lists.squid-cache.org
 *Subject:* Re: [squid-users] Squid3: 100 % CPU load during object caching
 I am not sure if it is relevant, maybe it is:

 I am developing an ICAP daemon, and after the ICAP server sends a 100 
 Continue,
 Squid sends the object to the ICAP server in small chunks of varying sizes:
 4095, 5813, 1448, 4344, 1448, 1448, 2896, etc.
 Note that the interval of receiving the chunks is 1/1000th of a second.
 It seems that Squid forwards the object to the ICAP server every time it 
 receives
 one or a few TCP packets.

 I have a suspicion that in the scenario of 100% CPU, a large number of write 
 calls, and low throughput, a similar thing is happening:
 Squid physically stores a small part of the object many times, i.e. every 
 time one or a few TCP packets arrive.

 Amos, is there a debug setting that can confirm/reject this suspicion?

 Marcus


 On 07/23/2015 04:25 AM, Jens Offenbach wrote:
  A test with ROCK (cache_dir rock /var/cache/squid3 51200) gives very 
  confusing results.
 
  I cleared the cache:
  rm -rf /var/cache/squid3/*
  squid -z
  squid
  http_proxy=http://139.2.57.120:3128/ wget 
  http://test-server/freesurfer-Linux-centos6_x86_64-stable-pub-v5.3.0.tar
 
  The download starts with 10 MB/sec and stays constant for 1 minute, then 
  it drops gradually to 1 MB/sec and stays there for some time. After 5 
  minutes the download rate returns back to 10 MB/sec
 very quickly and drops again step-by-step to 1 MB/sec. After 5-6 minutes the 
 download rates 

Re: [squid-users] squid youtube caching

2015-07-24 Thread joe
http, bro. I have 300 clients and I would like to stay standard, not violating
privacy that much. BlueCoat and ThunderCache use http; the others use ssl, and
they have to work in countries that allow them to use ssl.





Re: [squid-users] Squid3: 100 % CPU load during object caching

2015-07-24 Thread Jens Offenbach
I have found something out... Hopefully, it helps to reproduce and solve the 
issue. 

I got it working with a good download rate, but very high CPU usage, on Squid 
3.3.8 and Squid 3.5.6. There seems to be a problem with large files that get 
cached on disk in combination with memory caching. When I use these settings, 
memory usage of Squid grows step-by-step with 100% CPU usage and a 500 KB/sec 
download rate:

# MEMORY CACHE OPTIONS
# -
  maximum_object_size_in_memory 1 GB
  memory_replacement_policy heap LFUDA
  cache_mem 4 GB

# DISK CACHE OPTIONS
# -
  maximum_object_size 10 GB
  cache_replacement_policy heap GDSF
  cache_dir aufs /var/cache/squid3 25600 16 256

I decided to turn off memory caching completely and used the following settings:

# MEMORY CACHE OPTIONS
# -
  maximum_object_size_in_memory 0 GB
  memory_replacement_policy heap LFUDA
  cache_mem 0 GB

# DISK CACHE OPTIONS
# -
  maximum_object_size 10 GB
  cache_replacement_policy heap GDSF
  cache_dir aufs /var/cache/squid3 25600 16 256

Now, I get stable and high download rates even on a cache miss.

@Marcus:
Could you please post your squid.conf?
 

 

Re: [squid-users] TCP_MISS in images

2015-07-24 Thread Amos Jeffries
On 24/07/2015 6:02 a.m., Yuri Voinov wrote:
 
 
 
 On 23.07.15 23:57, Amos Jeffries wrote:
 On 24/07/2015 4:02 a.m., Ulises Nicolini wrote:
 Hello,

 I have a basic squid 3.5 configuration with

 maximum_object_size_in_memory 64 KB
 maximum_object_size 10 KB
 minimum_object_size 512 bytes

 refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 1440 90% 10080
 override-expire ignore-no-cache ignore-private
 refresh_pattern -i (/cgi-bin/)  0   0%  0
 refresh_pattern .   0 20% 4320

 
 ignore-no-cache has no meaning for Squid-3.5.
 
 ignore-private does nothing on your images. It makes current Squid act
 like must-revalidate was set on the response instead of private.
 
 override-expire also does nothing on your images. As used above it makes
 Squid act like s-maxage=604800 was given instead of any Expires:
 header or max-age=N / s-maxage=N Cache-Control values.
 
 

 cache_dir rock  /cache1/rock1 256  min-size=500 max-size=32767
 max-swap-rate=250 swap-timeout=350
 cache_dir diskd /cache2/diskd1 1000 16 256 min-size=32768
 max-size=1048576
 cache_dir diskd /cache2/diskd2 10 16 256 min-size=1048576


 But when I test it against my webserver, using only one client PC, the
 only thing I get are TCP_MISSes of my images.

 1437664284.339 11 192.168.2.103 TCP_MISS/200 132417 GET
 http://test-server.com/images/imagen3.jpg - HIER_DIRECT/192.168.2.10
 image/jpeg
 1437664549.753  5 192.168.2.103 TCP_MISS/200 53933 GET
 http://test-server.com/images/imagen1.gif - HIER_DIRECT/192.168.2.10
 image/gif
 1437665917.469 18 192.168.2.103 TCP_MISS/200 8319 GET
 http://test-server.com/images/icono.png - HIER_DIRECT/192.168.2.10
 image/png

 The response headers don't have Vary tags or any other that may impede
 caching

 Accept-Ranges: bytes
 Connection: close
 Content-Length: 53644
 Content-Type: image/gif
 Date: Thu, 23 Jul 2015 15:56:07 GMT
 Etag: e548d4-d18c-51b504b95dec0
 Last-Modified: Mon, 20 Jul 2015 15:36:03 GMT
 Server: Apache/2.2.22 (EL)

 
 Your refresh pattern says to only cache these objects for +90% of their
 current age, so long as that period is longer than 1 day (1440 mins) and
 no more than 7 days (10080 mins).
 
 Which means;
  they are 3 days 20 mins 4 secs old right now (260404 secs).
  90% of that is 2 days 17 hrs 6 mins 3 secs (234363 secs).
 
 So the object e548d4-d18c-51b504b95dec0 will stay in cache for the
 next 2 days 17hrs etc.
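 (The same arithmetic as a quick shell check, using GNU date and the two
 timestamps from the headers above:
 
   lm=$(date -d 'Mon, 20 Jul 2015 15:36:03 GMT' +%s)
   dt=$(date -d 'Thu, 23 Jul 2015 15:56:07 GMT' +%s)
   age=$((dt - lm))            # 260404 seconds
   echo $((age * 90 / 100))    # 234363 seconds of freshness
 
 with the result then clamped into the 1440..10080 minute window by the
 refresh_pattern min/max fields.)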
 
 I notice though that the Content-Length size does not match any of the
 logged transfer sizes. Which makes me wonder if the object is actually
 varying despite the lack of Vary headers.
 
 

 Is it necessary a certain amount of requests of a single object to be
 cached (mem o disk) or am I facing some other problem here?
 
 Yes. Two requests. The first (a MISS) will add it to the cache; the second
 and later requests should be HITs on the now-cached object.
 
 BUT, only if you are not force-reloading the browser for your tests.
 Force-reload instructs Squid to ignore its cached content and replace it
 with another MISS.

 Amos, this behaviour depends on refresh_pattern and often can be
 overridden (with reload-into-ims, for example).

I know. You know that. The question about the number of requests is a big
hint that he is very new to this and may not be aware yet.

He has not configured anything that would affect it. So it is still
relevant to his Squid, and still the #1 most common mistake when testing
these things.
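For reference, a hedged sketch of the reload-into-ims override Yuri
mentions, applied to the image pattern from this thread:

  refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 1440 90% 10080 reload-into-ims

That makes Squid turn a client force-reload into an If-Modified-Since
revalidation instead of a full MISS. Note that it violates HTTP semantics,
so use it knowingly.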

Amos


[squid-users] error windbind

2015-07-24 Thread Posta Esterna

Hi all
I'm very new to squid. I'm trying to start a squid service (2.6 
stable) on a CentOS 5 Linux server... it has to connect to an AD server 
to authenticate users for internet access...


A squid restart does not go so well, because in the cache.log file I see this error:

(ntlm_auth) invalid option --  -

In my squid.conf file I'm trying to use these lines:

auth_param ntlm program ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm keep alive on
auth_param ntlm children 10

Other information in the cache.log

(ntlm_auth) invalid option --  h
...

(ntlm_auth) invalid option --  e
...

(ntlm_auth) invalid option --  p
...

It seems it can't recognize --helper-protocol as a single 
option! Is that possible?


On the other side, if I try
wbinfo -t

i get:
checking the trust secret via RPC calls failed
error code was   (0x0)
Could not check secret

and wbinfo -p

Ping to winbindd failed on fd -1
could not ping winbindd!

Winbindd is not running... but it's a Samba component... I don't need 
to use my server as a Samba server, so how do I configure Samba to start 
winbindd? (A sketch of the usual steps follows.)
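A hedged sketch of the usual steps, assuming smb.conf is already set up
for the AD domain; the commands are from the CentOS 5 / Samba 3 era:

net ads join -U Administrator   # join the domain first
service winbind start           # start the winbindd daemon
chkconfig winbind on            # keep it running across reboots
wbinfo -t                       # re-check the trust secret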


Please HELP

--
VCTI Ing. Angelo Bruno



Re: [squid-users] Squid3: 100 % CPU load during object caching

2015-07-24 Thread Amos Jeffries
On 24/07/2015 6:08 a.m., Marcus Kool wrote:
 The strace output shows this loop:
 
Squid reads 16K-1 bytes from FD 13 (webserver)
Squid writes 4 times 4K to FD 17 (/var/cache/squid3/00/00/)
Squid writes 4 times 4K to FD 12 (browser)
 
 But this loop does not explain the 100% CPU usage...
 
 Does Squid do a buffer reshuffle when it reads 16K-1 and writes 16K ?

Yes, several (UFS / AUFS 3, or diskd 5).

TCP buffer -> FD 13 read buffer

FD 13 read buffer -> 4x mem_node (4KB each)
 ** walk the length of the in-memory part of the object to find where to
attach the mem_node. (once per each node?)
  - this has been a big CPU hog in the past (Squid-2 did it twice per
node insertion)

4x mem_node -> SHM memory buffer
  - diskd only, AUFS uses mem_node directly

SHM memory buffer -> FD 17 disk write latency
  - happens with both diskd (single threaded) and AUFS (x64 threads)
  - wait latency until completion event is seen by Squid ...

4x mem_node write() copy to FD 12 TCP buffer (OS dependent)


If you are doing any kind of ICAP processing you can add +3 copies per
service processing the transaction body.


 

After a bit more thought and Marcus' feedback: store.cc, mem_node
operations, and fd.cc and comm.cc are probably all worth watching.
debug_options ALL,9 will get you everything Squid has to offer, of course.

But be aware that the debugging itself adds a horribly large amount of
overhead for each line logged. At the highest levels it may noticeably
impact the high-speed core routines you are trying to measure, by skewing
latency into those with more debugs() statements.
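For a narrower capture than ALL,9, something along these lines should
work, assuming the conventional debug section numbers (20 storage
manager, 19 store memory primitives, 51 file descriptor functions,
5 comm/socket functions):

  debug_options ALL,1 20,5 19,5 51,5 5,5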

Amos



Re: [squid-users] Squid3: 100 % CPU load during object caching

2015-07-24 Thread Amos Jeffries
On 24/07/2015 7:49 p.m., Jens Offenbach wrote:

Damn. That gives me ~90% confidence it's the mem_node walking as new 4KB
chunks of memory are appended to the memory copy of the object.


I would expect to see a reduced effect in 3.5 that kicks in around
maximum_object_size_in_memory, since the memory copies are now split
into cache_mem objects vs transients (disk cache only, or totally
non-cacheable), with the transients getting their unnecessary in-memory
sections pruned away regularly.

Amos


Re: [squid-users] ecap and https

2015-07-24 Thread Amos Jeffries
On 24/07/2015 5:33 a.m., HackXBack wrote:
 When can we use ecap with https content?

Yes, *if* the TLS part of HTTPS has been terminated by Squid.
ie. HTTPS reverse-proxy or SSL-bump interception.
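A minimal sketch of the ssl-bump case for 3.5, so the adapter sees the
decrypted HTTP; the certificate path is an assumption:

  http_port 3128 ssl-bump cert=/etc/squid/bump.pem generate-host-certificates=on
  acl step1 at_step SslBump1
  ssl_bump peek step1
  ssl_bump bump all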

Amos



[squid-users] log source port from squid server?

2015-07-24 Thread Kevin Kretz
Hi,

We're working on correlating our squid logs with other logs upstream in our 
network.  We'd like to be able to identify a proxied request by network 
information from squid's log.  Currently we have the squid server IP address, 
the destination server's IP address and the destination server's listening 
port.  

From the documentation and reading back through this list's archive, I don't 
see a format code for the squid server's source port. Has there ever been 
interest in this?  
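As a hedged sketch of what we'd want, assuming a hypothetical %<lp code
existed for the local port of the server-side connection (the default
'squid' logformat plus that one extra field):

  logformat srcport %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a:%<lp %mt
  access_log /var/log/squid/access.log srcport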


thanks

Kevin


Re: [squid-users] squid youtube caching

2015-07-24 Thread joe
As long as I don't use ssl man-in-the-middle in my cache, I'm safe
government-wise. All the other things I can do.





[squid-users] Squid-3.5.6 + Chroot + Authentication

2015-07-24 Thread Jorgeley
Hi guys.
I have squid 3.5.6 + basic_ncsa_auth + chroot, and it is crashing only when I
do an authentication.

Here are the main confs:
auth_param basic program /libexec/basic_ncsa_auth /regras/usuarios
auth_param basic children 10 startup=0 idle=1
auth_param basic realm INTERNET-LOGIN NECESSARIO
... (other confs) ...
acl usuarios proxy_auth -i /etc/squid-3.5.6/regras/usuarios
... (other confs) ...
chroot /etc/squid-3.5.6

This is my chroot structure:
/ (linux root)
 etc/
  squid-3.5.6/
   bin/
    purge
    squidclient
   cache/
    (squid cache dirs generated by squid -z)
   etc/
    cachemgr.conf
    errorpage.css
    group
    gshadow
    hosts
    localtime
    mime.conf
    nsswitch.conf
    passwd
    resolv.conf
    shadow
    squid.conf
   lib64/
    (a lot of libs here, discovered with the ldd command)
   libexec/
    basic_ncsa_auth
    diskd
    (other default squid libs)
   regras/
    (my acl rule files)
   sbin/
    squid
   share/
    errors/
     (default squid error dirs)
    icons/
     (default squid icons)
    man/
     (default squid man pages)
   usr/
    lib64/
     (a lot of libs here, discovered with the ldd command)
   var/
    logs/
     (default squid logs)
    run/
     squid.pid
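(The lib64 contents were collected with a loop along these lines; a
sketch, assuming GNU cp for --parents:

for lib in $(ldd libexec/basic_ncsa_auth | awk '{print $3}' | grep '^/'); do
    cp -v --parents "$lib" /etc/squid-3.5.6/
done

plus the dynamic loader itself, which ldd lists without a '=>'.)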

I ran the command:
chroot /etc/squid-3.5.6 /libexec/basic_ncsa_auth
It runs; that's why I'm sure the chroot environment, at least as far as
basic_ncsa_auth is concerned, is correct.


Here is what I find in the cache.log:
2015/07/22 18:47:27.866 kid1| WARNING: no_suid: setuid(0): (1) Operation
not permitted
2015/07/22 18:48:01.735 kid1| ipcCreate: /libexec/basic_ncsa_auth: (2) No
such file or directory
2015/07/22 18:47:27.866 kid1| WARNING: basicauthenticator #Hlpr13818 exited

What is ipcCreate, and why is it not finding the file?
Any ideas? Thanks in advance.





Re: [squid-users] RE Peek and Splice error SSL_accept failed

2015-07-24 Thread Sebastian Kirschner
Is that all sites or just a few special sites?

James

I tested a few sites like google , youtube , sparkasse, sparklabs, all with the 
same issue.


Best Regards

Sebastian 


Re: [squid-users] error windbind

2015-07-24 Thread Posta Esterna



On 24/07/2015 11:22, Amos Jeffries wrote:

On 24/07/2015 7:57 p.m., Posta Esterna wrote:

Hi all
I'm very new to squid. I'm trying to start a squid service (2.6
stable) on a CentOS 5 Linux server... it has to connect to an AD server
to authenticate users for internet access...

Yeesh. Please upgrade. Squid 2.6 was end-of-life'd June 2008.

It is also very unlikely that your Squid will operate either safely or
correctly on a lot of modern Internet traffic.

CentOS 5 itself is rather old too. CentOS 7 is what we are building and
testing current Squid with.


A squid restart does not go so well, because in the cache.log file I see this error:

(ntlm_auth) invalid option --  -

In my squid.conf file I'm trying to use these lines:

auth_param ntlm program ntlm_auth --helper-protocol=squid-2.5-ntlmssp

Use the full path to the helper and ensure that it is the *Samba* helper
binary being used. There is an identically named binary installed by
Squid which operates VERY differently and does not perform NTLM properly.

Other problems can and need to wait on that being fixed.

Amos




Thanks Amos,
Squid was the first problem... I was using /usr/lib/squid/ntlm_auth instead 
of /usr/bin/ntlm_auth.


About upgrading: unfortunately I only have an old Dell P4 (10 years 
old?) with 1 GB of RAM... and luckily this is only proxy no. 2 
(the backup). Proxy no. 1 runs well with a version of Kerio Control...

I still have problems; it says:

[2015/07/24 12:06:41, 0] utils/ntlm_auth.c:get_winbind_domain(146)
  could not obtain winbind domain name!
.

--
VCTI Ing. Angelo Bruno



Re: [squid-users] ecap and https

2015-07-24 Thread Yuri Voinov

Well, and so what? What exactly your doing with this adapter?

24.07.15 3:53, HackXBack пишет:
 read the Documentation

 http://www.e-cap.org/Documentation







Re: [squid-users] squid youtube caching

2015-07-24 Thread joe
Thanks Amos. So doing the replace is better, as in:
reply_header_access Strict-Transport-Security deny all

request_header_replace Strict-Transport-Security max-age=0
right?





Re: [squid-users] squid 3.5 with auth and chroot

2015-07-24 Thread Jorgeley Junior
Thank you so much for the help.
So, I use the directive 'chroot' in squid.conf.
I start squid this way:
cd /etc/squid-3.5.6
sbin/squid
and it starts normally, but when I open the client browser and do an
authentication it logs the errors and doesn't authenticate. Squid
doesn't stop running; it just logs the error and does not authenticate.
As I told you before, if I do: chroot /etc/squid-3.5.6
libexec/basic_ncsa_auth it runs; that's why I'm sure that basic_ncsa_auth
runs correctly. I suspect maybe this ipcCreate runs as another user
that cannot access basic_ncsa_auth, or maybe ipcCreate is located in a
directory that cannot see the libexec/basic_ncsa_auth relative path.
That's a weird scenario.

2015-07-24 11:02 GMT-03:00 Amos Jeffries squ...@treenet.co.nz:

 On 25/07/2015 12:10 a.m., Jorgeley Junior wrote:
  please guys, help me.
  Any suggestions?
 

 Squid is not generally run in a chroot. The master / coordinator daemon
 manager process requires root access for several things and spawns
 workers that are dropped automatically to highly restricted access
 anyway. You already found out how big the dependency pool of libraries is.

 I guess what I'm getting at is that this is a rarely tested situation.

 To complicate matters, there are three different chroot combinations
 that Squid can run under.

 * External chroot. Where you enter the chroot before starting Squid and
 it thinks the chroot content is the whole system.

 * configured chroot. Where you configure Squid master process to chroot
 its low-privilege workers with the squid.conf chroot directive.

 * Linux containers. Similar to the first, but you don't have to copy
 files into a separate chroot area. Just assign visibility/access to the
 OS areas.


 The error is pretty clear though. The problem is that something is
 unable to load a file during helper startup:
 either Squid is unable to read/open/see the helper binary file itself,
 or the helper is unable to open a file it needs to operate.

 ipcCreate: is a big hint that it's Squid not finding the helper binary
 named.

 So is Squid being run from inside the chroot, or using the chroot
 directive in squid.conf?


 Amos







Re: [squid-users] RE Peek and Splice error SSL_accept failed

2015-07-24 Thread Amos Jeffries
On 25/07/2015 12:09 a.m., Sebastian Kirschner wrote:
 Hi ,
 
 I minimized the configuration a little bit (you can see it at the bottom of 
 this message).
 
 Also, I am still trying to understand why these errors happen.

Let's be clear: errors are not happening. If errors happened, Squid would log
them with a huge ERROR: or FATAL: log entry.

Specific things trying to happen can fail, or not. That is normal and
happens all the time when trying to interpret externally generated
information, such as the bytes arriving in over a network socket.

In this case Squid is trying to figure out if the connection is actually
a TLS connection *or something else*.

 I increased the debug level and saw that squid tried 48 times to peek but 
 failed.
 At the end it says that it got a Hello. Does that mean that squid received 
 the Hello after 48 tries?
 
 If yes, why does it need so many tries?

Depends on what a try is, and why it was tried.

 
 - Part of debug log -
 2015/07/24 11:05:42.866 kid1| client_side.cc(4242) clientPeekAndSpliceSSL: 
 Start peek and splice on FD 11
 2015/07/24 11:05:42.866 kid1| bio.cc(120) read: FD 11 read 11 <= 11
 2015/07/24 11:05:42.866 kid1| bio.cc(146) readAndBuffer: read 11 out of 11 
 bytes
 2015/07/24 11:05:42.866 kid1| bio.cc(150) readAndBuffer: recorded 11 bytes of 
 TLS client Hello
 2015/07/24 11:05:42.866 kid1| ModEpoll.cc(116) SetSelect: FD 11, type=1, 
 handler=1, client_data=0x7effbd078458, timeout=0
 2015/07/24 11:05:42.866 kid1| client_side.cc(4245) clientPeekAndSpliceSSL: 
 SSL_accept failed.

It could be 11 bytes of anything.

Squid believes the 11 bytes may be a clientHello (which is 11 bytes
in size). It sends them to processing, which fails to parse a clientHello
out of it.
Squid goes back to read(2) to see if anything else arrives.


 .
 2015/07/24 11:05:42.874 kid1| client_side.cc(4242) clientPeekAndSpliceSSL: 
 Start peek and splice on FD 11
 2015/07/24 11:05:42.874 kid1| bio.cc(120) read: FD 11 read 6 <= 11
 2015/07/24 11:05:42.874 kid1| bio.cc(146) readAndBuffer: read 6 out of 11 
 bytes
 2015/07/24 11:05:42.874 kid1| bio.cc(150) readAndBuffer: recorded 6 bytes of 
 TLS client Hello


This is a bit obscure. 6 *more* bytes arrive from FD 11.

Now we probably have 17 bytes in the I/O buffer. The bio.cc code knows
that 6 bytes alone is not enough for a clientHello. But it seems to be
ignoring, or to have forgotten about, the previous read.


 2015/07/24 11:05:42.875 kid1| SBuf.cc(152) assign: SBuf2040 from c-string, 
 n=0)
 2015/07/24 11:05:42.875 kid1| SBuf.cc(152) assign: SBuf2038 from c-string, 
 n=13)

The buffer says 13 bytes to be processed. Note that 13 >= 11 for the
clientHello.

I have no idea what the first 4 bytes were or where they went. My guess is
maybe some SSL alert notice that the SSL_accept() code absorbed and adjusted
the context to use later? But not part of a clientHello either way.


 2015/07/24 11:05:42.875 kid1| ModEpoll.cc(116) SetSelect: FD 11, type=1, 
 handler=1, client_data=0x7effbd078458, timeout=0
 2015/07/24 11:05:42.875 kid1| client_side.cc(4245) clientPeekAndSpliceSSL: 
 SSL_accept failed.
 2015/07/24 11:05:42.875 kid1| SBuf.cc(152) assign: SBuf2025 from c-string, 
 n=4294967295)
 2015/07/24 11:05:42.875 kid1| client_side.cc(4259) clientPeekAndSpliceSSL: I 
 got hello. Start forwarding the request!!!


Thus Squid believes the 13 bytes it has in the buffer are a clientHello
(which is >= 11 bytes). It sends them to processing, which succeeds in
parsing a clientHello. SSL-Bump peek at stage 1 completed.

In summary:
 it looks like normal network operations to me. No errors. Just a temporary
failure to have the whole thing in memory on the first parse attempt.


Take that with a grain of salt though. Squid is event driven and there
is no guarantee that any two adjacent log lines are even reporting about
the same transaction. These could be two entirely separate TCP
connections independently arriving and being assigned to FD 11: one
having a non-TLS protocol delivered and rejected, one having TLS started.
I assumed that the '...' you added don't omit any other FD 11 lines.

I/we would have to see the input data analysed by wireshark and
cross-check it against the Squid code to be sure of the above. But I'm
~80% confident that's the correct interpretation of the log.
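A sketch of such a capture; the interface and intercept port here are
assumptions:

  tcpdump -i eth0 -s 0 -w hello.pcap 'tcp port 3129'

then open hello.pcap in wireshark and follow the client's TCP stream to
see exactly what bytes arrived before the SSL_accept attempts.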


Amos



[squid-users] ISSUE accssing content

2015-07-24 Thread Jagannath Naidu
Dear List,

I have been working on this for last two weeks, but never got it resolved.

We have an application server (SERVER) in our local network and a desktop
application (CLIENT). The application picks its proxy settings from IE. We
also have a Websense proxy server.

case 1: when there is no proxy set,
the application works. No logs in the squid server's access.log

case 2: when the proxy ip address is set and 'bypass local network' is checked,
the application works. No logs in the squid server's access.log

case 3: when the proxy ip address is set to the Websense proxy server, with
'bypass local network' UNCHECKED,
the application works. We don't have access to the Websense server and hence
we cannot check its logs


case 4: when the proxy ip address is set to the squid server's ip address, with
'bypass local network' UNCHECKED,
the application does not work :-(. Below are the logs.


1437751240.149  7 192.168.122.1 TCP_MISS/404 579 GET
http://dlwvdialce.htmedia.net/UADInstall/UADPresentationLayer.application -
HIER_DIRECT/10.1.4.46 text/html
1437751240.992 94 192.168.122.1 TCP_DENIED/407 3757 CONNECT
0.client-channel.google.com:443 - HIER_NONE/- text/html
1437751240.996  0 192.168.122.1 TCP_DENIED/407 4059 CONNECT
0.client-channel.google.com:443 - HIER_NONE/- text/html
1437751242.327  5 192.168.122.1 TCP_MISS/404 579 GET
http://dlwvdialce.htmedia.net/UADInstall/uadprop.htm - HIER_DIRECT/10.1.4.46
text/html
1437751244.777  1 192.168.122.1 TCP_MISS/503 4048 POST
http://cs-711-core.htmedia.net:8180/ConcertoAgentPortal/services/ConcertoAgentPortal
- HIER_NONE/- text/html

squid -v
Squid Cache: Version 3.3.8
configure options:  '--build=x86_64-redhat-linux-gnu'
'--host=x86_64-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr'
'--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin'
'--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include'
'--libdir=/usr/lib64' '--libexecdir=/usr/libexec'
'--sharedstatedir=/var/lib' '--mandir=/usr/share/man'
'--infodir=/usr/share/info' '--disable-strict-error-checking'
'--exec_prefix=/usr' '--libexecdir=/usr/lib64/squid' '--localstatedir=/var'
'--datadir=/usr/share/squid' '--sysconfdir=/etc/squid'
'--with-logdir=$(localstatedir)/log/squid'
'--with-pidfile=$(localstatedir)/run/squid.pid'
'--disable-dependency-tracking' '--enable-eui'
'--enable-follow-x-forwarded-for' '--enable-auth'
'--enable-auth-basic=DB,LDAP,MSNT,MSNT-multi-domain,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB,getpwnam'
'--enable-auth-ntlm=smb_lm,fake'
'--enable-auth-digest=file,LDAP,eDirectory'
'--enable-auth-negotiate=kerberos'
'--enable-external-acl-helpers=file_userip,LDAP_group,time_quota,session,unix_group,wbinfo_group'
'--enable-cache-digests' '--enable-cachemgr-hostname=localhost'
'--enable-delay-pools' '--enable-epoll' '--enable-icap-client'
'--enable-ident-lookups' '--enable-linux-netfilter'
'--enable-removal-policies=heap,lru' '--enable-snmp' '--enable-ssl'
'--enable-ssl-crtd' '--enable-storeio=aufs,diskd,ufs' '--enable-wccpv2'
'--enable-esi' '--enable-ecap' '--with-aio' '--with-default-user=squid'
'--with-filedescriptors=16384' '--with-dl' '--with-openssl'
'--with-pthreads' 'build_alias=x86_64-redhat-linux-gnu'
'host_alias=x86_64-redhat-linux-gnu' 'CFLAGS=-O2 -g -pipe -Wall
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong
--param=ssp-buffer-size=4 -grecord-gcc-switches   -m64 -mtune=generic
-fpie' 'LDFLAGS=-Wl,-z,relro  -pie -Wl,-z,relro -Wl,-z,now' 'CXXFLAGS=-O2
-g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions
-fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches
-m64 -mtune=generic -fpie'
'PKG_CONFIG_PATH=%{_PKG_CONFIG_PATH}:/usr/lib64/pkgconfig:/usr/share/pkgconfig'


squid.conf

acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged)
machines
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 8180
acl CONNECT method CONNECT
acl wvdial dst 10.1.4.45 10.1.4.50 10.1.4.53 10.1.4.48 10.1.4.54 10.1.4.46
10.1.4.51 10.1.4.47 10.1.4.55 10.1.4.49 10.1.4.52 10.1.2.4
http_access allow wvdial
acl dialer dstdomain .htmedia.net
http_access allow dialer
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
visible_hostname = NOIDAPROXY01.MYDOMAIN.NET
append_domain  .mydomain.net
ignore_expect_100 on
dns_v4_first on
auth_param ntlm program 

Re: [squid-users] Squid3: 100 % CPU load during object caching

2015-07-24 Thread Jens Offenbach
@Marcus:
I am not sure what exactly causes the problems, but could you please run a 
test with these two different settings:
cache_mem 4 GB
maximum_object_size_in_memory 1 GB

I think you will observe the behavior that I was confronted with. The bad 
download rates of 500 KB/sec are gone when I use the following settings:
cache_mem 256 MB
maximum_object_size_in_memory 16 MB

I think Amos has an idea what seems to be the source of the problem:
http://lists.squid-cache.org/pipermail/squid-users/2015-July/004728.html
 
Regards,
Jens


Sent: Friday, 24 July 2015, 14:33
From: Marcus Kool marcus.k...@urlfilterdb.com
To: Jens Offenbach wolle5...@gmx.de, squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid3: 100 % CPU load during object caching

On 07/24/2015 03:25 AM, Jens Offenbach wrote:

 @Marcus:
 Have you verified that the file does not fit into memory and gets cached on 
 disk? On which OS is Squid running? What are your build options of Squid 
 (squid -v)? Is it possible that the issue is not part of 3.4.12? Do we have a 
 regression?

I screwed up earlier, since the maximum_object_size was too low for the test 
with a 1 GB file, so I did a new test.

The system has 64 GB memory and for sure the entire file is in the file system 
cache. The disk system is HW RAID-1 with 1 GB cache.
The OS is Linux 3.10, CentOS 7 latest patches.

New test:
test system: 1 CPU with 4 cores/8 threads @ 3.7 GHz, 64 GB memory, AUFS, 1 Gbit 
pipe, 500 mbit guaranteed

with Squid 3.4.12 :
1st download starts with 90 MB/sec and halfway drops to 30 MB/sec. My guess is 
that the file system cache got stressed and slowed things down.
2nd cached download with 190 MB/sec sustained and 120% CPU time.

With Squid 3.5.6 :
1st download starts with 90 MB/sec sustained and 80% CPU time.
2nd cached download with 190 MB/sec sustained and 120% CPU time.

As a comparison, I did dd if=test of=test2 bs=4k which uses 100% CPU time and 
has a throughput of 1200 MB/sec.
With bs=16k the throughput is 1300 MB/sec and with bs=64k the throughput is 
1400 MB/sec.

relevant parameters :
read_ahead_gap 64 KB
cache_mem 256 MB
maximum_object_size_in_memory 8 MB
maximum_object_size 8000 MB
cache_dir aufs /local/squid34/cache 1 32 256
cache_swap_low 92
cache_swap_high 93
# also ICAP daemon and URL rewriter configured
debug_options ALL,1 93,3 61,9

configure options:
'--prefix=/local/squid35' '--disable-ipv6' '--enable-fd-config' 
'--with-maxfd=3200' '--enable-async-io=64' '--enable-storeio=aufs' 
'--with-pthreads' '--enable-removal-policies=lru'
'--disable-auto-locale' '--enable-default-err-language=English' 
'--enable-err-languages=Dutch English Portuguese' '--with-openssl' 
'--enable-ssl' '--enable-ssl-crtd'
'--enable-cachemgr-hostname=localhost' '--enable-cache-digests' 
'--enable-follow-x-forwarded-for' '--enable-xmalloc-statistics' 
'--disable-hostname-checks' '--enable-epoll' '--enable-icap-client'
'--enable-useragent-log' '--enable-referer-log' '--enable-stacktraces' 
'--enable-underscores' '--disable-icmp' '--mandir=/usr/local/share' 'CC=gcc' 
'CFLAGS=-g -O2 -Wall -march=native' 'CXXFLAGS=-g -O2
-Wall -march=native' --enable-ltdl-convenience

As you can see the cache_mem is small. If Amos finds it useful, I can do 
another test with a larger cache_mem.

Jens, since all your tests have a drop to 500 KB/sec, I think the cause is 
somewhere in the configuration (Squid and/or OS).

Marcus



Re: [squid-users] squid 3.5 with auth and chroot

2015-07-24 Thread Amos Jeffries
On 25/07/2015 2:22 a.m., Jorgeley Junior wrote:
 Thank you so much for the help.

Can't be much help, sorry. I'm just guessing here; I've never actually run
Squid in a chroot myself.

 So, I use the directive 'chroot' in the squid.conf.
 I start squid this way:
 cd /etc/squid-3.5.6
 sbin/squid
 and it starts normally, but when I open the client browser and do an
 authentication it logs the errors and don't authenticate, but the squid
 doesn't stop running, just it logs the error and do not authenticate.

I've just looked up what is displaying that error and why. It is more a case
of the code wrongly using errno to display error text. So the message
itself may be bogus, but some error is happening when fork()'ing and
execv()'ing the helper process.

Some things I think you should try;

1) configure Squid with the full non-chroot path of the binary in the
auth_param line.

2) enter the chroot, downgrade yourself to the squid low-privilege user,
then try running the helper. That's what Squid is doing (see the sketch
after this list).

3) try the chroot directive in squid.conf with a '/' on the end
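A sketch of (2), assuming GNU chroot's --userspec and the paths from
earlier in this thread:

  chroot --userspec=squid:squid /etc/squid-3.5.6 /libexec/basic_ncsa_auth /regras/usuarios
  # then type:  username password
  # and expect OK or ERR on stdout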

I'm out of ideas at this point, apart from patching your squid to fix
the errno usage in ipcCreate() just to see if some other error message
appears. The sad thing about that is that I'm not sure which syscall is
supposed to be error-reported there; quite a few happen in sequence.

Amos



Re: [squid-users] ecap and https

2015-07-24 Thread Amos Jeffries
On 25/07/2015 6:33 a.m., HackXBack wrote:
 Dear Amos,
 you mean if the https is decrypted ?

Yes.

 so yes, it is decrypted and the full url is shown in access.log.
 And no, this adapter didn't work on https pages:
 it can edit content in http pages but not in https pages.
 

Strange. AFAIK there is nothing different/special in the handling of the
messages. Once decryption is done it's all just HTTP on the inside.
Amos


Re: [squid-users] ISSUE accssing content

2015-07-24 Thread Amos Jeffries
On 25/07/2015 6:57 a.m., Mike wrote:
 I see a few issues.
 
 1. The report from the log shows a 192.168.*.* address, a common LAN IP.
 
 Then in the squid.conf:
 2. You have the wvdial destination as 10.1.*.* addresses, which is a
 completely different internal network.
 Typically there will be no internal routing or communication from a
 192.168.*.* address to/from a 10.*.*.* address without a custom routing
 server with 2 network connections, one on each IP set, acting as
 the DNS intermediary for routing. Otherwise, for network/internet
 connections, the computer/browser sees its own IP as the local network and
 everything else, including 10.*.*.*, as an external address out on the
 internet. I would suggest getting both the browsing computer and the
 server on the same IP subnet, as in 192.168.122.x or 10.1.4.x; otherwise
 these issues are likely to continue.

WTF? That's IPv4; there is no IP-range segmentation in that protocol except
127/8. As long as a route exists, 192.* can talk to 10.* no problem.

Also, he has indicated direct connectivity tests are working fine already.

Also, Squid is an application layer gateway. As long as Squid has access
to both networks it should be fine regardless of any obstructions direct
access might have. In fact it's often used to get around that type of
problem, such as IPv4-IPv6 translation.


 
 3. Next in the squid.conf is http_port, which should be a port number only,
 no IP address, especially not 0.0.0.0, which can cause conflicts with squid
 3.x versions. The best bet is to use just the port, as in: http_port 3128, or
 in your case http_port 8080, which is the port (with the server IP found
 in ifconfig) the browser will use to connect through the squid server.

Nope again. The IP address is fine. In the case of 0.0.0.0 it forces
Squid to IPv4-only service on that port, making way for another service
to run IPv6 in parallel with the same ports, or for IPv6 clients to get
rejected at the TCP level.

From the logs presented we can see traffic arriving at Squid and being
serviced, just not with the desired responses.


Amos


Re: [squid-users] squid youtube caching

2015-07-24 Thread Amos Jeffries
On 25/07/2015 3:37 a.m., Yuri Voinov wrote:
 
 No. He said that Squid does that itself. The only question - which Squid.
 

I said that for Alternate-Protocol.
It went into 3.4.10 and 3.5.0.3.


 24.07.15 21:34, joe пишет:
 tks amos so
 doing replace beter as
 reply_header_access Strict-Transport-Security deny all
 
 request_header_replace Strict-Transport-Security max-age=0
 right ?
 

 reply_header_access Strict-Transport-Security deny all
 reply_header_replace Strict-Transport-Security max-age=0

Should work.

For Yuri: support for custom header names, which that depends on, seems to have gone into 3.2.

Amos


Re: [squid-users] ecap and https

2015-07-24 Thread HackXBack
With this conf it works on the same site in http but not in https;
the site is youtube.

#request_header_access Accept-Encoding deny all
#loadable_modules /usr/local/lib/ecap_adapter_modifying.so
#ecap_enable on
#ecap_service ecapModifier respmod_precache \
#uri=ecap://e-cap.org/ecap/services/sample/modifying \
#victim=channels \
#replacement=aaa
#adaptation_access ecapModifier allow all


Can you give it a try?





Re: [squid-users] squid 3.5 with auth and chroot

2015-07-24 Thread Jorgeley Junior
Those are good ideas, I'll try them.
Thanks!!!



Re: [squid-users] squid youtube caching

2015-07-24 Thread Yuri Voinov

No. He said that Squid does that itself. The only question - which Squid.





Re: [squid-users] squid youtube caching

2015-07-24 Thread Amos Jeffries
On 25/07/2015 3:34 a.m., Yuri Voinov wrote:
 
 On 24.07.15 21:15, Amos Jeffries wrote:
 On 25/07/2015 12:38 a.m., Yuri Voinov wrote:

 https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security

 On 24.07.15 18:33, joe wrote:
 I don't see Strict-Transport-Security in my log headers,
 only alternate-protocol.
 Can you post an example link, please?

 
 Note that the header may be sent over an HTTP or HTTPS connection just once
 with a value of up to 68 years. And the domain will be HTTPS from then
 on as far as that client is concerned.
 
 Dropping Strict-Transport-Security therefore does nothing useful.
 In my setup it works for Chrome when the user types youtube.com in the
 command line. The browser goes to http. Always.

Great to hear. I assume they are not placing a long duration on their
HSTS header then. Or that you successfully turned it off in some HTTPS
you intercepted at some point.

Like I said, they *could* send 68 years as the duration of non-HTTP.

 
 But Squid replacing it with a new value of max-age=0;
 includeSubDomains will turn off the HSTS in the client for that domain.
 Which Squid?

I think 3.4+: the ones supporting reply_header_access and
reply_header_replace with custom header names. It was such a small,
rarely mentioned update that I've forgotten when it happened.


 
 Be careful with that though. HSTS is actually a good thing most of the
 time, no matter how annoying it is to us proxying.
 This is a security illusion, which is worse than being insecure.
 

No, HSTS is not an illusion. At least not beyond the illusions offered by
TLS itself (which ssl-bump shines a light on).

HSTS just tells the client to use https:// in its URLs even if the
user types http:// or any page it gets contains an http:// URL. The TLS
connection goes to where the user actually wanted to go, and is as
secure as TLS is. Nothing is transferred over plain-text HTTP that could be
used to divert where the TLS was going.
 All else being equal (i.e. assuming TLS was secure), attackers would have
to control port 443 on the servers belonging to a host that happened to
only be offering port 80 service. A pretty rare thing, that.

In contrast there *is* illusion when an http:// redirects to https://,
because the http:// part can be intercepted and an attacker can replace the
redirect URL with its own https:// URL. HSTS avoids using the redirect
part at all.


 
 
 Regarding Alternate-Protocol;
  The latest Squid will auto-remove it, *always*. It usually indicates a
 protocol experiment taking place at the website being visited (i.e. Google
 and QUIC/SPDY) and does a lot of real damage to network security and
 usability in any proxied network.
 There is no network security under DPI. So all of these things are meaningless. IMHO.
 

DPI?

You recall why I put it in, right? All the complaints from people about
users bypassing their security rules and not being able to identify how
it was happening. It was a bit noisy in here a while back about all that.
That is what I mean by damage. If the person in charge of security doesn't
even know where the traffic is, they have problems.


 All the usability we need, HTTP provides.
 


Amos


Re: [squid-users] ISSUE accssing content

2015-07-24 Thread Jagannath Naidu
1. It's not a transparent proxy.

2. My clients get the WPAD configuration from the AD server. So there are two
questions.
 2.1: I know that WPAD is used to identify the proxy server and port (and the
other bypass rules). When clients resolve wpad.abc.com, is there a way
that I can override the WPAD file on the client? For example, creating a webserver
to serve the WPAD file, and changing the /etc/hosts file to mywebserveripaddress
wpad.abc.com.
2.2: Is there any other way to tell clients, via the Squid server, not to come
to the Squid server and to re-initiate the request?
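
For illustration, a minimal wpad.dat (PAC) sketch of what I mean in 2.1
(the subnet and the proxy address below are placeholders, not our real
values):

  function FindProxyForURL(url, host) {
      // local and internal destinations bypass the proxy
      if (isPlainHostName(host) ||
          isInNet(dnsResolve(host), "10.0.0.0", "255.0.0.0"))
          return "DIRECT";
      // everything else goes through Squid
      return "PROXY proxy.abc.com:3128";
  }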

On 24 July 2015 at 21:10, Jagannath Naidu 
jagannath.na...@fosteringlinux.com wrote:



 On 24 July 2015 at 21:05, Jagannath Naidu 
 jagannath.na...@fosteringlinux.com wrote:

 Dear List,

 I have been working on this for last two weeks, but never got it
 resolved.

 We have an application server (SERVER) in our local network and a desktop
  application (CLIENT). The application picks up its proxy settings from IE. And we
 also have a Websense proxy server.

 case 1: when there is no proxy set
 the application works. No logs in the Squid server access.log

 case 2: when the proxy ip address is set and bypass local network is CHECKED
 the application works. No logs in the Squid server access.log

 case 3: when the proxy ip address is set to the Websense proxy server and
 bypass local network is UNCHECKED
 the application works. We don't have access to the Websense server and hence we
 cannot check its logs


 case 4: when the proxy ip address is set to the Squid proxy server ip address and
 bypass local network is UNCHECKED
 the application does not work :-(. Below are the logs.


 1437751240.149  7 192.168.122.1 TCP_MISS/404 579 GET
 http://dlwvdialce.htmedia.net/UADInstall/UADPresentationLayer.application
 - HIER_DIRECT/10.1.4.46 text/html
 1437751240.992 94 192.168.122.1 TCP_DENIED/407 3757 CONNECT
 0.client-channel.google.com:443 - HIER_NONE/- text/html
 1437751240.996  0 192.168.122.1 TCP_DENIED/407 4059 CONNECT
 0.client-channel.google.com:443 - HIER_NONE/- text/html
 1437751242.327  5 192.168.122.1 TCP_MISS/404 579 GET
 http://dlwvdialce.htmedia.net/UADInstall/uadprop.htm - HIER_DIRECT/
 10.1.4.46 text/html
 1437751244.777  1 192.168.122.1 TCP_MISS/503 4048 POST
 http://cs-711-core.htmedia.net:8180/ConcertoAgentPortal/services/ConcertoAgentPortal
 - HIER_NONE/- text/html

 UPDATE: correct logs

 1437752279.774  6 192.168.122.1 TCP_MISS/404 579 GET
 http://dlwvdialce.htmedia.net/UADInstall/UADPresentationLayer.application
 - HIER_DIRECT/10.1.4.46 text/html
 1437752281.854  5 192.168.122.1 TCP_MISS/404 579 GET
 http://dlwvdialce.htmedia.net/UADInstall/uadprop.htm - HIER_DIRECT/
 10.1.4.46 text/html
 1437752284.265  2 192.168.122.1 TCP_MISS/503 4048 POST
 http://cs-711-core.htmedia.net:8180/ConcertoAgentPortal/services/ConcertoAgentPortal
 - HIER_NONE/- text/html



 squid -v
 Squid Cache: Version 3.3.8
 configure options:  '--build=x86_64-redhat-linux-gnu'
 '--host=x86_64-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr'
 '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin'
 '--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include'
 '--libdir=/usr/lib64' '--libexecdir=/usr/libexec'
 '--sharedstatedir=/var/lib' '--mandir=/usr/share/man'
 '--infodir=/usr/share/info' '--disable-strict-error-checking'
 '--exec_prefix=/usr' '--libexecdir=/usr/lib64/squid' '--localstatedir=/var'
 '--datadir=/usr/share/squid' '--sysconfdir=/etc/squid'
 '--with-logdir=$(localstatedir)/log/squid'
 '--with-pidfile=$(localstatedir)/run/squid.pid'
 '--disable-dependency-tracking' '--enable-eui'
 '--enable-follow-x-forwarded-for' '--enable-auth'
 '--enable-auth-basic=DB,LDAP,MSNT,MSNT-multi-domain,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB,getpwnam'
 '--enable-auth-ntlm=smb_lm,fake'
 '--enable-auth-digest=file,LDAP,eDirectory'
 '--enable-auth-negotiate=kerberos'
 '--enable-external-acl-helpers=file_userip,LDAP_group,time_quota,session,unix_group,wbinfo_group'
 '--enable-cache-digests' '--enable-cachemgr-hostname=localhost'
 '--enable-delay-pools' '--enable-epoll' '--enable-icap-client'
 '--enable-ident-lookups' '--enable-linux-netfilter'
 '--enable-removal-policies=heap,lru' '--enable-snmp' '--enable-ssl'
 '--enable-ssl-crtd' '--enable-storeio=aufs,diskd,ufs' '--enable-wccpv2'
 '--enable-esi' '--enable-ecap' '--with-aio' '--with-default-user=squid'
 '--with-filedescriptors=16384' '--with-dl' '--with-openssl'
 '--with-pthreads' 'build_alias=x86_64-redhat-linux-gnu'
 'host_alias=x86_64-redhat-linux-gnu' 'CFLAGS=-O2 -g -pipe -Wall
 -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong
 --param=ssp-buffer-size=4 -grecord-gcc-switches   -m64 -mtune=generic
 -fpie' 'LDFLAGS=-Wl,-z,relro  -pie -Wl,-z,relro -Wl,-z,now' 'CXXFLAGS=-O2
 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions
 -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches
 -m64 -mtune=generic -fpie'
 'PKG_CONFIG_PATH=%{_PKG_CONFIG_PATH}:/usr/lib64/pkgconfig:/usr/share/pkgconfig'


 squid.conf

 acl localnet src 10.0.0.0/8 # RFC1918 possible 

Re: [squid-users] ecap and https

2015-07-24 Thread HackXBack
Dear Amos,
you mean if the HTTPS is decrypted?
So yes, it is decrypted and the full URL is shown in access.log.
And no, this adapter didn't work on HTTPS pages:
it can edit content in HTTP pages but not in HTTPS pages.






Re: [squid-users] Squid3: 100 % CPU load during object caching

2015-07-24 Thread Marcus Kool



On 07/24/2015 01:01 PM, Jens Offenbach wrote:

@Marcus:
I am not sure what exactly causes the problems, but could you please make a 
test with these two different settings:
cache_mem 4 GB
maximum_object_size_in_memory 1 GB


I think this setting for maximum_object_size_in_memory is too high, regardless
of the performance.
The tests also show that large objects cached on disk have a good performance.
The perfect place for a large ISO image is the disk cache.

I did the test with squid 3.5.6 and got the same result as you have:
the download starts fast but quickly drops.  Squid uses 100% CPU.
wget displays 14 MB/sec ... 10 MB/sec ... 8 7 6 5 4 3 2 MB/sec and stays there 
for a long time.
At 50% downloaded the speed drops more to 1 MB/sec and at the end of the 
download I got 500 KB/sec *average*.
The second cached download was sustained 190 MB/sec and 120% CPU.

I did a second test with
cache_mem 4 GB
maximum_object_size_in_memory 200 MB

The download speed varied a lot: started with 30 MB/sec and went down and up 
many times between 6 MB/sec and 35 MB/sec.
The final average download speed was 31 MB/sec.  100% CPU.
The second cached download was sustained 190 MB/sec and 120% CPU.

Third test with
cache_mem 4 GB
maximum_object_size_in_memory 8 MB

The download speed started with 70 MB/sec and increased to 87 MB/sec.   100% CPU
The second cached download was sustained 190 MB/sec and 120% CPU.

4th test with
cache_mem 4 GB
maximum_object_size_in_memory 32 MB

The download speed started with 40 MB/sec and increased to 75 MB/sec.   100% 
CPU.
The second cached download was sustained 190 MB/sec and 120% CPU.

So Squid appears to have an issue with higher values of
maximum_object_size_in_memory: the higher they are, the worse the performance.
For now, I would not go beyond 16 MB.
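
In squid.conf terms, the combination these tests point to is roughly (a
sketch, not a tuned recommendation):

  cache_mem 4 GB
  maximum_object_size_in_memory 16 MB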
The question is, what is a reasonable size that you would like to be able to
use for maximum_object_size_in_memory?
Do you have any particular requirement for a high maximum_object_size_in_memory?

Marcus


I think you will observe the behavior that I was confronted with. The bad
download rates of 500 KB/sec were gone when I used the following settings:
cache_mem 256 MB
maximum_object_size_in_memory 16 MB

I think Amos has an idea of what the source of the problem might be:
http://lists.squid-cache.org/pipermail/squid-users/2015-July/004728.html

Regards,
Jens


Gesendet: Freitag, 24. Juli 2015 um 14:33 Uhr
Von: Marcus Kool marcus.k...@urlfilterdb.com
An: Jens Offenbach wolle5...@gmx.de, squid-users@lists.squid-cache.org
Betreff: Re: [squid-users] Squid3: 100 % CPU load during object caching

On 07/24/2015 03:25 AM, Jens Offenbach wrote:

I have made a quick test of Squid 3.3.8 on Ubuntu 15.04 and I get the same 
problem: 100 % CPU usage, 500 KB/sec download rate.


Gesendet: Freitag, 24. Juli 2015 um 07:54 Uhr
Von: Jens Offenbach wolle5...@gmx.de
An: Marcus Kool marcus.k...@urlfilterdb.com, Eliezer Croitoru elie...@ngtech.co.il, 
Amos Jeffries squ...@treenet.co.nz, squid-users@lists.squid-cache.org
Betreff: Re: [squid-users] Squid3: 100 % CPU load during object caching
It is not easy for me, but I have tested Squid 3.3.8 from the Ubuntu packaging on a 
real physical infrastructure. I get the same results on the physical machine 
(1x Intel(R) Xeon(R) CPU E3-1225 V2 @ 3.20GHz, 32 GB RAM, 1 TB disk) where Squid is 
running: 100 % CPU usage, 500 KB/sec download rate. All machines are idle and we have 1 
GBit ethernet.

The strace log from the physical test scenario can be found here, but I think it does not 
differ from the virtual test scenario:
http://wikisend.com/download/293856/squid.strace2

@Marcus:
Have you verified that the file does not fit into memory and gets cached on 
disk? On which OS is Squid running? What are your build options of Squid (squid 
-v)? Is it possible that the issue is not part of 3.4.12? Do we have a 
regression?


I screwed up earlier, since the maximum_object_size was too low for the test
with a 1 GB file, so I did a new test.

The system has 64 GB memory and for sure the entire file is in the file system 
cache. The disk system is HW RAID-1 with 1 GB cache.
The OS is Linux 3.10, CentOS 7 latest patches.

New test:
test system: 1 CPU with 4 cores/8 threads @ 3.7 GHz, 64 GB memory, AUFS, 1 Gbit 
pipe, 500 mbit guaranteed

with Squid 3.4.12 :
1st download starts with 90 MB/sec and halfway drops to 30 MB/sec. My guess is 
that the file system cache got stressed and slowed things down.
2nd cached download with 190 MB/sec sustained and 120% CPU time.

With Squid 3.5.6 :
1st download starts with 90 MB/sec sustained and 80% CPU time.
2nd cached download with 190 MB/sec sustained and 120% CPU time.

As a comparison, I did dd if=test of=test2 bs=4k which uses 100% CPU time and 
has a throughput of 1200 MB/sec.
With bs=16k the throughput is 1300 MB/sec and with bs=64k the throughput is 
1400 MB/sec.

relevant parameters :
read_ahead_gap 64 KB
cache_mem 256 MB
maximum_object_size_in_memory 8 MB

Re: [squid-users] ISSUE accssing content

2015-07-24 Thread Amos Jeffries
On 25/07/2015 4:59 a.m., Jagannath Naidu wrote:
 1. It's not a transparent proxy.
 
 2. My clients get the WPAD configuration from the AD server. So there are two
 questions.
  2.1: I know that WPAD is used to identify the proxy server and port (and the
 other bypass rules). When clients resolve wpad.abc.com, is there a way
 that I can override the WPAD file on the client? For example, creating a webserver
 to serve the WPAD file, and changing the /etc/hosts file to mywebserveripaddress
 wpad.abc.com.
 2.2: Is there any other way to tell clients, via the Squid server, not to come
 to the Squid server and to re-initiate the request.

Exactly that, if you wish. It's not clear whether WPAD is the problem though.

The fact that you have Squid logs showing access indicates the traffic
is actually getting there okay. The responses do seem to be coming back
from 10.* servers as well.
So what is happening is that something is causing those servers not to like
the traffic being requested from them.


 
 On 24 July 2015 at 21:10, Jagannath Naidu 
 jagannath.na...@fosteringlinux.com wrote:
 


 On 24 July 2015 at 21:05, Jagannath Naidu 
 jagannath.na...@fosteringlinux.com wrote:

 Dear List,

 I have been working on this for last two weeks, but never got it
 resolved.

  We have an application server (SERVER) in our local network and a desktop
   application (CLIENT). The application picks up its proxy settings from IE. And we
  also have a Websense proxy server.

  case 1: when there is no proxy set
  the application works. No logs in the Squid server access.log

  case 2: when the proxy ip address is set and bypass local network is CHECKED
  the application works. No logs in the Squid server access.log

  case 3: when the proxy ip address is set to the Websense proxy server and
  bypass local network is UNCHECKED
  the application works. We don't have access to the Websense server and hence we
  cannot check its logs

Can you explain 'does not work' in better detail?
 Application expected vs actual behaviour?
 If you can relate that to particular HTTP messages, even better.




 case 4: when proxy ip address is set to proxy server ip address.
 UNCHECKED bypass local network
 application does not work :-(. Below are the logs.


 1437751240.149  7 192.168.122.1 TCP_MISS/404 579 GET
 http://dlwvdialce.htmedia.net/UADInstall/UADPresentationLayer.application
 - HIER_DIRECT/10.1.4.46 text/html

404. The URL you see above references an object that does not exist on
that server.

Things to look into:
 Is it the right server?
 Is it the right URL?
 Why was it requested?
 Does the server actually know its dlwvdialce.htmedia.net name?
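
One quick way to test that outside the application (a hedged sketch,
using the server IP and URL from your logs):

  curl -v -H 'Host: dlwvdialce.htmedia.net' http://10.1.4.46/UADInstall/uadprop.htm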


 1437751240.992 94 192.168.122.1 TCP_DENIED/407 3757 CONNECT
 0.client-channel.google.com:443 - HIER_NONE/- text/html
 1437751240.996  0 192.168.122.1 TCP_DENIED/407 4059 CONNECT
 0.client-channel.google.com:443 - HIER_NONE/- text/html


Authentication. Normal I think.

 1437751242.327  5 192.168.122.1 TCP_MISS/404 579 GET
 http://dlwvdialce.htmedia.net/UADInstall/uadprop.htm - HIER_DIRECT/
 10.1.4.46 text/html

Same as the first 404'd URL.

 1437751244.777  1 192.168.122.1 TCP_MISS/503 4048 POST
 http://cs-711-core.htmedia.net:8180/ConcertoAgentPortal/services/ConcertoAgentPortal
 - HIER_NONE/- text/html

503 usually indicates the attempted server failed.

Makes sense if TCP to cs-711-core.htmedia.net port 8180 did not work.
Which would also match the lack of server IP in the log.
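
A quick check for that from the Squid box (assuming netcat is installed):

  nc -vz cs-711-core.htmedia.net 8180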



 UPDATE: correct logs

 1437752279.774  6 192.168.122.1 TCP_MISS/404 579 GET
 http://dlwvdialce.htmedia.net/UADInstall/UADPresentationLayer.application
 - HIER_DIRECT/10.1.4.46 text/html
 1437752281.854  5 192.168.122.1 TCP_MISS/404 579 GET
 http://dlwvdialce.htmedia.net/UADInstall/uadprop.htm - HIER_DIRECT/
 10.1.4.46 text/html
 1437752284.265  2 192.168.122.1 TCP_MISS/503 4048 POST
 http://cs-711-core.htmedia.net:8180/ConcertoAgentPortal/services/ConcertoAgentPortal
 - HIER_NONE/- text/html


Same comments as above.



 squid -v
 Squid Cache: Version 3.3.8
 configure options:  '--build=x86_64-redhat-linux-gnu'
 '--host=x86_64-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr'
 '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin'
 '--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include'
 '--libdir=/usr/lib64' '--libexecdir=/usr/libexec'
 '--sharedstatedir=/var/lib' '--mandir=/usr/share/man'
 '--infodir=/usr/share/info' '--disable-strict-error-checking'
 '--exec_prefix=/usr' '--libexecdir=/usr/lib64/squid' '--localstatedir=/var'
 '--datadir=/usr/share/squid' '--sysconfdir=/etc/squid'
 '--with-logdir=$(localstatedir)/log/squid'
 '--with-pidfile=$(localstatedir)/run/squid.pid'
 '--disable-dependency-tracking' '--enable-eui'
 '--enable-follow-x-forwarded-for' '--enable-auth'
 '--enable-auth-basic=DB,LDAP,MSNT,MSNT-multi-domain,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB,getpwnam'
 '--enable-auth-ntlm=smb_lm,fake'
 '--enable-auth-digest=file,LDAP,eDirectory'
 '--enable-auth-negotiate=kerberos'
 

Re: [squid-users] cannot leave empty workers

2015-07-24 Thread Alex Wu
Further analysis indicated that the master process created
squid-ssl_session_cache.shm.

In other words, it needs an https_port, or an http_port with ssl-bump, outside
any process-number conditional to create this shared memory segment.

Furthermore, the code should be simplified like this:

diff --git a/squid-3.5.6/src/ssl/support.cc b/squid-3.5.6/src/ssl/support.cc
index 85305ce..0ce95f9 100644
--- a/squid-3.5.6/src/ssl/support.cc
+++ b/squid-3.5.6/src/ssl/support.cc
@@ -2084,9 +2084,6 @@ SharedSessionCacheRr::useConfig()
 void
 SharedSessionCacheRr::create()
 {
-if (!isSslServer()) //no need to configure ssl session cache.
-return;
-
 int items;
 items = Config.SSL.sessionCacheSize / sizeof(Ipc::MemMap::Slot);
 if (items)



This code is called in the master, which may not have the configuration needed
to make isSslServer() return true.

Alex
From: alex_wu2...@hotmail.com
To: squ...@treenet.co.nz; squid-users@lists.squid-cache.org
Date: Fri, 24 Jul 2015 15:28:06 -0700
Subject: Re: [squid-users] cannot leave empty workers




There is a problem.

The code in isSslServer() looks for an https configuration. If none is found, it will
not create /run/shm/ssl_session_cache.shm.

Later, the code somewhere else cannot find it, so the process does not start.

I am not clear on which worker is called first to initialize the session cache.

We see the master and coordinator start properly, so I suspect the coordinator might be
the one to initialize ssl_session_cache?

Or, since all my http_port directives are listed under worker process 4, isSslServer() cannot
find an https_port, so it will not initialize ssl_session_cache.shm.

Somewhere, something is odd.

THX

Alex


 To: squid-users@lists.squid-cache.org
 From: squ...@treenet.co.nz
 Date: Sat, 25 Jul 2015 10:07:18 +1200
 Subject: Re: [squid-users] cannot leave empty workers
 
 On 25/07/2015 7:24 a.m., Alex Wu wrote:
  If I define 4 workers, and use the following way to allocate workers:
  
  if ${process_number} = 4
  //do something
  else
  endif
 
 The else means the wrapped config bit applies to *all* workers and
 processes of Squid except the one in the if-condition (process #4). It
 is optional.
 
 if ${process_number} = 4
  # do something
 endif
 
 It does not even do anything in the code except invert a bitmask. An
 endif then erases that bitmask. So an empty else is effectively
 doing nothing at all.
  Just like one would expect reading that config.
 
 The bug is elsewhere (sorry for the pun).
 
  
  I leave other workers as empty after else, then we encounter this error:
  
  FATAL: Ipc::Mem::Segment::open failed to 
  shm_open(/squid-ssl_session_cache.shm): (2) No such file or directory
  
  If I fill one more workers,especially ${process_number} = 1, then squid can 
  launch workers now,
  
 
 Was that really the full config?
 
 I don't see 'workers 4' in there at all, and something must have been
 configured to use the shared-memory TLS/SSL session cache.
 
 Amos
 


[squid-users] ssl_crtd process doesn't start with Squid 3.5.6

2015-07-24 Thread Stanford Prescott
I have a working implementation of Squid 3.5.5 with ssl-bump. When 3.5.5 is
started with ssl-bump enabled, all the squid and ssl_crtd processes start
and Squid functions as intended when bumping SSL sites. However, when I
upgrade Squid to 3.5.6, squid seems to start but ssl_crtd does not, and Squid
3.5.6 cannot successfully bump SSL.

These are the config options I use for both 3.5.5 and 3.5.6.

--enable-storeio=diskd,ufs,aufs --enable-linux-netfilter \
--enable-removal-policies=heap,lru --enable-delay-pools
--libdir=/usr/lib/ \
--localstatedir=/var --with-dl --with-openssl --enable-http-violations \
--with-large-files --with-libcap --disable-ipv6
--with-swapdir=/var/spool/squid \
 --enable-ssl-crtd --enable-follow-x-forwarded-for

This is the squid.conf file used for both versions.

visible_hostname smoothwallu3

# Uncomment the following to send debug info to /var/log/squid/cache.log
debug_options ALL,1 33,2 28,9

# ACCESS CONTROLS
# 
acl localhostgreen src 10.20.20.1
acl localnetgreen src 10.20.20.0/24

acl SSL_ports port 445 443 441 563
acl Safe_ports port 80# http
acl Safe_ports port 81# smoothwall http
acl Safe_ports port 21# ftp
acl Safe_ports port 445 443 441 563# https, snews
acl Safe_ports port 70 # gopher
acl Safe_ports port 210   # wais
acl Safe_ports port 1025-65535# unregistered ports
acl Safe_ports port 280   # http-mgmt
acl Safe_ports port 488   # gss-http
acl Safe_ports port 591   # filemaker
acl Safe_ports port 777   # multiling http

acl CONNECT method CONNECT

# TAG: http_access
# 



http_access allow localhost
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

http_access allow localnetgreen
http_access allow CONNECT localnetgreen

http_access allow localhostgreen
http_access allow CONNECT localhostgreen

# http_port and https_port
#

# For forward-proxy port. Squid uses this port to serve error pages, ftp
icons and communication with other proxies.
#
http_port 3127

http_port 10.20.20.1:800 intercept
https_port 10.20.20.1:808 intercept ssl-bump generate-host-certificates=on
dynamic_cert_mem_cache_size=4MB
cert=/var/smoothwall/mods/proxy/ssl_cert/squidCA.pem


http_port 127.0.0.1:800 intercept

sslproxy_cert_error allow all
sslproxy_flags DONT_VERIFY_PEER
sslproxy_session_cache_size 4 MB

ssl_bump none localhostgreen

acl step1 at_step SslBump1
acl step2 at_step SslBump2
ssl_bump peek step1
ssl_bump bump all

sslcrtd_program /var/smoothwall/mods/proxy/libexec/ssl_crtd -s
/var/smoothwall/mods/proxy/lib/ssl_db -M 4MB
sslcrtd_children 5

http_access deny all

cache_replacement_policy heap GDSF
memory_replacement_policy heap GDSF

# CACHE OPTIONS
#

cache_effective_user squid
cache_effective_group squid

cache_swap_high 100
cache_swap_low 80

cache_access_log stdio:/var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_mem 64 MB

cache_dir diskd /var/spool/squid/cache 1024 16 256

maximum_object_size 33 MB

minimum_object_size 0 KB


request_body_max_size 0 KB

# OTHER OPTIONS
#

#via off
forwarded_for off

pid_filename /var/run/squid.pid

shutdown_lifetime 30 seconds
icp_port 3130

half_closed_clients off
icap_enable on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_encode off
icap_client_username_header X-Authenticated-User
icap_preview_enable on
icap_preview_size 1024
icap_service service_avi_req reqmod_precache
icap://localhost:1344/squidclamav bypass=off
adaptation_access service_avi_req allow all
icap_service service_avi_resp respmod_precache
icap://localhost:1344/squidclamav bypass=on
adaptation_access service_avi_resp allow all

umask 022

logfile_rotate 0

strip_query_terms off

redirect_program /usr/sbin/squidGuard
url_rewrite_children 5

And the cache.log file when starting 3.5.6 with debug options on in
squid.conf

2015/07/24 17:15:06.230| Acl.cc(380) ~ACL: freeing ACL adaptation_access
2015/07/24 17:15:06.230| Acl.cc(380) ~ACL: freeing ACL adaptation_access
2015/07/24 17:15:06.230| Acl.cc(380) ~ACL: freeing ACL
2015/07/24 17:15:06.230| Acl.cc(380) ~ACL: freeing ACL
2015/07/24 17:15:06.231| Acl.cc(380) ~ACL: freeing ACL
2015/07/24 17:15:06.231| Acl.cc(380) ~ACL: freeing ACL
2015/07/24 17:15:06.231| Acl.cc(380) ~ACL: freeing ACL
2015/07/24 17:15:06.231| Acl.cc(380) ~ACL: freeing ACL
2015/07/24 17:15:06.231| Acl.cc(380) ~ACL: freeing ACL
2015/07/24 17:15:06.231| 

Re: [squid-users] squid 3.5 with auth and chroot

2015-07-24 Thread Amos Jeffries
On 25/07/2015 12:10 a.m., Jorgeley Junior wrote:
 please guys, help me.
 Any suggestions?
 

Squid is not generally run in a chroot. The master / coordinator daemon
manager process requires root access for several things, and spawns
workers that are dropped automatically to highly restricted access
anyway. You have already found out how big the dependency pool of libraries is.

I guess what I'm getting at is that this is a rarely tested situation.

To complicate matters there are three different combinations of chroot
that Squid can run.

* External chroot. Where you enter the chroot before starting Squid and
it thinks the chroot content is the whole system.

* Configured chroot. Where you configure the Squid master process to chroot
its low-privilege workers with the squid.conf chroot directive (see the
sketch after this list).

* Linux containers. Similar to the first, but you don't have to copy
files into a separate chroot area. Just assign visibility/access to the
OS areas.
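
A minimal sketch of the configured variant (the path is illustrative):

  # squid.conf: per the description above, the master chroots its workers here
  chroot /var/squid/chroot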


The error is pretty clear though. The problem is that something is
unable to load a file during helper startup.
Either Squid is unable to read/open/see the helper binary file itself.
Or the helper is unable to open a file it needs to operate.

ipcCreate: is a big hint that it is Squid not finding the helper binary
named.

So is Squid being run from inside a chroot, or using the chroot
directive in squid.conf?


Amos




Re: [squid-users] ISSUE accssing content

2015-07-24 Thread Mike

I see a few issues.

1. The report from the log shows a 192.168.*.* address, a common LAN IP.

Then in the squid.conf:
2. You have the wvdial destination as 10.1.*.* addresses, which is a
completely different internal network.
Typically there will be no internal routing or communication from a
192.168.*.* address to/from a 10.*.*.* address without a custom routing
server with two network connections, one on each IP range, acting as
the DNS intermediary for routing. Otherwise, for network/internet
connections, the computer/browser sees its own range as the local network, and
everything else, including 10.*.*.*, as an external address out on the
internet. I would suggest getting both the browsing computer and the
server on the same IP subnet, as in 192.168.122.x or 10.1.4.x; otherwise
these issues are likely to continue.


3. Next in the squid.conf is http_port, which should be the port number only,
no IP address, especially not 0.0.0.0, which can cause conflicts with squid
3.x versions. The best bet is to use just the port, as in http_port 3128, or
in your case http_port 8080, which is the port (with the server IP found
in ifconfig) the browser will use to connect through the squid server.
4. The bypass local network option means any connection attempt to a local
network IP will not use the proxy. This goes back to the two different IP
subnets. One option is to enter a proxy exception of 10.*.*.* (if the
websense server is using a 10.x.x.x IP address).



Mike


On 7/24/2015 10:35 AM, Jagannath Naidu wrote:

Dear List,

I have been working on this for last two weeks, but never got it 
resolved.


We have an application server (SERVER) in our local network and a 
desktop application (CLIENT). The application picks up its proxy settings 
from IE. And we also have a Websense proxy server.


case 1: when there is no proxy set
application works. No logs in squid server access.log

case 2: when proxy ip address set and checked bypass local network
application works. No logs in squid server access.log

case 3: when the proxy ip address is set to the Websense proxy server and 
bypass local network is UNCHECKED
the application works. We don't have access to the Websense server and hence we 
cannot check its logs



case 4: when proxy ip address is set to proxy server ip address. 
UNCHECKED bypass local network

application does not work :-(. Below are the logs.


1437751240.149  7 192.168.122.1 TCP_MISS/404 579 GET 
http://dlwvdialce.htmedia.net/UADInstall/UADPresentationLayer.application 
- HIER_DIRECT/10.1.4.46 text/html
1437751240.992 94 192.168.122.1 TCP_DENIED/407 3757 CONNECT 
0.client-channel.google.com:443 - HIER_NONE/- text/html
1437751240.996  0 192.168.122.1 TCP_DENIED/407 4059 CONNECT 
0.client-channel.google.com:443 - HIER_NONE/- text/html
1437751242.327  5 192.168.122.1 TCP_MISS/404 579 GET 
http://dlwvdialce.htmedia.net/UADInstall/uadprop.htm - 
HIER_DIRECT/10.1.4.46 text/html
1437751244.777  1 192.168.122.1 TCP_MISS/503 4048 POST 
http://cs-711-core.htmedia.net:8180/ConcertoAgentPortal/services/ConcertoAgentPortal 
- HIER_NONE/- text/html


squid -v
Squid Cache: Version 3.3.8
configure options:  '--build=x86_64-redhat-linux-gnu' 
'--host=x86_64-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr' 
'--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' 
'--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' 
'--libdir=/usr/lib64' '--libexecdir=/usr/libexec' 
'--sharedstatedir=/var/lib' '--mandir=/usr/share/man' 
'--infodir=/usr/share/info' '--disable-strict-error-checking' 
'--exec_prefix=/usr' '--libexecdir=/usr/lib64/squid' 
'--localstatedir=/var' '--datadir=/usr/share/squid' 
'--sysconfdir=/etc/squid' '--with-logdir=$(localstatedir)/log/squid' 
'--with-pidfile=$(localstatedir)/run/squid.pid' 
'--disable-dependency-tracking' '--enable-eui' 
'--enable-follow-x-forwarded-for' '--enable-auth' 
'--enable-auth-basic=DB,LDAP,MSNT,MSNT-multi-domain,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB,getpwnam' 
'--enable-auth-ntlm=smb_lm,fake' 
'--enable-auth-digest=file,LDAP,eDirectory' 
'--enable-auth-negotiate=kerberos' 
'--enable-external-acl-helpers=file_userip,LDAP_group,time_quota,session,unix_group,wbinfo_group' 
'--enable-cache-digests' '--enable-cachemgr-hostname=localhost' 
'--enable-delay-pools' '--enable-epoll' '--enable-icap-client' 
'--enable-ident-lookups' '--enable-linux-netfilter' 
'--enable-removal-policies=heap,lru' '--enable-snmp' '--enable-ssl' 
'--enable-ssl-crtd' '--enable-storeio=aufs,diskd,ufs' 
'--enable-wccpv2' '--enable-esi' '--enable-ecap' '--with-aio' 
'--with-default-user=squid' '--with-filedescriptors=16384' '--with-dl' 
'--with-openssl' '--with-pthreads' 
'build_alias=x86_64-redhat-linux-gnu' 
'host_alias=x86_64-redhat-linux-gnu' 'CFLAGS=-O2 -g -pipe -Wall 
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong 
--param=ssp-buffer-size=4 -grecord-gcc-switches   -m64 -mtune=generic 

[squid-users] cannot leave empty workers

2015-07-24 Thread Alex Wu
If I define 4 workers, and use the following way to allocate workers:

if ${process_number} = 4
//do something
else
endif

I leave the other workers empty after the else, and then we encounter this error:

FATAL: Ipc::Mem::Segment::open failed to 
shm_open(/squid-ssl_session_cache.shm): (2) No such file or directory

If I fill in one more worker, especially ${process_number} = 1, then squid can
launch the workers now.

Alex


Re: [squid-users] squid youtube caching

2015-07-24 Thread joe
Thanks Amos for the info.
Now I figured out why full video was being forced on some Firefox installs; I thought v3.5.6 had a bug.
It turns out some Firefox installs have media.mediasource.webm.enabled set to false, which will play the full WebM
video on YT, and you only see 380p
with no other quality settings. If you enable this to true, all the qualities in the YT
settings are enabled and YT starts serving partial video, jumping from one quality to
another while watching, which sucks, and whatever I do in Squid to force full video on
YT has no effect at all. Weird.
Is there any way to do it from Squid, or do they force it with all new browsers?
I tried Squid on some partial downloads
with
range_offset_limit none
quick_abort_min -1
and it works fine.
Newer browsers suck.
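
For reference, here is that combination sketched with a destination ACL
(the ACL name and domain list are just examples):

  acl yt dstdomain .youtube.com .googlevideo.com
  range_offset_limit none yt
  quick_abort_min -1 KB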





Re: [squid-users] cannot leave empty workers

2015-07-24 Thread Alex Wu
There is a problem.

The code in isSslServer() looks for an https configuration. If none is found, it will
not create /run/shm/ssl_session_cache.shm.

Later, the code somewhere else cannot find it, so the process does not start.

I am not clear on which worker is called first to initialize the session cache.

We see the master and coordinator start properly, so I suspect the coordinator might be
the one to initialize ssl_session_cache?

Or, since all my http_port directives are listed under worker process 4, isSslServer() cannot
find an https_port, so it will not initialize ssl_session_cache.shm.

Somewhere, something is odd.

THX

Alex
 To: squid-users@lists.squid-cache.org
 From: squ...@treenet.co.nz
 Date: Sat, 25 Jul 2015 10:07:18 +1200
 Subject: Re: [squid-users] cannot leave empty workers
 
 On 25/07/2015 7:24 a.m., Alex Wu wrote:
  If I define 4 workers, and use the following way to allocate workers:
  
  if ${process_number} = 4
  //do something
  else
  endif
 
 The else means the wrapped config bit applies to *all* workers and
 processes of Squid except the one in the if-condition (process #4). It
 is optional.
 
 if ${process_number} = 4
  # do something
 endif
 
 It does not even do anything in the code except invert a bitmask. An
 endif then erases that bitmask. So an empty else is effectively
 doing nothing at all.
  Just like one would expect reading that config.
 
 The bug is elsewhere (sorry for the pun).
 
  
  I leave other workers as empty after else, then we encounter this error:
  
  FATAL: Ipc::Mem::Segment::open failed to 
  shm_open(/squid-ssl_session_cache.shm): (2) No such file or directory
  
  If I fill one more workers,especially ${process_number} = 1, then squid can 
  launch workers now,
  
 
 Was that really the full config?
 
  I don't see 'workers 4' in there at all, and something must have been
  configured to use the shared-memory TLS/SSL session cache.
 
 Amos
 


Re: [squid-users] ssl_crtd process doesn't start with Squid 3.5.6

2015-07-24 Thread Stanford Prescott
Thanks for that. Any ideas why I am experiencing that?

Stan


On Fri, Jul 24, 2015 at 7:07 PM, James Lay j...@slave-tothe-box.net wrote:

  On Fri, 2015-07-24 at 17:25 -0500, Stanford Prescott wrote:

 I have a working implementation of Squid 3.5.5 with ssl-bump. When 3.5.5
 is started with ssl-bump enabled, all the squid and ssl_crtd processes start
 and Squid functions as intended when bumping SSL sites. However, when I
 upgrade Squid to 3.5.6, squid seems to start but ssl_crtd does not, and Squid
 3.5.6 cannot successfully bump SSL.


  These are the config options I use for both 3.5.5 and 3.5.6.

  --enable-storeio=diskd,ufs,aufs --enable-linux-netfilter \
 --enable-removal-policies=heap,lru --enable-delay-pools
 --libdir=/usr/lib/ \
 --localstatedir=/var --with-dl --with-openssl --enable-http-violations \
 --with-large-files --with-libcap --disable-ipv6
 --with-swapdir=/var/spool/squid \
  --enable-ssl-crtd --enable-follow-x-forwarded-for



  This is the squid.conf file used for both versions.

  visible_hostname smoothwallu3

 # Uncomment the following to send debug info to /var/log/squid/cache.log
 debug_options ALL,1 33,2 28,9

 # ACCESS CONTROLS
 # 
 acl localhostgreen src 10.20.20.1
 acl localnetgreen src 10.20.20.0/24

 acl SSL_ports port 445 443 441 563
 acl Safe_ports port 80# http
 acl Safe_ports port 81# smoothwall http
 acl Safe_ports port 21# ftp
 acl Safe_ports port 445 443 441 563# https, snews
 acl Safe_ports port 70 # gopher
 acl Safe_ports port 210   # wais
 acl Safe_ports port 1025-65535# unregistered ports
 acl Safe_ports port 280   # http-mgmt
 acl Safe_ports port 488   # gss-http
 acl Safe_ports port 591   # filemaker
 acl Safe_ports port 777   # multiling http

 acl CONNECT method CONNECT

 # TAG: http_access
 # 



 http_access allow localhost
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports

 http_access allow localnetgreen
 http_access allow CONNECT localnetgreen

 http_access allow localhostgreen
 http_access allow CONNECT localhostgreen

 # http_port and https_port

 #

 # For forward-proxy port. Squid uses this port to serve error pages, ftp
 icons and communication with other proxies.

 #
 http_port 3127

 http_port 10.20.20.1:800 intercept
 https_port 10.20.20.1:808 intercept ssl-bump
 generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
 cert=/var/smoothwall/mods/proxy/ssl_cert/squidCA.pem


 http_port 127.0.0.1:800 intercept

 sslproxy_cert_error allow all
 sslproxy_flags DONT_VERIFY_PEER
 sslproxy_session_cache_size 4 MB

 ssl_bump none localhostgreen

 acl step1 at_step SslBump1
 acl step2 at_step SslBump2
 ssl_bump peek step1
 ssl_bump bump all

 sslcrtd_program /var/smoothwall/mods/proxy/libexec/ssl_crtd -s
 /var/smoothwall/mods/proxy/lib/ssl_db -M 4MB
 sslcrtd_children 5

 http_access deny all

 cache_replacement_policy heap GDSF
 memory_replacement_policy heap GDSF

 # CACHE OPTIONS
 #
 
 cache_effective_user squid
 cache_effective_group squid

 cache_swap_high 100
 cache_swap_low 80

 cache_access_log stdio:/var/log/squid/access.log
 cache_log /var/log/squid/cache.log
 cache_mem 64 MB

 cache_dir diskd /var/spool/squid/cache 1024 16 256

 maximum_object_size 33 MB

 minimum_object_size 0 KB


 request_body_max_size 0 KB

 # OTHER OPTIONS
 #
 
 #via off
 forwarded_for off

 pid_filename /var/run/squid.pid

 shutdown_lifetime 30 seconds
 icp_port 3130

 half_closed_clients off
 icap_enable on
 icap_send_client_ip on
 icap_send_client_username on
 icap_client_username_encode off
 icap_client_username_header X-Authenticated-User
 icap_preview_enable on
 icap_preview_size 1024
 icap_service service_avi_req reqmod_precache
 icap://localhost:1344/squidclamav bypass=off
 adaptation_access service_avi_req allow all
 icap_service service_avi_resp respmod_precache
 icap://localhost:1344/squidclamav bypass=on
 adaptation_access service_avi_resp allow all

 umask 022

 logfile_rotate 0

 strip_query_terms off

 redirect_program /usr/sbin/squidGuard
 url_rewrite_children 5


  And the cache.log file when starting 3.5.6 with debug options on in
 squid.conf

 2015/07/24 17:15:06.230| Acl.cc(380) ~ACL: freeing ACL adaptation_access
 2015/07/24 17:15:06.230| Acl.cc(380) ~ACL: freeing ACL adaptation_access
 2015/07/24 17:15:06.230| Acl.cc(380) ~ACL: freeing ACL
 2015/07/24 17:15:06.231| Acl.cc(380) ~ACL: freeing ACL
 2015/07/24 17:15:06.231| Acl.cc(380) ~ACL: freeing ACL
 2015/07/24 17:15:06.231| 

Re: [squid-users] ssl_crtd process doesn't start with Squid 3.5.6

2015-07-24 Thread James Lay
On Fri, 2015-07-24 at 19:15 -0500, Stanford Prescott wrote:
 Thanks for that. Any ideas why I am experiencing that?
 
 
 
 Stan
 
 
 
 
 On Fri, Jul 24, 2015 at 7:07 PM, James Lay j...@slave-tothe-box.net
 wrote:
 
 On Fri, 2015-07-24 at 17:25 -0500, Stanford Prescott wrote: 
 
  I have a working implementation of Squid 3.5.5 with
  ssl-bump. When 3.5.5 is started with ssl-bump enabled all
  the squid and ssl_crtd processes start and Squid functions
  as intended when bumping ssl sites. However, when I bump
  Squid to 3.5.6 squid seems to start but ssl_crtd does not
  and Squid 3.5.6 cannot successfully bump ssl.
  
  
  These are the config options I use for both 3.5.5 and 3.5.6.
  
  --enable-storeio=diskd,ufs,aufs --enable-linux-netfilter \
  --enable-removal-policies=heap,lru --enable-delay-pools
  --libdir=/usr/lib/ \
  --localstatedir=/var --with-dl --with-openssl
  --enable-http-violations \
  --with-large-files --with-libcap --disable-ipv6
  --with-swapdir=/var/spool/squid \
   --enable-ssl-crtd --enable-follow-x-forwarded-for
  
  
  
  This is the squid.conf file used for both versions.
  
  visible_hostname smoothwallu3
  
  # Uncomment the following to send debug info
  to /var/log/squid/cache.log
  debug_options ALL,1 33,2 28,9
  
  # ACCESS CONTROLS
  #
  
  acl localhostgreen src 10.20.20.1
  acl localnetgreen src 10.20.20.0/24
  
  acl SSL_ports port 445 443 441 563
  acl Safe_ports port 80# http
  acl Safe_ports port 81# smoothwall http
  acl Safe_ports port 21# ftp 
  acl Safe_ports port 445 443 441 563# https, snews
  acl Safe_ports port 70 # gopher
  acl Safe_ports port 210   # wais  
  acl Safe_ports port 1025-65535# unregistered ports
  acl Safe_ports port 280   # http-mgmt
  acl Safe_ports port 488   # gss-http 
  acl Safe_ports port 591   # filemaker
  acl Safe_ports port 777   # multiling http
  
  acl CONNECT method CONNECT
  
  # TAG: http_access
  #
  
  
  
  
  http_access allow localhost
  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports
  
  http_access allow localnetgreen
  http_access allow CONNECT localnetgreen
  
  http_access allow localhostgreen
  http_access allow CONNECT localhostgreen
  
  # http_port and https_port
  
 #
  
  # For forward-proxy port. Squid uses this port to serve
  error pages, ftp icons and communication with other proxies.
  
 #
  http_port 3127
  
  http_port 10.20.20.1:800 intercept
  https_port 10.20.20.1:808 intercept ssl-bump
  generate-host-certificates=on
  dynamic_cert_mem_cache_size=4MB
  cert=/var/smoothwall/mods/proxy/ssl_cert/squidCA.pem
  
  
  http_port 127.0.0.1:800 intercept
  
  sslproxy_cert_error allow all
  sslproxy_flags DONT_VERIFY_PEER
  sslproxy_session_cache_size 4 MB
  
  ssl_bump none localhostgreen
  
  acl step1 at_step SslBump1
  acl step2 at_step SslBump2
  ssl_bump peek step1
  ssl_bump bump all
  
  sslcrtd_program /var/smoothwall/mods/proxy/libexec/ssl_crtd
  -s /var/smoothwall/mods/proxy/lib/ssl_db -M 4MB
  sslcrtd_children 5
  
  http_access deny all
  
  cache_replacement_policy heap GDSF
  memory_replacement_policy heap GDSF
  
  # CACHE OPTIONS
  #
  
 
  cache_effective_user squid
  cache_effective_group squid
  
  cache_swap_high 100
  cache_swap_low 80
  
  cache_access_log stdio:/var/log/squid/access.log
  cache_log /var/log/squid/cache.log
  cache_mem 64 MB
  
  cache_dir diskd /var/spool/squid/cache 1024 16 256
  
  maximum_object_size 33 MB
  
  minimum_object_size 0 KB
  
  
  request_body_max_size 0 KB
  
  # 

Re: [squid-users] ISSUE accssing content

2015-07-24 Thread Jagannath Naidu
Thanks Amos, Mike.



On 25 July 2015 at 03:20, Amos Jeffries squ...@treenet.co.nz wrote:

 On 25/07/2015 4:59 a.m., Jagannath Naidu wrote:
  1. Its not  a transparent proxy.
 
  2. My clients get wpad configuration from AD server. So there are two
  question.
   2.1 :I know that wpad is used to identify proxy server and port(and rest
  other bypass rules).  When clients resolve to wpad.abc.com, is there way
  that I can overwrite the wpad file off client. Like creating a webserver
 to
  to serve wpad file and I change /etc/hosts file to
 myhwebserveripaddress
  wpad.abc.com
  2.2 Is there any other way to tell clients via squid server, to do not
 come
  to squid server and re initiate the request.

 Exactly that if you wish. Its not clear whether WPAD is the problem though.

 The fact that you have Squid logs showing access indicates the traffic
 us actually getting there okay. The responses do seem to be coming back
 from 10.* servers as well.
 So what is happening is something is causing those servers not to like
 the traffic being requested from them.


 
  On 24 July 2015 at 21:10, Jagannath Naidu 
  jagannath.na...@fosteringlinux.com wrote:
 
 
 
  On 24 July 2015 at 21:05, Jagannath Naidu 
  jagannath.na...@fosteringlinux.com wrote:
 
  Dear List,
 
  I have been working on this for last two weeks, but never got it
  resolved.
 
  We have a application server (SERVER) in our local network and a
 desktop
   application (CLIENT). The application picks proxy settings from IE.
 And we
  also have a wensense proxy server
 
  case 1: when there is no proxy set
  application works. No logs in squid server access.log
 
  case 2: when proxy ip address set and checked bypass local network
  application works. No logs in squid server access.log
 
  case 3: when proxy ip address is set to wensense proxy server.
 UNCHECKED
  bypass local network
  application works. We dont have access to websense server and hence we
  can not check logs

 Can you explain not works in any better detail?
  application expected vs actual behaviour?
  if you can relate that to particular HTTP messages even better.

The application is Aspect Unified IP Agent Desktop. It is a dialer
application (VoIP), used on Windows machines.
The rest of the cases:

When the application is launched, it shows that it has joined domain HTP. HTP
is the default; we can change to another from the drop-down list.

Case 4: does not work.

In this case it shows no drop-down list, not even a single option like
HTP. The application cannot connect to the server anymore, and I cannot make or
receive calls anymore.



 
 
  case 4: when proxy ip address is set to proxy server ip address.
  UNCHECKED bypass local network
  application does not work :-(. Below are the logs.
 
 
  1437751240.149  7 192.168.122.1 TCP_MISS/404 579 GET
 
 http://dlwvdialce.htmedia.net/UADInstall/UADPresentationLayer.application
  - HIER_DIRECT/10.1.4.46 text/html

 404. The URL you see above references an object that does not exist on
 that server.

 Things to look into:
  Is it the right server?

Yes

  Is it the right URL?

Yes

  Why was it requested?

Don't know. These were the only logs I could get from access.log. The server
is Microsoft IIS, HTTP/1.1.


  Does the server actually know its dlwvdialce.htmedia.net name?

Yes. It is resolvable: 1) ping dlwvdialce works; 2) ping
dlwvdialce.htmedia.net works.

Initially dlwvdialce was not resolving to any host. That's why I used
append_domain .htmedia.net in squid.conf (it worked for other applications).




  1437751240.992 94 192.168.122.1 TCP_DENIED/407 3757 CONNECT
  0.client-channel.google.com:443 - HIER_NONE/- text/html
  1437751240.996  0 192.168.122.1 TCP_DENIED/407 4059 CONNECT
  0.client-channel.google.com:443 - HIER_NONE/- text/html


 Authentication. Normal I think.

Yes, NTLM auth.



  1437751242.327  5 192.168.122.1 TCP_MISS/404 579 GET
  http://dlwvdialce.htmedia.net/UADInstall/uadprop.htm - HIER_DIRECT/
  10.1.4.46 text/html

 Same as the first 404'd URL.


 1437751244.777  1 192.168.122.1 TCP_MISS/503 4048 POST
 
 http://cs-711-core.htmedia.net:8180/ConcertoAgentPortal/services/ConcertoAgentPortal
  - HIER_NONE/- text/html

 503 usually indicates the attempted server failed.

 Makes sense if TCP to cs-711-core.htmedia.net port 8180 did not work.
 Which would also match the lack of server IP in the log.


 1) ping cs-711-core.htmedia.net does not work: no such host
 2) ping cs-711-core does not work: no such host




 
  UPDATE: correct logs
 
  1437752279.774  6 192.168.122.1 TCP_MISS/404 579 GET
 
 http://dlwvdialce.htmedia.net/UADInstall/UADPresentationLayer.application
  - HIER_DIRECT/10.1.4.46 text/html
  1437752281.854  5 192.168.122.1 TCP_MISS/404 579 GET
  http://dlwvdialce.htmedia.net/UADInstall/uadprop.htm - HIER_DIRECT/
  10.1.4.46 text/html
  1437752284.265  2 192.168.122.1 TCP_MISS/503 4048 POST
 
 

Re: [squid-users] ISSUE accssing content

2015-07-24 Thread Jagannath Naidu
Thanks, Mike.
But I think Amos is right.

On 25 July 2015 at 00:27, Mike mcsn...@afo.net wrote:

  I see a few issues.

 1. The report from the log shows a 192.168.*.* address, common LAN IP


The ip 192.168.122.1 is the ip address of the virtual interface (it acts as the
default gateway for the virtual machines). I did the NATing using iptables.


 Then in the squid.conf:
 2. You have wvdial destination as 10.1.*.* addresses, which is a
 completely different internal network.
 Typically there will be no internal routing or communication from a
 192.168..*.* address to/from a 10.*.*.* address without a custom routing
 server with 2 network connections, one from each IP set and to act as the
 DNS intermediary for routing. Otherwise for network/internet connections,
 the computer/browser sees its own IP as local network, and everything else
 including 10.*.*.* as an external address out on the internet. I would
 suggest getting both the browsing computer and the server on the same IP
 subset, as in 192.168.122.x or 10.1.4.x, otherwise these issues are likely
 to continue.


I have two Squid servers:
1. squid 3.1 on a physical server
2. squid 3.3 on a VM hosted by 1

Same logs. No different results.

So when the client requests port 8080, 3.1 serves it; when the client requests
3128, 3.3 serves it.
The application behavior is the same for both.



 3. Next in the squid.conf is http_port which should be port number only,
 no IP address, especially 0.0.0.0 which can cause conflicts with squid 3.x
 versions. Best bet is use just port only, as in: http_port 3128 or in
 your case http_port 8080, which is the port (with server IP found in
 ifconfig) the browser will use to connect through the squid server.


I tried your suggestion, but it did not work. Same results :-(


 4. The bypass local network means any IP connection attempt to a local
 network IP will not use the proxy. This goes back to the 2 different IP
 subsets. One option is to enter a proxy exception as 10.*.*.* (if the
 websense server is using 10.x.x.x IP address).


I was wondering what Websense would have deployed.

@Amos, Mike: can we override a client's WPAD using the Squid server, or by any
automatic means?



 Mike


Jagannath Naidu


Re: [squid-users] log source port from squid server?

2015-07-24 Thread Kevin Kretz


- Original Message -
From: Antony Stone antony.st...@squid.open.source.it
To: squid-users@lists.squid-cache.org
Sent: Friday, July 24, 2015 8:49:13 AM
Subject: Re: [squid-users] log source port from squid server?

 Does http://www.squid-cache.org/Doc/config/logformat/ help?


I saw that page earlier but misunderstood what this meant:

lp Local port number of the last server or peer connection

Looks like that does what I want.  Thank you for the quick assistance.
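
For anyone searching later, a sketch of a custom format using that code
(the format name is made up; the rest mirrors the default squid format,
with the outgoing server port appended via %<lp):

  logformat squidport %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %un %Sh/%<a:%<lp %mt
  access_log /var/log/squid/access.log squidport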


Re: [squid-users] squid youtube caching

2015-07-24 Thread joe
squid v 3.5.6
i dont think range_offset_limit none google
or range_offset_limit -1 google
or
request_header_access Accept-Ranges deny all
reply_header_access Accept-Ranges deny all
request_header_replace Accept-Ranges none
reply_header_replace Accept-Ranges none
ar working any one try v3.5.6 and see if they do pls i keep getting partial
file on yt
is there somthing i shuld do to make thim active ?





Re: [squid-users] cannot leave empty workers

2015-07-24 Thread Amos Jeffries
On 25/07/2015 11:53 a.m., Alex Wu wrote:
 Further analysis indicated that the master process created
 squid-ssl_session_cache.shm.
 
 In other words, it needs an https_port, or an http_port with ssl-bump, outside
 any process-number conditional to create this shared memory segment.
 
 Furthermore, the code should be simplified like this:
 
 diff --git a/squid-3.5.6/src/ssl/support.cc b/squid-3.5.6/src/ssl/support.cc
 index 85305ce..0ce95f9 100644
 --- a/squid-3.5.6/src/ssl/support.cc
 +++ b/squid-3.5.6/src/ssl/support.cc
 @@ -2084,9 +2084,6 @@ SharedSessionCacheRr::useConfig()
  void
  SharedSessionCacheRr::create()
  {
 -if (!isSslServer()) //no need to configure ssl session cache.
 -return;
 -
  int items;
  items = Config.SSL.sessionCacheSize / sizeof(Ipc::MemMap::Slot);
  if (items)
 
 
 
 This code is called in the master, which may not have the configuration needed
 to make isSslServer() return true.
 

The bug is in why that SharedSessionCacheRr is not being run by the worker.

AFAIK, that is how the worker is supposed to attach to the shared
memory: the first process to access the SHM does the create, the others attach.
Amos



[squid-users] RE Peek and Splice error SSL_accept failed

2015-07-24 Thread Sebastian Kirschner
Hi,

I minimized the configuration a little bit (you can see it at the bottom of
this message).

I am also still trying to understand why this error happens. I increased the debug
level and saw that Squid tried 48 times to peek but failed.
At the end it says that it got a Hello. Does that mean Squid received
the Hello after 48 tries?

If yes, why does it need so many tries?

- Part of debug log -
2015/07/24 11:05:42.866 kid1| client_side.cc(4242) clientPeekAndSpliceSSL: 
Start peek and splice on FD 11
2015/07/24 11:05:42.866 kid1| bio.cc(120) read: FD 11 read 11 = 11
2015/07/24 11:05:42.866 kid1| bio.cc(146) readAndBuffer: read 11 out of 11 bytes
2015/07/24 11:05:42.866 kid1| bio.cc(150) readAndBuffer: recorded 11 bytes of 
TLS client Hello
2015/07/24 11:05:42.866 kid1| ModEpoll.cc(116) SetSelect: FD 11, type=1, 
handler=1, client_data=0x7effbd078458, timeout=0
2015/07/24 11:05:42.866 kid1| client_side.cc(4245) clientPeekAndSpliceSSL: 
SSL_accept failed.
.
.
.
2015/07/24 11:05:42.874 kid1| client_side.cc(4242) clientPeekAndSpliceSSL: 
Start peek and splice on FD 11
2015/07/24 11:05:42.874 kid1| bio.cc(120) read: FD 11 read 6 = 11
2015/07/24 11:05:42.874 kid1| bio.cc(146) readAndBuffer: read 6 out of 11 bytes
2015/07/24 11:05:42.874 kid1| bio.cc(150) readAndBuffer: recorded 6 bytes of 
TLS client Hello
2015/07/24 11:05:42.875 kid1| SBuf.cc(152) assign: SBuf2040 from c-string, n=0)
2015/07/24 11:05:42.875 kid1| SBuf.cc(152) assign: SBuf2038 from c-string, n=13)
2015/07/24 11:05:42.875 kid1| ModEpoll.cc(116) SetSelect: FD 11, type=1, 
handler=1, client_data=0x7effbd078458, timeout=0
2015/07/24 11:05:42.875 kid1| client_side.cc(4245) clientPeekAndSpliceSSL: 
SSL_accept failed.
2015/07/24 11:05:42.875 kid1| SBuf.cc(152) assign: SBuf2025 from c-string, 
n=4294967295)
2015/07/24 11:05:42.875 kid1| client_side.cc(4259) clientPeekAndSpliceSSL: I 
got hello. Start forwarding the request!!!

- new configuration -
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localnet
http_access allow localhost
http_access deny all

# Listening Ports
http_port 127.0.0.1:3120
http_port 192.168.1.104:3128 intercept
https_port 192.168.1.104:3129 intercept ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=10MB cert=/etc/squid3/ssl_cert/myCA.pem

# some configuration options
cache_effective_user proxy
cache_effective_group proxy
access_log /var/squid/logs/access.log
cache_log /var/squid/logs/cache.log
pinger_enable on
pinger_program /lib/squid3/pinger
sslproxy_capath /etc/ssl/certs
sslcrtd_program /lib/squid3/ssl_crtd -s /var/squid/certs -M 4MB -b 2048

#ACLs
acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3
acl bypass ssl::server_name www.google.de

ssl_bump peek step1
ssl_bump splice bypass step2
ssl_bump bump all

# Debugging if needeed
debug_options all,6 6,0 16,0 18,0 19,0 20,0 32,0 47,0 79,0 90,0 92,0

# Leave coredumps in the first cache dir
coredump_dir /var/spool/squid3

#
# Add any of your own refresh_pattern entries above these.
#
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320


Best Regards

Sebastian


Re: [squid-users] squid youtube caching

2015-07-24 Thread Yuri Voinov

Firefox and Chrome use HSTS for yt and some other hardcoded sites, like
Twitter. This means TLS is forced, from the client side.
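For reference, HSTS arrives as an ordinary response header; an illustrative
value (the numbers are just an example) looks like:

Strict-Transport-Security: max-age=31536000; includeSubDomains

Once a browser has seen it, the browser itself upgrades every future request
for that site to HTTPS, before any proxy gets a say.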

On 24.07.15 18:01, joe wrote:
 HTTP, bro, no SSL, no HTTPS.
 Plain HTTP. Does anyone know a way to force yt to use HTTP?
 You can force Google and yt to use HTTP; other sites are hard to do.

 There is a way (I have not tried it) for Facebook and some other sites to
 cache HTTP instead of HTTPS, but you still have to use SSL CONNECT on the
 main domain, and there are lots of rewriters to forward to HTTP. Most of
 the subdomains are HTTPS, but without CONNECT it is safe to rewrite
 https to http, though they are tunneled.







Re: [squid-users] squid youtube caching

2015-07-24 Thread joe
I don't see Strict-Transport-Security in my log headers, only
Alternate-Protocol.
Can you post an example link, please?





Re: [squid-users] squid youtube caching

2015-07-24 Thread Yuri Voinov

BTW, if you are concerned about users' privacy, you must block neither
QUIC/SPDY nor HSTS. These are all about users' privacy.

But in that case, forget about caching yt or anything else. Completely.
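(If you choose caching over privacy anyway, a minimal sketch of forcing
clients back to TCP is to drop QUIC's UDP ports at the gateway; the chain
and rule placement here are assumptions for your firewall layout:

iptables -A FORWARD -p udp --dport 80 -j DROP
iptables -A FORWARD -p udp --dport 443 -j DROP

Chrome then falls back to plain TLS over TCP, which an intercepting Squid
can at least see.)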

On 24.07.15 18:22, joe wrote:
 You can deny those protocols:
 reply_header_access alternate-protocol deny all
 so it won't push the client to use UDP 443 or UDP 80.
 That is what they are doing.







Re: [squid-users] squid youtube caching

2015-07-24 Thread Yuri Voinov

https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security

On 24.07.15 18:33, joe wrote:
 I don't see Strict-Transport-Security in my log headers, only
 Alternate-Protocol.
 Can you post an example link, please?







Re: [squid-users] squid youtube caching

2015-07-24 Thread Yuri Voinov

Also, you can disable HSTS ;)

On 24.07.15 10:33, d...@getbusi.com wrote:
 Not to go off-topic here, but you folks are all SSL Bumping youtube.com /
 googlevideo.com in order to do this caching, right?

 I want to make sure I'm not missing some secret way to make YouTube use
 plain HTTP.

 On Fri, Jul 24, 2015 at 8:24 AM, Eliezer Croitoru elie...@ngtech.co.il
 wrote:

 Hey Joe,
 I understand the need for caching YouTube, but it might not be as
 possible as it was in the past.
 There was someone here on the list who offers a product that helps to
 cache YouTube videos, but I do not know the secret behind it.
 The partial content has a special key in it, and YouTube changed a
 couple of things to allow variable bitrate.
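 (For context: partial video means HTTP range requests; an illustrative
 exchange, with made-up byte counts, looks like:

 GET /videoplayback?... HTTP/1.1
 Range: bytes=0-1048575

 HTTP/1.1 206 Partial Content
 Content-Range: bytes 0-1048575/4194304

 and as far as I know Squid does not cache 206 replies at all, which is
 part of what makes this harder.)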
 All The Bests,
 Eliezer
 On 23/07/2015 19:00, joe wrote:
 My English is not great, so be patient, thanks.
 Hi, I set up yt caching and it works perfectly, but I need to ask.
 First: Squid 3.5.6.
 I need to know how yt detects and sends partial video.
 I have two computers, same Flash version, same Firefox version, all
 identical except one is Windows XP and the other is Win7.
 I cache HTML5. On Win7 yt sends partial video; on WinXP it sends the
 full video.
 I set these to none:
 request_header_access Accept-Ranges deny all
 reply_header_access Accept-Ranges deny all
 request_header_replace Accept-Ranges none
 reply_header_replace Accept-Ranges none
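 # (note: the *_header_replace directives only take effect for headers
 # that the matching *_header_access rule has denied)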

 So what causes the partial video on Win7? Is it some header, or ...?

 Do you think denying Accept-Ranges is not working?
 Or is it something else? Thanks for any help.










Re: [squid-users] squid 3.5 with auth and chroot

2015-07-24 Thread Jorgeley Junior
Please, guys, help me.
Any suggestions?

2015-07-23 13:28 GMT-03:00 Jorgeley Junior jorge...@gmail.com:

 Before anything else, thanks so much for the answers!!!
 It exists, I'm sure.
 This is my chroot structure:
 / (linux root)
 /etc/
   squid-3.5.6/
     bin/
       purge
       squidclient
     cache/
       (squid cache dirs generated by squid -z)
     etc/
       cachemgr.conf
       errorpage.css
       group
       gshadow
       hosts
       localtime
       mime.conf
       nsswitch.conf
       passwd
       resolv.conf
       shadow
       squid.conf
     lib64/
       (a lot of libs here, discovered with the ldd command)
     libexec/
       basic_ncsa_auth
       diskd
       (other default squid libs)
     regras/
       (my acl files rules)
     sbin/
       squid
     share/
       errors/
         (default squid error pages)
       icons/
         (default squid icons)
       man/
         (default squid man pages)
     usr/
       lib64/
         (a lot of libs here, discovered with the ldd command)
     var/
       logs/
         (default squid logs)
       run/
         squid.pid

 I ran this command:
 chroot /etc/squid-3.5.6 /libexec/basic_ncsa_auth
 It runs; that's why I'm sure the chroot environment, at least as far as
 ncsa_auth is concerned, is correct.

 Any more suggestions?

 2015-07-23 11:42 GMT-03:00 Amos Jeffries squ...@treenet.co.nz:

 On 23/07/2015 11:23 p.m., Jorgeley Junior wrote:
  Hi guys.
  I have RedHat 6.6 + Squid 3.5.6 + basic_ncsa_auth + chroot, and it
  crashes only when I do an authentication.
 
  Here is the main confs:
  auth_param basic program /libexec/basic_ncsa_auth /regras/usuarios
  auth_param basic children 10 startup=0 idle=1
  auth_param basic realm INTERNET-LOGIN NECESSARIO
  ... (other confs) ...
  acl usuarios proxy_auth -i /etc/squid-3.5.6/regras/usuarios
  ... (other confs) ...
  chroot /etc/squid-3.5.6
 
  Here is what I find in the cache.log:
  2015/07/22 18:47:27.866 kid1| WARNING: no_suid: setuid(0): (1) Operation not permitted
  2015/07/22 18:48:01.735 kid1| ipcCreate: /libexec/basic_ncsa_auth: (2) No such file or directory
  2015/07/22 18:47:27.866 kid1| WARNING: basicauthenticator #Hlpr13818 exited
  What is ipcCreate, and why is it not finding the file?

 It is the code that runs the helper.

 The /libexec/basic_ncsa_auth does not exist as an executable binary
 inside your chroot.


 
  About the libs needed for the chroot: do I have to copy them into the
  squid folder, or do I need to recreate the same structure, like
  /squid-3.5.6/libs and /squid-3.5.6/lib64?

 They must match the OS layout where Squid (and everything else that will
 run in the chroot) expects to find them.
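 For example, a minimal sketch (the library names here are illustrative;
 use whatever ldd actually reports on your system):

 ldd /etc/squid-3.5.6/libexec/basic_ncsa_auth
 mkdir -p /etc/squid-3.5.6/lib64
 cp /lib64/libcrypt.so.1 /lib64/libc.so.6 /etc/squid-3.5.6/lib64/

 Every shared object ldd lists, plus the dynamic loader itself, must exist
 at the same path inside the chroot.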

 Amos







Re: [squid-users] squid youtube caching

2015-07-24 Thread joe
You can deny those protocols:
reply_header_access alternate-protocol deny all
so it won't push the client to use UDP 443 or UDP 80.
That is what they are doing.
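For reference, the header being stripped looks like this in Google's
replies (the value shown is illustrative):

Alternate-Protocol: 443:quic

Without it, the browser never learns that it may switch that site to QUIC
over UDP.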





Re: [squid-users] squid youtube caching

2015-07-24 Thread Yuri Voinov

Wrong. To block HSTS you need to use:

# Disable HSTS
reply_header_access Strict-Transport-Security deny all

Alternate-Protocol is a different story altogether.

UDP/80 and UDP/443 are about the QUIC and SPDY protocols; they have
nothing to do with HSTS.

Learn more ;)
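Putting both fixes side by side, a minimal sketch of the two header filters
discussed in this thread (at the cost of users' privacy, as noted earlier):

# Disable HSTS
reply_header_access Strict-Transport-Security deny all
# Disable the QUIC/SPDY advertisement
reply_header_access Alternate-Protocol deny all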

On 24.07.15 18:22, joe wrote:
 You can deny those protocols:
 reply_header_access alternate-protocol deny all
 so it won't push the client to use UDP 443 or UDP 80.
 That is what they are doing.




