Re: [squid-users] Squid3: 100 % CPU load during object caching

2015-07-21 Thread Amos Jeffries
On 22/07/2015 12:31 a.m., Jens Offenbach wrote:
 Thank you very much for your detailed explanations. We want to use Squid in
 order to accelerate our automated software setup processes via Puppet.
 Actually, Squid will host only a very small number of large objects (10-20).
 Its purpose is not to cache web traffic or small objects.

Ah, Squid does not host, it caches. The difference may seem trivial at
first glance, but it is the critical factor in whether a proxy or a
local web server is the best tool for the job.

From my own experience with Puppet, yes, Squid is the right tool. But
only because the Puppet server was using relatively slow Python code to
generate objects and not doing server-side caching on its own. If that
situation has changed in recent years then Squid's usefulness will also
have changed.


 The hit ratio for all the hosted objects will be very high, because most of
 our VMs require the same software stack.
 I will update my config according to your comments! Thanks a lot!
 But I still have no idea why the download rates are so unsatisfying.
 We are still in the test phase. We have only one client that requests a large
 object from Squid, and the transfer rates are lower than 1 MB/sec during cache
 build-up without any form of concurrency. Have you got an idea what could be
 the source of the problem here? Why does the Squid process cause 100 % CPU
 usage?

I did not see any config causing the known 100% CPU bugs to be
encountered in your case (eg. HTTPS going through delay pools guarantees
100% CPU). Which leads me to think it's probably related to memory
shuffling. (http://bugs.squid-cache.org/show_bug.cgi?id=3189 appears
to be the same and still unidentified.)

As for speed, if the CPU is maxed out by one particular action Squid
won't have time for much other work. So things go slow.

On the other hand Squid is also optimized for relatively high traffic
usage. For very small client counts (such as under 10) it is effectively
running in idle mode 99% of the time. The I/O event loop starts pausing
for 10ms blocks waiting to see if some more useful amount of work can be
done at the end of the wait. That can lead to apparent network slowdown
as TCP gets up to 10ms delay per packet. But that should not be visible
in CPU numbers.


That said, 1 client can still max out Squid CPU and/or NIC throughput
capacity on a single request if it's pushing/pulling packets fast enough.


If you can attach the strace tool to Squid when it's consuming the CPU
there might be some better hints about where to look.
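For example, something along these lines (a rough sketch; the worker PID and
the output file names are placeholders to adjust):

  # find the PID of the Squid worker (kid) process that is burning CPU
  top -b -n 1 | grep squid

  # attach and count syscalls; stop with Ctrl-C to get the summary table
  strace -c -f -p <worker-pid> -o /tmp/squid-strace-summary.txt

  # or capture the raw call trace with timestamps and per-call durations
  strace -tt -T -f -p <worker-pid> -o /tmp/squid-strace-full.txt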


Cheers
Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SSL connection failed due to SNI after content redirection

2015-07-21 Thread Alex Wu
It depends on how you set up Squid, and where the connection is broken. The
patch addressed the issue that occurred when using SslBump and content
redirection together.

Alex

 Date: Tue, 21 Jul 2015 17:27:43 -0700
 From: hack.b...@hotmail.com
 To: squid-users@lists.squid-cache.org
 Subject: Re: [squid-users] SSL connection failed due to SNI after content
 redirection
 
 I have something like this issue:
 SSL connections fail when used with mobile apps.
 Your patch doesn't solve the problem.
 How can I find out what causes this problem?
 Thanks.
 
 
 
 --
 View this message in context: 
 http://squid-web-proxy-cache.1019090.n4.nabble.com/SSL-connction-failed-due-to-SNI-after-content-redirection-tp4672339p4672369.html
 Sent from the Squid - Users mailing list archive at Nabble.com.
 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users
  ___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SSL connection failed due to SNI after content redirection

2015-07-21 Thread HackXBack
I have something like this issue:
SSL connections fail when used with mobile apps.
Your patch doesn't solve the problem.
How can I find out what causes this problem?
Thanks.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/SSL-connction-failed-due-to-SNI-after-content-redirection-tp4672339p4672369.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SSL connection failed due to SNI after content redirection

2015-07-21 Thread HackXBack
:~/squid-3.5.6-20150716-r13865# patch -p0 --verbose < sni.patch
Hmm...  Looks like a unified diff to me...
The text leading up to this was:
--
|--- src/ssl/PeerConnector.cc
|+++ src/ssl/PeerConnector.cc
--
Patching file src/ssl/PeerConnector.cc using Plan A...
patch:  malformed patch at line 16:  debugs(83, 5, "SNIserver " << sniServer);





--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/SSL-connction-failed-due-to-SNI-after-content-redirection-tp4672339p4672366.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SSL connection failed due to SNI after content redirection

2015-07-21 Thread Alex Wu



The patch has been manually modified to meet code review.

Here is the patch without any manual modification:

diff --git a/squid-3.5.6/src/ssl/PeerConnector.cc b/squid-3.5.6/src/ssl/PeerConnector.cc
index b4dfd8f..d307665 100644
--- a/squid-3.5.6/src/ssl/PeerConnector.cc
+++ b/squid-3.5.6/src/ssl/PeerConnector.cc
@@ -189,8 +189,13 @@ Ssl::PeerConnector::initializeSsl()
 
         // Use SNI TLS extension only when we connect directly
         // to the origin server and we know the server host name.
-        const char *sniServer = hostName ? hostName->c_str() :
-                                (!request->GetHostIsNumeric() ? request->GetHost() : NULL);
+        const char *sniServer = NULL;
+        const bool redirected = request->flags.redirected && ::Config.onoff.redir_rewrites_host;
+        if (!hostName || redirected)
+            sniServer = !request->GetHostIsNumeric() ? request->GetHost() : NULL;
+        else
+            sniServer = hostName->c_str();
+
         if (sniServer)
             Ssl::setClientSNI(ssl, sniServer);
     }
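To apply the diff exactly as posted (with the a/squid-3.5.6/... path prefixes),
the strip level has to match; roughly, assuming it was saved as sni.patch (the
file name is only an example):

  cd squid-3.5.6
  # -p2 strips the leading "a/squid-3.5.6/" and "b/squid-3.5.6/" components
  patch -p2 --verbose < sni.patch
  # or, if git is available:
  git apply -p2 --verbose sni.patch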

Alex


 Date: Tue, 21 Jul 2015 12:59:29 -0700
 From: hack.b...@hotmail.com
 To: squid-users@lists.squid-cache.org
 Subject: Re: [squid-users] SSL connection failed due to SNI after content
 redirection
 
 :~/squid-3.5.6-20150716-r13865# patch -p0 --verbose < sni.patch
 Hmm...  Looks like a unified diff to me...
 The text leading up to this was:
 --
 |--- src/ssl/PeerConnector.cc
 |+++ src/ssl/PeerConnector.cc
 --
 Patching file src/ssl/PeerConnector.cc using Plan A...
 patch:  malformed patch at line 16:  debugs(83, 5, "SNIserver " << sniServer);
 
 
 
 
 
 --
 View this message in context: 
 http://squid-web-proxy-cache.1019090.n4.nabble.com/SSL-connction-failed-due-to-SNI-after-content-redirection-tp4672339p4672366.html
 Sent from the Squid - Users mailing list archive at Nabble.com.
 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users

  ___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SSL connection failed due to SNI after content redirection

2015-07-21 Thread HackXBack
~/squid-3.5.6-20150716-r13865# patch -p0 --verbose < sni.patch
Hmm...  Looks like a unified diff to me...
The text leading up to this was:
--
|diff --git src/ssl/PeerConnector.cc src/ssl/PeerConnector.cc
|index b4dfd8f..d307665 100644
|--- src/ssl/PeerConnector.cc
|+++ src/ssl/PeerConnector.cc
--
Patching file src/ssl/PeerConnector.cc using Plan A...
Hunk #1 succeeded at 189.
Hmm...  Ignoring the trailing garbage.
done




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/SSL-connction-failed-due-to-SNI-after-content-redirection-tp4672339p4672368.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] AUFS vs. DISKS

2015-07-21 Thread FredB

 Fred,
 I compared the two diskd.cc source files, Squid 3.4.8 and 3.5.6, both
 official, and found no diff.
 So using the diskd from 3.4 with 3.5 does not seem to be a good idea; the
 result should be the same.
 
 Fred

No crash for you?

I can confirm this discussion:
http://squid-web-proxy-cache.1019090.n4.nabble.com/BUG-3279-HTTP-reply-without-Date-td4664990.html
The crashes are related to aufs.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] AUFS vs. DISKS

2015-07-21 Thread Stakres
Hi Fred,

No error, no crash.
Only some warnings:
2015/07/21 11:21:02 kid1| DiskThreadsDiskFile::openDone: (2) No such file or directory
But we can live with these warnings; Squid will take care of the missing
objects...

Bye Fred



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/AUFS-vs-DISKS-tp4672209p4672352.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid3: 100 % CPU load during object caching

2015-07-21 Thread Jens Offenbach
I am running Squid3 3.3.8 on Ubuntu 14.04. Squid3 has been installed from the
Ubuntu package repository. In my scenario, Squid has to cache big files >= 1
GB. At the moment, I am getting very bad transfer rates, lower than 1 MB/sec. I
have checked the connectivity using iperf3. It gives me a bandwidth of 853
Mbits/sec between the nodes.
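For reference, the kind of iperf3 check meant above (a sketch; host name and
duration are placeholders):

  iperf3 -s                            # on the Squid host
  iperf3 -c squid-host.example -t 30   # on the client node, ~30 second run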

I have tried to investigate the problem and noticed that when there is no
cache hit for a requested object, the Squid process reaches 100 % of one CPU
core shortly after startup. The download rate drops down to 1 MB/sec. When I
have a cache hit, I only get 30 MB/sec for my download.

Is there something wrong with my config? I have already tried Squid 3.3.14 and
get the same result. Unfortunately, I was not able to build Squid 3.5.5 and 3.5.6.

Here is my squid.conf:
# ACCESS CONTROLS
# 
  acl intranet src 139.2.0.0/16
  acl intranet src 193.96.112.0/21
  acl intranet src 192.109.216.0/24
  acl intranet src 100.1.4.0/22
  acl localnet src 10.0.0.0/8
  acl localnet src 172.16.0.0/12
  acl localnet src 192.168.0.0/16
  acl localnet src fc00::/7
  acl localnet src fe80::/10
  acl to_intranet dst 139.2.0.0/16
  acl to_intranet dst 193.96.112.0/21
  acl to_intranet dst 192.109.216.0/24
  acl to_intranet dst 100.1.4.0/22
  acl to_localnet dst 10.0.0.0/8
  acl to_localnet dst 172.16.0.0/12
  acl to_localnet dst 192.168.0.0/16
  acl to_localnet dst fc00::/7
  acl to_localnet dst fe80::/10
  http_access allow manager localhost
  http_access deny  manager
  http_access allow localnet
  http_access allow localhost
  http_access deny all

# NETWORK OPTIONS
# 
  http_port 0.0.0.0:3128

# OPTIONS WHICH AFFECT THE NEIGHBOR SELECTION ALGORITHM
# 
  cache_peer proxy.mycompany.de parent 8080 0 no-query no-digest

# MEMORY CACHE OPTIONS
# 
  maximum_object_size_in_memory 1 GB
  memory_replacement_policy heap LFUDA
  cache_mem 4 GB

# DISK CACHE OPTIONS
# 
  maximum_object_size 10 GB
  cache_replacement_policy heap GDSF
  cache_dir aufs /var/cache/squid3 88894 16 256 max-size=10737418240

# LOGFILE OPTIONS
# 
  access_log daemon:/var/log/squid3/access.log squid
  cache_store_log daemon:/var/log/squid3/store.log

# OPTIONS FOR TROUBLESHOOTING
# 
  cache_log /var/log/squid3/cache.log
  coredump_dir /var/log/squid3

# OPTIONS FOR TUNING THE CACHE
# 
  cache allow localnet
  cache allow localhost
  cache allow intranet
  cache deny  all
  refresh_pattern ^ftp:             1440  20%  10080
  refresh_pattern ^gopher:          1440   0%   1440
  refresh_pattern -i (/cgi-bin/|\?)    0   0%      0
  refresh_pattern .                    0  20%   4320

# HTTP OPTIONS
# 
  via off

# ADMINISTRATIVE PARAMETERS
# 
  cache_effective_user proxy
  cache_effective_group proxy

# ICP OPTIONS
# 
  icp_port 0

# OPTIONS INFLUENCING REQUEST FORWARDING 
# 
  nonhierarchical_direct on
  prefer_direct off
  always_direct allow to_localnet
  always_direct allow to_localhost
  always_direct allow to_intranet
  never_direct  allow all

# MISCELLANEOUS
# 
  memory_pools off
  forwarded_for off
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] squid 3.5.5 - assertion failed

2015-07-21 Thread Arnaud Meyer

Hi,

I'm using Squid 3.5.5 on Debian Wheezy as a transparent proxy with https 
interception. About once per hour I'm getting the following error 
message before Squid crashes:


   assertion failed: Read.cc:69: "fd_table[conn->fd].halfClosedReader != NULL"


The gdb stack trace is attached below.

Any help would be much appreciated.

Arnaud

---

Program received signal SIGABRT, Aborted.
0x74b1b165 in raise () from /lib/x86_64-linux-gnu/libc.so.6
#0  0x74b1b165 in raise () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x74b1e3e0 in abort () from /lib/x86_64-linux-gnu/libc.so.6
#2  0x005a1adf in xassert (msg=<optimized out>, file=<optimized out>, line=<optimized out>) at debug.cc:544
#3  0x007c65f8 in comm_read_base (conn=..., buf=0x99bb140 "", size=16383, callback=...) at Read.cc:69
#4  0x0067b75b in comm_read (callback=..., len=16383, buf=0x99bb140 "", conn=...) at comm/Read.h:58
#5  StoreEntry::delayAwareRead (this=<optimized out>, conn=..., buf=0x99bb140 "", len=16383, callback=...) at store.cc:255
#6  0x005f90d2 in HttpStateData::maybeReadVirginBody (this=0x83f0ed8) at http.cc:1515
#7  0x005f959c in HttpStateData::sendRequest (this=this@entry=0x83f0ed8) at http.cc:2154
#8  0x005f9d7b in HttpStateData::start (this=0x83f0ed8) at http.cc:2268
#9  0x00731393 in JobDialer<AsyncJob>::dial (this=0xaee57d0, call=...) at ../../src/base/AsyncJobCalls.h:174
#10 0x0072d909 in AsyncCall::make (this=0xaee57a0) at AsyncCall.cc:40
#11 0x00731887 in AsyncCallQueue::fireNext (this=this@entry=0xe025f0) at AsyncCallQueue.cc:56
#12 0x00731bd0 in AsyncCallQueue::fire (this=0xe025f0) at AsyncCallQueue.cc:42
#13 0x005c128c in EventLoop::runOnce (this=this@entry=0x7fffe9f0) at EventLoop.cc:120
#14 0x005c1430 in EventLoop::run (this=0x7fffe9f0) at EventLoop.cc:82
#15 0x00627083 in SquidMain (argc=<optimized out>, argv=<optimized out>) at main.cc:1511
#16 0x0052b08b in SquidMainSafe (argv=<optimized out>, argc=<optimized out>) at main.cc:1243
#17 main (argc=<optimized out>, argv=<optimized out>) at main.cc:1236
A debugging session is active.

Inferior 1 [process 10969] will be killed.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid 3.5.5 - assertion failed

2015-07-21 Thread Arnaud Meyer

No, I'm not using this option.

You can see my complete squid.conf here: http://pastebin.com/mzFBDLpY

On 21.07.2015 at 12:55, HackXBack wrote:

Are you using the range_offset_limit option?



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-3-5-5-assertion-failed-tp4672353p4672354.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid 3.5.5 - assertion failed

2015-07-21 Thread HackXBack
Are you using the range_offset_limit option?



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-3-5-5-assertion-failed-tp4672353p4672354.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid3: 100 % CPU load during object caching

2015-07-21 Thread Amos Jeffries
On 21/07/2015 7:59 p.m., Jens Offenbach wrote:
 I am running Squid3 3.3.8 on Ubuntu 14.04. Squid3 has been installed from the
 Ubuntu package repository. In my scenario, Squid has to cache big files >= 1
 GB. At the moment, I am getting very bad transfer rates, lower than 1 MB/sec.
 I have checked the connectivity using iperf3. It gives me a bandwidth of 853
 Mbits/sec between the nodes.
 
 I have tried to investigate the problem and noticed that when there is no
 cache hit for a requested object, the Squid process reaches 100 % of one CPU
 core shortly after startup. The download rate drops down to 1 MB/sec. When
 I have a cache hit, I only get 30 MB/sec for my download.
 
 Is there something wrong with my config? I have already tried Squid 3.3.14 and
 get the same result. Unfortunately, I was not able to build Squid 3.5.5 and
 3.5.6.
 

Squid-3 is better able to cope with large objects than Squid-2 was. But
there are still significant problems.


Firstly, you only have space in memory for 4x 1GB objects. Total. If you
are dealing with such large objects at any regular frequency, you need
a much larger cache_mem setting.


Secondly, consider that Squid-3.3 places *all* active transactions into
cache_mem. 4GB of memory cache can store ~4 million x 1KB transactions,
or only 4 x 1GB transactions.

If you have a cache full of small objects happily sitting in memory and
a request for a 1GB object comes in, a huge number of those small
objects need to be pushed out of the memory cache onto disk, the memory
reallocated for use by the big one, and possibly a 1GB object loaded from
disk into the memory cache.

Then consider that GB-sized object sitting in cache as it gets near to
being the oldest in memory. The next request is probably a puny little
0-1KB object, and Squid may have to repeat all the GB-sized shufflings to
and from disk just to make memory space for that KB.

As you can imagine any one part of that process takes a lot of work and
time with a big object involved as compared to only small objects being
involved. The whole set of actions can be excruciatingly painful if the
proxy is busy.


Thirdly, you also only have 88GB of disk cache in total. Neither that nor
the memory cache is sufficient for trying to cache GB-sized objects.
The tradeoff is whether one GB-sized object is going to get HITs often
enough to be worth not caching the million or so smaller objects
that could be taking its place. For most uses the tradeoff only makes
sense with high traffic on the large objects and/or TB of disk space.


My rule-of-thumb advice for caching is to size it so that you can store
at least a few thousand maximum-sized objects at once in the allocated space.

So a 4GB memory cache is reasonable for 1MB-sized objects, and an 80GB disk
cache is reasonable for ~100MB-sized objects.

That keeps almost all web page traffic able to stay in memory, and bigger but
popular media/video objects on disk. The big things like Windows
Service Packs or whole DVD downloads get slower network fetches as
needed. If those latter are actually a problem for you, get a bigger disk
cache; you *will* need it.
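As a concrete sketch of that rule of thumb in squid.conf terms (the numbers
only illustrate the ratio and are not a recommendation for this specific setup):

  # ~4 GB memory cache holding objects up to ~1 MB each
  cache_mem 4 GB
  maximum_object_size_in_memory 1 MB

  # ~80 GB disk cache holding objects up to ~100 MB each
  maximum_object_size 100 MB
  cache_dir aufs /var/cache/squid3 81920 16 256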


And a free audit for your config...


 Here is my squid.conf:
 # ACCESS CONTROLS
 # 
   acl intranet src 139.2.0.0/16
   acl intranet src 193.96.112.0/21
   acl intranet src 192.109.216.0/24
   acl intranet src 100.1.4.0/22
   acl localnet src 10.0.0.0/8
   acl localnet src 172.16.0.0/12
   acl localnet src 192.168.0.0/16
   acl localnet src fc00::/7
   acl localnet src fe80::/10
   acl to_intranet dst 139.2.0.0/16
   acl to_intranet dst 193.96.112.0/21
   acl to_intranet dst 192.109.216.0/24
   acl to_intranet dst 100.1.4.0/22
   acl to_localnet dst 10.0.0.0/8
   acl to_localnet dst 172.16.0.0/12
   acl to_localnet dst 192.168.0.0/16
   acl to_localnet dst fc00::/7
   acl to_localnet dst fe80::/10

The intended purpose behind the localnet and to_localnet ACLs is that
they match your intranet / LAN / local network ranges.

The ones we distribute are just common standard ranges. You can simplify
your config by adding the intranet ranges to localnet and dropping all
the 'intranet' ACLs (a sketch of the merged version follows below).

... BUT ...


   http_access allow manager localhost
   http_access deny  manager
   http_access allow localnet
   http_access allow localhost
   http_access deny all

... noting how the intranet ACLs are not used to permit access through
the proxy. Maybe just dropping them entirely is better. If this is a
working proxy they are not being used.
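A sketch of that merged version (keeping only the ranges that really exist on
your network):

  acl localnet src 10.0.0.0/8
  acl localnet src 172.16.0.0/12
  acl localnet src 192.168.0.0/16
  acl localnet src fc00::/7
  acl localnet src fe80::/10
  acl localnet src 139.2.0.0/16
  acl localnet src 193.96.112.0/21
  acl localnet src 192.109.216.0/24
  acl localnet src 100.1.4.0/22

  http_access allow localnet
  http_access allow localhost
  http_access deny all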


 
 # NETWORK OPTIONS
 # 
   http_port 0.0.0.0:3128
 
 # OPTIONS WHICH AFFECT THE NEIGHBOR SELECTION ALGORITHM
 # 
   cache_peer proxy.mycompany.de parent 8080 0 no-query no-digest
 
 # MEMORY CACHE OPTIONS
 # 

Re: [squid-users] squid 3.5.5 - assertion failed

2015-07-21 Thread Ortega Gustavo Martin
Hi, I was having the same problem, so we had to downgrade to Squid Cache: Version
3.4.13-20150709-r13225.

I hope someone can help us.

Regards.

Gustavo

-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On behalf of
Arnaud Meyer
Sent: Tuesday, 21 July 2015 08:13 a.m.
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] squid 3.5.5 - assertion failed

No, I'm not using this option.

You can see my complete squid.conf here: http://pastebin.com/mzFBDLpY

On 21.07.2015 at 12:55, HackXBack wrote:
 Are you using the range_offset_limit option?



 --
 View this message in context: 
 http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-3-5-5-asserti
 on-failed-tp4672353p4672354.html Sent from the Squid - Users mailing 
 list archive at Nabble.com.
 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid3: 100 % CPU load during object caching

2015-07-21 Thread Eliezer Croitoru

On 21/07/2015 10:59, Jens Offenbach wrote:

Is there something wrong with my config? I have already tried Squid 3.3.14 and get
the same result. Unfortunately, I was not able to build Squid 3.5.5 and 3.5.6.


What was the issue?
I am using 3.5.6 on 14.04.2 64 bit.

Eliezer

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users