Re: [squid-users] Can squid 3.5.2 really support rock in a WCCP tproxy environment?

2015-03-09 Thread johnzeng

Hello Amos:

--- 



For starters,
 WCCP is a network protocol Squid uses to inform remote routers that it
is active and what traffic it can receive.
 rock is a layout format for bits stored on a disk.
 ... they are *completely* unrelated.

-

Do you mean running two different processes for WCCP redirection and cache
operation?

The first process for WCCP redirection,

and the other process for cache operation?

On 2015-03-09 13:01, Amos Jeffries wrote:

On 9/03/2015 4:38 p.m., johnzeng wrote:


Hello Dear All :

I recently ran into a problem when configuring a WCCP (tproxy) environment
with Squid 3.5.2.

If I disable the cache_dir rock part, WCCP (tproxy) works, with cache_dir
aufs enabled:

#cache_dir rock /accerater/webcache3/storage/rock1 2646 min-size=4096
max-size=262144 max-swap-rate=250 swap-timeout=350

But if I enable the cache_dir rock part, WCCP (tproxy) fails, still with
cache_dir aufs enabled:

cache_dir rock /accerater/webcache3/storage/rock1 2646 min-size=4096
max-size=262144 max-swap-rate=250 swap-timeout=350

Is something wrong in my config? If possible, please give me some advice.


For starters,
  WCCP is a network protocol Squid uses to inform remote routers that it
is active and what traffic it can receive.
  rock is a layout format for bits stored on a disk.
  ... they are *completely* unrelated.
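
To illustrate how separate they are: the two features are configured with
completely independent directives, and the WCCP part below would behave the
same whether the cache_dir line says rock, aufs, or is missing entirely (a
sketch; the router address 192.0.2.1 is purely illustrative):

  # WCCP interception - works the same with any (or no) cache_dir
  wccp2_router 192.0.2.1
  http_port 3221 tproxy

  # on-disk storage layout - invisible to WCCP
  cache_dir rock /accerater/webcache3/storage/rock1 2646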




This is my config


thanks

---

coredump_dir /accerater/logs/webcache3/
unlinkd_program /accerater/webcache3/libexec/unlinkd
pid_filename /accerater/logs/webcache3/opmizer1/cache.pid


workers 2
cpu_affinity_map process_numbers=1,2 cores=1,3

cache_dir rock /accerater/webcache3/storage/rock1 2646 min-size=4096
max-size=262144 max-swap-rate=250 swap-timeout=350
cache_dir rock /accerater/webcache3/storage/rock1 2646 min-size=4096
max-size=262144 max-swap-rate=250 swap-timeout=350

You are telling Squid to start two controllers to the database file
/accerater/webcache3/storage/rock1 from *each* worker. There is zero
benefit from this, and the two controllers may encounter collisions as
they compete for access to the disk without sharing atomic locks. That
leads to cache corruption.

Remove one of those two lines.
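
The remaining single line is then shared by all workers automatically,
since rock is SMP-aware (a sketch; this is your own line, just defined
once):

  cache_dir rock /accerater/webcache3/storage/rock1 2646 min-size=4096 max-size=262144 max-swap-rate=250 swap-timeout=350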



if ${process_number} = 1

cache_swap_state /accerater/logs/webcache3/opmizer1_swap_log1

Don't use cache_swap_state.


access_log stdio:/accerater/logs/webcache3/opmizer1_access.log squid

Use this instead (mind the wrap):

access_log
stdio:/accerater/logs/webcache3/opmizer${process_number}_access.log squid


cache_log /accerater/logs/webcache3/opmizer1_cache.log


Use this instead:

cache_log /accerater/logs/webcache3/opmizer${process_number}_cache.log


cache_store_log stdio:/accerater/logs/webcache3/opmizer1_store.log

You should not need cache_store_log at all.

Either remove it or use this instead (mind the wrap):

cache_store_log
stdio:/accerater/logs/webcache3/opmizer${process_number}_store.log



url_rewrite_program /accerater/webcache3/media/mediatool/media2
store_id_program /accerater/webcache3/media/mediatool/media1

Why do you have different binary executable names for the two workers'
helpers?

If they are actually different, then the traffic will have different
things applied randomly depending on which worker happened to accept the
TCP connection. If they are the same, then you only need to define them
once and workers will start their own sets as needed.
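
For example, if they really are the same programs, a single global pair of
lines replaces all four (a sketch, assuming the media2/media1 binaries are
the intended ones):

  url_rewrite_program /accerater/webcache3/media/mediatool/media2
  store_id_program /accerater/webcache3/media/mediatool/media1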



unique_hostname fast_opmizer1
snmp_port 3401

Use this instead:

  unique_hostname fast_opmizer${process_number}
  snmp_port 340${process_number}


All of the above details can move up out of the per-worker area.
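
Put together, the per-worker settings above could collapse to something
like this outside the if/endif blocks (a sketch built only from your own
values):

  access_log stdio:/accerater/logs/webcache3/opmizer${process_number}_access.log squid
  cache_log /accerater/logs/webcache3/opmizer${process_number}_cache.log
  unique_hostname fast_opmizer${process_number}
  snmp_port 340${process_number}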



#cache_dir rock /accerater/webcache3/storage/rock1 2646 min-size=4096
max-size=262144 max-swap-rate=250 swap-timeout=350

cache_dir aufs /accerater/webcache3/storage/aufs1/${process_number} 5200
16 64 min-size=262145

else

#endif


if ${process_number} = 2


cache_swap_state /accerater/logs/webcache3/opmizer2_swap_log
access_log stdio:/accerater/logs/webcache3/opmizer2_access.log squid
cache_log /accerater/logs/webcache3/opmizer2_cache.log
cache_store_log stdio:/accerater/logs/webcache3/opmizer2_store.log
url_rewrite_program /accerater/webcache3/media/mediatool/media4
store_id_program /accerater/webcache3/media/mediatool/media3
unique_hostname fast_opmizer2
snmp_port 3402


Same notes as for worker 1.



#cache_dir rock /accerater/webcache3/storage/rock2 2646 min-size=4096
max-size=262144 max-swap-rate=250 swap-timeout=350

cache_dir aufs /accerater/webcache3/storage/aufs1/${process_number} 5200
16 64 min-size=262145

endif

endif



http_port 127.0.0.1:3220
http_port 3221 tproxy

wccp_version 2
wccp2_router 

Re: [squid-users] FATAL: xcalloc: Unable to allocate 18446744073487757627 blocks of 1 bytes!

2015-03-09 Thread HackXBack
root@debian:/etc/squid# gdb /usr/sbin/squid /var/spool/squid/cache/squid/core
GNU gdb (GDB) 7.4.1-debian
Copyright (C) 2012 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
http://gnu.org/licenses/gpl.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as x86_64-linux-gnu.
For bug reporting instructions, please see:
http://www.gnu.org/software/gdb/bugs/...
Reading symbols from /usr/sbin/squid...done.
[New LWP 7263]

warning: Can't read pathname for load map: Input/output error.
[Thread debugging using libthread_db enabled]
Using host libthread_db library /lib/x86_64-linux-gnu/libthread_db.so.1.
Core was generated by `(squid-1) -YC -f /etc/squid/squid.conf'.
Program terminated with signal 6, Aborted.
#0  0x7f938c52d165 in raise () from /lib/x86_64-linux-gnu/libc.so.6
(gdb) backtrace
#0  0x7f938c52d165 in raise () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x7f938c5303e0 in abort () from /lib/x86_64-linux-gnu/libc.so.6
#2  0x00573bc5 in fatal_dump (message=0xbaf580 "xcalloc: Unable to
allocate 18446744073645296089 blocks of 1 bytes!\n") at fatal.cc:138
#3  0x007ab976 in xcalloc (n=18446744073645296089, sz=1) at
xalloc.cc:82
#4  0xbfebfbff in ?? ()
#5  0x0006 in ?? ()
#6  0x1000 in ?? ()
#7  0x0011 in ?? ()
#8  0x0064 in ?? ()
#9  0x0003 in ?? ()
#10 0x00400040 in ?? ()
#11 0x0004 in ?? ()
#12 0x0038 in ?? ()
#13 0x0005 in ?? ()
#14 0x0008 in ?? ()
#15 0x0007 in ?? ()
#16 0x7f3c7d951000 in ?? ()
#17 0x0008 in ?? ()
#18 0x in ?? ()
(gdb)
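
In case it helps with analysis, a few standard gdb commands run against the
same core might show what xcalloc was actually asked for (plain gdb usage,
nothing Squid-specific):

(gdb) frame 3        # select the xcalloc frame from the backtrace
(gdb) info args      # print the n and sz arguments as gdb sees them
(gdb) bt full        # backtrace including local variables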






[squid-users] Random SSL bump DB corruption

2015-03-09 Thread Dan Charlesworth
Hey folks

After having many of our systems running Squid 3.4.12 for a couple of weeks
now, we had two different deployments fail today due to SSL DB corruption.

We'd never seen this in almost 9 months of SSL bump being in production, and
there were no problems in either cache log until the “wrong number of
fields” lines, apparently.

Anyone else?

Deployment #1 log excerpt:
wrong number of fields on line 505 (looking for field 6, got 1, '' left)
(squid_ssl_crtd): The SSL certificate database 
/usr/local/mwf/mwf13/squid/ssl_db is corrupted. Please rebuild
2015/03/10 09:04:24 kid1| WARNING: ssl_crtd #Hlpr0 exited
2015/03/10 09:04:24 kid1| Too few ssl_crtd processes are running (need 1/32)
2015/03/10 09:04:24 kid1| Starting new helpers
2015/03/10 09:04:24 kid1| helperOpenServers: Starting 1/32 'squid_ssl_crtd' 
processes
2015/03/10 09:04:24 kid1| ssl_crtd helper returned NULL reply.
wrong number of fields on line 505 (looking for field 6, got 1, '' left)
(squid_ssl_crtd): The SSL certificate database 
/usr/local/mwf/mwf13/squid/ssl_db is corrupted. Please rebuild

Deployment #2 log excerpt:
wrong number of fields on line 2 (looking for field 6, got 1, '' left)
(squid_ssl_crtd): The SSL certificate database 
/usr/local/mwf/mwf13/squid/ssl_db is corrupted. Please rebuild
2015/03/10 15:29:16 kid1| WARNING: ssl_crtd #Hlpr0 exited
2015/03/10 15:29:16 kid1| Too few ssl_crtd processes are running (need 1/32)
2015/03/10 15:29:16 kid1| Starting new helpers
2015/03/10 15:29:16 kid1| helperOpenServers: Starting 1/32 'squid_ssl_crtd' 
processes
2015/03/10 15:29:17 kid1| ssl_crtd helper returned NULL reply.
wrong number of fields on line 2 (looking for field 6, got 1, '' left)
(squid_ssl_crtd): The SSL certificate database 
/usr/local/mwf/mwf13/squid/ssl_db is corrupted. Please rebuild
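
For reference, the usual recovery here (a sketch: the ssl_crtd binary path
and the squid:squid ownership are assumptions that vary by build) is to
stop Squid, move the corrupt DB aside, and let ssl_crtd recreate it:

squid -k shutdown
mv /usr/local/mwf/mwf13/squid/ssl_db /usr/local/mwf/mwf13/squid/ssl_db.bad
/usr/local/squid/libexec/ssl_crtd -c -s /usr/local/mwf/mwf13/squid/ssl_db
chown -R squid:squid /usr/local/mwf/mwf13/squid/ssl_db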



Re: [squid-users] SSL Peek-n-Splice and exclusions by SNI

2015-03-09 Thread Nathan Hoad
Hi Amos,

After digging through debug logs, I noticed this:

2015/03/09 14:40:12.467 | client_side.cc(2902)
concurrentRequestQueueFilled: local=74.125.23.95:443
remote=10.3.20.249:40083 FD 11 flags=33 max concurrent requests
reached (1)
2015/03/09 14:40:12.467 | client_side.cc(2903)
concurrentRequestQueueFilled: local=74.125.23.95:443
remote=10.3.20.249:40083 FD 11 flags=33 deferring new request until
one is done
2015/03/09 14:40:12.467 | client_side.cc(4365)
httpsSslBumpStep2AccessCheckDone: Failed to start fake CONNECT request
for ssl spliced connection: local=74.125.23.95:443
remote=10.3.20.249:40083 FD 11 flags=33

Which sparked my memory about a patch that Christos has for 3.5.3:
http://www.squid-cache.org/Versions/v3/3.5/changesets/squid-3.5-13766.patch

After applying this patch and rebuilding, everything works now, so
that's good. I tried using dstdomain as opposed to an external ACL and
it did not work - I suspect this is because dstdomain doesn't cover
the SNI server name, but I expect it would be fine with Christos'
server_name ACL patch. If I get time I might try applying that to
3.5.x to see if it covers my use case, but for the time being I'll
stick with the external ACL helper.
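
For reference, once the server_name ACL is available, I'd expect the
external helper could be replaced with something like this (an untested
sketch on my part; the domain is a placeholder):

acl step1 at_step SslBump1
acl nobump ssl::server_name .example.com
ssl_bump peek step1 all
ssl_bump splice nobump
ssl_bump bump all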

Cheers,

Nathan.

On 9 March 2015 at 16:06, Amos Jeffries squ...@treenet.co.nz wrote:
 On 9/03/2015 5:52 p.m., Nathan Hoad wrote:
 Hi folks,

 I'm playing with 3.5.2 and Peek-n-Splice, and I was wondering if it's
 actually possible to exclude requests based on the SNI host and have
 Squid still bump correctly.

 It is supposed to work, but there have been troubles. So YMMV.

 I've been trying with this configuration,
 using a simple external acl:

 https_port 60443 intercept ssl-bump cert=/path/to/inspectcert.pem
 key=/path/to/inspectkey.pem generate-host-certificates=on
 external_acl_type sni ttl=30 concurrency=60 children-max=3
 children-startup=1 %ssl::sni /usr/libexec/bumphelper

 acl step1 at_step SslBump1
 acl step2 at_step SslBump2
 acl step3 at_step SslBump3

 acl sslbump_exclusions external sni

 ssl_bump peek step1 all
 ssl_bump splice step2 sslbump_exclusions
 [snip]


 So what am I missing? It's very hard to find documentation about this,
 so I might put this up on the wiki as an example once it's sorted.

 The big issue here is ssl_bump being a fast-type access check. External
 ACL helpers do not work reliably.

 Amos
