Re: [squid-users] Squid 3.0.STABLE17 is available

2009-07-31 Thread Herbert Faleiros
On Friday 31 July 2009 06:01:07 you wrote:
[cut]
 Okay. This gets rid of the assert and adds some debug instead.
 The reason for sending eof=1 when not at true EOF is not yet clear, so
 use carefully, but additional debugs are added when the flag is set.
 debug_options  ... 11,9 for these.

 Amos
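
(For reference, the directive Amos is pointing at would look like this in squid.conf; keeping ordinary logging at ALL,1 is my assumption, based on the debug_options lines used elsewhere in this thread:)

debug_options ALL,1 11,9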

Hi Amos,

http.cc: In member function 'void HttpStateData::processReplyHeader()':
http.cc:742: error: request for member 'size' in '((HttpStateData*)this)->HttpStateData::readBuf', which is of non-class type 'MemBuf*'

http.cc: In member function 'void HttpStateData::readReply(size_t, comm_err_t, int)':
http.cc:1013: error: expected primary-expression before '' token

make[3]: *** [http.o] Error 1
make[3]: Leaving directory `/usr/src/squid/squid-3.0.STABLE17/src'
make[2]: *** [install-recursive] Error 1
make[2]: Leaving directory `/usr/src/squid/squid-3.0.STABLE17/src'
make[1]: *** [install] Error 2
make[1]: Leaving directory `/usr/src/squid/squid-3.0.STABLE17/src'
make: *** [install-recursive] Error 1

Which release should I apply this patch against? (a daily snapshot?) I have
applied the other patch (to fix the previous bug) plus this one
(eof_debugs.patch) now.
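
For anyone following along, the steps I would expect against a daily snapshot look roughly like this (the snapshot date in the filename and the -p level are assumptions; adjust to the tarball you actually have):

# fetch and unpack a 3.0 daily snapshot, then apply the patch on top
wget http://www.squid-cache.org/Versions/v3/3.0/squid-3.0.STABLE17-20090731.tar.gz
tar xzf squid-3.0.STABLE17-20090731.tar.gz
cd squid-3.0.STABLE17-20090731
patch -p0 < ../eof_debugs.patch   # try -p1 if the paths do not match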

--
Herbert


Re: [squid-users] Squid 3.0.STABLE17 is available

2009-07-29 Thread Herbert Faleiros
On Tuesday 28 July 2009 23:22:56 Amos Jeffries wrote:
 The next formally bundled will be STABLE18. However the daily snapshots
 serve as intermediate updates on STABLE
 (http://www.squid-cache.org/Versions/v3/3.0/).

 I just have not had time to apply these fixes to the branch yet.


3.0.STABLE17-20090729 is still crashing here (x86_64)...

2009/07/29 16:07:45| ctx: enter level  0: 'http://images.windowsmedia.com/svcswitch/MG_pt-br.xml?locale=416&geoid=20&version=11.0.6001.7004&userlocale=416'
2009/07/29 16:07:45| assertion failed: http.cc:738: !eof

I also applied the patch (from the previous e-mail) against this version:

patching file src/HttpMsg.cc
patching file src/HttpReply.cc
patching file src/HttpRequest.cc
patching file src/pconn.cc

The only solution here was to downgrade to the previous release...

Any clue?

-- 
Herbert


Re: [squid-users] Squid BUG?

2009-04-24 Thread Herbert Faleiros
On Tue, 21 Apr 2009 22:47:55 +1200, Amos Jeffries squ...@treenet.co.nz
wrote:
[cut]
 At a blind guess, I'd say it's a 64-bit build reading a file stored by a
 32-bit build.
 
 The result is that squid immediately dumps the file out of cache. So if
 it repeats for any given object or for any newly stored ones, it's a
 problem, but once per existing object after a cache format upgrade may
 be acceptable.


SOLVED! The root cause was:

2009/04/23 13:21:25| assertion failed: HttpHeader.cc:1196:
Headers[id].type == ftInt64

Then it crashed, again and again (with the same log)... Searching the list
history I found your patch (thanks); the proxy is now running again (with
3.0.STABLE14) without any crashes.

-- 
Herbert


[squid-users] Squid BUG?

2009-04-20 Thread Herbert Faleiros
Hi,

After upgrading from 3.0.STABLE13 to 3.0.STABLE14, my Squid box crashed.

From cache.log (with debug_options ALL,1 11,6 73,6):

2009/04/20 18:05:05| Store rebuilding is 821.74% complete
2009/04/20 18:05:20| Store rebuilding is 822.48% complete
2009/04/20 18:05:35| Store rebuilding is 823.22% complete
2009/04/20 18:05:50| Store rebuilding is 823.97% complete
2009/04/20 18:06:05| Store rebuilding is 824.71% complete
2009/04/20 18:06:20| Store rebuilding is 825.46% complete
2009/04/20 18:06:35| Store rebuilding is 826.20% complete
2009/04/20 18:06:50| Store rebuilding is 826.94% complete
2009/04/20 18:07:05| Store rebuilding is 827.68% complete
2009/04/20 18:07:20| Store rebuilding is 828.42% complete
2009/04/20 18:07:35| Store rebuilding is 829.16% complete
2009/04/20 18:07:50| Store rebuilding is 829.91% complete
2009/04/20 18:08:05| Store rebuilding is 830.65% complete
2009/04/20 18:08:20| Store rebuilding is 831.45% complete

Then (cache_dir low/high was 80%/85%, with 20-30GB of free space left on each
disk; 4 cache dirs, one per disk):

2009/04/20 18:08:31| diskHandleWrite: FD 21: disk write error: (28) No
space left on device
FATAL: Write failure -- check your disk space and cache.log

Any clue? Downgrading to STABLE13 did not solve the problem.

PS - STABLE13 was running without any problems (uptime was 6 weeks),
respecting the cache/disk limits.

-- 
Herbert


Re: [squid-users] Squid BUG?

2009-04-20 Thread Herbert Faleiros
On Tue, 21 Apr 2009 12:54:53 +1200 (NZST), Amos Jeffries
squ...@treenet.co.nz wrote:
[cut]
 2009/04/20 18:05:20| Store rebuilding is 822.48% complete

  what type of file system is in use?

ext3


  with what settings?

4x 300GB SAS disks

cache_swap_low 80
cache_swap_high 85

cache_dir aufs /var/cache/proxy/cache1 256000 256 256
cache_dir aufs /var/cache/proxy/cache2 256000 256 256
cache_dir aufs /var/cache/proxy/cache3 256000 256 256
cache_dir aufs /var/cache/proxy/cache4 256000 256 256

maximum_object_size 256 MB
minimum_object_size 1 KB

cache_replacement_policy heap LFUDA

Now (this is why it does not work anymore):

/dev/sdb1 276G  262G 0 100% /var/cache/proxy/cache1
/dev/sdc1 276G  262G 0 100% /var/cache/proxy/cache2
/dev/sdd1 276G  262G 0 100% /var/cache/proxy/cache3
/dev/sde1 276G  262G 0 100% /var/cache/proxy/cache4

Before the crash there was 20 to 35% free disk space on each disk, and it
stayed like that for 6 weeks (the cache had reached its limits and was not
growing anymore, until the upgrade to STABLE14 crashed the box).


  with what disk available?
  on what operating system?

BlueWhite64 (an unofficial 64 bits Slackware port).


  is it rebuilding after saying DIRTY or CLEAN cache?

CLEAN after upgrade, DIRTY after crashes.


  does deleting the swap.state file(s) when squid is stopped fix things?

I will try.
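
(What I plan to try, sketched out; the paths are from my config above and I am assuming the index file is just swap.state in each cache_dir:)

# stop squid cleanly, drop the per-dir index files, then restart;
# squid rebuilds the index by scanning the cache directories
squid -k shutdown
for d in /var/cache/proxy/cache1 /var/cache/proxy/cache2 \
         /var/cache/proxy/cache3 /var/cache/proxy/cache4; do
    rm -f "$d/swap.state"
done
squid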

The strangest thing was the store rebuild reporting > 100%.

Thanks,

Herbert.



Re: [squid-users] Squid BUG?

2009-04-20 Thread Herbert Faleiros
On Tue, 21 Apr 2009 14:58:22 +1200 (NZST), Amos Jeffries
squ...@treenet.co.nz wrote:
[cut]
 As a side issue: know who the maintainer is for slackware? I'm trying to
 get in touch with them all.

Sorry, here Squid was built from source. The distro (which does not provide a
binary Squid package) and its maintainer info can be found here:
http://bluewhite64.com (I'm still waiting for an official 64-bit Slackware
port)


  does deleting the swap.state file(s) when squid is stopped fix things?

Apparently yes:

/dev/sdb1 276G  225G   37G  87% /var/cache/proxy/cache1
/dev/sdc1 276G  225G   53G  87% /var/cache/proxy/cache2
/dev/sdd1 276G  225G   37G  87% /var/cache/proxy/cache3
/dev/sde1 276G  225G   37G  87% /var/cache/proxy/cache4

It's running OK again.

Now, another strange log:

2009/04/21 00:26:25| commonUfsDirRebuildFromDirectory: Swap data buffer length is not sane.

Should I decrease cache_dir sizes?
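
(Doing the arithmetic on my own config, the dirs may simply be too large for the partitions; a rough check against the df output above:)

echo $(( 256000 / 1024 ))   # 256000 MB per cache_dir = 250 GB
# 250 of 276 GB is ~90% of each partition before swap.state and
# filesystem metadata are counted - well above my intended 80%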


 The strangest thing was the store rebuild reporting > 100%.
 
 Yes, we have seen a similar thing long ago in testing. I'm trying to
 remember and research what came of those. At present I'm thinking maybe
 it had something to do with 32-bit/64-bit changes in distro build vs
 what the cache was built with.


Similar odd numbers show up here in the memory usage reports (via mallinfo):

Total in use:  1845425 KB 173%

and sometimes negative values:

total space in arena:  -1922544 KB
Ordinary blocks:   -1922682 KB 49 blks

Total in use:  -1139886 KB 59%
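
(Those negative values look like 32-bit wrap-around: mallinfo() reports byte counts in plain ints, so once usage passes 2 GiB the numbers go negative. A quick illustration of the wrap, assuming signed 32-bit counters:)

# 3 GiB of usage reinterpreted as a signed 32-bit byte count:
awk 'BEGIN { b = 3 * 1024^3; if (b >= 2^31) b -= 2^32; print b/1024 " KB" }'
# prints -1048576 KB - the same flavour of negative value as above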

--
Herbert


Re: [squid-users] Don't log clientParseRequestMethod messages

2009-03-17 Thread Herbert Faleiros
On Tue, 17 Mar 2009 17:13:13 +1300, Amos Jeffries squ...@treenet.co.nz
wrote:
[cut]
 No it's a debug log and those messages are important/useful to track bad 
 clients in your traffic.
 
 What unknown methods is it recording?

Lots and lots (and lots) of junk (SIP, P2P, and/or perhaps virus traffic). The
cache.log info is VERY useful, but this kind of message obviously pollutes
the log (it can be filtered with: grep -Ev 'client.+Request'
/var/log/squid/cache.log, but I don't know whether that will catch only the
clientParseRequestMethod log entries).
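
(A narrower filter that should drop only those entries, assuming they always carry the function name as shown:)

grep -v clientParseRequestMethod /var/log/squid/cache.log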


Re: [squid-users] Config suggestion

2009-03-17 Thread Herbert Faleiros
On Tue, 17 Mar 2009 09:54:00 +0100, Matus UHLAR - fantomas
uh...@fantomas.sk wrote:
[cut]
 is that one quad-core with hyperthreading, two quad-cores without HT or
 two dual-cores with HT? We apparently should count HT CPUs as one, not two.
2 Xeon quad-cores (4 cores per processor, 8 in total), no HT...
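
(For the record, a quick way to double-check this on Linux; if the siblings value exceeds cpu cores, HT is enabled:)

grep -E 'physical id|siblings|cpu cores' /proc/cpuinfo | sort -u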


[cut]
               total       used       free     shared    buffers     cached
  Mem:         32148       2238      29910          0        244        823
  -/+ buffers/cache:       1169      30978
  Swap:        15264          0      15264
 
 swap is quite useless here I'd say...


Uptime was 1/2 min. Look at it now:

$ free -m
             total       used       free     shared    buffers     cached
Mem:         32151      31996        155          0       1891      24108
-/+ buffers/cache:       5996      26155
Swap:        15264          6      15258


[cut]
 I'd say that the 73.5 Gb disk should be used only for OS, logs etc.

I did it.


[cut]
 I'm not too up on the L1/L2 efficiencies, but 64, 256 or higher L1 seems
 to be better for larger dir sizes.

OK, I will try...


[cut]
 Note that for a 300GiB HDD you will be using at most 250, more probably 200,
 and some ppl would advise 150GiB of cache. Leave some space for metadata and
 some for reserve - filesystems may benefit from it.

I always configure Squid to use only 80% of the HDD...


[cut]
 For a quad or higher CPU machine, you may do well to have multiple Squid
 running (one per 2 CPUs or so). One squid doing the caching on the 300GB
 drives and one on the smaller ~100 GB drives (to get around a small bug
 where mismatched AUFS dirs cause starvation in the small dir), peered
 together with the proxy-only option to share info without duplicating cache.


Cool! Thanks...

-- 
Herbert



Re: [squid-users] Config suggestion

2009-03-17 Thread Herbert Faleiros
On Tue, 17 Mar 2009 12:58:08 +1200 (NZST), Amos Jeffries
squ...@treenet.co.nz wrote:
[cut]
 
 You have 5 physical disks by the looks of it. Best usage of those is to
 split the cache_dir one per disk (sharing a disk leads to seek clashes).


OK, I will disable LVM and try it.

 
 I'm not too up on the L1/L2 efficiencies, but 64, 256 or higher L1 seems
 to be better for larger dir sizes.

OK...


 For a quad or higher CPU machine, you may do well to have multiple Squid
 running (one per 2 CPUs or so). One squid doing the caching on the 300GB
 drives and one on the smaller ~100 GB drives (to get around a small bug
 where mismatched AUFS dirs cause starvation in the small dir), peered
 together with the proxy-only option to share info without duplicating cache.


4 Squids, 1 disk per Squid process, and a cache_peer config... Sounds good.
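
(A minimal sketch of the peering piece as I understand it; the ports and sibling arrangement are my assumptions, only proxy-only comes from the advice above. In the squid.conf of the instance listening on 3128, for example:)

# peer with the instance on port 3129 (ICP on 3130); proxy-only means
# objects fetched through the peer are not stored again locally
cache_peer 127.0.0.1 sibling 3129 3130 proxy-only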


[cut]
 Absolutely minimal swapping of memory.

Decreased it to 2GiB. The rule in the FAQ/wiki relating cache_dir (disk) size
to cache_mem (x% of disk should be y% of memory) seems confusing to me.
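
(The way I now read the rule of thumb: the in-memory index needs very roughly 10 MB of RAM per GB of cache_dir, and cache_mem sits on top of that. A back-of-envelope check for my 8 x 100 GB dirs, assuming that figure:)

echo $(( 8 * 100 * 10 ))   # ~8000 MB of index RAM before cache_mem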

-- 
Herbert


[squid-users] Don't log clientParseRequestMethod messages

2009-03-16 Thread Herbert Faleiros
Is there a way to avoid logging "clientParseRequestMethod: Unsupported method
in request..." messages in my cache.log?


[squid-users] Config suggestion

2009-03-16 Thread Herbert Faleiros
Hardware (only running Squid):

# cat /proc/cpuinfo  | egrep -i xeon | uniq
model name  : Intel(R) Xeon(R) CPU   E5405  @ 2.00GHz
# cat /proc/cpuinfo  | egrep -i xeon | wc -l
8

# free -m
             total       used       free     shared    buffers     cached
Mem:         32148       2238      29910          0        244        823
-/+ buffers/cache:       1169      30978
Swap:        15264          0      15264

# lsscsi
[0:0:0:0]diskMAXTOR   ATLAS15K2_73WLS  JNZH  /dev/sda
[0:0:1:0]diskSEAGATE  ST3300655LW  0003  /dev/sdb
[0:0:4:0]diskSEAGATE  ST3146807LC  0007  /dev/sdc
[3:0:0:0]diskSEAGATE  ST3300655SS  0004  /dev/sdd
[3:0:1:0]diskSEAGATE  ST3300655SS  0004  /dev/sde

# fdisk -l | grep GB
Disk /dev/sda: 73.5 GB, 73557090304 bytes
Disk /dev/sdb: 300.0 GB, 3000 bytes
Disk /dev/sdc: 146.8 GB, 146815737856 bytes
Disk /dev/sdd: 300.0 GB, 3000 bytes
Disk /dev/sde: 300.0 GB, 3000 bytes

# lspci | grep -Ei 'sas|scsi'
04:00.0 SCSI storage controller: LSI Logic / Symbios Logic SAS1064ET
PCI-Express Fusion-MPT SAS (rev 04)
06:02.0 SCSI storage controller: Adaptec ASC-29320LP U320 (rev 03)


# uname -srm
Linux 2.6.27.7 x86_64

Squid:

# squid -v
Squid Cache: Version 3.0.STABLE13
configure options:  '--bindir=/usr/bin' '--sbindir=/usr/sbin'
'--libexecdir=/usr/libexec' '--datadir=/usr/share/squid'
'--sysconfdir=/etc/squid' '--libdir=/usr/lib' '--includedir=/usr/include'
'--mandir=/usr/man' '--localstatedir=/var' '--enable-async-io'
'--with-pthreads' '--enable-xmalloc-statistics' '--enable-storeio=aufs'
'--enable-removal-policies' '--enable-err-languages=English Portuguese'
'--enable-linux-netfilter' '--disable-wccp' '--disable-wccpv2'
'--disable-ident-lookups' '--enable-snmp' '--enable-kill-parent-hack'
'--enable-delay-pools' '--enable-follow-x-forwarded-for'
'--with-large-files' '--with-filedescriptors=65536' 'CFLAGS= -march=native'
'CXXFLAGS= -march=native'

# cat /etc/squid/squid.conf | grep -E 'cache_(mem|dir)'
cache_mem 8192 MB
cache_dir aufs /var/cache/proxy/cache1 102400 16 256
cache_dir aufs /var/cache/proxy/cache2 102400 16 256
cache_dir aufs /var/cache/proxy/cache3 102400 16 256
cache_dir aufs /var/cache/proxy/cache4 102400 16 256
cache_dir aufs /var/cache/proxy/cache5 102400 16 256
cache_dir aufs /var/cache/proxy/cache6 102400 16 256
cache_dir aufs /var/cache/proxy/cache7 102400 16 256
cache_dir aufs /var/cache/proxy/cache8 102400 16 256


# cat /etc/fstab  | grep proxy
/dev/vg00/cache  /var/cache/proxy ext3defaults 1   2


Yes, I know: LVM, ext3 and aufs are bad ideas... I'm particularly
interested in a better cache_dir configuration (maximizing disk usage)
and the correct cache_mem parameter for this hardware (plus any other
useful tips).
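
As a starting point for suggestions, something like this is what I had in mind (mount points and sizes are placeholders; one dir per physical disk, the 73.5 GB disk left for OS and logs):

cache_mem 2048 MB
cache_dir aufs /cache/sdb 204800 64 256
cache_dir aufs /cache/sdd 204800 64 256
cache_dir aufs /cache/sde 204800 64 256
cache_dir aufs /cache/sdc 102400 64 256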

Thanks,

-- 
Herbert