[squid-users] FW: squid 3.3.10 always gives TCP_MISS for SSL requests

2014-08-25 Thread Ragheb Rustom
 TCP_MISS/200 1716 GET
https://safebrowsing-cache.google.com/safebrowsing/rd/ChFnb29nLXBoaXNoLXNo
YXZhchAAGI35FCCg-RQqBpQ8BQD_HzIFjTwFAH8 - HIER_DIRECT/173.194.44.0
application/vnd.google.safebrowsing-chunk
1409010895.410  55166 10.128.135.3 TCP_MISS/200 607 GET
https://0-channel-proxy-04-frc3.facebook.com/pull? -
HIER_DIRECT/173.252.107.16 application/json
1409010951.341  55187 10.128.135.3 TCP_MISS/200 607 GET
https://0-channel-proxy-04-frc3.facebook.com/pull? -
HIER_DIRECT/173.252.107.16 application/json

As you can see, all my HTTPS requests are being logged as TCP_MISS, never
TCP_HIT.

Extracts from cache.log:

Starting Squid Cache version 3.3.11 for x86_64-redhat-linux-gnu...
2014/08/26 00:25:08 kid1| Process ID 7955
2014/08/26 00:25:08 kid1| Process Roles: worker
2014/08/26 00:25:08 kid1| With 65535 file descriptors available
2014/08/26 00:25:08 kid1| Initializing IP Cache...
2014/08/26 00:25:08 kid1| DNS Socket created at [::], FD 7
2014/08/26 00:25:08 kid1| DNS Socket created at 0.0.0.0, FD 8
2014/08/26 00:25:08 kid1| Adding nameserver 127.0.0.1 from squid.conf
2014/08/26 00:25:08 kid1| Adding nameserver 46.20.98.62 from squid.conf
2014/08/26 00:25:08 kid1| Adding nameserver 8.8.8.8 from squid.conf
2014/08/26 00:25:08 kid1| Adding nameserver 8.8.4.4 from squid.conf
2014/08/26 00:25:08 kid1| helperOpenServers: Starting 5/5 'ssl_crtd'
processes
2014/08/26 00:25:08 kid1| Logfile: opening log /var/log/squid/access.log
2014/08/26 00:25:08 kid1| Local cache digest enabled; rebuild/rewrite every
3600/3600 sec
2014/08/26 00:25:08 kid1| Store logging disabled
2014/08/26 00:25:08 kid1| Swap maxSize 30720 + 6144000 KB, estimated
24103384 objects
2014/08/26 00:25:08 kid1| Target number of buckets: 1205169
2014/08/26 00:25:08 kid1| Using 2097152 Store buckets
2014/08/26 00:25:08 kid1| Max Mem  size: 6144000 KB
2014/08/26 00:25:08 kid1| Max Swap size: 30720 KB
2014/08/26 00:25:08 kid1| Rebuilding storage in /cache1 (clean log)
2014/08/26 00:25:08 kid1| Using Least Load store dir selection
2014/08/26 00:25:08 kid1| Set Current Directory to /cache1
2014/08/26 00:25:08 kid1| Loaded Icons.
2014/08/26 00:25:08 kid1| HTCP Disabled.
2014/08/26 00:25:08 kid1| Sending SNMP messages from [::]:3401
2014/08/26 00:25:08 kid1| Squid plugin modules loaded: 0
2014/08/26 00:25:08 kid1| Adaptation support is off.
2014/08/26 00:25:08 kid1| Accepting HTTP Socket connections at
local=[::]:8080 remote=[::] FD 22 flags=9
2014/08/26 00:25:08 kid1| Accepting NAT intercepted HTTP Socket connections
at local=0.0.0.0:8082 remote=[::] FD 23 flags=41
2014/08/26 00:25:08 kid1| Accepting NAT intercepted SSL bumped HTTPS Socket
connections at local=0.0.0.0:8081 remote=[::] FD 24 flags=41
2014/08/26 00:25:08 kid1| Accepting SNMP messages on [::]:3401
2014/08/26 00:25:08 kid1| Done reading /cache1 swaplog (198 entries)
2014/08/26 00:25:08 kid1| Finished rebuilding storage from disk.
2014/08/26 00:25:08 kid1|   198 Entries scanned
2014/08/26 00:25:08 kid1| 0 Invalid entries.
2014/08/26 00:25:08 kid1| 0 With invalid flags.
2014/08/26 00:25:08 kid1|   198 Objects loaded.
2014/08/26 00:25:08 kid1| 0 Objects expired.
2014/08/26 00:25:08 kid1| 0 Objects cancelled.
2014/08/26 00:25:08 kid1| 0 Duplicate URLs purged.
2014/08/26 00:25:08 kid1| 0 Swapfile clashes avoided.
2014/08/26 00:25:08 kid1|   Took 0.02 seconds (8174.05 objects/sec).
2014/08/26 00:25:08 kid1| Beginning Validation Procedure
2014/08/26 00:25:08 kid1|   Completed Validation Procedure
2014/08/26 00:25:08 kid1|   Validated 198 Entries
2014/08/26 00:25:08 kid1|   store_swap_size = 11304.00 KB
2014/08/26 00:25:09 kid1| storeLateRelease: released 0 objects

Sincerely,

Ragheb Rustom
Smart Telecom S.A.R.L
Sin el fil Highway
Mirna Chalouhi Center - 8th Floor
Beirut, Lebanon
Telefax: +961-1-491582
Mobile: +961-3-286282
Email: rag...@smartelecom.org





RE: [squid-users] The server closed the connection without sending any data.

2011-07-29 Thread Ragheb Rustom
Hi Andrei,

I think http_port should say the following, since the documentation notes
that the 'transparent' option is deprecated.

So for transparent proxying this line should be as follows:

http_port 3128 intercept

You also need some iptables configuration: simply configuring Squid for
interception does not, by itself, redirect the clients' HTTP traffic
transparently to Squid.
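For reference, a minimal redirect rule of the usual shape would look something
like the following if Squid runs on the box that receives the traffic (the
interface name is a placeholder and the port matches the http_port line above;
adjust to your topology, or use DNAT on the router instead):

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128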

Please send your iptables configuration if possible to have a look.

Sincerely,

Ragheb Rustom
Smart Telecom S.A.R.L

-Original Message-
From: Andrei [mailto:funactivit...@gmail.com] 
Sent: Friday, July 29, 2011 11:22 PM
To: squid-users@squid-cache.org
Subject: [squid-users] The server closed the connection without sending any
data.

If proxy info is entered manually in the browser, caching works OK. If
LAN clients are sent transparently to the proxy, Google Chrome shows an
error message:
Error 324 The server closed the connection without sending any data.
Mozilla Firefox displays a blank page.
Strangely enough, I don't see anything in the squid access.log when LAN
clients are forced by the router to the transparent cache...

I'm running:
Squid Cache: Version 3.1.6
Debian stable 6.0.2.1
Dual Xeon 3GHz, 250GB SCSI, 4GB RAM

Config file:

acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl localnet src 172.16.0.0/21  # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

request_header_max_size 15824 KB
request_body_max_size 15824 KB
reply_header_max_size 15824 KB
reply_body_max_size 15824 KB

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow localnet
http_access allow all
icp_access allow all
htcp_access allow all
http_port 3128 transparent
hierarchy_stoplist cgi-bin ?
cache_mem 1024 MB
cache_dir ufs /var/spool/squid3 40960 16 256
coredump_dir /var/spool/squid3
refresh_pattern ^ftp:   1440    20% 10080
refresh_pattern ^gopher:    1440    0%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   40% 40320
icp_port 0
refresh_pattern -i \.index.(html|htm)$ 0 40% 10080
refresh_pattern -i \.(html|htm|css|js)$ 1440 40% 40320
refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 10080 90% 43200 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|x-flv)$ 43200 90% 432000 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i \.(deb|rpm|exe|zip|tar|tgz|ram|rar|bin|ppt|doc|tiff)$ 10080 90% 43200 override-expire ignore-no-cache ignore-no-store ignore-private




RE: [squid-users] I see this error in cache.log file no free membufs

2011-07-22 Thread Ragheb Rustom
Dear Markus and Amos,

I have done the changes you have proposed. I have dropped the max-size on COSS 
partition to 100KB so the COSS cache_dir line now reads as follows:

cache_dir coss /cache3/coss1 11 max-size=102400 max-stripe-waste=32768 
block-size=8192 membufs=100
cache_dir aufs /cache1 115000 16 256 min-size=102401
cache_dir aufs /cache2 115000 16 256 min-size=102401
cache_dir aufs /cache4/cache1 24 16 256 min-size=102401

After doing this I have noticed the following warnings every now and then 
(usually every 1 - 2 hours) in the cache.log file

squidaio_queue_request: WARNING - Queue congestion

What I also noticed using iostat is that the big HDD with an AUFS dir is handling
a lot of write requests, while the other 2 HDDs with AUFS dirs rarely have disk
writes. Is this normal behavior? Since I have 3 AUFS cache_dirs, shouldn't Squid's
disk read and write access be somewhat equal between the 3 AUFS partitions? Do
you think I should go for a higher max-size on the COSS partition to relieve
the extra IO from the big AUFS cache_dir?

Thanks again for your excellent support.

Sincerely,

Ragheb Rustom
Smart Telecom S.A.R.L

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Thursday, July 21, 2011 2:00 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] I see this error in cache.log file no free membufs

 On Wed, 20 Jul 2011 18:23:10 -0300, Marcus Kool wrote:
 The message indicates that the number of membufs should be increased,
 because there are insufficient membufs to use for caching
 objects.  The reason for having 'insufficient membufs'
 is explained below.

 Given the fact that the average object size is 13 KB, the given
 configuration effectively puts a very large percentage of objects,
 most likely more than 90% in the COSS-based cache dir.  This puts
 a high pressure on (the disk with) COSS and I bet that the disk
 with COSS (/cache3) is 100% busy while the other three are mostly 
 idle.

 COSS is very good for small objects and AUFS is fine with larger 
 objects.

 There is one larger disk.  But this larger disk is not faster.
 It will perform worse with more objects on it than the other disks.

 To find out more about disk I/O and pressure on the disk with COSS, 
 one
 can evaluate the output of iostat or 'vmstat -d 5 5'
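 For example, an extended iostat run like the one below reports per-device
 throughput and utilisation every 5 seconds (standard sysstat flags; the
 exact invocation is only an illustration):

 iostat -dxk 5 3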

 I recommend to change the configuration, to utilise all disks in a
 more balanced way.  Be sure to also look at the output of iostat.
 My suggestion is to use COSS only for objects smaller than 64 KB.
 Depending on the average object size of your cache, this limit
 may be set lower.

 So I suggest:

 cache_dir coss /cache3 11 max-size=65535 max-stripe-waste=32768
 block-size=8192 membufs=15
 cache_dir aufs /cache1 115000 16 256 min-size=65536
 cache_dir aufs /cache2 115000 16 256 min-size=65536
 cache_dir aufs /cache4 115000 16 256 min-size=65536

 And to observe the log and output of iostat.
 If the disk I/O is balanced and the message about membufs reappears 
 and
 you have sufficient free memory, you may increase membufs.  If the 
 I/O is
 not balanced, the limit of 64KB may be decreased to 16KB.

 Depending on the results and iostat, it may be better to
 have 2 disks with COSS and 2 disks with AUFS:

 cache_dir coss /cache3 11 max-size=16383 max-stripe-waste=32768
 block-size=8192 membufs=15
 cache_dir aufs /cache1 11 max-size=16383 max-stripe-waste=32768
 block-size=8192 membufs=15
 cache_dir aufs /cache2 115000 16 256 min-size=16384
 cache_dir aufs /cache4 115000 16 256 min-size=16384

 Marcus


 NP: use the cache manager info report to find the average object 
 size.
   squidclient mgr:info
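 The figure to look for is the 'Mean Object Size' line of that report, e.g.
 (assuming squidclient can reach the proxy on its configured port):

   squidclient mgr:info | grep -i 'mean object size'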

 COSS handles things in 1MB slices. This is the main reason
 max-size=1048575 is a bad idea: one object per file/slice is less
 efficient than AUFS's one object per file. So a 110GB COSS dir will
 be juggling a massive 11 slices on and off of disk as things are
 needed. I recommend using a smaller COSS overall size and using the
 remainder of each disk for AUFS storage of the larger objects. (COSS is
 the exception to the one-dir-per-spindle guideline.)

 Something like this with ~30GB COSS on each disk, double size on the 
 big disk = ~150GB of small objects:

 cache_dir coss /cache1coss 3 max-size=65535 max-stripe-waste=32768 
 block-size=8192 membufs=15
 cache_dir aufs /cache1aufs 10 16 256 min-size=65536

 cache_dir coss /cache2coss 3 max-size=65535 max-stripe-waste=32768 
 block-size=8192 membufs=15
 cache_dir aufs /cache2aufs 10 16 256 min-size=65536

 cache_dir coss /cache3coss 3 max-size=65535 max-stripe-waste=32768 
 block-size=8192 membufs=15
 cache_dir aufs /cache3aufs 10 16 256 min-size=65536

 cache_dir coss /cache4coss1 6 max-size=65535 max-stripe-waste=32768 
 block-size=8192 membufs=15
 cache_dir aufs /cache4aufs 24 16 256 min-size=65536

 This last one is a little tricky. You will need to test and see if it
 is okay this big or needs reducing.

 On the size multiple

RE: [squid-users] I see this error in cache.log file no free membufs

2011-07-22 Thread Ragheb Rustom
Sorry, I forgot to paste the iostat readings in the email below.

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.70    0.01    1.19    4.64    0.00   93.47

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda               3.24       120.01       111.88    6330517    5901960
sda1              0.00         0.04         0.00       1876          8
sda2              3.24       119.96       111.88    6328145    5901952
sdb              10.45       133.64         4.21    7049934     221920
sdb1             10.45       133.62         4.21    7048838     221920
sdc              10.45       143.34         4.23    7561702     223264
sdc1             10.45       143.32         4.23    7560606     223264
sdd              29.48      4451.11       642.59  234804222   33897624
sdd1             29.48      4451.09       642.59  234803126   33897624
sde              91.10       757.60      1533.97   39964886   80919592
sde1             91.10       757.58      1533.97   39963790   80919592
dm-0             15.51       118.73       110.83    6263154    5846352
dm-1              0.29         1.23         1.05      64736      55600

Ragheb Rustom
Smart Telecom S.A.R.L


-Original Message-
From: Ragheb Rustom [mailto:rag...@smartelecom.org] 
Sent: Friday, July 22, 2011 3:10 PM
To: 'Amos Jeffries'; squid-users@squid-cache.org
Cc: 'Marcus Kool'
Subject: RE: [squid-users] I see this error in cache.log file no free membufs
Importance: High

Dear Markus and Amos,

I have done the changes you have proposed. I have dropped the max-size on COSS 
partition to 100KB so the COSS cache_dir line now reads as follows:

cache_dir coss /cache3/coss1 11 max-size=102400 max-stripe-waste=32768 
block-size=8192 membufs=100
cache_dir aufs /cache1 115000 16 256 min-size=102401
cache_dir aufs /cache2 115000 16 256 min-size=102401
cache_dir aufs /cache4/cache1 24 16 256 min-size=102401

After doing this I have noticed the following warnings every now and then 
(usually every 1 - 2 hours) in the cache.log file

squidaio_queue_request: WARNING - Queue congestion

What I also noticed using iostat is that the big HDD with AUFS dir is handling 
a lot of write requests while the other 2 HDDS with AUFS dirs rarely have disk 
writes. Is this normal behavior since I have 3 AUFS cache_dir shouldn't squid 
disk read and write access be somewhat equal between the 3 AUFS partitions? Do 
you think I should go for a higher max-size on the COSS partition to relieve 
the extra IO from the big AUFS cache_dir?

Thanks again for your excellent support.

Sincerely,

Ragheb Rustom
Smart Telecom S.A.R.L

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Thursday, July 21, 2011 2:00 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] I see this error in cache.log file no free membufs

 On Wed, 20 Jul 2011 18:23:10 -0300, Marcus Kool wrote:
 The message indicates that the number of membufs should be increased,
 because there are insufficient membufs to use for caching
 objects.  The reason for having 'insufficient membufs'
 is explained below.

 Given the fact that the average object size is 13 KB, the given
 configuration effectively puts a very large percentage of objects,
 most likely more than 90% in the COSS-based cache dir.  This puts
 a high pressure on (the disk with) COSS and I bet that the disk
 with COSS (/cache3) is 100% busy while the other three are mostly 
 idle.

 COSS is very good for small objects and AUFS is fine with larger 
 objects.

 There is one larger disk.  But this larger disk is not faster.
 It will perform worse with more objects on it than the other disks.

 To find out more about disk I/O and pressure on the disk with COSS, 
 one
 can evaluate the output of iostat or 'vmstat -d 5 5'

 I recommend to change the configuration, to utilise all disks in a
 more balanced way.  Be sure to also look at the output of iostat.
 My suggestion is to use COSS only for objects smaller than 64 KB.
 Depending on the average object size of your cache, this limit
 may be set lower.

 So I suggest:

 cache_dir coss /cache3 11 max-size=65535 max-stripe-waste=32768
 block-size=8192 membufs=15
 cache_dir aufs /cache1 115000 16 256 min-size=65536
 cache_dir aufs /cache2 115000 16 256 min-size=65536
 cache_dir aufs /cache4 115000 16 256 min-size=65536

 And to observe the log and output of iostat.
 If the disk I/O is balanced and the message about membufs reappears 
 and
 you have sufficient free memory, you may increase membufs.  If the 
 I/O is
 not balanced, the limit of 64KB may be decreased to 16KB.

 Depending on the results and iostat, it may be better to
 have 2 disks with COSS and 2 disks with AUFS:

 cache_dir coss /cache3 11 max-size=16383 max-stripe-waste=32768
 block-size=8192 membufs=15
 cache_dir aufs /cache1 11 max-size=16383 max-stripe-waste=32768
 block-size=8192 membufs=15
 cache_dir aufs /cache2 115000 16 256 min-size=16384
 cache_dir aufs /cache4

[squid-users] I see this error in cache.log file no free membufs

2011-07-20 Thread Ragheb Rustom
Dear All,

I have a Squid cache proxy which is delivering content to around 3000+
users. After some performance problems with AUFS under peak-hour loads, I
converted one of my cache_dirs to COSS, following Amos's settings on the
squid-cache website, while leaving all the others as AUFS holding files bigger
than 1MB. After running perfectly for some time with COSS, and with very good
results, I started seeing the messages below in my cache.log file:

storeCossCreateMemOnlyBuf: no free membufs.  You may need to increase the
value of membufs on the /cache3/coss1 cache_dir

here are my squid.conf settings:

cache_dir coss /cache3/coss1 11 max-size=1048575 max-stripe-waste=32768
block-size=8192 membufs=15
cache_dir aufs /cache1 115000 16 256 min-size=1048576
cache_dir aufs /cache2 115000 16 256 min-size=1048576
cache_dir aufs /cache4/cache1 24 16 256 min-size=1048576

Please note all my HDDs are SAS 15k drives, sized as follows:

/cache1    147GB
/cache2    147GB
/cache3    147GB
/cache4    450GB

The system is a dual quad-core Intel Xeon server with 16GB of physical RAM.

Do you think I should increase the membufs value, and what do you think the
best or optimal value for such a system would be?

Sincerely,


Ragheb Rustom
Smart Telecom S.A.R.L





RE: [squid-users] data transfer restriction

2011-07-15 Thread Ragheb Rustom
Dear Benjamin,

To implement that kind of bandwidth quota you need to use a RADIUS server with
your NAS (this is usually NAS-related, not Squid-related, although, as Amos
mentioned before, Squid can do some speed throttling using delay pools, which
is very useful if your international bandwidth is expensive). FreeRADIUS is one
of the most widely used RADIUS servers and is GPL-licensed. There are some
commercial products built on FreeRADIUS that integrate such functions into
their billing servers with a nice web GUI, which makes programming things in
FreeRADIUS much easier (but since they are commercial, you will have to buy a
license to use them). If, however, you have the free time, you can program
everything you asked for into FreeRADIUS yourself.

I can point out some systems I have tested based on FreeRADIUS, if you
like.

Sincerely,

Ragheb Rustom
Keblon S.A.R.L

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Friday, July 15, 2011 5:48 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] data transfer restriction

On 15/07/11 04:16, benjamin fernandis wrote:
 Hi,

 I am using CentOS 5.6 with the latest version. Now I want to configure
 bandwidth restriction per IP and a restriction on data transfer. For
 example, per IP I want to set 2GB of data transfer per month, or
 200MB per day.

Squid does not do quotas like that.

  It can be made to do time allocations with a session helper (ie 5 
minutes access per day at full speed 10Mbps == ~200Mb per day).

  Or it can be made to set bandwidth access speeds with delay pools (ie 
throttling each IP down to 2.4 KB/sec == ~200Mb/day).

  Or it can set a TOS/Diffserv QoS value on outgoing traffic and leave 
the rest to the OS.
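 For the delay-pools option above, a minimal per-IP sketch (class 2 pools
 track each client IP separately; the ACL name and subnet are placeholders,
 and 2400 bytes/sec is roughly the ~200Mb/day figure):

 acl lan src 192.168.0.0/24          # placeholder; use your client subnet
 delay_pools 1
 delay_class 1 2
 delay_parameters 1 -1/-1 2400/2400  # aggregate unlimited, ~2.4 KB/sec per IP
 delay_access 1 allow lan
 delay_access 1 deny all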

Amos
-- 
Please be using
   Current Stable Squid 2.7.STABLE9 or 3.1.14
   Beta testers wanted for 3.2.0.9




RE: [squid-users] bandwith restriction

2011-06-18 Thread Ragheb Rustom
Hi Benjo,

To be able to shape per IP per subnet you will need to use delay pools, as
follows (I am using this on squid 2.7):

delay_pools 1
delay_class 1 2
delay_parameters 1 -1/-1 64000/64000  # this shapes your traffic to 512kbps per user
acl throttle_subnet1 src 192.168.x.x/24
delay_access 1 allow throttle_subnet1
delay_access 1 deny all


Take care, Benjo: in order for this to work, all your client IPs must hit
the cache directly and not reach the cache through a NAT rule. Otherwise
Squid will see all your web traffic coming from one single IP, it will shape
all your inner LAN traffic as one IP, and your whole inner LAN will be shaped
to just 512kbps.

Hope this is clear enough for you.

Sincerely, 

Ragheb Rustom
Smart Telecom S.A.R.L
Sin el fil Highway
Mirna Chalouhi Center - 8th Floor
Beirut, Lebanon
Telefax: +961-1-491582
Mobile: +961-3-286282
Email: rag...@smartelecom.org


-Original Message-
From: benjamin fernandis [mailto:benjo11...@gmail.com] 
Sent: Saturday, June 18, 2011 7:02 PM
To: squid-users@squid-cache.org
Subject: [squid-users] bandwith restriction

Hi,

I want to use delay pools to limit bandwidth per host/IP in my network. We
have 200 users in my organization, and I want to restrict each host/IP.

Please guide me on that.

How do I use delay pools for my requirement?

Thanks,
Benjo




[squid-users] squid crashes by itself and reboots automatically

2009-11-29 Thread Ragheb Rustom
  signal handler called
#4  0x00347a8809c2 in strcmp () from /lib64/libc.so.6
#5  0x0043270e in ?? ()
#6  0x00490584 in ?? ()
#7  0x004910eb in ?? ()
#8  0x004a69a1 in ?? ()
#9  0x004a8f9a in aioCheckCallbacks ()
#10 0x00493107 in ?? ()
#11 0x00438b0e in ?? ()
#12 0x00469d6d in ?? ()
#13 0x00347a81e32a in __libc_start_main () from /lib64/libc.so.6
#14 0x00409cb9 in drand48 ()
#15 0x7fffa38f52c8 in ?? ()
#16 0x001c in ?? ()
#17 0x0002 in ?? ()
#18 0x7fffa38f6f19 in ?? ()
#19 0x7fffa38f6f21 in ?? ()
#20 0x in ?? ()

Ragheb Rustom





RE: [squid-users] squid crashes by itself and reboots automatically

2009-11-29 Thread Ragheb Rustom
The only entries I have in cache.log are the lines below; before those I have
only storeLocateVary warnings, nothing out of the ordinary.

FATAL: Received Segment Violation...dying.
2009/11/29 22:45:48| storeDirWriteCleanLogs: Starting...
2009/11/29 22:45:48| WARNING: Closing open FD   24
2009/11/29 22:45:48| commSetEvents: epoll_ctl(EPOLL_CTL_DEL): failed on fd=24: 
(1) Operation not permitted

The lines before this error are all similar to these:

storeLocateVary: Not our vary marker object, 9884D2F9295EC65D66A36B329A3C7BBD = 
'http://b.static.ak.fbcdn.net/rsrc.php/zE010/hash/2kkc6o
96.css', 'accept-encoding=gzip,%20deflate'/'gzip, deflate'
2009/11/29 22:45:47| storeLocateVary: Not our vary marker object, 
DCBAB677A2364853342DA79508301881 = 
'http://static.ak.fbcdn.net/rsrc.php/z5WTN/hash/arvryq2p
.css', 'accept-encoding=gzip,%20deflate'/'gzip, deflate'
2009/11/29 22:45:47| storeLocateVary: Not our vary marker object, 
FBD3324530E5C7D93199529A191361C0 = 
'http://static.ak.fbcdn.net/rsrc.php/zBICQ/hash/ryfrvghz
.css', 'accept-encoding=gzip,deflate,sdch'/'gzip,deflate,sdch'
2009/11/29 22:45:47| storeLocateVary: Not our vary marker object, 
498DE9ABC02057F4C59D6A60B2E515B1 = 
'http://static.ak.fbcdn.net/rsrc.php/z37DG/hash/e5d2uja2
.css', 'accept-encoding=gzip,deflate,sdch'/'gzip,deflate,sdch'
2009/11/29 22:45:47| storeLocateVary: Not our vary marker object, 
956DBDC557535910ECC2BDE74EEC30D4 = 
'http://b.static.ak.facebook.com/common/redirectiframe.h
tml', 'accept-encoding=gzip,%20deflate'/'gzip, deflate'

Sincerely,

Ragheb Rustom

-Original Message-
From: Kinkie [mailto:gkin...@gmail.com] 
Sent: Monday, November 30, 2009 1:25 AM
To: Ragheb Rustom
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] squid crashes by itself and reboots automatically

On Sun, Nov 29, 2009 at 8:25 PM, Ragheb Rustom rag...@smartelecom.org wrote:
 Dear all,

 I have multiple servers running squid 2.7-stable7. Lately I have noticed
 that most of these servers are crashing sometimes everyday creating a core
 dump file and then after creating the file restart by itself. After reading
 the core file with gdb and doing a backtrace I got the following info. Can
 anyone please help me identify what is going on with these squid servers.
 You help is very valuable.

Is there anything in cache.log? Possibly just the few lines before the restart.


-- 
/kinkie




RE: [squid-users] squid crashes by itself and reboots automatically

2009-11-29 Thread Ragheb Rustom
 in ?? ()
#9  0x004a8fca in aioCheckCallbacks ()
#10 0x00493137 in ?? ()
#11 0x00438b0e in ?? ()
#12 0x00469d6d in ?? ()
#13 0x00393181e32a in __libc_start_main () from /lib64/libc.so.6
#14 0x00409cb9 in drand48 ()
#15 0x7fffdec70c78 in ?? ()
#16 0x001c in ?? ()
#17 0x0002 in ?? ()
#18 0x7fffdec70e92 in ?? ()
#19 0x7fffdec70e9a in ?? ()
#20 0x in ?? ()
(gdb)


Ragheb Rustom
Smartelecom S.A.R.L
Sin el fil - Mar Elias Street
Absi Center - Ground Floor
Beirut, Lebanon
Telefax: +961-1-487275
Mobile: +961-3-286282
Email: rag...@smartelecom.org


-Original Message-
From: Mike Marchywka [mailto:marchy...@hotmail.com] 
Sent: Monday, November 30, 2009 1:50 AM
To: gkin...@gmail.com; rag...@smartelecom.org
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] squid crashes by itself and reboots automatically

 Date: Mon, 30 Nov 2009 00:24:38 +0100
 From: gkin...@gmail.com
 To: rag...@smartelecom.org
 CC: squid-users@squid-cache.org
 Subject: Re: [squid-users] squid crashes by itself and reboots
automatically

 On Sun, Nov 29, 2009 at 8:25 PM, Ragheb Rustom  wrote:
 Dear all,

 I have multiple servers running squid 2.7-stable7. Lately I have noticed
 that most of these servers are crashing sometimes everyday creating a
core
 dump file and then after creating the file restart by itself. After
reading
 the core file with gdb and doing a backtrace I got the following info.
Can
 anyone please help me identify what is going on with these squid servers.
 You help is very valuable.

 Is there anything in cache.log? Possibly just the few lines before the
restart.

The default config file seems good about logging stuff.
Was that sparse stack trace typical, or just the first
one you had? The only obvious signature was strcmp;
if it was blowing up there you either had pathological strings, memory
corruption, a timing issue, etc.
Not too informative until you at least know whether it is
a consistent failure point.

 --
 /kinkie
  




[squid-users] Squid2.7 build patch not working in certain parts of the hunks

2009-09-29 Thread Ragheb Rustom
 = @srcdir@
  sysconfdir = @sysconfdir@
  target_alias = @target_alias@
  top_build_prefix = @top_build_prefix@
  top_builddir = @top_builddir@
  top_srcdir = @top_srcdir@
- errordir = $(datadir)/errors
- DEFAULT_ERROR_DIR = $(errordir)
  INSTALL_LANGUAGES = @ERR_LANGUAGES@
  LANGUAGES = \
Armenian \
--- 188,199 
  sharedstatedir = @sharedstatedir@
  srcdir = @srcdir@
  sysconfdir = @sysconfdir@
  target_alias = @target_alias@
  top_build_prefix = @top_build_prefix@
  top_builddir = @top_builddir@
  top_srcdir = @top_srcdir@
+ errordir = $(pkgdatadir)/errors
+ DEFAULT_ERROR_DIR = $(sysconfdir)/errors
  INSTALL_LANGUAGES = @ERR_LANGUAGES@
  LANGUAGES = \
Armenian \

Src/Makefile.in.rej

***
*** 886,904 
  install-dataDATA: $(data_DATA)
@$(NORMAL_INSTALL)
-   test -z $(datadir) || $(mkdir_p) $(DESTDIR)$(datadir)
@list='$(data_DATA)'; for p in $$list; do \
  if test -f $$p; then d=; else d=$(srcdir)/; fi; \
  f=$(am__strip_dir) \
- echo  $(dataDATA_INSTALL) '$$d$$p' '$(DESTDIR)$(datadir)/$$f'; \
- $(dataDATA_INSTALL) $$d$$p $(DESTDIR)$(datadir)/$$f; \
done

  uninstall-dataDATA:
@$(NORMAL_UNINSTALL)
@list='$(data_DATA)'; for p in $$list; do \
  f=$(am__strip_dir) \
- echo  rm -f '$(DESTDIR)$(datadir)/$$f'; \
- rm -f $(DESTDIR)$(datadir)/$$f; \
done
  install-sysconfDATA: $(sysconf_DATA)
@$(NORMAL_INSTALL)
--- 886,904 
  install-dataDATA: $(data_DATA)
@$(NORMAL_INSTALL)
+   test -z $(sysconfdir)/squid || $(mkdir_p)
$(DESTDIR)$(sysconfdir)/squid
@list='$(data_DATA)'; for p in $$list; do \
  if test -f $$p; then d=; else d=$(srcdir)/; fi; \
  f=$(am__strip_dir) \
+ echo  $(dataDATA_INSTALL) '$$d$$p'
'$(DESTDIR)$(sysconfdir)/$$f'; \
+ $(dataDATA_INSTALL) $$d$$p $(DESTDIR)$(sysconfdir)/$$f; \
done

  uninstall-dataDATA:
@$(NORMAL_UNINSTALL)
@list='$(data_DATA)'; for p in $$list; do \
  f=$(am__strip_dir) \
+ echo  rm -f '$(DESTDIR)$(sysconfdir)/$$f'; \
+ rm -f $(DESTDIR)$(sysconfdir)/$$f; \
done
  install-sysconfDATA: $(sysconf_DATA)
@$(NORMAL_INSTALL)

I really need these patches since I install squid with whatever options I
need by building my own custom RPMs for it.

Sincerely, 

Ragheb Rustom
Email: rag...@smartelecom.org





RE: [squid-users] Squid2.7 build patch not working in certain parts of the hunks

2009-09-29 Thread Ragheb Rustom
Hi All,

OK, so I have located and solved the problem with the 2nd part of the RPM
build patch, which was producing the 2nd set of errors on src/Makefile.in in
the email below. I still have only the 1st error, which is produced when
patching errors/Makefile.in, where I change only errordir and
DEFAULT_ERROR_DIR to conform with the Red Hat settings. Do you see anything
wrong with the patch? Your help here is much appreciated.

@@ -188,11 +188,11 @@ sbindir = @sbindir@
 sharedstatedir = @sharedstatedir@
 srcdir = @srcdir@
 sysconfdir = @sysconfdir@
 target_alias = @target_alias@
 top_build_prefix = @top_build_prefix@
 top_builddir = @top_builddir@
 top_srcdir = @top_srcdir@
-errordir = $(datadir)/errors
+errordir = $(pkgdatadir)/errors
-DEFAULT_ERROR_DIR = $(errordir)
+DEFAULT_ERROR_DIR = $(sysconfdir)/errors
 INSTALL_LANGUAGES = @ERR_LANGUAGES@
 LANGUAGES = \

Sincerely,

Ragheb Rustom
Email: rag...@smartelecom.org


-Original Message-
From: Ragheb Rustom [mailto:rag...@smartelecom.org] 
Sent: Tuesday, September 29, 2009 11:56 AM
To: squid-users@squid-cache.org
Subject: [squid-users] Squid2.7 build patch not working in certain parts of
the hunks
Importance: High

Hi All,

I have been regenerating the build patch file for squid 2.7.STABLE7 to
reflect the new Makefile.in and Makefile.am files you have introduced after
squid 2.7.STABLE5. Most of the build patch works out fine; it is failing at
2 places. The 1st place is errors/Makefile.in, only in the hunk where I need
to change 2 lines.
The hunks that are failing from the patch are the following:

diff -up squid-2.7.STABLE7/errors/Makefile.in.build
squid-2.7.STABLE7/errors/Makefile.in
--- squid-2.7.STABLE7/errors/Makefile.in.build  2009-05-01
04:24:16.0 +0100
+++ squid-2.7.STABLE7/errors/Makefile.in2009-09-25
09:54:09.0 +0100
@@ -188,12 +188,12 @@ sbindir = @sbindir@
 sharedstatedir = @sharedstatedir@
 srcdir = @srcdir@
 sysconfdir = @sysconfdir@
 target_alias = @target_alias@
 top_build_prefix = @top_build_prefix@
 top_builddir = @top_builddir@
 top_srcdir = @top_srcdir@
-errordir = $(datadir)/errors
-DEFAULT_ERROR_DIR = $(errordir)
+errordir = $(pkgdatadir)/errors
+DEFAULT_ERROR_DIR = $(sysconfdir)/errors
 INSTALL_LANGUAGES = @ERR_LANGUAGES@
 LANGUAGES = \
Armenian \

I have double-checked everything, but it is impossible to change either of
these 2 specific lines. I have amended the patch just to make sure that the
problem is not in the patch itself: I was able to change any line above
'errordir = $(datadir)/errors', and even delete any line above it as well as
below 'DEFAULT_ERROR_DIR = $(errordir)'. The thing is, I cannot change just
these 2 lines that need to be changed; even if I amend the patch, as a test,
to remove only one of these lines, just removing it also fails.

The same patch is failing at another hunk, which changes settings in
src/Makefile.in, in the 2nd part of the same patch, with the following
content:

@@ -886,19 +886,19 @@ distclean-compile:
 install-dataDATA: $(data_DATA)
@$(NORMAL_INSTALL)
-   test -z $(datadir) || $(mkdir_p) $(DESTDIR)$(datadir)
+   test -z $(sysconfdir)/squid || $(mkdir_p)
$(DESTDIR)$(sysconfdir)/squid
@list='$(data_DATA)'; for p in $$list; do \
  if test -f $$p; then d=; else d=$(srcdir)/; fi; \
  f=$(am__strip_dir) \
- echo  $(dataDATA_INSTALL) '$$d$$p' '$(DESTDIR)$(datadir)/$$f'; \
- $(dataDATA_INSTALL) $$d$$p $(DESTDIR)$(datadir)/$$f; \
+ echo  $(dataDATA_INSTALL) '$$d$$p'
'$(DESTDIR)$(sysconfdir)/$$f'; \
+ $(dataDATA_INSTALL) $$d$$p $(DESTDIR)$(sysconfdir)/$$f; \
done

 uninstall-dataDATA:
@$(NORMAL_UNINSTALL)
@list='$(data_DATA)'; for p in $$list; do \
  f=$(am__strip_dir) \
- echo  rm -f '$(DESTDIR)$(datadir)/$$f'; \
- rm -f $(DESTDIR)$(datadir)/$$f; \
+ echo  rm -f '$(DESTDIR)$(sysconfdir)/$$f'; \
+ rm -f $(DESTDIR)$(sysconfdir)/$$f; \
done
 install-sysconfDATA: $(sysconf_DATA)
@$(NORMAL_INSTALL)

Although it works perfectly on the same Makefile for another part of the
patch being:

diff -up squid-2.7.STABLE7/src/Makefile.in.build
squid-2.7.STABLE7/src/Makefile.in
--- squid-2.7.STABLE7/src/Makefile.in.build 2009-05-01
04:24:40.0 +0100
+++ squid-2.7.STABLE7/src/Makefile.in   2009-09-25 10:22:37.0 +0100
@@ -613,20 +613,20 @@ DEFAULT_PREFIX = $(prefix)
 DEFAULT_CONFIG_FILE = $(sysconfdir)/squid.conf
 DEFAULT_MIME_TABLE = $(sysconfdir)/mime.conf
 DEFAULT_DNSSERVER = $(libexecdir)/`echo dnsserver | sed
'$(transform);s/$$/$(EXEEXT)/'`
-DEFAULT_LOG_PREFIX = $(localstatedir)/logs
+DEFAULT_LOG_PREFIX = $(localstatedir)/log/squid
 DEFAULT_CACHE_LOG = $(DEFAULT_LOG_PREFIX)/cache.log
 DEFAULT_ACCESS_LOG = $(DEFAULT_LOG_PREFIX)/access.log
 DEFAULT_STORE_LOG = $(DEFAULT_LOG_PREFIX)/store.log
-DEFAULT_PID_FILE = $(DEFAULT_LOG_PREFIX)/squid.pid
+DEFAULT_PID_FILE = $(DEFAULT_LOG_PREFIX)/run/squid.pid

RE: [squid-users] squid2.7 Stable 6 passes in intervals where cpu usage hits 100%

2009-05-30 Thread Ragheb Rustom
Hi Chris,

Thank you for the reply. I will use squid -k debug to check what is going
on.

As for the additional info, sorry for not describing the Squid setup more
clearly; anyway, these are the configuration values you guessed at, taken
from my squid.conf:

Wild guesses based on the information given:

*) You are using UFS for your cache_dir instead of AUFS.

cache_dir aufs /cache1 115000 16 256
cache_dir aufs /cache2 115000 16 256

I am using AUFS not UFS.

*) Your cache is pretty large and your cache_swap_high and 
cache_swap_low are still at the defaults

cache_swap_low 93
cache_swap_high 95

Do you think these values are reasonable enough given the above sizes of
my cache_dirs, or should they be changed?

*) You are using large lists of regex ACLs

acl videocache_allow_url url_regex -i \.youtube\.com\/get_video\?
acl videocache_allow_url url_regex -i \.googlevideo\.com\/videoplayback
\.googlevideo\.com\/get_video\?
acl videocache_allow_url url_regex -i \.google\.com\/videoplayback
\.google\.com\/get_video\?
acl videocache_allow_url url_regex -i \.google\.[a-z][a-z]\/videoplayback
\.google\.[a-z][a-z]\/get_video\?
acl videocache_allow_url url_regex -i
(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]
?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][
0-9]?)\/videoplayback\?
acl videocache_allow_url url_regex -i
(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]
?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][
0-9]?)\/get_video\?
acl videocache_allow_url url_regex -i
proxy[a-z0-9\-][a-z0-9][a-z0-9][a-z0-9]?\.dailymotion\.com\/
acl videocache_allow_url url_regex -i vid\.akm\.dailymotion\.com\/
acl videocache_allow_url url_regex -i
[a-z0-9][0-9a-z][0-9a-z]?[0-9a-z]?[0-9a-z]?\.xtube\.com\/(.*)flv
acl videocache_allow_url url_regex -i bitcast\.vimeo\.com\/vimeo\/videos\/
acl videocache_allow_url url_regex -i
va\.wrzuta\.pl\/wa[0-9][0-9][0-9][0-9]?
acl videocache_allow_url url_regex -i \.files\.youporn\.com\/(.*)\/flv\/
acl videocache_allow_url url_regex -i \.msn\.com\.edgesuite\.net\/(.*)\.flv
acl videocache_allow_url url_regex -i
media[a-z0-9]?[a-z0-9]?[a-z0-9]?\.tube8\.com\/
mobile[a-z0-9]?[a-z0-9]?[a-z0-9]?\.tube8\.com\/
acl videocache_allow_url url_regex -i \.mais\.uol\.com\.br\/(.*)\.flv
acl videocache_allow_url url_regex -i
\.video[a-z0-9]?[a-z0-9]?\.blip\.tv\/(.*)\.(flv|avi|mov|mp3|m4v|mp4|wmv|rm|r
am)
acl videocache_allow_url url_regex -i video\.break\.com\/(.*)\.(flv|mp4)
acl videocache_allow_dom dstdomain v.mccont.com dl.redtube.com
.cdn.dailymotion.com
acl videocache_deny_url url_regex -i http:\/\/[a-z][a-z]\.youtube\.com
http:\/\/www\.youtube\.com

acl bad_requests urlpath_regex -i cmd.exe \/bin\/sh default\.ida?XXX
omg.pif

acl music url_regex -i .wav
acl video url_regex -i .avi .mpe
acl cddown url_regex -i .iso .raw

These are all the ACL regex entries I use in my squid.conf file; most
of them belong to videocache.

I will use squid -k debug to try to get more info about what is going on
during the high CPU usage period, and will share whatever I find on the
squid-users mailing list.

Thanks Chris for your support.

Ragheb

-Original Message-
From: crobert...@gci.net [mailto:crobert...@gci.net] 
Sent: Friday, May 29, 2009 10:18 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] squid2.7 Stable 6 passes in intervals where cpu
usage hits 100%

Ragheb Rustom wrote:
 Dear All,

 I have been noticing for some time now that my squid server passes in
nearly
 regular intervals during the day where the squid process CPU usage hits
100%
 for around 20 - 30 seconds or so during which squid will not serve
requests.
 After these 20 -30sec the squid CPU usage drops to around 5 - 13% and
 website navigation is normal again. This happens like I said before at
 nearly regular intervals during each day several times and it happens only
 for around 20 -30sec. I have been observing the cache.log file but there
is
 nothing apparent there. Is there a way that I know what squid is doing
when
 this problem happens to try and resolve it as trying to solve it like it
is
 now with no info about what is going on is somewhat vague.

squid -k debug
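A rough way to capture just a short window of full debug output around one of
the spikes (squid -k debug toggles full debugging on and off; the timing is
only illustrative):

squid -k debug    # switch to full debugging
sleep 30          # let one 100% CPU episode get logged
squid -k debug    # toggle back to the configured debug level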

  I think this
 problem happened after I have changed memory and disk replacement policies
 from lru to heap but I am not so sure that this is the source of the
 problem. I am using squid2.7-Stable6 on a fedora9 server which is a dual
 xeon quad 2.8Ghz CPU with 16Gb of Ram installed and 3 SAS hard drives. One
 more thing this server is also running videocache 1.9.1 but it has been
 running videocache also before the problem started with no problems
 whatsoever. I appreciate any help you can offer me.
   

Wild guesses based on the information given:

*) You are using UFS for your cache_dir instead of AUFS.
*) Your cache is pretty large and your cache_swap_high and 
cache_swap_low are still at the defaults
*) You are using large lists of regex ACLs

RE: [squid-users] squid2.7 Stable 6 passes in intervals where cpu usage hits 100%

2009-05-30 Thread Ragheb Rustom
Hi Amos,

Thanks for the info.

According to what you have mentioned below, what do you think the
cache_swap_low and cache_swap_high values should be to lower the load on the
system? I think the hardware is big enough to hold such a load (the whole
system is SAS-based with 15k rpm drives and has 16GB of physical memory).
 
That indicates that when the cache dir gets more than 95% full it needs 
to clear at least 2GB of objects (2% of 115,000MB).

Depending on your disks and avg object sizes this may be a large part of 
the load.

As for the regex ACLs: as I said, these are specified by the videocache
Python redirector plugin setup. I can try turning all of them off and see
whether the problem persists.
These regex ACLs are used only with the access lines below:

url_rewrite_access deny videocache_deny_url
url_rewrite_access allow videocache_allow_url
url_rewrite_access allow videocache_allow_dom


A lot of those can be reduced to dstdomain or at least dstdomain 
followed by a regex.

ie:
acl A dstdomain .googlevideo.com
acl R urlpath_regex -i /videoplayback

http_access allow/deny A R

Using dstdomain (very fast test) at the start of an access line to 
determine if the slower regex later on in the line is even needed can 
produce a dramatic speed boost in response times.
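Applied to the url_rewrite_access lines quoted above, that might look
something like this (the dstdomain ACL name and domains are only an
illustration):

acl videocache_dom dstdomain .googlevideo.com .dailymotion.com
url_rewrite_access allow videocache_dom videocache_allow_url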

Where you are testing for file types the rep_mime_type on reply can 
check the mime faster than a regex can test the entire URL.

Also matching all of that file type by mime from a whole dstdomain can 
be more effective and faster than a specific regex testing for a range 
of specific sub-domains and file endings.
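A minimal illustration of that kind of reply-side check (the ACL names and
the deny action are just examples of the syntax):

acl video_dom dstdomain .dailymotion.com
acl flv_reply rep_mime_type -i ^video/x-flv
http_reply_access deny video_dom flv_reply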

Also you do not indicate which of the *_access lines these are being
tested in. It's a very good idea to only test them in one of either
http_access or http_reply_access, not both.


Ragheb Rustom


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Saturday, May 30, 2009 11:16 AM
To: Ragheb Rustom
Cc: squid-users@squid-cache.org; crobert...@gci.net
Subject: Re: [squid-users] squid2.7 Stable 6 passes in intervals where cpu
usage hits 100%

Ragheb Rustom wrote:
 Hi Chris,
 
 Thank you for the reply. I will use squid -k debug to check what is going
 on.
 
 As for the additional info sorry for not clearing the squid setup more
 anyway this is squid configuration values you have guessed as taken from
my
 squid.conf
 
 Wild guesses based on the information given:
 
 *) You are using UFS for your cache_dir instead of AUFS.
 
 cache_dir aufs /cache1 115000 16 256
 cache_dir aufs /cache2 115000 16 256
 
 I am using AUFS not UFS.
 
 *) Your cache is pretty large and your cache_swap_high and 
 cache_swap_low are still at the defaults
 
 cache_swap_low 93
 cache_swap_high 95
 
 Do u think these values are reasonable enough regarding the above values
of
 my Cacche dir or should they be changed.
 

That indicates that when the cache dir gets more than 95% full it needs 
to clear at least 2GB of objects (2% of 115,000MB).

Depending on your disks and avg object sizes this may be a large part of 
the load.


 *) You are using large lists of regex ACLs
 
 acl videocache_allow_url url_regex -i \.youtube\.com\/get_video\?
 acl videocache_allow_url url_regex -i \.googlevideo\.com\/videoplayback
 \.googlevideo\.com\/get_video\?
 acl videocache_allow_url url_regex -i \.google\.com\/videoplayback
 \.google\.com\/get_video\?
 acl videocache_allow_url url_regex -i \.google\.[a-z][a-z]\/videoplayback
 \.google\.[a-z][a-z]\/get_video\?
 acl videocache_allow_url url_regex -i

(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]

?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][
 0-9]?)\/videoplayback\?
 acl videocache_allow_url url_regex -i

(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]

?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][
 0-9]?)\/get_video\?
 acl videocache_allow_url url_regex -i
 proxy[a-z0-9\-][a-z0-9][a-z0-9][a-z0-9]?\.dailymotion\.com\/
 acl videocache_allow_url url_regex -i vid\.akm\.dailymotion\.com\/
 acl videocache_allow_url url_regex -i
 [a-z0-9][0-9a-z][0-9a-z]?[0-9a-z]?[0-9a-z]?\.xtube\.com\/(.*)flv
 acl videocache_allow_url url_regex -i bitcast\.vimeo\.com\/vimeo\/videos\/
 acl videocache_allow_url url_regex -i
 va\.wrzuta\.pl\/wa[0-9][0-9][0-9][0-9]?
 acl videocache_allow_url url_regex -i \.files\.youporn\.com\/(.*)\/flv\/
 acl videocache_allow_url url_regex -i
\.msn\.com\.edgesuite\.net\/(.*)\.flv
 acl videocache_allow_url url_regex -i
 media[a-z0-9]?[a-z0-9]?[a-z0-9]?\.tube8\.com\/
 mobile[a-z0-9]?[a-z0-9]?[a-z0-9]?\.tube8\.com\/
 acl videocache_allow_url url_regex -i \.mais\.uol\.com\.br\/(.*)\.flv
 acl videocache_allow_url url_regex -i

\.video[a-z0-9]?[a-z0-9]?\.blip\.tv\/(.*)\.(flv|avi|mov|mp3|m4v|mp4|wmv|rm|r
 am)
 acl videocache_allow_url url_regex -i video\.break\.com\/(.*)\.(flv|mp4)
 acl videocache_allow_dom dstdomain v.mccont.com dl.redtube.com

[squid-users] squid2.7 Stable 6 passes in intervals where cpu usage hits 100%

2009-05-29 Thread Ragheb Rustom
Dear All,

I have been noticing for some time now that my squid server passes at nearly
regular intervals during the day through periods where the squid process CPU
usage hits 100% for around 20 - 30 seconds or so, during which squid will not
serve requests. After these 20-30 sec the squid CPU usage drops to around
5 - 13% and website navigation is normal again. This happens, like I said, at
nearly regular intervals several times during each day, and only for around
20-30 sec. I have been observing the cache.log file but there is nothing
apparent there. Is there a way to know what squid is doing when this problem
happens, so I can try to resolve it? Trying to solve it like it is now, with
no info about what is going on, is somewhat vague. I think this problem
started after I changed the memory and disk replacement policies from lru to
heap, but I am not so sure that this is the source of the problem. I am using
squid 2.7-STABLE6 on a Fedora 9 server which is a dual quad-core Xeon 2.8GHz
with 16GB of RAM installed and 3 SAS hard drives. One more thing: this server
is also running videocache 1.9.1, but it had been running videocache before
the problem started with no problems whatsoever. I appreciate any help you
can offer me.

Sincerely, 

Ragheb Rustom
Smartelecom S.A.R.L
Sin el fil - Mar Elias Street
Absi Center - Ground Floor
Beirut, Lebanon
Email: rag...@smartelecom.org





RE: [squid-users] Build patch fails to apply on Squid 2.7 stable6

2009-02-09 Thread Ragheb Rustom
Hi Henrik,

Can you please (or anyone else who can do this) regenerate the squid 2.7
build patch to reflect the changes that Amos mentioned in the autoconf
toolchain.

Thank you for your time.

Ragheb Rustom


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Sunday, February 08, 2009 11:11 PM
To: Ragheb Rustom
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Build patch fails to apply on Squid 2.7 stable6

 Hi,



 I have been trying to compile squid 2.7-stable6 on Fedora Core 9 x86-64
 system. I have already done all the changes I need in the spec file in
 order
 to create my system rpms but I noticed that when the rpmbuild try to build
 the rpm it fails when it tries to apply the squid2.6Stable2 build patch
 with
 the following errors. I have even tried to do the compile process manually
 but also the same error appears when I manually try to apply the same
 patch.
 All other patches have been installed successfully only the build patch
 fails to apply. Below are the error messages I get from the build patching
 process


 + echo 'Patch #201 (squid-2.5.STABLE11-config.patch):'

 Patch #201 (squid-2.5.STABLE11-config.patch):

 + patch -p1 -b --suffix .config -s

 + echo 'Patch #202 (squid-2.5.STABLE4-location.patch):'

 Patch #202 (squid-2.5.STABLE4-location.patch):

 + patch -p1 -b --suffix .location -s

 + echo 'Patch #203 (squid-2.6.STABLE2-build.patch):'

 Patch #203 (squid-2.6.STABLE2-build.patch):

 + patch -p1 -b --suffix .build -s

 1 out of 2 hunks FAILED -- saving rejects to file src/Makefile.in.rej

 error: Bad exit status from /var/tmp/rpm-tmp.93888 (%prep)





 RPM build errors:

 Bad exit status from /var/tmp/rpm-tmp.93888 (%prep)

 D: May free Score board((nil))



 Now these are the errors I get from the manual application of the build
 patch



 patching file errors/Makefile.in

 Hunk #1 succeeded at 235 with fuzz 1 (offset 14 lines).

 Hunk #2 succeeded at 417 (offset 4 lines).

 Hunk #3 succeeded at 450 (offset 14 lines).

 patching file icons/Makefile.in

 Hunk #1 succeeded at 272 (offset 14 lines).

 patching file src/Makefile.in

 Hunk #1 FAILED at 586.

 Hunk #2 succeeded at 926 (offset 84 lines).

 1 out of 2 hunks FAILED -- saving rejects to file src/Makefile.in.rej



 Here are the contents of the src/Makefile.in.rej



 ***

 *** 586,603 

   DEFAULT_CONFIG_FILE = $(sysconfdir)/squid.conf

   DEFAULT_MIME_TABLE = $(sysconfdir)/mime.conf

   DEFAULT_DNSSERVER = $(libexecdir)/`echo dnsserver | sed
 '$(transform);s/$$/$(EXEEXT)/'`

 - DEFAULT_LOG_PREFIX = $(localstatedir)/logs

   DEFAULT_CACHE_LOG = $(DEFAULT_LOG_PREFIX)/cache.log

   DEFAULT_ACCESS_LOG = $(DEFAULT_LOG_PREFIX)/access.log

   DEFAULT_STORE_LOG = $(DEFAULT_LOG_PREFIX)/store.log

 - DEFAULT_PID_FILE = $(DEFAULT_LOG_PREFIX)/squid.pid

 - DEFAULT_SWAP_DIR = $(localstatedir)/cache

   DEFAULT_PINGER = $(libexecdir)/`echo pinger | sed
 '$(transform);s/$$/$(EXEEXT)/'`

   DEFAULT_UNLINKD = $(libexecdir)/`echo unlinkd | sed
 '$(transform);s/$$/$(EXEEXT)/'`

   DEFAULT_DISKD = $(libexecdir)/`echo diskd-daemon | sed
 '$(transform);s/$$/$(EXEEXT)/'`

 - DEFAULT_ICON_DIR = $(datadir)/icons

 - DEFAULT_ERROR_DIR = $(datadir)/errors/@ERR_DEFAULT_LANGUAGE@

 - DEFAULT_MIB_PATH = $(datadir)/mib.txt

   DEFAULT_HOSTS = @OPT_DEFAULT_HOSTS@



   # Don't automatically uninstall config files

 --- 586,603 

   DEFAULT_CONFIG_FILE = $(sysconfdir)/squid.conf

   DEFAULT_MIME_TABLE = $(sysconfdir)/mime.conf

   DEFAULT_DNSSERVER = $(libexecdir)/`echo dnsserver | sed
 '$(transform);s/$$/$(EXEEXT)/'`

 + DEFAULT_LOG_PREFIX = $(localstatedir)/log/squid

   DEFAULT_CACHE_LOG = $(DEFAULT_LOG_PREFIX)/cache.log

   DEFAULT_ACCESS_LOG = $(DEFAULT_LOG_PREFIX)/access.log

   DEFAULT_STORE_LOG = $(DEFAULT_LOG_PREFIX)/store.log

 + DEFAULT_PID_FILE = $(localstatedir)/run/squid.pid

 + DEFAULT_SWAP_DIR = $(localstatedir)/spool/squid

   DEFAULT_PINGER = $(libexecdir)/`echo pinger | sed
 '$(transform);s/$$/$(EXEEXT)/'`

   DEFAULT_UNLINKD = $(libexecdir)/`echo unlinkd | sed
 '$(transform);s/$$/$(EXEEXT)/'`

   DEFAULT_DISKD = $(libexecdir)/`echo diskd-daemon | sed
 '$(transform);s/$$/$(EXEEXT)/'`

 + DEFAULT_ICON_DIR = $(pkgdatadir)/icons

 + DEFAULT_ERROR_DIR = $(pkgdatadir)/errors/@ERR_DEFAULT_LANGUAGE@

 + DEFAULT_MIB_PATH = $(sysconfdir)/mib.txt

   DEFAULT_HOSTS = @OPT_DEFAULT_HOSTS@



   # Don't automatically uninstall config files



 From what I could see is that the above changes are not being done to the
 src/Makefile.in but I cannot understand why this is happening. I would
 really appreciate your help guys on this.


We have recently upgraded the autoconf toolchain used to generate
Makefile.in and configure scripts. The Makefile.in files are quite
different.

If you are the maintainer you will need to regenerate the patches.
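For what it's worth, regenerating such a hunk usually amounts to editing a
pristine copy of the new file and re-running diff (paths and the output file
name here are purely illustrative):

cp src/Makefile.in src/Makefile.in.build
# re-apply the intended changes by hand to src/Makefile.in
diff -up src/Makefile.in.build src/Makefile.in > squid-2.7-build.patch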

If you are just trying to build the prepared package, then please contact
the maintainer to get the package updated.

Amos






[squid-users] Build patch fails to apply on Squid 2.7 stable6

2009-02-08 Thread Ragheb Rustom
Hi,

 

I have been trying to compile squid 2.7-STABLE6 on a Fedora Core 9 x86-64
system. I have already made all the changes I need in the spec file in order
to create my own RPMs, but I noticed that when rpmbuild tries to build the
RPM it fails when it tries to apply the squid-2.6.STABLE2 build patch, with
the following errors. I have even tried to do the compile process manually,
but the same error appears when I apply the patch by hand. All the other
patches apply successfully; only the build patch fails. Below are the error
messages I get from the build patching process:

 

+ echo 'Patch #201 (squid-2.5.STABLE11-config.patch):'

Patch #201 (squid-2.5.STABLE11-config.patch):

+ patch -p1 -b --suffix .config -s

+ echo 'Patch #202 (squid-2.5.STABLE4-location.patch):'

Patch #202 (squid-2.5.STABLE4-location.patch):

+ patch -p1 -b --suffix .location -s

+ echo 'Patch #203 (squid-2.6.STABLE2-build.patch):'

Patch #203 (squid-2.6.STABLE2-build.patch):

+ patch -p1 -b --suffix .build -s

1 out of 2 hunks FAILED -- saving rejects to file src/Makefile.in.rej

error: Bad exit status from /var/tmp/rpm-tmp.93888 (%prep)

 

 

RPM build errors:

Bad exit status from /var/tmp/rpm-tmp.93888 (%prep)

D: May free Score board((nil))

 

Now these are the errors I get from the manual application of the build
patch

 

patching file errors/Makefile.in

Hunk #1 succeeded at 235 with fuzz 1 (offset 14 lines).

Hunk #2 succeeded at 417 (offset 4 lines).

Hunk #3 succeeded at 450 (offset 14 lines).

patching file icons/Makefile.in

Hunk #1 succeeded at 272 (offset 14 lines).

patching file src/Makefile.in

Hunk #1 FAILED at 586.

Hunk #2 succeeded at 926 (offset 84 lines).

1 out of 2 hunks FAILED -- saving rejects to file src/Makefile.in.rej

 

Here are the contents of the src/Makefile.in.rej

 

***

*** 586,603 

  DEFAULT_CONFIG_FILE = $(sysconfdir)/squid.conf

  DEFAULT_MIME_TABLE = $(sysconfdir)/mime.conf

  DEFAULT_DNSSERVER = $(libexecdir)/`echo dnsserver | sed
'$(transform);s/$$/$(EXEEXT)/'`

- DEFAULT_LOG_PREFIX = $(localstatedir)/logs

  DEFAULT_CACHE_LOG = $(DEFAULT_LOG_PREFIX)/cache.log

  DEFAULT_ACCESS_LOG = $(DEFAULT_LOG_PREFIX)/access.log

  DEFAULT_STORE_LOG = $(DEFAULT_LOG_PREFIX)/store.log

- DEFAULT_PID_FILE = $(DEFAULT_LOG_PREFIX)/squid.pid

- DEFAULT_SWAP_DIR = $(localstatedir)/cache

  DEFAULT_PINGER = $(libexecdir)/`echo pinger | sed
'$(transform);s/$$/$(EXEEXT)/'`

  DEFAULT_UNLINKD = $(libexecdir)/`echo unlinkd | sed
'$(transform);s/$$/$(EXEEXT)/'`

  DEFAULT_DISKD = $(libexecdir)/`echo diskd-daemon | sed
'$(transform);s/$$/$(EXEEXT)/'`

- DEFAULT_ICON_DIR = $(datadir)/icons

- DEFAULT_ERROR_DIR = $(datadir)/errors/@ERR_DEFAULT_LANGUAGE@

- DEFAULT_MIB_PATH = $(datadir)/mib.txt

  DEFAULT_HOSTS = @OPT_DEFAULT_HOSTS@

 

  # Don't automatically uninstall config files

--- 586,603 

  DEFAULT_CONFIG_FILE = $(sysconfdir)/squid.conf

  DEFAULT_MIME_TABLE = $(sysconfdir)/mime.conf

  DEFAULT_DNSSERVER = $(libexecdir)/`echo dnsserver | sed
'$(transform);s/$$/$(EXEEXT)/'`

+ DEFAULT_LOG_PREFIX = $(localstatedir)/log/squid

  DEFAULT_CACHE_LOG = $(DEFAULT_LOG_PREFIX)/cache.log

  DEFAULT_ACCESS_LOG = $(DEFAULT_LOG_PREFIX)/access.log

  DEFAULT_STORE_LOG = $(DEFAULT_LOG_PREFIX)/store.log

+ DEFAULT_PID_FILE = $(localstatedir)/run/squid.pid

+ DEFAULT_SWAP_DIR = $(localstatedir)/spool/squid

  DEFAULT_PINGER = $(libexecdir)/`echo pinger | sed
'$(transform);s/$$/$(EXEEXT)/'`

  DEFAULT_UNLINKD = $(libexecdir)/`echo unlinkd | sed
'$(transform);s/$$/$(EXEEXT)/'`

  DEFAULT_DISKD = $(libexecdir)/`echo diskd-daemon | sed
'$(transform);s/$$/$(EXEEXT)/'`

+ DEFAULT_ICON_DIR = $(pkgdatadir)/icons

+ DEFAULT_ERROR_DIR = $(pkgdatadir)/errors/@ERR_DEFAULT_LANGUAGE@

+ DEFAULT_MIB_PATH = $(sysconfdir)/mib.txt

  DEFAULT_HOSTS = @OPT_DEFAULT_HOSTS@

 

  # Don't automatically uninstall config files

 

From what I can see, the above changes are not being made to
src/Makefile.in, but I cannot understand why this is happening. I would
really appreciate your help on this, guys.

 

Sincerely,

Ragheb Rustom




[squid-users] Squid3-stable11 crash and restart by itself

2009-01-19 Thread Ragheb Rustom
/01/19 20:09:21| clientProcessRequest: Invalid Request
2009/01/19 20:11:13| squidaio_queue_request: WARNING - Queue congestion

Something is happening where Squid says "FATAL: comm_write: fd 2296: pending
callback!" and it stops and restarts.
Does anyone have any idea what's wrong with the system? Do you think it is a
configuration problem? The system is a Core 2 Duo 2.66GHz CPU with 8GB of RAM
and 1 HDD exclusively for caching, 250GB Sata 1 RPM. Can the problem be
resolved with Squid 3, or shall I go back to squid 2.6 or 2.7?

My uname -r output gives the following as the running kernel:

2.6.27.9-73.fc9.x86_64

Thank you

Ragheb Rustom





RE: [squid-users] Squid3-stable11 crash and restart by itself

2009-01-19 Thread Ragheb Rustom
 into squid through its listening port
as a 'request'. Are you perhaps intercepting traffic?
I suspect this is a client who has some sort of malware on his PC; I will
contact him to check out his system.

 2009/01/19 19:17:45| clientParseRequestMethod: Unsupported method in
 request
 '__

___j___A__t_#__vQ_s_V_s_{WZ_fd}_3__9G(_(__DQ_N___W_Ba__T_u__
 _`__
 __OH_k_#_kWz+'
 2009/01/19 19:17:45| clientProcessRequest: Invalid Request

Third (non-fatal, slow-speed) problem, IO seems to be overloading:

 2009/01/19 19:17:59| squidaio_queue_request: WARNING - Queue congestion

Are your disks full? Or just an overload of large requests at peak traffic
times.
My disks are not full; in fact I still have a lot of space, but this might be
an overload when Squid restarts by itself.

Fourth (fatal) problem: Squid rotates/restarts, and flushing its indexes to
disk dies.

 2009/01/19 19:20:08| ctx: exit level  0
 2009/01/19 19:20:08| storeDirWriteCleanLogs: Starting...
 2009/01/19 19:20:08| WARNING: Closing open FD   17
snip
 2009/01/19 19:20:09|851968 entries written so far.
 2009/01/19 19:20:09|917504 entries written so far.
 2009/01/19 19:20:09|   Finished.  Wrote 978773 entries.
 2009/01/19 19:20:09|   Took 0.35 seconds (2810079.01 entries/sec).
 FATAL: comm_write: fd 1055: pending callback!

snip


I think this is already known, bugzilla is down right now so I can't check.
First, try getting the STABLE11 to actually be the running squid. That may
solve the issue for you.


 Something is happening where squid says FATAL: comm_write: fd 2296:
 pending
 callback! And it stops and restart.
 Does anyone have any idea what's wrong with the system. Do you think it is
 a
 configuration problem. The system is a core2 Duo 2.66G CPU and 8GB of RAM
 and 1 HDD exclusively for caching 250GB Sata 1 RPM. Can the problem be
 resolved with Squid3 or shall I go back to squid 2.6 or 2.7?

 My uname -r output gives the following

 2.6.27.9-73.fc9.x86_64 as function kernel

 Thank you

 Ragheb Rustom





Thanks Again