Re: [squid-users] squidclient mgr:info

2011-04-03 Thread Amos Jeffries

On 03/04/11 07:22, sq...@sourcesystemsonline.com wrote:

Good day,
Special thanks to Hasanen AL-Bana.
Thank you for the swift response. Do you mean ICP or HTCP is useful
only if I have another cache server WITHIN my network?


No. Location does not matter.
If you are using another proxy as cache_peer or someone else is using 
yours as their cache_peer, then the proxy-proxy protocols matter.
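
For example, a sketch in squid.conf of a sibling relationship where
these protocols matter (the host name and ports are placeholders):

  # ICP sibling: HTTP on port 3128, ICP queries on port 3130
  cache_peer sibling.example.net sibling 3128 3130 proxy-only
  # or the same sibling queried over HTCP (default port 4827) instead:
  # cache_peer sibling.example.net sibling 3128 4827 htcp proxy-only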




Please, what information do you want me to supply so that you can help
fine-tune my Squid performance? I am already having headaches with
Squid rebuilding its store most of the time (see my post Cache Log
Errors).


Rebuild usually occurs under two scenarios:
 * on restart following a crash
 * on restart following a shutdown/restart which was aborted early 
before the cache meta data was saved to disk.


From your other thread it appears that you have hit some complications 
which led to this second case. (Continued in that other thread.)
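
If the rebuilds follow planned restarts, the usual remedy is to give
Squid time to save its index before exiting. A sketch:

  # squid.conf: allow up to 60 seconds for a clean shutdown (default 30)
  shutdown_lifetime 60 seconds

then stop Squid with the graceful signal rather than killing the
process:

  squid -k shutdown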


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.5


[squid-users] tproxy + tcp_outgoing_address

2011-04-03 Thread cytron

Hi!

I have been using tproxy on my Squid server for a long time, but now I
need to redirect some traffic to another link for a selected list of
URLs, using an ACL in squid.conf.

I set tcp_outgoing_address but it doesn't work; the traffic goes out
via the default route.
If I replace tproxy on http_port with transparent,
tcp_outgoing_address works fine.

Summary:

without tproxy = tcp_outgoing_address works fine!

with tproxy = tcp_outgoing_address is ignored

What is this? A bug? A missing feature? A misconfiguration?

My squid is:

* Version 3.2.0.5 (source)
* Configured with: --enable-async-io --enable-disk-io
--enable-storeio=ufs,diskd --enable-esi --enable-kill-parent-hack
--enable-ssl --enable-linux-netfilter --enable-zph-qos --with-openssl
--enable-default-err-language=Portuguese --enable-ltdl-convenience
--enable-cachemgr-hostname=cache --enable-removal-policies=heap,lru
* System: Slackware64 13.1 (I have also tested on 32-bit; it does the same)
* iptables rules are the same as in the squid-cache.org help, using
DIVERT (the standard rules are sketched below)
* the default route is the main link; a secondary route in table link2
uses advanced source-based routing for tcp_outgoing_address
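
For reference, the DIVERT setup from the squid-cache.org wiki looks
roughly like this (3129 being the tproxy http_port; exact rules may
differ slightly by version):

  iptables -t mangle -N DIVERT
  iptables -t mangle -A DIVERT -j MARK --set-mark 1
  iptables -t mangle -A DIVERT -j ACCEPT
  iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
  iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY \
    --tproxy-mark 0x1/0x1 --on-port 3129
  ip rule add fwmark 1 lookup 100
  ip route add local 0.0.0.0/0 dev lo table 100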

I need to use tproxy and I need to use other links too. How can I do
this, please?



Re: [squid-users] Why need this for get auth-sync between squid and dansguardian?

2011-04-03 Thread Amos Jeffries

On 02/04/11 01:12, Fran Márquez wrote:

I'm modifying the squid.conf file of my proxy server to replace basic
auth with NTLM auth.


Please consider going straight to Negotiate/Kerberos. NTLM is officially 
deprecated and should be avoided where possible.
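
A minimal Negotiate sketch, assuming the squid_kerb_auth helper shipped
with Squid 3.1 (the helper path, service principal and realm below are
placeholders):

  auth_param negotiate program /usr/lib/squid/squid_kerb_auth -s HTTP/proxy.example.com@EXAMPLE.COM
  auth_param negotiate children 10
  auth_param negotiate keep_alive on
  acl authenticated proxy_auth REQUIRED
  http_access allow authenticated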




All works fine in squid, but when I use dansguardian, I've noticed
that dansguardian doesn't get the username if I remove these lines
from squid.conf:



external_acl_type ldap_group %LOGIN /usr/lib/squid/squid_ldap_group -R
-b dc=domain -D cn=proxy,cn=proxy,dc=domain -w proxy -f
(&(objectclass=person)(sAMAccountName=%v)(memberof=cn=%a,ou=proxy,dc=domain))
-h 1.1.1.1

acl ldapLimited external ldap_group notAlowed
acl ldapTotal external ldap_group alowed

http_access allow ldapTotal all


Note: 1.1.1.1 is the DC's IP address.


I thought these lines affected only basic authentication, since they
were already written before I started to implement the NTLM auth.

Can anybody explain to me what these lines are doing exactly? I checked
the LDAP groups referred to in these lines (ldapLimited and ldapTotal)
and they are empty.


What those lines do:
 external_acl_type ACLs using %LOGIN require authentication credentials
in order to be tested. These details are required regardless of the
result.


So whenever Squid reaches that ACL and tries to test it, it will either
use the credentials given or challenge the browser to present some.


The type of authentication does not matter to Squid when testing the 
ACLs. Whatever types you have in your auth_param setup will be used and 
sent.



I think the problem is likely that DG does not support NTLM. Or that
your Squid version does not allow one of the many prerequisites needed
to get (stateful!) NTLM to work over (stateless) HTTP.

These requirements are (sketched in squid.conf terms below):
 * pinning the client and server connections together for the duration
of *either* TCP link.

 * HTTP/1.1-style persistent server connections
 * HTTP/1.1-style persistent client connections
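
In squid.conf terms the two persistence requirements map onto these
directives (both default to on in current releases); the connection
pinning itself happens automatically in 3.1+ when an NTLM or Negotiate
handshake is detected:

  client_persistent_connections on
  server_persistent_connections on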

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.5


Re: [squid-users] replacing Apache httpd

2011-04-03 Thread Amos Jeffries

On 02/04/11 02:22, Daniel Plappert wrote:

Hi all,

I am new to squid, so I hope you don't feel offended if this is a
beginner's question. ;-) I am trying to replace an Apache httpd server,
which works as a delegating proxy. Let me explain the scenario briefly:

internet -> Apache httpd delegator -> server[1-3]

Because, to the outside, we have just one IP address, the httpd
delegator forwards requests according to the URL to one of the internal
servers, i.e. wiki.example.com is forwarded to server1, dms.example.com
is forwarded to server2. This is done with virtual hosts and rewrite
rules, i.e. for server1:

RewriteRule ^(.*)$ http://wiki/$1   [L,P]

As you can see here, the request is delegated to an internal server called wiki.

What I am trying to do now is to replace the Apache httpd delegator
with squid. What I've done so far is to configure squid as an
accelerator and declare the corresponding nodes:

acl wiki_sites dstdomain wiki.example.com
http_port 80 accel defaultsite=example.com vhost
http_access allow wiki_sites


So far, so good.

Note:
 using defaultsite=example.com makes the 'broken' clients which do not
send the hostname properly use example.com, which does not match your
dstdomain ACL for wiki.example.com.


Result: clients which do not send wiki.example.com properly as the 
virtual domain name will not get to the wiki server.


Whether this is a good behaviour is up to you. Just be aware of it.


cache_peer wiki parent 80 0 no-query originserver forceddomain=wiki name=wiki


Mostly good.

Use forceddomain= only if the peer is slightly broken and requires all
traffic to arrive with that value as its public domain/host name.


Squid will prefer to send the public domain FQDN (in this case
wiki.example.com) to the peer so that it can easily and properly
generate public redirects, cookies, and page content URLs etc.




forwarded_for on


forwarded_for is not strictly relevant, but fine.


cache_peer_access wiki allow wiki_sites


Okay good.



Forwarding the request works as expected, but there is one problem:
server1 (the (t)wiki server) now adds a wrong base URL in the HTML
header:

<base href="http://wiki" />


Bingo. The wiki server is using what it sees as the public host/domain
name (the Host: header) to generate URLs. See above.




This doesn't happen with the apache delegator.


Apache is sending rather broken headers to the wiki server.
They look like this:

GET http://wiki/foo.html HTTP/1.1
Host: wiki.example.com
...


Whereas Squid is sending proper HTTP headers based on the URL (as
altered by forceddomain):


GET /foo.html HTTP/1.1
Host: wiki




So, finally my question: how is it possible to configure squid in a way
that the base URL is as it was before:
<base href="http://wiki.example.com" />
I need the URL from the outside (internet), not from the internal
(intranet).



With Squid you will get the same URLs publicly and internally. So 
traffic will hopefully all go through Squid where you can centralize a 
set of ACLs for the internal/external access if it actually matters.
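
Putting the advice above together, a minimal sketch of the accelerator
setup without forceddomain= (so the wiki server sees the public FQDN
and generates correct base URLs):

  http_port 80 accel defaultsite=wiki.example.com vhost
  acl wiki_sites dstdomain wiki.example.com
  cache_peer wiki parent 80 0 no-query originserver name=wiki
  cache_peer_access wiki allow wiki_sites
  http_access allow wiki_sites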


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.5


Re: [squid-users] RE: Reverse Proxy Log Analytics

2011-04-03 Thread Amos Jeffries

On 02/04/11 01:51, Justin Warner wrote:

Hello..

I’m trying to find a program that will give me better log analysis for a
reverse proxy (accelerator).  I’m thinking I’m going to end up having to
write my own script but wondered if there is anything out there before I do.

I’m looking to see how many actual hits there were, how many times the
request was passed to the real server, etc. The setup is 2 reverse
proxies round-robin'ing to 5 web servers.

Any help would be appreciated.

Justin.



Just about all the squid log analytics programs provide this info in 
some form.


What you need to look at is the HIT/MISS accumulations. The log
analyzers mostly break these down into particular sub-types of
HIT/MISS, easily located and reported on if you need.
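
As a quick sketch, assuming the default native access.log format where
field 4 holds the result code (e.g. TCP_MISS/200), a breakdown can be
pulled with:

  awk '{split($4, a, "/"); print a[1]}' access.log | sort | uniq -c | sort -rn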


I use an in-house system built around the DB logging facility, so I
can't really point you at specific reports or details on the other
popular analyzers. But worst-case you may end up using a DB with custom
database views like I have. Squid-3.2 bundles a database log helper
usable on 2.7 or 3.2; the custom bits are only needed for the display
queries.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.5


Re: [squid-users] Tuning for very expensive bandwidth links

2011-04-03 Thread Amos Jeffries

On 02/04/11 11:46, Ed W wrote:

Hi



So the remote (client) side proxy would need an eCAP plugin that would
modify the initial request to include an ETag.  This would require some
ability to interrogate what we have in cache and generate/request the
ETag associated with what we have already - do you have a pointer to any
API/code that I would need to look at to do this?


I'm unsure, sorry. Alex at The Measurement Factory has better info on
the specific details of what the eCAP API can do.


If I wanted to hack on Squid 3.2... Do you have a 60 second overview on
the code points to examine with a view to basically:

a) create an etag and insert the relevant header on any response content
(although, perhaps done only in the case that an etag is not provided by
upstream server)


StoreEntry would be the starting point. Currently everything goes 
through it. In future it will be just the cacheable stuff, but adding
an ETag to the bypassed things would be useless for caching anyway.


Other than that my knowledge of the store system is patchy. Alex and 
Henrik know a lot more about the inner cache workings than me.




b) add an etag header to requests (without one) - ie we are looking at
the case that client 2 requests content we have cached, but client 2
doesn't know that, only local squid does.


http.cc does all the outward request-relaying stuff. I believe the
if-modified-since requests Squid sends should have an ETag in them (if
one is known either from the client or from the local cache copy). If
you find otherwise, that is probably a bug worth fixing. Double-check
with RFC 2616 though.
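
For reference, a revalidation carrying an ETag looks like this on the
wire (a sketch; the tag value is made up):

  GET /foo.html HTTP/1.1
  Host: example.com
  If-None-Match: "abc123"
  If-Modified-Since: Sat, 02 Apr 2011 13:40:27 GMT

  HTTP/1.1 304 Not Modified
  ETag: "abc123"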


There is no way we can add an ETag to requests clients send before
looking up the local cache. The local cache lookup starts with the URL,
and whatever results that produces is then checked for a Vary: match.
 The ETag sent by the server might be worth adding as an implicit
prefix to the Vary: pieces, for matching against the ETag sent by the
client.
 BUT it is of little use until multiple-variant caching is ported to
3.x from 2.7.




Just looking for a quick heads up on where to start investigating?


mentioned above.




IIRC we have Dimitry with The Measurement Factory assisting with HTTP
compliance fixes. I'm sure sponsorship towards a specific fix will be
welcomed.


How do I get in contact with Dimitry?



Alex is his supervisor I think. rousskov at squid-cache.org.



It seems that at least parts of this might need to be done internally
to squid?

Just to be clear, the point is that few web servers generate useful
etags, and under the condition that bandwidth is the limiting constraint
(plus a hierarchy of proxies), then it might be useful to generate (and
later test) etags based on some consistent hash algorithm?



Yes. We came to that conclusion too.

You will find it a bit tricky (but mostly possible) to insert live, but 
getting it into the cached items should be relatively easy.
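
As a sketch of the idea, a consistent content-derived tag could be as
simple as a digest of the body bytes, computed identically at each
proxy (body.bin is a placeholder for the cached object body):

  ETAG=$(sha1sum body.bin | cut -d' ' -f1)
  printf 'ETag: "%s"\r\n' "$ETAG"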


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.5


[squid-users] experimenting 3.2.0.5 result improvement over 3.1.11

2011-04-03 Thread Eliezer Croitoru

I'm running tests on Debian 6, 32-bit and 64-bit.
I compared v3.1.11 vs 3.2.0.5, and 3.2 is giving much better response
times than the older versions.


I had a little problem building 3.2 and needed to use the
build-essential package (not using tproxy).


In order to install the build-essential package I needed to use
aptitude install build-essential
and then you get a nice menu that shows the problems and the ways to
resolve the issues.
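
For anyone following along, the build sequence on Debian 6 was roughly
(the install prefix is a placeholder):

  aptitude install build-essential
  ./configure --prefix=/usr/local/squid
  make
  make install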


Hope it will help someone

Regards
Eliezer


[squid-users] Cache Log Errors

2011-04-03 Thread squid
Good day,

Is there any way I can use to stop the Squid store rebuilding process?
Why is Squid rebuilding its store, and why does it take such a long
time to complete?

Why am I seeing some HTTP logging in the cache log instead of the
access log?

Why am I getting these errors (see below) in my Squid cache log?

What can I do to prevent these errors from occurring?

Regards,
Yomi



2011/03/31 15:02:01| unlinkdUnlink: write FD 10 failed: (10035)
WSAEWOULDBLOCK, Resource temporarily unavailable.
2011/03/31 15:02:01| unlinkdUnlink: write FD 10 failed: (10035)
WSAEWOULDBLOCK, Resource temporarily unavailable.
2011/03/31 15:02:01| unlinkdUnlink: write FD 10 failed: (10035)
WSAEWOULDBLOCK, Resource temporarily unavailable.
2011/03/31 15:02:01| unlinkdUnlink: write FD 10 failed: (10035)

2011/04/02 07:13:51| Store rebuilding is 13.9% complete
2011/04/02 07:14:06| Store rebuilding is 14.9% complete
2011/04/02 07:14:21| Store rebuilding is 16.0% complete
2011/04/02 07:14:26| WARNING: newer swaplog entry for dirno 0, fileno
002C3FA8
2011/04/02 07:14:26| WARNING: newer swaplog entry for dirno 0, fileno
002C3FA8
2011/04/02 07:14:26| WARNING: newer swaplog entry for dirno 0, fileno
002C3FA8
2011/04/02 07:14:26| WARNING: newer swaplog entry for dirno 0, fileno
002C3FA8

2011/04/01 17:14:44| Preparing for shutdown after 18038 requests
2011/04/01 17:14:44| Waiting 0 seconds for active connections to finish
2011/04/01 17:14:44| FD 21 Closing HTTP connection
2011/04/01 17:14:45| Shutting down...
2011/04/01 17:14:45| FD 22 Closing ICP connection
2011/04/01 17:14:45| FD 23 Closing HTCP socket
2011/04/01 17:14:45| FD 24 Closing SNMP socket
2011/04/01 17:14:45| WARNING: Closing client 192.168.137.111 connection
due to lifetime timeout
2011/04/01 17:14:45|

http://prod1.rest-notify.msg.yahoo.com/v1/pushchannel/michealomoniyi90?sid=oxsQqevs3BrUSPEmbgkZN8DUms6k5KS3kNONWA--c=t3i2XkVUiKvseq=5cb=ca7u4bqsformat=jsonidle=110cache=1301674432962
2011/04/01 17:14:45| WARNING: Closing client 192.168.137.112 connection
due to lifetime timeout
2011/04/01 17:14:45|

http://0.145.channel.facebook.com/x/1534213288/1667359870/false/p_11050471985=0
2011/04/01 17:14:45| WARNING: Closing client 192.168.137.110 connection
due to lifetime timeout
2011/04/01 17:14:45|

http://0.63.channel.facebook.com/x/2878475822/2523481944/false/p_12078361547=44
2011/04/01 17:14:45| WARNING: Closing client 192.168.137.111 connection
due to lifetime timeout
2011/04/01 17:14:45|

http://0.192.channel.facebook.com/x/461011288/1190138722/false/p_11927647801=1
2011/04/01 17:14:45| Closing unlinkd pipe on FD 10
2011/04/01 17:14:45| Not currently OK to rewrite swap log.
2011/04/01 17:14:45| storeDirWriteCleanLogs: Operation aborted.
CPU Usage: 7147.781 seconds = 1196.078 user + 5951.703 sys
Maximum Resident Size: 113604 KB
Page faults with physical i/o: 28549


[squid-users] Page faults

2011-04-03 Thread squid
Good day,
Can someone assist me with how to reduce my cache misses and how to
configure my page faults and RAM for best Squid performance? See below
for a summary of my cache activities.
Regards,
Yomi.


From your squidclient info I can see some bad things, like your cache
miss time (Cache Misses: 51.56862), which is extremely high.

Another thing is the Page faults with physical i/o: 25098. The lower
this number, the better performance you get... this indicates that you
are using too much RAM and your process is using the swap partition,
decreasing performance.



On Sat, Apr 2, 2011 at 6:20 PM, sq...@sourcesystemsonline.com wrote:

 Good day,
 After checking the activities of my squid-cache proxy, I discovered
that there are no activities on ICP and HTCP. Why?
 What is the implication of this?
 If they are important to squid performance, how can I implement them?
 Regards,
 Yomi


 Microsoft Windows [Version 6.1.7601]
 Copyright (c) 2009 Microsoft Corporation.  All rights reserved.

 C:\squid\bin>squidclient mgr:info
 HTTP/1.0 200 OK
 Server: squid/2.7.STABLE8
 Date: Sat, 02 Apr 2011 13:40:27 GMT
 Content-Type: text/plain
 Expires: Sat, 02 Apr 2011 13:40:27 GMT
 X-Cache: MISS from ADMIN
 X-Cache-Lookup: MISS from ADMIN:3128
 Via: 1.0 ADMIN:3128 (squid/2.7.STABLE8)
 Connection: close

 Squid Object Cache: Version 2.7.STABLE8

 Running as squid Windows System Service on Windows Vista
 Service command line is:
 Start Time: Sat, 02 Apr 2011 10:38:44 GMT
 Current Time:   Sat, 02 Apr 2011 13:40:27 GMT
 Connection information for squid:
Number of clients accessing cache:  7
Number of HTTP requests received:   15437
Number of ICP messages received:0
Number of ICP messages sent:0
Number of queued ICP replies:   0
Number of HTCP messages received:   0
Number of HTCP messages sent:   0
Request failure ratio:   0.00
Average HTTP requests per minute since start:   85.0
Average ICP messages per minute since start:0.0
Select loop called: 14516733 times, 0.751 ms avg
 Cache information for squid:
Request Hit Ratios: 5min: 5.6%, 60min: 13.6%
Byte Hit Ratios:5min: 47.5%, 60min: 11.2%
Request Memory Hit Ratios:  5min: 100.0%, 60min: 67.8%
Request Disk Hit Ratios:5min: 0.0%, 60min: 0.0%
Storage Swap size:  91072 KB
Storage Mem size:   54680 KB
Mean Object Size:   10.76 KB
Requests given to unlinkd:  0
 Median Service Times (seconds)     5 min     60 min:
HTTP Requests (All):   2.02066  1.05672
Cache Misses: 51.56862  1.24267
Cache Hits:0.0  0.0
Near Hits: 0.0  0.61549
Not-Modified Replies:  0.0  0.0
DNS Lookups:   0.09117  0.26619
ICP Queries:   0.0  0.0
 Resource usage for squid:
UP Time:10902.873 seconds
CPU Time:   10220.328 seconds
CPU Usage:  93.74%
CPU Usage, 5 minute avg:93.70%
CPU Usage, 60 minute avg:   94.06%
Maximum Resident Size: 91776 KB
Page faults with physical i/o: 25098
 Memory accounted for:
Total accounted:63592 KB
memPoolAlloc calls: 2900352005
memPoolFree calls: 2900133281
 File descriptor usage for squid:
Maximum number of file descriptors:   2048
Largest file desc currently in use:120
Number of file desc currently in use:   22
Files queued for open:   0
Available number of file descriptors: 2026
Reserved number of file descriptors:   100
Store Disk files open:   0
IO loop method: select
 Internal Data Structures:
 20387 StoreEntries
  6133 StoreEntries with MemObjects
  6131 Hot Object Cache Items
  8461 on-disk objects


Re: [squid-users] tproxy + tcp_outgoing_address

2011-04-03 Thread Amos Jeffries

On 03/04/11 18:17, cyt...@pop.com.br wrote:

Hi!

I have been using tproxy on my Squid server for a long time, but now I
need to redirect some traffic to another link for a selected list of
URLs, using an ACL in squid.conf.

I set tcp_outgoing_address but it doesn't work; the traffic goes out
via the default route.
If I replace tproxy on http_port with transparent,
tcp_outgoing_address works fine.


The old flag 'transparent' means, and for backward compatibility will
continue to mean, only NAT interception. It is deprecated so that some
day we can make it mean real HTTP transparency.


 Use 'tproxy' for a TPROXY transparent proxy, or 'intercept' for a NAT
intercepting proxy.
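
In 3.1/3.2 squid.conf terms the modes look like this (the port numbers
are examples):

  http_port 3128              # plain forward proxy
  http_port 3129 tproxy       # TPROXY spoofing (IP-level transparency)
  http_port 3130 intercept    # NAT interception (what 'transparent' meant)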




Summary:

without tproxy = tcp_outgoing_address works fine!

with tproxy = tcp_outgoing_address is ignored

What is this? A bug? A missing feature? A misconfiguration?


Design. TPROXY means spoofing the source address. Traffic entering the
Squid box is identical to traffic leaving it; it is transparent at the
IP protocol level.

You must do any NAT manipulation on the IP outside of Squid. When
using TPROXY you can only MARK or set the TOS on the packets as they
leave.
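
One possible approach, sketched (the ACL, TOS value and routing table
name 'link2' are placeholders; Squid only tags the packets, the
routing decision is made outside it):

  # squid.conf: tag traffic for the selected sites
  acl special_sites dstdomain "/etc/squid/link2-domains.txt"
  tcp_outgoing_tos 0x20 special_sites

  # on the OS: convert the TOS into a routing mark and policy-route it
  iptables -t mangle -A OUTPUT -p tcp -m tos --tos 0x20 -j MARK --set-mark 2
  ip rule add fwmark 2 table link2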


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.5


Re: [squid-users] Cache Log Errors

2011-04-03 Thread Amos Jeffries

On 03/04/11 19:55, sq...@sourcesystemsonline.com wrote:

Good day,

Is there any way I can use to stop the Squid store rebuilding process?
Why is Squid rebuilding its store, and why does it take such a long
time to complete?

Why am I seeing some HTTP logging in the cache log instead of the
access log?


"http log"? cache.log displays details about the problems seen by
Squid. Being an HTTP proxy, almost all of the problems are somehow
related to HTTP.




Why am I getting these errors (see below) in my Squid cache log?

What can I do to prevent these errors from occurring?

Regards,
Yomi



2011/03/31 15:02:01| unlinkdUnlink: write FD 10 failed: (10035)
WSAEWOULDBLOCK, Resource temporarily unavailable.
2011/03/31 15:02:01| unlinkdUnlink: write FD 10 failed: (10035)
WSAEWOULDBLOCK, Resource temporarily unavailable.
2011/03/31 15:02:01| unlinkdUnlink: write FD 10 failed: (10035)
WSAEWOULDBLOCK, Resource temporarily unavailable.
2011/03/31 15:02:01| unlinkdUnlink: write FD 10 failed: (10035)



... Resource temporarily unavailable.

unlinkd is the cache file eraser. It looks like a file delete failed
due to something moving or double-deleting the file, or locking disk
access away from Squid's eraser.




2011/04/02 07:13:51| Store rebuilding is 13.9% complete
2011/04/02 07:14:06| Store rebuilding is 14.9% complete
2011/04/02 07:14:21| Store rebuilding is 16.0% complete
2011/04/02 07:14:26| WARNING: newer swaplog entry for dirno 0, fileno
002C3FA8
2011/04/02 07:14:26| WARNING: newer swaplog entry for dirno 0, fileno
002C3FA8
2011/04/02 07:14:26| WARNING: newer swaplog entry for dirno 0, fileno
002C3FA8
2011/04/02 07:14:26| WARNING: newer swaplog entry for dirno 0, fileno
002C3FA8


The Squid meta-data about the file 0/002C3FA8 in the cache is different
from the file actually in the cache. Somebody has tampered with the
file, or the disk is getting corrupted.
 When store problems are found, Squid drops/erases the cache entry for
safety.




2011/04/01 17:14:44| Preparing for shutdown after 18038 requests
2011/04/01 17:14:44| Waiting 0 seconds for active connections to finish
2011/04/01 17:14:44| FD 21 Closing HTTP connection
2011/04/01 17:14:45| Shutting down...
2011/04/01 17:14:45| FD 22 Closing ICP connection
2011/04/01 17:14:45| FD 23 Closing HTCP socket
2011/04/01 17:14:45| FD 24 Closing SNMP socket
2011/04/01 17:14:45| WARNING: Closing client 192.168.137.111 connection
due to lifetime timeout
2011/04/01 17:14:45|
 
http://prod1.rest-notify.msg.yahoo.com/v1/pushchannel/michealomoniyi90?sid=oxsQqevs3BrUSPEmbgkZN8DUms6k5KS3kNONWA--c=t3i2XkVUiKvseq=5cb=ca7u4bqsformat=jsonidle=110cache=1301674432962
2011/04/01 17:14:45| WARNING: Closing client 192.168.137.112 connection
due to lifetime timeout
2011/04/01 17:14:45|
 
http://0.145.channel.facebook.com/x/1534213288/1667359870/false/p_11050471985=0
2011/04/01 17:14:45| WARNING: Closing client 192.168.137.110 connection
due to lifetime timeout
2011/04/01 17:14:45|
 
http://0.63.channel.facebook.com/x/2878475822/2523481944/false/p_12078361547=44
2011/04/01 17:14:45| WARNING: Closing client 192.168.137.111 connection
due to lifetime timeout
2011/04/01 17:14:45|
 
http://0.192.channel.facebook.com/x/461011288/1190138722/false/p_11927647801=1
2011/04/01 17:14:45| Closing unlinkd pipe on FD 10


So far a normal shutdown process. The WARNINGs are not errors, just
important information that some client connections are being killed
while still in use.
 channel.facebook apparently uses an HTTP technique they call
long-polling to keep connections open for hours or days. These can be
ignored. I'm not sure about the Yahoo URL.



2011/04/01 17:14:45| Not currently OK to rewrite swap log.
2011/04/01 17:14:45| storeDirWriteCleanLogs: Operation aborted.


Hmm, that may be a problem. It could be related to, or share the same
cause as, the unlinkd problems. Something is preventing Squid from
saving its meta data to disk.

 This will result in your cache rebuilding slowly on the next startup.


CPU Usage: 7147.781 seconds = 1196.078 user + 5951.703 sys
Maximum Resident Size: 113604 KB
Page faults with physical i/o: 28549


Here is some useful information about how much RAM your Squid needed
to run and how much swapping was done.
 You can use the Resident Size to check that you have enough RAM
allocated for Squid to use. That is how much it needed to use since
the last restart.


 The Page faults line indicates that there was not enough RAM
available (113604 KB was needed) and swapping was done 28,549 times.
0 is best.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.5


Re: [squid-users] Cache Log Errors

2011-04-03 Thread squid
Good day,
I have 4GB of RAM installed in my squid server.
After increasing the RAM maximum resident size due to high page faults
and reconfiguring squid, a "WARNING: Very large
maximum_object_size_in_memory settings can have negative impact on
performance" was displayed.

What is the implication of this warning? Any danger?
See below for more information and excerpts from my squid.conf
Regards,
Yomi.


C:\squid\sbin>squid -n squid -k reconfigure
2011/04/03 17:46:15| WARNING: Very large maximum_object_size_in_memory
settings can have negative impact on performance

Status of squid Service:
  Service Type: 0x10
  Current State: 0x4
  Controls Accepted: 0x5
  Exit Code: 0
  Service Specific Exit Code: 0
  Check Point: 0
  Wait Hint: 0

# MEMORY CACHE OPTIONS
# -----------------------------------------------------------------------------
#Default:
cache_mem 1024 MB

#Default:
maximum_object_size_in_memory 131072 KB

#Default:
# memory_replacement_policy lru

# DISK CACHE OPTIONS
# -----------------------------------------------------------------------------
#Default:
# cache_replacement_policy lru

#Default:
cache_dir ufs c:/squid/var/cache 40960 128 512
cache_dir ufs d:/squid/var/cache 20480 128 512
cache_dir ufs e:/squid/var/cache 5120 128 512
cache_dir ufs f:/squid/var/cache 20480 128 512

#Default:
# store_dir_select_algorithm least-load

#Default:
# max_open_disk_fds 0

#Default:
# minimum_object_size 0 KB

#Default:
maximum_object_size 204800 KB

#Default:
# cache_swap_low 90
# cache_swap_high 95

#Default:
# update_headers on





 On 03/04/11 19:55, sq...@sourcesystemsonline.com wrote:
 Good day,

 Is there any way I can use to stop the Squid store rebuilding
 process? Why is Squid rebuilding its store, and why does it take such
 a long time to complete?

 Why am I seeing some HTTP logging in the cache log instead of the
 access log?

 "http log"? cache.log displays details about the problems seen by
 Squid. Being an HTTP proxy, almost all of the problems are somehow
 related to HTTP.


 Why am I getting these errors (see below) in my Squid cache log?

 What can I do to prevent these errors from occurring?

 Regards,
 Yomi



 2011/03/31 15:02:01| unlinkdUnlink: write FD 10 failed: (10035)
 WSAEWOULDBLOCK, Resource temporarily unavailable.
 2011/03/31 15:02:01| unlinkdUnlink: write FD 10 failed: (10035)
 WSAEWOULDBLOCK, Resource temporarily unavailable.
 2011/03/31 15:02:01| unlinkdUnlink: write FD 10 failed: (10035)
 WSAEWOULDBLOCK, Resource temporarily unavailable.
 2011/03/31 15:02:01| unlinkdUnlink: write FD 10 failed: (10035)


 ... Resource temporarily unavailable.

 unlinkd is the cache file eraser. It looks like a file delete failed
 due to something moving or double-deleting the file, or locking disk
 access away from Squid's eraser.


 2011/04/02 07:13:51| Store rebuilding is 13.9% complete
 2011/04/02 07:14:06| Store rebuilding is 14.9% complete
 2011/04/02 07:14:21| Store rebuilding is 16.0% complete
 2011/04/02 07:14:26| WARNING: newer swaplog entry for dirno 0, fileno
 002C3FA8
 2011/04/02 07:14:26| WARNING: newer swaplog entry for dirno 0, fileno
 002C3FA8
 2011/04/02 07:14:26| WARNING: newer swaplog entry for dirno 0, fileno
 002C3FA8
 2011/04/02 07:14:26| WARNING: newer swaplog entry for dirno 0, fileno
 002C3FA8

 The Squid meta-data about the file 0/002C3FA8 in the cache is
 different from the file actually in the cache. Somebody has tampered
 with the file, or the disk is getting corrupted.
   When store problems are found, Squid drops/erases the cache entry
 for safety.


 2011/04/01 17:14:44| Preparing for shutdown after 18038 requests
 2011/04/01 17:14:44| Waiting 0 seconds for active connections to finish
 2011/04/01 17:14:44| FD 21 Closing HTTP connection
 2011/04/01 17:14:45| Shutting down...
 2011/04/01 17:14:45| FD 22 Closing ICP connection
 2011/04/01 17:14:45| FD 23 Closing HTCP socket
 2011/04/01 17:14:45| FD 24 Closing SNMP socket
 2011/04/01 17:14:45| WARNING: Closing client 192.168.137.111 connection
 due to lifetime timeout
 2011/04/01 17:14:45|
  
 http://prod1.rest-notify.msg.yahoo.com/v1/pushchannel/michealomoniyi90?sid=oxsQqevs3BrUSPEmbgkZN8DUms6k5KS3kNONWA--c=t3i2XkVUiKvseq=5cb=ca7u4bqsformat=jsonidle=110cache=1301674432962
 2011/04/01 17:14:45| WARNING: Closing client 192.168.137.112 connection
 due to lifetime timeout
 2011/04/01 17:14:45|
  
 http://0.145.channel.facebook.com/x/1534213288/1667359870/false/p_11050471985=0
 2011/04/01 17:14:45| WARNING: Closing client 192.168.137.110 connection
 due to lifetime timeout
 2011/04/01 17:14:45|
  
 http://0.63.channel.facebook.com/x/2878475822/2523481944/false/p_12078361547=44
 2011/04/01 17:14:45| WARNING: Closing client 192.168.137.111 connection
 due to lifetime timeout
 2011/04/01 17:14:45|
  
 http://0.192.channel.facebook.com/x/461011288/1190138722/false/p_11927647801=1
 2011/04/01 17:14:45| Closing unlinkd pipe on FD 10

 So far a normal shutdown process. The WARNINGs are not errors, just
 important information that some client connections are being killed
 while still in use.

Re: [squid-users] Cache Log Errors

2011-04-03 Thread Amos Jeffries

On Sun, 3 Apr 2011 13:41:22 -0400, sq...@sourcesystemsonline.com wrote:

Good day,
I have 4GB of RAM installed in my squid server.
After increasing the RAM maximum resident size due to high page faults
and reconfiguring squid, a "WARNING: Very large
maximum_object_size_in_memory settings can have negative impact on
performance" was displayed.


No, no. That setting does not affect the problem like that.

Resident size is not something Squid can easily affect directly.
Have a read through http://wiki.squid-cache.org/SquidFaq/SquidMemory to 
learn how Squid uses memory and what things can be adjusted to affect 
that.


You need to ensure that when squid is not running the operating system 
says free available memory is a bigger number than the Squid maximum 
resident size. And that when Squid is running the amount of virtual or 
swap memory reported by the operating system is zero.
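
On Windows, a quick sketch of that check:

  C:\> tasklist /fi "imagename eq squid.exe"

then compare the reported Mem Usage against the Available memory shown
on the Task Manager Performance tab while Squid is under load.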


More on that below. But please read that wiki page before continuing, 
the answers below will make a lot more sense when you know the 
background ideas.




What is the implication of this warning? Any danger?


The setting you changed is the limit on *individual* objects stored in 
memory. The problems referred to are the swapping ones you are already 
seeing before the change. The change may make them randomly even worse 
than before.



See below for more information and excerpts from my squid.conf
Regards,
Yomi.


C:\squid\sbin>squid -n squid -k reconfigure
2011/04/03 17:46:15| WARNING: Very large maximum_object_size_in_memory
settings can have negative impact on performance

Status of squid Service:
  Service Type: 0x10
  Current State: 0x4
  Controls Accepted: 0x5
  Exit Code: 0
  Service Specific Exit Code: 0
  Check Point: 0
  Wait Hint: 0

# MEMORY CACHE OPTIONS

# -----------------------------------------------------------------------------
#Default:
cache_mem 1024 MB


Hmm, 4GB of RAM on the system and you are dedicating 25% of it to a RAM 
cache for Squid.




#Default:
maximum_object_size_in_memory 131072 KB


This can at most be set to the same as cache_mem. Though generally you 
want many HTTP objects in the RAM cache.
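
For example, a sketch of a more conventional pairing (the 2.7 default
for this directive is far smaller than the 131072 KB configured above):

  cache_mem 1024 MB
  maximum_object_size_in_memory 512 KB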




#Default:
# memory_replacement_policy lru

# DISK CACHE OPTIONS
# -----------------------------------------------------------------------------

#Default:
# cache_replacement_policy lru

#Default:
cache_dir ufs c:/squid/var/cache 40960 128 512
cache_dir ufs d:/squid/var/cache 20480 128 512
cache_dir ufs e:/squid/var/cache 5120 128 512
cache_dir ufs f:/squid/var/cache 20480 128 512


Using the rule-of-thumb estimate of ~10MB of index memory per GB of
cache, those dirs need about 880 MB of memory for their indexes. Plus
the cache_mem RAM cache. That gives up to 2GB of RAM consumed by Squid
before any clients start connecting. More for traffic handling.
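
The arithmetic behind that estimate:

  40960 + 20480 + 5120 + 20480 = 87040 MB (~85 GB) of cache_dir
  ~85 GB x ~10 MB of index RAM per GB  = roughly 850-880 MB
  index + cache_mem 1024 MB            = ~1.9 GB before traffic handling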



Also, I believe the aufs scheme uses Windows disk I/O threading. You
could possibly avoid some of the disk speed problems by changing those
cache_dir types to aufs (just a Squid restart is needed to switch).
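
For example, the first dir would become:

  cache_dir aufs c:/squid/var/cache 40960 128 512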


Also check that those sizes leave at least 10% of the disk space free
on each disk for the cache swap.state journals. If a disk is filled,
that would leave Squid unable to save swap.state, which is what your
shutdown log shows.


Also check that the disks are not being put into any kind of standby or 
hibernate mode underneath Squid. I suspect that would lead to the 
resource unavailable messages your logs show.




#Default:
# cache_swap_low 90
# cache_swap_high 95


With caches of 20GB I would change that low-threshold 90 to 94. That
minimizes the period during which the background garbage collection
can drain speed.
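
That is (a sketch):

  cache_swap_low 94
  cache_swap_high 95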



There is nothing there to indicate a box with 4GB RAM would swap badly. 
So I conclude there must be other software hogging memory and reducing 
the amount available to Squid. Removing that other software would be a 
good thing for performance.


Amos


[squid-users] squid-3.HEAD-BZR failed to access https://mail.google.com (fwd)

2011-04-03 Thread Jeff Chua


Recent squid-3.HEAD-BZR failed to access https://mail.google.com, but
was OK prior to March 19. Attached is the cache.log, with this line in
particular ...


assertion failed: comm.cc:216: fd_table[fd].halfClosedReader != NULL


Thanks,
Jeff


2011/04/04 09:55:26.234 kid1| IoCallback.cc(120) will call 
ConnStateData::clientReadRequest(FD 11, data=0x23e0548, size=183, 
buf=0x238b410) [call19]
2011/04/04 09:55:26.234 kid1| entering ConnStateData::clientReadRequest(FD 11, 
data=0x23e0548, size=183, buf=0x238b410)
2011/04/04 09:55:26.234 kid1| AsyncCall.cc(32) make: make call 
ConnStateData::clientReadRequest [call19]

2011/04/04 09:55:26.234 kid1| cbdataReferenceValid: 0x23e0548
2011/04/04 09:55:26.234 kid1| cbdataReferenceValid: 0x23e0548
2011/04/04 09:55:26.234 kid1| cbdataReferenceValid: 0x23e0548
2011/04/04 09:55:26.234 kid1| cbdataReferenceValid: 0x23e0548
2011/04/04 09:55:26.234 kid1| ConnStateData status in: [ job2]
2011/04/04 09:55:26.234 kid1| cbdataReferenceValid: 0x23e0548
2011/04/04 09:55:26.234 kid1| client_side.cc(2765) clientReadRequest: 
clientReadRequest FD 11 size 183
2011/04/04 09:55:26.234 kid1| client_side.cc(2705) clientParseRequest: FD 11: 
attempting to parse
2011/04/04 09:55:26.234 kid1| httpParseInit: Request buffer is CONNECT 
mail.google.com:443 HTTP/1.1
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:2.2a1pre) Gecko/20110402 
Firefox/4.2a1pre

Proxy-Connection: keep-alive
Host: mail.google.com


2011/04/04 09:55:26.234 kid1| HttpMsg.cc(458) parseRequestFirstLine: parsing 
possible request: CONNECT mail.google.com:443 HTTP/1.1
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:2.2a1pre) Gecko/20110402 
Firefox/4.2a1pre

Proxy-Connection: keep-alive
Host: mail.google.com


2011/04/04 09:55:26.234 kid1| Parser: retval 1: from 0-37: method 0-6; url 
8-26; version 28-35 (1/1)
2011/04/04 09:55:26.234 kid1| parseHttpRequest: req_hdr = {User-Agent: 
Mozilla/5.0 (X11; Linux x86_64; rv:2.2a1pre) Gecko/20110402 Firefox/4.2a1pre

Proxy-Connection: keep-alive
Host: mail.google.com

}
2011/04/04 09:55:26.234 kid1| parseHttpRequest: end = {
}
2011/04/04 09:55:26.234 kid1| parseHttpRequest: prefix_sz = 183, req_line_sz = 
38

2011/04/04 09:55:26.234 kid1| cbdataLock: 0x23e0548=7
2011/04/04 09:55:26.234 kid1| cbdataLock: 0x23e0a78=1
2011/04/04 09:55:26.234 kid1| cbdataLock: 0x22946d8=1
2011/04/04 09:55:26.234 kid1| clientStreamInsertHead: Inserted node 0x23e3518 
with data 0x23e2128 after head

2011/04/04 09:55:26.234 kid1| cbdataLock: 0x23e3518=1
2011/04/04 09:55:26.234 kid1| parseHttpRequest: Request Header is
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:2.2a1pre) Gecko/20110402 
Firefox/4.2a1pre

Proxy-Connection: keep-alive
Host: mail.google.com


2011/04/04 09:55:26.234 kid1| parseHttpRequest: Complete request received
2011/04/04 09:55:26.234 kid1| client_side.cc(2743) clientParseRequest: FD 11: 
parsed a request

2011/04/04 09:55:26.234 kid1| comm.cc(1116) commSetTimeout: FD 11 timeout 86400
2011/04/04 09:55:26.234 kid1| cbdataLock: 0x23e0a78=2
2011/04/04 09:55:26.234 kid1| cbdataLock: 0x23e0a78=3
2011/04/04 09:55:26.234 kid1| The AsyncCall SomeTimeoutHandler constructed, 
this=0x21297a0 [call20]

2011/04/04 09:55:26.234 kid1| cbdataLock: 0x23e0a78=4
2011/04/04 09:55:26.234 kid1| cbdataUnlock: 0x23e0a78=3
2011/04/04 09:55:26.234 kid1| cbdataUnlock: 0x23e0a78=2
2011/04/04 09:55:26.234 kid1| comm.cc(1127) commSetTimeout: FD 11 timeout 86400
2011/04/04 09:55:26.234 kid1| cbdataUnlock: 0x23e0548=6
2011/04/04 09:55:26.234 kid1| cbdataUnlock: 0x23e0548=5
2011/04/04 09:55:26.234 kid1| cbdataReferenceValid: 0x23e0548
2011/04/04 09:55:26.234 kid1| cbdataReferenceValid: 0x23e0548
2011/04/04 09:55:26.234 kid1| urlParse: Split URL 'mail.google.com:443' into 
proto='', host='mail.google.com', port='443', path=''

2011/04/04 09:55:26.234 kid1| init-ing hdr: 0x2388740 owner: 2
2011/04/04 09:55:26.234 kid1| HttpRequest.cc(59) HttpRequest: constructed, 
this=0x2388730 id=54
2011/04/04 09:55:26.234 kid1| Address.cc(409) LookupHostIP: Given Non-IP 
'mail.google.com': Name or service not known

2011/04/04 09:55:26.234 kid1| parsing hdr: (0x2388740)
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:2.2a1pre) Gecko/20110402 
Firefox/4.2a1pre

Proxy-Connection: keep-alive
Host: mail.google.com

2011/04/04 09:55:26.234 kid1| parsing HttpHeaderEntry: near 'User-Agent: 
Mozilla/5.0 (X11; Linux x86_64; rv:2.2a1pre) Gecko/20110402 Firefox/4.2a1pre'
2011/04/04 09:55:26.234 kid1| parsed HttpHeaderEntry: 'User-Agent: Mozilla/5.0 
(X11; Linux x86_64; rv:2.2a1pre) Gecko/20110402 Firefox/4.2a1pre'
2011/04/04 09:55:26.234 kid1| created HttpHeaderEntry 0x2295990: 'User-Agent : 
Mozilla/5.0 (X11; Linux x86_64; rv:2.2a1pre) Gecko/20110402 Firefox/4.2a1pre

2011/04/04 09:55:26.234 kid1| 0x2388740 adding entry: 58 at 0
2011/04/04 09:55:26.234 kid1| parsing HttpHeaderEntry: near 'Proxy-Connection: 
keep-alive'
2011/04/04 09:55:26.234 kid1| parsed HttpHeaderEntry: 'Proxy-Connection: 
keep-alive'
2011/04/04 09:55:26.234 kid1| created 

Re: [squid-users] squid 3.2.0.5 smp scaling issues

2011-04-03 Thread Amos Jeffries

On 03/04/11 12:52, da...@lang.hm wrote:

still no response from anyone.

Is there any interest in investigating this issue? Or should I just
write off squid for future use due to its performance degrading?


It is a very ambiguous issue:
 * We have your report with some nice rate benchmarks indicating regression
 * We have two others saying me-too with less details
 * We have an independent report indicating that 3.1 is faster than 
2.7. With benchmarks to prove it.
 * We have several independent reports indicating that 3.2 is faster 
than 3.1. One like yours with benchmark proof.
 * We have someone responding to your report saying the CPU type 
affects things in a large way (likely due to SMP using CPU-level features)
 * We have our own internal testing which shows also a mix of results 
with the variance being dependent on which component of Squid is tested.


Your test in particular is testing both the large-object pass-thru
(proxy-only) capacity and the parser CPU ceiling.


Could you try your test on 3.2.0.6 and 3.1.12 please? They both now have 
a server-facing buffer change which should directly affect your test 
results in a good way.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.6