[squid-users] Re: strange squid 2.6S1 behavior

2006-07-24 Thread tino

Hi,
Sorry, here is my message log (I had turned off syslog before):

Jul 24 15:38:32 tproxy (squid): xstrdup: tried to dup a NULL pointer!
Jul 24 15:38:33 tproxy squid[2049]: Squid Parent: child process 2051 exited 
due to signal 6


I thought this was the bug already reported against Squid-2.6.PRE1:
http://www.squid-cache.org/bugs/show_bug.cgi?id=1589

Which patch should I apply? I'm running 2.6.STABLE1 with WCCPv2 + cttproxy.

regards,
Tino
- Original Message - 
From: tino

To: squid-users@squid-cache.org
Sent: Monday, July 24, 2006 2:29 PM
Subject: strange squid 2.6S1 behavior




hi,

I noticed something strange: suddenly the cache hit ratio drops to zero for a
couple of seconds and then is OK again.


Cache information for squid:
Request Hit Ratios: 5min: 0.0%, 60min: 0.0%
Byte Hit Ratios: 5min: -0.0%, 60min: -0.0%
Request Memory Hit Ratios: 5min: 0.0%, 60min: 0.0%
Request Disk Hit Ratios: 5min: 0.0%, 60min: 0.0%


I am using WCCPv2.
When this happens, WCCP is still up and redirecting packets, and access.log is
still actively logging client responses.

There are no errors in /var/log/messages or cache.log.

Has anyone experienced the same problem?

regards,
Tino 



[squid-users] cache.log : Unsupported method 'REGISTER'

2006-07-24 Thread Sekar

Hello All,

I have configured squid-2.6.STABLE1 as a transparent proxy on my Linux
machine. For the past two weeks it was running without errors, but today it
stopped.

My cache.log says: Unsupported method 'REGISTER'.

My relevant squid.conf is as follows:

http_port  transparent
http_access allow all
always_direct allow all

The following entries appear in my cache.log:

2006/07/21 09:21:04| CACHEMGR: @127.0.0.1 requesting 'menu'
2006/07/21 09:21:08| CACHEMGR: @127.0.0.1 requesting 'vm_objects'
2006/07/21 10:39:04| clientProcessRequest2: ETag loop
2006/07/21 10:39:17| clientProcessRequest2: ETag loop
2006/07/21 11:24:21| urlParse: URL too large (4403 bytes)
2006/07/21 11:54:29| clientProcessRequest2: ETag loop
2006/07/21 12:20:44| clientProcessRequest2: ETag loop
2006/07/21 16:59:00| parseHttpRequest: NF getsockopt(SO_ORIGINAL_DST) 
failed: (2) No such file or directory

2006/07/21 18:55:11| parseHttpRequest: Unsupported method 'REGISTER'
2006/07/21 18:55:11| clientReadRequest: FD 16 Invalid Request
2006/07/21 18:57:36| parseHttpRequest: Unsupported method 'REGISTER'
2006/07/21 18:57:36| clientReadRequest: FD 15 Invalid Request
2006/07/21 19:00:36| parseHttpRequest: Unsupported method 'REGISTER'
2006/07/21 19:00:36| clientReadRequest: FD 15 Invalid Request
2006/07/21 19:04:13| parseHttpRequest: Unsupported method 'REGISTER'
2006/07/21 19:04:13| clientReadRequest: FD 13 Invalid Request
2006/07/21 19:08:26| parseHttpRequest: Unsupported method 'REGISTER'
2006/07/21 19:08:26| clientReadRequest: FD 14 Invalid Request
2006/07/21 19:13:15| parseHttpRequest: Unsupported method 'REGISTER'
2006/07/21 19:13:15| clientReadRequest: FD 13 Invalid Request
2006/07/21 19:18:39| parseHttpRequest: Unsupported method 'REGISTER'
2006/07/21 19:18:39| clientReadRequest: FD 13 Invalid Request
2006/07/21 19:24:40| parseHttpRequest: Unsupported method 'REGISTER'
2006/07/21 19:24:40| clientReadRequest: FD 13 Invalid Request
2006/07/21 19:31:16| parseHttpRequest: Unsupported method 'REGISTER'
2006/07/21 19:31:16| clientReadRequest: FD 13 Invalid Request
2006/07/21 19:38:29| parseHttpRequest: Unsupported method 'REGISTER'
2006/07/21 19:38:29| clientReadRequest: FD 13 Invalid Request
2006/07/21 19:46:28| parseHttpRequest: Unsupported method 'REGISTER'
2006/07/21 19:46:28| clientReadRequest: FD 15 Invalid Request
2006/07/21 19:54:21| parseHttpRequest: Unsupported method 'REGISTER'
2006/07/21 19:54:21| clientReadRequest: FD 13 Invalid Request
2006/07/24 09:46:25| clientProcessRequest2: ETag loop
2006/07/24 09:46:27| clientProcessRequest2: ETag loop
2006/07/24 09:52:29| clientProcessRequest2: ETag loop
2006/07/24 09:52:30| clientProcessRequest2: ETag loop
2006/07/24 09:52:45| clientProcessRequest2: ETag loop
2006/07/24 14:41:45| parseHttpRequest: NF getsockopt(SO_ORIGINAL_DST) 
failed: (2) No such file or directory

FATAL: xstrdup: tried to dup a NULL pointer!

What may be the reason?

Thanks in advance,
Sekar.D



[squid-users] Heavy mail attachments hotmail

2006-07-24 Thread Santosh Rani

How can I tame heavy mail attachments downloaded through Hotmail? Today one
user downloaded six BMP images totalling 218 MB.
Please advise.
Regards


[squid-users] Throughput slower when data is in cache instead of getting it from the webserver

2006-07-24 Thread Dieter Bloms
Hi,

we use squid as shipped with SuSE Linux Enterprise 9
(squid-2.5.STABLE5-42.41).

Throughput is slower when I get the data from the cache, and faster when I use
the -r option to fetch the data from the webserver.

--snip--
squid06:~ # time squidclient -p 8080 -r http://mueller.datevnet.de/Richter.pdf >/dev/null

real    0m0.702s
user    0m0.004s
sys     0m0.015s
squid06:~ # time squidclient -p 8080 http://mueller.datevnet.de/Richter.pdf >/dev/null

real    0m6.085s
user    0m0.001s
sys     0m0.002s
squid06:~ # time squidclient -p 8080 -r http://mueller.datevnet.de/Richter.pdf >/dev/null

real    0m0.881s
user    0m0.003s
sys     0m0.012s
squid06:~ # time squidclient -p 8080 http://mueller.datevnet.de/Richter.pdf >/dev/null

real    0m5.785s
user    0m0.001s
sys     0m0.003s
squid06:~ # time squidclient -p 8080 -r http://mueller.datevnet.de/Richter.pdf >/dev/null

real    0m0.742s
user    0m0.005s
sys     0m0.012s
squid06:~ # time squidclient -p 8080 http://mueller.datevnet.de/Richter.pdf >/dev/null

real    0m5.890s
user    0m0.001s
sys     0m0.002s
squid06:~ #
--snip--

Here are the logfile entries:

--snip--
1153747089.142    698 127.0.0.1 TCP_CLIENT_REFRESH_MISS/200 2274238 GET http://mueller.datevnet.de/Richter.pdf - FIRST_UP_PARENT/127.0.0.1 application/pdf
1153747098.419   6081 127.0.0.1 TCP_HIT/200 2274245 GET http://mueller.datevnet.de/Richter.pdf - NONE/- application/pdf
1153747102.320    878 127.0.0.1 TCP_CLIENT_REFRESH_MISS/200 2274238 GET http://mueller.datevnet.de/Richter.pdf - FIRST_UP_PARENT/127.0.0.1 application/pdf
1153747109.546   5782 127.0.0.1 TCP_HIT/200 2274245 GET http://mueller.datevnet.de/Richter.pdf - NONE/- application/pdf
1153747111.639    739 127.0.0.1 TCP_CLIENT_REFRESH_MISS/200 2274238 GET http://mueller.datevnet.de/Richter.pdf - FIRST_UP_PARENT/127.0.0.1 application/pdf
1153747118.797   5887 127.0.0.1 TCP_HIT/200 2274245 GET http://mueller.datevnet.de/Richter.pdf - NONE/- application/pdf
--snip--

This is my squid.conf

--snip--
squid06:~ # grep -v "^#" /etc/squid/squid.conf | grep -v "^$"
http_port 10.252.104.20:8080
http_port 10.252.104.80:8080
http_port 127.0.0.1:8080
icp_port 0
cache_peer 127.0.0.1 parent 8280 0 no-query no-digest no-netdb-exchange
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
cache_mem 512 MB
maximum_object_size 20480 KB
cache_dir diskd /var/cache/squid 15360 16 256
cache_store_log none
ftp_user datevnet@
ftp_list_width 50
auth_param basic children 15
auth_param basic realm DATEVnet Proxy-Server
auth_param basic credentialsttl 2 hours
auth_param basic program /usr/sbin/squid_ldap_auth -b
ou=Accounts,dc=datevnet,dc=de -R -f (&(uid=%s)(dvPerm=WEB)) -s sub -h
haldap.services.datevnet.de -p 389 -D
cn=admin,ou=proxy,ou=systems,dc=datevnet,dc=de -w proxy
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .   0   20% 4320
half_closed_clients off
shutdown_lifetime 5 seconds
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl Admin_ports port 1812 8445  # Viruswall-GUI
acl PURGE method PURGE
acl snmpread snmp_community public
acl mrtg_host src 10.252.0.88/255.255.255.248
acl adminclients src 10.252.16.0/255.255.255.0
acl adminusers proxy_auth A0330020
acl password proxy_auth REQUIRED
acl transon-server dst 212.114.203.97/255.255.255.255
acl frustcenter dst 193.27.49.0/255.255.255.0
acl www_crl_esecure_de dst 193.27.50.195/255.255.255.255
acl nocacheservers dst 193.27.50.178/32 193.27.50.179/32 193.27.50.135/32 193.27.50.137/32
acl blockhostsip dst "/etc/squid/blockhosts.ip"
acl blockhostsdomain dstdomain "/etc/squid/blockhosts.domain"
acl cdbservers url_regex "/etc/squid/squid.cdbservers"
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny Admin_ports
http_access allow PURGE localhost
http_access deny PURGE
http_access allow localhost
http_access deny blockhostsip
http_access deny blockhostsdomain
http_access allow transon-server
http_access allow frustcenter
http_access allow www_crl_esecure_de
http_access allow cdbservers
http_access allow password
http_access allow adminusers
http_access deny all
http_reply_access allow all
icp_access deny all
cache_effective_user squid
visible_hostname squid06.services.datevnet.de
unique_hostname squid06.services

Re: [squid-users] Heavy mail attachments hotmail

2006-07-24 Thread Christoph Haas
On Monday 24 July 2006 15:34, Santosh Rani wrote:
> How can I tame heavy mail attachments downloaded through Hotmail? Today one
> user downloaded six BMP images totalling 218 MB.

Squid has no notion of "hotmail email attachments". You can limit the size 
of downloads (reply_body_max_size) for the domain hotmail.com if that 
helps you.

Regards
 Christoph


Re: [squid-users] Throughput slower when data is in cache instead of getting it from the webserver

2006-07-24 Thread Steven


On Mon, 24 Jul 2006, Dieter Bloms wrote:

> Hi,
> 
> we use squid as shipped with SuSE Linux Enterprise 9
> (squid-2.5.STABLE5-42.41).
> 
> Throughput is slower when I get the data from the cache, and faster when I use
> the -r option to fetch the data from the webserver.
> 
> --snip--
> 
> This is my squid.conf
> 
> --snip--
> cache_dir diskd /var/cache/squid 15360 16 256
> --snip--
> 
> I'm alone on this server, which has 2G Ram and 2 Xeon 3.4 GHz CPUs.
> The cache_dir is a hardware raid1 with 36 GByte Space.
> 
> Does anybody have an idea why the throughput is lower when I get the
> data from the cache instead of from the webserver?

I had a similar problem under Linux where cache hits were really slow on a
server that was not busy.  Switching to aufs fixed the problem for me (ie 
just replace the word "diskd" with "aufs" on the cache_dir line).
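
For reference, a minimal sketch of that change against the squid.conf posted
above, assuming the installed Squid was built with aufs support (only the
first word of the cache_dir line changes):

--snip--
# before: diskd object store
cache_dir diskd /var/cache/squid 15360 16 256
# after: aufs object store (same path, size and L1/L2 directory counts)
cache_dir aufs /var/cache/squid 15360 16 256
--snip--

Both store types use the same on-disk UFS layout, so the existing cache
directory can normally be reused; a full restart (not just a reconfigure) is
needed for cache_dir changes to take effect.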


I know the FAQ suggests that diskd gives better performance (or it did the
last time I checked).  I think that the FAQ may need updating to advise
that diskd or aufs may give the best performance depending on the load and
OS.

Steven



Re: [squid-users] Heavy mail attachments hotmail

2006-07-24 Thread Santosh Rani

But my reply_body_max_size is already set as follows:

reply_body_max_size 2097152 allow all

In my case the user first opened the file in a browser window and then
saved it. A file saved this way is converted to BMP even though the
attachment was a JPG image.

Further help would be appreciated.

Regards


On 24/07/06, Christoph Haas <[EMAIL PROTECTED]> wrote:

On Monday 24 July 2006 15:34, Santosh Rani wrote:
> How can I tame heavy mail attachments downloaded through Hotmail? Today one
> user downloaded six BMP images totalling 218 MB.

Squid has no notion of "hotmail email attachments". You can limit the size
of downloads (reply_body_max_size) for the domain hotmail.com if that
helps you.

Regards
 Christoph



Re: [squid-users] Heavy mail attachments hotmail

2006-07-24 Thread Christoph Haas
On Monday 24 July 2006 16:47, Santosh Rani wrote:
> But my reply_body_max_size is already set as follows:
>
> reply_body_max_size 2097152 allow all
>
> In my case the user first opened the file in a browser window and then
> saved it. A file saved this way is converted to BMP even though the
> attachment was a JPG image.

The browser magically converts a displayed image to another format when 
saving? Strange.

Anyway, your setting should prevent larger objects from being fetched. 
Check your access.log to see whether this works as expected.

It's a bit extreme to use "allow all". Perhaps limiting it to patterns 
matching Hotmail would be nicer.
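
A rough sketch of that, reusing the "reply_body_max_size <bytes> allow|deny
<acl>" form already present in the posted config (the acl name "hotmail" and
the exact domain pattern are only illustrative):

--snip--
# cap replies for Hotmail at ~2 MB; leave everything else unlimited (0 = no limit)
acl hotmail dstdomain .hotmail.com
reply_body_max_size 2097152 allow hotmail
reply_body_max_size 0 allow all
--snip--

The more specific rule has to come first, since the first reply_body_max_size
line whose ACL matches is the one applied.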

Regards
 Christoph


Re: [squid-users] Throughput slower when data is in cache instead of getting it from the webserver

2006-07-24 Thread Dieter Bloms
Hi,

On Mon, Jul 24, Steven wrote:

> I had a similar problem under Linux where cache hits were really slow on a
> server that was not busy.  Switching to aufs fixed the problem for me (ie 
> just replace the word "diskd" with "aufs" on the cache_dir line).

I've tried it on my test system and yes, the throughput is now higher when the
data comes from the cache than when it is fetched from the webserver.

I will replace diskd with aufs on the production servers tomorrow.

Thank you very much for your hint !


-- 
Regards

  Dieter

--
I do not get viruses because I do not use MS software.
If you use Outlook then please do not put my email address in your
address-book so that WHEN you get a virus it won't use my address in the
From field.



