Re: [squid-users] Squid url_rewrite and cookie

2010-01-04 Thread Matt W. Benjamin
Hi,

Yes, that is correct: you cannot, per se.  However, you can rewrite the
request to a cooperating HTTP service which sets a cookie.  And if you
adjusted Squid to pass cookie data to url_rewriter programs, you could also
inspect that cookie on future requests.
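A minimal sketch of the redirect half of that idea (untested; setcookie.example.com stands in for the hypothetical cooperating service, which would set the cookie and then send the client on to its destination):

#!/bin/sh
# url_rewrite_program helper: bounce matching requests through the
# cookie-setting service. The "302:" prefix asks Squid to send the
# client a real redirect rather than rewriting transparently.
while read url rest; do
    case "$url" in
        http://www.example.com/*)
            echo "302:http://setcookie.example.com/set?next=$url" ;;
        *)
            echo ;;   # blank line tells Squid to leave the URL unchanged
    esac
done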

Matt

- "Rajesh Nair"  wrote:

> 
> Reading the docs, it looks like it is not possible to send "any" HTTP
> response header from the url_rewriter program and the url_rewriter
> can merely return the redirected URI.
> Is this correct?
> 
> Thanks,
> Rajesh

-- 

Matt Benjamin

The Linux Box
206 South Fifth Ave. Suite 150
Ann Arbor, MI  48104

http://linuxbox.com

tel. 734-761-4689
fax. 734-769-8938
cel. 734-216-5309


RE: [squid-users] Forward Cache not working

2010-01-04 Thread Mike Makowski
Attached is a screenshot of the wget header output with the "-S" option.

I see nothing about "private" in the headers, so I'm assuming this content
should be getting cached.  Yet each time I run wget and then view the Squid
access log, it shows TCP_MISS on every attempt.  I'll try the ignore-private
option in squid just to make sure that isn't the cause.

Very puzzling.

Mike

-Original Message-
From: Chris Robertson [mailto:crobert...@gci.net] 
Sent: Monday, January 04, 2010 6:48 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Forward Cache not working

Mike Makowski wrote:
> Here is my basic config.  Using defaults for everything else.
>
> acl localnet src 172.16.0.0/12
> http_access allow localnet
> maximum_object_size 25 MB
>
> Here is a log entry showing one connection from a LAN user through the
> proxy.  I am guessing that the TCP_MISS is significant.  Perhaps the
> original source is marked as Private as Chris suggested. Don't really know
> how to even tell that though.

Add a "-S" to wget to output the server headers.

wget -S http://www.sortmonster.net/master/Updates/test.xyz -O test.new.gz
--header=Accept-Encoding:gzip --http-user=myuserid --http-passwd=mypassword


>   Can squid be forced to cache regardless of
> source settings?
>   

Yes.  http://www.squid-cache.org/Versions/v3/3.0/cfgman/refresh_pattern.html

Keyword "ignore-private".

> 1262645523.217 305633 172.17.0.152 TCP_MISS/200 11674081 GET
> http://www.sortmonster.net/master/Updates/test.xyz - DIRECT/74.205.4.93
> application/x-sortmonster 1262645523.464 122
>
> Mike

Chris

[squid-users] show origin client's IP address.

2010-01-04 Thread Calvin Park
Hello Squid users~

My system configuration is below.

| Apache (Origin) | <---> | Squid (Transparent Proxy or Proxy) | <---> | Client |

I want to see the client's IP address on the Apache side.

I know of one solution: follow_x_forwarded_for in Squid, and
"%{X-Forwarded-For}i" in Apache's log format.

I want to show the real client's IP in the Apache log without modifying
Apache's configuration.

Is that possible?
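For reference, a sketch of the two pieces mentioned above (untested; note that the LogFormat line does live in Apache's configuration, so it does not fully avoid touching it):

# squid.conf: append the real client IP to X-Forwarded-For (on by default)
forwarded_for on

# Apache httpd.conf: hypothetical LogFormat logging the forwarded address
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b" xff_combined
CustomLog logs/access_log xff_combined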


[squid-users] Squid url_rewrite and cookie

2010-01-04 Thread Rajesh Nair
Hi,

I have squid deployed with url_rewrite_program set to a perl
script which rewrites certain requests.
This is all working fine.

I now want to be able to set a cookie along with the 302 response and
I am not able to figure out how.

Reading the docs, it looks like it is not possible to send "any" HTTP
response header from the url_rewriter program and the url_rewriter
can merely return the redirected URI.
Is this correct?

Is there a way for me to set a cookie from squid?
I am even ready to modify the squid code to achieve this since no
other solution will work for us.

Any pointers to the code area which sends the redirect response back
to the client will also help.

Thanks,
Rajesh


Re: [squid-users] Squid LDAP Auth and ACL Integration

2010-01-04 Thread Jose Ildefonso Camargo Tolosa
Hi!

On Sat, Jan 2, 2010 at 1:49 PM, ml ml  wrote:
> Hi,
>
> thanks for the reply.
>
> However, I can't get the proof-of-concept working on the command line:
>
> echo "mo" | squid_ldap_group  -b "dc=my-domain,dc=com"  -f "cn=mo" -F
> "cn=mo" -h localhost -D "cn=Manager,dc=my-domain,dc=com"  -w secret

Not sure, but I use this in my squid.conf:

/usr/lib/squid/squid_ldap_group -b "ou=Groups,dc=example,dc=com" -f
"(&(objectclass=posixGroup)(cn=%g)(memberUid=%u))" -h localhost -P -v
3 -B "ou=Users,dc=example,dc=com" -D cn=read_only,dc=example,dc=com -w
password
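For context, a sketch of how that helper line sits in squid.conf (the acl and group names are hypothetical, and external_acl_type is one logical line, wrapped here for readability):

external_acl_type ldap_group %LOGIN /usr/lib/squid/squid_ldap_group
    -b "ou=Groups,dc=example,dc=com"
    -f "(&(objectclass=posixGroup)(cn=%g)(memberUid=%u))"
    -h localhost -P -v 3 -B "ou=Users,dc=example,dc=com"
    -D cn=read_only,dc=example,dc=com -w password
acl internet_users external ldap_group internet
http_access allow internet_users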

>
> it always returns ERR. If I do a "tcpdump -i any -n port 389" then I
> can't see any traffic at all.
>

I'm not sure, but I think it doesn't show traffic on the lo interface.

> Any idea how I can debug this? The "-d" option does not seem to do any
> debugging!

Maybe run the ldap daemon (slapd) with the "-d -1" option, but it will
print LOTS of info; make sure NO OTHER PROCESS accesses the directory
server while you run the test (maybe a VM will help).

>
> Thanks,
> Mario
>
>
>
> On Thu, Dec 31, 2009 at 9:29 PM, Chris Robertson  wrote:
>> ml ml wrote:
>>>
>>> Hello List,
>>>
>>> I read that it's quite easy to get squid with ldap auth running.
>>>
>>> I would also like to manage Black/White URL-Lists in ldap. Can this be
>>> done via ldap, too?

Hmm... maybe, but I think this could become slow.  I have never
used LDAP for black lists; I store them in plain-text files, and then
use group membership (ldap) to manage who the lists apply to.  If
you feel like you really need to have the URLs in LDAP, I would write
a script that reads the URLs from LDAP and writes them to plain-text
files that squid would use (see the sketch below).  Of course, you
would need some "intelligence" in the script.

I hope this helps,

Ildefonso Camargo


Re: [squid-users] Forward Cache not working

2010-01-04 Thread Chris Robertson

Mike Makowski wrote:

> Here is my basic config.  Using defaults for everything else.
>
> acl localnet src 172.16.0.0/12
> http_access allow localnet
> maximum_object_size 25 MB
>
> Here is a log entry showing one connection from a LAN user through the
> proxy.  I am guessing that the TCP_MISS is significant.  Perhaps the
> original source is marked as Private as Chris suggested. Don't really know
> how to even tell that though.


Add a "-S" to wget to output the server headers.

wget -S http://www.sortmonster.net/master/Updates/test.xyz -O test.new.gz
--header=Accept-Encoding:gzip --http-user=myuserid --http-passwd=mypassword



> Can squid be forced to cache regardless of
> source settings?


Yes.  http://www.squid-cache.org/Versions/v3/3.0/cfgman/refresh_pattern.html

Keyword "ignore-private".
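Something like the following (a sketch, untested; the three times are in minutes, and the pattern is deliberately narrow so it only affects this one download):

refresh_pattern -i sortmonster\.net/master/Updates 15 20% 20 ignore-private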


> 1262645523.217 305633 172.17.0.152 TCP_MISS/200 11674081 GET
> http://www.sortmonster.net/master/Updates/test.xyz - DIRECT/74.205.4.93
> application/x-sortmonster 1262645523.464 122
>
> Mike


Chris



RE: [squid-users] Forward Cache not working

2010-01-04 Thread Mike Makowski
Here is my basic config.  Using defaults for everything else.

acl localnet src 172.16.0.0/12
http_access allow localnet
maximum_object_size 25 MB

Here is a log entry showing one connection from a LAN user through the
proxy.  I am guessing that the TCP_MISS is significant.  Perhaps the
original source is marked as Private as Chris suggested. Don't really know
how to even tell that though.  Can squid be forced to cache regardless of
source settings?

1262645523.217 305633 172.17.0.152 TCP_MISS/200 11674081 GET
http://www.sortmonster.net/master/Updates/test.xyz - DIRECT/74.205.4.93
application/x-sortmonster 1262645523.464 122

Mike



-Original Message-
From: Guido Marino Lorenzutti [mailto:glorenzu...@jusbaires.gov.ar] 
Sent: Monday, January 04, 2010 3:25 PM
To: Mike Makowski
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Forward Cache not working

logs from the hits to the squid?
config files?

Mike Makowski wrote:

> Hello all,
>
> I'm new to squid.  Currently have squid3 installed on a 64-bit ubuntu box.
> Fresh install with default YUM settings.
>
> I am trying to get the forward proxy cache to work for one very specific
> application that is running on multiple servers on my LAN.  Instead of each
> machine pulling a 12MB file from the internet I am trying to make squid pull
> the file on behalf of the clients and then serve it to them out of cache.
> The file changes roughly every 15-20 minutes and squid would need to pull
> the fresh copy each time and update the cache.  The clients are using wget
> to download the 12MB file via http.
>
> set http_proxy = '172.16.0.2:3128'  (squid server)
> wget http://www.sortmonster.net/master/Updates/test.xyz -O test.new.gz
> --header=Accept-Encoding:gzip --http-user=myuserid --http-passwd=mypassword
>
> I have changed the ACL to accept connections from my 172 network and have
> also updated http proxy to recognize the same addresses.  Have increased the
> max file cache size to 200MB and have increased the maximum individual file
> cache object size to 50MB.
>
> The proxy works great in that it is serving the requested content to each
> client but it always pulls over the internet - not from cache. I have been
> told that others have been successful at caching this file in squid so I
> suspect there is nothing wrong on the remote end.
>
> I'm also not quite clear how squid will handle the requests if multiple
> clients request the file at the same time and it is not yet cached.
>
> I know I'm missing something very simple.  Suggestions please.
>
> Thanks for any help.
>
> Mike Makowski
>
>
>




Re: [squid-users] Forward Cache not working

2010-01-04 Thread Chris Robertson

Mike Makowski wrote:

> Hello all,
>
> I'm new to squid.  Currently have squid3 installed on a 64-bit ubuntu box.
> Fresh install with default YUM settings.
>
> I am trying to get the forward proxy cache to work for one very specific
> application that is running on multiple servers on my LAN.  Instead of each
> machine pulling a 12MB file from the internet I am trying to make squid pull
> the file on behalf of the clients and then serve it to them out of cache.
> The file changes roughly every 15-20 minutes and squid would need to pull
> the fresh copy each time and update the cache.  The clients are using wget
> to download the 12MB file via http.
>
> set http_proxy = '172.16.0.2:3128'  (squid server)
> wget http://www.sortmonster.net/master/Updates/test.xyz -O test.new.gz
> --header=Accept-Encoding:gzip --http-user=myuserid --http-passwd=mypassword


Content protected by a login is usually marked with the header 
"Cache-control: private", which will prevent a shared cache from keeping 
a copy.



> I have changed the ACL to accept connections from my 172 network and have
> also updated http proxy to recognize the same addresses.  Have increased the
> max file cache size to 200MB and have increased the maximum individual file
> cache object size to 50MB.
>
> The proxy works great in that it is serving the requested content to each
> client but it always pulls over the internet - not from cache. I have been
> told that others have been successful at caching this file in squid so I
> suspect there is nothing wrong on the remote end.
>
> I'm also not quite clear how squid will handle the requests if multiple
> clients request the file at the same time and it is not yet cached.
>
> I know I'm missing something very simple.  Suggestions please.


http://www.squid-cache.org/Versions/v3/3.0/cfgman/refresh_pattern.html

Keyword "ignore-private".


> Thanks for any help.
>
> Mike Makowski


Chris




Re: [squid-users] Forward Cache not working

2010-01-04 Thread Guido Marino Lorenzutti

logs from the hits to the squid?
config files?

Mike Makowski wrote:


> Hello all,
>
> I'm new to squid.  Currently have squid3 installed on a 64-bit ubuntu box.
> Fresh install with default YUM settings.
>
> I am trying to get the forward proxy cache to work for one very specific
> application that is running on multiple servers on my LAN.  Instead of each
> machine pulling a 12MB file from the internet I am trying to make squid pull
> the file on behalf of the clients and then serve it to them out of cache.
> The file changes roughly every 15-20 minutes and squid would need to pull
> the fresh copy each time and update the cache.  The clients are using wget
> to download the 12MB file via http.
>
> set http_proxy = '172.16.0.2:3128'  (squid server)
> wget http://www.sortmonster.net/master/Updates/test.xyz -O test.new.gz
> --header=Accept-Encoding:gzip --http-user=myuserid --http-passwd=mypassword
>
> I have changed the ACL to accept connections from my 172 network and have
> also updated http proxy to recognize the same addresses.  Have increased the
> max file cache size to 200MB and have increased the maximum individual file
> cache object size to 50MB.
>
> The proxy works great in that it is serving the requested content to each
> client but it always pulls over the internet - not from cache. I have been
> told that others have been successful at caching this file in squid so I
> suspect there is nothing wrong on the remote end.
>
> I'm also not quite clear how squid will handle the requests if multiple
> clients request the file at the same time and it is not yet cached.
>
> I know I'm missing something very simple.  Suggestions please.
>
> Thanks for any help.
>
> Mike Makowski









Re: [squid-users] Expires Header

2010-01-04 Thread Chris Robertson

Dusten Splan wrote:

> Thanks for the response Chris but I don't think that is what I'm
> looking for.  What I would like to do is overwrite the expires header
> to something far in the future.  I am also running squid 3.0.
>
> Thanks
>   Dusten


Ah.  For 3.0 you will want to use reply_header_access 
(http://www.squid-cache.org/Doc/config/reply_header_access/) and 
header_replace.


Something like...

acl myPeer peername accelerated.host.mine
reply_header_access Expires deny myPeer
header_replace Expires Sat, 01 Jan 2011 00:00:00 GMT

...should replace the expires header for all responses from your 
accelerated host.  At least I think you can use a peername ACL with 
reply_header_access.


Chris





[squid-users] Forward Cache not working

2010-01-04 Thread Mike Makowski
Hello all,

I'm new to squid.  Currently have squid3 installed on a 64-bit ubuntu box.
Fresh install with default YUM settings.

I am trying to get the forward proxy cache to work for one very specific
application that is running on multiple servers on my LAN.  Instead of each
machine pulling a 12MB file from the internet I am trying to make squid pull
the file on behalf of the clients and then serve it to them out of cache.
The file changes roughly every 15-20 minutes and squid would need to pull
the fresh copy each time and update the cache.  The clients are using wget
to download the 12MB file via http.

set http_proxy = '172.16.0.2:3128'  (squid server)
wget http://www.sortmonster.net/master/Updates/test.xyz -O test.new.gz
--header=Accept-Encoding:gzip --http-user=myuserid --http-passwd=mypassword

I have changed the ACL to accept connections from my 172 network and have
also updated http proxy to recognize the same addresses.  Have increased the
max file cache size to 200MB and have increased the maximum individual file
cache object size to 50MB.

The proxy works great in that it is serving the requested content to each
client but it always pulls over the internet - not from cache. I have been
told that others have been successful at caching this file in squid so I
suspect there is nothing wrong on the remote end.

I'm also not quite clear how squid will handle the requests if multiple
clients request the file at the same time and it is not yet cached.

I know I'm missing something very simple.  Suggestions please.

Thanks for any help.

Mike Makowski




Re: [squid-users] Squid LDAP Auth and ACL Integration

2010-01-04 Thread Chris Robertson

ml ml wrote:

> Hi,
>
> thanks for the reply.
>
> However, I can't get the proof-of-concept working on the command line:
>
> echo "mo" | squid_ldap_group  -b "dc=my-domain,dc=com"  -f "cn=mo" -F
> "cn=mo" -h localhost -D "cn=Manager,dc=my-domain,dc=com"  -w secret
>
> it always returns ERR.


So, user with common name of "mo" is apparently not a member of the 
group with common name "mo".  You are statically assigning your search 
filters, which will return the same results for every run.



> If I do a "tcpdump -i any -n port 389" then I
> can't see any traffic at all.
>
> Any idea how I can debug this? The "-d" option does not seem to do any
> debugging!


That's very odd.  The -d option should print messages:
* upon successful LDAP connection (with a failed connection being 
reported regardless of debugging being set)

* confirming the group filter and searchbase
* confirming the user filter and searchbase

Try putting -d as the first argument.  It shouldn't matter, but doing so 
will assure it's not being "missed".
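i.e. your original command line with -d moved to the front, something like this (the debug messages arrive on stderr):

echo "mo" | squid_ldap_group -d -b "dc=my-domain,dc=com" -f "cn=mo" -F \
    "cn=mo" -h localhost -D "cn=Manager,dc=my-domain,dc=com" -w secret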



> Thanks,
> Mario


Chris



Re: [squid-users] Squid 3.1.0.13 Speed Test - Upload breaks?

2010-01-04 Thread jay60103

I'm using Version 3.1.0.6 and speakeasy.net doesn't work for me either.
Download test okay, but when it starts the upload part it fails with "Upload
test returned an error while trying to read the upload file." 

http://bigcartel.com fails also. Possibly related?


Adrian Chadd-3 wrote:
> 
> The pipelining used by speedtest.net and such won't really get a
> benefit from the current squid pipelining support.
> 
> 
> 
> Adrian
> 
> 2009/8/15 Daniel :
>> Henrik,
>>
>>        I added 'pipeline_prefetch on' to my squid.conf and it still isn't
>> working right. I've pasted my entire squid.conf below, if you have
>> anything extra turned on/off or et cetera than please let me know and
>> I'll try it.  Thanks!
>>
>> acl manager proto cache_object
>> acl localhost src 127.0.0.1/32
>> acl to_localhost dst 127.0.0.0/8
>> acl TestPoolIPs src lpt-hdq-dmtqq31 wksthdq88w
>> acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
>> acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
>> acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
>> acl sclthdq01w src 10.211.194.187/32    # custom acl for apache/cache manager
>> acl SSL_ports port 443
>> acl Safe_ports port 80          # http
>> acl Safe_ports port 21          # ftp
>> acl Safe_ports port 443         # https
>> acl Safe_ports port 70          # gopher
>> acl Safe_ports port 210         # wais
>> acl Safe_ports port 1025-65535  # unregistered ports
>> acl Safe_ports port 280         # http-mgmt
>> acl Safe_ports port 488         # gss-http
>> acl Safe_ports port 591         # filemaker
>> acl Safe_ports port 777         # multiling http
>> acl CONNECT method CONNECT
>> http_access allow manager localhost
>> http_access allow manager sclthdq01w
>> http_access deny manager
>> http_access deny !Safe_ports
>> http_access deny CONNECT !SSL_ports
>> #http_access allow localnet
>> http_access allow localhost
>> http_access allow TestPoolIPs
>> http_access deny all
>> http_port 3128
>> hierarchy_stoplist cgi-bin ?
>> coredump_dir /usr/local/squid/var/cache
>> cache_mem 512 MB
>> pipeline_prefetch on
>> refresh_pattern ^ftp:           1440    20%     10080
>> refresh_pattern ^gopher:        1440    0%      1440
>> refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
>> refresh_pattern .               0       20%     4320
>>
>> -Original Message-
>> From: Henrik Lidström [mailto:free...@lidstrom.eu]
>> Sent: Monday, August 10, 2009 8:16 PM
>> To: Daniel
>> Cc: squid-users@squid-cache.org
>> Subject: Re: [squid-users] Squid 3.1.0.13 Speed Test - Upload breaks?
>>
>> Daniel wrote:
>>> Kinkie,
>>>
>>>       I'm using the default settings, so I don't have any specific max
>>> request sizes specified. I guess I'll hold out until someone else
>>> running 3.1 can test this.
>>>
>>> Thanks!
>>>
>>> -Original Message-
>>> From: Kinkie [mailto:gkin...@gmail.com]
>>> Sent: Saturday, August 08, 2009 6:44 AM
>>> To: squid-users@squid-cache.org
>>> Subject: Re: [squid-users] Squid 3.1.0.13 Speed Test - Upload breaks?
>>>
>>> Maybe the failure could depend on some specific settings, such as max
>>> request size?
>>>
>>> On 8/8/09, Heinz Diehl  wrote:
>>>
>>>> On 08.08.2009, Daniel wrote:
>>>>
>>>>> Would anyone else using Squid mind doing this same bandwidth test and
>>>>> seeing if they have the same issue(s)?
>>>>
>>>> It works flawlessly using both 2.7-STABLE6 and 3.0-STABLE18 here.
>>>
>>>
>>>
>> Squid Cache: Version 3.1.0.13
>>
>> Working without a problem, tested multiple sites on the list.
>> Nothing special in the config except maybe "pipeline_prefetch on"
>>
>> /Henrik
>>
>>
> 
> 

-- 
View this message in context: 
http://old.nabble.com/Squid-3.1.0.13-Speed-Test---Upload-breaks--tp24868479p27018936.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] Caching for maps.ayna.com

2010-01-04 Thread Serge Fonville
Hi,

> i'm checking store.log to see hits concerning maps.ayna.com
> and nothing is being cached from the map itself..
> any idea what might be the problem?
> i've opened a session and tailed access.log, where everything is missed
> time and time again...

The headers contain
> Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
> Pragma: no-cache

This might be the reason.
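If you cannot change those headers at the origin, some Squid versions can override them with refresh_pattern options; a sketch (untested, and option availability varies by Squid version):

refresh_pattern -i maps\.ayna\.com 30 20% 60 ignore-no-cache ignore-no-store ignore-private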

HTH

Regards,

Serge Fonville


-- 
http://www.sergefonville.nl

Convince Google!!
They need to support Adsense over SSL
https://www.google.com/adsense/support/bin/answer.py?hl=en&answer=10528
http://www.google.com/support/forum/p/AdSense/thread?tid=1884bc9310d9f923&hl=en


[squid-users] Access.log in Transparent Mode

2010-01-04 Thread Pedro
Hi all,
 
I've configured a Debian Server 2.6.26-2-686 with Squid Server v2.7 in
transparent mode and it works perfectly, but I need the 'transparent'
requests to be recorded in the access.log file.
When I configure the proxy server in the browser I see how the file grows,
and I can watch it with tail -f too. But when I configure the gateway on the
client and remove the proxy from the browser, squid doesn't record anything
in access.log, though the clients can surf, send email, and so on.

Can somebody help me?
 
Pedro Maroto



Re: [squid-users] Expires Header

2010-01-04 Thread Dusten Splan
Thanks for the response Chris, but I don't think that is what I'm
looking for.  What I would like to do is overwrite the expires header
to something far in the future.  I am also running squid 3.0.

Thanks
  Dusten

On Thu, Dec 31, 2009 at 15:30, Chris Robertson  wrote:
> Dusten Splan wrote:
>>
>> Does anyone know how to change the expires header in the http response
>> that squid sends.  I have an object that has one in the past and when
>> being forwarded to the CDN because it's in the past it's only keeping
>> the content for 1 day.  I would like to do this in squid and not
>> change the origin.
>>
>
> http://www.squid-cache.org/Doc/config/header_access/
> http://www.squid-cache.org/Doc/config/header_replace/
>
>> Thanks
>> Dusten
>>
>
> Chris
>
>


Re: [squid-users] solaris 10 process size problem

2010-01-04 Thread Mario Garcia Ortiz
Hello

here are the compile options used to build squid on the solaris system;
there are no compilation errors or warnings:

$ ./configure CFLAGS=-DNUMTHREADS=60 --prefix=/usr/local/squid
--enable-snmp --with-maxfd=32768 --enable-removal-policies
--enable-useragent-log --enable-storeio=diskd,null

squidclient currently reports a process data size (via sbrk) of
1213034 KB (more than 1GB); this is huge for a single process.

the OS reports similar numbers with the top command:

  PID USERNAME LWP PRI NICE  SIZE   RES STATETIMECPU COMMAND
 12925 root   1  450 1113M 1110M sleep  218:44  7.33% squid
 12893 root   1  540 1192M 1188M sleep  222:08  5.81% squid

here is the output of the command ps -e -o pid,vsz,comm:

  PID      VSZ  COMMAND
12925  1139948  (squid)
12893  1220136  (squid)

when the size reaches 4GB the squid process crashes with these errors:

FATAL: xcalloc: Unable to allocate 1 blocks of 4194304 bytes!

Squid Cache (Version 3.0.STABLE20): Terminated abnormally.
CPU Usage: 91594.216 seconds = 57864.539 user + 33729.677 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
total space in arena:  -157909 KB
Ordinary blocks:   691840 KB 531392 blks
Small blocks:4460 KB 184700 blks
Holding blocks:50 KB   1847 blks
Free Small blocks:696 KB
Free Ordinary blocks:  -854957 KB
Total in use:  696351 KB -440%
Total free:-854260 KB 541%


the custom configuration parameters are:
cache_dir null /path/to/cache
cache_mem 512 MB  --> i am going to lower this to 128 MB; maybe that is the cause.

I am compiling squid STABLE21 in order to see if there is some improvement.

is there a way to get useful information from a deliberately provoked core
dump, which I plan to do?  we usually have to wait 3 to 4 weeks for the
problem to reproduce by itself, but we can see the process growing: in a
few days it is already at 1GB.
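For what it's worth, Solaris can usually snapshot a live process without stopping it (a sketch; the PIDs are the squid processes from the top output above):

gcore 12925        # writes core.12925 without killing the process
pstack 12925       # quick look at where the process is executing
pmap -x 12925      # per-segment memory breakdown, useful for leak hunting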


thank you very much for your collaboration.



2009/12/31 Amos Jeffries :
> Mario Garcia Ortiz wrote:
>>
>> Hello
>> thank you very much for your answer. the problem is that squid grows
>> constantly in size; so far it is already at 1.5GB and it has been
>> restarted monday.
>> i will try to provoke a core dump so i can send it to squid.
>
> Squid is supposed to allow growth until the internal limit is reached.
> According to those stats only 98% of the internal storage limit is used.
>
> Anything you can provide about build options, configuration settings, and
> what the OS thinks the memory usage is will help limit the problem search
> down.
>
>>
>> in the meanwhile i will upgrade squid to the latest stable 21. are
>> there any recommended options while compiling on solaris 10?
>
> Options-wise everything builds on Solaris. Actual usage testing has been a
> little light so we can't guarantee anything as yet.
>
> Some extra build packages may be needed:
> http://wiki.squid-cache.org/KnowledgeBase/Solaris
>
>> such as using
>> an alternate malloc library?
>
> If you are able to find and use a malloc library that is known to handle
> memory allocation on 64-bit systems well it would be good. They can be rare on
> some systems.
>
>
> Amos
>
>>
>> 2009/12/31 Amos Jeffries :
>>>
>>> Mario Garcia Ortiz wrote:

>>>> Hello
>>>> thank you very much for your help.
>>>> the problem occurred once the process size reached 4GB. the only
>>>> application running on the server is the proxy; there are two
>>>> instances running, each one on a different IP address.
>>>> there is no cache.. the squid was compiled with
>>>> --enable-storeio=diskd,null and in squid.conf:
>>>> cache_dir null /var/spool/squid1
>>>>
>>>> as for the hits i assume there are none since there is no cache, am I
>>>> wrong?
>>>> here is what i get with mgr:info output from squidclient:
>>>>
>>>> Cache information for squid:
>>>>       Hits as % of all requests:      5min: 11.4%, 60min: 17.7%
>>>>       Hits as % of bytes sent:        5min: 8.8%, 60min: 10.3%
>>>>       Memory hits as % of hit requests:       5min: 58.2%, 60min: 60.0%
>>>>       Disk hits as % of hit requests: 5min: 0.1%, 60min: 0.1%
>>>>       Storage Swap size:      0 KB
>>>>       Storage Swap capacity:   0.0% used,  0.0% free
>>>>       Storage Mem size:       516272 KB
>>>>       Storage Mem capacity:   98.5% used,  1.5% free
>>>>       Mean Object Size:       0.00 KB
>>>>       Requests given to unlinkd:      0
>>>>
>>>> I am not able to find a core file on the system for the problem of
>>>> yesterday.
>>>> the squid was restarted yesterday at 11:40 am and now the process data
>>>> segment size is 940512 KB.
>>>>
>>>> i bet that if i let the process reach 4GB again the crash will
>>>> occur? maybe that is necessary in order to collect debug data?
>>>>
>>>> thank you in advance for your help, it is very much appreciated.
>>>>
>>>> kindest regards
>>>>
>>>> Mario G.
>>

[squid-users] Caching for maps.ayna.com

2010-01-04 Thread Roland Roland

hello all,

i'm checking store.log to see hits concerning maps.ayna.com
and nothing is being cached from the map itself..
any idea what might be the problem?
i've opened a session and tailed access.log, where everything is missed
time and time again...