AW: [squid-users] https from different Subnet not working

2009-07-14 Thread Jarosch, Ralph
OK, I found my mistake just a moment before you wrote your reply :).
I had forgotten some "/" in my squidGuard conf.
Thanks a lot to all.
I'll just go bang my head on the table now.

Ralph
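For anyone hitting the same symptom: the cache.log excerpts in this thread show CONNECT requests being rewritten to mangled results like `http:/golem.de`. A squidGuard rewrite rule with a missing "/" can produce exactly that. The sketch below is hypothetical, since Ralph's actual rule is not shown:

```
# squidGuard.conf -- hypothetical illustration of the mistake.
# A "/" missing from the replacement text yields mangled URLs such
# as "http:/golem.de", which is what squid then logs in
# clientRedirectDone.
rewrite broken {
    s@^https://.*@http:/golem.de@      # broken: "http:/" (one slash)
}
# Correct form:
rewrite fixed {
    s@^https://golem\.de/@http://golem.de/@
}
```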

> -----Original Message-----
> From: adrian.ch...@gmail.com [mailto:adrian.ch...@gmail.com] On behalf
> of Adrian Chadd
> Sent: Wednesday, 15 July 2009 07:52
> To: Jarosch, Ralph
> Cc: squid-users@squid-cache.org
> Subject: Re: [squid-users] https from different Subnet not working
> 
> Are you using a url rewriter program?
> 
> Also, why haven't you just emailed redhat support?
> 
> 
> 
> Adrian
> 
> 2009/7/15 Jarosch, Ralph :
> > I found the section which rewrites the request in my cache.log.
> >
> > Can someone explain what happens there?
> >
> > 2009/07/15 06:51:56| cbdataValid: 0x17f684f8
> > 2009/07/15 06:51:56| redirectHandleRead: {http:/golem.de
> 10.39.119.9/- - CONNECT}
> > 2009/07/15 06:51:56| cbdataValid: 0x1808d4a8
> > 2009/07/15 06:51:56| cbdataUnlock: 0x1808d4a8
> > 2009/07/15 06:51:56| clientRedirectDone: 'erv-
> justiz.niedersachsen.de:443' result=http:/golem.de
> > 2009/07/15 06:51:56| init-ing hdr: 0x1808f160 owner: 1
> > 2009/07/15 06:51:56| appending hdr: 0x1808f160 += 0x1808ec00
> > 2009/07/15 06:51:56| created entry 0x17f726f0: 'User-Agent:
> Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.2; .NET CLR 1.1.4322;
> .NET CLR 2.0.50727; .NET CLR 3.0.04506.30)'
> > 2009/07/15 06:51:56| 0x1808f160 adding entry: 50 at 0
> > 2009/07/15 06:51:56| created entry 0x17fcc870: 'Proxy-Connection:
> Keep-Alive'
> > 2009/07/15 06:51:56| 0x1808f160 adding entry: 41 at 1
> > 2009/07/15 06:51:56| created entry 0x17fc3190: 'Content-Length: 0'
> > 2009/07/15 06:51:56| 0x1808f160 adding entry: 14 at 2
> > 2009/07/15 06:51:56| created entry 0x17fcbd80: 'Host: erv-
> justiz.niedersachsen.de'
> > 2009/07/15 06:51:56| 0x1808f160 adding entry: 27 at 3
> > 2009/07/15 06:51:56| created entry 0x17fcc990: 'Pragma: no-cache'
> > 2009/07/15 06:51:56| 0x1808f160 adding entry: 37 at 4
> > 2009/07/15 06:51:56| 0x1808f160 lookup for 37
> > 2009/07/15 06:51:56| 0x1808f160: joining for id 37
> > 2009/07/15 06:51:56| 0x1808f160: joined for id 37: no-cache
> > 2009/07/15 06:51:56| 0x1808f160 lookup for 7
> > 2009/07/15 06:51:56| 0x1808f160 lookup for 7
> > 2009/07/15 06:51:56| 0x1808f160 lookup for 40
> > 2009/07/15 06:51:56| 0x1808f160 lookup for 52
> > 2009/07/15 06:51:56| clientInterpretRequestHeaders: REQ_NOCACHE = SET
> > 2009/07/15 06:51:56| clientInterpretRequestHeaders: REQ_CACHABLE =
> NOT SET
> > 2009/07/15 06:51:56| clientInterpretRequestHeaders: REQ_HIERARCHICAL
> = NOT SET
> > 2009/07/15 06:51:56| clientProcessRequest: CONNECT
> 'http.justiz.niedersachsen.de:443'
> > 2009/07/15 06:51:56| aclCheckFast: list: (nil)
> > 2009/07/15 06:51:56| aclCheckFast: no matches, returning: 1
> > 2009/07/15 06:51:56| sslStart: 'CONNECT
> http.justiz.niedersachsen.de:443'
> > 2009/07/15 06:51:56| comm_open: FD 58 is a new socket
> > 2009/07/15 06:51:56| fd_open FD 58 http.justiz.niedersachsen.de:443
> > 2009/07/15 06:51:56| comm_add_close_handler: FD 58, handler=0x463e31,
> data=0x1808d378
> >
> >> -----Original Message-----
> >> From: Jarosch, Ralph [mailto:ralph.jaro...@justiz.niedersachsen.de]
> >> Sent: Tuesday, 14 July 2009 11:40
> >> To: squid-users@squid-cache.org
> >> Subject: AW: [squid-users] https from different Subnet not working
> >>
> >> > -----Original Message-----
> >> > From: adrian.ch...@gmail.com [mailto:adrian.ch...@gmail.com] On
> >> behalf
> >> > of Adrian Chadd
> >> > Sent: Tuesday, 14 July 2009 11:16
> >> > To: Jarosch, Ralph
> >> > Cc: squid-users@squid-cache.org
> >> > Subject: Re: [squid-users] https from different Subnet not working
> >> >
> >> > 2009/7/14 Jarosch, Ralph :
> >> > > This is the latest supported squid-2 version for RHEL5.3
> >> > >
> >> > > And I want to use the dnsserver
> >> >
> >> > Right. Well, besides the other posters' response about the cache
> peer
> >> > setup being a clue - you're choosing a peer based on source IP as
> far
> >> > as I can tell there - which leads me to think that perhaps that
> >> > particular cache has a problem. You didn't say which caches they
> were
> >> > in your config or error message so we can't check whether they're
> the
> >> > same or different.
> >> >
> >> Ok, sorry.
> >> The current path for a website request is:
> >>
> >> Client --> head proxy (10.37.132.2) --> my cache proxies
> >> (10.37.132.5/6/7/8) --> our ISP's proxy --> internet
> >>
> >> The error message comes from the ISP proxy when I request
> >> something like https://www.ebay.com:
> >>
> >>  The requested URL could not be retrieved
> >>  ----------------------------------------
> >>  While trying to retrieve the URL: http.yyy.xxx:443
> >>       (yyy.xxx is our local domain)
> >>  The following error was encountered:
> >>  Unable to determine IP address from host name for
> >>  The dnsserver returned:
> >>  Name Error: The domain name does not exist.

Re: [squid-users] https from different Subnet not working

2009-07-14 Thread Adrian Chadd
Are you using a url rewriter program?

Also, why haven't you just emailed redhat support?



Adrian

2009/7/15 Jarosch, Ralph :
> I found the section which rewrites the request in my cache.log.
>
> Can someone explain what happens there?
>
> 2009/07/15 06:51:56| cbdataValid: 0x17f684f8
> 2009/07/15 06:51:56| redirectHandleRead: {http:/golem.de 10.39.119.9/- - 
> CONNECT}
> 2009/07/15 06:51:56| cbdataValid: 0x1808d4a8
> 2009/07/15 06:51:56| cbdataUnlock: 0x1808d4a8
> 2009/07/15 06:51:56| clientRedirectDone: 'erv-justiz.niedersachsen.de:443' 
> result=http:/golem.de
> 2009/07/15 06:51:56| init-ing hdr: 0x1808f160 owner: 1
> 2009/07/15 06:51:56| appending hdr: 0x1808f160 += 0x1808ec00
> 2009/07/15 06:51:56| created entry 0x17f726f0: 'User-Agent: Mozilla/4.0 
> (compatible; MSIE 7.0; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; 
> .NET CLR 3.0.04506.30)'
> 2009/07/15 06:51:56| 0x1808f160 adding entry: 50 at 0
> 2009/07/15 06:51:56| created entry 0x17fcc870: 'Proxy-Connection: Keep-Alive'
> 2009/07/15 06:51:56| 0x1808f160 adding entry: 41 at 1
> 2009/07/15 06:51:56| created entry 0x17fc3190: 'Content-Length: 0'
> 2009/07/15 06:51:56| 0x1808f160 adding entry: 14 at 2
> 2009/07/15 06:51:56| created entry 0x17fcbd80: 'Host: 
> erv-justiz.niedersachsen.de'
> 2009/07/15 06:51:56| 0x1808f160 adding entry: 27 at 3
> 2009/07/15 06:51:56| created entry 0x17fcc990: 'Pragma: no-cache'
> 2009/07/15 06:51:56| 0x1808f160 adding entry: 37 at 4
> 2009/07/15 06:51:56| 0x1808f160 lookup for 37
> 2009/07/15 06:51:56| 0x1808f160: joining for id 37
> 2009/07/15 06:51:56| 0x1808f160: joined for id 37: no-cache
> 2009/07/15 06:51:56| 0x1808f160 lookup for 7
> 2009/07/15 06:51:56| 0x1808f160 lookup for 7
> 2009/07/15 06:51:56| 0x1808f160 lookup for 40
> 2009/07/15 06:51:56| 0x1808f160 lookup for 52
> 2009/07/15 06:51:56| clientInterpretRequestHeaders: REQ_NOCACHE = SET
> 2009/07/15 06:51:56| clientInterpretRequestHeaders: REQ_CACHABLE = NOT SET
> 2009/07/15 06:51:56| clientInterpretRequestHeaders: REQ_HIERARCHICAL = NOT SET
> 2009/07/15 06:51:56| clientProcessRequest: CONNECT 
> 'http.justiz.niedersachsen.de:443'
> 2009/07/15 06:51:56| aclCheckFast: list: (nil)
> 2009/07/15 06:51:56| aclCheckFast: no matches, returning: 1
> 2009/07/15 06:51:56| sslStart: 'CONNECT http.justiz.niedersachsen.de:443'
> 2009/07/15 06:51:56| comm_open: FD 58 is a new socket
> 2009/07/15 06:51:56| fd_open FD 58 http.justiz.niedersachsen.de:443
> 2009/07/15 06:51:56| comm_add_close_handler: FD 58, handler=0x463e31, 
> data=0x1808d378
>
>> -----Original Message-----
>> From: Jarosch, Ralph [mailto:ralph.jaro...@justiz.niedersachsen.de]
>> Sent: Tuesday, 14 July 2009 11:40
>> To: squid-users@squid-cache.org
>> Subject: AW: [squid-users] https from different Subnet not working
>>
>> > -----Original Message-----
>> > From: adrian.ch...@gmail.com [mailto:adrian.ch...@gmail.com] On
>> behalf
>> > of Adrian Chadd
>> > Sent: Tuesday, 14 July 2009 11:16
>> > To: Jarosch, Ralph
>> > Cc: squid-users@squid-cache.org
>> > Subject: Re: [squid-users] https from different Subnet not working
>> >
>> > 2009/7/14 Jarosch, Ralph :
>> > > This is the latest supported squid-2 version for RHEL5.3
>> > >
>> > > And I want to use the dnsserver
>> >
>> > Right. Well, besides the other posters' response about the cache peer
>> > setup being a clue - you're choosing a peer based on source IP as far
>> > as I can tell there - which leads me to think that perhaps that
>> > particular cache has a problem. You didn't say which caches they were
>> > in your config or error message so we can't check whether they're the
>> > same or different.
>> >
>> Ok, sorry.
>> The current path for a website request is:
>>
>> Client --> head proxy (10.37.132.2) --> my cache proxies
>> (10.37.132.5/6/7/8) --> our ISP's proxy --> internet
>>
>> The error message comes from the ISP proxy when I request
>> something like https://www.ebay.com:
>>
>>  The requested URL could not be retrieved
>>  ----------------------------------------
>>  While trying to retrieve the URL: http.yyy.xxx:443
>>       (yyy.xxx is our local domain)
>>  The following error was encountered:
>>  Unable to determine IP address from host name for
>>  The dnsserver returned:
>>  Name Error: The domain name does not exist.
>>  This means that:
>>   The cache was not able to resolve the hostname presented in the URL.
>>   Check if the address is correct.
>>  Your cache administrator is webmaster.
>>  ----------------------------------------
>>  Generated Tue, 14 Jul 2009 08:10:39 GMT by xxx
>>       (the answer comes from the ISP)
>>  (squid/2.5.STABLE12)
>>
>> I've made a tcpdump between our head proxy and our cache proxies, and
>> there I can see that the head proxy changes the request from
>> https://www.ebay.com to https.our.domain.com
>>
>>
>>
>> > But since you're using a supported squid for RHEL5.3, why don't you
>> > contact Redhat for support? That is what you're paying them for.
AW: [squid-users] https from different Subnet not working

2009-07-14 Thread Jarosch, Ralph
I found the section which rewrites the request in my cache.log.

Can someone explain what happens there?

2009/07/15 06:51:56| cbdataValid: 0x17f684f8
2009/07/15 06:51:56| redirectHandleRead: {http:/golem.de 10.39.119.9/- - 
CONNECT}
2009/07/15 06:51:56| cbdataValid: 0x1808d4a8
2009/07/15 06:51:56| cbdataUnlock: 0x1808d4a8
2009/07/15 06:51:56| clientRedirectDone: 'erv-justiz.niedersachsen.de:443' 
result=http:/golem.de
2009/07/15 06:51:56| init-ing hdr: 0x1808f160 owner: 1
2009/07/15 06:51:56| appending hdr: 0x1808f160 += 0x1808ec00
2009/07/15 06:51:56| created entry 0x17f726f0: 'User-Agent: Mozilla/4.0 
(compatible; MSIE 7.0; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; 
.NET CLR 3.0.04506.30)'
2009/07/15 06:51:56| 0x1808f160 adding entry: 50 at 0
2009/07/15 06:51:56| created entry 0x17fcc870: 'Proxy-Connection: Keep-Alive'
2009/07/15 06:51:56| 0x1808f160 adding entry: 41 at 1
2009/07/15 06:51:56| created entry 0x17fc3190: 'Content-Length: 0'
2009/07/15 06:51:56| 0x1808f160 adding entry: 14 at 2
2009/07/15 06:51:56| created entry 0x17fcbd80: 'Host: 
erv-justiz.niedersachsen.de'
2009/07/15 06:51:56| 0x1808f160 adding entry: 27 at 3
2009/07/15 06:51:56| created entry 0x17fcc990: 'Pragma: no-cache'
2009/07/15 06:51:56| 0x1808f160 adding entry: 37 at 4
2009/07/15 06:51:56| 0x1808f160 lookup for 37
2009/07/15 06:51:56| 0x1808f160: joining for id 37
2009/07/15 06:51:56| 0x1808f160: joined for id 37: no-cache
2009/07/15 06:51:56| 0x1808f160 lookup for 7
2009/07/15 06:51:56| 0x1808f160 lookup for 7
2009/07/15 06:51:56| 0x1808f160 lookup for 40
2009/07/15 06:51:56| 0x1808f160 lookup for 52
2009/07/15 06:51:56| clientInterpretRequestHeaders: REQ_NOCACHE = SET
2009/07/15 06:51:56| clientInterpretRequestHeaders: REQ_CACHABLE = NOT SET
2009/07/15 06:51:56| clientInterpretRequestHeaders: REQ_HIERARCHICAL = NOT SET
2009/07/15 06:51:56| clientProcessRequest: CONNECT 
'http.justiz.niedersachsen.de:443'
2009/07/15 06:51:56| aclCheckFast: list: (nil)
2009/07/15 06:51:56| aclCheckFast: no matches, returning: 1
2009/07/15 06:51:56| sslStart: 'CONNECT http.justiz.niedersachsen.de:443'
2009/07/15 06:51:56| comm_open: FD 58 is a new socket
2009/07/15 06:51:56| fd_open FD 58 http.justiz.niedersachsen.de:443
2009/07/15 06:51:56| comm_add_close_handler: FD 58, handler=0x463e31, 
data=0x1808d378

> -----Original Message-----
> From: Jarosch, Ralph [mailto:ralph.jaro...@justiz.niedersachsen.de]
> Sent: Tuesday, 14 July 2009 11:40
> To: squid-users@squid-cache.org
> Subject: AW: [squid-users] https from different Subnet not working
> 
> > -----Original Message-----
> > From: adrian.ch...@gmail.com [mailto:adrian.ch...@gmail.com] On
> behalf
> > of Adrian Chadd
> > Sent: Tuesday, 14 July 2009 11:16
> > To: Jarosch, Ralph
> > Cc: squid-users@squid-cache.org
> > Subject: Re: [squid-users] https from different Subnet not working
> >
> > 2009/7/14 Jarosch, Ralph :
> > > This is the latest supported squid-2 version for RHEL5.3
> > >
> > > And I want to use the dnsserver
> >
> > Right. Well, besides the other posters' response about the cache peer
> > setup being a clue - you're choosing a peer based on source IP as far
> > as I can tell there - which leads me to think that perhaps that
> > particular cache has a problem. You didn't say which caches they were
> > in your config or error message so we can't check whether they're the
> > same or different.
> >
> Ok, sorry.
> The current path for a website request is:
> 
> Client --> head proxy (10.37.132.2) --> my cache proxies
> (10.37.132.5/6/7/8) --> our ISP's proxy --> internet
> 
> The error message comes from the ISP proxy when I request
> something like https://www.ebay.com:
> 
>  The requested URL could not be retrieved
>  ----------------------------------------
>  While trying to retrieve the URL: http.yyy.xxx:443
>   (yyy.xxx is our local domain)
>  The following error was encountered:
>  Unable to determine IP address from host name for
>  The dnsserver returned:
>  Name Error: The domain name does not exist.
>  This means that:
>   The cache was not able to resolve the hostname presented in the URL.
>   Check if the address is correct.
>  Your cache administrator is webmaster.
>  ----------------------------------------
>  Generated Tue, 14 Jul 2009 08:10:39 GMT by xxx
>   (the answer comes from the ISP)
>  (squid/2.5.STABLE12)
> 
> I've made a tcpdump between our head proxy and our cache proxies, and
> there I can see that the head proxy changes the request from
> https://www.ebay.com to https.our.domain.com
> 
> 
> 
> > But since you're using a supported squid for RHEL5.3, why don't you
> > contact Redhat for support? That is what you're paying them for.
> >
> >
> > adrian



Re: [squid-users] Reverse Proxy: Why does one file get disk hit, but the other memory hit (consistently)?

2009-07-14 Thread Amos Jeffries
On Tue, 14 Jul 2009 16:16:54 +0800, Drunkard  wrote:
> On Tue, 2009-07-14 at 00:41 -0700, Elli Albek wrote:
>> Hi,
>> We have squid as reverse proxy that caches files. There are two types of
>> cacheable files. I see in the log that one type always gets TCP_HIT:NONE
>> (response from disk cache) and the other type always gets
>> TCP_MEM_HIT:NONE
>> (response from memory cache).
>> 
>> What is the reason that one file type is not cached in memory, but still
>> cached on disk? If I ask this file a few times in a row it should be in
>> memory, but it is always on disk. If squid caches it with LRU, shouldn't
>> it
>> be in the memory cache if it's the last file accessed?

No, it's in the cache; as the last file accessed it jumps to the front of
the _index_.
The actual storage location only changes when the object changes and needs
replacing.

You can use a PURGE request to drop it from the cache, and the next request
will cache it to memory if there is space.  When things move to disk they
usually stay there. And everything gets flushed to disk for safekeeping
when squid is restarted.


>> 
>> Response headers for files that are cached always on DISK:
>> 
>> HTTP/1.x 200 OK
>> Expires: Tue, 14 Jul 2009 07:39:15 GMT
>> Cache-Control: max-age=600
>> Content-Type: text/xml;charset=UTF-8
>> Content-Length: 7963
>> Date: Tue, 14 Jul 2009 07:29:15 GMT
>> Age: 18
>> X-Cache: HIT from www.foo.com
>> Via: 1.1 www.foo.com (squid/2.7.STABLE6)
>> Connection: keep-alive
>> 
>> Headers for files that are cached always in MEMORY:
>> 
>> HTTP/1.x 200 OK
>> Etag: W/"7624-123732799"
>> Last-Modified: Tue, 17 Mar 2009 22:13:10 GMT
>> Content-Type: image/gif
>> Content-Length: 7624
>> Expires: Thu, 13 Aug 2009 05:10:48 GMT
>> Cache-Control: max-age=2592001
>> Date: Tue, 14 Jul 2009 07:08:58 GMT
>> Age: 187
>> X-Cache: HIT from www.foo.com
>> Via: 1.1 www.foo.com (squid/2.7.STABLE6)
>> Connection: keep-alive
>> 
>> This is consistent. One file is always on disk, the other always in
>> memory.
>> No matter how many times I refresh.
>> 
>> Is there any http header that I can add to the first file to get it into
>> memory cache? Etag?
>> 
>> Relevant squid conf:
>> cache_dir aufs /usr/local/squid/var/cache 200 16 256
>> 
>> The rest is the default config file with reverse proxy configuration
>> (should
>> be cache_replacement_policy lru). There is also an ACL that blocks
>> certain
>> folders, this should not affect the LRU policy.
>> 
>> Thanks
> Maybe this helps:
>   maximum_object_size_in_memory 8192 KB
>   maximum_object_size 1024000 KB
>   store_avg_object_size 130 KB

I'm not sure it will help, but those are the options to look at tuning to
permit/deny larger objects into memory.
Note that maximum_object_size_in_memory won't work unless it is smaller
than the more global maximum_object_size.
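A sketch of the knobs named above, in squid.conf form (the values are illustrative only, not recommendations):

```
# squid.conf -- illustrative sizes only.
# maximum_object_size_in_memory must be smaller than the global
# maximum_object_size, or it has no effect.
cache_mem 256 MB                      # total memory-cache budget
maximum_object_size 100 MB            # largest object cached at all
maximum_object_size_in_memory 512 KB  # largest object kept in RAM
```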

Amos



Re: [squid-users] Very strange squid problem

2009-07-14 Thread Henrik Nordstrom
Thu 2009-07-09 at 07:20 -0700, jotacekm wrote:
> We have a proxy server running squid with a good number of clients connected.
> 
> Sometimes clients just can't browse pages anymore (not all clients, but
> most). They can ping the proxy, ping outside servers, and resolve DNS, and
> so can the proxy server.

Usually when I see this on Linux systems it's the Netfilter conntrack table
being full.

Check /var/log/messages to see if you have any warnings about this...
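A quick way to check is sketched below; the /proc paths vary by kernel version, so both the older ip_conntrack and the newer nf_conntrack names are probed:

```shell
# Print conntrack usage if the counters exist on this kernel.
for f in /proc/sys/net/netfilter/nf_conntrack_count \
         /proc/sys/net/netfilter/nf_conntrack_max \
         /proc/sys/net/ipv4/netfilter/ip_conntrack_count \
         /proc/sys/net/ipv4/netfilter/ip_conntrack_max; do
  [ -r "$f" ] && printf '%s: %s\n' "$f" "$(cat "$f")" || true
done

# Count the tell-tale kernel warnings, tolerating a missing log file.
warnings=$(grep -ci "conntrack: table full" /var/log/messages 2>/dev/null || true)
echo "table-full warnings: ${warnings:-0}"
```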

Regards
Henrik



Re: [squid-users] users bypassing rules.. Help!?

2009-07-14 Thread Henrik Nordstrom
Sun 2009-07-12 at 20:15 -0700, nyoman karna wrote:
> I had the same problem,
> and after some discussion with the experts
> I believe there's nothing we can do to block users
> from using another http proxy service.

The only effective method is to

1. Have a terms-of-use which forbids using such proxy servers.

2. Actively enforce the terms-of-use by hunting down users not following
the rules.

Any purely technical blocking measure that isn't actively followed up
by policy actions will just lead to a war between you, as proxy admin,
making rules and your users trying to bypass whatever rules you set up.

Regards
Henrik



Re: [squid-users] Reverse Proxy: Why does one file get disk hit, but the other memory hit (consistently)?

2009-07-14 Thread Henrik Nordstrom
Tue 2009-07-14 at 00:41 -0700, Elli Albek wrote:

> What is the reason that one file type is not cached in memory, but still
> cached on disk? If I ask this file a few times in a row it should be in
> memory, but it is always on disk. If squid caches it with LRU, shouldn't it
> be in the memory cache if it's the last file accessed?

There are many factors.

1. squid.conf limits on what is kept in memory

2. The silly fact that once an object has been purged from memory for
one reason or another it won't get back into memory until it is
refreshed from the origin server. Only objects fetched from network can
end up in the memory cache currently.

Regards
Henrik



Re: [squid-users] user problem

2009-07-14 Thread Chris Robertson

espoire20 wrote:

Matt Harrison-3 wrote:
  

espoire20 wrote:


I have a small problem with squid access lists. I need to block the IP
address of a machine so it does not connect to the internet, even if it
has the address of the proxy and the port set in its Internet options.
Is that possible?
 
 
Because I have some people who install Mozilla Firefox, put in the
address of the proxy and the port, and it connects, or it connects
with another person's user account.
 
I use this, but it's not working:
 
acl user1 src 10.60.6.7 
httpd_access deny user1 
  

Try it with

http_access deny user1

HTH

Matt



Excuse me, I meant http, not httpd, but it's still not working.

I will explain: I blocked internet access for everyone; if anyone wants
internet, I add the proxy address and port in their browser. But I need
the blocked IP address not to be able to access the internet even if it
adds the proxy IP and port in the browser.

What can we do?
  


Share the rest of your config (preferably without comments and blank 
lines), or read the FAQ on ACLs 
(http://wiki.squid-cache.org/SquidFaq/SquidAcl).  You are likely 
allowing the traffic somewhere before the deny statement.
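A minimal sketch of the ordering pitfall described above (the localnet ACL and addresses are hypothetical; http_access rules are evaluated top to bottom and the first match wins):

```
acl user1 src 10.60.6.7
acl localnet src 10.60.0.0/16

http_access deny user1       # must precede any rule that allows this client
http_access allow localnet
http_access deny all
```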


many thanks 
  


Chris



Re: [squid-users] Logging both x-forwarded_for and direct client address {Scanned by HBit}

2009-07-14 Thread Paul

Paul wrote:

Hi

I have several remote sites running a local copy of squid that in turn 
use a parent squid.


I want to improve the logging on the parent squid. At the moment each 
site has 1 real external IP address and many internal IPs, e.g. 
192.168.1.x; in the parent squid access log I see just the 1 real IP 
address for all requests, as expected.


I've now configured the remote sites' squid to add an X-Forwarded-For 
header and I've set


follow_x_forwarded_for allow all
acl_uses_indirect_client off
log_uses_indirect_client on

on the parent squid, the access log now shows the internal IP making 
the request i.e. 192.168.1.x


However all my sites have the same 192.168.1.x range, is there any way 
to make it log both the external IP AND the internal IP e.g. 
78.128.146.22,192.168.1.12 so I can narrow down what IP at each site 
is viewing?


I don't think there is such a config, but if anyone has any 
ideas/pointers on how to achieve this I would be grateful.




Another thing I have just noticed: I use squidGuard as a redirector on 
the parent squid, and it's being passed the internal IP, not the direct 
IP. Is there a way to pass squidGuard the direct access IP? I only want 
to log the local IP in access.log; I don't want it to be used for any 
ACLs or redirectors.
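On a squid 2.6+ parent, one way to get both addresses on the same log line is a custom logformat that keeps the direct peer in %>a (with log_uses_indirect_client off) and records the X-Forwarded-For header verbatim. A sketch, with the format name and log path hypothetical:

```
follow_x_forwarded_for allow all
log_uses_indirect_client off
acl_uses_indirect_client off
# The default squid format plus the raw X-Forwarded-For header in quotes:
logformat withfwd %ts.%03tu %6tr %>a "%{X-Forwarded-For}>h" %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt
access_log /var/log/squid/access.log withfwd
```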


Paul


[squid-users] Logging both x-forwarded_for and direct client address

2009-07-14 Thread Paul

Hi

I have several remote sites running a local copy of squid that in turn 
use a parent squid.


I want to improve the logging on the parent squid. At the moment each 
site has 1 real external IP address and many internal IPs, e.g. 
192.168.1.x; in the parent squid access log I see just the 1 real IP 
address for all requests, as expected.


I've now configured the remote sites' squid to add an X-Forwarded-For 
header and I've set


follow_x_forwarded_for allow all
acl_uses_indirect_client off
log_uses_indirect_client on

on the parent squid, the access log now shows the internal IP making the 
request i.e. 192.168.1.x


However all my sites have the same 192.168.1.x range, is there any way 
to make it log both the external IP AND the internal IP e.g. 
78.128.146.22,192.168.1.12 so I can narrow down what IP at each site is 
viewing?


I don't think there is such a config, but if anyone has any ideas/pointers 
on how to achieve this I would be grateful.


Thanks

Paul




[squid-users] sqstat and sarg with squid?

2009-07-14 Thread RoLaNd RoLaNd

Hello all,

I've just finished setting up sarg with squid. It's pretty decent and does
what it says.
Though I came to the need of knowing each user's specific live bandwidth
usage, which is where I came across sqstat.

Even though I've gone into squid.conf and replaced "http_access allow
manager localhost" with "http_access allow manager all" (just so I get
things up and running), I still receive the following error when I try to
access sqstat's page:

SqStat error: Error (13): Permission denied

Any idea why this would be happening?


AW: [squid-users] https from different Subnet not working

2009-07-14 Thread Jarosch, Ralph
> -----Original Message-----
> From: adrian.ch...@gmail.com [mailto:adrian.ch...@gmail.com] On behalf
> of Adrian Chadd
> Sent: Tuesday, 14 July 2009 11:16
> To: Jarosch, Ralph
> Cc: squid-users@squid-cache.org
> Subject: Re: [squid-users] https from different Subnet not working
> 
> 2009/7/14 Jarosch, Ralph :
> > This is the latest supported squid-2 version for RHEL5.3
> >
> > And I want to use the dnsserver
> 
> Right. Well, besides the other posters' response about the cache peer
> setup being a clue - you're choosing a peer based on source IP as far
> as I can tell there - which leads me to think that perhaps that
> particular cache has a problem. You didn't say which caches they were
> in your config or error message so we can't check whether they're the
> same or different.
> 
Ok, sorry.
The current path for a website request is:

Client --> head proxy (10.37.132.2) --> my cache proxies (10.37.132.5/6/7/8) -->
our ISP's proxy --> internet

The error message comes from the ISP proxy when I request something
like https://www.ebay.com:

 The requested URL could not be retrieved
 ----------------------------------------
 While trying to retrieve the URL: http.yyy.xxx:443
      (yyy.xxx is our local domain)
 The following error was encountered:
 Unable to determine IP address from host name for
 The dnsserver returned:
 Name Error: The domain name does not exist.
 This means that: 
  The cache was not able to resolve the hostname presented in the URL. 
  Check if the address is correct. 
 Your cache administrator is webmaster. 
 ----------------------------------------
 Generated Tue, 14 Jul 2009 08:10:39 GMT by xxx
      (the answer comes from the ISP)
 (squid/2.5.STABLE12)

I've made a tcpdump between our head proxy and our cache proxies, and there
I can see that the head proxy changes the request from https://www.ebay.com
to https.our.domain.com
 


> But since you're using a supported squid for RHEL5.3, why don't you
> contact Redhat for support? That is what you're paying them for.
> 
> 
> adrian



Re: [squid-users] https from different Subnet not working

2009-07-14 Thread Adrian Chadd
2009/7/14 Jarosch, Ralph :
> This is the latest supported squid-2 version for RHEL5.3
>
> And I want to use the dnsserver

Right. Well, besides the other posters' response about the cache peer
setup being a clue - you're choosing a peer based on source IP as far
as I can tell there - which leads me to think that perhaps that
particular cache has a problem. You didn't say which caches they were
in your config or error message so we can't check whether they're the
same or different.

But since you're using a supported squid for RHEL5.3, why don't you
contact Redhat for support? That is what you're paying them for.


adrian


AW: [squid-users] https from different Subnet not working

2009-07-14 Thread Jarosch, Ralph
I kept wondering about the squid version.
The correct version, as described in the squid.conf, is 2.6.STABLE21.

Sorry


-----Original Message-----
From: Gavin McCullagh [mailto:gavin.mccull...@gcd.ie]
Sent: Tuesday, 14 July 2009 10:48
To: squid-users@squid-cache.org
Subject: Re: [squid-users] https from different Subnet not working

Hi Ralph,

I'll add a couple of thoughts, but not really an answer.

On Tue, 14 Jul 2009, Jarosch, Ralph wrote:

> If I connect from a branch office with the subnet 10.37.34.*/24 to an https
> website, I have no problems.
> If I do the same from another location with a subnet like 10.39.85.*/24, I
> get the following error message.

Presumably you're using the same URL to test in both places and the same
proxy settings?

I'll note in passing that you're running a very ancient version of squid
(2.5.STABLE12).  I doubt an upgrade would fix your problem, but at some
point, you should consider an upgrade nonetheless.

> The requested URL could not be retrieved
> 
> While trying to retrieve the URL: http.yyy.xxx:443 
> The following error was encountered: 
> Unable to determine IP address from host name for 
> The dnsserver returned: 
> Name Error: The domain name does not exist. 
> This means that: 
>  The cache was not able to resolve the hostname presented in the URL. 
>  Check if the address is correct. 
> Your cache administrator is webmaster. 
> 
> Generated Tue, 14 Jul 2009 08:10:39 GMT by xxx (squid/2.5.STABLE12)
> 
> The requester url was https://www.ebay.com

It's a little odd that you removed the URL from the output, only to tell us
it afterward, but how and ever.  Also, you've removed the name of the web
proxy that generated the error, which is a little unhelpful as you appear
to have 5 proxy servers.

What the above error tells you is that the squid web proxy couldn't get a
DNS response for the site you wanted to go to, ie

"  The cache was not able to resolve the hostname presented in the URL."

It seems surprising that that problem would happen in a repeatable way that
affected one client but not another.

I note that you have several parent cache peers:

> cache_peer 10.37.132.5 parent 3128 7 no-query proxy-only no-digest sourcehash
> cache_peer 10.37.132.6 parent 3128 7 no-query proxy-only no-digest sourcehash
> cache_peer 10.37.132.7 parent 3128 7 no-query proxy-only no-digest sourcehash
> cache_peer 10.37.132.8 parent 3128 7 no-query proxy-only no-digest sourcehash

I wonder, could it be that only one of the cache peers is having DNS issues?
Could you point a browser directly at each individual parent cache and see
whether you can get the webpage you're looking for?
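That check can be scripted; a sketch using curl (the parent addresses come from the config quoted above, the test URL is hypothetical, and an unreachable peer is simply reported):

```shell
# Ask each parent cache directly for the same URL and print the
# HTTP status it returns (curl issues CONNECT through the proxy
# for an https URL).
for peer in 10.37.132.5 10.37.132.6 10.37.132.7 10.37.132.8; do
  echo "== parent $peer =="
  curl -sS --max-time 5 -o /dev/null -w '%{http_code}\n' \
       -x "http://$peer:3128" https://www.ebay.com || echo "no answer from $peer"
done
```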

Gavin



WG: [squid-users] https from different Subnet not working

2009-07-14 Thread Jarosch, Ralph
Hi Gavin, 

>Hi Ralph,

>I'll add a couple of thoughts, but not really an answer.

>On Tue, 14 Jul 2009, Jarosch, Ralph wrote:

>> If I connect from a branch office with the subnet 10.37.34.*/24 to an https
>> website, I have no problems.
>> If I do the same from another location with a subnet like 10.39.85.*/24, I
>> get the following error message.

>Presumably you're using the same URL to test in both places and the same
>proxy settings?

Yes, this is correct: same URL in both locations.

>I'll note in passing that you're running a very ancient version of squid
> (2.5.STABLE12).  I doubt an upgrade would fix your problem, but at some
>point, you should consider an upgrade nonetheless.

The problem is that Redhat only supports Squid 2.5.12.

>> The requested URL could not be retrieved
>> 
>> While trying to retrieve the URL: http.yyy.xxx:443 
>> The following error was encountered: 
>> Unable to determine IP address from host name for 
>> The dnsserver returned: 
>> Name Error: The domain name does not exist. 
>> This means that: 
>>  The cache was not able to resolve the hostname presented in the URL. 
>>  Check if the address is correct. 
>> Your cache administrator is webmaster. 
>> 
>> Generated Tue, 14 Jul 2009 08:10:39 GMT by xxx (squid/2.5.STABLE12)
>> 
>> The requester url was https://www.ebay.com

>It's a little odd that you removed the URL from the output, only to tell us
>it afterward, but how and ever.  Also, you've removed the name of the web
>proxy that generated the error, which is a little unhelpful as you appear
>to have 5 proxy servers.

OK, yyy.xxx is the FQDN of our local domain.

>What the above error tells you is that the squid web proxy couldn't get a
>DNS response for the site you wanted to go to, ie

OK, I know.
The response comes from the parent proxy of the cache_peers.

>"  The cache was not able to resolve the hostname presented in the URL."

>It seems surprising that that problem would happen in a repeatable way that
>affected one client but not another.

Absolutely. It is absolutely crazy that all class C networks under 10.37
work fine and all the others, with addresses like 10.59, 10.39 and 10.61,
don't work.

>I note that you have several parent cache peers:

>> cache_peer 10.37.132.5 parent 3128 7 no-query proxy-only no-digest sourcehash
>> cache_peer 10.37.132.6 parent 3128 7 no-query proxy-only no-digest sourcehash
>> cache_peer 10.37.132.7 parent 3128 7 no-query proxy-only no-digest sourcehash
>> cache_peer 10.37.132.8 parent 3128 7 no-query proxy-only no-digest sourcehash

>I wonder could it be that only one of the cache peers is having DNS issues?
>Could you point a browser directly at each individual parent cache and see
>can you get the webpage you're looking for.
 
No, it happens on all of the parent proxies.

>Gavin

Ralph



AW: [squid-users] https from different Subnet not working

2009-07-14 Thread Jarosch, Ralph
This is the latest supported squid-2 version for RHEL 5.3.

And I want to use the dnsserver.
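If the external dnsserver helper is what is wanted (rather than Squid's internal resolver), its behaviour is controlled in squid.conf; a minimal sketch, with values that are assumptions rather than anything taken from this thread:

```
# squid.conf sketch (hypothetical values)
# Number of external dnsserver helper processes to spawn:
dns_children 10
# Resolvers the helpers should use (otherwise /etc/resolv.conf applies):
dns_nameservers 10.37.132.1
```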

-Ursprüngliche Nachricht-
Von: adrian.ch...@gmail.com [mailto:adrian.ch...@gmail.com] Im Auftrag von 
Adrian Chadd
Gesendet: Dienstag, 14. Juli 2009 10:38
An: Jarosch, Ralph
Betreff: Re: [squid-users] https from different Subnet not working

The first thing you should do is upgrade to the latest Squid-2 or
Squid-3, depending upon your environment needs.

Secondly, you should evaluate whether you truly want to use
dnsserver, or whether you can use the internal DNS resolver.

HTH,


Adrian


2009/7/14 Jarosch, Ralph :
> Hello everyone,
>
> Once again I have a small problem with my Squid servers. The setup is as
> follows.
>
> We have various network segments split across the individual sites:
> 10.37.*.*, 10.39.*.*, 10.55.*.*, all /24. They all reach the Internet via
> VPN through the proxy at our headquarters. The proxy system consists of a
> front proxy plus four parent proxies behind it that act as cache systems.
>
> There is also a squidGuard running on the same machine as the front proxy.
> I can reach http sites on the intranet and the Internet from all networks
> without problems. When I open https sites, however, they only work from
> the 10.37 networks. From all other networks the request gets mangled,
> e.g. https://www.bank.de becomes http.bank.de.
>
> I am at my wits' end. Maybe one of you can spot my mistake. I would be
> grateful for any tip.
>
> Here is my config from the front proxy:
>
>
> Hi all,
>
> I have a little problem with my Squid proxies.
>
> We have different class C subnets at our branch offices (10.37.*.*,
> 10.39.*.*).
> All of them connect to our main location by VPN.
> The Squid proxy is located at our main location.
> If I connect from a branch office with the subnet 10.37.34.*/24 to an https
> website, I have no problems.
> If I do the same from another location with a subnet like 10.39.85.*/24, I
> get the following error message.
>
>
>
> The requested URL could not be retrieved
> 
> While trying to retrieve the URL: http.yyy.xxx:443
> The following error was encountered:
> Unable to determine IP address from host name for
> The dnsserver returned:
> Name Error: The domain name does not exist.
> This means that:
>  The cache was not able to resolve the hostname presented in the URL.
>  Check if the address is correct.
> Your cache administrator is webmaster.
> 
> Generated Tue, 14 Jul 2009 08:10:39 GMT by xxx (squid/2.5.STABLE12)
>
>
> The requested URL was https://www.ebay.com
>
> My squid.conf:
>
> acl all src 0.0.0.0/0.0.0.0
> acl netze src 10.39.0.0/16, 10.38.0.0/16, 10.37.0.0/16, 10.40.0.0/16, 
> 10.41.0.0/16, 10.55.0.0/16, 10.59.0.0/16, 10.61.0.0/16, 10.66.0.0/16, 
> 10.68.0.0/16
> acl manager proto cache_object
> acl localhost src 127.0.0.1/255.255.255.255
> acl to_localhost dst 127.0.0.0/8
> acl SSL_ports port 443 563 8080 3443 8443 4443
> acl Safe_ports port 80          # http
> acl Safe_ports port 21          # ftp
> acl Safe_ports port 443         # https
> acl Safe_ports port 70          # gopher
> acl Safe_ports port 210         # wais
> acl Safe_ports port 1025-65535  # unregistered ports
> acl Safe_ports port 280         # http-mgmt
> acl Safe_ports port 488         # gss-http
> acl Safe_ports port 591         # filemaker
> acl Safe_ports port 777         # multiling http
> acl CONNECT method CONNECT
> http_access allow manager localhost netze
> http_access deny manager
> http_access deny !Safe_ports
> http_access deny CONNECT !SSL_ports
> http_access allow netze
> http_access allow localhost
> http_access deny all
> icp_access allow all
>  follow_x_forwarded_for allow netze
> http_port 3128
> cache_peer 10.37.132.5 parent 3128 7 no-query proxy-only no-digest sourcehash
> cache_peer 10.37.132.6 parent 3128 7 no-query proxy-only no-digest sourcehash
> cache_peer 10.37.132.7 parent 3128 7 no-query proxy-only no-digest sourcehash
> cache_peer 10.37.132.8 parent 3128 7 no-query proxy-only no-digest sourcehash
> hierarchy_stoplist cgi-bin ?
> access_log /data/log/access.log squid
> debug_options ALL,9
> url_rewrite_program /usr/local/bin/squidGuard
>  redirector_bypass off
> acl QUERY urlpath_regex cgi-bin \?
> cache deny QUERY
> refresh_pattern ^ftp:           1440    20%     10080
> refresh_pattern ^gopher:        1440    0%      1440
> refresh_pattern .               0       20%     4320
> acl apache rep_header Server ^Apache
> broken_vary_encoding allow apache
> visible_hostname proxy.yyy.xxx.de
> acl local-server dst 10.39.0.0/16, 10.38.0.0/16, 10.37.0.0/16, 10.40.0.0/16, 
> 10.41.0.0/16, 10.55.0.0/16, 10.59.0.0/16, 10.61.0.0/16, 10.66.0.0/16, 
> 10.68.0.0/16
> acl local-webserver dstdomain *.yyy.xxx.de
> always_direct allow local-server
> always_direct allow local-webserver
> never_direct allow all
> append_domain .yyy.xxx.de
> forwarded_for on
> coredump_dir /var/spool/squid

Re: [squid-users] https from different Subnet not working

2009-07-14 Thread Gavin McCullagh
Hi Ralph,

I'll add a couple of thoughts, but not really an answer.

On Tue, 14 Jul 2009, Jarosch, Ralph wrote:

> If I connect from a branch office with the subnet 10.37.34.*/24 to an https
> website, I have no problems.
> If I do the same from another location with a subnet like 10.39.85.*/24, I
> get the following error message.

Presumably you're using the same URL to test in both places and the same
proxy settings?

I'll note in passing that you're running a very ancient version of squid
(2.5.STABLE12).  I doubt an upgrade would fix your problem, but at some
point, you should consider an upgrade nonetheless.

> The requested URL could not be retrieved
> 
> While trying to retrieve the URL: http.yyy.xxx:443 
> The following error was encountered: 
> Unable to determine IP address from host name for 
> The dnsserver returned: 
> Name Error: The domain name does not exist. 
> This means that: 
>  The cache was not able to resolve the hostname presented in the URL. 
>  Check if the address is correct. 
> Your cache administrator is webmaster. 
> 
> Generated Tue, 14 Jul 2009 08:10:39 GMT by xxx (squid/2.5.STABLE12)
> 
> The requested URL was https://www.ebay.com

It's a little odd that you removed the URL from the output, only to tell us
it afterward, but how and ever.  Also, you've removed the name of the web
proxy that generated the error, which is a little unhelpful as you appear
to have 5 proxy servers.

What the above error tells you is that the squid web proxy couldn't get a
DNS response for the site you wanted to go to, ie

"  The cache was not able to resolve the hostname presented in the URL."

It seems surprising that that problem would happen in a repeatable way that
affected one client but not another.

I note that you have several parent cache peers:

> cache_peer 10.37.132.5 parent 3128 7 no-query proxy-only no-digest sourcehash
> cache_peer 10.37.132.6 parent 3128 7 no-query proxy-only no-digest sourcehash
> cache_peer 10.37.132.7 parent 3128 7 no-query proxy-only no-digest sourcehash
> cache_peer 10.37.132.8 parent 3128 7 no-query proxy-only no-digest sourcehash

I wonder could it be that only one of the cache peers is having DNS issues?
Could you point a browser directly at each individual parent cache and see
can you get the webpage you're looking for.
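Gavin's per-peer check can be scripted; here is a small sketch that prints one ready-to-run curl command per parent, so each peer can be tested directly while bypassing the front proxy. The peer IPs come from the squid.conf in this thread, but the test URL and the use of curl are assumptions:

```shell
# Print one curl invocation per parent cache. Peer addresses come from the
# cache_peer lines; https://www.ebay.com is just the example URL from the
# thread.
for peer in 10.37.132.5 10.37.132.6 10.37.132.7 10.37.132.8; do
    printf 'http_proxy=http://%s:3128 https_proxy=http://%s:3128 curl -sv https://www.ebay.com\n' \
        "$peer" "$peer"
done
```

Running any one of the printed commands against a single parent should show whether that peer alone fails to resolve the hostname.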

Gavin



Re: [squid-users] user problem

2009-07-14 Thread espoire20



Matt Harrison-3 wrote:
> 
> espoire20 wrote:
>> have a small problem with squid in access list, I need to block an IP
>> address
>> of a machine does not connect to internet even if it has the address of
>> the
>> proxy and port in the Internet option is that it is possible ? 
>>  
>>  
>> because I have some person who installs firefox mozzila he put the
>> address
>> of the proxy and the port it connects or it connects with a user of
>> another
>> person 
>>  
>> i use this but not working : 
>>  
>> acl user1 src 10.60.6.7 
>> httpd_access deny user1 
> 
> Try it with
> 
> http_access deny user1
> 
> HTH
> 
> Matt
> 
> 
Excuse me, I meant http, not httpd, but it is still not working.

Let me explain: I have blocked Internet access for everyone. If someone
needs Internet access, I add the proxy address and port in their browser
settings. But I need one blocked IP address to have no Internet access at
all, even if the user enters the proxy IP and port in the browser
themselves.

What can we do?

Many thanks
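As a point of reference, squid evaluates http_access rules top to bottom and stops at the first match, so the deny for the single machine has to sit above whatever allow rule currently admits the network. A minimal sketch (the ACL names are illustrative, not from the poster's config):

```
# squid.conf sketch -- order matters: the first matching rule wins
acl blocked_pc src 10.60.6.7
acl localnet   src 10.60.0.0/16

http_access deny  blocked_pc    # must come before the allow below
http_access allow localnet
http_access deny  all
```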


-- 
View this message in context: 
http://www.nabble.com/user-problem-tp24458799p24475726.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] Reverse Proxy: Why does one file get disk hit, but the other memory hit (consistently)?

2009-07-14 Thread Drunkard
On Tue, 2009-07-14 at 00:41 -0700, Elli Albek wrote:
> Hi,
> We have squid as reverse proxy that caches files. There are two types of
> cacheable files. I see in the log that one type always gets TCP_HIT:NONE
> (response from disk cache) and the other type always gets TCP_MEM_HIT:NONE
> (response from memory cache).
> 
> What is the reason that one file type is not cached in memory, but still
> cached on disk? If I ask this file a few times in a row it should be in
> memory, but it is always on disk. If squid caches it with LRU, shouldn't it
> be in the memory cache if it's the last file accessed?
> 
> Response headers for files that are cached always on DISK:
> 
> HTTP/1.x 200 OK
> Expires: Tue, 14 Jul 2009 07:39:15 GMT
> Cache-Control: max-age=600
> Content-Type: text/xml;charset=UTF-8
> Content-Length: 7963
> Date: Tue, 14 Jul 2009 07:29:15 GMT
> Age: 18
> X-Cache: HIT from www.foo.com
> Via: 1.1 www.foo.com (squid/2.7.STABLE6)
> Connection: keep-alive
> 
> Headers for files that are cached always in MEMORY:
> 
> HTTP/1.x 200 OK
> Etag: W/"7624-123732799"
> Last-Modified: Tue, 17 Mar 2009 22:13:10 GMT
> Content-Type: image/gif
> Content-Length: 7624
> Expires: Thu, 13 Aug 2009 05:10:48 GMT
> Cache-Control: max-age=2592001
> Date: Tue, 14 Jul 2009 07:08:58 GMT
> Age: 187
> X-Cache: HIT from www.foo.com
> Via: 1.1 www.foo.com (squid/2.7.STABLE6)
> Connection: keep-alive
> 
> This is consistent. One file is always on disk, the other always in memory.
> No matter how many times I refresh.
> 
> Is there any http header that I can add to the first file to get it into
> memory cache? Etag?
> 
> Relevant squid conf:
> cache_dir aufs /usr/local/squid/var/cache 200 16 256
> 
> The rest is the default config file with reverse proxy configuration (should
> be cache_replacement_policy lru). There is also an ACL that blocks certain
> folders, this should not affect the LRU policy.
> 
> Thanks
Maybe this helps:
maximum_object_size_in_memory 8192 KB
maximum_object_size 1024000 KB
store_avg_object_size 130 KB
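For context on why those directives may matter: in squid 2.x, only objects up to maximum_object_size_in_memory (default 8 KB) are kept in the memory cache, and the stored object includes the reply headers. The XML response above has a 7963-byte body, which plus headers can tip over the 8 KB default, while the 7624-byte GIF squeezes under it; that is a plausible, though unconfirmed, explanation for the consistent disk/memory split. A sketch:

```
# squid.conf sketch (values are illustrative)
# Allow larger objects to be served from RAM:
maximum_object_size_in_memory 64 KB
# And give the memory cache room to hold them:
cache_mem 256 MB
```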



[squid-users] https from different Subnet not working

2009-07-14 Thread Jarosch, Ralph
Hello everyone,

Once again I have a small problem with my Squid servers. The setup is as
follows.

We have various network segments split across the individual sites:
10.37.*.*, 10.39.*.*, 10.55.*.*, all /24. They all reach the Internet via
VPN through the proxy at our headquarters. The proxy system consists of a
front proxy plus four parent proxies behind it that act as cache systems.

There is also a squidGuard running on the same machine as the front proxy.
I can reach http sites on the intranet and the Internet from all networks
without problems. When I open https sites, however, they only work from the
10.37 networks. From all other networks the request gets mangled, e.g.
https://www.bank.de becomes http.bank.de.

I am at my wits' end. Maybe one of you can spot my mistake. I would be
grateful for any tip.

Here is my config from the front proxy:


Hi all,

I have a little problem with my Squid proxies.

We have different class C subnets at our branch offices (10.37.*.*,
10.39.*.*).
All of them connect to our main location by VPN.
The Squid proxy is located at our main location.
If I connect from a branch office with the subnet 10.37.34.*/24 to an https
website, I have no problems.
If I do the same from another location with a subnet like 10.39.85.*/24, I
get the following error message.



The requested URL could not be retrieved

While trying to retrieve the URL: http.yyy.xxx:443 
The following error was encountered: 
Unable to determine IP address from host name for 
The dnsserver returned: 
Name Error: The domain name does not exist. 
This means that: 
 The cache was not able to resolve the hostname presented in the URL. 
 Check if the address is correct. 
Your cache administrator is webmaster. 

Generated Tue, 14 Jul 2009 08:10:39 GMT by xxx (squid/2.5.STABLE12)


The requested URL was https://www.ebay.com

My squid.conf:

acl all src 0.0.0.0/0.0.0.0
acl netze src 10.39.0.0/16, 10.38.0.0/16, 10.37.0.0/16, 10.40.0.0/16, 
10.41.0.0/16, 10.55.0.0/16, 10.59.0.0/16, 10.61.0.0/16, 10.66.0.0/16, 
10.68.0.0/16
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563 8080 3443 8443 4443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost netze
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow netze
http_access allow localhost 
http_access deny all
icp_access allow all
 follow_x_forwarded_for allow netze
http_port 3128
cache_peer 10.37.132.5 parent 3128 7 no-query proxy-only no-digest sourcehash
cache_peer 10.37.132.6 parent 3128 7 no-query proxy-only no-digest sourcehash
cache_peer 10.37.132.7 parent 3128 7 no-query proxy-only no-digest sourcehash
cache_peer 10.37.132.8 parent 3128 7 no-query proxy-only no-digest sourcehash
hierarchy_stoplist cgi-bin ?
access_log /data/log/access.log squid
debug_options ALL,9
url_rewrite_program /usr/local/bin/squidGuard
 redirector_bypass off
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
visible_hostname proxy.yyy.xxx.de
acl local-server dst 10.39.0.0/16, 10.38.0.0/16, 10.37.0.0/16, 10.40.0.0/16, 
10.41.0.0/16, 10.55.0.0/16, 10.59.0.0/16, 10.61.0.0/16, 10.66.0.0/16, 
10.68.0.0/16
acl local-webserver dstdomain *.yyy.xxx.de
always_direct allow local-server
always_direct allow local-webserver
never_direct allow all
append_domain .yyy.xxx.de
forwarded_for on
coredump_dir /var/spool/squid
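The mangled URL pattern (https://www.bank.de turning into http.bank.de), together with the url_rewrite_program line above, points at the rewrite layer: a squidGuard rewrite rule with missing "/" characters can produce exactly this kind of damage. A sketch of a broken rule versus an intended one (the rule name and domains are invented for illustration):

```
# squidGuard.conf sketch -- entirely hypothetical, shown only to
# illustrate the failure mode.
rewrite fixup {
        # Broken: missing "/" characters in the replacement collapse the
        # scheme, e.g. https://www.bank.de -> http.bank.de
        #   s@^https://www\.@http.@

        # Intended form, with the :// separator kept intact:
        s@^http://old\.intra\.example\.de/@http://new.intra.example.de/@
}
```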



thanks for help

Ralph Jarosch
ZIB 
Zentraler IT-Betrieb Niedersächsische Justiz

- Technisches Betriebszentrum -
Ralph Jarosch
Schlossplatz 2
29221 Celle
Tel.: +49 (5141) 206-145
Mobil:   +49 (162) 9069470
E-Mail:    ralph.jaro...@justiz.niedersachsen.de
Intranet: http://intra.zib.niedersachsen.de



[squid-users] Reverse Proxy: Why does one file get disk hit, but the other memory hit (consistently)?

2009-07-14 Thread Elli Albek
Hi,
We have squid as reverse proxy that caches files. There are two types of
cacheable files. I see in the log that one type always gets TCP_HIT:NONE
(response from disk cache) and the other type always gets TCP_MEM_HIT:NONE
(response from memory cache).

What is the reason that one file type is not cached in memory, but still
cached on disk? If I ask this file a few times in a row it should be in
memory, but it is always on disk. If squid caches it with LRU, shouldn't it
be in the memory cache if it's the last file accessed?

Response headers for files that are cached always on DISK:

HTTP/1.x 200 OK
Expires: Tue, 14 Jul 2009 07:39:15 GMT
Cache-Control: max-age=600
Content-Type: text/xml;charset=UTF-8
Content-Length: 7963
Date: Tue, 14 Jul 2009 07:29:15 GMT
Age: 18
X-Cache: HIT from www.foo.com
Via: 1.1 www.foo.com (squid/2.7.STABLE6)
Connection: keep-alive

Headers for files that are cached always in MEMORY:

HTTP/1.x 200 OK
Etag: W/"7624-123732799"
Last-Modified: Tue, 17 Mar 2009 22:13:10 GMT
Content-Type: image/gif
Content-Length: 7624
Expires: Thu, 13 Aug 2009 05:10:48 GMT
Cache-Control: max-age=2592001
Date: Tue, 14 Jul 2009 07:08:58 GMT
Age: 187
X-Cache: HIT from www.foo.com
Via: 1.1 www.foo.com (squid/2.7.STABLE6)
Connection: keep-alive

This is consistent. One file is always on disk, the other always in memory.
No matter how many times I refresh.

Is there any http header that I can add to the first file to get it into
memory cache? Etag?

Relevant squid conf:
cache_dir aufs /usr/local/squid/var/cache 200 16 256

The rest is the default config file with reverse proxy configuration (should
be cache_replacement_policy lru). There is also an ACL that blocks certain
folders, this should not affect the LRU policy.

Thanks