Re: [squid-users] Skipping logging certain traffic in access.log?

2009-10-28 Thread Amos Jeffries
On Wed, 28 Oct 2009 10:26:51 -0400, "Kelly, Jack" wrote:
> Hi everyone,
> I have what will probably be a pretty simple question... unfortunately I
> need to provide a few details to help explain what I'm trying to do and
> why.
>  
> One of the big uses of Squid to our managers is seeing how much time
> employees are spending on the internet. To that extent, we've got Squint
> installed for analyzing our logs and generating a shiny report that does
> exactly that, and can be viewed in an html document hosted right on the
> Squid box. Works great. We also authenticate with LDAP so requests can
> be tied to user credentials in Squid. Again, works great.
>  
> Here's where the minor hiccup comes in:
> I have an acl called 'passthrough' which is basically a list of
> domains/keywords/etc that the proxy server will allow requests for
> without prompting the user for their credentials. This comes in handy
> for programs that like to check for updates online, like Adobe Reader
> and iTunes. Unfortunately for my purposes, requests that go through
> unauthenticated are recorded in access.log by requestor IP address,
> which subsequently gets parsed by Squint and adds gobs of useless
> information to the report.
>  
> So, my question:
> Is there any way to get Squid to exclude certain types of records from
> access.log? Or would I be better off just beefing up our PAC file to
> send these 'passthrough' requests around the proxy?
>  
> On second thought, I suppose I could just write and cron a perl script
> that nukes lines containing an IP in our DHCP range right before Squint
> updates. That feels messy though :)
>  
> Thanks everyone!
> Jack
>  

The access_log directive accepts ACLs which map what can be logged to that
file.

You are after something like:
  access_log /foo squid !bypass

Where "squid" is the logformat (if you have your own custom one there, use
that), and "bypass" is the same ACL you use in http_access to bypass
authentication (assuming it's just one ACL for that).
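A fuller sketch, assuming the ACL is the 'passthrough' list from the
original mail, defined as a dstdomain list from a file (the file path and
log location are hypothetical; adjust to your install):

  acl passthrough dstdomain "/etc/squid/passthrough-domains.txt"
  # log everything except requests matched by the passthrough ACL
  access_log /var/log/squid/access.log squid !passthrough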

Amos


[squid-users] Re: adding content to the cache -- a revisit

2009-10-28 Thread Henrik Nordstrom
On Wed 2009-10-28 at 16:13 -0600, bergenp...@comcast.net wrote:
> Does sibling require ICP?  I thought by setting the no-query option, ICP
> wasn't used for that cache_peer...?

Cache digests also work.

Squid needs some means of knowing what content is held by the sibling;
otherwise no requests will get forwarded there.
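For a no-query sibling that means relying on cache digests, something like
(a sketch; the peer address is hypothetical, and both proxies need to be
built with --enable-cache-digests):

  cache_peer 192.0.2.10 sibling 3128 0 no-query

with digest_generation left on (the default) on the sibling, so this proxy
can fetch and consult its digest of the sibling's contents.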

Regards
Henrik



[squid-users] Re: adding content to the cache -- a revisit

2009-10-28 Thread bergenp...@comcast.net


Does sibling require ICP?  I thought by setting the no-query option, ICP
wasn't used for that cache_peer...?


Thanks


Henrik Nordstrom wrote:

> On Wed 2009-10-28 at 11:22 -0600, bergenp...@comcast.net wrote:
>
>> When squid is talking to a cache_peer, is there a way to tell squid
>> that if it gets a 404 back, to then send the request to the origin?
>
> Not sure.. it may if you respond with 500, but not sure about 404..
>
> And why do you need it to? Squid will go there on the first real request
> anyway. With the cache_peer_access rules this peer is only used on these
> automatic "add content" requests.
>
>> Note that I currently have the cache_peer defined as a parent.
>
> What I intended.
>
> Sibling is another option, but requires you to implement an ICP server.
>
> Regards
> Henrik


  




Re: [squid-users] WCCP

2009-10-28 Thread Ross Kovelman
> From: Amos Jeffries
> Date: Tue, 27 Oct 2009 12:17:12 +1300
> To: Ross Kovelman
> Cc: "squid-users@squid-cache.org"
> Subject: Re: [squid-users] WCCP
>
> On Wed, 21 Oct 2009 12:20:00 -0400, Ross Kovelman wrote:
>
>>>>>> I am going to be using WCCP. I did another reconfigure with the
>>>>>> --enable WCCP option. How can I check that it is on and running?
>>>>>> The next step I need to do is upgrade to version 2 since the Cisco
>>>>>> only communicates on version 2. I tried to do the "patch <
>>>>>> upgrade-patch" but then I get a response asking for the path to
>>>>>> upgrade and I am not sure where the file is that I need to patch.
>>>>>
>>>>> There is zero need to patch for WCCPv2 support. It's been built into
>>>>> Squid for many years now.
>>>>>
>>>>> Run "./configure --help".
>>>>>   * If it lists "--disable-wccpv2" there is no need to do anything.
>>>>>   * If it lists "--enable-wccpv2", add that to your build options.
>>>>>   * If it does not mention "wccpv2" at all, upgrade your Squid
>>>>>     version.
>>>>>
>>>>> Then set up squid.conf with the relevant wccp2_* options.
>>>>>
>>>>> http://www.squid-cache.org/Doc/config/ or the wiki example configs
>>>>> have details on those.
>>>>
>>>> Thanks again.
>>>> Running ./configure --help only says this:
>>>>   --disable-wccp      Disable Web Cache Coordination V1 Protocol
>>>>   --disable-wccpv2    Disable Web Cache Coordination V2 Protocol
>>>>
>>>> When I did the install I ran the ./configure --enable-wccp option. I
>>>> didn't say --enable-wccpv2, does this matter? I also have this in
>>>> the config:
>>>>   wccp2_router 192.168.16.1
>>>>   wccp2_forwarding_method 1
>>>>   wccp2_return_method 1
>>>>
>>>> I am running Squid Web Proxy 2.7.STABLE5.
>>>
>>> Okay. That's fine.
>>>
>>> The ./configure results mean that both WCCP versions are built into
>>> Squid by default unless you explicitly say --disable. Nothing extra
>>> needed to build them.
>>>
>>> The config options you have there are already WCCPv2-only options for
>>> Cisco. Nothing new needed there either.
>>>
>>> If that's not working it's a config error somewhere.
>>
>> I am getting this in my cache log:
>>
>>   Accepting proxy HTTP connections at 0.0.0.0, port 3128, FD 20.
>>   commBind: Cannot bind socket FD 21 to *:3128: (48) Address already
>>   in use
>>   Accepting proxy HTTP connections at 0.0.0.0, port 80, FD 21.
>>   commBind: Cannot bind socket FD 22 to *:80: (48) Address already
>>   in use
>
> http://wiki.squid-cache.org/SquidFaq/TroubleShooting#Cannot_bind_socket_FD_NN_to_.2A:8080_.28125.29_Address_already_in_use
>
> I would suspect this as part of the problem. The WCCP router will be
> trying to contact whatever software is already running on port 3128,
> not the Squid you are starting with WCCP config.
>
>>   Accepting ICP messages at 0.0.0.0, port 3130, FD 22.
>>   WCCP Disabled.
>>   Accepting WCCPv2 messages on port 2048, FD 23.
>
> To answer your earlier question:
>   the above two lines mean WCCPv1 is disabled, WCCPv2 is being used.
>
>>   Initialising all WCCPv2 lists
>>
>> As from my other posting I need WCCP enabled but it is showing
>> disabled. Any reason why? How ca
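For reference, the Squid side of a minimal WCCPv2 setup looks something
like this (a sketch matching the options quoted above; the router address
must match the Cisco config, the wccp2_service line is an assumption for
the standard HTTP service group, and the port conflict shown in the log
has to be cleared before any of it matters):

  wccp2_router 192.168.16.1
  wccp2_forwarding_method 1    # 1 = GRE encapsulation
  wccp2_return_method 1        # 1 = GRE return path
  wccp2_service standard 0     # standard service group 0 = HTTP redirect

A quick way to see what already holds port 3128 (assuming a Linux host):

  netstat -tlnp | grep 3128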

Re: [squid-users] Tproxy4+squid: ebtables wiki

2009-10-28 Thread Dan

Marko Kotar wrote:
> Thanks.
>
> "redirect
>
> The redirect target will change the MAC target address to that of the
> bridge device the frame arrived on. This target can only be used in the
> BROUTING chain of the broute table and the PREROUTING chain of the nat
> table. In the BROUTING chain, the MAC address of the bridge port is
> used as destination address, in the PREROUTING chain, the MAC address
> of the bridge is used.
>
> --redirect-target target
>
> Specifies the standard target. After doing the MAC redirect, the rule
> still has to give a standard target so ebtables knows what to do. The
> default target is ACCEPT. Making it CONTINUE could let you use multiple
> target extensions on the same frame. Making it DROP in the BROUTING
> chain will let the frames be routed. RETURN is also allowed. Note that
> using RETURN in a base chain is not allowed."
>
> I think: if ACCEPT is used it goes into tproxy because the destination
> MAC is changed to the bridge address. (So it goes up as it would if the
> client had its gateway configured to that machine?) But should DROP
> also work?

I decided to test it. I changed my rule to ACCEPT and traffic passes but
not through the proxy.  My access.log shows no new traffic after
changing the rule.  DROP is what passes the frame off to iptables.
Could you show all your rules?  If squid is receiving the traffic the
only thing I can think of is that maybe there is another rule further
down the chain that causes the frame to be routed.

> I have tried DROP but it didn't work; no traffic got through at all.
> If I didn't use any ebtables rules traffic went through. But ACCEPT
> works.
>
> --- On Wed, 10/28/09, Dan wrote:
>
>> From: Dan
>> Subject: Re: [squid-users] Tproxy4+squid: ebtables wiki
>> To: "Marko Kotar"
>> Cc: squid-users@squid-cache.org
>> Date: Wednesday, October 28, 2009, 1:03 AM
>>
>> Marko Kotar wrote:
>>> Hi,
>>> You have incorrect commands in the squid wiki for tproxy4 ebtables:
>>> I figured out that it is not "--redirect-target DROP" but
>>> "--redirect-target ACCEPT".
>>
>> With ebtables using broute, ACCEPT and DROP have special meanings.
>> DROP means route the frame and ACCEPT means bridge the frame.
>>
>> http://ebtables.sourceforge.net/misc/ebtables-man.html
>>
>>> There is a "-j REDIRECT" which should be in lowercase
>>> letters: "-j redirect".
>>>
>>> Thanks for the guide.
>>>
>>> Marko

Dan
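For reference, the wiki rule under discussion has this shape (a sketch;
the bridge interface name eth0 is hypothetical, and whether the final
target should be DROP, handing the frame to the routing path and thus
iptables/TPROXY, or ACCEPT, bridging it, is exactly what is being debated
above):

  ebtables -t broute -A BROUTING -i eth0 -p ipv4 --ip-protocol tcp \
    --ip-destination-port 80 -j redirect --redirect-target DROP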






  
  




[squid-users] Re: adding content to the cache -- a revisit

2009-10-28 Thread Henrik Nordstrom
On Wed 2009-10-28 at 11:22 -0600, bergenp...@comcast.net wrote:

> When squid is talking to a cache_peer, is there a way to tell squid that 
> if it gets a 404 back, to then send the request to the origin?

Not sure.. it may if you respond with 500, but not sure about 404..

And why do you need it to? Squid will go there on the first real request
anyway. With the cache_peer_access rules this peer is only used on these
automatic "add content" requests.

> Note that I currently have the cache_peer defined as a parent.

What I intended.

Sibling is another option, but requires you to implement an ICP server.

Regards
Henrik



[squid-users] Re: adding content to the cache -- a revisit

2009-10-28 Thread bergenp...@comcast.net


[moving this to squid-users]

When squid is talking to a cache_peer, is there a way to tell squid that 
if it gets a 404 back, to then send the request to the origin?


Note that I currently have the cache_peer defined as a parent.

Thanks


Henrik Nordstrom wrote:

> On Mon 2009-10-12 at 07:57 -0600, bergenp...@comcast.net wrote:
>
>> The content I'm trying to manually install into the squid server will
>> be a subset of the origin server content, so for objects not manually
>> installed into squid, squid will still need to go directly back to the
>> origin server.
>
> What you need is:
>
> a) An HTTP server on the Squid server capable of serving the objects
> using HTTP, preferably with properties as identical as possible to the
> origin's. This includes at least properties such as ETag, Content-Type,
> Content-Language, Content-Encoding and Last-Modified.
>
> b) wget, squidclient or another simple HTTP client capable of
> requesting URLs from the proxy.
>
> c) A cache_peer line telling Squid that this local http server exists.
>
> d) A unique http_port bound on the loopback interface, only used for
> this purpose (simplifies the next step).
>
> e) cache_peer_access + never_direct rules telling Squid to fetch
> content requested on the unique port defined in 'd' from the peer
> defined in 'c', and only then.

> Regards
> Henrik
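A minimal squid.conf sketch of steps (c)-(e) above, assuming the local
content server listens on 127.0.0.1:8000 and the dedicated Squid port is
4128 (both hypothetical, as are the ACL and peer names):

  # (d) dedicated loopback port for priming requests
  http_port 127.0.0.1:4128

  # (c) the local HTTP server holding the pre-installed objects
  cache_peer 127.0.0.1 parent 8000 0 no-query originserver name=primer

  # (e) only priming-port requests may use the peer, and they never go direct
  acl primingPort myport 4128
  cache_peer_access primer allow primingPort
  cache_peer_access primer deny all
  never_direct allow primingPort

Objects would then be pulled in with something like:

  squidclient -h 127.0.0.1 -p 4128 http://example.com/some/object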


  




[squid-users] Skipping logging certain traffic in access.log?

2009-10-28 Thread Kelly, Jack
Hi everyone,
I have what will probably be a pretty simple question... unfortunately I
need to provide a few details to help explain what I'm trying to do and
why.
 
One of the big uses of Squid to our managers is seeing how much time
employees are spending on the internet. To that extent, we've got Squint
installed for analyzing our logs and generating a shiny report that does
exactly that, and can be viewed in an html document hosted right on the
Squid box. Works great. We also authenticate with LDAP so requests can
be tied to user credentials in Squid. Again, works great.
 
Here's where the minor hiccup comes in:
I have an acl called 'passthrough' which is basically a list of
domains/keywords/etc that the proxy server will allow requests for
without prompting the user for their credentials. This comes in handy
for programs that like to check for updates online, like Adobe Reader
and iTunes. Unfortunately for my purposes, requests that go through
unauthenticated are recorded in access.log by requestor IP address,
which subsequently gets parsed by Squint and adds gobs of useless
information to the report.
 
So, my question:
Is there any way to get Squid to exclude certain types of records from
access.log? Or would I be better off just beefing up our PAC file to
send these 'passthrough' requests around the proxy?
 
On second thought, I suppose I could just write and cron a perl script
that nukes lines containing an IP in our DHCP range right before Squint
updates. That feels messy though :)
 
Thanks everyone!
Jack
 




Re: [squid-users] sslBump, error SSL unknown certificate error

2009-10-28 Thread vandermeer

Got it, I did not have my full list of signing authority certificates
installed in the right location. I updated these using:

  apt-get install openssl ca-certificates

Then copied the certs from the /etc/ssl/certs directory into my OpenSSL
installation directory. Works great now!
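For anyone repeating this, the directory a given OpenSSL build actually
searches can be confirmed first, so the copy goes to the right place (a
sketch; the bundle path is the Debian ca-certificates default, and
<OPENSSLDIR> is a placeholder for whatever the first command prints):

  openssl version -d    # prints OPENSSLDIR for this build
  cp /etc/ssl/certs/ca-certificates.crt <OPENSSLDIR>/certs/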



Amos Jeffries-2 wrote:
> 
> On Tue, 27 Oct 2009 12:54:03 -0700 (PDT), vandermeer wrote:
>> I have squid 3.1.0.14 running with the configuration below to forward
>> decrypted traffic from sslBump to icap for inspection. 
>> 
>> When i browse non SSL sites with sslBump enabled everything is fine
>> 
>> When i browse SSL sites with sslbump disabled everything is fine. 
>> 
>> When I browse SSL sites with sslbump enabled i receive the following
>> errors:
>> 
>> 2009/10/27 10:57:41| SSL unknown certificate error 19 in
>> /C=US/ST=Arizona/L=Phoenix/O=American Express Company/OU=Web
>> Hosting/CN=www.americanexpress.com
>> 
>> 2009/10/27 10:57:41| fwdNegotiateSSL: Error negotiating SSL connection on
>> FD 14: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate
>> verify failed (1/-1/0)
> 
> This is Squid SSL library failing to verify the real _web server_
> certificate.
> 
> There are a couple of things to check.
>  * you have correct and most recent signing authority certificates etc to
> verify theirs against.
>  * your SSL library being used by Squid is capable of SSLv3 (which their
> site appears to require)
> 
> There is a slim chance it could actually be a case of site forgery (your
> upstream doing SslBump would be pure irony).
> 
>> 
>> My certificate is my company wildcard certificate.
> 
> That only affects the browsers visiting through your Squid. Which seems
> fine so far.
> 
>> 
>> Squid Config:
>> 
>> icap_enable on
>> 
>> icap_service service_req reqmod_precache 1
>> icap://10.207.214.22:1344/request
>> adaptation_access service_req allow all
>> 
>> icap_service service_resp respmod_precache 0
>> icap://10.207.214.22:1344/response
>> adaptation_access service_resp allow all
>> 
>> # configure the HTTP port to bump CONNECT requests
>> http_port 3128 sslBump cert=/usr/local/squid/etc/cert.pem
>> 
>> 
>> # Bumped requests have relative URLs so Squid has to use reverse proxy
>> # or accelerator code. By default, that code denies direct forwarding.
>> # The need for this option may disappear in the future.
>> always_direct allow all
>> 
> 
> So far so good. However I see you have cut-n-pasted the example config
> and are trying to run it.
> The following bits are probably not needed.
> 
>> # avoid bumping requests to sites that Squid cannot proxy well
>> acl broken_sites dstdomain .webax.com
>> ssl_bump deny broken_sites
>> ssl_bump allow all
>> 
>> # ignore certain certificate errors or
>> # ignore errors with certain cites (very dangerous!)
>> acl TrustedName url_regex ^https://weserve.badcerts.com/
>> acl BogusError ssl_error SQUID_X509_V_ERR_DOMAIN_MISMATCH
>> sslproxy_cert_error allow TrustedName
>> sslproxy_cert_error allow BogusError
>> sslproxy_cert_error deny all
> 
> 
> Amos
> 
> 




[squid-users] sibling peer problems

2009-10-28 Thread bergenp...@comcast.net


I'm trying to configure squid to use a sibling cache.  The sibling does 
not use ICP and I have not enabled echo mode.


In my config, I have:

cache_peer         peer-IP   sibling   http-port-#   0   no-query
cache_peer_access  peer-IP   allow all

In this mode, I see no requests being sent to the sibling.  Instead, I
see squid report in the access_log "TCP_MISS ... DIRECT/...".


If I change the above config from "sibling" to "parent", I see http 
requests being sent to the peer-IP.


Ideas why squid isn't attempting to leverage the sibling? 


Thanks




Re: [squid-users] poor performance video caching

2009-10-28 Thread Amos Jeffries

J. Webster wrote:
> My squid users are complaining of poor performance with videos when
> caching is turned on. I thought I turned it off, but SARG reports give
> the figures:
>
>   IN-CACHE  OUT
>   1.31%     98.69%
>
> Does squid still use the cache even though it's turned off?


Blue.

Seriously though:
  What ... version of squid?
  Which ... cache? (squid memory, squid disk, user agent?)
  Where?
  How was it "turned off"?
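For reference, the explicit way to stop Squid storing responses is an
access list in squid.conf (a sketch; the 'cache' directive replaced the
older no_cache in Squid 2.6+):

  cache deny all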

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE19
  Current Beta Squid 3.1.0.14


[squid-users] SNMP counters for bytes in hits/misses

2009-10-28 Thread Brian J. Murrell
It would be nice to have SNMP counters that tracked cache hits and
misses in terms of the number of bytes.  This would let me see how
effective my proxy is at avoiding network traffic.

That said, I'm unsure how I would account for requests to web servers
that test for object freshness, given that those are not client requests
per se but rather cache server "management" (or overhead) traffic.

I think ultimately I'd like an accounting of every byte that comes/goes
from remote web servers (so that would include freshness probes as well
as object traffic) and an accounting of how much it would have been had
the cache server not been there (i.e. add to the above the amount of
traffic that was avoided by serving objects out of the cache).
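The aggregate byte counters Squid already exports over SNMP get part of
the way there; for example (a sketch, assuming snmp_port 3401 is enabled
in squid.conf, the net-snmp tools are installed, and the proxy hostname
is a placeholder; the OIDs are from the cacheProtoAggregateStats branch
of Squid's bundled mib.txt, so verify them against your copy):

  # KB delivered to clients vs KB fetched from origin servers
  snmpget -v2c -c public proxy.example.com:3401 \
    .1.3.6.1.4.1.3495.1.3.2.1.5.0 \
    .1.3.6.1.4.1.3495.1.3.2.1.12.0

(.5 should be cacheHttpOutKb and .12 cacheServerInKb; the difference
approximates the bytes the cache saved.)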

Thoughts?

b.






[squid-users] Intermittent access issue for ssl sites

2009-10-28 Thread John s
Hi,

I have a Squid setup. For the past 20 days I have been facing an
intermittent access issue while opening SSL-based websites.

Site: www.hdfcbank.com, and after that opening
enetbanking.hdfcbank.com from the link provided in the main URL.

The issue is that sometimes the enetbanking site cannot be accessed.
The browser shows the error "connection to 203.199.21.80 failed" and at
the same time the following entries appear in access.log:

1256646352.314    203 10.1.0.200 TCP_MISS/200 1115 CONNECT
enetbanking.hdfcbank.com:443 - DIRECT/203.199.21.80 -
1256646508.001  59474 10.160.11.69 TCP_MISS/503 0 CONNECT
enetbanking.hdfcbank.com:443 - DIRECT/203.199.21.80 -
1256646568.029  59973 10.160.11.69 TCP_MISS/503 0 CONNECT
enetbanking.hdfcbank.com:443 - DIRECT/203.199.21.80 -
1256646568.069     40 10.160.11.69 TCP_MISS/200 39 CONNECT
enetbanking.hdfcbank.com:443 - DIRECT/203.199.21.80 -
1256646668.003  59704 10.160.11.69 TCP_MISS/503 0 CONNECT
enetbanking.hdfcbank.com:443 - DIRECT/203.199.21.80 -
1256646713.492  45068 10.160.11.69 TCP_MISS/200 62373 CONNECT
enetbanking.hdfcbank.com:443 - DIRECT/203.199.21.80 -

Following are some observations:
1: Stopped the Websense content filtering and checked, but was still
   unable to access the website.
2: Squid details: Version 2.5.STABLE14.
3: Checked the Squid FAQs and disabled the following options:
     client_persistent_connections off
     server_persistent_connections off
4: Set negative_dns_ttl 1 second.

The issue lasts for only 1-2 minutes at a time. Please let me know
whether this is related to a Squid configuration error or an ISP DNS
issue.
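One way to separate the two causes while a failure is happening is to
query the resolver directly (a sketch; substitute the real ISP resolver
address for the placeholder):

  dig enetbanking.hdfcbank.com
  dig enetbanking.hdfcbank.com @isp.resolver.address

If the name intermittently fails to resolve, or the answers flap between
addresses, the DNS side is implicated rather than the Squid configuration.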

Thanks in advance for help

Regards
John


Re: [squid-users] Cache_peer based on destination's geoip

2009-10-28 Thread Amos Jeffries

Frito Lay wrote:
>>> Hello list,
>>>
>>> Some medieval country that shall remain unnamed is blocking access to
>>> some specific websites, but the list of websites is huge, dynamic,
>>> and not public.
>>>
>>> I have two proxy servers, one of which is located outside of this
>>> firewall, but access to this proxy server is slower than to the local
>>> one.
>>>
>>> I would like to configure the local proxy to use a peer cache based
>>> on the geoip address of the destination. If the required object
>>> belongs to a specific country then the request will go through the
>>> second proxy.
>>>
>>> I know about the cache_peer_domain option, but I would like to use a
>>> geoip based solution.
>>>
>>> [...]
>>>
>>> So the acl is evaluated, returns false, and the log file doesn't have
>>> any output.  How come?
>>
>> Nope. The helper is a "slow" category lookup being used in a "fast"
>> category access list. The helper is never called, just the existing
>> results cache is tested to see if a result is known.
>> http://wiki.squid-cache.org/SquidFaq/SquidAcl#Fast_and_Slow_ACLs
>>
>> To get this to work you need to use the ACL in a "slow" category
>> access list such as http_access first, to get the result cached in
>> Squid so it can be retrieved without any delays by cache_peer_access.
>>
>> Amos
>
> Thanks, http_access did the trick.
> Perhaps there should be a reference to this in the docs, under the
> cache_peer_access page.
>
> By the way, I can contribute my external script, in case it would be a
> useful addition to the squid package.
>
> Thanks a lot!

Thank you. It's always worth a look. Can you attach it to a mail to the
squid-dev mailing list then please? With a bit of usage and use-case
description for the docs.
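For readers following along, the pattern described above comes out
roughly like this in squid.conf (a sketch: the helper path, ACL name and
peer address 203.0.113.5 are all hypothetical, and the http_access line
with "!all" never actually denies anything; it exists only to force the
slow geoip lookup so the result is cached):

  external_acl_type geoip_dst children=5 %DST /usr/local/bin/geoip_check
  acl blockedCountry external geoip_dst

  # evaluate the slow ACL in a slow list; "!all" never matches
  http_access deny blockedCountry !all

  cache_peer 203.0.113.5 parent 3128 0 no-query
  cache_peer_access 203.0.113.5 allow blockedCountry
  cache_peer_access 203.0.113.5 deny all
  never_direct allow blockedCountry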



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE19
  Current Beta Squid 3.1.0.14


Re: [squid-users] password policy

2009-10-28 Thread Kinkie
On Fri, Oct 23, 2009 at 8:34 AM, Indunil Jayasooriya wrote:
> Hi ALL,
>
> we have a proxy server running with ncsa_auth. We use htpasswd to
> generate passwords. There is a requirement for a password policy that
> enforces a minimum and maximum length, with both letters and numbers.
> We need a web interface for that.
>
> In addition to that, passwords should expire after a period (let's say
> 5 months), and users should be warned before that happens.
>
> Could you please let me know the software we need to achieve the above
> requirements?

You can't really do that within squid.
The best option is to use some external password storage mechanism,
such as LDAP, and some custom authorization script (I am not sure if
something like that can be found pre-built) to warn users.

> What about the Squid Users Manager pkg?
Is there such a thing?
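A minimal sketch of the Squid side of the LDAP approach, assuming the
stock squid_ldap_auth helper and a hypothetical directory layout:

  auth_param basic program /usr/lib/squid/squid_ldap_auth \
    -b "ou=people,dc=example,dc=com" -f "(uid=%s)" ldap.example.com
  auth_param basic children 5
  auth_param basic realm Company proxy

Password length rules, expiry, and expiry warnings would then live in
the LDAP server's password policy (and whatever web frontend manages
it), not in Squid.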


-- 
/kinkie


Re: [squid-users] Squid Logs

2009-10-28 Thread Nadeem Semaan
I tried dig -x 10.6.3.77 and got the machine name of the user.

I also searched the log files for both the IP and machine name of the user
and found that squid logged the IP on one request, then the name on the
second, then the IP on the third; the rest were all logged by machine name.


- Original Message 
From: Henrik Nordstrom 
To: Nadeem Semaan 
Cc: squid-users@squid-cache.org
Sent: Mon, October 19, 2009 10:51:02 PM
Subject: Re: [squid-users] Squid Logs

On Mon 2009-10-19 at 06:24 -0700, Nadeem Semaan wrote:

> I have configured named on the machine running squid to retrieve the
> forward and reverse zones from my DNS server (windows).  I also have
> squid configured to log the fqdn (log_fqdn on). I have also tried
> playing around with the dns_nameservers option, but I'm still getting
> IPs in my log files. Is there a way to only log the fqdn, do I need to
> change the dns expiry settings to less than one day?  Please help.

If log_fqdn is on then Squid will log the host name, provided the DNS
server responds in a reasonable time.

Can you resolve the IP addresses from the Squid server?

  dig -x ip.of.client.station

or alternatively

  dig -x ip.of.client.station @selected.nameserver.address

Regards
Henrik






Re: [squid-users] Cache_peer based on destination's geoip

2009-10-28 Thread Frito Lay
>> Hello list,
>>
>> Some medieval country that shall remain unnamed is blocking access to
>> some specific websites, but the list of websites is huge, dynamic, and
>> not public.
>>
>> I have two proxy servers, one of which is located outside of this
>> firewall, but access to this proxy server is slower than to the local
>> one.
>>
>> I would like to configure the local proxy to use a peer cache based on
>> the geoip address of the destination. If the required object belongs
>> to a specific country then the request will go through the second
>> proxy.
>>
>> I know about the cache_peer_domain option, but I would like to use a
>> geoip based solution.
>>
>> [...]
>>
>> So the acl is evaluated, returns false, and the log file doesn't have
>> any output.  How come?
>
> Nope. The helper is a "slow" category lookup being used in a "fast"
> category access list. The helper is never called, just the existing
> results cache is tested to see if a result is known.
> http://wiki.squid-cache.org/SquidFaq/SquidAcl#Fast_and_Slow_ACLs
>
> To get this to work you need to use the ACL in a "slow" category
> access list such as http_access first, to get the result cached in
> Squid so it can be retrieved without any delays by cache_peer_access.
>
> Amos

Thanks, http_access did the trick.
Perhaps there should be a reference to this in the docs, under the
cache_peer_access page.

By the way, I can contribute my external script, in case it would be a
useful addition to the squid package.

Thanks a lot!


  



[squid-users] poor performance video caching

2009-10-28 Thread J. Webster

My squid users are complaining of poor performance with videos when
caching is turned on. I thought I turned it off, but SARG reports give
the figures:

  IN-CACHE  OUT
  1.31%     98.69%

Does squid still use the cache even though it's turned off?

  