Re: [squid-users] store_digest hits?!?!

2010-04-23 Thread Luis Daniel Lucio Quiroz
On Saturday, 24 April 2010 at 00:12:09, Amos Jeffries wrote:
> Luis Daniel Lucio Quiroz wrote:
> > 1272079797.597 52 192.168.203.19 TCP_MEM_HIT/200 4845 GET
> > http://fernanda.okay.com.mx:3128/squid-internal-periodic/store_digest -
> > NONE/- application/cache-digest
> > 
> > Is this good? I mean, my caches are HITTING store_digest.
> > 
> > Or shall I add an ACL to discard cache hits for this class of request?
> 
> This is one cache fetching the store index from a peer.
> 
> Amos

Yes, I understand that.
My concern is the TCP_MEM_HIT: could this hit keep the other cache from
getting a fresh copy of the digest, with Squid serving a cached one instead?

I mean, at a site with activity, where objects go in and out, when the other
cache requests the digest it could get stale information.


Re: [squid-users] store_digest hits?!?!

2010-04-23 Thread Amos Jeffries

Luis Daniel Lucio Quiroz wrote:
1272079797.597 52 192.168.203.19 TCP_MEM_HIT/200 4845 GET 
http://fernanda.okay.com.mx:3128/squid-internal-periodic/store_digest - NONE/- 
application/cache-digest


Is this good? I mean, my caches are HITTING store_digest.


Or shall I add an ACL to discard cache hits for this class of request?



This is one cache fetching the store index from a peer.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.1


Re: [squid-users] parent / child relationships

2010-04-23 Thread Luis Daniel Lucio Quiroz
On Monday, 19 April 2010 at 09:38:13, SHURMER Kev (AXA-TECH-UK) wrote:
> Hi,
> 
> We are looking to set up a single proxy in a remote location using the
> existing servers as parent servers. Can someone advise whether we can create
> a parent/child relationship between 1 child and 2 parents (and what the
> algorithm or usage is: round-robin or failover)? Alternatively, can we
> create this relationship through a content switch that uses a virtual
> IP address for the master proxy farm?
> 
> I can see plenty of information on the forums, but none that directly
> answers my query, and I'm getting pushed for a design very soon, to solve
> a problem.
> 
> Thanks for anticipated help.
> 
> Kev Shurmer
> Network Analyst - TS Data Networks
> AXA Technology Services
> kev.shur...@axa-tech.com
> Tel. : +44 1 253 68 4652 - Mob. : +44 7974 83 0090

In my case:

cache_peer 10.10.60.33 parent 8080 7 login=*:nopass weight=95 name=p.dansguardian no-netdb-exchange no-digest no-query default
cache_peer 127.0.0.1 parent 8080 7 login=*:nopass weight=1 name=dg2 no-netdb-exchange no-digest no-query

one child with two parents, with failover. Round-robin can be turned on if
the weights are the same, I think.
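If round-robin is wanted instead of failover, equalise the weights and add the round-robin option to each cache_peer line. A sketch based on the lines above (untested; adapt the other options to taste):

```
cache_peer 10.10.60.33 parent 8080 7 login=*:nopass weight=1 name=p.dansguardian no-netdb-exchange no-digest no-query round-robin
cache_peer 127.0.0.1   parent 8080 7 login=*:nopass weight=1 name=dg2 no-netdb-exchange no-digest no-query round-robin
```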

LD


[squid-users] store_digest hits?!?!

2010-04-23 Thread Luis Daniel Lucio Quiroz
1272079797.597 52 192.168.203.19 TCP_MEM_HIT/200 4845 GET 
http://fernanda.okay.com.mx:3128/squid-internal-periodic/store_digest - NONE/- 
application/cache-digest

Is this good? I mean, my caches are HITTING store_digest.

Or shall I add an ACL to discard cache hits for this class of request?

TIA

LD


[squid-users] Zero reply

2010-04-23 Thread •̪●
Why do some websites give me a "zero reply" error when I turn on Squid
(Squid 2.7 from Ubuntu), while the same websites work normally when I turn
Squid off?

What should I do? (Fix my configuration, or something else?)

-- 
-=-=-=-=
sigh, unemployed again... unemployed again


Re: [squid-users] WARNING: Forwarding loop detected for:

2010-04-23 Thread Amos Jeffries

Cami wrote:

Hi All,

I've been unsuccessful at trying to fix what appears to be a nasty
forwarding loop. After going through old posts on the matter, nothing seems
to address the issue. Some information:

The Squid proxy in question has one interface (eth0, 10.3.0.251).

We have a hardware router that sits in front of it, intercepts all traffic
arriving on port 80, and transparently redirects it to port 3128 on the
proxy.


The first breakage is doing the NAT on a box where Squid is not running.
If you can, do policy routing there instead, passing all non-Squid traffic
destined for port 80 over to the Squid box. Some call this DMZ mode or
port-specific bridging.



I've setup iptables to redirect it to Squid:

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 3128 -j REDIRECT 
--to-port 3129


Why is port 3128 involved?
  Are you trying to catch people sending regular proxy requests to
external proxies?

If these are internal clients just trying to get to your Squid, open its
port 3128 and let them connect directly as normal proxy clients.




Squid Cache: Version 3.1.1 config:
http_port 3129 transparent
visible_hostname lnx-proxy7.theweb.co.za
half_closed_clients off

Browsing "works fine" for most people. But occasionally i get the 
following in access.log


1272042637.252   9974 10.3.0.251 TCP_MISS/000 0 GET 
http://10.3.0.251:3128/ - DIRECT/10.3.0.251 -
1272042637.252   9974 10.3.0.251 TCP_MISS/000 0 GET 
http://10.3.0.251:3128/ - DIRECT/10.3.0.251 -
1272042637.253   9974 10.3.0.251 TCP_MISS/000 0 GET 
http://10.3.0.251:3128/ - DIRECT/10.3.0.251 -
1272042637.253   9974 10.3.0.251 TCP_MISS/000 0 GET 
http://10.3.0.251:3128/ - DIRECT/10.3.0.251 -


In cache.log i see errors along the following:

2010/04/23 19:13:27| WARNING: Forwarding loop detected for:
GET / HTTP/1.1
Via: 1.1 lnx-proxy7.theweb.co.za (squid/3.1.1)
X-Forwarded-For: 10.2.29.125
Host: 10.3.0.251:3129
Cache-Control: max-age=259200
Connection: keep-alive

2010/04/23 19:13:27| WARNING: Forwarding loop detected for:
GET / HTTP/1.1
Host: 10.3.0.251:3129
Via: 1.1 lnx-proxy7.theweb.co.za (squid/3.1.1), 1.1 
lnx-proxy7.theweb.co.za (squid/3.1.1)

X-Forwarded-For: 10.2.29.125, 10.3.0.251
Cache-Control: max-age=259200
Connection: keep-alive

2010/04/23 19:13:27| WARNING: Forwarding loop detected for:
GET / HTTP/1.1
Host: 10.3.0.251:3129
Via: 1.1 lnx-proxy7.theweb.co.za (squid/3.1.1), 1.1 
lnx-proxy7.theweb.co.za (squid/3.1.1), 1.1 lnx-proxy7.theweb.co.za 
(squid/3.1.1)

X-Forwarded-For: 10.2.29.125, 10.3.0.251, 10.3.0.251
Cache-Control: max-age=259200
Connection: keep-alive

And it keeps growing and growing. Does anyone have any ideas?


Your Squid is on the same side of the router as the clients, yes?

You need to make a rule in the router which prevents it capturing any
traffic coming from the Squid box itself. This rule needs to come before
any rules that catch the traffic.
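If the router happens to be Linux/iptables based, the exemption could look like this (the proxy address is taken from the thread; adapt the syntax to whatever your hardware router actually runs):

```shell
# Exempt traffic originating from the Squid box itself, inserting the
# rule ahead of the port-80 capture rule (rule order matters in PREROUTING)
iptables -t nat -I PREROUTING -s 10.3.0.251 -p tcp --dport 80 -j ACCEPT
```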


 There are some examples of how to setup iptables at 
http://wiki.squid-cache.org/ConfigExamples/Intercept


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.1


Re: [squid-users] Using rewrite to optimize the squid-cache HIT rate

2010-04-23 Thread Amos Jeffries

Georg Höllrigl wrote:

Hello,

I'm using several load-balanced squids as reverse proxy. I have extra 
subdomains for delivering images. For example images1.example.com and 
images2.example.com. Both subdomains deliver exactly the same content 
and are requested evenly distributed. Now my thought was, to improve the 
squid-cache hitrate with a redirector program, which redirects all the 
requests for images2 to images1. According to webserver logfiles this 
works. But when disabling the redirects, I don't see significant 
differences in the hit rates.


Does anyone have a similar setup and can confirm this behaviour?
Any hints on getting this "right", or pointers to the right docs or even
search terms?




URL re-writing only alters the URL, not the other related HTTP headers. 
This can cause problems and is avoidable in most situations.


Also, if you are having problems with HIT rate think of all the other 
admins out there suffering under your site design. You can save yourself 
a lot of bandwidth costs by using a cache friendly design.


Squid (and most of the load balancers I've heard of) work best for the 
case where one sub-domain is used with balancing on multiple IP addresses.


Splitting the content between two sub-domains (e.g. catpics.example.com,
dogpics.example.com) is another cache-friendly way to do it, but does lose
some of the balancing benefits.
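One more option worth noting: Squid 2.7 has storeurl_rewrite_program, which normalises only the cache key while leaving the request URL and Host header intact, so the two image hosts can share one set of cached objects. A minimal helper sketch under that assumption (hostnames are from the thread; the helper line protocol is simplified here, so check the 2.7 release notes before relying on it):

```python
#!/usr/bin/env python
import sys

def rewrite(url):
    # Collapse the mirrored image host onto one canonical cache key,
    # so images1 and images2 requests hit the same cached objects.
    return url.replace("//images2.example.com/", "//images1.example.com/", 1)

if __name__ == "__main__":
    # Squid sends one request per line (URL first); the helper answers
    # with the URL to use as the store key.
    for line in sys.stdin:
        parts = line.split()
        if not parts:
            continue
        sys.stdout.write(rewrite(parts[0]) + "\n")
        sys.stdout.flush()
```

It would be wired up with storeurl_rewrite_program plus a storeurl_access ACL restricted to the image domains.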


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.1


Re: [squid-users] Re: Joomla DB authentication support hits Squid! :)

2010-04-23 Thread Luis Daniel Lucio Quiroz
On Friday, 23 April 2010 at 00:20:13, Amos Jeffries wrote:
> Luis Daniel Lucio Quiroz wrote:
> > On Thursday, 22 April 2010 at 20:09:57, Amos Jeffries wrote:
> >> Luis Daniel Lucio Quiroz wrote:
> >>> On Thursday, 22 April 2010 at 15:49:55, Luis Daniel Lucio Quiroz wrote:
>  HI all
>  
>  As a requirement of one client, he wants to use joomla user database
>  to let squid authenticate.
>  
>  I did patch squid_db_auth that Henrik has written in order to support
>  joomla hash conditions.
>  
>  I added one useful option to the script
>  
>  --joomla
>  
>  in order to activate joomla hashing.  Other options are identical.
>  Please test :)
>  
>  Amos, I'd like it if you could include this in 3.1.2
> >> 
> >> Mumble.
> >> 
> >> How do other users feel about it? Useful enough to cross the security
> >> bugs and regressions only freeze?
> >> 
>  LD
> >>> 
> >>> I have a typo in
> >>> my salt
> >>> 
> >>> should be
> >>> my $salt
> >>> 
> >>> sorry
> >> 
> >> Can you make the option --md5 instead please?
> >> 
> >>   Possibilities are not limited to Joomla and they may change someday.
> >> 
> >> The option needs to be added to the documentation sections of the helper
> >> as well.
> >> 
> >> Amos
> > 
> > I don't follow you about "cross the security",
> 
> 3.1 is under feature freeze. Anything not a security fix or regression
> needs to have some good reasons to be committed.
Remember I'm a maintainer; all my distro changes I make against the stable
version we use.  :)  I hope the diff also works in HEAD.

> 
> I'm trying to stick to the freeze a little more with 3.1 than with 3.0,
> to get back into the habit of it. Particularly since we look like having
> a good foothold on the track for 12-month releases now.
> 
> > what I did is make the --joomla flag do a different SQL request, because
> > the joomla hash looks like this:
> > hash:salt
> > I split and compare.  By default joomla uses md5 (I'm not a joomla
> > expert; I don't know when joomla uses other hashings)
> 
> I intend to use this auth helper myself for other systems, and there are
> others who ask about a DB helper occasionally.
> 
> 
> Taking a better look at your changes ...
> 
> The first one: db_conf = "block = 0"  seems to be useless. All it does
> is hard-code a different default value for the --cond option.
> 
>For Joomla the squid.conf should instead contain:
>   --cond " block=0 "
> 
> 
> Which leaves the salted/non-salted hash change.
> Adding this:
> 
>--salt-delimiter D
> 
> To configure the character(s) between the hash and salt values, so as not
> to lock people into the specific Joomla colon syntax.  There are
> examples and tutorials out there for app designs that use other delimiters.
> 
> Doing both of those changes Joomla would be configured with:
> 
>... --cond " block=0 "  --salt-delimiter ":"
> 
> > if you want, latter i may add also --md5 to store md5 password, and
> > --digest- auth to support diggest authentication :) but later jejeje
> 
> Amos

Got it.

The block=0 condition is a hard-coded DB condition that joomla uses.

I added --joomla to hard-code all the joomla conditions, but I agree; I
will add --salt-delimiter.
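For anyone following along, the Joomla 1.5 scheme discussed here stores md5(password + salt) and the salt, joined by a colon. A sketch of the comparison in Python (the function name is mine, not the helper's):

```python
import hashlib

def check_joomla(password, stored, delimiter=":"):
    # stored looks like "<md5hex>:<salt>"; hash the supplied password
    # with the salt appended and compare against the stored digest.
    digest, _, salt = stored.partition(delimiter)
    computed = hashlib.md5((password + salt).encode()).hexdigest()
    return computed == digest
```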

A+


[squid-users] HTTP/1.0 502 Bad Gateway Squid 2.7.Stable6

2010-04-23 Thread Christoph Moormann

Hello,
after upgrading some proxies to a more recent release, I found some sites
no longer working correctly.


The current proxy versions I am testing with are:
Solaris 10:
Squid Cache: Version 2.7.STABLE6 (CSW package)
Ubuntu 9.10:
Version 2.7.STABLE (provided by the distribution)

When accessing the following site:
http://www.fotofinder.com/
-> Search for any picture
-> Click a thumbnail

With no proxy enabled there is a picture on the left side and some text
on the right side.

With Squid enabled, the picture is missing (a broken link in IE, no picture
at all in FF); the text is there.


Example:
http://www.fotofinder.com/preview.ep?imagesid=821365&resultid=13&ownerid=281&searchresultid=02677e2b721ce1d3f0ed92485e9af07f&text=kiel&usrtrck=searchresult_link_icon_preview

Checking the link in the page, I get:
http://imginfo.fotofinder.net/cgi-bin/nph-ffpic2.pl/?crossid=281&res=half&nomark=1&imagesid=821365&resource=extern

Calling this with the proxy enabled, I get an "Invalid Response" error
while trying to process the request:

GET 
/cgi-bin/nph-ffpic2.pl/?crossid=281&res=half&nomark=1&imagesid=821365&resource=extern 
HTTP/1.1

Host: imginfo.fotofinder.net
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; de; 
rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3

Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Proxy-Connection: keep-alive

The HTTP Response message received from the contacted server could not 
be understood or was otherwise malformed.


Some things from the access and cache log (maybe not related).

access.log
TCP_MISS/502 2333 GET 
http://imginfo.fotofinder.net/cgi-bin/nph-ffpic2.pl/WDD0912161812.jpg? - 
DIRECT/212.87.63.67 text/html


cache.log
2010/04/23 12:54:54| destroying entry 0x221e5df0: 'Server: 
squid/2.7.STABLE6'
2010/04/23 12:54:54| destroying entry 0x22252540: 'Date: Fri, 23 Apr 
2010 10:54:54 GMT'

2010/04/23 12:54:54| destroying entry 0x22252468: 'Content-Type: text/html'
2010/04/23 12:54:54| destroying entry 0x222524f8: 'Content-Length: 1990'
2010/04/23 12:54:54| destroying entry 0x22253f18: 'X-Squid-Error: 
ERR_INVALID_RESP 0'

2010/04/23 12:54:54| destroying entry 0x222562e0: 'X-Cache: MISS from proxy'
2010/04/23 12:54:54| destroying entry 0x22255f20: 'X-Cache-Lookup: MISS 
from ubuntu910.intern.netuse.de:8080'
2010/04/23 12:54:54| destroying entry 0x22255f68: 'Via: 1.0 proxy:8080 
(squid/2.7.STABLE6)'

2010/04/23 12:54:54| destroying entry 0x22255ed8: 'Connection: close'

My questions are:

- Does anyone have an idea what is causing this?
- Does anyone know how to generally "fix" this on the Squid side (even if
the remote webserver is at fault)? Older 2.x versions seem to have been
able to handle this (though maybe violating standards).
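One knob that sometimes helps with mildly malformed upstream headers is relaxed_header_parser, present in Squid 2.6 and later; whether it covers this particular nph- script's output is an open question:

```
# squid.conf: tolerate certain not-quite-RFC header formatting
relaxed_header_parser on
```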


Best regards
Christoph
--
Christoph Moormann




[squid-users] squid transparent proxy server: can't access local web server

2010-04-23 Thread Donatas
Hello everyone,
I have a transparent Squid proxy server. Here I will try to display the
networking scheme:

Internet -> Router -> Transparent Proxy (Linux) -> Clients, Web server

When I try to access the web server, whether from the local network or
from the internet, I can't reach it and I get this error:

The following error was encountered:

Unable to forward this request at this time.

This request could not be forwarded to the origin server or to any
parent caches. The most likely cause for this error is that:

The cache administrator does not allow this cache to make direct
connections to origin servers, and
All configured parent caches are currently unreachable.

My squid.conf can be found at:
http://pastebin.com/7srNVrxZ
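(That error text usually means Squid believes it is not allowed to go direct and no parent cache answered. Things to look for in that squid.conf; the directive names are real, the ACL name below is made up:)

```
# If something like "never_direct allow all" is present, every request
# must travel via a reachable cache_peer. To reach the local web server
# directly regardless:
acl localweb dstdomain www.example.lan
always_direct allow localweb
```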


--
Regards,
Donatas


[squid-users] WARNING: Forwarding loop detected for:

2010-04-23 Thread Cami

Hi All,

I've been unsuccessful at trying to fix what appears to be a nasty
forwarding loop. After going through old posts on the matter, nothing seems
to address the issue. Some information:

The Squid proxy in question has one interface (eth0, 10.3.0.251).

We have a hardware router that sits in front of it, intercepts all traffic
arriving on port 80, and transparently redirects it to port 3128 on the
proxy. I've set up iptables to redirect it to Squid:

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 3128 -j REDIRECT 
--to-port 3129


Squid Cache: Version 3.1.1 config:
http_port 3129 transparent
visible_hostname lnx-proxy7.theweb.co.za
half_closed_clients off

Browsing "works fine" for most people. But occasionally i get the 
following in access.log


1272042637.252   9974 10.3.0.251 TCP_MISS/000 0 GET 
http://10.3.0.251:3128/ - DIRECT/10.3.0.251 -
1272042637.252   9974 10.3.0.251 TCP_MISS/000 0 GET 
http://10.3.0.251:3128/ - DIRECT/10.3.0.251 -
1272042637.253   9974 10.3.0.251 TCP_MISS/000 0 GET 
http://10.3.0.251:3128/ - DIRECT/10.3.0.251 -
1272042637.253   9974 10.3.0.251 TCP_MISS/000 0 GET 
http://10.3.0.251:3128/ - DIRECT/10.3.0.251 -


In cache.log i see errors along the following:

2010/04/23 19:13:27| WARNING: Forwarding loop detected for:
GET / HTTP/1.1
Via: 1.1 lnx-proxy7.theweb.co.za (squid/3.1.1)
X-Forwarded-For: 10.2.29.125
Host: 10.3.0.251:3129
Cache-Control: max-age=259200
Connection: keep-alive

2010/04/23 19:13:27| WARNING: Forwarding loop detected for:
GET / HTTP/1.1
Host: 10.3.0.251:3129
Via: 1.1 lnx-proxy7.theweb.co.za (squid/3.1.1), 1.1 
lnx-proxy7.theweb.co.za (squid/3.1.1)

X-Forwarded-For: 10.2.29.125, 10.3.0.251
Cache-Control: max-age=259200
Connection: keep-alive

2010/04/23 19:13:27| WARNING: Forwarding loop detected for:
GET / HTTP/1.1
Host: 10.3.0.251:3129
Via: 1.1 lnx-proxy7.theweb.co.za (squid/3.1.1), 1.1 
lnx-proxy7.theweb.co.za (squid/3.1.1), 1.1 lnx-proxy7.theweb.co.za 
(squid/3.1.1)

X-Forwarded-For: 10.2.29.125, 10.3.0.251, 10.3.0.251
Cache-Control: max-age=259200
Connection: keep-alive

And it keeps growing and growing. Does anyone have any ideas?

Regards,
Cami


[squid-users] www.lgdisplay.com office page proxy error (appear not correct)

2010-04-23 Thread Eric.chen
Dear all,

My company has been using Squid for two years.
Today I got a user report: he can't access the www.lgdisplay.com
office page.

So I found a strange issue in the Squid proxy...

If I access www.lgdisplay.com directly (not via the Squid proxy), it is OK,
but when we use the proxy to access www.lgdisplay.com it appears as garbage.
Even when I set up the Squid ACL to bypass everything, it still appears as
garbage. I checked access.log; there is no deny message.

Anyone can try it: use a Squid proxy to access www.lgdisplay.com
and you will find it appears different from direct access.

My Squid info:
OS: CentOS 4.8
Squid 3.0.STABLE24 (from http://people.redhat.com/~jskala/squid/)
Squid 2.6.STABLE20, self-compiled



Re: [squid-users] Single Forest Multiple Domains kebreos setup (squid_kerb_ldap)

2010-04-23 Thread Fabian Hugelshofer

Hi Bilal,

GIGO . wrote:

Problem:
 
Single Forest, multiple domains, where the root A is empty with no users. Domains B & C have no trust configured between each other. The internet users belong to Domain B and Domain C. We want to enable users from both domains to authenticate via Kerberos and be authorized through LDAP.


If you serve multiple Kerberos realms, add a HTTP/f...@realm service
principal per realm to the HTTP.keytab file and use the -s GSS_C_NO_NAME
option with squid_kerb_auth.
 
 
I think this is the only change required in the squid configuration to authenticate and authorize from multiple domains?


I never tried this with non-hierarchical or non-Windows domains, but I 
would give it a go:


As there is at least a one-way trust from A to B/C, you don't need 
multiple service principals for the proxy. What you would do is create a 
single service principal in domain A.


When users from domains B and C are accessing the proxy, they should be 
able to discover (or be told in krb5.conf) that the service principal is 
in domain A and will acquire a service ticket from that domain. The 
proxy will then be able to verify these tickets.


I would use "-s HTTP/f...@a.com". You don't need to set GSS_C_NO_NAME.



Please confirm: am I to create the SPN as below for this setup to work?


I don't have experience with msktutil. I created the SPN and keytab file 
for a computer account on the Windows DC:


ktpass.exe -princ HTTP/f...@a -mapuser accountna...@a -crypto 
rc4-hmac-nt -ptype KRB5_NT_SRV_HST +rndpass -out krb5.keytab




PLease guide me on the changes that would be required in the krb5.conf file ?


If the domain structure is reflected in DNS (i.e. with SRV records) and 
the proxy is able to query the forest DNS you shouldn't need anything in 
the krb5.conf of the proxy. Try "dig _kerberos._tcp.b.com" on the proxy. 
For simplicity I would add the default realm:


[libdefaults]
  default_realm = A.COM

Possibly you will also have to add a [capaths] section to define the
trust relationships:


[capaths]
B.COM = {
  A.COM = .
}
C.COM = {
  A.COM = .
}

This is only for the proxy and applies to a Windows2003 forest. The 
clients might need different settings.


Regards,

Fabian


[squid-users] Using rewrite to optimize the squid-cache HIT rate

2010-04-23 Thread Georg Höllrigl

Hello,

I'm using several load-balanced Squids as reverse proxies. I have extra
subdomains for delivering images, for example images1.example.com and
images2.example.com. Both subdomains deliver exactly the same content and
are requested in an evenly distributed way. Now my thought was to improve
the Squid cache hit rate with a redirector program which redirects all the
requests for images2 to images1. According to the webserver logfiles this
works. But when disabling the redirects, I don't see significant
differences in the hit rates.


Does anyone have a similar setup and can confirm this behaviour?
Any hints on getting this "right", or pointers to the right docs or even
search terms?

Regards,

Ing. Georg Höllrigl


Re: [squid-users] Getting Source-IP

2010-04-23 Thread Jeff Pang
On Fri, Apr 23, 2010 at 3:58 PM, Andreas Müller  wrote:
> Hello,
>
> I know that I can't trust XFF. What is new for me is that the comma is
> optional, so it's more difficult to parse the value.
>

You could use something like Perl's split to get the last IP, whether or
not a comma is present:


$ perl -le '$ip="12.34.56.78,11.22.33.44,1.2.3.4";$last=(split/,/,$ip)[-1];print
$last'
1.2.3.4

$ perl -le '$ip="12.34.56.78,11.22.33.44";$last=(split/,/,$ip)[-1];print
$last'
11.22.33.44

$ perl -le '$ip="12.34.56.78";$last=(split/,/,$ip)[-1];print $last'
12.34.56.78

-- 
Jeff Pang
http://home.arcor.de/pangj/


AW: [squid-users] Getting Source-IP

2010-04-23 Thread Andreas Müller
Hello,

I know that I can't trust XFF. What is new for me is that the comma is
optional, and so it's more difficult to parse the value.

In my case I control the accel proxy and can trust it. So my idea was to
inject an additional header field carrying the IP of the incoming
connection to the proxy, the same IP my webserver would see as remote_addr
if it received the call directly. The reason is just to restore the
original behaviour after putting the webserver behind the proxy.
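For completeness, taking the last X-Forwarded-For entry (the only hop appended by a proxy you control) can be sketched like this in Python; the function name is illustrative:

```python
def last_xff_hop(xff):
    # X-Forwarded-For reads "client, proxy1, proxy2"; the final entry
    # was appended by the nearest (most trustworthy) proxy. A bare
    # single address with no comma falls out of the same code path.
    parts = [p.strip() for p in xff.split(",") if p.strip()]
    return parts[-1] if parts else None
```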


Kind regards,

Andreas Müller

-- 
"Only two things are infinite: the universe and human stupidity,
and I'm not quite sure about the universe."
  ~Albert Einstein~