Re: [squid-users] url rewrite problem

2007-10-02 Thread Keshava M P
you have to use a redirector and include the url_rewrite_program and
related directives in squid.conf. Typically:
url_rewrite_program /path/to/yourredirectprogram
url_rewrite_children 5
url_rewrite_concurrency 0
url_rewrite_host_header off

Either you define the internal hosts in /etc/hosts or you can use the
internal IPs directly in your redirector.
Example: your outside IP is mapped to e1.yourdomain.com,
e2.yourdomain.com and e3.yourdomain.com.
Let us say these correspond to host1 (10.9.0.1), host2 (10.9.0.2) and
host3 (10.9.0.3).
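
If you go the /etc/hosts route, the entries for this example would be
something like (using the internal addresses above):

10.9.0.1   host1
10.9.0.2   host2
10.9.0.3   host3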

your redirector program will look something like this:

#!/usr/bin/perl
$|=1;
while (<>) {
    @X = split;
    $url = $X[0];
    $url =~ s@^http://e1\.yourdomain\.com@http://host1@;
    $url =~ s@^http://e2\.yourdomain\.com@http://host2@;
    $url =~ s@^http://e3\.yourdomain\.com@http://host3@;
    print "$url\n";
}

you can also use internal ip addresses in place of host1, host2 etc.
you can even redirect to a specific page like
http://host1/path/to/your/page
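
for instance, one of the rewrite lines above could be varied to send
everything arriving for e1.yourdomain.com to a single page on host1
(the exact path here is only an illustration):

$url =~ s@^http://e1\.yourdomain\.com/.*@http://host1/path/to/your/page@;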

Keshava

On 10/2/07, Srinivas B <[EMAIL PROTECTED]> wrote:
> Hi All,
>
> is there any way I can redirect urls that are replaced by accelerated mode.
>
> I have something like
>
> http_port 8080 accel defaultsite=mysite.com
>
> Requests are replaced by host=mysite.com.
>
> I want to redirect some url based on original request (depending upon
> hostname). I have tried vhost option.., but doesn't seem to solve the
> problem, as hostname requested externally is not defined in internal
> DNS.
>
> Please help
>
> Thanks in advance.
>
> Srinivas
>


-- 
M P Keshava


Re: [squid-users] squid log with "Missing needed capability support. Will continue without tproxy support"

2007-10-02 Thread Henrik Nordstrom
On mån, 2007-10-01 at 12:23 +0800, josse wang wrote:

> looks both squid and web server only send "S" packets untill squid
> gives up and reply with "(110) Connection timed out" to client.
> 
> Does it mean the packet lost from web server back to squid server?

Looks so. How have you configured routing of return traffic? All traffic
from the Internet needs to be routed via the proxy for tproxy to work.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] I Want To Ping A Cache Object

2007-10-02 Thread Henrik Nordstrom
On mån, 2007-10-01 at 04:32 -0700, Solomon Asare wrote:
> Hi All,
> please, how do I tell I have an object in my cache
> without fetching the object. I want a command like: 
> "wget --spider", but where I access only the cache
> without going to the origin server if the object is
> not available in the cache.

This is done using the "Cache-Control: only-if-cached" header.

Example:

wget ... --header="Cache-Control: only-if-cached" url...


Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] I Want To Ping A Cache Object

2007-10-02 Thread Henrik Nordstrom
On mån, 2007-10-01 at 10:14 -0700, Solomon Asare wrote:
> Hi All,
> thanks for the responses. I guess what I need is
> something like:
> 
> squidclient -m HEAD -H 'Max-Forwards = 0\n'
> http://www.google.com

Make that
squidclient -m HEAD -H 'Cache-Control: only-if-cached\n' http://www.google.com


or if you want it to return "true" on expired objects as well:
squidclient -m HEAD -H 'Cache-Control: only-if-cached,max-stale\n' http://www.google.com


Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] how to use accel url_rewrite_program urlgroup and cache_peer?

2007-10-02 Thread Henrik Nordstrom
On mån, 2007-10-01 at 15:55 +0200, Sylvain Viart wrote:
> Now with url_rewrite_program, http_port accel, cache_peer it seems I 
> should be able to do it in full squid.conf directive.

Yes, as long as you don't need to actually rewrite the URL, just have it
forwarded to the proper server.

> For now my squid works on the load balancing side, but not on filtering, 
> as the filtered url are also balanced on the originserver.
> 
> Here is my related squid.conf directives:
> 
> # I do vhost namebased
> http_port 80 defaultsite=my.site.com vhost
> 
> # the rewrite filter
> url_rewrite_program /etc/squid/redirector.pl
> url_rewrite_children 5
> url_rewrite_concurrency 0
> url_rewrite_host_header off
> 
> cache_peer php-01 parent 80 0 no-query originserver round-robin weight=1 
> login=PASS
> cache_peer php-02 parent 80 0 no-query originserver round-robin weight=1 
> login=PASS
> 
> /etc/squid/redirector.pl return, the incomming URL if not matched, or 
> the url with the domain replaced with the filer's hostname if it matches 
> a static document.
> ex: http://filer/imagepath/someimg.jpg
> 
> I don't find any good doc on how to use all those rewriting url + peer 
> balancing etc. Can someone point me to some good ref?

If you can, drop the url rewriter and use cache_peer_access instead to
control which requests are acceptable to send to which peer.
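
A rough sketch of that approach for the setup quoted above (the
/imagepath/ pattern and the "filer" peer name are only assumptions taken
from the example URL):

# match static content by path instead of tagging it in a redirector
acl static_doc urlpath_regex -i ^/imagepath/
cache_peer filer parent 80 0 no-query originserver name=filergrp
cache_peer_access filergrp allow static_doc
cache_peer_access filergrp deny all
# everything else keeps going round-robin to the php backends
cache_peer_access php-01 deny static_doc
cache_peer_access php-02 deny static_doc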

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] authenticating against Win2000 AD

2007-10-02 Thread Henrik Nordstrom
On mån, 2007-10-01 at 16:47 +0200, polloxx wrote:

> I'll try it your way, because I was already too long struggling on this. :(
> It seems that samba 3.x no longer needs winbind, it even doesn't work
> using winbind, according to the squid FAQ.

Samba 3.x definitely needs winbind. Where in the FAQ did you get the
impression it does not? All I can remember is it saying Squid only needs
the winbind (+ntlm_auth) parts, not SMB.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Need ACL to restrict specific hosts to specific websites

2007-10-02 Thread Henrik Nordstrom
On mån, 2007-10-01 at 14:31 -0400, George wrote:
> Hi All;
> 
> Relatively new to Squid, and am having a problem with an ACL.
> Currently, my setup allows all hosts access to all sites on the
> Internet with the exception of "bad" sites that I've already
> restricted via another ACL.
> 
> I want to add another ACL to allow 5 specific hosts on our network to
> access 6 specific Internet websites, but nothing else. What would be
> the simplest and most effective way to do this? All suggestions
> appreciated. Thanks!

Before where you otherwise allow access:

acl restricted_hosts src ip.of.host1 ...
acl restricted_sites dstdomain www.example.com www.squid-cache.org 
www.henriknordstrom.net

# Allow restricted_hosts only access to restricted_sites
http_access allow restricted_hosts restricted_sites
http_access deny restricted_hosts

or alternatively

# Deny restricted_hosts access to anything not in restricted_sites
http_access deny restricted_hosts !restricted_sites

assuming the restricted_sites are also allowed by your normal access rules.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] squid log with "Missing needed capability support. Will continue without tproxy support"

2007-10-02 Thread josse wang
Hi Henrik,

In my lab testing there is currently no route via the proxy from the
internet. Thanks for the info. I believe it should work after I
configure the return traffic.

I have another question. In our production, we are using more than 10
squid proxy servers (all load-balanced using round-robin). Is it
possible to implement tproxy? How do we configure routing of return
traffic in a round-robin environment?

Thanks for the info.

Rgds,

JW



On 10/2/07, Henrik Nordstrom <[EMAIL PROTECTED]> wrote:
> On mån, 2007-10-01 at 12:23 +0800, josse wang wrote:
>
> > looks both squid and web server only send "S" packets untill squid
> > gives up and reply with "(110) Connection timed out" to client.
> >
> > Does it mean the packet lost from web server back to squid server?
>
> Looks so. How have you configured routing of return traffic? All traffic
> from the Internet needs to be routed via the proxy for tproxy to work.
>
> Regards
> Henrik
>
>


[squid-users] log & deny direct web access

2007-10-02 Thread Reinhard Haller

Hi,

I want to log direct web access over port 80 from misconfigured software 
update processes etc.


The firewall logged a lot of access over port 80; the reverse lookup of
the used addresses is almost useless. Therefore I changed the configuration:

pf redirects all connect requests for port 80 to port 3128

#added to squid.conf
http_port 127.0.0.1:3128 transparent
acl forwardport myport 3128
acl forwardip myip 127.0.0.1/255.255.255.255
http_access deny forwardip forwardport
# allow access to internet
http_access allow our_networks !ebay !useragent

Problem: squid 3.0pre6 now works as a perfect transparent proxy.

What's wrong?

Thanks
Reinhard



Re: [squid-users] Firefox automatic search

2007-10-02 Thread Amos Jeffries

Ted To wrote:

Does anyone know if there is a way to get firefox's automatic search
feature to work with squid?  When I type something like squid-cache
into the location window, I get the following instead of going to
squid-cache.org as I'm used to.


Get used to using the Internet properly then. Squid behaves as it 
should. Participating in the M$-initiated DDoS of the root DNS is not 
good netiquette.


Seriously though, you can simulate the same behaviour by adding .com,
.org, .net, and any other TLD you like to the squid server's resolv.conf
(or elsewhere if it's Windows).


NP: That way only your server will have the additional 90+ second delay
while it looks up alternative TLDs on *every* non-resolvable domain.



Amos


Re: [squid-users] log & deny direct web access

2007-10-02 Thread Amos Jeffries

Reinhard Haller wrote:

Hi,

I want to log direct web access over port 80 from misconfigured software 
update processes etc.


The firewall logged a lot of access over port 80, the reverse lookup of 
the used addresses is almost

useless. Therefore I changed the configuration:

pf redirects all connect requests for port 80 to port 3128

#added to squid.conf
http_port 127.0.0.1:3128 transparent
acl forwardport myport 3128
acl forwardip myip 127.0.0.1/255.255.255.255
http_access deny forwardip forwardport
# allow access to internet
http_access allow our_networks !ebay !useragent

Problem: squid 3.0pre6 now works as a perfect transparent proxy.


> Whats's wrong?
>

I'd say you have mistaken the phrase 'redirects all traffic to a local
port' in the REDIRECT documentation as meaning 'localhost port', when in
fact it just means 'a local-machine port'.


Think of the REDIRECT as a diversion making the client request from 
squid, not some other machine. The client just doesn't know it.


Amos


Re: [squid-users] Firefox automatic search

2007-10-02 Thread Henrik Nordstrom
On mån, 2007-10-01 at 17:13 -0400, Ted To wrote:
> Does anyone know if there is a way to get firefox's automatic search
> feature to work with squid?  When I type something like squid-cache
> into the location window, I get the following instead of going to
> squid-cache.org as I'm used to.

There is a relatively simple way to keep this function while still
using a proxy. The feature is activated when the browser encounters an
unresolvable hostname, so you can enable it by using a proxy autoconfig
script which tells the browser to attempt to contact the site directly
if the destination is unresolvable.


Another alternative is to build the function in the proxy by using a url
rewriter helper.
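
A minimal sketch of such a rewriter helper (hypothetical, assuming
url_rewrite_concurrency 0; the "guess www.<name>.com" rule is only an
example of what it could do):

#!/usr/bin/perl
$| = 1;
while (<>) {
    my ($url) = split;                       # first field is the URL
    if ($url =~ m@^http://([^./:]+)(/.*)?$@) {
        # bare single-label hostname: rewrite to a guessed full domain
        print "http://www.$1.com" . ($2 || "/") . "\n";
    } else {
        print "$url\n";                      # leave everything else alone
    }
}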


Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] log & deny direct web access

2007-10-02 Thread Henrik Nordstrom
On tis, 2007-10-02 at 12:26 +0200, Reinhard Haller wrote:
> Hi,
> 
> I want to log direct web access over port 80 from misconfigured software 
> update processes etc.
> 
> The firewall logged a lot of access over port 80, the reverse lookup of 
> the used addresses is almost
> useless. Therefore I changed the configuration:
> 
> pf redirects all connect requests for port 80 to port 3128
> 
> #added to squid.conf
> http_port 127.0.0.1:3128 transparent
> acl forwardport myport 3128
> acl forwardip myip 127.0.0.1/255.255.255.255
> http_access deny forwardip forwardport
> # allow access to internet
> http_access allow our_networks !ebay !useragent
> 
> Problem: squid 3.0pre6 now works as a perfect transparent proxy.

This is because on intercepted connections myip evaluates to the
originally requested destination IP, not the IP address of the proxy
server.

Instead you can use the urlgroup feature to match these requests.

http_port 3128 transparent urlgroup=direct

any requests accepted by this http_port will then have the urlgroup of
"direct".

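The tagged requests can then be matched with a urlgroup acl, roughly
like this (the log path is only an example; the acl-filtered access_log
form is 2.6 syntax):

acl direct_requests urlgroup direct
# log the misconfigured clients separately so they can be tracked down
access_log /var/log/squid/direct.log squid direct_requests
# and/or refuse to serve them
http_access deny direct_requests
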
Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] squid log with "Missing needed capability support. Will continue without tproxy support"

2007-10-02 Thread Henrik Nordstrom
On tis, 2007-10-02 at 17:46 +0800, josse wang wrote:

> I have another question. In our production, we are using more than 10
> servers of squid proxy (all load-balance using round-robin). is it
> possible to implement tproxy? How does we configure routing of return
> traffic for round-robin enviroment?

You need to use a more static load distribution for this to work, or use
a session-aware router capable of routing on a per-TCP-connection level,
i.e. using Linux CONNMARK or similar to keep track of which proxy
initiated the connection (either using MAC matching, or placing each
proxy in a different VLAN).
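
On the router that could look roughly like the following (a heavily
simplified sketch with made-up addresses, interfaces and MAC):

# remember which proxy opened each connection; match on the proxy's MAC,
# since with tproxy the source IP is the spoofed client address
iptables -t mangle -A PREROUTING -i eth0 -m mac --mac-source 00:11:22:33:44:55 -j CONNMARK --set-mark 1
# copy the connection mark back onto returning packets from the Internet
iptables -t mangle -A PREROUTING -i eth1 -j CONNMARK --restore-mark
# and use the mark to route those packets back through proxy 1
ip rule add fwmark 1 table 101
ip route add default via 10.0.0.1 dev eth0 table 101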

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] url rewrite problem

2007-10-02 Thread Amos Jeffries

Srinivas B wrote:

Hi All,

is there any way I can redirect urls that are replaced by accelerated mode.

I have something like

http_port 8080 accel defaultsite=mysite.com

Requests are replaced by host=mysite.com.

I want to redirect some url based on original request (depending upon
hostname). I have tried vhost option.., but doesn't seem to solve the
problem, as hostname requested externally is not defined in internal
DNS.


FQDNs should be resolvable regardless of where you are. Websites should
always use FQDNs. You need to seriously consider allowing the local
network to resolve your FQDN then, particularly the webservers that are
supposed to be serving those websites publicly.


Anyway, to get accel going without involving DNS you only need to use a 
cache_peer with a few ACLs to do the heavy lifting.


So long as it's just a re-direction and not a re-writing that you want,
the following should be much easier and faster.


Here's a few of my config lines:

   # an internal source machine...
 cache_peer colo-32.localdomain parent 80 0 originserver name=colo1
   # domain it runs...
 acl colo1Hosted dstdomain .mifrenz.com
   # it ONLY provides that domain...
 cache_peer_access colo1 allow colo1Hosted GETPOST
 cache_peer_access colo1 deny all
   # people are allowed to do general web stuff with it...
 http_access allow colo1Hosted GETPOST
   # squid is not allowed to do anything with this domain itself...
 never_direct allow colo1Hosted

  cache_peer rio.treenetnz.com parent 80 0 originserver name=rio
  acl rioHosted dstdomain .treenet.co.nz
  acl rioHosted dstdomain .treenetnz.com
  cache_peer_access rio allow rioHosted GETPOST
  cache_peer_access rio deny all
  http_access allow rioHosted GETPOST
  never_direct allow rioHosted


etc, etc, repeat as needed for any unique sources.

You can use any of the ACL criteria to switch origins based on anything 
you like.


FYI some names like colo-32.localdomain are not resolvable by the public.
It does not matter, as long as the name squid is given as the peer can be
resolved by squid, and the host server understands the names of the domains
it's meant to be hosting. The only DNS involved here is resolving
colo-32.localdomain and rio.treenetnz.com when squid needs them.


Placed ahead of the regular http_access rules it works well, forcing all
accelerated/locally-hosted domain MISSes out to the designated real
source and blocking any general traffic being passed to the hosting
servers, without the additional overhead of redirector threads.


'vhost' will do basic 'accel' and also alter the original Host: header 
of the request as it goes through squid.


Amos


Re: [squid-users] how to use accel url_rewrite_program urlgroup and cache_peer?

2007-10-02 Thread Sylvain Viart

Hi,

Chris Robertson wrote:
incomming URL: somedomaine/path/script.php => should go to peer which 
host script, php
incomming URL: somedomaine/imagepath/someimg.jpg => should go to 
static peer, filer


So I've a redirector which analyze the URL based on some regexp. It 
was needed for 2.5. Because the redirector script was embedding the 
round robbing balancing algorithm.
I don't find any good doc on how to use all those rewriting url + 
peer balancing etc. Can someone point me to some good ref?


http://wiki.squid-cache.org/SquidFaq/ReverseProxy#head-7bd155a1a9919bda8ff10ca7d3831458866b72eb 

Thanks Chris for the link, but I had already found that one and it
didn't fit my need.


I've found a solution by myself.

Here's one solution, for squid Version 2.6.STABLE5:

# put squid in accelerator mode
http_port 80 defaultsite=my.site.com vhost

# some peer (backends/origin server)
# here round robin with the same weight; you can have a lot more servers
# if needed (found in the FAQ URL above)
cache_peer php-01 parent 80 0 no-query originserver round-robin weight=1 login=PASS
cache_peer php-02 parent 80 0 no-query originserver round-robin weight=1 login=PASS


# + the filer for static content, I add another peer for content filtering
cache_peer filer-01 parent 80 0 no-query originserver name=filergrp

# the redirector; next I will try to remove it and use squid regexps
# it receives and returns the following content (from cache.log with
# debug enabled):
# helperSubmit: http://my.site.com/img/file.jpg IP/domain - GET -
# helperHandleRead: '!filer! http://filer-01/img/file.jpg IP/domain - GET -
# it uses the urlgroup to tag the url, here: filer
# (a sketch of such a script follows this config)
url_rewrite_program /etc/squid/redirector.pl

# some parameters to configure the redirector, here 5 processes
url_rewrite_children 5
# the redirector does not know how to handle concurrency
url_rewrite_concurrency 0
# we need to keep the Host header from being rewritten by squid (name-based vhosts)
url_rewrite_host_header off

# one acl for matching the tag urlgroup done by the redirector
acl static_doc urlgroup filer

# route to the originserver based on the tagged-url acl:
# static_doc (tagged with a urlgroup by the redirector + the acl) is
# allowed on the filer peer server, nothing else

cache_peer_access filergrp allow static_doc
cache_peer_access filergrp deny all

# static content is not allowed on the backend pool, but all other
# content, not tagged as static, will be load balanced here

cache_peer_access php-01 deny static_doc
cache_peer_access php-02 deny static_doc

This seems to be working for me; the config also includes some more acls,
but they are not related to accelerator-mode reverse proxying.
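
For reference, judging from the helperSubmit/helperHandleRead lines
above, /etc/squid/redirector.pl probably looks roughly like this (a
hypothetical sketch, not the actual script):

#!/usr/bin/perl
$| = 1;
while (<>) {
    my ($url) = split;                        # first field is the URL
    if ($url =~ m@^http://[^/]+(/img/.*)$@i) {
        # tag with the "filer" urlgroup and point at the filer peer,
        # as in the debug output quoted above
        print "!filer! http://filer-01$1\n";
    } else {
        print "$url\n";                       # pass everything else through
    }
}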


Here it is.
I may update the wiki because of the lack of documentation on the
urlgroup behaviour; also the acl doc seems wrong:


http://www.squid-cache.org/Versions/v2/2.6/cfgman/acl.html

acl urlgroup group1 ...
  # match against the urlgroup as indicated by redirectors

Seems to miss the acl_name part => acl acl_name urlgroup group1 ...
no?

It works for me.

Regards,
Sylvain.



Re: [squid-users] log & deny direct web access

2007-10-02 Thread Reinhard Haller

Hi Henrik,

Henrik Nordstrom schrieb:

On tis, 2007-10-02 at 12:26 +0200, Reinhard Haller wrote:
  

Hi,

I want to log direct web access over port 80 from misconfigured software 
update processes etc.


The firewall logged a lot of access over port 80, the reverse lookup of 
the used addresses is almost

useless. Therefore I changed the configuration:

pf redirects all connect requests for port 80 to port 3128

#added to squid.conf
http_port 127.0.0.1:3128 transparent
acl forwardport myport 3128
acl forwardip myip 127.0.0.1/255.255.255.255
http_access deny forwardip forwardport
# allow access to internet
http_access allow our_networks !ebay !useragent

Problem: squid 3.0pre6 now works as a perfect transparent proxy.



This is because on intercepted connections myip evaluates to the
originally requested destination IP, not the IP address of the proxy
server.

Instead you can use the urlgroup feature to match these requests.

http_port 3128 transparent urlgroup=direct

any requests accepted by this http_port will then have the urlgroup of
"direct".

Regards
Henrik
  

urlgroup is not yet ported to 3.0pre6/7

Thanks
Reinhard


[squid-users] Startup problems

2007-10-02 Thread Sean O'Reilly
I have upgraded squid from squid-2.5STABLE10 to squid-2.6STABLE16.

When trying to start squid using the original squid.conf I am getting
the 'no port defined' error

I do not have an http_port defined in the configuration file. My
question is why would this work in 2.5 but not in 2.6 ?

Regards

Sean


Re: [squid-users] Startup problems

2007-10-02 Thread Slacker
Sean O'Reilly, on 10/02/2007 07:27 PM [GMT+500], wrote :
> I have upgraded squid from squid-2.5STABLE10 to squid-2.6STABLE16.
>
> When trying to start squid using the original squid.conf I am getting
> the 'no port defined' error
>
> I do not have an http_port defined in the configuration file. My
> question is why would this work in 2.5 but not in 2.6 ?
>
>   
You need to have a look at

http://www.squid-cache.org/Versions/v2/2.6/squid-2.6.STABLE16-RELEASENOTES.html#s1

Regards.


[squid-users] Squid 2.5-STABLE14 Crashing

2007-10-02 Thread Ali resting

Hi,

For the last couple of days my squid server keeps crashing and restarting 
itself. I have looked at the cache.log file and this is what I get. This 
server has been running fine for the last 2 years:


(squid)[0x80a1afd]
/lib/i686/libpthread.so.0[0x4005747e]
(squid)[0x42028c48]
(squid)[0x420c12db]
(squid)[0x420bd350]
(squid)(regexec+0x65)[0x420c2df5]
(squid)(vfprintf+0x2d36)[0x804d21a]
(squid)(vfprintf+0x39a9)[0x804de8d]
(squid)(vfprintf+0x3d4b)[0x804e22f]
(squid)(vfprintf+0x3ed6)[0x804e3ba]
(squid)[0x8067c4d]
(squid)[0x805c2f1]
(squid)[0x8060fed]
(squid)[0x8060d91]
(squid)[0x804e8e0]
(squid)(vfprintf+0x4064)[0x804e548]
(squid)[0x805c9c0]
(squid)[0x805c6ab]
(squid)[0x804e8e0]
(squid)(vfprintf+0x4064)[0x804e548]
(squid)[0x805c265]
(squid)[0x80620e8]
(squid)[0x80664b5]
(squid)[0x80884d7]
(squid)(__libc_start_main+0xa4)[0x420158d4]
(squid)(shmat+0x51)[0x804ab05]

Any help will be greatly appreciated.

Regards,

Ali





[squid-users] Squid-3.0.RC1 now available

2007-10-02 Thread Duane Wessels

The Squid Web Proxy developers are pleased to announce the availability
of the Squid-3.0.RC1 "release candidate" release.

This new release can be downloaded from our HTTP or FTP servers:

http://www.squid-cache.org/Versions/v3/3.0/
ftp://ftp.squid-cache.org/pub/squid-3/DEVEL/

Or the mirrors. For a list of mirror sites see:

http://www.squid-cache.org/Download/mirrors.dyn
http://www.squid-cache.org/Download/http-mirrors.dyn

Regards
The Squid Web Proxy developers


[squid-users] tcp timeout issue

2007-10-02 Thread Frank Ruiz
Greetings,

I patched squid2.6 stable 14 with the tcp probe patch.

It patched two files:

cache_cf.c
neighbors.c

However, after about 14 hours of good runtime my response times began
to suck, and I began to see errors again indicative of the tcp probe
issue:

2007/10/02 01:57:15| Detected REVIVED Parent: 10.10.10.20
2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
2007/10/02 01:57:16| Detected DEAD Parent: 10.10.10.20
2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed


The origin server is available, however I keep getting
revived/connection failed/dead.

It seems that the only way to recover from this is a restart.

I am running solaris 10, and I had to download the gnu patch utility
in order to patch the src.

Here was the patch applied.

Index: src/cache_cf.c
===
RCS file: /cvsroot/squid/squid/src/cache_cf.c,v
retrieving revision 1.470
diff -u -p -r1.470 cache_cf.c
--- src/cache_cf.c  20 Jul 2007 21:08:47 -  1.470
+++ src/cache_cf.c  28 Aug 2007 23:46:47 -
@@ -1621,6 +1621,7 @@ parse_peer(peer ** head)
 p->stats.logged_state = PEER_ALIVE;
 p->monitor.state = PEER_ALIVE;
 p->monitor.interval = 300;
+p->tcp_up = PEER_TCP_MAGIC_COUNT;
 if ((token = strtok(NULL, w_space)) == NULL)
self_destruct();
 p->host = xstrdup(token);
Index: src/neighbors.c
===
RCS file: /cvsroot/squid/squid/src/neighbors.c,v
retrieving revision 1.318
diff -u -p -r1.318 neighbors.c
--- src/neighbors.c 20 Jul 2007 21:08:47 -  1.318
+++ src/neighbors.c 28 Aug 2007 23:46:47 -
@@ -1010,12 +1010,13 @@ peerDNSConfigure(const ipcache_addrs * i
debug(0, 0) ("WARNING: No IP address found for '%s'!\n", p->host);
return;
 }
-p->tcp_up = PEER_TCP_MAGIC_COUNT;
 for (j = 0; j < (int) ia->count && j < PEER_MAX_ADDRESSES; j++) {
p->addresses[j] = ia->in_addrs[j];
debug(15, 2) ("--> IP address #%d: %s\n", j, inet_ntoa(p->addresses[j]))
;
p->n_addresses++;
 }
+if (!p->tcp_up)
+   peerProbeConnect((peer *) p);
 ap = &p->in_addr;
 memset(ap, '\0', sizeof(struct sockaddr_in));
 ap->sin_family = AF_INET;

Any ideas is much appreciated. Any special debug info you need, please
let me know.

Also, as a side note, I have monitorurl set as well

cache_peer 10.10.10.20 parent 80 0 no-query no-digest originserver
monitorinterval=30 monitorurl=http://10.10.10.20/test.jpg

Thank you!


Re: [squid-users] I Want To Ping A Cache Object

2007-10-02 Thread Solomon Asare
Hi,
Thanks very much.

Regards,
solomon.


--- Henrik Nordstrom <[EMAIL PROTECTED]>
wrote:

> On mån, 2007-10-01 at 10:14 -0700, Solomon Asare
> wrote:
> > Hi All,
> > thanks for the responses. I guess what I need is
> > something like:
> > 
> > squidclient -m HEAD -H 'Max-Forwards = 0\n'
> > http://www.google.com
> 
> Make that
> squidclient -m HEAD -H 'Cache-Control:
> only-if-cached\n" http://www.google.com
> 
> 
> or if you want it to return "true" on expired
> objects as well:
> squidclient -m HEAD -H 'Cache-Control:
> only-if-cached,max-stale\n" http://www.google.com
> 
> 
> Regards
> Henrik
> 



[squid-users] Re: tcp timeout issue

2007-10-02 Thread Frank Ruiz
Also,

Here are the events leading up to the failures; not too sure if this
could have triggered it:

2007/10/01 22:40:39| parseHttpRequest: Unsupported method '1y¦íiÇNxRä-4gBx wZS
 k¨]5?À4&¸1y¼_éUXÄYfTh»ÒÎaäÇ4ÞáÓåÔÀ+Ü*
  q°`RCf«Ä­³ÇÂÝÚóVÃGy!×=
ÜÕ×Ôw)}
¢YUGªT9GET'
2007/10/01 22:40:39| clientReadRequest: FD 35 (70.231.40.8:1751) Invalid Request
2007/10/01 23:21:14| parseHttpRequest: CONNECT not valid in accelerator mode
2007/10/01 23:21:14| clientReadRequest: FD 19 (62.47.159.103:1814) Invalid Reque
st
2007/10/01 23:21:16| parseHttpRequest: CONNECT not valid in accelerator mode
2007/10/01 23:21:16| clientReadRequest: FD 39 (62.180.224.67:41740) Invalid Requ
est
2007/10/01 23:22:31| parseHttpRequest: Unsupported method 'TRACK'
2007/10/01 23:22:31| clientReadRequest: FD 18 (62.153.251.221:55305) Invalid Req
uest
2007/10/01 23:22:32| parseHttpRequest: Unsupported method 'TRACK'
2007/10/01 23:22:32| clientReadRequest: FD 33 (67.174.50.122:1676) Invalid Reque
st
2007/10/01 23:22:34| parseHttpRequest: Unsupported method 'CFYZ'
2007/10/01 23:22:34| clientReadRequest: FD 27 (89.55.123.193:63949) Invalid Requ
est
2007/10/01 23:24:17| parseHttpRequest: Unsupported method 'BADMTHD'
2007/10/01 23:24:17| clientReadRequest: FD 20 (89.52.178.46:64116) Invalid Reque
st
2007/10/01 23:24:18| parseHttpRequest: CONNECT not valid in accelerator mode
2007/10/01 23:24:18| clientReadRequest: FD 17 (82.228.47.219:2004) Invalid Reque
st
2007/10/01 23:24:18| parseHttpRequest: CONNECT not valid in accelerator mode
2007/10/01 23:24:18| clientReadRequest: FD 16 (85.183.133.10:59029) Invalid Requ
est
2007/10/01 23:24:18| parseHttpRequest: CONNECT not valid in accelerator mode
2007/10/01 23:24:18| clientReadRequest: FD 34 (82.98.89.2:63530) Invalid Request
2007/10/01 23:24:19| parseHttpRequest: CONNECT not valid in accelerator mode
2007/10/01 23:24:19| clientReadRequest: FD 40 (217.166.134.202:5735) Invalid Req
uest
2007/10/01 23:24:35| clientReadRequest: FD 47 (217.5.231.249:36369) Invalid Requ
est


On 10/2/07, Frank Ruiz <[EMAIL PROTECTED]> wrote:
> Greetings,
>
> I patched squid2.6 stable 14 with the tcp probe patch.
>
> It patched two files:
>
> cache_cf.c
> neighbors.c
>
> However, After about 14 hours of good runtime, my response times,
> began to suck, and began to see errors again indicative of the tcp
> probe issue:
>
> 2007/10/02 01:57:15| Detected REVIVED Parent: 10.10.10.20
> 2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
> 2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
> 2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
> 2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
> 2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
> 2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
> 2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
> 2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
> 2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
> 2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
> 2007/10/02 01:57:16| Detected DEAD Parent: 10.10.10.20
> 2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
>
>
> The origin server is available, however I keep getting
> revivied/connectionfailed/dead
>
> It seems that the only way to recover from this is a restart.
>
> I am running solaris 10, and I had to download the gnu patch utility
> in order to patch the src.
>
> Here was the patch applied.
>
> Index: src/cache_cf.c
> ===
> RCS file: /cvsroot/squid/squid/src/cache_cf.c,v
> retrieving revision 1.470
> diff -u -p -r1.470 cache_cf.c
> --- src/cache_cf.c  20 Jul 2007 21:08:47 -  1.470
> +++ src/cache_cf.c  28 Aug 2007 23:46:47 -
> @@ -1621,6 +1621,7 @@ parse_peer(peer ** head)
> p->stats.logged_state = PEER_ALIVE;
> p->monitor.state = PEER_ALIVE;
> p->monitor.interval = 300;
> +p->tcp_up = PEER_TCP_MAGIC_COUNT;
> if ((token = strtok(NULL, w_space)) == NULL)
>self_destruct();
> p->host = xstrdup(token);
> Index: src/neighbors.c
> ===
> RCS file: /cvsroot/squid/squid/src/neighbors.c,v
> retrieving revision 1.318
> diff -u -p -r1.318 neighbors.c
> --- src/neighbors.c 20 Jul 2007 21:08:47 -  1.318
> +++ src/neighbors.c 28 Aug 2007 23:46:47 -
> @@ -1010,12 +1010,13 @@ peerDNSConfigure(const ipcache_addrs * i
>debug(0, 0) ("WARNING: No IP address found for '%s'!\n", p->host);
>return;
> }
> -p->tcp_up = PEER_TCP_MAGIC_COUNT;
> for (j = 0; j < (int) ia->count && j < PEER_MAX_ADDRESSES; j++) {
>p->addresses[j] = ia->in_addrs[j];
>debug(15, 2) ("--> IP address #%d: %s\n", j, 
> inet_ntoa(p->addresses[j]))
> ;
>p->n_addresses++;
> }
> +if (!p->tcp_up)
> +   peerProbeConnect((peer *) p);
> ap = &p->in_addr;
> memset(ap, '\0', sizeof(

[squid-users] Re: tcp timeout issue

2007-10-02 Thread Frank Ruiz
Also,

Here is what was patched based on a diff performed:

server01# diff neighbors.c neighbors.c~
1016a1017
> p->tcp_up = PEER_TCP_MAGIC_COUNT;
1022,1023d1022
< if (!p->tcp_up)
<   peerProbeConnect((peer *) p);
server01# diff cache_cf.c cache_cf.c~
1629d1628
< p->tcp_up = PEER_TCP_MAGIC_COUNT;
server01#


On 10/2/07, Frank Ruiz <[EMAIL PROTECTED]> wrote:
> Greetings,
>
> I patched squid2.6 stable 14 with the tcp probe patch.
>
> It patched two files:
>
> cache_cf.c
> neighbors.c
>
> However, After about 14 hours of good runtime, my response times,
> began to suck, and began to see errors again indicative of the tcp
> probe issue:
>
> 2007/10/02 01:57:15| Detected REVIVED Parent: 10.10.10.20
> 2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
> 2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
> 2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
> 2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
> 2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
> 2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
> 2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
> 2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
> 2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
> 2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
> 2007/10/02 01:57:16| Detected DEAD Parent: 10.10.10.20
> 2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed
>
>
> The origin server is available, however I keep getting
> revivied/connectionfailed/dead
>
> It seems that the only way to recover from this is a restart.
>
> I am running solaris 10, and I had to download the gnu patch utility
> in order to patch the src.
>
> Here was the patch applied.
>
> Index: src/cache_cf.c
> ===
> RCS file: /cvsroot/squid/squid/src/cache_cf.c,v
> retrieving revision 1.470
> diff -u -p -r1.470 cache_cf.c
> --- src/cache_cf.c  20 Jul 2007 21:08:47 -  1.470
> +++ src/cache_cf.c  28 Aug 2007 23:46:47 -
> @@ -1621,6 +1621,7 @@ parse_peer(peer ** head)
> p->stats.logged_state = PEER_ALIVE;
> p->monitor.state = PEER_ALIVE;
> p->monitor.interval = 300;
> +p->tcp_up = PEER_TCP_MAGIC_COUNT;
> if ((token = strtok(NULL, w_space)) == NULL)
>self_destruct();
> p->host = xstrdup(token);
> Index: src/neighbors.c
> ===
> RCS file: /cvsroot/squid/squid/src/neighbors.c,v
> retrieving revision 1.318
> diff -u -p -r1.318 neighbors.c
> --- src/neighbors.c 20 Jul 2007 21:08:47 -  1.318
> +++ src/neighbors.c 28 Aug 2007 23:46:47 -
> @@ -1010,12 +1010,13 @@ peerDNSConfigure(const ipcache_addrs * i
>debug(0, 0) ("WARNING: No IP address found for '%s'!\n", p->host);
>return;
> }
> -p->tcp_up = PEER_TCP_MAGIC_COUNT;
> for (j = 0; j < (int) ia->count && j < PEER_MAX_ADDRESSES; j++) {
>p->addresses[j] = ia->in_addrs[j];
>debug(15, 2) ("--> IP address #%d: %s\n", j, 
> inet_ntoa(p->addresses[j]))
> ;
>p->n_addresses++;
> }
> +if (!p->tcp_up)
> +   peerProbeConnect((peer *) p);
> ap = &p->in_addr;
> memset(ap, '\0', sizeof(struct sockaddr_in));
> ap->sin_family = AF_INET;
>
> Any ideas is much appreciated. Any special debug info you need, please
> let me know.
>
> Also, as I side note, I have monitorurl set as well
>
> cache_peer 10.10.10.20 parent 80 0 no-query no-digest originserver
> monitorinterval=30 monitorurl=http://10.10.10.20/test.jpg
>
> Thank you!
>


Re: [squid-users] Squid 2.5-STABLE14 Crashing

2007-10-02 Thread Amos Jeffries
> Hi,
>
> For the last couple of days my squid server keeps crashing and restarting
> itself. I have looked at the cache.log file and this is what I get. This
> server has been running fine for the last 2 years:
>

First,
  check that your logs are being rotated properly and haven't taken up all
disk space, and that the system has not run out of inodes.
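
For example (the paths are guesses; point them at your actual log and
cache directories):

df -h /var/log/squid    # free disk space where the logs live
df -i /var/log/squid    # free inodes on that filesystem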

Second,
  upgrade to a currently supported version of squid, 2.6s16+ or 3.0rc1


> (squid)[0x80a1afd]
> /lib/i686/libpthread.so.0[0x4005747e]
> (squid)[0x42028c48]
> (squid)[0x420c12db]
> (squid)[0x420bd350]
> (squid)(regexec+0x65)[0x420c2df5]
> (squid)(vfprintf+0x2d36)[0x804d21a]
> (squid)(vfprintf+0x39a9)[0x804de8d]
> (squid)(vfprintf+0x3d4b)[0x804e22f]

That does not look like cache.log content. If it did come from there it's
seriously screwed.

Amos




[squid-users] Squid 2.6-STABLE16 problems accessing HTTPS site with certificate

2007-10-02 Thread Robert French
Hello,

We have two proxies which allow our users access to the internet, one as
the live box and one as a backup. Both boxes are running Gentoo and Squid
2.6 STABLE16. Recently the live box was replaced with a new server. The OS
and Squid were installed as before with the same configuration file. It now
seems to have developed a problem when accessing HTTPS sites that require a
certificate. When browsing to the site, it prompts for which certificate to
use, then gives a little warning about how the hostname does not match the
URL and then loads half the page. After about 1-2 minutes, a 404 error is
produced in the areas which it hasn't loaded.

The main issue is that the backup proxy, which is running the same version
and same configuration file, does not produce this error and loads the sites
perfectly

I have tried re-emerging Squid, building it from source myself with the same
options and have even copied the binary over from the backup server to the
live one, but it still refuses to load the page.

I know the big change is the new install (the old server has been there for
years and just updated from time to time) but I'm wondering what could be
causing the problem. Other HTTP and HTTPS sites work fine

The log files don't show any errors. The only difference is that the amount
of data transferred is a lot less on the live one than on the backup one
when connecting to the problem sites.

Is there something obvious I should be checking?

I would have thought that even though I've installed a new OS which has
newer versions of bits and pieces than the backup one, this wouldn't make
much of a difference. Perhaps I'm wrong?

Any thoughts or feedback would be appreciated

 
Robert French
Email : [EMAIL PROTECTED]
 


Registered in England & Wales under number 4586709
Renesas Technology Europe Ltd
Dukes Meadow
Millboard Road, Bourne End
Buckinghamshire  SL8 5FH
UK


Re: [squid-users] Startup problems

2007-10-02 Thread Henrik Nordstrom
On tis, 2007-10-02 at 15:27 +0100, Sean O'Reilly wrote:
> I have upgraded squid from squid-2.5STABLE10 to squid-2.6STABLE16.
> 
> When trying to start squid using the original squid.conf I am getting
> the 'no port defined' error
> 
> I do not have an http_port defined in the configuration file. My
> question is why would this work in 2.5 but not in 2.6 ?

Because in 2.5 there was a built-in default of 3128 if none was
specified, but this has been removed in 2.6 to allow operation with only
https_port if one likes.
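
So with 2.6 the port has to be stated explicitly in squid.conf, for
example:

http_port 3128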

Regards
Henrik



Re: [squid-users] tcp timeout issue

2007-10-02 Thread Henrik Nordstrom
On tis, 2007-10-02 at 11:35 -0700, Frank Ruiz wrote:
> Greetings,
> 
> I patched squid2.6 stable 14 with the tcp probe patch.

Why not upgrade to 2.6.STABLE16 instead?

> However, After about 14 hours of good runtime, my response times,
> began to suck, and began to see errors again indicative of the tcp
> probe issue:
> 
> 2007/10/02 01:57:15| Detected REVIVED Parent: 10.10.10.20
> 2007/10/02 01:57:16| TCP connection to 10.10.10.20/80 failed

This means Squid could not connect to the peer. Might be as simple as
there being too many connections to the peer already.

Regards
Henrik



Re: [squid-users] Squid 2.6-STABLE16 problems accessing HTTPS site with certificate

2007-10-02 Thread Amos Jeffries
> Hello,
>
> We have two proxies which allow our users access to the internet, one as
> live box and one as a backup. Both boxes are running Gentoo and Squid 2.6
> STABLE16. Recently the live box was replaced with a new server. The OS and
> Squid were installed as before with the same configuration file. It now
> seems to have developed a problem when accessing HTTPS sites that require
> a
> certificate. When browsing to the site, it prompts for which certificate
> to
> use, then gives a little warning about how the hostname does not match the
> URL and then loads half the page. After about 1-2mins, a 404 error is
> produced in the areas which it hasn't loaded
>

>
> Is there something obvious I should be checking?

Sounds to me like a host name problem.

Running "squidclient mgr:info" on the live squid will give you the headers
it is sending out. Check for the name in X-Cache: and Via:.

The visible hostname must match the one inside the certificate or the
certificate will be seen as invalid. The hostname is set either in the OS
configuration (/etc/hostname), or overridden in squid.conf with
visible_hostname.
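
For example (the hostname here is a placeholder; use the name your
certificate was issued for):

visible_hostname proxy.example.com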

>
> I would have thought that even though I've installed a new OS which has
> newer versions of bits and pieces than the backup one, this wouldn't make
> much of a difference. Perhaps I'm wrong?

Unless your squid.conf contains visible_hostname, the OS is in charge of
the FQDN squid sends out as its identifier.

HTH
Amos




[squid-users] unable to view flash pictures

2007-10-02 Thread revathi ganesh
Hello Gurus, 

 I have blocked the viewing of video files, but now users are unable to
view even flash pictures such as greeting cards etc.


I have done the following in squid.conf  
---
acl audiofiles req_mime_type -i ^audio/.*
acl videofiles req_mime_type -i ^video/.*
acl media_files rep_mime_type -i ^audio/ ^video/
acl dl-filter urlpath_regex -i "/PROXY/file_ext.block"
--

  Please help me in framing ACL rules so that I can
allow users to view  any flash related files in their
greeting cards etc..  

thanks Cholam20 



