Re: [squid-users] how does squid work as a transparent proxy?

2008-07-07 Thread Amos Jeffries
> Hello,
>
> I am new to Squid and I'd like to ask a question about its internal
> workings when operating as a transparent proxy.
>
> I saw that one configure the host kernel with an iptables rule in the
> nat table with the REDIRECT target to match packets destined to some
> port (e.g 80) and redirect them to some other port on the local host
> (e.g 3128).  From what I understand, when iptables matches a packet
> against this rule, it overwrites the packet's destination IP address and
> TCP port with, respectively, the local IP address and 3128.
>
> How does Squid (e.g in the case of an HTTP request) know the IP address
> of the original web server that the packet was destined to?

iptables (the kernel) keeps an internal record of these changes, in case
there is any returning traffic to deal with.
Squid has code that asks the kernel what the original destination was.

>  For
> example, if the GET-ed object doesn't exist in cache, how does Squid
> know where to connect() to and request the object?

Squid processes the request and locates the domain name being requested.
Clients place it in the Host: header of their request.
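A minimal sketch of that Host:-header extraction, written in Python rather than Squid's own C code (purely illustrative; the function name is invented):

```python
def origin_from_request(raw_request):
    """Extract the origin server's host (and optional port) from the
    Host: header of an intercepted HTTP/1.x request.  Returns (host,
    port), or None when the request carries no Host: header."""
    # Headers follow the request line and end at the first empty line.
    for line in raw_request.split("\r\n")[1:]:
        if not line:
            break
        name, _, value = line.partition(":")
        if name.strip().lower() == "host":
            host, _, port = value.strip().partition(":")
            return host, int(port) if port else 80
    return None
```

Squid would then run a DNS lookup on the returned name to find the server's IP address before connecting.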

>  I tried looking at
> the source code and it looks like in some cases Squid might be parsing
> the domain name from the GET request and using a DNS lookup on this
> domain name to determine the IP address.  Is this always the case?

Yes. There are serious security problems with any other method of
handling, so don't even ask us to change it ;-).

>
> If yes, does Squid do something similar in the case of other supported
> protocols - SSL, gopher?

gopher, ftp, whois, and https are only supported if the client sends an HTTP
URL request through a known proxy. Transparent interception of them is not
supported.

HTTPS has an sslBump feature in the upcoming 3.1 which allows some small
degree of interception, but the nature of SSL makes the proxy visible to
the client as a possible man-in-the-middle attack (which it actually is).

Amos




Re: [squid-users] reverse proxy with domains

2008-07-07 Thread Thomas E. Maleshafske

Henrik Nordstrom wrote:

On mån, 2008-07-07 at 18:05 -0500, Thomas E. Maleshafske wrote:
  

I managed to figure it out on a hunch.
http_port 80 accel vhost
forwarded_for on

refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320

cache_peer 192.168.0.11 parent 80 0 originserver default
acl maleshafske dstdomain .example.com
http_access allow maleshafske

the key being the "." before example.com

That makes it function as a wild card




You could simplify even further

never_direct allow all
http_access allow all

with the never_direct rule being optional (it is implied by accel mode on
the http_port).

Regards
Henrik
  
But if you're in a hosting environment, a very quick and effective way of
taking a client offline for one reason or another is to comment out their
acl. It could be that they forgot to pay a renewal or something of that
nature, and you give them a grace period to fix it.


Doing it this way has its benefits, but I see your point too.

V/r
Thomas E. Maleshafske


Re: [squid-users] adding a parameter to a URL / Problem in the url_redirect program

2008-07-07 Thread Henrik Nordstrom
On mån, 2008-07-07 at 05:49 -0700, Shaine wrote:
> I did it the same way, but the client IP doesn't come in the second
> position. It's in the third.

It's the second:

http://www.squid-cache.org/ 127.0.0.1/localhost.localdomain - GET - myip=127.0.0.1 myport=3128

unless you have enabled url_rewrite_concurrency, in which case all
parameters are shifted one step due to the request identifier added in
front... but then the URL is the second:

0 http://www.squid-cache.org/ 127.0.0.1/localhost.localdomain - GET - myip=127.0.0.1 myport=3128
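For illustration, a helper-side sketch of this parsing in Python (not Squid code; field meanings inferred from the example lines above):

```python
def parse_rewrite_input(line, concurrent=False):
    """Split one line that Squid hands to a url_rewrite helper.

    Plain format:  URL client-ip/fqdn ident method [extras...]
    With url_rewrite_concurrency enabled an integer request ID is
    prepended, shifting every field one position to the right.
    Returns (request_id, url, client_ip, method); request_id is None
    without concurrency."""
    fields = line.split()
    request_id = fields.pop(0) if concurrent else None
    url, client, _ident, method = fields[:4]
    client_ip = client.split("/", 1)[0]  # "ip/fqdn" -> just the ip
    return request_id, url, client_ip, method
```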

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] how does squid work as a transparent proxy?

2008-07-07 Thread Henrik Nordstrom
On mån, 2008-07-07 at 19:46 -0400, Peter Djalaliev wrote:
> (e.g 3128).  From what I understand, when iptables matches a packet 
> against this rule, it overwrites the packet's destination IP address and 
> TCP port with, respectively, the local IP address and 3128.
> 
> How does Squid (e.g in the case of an HTTP request) know the IP address 
> of the original web server that the packet was destined to?

iptables provides a getsockopt interface where one can query the
original destination address of the connetion associated with the
socket.
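In Python rather than Squid's C, that query looks roughly like this (the SO_ORIGINAL_DST value of 80 comes from <linux/netfilter_ipv4.h> and is Linux-specific; this is an illustrative sketch, not Squid code):

```python
import socket
import struct

# SO_ORIGINAL_DST is not exposed by Python's socket module; the value 80
# comes from <linux/netfilter_ipv4.h>.  Linux-specific assumption.
SO_ORIGINAL_DST = 80

def original_destination(sock):
    """Return the (ip, port) the client originally connected to, before
    any iptables REDIRECT rewrote it.  Falls back to the socket's own
    local address when the kernel has no NAT record for the connection."""
    try:
        # The kernel fills in a struct sockaddr_in (16 bytes):
        # 2 bytes family, 2 bytes port (network order), 4 bytes address.
        raw = sock.getsockopt(socket.SOL_IP, SO_ORIGINAL_DST, 16)
        port, packed_ip = struct.unpack_from("!2xH4s", raw)
        return socket.inet_ntoa(packed_ip), port
    except (OSError, AttributeError):
        # No conntrack entry (or non-Linux platform): the destination
        # was never rewritten, so the local address is the real one.
        return sock.getsockname()
```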

> If yes, does Squid do something similar in the case of other supported 
> protocols - SSL, gopher?

No, Squid is an HTTP proxy, and only accepts HTTP requests. That HTTP
request may be for a gopher:// object, but it's still an HTTP request.
There is no gopher server component in Squid, only a gopher client to be
able to fetch gopher:// URLs when requested by its HTTP clients.

Regards
Henrik




Re: [squid-users] reverse proxy with domains

2008-07-07 Thread Henrik Nordstrom
On mån, 2008-07-07 at 18:05 -0500, Thomas E. Maleshafske wrote:
> I managed to figure it out on a hunch.
> http_port 80 accel vhost
> forwarded_for on
> 
> refresh_pattern ^ftp:           1440    20%     10080
> refresh_pattern ^gopher:        1440    0%      1440
> refresh_pattern .               0       20%     4320
> 
> cache_peer 192.168.0.11 parent 80 0 originserver default
> acl maleshafske dstdomain .example.com
> http_access allow maleshafske
> 
> the key being the "." before example.com
> 
> That makes it function as a wild card


You could simplify even further

never_direct allow all
http_access allow all

with the never_direct rule being optional (it is implied by accel mode on
the http_port).

Regards
Henrik




[squid-users] how does squid work as a transparent proxy?

2008-07-07 Thread Peter Djalaliev

Hello,

I am new to Squid and I'd like to ask a question about its internal 
workings when operating as a transparent proxy.


I saw that one configures the host kernel with an iptables rule in the 
nat table with the REDIRECT target to match packets destined to some 
port (e.g. 80) and redirect them to some other port on the local host 
(e.g. 3128).  From what I understand, when iptables matches a packet 
against this rule, it overwrites the packet's destination IP address and 
TCP port with, respectively, the local IP address and 3128.
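For reference, the interception rule described above is typically written something like this (the interface name and ports are assumptions; adapt to the actual setup):

```shell
# Rewrite web traffic arriving from clients so it lands on the local
# Squid listener instead of the original web server.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
         -j REDIRECT --to-port 3128
```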


How does Squid (e.g in the case of an HTTP request) know the IP address 
of the original web server that the packet was destined to?  For 
example, if the GET-ed object doesn't exist in cache, how does Squid 
know where to connect() to and request the object?  I tried looking at 
the source code and it looks like in some cases Squid might be parsing 
the domain name from the GET request and using a DNS lookup on this 
domain name to determine the IP address.  Is this always the case?


If yes, does Squid do something similar in the case of other supported 
protocols - SSL, gopher?


Regards,
Peter Djalaliev



Re: [squid-users] reverse proxy with domains

2008-07-07 Thread Thomas E. Maleshafske

Henrik Nordstrom wrote:

On mån, 2008-07-07 at 15:10 -0500, Thomas E. Maleshafske wrote:
  

IN squid.conf



It's not needed to list the sites in squid.conf unless you need to send
different sites to different backend web servers.

If you have only one web server (or cluster) then just cache_peer is
sufficient without cache_peer_access/domain.

If you need to route requests to different servers, then ACLs need to be
built on domains, hostnames or other URL patterns, enabling Squid to
decide where to route the request.

Regards
Henrik
  

I managed to figure it out on a hunch.
http_port 80 accel vhost
forwarded_for on

refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320

cache_peer 192.168.0.11 parent 80 0 originserver default
acl maleshafske dstdomain .example.com
http_access allow maleshafske

the key being the "." before example.com

That makes it function as a wild card





Re: [squid-users] Squid-3.0.STABLE7 Compilation errors on SPARC

2008-07-07 Thread Frog

Guido Serassio wrote:

Hi,

So the patch should be applied to Squid3 STABLE; the failure during build 
is not correct  :-)


Please check with the file(1) command whether your binary is 32-bit or 
64-bit; I suspect that it's a 32-bit binary.


Regards

Guido



Hi Guido,

Indeed the file is a 32-bit binary, which I never noticed until now!

ELF 32-bit MSB executable SPARC Version 1, dynamically linked, not stripped

The server itself is:

SunOS sparc1 5.10 Generic_120011-14 sun4u sparc SUNW,Sun-Blade-1000
UltraSPARC III+ at 900 MHz.

Regards,
Frog.



Re: [squid-users] Squid-3.0.STABLE7 Compilation errors on SPARC

2008-07-07 Thread Henrik Nordstrom
On tis, 2008-07-08 at 00:31 +0200, Guido Serassio wrote:

> So the patch should be applied to Squid3 STABLE, the fail during 
> build is not correct  :-)

I am thinking we probably should stop using the getconf build
environments by default.

I.e. the result which is now (from tonight) seen if one uses both

  --with-large-files --with-build-environment=default

which should only set -D_FILE_OFFSET_BITS=64 to enable large files on
most 32-bit platforms, and is ignored on most 64-bit platforms.

On nearly all platforms the default CFLAGS are the most appropriate ones,
with the other modes "optional or incomplete".

Solaris on SPARC may be an exception, however, if the Sun compilers still
generate 32-bit code by default even on 64-bit platforms... but then
there is the option of using --with-build-environment=... to select a
more appropriate mode, or of manually adding appropriate CFLAGS and
CXXFLAGS when running configure.

Regards
Henrik




Re: [squid-users] Squid-3.0.STABLE7 Compilation errors on SPARC

2008-07-07 Thread Frog

Guido Serassio wrote:

Hi,

At 21.26 07/07/2008, Frog wrote:

Hi All,

I have a machine here that is running 3.0.STABLE4 and I wish to upgrade
it to STABLE7. I compiled and installed STABLE4 with no problems.
However while attempting to compile the latest release I am getting lots
of errors during the configure script which are repeatedly saying to
report a bug.


It could be related to this problem:
http://www.squid-cache.org/Versions/v3/HEAD/changesets/b9055.patch

Please try to build without the --with-large-files option.

Let us know the result.

Regards

Guido



Hello Guido,

Thank you for your suggestion.

Removing --with-large-files worked. Thank you, Henrik, for your reply 
suggesting the same.


Best regards
Frog


Re: [squid-users] Squid-3.0.STABLE7 Compilation errors on SPARC

2008-07-07 Thread Henrik Nordstrom
On mån, 2008-07-07 at 20:26 +0100, Frog wrote:
> Hi All,
> 
> I have a machine here that is running 3.0.STABLE4 and I wish to upgrade 
> it to STABLE7. I compiled and installed STABLE4 with no problems. 
> However while attempting to compile the latest release I am getting lots 
> of errors during the configure script which are repeatedly saying to 
> report a bug.
> 
> I am attempting to configure with the following options:
> 
> ./configure --prefix=/usr/local 
> --enable-storeio=ufs,aufs,coss,diskd,null --enable-snmp 
> --enable-delay-pools --enable-cache-digests --enable-underscores 
> --enable-referer-log --enable-useragent-log 
> --enable-auth=basic,digest,ntlm --enable-carp 
> --enable-follow-x-forwarded-for --with-large-files --enable-async-io 
> --enable-removal-policies=lru,heap --enable-icmp --enable-icap-client

Is this a 32-bit or 64-bit SPARC? I guess it's 64-bit, and in that case
you should not specify --with-large-files.

Regards
Henrik




Re: [squid-users] reverse proxy with domains

2008-07-07 Thread Henrik Nordstrom
On mån, 2008-07-07 at 15:10 -0500, Thomas E. Maleshafske wrote:
> IN squid.conf

It's not needed to list the sites in squid.conf unless you need to send
different sites to different backend web servers.

If you have only one web server (or cluster) then just cache_peer is
sufficient without cache_peer_access/domain.

If you need to route requests to different servers, then ACLs need to be
built on domains, hostnames or other URL patterns, enabling Squid to
decide where to route the request.
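A hypothetical sketch of such routing (peer names, IP addresses and domains are invented for illustration, not taken from this thread):

```
cache_peer 192.168.0.11 parent 80 0 originserver name=web1
cache_peer 192.168.0.12 parent 80 0 originserver name=web2
acl site1 dstdomain .example.com
acl site2 dstdomain .example.org
cache_peer_access web1 allow site1
cache_peer_access web1 deny all
cache_peer_access web2 allow site2
cache_peer_access web2 deny all
http_access allow site1
http_access allow site2
```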

Regards
Henrik




Re: [squid-users] Squid deny access to some part of website

2008-07-07 Thread Leonardo Rodrigues Magalhães



Alexandre augusto wrote:

Hi Leonardo,

The problem is that the website shows me only part of the website's information.
The pictures (in most cases Flash) are denied.

Do you have any idea ?
  


   Sure!!! First idea: look for 403 DENIED entries, not the 407 ones. The 
407 ones are part of the NTLM authentication process and mean nothing.


   If you find some 403 DENIED entries, then you have some rule blocking 
them! It can even be the last 'deny all' one, but if you find some 403 
denials, then something is REALLY denying the access. Or it can be the 
case that no rules are accepting the requests, so they are getting blocked 
by the 'deny all' one.


   There's nothing wrong with the logs you sent. The 407/denied entries 
are not the problem.


   Please post logs showing 403 denials for the flash URLs... if 
they don't exist, then there is probably no Squid access policy problem.
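As a quick illustration, filtering such a log for real 403 denials could look like this in Python (the log format is the httpd-emulated one shown in this thread; the helper function is invented, not a Squid tool):

```python
def real_denials(log_lines):
    """From access.log lines (httpd-emulated format), keep only the 403
    denials; the 407 entries are just NTLM's challenge-response."""
    denials = []
    for line in log_lines:
        # The status code is the first token after the quoted request.
        parts = line.split('"')
        if len(parts) < 3:
            continue  # not a request-logging line in this format
        fields = parts[2].split()
        if fields and fields[0] == "403":
            denials.append(line)
    return denials
```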


--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
[EMAIL PROTECTED]
My SPAMTRAP, do not email it






Re: [squid-users] Squid deny access to some part of website

2008-07-07 Thread Alexandre augusto
Hi Leonardo,

The problem is that the website shows me only part of the website's information.
The pictures (in most cases Flash) are denied.

Do you have any idea ?

Thanks you

Alexandre

--- On Mon, 7/7/08, Leonardo Rodrigues Magalhães <[EMAIL PROTECTED]> wrote:

> From: Leonardo Rodrigues Magalhães <[EMAIL PROTECTED]>
> Subject: Re: [squid-users] Squid deny access to some part of website
> To: [EMAIL PROTECTED]
> Cc: squid-users@squid-cache.org
> Date: Monday, 7 July 2008, 18:01
> Alexandre augusto wrote:
> > Hi guys,
> >
> > On the access.log Squid shows TCP_DENIED entries for some parts of
> > the website.
> >
> > I'm authenticating my users using NTLM, and all entries in access.log
> > that DENIED part of the site do not show the standard domain\username
> > in the log, only "- -"...
> 
> This is the EXPECTED behavior for NTLM authentication: 2 (two)
> denied/407 log entries and then the 'allow' hit, some MISS or HIT.
>
> This is expected and should be FAQed somewhere.
> 
> -- 
> 
> 
>   Atenciosamente / Sincerily,
>   Leonardo Rodrigues
>   Solutti Tecnologia
>   http://www.solutti.com.br
> 
>   Minha armadilha de SPAM, NÃO mandem email
>   [EMAIL PROTECTED]
>   My SPAMTRAP, do not email it




Re: [squid-users] Squid deny access to some part of website

2008-07-07 Thread Leonardo Rodrigues Magalhães



Leonardo Rodrigues Magalhães wrote:



Alexandre augusto wrote:

Hi guys,

On the access.log Squid shows TCP_DENIED entries for some parts of the 
website.


I'm authenticating my users using NTLM, and all entries in access.log 
that DENIED part of the site do not show the standard domain\username 
in the log, only "- -"...
  


   This is the EXPECTED behavior for NTLM authentication: 2 (two) 
denied/407 log entries and then the 'allow' hit, some MISS or HIT.


   This is expected and should be FAQed somewhere.




http://wiki.squid-cache.org/SquidFaq/ProxyAuthentication#head-e4803074fbb62b906724838137cc39c8481c1f16


Note that when using NTLM authentication, you will see two 
"TCP_DENIED/407" entries in access.log for every request. This is due to 
the challenge-response process of NTLM.









Re: [squid-users] Squid deny access to some part of website

2008-07-07 Thread Leonardo Rodrigues Magalhães



Alexandre augusto wrote:

Hi guys,

On the access.log Squid shows TCP_DENIED entries for some parts of the website.

I'm authenticating my users using NTLM, and all entries in access.log that 
DENIED part of the site do not show the standard domain\username in the log, 
only "- -"...
  


   This is the EXPECTED behavior for NTLM authentication: 2 (two) 
denied/407 log entries and then the 'allow' hit, some MISS or HIT.


   This is expected and should be FAQed somewhere.







[squid-users] Squid deny access to some part of website

2008-07-07 Thread Alexandre augusto
Hi guys,

On the access.log Squid shows TCP_DENIED entries for some parts of the website.

I'm authenticating my users using NTLM, and all entries in access.log that 
DENIED part of the site do not show the standard domain\username in the log, 
only "- -"...

For example:

192.168.15.13 - contac\xtz0001 [07/Jul/2008:17:42:07 -0300] "GET 
http://c.extra.com.br/content/consul.swf HTTP/1.1" 503 1924 TCP_MISS:DIRECT

192.168.15.13 - - [07/Jul/2008:17:42:07 -0300] "GET 
http://c.extra.com.br/content/brastemp.swf HTTP/1.1" 407 2183 TCP_DENIED:NONE

192.168.15.13 - - [07/Jul/2008:17:42:07 -0300] "GET 
http://c.extra.com.br/content/brastemp.swf HTTP/1.1" 407 2257 TCP_DENIED:NONE

192.168.15.13 - contac\xtz0001 [07/Jul/2008:17:42:07 -0300] "GET 
http://c.extra.com.br/content/brastemp.swf HTTP/1.1" 503 1928 TCP_MISS:DIRECT

192.168.15.13 - - [07/Jul/2008:17:42:07 -0300] "GET 
http://c.extra.com.br/content/lg.swf HTTP/1.1" 407 2165 TCP_DENIED:NONE

192.168.15.13 - - [07/Jul/2008:17:42:07 -0300] "GET 
http://c.extra.com.br/content/lg.swf HTTP/1.1" 407 2239 TCP_DENIED:NONE

Looking at HTTP manuals, I found that "407" is a Proxy Authentication Required 
response.

Is it possible that I have problems authenticating all requests?

Also, I'm using the SquidGuard feature, but I have tried disabling it in 
squid.conf without success.

Why does Squid deny some parts of the site and allow others?

Thanks in advance

Alexandre




Re: [squid-users] reverse proxy with domains

2008-07-07 Thread Thomas E. Maleshafske

Henrik Nordstrom wrote:

On mån, 2008-07-07 at 13:43 -0500, Thomas E. Maleshafske wrote:
  
I have the vhost directive defined but still have to list each 
separate subdomain.



List where?

In squid.conf, or in your web server?

  
I might have found a solution using Pound but 
haven't tested it yet with Squid still being the proxy... Could it 
also be a problem that I defined the default host?



In Squid?

No. The defaultsite= in Squid is just used if there is no Host header in
the request.

Regards
Henrik
  

   In squid.conf.

I am using the DTC hosting panel, so my Apache configurations are correct.


Re: [squid-users] Squid-3.0.STABLE7 Compilation errors on SPARC

2008-07-07 Thread Guido Serassio

Hi,

At 21.26 07/07/2008, Frog wrote:

Hi All,

I have a machine here that is running 3.0.STABLE4 and I wish to upgrade
it to STABLE7. I compiled and installed STABLE4 with no problems.
However while attempting to compile the latest release I am getting lots
of errors during the configure script which are repeatedly saying to
report a bug.

I am attempting to configure with the following options:

./configure --prefix=/usr/local
--enable-storeio=ufs,aufs,coss,diskd,null --enable-snmp
--enable-delay-pools --enable-cache-digests --enable-underscores
--enable-referer-log --enable-useragent-log
--enable-auth=basic,digest,ntlm --enable-carp
--enable-follow-x-forwarded-for --with-large-files --enable-async-io
--enable-removal-policies=lru,heap --enable-icmp --enable-icap-client

The error that occurs in config.log for various headers look like the
following:


cut



When running the configuration script with just --prefix=/usr/local
results in no errors. So obviously it looks like one of my configuration
options is not compatible.

My GCC compiler is 3.4.3 as provided by the OS.
PATH=/usr/sbin:/usr/bin:/usr/sfw/bin/:/usr/ccs/bin/

Would anyone have experienced this before or seen something similar?


It could be related to this problem:
http://www.squid-cache.org/Versions/v3/HEAD/changesets/b9055.patch

Please try to build without the --with-large-files option.

Let us know the result.

Regards

Guido



-

Guido Serassio
Acme Consulting S.r.l. - Microsoft Certified Partner
Via Lucia Savarino, 1   10098 - Rivoli (TO) - ITALY
Tel. : +39.011.9530135  Fax. : +39.011.9781115
Email: [EMAIL PROTECTED]
WWW: http://www.acmeconsulting.it/



Re: [squid-users] reverse proxy with domains

2008-07-07 Thread Henrik Nordstrom
On mån, 2008-07-07 at 13:43 -0500, Thomas E. Maleshafske wrote:
> I have the vhost directive defined but still have to list each
> separate subdomain.

List where?

In squid.conf, or in your web server?

> I might have found a solution using Pound but
> haven't tested it yet with Squid still being the proxy... Could it
> also be a problem that I defined the default host?

In Squid?

No. The defaultsite= in Squid is just used if there is no Host header in
the request.

Regards
Henrik




[squid-users] Squid-3.0.STABLE7 Compilation errors on SPARC

2008-07-07 Thread Frog

Hi All,

I have a machine here that is running 3.0.STABLE4 and I wish to upgrade 
it to STABLE7. I compiled and installed STABLE4 with no problems. 
However, while attempting to compile the latest release I am getting lots 
of errors from the configure script which repeatedly say to report a bug.


I am attempting to configure with the following options:

./configure --prefix=/usr/local 
--enable-storeio=ufs,aufs,coss,diskd,null --enable-snmp 
--enable-delay-pools --enable-cache-digests --enable-underscores 
--enable-referer-log --enable-useragent-log 
--enable-auth=basic,digest,ntlm --enable-carp 
--enable-follow-x-forwarded-for --with-large-files --enable-async-io 
--enable-removal-policies=lru,heap --enable-icmp --enable-icap-client


The errors that occur in config.log for various headers look like the 
following:


configure:24403: result: no
configure:24407: checking arpa/inet.h presence
configure:24422: gcc -E  conftest.c
configure:24428: $? = 0
configure:24442: result: yes
configure:24455: WARNING: arpa/inet.h: present but cannot be compiled
configure:24457: WARNING: arpa/inet.h: check for missing prerequisite headers?
configure:24459: WARNING: arpa/inet.h: see the Autoconf documentation
configure:24461: WARNING: arpa/inet.h: section "Present But Cannot Be Compiled"
configure:24463: WARNING: arpa/inet.h: proceeding with the preprocessor's result
configure:24465: WARNING: arpa/inet.h: in the future, the compiler will take precedence

configure:24475: checking for arpa/inet.h
configure:24483: result: yes

The errors also appear for other headers such as assert.h, crypt.h, etc.

While the configure script was running I noticed an autoconf error 
(Header Present But Cannot Be Compiled), and checking the Autoconf 
documentation brought me to a page that, not being a programmer, I 
couldn't make much sense of.


Running the configure script with just --prefix=/usr/local results in 
no errors, so it looks like one of my configure options is not 
compatible.


My GCC compiler is 3.4.3 as provided by the OS.  
PATH=/usr/sbin:/usr/bin:/usr/sfw/bin/:/usr/ccs/bin/


Would anyone have experienced this before or seen something similar?

Best regards
Frog




Re: [squid-users] reverse proxy with domains

2008-07-07 Thread Thomas E. Maleshafske

Henrik Nordstrom wrote:

On mån, 2008-07-07 at 07:50 -0500, Thomas E. Maleshafske wrote:
  
I have successfully implemented a reverse proxy for my HTTP site.  My 
question is whether there is an option so that it accepts on the basis 
of the domain: instead of listing www.example.com, just have example.com 
and it will serve any of the subdomains if they exist.  
I appreciate the assistance.



Maybe you are looking for the vhost option, enabling host/domain based
virtual hosts?

Still relies on a web server actually implementing these...

Regards
Henrik
  
I have the vhost directive defined but still have to list each 
separate subdomain.  I might have found a solution using Pound but 
haven't tested it yet with Squid still being the proxy... Could it 
also be a problem that I defined the default host?



Appreciate the Help


Re: [squid-users] Pseudo-random 403 Forbidden...

2008-07-07 Thread John Doe
Hi again...

I don't know what happened (whether I changed something
or wrongly thought it was fixed), but the siblings are not
talking anymore... at all.  :(
No error message, no denials...

So let's start from the beginning...

configure --prefix=$PREFIX \
--enable-time-hack \
--enable-underscores \
--with-pthreads \
--enable-storeio="aufs,coss,diskd,null,ufs" \
--enable-removal-policies="heap,lru" \
--enable-delay-pools \
--enable-useragent-log \
--enable-referer-log \
--enable-snmp \
--enable-cachemgr-hostname=localhost \
--enable-arp-acl \
--enable-ssl \
--enable-cache-digests \
--enable-epoll \
--enable-linux-netfilter \
--disable-ident-lookups \
--disable-internal-dns \
--with-large-files \
--with-maxfd=65535

Am I missing something?
Do I need any of these for the siblings to talk? --enable-icmp, --enable-htcp, 
--enable-forw-via-db...

Here's a minimal conf for squid1:

pid_filename /var/run/squid1.pid
cache_effective_user jd
cache_effective_group jd
unique_hostname Squid1

access_log /home/jd/squid/var/log/squid1/access.log squid
cache_log /home/jd/squid/var/log/squid1/cache.log
cache_store_log /home/jd/squid/var/log/squid1/store.log

cache_dir ufs /home/jd/squid/var/spool/squid1 256 16 32
cache_mem 128 MB

http_port 192.168.17.11:8000 accel defaultsite=toto act-as-origin vhost
cache_peer 127.0.0.1 parent 8081 0 no-query originserver no-digest no-netdb-exchange max-conn=256 front-end-https=auto name=apache
cache_peer 192.168.17.12 sibling 8000 3130 proxy-only name=squid2
cache_peer 192.168.17.13 sibling 8000 3130 proxy-only name=squid3
cache_peer 192.168.17.14 sibling 8000 3130 proxy-only name=squid4

icp_port 3130
udp_incoming_address 192.168.17.11
udp_outgoing_address 255.255.255.255

acl all src 0.0.0.0/0
acl from_all src 0.0.0.0/0
acl from_localhost src 127.0.0.1/32
acl from_localnetA src 10.0.0.0/8
acl from_localnetB src 172.16.0.0/12
acl from_localnetC src 192.168.0.0/16
acl to_all dst 0.0.0.0/0
acl to_localhost dst 127.0.0.0/24
acl to_localnetA dst 10.0.0.0/8
acl to_localnetB dst 172.16.0.0/12
acl to_localnetC dst 192.168.0.0/16
acl Safe_ports port 80 81 82 83 # http
acl Safe_ports port 443 # https
acl Safe_ports port 21  # ftp
acl Safe_ports port 1025-65535 # unregistered ports
acl SSL_ports port 443  # https
acl CONNECT method CONNECT
acl manager proto cache_object
acl purge method PURGE

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports 
http_access allow from_localnetC

http_access allow manager from_localhost
http_access allow manager from_localnetA
http_access allow manager from_localnetB
http_access allow manager from_localnetC
http_access deny manager

http_access allow purge from_localhost
http_access allow purge from_localnetA
http_access allow purge from_localnetB
http_access allow purge from_localnetC
http_access deny purge

icp_access allow from_localnetC
icp_access deny all
cache_peer_access apache allow from_localnetC
cache_peer_access apache deny all
miss_access allow from_localnetC
miss_access deny all

http_access allow all
http_reply_access allow all

header_access Cache-Control deny all
header_replace Cache-Control max-age=864000

The cache.log:

2008/07/07 19:23:42| Starting Squid Cache version 2.7.STABLE3 for 
i686-pc-linux-gnu...
2008/07/07 19:23:42| Process ID 27245
2008/07/07 19:23:42| With 1024 file descriptors available
2008/07/07 19:23:42| Using epoll for the IO loop
2008/07/07 19:23:42| helperOpenServers: Starting 5 'dnsserver' processes
2008/07/07 19:23:42| logfileOpen: opening log 
/home/jd/squid/var/log/squid1/useragent.log
2008/07/07 19:23:42| logfileOpen: opening log 
/home/jd/squid/var/log/squid1/referer.log
2008/07/07 19:23:42| logfileOpen: opening log 
/home/jd/squid/var/log/squid1/access.log
2008/07/07 19:23:42| Unlinkd pipe opened on FD 17
2008/07/07 19:23:42| Swap maxSize 262144 KB, estimated 20164 objects
2008/07/07 19:23:42| Target number of buckets: 1008
2008/07/07 19:23:42| Using 8192 Store buckets
2008/07/07 19:23:42| Max Mem  size: 131072 KB
2008/07/07 19:23:42| Max Swap size: 262144 KB
2008/07/07 19:23:42| Local cache digest enabled; rebuild/rewrite every 
3600/3600 sec
2008/07/07 19:23:42| logfileOpen: opening log 
/home/jd/squid/var/log/squid1/store.log
2008/07/07 19:23:42| Rebuilding storage in /home/jd/squid/var/spool/squid1 
(DIRTY)
2008/07/07 19:23:42| Using Least Load store dir selection
2008/07/07 19:23:42| Current Directory is /
2008/07/07 19:23:42| Loaded Icons.
2008/07/07 19:23:42| Accepting accelerated HTTP connections at 192.168.17.11, 
port 8000, FD 18.
2008/07/07 19:23:42| Accepting ICP messages at 192.168.17.11, port 3130, FD 19.
2008/07/07 19:23:42| Accepting SNMP messages on port 3401, FD 20.
2008/07/07 19:23:42| WCCP Disabled.
2008/07/07 19:23:42| WARNING: failed to resolve 192.168.17.11 to a fully 
qualified hostname
2008/07/07 19:23:42| Configuring apache Parent apache/8081/0
2008/07/07 19:23:42| Configuring squid2 Sibling squid2/8000/3130
2008/07/07 19:23:42| Configuring squid3 Sibling squid3/8000/3130
2008/07/0

[squid-users] Re: Squid Issues and Problems

2008-07-07 Thread Henrik Nordstrom
It's Trend Micro's way of telling the ICAP server (IWSS) that the ICAP
client (the proxy) is capable of forwarding the response from the ICAP
server before the entire object has been sent to the ICAP server.

Most others assume this by default without requiring the private "X-TE:
trailers" header.

The ICAP standard does not explicitly cover how ICAP clients should
behave in this regard.

This is used by IWSS both for showing a download progress bar, and also
in trickle mode where the data is slowly sent to the requestor while
scanned for viruses.

I do not know who proposed the "X-TE: trailers" name. It's a very odd
name for the feature, as it

a) does not have anything to do with transfer encoding (TE), and

b) does not have anything to do with trailers.

But as an X-* header it's free to mean anything implementation-specific,
as long as everyone involved privately agrees on what the meaning
actually is...


Regards
Henrik



On mån, 2008-07-07 at 11:01 -0400, Jeremy Hall wrote:
> What do X-TE headers do?
> 
> _J
> 
> >>> <[EMAIL PROTECTED]> 7/7/2008 5:28 AM >>>
> Hi there all,
>  
> Firstly many thanks for all your work on Squid thus far :) 
>  
> I have been testing Squid 3.0 since PRE6 in various configurations, and one
> of the more notable issues I have found is that when Squid is running in
> ICAP mode, coupled with Trend Micro IWSx (InterScan Web Security) - IWSx
> reports that Squid does not support the X-TE trailers for data trickling.
> The error is usually logged when dealing with video from CNN (at first I
> thought all flash video, but YouTube is unaffected) and downloading certain
> MS Hotfixes. There might be other triggers as well - but these seem to be
> the main ones. When I configure IWSx to use a different ICAP server - say
> NetCache or other, there is no issue or error logged and things work as
> expected.
>  
> A quick search of the squid source provided no answers; however, a search of
> the archives shows that there was a patch for Squid 2.5 ICAP dealing with
> X-TE trailers:
>  
> http://www.squid-cache.org/mail-archive/squid-dev/200311/att-0018/squid-icap 
> -2_5-x-auth-user.diff
> http://www.squid-cache.org/~hno/changesets/squid/patches/7972.patch 
>  
>  
> Looking at ICAPModXact.cc I can see that there are some similar references
> to the areas above; however, the code is most certainly above my level of
> expertise to play around with and cobble something together. 
>  
> I was wondering if there were any plans to include support for X-TE trailers
> in this version? If you could let me know that would be greatly appreciated.
>  
> Best Regards,
> 
> Jerome
> Jerome Law | Solutions Architect, Regional Marketing EMEA
> 
> Pacific House, Third Avenue, Globe Business Park, Marlow
> 
> Buckinghamshire, SL7 1YL, United Kingdom
> 
> Office: +44 (0) 1628 400586 | Mobile: +44 (0) 7979 99 33 77
> 
> 
> ===
> TREND MICRO EMAIL NOTICE 
> 
> Trend Micro (UK) Limited, a Limited Liability Company. Registered in England 
> No. 3698292. Registered Office: Pacific House, Third Avenue, Globe Business 
> Park, Marlow, Bucks, SL7 1YL Telephone: +44 1628 400500 Facsimile: +44 1628 
> 400511.
> 
> The information contained in this email and any attachments is confidential 
> and may be subject to copyright or other intellectual property protection. If 
> you are not the intended recipient, you are not authorized to use or disclose 
> this information, and we request that you notify us by reply mail or 
> telephone and delete the original message from your mail system.
> 
>  
> 
> 


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] LRU Statistics

2008-07-07 Thread Henrik Nordstrom
On tis, 2008-07-08 at 00:47 +0800, Roy M. wrote:

> Sure, but sometimes it would be interesting to see whether, by adjusting the
> max. memory size, you could reduce or increase the LRU evictions per
> second. (Of course, I don't really know whether LRU is costly in terms
> of CPU cycles.)

Most likely it will stay the same: it follows the rate at which new
content enters the cache.

> On the other hand, another idea I have is memory partitioning (or
> disk partitioning), so that not all URLs are treated equally: say, domain1.com
> would have 1GB of memory cache and 10GB of disk cache, while domain2.com would
> have 4GB of memory cache but no disk cache, etc.

Hard to do with Squid, unless you run one Squid instance for each..

Regards
Henrik




Re: [squid-users] LRU Statistics

2008-07-07 Thread Roy M.
Hi,

On 7/8/08, Henrik Nordstrom <[EMAIL PROTECTED]> wrote:
>
> It should happen as frequently as you have new content entering the cache.
>
>
>  I would think the LRU age is more interesting, telling how long the
>  oldest object has stayed in cache...
>

Sure, but sometimes it would be interesting to see whether, by adjusting the
max. memory size, you could reduce or increase the LRU evictions per
second. (Of course, I don't really know whether LRU is costly in terms
of CPU cycles.)

On the other hand, another idea I have is memory partitioning (or
disk partitioning), so that not all URLs are treated equally: say, domain1.com
would have 1GB of memory cache and 10GB of disk cache, while domain2.com would
have 4GB of memory cache but no disk cache, etc.

Thanks.


Re: [squid-users] LRU Statistics

2008-07-07 Thread Henrik Nordstrom
On tis, 2008-07-08 at 00:04 +0800, Roy M. wrote:

> Sometimes we might want to know if LRU eviction occurs in memory too
> frequently on a production server; then we might consider adding more
> memory, or adjusting the max. memory object size, to reduce evictions
> for better performance.

It should happen as frequently as you have new content entering the cache.


I would think the LRU age is more interesting, telling how long the
oldest object has stayed in cache...

Regards
Henrik




Re: [squid-users] reverse proxy with domains

2008-07-07 Thread Henrik Nordstrom
On mån, 2008-07-07 at 07:50 -0500, Thomas E. Maleshafske wrote:
> I have successfully implemented a reverse proxy for my http site.  My 
> question is whether there is an option to accept requests based on the 
> domain: instead of just www.example.com, have example.com and serve 
> any of the subdomains if they exist.  
> Appreciate the assistance.

Maybe you are looking for the vhost option, enabling host/domain based
virtual hosts?

Still relies on a web server actually implementing these...

Regards
Henrik




Re: [squid-users] LRU Statistics

2008-07-07 Thread Roy M.
Hi,

On 7/7/08, Henrik Nordstrom <[EMAIL PROTECTED]> wrote:
> What do you mean by trigger a LRU? That Squid removes the LRU object to
>  make room for new content?

Yes, since it is very easy to fill 100% of the memory cache.

Sometimes we might want to know if LRU eviction occurs in memory too
frequently on a production server; then we might consider adding more
memory, or adjusting the max. memory object size, to reduce evictions
for better performance.


Thanks.


[squid-users] Squid Problem Authentication DIGEST

2008-07-07 Thread Edward Ortega
Greetings!

I am trying to authenticate using digest_ldap, and I get this error in
/var/log/squid3/cache.log:

2008/07/07 09:25:36| helperHandleRead: unexpected read from
digestauthenticator #1, 32 bytes '6e0856007bdf46e7c908985ea25f'
2008/07/07 09:25:36| helperHandleRead: unexpected read from
digestauthenticator #1, 1 bytes '
*user filter 'uid=user1', searchbase
'ou=SOMETHING,ou=USERS,o=SOMEDOMAIN,dc=com'*


But I have stored in the LDAP attribute REALM+':'+MD5, that is, something
like  REALM:`echo -n "user1:$REALM:password" |md5sum  |cut -f1 -d ' '`
e.g.:
*l*:  REALM:c185b844502d7f00d3a1175a23900cd3

where 'l' is the LDAP attribute. Does anybody know what's going on in
my squid3?

I'd be grateful for any help.

PS: When I cancel the authentication in the browser, I get this in the logs:

2008/07/07 09:43:58| AuthConfig::CreateAuthUser: Unsupported or
unconfigured/inactive proxy-auth scheme, 'Basic ZWRvcnRlZ2E6amlTaGEzc2Vp'
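For reference, the value the digest scheme expects is the RFC 2617 "HA1" hash, stored as REALM:MD5(user:realm:password). A minimal sketch; the user, realm, and password values below are placeholders for illustration:

```python
# Sketch: compute the digest credential (RFC 2617 "HA1") to store in LDAP.
# The values below are placeholders, not real credentials.
import hashlib

user, realm, password = "user1", "REALM", "password"
ha1 = hashlib.md5(f"{user}:{realm}:{password}".encode()).hexdigest()
stored = f"{realm}:{ha1}"  # value to keep in the LDAP attribute (e.g. 'l')
print(stored)
```

If the stored hash differs from what the `echo -n ... | md5sum` pipeline produces, check for a stray newline in the hashed string; `echo -n` is what keeps it out.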


Re: [squid-users] GET cache_object://localhost/info on a reverse proxy setup

2008-07-07 Thread Henrik Nordstrom
Oh, you are using a URL rewriter..

I would do it differently:

url_rewrite_access deny manager

This way you can still use squidclient on your published URLs and have
Squid react as expected to them, including URL rewrites...
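A minimal squid.conf sketch tying this together; the acl names are assumed to match the defaults shipped with Squid:

```
http_port 127.0.0.1:3128            # loopback port for squidclient/cachemgr
acl manager proto cache_object      # default manager acl
http_access allow manager localhost
http_access deny manager
url_rewrite_access deny manager     # do not pass cachemgr requests to the rewriter
```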

On mån, 2008-07-07 at 14:25 +0200, David Obando wrote:
> Hi,
> 
> I found out, I had to configure an acl in squidGuard.conf:
> 
> 
> dbhome /var/lib/squidguard/db
> logdir /var/log/squid
> 
> #
> # DESTINATION CLASSES:
> #
> 
> src local {
> ip  127.0.0.1
> }
> 
> dest good {
> }
> 
> dest local {
> }
> 
> acl {
> local {
> pass all
> }
> 
> default {
> redirect 
> http://localhost:8080/VirtualHostBase/http/www.xyz.de:80/VirtualHostRoot/%p
> }
> }
> 
> 
> 
> 
> Thanks!
> David
> 
> Henrik Nordstrom schrieb am 07.07.2008 14:03:
> > On mån, 2008-07-07 at 10:19 +0200, David Obando wrote:
> >   
> >> Hi,
> >>
> >> thanks for the hint, I added
> >>
> >> http_port 127.0.0.1:3128
> >>
> >> to my config. Now I can access port 3128 with telnet or squidclient, but 
> >> receive an "access denied":
> >>
> >> /var/log/squid/access.log:
> >> 127.0.0.1 - - [07/Jul/2008:10:16:43 +0200] "GET 
> >> cache_object://localhost/info HTTP/1.0" 403 1430 "-" "-" TCP_DENIED:NONE
> >> 
> >
> > You probably aren't allowing localhost access to the manager functions..
> >
> > there is rules to allow this in the standard squid.conf installed when
> > you install Squid, but..
> >
> > Regards
> > Henrik
> >   
> 
> 




[squid-users] reverse proxy with domains

2008-07-07 Thread Thomas E. Maleshafske
I have successfully implemented a reverse proxy for my http site.  My 
question is if whether or not there is an option so that it accepts on 
the basis of the domain basically instead of having www.example.com just 
have example.com and it will serve and of the sub domains if it exists.  
Appreciate the assistance.


--
V/R
Thomas E. Maleshafske
http://www.maleshafske.com


Re: [squid-users] adding a parameter to a URL / Problem in the url_redirect program

2008-07-07 Thread Shaine

I did it the same way, but the client IP doesn't come in the second position;
it's in the third:

my ($url, $x, $ip) = split(/ /);

But the Squid guide says it should be the second element. Why the
confusion? The documented order is: URL ip-address/fqdn ident method.

If that third position is not constant, everything goes wrong; I mean our
logic will not work any more. 

Regards
Shaine




Henrik Nordstrom-5 wrote:
> 
> On mån, 2008-07-07 at 10:03 +, Shain Lee wrote:
>> Thank you Henrik. Yes, that script is very simple and it's now
>> working. But I have another requirement: to capture the client IP, which
>> comes via the URL. It's a bit confusing at this time because I had a
>> different idea. So can you direct me on how to capture the client IP with
>> the Perl script which you posted?
> 
> change the split line to 
> 
> my ($url, $ip) = split(/ /);
> 
> Then use $ip as you like in the script..
> 
> Regards
> Henrik
> 
> 
>  
> 

-- 
View this message in context: 
http://www.nabble.com/adding-a-parameter-to-a-URL-tp17776816p18315957.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] Pseudo-random 403 Forbidden...

2008-07-07 Thread John Doe
> > In other words please file a bug report at http://bugs.squid-cache.org/
> 
> I filed Bug 2403.

As advised, I turned via back on and it fixed the problem.

Thx a lot Henrik,
JD


  



Re: [squid-users] Recommend for hardware configurations

2008-07-07 Thread Henrik Nordstrom
On mån, 2008-07-07 at 07:34 -0300, Michel wrote:

> ok, a reverse proxy does not do very much, so sure, it depends on what you do 
> with the
> machine

The known configurations which can easily push Squid to its CPU-bound
limits are:

a) reverse proxy setups with a reasonably small but very frequently
accessed set of objects.

b) reverse proxy acting as an SSL frontend.

c) forward proxies without any cache.


It's quite hard to push a caching forward proxy to the CPU limit. You
usually run into other limits first..

Regards
Henrik




Re: [squid-users] GET cache_object://localhost/info on a reverse proxy setup

2008-07-07 Thread David Obando

Hi,

I found out, I had to configure an acl in squidGuard.conf:


dbhome /var/lib/squidguard/db
logdir /var/log/squid

#
# DESTINATION CLASSES:
#

src local {
   ip  127.0.0.1
}

dest good {
}

dest local {
}

acl {
   local {
   pass all
   }

   default {
   redirect 
http://localhost:8080/VirtualHostBase/http/www.xyz.de:80/VirtualHostRoot/%p

   }
}




Thanks!
David

Henrik Nordstrom schrieb am 07.07.2008 14:03:

On mån, 2008-07-07 at 10:19 +0200, David Obando wrote:
  

Hi,

thanks for the hint, I added

http_port 127.0.0.1:3128

to my config. Now I can access port 3128 with telnet or squidclient, but 
receive an "access denied":


/var/log/squid/access.log:
127.0.0.1 - - [07/Jul/2008:10:16:43 +0200] "GET 
cache_object://localhost/info HTTP/1.0" 403 1430 "-" "-" TCP_DENIED:NONE



You probably aren't allowing localhost access to the manager functions..

there is rules to allow this in the standard squid.conf installed when
you install Squid, but..

Regards
Henrik
  



--
The day microsoft makes something that doesn't suck is the day they start 
making vacuum cleaners.
gpg --keyserver pgp.mit.edu --recv-keys 1920BD87
Key fingerprint = 3326 32CE 888B DFF1 DED3  B8D2 105F 29CB 1920 BD87



Re: [squid-users] transparent intercepting proxy

2008-07-07 Thread Henrik Nordstrom
On mån, 2008-07-07 at 15:27 +0530, Indunil Jayasooriya wrote:
> >> no, it´s now possible without dns ... browser need to resolve address
> >> to ip to start connections
> 
>  Thanks for your quick response. How can I achieve it?

Only by configuring the clients to use the proxy.

Regards
Henrik




Re: [squid-users] adding a parameter to a URL / Problem in the url_redirect program

2008-07-07 Thread Henrik Nordstrom
On mån, 2008-07-07 at 10:03 +, Shain Lee wrote:
> Thank you Henrik. Yes, that script is very simple and it's now
> working. But I have another requirement: to capture the client IP, which
> comes via the URL. It's a bit confusing at this time because I had a
> different idea. So can you direct me on how to capture the client IP with
> the Perl script which you posted?

change the split line to 

my ($url, $ip) = split(/ /);

Then use $ip as you like in the script..

Regards
Henrik





RE: [squid-users] Integrating squid with OpenSSL:very slow response

2008-07-07 Thread Henrik Nordstrom
On mån, 2008-07-07 at 14:48 +0530, Geetha_Priya wrote:

> Yes, we use OpenSSL libraries and created a proxy server that supports
> persistent connections. Earlier we had wcol as an HTTP prefetcher, but we
> had problems with long URLs and limited capabilities, so we decided to move
> to Squid. Now we are facing this issue after we configured Squid to
> receive requests from our proxy. Hence I am not sure if it is the proxy or
> Squid.

Time to dig up wireshark and take a look at the traffic I think. Start
by looking at the traffic in & out of your proxy..

Regards
Henrik




Re: [squid-users] GET cache_object://localhost/info on a reverse proxy setup

2008-07-07 Thread Henrik Nordstrom
On mån, 2008-07-07 at 10:19 +0200, David Obando wrote:
> Hi,
> 
> thanks for the hint, I added
> 
> http_port 127.0.0.1:3128
> 
> to my config. Now I can access port 3128 with telnet or squidclient, but 
> receive an "access denied":
> 
> /var/log/squid/access.log:
> 127.0.0.1 - - [07/Jul/2008:10:16:43 +0200] "GET 
> cache_object://localhost/info HTTP/1.0" 403 1430 "-" "-" TCP_DENIED:NONE

You probably aren't allowing localhost access to the manager functions..

there is rules to allow this in the standard squid.conf installed when
you install Squid, but..

Regards
Henrik




RE: [squid-users] LDAP Authentication with Umlauts

2008-07-07 Thread Henrik Nordstrom
On fre, 2008-07-04 at 10:30 +0200, Henrik Nordstrom wrote:
> On tor, 2008-07-03 at 12:39 +0200, [EMAIL PROTECTED] wrote:
> > Hi,
> > 
> > I also had problems with umlauts. We use our Lotus Domino Server as LDAP 
> > server and since an update from version 6.5 to 8, our users are unable to 
> > authenticate via IE or Firefox if their password contains umlauts.
> > We are running squid on BSD and Linux and on both system you are able to 
> > authenticate using squid_ldap_auth on command line.
> > I figured out that if you use the command line (set to utf-8) the utf-8 
> > code will be send and if you try to use IE or Firefox the ASCII code will 
> > be send.
> > So I wrote a small work around by adding a new function 
> > rfc1738_unescape_with_utf to squid_ldap_auth.c. The base content is the 
> > original function rfc1738_unescape, but I added a switch statement to 
> > change the character representation from ascii to utf-8 (see code for 
> > german special chars below).
> 
> Can you try the attached patch instead? It tries to address the problem
> in a generic manner.

After thinking this over a bit more your approach of translating to utf8
at input is better. But even better is to do it in Squid before the
request is sent instead of each helper..

I have now committed a change adding generic UTF-8 transpation to
Squid-2 & 3, adding an auth_param basic utf8 parameter to enable UTF-8
translation of usernames & passwords.

http://www.squid-cache.org/Versions/v2/HEAD/changesets/12298.patch
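The idea can be sketched as follows: browsers commonly percent-encode Basic credentials as Latin-1, so the proxy decodes and transcodes them to UTF-8 before handing them to the helper. A minimal illustration (the sample input value is assumed):

```python
# Sketch: percent-decode a credential, then transcode Latin-1 -> UTF-8,
# which is the translation the auth_param basic utf8 option performs.
from urllib.parse import unquote_to_bytes

raw = unquote_to_bytes("m%FCller")            # "müller" as sent in Latin-1
utf8 = raw.decode("latin-1").encode("utf-8")  # re-encode for the LDAP server
print(utf8.decode("utf-8"))
```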

Regards
Henrik




Re: [squid-users] transparent intercepting proxy

2008-07-07 Thread Angela Williams
Hi!
On Monday 07 July 2008, Indunil Jayasooriya wrote:
> >> no, it´s now possible without dns ... browser need to resolve address
> >> to ip to start connections

There is a typo! The word should be "not", not "now"!

The client - no matter what it is - needs to resolve the DNS name to an IP 
address to make the connection!
The proxy will only intercept the packets destined for the internet on port 80 
and should invisibly handle the request, cache what needs to be cached and 
pass the page contents back to the client!

If you want to do without client DNS lookups you have to use a "normal" proxy 
setup!
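For completeness, the interception itself is typically a NAT redirect on the proxy box; a sketch, with the LAN interface name and ports assumed:

```
# Assumed: eth1 is the LAN-facing NIC and Squid listens on port 3128
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 3128
```

Note this only diverts the TCP connection; the client still performs its own DNS lookup first, which is the point being made above.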


Cheers
Ang


-- 
Angela Williams Enterprise Outsourcing
Unix/Linux & Cisco spoken here! Bedfordview
[EMAIL PROTECTED]   Gauteng South Africa

Smile!! Jesus Loves You!!



Re: [squid-users] Recommend for hardware configurations

2008-07-07 Thread Michel

> Well, I based my argument from the 10 instances of reverse proxies
> I'm running. It has 266,268,230 objects and 3.7 TB of space.  CPU
> usage is always around 0.2 according to ganglia.  So unless you have
> some other statistics to prove CPU is that important, I'm stick w/ my
> argument that disk and RAM is way more important that CPU.
>

ok, a reverse proxy does not do very much, so sure, it depends on what you do with 
the
machine


> mike
>
> At 03:41 AM 7/6/2008, Michel wrote:
>
>> > The CPU doesn't do any IO; it's WAITING for the disk most of the
>> > time. If you want fast Squid performance, CPU speed/count is
>> > irrelevant; get more disks and RAM.  When I say more disk, I mean
>> > more spindles.  e.g.: 2x 100GB is better than a 200GB disk.
>> >
>>
>>
>>well well, get prepared ... take your CPU out and then you'll see
>>who is waiting forever :)
>>
>>even if IO wait is an issue, it is - or rather WAS - one on "old" giant-lock
>>systems, where the CPU was waiting until getting the lock on a busy thread
>>because there was only ONE CPU, and even on multi-CPU systems only one
>>core at a time was bound to the kernel
>>
>>to get around this issue, good old POSIX aio_* calls were used in order
>>not to wait for a new lock, which I believe is Squid's aufs cache_dir
>>model - still very good, and even better on modern SMP machines; even
>>with Squid's not-SMP-optimized code you really can drain disks to their
>>physical limits - but that is not all
>>
>>SMP (modern) works around the global giant lock; the kernel is no longer
>>limited to one core at a time
>>
>>SMP systems work with spin locks (Linux) and sleep locks (FreeBSD),
>>where the Linux way focuses on thread synchronization, which is
>>outperformed by the sleep-lock mechanism. Spin locks certainly still
>>waste CPU while spinning, which sleep locks do not; the CPU is free to
>>do other work. This was a kind of benefit for Linux over the last couple
>>of years while FreeBSD was deep in development of its new threading
>>model, which is now on top I think, especially in shared-memory
>>environments.
>>
>>basically it is not important whether you use one or ten disks; that you
>>should consider later as fine tuning, but the threading model works the
>>same for one or two disks, or for 2 or 32 gigs of memory - so you
>>certainly do NOT get around your IO wait with more memory or more disks
>>when the CPU(s) cannot handle the "waiting for locks", as you say ...
>>
>>So IMO your statement is not so very true anymore, with a modern SMP OS
>>on modern SMP hardware of course.
>>
>>michel
>>
>>
>>
>>
>>
>>Tecnologia Internet Matik http://info.matik.com.br
>>Sistemas Wireless para o Provedor Banda Larga
>>Hospedagem e Email personalizado - e claro, no Brasil.
>>
>
>
>
>
>
>
>
>
>



michel





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




Re: [squid-users] transparent intercepting proxy

2008-07-07 Thread Indunil Jayasooriya
>> no, it´s now possible without dns ... browser need to resolve address
>> to ip to start connections

 Thanks for your quick response. How can I achieve it?

 All clients use IE and Firefox.

Hope to hear from you.

-- 
Thank you
Indunil Jayasooriya


Re: [squid-users] transparent intercepting proxy

2008-07-07 Thread Indunil Jayasooriya
On Mon, Jul 7, 2008 at 3:19 PM, Alexandre Correa
<[EMAIL PROTECTED]> wrote:
> no, it´s now possible without dns ... browser need to resolve address
> to ip to start connections

Thanks for your quick response. How can I achieve it?

All clients use IE and Firefox.

Hope to hear from you.




-- 
Thank you
Indunil Jayasooriya


Re: [squid-users] transparent intercepting proxy

2008-07-07 Thread Alexandre Correa
no, it´s now possible without dns ... browser need to resolve address
to ip to start connections

On Mon, Jul 7, 2008 at 6:19 AM, Indunil Jayasooriya <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I have set up a transparent intercepting proxy (squid 2.6 branch) on
> RedHat EL5. It has 2 NICs. One is connected to the router, the other is
> connected to the LAN.  The clients' gateway is the LAN IP address of the
> proxy server. Clients have 2 DNS entries. It works fine. If I remove the
> DNS entries from the client PCs, it will NOT work.
>
> Is that normal?
>
> Without DNS entries on the client PCs, is it possible for it to work?
>
> Hope to hear from you.
>
>
>
> --
> Thank you
> Indunil Jayasooriya
>



-- 

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net


[squid-users] transparent intercepting proxy

2008-07-07 Thread Indunil Jayasooriya
Hi,

I have set up a transparent intercepting proxy (squid 2.6 branch) on
RedHat EL5. It has 2 NICs. One is connected to the router, the other is
connected to the LAN.  The clients' gateway is the LAN IP address of the
proxy server. Clients have 2 DNS entries. It works fine. If I remove the
DNS entries from the client PCs, it will NOT work.

Is that normal?

Without DNS entries on the client PCs, is it possible for it to work?

Hope to hear from you.



-- 
Thank you
Indunil Jayasooriya


RE: [squid-users] Integrating squid with OpenSSL:very slow response

2008-07-07 Thread Geetha_Priya
-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Monday, July 07, 2008 1:55 PM
To: Geetha_Priya
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] Integrating squid with OpenSSL:very slow response

On mån, 2008-07-07 at 13:01 +0530, Geetha_Priya wrote:
> Accessing websites through proxy and squid has following response 
> along with being slow
> 
> 1. No graphical images are obtained for a requested page, I don’t see 
> subsequent requests for obtaining images through squid. It gets the 
> main page without graphics

>>Then those requests is not reaching Squid for some reason.
You are right.


> 2. At times when requests is made a window pops up in the browser that 
> says
> 
> "you have chosen to open http://weblink Which is a bin file" and 
> provides options to save
>  
> I don’t understand the reason behind this. Again I am not sure if it is 
> because of squid or openssl.

>>Most likely your proxy, not Squid. From your description it sounds like your 
>>proxy has >>problems with persistent connections, not forwarding the next 
>>request received on a >>persistent connection.

>>openssl is not a proxy. Is your proxy a homebrew software of some kind, or a 
>>ready made >>application? If a ready made application, which one?

Yes, we use OpenSSL libraries and created a proxy server that supports 
persistent connections. Earlier we had wcol as an HTTP prefetcher, but we had 
problems with long URLs and limited capabilities, so we decided to move to Squid. 
Now we are facing this issue after we configured Squid to receive requests from 
our proxy. Hence I am not sure if it is the proxy or Squid.
 
>>If the above suspicion is true it should help to disable persistent 
>>connections in Squid

   >>client_persistent_connections off
Tried this but same result.

Thanks
Geetha


DISCLAIMER:
This email (including any attachments) is intended for the sole use of the 
intended recipient/s and may contain material that is CONFIDENTIAL AND PRIVATE 
COMPANY INFORMATION. Any review or reliance by others or copying or 
distribution or forwarding of any or all of the contents in this message is 
STRICTLY PROHIBITED. If you are not the intended recipient, please contact the 
sender by email and delete all copies; your cooperation in this regard is 
appreciated.


Re: [squid-users] adding a parameter to a URL / Problem in the url_redirect program

2008-07-07 Thread Shaine

Thank you Henrik. Yes, that script is very simple and it's now working. But
I have another requirement: to capture the client IP, which comes via the URL.
It's a bit confusing at this time because I had a different idea. So can you
direct me on how to capture the client IP with the Perl script which you posted?

Thank you
Shaine.




Henrik Nordstrom-5 wrote:
> 
> On sön, 2008-07-06 at 22:05 -0700, Shaine wrote:
> 
>> Following is my script.
>> 
>> #!/usr/bin/perl
>> # no buffered output, auto flush
>> use strict;
>> use warnings;
>> 
>> my ($temp, $array, @array, $param_1, $param_2, $param_3, $new_uri);
>> 
>> $|=1;
>> $temp = "";
>> 
>> 
>> while (){
>>   #@array = split(/ /);
>>   ($param_1, $param_2, $param_3) = split(/ /);
>>   #if (!($array[1] =~ m#VALUE-X#)) {
>>   if (!($param_2 =~ m#VALUE-X#)) {
>> $temp = $param_2;
>> if ($param_2 =~ m#\?#) {
>>   $temp .= "&VALUE-X=652224848";
>> }else {
>>   $temp .= "?VALUE-X=652224848";
>> }
>> $new_uri = ($param_1 . " " . $temp . " " . $param_3);
>> s#$param_2#$temp#;
>> #print $new_uri;
>> print;
>>   }else {
>> print;
>>   }
>> }
> 
> 
> If I understand the above correct you modify the second parameter sent
> to the script which is the requesting client ip...
> 
> The URL is the first, and the only one used by Squid in responses.
> 
> Here is a simplified version of that script which should work better I
> think (completely untested)
> 
> ### BEGIN ###
> #!/usr/bin/perl
> use strict;
> use warnings;
> 
> # no buffered output, auto flush
> $|=1;
> while () {
>   chomp;
>   my ($url) = split(/ /);
>   if (!($url =~ m#VALUE-X#)) {
> if ($url =~ m#\?#) {
>$url .= "&VALUE-X=652224848";
> } else {
>$url .= "?VALUE-X=652224848";
> }
> print $url."\n";
>   } else {
> print "\n";
>   }
> }
> ### END ###
> 
> The chomp isn't stricly needed, but makes testing from command line
> easier as it's sufficient to then enter just the URL for proper results
> and not a complete url rewriter request.
> 
> Regards
> Henrik
> 
>  
> 

-- 
View this message in context: 
http://www.nabble.com/adding-a-parameter-to-a-URL-tp17776816p18312476.html
Sent from the Squid - Users mailing list archive at Nabble.com.



RE: [squid-users] Integrating squid with OpenSSL:very slow response

2008-07-07 Thread Henrik Nordstrom
On mån, 2008-07-07 at 13:01 +0530, Geetha_Priya wrote:
> Accessing websites through proxy and squid has following response along with 
> being slow
> 
> 1. No graphical images are obtained for a requested page, I don’t see 
> subsequent requests for obtaining images through squid. It gets the main page 
> without graphics

Then those requests are not reaching Squid for some reason.

> 2. At times when requests is made a window pops up in the browser that says 
> 
> "you have chosen to open http://weblink
> Which is a bin file" and provides options to save
>  
> I don’t understand the reason behind this. Again I am not sure if it is 
> because of squid or openssl.

Most likely your proxy, not Squid. From your description it sounds like
your proxy has problems with persistent connections, not forwarding the
next request received on a persistent connection.

openssl is not a proxy. Is your proxy a homebrew software of some kind,
or a ready made application? If a ready made application, which one?

If the above suspicion is true it should help to disable persistent
connections in Squid

   client_persistent_connections off

Regards
Henrik




Re: [squid-users] GET cache_object://localhost/info on a reverse proxy setup

2008-07-07 Thread David Obando

Hi,

thanks for the hint, I added

http_port 127.0.0.1:3128

to my config. Now I can access port 3128 with telnet or squidclient, but 
receive an "access denied":


/var/log/squid/access.log:
127.0.0.1 - - [07/Jul/2008:10:16:43 +0200] "GET 
cache_object://localhost/info HTTP/1.0" 403 1430 "-" "-" TCP_DENIED:NONE



Regards,
David



Henrik Nordstrom schrieb am 04.07.2008 01:22:

On tor, 2008-07-03 at 17:01 +0200, David Obando wrote:
  

Dear all,

I'm using Squid as a reverse proxy in a Squid/Pound/Zope/Plone-setup. 
Squid is running on port 80.


I would like to access the cache manager with the munin plugins to 
monitor Squid. The plugins use a HTTP request

"GET cache_object://localhost/info HTTP/1.0".
Standard port 3128 isn't active, when asking port 80 I get a 404-error 
from zope.


How can I access the cache manager in such a setup?



Are you sending the query to Squid, or directly to Zope?

What I usually do in reverse proxy setups is to set up a normal 3128
listening port on loopback for cachemgr and squidclient to use.

http_port 127.0.0.1:3128

Regards
Henrik
  






Re: [squid-users] Squid and ziproxy

2008-07-07 Thread Henrik Nordstrom
On mån, 2008-07-07 at 13:13 +0900, KwangYul Seo wrote:
> Hi,
> 
> Is it possible to use squid with
> ziproxy(http://ziproxy.sourceforge.net/)?

Should work, assuming ziproxy does things correctly and does not mess up
on ETag..


> If so, what is the usual configuration?

Squid using ziproxy as a parent, or the other way around.. (ziproxy
using Squid as parent). 
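A minimal squid.conf sketch of the first arrangement (the ziproxy host and port are assumed for illustration), forcing all traffic through ziproxy as a parent:

```
# Assumed: ziproxy listening on 127.0.0.1:8080
cache_peer 127.0.0.1 parent 8080 0 no-query no-digest default
never_direct allow all
```

The other way around is just the mirror image: ziproxy configured to forward to Squid's http_port.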

> If not, how can I implement a ziproxy-like HTML/JS/CSS optimization in
> Squid? Is there a pluggable module interface for this purpose?

There is a gzip module for Squid-3, but it most likely needs to be
updated a bit as it hasn't been maintained in quite a while. Also it's
not yet verified to act correctly wrt ETag.

Regards
Henrik




RE: [squid-users] Integrating squid with OpenSSL:very slow response

2008-07-07 Thread Geetha_Priya
Thanks for your reply, I agree the empty request is not a concern and is not a 
part of squid.

The issue is

Accessing websites through proxy and squid has following response along with 
being slow

1. No graphical images are obtained for a requested page; I don't see 
subsequent requests for obtaining the images through squid. It gets the main 
page without graphics.

2. At times when a request is made, a window pops up in the browser that says 

"you have chosen to open http://weblink
Which is a bin file" and provides options to save
 
I don’t understand the reason behind this. Again I am not sure if it is because 
of squid or openssl.

Please clarify.
Regards
Geetha




Re: [squid-users] Integrating squid with OpenSSL:very slow response

2008-07-07 Thread Henrik Nordstrom
On mån, 2008-07-07 at 12:20 +0530, Geetha_Priya wrote:

> I have proxy code [openssl] and I need to integrate Squid such that
> all HTTP requests are forwarded to Squid through our proxy only. The
> purpose of the proxy is more for controlling HTTP access at this point.

Ok.

> Client <--> proxy <> Squid <> Webservers

Ok.

> The response time is very slow. I see some empty requests logged under
> "new request = "; is that common in between getting pages? Please find a snapshot below.

Who is logging that "new request = ", and when? I suppose it's from your
"openssl" proxy.

There is no such thing as an empty HTTP request. You either have an HTTP
request, or you don't.

The only thing common between pages is that idle connections get closed
if the user waits for too long.

Another thing which is somewhat common is that many clients send an extra
blank line after POST requests, but from the data you have shown it does
not look like this is the issue (only GET requests were shown).

I'd suggest you look at the HTTP data stream using Wireshark and compare
this with how your proxy behaves.

Regards
Henrik




Re: [squid-users] adding a parameter to a URL / Problem in the url_redirect program

2008-07-07 Thread Henrik Nordstrom
On sön, 2008-07-06 at 22:05 -0700, Shaine wrote:

> Following is my script.
> 
> #!/usr/bin/perl
> # no buffered output, auto flush
> use strict;
> use warnings;
> 
> my ($temp, $array, @array, $param_1, $param_2, $param_3, $new_uri);
> 
> $|=1;
> $temp = "";
> 
> 
> while (<>) {
>   @array = split(/ /);
>   ($param_1, $param_2, $param_3) = split(/ /);
>   #if (!($array[1] =~ m#VALUE-X#)) {
>   if (!($param_2 =~ m#VALUE-X#)) {
> $temp = $param_2;
> if ($param_2 =~ m#\?#) {
>   $temp .= "&VALUE-X=652224848";
> }else {
>   $temp .= "?VALUE-X=652224848";
> }
> $new_uri = ($param_1 . " " . $temp . " " . $param_3);
> s#$param_2#$temp#;
> #print $new_uri;
> print;
>   }else {
> print;
>   }
> }


If I understand the above correctly, you modify the second parameter sent
to the script, which is the requesting client IP...

The URL is the first parameter, and the only one used by Squid in responses.

Here is a simplified version of that script which I think should work
better (completely untested):

### BEGIN ###
#!/usr/bin/perl
use strict;
use warnings;

# no buffered output, auto flush
$|=1;
while (<>) {
  chomp;
  my ($url) = split(/ /);
  if (!($url =~ m#VALUE-X#)) {
if ($url =~ m#\?#) {
   $url .= "&VALUE-X=652224848";
} else {
   $url .= "?VALUE-X=652224848";
}
print $url."\n";
  } else {
print "\n";
  }
}
### END ###

The chomp isn't strictly needed, but it makes testing from the command line
easier, as it is then sufficient to enter just the URL (rather than a
complete url rewriter request) to get proper results.

Regards
Henrik




Re: [squid-users] Request Header contains NULL characters: is that solved

2008-07-07 Thread Henrik Nordstrom
On mån, 2008-07-07 at 10:05 +0530, Geetha_Priya wrote:
> This is regarding the posting: Request header contains NULL characters.
> http://www.mail-archive.com/squid-users@squid-cache.org/msg16754.html
> I see that back in 2004 the Mozilla browser gave this error, but have
> there been any improvements since? I use Mozilla and get these errors
> for some websites [even yahoo.com]. Is there any workaround?

Depends on what is causing the error.

If you can, capturing the triggering request sequence with Wireshark is a
good start towards getting this bug addressed, wherever it is (Squid,
Mozilla or content?).

Regards
Henrik




Re: [squid-users] LRU Statistics

2008-07-07 Thread Henrik Nordstrom
On mån, 2008-07-07 at 11:37 +0800, Roy M. wrote:

> 1. Since memory is now 100% used, how do I know if there is a cache
> miss in mem 48.3%,
> how many % of them will trigger a LRU in memory cache?

What do you mean by trigger a LRU? That Squid removes the LRU object to
make room for new content?

If you enable

debug_options ALL,1 47,2

then Squid will log in cache.log when it has removed some disk content
by LRU. It is not as easy to get the memory removal policy logged, as
that requires debug level 20,3 which logs quite a bit...


Hmm... we really should add per store performance counters on this...

Regards
Henrik




Re: [squid-users] how safe is server_http11?

2008-07-07 Thread Henrik Nordstrom
On mån, 2008-07-07 at 09:39 +1000, Mark Nottingham wrote:
> FWIW, I've tested it, and have been using it in production on a fair
> number of boxes for a little while; so far so good. Like H says, the
> main thing lacking is Expect/Continue support.

Expect is there in the minimal conforming mode: rejecting Expect requests
with Expectation Failed, as the 100-continue expectation cannot be met
without 1xx support.
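
In other words, the exchange looks roughly like this (illustrative only;
the request line is made up):

```
Client:  POST /upload HTTP/1.1
         Expect: 100-continue

Squid:   HTTP/1.1 417 Expectation Failed
```

The client is then expected to retry without the Expect header.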

Regards
Henrik

