Re: [squid-users] single host off/ host_header on question

2006-05-30 Thread Nick Baronian

I have continued to monkey around with my squid.conf, but I still can't
figure out whether I am doing something wrong and, if so, what.
Does anyone else who runs Squid as a reverse proxy (using it to cache
multiple sites that are load balanced between multiple servers) have
similar issues, and how is their conf set up?  Did anyone else have to
add these sites to their hosts file?
If anyone has any insight into my issue I would appreciate the help.
Thanks,
Nick Baronian

On 5/25/06, Nick Baronian <[EMAIL PROTECTED]> wrote:

Thanks Henrik - couple more questions if you don't mind

> Note: For a better experience I would recommend you to have Apache
> listen on port 80 as well. Just bind them to different IPs. This way you
> do not risk port 81 leaking out to the end-user... (which it often
> does otherwise).

We are blocking incoming port 81 traffic using a firewall, so the
outside world can't access the Apache server (port 81).  I have no
problems binding another IP to Apache but do you think it will give me
any other benefit?
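If we did split them by IP, I assume the bindings would look something
like this (using the outside/DMZ addresses from my conf below; an
untested sketch, not our actual config):

```
# httpd.conf: bind Apache to the internal DMZ IP only, still on port 80
Listen 10.10.21.2:80

# squid.conf: bind Squid to the public IP instead of the wildcard
http_port 168.192.2.3:80
```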
>
> Anything in cache.log?

Nothing that stands out to me
2006/05/25 10:07:22| Starting Squid Cache version 2.5.STABLE13 for
i686-pc-linux-gnu...
2006/05/25 10:07:22| Process ID 30310
2006/05/25 10:07:22| With 1024 file descriptors available
2006/05/25 10:07:22| Performing DNS Tests...
2006/05/25 10:07:22| Successful DNS name lookup tests...
2006/05/25 10:07:22| DNS Socket created at 0.0.0.0, port 32776, FD 5
2006/05/25 10:07:22| Adding nameserver 168.192.40.41 from /etc/resolv.conf
2006/05/25 10:07:22| Adding nameserver 168.192.43.41 from /etc/resolv.conf
2006/05/25 10:07:22| Unlinkd pipe opened on FD 10
2006/05/25 10:07:22| Swap maxSize 8388608 KB, estimated 645277 objects
2006/05/25 10:07:22| Target number of buckets: 32263
2006/05/25 10:07:22| Using 32768 Store buckets
2006/05/25 10:07:22| Max Mem  size: 524288 KB
2006/05/25 10:07:22| Max Swap size: 8388608 KB
2006/05/25 10:07:22| Rebuilding storage in /u01/squid (DIRTY)
2006/05/25 10:07:22| Using Least Load store dir selection
2006/05/25 10:07:22| Set Current Directory to /u01/squidlogs
2006/05/25 10:07:22| Loaded Icons.
2006/05/25 10:07:22| Accepting HTTP connections at 0.0.0.0, port 80, FD 11.
2006/05/25 10:07:22| Accepting ICP messages at 0.0.0.0, port 3130, FD 12.
2006/05/25 10:07:22| WCCP Disabled.
2006/05/25 10:07:22| Ready to serve requests.
2006/05/25 10:07:22| Done scanning /u01/squid (0 entries)
2006/05/25 10:07:22| Finished rebuilding storage from disk.
2006/05/25 10:07:22| 0 Entries scanned
2006/05/25 10:07:22| 0 Invalid entries.
2006/05/25 10:07:22| 0 With invalid flags.
2006/05/25 10:07:22| 0 Objects loaded.
2006/05/25 10:07:22| 0 Objects expired.
2006/05/25 10:07:22| 0 Objects cancelled.
2006/05/25 10:07:22| 0 Duplicate URLs purged.
2006/05/25 10:07:22| 0 Swapfile clashes avoided.
2006/05/25 10:07:22|   Took 0.3 seconds (   0.0 objects/sec).
2006/05/25 10:07:22| Beginning Validation Procedure
2006/05/25 10:07:22|   Completed Validation Procedure
2006/05/25 10:07:22|   Validated 0 Entries
2006/05/25 10:07:22|   store_swap_size = 0k
2006/05/25 10:07:23| storeLateRelease: released 0 objects

> Squid doesn't really care where the DNS is pointing.. unless you have
> something relying on it such as the dst acl...
> the dst acl looks at the URL, not where the request is forwarded..

So since my dst acl points to specific IPs, not domains, it doesn't
matter for me, correct?
acl dest_addy dst localhost 168.192.2.3 10.10.21.2

> Only if you set "httpd_accel_single_host on". With this off, all the
> httpd_accel_* directives do is reconstruct the hostname and port
> part of the URL from the request..

So since single_host is off for me, all requests rebuild the URL from
my httpd_accel_* info.  I thought this was the case but I wasn't sure.
My thinking on why I was getting Access Denied/TCP_DENIED was that we
block incoming requests to port 81 at the firewall, and a request could
be going to one of my other 3 webservers, so it would go out my
gateway, come back in, and get denied.  I tried changing my dst acl to
point to my non-routable DMZ address, thinking this would force Squid
to fetch uncached requests only from that address, but it didn't change
anything.  The info still needs to be in the hosts file for pages to
come up for me.  Then, thinking that Squid was requesting the page from
the outside and coming back in, I changed Apache to run on a port that
is open on our firewall, and that didn't help either.

Thanks again,
Nick Baronian




[squid-users] single host off/ host_header on question

2006-05-24 Thread Nick Baronian

Hello, I am having some issues with my reverse proxy.  I have some
sites that are load balanced between 4 Apache servers.  I am testing
Squid on 2 of the servers to help alleviate the load from calls back
to the web servers' app server.  Squid is running on port 80 and
Apache on 81 on the same box.  There are a handful of Virtual Hosts in
my Apache configuration, so I have single host off in my squid.conf.  I
also need to pass the Host header so the application server can
determine certain info for building the page, so host_header is on.
My issue is that I get TCP_DENIED on calls to domains that are load
balanced between the 4 servers.  Squid works for domains that are in
DNS and pointed directly at that server.  As a test I added a couple
of the domains to the server's hosts file, pointed them at itself,
and started squid with -D.  Doing this makes Squid work properly for
me, but adding all my domains to the hosts file is not really an option
since we have hundreds of domains.  With host_header on and single
host off, does Squid do a lookup on each request and try to grab the
page from whatever IP it gets from DNS, even though it is supposed to
pass the request to the web server?  Or do I have something set
incorrectly in my conf file that is blocking the load-balanced request
from being passed back to itself properly?
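(For reference, the hosts-file workaround above amounts to entries like
the following, with example.com standing in for our real domains and
the DMZ IP from the conf below:)

```
# /etc/hosts on the Squid/Apache box: resolve the load-balanced names
# to this server so Squid does not chase the public, load-balanced IP
10.10.21.2   www.example.com   shop.example.com
```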
Below is my conf; I changed my outside interface IP (168.192.2.3) and
my DMZ IP (10.10.21.2).

Thanks,
Nick Baronian

visible_hostname my.webserver.net
cache_effective_user squid
cache_effective_group squid
http_port 80
httpd_accel_host 127.0.0.1
httpd_accel_port 81
httpd_accel_single_host off
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
coredump_dir /u01/squidlogs
cache_access_log /u01/squidlogs/access.log
cache_log /u01/squidlogs/cache.log
cache_store_log /u01/squidlogs/store.log
cache_dir ufs /u01/squid 8192 16 256
cache_mem 512 MB
cache_swap_low 90
cache_swap_high 95
maximum_object_size 200 KB
request_body_max_size 0 KB
shutdown_lifetime 10 seconds
emulate_httpd_log on
strip_query_terms off

acl all src 0.0.0.0/0.0.0.0
acl dest_addy dst localhost 168.192.2.3 10.10.21.2
acl http proto http
acl ports port 80 81
acl csscall url_regex -i cssindex\.htm
acl cacheman url_regex -i cachemgr
acl HEAD method HEAD
acl purge method PURGE
acl CONNECT method CONNECT
acl mynets src 168.192.2.1
acl localhost src 127.0.0.1/255.255.255.255

acl bad_paths urlpath_regex -i \.exe\?
acl bad_paths urlpath_regex -i \.dll\?
acl bad_paths urlpath_regex -i \/winnt\/system32\/
acl bad_paths urlpath_regex -i \/_vti_bin\/
acl bad_paths urlpath_regex -i \/scripts\/
acl bad_paths urlpath_regex -i \/perl\/
acl bad_paths urlpath_regex -i \/examples\/
acl bad_paths urlpath_regex -i \/manual\/
acl bad_paths urlpath_regex -i \.pl\?
acl bad_paths urlpath_regex -i \.ida\?
acl bad_paths urlpath_regex -i \.php\?
acl bad_paths urlpath_regex -i \/counter
acl Manager proto cache_object
cache_mgr [EMAIL PROTECTED]
cachemgr_passwd $up3r$3cr3T all

http_access allow HEAD
http_access allow CONNECT
http_access allow purge localhost
http_access allow purge mynets
http_access allow Manager localhost
http_access allow cacheman localhost
http_access allow Manager mynets
http_access allow cacheman mynets
http_access deny Manager
http_access deny cacheman
no_cache deny csscall
no_cache deny cacheman
http_access deny !http
http_access deny !ports
http_access allow dest_addy
http_access deny bad_paths
http_access deny all
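For what it's worth, newer Squid releases (2.6 and later) replace the
httpd_accel_* directives above with an explicit origin-server peer,
which would sidestep the DNS/hosts-file question entirely since Squid
is told exactly where to forward misses.  A rough equivalent of the
setup above (a sketch, not something I have tested):

```
# Squid 2.6+ accelerator setup: accept on 80, keep the Host header,
# and always forward misses to the local Apache on port 81
http_port 80 accel vhost
cache_peer 127.0.0.1 parent 81 0 no-query originserver
```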


Re: [squid-users] FIFO/multiple requests for uncached object question

2006-05-24 Thread Nick Baronian

Thanks Henrik, I thought I saw a post a while back stating that 2.6 is
on hold and 3.0 will be released first.  If that is true, will 3.0
have collapsed forwarding?

Thanks,
Nick Baronian

On 5/24/06, Henrik Nordstrom <[EMAIL PROTECTED]> wrote:

On Tue, 2006-05-23 at 16:32 -0400, Nick Baronian wrote:

> stale Squid contacts my Apache server and pulls the page up but what
> happens if multiple calls come in at relatively the same time.  Does
> Squid notice it is currently requesting that object and put the other
> calls on hold or does each one get passed on thru to the web server as
> misses and requests a fresh copy from Apache?

Squid-2.5 sends all of them in parallel to the server.

Squid-2.6 and later will have a "collapsed_forwarding" option making it
join the requests into a single request.
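(Presumably, on 2.6 that would be a single squid.conf directive, going
by the option name above:)

```
# squid.conf (Squid 2.6+): merge concurrent misses for the same URL
# into a single request to the origin server
collapsed_forwarding on
```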

Regards
Henrik







[squid-users] FIFO/multiple requests for uncached object question

2006-05-23 Thread Nick Baronian

Hello, I have a reverse proxy set up with Squid on some Linux servers
and was wondering how Squid handles multiple incoming requests for an
uncached page.  I have some heavy-traffic sites that use mod_expires
for certain pages like the home page.  If a page gets called that has
not been called before, or is expired or stale, Squid contacts my
Apache server and pulls the page.  But what happens if multiple calls
come in at roughly the same time?  Does Squid notice it is already
requesting that object and put the other calls on hold, or does each
one get passed through to the web server as a miss and request a fresh
copy from Apache?
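To make the two possible behaviors concrete, here is a rough Python
sketch of the second-request-waits idea (not Squid's code, just an
illustration of collapsing concurrent misses into one origin fetch;
the Collapser name and API are made up):

```python
import threading

class Collapser:
    """Collapse concurrent requests for the same key into one fetch.

    A toy illustration of "collapsed forwarding": the first requester
    (the leader) contacts the origin; everyone else who arrives while
    that fetch is in flight waits and shares the result.
    """

    def __init__(self, fetch):
        self._fetch = fetch              # callable that hits the origin
        self._lock = threading.Lock()
        self._inflight = {}              # key -> (done Event, result dict)

    def get(self, key):
        with self._lock:
            entry = self._inflight.get(key)
            leader = entry is None
            if leader:
                entry = (threading.Event(), {})
                self._inflight[key] = entry
        done, result = entry
        if leader:
            result["value"] = self._fetch(key)   # one trip to the origin
            with self._lock:
                del self._inflight[key]
            done.set()                           # wake the waiters
        else:
            done.wait()                          # piggyback on the leader
        return result["value"]
```

Without collapsing (Squid 2.5, per the reply above), every concurrent
miss would call the fetch function itself; with it, only the leader
does.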

Thanks,
Nick Baronian