RE: [squid-users] n00b question - show version

2009-03-17 Thread Tejpal Amin

squid -v can be used to find the version.

Tejpal Amin
-Original Message-
From: mcnicholas [mailto:mcnicho...@hotmail.com] 
Sent: Tuesday, March 17, 2009 10:29 PM
To: squid-users@squid-cache.org
Subject: [squid-users] n00b question - show version


Hi Guys

Sorry for such an inane post, but can someone tell me what command to run
that will show the version of Squid that is installed on a Red Hat server?

I've inherited some production boxes with no documentation.

Thanks




[squid-users] SquidNT service starts then stops immediately - running squid.exe manually works fine though?

2009-03-17 Thread Rick Payton
Aloha everyone,

I copied over my working SquidNT 2.7 Stable 2 from my 32-bit
Windows Server 2003 to my new Windows Server 2003 x64. I rebuilt the
cache, but didn't touch the squid.conf file (I copied from C:/Squid to
C:/Squid). Squid installs as a service just fine, but as soon as I start
it under x64 it shuts down immediately. Yet if I run "squid -d 10 -X" it
loads up just fine and even adheres to my configured whitelist of
websites; if I deviate from the list, Squid tells me so via the web page.

It just WON'T start as a service under x64, and even the squid.exe.log
file is at 0 bytes. Anyone have any experience?

Rick Payton, I.T. Manager
Morikawa & Associates, LLC
(808) 572-1745 Office
(808) 442-0978 eFax
www.mai-hawaii.com


[squid-users] reverse proxy - accessing services on ports other than 80

2009-03-17 Thread Tomasz Chmielewski
If I understand correctly, setting up a reverse proxy requires a DNS 
entry for a webserver pointed at the Squid machine.


This means packets trying to reach other ports (e.g. 443, 110, etc.) on 
the original webserver will also hit the Squid machine.


How do I solve this problem, other than changing DNS entries?


I can technically use pop.example.tld:110 (with a different IP) instead 
of example.tld:110, but I would still like to access port 443 without 
having to add certificates to Squid.
With one domain it would be easy (just redirect the ports), but I would like 
to use a reverse Squid proxy for multiple domains.



--
Tomasz Chmielewski
http://wpkg.org


Re: [squid-users] squid on 32-bit system with PAE and 8GB RAM

2009-03-17 Thread Gavin McCullagh
Hi,

I don't mean to labour this, I'm just keen to understand better and
obviously you guys are the experts on squid.

On Mon, 16 Mar 2009, Marcello Romani wrote:

>> Really?  I would have thought the linux kernel's disk caching would be far
>> less optimised for this than using a large squid cache_mem (whatever about
>> a ramdisk).
>
> As others have pointed out, squid's cache_mem is not used to serve  
> on-disk cache objects, while os's disk cache will hold those objects in  
> RAM after squid requests them for the first time.

Agreed.  I would have thought though that a large cache_mem would be a
better way to increase the data served from RAM, compared to the OS disk
caching.  

I imagine, perhaps incorrectly, that squid uses the mem_cache first for
data, then when it's removed (by LRU or whatever), pushes it out to the
disk cache.  This sounds like it should lead to a pretty good
mem_cache:disk_cache serving ratio.  I don't have much to back this up, but
the ratio in my own case is pretty high so squid appears not to just treat
all caches (memory and disk) equally.

http://deathcab.gcd.ie/munin/gcd.ie/watcher.gcd.ie-squid_cache_hit_breakdown.html

By comparison, I would expect linux's disk caching, which has no
understanding of the fact that this is a web proxy cache, to be less smart.
Perhaps that's incorrect though, I'm not sure what mechanism linux uses.

> So if you leave most of RAM to OS for disk cache you'll end up having  
> many on-disk objects loaded from RAM, i.e. very quickly.

Some, but I would imagine not as many as with mem_cache.

> Also, squid needs memory besides cache_mem, for its own internal  
> structures and for managing the on-disk repository. If its address space  
> is already almost filled up by cache_mem alone, it might have problems  
> allocating its own memory structures.

Absolutely agreed, and the crashes I've seen appear to be caused by this,
though dropping to around 1.7GB mem_cache appears to cure it.

The question then is, which would be better, an extra cache based on a
ramdisk, or just leaving it up to the kernel's disk caching.  
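
One way to test the ramdisk variant is sketched below; the mount point and
sizes are hypothetical, and the new cache_dir must be initialised (squid -z)
after mounting:

# a 2 GB tmpfs to back a RAM-resident cache_dir (hypothetical path)
mount -t tmpfs -o size=2048m tmpfs /mnt/squid-ram

# squid.conf: RAM-backed dir alongside the normal disk dirs
cache_dir ufs /mnt/squid-ram 1800 16 256
cache_dir aufs /var/spool/squid 100000 64 256

Leaving headroom below the tmpfs size matters, since the swap.state metadata
shares the mount with the cached objects.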

> OS's disk cache, on the other hand, is not allocated from squid's process
> memory space and also has a variable size, automatically adjusted by the
> OS when app memory needs grow or shrink.

Right.  A ramdisk is also not allocated from squid's process space either,
but it doesn't shrink in the way linux disk caching would and that might
cause swapping in a bad situation.  That's a clear advantage for linux's
caching.  Simplicity is another clear advantage.

The question I'm left with is, which of the two would better optimise the
amount of data served from ram (thus lowering iowait), linux's caching or
the ramdisk?

I guess it's not a very normal setup, so maybe nobody has done this.

Thanks for all the feedback,
Gavin




Re: [squid-users] squid host mapping problem

2009-03-17 Thread Chris Robertson

ryan haynes wrote:

Using squid 2.6.STABLE18 on Ubuntu.

I have an old internal webserver at x.y.82.15 that needs to go away.
The new internal webserver is at x.y.82.11.
I've changed the /etc/hosts file to point to the new address, but my
clients keep getting content from the old webserver via squid.

On the squid server I can ping the hostname ourcompany.web and it
correctly resolves to x.y.82.11.

On the squid server (using itself as a proxy) I can connect to
http://ourcompany.web and it pulls content from the correct webserver.
However, clients still get the old server. They are XP clients, they
have no hostname configured, and ourcompany.web does not resolve through
DNS.

I did "sudo grep -r x.y.82.15 /etc/*" just to see if there was some
other hosts mapping somewhere, and it did turn up "/etc/hosts~" with
x.y.82.15, but I fixed that one and restarted squid with no luck; then
I restarted the server and still nothing. (Can anyone tell me what that
/etc/hosts~ file is???)
  


http://mark.kolich.com/2008/10/howto-configure-vi-to-stop-saving-annoying-tilde-backup-files.html


I suspected the old site was getting cached, but I don't think I'm even
using caching; please correct me if I'm wrong.
  


You are not explicitly NOT caching, so that's the most likely answer.  
Tail your access.log and look for x.y.82.15:


tail -f /var/log/squid/access.log | fgrep x.y.82.15

That will tell you for sure if Squid is sending any requests to the old 
server.



/etc/hosts & /etc/squid/squid.conf below... routable addresses have been masked.

If I'm overlooking something stupid, please feel free to berate me.

Thanks for any help!

**
127.0.0.1 localhost
127.0.1.1 proxy01
x.y.82.11 ourcompany.web

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
**

acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443  # https
acl Safe_ports port 80  # http
acl Safe_ports port 443 # https
acl purge method PURGE
acl CONNECT method CONNECT
acl 82.0_network src x.y.82.0/24
acl 81.0_network src x.y.81.0/24
acl loopback src 127.0.0.1
acl 10.193.15_network src 10.193.15.0/24
acl 10.193.16_network src 10.193.16.0/24
acl 10.193.17_network src 10.193.17.0/26
acl blocksites url_regex "/etc/squid/blacklist"
acl internal_domain dstdomain .ourcompany.web

cache_peer x.y.82.11 parent 80 0 no-query no-digest name=internalA

cache_peer_access internalA allow internal_domain
cache_peer_access internalA deny all

http_access deny blocksites
http_access allow loopback
http_access allow 82.0_network
http_access allow 81.0_network
http_access allow 10.193.15_network
http_access allow 10.193.16_network
http_access allow 10.193.17_network
http_access allow manager localhost
http_access deny manager
  


These two lines should be moved to the top of the http_access list.  
Otherwise they are useless: manager access is allowed along with 
everything else by the earlier allow rules.



http_access allow purge localhost
  


Perhaps you want to move this one up too, but there is no explicit deny 
on purge...
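
A sketch of the reordered list (same rules; manager and purge handling
hoisted above the broad allows, with the explicit purge deny added as just
noted):

http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny blocksites
http_access allow loopback
http_access allow 82.0_network
http_access allow 81.0_network
http_access allow 10.193.15_network
http_access allow 10.193.16_network
http_access allow 10.193.17_network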



icp_access allow all

http_port 8080
hierarchy_stoplist cgi-bin ?

access_log /var/log/squid/access.log squid

acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320

acl apache rep_header Server ^Apache
broken_vary_encoding allow apache


extension_methods REPORT MERGE MKACTIVITY CHECKOUT

visible_hostname proxy01
hosts_file /etc/hosts

coredump_dir /var/spool/squid
  


Chris



Re: [squid-users] Large-scale Reverse Proxy for serving images FAST

2009-03-17 Thread Chris Robertson

David Tosoff wrote:

OK. Thanks Amos.

Changing the icp_port to a unique value for each instance worked. I should have 
thought about that, as all instances were on the same host (localhost/127.0.0.1) 
with the same port... duhh.

So, I have a few other questions then: we're going to scale this up to a 
single-machine, single-instance setup of 64-bit Linux and 64-bit Squid 3.0 --
 - What OS would you personally recommend running Squid 3.x on for best 
performance?
  


This space intentionally left blank.


 - Is there no limit to the cache_mem we can use in squid 3? I'd be working 
with about 64GB of memory in this machine.
  


Of course there's a limit.  You just aren't likely to hit it with the 
hardware you are using.  As of Q3 2007, here's the official 
answer: http://www.squid-cache.org/mail-archive/squid-users/200709/0559.html



 - Can you elaborate on "heap replacement/garbage policy"??
  


http://www.squid-cache.org/Doc/config/cache_replacement_policy/ and 
http://www.squid-cache.org/Doc/config/memory_replacement_policy/ (The 
second link references the first, but would be the more relevant 
directive if you are going to be using a memory-only Squid).
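
For concreteness, a sketch of the two directives with the heap policies they
document (policy choice is workload-dependent, and the heap policies require
a Squid built with the heap removal policies):

memory_replacement_policy heap GDSF
cache_replacement_policy heap LFUDA

GDSF favours keeping many small, popular objects in memory; LFUDA favours
overall byte hit ratio on disk.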



 - Any other options to watch for, for optimizing memory cache usage?
  


http://www.squid-cache.org/Doc/config/memory_pools_limit/


Thanks again!

David
  


Chris


Re: [squid-users] Squid rewriting to http 1.0

2009-03-17 Thread Chris Woodfield
Squid doesn't support HTTP/1.1 from cache to client. Squid 2.7  
supports 1.1 from cache to origin servers, but cannot pass through  
chunked transfer-encodings. (It's the lack of support for this that  
prevents it from advertising 1.1 to clients.) However, just about  
every other 1.1 feature is supported via extension headers; do you  
see this breaking anything in particular?
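
One quick way to observe this from the command line (a sketch; the proxy
host and port are placeholders):

$ curl -sv -x http://proxy.example.com:3128/ -o /dev/null http://www.example.com/ 2>&1 | grep '< HTTP'
< HTTP/1.0 200 OK

The status line shows the proxy answering with HTTP/1.0 even when the
client sent an HTTP/1.1 request.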


-C

On Mar 17, 2009, at 2:59 PM, Nick Duda wrote:

Hopefully a simple question that others have seen before. We are  
running 2.6 STABLE22 (can't move to 3.0 yet due to smartfilter  
support). Clients are browsing to sites requesting HTTP 1.1 and  
squid is returning HTTP 1.0 to them. Does anyone have any starting  
grounds (I'm already banging through google results, with no luck  
just yet) on what is going on here and how I can fix this?


Regards,
Nick Duda
Manager, Information Security
GIAC GSEC | GCIH






Re: [squid-users] Squid exiting periodically (Preparing for shut down after)

2009-03-17 Thread Chris Robertson

twintu...@f2s.com wrote:

#17.19 is my old workstation IP, nothing is on this now
#16.2 used to be the IP of the squid server, not used anymore

acl netmgr src 10.106.17.19/255.255.255.255 10.106.16.2/255.255.255.255
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
{config cut}
  


What got cut?  Any http_access lines?


http_access allow localhost
http_access allow manager netmgr localhost
  


This acl reads "Allow requests using the cache_object protocol if they 
have a source of 10.106.17.19 or 10.106.16.2 and they have a source of 
127.0.0.1". Since no request can have a source of two different IP 
addresses (that I know of, at least), this limits manager access to 
localhost (which is allowed to do anything by the http_access line above).
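
A sketch of what was probably intended, splitting the sources into separate
rules so either may reach the manager interface:

http_access allow manager netmgr
http_access allow manager localhost
http_access deny manager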



http_access deny all
  


There must be more http_access lines.  Otherwise your proxy wouldn't be 
usable (unless Squid is a parent for another service on the same box).



Cheers

Rob


Chris


[squid-users] Squid rewriting to http 1.0

2009-03-17 Thread Nick Duda
Hopefully a simple question that others have seen before. We are running 2.6 
STABLE22 (can't move to 3.0 yet due to smartfilter support). Clients are 
browsing to sites requesting HTTP 1.1 and squid is returning HTTP 1.0 to them. 
Does anyone have any starting grounds (I'm already banging through google 
results, with no luck just yet) on what is going on here and how I can fix this?

Regards,
Nick Duda
Manager, Information Security
GIAC GSEC | GCIH




Re: [squid-users] Large-scale Reverse Proxy for serving images FAST

2009-03-17 Thread David Tosoff

OK. Thanks Amos.

Changing the icp_port to a unique value for each instance worked. I should have 
thought about that, as all instances were on the same host (localhost/127.0.0.1) 
with the same port... duhh.
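
A sketch of the per-instance separation that resolved this (ports are
illustrative; each instance gets its own icp_port, and the front-end's
cache_peer lines reference the matching ICP port):

# parent instance on http :81
icp_port 3131
# parent instance on http :82
icp_port 3132

# front-end (Squid0)
cache_peer localhost parent 81 3131 name=imgCache1 round-robin proxy-only
cache_peer localhost parent 82 3132 name=imgCache2 round-robin proxy-only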

So, I have a few other questions then: we're going to scale this up to a 
single-machine, single-instance setup of 64-bit Linux and 64-bit Squid 3.0 --
 - What OS would you personally recommend running Squid 3.x on for best 
performance?
 - Is there no limit to the cache_mem we can use in squid 3? I'd be working 
with about 64GB of memory in this machine.
 - Can you elaborate on "heap replacement/garbage policy"??
 - Any other options to watch for, for optimizing memory cache usage?

Thanks again!

David


--- On Tue, 3/17/09, Amos Jeffries  wrote:

> From: Amos Jeffries 
> Subject: Re: [squid-users] Large-scale Reverse Proxy for serving images FAST
> To: dtos...@yahoo.com
> Cc: squid-users@squid-cache.org
> Received: Tuesday, March 17, 2009, 12:10 AM
> David Tosoff wrote:
> > All,
> > 
> > I'm new to Squid and I have been given the task of
> optimizing the delivery of photos from our website. We have
> 1 main active image server which serves up the images to the
> end user via 2 chained CDNs. We want to drop the middle CDN
> as it's not performing well and is a waste of money; in
> it's stead we plan to place a few reverse proxy web
> accelerators between the primary CDN and our image server.
> > 
> 
> You are aware then that a few reverse-proxy accelerators
> are in fact the definition of a CDN? So you are building
> your own instead of paying for one.
> 
> Thank you for choosing Squid.
> 
> > We currently receive 152 hits/sec on average with
> about 550hps max to our secondary CDN from cache misses at
> the Primary.
> > I would like to serve a lot of this content straight
> from memory to get it out there as fast as possible.
> > 
> > I've read around that there are memory and
> processing limitations in Squid in the magnitude of 2-4GB
> RAM and 1 core/1 thread, respectively. So, my solution was
> to run multiple instances, as we don't have the
> rackspace to scale this out otherwise.
> > 
> 
> Memory limitations on large objects only exist in Squid-2.
> And 2-4GB RAM  issues reported recently are only due to
> 32-bit build + 32-bit hardware.
> 
> Your 8GB cache_mem settings below and stated object size
> show these are not problems for your Squid.
> 
> 152 req/sec is not enough to raise the CPU temperature with
> Squid, 550 might be noticeable but not a problem. 2700
> req/sec has been measured in accelerator Squid-2.6 on a
> 2.6GHz dual-core CPU and more performance improvements have
> been added since then.
> 
> 
> > I've managed to build a working config of 1:1
> squid:origin, but I am having trouble scaling this up and
> out.
> > 
> > Here is what I have attempted to do, maybe someone can
> point me in the right direction:
> > 
> > Current config:
> > User Browser -> Prim CDN -> Sec CDN -> Our
> Image server @ http port 80
> > 
> > New config idea:
> > User -> Prim CDN -> Squid0 @ http :80 ->
> round-robin to "parent" squid instances on same
> machine @ http :81, :82, etc -> Our Image server @ http
> :80
> > 
> > 
> > Squid0's (per diagram above) squid.conf:
> > 
> > acl Safe_ports port 80
> > acl PICS_DOM_COM dstdomain pics.domain.com
> > acl SQUID_PEERS src 127.0.0.1
> > http_access allow PICS_DOM_COM
> > icp_access allow SQUID_PEERS
> > miss_access allow SQUID_PEERS
> > http_port 80 accel defaultsite=pics.domain.com
> > cache_peer localhost parent 81 3130 name=imgCache1
> round-robin proxy-only
> > cache_peer localhost parent 82 3130 name=imgCache2
> round-robin proxy-only
> > cache_peer_access imgCache1 allow PICS_DOM_COM
> > cache_peer_access imgCache2 allow PICS_DOM_COM
> > cache_mem 8192 MB
> > maximum_object_size_in_memory 100 KB
> > cache_dir aufs /usr/local/squid0/cache 1024 16 256  --
> This one isn't really relevant, as nothing is being
> cached on this instance (proxy-only)
> > icp_port 3130
> > visible_hostname pics.domain.com/0
> > 
> > Everything else is per the defaults in squid.conf.
> > 
> > 
> > "Parent" squids' (from above diagram)
> squid.conf:
> > 
> > acl Safe_ports port 81
> > acl PICS_DOM_COM dstdomain pics.domain.com
> > acl SQUID_PEERS src 127.0.0.1
> > http_access allow PICS_DOM_COM
> > icp_access allow SQUID_PEERS
> > miss_access allow SQUID_PEERS
> > http_port 81 accel defaultsite=pics.domain.com
> > cache_peer 192.168.0.223 parent 80 0 no-query
> originserver name=imgParent
> > cache_peer localhost sibling 82 3130 name=imgCache2
> proxy-only
> > cache_peer_access imgParent allow PICS_DOM_COM
> > cache_peer_access imgCache2 allow PICS_DOM_COM
> > cache_mem 8192 MB
> > maximum_object_size_in_memory 100 KB
> > cache_dir aufs /usr/local/squid1/cache 10240 16 256
> > visible_hostname pics.domain.com/1
> > icp_port 3130
> > icp_hit_stale on
> > 
> > Everything else per defaults.
> > 
> > 
> > 
> > So, when I run this config and test I see the
> following happen in the logs:
> > 
> > Fro

Re: [squid-users] n00b question - show version

2009-03-17 Thread Rick Chisholm

squid -v should get you what you are looking for.

mcnicholas wrote:

Hi Guys

Sorry for such an inane post, but can someone tell me what command to run
that will show the version of Squid that is installed on a Red Hat server?

I've inherited some production boxes with no documentation.

Thanks
  




Re: [squid-users] n00b question - show version

2009-03-17 Thread Jakob Curdes




Sorry for such an inane post, but can someone tell me what command to run
that will show the version of Squid that is installed on a Red Hat server?
  
"whereis squid" should give you the path to your squid binary if it is 
installed in a sensible place.

Once you've found the binary you can do a

/path_to_squid/squid -v

Chances are that the path is /usr/sbin, but I am not sure as I always 
use self-compiled squids.
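
A sketch of the whole sequence (paths and output are illustrative and vary
by distribution and build):

$ whereis squid
squid: /usr/sbin/squid /etc/squid /usr/share/man/man8/squid.8.gz
$ /usr/sbin/squid -v
Squid Cache: Version 2.6.STABLE21
configure options: ...

On a Red Hat box the package manager can also answer: "rpm -q squid".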


HTH,
Jakob Curdes



Re: [squid-users] CARP question

2009-03-17 Thread Mark Nottingham


On 17/03/2009, at 7:44 AM, Chris Woodfield wrote:


Hi,

Had a question about squid's CARP implementation.

Let's say I have a farm of squids sitting behind an SLB, and behind  
those I have a set of parent caches. If I were to enable CARP on the  
front-end caches, is the hash algorithm deterministic enough to  
result in a URL request seen by more than one edge cache to be  
directed to the same parent cache?


Yes (keeping in mind that they can move around if the set of servers  
considered 'up' changes, and of course different FE caches will have a  
different idea of what set is 'up' at any particular point in time).



Or will each front-end cache have its own hash assignments compared  
to the others?


Also, how does CARP handle parent server removals and/or additions  
(i.e. are hash "buckets" reassigned gracefully or are they all  
redistributed)? Is this behavior also deterministic between multiple  
front-end squids?


It is deterministic, and the idea is to cause the least disruption.  
Search on 'consistent hashing' for the math; it's the same technique  
used in Akamai, Hadoop/BigTable, etc.
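
A minimal sketch of CARP parent selection on a front-end (hostnames are
placeholders; carp is the relevant cache_peer option):

cache_peer parent1.example.com parent 3128 0 carp no-query
cache_peer parent2.example.com parent 3128 0 carp no-query
cache_peer parent3.example.com parent 3128 0 carp no-query

Front-ends sharing the same peer list hash each URL to the same parent,
giving the deterministic mapping described above.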


Cheers,


--
Mark Nottingham   m...@yahoo-inc.com




[squid-users] n00b question - show version

2009-03-17 Thread mcnicholas

Hi Guys

Sorry for such an inane post, but can someone tell me what command to run
that will show the version of Squid that is installed on a Red Hat server?

I've inherited some production boxes with no documentation.

Thanks



Re: [squid-users] Squid Log analyzing tool- sort log results by time

2009-03-17 Thread Marcello Romani

m...@bortal.de ha scritto:

Hello List,

I am looking for a reporting tool for squid that shows me WHEN 
(DATE-TIME) someone (IP) accessed a URL.


So the output should look like:
- 


Tue, Mar 16 2009 |  192.168.123.123 | www.google.de
Tue, Mar 17 2009 |  192.168.123.23 | www.google.net
Tue, Mar 17 2009 |  192.168.123.3 | www.google.com

I was quite happy with sarg, but unfortunately I was not able to sort 
the logs by time.


Can anyone give me a hint here?

Thanks,
Mario



Have a look at mysar.

http://giannis.stoilis.gr/software/mysar/
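
For the exact WHEN | IP | URL listing, a one-liner over the raw access.log is
another option (a sketch assuming GNU awk for strftime and the default
"squid" log format, where field 1 is the UNIX timestamp, field 3 the client
IP, and field 7 the URL):

awk '{ print strftime("%a, %b %d %Y %H:%M", $1), "|", $3, "|", $7 }' /var/log/squid/access.log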

HTH

--
Marcello Romani


[squid-users] Squid and NTLM Error

2009-03-17 Thread Phibee Network Operation Center

Hi

I have a lot of errors in my squid cache.log:


2009/03/16 07:44:47| WARNING: up to 149 pending requests queued
2009/03/16 07:44:47| Consider increasing the number of ntlmauthenticator 
processes to at least 184 in your config file.

2009/03/16 07:45:17| WARNING: All ntlmauthenticator processes are busy.
2009/03/16 07:45:17| WARNING: up to 156 pending requests queued
2009/03/16 07:45:17| Consider increasing the number of ntlmauthenticator 
processes to at least 191 in your config file.

2009/03/16 07:45:32| storeDirWriteCleanLogs: Starting...
2009/03/16 07:45:32|   Finished.  Wrote 0 entries.
2009/03/16 07:45:32|   Took 0.0 seconds (   0.0 entries/sec).
FATAL: Too many queued ntlmauthenticator requests (176 on 35)
Squid Cache (Version 2.6.STABLE1): Terminated abnormally.
CPU Usage: 110.491 seconds = 56.740 user + 53.751 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
   total space in arena:    9240 KB
   Ordinary blocks:         7030 KB    243 blks
   Small blocks:               0 KB      0 blks
   Holding blocks:           224 KB      1 blks
   Free Small blocks:          0 KB
   Free Ordinary blocks:    2209 KB
   Total in use:            7254 KB  79%
   Total free:              2209 KB  24%




I think it's this config:

auth_param ntlm program /usr/bin/ntlm_auth 
--helper-protocol=squid-2.5-ntlmssp

auth_param ntlm children 35
#auth_param ntlm use_ntlm_negotiate on
#auth_param ntlm max_challenge_reuses 0
#auth_param ntlm max_challenge_lifetime 10 minutes

auth_param basic program /usr/bin/ntlm_auth 
--helper-protocol=squid-2.5-basic

auth_param basic children 15
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours



Correct?
What is the best configuration for NTLM?
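
The log itself suggests the direction of the fix: raise the NTLM helper
count to at least what the warning asked for. A sketch (the 191 came from
the log above; the right value depends on load and domain-controller
capacity):

auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 200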


Thanks
jerome



[squid-users] squid host mapping problem

2009-03-17 Thread ryan haynes
Using squid 2.6.STABLE18 on Ubuntu.

I have an old internal webserver at x.y.82.15 that needs to go away.
The new internal webserver is at x.y.82.11.
I've changed the /etc/hosts file to point to the new address, but my
clients keep getting content from the old webserver via squid.

On the squid server I can ping the hostname ourcompany.web and it
correctly resolves to x.y.82.11.

On the squid server (using itself as a proxy) I can connect to
http://ourcompany.web and it pulls content from the correct webserver.
However, clients still get the old server. They are XP clients, they
have no hostname configured, and ourcompany.web does not resolve through
DNS.

I did "sudo grep -r x.y.82.15 /etc/*" just to see if there was some
other hosts mapping somewhere, and it did turn up "/etc/hosts~" with
x.y.82.15, but I fixed that one and restarted squid with no luck; then
I restarted the server and still nothing. (Can anyone tell me what that
/etc/hosts~ file is???)

I suspected the old site was getting cached, but I don't think I'm even
using caching; please correct me if I'm wrong.

/etc/hosts & /etc/squid/squid.conf below... routable addresses have been masked.

If I'm overlooking something stupid, please feel free to berate me.

Thanks for any help!

**
127.0.0.1 localhost
127.0.1.1 proxy01
x.y.82.11 ourcompany.web

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
**

acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443  # https
acl Safe_ports port 80  # http
acl Safe_ports port 443 # https
acl purge method PURGE
acl CONNECT method CONNECT
acl 82.0_network src x.y.82.0/24
acl 81.0_network src x.y.81.0/24
acl loopback src 127.0.0.1
acl 10.193.15_network src 10.193.15.0/24
acl 10.193.16_network src 10.193.16.0/24
acl 10.193.17_network src 10.193.17.0/26
acl blocksites url_regex "/etc/squid/blacklist"
acl internal_domain dstdomain .ourcompany.web

cache_peer x.y.82.11 parent 80 0 no-query no-digest name=internalA

cache_peer_access internalA allow internal_domain
cache_peer_access internalA deny all

http_access deny blocksites
http_access allow loopback
http_access allow 82.0_network
http_access allow 81.0_network
http_access allow 10.193.15_network
http_access allow 10.193.16_network
http_access allow 10.193.17_network
http_access allow manager localhost
http_access deny manager

http_access allow purge localhost
icp_access allow all

http_port 8080
hierarchy_stoplist cgi-bin ?

access_log /var/log/squid/access.log squid

acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320

acl apache rep_header Server ^Apache
broken_vary_encoding allow apache


extension_methods REPORT MERGE MKACTIVITY CHECKOUT

visible_hostname proxy01
hosts_file /etc/hosts

coredump_dir /var/spool/squid


Re: [squid-users] Config suggestion

2009-03-17 Thread Herbert Faleiros
On Tue, 17 Mar 2009 12:58:08 +1200 (NZST), "Amos Jeffries"
 wrote:
[cut]
> 
> You have 5 physical disks by the looks of it. Best usage of those is to
> split the cache_dir one per disk (sharing a disk leads to seek clashes).


OK, I will disable LVM and try it.

 
> I'm not too up on the L1/L2 efficiencies, but "64 256" or higher L1 seems
> to be better for larger dir sizes.

OK...


> For a quad or higher CPU machine, you may do well to have multiple Squid
> running (one per 2 CPUs or so). One squid doing the caching on the 300GB
> drives and one on the smaller ~100 GB drives (to get around a small bug
> where mismatched AUFS dirs cause starvation in small dir), peered together
> with no-proxy option to share info without duplicating cache.


4 Squids, 1 disk per Squid process, and a cache_peer config... Sounds good.


[cut]
> Absolutely minimal swapping of memory.

Decreased to 2GiB; the rule in the FAQ/wiki that x% of cache_dir (disk)
should be matched by y% of cache_mem seems confusing to me.

-- 
Herbert


Re: [squid-users] Config suggestion

2009-03-17 Thread Herbert Faleiros
On Tue, 17 Mar 2009 09:54:00 +0100, Matus UHLAR - fantomas
 wrote:
[cut]
> is that one quad-core with hyperthreading, two quad-cores without HT, or two
> dual-cores with HT? We apparently should count HT CPUs as one, not two.

2 Xeon quad-cores (4 cores per processor, 8 total), no HT...


[cut]
>> >              total       used       free     shared    buffers     cached
>> > Mem:         32148       2238      29910          0        244        823
>> > -/+ buffers/cache:        1169      30978
>> > Swap:        15264          0      15264
> 
> swap is quite useless here I'd say...


Uptime was 1/2 min. Look at it now:

$ free -m
             total       used       free     shared    buffers     cached
Mem:         32151      31996        155          0       1891      24108
-/+ buffers/cache:        5996      26155
Swap:        15264          6      15258


[cut]
> I'd say that the 73.5 Gb disk should be used only for OS, logs etc.

I did it.


[cut]
>> I'm not too up on the L1/L2 efficiencies, but "64 256" or higher L1 seems
>> to be better for larger dir sizes.

OK, I will try...


[cut]
> Note that for a 300GiB HDD you will be using max 250, more probably 200, and
> some people would advise 150GiB of cache. Leave some space for metadata and
> some for reserve - filesystems may benefit from it.

I always configure Squid to use only 80% of the HDD...


[cut]
>> For a quad or higher CPU machine, you may do well to have multiple Squid
>> running (one per 2 CPUs or so). One squid doing the caching on the 300GB
>> drives and one on the smaller ~100 GB drives (to get around a small bug
>> where mismatched AUFS dirs cause starvation in small dir), peered
>> together with no-proxy option to share info without duplicating cache.


Cool! Thanks...

-- 
Herbert



[squid-users] Squid Log analyzing tool- sort log results by time

2009-03-17 Thread m...@bortal.de

Hello List,

I am looking for a reporting tool for squid that shows me WHEN 
(DATE-TIME) someone (IP) accessed a URL.


So the output should look like:
-
Tue, Mar 16 2009 |  192.168.123.123 | www.google.de
Tue, Mar 17 2009 |  192.168.123.23 | www.google.net
Tue, Mar 17 2009 |  192.168.123.3 | www.google.com

I was quite happy with sarg, but unfortunately I was not able to sort 
the logs by time.


Can anyone give me a hint here?

Thanks,
Mario


[squid-users] CARP question

2009-03-17 Thread Chris Woodfield

Hi,

Had a question about squid's CARP implementation.

Let's say I have a farm of squids sitting behind an SLB, and behind  
those I have a set of parent caches. If I were to enable CARP on the  
front-end caches, is the hash algorithm deterministic enough to result  
in a URL request seen by more than one edge cache to be directed to  
the same parent cache? Or will each front-end cache have its own hash  
assignments compared to the others?


Also, how does CARP handle parent server removals and/or additions  
(i.e. are hash "buckets" reassigned gracefully or are they all  
redistributed)? Is this behavior also deterministic between multiple  
front-end squids?


Thanks,

-C


Re: [squid-users] Squid exiting periodically (Preparing for shut down after)

2009-03-17 Thread twinturbo


#17.19 is my old workstation IP, nothing is on this now
#16.2 used to be the IP of the squid server, not used anymore

acl netmgr src 10.106.17.19/255.255.255.255 10.106.16.2/255.255.255.255
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
{config cut}
http_access allow localhost
http_access allow manager netmgr localhost
http_access deny all

Cheers

Rob






Quoting ROBIN :

> I will have a look, the basic config file has been in use for about 10
> years with no major issues. ( god i feel old now thinking about that.)
>
> Will examine and post the manager ACL's
>
>
> Rob
>
>
> On Tue, 2009-03-17 at 10:59 +1200, Amos Jeffries wrote:
> > > (Amos) Sorry did not reply to list Ignore..
> > >
> > > I wish SLES10 was more up to date on a few packages!!
> > >
> > > I can't find anything that may be shutting down squid; certainly there
> > > seems to be no cron jobs, and the issues are happening at approximately
> > > 22 minute intervals, which is not consistent with a cron schedule.
> > >
> > > It's very odd, and it's been happening for a while, but we had not noticed.
> > >
> > > I may just try a full restart on the system.
> > >
> > > Thanks
> > >
> > > Rob
> >
> > You could also check what type of controls you have around the 'manager'
> > ACL in squid.conf. Every visitor with an allow line before the "deny
> > manager" line may have the option to restart Squid with an HTTP request.
> >
> > Amos
> >
> > >
> > >
> > >
> > > twintu...@f2s.com wrote:
> > >> Squid 2.5STABLE12 on SLES10
> > >>
> > >> I know this is quite an old version but it's on our production machine.
> > >>
> > >
> > > Yes. Please bug SLES about using a newer release.
> > >
> > >
> > >> Anyway we have a strange issue where squid seems to be shutting down
> > >> every 22 minutes or so; the log says Preparing for shut down after XXX
> > >> requests.
> > >>
> > >> Now every minute we do a "squid -k reconfigure" as we run squidGuard
> > >> and its config can change all the time. This has never seemed to be a
> > >> problem in the past.
> > >>
> > >> I am building up a fresh machine to take over, but would like to get
> > >> this one working properly too.
> > >>
> > >> So far I have stopped the store.log being written and got the other
> > >> logs rotating more than once a day to keep them small.
> > >>
> > >> I was previously getting errors about there being too few redirectors,
> > >> so I upped that to 30; I have now set it back down to 10 to see what
> > >> happens.
> > >>
> > >> Rob
> > >
> > > "Preparing for shut down after XXX requests" occurs when Squid receives
> > > its proper shutdown signal. A clean/graceful shutdown proceeds to follow.
> > >
> > > Amos
> > > --
> > > Please be using
> > >Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
> > >Current Beta Squid 3.1.0.6
> > >
> > >
> > >
> >
> >
> >
>
>
>






Re: [squid-users] Don't log clientParseRequestMethod messages

2009-03-17 Thread Amos Jeffries

Herbert Faleiros wrote:

On Tue, 17 Mar 2009 17:13:13 +1300, Amos Jeffries 
wrote:
[cut]
No it's a debug log and those messages are important/useful to track bad 
clients in your traffic.


What unknown methods is it recording?


Lots and lots (and lots) of trash (SIP, P2P and/or perhaps virus code). The
cache.log info is VERY useful, but this kind of message
obviously pollutes the log (it can be filtered with: cat /var/log/squid/cache.log
| grep -Ev 'client.+Request', but I don't know if it will catch only
clientParseRequestMethod log entries).


No, that pattern will catch all client request handling messages.

You'll have to edit the code and remove the debug() or debugs() line to 
silence it fully. Or use a pattern that catches just the text "WARNING: 
Unsupported Request" or whatever the exact string is.



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.6


Re: [squid-users] Nagging problem

2009-03-17 Thread Amos Jeffries

Jagdish Rao wrote:

Hi,

Squid ACL does not seem to work properly. I have created an ACL for code
project and it does not seem to work. Can anyone help?

Excerpts from squid.conf

# SQUID DEFAULTS 
http_port 8000
#hierarchy_stoplist cgi-bin ?
#acl QUERY urlpath_regex cgi-bin \?
#no_cache deny QUERY
cache_log /var/log/squid/cache.log
debug_options ALL,1 33,2
debug_options ALL,1


The second debug_options overrides the first. To get your trace properly, 
comment the second entry out.




 AUTHENTICATIONS ###

auth_param basic program /usr/lib/squid/ncsa_auth
/etc/squid/data/valid-users
auth_param basic children 5
auth_param basic realm Accord-Soft Proxy-caching Web Server
auth_param basic credentialsttl 2 hour
auth_param basic casesensitive off

request_body_max_size 50 KB
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320

### ACCESS CONTROLS ###


 Format for Access Controls 
## 
## 
## 
## 

acl password proxy_auth REQUIRED
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object

## USER DEFINED ACLS ###
#---

## Authenticating Users ###
#--
acl cdprjuser proxy_auth codeproject

 ACL TIMINGS ###
#---
acl codeprj time 9:00-17:00

### ACL for Codeproj ##
#--
#acl cdprjuser url_regex "/etc/squid/data/codeprj-sites"
acl cdprjurl url_regex codeproject.com
acl cdprjurl url_regex msdn2.microsoft.com
acl cdprjurl url_regex msdn.microsoft.com
acl cdprjurl url_regex msdn.com
acl cdprjurl url_regex smartworks.us
acl cdprjurl url_regex installshield.com
acl cdprjurl url_regex asp.net
acl cdprjurl url_regex ajax.asp.net
acl cdprjurl url_regex rodrickbrown.com
acl cdprjurl url_regex csharp-station.com
acl cdprjurl url_regex csharpcomputing.com
acl cdprjurl url_regex albahari.com
acl cdprjurl url_regex c-sharpcorner.com
acl cdprjurl url_regex devsource.com
acl cdprjurl url_regex developerfusion.co.uk


gah!!!
make these all "dstdomain" type for an order of magnitude speed increase.
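
A sketch of the dstdomain form (one acl name, domains listed together; a
leading dot matches the domain and its subdomains):

acl cdprjurl dstdomain .codeproject.com .msdn.microsoft.com .msdn2.microsoft.com
acl cdprjurl dstdomain .msdn.com .smartworks.us .installshield.com .asp.net
acl cdprjurl dstdomain .rodrickbrown.com .csharp-station.com .csharpcomputing.com
acl cdprjurl dstdomain .albahari.com .c-sharpcorner.com .devsource.com
acl cdprjurl dstdomain .developerfusion.co.uk

dstdomain matches the request's host name only, so the query-string trick in
the example below would no longer match.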



http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

### Access Goes Here ###
#---
http_access allow cdprjuser codeprj cdprjurl
.
.
.
http_access deny all

cache_mgr netad...@accord-soft.com
visible_hostname squid.accord-soft.com




Any help would be appreciated.

Thanks

Regards

Jagdish



How does that not work?

You configured: anyone logging in as user "codeproject" with any 
password gets access from 9am to 5pm to any URL containing one of a list 
of domain names.


For example:
  anyone can send your squid, with user/pass codeproject:fubar, 
http://www.google.com/search?q=free+porn&foo=asp.net at 2pm and get the 
search results page back.



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.6


Re: [squid-users] what is the difference between transparent and reverse proxy?

2009-03-17 Thread Amos Jeffries

Tomasz Chmielewski wrote:

Amos Jeffries schrieb:

Why should I use all directives for configuring a reverse proxy, if 
it works with the setup explained above?

Or, am I missing something important here?



Yes. Transparent/intercept only works in the presence of NAT.
It also is not possible to perform any form of authentication, HTTPS, 
or request modification without causing major problems to anyone who 
visits the site.


All the old problems squid 2.5 has with virtual hosted domains, broken 
client software, DNS loops, and request forwarding loops can be 
tracked back to the reverse-accelerator mode using the transparent 
intercept mode like you describe.


Does this also mean that using Squid as a reverse proxy with website's 
DNS entry pointed at Squid machine is the only way to reliably cache web 
traffic to the webserver?


No, any mode except offline mode will cache just as well. The problems 
are all about request retrieval or HTTP transfer requirements.




I imagined I can have an accelerating/caching proxy for a webserver in 
at least two different setups:


1) point webserver's DNS entry at Squid's IP; Squid will do all 
caching/proxying when working in reverse (more reliable) or transparent 
(less reliable) mode



2) don't change anything in DNS, but instead, make sure routing to the 
webserver goes through the Squid machine, i.e.:


client -> Squid (public IP) -> webserver (public IP)

Here, we perhaps have to use transparent/intercept mode.


Still use reverse mode settings in Squid. How the packets are routed 
there is of no consequence.
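
A minimal reverse-mode sketch for setup 2 (hostnames and the origin IP are
placeholders; vhost lets one Squid accelerate multiple domains):

http_port 80 accel vhost
cache_peer 192.0.2.10 parent 80 0 no-query originserver name=web1
acl our_sites dstdomain .example.tld
http_access allow our_sites
cache_peer_access web1 allow our_sites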





3) are there any other modes than 1) and 2) which could be used for 
caching/accelerating traffic from a webserver?



How reliable would it be to use 2), provided I use anything newer than 
Squid 2.5? Your reply seems to suggest that problems with 
transparent/intercept mode used for reverse proxying apply to Squid 2.5, 
but it doesn't mention whether newer Squid versions will work better in 
such scenarios.


2.5 had major problems because its reverse mode was really transparent 
mode in disguise. Newer squids work fine and faster with their real 
reverse mode. If you force transparent mode to act like reverse, it 
breaks the same stuff no matter the version.


Oh, I forgot this too: 
http://fr.securityvibes.com/vulnerabilite-CVE-2009-0801.html
It's a general transparent proxy issue, but Squid is still vulnerable as 
a vector. The fix is likely to scupper your plans.



Let's put it this way:
  3x NAT traversals
  2x DNS resolves
  4x TCP links
  3x request copies
  3x reply copies

vs:
  1x DNS resolve
  2x TCP links
  1x request copy
  1x reply copy

which is going to be faster with less breakage points?

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.6


Re: [squid-users] Don't log clientParseRequestMethod messages

2009-03-17 Thread Herbert Faleiros
On Tue, 17 Mar 2009 17:13:13 +1300, Amos Jeffries 
wrote:
[cut]
> No it's a debug log and those messages are important/useful to track bad 
> clients in your traffic.
> 
> What unknown methods is it recording?

Lots and lots (and lots) of trash (SIP, P2P and/or perhaps virus code). The
cache.log info is VERY useful, but this kind of message
obviously pollutes the log (it can be filtered with: cat /var/log/squid/cache.log
| grep -Ev 'client.+Request', but I don't know if it will catch only
clientParseRequestMethod log entries).


[squid-users] Nagging problem

2009-03-17 Thread Jagdish Rao
Hi,

Squid ACL does not seem to work properly. I have created an ACL for code
project and it does not seem to work. Can anyone help?

Excerpts from squid.conf

# SQUID DEFAULTS 
http_port 8000
#hierarchy_stoplist cgi-bin ?
#acl QUERY urlpath_regex cgi-bin \?
#no_cache deny QUERY
cache_log /var/log/squid/cache.log
debug_options ALL,1 33,2
debug_options ALL,1

 AUTHENTICATIONS ###

auth_param basic program /usr/lib/squid/ncsa_auth
/etc/squid/data/valid-users
auth_param basic children 5
auth_param basic realm Accord-Soft Proxy-caching Web Server
auth_param basic credentialsttl 2 hour
auth_param basic casesensitive off

request_body_max_size 50 KB
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320

### ACCESS CONTROLS ###


 Format for Access Controls 
## 
## 
## 
## 

acl password proxy_auth REQUIRED
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object

## USER DEFINED ACLS ###
#---

## Authenticating Users ###
#--
acl cdprjuser proxy_auth codeproject

 ACL TIMINGS ###
#---
acl codeprj time 9:00-17:00

### ACL for Codeproj ##
#--
#acl cdprjuser url_regex "/etc/squid/data/codeprj-sites"
acl cdprjurl url_regex codeproject.com
acl cdprjurl url_regex msdn2.microsoft.com
acl cdprjurl url_regex msdn.microsoft.com
acl cdprjurl url_regex msdn.com
acl cdprjurl url_regex smartworks.us
acl cdprjurl url_regex installshield.com
acl cdprjurl url_regex asp.net
acl cdprjurl url_regex ajax.asp.net
acl cdprjurl url_regex rodrickbrown.com
acl cdprjurl url_regex csharp-station.com
acl cdprjurl url_regex csharpcomputing.com
acl cdprjurl url_regex albahari.com
acl cdprjurl url_regex c-sharpcorner.com
acl cdprjurl url_regex devsource.com
acl cdprjurl url_regex developerfusion.co.uk

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

### Access Goes Here ###
#---
http_access allow cdprjuser codeprj cdprjurl
.
.
.
http_access deny all

cache_mgr netad...@accord-soft.com
visible_hostname squid.accord-soft.com




Any help would be appreciated.

Thanks

Regards

Jagdish










Re: [squid-users] what is the difference between transparent and reverse proxy?

2009-03-17 Thread Tomasz Chmielewski

Amos Jeffries schrieb:

Why should I use all directives for configuring a reverse proxy, if it 
works with the setup explained above?

Or, am I missing something important here?



Yes. Transparent/intercept only works in the presence of NAT.
It also is not possible to perform any form of authentication, HTTPS, or 
request modification without causing major problems to anyone who visits 
the site.


All the old problems squid 2.5 has with virtual hosted domains, broken 
client software, DNS loops, and request forwarding loops can be tracked 
back to the reverse-accelerator mode using the transparent intercept 
mode like you describe.


Does this also mean that using Squid as a reverse proxy with website's 
DNS entry pointed at Squid machine is the only way to reliably cache web 
traffic to the webserver?


I imagined I can have an accelerating/caching proxy for a webserver in 
at least two different setups:


1) point webserver's DNS entry at Squid's IP; Squid will do all 
caching/proxying when working in reverse (more reliable) or transparent 
(less reliable) mode



2) don't change anything in DNS, but instead, make sure routing to the 
webserver goes through the Squid machine, i.e.:


client -> Squid (public IP) -> webserver (public IP)

Here, we perhaps have to use transparent/intercept mode.


3) are there any other modes than 1) and 2) which could be used for 
caching/accelerating traffic from a webserver?



How reliable would it be to use 2), provided I use anything newer than 
Squid 2.5? Your reply seems to suggest that problems with 
transparent/intercept mode used for reverse proxying apply to Squid 2.5, 
but it doesn't mention whether newer Squid versions will work better in 
such scenarios.



--
Tomasz Chmielewski
http://wpkg.org


Re: [squid-users] Config suggestion

2009-03-17 Thread Matus UHLAR - fantomas
> > # cat /proc/cpuinfo  | egrep -i xeon | uniq
> > model name  : Intel(R) Xeon(R) CPU   E5405  @ 2.00GHz
> > # cat /proc/cpuinfo  | egrep -i xeon | wc -l
> > 8

is that one quad-core with hyperthreading, two quad-cores without HT, or two
dual-cores with HT? We apparently should count HT CPUs as one, not two.

> >              total       used       free     shared    buffers     cached
> > Mem:         32148       2238      29910          0        244        823
> > -/+ buffers/cache:        1169      30978
> > Swap:        15264          0      15264

swap is quite useless here I'd say...

> > # fdisk -l | grep GB
> > Disk /dev/sda: 73.5 GB, 73557090304 bytes
> > Disk /dev/sdb: 300.0 GB, 3000 bytes
> > Disk /dev/sdc: 146.8 GB, 146815737856 bytes
> > Disk /dev/sdd: 300.0 GB, 3000 bytes
> > Disk /dev/sde: 300.0 GB, 3000 bytes

> > # uname -srm
> > Linux 2.6.27.7 x86_64

> > # cat /etc/squid/squid.conf | grep -E cache_'mem|dir'\

you apparently really wanted cache_'(mem|dir)' btw...

> > cache_mem 8192 MB
> > cache_dir aufs /var/cache/proxy/cache1 102400 16 256
> > cache_dir aufs /var/cache/proxy/cache2 102400 16 256
> > cache_dir aufs /var/cache/proxy/cache3 102400 16 256
> > cache_dir aufs /var/cache/proxy/cache4 102400 16 256
> > cache_dir aufs /var/cache/proxy/cache5 102400 16 256
> > cache_dir aufs /var/cache/proxy/cache6 102400 16 256
> > cache_dir aufs /var/cache/proxy/cache7 102400 16 256
> > cache_dir aufs /var/cache/proxy/cache8 102400 16 256

> > # cat /etc/fstab  | grep proxy
> > /dev/vg00/cache  /var/cache/proxy ext3defaults 1   2

> > Yes, I know, LVM, ext3 and aufs are bad ideas... I'm particularly
> > interested in a better cache_dir configuration (maximizing disk's usage)
> > and the correct cache_mem parameter to this hardware. (and others
> > possible/useful tips)

lvm is surely a bad idea for a proxy; not so sure about ext3 (very stable), and
aufs is a _good_ idea.

On 17.03.09 12:58, Amos Jeffries wrote:
> You have 5 physical disks by the looks of it. Best usage of those is to
> split the cache_dir one per disk (sharing a disk leads to seek clashes).

I'd say that the 73.5 GB disk should be used only for the OS, logs, etc.

> I'm not to up on the L1/L2 efficiencies, but "64 256" or higher L1 seems
> to be better for larger dir sizes.

L1 should IMHO be increased by 1 for every 65536 objects (256 L2 dirs *
256 files in each of them); with an average size of 13KiB (default) that roughly
means one L1 dir for each GB of cache_dir size. Depending on your
maximum_object_size the average size may be higher, but that doesn't change
much.
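
A worked instance of that rule (numbers illustrative): a 200 GiB cache_dir at
~13 KiB per object holds about 200*1024*1024/13 = ~16.1 million objects, and
16.1M / 65536 = ~246 L1 dirs, so something like:

cache_dir aufs /var/cache/proxy/cache1 204800 256 256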

Note that for a 300GiB HDD you will be using max 250, more probably 200, and
some people would advise 150GiB of cache. Leave some space for metadata and
some for reserve - filesystems may benefit from it.

> For a quad or higher CPU machine, you may do well to have multiple Squid
> running (one per 2 CPUs or so). One squid doing the caching on the 300GB
> drives and one on the smaller ~100 GB drives (to get around a small bug
> where mismatched AUFS dirs cause starvation in small dir), peered together
> with no-proxy option to share info without duplicating cache.

Maybe even one "master" squid with big memory_cache, accessed by clients,
having those with cache_dir's (zero cache_mem) as parents and never_direct
set to on, and "off" only for files you surely don't cache e.g. the default
"query" acl, if you didn't comment that out.

I'm currently not sure if we can ask the "master" squid to fetch directly
everything it surely won't cache...
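
A sketch of that layout (ports, paths and sizes are placeholders; the null
store on the master needs a Squid built with the null storeio module,
otherwise use a tiny ufs dir instead):

# master: memory-only, clients connect here
http_port 8080
cache_mem 8192 MB
cache_dir null /tmp
cache_peer 127.0.0.1 parent 8081 0 no-query name=disk1
cache_peer 127.0.0.1 parent 8082 0 no-query name=disk2
acl QUERY urlpath_regex cgi-bin \?
never_direct deny QUERY
never_direct allow all

# each disk parent: disk-only
cache_mem 0 MB
cache_dir aufs /var/cache/proxy/cache1 204800 256 256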

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
M$ Win's are shit, do not use it !


Re: [squid-users] what is the difference between transparent and reverse proxy?

2009-03-17 Thread Tomasz Chmielewski

Amos Jeffries schrieb:

Why should I use all directives for configuring a reverse proxy, if it 
works with the setup explained above?

Or, am I missing something important here?



Yes. Transparent/intercept only works in the presence of NAT.
It also is not possible to perform any form of authentication, HTTPS, or 
request modification without causing major problems to anyone who visits 
the site.


All the old problems squid 2.5 has with virtual hosted domains, broken 
client software, DNS loops, and request forwarding loops can be tracked 
back to the reverse-accelerator mode using the transparent intercept 
mode like you describe.


Thanks for a good explanation.


--
Tomasz Chmielewski
http://wpkg.org


Re: [squid-users] what is the difference between transparent and reverse proxy?

2009-03-17 Thread Amos Jeffries

Tomasz Chmielewski wrote:

What is the difference between transparent and reverse proxy?

OK, it may sound like a naive question, but one can set up a transparent 
proxy to be a de facto reverse proxy:


- redirect traffic (iptables) from port 80 to 3128
- add to squid.conf:

acl proxy_websites dstdomain .example.tld
http_access allow proxy_websites


And we have a transparent proxy which is a reverse proxy when someone is 
trying to reach www.example.tld.


Why should I use all directives for configuring a reverse proxy, if it 
works with the setup explained above?

Or, am I missing something important here?



Yes. Transparent/intercept only works in the presence of NAT.
It also is not possible to perform any form of authentication, HTTPS, or 
request modification without causing major problems to anyone who visits 
the site.


All the old problems squid 2.5 has with virtual hosted domains, broken 
client software, DNS loops, and request forwarding loops can be tracked 
back to the reverse-accelerator mode using the transparent intercept 
mode like you describe.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.6


Re: [squid-users] Large-scale Reverse Proxy for serving images FAST

2009-03-17 Thread Amos Jeffries

David Tosoff wrote:

All,

I'm new to Squid and I have been given the task of optimizing the delivery of 
photos from our website. We have 1 main active image server which serves up the 
images to the end user via 2 chained CDNs. We want to drop the middle CDN as 
it's not performing well and is a waste of money; in its stead we plan to 
place a few reverse proxy web accelerators between the primary CDN and our 
image server.



You are aware then that a few reverse-proxy accelerators are in fact the 
definition of a CDN? So you are building your own instead of paying for one.


Thank you for choosing Squid.


We currently receive 152 hits/sec on average with about 550 hits/sec max to our 
secondary CDN from cache misses at the Primary.
I would like to serve a lot of this content straight from memory to get it out 
there as fast as possible.

I've read around that there are memory and processing limitations in Squid on 
the order of 2-4GB RAM and 1 core/1 thread, respectively. So, my solution 
was to run multiple instances, as we don't have the rackspace to scale this out 
otherwise.



Memory limitations on large objects only exist in Squid-2. And the 2-4GB RAM 
issues reported recently are only due to 32-bit builds on 32-bit hardware.


Your 8GB cache_mem settings below and stated object size show these are 
not problems for your Squid.


152 req/sec is not enough to raise the CPU temperature with Squid, 550 
might be noticeable but not a problem. 2700 req/sec has been measured in 
accelerator Squid-2.6 on a 2.6GHz dual-core CPU and more performance 
improvements have been added since then.




I've managed to build a working config of 1:1 squid:origin, but I am having 
trouble scaling this up and out.

Here is what I have attempted to do, maybe someone can point me in the right 
direction:

Current config:
User Browser -> Prim CDN -> Sec CDN -> Our Image server @ http port 80

New config idea:
User -> Prim CDN -> Squid0 @ http :80 -> round-robin to "parent" squid instances 
on same machine @ http :81, :82, etc -> Our Image server @ http :80


Squid0's (per diagram above) squid.conf:

acl Safe_ports port 80
acl PICS_DOM_COM dstdomain pics.domain.com
acl SQUID_PEERS src 127.0.0.1
http_access allow PICS_DOM_COM
icp_access allow SQUID_PEERS
miss_access allow SQUID_PEERS
http_port 80 accel defaultsite=pics.domain.com
cache_peer localhost parent 81 3130 name=imgCache1 round-robin proxy-only
cache_peer localhost parent 82 3130 name=imgCache2 round-robin proxy-only
cache_peer_access imgCache1 allow PICS_DOM_COM
cache_peer_access imgCache2 allow PICS_DOM_COM
cache_mem 8192 MB
maximum_object_size_in_memory 100 KB
cache_dir aufs /usr/local/squid0/cache 1024 16 256  -- This one isn't really 
relevant, as nothing is being cached on this instance (proxy-only)
icp_port 3130
visible_hostname pics.domain.com/0

Everything else is per the defaults in squid.conf.


"Parent" squids' (from above diagram) squid.conf:

acl Safe_ports port 81
acl PICS_DOM_COM dstdomain pics.domain.com
acl SQUID_PEERS src 127.0.0.1
http_access allow PICS_DOM_COM
icp_access allow SQUID_PEERS
miss_access allow SQUID_PEERS
http_port 81 accel defaultsite=pics.domain.com
cache_peer 192.168.0.223 parent 80 0 no-query originserver name=imgParent
cache_peer localhost sibling 82 3130 name=imgCache2 proxy-only
cache_peer_access imgParent allow PICS_DOM_COM
cache_peer_access imgCache2 allow PICS_DOM_COM
cache_mem 8192 MB
maximum_object_size_in_memory 100 KB
cache_dir aufs /usr/local/squid1/cache 10240 16 256
visible_hostname pics.domain.com/1
icp_port 3130
icp_hit_stale on

Everything else per defaults.



So, when I run this config and test I see the following happen in the logs:

From "Squid0" I see that it resolves to grab the image from one of it's parent caches. This is 
great! (some show as "Timeout_first_up_parent" and others as just "first_up_parent")

1237253713.769 62 127.0.0.1 TCP_MISS/200 2544 GET 
http://pics.domain.com:81/thumbnails/59/78/45673695.jpg - 
TIMEOUT_FIRST_UP_PARENT/imgParent image/jpeg

From the parent cache that it resolves to, I see that it grabs the image from 
ITS parent, the originserver (our image server). Subsequent requests are 'TCP_HIT' 
or mem hit. Great stuff!

1237253713.769 62 127.0.0.1 TCP_MISS/200 2694 GET 
http://pics.domain.com/thumbnails/59/78/45673695.jpg - 
FIRST_PARENT_MISS/imgCache1 image/jpeg


Problem is, it doesn't round-robin the requests to both of my "parent" squids and you end up with a very 1-sided cache. If I stop 
the "parent" instance that is resolving the items, the second "parent" doesn't take over either. If I then proceed to 
restart the "Squid0" instance, it will then direct the requests to the second "parent", but then the first wont recieve 
any requests. So I know both "parent" configs work, but I must be doing something wrong somewhere, or is this all just a silly 
idea...?



This is caused by Squid0 only sending ICP queries to a single peer 
(itself?) on port 3130.  Each squid needs a