Re: [squid-users] Split caching by size

2009-05-19 Thread Amos Jeffries

Jason Spegal wrote:
Just tested and verified this. At least in Squid 3.0 minimum_object_size 
affects both memory and disk caches. Anyone know if this is true in 3.1 
as well? Any thoughts as to how to split it? I may be wrong and likely 
am but I recall there was separate minimum_object_size for each cache at 
one time.


Same for all Squid-3 so far.
The per-cache_dir version is awaiting port from Squid-2.



Chris Robertson wrote:

Jason Spegal wrote:
How do I configure squid to only cache small objects, say less than 
4mb in memory cache,


http://www.squid-cache.org/Doc/config/maximum_object_size_in_memory/


and only objects larger than 4mb to the disk?


http://www.squid-cache.org/Doc/config/minimum_object_size/

I want to optimize the cache based on object size. The reasoning is 
the small stuff will change often and be accessed the most while the 
larger items that tie up bandwidth will not change as often and I can 
cache more aggressively. Also this way I minimize disk io and lag. I 
am using squid 3.0. While I can see this being done with the disk 
cache I am not certain the memory cache can be configured like this 
anymore as the options seem to be missing.


Thanks,
  Jason


Chris
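
For illustration, a minimal squid.conf sketch combining the two directives
linked above (the 4 MB threshold comes from the question; the cache_dir
values are placeholders, and note the caveat above that in Squid-3 so far
minimum_object_size applies to both caches):

  maximum_object_size_in_memory 4 MB
  minimum_object_size 4 MB
  maximum_object_size 512 MB
  cache_dir aufs /var/spool/squid 10000 16 256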




Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE15
  Current Beta Squid 3.1.0.7


Re: [squid-users] Split caching by size

2009-05-19 Thread Adrian Chadd
It's a per-cache_dir option in Squid-2.7 and above; I'm not sure about 3.



Adrian

2009/5/20 Jason Spegal :
> Just tested and verified this. At least in Squid 3.0 minimum_object_size
> affects both memory and disk caches. Anyone know if this is true in 3.1 as
> well? Any thoughts as to how to split it? I may be wrong and likely am but I
> recall there was separate minimum_object_size for each cache at one time.
>
> Chris Robertson wrote:
>>
>> Jason Spegal wrote:
>>>
>>> How do I configure squid to only cache small objects, say less than 4mb
>>> in memory cache,
>>
>> http://www.squid-cache.org/Doc/config/maximum_object_size_in_memory/
>>
>>> and only objects larger than 4mb to the disk?
>>
>> http://www.squid-cache.org/Doc/config/minimum_object_size/
>>
>>> I want to optimize the cache based on object size. The reasoning is the
>>> small stuff will change often and be accessed the most while the larger
>>> items that tie up bandwidth will not change as often and I can cache more
>>> aggressively. Also this way I minimize disk io and lag. I am using squid
>>> 3.0. While I can see this being done with the disk cache I am not certain
>>> the memory cache can be configured like this anymore as the options seem to
>>> be missing.
>>>
>>> Thanks,
>>>  Jason
>>
>> Chris
>
>


Re: [squid-users] New Squid3 Stable 13 Setup

2009-05-19 Thread bharathvn

Hi Amos,

I see TCP_MISS entries in the parent server's access log:

TCP_MISS/200 23162 GET
http://www.monitor.com/Portals/0/MonitorContent/images/globalOffices_0.jpg -
DIRECT/12.151.151.79 image/jpeg

On the sibling server I see both TCP_MISS/200 and 503 errors.

Thanks


Amos Jeffries-2 wrote:
> 
>>
>> Thanks Amos
>>
>> It's working now, but I see a small issue when I do a query on a search
>> engine like google.
>>
>> I can directly hit any site, but when I do a search I get:
>>
>> The system returned: (111) Connection refused
>>
>> The remote host or network may be down. Please try the request again.
>>
> 
> Any other info? Is it actually going direct? What does the log say? etc. etc
> 
> Amos
> 
>>
>> Thanks
>>
>> Amos Jeffries-2 wrote:
>>>

 Hi Amos,

 Thanks for responding to my message.

 I am trying to achieve the setup described below.

 Site A has Proxy2, and another proxy (Proxy1) is at Site B in a different
 country, reached through a tunnel.

 Site A has local internet; when it fails, all web requests need to be
 forwarded to Proxy1 through Proxy2, i.e. without changing the client
 proxy address.

 A similar setup was running for 1 month; somehow it got messed up and had
 to be reconfigured from scratch.

>>>
>>> Ah, okay this is what you want for the peering then:
>>>
>>> Proxy2:
>>>  prefer_direct on
>>>  cache_peer Proxy1 parent 8080 3130
>>>  ...
>>>
>>> Proxy1:
>>>   
>>>
>>>
>>> Note the absence of 'default originserver' on proxy2 and any mention of
>>> peering on proxy1.
>>>
>>> If you have any problems with that it will be caused by other configure
>>> options I've overlooked.
>>>
>>> Amos
>>>

 bharathvn wrote:
>
> Hi,
>
> I am trying to set up a proxy server as shown below
>
> Client ==>Sibling ==> Parent==> Internet
>
> I get an error when browsing any site via the parent server, as shown
> below
>
> The following error was encountered while trying to retrieve the URL:
> /
>
> Invalid URL
>
> Some aspect of the requested URL is incorrect.
>
> Some possible problems are:
>
> Missing or incorrect access protocol (should be http:// or similar)
>
> Missing hostname
>
> Illegal double-escape in the URL-Path
>
> Illegal character in hostname; underscores are not allowed.
>
> Your cache administrator is root.
>
> 
>
> Generated Sun, 17 May 2009 18:13:40 GMT by proxy1 (squid/3.0.STABLE13)
>
> Parent Proxy config
>
>
> http_port 8080
> cache_peer proxy2 sibling 8080 0
> hierarchy_stoplist cgi-bin ?
> acl QUERY urlpath_regex cgi-bin \?
> cache deny QUERY
> acl apache rep_header Server ^Apache
> cache_mem 100 MB
> cache_swap_low 90
> cache_swap_high 95
> access_log /var/log/squid/access.log squid
> acl manager proto cache_object
> acl localhost src 127.0.0.1/255.255.255.255
> acl to_localhost dst 127.0.0.0/8
> acl SSL_ports port 443
> acl Safe_ports port 80 # http
> acl Safe_ports port 21 # ftp
> acl Safe_ports port 443 # https
> acl Safe_ports port 70 # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535 # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> acl CONNECT method CONNECT
> acl US src b.b.b.b-b.b.b.254
> acl server src c.c.c.1-c.c.c.254
> http_access allow manager localhost
> http_access deny manager
> http_access deny !Safe_ports
> http_access deny CONNECT !SSL_ports
> http_access allow localhost
> http_access allow US
> http_access allow server
> http_access allow all
> http_reply_access allow all
> icp_access deny all
> cache_effective_user squid
> cache_effective_group squid
> icp_port 0
> coredump_dir /var/spool/squid
> refresh_pattern ^ftp: 1440 20% 10080
> refresh_pattern ^gopher: 1440 0% 1440
> refresh_pattern (cgi-bin|\?)0   0%  0
> refresh_pattern . 0 20% 4320
>
>
> Sibling Proxy config
>
> http_port 8080
> cache_peer proxy1 parent 8080 0 default originserver
> hierarchy_stoplist cgi-bin ?
> acl QUERY urlpath_regex cgi-bin \?
> cache deny QUERY
> acl apache rep_header Server ^Apache
> cache_mem 100 MB
> cache_swap_low 90
> cache_swap_high 95
> access_log /var/log/squid/access.log squid
> acl manager proto cache_object
> acl localhost src 127.0.0.1/255.255.255.255
> acl to_localhost dst 127.0.0.0/8
> acl SSL_ports port 443
> acl Safe_ports port 80 # http
> acl Safe_ports port 21 # ftp
> acl Safe_ports port 443 # https
> acl Safe_ports port 70 # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports 

Re: [squid-users] --enable-http-violations

2009-05-19 Thread Amos Jeffries
> What exactly does compiling with --enable-http-violations do? I was
> under the impression it just allowed ignore-private, ignore-no-store,
> ignore-auth, override-expire, etc. to work; however, I am starting to doubt
> that.

It enables use of all the config settings which, if changed from their
defaults, will cause your Squid to disobey HTTP and other RFC protocol
requirements. The options for caching things which should not be cached
(ignore-* and override-*) are just a few of those settings.
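
For example (illustrative only; the pattern and timings are made up):

  # at build time:
  ./configure --enable-http-violations
  # squid.conf will then accept the violation options, e.g.:
  refresh_pattern -i \.jpg$ 1440 50% 10080 override-expire ignore-no-store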

> Even removing all options on the affected refresh_patterns still
> results in certain pages not refreshing properly.

To make things operate properly, IMO it's best not to use any of the
violation settings. They are the cause of breakage more often than not.

I think you need to get a wire-level trace of the requests going to/from
Squid, with both the client and the server.

Amos




Re: [squid-users] Proxy and cache of SSL with client auth?

2009-05-19 Thread Amos Jeffries
> This may sound insane, but here goes.  I've got a file distribution
> system that relies on client certificate authentication through SSL
> (https) to authenticate clients prior to delivery of files.  Typical
> apache with ssl and client cert setup.  I have reached a situation,
> however, where it would be convenient to create a tiered system of
> caches of said files.  My thought was to use squid to do this as follows:
>
> Server stays the same - requires client cert to return a file.
>
> Squid proxy is set up on a box with a valid client cert, setting up
> sslproxy_* to point to valid client certs.  Squid is also configured
> with https to require client certs for connection to Squid (this last
> part is less important - the clients in this particular setup are
> actually on a private network that is not considered at risk).  When the
> client makes a request for a file, squid makes the request using its
> authorized cert, and then serves the file down-stream.
>
>  From my initial reading of the squid configs and documentation I could
> find, it seemed like this would be possible.  I have tried it, and it
> doesn't seem to be working.  I get the (apparently common) SSL 'CONNECT'
> error:
>
>> clientNegotiateSSL: Error negotiating SSL connection on FD 11:
>> error:1407609B:SSL routines:SSL23_GET_CLIENT_HELLO:https proxy request
>> (1/-1)
>
> Is what I'm trying to do even possible with Squid?  I'm using version
> 2.6.STABLE6 on Centos 5.2.  I'd be happy to send my squid configs if
> that'd help.  Any help would be appreciated ;-)
>
> Justin Binns
>

Are you using squid as a regular forward-proxy, or as a reverse-proxy/CDN
for this system?

Amos



Re: [squid-users] Upstream Squid to identify user

2009-05-19 Thread Amos Jeffries
> myocella wrote:
>> Greeting
>>
>> I have set up an upstream Squid proxy to receive proxy traffic from
>> other Squid servers.
>> I would like to log user access on the upstream proxy. The downstream
>> has this line:
>>
>> cache_peer  upstreamproxy.foo.com  parent  8080  7 no-query login=*:foo
>>
>> However, there is no username showing in the upstream Squid log.
>> What do I need to add into the Squid conf?
>>
>
> Your upstream Squid is not requiring authentication, so it's not being
> sent.
>
> Currently it just allows access from downstream IPs. No auth-param is
>> setup.
>>
>
> Set up an auth-param which just replies "OK".
>

http://wiki.squid-cache.org/ConfigExamples/Authenticate/LoggingOnly
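
The essence of that example is a dummy basic-auth helper that accepts any
credentials, so the browser-supplied username reaches access.log. Roughly
(the helper path is a placeholder):

  #!/bin/sh
  # fake_auth: accept every "user password" line Squid hands us
  while read line; do echo OK; done

and on the upstream proxy:

  auth_param basic program /usr/local/bin/fake_auth
  auth_param basic children 1
  acl identified proxy_auth REQUIRED
  http_access allow identified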

Amos



[squid-users] --enable-http-violations

2009-05-19 Thread Jason Spegal
What exactly does compiling with --enable-http-violations do? I was 
under the impression it just allowed ignore-private, ignore-no-store, 
ignore-auth, override-expire, etc. to work; however, I am starting to doubt
that. Even removing all options on the affected refresh_patterns still
results in certain pages not refreshing properly.


--Jason



Re: [squid-users] Reverse Proxy, multiple web servers, only one is reachable

2009-05-19 Thread Amos Jeffries
> Hi there.
>
> Currently we are running squid 2.5.STABLE3 under RHEL3. However, this
> week our ssl certificate will expire and the new certificate is a
> chained certificate, which is not supported by that version of squid.
> Also it is an old server in need of an upgrade, so we are trying to
> configure squid 2.6.STABLE21 (running under RHEL 5.3) as a reverse
> proxy, but after reading the documentation, the FAQ and many emails
> from the email lists we still can't figure out what we are doing
> wrong.
>
> - We have 4 web sites with public IPs x.y.z.47, x.y.z.48, x.y.z.49 and
> x.y.z.50.
> Each web site is hosted on a different server with Ips x.y.z.247,
> x.y.z.248, x.y.z.249 and x.y.z.250 (x.y.z.47 goes to x.y.z.247, etc)
> Our DNS server runs on the same box as squid.
>
> - x.y.z.48 is using ssl connections.
>
> - With the current configuration www.mywebsite.ca and
> www1.mywebsite.ca work, but when trying to go to the other websites we
> get to www.mywebsite.ca instead.
>
> If we remove the # from the cache_peer_domain lines then the only
> website accessible is www1.mywebsite.ca. The other websites time out
> and we get this error message:
>
> ERROR
> The requested URL could not be retrieved
>
> While trying to retrieve the URL: http://www.mywebsite.ca/
>
> The following error was encountered:
>
> * Unable to forward this request at this time.
>
> This request could not be forwarded to the origin server or to any
> parent caches. The most likely cause for this error is that:
>
> * The cache administrator does not allow this cache to make direct
> connections to origin servers, and
> * All configured parent caches are currently unreachable.
>
> Your cache administrator is root.
> Generated Tue, 19 May 2009 17:16:35 GMT by www1.mywebsite.ca
> (squid/2.6.STABLE21)
>
> - It's our understanding that squid uses /etc/squid/hosts to have the
> hostnames redefined and to get traffic to the backend servers. So if
> the client requests www.mywebsite.ca, whose DNS record is x.y.z.47,
> squid uses the hosts file to resolve www.mywebsite to x.y.z.247. Is
> this correct?

Not for reverse proxies. The destination is solely dependent on the
'address/host' value in cache_peer. If it's an IP, that is used. If it's an
FQDN, then DNS is checked on startup/reconfigure. The hosts file overrides DNS.

Your attempted squid.conf using IPs (x.y.z.247 etc) is the best way to go.

>
> - We also want to avoid people connecting to the websites using any
> Ips (either x.y.z.47, .48, etc or x.y.z.247, .248, etc)
>

see notes inline with your 2.6 config.

>
> Below you can find the configuration files. Please let me know if you
> need more information. I'd really appreciate if you could point me in
> the right direction.
>
> #Squid.conf [version 2.5.STABLE3]:
> #-
> http_port 80
> https_port x.y.z.48:443 cert=/etc/squid/certs/ww1.pem
> key=/etc/squid/certs/ww1key.pem version=1
> icp_port 0
> cache_dir null /tmp
> acl all_no_cache src 0/0
> no_cache deny all_no_cache
> #Path to the host file
> hosts_file /etc/squid/hosts
> httpd_accel_host virtual
> httpd_accel_uses_host_header on
> visible_hostname www1.mywebsite.ca
> acl all src 0.0.0.0/0.0.0.0
> acl mynet src x.y.z.0/255.255.255.0
> http_access allow all
> http_access allow mynet
> http_access deny all
>
>
> #squid.conf version 2.6.STABLE21
> #-
> acl all src 0.0.0.0/0.0.0.0
> acl manager proto cache_object
> acl localhost src 127.0.0.1/255.255.255.255
> acl to_localhost dst 127.0.0.0/8
> acl SSL_ports port 443
> acl CONNECT method CONNECT
> acl mynet src x.y.z.0/255.255.255.0

> http_access allow all
> http_access allow mynet
> http_access allow localhost
> http_access deny all
> icp_access allow all

Kill all of the above http_access and icp_access lines. They're not needed
and they prevent Squid from halting bad requests early in the process.

>
> http_port 80 accel vhost
> https_port x.y.z.48:443 cert=/etc/squid/certs/ww1.pem
> key=/etc/squid/certs/ww1key.pem version=1 accel vhost

Correct.

>
> cache_peer x.y.z.247 parent 80 0 no-query no-digest originserver
> name=www_mywebsite
> cache_peer x.y.z.248 parent 80 0 no-query no-digest originserver
> name=www1_mywebsite
> cache_peer x.y.z.249 parent 80 0 no-query no-digest originserver
> name=www_mywebsiteusa
> cache_peer x.y.z.250 parent 80 0 no-query no-digest originserver
> name=webmail

Correct.

Here is where things go askew slightly. You need some controls to branch
the requests to the right peer based on the domain wanted.

>
> #cache_peer_domain www_mywebsite www.mywebsite.ca
> #cache_peer_domain www1_mywebsite www1.mywebsite.ca
> #cache_peer_domain www_mywebsiteusa www.mywebsiteusa.com
> #cache_peer_domain webmail web.mywebsite.ca

They should work. It's the crude hammer way to do it, but simple when you
don't have sub-domain clauses (i.e. *.mywebsite.ca EXCEPT www1.mywebsite.ca
and webmail.mywebsite.ca).

If you only want www.mywebsite

Re: [squid-users] Log Request Header

2009-05-19 Thread Amos Jeffries
> Mario Remy Almeida wrote:
>> Hi I  have not enabled squidmime.
>> But logformat headers %ts.%03tu %tg %>a %rp [%>h] [%>
>
> I understand that.
>
> I am asking you to set up a log using the (defined by default) squidmime
> format, and see if that displays the information you are interested in.
>
> Try and limit the source of the problem.
>
>> Regards,
>> Remy
>>
>
> Chris
>

In order to get "NONE:// [-]" out of "%ts.%03tu %tg %>a %rp [%>h] [%4KB or >8KB URL depending on squid version).

If it can be shown that the request was correct HTTP on the wire I'm
interested.

Amos




Re: [squid-users] How to log only the MISS

2009-05-19 Thread Amos Jeffries
>
> Is it possible to log only the TCP_MISS or .*MISS
>

Not yet. It's one of the open feature requests.
Patches for 3.HEAD are welcome.
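
In the meantime a crude workaround is to filter the log outside Squid, e.g.:

  tail -f /var/log/squid/access.log | grep MISS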

Amos




Re: [squid-users] Squid suddenly crashes (Maybe a bug)

2009-05-19 Thread Amos Jeffries
>
> The main reason for using 3.1 is TPROXY, so I cannot use 3.0.
> How can I provide full info to Amos?

Bugzilla or squid-...@squid-cache.org is best for beta and RC releases.

Amos

>
>
> Jeff Pang-4 wrote:
>>
>> Omid Kosari:
>>> Anyone?
>>> This problem occurs 5 times a day (average). and each time the
>>> following
>>> message appears in cache.log
>>>
>>> assertion failed: comm.cc:2016: "!fd_table[fd].closing()"
>>>
>>
>> Because squid-3.1 is a beta version, anything can happen.
>> You may provide the full info to Amos, and roll the software version
>> back to 3.0.
>>
>> --
>> Jeff Pang
>> DingTong Technology
>> www.dtonenetworks.com
>>
>>
>
> --
> View this message in context:
> http://www.nabble.com/Squid-suddenly-crashes-%28Maybe-a-bug%29-tp23593693p23610858.html
> Sent from the Squid - Users mailing list archive at Nabble.com.
>
>




Re: [squid-users] Does Squid scale well?

2009-05-19 Thread Amos Jeffries
> Can someone please say how well Squid 3.1/tproxy scales? Would it have
> problems servicing more than 10k simultaneous HTTP requests, and pushing
> as much as 300 mbit/s of traffic? 500 mbit/s? 1 gbit/s?
>
>
> Planned hardware & setup:
>
> Dell Poweredge 6850 server QUAD Dual Core 3.4GHz 8GB
>
> Hard Drives:
>
> cache_dir will be split across
> 5x 73GB SAS 15K hard drives
>
>
> All will run on Ubuntu 9.04/testing mix w/ WCCPv1.
>
>
> Thanks in advance.
>

3.1/TPROXY uses about 2 syscalls more than 3.1/NAT. Bugger all difference
in total.

The problem is more likely to be WCCP; most of the installs I hear about
with TPROXY/WCCP are broken somehow. Not to say there aren't some working,
but I've heard fewer cries for help ending in success.

As for overall scaling: less than 10K simultaneous requests for those on
regular proxy duty. We've seen 50Mbps pipes being flooded by Squid-3 with
no problems. Above that you need to start tuning things like disks,
reducing ACLs, running multiple Squids etc. to fill the pipe.

Amos




Re: [squid-users] Transparent Squid Stalls For Up To Two Minutes

2009-05-19 Thread Amos Jeffries
> I appreciate your response. I don't believe it's a file system issue, I've
> tried troubleshooting that for several weeks.  Originally, I was using 16
> 256 (the default) as directory layout.  I've tried using ext4, reiser (my
> favorite filesystem) and now it's on btrfs.  I also have the filesystem
> mounted with noatime.  When I was using reiser, I had disabled tail
> packing as well.  As you can see, I'm using aufs, but I've also tried
> diskd.
>
> The IP tables NAT/DNAT stuff happens at my router.  See this DD-WRT wiki
> article for how it's done
> (http://www.dd-wrt.com/wiki/index.php/Transparent_Proxy), I actually wrote
> the section on multiple hosts can bypass the proxy. Either way, it's not a
> router issue.  If I set my browser to the use the proxy directly, the
> delays still happen 99% of the time.
>
> Originally, I was using dans with antivirus.  But the delays have gotten to
> be horrible.  I went back to a standard squid setup to try to resolve the
> problem.  At this point, I simply want to get squid working because a lot
> of the sites we visit continuously may benefit from caching (news sites
> with lots of graphics, etc).  Once I get this problem resolved, I'll go
> back to using dans w/ antivirus.
>
> 10.0.0.254 (the squid host) is excluded from the IP tables rules on
> DD-WRT, along with my Xbox 360, my BluRay player, my HD-DVD player and my
> DirecTV receiver.
>
> The three DNS servers specified in the squid.conf all resolve names
> properly and are open to the squid host.
>
> Thanks
> Doug Eubanks
> ad...@dougware.net
> 919-201-8750

Strange.

What is the output of "squid -v" and "squidclient mgr:info" (AKA info
cachmgr page)?

Amos

>
>   _
> From: Amos Jeffries [mailto:squ...@treenet.co.nz]
> To: ad...@dougware.net
> Cc: squid-users@squid-cache.org
> Sent: Mon, 18 May 2009 14:55:39 +
> Subject: Re: [squid-users] Transparent Squid Stalls For Up To Two Minutes
>
> Doug Eubanks wrote:
>> I'm having an intermittent squid issue. It's plagued me with CentOS 5.x,
>> Fedora 6, and now Fedora 11 (all using the RPM build that came with the
>> OS).
>>
>> My DD-WRT router forwards all of my outgoing port 80 requests to my
>> transparent proxy using IP tables. For some reason, squid will hang when
>> opening a URL for up to two minutes. It doesn't always happen and
>> sometimes restarting squid will correct the problem (for a while). The
>> system is pretty hefty 3ghz P4 with 2G of RAM with a SATA II drive. That
>> should be plenty for a small home network of about 10 clients.
>>
>> When I test DNS lookups from the host, requests are returned within less
>> than a second. I'm pretty sure that's not the problem.
>>
>> Here is my squid.conf, any input would be greatly appreciated!
>>
>> acl manager proto cache_object
>> acl localhost src 127.0.0.1/32
>> acl to_localhost dst 127.0.0.0/8
>> acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
>> acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
>> acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
>> acl SSL_ports port 443
>> acl Safe_ports port 80  # http
>> acl Safe_ports port 21  # ftp
>> acl Safe_ports port 443 # https
>> acl Safe_ports port 70  # gopher
>> acl Safe_ports port 210 # wais
>> acl Safe_ports port 1025-65535  # unregistered ports
>> acl Safe_ports port 280 # http-mgmt
>> acl Safe_ports port 488 # gss-http
>> acl Safe_ports port 591 # filemaker
>> acl Safe_ports port 777 # multiling http
>> acl CONNECT method CONNECT
>> http_access allow manager localhost
>> http_access deny manager
>> http_access allow localnet
>> http_access deny !Safe_ports
>> http_access deny CONNECT !SSL_ports
>> http_access allow localnet
>> http_access allow localhost
>> http_access deny all
>> htcp_access allow localnet
>> htcp_access deny all
>> http_port 3128 transparent
>
> Is the NAT / REDIRECT/DNAT happening on the Squid box?
> It needs to.
>
>> hierarchy_stoplist cgi-bin ?
>> cache_mem 32 MB
>> maximum_object_size_in_memory 128 KB
>> cache_replacement_policy heap LRU
>> cache_dir aufs /var/spool/squid 4096 8 16
>
> 4GB of objects under 512KB small (avg set at 64KB later),  using only an
> 8x16 inode array. You may have a FS overload problem.
>
> Also, Squid 'pulses' cache garbage collection one directory at a time.
> Very large amounts of files in any one directory can slow things down a
> lot at random times.
>
> It's generally better to increase the L1/L2 numbers from default as the
> cache gets bigger.
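
For instance (sizes illustrative), moving from the 8x16 layout above toward
the default 16x256 layout, or larger:

  cache_dir aufs /var/spool/squid 4096 16 256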
>
>> max_open_disk_fds 0
>> minimum_object_size 0 KB
>> maximum_object_size 512 KB
>> access_log /var/log/squid/access.log squid
>> refresh_pattern ^ftp:   144020% 10080
>> refresh_pattern ^gopher:14400%  1440
>> refresh_pattern (cgi-bin|\?)0   0%  0
>> refresh_pattern .   0   20% 4320
>> visible_hostname doug-linux.dougware.net
>> unique_hostname doug-linux.

Re: [squid-users] New Squid3 Stable 13 Setup

2009-05-19 Thread Amos Jeffries
>
> Thanks Amos
>
> It's working now, but I see a small issue when I do a query on a search
> engine like google.
>
> I can directly hit any site, but when I do a search I get:
>
> The system returned: (111) Connection refused
>
> The remote host or network may be down. Please try the request again.
>

Any other info? Is it actually going direct? What does the log say? etc. etc

Amos

>
> Thanks
>
> Amos Jeffries-2 wrote:
>>
>>>
>>> Hi Amos,
>>>
>>> Thanks for responding to my message.
>>>
>>> I am trying to achieve the setup described below.
>>>
>>> Site A has Proxy2, and another proxy (Proxy1) is at Site B in a
>>> different country, reached through a tunnel.
>>>
>>> Site A has local internet; when it fails, all web requests need to be
>>> forwarded to Proxy1 through Proxy2, i.e. without changing the client
>>> proxy address.
>>>
>>> A similar setup was running for 1 month; somehow it got messed up and
>>> had to be reconfigured from scratch.
>>>
>>
>> Ah, okay this is what you want for the peering then:
>>
>> Proxy2:
>>  prefer_direct on
>>  cache_peer Proxy1 parent 8080 3130
>>  ...
>>
>> Proxy1:
>>   
>>
>>
>> Note the absence of 'default originserver' on proxy2 and any mention of
>> peering on proxy1.
>>
>> If you have any problems with that it will be caused by other configure
>> options I've overlooked.
>>
>> Amos
>>
>>>
>>> bharathvn wrote:

 Hi,

 I am trying to set up a proxy server as shown below

 Client ==>Sibling ==> Parent==> Internet

 I get an error when browsing any site via the parent server, as shown
 below

 The following error was encountered while trying to retrieve the URL:
 /

 Invalid URL

 Some aspect of the requested URL is incorrect.

 Some possible problems are:

 Missing or incorrect access protocol (should be http:// or similar)

 Missing hostname

 Illegal double-escape in the URL-Path

 Illegal character in hostname; underscores are not allowed.

 Your cache administrator is root.

 

 Generated Sun, 17 May 2009 18:13:40 GMT by proxy1 (squid/3.0.STABLE13)

 Parent Proxy config


 http_port 8080
 cache_peer proxy2 sibling 8080 0
 hierarchy_stoplist cgi-bin ?
 acl QUERY urlpath_regex cgi-bin \?
 cache deny QUERY
 acl apache rep_header Server ^Apache
 cache_mem 100 MB
 cache_swap_low 90
 cache_swap_high 95
 access_log /var/log/squid/access.log squid
 acl manager proto cache_object
 acl localhost src 127.0.0.1/255.255.255.255
 acl to_localhost dst 127.0.0.0/8
 acl SSL_ports port 443
 acl Safe_ports port 80 # http
 acl Safe_ports port 21 # ftp
 acl Safe_ports port 443 # https
 acl Safe_ports port 70 # gopher
 acl Safe_ports port 210 # wais
 acl Safe_ports port 1025-65535 # unregistered ports
 acl Safe_ports port 280 # http-mgmt
 acl Safe_ports port 488 # gss-http
 acl Safe_ports port 591 # filemaker
 acl Safe_ports port 777 # multiling http
 acl CONNECT method CONNECT
 acl US src b.b.b.b-b.b.b.254
 acl server src c.c.c.1-c.c.c.254
 http_access allow manager localhost
 http_access deny manager
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 http_access allow localhost
 http_access allow US
 http_access allow server
 http_access allow all
 http_reply_access allow all
 icp_access deny all
 cache_effective_user squid
 cache_effective_group squid
 icp_port 0
 coredump_dir /var/spool/squid
 refresh_pattern ^ftp: 1440 20% 10080
 refresh_pattern ^gopher: 1440 0% 1440
 refresh_pattern (cgi-bin|\?)0   0%  0
 refresh_pattern . 0 20% 4320


 Sibling Proxy config

 http_port 8080
 cache_peer proxy1 parent 8080 0 default originserver
 hierarchy_stoplist cgi-bin ?
 acl QUERY urlpath_regex cgi-bin \?
 cache deny QUERY
 acl apache rep_header Server ^Apache
 cache_mem 100 MB
 cache_swap_low 90
 cache_swap_high 95
 access_log /var/log/squid/access.log squid
 acl manager proto cache_object
 acl localhost src 127.0.0.1/255.255.255.255
 acl to_localhost dst 127.0.0.0/8
 acl SSL_ports port 443
 acl Safe_ports port 80 # http
 acl Safe_ports port 21 # ftp
 acl Safe_ports port 443 # https
 acl Safe_ports port 70 # gopher
 acl Safe_ports port 210 # wais
 acl Safe_ports port 1025-65535 # unregistered ports
 acl Safe_ports port 280 # http-mgmt
 acl Safe_ports port 488 # gss-http
 acl Safe_ports port 591 # filemaker
 acl Safe_ports port 777 # multiling http
 acl CONNECT method CONNECT
 acl BLR src a.a.a.1-a.a.a.254
 acl US src b.b.b.b-b.b.b.254
 acl server src c.c.c.1-c.c.c.254
 acl TAC src d.d.d.1-d.d.d.254
 acl all src 0.0.0.0/255.0.0.0
 http_access allow manager localhost
 htt

Re: [squid-users] Split caching by size

2009-05-19 Thread Jason Spegal
Just tested and verified this. At least in Squid 3.0 minimum_object_size 
affects both memory and disk caches. Anyone know if this is true in 3.1 
as well? Any thoughts as to how to split it? I may be wrong and likely 
am but I recall there was separate minimum_object_size for each cache at 
one time.


Chris Robertson wrote:

Jason Spegal wrote:
How do I configure squid to only cache small objects, say less than 
4mb in memory cache,


http://www.squid-cache.org/Doc/config/maximum_object_size_in_memory/


and only objects larger than 4mb to the disk?


http://www.squid-cache.org/Doc/config/minimum_object_size/

I want to optimize the cache based on object size. The reasoning is 
the small stuff will change often and be accessed the most while the 
larger items that tie up bandwidth will not change as often and I can 
cache more aggressively. Also this way I minimize disk io and lag. I 
am using squid 3.0. While I can see this being done with the disk 
cache I am not certain the memory cache can be configured like this 
anymore as the options seem to be missing.


Thanks,
  Jason


Chris




Re: [squid-users] Reverse Proxy, multiple web servers, only one is reachable

2009-05-19 Thread Chris Robertson

Joaquín Puga wrote:

Hi there.

Currently we are running squid 2.5.STABLE3 under RHEL3. However, this
week our ssl certificate will expire and the new certificate is a
chained certificate, which is not supported by that version of squid.
Also it is an old server in need of an upgrade, so we are trying to
configure squid 2.6.STABLE21 (running under RHEL 5.3) as a reverse
proxy, but after reading the documentation, the FAQ and many emails
from the email lists we still can't figure out what we are doing
wrong.

- We have 4 web sites with public IPs x.y.z.47, x.y.z.48, x.y.z.49 and
x.y.z.50.
Each web site is hosted on a different server with Ips x.y.z.247,
x.y.z.248, x.y.z.249 and x.y.z.250 (x.y.z.47 goes to x.y.z.247, etc)
Our DNS server runs on the same box as squid.

- x.y.z.48 is using ssl connections.

- With the current configuration www.mywebsite.ca and
www1.mywebsite.ca work, but when trying to go to the other websites we
get to www.mywebsite.ca instead.

If we remove the # from the cache_peer_domain lines then the only
website accessible is www1.mywebsite.ca. The other websites time out
and we get this error message:

ERROR
The requested URL could not be retrieved

While trying to retrieve the URL: http://www.mywebsite.ca/

The following error was encountered:

* Unable to forward this request at this time.

This request could not be forwarded to the origin server or to any
parent caches. The most likely cause for this error is that:

* The cache administrator does not allow this cache to make direct
connections to origin servers, and
* All configured parent caches are currently unreachable.

Your cache administrator is root.
Generated Tue, 19 May 2009 17:16:35 GMT by www1.mywebsite.ca
(squid/2.6.STABLE21)

- It's our understanding that squid uses /etc/squid/hosts to have the
hostnames redefined and to get traffic to the backend servers.


Hostnames, yes.  But not cache_peer names.


 So if
the client requests www.mywebsite.ca, whose DNS record is x.y.z.47,
squid uses the hosts file to resolve www.mywebsite to x.y.z.247. Is
this correct?
  


If you have an entry like...

cache_peer www.mywebsite parent 80 0 no-query originserver

...then yes, the host file would  be used.  But you are using the IP in 
your cache_peer lines.  There is nothing to resolve.



- We also want to avoid people connecting to the websites using any
Ips (either x.y.z.47, .48, etc or x.y.z.247, .248, etc)
  


Then firewall off the origin servers so they can't be accessed directly, 
and set up ACLs that prevent using IP addresses in the URL.




Below you can find the configuration files. Please let me know if you
need more information. I'd really appreciate if you could point me in
the right direction.

#Squid.conf [version 2.5.STABLE3]:
#-
http_port 80
https_port x.y.z.48:443 cert=/etc/squid/certs/ww1.pem
key=/etc/squid/certs/ww1key.pem version=1
icp_port 0
cache_dir null /tmp
acl all_no_cache src 0/0
no_cache deny all_no_cache
#Path to the host file
hosts_file /etc/squid/hosts
httpd_accel_host virtual
httpd_accel_uses_host_header on
visible_hostname www1.mywebsite.ca
acl all src 0.0.0.0/0.0.0.0
acl mynet src x.y.z.0/255.255.255.0
http_access allow all
http_access allow mynet
http_access deny all


#squid.conf version 2.6.STABLE21
#-
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl CONNECT method CONNECT
acl mynet src x.y.z.0/255.255.255.0
http_access allow all
http_access allow mynet
http_access allow localhost
http_access deny all
icp_access allow all

http_port 80 accel vhost
https_port x.y.z.48:443 cert=/etc/squid/certs/ww1.pem
key=/etc/squid/certs/ww1key.pem version=1 accel vhost

cache_peer x.y.z.247 parent 80 0 no-query no-digest originserver
name=www_mywebsite
  


You should probably add "forceddomain=www.mywebsite.ca".


cache_peer x.y.z.248 parent 80 0 no-query no-digest originserver
name=www1_mywebsite
  


Same for the other cache_peers.  Define the forceddomain.


cache_peer x.y.z.249 parent 80 0 no-query no-digest originserver
name=www_mywebsiteusa
cache_peer x.y.z.250 parent 80 0 no-query no-digest originserver name=webmail

#cache_peer_domain www_mywebsite www.mywebsite.ca
#cache_peer_domain www1_mywebsite www1.mywebsite.ca
#cache_peer_domain www_mywebsiteusa www.mywebsiteusa.com
#cache_peer_domain webmail web.mywebsite.ca
  


Since you have a separate front end IP per back end server...

acl www_mywebsite_ip myip x.y.z.47
acl www1_mywebsite_ip myip x.y.z.48
acl www_mywebsiteusa_ip myip x.y.z.49
acl webmail_ip myip x.y.z.50

cache_peer_access allow www_mywebsite www_mywebsite_ip
cache_peer_access deny www_mywebsite
cache_peer_access allow www1_mywebsite www1_mywebsite_ip
cache_peer_access deny www1_mywebsite
cache_peer_access allow www_mywebsiteusa www_mywebsiteusa_ip
cache_peer_access deny www_mywebsiteusa
cache_peer_access allow webmail webmail_ip
cache_peer_access deny webmail

Re: [squid-users] http request not going through proxy ?

2009-05-19 Thread Chris Robertson

david vauquelin wrote:

Hello all
 
I've just installed squid 3.0 on Damn Small Linux.  The install was
successful and the configuration too, as when I start squid with the -z
option I don't have any errors.
Then I open my browser and can still reach the internet without configuring
anything in my browser.  I don't think this is normal behavior after
installing and starting a squid proxy.


Really?


  I don't know much about networking, so please help me with this.
  


Oh.


Thank you.


Right...  Quick networking primer.  HTTP 
(http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol) rides on top 
of TCP (http://en.wikipedia.org/wiki/Transmission_Control_Protocol).  
When your browser makes a HTTP request of information hosted by a server 
not on your network, your operating system sends the request to the 
gateway.  Installing a proxy server (on that gateway, on the client 
server, or on a machine separate from both) doesn't change anything by 
itself, but might allow you to have your gateway redirect the traffic 
(interception) and send it to the proxy server (which then makes the 
HTTP request on behalf of your browser, or services the request from its 
cache).
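
A typical interception rule on a Linux gateway looks roughly like this
(the interface and ports are assumptions):

  iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3128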


The best option is to modify your browser such that it contacts the proxy 
server for its HTTP requests (instead of making HTTP requests directly), 
as that doesn't add an unknown "man-in-the-middle" to the HTTP 
connection.  That is a browser specific setting, and there are a number 
of sites with information on how to go about this.


Chris


Re: [squid-users] Squid suddenly crashes (Maybe a bug)

2009-05-19 Thread Chris Robertson

Omid Kosari wrote:

The main reason for using 3.1 is TPROXY, so I cannot use 3.0.
How can I provide full info to Amos?


http://wiki.squid-cache.org/SquidFaq/TroubleShooting#head-7067fc0034ce967e67911becaabb8c95a34d576d

Chris


Re: [squid-users] Upstream Squid to identify user

2009-05-19 Thread Chris Robertson

myocella wrote:

Greeting

I have set up an upstream Squid proxy to receive proxy traffic from
other Squid servers.
I would like to log user access on the upstream proxy. The downstream
has this line:

cache_peer  upstreamproxy.foo.com  parent  8080  7 no-query login=*:foo

However, there is no username showing in the upstream Squid log.
What do I need to add into the Squid conf?
  


Your upstream Squid is not requiring authentication, so it's not being sent.


Currently it just allows access from downstream IPs. No auth-param is set up.
  


Set up an auth-param which just replies "OK".



cheers,

myocella
  


Chris


Re: [squid-users] TCP_MISS/503 and icp

2009-05-19 Thread Chris Robertson

Amos Jeffries wrote:

Hi,

I have some hosts that use one squid-1 server that has a squid-2 parent:

I mean squid-1 has:

cache_peer parent.domain parent 8080 3130


But some sites are inaccessible, especially those sites with a URL containing a
"?"

for example:

 1242674301.146 104 10.128.255.189 TCP_MISS/503 1415 GET
http://ar.yahoo.com/? - DIRECT/209.191.93.55 text/html




You will get a better trace of these without stripping the query string.

http://www.squid-cache.org/Doc/config/strip_query_terms/
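
That is, temporarily, while debugging (query terms can carry sensitive
data, so restore the default afterwards):

  strip_query_terms off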

  

and browser shows:

Error
The requested URL could not be retrieved

While trying to retrieve the URL http://ar.yahoo.com/?

The following error was encountered:

*Connection to 209.191.93.55

The system returned:

(111) Connection refused


Also, On the squid-1 iptables are doing REDIRECT.

Please could you tell me what's wrong?



By default dynamic pages cannot be trusted through peers. Squid up until
very recently added no-cache to peer requests (IIRC), which screws up the
bandwidth savings. So while it's safe enough to turn on caching of dynamic
pages it's still a sticky issue if they pass through peers.

http://www.squid-cache.org/Doc/config/hierarchy_stoplist/
  


Also see 
http://wiki.squid-cache.org/SquidFaq/ConfiguringSquid#head-f7c4c667d4154ec5a9619044ef7d8ab94dfda39b



Your trace shows Squid-1 is not using the squid-2 as a source; it's just
trying to go there DIRECTly. And the source is actively doing a TCP level
reset/denial.

Amos
  


Chris


Re: [squid-users] 3 ISPs: Routing problem

2009-05-19 Thread Chris Robertson

RSCL Mumbai wrote:

On Sun, May 17, 2009 at 11:37 AM, Amos Jeffries  wrote:
  

RSCL Mumbai wrote:

I tried " tcp_outgoing_address " by adding the following to squid.conf

acl ip1 myip 10.0.0.120
acl ip2 myip 10.0.0.121
acl ip3 myip 10.0.0.122
tcp_outgoing_address 10.0.0.120 ip1
tcp_outgoing_address 10.0.0.121 ip2
tcp_outgoing_address 10.0.0.122 ip3

Restarted squid, but no help.

Pls help how I can get the route rules to work.

Simple requirement:
If packets comes from src=10.0.0.120, forward it via ISP-1
If packets comes from src=10.0.0.121, forward it via ISP-2
If packets comes from src=10.0.0.122, forward it via ISP-3
And so forth.

Thx in advance.
Vai
  

To prevent the first (default) one being used, you may need to do:

 tcp_outgoing_address 10.0.0.120 ip1 !ip2 !ip3
 tcp_outgoing_address 10.0.0.121 ip2 !ip1 !ip3
 tcp_outgoing_address 10.0.0.122 ip3 !ip1 !ip2




I do not have 5 real interfaces for 5 ISPs.
And I believe virtual interfaces will not work in this scenario.
  


Works for me (Squid 2.7, Linux kernel 2.6.9+, one physical interface, 
two IPs).  Be sure to set "server_persistent_connections off" in your 
squid.conf.
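
Putting the pieces from this thread together, a sketch:

  acl ip1 myip 10.0.0.120
  acl ip2 myip 10.0.0.121
  acl ip3 myip 10.0.0.122
  tcp_outgoing_address 10.0.0.120 ip1 !ip2 !ip3
  tcp_outgoing_address 10.0.0.121 ip2 !ip1 !ip3
  tcp_outgoing_address 10.0.0.122 ip3 !ip1 !ip2
  server_persistent_connections off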



Any other option pls ??

Thx & regards,
Vai
  


Chris


RE: [squid-users] http request not going through proxy ?

2009-05-19 Thread michael hiatt

Taking a look in the cache.log file should be a good start for errors (if
any); otherwise, perhaps check that the value of http_port in the config
file is set appropriately on the client workstation.


Regards,
Michael


> From: david_06...@hotmail.fr
> To: squid-users@squid-cache.org
> Date: Tue, 19 May 2009 19:58:51 +
> Subject: [squid-users] http request not going through proxy ?
>
>
> Hello all
>
> I've just installed squid 3.0 on damn small linux. The install was success=
> ful and the configuration too as when i start squid with -z option i dont h=
> ave any errors.
> Then i open my browser and still can reach the internet without configuring=
> anything in my browser. I dont think this is normal behavior after instal=
> ling and starting squid proxy. I dont know much about networking so please=
> help me with this.
> Thank you.

Re: [squid-users] httpReadReply: Excess data from GET - Corrupt download

2009-05-19 Thread Chris Robertson

Emanuel dos Reis Rodrigues wrote:

Hello ...


I have a problem with the migration from squid 2.6 to 2.7 ...

I have one PHP application that makes reports in PDF from a php script
using the fpdf lib ...


When accessing script.php, it returns a download of a pdf file ...
(It is always the same file name, doc.pdf)


The behavior is strange ... sometimes the downloaded file is corrupt
... and sometimes it is OK ...



It always displays this, whether my attempt succeeds or not:


httpReadReply: Excess data from "GET   http://XXX..COM/anex3.php


I know that ... this message is because the data is more than the
header length information says ...


Fix the script.  It's better to give no "Length" header than an 
inaccurate one.  Otherwise, you should be able to use header_access to 
deny the Length header from the site hosting this PDF creator (or even 
the particular URL).
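
A sketch of the latter, assuming the Squid 2.6/2.7 header_access syntax
(the ACL name and domain are made up):

  acl broken_pdf dstdomain .example.com
  header_access Content-Length deny broken_pdf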



OK? More sites do this ...

I use Debian 5.0 with squid 2.7; on Debian 4 with squid 2.6 it works without
problems ...


regards,


Emanuel



Chris


Re: [squid-users] Split caching by size

2009-05-19 Thread Chris Robertson

Jason Spegal wrote:
How do I configure squid to only cache small objects, say less than 
4mb in memory cache,


http://www.squid-cache.org/Doc/config/maximum_object_size_in_memory/


and only objects larger than 4mb to the disk?


http://www.squid-cache.org/Doc/config/minimum_object_size/

I want to optimize the cache based on object size. The reasoning is 
the small stuff will change often and be accessed the most while the 
larger items that tie up bandwidth will not change as often and I can 
cache more aggressively. Also this way I minimize disk io and lag. I 
am using squid 3.0. While I can see this being done with the disk 
cache I am not certain the memory cache can be configured like this 
anymore as the options seem to be missing.


Thanks,
  Jason


Chris


Re: [squid-users] tcp_outgoing_address Not working

2009-05-19 Thread Chris Robertson

RSCL Mumbai wrote:

Hi,

I have setup "tcp_outgoing_address" as follows:

---
acl ip1 myip 10.0.0.120
acl ip2 myip 10.0.0.121
acl ip3 myip 10.0.0.122
tcp_outgoing_address 10.0.0.120 ip1
tcp_outgoing_address 10.0.0.121 ip2
tcp_outgoing_address 10.0.0.122 ip3
--

I have iproute2 rules which will change the g/w IP for the packets based
on the above rules.
Example: if SRC=10.0.0.120 then G/w=10.0.0.1 etc


tcp_outgoing_address does not seem to be working;


Have you set "server_persistent_connections off"?


 How can I verify that the SOURCE_IP is maintained?
  


The manager interface will show IPs and port (both client and server) 
for current client connections.



ELSE:
Where else should I check to learn why the SRC IP is not maintained.

Please help.

Thx
Vai
  


Chris


Re: [squid-users] Forwarding to different ports

2009-05-19 Thread Chris Robertson

edson wrote:


Matus UHLAR - fantomas wrote:
  

On 14.05.09 05:17, edson wrote:


I'm using squid as an internal proxy, and need to forward HTTP/HTTPS/FTP
to different ports to another proxy.

Is this possible?

How do I configure this in squid? Can't see that it is possible to do
this
with the cache_peer configuration.
  
Proxy.pac is not an option...
  

do you want _clients_ to use different proxies?

without proxy autoconfiguration, you can do HTTP interception with
firewall/SQUID, FTP interception with firewall/frox (a transparent FTP
proxy),
but you can not intercept HTTPS.
--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Spam = (S)tupid (P)eople's (A)dvertising (M)ethod





Hi, I actually found an "answer" here:
http://www.nabble.com/Forwarding-HTTP-and-HTTPS-Traffic-to-an-Upstream-Proxy-using-Cache_Peer-on-separate-ports-td15598777.html

I'm also using Finjan...

Difference is that the squid version I'm running is 2.5 (centos4), so the
name= option at the cache_peer didn't seem to work.


Right.


 Is this a squid 2.6 thing, or am I doing something wrong in the configuration?
  


The name option to the cache_peer directive (as well as a whole host of 
other changes) was introduced in 2.6.  If you are unwilling (or unable) 
to update your Squid to a currently supported version, you can add
entries to your hosts file (or DNS A records) pointing to the same IP.
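
A sketch of that workaround for Squid 2.5 (the names and IP are made up):

  # /etc/hosts: two names for the one upstream proxy
  192.0.2.10 upstream-http upstream-https

  # squid.conf: two cache_peer lines become possible without name=
  cache_peer upstream-http parent 8080 0 no-query
  cache_peer upstream-https parent 8443 0 no-query
  acl Ssl_req method CONNECT
  cache_peer_access upstream-https allow Ssl_req
  cache_peer_access upstream-http deny Ssl_req
  never_direct allow all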




- Edson
  


Chris


[squid-users] Proxy and cache of SSL with client auth?

2009-05-19 Thread Justin Binns
This may sound insane, but here goes.  I've got a file distribution 
system that relies on client certificate authentication through SSL 
(https) to authenticate clients prior to delivery of files.  Typical 
apache with ssl and client cert setup.  I have reached a situation, 
however, where it would be convenient to create a tiered system of 
caches of said files.  My thought was to use squid to do this as follows:


Server stays the same - requires client cert to return a file.

Squid proxy is set up on a box with a valid client cert, setting up 
sslproxy_* to point to valid client certs.  Squid is also configured 
with https to require client certs for connection to Squid (this last 
part is less important - the clients in this particular setup are 
actually on a private network that is not considered at risk).  When the 
client makes a request for a file, squid makes the request using its 
authorized cert, and then serves the file down-stream.
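
A sketch of the directives being described (paths and port are placeholders;
sslproxy_* supplies Squid's own client certificate for the upstream TLS
connection, while the https_port clientca= option demands one from clients):

  sslproxy_client_certificate /etc/squid/certs/client.pem
  sslproxy_client_key /etc/squid/certs/client.key
  https_port 3129 cert=/etc/squid/certs/server.pem key=/etc/squid/certs/server.key clientca=/etc/squid/certs/ca.pem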


From my initial reading of the squid configs and documentation I could 
find, it seemed like this would be possible.  I have tried it, and it 
doesn't seem to be working.  I get the (apparently common) SSL 'CONNECT' 
error:



clientNegotiateSSL: Error negotiating SSL connection on FD 11: 
error:1407609B:SSL routines:SSL23_GET_CLIENT_HELLO:https proxy request (1/-1)


Is what I'm trying to do even possible with Squid?  I'm using version 
2.6.STABLE6 on Centos 5.2.  I'd be happy to send my squid configs if 
that'd help.  Any help would be appreciated ;-)


Justin Binns


[squid-users] Reverse Proxy, multiple web servers, only one is reachable

2009-05-19 Thread Joaquín Puga
Hi there.

Currently we are running squid 2.5.STABLE3 under RHEL3. However, this
week our ssl certificate will expire and the new certificate is a
chained certificate, which is not supported by that version of squid.
Also it is an old server in need of an upgrade, so we are trying to
configure squid 2.6.STABLE21 (running under RHEL 5.3) as a reverse
proxy, but after reading the documentation, the FAQ and many emails
from the email lists we still can't figure out what we are doing
wrong.

- We have 4 web sites with public IPs x.y.z.47, x.y.z.48, x.y.z.49 and
x.y.z.50.
Each web site is hosted on a different server with Ips x.y.z.247,
x.y.z.248, x.y.z.249 and x.y.z.250 (x.y.z.47 goes to x.y.z.247, etc)
Our DNS server runs on the same box as squid.

- x.y.z.48 is using ssl connections.

- With the current configuration www.mywebsite.ca and
www1.mywebsite.ca work, but when trying to go to the other websites we
get to www.mywebsite.ca instead.

If we remove the # from the cache_peer_domain lines then the only
website accessible is www1.mywebsite.ca. The other websites time out
and we get this error message:

ERROR
The requested URL could not be retrieved

While trying to retrieve the URL: http://www.mywebsite.ca/

The following error was encountered:

* Unable to forward this request at this time.

This request could not be forwarded to the origin server or to any
parent caches. The most likely cause for this error is that:

* The cache administrator does not allow this cache to make direct
connections to origin servers, and
* All configured parent caches are currently unreachable.

Your cache administrator is root.
Generated Tue, 19 May 2009 17:16:35 GMT by www1.mywebsite.ca
(squid/2.6.STABLE21)

- It's our understanding that squid uses /etc/squid/hosts to have the
hostnames redefined and to get traffic to the backend servers. So if
the client requests www.mywebsite.ca, whose DNS record is x.y.z.47,
squid uses the hosts file to resolve www.mywebsite to x.y.z.247. Is
this correct?

- We also want to avoid people connecting to the websites using any
Ips (either x.y.z.47, .48, etc or x.y.z.247, .248, etc)


Below you can find the configuration files. Please let me know if you
need more information. I'd really appreciate if you could point me in
the right direction.

#Squid.conf [version 2.5.STABLE3]:
#-
http_port 80
https_port x.y.z.48:443 cert=/etc/squid/certs/ww1.pem
key=/etc/squid/certs/ww1key.pem version=1
icp_port 0
cache_dir null /tmp
acl all_no_cache src 0/0
no_cache deny all_no_cache
#Path to the host file
hosts_file /etc/squid/hosts
httpd_accel_host virtual
httpd_accel_uses_host_header on
visible_hostname www1.mywebsite.ca
acl all src 0.0.0.0/0.0.0.0
acl mynet src x.y.z.0/255.255.255.0
http_access allow all
http_access allow mynet
http_access deny all


#squid.conf version 2.6.STABLE21
#-
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl CONNECT method CONNECT
acl mynet src x.y.z.0/255.255.255.0
http_access allow all
http_access allow mynet
http_access allow localhost
http_access deny all
icp_access allow all

http_port 80 accel vhost
https_port x.y.z.48:443 cert=/etc/squid/certs/ww1.pem
key=/etc/squid/certs/ww1key.pem version=1 accel vhost

cache_peer x.y.z.247 parent 80 0 no-query no-digest originserver
name=www_mywebsite
cache_peer x.y.z.248 parent 80 0 no-query no-digest originserver
name=www1_mywebsite
cache_peer x.y.z.249 parent 80 0 no-query no-digest originserver
name=www_mywebsiteusa
cache_peer x.y.z.250 parent 80 0 no-query no-digest originserver name=webmail

#cache_peer_domain www_mywebsite www.mywebsite.ca
#cache_peer_domain www1_mywebsite www1.mywebsite.ca
#cache_peer_domain www_mywebsiteusa www.mywebsiteusa.com
#cache_peer_domain webmail web.mywebsite.ca

#acl acl_www_mywebsite dstdomain www.mywebsite.ca
#acl acl_www1_mywebsite dstdomain www1.mywebsite.ca
#acl acl_www_mywebsiteusa dstdomain www.mywebsiteusa.com
#acl acl_webmail dstdomain webmail.mywebsite.ca

hierarchy_stoplist cgi-bin ?
cache_dir null /tmp
access_log /var/log/squid/access.log squid
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern .   0   20% 4320
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
visible_hostname www1.mywebsite.ca
hosts_file /etc/squid/hosts
coredump_dir /var/spool/squid


#/etc/squid/hosts
---
x.y.z.247   www.mywebsite.ca
x.y.z.248   www1.mywebsite.ca
x.y.z.249   www.mywebsiteusa.com
x.y.x.250   webmail.mywebsite.ca

Thanks a lot.

Joaquin Puga.


Re: [squid-users] Log Request Header

2009-05-19 Thread Chris Robertson

Mario Remy Almeida wrote:

Hi I  have not enabled squidmime.
But logformat headers %ts.%03tu %tg %>a %rp [%>h] [%  


I understand that.

I am asking you to set up a log using the (defined by default) squidmime 
format, and see if that displays the information you are interested in.


Try and limit the source of the problem.


Regards,
Remy
  


Chris


[squid-users] http request not going through proxy ?

2009-05-19 Thread david vauquelin

Hello all
 
I've just installed squid 3.0 on Damn Small Linux.  The install was
successful and the configuration too, as when I start squid with the -z
option I don't have any errors.
Then I open my browser and can still reach the internet without configuring
anything in my browser.  I don't think this is normal behavior after
installing and starting a squid proxy.  I don't know much about networking,
so please help me with this.
Thank you.

Re: [squid-users] CISCO + WCCP Stopping forward packets

2009-05-19 Thread Alex Montoanelli
Hello all. It's me again.

Last week I updated my Cisco IOS to a Release T, but the problem continued.

Then I decided to go back to an old IOS - 12.4(23) - and update my Squid to
branch 3, more exactly version 3.15.

The problem where it stops forwarding packets apparently disappeared.

Or rather, the problem continues, but squid registers again, and
traffic does not stop.

So I believe that the Cisco is not the problem; it is the Squid.

Comparing the wccp2.c files between the 2 versions, I see a lot of changes.

In short, my current situation is:

I have 4 Squids registering on a Cisco 2811.

From time to time, the Squids deregister and then register again.
It is working, but I believe this is not the correct way.

Regards.

Alex Montoanelli
Administração e Gerência de Redes
Unetvale Conectividade
+55 48 3263 8700


On Fri, May 8, 2009 at 12:42 PM, Ritter, Nicholas
 wrote:
>
> My experience has been, and my local cisco field engineers were the ones who 
> told me this, that you should always use the T train of IOS releases.
>
> -Original Message-
> From: alexmontoane...@gmail.com [mailto:alexmontoane...@gmail.com] On Behalf 
> Of Alex Montoanelli
> Sent: Friday, May 08, 2009 8:29 AM
> To: squid-users
> Subject: Re: [squid-users] CISCO + WCCP Stopping forward packets
>
> Hi all.
>
> This problem appeared when I started to use more than one Squid to
> register on wccp/cisco.
>
> In the past, when I used just one squid, this was not the case.
>
> Browsing the Web site cisco, I found this on Cisco IOS Changelog:
>
> http://www.cisco.com/en/US/docs/ios/12_4/release/notes/124MCAVS.html#wp280492
> -
> Resolved Caveats-Cisco IOS Release 12.4(21)
> This section describes possibly unexpected behavior by Cisco IOS
> Release 12.4(21). All the caveats listed in this section are resolved
> in Cisco IOS Release 12.4(21).
> *
> CSCsm12247
> Symptoms: A Cisco IOS router configured for WCCP may stop redirecting
> traffic following a change in topology.
> Conditions: The router must be configured for WCCP redirection using
> the hash assignment method. When there is only a single appliance in
> the service group, the loss of hash assignment details is permanent.
> However with multiple appliances in the group, the loss of assignment
> information is transitory; the router soon recovers.
> Workaround: To recover the assignment details, the WCCP configuration
> needs to be removed and re-added to the router. Use the no ip wccp
> service command followed by ip wccp service args command.
> Additional Information: The changes address also situation where some
> wccp clients are sending modified weight field in the wccp message and
> this way create a topology change situation.
> --
>
> I upgraded to IOS 12.4.(23), but problems remain.
>
> What do you think about migrating to the IOS T release?
>
> Is anyone else using more than one Squid registered on the same router?
>
> Regards
>
> Alex
>
>
> On Mon, May 4, 2009 at 9:08 PM, Ritter, Nicholas
>  wrote:
> > Yup... looks like an IOS related problem... try a different release of IOS.
> >
> >
> > -Original Message-
> > From: alexmontoane...@gmail.com on behalf of Alex Montoanelli
> > Sent: Mon 5/4/2009 4:00 PM
> > To: squid-users
> > Subject: Re: [squid-users] CISCO + WCCP Stopping forward packets
> >
> > Hi, after a day of working fine, the problem appeared.
> >
> > I see the Here_I_Am and I_See_You packets between Cisco and Squid; below
> > are the logs of both.
> >
> > I have 4 instances of Squid running on the same machine. I just
> > shut down 3 of them and started them again, and every one
> > went back to normal. The fourth instance returned to normal without any touch.
> >
> >
> > The *** mark is the beginning of the trouble.
> >
> > --CISCO
> > May  4 17:21:32 cliente-1-254.unetvale.com.br 240185: 240210: *May  4
> > 21:23:36: WCCP-PKT:S00: Sending I_See_You packet to 200.193.10.140 w/
> > rcv_id 00091ACD
> > May  4 17:21:37 cliente-1-254.unetvale.com.br 240188: 240213: *May  4
> > 21:23:41: WCCP-PKT:S00: Received valid Here_I_Am packet from
> > 200.193.10.141 w/rcv_id 00091ACB
> > May  4 17:21:37 cliente-1-254.unetvale.com.br 240189: 240214: *May  4
> > 21:23:41: WCCP-PKT:S00: Sending I_See_You packet to 200.193.10.141 w/
> > rcv_id 00091ACF
> > May  4 17:21:41 cliente-1-254.unetvale.com.br 240190: 240215: *May  4
> > 21:23:44: WCCP-PKT:S00: Received valid Here_I_Am packet from
> > 200.193.10.143 w/rcv_id 00091ACC
> > May  4 17:21:41 cliente-1-254.unetvale.com.br 240191: 240216: *May  4
> > 21:23:44: WCCP-PKT:S00: Sending I_See_You packet to 200.193.10.143 w/
> > rcv_id 00091AD0
> > May  4 17:21:42 cliente-1-254.unetvale.com.br 240192: 240217: *May  4
> > 21:23:46: WCCP-PKT:S00: Received valid Here_I_Am packet from
> > 200.193.10.140 w/rcv_id 00091ACD
> > May  4 17:21:42 cliente-1-254.unetvale.com.br 240193: 240218: *May  4
> > 21:23:46: WCCP-PKT:S00: Sending I_See_You packet to 200.193.10.140 w/
> > rcv_id 00091AD1
> > May  4 17:22:31 cliente-1-254.unetvale.com.br 240244: 240269: *May  4
> 

Re: [squid-users] Is it true that even threaded Squid can't benefit from SMP systems?

2009-05-19 Thread Chris Woodfield
A couple lessons learned from my end, both in my own experience and  
picked up from various squid-users threads...


I've said this before, but never underestimate the value of kernel  
page cache. If you need to scale the box, put in as much RAM as you  
can afford.


Also, as has been said before, squid + RAID = PAIN (particularly  
RAID5). Performance will be much better if you can set up multiple  
physical disks under separate cache_dirs, thus allowing async reads to  
take place in parallel. If disk redundancy is a must, stick with RAID  
1 pairs (multiple RAID 1 pairs work well, particularly with a hardware  
controller).
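
(A minimal sketch of that layout, assuming two dedicated disks
mounted at /cache1 and /cache2; the sizes and L1/L2 values are
illustrative:)

  # One aufs cache_dir per physical disk, so async reads proceed in parallel
  cache_dir aufs /cache1 50000 16 256
  cache_dir aufs /cache2 50000 16 256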


If your traffic load is mostly small ( ~ < 1 MB ) objects, consider  
utilizing COSS storage as an alternative to AUFS; this will give you  
much more bang for the buck if you're serving large numbers of small  
objects, since it eliminates the overhead of the millions of
open()/close() kernel system calls you'd see with AUFS.
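
(A sketch of such a COSS stripe, capping stored objects at roughly
512 KB; COSS is a Squid-2.6/2.7 store type, and the path and sizes
here are assumptions:)

  # Small objects go into one pre-allocated COSS stripe, avoiding
  # per-object open()/close() calls
  cache_dir coss /cache1/coss 1024 max-size=524288 block-size=512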


If you find yourself hitting the single-core CPU bottleneck due to  
squid's main loop, it is possible to run multiple squids on a box,  
although each one requires its own cache storage. If you need to move  
to this, consider configuring one or more "front-end" squids that  
refer queries to multiple "back-end" parent caches via CARP to  
eliminate duplicating object storage.
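
(A sketch of the front-end side, assuming two back-end parents at
illustrative hostnames listening on port 3128:)

  # Front-end squid.conf: hash requests across CARP parents, never go direct
  cache_peer back1.example.local parent 3128 0 carp no-query
  cache_peer back2.example.local parent 3128 0 carp no-query
  never_direct allow all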


HTH,

-Chris

On May 19, 2009, at 8:47 AM, rihad wrote:


Jeff Pang wrote:

rihad:


But what about POSIX threads & async I/O (./configure
--enable-async-io=2 ...)? Don't they take advantage of multiple
CPUs/cores/cache_dirs?



Yes, async I/O benefits from multiple CPUs for disk I/O, if you're using it.
Squid's main daemon is a single process, which gains nothing from an
SMP system.


Since disk I/O is often the bottleneck (given enough RAM), can it be
said that, thanks to async I/O, Squid mostly scales well with the
number of CPUs, issuing several disk I/O operations simultaneously
and asynchronously so that it can keep executing the main loop
without waiting for I/O completion? In that case, that part of the
FAQ needs updating, I guess.






Re: [squid-users] Does Squid scale well?

2009-05-19 Thread rihad

Jeff Pang wrote:

rihad :
Can someone please say how well Squid 3.1/tproxy scales? Would it have 
problems servicing more than 10k simultaneous HTTP requests, and 
pushing as much as 300 mbit/s of traffic? 500 mbit/s? 1 gbit/s?





Most of the time I saw my squid boxes running with 20,000 simultaneous
connections, but none of them reached 200 Mbit/s of traffic
(watched with iptraf).


Aha. You mean your traffic load was just that high, or that the box hit 
its limits?


Re: [squid-users] Is it true that even threaded Squid can't benefit from SMP systems?

2009-05-19 Thread rihad

Jeff Pang wrote:

rihad:



But what about POSIX threads & async I/O (./configure
--enable-async-io=2 ...)? Don't they take advantage of multiple
CPUs/cores/cache_dirs?




Yes, async I/O benefits from multiple CPUs for disk I/O, if you're using it.
Squid's main daemon is a single process, which gains nothing from an
SMP system.




Since disk I/O is often the bottleneck (given enough RAM), can it be
said that, thanks to async I/O, Squid mostly scales well with the
number of CPUs, issuing several disk I/O operations simultaneously
and asynchronously so that it can keep executing the main loop
without waiting for I/O completion? In that case, that part of the
FAQ needs updating, I guess.


Re: [squid-users] Does Squid scale well?

2009-05-19 Thread Jeff Pang

rihad :
Can someone please say how well Squid 3.1/tproxy scales? Would it have 
problems servicing more than 10k simultaneous HTTP requests, and pushing 
as much as 300 mbit/s of traffic? 500 mbit/s? 1 gbit/s?





Most of the time I saw my squid boxes running with 20,000 simultaneous
connections, but none of them reached 200 Mbit/s of traffic
(watched with iptraf).


We use Squid as a reverse proxy for images/CSS/HTML, etc.


--
Jeff Pang
DingTong Technology
www.dtonenetworks.com


Re: [squid-users] Is it true that even threaded Squid can't benefit from SMP systems?

2009-05-19 Thread Jeff Pang

rihad:



But what about POSIX threads & async I/O (./configure
--enable-async-io=2 ...)? Don't they take advantage of multiple
CPUs/cores/cache_dirs?




Yes, async I/O benefits from multiple CPUs for disk I/O, if you're using it.
Squid's main daemon is a single process, which gains nothing from an
SMP system.


--
Jeff Pang
DingTong Technology
www.dtonenetworks.com


[squid-users] Is it true that even threaded Squid can't benefit from SMP systems?

2009-05-19 Thread rihad

http://wiki.squid-cache.org/SquidFaq/InstallingSquid#head-56167774a7ab4b9ec2fb1b0bd20a74b4d984776c

It says essentially this:

Can Squid benefit from SMP systems?

Squid is a single process application and can not make use of SMP.



But what about POSIX threads & async I/O (./configure
--enable-async-io=2 ...)? Don't they take advantage of multiple
CPUs/cores/cache_dirs?
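
(For context, --enable-async-io is a shorthand configure option that
turns on the pthreads-based aufs store with the given number of
worker threads; a sketch, where the thread count is illustrative:)

  # Build Squid with the threaded aufs store and 16 async I/O threads
  ./configure --enable-async-io=16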


[squid-users] How to log only the MISS

2009-05-19 Thread Chudy Fernandez

Is it possible to log only the TCP_MISS entries, or anything matching .*MISS?


  


Re: [squid-users] Squid suddenly crashes (Maybe a bug)

2009-05-19 Thread Omid Kosari

The main reason for using 3.1 is TPROXY, so I cannot use 3.0.
How can I provide the full info to Amos?


Jeff Pang-4 wrote:
> 
> Omid Kosari:
>> Anyone?
>> This problem occurs 5 times a day (on average), and each time the following
>> message appears in cache.log
>> 
>> assertion failed: comm.cc:2016: "!fd_table[fd].closing()"
>> 
> 
> Because squid-3.1 is a beta version, anything can happen.
> You may provide the full info to Amos, and roll the software version back to
> 3.0.
> 
> -- 
> Jeff Pang
> DingTong Technology
> www.dtonenetworks.com
> 
> 




Re: [squid-users] Squid suddenly crashes (Maybe a bug)

2009-05-19 Thread rihad

Jeff Pang wrote:

Omid Kosari:

Anyone?
This problem occurs 5 times a day (on average), and each time the following
message appears in cache.log

assertion failed: comm.cc:2016: "!fd_table[fd].closing()"



Because squid-3.1 is a beta version, anything can happen.

AFAIK it's a release candidate already. It's still a bug.

You may [...] roll the software version back to
3.0.




I don't know about the OP, but I can't do that because TPROXY 4.1 is only
supported in Squid 3.1.
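
(For reference, the Squid-3.1 side of a TPROXY setup is a single
option on the listening port; a minimal sketch, with the kernel
iptables/routing rules omitted and the port number illustrative:)

  # squid.conf -- hypothetical TPROXY interception port (Squid 3.1+)
  http_port 3129 tproxy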