Re: [squid-users] Zero sized reply and other recent access problems

2005-03-07 Thread Reuben Farrelly
Hi again Hans,
At 08:52 a.m. 7/03/2005, H Matik wrote:
On Saturday 05 March 2005 23:41, Reuben Farrelly wrote:
 I think you've misunderstood something quite fundamental about how squid
 works:

Maybe I did not use the exact expressions you would like to see, but as you 
wrote, you did get it. Anyway, my intention, as said in my mail, was not to attack
anybody.
I know, I am just asking you to be specific with the errors you are 
reporting.  None of the developers would complain in the slightest if you 
could provide good evidence of a bug, believe me ;-)


 * Strict HTTP header parsing - implemented in the most recent STABLE
 releases of squid, you can turn this off via a squid.conf directive
 anyway (but it is useful to have it set to log bad pages).

What do you mean - relaxed_header_parser? I think this is on by default, not
off; turning it off makes it parse strictly, or am I wrong here?
Yes, it is on by default. In other words (from the squid.conf): "with this 
default setting, Squid accepts certain forms of non-compliant HTTP 
messages where it is unambiguous what the sending application intended, even 
if the message is not correctly formatted."

This means that as long as you have relaxed_header_parser set to on or 
warn, or simply not defined, the behaviour will still be the same as in 
older squid.
Personally I recommend at least warn, as it has allowed me to see some of 
the broken sites and inform the relevant people of their broken behaviour, but 
I understand not everyone can be bothered...
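
For reference, the relevant fragment of squid.conf would look something like 
this (a minimal sketch - check the comments shipped in your own squid.conf for 
the exact option names in your version):

#  relaxed_header_parser on|off|warn
relaxed_header_parser warn

With warn the broken message is still accepted, but a note is written to 
cache.log each time one is seen - which is how the broken sites mentioned 
above show up.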

 * ECN on with Linux can cause 'zero sized reply' responses, although
 usually you'll get a timeout.  I have ECN on on my system and very few
 sites fail because of this, but there are a small number.  Read the
 squid FAQ for information about how to turn this off if it is a problem.

FYI it does not happen only on Linux. Again, the problem and a possible
solution here are not the point; the point is that for the end user the site
opens using the other ISP, so for him it is an ISP problem. He doesn't care
whether it is squid or the remote site, network congestion or something else.
Yep, I understand.
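For what it's worth, on Linux the FAQ's advice boils down to a single sysctl 
(assuming a 2.4/2.6 kernel; other OSes have their own knobs):

# check the current setting
sysctl net.ipv4.tcp_ecn
# turn ECN off
sysctl -w net.ipv4.tcp_ecn=0
# (or equivalently: echo 0 > /proc/sys/net/ipv4/tcp_ecn)

But as you say, that is a workaround on our side for what is usually a broken 
firewall or middlebox near the remote site mishandling the ECN bits.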
Anyway, IMO the error message is obscure for the user; it starts by saying
"The URL:" and then (blank).
Do the users have "Show friendly HTTP error messages" ticked in their 
Internet Explorer options?  If they do, they will usually not see the squid 
error which explains what the problem is, and will instead see a generic 
"the page could not be displayed" message.  Unfortunately, IE hides these useful 
squid messages behind its own garbage, which is often more useless to the 
end user than squid's messages.
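(To check: in IE 5/6 it is, from memory, under Tools -> Internet Options -> 
Advanced -> Browsing -> "Show friendly HTTP error messages"; untick it and the 
user will see squid's own error page rather than IE's generic one.)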

If it's not that, then you should either have something useful to look at in 
the user's browser, or else in your cache.log.


The user obviously complains that he typed the URL correctly and in the
error message it is blank, so this causes misunderstandings between the
support staff and the user.
Then it does not help to send them off to read FAQs, because what I am 
speaking about is
the user, not the administrator. The user does not need to learn squid, but
what he gets should be understandable enough, and most important, he should get
the page whenever he would get it without squid.
Yes, of course.

I mean that a site should be accessible behind squid when it opens normally
with a browser without squid. It does not matter here whether there is a wrong
header or whatever.
 * NTLM authentication, some uninformed site admins require or request

NO, I was not speaking about any authentication at all

 Can you give some examples of specific sites which you need to bypass
 squid for that you cannot get to display using the items I mentioned above?

First, some banking and other secure sites which need the GRE protocol, for example,
but I was not speaking about those ones.
GRE should be unaffected.  Squid does not process or handle GRE, only 
TCP/IP.
Are you using your squid as a firewall/router box, and not allowing GRE 
through?

Lots of Blogger sites are giving errors. Sure, there are a lot of underscore and
whitespace problems, but the latter often are not resolvable by squid
settings. On the other hand they open normally with MSIE.
I haven't seen any before...
At work I can check for more; one specific example follows.
Other errors are like this, even though this specific site is now working after
we contacted them. The site gave problems with squid 2.5-S4, if I am not wrong
here.
GET / HTTP/1.1
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg,
application/vnd.ms-excel, application/msword, application/vnd.ms-powerpoint,
application/x-shockwave-flash, */*
Accept-Language: pt-br
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 5.0; Windows 98; DigExt)
Host: www.redecard.com.br
Connection: Keep-Alive
That one is one of the most broken ones I have seen yet:
[EMAIL PROTECTED] ~]# wget -S www.redecard.com.br
--00:38:38--  http://www.redecard.com.br/
   => `index.html.1'
Resolving www.redecard.com.br... 200.185.9.46
Connecting to www.redecard.com.br[200.185.9.46]:80... connected

Re: [squid-users] Strange HTTP Header causing error message from squid to user

2005-03-07 Thread Reuben Farrelly
Hi,
Mark Wiater wrote:
Hi,
One of my users is getting an error message when accessing a page
through Squid, but the page loads fine in Firefox, IE and Netscape
directly.
The headers that the HTTP server is returning look odd to me. 

First, the Date: field has two dates on the same line, comma separated.
Same thing for the Server line: "Microsoft-IIS/5.0, Results CASI Net".

The Connection: header also has "close, close".
And finally, there are two distinct HTTP header lines. The first is the
first line in the data section of the IP packet, HTTP/1.x 200 OK. The
second comes after the close, the 6th line, and is: HTTP/1.1: 200 OK.
Any ideas why Squid is detecting an error while browsers render the
page?
If it has malformed Date, Server and Connection headers, then it is 
very, very broken, and likely makes no sense to squid.  It probably makes 
no sense to your browser either, but the browser likely just ignores it.  You're 
really asking "why does something which is obviously broken not work?" ;-)
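
Purely for illustration (this is a guess reconstructed from your description, 
not your actual capture), a response like that would look roughly like:

HTTP/1.x 200 OK
Date: Mon, 07 Mar 2005 ..., Mon, 07 Mar 2005 ...
Server: Microsoft-IIS/5.0, Results CASI Net
Connection: close, close

HTTP/1.1: 200 OK
...

That sort of duplication usually suggests two different pieces of software 
(the web server plus some middleware in front of or behind it) each stamping 
their own status line and headers onto the one response.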

Can you tell us what the URL is?
Have you specified the relaxed_header_parser directive in your 
squid.conf, and if so, what is it set to?  There is an explanation about 
this in your squid.conf.

What version of squid are you using?  (squid -v)
I know, heaps of questions, but this is coming up as a daily question on 
this mailing list...

Reuben



Re: [squid-users] Strange HTTP Header causing error message from squid to user

2005-03-07 Thread Reuben Farrelly
Hi,
Henrik Nordstrom wrote:
On Tue, 8 Mar 2005, Henrik Nordstrom wrote:
On Mon, 7 Mar 2005, Mark Wiater wrote:
What version of squid are you using?  (squid -v)
2.5 stable8, it's an rpm package for Fedora Core 3.
squid-2.5.STABLE8-1.FC3.1

Upgrading to 2.5.STABLE9 helps some as the parser was relaxed a bit 
more there by default, but no guarantees as the server you asked on is 
quite broken...

Just verified, and Squid-2.5.STABLE9 accepts this response in its 
default settings.

Regards
Henrik
I'll put a request in the Fedora Core bugzilla for the maintainer to 
upgrade the package to -STABLE9...

reuben


Re: [squid-users] Zero sized reply and other recent access problems

2005-03-05 Thread Reuben Farrelly
Hi,
H Matik wrote:
Recently all of us are having problems with squid not serving certain 
pages/objects anymore. 

We do know that squid most probably does detect correct or incorrect HTML 
code and reports it via its error messages.

But I am not so sure this should be a squid task.
Squid IMO should cache and serve what it gets from the server.
The code check should be done by the browser - meaning incorrect code is a 
browser problem or a web server problem, so it should be reported by the 
browser, not by anything in the middle. 

Even if the page code is buggy, the page could contain objects to be cached, and 
that is what squid should do.

I say so because whoever uses squid is an ISP or a system admin of some kind of 
network. So it should not become this man's problem if somebody is 
coding his server's HTML pages incorrectly. He, with his squid, only serves his 
customers or the people on his network.

IMO this strict HTML code checking is complicating network support for end 
customers, which already was, or is, not so easy sometimes.

We here use transparent squid at lots of sites, and as soon as someone complains 
about this kind of problem we rewrite our forwarding rules so that the traffic 
does not go through squid anymore.

Even if we know that the remote site owner has no interest in somebody not 
being able to access his site, we do not have the time to talk to him. Indeed it 
is not our problem, and we are not an HTML coding school teaching how to 
correct errors. So here we simply give up and bypass squid for such kinds of 
sites.

IMO it might be better for squid not to check code. 

Customers say: "Without your cache I can access the site, with your cache I 
cannot. I do not want to know why, and if you do not resolve this problem for me I 
will not use your service anymore but another one where it works."

So first I lose my customer, and second, they do not use squid anymore. I 
believe this is worth thinking about.

I would like to add that we have been using squid since 97/98, and what I wrote 
here is not in any way meant as offensive criticism of the developers, but a point 
to think about. So what do you think about this?
I think you've misunderstood something quite fundamental about how squid 
works:

  Squid does not read, complain about, or validate HTML
In other words, it does not check it or care whether it is even HTML, or whether 
it is a binary file.  Squid only cares about the HTTP _headers_ that the 
remote server is issuing when squid requests a document.  HTTP headers 
have nothing to do with HTML; they are generated by the HTTP 
server and administered by the server administrator, and have nothing 
to do with the web pages on the server itself.
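
As a rough illustration (a made-up response - the header names and values here 
are arbitrary), squid only ever looks at the header block at the top; everything 
after the blank line is opaque body data as far as squid is concerned:

HTTP/1.1 200 OK
Date: Mon, 07 Mar 2005 10:00:00 GMT
Server: Apache
Content-Type: text/html
Content-Length: 57

<html><body>Squid never looks at this part.</body></html>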

I suspect you are meaning to complain about a number of different things 
at once:

* Strict HTTP header parsing - implemented in the most recent STABLE 
releases of squid, you can turn this off via a squid.conf directive 
anyway (but it is useful to have it set to log bad pages).

* ECN on with Linux can cause 'zero sized reply' responses, although 
usually you'll get a timeout.  I have ECN on on my system and very few 
sites fail because of this, but there are a small number.  Read the 
squid FAQ for information about how to turn this off if it is a problem.

* NTLM authentication - some uninformed site admins require or request 
NTLM authentication; this is not supported, is not recommended by Microsoft 
for use on the internet, and will not work (you'll get an error message).  Squid 
should not support things which are known to be broken and not supposed 
to work!

Can you give some examples of specific sites which you need to bypass 
squid for that you cannot get to display using the items I mentioned above?

Reuben


Re: [squid-users] Re: Re: Re: Re: WCCP + squid 2.5-STABLE7 + linux 2.6.10

2005-02-24 Thread Reuben Farrelly
Hi,
At 02:14 p.m. 25/02/2005, Jesse Guardiani wrote:
Henrik Nordstrom wrote:
 On Thu, 24 Feb 2005, Jesse Guardiani wrote:

 I don't think it is anymore. It seems like the packets are just
 disappearing after they hit my iptables rule. I tried placing OUTPUT and
 POSTROUTING LOG rules around the NAT table, and their hit counters
 increment if I hit the cache directly from a web browser, but if I hit it
 transparently the packet just disappears after the REDIRECT to port
 3128.

 Try using DNAT instead of REDIRECT.
I thought you might say that, so I tried it with DNAT earlier in the day.
I tried destination addresses 192.168.10.2 (my ip alias on eth0:22) and
192.168.1.2 (my real eth0 ip). Neither worked. Here's an example of the
latter:
# iptables -t nat -L -v
Chain PREROUTING (policy ACCEPT 425 packets, 61769 bytes)
 pkts bytes target  prot opt in    out  source    destination
   43  2580 DNAT    tcp  --  gre1  any  anywhere  anywhere     tcp dpt:www to:192.168.1.2:3128

Do you see anything wrong with the above?
I'm starting to think that something is wrong with linux's gre WCCP
decapsulation. That's why I keep asking if anyone actually has
this working on my kernel and my squid. But I guess, judging from
the silence, that nobody has it working yet.
Is there a better alternative to WCCP? I'm particularly interested
in the fail-over feature. I'd hate for my users' internet access
to go down just because my squid server rebooted.

No need.  I can confirm it does work, but it does need to be set up in a 
specific way.

I have been using the 2.6 series right the way through, now running 2.6.11-rc5, 
and switched to using the GRE tunnel method when it became supported by the 
Linux kernel.  ip_wccp is good, but it is not in the kernel, and it's a lot 
easier to just use a GRE tunnel built into the kernel instead.
If you wish to use ip_wccp, I suggest you start by getting the config 
below to work properly first, and only then change to ip_wccp and take down 
the GRE interface - start from a position where it is working before you start 
experimenting ;)  The router config and squid config would be the same; the 
iptables config is slightly different though.

Router config:
--
* My router is running 12.3(11)T3.  BE CAREFUL, some versions of IOS do NOT 
work without also turning off CEF and/or fast switching, although most 
recent ones do work OK.  Stick to a stable (non T or branch) release if you 
can, such as latest 12.2 or 12.3.

interface Ethernet0
  ip address 192.168.0.1 255.255.255.0
  ip wccp web-cache redirect in
interface Loopback0
 ip address 172.16.1.5 255.255.255.252
end
(Note the loopback IP range matches that on the GRE tunnel on my linux box)
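One assumption on my part: on recent IOS the web-cache service also needs to be 
enabled globally before the per-interface redirect statement does anything, i.e. 
something like this in the global config (if it is not there already):

ip wccp web-cache

If that is missing, the router will typically just ignore the cache's WCCP 
announcements.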
Linux box core config:
-
/etc/sysconfig/network-scripts/ifcfg-gre0
DEVICE=gre0
BOOTPROTO=static
IPADDR=172.16.1.6
NETMASK=255.255.255.252
ONBOOT=yes
IPV6INIT=no
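
If your distribution does not use ifcfg-* files, roughly the same thing can be 
done by hand (assuming the ip_gre module provides the gre0 device):

modprobe ip_gre
ip addr add 172.16.1.6/30 dev gre0
ip link set gre0 up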
iptables config:

iptables -t nat -A PREROUTING -s 192.168.0.0/255.255.0.0 -d ! 192.168.0.0/255.255.0.0 -i gre0 -p tcp -m tcp --dport 80 -j DNAT --to 192.168.0.3:3128

This makes sure that traffic from 192.168.0.0/255.255.0.0 which is destined for 
another address inside 192.168.0.0/255.255.0.0 is not redirected to the cache; 
only traffic bound for outside destinations is sent to port 3128.

Squid config:
-
wccp_router 192.168.0.1
wccp_version 4
wccp_outgoing_address 192.168.0.3
(I have two IP addresses on this box, hence specifying which one to use.)
I'm not sure if it is optimal or not, but it works with every squid version 
I have ever tried.  If I remember correctly, some of these instructions 
came from a page by Joe Cooper @ Swelltech, but I can't put my hands on it 
right now.
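
If packets still seem to vanish, a quick sanity check (just a suggestion) is to 
watch both the encapsulated and the decapsulated traffic:

tcpdump -n -i eth0 ip proto 47     # GRE packets arriving from the router
tcpdump -n -i gre0 tcp port 80     # what comes out of the tunnel

If the first shows traffic but the second doesn't, the decapsulation side is the 
problem; if the second shows traffic but squid never logs anything, look at the 
iptables DNAT rule.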

Hope this helps.
reuben



Re: [squid-users] www.europroperty.com

2005-02-22 Thread Reuben Farrelly
Hi,
At 05:59 a.m. 23/02/2005, you wrote:
We have recently upgraded to 2.5Stable7-20050113.
The above URL is causing Squid to return Invalid Request, but there is
no problem going direct.
This is what happens going through squid:
GET http://www.europroperty.com/ HTTP/1.0.
Accept: */*.
Accept-Language: en-gb.
Cookie: WEBTRENDS_ID=80.169.166.244-2279377056.29694189.
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; Q312461).
Host: www.europroperty.com.
Proxy-Connection: Keep-Alive.
.
HTTP/1.0 502 Bad Gateway.
Server: squid/2.5.STABLE7-20050113.
Mime-Version: 1.0.
Date: Tue, 22 Feb 2005 14:53:52 GMT.
Content-Type: text/html.
Content-Length: 1475.
Expires: Tue, 22 Feb 2005 14:53:52 GMT.
X-Squid-Error: ERR_INVALID_REQ 0.
X-Cache: MISS from miloscz.collierscre.co.uk.
X-Cache-Lookup: MISS from miloscz.collierscre.co.uk:3128.
Proxy-Connection: keep-alive.
This is what happens going direct
GET / HTTP/1.1.
Accept: */*.
Accept-Language: en-gb.
Accept-Encoding: gzip, deflate.
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; Q312461).
Host: www.europroperty.com.
Connection: Keep-Alive.
Cookie: WEBTRENDS_ID=80.169.166.244-2279377056.29694189.
.
HTTP/1.1 200 OK.
Server: Microsoft-IIS/5.0.
Date: Tue, 22 Feb 2005 14:51:28 GMT.
Content Location: http://www.europroperty.com.
^^^
X-Powered-By: ASP.NET.
Content-Length: 7278.
Content-Type: text/html.
Expires: Tue, 22 Feb 2005 14:50:29 GMT.
Set-Cookie: ASPSESSIONIDCAQSCQDT=MLPDPKNAOAHMCMLPCNOMJDPN; path=/.
Cache-control: private.
Has anyone come across this before and is there a fix?
Yes.  Check your cache.log; the reason why this is rejected is clearly 
logged (and it is the site that is broken - hint: see the header marked above).

Have a read through one of the archives listed at 
http://www.squid-cache.org/mailing-lists.html , the answer to this question 
(as well as a way to work around the brokenness) has been posted about 100 
times already this week...
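
(For the record, the workaround usually being discussed is the 
relaxed_header_parser directive in squid.conf, e.g.:

relaxed_header_parser warn

though whether it copes with this particular flavour of brokenness depends on 
your squid version.)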

reuben


Re: [squid-users] squid-2.5.STABLE8 compilation error

2005-02-20 Thread Reuben Farrelly
Hi,
At 01:39 a.m. 21/02/2005, you wrote:
On Sat, 19 Feb 2005 15:20:37 +0100, Elsen Marc [EMAIL PROTECTED] wrote:

  Hello everyone,
 
  I have a RedHat AS 3.0 box which I want to install squid on. So I
  downloaded squid-2.5.STABLE8 and unpacked it.
 
  I used these options as my configure options:
  ./configure --enable-xmalloc-statistics --enable-delay-pools
  --enable-useragent-log --enable-referer-log --enable-snmp
  --enable-arp-acl --enable-ssl --enable-linux-netfilter
  --enable-x-accelerator-vary
 
 ...

- Does it work (as a test) when --enable-ssl is not used ?
works like a charm without --enable-ssl
- Do you have openssl installed on your system.

Yes, openssl-0.9.7a-22.1 is installed
What about openssl-devel ?
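A quick way to check, and (assuming an up2date-registered AS 3.0 box) to install it:

rpm -q openssl-devel
up2date -i openssl-devel    # or install the matching openssl-devel RPM by hand

The OpenSSL headers that squid's configure script looks for when --enable-ssl is 
given live in that -devel package, not in the base openssl one.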
reuben


Re: [squid-users] driver needed...

2005-02-17 Thread Reuben Farrelly
Hi,
Daniel Navarro wrote:
I guess somebody has an Asound LAN card driver for the 8139
model. It is not a Realtek model; check at www.asound.net.
I really need it, especially for Windows 2000.
Regards, Daniel Navarro
 Maracay, Venezuela
 www.csaragua.com/ecodiver
This is the squid-users mailing list for general discussion relating to 
Squid (not Windows drivers). The membership of this list is thousands of 
Squid users from around the world, and what you are asking for is very 
very off topic here...

reuben



Re: [squid-users] Enforcing Refresh patterns

2005-01-24 Thread Reuben Farrelly
Hi,
At 03:42 a.m. 25/01/2005, Alexander Shopov wrote:
Hi guys,
After reading the FAQ, searching on Google, reading viSofts manual, and 
the Squid Documentation project, extensive experimenting and then 
wiretapping with Ethereal, I still cannot get the result I want with Squid.
I want to *force* a particular refresh pattern on some objects 
(*.gif,*.js) from some servers.

I want all gifs from some servers to be refreshed no earlier than 12 
minutes after they went into the cache *regardless* of the settings of the 
web server and the commands of the client:

I tried with the following setting:
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i .*\.gif$     12      100%    12      override-expire override-lastmod reload-into-ims ignore-reload
refresh_pattern .               0       20%     4320

But then whenever the user client generates a request for a gif object, 
squid first checks whether the object is stale by generating a request to 
the server. I do not want it to do so for at least 12 minutes; I want squid to 
return the object immediately.

Can anyone give me advice?
What version of squid are you using?
Can you post the full section out of your access.log, of a request where 
this happens, with
   log_mime_hdrs on

(Just post 1 request logged)
Reuben


Re: [squid-users] Squid 3.0 accelerator using host caching problem

2005-01-17 Thread Reuben Farrelly
At 07:07 p.m. 18/01/2005, forgetful tan wrote:
hi,all
I'm using squid 3.0's vhost mode. I set some domain names as 
cache_peers with the originserver option as follows:
cache_peer myhost.domain parent 80 0 originserver default
But all the access log entries I get are some kind of MISS (TCP_MISS, 
TCP_CLIENT_REFRESH_MISS).

If I set "always_direct allow myhost.domain", I get all the HTTP 
traffic to myhost in DIRECT mode.

Didn't I set the proper configuration? Or can't I get these pages 
cached?
I'm seeing the same thing on a customer system (everything retrieved from 
the backend server is always TCP_MISS/304 even though it definitely is 
cacheable), but was holding back till I'd done some more investigation.  I 
am using a very old snapshot (about -PRE3).  I can't upgrade as more recent 
versions seem to be very unstable right now :(

I have a recollection of this problem being fixed at some point but it must 
have been far far back, wish me luck finding the patch.

reuben


Re: [squid-users] Re: gzip

2004-07-08 Thread Reuben Farrelly
swelltech.com are listed here:
http://www.squid-cache.org/Devel/
and here:
http://www.squid-cache.org/SPONSORS.txt
and a search on www.squid-cache.org for "swelltech" shows about 1600 hits... so 
I think it's pretty reasonable to assume that Swelltech are for real  ;)

They seem to be asking for funds to pay for a developer, except that they 
have written most of the code already.  I would think it wasteful to 
sponsor someone else to write this functionality again when, if the funding 
is found, the code will be released under the GPL anyway..

Reuben

At 02:17 p.m. 9/07/2004, ohenry wrote:
  Also Swell Technology claims to have such a patch up for ransom. Does
  anyone know if their patch is for real and if it works as I described
  above?

 No idea - never heard of the company. I don't know how much they're
 charging, but rather than purchasing that, you (and others who are
 interested) could instead invest the money in sponsoring a Squid developer
 to implement it in Squid proper.
They claim the patch works great, and they haven't released it yet because
squid 3.0 is in feature freeze.  One of the engineers there told me they
are going to release it under the GPL once Squid 3.1 begins.
But they are asking for $400 to gain access to the pre-release patch.
http://swelltech.com/squidgzip/
So in theory it already exists and will be a part of Squid 3.1 very soon,
but who knows if these guys are for real.  I was really hoping people on
this list would know who they were...