[squid-users] browser hangs with latest squid configured without cache_peer

2014-01-14 Thread Jeff Chua
The last stable version that works is squid-3.HEAD-20131230-r13199.tar.bz2.good.

With the latest bzr pull (at revno 13234), the browser hangs when squid is
configured without cache_peer.

Anyone seeing this?

Thanks,
Jeff


[squid-users] bug ... No-lookup DNS ACLs ?

2013-01-31 Thread Jeff Chua
-- Forwarded message --
From: Jeff Chua 
Date: Fri, Feb 1, 2013 at 1:32 PM
Subject:
To: squid-users@squid-cache.org


Amos,

I'm seeing entries like these after rev 12620. It seems the "-n" flag only
applies to dst* ACLs and not src* ACLs. How can I fix these?


2013/02/01 13:19:05| WARNING: (B) '::/0' is a subnetwork of (A) '::/0'
2013/02/01 13:19:05| WARNING: because of this '::/0' is ignored to
keep splay tree searching predictable
2013/02/01 13:19:05| WARNING: You should probably remove '::/0' from
the ACL named 'all'
2013/02/01 13:19:05| WARNING: (B) '127.0.0.1' is a subnetwork of (A) '127.0.0.1'
2013/02/01 13:19:05| WARNING: because of this '127.0.0.1' is ignored
to keep splay tree searching predictable
2013/02/01 13:19:05| WARNING: You should probably remove '127.0.0.1'
from the ACL named 'localhost'
2013/02/01 13:19:05| WARNING: (B) '127.0.0.0/8' is a subnetwork of (A)
'127.0.0.0/8'
2013/02/01 13:19:05| WARNING: because of this '127.0.0.0/8' is ignored
to keep splay tree searching predictable
2013/02/01 13:19:05| WARNING: You should probably remove '127.0.0.0/8'
from the ACL named 'to_localhost'
2013/02/01 13:19:05| WARNING: (B) '138.18.18.0/25' is a subnetwork of
(A) '138.18.18.0/25'
2013/02/01 13:19:05| WARNING: because of this '138.18.18.0/25' is
ignored to keep splay tree searching predictable
2013/02/01 13:19:05| WARNING: You should probably remove
'138.18.18.0/25' from the ACL named 'clients'
2013/02/01 13:19:05| WARNING: (B) '80.239.152.0/24' is a subnetwork of
(A) '80.239.152.0/24'
2013/02/01 13:19:05| WARNING: because of this '80.239.152.0/24' is
ignored to keep splay tree searching predictable


In squid.conf ...
acl clients src 138.18.18.0/25 127.0.0.1/32
http_access allow manager clients
http_access deny manager

acl razor dstdomain .edgesuite.net .razor.tv
acl razordst dst 80.239.152.0/24
http_access deny razor razordst

acl local-servers dstdomain proxy lo localhost
always_direct allow local-servers


I've tried adding "-n", but it doesn't seem to fix the problem ...
acl razor dstdomain -n .edgesuite.net .razor.tv
acl razordst dst -n 80.239.152.0/24
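
For reference, a sketch of where the flag sits syntactically. This assumes
the build also accepts "-n" on address-based ACL types, not just the
domain-based ones (the ACL names and addresses are the examples from above):

```
# hypothetical squid.conf fragment: "-n" comes immediately after the ACL
# type and before any values; whether "src" honors it is an assumption
acl clients src -n 138.18.18.0/25 127.0.0.1/32
acl razordst dst -n 80.239.152.0/24
```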


Thanks,
Jeff


[squid-users] squid cache_peer problem after June 15 ...

2011-09-19 Thread Jeff Chua
Amos,

I still have one issue with the latest squid after the June 15 bzr
download: I can't proxy anything through the company proxy server.

Before June 15, squid used to be able to do the following, but it
doesn't work anymore. I don't really see anything in the squid log;
perhaps I'm not setting the right debug level.

cache_peer xxx.company.com parent 3128 3130 no-query no-digest
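
In case the debug level is the missing piece, a minimal squid.conf sketch
that should surface peer selection in cache.log (assuming the standard
squid debug section numbering, where section 44 covers peer selection):

```
# log everything at level 1, plus verbose peer-selection tracing
debug_options ALL,1 44,5
```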

I used a copy of squid from June 15, recompiled it, and it's working OK.
The latest one doesn't work.


Thanks,
Jeff


Re: [squid-users] [PATCH] Host header forgery detected even with appendDomain

2011-09-13 Thread Jeff Chua
On Tue, Sep 13, 2011 at 4:28 PM, Amos Jeffries  wrote:
> On 13/09/11 18:54, Jeff Chua wrote:
>> The latest squid prevents connections to my known servers without the local
>> domain name. The version prior to June 15 allowed connecting to URLs
>> without fully qualified domain names, i.e. "moose" instead of
>> "moose.xxx.com".
>>
>> The latest squid throws the following error:
>>
>> 2011/09/13 09:17:53.420 kid1| SECURITY ALERT: Host header forgery detected
>> on local=192.168.243.1:8080 remote=192.168.243.1:59291 FD 11 flags=1
>> (moose does not match moose.xxx.com)
>>
>>
>> Here's a patch to get around the problem. By specifying "append_domain
>> .xxx.com", squid should allow hosts that match the domain part. This is
>> useful for getting back the old behavior, so I don't need to type the full
>> URLs for the many sites at work I'm dealing with.
>
>
> Thank you for reporting this.
>
>  The header forgery detection for regular proxy traffic only checks that the
> URL domain name matches the Host: header content. Some RFC-mandated leniency
> permits the protocol default port to be optional on top of this.
>
> Domain names with no dots are legitimate public FQDNs. The URL is expected to
> contain the abbreviated hostname and the Host: header to also contain that
> abbreviated name, such that both match and pass under exactly the same
> criteria as any other traffic.
> --
>
> Squid applies append_domain only later in the processing.

Amos,

Can you move this rule to apply first in the processing?


> If your client agent is requesting a mixture of no-dots and dotted domain
> names something is broken outside of the verify procedure and needs to be
> fixed.

Yes, it's a mixture of dotted and undotted names.


> Are you able to investigate a little further as to what the received
> syntax is and where it is coming from please?
> (a trace like the above can be found at debug level 11,2 in your Squid)


Here's the trace ...


2011/09/14 03:00:36.324| TcpAcceptor.cc(187) doAccept: New connection on FD 14
2011/09/14 03:00:36.324| TcpAcceptor.cc(262) acceptNext: connection on
local=0.0.0.0:8080 remote=[::] FD 14 flags=9
2011/09/14 03:00:36.324| HTTP Client local=192.168.243.1:8080
remote=192.168.243.1:33673 FD 17 flags=1
2011/09/14 03:00:36.325| HTTP Client REQUEST:
-
GET http://proxy/cgi-bin/date.cgi HTTP/1.0
Host: proxy
Accept: text/html, text/plain, text/css, text/sgml, */*;q=0.01
Accept-Encoding: gzip, compress, bzip2
Accept-Language: en
Pragma: no-cache
Cache-Control: no-cache
User-Agent: Lynx/2.8.8dev.8 libwww-FM/2.14 SSL-MM/1.4.1 OpenSSL/0.9.8r
Referer: http://proxy/cgi-bin/date.cgi


--
2011/09/14 03:00:36.325| SECURITY ALERT: Host header forgery detected
on local=192.168.243.1:8080 remote=192.168.243.1:33673 FD 17 flags=1
(proxy does not match proxy.corp.fedex.com)
2011/09/14 03:00:36.325| SECURITY ALERT: By user agent:
Lynx/2.8.8dev.8 libwww-FM/2.14 SSL-MM/1.4.1 OpenSSL/0.9.8r
2011/09/14 03:00:36.325| SECURITY ALERT: on URL:
http://proxy.corp.fedex.com/cgi-bin/date.cgi
2011/09/14 03:00:36.325| errorpage.cc(1243) BuildContent: No existing
error page language negotiated for ERR_INVALID_REQ. Using default
error file.
2011/09/14 03:00:36.325| The reply for GET
http://proxy/cgi-bin/date.cgi is 1, because it matched 'all'
2011/09/14 03:00:36.325| HTTP Client local=192.168.243.1:8080
remote=192.168.243.1:33673 FD 17 flags=1
2011/09/14 03:00:36.325| HTTP Client REPLY:
-
HTTP/1.1 409 Conflict
Server: squid/3.HEAD-BZR
Mime-Version: 1.0
Date: Tue, 13 Sep 2011 19:00:36 GMT
Content-Type: text/html
Content-Length: 4279
X-Squid-Error: ERR_INVALID_REQ 0
Content-Language: en
X-Cache: MISS from proxy
X-Cache-Lookup: NONE from proxy:8080
Via: 1.1 proxy (squid/3.HEAD-BZR)
Connection: close


--
2011/09/14 03:00:36.325| client_side.cc(765) swanSong:
local=192.168.243.1:8080 remote=192.168.243.1:33673 flags=1


Thanks,
Jeff


[squid-users] [PATCH] Host header forgery detected even with appendDomain

2011-09-12 Thread Jeff Chua



Amos,

The latest squid prevents connections to my known servers without the local
domain name. The version prior to June 15 allowed connecting to URLs
without fully qualified domain names, i.e. "moose" instead of
"moose.xxx.com".

The latest squid throws the following error:

2011/09/13 09:17:53.420 kid1| SECURITY ALERT: Host header forgery detected
on local=192.168.243.1:8080 remote=192.168.243.1:59291 FD 11 flags=1
(moose does not match moose.xxx.com)


Here's a patch to get around the problem. By specifying "append_domain
.xxx.com", squid should allow hosts that match the domain part. This is
useful for getting back the old behavior, so I don't need to type the full
URLs for the many sites at work I'm dealing with.

Thanks,
Jeff

--- trunk/src/client_side_request.cc	2011-09-02 23:48:56.0 +0800
+++ trunk/src/client_side_request.cc	2011-09-13 10:31:33.0 +0800
@@ -620,6 +620,8 @@
 port = xatoi(portStr);
 }

+int appendDomainOK = strcmp(strchr(http->request->GetHost(), '.'), Config.appendDomain);
+
 debugs(85, 3, HERE << "validate host=" << host << ", port=" << port << ", portStr=" << (portStr?portStr:"NULL"));
 if (http->request->flags.intercepted || http->request->flags.spoof_client_ip) {
 // verify the Host: port (if any) matches the apparent destination
@@ -633,11 +635,11 @@
 // verify the destination DNS is one of the Host: headers IPs
 ipcache_nbgethostbyname(host, hostHeaderIpVerifyWrapper, this);
 }
-} else if (strlen(host) != strlen(http->request->GetHost())) {
+} else if (strlen(host) != strlen(http->request->GetHost()) && appendDomainOK) {
 // Verify forward-proxy requested URL domain matches the Host: header
 debugs(85, 3, HERE << "FAIL on validate URL domain length " << http->request->GetHost() << " matches Host: " << host);
 hostHeaderVerifyFailed(host, http->request->GetHost());
-} else if (matchDomainName(host, http->request->GetHost()) != 0) {
+} else if (matchDomainName(host, http->request->GetHost()) != 0 && appendDomainOK) {
 // Verify forward-proxy requested URL domain matches the Host: header
 debugs(85, 3, HERE << "FAIL on validate URL domain " << http->request->GetHost() << " matches Host: " << host);
 hostHeaderVerifyFailed(host, http->request->GetHost());


Re: [squid-users] commBind: Cannot bind socket FD 49

2011-09-09 Thread Jeff Chua
On Thu, Sep 8, 2011 at 1:55 PM, Amos Jeffries  wrote:
> On 08/09/11 12:50, Jeff Chua wrote:
>>
>> Amos,
>>
>> With recent versions of squid after June 15, I'm getting the following
>> error when connecting to this ftp site; all other sites seem ok.
>> ftp://:y...@renftp1.dialogic.com/MLoewl
>> error in cache.log ...
>> commBind: Cannot bind socket FD 49 to 188.18.88.188:61276: (98)
>> Address already in use
>>
> A trace of debug level 9,5 would be useful.

> Aha! Please try this patch
> If that works, 3.2.0.12 later this weekend will have it.

Amos,

That works! Thanks. Now I'm using the latest squid with the patch applied.

Jeff.


Re: [squid-users] commBind: Cannot bind socket FD 49

2011-09-07 Thread Jeff Chua
> On Thu, Sep 8, 2011 at 9:23 AM, Le Trung Kien  wrote:
>
> It seems another process is already listening on that port?

But when I reverted back to a version before June, it worked. And I checked:
nothing was listening on that port.

Thanks,

Jeff


[squid-users] commBind: Cannot bind socket FD 49

2011-09-07 Thread Jeff Chua
Amos,

With recent versions of squid after June 15, I'm getting the following
error when connecting to this ftp site; all other sites seem ok.

ftp://:y...@renftp1.dialogic.com/MLoewl


error in cache.log ...

commBind: Cannot bind socket FD 49 to 188.18.88.188:61276: (98)
Address already in use


Thanks,
Jeff


[squid-users] squid-3.HEAD-BZR failed to access https://mail.google.com (fwd)

2011-04-03 Thread Jeff Chua


Recent squid-3.HEAD-BZR fails to access https://mail.google.com,
but was ok prior to March 19. Attached is the cache.log, with this line in
particular ...


assertion failed: comm.cc:216: "fd_table[fd].halfClosedReader != NULL"


Thanks,
Jeff


2011/04/04 09:55:26.234 kid1| IoCallback.cc(120) will call 
ConnStateData::clientReadRequest(FD 11, data=0x23e0548, size=183, 
buf=0x238b410) [call19]
2011/04/04 09:55:26.234 kid1| entering ConnStateData::clientReadRequest(FD 11, 
data=0x23e0548, size=183, buf=0x238b410)
2011/04/04 09:55:26.234 kid1| AsyncCall.cc(32) make: make call 
ConnStateData::clientReadRequest [call19]

2011/04/04 09:55:26.234 kid1| cbdataReferenceValid: 0x23e0548
2011/04/04 09:55:26.234 kid1| cbdataReferenceValid: 0x23e0548
2011/04/04 09:55:26.234 kid1| cbdataReferenceValid: 0x23e0548
2011/04/04 09:55:26.234 kid1| cbdataReferenceValid: 0x23e0548
2011/04/04 09:55:26.234 kid1| ConnStateData status in: [ job2]
2011/04/04 09:55:26.234 kid1| cbdataReferenceValid: 0x23e0548
2011/04/04 09:55:26.234 kid1| client_side.cc(2765) clientReadRequest: 
clientReadRequest FD 11 size 183
2011/04/04 09:55:26.234 kid1| client_side.cc(2705) clientParseRequest: FD 11: 
attempting to parse
2011/04/04 09:55:26.234 kid1| httpParseInit: Request buffer is CONNECT 
mail.google.com:443 HTTP/1.1
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:2.2a1pre) Gecko/20110402 
Firefox/4.2a1pre

Proxy-Connection: keep-alive
Host: mail.google.com


2011/04/04 09:55:26.234 kid1| HttpMsg.cc(458) parseRequestFirstLine: parsing 
possible request: CONNECT mail.google.com:443 HTTP/1.1
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:2.2a1pre) Gecko/20110402 
Firefox/4.2a1pre

Proxy-Connection: keep-alive
Host: mail.google.com


2011/04/04 09:55:26.234 kid1| Parser: retval 1: from 0->37: method 0->6; url 
8->26; version 28->35 (1/1)
2011/04/04 09:55:26.234 kid1| parseHttpRequest: req_hdr = {User-Agent: 
Mozilla/5.0 (X11; Linux x86_64; rv:2.2a1pre) Gecko/20110402 Firefox/4.2a1pre

Proxy-Connection: keep-alive
Host: mail.google.com

}
2011/04/04 09:55:26.234 kid1| parseHttpRequest: end = {
}
2011/04/04 09:55:26.234 kid1| parseHttpRequest: prefix_sz = 183, req_line_sz = 
38

2011/04/04 09:55:26.234 kid1| cbdataLock: 0x23e0548=7
2011/04/04 09:55:26.234 kid1| cbdataLock: 0x23e0a78=1
2011/04/04 09:55:26.234 kid1| cbdataLock: 0x22946d8=1
2011/04/04 09:55:26.234 kid1| clientStreamInsertHead: Inserted node 0x23e3518 
with data 0x23e2128 after head

2011/04/04 09:55:26.234 kid1| cbdataLock: 0x23e3518=1
2011/04/04 09:55:26.234 kid1| parseHttpRequest: Request Header is
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:2.2a1pre) Gecko/20110402 
Firefox/4.2a1pre

Proxy-Connection: keep-alive
Host: mail.google.com


2011/04/04 09:55:26.234 kid1| parseHttpRequest: Complete request received
2011/04/04 09:55:26.234 kid1| client_side.cc(2743) clientParseRequest: FD 11: 
parsed a request

2011/04/04 09:55:26.234 kid1| comm.cc(1116) commSetTimeout: FD 11 timeout 86400
2011/04/04 09:55:26.234 kid1| cbdataLock: 0x23e0a78=2
2011/04/04 09:55:26.234 kid1| cbdataLock: 0x23e0a78=3
2011/04/04 09:55:26.234 kid1| The AsyncCall SomeTimeoutHandler constructed, 
this=0x21297a0 [call20]

2011/04/04 09:55:26.234 kid1| cbdataLock: 0x23e0a78=4
2011/04/04 09:55:26.234 kid1| cbdataUnlock: 0x23e0a78=3
2011/04/04 09:55:26.234 kid1| cbdataUnlock: 0x23e0a78=2
2011/04/04 09:55:26.234 kid1| comm.cc(1127) commSetTimeout: FD 11 timeout 86400
2011/04/04 09:55:26.234 kid1| cbdataUnlock: 0x23e0548=6
2011/04/04 09:55:26.234 kid1| cbdataUnlock: 0x23e0548=5
2011/04/04 09:55:26.234 kid1| cbdataReferenceValid: 0x23e0548
2011/04/04 09:55:26.234 kid1| cbdataReferenceValid: 0x23e0548
2011/04/04 09:55:26.234 kid1| urlParse: Split URL 'mail.google.com:443' into 
proto='', host='mail.google.com', port='443', path=''

2011/04/04 09:55:26.234 kid1| init-ing hdr: 0x2388740 owner: 2
2011/04/04 09:55:26.234 kid1| HttpRequest.cc(59) HttpRequest: constructed, 
this=0x2388730 id=54
2011/04/04 09:55:26.234 kid1| Address.cc(409) LookupHostIP: Given Non-IP 
'mail.google.com': Name or service not known

2011/04/04 09:55:26.234 kid1| parsing hdr: (0x2388740)
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:2.2a1pre) Gecko/20110402 
Firefox/4.2a1pre

Proxy-Connection: keep-alive
Host: mail.google.com

2011/04/04 09:55:26.234 kid1| parsing HttpHeaderEntry: near 'User-Agent: 
Mozilla/5.0 (X11; Linux x86_64; rv:2.2a1pre) Gecko/20110402 Firefox/4.2a1pre'
2011/04/04 09:55:26.234 kid1| parsed HttpHeaderEntry: 'User-Agent: Mozilla/5.0 
(X11; Linux x86_64; rv:2.2a1pre) Gecko/20110402 Firefox/4.2a1pre'
2011/04/04 09:55:26.234 kid1| created HttpHeaderEntry 0x2295990: 'User-Agent : 
Mozilla/5.0 (X11; Linux x86_64; rv:2.2a1pre) Gecko/20110402 Firefox/4.2a1pre

2011/04/04 09:55:26.234 kid1| 0x2388740 adding entry: 58 at 0
2011/04/04 09:55:26.234 kid1| parsing HttpHeaderEntry: near 'Proxy-Connection: 
keep-alive'
2011/04/04 09:55:26.234 kid1| parsed HttpHeaderEntry: 'Proxy-Connection: 
keep-alive'
2011/04/04 09:55:26.234 kid1| cr

Re: [squid-users] [FTP BUG] squid 3.HEAD-BZR returns empty. Squid 3.1.8-20101023 ok.

2010-10-25 Thread Jeff Chua
On Mon, Oct 25, 2010 at 1:42 PM, Amos Jeffries  wrote:
> On 24/10/10 16:23, Jeff Chua wrote:

> As documented everywhere about reporting bugs and mailing list usage "do NOT
> send bug reports to squid-users".
>
> Maybe the new FTP login handling in 3.HEAD. Please report to
> http://bugs.squid-cache.org. A cache.log trace at debug level "ALL,9" for
> one of these failures would be helpful as well.

Amos,

Sorry, I should have read the doc before posting this here. I've filed
a bug report. It's bug 3089 with two attachments (a good and a bad
run).

Thanks,
Jeff


[squid-users] [FTP BUG] squid 3.HEAD-BZR returns empty. Squid 3.1.8-20101023 ok.

2010-10-23 Thread Jeff Chua
Try:  ftp://invisible-island.net/xterm/

Squid Cache: Version 3.HEAD-BZR
Directory: ftp://invisible-island.net/xterm//
Directory Content:
Transfer complete
   Parent Directory Parent Directory (Root Directory)
   Transfer complete


Squid Cache: Version 3.1.8-20101023
FTP Directory: ftp://invisible-island.net/xterm/
...



Jeff


Re: [squid-users] theOutIcpConnection FD -1: getsockname: (9) Bad file descriptor

2010-10-17 Thread Jeff Chua
On Sun, Oct 17, 2010 at 6:23 PM, Amos Jeffries  wrote:
> 3.HEAD is today the future 3.3 release. It contains development code quite
> different from 3.1 production code.
>  The dated 3.1 tarball "snapshot", rsync squid-3.1, or the bzr SQUID_3_1
> branch is the latest 3.1 series code.
> Thanks for reporting this regression. I've added a fix to 3.HEAD.

Amos,

I've been using the HEAD branch for a while, and it seemed stable until I
encountered the theOutIcpConnection problem a few weeks ago. I should
have reported it earlier.


>> What does theOutIcpConnection mean?
>
> It's the internal name for the port used by Squid to send ICP requests.

Ok. Cool.

Thanks for fixing this. I just pulled the fixes and it's working nicely.

Jeff.


[squid-users] theOutIcpConnection FD -1: getsockname: (9) Bad file descriptor

2010-10-17 Thread Jeff Chua
I'm seeing this error (theOutIcpConnection FD -1: getsockname: (9) Bad
file descriptor) in the squid access.log.

It only happens with the bzr's version (Version 3.HEAD-BZR) and not
with squid-3.1.8-20101016.tar.bz2 that I downloaded.

What does theOutIcpConnection mean?

I'm using the following options to compile squid ...


Squid Cache: Version 3.HEAD-BZR
configure options:  '--enable-async-io' '--enable-cache-digests'
'--enable-storeio=ufs,aufs,diskd' '--disable-snmp' '--enable-ssl'
'--with-openssl=/usr/local/ssl' '--disable-ldap'
'--disable-translation' '--disable-auto-locale' '--enable-auth'
'--disable-auth-ntlm' '--disable-icmp' '--disable-ipv6'
'--disable-pinger' '--disable-static' '--enable-shared'
'--enable-auth-basic=getpwnam' '--enable-auth-digest=password'
'--enable-external-acl-helpers=ip_user session unix_group
wbinfo_group'



Thanks,
Jeff


[squid-users] trusted squid caching

2007-05-11 Thread Jeff Chua


Is there an option to allow the connection to be held open between two 
squid servers for a long duration?


I want to reduce the initial setup cost of connecting remotely to an apache
server with "wget", since wget doesn't support HTTP/1.1 persistent
connections, and the wget session does not stay open after it is done.


I'm assuming it's faster to establish a session with the local squid, and
the remote squid can then handle the request on behalf of the local one.
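
A minimal sketch of the directives involved, assuming a squid build that
supports persistent-connection tuning (the timeout value is only an example,
not a recommendation):

```
# keep persistent connections enabled on both squids, and raise the
# idle-connection timeout so the inter-squid link stays open longer
client_persistent_connections on
server_persistent_connections on
pconn_timeout 5 minutes
```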


Possible?

Thanks,
Jeff


[squid-users] webput? ftpput?

2006-08-10 Thread Jeff Chua


Is it possible to cache a file upload so that a subsequent download of the
same file will not hit the remote server, but will simply be retrieved from
the local squid cache?


Here's an example ...
- upload a file 'abc' to http://remotehost/remotepath/abc.ext
- squid caches this on the local server

Then do a download ...
- wget //remotehost/remotepath/abc.ext
- squid returns this file from the local cache

One way that I'm considering is ...
- ftp the file to the remote server
- write the file to the squid cache as if it had been retrieved remotely
- then subsequent wget calls will retrieve it from the local squid cache

Has anyone done this?


Thanks,
Jeff