Re: [squid-users] Using squidclient

2010-05-21 Thread Amos Jeffries

Ryan McCain wrote:

I'm trying to use squidclient to get some information on the performance of one 
of our squid boxes.  The version is 2.7.x.

See below..

--

dss-cs99lv02-a:/usr/local/squid/bin # nmap localhost

Starting Nmap 4.00 ( http://www.insecure.org/nmap/ ) at 2010-05-21 14:16 CDT
Interesting ports on localhost (127.0.0.1):
(The 1665 ports scanned but not shown below are in state: closed)
PORT STATE SERVICE
22/tcp   open  ssh
25/tcp   open  smtp
111/tcp  open  rpcbind
427/tcp  open  svrloc
2033/tcp open  glogger
2034/tcp open  scoremgr
8080/tcp open  http-proxy

Nmap finished: 1 IP address (1 host up) scanned in 0.146 seconds
dss-cs99lv02-a:/usr/local/squid/bin # ./squidclient -p8080 mgr:5_min
HTTP/1.0 404 Not Found
Server: squid/2.7.STABLE6
Date: Fri, 21 May 2010 19:17:01 GMT
Content-Type: text/html
Content-Length: 1229
X-Squid-Error: ERR_INVALID_URL 0
X-Cache: MISS from dss-cs99lv02-a
Via: 1.0 dss-cs99lv02-a:8080 (squid/2.7.STABLE6)
Connection: close

As you can see, Squid is running on port 8080 of the local box; however, 
when I run squidclient it displays the HTML error page rather than 
the metric I am looking for.  Any ideas?



Do you have the "manager" ACLs configured?
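
For reference, the stock squid.conf normally contains cache manager lines 
roughly like the following (the exact localhost ACL syntax varies between 
versions); they are needed for cache_object:// requests to be answered:

acl manager proto cache_object
acl localhost src 127.0.0.1/32
http_access allow manager localhost
http_access deny manager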

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.3


Re: [squid-users] Squid 3.1 rejecting connections after a few thousand requests

2010-05-21 Thread Amos Jeffries

alter...@gmail.com wrote:

Hi, I've run into problems after upgrading 3.0.STABLE19 (installed from 
packages) to squid 3.1
I'm running amd64 8.0-RELEASE FreeBSD, with squid as accelerated proxy.

3.0.STABLE19 runs almost flawlessly. I'm getting 'Select loop Error' every 
second:
2010/05/21 14:37:34| Select loop Error. Retry 1

and these errors once in a while in my cache.log:
2010/05/21 14:39:14| comm_old_accept: FD 14: (53) Software caused connection 
abort
2010/05/21 14:39:14| httpAccept: FD 14: accept failure: (53) Software caused 
connection abort


I've never run into such problems on Debian Squeeze (also with squid3.0), so I really don't know if I can ignore them. 
I have successfully tested 3.0.STABLE19 on FreeBSD with 2500 hits/s. 



Wow. Sure that's hits/sec and not hits/minute?
The 'extreme' setups of Squid-2.7 only reached 990 req/sec.




After a while I tried to upgrade to the newest version of squid: I tried squid-3.1.3 from ports, and squid-3.1.0.13 
from packages. Both versions, after handling a few thousand requests, stop serving on the specified port.


Here is my configuration; squid listens on 2 ports:

netstat -an |grep LISTEN
tcp4   0  0 *.8080 *.*LISTEN
tcp4   0  0 *.80   *.*LISTEN

'All' requests go to :8080; I configured port :80 only for testing. After a few thousand requests to :8080, squid stops 
handling requests coming from that port. If I telnet to :8080 my connection is closed instantly, but if I send a request to 
:80 everything is fine.


Here are excerpts from cache.log, after I saw that squid doesn't serve anything 
I stopped it:
2010/05/20 12:09:56| Preparing for shutdown after 7460 requests
2010/05/20 13:00:19| Preparing for shutdown after 8843 requests
2010/05/21 14:10:37| Preparing for shutdown after 9963 requests

While trying two 3.1 versions of squid I also saw 'Select loop Error. Retry 1'



FWIW; the only other occurrence of this particular "Select loop Error" 
reported in recent years was found to be due to broken NIC drivers.


The behaviour sounds very much like some such bug has been hit, or maybe 
a limit on the open ports per IP.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.3


Re: [squid-users] 71 Protocol Error when using SSL

2010-05-21 Thread Amos Jeffries

Edoardo COSTA SANSEVERINO wrote:


Hi all,

I had a look in the archives and the only similar problem I found was 
never answered 
(http://www.squid-cache.org/mail-archive/squid-users/200801/0152.html) 
so I hope someone can help me.  I posted this request on 
linuxquestions.org but got no reply so I thought I'd be better off 
asking you guys ;)


I tried to get a reverse proxy working with apache mod_proxy but that 
failed, so I'm giving squid3 a go, with not much more luck.  All 
connections to non-SSL websites work fine.  The following error I 
get *only the second time* I access the page; the first time the 
page is displayed properly!  This does not make sense to me but maybe it 
will to one of you.


ERROR
The requested URL could not be retrieved

While trying to retrieve the URL: https://deb01.example.com/

The following error was encountered:
Connection to 192.168.122.11 Failed

The system returned:
(71) Protocol error

The remote host or network may be down. Please try the request again.

Your cache administrator is webmaster.
Generated Thu, 20 May 2010 18:58:28 GMT by localhost (squid/3.0.STABLE8)

My setup

                     +--> (deb02) vhosts running multiple http
                     |
[WWW] -> KVM/SQUID ->+--> (deb01) vhost running a single https
                     |
                     +--> (deb03) vhosts running multiple http and one https

My squid.conf
-

https_port 443 accel cert=/etc/ssl/deb01.example.com.crt 
key=/etc/ssl/deb01.example.com.pem defaultsite=deb01.example.com vhost 
protocol=https

http_port 80 accel defaultsite=deb02.example.com vhost

cache_peer 192.168.122.11 parent 443 0 no-query originserver login=PASS 
ssl sslversion=3 sslflags=DONT_VERIFY_PEER front-end-https=on name=srv01

cache_peer 192.168.122.2 parent 80 0 no-query originserver name=srv02

acl https proto https
acl sites_srv01 dstdomain deb01.example.com
acl sites_srv02 dstdomain deb02.example.com second.example.com

http_access allow sites_srv01
http_access allow sites_srv02
cache_peer_access srv01 allow sites_srv01
cache_peer_access srv02 allow sites_srv02

forwarded_for on
---

The first 'successful' connection gives the following entries in the logs:

-BEGIN SSL SESSION PARAMETERS-
MIGIAgEBAgIDAQQCADUEIDrfJnfrcvWw15QVzrwAlKJYsrinM/X+Ge9aeTyO8Fkx
BDBLAPhbkN6LTcdvHMF9YGm8ib5Qwjm05qP3rr7I+LBjpikfjzV5gJSXLfke83U0
ggOhBgIES/WH8aIEAgIBLKQCBACmFQQTZGViMDEucHJlY29nbmV0LmNvbQ==
-END SSL SESSION PARAMETERS-
2010/05/20 21:05:21| 192.168.122.11 digest requires version 17487; have: 5
2010/05/20 21:05:21| temporary disabling (invalid digest cblock) digest 
from 192.168.122.11


Normal. It simply means the web server does not understand the proxy cache 
digest exchange. This can be silenced by adding the no-digest option to the 
web server's cache_peer line.
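
Applied to the cache_peer line quoted from the configuration above, that would 
look like:

cache_peer 192.168.122.11 parent 443 0 no-query no-digest originserver login=PASS 
ssl sslversion=3 sslflags=DONT_VERIFY_PEER front-end-https=on name=srv01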


2010/05/20 21:05:21| fwdNegotiateSSL: Error negotiating SSL connection 
on FD 16: error:1408F06B:SSL routines:SSL3_GET_RECORD:bad decompression 
(1/-1/0)

2010/05/20 21:05:21| TCP connection to 192.168.122.11/443 failed
[...]


There is your HTTPS problem. Your SSL system libraries are producing 
that error when they can't handle the settings.


http://google.com/search?q=SSL3_GET_RECORD%3Abad+decompression
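
As a side check, the SSL handshake can be exercised outside Squid with the 
OpenSSL command-line client (assuming the openssl binary is available on the 
proxy host):

openssl s_client -connect 192.168.122.11:443 -ssl3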


2010/05/20 21:05:21| fwdNegotiateSSL: Error negotiating SSL connection 
on FD 16: error:1408F06B:SSL routines:SSL3_GET_RECORD:bad decompression 
(1/-1/0)

2010/05/20 21:05:21| TCP connection to 192.168.122.11/443 failed
2010/05/20 21:05:21| fwdNegotiateSSL: Error negotiating SSL connection 
on FD 16: error:1408F06B:SSL routines:SSL3_GET_RECORD:bad decompression 
(1/-1/0)

2010/05/20 21:05:21| TCP connection to 192.168.122.11/443 failed






The second 'failed' connection shows the following log events:


==> /var/log/squid3/cache.log <==
2010/05/20 21:06:11| fwdNegotiateSSL: Error negotiating SSL connection 
on FD 15: error:1408F06B:SSL routines:SSL3_GET_RECORD:bad decompression 
(1/-1/0)

[...]
2010/05/20 21:06:12| fwdNegotiateSSL: Error negotiating SSL connection 
on FD 15: error:1408F06B:SSL routines:SSL3_GET_RECORD:bad decompression 
(1/-1/0)

2010/05/20 21:06:12| TCP connection to 192.168.122.11/443 failed
2010/05/20 21:06:12| fwdNegotiateSSL: Error negotiating SSL connection 
on FD 15: error:1408F06B:SSL routines:SSL3_GET_RECORD:bad decompression 
(1/-1/0)

2010/05/20 21:06:12| TCP connection to 192.168.122.11/443 failed




store.log is irrelevant to most uses. You can safely set it to "none" in 
your squid.conf file.
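
In squid.conf that is a single line:

cache_store_log none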




Any help would be greatly apreciated.


As a side note.  If anyone can tell me how to show the IP of the squid 
server rather than the internal IP of the webserver (as in the error) 
that would be a bonus ;)


The error is correct.

The link client->squid is working.

The link squid->server (via the internal IPs) is failing.

Thus you get a report telling you which of the two links has failed. 
Changing that will only make you look in the wrong place when something 
fails.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.3

[squid-users] Re: Squid3 Reverse Proxy based on url path

2010-05-21 Thread rainolf

Mm... I understand your point of view...

So... what do you suggest in order to have a better configuration?

webportal.domain.com is the dns name with NAT that points to the reverse proxy and
should contain a small html page where I will have the links to the other
servers...

webportal.domain.com -> portal page on localhost ( href
to the following internal servers )
x1.domain.com -> 1st internal server
x2.domain.com -> 2nd internal server
webportal.domain.com/hrm -> 3rd internal server that hosts the
/hrm folder
webportal.domain.com/fax -> 4th internal server that hosts the
/fax folder

That's what I want to do...
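
A sketch of one way to express that layout in squid.conf (all peer addresses 
and ACL names here are assumed, not taken from the thread):

acl portal dstdomain webportal.domain.com
acl hrm urlpath_regex ^/hrm
acl fax urlpath_regex ^/fax

cache_peer 127.0.0.1 parent 8081 0 no-query originserver name=portalsrv
cache_peer 192.168.0.3 parent 80 0 no-query originserver name=hrmsrv
cache_peer 192.168.0.4 parent 80 0 no-query originserver name=faxsrv

cache_peer_access hrmsrv allow portal hrm
cache_peer_access hrmsrv deny all
cache_peer_access faxsrv allow portal fax
cache_peer_access faxsrv deny all
cache_peer_access portalsrv allow portal !hrm !fax
cache_peer_access portalsrv deny all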


-- 
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid3-Reverse-Proxy-based-on-url-path-tp2197692p2226742.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Content-Type log error

2010-05-21 Thread Henrik Nordström
Fri 2010-05-21 at 17:39 -0300, Romulo Boschetti wrote:
> Hi Henrik, 
> 
> Yes, I am talking about responses denied by http_reply_access. 

Ok.

Yes, it would be useful to log the received content-type there. Or in
fact to be able to log anything about the incoming response.
Unfortunately not supported at the moment.

> I believe the post below refers to exactly my problem, but did not understand 
> how to apply the patch: 
> http://www.mail-archive.com/squid-users@squid-cache.org/msg36285.html 

No, that's very, very old. It's from the development of the
access_log_format directive. At the time of that message the directive
did not exist in Squid at all, only as a separate patch.

Regards
Henrik



Re: [squid-users] Re: Squid3 Reverse Proxy based on url path

2010-05-21 Thread Henrik Nordström

Fri 2010-05-21 at 13:24 -0700, rainolf wrote:

> hrm is on another server and it is responding at xxx.xxx.xxx.xxx/dbghrm 
> works ok
> ftp is on another server again and it's responding at xxx.xxx.xxx.xxx/webftp
> works ok

My concern is not where the servers are, but which URLs should be sent
to which servers. Your configuration is a little inconsistent on this,
allowing some kinds of URLs to be forwarded to several different classes
of servers, which I doubt is what you want.

The rules you have for dbg & hrm apply to every site handled by your
Squid, even those where you also have other rules saying where they
should get forwarded. Every cache_peer whose cache_peer_access rules
evaluate to allow will be a candidate for forwarding the request.

> all works well except localhost ( apache on the squid server )... all should
> answer on port 80 

I could not see any obvious issues with your localhost apache forwarding.

Regards
Henrik



Re: [squid-users] Content-Type log error

2010-05-21 Thread Romulo Boschetti

Hi Henrik, 

Yes, I am talking about responses denied by http_reply_access. 

But unfortunately I can't use the following configuration, because it did not 
work either: 

acl videos rep_header Content-Type -i "/opt/hsc/webcontrol/squid/etc/str.txt" 
http_reply_access deny videos 
logformat mime l-s-85 %>a %tr %st %un %mt %Ss %ru %ea %rm %{Content-Type}
http://www.mail-archive.com/squid-users@squid-cache.org/msg36285.html 

Thanks for you help . 






Best regards, 

__ 

Rômulo Giordani Boschetti 

IT Consulting - InterOp 

telephone 55 (51) 3216-7030 – Porto Alegre 
telephone 55 (11) 4063-7881 – São Paulo 
telephone 55 (41) 4063-7881 – Curitiba 
fax 55 (51) 3216-7001 
site www.interop.com.br   email rom...@interop.com.br 
___ ___ 

- Original message - 
From: "Henrik Nordström"  
To: "Romulo Boschetti"  
Cc: squid-users@squid-cache.org, "Amos Jeffries"  
Sent: Friday, 21 May 2010 16:36:28 
Subject: Re: [squid-users] Content-Type log error 

Fri 2010-05-21 at 15:15 -0300, Romulo Boschetti wrote: 

> For Squid to deny one specified mime-type, it needs to know what this mime-type is. 
> Correct? 

Are we talking about responses denied by http_reply_access? 

Regards 
Henrik 



[squid-users] Re: Squid3 Reverse Proxy based on url path

2010-05-21 Thread rainolf

webportal.domain.com should be the apache on the same server as the squid
reverse proxy. It should answer on localhost...

hrm is on another server and it is responding at xxx.xxx.xxx.xxx/dbghrm 
works ok
ftp is on another server again and it's responding at xxx.xxx.xxx.xxx/webftp
works ok

all works well except localhost ( apache on the squid server )... all should
answer on port 80 
-- 
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid3-Reverse-Proxy-based-on-url-path-tp2197692p2226688.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Content-Type log error

2010-05-21 Thread Henrik Nordström
Fri 2010-05-21 at 15:15 -0300, Romulo Boschetti wrote:

> For Squid to deny one specified mime-type, it needs to know what this mime-type is. 
> Correct? 

Are we talking about responses denied by http_reply_access?

Regards
Henrik



Re: [squid-users] Re: Squid3 Reverse Proxy based on url path

2010-05-21 Thread Henrik Nordström
Fri 2010-05-21 at 04:58 -0700, rainolf wrote:

> cache_peer xxx.xxx.xxx.xxx parent 8080 0 no-query originserver name=domain4
> cache_peer_access domain4 allow portal
> cache_peer_access domain4 deny all
> 
> cache_peer xxx.xxx.xxx.xxx parent 80 0 no-query originserver name=dbghrm
> cache_peer_access dbghrm allow hrm
> cache_peer_access dbghrm deny all

Some issues here...

http://webportal.domain.com/dbghrm will get sent to both peers
above. Maybe not what you want.

Make sure that, per peer, you only allow exactly the URLs that should get
forwarded to that peer, and exclude the URLs that SHOULD NOT get
forwarded to it where there is overlap.

It is not clear to me which URLs should be sent to each server, but
assuming here that hrm is in the webportal but should be sent to a
different server than the rest of the portal, I would use


cache_peer_access domain4 deny hrm
cache_peer_access domain4 allow portal

cache_peer_access dbghrm allow portal hrm


Regards
Henrik





Re: [squid-users] Re: Advices for a squid cluster with kerberos auth

2010-05-21 Thread Henrik Nordström
Fri 2010-05-21 at 11:31 +0100, Nick Cairncross wrote:

> Has anyone had success using Service Location records in DNS for different 
> sites? I would be interested to hear about it..

Do you mean SRV records?

HTTP is not yet using an SRV profile, and I don't see it as likely that SRV
support will appear for HTTP any time soon (where soon is a
decade), even though most other protocols have by now switched over to using
SRV to locate their servers.
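
For readers unfamiliar with them, an SRV record (RFC 2782) in a zone file 
looks like this; the fields after the type are priority, weight, port and 
target (the names here are only illustrative):

_http._tcp.example.com. 3600 IN SRV 10 60 8080 proxy1.example.com.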

Regards
Henrik



[squid-users] Using squidclient

2010-05-21 Thread Ryan McCain
I'm trying to use squidclient to get some information on the performance of one 
of our squid boxes.  The version is 2.7.x.

See below..

--

dss-cs99lv02-a:/usr/local/squid/bin # nmap localhost

Starting Nmap 4.00 ( http://www.insecure.org/nmap/ ) at 2010-05-21 14:16 CDT
Interesting ports on localhost (127.0.0.1):
(The 1665 ports scanned but not shown below are in state: closed)
PORT STATE SERVICE
22/tcp   open  ssh
25/tcp   open  smtp
111/tcp  open  rpcbind
427/tcp  open  svrloc
2033/tcp open  glogger
2034/tcp open  scoremgr
8080/tcp open  http-proxy

Nmap finished: 1 IP address (1 host up) scanned in 0.146 seconds
dss-cs99lv02-a:/usr/local/squid/bin # ./squidclient -p8080 mgr:5_min
HTTP/1.0 404 Not Found
Server: squid/2.7.STABLE6
Date: Fri, 21 May 2010 19:17:01 GMT
Content-Type: text/html
Content-Length: 1229
X-Squid-Error: ERR_INVALID_URL 0
X-Cache: MISS from dss-cs99lv02-a
Via: 1.0 dss-cs99lv02-a:8080 (squid/2.7.STABLE6)
Connection: close

ERROR: The requested URL could not be retrieved

While trying to retrieve the URL:
cache_object://localhost.dss.state.la.us/5_min

The following error was encountered:

Invalid URL

Some aspect of the requested URL is incorrect.  Possible problems:

Missing or incorrect access protocol (should be `http://' or similar)
Missing hostname
Illegal double-escape in the URL-Path
Illegal character in hostname; underscores are not allowed

Your cache administrator is webmaster.

Generated Fri, 21 May 2010 19:17:01 GMT by dss-cs99lv02-a (squid/2.7.STABLE6)


dss-cs99lv02-a:/usr/local/squid/bin #



As you can see, Squid is running on port 8080 of the local box; however, 
when I run squidclient it displays the HTML error page rather than 
the metric I am looking for.  Any ideas?
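
If the manager ACLs are in place, an explicit host/port invocation of 
squidclient can serve as a smoke test, e.g.:

./squidclient -h 127.0.0.1 -p 8080 mgr:info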



Re: [squid-users] Content-Type log error

2010-05-21 Thread Romulo Boschetti
Hi Amos, 

First, thanks for your help. 

For Squid to deny one specified mime-type, it needs to know what this mime-type is. 
Correct? 

This is my cache log, where Squid detects the Mime-Type of the requested web 
page: 

2010/05/21 09:29:50| aclMatchRegex: looking for 
'^application/x-shockwave-flash$' 

With this information, couldn't Squid write the correct mime-type into the 
log? 

Thanks, 

PS: Sorry for my bad English... :-) 





Best regards, 

Rômulo Giordani Boschetti 

- Original message - 
From: "Amos Jeffries"  
To: squid-users@squid-cache.org 
Sent: Friday, 21 May 2010 10:21:26 
Subject: Re: [squid-users] Content-Type log error 

Romulo Boschetti wrote: 
> Hi Amos, 
> 
> I removed the "logformat" line from my config file: 
> 
> acl video rep_mime_type -i ^video/*$ 
> http_reply_access deny video 
> access_log /opt/hsc/webcontrol/log/access.log 
> 
> But my problem persists. When I access a youtube video the Mime-type in the 
> log is incorrect. This only happens when the access is denied. 
> 

Oh. gotcha. Right. 

What you see is because what gets logged is the mime type of the content 
actually sent to the client as a reply. 

In this case a text/html error page was sent back. There is no video 
involved beyond the URL. 


> My Log Denied: 
> 127734.862 35 192.168.1.102 TCP_DENIED/403 1891 GET 
> http://s.ytimg.com/yt/swf/watch-vfl165272.swf paulo DIRECT/201.47.0.52 
> text/html 
> 
> My Log Accept: 
> 1274445021.663 12066 192.168.1.102 TCP_MISS/200 148805 GET 
> http://s.ytimg.com/yt/swf/ad-vfl165210.swf paulo DIRECT/201.47.0.52 
> application/x-shockwave-flash 
> 
> That way I cannot see when anybody tries to access any prohibited content. 
> 

You will have to do something to look at the denials and their URLs if 
you want to see what has been successfully denied. There is no reliable 
way to know what the content was without having already received it. 

> 
> - Mensagem original - 
> De: "Amos Jeffries" 
> 
> Romulo Boschetti wrote: 
>> Hi, 
>> 
>> The content-type field in the access.log is always "-" character. 
>> 
>> The only time I saw "text/html" is when your request is deny. 
>> 
>> Linux CentOs 5.4 
>> Squid Cache: Version 2.7.STABLE5 
>> 
>> some squid.conf parameters: 
>> 
>> emulate_httpd_log off 
>> acl video rep_mime_type -i ^video/*$ 
>> http_reply_access deny video 
>> logformat squid %ts.%03tu %6tr %>a %Ss/%03Hs %> 
>> access_log /var/log/squid/access.log 
>> 
>> I tried several configuration and I always get the same result. 
>> 
>> I have found the bug http://bugs.squid-cache.org/show_bug.cgi?id=2298 , but 
>> I haven't found more 
>> information about the resolution of this bug, nor any 
>> information about a solution in 
>> version squid-2.7.STABLE5+ . 
>> 
>> Thanks 
> 
> 
> Re-defining the "squid" format is not a good idea. 
> Try removing the "logformat" line from your config file, leaving 
> everything else untouched. 
> 
> Please report back even if that fixes it. I suspect you have found a bug 
> in the custom format handling. 
> 
> 
> Amos 


-- 
Please be using 
Current Stable Squid 2.7.STABLE9 or 3.1.3 


[squid-users] 71 Protocol Error when using SSL

2010-05-21 Thread Edoardo COSTA SANSEVERINO


Hi all,

I had a look in the archives and the only similar problem I found was 
never answered 
(http://www.squid-cache.org/mail-archive/squid-users/200801/0152.html) 
so I hope someone can help me.  I posted this request on 
linuxquestions.org but got no reply so I thought I'd be better off 
asking you guys ;)


I tried to get a reverse proxy working with apache mod_proxy but that 
failed, so I'm giving squid3 a go, with not much more luck.  All 
connections to non-SSL websites work fine.  The following error I 
get *only the second time* I access the page; the first time the 
page is displayed properly!  This does not make sense to me but maybe it 
will to one of you.


ERROR
The requested URL could not be retrieved

While trying to retrieve the URL: https://deb01.example.com/

The following error was encountered:
Connection to 192.168.122.11 Failed

The system returned:
(71) Protocol error

The remote host or network may be down. Please try the request again.

Your cache administrator is webmaster.
Generated Thu, 20 May 2010 18:58:28 GMT by localhost (squid/3.0.STABLE8)

My setup

                     +--> (deb02) vhosts running multiple http
                     |
[WWW] -> KVM/SQUID ->+--> (deb01) vhost running a single https
                     |
                     +--> (deb03) vhosts running multiple http and one https


My squid.conf
-

https_port 443 accel cert=/etc/ssl/deb01.example.com.crt 
key=/etc/ssl/deb01.example.com.pem defaultsite=deb01.example.com vhost 
protocol=https

http_port 80 accel defaultsite=deb02.example.com vhost

cache_peer 192.168.122.11 parent 443 0 no-query originserver login=PASS 
ssl sslversion=3 sslflags=DONT_VERIFY_PEER front-end-https=on name=srv01

cache_peer 192.168.122.2 parent 80 0 no-query originserver name=srv02

acl https proto https
acl sites_srv01 dstdomain deb01.example.com
acl sites_srv02 dstdomain deb02.example.com second.example.com

http_access allow sites_srv01
http_access allow sites_srv02
cache_peer_access srv01 allow sites_srv01
cache_peer_access srv02 allow sites_srv02

forwarded_for on
---

The first 'successful' connection gives the following entries in the logs:

-BEGIN SSL SESSION PARAMETERS-
MIGIAgEBAgIDAQQCADUEIDrfJnfrcvWw15QVzrwAlKJYsrinM/X+Ge9aeTyO8Fkx
BDBLAPhbkN6LTcdvHMF9YGm8ib5Qwjm05qP3rr7I+LBjpikfjzV5gJSXLfke83U0
ggOhBgIES/WH8aIEAgIBLKQCBACmFQQTZGViMDEucHJlY29nbmV0LmNvbQ==
-END SSL SESSION PARAMETERS-
2010/05/20 21:05:21| 192.168.122.11 digest requires version 17487; have: 5
2010/05/20 21:05:21| temporary disabling (invalid digest cblock) digest 
from 192.168.122.11
2010/05/20 21:05:21| fwdNegotiateSSL: Error negotiating SSL connection 
on FD 16: error:1408F06B:SSL routines:SSL3_GET_RECORD:bad decompression 
(1/-1/0)

2010/05/20 21:05:21| TCP connection to 192.168.122.11/443 failed
[...]
2010/05/20 21:05:21| fwdNegotiateSSL: Error negotiating SSL connection 
on FD 16: error:1408F06B:SSL routines:SSL3_GET_RECORD:bad decompression 
(1/-1/0)

2010/05/20 21:05:21| TCP connection to 192.168.122.11/443 failed
2010/05/20 21:05:21| fwdNegotiateSSL: Error negotiating SSL connection 
on FD 16: error:1408F06B:SSL routines:SSL3_GET_RECORD:bad decompression 
(1/-1/0)

2010/05/20 21:05:21| TCP connection to 192.168.122.11/443 failed

==> /var/log/squid3/store.log <==
1274382321.365 RELEASE -1  B4F6358BEF575DB8EE08C9E4544D1ED8  200 
1274382321-1-1 unknown -1/584 GET 
http://192.168.122.11:443/squid-internal-periodic/store_digest
1274382321.394 RELEASE 00  5B2811E3C3DBF846FB471299507A118F   
? ? ? ? ?/? ?/? ? ?
1274382321.394 SWAPOUT 00  5B2811E3C3DBF846FB471299507A118F  200 
1274382321-1-1 x-squid-internal/vary -1/0 GET 
https://deb01.example.com/
1274382321.394 RELEASE 00 0008 00A5F16BB26487A2923FC532D7EAFB78   
? ? ? ? ?/? ?/? ? ?
1274382321.394 SWAPOUT 00 0008 EEC31BDDF7F08E5301417EBDCA25AFFE  200 
1274382319 1273748130-1 text/html 69/69 GET 
https://deb01.example.com/
1274382321.580 RELEASE -1  092DD741F44CA089263CADBF1B57C579  503 
1274382321 0-1 text/html 2166/2166 GET 
https://deb01.example.com/favicon.ico

---


The second 'failed' connection shows the following log events:


==> /var/log/squid3/cache.log <==
2010/05/20 21:06:11| fwdNegotiateSSL: Error negotiating SSL connection 
on FD 15: error:1408F06B:SSL routines:SSL3_GET_RECORD:bad decompression 
(1/-1/0)

[...]
2010/05/20 21:06:12| fwdNegotiateSSL: Error negotiating SSL connection 
on FD 15: error:1408F06B:SSL routines:SSL3_GET_RECORD:bad decompression 
(1/-1/0)

2010/05/20 21:06:12| TCP connection to 192.168.122.11/443 failed
2010/05/20 21:06:12| fwdNegotiateSSL: Error negotiating SSL connection 
on FD 15: error:1408F06B:SSL routines:SSL3_GET_RECORD:bad decompression 
(1/-1/0)

2010/05/20 21:06:12| TCP connection to 192.168.122.11/443 failed

==> /var/log/sq

Re: [squid-users] 2.7 upstream parent (cache_peer) connection reset. Child how to handle?

2010-05-21 Thread James Tan
Hi Amos,

the PoC is for a project involving malware inspection, a personal
project. I tried to chain 2 Squids as part of solution.

The AV perform the check on the wire before actually allowing Parent
Squid to get hold of it.
I.e. Client --> ... ... -> Parent Squid --> AV (inspects HTTP, it it
is 'infected', do a "TCP Disconnect" as seen on Sysinternals Procmon)
--> Website
*There was no "TCP Disconnect" for 'clean' pages.

>From what I observe when the client is directly connected to the
Parent Squid, I got the following message in Parent.
I am OK with this message in Parent, but how can I let the Child also
know that and display similar message when Parent got it instead of
hung?

---
ERROR
The requested URL could not be retrieved

While trying to retrieve the URL: http://www.eicar.org/download/eicar.com.txt

The following error was encountered:

   * Read Error

The system returned:

   (10054) WSAECONNRESET, Connection reset by peer.

An error condition occurred while reading data from the network.
Please retry your request.

Your cache administrator is webmaster.
Generated Fri, 21 May 2010 15:29:41 GMT by test-caf801f8d2 (squid/2.7.STABLE8)
---


thanks,
James Tan


[squid-users] Squid 3.1 rejecting connections after a few thousand requests

2010-05-21 Thread alter...@gmail.com
Hi, I've run into problems after upgrading 3.0.STABLE19 (installed from 
packages) to squid 3.1
I'm running amd64 8.0-RELEASE FreeBSD, with squid as accelerated proxy.

3.0.STABLE19 runs almost flawlessly. I'm getting 'Select loop Error' every 
second:
2010/05/21 14:37:34| Select loop Error. Retry 1

and these errors once in a while in my cache.log:
2010/05/21 14:39:14| comm_old_accept: FD 14: (53) Software caused connection 
abort
2010/05/21 14:39:14| httpAccept: FD 14: accept failure: (53) Software caused 
connection abort


I've never run into such problems on Debian Squeeze (also with squid3.0), so I 
really don't know if I can ignore them. 
I have successfully tested 3.0.STABLE19 on FreeBSD with 2500 hits/s. 


After a while I tried to upgrade to the newest version of squid: I tried 
squid-3.1.3 from ports, and squid-3.1.0.13 
from packages. Both versions, after handling a few thousand requests, 
stop serving on the specified port.

Here is my configuration; squid listens on 2 ports:

netstat -an |grep LISTEN
tcp4   0  0 *.8080 *.*LISTEN
tcp4   0  0 *.80   *.*LISTEN

'All' requests go to :8080; I configured port :80 only for testing. After a few 
thousand requests to :8080, squid stops 
handling requests coming from that port. If I telnet to :8080 my connection is 
closed instantly, but if I send a request to 
:80 everything is fine.

Here are excerpts from cache.log, after I saw that squid doesn't serve anything 
I stopped it:
2010/05/20 12:09:56| Preparing for shutdown after 7460 requests
2010/05/20 13:00:19| Preparing for shutdown after 8843 requests
2010/05/21 14:10:37| Preparing for shutdown after 9963 requests

While trying two 3.1 versions of squid I also saw 'Select loop Error. Retry 1'



Re: [squid-users] refresh patterns for Caching Media

2010-05-21 Thread Amos Jeffries

Jumping Mouse wrote:




From: kafr...@hotmail.com
To: squid-users@squid-cache.org
Date: Wed, 19 May 2010 18:35:44 +0200
Subject: [squid-users] refresh patterns for Caching Media


Hello eveyone,
We are using Squid 2.7 for caching educational media files.   We are only using 
the cache for users who need to access these files.   For other internet 
traffic the cache will be bypassed.
The media files will not be changed for at least a year at which point I will 
run a script to pre-load the cache with the new media files.

1. How can I set the refresh pattern to never refresh these media files?   The 
files are swf (flash) flv, and mp3, etc.
This is what I currently have for media:

refresh_pattern -i \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|x-flv)$ 43200 90% 432000 
override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache 
ignore-no-store ignore-private

2. If  I have already pre-loaded media files into the cache, will changes to 
the refresh patterns work retroactively on these files, or will I have to load 
them into the cache again?

Thanks.

Kafriki




Does anyone have any suggestions or recommendations that they can share with me on this?


Sorry, my first reply seems to have gone astray completely.

re: 1)  the pattern and rules you have are about as good as you can get 
in squid. The core design of Squid caching is to provide up-to-date 
copies rather than archiving stale/obsolete garbage.
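
One small tweak, assuming a one-year lifetime really is the target: the max 
field of refresh_pattern is given in minutes, and 365 days is 525600 minutes 
(365 * 24 * 60), so the quoted rule could be raised to:

refresh_pattern -i \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|x-flv)$ 43200 90% 525600 
override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache 
ignore-no-store ignore-private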


A better way to do this for such long times would probably be to 
download the data into local directories and set up your own local web 
server for them. Squid can be configured to fetch listed URLs from a 
specific cache_peer source without keeping its own duplicate copies 
(proxy-only option) or going to an external source (never_direct).
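
A minimal sketch of that arrangement (the peer address and the ACL name here 
are assumed, not taken from the thread):

cache_peer 192.168.0.10 parent 80 0 no-query originserver proxy-only name=mediasrv
acl mediafiles urlpath_regex -i \.(swf|flv|mp3|mp4)$
cache_peer_access mediasrv allow mediafiles
never_direct allow mediafiles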


Then you would have zero worries about losing things out of the cache.


re: 2) Yes, they work at the point of re-request or garbage collection 
(which also happens on startup/reconfigure), so they are retroactive on 
whatever is in the cache at that time.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.3


Re: [squid-users] Running Multiple instances and reporting confusion.

2010-05-21 Thread Amos Jeffries

GIGO . wrote:

Hi all,

I am running multiple instances of squid on the same machine. One
instance is taking the clients' requests and forwarding them to its parent
peer at 127.0.0.1. All is going well. However there is some confusion
related to reporting through sarg. To capture the client activity,
sarg is parsing the access.log file of the user-facing instance,
which is correct. However it obviously depicts wrong in-cache/
out-cache figures, as those values should come from the instance
which is actually doing the caching.

Is there a way/trick to manage this? Is it possible that a cache hit
from a parent cache be recorded as in-cache in the child?



The parent cache with the hier_code ACL type may be able to log only the 
requests that did not get sent to the child.


The child cache using follow_x_forwarded_for trusting the parent proxy 
and log_uses_indirect_client should be able to log the remote client IP 
which connected to the parent with its received requests.
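
A sketch of that trust configuration on the instance that only ever sees 
connections from 127.0.0.1 (the ACL name is assumed):

acl frontend src 127.0.0.1
follow_x_forwarded_for allow frontend
log_uses_indirect_client on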


Combining the parent and child proxies logs line-wise for analysis 
should then give you the result you want.


That combination is a bit tricky though, since we have only just added 
TCP reliable logging to Squid-3.2. UDP logging is available for 2.7 and 
3.1, but may result in some lost records under high load. With either of 
those methods you just need a daemon to receive the log traffic and 
store it in the one file.
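As a sketch of that line-wise combination, assuming both files use the default "squid" log format (whose first field is an epoch timestamp) and that each file is already in time order; any script like this is an illustration, not part of Squid itself:

```python
#!/usr/bin/env python3
"""Merge a parent and a child Squid access.log into one time-ordered stream."""
import heapq

def parse_ts(line):
    # The first whitespace-separated field of the default "squid"
    # log format is the request time, e.g. "1274445021.663".
    return float(line.split(None, 1)[0])

def merge_logs(*logs):
    # heapq.merge performs a lazy k-way merge over inputs that are
    # individually sorted, which access.log files naturally are.
    return heapq.merge(*logs, key=parse_ts)
```

Usage would be opening the parent and child log files and iterating over `merge_logs(parent_file, child_file)`; because the merge is lazy, arbitrarily large logs can be combined without loading them into memory.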


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.3


[squid-users] Re: [squid users] parent proxy authentication

2010-05-21 Thread jyothi
I am still facing the problem, can anybody help me out?


-- Forwarded Message ---
From: "jyothi" 
To: Amos Jeffries 
Sent: Sun, 16 May 2010 20:25:57 +0630
Subject: Re: [squid users] parent proxy authentication

On Mon, 17 May 2010 01:23:55 +1200, Amos Jeffries wrote
> jyothi wrote:
> > Hi ,
> > 
> >  To forward the requests from squid proxy to my insti proxy I have added the
> > following lines to the squid.conf file. (I am using Ubuntu, squid version 
> > 2.7 and 
> > the Firefox browser).
> > 
> > cache_peer 10.65.0.32 parent 3128 3130 default no-query login=PASS
> > never_direct allow all
> > 
> > 
> > But when I tried to open any web page in the browser it kept asking me for 
> > username and
> > password. If I click cancel it was showing the following messages.
> > 
> > The following error was encountered:
> > 
> >  Cache Access Denied
> > 
> > Sorry, you are not currently allowed to request:
> >
> >   http://www.google.com/
> > 
> >  from this cache until you have authenticated yourself.
> > 
> > 
> > When I invoke squid from the terminal with debug messages it was showing the 
> > following message.
> > 
> > temporary disabling ( Proxy Authentication Required) digest from 10.65.0.32
> >
> 
> Ah this is a background cache digest request. Squid does not have any 
> credentials of its own to pass out. To prevent these you will need to 
> add the options:
>no-digest no-netdb-exchange

These options I have tried, but still it didn't work for me.

> 
> > 
> > How do I get rid of this? I tried all possible ways that I could find from 
> > googling, but nothing worked. I have even tried putting my proxy username and 
> > password in the squid.conf file itself (with login=uname:passwd) but this also 
> > didn't work. I would be very grateful if you could solve this problem of mine.
> > 
> > Thanks
> > Jyothi
> >
> 
> Hmm. What type of authentication is the parent requiring?

I didn't get your question; type of authentication in what sense?
It needs a username and password, as provided by the institute proxy 
(proxy.iitm.ac.in). I don't have many details about it.

> 
> Amos
> -- 
> Please be using
>Current Stable Squid 2.7.STABLE9 or 3.1.3
--- End of Forwarded Message ---



Re: [squid-users] Cannot connect to squid port intermittently

2010-05-21 Thread Amos Jeffries

Tejpal Amin wrote:

Hi ,

Disabling iptables has had no effect; the problem of slow
performance still exists.

Can anybody help me on this?
I am not able to figure out why squid is not able to accept connections.
It still times out when I try to telnet to squid port 3128 from the
squid box itself.

Regards
Tejpal



On Tue, May 18, 2010 at 12:31 PM, Tejpal Amin  wrote:

Hi Nathan,

The number of file descriptors doesn't seem to be an issue since there is
no entry about that in the cache.log.
How do I check if the problem lies in iptables?

Warm Regards
Tej


On Fri, May 14, 2010 at 6:49 PM, "٩๏̯͡๏۶ ̿ ̿ ̿ ̿ ̿̿’\\̵͇̿̿\\=(•̪●)‏
Nathan Ridge"  wrote:

 Hi Tej,

check your file descriptors and also iptables conntrack, I have had problems
in the past when either of these two settings are too low.

Regards

On 14/05/10 11:11 PM, Tejpal Amin wrote:

Hi,

I am having performance issues with squid; sometimes users get "page
cannot be displayed".
During the troubleshooting I found that doing a telnet to port 3128
from my squid box itself times out, or connects only after some time.

I need help desperately on this.

Warm Regards
Tej




You will need to take a deep look at cache.log, beyond the resource 
overloads already mentioned (FD or iptables conntracks).

Intermittent loss like this can also be due to a log rotation action, 
restarting, rebuilding the index on a huge cache, or when doing garbage 
collection on a very large cache.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.3


Re: [squid-users] very slow browsing and page is not displaed properly

2010-05-21 Thread Amos Jeffries

abdul sami wrote:

Thank you heaps for reply.

I have revisited the squid.conf and have made the necessary changes.

I would try to revise the RAID too.

About the size of the cache, please let me know what size would be best 
for performance.




Unknown. A few days' to a week's worth of your network's cacheable HTTP 
traffic. Or as much as you can give Squid.





On Fri, May 21, 2010 at 1:44 PM, Amos Jeffries wrote:


goody goody wrote:

Hi,

Squid GURUs, Your response is required, please.

Regards,
.Goody.


- Original Message 
From: goody goody <think...@yahoo.com>
To: squid-users@squid-cache.org 
Sent: Fri, May 21, 2010 1:52:23 AM
Subject: Re: [squid-users] very slow browsing and page is not
displaed properly

Dear Members,

In addition to below information, I have added some more info
regarding machine hardware and platform.
RAM = 4 GB
Processors = 4
HDDs: SATA with RAID-5 implemented

Running on VMWARE ESXi 3.5.

Should you need any info, pls let me know.

Waiting for your expert opinion, please.


http://wiki.squid-cache.org/SquidFaq/RAID

45GB cache on RAID-5. ouch.

If you are really absolutely forced to use RAID at all, go for RAID-1.

For best performance go with JBOD and do away with half the physical
level IO.

I'd also recommend an OS other than *BSD for AUFS. There are some
write problems on BSD that apparently slow it down.

My refresh_pattern settings are as below; please also have a look at
them,


refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320
refresh_pattern -i downloads    99      99%     60 override-expire override-lastmod
 
Best Regards,

.Goody.



Place your own refresh_patterns above the list of defaults. Squid will 
always stop processing at the default "." pattern which matches 
EVERYTHING in existence and prevents any of your own rules which follow 
it from ever being used.
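For illustration, a corrected ordering would look like this (the "downloads" pattern and its times are taken from the post above and should be treated as placeholders):

```
# Site-specific rules first:
refresh_pattern -i downloads    99      99%     60 override-expire override-lastmod

# Built-in defaults last. The "." pattern matches everything,
# so any rule placed after it is never evaluated.
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320
```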


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.3


[squid-users] Re: Squid3 Reverse Proxy based on url path

2010-05-21 Thread rainolf

Sorry, but it's not completely clear to me...

How can my config be modified in order to have a working setup?

Sorry to bore you, but I'm a newbie...


-- 
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid3-Reverse-Proxy-based-on-url-path-tp2197692p2226139.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Content-Type log erro

2010-05-21 Thread Amos Jeffries

Romulo Boschetti wrote:
Hi Amos, 

I removed the "logformat" line from my config file: 

acl video rep_mime_type -i ^video/*$ 
http_reply_access deny video 
access_log /opt/hsc/webcontrol/log/access.log 

But my problem persists. When I access a youtube video the Mime-type in the log is incorrect. This only happens when the access is Denied. 



Oh. gotcha. Right.

What you see is because what gets logged is the mime type of the content 
actually sent to the client as a reply.


In this case a text HTML error page was sent back. There is no video 
involved beyond the URL.



My Log Denied: 
127734.862 35 192.168.1.102 TCP_DENIED/403 1891 GET http://s.ytimg.com/yt/swf/watch-vfl165272.swf paulo DIRECT/201.47.0.52 text/html 

My Log Accept: 
1274445021.663 12066 192.168.1.102 TCP_MISS/200 148805 GET http://s.ytimg.com/yt/swf/ad-vfl165210.swf paulo DIRECT/201.47.0.52 application/x-shockwave-flash 

That way I cannot see when anybody tries to access prohibited content. 



You will have to do something to look at the denials and their URL if 
you want to see what has been successfully denied. There is no reliable 
way without having received the content already to know what it was.
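For example, a small post-processing script over access.log can report which URLs were denied and by whom. This is only a sketch; the field positions assume the default "squid" log format:

```python
#!/usr/bin/env python3
"""Report the URL and user of every denied request in a Squid access.log."""

def denied_requests(lines):
    hits = []
    for line in lines:
        fields = line.split()
        # Default "squid" format fields:
        # time elapsed client action/code size method URL user hierarchy/from type
        if len(fields) >= 8 and fields[3].startswith("TCP_DENIED/"):
            hits.append((fields[6], fields[7]))
    return hits
```

Feeding it an open access.log file object and printing each (URL, user) pair gives a simple "attempted but denied" report, without needing the (never received) content type.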




- Original Message - 
From: "Amos Jeffries" 

Romulo Boschetti wrote: 
Hi, 

The content-type field in the access.log is always the "-" character. 

The only time I see "text/html" is when a request is denied. 

Linux CentOs 5.4 
Squid Cache: Version 2.7.STABLE5 

some squid.conf parameters: 

emulate_httpd_log off 
acl video rep_mime_type -i ^video/*$ 
http_reply_access deny video 
logformat squid %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt 
access_log /var/log/squid/access.log 

I tried several configuration and I always get the same result. 

I have found the bug http://bugs.squid-cache.org/show_bug.cgi?id=2298 , but I haven't 
found more information about the resolution of this bug, nor any information 
about a fix in version squid-2.7.STABLE5+ . 

Thanks 



Re-defining the "squid" format is not a good idea. 
Try removing the "logformat" line from your config file, leaving 
everything else untouched. 

Please report back even if that fixes it. I suspect you have found a bug 
in the custom format handling. 



Amos 



--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.3


Re: [squid-users] Re: Squid3 Reverse Proxy based on url path

2010-05-21 Thread Amos Jeffries

rainolf wrote:

Hi,
I've changed my configuration and now it seems to work like I want... I
eliminated the fake URL and replaced it with the right DNS records.

This is my configuration:


http_port 80 vhost vport=90 protocol=http defaultsite=webportal.domain.com
acl http proto http
acl port80 port 80
acl domain2_com dstdomain world.webmail.domain.com
acl domain1_com dstdomain italy.webmail.domain.com
#acl domain3_com dstdomain webportal.domain.com
acl hrm urlpath_regex ^/dbghrm
acl fax urlpath_regex ^/avantfax
acl ftp urlpath_regex ^/webftp
acl portal urlpath_regex ^/webportal

http_access allow all

cache_peer xxx.xxx.xxx.xxx parent 80 0 no-query originserver name=domain1
cache_peer_access domain1 allow domain1_com

cache_peer xxx.xxx.xxx.xxx parent 80 0 no-query originserver name=domain2
cache_peer_access domain2 allow domain2_com

#cache_peer xxx.xxx.xxx.xxx parent 90 0 no-query originserver name=domain3
#cache_peer_access domain3 allow domain3_com

cache_peer xxx.xxx.xxx.xxx parent 8080 0 no-query originserver name=domain4
cache_peer_access domain4 allow portal
cache_peer_access domain4 deny all

cache_peer xxx.xxx.xxx.xxx parent 80 0 no-query originserver name=dbghrm
cache_peer_access dbghrm allow hrm
cache_peer_access dbghrm deny all

cache_peer xxx.xxx.xxx.xxx parent 80 0 no-query originserver name=dbgfax
cache_peer_access dbgfax allow fax
cache_peer_access dbgfax deny all

cache_peer xxx.xxx.xxx.xxx parent 80 0 no-query originserver name=ftpweb
cache_peer_access ftpweb allow ftp
cache_peer_access ftpweb deny all



access_log /var/log/squid3/access.log squid

http_access allow http port80 domain2_com domain1_com portal
http_access allow fax
http_access allow ftp
http_access allow hrm

dns_nameservers xxx.xxx.xxx.xxx

Like I said in previous days, I would like to place an instance of apache
on the reverse proxy in order to have a small page with links to all the
internal webservers proxied by squid.

In a few words, I will have apache with a small webpage and squid3 on the
same server. 


My problem is :

How can I open only one port (e.g. 80) on the firewall in order to serve
the webpage with the links, while still forwarding requests to the
internal web servers that listen on the same port?


Your squid is already solving this problem. Squid receives all port 80 
traffic and routes the requests to whichever server you like to handle it.




Can I make squid forward requests on port 80 to an internal webserver 
even if apache is also listening on that port?


No. For apache on the same machine as squid you need to give apache the 
localhost IP address (127.0.0.1 etc) and treat it as just another 
cache_peer source.


Squid may need a slight config alteration to add the public interface 
address to the http_port line, to prevent it grabbing the localhost 
address if started first.


" http_port $public_ip:80  " instead of "http_port 80 "...
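As a sketch of that suggestion (all addresses and the apache port below are hypothetical), the relevant squid.conf fragment might look like:

```
# Squid owns the public interface; apache listens only on loopback
# (Listen 127.0.0.1:8081 in httpd.conf) and becomes one more origin peer.
http_port 192.0.2.10:80 vhost defaultsite=webportal.domain.com

cache_peer 127.0.0.1 parent 8081 0 no-query originserver name=localapache
acl landing urlpath_regex ^/$
cache_peer_access localapache allow landing
cache_peer_access localapache deny all
```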

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.3


Re: [squid-users] Content-Type log erro

2010-05-21 Thread Romulo Boschetti
Hi Amos, 

I removed the "logformat" line from my config file: 

acl video rep_mime_type -i ^video/*$ 
http_reply_access deny video 
access_log /opt/hsc/webcontrol/log/access.log 

But my problem persists. When I access a youtube video the Mime-type in the 
log is incorrect. This only happens when the access is Denied. 

My Log Denied: 
127734.862 35 192.168.1.102 TCP_DENIED/403 1891 GET 
http://s.ytimg.com/yt/swf/watch-vfl165272.swf paulo DIRECT/201.47.0.52 
text/html 

My Log Accept: 
1274445021.663 12066 192.168.1.102 TCP_MISS/200 148805 GET 
http://s.ytimg.com/yt/swf/ad-vfl165210.swf paulo DIRECT/201.47.0.52 
application/x-shockwave-flash 

That way I cannot see when anybody tries to access prohibited content. 



Any suggestions? 
Thanks for your help. 




Best regards, 

__ 

Rômulo Giordani Boschetti 
IT Consulting - InterOp 
phone 55 (51) 3216-7030 – Porto Alegre 
phone 55 (11) 4063-7881 – São Paulo 
phone 55 (41) 4063-7881 – Curitiba 
fax 55 (51) 3216-7001 
site: www.interop.com.br   email: rom...@interop.com.br 

- Original Message - 
From: "Amos Jeffries"  
To: squid-users@squid-cache.org 
Sent: Friday, 21 May 2010 3:12:43 
Subject: Re: [squid-users] Content-Type log erro 

Romulo Boschetti wrote: 
> 
> Hi, 
> 
> The content-type field in the access.log is always the "-" character. 
> 
> The only time I see "text/html" is when a request is denied. 
> 
> Linux CentOs 5.4 
> Squid Cache: Version 2.7.STABLE5 
> 
> some squid.conf parameters: 
> 
> emulate_httpd_log off 
> acl video rep_mime_type -i ^video/*$ 
> http_reply_access deny video 
> logformat squid %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt 
> access_log /var/log/squid/access.log 
> 
> I tried several configuration and I always get the same result. 
> 
> I have found the bug http://bugs.squid-cache.org/show_bug.cgi?id=2298 , but I 
> haven't found more information about the resolution of this bug, nor any 
> information about a fix in version squid-2.7.STABLE5+ . 
> 
> Thanks 


Re-defining the "squid" format is not a good idea. 
Try removing the "logformat" line from your config file, leaving 
everything else untouched. 

Please report back even if that fixes it. I suspect you have found a bug 
in the custom format handling. 


Amos 
-- 
Please be using 
Current Stable Squid 2.7.STABLE9 or 3.1.3 


Re: [squid-users] Re: Advices for a squid cluster with kerberos auth

2010-05-21 Thread Emmanuel Lesouef
Le Fri, 21 May 2010 11:31:39 +0100,
Nick Cairncross  a écrit :

> Has anyone had success using Service Location records in DNS for
> different sites? I would be interested to hear about it..

Service location? DNS discovery with _tcp zones? What are you trying to
configure?

-- 
Emmanuel Lesouef


[squid-users] Re: Squid3 Reverse Proxy based on url path

2010-05-21 Thread rainolf

Hi,
I've changed my configuration and now it seems to work like I want... I
eliminated the fake URL and replaced it with the right DNS records.

This is my configuration:


http_port 80 vhost vport=90 protocol=http defaultsite=webportal.domain.com
acl http proto http
acl port80 port 80
acl domain2_com dstdomain world.webmail.domain.com
acl domain1_com dstdomain italy.webmail.domain.com
#acl domain3_com dstdomain webportal.domain.com
acl hrm urlpath_regex ^/dbghrm
acl fax urlpath_regex ^/avantfax
acl ftp urlpath_regex ^/webftp
acl portal urlpath_regex ^/webportal

http_access allow all

cache_peer xxx.xxx.xxx.xxx parent 80 0 no-query originserver name=domain1
cache_peer_access domain1 allow domain1_com

cache_peer xxx.xxx.xxx.xxx parent 80 0 no-query originserver name=domain2
cache_peer_access domain2 allow domain2_com

#cache_peer xxx.xxx.xxx.xxx parent 90 0 no-query originserver name=domain3
#cache_peer_access domain3 allow domain3_com

cache_peer xxx.xxx.xxx.xxx parent 8080 0 no-query originserver name=domain4
cache_peer_access domain4 allow portal
cache_peer_access domain4 deny all

cache_peer xxx.xxx.xxx.xxx parent 80 0 no-query originserver name=dbghrm
cache_peer_access dbghrm allow hrm
cache_peer_access dbghrm deny all

cache_peer xxx.xxx.xxx.xxx parent 80 0 no-query originserver name=dbgfax
cache_peer_access dbgfax allow fax
cache_peer_access dbgfax deny all

cache_peer xxx.xxx.xxx.xxx parent 80 0 no-query originserver name=ftpweb
cache_peer_access ftpweb allow ftp
cache_peer_access ftpweb deny all



access_log /var/log/squid3/access.log squid

http_access allow http port80 domain2_com domain1_com portal
http_access allow fax
http_access allow ftp
http_access allow hrm

dns_nameservers xxx.xxx.xxx.xxx

Like I said in previous days, I would like to place an instance of apache
on the reverse proxy in order to have a small page with links to all the
internal webservers proxied by squid.

In a few words, I will have apache with a small webpage and squid3 on the
same server. 

My problem is :

How can I open only one port (e.g. 80) on the firewall in order to serve
the webpage with the links, while still forwarding requests to the
internal web servers that listen on the same port?

Can I make squid forward requests on port 80 to an internal webserver 
even if apache is also listening on that port?

Thank you
-- 
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid3-Reverse-Proxy-based-on-url-path-tp2197692p2226037.html
Sent from the Squid - Users mailing list archive at Nabble.com.


RE: [squid-users] Memory Considerations when you are running multiple instances of squid on the same server.

2010-05-21 Thread GIGO .

Thank you for explaining it well.
 
regards,

Bilal


> From: hen...@henriknordstrom.net
> To: gi...@msn.com
> CC: squid-users@squid-cache.org
> Date: Fri, 21 May 2010 09:53:06 +0200
> Subject: Re: [squid-users] Memory Considerations when you are running 
> multiple instances of squid on the same server.
>
> fre 2010-05-21 klockan 06:38 + skrev GIGO .:
>
>> can it be said as a generalization that one can allocate/fix 1/4 of
>> physical ram for cache mem objects. Will it hold true even when you
>> are running multiple instances???
>
> I would not generalize a rule like that. It is a reasonable
> recommendation when sizing the system, but also depends on how your
> Squid is being used. A reverse proxy benefits much more from cache_mem
> than a normal forward proxy, and in a forward proxy you may want to give
> priority to on-disk cache instead.
>
> memory usage per Squid = cache size (in GB) * 10 MB + cache_mem + 10MB.
>
> memory usage by OS: Leave at least 25%. In smaller configurations up to
> 50%.
>
> total system memory requirement = sum(squid instances) + OS memory =
> sum(squid instances) / 0.75.
>
>
> If you inverse the above calculation then you'll notice that cache size
> is a function of cache_mem. If one is increased then the other need to
> be decreased.
>
> Note: if you also log in on the server using a graphical desktop (not
> recommended) then reserve about 1GB for that.
>
>> Please explain how memory handling will occur in a multiple-instance
>> setup. cache_mem will influence each instance and not the
>> program as a whole, right?
>
> Right.
>
> Regards
> Henrik
> 
_
Hotmail: Powerful Free email with security by Microsoft.
https://signup.live.com/signup.aspx?id=60969

Re: [squid-users] Re: Advices for a squid cluster with kerberos auth

2010-05-21 Thread Nick Cairncross
Just to add: Thanks for this. I've successfully got RR working with Kerberos as 
you said. It's something I've been interested in as well. My test setup is:

SQUID1.domain.com   10.0.0.1
SQUID2.domain.com   10.0.0.2

RR DNS record SQUIDS.domain.com for each SQUIDx IP

Computer account in UnixPrincipals OU called SQUIDS

msktutil -u -b "OU=UnixPrincipals" -s HTTP/squids.domain.com -k 
/etc/squid/HTTP.keytab --computer-name squids --upn HTTP/squids --server dc1 
--verbose -h squids.domain.com

Point browser to squids.domain.com.

Has anyone had success using Service Location records in DNS for different 
sites? I would be interested to hear about it..




On 20/05/2010 21:51, "Markus Moeller"  wrote:

It will work with the right setup (e.g. you have to copy the Kerberos keytab
to all machines and use the -s HTTP/ or -s GSS_C_NO_NAME option
with squid_kerb_auth).

Regards
Markus

"Amos Jeffries"  wrote in message
news:4bf52c87.9080...@treenet.co.nz...
> Emmanuel Lesouef wrote:
>> Hello,
>>
>> I'm currently satisfied with my round-robin DNS enabled "cluster" of
>> two Squid with ntlm authentication.
>>
>> But, with th appearance of Windows 7 and Windows 2008, I see by
>> searching for documentation on the web that I need to use Kerberos
>> Authentication if I would like Internet Explorer 8 from 2008 or 7 to
>> work.
>>
>> Do you have any advices for achieving this setup ? What clustering
>> mechanism do you use. Does the kerberos part of the install need to be
>> customized to support being put in cluster mode (which needs to be
>> defined) ?
>>
>> Thanks for your helps and docs.
>>
>> PS : Testing it will be easy so I thinks I'll enable Debian Backports
>> repository in order to have 2.7STABLE9.
>>
>
> Without having used either, I expect if your clustering setup works with
> NTLM it will work equally well or better for Kerberos.
>
> The two protocols are very much similar, with Kerberos doing away with one
> of the handshake HTTP reject messages.
>
> Amos
> --
> Please be using
>   Current Stable Squid 2.7.STABLE9 or 3.1.3
>






RE: [squid-users] agent.log and https clients

2010-05-21 Thread Steve
Henrik

Thanks for that. Explains what I am seeing for this software. Given my browser 
acls which only allow specific known user-agents then anything that doesn't 
come into squid with a User-Agent header is going to get the forbidden error. 
As this particular software uses one particular site then maybe a dstdomain acl 
above the browser ones will be the answer. I'll give that a try.

Thanks again.
Steve

-Original Message-
From: Henrik Nordström [mailto:hen...@henriknordstrom.net] 
Sent: 21 May 2010 08:36
To: Steve
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] agent.log and https clients

fre 2010-05-21 klockan 00:20 +0100 skrev Steve:

> I have a quick question about agent.log. Does the user agent get 
> logged for https clients?

Most agents do not indicate who they are in CONNECT requests.

Regards
Henrik




Re: [squid-users] Cannot connect to squid port intermittently

2010-05-21 Thread Tejpal Amin
Hi ,

Disabling iptables has had no effect; the problem of slow
performance still exists.

Can anybody help me on this?
I am not able to figure out why squid is not able to accept connections.
It still times out when I try to telnet to squid port 3128 from the
squid box itself.

Regards
Tejpal



On Tue, May 18, 2010 at 12:31 PM, Tejpal Amin  wrote:
>> Hi Nathan,
>>
>> The number of file descriptors doesn't seem to be an issue since there is
>> no entry about that in the cache.log.
>> How do I check if the problem lies in iptables?
>>
>> Warm Regards
>> Tej
>>
>>
>> On Fri, May 14, 2010 at 6:49 PM, "٩๏̯͡๏۶ ̿ ̿ ̿ ̿ ̿̿’\\̵͇̿̿\\=(•̪●)‏
>> Nathan Ridge"  wrote:
>>>  Hi Tej,
>>>
>>> check your file descriptors and also iptables conntrack, I have had problems
>>> in the past when either of these two settings are too low.
>>>
>>> Regards
>>>
>>> On 14/05/10 11:11 PM, Tejpal Amin wrote:

 Hi,

 I am having performance issues with squid; sometimes users get "page
 cannot be displayed".
 During the troubleshooting I found that doing a telnet to port 3128
 from my squid box itself times out, or connects only after some time.

 I need help desperately on this.

 Warm Regards
 Tej
>>>
>>>
>>
>


Re: [squid-users] Re: Advices for a squid cluster with kerberos auth

2010-05-21 Thread Emmanuel Lesouef
Le Thu, 20 May 2010 21:51:08 +0100,
"Markus Moeller"  a écrit :

> It will work with the right setup (e.g. you have to copy the Kerberos
> keytab to all machines and use the -s HTTP/ or -s
> GSS_C_NO_NAME option with squid_kerb_auth).
> 
> Regards
> Markus
> 

Understood. Thanks Markus. I didn't know it was possible to have a RR
DNS Name in the service name.

-- 
Emmanuel Lesouef


Re: [squid-users] Memory Considerations when you are running multiple instances of squid on the same server.

2010-05-21 Thread Henrik Nordström
fre 2010-05-21 klockan 06:38 + skrev GIGO .:

> can it be said as  a generalization that one can allocate/fix 1/4 of
> physical ram for cache mem objects. Will it hold true even when you
> are running multiple instances???

I would not generalize a rule like that. It is a reasonable
recommendation when sizing the system, but also depends on how your
Squid is being used. A reverse proxy benefits much more from cache_mem
than a normal forward proxy, and in a forward proxy you may want to give
priority to on-disk cache instead.

memory usage per Squid = cache size (in GB) * 10 MB + cache_mem + 10MB.

memory usage by OS: Leave at least 25%. In smaller configurations up to
50%.

total system memory requirement = sum(squid instances) + OS memory =
sum(squid instances) / 0.75.


If you invert the above calculation then you'll notice that cache size
is a function of cache_mem. If one is increased then the other need to
be decreased.

Note: if you also log in on the server using a graphical desktop (not
recommended) then reserve about 1GB for that.

> Please explain how memory handling will occur in a multiple-instance
> setup. cache_mem will influence each instance and not the
> program as a whole, right?

Right.

Regards
Henrik
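Henrik's sizing rules above can be sketched numerically. This is a rough rule-of-thumb estimator only, not a guarantee:

```python
def squid_memory_mb(cache_size_gb, cache_mem_mb):
    # memory per Squid instance = cache size (GB) * 10 MB + cache_mem + 10 MB
    return cache_size_gb * 10 + cache_mem_mb + 10

def total_system_mb(instance_mbs, os_share=0.25):
    # Leave at least 25% of RAM for the OS: total = sum(instances) / 0.75.
    return sum(instance_mbs) / (1.0 - os_share)
```

For example, a single instance with a 45 GB cache_dir and cache_mem 256 MB would need roughly `squid_memory_mb(45, 256)` = 716 MB for Squid, and `total_system_mb([716])` of physical RAM overall.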



Re: [squid-users] very slow browsing and page is not displaed properly

2010-05-21 Thread Amos Jeffries

goody goody wrote:

Hi,

Squid GURUs, Your response is required, please.

Regards,
.Goody.


- Original Message 
From: goody goody 
To: squid-users@squid-cache.org
Sent: Fri, May 21, 2010 1:52:23 AM
Subject: Re: [squid-users] very slow browsing and page is not displaed properly

Dear Members,

In addition to below information, I have added some more info regarding machine hardware and platform. 


RAM = 4 GB
Processors = 4 
HDDs SATA having implemented RAID-5


Running on VMWARE ESXi 3.5.

Should you need any info, pls let me know.

Waiting for your expert opinion, please.


http://wiki.squid-cache.org/SquidFaq/RAID

45GB cache on RAID-5. ouch.

If you are really absolutely forced to use RAID at all, go for RAID-1.

For best performance go with JBOD and do away with half the physical 
level IO.


I'd also recommend an OS other than *BSD for AUFS. There are some write 
problems on BSD that apparently slow it down.





- Original Message 
From: goody goody 
To: squid-users@squid-cache.org
Sent: Thu, May 20, 2010 4:31:21 PM
Subject: [squid-users] very slow browsing and page is not displaed properly

Hi,

Version information and some statistics collected by me are below. At times, my users 
complain that browsing becomes deadly slow, and a page like yahoo is, after much 
delay, displayed scattered: pictures are not visible, an "X" sign is displayed 
instead, and after a few refreshes the screen becomes better.

proxy-br# uname -a
FreeBSD proxy-br 8.0-RELEASE FreeBSD 8.0-RELEASE #0: Sat Nov 21 15:48:17 UTC 2009 
   r...@almeida.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC  i386



proxy-br# /usr/local/squid27/sbin/squid -v
Squid Cache: Version 2.7.STABLE9
configure options:  '--prefix=/usr/local/squid27' '--enable-async-io' '--enable-storeio=aufs,coss' 
'--enable-removal-policies=heap,lru' '--enable-snmp' '--with-openssl=/opt/ssl' '--enable-wccp'




proxy-br# iostat -c 5 -w 3
  ttyda0pass0cpu
tin  tout  KB/t tps  MB/s  KB/t tps  MB/s  us ni sy in id
  0  138 13.88  2  0.03  0.00  0  0.00  4  0  1  0 95
  0  140 11.00  1  0.01  0.00  0  0.00  11  0  5  1 83
  0  133 11.00  1  0.01  0.00  0  0.00  16  0  5  1 78
  0    86 16.00  0  0.01  0.00  0  0.00  13  0  4  1 82
  0  132  3.07  5  0.01  0.00  0  0.00  14  0  4  1 80


proxy-br# vmstat
procs  memory  pagedisksfaultscpu
 r b w    avm    fre   flt  re  pi  po  fr  sr da0 pa0   in   sy   cs us sy id
 1 0 0   924M   154M   20   0   0   0   6   1   0   0  189 1178 1366  4  1 95


proxy-br# systat

/0  /1  /2  /3  /4  /5  /6  /7  /8  /9  /10
Load Average  ||

/0%  /10  /20  /30  /40  /50  /60  /70  /80  /90  /100
root  idle XX
root  idle X
squidsquid X
rootkernel X


my squid.conf is as below

http_port 3128

hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY


"no_cache" is dead. Drop the "no_" part of it.

Also the QUERY acl is now deprecated. You can drop those above lines 
entirely and add a refresh_pattern instead:


 refresh_pattern -i (/cgi-bin/|\?) 0 0% 0

(needs to go just above the "refresh_pattern . "  line).


cache_mem 256 MB
visible_hostname pxy
#negative_ttl 0

acl PURGE method PURGE
acl localhost src 127.0.0.1
http_access allow PURGE localhost
http_access deny PURGE


cache_replacement_policy heap LFUDA
memory_replacement_policy heap GDSF

cache_dir aufs /cache 45000 16 256

cache_store_log /dev/null #/var/log/squid27/store.log


Drop the line above. It's both broken syntax (there are no comments 
allowed at the end), and unused as the next line below erases it.



cache_store_log none
cache_swap_low 80
cache_swap_high 90
cache_log /var/log/squid27/cache.log
cache_access_log /var/log/squid27/access.log

half_closed_clients off


...
...acl...

.

#always_direct allow myiplist
cache_mgr x...@
cache_effective_user squid
cache_effective_group squid
logfile_rotate 0
buffered_logs on
nonhierarchical_direct off
prefer_direct off
ie_refresh on
ftp_list_width 32
ftp_passive on
ftp_sanitycheck on
ftp_telnet_protocol on


emulate_httpd_log on


Use the "common" log format on each access_log line instead of this 
deprecated option.


i.e.  cache_access_log /var/log/squid27/access.log common


Amos
--

Re: [squid-users] agent.log and https clients

2010-05-21 Thread Henrik Nordström
fre 2010-05-21 klockan 00:20 +0100 skrev Steve:

> I have a quick question about agent.log. Does the user agent get logged for
> https clients?

Most agents do not indicate who they are in CONNECT requests.

Regards
Henrik



Re: [squid-users] Deny IPs

2010-05-21 Thread Matus UHLAR - fantomas
On 17.05.10 21:43, kranthi wrote:
> I want squid to deny requests from certain IPs and forward the rest.
> The list of IPs will be saved in an external file (or a MySQL
> database), which will be updated every minute (and my squid server
> can't be restarted every minute). Any ideas how this can be done?

What is the point? Couldn't we find a better solution for the real problem?
-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
I wonder how much deeper the ocean would be without sponges.
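One common way to approach kranthi's original question without restarting Squid is an external_acl_type helper, which Squid consults per request and whose answers can be given a short ttl= so file updates take effect quickly. A minimal sketch follows; the helper script name, ban-file path, and squid.conf lines are hypothetical:

```python
#!/usr/bin/env python3
"""Sketch of a Squid external ACL helper that matches client IPs against a file.

Hypothetical squid.conf side:
  external_acl_type bancheck ttl=60 %SRC /usr/local/bin/bancheck.py
  acl banned external bancheck
  http_access deny banned
"""
import sys

BAN_FILE = "/etc/squid/banned_ips.txt"  # hypothetical path, one IP per line

def load_banned(path):
    # Re-read the file on every lookup so edits apply without a restart.
    try:
        with open(path) as f:
            return {ln.strip() for ln in f if ln.strip()}
    except OSError:
        return set()

def decide(ip, banned):
    # Squid's helper protocol: "OK" means the ACL matched (so the
    # http_access deny above fires); "ERR" means no match.
    return "OK" if ip in banned else "ERR"

def main():
    # Squid writes one %SRC value per line and reads one answer per line.
    for line in sys.stdin:
        print(decide(line.strip(), load_banned(BAN_FILE)), flush=True)

# Call main() when installed as the helper; it loops over stdin until Squid
# closes the pipe.
```

This keeps the deny list outside squid.conf entirely; a cron job or database export can rewrite the file every minute and the ttl controls how quickly cached answers expire.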