Re: [squid-users] Authenticator processes after reconfigure.

2009-04-22 Thread Oleg

Done. http://www.squid-cache.org/bugs/show_bug.cgi?id=2648

Amos Jeffries wrote:

Oleg wrote:

Hello.

Version: Squid 3.0.STABLE13 on Gentoo 2.6.22-vs2.2.0.7

`squid -k reconfigure` does not close old authenticator processes if 
they still have clients attached. So my 'NTLM Authenticator Statistics' 
looks like below.

Does anybody have the same symptom?


Maybe.  The 23 of 15 issue has been resolved recently.

But the repeated use of FDs, some with the RS flags set, is a bug anyway. 
Please open a bugzilla entry so we don't lose track of this, with details 
on where that output was found.


Thanks

Amos



Oleg.


NTLM Authenticator Statistics:
program: /usr/bin/ntlm_auth
number running: 23 of 15
requests sent: 8896
replies received: 8896
queue length: 0
avg service time: 0 msec


 #  FD    PID  # Requests  # Deferred Requests  Flags   Time  Offset  Request

 1  12  23079         459                    0  RS     0.002       0  (none)
 2  13  23080          89                    0  RS     0.000       0  (none)
 3  14  23081          37                    0  RS     0.000       0  (none)
 4  15  23082          36                    0  RS     0.002       0  (none)
 5  16  23083         342                    0  RS     0.000       0  (none)
 6  17  23084        1057                    0  RS     0.000       0  (none)
 7  18  23085          97                    0  RS     0.000       0  (none)
10  21  23089          71                    0  RS     0.000       0  (none)
 1  20  17695         653                    0         0.003       0  (none)
 2  22  17696         114                    0         0.004       0  (none)
 3  23  17697          22                    0         0.008       0  (none)
 4  24  17698           4                    0         0.020       0  (none)
 5  25  17699           0                    0         0.000       0  (none)
 6  26  17700           0                    0         0.000       0  (none)
 7  27  17701           0                    0         0.000       0  (none)
 8  28  17702           0                    0         0.000       0  (none)
 9  29  17703           0                    0         0.000       0  (none)
10  30  17713           0                    0         0.000       0  (none)
11  31  17714           0                    0         0.000       0  (none)
12  32  17715           0                    0         0.000       0  (none)
13  33  17716           0                    0         0.000       0  (none)
14  34  17717           0                    0         0.000       0  (none)
15  35  17718           0                    0         0.000       0  (none)

Flags key:

   B = BUSY
   C = CLOSING
   R = RESERVED or DEFERRED
   S = SHUTDOWN
   P = PLACEHOLDER



Amos


[squid-users] Configuration file

2009-04-22 Thread Wong

All,

Below are the lines that exist in my squid.conf:

acl our_networks src 192.168.1.0/24
http_access allow our_networks
http_access deny all

Would the configuration below be more effective? And what would the impact be?

acl our_networks src 192.168.1.0/24
http_access deny !our_networks
(with the line "http_access deny all" removed)

Please advise

Thx & Rgds,

Wong





Re: [squid-users] Squid and TC - Traffic Shaping

2009-04-22 Thread Indunil Jayasooriya
On Wed, Apr 22, 2009 at 2:55 PM, Amos Jeffries  wrote:
> Wilson Hernandez - MSD, S. A. wrote:
>>
>> Hello.
>>
>> I was writing a script to control traffic on our network. I created my
>> rules with tc and noticed that it wasn't working correctly.
>>
>> I tried this traffic shaping on a linux router that has squid doing
>> transparent cache.
>>
>> When measuring the download speed on speedtest.net the download speed is
>> 70kbps when it is supposed to be over 300kbps. I found it strange since
>> I've done traffic shaping in the past and it worked, but not on a box with
>> squid. I stopped the squid server and ran the test again and it gave me
>> the speed I assigned to that machine. I assigned different bw and the
>> test gave the correct speed.
>>
>> Has anybody used traffic shaping (TC in linux) on a box with squid? Is
>> there a way to combine both and have them work side by side?

About 2 years ago, I used the script below on a CentOS 4.4 box acting
as a firewall (iptables), router (iproute2) and squid 2.5 transparent
interceptor.



# Traffic shaping on eth1 - i.e. the LAN INTERFACE (for downloading).
# eth0 is connected to the Internet.

INTERFAZ_LAN=eth1
FULLBANDWIDTH=256
BANDWIDTH4LAN=64

tc qdisc del root dev $INTERFAZ_LAN

tc qdisc add dev $INTERFAZ_LAN root handle 1: htb r2q 4
tc class add dev $INTERFAZ_LAN parent 1: classid 1:1 htb rate "$FULLBANDWIDTH"Kbit
tc class add dev $INTERFAZ_LAN parent 1:1 classid 1:10 htb rate "$BANDWIDTH4LAN"Kbit
tc qdisc add dev $INTERFAZ_LAN parent 1:10 handle 10: sfq perturb 10
tc filter add dev $INTERFAZ_LAN parent 1: protocol ip prio 1 u32 match ip dst 192.168.100.0/24 classid 1:10



192.168.100.0/24 is my LAN RANGE.

According to the above script, my FULL bandwidth was 256 kbit and I
allocated 64 kbit for downloading. For me it actually had NOTHING to do
with squid; ALL went fine with the iproute2 package.
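
To verify that the classes are in place and actually matching traffic,
the iproute2 counters are the quickest check (generic tc commands,
nothing specific to the script above):

# Show qdiscs and HTB classes with byte/packet counters.
tc -s qdisc show dev eth1
tc -s class show dev eth1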


> I am also seeking a TC expert to help several users already needing to use
> it with TPROXYv4 and/or WCCP setups.

I am NOT a tc expert, just a guy with an interest.





-- 
Thank you
Indunil Jayasooriya


Re: [squid-users] Auto Detect Proxy in Browser, visiting users.

2009-04-22 Thread Amos Jeffries
> gavguinness wrote:
>> Hi
>>
>> I'm new to Squid.  New in the sense that this time yesterday, I didn't
>> know
>> what Squid was.  I knew what I wanted to achieve though, and I've
>> achieved
>> most of this today using Squid and a few helpful online guides...
>>
>> To have users prompted to authenticate when they start their browser
>> (Check)
>> To log their activity in a log file (Check)
>> Not to have to install any software on the PC (Check)
>> Specifically not to use any server based DB lookup authentication
>> (check)
>>
>> The only problem is that I want all users to go through Squid, even
>> visiting
>> users.  A lot of our guys are not going to want to manually enter Proxy
>> settings each time they visit a site - I want it to be automatic.
>>
>> Similarly, not every user logs into our server(s), so I can't deploy a
>> script or setting to the visiting computer as they simply connect to the
>> WiFi, or Cabled network point.
>>
>> So basically, just connect up to the network, go on line and BAM, they
>> have
>> to authenticate.  Just like in Starbucks!  (But without the coffee or
>> wifi
>> charges!)
>>
>> I looked at transparent settings, but I gather this doesn't work with
>> Authentication, so that's a no.
>>
>> Now I'm focusing on how to get the clients to auto detect the squid
>> box.
>> But I can't fathom how that's going to work.  If the machines don't know
>> it's there, how can squid make itself known to them?
>>
>> Ideally (and bear in mind my lack of knowledge at this stage) I would
>> like
>> to just have my DHCP tell the clients that the squid box is the default
>> gateway and solve it that way, but again, I'm learning that the proxy
>> doesn't work that way - it's not a router, right?
>>
>> Hope that makes sense, any help appreciated.  But in the meantime, I'll
>> get
>> my head back in the manual!
>>
>> Cheers
>>
>
> Look into WPAD
> (http://en.wikipedia.org/wiki/Web_Proxy_Autodiscovery_Protocol) or a
> captive portal like WiFiDog
> (http://en.wikipedia.org/wiki/WiFiDog_Captive_Portal) or the Squid
> session helper (check the archives).
>

And definitely the relevant Squid FAQ entries:

http://wiki.squid-cache.org/SquidFaq/ConfiguringBrowsers?highlight=%28WPAD%29
http://wiki.squid-cache.org/Technology/WPAD/DNS
http://wiki.squid-cache.org/Technology/WPAD


> Here's the condensed version of what I have experienced with WPAD.  It
> all assumes that the proxy settings have not been changed from the
> shipping default in the browsers.
>
> Using a Windows (98/2000/XP) machine and Internet Explorer, the DHCP
> option 252 is honored.  DNS (wpad.domainname.com) is used in the absence
> of the DHCP option 252.  Firefox (2 or 3) on a Windows (98/2000/XP)
> machine or OS X (10.4 for sure) the DHCP option 252 is ignored, DNS is
> used exclusively.  Safari on Windows (98/2000/XP) or OS X ignores both
> DHCP and DNS and must be explicitly configured to use a statically
> defined PAC (http://en.wikipedia.org/wiki/Proxy_auto-config) file.
>
> My suggestion is to have a webserver assigned to
> http://wpad.yourdomain.tld that serves a PAC file when
> http://wpad.yourdomain.tld/wpad.dat OR
> http://wpad.yourdomain.tld/wpad.da is requested.  This will
> (transparently) catch the majority of web browsers.  For the rest, you
> should intercept outbound port 80 traffic and redirect it to a page that
> describes how to set their browser back to defaults (or how to set their
> browser to explicitly grab the PAC file).
>
> Chris
>




Re: [squid-users] Auto Detect Proxy in Browser, visiting users.

2009-04-22 Thread Amos Jeffries
>
> I do believe native squid transparent settings will do this. You can
> configure squid with transparency settings, configure squid with
> authentication (basic or LDAP), set your Unix box (I will assume Linux)
> to be the default gateway, enable ip forwarding (act as a router), and
> configure ipchains to trap http traffic and redirect it to your squid
> port.
>

No. He requires web authentication, which is absolutely not possible under
interception conditions. His other requirements forbid the few auth methods
that do work (server-based DB lookup checks).

>
>
> - Original Message 
> From: Chris Robertson 
> To: squid-users@squid-cache.org
> Sent: Wednesday, April 22, 2009 7:43:59 PM
> Subject: Re: [squid-users] Auto Detect Proxy in Browser, visiting users.
>
> gavguinness wrote:
>> Hi
>>
>> I'm new to Squid.  New in the sense that this time yesterday, I didn't
>> know
>> what Squid was.  I knew what I wanted to achieve though, and I've
>> achieved
>> most of this today using Squid and a few helpful online guides...
>>
>> To have users prompted to authenticate when they start their browser
>> (Check)
>> To log their activity in a log file (Check)
>> Not to have to install any software on the PC (Check)
>> Specifically not to use any server based DB lookup authentication
>> (check)
>>
>> The only problem is that I want all users to go through Squid, even
>> visiting
>> users.  A lot of our guys are not going to want to manually enter Proxy
>> settings each time they visit a site - I want it to be automatic.
>>
>> Similarly, not every user logs into our server(s), so I can't deploy a
>> script or setting to the visiting computer as they simply connect to the
>> WiFi, or Cabled network point.
>>
>> So basically, just connect up to the network, go on line and BAM, they
>> have
>> to authenticate.  Just like in Starbucks!  (But without the coffee or
>> wifi
>> charges!)
>>
>> I looked at transparent settings, but I gather this doesn't work with
>> Authentication, so that's a no.
>>
>> Now I'm focusing on how to get the clients to auto detect the squid
>> box. But I can't fathom how that's going to work.  If the machines don't
>> know
>> it's there, how can squid make itself known to them?
>>
>> Ideally (and bear in mind my lack of knowledge at this stage) I would
>> like
>> to just have my DHCP tell the clients that the squid box is the default
>> gateway and solve it that way, but again, I'm learning that the proxy
>> doesn't work that way - it's not a router, right?
>>
>> Hope that makes sense, any help appreciated.  But in the meantime, I'll
>> get
>> my head back in the manual!
>>
>> Cheers
>>
>
> Look into WPAD
> (http://en.wikipedia.org/wiki/Web_Proxy_Autodiscovery_Protocol) or a
> captive portal like WiFiDog
> (http://en.wikipedia.org/wiki/WiFiDog_Captive_Portal) or the Squid session
> helper (check the archives).
>
> Here's the condensed version of what I have experienced with WPAD.  It all
> assumes that the proxy settings have not been changed from the shipping
> default in the browsers.
>
> Using a Windows (98/2000/XP) machine and Internet Explorer, the DHCP
> option 252 is honored.  DNS (wpad.domainname.com) is used in the absence
> of the DHCP option 252.  Firefox (2 or 3) on a Windows (98/2000/XP)
> machine or OS X (10.4 for sure) the DHCP option 252 is ignored, DNS is
> used exclusively.  Safari on Windows (98/2000/XP) or OS X ignores both
> DHCP and DNS and must be explicitly configured to use a statically defined
> PAC (http://en.wikipedia.org/wiki/Proxy_auto-config) file.
>
> My suggestion is to have a webserver assigned to
> http://wpad.yourdomain.tld that serves a PAC file when
> http://wpad.yourdomain.tld/wpad.dat OR http://wpad.yourdomain.tld/wpad.da
> is requested.  This will (transparently) catch the majority of web
> browsers.  For the rest, you should intercept outbound port 80 traffic and
> redirect it to a page that describes how to set their browser back to
> defaults (or how to set their browser to explicitly grab the PAC file).
>
> Chris
>
>
>
>
>




Re: [squid-users] Intermittent slow response from Squid

2009-04-22 Thread molybtek

I've been able to do a little more monitoring on squid - the DNS lookups are
still below 1 second for the 5-minute averages during the times when there
is a slowdown in squid response. And the connections average around 5 per
second, just like the times when there isn't a slowdown...
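
For reference, 5-minute averages like these can be pulled from the cache
manager - assuming squidclient is available on the proxy box:

# Rolling 5-minute counters; pick out request rate and DNS service time.
squidclient mgr:5min | egrep 'client_http.requests|dns.median_svc_time'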

Just wondering, is there anything else that might be causing it?


Daniel Kühl wrote:
> 
> I can bet on DNS Server...
> 
> 
> On Feb 4, 2009, at 9:43 AM, Moses Truong wrote:
> 
>> We have squid running on a server with delay pools enabled. The  
>> squidclient usually responds very quickly - in less than 0.03  
>> seconds most of the time. However, there are times when this rises  
>> to over 39 seconds.
>>
>> There are 2 GB of RAM, and about 900 MB used.
>> There are 1024 file descriptors, and the largest opened hovers around
>> 370.
>>
>> Could anyone suggest what I should be looking for to track down why  
>> squid sometimes takes so long to respond? Thanks.
>>
> 
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Intermittent-slow-response-from-Squid-tp21829319p23189509.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] HTCP logging?

2009-04-22 Thread Amos Jeffries
> No there's not. See:
>http://www.squid-cache.org/bugs/show_bug.cgi?id=2627
>
>

Aye. FWIW I'm clearing up the logging code in 3.1 and extending it in 3.2.
When the 3.1 cleanup is done and approved I'll take a look at how easy
adding HTCP would be for that release. But chances are small for anything
quick.

Amos

> On 23/04/2009, at 7:59 AM, Dean Weimer wrote:
>
>> Working on testing a child/parent proxy setup using HTCP, I was
>> wondering if there is any way to see a log of the HTCP requests on
>> the parent similar to how you see the ICP requests in the access log?
>>
>> Thanks,
>>  Dean Weimer
>>  Network Administrator
>>  Orscheln Management Co
>
> --
> Mark Nottingham   m...@yahoo-inc.com
>
>
>




[squid-users] Fwd: Problem accessing a webpage

2009-04-22 Thread Pedro Corá
- Forwarded message - 
From: "Pedro Corá"  
To: squid-users@squid-cache.org 
Cc: "Romulo Giordani. Boschetti"  
Sent: Wednesday, 22 April 2009 19:19:54 (GMT-0300) Auto-Detected 
Subject: Problem accessing a webpage 


Hi there. 


I'm having trouble accessing a webpage through a Squid proxy server.


The page is http://minha.unisul.br . The page loads OK and the problem
only seems to appear when you click on "CRIAR NOVO USUARIO"
(URL: https://projetoguia.unisul.br/sa8new/directlink.html )


Then squid doesn't process the page correctly. Without a proxy the page
loads OK, but with a squid proxy configured the images and sometimes the
CSS file don't load.


Can anyone help me? In access.log I got nothing. I tried installing a 
squid server from scratch and only added the following lines to 
squid.conf: 


visible_hostname srvdevXX 
no_cache deny all 
always_direct allow all 
http_access allow all 


Can you guys give me some ideas? 


Regards, 




Pedro Corá 
InterOp 
mail. pedro.c...@interop.com.br 
fone. +55 (51) 3126.7000 
mobile. + 55 (51) 9957.1468 



[squid-users] redirector #1 (FD 6) exited

2009-04-22 Thread murrah boswell

Hello,

Periodically I get messages in cache.log like:

redirector #1 (FD 6) exited

Is it possible to put squid in a debug mode level so I can see what query was 
submitted that caused my redirector to die?
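
One option - assuming section 61 is still the redirector code, as listed
in doc/debug-sections.txt of the source tree - is to raise that section's
verbosity in squid.conf so the helper traffic shows up in cache.log:

# Everything else at level 1, redirector tracing at level 5.
debug_options ALL,1 61,5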


Re: [squid-users] problems with SQUID 3.x and IBM Proventia

2009-04-22 Thread Amos Jeffries
> Amos Jeffries wrote:
>>> So of course the problem is proventia corrupting the HTTP headers and
>>> we will raise an issue about that with IBM.
>>>
>>> But for the time being: is there a chance to make squid more
>>> "tolerant" about those kind of problems? Without surprize I did not
>>> find any fitting config options :-)
>>>
>> Not nearly as easy as it will be for IBM to issue a fix for it. Or even
>> to replace the box with free software that works well.
>
> Hehe, sure, no objections, it is just the world being far from perfect :-)
>
>> Not also without opening some potential data-injection and cache
>> poisoning flaws into Squid.
>>
>> Consider what happens with:
>>
>> HTTP/1.1 200 OK
>> Bwahaha: "
>> Cache-Control: private
>>
>> ...something you really did not want public...
>> .
>>
>> vs:
>>
>> HTTP/1.1 200 OK
>> Content-Type: "fu
>> bar: tender: and: wine"
>> Cache-Control: private
>
> Hmm, reading the specs for HTTP message headers [1] I think this could
> be done without introducing security issues. As per the specification your
> last example would read correctly:
>
> ---CUT---
> HTTP/1.1 200 OK
> Content-Type: "fu
>   bar: tender: and: wine"
> Cache-Control: private
> ---CUT---
>
> note the leading whitespace.

I know. Both of the things I pointed out are two cases of the same
brokenness. Squid handles the one it can interpret 'safely'[1] (the second),
but at the expense of dropping the first set; when no termination can be
found at all it is clearly very unsafe to make assumptions.

 [1] for some vague value of 'safe' that the paranoid in me gets very edgy
about as-is.

Amos




Re: [squid-users] Auto Detect Proxy in Browser, visiting users.

2009-04-22 Thread Sir June

I do believe native squid transparent settings will do this. You can 
configure squid with transparency settings, configure squid with authentication 
(basic or LDAP), set your Unix box (I will assume Linux) to be the default 
gateway, enable ip forwarding (act as a router), and configure ipchains to trap 
http traffic and redirect it to your squid port. 



- Original Message 
From: Chris Robertson 
To: squid-users@squid-cache.org
Sent: Wednesday, April 22, 2009 7:43:59 PM
Subject: Re: [squid-users] Auto Detect Proxy in Browser, visiting users.

gavguinness wrote:
> Hi
> 
> I'm new to Squid.  New in the sense that this time yesterday, I didn't know
> what Squid was.  I knew what I wanted to achieve though, and I've achieved
> most of this today using Squid and a few helpful online guides...
> 
> To have users prompted to authenticate when they start their browser (Check)
> To log their activity in a log file (Check)
> Not to have to install any software on the PC (Check)
> Specifically not to use any server based DB lookup authentication (check)
> 
> The only problem is that I want all users to go through Squid, even visiting
> users.  A lot of our guys are not going to want to manually enter Proxy
> settings each time they visit a site - I want it to be automatic.
> 
> Similarly, not every user logs into our server(s), so I can't deploy a
> script or setting to the visiting computer as they simply connect to the
> WiFi, or Cabled network point.
> 
> So basically, just connect up to the network, go on line and BAM, they have
> to authenticate.  Just like in Starbucks!  (But without the coffee or wifi
> charges!)
> 
> I looked at transparent settings, but I gather this doesn't work with
> Authentication, so that's a no.
> 
> Now I'm focusing on how to get the clients to auto detect the squid box. But 
> I can't fathom how that's going to work.  If the machines don't know
> it's there, how can squid make itself known to them?
> 
> Ideally (and bear in mind my lack of knowledge at this stage) I would like
> to just have my DHCP tell the clients that the squid box is the default
> gateway and solve it that way, but again, I'm learning that the proxy
> doesn't work that way - it's not a router, right?
> 
> Hope that makes sense, any help appreciated.  But in the meantime, I'll get
> my head back in the manual!
> 
> Cheers
>  

Look into WPAD (http://en.wikipedia.org/wiki/Web_Proxy_Autodiscovery_Protocol) 
or a captive portal like WiFiDog 
(http://en.wikipedia.org/wiki/WiFiDog_Captive_Portal) or the Squid session 
helper (check the archives).

Here's the condensed version of what I have experienced with WPAD.  It all 
assumes that the proxy settings have not been changed from the shipping default 
in the browsers.

Using a Windows (98/2000/XP) machine and Internet Explorer, the DHCP option 252 
is honored.  DNS (wpad.domainname.com) is used in the absence of the DHCP 
option 252.  Firefox (2 or 3) on a Windows (98/2000/XP) machine or OS X (10.4 
for sure) the DHCP option 252 is ignored, DNS is used exclusively.  Safari on 
Windows (98/2000/XP) or OS X ignores both DHCP and DNS and must be explicitly 
configured to use a statically defined PAC 
(http://en.wikipedia.org/wiki/Proxy_auto-config) file.

My suggestion is to have a webserver assigned to http://wpad.yourdomain.tld 
that serves a PAC file when http://wpad.yourdomain.tld/wpad.dat OR 
http://wpad.yourdomain.tld/wpad.da is requested.  This will (transparently) 
catch the majority of web browsers.  For the rest, you should intercept 
outbound port 80 traffic and redirect it to a page that describes how to set 
their browser back to defaults (or how to set their browser to explicitly grab 
the PAC file).

Chris






Re: [squid-users] Squid Ignoring ESI

2009-04-22 Thread Robert Collins
On Wed, 2009-04-22 at 22:20 +, James Ellis wrote:
> I am trying to use the ESI parser in Squid.  I have compiled with
> "--enable-esi" and set "esi_parser custom" in my squid.conf file.

You shouldn't need to set esi_parser at all.

> Through the squid client, I can access a JSP page running on my local
> machine, but I am unable to parse ESI pages.  
> 
> Questions:
> 
> 1) Is there a set of instructions anywhere on how to use ESI and Squid
> together?  If not I'd be happy to piece together what I have (if I
> ever get it actually working).
> 
> 2) I read somewhere that you need to set the header
> "Surrogate-Control", so I've tried the following:
> 
> response.setHeader("Surrogate-Control", "no-store, content=\"ESI/1.0
> \"");
> 
> In this case the esi tags are just ignored.

This should be correct, I suggest upping the debug flags for ESI to see
what squid thinks is happening.
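
For example - assuming section 86 is the ESI code, per
doc/debug-sections.txt - something like this in squid.conf:

debug_options ALL,1 86,9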

> 3) Are there any other squid.conf settings required other than
> "esi_parser custom"?

Not that I remember.

-Rob




Re: [squid-users] Auto Detect Proxy in Browser, visiting users.

2009-04-22 Thread Chris Robertson

gavguinness wrote:

Hi

I'm new to Squid.  New in the sense that this time yesterday, I didn't know
what Squid was.  I knew what I wanted to achieve though, and I've achieved
most of this today using Squid and a few helpful online guides...

To have users prompted to authenticate when they start their browser (Check)
To log their activity in a log file (Check)
Not to have to install any software on the PC (Check)
Specifically not to use any server based DB lookup authentication (check)

The only problem is that I want all users to go through Squid, even visiting
users.  A lot of our guys are not going to want to manually enter Proxy
settings each time they visit a site - I want it to be automatic.

Similarly, not every user logs into our server(s), so I can't deploy a
script or setting to the visiting computer as they simply connect to the
WiFi, or Cabled network point.

So basically, just connect up to the network, go on line and BAM, they have
to authenticate.  Just like in Starbucks!  (But without the coffee or wifi
charges!)

I looked at transparent settings, but I gather this doesn't work with
Authentication, so that's a no.

Now I'm focusing on how to get the clients to auto detect the squid box. 
But I can't fathom how that's going to work.  If the machines don't know

it's there, how can squid make itself known to them?

Ideally (and bear in mind my lack of knowledge at this stage) I would like
to just have my DHCP tell the clients that the squid box is the default
gateway and solve it that way, but again, I'm learning that the proxy
doesn't work that way - it's not a router, right?

Hope that makes sense, any help appreciated.  But in the meantime, I'll get
my head back in the manual!

Cheers
  


Look into WPAD 
(http://en.wikipedia.org/wiki/Web_Proxy_Autodiscovery_Protocol) or a 
captive portal like WiFiDog 
(http://en.wikipedia.org/wiki/WiFiDog_Captive_Portal) or the Squid 
session helper (check the archives).


Here's the condensed version of what I have experienced with WPAD.  It 
all assumes that the proxy settings have not been changed from the 
shipping default in the browsers.


Using a Windows (98/2000/XP) machine and Internet Explorer, the DHCP 
option 252 is honored.  DNS (wpad.domainname.com) is used in the absence 
of the DHCP option 252.  Firefox (2 or 3) on a Windows (98/2000/XP) 
machine or OS X (10.4 for sure) the DHCP option 252 is ignored, DNS is 
used exclusively.  Safari on Windows (98/2000/XP) or OS X ignores both 
DHCP and DNS and must be explicitly configured to use a statically 
defined PAC (http://en.wikipedia.org/wiki/Proxy_auto-config) file.


My suggestion is to have a webserver assigned to 
http://wpad.yourdomain.tld that serves a PAC file when 
http://wpad.yourdomain.tld/wpad.dat OR 
http://wpad.yourdomain.tld/wpad.da is requested.  This will 
(transparently) catch the majority of web browsers.  For the rest, you 
should intercept outbound port 80 traffic and redirect it to a page that 
describes how to set their browser back to defaults (or how to set their 
browser to explicitly grab the PAC file).
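
For illustration, a minimal wpad.dat along those lines might look like the 
following - the proxy host/port and the local address range are placeholders 
to adapt:

function FindProxyForURL(url, host) {
    // Local network goes direct (placeholder range).
    if (isInNet(host, "192.168.0.0", "255.255.0.0"))
        return "DIRECT";
    // Everything else goes via the proxy, falling back to DIRECT
    // if the proxy is unreachable.
    return "PROXY proxy.yourdomain.tld:3128; DIRECT";
}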


Chris


Re: [squid-users] squidclient -follow_x_forwarded_for

2009-04-22 Thread Chris Robertson

Alejandro Martinez wrote:

Hi,

This is my first post.

I have two proxies

Network (Users) -> ProxyA (sibling) -> ProxyB (parent)



In proxyA I have:
 forwarded_for on

In ProxyB I have:
 follow_x_forwarded_for allow all


This should NOT be an allow all.  Since you only have one child proxy, 
you should only allow follow_x_forwarded_for for that specific IP.


acl childProxy src 192.168.18.92
follow_x_forwarded_for allow childProxy


acl_uses_indirect_client on
log_uses_indirect_client on
delay_pool_uses_indirect_client on

ProxyA - Squid Cache: Version 2.5.STABLE14
   configure options:  --build=i686-redhat-linux-gnu 
--host=i686-redhat-linux-gnu --target=i386-redhat-linux-gnu 
--program-prefix= --prefix=/usr --exec-prefix=/usr 
--bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc 
--datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib 
--libexecdir=/usr/libexec --localstatedir=/var 
--sharedstatedir=/usr/com --mandir=/usr/share/man 
--infodir=/usr/share/info --exec_prefix=/usr --bindir=/usr/sbin 
--libexecdir=/usr/lib/squid --localstatedir=/var 
--sysconfdir=/etc/squid --enable-poll --enable-snmp 
--enable-removal-policies=heap,lru 
--enable-storeio=aufs,coss,diskd,null,ufs --enable-ssl 
--with-openssl=/usr/kerberos --enable-delay-pools 
--enable-linux-netfilter --with-pthreads 
--enable-ntlm-auth-helpers=SMB,winbind 
--enable-external-acl-helpers=ip_user,ldap_group,unix_group,wbinfo_group,winbind_group 
--enable-auth=basic,ntlm --with-winbind-auth-challenge 
--enable-useragent-log --enable-referer-log 
--disable-dependency-tracking --enable-cachemgr-hostname=localhost 
--enable-ident-lookups --enable-truncate --enable-underscores 
--datadir=/usr/share 
--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SMB,YP,getpwnam,multi-domain-NTLM,SASL,winbind 
--enable-fd-config --enable-arp-acl



ProxyB -  Squid Cache: Version 2.6.STABLE22
configure options:  '--enable-ssl' 
'--enable-follow-x-forwarded-for' '--enable-delay-pools' 
'--enable-arp-acl' '--enable-linux-netfilter'




My problem is, I can see the original IP of the users in access.log, 
but when I do a "squidclient -U user -W password mgr:active_requests" 
(in ProxyB) I only see one entry



HTTP/1.0 200 OK
Server: squid/2.6.STABLE22
Date: Mon, 23 Mar 2009 21:07:15 GMT
Content-Type: text/plain
Expires: Mon, 23 Mar 2009 21:07:15 GMT
Last-Modified: Mon, 23 Mar 2009 21:07:15 GMT
X-Cache: MISS from proxyE1.equital.com
Via: 1.0 proxyE1.equital.com:3128 (squid/2.6.STABLE22)
Proxy-Connection: close

Connection: 0x8f1bfd0
FD 12, read 117, wrote 0
FD desc: cache_object://localhost/active_requests
in: buf 0x8f33cf8, offset 0, size 4096
peer: 127.0.0.1:33086
me: 127.0.0.1:3128
nrequests: 1
defer: n 0, until 0
uri cache_object://localhost/active_requests
log_type TCP_MISS
out.offset 0, out.size 0
req_sz 117
entry 0x8f22dc8/82AFF239F7FDD8D3ED9A797B5AEE2340
old_entry (nil)/N/A
start 1237842435.324518 (0.00 seconds ago)
username -
delay_pool 0

Can squidclient not see the forwarded addresses of the clients? Am I 
missing something?


At this time there was just one active request, that being the Squid 
client (on localhost) requesting information about active requests...  I 
have no idea if the cache_manager menu honors the X-Forwarded-For 
header, but I would imagine not.  The active_requests list includes port 
numbers, and so probably uses the raw TCP connection data.



Thanks a lot


Chris


Re: [squid-users] visible_hostname versus unique_hostname

2009-04-22 Thread Chris Robertson

Matus UHLAR - fantomas wrote:

Hello,

I was searching for the logic of setting visible_hostname and
unique_hostname. I found out that the value of unique_hostname is set by
calling the getMyHostname() function, which returns the value of
visible_hostname, if it's set. However, I would prefer not to do this - to
use the autodetected hostname, and only change visible_hostname in the
configuration file.

My point is that we use different /etc/hosts files on different systems for
configuring several system services (not just squid) to run on different IPs
with the same configuration files, e.g.:

- hosts file:
195.168.1.136   proxy1.nextra.sk proxy.nextra.sk

- squid config:
http_port proxy.nextra.sk:3128
tcp_outgoing_address proxy.nextra.sk
udp_incoming_address proxy.nextra.sk

visible_hostname proxy.nextra.sk

Squid could resolve its own unique hostname to proxy1.nextra.sk, if it
did not take the value of visible_hostname. That would allow me to use _the
same_ config file on more machines, which would make the administration much
easier. However, because of this logic it's impossible, and I _must_ keep more
configuration files, no matter what I do to make that easier.
  


From http://www.squid-cache.org/Versions/v3/3.0/cfgman/:

 Configuration options can be included using the "include" directive.
 Include takes a list of files to include. Quoting and wildcards is
 supported.

 For example,

 include /path/to/included/file/squid.acl.config

 Includes can be nested up to a hard-coded depth of 16 levels.
 This arbitrary restriction is to prevent recursive include references
 from causing Squid entering an infinite loop whilst trying to load
 configuration files.


So you could define unique_hostname in a file (different on each server) 
that is included by the main config file (which is the same on all).  
Squid 2.7 also supports the include directive.
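
As a sketch, with the hostnames from the example above:

# squid.conf - identical on every machine:
visible_hostname proxy.nextra.sk
include /etc/squid/local.conf

# /etc/squid/local.conf - differs per machine, e.g. on the first box:
unique_hostname proxy1.nextra.sk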



I would like to ask, could the *hostname logic be changed, so people could
set visible_hostname and leave unique_hostname to rely on the internal
logic? Should I file a bug report for this?

Thank you.
  


Chris



[squid-users] Auto Detect Proxy in Browser, visiting users.

2009-04-22 Thread gavguinness

Hi

I'm new to Squid.  New in the sense that this time yesterday, I didn't know
what Squid was.  I knew what I wanted to achieve though, and I've achieved
most of this today using Squid and a few helpful online guides...

To have users prompted to authenticate when they start their browser (Check)
To log their activity in a log file (Check)
Not to have to install any software on the PC (Check)
Specifically not to use any server based DB lookup authentication (check)

The only problem is that I want all users to go through Squid, even visiting
users.  A lot of our guys are not going to want to manually enter Proxy
settings each time they visit a site - I want it to be automatic.

Similarly, not every user logs into our server(s), so I can't deploy a
script or setting to the visiting computer as they simply connect to the
WiFi, or Cabled network point.

So basically, just connect up to the network, go on line and BAM, they have
to authenticate.  Just like in Starbucks!  (But without the coffee or wifi
charges!)

I looked at transparent settings, but I gather this doesn't work with
Authentication, so that's a no.

Now I'm focusing on how to get the clients to auto detect the squid box. 
But I can't fathom how that's going to work.  If the machines don't know
it's there, how can squid make itself known to them?

Ideally (and bear in mind my lack of knowledge at this stage) I would like
to just have my DHCP tell the clients that the squid box is the default
gateway and solve it that way, but again, I'm learning that the proxy
doesn't work that way - it's not a router, right?

Hope that makes sense, any help appreciated.  But in the meantime, I'll get
my head back in the manual!

Cheers


-- 
View this message in context: 
http://www.nabble.com/Auto-Detect-Proxy-in-Browser%2C-visiting-users.-tp23177583p23177583.html
Sent from the Squid - Users mailing list archive at Nabble.com.



[squid-users] Squid Ignoring ESI

2009-04-22 Thread James Ellis

I am trying to use the ESI parser in Squid.  I have compiled with 
"--enable-esi" and set "esi_parser custom" in my squid.conf file.

Through the squid client, I can access a JSP page running on my local machine, 
but I am unable to parse ESI pages.  

Questions:

1) Is there a set of instructions anywhere on how to use ESI and Squid 
together?  If not I'd be happy to piece together what I have (if I ever get it 
actually working).

2) I read somewhere that you need to set the header "Surrogate-Control", so 
I've tried the following:

response.setHeader("Surrogate-Control", "no-store, content=\"ESI/1.0\"");

In this case the esi tags are just ignored.

response.setHeader("Surrogate-Control", "no-store, content='ESI/1.0'");

This crashes my Squid with the following message "assertion failed: 
HttpHeaderTools.cc:355: "*start == '"'"
Aborted (core dumped)"

Are either of these "Surrogate-Control" header values correct?

3) Are there any other squid.conf settings required other than "esi_parser 
custom"?


Thanks,
Jim 


Re: [squid-users] Invalidating of a resource cached with a POST request

2009-04-22 Thread Mark Nottingham

Squid2-HEAD does this. See:
  http://www.squid-cache.org/Versions/v2/HEAD/changesets/12355.patch
(be aware that that has dependencies on several other changesets on  
HEAD)


Cheers,



On 23/04/2009, at 1:42 AM, pgrisolano.ext wrote:



Hello,

I would like to know if with SQUID it is possible to invalidate a
resource cached via a GET when a POST, PUT or DELETE request arrives on
the same URI.

Here is an example of what I would do:
* A client sends a GET request on a page, e.g. /mapage1
* The response is cached by the SQUID proxy
* The site manager sends a POST request to modify this resource; the
resource is removed from the SQUID cache

In HTTP 1.1 it is theoretically possible to do this (RFC 2616 sec 13.10),
but from my research SQUID does not implement this recommendation for
the POST request (it's OK for PUT and DELETE requests).

(I used the 2.6 version of SQUID)

thank you for your help
Philippe


--
Mark Nottingham   m...@yahoo-inc.com




Re: [squid-users] HTCP logging?

2009-04-22 Thread Mark Nottingham

No there's not. See:
  http://www.squid-cache.org/bugs/show_bug.cgi?id=2627


On 23/04/2009, at 7:59 AM, Dean Weimer wrote:

Working on testing a child/parent proxy setup using HTCP, I was  
wondering if there is any way to see a log of the HTCP requests on  
the parent similar to how you see the ICP requests in the access log?


Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co


--
Mark Nottingham   m...@yahoo-inc.com




[squid-users] HTCP logging?

2009-04-22 Thread Dean Weimer
Working on testing a child/parent proxy setup using HTCP, I was wondering if 
there is any way to see a log of the HTCP requests on the parent similar to how 
you see the ICP requests in the access log?

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co


[squid-users] squidclient -follow_x_forwarded_for

2009-04-22 Thread Alejandro Martinez

Hi,

This is my first post.

I have two proxies

Network (Users) -> ProxyA (sibling) -> ProxyB (parent)



In proxyA I have:
 forwarded_for on

In ProxyB I have:
 follow_x_forwarded_for allow all
acl_uses_indirect_client on
log_uses_indirect_client on
delay_pool_uses_indirect_client on

ProxyA - Squid Cache: Version 2.5.STABLE14
   configure options:  --build=i686-redhat-linux-gnu 
--host=i686-redhat-linux-gnu --target=i386-redhat-linux-gnu 
--program-prefix= --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin 
--sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share 
--includedir=/usr/include --libdir=/usr/lib --libexecdir=/usr/libexec 
--localstatedir=/var --sharedstatedir=/usr/com --mandir=/usr/share/man 
--infodir=/usr/share/info --exec_prefix=/usr --bindir=/usr/sbin 
--libexecdir=/usr/lib/squid --localstatedir=/var --sysconfdir=/etc/squid 
--enable-poll --enable-snmp --enable-removal-policies=heap,lru 
--enable-storeio=aufs,coss,diskd,null,ufs --enable-ssl 
--with-openssl=/usr/kerberos --enable-delay-pools 
--enable-linux-netfilter --with-pthreads 
--enable-ntlm-auth-helpers=SMB,winbind 
--enable-external-acl-helpers=ip_user,ldap_group,unix_group,wbinfo_group,winbind_group 
--enable-auth=basic,ntlm --with-winbind-auth-challenge 
--enable-useragent-log --enable-referer-log 
--disable-dependency-tracking --enable-cachemgr-hostname=localhost 
--enable-ident-lookups --enable-truncate --enable-underscores 
--datadir=/usr/share 
--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SMB,YP,getpwnam,multi-domain-NTLM,SASL,winbind 
--enable-fd-config --enable-arp-acl



ProxyB -  Squid Cache: Version 2.6.STABLE22
configure options:  '--enable-ssl' 
'--enable-follow-x-forwarded-for' '--enable-delay-pools' 
'--enable-arp-acl' '--enable-linux-netfilter'




My problem is, I can see the original IP of the users in access.log, but 
when I do a "squidclient -U user -W password mgr:active_requests" (in 
ProxyB) I only see one entry



HTTP/1.0 200 OK
Server: squid/2.6.STABLE22
Date: Mon, 23 Mar 2009 21:07:15 GMT
Content-Type: text/plain
Expires: Mon, 23 Mar 2009 21:07:15 GMT
Last-Modified: Mon, 23 Mar 2009 21:07:15 GMT
X-Cache: MISS from proxyE1.equital.com
Via: 1.0 proxyE1.equital.com:3128 (squid/2.6.STABLE22)
Proxy-Connection: close

Connection: 0x8f1bfd0
FD 12, read 117, wrote 0
FD desc: cache_object://localhost/active_requests
in: buf 0x8f33cf8, offset 0, size 4096
peer: 127.0.0.1:33086
me: 127.0.0.1:3128
nrequests: 1
defer: n 0, until 0
uri cache_object://localhost/active_requests
log_type TCP_MISS
out.offset 0, out.size 0
req_sz 117
entry 0x8f22dc8/82AFF239F7FDD8D3ED9A797B5AEE2340
old_entry (nil)/N/A
start 1237842435.324518 (0.00 seconds ago)
username -
delay_pool 0

Can squidclient not see the forwarded addresses of the clients? Am I missing 
something?



Thanks a lot


[squid-users] Allow access to port 8080 from only one or two public IPs

2009-04-22 Thread david
Hello Amos and fellow Squid users, I am running Squid 3.0. I would like to 
block access to port 8080 except for one or two public IPs and one or two 
internal class C IPs (192.168.1.1/24). Please advise if you have some definite 
caveats to share. Thanks, David.


OS: CentOS 5.2
Squid: 3.0
port 8080: Tomcat 5.5 web application (a blog).


[squid-users] visible_hostname versus unique_hostname

2009-04-22 Thread Matus UHLAR - fantomas
Hello,

I was searching for the logic of setting visible_hostname and
unique_hostname. I found out that the value of unique_hostname is set by
calling the getMyHostname() function, which returns the value of
visible_hostname, if it's set. However, I would prefer not to do this - to
use the autodetected hostname, and only change visible_hostname in the
configuration file.

My point is that we use different /etc/hosts files on different systems for
configuring several system services (not just squid) to run on different IPs
with the same configuration files, e.g.:

- hosts file:
195.168.1.136   proxy1.nextra.sk proxy.nextra.sk

- squid config:
http_port proxy.nextra.sk:3128
tcp_outgoing_address proxy.nextra.sk
udp_incoming_address proxy.nextra.sk

visible_hostname proxy.nextra.sk

Squid could resolve its own unique hostname to proxy1.nextra.sk, if it
did not take the value of visible_hostname. That would allow me to use _the
same_ config file on more machines, which would make the administration much
easier. However, because of this logic it's impossible, and I _must_ keep more
configuration files, no matter what I do to make that easier.

I would like to ask, could the *hostname logic be changed, so people could
set visible_hostname and leave unique_hostname to rely on the internal
logic? Should I file a bug report for this?

Thank you.
-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
We are but packets in the Internet of life (userfriendly.org)


Re: [squid-users] caching behavior during COSS rebuild

2009-04-22 Thread Chris Woodfield
Just tested this - -F appears to work for aufs rebuilds but not COSS  
rebuilds.


To reproduce:

- Config squid with an aufs and a coss store, like so:

cache_dir aufs /usr/squidcache 5000 16 256 min-size=50
cache_dir coss /usr/squidcache/coss1.dat 500 block-size=4096 max-size=50 membufs=100

cache_swap_log /usr/squidcache/%s.swap

- Start squid, send it a bunch of queries (I use a script that grabs  
random URLs from recent access.log files)


- Stop squid, delete /usr/squidcache/usr.squidcache.swap

- Start sending squid requests (again, the random URL script above)

- Restart squid with -F

What I'm seeing when I do the above is that as soon as the AUFS stores  
finish rebuilding, squid starts answering queries - but the COSS isn't  
rebuilt yet, and until that completes, all objects < 500K are cache  
misses (SO_FAIL in store.log).


Will get a bugzilla entry in place for this.

-C

On Apr 22, 2009, at 10:00 AM, Chris Woodfield wrote:


...and sure enough, it's right there in -h output...

cache$ /usr/local/squid/sbin/squid -h
...
  -F    Don't serve any requests until store is rebuilt.
...

/me goes to write "I will RTFM Before Posting To squid-users" 100  
times on the whiteboard... :)


-C

On Apr 22, 2009, at 9:56 AM, Adrian Chadd wrote:


Well, I killed the swaplog writing entirely in Lusca - the COSS
rebuild code doesn't read from it (it was broken for various reasons
revolving mostly around code bitrot IIRC.)

There's a flag you can pass Squid to not handle requests until the
store is rebuilt - its the "-F" flag.

I'm fixing the store rebuild times in Lusca-HEAD at the moment and
this includes writing some new COSS rebuild-from-index, rebuild-from-log,
and rebuild-from-rawdevice tools.



Adrian


On Wed, Apr 22, 2009, Chris Woodfield wrote:


On Apr 22, 2009, at 4:56 AM, Amos Jeffries wrote:


Chris Woodfield wrote:

So I'm running with COSS under 2.7STABLE6, we've noticed (as I can
see others have, teh Googles tell me so) that the COSS rebuild a.
happens every time squid is restarted, and b. takes quite a while
if the COSS stripes are large. However, I've noticed that while the
stripes are being rebuilt, squid still listens for and handles
requests - it just SO_FAILs on every object that would normally get
saved to a COSS stripe. So much for that hit ratio.
SO - the questions are:
1. Is there *any* way to prevent the COSS rebuild if squid is
terminated normally?


The indexes are stored in swap.state. Check that it is being done
properly by your Squid.



This could be the issue - when I exit squid, I notice that my
$coss_file.dat and $coss_file.dat.last-clean files all have zero  
size.

Any idea why this might be happening?

The relevant section of our squid.conf reads as follows:

cache_dir aufs /usr/squidcache.0/cache/ 75 16 256 min-size=100
cache_dir coss /usr/squidcache.0/cache/coss1.dat 3 block-size=4096 max-size=100 membufs=100
cache_dir coss /usr/squidcache.0/cache/coss2.dat 3 block-size=4096 max-size=100 membufs=100
cache_dir coss /usr/squidcache.0/cache/coss3.dat 3 block-size=4096 max-size=100 membufs=100

cache_swap_log /usr/squidcache.0/cache/%s

Thanks,

-C

2. Is there a way to prevent squid from handling requests until the
COSS stripe is fully rebuilt (this is obviously not good if you
don't have redundant squids, but that's not a problem for us)?


I believe it's possible.  If it's not a local failure to find
swap.state for the COSS dir then it will be a code fix.
Unfortunately we developers are no longer actively working on
Squid-2 without a paid support contract. Also Adrian, our storage
expert who would be the best to ask, has retired from active
alterations.

Amos
--
Please be using
Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
Current Beta Squid 3.1.0.7



--
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial  
Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in  
WA -








[squid-users] Invalidating of a resource cached with a POST request

2009-04-22 Thread pgrisolano.ext
Hello,

I would like to know if with SQUID it is possible to invalidate a resource
cached via a GET when a POST, PUT or DELETE request arrives on the same
URI.

Here is an example of what I would do:
* A client sends a GET request on a page, e.g. /mapage1
* The response is cached by the SQUID proxy
* The site manager sends a POST request to modify this resource; the
resource is removed from the SQUID cache

In HTTP 1.1 it is theoretically possible to do this (RFC 2616 sec 13.10),
but from my research SQUID does not implement this recommendation for the
POST request (it's OK for PUT and DELETE requests).

(I used the 2.6 version of SQUID)

thank you for your help
Philippe


[squid-users] using icp_hit_stale on small cache farm

2009-04-22 Thread Matus UHLAR - fantomas
Hello,

I have 4 cache servers on the same network, configured as siblings, with
cache digests turned on. AFAIK using cache digests (nearly) wipes out the
benefits of ICP. Now I am not sure, if:

- I should turn ICP off
- I should turn icp_hit_stale on (allow_miss is off)
- should I leave it as it is?

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
- Have you got anything without Spam in it?
- Well, there's Spam egg sausage and Spam, that's not got much Spam in it.


Re: [squid-users] Getting error msgs when trying to start squid

2009-04-22 Thread Amos Jeffries

Henrique M. wrote:


Amos Jeffries-2 wrote:

  acl localhost src 192.168.2.5 # 192.168.2.5 Server IP, 192.168.2.1 Modem
IP

"localhost" is a special term used in networking to mean the IPs 127.0.0.1
and sometimes ::1 as well. When defining an ACL for 'public' squid box IPs
its better to use a different name. The localnet definition covers the
same public IPs anyway so redefining it is not a help here.



So what do you suggest? Should I just erase this line or change it?


Make it back to:
  acl localhost src 127.0.0.1




Amos Jeffries-2 wrote:

  http_access allow all

This opens the proxy to access from any source on the internet at all.
Zero inbound security. Not good for a long-term solution. I'd suggest
testing with that as a "deny all" to make sure we don't get a
false-success.



Will do that. How about the "icp_access"? What does this command do? Should
I leave it "allow all"?


Allows other machines which have your squid set as a cache_peer to send 
ICP requests to you and get replies back. Current Squid defaults it to off 
for extra security. Unless you need it, do: icp_access deny all





joost.deheer wrote:

Define "doesn't work". Clients get an error? Won't start? Something else?



Squid seems to start, but clients can't browse the internet. They get the
default error msg that the browser shows when it can't load the website.
This actually got me thinking: am I setting up the browser correctly? I'm
typing the server's IP for the proxy address and "3128" for the proxy port,
is that correct?


I believe so yes.
 * Make sure it's set for HTTP, HTTPS, FTP, and Gopher but not SOCKS 
proxy settings. (some may not be present).


 * Check the testing client machine can get to squid (ping or such).
Check the cache.log to see if Squid is failing or busy at the time you 
are checking.


 * make sure that squid is actually running and opened port 3128.
  "netstat -antup | grep 3128" or similar commands should say.




joost.deheer wrote:

You could also try to start the proxy with 'squid -N' to start squid as a
console application instead of  in daemon mode. The  errors should then
appear on your screen.



How should I do that? I tried to start squid with "/etc/init.d/squid -N
start" and "/etc/init.d/squid -N" but it didn't work.  I ended up finding out
that I could check squid's status and to my surprise I got this message "*
squid is not running.".  So how do I start squid so it will show me the
error msgs on screen?


Just "squid -N -Y -d 1" shoudl work.  If not find the path to *bin/squid 
and run with the full file path/name.

 Usually "locate bin/squid" says where squid actually is.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
  Current Beta Squid 3.1.0.7


Re: [squid-users] problems with SQUID 3.x and IBM Proventia

2009-04-22 Thread Udo Rader

Amos Jeffries wrote:
So of course the problem is proventia corrupting the HTTP headers and 
we will raise an issue about that with IBM.


But for the time being: is there a chance to make squid more 
"tolerant" about those kinds of problems? Unsurprisingly I did not 
find any fitting config options :-)


Not nearly as easy as it will be for IBM to issue a fix for it. Or even 
to replace the box with free software that works well.


Hehe, sure, no objections, it is just the world being far from perfect :-)

Not also without opening some potential data-injection and cache 
poisoning flaws into Squid.


Consider what happens with:

HTTP/1.1 200 OK
Bwahaha: "
Cache-Control: private

...something you really did not want public...
.

vs:

HTTP/1.1 200 OK
Content-Type: "fu
bar: tender: and: wine"
Cache-Control: private


Hmm, reading the specs for HTTP message headers [1] I think this could 
be done without introducing security issues. As per the specification your 
last example would read correctly:


---CUT---
HTTP/1.1 200 OK
Content-Type: "fu
 bar: tender: and: wine"
Cache-Control: private
---CUT---

note the leading whitespace.

[1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.2

--
Udo Rader, CTO
http://www.bestsolution.at
http://riaschissl.blogspot.com


Re: [squid-users] CONNECT method support(for https) using squid3.1.0.6 + tproxy4

2009-04-22 Thread Amos Jeffries

Mikio Kishi wrote:

Hi, Amos


Ah, you need the follow_x_forwarded_for feature on Proxy(1).


That's right, I know about that, but I'd like to use "source address
spoofing"...

Just the following change alone would settle my anxiety.


lol.



replacing, in tunnelStart() (tunnel.cc):


   sock = comm_openex(SOCK_STREAM,
  IPPROTO_TCP,
  temp,
  COMM_NONBLOCKING,
  getOutgoingTOS(request),
  url);


with


   if (request->flags.spoof_client_ip) {
   sock = comm_openex(SOCK_STREAM,
  IPPROTO_TCP,
  temp,
  (COMM_NONBLOCKING|COMM_TRANSPARENT),
  getOutgoingTOS(request),
  url);
   } else {
   sock = comm_openex(SOCK_STREAM,
  IPPROTO_TCP,
  temp,
  COMM_NONBLOCKING,
  getOutgoingTOS(request),
  url);
   }


I think it has no harmful effects, and I would love to see it merged.
Would you modify that?



Only slightly. The regular way is to move the COMM_NONBLOCKING flag into a 
local variable which gets |= COMM_TRANSPARENT done to it when spoofing 
(less code to break).
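
That is, a sketch of the suggested shape, reusing the names from the patch
above:

    // Open non-blocking as before; OR in transparency only when
    // spoofing, so there is a single comm_openex() call.
    int flags = COMM_NONBLOCKING;

    if (request->flags.spoof_client_ip)
        flags |= COMM_TRANSPARENT;

    sock = comm_openex(SOCK_STREAM,
                       IPPROTO_TCP,
                       temp,
                       flags,
                       getOutgoingTOS(request),
                       url);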


But essentially I think so.  Have you actually tested this at all?

Once this is confirmed no side-effects I'll merge.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
  Current Beta Squid 3.1.0.7


Re: [squid-users] caching cgi_bin in 3.0

2009-04-22 Thread Matus UHLAR - fantomas
> Matus UHLAR - fantomas wrote:
>> I'm upgrading to 3.0 (finally) and I see that the new refresh_pattern
>> default was added in the config file:
>>
>> refresh_pattern (cgi-bin|\?)   0   0%  0
>>
>> I hope this is just to always verify the dynamic content, and should not
>> have any impact on caching it, if it's cacheable, correct?

On 21.04.09 09:41, Chris Robertson wrote:
> Correct.  If the dynamic content gives a "Cache-Control: max-age" and/or  
> a "Expires" header that allows caching, the refresh pattern will not  
> prevent caching it.

So it is a replacement for the former config defaults:

acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY

and should lead to more effective caching of dynamic content, correct?
Perfect!
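
For illustration, a dynamic (cgi-bin or query-string) response carrying
explicit freshness headers like these stays cacheable under the new default
- the values are placeholders:

HTTP/1.1 200 OK
Content-Type: text/html
Cache-Control: max-age=300
Expires: Wed, 22 Apr 2009 09:10:00 GMT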
-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Posli tento mail 100 svojim znamim - nech vidia aky si idiot
Send this email to 100 your friends - let them see what an idiot you are


Re: [squid-users] problems with SQUID 3.x and IBM Proventia

2009-04-22 Thread Amos Jeffries

Udo Rader wrote:

Hi,

one of our customers has an issue with a Debian Lenny based squid 3.x in 
connection with an IBM Proventia security appliance.


The setup is like this:

internet <-> proventia <-> squid

Now proventia comes with a transparent web content filter, removing 
dangerous things (viruses, ...) from HTTP traffic.


Unfortunately this transparent filter rewrites the HTTP headers and 
sometimes it even corrupts them in a way that squid cannot deal with, so 
squid refuses to further process the content. The cache.log then contains 
a message like this one:


---CUT---
2009/04/22 11:09:23| WARNING: HTTP header contains NULL characters 
{Date: Wed, 22 Apr 2009 09:09:23 GMT

Server: Apache/2.0.53 (Linux/SUSE)
X-Powered-By: PHP/4.3.10
Content-Disposition: inline; filename="Lady.jpg
---CUT---

The problem probably is the missing trailing double quote at the end of 
the filename.


I've verified the problem using telnet:

on the proxy server itself, connecting through proventia:
CUT
Proxy2:~# telnet www.example.com 80
Trying 192.168.1.0...
Connected to www.example.com
Escape character is '^]'.
GET 
/main.php?g2_view=core.DownloadItem&g2_itemId=20129&g2_serialNumber=2 
HTTP/1.0


HTTP/1.1 200 OK
Date: Wed, 22 Apr 2009 09:02:40 GMT
Server: Apache/2.0.53 (Linux/SUSE)
X-Powered-By: PHP/4.3.10
Content-Disposition: inline; filename="Lady.jpg
Last-Modified: Sat, 04 Apr 2009 11:46:36 GMT
Expires: Thu, 22 Apr 2010 09:02:40 GMT
Connection: close
Content-Length: 8234
Content-Type: image/jpeg
CUT

on the proxy server itself, connecting directly to the server (using a 
ssh tunnel at port 8088)

CUT
Proxy2:~# telnet localhost 8088
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
GET 
/main.php?g2_view=core.DownloadItem&g2_itemId=20129&g2_serialNumber=2 
HTTP/1.0


HTTP/1.1 200 OK
Date: Wed, 22 Apr 2009 09:03:03 GMT
Server: Apache/2.0.53 (Linux/SUSE)
X-Powered-By: PHP/4.3.10
Content-Disposition: inline; filename="Lady.jpg"
Last-Modified: Sat, 04 Apr 2009 11:46:36 GMT
Content-length: 8234
Expires: Thu, 22 Apr 2010 09:03:03 GMT
Connection: close
Content-Type: image/jpeg
CUT

So of course the problem is proventia corrupting the HTTP headers and we 
will raise an issue about that with IBM.


But for the time being: is there a chance to make squid more "tolerant" 
about those kinds of problems? Unsurprisingly I did not find any 
fitting config options :-)




Not nearly as easy as it will be for IBM to issue a fix for it. Or even 
to replace the box with free software that works well.
Not also without opening some potential data-injection and cache 
poisoning flaws into Squid.


Consider what happens with:

HTTP/1.1 200 OK
Bwahaha: "
Cache-Control: private

...something you really did not want public...
.

vs:

HTTP/1.1 200 OK
Content-Type: "fu
bar: tender: and: wine"
Cache-Control: private



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
  Current Beta Squid 3.1.0.7


[squid-users] squid ldap auth osx

2009-04-22 Thread jeff donovan

Greetings

Working on creating a simple web access cache with authentication. I  
want to use my current LDAP directory to get login info.


Running squid 3.0 stable 13.

So close: the client's browser pops up and asks for credentials. The  
username and pass are given and the browser prompts again, never  
giving access.

Access logs tell me nothing:
 TCP_DENIED/407 2522 GET http://livepage.apple.com/ joeusername  
NONE/- text/html




auth_param basic program /usr/local/squid/libexec/squid_ldap_auth -b  
"dc=host,dc=my,dc=domain,dc=com" host.my.domain.com

auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours

acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl ldapauth proxy_auth REQUIRED
acl localnet src 10.135.0.0/16  # noc
#
#
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

http_access allow ldapauth
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny to_localhost
http_access allow localnet
http_access deny all



Re: [squid-users] Getting error msgs when trying to start squid

2009-04-22 Thread Henrique M.


Amos Jeffries-2 wrote:
> 
>   acl localhost src 192.168.2.5 # 192.168.2.5 Server IP, 192.168.2.1 Modem IP
> 
> "localhost" is a special term used in networking to mean the IPs 127.0.0.1
> and sometimes ::1 as well. When defining an ACL for 'public' squid box IPs
> it's better to use a different name. The localnet definition covers the
> same public IPs anyway so redefining it is not a help here.
> 

So what do you suggest? Should I just erase this line or change it?
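
A sketch of the rename being suggested; the new ACL name is just 
illustrative:

acl proxy_ip src 192.168.2.5   # the squid box's own IP, formerly mis-named "localhost"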


Amos Jeffries-2 wrote:
> 
>   http_access allow all
> 
> This opens the proxy to access from any source on the internet at all.
> Zero inbound security. Not good for a long-term solution. I'd suggest
> testing with that as a "deny all" to make sure we don't get a
> false-success.
> 

Will do that. How about "icp_access"? What does this directive do? Should
I leave it as "allow all"?


joost.deheer wrote:
> 
> Define "doesn't work". Clients get an error? Won't start? Something else?
> 

Squid seems to start, but clients can't browse the internet. They get the
default error msg that the browser shows when it can't load a website.
This actually got me thinking: am I setting up the browser correctly? I'm
typing the server's IP for the proxy address and "3128" for the proxy port,
is that correct?


joost.deheer wrote:
> 
> You could also try to start the proxy with 'squid -N' to start squid as a
> console application instead of  in daemon mode. The  errors should then
> appear on your screen.
> 

How should I do that? I tried to start squid with "/etc/init.d/squid -N
start" and "/etc/init.d/squid -N", but it didn't work. I ended up finding out
that I could check squid's status, and to my surprise I got this message "*
squid is not running.". So how do I start squid so it will show me the
error msgs on screen?
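
For reference: the -N flag belongs to the squid binary itself, not the init 
script. A sketch; the binary path varies by distribution, /usr/sbin/squid 
is common on Debian/Ubuntu:

/etc/init.d/squid stop    # stop any running instance first
/usr/sbin/squid -N -d1    # run in the foreground; -d1 prints debug/errors to the terminal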
-- 
View this message in context: 
http://www.nabble.com/Getting-error-msgs-when-trying-to-start-squid-tp22933693p23175470.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] caching behavior during COSS rebuild

2009-04-22 Thread Chris Woodfield

...and sure enough, it's right there in -h output...

cache$ /usr/local/squid/sbin/squid -h
...
   -F  Don't serve any requests until store is rebuilt.
...

/me goes to write "I will RTFM Before Posting To squid-users" 100  
times on the whiteboard... :)


-C
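
For the record, a minimal sketch of using that at startup; the install 
path is taken from the -h output above:

# -F: don't serve any requests until the store is rebuilt
/usr/local/squid/sbin/squid -F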

On Apr 22, 2009, at 9:56 AM, Adrian Chadd wrote:


Well, I killed the swaplog writing entirely in Lusca - the COSS
rebuild code doesn't read from it (it was broken for various reasons
revolving mostly around code bitrot IIRC.)

There's a flag you can pass Squid to not handle requests until the
store is rebuilt - it's the "-F" flag.

I'm fixing the store rebuild times in Lusca-HEAD at the moment and
this includes writing some new COSS rebuild-from-index, rebuild-from-log
and rebuild-from-rawdevice tools.



Adrian


On Wed, Apr 22, 2009, Chris Woodfield wrote:


On Apr 22, 2009, at 4:56 AM, Amos Jeffries wrote:


Chris Woodfield wrote:

So I'm running with COSS under 2.7STABLE6, we've noticed (as I can
see others have, teh Googles tell me so) that the COSS rebuild a.
happens every time squid is restarted, and b. takes quite a while
if the COSS stripes are large. However, I've noticed that while the
stripes are being rebuilt, squid still listens for and handles
requests - it just SO_FAILs on every object that would normally get
saved to a COSS stripe. So much for that hit ratio.
SO - the questions are:
1. Is there *any* way to prevent the COSS rebuild if squid is
terminated normally?


The indexes are stored in swap.state. Check that it is being done
properly by your Squid.



This could be the issue - when I exit squid, I notice that my
$coss_file.dat and $coss_file.dat.last-clean files all have zero  
size.

Any idea why this might be happening?

The relevant section of our squid.conf reads as follows:

cache_dir aufs /usr/squidcache.0/cache/ 75 16 256 min-size=100
cache_dir coss /usr/squidcache.0/cache/coss1.dat 3 block-size=4096 max-size=100 membufs=100
cache_dir coss /usr/squidcache.0/cache/coss2.dat 3 block-size=4096 max-size=100 membufs=100
cache_dir coss /usr/squidcache.0/cache/coss3.dat 3 block-size=4096 max-size=100 membufs=100

cache_swap_log /usr/squidcache.0/cache/%s

Thanks,

-C


2. Is there a way to prevent squid from handling requests until the
COSS stripe is fully rebuilt (this is obviously not good if you
don't have redundant squids, but that's not a problem for us) ?


I believe it's possible.  If it's not a local failure to find
swap.state for the COSS dir then it will be a code fix.
Unfortunately we developers are no longer actively working on
Squid-2 without a paid support contract. Also Adrian, our storage
expert who would be the best to ask, has retired from active
alterations.

Amos
--
Please be using
Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
Current Beta Squid 3.1.0.7



--
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -






[squid-users] problems with SQUID 3.x and IBM Proventia

2009-04-22 Thread Udo Rader

Hi,

one of our customers has an issue with a Debian Lenny based squid 3.x in 
connection with an IBM Proventia security appliance.


The setup is like this:

internet <-> proventia <-> squid

Now proventia comes with a transparent web content filter, removing 
dangerous things (viruses, ...) from HTTP traffic.


Unfortunately this transparent filter rewrites the HTTP headers and 
sometimes even corrupts them in a way that squid cannot deal with, so it 
refuses to process the content further. The cache.log then contains 
a message like this one:


---CUT---
2009/04/22 11:09:23| WARNING: HTTP header contains NULL characters 
{Date: Wed, 22 Apr 2009 09:09:23 GMT

Server: Apache/2.0.53 (Linux/SUSE)
X-Powered-By: PHP/4.3.10
Content-Disposition: inline; filename="Lady.jpg
---CUT---

The problem probably is the missing trailing double quote at the end of 
the filename.


I've verified the problem using telnet:

on the proxy server itself, connecting through proventia:
CUT
Proxy2:~# telnet www.example.com 80
Trying 192.168.1.0...
Connected to www.example.com
Escape character is '^]'.
GET 
/main.php?g2_view=core.DownloadItem&g2_itemId=20129&g2_serialNumber=2 
HTTP/1.0


HTTP/1.1 200 OK
Date: Wed, 22 Apr 2009 09:02:40 GMT
Server: Apache/2.0.53 (Linux/SUSE)
X-Powered-By: PHP/4.3.10
Content-Disposition: inline; filename="Lady.jpg
Last-Modified: Sat, 04 Apr 2009 11:46:36 GMT
Expires: Thu, 22 Apr 2010 09:02:40 GMT
Connection: close
Content-Length: 8234
Content-Type: image/jpeg
CUT

on the proxy server itself, connecting directly to the server (using a 
ssh tunnel at port 8088)

CUT
Proxy2:~# telnet localhost 8088
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
GET 
/main.php?g2_view=core.DownloadItem&g2_itemId=20129&g2_serialNumber=2 
HTTP/1.0


HTTP/1.1 200 OK
Date: Wed, 22 Apr 2009 09:03:03 GMT
Server: Apache/2.0.53 (Linux/SUSE)
X-Powered-By: PHP/4.3.10
Content-Disposition: inline; filename="Lady.jpg"
Last-Modified: Sat, 04 Apr 2009 11:46:36 GMT
Content-length: 8234
Expires: Thu, 22 Apr 2010 09:03:03 GMT
Connection: close
Content-Type: image/jpeg
CUT

So of course the problem is proventia corrupting the HTTP headers and we 
will raise an issue about that with IBM.


But for the time being: is there a chance to make squid more "tolerant" 
of this kind of problem? Unsurprisingly, I did not find any 
fitting config options :-)


--
Udo Rader, CTO
http://www.bestsolution.at
http://riaschissl.blogspot.com


Re: [squid-users] caching behavior during COSS rebuild

2009-04-22 Thread Chris Woodfield


On Apr 22, 2009, at 4:56 AM, Amos Jeffries wrote:


Chris Woodfield wrote:
So I'm running with COSS under 2.7STABLE6, we've noticed (as I can  
see others have, teh Googles tell me so) that the COSS rebuild a.  
happens every time squid is restarted, and b. takes quite a while  
if the COSS stripes are large. However, I've noticed that while the  
stripes are being rebuilt, squid still listens for and handles  
requests - it just SO_FAILs on every object that would normally get  
saved to a COSS stripe. So much for that hit ratio.

SO - the questions are:
1. Is there *any* way to prevent the COSS rebuild if squid is  
terminated normally?


The indexes are stored in swap.state. Check that it is being done  
properly by your Squid.




This could be the issue - when I exit squid, I notice that my  
$coss_file.dat and $coss_file.dat.last-clean files all have zero size.  
Any idea why this might be happening?


The relevant section of our squid.conf reads as follows:

cache_dir aufs /usr/squidcache.0/cache/ 75 16 256 min-size=100
cache_dir coss /usr/squidcache.0/cache/coss1.dat 3 block-size=4096 max-size=100 membufs=100
cache_dir coss /usr/squidcache.0/cache/coss2.dat 3 block-size=4096 max-size=100 membufs=100
cache_dir coss /usr/squidcache.0/cache/coss3.dat 3 block-size=4096 max-size=100 membufs=100


cache_swap_log /usr/squidcache.0/cache/%s

Thanks,

-C

2. Is there a way to prevent squid from handling requests until the  
COSS stripe is fully rebuilt (this is obviously not good if you  
don't have redundant squids, but that's not a problem for us) ?


I believe it's possible.  If it's not a local failure to find  
swap.state for the COSS dir then it will be a code fix.  
Unfortunately we developers are no longer actively working on  
Squid-2 without a paid support contract. Also Adrian, our storage  
expert who would be the best to ask, has retired from active  
alterations.


Amos
--
Please be using
 Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
 Current Beta Squid 3.1.0.7





[squid-users] Tproxy v4 patch for squid 2.7 version

2009-04-22 Thread Visolve Squid Team

Hello all,

The Tproxy-4 patch for squid 2.7.STABLE6 has been released. Tproxy provides 
IP spoofing, which means that when a browser requests a URL, the client's 
IP is sent to the webserver instead of the proxy server's IP.


The patch is available at http://www.visolve.com/squid/squid-tproxy.php

Thanks
ViSolve Squid Team.
http://www.visolve.com
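
For context, a TPROXYv4 deployment usually pairs a tproxy-flagged listening 
port with kernel-level redirection. A sketch based on the standard kernel 
TPROXY recipe, not on the ViSolve patch itself; port numbers and marks are 
illustrative:

# squid.conf (squid built with tproxy support)
http_port 3129 tproxy

# shell: divert intercepted port-80 traffic to squid, keeping client IPs
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 80 \
  -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100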



Re: [squid-users] TCP_MISS/600 Squid 2.6S6 and Dansguardin

2009-04-22 Thread Marco Leone

Hi,

I'm still trying to solve this issue but I was not able to find a solution.

Here follows the log where SQUID assigns the 600 code that DansGuardian 
is not able to process:


2009/04/10 15:17:08| clientProcessRequest: GET 
'http://dst.domain.com/sc-security/registrazione.html'

2009/04/10 15:17:08| storeGet: looking up 03D7461E3ED75DC90510DB5F460962AB
2009/04/10 15:17:08| clientProcessRequest2: storeGet() MISS
2009/04/10 15:17:08| clientProcessRequest: TCP_MISS for 
'http://dst.domain.com/sc-security/registrazione.html'
2009/04/10 15:17:08| clientProcessMiss: 'GET 
http://dst.domain.com/sc-security/registrazione.html'
2009/04/10 15:17:08| storeCreateEntry: 
'http://dst.domain.com/sc-security/registrazione.html'

2009/04/10 15:17:08| creating rep: 0xc1ada08
2009/04/10 15:17:08| init-ing hdr: 0xc1ada48 owner: 2
2009/04/10 15:17:08| 0xc1ada48 lookup for 38
2009/04/10 15:17:08| 0xc1ada48 lookup for 9
2009/04/10 15:17:08| 0xc1ada48 lookup for 38
2009/04/10 15:17:08| 0xc1ada48 lookup for 9
2009/04/10 15:17:08| 0xc1ada48 lookup for 22
2009/04/10 15:17:08| new_MemObject: returning 0xc044c38
2009/04/10 15:17:08| new_StoreEntry: returning 0xfa2e9c0
2009/04/10 15:17:08| storeKeyPrivate: GET 
http://dst.domain.com/sc-security/registrazione.html
2009/04/10 15:17:08| storeHashInsert: Inserting Entry 0xfa2e9c0 key 
'144B5601AA27F008144ACF9D23554129'
2009/04/10 15:17:08| storeLockObject: key 
'144B5601AA27F008144ACF9D23554129' count=2
2009/04/10 15:17:08| storeClientCopy: 144B5601AA27F008144ACF9D23554129, 
seen 0, want 0, size 4096, cb 0x806cd00, cbdata 0xf999a20

2009/04/10 15:17:08| cbdataLock: 0xf999a20
2009/04/10 15:17:08| cbdataLock: 0xe6572c8
2009/04/10 15:17:08| storeClientCopy2: 144B5601AA27F008144ACF9D23554129
2009/04/10 15:17:08| storeClientCopy3: Waiting for more
2009/04/10 15:17:08| cbdataUnlock: 0xe6572c8
2009/04/10 15:17:08| aclCheckFast: list: (nil)
2009/04/10 15:17:08| aclCheckFast: no matches, returning: 1
2009/04/10 15:17:08| fwdStart: 
'http://dst.domain.com/sc-security/registrazione.html'
2009/04/10 15:17:08| storeLockObject: key 
'144B5601AA27F008144ACF9D23554129' count=3
2009/04/10 15:17:08| peerSelect: 
http://dst.domain.com/sc-security/registrazione.html
2009/04/10 15:17:08| storeLockObject: key 
'144B5601AA27F008144ACF9D23554129' count=4

2009/04/10 15:17:08| cbdataLock: 0xc3388e0
2009/04/10 15:17:08| peerSelectFoo: 'GET dst.domain.com'
2009/04/10 15:17:08| cbdataLock: 0x9a51718
2009/04/10 15:17:08| cbdataLock: 0xb213dc0
2009/04/10 15:17:08| cbdataValid: 0x9a51718
2009/04/10 15:17:08| aclCheck: checking 'always_direct allow ftp'
2009/04/10 15:17:08| aclMatchAclList: checking ftp
2009/04/10 15:17:08| aclMatchAcl: checking 'acl ftp proto FTP'
2009/04/10 15:17:08| aclMatchAclList: no match, returning 0
2009/04/10 15:17:08| cbdataUnlock: 0x9a51718
2009/04/10 15:17:08| aclCheck: NO match found, returning 0
2009/04/10 15:17:08| aclCheckCallback: answer=0
2009/04/10 15:17:08| cbdataValid: 0xb213dc0
2009/04/10 15:17:08| peerCheckAlwaysDirectDone: 0
2009/04/10 15:17:08| peerSelectFoo: 'GET dst.domain.com'
2009/04/10 15:17:08| peerCheckNetdbDirect: MY RTT = 0 msec
2009/04/10 15:17:08| peerCheckNetdbDirect: minimum_direct_rtt = 400 msec
2009/04/10 15:17:08| peerCheckNetdbDirect: MY hops = 0
2009/04/10 15:17:08| peerCheckNetdbDirect: minimum_direct_hops = 4
2009/04/10 15:17:08| whichPeer: from 0.0.0.0 port 0
2009/04/10 15:17:08| peerSelectFoo: direct = DIRECT_MAYBE
2009/04/10 15:17:08| neighborsDigestSelect: choices: 0 (0)
2009/04/10 15:17:08| peerNoteDigestLookup: peer , lookup: LOOKUP_NONE
2009/04/10 15:17:08| peerSelectIcpPing: 
http://dst.domain.com/sc-security/registrazione.html

2009/04/10 15:17:08| neighborsCount: 0
2009/04/10 15:17:08| peerSelectIcpPing: counted 0 neighbors
2009/04/10 15:17:08| peerGetSomeParent: GET dst.domain.com
2009/04/10 15:17:08| getDefaultParent: returning NULL
2009/04/10 15:17:08| peerSourceHashSelectParent: Calculating hash for 
127.0.0.1

2009/04/10 15:17:08| getRoundRobinParent: returning NULL
2009/04/10 15:17:08| getFirstUpParent: returning NULL
2009/04/10 15:17:08| getAnyParent: returning NULL
2009/04/10 15:17:08| peerAddFwdServer: adding DIRECT DIRECT
2009/04/10 15:17:08| peerSelectCallback: 
http://dst.domain.com/sc-security/registrazione.html

2009/04/10 15:17:08| cbdataValid: 0xc3388e0
2009/04/10 15:17:08| fwdStartComplete: 
http://dst.domain.com/sc-security/registrazione.html
2009/04/10 15:17:08| fwdConnectStart: 
http://dst.domain.com/sc-security/registrazione.html

2009/04/10 15:17:08| fwdConnectStart: got addr 0.0.0.0, tos 0
2009/04/10 15:17:08| comm_open: FD 37 is a new socket
2009/04/10 15:17:08| fd_open FD 37 
http://dst.domain.com/sc-security/registrazione.html
2009/04/10 15:17:08| comm_add_close_handler: FD 37, handler=0x807d778, 
data=0xc3388e0

2009/04/10 15:17:08| cbdataLock: 0xc3388e0
2009/04/10 15:17:08| commSetTimeout: FD 37 timeout 60
2009/04/10 15:17:08| commConnectStart: FD 37, dst.domain.com:80
2009/04/10 15:17:08| cbdataLock: 0xc3388e0
2009/

Re: [squid-users] Authenticator processes after reconfigure.

2009-04-22 Thread Amos Jeffries

Oleg wrote:

Hello.

Version: Squid 3.0.STABLE13 on Gentoo 2.6.22-vs2.2.0.7

`squid -k reconfigure` does not close old authenticator processes if they 
had clients. So my 'NTLM Authenticator Statistics' looks like below.

Does anybody have the same symptom?


Maybe.  The 23 of 15 issue has been resolved recently.

But the repeated use of FDs, some with RS set, is a bug anyway. Please 
open a bugzilla entry so we don't lose track of this, with details on 
where that output was found.


Thanks

Amos



Oleg.


NTLM Authenticator Statistics:
program: /usr/bin/ntlm_auth
number running: 23 of 15
requests sent: 8896
replies received: 8896
queue length: 0
avg service time: 0 msec


 #   FD   PID     # Requests   # Deferred Requests   Flags   Time    Offset   Request

 1   12   23079   459          0                     RS      0.002   0        (none)
 2   13   23080   89           0                     RS      0.000   0        (none)
 3   14   23081   37           0                     RS      0.000   0        (none)
 4   15   23082   36           0                     RS      0.002   0        (none)
 5   16   23083   342          0                     RS      0.000   0        (none)
 6   17   23084   1057         0                     RS      0.000   0        (none)
 7   18   23085   97           0                     RS      0.000   0        (none)
 10  21   23089   71           0                     RS      0.000   0        (none)
 1   20   17695   653          0                             0.003   0        (none)
 2   22   17696   114          0                             0.004   0        (none)
 3   23   17697   22           0                             0.008   0        (none)
 4   24   17698   4            0                             0.020   0        (none)
 5   25   17699   0            0                             0.000   0        (none)
 6   26   17700   0            0                             0.000   0        (none)
 7   27   17701   0            0                             0.000   0        (none)
 8   28   17702   0            0                             0.000   0        (none)
 9   29   17703   0            0                             0.000   0        (none)
 10  30   17713   0            0                             0.000   0        (none)
 11  31   17714   0            0                             0.000   0        (none)
 12  32   17715   0            0                             0.000   0        (none)
 13  33   17716   0            0                             0.000   0        (none)
 14  34   17717   0            0                             0.000   0        (none)
 15  35   17718   0            0                             0.000   0        (none)

Flags key:

   B = BUSY
   C = CLOSING
   R = RESERVED or DEFERRED
   S = SHUTDOWN
   P = PLACEHOLDER



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
  Current Beta Squid 3.1.0.7


Re: [squid-users] Squid and TC - Traffic Shaping

2009-04-22 Thread Amos Jeffries

Wilson Hernandez - MSD, S. A. wrote:

Hello.

I was writing a script to control traffic on our network. I created my
rules with tc and noticed that it wasn't working correctly.

I tried this traffic shaping on a linux router that has squid doing
transparent cache.

When measuring the download speed on speedtest.net, the download speed is
70kbps when it is supposed to be over 300kbps. I found it strange since
I've done traffic shaping in the past and it worked, but not on a box with
squid. I stopped the squid server and ran the test again, and it gave me
the speed I assigned to that machine. I assigned different bw and the
test gave the correct speed.

Has anybody used traffic shaping (TC in linux) on a box with squid? Is
there a way to combine both and have them work side by side?


Answer to both is yes. Though how is not known to me at this point.

Squid is capable of setting a mixture of outbound QoS flags. Information 
about using those with iptables etc. seems to be fine. But when people 
throw TC into the mix with newer Squid, something appears in the routing 
behavior that we have no documented info about.


I am also seeking a TC expert to help several users already needing to 
use it with TPROXYv4 and/or WCCP setups.
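
For what it's worth, the Squid side of such marking is documented; it is 
the tc side that isn't. A sketch; the subnet, TOS value and tc handles are 
illustrative:

# squid.conf: mark traffic going to the LAN with TOS 0x20
acl lan src 192.168.1.0/24
tcp_outgoing_tos 0x20 lan

# shell: have tc classify on that TOS byte
tc filter add dev eth1 parent 1: protocol ip prio 1 u32 \
   match ip tos 0x20 0xff flowid 1:10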



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
  Current Beta Squid 3.1.0.7


Re: [squid-users] squid AND ssl

2009-04-22 Thread Amos Jeffries

joe ryan wrote:

Hi,
I have a simple webserver that listens on port 80 for requests. I
would like to secure access to this webserver using squid and SSL. I
can access the simple website through http without any issue. When I


As your config shows, Squid is never involved with port 80 inbound traffic.


try and access it using https: I get a message in the cache file. See
attached.
The web page error show up as Connection to 192.168.0.1 Failed
The system returned:
(13) Permission denied

I am running Squid stable 2.7 and I used openssl to generate the cert and key.
I have attached my conf file and cache errors.
Can squid secure an unsecured webserver the way I am trying to do?



From your config:
> http_port 192.168.0.1:8080
 ...
> http_access allow all

This is not a secure configuration. Either use accel options on the port 
line to set default handling security, or explicitly permit and deny 
specific access using ACLs.


Also this:

> acl webSrv dst 192.168.0.1
> acl webPrt port 80
> http_access allow webSrv webprt

Is even less secure. As an accelerator clients will never visit squid 
asking for port 80, since squid does not listen there.


These two lines:
> https_port 192.168.0.1:443 accel 
> cache_peer 192.168.0.1 parent 443 0 no-query 

explicitly state that all incoming HTTPS requests are to be looped from 
squid into squid ... infinity.


But luckily for you ...

> always_direct allow all

... prevents any cache_peer ever being used.


I believe you need to chop your http_port and http_access configuration 
back to the defaults then reconstruct along these guidelines for the 
HTTP portion:

 http://wiki.squid-cache.org/ConfigExamples/Reverse/BasicAccelerator

At which point you should have both HTTP and HTTPS accepted by squid and 
passed to the HTTPS-enabled web server.



For Squid to be a proper reverse-proxy/accelerator you need Squid to 
listen on port 192.168.0.1:80 and the app to listen on some other IP 
port 80 (127.0.0.1:80 is commonly used in these circumstances).



I also get the impression the web server is not HTTPS-enabled. Therefore 
you probably do not actually want any SSL options on the cache_peer 
line. Then HTTPS will be on the public client->squid link, and the 
internal link will be plain HTTP.
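
Putting the pieces together, a minimal sketch of the shape described above; 
the IPs, certificate paths and site name are illustrative:

# squid listens on the public IP for both HTTP and HTTPS
http_port 192.168.0.1:80 accel defaultsite=www.example.com
https_port 192.168.0.1:443 accel cert=/etc/squid/cert.pem key=/etc/squid/key.pem defaultsite=www.example.com

# plain-HTTP origin server moved to loopback
cache_peer 127.0.0.1 parent 80 0 no-query originserver name=webSrv

# only forward requests for the accelerated site
acl mysite dstdomain www.example.com
http_access allow mysite
cache_peer_access webSrv allow mysite
http_access deny all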



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
  Current Beta Squid 3.1.0.7


Re: [squid-users] caching behavior during COSS rebuild

2009-04-22 Thread Amos Jeffries

Chris Woodfield wrote:
So I'm running with COSS under 2.7STABLE6, we've noticed (as I can see 
others have, teh Googles tell me so) that the COSS rebuild a. happens 
every time squid is restarted, and b. takes quite a while if the COSS 
stripes are large. However, I've noticed that while the stripes are 
being rebuilt, squid still listens for and handles requests - it just 
SO_FAILs on every object that would normally get saved to a COSS stripe. 
So much for that hit ratio.


SO - the questions are:

1. Is there *any* way to prevent the COSS rebuild if squid is terminated 
normally?


The indexes are stored in swap.state. Check that it is being done 
properly by your Squid.
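
A quick way to check, as a sketch; the state-file names follow the 
cache_swap_log setting, so they vary per setup:

squid -k shutdown                # a clean shutdown should write the swap logs
ls -l /usr/squidcache.0/cache/   # the swap.state / swap log files should be
                                 # non-zero after squid has exited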


2. Is there a way to prevent squid from handling requests until the COSS 
stripe is fully rebuilt (this is obviously not good if you don't have 
redundant squids, but that's not a problem for us) ?


I believe it's possible.  If it's not a local failure to find swap.state 
for the COSS dir then it will be a code fix. Unfortunately we developers 
are no longer actively working on Squid-2 without a paid support 
contract. Also Adrian, our storage expert who would be the best to ask, 
has retired from active alterations.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
  Current Beta Squid 3.1.0.7


Re: [squid-users] allowedURL don't work

2009-04-22 Thread Amos Jeffries

Chris Robertson wrote:

Phibee Network Operation Center wrote:

Hi

I have a new problem with my Squid Server (NTLM AD).

My configuration:

auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 15
auth_param ntlm keep_alive on
auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic

auth_param basic children 15
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
#external_acl_type AD_Group children=50 concurrency=50 %LOGIN /usr/lib/squid/wbinfo_group.pl
external_acl_type AD_Group children=50 concurrency=50 ttl=1800 negative_ttl=900 %LOGIN /usr/lib/squid/wbinfo_group.pl


cache_peer 127.0.0.1 parent 8081 0 proxy-only no-query weight=100 connect-timeout=5 login=*:password


## Access-rights ACLs
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl Lan src 10.0.0.0/8 # RFC1918 possible internal network
acl Lan src 172.16.0.0/12  # RFC1918 possible internal network
acl Lan src 192.168.0.0/16 # RFC1918 possible internal network


##
## ACL for websites viewable without authentication
##
acl URL_Authorises dstdomain "/etc/squid-ntlm/allowedURL"
http_access allow URL_Authorises


Are  you sure you don't want to add additional restrictions to the 
http_access allow (such as a limitation on the source IP, or something)?



##

acl SSL_ports port 443 563 1 1494 2598
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 563 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

#http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

##
# ACL to define the AD groups allowed to connect
##
acl AllowedADUsers external AD_Group "/etc/squid-ntlm/allowedntgroups"
acl Winbind proxy_auth REQUIRED
##


##
# ACL for access rights based on Active Directory
##
# Access rights based on Active Directory
http_access allow AllowedADUsers
http_access deny !AllowedADUsers
http_access deny !Winbind


These two deny lines are redundant, as everything is denied by the next 
line...


Almost, but not quite.
Since he is using "allow AllowedADUsers" there will be no forced login. 
The two denials are required to kick that 407 back at the visitor 
instead of 403.
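
A sketch annotating that pattern, with comments added for clarity:

http_access allow AllowedADUsers   # known group members get access
http_access deny !AllowedADUsers   # everyone else is denied, and because
http_access deny !Winbind          # Winbind is a proxy_auth ACL, the denial
                                   # goes out as a 407 challenge rather than 403
http_access deny all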






##

http_access deny all


##
# System parameters
##
http_port 8080
hierarchy_stoplist cgi-bin ?
cache_mem 16 MB
#cache_dir ufs /var/spool/squid-ntlm 5000 16 256
cache_dir null /dev/null
#logformat squid %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt
#logformat squidmime %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt [%>h] [%<h]
#logformat common %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st %Ss:%Sh
#logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh

access_log /var/log/squid-ntlm/access.log squid
cache_log /var/log/squid-ntlm/cache.log
cache_store_log /var/log/squid-ntlm/store.log
# emulate_httpd_log off
mime_table /etc/squid-ntlm/mime.conf
pid_filename /var/run/squid-ntlm.pid
# debug_options ALL,1
log_fqdn off
ftp_user pr...@gw.phibee.net
ftp_passive on
ftp_sanitycheck on
ftp_telnet_protocol on
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern (cgi-bin|\?)    0       0%      0
refresh_pattern .               0       20%     4320
icp_port 3130
error_directory /usr/share/squid/errors/French
icp_access allow Lan
icp_access deny all
htcp_access allow Lan
htcp_access deny all


Into my allowedURL, i have:

pagesjaunes.fr
estat.com
societe.com
quidonc.fr



When I want to access www.pagejaunes.fr, it requests authentication 
... I want no authentication,

and no restriction on browsing.

Does anyone see where my error is?
Is the correct syntax "pagesjaunes.fr" or ".pagesjaunes.fr" to match 
*.pagesjaunes.fr?
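
For reference, dstdomain treats the two forms differently; a sketch:

acl URL_Authorises dstdomain pagesjaunes.fr    # matches only the host "pagesjaunes.fr"
acl URL_Authorises dstdomain .pagesjaunes.fr   # matches pagesjaunes.fr and every
                                               # subdomain, e.g. www.pagesjaunes.fr

so the leading-dot form is the one that also covers www.pagesjaunes.fr.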