Re: [squid-users] Re: I need help with url_regex

2010-09-09 Thread Amos Jeffries

On 10/09/10 09:17, devlin7 wrote:


Thanks Amos for the feedback.

It must be that I am entering it incorrectly because anything with a * or ?
doesn't work at all.

Are you sure that the "." is treated as "any character"?


I am. In posix regex...
 "." means any (single) character.
 "*" means any zero or more of the previous item.
 "*" means any one or more of the previous item.
 "?" means zero or one of the previous item.
 "\" means treat the next character as exact, even if its usually special.

by "item" above I mean one character or a whole bracketed () thing.

To be matched literally as part of the source text, these reserved characters
all need to be escaped, like \? in the pattern.
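
For example (a sketch; the ACL name is ours, not from your config):

  # "\.info" matches a literal ".info" anywhere in the URL
  acl dotInfo url_regex \.info
  http_access deny dotInfo
  # without the backslash, ".info" would also match "sinfo", "zinfo", etc.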




I would have thought that blocking .info would block any site that had .info
in it like www.porn.info but from what you are saying it would also block
www.sinfo.com. Am I correct?


Yes. These accidental matches are most of the problem with this type of config.



So is there a better way?


Yes, a few. Breaking the denial into several rules makes it both faster 
and more precise.



In most cases you will find you can do away with the regex part entirely 
and ban a whole domain. This way you can also search online and download 
lists of proxy domains to block wholesale, which is far easier than trying 
to build the list yourself. The SquidGuard, DansGuardian, and ufdb tools 
provide some lists like this. Also, RHSBL anti-spam lists often include 
open proxy domains.
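
For example, with a downloaded list of domains, one per line (the file path 
is just an example):

  acl proxyDomains dstdomain "/etc/squid/proxy-domains.txt"
  http_access deny proxyDomains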



Some matches can be limited to certain domains, with the regex applied 
only to the path portion of the URL (urlpath_regex matches the path plus 
query string):


  acl badDomains dstdomain .example.com .info
  acl urlPathRegex urlpath_regex ^/browse\.php \.php\?q= \.php\?u=i8v
  http_access deny badDomains urlPathRegex


There will be some patterns which detect certain types of broken CMS 
(usually the search component "\?q=" like I mentioned) which act like a 
proxy even if they were not intended that way. A urlpath_regex without 
the domain protection above is needed to catch the many sites using 
these CMS. Just be sure of, and careful with, the patterns.



NP: Ordering your rules in the same order I've named them above will 
even provide some measure of speed gain to the proxy. dstdomain matching 
is rather fast; regex is slow and resource-hungry.



To back up everything, you need reliable management support behind the 
blocking policy, with stronger enforcement for students caught actively 
trying to evade it. Without those you are in the sad position of an 
endless race.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.8
  Beta testers wanted for 3.2.0.2


Re: [squid-users] squid config parser library

2010-09-09 Thread Amos Jeffries

On 10/09/10 17:03, Mikio Kishi wrote:

Hi, Amos


To what purpose?


I'd like to implement my own squid config viewer. It would be easier to
implement if such a parser library already exists. Just a simple question.



With a potentially complicated solution.

I'm not aware of any as such. Webmin plays with the config in a GUI so 
there may be something they use.


I tried my hand at a config validator/upgrader a year or so ago; it 
turned out to be quite some trouble keeping up with all the little details 
for validation. I've turned to making the squid internal parser report 
better instead.



A simple viewer should be a lot easier than one which tries to do 
things. The overall options come in a few distinct flavours which can be 
known and displayed appropriately despite the churn in fine detail...


Toggles are:
  directive optionlist

Access controls are:
  directive acllist
or
  directive valuelist acllist

Compounds are:
  directive valuelist optionlist

external_acl_type is a bit of an exception with its multiple lists:
  directive name [optionlist] formatlist helperparams


acllist ::= [allow|deny] acl [ acl ...]

optionlist ::= option [option ...]

valuelist is a fixed number of fields depending on the directive name.

option ::=  name ["=" flaglist ]

flaglist ::= flag ["," flag]
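
As a rough illustration, a classifier for those flavours might start out 
something like this (a sketch only; the directive tables are tiny examples, 
nowhere near the complete set):

  #!/usr/bin/perl -w
  # sketch: classify squid.conf lines into the flavours above
  use strict;
  my %access = map { ($_ => 1) } qw(http_access icp_access htcp_access);
  my %toggle = map { ($_ => 1) } qw(via forwarded_for httpd_suppress_version_string);
  while (<>) {
      s/#.*//;                      # strip comments
      s/^\s+//; s/\s+$//;           # strip surrounding whitespace
      next unless length;
      my ($directive, @args) = split /\s+/;
      my $kind = $access{$directive} ? 'acllist'
               : $toggle{$directive} ? 'optionlist'
               :                       'other';
      print "$directive [$kind] @args\n";
  }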


With the appropriate password, a query to the Squid cachemgr API will 
produce a dump of the running config. This will also inline the sub-file 
content and included sub-configs for you. It does lose the comments and 
includes options still at their default values, though.
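
For example, something like this fetches that dump (assuming squidclient is 
installed and a cachemgr_passwd is configured; "secret" is a placeholder):

  squidclient mgr:config@secret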


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.8
  Beta testers wanted for 3.2.0.2


Re: [squid-users] squid config parser library

2010-09-09 Thread Mikio Kishi
Hi, Amos

> To what purpose?

I'd like to implement my own squid config viewer. It would be easier to
implement if such a parser library already exists. Just a simple question.

--mkishi

On Fri, Sep 10, 2010 at 1:10 PM, Amos Jeffries  wrote:
> On 10/09/10 05:32, Mikio Kishi wrote:
>>
>> Hi, all
>>
>> I'm looking for a squid config (squid.conf) parser script or library, in
>> Perl or Python.
>>
>> Please tell me if it already exists.
>>
>
> To what purpose?
>
> NP: we have several chapters of release notes for each major version just on
> config changes.
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE9 or 3.1.8
>  Beta testers wanted for 3.2.0.2
>


Re: [squid-users] RE: Two squid servers to talk to each other before the internet

2010-09-09 Thread Amos Jeffries

On 10/09/10 02:54, Tóth Tibor Péter wrote:

Sorry, the config file is a bit of a legacy of 5 people who have been touching 
the config over the past years.

So... I've done what you said:
-removed "always_direct allow all"
-changed "via off" ->  "via on"
-changed " icp_access deny all" ->  " icp_access allow all"

Now I get an empty white page in my browser, and nothing in the 
access.log.
Still no UDP_* entries or any sign of the servers talking to each other, but 
worse: I don't even see myself accessing any site anymore.
Might something still be missing?



Something outside of Squid would be my guess. Possibly a firewall 
setting?  You will likely have to trace packets and see where they are 
going.
 Starting with the browser->squid ones, which you say are now going 
missing (no access.log entry means they either don't arrive at squid, or 
the request is taking an extremely long time to complete).
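
For example, something like this on the proxy box shows whether the browser's 
requests reach the http_port at all (interface and port here are examples):

  tcpdump -ni eth0 tcp port 8080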


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.8
  Beta testers wanted for 3.2.0.2


Re: [squid-users] NTLM not working for squid in windows server

2010-09-09 Thread Amos Jeffries

On 10/09/10 05:21, José Carlos Correia wrote:

Hi,

On 08/25/2010 01:22 AM, Amos Jeffries wrote:

On Tue, 24 Aug 2010 17:22:09 +0100, José Carlos Correia
 wrote:

Dear all,

I have installed Squid in Windows 2008 with NTLM authentication but the
browser still prompts for login.

I read in the forums that NTLM won't work if:
"- the client is not joined to a domain
- the client is configured not to attempt automatic authentication to
the proxy
- the client is not MSIE or Firefox (not sure about other browsers)"

That last point is false. WMP and Java apps are known to do NTLM.
There is no reason other browsers on windows can't do it too.

In this environment all clients are MSIE.

Add to that list:
- if the server closes the connection all the time behind HTTP/1.0
proxies (ie Squid).

I don't think this is happening on this case.

In this case, Squid is replacing an ISA Server. NTLM was working with
the ISA server but without any changes to the clients (just replacing
the ISA Server by Squid) NTLM doesn't work.

The only situation where the browser doesn't prompt for authentication
is when the server is added to the Trusted Zone and IE is configured
with Automatic login. But this wasn't necessary with the ISA Server.

What am I missing?

Thanks,
Jose Carlos Correia

There has been a lot of testing and checking of NTLM and persistent
connections recently in exactly this area. Squid-3.1.7 contains a
number of
fixes.

Squid is running on Windows Server (2003 and 2008) and it's not an easy
task to compile it. I didn't find any binary distribution after 2.7.
I've been trying to compile it without success although I didn't find
any document saying clearly that 3.1.X versions can be compiled on Windows.



Ah, we have had people contribute build fixes for 3.x every so often. 
But all I've seen prior to your interest was one complaint that, even 
when built on Cygwin, 3.x won't run on Windows.


Guido from Acme is the only one of the developers with the licenses 
required to build on Windows. His last message was that for some long 
time they had seen zero interest from the Windows community in 
supporting future releases. (hint)


 There seem to be more people interested in running Squid on 
non-Windows boxes inside a Windows network. Squid can certainly support 
a few more useful features (with higher performance) when it's not 
running on a Windows OS.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.8
  Beta testers wanted for 3.2.0.2


Re: [squid-users] squid config parser library

2010-09-09 Thread Amos Jeffries

On 10/09/10 05:32, Mikio Kishi wrote:

Hi, all

I'm looking for a squid config (squid.conf) parser script or library, in
Perl or Python.

Please tell me if it already exists.



To what purpose?

NP: we have several chapters of release notes for each major version just on 
config changes.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.8
  Beta testers wanted for 3.2.0.2


Re: [squid-users] Re: How to ignore query terms for store key?

2010-09-09 Thread Amos Jeffries

On 10/09/10 04:48, Guy Bashkansky wrote:

Amos, Matus,

Some websites embed arbitrary redundant information in query terms,
which is irrelevant to content distribution but prevents effective
caching by giving the same object different URLs each time.

For such websites (recognized by regex ACLs), stripping those
redundant cache-unfriendly query terms for storing provides a way of
effective caching without hurting the web functionality.

Guy



I'm well aware of this. Removing sections of URLs is a local-instance 
hack that does little to solve the problem.


The last claim, of it not hurting the functionality, is false. It DOES 
hurt the web functionality; what it doesn't hurt is your users' view of it.


By "some websites" you are referring to facebook and youtube and their 
like right?  The YouTube storeurl_rewrite script provided in the squid 
wiki needs regular updates to continue storing content without screwing 
things up. That is for a site which apparently is conservative to the 
point of paranoia with their changes.



WARNING: rant follows.


A real solution has to be multi-pronged:

 ** education for the designers of such systems about the benefits 
caching provides and how to use the cache-controls in HTTP.


  Unfortunately this effort is constantly undermined by administrators 
everywhere trusting "override" hacks to force caching of objects. Every 
time a small mistake is made by these admins, it provides stronger 
incentive for the website designers to force their sites to be un-cacheable.


 You need only look at the extremely obsessive settings sent out by 
Facebook and similar sites to see where that arms race leads (Pragma, 
no-cache, no-store, private, stale-0, maxage-0, expired cookies, 
redirects, POST instead of GET, PUT instead of POST, WebSockets, CONNECT 
tunnels, fake auth headers, Expires headers years in the past, Date 
headers years old, Last-Modified dates decades old). ALL of it designed 
and implemented site-wide to prevent the odd little truly dynamic reply 
amidst the static stuff being stored.



 ** making use of the users' experience headspace. Pass on the complaints!

 Users have this preference for a good time, as I'm sure you know. You 
as an ISP and facebook etc. as providers both want two sides of the same 
goal: a great user experience at the website. Just because the complaint 
arrives at your inbox does not mean it needs to stay there and ruin your 
day. The users don't know who to complain to, so they pick any email 
address in sight; pass it on to someone who can fix the problem properly.


 I've had personal experiences with people complaining to HR 
departments because their website login failed through an ISP proxy that 
blocked cookies.


 Both you and the rest of the Internet will benefit from the website 
working even slightly better with caches. They really are the ONLY 
authority on what can and can't be stored. If the website fails to do 
this, they alone are the cause of their demise.



 ** And finally, but most crucially, convincing other admins to trust 
the website designer to know their own website. Right or wrong, it's 
their fault. Let them learn from the experience.


 Grumbling away in the background while preventing website designers 
from getting/seeing the users' complaints is not going to help solve 
anything.



Sorry for the rant. I've been a network admin for 12 years, webmaster 
for 8, and a caching guy for the last three, so I've seen a lot of this 
from all sides. It started when the marketing guys of the '90s latched 
onto webserver hits as a measure of a site's success and strangled the 
web experience with it.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.8
  Beta testers wanted for 3.2.0.2


[squid-users] Re: Re: Re: Squid 3.0 STABLE 19 and SPNEGO with Windows Firefox 3.6.3

2010-09-09 Thread Markus Moeller
So it looks like a Firefox issue. Unfortunately I don't have a setup to test 
on.


Markus

"Paul Freeman"  wrote in message 
news:19672eecfb9ae340833c84f3e90b5956040dd...@mel-ex-01.eml.local...

Markus
In our current setup, no WINS server is being provided to workstations
obtaining an IP address via DHCP.

I am finding that Firefox is actually failing at step 3.  It is not 
prompting for a username and password, unlike IE, which is.

Thanks

Paul


-Original Message-
From: Markus Moeller [mailto:hua...@moeller.plus.com]
Sent: Thursday, 9 September 2010 6:01 AM
To: squid-users@squid-cache.org
Subject: [squid-users] Re: Re: Squid 3.0 STABLE 19 and SPNEGO with
Windows Firefox 3.6.3


Hi Paul,

  Does your environment provide WINS server details via DHCP to the
desktops?  I think in theory it should work as follows:

  1) User connects to proxy which requests negotiate
  2) The browser does not have any tickets and has not joined a domain
to
use NTLM so prompts the user
  3) The user provides u...@domain and password
  4) Desktop tries to find Kerberos kdc locally using NetBIOS or with
WINS
  5) Desktop will send AS-REQ to kdc
  6) Desktop will send TGS-REQ to kdc
  7) Browser will send token to squid.

   This would mean that Firefox does have a problem at step 4)  and
creates
an NTLM token for DESKTOP\User

Markus

"Paul Freeman"  wrote in message
news:19672eecfb9ae340833c84f3e90b595604014...@mel-ex-01.eml.local...
Markus
I will try and answer your questions in-line below.  Please let me know
if
there is any other information or testing you would like me to do.

I appreciate your assistance.

Regards

Paul

> -Original Message-
> From: Markus Moeller [mailto:hua...@moeller.plus.com]
> Sent: Wednesday, 8 September 2010 4:54 AM
> To: squid-users@squid-cache.org
> Subject: [squid-users] Re: Squid 3.0 STABLE 19 and SPNEGO with
Windows
> Firefox 3.6.3
>
> Hi Paul,
>
> >"Paul Freeman"  wrote in message
> >news:19672eecfb9ae340833c84f3e90b595604014...@mel-ex-01.eml.local...
> >Hi
> >I am running Squid 3.0STABLE19 on Ubuntu 10.04LTS as a "normal"
> >(non-transparent) proxy server for a number of Windows workstations
in
> an
> >Active Directory environment using W2K8R2 domain controller servers
> running
> >in W2K3 functional mode.
> >
> >I have implemented authentication in Squid using the squid_kerb_auth
> module
> >from Markus Moeller.  Authentication is working fine for users
logging
> in
> >using domain credentials on domain registered workstations using
both
> IE7
> >and
> >8 on Windows XP and Firefox 3.6.3.
> >
> >However, I would like to allow the occasional non-domain user to
have
> >internet access via Squid and so it would be helpful for a login
> dialog box
> >to be presented.  When IE 7 and 8 are used, this occurs and
> authentication
> >is
> >successful.  However, with Firefox it does not and an error is
> returned by
> >Squid - Access Denied.
> >
> >Looking at some packet dumps between the Windows workstation and
Squid
> >shows
> >that Firefox tries a few times to auth then gives up.  Enabling
> logging in
> >Firefox reveals Firefox responds similarly to IE with a GET request
> with a
> >Proxy-Authorization: Negotiate . header.  In the Squid cache log
> it
> >indicates:
> >
> >squid_kerb_auth: Got 'YR T1RMT...Dw==' from squid (length 59).
> >squid_kerb_auth: received type 1 NTLM token
> >
> >However, unlike IE, it then gives up whereas IE then initiates a
KRB5
> >AS-REQ
> >to a domain controller then gets a ticket and then contacts Squid
> again at
> >which point it authenticates.
> >
>
> I would like to know some more details here.  Do you also see a KRB5
> AS-REQ
> at any time before ? Can you use kerbtray from MS and list Kerberos
> tickets
> for the non domain user ?
>

I have watched the traffic from prior to launching Firefox to the end
of the
Firefox session.  I have not seen any Kerberos related traffic from the
Windows client.

I have the MS Kerberos tools installed and kerbtray does not show any
tickets
- Client Principal field says "No network credentials".

Strangely (maybe not???), there are also no tickets shown even while
successfully using IE as a non-domain user.

>
> >In the Firefox log, just before the GET request, it shows:
> >
> >service = fqdn.of.squid.proxy
> >using negotiate-sspi
> >using SPN of [HTTP/fqdn.of.squid.proxy]]
> >AcquireCredentailsHandle() succeeded
> >nsHttpNegotiateAuth:: GenerateCredentials_1_9_2()
[challenge=Negotiate]
> >entering nsAuthSSPI::GetNextToken()
> >InitializeSecurityContext: continue
> >Sending a token of length 40
> >
> >Then after sending the GET request and receiving the Squid 407
> response it
> >shows:
> >nsHttpNegotiateAuth:: GenerateCredentials_1_9_2()
[challenge=Negotiate]
> >entering nsAuthSSPI::GetNextToken()
> >Cannot restart authentication sequence!
> >
>
> Does Firefox work after you used IE ?  e.g. does IE cache credentials
> which
> can be used by Firefox ?
>

Firefox does not work after using IE or even whil

Re: [squid-users] WCCP + Squid with Cisco 2811. Not working

2010-09-09 Thread Chris Abel
Amos Jeffries  writes:
>First, check your configuration for Squid and its firewall match this
>page:
>http://wiki.squid-cache.org/Features/Wccp2#Squid_configuration_for_WCCP_version_2
>
>An alternative to WCCP is to do real routing, we have an example for a
>2501 here:
>http://wiki.squid-cache.org/ConfigExamples/Intercept/Cisco2501PolicyRoute
>
>
>For the troubleshooting;
> * There is no indication in the cache.log that the cisco or Squid are in
>contact with each other. Check the cisco wccp information to see if its
>got
>any knowledge of Squid.
> * check if requests are getting into Squid. access.log should have
>records of every request attempt made, even failed ones.
> * the 'usual' problem when this behaviour is seen is that packets going
>from squid get looped back somewhere strange. They are supposed to get a
>free pass out to the Internet. Whether or not they go back to the cisco to
>do so is optional.
>
>
>Squid by default will hold off sending its HERE_I_AM message to the cisco
>until the cache has been fully loaded and Squid is actually ready for
>service. If you have a large cache (GB) wccp2_rebuild_wait can make it not
>wait, but you will see degraded service until the cache is available.
>

Thanks. After spending a lot of time with wccp and trying the tutorial on
squid's wiki, I think I have given up. It "seems" to work before I play
around with my iptables. I say seems because I can actually see gre
traffic on the squid server, and I see wccp packets being sent to the squid
server on the cisco router, but I am not sure if this is actually
working. Is there a way I can actually check squid's logs to see if it's
getting anything? For some reason I don't have an access.log. I have an
access.log.1, but not an access.log.
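
(A couple of checks along those lines, assuming the GRE tunnel terminates on
the squid box's eth0; adjust names to your setup:)

  # on the squid box: is GRE traffic (IP protocol 47) arriving?
  tcpdump -ni eth0 ip proto 47
  # on the cisco: does the router see the cache engine?
  show ip wccp web-cache detail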

When I put this in:
iptables -t nat -A PREROUTING -i gre1 -p tcp --dport 80 -j REDIRECT
--to-port 3129
It seems to break it and I'm left with the same problem I had before.

I then tried the routing method you have posted. I configured my cisco
router word for word and it doesn't seem to be working. I have a
Dansguardian filter and I can see that traffic is obviously not going
through the filter. Shouldn't this method work just like the sonicwall
method that is working for me? Essentially it's just routing traffic to my
proxy server. I don't understand how this is so hard for me.

Thanks for your time!

-Chris
___
Chris Abel
Systems and Network Administrator
Wildwood Programs 
2995 Curry Road Extension
Schenectady, NY  12303
518-836-2341



[squid-users] Re: I need help with url_regex

2010-09-09 Thread devlin7

Thanks Amos for the feedback.

It must be that I am entering it incorrectly because anything with a * or ?
doesn't work at all.

Are you sure that the "." is treated as "any character"?

I would have thought that blocking .info would block any site that had .info
in it like www.porn.info but from what you are saying it would also block
www.sinfo.com. Am I correct?

So is there a better way?


-- 
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/I-need-help-with-url-regex-tp2532264p2533599.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Squid redirectors children doesn't shutdown/die with squid

2010-09-09 Thread Henrik Nordström
tor 2010-09-09 klockan 09:25 -0500 skrev Jorge Iván Burgos Aguilar:

> Dumb question for anyone with experience in building redirectors:
> How is concurrency implemented in the redirectors...
> A) A parallel-connection to the redirector
> B) More than one line at once
> C) Both
> ???

B
and also D (multiple processes).

The concurrency protocol sends more than one request at a time to the
helper, each tagged with a unique id. The helper may respond to the
queries in any order if needed.

http://www.squid-cache.org/Doc/config/url_rewrite_program/
http://www.squid-cache.org/Doc/config/url_rewrite_concurrency/

Same scheme applies to all helper channels supporting concurrency.
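
A minimal concurrency-aware helper might look like this (a sketch; as far as
I recall, replying with just the channel ID and an empty result leaves the
URL unchanged):

  #!/usr/bin/perl -w
  # pair with, e.g.:  url_rewrite_concurrency 10
  use strict;
  $| = 1;                       # unbuffered, as the helper protocol requires
  while (<>) {
      chomp;
      my ($id, $url) = split(/\s+/, $_, 2);
      # decide on a rewrite here; an empty result means "leave the URL alone"
      print "$id\n";
  }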


In addition to concurrency there is also the children parameter which
controls how many instances of the helper Squid starts.

Regards
Henrik



Re: [squid-users] Not receiving latest posts.

2010-09-09 Thread Henrik Nordström
tor 2010-09-09 klockan 13:36 +0200 skrev Matus UHLAR - fantomas:

> FYI: squid-cache.org was listed in SORBS blacklist for some time...

Hmm.. wonder why. any clue?

Regards
Henrik




[squid-users] squid config parser library

2010-09-09 Thread Mikio Kishi
Hi, all

I'm looking for a squid config (squid.conf) parser script or library, in
Perl or Python.

Please tell me if it already exists.

Sincerely,

--
Mikio Kishi


Re: [squid-users] NTLM not working for squid in windows server

2010-09-09 Thread José Carlos Correia

 Hi,

On 08/25/2010 01:22 AM, Amos Jeffries wrote:

On Tue, 24 Aug 2010 17:22:09 +0100, José Carlos Correia
  wrote:

Dear all,

I have installed Squid in Windows 2008 with NTLM authentication but the
browser still prompts for login.

I read in the forums that NTLM won't work if:
"- the client is not joined to a domain
- the client is configured not to attempt automatic authentication to
the proxy
- the client is not MSIE or Firefox (not sure about other browsers)"

That last point is false. WMP and Java apps are known to do NTLM.
There is no reason other browsers on windows can't do it too.

In this environment all clients are MSIE.

Add to that list:
  - if the server closes the connection all the time behind HTTP/1.0
proxies (ie Squid).

I don't think this is happening on this case.

In this case, Squid is replacing an ISA Server. NTLM was working with
the ISA server but without any changes to the clients (just replacing
the ISA Server by Squid) NTLM doesn't work.

The only situation where the  browser doesn't prompt for authentication
is when the server is added to the Trusted Zone and IE is configured
with Automatic login. But this wasn't necessary with the ISA Server.

What am I missing?

Thanks,
Jose Carlos Correia

There has been a lot of testing and checking of NTLM and persistent
connections recently in exactly this area. Squid-3.1.7 contains a number of
fixes.
Squid is running on Windows Server (2003 and 2008) and it's not an easy 
task to compile it. I didn't find any binary distribution after 2.7.
I've been trying to compile it without success although I didn't find 
any document saying clearly that 3.1.X versions can be compiled on Windows.


I'm still trying. Thanks for your help.

Jose Carlos

Amos





Re: [squid-users] Re: How to ignore query terms for store key?

2010-09-09 Thread Guy Bashkansky
Amos, Matus,

Some websites embed arbitrary redundant information in query terms,
which is irrelevant to content distribution but prevents effective
caching by giving the same object different URLs each time.

For such websites (recognized by regex ACLs), stripping those
redundant cache-unfriendly query terms for storing provides a way of
effective caching without hurting the web functionality.

Guy


On Thu, Sep 9, 2010 at 7:28 AM, Matus UHLAR - fantomas
 wrote:
>
> are you sure that http://www.google.sk/search?q=one  should give the same
> result as http://www.google.sk/search?q=two?
>
> I think that you and your users will be very surprised...


On Fri, Sep 3, 2010 at 8:09 PM, Amos Jeffries  wrote:
>
> First, please answer: Why? What possible problem could require you to do this 
> massive abuse of the web?


Re: [squid-users] Problem with squid and dansguardian viewing streaming videos

2010-09-09 Thread David Touzeau

Yes.

Use C-ICAP + SquidGuard.

This is a good, high-performance alternative.




On 08/09/2010 21:52, Darren wrote:

So I am back at square one with this issue.

I have tested squid 2.6, 3.0, and 3.1 with dansguardian.  It seems
that no matter which setup I use, I am not able to have anyone using
the proxy actually be able to view videos on the cbc.ca website.

If I go directly through squid, it works.  It is once I introduce
dansguardian that it breaks down.

Any other avenues I might pursue?

Also, are there any other programs similar to dansguardian I could use
instead of dansguardian?

Thanks!


RE: [squid-users] RE: Two squid servers to talk to each other before the internet

2010-09-09 Thread Tóth Tibor Péter
Sorry, the config file is a bit of a legacy of 5 people who have been touching 
the config over the past years.

So... I've done what you said:
-removed "always_direct allow all"
-changed "via off" -> "via on"
-changed " icp_access deny all" -> " icp_access allow all"

Now I get an empty white page in my browser, and nothing in the 
access.log.
Still no UDP_* entries or any sign of the servers talking to each other, but 
worse: I don't even see myself accessing any site anymore.
Might something still be missing?

Thanks,
Tibby

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Wednesday, September 08, 2010 2:42 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] RE: Two squid servers to talk to each other before 
the internet

On 08/09/10 21:20, Tóth Tibor Péter wrote:
> Hi Amos!
> Here is my config file:
>
> http_port 8080
> hierarchy_stoplist cgi-bin ?
> acl QUERY urlpath_regex cgi-bin \?
> cache deny QUERY

If you have a squid newer than 2.6.STABLE18 you can safely remove these 
QUERY lines. It will improve your hit rates a lot. The new 
refresh_pattern below replaces them.

hierarchy_stoplist is still needed up to squid 3.1. After that it can go 
too.

> acl apache rep_header Server ^Apache
>
> cache_peer ##THE_IP_OF_THE_SIBLING## sibling 3128 3130
> #prefer_direct off
>
> cache_mem 1024 MB
> maximum_object_size 4096 KB
> minimum_object_size 0 KB
>
> cache_dir ufs /var/spool/squid3 75000 32 256
>
> error_directory /usr/share/squid3/errors/English
>
> logformat squid %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt
> access_log /var/log/squid3/access.log squid
> cache_store_log none
> logfile_rotate 1
>
> debug_options ALL,1
> cache_log syslog
>
> ftp_user ftp@
>
> hosts_file /etc/hosts
>
> refresh_pattern ^ftp:     1440  20%  10080
> refresh_pattern ^gopher:  1440  0%   1440

refresh_pattern -i (/cgi-bin/|\?)  0 0% 0

> refresh_pattern .   0   20% 4320
>
> httpd_suppress_version_string on
>
> acl manager proto cache_object
> acl localhost src 127.0.0.1/255.255.255.255

acl localhost src 127.0.0.1

> acl to_localhost dst 127.0.0.0/8

acl to_localhost dst 127.0.0.0/8 0.0.0.0/32

> acl SSL_ports port 443  # https
> acl Safe_ports port 21  # ftp
> acl Safe_ports port 80  # http
> acl Safe_ports port 880 # http
> acl Safe_ports port 443 # https
> acl Safe_ports port 1025-65535
> acl purge method PURGE
> acl CONNECT method CONNECT
>
> http_access allow manager localhost
> http_access deny manager
> http_access allow purge localhost
> http_access deny purge
> http_access deny !Safe_ports
> http_access deny CONNECT !SSL_ports
> http_access deny to_localhost
>
> acl desktop-clients src 0.0.0.0/0

acl desktop-clients src all

... that is not good either way. "all" and the above numbers mean the 
entire Internet is one of your desktop-clients.

> acl denied-desktop-clients src SOME_IP_ADDRESS SOME_OTHER_IP 
> AND_SOME_MORE_IP_ADDRESSES
> acl denied-domains dstdom_regex -i "/etc/squid3/denied-hosts.acl"

If its just domain names and wildcard sub-domains on that list 
"dstdomain" is faster than "dstdom_regex".

>
> http_access deny denied-desktop-clients
> http_access deny denied-domains
> http_access allow desktop-clients
> http_access allow localhost
> http_access deny all
>
> http_reply_access allow all
>

You will need to permit ICP access between the siblings or they will not 
trade replies like you want. You will see a lot of UDP_*_MISS with icp 
access denied.

> icp_access deny all

> htcp_clr_access deny all
>
> htcp_access deny all
> miss_access allow all
>
> visible_hostname THE_HOSTNAME.DOMAIN_OF_THIS_HOST
> via off

via is REQUIRED to be ON when linking proxies together like this. It's 
what prevents a single request looping around between your sibling 
proxies until all existing network sockets are used up.

> forwarded_for off
>
> cachemgr_passwd SOME_PASSWORD all
> always_direct allow all

There is your main problem. "always_direct" FORCES the proxy to ignore 
its sibling, not to even bother trying a lookup there.
Remove this and you will start to see requests between them.

Amos

>
> -Original Message-
> From: Amos Jeffries [mailto:squ...@treenet.co.nz]
> Sent: Tuesday, September 07, 2010 1:51 PM
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] RE: Two squid servers to talk to each other before 
> the internet
>
> On 07/09/10 22:49, Tóth Tibor Péter wrote:
>>> Is there a way to check if squids are talking to each other?
>>
>> The access.log of each proxy will contain entries for messages going to
>> and from the sibling.
>>
>> On a basic setup like you have so far, expect to see SIBLING hit/miss
>> codes sometimes. UDP_SIBLING_* are the ICP messages flowing between the
>> siblings as they check whether the other has an object. TCP_SIBLING_HIT
>> are the actual HTTP reply objects being fetched.
>>
>> Amos
>>
>> Hi Amos!
>>
>> I dont see anythin

Re: [squid-users] Re: How to ignore query terms for store key?

2010-09-09 Thread Matus UHLAR - fantomas
On 07.09.10 18:59, Guy Bashkansky wrote:
> Thanks, storeurl_rewrite in Squid 2.7 looks like the right solution.
> 
> But when I try to use it to strip query, Squid does not respond:
> 
> /usr/local/squid/etc/squid.conf
> storeurl_access allow all # just for the test, will narrow down later
> storeurl_rewrite_program /usr/local/squid/bin/strip-query.pl
> 
> /usr/local/squid/bin/strip-query.pl
> #!/usr/local/bin/perl -Tw
> $| = 1; while(<>) { chomp; s/\?\S*//; print; } ### my strip query test

are you sure that http://www.google.sk/search?q=one  should give the same
result as http://www.google.sk/search?q=two?

I think that you and your users will be very surprised...
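
(Aside from the semantics: the helper quoted above never prints a newline
after each reply, and the rewrite helper interface is line-based. That alone
can make Squid appear to hang. Something like this would be needed:)

  $| = 1; while (<>) { chomp; s/\?\S*//; print "$_\n"; }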

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Due to unexpected conditions Windows 2000 will be released
in first quarter of year 1901


Re: [squid-users] Squid redirectors children doesn't shutdown/die with squid

2010-09-09 Thread Jorge Iván Burgos Aguilar
Hi again,

2010/9/8 John Doe :
> Maybe try -u...?
>       -u     Force stdin, stdout and stderr to  be  totally  unbuffered.   On
>              systems  where  it matters, also put stdin, stdout and stderr in
>              binary mode.  Note that there is internal  buffering  in  xread-
>              lines(),  readlines()  and  file-object  iterators ("for line in
>              sys.stdin") which is not influenced by  this  option.   To  work
>              around  this, you will want to use "sys.stdin.readline()" inside
>              a "while 1:" loop.
>
Am using it since the advice from Diego...

Dumb question for anyone with experience in building redirectors:
How is concurrency implemented in the redirectors...
A) A parallel-connection to the redirector
B) More than one line at once
C) Both
???

Best Regards


Re: [squid-users] Squid 3.2.0.2 beta is available

2010-09-09 Thread Ralf Hildebrandt
* Amos Jeffries :

> This is the complete list of every OID in Squid since 2.0:
>   http://wiki.squid-cache.org/Features/Snmp#Squid_OIDs
> 
> We have never had any such info by those names. They look more like
> what the OS or malloc would provide about process memory usage. 

very odd, since the script is gathering all data via snmpget

oid_table() {
gauge   cacheSysVMsize  # Amount of cache_mem used
gauge   cacheSysStorage # Amount of on-disk cache used
gauge   cacheNumObjCount# Number of objects
gauge   cacheMemUsage   # Total memory accounted for KB
counter cacheCpuTime# Amount of cpu seconds consumed
gauge   cacheCurrentFileDescrCnt# Number of filedescriptors in use
gauge   cacheCurrentFileDescrMax# Highest filedescriptor in use
gauge   VmSize  proc# Process size
gauge   VmRSS   proc# Process RSS
gauge   VmData  proc# Process data segment size

Oh, I'm seeing the difference now: the additional "proc". DAMN!

> Which makes sense since Squid cannot account for its own memory usage
> completely.

Yup. *goes stand in the corner*

-- 
Ralf Hildebrandt
  Geschäftsbereich IT | Abteilung Netzwerk
  Charité - Universitätsmedizin Berlin
  Campus Benjamin Franklin
  Hindenburgdamm 30 | D-12203 Berlin
  Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
  ralf.hildebra...@charite.de | http://www.charite.de



Re: [squid-users] Squid 3.2.0.2 beta is available

2010-09-09 Thread Amos Jeffries

On 10/09/10 00:22, Ralf Hildebrandt wrote:

* Amos Jeffries:

The Squid HTTP Proxy team is very pleased to announce the
availability of the Squid-3.2.0.2 beta release!


My scripts used to query the SNMP attributes:
VmRSS VmData

But 3.2.x doesn't have them anymore:

# snmpget -Os -OQ -v 2c -c public -m /usr/share/squid3/mib.txt localhost:3401 
VmSize VmRSS VmData
VmRSS: Unknown Object Identifier (Sub-id not found: (top) ->  VmRSS)
VmData: Unknown Object Identifier (Sub-id not found: (top) ->  VmData)

# fgrep -i rss
/usr/share/squid3/mib.txt
# fgrep -i vm
/usr/share/squid3/mib.txt
 cacheSysVMsize OBJECT-TYPE

What is replacing them?



This is the complete list of every OID in Squid since 2.0:
  http://wiki.squid-cache.org/Features/Snmp#Squid_OIDs

We have never had any such info by those names. They look more like what 
the OS or malloc would provide about process memory usage. Which makes 
sense since Squid cannot account for its own memory usage completely.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.8
  Beta testers wanted for 3.2.0.2


Re: [squid-users] sslBump: unrecognized: 'ssl_bump', unrecognized: 'https_port'

2010-09-09 Thread Stephan Huiser
On 09/09/2010 02:06 PM, Amos Jeffries wrote:
> On 09/09/10 23:05, Guillaume CHAUVEL wrote:
>>> Hi,
>>>
>>> I want to enable SSL bumping with Squid.
>>> This function is disabled in Debian version of Squid (Lenny,
>>> Lenny-backports and Squeeze), so I decided to compile Squid from
>>> source.
>>>
>>> Squid version: 3.1.8
>>>
>>> ./configure --prefix=/usr/local/squid \
>>> --enable-inline \
>>> --enable-async-io=8 \
>>> --enable-storeio="ufs,aufs,diskd" \
>>> --enable-removal-policies="lru,heap" \
>>> --enable-delay-pools \
>>> --enable-cache-digests \
>>> --enable-icap-client \
>>> --enable-follow-x-forwarded-for \
>>> --enable-auth="basic,digest,ntlm,negotiate" \
>>>
>> ...
>>>
>>> /usr/local/squid/sbin/squid output:
>>> 2010/09/09 11:23:43| cache_cf.cc(363) parseOneConfigFile:
>>> squid.conf:1155 unrecognized: 'https_port'
>>> 2010/09/09 11:23:43| cache_cf.cc(363) parseOneConfigFile:
>>> squid.conf:1156 unrecognized: 'ssl_bump'
>>> 2010/09/09 11:23:43| cache_cf.cc(363) parseOneConfigFile:
>>> squid.conf:1537 unrecognized: 'ssl_bump'
>>> 2010/09/09 11:23:43| cache_cf.cc(363) parseOneConfigFile:
>>> squid.conf:5625 unrecognized: 'sslproxy_cert_error'
>>> 2010/09/09 11:23:43| cache_cf.cc(363) parseOneConfigFile:
>>> squid.conf:5626 unrecognized: 'sslproxy_flags'
>>>
>>> What am I doing wrong?
>>
>> ./configure --help | grep ssl
>>--enable-sslEnable ssl gatewaying support using OpenSSL
>>--with-openssl{=PATH}   Compile with the OpenSSL libraries. The
>> path to the
>>
>> It looks like '--with-ssl' doesn't work, you should use '--enable-ssl'
>>
>> also since 3.1.7 "sslBump" is deprecated, you should move to
>> "ssl-bump" :
>> http://www.squid-cache.org/Versions/v3/3.1/changesets/SQUID_3_1_7.html
>> have a look at ./src/squid.conf.documented line 1045
>>
>>
>>> http_port 8080
>>> https_port 8443 sslBump cert=/etc/ssl/certs/certificate.pem
>>
>> I am quite new to squid but I don't think this is going to do what you
>> want judging by your config file without any "cache_peer"
>> https_port as stated in the documentation is really only useful when
>> running squid as an accelerator. you should use
>> "http_port 8080 ssl-bump cert=/etc/ssl/certs/certificate.pem" instead
>> and remove https_port
>
> Yes, https_port is a port for receiving "native" SSL connections.
>
> The ssl-bump feature is for converting CONNECT tunnel requests into
> normal HTTP traffic. CONNECT is a weird kind of
> HTTP-over-SSL-over-HTTP multiple-wrapped request thing. ssl-bump
> strips away the outer two layers of wrapping. It only works with
> browsers etc. which are configured to send their HTTPS via an HTTP proxy.
>
> Amos

It seems to be working now :) 
Guillaume, thanks for pointing me to my wrong ./configure option!
Amos, thanks for the explanation.

- Stephan


Re: [squid-users] forwarding port changed based on url

2010-09-09 Thread Amos Jeffries

On 09/09/10 21:39, foobar devnull wrote:


On Wed, Sep 8, 2010 at 3:50 PM, Amos Jeffries  wrote:

On 09/09/10 01:22, foobar devnull wrote:


Hi all,

I tried to look for an answer to this probably simple question via the
"mailing list search" but it seems to be down and google was of little
help.

I have the following setup:

I have a squid server setup as a reverse proxy and serving a vm with
multiple domains/websites.  One of these websites offers an ssl
connection on port 443 and a second ssl connection on port 6066 for
the admin interface.  both ports point to www.foobar.com

I'd like to be able to do the following with squid

wwwadm.foobar.com:443 -->[squid] -->www.foobar.com:6066
www.foobar.com:443 -->[squid]-->www.foobar.com:443

Can this be done?  If so, I'd be grateful if you could point me to the
appropriate documentation or give me a simple example to work from.


The answer is two questions:
  can you make a wildcard cert for both those domains?
or,
  can you assign each its own IP and certificate?

Squid can be configured as an HTTPS reverse proxy to do it either way. It's
a standard virtual-host setup with ssl added, differing only in the
receiving https_port settings.
http://wiki.squid-cache.org/ConfigExamples/Reverse/VirtualHosting
http://wiki.squid-cache.org/ConfigExamples/Reverse/MultipleWebservers

Amos


Hi Amos,

I read the documentation you sent me and you had a very good point
regarding the need for a wildcard certificate but I am still looking
for a solution to my question, which is basically...

can squid reformat the url and port to match the target vm?


That is a very different proposition to what you asked initially.

Doing URL re-writing in a reverse-proxy is almost completely pointless 
and quite dangerous.


A reverse-proxy never sees the full URL; it has to look up all sorts of 
details and generate a fake one in order to re-write it, then it has to 
erase the domain and port info from the new URL again before passing it 
to the master server. In a properly configured reverse proxy the master 
server is hard coded with a cache_peer entry regardless of what the URL 
says about host/port.
 Altering the URL in-transit merely fools the master server into 
thinking it can advertise the "internal" host/port names to its public 
clients via uncontrolled things like absolute links in the pages (or 
worse: Refresh and Location HTTP headers on *real* 30x redirects).


If you don't have a cache_peer entry for the master server then you have 
a badly configured interceptor proxy. Which is another nightmare altogether.


Yes it can be done. In three years I've seen exactly one network which 
needed to, and I have yet to meet anyone who likes the experience.




The request is made to
wwwadm.foobar.com:443
and passed through the reverse proxy to the vm listening on
www.foobar.com:6066

both ports use ssl of course.

Any help is appreciated. I can't seem to find any information on the
forwarding (and changing) of ports.



Middleware like Squid is forbidden from doing so. Squid can, however, if 
you absolutely must re-write the URL.


I pointed you at the accelerator stuff, because this line here:
  cache_peer www.foobar.com.ip 6066 0 ssl

... passes traffic to its destination without altering the request 
in-transit. The master server then knows how to generate the public URLs 
like https://wwwadm.foobar.com/foo to be sent back to the public 
clients. Instead of telling them to bypass the proxy for future requests 
straight to https://www.foobar.com:6066/foo


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.8
  Beta testers wanted for 3.2.0.2


Re: [squid-users] forwarding port changed based on url

2010-09-09 Thread John Doe
From: foobar devnull 

> > Squid can be configured as an HTTPS reverse proxy  to do it either way. It's
> > a standard virtual-host setup with ssl added,  differing only in the
> > receiving https_port settings.
> > http://wiki.squid-cache.org/ConfigExamples/Reverse/VirtualHosting
> > http://wiki.squid-cache.org/ConfigExamples/Reverse/MultipleWebservers
> I read the documentation you sent me and you had a very good  point
> regarding the need for a wildcard certificate but I am still  looking
> for a solution to my question which is basicaly...
> can squid  reformat the url and port to match the target vm?

I think he answered your question.
Did you look at the second link?
You create 2 cache_peer entries, one on port 443 and the other on 6066, then
use cache_peer_access with ACLs to direct traffic to the correct cache_peer
name.
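
A sketch of what that can look like (IPs, peer names and the cert path are
examples only):

  https_port 443 accel cert=/etc/ssl/certs/foobar-wildcard.pem vhost
  cache_peer 192.0.2.10 parent 443  0 ssl originserver name=www_peer
  cache_peer 192.0.2.10 parent 6066 0 ssl originserver name=adm_peer
  acl www_host dstdomain www.foobar.com
  acl adm_host dstdomain wwwadm.foobar.com
  cache_peer_access www_peer allow www_host
  cache_peer_access adm_peer allow adm_host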

JD


  


Re: [squid-users] Squid 3.2.0.2 beta is available

2010-09-09 Thread Ralf Hildebrandt
* Amos Jeffries :
> The Squid HTTP Proxy team is very pleased to announce the
> availability of the Squid-3.2.0.2 beta release!

My scripts used to query the SNMP attributes:
VmRSS VmData

But 3.2.x doesn't have them anymore:

# snmpget -Os -OQ -v 2c -c public -m /usr/share/squid3/mib.txt localhost:3401 
VmSize VmRSS VmData
VmRSS: Unknown Object Identifier (Sub-id not found: (top) -> VmRSS)
VmData: Unknown Object Identifier (Sub-id not found: (top) -> VmData)

# fgrep -i rss
/usr/share/squid3/mib.txt
# fgrep -i vm
/usr/share/squid3/mib.txt
cacheSysVMsize OBJECT-TYPE

What is replacing them?

-- 
Ralf Hildebrandt
  Geschäftsbereich IT | Abteilung Netzwerk
  Charité - Universitätsmedizin Berlin
  Campus Benjamin Franklin
  Hindenburgdamm 30 | D-12203 Berlin
  Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
  ralf.hildebra...@charite.de | http://www.charite.de



Re: [squid-users] Squid 3.1 with MRTG, Not able to get Graphs

2010-09-09 Thread Amos Jeffries

On 09/09/10 21:38, Babu Chaliyath wrote:

2010/9/9 Henrik Nordström:

tor 2010-09-09 klockan 11:36 +0530 skrev Babu Chaliyath:

Hi List,
I am trying to get mrtg graphing of my squid box running freebsd 7.2
with squid 3.1.0.13, I was able to get the mrtg while running 2.6
version of squid, but once  moved to 3.1 version, I am not able to get
the mrtg graph at all, I would greatly appreciate if any
suggestions/clues what might have gone wrong on my mrtg setup.


I did not see any reference to the Squid MIB from your mrtg config.

Regards
Henrik




Oops! I missed the "LoadMIBs: /usr/local/etc/mrtg/squid.mib" line while
pasting it into my mail; yes, it is there in my mrtg.cfg.
BTW, the mib.txt file is renamed to squid.mib.

Thanx for that quick reply
Regards
Babs



It's well worth upgrading to 3.1.8. Many of the 3.1 betas had broken SNMP.

Also check that the squid.mib being loaded came from the 3.1 install.

We now have a full map of what the OIDs are and which versions they work 
in. You may find this useful:

http://wiki.squid-cache.org/Features/Snmp#Squid_OIDs
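
For a quick command-line sanity check of the agent and MIB (community, port 
and MIB path as per your setup):

  snmpget -v 2c -c public -m /usr/local/etc/mrtg/squid.mib localhost:3401 cacheUptime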


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.8
  Beta testers wanted for 3.2.0.2


Re: [squid-users] sslBump: unrecognized: 'ssl_bump', unrecognized: 'https_port'

2010-09-09 Thread Amos Jeffries

On 09/09/10 23:05, Guillaume CHAUVEL wrote:

Hi,

I want to enable SSL bumping with Squid.
This function is disabled in Debian version of Squid (Lenny,
Lenny-backports and Squeeze), so I decided to compile Squid from source.

Squid version: 3.1.8

./configure --prefix=/usr/local/squid \
--enable-inline \
--enable-async-io=8 \
--enable-storeio="ufs,aufs,diskd" \
--enable-removal-policies="lru,heap" \
--enable-delay-pools \
--enable-cache-digests \
--enable-icap-client \
--enable-follow-x-forwarded-for \
--enable-auth="basic,digest,ntlm,negotiate" \


...


/usr/local/squid/sbin/squid output:
2010/09/09 11:23:43| cache_cf.cc(363) parseOneConfigFile:
squid.conf:1155 unrecognized: 'https_port'
2010/09/09 11:23:43| cache_cf.cc(363) parseOneConfigFile:
squid.conf:1156 unrecognized: 'ssl_bump'
2010/09/09 11:23:43| cache_cf.cc(363) parseOneConfigFile:
squid.conf:1537 unrecognized: 'ssl_bump'
2010/09/09 11:23:43| cache_cf.cc(363) parseOneConfigFile:
squid.conf:5625 unrecognized: 'sslproxy_cert_error'
2010/09/09 11:23:43| cache_cf.cc(363) parseOneConfigFile:
squid.conf:5626 unrecognized: 'sslproxy_flags'

What am I doing wrong?


./configure --help | grep ssl
   --enable-ssl            Enable ssl gatewaying support using OpenSSL
   --with-openssl{=PATH}   Compile with the OpenSSL libraries. The path to the

It looks like '--with-ssl' doesn't work, you should use '--enable-ssl'

also since 3.1.7 "sslBump" is deprecated, you should move to
"ssl-bump" : 
http://www.squid-cache.org/Versions/v3/3.1/changesets/SQUID_3_1_7.html
have a look at ./src/squid.conf.documented line 1045



http_port 8080
https_port 8443 sslBump cert=/etc/ssl/certs/certificate.pem


I am quite new to squid but I don't think this is going to do what you
want judging by your config file without any "cache_peer"
https_port as stated in the documentation is really only useful when
running squid as an accelerator. you should use
"http_port 8080 ssl-bump cert=/etc/ssl/certs/certificate.pem" instead
and remove https_port


Yes, https_port is a port for receiving "native" SSL connections.

The ssl-bump feature is for converting CONNECT tunnel requests into 
normal HTTP traffic. CONNECT is a weird kind of HTTP-over-SSL-over-HTTP 
multiple-wrapped request thing. ssl-bump strips away the outer two 
layers of wrapping. It only works with browsers etc. which are configured 
to send their HTTPS via an HTTP proxy.
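
In config terms that looks something like this (a sketch; the cert path is an 
example):

  http_port 3128 ssl-bump cert=/usr/local/squid/etc/proxy-ca.pem
  ssl_bump allow all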


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.8
  Beta testers wanted for 3.2.0.2


Re: [squid-users] Not receiving latest posts.

2010-09-09 Thread Matus UHLAR - fantomas
On 27.08.10 06:03, Landy Landy wrote:
> Subject: Re: [squid-users] Not receiving latest posts.

> Ok. Looks like theres something going on with yahoo mail. I've only
> received four posts since I posted this one, looked in the mail archive
> and there are more than four new posts. I will be creating a new account
> on gmail as some suggested.

FYI: squid-cache.org was listed in SORBS blacklist for some time...
-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
WinError #98652: Operation completed successfully.


Re: [squid-users] sslBump: unrecognized: 'ssl_bump', unrecognized: 'https_port'

2010-09-09 Thread Guillaume CHAUVEL
> Hi,
>
> I want to enable SSL bumping with Squid.
> This function is disabled in Debian version of Squid (Lenny,
> Lenny-backports and Squeeze), so I decided to compile Squid from source.
>
> Squid version: 3.1.8
>
> ./configure --prefix=/usr/local/squid \
>    --enable-inline \
>    --enable-async-io=8 \
>    --enable-storeio="ufs,aufs,diskd" \
>    --enable-removal-policies="lru,heap" \
>    --enable-delay-pools \
>    --enable-cache-digests \
>    --enable-icap-client \
>    --enable-follow-x-forwarded-for \
>    --enable-auth="basic,digest,ntlm,negotiate" \
>
...
>
> /usr/local/squid/sbin/squid output:
> 2010/09/09 11:23:43| cache_cf.cc(363) parseOneConfigFile:
> squid.conf:1155 unrecognized: 'https_port'
> 2010/09/09 11:23:43| cache_cf.cc(363) parseOneConfigFile:
> squid.conf:1156 unrecognized: 'ssl_bump'
> 2010/09/09 11:23:43| cache_cf.cc(363) parseOneConfigFile:
> squid.conf:1537 unrecognized: 'ssl_bump'
> 2010/09/09 11:23:43| cache_cf.cc(363) parseOneConfigFile:
> squid.conf:5625 unrecognized: 'sslproxy_cert_error'
> 2010/09/09 11:23:43| cache_cf.cc(363) parseOneConfigFile:
> squid.conf:5626 unrecognized: 'sslproxy_flags'
>
> What am I doing wrong?

./configure --help | grep ssl
  --enable-ssl            Enable ssl gatewaying support using OpenSSL
  --with-openssl{=PATH}   Compile with the OpenSSL libraries. The path to the

It looks like '--with-ssl' doesn't work, you should use '--enable-ssl'

also since 3.1.7 "sslBump" is deprecated, you should move to
"ssl-bump" : 
http://www.squid-cache.org/Versions/v3/3.1/changesets/SQUID_3_1_7.html
have a look at ./src/squid.conf.documented line 1045


>http_port 8080
>https_port 8443 sslBump cert=/etc/ssl/certs/certificate.pem

I am quite new to squid but I don't think this is going to do what you
want judging by your config file without any "cache_peer"
https_port as stated in the documentation is really only useful when
running squid as an accelerator. you should use
"http_port 8080 ssl-bump cert=/etc/ssl/certs/certificate.pem" instead
and remove https_port


Guillaume.


Re: [squid-users] forwarding port changed based on url

2010-09-09 Thread foobar devnull

On Wed, Sep 8, 2010 at 3:50 PM, Amos Jeffries  wrote:
> On 09/09/10 01:22, foobar devnull wrote:
>>
>> Hi all,
>>
>> I tried to look for an answer to this probably simple question via the
>> "mailing list search" but it seems to be down and google was of little
>> help.
>>
>> I have the following setup:
>>
>> I have a squid server setup as a reverse proxy and serving a vm with
>> multiple domains/websites.  One of these websites offers an ssl
>> connection on port 443 and a second ssl connection on port 6066 for
>> the admin interface.  both ports point to www.foobar.com
>>
>> I'd like to be able to do the following with squid
>>
>> wwwadm.foobar.com:443 -->  [squid] -->  www.foobar.com:6066
>> www.foobar.com:443 -->  [squid]-->  www.foobar.com:443
>>
>> Can this be done?  If so, I'd be grateful if you could point me to the
>> appropriate documentation or give me a simple example to work from.
>
> The answer is two questions:
>  can you make a wildcard cert for both those domains?
> or,
>  can you assign each its own IP and certificate?
>
> Squid can be configured as an HTTPS reverse proxy to do it either way. It's
> a standard virtual-host setup with ssl added, differing only in the
> receiving https_port settings.
> http://wiki.squid-cache.org/ConfigExamples/Reverse/VirtualHosting
> http://wiki.squid-cache.org/ConfigExamples/Reverse/MultipleWebservers
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE9 or 3.1.8
>  Beta testers wanted for 3.2.0.2
>

Hi Amos,

I read the documentation you sent me and you had a very good point
regarding the need for a wildcard certificate but I am still looking
for a solution to my question, which is basically...

can squid reformat the url and port to match the target vm?

The request is made to
wwwadm.foobar.com:443
and passed through the reverse proxy to the vm listening on
www.foobar.com:6066

both ports use ssl of course.

Any help is appreciated. I can't seem to find any information on the
forwarding (and changing) of ports.

Thanks!


Re: [squid-users] Squid 3.1 with MRTG, Not able to get Graphs

2010-09-09 Thread Babu Chaliyath
2010/9/9 Henrik Nordström :
> tor 2010-09-09 klockan 11:36 +0530 skrev Babu Chaliyath:
>> Hi List,
>> I am trying to get mrtg graphing of my squid box running freebsd 7.2
>> with squid 3.1.0.13, I was able to get the mrtg while running 2.6
>> version of squid, but once  moved to 3.1 version, I am not able to get
>> the mrtg graph at all, I would greatly appreciate if any
>> suggestions/clues what might have gone wrong on my mrtg setup.
>
> I did not see any reference to the Squid MIB from your mrtg config.
>
> Regards
> Henrik
>
>

Oops! I missed the "LoadMIBs: /usr/local/etc/mrtg/squid.mib" line while
pasting it into my mail; yes, it is there in my mrtg.cfg.
BTW, the mib.txt file is renamed to squid.mib.

Thanx for that quick reply
Regards
Babs


[squid-users] sslBump: unrecognized: 'ssl_bump', unrecognized: 'https_port'

2010-09-09 Thread Stephan Huiser
Hi,

I want to enable SSL bumping with Squid.
This function is disabled in Debian version of Squid (Lenny,
Lenny-backports and Squeeze), so I decided to compile Squid from source.

Squid version: 3.1.8

./configure --prefix=/usr/local/squid \
--enable-inline \
--enable-async-io=8 \
--enable-storeio="ufs,aufs,diskd" \
--enable-removal-policies="lru,heap" \
--enable-delay-pools \
--enable-cache-digests \
--enable-icap-client \
--enable-follow-x-forwarded-for \
--enable-auth="basic,digest,ntlm,negotiate" \
   
--enable-basic-auth-helpers="LDAP,MSNT,NCSA,PAM,SASL,SMB,YP,DB,POP3,getpwnam,squid_radius_auth,multi-domain-NTLM"
\
--enable-ntlm-auth-helpers="smb_lm," \
--enable-digest-auth-helpers="ldap,password" \
--enable-negotiate-auth-helpers="squid_kerb_auth" \
   
--enable-external-acl-helpers="ip_user,ldap_group,session,unix_group,wbinfo_group"
\
--enable-arp-acl \
--enable-esi \
--disable-translation \
--with-filedescriptors=65536 \
--with-large-files \
--with-ssl \
--with-openssl=/usr \
--with-default-user=proxy \
--disable-ipv6

make all
make install


./squid -v
Squid Cache: Version 3.1.8
configure options:  '--prefix=/usr/local/squid'
'--with-cppunit-basedir=/usr' '--enable-inline' '--enable-async-io=8'
'--enable-storeio=ufs,aufs,diskd' '--enable-removal-policies=lru,heap'
'--enable-delay-pools' '--enable-cache-digests' '--enable-underscores'
'--enable-icap-client' '--enable-follow-x-forwarded-for'
'--enable-auth=basic,digest,ntlm,negotiate'
'--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SASL,SMB,YP,DB,POP3,getpwnam,squid_radius_auth,multi-domain-NTLM'
'--enable-ntlm-auth-helpers=smb_lm,'
'--enable-digest-auth-helpers=ldap,password'
'--enable-negotiate-auth-helpers=squid_kerb_auth'
'--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group'
'--enable-arp-acl' '--enable-esi' '--enable-ipv6'
'--disable-translation' '--with-filedescriptors=65536'
'--with-large-files' '--with-ssl' '--with-openssl=/usr'
'--with-default-user=proxy' '--disable-ipv6'
--with-squid=/usr/local/src/squid-3.1.8 --enable-ltdl-convenience


squid.conf (cat squid.conf | grep -v "^#" | grep -v "^$" ):

auth_param ntlm program /usr/bin/ntlm_auth
--helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 50
auth_param basic program /usr/bin/ntlm_auth
--helper-protocol=squid-2.5-basic
auth_param basic children 5
auth_param basic realm DOMAIN
auth_param basic credentialsttl 2 hours
cache_peer 127.0.0.1 parent 8081 0 no-query login=*:nopassword
acl apache rep_header Server ^Apache
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl shoutcast rep_header X-HTTP09-First-Line ^ICY\s[0-9]
acl AuthorizedUsers proxy_auth REQUIRED
external_acl_type ad_group %LOGIN /usr/lib/squid3/wbinfo_group.pl
acl power_download_gebruikers external ad_group InternetUnlimitedDownload
acl internet_kantoor_gebruikers external ad_group ServApplicatiegroep52
acl internet_desktop_gebruikers external ad_group Applicatiegroep55
acl internet_blacklist_gebruikers external ad_group ServApplicatiegroep53
acl ie_browser browser ^Mozilla/4\.0 .compatible; MSIE # die!!
acl localnet src 172.16.0.0/12    # RFC1918 possible internal network
acl localnet src 192.168.0.0/16   # RFC1918 possible internal network
acl terminalservers src 10.2.0.202/32
acl terminalservers src 10.2.0.203/32
acl terminalservers src 10.2.0.204/32
acl terminalservers src 10.2.0.205/32
acl terminalservers src 10.2.0.206/32
acl terminalservers src 10.2.0.207/32
acl desktops src 10.2.150.4/32
acl desktops src 10.1.107.1/32
acl desktops src 10.2.100.88/32
acl vrij_internet_werkplekken src 10.2.100.1/32
acl vrij_internet_werkplekken src 10.2.100.2/32
acl vrij_internet_werkplekken src 10.2.100.3/32
acl vrij_internet_werkplekken src 10.2.100.4/32
acl vrij_internet_werkplekken src 10.2.100.5/32
acl vrij_internet_werkplekken src 10.2.100.6/32
acl vrij_internet_werkplekken src 10.2.100.7/32
acl vrij_internet_werkplekken src 10.2.100.12/32
acl vrij_internet_werkplekken src 10.2.100.88/32
acl vrij_internet_werkplekken src 10.2.176.3/32
acl allow_download_unlimited_from dstdomain
"/etc/squid/download_unlimited_sites"
acl whitelist_kantoor dstdomain "/etc/squid/whitelist_kantoor"
acl whitelist_desktop dstdomain "/etc/squid/whitelist_desktop"
acl whitelist_desktop_IE dstdomain "/etc/squid/whitelist_desktop_IE"
acl whitelist_kantoor_IE dstdomain "/etc/squid/whitelist_kantoor_IE"
redirector_access deny whitelist_kantoor
redirector_access deny whitelist_desktop
redirector_access deny whitelist_desktop_IE
redirector_access deny whitelist_kantoor_IE
acl SSL_ports port 443    # https
acl Safe_ports port 80    # http
acl Safe_ports port 21    # ftp
acl Safe_ports port 443   # https
acl purge method PURGE
acl CONNECT method CONNECT
http_access allow manager localhost
http_access allow all localhost
http_access allow ie_browser internet_desktop_gebru

Re: [squid-users] Proxy-Connection: Keep-Alive in POST Requests?

2010-09-09 Thread Henrik Nordström
ons 2010-09-08 klockan 14:46 +0900 skrev Mikio Kishi:

> It's still a feature, right ?
> On the Internet, there are some web applications which requires
> multi post requests in a http connection...

Applications which require some special relation between connections and
requests are by definition broken.

Connections in HTTP are hop-by-hop, not end-to-end. There is no relation
between connections made by clients to proxy and connections made by the
proxy to requested servers. 

You can have N client connections getting multiplexed over 1 server
connection, or 1 persistent client connection getting its requests
distributed over M server connections. The exact result depends on the
traffic and policy of client, proxy and webserver, but the key is that
client<->proxy and proxy<->server connections are fully independent.

An exception to this is if you are using NTLM or Negotiate (Kerberos)
authentication, as those authentication protocols are not HTTP
authentication schemes but TCP connection authentication schemes, in
direct violation of HTTP messaging rules.

Regards
Henrik



Re: [squid-users] Squid 3.1 with MRTG, Not able to get Graphs

2010-09-09 Thread Henrik Nordström
tor 2010-09-09 klockan 11:36 +0530 skrev Babu Chaliyath:
> Hi List,
> I am trying to get mrtg graphing of my squid box running freebsd 7.2
> with squid 3.1.0.13, I was able to get the mrtg while running 2.6
> version of squid, but once  moved to 3.1 version, I am not able to get
> the mrtg graph at all, I would greatly appreciate if any
> suggestions/clues what might have gone wrong on my mrtg setup.

I did not see any reference to the Squid MIB from your mrtg config.

Regards
Henrik