RE: [squid-users] Dynamic content caching in Squid 3.2 vs 3.1

2013-03-15 Thread Jon Schneider
1^M
Host: testsite.domain.com^M
Accept: */*^M
Accept-Encoding: gzip^M
User-Agent: JoeDog/1.00 [en] (X11; I; Siege 2.72)^M
Surrogate-Capability: cache1.domain.com="Surrogate/1.0 ESI/1.0"^M
X-Forwarded-For: 192.168.5.183^M
Cache-Control: max-age=259200^M
Connection: keep-alive^M
^M

--
2013/03/15 09:34:38.336 kid1| ctx: enter level  0: 
'http://testsite.domain.com/__utm.js.aspx'
2013/03/15 09:34:38.336 kid1| http.cc(732) processReplyHeader: HTTP Server 
local=192.168.5.183:51705 remote=192.168.5.20:80 FD 23 flags=1
2013/03/15 09:34:38.336 kid1| http.cc(733) processReplyHeader: HTTP Server 
REPLY:
-
HTTP/1.1 200 OK^M
Cache-Control: public, max-age=7200^M
Content-Type: text/javascript; charset=utf-8^M
Content-Encoding: gzip^M
Expires: Fri, 15 Mar 2013 17:34:38 GMT^M
Last-Modified: Fri, 15 Mar 2013 15:34:38 GMT^M
ETag: "71B76C2B36A7E48318E27D6B5ED98F3A"^M
Vary: Accept-Encoding^M
Server: Microsoft-IIS/7.5^M
X-AspNet-Version: 4.0.30319^M
X-Server: IISSVR^M
X-Powered-By: ASP.NET^M
Date: Fri, 15 Mar 2013 15:34:38 GMT^M
Content-Length: 6157^M
^M
^_<8b>^H
--
HTTP/1.1 200 OK^M
Cache-Control: public, max-age=7200^M
Content-Type: text/javascript; charset=utf-8^M
Content-Encoding: gzip^M
Expires: Fri, 15 Mar 2013 17:34:38 GMT^M
Last-Modified: Fri, 15 Mar 2013 15:34:38 GMT^M
ETag: "71B76C2B36A7E48318E27D6B5ED98F3A"^M
Vary: Accept-Encoding^M
Server: Microsoft-IIS/7.5^M
X-AspNet-Version: 4.0.30319^M
X-Server: IISSVR^M
X-Powered-By: ASP.NET^M
Date: Fri, 15 Mar 2013 15:34:38 GMT^M
Content-Length: 6157^M
X-Cache: MISS from cache1.domain.com^M
X-Cache-Lookup: MISS from cache1.domain.com:80^M
Connection: close^M
^M

--
2013/03/15 09:34:38.336 kid1| store_io.cc(33) storeCreate: storeCreate: 
Selected dir 0 for -1@-1=0/2/1/1
2013/03/15 09:34:38.337 kid1| client_side.cc(764) swanSong: 
local=192.168.5.183:80 remote=192.168.5.183:45831 flags=1



Thanks,
Jon



-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Thursday, March 14, 2013 3:49 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Dynamic content caching in Squid 3.2 vs 3.1

On 15/03/2013 6:59 a.m., Jon Schneider wrote:
> I have set up squid 3.2.7 in a test environment in preparation to roll it out 
> to production, however I have noticed a difference in caching behavior that I 
> have as of yet been unable to resolve.  The squid config files are almost 
> identical with the exception of two config lines that are now obsolete in 
> 3.2.  The refresh patterns and ACLs are all exactly the same.
>
> After some testing it appears that the 3.2 instance is only caching images, 
> virtually everything else results in a miss.  In the 3.1 instance almost 
> everything returns as a hit with very few misses.  When going through the 
> list of everything that is a miss in the 3.2 instance that is a hit in the 
> 3.1 instance it all seems to be either dynamic or possibly dynamic content.  
> Virtually anything that does or could have a '?' at the end does not seem to 
> get cached in 3.2.  This results in page times of about twice as long with the 
> 3.2 instance.

That "or could have a ?" is a big clue #1 that '?' is *not* involved.


> Current refresh patterns look like this:
>
> refresh_pattern \.(css|gif|ico|jpg|jpeg|js|swf|xsl|xslt) 5 20% 4320 reload-into-ims
> #refresh_pattern .(\?.*)?$ 4320 20% 4320 reload-into-ims
> refresh_pattern \.(axd|cssx|svg|swfx|img) 0 100% 4320 reload-into-ims
> refresh_pattern . 0 20% 4320 reload-into-ims

Notice the maximum Age time in all these heuristics is 4320 minutes.
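For readers following along: a refresh_pattern rule's heuristic (which only applies when a response carries no explicit Expires/max-age) is, roughly: fresh while age is under the min column, stale once past the max column, and in between fresh while age is under percent x (Date - Last-Modified). A simplified sketch of that rule, all values in minutes; this glosses over Squid's full algorithm:

```python
def heuristic_fresh(age_min, lm_age_min, min_m, pct, max_m):
    """Simplified refresh_pattern check (all arguments in minutes).

    age_min    -- how old the cached object is
    lm_age_min -- Date minus Last-Modified when the object was fetched
    min_m, pct, max_m -- the three refresh_pattern columns
    """
    if age_min >= max_m:       # past the max column: always stale
        return False
    if age_min <= min_m:       # under the min column: always fresh
        return True
    # LM-factor: fresh while age < pct% of the object's age at fetch time
    return age_min < lm_age_min * pct / 100.0

# With "refresh_pattern . 0 20% 4320": an object whose Last-Modified was
# 600 minutes before Date stays fresh for 120 minutes.
print(heuristic_fresh(60, 600, 0, 20, 4320))    # True
print(heuristic_fresh(200, 600, 0, 20, 4320))   # False
```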

> I have tried almost everything but it simply refuses to cache the same content 
> as 3.1 is caching.
>
> I am using siege to do the testing, here is an example output of the header 
> information, first one is from the 3.2 server, second is from the 3.1 server:
>
> [root@test squid_test]# siege -b -r 1 -c 1 -g 
> https://urldefense.proofpoint.com/v1/url?u=http://testsite.domain.com/
> Shared/Util/CookieUtils.js?__v%3D1363031409&k=LXAl%2FS1Qy5NX2y2VjzohVw
> %3D%3D%0A&r=zo9XkoYHQcWwwCdFUN9MRsoh05AAVkIG0wIFJyHOKjQ%3D%0A&m=YCJbNE
> FOaqXclG6S%2BFiBYdZmc%2BMoTwOPouA5u65NkSg%3D%0A&s=277d042c485ccce23a52
> 747953c2d6b588802a197d209b50608d338da3bf0e01
> GET /Shared/Util/CookieUtils.js?__v=1363031409 HTTP/1.0
> Host: testsite.domain.com
> Accept: */*
> Accept-Encoding: *
> User-Agent: JoeDog/1.00 [en] (X11; I; Siege 2.72)
> Connection: close
>
>
> HTTP/1.1 200 OK
> Cache-Control: max-age=604800
> Last-Modified: Thu, 05 May 2005 20:51:40 GMT
> Date: Tue, 12 Mar 2013 22:49:23 GMT

The above is a big clue #2 that 3.2 is operating correctly.

Squid is not permitted to store the object more than 604800 seconds (7
days) past its creation date of 5 May 2005.
This is effectively the same as sendin
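Following Amos's reading above: the object is nearly eight years past its Last-Modified date, far beyond the 604800-second window. A back-of-the-envelope check (dates copied from the headers quoted above):

```python
from datetime import datetime, timedelta

last_modified = datetime(2005, 5, 5, 20, 51, 40)   # Last-Modified header
response_date = datetime(2013, 3, 12, 22, 49, 23)  # Date header
window = timedelta(seconds=604800)                 # Cache-Control: max-age

age = response_date - last_modified
print(age.days)       # 2868 days, roughly 7.9 years
print(age > window)   # True: far outside the 7-day window
```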

[squid-users] Dynamic content caching in Squid 3.2 vs 3.1

2013-03-14 Thread Jon Schneider
2.330 kid1| client_side_request.cc(760) 
clientAccessCheckDone: The request GET 
http://testsite.domain.com/Content/js/SocialShare.js?__v=1363031409 is 1, 
because it matched 'all'
2013/03/14 10:10:12.330 kid1| forward.cc(103) FwdState: Forwarding client 
request local=192.168.5.183:80 remote=172.18.12.20:59205 FD 11 flags=1, 
url=http://testsite.domain.com/Content/js/SocialShare.js?__v=1363031409
2013/03/14 10:10:12.330 kid1| peer_select.cc(271) peerSelectDnsPaths: Find IP 
destination for: 
http://testsite.domain.com/Content/js/SocialShare.js?__v=1363031409' via 
origin.portal.domain.com
2013/03/14 10:10:12.330 kid1| peer_select.cc(271) peerSelectDnsPaths: Find IP 
destination for: 
http://testsite.domain.com/Content/js/SocialShare.js?__v=1363031409' via 
origin.portal.domain.com
2013/03/14 10:10:12.330 kid1| peer_select.cc(298) peerSelectDnsPaths: Found 
sources for 
'http://testsite.domain.com/Content/js/SocialShare.js?__v=1363031409'
2013/03/14 10:10:12.330 kid1| peer_select.cc(299) peerSelectDnsPaths:   
always_direct = 0
2013/03/14 10:10:12.330 kid1| peer_select.cc(300) peerSelectDnsPaths:
never_direct = 0
2013/03/14 10:10:12.330 kid1| peer_select.cc(309) peerSelectDnsPaths:  
cache_peer = local=0.0.0.0 remote=192.168.5.20:80 flags=1
2013/03/14 10:10:12.330 kid1| peer_select.cc(309) peerSelectDnsPaths:  
cache_peer = local=0.0.0.0 remote=192.168.5.20:80 flags=1
2013/03/14 10:10:12.330 kid1| peer_select.cc(311) peerSelectDnsPaths:
timedout = 0
2013/03/14 10:10:12.330 kid1| http.cc(2177) sendRequest: HTTP Server 
local=192.168.5.183:51057 remote=192.168.5.20:80 FD 32 flags=1
2013/03/14 10:10:12.330 kid1| http.cc(2178) sendRequest: HTTP Server REQUEST:
-
GET /Content/js/SocialShare.js?__v=1363031409 HTTP/1.1^M
Host: testsite.domain.com^M
Cookie: Domain.SqlXml.LastUpdate=0^M
Accept: */*^M
Accept-Encoding: *^M
User-Agent: JoeDog/1.00 [en] (X11; I; Siege 2.72)^M
Surrogate-Capability: cache1.domain.com="Surrogate/1.0 ESI/1.0"^M
X-Forwarded-For: 172.18.12.20^M
Cache-Control: max-age=259200^M
Connection: keep-alive^M
^M

--
2013/03/14 10:10:12.358 kid1| ctx: enter level  0: 
'http://testsite.domain.com/Content/js/SocialShare.js?__v=1363031409'
2013/03/14 10:10:12.358 kid1| http.cc(732) processReplyHeader: HTTP Server 
local=192.168.5.183:51057 remote=192.168.5.20:80 FD 32 flags=1
2013/03/14 10:10:12.358 kid1| http.cc(733) processReplyHeader: HTTP Server 
REPLY:
-
HTTP/1.1 200 OK^M
Cache-Control: max-age=604800^M
Content-Type: application/x-javascript^M
Content-Encoding: gzip^M
Last-Modified: Fri, 28 Dec 2012 15:00:37 GMT^M
Accept-Ranges: bytes^M
ETag: "709ea918ce5cd1:0"^M
Vary: Accept-Encoding^M
Server: Microsoft-IIS/7.5^M
X-Server: IISSVR2^M
X-Powered-By: ASP.NET^M
Date: Thu, 14 Mar 2013 16:10:12 GMT^M
Content-Length: 1020^M

Any help is greatly appreciated

Thanks,
Jon


Re: [squid-users] youtube safety mode

2011-03-21 Thread Jon

On 03/21/2011 08:16 PM, Amos Jeffries wrote:

On 22/03/11 16:16, Jon wrote:

On 03/21/2011 04:31 PM, Amos Jeffries wrote:

On Mon, 21 Mar 2011 16:06:31 -0800, Jon R. wrote:

On Friday, March 18, 2011 15:48 AKDT, Amos Jeffries wrote:


On 19/03/11 07:14, Test User wrote:
> I had been asked if this is possible and doing a search through
the mailing list and google, I could only find a howto for
SafeSquid. Is it possible to do this in transparent mode using
squid? If so, can someone point me to a doc on how to accomplish 
this?


What is this "youtube safety mode" you speak of?

NP: "SafeSquid" is a system which is not related to Squid, just 
taking

the brand name to boost their product.

Amos


Hello Amos,

I understand about SafeSquid after spending a couple minutes on their
site.

The youtube safety mode is a mode that blocks objectionable content
from appearing as a result in a search. From what I have read it
appears to only work on a browser by browser basis, so I figured I
would ask the gurus if they knew of a way to turn it on using a
transparent proxy.

Here is a link to YouTube's explanation of what it is:

http://www.google.com/support/youtube/bin/answer.py?answer=174084



That is one truly useless explanation for anyone with technical
interest.

Do you have any info or knowledge about how it operates in HTTP? or if
it even does so?

Squid has some capability to alter HTTP headers. But that requires
knowing what is going on in the background and what to change from/to.

Amos



The best information I have found for it is from the safesquid website
that explains how they enforce it.

Taken from: http://www.safesquid.com/html/portal.php?page=165


"The first rule in Profiles section identifies requests made for
youtube.com, and adds them a profile 'UNSAFE_YOUTUBE'.

The second rule analyzes the Request Header Pattern of the client, to
check if a string - 'PREF=f2800' exists in cookie being sent to the
host. This string will be found, only if the client has opted for Safety
Mode. If the string is found, this rule removes the profile
'UNSAFE_YOUTUBE' from the request, and adds a profile 'SAFE_YOUTUBE' to
it, and the request is forwarded to the host.

If the string is not found, the request still carries the
'UNSAFE_YOUTUBE' profile. The rule under Rewrite document section acts
upon such requests, and inserts the string in the cookie section of the
client request headers, and forwards the request to the host.

So effectively, all requests that are sent to the host, i.e.
youtube.com, carry the 'Safety Mode Enabled' preference string. This
makes youtube serve only filtered results to the client."



I think almost the same thing could be done with squid but I am not a
squid master, so I am asking here. I am sorry if I am not giving you
good information to work with.
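The quoted SafeSquid rules reduce to a single header transform: if the Cookie header bound for youtube.com lacks the 'PREF=f2800' marker, inject it before forwarding. A rough sketch of that logic in Python (illustrative only; the marker string comes from the SafeSquid write-up above, and doing this for real would need ICAP/eCAP or similar content adaptation, not Squid alone):

```python
def enforce_safety_mode(headers):
    """Inject YouTube's 'safety mode' cookie marker if it is absent."""
    marker = "PREF=f2800"
    cookie = headers.get("Cookie", "")
    if marker not in cookie:
        # Append to an existing Cookie header, or create one.
        headers["Cookie"] = f"{cookie}; {marker}" if cookie else marker
    return headers

print(enforce_safety_mode({})["Cookie"])                      # PREF=f2800
print(enforce_safety_mode({"Cookie": "VISITOR=abc"})["Cookie"])
# VISITOR=abc; PREF=f2800
```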


Jon



Thanks Jon. That does seem more useful than the YouTube docs.

It paints a sad picture though. There is one possible way to turn it 
OFF. But not for turning it on without screwing the client's cookies up.


"Test User": you will have to use ICAP/eCAP or other HTTP content 
adaptation fiddling. Squid will not do this itself.


Amos


Thanks Amos! I very much appreciate you taking the time to look into it.

Jon


Re: [squid-users] youtube safety mode

2011-03-21 Thread Jon R.
 
On Friday, March 18, 2011 15:48 AKDT, Amos Jeffries  
wrote: 
 
> On 19/03/11 07:14, Test User wrote:
> > I had been asked if this is possible and doing a search through the mailing 
> > list and google, I could only find a howto for SafeSquid. Is it possible to 
> > do this in transparent mode using squid? If so, can someone point me to a 
> > doc on how to accomplish this?
> 
> What is this "youtube safety mode" you speak of?
> 
> NP: "SafeSquid" is a system which is not related to Squid, just taking 
> the brand name to boost their product.
> 
> Amos
> -- 
> Please be using
>Current Stable Squid 2.7.STABLE9 or 3.1.11
>Beta testers wanted for 3.2.0.5
 
 
 Hello Amos,

I understand about SafeSquid after spending a couple minutes on their site.

The youtube safety mode is a mode that blocks objectionable content from 
appearing as a result in a search. From what I have read it appears to only 
work on a browser by browser basis, so I figured I would ask the gurus if they 
knew of a way to turn it on using a transparent proxy.

Here is a link to YouTube's explanation of what it is:

http://www.google.com/support/youtube/bin/answer.py?answer=174084


Jon
 



[squid-users] squid session

2010-11-19 Thread jon jon
Hi,

I have squid session working to display an agreement page before
surfing the internet. I made a custom agreement webpage and put it in
/usr/local/squid/share/errors/English. The webpage contains a couple
of images which I thought I could put into that directory also. When
the agreement page comes up, the images are not displayed. I made a
directory called images in /usr/local/squid/share/errors/English and
edited the lines in the HTML file to point to the images. But I still
can't get the webpage to load correctly. What am I doing wrong?

Thanks,

jon



[squid-users] squid_session

2010-11-03 Thread jon jon
Hello,

I have looked through the mail archive, and not found what I am
looking for. I have Squid 2.6 installed on my Slackware box. I am new
to Linux, so I apologize for the noob question. How do I install
squid_session, the external helper program? Do I need to run
./configure --enable squid_session

Or is this program already installed, and I just need to add a line to
my squid.conf file?
I don't understand how to install that program.

Thanks for any help


[squid-users] Squid - Client joined to domain vs client not joined

2010-04-28 Thread Jon Williams
I'm completely new to Squid and have run across a problem that I'm
beating my head on.  In my test lab I set up a CentOS server with
Squid.  I've configured winbind so that I'm joined to Microsoft 2003
domain controller.  I've configured squid.conf so that it looks up
users in Active Directory and either allows them access to a website
or not depending upon their group.  Happily everything works
beautifully when testing using a workstation on the same subnet.
However as soon as I join that workstation to the domain, I have
issues.  My two http_access rules for my authenticated users do not
seem to work.  My web browser does not prompt me to log in.

So it seems that the authentication that happens when part of the
domain is conflicting with what I'm doing with Squid.  However this is
where I get hopelessly lost.  Below is the modified part of my
squid.conf.  Anybody have any ideas?

auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 5
auth_param ntlm keep_alive on

auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off

acl authenticated proxy_auth REQUIRED


external_acl_type ad_group %LOGIN /usr/lib/squid/wbinfo_group.pl
acl banned_users external ad_group BannedUsers
acl allowed_sites dstdomain "/etc/squid/allowed_sites"
acl allowed_users external ad_group AllowedUsers

# And finally deny all other access to this proxy
http_access allow localhost
http_access allow authenticated allowed_users
http_access allow authenticated allowed_sites banned_users
http_access deny all


[squid-users] Load balancing WITHOUT parents over multiple WAN connections

2009-12-30 Thread Jon DeLee

*This message was scanned for all current viruses and is certified clean*


Hi All,

I'm using Squid 3.0.STABLE8 as my main cache, and I have two other 2.7 
caches set up, one on each WAN connection.  The only reason we have 
multiple proxy servers is to load balance; in reality I only need the 
one 3.0 server, which has access to both WAN links. 

I don't want any ACLs that force one group of users to one outgoing IP; 
I just want Squid to see that it has two paths to the internet and use 
them in a weighted round-robin fashion. 

I have tried setting up one direct and one parent, but no weighting 
occurs because Squid prefers direct routes if possible.


I have tried to force squid to use an IP address on the machine and set 
up multiple weighted routes from that IP, but strange things happen with 
web sites that check source IP address, so it needs to be something that 
Squid can control.



Any suggestions?

Thanks,

Jon DeLee


[squid-users] can't access javascript enable website

2009-08-11 Thread Jon Tim
Hi Squid Support,
 
I'm using squid-2.6.STABLE22-1.fc8 from Fedora 8.
I can use squid and SquidGuard without having any
problems.
 
But, today, one of the users complained that she can't
browse the http://www.sdf.gov.sg website. I tried without using
the squid proxy and it works OK.
 
I looked into this website more and found in 

RE: [squid-users] Applying ACLs to access_log directive

2009-06-17 Thread Jon Gregory
Hi Chris,

Thank you for the response.

Yes, the third column of the log shows the host IP of the machine requesting 
pages.


Regards,

Jon Gregory

-Original Message-
From: crobert...@gci.net [mailto:crobert...@gci.net] 
Sent: Tue 16 June 2009 21:04
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Applying ACLs to access_log directive

Jon Gregory wrote:
> I am using SquidNT 2.7 STABLE 5 on WinXP SP3 running as a service and would 
> like to sense check what I am attempting but failing to achieve.  From all 
> the documentation I have read from Visolve, squid-cache.org FAQ and this 
> lists history I am creating a valid set of directives in the below format.
>
> access_log <filepath> [<logformat name> [acl acl ...]]
>
>
>
> I want to direct logging to individual files depending on the source 
> network while still capturing all requests in the access.log.  The example 
> below is how I have attempted to implement this but the result is that 
> access.log logs all events which is okay but the network specific logs remain 
> empty.
>
> acl NET_A src 192.168.0.0/24
> acl NET_A src 10.20.30.0/24
> acl NET_B src 192.168.1.0/24
> acl NET_C src 192.168.2.0/24
>
> access_log c:/squid/var/logs/access_NET_A.log squid NET_A
> access_log c:/squid/var/logs/access_NET_B.log squid NET_B
> access_log c:/squid/var/logs/access_NET_C.log squid NET_C
> access_log c:/squid/var/logs/access.log squid
>   

That looks right...

> In an attempt to test I have also implemented a usergroup-based ACL; with it I 
> can get logging to individual files and to the catch-all access.log, which 
> works as I would expect.
>
> acl Admins external NT_local_group Administrators
>
> access_log c:/squid/var/logs/access_ADMINS.log squid Admins
> access_log c:/squid/var/logs/access.log squid
>   

So it works...

> What am I not understanding?  Is there a dependence on the acl type when 
> using access_log?
>   

Do the entries in c:/squid/var/logs/access.log show the remotehost IP in 
the third column?

Chris


This message is meant for the sole viewing of the addressee. If you have 
received this message in error please reply to the sender to inform them of 
their mistake.
The views and opinions expressed in this email are not necessarily endorsed by 
Innovate Logistics Ltd (Company No. 02058414).

Disclaimer : 

This e-mail has been scanned using Anti-Virus Software, although all efforts 
have been made to make this email safe it is always a wise precaution to scan 
this message with your own Anti-Virus Software.



[squid-users] Applying ACLs to access_log directive

2009-06-16 Thread Jon Gregory

I am using SquidNT 2.7 STABLE 5 on WinXP SP3 running as a service and would 
like to sense check what I am attempting but failing to achieve.  From all the 
documentation I have read from Visolve, squid-cache.org FAQ and this lists 
history I am creating a valid set of directives in the below format.

access_log <filepath> [<logformat name> [acl acl ...]]



I want to direct logging to individual files depending on the source 
network while still capturing all requests in the access.log.  The example 
below is how I have attempted to implement this but the result is that 
access.log logs all events which is okay but the network specific logs remain 
empty.

acl NET_A src 192.168.0.0/24
acl NET_A src 10.20.30.0/24
acl NET_B src 192.168.1.0/24
acl NET_C src 192.168.2.0/24

access_log c:/squid/var/logs/access_NET_A.log squid NET_A
access_log c:/squid/var/logs/access_NET_B.log squid NET_B
access_log c:/squid/var/logs/access_NET_C.log squid NET_C
access_log c:/squid/var/logs/access.log squid



In an attempt to test I have also implemented a usergroup-based ACL; with it I 
can get logging to individual files and to the catch-all access.log, which 
works as I would expect.

acl Admins external NT_local_group Administrators

access_log c:/squid/var/logs/access_ADMINS.log squid Admins
access_log c:/squid/var/logs/access.log squid



What am I not understanding?  Is there a dependence on the acl type when using 
access_log?






[squid-users] 2.7STABLE6 weight has no effect - weight not working

2009-02-12 Thread Jon DeLee



I  upgraded 2 days ago from 2.7.STABLE5 to 2.7.STABLE6 due to complaints
about hotmail hanging.  I have 3 squid servers, and I updated all 3 at
the same time.  I use one as the primary in a transparent mode, and it
is forced to make requests from the other two parents, which are on
different connections to the internet.  This gives me a way to load
balance between two connections.

Anyway, the weighting was working great until I upgraded.  After I
noticed the problem, I made sure that I was seeing ICP packets coming
from the primary to both parents, and they were, and ICP packets are
coming back from the parents.  The primary would always use the closest
parent, no matter the weight setting.  I went back to 2.7.STABLE5 on the
primary and the problem disappeared.

I found a reference to bug 2241, which seems to address this issue, but
the patch for neighbors.c appears to patch a different version than the
2.7.STABLE6 source that I built from.  It does appear that 2241 has been
incorporated into Squid 3.

Is Squid 3 mature enough to simply migrate to with my setup?  Or should
I continue to try to patch 2.7S6?  If so, how can I patch from a
different source version?

Thanks in advance,

Jon




[squid-users] Re: Clock sync accuracy importance?

2008-04-16 Thread Jon Drukman

K K wrote:

Have you considered running one of the machines as an NTP server, have
the others sync their clock to that?


no, one of the machines is shared hosting so i don't have access to run 
my own ntpd on it.



Yes, explicit 'Expires' headers help squid make smarter decisions.

If you know an object is going to be good indefinitely (e.g. a GIF for
a logo), then setting a very long expiration date will ensure squid
doesn't bother checking with the origin server.

You might want to also reconsider 'Cache-Control: max-age=300'


reconsider in what way?  the pages i am most interested in 
cache-controlling are news hub pages, and they should be good for 5 
minutes, tops.  otherwise the cached version is in danger of falling too 
far behind the 'real' news feed.


i guess i don't really understand the difference between doing Expires: 
now plus 5 minutes (in apache speak) and Cache-Control: max-age=300


-jsd-



[squid-users] Clock sync accuracy importance?

2008-04-15 Thread Jon Drukman
I'm trying to run a squid accelerator on a server in India, accelerating 
an origin host in the USA.  I don't have a ton of experience with ntpd 
but I think I have it running properly on both sites.  For whatever 
reason, they are always 20 seconds out of sync.  Squid is not appearing 
to cache items on the India server.  It's always contacting the origin 
server on every request, and I assume this is because of the clock 
discrepancy.


Is there any way to tell squid that a minute or two drift in either 
direction is OK?


Also, is there any way to find out exactly what decisions squid is 
making so I can tell for sure if it's the clock issue or something else? 
 Maybe my headers aren't correct?


HTTP/1.1 200 OK
Date: Tue, 15 Apr 2008 17:32:23 GMT
Server: Apache/2.0.61 (Unix) PHP/4.4.7 mod_ssl/2.0.61 OpenSSL/0.9.7e 
mod_fastcgi/2.4.2 DAV/2 SVN/1.4.2

X-Powered-By: PHP/5.2.3
Cache-Control: max-age=300, stale-while-revalidate, stale-on-error
Vary: Accept-Encoding
Content-Type: text/html

Should I throw an Expires header in there?

-jsd-
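One way to quantify the drift Squid sees is to compare the origin's Date header with the local clock at the proxy. A small sketch (the sample values mirror the ~20-second offset described above; in practice the header would come from a live response):

```python
from email.utils import parsedate_to_datetime
from datetime import datetime, timezone

def skew_seconds(date_header, local_now):
    """Seconds by which local_now is ahead of the origin's Date header."""
    origin = parsedate_to_datetime(date_header)
    return (local_now - origin).total_seconds()

# Date header from the response above, with a proxy clock 20 s ahead:
local = datetime(2008, 4, 15, 17, 32, 43, tzinfo=timezone.utc)
print(skew_seconds("Tue, 15 Apr 2008 17:32:23 GMT", local))  # 20.0
```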



[squid-users] Re: Squid3 accelerator mode example config?

2008-04-10 Thread Jon Drukman

On Thu, Apr 10, 2008 at 6:36 AM, Amos Jeffries <[EMAIL PROTECTED]> wrote:
>  Here you go:
>
>  # Listen on port 80,
>  http_port 80 accel defaultsite=mysite.com vhost
>
>  # actual data source is 1.2.3.4
>  # (IP or domain MUST NOT resolve to squid IP)
>  cache_peer 1.2.3.4 parent 80 0 no-query originserver name=mySitePeer
>
>  # only accept requests for "mysite.com" or "www.mysite.com"
>  acl mySites dstdomain mysite.com www.mysite.com
>  cache_peer_access mySitePeer allow mySites
>  http_access allow mySites
>
>  # stop random people abusing me with spam traffic.
>  allow_direct deny all
>  http_access deny all

i got an error on the "allow_direct" line so i commented it out.
right now this is all running on dev boxes behind a firewall so it's
no big deal about access control.

> > If I access the cache at its IP address (http://10.0.2.19/) it does not
> > send the Host: mysite.com header back to the origin.  If I use curl to
> > inject a Host: header into the request, it does work.  I want it to always
> > inject that Host: header if it's missing.
> >
>
>  It should be doing it. Try with the fixed config above. If you still see no
> Host: I'll have a closer look at the code.

with the config above, i can't hit the squid server *without* the
header so it doesn't show the problem... but that's ok, i can live
with this.  it's not likely that real world clients will hit the squid
box without the right Host: header anyway.

> > Cache-Control: max-age=15, must-revalidate
> > Content-Type: text/html; charset=UTF-8
> >
> > Squid is not obeying the Cache-Control though.  It always contacts the
> > origin on every request.
>
>  It should be.
>  "must-revalidate" means contact the origin and check for new data. Try just
> max-age alone.
>
>  And check that your server and squid machines are synced properly for time.
> If they are out by 15sec that could cause this behavior.

They are in sync but it's still happening.

-jsd-



[squid-users] Squid3 accelerator mode example config?

2008-04-09 Thread Jon Drukman
I am trying to get Squid3 working in accelerator mode but I'm running 
into some beginner mistakes, clearly.


Can someone provide me a minimal config file that would accelerate a 
single site, always force requests to have that site in the Host: header 
sent to the origin, and obey the Cache-Control: max-age=xxx header 
coming back from the origin?  Here's what I've got:


http_port 80 accel defaultsite=mysite.com vhost
http_access allow all
icp_port 0
redirect_rewrites_host_header off
cache_peer mysite.com parent 80 0 no-query originserver name=mysite.com 
forceddomain=mysite.com



If I access the cache at its IP address (http://10.0.2.19/) it does not 
send the Host: mysite.com header back to the origin.  If I use curl to 
inject a Host: header into the request, it does work.  I want it to 
always inject that Host: header if it's missing.


Right now, the responses from the origin are coming back with the 
following headers:


HTTP/1.1 200 OK
Date: Wed, 09 Apr 2008 18:09:24 GMT
Server: Apache/2.2.3 (Unix) PHP/5.2.5
X-Powered-By: PHP/5.2.5
Cache-Control: max-age=15, must-revalidate
Content-Type: text/html; charset=UTF-8

Squid is not obeying the Cache-Control though.  It always contacts the 
origin on every request.


Squid Cache: Version 3.0.STABLE4 on Ubuntu 7.10

-jsd-




[squid-users] header_replace cache-control

2007-09-11 Thread Jon
Hi all,

I've been reading the past posts about header_replace but can't seem
to get an answer.  I run squid in cache acceleration mode, I'm trying
to return a specific expiration date in the header for the images I
cache from my backend servers.  As far as I know, I tried setting the
value "header_replace Cache-Control max-age=10080" in the squid.conf
file but I still get a random expiration date.  I also tried setting
"header_access Cache-Control deny all" and set "refresh_pattern .
10018 0% 10080".

What do I need to do exactly for squid to respond to the client's web
browser with a specific expiration date in the header?

Thanks all,

Jon
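A note for later readers (hedged, from memory of Squid 2.x behaviour): header_replace in that era rewrote request headers only, so it cannot set the Cache-Control the browser receives. The usual tool for forcing a freshness lifetime inside Squid is refresh_pattern, though the Expires header sent downstream still comes from the origin. A sketch with illustrative values (min and max are in minutes):

```
# Treat matching images as fresh for 7 days (10080 minutes) even if the
# origin's own Expires/Cache-Control say otherwise (values illustrative).
refresh_pattern -i \.(gif|jpe?g|png)$ 10080 90% 10080 override-expire
```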


Re: [squid-users] centos 4.4, wccpv2, cisco 3550 switch

2007-02-09 Thread Jon Christensen

On 2/9/07, Henrik Nordstrom <[EMAIL PROTECTED]> wrote:

fre 2007-02-09 klockan 17:32 +0800 skrev Adrian Chadd:

> Yeah but if you read the grandfather post you'll see it saying there's a 
mismatch
> between 0002 and 0002 .. :)

I did. I kind of remember seeing a report that IOS reported this odd
message when there was a mismatch in the return method. But I may be
wrong. Not that it helps with the L2 incompatibility in your network layout.

Regards
Henrik




I have another question.  I moved squid to another box so my new
diagram looks like this:

 |-  SQUID (192.168.100.200/24)
 |
3550 (ip wccp web-cache redirect in set on 192.168.25.254/24 vlan interface)
 |
 |
CLIENT (192.168.25.25/24)

Updated configs:

SQUID:
wccp2_router 192.168.25.254
wccp2_rebuild_wait off
wccp2_forwarding_method 2
wccp2_return_method 2
wccp2_assignment_method 2
wccp2_service standard 0

3550:
ip wccp web-cache
interface Vlan25
ip address 192.168.25.254 255.255.255.0
ip wccp web-cache redirect in

1) The 3550 shows this in 'sh ip wccp':
SRA#sh ip wccp
Global WCCP information:
   Router information:
   Router Identifier:   192.168.254.254
   Protocol Version:2.0

   Service Identifier: web-cache
   Number of Cache Engines: 0

Why is the router identifier 192.168.254.254?

Here is some (hopefully) useful debug information on the 3550:
1d08h: WCCP-EVNT:S00: Here_I_Am packet from 192.168.100.200 w/bad
rcv_id 
1d08h: WCCP-PKT:S00: Sending I_See_You packet to 192.168.100.200 w/
rcv_id 0038
1d08h: WCCP-EVNT:S00: Here_I_Am packet from 192.168.100.200 w/bad
rcv_id 
1d08h: WCCP-PKT:S00: Sending I_See_You packet to 192.168.100.200 w/
rcv_id 0039
1d08h: WCCP-PKT:S00: Sending Removal_Query packet to 192.168.100.200w/
rcv_id 003A
1d08h: WCCP-EVNT:wccp_change_router_view: S00
1d08h: WCCP-EVNT:wccp_change_router_view: deallocate rtr_view (24 bytes)
1d08h: WCCP-EVNT:wccp_change_router_view: allocate hash rtr_view (1560 bytes)
1d08h: WCCP-EVNT:wccp_change_router_view: rtr_view_size set to 24 bytes
1d08h: WCCP-EVNT:S00: Built new router view: 0 routers, 0 usable web
caches, change # 000F
1d08h: WCCP-EVNT:wccp_copy_wc_assignment_data: enter
1d08h: WCCP-EVNT:wccp_copy_wc_assignment_data: allocate orig mask info
(28 bytes)
1d08h: WCCP-EVNT:wccp_copy_wc_assignment_data: exit
1d08h: WCCP-PKT:S00: Sending I_See_You packet to 192.168.100.200 w/
rcv_id 003B
1d08h: WCCP-EVNT:S00: Here_I_Am packet from 192.168.100.200 w/bad
rcv_id 
1d08h: WCCP-PKT:S00: Sending I_See_You packet to 192.168.100.200 w/
rcv_id 003C
1d08h: WCCP-EVNT:S00: Here_I_Am packet from 192.168.100.200 w/bad
rcv_id 
1d08h: WCCP-PKT:S00: Sending I_See_You packet to 192.168.100.200 w/
rcv_id 003D
1d08h: WCCP-PKT:S00: Sending Removal_Query packet to 192.168.100.200w/
rcv_id 003E


Re: [squid-users] centos 4.4, wccpv2, cisco 3550 switch

2007-02-08 Thread Jon Christensen

On 2/8/07, Adrian Chadd <[EMAIL PROTECTED]> wrote:

On Thu, Feb 08, 2007, Jon Christensen wrote:

> Will v1 work?   I can always move squid inside the pix.   Now, do I
> apply the wccp config to the vlan facing the clients or the vlan
> facing the squid box, or both?   Layer 2 should have hit me that it
> won't work as I am trying to make this work at layer 3.

I'm not even sure WCCPv1 is supported on the 3550! In any case,
the hardware is doing the redirection (the CPU just programming
the hardware to -do- the redirection) and there's no hardware on
the 3550 for generating the WCCP-format GRE packets.

So you'll have to move the Squid to wherever the 3550 is doing the
routing.

Supply me privately with a better network diagram and I'll help you
get WCCPv2 going.



Adrian



Thanks for your help.   I think that moving squid is my only option.
The other option would be to use the pix but don't want to update the
code to 7 at this time.  If I run into further troubles, I will post
again.


Re: [squid-users] centos 4.4, wccpv2, cisco 3550 switch

2007-02-08 Thread Jon Christensen

On 2/8/07, Adrian Chadd <[EMAIL PROTECTED]> wrote:

On Thu, Feb 08, 2007, Jon Christensen wrote:

> This is firewalled, I had to add that static if you recall. :)

It won't work. The 3550 only does L2 redirect and this requires the web cache
to be on a subnet that's reachable directly from an interface on the 3550 -
whether that's an SVI or an L3 port.

You can't do what you're doing.

> SQUID
>  |
>  |
> Firewall
>  |
>  |
> 3550
>  |
>  |
> CLIENT




Adrian




Will v1 work?   I can always move squid inside the pix.   Now, do I
apply the wccp config to the vlan facing the clients or the vlan
facing the squid box, or both?   Layer 2 should have hit me that it
won't work as I am trying to make this work at layer 3.


Re: [squid-users] centos 4.4, wccpv2, cisco 3550 switch

2007-02-08 Thread Jon Christensen

On 2/8/07, Adrian Chadd <[EMAIL PROTECTED]> wrote:

On Thu, Feb 08, 2007, Jon Christensen wrote:

> 16:50:42: WCCP-EVNT:S00: Redirect_Assignment packet from
> 192.168.255.248 fails source check

Yeah, ignore this to start with.

> 16:50:47: WCCP-EVNT:wccp_update_assignment_status: enter
> 16:50:47: WCCP-EVNT:wccp_update_assignment_status: exit
> 16:50:47: WCCP-EVNT:S00: Here_I_Am packet from 192.168.255.248 w/bad
> fwd method 0002, was offered 0002

Ok. This is a bit unclear. What it means is:

* I was offered 002 (L2 redirect)
* The only supported method is 002 (L2 redirect)
* but, and I'm guessing here, 192.168.255.248 isn't directly connected;
  so it can't do an L2 redirect.

Does the 3550 have an IP address in the 192.168.255.x network? Or is it
reaching 192.168.255.x via some firewall device?

> 16:50:47: WCCP-EVNT:S00: Here_I_Am packet from 192.168.255.248 with
> incompatible capabilites
> 16:50:47: WCCP-PKT:S00: Sending I_See_You packet to 192.168.255.248 w/
> rcv_id 001D
> 16:50:57: WCCP-EVNT:wccp_update_assignment_status: enter
> 16:50:57: WCCP-EVNT:wccp_update_assignment_status: exit
> 16:50:57: WCCP-EVNT:S00: Here_I_Am packet from 192.168.255.248 w/bad
> fwd method 0002, was offered 0002
> 16:50:57: WCCP-EVNT:S00: Here_I_Am packet from 192.168.255.248 with
> incompatible capabilites
> 16:50:57: WCCP-PKT:S00: Sending I_See_You packet to 192.168.255.248 w/
> rcv_id 001E

--
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -



This is firewalled, I had to add that static if you recall. :)


SQUID
 |
 |
Firewall
 |
 |
3550
 |
 |
CLIENT


Re: [squid-users] centos 4.4, wccpv2, cisco 3550 switch

2007-02-08 Thread Jon Christensen

On 2/8/07, Adrian Chadd <[EMAIL PROTECTED]> wrote:

On Thu, Feb 08, 2007, Jon Christensen wrote:

> OK, I made a big mistake.  I forgot to create a static on our firewall
> so traffic from squid could get to the router.  I have all of IP open
> by the way.   I am a bit closer:

Is the Squid cache directly connected to the 3550? The 3550 only supports
L2 redirection which implies the Cache(s) are directly connected via an L2
link (ie, ethernet.) It won't work if the caches are on the other side of
an IP firewall.

> SRA#sh ip wccp
> Global WCCP information:
>Router information:
>Router Identifier:   192.168.254.254
>Protocol Version:2.0
>
>Service Identifier: web-cache
>Number of Cache Engines: 0

This is your first problem. Note that there are no cache engines.
The rest of the counters won't matter - the 3550 has a known issue
updating (or, more clearly, -not- updating) the WCCPv2 packet
statistics.

Do the debugging stuff I suggested before to see why the router isn't
allowing the cache to associate.



Adrian

>Number of routers:   0
>Total Packets Redirected:0
>Redirect access-list:-none-
>Total Packets Denied Redirect:   0
>Total Packets Unassigned:0
>Group access-list:   -none-
>Total Messages Denied to Group:  0
>Total Authentication failures:   0
>Total Bypassed Packets Received: 0
>
>
> Still not redirecting packets correctly though.

--
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level bandwidth-capped VPSes available in WA -



Thanks, here is some debug info:

16:50:42: WCCP-EVNT:S00: Redirect_Assignment packet from
192.168.255.248 fails source check
16:50:47: WCCP-EVNT:wccp_update_assignment_status: enter
16:50:47: WCCP-EVNT:wccp_update_assignment_status: exit
16:50:47: WCCP-EVNT:S00: Here_I_Am packet from 192.168.255.248 w/bad
fwd method 0002, was offered 0002
16:50:47: WCCP-EVNT:S00: Here_I_Am packet from 192.168.255.248 with
incompatible capabilites
16:50:47: WCCP-PKT:S00: Sending I_See_You packet to 192.168.255.248 w/
rcv_id 001D
16:50:57: WCCP-EVNT:wccp_update_assignment_status: enter
16:50:57: WCCP-EVNT:wccp_update_assignment_status: exit
16:50:57: WCCP-EVNT:S00: Here_I_Am packet from 192.168.255.248 w/bad
fwd method 0002, was offered 0002
16:50:57: WCCP-EVNT:S00: Here_I_Am packet from 192.168.255.248 with
incompatible capabilites
16:50:57: WCCP-PKT:S00: Sending I_See_You packet to 192.168.255.248 w/
rcv_id 001E


Re: [squid-users] centos 4.4, wccpv2, cisco 3550 switch

2007-02-08 Thread Jon Christensen

On 2/8/07, Jon Christensen <[EMAIL PROTECTED]> wrote:

On 2/8/07, Adrian Chadd <[EMAIL PROTECTED]> wrote:
> On Thu, Feb 08, 2007, Jon Christensen wrote:
> > Hello,
> >
> > I am having trouble getting my cache registered.  Here are the configs:
> >
> > centos:
> > [EMAIL PROTECTED] squid]# lsmod  | grep gre
> > ip_gre 17121  0
>
> You don't need that. the 3550 does L2 redirect, not GRE.
>
> > squid:
> > wccp2_router 192.168.0.254
> > wccp_version 4
> > wccp2_forwarding_method 1
> > wccp2_return_method 1
> > wccp2_service standard 0
>
> '1' means GRE, which isn't what you want. You want '2'.
> try "debug ip wccp packets" and "debug ip wccp events" on the 3550 - it'll
> complain that the redirection method isn't valid and reject the association.
>
>
>
> Adrian
>
> >
> > 3550:
> > ip wccp web-cache
> > interface vlan3
> >  ip wccp web-cache redirect in
> >
> >
> > SRA#sh ip wccp
> > Global WCCP information:
> >Router information:
> >Router Identifier:   -not yet determined-
> >Protocol Version:2.0
> >
> >Service Identifier: web-cache
> >Number of Cache Engines: 0
> >Number of routers:   0
> >Total Packets Redirected:0
> >Redirect access-list:-none-
> >Total Packets Denied Redirect:   0
> >Total Packets Unassigned:0
> >Group access-list:   -none-
> >Total Messages Denied to Group:  0
> >Total Authentication failures:   0
> >Total Bypassed Packets Received: 0
> >
> >
> > 1) Does the "ip wccp web-cache redirect in" go on the vlan interface
> > of the client or the vlan interface that leads to squid?
> >
> > 2) What am I missing?
> >
> > Thanks!
>

Thanks for the reply.  I modified the wccp2_forwarding_method to 2.
I don't see any debug messages on the 3550.  Here is a bit more info
from squid:

2007/02/08 17:04:58| Accepting proxy HTTP connections at 0.0.0.0, port
3128, FD 12.
2007/02/08 17:04:58| Accepting ICP messages at 0.0.0.0, port 3130, FD 13.
2007/02/08 17:04:58| WCCP Disabled.
2007/02/08 17:04:58| Accepting WCCPv2 messages on port 2048, FD 14.



OK, I made a big mistake.  I forgot to create a static on our firewall
so traffic from squid could get to the router.  I have all of IP open
by the way.   I am a bit closer:

SRA#sh ip wccp
Global WCCP information:
   Router information:
   Router Identifier:   192.168.254.254
   Protocol Version:2.0

   Service Identifier: web-cache
   Number of Cache Engines: 0
   Number of routers:   0
   Total Packets Redirected:0
   Redirect access-list:-none-
   Total Packets Denied Redirect:   0
   Total Packets Unassigned:0
   Group access-list:   -none-
   Total Messages Denied to Group:  0
   Total Authentication failures:   0
   Total Bypassed Packets Received: 0


Still not redirecting packets correctly though.


Re: [squid-users] centos 4.4, wccpv2, cisco 3550 switch

2007-02-08 Thread Jon Christensen

On 2/8/07, Adrian Chadd <[EMAIL PROTECTED]> wrote:

On Thu, Feb 08, 2007, Jon Christensen wrote:
> Hello,
>
> I am having trouble getting my cache registered.  Here are the configs:
>
> centos:
> [EMAIL PROTECTED] squid]# lsmod  | grep gre
> ip_gre 17121  0

You don't need that. the 3550 does L2 redirect, not GRE.

> squid:
> wccp2_router 192.168.0.254
> wccp_version 4
> wccp2_forwarding_method 1
> wccp2_return_method 1
> wccp2_service standard 0

'1' means GRE, which isn't what you want. You want '2'.
try "debug ip wccp packets" and "debug ip wccp events" on the 3550 - it'll
complain that the redirection method isn't valid and reject the association.



Adrian

>
> 3550:
> ip wccp web-cache
> interface vlan3
>  ip wccp web-cache redirect in
>
>
> SRA#sh ip wccp
> Global WCCP information:
>Router information:
>Router Identifier:   -not yet determined-
>Protocol Version:2.0
>
>Service Identifier: web-cache
>Number of Cache Engines: 0
>Number of routers:   0
>Total Packets Redirected:0
>Redirect access-list:-none-
>Total Packets Denied Redirect:   0
>Total Packets Unassigned:0
>Group access-list:   -none-
>Total Messages Denied to Group:  0
>Total Authentication failures:   0
>Total Bypassed Packets Received: 0
>
>
> 1) Does the "ip wccp web-cache redirect in" go on the vlan interface
> of the client or the vlan interface that leads to squid?
>
> 2) What am I missing?
>
> Thanks!



Thanks for the reply.  I modified the wccp2_forwarding_method to 2.
I don't see any debug messages on the 3550.  Here is a bit more info
from squid:

2007/02/08 17:04:58| Accepting proxy HTTP connections at 0.0.0.0, port
3128, FD 12.
2007/02/08 17:04:58| Accepting ICP messages at 0.0.0.0, port 3130, FD 13.
2007/02/08 17:04:58| WCCP Disabled.
2007/02/08 17:04:58| Accepting WCCPv2 messages on port 2048, FD 14.


[squid-users] centos 4.4, wccpv2, cisco 3550 switch

2007-02-08 Thread Jon Christensen

Hello,

I am having trouble getting my cache registered.  Here are the configs:

centos:
[EMAIL PROTECTED] squid]# lsmod  | grep gre
ip_gre 17121  0

[EMAIL PROTECTED] squid]# rpm -q kernel
kernel-2.6.9-42.0.8.EL

squid:
wccp2_router 192.168.0.254
wccp_version 4
wccp2_forwarding_method 1
wccp2_return_method 1
wccp2_service standard 0

3550:
ip wccp web-cache
interface vlan3
 ip wccp web-cache redirect in


SRA#sh ip wccp
Global WCCP information:
   Router information:
   Router Identifier:   -not yet determined-
   Protocol Version:2.0

   Service Identifier: web-cache
   Number of Cache Engines: 0
   Number of routers:   0
   Total Packets Redirected:0
   Redirect access-list:-none-
   Total Packets Denied Redirect:   0
   Total Packets Unassigned:0
   Group access-list:   -none-
   Total Messages Denied to Group:  0
   Total Authentication failures:   0
   Total Bypassed Packets Received: 0


1) Does the "ip wccp web-cache redirect in" go on the vlan interface
of the client or the vlan interface that leads to squid?

2) What am I missing?

Thanks!


[squid-users] Reverse Proxy of token protected content.

2006-11-13 Thread Jon Scott Stevens

Hello Squid Users!

I've been poring over the documentation and I haven't quite seen a  
setup like what I'm trying to do so I'm asking here in the hopes I  
can get some help. =)


In the end, I would like to have a setup like this:

Internet -> [Squid -> Apache (mod_python)] -> images/videos mounted  
via nfs/smbfs.


Squid is on port 80
Apache is on port 8080
Both are on the same machine.

So, a request comes in like this:

http://server.com/foobar/images/foo.jpg?t=encryptedstring

The mod_python script intercepts the request in its accesshandler()  
method and either returns OK or HTTP_UNAUTHORIZED based on the data  
in the encrypted string. Apache then either serves the content or  
returns a 403 header.


Now, what I want to do is put Squid in front of Apache so that I can  
cache the image/video content. This would mean that for each request  
that comes in, Squid would send a HEAD request to Apache with the  
full URI and then either download the content from Apache if it isn't  
in the cache, serve the content out of the cache, or not serve it...  
all based on what the mod_python script returns.


I've discovered the external_acl_type config argument and I would be  
willing to switch from mod_python to that, but it seems to be 3.0  
only and I need 2.5.x since that is what is available in Ubuntu.  
Also, most importantly, it doesn't seem to have an option to pass the  
query string data to the external app.


Anyone else have similar experiences with this type of caching?

thanks!

jon
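For later readers, the external-ACL approach the poster mentions works over a simple line protocol: Squid writes one formatted line per lookup to the helper's stdin and expects `OK` or `ERR` back. A minimal sketch, with a hypothetical `verify_token` standing in for the real decrypted-token check:

```python
#!/usr/bin/env python
import sys

def verify_token(line):
    # Hypothetical stand-in for validating the ?t=encryptedstring
    # parameter; a real helper would decrypt and verify the token.
    return "t=" in line

def main():
    # Squid sends one lookup per line and expects OK or ERR in reply;
    # the reply must be flushed immediately or Squid will stall.
    for line in sys.stdin:
        sys.stdout.write("OK\n" if verify_token(line.strip()) else "ERR\n")
        sys.stdout.flush()

if __name__ == "__main__":
    main()
```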



RE: [squid-users] httpd_accel in Squid 2.6.STABLE1 problem

2006-07-10 Thread Jon
I just want to thank you again for your help, everything is working great.

Jon

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Thursday, July 06, 2006 3:39 PM
To: Jon
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] httpd_accel in Squid 2.6.STABLE1 problem

tor 2006-07-06 klockan 15:09 -0400 skrev Jon:
> Is there another way since I have multiple backend servers?

The intended method is one cache_peer per backend, and
cache_peer_access/domain to select which requests gets sent where.

Regards
Henrik
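Henrik's suggested layout might be sketched like this for Squid 2.6 (hostnames, peer names, and domains are illustrative):

```
http_port 80 vhost
# One cache_peer per backend server...
cache_peer backend1.mydomain.com parent 80 0 no-query originserver name=srv1
cache_peer backend2.mydomain.com parent 80 0 no-query originserver name=srv2
# ...and cache_peer_access to select which requests go where.
acl site1 dstdomain www.site-one.com
acl site2 dstdomain www.site-two.com
http_access allow site1
http_access allow site2
cache_peer_access srv1 allow site1
cache_peer_access srv1 deny all
cache_peer_access srv2 allow site2
cache_peer_access srv2 deny all
```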



RE: [squid-users] httpd_accel in Squid 2.6.STABLE1 problem

2006-07-06 Thread Jon
Is there another way since I have multiple backend servers?

Thanks,

Jon

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Thursday, July 06, 2006 2:24 PM
To: Jon
Subject: RE: [squid-users] httpd_accel in Squid 2.6.STABLE1 problem

tor 2006-07-06 klockan 12:26 -0400 skrev Jon:
> Thanks for the reply and I tried
> 
> cache_peer virtual parent 80 0 no-query originserver
> 
> but it gave me an error
> 
>   The following error was encountered:
> 
>   Unable to determine IP address from host name for virtual

Change virtual to whatever your backend server is; it needs to be either
the IP or a valid host name.

Regards
Henrik



RE: [squid-users] httpd_accel in Squid 2.6.STABLE1 problem

2006-07-05 Thread Jon
Hi,

Thanks Henrik for the pointer, I have it working but I'm unsure if it's 
configured correctly.

First I added cache_peer virtual parent 80 3130 originserver and http_port 80 
vhost to the conf file.

But I get this error:

The following error was encountered:

* Unable to forward this request at this time. 

This request could not be forwarded to the origin server or to any parent 
caches. The most likely cause for this error is that:

* The cache administrator does not allow this cache to make direct 
connections to origin servers, and
* All configured parent caches are currently unreachable.

If I add:

acl local-servers dstdomain .mydomain.com
always_direct allow local-servers

It works fine.

Can someone verify if it's the correct way to configure the reverse proxy in 
version 2.6?

Thank you,

Jon


-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Monday, July 03, 2006 5:25 PM
To: Jon
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] httpd_accel in Squid 2.6.STABLE1 problem

mån 2006-07-03 klockan 16:44 -0400 skrev Jon:

> Then I check the FAQ and followed this link 
> http://wiki.squid-cache.org/SquidFaq/ReverseProxy to read more about 
> how to set it up.  It says to use the options httpd_accel_host and 
> httpd_accel_port but they're not valid options when I start Squid, it gives 
> me an error.
> 
> 2006/07/03 16:20:13| parseConfigFile: line 2967 unrecognized:
> 'httpd_accel_host virtual'
> 2006/07/03 16:20:13| parseConfigFile: line 2968 unrecognized:
> 'httpd_accel_port 80'
> 
> I didn't have any problems configuring it in 2.5.STABLE14 release.
> Further searching the squid.conf for the term "httpd_accel" returns 
> nothing except the above option.

Things have changed a bit. See the Squid-2.6 release notes. (yes, there are
release notes...)

Regards
Henrik



[squid-users] httpd_accel in Squid 2.6.STABLE1 problem

2006-07-03 Thread Jon
Hi,

I'm currently testing Squid 2.6.STABLE1.  In the past I have always used
Squid as a reverse proxy to our http servers on the internal network, today
as I'm setting up Squid I notice it's missing something in the Squid.conf
file.  I just went through and configured everything as I had on the old
Squid box but there is only one option under the httpd-accelerator section.

# HTTPD-ACCELERATOR OPTIONS
#

-

#  TAG: httpd_accel_no_pmtu_disc on|off
#   In many setups of transparently intercepting proxies Path-MTU
#   discovery can not work on traffic towards the clients. This is
#   the case when the intercepting device does not fully track
#   connections and fails to forward ICMP must fragment messages
#   to the cache server.
#
#   If you have such setup and experience that certain clients
#   sporadically hang or never complete requests set this to on.
#
#Default:
# httpd_accel_no_pmtu_disc off

Then I check the FAQ and followed this link
http://wiki.squid-cache.org/SquidFaq/ReverseProxy to read more about how to
set it up.  It says to use the options httpd_accel_host and httpd_accel_port
but they're not valid options when I start Squid, it gives me an error.

2006/07/03 16:20:13| parseConfigFile: line 2967 unrecognized:
'httpd_accel_host virtual'
2006/07/03 16:20:13| parseConfigFile: line 2968 unrecognized:
'httpd_accel_port 80'

I didn't have any problems configuring it in 2.5.STABLE14 release.
Further searching the squid.conf for the term "httpd_accel" returns nothing
except the above option.

Am I doing something wrong?  I checked if I needed to compile it with options
to enable reverse proxy but I don't.

Thank you,

Jon



Re: [squid-users] Allowing/Unblocking Skype with Squid

2006-06-07 Thread Jon Joyce

Hi Emilio,

Many thanks for your reply.

When you say careful regards to security, do you mean that anyone who  
knows the IP of a host will get through our content filter? We have  
mainly set our squid up like this to stop people using Proxy  
Tunneling software.


Jon

On 6 Jun 2006, at 09:27, Emilio Casbas wrote:


Jon Joyce wrote:

Hi all,

We currently have a Squid box set up to only allow secure https  
traffic through a manually updated whitelist. So now, all clients  
must provide the name and 443 port of our Proxy server before they  
can access secure sites (i.e. Internet Banking, Hotmail etc.)


We now have the problem that Skype wants to use the outgoing  
secure 443 port which is not allowed through our Proxy...


Is there any way around this?


Skype will attempt to tunnel the traffic over port 443 using the
SSL protocol, as you said.
In order to permit access to Skype through Squid, you would have to
know the "random" destination IPs that Skype uses with the CONNECT method.

One possibility is to permit numeric IPs with the
CONNECT method, but be careful with regard to security.


acl N_IPS urlpath_regex ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+
acl connect method CONNECT

http_access allow connect N_IPS all

Thanks
Emilio C.



Anyone's help is much appreciated

Jon
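Emilio's numeric-IP pattern can be sanity-checked outside Squid; the check below is illustrative only (Squid itself evaluates the pattern via the acl shown in his reply):

```python
import re

# The same numeric-IP pattern suggested above, tried against sample
# CONNECT targets (host:port form).
N_IPS = re.compile(r"^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+")

def is_numeric_target(target):
    return N_IPS.match(target) is not None

print(is_numeric_target("204.9.163.1:443"))      # True: bare IP
print(is_numeric_target("www.bank.example:443")) # False: hostname
```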










[squid-users] Allowing/Unblocking Skype with Squid

2006-06-06 Thread Jon Joyce

Hi all,

We currently have a Squid box set up to only allow secure https  
traffic through a manually updated whitelist. So now, all clients  
must provide the name and 443 port of our Proxy server before they  
can access secure sites (i.e. Internet Banking, Hotmail etc.)


We now have the problem that Skype wants to use the outgoing secure  
443 port which is not allowed through our Proxy...


Is there any way around this?

Anyone's help is much appreciated

Jon



[squid-users] Unsubscribing

2006-02-15 Thread Jon Banks
OK.  I'm trying to unsubscribe my old email address.  I sent an email to 
squid-users-unsubscribe.  I got the following email back:

Hi! This is the ezmlm program. I'm managing the
squid-users@squid-cache.org mailing list.

I'm working for my owner, who can be reached
at [EMAIL PROTECTED]

To confirm that you would like

   [EMAIL PROTECTED] 

removed from the squid-users mailing list, please send an empty reply 
to this address:

   [EMAIL PROTECTED] 

Usually, this happens when you just hit the "reply" button.
If this does not work, simply copy the address and paste it into
the "To:" field of a new message.

I haven't checked whether your address is currently on the mailing list.
To see what address you used to subscribe, look at the messages you are
receiving from the mailing list. Each message has your address hidden
inside its return path; for example, [EMAIL PROTECTED] receives messages
with return path: [EMAIL PROTECTED]

I have replied back to this email several times, and each time and each method 
I use returns the same error message:

The attached file had the following undeliverable recipient(s):
[EMAIL PROTECTED] 

Information about your message:

Transcript of session follows:
DATA
551 No valid recipients
221 2.0.0 squid-cache.org closing connection

The first email message also has the information included:

If you need to get in touch with the human owner of this list,
please send a message to:

<[EMAIL PROTECTED]>

Please include a FORWARDED list message with ALL HEADERS intact
to make it easier to help you.

Any emails I send to [EMAIL PROTECTED] also comes back undeliverable with an 
invalid recipients error message.

Will somebody who is in charge of the squid-users mailing list please fix the 
system so it works?  I need to unsubscribe an old email address and subscribe a 
new one.

Jon






Re: [squid-users] Mailing List Problems

2006-02-15 Thread Jon Banks
>What did the "undeliverable" error message say?

1.  When I emailed [EMAIL PROTECTED], I got:
The attached file had the following undeliverable recipient(s):
[EMAIL PROTECTED] 

Information about your message:

Transcript of session follows:
DATA
551 No valid recipients
221 2.0.0 squid-cache.org closing connection

2.  When I emailed squid-users@squid-cache.org, I got:
Hi! This is the ezmlm program. I'm managing the
squid-users@squid-cache.org mailing list.

I'm working for my owner, who can be reached
at [EMAIL PROTECTED]

This is a generic help message. The message I received wasn't sent to
any of my command addresses.

3.  When I emailed [EMAIL PROTECTED], I got:
The attached file had the following undeliverable recipient(s):
[EMAIL PROTECTED] 

Information about your message:

Transcript of session follows:
DATA
551 No valid recipients
221 2.0.0 squid-cache.org closing connection

4.  When I emailed [EMAIL PROTECTED], I got:
The attached file had the following undeliverable recipient(s):
[EMAIL PROTECTED] 

Information about your message:

Transcript of session follows:
DATA
551 No valid recipients
221 2.0.0 squid-cache.org closing connection

5.  When I emailed [EMAIL PROTECTED], I got:
The attached file had the following undeliverable recipient(s):
[EMAIL PROTECTED] 

Information about your message:

Transcript of session follows:
DATA
551 No valid recipients
221 2.0.0 squid-cache.org closing connection




Re: [squid-users] Squidalyser Problem

2006-02-13 Thread Jon Banks
Never mind...I think I found it...it's called Time::ParseDate on the CPAN site.

>>> Marcin Mazurek <[EMAIL PROTECTED]> 2/13/2006 4:06 PM >>>
Jon Banks ([EMAIL PROTECTED]) napisał(a):

> I installed Squidalyser from scratch using their instructions.  Any
> suggestions?  Thanks

Error says: "Can't locate Time/ParseDate.pm in @INC", just install that
perl module.

hth


-- 
http://www.actus.org.pl/  -  -  nic-hdl: MM3380-RIPE
GnuPG 6687 E661 98B0 AEE6 DA8B  7F48 AEE4 776F 5688 DC89
http://www.poznan.linux.org.pl/ : http://www.netsync.pl/ 




Re: [squid-users] Squidalyser Problem

2006-02-13 Thread Jon Banks
That does help.  Can you tell me the exact name of that module on the CPAN 
site?  I loaded all the perl modules listed in the Squidalyser instructions 
(DBI, CGI, GD, GD::Graph, GD::Text, and URI::Escape).  I'm not sure what this 
module is called, and it's not listed in the instructions.  Thanks.

>>> Marcin Mazurek <[EMAIL PROTECTED]> 2/13/2006 4:06 PM >>>
Jon Banks ([EMAIL PROTECTED]) napisał(a):

> I installed Squidalyser from scratch using their instructions.  Any
> suggestions?  Thanks

Error says: "Can't locate Time/ParseDate.pm in @INC", just install that
perl module.

hth


-- 
http://www.actus.org.pl/  -  -  nic-hdl: MM3380-RIPE
GnuPG 6687 E661 98B0 AEE6 DA8B  7F48 AEE4 776F 5688 DC89
http://www.poznan.linux.org.pl/ : http://www.netsync.pl/ 




[squid-users] Squidalyser Problem

2006-02-13 Thread Jon Banks
Is anyone out there familiar with Squidalyser enough to tell me why I'm getting 
the following error when I run the perl script that copies the access.log file 
info into the database?  Here is my error:

[EMAIL PROTECTED] ~]# /usr/local/squidalyser/squidparse.pl
Can't locate Time/ParseDate.pm in @INC (@INC contains: 
/usr/lib/perl5/site_perl/5.8.6/i386-linux-thread-multi 
/usr/lib/perl5/site_perl/5.8.5/i386-linux-thread-multi 
/usr/lib/perl5/site_perl/5.8.4/i386-linux-thread-multi 
/usr/lib/perl5/site_perl/5.8.3/i386-linux-thread-multi 
/usr/lib/perl5/site_perl/5.8.6 /usr/lib/perl5/site_perl/5.8.5 
/usr/lib/perl5/site_perl/5.8.4 /usr/lib/perl5/site_perl/5.8.3 
/usr/lib/perl5/site_perl 
/usr/lib/perl5/vendor_perl/5.8.6/i386-linux-thread-multi 
/usr/lib/perl5/vendor_perl/5.8.5/i386-linux-thread-multi 
/usr/lib/perl5/vendor_perl/5.8.4/i386-linux-thread-multi 
/usr/lib/perl5/vendor_perl/5.8.3/i386-linux-thread-multi 
/usr/lib/perl5/vendor_perl/5.8.6 /usr/lib/perl5/vendor_perl/5.8.5 
/usr/lib/perl5/vendor_perl/5.8.4 /usr/lib/perl5/vendor_perl/5.8.3 
/usr/lib/perl5/vendor_perl /usr/lib/perl5/5.8.6/i386-linux-thread-multi 
/usr/lib/perl5/5.8.6 .) at /usr/local/squidalyser/squidparse.pl line 6.
BEGIN failed--compilation aborted at /usr/local/squidalyser/squidparse.pl line 
6.

Here is what is on line 6 of the squidalyser.conf file:

expire 30_d

I installed Squidalyser from scratch using their instructions.  Any 
suggestions?  Thanks

Jon
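For later readers: the error means Perl searched each directory in @INC for the file Time/ParseDate.pm and found none; note that the "line 6" in the message refers to squidparse.pl (its `use` line), not squidalyser.conf. The lookup Perl performs can be sketched in Python, for illustration:

```python
import os

# How Perl resolves "use Time::ParseDate": the module name maps to a
# relative path (Time/ParseDate.pm) searched in every @INC directory.
def find_module(module, inc_dirs):
    rel = os.path.join(*module.split("::")) + ".pm"
    for d in inc_dirs:
        candidate = os.path.join(d, rel)
        if os.path.isfile(candidate):
            return candidate
    return None  # -> "Can't locate ... in @INC"

# On a box without the module installed, every directory misses.
print(find_module("Time::ParseDate", ["/usr/lib/perl5/site_perl"]))
```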



[squid-users] Mailing List Problems

2006-02-13 Thread Jon Banks
My organization's email domain name changed.  As such, I can get postings sent 
to my old email address (because I can still receive on this address to now), 
but I cannot post from my new email address.  I had to have my email account 
changed back to the old email address just so I could post this message.

1.  I have tried to unsubscribe my old email address using [EMAIL PROTECTED]  
It came back undeliverable.
2.   I have tried to update a new address to [EMAIL PROTECTED]  It came back 
undeliverable.
3.  I emailed [EMAIL PROTECTED] to ask for help.  It came back undeliverable.
4.  I emailed [EMAIL PROTECTED] to ask for help.  It came back undeliverable.
5.  I emailed [EMAIL PROTECTED]  It came back undeliverable.

Does ANYONE know how to get someone to remove my old email address and 
subscribe my new email address?  I don't want both active on this forum at the 
same time because I don't want two copies of everything posted.  I've got 
another issue I need to post, so the sooner I can get this resolved, the better.

Jon Banks



RE: [squid-users] Compressed file gets uncompressed

2006-02-03 Thread Jon
The older Squids are running 2.5-STABLE10 and the newer ones 2.5-STABLE12.
The back-end servers are IIS6 using gzip and deflate.  I would think that
Squid would store and forward the compressed content but nope.  Maybe it
doesn't apply to accelerated content?

I'll check the logs and see if I can get more details.

Thanks,

Jon

-Original Message-
From: Chris Robertson [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, February 01, 2006 5:13 PM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] Compressed file gets uncompressed

> -Original Message-----
> From: Jon [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, February 01, 2006 9:41 AM
> To: squid-users@squid-cache.org
> Subject: [squid-users] Compressed file gets uncompressed
> 
> 
> Hi list,
> 
> I have squids running in HTTP accelerator mode where my servers
> sitting behind them serve compressed files.  When I call those files
> through squid, they get uncompressed by squid.  I was checking the
> document status using pipeboost.com's URL compression report page, it
> comes out to be uncompressed when I go through squid.
> 
> Is there a way to keep them compressed when it passes through squid?
> 
> Thanks,
> 
> Jon
> 

To the best of my knowledge, recent versions of Squid allow pass-through and
caching of compressed content.  I just tested Squid (2.5STABLE7) as a proxy
(not an accelerator) and validated the former.

What version of Squid are you using?  What are you using as the back-end
(apache, IIS, etc.)?  What is the compression method (mod_gzip, pipeboost,
etc.)?

Chris
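[Editor's note: one quick way to answer these questions is to compare the Content-Encoding response header of the same object fetched directly from the back-end and through Squid. The host names below are placeholders, not from the thread.]

```shell
# Fetch the object directly from the back-end server:
curl -sI -H 'Accept-Encoding: gzip' http://backend.example.com/file.js | grep -i '^content-encoding'

# Fetch the same object through the Squid accelerator:
curl -sI -H 'Accept-Encoding: gzip' http://squid.example.com/file.js | grep -i '^content-encoding'

# If the first command prints "Content-Encoding: gzip" and the second prints
# nothing, the proxy is serving an uncompressed copy.
```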



RE: [squid-users] Compressed file gets uncompressed

2006-02-01 Thread Jon
Hi,

Can you elaborate on that?

Thanks,

Jon


From: Steve Stephens [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, February 01, 2006 1:46 PM
To: Jon
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Compressed file gets uncompressed

It depends on the digestion protocol.
On 01/02/06, Jon <[EMAIL PROTECTED]> wrote:
Hi list,

I have squids running in HTTP accelerator mode where my servers
sitting behind them serve compressed files.  When I call those files
through squid, they get uncompressed by squid.  I was checking the 
document status using pipeboost.com's URL compression report page, it
comes out to be uncompressed when I go through squid.

Is there a way to keep them compressed when it passes through squid? 

Thanks,

Jon




[squid-users] Compressed file gets uncompressed

2006-02-01 Thread Jon
Hi list,

I have squids running in HTTP accelerator mode where my servers
sitting behind them serve compressed files.  When I call those files
through squid, they get uncompressed by squid.  I was checking the
document status using pipeboost.com's URL compression report page, it
comes out to be uncompressed when I go through squid.

Is there a way to keep them compressed when it passes through squid?

Thanks,

Jon


Re: [squid-users] Squid Proxy will not resolv or proxy local intranet server

2006-01-20 Thread Jon Banks
squid-2.5.STABLE11-3.FC4

>>> Mark Elsen <[EMAIL PROTECTED]> 1/20/2006 10:11 AM >>>
> What happens when you type the IP address in your browser instead of the URL 
> of your Intranet?  Does your Intranet open?  From the Squid box, what happens 
> when you ping the hostname of your Intranet?  Does the Squid box resolve it to 
> an IP address?
>
> I have a similar problem with my Squid box.  All Internet access works fine, 
> but Squid won't resolve my Intranet URLs to IP addresses.  The default hosts 
> file location tag is set correctly in squid.conf to /etc/hosts and the hosts 
> file has been configured correctly, yet Squid won't use its hosts file to 
> resolve local addresses.  I had to set up and run DNS on my Squid box just 
> for this address.  Not the solution I wanted, but it works.  Does anyone know 
> how to make Squid resolve local hostnames from its /etc/hosts file?

  - SQUID version ?
  - It should use /etc/hosts automatically, starting from 2.5

  M.




Re: [squid-users] Squid Proxy will not resolv or proxy local intranet server

2006-01-20 Thread Jon Banks
What happens when you type the IP address in your browser instead of the URL of 
your Intranet?  Does your Intranet open?  From the Squid box, what happens when 
you ping the hostname of your Intranet?  Does the Squid box resolve it to an IP 
address?
 
I have a similar problem with my Squid box.  All Internet access works fine, 
but Squid won't resolve my Intranet URLs to IP addresses.  The default hosts 
file location tag is set correctly in squid.conf to /etc/hosts and the hosts 
file has been configured correctly, yet Squid won't use its hosts file to 
resolve local addresses.  I had to set up and run DNS on my Squid box just for 
this address.  Not the solution I wanted, but it works.  Does anyone know how 
to make Squid resolve local hostnames from its /etc/hosts file?
 
Jon
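[Editor's note: a minimal sketch for anyone debugging the same thing. The `hosts_file` directive is from squid.conf; the verification command uses the libc resolver, which is what normally consults /etc/hosts.]

```shell
# 1. Verify the libc resolver on the Squid box can see the /etc/hosts entry.
#    getent consults /etc/hosts (per nsswitch.conf), so if this fails, the
#    problem is below Squid entirely.
getent hosts localhost

# 2. In squid.conf, point Squid at the hosts file explicitly, then reload:
#      hosts_file /etc/hosts
#    and run: squid -k reconfigure
```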

>>> Jim <[EMAIL PROTECTED]> 1/19/2006 4:47 PM >>>

We have a squid box that will not proxy or resolve local webpages on our 
Intranet, but it will proxy external pages; we have DNS set for the 
internal proxy server.

When I try to access a webpage local to our network, the proxy server 
returns the message below.

The following error was encountered:

Unable to determine IP address from host name for local.intranet.org 
<http://local.intranet.org/>

The dnsserver returned:

Name Error: The domain name does not exist.

But on that server I can ping the intranet server by name, and nslookup 
on that machine resolves the IP address.


Jim






Re: [squid-users] Automatic Sign-On for Squid Users

2006-01-19 Thread Jon Banks
OK.  Are you going to tell me how I configure NTLM authentication to auto-login 
to an eDirectory database?

Jon

>>> Mark Elsen <[EMAIL PROTECTED]> 1/18/2006 5:06 PM >>>
> But don't you have to be running a Microsoft network to make this work?

No.

>  Our back end is Novell and eDirectory, and user accounts on the MS PCs 
> sometimes are generic compared to their Novell User IDs.
>
> I'm working on one day having a Samba back end on a Linux platform, though.
>

  It should work with that back-end too.

  M.




Re: [squid-users] Automatic Sign-On for Squid Users

2006-01-18 Thread Jon Banks
But don't you have to be running a Microsoft network to make this work?  Our 
back end is Novell and eDirectory, and user accounts on the MS PCs sometimes 
are generic compared to their Novell User IDs.
 
I'm working on one day having a Samba back end on a Linux platform, though.
 
Jon

>>> Mark Elsen <[EMAIL PROTECTED]> 1/18/2006 3:45 PM >>>

> We are using basic authentication (via LDAP to a Novell eDirectory Database) 
> for Squid which requires users to manually enter their Novell ID and password 
> before they can access the Internet.
>
> Is there a way to automatically log a user into Squid so that they don't have 
> to enter their ID and password manually?  In Novell BorderManager, there was 
> a program called Client Trust that ran as a TSR on the user's PC (it was loaded 
> via the user's login script when they first logged into Novell).  This 
> program sent the user's credentials to BorderManager so that the user didn't 
> have to type anything in when they went to the Internet.
>
> Does anyone know if there is a similar program or functionality for Squid?
>
> Thanks
>
>

The NTLM authentication scheme provides this functionality.

M.
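[Editor's note: a minimal sketch of the setup Mark is describing, assuming Samba's ntlm_auth helper is installed; the helper path and child count are illustrative, not from the thread.]

```
# squid.conf fragment: NTLM challenge/response via Samba's ntlm_auth helper
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 5

acl authenticated proxy_auth REQUIRED
http_access allow authenticated
http_access deny all
```

Browsers that support NTLM will then answer Squid's challenge with the user's desktop credentials, so no manual login prompt appears.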




[squid-users] Automatic Sign-On for Squid Users

2006-01-18 Thread Jon Banks
We are using basic authentication (via LDAP to a Novell eDirectory Database) 
for Squid which requires users to manually enter their Novell ID and password 
before they can access the Internet.

Is there a way to automatically log a user into Squid so that they don't have 
to enter their ID and password manually?  In Novell BorderManager, there was a 
program called Client Trust that ran as a TSR on the user's PC (it was loaded via 
the user's login script when they first logged into Novell).  This program sent 
the user's credentials to BorderManager so that the user didn't have to type 
anything in when they went to the Internet.

Does anyone know if there is a similar program or functionality for Squid?

Thanks

Jon




[squid-users] Slow Downloads from Windows Update

2006-01-15 Thread Jon Banks
I implemented the solution to make Squid work with Windows Update as suggested by 
Brian E. Conklin on Sept. 9, 2005.  I can get to Windows Update, but it takes 
forever (10+ minutes) just for Windows Update to tell me what files I need 
to download.  Then it may take 5+ minutes per file download.  The downloads 
are slow, and they tend to pause for 2-5 minutes between files.

We had the same problem with very slow Windows Update downloads back when we 
used Novell BorderManager, too, so it seems to be an issue with Windows Update 
and proxy servers.  Does anyone have a solution as to how to make these 
downloads faster, or at least a reason that I can tell my boss?  
Thanks.

Jon Banks



[squid-users] squid idea possibility

2005-05-26 Thread Jon Howe
As you can probably tell by me posting this, I have a problem.  I want
to have users authenticate only for port 80 traffic.  For all other
ports I want traffic to pass through as if there were no proxy.
I'm not too good with the ACLs yet.  I know that if there's an answer
it lies (at least partly) with the ACLs.  Is this possible, and if so
does anyone know how to do it?

Thanks a lot
Jon
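[Editor's note: a hedged sketch of the ACL side of this. Squid only ever sees traffic that is actually sent to it, so ports that bypass the proxy need no rule at all; the helper path and password file below are placeholders.]

```
# squid.conf fragment (sketch): require a login only for port-80 requests
auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/passwd
acl authed proxy_auth REQUIRED
acl port80 port 80

http_access allow !port80          # requests to other ports pass unauthenticated
http_access allow port80 authed    # port 80 requires credentials
http_access deny all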


[squid-users] ntlm tutorials

2005-05-25 Thread Jon Howe
Does anyone know of any good ntlm authentication tutorials?


Re: [squid-users] transparent proxy + auth

2005-05-01 Thread Jon Newman
I work as the lead developer for an ISP in Houston, TX.  I am developing a
transparent bridge/filter/firewall for our customers where we map each
customer's IP/MAC/etc. (and other information, depending on the type of
account and what is available to 'map' them) to their account, and use
that as 'authentication' for who they are.  After they are mapped to their
account, we use a user/pass combo stored in an SQL database, through a web
interface, so that customers can select what kind of filtering they desire.

The customer's mapping is re-evaluated every 30 seconds or so
(through a background accounting daemon) to make sure that the correct
settings/firewall/etc. are in place for the IP(s) the account is
currently using (we update periodically because we have many customers
on dynamic DSL whom we map using their vp/vc pair info, and to
generally ensure people are configured correctly).  It is still in the
final phases of development, but it all appears to be going well thus far
(after a few hiccups that had to be cured here and there, of course).

By keeping track of this information we can also see, through our in-house
web-based management software, whether any customers are misconfigured or
connected to the network, which is another nice benefit of this method.
One caveat to consider: this works on a per-IP basis, so if you have
several customers connecting behind a NAT box or something similar, you
are out of luck as far as controlling each person independently.

Just thought I'd offer a perspective on what one company is doing to get
around these issues.

-Jon

-- 
Jon Newman ([EMAIL PROTECTED])
Technical Solutions Manager / Senior software Engineer
The Optimal Link (http://www.oplink.net)

>
>  This solution only works when there is a one-to-one
> mapping between users and ip addresses but imagine
> circumstances where all users have same ip addresses(
> e.g. terminal server users).
>
>  The definite solution to this problem is
> "cookie-based authentication" which is implemented by
> some commercial products like bluecoat ProxySG
> (http://www.bluecoat.com/downloads/support/BCS_tb_enabling_transparent_auth.pdf)
> and Novell BoarderManager
> (http://support.novell.com/techcenter/articles/cfa03332.html)
>
>
> --- Henrik Nordstrom <[EMAIL PROTECTED]> wrote:
>> On Sat, 30 Apr 2005, Varun wrote:
>>
>> >   Is it possible to have any sort of
>> > authentication with squid running as
>> > transparent proxy.
>>
>> Yes, but not the HTTP authentication.
>>
>> To make authenitcation in a transparent proxy you
>> need to figure out some
>> way of authenticating the user based on his IP. The
>> external_acl interface
>> of Squid-2.5 or later allows you to plug this into
>> Squid.
>>
>> Regards
>> Henrik
>>
>
> __
> Do You Yahoo!?
> Tired of spam?  Yahoo! Mail has the best spam protection around
> http://mail.yahoo.com
>





RE: [squid-users] Httpd Accelerator

2005-04-28 Thread Jon
Excellent, I will give that a try.

Thanks,

Jon

-Original Message-
From: kavos gabor [mailto:[EMAIL PROTECTED] 
Sent: Thursday, April 28, 2005 3:22 PM
To: Jon
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] Httpd Accelerator

Hello again,

cache_mem should be a minimum because Squid automatically manages memory.  I
recommended 64MB because it solved my problem.  My box also has 4 GB of memory
and twin Xeon 2.66 GHz CPUs, but I must tell you that Squid doesn't use
multiple CPUs, as it runs as only a single process.  However, the I/O is
excellent with SMP boxes.  Also, try reducing the object size in memory to a
lower value, e.g. 50KB, so Squid doesn't panic when big files are requested
from the cache.  Hope this helps you ...

regards,

KG
-- 
___
Graffiti.net free e-mail @ www.graffiti.net
Check out our value-added Premium features, such as a 1 GB mailbox for just
US$9.95 per year!


Powered by Outblaze



RE: [squid-users] Httpd Accelerator

2005-04-28 Thread Jon
Thanks, I'll give that a try.

Jon

-Original Message-
From: kavos gabor [mailto:[EMAIL PROTECTED] 
Sent: Thursday, April 28, 2005 3:04 PM
To: Jon; Squid Users
Subject: RE: [squid-users] Httpd Accelerator

Hi Jon,

I was facing the same problem when the user load increased on my cache box.
I simply reduced the cache_mem to 64MB and it really solved the problem.  That
helped me; hope it solves your problem as well.

regards,

KG



RE: [squid-users] Httpd Accelerator

2005-04-28 Thread Jon
Thanks for the great advice.  My Squid process is using ~1747 MB of RAM.
CPU and disk are pretty fast, Xeon 3.06 GHz with Ultra320 SCSI drives.  CPU
load is pretty high during peak, ~60%.

I recently purchased a new server to try out putting each cache directory on
its own disk.  It has 4 SCSI drives, Opteron 250s with 4GB of RAM.  Thanks
for the advice, I learned a lot.

-Original Message-
From: Matus UHLAR - fantomas [mailto:[EMAIL PROTECTED] 
Sent: Thursday, April 28, 2005 6:53 AM
To: Squid Users
Subject: Re: [squid-users] Httpd Accelerator

Hello,

please set up quoting in your MTA properly...

> On 26.04 21:01, Jon wrote:
> > I've been using Squid for a couple of months as a server accelerator and
> > it was great. But recently our site traffic has increased.  Now I'm
> > having issues where Squid would exit and restart back up during heavy
> > load. At most it could serve out ~84 Mbps before it crashes. My server
> > has 4 GB of RAM; I tweaked the kernel for message queues, shared memory,
> > increased nmbclusters and file descriptors.  Are there other settings I
> > can tune to increase its performance?  I know my description is a little
> > vague but I'll be happy to submit my settings if anyone is interested. 
> > Maybe it has reached the limit and I need to add another squid?

> From: Matus UHLAR - fantomas [mailto:[EMAIL PROTECTED] 

> What is your cache_mem setting and maximum_object_size_in_memory? what
> memory replacement policy do you use? Do you use disk cache? If so, what
> disk layout do you use, what storage system and what is your
> maximum_object size and disk replacement policy?

On 27.04 15:27, Jon wrote:
> cache_mem 512 MB

How much memory does squid use? If it's under 2 GB, you can increase
cache_mem and will get a better memory hit rate, and thus less disk I/O.

...with FreeBSD on the ia32 architecture your processes can eat up to 2GB of RAM
(if not more... check it), but you'll probably have to recompile your kernel
to allow such a big data segment size.

(do not check the memory usage immediately after start; wait a few days until
the memory cache fills up)

> maximum_object_size_in_memory 1024 KB

probably too much; I'd set a lower size to get more objects into memory,
and thus have less disk I/O for small files.

> maximum_object_size 2048 KB
> cache_replacement_policy heap GDSF
> memory_replacement_policy heap GDSF
> 
> I use diskd with 3 cache directories on a RAID 0

Oh, you did NOT read the FAQ before you installed the machine, did you?

You should NOT run a Squid cache on RAID0 disks; it will not benefit from
striping across more disks.  Running one cache_dir on each drive is more
effective, and you won't lose your whole cache if one of your disks fails.
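[Editor's note: a sketch of the layout described above; mount points and sizes are placeholders.]

```
# squid.conf fragment: one cache_dir per physical disk, no RAID0
#   cache_dir <type> <path> <size-MB> <L1-dirs> <L2-dirs>
cache_dir diskd /cache1 8192 16 256
cache_dir diskd /cache2 8192 16 256
cache_dir diskd /cache3 8192 16 256
```

Squid balances objects across the configured directories itself, so striping at the block level buys nothing and couples the directories' failure modes.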

Another question is how loaded your CPU and disks are.  I mentioned
disk I/O twice; if this is the bottleneck you can get faster disks (but
I'd try that only after splitting the RAID into 3 drives and tuning other
parameters).

If CPU is your problem, you should check that you don't have ineffective ACLs
(a quite common reason why squid is slow) and buy a better CPU...

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
(R)etry, (A)bort, (C)ancer



RE: [squid-users] Httpd Accelerator

2005-04-28 Thread Jon
Good call to check the cache.log:

2005/04/27 17:32:51| assertion failed: cbdata.c:274: "c->y == c"
2005/04/27 17:32:55| Starting Squid Cache version 2.5.STABLE9 for
i386-unknown-freebsd5.3...

I have no idea what that means.

Thanks,

Jon
(Sorry about that)

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, April 27, 2005 5:49 PM
To: Jon
Cc: Squid Users
Subject: Re: [squid-users] Httpd Accelerator

On Tue, 26 Apr 2005, Jon wrote:

> I've been using Squid for a couple of months as a server accelerator and it
> was great. But recently our site traffic has increased.  Now I'm having
> issues where Squid would exit and restart back up during heavy load.

Anything in cache.log explaining why it exited?

Regards
Henrik



Re: [squid-users] How to block a client using mac

2005-04-28 Thread Jon Newman

> hi all
> i am running Linux 9 and squid shipped with Linux i think it is 2.5 stable
> 1

Linux 9? Don't you mean RedHat Linux 9? Sorry, pet peeve of mine where
users call Redhat X "Linux X" as though redhat were the only distribution.

> we are using ACLs that are IP-based, 24-hour, and time-based, depending on
> the package.
>
> The problem is this: we have used http_access deny for a client during the
> day because his time package is night-only.  He is unable to browse, but he
> is still signing in to MSN, Yahoo and mIRC.  How can I stop this, and how
> can I permanently block a client's MAC address so he cannot use the network?

ACLs are not going to get you what you want. You are going to need
firewalling or IP re-routing techniques in order to block ALL traffic for
a certain user/IP. Squid only has control over what is redirected to it (which I
assume is web traffic only in your setup), nothing more.
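[Editor's note: an illustration of the firewalling Jon means; the MAC address is a placeholder, and this assumes the Linux box actually forwards the client's traffic.]

```shell
# Drop ALL forwarded traffic from one client's MAC address, not just HTTP.
# Uses the 'mac' iptables match; MAC matching only works on traffic arriving
# on an interface, so it belongs in the FORWARD (or PREROUTING) chain.
iptables -A FORWARD -m mac --mac-source 00:11:22:33:44:55 -j DROP
```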

Hope this helps...

Jon

-- 
Jon Newman ([EMAIL PROTECTED])
Technical Solutions Manager / Senior software Engineer
The Optimal Link (http://www.oplink.net)



RE: [squid-users] Httpd Accelerator

2005-04-27 Thread Jon


-Original Message-
From: Matus UHLAR - fantomas [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, April 27, 2005 2:57 AM
To: Squid Users
Subject: Re: [squid-users] Httpd Accelerator

On 26.04 21:01, Jon wrote:
> I've been using Squid for a couple of months as a server accelerator and it
> was great. But recently our site traffic has increased.  Now I'm having
> issues where Squid would exit and restart back up during heavy load. At most
> it could serve out ~84 Mbps before it crashes. My server has 4 GB of RAM; I
> tweaked the kernel for message queues, shared memory, increased nmbclusters
> and file descriptors.  Are there other settings I can tune to increase its
> performance?  I know my description is a little vague but I'll be happy to
> submit my settings if anyone is interested.  Maybe it has reached the limit
> and I need to add another squid?

What is your cache_mem setting and maximum_object_size_in_memory?
what memory replacement policy do you use?
Do you use disk cache? If so, what disk layout do you use, what storage
system
and what is your maximum_object size and disk replacement policy?

cache_mem 512 MB
maximum_object_size_in_memory 1024 KB
maximum_object_size 2048 KB
cache_replacement_policy heap GDSF
memory_replacement_policy heap GDSF

I use diskd with 3 cache directories on a RAID 0

Thanks,

Jon



[squid-users] Httpd Accelerator

2005-04-26 Thread Jon
Hi everyone,

I've been using Squid for a couple of months as a server accelerator and it
was great. But recently our site traffic has increased.  Now I'm having
issues where Squid exits and restarts during heavy load. At most
it could serve out ~84 Mbps before it crashes. My server has 4 GB of RAM; I
tweaked the kernel for message queues, shared memory, increased nmbclusters
and file descriptors.  Are there other settings I can tune to increase its
performance?  I know my description is a little vague, but I'll be happy to
submit my settings if anyone is interested.  Maybe it has reached the limit
and I need to add another squid?

Thanks,

Jon



Re: [squid-users] Transparent proxy issues...

2005-04-14 Thread Jon Newman
> Severe MTU problems cache->client perhaps? Try disabling PMTU discovery
> for the interface/route towards your clients.

Wouldn't redirecting localhost/127.0.0.1 to another port on the localhost
work regardless of MTU settings though? Because when I redirect, say, port
1 to port 22 (ssh):
iptables -t nat -A PREROUTING -p tcp -s 0/0 --dport 1 -j DNAT --to 127.0.0.1:22

...and try ssh'ing to port 1 via 'ssh -p 1 127.0.0.1' I get:
ssh: connect to host 127.0.0.1 port 1: Connection refused

But 'ssh 127.0.0.1' works as expected.

Ideas? This is getting way too frustrating...I've been working on this far
too long *sigh*

Trying it on another installation (different distribution) nets the same
results...it has to be something I am doing.

Jon

-- 
Jon Newman ([EMAIL PROTECTED])
Systems Administrator/Software Engineer
The Optimal Link (http://www.oplink.net)



Re: [squid-users] Transparent proxy issues...

2005-04-13 Thread Jon Newman
Every time I put the redirect in, I can see the requests for the pages in
the dansguardian logs, but the transfer does not work/take place. Anyone
have any ideas as to why this might occur? It's as though it makes the
initial connection but does not allow the client to receive any data?

Thanks.

Jon

> On Tue, 12 Apr 2005, Jon Newman wrote:
>
>> Using DNAT, via this command, still nets the same result:
>> iptables -t nat -A PREROUTING -p tcp -s x.x.x.x/32 --dport 80 -j DNAT
>> --to
>> 216.90.3.137:8080
>
> As I said it is equivalent. REDIRECT only saves you from entering the IP
> (automatic).
>
>> Any other ideas? I can't believe this is so difficult, this should be
>> simple and straightforward... there must be something stupid I am
>> missing...PLEASE, anyone willing to point out my idiocy?
>
> Never ever had netfilter NAT fail on me.
>
> But if your intercepting router is running in "lollipop" mode (just one
> interface, next hop router on same interface as client station) then you
> may need disabling ICMP redirects.
>
> Regards
> Henrik
>


-- 
Jon Newman ([EMAIL PROTECTED])
Systems Administrator/Software Engineer
The Optimal Link (http://www.oplink.net)



Re: [squid-users] Transparent proxy issues...

2005-04-13 Thread Jon Newman
I even tried redirecting a non-specific port to google.com's port 80, and
still no success:
[EMAIL PROTECTED]:~# iptables -t nat -A PREROUTING -p tcp -s 0/0 --dport 1 -j DNAT --to 64.233.187.104:80
[EMAIL PROTECTED]:~# telnet 127.0.0.1 1
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
[EMAIL PROTECTED]:~# telnet 64.233.187.104 80
Trying 64.233.187.104...
Connected to 64.233.187.104.
Escape character is '^]'.
^]
telnet> quit
Connection closed.

So as you can see, redirection does not work; direct connection, however,
does. Anyone have an idea?

Thanks.

Jon


> On Tue, 12 Apr 2005, Jon Newman wrote:
>
>> Using DNAT, via this command, still nets the same result:
>> iptables -t nat -A PREROUTING -p tcp -s x.x.x.x/32 --dport 80 -j DNAT
>> --to
>> 216.90.3.137:8080
>
> As I said it is equivalent. REDIRECT only saves you from entering the IP
> (automatic).
>
>> Any other ideas? I can't believe this is so difficult, this should be
>> simple and straightforward... there must be something stupid I am
>> missing...PLEASE, anyone willing to point out my idiocy?
>
> Never ever had netfilter NAT fail on me.
>
> But if your intercepting router is running in "lollipop" mode (just one
> interface, next hop router on same interface as client station) then you
> may need disabling ICMP redirects.
>
> Regards
> Henrik
>


-- 
Jon Newman ([EMAIL PROTECTED])
Systems Administrator/Software Engineer
The Optimal Link (http://www.oplink.net)



Re: [squid-users] Transparent proxy issues...

2005-04-13 Thread Jon Newman
> Never ever had netfilter NAT fail on me.
>
> But if your intercepting router is running in "lollipop" mode (just one
> interface, next hop router on same interface as client station) then you
> may need disabling ICMP redirects.

I have 2 interfaces on that router; it is set up as follows:

[Customers]---DS3---[Cisco 7206]Fa2/0---eth1[BOX 'mainbr' is bridge iface with ip]eth0---[Switched network including link to internet]

Relatively simple setup. Sorry if that is difficult to understand.

Jon

-- 
Jon Newman ([EMAIL PROTECTED])
Systems Administrator/Software Engineer
The Optimal Link (http://www.oplink.net)



Re: [squid-users] Transparent proxy issues...

2005-04-12 Thread Jon Newman
> If you want to explicitly state the IP then you can use DNAT instead of
> REDIRECT. Both supports specifying the port to NAT to.

Using DNAT, via this command, still nets the same result:
iptables -t nat -A PREROUTING -p tcp -s x.x.x.x/32 --dport 80 -j DNAT --to 216.90.3.137:8080

Any other ideas? I can't believe this is so difficult; this should be
simple and straightforward... there must be something stupid I am
missing... PLEASE, anyone willing to point out my idiocy?

Thanks...

Jon

-- 
Jon Newman ([EMAIL PROTECTED])
Systems Administrator/Software Engineer
The Optimal Link (http://www.oplink.net)



Re: [squid-users] Transparent proxy issues...

2005-04-12 Thread Jon Newman
> Is your squid running in 8080 port to get 80 requests?
> Check it with netstat -na | grep '8080'

Yes, this is the output of that command:
[EMAIL PROTECTED]:~# netstat -na | grep '8080'
tcp0  0 0.0.0.0:80800.0.0.0:*   LISTEN
tcp1   2654 216.90.3.137:8080   66.101.59.243:45942 CLOSING
tcp1  11967 216.90.3.137:8080   66.101.59.243:45940 CLOSE_WAIT
tcp1   2654 216.90.3.137:8080   66.101.59.243:45941 CLOSING
tcp1   2654 216.90.3.137:8080   66.101.59.243:45944 CLOSING
tcp0  0 216.90.3.137:8080   66.101.59.243:45945 TIME_WAIT

As you can see there is something bound to that port and listening on all
IP addresses on the box. Currently I have my PC pointed at port 8080
(manually setup), using dansguardian as I type this email, so it
definitely is working. I do have port 8080 and 3128 blocked from outside
access only to prevent users not on our network from using the cache and
filter.

> Is /proc/sys/net/ipv4/ip_forward file havine an entry
> as 1 (or) Is sysctl net.ipv4.ip_forward equal to 1

[EMAIL PROTECTED]:~# cat /proc/sys/net/ipv4/ip_forward
1

I currently have the PC I am on now routed through the transparent proxy.
When I manually configure my browser to use the proxy via port 8080,
everything is fine and I am able to browse the web. However, when I try to
connect straight through to the internet and have the iptables rule
route my destination-port-80 packets through port 8080, I get nothing. The
DNS lookup still succeeds (as it should, since I am not touching those
packets), but the browser just sits at 'waiting for reply from XX'.

Here is the iptables nat table setup:
[EMAIL PROTECTED]:~# iptables-save -t nat
# Generated by iptables-save v1.2.10 on Tue Apr 12 09:38:04 2005
*nat
:PREROUTING ACCEPT [29252743:1621473381]
:POSTROUTING ACCEPT [29250710:1621356573]
:OUTPUT ACCEPT [188:13722]
-A PREROUTING -s 66.101.59.243 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8080
COMMIT
# Completed on Tue Apr 12 09:38:04 2005

Shouldn't I supply the destination IP address when redirecting to port
8080? In other words, doesn't the current setup redirect the client to
port 8080 on the ORIGINAL, INTERNET-based server (which would be
incorrect)? If so, how would I do so with iptables?

Just an idea... thanks for any responses.


-- 
Jon Newman ([EMAIL PROTECTED])
Systems Administrator/Software Engineer
The Optimal Link (http://www.oplink.net)



[squid-users] ignore

2005-04-11 Thread Jon
Testing.



[squid-users] Transparent proxy issues...

2005-04-10 Thread Jon Newman
Ok, I have squid and dansguardian set up and running. If I configure my
client pc to use its IP address as a proxy, it works fine, and I can
see the various entries in both squid's and dansguardian's log files.
However, when I try to force it to transparently proxy, it does not work.
I do see the entries in dansguardian's log files for the request, but it
does not complete the transfer on the client pc. Here is the command I
issue to try and force the client to use the proxy:
iptables -t nat -A PREROUTING -s x.x.x.x/32 -p tcp --dport 80 -j REDIRECT --to-port 8080
('x.x.x.x' = client pc ip address)

Some additional information, here is the basic setup as to how my routing
works:
[CUSTOMER]<-->[TRANSPARENT PROXY]<--->[INTERNET]
Pretty basic and simple. Can anyone point out something I may be doing
wrong? Also, the same issue occurs when I redirect the customer to squid
as opposed to dansguardian (i.e. it stalls, no data is transferred).

Thanks in advance.

Jon



[squid-users] Cache store rebuild

2005-04-08 Thread Jon
Hi everyone,

My squid is configured as an http accelerator.  I was just wondering about
the cache store rebuild when squid crashes and restarts.  Is there a way to
set it so it doesn't rebuild it all at once but slowly or not at all?

I read about the -F option, but I can't cut Squid off from serving traffic.
Basically, during the cache store rebuild it causes a lot of connections to my
back-end server, which I'm trying to eliminate.  Are there any other
alternatives?

Thanks,

Jon



[squid-users] Squid 3 w/ESI

2005-02-21 Thread Jon
Hi everyone,

I was experimenting with the latest build of Squid 3 with ESI.  I ran the
configure file with the following options "--prefix=/usr/local/squid3
--enable-gnuregex --with-pthreads --enable-esi --enable-storeio=ufs,aufs
--with-aufs-threads=10 --enable-useragent-log --enable-referer-log
-enable-ssl --enable-x-accelerator-vary --with-dl".  The configure step runs
fine, but when I run "make" I receive the following errors.

/usr/bin/ld: cannot find -lexpat
*** Error code 1

Stop in /usr/home/username/squid-3.0-PRE3-20050218/src.
*** Error code 1

I was reading a tutorial whose authors seem to have it installed OK.  Can anyone
point me in the right direction?

Thank you,

Jon
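[Editor's note: the linker error means libexpat, which Squid 3's ESI parser links against, is not installed or not on the library search path. A hedged sketch of a fix on the FreeBSD of that era; the package name and paths are assumptions.]

```shell
# Install the expat library, then tell configure where it lives.
pkg_add -r expat
env CPPFLAGS="-I/usr/local/include" LDFLAGS="-L/usr/local/lib" \
    ./configure --prefix=/usr/local/squid3 --enable-esi   # plus the other options
```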



[squid-users] filtering/proxy options?

2005-01-11 Thread Jon Newman
I am looking at implementing a proxy/filtering server and would like some
recommendations on the direction to take. I wish to do the filtering on an
IP basis with a transparent proxy. IE: specify what IPs are to have
filtering enabled and what options for that IP are enabled/etc. Can squid
do this? Could anyone point me in the right direction in order to get
something like this implemented? I am a developer and fully understand and
am well accustomed with RTFM'ingbut would like some direction. Can
anyone point me the right way? Basically I would just like to know what
packages/software would be needed to accomplish this...the routing/etc I
can already take care of.

Thanks in advance

Sincerely,
Jon Newman



RE: [squid-users] Squid in 64-bit

2005-01-06 Thread Jon
Thanks for the reply; it raised another question in my mind.  You mentioned that
the benefits of 64-bit are limited; can you elaborate on that?  What are the
benefits of going 64-bit?  Will the caching be faster?

Thanks for reading,

Jon

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, January 05, 2005 12:28 AM
To: Jon
Cc: Squid Users
Subject: Re: [squid-users] Squid in 64-bit



On Tue, 4 Jan 2005, Jon wrote:

> I'm new to the mailing list.  I did some Google and mail archive search
> but I wasn't able to find much information on Squid running in a 64-bit
> environment.

Should work, but is not very much tested.

There quite likely will be problems with requests for very large objects
(> 2GB).


The benefits of 64 bits are quite limited for Squid, and the drawback of
higher memory usage is very noticeable.

> I am going to stick with FreeBSD since I've become familiar with it, but
> has anyone successfully got Squid running on 64-bit in a production
> environment?  How difficult or different is it to set up?  Will the
> current version of Squid compile in 64-bit without any problems?

I ran Squid on 64-bit Alpha machines several years back, and it
performed reasonably well then (with a few patches). But I have not tested
any recent versions of Squid in 64-bit environments.

Regards
Henrik




[squid-users] Squid in 64-bit

2005-01-04 Thread Jon
Hi everyone,

I'm new to the mailing list.  I did some Google and mail archive searches
but wasn't able to find much information on Squid running in a 64-bit
environment.  I have minimal knowledge of the Unix/Linux environment, but I
was able to set up some Squids as HTTP accelerators using FreeBSD.  I would
ultimately like to use Opteron machines for Squid.  I just want to gather
as much information as I can before moving in that direction.

I am going to stick with FreeBSD since I've become familiar with it, but
has anyone successfully got Squid running on 64-bit in a production
environment?  How difficult or different is it to set up?  Will the current
version of Squid compile in 64-bit without any problems?

Well, I guess that's what I have in mind so far.  If anyone can help me out
or point me in the right direction, it would be awesome.

Thanks for reading,

Jon




RE: [squid-users] Odd redirect? denied.

2004-10-21 Thread Wyatt, Jon


> -Original Message-
> From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
> Sent: 20 October 2004 22:16
> To: Wyatt, Jon
> Cc: '[EMAIL PROTECTED]'
> Subject: Re: [squid-users] Odd redirect? denied.
> 
> 
> 
> 
> On Wed, 20 Oct 2004, Wyatt, Jon wrote:
> 
> > Hi,
> >
> > We're having a problem with a strange URL and I wonder if anyone can
> > help.
> >
> > This is the URL.
> >
> > https://www.btwholesale.com:8443/bbtcr/login.jsp?serverURL=http://reportserver.nat.bt.com:8000
> 
> You need to add port 8443 to both Safe_ports and SSL_ports.
> 
> Regards
> Henrik
> 

Excellent, that got it working from home for me. But the Squid at work is
still playing up; it's obviously a fundamental config problem we have here.

Thanks,
Jon.


***
The information contained in this e-mail is intended only 
for the individual to whom it is addressed. It may contain 
privileged and confidential information. If you have 
received this message in error or there are any problems, 
please notify the sender immediately and delete the message 
from your computer. The unauthorised use, disclosure, 
copying or alteration of this message is forbidden. Neither 
Vertex Data Science Limited nor any of its subsidiaries 
will be liable for direct, special, indirect or 
consequential damage as a result of any virus being passed 
on, or arising from alteration of the contents of this 
message by a third party.

Vertex Data Science Limited (England and Wales No. 3153391) 
registered office Vertex House, Greencourts Business Park, 
333 Styal Road, Manchester, M22 5TX
***



RE: [squid-users] Odd redirect? denied.

2004-10-20 Thread Wyatt, Jon


> -Original Message-
> From: Elsen Marc [mailto:[EMAIL PROTECTED]
> Sent: 20 October 2004 12:39
> To: Wyatt, Jon; [EMAIL PROTECTED]
> Subject: RE: [squid-users] Odd redirect? denied.
> 
>  It works here for the complete URL, through Squid, and I get
>  the login page.
>  Are you sure you did:
> 
>% squid -k reconfigure
> 
>  after allowing 8443 as a safe SSL port ?
> 

Yes.

Must be something else in our configuration which is causing the problem
then. How odd.


jon







[squid-users] Odd redirect? denied.

2004-10-20 Thread Wyatt, Jon
Hi,

We're having a problem with a strange URL and I wonder if anyone can
help.

This is the URL.
https://www.btwholesale.com:8443/bbtcr/login.jsp?serverURL=http://reportserver.nat.bt.com:8000

And this is the error message that's produced.
TCP_DENIED/403 1004 CONNECT www.btwholesale.com:8443 - NONE/- -

I'm testing with a device which is allowed to access any site and 8443 has
been added to the safe ports listing, i.e. (and yes, I've restarted squid)

acl SSL_ports port 443 563
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 8443        # https extra port
acl CONNECT method CONNECT
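For reference: in the config above, 8443 was added only to Safe_ports,
while the stock rules also deny CONNECT to ports outside SSL_ports. A
sketch of the extra line likely needed (assuming the default
"http_access deny CONNECT !SSL_ports" rule is in place):

```
# Allow CONNECT tunnels to 8443 as well as the usual HTTPS ports:
acl SSL_ports port 443 563 8443
```

followed by "squid -k reconfigure".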


The address reportserver.bt.com appears to be an internal address, as it
doesn't resolve from the internet, but I guess that's irrelevant. I would
guess that the entire URL tells the front-end webserver to retrieve a page
from within the corporate infrastructure and display it from this server.

Access works fine directly. Once clicked, the link produces an additional
popup window with a username and a login field.
Any ideas?

Thanks,
Jon.







Re: [squid-users] RE: [SPAM] - [squid-users] Re: Transfer-encoding insquid -- question - Bayesian Filter detected spam

2004-07-13 Thread Jon Kay
> Thanks. I already saw that. I am more interested in the TE chunking
> implementation (more http/1.1).

We haven't done that, at least not yet.  So far, TE is rarely used in
practice.  CE is by far the most widely used encoding, being supported
by virtually every browser out there, and an increasing number of
servers.

Let me or Joe Cooper know if you're interested in supporting TE; right
now we're considering next steps on encoding.  So far, nobody has
voted with money to implement TE (in fact, even without money, I'm
curious what you're thinking of using it for).
------
Jon Kay  pushcache.com - "push done right"   (512) 442-3320
 Squid consulting / installation




Re: [squid-users] Re: gzip

2004-07-09 Thread Jon Kay
> > Also Swell Technology claims to have such a patch up for ransom. Does
> > anyone know if their patch is for real and if it works as I described
> > above?
> >
> > No idea - never heard of the company. I don't know how much they're
> > charging, but rather than purchasing that, you (and others who are
> > interested) could instead invest the money in sponsoring a Squid developer
> > to implement it in Squid proper.

> They claim the patch works great and they haven't released it yet b/c
> squid 3.0 is in feature freeze.  One of the engineers there told me they
> are going to release it under the GPL once Squid 3.1 begins.

> But they are asking for $400 to gain access to the pre-release patch.
>
> http://swelltech.com/squidgzip/
>
> So in theory it already exists and will be a part of Squid 3.1 very soon
> but who knows if these guys are for real.  I was really hoping people on
> this list would know who they were...

I don't think Joe Cooper will mind my saying some of what I know on the
list.

Swell Technologies, in concert with a partner, do, in fact, have a
gzip/deflate content-encoding implementation at an advanced stage.
I'm using it right now for my browsing.  I recommend contacting Joe
for more information.

Note that Swell did, in fact, sponsor a Squid developer to implement
it in Squid and eventually get it into the trunk.  They were kind
enough to hire me to do it.  Although I certainly have a conflict of
interest in commenting on it, I think their "ransom" system is rather
good for the Squid community, because it creates a market for
expensive Squid developments while spreading out the investment and
risk over several interested parties.  And this was a fairly
complicated change, a little expensive for a single pocket, given the
as-yet weak recovery of the caching market.  It will have many
beneficiaries, way beyond the community that actually pays for it.  This
kind of ransom system can get widely desired Squid developments into the
field a lot faster if well supported by the community.
--
Jon Kay  pushcache.com - "push done right"
 Squid consulting / installation





[squid-users] Multiple Internal WebServers.

2004-07-01 Thread Jon Garcia

Hi,

I'm wondering if it's possible to proxy multiple internal webservers with
Squid in accelerator mode?

In my setup I would have only one Squid on a single machine (in accelerator
mode) that would proxy requests from the internet to three internal
webservers.
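It is possible; on a 2.x Squid one common approach is accelerator mode
with host-header virtual hosting plus a redirector that maps each public
hostname to the right internal server. A minimal sketch (the directive
values are illustrative assumptions, not a tested config):

```
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_uses_host_header on
# A redirect_program (redirector) can then rewrite each incoming
# public hostname to the matching internal webserver's address.
```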

Thanks in advance.

Jon Garcia



Re: [squid-users] User login

2004-06-24 Thread Jon Kay
s s wrote:

> I want to bind an ip address to a user , for example
> user abc should be able to login from only 192.168.0.1
> ip
>
> I am using right now PAM based authenticaion using
> /etc/passwd

You can do this using a combination of the PAM "proxy_auth" acl you
probably use now and a "src" acl, tied together with http_access (see
squid.conf and FAQ section 10).
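A sketch of how those ACLs might combine, using the user and address from
the question (the ACL names are hypothetical; rule order matters, since
http_access rules are evaluated top to bottom):

```
# Hypothetical ACL names; "abc" and 192.168.0.1 are from the question.
acl user_abc proxy_auth abc
acl host_abc src 192.168.0.1/255.255.255.255
http_access allow user_abc host_abc   # abc allowed only from this address
http_access deny user_abc             # abc denied from anywhere else
```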

------
Jon Kay  pushcache.com - "push done right"
 Squid consulting / installation




Re: [squid-users] Having trouble starting squid as non-root user

2004-06-22 Thread Jon Kay
Carl Barton wrote:

> I am having trouble starting squid as a non-root user.  When I attempt to do
> it
> I get the following.
>
> commBind: Cannot bind socket FD11 to 192.168.0.197:443 (13) Permission denied
>
> Does anyone know the permission that I need to set for a non-root user to be
> able
> to start squid or do I just always have to start squid as the root user?

If you're setting up Squid to serve HTTPS, then yes, you have to start
Squid as root, because 443 is a privileged port.  You want to set up a
cache user and group, and set cache_effective_user and
cache_effective_group in squid.conf accordingly.
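In squid.conf that privilege drop looks roughly like this (the user and
group names are assumptions; create them first with your OS tools):

```
# Squid is started as root so it can bind the privileged port (443),
# then drops to this unprivileged identity for everything else:
cache_effective_user squid
cache_effective_group squid
```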

Good luck!
------
Jon Kay  pushcache.com - "push done right"
 Squid consulting / installation




Re: [squid-users] Connection Timed Out While Squid Running Well

2004-06-12 Thread Jon Kay
Harry Prasetyo K wrote:

> Hello,
> I'm a newbie in Linux and Squid things.
> At my college we have a primary proxy which is connected to the internet,
> and I use that primary proxy as my cache parent. My system runs Debian
> with kernel 2.6.6 and Squid 2.4.STABLE6; I've just upgraded it from the
> STABLE4 version. After that I get this problem:
> my Squid is running well, and the primary proxy is running well when I
> test it via browser, but when I set up my browser to use my own Squid as
> a proxy I can't connect to the internet.
>
> this is my setting for cache peer
> cache_peer 202.xxx.xxx.xxx parent 8080 3130 login=xxx:xxx default
>
> After checking the cache log I found my Squid assumed that the digest
> from the cache parent was temporarily disabled and the cache parent was
> dead (in fact the cache parent is running well).
> The access.log shows an action code 504 (Gateway Time-Out) ???

Did you check (via telnet) whether you can actually reach the parent proxy
from your machine?
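For example (the parent address is elided in the quoted config, so a
documentation placeholder is used here; nc may not be installed
everywhere):

```shell
# Probe the parent proxy's TCP port. Replace the placeholder with the
# real address from the cache_peer line.
telnet 203.0.113.1 8080               # placeholder address
# or, non-interactively (exits 0 if the port accepts connections):
nc -z -w 5 203.0.113.1 8080 && echo reachable || echo unreachable
```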


Jon





Re: [squid-users] delays pools and RPM

2004-06-12 Thread Jon Kay
[zuñiga] wrote:

> Now I want to use delay pools, but I read in squid.conf that if I use
> delay pools I must recompile Squid with --enable-delay-pools.
>
> How can I do it from an RPM? or when I install from a rpm package  squid
> compile with all parameter?

Most RPMs are packages of already-compiled binaries.  You can't recompile
them because they don't include source code.

There is such a thing as a "source RPM" (SRPM), which works just like other
RPMs except that, instead of containing binaries, it has source code.  Red
Hat usually puts SRPM contents in /usr/src/redhat/SOURCES when installed.
There may be an SRPM for Squid on the source code CD.  If so, you can
install it and compile it however you want.

If not, you may have to just fetch the source from squid-cache.org.
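A rough sketch of the SRPM route (the filename and spec details are
placeholders; older Red Hat releases use "rpm -ba" instead of
"rpmbuild -ba"):

```shell
# Install the source RPM; sources land in /usr/src/redhat/SOURCES and
# the spec file in /usr/src/redhat/SPECS.
rpm -ivh squid-2.5.STABLE5-1.src.rpm        # placeholder filename
# Edit the spec so its ./configure invocation includes:
#   --enable-delay-pools
# Then rebuild the binary packages from the modified spec:
rpmbuild -ba /usr/src/redhat/SPECS/squid.spec
```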

------
Jon Kay  pushcache.com - "push done right"
 Squid consulting / installation




[squid-users] Performance expectations using acl dstdomain FILE?

2003-10-07 Thread Jon Kinred
Hi,

I am currently running a cache whereby a group of authenticated users
are granted access to a limited set of sites by using a dstdomain
"file" ACL. This list currently contains ~1500 domains and is running
without problems at the moment, but I am wondering how well it will scale
in the future. From what I remember, the list is loaded into memory at
runtime, so my questions are:

Are there any other file formats that would improve the situation as the
file grows, and if not, does that mean that scalability will depend
on available memory?

These are the current memory stats:

Mem:   2069852k total,  2061264k used,     8588k free,   701300k buffers
Swap:  1469936k total,    96184k used,  1373752k free,   862224k cached

Thanks, Jon.