Re: [squid-users] Issue with header_access and validation

2008-05-02 Thread Adrian Chadd
Use refresh_pattern entries to override the max-age.
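
For example, something along these lines (a sketch only; the pattern, times and
options are illustrative, and note that ignore-reload/reload-into-ims act on the
client's no-cache/reload requests too, which may or may not be what you want here):

# turn reloads of likely-static objects into If-Modified-Since revalidations
refresh_pattern -i \.(gif|jpg|jpeg|png|css|js)$ 1440 20% 10080 reload-into-ims
refresh_pattern . 0 20% 4320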


On Fri, May 02, 2008, Paul-Kenji Cahier wrote:
 Hello,
 
 In our current situation, we are trying to have 'Cache-Control: max-age=0'
 headers from clients ignored in the cache decision process, while keeping
 'Cache-Control: no-cache' and 'Pragma: no-cache' valid, i.e. still making
 revalidation mandatory.
 
 Out of the box, when squid receives the max-age=0 directive, it decides on a
 TCP_REFRESH_HIT since the client asks for it.
 
 Our current approach was the following:
 acl static_content req_header Cache-control max.age=0
 header_access Cache-Control deny static_content
 
 While the acl is properly matched, it seems header_access never gets applied
 when deciding what to do, with the result that it is effectively ignored.
 
 Is there any way to make it apply earlier, or another way to ignore only
 'Cache-Control: max-age=0' request headers? (We would also prefer to be able
 to define that with an acl, so we can apply the directive only to content
 that is most likely static.)
 
 The whole goal is to keep Firefox's F5/refresh button from forcing thousands
 of TCP_REFRESH_HIT/304 responses all the time, which not only strains the
 servers but also takes longer. Of course we still want users who want to
 force a refresh (through Ctrl+Shift+R, which actually adds the no-cache
 directives) to be able to do so. (Caching is good, but forcing delays before
 things are checked again is not.)
 
 Any suggestions would be really appreciated... We have tried rewriting URLs
 through Privoxy, but it got messy and fairly heavy on load, so a squid-only
 solution would really be best.
 
 -- 
 Best regards,
  Paul-Kenji Cahier
 mailto:[EMAIL PROTECTED]

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


[squid-users] Reverse proxy problem

2008-05-02 Thread Gianfranco Varone [TIN]
Hi to all, 
first of all, sorry for my English!

I'm trying to configure a reverse proxy with Squid version 2.6 to permit
users to connect to our mail server.

Schema as follows:
USER - internet - Squid(DMZ) - FW - Mail(LAN)
Squid AND Mail answer on tcp port 1

Squid.conf:
http_port ipSquid:1 vhost=ipMail:1 vport=1 accel
cache_peer ipMail 1 0 no-query originserver
acl MailServer ipMail/32
always_direct deny all !MailServer

So, if I try to connect to http://ipProxy:1/ I get the login page, but every
request automatically redirects to http://ipMail:1 and I obviously get errors!

Using squid 2.5 instead, it works perfectly!

Squid 2.5 conf:
http_port 1
httpd_accel_host 192.168.0.8
httpd_accel_port 1
httpd_accel_single_host on
httpd_accel_uses_host_header on
httpd_accel_with_proxy on

Where am I going wrong???

Cheers/GfV


Re[2]: [squid-users] Issue with header_access and validation

2008-05-02 Thread Paul-Kenji Cahier
But wouldn't that only override max-age received in headers sent by servers?
The ones we want to override come from client requests only.
Plus, refresh_pattern cannot take an acl, since it's global and based only on
the URL path (i.e. no acls).

Or am I not seeing things clearly?


Thanks for any help again.


 Use refresh_pattern entries to override the max-age.


 On Fri, May 02, 2008, Paul-Kenji Cahier wrote:
 Hello,

 In our current situation, we are trying to have 'Cache-Control: max-age=0'
 headers from clients ignored in the cache decision process, while keeping
 'Cache-Control: no-cache' and 'Pragma: no-cache' valid, i.e. still making
 revalidation mandatory.

 Out of the box, when squid receives the max-age=0 directive, it decides on a
 TCP_REFRESH_HIT since the client asks for it.

 Our current approach was the following:
 acl static_content req_header Cache-control max.age=0
 header_access Cache-Control deny static_content

 While the acl is properly matched, it seems header_access never gets applied
 when deciding what to do, with the result that it is effectively ignored.

 Is there any way to make it apply earlier, or another way to ignore only
 'Cache-Control: max-age=0' request headers? (We would also prefer to be able
 to define that with an acl, so we can apply the directive only to content
 that is most likely static.)

 The whole goal is to keep Firefox's F5/refresh button from forcing thousands
 of TCP_REFRESH_HIT/304 responses all the time, which not only strains the
 servers but also takes longer. Of course we still want users who want to
 force a refresh (through Ctrl+Shift+R, which actually adds the no-cache
 directives) to be able to do so. (Caching is good, but forcing delays before
 things are checked again is not.)

 Any suggestions would be really appreciated... We have tried rewriting URLs
 through Privoxy, but it got messy and fairly heavy on load, so a squid-only
 solution would really be best.

 -- 
 Best regards,
  Paul-Kenji Cahier
 mailto:[EMAIL PROTECTED]



Re: [squid-users] Reverse proxy problem

2008-05-02 Thread Amos Jeffries

Gianfranco Varone [TIN] wrote:
Hi to all, 
first of all, sorry for my English!

I'm trying to configure a reverse proxy with Squid version 2.6 to permit
users to connect to our mail server.


Schema as follows:
USER - internet - Squid(DMZ) - FW - Mail(LAN)

Squid AND Mail answer on tcp port 1

Squid.conf:
http_port ipSquid:1 vhost=ipMail:1 vport=1 accel


http_port ipSquid:1 accel vhost defaultsite=fqdnMailDomain:1


cache_peer ipMail 1 0 no-query originserver
acl MailServer ipMail/32


acl MailServer dstdomain fqdnMailDomain


always_direct deny all !MailServer


No. Instead:

never_direct allow MailServer
http_access allow MailServer
cache_peer_access ipMail allow MailServer
cache_peer_access ipMail deny all
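
Putting those pieces together, a sketch of the full 2.6 accelerator setup
would be roughly as follows (the domain and ports are placeholders):

http_port ipSquid:PORT accel vhost defaultsite=fqdnMailDomain
cache_peer ipMail parent PORT 0 no-query originserver
acl MailServer dstdomain fqdnMailDomain
http_access allow MailServer
never_direct allow MailServer
cache_peer_access ipMail allow MailServer
cache_peer_access ipMail deny all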



So, if I try to connect to http://ipProxy:1/ I get the login page, but every
request automatically redirects to http://ipMail:1 and I obviously get errors!


Prefer a FQDN for public mail access.
Point the FQDN for mail at ipSquid so clients can get to the proxy.

NP: there is no need for squid to listen on 1; it can be anything. The
clients never learn the private link to mail, and mail only sees squid
connecting correctly.




Using squid 2.5 instead, it works perfectly!


Squid 2.5 conf:
http_port 1
httpd_accel_host 192.168.0.8
httpd_accel_port 1
httpd_accel_single_host on
httpd_accel_uses_host_header on
httpd_accel_with_proxy on

Where am I going wrong???

Cheers/GfV


Amos
--
Please use Squid 2.6.STABLE20 or 3.0.STABLE5


Re: [squid-users] Dub about how to work squid ...

2008-05-02 Thread Amos Jeffries

Ramiro Sabastta wrote:

Hi !!!

I installed squid on a Debian box with 1 GB of RAM, 160 GB of disk
and an AMD Opteron dual core, in transparent mode.
I configured a 100 GB cache on disk with aufs.
I was doing some testing in my private network, and I couldn't
understand how squid worked.

I have this configuration:

maximum_object_size 5120 bytes
minimum_object_size 0 bytes
maximum_object_size_in_memory 204800 bytes

When I send an HTTP request for an object of, for example, 1 KB, the
object is saved by squid IN MEMORY, and on the following requests for the
same object squid answers with a HIT (this is OK).
On the other hand, when I send a request for an object bigger than
200 KB, it is always resolved DIRECT, and on the following requests for the
same object squid answers with a MISS. Squid doesn't save this object in the
disk cache.

Is that OK? Maybe I have a configuration problem.

Any Help?


Which version of squid?

Is the large object perhaps also not allowed storage in the disk cache?

Most recent squid releases have memory-only caching as the default now if no
disk cache is configured. So maximum_object_size_in_memory in one of those
setups would mean the maximum cacheable object size.
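
If the aim is to also cache those larger objects on disk, the relevant
directives would look something like this (sizes and path are only
illustrative):

# allow objects up to 50 MB into the aufs disk cache, keep small ones in memory
cache_dir aufs /var/spool/squid 102400 16 256
maximum_object_size 51200 KB
maximum_object_size_in_memory 200 KB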


Amos
--
Please use Squid 2.6.STABLE20 or 3.0.STABLE5


Re: [squid-users] SSL Accel - Reverse Proxy

2008-05-02 Thread Amos Jeffries

Tory M Blue wrote:

On Thu, May 1, 2008 at 2:02 AM, Amos Jeffries [EMAIL PROTECTED] wrote:

 You could make a second peer connection using HTTPS between squid and the
back-end server and ACL the traffic so that only requests coming in via SSL
are sent over that link, leaving non-HTTPS incoming traffic going over the old
HTTP link for whatever the server wants to do.


Thanks Amos

Not sure that I made myself clear or that I understand your suggestion.


You made the situation clear. I mentioned the only reasonably easy solution.
If you didn't understand me, Keith M Richard provided you with the exact 
squid.conf settings I was talking about before.


Squid can talk HTTPS to the clients, HTTPS to the web server, and still 
sit in the middle caching files. Exactly as it would for HTTP.
All you need is SSL certificates for each side of squid. Configured as 
Keith gave you.




I need to allow squid to connect and talk to my servers via http (only); I
want squid to handle the SSL termination (SSL acceleration, taking the
overhead off the back-end servers).

However, since squid talks to the back-end servers via http (and not https,
even on pages that require https), I need to somehow tell the server that the
original connection, or the connection that will go back to the client, will
be https, even though the server is responding via http.

I handle secure and non-secure fine now. The same website, for example
apps.domain.com, listens on both 443 and 80, so squid can handle secure and
non-secure. There is code on apps.domain.com that checks the incoming protocol
to verify that it's secure; if not, it sends a secure URL for the client to
come back in on. As you can see, if I allow Squid to handle the SSL portion,
the back-end server has no way of knowing (the piece I'm missing) whether the
actual client connection is secure or not (hard to explain, possibly).

Client -- apps.domain.com (443) Squid -- backend server (80)
backend server (80) -- Squid apps.domain.com (443) -- Client (443)

I'm wondering if Squid can tell the peer (server) that the original
request was in fact secure, so that we can tell the application "feel
free to respond with the secure data via the non-secure port", because
squid will encrypt the server response and get it back to the client via
https.

Sorry kind of long winded.
Tory



--
Please use Squid 2.6.STABLE20 or 3.0.STABLE5


[squid-users] Unable to Access support.microsoft.com through Squid

2008-05-02 Thread Dean Weimer
I have recently been unable to browse support.microsoft.com through our squid 
proxy servers.

Investigation into the issue leads me to believe that Microsoft is responding 
with gzip transfer encoding.

Firefox Reports a content encoding error:
Content Encoding Error
The page you are trying to view cannot be shown because it uses an invalid or 
unsupported form of compression.

Internet Explorer reports that I am not connected to the internet or that the
web server is not responding.
I was able to find a workaround for IE by unchecking the 'Use HTTP 1.1' option
under the Advanced tab.

We have 2 proxy servers configured: one is running FreeBSD 5.4 with
Squid-2.5.STABLE13, the other is running FreeBSD 6.2 with Squid-2.6.STABLE9.
Both have the same problem. After some reading, I found that this issue should
have been fixed in Squid-2.6+ and Squid-3.1+. Since 3.1 is not out in
production state yet, 2.6 seems to be the way to go; is STABLE9 not new enough
to have the fix in it? I already have 3.0 compiled and was ready to put it in
place, but I have no problem switching to the latest version of 2.6 if that
will fix this problem.

Has anyone else run into this issue and found a solution to the problem?

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co


[squid-users] question about delay_pool

2008-05-02 Thread Christian Purnomo
Hi Gurus,

I have a requirement where my users want the option to browse the internet at
different speeds.  At present I have 2 instances of squid using different
delay_pool parameters: one has 256 kbit/s and the other has 64 kbit/s.

My question: is there any way I can run only one instance of squid but
still have the 2 options? The users use different proxy.pac files at the
moment to choose their speed.

Thanks 


Re: [squid-users] Surfing hangs after period of time

2008-05-02 Thread Usrbich

I am worried about the "Request Memory Hit Ratios 0.0%" part. This means that
my memory cache is ineffective, right? Where can I tune this?

Cache information for squid:
Request Hit Ratios:         5min: 32.8%, 60min: 32.8%
Byte Hit Ratios:            5min: 25.0%, 60min: 25.0%
Request Memory Hit Ratios:  5min: 0.0%, 60min: 0.0%
Request Disk Hit Ratios:    5min: 50.8%, 60min: 50.8%
Storage Swap size:  1401944 KB
Storage Mem size:   1144 KB
Mean Object Size:   16.21 KB
Requests given to unlinkd:  0


Adrian Chadd wrote:
 
 On Wed, Apr 30, 2008, Usrbich wrote:
 
 In cache.log, all I get is this messages:
 2008/04/30 23:58:25| clientReadRequest: FD 121 (10.19.14.58:2014) Invalid
 Request
 2008/04/30 23:58:25| clientReadRequest: FD 112 (10.19.14.58:2013) Invalid
 Request
 2008/04/30 23:58:37| clientReadRequest: FD 170 (10.19.13.54:1317) Invalid
 Request
 2008/04/30 23:58:38| clientReadRequest: FD 121 (10.19.20.55:1235) Invalid
 Request
 2008/04/30 23:58:41| clientReadRequest: FD 169 (10.19.13.54:1318) Invalid
 Request
 2008/04/30 23:58:41| clientReadRequest: FD 169 (10.19.13.54:1319) Invalid
 Request
 2008/04/30 23:58:50| clientReadRequest: FD 126 (10.19.15.55:1662) Invalid
 Request
 
 Is that the cause of my timeout problem?
 
 It'd be nice to know what that is, but no, that in itself shouldn't hang
 browsing activities.
 
 Is this startup output ok?
 Memory usage for squid via mallinfo():
  total space in arena:   13768 KB
  Ordinary blocks:13023 KB265 blks
  Small blocks:   0 KB  5 blks
  Holding blocks:   244 KB  1 blks
  Free Small blocks:  0 KB
  Free Ordinary blocks: 744 KB
  Total in use:   13267 KB 95%
  Total free:   745 KB 5%
 
 Well, squid is using bugger all memory then.
 
 Debugging this will require a little more effort... you will probably
 have to begin by fiddling with your server stats and determining what else
 is going on. You may want to run the system call tracer on Squid when it
 slows down to see what it's doing.
 
 This sort of stuff is precisely why I suggest people graph as much about
 their Squid servers as they can!
 
 
 
 Adrian
 
 -- 
 - Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid
 Support -
 - $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -
 
 

-- 
View this message in context: 
http://www.nabble.com/Surfing-hangs-after-period-of-time-tp16976682p17019406.html
Sent from the Squid - Users mailing list archive at Nabble.com.



[squid-users] Help with PROXY TRANSPARENT centos 5.1 and Squid 3.0.Stable4

2008-05-02 Thread opc
Hi

I would like to configure squid as a transparent proxy, but it doesn't work.
If I configure the proxy address and port of the server in my browser, it
works very well.

The configuration is as follows:

squid.conf
-
http_port 3138 transparent


firewall in /etc/rc.d/rc.local

iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 3138

General configuration
--
wan = eth0
intranet = eth1
Centos v 5.1
Squid 3.0 Stable 4

What is wrong ?

This same configuration works with CentOS 4.0 and squid 2.6.


Re: [squid-users] Unable to Access support.microsoft.com through Squid

2008-05-02 Thread Michael Graham
 Has anyone else run into this issue and found a solution to the problem?


I do

# Fix broken sites by removing Accept-Encoding header
acl broken dstdomain support.microsoft.com
acl broken dstdomain .digitalspy.co.uk
header_access Accept-Encoding deny broken

The problem is that sending Accept-Encoding causes these sites to reply 
with a header that they're not supposed to send (Transfer-Encoding: chunked).


Cheers,
Mick


Re: [squid-users] Squid sends TCP_DENIED/407 even on already authenticated users

2008-05-02 Thread Julio Cesar Gazquez
On Thursday, 01 May 2008 at 06:08:28, you wrote:

But, as far as I can tell, credentials are sent in the request, as they appear
in the log. It just happens that, after several successful responses, 407
responses occur.

Anyway, IE7 only asks again for authentication on a certain site; it keeps
working silently on the other sites we tried, and IE6, FF and Konqueror never
ask for authentication again, even if 

 1) Have you tried the auth TTL settings.

 2) are you certain that this is not simply a case of long-ago provided
 credentials timing out in IE?

No. From what I found, it seems that getting some TCP_DENIED/407 responses is
normal, because squid changes nonces to limit replay attacks. However, the IE7
problem of asking again for credentials (found on a single site: rosario3.com,
sadly one of the top 5 in our stats) could, I guess, be a problem with IE7's
and/or IIS's broken implementation of the digest RFC (RFC 2617).
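
For reference, the digest nonce knobs that control how often squid
re-challenges clients look like this (values are illustrative and the helper
path is a placeholder):

auth_param digest program /usr/lib/squid/digest_pw_auth /etc/squid/digest_passwd
auth_param digest nonce_garbage_interval 5 minutes
auth_param digest nonce_max_duration 30 minutes
auth_param digest nonce_max_count 50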



-- 
Julio César Gázquez
Area Seguridad Informática -- Int. 736
Municipalidad de Rosario


RE: [squid-users] Unable to Access support.microsoft.com through Squid

2008-05-02 Thread Dean Weimer
Thanks for your help, Mick; this solved the problem.

Also, after seeing this I was able to figure out that in squid 3.0 you can use
request_header_access in place of header_access.
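
For anyone else hitting this on 3.0, the equivalent of Mick's workaround would
presumably be (domains copied from his example):

# Fix broken sites by removing the Accept-Encoding request header (squid 3.0)
acl broken dstdomain support.microsoft.com .digitalspy.co.uk
request_header_access Accept-Encoding deny broken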

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co

-Original Message-
From: Michael Graham [mailto:[EMAIL PROTECTED] 
Sent: Friday, May 02, 2008 10:37 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Unable to Access support.microsoft.com through Squid

 Has anyone else run into this issue and found a solution to the problem?

I do

# Fix broken sites by removing Accept-Encoding header
acl broken dstdomain support.microsoft.com
acl broken dstdomain .digitalspy.co.uk
header_access Accept-Encoding deny broken

The problem is that sending Accept-Encoding causes these sites to reply 
with a header that they're not supposed to send (Transfer-Encoding: chunked).

Cheers,
Mick


Re: [squid-users] SSL Accel - Reverse Proxy

2008-05-02 Thread Tory M Blue
On Fri, May 2, 2008 at 5:25 AM, Amos Jeffries [EMAIL PROTECTED] wrote:


  You made the situation clear. I mentioned the only reasonably easy solution.
  If you didn't understand me, Keith M Richard provided you with the exact
  squid.conf settings I was talking about before.


Obviously I have not, and I apologize.

I want Squid to handle both HTTP/HTTPS (easy, implemented and working for months).

I want Squid to talk to the backend server via HTTP, period. (EASY)

I want Squid to handle the https encryption/decryption and talk to
the origin server via http. (EASY)

I want Squid to somehow inform the origin that the original request
was in fact HTTPS. (HOW is the question at hand.)

I can do SSL and pass it and have squid handle the SSL without issue;
the issue is allowing the origin insight into the originating
protocol when squid accepts the client connection on 443 and sends the
request to the origin on port 80.

The issue is that I don't want my backend server to have to deal with
ssl at all. But I have some applications that require the request to be
https (secured pages). So if Squid could pass something in the header
citing that the original request was made via https, then my code
could take that information and know that sending secured data via a
non-secure method is okay, since Squid will encrypt the data and send it
to the client before that data leaves my network.

I had similar questions about squid sending the original http version
information in a header, which it does. Now I'm wondering if squid
keeps track of the original requesting protocol, so that my
application can look at a header and decide if the original request
came in as https (since the origin at this point believes it did not,
because squid is talking to the origin via http and to the client via
https).

Sorry that I seem to be making this complicated, it totally makes
sense in my head (: )

Tory

I'm not sure how to be clearer, and would be happy to talk with someone
directly by email, AIM, or phone.


RE: [squid-users] SSL Accel - Reverse Proxy

2008-05-02 Thread Keith M. Richard
Tory,

If you are going to use certificates from a provider like Verisign
or similar, and will be using an intermediate cert, you will need to chain
them together so as to avoid errors from end users' web browsers.

Keith

 -Original Message-
 From: Amos Jeffries [mailto:[EMAIL PROTECTED]
 Sent: Friday, May 02, 2008 7:26 AM
 To: Tory M Blue
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] SSL Accel - Reverse Proxy
 
 Tory M Blue wrote:
  On Thu, May 1, 2008 at 2:02 AM, Amos Jeffries [EMAIL PROTECTED] wrote:
   You could make a second peer connection using HTTPS between squid and the
   back-end server and ACL the traffic so that only requests coming in via
   SSL are sent over that link, leaving non-HTTPS incoming traffic going over
   the old HTTP link for whatever the server wants to do.

  Thanks Amos

  Not sure that I made myself clear or that I understand your suggestion.

 You made the situation clear. I mentioned the only reasonably easy solution.
 If you didn't understand me, Keith M Richard provided you with the exact
 squid.conf settings I was talking about before.

 Squid can talk HTTPS to the clients, HTTPS to the web server, and still
 sit in the middle caching files. Exactly as it would for HTTP.
 All you need is SSL certificates for each side of squid. Configured as
 Keith gave you.

  I need to allow squid to connect and talk to my servers via http (only); I
  want squid to handle the SSL termination (SSL acceleration, taking the
  overhead off the back-end servers).

  However, since squid talks to the back-end servers via http (and not https,
  even on pages that require https), I need to somehow tell the server that
  the original connection, or the connection that will go back to the client,
  will be https, even though the server is responding via http.

  I handle secure and non-secure fine now. The same website, for example
  apps.domain.com, listens on both 443 and 80, so squid can handle secure and
  non-secure. There is code on apps.domain.com that checks the incoming
  protocol to verify that it's secure; if not, it sends a secure URL for the
  client to come back in on. As you can see, if I allow Squid to handle the
  SSL portion, the back-end server has no way of knowing (the piece I'm
  missing) whether the actual client connection is secure or not (hard to
  explain, possibly).

  Client -- apps.domain.com (443) Squid -- backend server (80)
  backend server (80) -- Squid apps.domain.com (443) -- Client (443)

  I'm wondering if Squid can tell the peer (server) that the original request
  was in fact secure, so that we can tell the application "feel free to
  respond with the secure data via the non-secure port", because squid will
  encrypt the server response and get it back to the client via https.

  Sorry kind of long winded.
  Tory


 --
 Please use Squid 2.6.STABLE20 or 3.0.STABLE5


[squid-users] Inserting text on web page

2008-05-02 Thread Wet Mogwai

I have to collect information from every system on my network. I wrote a
script that will be placed on a local web server. Now, I need to have an
easy way to make sure it is accessible by even the least capable user. I
would like to insert a link to it at the bottom of a page that every
computer would have bookmarked. Basically, I want to run essentially this
regex on a web page as it is being sent to the client:
s/<\/BODY>/<A(defeatnabblefilter) HREF=address>clicky<\/a><\/BODY>/i

Is this possible with squid? How can it be done? I already have squid
running as a transparent proxy. My google-fu is weak with this one. I seem
to only be able to find pages about making regex based ACLs.
-- 
View this message in context: 
http://www.nabble.com/Inserting-text-on-web-page-tp17028057p17028057.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] question about delay_pool

2008-05-02 Thread Chris Robertson

Christian Purnomo wrote:

HI Gurus

I have a requirement where my users want the option to browse the internet at
different speeds.  At present I have 2 instances of squid using different
delay_pool parameters: one has 256 kbit/s and the other has 64 kbit/s.

My question: is there any way I can run only one instance of squid but
still have the 2 options? The users use different proxy.pac files at the
moment to choose their speed.

Thanks 
  


Three possibilities, from the most simple to the most complex:

1) Have squid listen on two different ports.  One port gives speedy access,
one gives slow (see the sketch after this list).
2) Use authentication and give out one login for speedy access, and one 
for slow.  If you are already using basic authentication, you could 
modify the authenticator to strip off the difference before passing the 
credentials to the back end.
3) Use an external helper (based on the session helper) that checks a 
back end store for information on which speed the user has chosen.  A 
local web page could be used to modify the preferences after the initial 
choice.
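
For option 1, a sketch of what a single squid.conf could look like (ports and
limits are illustrative; 256 kbit/s is roughly 32000 bytes/s and 64 kbit/s is
roughly 8000 bytes/s):

http_port 3128                      # "fast" port
http_port 3129                      # "slow" port
acl fast_port myport 3128
acl slow_port myport 3129
delay_pools 2
delay_class 1 1                     # pool 1: class 1 (one aggregate bucket)
delay_parameters 1 32000/32000      # ~256 kbit/s
delay_access 1 allow fast_port
delay_access 1 deny all
delay_class 2 1                     # pool 2: class 1
delay_parameters 2 8000/8000        # ~64 kbit/s
delay_access 2 allow slow_port
delay_access 2 deny all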


Chris


Re: [squid-users] Inserting text on web page

2008-05-02 Thread Chris Robertson

Wet Mogwai wrote:

I have to collect information from every system on my network. I wrote a
script that will be placed on a local web server. Now, I need to have an
easy way to make sure it is accessible by even the least capable user. I
would like to insert a link to it at the bottom of a page that every
computer would have bookmarked. Basically, I want to run essentially this
regex on a web page as it is being sent to the client:
s/<\/BODY>/<A(defeatnabblefilter) HREF=address>clicky<\/a><\/BODY>/i

Is this possible with squid? How can it be done? I already have squid
running as a transparent proxy. My google-fu is weak with this one. I seem
to only be able to find pages about making regex based ACLs.
  


Squid 3 supports ICAP 
(http://en.wikipedia.org/wiki/Internet_Content_Adaptation_Protocol).


Otherwise, you could use a modified version of the Upside-Down-Ternet 
(http://www.ex-parrot.com/~pete/upside-down-ternet.html).  Instead of 
modifying images, you could grab the to-be-modified page, and... 
Well...  Modify it.  Using url_rewrite_access you would be able to limit 
the pages for which your modification script would be called.
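
The squid side of that approach might look roughly like this (the script path
and the acl are placeholders; the rewriter would point matching requests at a
local service that fetches the original page, inserts the link and returns the
modified copy):

acl lan src 192.168.0.0/16
url_rewrite_program /usr/local/bin/insert_link.pl
url_rewrite_children 5
url_rewrite_access allow lan
url_rewrite_access deny all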


Chris


Re: [squid-users] 2.6.STABLE19 and 2.6.STABLE20 missing from mirrors

2008-05-02 Thread Henrik Nordstrom
On tor, 2008-05-01 at 11:38 +0100, Dave Holland wrote:

 I don't know if it's related, but the *.asc signature files for STABLE20
 are missing too. The links to them from
 http://www.squid-cache.org/Versions/v2/2.6/ are also broken. Please can
 they be replaced?

I haven't had time to sign the STABLE20 release yet.

Regards
Henrik



Re: [squid-users] Squid sends TCP_DENIED/407 even on already authenticated users

2008-05-02 Thread Henrik Nordstrom
On ons, 2008-04-30 at 13:29 -0300, Julio Cesar Gazquez wrote:

 We are starting to deploy digest based authentication on a large network, and 
 we found a weird problem: Sometimes authenticated requests are answered by 
 TCP_DENIED/407 responses.

Which Squid version?

Regards
Henrik



Re: [squid-users] SSL Accel - Reverse Proxy

2008-05-02 Thread Henrik Nordstrom
On ons, 2008-04-30 at 11:10 -0700, Tory M Blue wrote:
 I was wondering if there was a way for Squid to pass on some basic
 information to the server citing that the original request was Secure,
 so that the backend server will respond correctly.

Yes. See the front-end-https cache_peer option.
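
A rough sketch of how that fits into an accelerator setup (hostnames, cert
paths and ports are placeholders; see the cache_peer documentation for the
on/off/auto values of the option):

https_port 443 accel cert=/etc/squid/apps.pem key=/etc/squid/apps.key defaultsite=apps.domain.com
# front-end-https adds a "Front-End-Https: On" header the application can check
cache_peer backend.internal parent 80 0 no-query originserver front-end-https=auto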

Regards
Henrik



Re: [squid-users] Help with PROXY TRANSPARENT centos 5.1 and Squid 3.0.Stable4

2008-05-02 Thread Amos Jeffries

[EMAIL PROTECTED] wrote:

Hi

I would like to configure squid as a transparent proxy, but it doesn't work.
If I configure the proxy address and port of the server in my browser, it
works very well.

The configuration is as follows:

squid.conf
-
http_port 3138 transparent


firewall in /etc/rc.d/rc.local

iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 3138

General configuration
--
wan = eth0
intranet = eth1
Centos v 5.1
Squid 3.0 Stable 4

What is wrong ?

This same configuration works with CentOS 4.0 and squid 2.6.


Squid built with --enable-linux-netfilter ?

Previous NAT rule permitting squid external web access missing?

Squid box even routing web requests?

Squid transparent actually working but showing error pages?

Amos
--
Please use Squid 2.6.STABLE20 or 3.0.STABLE5


Re: [squid-users] SSL Accel - Reverse Proxy

2008-05-02 Thread Amos Jeffries

Tory M Blue wrote:

On Fri, May 2, 2008 at 5:25 AM, Amos Jeffries [EMAIL PROTECTED] wrote:


 You made the situation clear. I mentioned the only reasonably easy solution.
 If you didn't understand me, Keith M Richard provided you with the exact
 squid.conf settings I was talking about before.



Obviously I have not, and I apologize.

I want Squid to handle both HTTP/HTTPS (easy, implemented working for months).

I want SQUID to talk to the backend server via HTTP.. period,  (EASY)

I want SQUID to handle the https encryption/decryption and talk to
the origin server via http. (EASY)

I want Squid to somehow inform the origin that the original request
was in fact HTTPS (HOW, is the question at hand)

I can do SSL and pass it and have squid handle the SSL without issue;
the issue is allowing the origin insight as to the originating
protocol, if squid accepts the client connection on 443 and sends the
request to the origin on port 80

The issue is that I don't want my backend server to have to deal with
ssl at all. But I have some applications that require the request be
https (secured pages),  So if Squid could pass something in the header
citing that the original request was made via https, then my code
could take that information, and know that sending secured data via
non secure method is okay, since Squid will encrypt the data and send
to the client before that data leaves my network.

I had similar questions with squid sending the original http version
information in a header, which it does. Now I'm wondering if squid
keeps track of the original requesting protocol, so that my
application can look at the header and decide if the original request
came in as https (Since the origin at this point believes not, since
squid is talking to the origin via http and talking to the client via
https.)

Sorry that I seem to be making this complicated, it totally makes
sense in my head (: )


No worries (on our part at least).

The HTTP-only back-end requirement is a major hurdle for you.

No release of Squid has that capacity in any easy way. You will need to 
add new code to squid one way or another. Or have it added for you.


You could try coding up an ICAP adaptor for Squid 3.0+ that just adds 
headers.
Or make a url-rewrite setup adding a piece to the URL the server 
application receives.





Tory

I'm not sure how to be clearer, and would be happy to talk with someone
directly by email, AIM, or phone.


Amos
--
Please use Squid 2.6.STABLE20 or 3.0.STABLE5


Re: [squid-users] SSL Accel - Reverse Proxy

2008-05-02 Thread Amos Jeffries

Amos Jeffries wrote:

Tory M Blue wrote:
On Fri, May 2, 2008 at 5:25 AM, Amos Jeffries [EMAIL PROTECTED] 
wrote:



 You made the situation clear. I mentioned the only reasonably easy solution.
 If you didn't understand me, Keith M Richard provided you with the exact
 squid.conf settings I was talking about before.



Obviously I have not, and I apologize.

I want Squid to handle both HTTP/HTTPS (easy, implemented working for 
months).


I want SQUID to talk to the backend server via HTTP.. period,  (EASY)

I want SQUID to handle the https encryption/decryption and talk to
the origin server via http. (EASY)

I want Squid to somehow inform the origin that the original request
was in fact HTTPS (HOW, is the question at hand)

I can do SSL and pass it and have squid handle the SSL without issue;
the issue is allowing the origin insight as to the originating
protocol, if squid accepts the client connection on 443 and sends the
request to the origin on port 80

The issue is that I don't want my backend server to have to deal with
ssl at all. But I have some applications that require the request be
https (secured pages),  So if Squid could pass something in the header
citing that the original request was made via https, then my code
could take that information, and know that sending secured data via
non secure method is okay, since Squid will encrypt the data and send
to the client before that data leaves my network.

I had similar questions with squid sending the original http version
information in a header, which it does. Now I'm wondering if squid
keeps track of the original requesting protocol, so that my
application can look at the header and decide if the original request
came in as https (Since the origin at this point believes not, since
squid is talking to the origin via http and talking to the client via
https.)

Sorry that I seem to be making this complicated, it totally makes
sense in my head (: )


No worries (on our part at least).

The HTTP-only back-end requirement is a major hurdle for you.

No release of Squid has that capacity in any easy way. You will need to 
add new code to squid one way or another. Or have it added for you.


Bah, never mind me. See Henrik's post earlier.


Amos
--
Please use Squid 2.6.STABLE20 or 3.0.STABLE5