[squid-users] "Cannot open HTTP Port" on 3.0.STABLE1

2008-02-04 Thread Alan Strassberg
Squid 3.0R1 fails in daemon mode when binding to a privileged port.
It works fine on ports > 1023.

There is nothing running on the ports as verified with "lsof -i" and
"netstat -a"

Debug (squid -X) shows this:
2008/02/04 11:13:24.293| acl_access::containsPURGE: invoked for
'http_access allow manager localhost'
2008/02/04 11:13:24.293| acl_access::containsPURGE: can't create tempAcl
2008/02/04 11:13:24.293| acl_access::containsPURGE:   returning false
2008/02/04 11:13:24.293| leave_suid: PID 8500 called
2008/02/04 11:13:24.293| leave_suid: PID 8500 giving up root, becoming 'squid'
2008/02/04 11:13:24.293| command-line -X overrides: ALL,1

What is "acl_access::containsPURGE: can't create tempAcl" trying to
tell me to fix?

Incidentally, it runs fine on ports < 1024 in non-daemon mode (squid -N).

This is FreeBSD 6.3-STABLE


[squid-users] How to make squid sitting in front of apache and ZOPE

2008-02-04 Thread kk CHN
Hi people,

I have a Zope instance on my server on port 8080. I configured a
vhost entry and RewriteRule for this instance in my httpd-vhost.conf
(pasted below), but my site is too slow due to the large number of
requests. I want squid to sit in front of apache like this:
squid--->apache-->zope
Please see the paste; the current setup is as follows:


ServerAdmin [EMAIL PROTECTED]
ServerName  mysite.net
ServerAlias www.mysite.net

RewriteEngine On


#Main rewrite for zope#

RewriteRule ^/(.*)
http://127.0.0.1:8080/VirtualHostBase/http/www.mysite.net:80/mysite/VirtualHostRoot/$1
[L,P]

ErrorLog /var/log/httpd/mysite.net/error.log
CustomLog /var/log/httpd/mysite.net/access.log combined

This is the existing setup on my machine, so requests are satisfied
like this: apache:80 --> zope:8080.




I installed squid on my FreeBSD-6.2 box from ports.


Can you help me with what I have to change in squid.conf so that
requests are handled first by squid, like this:

SQUID-->APACHE--->ZOPE

I would like to request your kind response; that will help me fix the issue.

Thanks in advance
KK
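Not knowing the rest of the environment, a minimal squid.conf sketch for this kind of accelerator setup (Squid 2.6 syntax; moving Apache to 127.0.0.1:8081 and the port numbers are assumptions, not taken from the post):

```
# Squid takes over port 80; Apache is assumed to be moved to 127.0.0.1:8081
http_port 80 defaultsite=www.mysite.net vhost
cache_peer 127.0.0.1 parent 8081 0 no-query originserver name=apache

# Only accelerate our own site
acl our_site dstdomain www.mysite.net mysite.net
http_access allow our_site
cache_peer_access apache allow our_site
```

Apache then keeps its existing RewriteRule to Zope, so the chain becomes squid:80 --> apache:8081 --> zope:8080.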


Re: [squid-users] Upgrading from 2.5 to 3.0

2008-02-04 Thread Odhiambo Washington
On Feb 4, 2008 11:14 PM, Sherwood Botsford <[EMAIL PROTECTED]> wrote:
> It's not clear from the various places I've groveled how to make
> the transition from 2.5 to 3.0 for transparent proxying.
>
> I compiled squid on OpenBSD 3.9 with the flag --enable-pf-transparent
>
> Pf is set to redirect web calls to local host.
> The first thing I tried
> http_ports 127.0.0.1:3128 transparent

Is this a typo? It should be http_port (without the "s").

-- 
Best regards,
Odhiambo WASHINGTON,
Nairobi,KE
+254733744121/+254722743223
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

"Oh My God! They killed init! You Bastards!"
--from a /. post


[squid-users] Squid +XChat + Bitlbee

2008-02-04 Thread stephane lepain

Hi,

I have added an acl so that I can connect to XChat through my
proxy, and it works fine. Now I want to use bitlbee via XChat to
connect to MSN, with everything going through my proxy. Every time I
launch bitlbee I get the error "HTTP/1.0 503 Service Unavailable. Proxy
traversal failed". The way I connect to bitlbee through XChat is "/server
127.0.0.1", and that is when I get the error mentioned above.
I can't see why I would be able to connect with XChat but not
bitlbee. When I check access.log I do see a TCP_MISS 503. Thanks for
your help.


### ACCESS CONTROLS
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443  # https
acl SSL_ports port 563  # snews
acl SSL_ports port 873  # rsync
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 631 # cups
acl Safe_ports port 873 # rsync
acl Safe_ports port 901 # SWAT
acl purge   method PURGE
acl CONNECT method CONNECT
acl iguane src 192.168.1.8 127.0.0.1
acl heaven src 192.168.1.10
acl zongo  src 192.168.1.5
acl margoullat src 192.168.1.6 192.168.1.7
acl livebox src 192.168.1.1
acl xchat  port 6667 1863
http_access allow CONNECT xchat
http_access deny CONNECT xchat
http_access allow iguane
http_access allow heaven
http_access allow zongo
http_access allow margoullat
http_access allow livebox
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny to_localhost
http_access allow localhost
http_access deny all
icp_access allow iguane
icp_access allow heaven
icp_access allow zongo
icp_access allow margoullat
icp_access allow livebox
icp_access deny ALL
http_port 192.168.1.7:3128
hierarchy_stoplist cgi-bin ?
access_log /var/log/squid/access.log squid
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320
quick_abort_min  0 KB
quick_abort_max  0 KB
quick_abort_pct  95
negative_ttl 2 minutes
request_header_max_size 12 KB
request_body_max_size   0  KB # 0=nolimit
via off
cache_vary off
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
refresh_stale_hit 5 seconds
extension_methods REPORT MERGE MKACTIVITY CHECKOUT
header_access From deny all
header_access Referer deny all
header_access Server deny all
header_access User-Agent deny all
header_access WWW-Authenticate deny all
header_access Link deny all
forward_timeout 2 minutes
cache_mgr [EMAIL PROTECTED]
htcp_port 4827
cache_peer cache.orange.fr parent 3128 3130 default no-query
hosts_file /etc/hosts
append_domain .macitos.fr
memory_pools_limit 50 MB
forwarded_for off
client_db off
reload_into_ims on
coredump_dir /var/spool/squid



[squid-users] Free-SA: Why can't I add my logfile processing software to the web list?

2008-02-04 Thread Oleg
Would you like to add my software to this list:
http://www.squid-cache.org/Scripts/

Home page:
http://free-sa.sourceforge.net

Description:
Free-SA is a tool for statistical analysis of daemons' log files, similar
to SARG. Its main advantages over SARG are much better speed (7x-20x),
support for more reports, and W3C compliance of the generated HTML/CSS
reports. It can be used to help control traffic usage, enforce Internet
access security policies, investigate security incidents, evaluate server
efficiency, and detect configuration problems.

I've tried many times to add it via the form at the end of the list page,
but nothing happens. All generated reports meet W3C requirements and have
"" in the headers section.

If you can't or don't want to add it, please tell me why.

-- 
Best regards, Oleg


Re: [squid-users] Tracking down why I'm being blocked.

2008-02-04 Thread Chris Robertson

Justin Popa wrote:

Afternoon everyone, I have a small problem.

I've got a user who needs to access a website, and when he goes there
he occasionally gets an Access Denied error. Looking in the logs, I
see the following:

10.150.6.53 - hoffmand [04/Feb/2008:13:53:33 -0500] "GET
http://buymtdonline.arinet.com/EW54MTD/MTDC/Include/cfgCustom.js
HTTP/1.0" 200 13276 TCP_MISS:DIRECT
10.150.6.53 - (hoffmand) - [04/Feb/2008:13:53:33 -0500] "GET
http://buymtdonline.arinet.com/scripts/EmpartISAPI.dll? HTTP/1.0" 403
1403 TCP_DENIED:NONE
10.150.6.53 - hoffmand [04/Feb/2008:13:53:33 -0500] "GET
http://buymtdonline.arinet.com/scripts/EmpartISAPI.dll? HTTP/1.0" 200
4908 TCP_MISS:DIRECT

Note: In the second line I added the (hoffmand) because it's obviously
his traffic, just not marked as such. 


Which indicates Squid did not receive authentication details for that 
request.



Now, for the fun stuff. We use
AD for our authentication source and that works great. I've also
looked through our deny statements in squid.conf, of which there are
only 3 and here they are:

1) Blocking based on url. The blocked entries are all like
myspace.com, facebook.com, 2girls1cup.com, etc...

2) Blocking based on streaming media. These entries are like .avi,
.mov, .wmv, etc.

3) Blocking if Active Directory authentication failed.

Any thoughts on what this might be just looking at it? Obviously I'm
sure you guys need more, but any help you can give me in starting to
track down the why would be awesome. Thanks
  


Squid did not receive authentication details with the first request for
EmpartISAPI.dll, threw the 403, and then (likely*) got the same request
with authentication details.  I would assume all this happened without
the client seeing anything, at least in this instance.  I don't know
enough about NTLM authentication to say why the browser would not send
authentication details with that request.


Chris

* With the default squid.conf setting "strip_query_terms on" there is no 
way to tell if that is indeed the same request, but assuming the time 
stamps are accurate, it's likely.
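If it would help confirm that theory, the query terms can be logged temporarily with a standard squid.conf directive (worth reverting afterwards, since query strings may carry sensitive data):

```
# squid.conf: log full query strings so the two EmpartISAPI.dll
# requests can be compared; revert to "on" once done
strip_query_terms off
```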




Re: [squid-users] Squid/heartbeat/pam_auth => login problem

2008-02-04 Thread Chris Robertson

`VL wrote:

Hello.
Recently i setup backup server with heartbeat for my proxy.
Squid setup is very simple, it uses pam_auth for user authorisation
and http authentication. Works fine.

Backup server is exact copy of primary ,except some system config files.
 Both systems are Debian 4.0/squid 2.6.5-6. Squid cache/logs are on DRBD disk.
  


What is pam_auth using for its authentication source (shadow file,
LDAP, NIS)?  If needed, how are you replicating the authentication
resources?



Squid on primary server works just fine - users successfully login and
access internet.

When i move resources to backup node, squid starts ok, listens for connection
 and users are prompted for password when they try to access internet. But
proxy doesn`t allow them to do it - it keeps asking password.

I don`t understand what`s going on: config is the same, binaries are the same,
 client restarted browser.

Maybe there are some things i don`t understand about how squid
authenticates users?
Which logs should i watch and what should i see there to understand
where the problem is?
  


I'd have to guess that the data source pam_auth is using on the primary 
is not available from the backup.



P.S. When i switch back to primary node - all works OK.
  


Chris


Re: [squid-users] Modifying error Responses.

2008-02-04 Thread Chris Robertson

Krist van Besien wrote:

Hello all,

I need a way to modify the body of a response when a server responds
with an error code. I have a suspicion that this may be possible with
squid, but I'm getting a bit confused by the terse documentation.

First a bit of background to better understand the problem.

We run a website that serves content for mobile phones. This content
resides on several backend servers, most of them live with partners
who provide our content. We have an "aggregator" that accepts requests
from mobile phones, and then in turn requests the content form one or
more backend servers. The content these backends deliver is xml, and
this xml gets transformed by our aggregator in to something a mobile
phone can display. We access these backends through a squid proxy to
have some caching.

Our problem is that sometimes the backend sends an error (404, 500
etc..) without a properly formed xml body. This causes a thread on our
aggregator to block until a timeout is reached. As a result a problem
on a backend can lead to resource depletion on our aggregator.

On possible solution would be to modify error responses. We want to
tack our own xml response body in  to any errror response we get from
a backend.
  


Look at the error_map directive 
(http://www.squid-cache.org/Versions/v2/2.6/cfgman/error_map.html).  
While it states it is designed for accelerators, I imagine it should 
work in a forward proxy.
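For reference, a hypothetical error_map line for this case (the URL is made up; it would point at wherever the replacement XML body is served from):

```
# Serve our own XML body whenever a backend answers 404 or 500
error_map http://errors.example.com/backend-error.xml 404 500
```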



I've done some reading, and came across ICAP, eCAP and clientstreams.
From the little documentation that is available I'm not sure how to
attack this problem.
- I only want to modify http responses when the backend server sends
an error code (4xx, 5xx).
- I only want a simple modification. Basically swap out whatever body
in the response with our own.
- We currently use squid 2.6. We could move to 3.0 if needed.

Any suggestions as to what the best way to solve our problem would be
are welcome.

Krist van Besien

  


Chris


[squid-users] Trouble downloading large files with Squid

2008-02-04 Thread mista_eng

Hi guys, just a quick bit on me: I am still a *nix newbie but am trying to
learn the ropes. Since my home server machine only has 1GB of RAM, I use
small distributions and have gotten used to doing things on the command line
only. My dd-wrt router is set to forward http requests to the VM's proxy
ports for transparent proxying. Here's the situation with Squid:

My original VM was a downloaded appliance with a stripped down Debian +
Squid + Dansguardian + adzapper. I had trouble downloading large files; for
example, a file from microsoft.com 1GB in size would stop at around 500MB.
This was a problem that was consistently reproduced, though with ~10%
variability in file size, on different Windows Vista/2003 client machines
in both Firefox and IE 7.0.

I created a VM with 128MB RAM and 3GB of hard drive space based on Ubuntu
JeOS and installed the following at first: TinyProxy and Dansguardian. I
don't think I made any other changes to the dansguardian.conf other than
commenting one of the initial lines that would allow it to work. Large
files, like the one I originally had a problem with, downloaded fine. (I
realize that TinyProxy is different from Squid in that it does not cache.) 

I was hoping adzapper would work with tinyproxy but there was no such luck.
I did "sudo apt-get remove --purge tinyproxy" and then a "sudo apt-get
install squid." The Squid installation went without error messages and I
believe I made only the following change to squid.conf: "http_port 3128
transparent". The rest of the configuration file was left at default.
Adzapper was installed with "sudo apt-get install adzapper".

It appears to work correctly for regular web browsing, but when I try
to download large files (this time I tried a 2GB file from microsoft.com),
the download fails to complete. This brings me back to square one and is
very frustrating. I doubt that it is caused by either dansguardian or
adzapper. TinyProxy was great, but caching and ad-zapping are very
convenient; I'd hate to see them go.

"sudo less /var/log/squid/access.log | grep install.wim" shows only the
requests made to GET the large file. That seems to be correct, though I'm
not sure if there would be anything remarked by squid when the download
fails/stops.
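As a side note, with squid's native log format the result code and byte count are fields 4 and 5 of each line, so the failed transfer can be summarised with a quick awk. A sketch on a fabricated sample line (on the real box, point the awk at /var/log/squid/access.log instead):

```shell
# Write one sample line in squid's native access.log format (fields:
# time, elapsed, client, code/status, bytes, method, URL, ident,
# hierarchy, type) - the values here are made up for illustration
printf '1202158413.293 1234 10.150.6.53 TCP_MISS/200 13276 GET http://example.com/install.wim - DIRECT/1.2.3.4 application/x-ms-wim\n' > /tmp/demo-access.log

# Print the result code and byte count for each install.wim request
awk '/install\.wim/ {print $4, $5}' /tmp/demo-access.log
```

A 200 whose byte count falls well short of the full file size would show exactly where the transfer is being cut off.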

Any ideas on how to fix?
-- 
View this message in context: 
http://www.nabble.com/Trouble-downloading-large-files-with-Squid-tp15277650p15277650.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] Squid Win32 SSL version

2008-02-04 Thread Guido Serassio

Hi,

At 11:05 04/02/2008, Tomer Brand wrote:

Hi All,

I am trying to run SQUID as reverse proxy with SSL.
I downloaded 2.6.STABLE18 with SSL support from
http://squid-mirror.acmeconsulting.it/download/dl-squid.html
I copied:
 - ssleay32.dll
 - libeay32.dll

To system32 and created a certificate using OpenSSL.

SQUID process gets terminated when the proxy machine gets HTTPS request
(Working great for HTTP) with the following message:

OPENSSL_Uplink(100EB010,07): no OPENSSL_Applink

Any idea?


As you may have read, the SSL-enabled binaries are declared "experimental".
There are two reasons for this:
- The SSL binaries are generated automatically during a release,
without any testing of the SSL functionality.

- I use pre-built Windows OpenSSL libraries, which are outside my
quality control.

Could you please send me the SSL section of your squid.conf,
so I can do some testing?


Regards

Guido



-

Guido Serassio
Acme Consulting S.r.l. - Microsoft Certified Partner
Via Lucia Savarino, 1   10098 - Rivoli (TO) - ITALY
Tel. : +39.011.9530135  Fax. : +39.011.9781115
Email: [EMAIL PROTECTED]
WWW: http://www.acmeconsulting.it/



[squid-users] Tracking down why I'm being blocked.

2008-02-04 Thread Justin Popa
Afternoon everyone, I have a small problem.

I've got a user who needs to access a website, and when he goes there
he occasionally gets an Access Denied error. Looking in the logs, I
see the following:

10.150.6.53 - hoffmand [04/Feb/2008:13:53:33 -0500] "GET
http://buymtdonline.arinet.com/EW54MTD/MTDC/Include/cfgCustom.js
HTTP/1.0" 200 13276 TCP_MISS:DIRECT
10.150.6.53 - (hoffmand) - [04/Feb/2008:13:53:33 -0500] "GET
http://buymtdonline.arinet.com/scripts/EmpartISAPI.dll? HTTP/1.0" 403
1403 TCP_DENIED:NONE
10.150.6.53 - hoffmand [04/Feb/2008:13:53:33 -0500] "GET
http://buymtdonline.arinet.com/scripts/EmpartISAPI.dll? HTTP/1.0" 200
4908 TCP_MISS:DIRECT

Note: In the second line I added the (hoffmand) because it's obviously
his traffic, just not marked as such. Now, for the fun stuff. We use
AD for our authentication source and that works great. I've also
looked through our deny statements in squid.conf, of which there are
only 3 and here they are:

1) Blocking based on url. The blocked entries are all like
myspace.com, facebook.com, 2girls1cup.com, etc...

2) Blocking based on streaming media. These entries are like .avi,
.mov, .wmv, etc.

3) Blocking if Active Directory authentication failed.

Any thoughts on what this might be just looking at it? Obviously I'm
sure you guys need more, but any help you can give me in starting to
track down the why would be awesome. Thanks


[squid-users] cache peer proxy-only - is there a middle ground?

2008-02-04 Thread Chris Woodfield

Hi all,

I'm facing an issue where we'd like to implement cache peering on our  
squid farms, primarily to leverage the combined disk capacity across  
all our boxes into a larger cache. I would presume that this requires  
the use of the proxy-only directive to avoid content duplication.  
However, this has raised the issue of server overload - there's a very  
real possibility that a single hot object, if it only lives on a  
single server, could overload that server with requests from all the  
other peers in the event of a flash crowd. I'd like to find some  
middle ground.


I'm wondering if there's a way to configure squid such that content  
retrieved from peers is cached locally, but for a much shorter period  
of time than content the cache retrieves directly. Is this possible  
within squid, or should this be a feature request?
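For context, the peering being described is a mesh of sibling entries with proxy-only (Squid 2.6 syntax; hostnames are hypothetical):

```
# On each farm member, one line per peer; proxy-only prevents an
# object fetched from a sibling from being stored locally - the very
# behaviour a shorter local TTL would soften
cache_peer peer2.example.com sibling 3128 3130 proxy-only
cache_peer peer3.example.com sibling 3128 3130 proxy-only
```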


-C




[squid-users] Upgrading from 2.5 to 3.0

2008-02-04 Thread Sherwood Botsford
It's not clear from the various places I've groveled how to make 
the transition from 2.5 to 3.0 for transparent proxying.


I compiled squid on OpenBSD 3.9 with the flag --enable-pf-transparent

Pf is set to redirect web calls to local host.
The first thing I tried
http_ports 127.0.0.1:3128 transparent

This by itself gives me an error message:
Unable to forward this request at this time.
This request could not be forwarded to the origin server or to 
any parent caches. The most likely cause for this error is that:
The cache administrator does not allow this cache to make direct 
connections to origin servers, and

All configured parent caches are currently unreachable.

Ok.  So let's stumble through the other network options.

So I added:

always_direct allow all

This works, but I don't understand why this is not the default.
I expect that the largest number of squid installations are for a 
single proxy server by itself for a single machine or small 
network.  Because I don't understand this, I suspect that I'm wrong.


Should I have:
prefer_direct on

(I tried this before trying always_direct.  By itself it's not 
sufficient.)


Anyone have a pointer to a web page making the transition between 
2.5 and 3.0?
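For what it's worth, the pf side of such a setup (OpenBSD 3.9-era rdr syntax; the interface and network here are placeholders) usually looks something like:

```
# /etc/pf.conf - redirect the LAN's outbound web traffic to squid
rdr on fxp0 inet proto tcp from 192.168.0.0/24 to any port 80 -> 127.0.0.1 port 3128
```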


[squid-users] "Cannot open HTTP Port" on 3.0.STABLE1

2008-02-04 Thread Alan Strassberg
Squid 3.0R1 fails in daemon mode when binding to a privileged  port.
Works fine on ports > 1023.

There is nothing running on the ports as verified with "lsof -i" and
"netstat -a"

Debug (squid -X) shows this:
 2008/02/04 11:13:24.293| acl_access::containsPURGE: invoked for
'http_access allow manager localhost'
2008/02/04 11:13:24.293| acl_access::containsPURGE: can't create tempAcl
2008/02/04 11:13:24.293| acl_access::containsPURGE:   returning false
 2008/02/04 11:13:24.293| leave_suid: PID 8500 called
2008/02/04 11:13:24.293| leave_suid: PID 8500 giving up root, becoming 'squid'
2008/02/04 11:13:24.293| command-line -X overrides: ALL,1

What is "acl_access::containsPURGE: can't create tempAcl" trying to
tell me to fix?

This is FreeBSD 6.3-STABLE


Re: [squid-users] Squid Blocking non-listed websites

2008-02-04 Thread Ben Hollingsworth

Amos Jeffries wrote:

Go Wow wrote:

whats the command to get only those configuration lines from
squid.conf leaving the comment lines. If i get it i will post my
config file.


grep -v -E "^#" squid.conf


Or to also remove all the empty lines and trailing comments from valid lines:

sed -e 's/ *#.*//' squid.conf | sed -e '/^$/d'
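The two stages can also be collapsed into a single sed invocation. A self-contained demo on a three-line sample (on a real system, point the same sed at squid.conf):

```shell
# Build a tiny sample config to demonstrate on
printf 'http_port 3128 # listen here\n\n# a full-line comment\nvisible_hostname proxy\n' > /tmp/demo-squid.conf

# Strip trailing comments, then delete the lines that are now empty
sed -e 's/ *#.*//' -e '/^ *$/d' /tmp/demo-squid.conf
```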



Re: [squid-users] for new squid make install

2008-02-04 Thread Angela Williams
Hi!
On Monday 04 February 2008, squid learner wrote:
> make[3]: Leaving directory
> `/root/squid-2.6.STABLE14/tools'
> make[2]: Leaving directory
> `/root/squid-2.6.STABLE14/tools'
> make[1]: Leaving directory
> `/root/squid-2.6.STABLE14/tools'
> make[1]: Entering directory `/root/squid-2.6.STABLE14'
> make[2]: Entering directory `/root/squid-2.6.STABLE14'
> make[2]: Nothing to be done for `install-exec-am'.
> make[2]: Nothing to be done for `install-data-am'.
> make[2]: Leaving directory `/root/squid-2.6.STABLE14'
> make[1]: Leaving directory `/root/squid-2.6.STABLE14'
>
> I am confused by these two lines:
>
> make[2]: Nothing to be done for `install-exec-am'.
> make[2]: Nothing to be done for `install-data-am'.

Just means nothing to compile here!


>
> can i go next or re start from scratch again

If you had no errors at the end of the make, you just do the next step -
normally make install.

Cheers
Ang


-- 
Angela Williams Enterprise Outsourcing
Unix/Linux & Cisco spoken here! Bedfordview
[EMAIL PROTECTED]   Gauteng South Africa

Smile!! Jesus Loves You!!


[squid-users] persistent_request_timeout, entourage and outlook : retrieving distant images in HTML email

2008-02-04 Thread François Cami

Hi,

We are using Squid as a http proxy. Web clients are able
to load webpages through it without any problems. However,
images linked in an HTML email hosted on a remote server
(be it Apache or IIS) took a long time to load (2 to 10
minutes for 8 images, which is rather too much).
Using web browsers (Firefox, Konqueror, IE6/7) to load the
images worked fine, so we suspected the email clients
(Outlook and Entourage) to be the culprits.

BTW, this looks an awful lot like the same problem as this one:
http://www.tech-archive.net/Archive/Mac/microsoft.public.mac.office.entourage/2007-04/msg00053.html

Setting persistent_request_timeout to 0 :
persistent_request_timeout 0 seconds
solved the problem without any measurable adverse effect on
our end.

Are there any drawbacks to this workaround ?

Maybe this should go into the FAQ...

For the record, both Squid version squid-2.6.STABLE6-5.el5_1.2
(CentOS 5.1) and 3.0.STABLE1 (on CentOS 5.1) had the problem.

Best regards,

François Cami


[squid-users] for new squid make install

2008-02-04 Thread squid learner
make[3]: Leaving directory
`/root/squid-2.6.STABLE14/tools'
make[2]: Leaving directory
`/root/squid-2.6.STABLE14/tools'
make[1]: Leaving directory
`/root/squid-2.6.STABLE14/tools'
make[1]: Entering directory `/root/squid-2.6.STABLE14'
make[2]: Entering directory `/root/squid-2.6.STABLE14'
make[2]: Nothing to be done for `install-exec-am'.
make[2]: Nothing to be done for `install-data-am'.
make[2]: Leaving directory `/root/squid-2.6.STABLE14'
make[1]: Leaving directory `/root/squid-2.6.STABLE14'

I am confused by these two lines:

make[2]: Nothing to be done for `install-exec-am'.
make[2]: Nothing to be done for `install-data-am'.


Can I go on to the next step, or should I restart from scratch?


  



[squid-users] Squid/heartbeat/pam_auth => login problem

2008-02-04 Thread `VL
Hello.
Recently I set up a backup server with heartbeat for my proxy.
The squid setup is very simple: it uses pam_auth for user authorisation
and HTTP authentication. It works fine.

The backup server is an exact copy of the primary, except for some system
config files. Both systems are Debian 4.0 / squid 2.6.5-6. The squid
cache/logs are on a DRBD disk.

Squid on the primary server works just fine - users successfully log in
and access the internet.

When I move resources to the backup node, squid starts OK, listens for
connections, and users are prompted for a password when they try to
access the internet. But the proxy doesn't let them in - it keeps asking
for the password.

I don't understand what's going on: the config is the same, the binaries
are the same, and the client restarted the browser.

Maybe there are some things I don't understand about how squid
authenticates users?
Which logs should I watch, and what should I see there, to understand
where the problem is?

P.S. When I switch back to the primary node, all works OK.


Re: [squid-users] Modifying error Responses.

2008-02-04 Thread Amos Jeffries

Krist van Besien wrote:

Hello all,

I need a way to modify the body of a response when a server responds
with an error code. I have a suspicion that this may be possible with
squid, but I'm getting a bit confused by the terse documentation.


Which documentation?



First a bit of background to better understand the problem.

We run a website that serves content for mobile phones. This content
resides on several backend servers, most of them live with partners
who provide our content. We have an "aggregator" that accepts requests
from mobile phones, and then in turn requests the content form one or
more backend servers. The content these backends deliver is xml, and
this xml gets transformed by our aggregator in to something a mobile
phone can display. We access these backends through a squid proxy to
have some caching.

Our problem is that sometimes the backend sends an error (404, 500
etc..) without a properly formed xml body. This causes a thread on our
aggregator to block until a timeout is reached. As a result a problem
on a backend can lead to resource depletion on our aggregator.

On possible solution would be to modify error responses. We want to
tack our own xml response body in  to any errror response we get from
a backend.

I've done some reading, and came across ICAP, eCAP and clientstreams.
From the little documentation that is available I'm not sure how to
attack this problem.
- I only want to modify http responses when the backend server sends
an error code (4xx, 5xx).
- I only want a simple modification. Basically swap out whatever body
in the response with our own.
- We currently use squid 2.6. We could move to 3.0 if needed.

Any suggestions as to what the best way to solve our problem would be
are welcome.


You will likely need to move to squid-3 to get this behaviour within squid.

If you wish to use squid for this any of those three ways you mention 
above would be possible approaches.
ICAP and ClientStreams are available in 3.x; eCAP is largely on the 
drawing board still. Contact squid-dev if you would be interested in 
sponsoring any of that development.


In Squid-3 there may also be a config hack to configure a simple error 
response replacement:


  acl PageReplace http_status 404 500
  deny_info http://.../custom_error_content.xml PageReplace
  http_reply_access deny PageReplace


Amos
--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.


[squid-users] Modifying error Responses.

2008-02-04 Thread Krist van Besien
Hello all,

I need a way to modify the body of a response when a server responds
with an error code. I have a suspicion that this may be possible with
squid, but I'm getting a bit confused by the terse documentation.

First a bit of background to better understand the problem.

We run a website that serves content for mobile phones. This content
resides on several backend servers, most of them live with partners
who provide our content. We have an "aggregator" that accepts requests
from mobile phones, and then in turn requests the content form one or
more backend servers. The content these backends deliver is xml, and
this xml gets transformed by our aggregator in to something a mobile
phone can display. We access these backends through a squid proxy to
have some caching.

Our problem is that sometimes the backend sends an error (404, 500
etc..) without a properly formed xml body. This causes a thread on our
aggregator to block until a timeout is reached. As a result a problem
on a backend can lead to resource depletion on our aggregator.

On possible solution would be to modify error responses. We want to
tack our own xml response body in  to any errror response we get from
a backend.

I've done some reading, and came across ICAP, eCAP and clientstreams.
From the little documentation that is available I'm not sure how to
attack this problem.
- I only want to modify http responses when the backend server sends
an error code (4xx, 5xx).
- I only want a simple modification. Basically swap out whatever body
in the response with our own.
- We currently use squid 2.6. We could move to 3.0 if needed.

Any suggestions as to what the best way to solve our problem would be
are welcome.

Krist van Besien

-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED]
Bremgarten b. Bern, Switzerland
--
A: It reverses the normal flow of conversation.
Q: What's wrong with top-posting?
A: Top-posting.
Q: What's the biggest scourge on plain text email discussions?


[squid-users] Squid Win32 SSL version

2008-02-04 Thread Tomer Brand
Hi All,

I am trying to run SQUID as reverse proxy with SSL.
I downloaded 2.6.STABLE18 with SSL support from
http://squid-mirror.acmeconsulting.it/download/dl-squid.html 
I copied:
 - ssleay32.dll
 - libeay32.dll

To system32 and created a certificate using OpenSSL.

SQUID process gets terminated when the proxy machine gets HTTPS request
(Working great for HTTP) with the following message:

OPENSSL_Uplink(100EB010,07): no OPENSSL_Applink

Any idea?