Re: [squid-users] Sending on Group names after Kerb LDAP look-up

2010-03-23 Thread Amos Jeffries

Nick Cairncross wrote:

Hi All,

Things seem to be going well with my Squid project so far: a combined
Mac/Windows AD environment using Kerberos authentication with NTLM
fallback. I (hopefully) seem to be getting the hang of it! I've
been trying out the Kerberos LDAP lookup tool and have a couple of
questions (I think the answers will be no):

- Is it possible to wrap up the matched group name(s) in the header
as it gets sent onwards to my peer? I used to use the authentication


I don't think so.
 There is a lot of manipulation magic you can do with the ICAP or eCAP 
interfaces that is not possible directly in Squid though.


The risk is breaking back-end services that can't handle the altered 
header. Since you say below that you are already doing this, I assume it is 
a non-risk for your network.



agent that came from our A/V provider. This tool ran as a service and
linked into our ISA. Once a user authenticated, their group membership
was forwarded along with their username to my peer (Scansafe). The
problem is that it only does NTLM auth. It added the group
(WINNT://[group]) into the header and then a rule base at the peer
site could be set up based on group. Since I am using Kerberos I
wondered whether it's possible to send the results of the Kerb LDAP
auth? I already see the user on the peer as the Kerberos login. It
would be great if I could include the group or groups...


You can do transparent login pass-thru to the peer (login=PASS). You can 
log Squid-3.1 into the peer with Kerberos credentials.
 But I do not think the Kerberos details get decoded to a 
username/password pair for Squid to pass back.
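As a sketch of those two approaches (peer hostname and port here are
placeholders, not taken from the thread), the cache_peer login options
look like this; only one of the two lines would actually be used:

```conf
# Pass each client's own proxy-auth credentials through to the peer:
cache_peer peer.example.com parent 8080 0 no-query login=PASS

# Or (Squid-3.1) have Squid itself authenticate to the peer via Kerberos:
cache_peer peer.example.com parent 8080 0 no-query login=NEGOTIATE
```

Neither option attaches group names to the forwarded request; that kind of
header manipulation would need an ICAP/eCAP service.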




This is what I use currently: cache_peer proxy44.scansafe.net parent
8080 7 no-query no-digest no-netdb-exchange login=* (From
http://www.hutsby.net/2008/03/apple-mac-osx-squid-and-scansafe.html)

- Are there plans to integrate the lookup tool in future versions of
Squid? I've enjoyed learning about compiling but.. just wondering..



No. Plans are for all network-specific adaptation to be done via 
external helper processes.  The *CAP interfaces for add-on modules allow 
all the adaptation extras to be plugged in as needed in a very powerful way.
 Check that AV tool; it likely has an ICAP interface Squid-3 can plug 
into already.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18


Re: [squid-users] TPROXY and DansGuardian

2010-03-23 Thread Amos Jeffries

Jason Healy wrote:

We've used a few different Squid setups over the years, from a
vanilla setup to a transparent interception proxy, to a fully
transparent tproxy.

We're now using DansGuardian to keep tabs on our users (we don't
block; we just monitor).  This is good, but unfortunately it doesn't
appear to be compatible with tproxy (DG only understands interception
or regular proxying).

Does anyone know of a way to use DG as an interception proxy, but
configure Squid to use the "real" client IP address in its outgoing
requests?  I have no idea if this is possible since it would be quite
a mess of different proxy schemes (DG would be interception-based
using routing, Squid would use X-Forwarded-For to get the real IP,
and then tproxy to make the request using the client address).


It was not safe to do that when I first added TPROXY. XFF handling has been 
improved since, so the risk is now much lower, but still present. I'll 
consider it for a future release.
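For what it's worth, the XFF half on its own (trusting the client IP that
DG inserts into the header) is already configurable; a sketch, assuming DG
runs on the same box (the address is hypothetical):

```conf
# Only believe X-Forwarded-For values arriving from the DansGuardian host:
acl dg_host src 127.0.0.1
follow_x_forwarded_for allow dg_host
follow_x_forwarded_for deny all
```

The unsupported part is then spoofing that indirect client address on the
outgoing (TPROXY) connection.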




Alternately, does anyone know of a good web monitoring product that
works in a "sniffer" mode so I don't need to insert it inline?  I
basically would like to use tproxy, but also need to log users who
are going to naughty sites...



From what I understand of your requirements you don't actually need DG 
or anything but Squid alone. Squid can log in any format you choose to 
configure. If there is anything it does not yet log we'd be interested 
in hearing about that.
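For example, logging the authenticated user plus URL per request needs
nothing more than a logformat line; a sketch (the log path is a
placeholder):

```conf
# Custom format: timestamp, client IP, auth user, method, URL, HTTP status
logformat monitor %ts.%03tu %>a %un %rm %ru %>Hs
access_log /var/logs/access.log monitor
```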


Amos


Re: [squid-users] Peering squid multiple instances.

2010-03-23 Thread Amos Jeffries

GIGO . wrote:

I have successfully set up multiple instances of squid for the sake 
of surviving a cache directory failure. However I still have a few confusions 
regarding peering multiple instances of squid. Please guide me in this respect.
 
 
In my setup I understand that my second instance is doing caching on behalf of requests sent to Instance 1. Am I correct?
 


You are right in your understanding of what you have configured. I've 
some suggestions below on a better topology though.


 
 
Which protocol should I select for peers in this scenario? What is the recommendation (CARP, digest, or ICP/HTCP)?
 


Under your current config there is no selection, ALL requests go through 
both peers.


Client -> Squid1 -> Squid2 -> WebServer

or

Client -> Squid2 -> WebServer

thus Squid2 and WebServer are both bottleneck points.

 
 
Is the syntax of my cache_peer directive correct, or should the loopback address not be used this way?
 


Syntax is correct.
Use of localhost does not matter. It's a useful choice for providing 
some security and extra speed to the inter-proxy traffic.



 
What is the recommended protocol for peering squids with each other?
 


It does not matter for your existing config, because of the "parent" 
selection method.


 
 
What is the recommended protocol for peering squid with ISA Server?
 


"parent" is the peering method for origin web servers. With 
"originserver" selection method.


 
Instance 1:


visible_hostname vSquidlhr
unique_hostname vSquidMain
pid_filename /var/run/squid3main.pid
http_port 8080
icp_port 0
snmp_port 3161
access_log  /var/logs/access.log
cache_log /var/logs/cache.log

cache_peer 127.0.0.1 parent 3128 0 default no-digest no-query proxy-only 
no-delay
prefer_direct off
cache_dir aufs /var/spool/squid3 100 256 16
coredump_dir /var/spool/squid3
cache deny all
 
 
 
Instance 2:
 
visible_hostname SquidProxylhr

unique_hostname squidcacheprocess
pid_filename /var/run/squid3cache.pid
http_port 3128
icp_port 0
snmp_port 7172
access_log /var/logs/access2.log
cache_log /var/logs/cache2.log
 


coredump_dir /cache01/var/spool/squid3
cache_dir aufs /cache01/var/spool/squid3 5 48 768
cache_swap_low 75
cache_mem 1000 MB
range_offset_limit -1
maximum_object_size 4096 MB
minimum_object_size 12 bytes
quick_abort_min -1
 


What I suggest for failover is two proxies configured identically:

 * a cache_peer "sibling" entry between them, using digest selection, 
pointed at localhost (on their different ports);
 * permitting both to cache data from the origin (and optionally from the 
peer);
 * a cache_peer "parent" entry to the web server, with "originserver" 
and "default" selection enabled.
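As a rough squid.conf sketch of that topology for one of the pair (ports
and origin hostname are placeholders; the other proxy mirrors it with the
ports swapped):

```conf
http_port 8080

# Sibling instance on the same host, selected via cache digests:
cache_peer 127.0.0.1 sibling 8081 0 no-query proxy-only

# Origin web server as a default originserver parent:
cache_peer origin.example.com parent 80 0 no-query originserver default
```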



This topology uses a single layer of multiple proxies, possibly with 
hardware or iptables-style load balancing sending alternate requests to 
each of the two proxies' listening ports.
  It is useful for small-to-medium businesses requiring scale with minimal 
hardware, probably re-using load balancers purchased in earlier attempts. 
IIRC the benchmark for this is somewhere around 600-700 req/sec.



The next step up in performance and HA is an additional layer of 
Squid acting as the load balancer, doing CARP to reduce cache duplication 
and remove sibling data transfers. This form of scaling out is how 
WikiMedia serve their sites.
 It is documented somewhat in the wiki as ExtremeCarpFrontend, with a 
benchmark so far of 990 req/sec for a single box.



These maximum benchmark speeds are only achievable in reverse-proxy 
setups. Regular ISP setups can expect their maximum to be somewhere 
below 1/2 or 1/3 of that rate, due to the content diversity and RTT lag 
of remote servers.


Amos


[squid-users] Peering squid multiple instances.

2010-03-23 Thread GIGO .

I have successfully set up multiple instances of squid for the sake 
of surviving a cache directory failure. However I still have a few confusions 
regarding peering multiple instances of squid. Please guide me in this respect.
 
 
In my setup I understand that my second instance is doing caching on behalf of 
requests sent to Instance 1. Am I correct?
 
 
 
Which protocol should I select for peers in this scenario? What is the recommendation 
(CARP, digest, or ICP/HTCP)?
 
 
 
Is the syntax of my cache_peer directive correct, or should the loopback address 
not be used this way?
 
 
 
What is the recommended protocol for peering squids with each other?
 
 
 
What is the recommended protocol for peering squid with ISA Server?
 
 
 
Instance 1:

visible_hostname vSquidlhr
unique_hostname vSquidMain
pid_filename /var/run/squid3main.pid
http_port 8080
icp_port 0
snmp_port 3161
access_log  /var/logs/access.log
cache_log /var/logs/cache.log

cache_peer 127.0.0.1 parent 3128 0 default no-digest no-query proxy-only 
no-delay
prefer_direct off
cache_dir aufs /var/spool/squid3 100 256 16
coredump_dir /var/spool/squid3
cache deny all
 
 
 
Instance 2:
 
visible_hostname SquidProxylhr
unique_hostname squidcacheprocess
pid_filename /var/run/squid3cache.pid
http_port 3128
icp_port 0
snmp_port 7172
access_log /var/logs/access2.log
cache_log /var/logs/cache2.log
 

coredump_dir /cache01/var/spool/squid3
cache_dir aufs /cache01/var/spool/squid3 5 48 768
cache_swap_low 75
cache_mem 1000 MB
range_offset_limit -1
maximum_object_size 4096 MB
minimum_object_size 12 bytes
quick_abort_min -1
 
 
 
regards,

  

Re: [squid-users] squid 3.0.19 + transparent + sslbump

2010-03-23 Thread Amos Jeffries

Stefan Reible wrote:

Hi,

I want to use https with the viralator (http is working).
I'm prerouting Port 80 to Port 3128 for http.

Is there an option like https_port in my version?

Now I want to set following option in squid.conf:

http_port 3128 sslBump 
cert=/etc/squid/ssl_cert/proxy.testdomain.deCert.pem 
key=/etc/squid/ssl_cert/private/proxy.testdomain.deKey_without_Passphrase.pem 



but I get:

squid1 ~ # squid -D
FATAL: Bungled squid.conf line 9: http_port 3128 sslBump 
cert=/etc/squid/ssl_cert/proxy.testdomain.deCert.pem 
key=/etc/squid/ssl_cert/private/proxy.testdomain.deKey_without_Pp.pem

Squid Cache (Version 3.0.STABLE19): Terminated abnormally

The squid should run in transparent mode.



_Which_ 'transparent' mode?

 * WPAD transparent configuration
 * Domain policy transparent configuration
 * NAT interception
 * TPROXY interception
 * transparent HTTP traffic relay
 * transparent authentication (single-sign-on)
 * transparent encoding crypto.

I know it sounds like I'm being pedantic, but the specific meaning does 
matter with Squid.



Thank you very much for viralator support, it's very nice ;)

Stefan



Some facts worth knowing:

 * 3.0 does not support sslBump or any other form of HTTPS 
man-in-the-middle interception. 3.1 is required for that.


 * sslBump in 3.1 requires that the client machines all have a CA 
certificate installed to make them trust the proxy for decryption.


 * sslBump requires clients to be configured to use the proxy. (Some 
of the 'transparent' modes above work this way; some do not.)
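On 3.1, once a client-trusted CA certificate is in place, the bumping
config reduces to roughly this sketch (reusing the cert paths from the
original post; 3.0 will reject the sslBump option):

```conf
http_port 3128 sslBump cert=/etc/squid/ssl_cert/proxy.testdomain.deCert.pem key=/etc/squid/ssl_cert/private/proxy.testdomain.deKey_without_Passphrase.pem
ssl_bump allow all
```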


Amos


RE: [squid-users] Disable user accounts

2010-03-23 Thread David Parks
I created my own authentication module, and tried setting nonce_max_duration
to "1 minutes" (I also tried "1 minute", and "2 minutes" to make sure there
wasn't something funky with the word minutes). My authentication module logs
every time it is called. 

But when I sit there and hit refresh on the browser every ~15 seconds, I
don't get any re-authentication calls being made to the auth module (only
the initial authentication). I've kept this test up for over 5 min with no
re-authentication attempts to the auth module.

Did I misunderstand something, possibly? Or is nonce_max_duration not
actually causing re-authentication to the auth module (perhaps it just
sticks with the cached authentication in squid)?

So far the only two ways to lock out users that I understand are the
nonce_max_duration (if I can make it work as I currently understand it
should), and banned user list ACLs w/ "-k reload" calls. If anyone thinks
I'm missing anything else let me know.

Thanks,
Dave



Quote from a previous email:

>   nonce_max_duration determines how long the nonces may be used for. 
> It's closer to what you are wanting, but I'm not sure if there are any
nasty side effects of setting it too low.
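For reference, nonce_max_duration is an option of the digest scheme; a
minimal sketch (the helper and password-file paths are hypothetical):

```conf
auth_param digest program /usr/local/squid/libexec/digest_pw_auth /etc/squid/digest_passwd
auth_param digest nonce_max_duration 1 minute
auth_param digest nonce_garbage_interval 1 minute
```

Note that a nonce expiring forces the browser to obtain a fresh nonce,
which does not necessarily mean the helper is re-run; Squid may satisfy
the new digest from its cached credentials, which would match the
behaviour described above.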







Re: [squid-users] FileDescriptor Issues

2010-03-23 Thread a...@gmail

Hi,
Which OS are you using?
my start up script is located here
/usr/local/squid/sbin/squid

The packaged one I had before, the startup script was located in
/etc/init.d/squid3

But not the compiled version
Thank you
Regards
Adam
- Original Message - 
From: "Bradley, Stephen W. Mr." 

To: "a...@gmail" 
Sent: Tuesday, March 23, 2010 2:02 PM
Subject: RE: [squid-users] FileDescriptor Issues


A problem I found is that you have to set ulimit BEFORE you compile it as 
well.


I built everything from scratch, and every time I rebuild it I have to:

ulimit -HSn XX

(XX being whatever you want it to be)


In /etc/init.d/squid  (the script I use)


[snip]
PATH=/usr/bin:/sbin:/bin:/usr/sbin
export PATH
ulimit -HSn 32768
[snip]


That way every time I run the script it makes sure that it sets the FDs up 
to where they need to be.




I'm guessing that if you have a busy server it is crashing after a little 
while of running...  ;-)


steve







-Original Message-
From: a...@gmail [mailto:adbas...@googlemail.com]
Sent: Monday, March 22, 2010 11:10 PM
To: Amos Jeffries; squid-users@squid-cache.org
Subject: Re: [squid-users] FileDescriptor Issues

Thanks Amos for this tip I will try that and keep you posted
Regards
Adam

- Original Message - 
From: "Amos Jeffries" 

To: 
Sent: Tuesday, March 23, 2010 2:54 AM
Subject: Re: [squid-users] FileDescriptor Issues



On Tue, 23 Mar 2010 02:19:40 -, "a...@gmail" 
wrote:

Thanks Ivan for your suggestion
But in my case it's slightly different
I have no squid in

/etc/default/squid


/etc/init.d/ (mine is located in /usr/local/squid/sbin/squid) unless I try
this: /usr/local/squid/sbin/squid
  SQUID_MAXFD=4096



/etc/default/squid is a configuration file for configuring the system
init.d/squid script.
It does not exist normally, you create it only when overrides are needed.

.../sbin/squid is supposed to be the binary application which gets run.


And then restart it, but I am not sure. I am using Ubuntu Hardy. I think this
tip is for the Squid that is packaged with Ubuntu and not the compiled
Squid


Bash shells reset the descriptor limit back down towards 1024
each time a new one is spawned. It _always_ must be raised to the
wanted limit before running Squid, whether you do it manually on the
command line each time, in the init.d script, or in some other custom
starter script.


My Ubuntu systems show default OS limits of just over 24K FD available.

Building Squid with:
 ulimit -HSn 65535 && ./configure --with-filedescriptors=65535 ...
 make install

starting:  squid -f /etc/squid.conf
squid shows 1024

starting: ulimit -HSn 64000 && squid -f /etc/squid.conf
squid shows 64000

Amos




Re: [squid-users] clients -- SSL SQUID -- NON SSL webserver

2010-03-23 Thread Guido Marino Lorenzutti

Amos Jeffries wrote:


Luis Daniel Lucio Quiroz wrote:

On Monday 22 March 2010 21:47:05, Guido Marino Lorenzutti wrote:

Hi people: I'm trying to give my clients access to my non-SSL
webservers through my reverse proxies by adding SSL support on them.

Like the subject tries to explain:

WAN CLIENTS --- SSL SQUID (443) --- NON SSL webserver (80).

This is the relevant part of the squid.conf:

https_port 22.22.22.22:443 cert=/etc/squid/crazycert.domain.com.crt
key=/etc/squid/crazycert.domain.com.key
defaultsite=crazycert.domain.com vhost
sslflags=VERIFY_CRL_ALL,VERIFY_CRL cafile=/etc/squid/ca.crt
clientca=/etc/squid/ca.crt


"cafile=" option overrides the "clientca=" option and contains a  
single CA to be checked.


Set clientca= to the file containing the officially accepted global  
CA certificates. The type used for multiple certificates is a .PEM  
file if I understand it correctly.


If you have issued the clients with certificates signed by your own  
custom CA, then add that to the list as well.


I will assume that you know how to do that since you are requiring it.



Well, with your suggestion now I can connect. But it seems that  
something is missing: I can connect with any browser, with or without  
a cert installed.

Maybe VERIFY_CRL_ALL,VERIFY_CRL doesn't work as I expected?

Any ideas?

Tnxs in advance.



Re: [squid-users] FileDescriptor Issues

2010-03-23 Thread a...@gmail

Hi All
I have recompiled squid with 6400 FDs. I tried with 65535 and got a warning
that 65535 is not a multiple of 64 and may cause some problems on some 
systems, so I changed it to 6400.
I completed the installation and started Squid; now it's showing 6400 although 
the system is set to 65535.
I have one question: from your experience with squid, would 6400 FDs be 
enough?


Thank you all for your help
Regards
Adam





RE: [squid-users] Heath Check HTTP Request to Squid

2010-03-23 Thread Baird, Josh
What I have done is configure the load balancers to do a GET on a bogus
URL:

GET http://health.check/please/ignore

Then, to ignore these requests to prevent log spam:

acl healthcheck dstdomain health.check
log_access deny healthcheck

Thanks,

Josh


-Original Message-
From: Baird, Josh 
Sent: Tuesday, March 23, 2010 9:45 AM
To: squid-users@squid-cache.org
Subject: [squid-users] Heath Check HTTP Request to Squid

I need to configure a pair of load balancers in front of Squid to send
periodic health HTTP requests to my Squid servers to make sure they are
up and functioning properly.  How should I structure this HTTP request?
A "GET /" results in an invalid request.  What type of request can I use
that will differentiate itself from normal proxied requests and not
cause Squid to bark at me for it being invalid?

Thanks


Re: [squid-users] reverse proxy question

2010-03-23 Thread Rick Chisholm

Jeff Peng wrote:

Also, could
someone recommend a light weight server for static content?




Apache is good enough IMO.


  
a stripped-down Apache would serve static content just fine, but a 
couple of other options would be Lighttpd and Nginx.


--
Rick Chisholm
System Administrator
Parallel42

e.  rchish...@parallel42.ca
p.  519-325-8630
w.  http://parallel42.ca



[squid-users] Heath Check HTTP Request to Squid

2010-03-23 Thread Baird, Josh
I need to configure a pair of load balancers in front of Squid to send
periodic health HTTP requests to my Squid servers to make sure they are
up and functioning properly.  How should I structure this HTTP request?
A "GET /" results in an invalid-request.  What type of request can I use
that will differentiate its self from normal proxied requests and not
cause Squid to bark at me for it being invalid?

Thanks


Re: [squid-users] reverse proxy question

2010-03-23 Thread Jeff Peng
On Tue, Mar 23, 2010 at 3:33 AM, Al - Image Hosting Services
 wrote:
> Hi,
>
> I have a reverse proxy setup. It has worked well except now the apache
> server is getting overloaded. I would like to change my load balancing so
> that I send all the dynamic content to one server like php to the apache
> server and all the static content like .gif, .jpg, .html to another
> webserver. Is there a way to do this and where is it documented?


Yes, use ACLs (e.g. based on url_path) to route different requests
to different backend servers.
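A sketch of that split (the hostnames and the extension list are
placeholders, not from the thread):

```conf
# Route static files to a lightweight server, everything else to Apache:
acl static urlpath_regex -i \.(gif|jpe?g|png|css|js|html?)$
cache_peer static.example.com parent 80 0 no-query originserver name=staticsrv
cache_peer apache.example.com parent 80 0 no-query originserver name=dynsrv
cache_peer_access staticsrv allow static
cache_peer_access staticsrv deny all
cache_peer_access dynsrv deny static
cache_peer_access dynsrv allow all
```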

>Also, could
> someone recommend a light weight server for static content?
>

Apache is good enough IMO.


-- 
Jeff Peng
Email: jeffp...@netzero.net
Skype: compuperson


[squid-users] Squid Kerb Auth Issue

2010-03-23 Thread Nick Cairncross
Hi,

I'm concerned by a problem with my HTTP.keytab 'expiring'. My test base has 
reported a problem to me: they are prompted repeatedly for an unsatisfiable 
username and password. When I checked cache.log I noticed that a KVNO 
mismatch was being reported. I regenerated my keytab and all was well again. 
However, I was worried by this, so I looked back over my emails and noticed 
the same problem occurred 7 days ago (almost to the hour). Does anyone have a 
suggestion as to what might have caused this, or things to check? There haven't 
been any AD changes.

Thanks,


Nick



AW: [squid-users] The requested URL could not be retrieved TCP_MISS/502

2010-03-23 Thread Zeller, Jan
Hmm, seems not to work properly:

behind proxy :
$ httping -g http://www.bitlifesciences.com/wcvi2010 -c 3 mysquidproxy:80
PING www.bitlifesciences.com:80 (http://www.bitlifesciences.com/wcvi2010):
timeout connecting to host
timeout connecting to host
timeout connecting to host
--- http://www.bitlifesciences.com/wcvi2010 ping statistics ---
3 connects, 0 ok, 100.00% failed

without proxy
./squidclient -v -h localhost -p 80 http://www.bitlifesciences.com/wcvi2010
headers: 'GET http://www.bitlifesciences.com/wcvi2010 HTTP/1.0
Accept: */*

'
HTTP/1.0 502 Bad Gateway
Server: squid
Mime-Version: 1.0
Date: Tue, 23 Mar 2010 14:20:17 GMT
Content-Type: text/html
Content-Length: 1493
X-Squid-Error: ERR_READ_ERROR 104
X-Cache: MISS from mysquidproxy
Via: 1.0 mysquidproxy (squid)
Proxy-Connection: close

[...]

The following error was encountered:

Read Error

[...]

./squid -v
Squid Cache: Version 3.0.STABLE23
configure options:  '--prefix=/opt/squid-3.0.STABLE23' '--enable-icap-client' 
'--enable-ssl' '--enable-default-err-language=English' 
'--enable-err-languages=English' '--enable-linux-netfilter' '--with-pthreads' 
'--with-filedescriptors=32768'


regards,

Jan




Von: Ralf Hildebrandt [ralf.hildebra...@charite.de]
Gesendet: Dienstag, 23. März 2010 14:45
An: squid-users@squid-cache.org
Betreff: Re: [squid-users] The requested URL could not be retrieved TCP_MISS/502

* Umesh Bodalina :
> Hi Squid
> I'm getting the following error when I try to access the following
> site through Squid:

Works for me with:
Squid 2.7.STABLE8-1 from Debian





[squid-users] squid 3.0.19 + transparent + sslbump

2010-03-23 Thread Stefan Reible

Hi,

I want to use https with the viralator (http is working).
I'm prerouting Port 80 to Port 3128 for http.

Is there an option like https_port in my version?

Now I want to set following option in squid.conf:

http_port 3128 sslBump  
cert=/etc/squid/ssl_cert/proxy.testdomain.deCert.pem  
key=/etc/squid/ssl_cert/private/proxy.testdomain.deKey_without_Passphrase.pem


but I get:

squid1 ~ # squid -D
FATAL: Bungled squid.conf line 9: http_port 3128 sslBump  
cert=/etc/squid/ssl_cert/proxy.testdomain.deCert.pem  
key=/etc/squid/ssl_cert/private/proxy.testdomain.deKey_without_Pp.pem

Squid Cache (Version 3.0.STABLE19): Terminated abnormally

The squid should run in transparent mode.

Thank you very much for viralator support, it's very nice ;)

Stefan



Re: [squid-users] The requested URL could not be retrieved TCP_MISS/502

2010-03-23 Thread Ralf Hildebrandt
* Umesh Bodalina :
> Hi Squid
> I'm getting the following error when I try to access the following
> site through Squid:

Works for me with:
Squid 2.7.STABLE8-1 from Debian
 
Squid Cache: Version 2.7.STABLE8
configure options:  '--prefix=/usr' '--exec_prefix=/usr'
'--bindir=/usr/sbin' '--sbindir=/usr/sbin'
'--libexecdir=/usr/lib/squid' '--sysconfdir=/etc/squid'
'--localstatedir=/var/spool/squid' '--datadir=/usr/share/squid'
'--enable-async-io' '--with-pthreads'
'--enable-storeio=ufs,aufs,coss,diskd,null' '--enable-linux-netfilter'
'--enable-arp-acl' '--enable-epoll'
'--enable-removal-policies=lru,heap' '--enable-snmp'
'--enable-delay-pools' '--enable-htcp' '--enable-cache-digests'
'--enable-underscores' '--enable-referer-log' '--enable-useragent-log'
'--enable-auth=basic,digest,ntlm,negotiate'
'--enable-negotiate-auth-helpers=squid_kerb_auth' '--enable-carp'
'--enable-follow-x-forwarded-for' '--with-large-files'
'--with-maxfd=65536' 'i386-debian-linux'
'build_alias=i386-debian-linux' 'host_alias=i386-debian-linux'
'target_alias=i386-debian-linux' 'CFLAGS=-Wall -g -O2' 'LDFLAGS='
'CPPFLAGS='

-- 
Ralf Hildebrandt
  Geschäftsbereich IT | Abteilung Netzwerk
  Charité - Universitätsmedizin Berlin
  Campus Benjamin Franklin
  Hindenburgdamm 30 | D-12203 Berlin
  Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
  ralf.hildebra...@charite.de | http://www.charite.de



[squid-users] The requested URL could not be retrieved TCP_MISS/502

2010-03-23 Thread Umesh Bodalina
Hi Squid
I'm getting the following error when I try to access the following
site through Squid:

1269337729.147  61931 146.141.77.4 TCP_MISS/502 1411 GET
http://www.bitlifesciences.com/wcvi2010 - DIRECT/121.199.32.113
text/html
Using:
Squid Cache: Version 2.7.STABLE7
configure options:  '--prefix=/usr/local/squid' '--disable-wccp'
'--disable-wccpv2' '--enable-large-cache-files' '--with-large-files'
'--enable-delay-pools' '--enable-cachemgr-hostname=servername'
'--enable-ntlm-auth-helpers=SMB' '--enable-auth=basic,ntlm'
'--enable-snmp'
Any ideas?
Regards
Umesh