[squid-users] squid-2.6.STABLE19 https proxying

2008-04-01 Thread ssoo

Squid-2.6.STABLE19 has sslproxy* directives.
Can it support forward proxying https?


Below is part of the squid FAQ:
Unsupported Request Method and Protocol for ''https'' URLs.

The information here is current for version 2.3

This is correct. Squid does not know what to do with an https URL.
To handle such a URL, Squid would need to speak the SSL protocol.
Unfortunately, it does not (yet).

Normally, when you type an https URL into your browser, one of two  
things happens.

* The browser opens an SSL connection directly to the origin server.
* The browser tunnels the request through Squid with the CONNECT  
request method.
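As an illustration of the second case, this is roughly the raw request a browser sends to the proxy for an https URL (hostname is a placeholder); after the proxy replies "200 Connection established", the SSL handshake happens end-to-end between browser and origin server, and the proxy just relays bytes:

```shell
# Print the CONNECT request a browser would issue to the proxy for an
# https URL. The proxy never sees the decrypted traffic.
printf 'CONNECT www.example.com:443 HTTP/1.1\r\nHost: www.example.com:443\r\n\r\n'
```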


Re: [squid-users] squid-2.6.STABLE19 https proxying

2008-04-01 Thread Adrian Chadd
On Tue, Apr 01, 2008, [EMAIL PROTECTED] wrote:
 Squid-2.6.STABLE19 has sslproxy* directives.
 Can it support forward proxying https?

It doesn't, no. Porting the sslbump stuff from Squid-3 or a cut-down version
shouldn't be hard.



Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] With Two Internet connection

2008-04-01 Thread Arun Shrimali
On Tue, Apr 1, 2008 at 12:50 AM, Daniel Becker [EMAIL PROTECTED] wrote:
 Hi Arun,

  to realise this scenario you need to implement a dynamic routing
  protocol, which announces the routing information on the two egde
  routers, as well as on your squid server or alternatively if you have a
  third router between your squid server and your edge routers, on this
  router.


  regards,
  Daniel




Thanks for all your support; I hope I can work it out with an additional router.

Arun


RE: [squid-users] Unable to access a website through Suse/Squid.

2008-04-01 Thread J Beris
  Can other people here access this site using Suse Linux?

Yes, works perfectly here behind a squid-2.6.STABLE6-0.8 proxy on openSUSE 
10.3. Both Firefox and IE.
 
 What was the site again?

http://www.franklintraffic.com/

Regards,

Joop

 
This message has been scanned for viruses and other dangerous
content by MailScanner and appears to be clean.
MailScanner by http://www.prosolit.nl
Professional Solutions for IT



[squid-users] how to controll user to download from torrent

2008-04-01 Thread Tarak Ranjan
Hi List,
I have one squid proxy server. All HTTP traffic has been redirected
to the squid ip:port.
Now I want to deny torrent downloads using my proxy. If anyone can help
me or share their experience of doing it, it will be really appreciated.
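One common approach, sketched below as an untested squid.conf fragment (acl names are arbitrary), is to deny requests for .torrent metadata files and replies carrying the bittorrent MIME type. Note this only blocks fetching .torrent files over HTTP; the peer-to-peer traffic itself never passes through Squid and must be blocked at the firewall:

```
# squid.conf sketch (assumption: place before the final http_access rules)
acl torrent_meta urlpath_regex -i \.torrent$
acl torrent_mime rep_mime_type application/x-bittorrent
http_access deny torrent_meta
http_reply_access deny torrent_mime
```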

/\
Tarak




[squid-users] squid-3.0.STABLE3 make error in dns_internal.cc

2008-04-01 Thread Henrik Nordstrom
On Tue 2008-04-01 at 06:59 +0200, Tvrtko Majstorović wrote:
 I have applied patch to 'src/Makefile.am' described on your pages, but 
 when I run 'make' get this error:
 
 dns_internal.cc: In function ‘void idnsSendQuery(idns_query*)’:
 dns_internal.cc:778: error: cannot convert ‘sockaddr_in’ to ‘const 
 sockaddr_in*’ for argument ‘2’ to ‘int comm_udp_sendto(int, const 
 sockaddr_in*, int, const void*, int)’

This is a known error in 3.0.STABLE3. See Bug #2288.

Regards
Henrik



Re: [squid-users] squid-2.6.STABLE19 https proxying

2008-04-01 Thread J. Peng
On 4/1/08, Henrik Nordstrom [EMAIL PROTECTED] wrote:
 On Tue 2008-04-01 at 15:15 +0900, [EMAIL PROTECTED] wrote:
  Squid-2.6.STABLE19 has sslproxy* directives.
  Can it support forward proxying https?

 Not really, no. This feature allows Squid to gateway requests to https.
 I.e. if Squid receives a request for https:// over HTTP, or if you use
 a url rewriter to rewrite requests from http to https while it's
 forwarded by Squid.


Hello Henrik,

I'm totally confused by your description.
Which of the 3 ways below can squid support?


(client) https <---> squid <---> https (realserver)
(client) http  <---> squid <---> https (realserver)
(client) https <---> squid <---> http  (realserver)


Thanks!


Re: [squid-users] squid-3.0.STABLE3 make error in dns_internal.cc

2008-04-01 Thread Amos Jeffries

Henrik Nordstrom wrote:

On Tue 2008-04-01 at 06:59 +0200, Tvrtko Majstorović wrote:
I have applied patch to 'src/Makefile.am' described on your pages, but 
when I run 'make' get this error:


dns_internal.cc: In function ‘void idnsSendQuery(idns_query*)’:
dns_internal.cc:778: error: cannot convert ‘sockaddr_in’ to ‘const 
sockaddr_in*’ for argument ‘2’ to ‘int comm_udp_sendto(int, const 
sockaddr_in*, int, const void*, int)’


This is a known error in 3.0.STABLE3. See Bug #2288.



The fix will be in tomorrow's daily snapshot.

I'm also repeating my pre-commit test cycle to see if I can figure out how 
this got through in the first place.


Amos
--
Please use Squid 2.6STABLE19 or 3.0STABLE3


RE: [squid-users] client ip's

2008-04-01 Thread Jorge Bastos
No, just squid itself.




 -Original Message-
 From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, April 1, 2008 10:22
 To: Jorge Bastos
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] client ip's
 
 
 On Tue 2008-04-01 at 10:07 +0100, Jorge Bastos wrote:
  Hi,
 
  My squid always reports localhost as the client's IP.
  What can I do to correct this? This only started happening with the
  last 3.0 stable2.
 
 Are you using dansguardian or another filtering proxy in front of your
 Squid?
 
 Regards
 Henrik




Re: [squid-users] Block Squid error page

2008-04-01 Thread Henrik Nordstrom
On Tue 2008-04-01 at 10:42 +0530, sekar it wrote:
 I am using squid as a transparent proxy. I don't want to send any error
 message from squid. Is it possible to send the origin server
 error message instead of the squid error message?

Squid doesn't replace origin server error messages. You will only see
Squid error messages when Squid detects some problem, such as being unable
to contact the origin server, or not being allowed to access the requested
server (denied by squid.conf).

Regards
Henrik



Re: [squid-users] squid-2.6.STABLE19 https proxying

2008-04-01 Thread Amos Jeffries

J. Peng wrote:

On 4/1/08, Henrik Nordstrom [EMAIL PROTECTED] wrote:

On Tue 2008-04-01 at 15:15 +0900, [EMAIL PROTECTED] wrote:

Squid-2.6.STABLE19 has sslproxy* directives.
Can it support forward proxying https?

Not really, no. This feature allows Squid to gateway requests to https.
I.e. if Squid receives a request for https:// over HTTP, or if you use
a url rewriter to rewrite requests from http to https while it's
forwarded by Squid.



Hello Henrik,

I'm totally confused by your description.
Which of the 3 ways below can squid support?


(client) https <---> squid <---> https (realserver)


   Yes. 2.6+ reverse-proxy mode ONLY.
3.1+ reverse proxy mode OR sslbump intercept experiment.


(client) http <---> squid <---> https (realserver)


Yes. 2.6+ normal operations. Any mode.


(client) https <---> squid <---> http (realserver)


Yes. 2.6+ reverse-proxy mode ONLY.
3.1+ reverse proxy mode OR sslbump intercept experiment.
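As a hedged illustration of the 2.6+ reverse-proxy case (certificate paths, hostnames and ports are placeholders, not from the thread), terminating https at Squid and forwarding to an http backend looks roughly like:

```
# squid.conf sketch: client https ---> squid ---> http backend
https_port 443 cert=/etc/squid/example.crt key=/etc/squid/example.key accel defaultsite=www.example.com
cache_peer backend.example.com parent 80 0 no-query originserver
```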

Amos
--
Please use Squid 2.6STABLE19 or 3.0STABLE3


Re: [squid-users] client ip's

2008-04-01 Thread Henrik Nordstrom

On Tue 2008-04-01 at 10:07 +0100, Jorge Bastos wrote:
 Hi,
 
 My squid always reports localhost as the client's IP.
 What can I do to correct this? This only started happening with the
 last 3.0 stable2.

Are you using dansguardian or another filtering proxy in front of your
Squid?

Regards
Henrik



[squid-users] client ip's

2008-04-01 Thread Jorge Bastos
Hi,

My squid always reports localhost as the client's IP.
What can I do to correct this? This only started happening with the last 3.0
stable2.


---
1207040749.939    436 localhost TCP_MISS/200 1528 GET
http://library.gnome.org/skin/tab_right.png - DIRECT/209.132.176.176
image/png





Re: [squid-users] squid-2.6.STABLE19 https proxying

2008-04-01 Thread Henrik Nordstrom
On Tue 2008-04-01 at 15:15 +0900, [EMAIL PROTECTED] wrote:
 Squid-2.6.STABLE19 has sslproxy* directives.
 Can it support forward proxying https?

Not really, no. This feature allows Squid to gateway requests to https.
I.e. if Squid receives a request for https:// over HTTP, or if you use
a url rewriter to rewrite requests from http to https while it's
forwarded by Squid.

But there is a hidden define which enables a proof of concept for https
decryption of proxied requests, making Squid send them to your first
https_port. And https_port also supports transparent interception just
like http_port. But it's no more than a proof of concept, and there are
many shortcomings making it unsuitable for production use:

 - Always the same certificate is presented no matter what site the user
requested, which means a lot of security warnings in the client on each
new site requested.
 - No control over server certificate validation. It's either accept
anything, or reject almost anything.

 Below is part of squid FAQ:
 Unsupported Request Method and Protocol for ''https'' URLs.
 
 The information here is current for version 2.3

This section isn't valid any more. It was about a browser bug where
some browsers forgot to enable SSL when using a proxy and switching from
http to https on the same requested site... (IIRC there were also similar
issues with some browsers forgetting to enable SSL when using proxy
authentication). It's even a duplicate of another FAQ section where this
is explained better... removed.

Regards
Henrik



Re: [squid-users] how to controll user to download from torrent

2008-04-01 Thread Leonardo Rodrigues Magalhães



Tarak Ranjan escreveu:

Hi List,
I have one squid proxy server. All HTTP traffic has been redirected
to the squid ip:port.
Now I want to deny torrent downloads using my proxy. If anyone can help
me or share their experience of doing it, it will be really appreciated.


   I answered this question from you yesterday. There's no need to 
send it again:

http://marc.info/?l=squid-users&m=120697490730861&w=2



--


Atenciosamente / Sincerely,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

My SPAM trap, do NOT send email to it
[EMAIL PROTECTED]






RE: [squid-users] Unable to access a website through Suse/Squid.

2008-04-01 Thread Henrik Nordstrom
On Tue 2008-04-01 at 09:28 -0400, Terry Dobbs wrote:
 Thanks for checking. Odd, not sure why this won't work here; it's the only
 problem like this that I have had in the few years I've used it.

Well.. Squid will only be able to reach the sites you can reach from the
server where Squid runs.

The site works fine from here, but it's hard to test all possible
variables which may make sites fail.

Regards
Henrik



RE: [squid-users] Unable to access a website through Suse/Squid.

2008-04-01 Thread J Beris
 Thanks for checking. Odd, not sure why this won't work here; it's the only
 problem like this that I have had in the few years I've used it.

Hi Terry/Henrik,

No problem, little effort to click the link :-)
I made one small mistake, our proxy runs on openSUSE 10.2, not 10.3 as
reported earlier.

Which release of openSUSE do you run? Perhaps there's a difference
between those 2 versions (although, having used both, I can't think of
anything related to this case...)

Regards,

Joop

 



RE: [squid-users] Unable to access a website through Suse/Squid.

2008-04-01 Thread Terry Dobbs
Yeah, I understand that this issue really isn't squid related; I was just
hoping someone running squid on SUSE Linux has had a similar issue. I am
running SUSE Linux 10 and I can ping the domain from the server. I just
can't browse to it; I get an error box in Mozilla saying "Document
contains no data".

This is obviously why the squid users can't access it. I thought it might be
a DNS issue, but that's crossed off as I can ping the domain, and it
resolves to the correct address.


-Original Message-
From: J Beris [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, April 01, 2008 9:36 AM
To: Terry Dobbs; Henrik Nordstrom
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] Unable to access a website through
Suse/Squid.

 Thanks for checking. Odd, not sure why this won't work here; it's the only
 problem like this that I have had in the few years I've used it.

Hi Terry/Henrik,

No problem, little effort to click the link :-)
I made one small mistake, our proxy runs on openSUSE 10.2, not 10.3 as
reported earlier.

Which release of openSUSE do you run? Perhaps there's a difference
between those 2 versions (although, having used both, I can't think of
anything related to this case...)

Regards,

Joop

 



RE: [squid-users] Unable to access a website through Suse/Squid.

2008-04-01 Thread Terry Dobbs
Thanks for checking. Odd, not sure why this won't work here; it's the only
problem like this that I have had in the few years I've used it.

-Original Message-
From: J Beris [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, April 01, 2008 3:46 AM
To: Henrik Nordstrom; Terry Dobbs
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] Unable to access a website through
Suse/Squid.

  Can other people here access this site using Suse Linux?

Yes, works perfectly here behind a squid-2.6.STABLE6-0.8 proxy on
openSUSE 10.3. Both Firefox and IE.
 
 What was the site again?

http://www.franklintraffic.com/

Regards,

Joop

 



RE: [squid-users] Unable to access a website through Suse/Squid.

2008-04-01 Thread J Beris
 This is obviously why the squid users can't access it. I thought it might be
 a DNS issue, but that's crossed off as I can ping the domain, and it
 resolves to the correct address.

Yes, if you can ping and resolve, it's not DNS related.
I'd fire up wireshark/ethereal and grab the communication that way, see
if that clears things up a bit more. Like this, it's hard to
troubleshoot.

Regards,

Joop

 



Re: [squid-users] Block Squid error page

2008-04-01 Thread Matus UHLAR - fantomas
 On Tue 2008-04-01 at 10:42 +0530, sekar it wrote:
  I am using squid as a transparent proxy. I don't want to send any error
  message from squid. Is it possible to send the origin server
  error message instead of the squid error message?

On 01.04.08 10:40, Henrik Nordstrom wrote:
 Squid doesn't replace origin server error messages. You will only see
 Squid error messages when Squid detects some problem, such as being unable
 to contact the origin server, or not being allowed to access the requested
 server (denied by squid.conf)

...and if you don't want (your users) to see those errors, stop intercepting
their connections and configure WPAD on the network.
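WPAD works by letting browsers auto-discover a proxy auto-config (PAC) file; a minimal sketch of such a file (the proxy hostname is a placeholder, not from the thread) would be:

```javascript
// Minimal proxy.pac sketch: send everything through squid, fall back to a
// direct connection if the proxy is unreachable. Real PAC files can use
// helpers like dnsDomainIs()/shExpMatch() to exempt intranet hosts.
function FindProxyForURL(url, host) {
    return "PROXY squid.example.com:8080; DIRECT";
}
```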
-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
How does cat play with mouse? cat /dev/mouse


Re: [squid-users] cpu load boom when rotate the access.log(coss filesystem)

2008-04-01 Thread Felix New
Amos:
  Thank you very much. I would appreciate it if you could give me some details.


2008/3/28, Amos Jeffries [EMAIL PROTECTED]:

 Ah, a few problems with COSS. Firstly it does not handle large objects
 very well.
 Secondly its reload requires reading the entire cache_dir into memory
 slice by slice, which is extremely slow the larger the dir.

 You would get better performance splitting your cache into two
 cache_dirs: one COSS (max around 2GB) for small objects and one ufs/aufs
 for large objects.


Every one of my cache_dir disks is larger than 100G, and the cache
box serves very small files--this is the reason why I use COSS.
Per your advice, I need to split the cache into about 50 (or more) COSS
cache_dirs and several aufs dirs for large objects (if they exist)...is that right?

Why does splitting a big cache into several cache_dirs give better performance?


-- 
Best regards
Felix New


[squid-users] https --> http reverse proxy problem

2008-04-01 Thread Mirabello Massimiliano

Hi all,

I have a problem with squid in reverse proxy mode (squid-2.6.STABLE16 on
HP-UX).
I need to redirect an https port on the squid server to an http port on the
backend server.
That's my configuration:


###
acl Safe_ports port 37500-37501
acl xprov0_sec myport 37500
acl xprov0_unsec myport 37501

http_port 37501 accel defaultsite=ipahu016
https_port 37500 cert=/opt/hpws/apache/conf/ipahu016.crt
key=/opt/hpws/apache/conf/ipahu016.key  protocol=http accel
defaultsite=ipahu016
#[...]

cache_peer  cmapacheparent  27500   0   name=xprov0 proxy-only
originserver
cache_peer  cmapacheparent  27501   0   name=xprov1 proxy-only
originserver

#[...]

cache_peer_access xprov0 allow xprov0_sec mynet
cache_peer_access xprov1 allow xprov0_unsec mynet

#[...]
visible_hostname ipahu016
#[...]


Squid correctly redirects the http port (37501) to 27501, while I can't open
https://ipahu016:37500.

My cache.log reports:
2008/04/01 17:53:50| clientNegotiateSSL: Error negotiating SSL
connection on FD 11: error:140B512D:SSL routines:SSL_GET_NEW_SESSION:ssl
session id callback failed (1/-1)


I searched on squid mailing lists for a while but found nothing.

Any hint, please?

Thanks in advance,
Massimiliano





Re: [squid-users] Slow internet

2008-04-01 Thread François Cami
On Mon, 31 Mar 2008 13:04:35 -0400
Jeremy Kim [EMAIL PROTECTED] wrote:

 Using the squid proxy is really slow. Is there any way to make it faster?
 
 I have squid version Squid2.6STABLE18 on XP.

Please use a server OS: either 2003 Server if you really have to run on
Windows, or something like RHEL (www.redhat.com/rhel), CentOS
(www.centos.org) or Debian (www.debian.org). Both RHEL and CentOS squid
versions work well in my experience.
Now, if you still see delays/lags when using a real OS:
long delays usually point to intermittent DNS problems, cache rebuild
issues, or a misconfigured firewall.
I've also had more success using diskd than aufs, but YMMV.
You might want to use wireshark (www.wireshark.org) or at least tcpdump
on the squid box to see what it's doing when you experience delays.

Cheers

François


RE: [squid-users] Unable to access a website through Suse/Squid.

2008-04-01 Thread Terry Dobbs
Yeah, I'm lost on this one. Ethereal doesn't show anything strange, just
the initial connection request; it just doesn't seem to get anything back.

Doesn't really make sense that only this one site (at least that I know
of) is having this issue. The SUSE firewall is turned off, the network card
is configured properly, etc...

-Original Message-
From: J Beris [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, April 01, 2008 10:05 AM
To: Terry Dobbs; Henrik Nordstrom
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] Unable to access a website through
Suse/Squid.

 This is obviously why the squid users can't access it. I thought it might be
 a DNS issue, but that's crossed off as I can ping the domain, and it
 resolves to the correct address.

Yes, if you can ping and resolve, it's not DNS related.
I'd fire up wireshark/ethereal and grab the communication that way, see
if that clears things up a bit more. Like this, it's hard to
troubleshoot.

Regards,

Joop

 



RE: [squid-users] Unable to access a website through Suse/Squid.

2008-04-01 Thread Henrik Nordstrom
On Tue 2008-04-01 at 17:29 -0400, Terry Dobbs wrote:
 Yeah, I'm lost on this one. Ethereal doesn't show anything strange, just
 the initial connection request; it just doesn't seem to get anything back.
 
 Doesn't really make sense that only this one site (at least that I know
 of) is having this issue. The SUSE firewall is turned off, the network card
 is configured properly, etc...

Post the trace somewhere and we may take a look if something can be
identified.

My bet is still TCP window scaling... it's the most common source of this
problem these days.

Regards
Henrik



RE: [squid-users] Unable to access a website through Suse/Squid.

2008-04-01 Thread Terry Dobbs
Would you want the trace from the squid server, or from a client behind
the squid server?

Also, the TCP scaling fix, it was just to add a record to the file
right?

Also, I tried doing the window scaling again. Is it just as simple as
creating the file tcp_default_win_scale in /proc/sys/net/ipv4?

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, April 01, 2008 5:37 PM
To: Terry Dobbs
Cc: J Beris; squid-users@squid-cache.org
Subject: RE: [squid-users] Unable to access a website through
Suse/Squid.

On Tue 2008-04-01 at 17:29 -0400, Terry Dobbs wrote:
 Yeah, I'm lost on this one. Ethereal doesn't show anything strange, just
 the initial connection request; it just doesn't seem to get anything back.
 
 Doesn't really make sense that only this one site (at least that I know
 of) is having this issue. The SUSE firewall is turned off, the network card
 is configured properly, etc...

Post the trace somewhere and we may take a look if something can be
identified.

My bet is still TCP window scaling... it's the most common source of this
problem these days.

Regards
Henrik



RE: [squid-users] Unable to access a website through Suse/Squid.

2008-04-01 Thread Henrik Nordstrom
On Tue 2008-04-01 at 18:00 -0400, Terry Dobbs wrote:
 Would you want the trace from the squid server, or from a client behind
 the squid server?
 
 Also, the TCP scaling fix, it was just to add a record to the file
 right?
 
 Also, I tried doing the window scaling again. Is it just as simple as
 creating the file tcp_default_win_scale in /proc/sys/net/ipv4?

The simplest way to test if it's window scaling biting the host (or to
be correct, its firewall) is to disable window scaling:

echo 0 > /proc/sys/net/ipv4/tcp_window_scaling

The sysctls have changed somewhat since the lwn.net article was written
many years ago.

Regards
Henrik



Re: [squid-users] https --> http reverse proxy problem

2008-04-01 Thread Henrik Nordstrom
On Tue 2008-04-01 at 17:55 +0200, Mirabello Massimiliano wrote:
 My cache.log reports:
 2008/04/01 17:53:50| clientNegotiateSSL: Error negotiating SSL
 connection on FD 11: error:140B512D:SSL routines:SSL_GET_NEW_SESSION:ssl
 session id callback failed (1/-1)

Hmm.. that's a new one.

Which version of OpenSSL are you using?

Try setting sslcontext=something on your https_port; it may make a
difference (very related to session ids).
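Applied to the https_port line from the original post, the suggestion would look something like this (the sslcontext value is arbitrary; any unique ID should do):

```
# squid.conf sketch: same https_port as the original config, plus an
# explicit SSL session id context (value "squid-accel" is an assumption)
https_port 37500 cert=/opt/hpws/apache/conf/ipahu016.crt key=/opt/hpws/apache/conf/ipahu016.key sslcontext=squid-accel protocol=http accel defaultsite=ipahu016
```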

Regards
Henrik



[squid-users] All url_rewriter processes are busy x Too many open files

2008-04-01 Thread Marcio Augusto Stocco
Testing Squid/SquidGuard with thousands of users, the cache.log shows
the following messages:

2008/04/01 15:19:16| WARNING: All url_rewriter processes are busy.
2008/04/01 15:19:16| WARNING: up to 2730 pending requests queued
2008/04/01 15:19:16| Consider increasing the number of url_rewriter
processes to at least 3552 in your config file.
2008/04/01 15:19:34| WARNING! Your cache is running out of filedescriptors
2008/04/01 15:19:50| WARNING! Your cache is running out of filedescriptors
2008/04/01 15:19:56| comm_open: socket failure: (24) Too many open files
2008/04/01 15:19:56| comm_open: socket failure: (24) Too many open files
2008/04/01 15:19:56| comm_open: socket failure: (24) Too many open files

The server is a HP DL360G5 (2x Xeon Dual 1.6 GHz, RAM 8 GB, HP Smart
Array - RAID 1).

Is there any way to increase SQUID_MAXFD from 8192 to 65536, so I can
try using the suggested number of url_rewriter processes?

With SQUID_MAXFD=8192 I got lots of "comm_open: socket failure: (24)
Too many open files" if url_rewriter is set higher than 200 (roughly).

Thanks for any help,
Marcio.


RE: [squid-users] client ip's

2008-04-01 Thread Henrik Nordstrom
On Tue 2008-04-01 at 12:29 +0100, Jorge Bastos wrote:
 No, just squid itself.

As a plain proxy, or playing with NAT?

Regards
Henrik



RE: [squid-users] Unable to access a website through Suse/Squid.

2008-04-01 Thread Henrik Nordstrom

On Tue 2008-04-01 at 18:00 -0400, Terry Dobbs wrote:
 Would you want the trace from the squid server, or from a client behind
 the squid server?

The squid server talking to the web site.

Regards
Henrik



[squid-users] Reverse proxy question

2008-04-01 Thread James Wenzel

Hi

 I am setting up squid in front of an Oracle applications server. The
squid is in a DMZ; the Oracle applications server is in the internal
network. I need to have the squid server talk to port 8000 and port
9000, and accept requests for those ports from internet users.
Currently my configuration works fine for 8000 and is as such:


http_port 8000
httpd_accel_host 10.1.140.200
httpd_accel_port 8000
httpd_accel_single_host on
httpd_accel_with_proxy on
httpd_accel_uses_host_header off

Now the application will later try to hand off to port 9000 on the
same back-end httpd_accel_host via a call in Java. In a simple, quick
fashion, how do I get the server to accept and accel both port 8000
and 9000 so my application can work?


Thanks for your help in advance.

James Wenzel
Enterprise Resource Providers
www.enterpriserp.com
716 310 8236




Re: [squid-users] Reverse proxy question

2008-04-01 Thread Henrik Nordstrom
On Tue 2008-04-01 at 19:52 -0400, James Wenzel wrote:
 http_port 8000
 httpd_accel_host 10.1.140.200
 httpd_accel_port 8000

Before you continue, upgrade to a supported Squid release. I.e.
Squid-2.6 or later.

Regards
Henrik
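After upgrading, the 2.5-era httpd_accel_* setup above maps roughly onto 2.6 accelerator syntax like the sketch below (an assumption-laden illustration: both ports front the same 10.1.140.200 backend, and peer/acl names are invented):

```
# squid.conf sketch for squid-2.6: accelerate ports 8000 and 9000,
# routing each listening port to the matching backend port
http_port 8000 accel defaultsite=10.1.140.200
http_port 9000 accel defaultsite=10.1.140.200
cache_peer 10.1.140.200 parent 8000 0 no-query originserver name=app8000
cache_peer 10.1.140.200 parent 9000 0 no-query originserver name=app9000
acl on8000 myport 8000
acl on9000 myport 9000
cache_peer_access app8000 allow on8000
cache_peer_access app9000 allow on9000
```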



Re: [squid-users] cpu load boom when rotate the access.log(coss filesystem)

2008-04-01 Thread Amos Jeffries

Felix New wrote:

Amos:
  Thank you very much. I would appreciate it if you could give me some details.


2008/3/28, Amos Jeffries [EMAIL PROTECTED]:

Ah, a few problems with COSS. Firstly it does not handle large objects
very well.
Secondly its reload requires reading the entire cache_dir into memory
slice by slice, which is extremely slow the larger the dir.

You would get better performance splitting your cache into two
cache_dirs: one COSS (max around 2GB) for small objects and one ufs/aufs
for large objects.


Sorry, I was out by an order of magnitude; I should have said 20GB.

The 2.6 config manual mentions the default is 8GB
  http://www.squid-cache.org/Versions/v2/2.6/cfgman/cache_dir.html






Every one of my cache_dir disks is larger than 100G, and the cache
box serves very small files--this is the reason why I use COSS.
Per your advice, I need to split the cache into about 50 (or more) COSS
cache_dirs and several aufs dirs for large objects (if they exist)...is that right?

Why does splitting a big cache into several cache_dirs give better performance?



With several cache_dirs you have a few factors increasing performance:

 - Parallel dir access. As one dir is reading/writing its slices, 
another can still be used. Usually minor, but under heavy load or very 
large caches it can add up.

 - COSS has a limited max-size for slices and for its dir size.

 - Since COSS apparently must index its whole cache_dir before use, 
smaller sizes can reduce the delay before connections are accepted 
through the first cache_dir.

 - I don't know much of the details of COSS, but I keep hearing people 
mention a limit to the file size it likes (a few MB). Using another 
cache_dir type can ease that bottleneck.
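The split Amos describes might look like the following squid.conf sketch (paths, sizes, and the 64 KB boundary are illustrative assumptions, not recommendations from the thread):

```
# small objects (up to 64 KB) go to COSS dirs, everything larger to aufs
cache_dir coss /cache/coss0 8000 max-size=65536
cache_dir coss /cache/coss1 8000 max-size=65536
cache_dir aufs /cache/aufs0 80000 16 256 min-size=65537
```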


Amos
--
Please use Squid 2.6STABLE19 or 3.0STABLE3


Re: [squid-users] All url_rewriter processes are busy x Too many open files

2008-04-01 Thread Amos Jeffries

Marcio Augusto Stocco wrote:

Testing Squid/SquidGuard with thousands of users, the cache.log shows
the following messages:

2008/04/01 15:19:16| WARNING: All url_rewriter processes are busy.
2008/04/01 15:19:16| WARNING: up to 2730 pending requests queued
2008/04/01 15:19:16| Consider increasing the number of url_rewriter
processes to at least 3552 in your config file.
2008/04/01 15:19:34| WARNING! Your cache is running out of filedescriptors
2008/04/01 15:19:50| WARNING! Your cache is running out of filedescriptors
2008/04/01 15:19:56| comm_open: socket failure: (24) Too many open files
2008/04/01 15:19:56| comm_open: socket failure: (24) Too many open files
2008/04/01 15:19:56| comm_open: socket failure: (24) Too many open files

The server is a HP DL360G5 (2x Xeon Dual 1.6 GHz, RAM 8 GB, HP Smart
Array - RAID 1).

Is there any way to increase SQUID_MAXFD from 8192 to 65536, so I can
try using the suggested number of url_rewriter processes?


Squid 2.6: --with-maxfd=65536
Squid 3.x: --with-filedescriptors=65536

Be sure your OS can handle a single process with that many FDs though. 
Using these options overrides the automatic build detection AFAIK.


You can also use ulimit while compiling (I don't know the details).
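A hedged sketch of the ulimit approach: raise the descriptor limit in the shell that runs ./configure (and in whatever script later starts squid), so the build-time autodetection sees the larger value. Raising the hard limit typically requires root:

```shell
# Try to raise both hard and soft FD limits for this shell; ignore failure
# if not permitted (non-root shells usually cannot raise the hard limit).
ulimit -HSn 65536 2>/dev/null || true
# Show the soft limit that ./configure would detect from this shell
ulimit -n
```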



With SQUID_MAXFD=8192 I got lots of "comm_open: socket failure: (24)
Too many open files" if url_rewriter is set higher than 200 (roughly).

Thanks for any help,
Marcio.


For our info, since you say you are handling thousands of users:
  what release of squid is it?
  what request/sec load is your squid maxing out at?

Amos
--
Please use Squid 2.6STABLE19 or 3.0STABLE4


[squid-users] TCP Connection failed to parent proxy server

2008-04-01 Thread Josh
Hi all,

I'm having an issue with the squid server I set up on OpenBSD 4.2-stable.
Clients come in on 10.X.X.X (virtual IP) port 8080, and requests are
made to the parent proxy server from 10.X.X.Y to 10.2.5.1 port 8080.

As you can see below (cache.log), I get a lot of "TCP connection to
parent proxy server failed"...
For sure, the parent is listening on port 8080.
I deactivated the firewall rules to check whether it was the one
dropping the connections, but I got the same results... TCP
connection failed...

Let me know if you need further details / explanations... in the
meantime, do you have any ideas on what's going on?

Thanks,

Regards,
Josh

# squid -v
Squid Cache: Version 2.6.STABLE13
configure options: '--datadir=/usr/local/share/squid'
'--localstatedir=/var/squid' '--disable-linux-netfilter'
'--disable-linux-tproxy' '--disable-epoll' '--enable-arp-acl'
'--enable-async-io' '--enable-auth=basic digest ntlm'
'--enable-basic-auth-helpers=NCSA YP'
'--enable-digest-auth-helpers=password' '--enable-cache-digests'
'--enable-large-cache-files' '--enable-carp' '--enable-delay-pools'
'--enable-external-acl-helpers=ip_user session unix_group
wbinfo_group' '--enable-htcp' '--enable-ntlm-auth-helpers=SMB'
'--enable-referer-log' '--enable-removal-policies=lru heap'
'--enable-snmp' '--enable-ssl' '--enable-storeio=ufs aufs coss diskd
null' '--enable-underscores' '--enable-useragent-log'
'--enable-wccpv2' '--with-aio' '--with-large-files' '--with-pthreads'
'--with-maxfd=32768' 'CPPFLAGS=-I/usr/local/include'
'LDFLAGS=-L/usr/local/lib' 'CFLAGS=-DNUMTHREADS=128'
'--prefix=/usr/local' '--sysconfdir=/etc' '--mandir=/usr/local/man'
'--infodir=/usr/local/info' 'CC=cc'

# cat /etc/squid/squid.conf
http_port 8080
icp_port 0
cache_peer 10.2.5.1 parent 8080 0 default no-query no-digest no-netdb-exchange
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
cache_mem 640 MB
cache_swap_low 90
cache_swap_high 95
maximum_object_size 4096 KB
maximum_object_size_in_memory 16 KB
cache_replacement_policy heap LFUDA
memory_replacement_policy heap GDSF
cache_dir aufs /var/squid/cache 60000 16 256
access_log /var/squid/logs/access.log squid
hosts_file /etc/hosts
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320
quick_abort_min 0 KB
quick_abort_max 0 KB
half_closed_clients off
shutdown_lifetime 5 seconds
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443  # https
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 554
acl Safe_ports port 1755
acl purge method PURGE
acl CONNECT method CONNECT
acl snmppublic snmp_community public
acl corpnet dstdomain .corp.local
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access allow CONNECT SSL_ports
http_access allow Safe_ports
http_access deny all
httpd_suppress_version_string on
visible_hostname proxy
memory_pools off
log_icp_queries off
client_db off
buffered_logs on
never_direct deny corpnet
never_direct allow all
coredump_dir /var/squid/logs
pipeline_prefetch on

cache.log:
 snip 
2008/04/01 17:47:46| Starting Squid Cache version 2.6.STABLE13 for
x86_64-unknown-openbsd4.2...
2008/04/01 17:47:46| Process ID 23178
2008/04/01 17:47:46| With 32768 file descriptors available
2008/04/01 17:47:46| Using kqueue for the IO loop
2008/04/01 17:47:46| DNS Socket created at 0.0.0.0, port 11217, FD 8
2008/04/01 17:47:46| Adding nameserver 10.5.1.1 from /etc/resolv.conf
2008/04/01 17:47:46| Adding nameserver 10.1.9.5 from /etc/resolv.conf
2008/04/01 17:47:46| Adding nameserver 10.1.15.15 from /etc/resolv.conf
2008/04/01 17:47:46| User-Agent logging is disabled.
2008/04/01 17:47:46| Referer logging is disabled.
2008/04/01 17:47:46| Unlinkd pipe opened on FD 13
2008/04/01 17:47:46| Swap maxSize 61440000 KB, estimated 4726153 objects
2008/04/01 17:47:46| Target number of buckets: 236307
2008/04/01 17:47:46| Using 262144 Store buckets
2008/04/01 17:47:46| Max Mem  size: 655360 KB
2008/04/01 17:47:46| Max Swap size: 61440000 KB
2008/04/01 17:47:46| Local cache digest enabled; rebuild/rewrite every
3600/3600 sec
2008/04/01 17:47:46| Rebuilding storage in /var/squid/cache (DIRTY)
2008/04/01 17:47:46| Using Least Load store dir selection
2008/04/01 17:47:46| Set Current Directory to /var/squid/logs
2008/04/01 17:47:46| Loaded Icons.
2008/04/01 17:47:47| Accepting proxy HTTP connections at 0.0.0.0, port
8080, FD 17.
2008/04/01 17:47:47| Accepting HTCP messages on port 4827, FD 18.
2008/04/01 17:47:47| Accepting SNMP messages on port 3401, FD 19.
2008/04/01 17:47:47| WCCP Disabled.
2008/04/01 17:47:47| Configuring Parent 10.2.5.1/8080/0