Re: [squid-users] What is the best way to authenticate remote users with dynamic ip?

2008-04-14 Thread S.M.H. Hamidi
Dear Roma,

If you want to authenticate users through a captive portal mechanism, you should treat the IP address as the user identity. It is possible to implement cookie-based authentication as well, but it is more complex and would need a detailed explanation.
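For illustration only, a minimal sketch of the IP-as-identity approach using the session helper that ships with Squid 2.6+ (the helper path, timeouts, and portal URL are assumptions, not taken from this thread; the portal itself would have to mark an IP as logged in):

```
# Hypothetical squid.conf fragment: the client IP acts as the session key.
external_acl_type session ttl=60 negative_ttl=0 %SRC /usr/lib/squid/squid_session -t 3600
acl session external session
# Anyone without an active session is denied...
http_access deny !session
# ...and redirected to the (placeholder) captive portal instead:
deny_info http://portal.example.com/login session
```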

Regards,

- Original Message 
From: [EMAIL PROTECTED] [EMAIL PROTECTED]
To: squid-users@squid-cache.org
Sent: Sunday, April 13, 2008 8:16:09 PM
Subject: [squid-users] What is the best way to authenticate remote users with 
dynamic ip?

Hello, list.
I want to set up a public proxy that will serve clients from anywhere, after registration.
I will set up a captive portal for authorization/registration, plus an external authenticator that will check user validity and redirect unauthorized users to the captive portal.

I guess simple Basic/Digest auth would be the better choice, but I want to use a captive portal, so that is not an option for me, alas.

So I need some kind of session authentication.
For now I am leaning toward cookie authentication, but I am not sure it is possible.
I can configure the captive portal to set a cookie and an external helper to check for it, but I believe the client will not send that cookie unless squid asks for it, and squid will not, will it? What can I do in that case?

Is there a better way to approach my goal?

Thanks in advance, Roma.






  



Re: [squid-users] Configuring cache_peer to use ssl

2008-04-14 Thread Janis

Quoting Chris Robertson [EMAIL PROTECTED]:


On the parent server is acl allowing this secondary server to connect.


Are you using an http_port, or an https_port directive on the parent
server?  What does it look like?


It looks like this:

http_port IP:port

Janis


This message was sent using IMP, the Internet Messaging Program.




[squid-users] Getting sibling caches to work in an accelerator setup

2008-04-14 Thread Patrik Ellrén
We have a setup with a number of identical application servers running on Windows 2003 Server. On each server there is an instance of Squid (2.6.STABLE18) that runs as an accelerator. The accelerator mode seems to work fine when each Squid instance is only accelerating its own application server, but we would like the Squids to run as siblings and we have not been able to get that to work.

Even though the objects are cached by Squid on one machine, calls from a sibling generate a combination of:

UDP_HIT/000
TCP_MISS/504

It looks like the ICP call indicates a hit, but when a Squid tries to retrieve the cached object it is not found in the cache. The max-age and Expires headers are set to allow caching for weeks (and caching does work when each Squid is accelerating only its own origin server), Cache-Control is set to public, and no other headers have been set.

If we add allow_miss to the cache_peer lines, the objects are retrieved, but they come from the sibling's origin server and not from its cache, so at least the communication works.

Does anyone have an idea what could cause this behaviour?
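For reference, a minimal sketch of the sibling arrangement described above (hostnames and ports are placeholders, not the poster's actual config):

```
# On accelerator app1 (app2 would mirror this with the names swapped):
http_port 80 accel defaultsite=www.example.com
icp_port 3130
cache_peer localhost 8080 0 no-query originserver
# proxy-only: fetch hits from the sibling but do not re-store them locally
cache_peer app2.internal sibling 80 3130 proxy-only
```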

Kind regards,
Patrik Ellrén
Carmenta AB


[squid-users] reverse proxy https - http and redirect request from server

2008-04-14 Thread Wojciech Durczyński

Hello

I am trying to set up squid as a reverse proxy. Clients should connect via https, and the origin server speaks plain http.

Client ---(https://neon:3129/)--- squid ---(http://neon:8085/)--- webserver


My configuration is something like this:

https_port 3129 accel vport protocol=http cert=/root/private/cacert.pem 
key=/root/private/privkey.pem

cache_peer neon 8085 0 no-query originserver name=neon
cache_peer_access neon allow all
http_access allow all

The client shouldn't know anything about the address of the webserver.
It works well until the webserver generates:

HTTP/1.0 302 Moved Temporarily
Location: http://neons_ip:3129/sth

Then the web browser tries to connect to squid's https port via plain http, and I get a connection-reset message.

How do I configure the right behaviour? Is it a bug in squid?
I use squid 3.0.STABLE4.
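A common workaround (an assumption, not a confirmed fix for this report) is to repair the redirect at its source: either have the origin emit relative Location headers, or tell it its canonical public name and scheme so its absolute URLs come out as https. For an Apache origin, that might look like this (illustrative only):

```
# Hypothetical httpd.conf fragment on the origin server:
# build self-referential URLs with the public https front-end name
ServerName https://neon:3129
UseCanonicalName On
```

Squid 2.6 also offered a location_rewrite_program helper interface for rewriting Location headers at the proxy, but Squid 3.0 does not.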





[squid-users] squid 3.0 on Windows

2008-04-14 Thread Shailesh Mishra
Hi,

Is squid 3.0 available on Windows with built-in support for the ICAP protocol?
I am unable to find a location for downloading it, although I can download and use the same release for Linux.
The information on the site says there is only a development release for 3.0, not a production release.

Any information or pointers in this regard will be helpful.

Thanks in advance,
Shailesh


Re: [squid-users] squid 3.0 on Windows

2008-04-14 Thread Amos Jeffries

Shailesh Mishra wrote:

Hi,

Is squid 3.0 available on Windows with built-in support for the ICAP protocol?
I am unable to find a location for downloading it, although I can download and use the same release for Linux.
The information on the site says there is only a development release for 3.0, not a production release.

Any information or pointers in this regard will be helpful.

Thanks in advance,
Shailesh


Porting to Windows is not yet complete.
If you want to lend a hand with the Win32 code, drop Guido a mail.

Amos
--
Please use Squid 2.6.STABLE19 or 3.0.STABLE4


[squid-users] Dynamic PDF thru transparent squid 2.6 problem

2008-04-14 Thread Rob Asher
I've seen a few questions about problems opening/downloading PDFs through a transparent proxy, but haven't found a solution that works for me yet.  I have a bridged, transparent squid 2.6.STABLE6 (stock CentOS 5 build) machine with squidGuard that works great for filtering and caching, except for one particular site that generates reports dynamically as PDFs.  Watching the transaction with tcpdump, it looks like the download completes, but it is never displayed to the end user.  The downloading dialog just hangs.  Remove the squid machine and the PDF downloads fine.  If anyone has any ideas and would like to see the config, redirect rules, and tcpdump output, I'd be more than happy with the help.  :-)

Thanks,
Rob


-
Rob Asher
Network Systems Technician
Paragould School District
(870)236-7744 Ext. 169




[squid-users] squid as reverse proxy, serving large files

2008-04-14 Thread Lin Jui-Nan Eric
Hi All,

I set up some squid proxies as reverse proxies for serving large files (~50 MB).
When there are about 1200 concurrent connections (each connection persists for about 2 minutes, given the ~50 MB file size), performance degrades quickly, and cache.log shows:

2008/04/14 22:48:11| comm_old_accept: FD 11: (53) Software caused
connection abort
2008/04/14 22:48:11| httpAccept: FD 11: accept failure: (53) Software
caused connection abort
2008/04/14 22:48:13| comm_old_accept: FD 11: (53) Software caused
connection abort
2008/04/14 22:48:13| httpAccept: FD 11: accept failure: (53) Software
caused connection abort
2008/04/14 22:48:14| comm_old_accept: FD 11: (53) Software caused
connection abort
2008/04/14 22:48:14| httpAccept: FD 11: accept failure: (53) Software
caused connection abort
2008/04/14 22:48:16| comm_old_accept: FD 11: (53) Software caused
connection abort
2008/04/14 22:48:17| httpAccept: FD 11: accept failure: (53) Software
caused connection abort

I use squid-3.0.4 with kqueue on FreeBSD 7.0-RELEASE, with 4 CPUs and
4 GB RAM. Thank you for any suggestions to make the numbers better.

My squid.conf:

acl all src 0.0.0.0/0
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
#
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl PURGE method PURGE

acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255

acl pixnet dstdomain .pixnet.net
#  TAG: http_access
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
http_access allow manager localhost
http_access deny manager
http_access allow PURGE localhost
http_access deny PURGE

http_access allow pixnet
http_access deny all

icp_access allow localnet
icp_access deny all

#htcp_access allow localnet
#htcp_access deny all

#  TAG: http_port
http_port 80 accel defaultsite=server.pixnet.net

#  TAG: cache_peer
#   #proxy  icp
#   #  hostname type port   port  options
#   #    - -  ---
#   cache_peer parent.foo.net   parent3128  3130  proxy-only default
#   cache_peer sib1.foo.net sibling   3128  3130  proxy-only
#   cache_peer sib2.foo.net sibling   3128  3130  proxy-only
cache_peer server.pixnet.net parent 80 0 no-query originserver

#  TAG: cache_mem   (bytes)
cache_mem 1024 MB
cache_swap_low 80
cache_swap_high 95

maximum_object_size_in_memory 65536 KB

#  TAG: cache_dir
cache_dir aufs /ad6/cache 102400 32 256
cache_dir aufs /ad4/cache 102400 32 256
minimum_object_size 0 KB
maximum_object_size 131072 KB

access_log /var/log/squid/access.log squid
cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log
logfile_rotate 30
# emulate_httpd_log off
# pid_filename /usr/local/squid/logs/squid.pid
# via on
#accept_filter httpready
memory_pools_limit 3608 MB
forwarded_for on

coredump_dir /ad6/cache
cache_effective_user squid
refresh_pattern . 1440 90% 2880  override-expire ignore-no-cache ignore-no-store
minimum_expiry_time 2592000 seconds
client_db off
buffered_logs on
half_closed_clients off
#debug_options ALL,5 5,9


Re: [squid-users] Configuring cache_peer to use ssl

2008-04-14 Thread Chris Robertson

Janis wrote:

Quoting Chris Robertson [EMAIL PROTECTED]:


On the parent server is acl allowing this secondary server to connect.


Are you using an http_port, or an https_port directive on the parent
server?  What does it look like?


It looks like this:

http_port IP:port

Janis


So the child Squid is trying to negotiate an SSL connection with a port 
on the Parent that's not set up to accept it.  See 
http://www.squid-cache.org/Versions/v3/3.0/cfgman/https_port.html for 
the proper directive to terminate an SSL connection.


Chris



Re: [squid-users] squid as reverse proxy, serving large files

2008-04-14 Thread Amos Jeffries
 Hi All,

 I set up some squid proxy as reverse proxy for serving large files
 (~50MB).
 If there are about 1200 concurrent connections (each connections
 persists about 2 mins, since I have file size ~50MB),
 The performance degrades quickly, and cache.log shows:

 2008/04/14 22:48:11| comm_old_accept: FD 11: (53) Software caused
 connection abort
 2008/04/14 22:48:11| httpAccept: FD 11: accept failure: (53) Software
 caused connection abort
 2008/04/14 22:48:13| comm_old_accept: FD 11: (53) Software caused
 connection abort
 2008/04/14 22:48:13| httpAccept: FD 11: accept failure: (53) Software
 caused connection abort
 2008/04/14 22:48:14| comm_old_accept: FD 11: (53) Software caused
 connection abort
 2008/04/14 22:48:14| httpAccept: FD 11: accept failure: (53) Software
 caused connection abort
 2008/04/14 22:48:16| comm_old_accept: FD 11: (53) Software caused
 connection abort
 2008/04/14 22:48:17| httpAccept: FD 11: accept failure: (53) Software
 caused connection abort

 I use squid-3.0.4 with kqueue on FreeBSD 7.0 RELEASE, with 4 CPUs and
 4G RAM. Thank you for any suggestions to make the numbers better.

I think you are getting close to the top of the range we have benchmarked
squid-3 at. Do you have any performance graphs we could use?

#1 - Check the number of file descriptors your cache is allowed and using.
You may need to rebuild with a larger set.

#2 - Maybe run a second squid on that machine. Squid only takes up one CPU
(helpers maybe another).

Nothing obvious in the config. Just a few minor tweaks FYI:

 cache_store_log - pretty specialized for debugging the cache storage. A
waste of disk accesses otherwise.

 acl all - a built-in default in squid 3.x. You can drop it from the
config to clear up some warnings.

 debug_options - probably ALL,0 to get the critical info without many
entries.
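A sketch of those tweaks as squid.conf lines (values illustrative):

```
cache_store_log none   # skip store.log unless debugging cache storage
debug_options ALL,0    # critical messages only
# the "acl all src ..." line can simply be deleted in squid-3
```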


Amos



Re: [squid-users] change in source code

2008-04-14 Thread Henrik Nordstrom
Sat 2008-04-12 at 12:07 -0700, Anil Saini wrote:
 
 Where do I have to make a change in the source code in order to increase
 dns_children to more than 32?

Is there really a limit of 32? Not so sure...

What happens if you try to set it higher?

Regards
Henrik



[squid-users] Configuration problem ...

2008-04-14 Thread Ramiro Sabastta
Hi !

I configured squid 2.6 on a Debian box (1 GB RAM and 120 GB of disk).

When I send an HTTP request for a file bigger than 200 KB (my
maximum_object_size is 4194304 bytes and my
maximum_object_size_in_memory is 204800 bytes), squid answers with a
TCP_MISS and does not save the file into the cache.
The size of the file is 210133 bytes.

In addition, when I request the same file with different extensions
(one .jpg and the other .gif), squid responds in different ways. For the
.jpg request I receive X-Cache and X-Cache-Lookup MISS. For the .gif
request I receive X-Cache MISS and X-Cache-Lookup HIT.

The following example shows the issue:

JPG:
--- Sent ---
GET /prueba/imagen2.jpg HTTP/1.0
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)
Host: www.dellog.com.ar
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
Accept-Language: en-us
Connection: Keep-Alive
--- Receive ---
HTTP/1.0 200 OK
Date: Tue, 15 Apr 2008 00:29:26 GMT
Server: Apache/2.2.8 (Win32) PHP/5.2.5
Last-Modified: Mon, 14 Apr 2008 23:03:34 GMT
ETag: a6202-334d5-44add4b112197
Accept-Ranges: bytes
Content-Length: 210133
Content-Type: image/jpeg
X-Cache: MISS from ProxyServer.ProxyServer.net
X-Cache-Lookup: MISS from ProxyServer.ProxyServer.net:3128
Via: 1.0 ProxyServer.ProxyServer.net:3128 (squid/2.6.STABLE5)
Connection: keep-alive


GIF:
--- Sent ---
GET /prueba/imagen2.gif HTTP/1.0
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)
Host: www.dellog.com.ar
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
Accept-Language: en-us
Connection: Keep-Alive
---
--- Receive ---
HTTP/1.0 200 OK
Date: Tue, 15 Apr 2008 00:16:26 GMT
Server: Apache/2.2.8 (Win32) PHP/5.2.5
Last-Modified: Mon, 14 Apr 2008 23:03:34 GMT
ETag: 1196f5-334d5-44add4b112197
Accept-Ranges: bytes
Content-Length: 210133
Content-Type: image/gif
X-Cache: MISS from ProxyServer.ProxyServer.net
X-Cache-Lookup: HIT from ProxyServer.ProxyServer.net:3128
Via: 1.0 ProxyServer.ProxyServer.net:3128 (squid/2.6.STABLE5)
Connection: keep-alive


Could you check my squid.conf configuration file, in order to detect
any configuration mistakes?

I would appreciate your help.

Thanks a lot,

Ramiro

http_port 0.0.0.0:3128 transparent
icp_port 3130
htcp_port 0
udp_incoming_address 0.0.0.0
udp_outgoing_address 255.255.255.255
icp_query_timeout 0
maximum_icp_query_timeout 2000
mcast_icp_query_timeout 2000
dead_peer_timeout 10 seconds
hierarchy_stoplist cgi-bin
hierarchy_stoplist ?
cache Deny QUERY
cache Deny exepciones
cache_vary on
broken_vary_encoding Allow apache
cache_mem 268435456 bytes
cache_swap_low 90
cache_swap_high 95
maximum_object_size 4194304 bytes
minimum_object_size 0 bytes
maximum_object_size_in_memory 204800 bytes
ipcache_size 2048
ipcache_low 90
ipcache_high 95
fqdncache_size 2048
cache_replacement_policy heap LFUDA
memory_replacement_policy lru
cache_dir diskd /var/spool/squid 102400 16 256 Q1=64 Q2=72
access_log /var/log/squid/access.log squid
cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log
emulate_httpd_log off
log_ip_on_direct on
mime_table /usr/share/squid/mime.conf
log_mime_hdrs off
pid_filename /var/run/squid.pid
debug_options ALL,1
log_fqdn off
client_netmask 255.255.255.255
ftp_user Squid@
ftp_list_width 32
ftp_passive on
ftp_sanitycheck on
ftp_telnet_protocol on
check_hostnames on
allow_underscore on
dns_retransmit_interval 5 seconds
dns_timeout 120 seconds
dns_defnames off
dns_nameservers 200.45.191.35
dns_nameservers 200.45.191.40
hosts_file /etc/hosts
diskd_program /usr/lib/squid/diskd-daemon
unlinkd_program /usr/lib/squid/unlinkd
url_rewrite_children 5
url_rewrite_concurrency 0
url_rewrite_host_header on
location_rewrite_children 5
location_rewrite_concurrency 0
authenticate_cache_garbage_interval 3600 seconds
authenticate_ttl 3600 seconds
authenticate_ip_ttl 0 seconds
wais_relay_port 0
request_header_max_size 20480 bytes
request_body_max_size 0 bytes
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320
quick_abort_min 0 KB
quick_abort_max 0 KB
quick_abort_pct 95
read_ahead_gap 16384 bytes
negative_ttl 300 seconds
positive_dns_ttl 21600 seconds
negative_dns_ttl 60 seconds
range_offset_limit 0 bytes
collapsed_forwarding off
refresh_stale_hit 0 seconds
forward_timeout 240 seconds
connect_timeout 60 seconds
peer_connect_timeout 30 seconds
read_timeout 900 seconds
request_timeout 300 seconds
persistent_request_timeout 60 seconds
client_lifetime 86400 seconds
half_closed_clients off
pconn_timeout 120 seconds
ident_timeout 10 seconds
shutdown_lifetime 30 seconds
acl QUERY urlpath_regex cgi-bin
acl QUERY urlpath_regex \?
acl 

Re: [squid-users] change in source code

2008-04-14 Thread Anil Saini

Actually, it is there in the squid.conf file that the max limit is 32,
but I increased the limit to 60 and the number of DNS processes increased. I
don't know whether it will affect squid or not; the problem I was facing is
solved to some extent.




Anil Saini wrote:
 
 
 Where do I have to make changes in the source code in order to increase
 dns_children to more than 32?
 
 
 


-
Anil Saini
M.E. - Software Systems
B.E. - Electronics and Communication

Project Assistant
CISCO LAB
Information Processing Center Unit
BITS-PILANI
-- 
View this message in context: 
http://www.nabble.com/change-in-source-code-tp16654526p16694231.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] Reverse proxy for Primary and then Secondary

2008-04-14 Thread Indunil Jayasooriya
On Thu, Apr 10, 2008 at 7:48 PM, Amos Jeffries [EMAIL PROTECTED] wrote:

 Indunil Jayasooriya wrote:

  Hi all,
 
  I have 2 web servers . One is Primary and the other is Secondary.
 
  Please assume
  ip of primary is 1.2.3.4
  ip of secondary 2.3.4.5
 
  I want the squid reverse proxy to forward traffic to the primary server.
  When the primary goes offline, it should forward to the secondary web
  server.
 
  How can I achieve this task?
 
  I am going to keep squid as a reverse proxy in front of them.
 
  Please assume the ip of the reverse proxy is 5.6.7.8
 
  How Can I write rules in squid.conf?
 
  pls see below rules.
 
 
  http_port 80 accel defaultsite=your.main.website
 
  cache_peer ip.of.primarywebserver parent 80 0 no-query originserver
  cache_peer ip.of.secondarywebserver parent 80 0 no-query originserver
 
  acl our_sites dstdomain your.main.website
  http_access allow our_sites
 

  Add:
   cache_peer_access ip.of.primarywebserver allow our_sites
   cache_peer_access ip.of.secondarywebserver allow our_sites
   never_direct allow our_sites

Hi Amos,

Then the complete rule set will be this. Please let me know:


 http_port 80 accel defaultsite=your.main.website

 cache_peer ip.of.primarywebserver parent 80 0 no-query  originserver

 cache_peer ip.of.secondarywebserver parent 80 0 no-query  originserver

 acl our_sites dstdomain your.main.website

http_access allow our_sites

cache_peer_access ip.of.primarywebserver allow our_sites

cache_peer_access ip.of.secondarywebserver allow our_sites
never_direct allow our_sites




  Squid follows that behavior by default.

  FYI, There are some additional monitor* options to fine-tune recovery.

What are they?
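For the record, Squid 2.6's cache_peer accepts monitorurl, monitorsize, monitorinterval and monitortimeout options for peer health checks; a hedged sketch using the thread's placeholder addresses (the probe URL and interval are assumptions):

```
# Primary origin is probed; if the probe fails, traffic fails over
cache_peer 1.2.3.4 parent 80 0 no-query originserver name=primary monitorurl=http://1.2.3.4/alive.html monitorinterval=30
cache_peer 2.3.4.5 parent 80 0 no-query originserver name=secondary
cache_peer_access primary allow our_sites
cache_peer_access secondary allow our_sites
never_direct allow our_sites
```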



  Amos
  --
  Please use Squid 2.6.STABLE19 or 3.0.STABLE4




-- 
Thank you
Indunil Jayasooriya


Re: [squid-users] squid as reverse proxy, serving large files

2008-04-14 Thread Lin Jui-Nan Eric
On Tue, Apr 15, 2008 at 6:12 AM, Amos Jeffries [EMAIL PROTECTED] wrote:

  I think you are getting close to the top of the range we have benchmarked
  squid-3 at. Do you have any performance graphs we could use?

Thank you very much for your advice!
  #1 - Check the number of file descriptors your cache is allowed and using.
  You may need to rebuild with a larger set.
The number of file descriptors for a process is 32768.

  #2 - maybe run a second squid on that machine. Squid only takes up one CPU
  (helpers another maybe).
I think it is not CPU bound, because I use aufs and kqueue, and the
total CPU usage reported by top is about 15%.
I also found the disk is not so busy (iostat ~40%).

  Nothing obvious in the config. Just a few minor tweaks FYI:



Re: [squid-users] Configuring cache_peer to use ssl

2008-04-14 Thread Janis

Quoting Chris Robertson [EMAIL PROTECTED]:


So the child Squid is trying to negotiate an SSL connection with a port
on the Parent that's not set up to accept it.  See
http://www.squid-cache.org/Versions/v3/3.0/cfgman/https_port.html for
the proper directive to terminate an SSL connection.


So, on the parent there should be the line(s):

http_port IP:PORT1

for non-ssl connections and

https_port IP:PORT2 cert=self_s_cert.pem key=key.pem sslflags=NO_DEFAULT_CA,NO_SESSION_REUSE

for ssl connections

and on secondary proxy - as was written before?
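And on the child side, a hedged sketch (assumes Squid was built with SSL support; hostname and PORT2 are placeholders matching the thread's notation):

```
# Child squid.conf fragment: encrypt traffic to the parent's https_port
cache_peer parent.example.com parent PORT2 0 no-query ssl sslflags=DONT_VERIFY_PEER
```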

Janis


