[squid-users] NTLM Fails at boot

2008-02-29 Thread Wayne Swart

Hi everyone

I have a strange issue on squid-2.6.STABLE6-5.el5_1.2 running on CentOS 5.1 
(2.6.18-53.1.13.el5PAE)


I have samba and winbind working on our local domain and squid authenticates 
perfectly from it.


My issue, however, is that squid fails to start at boot time with the 
following (seemingly rather common) error:


Feb 29 09:56:58 firewall (squid): The basicauthenticator helpers are crashing 
too rapidly, need help!
Feb 29 09:56:58 firewall squid[3047]: Squid Parent: child process 3124 exited 
due to signal 6
Feb 29 09:56:58 firewall squid[3047]: Exiting due to repeated, frequent failures

After the machine has booted, I can ssh in and start squid manually as 
root from the command line, using exactly the same rc script as the one 
used at boot time. It starts without any errors and works 100% with ntlm_auth on our domain.


I believe this is a permissions issue, but I am not sure why.
I have moved smb and winbind to start after the network service and squid 
to start last in runlevel 3, but to no avail, thinking that maybe something 
in smb does not start properly before squid inits.
Then I removed the rc3.d symlink for squid and called it from rc.local 
with a short delay script:


sleep 30
/etc/init.d/squid start

It still fails with the same error.
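One common first check, offered as an assumption rather than a diagnosis: squid's helpers run as the cache_effective_user, and ntlm_auth needs access to winbindd's privileged pipe directory; if that directory's ownership or mode is (re)set during boot, the helpers crash until it is fixed. A hypothetical check (paths assumed for CentOS 5 / samba defaults):

```shell
# Verify the squid helper user can reach winbindd's privileged pipe
# directory (path assumed; check your samba build):
ls -ld /var/cache/samba/winbindd_privileged
# If the squid user cannot read it, grant access via the group:
chgrp squid /var/cache/samba/winbindd_privileged
chmod 750 /var/cache/samba/winbindd_privileged
```

It may also be worth checking /var/log/audit/audit.log for SELinux denials logged around boot, since a manual start from an ssh session can run in a different SELinux context than the boot-time init scripts.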

Here is the auth_param part of my config, even though I doubt it's a 
config problem.


auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 10

auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 5
auth_param basic realm This is a proxy server
auth_param basic credentialsttl 2 hours

I hope you guys can help me.

Thanks

Wayne


[squid-users] Downloading zip files from squid cache

2008-02-29 Thread Philippe Geril
Hi all,

I have a very basic scenario, but I am a little confused because there
are many more advanced options available.

here is the case:

>> 2 clients, client-A and client-B

>> 1 squid 2.5 server running with default squid.conf (modified acls, cache 
>> directories, etc)

>> This is a url http://example.com/application.zip [120MB]

-

Now, for example, client-A requests the above URL,
http://example.com/application.zip, which is 120 MB.

I want that, whenever client-B requests the same URL from the squid
box, squid verifies that the URL exists in the cache and then delivers it
from the cache instead of going to the internet to retrieve it.

My questions are:
- How do I configure this?
- I want to ignore client no-cache headers, i.e. always serve from the
cache rather than going to the server.
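A sketch of the relevant squid.conf settings, hedged (the sizes are illustrative, and directive availability varies by version): by default maximum_object_size is only 4 MB, so a 120 MB zip would never be cached in the first place, and reload_into_ims converts client no-cache reloads into revalidations instead of forced misses:

```
# Raise the object size cap so a 120 MB file is cacheable at all
# (the default maximum_object_size is 4 MB).
maximum_object_size 200 MB

# Turn client "no-cache"/reload requests into If-Modified-Since
# revalidations instead of forced misses (violates HTTP; use with care).
reload_into_ims on
```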


any help would be highly appreciated.

peace

PG

-- 
  Philippe Geril
  [EMAIL PROTECTED]

-- 
http://www.fastmail.fm - Same, same, but different…



Re: [squid-users] Port Changing in Fedora 8

2008-02-29 Thread Peter Albrecht
Hi Steve,

On Thursday 28 February 2008 20:18, Steve B wrote:
> I am trying to change the port that Squid uses from 3128(default) to
> something other than it obviously. My main problem is that when I
> change the port to Anything, even 3127(made something up), squid will
> NOT start. For example when I type the command 'service squid start'
> with the port at 3128, it will start, but if it is anything other then
> that, it will go: 'Starting Squid. [FAILED]'
> 
> Any help?

Some questions:

1) _Why_ do you want to use another port? You have to tell the clients
   anyway. 
2) _How_ did you configure the other port? What is in your http_port  
   directive?
3) Is there any other service running on the port you want to use? Try
   "nmap ip-of-your-server" to see if any service is running on that port.
4) Is there any information in the log files?

Regards,

Peter

-- 
Peter Albrecht, Novell Training Services


[squid-users] Port Changing in Fedora 8

2008-02-29 Thread Steve B
Sorry, forgot to reply to squidusers again.

1) I want to use another port because where I am, no other port is
 open. The port is 81.
 2) I put inside of my .conf file: http_port 81.
 3) Not from what the grep command showed. Someone else suggested
 using it, so I did; supposedly nothing is running on that port.
 4) Not from what I had seen.
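One likely culprit worth checking, offered as an assumption rather than a confirmed diagnosis: on Fedora, SELinux policy only lets squid bind to its registered ports (3128 among them), which would explain why 3128 works and 81 fails. A hypothetical check and fix, run as root:

```shell
# Look for SELinux denials logged when squid tried to bind port 81:
grep -i 'denied' /var/log/audit/audit.log | grep squid

# If denials appear, register port 81 for squid's port type:
semanage port -a -t http_cache_port_t -p tcp 81
```

cache.log should also say explicitly why the bind failed.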



 On Fri, Feb 29, 2008 at 5:03 AM, Peter Albrecht
 <[EMAIL PROTECTED]> wrote:
 > Hi Steve,
 >
 >
 >
 >  On Thursday 28 February 2008 20:18, Steve B wrote:
 >  > I am trying to change the port that Squid uses from 3128(default) to
 >  > something other than it obviously. My main problem is that when I
 >  > change the port to Anything, even 3127(made something up), squid will
 >  > NOT start. For example when I type the command 'service squid start'
 >  > with the port at 3128, it will start, but if it is anything other then
 >  > that, it will go: 'Starting Squid. [FAILED]'
 >  >
 >  > Any help?
 >
 >  Some questions:
 >
 >  1) _Why_ do you want to use another port? You have to tell the clients
 >anyway.
 >  2) _How_ did you configure the other port? What is in your http_port
 >directive?
 >  3) Is there any other service running on the port you want to use? Try
 >"nmap ip-of-your-server" to see if any service is running on that port.
 >  4) Is there any information in the log files?
 >
 >  Regards,
 >
 >  Peter
 >
 >  --
 >  Peter Albrecht, Novell Training Services
 >



 --
 -Steve



-- 
-Steve


[squid-users] Authentication Hack

2008-02-29 Thread Dave Coventry
I understand that transparent proxy cannot ask the browser for
Authentication because the browser is not aware of the existence of
the proxy.

I can't believe that there is not a work-around for this...

I have several laptops on my network which are used on other networks,
so I need the connection through the proxy to be "automagic", to the
extent that I don't need to ask my CEO to reconfigure his browser
every time he comes into the office. But I also need to be able to
track web usage.

I have thought up a hack involving the following:
I can set up a file, /etc/squid/iplist, containing an IP address on each line.

Then I set up the squid.conf to have the following line:

acl authorisedip src "/etc/squid/iplist"

I changed the ERR_ACCESS_DENIED file to contain a form which calls a
perl program (catchip.pl) passing it a username and password which, if
correct, appends the user's ip to the /etc/squid/iplist file.
(removing the IP when the user closes his browser would be trickier).

However, this all falls down because it appears that the file is only
parsed at startup, which rather subverts its usefulness.

I can't believe that this avenue has not been fully explored. Can
anyone comment on this hack?

Is there a simpler method of getting this done?
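For what it's worth, the startup-only parsing can be sidestepped with an external ACL helper, which squid consults per request. A minimal sketch, assuming Squid 2.5+'s external_acl_type interface; the helper script name, list path, and wiring below are hypothetical:

```python
#!/usr/bin/env python
# Hypothetical external ACL helper: squid writes one client IP (%SRC)
# per line on stdin; we answer OK/ERR after re-reading the list file,
# so additions take effect without restarting squid.
import sys

IPLIST = "/etc/squid/iplist"

def allowed(ip, path=IPLIST):
    """Return True if ip appears, one address per line, in path."""
    try:
        with open(path) as f:
            return ip in (line.strip() for line in f)
    except IOError:
        return False

if __name__ == "__main__":
    for line in sys.stdin:
        sys.stdout.write("OK\n" if allowed(line.strip()) else "ERR\n")
        sys.stdout.flush()  # squid expects an unbuffered reply per query
```

Wired into squid.conf with something like `external_acl_type iplist ttl=0 %SRC /usr/local/bin/check_ip.py` and `acl authorisedip external iplist` (both lines hypothetical); ttl=0 disables result caching so removals also take effect immediately.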


[squid-users] Creating an Out of Service Page using Custom Errors

2008-02-29 Thread Mark A. Schnitter
Hello,

I'm attempting to create an Out of Service page hosted on the same box
that is running Squid. I'm running a reverse proxy configuration with
several backend boxes. When one of the backend boxes is down for scheduled
maintenance or is having a problem, I would like to have a custom html
page displayed to indicate the problem instead of a Squid proxy error
page.

I've searched through the mail archives, Google, etc. and have found lots
of good info around how to display a custom page, but so far the only way
to trap or trigger the page seems to be confined to creating an ACL that
looks for specific information to display the page. I've researched the
error_map and deny_info tags and haven't been able to find a way to
trigger the custom page when I get a squid error.

For example, if my Windows box is up but IIS is down, Squid returns
error code 111 (connection refused). The error_map tag only seems to
accept HTTP response codes, so I'm out of luck with that approach. If I
try to use deny_info, I can't
find an ACL tag that allows me to identify Squid errors.

I would like to identify the following two conditions: Target box is down
and Target Box is up, but web server is down.

If there was a way I could trap the errors Squid was producing without
changing the default error messages, that would be the ideal solution.

Hypothetical Example:

acl oos squid-error 111
deny_info ERR_OOS oos
- Where squid-error is the trapping mechanism
- Where ERR_OOS is the custom error page

Any ideas or different approaches?
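One workaround sometimes used in accelerator setups, sketched under assumptions (names, addresses, and ports below are hypothetical, and failover behaviour should be tested): add a second originserver pointing at a local "sorry" vhost, so that when the real backend is marked dead, squid forwards to the maintenance page instead of generating an error:

```
# Real backend, tried first while it is alive.
cache_peer 10.0.0.10 parent 80 0 no-query originserver name=real
# Local vhost serving the static "out of service" page; squid falls
# back to this peer when the real one is marked dead.
cache_peer 127.0.0.1 parent 8081 0 no-query originserver name=sorry
cache_peer_access real allow all
cache_peer_access sorry allow all
```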

Thanks,
Mark



Re: [squid-users] Authentication Hack

2008-02-29 Thread jonr

Quoting Dave Coventry <[EMAIL PROTECTED]>:


I understand that transparent proxy cannot ask the browser for
Authentication because the browser is not aware of the existence of
the proxy.

I can't believe that there is not a work-around for this...

I have several laptops on my network which are used on other networks,
so I need the connection through the proxy to be "automagic" to the
extent that I don't need to ask my CEO to reconfigure his browser
everytime he comes into the office. But I also need to be able to
track web usage.

[rest of original message snipped]



Have you looked into a .pac file? It can be configured to detect which
network you are on and, depending on that, either go through your
internal proxy or, if outside, use the external address.
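A minimal proxy.pac sketch along those lines (the network range and proxy name are made up; myIpAddress() and isInNet() are standard PAC built-ins):

```javascript
function FindProxyForURL(url, host) {
    // On the office network: use the internal proxy.
    if (isInNet(myIpAddress(), "192.168.0.0", "255.255.0.0")) {
        return "PROXY proxy.example.local:3128";
    }
    // Anywhere else (home, client sites): go direct.
    return "DIRECT";
}
```

The browsers are then pointed at the .pac URL once (or discover it via WPAD), so no per-trip reconfiguration is needed.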


Hope that helps,

Jon




[squid-users] Cache url's with "?" question marks

2008-02-29 Thread Saul Waizer
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hello List,

I am having problems trying to cache images/content that comes from a
URL containing a question mark ('?') in it.

Background:
I am running squid Version 2.6.STABLE17 on FreeBSD 6.2 as a reverse
proxy to accelerate content hosted in America served in Europe.

The content comes from an application that uses TOMCAT so a URL
requesting dynamic content would look similar to this:

http://domain.com/storage/storage?fileName=/.domain.com-1/usr/14348/image/thumbnail/th_8837728e67eb9cce6fa074df7619cd0d193_1_.jpg

Such a request always results in a MISS, with a log entry similar
to this:

TCP_MISS/200 8728 GET http://domain.com/storage/storage? -
FIRST_UP_PARENT/server_1 image/jpg

I've added this to my config: acl QUERY urlpath_regex cgi-bin, as you can
see below, but it makes no difference. I also tried adding
acl QUERY urlpath_regex cgi-bin \? and for some reason ALL requests
then result in a MISS.

Any help is greatly appreciated.

My squid config looks like this (obviously the real IPs were changed):

# STANDARD ACL'S ###
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
# REVERSE CONFIG FOR SITE #
http_port 80 accel vhost
cache_peer 1.1.1.1 parent 80 0 no-query originserver name=server_1
acl sites_server_1 dstdomain domain.com
#  REVERSE ACL'S FOR OUR DOMAINS ##
acl  ourdomain0  dstdomain   www.domain.com
acl  ourdomain1  dstdomain   domain.com
http_access allow ourdomain0
http_access allow ourdomain1
http_access deny all
icp_access allow all
#### HEADER CONTROL ###
visible_hostname cacheA.domain.com
cache_effective_user nobody
forwarded_for on
follow_x_forwarded_for allow all
header_access All allow all
### SNMP CONTROL  ###
snmp_port 161
acl snmppublic snmp_community public1
snmp_access allow all
## CACHE CONTROL ####
access_log /usr/local/squid/var/logs/access.log squid
acl QUERY urlpath_regex cgi-bin
cache_mem 1280 MB
cache_swap_low 95
cache_swap_high 98
maximum_object_size 6144 KB
minimum_object_size 1 KB
maximum_object_size_in_memory 4096 KB
cache_dir ufs /storage/ram_dir1 128 16 256
cache_dir ufs /storage/cache_dir1 5120 16 256
cache_dir ufs /storage/cache_dir2 5120 16 256
cache_dir ufs /storage/cache_dir3 5120 16 256

Also, here is the output of a custom script I made to parse the
access.log; it sorts and displays the top 22 responses so I can
compare them with Cacti. I am trying to increase the hit ratio, but so
far it is extremely low.

1  571121 69.3643% TCP_MISS/200
2  98432 11.9549% TCP_HIT/200
3  51590 6.26576% TCP_MEM_HIT/200
4  47009 5.70938% TCP_MISS/304
5  17757 2.15664% TCP_IMS_HIT/304
6  11982 1.45525% TCP_REFRESH_HIT/200
7  11801 1.43327% TCP_MISS/404
8  6810 0.827095% TCP_MISS/500
9  2508 0.304604% TCP_MISS/000
   10  1323 0.160682% TCP_MISS/301
   11  1151 0.139792% TCP_MISS/403
   12  1051 0.127647% TCP_REFRESH_HIT/304
   13  430 0.0522248% TCP_REFRESH_MISS/200
   14  127 0.0154245% TCP_CLIENT_REFRESH_MISS/200
   15  83 0.0100806% TCP_MISS/401
   16  81 0.00983769% TCP_CLIENT_REFRESH_MISS/304
   17  35 0.00425085% TCP_MISS/503
   18  20 0.00242906% TCP_DENIED/400
   19  19 0.00230761% TCP_HIT/000
   20  19 0.00230761% TCP_DENIED/403
   21  14 0.00170034% TCP_SWAPFAIL_MISS/200
   22  1 0.000121453% TCP_SWAPFAIL_MISS/30

Thanks!




-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFHyHfrAcr37anguZsRAtktAKCKlqDTxrtmLLpfEK+cq92OOS0JwQCeIuiG
59G9YtNTZXD5JIExywCYprI=
=1Uls
-END PGP SIGNATURE-


RE: [squid-users] Reverse proxy woes

2008-02-29 Thread Anthony Tonns
> In accelerator setups you probably want to make this a much shorter
> limit. See minimum_expiry_time in squid.conf.

Henrik,

Thanks! I'm a little slow in my reply, but setting minimum_expiry_time
helped. Initially I set max-age to a value different from the Expires
header, but setting minimum_expiry_time lower than both max-age and the
time in the Expires header is the right way to fix this.

Thanks again,
Tony


[squid-users] Need a cache_peer only when normal path down

2008-02-29 Thread Tuc at T-B-O-H.NET
Hi,

I'm terrible at trying to make short subjects, sorry.

I have a FreeBSD squid cache in transparent mode off 
a Cisco 3640 (Thanks to the Wiki on that!). The 3640 is
running IP SLA between a wireless broadband and satellite
connection. If everything is great, the default route is
the wireless broadband. If it tanks, it flips to satellite.

While sending over the wireless broadband, it should
just "act natural". When it goes over the satellite, they have
their own Web Acceleration running on port 87. 

Short of wiring up some ipfw-type scripts with monitoring, is
there something I might be able to do with squid to handle this
automatically?
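One squid-only approach that might fit, sketched under assumptions (the peer address is hypothetical): declare the satellite accelerator as a default parent but prefer going direct, so squid only falls back to the parent when direct forwarding fails:

```
# Satellite operator's web-acceleration proxy (address hypothetical).
cache_peer sat-accel.example.net parent 87 0 no-query default
# Try direct first; squid retries via the parent only when the
# direct attempt fails.
prefer_direct on
```

Note the caveat: this only fires if direct connections actually fail. If the 3640 silently re-routes traffic over the satellite, squid's direct attempts still succeed and it never switches to the port-87 accelerator, so something external (e.g. the ipfw scripts mentioned above) would still be needed to signal the flip.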

Thanks, Tuc


[squid-users] my squid hang up

2008-02-29 Thread mkdracoor
Hello, my problem is that I can't search for anything in Google. The page
opens fine, but when I search for anything, the proxy gives a "time out" 
error. Here is my conf file. I don't know if I am missing something; please help me.

thanks


# Configuracion Squid by mkdracoor
# 
#
# This is the default Squid configuration file. You may wish
# to look at the Squid home page (http://www.squid-cache.org/)
# for the FAQ and other documentation.
#
# The default Squid config file shows what the defaults for
# various options happen to be.  If you don't need to change the
# default, you shouldn't uncomment the line.  Doing so may cause
# run-time problems.  In some cases "none" refers to no default
# setting at all, while in other cases it refers to a valid
# option - the comments for that keyword indicate if this is the
# case.
#


# NETWORK OPTIONS
# -

http_port 3128

# cache_peer
cache_peer  192.168.22.75 parent 3128 0 default no-query login=PASS
cache_peer_domain 192.168.22.75
#
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache

cache_mem 100 MB
cache_swap_low 90
cache_swap_high 95
cache_dir ufs /var/cache/squid 500 16 256
refresh_pattern ^ftp:  1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern .  0 20% 4320

#
log_fqdn off
log_mime_hdrs off
emulate_httpd_log off
half_closed_clients off
cache_access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log
log_ip_on_direct off
client_netmask 255.255.255.0
#__
ftp_user [EMAIL PROTECTED]
ftp_list_width 32
ftp_passive on
ftp_sanitycheck off
hosts_file /etc/hosts


# Autentificación.
# --
authenticate_ttl 30 minutes
authenticate_ip_ttl 0 seconds
auth_param basic children 5
auth_param basic realm Internet Proxy-Caching (JCCE MASO III)
auth_param basic credentialsttl 5 minutes
auth_param basic casesensitive on
authenticate_cache_garbage_interval 1 hour
auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/passwd

# Timeouts.
# --
forward_timeout 4 minutes
connect_timeout 1 minute
peer_connect_timeout 30 seconds
read_timeout 5 minutes
request_timeout 1 minute
persistent_request_timeout 1 minute
client_lifetime 1 day
half_closed_clients on
pconn_timeout 120 seconds
ident_timeout 10 seconds
shutdown_lifetime 30 seconds

# ACLs.
# --
acl PURGE method PURGE
acl CONNECT method CONNECT
acl manager proto cache_object
acl passwd proxy_auth REQUIRED

acl all src 0.0.0.0/0.0.0.0
acl localhost src 127.0.0.1/255.255.255.255
#acl intranet src 192.168.3.34/45
acl jc_ips src "/etc/squid/jc_ips"

acl SSL_ports port 443 563
acl Safe_ports port 80 443 # HTTP, HTTPS
acl Safe_ports port 21  # FTP
acl Safe_ports port 563  # HTTPS, SNEWS
acl Safe_proxy port 3128 # PROXY
acl Safe_admin port 70  # GOPHER
acl Safe_admin port 210  # WAIS
acl Safe_admin port 280  # HTTP-MGMT
acl Safe_admin port 488  # GSS-HTTP
acl Safe_admin port 591  # FILEMAKER
acl Safe_admin port 777  # MULTILING HTTP
acl Safe_admin port 1025-65535 # Unregistered Ports

# ACLs Personalizadas.
acl porno0 dstdomain "/etc/squid/filtros/porno0"
acl peligroso0 dstdomain "/etc/squid/filtros/peligroso0"
acl peligroso1 url_regex "/etc/squid/filtros/peligroso1"
acl noporno0 dstdomain "/etc/squid/filtros/noporno0"
acl noporno1 url_regex "/etc/squid/filtros/noporno1"
acl descargas0 urlpath_regex "/etc/squid/filtros/descargas0"
acl descargas1 url_regex "/etc/squid/filtros/descargas1"
acl sitesall dstdomain "/etc/squid/filtros/sitesall"

# Reglas Default.
http_access allow manager localhost
http_access deny manager
http_access allow PURGE localhost
http_access deny PURGE
http_access allow jc_ips passwd
http_access deny all

#Permitir y Denegar Filtros
http_access deny all porno0
http_access deny all peligroso0
http_access deny all peligroso1
http_access allow all noporno0 noporno1
http_access allow all descargas0 descargas1


# Parametros Administrativos.
# --
mail_program mail
cache_mgr [EMAIL PROTECTED]
cache_effective_user proxy
visible_hostname proxy.jcmaso3

# Misceláneas
# --
ie_refresh off
retry_on_error on
redirector_bypass off
cachemgr_passwd disable all
dead_peer_timeout 10 seconds
hierarchy_stoplist cgi-bin ?
mime_table /etc/squid/mime.conf
coredump_dir /var/cache/squid
icon_directory /etc/squid/icons
error_directory /etc/squid/errors/Custom 





Re: [squid-users] Authentication Hack

2008-02-29 Thread Adrian Chadd
Look at external ACL helpers; you may find what you're looking for there.



Adrian

On Fri, Feb 29, 2008, Dave Coventry wrote:
> I understand that transparent proxy cannot ask the browser for
> Authentication because the browser is not aware of the existence of
> the proxy.
> 
> I can't believe that there is not a work-around for this...
> 
> [rest of original message snipped]

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] Cache url's with "?" question marks

2008-02-29 Thread Adrian Chadd
G'day,

Just remove the QUERY ACL and the cache ACL line that uses "QUERY".
Then turn on header logging (log_mime_hdrs on) and see whether the
replies to the dynamically generated content actually include caching
information.
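Concretely, the change described amounts to something like the following (the acl line is from the original config; a cache/no_cache line referencing it, if present, goes too, and log_mime_hdrs is a standard squid directive):

```
# Remove (or comment out) the QUERY acl and any cache/no_cache line
# that references it:
#acl QUERY urlpath_regex cgi-bin
#cache deny QUERY

# Log request/reply headers to access.log so you can inspect
# Cache-Control, Expires, etc. on the Tomcat responses:
log_mime_hdrs on
```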



Adrian

On Fri, Feb 29, 2008, Saul Waizer wrote:
> Hello List,
> 
> I am having problems trying to cache images*/content that comes from a
> URL containing a question mark on it ('?')
> 
> [rest of original message snipped]

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] Squid-2, Squid-3, roadmap

2008-02-29 Thread Adrian Chadd
(I'm going to try and raise these during the London meeting if I can
get in with Skype or something similar.)

On Thu, Feb 28, 2008, Robert Collins wrote:

> Folk will scratch their own itches - thats open source for you. I know
> I'd really prefer it if features being added are *primarily* added to -3
> - I'm totally supportive of backporting to -2, but would rather see it
> as a backporting process rather than a forward porting process.

Trouble is, "scratching their own itches" is primarily what's caused the
conflict between ongoing squid-2 and squid-3 development. I'd really
prefer it if the majority of squid (core) developers decided on a path
forward, so those currently using Squid can decide whether the project
is going in a direction that suits their needs.

Right now .. well, there's not much in the way of direction.

> >* If that success metric is not reached, what is the contingency  
> > plan?
> 
> I don't know what you really mean here. Squid isn't a corporate entity
> with a monetary either-or marketing/funding style problem.

Yes, but Squid is being used by lots of companies who rely on it, and
-they- have a problem if Squid starts diverging from their needs.

> >* How will these answers change if a substantial number of users  
> > willingfully choose to stay on -2 (and not just because they neglect  
> > to update their software)?
> 
> Well, I'd hope that at the minimum those users would file bugs on the
> things about -3 that keep them on -2, so that developers can fix
> them :).

Which developers? What time? The set of features required to make -3
cover 100% of -2's functionality is well known, and a lot of them are
in Bugzilla. Somehow, though, it's not being worked on, so people stay
on -2. The new features in -3 are still tainted by the fact that it's
buggy and slow.

The trouble with -3 that -I- see as a (core) developer is that the set
of features being worked on in -3 doesn't correlate well with the set
of features in -2 that people are using, including performance. This
has been a problem for a number of years.

The reason I pieced together the Squid-2 "roadmap" that's in the Wiki now
is that I saw a lot of companies who are or were using Squid, and some
items in the roadmap are what -they- saw as important. Some of the items
in the roadmap are my personal itches, but at the end of the day we have
to get paid, and that roadmap was going to be how I attacked the problem
of ongoing funding.

This upset (understandably) the other developers working on Squid-3, and
I've put a hold on it for now. Somehow, though, nothing has happened, and
I'm hoping -something- happens soon. Past history in the -3 development
cycle (and Squid in general over the last few years) has shown that
although we have a lot of good ideas, we just aren't implementing them,
and meanwhile entire projects spring up implementing much faster HTTP
proxying/caching/routing code that we're not leveraging in our own project.

Just to put it down for the record, I've had enough interest in my Squid-2
roadmap that I see it as a path forward for the next twelve months of my
time, and enough financial interest in my Squid-2 roadmap that I may be
able to start funding a couple of other developers. I stopped pushing it
due to discussions inside squid-core, as I really would like to work on this
as a team rather than splintering what constitutes "Squid", but I'm rapidly
reaching the point where I'll do it if it means actual tangible progress
will be made.

> >* Who is using -3 in production now? How are you using it (load,  
> > use case, etc.) and what are your experiences?
> 
> I use -3, have for ages. But its trivial home-site accelerating and
> browsing, so entirely uninteresting at the scope of yahoo :).

I tried putting -3 into trial production last week as a reverse proxy
accelerator for a university project - it couldn't handle the load -2
could. That's only ~800 requests a second on modest hardware.



Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -