Re: RS: [squid-users] winbindd: Exceeding 200 client connections, no idle connection found

2008-03-07 Thread Francisco Martinez Espadas

I downloaded squid from the project web site and installed it the typical way:
configure, make, make install

thanks

On Thursday, 06 March 2008 at 11:44 -0600, Dave Augustus
wrote:
 On Tuesday 04 March 2008 5:08:54 am Francisco Martinez Espadas wrote:
  2.6stable18
 
 I have a CentOS 5.0 box now - where did you get squid 2.6stable18 from? I
 don't see it in the upgrade path?
 
 Thanks!
 Dave 


Re: [squid-users] bypass parent proxy for some urls - dstdomains

2008-03-07 Thread Amos Jeffries

Paul Anderson wrote:

Hello,

I apologize if this has been answered before, but I have been unable to find
anything.

I am trying to set up the following:

LAN - Squid - Parent Proxy (ISP)


I have basically added the following lines to squid.conf file

cache_peer parentcache.foo.com parent 3128 0 no-query default
acl all src 0.0.0.0/0.0.0.0
never_direct allow all

and all traffic is going through the parent proxy. However, I would like to
be able to set up an ACL in order to allow some websites to go direct,
without going through the parent proxy. So basically what I need is the
following:

check the destination to see if it is allowed to bypass the parent proxy (or
access denied, or blocked); if no ACL matches, then forward the request to
the parent proxy

any and all help would be greatly appreciated; please let me know if you
need more info.


Depending on your version of squid:
http://www.squid-cache.org/Versions/v2/2.6/cfgman/acl.html

and cache_peer_access
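
The usual way to combine these is always_direct plus cache_peer_access. A
sketch, assuming your parent is the cache_peer above (the domain names are
placeholders):

```
# Sites allowed to bypass the parent proxy (placeholder domains)
acl direct_sites dstdomain .example.com .example.org

# Go direct for those sites; everything else must use the parent
always_direct allow direct_sites
never_direct allow all

# Also stop the parent from being selected for them
cache_peer parentcache.foo.com parent 3128 0 no-query default
cache_peer_access parentcache.foo.com deny direct_sites
```

always_direct is checked before never_direct, so the listed sites are
fetched directly even though never_direct allows all.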

Amos
--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.


Re: [squid-users] Serve JSON object on access denial?

2008-03-07 Thread Amos Jeffries

Dimitry Golubovsky wrote:

Hi,

Is there any way to make Squid serve a JSON object when access to some
proxied resource is denied?

I use Squid as a reverse proxy to control access to CouchDB database
(which by itself does not have any access control yet). In the case of
error, CouchDB serves a specifically-formatted JSON object. I would
like to be able to serve a similar JSON object (with content-type
application/json) if the proxy denies access, instead of a HTML
page.


In 2.6+ and 3.0+ you can point deny_info at any URL you like, including 
a script that produces JSON objects and the appropriate HTTP MIME 
headers for them. Whatever they are.
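
As a sketch, a deny_info line of this shape should work (the ACL name,
network, and URL below are placeholders for illustration):

```
# Replace with the acl that actually triggers your denial
acl blocked_clients src 192.168.1.0/24

# Serve a JSON error document instead of the stock HTML error page
deny_info http://127.0.0.1/errors/denied.json blocked_clients
http_access deny blocked_clients
```

The script or file behind that URL is responsible for sending
Content-Type: application/json.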


Amos
--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.


Re: [squid-users] Configuring reverse proxy for both 80/443

2008-03-07 Thread Matus UHLAR - fantomas
On 04.03.08 17:11, Nick Duda wrote:
 I seem to be stumped. I need to reverse proxy for one internal server
 that listens on both 80 and 443. How can I configure squid to proxy for
 the same cache-peer on both 80 and 443? As far as I can see you can only
 specify one protocol per cache-peer line. I think I am missing
 something.

Why do you need to reverse-proxy the https? And if you need it, why not
connect to the webserver on port 80?

I mean, is there a real reason to reverse-proxy https by using https from
client to proxy and https from proxy to server?
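
If end-to-end https really is needed, one approach (a sketch in Squid 2.6
accelerator syntax; the hostnames, IP, and certificate paths are
placeholders) is to define the same origin server twice, once per port, and
steer requests with cache_peer_access:

```
http_port 80 accel defaultsite=www.example.com
https_port 443 cert=/etc/squid/cert.pem key=/etc/squid/key.pem accel defaultsite=www.example.com

cache_peer 10.0.0.1 parent 80 0 no-query originserver name=web80
cache_peer 10.0.0.1 parent 443 0 no-query originserver ssl name=web443

# Requests whose URL port is 443 go to the ssl peer, the rest to port 80
acl to_https port 443
cache_peer_access web443 allow to_https
cache_peer_access web443 deny all
cache_peer_access web80 deny to_https
cache_peer_access web80 allow all
```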
-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
The early bird may get the worm, but the second mouse gets the cheese. 


Re: [squid-users] Downloading zip file via squid cache

2008-03-07 Thread Amos Jeffries

Philippe Geril wrote:

Hi all,

I have a very basic scenario, but I am a little confused because there are
quite a lot of advanced options available.

here is the case:


2 clients, client-A and client-B



1 squid 2.5 server running with default squid.conf (modified acls, cache 
directories, etc)



This is a url http://example.com/application.zip [120MB]


-

Now, for example, client-A requests the above url
http://example.com/application.zip, which is 120 MB.

I want that, whenever client-B requests the same url from the squid
box, squid verifies that the url exists in the cache and then delivers it
from the cache instead of going to the internet to retrieve it.

My questions are:
- How do I configure this?
- I want to ignore client no-cache headers, i.e. always serve from the
cache rather than going to the server.


any help would be highly appreciated.

peace

P


Please try an upgrade to 2.6 first at least.
http://www.squid-cache.org/Versions/v2/2.6/cfgman/miss_access.html
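
For the caching part, a sketch in Squid 2.6 syntax (the size limit and
refresh numbers are illustrative assumptions, not recommendations):

```
# Allow large objects to be stored at all (default is much smaller)
maximum_object_size 200 MB

# Cache .zip files and ignore client forced-reload requests
refresh_pattern -i \.zip$ 1440 80% 10080 ignore-reload
```

With that in place, client-B's request for an already-cached URL should be
answered as a TCP_HIT without going back to the origin server.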

Amos
--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.


[squid-users] problem with access_log

2008-03-07 Thread Ramashish Baranwal
Hi,

I am trying to prevent logging of certain urls using acls on
access_log. The corresponding part of my squid.conf looks like-

acl test_url url_regex .*test.*

# don't log test_url
access_log none test_url

# log others
access_log log-file-path squid

Squid, however, is not honoring the acl. It logs everything. The log entry for the request

http://netdev.com/test/

looks like-

1204882748.408 RELEASE -1  15A550F13DEC5BEE462C4DDCA8645838
403 1204882748 0 1204882748 text/html 1091/1091 GET
http://netdev.com/test/

What am I missing here?
My squid version is squid/2.6.STABLE16.

Thanks in advance,
Ram


Re: [squid-users] problem with access_log

2008-03-07 Thread Cassiano Martin

Are you looking at cache.log or access.log?

The entry you pasted is in store.log format (a RELEASE line), so it looks
like you're watching the wrong file; check access.log instead.

Ramashish Baranwal escreveu:

Hi,

I am trying to prevent logging of certain urls using acls on
access_log. The corresponding part of my squid.conf looks like-

acl test_url url_regex .*test.*

# don't log test_url
access_log none test_url

# log others
access_log log-file-path squid

Squid, however, is not honoring the acl. It logs everything. The log entry 
for the request


http://netdev.com/test/

looks like-

1204882748.408 RELEASE -1  15A550F13DEC5BEE462C4DDCA8645838
403 1204882748 0 1204882748 text/html 1091/1091 GET
http://netdev.com/test/

What am I missing here?
My squid version is squid/2.6.STABLE16.

Thanks in advance,
Ram
  






Re: [squid-users] Re: Real hit count of a user? Can it be really found?

2008-03-07 Thread Aykut Demirkol
 Ahmet wrote:
 Hi,

 I am trying to count the web hits of users. With a proxy it seems
 easy (I tried combinations of squid, tiny, privoxy in
 transparent modes).
 But it is obvious that the hits in the logs are not purely the hits
 the users intended. For example, when a user goes to cnn.com, cnn.com
 calls other ad pages or non-ad pages, and each is seen as a user hit
 in the logs. So for a real hit count an analysis must be made on the
 logs. Do you know any tool or proxy that can help with such analysis?

 The second choice is writing our own tool that parses the logs and
 does an analysis on the referer field. But depending solely on the
 referer can cause false positives for users clicking a link on a page.
 To investigate further, I captured (with ethereal) the outgoing
 packets for usual user behaviour (clicking a link) and for pages
 calling other pages. The request packets all seem to have the same
 headers and similar header values. So I am stuck and could not find
 any piece of evidence to track and distinguish the hits.
 Is there a known theoretical or practical way of distinguishing these
 behaviours?



 No.

 We use http://awstats.sourceforge.net/ because we need some kind of
 count, but there's no way to count people seeing the page.

 You've got robots of various kinds and user agents that lie and all
 kinds of other stuff that makes the data fuzzy.  The best you can
 get is a general increasing or decreasing trend.

 For all you know, someone's using a script to launch a browser to hit
 your page as a diagnostic poll to see if their connection is still up.

 Having said all that, awstats will probably do what you want it to.

 Cheers!

Daniel, thanks for the reply.
I need to learn the exact action taken in the client browser, such as
whether the user clicked a link or the incoming page is calling another
page. So parsing and analyzing proxy logs or Apache logs does not seem
to help in that case.

AB



[squid-users] Content adaptation proxy?

2008-03-07 Thread Aykut Demirkol
Hi,
I know that squid can do content adaptation (modification) via ICAP, and
there is an ongoing project for eCAP.
Should I wait for eCAP or start with ICAP?
I couldn't find any good documentation showing how to use a given ICAP
server with Squid. Does anyone know of one?
Are there alternative proxy servers for content adaptation (modification)
that do not depend on ICAP and have configurable internal systems?

Any help will be greatly appreciated. Thanks in advance,

AB
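
For reference, hooking Squid 3.0 up to an ICAP server takes only a few
lines of squid.conf; a sketch (the service address and the svc/class names
are placeholders):

```
icap_enable on
# bypass flag 0: treat the ICAP service as required
icap_service svc_req reqmod_precache 0 icap://127.0.0.1:1344/reqmod
icap_class class_req svc_req
icap_access class_req allow all
```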


Re: [squid-users] Re: Re: [squid-users] centralized storage for squid

2008-03-07 Thread Pablo García
I have the same problem, though I don't remove my squid servers very often.
I've partially solved this problem thanks to the implementation of
the algorithm in my load balancer.
What it does is calculate the hash taking into account all the
squids in the pool, whether they're up or down. Then, if the algorithm
chooses a server that is down, the calculation happens again. So
if one of my squids restarts, as soon as it is up again it receives
the same urls as before, serving from disk cache instead of memory.

What you can also try is to link all the squids together with ICP to
create a sibling relationship between them, though I guess the
hierarchical cache scenario would help you best to reduce the load on
your web servers.
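
The rehash-on-down scheme described above can be sketched in a few lines of
Python (function and server names are illustrative, not any real load
balancer's API):

```python
import hashlib

def pick_server(url, servers, is_up):
    """URL-hash over the full pool (up or down); if the chosen server
    is down, re-hash with a salt until an up one is found. Because the
    pool membership never changes, a server that comes back receives
    exactly the same URLs it served before."""
    for attempt in range(10 * len(servers)):
        key = ("%s#%d" % (url, attempt)).encode()
        idx = int(hashlib.md5(key).hexdigest(), 16) % len(servers)
        if is_up(servers[idx]):
            return servers[idx]
    raise RuntimeError("no server available")
```

The key property is that the mapping is stable: a down server only diverts
its own URLs, and they return to it as soon as it is up again.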

Hope this helps,

Regards, Pablo


2008/3/7 Siu Kin LAM [EMAIL PROTECTED]:
 Hi Pablo

  Actually, this is my case.
  The URL-hash is helpful in reducing duplicated
  objects. However, once a squid server is added or removed,
  the load balancer needs to recalculate the URL hashes,
  which causes a lot of TCP_MISS results on the squid
  servers at the initial stage.

  Do you have the same experience?

  Thanks


  --- Pablo García [EMAIL PROTECTED] wrote:



   I dealt with the same problem using a load balancer
   in front of the
   cache farm, using a URL-HASH algorithm to send the
   same url to the
   same cache every time. It works great, and also
   increases the hit
   ratio a lot.
  
   Regards, Pablo
  
   2008/3/6 Siu Kin LAM [EMAIL PROTECTED]:
Dear all
   
 At this moment, I have several squid servers for http
 caching. Many duplicated objects have been found on
 different servers. I would like to minimize data storage
 by installing a large centralized storage system and
 having the squid servers mount it as a data disk.

 Has anyone tried this before?
   
thanks a lot
   
   
   
  






[squid-users] Ayuda contra Live MSN

2008-03-07 Thread Marco Jesus Chuco

Friends:

I would like your help in evaluating how to block Live Messenger.

I have configured ACLs by MIME type, port and url_path, but this only
works for versions 7.0 and below; in the end Live Messenger always ends
up connecting.

Could you please help me figure out how to do this?


Marco Jesus Chuco
 Technical Support
 Novatronic
+51 (01) 415-2469
+51 (01) 93512457 

No virus found in this outgoing message.
Checked by AVG Free Edition. 
Version: 7.5.516 / Virus Database: 269.21.3/1308 - Release Date: 03/03/2008
10:01 a.m.
 



RE: [squid-users] Ayuda contra Live MSN

2008-03-07 Thread Marco Jesus Chuco
I have a firewall blocking all connections from my LAN. The proxy listens
on a port that is allowed.
When I watch /var/log/squid/access.log I see that Live MSN generates
traffic (which indicates that it is forced to use the proxy), but it
always manages to get through:




Marco Jesus Chuco
 Technical Support
 Novatronic
+51 (01) 415-2469
+51 (01) 93512457 

-Original Message-
From: Beavis [mailto:[EMAIL PROTECTED] 
Sent: Friday, March 7, 2008, 09:39 a.m.
To: [EMAIL PROTECTED]
Subject: Re: [squid-users] Ayuda contra Live MSN

Block all the ports and use only the proxy; and when there is something for
other kinds of instant messenger, use a SOCKS 5 proxy.
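
Translating the proxy-side part of that into squid.conf, a commonly used
sketch matches the MSN HTTP gateway URL and its MIME type (these are the
classic patterns for older clients and may not catch every version):

```
# Block MSN Messenger's HTTP gateway traffic (best-effort patterns)
acl msn_gateway url_regex -i gateway.dll
acl msn_mime req_mime_type -i ^application/x-msn-messenger$
http_access deny msn_gateway
http_access deny msn_mime
```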

-

On Fri, Mar 7, 2008 at 8:27 AM, Marco Jesus Chuco [EMAIL PROTECTED]
wrote:

  Friends:

  I would like your help in evaluating how to block Live Messenger.

  I have configured ACLs by MIME type, port and url_path, but this only
  works for versions 7.0 and below; in the end Live Messenger always
  ends up connecting.

  Could you please help me figure out how to do this?


  Marco Jesus Chuco
   Technical Support
  Novatronic
  +51 (01) 415-2469
  +51 (01) 93512457





 



RE: [squid-users] Squid on FreeBSD poor performance

2008-03-07 Thread joejoe
, rejecting l.yimg.com
2008/03/07 12:34:35| dnsSubmit: queue overload, rejecting l.yimg.com
2008/03/07 12:34:35| dnsSubmit: queue overload, rejecting l.yimg.com
2008/03/07 12:34:35| dnsSubmit: queue overload, rejecting l.yimg.com
2008/03/07 12:34:35| dnsSubmit: queue overload, rejecting l.yimg.com
2008/03/07 12:34:35| dnsSubmit: queue overload, rejecting l.yimg.com
2008/03/07 12:34:35| dnsSubmit: queue overload, rejecting l.yimg.com
2008/03/07 12:34:35| dnsSubmit: queue overload, rejecting l.yimg.com
2008/03/07 12:34:35| dnsSubmit: queue overload, rejecting l.yimg.com
2008/03/07 12:34:35| dnsSubmit: queue overload, rejecting l.yimg.com
2008/03/07 12:34:35| dnsSubmit: queue overload, rejecting l.yimg.com
2008/03/07 12:34:35| dnsSubmit: queue overload, rejecting l.yimg.com
2008/03/07 12:34:35| dnsSubmit: queue overload, rejecting l.yimg.com
2008/03/07 12:34:35| dnsSubmit: queue overload, rejecting l.yimg.com


The cache.log from 3/6 is at
http://mail.knu.edu.tw/~joejoe/cache-20080307.rar
How can I make this better?




[squid-users] No apparent errors on NTLM but still cache_access_denied

2008-03-07 Thread Jerome Steunenberg

Hello squid users,

I can't seem to get the ntlm to work on the following setup:

Debian 4.0 etch
Squid Cache: Version 2.6.STABLE5
Microsoft Windows Server 2003 SP2
Kerberos environment OK: wbinfo -t, -u and -g work fine
Using ntlm_auth from Samba Winbind 3.0.24-6etch9

squid.conf looks like this:

http_port localhost:3128
icp_port 0
htcp_port 0
refresh_pattern -i \.flv$ 10080 90% 99 ignore-no-cache 
override-expire ignore-private

quick_abort_min -1 KB
maximum_object_size 1 GB
acl youtube dstdomain .youtube.com
cache allow youtube
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
access_log /var/log/squid/access.log squid
debug_options ALL,9
hosts_file /etc/hosts
auth_param ntlm program /usr/bin/ntlm_auth -d 10 
--helper-protocol=squid-2.5-ntlmssp --domain=MYDOMAIN

auth_param ntlm children 30
auth_param ntlm keep_alive on
external_acl_type nt_group ttl=0 children=5 %LOGIN 
/usr/lib/squid/wbinfo_group.pl

refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443  # https
[...]
acl Safe_ports port 901 # SWAT
acl purge method PURGE
acl CONNECT method CONNECT
acl InternetInterdit external nt_group internet_interdit
acl FTPUsers external nt_group ftp_users_ext
acl AuthenticatedUsers proxy_auth REQUIRED
acl FTP proto FTP
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny FTP !FTPUsers
http_access deny InternetInterdit
http_access allow all AuthenticatedUsers
http_access allow localhost
http_access deny all
http_reply_access allow all
icp_access allow all
cache_effective_group proxy
coredump_dir /var/spool/squid

The authentication works fine with --helper-protocol=squid-2.5-basic, 
but with the ntlm protocol the following appears in the log files. Can 
someone shed some light on this? I don't know what to investigate 
further, as there is no explicit error message. It seems as if the NTLM 
handshake starts fine but then stops because one of the parties does not 
send what it's supposed to. I've tried using the NTLM auth helper bundled 
with Squid, but that doesn't solve the problem.


[2008/03/07 10:56:35, 10] utils/ntlm_auth.c:manage_squid_request(1615)
 Got 'YR TlRMTVNTUAABB7...RzEzNzBJTlRSQS1UUEc=' from squid (length: 
79).
[2008/03/07 10:56:35, 10] 
utils/ntlm_auth.c:manage_squid_ntlmssp_request(590)

 got NTLMSSP packet:
[2008/03/07 10:56:35, 10] lib/util.c:dump_data()
 [000] 4E 54 4C 4D 53 53 50 00  01 00 00 00 07 B2 08 A2  NTLMSSP. 
 [010] 09 00 09 00 2F 00 00 00  07 00 07 00 28 00 00 00  /... (...
 [020] 05 01 28 0A 00 00 00 0F  54 50 XX XX XX XX XX XX  ..(. 
 [030] XX XX XX XX XX XX XX XX   XXX
[2008/03/07 10:56:35, 3] libsmb/ntlmssp.c:debug_ntlmssp_flags(63)
 Got NTLMSSP neg_flags=0xa208b207
   NTLMSSP_NEGOTIATE_UNICODE
   NTLMSSP_NEGOTIATE_OEM
   NTLMSSP_REQUEST_TARGET
   NTLMSSP_NEGOTIATE_NTLM
   NTLMSSP_NEGOTIATE_DOMAIN_SUPPLIED
   NTLMSSP_NEGOTIATE_WORKSTATION_SUPPLIED
   NTLMSSP_NEGOTIATE_ALWAYS_SIGN
   NTLMSSP_NEGOTIATE_NTLM2
   NTLMSSP_NEGOTIATE_128
   NTLMSSP_NEGOTIATE_56
[2008/03/07 10:56:35, 10] 
utils/ntlm_auth.c:manage_squid_ntlmssp_request(600)

 NTLMSSP challenge

Isn't the NTLM negotiation supposed to be longer?
I think this is the same problem as in 
http://www.squid-cache.org/mail-archive/squid-dev/200708/0167.html but 
the answer to that question does not give a solution. Has someone solved 
this?


Thanks,

Jerome Steunenberg



Re: [squid-users] transparency Squid very slow internet

2008-03-07 Thread Thomas Harold


Guillaume Chartrand wrote:

Hi, I run squid 2.6.STABLE12 on RHEL3 AS for web-caching and filtering
my internet access. I also use SquidGuard to block some sites. I configured
squid to run with WCCP v2 on my cisco router, so all my web-cache
traffic is redirected transparently to squid.

I don't know why, but when I activate squid it really decreases
my internet speed. Pages take a long time to load, even when they are on
my own network. Looking with the top command, the squid process uses
only about 2-3% of CPU and 15% of memory. I also ran iftop and I see
about 15 Mb/s total on my ethernet interface. I don't know where
to look in the config to increase the speed. I use about 50% of the disk
space, so it's not so bad.


Another possible issue is that squid is having to wait on a slow DNS 
server.  Take a look at the mgr:info report:


$ /usr/sbin/squidclient mgr:info

And look at the Median Service Times section.  In our case, DNS lookups 
were in the 5+ second range due to the primary DNS server being broken: 
squid was asking the primary DNS server, waiting 5 seconds, then 
asking the backup DNS server. A normal squid server will service 
all requests in under 1 second (depending on your load).


RE: [squid-users] transparency Squid very slow internet

2008-03-07 Thread Guillaume Chartrand
I don't think it's the problem when I see the results

Median Service Times (seconds)    5 min    60 min:
HTTP Requests (All):   0.05046  0.05331
Cache Misses:  0.05046  0.05331
Cache Hits:0.0  0.0
Near Hits: 0.0  0.0
Not-Modified Replies:  0.0  0.0
DNS Lookups:   0.00094  0.00094
ICP Queries:   0.0  0.0


And as for the answer from Amos Jeffries:
"It's usually regex ACLs at fault when speed drops noticeably.

Check that:
  * ACLs are only regex when absolutely necessary (dstdomain, srcdomain, dst, src 
are all better in most uses).
   ie acl searchengines dstdomain google.com yahoo.com

  * regex ACLs are limited to being tested only when needed (placing a src netblock 
ACL ahead of one on the http_access line will speed up all requests outside that 
netblock).
   ie   http_access allow dodgy_users pornregexes"

I have just dstdomain, src, and dst in my acls.
The only acl containing a regex is this line:

acl msnmessenger url_regex -i gateway.dll

Thanks

Guillaume Chartrand
Technicien informatique
Cégep régional de Lanaudière
Centre administratif, Repentigny
(450) 470-0911 poste 7218
-Message d'origine-
De : Thomas Harold [mailto:[EMAIL PROTECTED] 
Envoyé : 7 mars 2008 11:38
À : Guillaume Chartrand
Cc : squid-users@squid-cache.org
Objet : Re: [squid-users] transparency Squid very slow internet


Guillaume Chartrand wrote:
 Hi, I run squid 2.6.STABLE12 on RHEL3 AS for web-caching and filtering
 my internet access. I also use SquidGuard to block some sites. I configured
 squid to run with WCCP v2 on my cisco router, so all my web-cache
 traffic is redirected transparently to squid.
 
 I don't know why, but when I activate squid it really decreases
 my internet speed. Pages take a long time to load, even when they are on
 my own network. Looking with the top command, the squid process uses
 only about 2-3% of CPU and 15% of memory. I also ran iftop and I see
 about 15 Mb/s total on my ethernet interface. I don't know where
 to look in the config to increase the speed. I use about 50% of the disk
 space, so it's not so bad.

Another possible issue is that squid is having to wait on a slow DNS 
server.  Take a look at the mgr:info report:

$ /usr/sbin/squidclient mgr:info

And look at the Median Service Times section.  In our case, DNS lookups 
were in the 5+ second range due to the primary DNS server being broken: 
squid was asking the primary DNS server, waiting 5 seconds, then 
asking the backup DNS server. A normal squid server will service 
all requests in under 1 second (depending on your load).


Re: [squid-users] Squid on FreeBSD poor performance

2008-03-07 Thread Adrian Chadd
On Fri, Mar 07, 2008, joejoe wrote:

 2008/03/07 12:34:35| WARNING: All dnsserver processes are busy.
 2008/03/07 12:34:35| WARNING: up to 39 pending requests queued

.. recompile Squid and disable the external DNS helper. Use Squid's internal
DNS code.
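
In other words: if squid was built with --disable-internal-dns, rebuild
without that flag so the internal resolver (the default) is used; the
dnsserver helper processes then disappear along with their queue. A sketch
(the prefix is a placeholder):

```
# from the squid source tree; omit --disable-internal-dns
./configure --prefix=/usr/local/squid
make
make install
```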




Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] Squid-2, Squid-3, roadmap

2008-03-07 Thread Alex Rousskov

Below you will find my personal comments on a few hand-picked thoughts
(from various posters) that I consider important in this thread:

On Thu, 2008-03-06 at 08:44 -0800, Michael Puckett wrote:
 If there is one killer app that tops all other functionality 
 additions it would be to multi-thread Squid so that it can 
 perform on multi-cores.

Ability to perform on multiple cores is a performance/scalability
optimization. We obviously do want Squid to perform and scale better,
and are working on that.

Squid3 already has several mechanisms that would make such work easier.
Folks that need faster Squid, including CPU-core scalability
optimizations, should consider contributing time or money to the cause,
keeping in mind that it is a serious project and it will require
cooperation with other developers and projects.


On Thu, 2008-03-06 at 11:26 +1100, Mark Nottingham wrote:
 Again, parity with -2 isn't enough; why would someone pay for  
 something they can already get in -2 if it meets their needs?

Nobody should pay for something they do not need. However, any sponsor
should consider the long-term sustainability of an old or new feature
they rely on: Will the feature I need be included in the next major
Squid version? Do I need the cooperation and trust of Squid developers? Do I
want a fork dedicated to my needs? These questions are often as
important as the "How much would it cost me to make Squid do Foo by the
end of the month?" question.

Currently, sponsors have significant impact on Squid direction. There is
a lot of implied responsibility that comes with that influence. Please
use your power with care.


On Thu, 2008-03-06 at 01:15 +, Dodd, Tony wrote:
 development on -3 seems entirely ad-hoc, with no direction; whereas -2
 development is entirely focused [...]. I could be talking entirely
 out of turn here though, as I haven't seen a -3 roadmap.

 The second thing [...] the majority of squid developers don't seem to 
 get, is that the big users of squid are businesses.

 The truth of it is, as much as you guys tell yourselves that
 your userbase is people who run one or two cache boxes in their
 basements to cache their lan internet access, and that there's no money
 in squid, ...

 I've spoken to Adrian too many times to count on two hands about this
 whole thing, and if you guys are trying to re-invent the wheel, you 
 may as well stop now.

I am not sure how to say this the right way, but when your opinion is
based on a single and often extremely biased source of information, your
perception of reality becomes so distorted, that it is very difficult
for others to respond.

Your assumptions about the majority of squid developers are simply
wrong.

Believe it or not, we understand your situation fairly well. Nobody I
know is asking you to upgrade to Squid3, for example.

What I would suggest is that you make a fundamental choice: Do you want
to collaborate with the Squid project (as a whole)? If yes, we will do
our best to address your short-term and long-term needs. If no, I am
sure your dedicated developer will do his best to address your needs
within or outside the project.

Collaborating with an open source project is difficult because you have
to cooperate with others and balance different needs, all while
struggling with inefficiencies of a weak decision-making structure.
Whether collaboration benefits are worth the trouble, is something you
have to decide. I certainly hope they are.


On Thu, 2008-03-06 at 18:17 +0900, Adrian Chadd wrote: 
 Mark Nottingham wrote:
  A killer app for -3 would be multi-core support
 
 12 months away on my draft Squid-2 roadmap, if there was enough
 commercial interest.

11 months away on Squid-3 roadmap if there is enough commercial
interest. And I will also throw in a 90% chance that the feature will
also be in Squid4 without a major porting effort. Wait, wait, and a 10%
off coupon!

But, really, this is _not_ the way Squid features should be planned or
sponsorship should be solicited, and I trust Adrian knows that.


On Thu, 2008-03-06 at 11:26 +1100, Mark Nottingham wrote:
 While I'm in a mood for ruffling feathers (*grin*), 
 it might also help to have the core discussions in public

Discussions that may benefit from public input should, and usually do,
happen on squid-dev or squid-users, even if they start on squid-core.
That covers the majority of the topics. There is not much "how should
the Core respond?", dirty-laundry, personal, or offensive-to-somebody
stuff that we have to keep private. Any multi-person
organization has these trust layers, naturally.


I hope the above comments will clarify my personal position. I would
love to work with more users that, besides pulling hard in their
direction, would think of the project as a whole, accepting the fact
that Squid will always try to satisfy several conflicting needs.

The list of missing Squid3 features was a useful outcome of this
thread. I will make sure those wishes are added to Squid3 roadmap.

If 

[squid-users] popups on linux box- ntlm? ldap helper?

2008-03-07 Thread Dave Augustus
Hello all,

I get auth popups in firefox 2 on my workstation (CentOS 4.6). My workstation 
is also a domain machine using samba. Now, I am aware of the NTLM problem with 
winbind - that is, occasional popups. Is this to be expected for my machine as 
well, or should I use an LDAP helper for non-domain machines?

(Parenthetically: is there a way I can use my local winbind pipe for proxy 
authentication?)

Running squid-2.6.STABLE6-5.el5_1.2 on Centos 5. Here is the squid.conf for 
auth section:

# Active Directory configuration
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 30
# Basic authentication
auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 5
auth_param basic realm Proxy Server
auth_param basic credentialsttl 2 hours

acl authenticated_users proxy_auth REQUIRED


Thanks!
Dave


[squid-users] LiveCD type install for transparent caching of YouTube, etc?

2008-03-07 Thread Paul Bryson
I have been looking for some sort of easy-to-install Squid transparent 
caching proxy.  Something like KnoppMyth (http://mysettopbox.tv/) but 
just for Squid: boot a CD that has you partition/format your 
hard drives, then installs the OS plus Squid with sane default settings.  If 
there is a web interface, all the better.


I'm a big fan of the KnoppMyth design where all of the settings are 
backed up on a data partition.  Upgrading requires installing from a 
CD which formats over the OS + programs partition and restores the 
backed up settings and data.


There are some much bigger packages that include Squid, like SmoothWall 
and IPCop, but these include tons of other firewall and filtering 
services that I don't need or want.  They also don't offer much in the 
way of options for the proxy.


And as we want to be able to cache YouTube, Google Maps, Google Earth, 
Windows Updates, etc, we need the rewrite rules in the 2.7 branch of 
Squid.  We were thinking of using a good sized hard disk for cache 
storage, to reduce the internet bandwidth hit as much as possible.


Is there any such type of beast, or a quick path to such a thing?


Atamido



Re: [squid-users] Re: Real hit count of a user? Can it be really found?

2008-03-07 Thread Amos Jeffries

Aykut Demirkol wrote:

Ahmet wrote:

Hi,

I am trying to count the web hits of users. With a proxy it seems
easy (I tried combinations of squid, tiny, privoxy in
transparent modes).
But it is obvious that the hits in the logs are not purely the hits
the users intended. For example, when a user goes to cnn.com, cnn.com
calls other ad pages or non-ad pages, and each is seen as a user hit
in the logs. So for a real hit count an analysis must be made on the
logs. Do you know any tool or proxy that can help with such analysis?

The second choice is writing our own tool that parses the logs and
does an analysis on the referer field. But depending solely on the
referer can cause false positives for users clicking a link on a page.
To investigate further, I captured (with ethereal) the outgoing
packets for usual user behaviour (clicking a link) and for pages
calling other pages. The request packets all seem to have the same
headers and similar header values. So I am stuck and could not find
any piece of evidence to track and distinguish the hits.
Is there a known theoretical or practical way of distinguishing these
behaviours?



No.

We use http://awstats.sourceforge.net/ because we need some kind of
count, but there's no way to count people seeing the page.

You've got robots of various kinds and user agents that lie and all
kinds of other stuff that makes the data fuzzy.  The best you can
get is a general increasing or decreasing trend.

For all you know, someone's using a script to launch a browser to hit
your page as a diagnostic poll to see if their connection is still up.

Having said all that, awstats will probably do what you want it to.

Cheers!


Daniel, thanks for the reply.
I need to learn the exact action taken in the client browser, such as
whether the user clicked a link or the incoming page is calling another
page. So parsing and analyzing proxy logs or Apache logs does not seem
to help in that case.


Such information is usually given in the HTTP Referer: header.

Anything finer-grained than what is available in those headers and logs
requires complete control of every machine on the Internet connecting to
your page.
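For what the logs do offer, a custom logformat can at least record the Referer header next to each request for offline analysis. A hedged sketch for Squid 2.6+ follows; the format name and log path are examples, not from the thread:

```
# Sketch: log the Referer with each request, so offline analysis can
# separate requests with no Referer (typed URLs, bookmarks) from
# requests triggered by an embedding page. Not a complete solution,
# since the Referer can be absent or misleading.
logformat withref %ts.%03tu %>a %Ss/%Hs %rm %ru %{Referer}>h
access_log /var/log/squid/access-ref.log withref
```

Analysing the resulting file still only yields the fuzzy trend discussed above, but it avoids having to capture packets to see the Referer values.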


Amos
--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.


Re: [squid-users] automatic migration of config files from 2.6 to 3.0

2008-03-07 Thread Amos Jeffries

Amos Jeffries wrote:

Hi,

Is there a utility that comes with the squid 3.0 package that will
automatically migrate any existing lower-version squid configuration to
the new version? I know this sounds like a lazy kind of administration
technique, but it would help a lot on proxy farms, especially when you
have different types of machines and specs..

Just curious about it.


There is nothing to do automatic migration; individual setups are just too
individual.
I planned to make one but settled for an online validator, similar to the
ones the W3C provide, to assist in updating old configurations to 2.6. It
takes your squid.conf and marks each line as OK, EVIL, USELESS, or CHANGE-TO-___

http://squid.treenet.co.nz/cf.check/

When 3.0 came out I got trapped into 3.1 updates and bugs rather than the
planned 3.0 support in the validator :-( If you poke me a few times,
2.6stable18 and 3.x rules might happen over this month.


FYI, one kick was enough to get me halfway through the change list. 
There is now an experimental 3.0 validator up.


Just be aware that as long as it says (experimental) there are a few new 
options it will wrongly mark as 'not present'.
All such lines should be checked in cfgman and the release notes to see 
why they changed and what has replaced them.


Amos
--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.


Re: [squid-users] Configuring reverse proxy for both 80/443

2008-03-07 Thread l3d
I've gotten part of it to work with two cache_peer lines such as:

acl incoming_ssl dstdomain ssl.domain.com
acl incoming_default dstdomain domain.com www.domain.com

http_port 80 accel vhost
https_port 443 cert=/usr/YOURCERT.cer key=/usr/YOURKEY.key vhost

cache_peer xxx.xxx.xxx.xxx parent 443 0 no-query originserver ssl
sslflags=DONT_VERIFY_PEER name=ssl.domain.com
cache_peer_access ssl.domain.com allow incoming_ssl

cache_peer xxx.xxx.xxx.xxx(same server) parent 80 0 no-query
originserver name=*hostname of webserver*
cache_peer_access *hostname of website* allow incoming_default

But I have a problem with this config..

Because my domain.com is in the acl, even without *.domain.com it
still allows an 80 connection to my 443 website ssl.domain.com.
And because squid still communicates with the origin server over 443,
the origin server does not see a problem. How can I force squid to
accept only 443 connections for a website on an origin server that squid
already uses for 80 connections?

Please help

-l3d
On Tue, Mar 4, 2008 at 7:16 PM, Nick Duda [EMAIL PROTECTED] wrote:
 Nope, it throws an error, I tried that.



 -Original Message-
 From: Chris Woodfield [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, March 04, 2008 8:02 PM
 To: Nick Duda
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] Configuring reverse proxy for both 80/443

 I haven't tried this myself, but can't you just have two cache-peer
 lines with the same host but different port numbers?

 -C

 On Mar 4, 2008, at 5:11 PM, Nick Duda wrote:

  I seem to be stumped. I need to reverse proxy for one internal server
  that listens on both 80 and 443. How can I configure squid to proxy
  for
  the same cache-peer on both 80 and 443? As far as I can see you can
  only
  specify one protocol per cache-peer line. I think I am missing
  something.
 
  - Nick
 




Re: [squid-users] Squid-2, Squid-3, roadmap

2008-03-07 Thread Amos Jeffries

Dodd, Tony wrote:

-Original Message-
From: Amos Jeffries [mailto:[EMAIL PROTECTED]

snip

3.0 was about parity with needs. It failed some in that regard.
3.1 is about making up that failure plus some.
Is seamless IPv6, SSL control, and weighted round-robin not enough of a
killer app for you?



SSL control is nice, but we don't use SSL anywhere near squid, so it's
not a big issue either way for us... we already have weighted
round-robin in -2 by using CARP with specific weights, unless you're
talking about something different?


True round-robin looping, but with peer weightings affecting the 
load-balancing on every cycle. It's much simpler than full CARP meshing.
It is geared towards two-tiered squid setups which need bottom-layer 
weightings calculated in the top layer, rather than flat-mesh setups like 
CARP.
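As a rough sketch of what that configuration might look like (hostnames are placeholders; check cfgman for the exact option names supported by your release):

```
# Two parent caches sharing load roughly 3:1 on each round-robin cycle.
cache_peer parent1.example.com parent 3128 0 round-robin weight=3 no-query
cache_peer parent2.example.com parent 3128 0 round-robin weight=1 no-query
```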



 As for IPv6... eh, I suppose that'll
be nice if IPv6 actually starts getting use sometime this decade.


Some of us do. At least 5% of the net and growing. When 3.x rolls out 
the capability fully you can expect a small jump in www traffic over v6 
without any user-visible changes.




As Mark said, multicore would be quite awesome,


Truly. Care to sponsor any of the cleanups being done to -3 for this to 
happen?

http://wiki.squid-cache.org/Features/FasterHttpParser
http://wiki.squid-cache.org/Features/LogDaemon
http://wiki.squid-cache.org/Features/NativeAsyncCalls
http://wiki.squid-cache.org/Features/NoCentralStoreIndex
http://wiki.squid-cache.org/Features/SourceLayout


as would better memory management,


Tricky. Do we have an expert in the house who can do better without pay? 
(nobody seems willing to sponsor any memory management)


 better I/O throughput on the cache_dir,

Work was sponsored to get COSS into -2, I believe, to help with this. But 
no matching changes were sponsored for -3. Anyone?



proper support for memory only caches,


http://wiki.squid-cache.org/Features/RemoveNullStore


support for acl based cache_dir's (i.e. cache_dir
foo allow dstdomain blah while denying everything else and cache_dir bar
allow dstdomain boo while denying everything else) to improve overall
hit-rate and decrease cache file flapping,


http://www.squid-cache.org/Versions/v3/3.0/cfgman/cache.html
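The `cache` access list linked above can approximate per-domain storage policy by controlling what is cached at all. A sketch, with placeholder domains (this selects what is stored, not which cache_dir it lands in):

```
# Store only objects from the chosen domains; everything else is
# fetched for the client but never written to the cache.
acl cacheable_sites dstdomain .example.com .example.org
cache allow cacheable_sites
cache deny all
```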


handling of a cache_dir
failure that doesn't include squid dumping core,


?? details or bug IDs please.


 HTTP/1.1 support,


Underway in BOTH squid versions.
http://wiki.squid-cache.org/Features/HTTP11 (a little out of date now).

Each version has a different set of compliance/non-compliance depending 
on which areas developers have worked on since we gave this emphasis.



options support.


Please describe this?



Things in 2.6 I'd like to see in 3 (on top of mark's list):

COSS support - stable, with all the functions -2 has


Thanks. I've added it to the WishList.
Are you interested in sponsoring a developer to get it onto the TODO 
list with an integration timeframe?

http://wiki.squid-cache.org/Features/COSS


follow_x_forwarded_for


http://www.squid-cache.org/bugs/show_bug.cgi?id=1628
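For reference, the Squid-2.6/2.7 usage that the bug tracks porting to Squid-3 looks roughly like this (the frontend address is a placeholder):

```
# Trust X-Forwarded-For only from a known frontend/load-balancer, so
# logs and src ACLs see the real client address instead of the
# frontend's address.
acl frontend src 192.0.2.1
follow_x_forwarded_for allow frontend
follow_x_forwarded_for deny all
```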



refresh_stale_hit


Part of the collapsed-forwarding feature I believe.
http://wiki.squid-cache.org/Features/CollapsedForwarding


umask support


Thank you. I've added this as a bug item.
http://www.squid-cache.org/bugs/show_bug.cgi?id=2254



-Tony


Amos
--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.


Re: [squid-users] Squid-2, Squid-3, roadmap

2008-03-07 Thread Amos Jeffries

Dodd, Tony wrote:

-Original Message-
From: Mark Nottingham [mailto:[EMAIL PROTECTED]

Well, that's a bit of a straw-man, isn't it? AIUI 3 *is* already 2 re-
coded into C++. Never mind the question of why that's necessary;
indeed, I think a lot of people's discomfort is centred on the fact
that large parts of 3 have been rewritten and not battle-tested in
wide deployment.


Some of my discomfort stems from the fact that from where I sit,
development on -3 seems entirely ad-hoc, with no direction; whereas -2
development is entirely focused (of course, with only Adrian really
developing it, it's going to be moreso; what I'm talking about is more
'what is being developed' than 'who's developing it' though).  I could
be talking entirely out of turn here though, as I haven't seen a -3
roadmap.


That's one thing that's immediately fixable:
  http://wiki.squid-cache.org/RoadMap/Squid3

Amos
--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.


Re: [squid-users] Squid-2, Squid-3, roadmap

2008-03-07 Thread Adrian Chadd
If anything, I hope that this particular discussion realigns the Squid-3
developers with the sorts of things the community wants.

On Sat, Mar 08, 2008, Amos Jeffries wrote:

 Some of us do. At least 5% of the net and growing. When 3.x rolls out 
 the capability fully you can expect a small jump in www traffic over v6 
 without any user-visible changes.

Which isn't a big deal for reverse proxy setups - they'll be running
v6-facing clients to v4-facing servers for quite a while. That's a lot
easier to implement than full v6-to-v4 gatewaying support, which is what
you're debugging in -3.

 As Mark said, multicore would be quite awesome,
 
 Truely. Care to sponsor any of the cleanups being done to -3 for this to 
 happen?

[snip]

And if it doesn't happen, are the Squid-3 developers willing to put in the time
and effort to actually make these things happen, or is it going to sit for 
another
couple of years waiting for sponsors or developers who want to scratch that 
itch?

One of the things that Robert made a point of is that developers will scratch
their own itch, and we can't force developers to do otherwise. That's fine,
but as a -project- we need to think far past the personal itch point and look
at what work needs to be done to benefit everyone, not just what we find
personally interesting or relevant.

I still get the feeling that the whole Squid-3 direction seems to be what
is specifically relevant to individual developers and not what will keep
the project alive and (re-)engage the developers. This only became an issue
because somehow (again) I stirred trouble.

The whole point behind Xenion trying to get Squid support contracts is to get
some resources to -do- the things which need doing that aren't being done.
I'm almost at the point where I can begin throwing someone at some of the 
Squid-2
draft roadmap - some being requested by clients, some being future-proofing.

I'm not sure why my Squid-2 plans upset the rest of the Squid developers so 
much -
is it because I'm not playing ball with the bulk of the active core Squid
developers or something else?




Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] Configuring reverse proxy for both 80/443

2008-03-07 Thread Amos Jeffries

l3d wrote:

I've gotten part of it to work with two cache_peer lines such as:

acl incoming_ssl dstdomain ssl.domain.com
acl incoming_default dstdomain domain.com www.domain.com

http_port 80 accel vhost
https_port 443 cert=/usr/YOURCERT.cer key=/usr/YOURKEY.key vhost

cache_peer xxx.xxx.xxx.xxx parent 443 0 no-query originserver ssl
sslflags=DONT_VERIFY_PEER name=ssl.domain.com
cache_peer_access ssl.domain.com allow incoming_ssl

cache_peer xxx.xxx.xxx.xxx(same server) parent 80 0 no-query
originserver name=*hostname of webserver*
cache_peer_access *hostname of website* allow incoming_default

But I have a problem with this config..

Because my domain.com is in the acl, even without *.domain.com it
still allows an 80 connection to my 443 website ssl.domain.com.
And because squid still communicates with the origin server over 443,
the origin server does not see a problem. How can I force squid to
accept only 443 connections for a website on an origin server that squid
already uses for 80 connections?


Use the myport ACL type:

  acl sslPort myport 443

  cache_peer_access ssl.domain.com allow sslPort incoming_ssl
  cache_peer_access domain.com allow !sslPort incoming_default
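Put together with the peer definitions from the original message, the whole thing might look like the following untested sketch (the IP address and the peer names after name= are placeholders; both peers point at the same origin server):

```
acl incoming_ssl dstdomain ssl.domain.com
acl incoming_default dstdomain domain.com www.domain.com
acl sslPort myport 443

cache_peer 192.0.2.10 parent 443 0 no-query originserver ssl sslflags=DONT_VERIFY_PEER name=peer_ssl
cache_peer 192.0.2.10 parent 80 0 no-query originserver name=peer_http

# Requests that arrived on 443 may only use the SSL peer...
cache_peer_access peer_ssl allow sslPort incoming_ssl
cache_peer_access peer_ssl deny all
# ...and requests that arrived on any other port may only use the HTTP peer.
cache_peer_access peer_http allow !sslPort incoming_default
cache_peer_access peer_http deny all
```

The deny all lines close each peer to anything the allow line did not match, which is what prevents ssl.domain.com from being reached over port 80.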


Amos




Please help

-l3d
On Tue, Mar 4, 2008 at 7:16 PM, Nick Duda [EMAIL PROTECTED] wrote:

Nope, it throws an error, I tried that.



-Original Message-
From: Chris Woodfield [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 04, 2008 8:02 PM
To: Nick Duda
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Configuring reverse proxy for both 80/443

I haven't tried this myself, but can't you just have two cache-peer
lines with the same host but different port numbers?

-C

On Mar 4, 2008, at 5:11 PM, Nick Duda wrote:


I seem to be stumped. I need to reverse proxy for one internal server
that listens on both 80 and 443. How can I configure squid to proxy
for
the same cache-peer on both 80 and 443? As far as I can see you can
only
specify one protocol per cache-peer line. I think I am missing
something.

- Nick






--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.