[squid-users] NTLM and loadbalanced squid

2003-02-20 Thread Michael Pophal
Hi all,
does anyone have experience with a (round-robin) load-balanced squid
farm and NTLM authentication?
As I understand it:
1. the client sends its request
2. it gets the answer 'authentication required' from proxy 1
3. the authentication dialog pops up and the user enters the credentials
4. the client sends its response, including the authentication header,
to proxy 2

What is the next step? Can proxy 2 handle this request without knowing
the previous steps? Is it necessary to have a sticky connection between
the client and the proxy?

Any information will be appreciated.
Thanx

Michael





Re: [squid-users] blocking internet application files?

2005-02-18 Thread Michael Pophal
Hi,
use the browser acl type:

#   acl aclname browser [-i] regexp ...
#   pattern match on the User-Agent header

See squid.conf for details.
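
For example, something like this might work (the regexp is an
assumption; check the exact User-Agent string your IE versions send):

acl all src 0.0.0.0/0.0.0.0
acl myusers src 192.168.100.0/255.255.255.0
acl IE browser -i MSIE
http_access allow myusers IE
http_access deny all

Keep in mind that the User-Agent header can be faked by any client, so
this is not a strong restriction.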

Regards Michael

On Wed, 2005-02-16 at 21:01, Shiraz Gul Khan wrote:
> dear list,
> 
> is there a way to allow only iexplorer.exe application for my user to access 
> squid box.
> 
> suppose i only want to run internet explorer on my user computers. no msn no 
> yahoo no any other internet application. only and only iexplorer for 
> browsing internet.
> 
> what is the best config for squid.conf what and where i add/edit in 
> squid.conf
> 
> ==
> squid.conf
> ==
> acl all src 0.0.0.0/0.0.0.0
> acl myusers src 192.168.100.0/255.255.255.0
> http_access allow myusers
> http_access deny all
> ==
> 
> 
> 
> 
> 
> Thankyou & best regards,
> Shiraz Gul Khan (03002061179)
> Onezero Inc.
> 
> _
> It's fast, it's easy and it's free. Get MSN Messenger today! 
> http://www.msn.co.uk/messenger
-- 
Mit freundlichen Grüssen / With kind regards

Michael Pophal
--
Topic Manager
Internet Access Services & Solutions
--
Siemens AG, ITO A&S 4
Telefon: +49(0)9131/7-25150
Fax: +49(0)9131/7-43344
Email:   [EMAIL PROTECTED]
--



Re: [squid-users] Forward Requests to different Ports:

2005-02-22 Thread Michael Pophal
Why don't you create different webwasher profiles for different user
groups?
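
Otherwise it should be possible with a second cache_peer on the other
port plus cache_peer_access. A rough sketch (hostnames, ports and
addresses are made up; with squid 2.5 the two peer entries may need
distinct hostnames, e.g. a DNS alias for the same box):

acl bypass_hosts src 10.1.2.3 10.1.2.4
cache_peer wash.example.com parent 8080 0 no-query
cache_peer nowash.example.com parent 8081 0 no-query
cache_peer_access nowash.example.com allow bypass_hosts
cache_peer_access nowash.example.com deny all
cache_peer_access wash.example.com deny bypass_hosts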

Regards Michael

On Tue, 2005-02-22 at 09:46, Markus Atteneder wrote:
> Is it possible to configure squid to forward requests coming from specific
> hosts to the same "parent" as other requests, but on a different port? The
> reason is to bypass a webwasher on the "parent" server for these hosts in
> order to allow denied sites.
-- 
Mit freundlichen Grüssen / With kind regards

Michael Pophal
--
Topic Manager
Internet Access Services & Solutions
--
Siemens AG, ITO A&S 4
Telefon: +49(0)9131/7-25150
Fax: +49(0)9131/7-43344
Email:   [EMAIL PROTECTED]
--



[squid-users] How to get the latest ICAP patch

2005-03-08 Thread Michael Pophal
Hi,
is there an easy way to get the newest ICAP patch? Unfortunately the
icap-squid on Duane Wessels' homepage is from Sept. 2004. Has the newest
ICAP branch been adapted to squid-2.5.STABLE9?

Thanks for information.

Regards Michael




Re: [squid-users] Replacement policy and log analyzer

2005-03-16 Thread Michael Pophal
You have to use
--enable-removal-policies="lru heap"
when configuring squid.
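
The policy itself is then selected in squid.conf, for example (pick the
policy you want):

cache_replacement_policy heap LFUDA
memory_replacement_policy heap GDSF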

Calamaris is a good tool. It gives you a lot of reports and graphical
output. It is highly configurable, but easy to use.
Look at the demo report (calamaris v3)
http://cord.de/tools/squid/calamaris/calamaris-3/

Regards Michael

On Tue, 2005-03-15 at 18:18, Marco Crucianelli wrote:
> I would like to use a different replacement policy from lru, like heap
> GDSF or heap LFUDA, but, what shall I use in squid.conf? Nor "heap
> LFUDA", nor "LFUDA" nor "heap_LFUDA" did work!! Which is the right
> statement?!
> 
> Moreover...based on your experience...which is the best squid log
> analyzer?!?!
> 
> Thank you in advance!
> 
> Marco
-- 
Mit freundlichen Grüssen / With kind regards

Michael Pophal
--
Topic Manager
Internet Access Services & Solutions
--
Siemens AG, ITO A&S 4
Telefon: +49(0)9131/7-25150
Fax: +49(0)9131/7-43344
Email:   [EMAIL PROTECTED]
--



Re: [squid-users] Squid -> Homepage

2005-03-16 Thread Michael Pophal
As far as I know, there is a possibility to define a homepage via a
proxy.pac. The proxy.pac is a JavaScript file that configures the
client: it tells the client which proxy to use, and AFAIK it can set
the homepage as well.

Regards Michael

On Tue, 2005-03-15 at 04:16, Hendro Susanto wrote:
> Hi,
> 
> I've tried to google this information but still can't find it.
> 
> Is it possible to 'set' the squid so it will retrieve a default
> homepage for the network ? e.g when a user opens the IE, it will be
> directed to a default homepage ?
> 
> TIA.
> 
> -H-
-- 
Mit freundlichen Grüssen / With kind regards

Michael Pophal
--
Topic Manager
Internet Access Services & Solutions
--
Siemens AG, ITO A&S 4
Telefon: +49(0)9131/7-25150
Fax: +49(0)9131/7-43344
Email:   [EMAIL PROTECTED]
--



Re: [squid-users] Squid -> Homepage

2005-03-17 Thread Michael Pophal
It also took me some time to find anything ;-)

Try this Link
http://www.innopacusers.org/list/archives/2001/msg03577.html
and search for browser.startup.page

Hope this helps. It would be nice if you could let me know how it works
out for you.

Regards Michael



On Thu, 2005-03-17 at 11:13, Matus UHLAR - fantomas wrote:
> On 16.03 14:23, Michael Pophal wrote:
> > As I know there is a possibility to define a homepage via proxy-pac.
> 
> Where? I haven't found that anywhere.
> Can you provide more info?
> 
> > proxypac is a javascript, which configures the client. It tells the
> > client, which proxy to use and afaik it can tell the homepage, as well.
-- 
Mit freundlichen Grüssen / With kind regards

Michael Pophal
--
Topic Manager
Internet Access Services & Solutions
--
Siemens AG, ITO A&S 4
Telefon: +49(0)9131/7-25150
Fax: +49(0)9131/7-43344
Email:   [EMAIL PROTECTED]
--



Re: [squid-users] Measuring Squid Efficiency

2005-03-30 Thread Michael Pophal
Take a look at the reporting tool calamaris. There you get information
about caching efficiency and you can see the efficiency of your
refresh_patterns in a separate report as well. There are also values
like:
- Bandwidth savings in Percent (Byte hit rate)
- Proxy efficiency (HIT [kB/sec] / DIRECT [kB/sec])
- Average speed increase
and a huge number of other reports!

http://cord.de/tools/squid/calamaris/Welcome.html.en
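
For the record, I generate the reports roughly like this (options as I
remember them from calamaris 2.x; check calamaris --help for your
version, and adjust the log path):

cat /var/log/squid/access.log | calamaris -a -w > report.html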

Regards
Michael

On Wed, 2005-03-30 at 03:52, Bob Morrison wrote:
> I am new to squid and would like to know what information to look for to see
> if a squid cache needs adjusting to perform more efficiently.  The
> evaluation tool we’re using is CacheManager.CGI script that comes with
> squid.
> 
> Thanks in advance for any help 
> 
> Bob Morrison, CNE, MCSE
> Network Administrator
> Wallingford CT Public Schools USA
> 
-- 
Mit freundlichen Grüssen / With kind regards

Michael Pophal
--
Topic Manager
Internet Access Services & Solutions
--
Siemens AG, ITO A&S 4
Telefon: +49(0)9131/7-25150
Fax: +49(0)9131/7-43344
Email:   [EMAIL PROTECTED]
--



RE: [squid-users] Measuring Squid Efficiency

2005-03-30 Thread Michael Pophal
The request hit rate of ~39% is IMHO a pretty nice value, as are the
bandwidth savings!
As you have a cascade of proxies, you can try to increase your hit rate
by splitting the cached object size range between the parent and child
caches.

I run a proxy cascade where the parents cache objects larger than 8 KB
and the children cache objects smaller than 8 KB. This avoids duplicated
object caching and can increase the hit rate a little bit.
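
In squid.conf terms the split looks roughly like this (the 8 KB
threshold and the upper limit are simply the values I use):

# on the child proxies
maximum_object_size 8 KB

# on the parent proxies
minimum_object_size 8 KB
maximum_object_size 51200 KB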

The 'Proxy efficiency' value tells you that the proxy works fast enough
to deliver objects faster from the cache than from the origin server. Of
course, this should be the case ;-) On the other hand, it helps you spot
bottlenecks if it isn't.

Compared to the values I have on my cache cluster, I would say your
proxy works very well!

Regards Michael


On Thu, 2005-03-31 at 04:01, Bob Morrison wrote:
> Thanks for the info on Calamaris.  
> 
> I installed Calamaris 2.99 and came up with these results:
> Total amount:  requests 5
> Total amount cached:  requests 19488
> Request hit rate:  % 38.98
> Total Bandwidth:  Byte 346M
> Bandwidth savings:  Byte 50716K
> Bandwidth savings in Percent (Byte hit rate):  % 14.30
> Proxy efficiency (HIT [kB/sec] / DIRECT [kB/sec]):  factor 4.47
> Average speed increase:  % 12.48
> 
> The cache in this example is a child cache that is serving about 300 PC's in
> a high school.  The cache size is 1GB with 16K first level dirs and 256
> second level dirs.
> 
> Should I be worried about these numbers?  If so, what should I do to improve
> them?
> 
> Thanks in advance for any help 
> 
> Bob Morrison, CNE, MCSE
> Network Administrator
> Wallingford CT Public Schools USA
> 
> 
> -Original Message-
> From: Michael Pophal [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, March 30, 2005 3:10 AM
> To: [EMAIL PROTECTED]
> Cc: squid-users@squid-cache.org
> Subject: Re: [squid-users] Measuring Squid Efficiency
> 
> Take a look at the reporting tool calamaris. There you get information
> about caching efficiency and you can see the efficiency of your
> refresh_patterns in a separate report as well. There are also values
> like:
> - Bandwidth savings in Percent (Byte hit rate)
> - Proxy efficiency (HIT [kB/sec] / DIRECT [kB/sec])
> - Average speed increase
> and a huge amount of reports!
> 
> http://cord.de/tools/squid/calamaris/Welcome.html.en
> 
> Regards
> Michael
> 
> On Wed, 2005-03-30 at 03:52, Bob Morrison wrote:
> > I am new to squid and would like to know what information to look for to
> see
> > if a squid cache needs adjusting to perform more efficiently.  The
> > evaluation tool we’re using is CacheManager.CGI script that comes with
> > squid.
> > 
> > Thanks in advance for any help 
> > 
> > Bob Morrison, CNE, MCSE
> > Network Administrator
> > Wallingford CT Public Schools USA
> > 
-- 
Mit freundlichen Grüssen / With kind regards

Michael Pophal
--
Topic Manager
Internet Access Services & Solutions
--
Siemens AG, ITO A&S 4
Telefon: +49(0)9131/7-25150
Fax: +49(0)9131/7-43344
Email:   [EMAIL PROTECTED]
--



Re: [squid-users] Report in html format

2005-04-03 Thread Michael Pophal
Try calamaris. It gives you a lot of reports, also in HTML format and
with graphs.

Michael

On Fri, 2005-04-01 at 19:08, sasa wrote:
> Hi, I use squid and squidguard. My question is whether it is possible, with
> webmin+webalizer (or only webmin, or only webalizer, or other software), to
> get a report in HTML format that tells me, for every IP address in my LAN,
> which sites have been visited.
> Thanks.
> 
> Salvatore.




Re: [squid-users] Help understanding calamaris/squid output

2005-04-20 Thread Michael Pophal
Hi,
maybe you should use the new calamaris version. It tells you some
performance values like 'Proxy efficiency' and 'speed increase'. You
also get information about your refresh_pattern in the 'Requested
extensions' report. There you will see how many of the cached objects
are stale or fresh.
The 'Proxy efficiency' factor tells you how fast your squid is with
cached objects.

Regards Michael

On Mon, 2005-04-18 at 18:40, Scott Presnell wrote:
> HI Folks,
>   I'm running Squid Cache: Version 2.5.STABLE7 under NetBSD 2.0
> and I'm using calamaris to try and track performance.  Calamaris seems
> to be telling me that my TCP_REFRESH_HIT speed performance is poor; actually
> lower than my MISS performance.  I have some questions: 
> 
> 1) Why would this be true?  Overhead of the IMS test + the actual request?
> local disk performace issues?  What can I do to further interrogate and 
> increase
> the performance of this kind of request/response?
> 
> 2) Given the definition of TCP_REFRESH_HIT:
> 
>  The requested object was cached but STALE. The IMS query for the object 
> resulted in "304 not modified".
> 
> .. then what is the difference between TCP_REFRESH_HIT/200 and 
> TCP_REFRESH_HIT/304?
> 
>   Thanks for any help.
> 
>   - Scott
> 
> # Incoming TCP-requests by status
> status                    request      %  sec/req      Byte      %   kB/sec
> ------------------------- ------- ------ -------  -------- ------  -------
> HIT                         11599  54.81     0.12  11108299   6.26     7.95
>  TCP_REFRESH_HIT             6572  31.06     0.20   6582553   3.71     4.94
>  TCP_MEM_HIT                 2929  13.84     0.01   2529904   1.43    79.90
>  TCP_IMS_HIT                 1136   5.37     0.01    262521   0.15    18.61
>  TCP_HIT                      942   4.45     0.02   1706470   0.96    92.35
>  TCP_NEGATIVE_HIT              20   0.09     0.01     26851   0.02   141.74
> MISS                         7279  34.40     1.53   156276K  90.15    14.00
>  TCP_MISS                    6579  31.09     1.64   149822K  86.43    13.89
>  TCP_CLIENT_REFRESH_MISS      462   2.18     0.40   5698990   3.21    29.81
>  TCP_REFRESH_MISS             238   1.12     0.80    909725   0.51     4.64
> ERROR                        2283  10.79     0.72   6372629   3.59     3.78
>  TCP_MISS                    1718   8.12     0.96   5551109   3.13     3.30
>  TCP_DENIED                   564   2.67     0.00    819910   0.46   652.03
>  TCP_REFRESH_MISS               1   0.00     0.05      1610   0.00    32.09
> ------------------------- ------- ------ -------  -------- ------  -------
> Sum                         21161 100.00     0.67   173347K 100.00    12.23
> 
> My refresh_pattern(s) look like this:
> 
> refresh_pattern -i \.jpe?g$       1440 50% 10080 reload-into-ims
> refresh_pattern -i \.tiff?$       1440 50% 10080 reload-into-ims
> refresh_pattern -i \.gif$         1440 50% 10080 ignore-reload
> refresh_pattern -i \.png$         1440 50% 10080 reload-into-ims
> refresh_pattern -i \.bmp$         1440 50% 10080 reload-into-ims
> refresh_pattern -i \.p(n|b|g|p)m$ 1440 50% 10080 reload-into-ims
> refresh_pattern .                   30 50% 10080
> 
> 
> Example TCP_REFRESH_HITs:
> 
> 1113779628.851    240 192.168.37.165 TCP_REFRESH_HIT/200 648 GET http://www.google.com/nav_current.gif - DIRECT/www.google.com text/html
> 1113779628.876    257 192.168.37.165 TCP_REFRESH_HIT/200 1306 GET http://www.google.com/nav_first.gif - DIRECT/www.google.com text/html
> 1113779628.902    284 192.168.37.165 TCP_REFRESH_HIT/200 645 GET http://www.google.com/nav_page.gif - DIRECT/www.google.com text/html
> 1113779628.924    259 192.168.37.165 TCP_REFRESH_HIT/200 1787 GET http://www.google.com/nav_next.gif - DIRECT/www.google.com text/html
> 1113779629.092    241 192.168.37.165 TCP_REFRESH_HIT/200 2905 GET http://www.google.com/images/gds1.gif - DIRECT/www.google.com text/html
> 1113777949.633    323 192.168.37.58 TCP_REFRESH_HIT/304 165 GET http://www.bankofamerica.com/global/mvc_objects/stylesheet/masthead.css - DIRECT/www.bankofamerica.com -
> 1113777949.681    356 192.168.37.58 TCP_REFRESH_HIT/304 165 GET http://www.bankofamerica.com/global/hs_home/signin.js - DIRECT/www.bankofamerica.com -
> 1113777950.138    185 192.168.37.58 TCP_REFRESH_HIT/304 165 GET http://www.bankofamerica.com/global/js/fontsize.js - DIRECT/www.bankofamerica.com -

Re: [squid-users] Squid Log Analysis Suggestions

2005-04-20 Thread Michael Pophal
Hi,
if you use calamaris V3 you can modify the tables. It is easy to switch
off redundant columns.

Regards Michael

On Tue, 2005-04-19 at 05:08, Merton Campbell Crockett wrote:
> On Mon, 18 Apr 2005, Bob Morrison wrote:
> 
> > Hello
> > 
> > I need a very easy way to log what user accesses what URL including date and
> > time of access.
> 
> Squid's access log satisfies your stated requirement.  It provides the 
> date and time of each access, the IP address or domain name of the system 
> used, the user name if required by the web site, method, URL, etc.
> 
> 
> >  To make things a little harder, I do not want to install
> > Apache on the server.  But I won't mind FTP'ing the results to my PC and
> > look at them with a browser if that's what it takes.  I've looked at
> > Calamaris, Webalizer and SARG but can't find what I need.  
> 
> 
> Squid's access log con be viewed in situ or ftp'ed to your workstation for 
> viewing.  No need to install a web server.
> 
> 
> Webalizer is primarily concerned with the volume of activity.  I looked at 
> SARG but can't recall why I didn't like it.  Calamaris seemed to have more 
> of what I was looking for at the time but required modification to make it 
> more useful for my purposes.  The big problem was the horizontal width of 
> the tables.  Too many redundant columns of information.
> 
> 
> Merton Campbell Crockett




[squid-users] CARP does ignore cache_peer_domain

2004-08-10 Thread Michael Pophal
Hi,
I've configured my squid farm like:

cache_peer cache1.domain.com parent 81 83 no-query no-digest carp-load-factor=0.25
cache_peer cache2.domain.com parent 81 83 no-query no-digest carp-load-factor=0.25
cache_peer cache3.domain.com parent 81 83 no-query no-digest carp-load-factor=0.25
cache_peer cache4.domain.com parent 81 83 no-query no-digest carp-load-factor=0.25

So there are 4 parent caches. 
Some websites cannot handle changing client IPs (because of the
4 caches) and fail. When I try the directive 'cache_peer_domain' to
handle these 'problem hosts', squid just ignores it.

e.g. 
cache_peer_domain cache1.domain.com !.comdirect.de
cache_peer_domain cache2.domain.com !.comdirect.de
cache_peer_domain cache3.domain.com !.comdirect.de

Here squid should route all '.comdirect.de' requests to
cache4.domain.com, shouldn't it? But it doesn't!

What is wrong? Please help!


regards

Michael




[squid-users] ICAP patch for squid-2.5.STABLE6

2004-08-16 Thread Michael Pophal
Hi,
I couldn't find an ICAP patch for the squid-2.5.STABLE6 release.
I looked for it on http://devel.squid-cache.org/icap/, but that patch
does not apply cleanly.
I tried to patch it manually, but there are too many differences, so I
cannot be sure of a stable running squid.


Thanks for any advice!

Regards Michael



[squid-users] Two authentication schemes, NTLM and LDAP

2004-09-02 Thread Michael Pophal
Hi all,

my problem is that I have to provide two authentication schemes, LDAP
and NTLM. Unfortunately the user has no choice of which scheme to use,
because this is negotiated between browser and proxy. The strongest
authentication scheme wins -> NTLM. But some of my users only have
credentials in LDAP, others on the domain controller (NTLM).

I tried to give them the choice by running one proxy on two different
ports and separating the http_access lines with

acl NTLM_auth_port myport 
acl LDAP_auth_port myport 3334

http_access allow NTLM_auth_port NTLM_authenticated_user
http_access allow LDAP_auth_port LDAP_authenticated_user

but this doesn't help.

So the next step is to run two squids on one machine. Here is my
question: is it feasible to share one disk cache between both squids (I
run diskd)? I don't want to have a redundant disk cache.

If you have any good ideas on the above-mentioned problem, I would very
much appreciate it!

Thanks !!

Regards,
  Michael




[squid-users] ERR_ICAP_FAILURE

2004-09-09 Thread Michael Pophal
Hi,

I'm running squid-2.5.STABLE6 with ICAP to filter content against
WebWasher Dynablocator. My users sometimes get an ERR_ICAP_FAILURE,
which is confusing, because a reload of the requested page solves the
problem. Nevertheless this is a big problem for us, because 35,000 users
can make our hotline run very hot.

The ICAP patch is from Fri Jan 30 10:28:53 2004 GMT. 

In the cache log I find the following lines, but they appear much more
often than the ERR_ICAP_FAILURE does, so they are not necessarily the
reason for it:
snip
2004/09/09 12:56:36| headlen=5
2004/09/09 12:56:36| Read icap header : <0 >
2004/09/09 12:56:36| BAD ICAP status line <0 >
2004/09/09 12:56:36| icapStateFree: FD 75, icap 0x47583bc8
snip

Any advice is very appreciated!

Thanks,
Michael




[squid-users] squid ICAP Update

2004-10-27 Thread Michael Pophal
Hi,

I use squid 2.5.STABLE6 with the ICAP patch from
http://www.squid-cache.org/~wessels/squid-icap-2.5/.

Here my questions:

1) Why is ICAP not in the squid main branch? Is there any development
going on?

2) When can I expect an ICAP-patched squid-2.5.STABLE7?

3) I regularly get the following error message in the cache.log:
2004/10/26 14:30:33| assertion failed: icap_reqmod.c:856: "NULL ==
icap->reqmod.http_entity.callback"
Any idea on this error?

Thanks for any information.

Regards Michael




[squid-users] https_port question

2004-11-18 Thread Michael Pophal
Hi,

when I use the LWP user agent from CPAN, I cannot use HTTPS via the
squid. The reason: LWP doesn't issue a CONNECT; it sends a
'GET https://...' request and expects the proxy to make the SSL
connection to the web server itself.
I assume the same phenomenon occurs with some home-banking Java
applications, which cannot work with squid.
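
For illustration, the client side looks roughly like this (proxy host,
port and URL are placeholders); with this setup LWP sends the
'GET https://...' request to the proxy instead of a CONNECT:

use LWP::UserAgent;

my $ua = LWP::UserAgent->new;
# send both http and https requests through the squid proxy
$ua->proxy(['http', 'https'], 'http://proxy.example.com:8080/');

my $res = $ua->get('https://www.example.com/');
print $res->status_line, "\n";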

Here the question:
is the squid directive 'https_port' the right way to solve this problem?

Any suggestions would be appreciated.

Thanks,

Michael




Re: [squid-users] CALAMARIS

2004-11-30 Thread Michael Pophal

> Calamaris is only a reporting tool. It only parses the log files. AFAIK,
> its output is text format.
> 

Calamaris also supports graphical output in the new version (v3). The
graphics are embedded in the HTML. In addition, there are some more
reports included, and calamaris calculates some values, e.g. proxy
efficiency, bandwidth savings, speed increase ...

I use calamaris HTML output (not text output) and run httpd on every
proxy server.

Michael




Re: [squid-users] Question according Calamaris 2.99[OT]

2004-12-03 Thread Michael Pophal
Hi Sebastian,

calamaris::calBars3d is part of the calamaris distribution. This
calamaris::calBars3d.pm needs GD::Graph. 
Have a look at the package; at least the following should be included:
- calamaris
- calamaris.conf
- calAxestype3d.pm
- calAxestype.pm
- calBars3d.pm

The cal*.pm files are needed in Graph mode. In calamaris there is a line
like
use lib '/usr/local';
You have to adapt this path to the location where your
calamaris::cal*.pm files reside.

Hope that helps.

Michael



On Fri, 2004-12-03 at 11:12, Sebastian Pasch wrote:
> Hello,
> I tried to use the new calamaris version 2.99.xx which should have the
> features of the upcoming 3.x
>  
> I get the following error:
>  
> /calamaris: Couldn't load package calamaris::calBars3d,
>   maybe it is not installed: Not a directory
> 
> This error comes up when I enable graph support. I installed GD and NetAddr
> lib and tried different paths for the .pm files of calamaris; but it
> continously seems to doesn´t find them.
>  
> Thx for any suggestions
> Sebastian Pasch



Re: [squid-users] Re: Squid limits and hardware spec

2004-12-03 Thread Michael Pophal
As mentioned before, try the new version of calamaris (v2.99). It
includes a new report, 'Requested extensions', which shows you some
nice information about object freshness:
- ratio fresh/stale 
- ratio unmod/mod
This helps you improve your squid refresh_pattern settings.
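
For example, if the report shows that a certain file type is almost
always still fresh, you can cache it more aggressively with something
like (the values are just an illustration):

refresh_pattern -i \.gif$ 1440 50% 10080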

Michael

On Thu, 2004-12-02 at 22:39, Adam Aube wrote:
> Martin Marji Cermak wrote:
> 
> > I have been playing with Squid under a heavy load and there are some
> > stats. I am trying to maximise the "Byte Hit Ratio" value. I got 13%
> > average, but I am not happy about this number - I want it higher
> 
> To increase your byte hit ratio, you can:
> 
> 1) Switch to one of the heap cache replacement policies
> 2) Tune your refresh_pattern settings to make Squid cache more aggressively
> 
> See the FAQ and default squid.conf for details on these items.
> 
> However, before going through the tuning, run an analysis tool (such as
> Calamaris) on your logs to see what your traffic pattern is like. This will
> show you what a reasonable byte hit ratio would be.
> 
> If, for example, 70% of your traffic is dynamic content (which usually
> cannot be cached), then a 13% byte hit ratio is actually pretty good.
> 
> > USED HARDWARE:
> > Processor: P4 1.8GHz
> > Memory:1 GB
> > Hardisk:   40 GB IDE 7200rpm
> 
> > Requests: 180 req/sec (peak), 60 req/sec (day average).
> 
> According to posts from Squid developers, a single caching Squid box has an
> upper limit of about 300 - 400 requests/second. This isn't too bad,
> considering you are using a single IDE disk for the entire system.
> 
> > maximum_object_size 51200 KB (SHOULD I MAKE IT HIGHER ???)
> 
> Actually, you might want to make it lower. Most web requests will not be for
> 50 MB files, and your byte hit ratio will be hurt if a 50 MB file that is
> requested once forces out fifty 1 MB files that are accessed twice each.
> 
> The default is generally acceptable, unless log analysis shows large numbers
> of requests for larger files.
> 
> > cache_dir aufs /cache 25000 16 256
> 
> You should size your cache to hold about a week's worth of traffic. Just
> watch your memory usage (1 GB of cache ~ 10 MB of memory for metadata).
> 
> > cache_mem 8 MB
> 
> This is generally fine - the OS will generally use free memory to cache
> files anyway, which will have the same effect as boosting this setting.
> 
> > I am going to install a new box with SCSI disks so I will report to you
> > how the performance will change.
> 
> Best disk performance will be achieved with multiple small, fast SCSI disks
> dedicated to Squid's cache, each with its own cache_dir (no RAID), and
> round-robin between the cache_dirs.
> 
> Adam




Re: AW: [squid-users] Question according Calamaris 2.99[OT]

2004-12-03 Thread Michael Pophal
I have about 40 calamaris installations running, so it works ;-)!

Try the following file structure:

- /usr/local/calamaris/calamaris
- /usr/local/calamaris/calamaris.conf
- /usr/local/calamaris/calAxestype3d.pm
- /usr/local/calamaris/calAxestype.pm
- /usr/local/calamaris/calBars3d.pm

make sure that the line
use lib '/usr/local/';
exists in /usr/local/calamaris/calamaris.

calamaris internally requires calamaris::calBars3d.
calBars3d.pm requires GD::Graph::bars;
So try to check 
perl -c /usr/local/calamaris/calBars3d.pm
If you get an error like 'Can't locate GD/Graph/bars.pm', your GD::Graph
has been installed in the wrong location. You have to ensure that
GD::Graph is installed where Perl is looking for it.

Check perl -V to see the @INC paths where Perl looks for packages.
Compare them with the path where GD::Graph is installed. If necessary,
move the GD directory to something like /usr/lib/perl5/site_perl/

Regards Michael


On Fri, 2004-12-03 at 12:05, Sebastian Pasch wrote:
> Thx, but
> 
> >calamaris::calBars3d is part of the calamaris distribution. This
> calamaris::calBars3d.pm needs GD::Graph. 
> >Have a look at the package, there should be inluded at least
> >- calamaris
> >- calamaris.conf
> >- calAxestype3d.pm
> >- calAxestype.pm
> >- calBars3d.pm
> 
> Check, I have all these files
> 
> >The cal*.pm files are needed in Graph mode. In calamaris there is a line
> like use lib '/usr/local'; You have to adapt this >path to the location,
> where your calamaris::cal*.pm files reside.
> 
> Check, I found that line an already tried various things
> 
> - First I tried to copy the 3 .pm files to /usr/local
> - Second I trid to move complete calamaris folder to /usr/local
> - Third I tried to rename the calamaris folder to different names like
> calamaris-3.0.xx, calamaris-2.99.xx
> 
> But I get the same error :-(
> 
> Thx anyway
> Sebastian Pasch
> 
> 
> >On Fri, 2004-12-03 at 11:12, Sebastian Pasch wrote:
> >> Hello,
> >> I tried to use the new calamaris version 2.99.xx which should have the 
> >> features of the upcoming 3.x
> >>  
> >> I get the following error:
> >>  
> >> /calamaris: Couldn't load package calamaris::calBars3d,
> >>   maybe it is not installed: Not a directory
> >> 
> >> This error comes up when I enable graph support. I installed GD and 
> >> NetAddr lib and tried different paths for the .pm files of calamaris; 
> >> but it continously seems to doesn´t find them.
> >>  
> >> Thx for any suggestions
> >> Sebastian Pasch




[squid-users] How to restrict ftp

2004-12-10 Thread Michael Pophal
Hi,
normally it is enough to allow the SSL ports 443 and 563. But
FTP-over-HTTP-proxy clients (e.g. SmartFTP) use the CONNECT method, so
I have to open port 21 and the ports >1023.
I want to restrict port 21 and the range 1023-65535 to FTP use only. Is
there a way to do this? How can I identify the FTP requests?
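
For illustration, what I have at the moment looks roughly like this (the
acl names are made up):

acl SSL_ports port 443 563
acl FTP_ports port 21 1023-65535
acl CONNECT method CONNECT
http_access deny CONNECT !SSL_ports !FTP_ports

But that only restricts the port numbers; it does not tell squid whether
the tunnelled traffic is really FTP.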

Regards Michael 
-- 
Mit freundlichen Grüssen / With kind regards

Michael Pophal
--
Topic Manager
Internet Access Services & Solutions
--
Siemens AG, ITO A&S 4
Telefon: +49(0)9131/7-25150
Fax: +49(0)9131/7-43344
Email:   [EMAIL PROTECTED]
--



Re: [squid-users] https_port question

2004-12-12 Thread Michael Pophal
You read it wrong; I mentioned that we have problems with SOME Java
applications.

But now I have figured out that it depends on the vendor: the Microsoft
JRE works, the Sun JRE does not!

Do you know of any similar problems?

Regards Michael

On Mon, 2004-12-13 at 03:41, Ow Mun Heng wrote:
> On Thu, 2004-12-09 at 06:39, Henrik Nordstrom wrote:
> > On Fri, 19 Nov 2004, Michael Pophal wrote:
> > > I assume the same phenomenon with some home-banking java applications,
> > > which can not work with squid.
> > 
> > Quite unlikely. The problems with most JAVA applications is that they 
> > assume there is no proxy and make direct TCP connections to their "home 
> > server".
> 
> I've not done extensive testing, but did I read right that squid does
> not support Java proxying?? If that's the case, then transparent
> proxying (interception) would have a big problem wouldn't it??
> 
> --
> Ow Mun Heng
> Gentoo/Linux on D600 1.4Ghz 
> Neuromancer 10:40:34 up 1:35, 5 users, 0.49, 0.52, 0.63 




Re: [squid-users] squid performance

2005-01-24 Thread Michael Pophal
Hi Daniel,

Proxy efficiency: it compares the speed of fetching objects from the
cache with the speed of fetching objects from the Internet. It shows you
how fast your cache can deliver objects. Of course, this value should be
> 1, otherwise you have a bottleneck. The higher the efficiency, the
better your proxy performs.

'Bandwidth savings [%]' = 'Bandwidth savings [byte]' divided by 'Total
Bandwidth [byte]'.
It shows you what percentage of the bytes sent to clients were served
from the cache.
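
A quick example with made-up numbers: if the proxy delivered 350 MB in
total and 50 MB of that came from the cache, the bandwidth savings are
50/350 = ~14%.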

Hope this helps.

Regards Michael

On Sun, 2005-01-23 at 02:42, Daniel Navarro wrote:
> what is the squid performance parameter that shows me
> how much efficient it is?
> what is the squid parameter that shows me how much
> bandwidth have saved?
> 
> I refer to calamaris reports.
> Yours, Daniel Navarro
>Maracay, Venezuela
>www.csaragua.com/ecodiver
> 
> _
> Do You Yahoo!?
> Información de Estados Unidos y América Latina, en Yahoo! Noticias.
> Visítanos en http://noticias.espanol.yahoo.com
-- 
Mit freundlichen Grüssen / With kind regards

Michael Pophal
--
Topic Manager
Internet Access Services & Solutions
--
Siemens AG, ITO A&S 4
Telefon: +49(0)9131/7-25150
Fax: +49(0)9131/7-43344
Email:   [EMAIL PROTECTED]
--



Re: [squid-users] squidrunner

2005-02-10 Thread Michael Pophal
Hi squidrunner team,

... nice idea!
The ICAP patch is missing from your software, though.

Regards Michael

On Wed, 2005-02-09 at 05:43, squidrunner developer wrote:
> Dear All,
>  
> Warm wishes to all.
>  
> We are working on a script to make squid build,
> configuration and 
> installation automation, based on shell script. We
> started this project that 
> end-users are getting problem on build, installation
> and configuration. 
> Currently intial version to get recent source, patches
> and build with 
> default configuration, squidrunnerv1.0 is available in
> freshmeat.net as,
>  
> http://freshmeat.net/projects/squidrunner/
>  
> Intially, It is being very simple, we are looking for
> all your 
> comments, feedbacks, views on this project.
>  
> Expecting good from all. Have a nice day.
>  
> regards,
> squidrunner team.
> 
> 
> 
>   
> __ 
> Do you Yahoo!? 
> Read only the mail you want - Yahoo! Mail SpamGuard. 
> http://promotions.yahoo.com/new_mail
-- 
Mit freundlichen Grüssen / With kind regards

Michael Pophal
--
Topic Manager
Internet Access Services & Solutions
--
Siemens AG, ITO A&S 4
Telefon: +49(0)9131/7-25150
Fax: +49(0)9131/7-43344
Email:   [EMAIL PROTECTED]
--



[squid-users] ICP / CARP

2004-05-13 Thread Michael Pophal
Hi all,

For a while now I have had a problem with the siblings in my proxy
cluster. Some weeks ago I activated CARP on the proxies to load-balance
the parents by URL hash. This should save disk space!
But unfortunately, since this change the siblings do not work anymore.
No ICP request is sent or received by any of the proxies. Nothing
happens on the ICP port.
I played with the order of the cache_peer lines in squid.conf and
figured out the following:
- if the cache_peer parent (CARP) line is followed by the cache_peer
sibling line, no sibling works.

cache_peer x.x.x.1 parent 81 83 no-query no-digest carp-load-factor=0.5
cache_peer x.x.x.2 parent 81 83 no-query no-digest carp-load-factor=0.5
cache_peer y.y.y.1 sibling 81 83 proxy-only
cache_peer y.y.y.2 sibling 81 83 proxy-only

- if the cache_peer sibling line is followed by the cache_peer parent
(CARP) line, no CARP works.

cache_peer y.y.y.1 sibling 81 83 proxy-only
cache_peer y.y.y.2 sibling 81 83 proxy-only
cache_peer x.x.x.1 parent 81 83 no-query no-digest carp-load-factor=0.5
cache_peer x.x.x.2 parent 81 83 no-query no-digest carp-load-factor=0.5

What is going wrong? Do I misunderstand these relationships?

Regards Michael



Re: [squid-users] Problen with cache_dir

2004-05-14 Thread Michael Pophal
Hi Fabian,

what about your inodes?
check 'df -i'

If you cache a lot of small objects, it is possible that your inode
limit has been reached. Then you have to recreate the filesystem with
mke2fs or the equivalent command for your filesystem.
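
In that case you can recreate the filesystem with more inodes, roughly
like this (ext2 example; the device name is a placeholder, and this of
course wipes the cache partition):

mke2fs -i 2048 /dev/hdb1

The -i option sets bytes-per-inode; a smaller value gives you more
inodes for the same disk size.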

Michael

On Thu, 2004-05-13 at 20:28, Software wrote:
> Hi i have squid 2.5 stable 2
> 
> I've configured squid with this options the squid was installed in 
> /usr/local/squid the filesystem /usr has a size of 3 GB in this moment i 
> have available  1GB
> 
> cache_dir ufs /data 28000 16 256
> cache_access_log /usr/local/squid/logs/access.log
> cache_log /usr/local/squid/logs/cache.log
> cache_store_log /usr/local/squid/logs/store.log
> cache_swap_log /data/swap.log
> 
> The problem with my filesystem /data (it has 30 GB of capacity) has 83% 
> available the squid stopped and the messager error is like the 
> filesystem doesn't have more space in this direcoty, and i can to 
> restart the proxy again i must to delete some directory cache in this 
> place to restart the squid. Here only i have the squid cache dir.
> 
> How can i do to avoid this problem .
> 
> Thanks
> Fabian
-- 
Mit freundlichen Grüssen / With kind regards

Michael Pophal

Siemens AG, I&S IT PS 223 OP3
Telefon: +49(0)9131/7-25150
Fax: +49(0)9131/7-43344
Email:   [EMAIL PROTECTED]




[squid-users] efficient IP ACLs

2004-05-17 Thread Michael Pophal
Our squid has to handle more than 100,000 IP addresses.

Is it more efficient to fill up subnets, or doesn't it matter?

E.g. 250 IPs of a class-C range need to have proxy access, but I could
also allow all 255. Is there a difference in performance when I give
squid maybe 10 subnets with 250 IPs, or 1 class-C subnet with 255 IPs?
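
To make it concrete (addresses are just examples), the question is
whether

acl mynet src 192.168.1.0/255.255.255.0

performs any differently from listing the ~250 allowed hosts one by one:

acl mynet src 192.168.1.1 192.168.1.2 192.168.1.3 ...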

regards Michael




[squid-users] sibling doesn't work with CARP-parents

2004-05-17 Thread Michael Pophal
For a while now I have had a problem with the siblings in my proxy
cluster. Some weeks ago I activated CARP on the proxies to load-balance
the parents by URL hash.
But unfortunately, since this change the siblings do not work anymore.
No ICP request is sent or received by any of the proxies. Nothing
happens on the ICP port.

Any help!?

Regards Michael




RE: [squid-users] efficient IP ACLs

2004-05-17 Thread Michael Pophal
I assumed it is a matter of the number of ACLs: I can have 10 ACLs or 1
ACL in squid. But I don't know how squid handles this internally, so you
may be right and it doesn't matter anyway.

Sure, I want to permit only the allowed IPs on the proxy, but it is also
a matter of performance. We have about 7600 IP ACLs, which could be
reduced by compacting them into larger subnets.

Michael 

On Tue, 2004-05-18 at 07:51, Elsen Marc wrote:
>  
> > 
> > our squid has to handle more than 100.000 IP adresses.
> > 
> > Is it more efficient to fill up subnets or doesn't it matter. 
> > 
> > E.g. 250 IPs of an C-IP Range  have to have proxy access, but 
> > I can also
> > allow all 255. Is there a difference in performance, when I give squid
> > maybe 10 subnets with 250 IPs or 1 C-Subnet with 255 IPs.
> > 
>  
>  That part of the networking stuff happens at a lower layer, and is probably
> influenced more by the performance/efficiency of the network stack of your box
> than by SQUID.
> SQUID's limitations, if any, are
> determined by finding out, for instance, the number of requests/sec
> it has to deal with etc.
> 
> M.
-- 
Mit freundlichen Grüssen / With kind regards

Michael Pophal

Siemens AG, I&S IT PS 223 OP3
Telefon: +49(0)9131/7-25150
Fax: +49(0)9131/7-43344
Email:   [EMAIL PROTECTED]