Re: [squid-users] squid acceleration for more than 1 ip ...

2006-08-23 Thread nima sadeghian

OK, I have to explain it more:
In its current setup my Squid box accelerates automation.fisheries.ir
to my local server at this IP: 172.x.x.x.
I want to also accelerate www.fisheries.ir to another server at my
local address 172.x.x1.x2.
Can I do that with Squid?
regards
nima

On 8/21/06, Henrik Nordstrom [EMAIL PROTECTED] wrote:

Mon 2006-08-21 at 10:45 +0430, nima sadeghian wrote:
 Dear Friends
 Hi, can Squid in acceleration mode be used to accelerate more than
 one IP?

Yes, but it's easier if you think of it in terms of domains rather than
IPs. Or do you really need to support old clients not sending Host
headers?

 I have one server in my LAN that Squid accelerates requests
 from the web to. Can I now assign another server in my LAN to this
 Squid, or do I have to set up another Squid?

A single Squid can handle as many public IP addresses as you like, each
with as many domains as you like, and connected to as many backend web
servers as you like, in any combination.

Regards
Henrik






--
Best Regards
NIMA SADEGHIAN


Re: [squid-users] squid acceleration for more than 1 ip ...

2006-08-23 Thread nima sadeghian

OK, I have to explain it more:
In its current setup my Squid box accelerates automation.fisheries.ir
to my local server at this IP: 172.x.x.x.
I want to also accelerate www.fisheries.ir to another server at my
local address 172.x.x1.x2.
Can I do that with Squid?
regards
nima



On 8/21/06, Matus UHLAR - fantomas [EMAIL PROTECTED] wrote:

On 21.08.06 10:45, nima sadeghian wrote:
 Hi, can Squid in acceleration mode be used to accelerate more than
 one IP? I have one server in my LAN that Squid accelerates requests
 from the web to. Can I now assign another server in my LAN to this
 Squid, or do I have to set up another Squid?

Yes, you just have to set up the ACLs properly so as not to create an open proxy.

--
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
(R)etry, (A)bort, (C)ancer




--
Best Regards
NIMA SADEGHIAN


Re: [squid-users] squid acceleration for more than 1 ip ...

2006-08-23 Thread Henrik Nordstrom
On Wed, 2006-08-23 at 13:44 +0430, nima sadeghian wrote:
 OK, I have to explain it more:
 In its current setup my Squid box accelerates automation.fisheries.ir
 to my local server at this IP: 172.x.x.x.
 I want to also accelerate www.fisheries.ir to another server at my
 local address 172.x.x1.x2.
 Can I do that with Squid?

Yes.

See cache_peer + cache_peer_domain directives.

You only need a single http_port.
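A minimal sketch of what that looks like (the peer names and the 172.16.x.x
addresses below are placeholders, not your real ones):

```
# One listening port serves both sites
http_port 80 vhost

# Pick the backend web server by the requested domain
cache_peer 172.16.0.1 parent 80 0 no-query originserver name=automation
cache_peer_domain automation automation.fisheries.ir

cache_peer 172.16.0.2 parent 80 0 no-query originserver name=www
cache_peer_domain www www.fisheries.ir
```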

Regards
Henrik



Re: [squid-users] squid acceleration for more than 1 ip ...

2006-08-23 Thread nima sadeghian

But 172.x.x.x and 172.x.x1.x2 are web servers, not cache servers. Is
cache_peer OK for this?


On 8/23/06, Henrik Nordstrom [EMAIL PROTECTED] wrote:

On Wed, 2006-08-23 at 13:44 +0430, nima sadeghian wrote:
 OK, I have to explain it more:
 In its current setup my Squid box accelerates automation.fisheries.ir
 to my local server at this IP: 172.x.x.x.
 I want to also accelerate www.fisheries.ir to another server at my
 local address 172.x.x1.x2.
 Can I do that with Squid?

Yes.

See cache_peer + cache_peer_domain directives.

You only need a single http_port.

Regards
Henrik





--
Best Regards
NIMA SADEGHIAN


Re: [squid-users] almost there , just a little help needed

2006-08-23 Thread Visolve Squid

S t i n g r a y wrote:



Well, thanks to all the help you guys provided, I have enabled OpenBSD + Squid + squidGuard on my network for the first time; the internet seems to work very fast now.
Thank you.

Now I want to block only specific IPs, specified in a file, from downloading .exe and .mp3 files from the internet. With my limited knowledge I have made this config, but it's not working. Can you please tell me what's wrong and how I should set it up?

Expression file:

\.(ra?m|mpe?g?|mov|movie|qt|avi|dif|dvd?|exe|mp3)($|\?)


Hello Stingray,

You can block the downloads for the specified IPs by using the following acl
settings in the Squid configuration file (squid.conf):


acl restricted_IPs src "/usr/local/ip_list_file"
acl restricted_dwnlds urlpath_regex -i \.mp3$ \.exe$
http_access deny restricted_dwnlds restricted_IPs
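As a side note, the urlpath_regex idea (and the expression-file pattern quoted
above) can be sanity-checked outside Squid. A small Python sketch with made-up
sample URLs, not taken from the original post:

```python
import re

# The expression-file pattern quoted above: a blocked extension must end
# the URL path, or be immediately followed by a "?" query separator.
pattern = re.compile(
    r"\.(ra?m|mpe?g?|mov|movie|qt|avi|dif|dvd?|exe|mp3)($|\?)",
    re.IGNORECASE,
)

for url in (
    "http://example.com/setup.exe",      # matches: .exe ends the path
    "http://example.com/song.mp3?id=1",  # matches: .mp3 followed by "?"
    "http://example.com/page.html",      # no match
):
    print(url, "-> blocked" if pattern.search(url) else "-> allowed")
```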

Thanks,
Visolve Squid Team
http://www.visolve.com/squid/


Re: [squid-users] squid acceleration for more than 1 ip ...

2006-08-23 Thread Henrik Nordstrom
On Wed, 2006-08-23 at 14:17 +0430, nima sadeghian wrote:
 But 172.x.x.x and 172.x.x1.x2 are web servers, not cache servers. Is
 cache_peer OK for this?

Yes, since 2.6.

If you want to do this in 2.5 then search the archives. I have posted many
times how to accelerate any number of sites in 2.5.

Regards
Henrik



[squid-users] Squid sometimes failed on boot but it work

2006-08-23 Thread Paolo De Marco

Hi all.
I have a RedHat 9 box with Squid Cache version 2.5.STABLE1 for
i386-redhat-linux-gnu, and it starts at machine boot.

Sometimes Squid fails to start at the machine's boot. In the messages log I see:

squid: Starting squid:
squid[1886]: Squid Parent: child process 1889 started
squid: .
last message repeated 19 times
squid:
rc: Starting squid:  failed

but it works fine! Squid accepts all requests. In cache.log there are no
errors...

I have 8 machines with the same problem. Any idea?

--
Paolo De Marco


[squid-users] Content-Transfer-Encoding:gzip problem with squid...

2006-08-23 Thread Juhasz Gabor

Hi !

I'm a newbie with Squid and I didn't find any solution in the FAQ for the
following problem:

Our company is developing a robust web application. We have to use HTTP
compression to download plenty of SVG files. We have tested our application
with many proxies (Linux, Windows) and it has worked perfectly. Unfortunately,
Squid dislikes gzip compression, and our web application doesn't work.

How can I use $(subject) with Squid? Although Squid passes this header, it
changes the HTTP protocol version (1.1 -> 1.0), and IE doesn't like a
'Content-Transfer-Encoding: gzip' header with the HTTP/1.0 protocol and does
nothing (maybe it even crashes).

What to do ?

thanks,
gabor






[squid-users] where are the w and U variables in ERR pages defined?

2006-08-23 Thread nick humphrey

I'm looking at the error page ERR_INVALID_URL and I see the following
two variables:
%U = url
%w = webmaster

Where are these defined, and where can I find a list of all such variables?


Re: [squid-users] Content-Transfer-Encoding:gzip problem with squid...

2006-08-23 Thread Henrik Nordstrom
On Wed, 2006-08-23 at 12:23 +0200, Juhasz Gabor wrote:

 How can I use $(subject) with Squid? Although Squid passes this
 header, it changes the HTTP protocol version (1.1 -> 1.0).

Yes. Squid is still HTTP/1.0 for various small but important reasons.

 and IE doesn't like a 'Content-Transfer-Encoding: gzip' header with
 the HTTP/1.0 protocol and does nothing (maybe it even crashes).

Now you confuse me a bit. There is no Content-Transfer-Encoding in HTTP;
what there is is:

  Content-Encoding, indicating the encoding of the entity (all versions)

  Transfer-Encoding, a transport-level hop-by-hop encoding (HTTP/1.1 and
later).

Squid supports the first (Content-Encoding), but not the second
(Transfer-Encoding).

Note: HTTP/1.1 is very clear that Transfer-Encoding MUST NOT be used in
responses to HTTP/1.0 requests.

Some day we'd like to support HTTP/1.1, but still some work remains.

Regards
Henrik



Re: [squid-users] where are the w and U variables in ERR pages defined?

2006-08-23 Thread Henrik Nordstrom
On Wed, 2006-08-23 at 13:13 +0200, nick humphrey wrote:
 I'm looking at the error page ERR_INVALID_URL and I see the following
 two variables:
 %U = url
 %w = webmaster
 
 Where are these defined, and where can I find a list of all such variables?

The FAQ has a list of most variables.

%w is defined in squid.conf. Most other variables refer to
different aspects of the request.

Regards
Henrik



Re: [squid-users] where are the w and U variables in ERR pages defined?

2006-08-23 Thread Henrik Nordstrom
On Wed, 2006-08-23 at 13:28 +0200, nick humphrey wrote:
 I see that %w is cache_mgr, but where is the mapping between the two?
 The actual variable w is not in squid.conf...

The definition of the % codes is in the code. They are not variables as
such, more like template codes.

Regards
Henrik



[squid-users] Java, proxy.pac, and squid

2006-08-23 Thread Michael W. Lucas

Hi,

I'm not sure this is even related to Squid, but it could be and I need
to double-check everything.  I'm using Squid 2.5S13 on RHEL ESR4.

We need to access a Web site that launches a Java-based file transfer
client.

If I configure the client browser manually, by entering
proxy.us.add:8080 (.add is our private internal domain), the applet
works.

If I use the following proxy.pac to autoconfigure, however, it doesn't
work:

function FindProxyForURL(url, host)
{
    // variable strings to return
    var proxy_yes = "PROXY proxy.us.add:8080";
    var proxy_no = "DIRECT";

    return proxy_yes;
}

To my eye it seems that the browser should be sending all requests to
Squid, no matter what, in either case.  access.log seems to indicate
that all the requests are traversing Squid.

So, either Squid handles cases differently or the browser isn't
actually sending all the requests to the proxy.  I'll happily track
down the latter elsewhere, but also need to check: does Squid handle
these cases differently?

Thanks,
==ml

-- 
Michael W. Lucas[EMAIL PROTECTED], [EMAIL PROTECTED]
http://www.BlackHelicopters.org/~mwlucas/
Latest book: PGP & GPG -- http://www.pgpandgpg.com
The cloak of anonymity protects me from the nuisance of caring. -Non Sequitur


[squid-users] incomplete requests

2006-08-23 Thread Stefan Palme

Hi,

I have 2.6.STABLE1-20060711 installed as an accelerating server in front
of an Apache 2 (which itself works as a frontend for Zope 2.7).

When using normal browsers (Firefox, IE, Mozilla, Opera) all works
perfectly. When using wget to access the same URL from the same
client, wget sometimes fetches only between 50 and 90 percent of the
data and then freezes:

---
[EMAIL PROTECTED] wget http://zwickau-bp24.de
--15:06:33--  http://zwickau-bp24.de/
   => `index.html.8'
Resolving zwickau-bp24.de... 88.198.32.172
Connecting to zwickau-bp24.de[88.198.32.172]:80... connected.
HTTP request sent, awaiting response...

61% [==                         ] 74,937   179.92K/s
--

At this stage the wget process seems to freeze, i.e. it does not fetch
any more data. Maybe it would time out after some minutes... I just killed
it with SIGTERM.

The client sits in a LAN behind a firewall, but the same happens when using
wget on the gateway host. I already have the squid option
httpd_accel_no_pmtu_disc enabled, but this did not help.

Any hints?

Best regards
-stefan-




Re: [squid-users] incomplete requests

2006-08-23 Thread Stefan Palme

Hi,

After some logfile and traffic analysis I found that wget
seems to be buggy. The bug occurs when using wget-1.9.1. After
upgrading to wget-1.10.2 the world is perfect again :)

Best regards
-stefan-

 Hi
 maybe on squid log ?
 or run tcpdump and record tcp session ?
 Regards
 Rmkml
 
 
 On Wed, 23 Aug 2006, Stefan Palme wrote:
 
  Date: Wed, 23 Aug 2006 15:15:19 +0200
  From: Stefan Palme [EMAIL PROTECTED]
  To: squid-users@squid-cache.org
  Subject: [squid-users] incomplete requests
  
 
  Hi,
 
  I have 2.6.STABLE1-20060711 installed as an accelerating server in front
  of an apache-2 (which itself works as frontend for zope-2.7).
 
  When using normal browsers (Firefox, IE, Mozilla, Opera) all works
  perfectly. When using wget to access the same URL from the same
  client, wget sometimes fetches only between 50 and 90 percent of the
  data and then freezes:
 
  ---
  [EMAIL PROTECTED] wget http://zwickau-bp24.de
  --15:06:33--  http://zwickau-bp24.de/
=> `index.html.8'
  Resolving zwickau-bp24.de... 88.198.32.172
  Connecting to zwickau-bp24.de[88.198.32.172]:80... connected.
  HTTP request sent, awaiting response...
 
  61% 
  [==
] 74,937   179.92K/s
  --
 
  At this stage the wget process seems to freeze, i.e. does not fetch
  any more data. Maybe it will timeout after some minutes... I've just killed
  it with SIGTERM.
 
  The client sits in a LAN behind a firewall, but the same happens when using
  wget on the gateway host. I already have the squid option 
  httpd_accel_no_pmtu_disc
  enabled, but this did not help.
 
  Any hints?
 
  Best regards
  -stefan-
 
 
 
-- 
---
Dipl. Inf. (FH) Stefan Palme
 
email: [EMAIL PROTECTED]
www:   http://hbci4java.kapott.org
icq:   36376278
phon:  +49 341 3910484
fax:   +49 1212 517956219
mobil: +49 178 3227887
 
key fingerprint: 1BA7 D217 36A1 534C A5AD  F18A E2D1 488A E904 F9EC
---



[squid-users] tweaking squid values

2006-08-23 Thread Roger 'Rocky' Vetterberg
Hi list.

I've recently been given two spare servers that are to be deployed
as squid caches for a network with about 300 workstations, all heavy
internet users and sharing a 10MBit dedicated internet line.
The idea is to have the squids proxy http and MSN messenger
connections, as well as some ftp restricted to certain sites.

The servers in question are one PIII 1.0GHz with 1G of RAM and 72G
of raid-0 diskspace, and one PIII 1.4GHz with 512M RAM and 36G of
raid-0 diskspace. Both run the latest 6.x version of FreeBSD and
Squid 2.5.14_2.

I have tried to configure them according to what I read in the
documentation and FAQs, but I run into heavy swapping, "Unable to
allocate" errors, or just plain bad performance. It seems I'm having
problems finding a good balance between performance and stability.

Could someone give me some rough figures to use for cache_mem,
cache_dir, L1, L2, Q1 and Q2?
Would I benefit from using diskd, or should I run with normal UFS?
Both servers are armed with hardware raid on 15k drives, so the disk
I/O should be pretty decent.

TIA
--
R



[squid-users] Re: How to set multiple namebased virtual reverse proxy?

2006-08-23 Thread Robin Bowes
Henrik Nordstrom wrote:
 Mon 2006-08-21 at 06:40, Monty Ree wrote:
 
 Is there any problem to set this?
 
 Exactly how it's meant to be done, except that perhaps you want to use
 the real server IP addresses in squid.conf rather than DNS.

Henrik,

I too want to set up something with exactly this configuration.

Whereabouts do the IPs go?

Here's a stab at the configuration:

http_port 192.168.26.26:80 vhost

cache_peer 192.168.0.41 parent 80 0 no-query originserver name=cache
cache_peer_domain cache cache.example.com
cache_peer 192.168.0.42 parent 80 0 no-query originserver name=images
cache_peer_domain images images.example.com


One other thing I'm not sure about is DNS resolution.

I currently have this configuration:

client - LB1 - squid farm - LB2 - apache farm

LB1 & LB2 are load-balancers

So, clients access cache.example.com which externally (i.e. public IP
address) resolves to LB1.
LB1 passes the request to a machine in the squid farm (squid01,02,03)
The squid instances peer with each other and are configured as
accelerators for the apache farm via LB2
proxy.example.com resolves to LB2 (192.168.0.41)
LB2 passes the request on to a machine in the apache farm
(proxy01,02,03) which are configured with cache.example.com as
ServerAliases in httpd.conf.

On each of the squid machines, I'm currently using this config (IP
address different per machine):

http_port 192.168.26.26:80 vhost
cache_peer 192.168.0.41 parent 80 0 no-query originserver

LB2 has address 192.168.0.41

However, I find that this only works if cache.example.com resolves
internally to 192.168.0.41.

Is this how it's supposed to work, or am I missing something?

Basically, what I'd like to happen is :

 * all incoming requests for cache.example.com get passed to 192.168.0.41
 * all incoming requests for images.example.com get passed to 192.168.0.42

This should happen regardless of what cache.example.com and
images.example.com resolve to internally.

Thanks,

R.



[squid-users] url rewriting with squid 2.6

2006-08-23 Thread Travis Derouin

Hi,

We have been doing url rewriting with Squid 2.5 with success so far,
but we're having issues getting it to work on 2.6. We have a few
hostnames (wikihow.net, wikihow.org) that we would like 301 redirected
to wikihow.com, and we have 2 back-end apache servers. I've played
around with forceddomain for the cache_peer settings, turning it on
and off and it doesn't seem to do anything for our situation.

I've also copied over our previously working redirector script and set
it up as url_rewrite_program, and it's not being called (I verified
this by putting some logging statements in redirect.pl and nothing is
being written to the log, although I can see it running when I do a ps
-aux).

here are some settings we've been using:

url_rewrite_program /usr/local/squid2.6/sbin/redirect.pl

http_port 80 defaultsite=www.wikihow.com
#http_port 80
#cache_peer 10.234.169.204 parent 80 0 no-query originserver round-robin forceddomain=www.wikihow.com
#cache_peer 10.234.169.201 parent 80 0 no-query originserver round-robin forceddomain=www.wikihow.com
cache_peer 10.234.169.204 parent 80 0 no-query originserver round-robin
cache_peer 10.234.169.201 parent 80 0 no-query originserver round-robin

acl port80 port 80
acl mysites dstdomain www.wikihow.com
http_access allow mysites port80
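For reference, the interface a url_rewrite_program must speak is simple: Squid
writes one request per line on the helper's stdin ("URL client_ip/fqdn ident
method") and expects one reply line back, either a replacement URL or a blank
line for "no change". Below is a hypothetical Python stand-in for redirect.pl,
a sketch only; the "301:" reply prefix (which asks Squid to send a real HTTP
redirect instead of silently rewriting) is my recollection of the 2.6
redirector syntax, so verify it against your version:

```python
import sys

def rewrite(url):
    """Redirect the alternate hostnames to the canonical one."""
    for old in ("http://wikihow.net", "http://wikihow.org",
                "http://www.wikihow.net", "http://www.wikihow.org"):
        if url.startswith(old):
            return "301:http://www.wikihow.com" + url[len(old):]
    return ""  # blank reply means "leave the URL alone"

def helper_loop():
    # Squid keeps the helper running for its lifetime and feeds it
    # one request per line; it drives this loop, not us.
    for line in sys.stdin:
        fields = line.split()
        url = fields[0] if fields else ""
        sys.stdout.write(rewrite(url) + "\n")
        sys.stdout.flush()  # the reply must not sit in a buffer

print(rewrite("http://wikihow.net/Main-Page"))  # 301:http://www.wikihow.com/Main-Page
```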

Any suggestions?
Travis


[squid-users] squid 2.6 refresh changes?

2006-08-23 Thread Dan Thomson

Hey all,

I've upgraded a squid server from 2.5.9 (debian stable squid) to 2.6
(stable3) and I've been noticing what seems to be a change in
behaviour for refreshes.

Some objects that were getting REFRESH_HITs are now getting
REFRESH_MISSes. Was there a change in logic (or possibly notation) for
refreshes between 2.5 and 2.6? If so, is there anywhere I can read
about this?

I guess I should also mention that I'm basically trying to figure out
why there seems to be more traffic going back to the origin with 2.6
than there used to be with 2.5, despite a similar squid configuration.

I should also mention that it seems like you developers did a great
job with 2.6 overall ;)

Thanks.
--
Dan Thomson
Systems Engineer
Peer1 Network
1600 555 West Hastings
Vancouver, BC
V6B 4N5
866-683-7747
http://www.peer1.com


Re: [squid-users] Squid sometimes failed on boot but it work

2006-08-23 Thread Chris Robertson

Paolo De Marco wrote:

Hi all.
I have a RedHat 9 box with Squid Cache version 2.5.STABLE1 for
i386-redhat-linux-gnu, and it starts at machine boot.

Sometimes Squid fails to start at the machine's boot. In the messages log I see:

squid: Starting squid:
squid[1886]: Squid Parent: child process 1889 started
squid: .
last message repeated 19 times
squid:
rc: Starting squid:  failed

but it works fine! Squid accepts all requests. In cache.log there are
no errors...

I have 8 machines with the same problem. Any idea?

My guess would be that your init script starts Squid (successfully) and 
looks for a pid file to verify that Squid is running (printing a period 
every second for 20 seconds while waiting).  I would further speculate 
that the pid file is either being created in a location other than the 
one that the init script expects it (likely /var/run/squid.pid), or is 
not created due to a permissions issue.  Not having that pid file to 
fall back on will likely prevent your init script from properly stopping 
squid (or reloading it for that matter).  But this is all just speculation.


Chris


Re: [squid-users] Java, proxy.pac, and squid

2006-08-23 Thread Chris Robertson

Michael W. Lucas wrote:

Hi,

I'm not sure this is even related to Squid, but it could be and I need
to double-check everything.  I'm using Squid 2.5S13 on RHEL ESR4.

We need to access a Web site that launches a Java-based file transfer
client.

If I configure the client browser manually, by entering
proxy.us.add:8080 (.add is our private internal domain), the applet
works.

If I use the following proxy.pac to autoconfigure, however, it doesn't
work:

function FindProxyForURL(url, host)
{
    // variable strings to return
    var proxy_yes = "PROXY proxy.us.add:8080";
    var proxy_no = "DIRECT";

    return proxy_yes;
}

To my eye it seems that the browser should be sending all requests to
Squid, no matter what, in either case.  access.log seems to indicate
that all the requests are traversing Squid.

So, either Squid handles cases differently or the browser isn't
actually sending all the requests to the proxy.  I'll happily track
down the latter elsewhere, but also need to check: does Squid handle
these cases differently?

Thanks,
==ml

  
There is no difference in the request sent to Squid between explicitly
entering the proxy settings and supplying them via a proxy.pac.


I'd presume that the browser is not passing the proxy setting from the 
PAC file to the Java applet.


Chris


Re: [squid-users] tweaking squid values

2006-08-23 Thread Chris Robertson

Roger 'Rocky' Vetterberg wrote:

Hi list.

I've recently been given two spare servers that are to be deployed
as squid caches for a network with about 300 workstations, all heavy
internet users and sharing a 10MBit dedicated internet line.
The idea is to have the squids proxy http and MSN messenger
connections, as well as some ftp restricted to certain sites.

The servers in question are one PIII 1.0GHz with 1G of RAM and 72G
of raid-0 diskspace, and one PIII 1.4GHz with 512M RAM and 36G of
raid-0 diskspace. Both run the latest 6.x version of FreeBSD and
Squid 2.5.14_2.

I have tried to configure them according to what I read in
documentation and FAQ's, but I run into heavy swapping, Unable to
allocate errors or just bad performance. It seems I'm having
problems finding a good balance between performance and stability.

  
Have you read through the genuine Squid FAQ section on memory 
(http://wiki.squid-cache.org/SquidFaq/SquidMemory)?  It's probably a 
good place to start.



Could someone give me some rough figures to use for cache_mem,
cache_dir, L1, L2, Q1 and Q2?
  


Personally, I leave cache_mem at the default value, and trust my OS to 
cache disk accesses.  As for the cache_dir, the consensus seems to be 
not to fill your partition beyond 60% for best performance.



Would I benefit from using diskd, or should I run with normal UFS?
  


Use diskd or aufs.
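
A hedged example of what that might look like in squid.conf (the path, size,
and Q1/Q2 values below are illustrative starting points, not tuned
recommendations):

```
# cache_dir diskd <path> <MB> <L1 dirs> <L2 dirs> [Q1=n] [Q2=n]
cache_dir diskd /cache1 20000 16 256 Q1=64 Q2=72
```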


Both servers are armed with hardware raid on 15k drives, so the disk
I/O should be pretty decent.
  


Just as long as you are aware that RAID 0 really doesn't have the "R",
and, as far as I am aware, most RAID controllers only show a significant
performance increase when calculating parity (RAID 3-6), you might be
better off using one spindle for the OS and creating separate cache_dirs
on each of the other spindles.  That way a specific disk has to die for
your proxy server to go down.



TIA
--
R

  

Chris


Re: [squid-users] tweaking squid values

2006-08-23 Thread Roger 'Rocky' Vetterberg

Chris Robertson wrote:

Roger 'Rocky' Vetterberg wrote:

Hi list.

[snip]

I have tried to configure them according to what I read in
documentation and FAQ's, but I run into heavy swapping, Unable to
allocate errors or just bad performance. It seems I'm having
problems finding a good balance between performance and stability.

Have you read through the genuine Squid FAQ section on memory 
(http://wiki.squid-cache.org/SquidFaq/SquidMemory)?  It's probably a 
good place to start.


Yes, several times. It explains in great detail what certain errors mean 
and how to see how much memory is used for what, but does not give much 
help when it comes to calculating what values to use.



Could someone give me some rough figures to use for cache_mem,
cache_dir, L1, L2, Q1 and Q2?
  
Personally, I leave cache_mem at the default value, and trust my OS to 
cache disk accesses.  As for the cache_dir, the consensus seems to be 
not to fill your partition beyond 60% for best performance.



Would I benefit from using diskd, or should I run with normal UFS?
   

Use diskd or aufs.


Both servers are armed with hardware raid on 15k drives, so the disk
I/O should be pretty decent.
   
Just as long as you are aware that RAID 0 really doesn't have the "R",
and, as far as I am aware, most RAID controllers only show a significant
performance increase when calculating parity (RAID 3-6), you might be
better off using one spindle for the OS and creating separate cache_dirs
on each of the other spindles.  That way a specific disk has to die for
your proxy server to go down.


I'm not really worried about a server going down. The reason I have two
servers is to be fully redundant. I'm planning on implementing CARP and a
round-robin DNS entry as soon as I have both servers tweaked.


--
R


Re: [squid-users] tweaking squid values

2006-08-23 Thread Chris Robertson

Roger 'Rocky' Vetterberg wrote:

Chris Robertson wrote:

Roger 'Rocky' Vetterberg wrote:

Hi list.

[snip]

I have tried to configure them according to what I read in
documentation and FAQ's, but I run into heavy swapping, Unable to
allocate errors or just bad performance. It seems I'm having
problems finding a good balance between performance and stability.

Have you read through the genuine Squid FAQ section on memory 
(http://wiki.squid-cache.org/SquidFaq/SquidMemory)?  It's probably a 
good place to start.


Yes, several times. It explains in great detail what certain errors 
mean and how to see how much memory is used for what, but does not 
give much help when it comes to calculating what values to use.



http://wiki.squid-cache.org/SquidFaq/SquidMemory#head-09818ad4cb8a1dfea1f51688c41bdf4b79a69991

And I quote:

As a rule of thumb, Squid uses approximately 10 MB of RAM per GB of
the total of all cache_dirs (more on 64-bit servers such as Alpha),
plus your cache_mem setting and about an additional 10-20MB. It is
recommended to have at least twice this amount of physical RAM
available on your Squid server.


So if you have 512 MB of RAM, use 256 explicitly for Squid.  Using the 
default of 8MB for cache_mem, this gives you the freedom to use up to 
around 23GB of cache_dir space ((256 - 8 - 20)/10MB * 1GB).  Tweak to fit.
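
That arithmetic generalizes into a quick back-of-the-envelope helper (a
sketch; the constants come straight from the FAQ rule of thumb quoted above):

```python
def max_cache_dir_gb(ram_mb, cache_mem_mb=8, overhead_mb=20):
    """Rough upper bound on total cache_dir size in GB, per the FAQ
    rule of thumb: ~10 MB of index RAM per GB of cache_dir, keeping
    at least half of physical RAM free for the OS."""
    squid_budget_mb = ram_mb / 2.0                 # leave half for the OS
    index_budget_mb = squid_budget_mb - cache_mem_mb - overhead_mb
    return index_budget_mb / 10.0                  # 10 MB of RAM per GB on disk

print(max_cache_dir_gb(512))  # roughly 22.8, i.e. the ~23GB figure above
```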


Chris


Re: [squid-users] need some information

2006-08-23 Thread squid learner


--- Chris Robertson [EMAIL PROTECTED] wrote:

 squid learner wrote:
  I am using 4 ISPs, each with a proxy, with round-robin peering.
 
  It is working well with weight=1,
 
  but now I have to add one ISP without a proxy, so how do I
  configure that ISP, with
  never_direct?
 
 If I'm reading this correctly, you have a total of five upstream ISPs,
 four of which have proxies and one of which does not, and you are
 looking to distribute your traffic through all five?
 
 Assuming this is correct, I would set up a second proxy server locally
 that only uses the fifth ISP for Internet access. Use it in the same
 manner as the four ISP-maintained proxies
 (cache_peer proxy5.example.com parent 3128 3130 round-robin).
 
 For what it's worth, the weight option does nothing for round-robin
 groups, and never_direct will not help you in this situation.
 
 Chris
Thanks, I understand.



[squid-users] Not chronological header's Age

2006-08-23 Thread Jose Octavio de Castro Neves Jr

Hey Guys,

I have a question: many times (using a reverse proxy) I get proxy errors
related to a non-chronological Age value in the headers. Do you guys have
any clue about this problem?

Thanxs in advance,

JOC


[squid-users] Help with differents Time on Headers

2006-08-23 Thread Jose Octavio de Castro Neves Jr

Hey Guys,

I would like to understand why the headers show a different time zone
from my server's. It is possible to see that on my server it is
20:00:02, but the headers say:
Date: Wed, 23 Aug 2006 22:59:16 GMT

And my expire is showing:
Expires: Wed, 23 Aug 2006 22:58:09 GMT

How is that possible? When I run "date" on my server it shows: Wed Aug 23
20:04:54 BRST 2006

Somehow I'm getting a GMT date. Could it be the date of the proxy that
redirects the request to my server?

I'm sending the whole wget debug output of the request to help with the debugging.

DEBUG output created by Wget 1.9.1 on linux-gnu.

--20:00:02--  http://idgnow.uol.com.br/
  => `index.html.132'
Resolving idgnow.uol.com.br... 200.221.9.49, 200.221.9.50, 200.221.9.52, ...
Caching idgnow.uol.com.br = 200.221.9.49 200.221.9.50 200.221.9.52
200.221.9.58 200.221.9.60 200.221.9.15 200.221.9.39 200.221.9.43
200.221.9.44 200.221.9.45 200.221.9.46
Connecting to idgnow.uol.com.br[200.221.9.49]:80... connected.
Created socket 3.
Releasing 0x80a5b18 (new refcount 1).
---request begin---
GET / HTTP/1.0
User-Agent: Wget/1.9.1
Host: idgnow.uol.com.br
Accept: */*
Connection: Keep-Alive

---request end---
HTTP request sent, awaiting response... HTTP/1.1 200 OK
Date: Wed, 23 Aug 2006 22:59:16 GMT
Server: Zope/(Zope 2.7.6-final, python 2.3.5, linux2) ZServer/1.1 Plone/2.0.5
Content-Length: 81491
Content-Language: None
X-Cache-Headers-Set-By: CachingPolicyManager: /IDGNews/caching_policy_manager
Expires: Wed, 23 Aug 2006 22:58:09 GMT
Cache-Control: max-age=0, s-maxage=300
Content-Type: text/html;charset=utf-8
Age: 113
X-Cache: HIT from idg-app01-e
X-Cache-Lookup: HIT from idg-app01-e:8080
Vary: Accept-Encoding,User-Agent
Connection: close


Length: 81,491 [text/html]

100%[] 81,491--.--K/s

Closing fd 3
20:00:02 (2.76 MB/s) - `index.html.132' saved [81491/81491]
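
As a side note on those headers: the response is a cache HIT, and with
Cache-Control: s-maxage=300 and Age: 113 its remaining shared-cache freshness
is easy to compute (a small sketch using the numbers from the trace above):

```python
def freshness_remaining(s_maxage, age):
    """Seconds of shared-cache freshness left for a response carrying
    Cache-Control: s-maxage=<n> and an Age: <n> header."""
    return max(s_maxage - age, 0)

# Values from the wget trace above: s-maxage=300, Age: 113
print(freshness_remaining(300, 113))  # 187 seconds until revalidation
```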


Thanxs,

JOC


Re: [squid-users] Help with differents Time on Headers

2006-08-23 Thread Chris Robertson

Jose Octavio de Castro Neves Jr wrote:

Hey Guys,

I would like to understand why the headers show a different time zone
from my server's. It is possible to see that on my server it is
20:00:02, but the headers say:
Date: Wed, 23 Aug 2006 22:59:16 GMT



According to RFC1945 (http://www.w3.org/Protocols/rfc1945/rfc1945):

"All HTTP/1.0 date/time stamps must be represented in Universal Time
(UT), also known as Greenwich Mean Time (GMT), without exception."



And my expire is showing:
Expires: Wed, 23 Aug 2006 22:58:09 GMT

How is that possible? When I run "date" on my server it shows: Wed Aug 23
20:04:54 BRST 2006

Somehow I'm getting a GMT date. Could it be the date of the proxy that
redirects the request to my server?

I'm sending the whole wget debug on request to help with the debugging.



SNIP


Thanxs,

JOC


Chris


[squid-users] COSS losing cached data after squid restart?

2006-08-23 Thread Monty Ree

Hello, list.

I have read the book titled "Squid: The Definitive Guide".
It says this:

"coss doesn't support rebuilding cached data from disk well.
When you restart squid, you might find that it fails to read the coss
swap.state files, thus losing any cached data."


I expect I will have to change or add cache_peer or cache_peer_domain
entries once or twice per day for some reason, which means restarting
Squid. Is COSS then not suitable for me?



Thanks in advance.




Re: [squid-users] COSS losing cached data after squid restart?

2006-08-23 Thread Adrian Chadd
On Thu, Aug 24, 2006, Monty Ree wrote:
 Hello, list.
 
 I have read the book titled "Squid: The Definitive Guide".
 It says this:
 
 "coss doesn't support rebuilding cached data from disk well.
 When you restart squid, you might find that it fails to read the coss
 swap.state files, thus losing any cached data."

A lot has changed in the COSS codebase between the time the book was
written and now. The COSS code will now read the entire filestore
and pull out the objects it finds during a Squid restart. It'll take
a while but it'll happen.



Adrian