On 08/01/11 15:36, p...@mail.nsbeta.info wrote:
Amos Jeffries writes:
Either
1) - 4 separate squid instances, each with its own completely
independent cache? (no cache hierarchy or something like that)
or
2) - a configuration similar to [1] with frontend and backend instances?
Does anybody have experiences with either of these two configurations?
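For context, a rough sketch of what option (2) might look like, following the frontend/backend split referenced as [1] (the ports, the CARP balancing choice, and the loopback layout are assumptions, not a tested configuration):

```
# frontend.conf sketch: client-facing instance, no disk cache of its own
http_port 3128
cache_peer 127.0.0.1 parent 4001 0 carp no-query no-digest proxy-only
cache_peer 127.0.0.1 parent 4002 0 carp no-query no-digest proxy-only

# backend-N.conf sketch: one disk cache each, listening only on loopback
http_port 127.0.0.1:4001
cache_dir aufs /var/spool/squid/cache1 100000 64 256
```

The idea is that the frontend multiplexes clients while each backend owns one cache_dir, so disk I/O is spread across processes.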
On 28/12/10 03:23, Seok Jiwoo wrote:
Thanks for the reply~ ^^
I reinstalled [squid 3.0] and I set 'squid.conf' as below.
o visible_hostname localhost
o http_access allow locahost
o http_port 3128
o cache_dir ufs /var/spool/squid/cache 100 16 256 (as default)
but still the squid makes web-browse
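For comparison, a minimal squid.conf sketch along those lines that should work on a stock squid 3.0 (note that the quoted http_access line misspells localhost as "locahost", and that 3.0 has no built-in localhost ACL, so it must be defined before it is used; the paths are the defaults quoted above):

```
acl localhost src 127.0.0.1/32
visible_hostname localhost
http_port 3128
cache_dir ufs /var/spool/squid/cache 100 16 256
http_access allow localhost
http_access deny all
```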
On 08/01/11 11:36, mpnordland wrote:
so what should I do so that I can give the browser the wpad file and
still have port 80 exclusively for squid?
That is all documented at the earlier referred page
http://wiki.squid-cache.org/SquidFaq/ConfiguringBrowsers
Amos
--
Please be using
Current Stable Squid 2.7.STABLE9 or 3.1.10
2011/1/8 Amos Jeffries :
> On 08/01/11 06:22, Drunkard Zhang wrote:
>>
>> 2011/1/8 Mohsen Saeedi:
>>>
>>> I know about coss. it's great. but i have squid 3.1 and i think it's
>>> unstable in 3.x version. that's correct?
>>
>> I need "null" for memory-only cache, which is not provided in squid-3,
>>
On 29/12/10 15:45, Colin Coe wrote:
Hi all
I'm currently redesigning an environment I've inherited. The
environment consists of two main sites with limited replication. Each
site is effectively a replica of the other.
The environment was built with four proxy servers at each site (so
eight in
On 08/01/11 08:57, John Craws wrote:
Hi,
I originally posted this on December 14th, but did not get any reply.
Maybe someone will be able to help this time.
Can you ensure you are using a build with valgrind support and please
run these tests with SQUID1 started inside valgrind. That will
h
On 08/01/11 07:56, guest01 wrote:
Hi guys,
I am using a couple of squid instances per server (Squid 3.1.0, RHEL
5.5, lots of RAM) and was wondering which would be the better
configuration? Better in that case means more performance.
Either
1) - 4 separate squid instances, each with its own comp
On 08/01/11 06:22, Drunkard Zhang wrote:
2011/1/8 Mohsen Saeedi:
I know about coss. it's great. but i have squid 3.1 and i think it's
unstable in 3.x version. that's correct?
I need "null" for memory-only cache, which is not provided in squid-3,
so it's all squid-2.x in our production environment.
so what should I do so that I can give the browser the wpad file and
still have port 80 exclusively for squid?
Hi Nick,
Can you look at the memory usage of the helper? I am aware of some
underlying Kerberos library memory leaks. The best way to find out where the
leak is would be to use valgrind, e.g. ./squid_kerb_auth_test proxy 1 |
valgrind --log-file=squid_kerb_auth_test-1.val --leak-check=full --show-reachable=yes
Hi,
I originally posted this on December 14th, but did not get any reply.
Maybe someone will be able to help this time.
Thanks,
--
I am seeing a level of memory consumption that I do not understand
from a squid instance configured to use a single cache_peer over
multicast ICP.
Please disregard
Hi again,
2011/1/7 Robert Pipca :
> This assertion failure occurs on all of them. So, Mr. Henrik, can you
> help hunt down this bug in squid like Mr. Jeffries suggested?
While trying to track down the bug, I put a lot of debug (1,1) ("shit
happened = %d\n", __LINE__); calls in all possible code-paths of
Hi Mr. Jeffries,
2011/1/7 Amos Jeffries :
> On 08/01/11 00:44, Robert Pipca wrote:
>>
>> Hi Mr. Jeffries,
>>
>> 2011/1/7 Robert Pipca:
>>>
>>> 2011/1/7 Amos Jeffries:
Your config shows ~69 GB of small files. Each cache_dir has a maximum
count
of 2^31 files. It looks like that f
Hi guys,
I am using a couple of squid instances per server (Squid 3.1.0, RHEL
5.5, lots of RAM) and was wondering which would be the better
configuration? Better in that case means more performance.
Either
1) - 4 separate squid instances, each with its own completely
independent cache? (no cache
2011/1/8 Mohsen Saeedi :
> I know about coss. it's great. but i have squid 3.1 and i think it's
> unstable in 3.x version. that's correct?
I need "null" for memory-only cache, which is not provided in squid-3,
so it's all squid-2.x in our production environment.
Of course, we tested every squid-3.x, many
On 08/01/11 05:28, mpnordland wrote:
I read about having squid give out the wpad.dat itself, is that
possible? all the howtos that I've read have a webserver or dhcp server
doing stuff on port 80 so that the browser can autoconfigure, but, I am
going to block port 80 to everyone but the squid use
OS: CentOS 5.5, kernel 2.6.18-194.26.1.el5
Squid 2.6.STABLE21 (from repo, with --enable-wccpv2 options)
Cisco 7201 (Cisco IOS Software, 7200 Software (C7200P-IK91S-M),
Version 12.2(31)SB17, RELEASE SOFTWARE (fc1), image file
c7200p-ik91s-mz.122-31.SB17.bin)
I cannot configure a transparent proxy.
I here
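As a starting point, a hedged squid.conf sketch of the WCCPv2 directives squid 2.6 ships with (the router address is a placeholder; method 1 selects GRE encapsulation, and the matching GRE tunnel plus the iptables REDIRECT rule on the squid box are not shown):

```
http_port 3128 transparent
wccp2_router 192.0.2.1
wccp2_forwarding_method 1
wccp2_return_method 1
wccp2_service standard 0
```

The "standard 0" service registers for plain HTTP redirection on the Cisco side.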
I know about coss. it's great. but i have squid 3.1 and i think it's
unstable in 3.x version. that's correct?
On Fri, Jan 7, 2011 at 8:05 PM, Drunkard Zhang wrote:
> 2011/1/8 Mohsen Saeedi :
>> and now which filesystem has better performance. aufs or diskd? on the
>> SAS hdd for example.
>
> Neit
2011/1/8 Mohsen Saeedi :
> and now which filesystem has better performance. aufs or diskd? on the
> SAS hdd for example.
Neither of them, we are using coss on SATA. And coss on SSD is under
testing, looks good still.
> On Fri, Jan 7, 2011 at 7:56 PM, Drunkard Zhang wrote:
>>
>> 2011/1/7 Amos Jef
and now which filesystem has better performance. aufs or diskd? on the
SAS hdd for example.
On Fri, Jan 7, 2011 at 7:56 PM, Drunkard Zhang wrote:
>
> 2011/1/7 Amos Jeffries :
> > On 07/01/11 19:08, Drunkard Zhang wrote:
> >>
> >> In order to get squid server 400M+ traffic, I did these:
> >> 1. Me
I read about having squid give out the wpad.dat itself, is that
possible? all the howtos that I've read have a webserver or dhcp server
doing stuff on port 80 so that the browser can autoconfigure, but, I am
going to block port 80 to everyone but the squid user. Second, I just
use 127.0.0.1:312
2011/1/7 Amos Jeffries :
> On 07/01/11 19:08, Drunkard Zhang wrote:
>>
>> In order to get squid server 400M+ traffic, I did these:
>> 1. Memory only
>> IO bottleneck is too hard to avoid at high traffic, so I did not use
>> harddisk, use only memory for HTTP cache. 32GB or 64GB memory per box
>> wo
Dear Hasanen
Which setting is better for it? Can you give me some help?
On Fri, Jan 7, 2011 at 7:36 PM, Hasanen AL-Bana wrote:
> This will cause a bigger problem: if a user downloads a file with a
> download manager, let's say in 4 segments, squid will start 4
> download threads for the same
This will cause a bigger problem: if a user downloads a file with a
download manager, let's say in 4 segments, squid will start 4
download threads for the same file, each one from its beginning, and will
consume 4 times the bandwidth really needed.
On Fri, Jan 7, 2011 at 6:49 PM, Amos Jeffries wrote:
Amos, I set these:
quick_abort_min -1
quick_abort_max -1
range_offset_limit -1
but when clients are downloading, it's very slow, about 14KB/s,
though normally they can fill 256KB/s.
Is anything wrong?
On Fri, Jan 7, 2011 at 7:25 PM, Mohsen Saeedi wrote:
> Do you mean range_offset_limit -1 and quic
Do you mean range_offset_limit -1 and quick_abort_* 0 ?
Is that right?
On Fri, Jan 7, 2011 at 7:19 PM, Amos Jeffries wrote:
> On 08/01/11 03:51, Mohsen Saeedi wrote:
>>
>> Thanks.
>> But which value should be set for quick_abort? I don't know the relation
>> between quick_abort and download managers.
>
On 08/01/11 03:51, Mohsen Saeedi wrote:
Thanks.
But which value should be set for quick_abort? I don't know the relation
between quick_abort and download managers.
Can you explain it or give me some useful links?
Abort needs to be disabled. Range offset needs to force full-download.
now i'm readin
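In squid.conf terms (2.7/3.1 directive names), the combination Amos describes looks something like:

```
# -1 disables the quick abort feature: keep fetching even if the client quits
quick_abort_min -1 KB
# fetch ranged requests from the start of the object so the whole file is cached
range_offset_limit -1
```

Note the caveat raised elsewhere in the thread: with these settings a multi-segment download manager can trigger several full fetches of the same object.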
On 07/01/11 19:40, Drunkard Zhang wrote:
My configuration:
cache_dir coss /mnt/c/72 10240 max-size=524288 max-stripe-waste=32768
block-size=4096 maxfullbufs=10
cache_swap_log /mnt/s/%s
/mnt/c/72 is a file on btrfs + SSD. The btrfs is created by:
"mkfs.btrfs /dev/sdb1 /dev/sdc1", so it will spa
On 07/01/11 23:29, Artemis BRAJA wrote:
As suggested, I upgraded to 3.2.0.4 with --disable-cpu-profiling option
but I'm still getting TCP_MISS/200 on both backends and I can't see any
TCP_HIT.
Artemis
CD_PARENT_HIT is a HIT instead of TCP_HIT. Though it indicates a
cache-digest hit and I don'
On 08/01/11 04:09, mpnordland wrote:
alright, that makes sense, so, now, how do I serve up
WPAD to the browsers?
DHCP or DNS. http://wiki.squid-cache.org/SquidFaq/ConfiguringBrowsers
Amos
--
Please be using
Current Stable Squid 2.7.STABLE9 or 3.1.10
Beta testers wanted for 3.2.0.4
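For illustration, the two delivery options mentioned above can be sketched like this (the hostname, address, and the dnsmasq syntax are assumptions; the wiki page has the authoritative details):

```
# DHCP: option 252 hands browsers the wpad.dat URL (dnsmasq syntax)
dhcp-option=252,"http://wpad.example.net/wpad.dat"

# DNS: an A record named "wpad" in the clients' search domain,
# pointing at a web server that serves /wpad.dat:
#   wpad.example.net.  IN  A  192.0.2.10
```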
On 07/01/11 19:08, Drunkard Zhang wrote:
In order to get squid server 400M+ traffic, I did these:
1. Memory only
IO bottleneck is too hard to avoid at high traffic, so I did not use
harddisk, use only memory for HTTP cache. 32GB or 64GB memory per box
works good.
NP: The problem in squid-2 is l
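For reference, a sketch of a memory-only setup of the kind described above (the sizes are illustrative for a 32GB box; in squid-2.x the "null" store type is what disables disk caching, which is the feature the poster misses in squid-3):

```
# squid-2.x memory-only sketch: the null store disables disk caching
cache_dir null /tmp
cache_mem 24576 MB
maximum_object_size_in_memory 4096 KB
memory_replacement_policy heap GDSF
```

In squid 3.1 simply omitting cache_dir is meant to achieve the same effect, since 3.1 no longer creates a disk cache by default.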
>What does "squid -v" report as the version? we don't have a 3.20 release
>yet.
Sorry - 3.0.STABLE24
alright, that makes sense, so, now, how do I serve up
WPAD to the browsers?
On 08/01/11 04:01, mpnordland wrote:
On 01/06/2011 11:27 PM, Amos Jeffries wrote:
On 07/01/11 15:54, mpnordland wrote:
The tricky thing is that this is all on one computer; squid is a
proxy for the computer it is installed on. The idea of it all is to
track the urls that the users visit. Au
On 07/01/11 23:26, Harald Dunkel wrote:
Hi Amos,
On 01/07/11 06:22, Amos Jeffries wrote:
On 05/01/11 02:09, Harald Dunkel wrote:
Hi folks,
I've got an OpenBSD gateway (including NAT) redirecting HTTP
traffic to a dedicated internal Linux host running Squid 3.1.9.
Problem: I see tons of messag
On 01/06/2011 11:27 PM, Amos Jeffries wrote:
On 07/01/11 15:54, mpnordland wrote:
The tricky thing is that this is all on one computer; squid is a
proxy for the computer it is installed on. The idea of it all is to
track the urls that the users visit. Authentication is necessary so that
one
Thanks.
But which value should be set for quick_abort? I don't know the relation
between quick_abort and download managers.
Can you explain it or give me some useful links?
now i'm reading about range_offset_limit.
> On Fri, Jan 7, 2011 at 6:11 PM, Amos Jeffries wrote:
>>
>> On 08/01/11 02:20, Mohsen
On 08/01/11 02:20, Mohsen Saeedi wrote:
Dear Amos
I'm Mohsen Saeedi, the translator of squid for the Persian language. We
have chatted with each other on squid IRC many times already.
I want to cache some files that are being downloaded with download managers
(for example IDMan). As you know, download manager
On 08/01/11 02:06, Tim Huckle wrote:
Hi,
I'm caching for a short period of time a large number of delayed
stock market data web service requests which ordinarily were being
requested from the upstream data provider every time by the app
layer. I have various refresh_patterns stipulated for the
On 08/01/11 01:37, Nick Cairncross wrote:
Hi List,
From time to time my users experience constant unsatisfiable prompts from
squid. Cache.log reports:
2011/01/07 12:04:53| authenticateNegotiateHandleReply: Error validating user
via Negotiate. Error returned 'BH gss_acquire_cred() failed: Uns
On 08/01/11 00:44, Robert Pipca wrote:
Hi Mr. Jeffries,
2011/1/7 Robert Pipca:
2011/1/7 Amos Jeffries:
Your config shows ~69 GB of small files. Each cache_dir has a maximum count
of 2^31 files. It looks like that file count is being exceeded and the
overflow handling is broken.
I looked at o
Dear Amos
I'm Mohsen Saeedi, the translator of squid for the Persian language. We
have chatted with each other on squid IRC many times already.
I want to cache some files that are being downloaded with download managers
(for example IDMan). As you know, download managers split a file into
multiple parts to accelerate
Hi,
I'm caching for a short period of time a large number of delayed stock market
data web service requests which ordinarily were being requested from the
upstream data provider every time by the app layer. I have various
refresh_patterns stipulated for the different types of requests, i.e. da
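As an illustration of that approach, a refresh_pattern for such requests might look like this (the URL regex and the 5-minute window are hypothetical, not taken from the poster's config):

```
# cache matching data-service replies for ~5 minutes despite origin headers
refresh_pattern -i /delayed-quotes/ 5 20% 5 override-expire ignore-reload
# everything else falls through to the usual defaults
refresh_pattern . 0 20% 4320
```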
Hi List,
From time to time my users experience constant unsatisfiable prompts from
squid. Cache.log reports:
2011/01/07 12:04:53| authenticateNegotiateHandleReply: Error validating user
via Negotiate. Error returned 'BH gss_acquire_cred() failed: Unspecified GSS
failure. Minor code may prov
Hi Mr. Jeffries,
2011/1/7 Robert Pipca :
> 2011/1/7 Amos Jeffries :
>> Your config shows ~69 GB of small files. Each cache_dir has a maximum count
>> of 2^31 files. It looks like that file count is being exceeded and the
>> overflow handling is broken.
I looked at one server showing this bug, and
Hi Mr. Jeffries,
2011/1/7 Amos Jeffries :
> Your config shows ~69 GB of small files. Each cache_dir has a maximum count
> of 2^31 files. It looks like that file count is being exceeded and the
> overflow handling is broken.
Right, can I help fixing it?
Or should I decrease the cache-size of each
I see.
Thank you
Markus
"Amos Jeffries" wrote in message
news:4d26bbcd.1050...@treenet.co.nz...
On 06/01/11 03:17, Markus Moeller wrote:
Hi,
When should I expect to see a Proxy-Authentication-Info header ? I
noticed that when I use Kerberos authentication with squid_kerb_auth on
Version 3.0
On 07/01/11 23:00, Mohsen Saeedi wrote:
Hello all
I have had a lot of experience with the squid caching server over 8 years,
but I have a question. I'm now using squid 3.1 on RHEL with 100Mbit/s of
bandwidth for a large university. Its performance is great, but how can
I cache some content that is being downloaded wit
On 07/01/11 21:32, Tóth Tibor Péter wrote:
Hi Amos!
Thanks for the reply.
So how would this look like?
acl apache http_reply_access "apache"
acl apache http_reply_access "Apache"
http_access deny apache
Thanks,
Tibby
Methinks you need a bit more reading on how Squid works.
These should get
As suggested, I upgraded to 3.2.0.4 with --disable-cpu-profiling option
but I'm still getting TCP_MISS/200 on both backends and I can't see any
TCP_HIT.
Artemis
On 01/07/2011 05:36 AM, Amos Jeffries wrote:
On 06/01/11 04:54, Artemis BRAJA wrote:
Hello everyone!
I recently upgraded to squid
Hi Amos,
On 01/07/11 06:22, Amos Jeffries wrote:
> On 05/01/11 02:09, Harald Dunkel wrote:
>> Hi folks,
>>
>> I've got an OpenBSD gateway (including NAT) redirecting HTTP
>> traffic to a dedicated internal Linux host running Squid 3.1.9.
>> Problem: I see tons of messages in cache.log
>>
>> :
>> 2
Hello all
I have had a lot of experience with the squid caching server over 8 years,
but I have a question. I'm now using squid 3.1 on RHEL with 100Mbit/s of
bandwidth for a large university. Its performance is great, but how can
I cache some content that is being downloaded with some download manager
such as IDman or s
Thank you Amos, I solved the problem. First I applied the
use-storeurl patch to squid 2.7.STABLE9; it adds the use-storeurl option
to cache_peer.
Then I had to use the same storeurl rewriter script on both peers, and
it worked with ICP; I am getting internally written URLs as
parent_hit now, which is
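The storeurl rewriter mentioned above is just a line-based stdin/stdout helper; here is a hedged sketch of the rewriting step (the CDN host pattern and the .squidinternal namespace are illustrative, not the poster's actual script):

```shell
# rewrite one URL: collapse numbered CDN mirrors onto a single store key
# (hypothetical pattern; squid 2.7 feeds "URL [extras]" lines on stdin and
# expects the rewritten URL, or an empty line for "no change", per request)
rewrite_url() {
  printf '%s\n' "$1" |
    sed -E 's#^http://cdn[0-9]+\.example\.com/#http://cdn.example.com.squidinternal/#'
}

# a real helper would loop forever, one request per line:
#   while read -r url rest; do rewrite_url "$url"; done
```

Running the same script on both peers, as described above, keeps the store keys identical so the ICP queries between them can match.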
Hi Amos!
Thanks for the reply.
So how would this look like?
acl apache http_reply_access "apache"
acl apache http_reply_access "Apache"
http_access deny apache
Thanks,
Tibby
-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Friday, January 07, 2011 5:51 AM
To: