Hi
We have some experience with cluster configuration.
https://wiki.articatech.com/en/proxy-service/hacluster
Using Kubernetes for Squid with 40K users is a very "risky adventure".
Squid requires very high disk performance (I/O), which means both a
good hard disk drive and a decent
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Squid performance recommendation
Hi squid community,
I need to find the best and most sustainable way to build a stable High
Availability Squid cluster/solution for about 40k users.
Parameters: I need HA, caching (small objects only, not like big w
On 20/09/2022 20:52, Pintér Szabolcs wrote:
Hi squid community,
I need to find the best and most sustainable way to build a stable High
Availability Squid cluster/solution for about 40k users.
Parameters: I need HA, caching (small objects only, not like big Windows
updates), scaling (It is just
On 21/09/22 07:52, Pintér Szabolcs wrote:
Hi squid community,
I need to find the best and most sustainable way to build a stable High
Availability Squid cluster/solution for about 40k users.
Number of users is of low relevance to Squid. What matters is the rate
of requests they are sending to
Hi squid community,
I need to find the best and most sustainable way to build a stable High
Availability Squid cluster/solution for about 40k users.
Parameters: I need HA, caching (small objects only, not like big Windows
updates), scaling (It is just secondly), and I want to use and modify (in
ut it:
>
> systemctl daemon-reload
>
> and then restart squid.
>
> systemctl restart squid
>
> Eliezer
>
> From: NgTech LTD
> Sent: Tuesday, August 31, 2021 6:11 PM
> To: Marcio B.
> Cc: Squid Users
Subject: Re: [squid-users] Squid performance issues
Hey Marcio,
You will need to add a systemd service file that extends the current one with
more FileDescriptors.
I cannot guide you right now; I hope to be able to write more later.
If anyone is able to help sooner, go ahead.
Eliezer
On Tue, 31 Aug
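Eliezer's fix (quoted in full further down the thread) amounts to a systemd drop-in raising the service's file-descriptor limit. A minimal sketch, assuming the stock squid.service unit; the drop-in file name and the 65535 value are illustrative, not from the thread:

```
# /etc/systemd/system/squid.service.d/limits.conf  (hypothetical drop-in name)
[Service]
LimitNOFILE=65535
```

After saving the drop-in, run `systemctl daemon-reload` and then `systemctl restart squid`, as described above.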
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf Of
NgTech LTD
Sent: Tuesday, 31 August 2021 17:11
To: Marcio B.
Cc: Squid Users
Subject: Re: [squid-users] Squid performance issues
Hey Marcio,
You will need to add a systemd service file that extends
Look at your cache.log after Squid has started; there you can see how
many file descriptors are available:
2021/08/31 17:14:36.870 kid1| With 1024 file descriptors available
Maybe there is a file like /etc/default/squid:
SQUID_MAXFD=1024
Regards
Klaus
On Tuesday, 31.08.2021 at 18:10
Hey Marcio,
You will need to add a systemd service file that extends the current one
with more FileDescriptors.
I cannot guide you right now; I hope to be able to write more later.
If anyone is able to help sooner, go ahead.
Eliezer
On Tue, 31 Aug 2021, 18:05, Marcio B. wrote:
> Hi,
>
> I
Hi,
I implemented a Squid server, version 4.6, on Debian and tested it for
about 40 days. However, when I put it into production today, Internet
browsing was extremely slow.
In /var/log/syslog I'm getting the following messages:
Aug 31 11:29:19 srvproxy squid[4041]: WARNING! Your cache is running
Looks like this was a false alarm. The test environment I was using had
some random fluctuations in the results. I assumed they averaged out over
multiple test runs. However it looks as if I got some bad rolls of the dice
and they did not.
I now have a modified test which has <1% variation over
"Premature optimization is the root of all evil."
On 13.01.2017 16:10, Stephen Baynes wrote:
Is there a known performance fall off going 3.5.20 → 3.5.23?
I am seeing a 15% to 20% performance drop on my normal download
benchmark and a crude test of uploading shows a few percent slowdown.
Running
Is there a known performance fall off going 3.5.20 → 3.5.23?
I am seeing a 15% to 20% performance drop on my normal download benchmark
and a crude test of uploading shows a few percent slowdown.
Running on a Linux derived from Debian.
Thanks
--
Stephen Baynes
On 4/08/2016 11:55 p.m., brendan kearney wrote:
> At what point does buffer bloat set in? I have a linux router with the
> below sysctl tweaks load balancing with haproxy to 2 squid instances. I
> have 4 x 1Gb interfaces bonded and have bumped the ring buffers on RX and
> TX to 1024 on all
On 08/04/2016 10:08 AM, Heiler Bemerguy wrote:
Sorry Amos, but I've tested with modifying JUST these two sysctl parameters and
the difference is huge.
Without maximum tcp buffers set to 8MB, I got a 110KB/s download speed, and
with a 8MB kernel buffer I got a 9.5MB/s download speed (via
Sorry Amos, but I've tested with modifying JUST these two sysctl
parameters and the difference is huge.
Without maximum tcp buffers set to 8MB, I got a 110KB/s download speed,
and with a 8MB kernel buffer I got a 9.5MB/s download speed (via squid,
of course).
I think it has to do with the
At what point does buffer bloat set in? I have a linux router with the
below sysctl tweaks load balancing with haproxy to 2 squid instances. I
have 4 x 1Gb interfaces bonded and have bumped the ring buffers on RX and
TX to 1024 on all interfaces.
The squid servers run with almost the same
On 4/08/2016 2:32 a.m., Heiler Bemerguy wrote:
>
> I think it doesn't really matter how much squid sets its default buffer.
> The linux kernel will upscale to the maximum set by the third option.
> (and the TCP Window Size will follow that)
>
> net.ipv4.tcp_wmem = 1024 32768 8388608
>
On 08/03/2016 10:27 AM, Amos Jeffries wrote:
On 3/08/2016 9:45 p.m., Marcus Kool wrote:
On 08/03/2016 12:30 AM, Amos Jeffries wrote:
If that's not fast enough, you may also wish to patch in a larger value
for HTTP_REQBUF_SZ in src/defines.h to 64KB with a matching increase to
I think it doesn't really matter how much squid sets its default buffer.
The linux kernel will upscale to the maximum set by the third option.
(and the TCP Window Size will follow that)
net.ipv4.tcp_wmem = 1024 32768 8388608
net.ipv4.tcp_rmem = 1024 32768 8388608
--
Best Regards,
Heiler
On 3/08/2016 9:45 p.m., Marcus Kool wrote:
>
>
> On 08/03/2016 12:30 AM, Amos Jeffries wrote:
>
>
>> If that's not fast enough, you may also wish to patch in a larger value
>> for HTTP_REQBUF_SZ in src/defines.h to 64KB with a matching increase to
>> read_ahead_gap in squid.conf. That has had
On 08/03/2016 12:30 AM, Amos Jeffries wrote:
If that's not fast enough, you may also wish to patch in a larger value
for HTTP_REQBUF_SZ in src/defines.h to 64KB with a matching increase to
read_ahead_gap in squid.conf. That has had some mixed results though,
faster traffic, but also some
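The squid.conf half of that suggestion might look like the following sketch (the HTTP_REQBUF_SZ change itself is a compile-time edit in src/defines.h, not a config directive; the 64 KB value mirrors the one mentioned above):

```
# pair with HTTP_REQBUF_SZ raised to 64KB at build time
read_ahead_gap 64 KB
```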
On 3/08/2016 2:42 p.m., Heiler Bemerguy wrote:
>
> in /etc/sysctl.conf, add:
>
> net.core.rmem_max = 8388608
> net.core.wmem_max = 8388608
> net.core.wmem_default = 32768
> net.core.rmem_default = 32768
> net.ipv4.tcp_wmem = 1024 32768 8388608
> net.ipv4.tcp_rmem = 1024 32768 8388608
>
Please
in /etc/sysctl.conf, add:
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.core.wmem_default = 32768
net.core.rmem_default = 32768
net.ipv4.tcp_wmem = 1024 32768 8388608
net.ipv4.tcp_rmem = 1024 32768 8388608
--
Best Regards,
Heiler Bemerguy
Network Manager - CINBESA
55 91
Hi All,
We've been running Squid for many years. Recently we upgraded our
internet link to a 1Gbps link, but we are finding that squid is not able
to drive this link to its full potential (previous links have been
30Mbps or 100Mbps).
Currently running squid 3.5.1, but have tried 3.4, 3.3, 3.2
I want to share the results with the community on the squid wikis. How
to do that?
We are collecting some ad-hoc benchmark details for Squid releases at
http://wiki.squid-cache.org/KnowledgeBase/Benchmarks. So far this is not
exactly rigorous testing, although following the methodology
On Thu, Jun 20, 2013 at 5:21 PM, Marcus Kool
marcus.k...@urlfilterdb.com wrote:
On 06/20/2013 06:51 AM, Amos Jeffries wrote:
If anyone is interested in very detailed benchmarks, then I can provide
them.
Yes please :-)
PS. could you CC the squid-dev mailing list as well with the
On Fri, Jun 21, 2013 at 10:41 AM, Alex Rousskov
rouss...@measurement-factory.com wrote:
On 06/20/2013 10:47 PM, Ahmed Talha Khan wrote:
On Fri, Jun 21, 2013 at 6:17 AM, Alex Rousskov wrote:
On 06/20/2013 02:00 AM, Ahmed Talha Khan wrote:
My test methodology looks like this
generator(apache
I understand that Amos is eager to get more tests and more results about
the latest enhancements, but as Amos himself also stated earlier, please
use a released version of Squid for testing since the test results for
3.3.x or 3.4.x are interesting for admins of Squid who can consider
On 06/21/2013 04:34 AM, Ahmed Talha Khan wrote:
Then the question becomes why squid is slowing down?
I think there are 2.5 primary reasons for that:
1) Higher concurrency level (c in your tables) means more
waiting/queuing time for each transaction: When [a part of] one
transaction has to
On 21/06/2013 10:34 p.m., Ahmed Talha Khan wrote:
On Fri, Jun 21, 2013 at 10:41 AM, Alex Rousskov
rouss...@measurement-factory.com wrote:
On 06/20/2013 10:47 PM, Ahmed Talha Khan wrote:
On Fri, Jun 21, 2013 at 6:17 AM, Alex Rousskov wrote:
On 06/20/2013 02:00 AM, Ahmed Talha Khan wrote:
My
Hello All,
I have been trying to benchmark the performance of squid for some time
now for plain HTTP and HTTPS traffic.
The key performance indicators that I am looking at are Requests Per
Second (RPS), Throughput (Mbps) and Latency (ms).
My test methodology looks like this
generator(apache
On 20/06/2013 8:00 p.m., Ahmed Talha Khan wrote:
Hello All,
I have been trying to benchmark the performance of squid for some time
now for plain HTTP and HTTPS traffic.
The key performance indicators that I am looking at are Requests Per
Second (RPS), Throughput (Mbps) and Latency (ms).
My test
On 06/20/2013 06:51 AM, Amos Jeffries wrote:
If anyone is interested in very detailed benchmarks, then I can provide them.
Yes please :-)
PS. could you CC the squid-dev mailing list as well with the details. The more
developer eyes we can get on this data the better. Although please
On 06/20/2013 02:00 AM, Ahmed Talha Khan wrote:
My test methodology looks like this
generator(apache benchmark)---squid--server(lighttpd)
...
These results show that squid is NOT CPU bound at this point. Neither
is it Network IO bound, because I can get much more throughput when I
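The `generator(apache benchmark)---squid---server(lighttpd)` setup described above can be driven with a command along these lines; the host names, port, and test object are placeholders, and ab's `-X` flag is what routes the requests through the proxy:

```
ab -n 20000 -c 100 -X squid-host:3128 http://lighttpd-host/test-64k.bin
```

Varying `-c` reproduces the concurrency levels (c) discussed in the result tables.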
On Fri, Jun 21, 2013 at 6:17 AM, Alex Rousskov
rouss...@measurement-factory.com wrote:
On 06/20/2013 02:00 AM, Ahmed Talha Khan wrote:
My test methodology looks like this
generator(apache benchmark)---squid--server(lighttpd)
...
These results show that squid is NOT CPU bound at this
On 06/20/2013 10:47 PM, Ahmed Talha Khan wrote:
On Fri, Jun 21, 2013 at 6:17 AM, Alex Rousskov wrote:
On 06/20/2013 02:00 AM, Ahmed Talha Khan wrote:
My test methodology looks like this
generator(apache benchmark)---squid--server(lighttpd)
...
These results show that squid is NOT
On Sun, Mar 31, 2013 at 3:20 AM, Amos Jeffries squ...@treenet.co.nz wrote:
On 31/03/2013 9:07 a.m., Hasanen AL-Bana wrote:
The above config for cache_dirs is not working properly.
You are top-posting.
. Why?
.. There is no above config.
Sorry, it is the new Gmail composer...
I can see
On 30/03/2013 6:33 a.m., Hasanen AL-Bana wrote:
Hi,
I am running squid 3.2 with an average of 50k req/min. Total received
bandwidth is around 200mbit/s.
I have a problem when my aufs cache_dirs reach a size above 600GB.
Traffic starts dropping and going up again, happening every 20~30 minutes.
I
Thank you Amos for clarifying these issues.
I will skip SMP and use a single worker, since Rock limits my max object
size to 32kb when used in shared environments.
My new cache_dir configuration looks like this now:
cache_dir rock /mnt/ssd/cache/ 30 max-size=131072
cache_dir aufs
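Spelled out, the split being described (small objects on a rock store on SSD, everything larger on aufs) might look like the following sketch; the paths and sizes here are placeholders, not his actual values:

```
# single worker, so rock is not limited to 32KB objects here
cache_dir rock /mnt/ssd/cache 30000 max-size=131072
cache_dir aufs /mnt/hdd/cache 600000 64 256 min-size=131073
```

The non-overlapping max-size/min-size bounds steer each object to exactly one store.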
The above config for cache_dirs is not working properly.
I can see the aufs dir growing rapidly while the Rock directory has
been created but it is empty!
---
Store Directory Statistics:
Store Entries : 1166040
On 31/03/2013 9:07 a.m., Hasanen AL-Bana wrote:
The above config for cache_dirs is not working properly.
You are top-posting.
. Why?
.. There is no above config.
I can see the aufs dir growing rapidly while the Rock directory has
been created but it is empty!
Hi,
I am running squid 3.2 with an average of 50k req/min. Total received
bandwidth is around 200mbit/s.
I have a problem when my aufs cache_dirs reach a size above 600GB.
Traffic starts dropping and going up again, happening every 20~30 minutes.
I have more that enough RAM in the system (125GB
Hi,
I have CentOS 64-bit with kernel 3.7.5, compiled with TPROXY features.
I noted that in rush hour, squid/squidGuard is bypassed.
I noted that squid is using only 1 CPU.
Here is an output sample:
===
[root@squid squid-3.3.1]# mpstat -u
Linux 3.7.5 (squid) 03/05/2013
On 6/03/2013 3:14 a.m., Ahmad wrote:
Hi,
I have CentOS 64-bit with kernel 3.7.5, compiled with TPROXY features.
I noted that in rush hour, squid/squidGuard is bypassed.
Are you basing that on the detected possible bypass attack messages
mentioned in threads from days back?
... that
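For the single-CPU symptom, Squid 3.2 and later can spread load across CPUs with SMP workers; a minimal squid.conf sketch (the worker count is illustrative, and note the rock/SMP object-size caveats discussed elsewhere in these threads):

```
# run several kid processes sharing the configured http_port(s)
workers 4
```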
I have some Dell 1950 servers dedicated to squid in my production
environment. Each with 16GB RAM and 300G disk
As the website traffic grows, the load of squid becomes high at high
traffic time. Average load is higher than 10.
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz
On 2011-08-18 08:19, Chen Bangzhong wrote:
I have some Dell 1950 servers dedicated to squid in my production
environment. Each with 16GB RAM and 300G disk
As the website traffic grows, the load of squid becomes high at high
traffic time. Average load is higher than 10.
Device:
Median Service Times (seconds) 5 min 60 min:
HTTP Requests (All): 0.00865 0.00865
Cache Misses: 0.01035 0.01035
Cache Hits: 0.0 0.0
Near Hits: 0.00091 0.00091
Not-Modified Replies: 0.0 0.0
My cached objects will expire after 10 minutes.
Cache-Control:max-age=600
I don't know why there are so many disk writes and there are so many
objects on disk.
In addition, Disk hits as % of hit requests: 5min: 1.6%, 60min: 1.9%
is very low.
Can I increase the cache_mem? or not use disk cache
2011/8/18 Chen Bangzhong bangzh...@gmail.com:
My cached objects will expire after 10 minutes.
Cache-Control:max-age=600
Static content like pictures should cache longer, like 1 day, 86400.
I don't know why there are so many disk writes and there are so many
objects on disk.
In addition,
On 18/08/11 19:40, Drunkard Zhang wrote:
2011/8/18 Chen Bangzhong:
My cached objects will expire after 10 minutes.
Cache-Control:max-age=600
Static content like pictures should cache longer, like 1 day, 86400.
Could also be a whole year. If you control the origin website, set
caching
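On the Squid side, a longer heuristic lifetime for static images can also be forced with refresh_pattern; a sketch with illustrative values (times are in minutes, so 1440 is a day and 10080 a week):

```
# min 1 day, max 1 week for common image types (illustrative values)
refresh_pattern -i \.(jpg|jpeg|png|gif)$ 1440 80% 10080
```

Setting proper Cache-Control on the origin, as suggested above, is still the cleaner fix.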
2011/8/18 Amos Jeffries squ...@treenet.co.nz:
On 18/08/11 19:40, Drunkard Zhang wrote:
2011/8/18 Chen Bangzhong:
My cached objects will expire after 10 minutes.
Cache-Control:max-age=600
Static content like pictures should cache longer, like 1 day, 86400.
Could also be a whole year. If
Thank you, Amos and Drunkard.
My website hosts novels; that is, users can read novels there.
The pages are not truly static content, so I can only cache them for
10 minutes.
My squids serve both non-cachable requests (works like nginx) and
cachable-requests (10 min cache). So 60% cache miss is
2011/8/18 Amos Jeffries squ...@treenet.co.nz:
On 18/08/11 19:40, Drunkard Zhang wrote:
2011/8/18 Chen Bangzhong:
My cached objects will expire after 10 minutes.
Cache-Control:max-age=600
Static content like pictures should cache longer, like 1 day, 86400.
Could also be a whole year. If
Mean Object Size: 20.61 K
maximum_object_size_in_memory 1024 KB
So most objects will be saved in RAM first; that still can't explain why
there are so many disk writes.
avg-cpu: %user %nice %system %iowait %steal %idle
1.52 0.00 1.63 6.95 0.00 89.91
Device:
On 18/08/11 22:50, Chen Bangzhong wrote:
thanks you Amos and Drunkard.
My website hosts novels; that is, users can read novels there.
The pages are not truly static content, so I can only cache them for
10 minutes.
My squids serve both non-cachable requests (works like nginx) and
On 18/08/11 22:56, Chen Bangzhong wrote:
Mean Object Size: 20.61 K
maximum_object_size_in_memory 1024 KB
So most objects will be saved in RAM first; that still can't explain why
there are so many disk writes.
Well, I would check the HTTP response headers there. Make sure they are
On 18/08/11 22:53, Kaiwang Chen wrote:
2011/8/18 Amos Jeffries squ...@treenet.co.nz:
On 18/08/11 19:40, Drunkard Zhang wrote:
2011/8/18 Chen Bangzhong:
snip
I don't know why there are so many disk writes and there are so many
objects on disk.
All traffic goes through either RAM cache
thanks.
Before I try the gateway squid solution, I want to change one of my
squid to use memory cache only. I have 16GB RAM. now cache_mem is set
to 5GB.
I will try to increase it to 12GB and set cache_dir to the null scheme. I
do this because I am sure that my hot objects can be saved in RAM,
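A memory-only setup like the one described can be sketched as follows; in squid 3.1+ simply omitting cache_dir disables disk caching, while the older "null" store type served the same purpose in squid 2.x (the sizes mirror the numbers quoted above):

```
cache_mem 12 GB
# no cache_dir directive -> no disk cache (squid 3.1+)
maximum_object_size_in_memory 1 MB
```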
2011/8/18 Amos Jeffries squ...@treenet.co.nz:
On 18/08/11 22:53, Kaiwang Chen wrote:
2011/8/18 Amos Jeffries squ...@treenet.co.nz:
On 18/08/11 19:40, Drunkard Zhang wrote:
2011/8/18 Chen Bangzhong:
snip
I don't know why there are so many disk writes and there are so many
objects on
On 18 August 2011 at 21:07, Amos Jeffries squ...@treenet.co.nz wrote:
On 18/08/11 22:56, Chen Bangzhong wrote:
Mean Object Size: 20.61 K
maximum_object_size_in_memory 1024 KB
So most objects will be saved in RAM first; that still can't explain why
there are so many disk writes.
Well, I would check
On 19/08/11 02:40, Kaiwang Chen wrote:
2011/8/18 Amos Jeffries squ...@treenet.co.nz:
On 18/08/11 22:53, Kaiwang Chen wrote:
2011/8/18 Amos Jeffries squ...@treenet.co.nz:
On 18/08/11 19:40, Drunkard Zhang wrote:
2011/8/18 Chen Bangzhong:
snip
I don't know why there are so many disk
On 19/08/11 02:10, Chen Bangzhong wrote:
thanks.
Before I try the gateway squid solution, I want to change one of my
squid to use memory cache only. I have 16GB RAM. now cache_mem is set
to 5GB.
I will try to increase it to 12GB and set cache_dir to the null scheme. I
do this because I am sure that
Amos, I want to find out what is filling my disk at 2-3MB/s. If there
is no cache-related information in the response header, will squid
write the response to the disk?
In the Squid wiki, I found the following sentences:
Responses with Cache-Control: Private are NOT cachable.
Responses with
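As a back-of-envelope check of that write rate against the "Mean Object Size: 20.61 K" reported earlier in the thread:

```shell
# at ~20.61 KB per object, a 2-3 MB/s write stream is on the order of
# 100-150 new objects hitting the disk store every second
rate=$(awk 'BEGIN { printf "%.0f", (2.5 * 1024) / 20.61 }')  # midpoint of 2-3 MB/s
echo "$rate objects/s at ~2.5 MB/s written"
```

That object rate is what the access.log and store.log should roughly corroborate if the writes really are cache stores.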
On 19/08/11 02:59, Kaiwang Chen wrote:
On 18 August 2011 at 21:07, Amos Jeffries squ...@treenet.co.nz wrote:
On 18/08/11 22:56, Chen Bangzhong wrote:
Mean Object Size: 20.61 K
maximum_object_size_in_memory 1024 KB
So most objects will be saved in RAM first; that still can't explain why
there are so many
On 19/08/11 03:58, Chen Bangzhong wrote:
Amos, I want to find out what is filling my disk at 2-3MB/s. If there
is no cache-related information in the response header, will squid
write the response to the disk?
In squid wiki, I found the following sentences:
Responses with Cache-Control:
Dear team,
I run Squid Cache: Version 3.1.8. I have a problem when my
client_http.requests rate is more than 200/sec: pages don't load, but
when the requests are fewer than 200 I don't find any problem. I don't
see any errors in /etc/var/squid/cache.log. My file descriptor limit is
32768.
Please find
On 23/10/10 03:01, Ananth wrote:
Dear team,
I run Squid Cache: Version 3.1.8. I have a problem when my
client_http.requests rate is more than 200/sec: pages don't load, but
when the requests are fewer than 200 I don't find any problem. I don't
see any errors in /etc/var/squid/cache.log. My file
饶琛琳 wrote:
I have seen the
page (http://wiki.squid-cache.org/KnowledgeBase/Benchmarks), and want to
ask a question about the RPS.
My LVS tells me that the ActiveConn number of one squid is more than
200,000; the netstat command tells me the established connection number is
6; but the RPS from
I have seen the
page (http://wiki.squid-cache.org/KnowledgeBase/Benchmarks), and want to
ask a question about the RPS.
My LVS tells me that the ActiveConn number of one squid is more than
200,000; the netstat command tells me the established connection number is
6; but the RPS from squidclient
guest01 wrote:
Hi guys,
I am sorry if this is a question which has been asked many times,
but I did not find any actual question concerning the performance of
recent versions of squid.
We are trying to replace a commercial product with squid servers on
64bit linux servers (most likely red
Hi guys,
I am sorry if this is a question which has been asked many times,
but I did not find any actual question concerning the performance of
recent versions of squid.
We are trying to replace a commercial product with squid servers on
64bit linux servers (most likely red hat 5). At the
Felipe W Damasio wrote:
Hi Mr. Robertson,
2010/1/26 Chris Robertson crobert...@gci.net:
Do you have any idea or any other data I can collect to try and
track down this?
Check your log rotation schedule. Is it possible that logs are being
rotated at midnight? I think that the swap.state
(2.6.29.6) now, in case
something broke in the more recent kernels.
-Original Message-
From: Felipe W Damasio [mailto:felip...@gmail.com]
Sent: Monday, January 25, 2010 10:06 PM
To: John Lauro
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid performance issues
Both your CPU and disk both look ok based on these, and not enough
difference from baseline to explain the change in timing of the command.
I'll look at the netstats a little more later to see if I spot anything.
Can you test the equivalent outside of squid? Maybe it's just your internet
or
Hi Mr. Lauro,
2010/1/27 John Lauro john.la...@covenanteyes.com:
I'll look at the netstats a little more later to see if I spot anything.
Can you test the equivalent outside of squid? Maybe it's just your internet
or amazon being slow and it has nothing to do with squid...?
I thought
Felipe W Damasio wrote:
Hi all,
Sorry for the long email.
I'm using squid on a 300Mbps ISP with about 10,000 users.
I have an 8-core Intel i7 machine, with 8GB of RAM and 500GB
of HD for the cache (a dedicated SATA HD with xfs), using aufs as the
storeio.
I'm caching mostly
Hi Mr. Robertson,
2010/1/26 Chris Robertson crobert...@gci.net:
Do you have any idea or any other data I can collect to try and
track down this?
Check your log rotation schedule. Is it possible that logs are being
rotated at midnight? I think that the swap.state file is rewritten when
Felipe W Damasio wrote:
Hi Mr. Robertson,
2010/1/26 Chris Robertson crobert...@gci.net:
Do you have any idea or any other data I can collect to try and
track down this?
Check your log rotation schedule. Is it possible that logs are being
rotated at midnight? I think that the
Hi Mr. Robertson,
2010/1/26 Chris Robertson crobert...@gci.net:
I don't use -k rotate.
Err... Really? Last I heard, calling squid -k rotate (aside from the
obvious logfile rotation) prunes the swap.state file. Not doing so would
lead to your swap.state growing without bounds.
Felipe W Damasio wrote:
Hi Mr. Robertson,
2010/1/26 Chris Robertson crobert...@gci.net:
I don't use -k rotate.
Err... Really? Last I heard, calling squid -k rotate (aside from the
obvious logfile rotation) prunes the swap.state file. Not doing so would
lead to your
Hi all,
Sorry for the long email.
I'm using squid on a 300Mbps ISP with about 10,000 users.
I have an 8-core Intel i7 machine, with 8GB of RAM and 500GB
of HD for the cache (a dedicated SATA HD with xfs), using aufs as the
storeio.
I'm caching mostly multimedia files (youtube and
narrow it down.
-Original Message-
From: Felipe W Damasio [mailto:felip...@gmail.com]
Sent: Monday, January 25, 2010 9:37 PM
To: squid-users@squid-cache.org
Subject: [squid-users] Squid performance issues
Hi all,
Sorry for the long email.
I'm using squid on a 300Mbps ISP
Hi Mr. John,
2010/1/26 John Lauro john.la...@covenanteyes.com:
What does the following give:
uname -a
uname -a:
Linux squid 2.6.29.6 #4 SMP Thu Jan 14 21:00:42 BRST 2010 x86_64
Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz GenuineIntel GNU/Linux
While it's being slow, run the following to get
Le lundi 12 octobre 2009 10:11:03, Jason Martina a écrit :
Hello,
Well, I'm looking for a better solution than MS ISA proxy. We have 3000
users using 4 ISA proxy servers, and it's a management nightmare, so
I'm going to attempt to use squid+dansguardian. On the squid side of
things I can't
On Oct 12, 2009, at 11:11 AM, Jason Martina wrote:
Hello,
Well, I'm looking for a better solution than MS ISA proxy. We have 3000
users using 4 ISA proxy servers, and it's a management nightmare, so
I'm going to attempt to use squid+dansguardian. On the squid side of
things I can't find
donovan jeffrey j wrote:
On Oct 12, 2009, at 11:11 AM, Jason Martina wrote:
Hello,
Well, I'm looking for a better solution than MS ISA proxy. We have 3000
users using 4 ISA proxy servers, and it's a management nightmare, so
I'm going to attempt to use squid+dansguardian. On the squid side of
Hello,
Well, I'm looking for a better solution than MS ISA proxy. We have 3000
users using 4 ISA proxy servers, and it's a management nightmare, so
I'm going to attempt to use squid+dansguardian. On the squid side of
things I can't find anything about using it in a large organization, and
with the
* Jason Martina jason.mart...@gmail.com:
Hello,
Well, I'm looking for a better solution than MS ISA proxy. We have 3000
users using 4 ISA proxy servers, and it's a management nightmare, so
I'm going to attempt to use squid+dansguardian. On the squid side of
things I can't find anything about
Hello,
As far as I know, Squid is a single-process, single-threaded program.
So is Squid good for serving large video downloads (e.g. 10MB+)? Will
they block other downloads?
Thanks.
Ryan Chan wrote:
Hello,
As far as I know, Squid is a single-process, single-threaded program.
So is Squid good for serving large video downloads (e.g. 10MB+)? Will
they block other downloads?
Thanks.
No. Two or more files can be served at the same time.
Squid is built in a non-blocking design
Hey
On Sun, Oct 4, 2009 at 10:34 AM, Amos Jeffries squ...@treenet.co.nz wrote:
No. Two or more files can be served at the same time.
Squid is built in a non-blocking design that does multi-threaded things
internally without using OS threads.
Amos
Is it using epoll?
Ryan Chan wrote:
Hey
On Sun, Oct 4, 2009 at 10:34 AM, Amos Jeffries squ...@treenet.co.nz wrote:
No. Two or more files can be served at the same time.
Squid is built in a non-blocking design that does multi-threaded things
internally without using OS threads.
Amos
Is it using epoll?
If
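For what it's worth, Squid of that era on Linux does use epoll when the build supports it, and the `--enable-epoll` configure flag forces it on. Whether a given binary was built that way can be read from `squid -v` output; the configure string below is a made-up sample standing in for that output:

```shell
# sample "squid -v" configure line (illustrative, not from a real build);
# on a real box, substitute: v=$(squid -v)
v="configure options: '--prefix=/usr' '--enable-epoll'"
case "$v" in
  *--enable-epoll*) result="epoll compiled in" ;;
  *)                result="epoll flag not present" ;;
esac
echo "$result"
```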
Hi Everyone,
I am currently doing performance testing with squid 3 and I seem to be
running into some bottlenecks. I have done exhaustive research
through the squid mail archives, Duane Wessels O'reilly book(a great
resource) and other areas.
I have a dual hyperthreading Xeon machine with 8GB
Hello,
I am using a customized Web Polygraph recipe based on polymix4 to
benchmark Squid 2.7STABLE6. With Linux kernel 2.6.23.8 the benchmark
indicates that our hardware will allow for approximately 1000 requests
per second. When the kernel is switched to 2.6.28.5 the benchmark
indicates a likely
On Wed, 2008-07-02 at 18:12 -0500, Carlos Alberto Bernat Orozco wrote:
Why am I asking this question? Because when I installed squid for 120
users, the RAM went through the roof.
RAM usage is not very dependent on the number of users; it depends more
on how you configure Squid.
There is a whole chapter in
Hi group
I wonder if a Debian box with 1GB RAM could run squid to block child
porn sites for approximately 600 users.
Is it possible? Would it work well? Is there a checklist of requisites for squid?
Thanks in advance
To: squid-users@squid-cache.org
Subject: [squid-users] Squid performance for 600 users
Hi group
I wonder if a Debian box with 1GB RAM could run squid to block child
porn sites for approximately 600 users.
Is it possible? Would it work well? Is there a checklist of requisites for squid?
Thanks in advance