Hi All,
I'm facing a weird issue regarding the DISKS cache_dir model and I would like to
have your expertise here.
Here is the result of a cache object with an AUFS cache_dir:
1436916227.603462 192.168.1.88 00:0c:29:6e:2c:99 TCP_HIT/200 10486356
GET http://proof.ovh.net/files/10Mio.dat - HIER_NONE/-
Very interesting. I would add: which storage scheme should we use in 2015 to get
the best performance with recent hardware and high load (more than
800 r/s)?
Are there any recent benchmarks somewhere?
In my case I'm using diskd with some system tuning: noatime and separate disks for
the cache.
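For reference, the kind of tuning described above might look like this; the device names, mount points and sizes below are placeholders, not taken from the actual setup:

```
# /etc/fstab - one dedicated disk per cache_dir, mounted with noatime
/dev/sdb1  /cache1  ext4  noatime  0  2
/dev/sdc1  /cache2  ext4  noatime  0  2

# squid.conf - one cache_dir per physical disk
cache_dir diskd /cache1 100000 16 256
cache_dir diskd /cache2 100000 16 256
```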
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
The key question: which OS are you using?
On 15.07.15 12:56, Stakres wrote:
> Hi All,
>
> I face a weird issue regarding DISKS cache-dir model and I would like to
> have your expertise here
>
> Here is the result of a cache object with an AUFS cache_dir:
> 1436916227.603462 192.168.1.88 00:0c:29:6e:2c:99 TCP_HIT/200 10486356
Yuri,
Debian 7 or 8, tested on both...
Bye Fred
--
View this message in context:
http://squid-web-proxy-cache.1019090.n4.nabble.com/AUFS-vs-DISKS-tp4672209p4672212.html
Sent from the Squid - Users mailing list archive at Nabble.com.
Diskd works perfectly on some OSes, like Solaris and BSD.
Linux-based OSes, AFAIK, run diskd rather slowly, and AUFS is the best
choice in that case. Depending on system settings, of course.
AFAIK, on some OSes (e.g. Windows) "aufs" leads
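For readers following along: the two schemes being compared differ only in the cache_dir type. A minimal squid.conf sketch (the path and size are placeholders; Q1/Q2 shown at their documented defaults):

```
# aufs: threaded asynchronous disk I/O inside the Squid process
cache_dir aufs /cache 100000 16 256

# diskd: a separate disk-I/O daemon per cache_dir;
# Q1/Q2 bound the outstanding request/response queue sizes
cache_dir diskd /cache 100000 16 256 Q1=64 Q2=72
```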
Yury,
you mean that having DISKD 52 times slower than AUFS on a Linux OS is
normal ?
I cannot believe that, incredible !
I could understand double or triple, but here we're speaking about
50+ times...
Fred.
Are you surprised that the IO modules may be specific for different
operating systems? :)
On 15.07.15 15:59, Stakres wrote:
> Yury,
>
> you mean that having DISKD 52 times slower than AUFS on a Linux OS is
> normal ?
> I cannot believe that, incredible !
Also, did you read this:
http://wiki.squid-cache.org/Features/DiskDaemon
?
Have you seen which OS this feature was designed for? ;)
On 15.07.15 15:59, Stakres wrote:
> Yury,
>
> you mean that having the DISKD 52 times slower then AUFS with linux OS is
>
On 15/07/2015 9:59 p.m., Stakres wrote:
> Yury,
>
> you mean that having the DISKD 52 times slower then AUFS with linux OS is
> normal ?
> I cannot believe that, incredible !
>
> I could understand the double or the triple, but here we're speaking about
> 50+ times...
Yes. Exactly so.
The diff
Just a little word about aufs, just for information, to avoid
squidaio_queue_request: WARNING - Queue congestion
squidaio_queue_request: WARNING - Queue congestion
squidaio_queue_request: WARNING - Queue congestion
squidaio_queue_request: WARNING - Queue congestion
I had to increase this value (so
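A quick way to see how often that warning actually fired is to count it in the log. A hedged sketch; the log path in the example is an assumption, adjust to your install:

```shell
# count_congestion: count aufs queue-congestion warnings in a cache.log
count_congestion() {
  grep -c 'squidaio_queue_request: WARNING - Queue congestion' "$1"
}
# e.g. count_congestion /var/log/squid/cache.log
```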
You are right Fred,
It is a difficult deal for us too...
aufs -> good speed but more trouble (assertion failed "empty()", HTTP
reply without Date, unstable rock system) and we must deal with squid
crashes (watchdog)
diskd -> more stable but slower...
On 15/07/2015 12:46, FredB wrote:
> Your are right fred,
>
> It is is a difficult deal for us too...
>
> aufs -> good speed but more troubles ( assertion failed, "empty()",
> HTTP
> reply without date unstable rock system ) and must deal with
> squid
> crashes ( watchdog)
You mean "rock store" or aufs?
For me aufs seems
On 15.07.15 17:18, FredB wrote:
>
>> Your are right fred,
>>
>> It is is a difficult deal for us too...
>>
>> aufs -> good speed but more troubles ( assertion failed, "empty()",
>> HTTP
>> reply without date unstable rock system ) and must deal
On 15/07/2015 6:56 p.m., Stakres wrote:
> Hi All,
>
> I face a weird issue regarding DISKS cache-dir model and I would like to
> have your expertise here
>
> Here is the result of a cache object with an AUFS cache_dir:
> 1436916227.603462 192.168.1.88 00:0c:29:6e:2c:99 TCP_HIT/200 10486356
>
> Just use fast separate physical devices on separate controllers - and
> all will be ok without any delays.
>
Of course, with this kind of load and without separate disks Squid dies after a few
minutes :)
I'm using separate drives with a noatime file system and I never found a way to
(completely)
> I'm making a test just now
>
> Diskd, 600 r/s, squid CPU usage = 40 %, load average 1, no warning in
> cache/kernel/syslog logs
> Aufs, 600 r/s, squid CPU usage = 45 %, load average 3, many 'Queue
> congestion'
>
> And no gain for hits % of all requests and bytes sent from squid
> cache
>
> S
Hi Amos,
Sorry, but the Rock mode is totally bugged, the worst mode to use here.
We did tons of tests (small, medium and big rock caches); all crash, process
after process. We have definitively abandoned the Rock mode since it'll be
the same results.
So, it seems we'll have to switch all boxes from diskd
Fred,
Welcome to the club...
> ... it'll be the same results.
>
> So, it seems we'll have to switch all boxes from diskd to aufs, but I
> think
> we could survive
> Anyway, we liked the diskd because we see good stability, but the
> HITed
> objects are really too slow, all my clients are complaining, that's
> why we
> did many tests yesterday and we f
On 15/07/2015 11:41 p.m., Stakres wrote:
> Hi Amos,
>
> Sorry but the Rock mode is totaly bugged, the worst mode to use here.
> We did tons of tests, small, medium and big rock cache, all crash process
> after process. We have definitively abandonned the Rock mode while it'll be
> the same results
Hi Fred,
We did the tests with 1 hard disk only (for testing); we used 150 req/sec and
load was around 0.7-0.8.
Naaa, response times are crazy with DISKD/TCP_HIT (20+ sec instead of 0.5 sec with
AUFS), but it concerns TCP_HIT only; the other flags are correct with DISKD.
I'll try the "noatime"...
Fred
--
Amos,
We're using the latest 3.5.6 build, and we have not yet planned new tests
with Rock. We were a bit disappointed with it, so we're not really "hot" to
spend time testing it.
We're OK with the Diskd mode, except for the TCP_HIT objects (50+ times
slower).
We did tests on a basic server, i3
We did tests on a basic server, i3
>
> Hi Fred,
>
> We did the tests with 1 hard disk only (for testing), we used 150
> req/sec,
> load was around 0.7-0.8
> Naaa, response times are crazy in DISKD/TCP_HIT (20+ sec instead 0.5
> sec in
> AUFS) but it concerns TCP_HIT only, the other flags are corrects in
> DISKD.
>
> I'll try t
Fred,
(Guys, 2 French Freds here, but not the same)
Did you check the TCP_HIT response times with Diskd?
During our tests, we have seen that it's sometimes better to download the
object from the internet again instead of using the one from the cache; we got
better response times...
Fred
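One way to answer that question from the logs is to average the elapsed-time field of TCP_HIT lines. A sketch, assuming Squid's default native access.log format (field 2 = elapsed milliseconds, field 4 = code/status); adjust the field numbers if you use a custom logformat:

```shell
# avg_hit_ms: average service time (ms) of TCP_HIT requests in an access.log
avg_hit_ms() {
  awk '$4 ~ /TCP_HIT/ { sum += $2; n++ }
       END { if (n) printf "%.0f\n", sum / n }' "$1"
}
# e.g. avg_hit_ms /var/log/squid/access.log
```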
AFAIK,
diskd speed depends on the backend fs (OS level).
I use diskd over ZFS with some tunables and get acceptable response
times, approx 0.1 sec.
On 15.07.15 18:52, Stakres wrote:
> Fred,
> (Guys, 2 french Fred here, but not the sames)
>
> Did you c
>
> Did you check the TCP_HIT response times with the Diskd ?
Yes
192.x.x.x - fred [15/Jul/2015:14:30:27 +0200] "GET
http://ec.ccm2.net/www.commentcamarche.net/download/files/youtube_downloader_hd_setup-2.9.9.23.exe
HTTP/1.0" 200 10096376 TCP_HIT:HIER_NONE "Wget/1.13.4 (linux-gnu)"
192.x.x.x
Here are my stats:
client_http.all_median_svc_time = 0.097357 seconds
client_http.miss_median_svc_time = 0.097357 seconds
client_http.nm_median_svc_time = 0.00 seconds
client_http.nh_median_svc_time = 0.00 seconds
client_http.hit_median_svc
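These counters come from the cache manager (`squidclient mgr:info` against a running Squid). To compare runs, one can pull just the median service times out of a saved dump; a sketch:

```shell
# median_times: print the *_median_svc_time counters from a saved mgr:info dump
median_times() {
  awk -F' = ' '/median_svc_time/ { print $1 " = " $2 }' "$1"
}
# e.g. squidclient mgr:info > info.txt && median_times info.txt
```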
Sorry, I forgot a real-life test:
time wget
http://ec.ccm2.net/www.commentcamarche.net/download/files/youtube_downloader_hd_setup-2.9.9.23.exe
-v
--2015-07-15 15:22:03--
http://ec.ccm2.net/www.commentcamarche.net/download/files/youtube_downloader_hd_setup-2.9.9.23.exe
Connecting to x.x.x.x:312
Just adding something to the subject.
HDD vs SSD speeds are quite something.
I have tried to test the benefits of an SSD in the past, and in many cases
it was a great speed addition.
Eliezer
On 15/07/2015 15:27, Stakres wrote:
Amos,
We're using the latest 3.5.6 build, and we have not yet pl
SSD as squid cache?! You are really rich, man!
On 15.07.15 19:33, Eliezer Croitoru wrote:
> Just adding something to the subject.
> HDD vs SSD speeds are quite something.
> I have tried to test the benefits of a SSD in the past and in many
cases it w
I agree, but what about the lifetime? I change my SATA drives every two years
(3 max).
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
Speaking in essence: performance depends strongly on the process model
used by the operating system, on its settings, on the hardware configuration,
and on the actual configuration of the operating system. And it cannot be
considered in isolation from all
It depends on your squid settings (memory cache size, etc.), your OS
(as expected), and your fs.
My installation has worked 4 years 24x7 with the shipped HDDs.
On 15.07.15 19:41, FredB wrote:
> I agree, but what about the life time ? I change every two years (m
Look:
root @ cthulhu / # zpool status data
pool: data
state: ONLINE
scan: scrub repaired 0 in 1h49m with 0 errors on Sat Jul 11 07:49:01 2015
config:
NAME STATE READ WRITE CKSUM
data ONLINE 0 0
Hi Fred,
tests from my side:
DISKD with TCP_HIT objects: 564KB/s with wget, the same url you tested.
AUFS with TCP_HIT objects: 47.8M/s; same wget, same squid, same url, same
everything.
Wget with AUFS:
Length: 10095849 (9.6M) [application/x-msdos-program]
Saving to: `youtube_downloader_hd_setup-2.
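Those two rates are consistent with the response times reported earlier in the thread: at 564 KB/s the ~10 MB test object takes roughly 18 seconds, while at 47.8 MB/s it takes a fraction of a second. A quick arithmetic check (treating the rates as KiB/MiB per second):

```shell
# seconds to fetch the 10486356-byte test object at each measured hit rate
awk 'BEGIN {
  printf "%.0f\n", 10486356 / (564 * 1024)          # diskd hit rate
  printf "%.1f\n", 10486356 / (47.8 * 1024 * 1024)  # aufs hit rate
}'
```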
Queue congestion means an IO bottleneck. This will appear on a regular
basis. With client delays, of course.
On 15.07.15 19:51, Stakres wrote:
> Hi Fred,
> tests from my side:
> DISKD with TCP_HIT objects: 564KB/s with wget, the same url you have
tested
>
>
> It depends from your squid settings (memory cache size, etc), your OS
> (as expected), your fs.
>
> My installation works 4 years 24x7 with shipped HDD.
>
Yes, in my case it depends on the number of reads/writes per second; I know that I
> Subject: Re: [squid-users] AUFS vs. DISKS
>
> Hi Fred,
> tests from my side:
> DISKD with TCP_HIT objects: 564KB/s with wget, the same url you have
> tested.
> AUFS with TCP_HITS objects: 47.8M/s, same wget, same squid, same url,
> same
> all.
>
> Wget wi
I think that using datacenter-class (not consumer) HDDs is
preferable to SSDs.
Losing cache content means losing cached traffic, and money. And this is not
acceptable for big caches.
On 15.07.15 19:57, FredB wrote:
>
This test means nothing. Only a very approximate overall IO performance
for the IO subsystem.
On 15.07.15 19:58, FredB wrote:
>
>
>> Subject: Re: [squid-users] AUFS vs. DISKS
>>
>> Hi Fred,
>> tests from my side:
>> DISK
On 15.07.15 19:58, FredB wrote:
>
>
>> Subject: Re: [squid-users] AUFS vs. DISKS
>>
>> Hi Fred,
>> tests from my side:
>> DISKD with TCP_HIT objects: 564KB/s with wget, the same url you have
>> tested.
>> AUFS with TCP_HITS objects: 47.8M/s, same wget, same squid, same url,
>> s
Just remember: performance tuning is a complex problem, especially for
high-load installations. And it must be solved as such.
On 15.07.15 19:58, FredB wrote:
>
>
>> Subject: Re: [squid-users] AUFS vs. DISKS
>>
>> Hi Fre
>
> All,
> We have switched some ISPs from DISKD to AUFS this morning, the
> "queue
> congestion" appears at the begining then disappears from the
> cache.log. For
> how long, nobody knows...
>
Yes, me too, but after a while I had:
2015/07/15 13:36:07 kid1| DiskThreadsDiskFile::openDone: (2) No such file or directory
At this moment your user got a partially loaded web page.
On 15.07.15 20:06, FredB wrote:
>
>>
>> All,
>> We have switched some ISPs from DISKD to AUFS this morning, the
>> "queue
>> congestion" appears at the begining then disappears from the
>> c
>
> This test means nothing. Only very approximate overall IO performance
> for IO subsystem.
>
Not nothing, I don't agree; it's not sufficiently precise to indicate where the
problem is, OK with that, but if you change only diskd to aufs you
>
>
> At this moment your user got partially loaded web page.
>
Yes, a bad experience for me; I guess I'm reaching some limitations of aufs.
Fortunately I have no problem with diskd, but I'd like to increase the
performance.
I will (re)test
I will (re)test
Fred,
We have upgraded 4 big ISPs to the latest 3.5.6 with AUFS; the feedback is very
good. I can tell you clients see a big (positive) change here.
We use the same settings in squid.conf but AUFS instead of DISKD; the
difference is crazy...
In the past we moved to Diskd due to too many errors in A
On 16/07/2015 1:51 a.m., Stakres wrote:
> Hi Fred,
> tests from my side:
> DISKD with TCP_HIT objects: 564KB/s with wget, the same url you have tested.
> AUFS with TCP_HITS objects: 47.8M/s, same wget, same squid, same url, same
> all.
>
> Wget with AUFS:
> Length: 10095849 (9.6M) [application/x-m
On 16/07/2015 2:27 a.m., FredB wrote:
>
>> At this moment your user got partially loaded web page.
>>
>
> Yes bad experience for me, I guess I reach some limitations about aufs,
That is the SWAPFAIL part of SWAPFAIL_MISS. The user should have simply
gotten a MISS fetched from the network. Maybe
>
> Fred,
> We have upgraded 4 big ISPs to the latest 3.5.6 in AUFS, feedbacks
> are so
> good. I can tell you clients see a big (positive) change here.
> We use the same settings in the squid.conf but AUFS instead DISKD,
> the
> difference is crazy...
>
> In the past we moved to the Diskd due t
Amos,
I think the aufs queue must be buffered better and more smoothly. On some
OSes (I've tested) peak loads lead to performance degradation. Periodically.
That is why I'm not using aufs.
On 15.07.15 20:39, Amos Jeffries wrote:
> On 16/07/2015 1:51 a.m.,
On 15.07.15 20:45, Amos Jeffries wrote:
> On 16/07/2015 2:27 a.m., FredB wrote:
>>
>>> At this moment your user got partially loaded web page.
>>>
>>
>> Yes bad experience for me, I guess I reach some limitations about aufs,
>
> That is the SWAP
Fred,
Not sure we'll have free time for testing the previous 3.4; we now have
dozens of boxes to manually upgrade to 3.5.6...
Yes, we do use the original squid 3.5.6 package, no build mix here.
Fred
On 07/15/2015 11:39 AM, Amos Jeffries wrote:
On 16/07/2015 1:51 a.m., Stakres wrote:
Hi Fred,
tests from my side:
DISKD with TCP_HIT objects: 564KB/s with wget, the same url you have tested.
AUFS with TCP_HITS objects: 47.8M/s, same wget, same squid, same url, same
all.
Wget with AUFS:
Length
On 16/07/2015 2:59 a.m., Yuri Voinov wrote:
>
> Amos,
>
> I think, auds queue must be buffered more better and smoother. On some
> OS (I've tested) peak loads leads performance degradation. Periodically.
>
Buffering and I/O scheduling is all done by the system disk controller
AFAICT. Squid is j
>
> Fred,
>
> Not sure we'll have free time for testing the previous 3.4, we now
> have
> dozens of boxes to manually upgrade to the 3.5.6...
> yes, we do use the original squid 3.5.6 package, no build mix here.
>
OK, I will. It would be interesting to understand what happens and if there is
so
On 07/15/2015 11:59 AM, Yuri Voinov wrote:
Amos,
I think the aufs queue must be buffered better and more smoothly. On some
OSes (I've tested) peak loads lead to performance degradation. Periodically.
That is why I'm not using aufs.
This makes sense
On 15/07/2015 16:36, Yuri Voinov wrote:
SSD as squid cache?! You are really rich, man!
Please separate two things: enterprise-level SSDs and desktop SSDs.
They are different by nature, and they do not tend to "break" easily.
They have different life spans, and enterprise-grade HDDs tend to be
I think
datacenter-class HDDs are enough. I also use them with the mirror option for
speed and reliability in my setup. This is comprehensive enough for an
enterprise-level proxy. ;)
Of course, I know you know the separation between the two hardware classes.
> >
> > Not sure we'll have free time for testing the previous 3.4, we now
> > have
> > dozens of boxes to manually upgrade to the 3.5.6...
> > yes, we do use the original squid 3.5.6 package, no build mix here.
> >
>
> Ok I will, It would be interesting to understand what happen and if
> there
Fred.
It depends on your OS.
On your hardware.
On your OS configuration.
Tuning is a very complex problem, and tuning is EVIL.
Remember it.
PS. On MY platform diskd is the single choice. And it's very fast: 0.1
sec latency.
16.0
Hi Fred,
Same results from our side...
Does it mean we should take the diskd engine from 3.4.x and use it
with 3.5.x?
It would be a good try, to see if it works.
bye Fred
>
>
> Fred.
>
> It's depending your OS.
>
> Depending your hardware.
>
> Depending your OS configuration.
>
> Tuning is very complex problem and tuning is EVIL.
>
> Remember it.
>
Yuri, my tests are very, very basic.
I think in this case
In my case diskd is the only choice. On my platform aufs does not work at all.
And diskd gives the best results after careful tuning.
As I said earlier, the result is highly dependent on the platform,
hardware, and configuration. diskd was designed for a
Fred,
AUFS works for us; we switched all our clients back to AUFS from
DISKD.
Yes, there are some queue congestions at the squid restart (for 30 min
max), but as Amos said, Squid will re-adapt its internal value to fit
the traffic; I can confirm that point.
After a while, the queue c
> Fred,
> The AUFS works for us, we switched all our clients back to the AUFS
> from
> DISKD.
> Yes, there are some Queue congestions at the squid restart (during 30
> min
> maxi), but as Amos said the Squid will re-adapt its internal value to
> fit
> the traffic, I can confirm that point.
> After
Hi,
By "cache.log saying objects are not found" I meant
"DiskThreadsDiskFile::openDone: (2) No such file or directory".
(I no longer had the exact message in mind...)
Yes, still this message, but it disappears at least 30 minutes later. So not a
problem for us or our clients.
bye Fred
On 16/07/2015 3:37 a.m., Marcus Kool wrote:
>
> I think that changing the baseline to 8K is not required since the queue
> congestion
> warning is normally seen only a few times, so the baseline value of 8 is
> doubled
> only a few times.
> A new baseline value of 256 (5 doublings) makes sense to
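The arithmetic in that quote can be checked quickly: per the discussion, the congestion limit starts at a baseline of 8 and doubles on every warning, so n warnings imply a limit of 8·2^n (the baseline of 8 is as described here, not independently verified against the source):

```shell
# queue limit after n "Queue congestion" doublings, from a baseline of 8
queue_limit() {
  awk -v n="$1" 'BEGIN { q = 8; for (i = 0; i < n; i++) q *= 2; print q }'
}
# e.g. queue_limit 5 -> 256; queue_limit 9 -> 4096 (the "4K queue"
# mentioned later in the thread)
```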
>
> Fred and Fred;
>
> Could you guys who have been seeing these warnings logged please
> present a grep of those cache.log lines so I can get a better handle
> on
> how many doublings your queues are actually requiring ?
>
> I count 5 and 6 warnings respectively in FredB's two earlier log
> t
> 'accept-encoding="identity,gzip,deflate"'
> 2015/07/20 10:15:00 kid1| clientProcessHit: Vary object loop!
> 2015/07/20 10:20:49 kid1| clientIfRangeMatch: Weak ETags are not
> allowed in If-Range: "bbfe4fbed01:0" ? "537965ecbcc2d01:0"
> 2015/07/20 10:22:50 kid1| urlParse: Illegal hostname '.x
Argh! Now a crash:
2015/07/20 11:06:36 kid1| WARNING: swapfile header inconsistent with available
data
2015/07/20 11:06:36 kid1| Could not parse headers from on disk object
2015/07/20 11:06:36 kid1| BUG 3279: HTTP reply without Date:
2015/07/20 11:06:36 kid1| StoreEntry->key: F5761430F887925196458
>
> 2015/07/20 11:06:36 kid1| WARNING: swapfile header inconsistent with
> available data
> 2015/07/20 11:06:36 kid1| Could not parse headers from on disk object
> 2015/07/20 11:06:36 kid1| BUG 3279: HTTP reply without Date:
> 2015/07/20 11:06:36 kid1| StoreEntry->key:
> F5761430F887925196458A469
Hi Amos,
Here is the cache.log to check:
http://utimg.unveiltech.com/tmp/amos-cache.tgz
Fred,
I compared the two diskd.cc sources, squid 3.4.8 and 3.5.6, both official: no
diff.
So using the diskd from 3.4 with 3.5 does not seem to be a good idea; the result
should be the same.
Fred
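A sketch of that comparison; the paths below are assumptions about where diskd.cc lives in each unpacked release tree, so adjust as needed:

```shell
# diskd_same: report whether two diskd.cc files differ
diskd_same() {
  if diff -q "$1" "$2" >/dev/null; then echo identical; else echo different; fi
}
# e.g. diskd_same squid-3.4.8/src/DiskIO/DiskDaemon/diskd.cc \
#                 squid-3.5.6/src/DiskIO/DiskDaemon/diskd.cc
```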
On 21/07/2015 3:19 a.m., Stakres wrote:
> Hi Amos,
> Here is the cache.log to check:
> http://utimg.unveiltech.com/tmp/amos-cache.tgz
Thanks. Looks like my guesstimate was good. You have 9 lines there (4K
queue). I'll backport the update shortly as-is.
Amos
> Fred,
> I compared the 2 source diskd.cc, squid 3.4.8 and 3.5.6 both
> official, no
> dif.
> So, using the diskd 3.4 with the 3.5 does not seem to be a good idea,
> result
> should be the same.
>
> Fred
No crash for you?
I confirm this discussion:
http://squid-web-proxy-cache.1019090.n4.nabb
Hi Fred,
No error, no crash.
Some warnings only:
2015/07/21 11:21:02 kid1| DiskThreadsDiskFile::openDone: (2) No such file or
directory
But we can live with these warnings; Squid will take care of the missing
objects...
Bye Fred
>
> Hi Fred,
>
> No error, no crash.
> Some warnings only:
> 2015/07/21 11:21:02 kid1| DiskThreadsDiskFile::openDone: (2) No such
> file or
> directory
> But we can live with these warnings, Squid will take care the missing
> objects...
>
> Bye Fred
>
>
FYI,
Tried with squid 3.5.9 and no problem.
Fred,
We now have the 3.5.8 deployed with our clients, not yet switched to the
3.5.9...
The "strange" messages are not a problem because I suspect they're generated by
cache_swap_low/high cleaning old objects.
I suppose Squid cleans old objects but another squid process does not
take care of this cl
>
> Based on previous answers, diskd is for freebsd with 1 process only,
> when
> the ufs/aufs are with many processes.
> Also, as you said, it seems the diskd process was modified with the
> latest
> builds...
>
I don't know about freebsd, diskd is a separate process with a light consumption
On 23/09/2015 16:55, FredB wrote:
I don't know about freebsd, diskd is a separate process with a light consumption
Top with 3000 simultaneous users (2 caches of 250 GB, full)
Just as a side note:
I have tested and compared RAM-only squid on FreeBSD vs Linux, and it seems
like the FreeBSD test results s
On 24/09/2015 12:48 a.m., FredT wrote:
> Fred,
> We now have the 3.5.8 deployed with our clients, not yet switched to the
> 3.5.9...
> "strange" messages are not a problem because i suspect it's generated by the
> cache_swap_low/high, cleaning old objects.
> I suppose the Squid cleans old objects b
>
> If you want to achieve highest performance it is best to resolve that
> process collision issue. The wrongly indexed entries will be causing
> others to get expired earlier and maybe reduce HIT rate on them.
>
> The (rather large amount of) extra work Squid is doing to cope with
> the
> miss