Hi,
After more than two months of testing, and great support from the
entire ATS team, we have finally integrated ATS into production as a
transparent cache. Currently we have put limited traffic on it,
approximately 200 Mbps (1k+ users). After putting actual load on the
server, I am observing that upstream utilization has increased. The
last stats I viewed were 130 Mbps upstream and 41.6 Mbps to clients,
so caching is having a negative impact instead.
It is worth mentioning that I am using the background_fetch plugin to
cache range requests (to improve cache performance, especially for
streaming). The max object size is currently set to zero (unlimited).
During testing I observed that when a client downloaded a large file,
ATS would start fetching the full object at whatever upstream capacity
was available. A large file such as a 600 MB ISO takes a lot of
bandwidth to fill into the cache while the user is served at their
allocated speed; on the next hit, however, the object was delivered
from the cache.
In the production scenario, this behavior is causing increased
upstream utilization, while cache hits on larger objects are rare.
I need expert opinion on how to improve traffic savings. What comes to
my mind is:
- Set the max object size to 100 or 200 MB (see the records.config
sketch after this list)
- Keep using the background_fetch plugin but exclude larger objects
(via max object size)
- How do I exclude files larger than e.g. 200 MB from background
fetch? (see the rule sketch after my configs below)
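For the first point, this is a minimal sketch of what I plan to put in
records.config, assuming proxy.config.cache.max_doc_size is the right
knob for capping the cacheable object size:

# records.config: do not cache objects larger than 200 MB
# (209715200 bytes = 200 * 1024 * 1024); 0 means unlimited
CONFIG proxy.config.cache.max_doc_size INT 209715200

Please correct me if a different setting is intended for this.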
Below are my background_fetch configs:
exclude Content-Length <1000   (this is to exclude small objects of
less than 1000 bytes?)
include Content-Type video/mp4
exclude Content-Type text
include Content-Type video/quicktime
include Content-Type video/3gpp
include Content-Type application/octet-stream
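
For the third point, I am thinking of adding a rule like the one
below, assuming the Content-Length matcher accepts a '>' prefix the
same way my '<1000' rule above does:

# assumption: skip background fetch for objects over 200 MB
exclude Content-Length >209715200

Does the plugin support this, or is there a better way to keep large
objects out of background fetch?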
--
Regards,
Faisal.