On 2014-02-19 06:25, spiderslack wrote:
Hi all.

I am trying to get Squid to cache web content. I compiled Squid 3.4 and am
testing it. The first test, something many network administrators want, is
caching YouTube. After some tests the video was not being cached, so I
examined the HTML of this video:
https://www.youtube.com/watch?v=KaI8sdDxCAc . It is only 11 seconds long,
which makes it quick to test instead of waiting for, say, a 1-hour video to
load. Inspecting the traffic with Wireshark, I found the real URL in the
HTML: https://youtube.googleapis.com/v/KaI8sdDxCAc?autohide=1&amp=&version=3 .
That URL opens the video fullscreen in Firefox. But when I run the test
while watching access.log, I see that it does not cache. From a computer
with IP address 192.168.1.104 I made two requests: on the first one Squid
stored the object, which is fine since the content was not yet in the
cache, but the second request still gave TCP_MISS.

<snip for brevity>


According to the Squid documentation, the fields of the store.log file are
defined at http://wiki.squid-cache.org/SquidFaq/SquidLogs#store.log


What intrigued me are the action, dir_number and file_number fields. The
RELEASE action says the object was removed from the cache (huh? how could
it be removed if it was never cached?).

dir_number says the object was stored in cache directory -1, which seemed
consistent to me: it was not stored on disk, hence -1 (error). The
file_number code FFFFFFFF indicates that the object is in memory. So far so
good, but if that is the case, why does access.log show me TCP_MISS rather
than TCP_MEM_HIT?
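For reference, a store.log entry of the kind described above reads
field-by-field like this (the timestamp, hash, content type and exact
sizes below are made up for illustration; the order is: time, action,
dir_number, file_number, object hash, HTTP status, date / last-modified /
expires timestamps, content type, expected/actual length, method, URL):

```
1392812345.678 RELEASE -1 FFFFFFFF 0123456789ABCDEF0123456789ABCDEF 200 1392812000 -1 -1 video/x-flv 7391/7391 GET https://youtube.googleapis.com/v/KaI8sdDxCAc
```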

"-1 == error" is wrong. It just means "not in a cache_dir" / "not on disk".

All three are consistent. It is in memory cache and being HIT on there.

According to my configuration, maximum_object_size_in_memory is set to
64 KB, and according to the log the video is 7391 bytes, about 7 KB. Am I
right that the object should therefore stay in memory? Why then does the
code remain TCP_MISS instead of TCP_MEM_HIT? Has anyone experienced this?
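For reference, the memory-cache limit being discussed is set in squid.conf
like this (the 64 KB value matches the setup described above; the
cache_mem value is purely illustrative):

```
# largest single object the memory cache will hold (value from the setup above)
maximum_object_size_in_memory 64 KB

# total RAM devoted to the in-memory object cache (illustrative value)
cache_mem 256 MB
```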

The objects from YT are "unknown length" / a set of streamed chunks. They are stored in memory until one is known to be too big for memory, at which point it gets a disk file. However, none of the *pieces* of the video are getting large enough for that to happen. Your test operations seem to be constantly truncating the stream(s) before they grow past your memory cache limit.


The test operations you are performing on the video highlight a problem with the YT design that we are still trying to get them to fix. Namely, changes in the player GUI make it send transaction aborts and terminations, which prevent the video from being completed and cached.

Also, while the video objects are static and storable, the player system uses random client-specific details in the URL to fetch them. That breaks cache object re-use by the HTTP URL-based mechanisms (i.e. it forces a MISS rather than a revalidation or variant storage). I think that is why you are seeing the RELEASE entries: these are streams terminated before their natural completion.
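Squid's StoreID interface (a helper that maps request URLs onto a canonical cache key) is the usual workaround for such client-specific URLs. As a rough sketch of the idea only: the googlevideo URL pattern and the canonical key below are assumptions for illustration, not YouTube's actual scheme, so inspect your own traffic before relying on them.

```python
#!/usr/bin/env python3
# Hypothetical StoreID-style helper sketch: collapse videoplayback URLs
# that carry a stable "id" parameter onto one canonical store ID, so
# requests for the same video can share a cache entry.
import re

# Assumed URL pattern; real YouTube URLs vary and change over time.
YT_RE = re.compile(
    r'^https?://[^/]*\.googlevideo\.com/videoplayback\?.*\bid=([0-9a-fA-F]+)')

def store_id(url):
    """Return a canonical store ID for recognised URLs, else None."""
    m = YT_RE.match(url)
    if m:
        # The .squid.internal domain is the conventional namespace for
        # synthetic store IDs; the exact key layout here is made up.
        return 'http://video-srv.youtube.com.squid.internal/id=' + m.group(1)
    return None

def handle_line(line):
    """Turn one helper request line ('<URL> [extras]') into a reply."""
    url = line.split()[0]
    sid = store_id(url)
    return 'OK store-id=%s' % sid if sid else 'ERR'

# In a real helper, each line arriving on stdin would be passed through
# handle_line() and the reply printed and flushed back to Squid.
```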

You might be able to get the streams to finish by using Squid quick_abort_* directives to keep the download happening in Squid after the client/player disconnects. For YT caching you also require the StoreID feature.
 http://wiki.squid-cache.org/Features/StoreID
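The directives mentioned above could be combined in squid.conf along these
lines (all values and the helper path are illustrative, not tuned
recommendations):

```
# keep fetching an aborted download when less than 16 KB remain, or when
# 90% of it has already arrived; give up if more than 1 MB is still missing
quick_abort_min 16 KB
quick_abort_max 1024 KB
quick_abort_pct 90

# StoreID helper (Squid 3.4+); the helper path here is hypothetical
store_id_program /usr/local/bin/storeid_helper.py
store_id_children 5
```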

Amos
