On 01/Sep/2010, at 15:36, Tateru Nino <tateru.n...@gmail.com>
wrote:



On 1/09/2010 11:24 PM, Oz Linden (Scott Lawrence) wrote:

On 2010-09-01 7:12, Tateru Nino wrote:

Hmm. It might not be an actual leak per se... I've noticed in busy areas
that the viewer will often hit a **lot** of parallel HTTP texture fetches.

That's not very good HTTP behavior, but I doubt that we can get it changed
until the servers properly support persistent connections.

Indeed. It's not exactly best practice. Creating a priority list of textures
and a configurable cap on concurrent requests (default: 16?) would probably be
the way to go.
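
As a rough sketch of what such a cap could look like (FetchQueue,
TextureRequest and the 16-fetch default are placeholder names for
illustration, not actual viewer code):

    #include <cstddef>
    #include <cstdint>
    #include <queue>
    #include <vector>

    struct TextureRequest {
        std::uint64_t texture_id;  // which texture to fetch
        float priority;            // higher = fetch sooner (e.g. screen coverage)
    };

    struct ByPriority {
        bool operator()(const TextureRequest& a, const TextureRequest& b) const {
            return a.priority < b.priority;  // max-heap on priority
        }
    };

    class FetchQueue {
    public:
        explicit FetchQueue(std::size_t max_concurrent = 16)  // the "default: 16?"
            : max_concurrent_(max_concurrent) {}

        void enqueue(const TextureRequest& req) { pending_.push(req); }

        // Pop as many requests as the cap allows; the caller issues the HTTP fetches.
        std::vector<TextureRequest> drain_startable() {
            std::vector<TextureRequest> startable;
            while (in_flight_ < max_concurrent_ && !pending_.empty()) {
                startable.push_back(pending_.top());
                pending_.pop();
                ++in_flight_;
            }
            return startable;
        }

        // Call when an HTTP fetch completes (success or failure) to free a slot.
        void on_fetch_done() { if (in_flight_ > 0) --in_flight_; }

    private:
        std::size_t max_concurrent_;
        std::size_t in_flight_ = 0;
        std::priority_queue<TextureRequest, std::vector<TextureRequest>, ByPriority> pending_;
    };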


No, this is a client-side problem in file handling, not an HTTP problem...
You can parallelize billions of downloads; the failure (as you can see in my
logs) is in local filesystem file handling. Maybe there are more locks than
necessary, and the file/decoder handler should detect the system limits and
adapt the pipelines accordingly.
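
Detecting the limits could be as simple as this sketch (POSIX getrlimit; the
128-descriptor headroom and the fallback values are just guesses of mine, not
how the viewer actually sizes anything):

    #include <algorithm>
    #include <cstddef>
    #include <sys/resource.h>

    // How many texture/decoder file handles we allow ourselves to keep open.
    std::size_t texture_handle_budget() {
        struct rlimit rl;
        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
            return 256;  // conservative fallback if the limit cannot be read
        }
        const std::size_t soft_limit = static_cast<std::size_t>(rl.rlim_cur);
        const std::size_t reserved = 128;  // headroom for sockets, cache index, logs...
        return std::max<std::size_t>(64, soft_limit > reserved ? soft_limit - reserved : 64);
    }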

From what I see in the logs, I suppose that when a cached texture fails
(timeout, or bad CRC from packet loss) the automatic cleanup attempts a
clear_while_run, exhausting all the files that can be opened. If an HTTP
timeout or a corrupted cached texture is found, only the SINGLE download or
the single file should be deleted or dropped, not the whole cache.

If a running viewer has 600 open textures and gets a timeout, it then re-opens
all of them to clean them up, exceeding the default 1024 file-descriptor limit.
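
Roughly what I mean, as a sketch (TextureCache and its members are invented
names, not the real cache code): evict only the entry that failed, instead of
sweeping every cached file at once.

    #include <cstdint>
    #include <filesystem>
    #include <unordered_map>

    class TextureCache {
    public:
        // Drop exactly one entry, identified by texture id, leaving the rest alone.
        void evict_one(std::uint64_t texture_id) {
            auto it = entries_.find(texture_id);
            if (it == entries_.end()) return;
            std::filesystem::remove(it->second.path);  // delete only this cache file
            entries_.erase(it);
        }

        // The pattern I'm arguing against: one failure triggering a sweep over
        // every cached entry instead of just the one that failed.
        void clear_all() {
            for (auto& kv : entries_)
                std::filesystem::remove(kv.second.path);
            entries_.clear();
        }

    private:
        struct Entry { std::filesystem::path path; };
        std::unordered_map<std::uint64_t, Entry> entries_;
    };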

I've noticed some grey textures too; I'm starting to think about the old
(patched) bug where a decode failure was not retried and the pipe held the
channel open, wasting resources.
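
A bounded retry that releases the channel when it gives up is roughly what I
have in mind (decode_texture, close_channel and the three-attempt limit are
placeholders of mine, not viewer functions):

    #include <functional>

    // Try the decode a few times; if it still fails, release the channel
    // instead of leaving it open.
    bool decode_with_retry(const std::function<bool()>& decode_texture,
                           const std::function<void()>& close_channel,
                           int max_attempts = 3) {
        for (int attempt = 0; attempt < max_attempts; ++attempt) {
            if (decode_texture())
                return true;  // decoded fine, nothing to clean up
        }
        close_channel();  // give up and free the resources rather than holding them
        return false;
    }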




-- 
Sent by iPhone