Re: [opensource-dev] before open a Jira, APR

2010-09-01 Thread Tofu Linden
Altair Sythos Memo wrote:
...
 2010-08-31T21:02:02Z WARNING: ll_apr_warn_status: APR: Too many open
 files 
 
 2010-08-31T21:02:02Z WARNING: open:  Attempting to open
 filename: /home/user/.secondlife/cache/texturecache/texture.entries
...
 not sure whether this is a viewer problem or something specific to my
 Linux box... which is why I'm asking here whether anybody else has seen
 this kind of crash with the latest build

This is a genuine problem.  Please file it in jira.  Any eyes on this
would be welcome.
It seems to have popped up recently - it may be related to the latest
HTTP texture changes, not sure (perhaps leaking file handles, though
lsof says we're still somewhat under the default limits).


Re: [opensource-dev] before open a Jira, APR

2010-09-01 Thread Tateru Nino
  On 1/09/2010 8:28 PM, Tofu Linden wrote:
 Altair Sythos Memo wrote:
 ...
 2010-08-31T21:02:02Z WARNING: ll_apr_warn_status: APR: Too many open
 files

 2010-08-31T21:02:02Z WARNING: open:  Attempting to open
 filename: /home/user/.secondlife/cache/texturecache/texture.entries
 ...
 not sure whether this is a viewer problem or something specific to my
 Linux box... which is why I'm asking here whether anybody else has seen
 this kind of crash with the latest build
 This is a genuine problem.  Please file it in jira.  Any eyes on this
 would be welcome.
 It seems to have popped up recently - it may be related to the latest
 HTTP texture changes, not sure (perhaps leaking file handles, though
 lsof says we're still somewhat under the default limits).

Hmm. It might not be an actual leak per se... I've noticed in busy areas
that the viewer will often hit a *lot* of parallel HTTP texture fetches.
I'm not sure if there's a hard limit there, because my texture console
can easily overflow with active texture-fetches. That's... what... 30+,
and I'm guessing a minimum of two file-handles each right there (and gut
feeling says there's probably closer to 3 involved).

If there's no cap on the number of parallel HTTP texture fetches (or the
cap is too large), then you'll see more simultaneous fetches for
higher-latency users (as an artifact of each HTTP session taking longer
to complete), all other things being equal. If that is the case, then
it's likely to have a far lower incidence if you're in the continental
USA. Altair... you're in southern Europe, right?
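
For illustration, a minimal sketch of the kind of cap being discussed
(this is not the viewer's actual code; the class and the per-fetch
handle estimate are assumptions):

#include <condition_variable>
#include <mutex>

// Hypothetical gate limiting concurrent texture fetches, so that N
// in-flight requests (each possibly holding 2-3 file descriptors:
// socket, cache file, decoder temp) can't exhaust the process fd limit.
class FetchGate {
public:
    explicit FetchGate(unsigned maxConcurrent) : slots_(maxConcurrent) {}

    void acquire() {                 // block until a fetch slot is free
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return slots_ > 0; });
        --slots_;
    }

    void release() {                 // give the slot back when the fetch ends
        {
            std::lock_guard<std::mutex> lk(m_);
            ++slots_;
        }
        cv_.notify_one();
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    unsigned slots_;
};

Each fetch would call acquire() before opening its socket and cache
files, and release() when done or on error.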

-- 
Tateru Nino
http://dwellonit.taterunino.net/



Re: [opensource-dev] before open a Jira, APR

2010-09-01 Thread Oz Linden (Scott Lawrence)

 On 2010-09-01 7:12, Tateru Nino wrote:

Hmm. It might not be an actual leak per se... I've noticed in busy areas
that the viewer will often hit a *lot* of parallel HTTP texture fetches.
That's not very good HTTP behavior, but I doubt that we can get it
changed until the servers are properly supporting persistent connections.



Re: [opensource-dev] before open a Jira, APR

2010-09-01 Thread Tateru Nino



On 1/09/2010 11:24 PM, Oz Linden (Scott Lawrence) wrote:

On 2010-09-01 7:12, Tateru Nino wrote:

Hmm. It might not be an actual leak per se... I've noticed in busy areas
that the viewer will often hit a *lot* of parallel HTTP texture fetches.
That's not very good HTTP behavior, but I doubt that we can get it
changed until the servers are properly supporting persistent connections.
Indeed. It's not exactly best practice. Creating a priority list of
textures and a configurable concurrent requests cap (default: 16?) would
probably be the way to go.
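
As a rough sketch of that suggestion (illustrative only; the request
fields, the scheduler, and startHttpFetch are made-up names, not the
viewer's actual data structures):

#include <cstdint>
#include <queue>
#include <vector>

// Hypothetical request record; "priority" could be projected screen
// area, distance, or whatever ranking the viewer chooses.
struct TextureRequest {
    std::uint64_t id;
    float priority;
};

struct LowerPriority {
    bool operator()(const TextureRequest& a, const TextureRequest& b) const {
        return a.priority < b.priority;   // highest priority fetched first
    }
};

class FetchScheduler {
public:
    explicit FetchScheduler(int cap) : cap_(cap) {}   // e.g. the proposed 16

    void enqueue(const TextureRequest& r) { pending_.push(r); pump(); }
    void onFetchDone() { --inFlight_; pump(); }       // called by HTTP layer

private:
    void pump() {                  // dispatch while below the concurrency cap
        while (inFlight_ < cap_ && !pending_.empty()) {
            TextureRequest next = pending_.top();
            pending_.pop();
            ++inFlight_;
            startHttpFetch(next);
        }
    }
    void startHttpFetch(const TextureRequest&) {
        // here: hand the request to the HTTP layer; its completion
        // callback is expected to call onFetchDone()
    }

    std::priority_queue<TextureRequest, std::vector<TextureRequest>,
                        LowerPriority> pending_;
    int cap_;
    int inFlight_ = 0;
};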


--
Tateru Nino
http://dwellonit.taterunino.net/


Re: [opensource-dev] before open a Jira, APR

2010-09-01 Thread Francesco Rabbi
On 01/Sep/2010, at 15:36, Tateru Nino tateru.n...@gmail.com
wrote:



On 1/09/2010 11:24 PM, Oz Linden (Scott Lawrence) wrote:

On 2010-09-01 7:12, Tateru Nino wrote:

Hmm. It might not be an actual leak per se... I've noticed in busy areas
that the viewer will often hit a *lot* of parallel HTTP texture fetches.

 That's not very good HTTP behavior, but I doubt that we can get it changed
until the servers are properly supporting persistent connections.

Indeed. It's not exactly best practice. Creating a priority list of textures
and a configurable concurrent requests cap (default: 16?) would probably be
the way to go.


No, this is a client-side problem in file handling, not an HTTP problem...
You can parallelize billions of downloads; the failure (you can see it in
my logs) is in local filesystem file handling. Maybe there are more locks
than necessary; the file/decoder handler must detect the limits and adapt
the pipes.

From the logs I suppose that when a cached texture fails (timeout, bad CRC
from packet loss), the automatic cleanup tries a clear_while_run, wasting
all the openable files. If an HTTP timeout or a corrupted cached texture is
found, the SINGLE download or the single file must be deleted or dropped,
not the whole cache.

If a running viewer has 600 open textures and gets a timeout, it now
re-opens them all to clean them, exceeding the default 1024 limit.

I've noticed some grey textures too; I'm starting to think of the old
(patched) bug where a decode failed without a retry and the pipe held the
channel open, wasting resources.
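
If the handler really must detect limits and adapt, a minimal POSIX
sketch of what that could look like (the headroom and per-fetch figures
are guesses for illustration, not measured values):

#include <sys/resource.h>
#include <algorithm>

// Query the actual fd limit at startup and size the fetch pool from it,
// rather than assuming the default 1024.
int maxConcurrentFetches() {
    rlimit rl{};
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
        return 16;                         // conservative fallback
    const long headroom = 256;             // fds reserved for everything else
    const long perFetch = 3;               // socket + cache file + decoder temp
    long cap = (static_cast<long>(rl.rlim_cur) - headroom) / perFetch;
    return static_cast<int>(std::clamp(cap, 4L, 64L));
}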




-- 
Sent by iPhone

Re: [opensource-dev] before open a Jira, APR

2010-09-01 Thread Oz Linden (Scott Lawrence)
  On 2010-09-01 9:46, Francesco Rabbi wrote:

 No, this is a client-side problem in file handling, not an HTTP
 problem... You can parallelize billions of downloads,
Whether or not you _can_, you _shouldn't_.  The HTTP spec is quite clear 
on this point.

We'd get much better performance than we're getting now if we fixed the 
servers to support persistent connections; there's a lot of overhead in 
setting up a new connection - extra round trips plus TCP slow-start.
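
As a small illustration of the saving (generic libcurl usage with
placeholder URLs, not the viewer's actual HTTP stack): reusing one easy
handle for successive requests lets libcurl keep the TCP connection open
when the server honors keep-alive.

#include <curl/curl.h>

void fetchSequence() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* h = curl_easy_init();
    if (h) {
        const char* urls[] = {
            "http://texture-host.example/tex/1",
            "http://texture-host.example/tex/2",
        };
        for (const char* url : urls) {
            curl_easy_setopt(h, CURLOPT_URL, url);
            curl_easy_perform(h);   // connection is cached and reused if
                                    // the server allows persistence
        }
        curl_easy_cleanup(h);       // closes the cached connection
    }
    curl_global_cleanup();
}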



Re: [opensource-dev] before open a Jira, APR

2010-09-01 Thread Tateru Nino


On 2/09/2010 12:20 AM, Oz Linden (Scott Lawrence) wrote:
On 2010-09-01 9:46, Francesco Rabbi wrote:
 No, this is a client-side problem in file handling, not an HTTP
 problem... You can parallelize billions of downloads,
 Whether or not you _can_, you _shouldn't_.  The HTTP spec is quite clear
 on this point.
RFC 2616 makes for great reading, and the *majority* of it is superbly
thought through. I spent years with a copy close to hand at all times.
 We'd get much better performance than we're getting now if we fixed the
 servers to support persistent connections; there's a lot of overhead in
 setting up a new connection - extra round trips plus TCP slow-start.
Concur. However, persistent connections (and possibly pipelining) will 
pretty much mean that you'll need to make sure you're maintaining a 
priority-queue of textures to fetch. Otherwise it will *feel* slower to 
the end-user even if it is actually faster in total fetch-and-render 
time. I've been down this road before.

-- 
Tateru Nino
http://dwellonit.taterunino.net/



Re: [opensource-dev] before open a Jira, APR

2010-09-01 Thread Oz Linden (Scott Lawrence)
  On 2010-09-01 10:36, Tateru Nino wrote:

 On 2/09/2010 12:20 AM, Oz Linden (Scott Lawrence) wrote:
 On 2010-09-01 9:46, Francesco Rabbi wrote:
 No, this is a client-side problem in file handling, not an HTTP
 problem... You can parallelize billions of downloads,
 Whether or not you _can_, you _shouldn't_.  The HTTP spec is quite clear
 on this point.
 RFC 2616 makes for great reading, and the *majority* of it is superbly
 thought through. I spent years with a copy close to hand at all times.
 We'd get much better performance than we're getting now if we fixed the
 servers to support persistent connections; there's a lot of overhead in
 setting up a new connection - extra round trips plus TCP slow-start.
 Concur. However, persistent connections (and possibly pipelining) will
 pretty much mean that you'll need to make sure you're maintaining a
 priority-queue of textures to fetch. Otherwise it will *feel* slower to
 the end-user even if it is actually faster in total fetch-and-render
 time. I've been down this road before.

Correct, but that's also something we should improve anyway - doing a 
better job of prioritizing which textures are loaded and in what order 
could make things _seem_ faster, even if the total time was the same.  
Doing both that and the connection handling together would be a big win 
(file under Faster).





Re: [opensource-dev] before open a Jira, APR

2010-09-01 Thread Francesco Rabbi
On 01/Sep/2010, at 16:47, Oz Linden (Scott Lawrence)
o...@lindenlab.com wrote:



 Correct, but that's also something we should improve anyway - doing a
 better job of prioritizing which textures are loaded and in what order
 could make things _seem_ faster, even if the total time was the same.
 Doing both that and the connection handling together would be a big win
 (file under Faster).

It may be useful to do something like the old Windlight... The area
visible to the avatar (cam or mouselook) should be split into slices, and
textures should be downloaded starting from the closest slice, 16 textures
per 512 Kbps of inbound network speed (a client setting). A fallback system
for a single texture that is corrupted or timed out would be great...
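
A rough sketch of that idea (all names and the bandwidth formula are
illustrative assumptions, not existing viewer settings):

#include <algorithm>
#include <cstdint>
#include <vector>

struct VisibleTexture {
    std::uint64_t id;
    float distance;   // meters from the camera to the nearest face using it
};

// Scale the concurrency cap with configured inbound bandwidth:
// 16 requests per 512 Kbps, per the proposal above.
int capForBandwidth(int inboundKbps) {
    return std::max(1, 16 * inboundKbps / 512);
}

// Closest slice first: sorting by camera distance approximates the
// slice ordering without explicit slice boundaries.
void sortClosestFirst(std::vector<VisibleTexture>& v) {
    std::sort(v.begin(), v.end(),
              [](const VisibleTexture& a, const VisibleTexture& b) {
                  return a.distance < b.distance;
              });
}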



-- 
Sent by iPhone


Re: [opensource-dev] before open a Jira, APR

2010-09-01 Thread Sythos
On Wed, 01 Sep 2010 11:28:42 +0100
Tofu Linden t...@lindenlab.com wrote:

 Altair Sythos Memo wrote:
 ...
  2010-08-31T21:02:02Z WARNING: ll_apr_warn_status: APR: Too many open
  files 
  
  2010-08-31T21:02:02Z WARNING: open:  Attempting to open
  filename: /home/user/.secondlife/cache/texturecache/texture.entries
 ...
  not sure whether this is a viewer problem or something specific to my
  Linux box... which is why I'm asking here whether anybody else has seen
  this kind of crash with the latest build
 
 This is a genuine problem.  Please file it in jira.  Any eyes on this
 would be welcome.
 It seems to have popped up recently - it may be related to the latest
 HTTP texture changes, not sure (perhaps leaking file handles, though
 lsof says we're still somewhat under the default limits).

http://jira.secondlife.com/browse/VWR-22757

Added info, but I think lsof isn't the right tool: whether I'm on an empty
sim (1 texture: the ground) or in a city full of textures, lsof reports
about the same numbers.

Is there a debug option I can enable to see the files opened by the HTTP
handler?
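
One option on Linux, as a sketch: count the entries in /proc/self/fd from
inside the process and log it whenever a fetch starts (a hypothetical
helper, not an existing viewer debug setting):

#include <dirent.h>
#include <cstdio>

// Count this process's open descriptors via /proc/self/fd (Linux).
// Returns -1 if the directory can't be opened.
int countOpenFds() {
    DIR* d = opendir("/proc/self/fd");
    if (!d)
        return -1;
    int n = 0;
    while (readdir(d) != nullptr)
        ++n;
    closedir(d);
    return n - 3;   // ignore ".", "..", and the fd opendir itself uses
}

// e.g. std::printf("open fds now: %d\n", countOpenFds());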


[opensource-dev] before open a Jira, APR

2010-08-31 Thread Sythos
I'm getting a lot of crashes on a 32-bit Linux system.


[cut]
2010-08-31T21:02:02Z INFO: purgeAllTextures: Deleting files in
directory: /home/user/.secondlife/cache/texturecache/f

2010-08-31T21:02:02Z WARNING: ll_apr_warn_status: APR: Too many open
files 

2010-08-31T21:02:02Z WARNING: open:  Attempting to open
filename: /home/user/.secondlife/cache/texturecache/texture.entries

2010-08-31T21:02:02Z INFO: purgeAllTextures: The entire texture cache
is cleared. 

2010-08-31T21:02:02Z WARNING: ll_apr_warn_status: APR: Too
many open files 

2010-08-31T21:02:02Z WARNING: open:  Attempting to open
filename: /home/user/.secondlife/cache/texturecache/texture.entries

2010-08-31T21:02:02Z WARNING: write: apr mFile is removed by somebody
else. Can not write. 

2010-08-31T21:02:02Z WARNING: clearCorruptedCache:
the texture cache is corrupted, need to be cleared.

2010-08-31T21:02:02Z INFO: purgeAllTextures: Deleting files in
directory: /home/user/.secondlife/cache/texturecache/0
[/cut]

All this repeats several times, then the viewer crashes badly.

I double-checked the limits on my user; nothing reasonably related to the
number of open files or the like... build: Second Life 2.1.2 (208719) Aug 30
2010 09:18:15 (Second Life Development)

Not sure whether this is a viewer problem or something specific to my
Linux box... which is why I'm asking here whether anybody else has seen
this kind of crash with the latest build