Jonas Borgström wrote:
> On Dec 14, 2008, at 12:16 PM, Omry Yadan wrote:
>
>   
>>> I'd be interested to know if you still get the crash if you enable
>>> pooling on linux.
>>>
>>>       
>> Enabling pooling seems to have resolved my situation, and I have not
>> seen any crashes yet (in the last 12 hours).
>> Quoting the ticket:
>> "But on Linux, you very quickly run into the following crash (all it
>> takes are 2 concurrent requests to the timeline):"
>>
>> since my stress test was 15 concurrent connections to the timeline  
>> and I
>> had no problems, I assume this is no longer an issue.
>>     
>
>
> I think it's important to remember that a connection pool's job is  
> to reduce the connection overhead by reusing existing connections.
> This is important for most server based databases where this overhead  
> can be very noticeable. But with sqlite's in-process design this  
> benefit should be much less noticeable if it's even measurable.
>
> SQLite's Achilles' heel is concurrency. Only one active transaction is  
> allowed at a time, regardless if the connection is "pooled" or not.
>
> So that's why I can't really understand how enabling connection  
> pooling could on its own so clearly make all your concurrency related  
> problems go away.
>
> / Jonas
>   

I tested using siege, a simple http stress tester.
my test was simply to stress the timeline url with 15 concurrent
clients.
the result of this - before enabling pooling - was that soon enough trac
began to time out and/or return http 500.
osimons suggested checking the number of files opened by the wsgi
process, which showed that while under stress the number of open trac.db
file handles sometimes jumped very high (almost 1000 in one case).
once I figured out that pooling was disabled, I hacked the code to
re-enable it, and noticed that the number of open trac.db file handles
no longer exceeded the number of allowed connections in the pool.
also, my stress test now works fine.
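to illustrate why re-enabling pooling bounds the handle count, here is a
minimal sketch of a semaphore-bounded pool - not Trac's actual pool
implementation, and the SimplePool name is made up - showing that at most
`size` connections (and therefore trac.db file handles) can ever exist at
once, no matter how many concurrent requests arrive:

```python
import sqlite3
import threading

class SimplePool:
    """Hypothetical bounded pool: at most `size` connections ever exist."""

    def __init__(self, path, size=5):
        self._path = path
        self._sem = threading.Semaphore(size)  # caps concurrent connections
        self._lock = threading.Lock()
        self._idle = []          # connections available for reuse
        self.created = 0         # total connections ever opened

    def acquire(self):
        self._sem.acquire()      # blocks once `size` connections are out
        with self._lock:
            if self._idle:
                return self._idle.pop()
            self.created += 1
        # check_same_thread=False so a pooled connection can be reused
        # by a different request-handling thread
        return sqlite3.connect(self._path, check_same_thread=False)

    def release(self, conn):
        with self._lock:
            self._idle.append(conn)  # keep for reuse instead of reopening
        self._sem.release()
```

without a pool, each concurrent request opens its own connection, and if
requests stall on sqlite's write lock, unclosed connections (and their
file handles) pile up until the process hits its open-files limit.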

while looking at the code, I could not figure out when the connections
to sqlite are actually closed, and I suspect the problem is that they
are never closed explicitly - only when they are garbage collected.
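as a small illustration of the suspected issue: a sqlite3 connection
holds its file handle until close() is called or the object happens to
be garbage collected, so closing explicitly (e.g. in a try/finally)
releases the trac.db handle deterministically at the end of each
request. this is a generic sketch, not Trac's code; query_timeline is a
made-up name:

```python
import sqlite3

def query_timeline(db_path):
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute("SELECT 1").fetchone()
    finally:
        # without this, the file handle stays open until the GC
        # eventually collects the connection object
        conn.close()
```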

this is with the almost trivial load caused by 15 concurrent connections
to a strong server with plenty of RAM.

as for system limits, this is the output of ulimit -a:

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 16308
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16308
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
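note that the "open files" limit above is 1024, which lines up with the
almost-1000 open trac.db handles observed under stress - the process was
presumably running out of file descriptors, which would explain the 500s.
as a sketch, the same per-process limit can be read from Python with the
stdlib resource module (Unix only):

```python
import resource

# soft/hard limits on open file descriptors for this process;
# the soft limit is what `ulimit -n` reports in the shell
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("open files soft limit:", soft)
print("open files hard limit:", hard)
```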


--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups "Trac 
Development" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to 
[email protected]
For more options, visit this group at 
http://groups.google.com/group/trac-dev?hl=en
-~----------~----~----~----~------~----~------~--~---