>>>>> "ML" == Marc Lehmann <[EMAIL PROTECTED]> writes:
ML> The surprising one was the pure perl implementation, which was quite on
ML> par with C-based event loops such as Event or Glib. I did expect the pure
ML> perl implementation to be at least a factor of three slower than Event or
ML> Glib.

ML> As the pure perl loop wasn't written with high performance in mind, this
ML> prompted me to optimise it for some important cases (mostly to get rid of
ML> the O(n²) degenerate cases and improve the select bitmask parsing for
ML> the sparse case).

check out stem's pure perl event loop. there are examples in the
/sessions dir on how to use that directly without the rest of the
modules. it does things in a different direction and doesn't scan
select's bit masks; instead it scans the interesting handles and sees
whether their bits are set. it should exhibit good behavior under
growth as all the data are managed in hashes.

ML> I then made a second benchmark, designed not to measure anyevent overhead,
ML> but to measure real-world performance of a socket server.

that /sessions code also shows use of the asyncio module. if you can
benchmark that i would be interested in the results.

ML> The result is that the pure perl event loop used as fallback in AnyEvent
ML> single-handedly beats Glib by a large margin, and even Event by a factor
ML> of two.

ML> For small servers, the overhead introduced by running a lot of perl
ML> opcodes per iteration dominates, however, as reflected in the last benchmark.

in a heavily loaded server most of the work is in the i/o and should
overwhelm the event loop itself. that is the whole purpose of event
loops, as we all know here.

ML> However, the net result is that the pure perl event loop performs
ML> better than almost all other event loops (EV being the only exception)
ML> in serious/medium-sized cases, while I originally expected it to fail
ML> completely w.r.t. performance and to be usable only as a workaround when
ML> no "better" event module is installed.

i don't find that surprising. perl's i/o is decent and, as i said
above, a loaded server is doing mostly i/o.

ML> All the benchmark data and explanations can be found here:

ML> http://pod.tst.eu/http://cvs.schmorp.de/AnyEvent/lib/AnyEvent.pm#BENCHMARKS

ML> The code is not yet released and likely still buggy (the question is
ML> whether any bugs affect the benchmark results). It is only available via
ML> CVS: http://software.schmorp.de/pkg/AnyEvent

i will take a gander and see if i can play with it and add stem's loop
to it. if you want to work on this with me, i wouldn't mind the help.

thanx,

uri

-- 
Uri Guttman  ------  [EMAIL PROTECTED]  --------  http://www.sysarch.com --
-----  Perl Code Review, Architecture, Development, Training, Support  ------
---------  Free Perl Training --- http://perlhunter.com/college.html ---------
---------  Gourmet Hot Cocoa Mix  ----  http://bestfriendscocoa.com ---------
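
to make the handle-scanning idea above concrete, here is a minimal
sketch of one pass of such a loop (hypothetical names, read side only;
this illustrates the approach and is not stem's actual code):

    use strict;
    use warnings;

    # hypothetical layout: watched read handles live in a hash keyed by
    # file descriptor number, so the per-pass work tracks the number of
    # watchers rather than the size of the bitmask
    my %read_watchers;   # fileno => { fh => $fh, cb => $callback }

    sub watch_read {
        my ( $fh, $cb ) = @_;
        $read_watchers{ fileno $fh } = { fh => $fh, cb => $cb };
    }

    sub loop_once {
        my ($timeout) = @_;

        # build the read bitmask only from the handles we care about
        my $rin = '';
        vec( $rin, $_, 1 ) = 1 for keys %read_watchers;

        my $rout   = $rin;
        my $nfound = select( $rout, undef, undef, $timeout );
        return unless $nfound > 0;

        # scan the interesting handles and test their bits, rather than
        # walking every bit of the returned mask
        for my $fd ( keys %read_watchers ) {
            $read_watchers{$fd}{cb}->( $read_watchers{$fd}{fh} )
                if vec( $rout, $fd, 1 );
        }
    }

a real loop also needs write/exception masks, timers and error
handling, but the point is that the post-select scan walks the watcher
hash, so its cost follows the number of watched handles rather than the
highest fd in the mask.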