Hi List,
The topic of prediction latency seems to pop up from time to time, either
on the list or on Stack Overflow [1].
I'm pondering the idea of creating a performance benchmark in the spirit of
the out-of-core example.
This benchmark could be used to compare and track the prediction times of a
few popular algorithms in sklearn.
It could also show new users what they can expect from the different
algorithms and help them choose a solution that fits their latency
requirements.
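To make the idea concrete, here is a minimal sketch of what such a benchmark could look like: it times atomic (one sample at a time) predictions for a couple of estimators and reports latency percentiles. The choice of estimators, dataset, and the reported statistics are just assumptions for illustration, not a proposed design.

```python
# Hypothetical sketch of the proposed benchmark: measure per-sample
# prediction latency for a few popular sklearn estimators.
# Estimators, dataset size, and statistics are illustrative choices.
import time

import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Ridge

X, y = make_regression(n_samples=1000, n_features=50, random_state=0)
X_train, X_test = X[:900], X[900:]
y_train = y[:900]

for est in (Ridge(), GradientBoostingRegressor()):
    est.fit(X_train, y_train)
    # Atomic prediction: one sample per predict() call, as in a
    # low-latency production setting.
    runtimes = []
    for x in X_test:
        start = time.perf_counter()
        est.predict(x.reshape(1, -1))
        runtimes.append(time.perf_counter() - start)
    print("%s: median %.1f us, 90th percentile %.1f us"
          % (est.__class__.__name__,
             1e6 * np.median(runtimes),
             1e6 * np.percentile(runtimes, 90)))
```

A real benchmark would of course also vary the number of features, sparsity, and model size, and track the results over releases.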
I don't know whether other machine learning toolkits have this kind of
benchmark (and do we already have various pieces out there?), but I think
it would be a great feature for "enterprise" users, who often need to
choose a toolkit in a limited amount of time and then stick with it for the
life of the project.
It could also lend weight to the idea that Python can/should be fast ;)
Any thoughts on that?
Eustache
[1]
http://stackoverflow.com/questions/11295755/gradient-boosting-predictions-in-low-latency-production-environments/13137344#13137344
_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general