Antoine Pitrou added the comment:

On 04/02/2016 14:54, Yury Selivanov wrote:
> 30% faster floats (sic!) is a serious improvement that shouldn't
> just be discarded. Many applications have floating-point calculations
> one way or another, but don't use numpy because it's overkill.
Can you give an example of such an application, and how it would actually benefit from "faster floats"? I'm curious why anyone who wants fast FP calculations would use pure Python with CPython...

Discarding Numpy because it's "overkill" sounds misguided to me. That's like discarding asyncio because it's "less overkill" to write your own select() loop. It's often far more productive to use an established, robust, optimized library than to tweak your own low-level code. (And Numpy is easier to learn than asyncio ;-))

I'm not violently opposed to the patch, but I think maintenance effort devoted to such micro-optimizations is somewhat wasted. And once you add such a micro-optimization, trying to remove it often faces a barrage of FUD about making Python slower, even if the micro-optimization is practically worthless.

> Python 2 is much faster than Python 3 on any kind of numeric
> calculations.

Actually, it shouldn't really be faster on FP calculations, since the float object hasn't changed (as opposed to int/long). So I'm skeptical of claims that FP-heavy code was made slower by Python 3, unless there's also integer handling involved (e.g. indexing).

----------
_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue21955>
_______________________________________
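As an illustration of the pure-Python vs. Numpy trade-off discussed above, here is a small timeit sketch (hypothetical, not part of the original message; the array size and iteration count are arbitrary, and Numpy is assumed to be installed for the second half):

```python
import timeit

# Pure-Python float loop: every element is a separate heap-allocated
# float object, and each multiplication allocates a new one.
setup_py = "data = [float(i) for i in range(10_000)]"
t_py = timeit.timeit("sum(x * 1.5 for x in data)", setup=setup_py, number=100)
print(f"pure Python loop: {t_py:.4f}s")

# Numpy version: one vectorized C loop over a packed float64 array.
try:
    import numpy  # noqa: F401
    setup_np = "import numpy as np; data = np.arange(10_000, dtype=np.float64)"
    t_np = timeit.timeit("float((data * 1.5).sum())",
                         setup=setup_np, number=100)
    print(f"Numpy vectorized: {t_np:.4f}s")
except ImportError:
    print("Numpy not installed; skipping vectorized comparison")
```

On most machines the vectorized version wins by a wide margin, which is the point: a 30% speedup of float objects doesn't close the gap with a packed-array library for genuinely FP-heavy workloads.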