On 19 June 2018 at 16:12, INADA Naoki <songofaca...@gmail.com> wrote:
>
> On Tue, Jun 19, 2018 at 2:56 PM Jeroen Demeyer <j.deme...@ugent.be> wrote:
>>
>> On 2018-06-18 16:55, INADA Naoki wrote:
>> > Speeding up most Python functions and some builtin functions was very
>> > significant.
>> > But I doubt that making some 3rd party calls 20% faster can make real
>> > applications significantly faster.
>>
>> These two sentences are almost contradictory. I find it strange to claim
>> that a given optimization was "very significant" in specific cases while
>> saying that the same optimization won't matter in other cases.
>
> It's not contradictory, because there is a basis for it:
>
>   In most real-world Python applications, calls to Python methods or
>   builtin functions far outnumber other kinds of calls.
>
> For example, optimizing builtin `tp_init` or `tp_new` with FASTCALL was
> rejected because its implementation is complex and its performance gain
> is not significant enough on macro benchmarks.
>
> And I doubt that 3rd party calls outnumber calls to builtin tp_init or
> tp_new.
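
To put a rough number on that claim for a particular workload, one option
(just a sketch; `workload` below is a placeholder, not anything measured in
this thread) is to profile it and bucket cProfile's call counts by whether
the callee was recorded as a C/builtin function or a Python-level one:

    import cProfile
    import json
    import pstats

    def workload():
        # Stand-in for "a real application"; swap in whatever you care about.
        payload = '{"a": [1, 2, 3], "b": {"c": "d"}}'
        for _ in range(10000):
            json.loads(payload)

    prof = cProfile.Profile()
    prof.runcall(workload)

    builtin_calls = 0
    python_calls = 0
    # pstats keys are (filename, lineno, funcname); C/builtin callees are
    # recorded with filename '~'.
    stats = pstats.Stats(prof).stats
    for (filename, lineno, funcname), (cc, nc, tt, ct, callers) in stats.items():
        if filename == '~':
            builtin_calls += nc
        else:
            python_calls += nc

    print("builtin/C calls:", builtin_calls)
    print("Python-level calls:", python_calls)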

I was going to ask a question here about JSON parsing
micro-benchmarks, but then I went back and re-read
https://blog.sentry.io/2016/10/19/fixing-python-performance-with-rust.html
and realised that the main problem discussed in that article was the
*memory* overhead of creating full Python object instances, not the
runtime cost of instantiating those objects.
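
For what it's worth, here's a minimal sketch of that distinction (the
Frame/SlottedFrame names are invented for illustration, not taken from the
article): the cost that hurts is the per-instance __dict__ carried by every
parsed record, not the time spent instantiating it.

    import sys

    class Frame:
        # Ordinary class: every instance carries its own __dict__.
        def __init__(self, filename, lineno, name):
            self.filename = filename
            self.lineno = lineno
            self.name = name

    class SlottedFrame:
        # __slots__ removes the per-instance __dict__ entirely.
        __slots__ = ("filename", "lineno", "name")
        def __init__(self, filename, lineno, name):
            self.filename = filename
            self.lineno = lineno
            self.name = name

    f = Frame("app.py", 42, "handler")
    s = SlottedFrame("app.py", 42, "handler")

    # The dict-backed instance pays for the object header *and* its dict;
    # multiply that by millions of parsed records and memory, not call
    # overhead, becomes the bottleneck.
    print(sys.getsizeof(f) + sys.getsizeof(f.__dict__))
    print(sys.getsizeof(s))

Trimming the per-object footprint like that is the kind of saving the
article chased much more aggressively, by keeping the data out of full
Python objects altogether.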

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia