Or I get 'OptimizationMarker::kInOptimizationQueue' if the function's 
compilation has not finished before the loop ends. Here I got 
'OptimizationMarker::kNone', which suggests the compilation was never kicked 
off.
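(Editor's note: since the marker you observe depends on whether the background compile job finishes before the loop does, one way to take that race out of the picture is to disable concurrent recompilation, so the optimizing compile runs synchronously. This is a sketch: it assumes a local `./d8` build, and `--no-concurrent-recompilation` is the conventional negated form of V8's `--concurrent-recompilation` flag, so exact behavior may vary by V8 version.)

```shell
# Sketch, not verified against every V8 version: with concurrent
# recompilation disabled, the optimizing compile requested by
# %OptimizeFunctionOnNextCall runs on the main thread, so the marker/tier
# printed by %DebugPrint no longer depends on background-job timing.
./d8 --allow-natives-syntax --no-concurrent-recompilation --trace-opt foo.js
```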
On Tuesday, December 7, 2021 at 11:21:47 PM UTC+8 Jiading Guo wrote:

> Hi Leszek,
>
> Thanks for your excellent explanation! Now I have a better understanding 
> of function optimization.
>
> However, after running the code you listed I still got `
> OptimizationMarker::kNone`:
>
>  - optimized code: 0x204700044001 <Code TURBOFAN>
>
>  - optimization marker: OptimizationMarker::kNone
>  - optimization tier: OptimizationTier::kTopTier
>
> Is this the expected behavior? I was expecting 
> `OptimizationMarker::kCompileOptimized` here, since the function has 
> reached the top optimization tier, TurboFan, as shown in the 
> `optimization tier` field above. 
>
> Many Thanks,
> Jiading
> On Tuesday, December 7, 2021 at 9:30:58 PM UTC+8 les...@chromium.org 
> wrote:
>
>> There are actually a couple of things going on here.
>>
>> First of all, this is actually a great example of why we always tell 
>> people that we can't "just" optimize functions ahead of time. You may 
>> notice that "add" appears in the --trace-opt output twice: it's optimized 
>> exactly when you request it, but as soon as that optimized code is 
>> executed, it immediately deopts due to insufficient type feedback for the 
>> add operation (this is called a "soft" deopt). You'll see the 
>> deopt/bailout if you run with --trace-deopt. Then, later, it's optimized 
>> again after being run lots of times, this time as a concurrent compilation 
>> that may or may not complete before your loop ends. To get "add" properly 
>> optimized using OptimizeFunctionOnNextCall, you have to
>>
>>    1. Call %PrepareFunctionForOptimization(add) -- this makes sure that 
>>    the add function can collect type feedback and that its bytecode is not 
>>    flushed away by GC
>>    2. Call add(1,2) -- this is a call with the correct types, two small 
>>    integers, which teaches it the correct type feedback for the sum
>>    3. Call %OptimizeFunctionOnNextCall(add)
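(Editor's note: the two-stage behavior described above can be observed directly. A sketch, assuming the original snippet is saved as add.js and `./d8` is a local V8 build; both tracing flags are mentioned in this thread.)

```shell
# --trace-opt shows "add" being optimized twice in the premature case;
# --trace-deopt shows the "soft" deopt caused by insufficient type feedback.
./d8 --allow-natives-syntax --trace-opt --trace-deopt add.js
```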
>>
>> Secondly, this is a bit of a printing failure -- the "code" you're seeing 
>> is on the JSFunction, as you'd expect, but the "no optimized code" is 
>> actually printed as part of printing the "feedback vector" of the 
>> JSFunction. The reason is that there is only one closure (JSFunction) in 
>> one context that implements "add", which means that we can do an extra 
>> optimization called "function context specialization". This is the common 
>> case, but sometimes you may have multiple JSFunctions implementing the 
>> same underlying code (e.g. function factory() { return function add(a, b) 
>> { return a + b; } } always returns a new JSFunction, but one that 
>> implements the same code). For those cases, we skip the "function context 
>> specialization" and share code between the different JSFunctions by 
>> putting it on the feedback vector.
>>
>> Putting this together, you'll see what you originally expected by running:
>>
>> function factory() {
>>   return function add(a, b) { return a + b; };
>> }
>>
>> // Create two closures for 'add', to turn off function context specialization.
>> add = factory();
>> some_other_add = factory();
>>
>> %PrepareFunctionForOptimization(add);
>> // Warm up the type feedback.
>> add(1, 2);
>>
>> %OptimizeFunctionOnNextCall(add);
>> // Do the optimization.
>> add(1, 2);
>>
>> // Run the loop.
>> let sum = 0;
>> for (let i = 0; i < 100000; i++) {
>>   sum += add(i, i);
>> }
>> %DebugPrint(add);
>>
>> - Leszek
>>
>> On Tuesday, December 7, 2021 at 5:45:51 AM UTC+1 zin...@gmail.com wrote:
>>
>>> Hi all,
>>>
>>> I'm trying to manually optimize my function [1] by 
>>> calling %OptimizeFunctionOnNextCall.
>>>
>>> When I run `./d8 --allow-natives-syntax --trace-opt --print-opt-code 
>>> foo.js`, it seems that the function is compiled:
>>>
>>> [manually marking 0x1e6e082935b5 <JSFunction add (sfi = 0x1e6e08293485)> 
>>> for non-concurrent optimization]
>>> [compiling method 0x1e6e082935b5 <JSFunction add (sfi = 0x1e6e08293485)> 
>>> (target TURBOFAN) using TurboFan]
>>> --- Optimized code ---
>>> optimization_id = 0
>>> source_position = 12
>>> kind = TURBOFAN
>>> name = add
>>> stack_slots = 6
>>> compiler = turbofan
>>> address = 0x1e6e00044001
>>> ...
>>>
>>> But according to `%DebugPrint(add)` in the last line:
>>>
>>> - code: 0x1e6e00045841 <Code TURBOFAN>
>>> ...
>>>  - shared function info: 0x1e6e08293485 <SharedFunctionInfo add>
>>>  - no optimized code
>>>  - optimization marker: OptimizationMarker::kNone
>>>  - optimization tier: OptimizationTier::kNone
>>>
>>> I'm curious why it has `<Code TURBOFAN>` but `no optimized code` and 
>>> `OptimizationMarker::kNone`. Shouldn't the function be marked with 
>>> `OptimizationMarker::kCompileOptimized`? Is there something I'm missing? 
>>> Thanks in advance!
>>>
>>> Regards,
>>> Jiading
>>>
>>> [1] add.js:
>>> function add(a, b) { return a + b; }
>>> %OptimizeFunctionOnNextCall(add);
>>> add(1, 2);
>>> let sum = 0;
>>> for (let i = 0; i < 100000; i++) {
>>>         sum += add(i, i);
>>> }
>>> %DebugPrint(add);
>>>
>>

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users
--- 
You received this message because you are subscribed to the Google Groups 
"v8-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to v8-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/v8-users/3a3fe9b6-7489-485c-81f3-a804b24e555fn%40googlegroups.com.
