Re: [v8-users] What are no obvious reject reasons for cached data

2017-04-21 Thread Daniel Vogelheim
On Fri, Apr 21, 2017 at 11:21 AM, Johannes Rieken  wrote:

> For some background: we have been using cached data for a while in VS Code and
> are quite happy with it. It's one of the simpler ways to improve start-up
> time. The only downside is the slowdown on first start, when we generate
> and store the cached data, and I was thinking about making our build
> machines do this.
>

You could also try the startup snapshot:

This is similar to the code cache (in fact, the code cache implementation
was derived from the startup snapshot), but it is a build-time operation. It
caches more code (that is, it compiles everything), and it runs the code
generator in a mode where the generated code is potentially less efficient
but independent of CPU feature flags (that is, specialised to a CPU family,
not to a particular model). The JS source + V8 source mismatch issue is
auto-magically solved by the snapshot being bundled with the binary. This is
what Chrome uses to speed up browser + tab startup.

(This isn't super well documented, though. See v8_extra_library_files in
BUILD.gn.)


An alternative that would also work with the code cache: Chrome on Android
has a similar problem. We discussed starting without a snapshot/code cache,
and then compiling the snapshot/code cache (in idle time) in a separate,
"clean" process. That way you can hide the cache generation time; you'd
still have the initial unaided & somewhat slower startup.

-- 
-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users
--- 
You received this message because you are subscribed to the Google Groups 
"v8-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to v8-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [v8-users] Why does a combination of optimization flags(--always-opt --turbo) and code caching show bad performance?

2017-04-21 Thread Jakob Kummerow
As Jochen already said on chromium-dev, --always-opt does not make things
faster. This is expected. The purpose of the flag is to flush out certain
kinds of bugs when running tests, at the cost of a big slowdown.

Code caching has limits. It cannot cache everything.

The default configuration is what we believe gives the best performance in
general cases. There are no "secret" flags to make things faster.

On Fri, Apr 21, 2017 at 9:27 AM, Jin Chul Kim  wrote:

> Hello,
>
> I am trying to reduce execution time in general cases. My approach is a
> combination of two features: fully optimized code generation + code caching.
> In my experiments, I found that code caching is very powerful. However, the
> execution time increased significantly when I used the following flag:
> --always-opt. I know TurboFan is enabled in recent V8 code. Here are my
> questions.
>
> 1. As far as I know, code caching does not need code compilation to
> generate machine (native) code. Is that correct?
>
> I checked the trace with --trace-opt. There were many lines about optimizing
> and compilation while code caching was in use. Why did they happen?
>
> [compiling method 0xed4b9e340c1 <JSFunction (sfi = 0xed4b9e31ea1)> using TurboFan]
> [optimizing 0xed4b9e340c1 <JSFunction (sfi = 0xed4b9e31ea1)> - took 0.081, 0.356, 0.039 ms]
> [compiling method 0x3d3958a8f609 <JSFunction (sfi = 0xed4b9e41829)> using TurboFan]
> [optimizing 0x3d3958a8f609 <JSFunction (sfi = 0xed4b9e41829)> - took 0.109, 0.560, 0.060 ms]
> [compiling method 0x3d3958a93a81 <JSFunction (sfi = 0xed4b9e3d401)> using TurboFan]
> [optimizing 0x3d3958a93a81 <JSFunction (sfi = 0xed4b9e3d401)> - took 0.028, 0.086, 0.011 ms]
> ...
>
> 2. Could you explain why execution time increases significantly with opt.
> + caching (runs 4) and 5) below)?
>
> With the flag --always-opt, the compiler may generate optimized or
> unoptimized code. I think the second run should then show the same or
> better performance than the first run, because it does not require
> compilation and just loads the cached binary. Please see my experiment
> results below:
>
> - baseline: w/o opt. + w/o caching
> 1) 24.06 secs
>
> - w/o opt. + w/ caching
> 2) 1st run(save native code): 24.35 secs
> 3) 2nd run(load native code): 16.94 secs
>
> - w/ opt. + w/ caching
> 4) 1st run(save native code): 75.12 secs
> 5) 2nd run(load native code): 74.02 secs
>
> 3. How can I generate optimal code to decrease execution time with code
> caching?
>
> Many thanks,
> Jinchul
>



Re: [v8-users] What are no obvious reject reasons for cached data

2017-04-21 Thread Johannes Rieken
For some background: we have been using cached data for a while in VS Code and 
are quite happy with it. It's one of the simpler ways to improve start-up 
time. The only downside is the slowdown on first start, when we generate 
and store the cached data, and I was thinking about making our build 
machines do this.

Thanks again for clarifying the API contract. I won't try any lucky shots 
with prebuilt data for the relatively monotone Mac world ;-) Actually, we 
ran into this a while back: our minifier had changed, producing slightly 
different code, which didn't get rejected but caused things to go south 
[1]. We now use a stronger cache key so that we don't attempt to reuse 
bad cached data...

Thanks and Happy Coding, Joh

[1] https://github.com/Microsoft/vscode/issues/23841#issuecomment-291457502

On Friday, April 21, 2017 at 11:00:04 AM UTC+2, Johannes Rieken wrote:
>
> Thanks for clarifying!
>
> On Friday, April 21, 2017 at 10:00:03 AM UTC+2, Ben Noordhuis wrote:
>>
>> On Fri, Apr 21, 2017 at 9:51 AM, Johannes Rieken 
>>  wrote: 
>> > Does the data depend on things like endian-ness, CPU etc or only 
>> > on v8-locals like v8-version and v8-flags? 
>>
>> All of the above; it's machine-, version- and invocation-specific. 
>>
>



Re: [v8-users] What are no obvious reject reasons for cached data

2017-04-21 Thread Johannes Rieken
Thanks for clarifying!

On Friday, April 21, 2017 at 10:00:03 AM UTC+2, Ben Noordhuis wrote:
>
> On Fri, Apr 21, 2017 at 9:51 AM, Johannes Rieken 
>  wrote: 
> > Does the data depend on things like endian-ness, CPU etc or only 
> > on v8-locals like v8-version and v8-flags? 
>
> All of the above; it's machine-, version- and invocation-specific. 
>



Re: [v8-users] What are no obvious reject reasons for cached data

2017-04-21 Thread Daniel Vogelheim
As Ben says: the code cache is specific to the exact source string, the CPU
(CPU family & CPU flags, e.g. whether certain features are supported), the
exact V8 version (since there is no mechanism to guarantee correctness
across versions), and the compile options (debug vs. release, certain
features that affect code generation).


Additional info: the API contract is meant to be that the embedder checks
all these things, and it's the embedder's duty to never hand us a cache with
kConsumeCodeCache if there's a source/V8 version/CPU mismatch.

When trying to launch the feature (& shortly after), we had a bunch of
weird crashes apparently related to code caching (likely from disk or RAM
failures on cheap-ish machines, which happens at a noticeable rate with
~1e9 users), so we added a number of checksums and sanity checks to the
code caching. These are the primary reason why stuff gets rejected. But
they were really meant as a last resort & sanity check, and a rejection
really means that the embedder shouldn't have given us that data in the
first place. The sanity checks are mostly hash sums; hash collisions are
possible, and then there'll be false positives.

Looking back, I'm not sure that particular API was the best way of doing
this. I hope it works well; I'll be happy to hear any feedback if it
doesn't.

On Fri, Apr 21, 2017 at 9:59 AM, Ben Noordhuis  wrote:

> On Fri, Apr 21, 2017 at 9:51 AM, Johannes Rieken
>  wrote:
> > Does the data depend on things like endian-ness, CPU etc or only
> > on v8-locals like v8-version and v8-flags?
>
> All of the above; it's machine-, version- and invocation-specific.
>



Re: [v8-users] What are no obvious reject reasons for cached data

2017-04-21 Thread Ben Noordhuis
On Fri, Apr 21, 2017 at 9:51 AM, Johannes Rieken
 wrote:
> Does the data depend on things like endian-ness, CPU etc or only
> on v8-locals like v8-version and v8-flags?

All of the above; it's machine-, version- and invocation-specific.



[v8-users] Why does a combination of optimization flags(--always-opt --turbo) and code caching show bad performance?

2017-04-21 Thread Jin Chul Kim
Hello,

I am trying to reduce execution time in general cases. My approach is a 
combination of two features: fully optimized code generation + code caching.
In my experiments, I found that code caching is very powerful. However, the 
execution time increased significantly when I used the following flag: 
--always-opt. I know TurboFan is enabled in recent V8 code. Here are my 
questions.

1. As far as I know, code caching does not need code compilation to 
generate machine (native) code. Is that correct?

I checked the trace with --trace-opt. There were many lines about optimizing 
and compilation while code caching was in use. Why did they happen?

[compiling method 0xed4b9e340c1 <JSFunction (sfi = 0xed4b9e31ea1)> using TurboFan]
[optimizing 0xed4b9e340c1 <JSFunction (sfi = 0xed4b9e31ea1)> - took 0.081, 0.356, 0.039 ms]
[compiling method 0x3d3958a8f609 <JSFunction (sfi = 0xed4b9e41829)> using TurboFan]
[optimizing 0x3d3958a8f609 <JSFunction (sfi = 0xed4b9e41829)> - took 0.109, 0.560, 0.060 ms]
[compiling method 0x3d3958a93a81 <JSFunction (sfi = 0xed4b9e3d401)> using TurboFan]
[optimizing 0x3d3958a93a81 <JSFunction (sfi = 0xed4b9e3d401)> - took 0.028, 0.086, 0.011 ms]
...

2. Could you explain why execution time increases significantly with opt. 
+ caching (runs 4) and 5) below)?

With the flag --always-opt, the compiler may generate optimized or 
unoptimized code. I think the second run should then show the same or better 
performance than the first run, because it does not require compilation and 
just loads the cached binary. Please see my experiment results below:

- baseline: w/o opt. + w/o caching
1) 24.06 secs

- w/o opt. + w/ caching
2) 1st run(save native code): 24.35 secs
3) 2nd run(load native code): 16.94 secs

- w/ opt. + w/ caching
4) 1st run(save native code): 75.12 secs
5) 2nd run(load native code): 74.02 secs

3. How can I generate optimal code to decrease execution time with code 
caching?

Many thanks,
Jinchul
