I looked into this a lot last year. It would make sense to have either all
on-heap or all off-heap backing stores for array buffers. This would
simplify generated code a lot as well (currently we do this trick where we
have to add two pointers together to get the pointer to the backing store).
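
For context, the 'trick' looks roughly like this (a simplified sketch, not
the actual V8 code; the field names are illustrative): one field holds the
on-heap store (or zero) and the other holds either a raw external pointer or
an on-heap offset, so a single addition works for both cases, and generated
code has to emit that addition on every access path.

  #include <cstdint>

  // Illustrative only: base_pointer is the on-heap backing store (or 0 when
  // the store is off-heap); external_pointer is the raw pointer (or an offset
  // when the store is on-heap). Adding them yields the data pointer in both
  // configurations.
  struct TypedArraySketch {
    uintptr_t base_pointer;      // on-heap backing store, or 0
    uintptr_t external_pointer;  // raw pointer, or offset into base_pointer
  };

  inline void* DataPointer(const TypedArraySketch& ta) {
    return reinterpret_cast<void*>(ta.base_pointer + ta.external_pointer);
  }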

The other big advantage of on-heap arrays (apart from inline allocation in
CSA) is that they don't need the ArrayBufferTracker to finalize the
backing store. When the backing store is off-heap, we need to explicitly
delete it, rather than it being implicitly collected like regular GC'd
objects. So for off-heap buffers we add time to the GC proportional to the
number of buffers that die, rather than the ones that live, which is bad™
for GC.
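
To make that cost concrete, here's a toy sketch (this is not V8's actual
ArrayBufferTracker, just the shape of the problem): the collector has to do
explicit per-buffer work to free dead off-heap stores, whereas a dead
on-heap store simply stops being marked/evacuated and costs nothing extra.

  #include <cstdlib>
  #include <unordered_map>

  // Toy model: every off-heap backing store is registered here and must be
  // freed explicitly once its owning JSArrayBuffer dies. The sweep below does
  // work for every tracked buffer, in particular for every *dead* one, so GC
  // time no longer scales only with the live heap.
  struct TrackedBuffer {
    void* backing_store;
    bool owner_is_alive;  // set during marking
  };

  void SweepOffHeapBuffers(std::unordered_map<void*, TrackedBuffer>& tracked) {
    for (auto it = tracked.begin(); it != tracked.end();) {
      if (!it->second.owner_is_alive) {
        std::free(it->second.backing_store);  // explicit finalization
        it = tracked.erase(it);
      } else {
        ++it;
      }
    }
  }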

There are three reasons we can't use all on-heap buffers (or increase the
threshold much).
1. People expect detaching ArrayBuffers to be 0-copy. The spec doesn't
require it, but that's how people expect it to work on the web today: when
you postMessage an ArrayBuffer, the underlying buffer should just be
transferred, not copied. With on-heap backing stores, we always have to
copy (for security, but also because the isolate that the heap came from
could go away). For larger arrays, the time to copy will be noticeable.
2. Embedders (e.g. Chrome) can provide an externally allocated backing store
when constructing an ArrayBuffer through the API. Unless we remove this API,
we can't get rid of off-heap backing stores. This doesn't prevent us
from increasing the limit, though.
3. Embedders can get access to ArrayBuffers (created by the embedder or via
JS) and obtain a pointer to their backing store. The current contract of
the API is that when we give out these backing store pointers, the backing
stores can't move (which they might, due to GC, if they were on-heap). The
way we enforce this is that backing stores are mostly off-heap, and when
giving out a pointer to an on-heap backing store, we first 'externalize'
the ArrayBuffer, reallocating the backing store externally and copying the
data (which gets more expensive for bigger backing stores). A rough sketch
of both of these embedder paths follows below.
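
For reference, points 2 and 3 correspond roughly to these two embedder-facing
paths. This is a sketch against the v8.h API of that era; the signatures are
approximate and have since changed.

  #include <v8.h>

  void EmbedderPaths(v8::Isolate* isolate,
                     v8::Local<v8::ArrayBuffer> js_created) {
    // (2) The embedder hands V8 an externally allocated backing store, so
    //     this buffer can never live on the V8 heap. With the default
    //     kExternalized mode the embedder keeps ownership of the memory.
    static char external_store[1024];
    v8::Local<v8::ArrayBuffer> external =
        v8::ArrayBuffer::New(isolate, external_store, sizeof(external_store));

    // (3) The embedder asks for a stable pointer to a buffer created in JS.
    //     If the backing store happens to be on-heap, Externalize() first
    //     reallocates it off-heap and copies the data, so the pointer we
    //     hand out can never move under GC. The embedder then owns the
    //     memory and must free it.
    v8::ArrayBuffer::Contents contents = js_created->Externalize();
    void* stable_data = contents.Data();
    size_t stable_length = contents.ByteLength();

    (void)external;
    (void)stable_data;
    (void)stable_length;
  }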

For the non-moving and 0-copy detaching constraints, you'd think it might
be possible to say 'OK, let's keep external buffers (regardless of size)
for these use cases and allocate everything else on-heap'. The problem
there is that we don't know upfront which buffers will be detached or
externalized in the future :(.


My opinion re. an on/off flag vs. a size-limit flag: other embedders might
use it differently, so I think it's safer to leave it as-is.

Cheers,
Peter

P.S. Here's a handy table of the constraints vs. the different options.


[image: ta_table.png]


On Thu, Jan 17, 2019 at 4:31 AM Peter Wong <[email protected]> wrote:

> Does anyone know why *V8_TYPED_ARRAY_MAX_SIZE_IN_HEAP* is, by default, set
> to 64?
> This value seems rather low. Can we raise this to
> *kMaxRegularHeapObjectSize*?
>
> I've been working on improving the performance of *TypedArray#subarray*,
> which led me to improving the performance of the supporting builtins
> (*CreateTypedArray* and *TypedArrayInitialize*). I accidentally ignored
> *V8_TYPED_ARRAY_MAX_SIZE_IN_HEAP*, and noticed a significant performance
> improvement on various TypedArray js-perf benchmarks (3x *SliceNoSpecies*,
> 2x *ConstructAllTypedArray*).
>
> I did some digging on the origin of this limit, introduced back in 2014:
> https://codereview.chromium.org/150813004/ . I didn't see much discussion
> of how this low value was determined. Maybe the assumptions from back then
> have changed since?
>
> *V8_TYPED_ARRAY_MAX_SIZE_IN_HEAP* does not seem to be customized by
> Chrome, so I assume it defaults to 64. As for Node, this value does seem
> to be important for turning off TypedArray on-heap allocation completely
> (https://chromium-review.googlesource.com/c/v8/v8/+/962243/). Considering
> these two use cases, I wonder if this build-time config should not be a
> value but a flag that turns TypedArray on-heap allocation on or off (e.g.
> *V8_USE_ON_HEAP_TYPED_ARRAY_ALLOCATION*).
>
> Tangentially, perhaps some of the performance difference between on- and
> off-heap TypedArray allocation is because the off-heap path goes through
> an expensive CallJS (
> https://cs.chromium.org/chromium/src/v8/src/builtins/builtins-typed-array-gen.cc?q=typed-array-gen.cc&sq=package:chromium&g=0&l=301-303),
> which could be mitigated with an external reference / fast C call.
>
> In summary, I'm looking for thoughts on whether this limit can be
> increased for performance.
> If not, what sets TypedArrays apart from other objects that use (or are
> based on) *kMaxRegularHeapObjectSize* as a limit?
> If so, should we change the build-time config to an on/off flag (rather
> than a numeric value): off for Node, on for Chrome, defaulting to
> *kMaxRegularHeapObjectSize*?
>
> Thanks!
>
> (cc-ing folks that are referenced in the above links)
>
>
> --

Peter Marshall

Software Engineer

[email protected]

Google Germany GmbH

Erika-Mann-Straße 33

80636 München

