A quick update for those who are interested in running a V8-based
multi-threaded application in a Node process, but completely isolated from
Node itself.
I built static V8 libraries using a GN build. This is the release arguments
file I used (`args.gn`):
is_debug = false
target_cpu = "x64"
v8_static_library = true  # the last line was truncated in the original; this is assumed to be the usual GN flag for a static V8 build
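For anyone following along, here is a rough sketch of how such a privately
linked copy might be bootstrapped inside the host (Node) process. It is only
an illustration, assuming the static link keeps this copy's symbols separate
from Node's own V8; the calls shown (InitializeICUDefaultLocation,
InitializeExternalStartupData, NewDefaultPlatform) are from the V8 6.x-era
embedder headers and differ in older and newer versions (e.g.
CreateDefaultPlatform).

// Hypothetical bootstrap for the privately linked V8 copy. Because the
// library is statically linked and (assuming) its symbols are not shared
// with Node's V8, these process-wide calls only affect this copy.
#include <memory>
#include "include/libplatform/libplatform.h"
#include "include/v8.h"

static std::unique_ptr<v8::Platform> g_platform;

void InitPrivateV8(const char* exe_path) {
  v8::V8::InitializeICUDefaultLocation(exe_path);
  v8::V8::InitializeExternalStartupData(exe_path);
  g_platform = v8::platform::NewDefaultPlatform();
  v8::V8::InitializePlatform(g_platform.get());
  v8::V8::Initialize();
}

Whether InitializeExternalStartupData is needed at all depends on how the
snapshot is configured (v8_use_external_startup_data), so that line may be
unnecessary for some builds.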
> For example, on 64-bit platforms, V8 requires all executable memory to be
> within a 2GB section of address space so that calls can use 32-bit offsets,
> so it reserves that amount of address space on initialization (which is a
> very cheap operation), whereas actual memory is allocated later as needed.
>
> > What Ben pointed out was virtual memory reservations, which is not the
> > same as restricted space sizes.
>
> Would you guys mind expanding on this? My thinking is that V8 commits as
> much memory as it is allowed for semi/old/code spaces (considering that
> the app doesn't allocate more than that).
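To make the distinction concrete: the 2GB figure quoted above is reserved
address space for the code range, while the semi/old space limits cap what
actually gets committed. Below is a minimal sketch of setting those limits
when creating an isolate; the setter names come from the V8 6.x headers
(later versions renamed them) and the numbers are arbitrary.

// Hypothetical isolate setup with explicit heap limits (V8 6.x-era API;
// treat the exact setter names as version-dependent).
#include "include/v8.h"

v8::Isolate* NewConstrainedIsolate(v8::ArrayBuffer::Allocator* allocator) {
  v8::Isolate::CreateParams params;
  params.array_buffer_allocator = allocator;
  // These cap committed heap memory; the large executable/code-range
  // reservation discussed above remains a cheap address-space reservation.
  params.constraints.set_max_semi_space_size_in_kb(1024);  // young generation
  params.constraints.set_max_old_space_size(64);           // old generation, MB
  return v8::Isolate::New(params);
}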
Great. Thank you for jumping in, Jakob. Considering that both you and Ben
didn't reject this approach for some obvious reason, like some resources
being created with a process ID in the name, I can go ahead and try things
out with a static V8 build. Without input from both of you, a simple
I'm on the V8 team, and I defer to Ben on this question :-)

I've never tried embedding two different versions of V8 into the same
process, and I don't think we'd consider that scenario supported, but it
may well work. You can probably find out with a fairly small example (i.e.
without sinking lots of effort into it).
Thanks Ben. Your feedback is much appreciated. I realize you are not on the
V8 team. Just thought maybe I'm missing some MSDN-like way to contact
Google.
On Tue, Apr 17, 2018 at 4:31 PM, A.M. wrote:

Thank you, Ben. That is reassuring. With regards to memory, I implemented
first/second chance weak handle callbacks and tested it with low memory
limits for semi/old space sizes, and it appears to deal with memory
restrictions well.

The project I'm working on is rather large and it would be rea
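For readers who haven't used them, the first/second-pass weak callback
mechanism referred to here is the standard V8 embedder API
(PersistentBase::SetWeak plus WeakCallbackInfo::SetSecondPassCallback); the
Wrapper type and its fields below are hypothetical, just to show the shape
of the pattern.

#include "include/v8.h"

// Hypothetical native wrapper kept alive by a weak handle.
struct Wrapper {
  v8::Global<v8::Object> handle;
  void* native_resource = nullptr;
};

// Second pass: the safe place for heavier cleanup of native resources.
void SecondPass(const v8::WeakCallbackInfo<Wrapper>& info) {
  Wrapper* w = info.GetParameter();
  // ... release w->native_resource here ...
  delete w;
}

// First pass: only reset the handle, then request the second pass.
void FirstPass(const v8::WeakCallbackInfo<Wrapper>& info) {
  info.GetParameter()->handle.Reset();
  info.SetSecondPassCallback(SecondPass);
}

void MakeWeak(Wrapper* w) {
  w->handle.SetWeak(w, FirstPass, v8::WeakCallbackType::kParameter);
}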
On Tue, Apr 17, 2018 at 12:27 AM, A.M. wrote:

I have an application that uses V8 to run concurrent script jobs using
multiple V8 isolates. There is a desire to move this application to
Node.js, which puts the V8 used by the application in conflict with how
Node.js uses its instance of V8 (e.g. the V8 platform isn't exposed
outside of Node).