Re: [v8-users] Performance impact of increasing Error.stackTraceLimit?

2018-03-04 Thread Benjamin Pasero

>
> I assume you're asking this in the context
> of VS Code
>

Yes, this would apply to any process in VS Code, including the renderer. 

On Monday, March 5, 2018 at 6:29:13 AM UTC+1, Ben Noordhuis wrote:
>
> On Fri, Mar 2, 2018 at 5:55 PM, Benjamin Pasero 
>  wrote: 
> > Hi, 
> > 
> > I am wondering what the performance impact would be if I changed
> > Error.stackTraceLimit [1] to a high value (e.g. 1000). The default of
> > just 10 stack frames is not much when the error bubbles through a long
> > chain of promises, for example.
> >
> > This change would be in production code, not just for testing, so I am a
> > little nervous about the consequences this would have.
> > 
> > Maybe someone can share some experiences with changing this value. 
> > 
> > Ben 
> > 
> > [1] https://github.com/v8/v8/wiki/Stack-Trace-API 
>
> Stack traces are built by storing back-references to the JS functions 
> on the stack.  The human-readable stack trace is computed lazily.  The 
> longer the stack trace, the bigger the chance you retain code objects 
> beyond their natural lifetime (i.e., introduce memory leaks) but that 
> might be offset by the observation that the bottom of the stack is 
> often invariant. 
>
> You'll also pay a little in CPU time in the stack frame walker but 
> that's probably tolerable.  I assume you're asking this in the context 
> of VS Code where human perception is the important factor. 
>



Re: [v8-users] v8 garbage collection and threading

2018-03-04 Thread Ben Noordhuis
On Wed, Feb 28, 2018 at 6:19 AM, A.M.  wrote:
>> The answer is "it depends."  If you're not going to call into V8 at
>> all it's _probably_ safe but most V8 APIs can kick off a GC run.  Call
>> `v8::Isolate::Dispose()` first to be safe.
>
> Is there a best-practices way to dispose of application objects stored in
> internal fields that didn't participate in any weak callbacks? Writing code
> to clear out internal fields and then dealing with potentially-empty
> internal fields in all object callbacks makes the code unnecessarily
> complex, which is why I was looking for a more straightforward way. Any
> insights into this would be much appreciated. Thanks!

Not really, I'm afraid.  You can't really get around doing double
bookkeeping.

It's a perennial source of bugs in Node.js but the upside is it keeps
me employed.
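
For the archive, a minimal sketch of what that double bookkeeping can look
like: the embedder keeps its own registry of live wrappers next to the
pointer stored in internal field 0, so anything whose weak callback never
ran can still be freed before `v8::Isolate::Dispose()`. The `NativeObject`
type and the registry are hypothetical names, and the exact V8 calls vary a
bit between versions, so treat this as an illustration rather than a recipe:

    // Hedged sketch -- assumes the object template was created with
    // SetInternalFieldCount(1); names are illustrative only.
    #include <unordered_set>
    #include "v8.h"

    struct NativeObject {
      v8::Global<v8::Object> handle;  // persistent handle to the JS wrapper
      // ... application state ...
    };

    // Every live wrapper is tracked here in addition to being stored in
    // the JS object's internal field 0.
    static std::unordered_set<NativeObject*> g_live_wrappers;

    NativeObject* Wrap(v8::Isolate* isolate, v8::Local<v8::Object> obj) {
      auto* native = new NativeObject();
      native->handle.Reset(isolate, obj);
      obj->SetAlignedPointerInInternalField(0, native);
      g_live_wrappers.insert(native);
      return native;
    }

    // Called from the weak callback, if one ever runs.
    void Release(NativeObject* native) {
      g_live_wrappers.erase(native);
      native->handle.Reset();
      delete native;
    }

    // Called at shutdown: whatever is still registered never had its
    // weak callback run, so free it before disposing of the isolate.
    void ReleaseAll() {
      for (NativeObject* native : g_live_wrappers) {
        native->handle.Reset();
        delete native;
      }
      g_live_wrappers.clear();
    }

The price is exactly the double bookkeeping described above: every create
and destroy path has to update both the internal field and the registry.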



Re: [v8-users] How does one schedule a weak handle callback with internal fields?

2018-03-04 Thread Ben Noordhuis
On Wed, Feb 28, 2018 at 6:05 AM, A.M.  wrote:
> On Tuesday, 27 February 2018 19:21:03 UTC-5, Ben Noordhuis wrote:
>>
>> IIRC, it's a two-pass system: first pass should reset the persistent
>> handle, second pass is the real finalizer.  Preempting the question of
>> why it works that way: I don't know. :-)
>
> Thanks for the insights. I think I see what you are saying, but even if I
> set up the second pass callback, both internal fields within the second
> callback are still `NULL`. How can I set up the weak callback to receive
> internal fields of the object instance that is being destroyed?

I don't rightly know.  We never used this mechanism in Node.js and I
don't know if it's still used in Chromium.  It might have been an
experiment that didn't pan out.

V8 still seems to use it internally for a few things (i18n, mostly) so
I assume it must still be doing _something_.
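
For reference, a hedged sketch of the registration that, as far as I can
tell from the headers of that era, makes the internal fields show up in the
callback: the handle has to be made weak with
`v8::WeakCallbackType::kInternalFields`, and the first two internal fields
have to be set with `SetAlignedPointerInInternalField()` (plain
`SetInternalField()` values come through as null). The `Wrapper` type is
made up; the V8 calls are the public API, but check them against your V8
version:

    #include "v8.h"

    struct Wrapper {
      v8::Global<v8::Object> handle;
      void* app_data = nullptr;
    };

    // Second pass: the place where real cleanup is allowed.
    void SecondPass(const v8::WeakCallbackInfo<Wrapper>& info) {
      Wrapper* wrapper = info.GetParameter();
      // With kInternalFields the first two internal fields were captured
      // when the object died; in the versions I looked at they are also
      // available here.
      void* field0 = info.GetInternalField(0);
      (void)field0;
      delete wrapper;
    }

    // First pass: only reset the handle and request the second pass.
    void FirstPass(const v8::WeakCallbackInfo<Wrapper>& info) {
      info.GetParameter()->handle.Reset();
      info.SetSecondPassCallback(SecondPass);
    }

    void MakeWeak(v8::Isolate* isolate, v8::Local<v8::Object> obj,
                  Wrapper* wrapper) {
      // Assumes the object template had SetInternalFieldCount(2).
      obj->SetAlignedPointerInInternalField(0, wrapper);
      obj->SetAlignedPointerInInternalField(1, wrapper->app_data);
      wrapper->handle.Reset(isolate, obj);
      wrapper->handle.SetWeak(wrapper, FirstPass,
                              v8::WeakCallbackType::kInternalFields);
    }

If the fields still come back null, the usual suspects (as far as I
understand the API) are a missing `SetInternalFieldCount()` on the
template, values stored with `SetInternalField()` instead of the
aligned-pointer variant, or registration with `kParameter` instead of
`kInternalFields`.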

> I also noticed that the second pass callback is called much less frequently
> than the first callback, and if the actual clean-up is done in the second
> callback, allocations would pile up at a higher rate. From the code it
> appears that the second pass callbacks are done only if
> `synchronous_second_pass` is set. Can you elaborate on how second pass
> callbacks are scheduled?

From a background thread, unless `--optimize_for_size` or
`--predictable` is set, in which case they run on the foreground
thread.

> Further to this, my understanding is that in order to process phantom
> handles generated by weak callbacks, I need to call `PumpMessageLoop`
> periodically. The comment above this function says this:
>
>> The caller has to make sure that this is called from the right thread.
>
> What's the "right" thread in this context? If it's the same thread that runs
> the script, how can one pump the message loop if the script never returns?
> Am I supposed to synchronize with the script thread in v8 callbacks and pump
> for messages?

The foreground thread.  If the script never yields control, you can't
pump the message loop.
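
For completeness, here is roughly what pumping on the foreground thread
looks like with the default platform from `libplatform`; the surrounding
setup (how you got `platform`, `context` and `script`) is assumed, while
`v8::platform::PumpMessageLoop()` itself is the real call:

    #include "libplatform/libplatform.h"
    #include "v8.h"

    // Run a script, then drain pending foreground tasks -- which is where
    // second-pass weak callbacks end up when they are not delivered from a
    // background thread. `platform` must be the v8::Platform the process
    // was initialized with.
    void RunAndPump(v8::Platform* platform, v8::Isolate* isolate,
                    v8::Local<v8::Context> context,
                    v8::Local<v8::Script> script) {
      v8::HandleScope handle_scope(isolate);
      v8::Context::Scope context_scope(context);
      v8::MaybeLocal<v8::Value> result = script->Run(context);
      (void)result;  // control only comes back once the script yields
      // PumpMessageLoop() runs one pending foreground task and reports
      // whether it found any work; loop until the queue is drained.
      while (v8::platform::PumpMessageLoop(platform, isolate)) {
      }
    }

If the script never yields, as in the question above, this loop never gets
a chance to run.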



Re: [v8-users] Performance impact of increasing Error.stackTraceLimit?

2018-03-04 Thread Ben Noordhuis
On Fri, Mar 2, 2018 at 5:55 PM, Benjamin Pasero
 wrote:
> Hi,
>
> I am wondering what the performance impact would be if I changed
> Error.stackTraceLimit [1] to a high value (e.g. 1000). The default of just
> 10 stack frames is not much when the error bubbles through a long chain of
> promises, for example.
>
> This change would be in production code, not just for testing, so I am a
> little nervous about the consequences this would have.
>
> Maybe someone can share some experiences with changing this value.
>
> Ben
>
> [1] https://github.com/v8/v8/wiki/Stack-Trace-API

Stack traces are built by storing back-references to the JS functions
on the stack.  The human-readable stack trace is computed lazily.  The
longer the stack trace, the bigger the chance you retain code objects
beyond their natural lifetime (i.e., introduce memory leaks) but that
might be offset by the observation that the bottom of the stack is
often invariant.

You'll also pay a little in CPU time in the stack frame walker but
that's probably tolerable.  I assume you're asking this in the context
of VS Code where human perception is the important factor.
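
For anyone finding this thread later: from JavaScript the change itself is
just `Error.stackTraceLimit = 1000;` executed early at startup. An embedder
can also bake it into context setup from C++; the sketch below is one
hedged way to do that (the function name and the choice of 1000 are only
illustrative):

    #include "v8.h"

    // Evaluate the one-liner in a freshly created context so every error
    // created in it captures up to 1000 frames.
    void RaiseStackTraceLimit(v8::Isolate* isolate,
                              v8::Local<v8::Context> context) {
      v8::HandleScope handle_scope(isolate);
      v8::Context::Scope context_scope(context);
      v8::Local<v8::String> source =
          v8::String::NewFromUtf8(isolate, "Error.stackTraceLimit = 1000;",
                                  v8::NewStringType::kNormal)
              .ToLocalChecked();
      v8::Local<v8::Script> script =
          v8::Script::Compile(context, source).ToLocalChecked();
      v8::MaybeLocal<v8::Value> result = script->Run(context);
      (void)result;
    }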



Re: [v8-users] Are functions which defined using eval or new Function() optimized?

2018-03-04 Thread Jakob Kummerow
On Sat, Mar 3, 2018 at 9:17 AM Koray  wrote:

> Hello Jakob,
>
> Thank you for your response. I will make sure to design my application
> accordingly.
>
> My next question would be, does v8 optimize according to new prototype
> changes after a while?


Yes.


> Or should I restart the application
> periodically to make sure it is optimized with newly added
> functions?
>

That should not be necessary. Of course you can experiment with it to see
if it makes a difference.


> My next question is: do all functions have to be in a file in order for
> optimization not to be broken?


That's pretty much the same as your first question: if you create functions
with eval, it doesn't matter where the source text came from.


> Because my idea was to keep all
> functions in a database and then add them to prototypes whenever the
> application is restarted.
>

That should be fine.

For completeness' sake I'll mention: there are many forms of optimization;
e.g., certain tricks to improve startup speed assume the common pattern of
parsing code from files. By cooking up your own initialization mechanism,
you'll probably miss out on those, but if you have uncommon requirements,
then it's of course perfectly fine to devise your own set of tradeoffs.


>
> On 3/2/18, Jakob Kummerow  wrote:
> > Functions created with eval or new Function will get optimized (after a
> > while, just like other functions -- nothing is optimized on first use).
> >
> > That said, modifying prototypes after the fact tends to have a
> > performance impact, because V8 makes optimizations (all over the place)
> > based on the assumption that prototypes don't change much. When they do,
> > optimized code built on such assumptions must be discarded. That may or
> > may not be an issue for your use case; the only way to find out is to try.
> >
> >
> >
> > On Fri, Mar 2, 2018 at 7:47 AM Koray  wrote:
> >
> >> Forwarding here from Node.js group as this one is more appropriate.
> >>
> >> Hello,
> >>
> >> I have a real time application which will require constant updates /
> >> bug fixes. So I will be constantly defining new functions on
> >> prototypes or constructors.
> >>
> >> My question is, will there be a difference between functions which
> >> were hard coded and functions defined using eval or the Function
> >> constructor? If so, would require() possibly be able to get around
> >> this?
> >>
> >> Note: Functions will be used multiple times throughout the lifetime of
> >> the application. Giving this information just in case they are
> >> being optimized after the first usage.
> >> Thank you
> >>



[v8-users] Re: How does one schedule a weak handle callback with internal fields?

2018-03-04 Thread A.M.

> I still don't understand what the point of this call is under any
> circumstances when you can get the internal fields out of your object.

The first-pass weak handle callback can call only a couple of V8 functions
(i.e., reset the handle and set up the second-pass callback) and is not
supposed to pull internal fields from the object. More V8 functions can be
called in the second-pass callback, but that callback is only invoked from
the message pump, which makes it unusable for long-running scripts that
never get a chance to pump the message loop mid-script. This leaves only
packing everything into the weak callback parameter, which unnecessarily
duplicates the internal fields in the parameter and requires an extra
structure that the application has to maintain and keep in sync between
the object and the weak callback parameter.
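
To spell out the contrast (a hedged sketch, with a made-up `Wrapper` type):
while the object is alive an ordinary callback can pull the wrapper
straight out of the internal field, but by the time the weak callback fires
the object is unreachable, so the only state left is whatever `SetWeak()`
was handed up front.

    #include "v8.h"

    struct Wrapper {
      // application state lives here
    };

    // Ordinary function/accessor callback: the JS object is alive, so the
    // wrapper comes straight out of internal field 0.
    void MethodCallback(const v8::FunctionCallbackInfo<v8::Value>& args) {
      auto* wrapper = static_cast<Wrapper*>(
          args.Holder()->GetAlignedPointerFromInternalField(0));
      (void)wrapper;  // ... use it ...
    }

    // Weak callback: the object is already being collected; all that is
    // available is the SetWeak() parameter, plus the first two internal
    // fields if the handle was made weak with
    // v8::WeakCallbackType::kInternalFields.
    void WeakCallback(const v8::WeakCallbackInfo<Wrapper>& info) {
      delete info.GetParameter();
    }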
