Re: [v8-users] How to remove CheckMaps instructions in an optimization phase

2014-11-05 Thread Vyacheslav Egorov
Hydrogen is also used to generate stubs, and there a CheckMaps "deopt" has
different semantics from a normal deopt - it is not reported by
--trace-deopt (which reports all normal JS function deopts). You have to
account for that in your experiments.

I think, however, that your experiment does not provide any actionable data.
Knowing that checks introduce X% overhead is not very useful unless you
also know an algorithm that eliminates all of them while retaining correctness.


Vyacheslav Egorov

On Wed, Nov 5, 2014 at 5:37 PM, Gabriel Southern 
wrote:

> Perhaps it would help if I explained my motivation.  I'm trying to
> evaluate the overhead of conditional deoptimization checks.  One way is by
> running workloads that have the checks in their normal configuration and
> measuring the runtime.  Then removing checks that were never triggered and
> rerunning the workload and comparing the runtime.  Obviously I understand
> this is not safe in the general case.
>
> For some workloads I was able to remove the call to DeoptimizeIf in
> LCodeGen::DoCheckMaps and the benchmark still ran correctly.  But if I
> removed the call to CompareMap(reg,map) I would get an error about
> unreachable code similar to what I posted earlier when I remove the
> CompareMaps hydrogen instructions.
>
> Aside from that I think that I want to be able to choose when to remove
> checks at the hydrogen instruction level because later I will want to pick
> which functions to remove the checks from.  I would profile a benchmark
> first and see which functions have conditional deopts that are never
> triggered and then remove the deopts from those functions.
>
> Again this is all part of a performance evaluation study, not something to
> be used for production code.  I hope this makes sense, but if you think
> there's something I'm overlooking for why this won't work I'd be interested
> to know why.  From looking at the assembly code sequences that are
> generated I think this should be okay, but there's also obviously something
> I'm missing that is leading to the unreachable code error that I've seen.
>
> Thanks,
>
> -Gabriel
>
> On Wednesday, November 5, 2014 5:42:20 AM UTC-8, Jakob Kummerow wrote:
>>
>> Removing check instructions is so utterly wrong and dangerous that I
>> can't bring myself to even try to help you. Just don't do it!
>>
>>
>> On Wed, Nov 5, 2014 at 8:19 AM, Gabriel Southern 
>> wrote:
>>
>>> I'm experimenting with removing deoptimization checks and I have a
>>> question about how to remove hydrogen instructions.
>>>
>>> I'm looking at a benchmark where the CheckMaps deoptimization checks are
>>> never triggered and I'm trying to remove them.  I know this is not safe in
>>> the general case, but when I traced the deoptimizations for this benchmark
>>> there were not any that were triggered because of CheckMaps.
>>>
>>> I've tried to follow the HDeadCodeEliminationPhase as a guide because
>>> what I want to do is delete instructions that match certain criteria, so
>>> I thought that pass might be a good example.  The main loop in my pass is:
>>>
>>>   for (int i = 0; i < graph()->blocks()->length(); ++i) {
>>>     HBasicBlock* block = graph()->blocks()->at(i);
>>>     for (HInstructionIterator it(block); !it.Done(); it.Advance()) {
>>>       HInstruction* instr = it.Current();
>>>       if (instr->opcode() == HValue::kCheckMaps) {
>>>         instr->DeleteAndReplaceWith(NULL);
>>>       }
>>>     }
>>>   }
>>>
>>> When I run this and just print the list of instructions that will be
>>> removed the list looks okay.  However if I actually delete the instruction
>>> I get a runtime error as follows:
>>>
>>> #
>>> # Fatal error in ../src/objects.cc, line 10380
>>> # unreachable code
>>> #
>>>
>>> === C stack trace ===
>>>
>>>  1: V8_Fatal
>>>  2: v8::internal::Code::FindAndReplace(v8::internal::Code::FindAndReplacePattern const&)
>>>  3: v8::internal::CodeStub::GetCodeCopy(v8::internal::Code::FindAndReplacePattern const&)
>>>  4: v8::internal::PropertyICCompiler::ComputeCompareNil(v8::internal::Handle, v8::internal::CompareNilICStub*)
>>>  5: v8::internal::CompareNilIC::CompareNil(v8::internal::Handle)
>>>  6: ??
>>>  7: v8::internal::CompareNilIC_Miss(int, v8::internal::Object**, v8::internal::I

Re: [v8-users] Using --hydrogen_track_positions flag triggers crash in debug mode

2014-10-14 Thread Vyacheslav Egorov
Hi Gabriel,

I took a quick look into this and it's actually an issue in my code: we are
trying to look up a SharedFunctionInfo by inlining_id in a list that is
actually indexed by something else entirely (the unique id of the inlined
function). So if we inline the same function twice we end up reading out
of bounds (if you run with --enable-slow-asserts you will get a bounds-check
error). I will fix this.

Good news: this does *not* affect IRHydra, because IRHydra does not rely on
the "linearized" source positions encoded in the code; it uses whatever is
encoded in the hydrogen.cfg file, and those are correct.



Vyacheslav Egorov

On Tue, Oct 14, 2014 at 9:46 PM, Gabriel Southern 
wrote:

> I wanted to try IRHydra2 (http://mrale.ph/irhydra/2/) with the Octane
> benchmarks.  When I use the x64.debug version of d8 with the flags listed
> for IRHydra2 and run the Octane benchmarks I get a crash in the interpreter.
>
> I tried to narrow down the problem, and it looks like
> --hydrogen_track_positions is the flag that gives the problem.  The stack
> trace that I get when d8 crashes is:
>
> #
> # Fatal error in ../src/assembler.cc, line 1551
> # CHECK(pos >= 0) failed
> #
>
> === C stack trace ===
>
>  1: V8_Fatal
>  2: v8::internal::PositionsRecorder::RecordPosition(int)
>  3: v8::internal::LCodeGen::RecordAndWritePosition(int)
>  4: v8::internal::LCodeGenBase::GenerateBody()
>  5: v8::internal::LCodeGen::GenerateCode()
>  6: v8::internal::LChunk::Codegen()
>  7: v8::internal::OptimizedCompileJob::GenerateCode()
>  8: v8::internal::Compiler::GetConcurrentlyOptimizedCode(v8::internal::OptimizedCompileJob*)
>  9: v8::internal::OptimizingCompilerThread::InstallOptimizedFunctions()
> 10: ??
> 11: v8::internal::Runtime_TryInstallOptimizedCode(int, v8::internal::Object**, v8::internal::Isolate*)
> 12: ??
>
> Looking in gdb I think the problem is that the check DCHECK(pos >= 0)
> in PositionsRecorder::RecordPosition(int) fails in debug mode because pos
> is -842150428.  Running in release mode the interpreter doesn't crash,
> probably since the check is not run, but I'm wondering if the output can be
> trusted to be correct.
>
> I noticed that an issue related to the --hydrogen_track_positions flag had
> been opened in February 2014: https://code.google.com/p/v8/issues/detail?id=3184
>
> I also have experienced this problem when compiling either the master or
> the bleeding_edge branch (from the git repo).
>
> I'm using Ubuntu 12.04 with Linux 3.5 and gcc 4.7.3.  I've tried with both
> x64 and ia32 and seen the crash in both cases.  Any suggestions for
> debugging the problem are appreciated.  I'm wondering if it's something
> specific to my system, or a bug in V8.  And whether it really matters for
> using IRHydra2 or not.
>
> -Gabriel
>
>
>
>
>  --
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users
> ---
> You received this message because you are subscribed to the Google Groups
> "v8-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to v8-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



Re: [v8-users] V8, Hydrogen, disassembly, and Vyacheslav Egorov's JSConf talk

2014-10-09 Thread Vyacheslav Egorov
Hi John,

> 2. Why does Vyacheslav suggest using --trace-hydrogen? Is that needed to
see the disassembly?

I suggest passing --trace-hydrogen because it is needed to see the HIR -
the high-level intermediate representation (also known as hydrogen) - which is
much easier to read than raw assembly. (For example, slides 38 onward show HIR,
not disassembly.)

Also, if you pass --trace-hydrogen you can use a tool of mine called IRHydra[1]
to inspect optimized code. It can show you source, HIR & disassembly all
together[2].

[1] http://mrale.ph/irhydra/2
[2] http://imgur.com/HbOOXEM


Vyacheslav Egorov

On Wed, Oct 8, 2014 at 5:56 PM, John Feminella  wrote:

> As an educational exercise, I'd like to inspect disassembled JS generated
> by V8, and compare both the optimized and unoptimized versions to see what
> changes and what doesn't.
>
> In Vyacheslav Egorov's JSConf 2012 excellent slides (
> http://s3.mrale.ph/jsconf2012.pdf) he recommends compiling V8 like so:
>
> =
> make ia32.release objectprint=on \
>   disassembler=on
>
> out/ia32.release/d8 --print-opt-code \
>   --code-comments \
>   --trace-hydrogen \
>   test.js
> =
>
> which will enable the disassembler for introspection by the curious.
>
> I'm having a little trouble understanding the current state of affairs,
> though:
>
> 0. Is this the best way of doing things? How do I generate the
> *unoptimized* JS versus the *optimized* JS for test.js in this example?
>
> 1. Are Hydrogen and Crankshaft still relevant? For example, my limited
> understanding is that TurboFan will be replacing Hydrogen:
> https://codereview.chromium.org/426233002
> Is there somewhere I can read about the relevance and lifecycle of
> different pieces like Hydrogen and Crankshaft and how they play a role in
> code generation/optimization?
>
> 2. Why does Vyacheslav suggest using --trace-hydrogen? Is that needed to
> see the disassembly?
>
> Thanks for your help!
>
> best,
> ~ jf
>
> --
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users
> ---
> You received this message because you are subscribed to the Google Groups
> "v8-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to v8-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



Re: [v8-users] Re: Array#join - better to special case for Array/etc.?

2014-09-02 Thread Vyacheslav Egorov
> It still is easily optimizable to a low
> level. An array of numbers can
> be compiled eventually to push/pop (pushf/popf for floats).

Which architecture are we talking about here? x86? pushf pushes the flags
register, not floats.

All of the above (push/pop) use a fixed register, ESP --- which means that
either your array somehow ended up on top of the stack or you pointed
ESP into your array. Neither of these makes much sense.

> Is there a reason why the special case is faster in JS than native
> (even taking into account coercion)?

When you say "faster" --- you probably mean some specific benchmark that
you wrote. If you share this benchmark, I can approximately tell you why
you see what you see there.

I can tell you that, on average, join() is expected to be faster because it
is implemented natively.

A common mistake when measuring these things is comparing join() against
`+` --- whereas `+` does not actually produce precisely the same result as
join(): `+` creates a cons-string and thus lazily defers the cost of actual
character copying to the point where the `+` result is used in a way that
requires flattening. join() produces a flattened string from the start.

Take a look, for example, at http://jsperf.com/join-faster-than-concat
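
As a rough illustration (a hand-written sketch of mine, not the jsperf case
itself), a fair comparison has to force the `+` result to be flattened, for
example by actually reading a character out of it:

var parts = [];
for (var i = 0; i < 1000; i++) parts.push('x');

// join() pays the character-copying cost right here, up front.
var joined = parts.join('');

// '+' builds up a cons-string; the copying is deferred...
var concatenated = '';
for (var j = 0; j < parts.length; j++) concatenated += parts[j];

// ...until the string is used in a way that requires flattening,
// e.g. random access into its characters.
var c = concatenated.charCodeAt(500);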


Vyacheslav Egorov


On Tue, Sep 2, 2014 at 12:59 AM, Isiah Meadows  wrote:

> On Mon, Sep 1, 2014 at 9:12 AM, Vyacheslav Egorov 
> wrote:
> >> Array#[push|pop]() is easily optimized for array instances, because
> >> they each compile down to a single common assembly instruction
> >
> > Last time I checked Intel manual it did not have jsapop/jsapush
> > instructions.
> >
> > You need to do a number of checks (length underflow, lack of holes in the
> > array, etc.), so a single instruction is unlikely (though potentially
> > possible in a loop under certain conditions --- but those conditions
> > require sophisticated optimizations to achieve, e.g. you need to hoist
> > bounds checks and sink the update of length out of the loop).
>
> I'll rescind that statement. It still is easily optimizable to a low
> level. An array of numbers can
> be compiled eventually to push/pop (pushf/popf for floats).
>
> >
> >> but to make a special case (or more optimal case) for Arrays in
> >> Array#join(), and especially if it is an array of Strings.
> >
> > There is such a case. It's called _FastAsciiArrayJoin.
>
> Is there a reason why the special case is faster in JS than native
> (even taking into account coercion)?
>
> >
> >> This is a relatively fast snippet of C++ code:
> >
> > This might have O(n^2) runtime complexity or waste memory for the result
> > (if the library does capacity doubling on appends, which is a common
> > strategy), depending on how your C++ library reserves capacity for std::string.
> >
> > std::string join(const std::vector<std::string>& array) {
> >   size_t total_length = 0;
> >   for (auto& s : array) total_length += s.length();
> >
> >   std::string str;
> >   str.reserve(total_length);
> >   for (auto& s : array) str.append(s);
> >   return str;
> > }
>
> Amend that to "simple". I'm not usually coding in C++.
>
> >
> > which is btw exactly what _FastAsciiArrayJoin attempts to do.
> >
> >
> >
> >
> > Vyacheslav Egorov
> >
> >
> > On Mon, Sep 1, 2014 at 11:20 AM, Isiah Meadows 
> wrote:
> >>
> >> That library rarely does type checking. This contributes a lot of
> >> speed to their overall algorithm. If you look at my benchmarks,
> >> clearly, removing type checking helps, but it doesn't help for all
> >> applications. Another thing is that they use 99% C-style for loops
> >> with numerical indices instead of for-in loops (which always require
> >> some type checking because they work with all Objects and Arrays). The
> >> code actually resembles Asm.js in its heavy use of numbers.
> >>
> >> Array#[push|pop]() is easily optimized for array instances, because
> >> they each compile down to a single common assembly instruction. Also,
> >> in the case of Array#pop(), if the value isn't used, then it can
> >> simply pop to the same register over and over again, making it easily
> >> surpass 100 million operations per second if properly optimized.
> >>
> >> Back to the initial topic, my main request isn't to remove
> >> type-checking, but to make a special case (or more optimal case) for
> >> Arrays in Array#join(), and especially if it is an array of Strings.
> >> This is a relatively fast snippet of C++ code:
> >>
> >>

Re: [v8-users] Re: Array#join - better to special case for Array/etc.?

2014-09-01 Thread Vyacheslav Egorov
> Array#[push|pop]() is easily optimized for array instances, because
> they each compile down to a single common assembly instruction

Last time I checked, the Intel manual did not have jsapop/jsapush
instructions.

You need to do a number of checks (length underflow, lack of holes in the
array, etc.), so a single instruction is unlikely (though potentially possible
in a loop under certain conditions --- but those conditions require
sophisticated optimizations to achieve, e.g. you need to hoist bounds
checks and sink the update of length out of the loop).
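
To make that concrete, here is a rough JavaScript sketch (my own
illustration, not V8's actual code) of what a spec-compliant pop() has to do
before it could ever become a single instruction:

function sketchPop(a) {
  var len = a.length;
  if (len === 0) return undefined;  // length underflow check
  var index = len - 1;
  var value = a[index];             // a hole here falls back to a prototype chain lookup
  delete a[index];
  a.length = index;                 // the length update that would have to be sunk
  return value;
}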

> but to make a special case (or more optimal case) for Arrays in
Array#join(), and especially if it is an array of Strings.

There is such a case. It's called _FastAsciiArrayJoin.

> This is a relatively fast snippet of C++ code:

This might have O(n^2) runtime complexity or waste memory for the result (if
the library does capacity doubling on appends, which is a common strategy),
depending on how your C++ library reserves capacity for std::string.

std::string join(const std::vector<std::string>& array) {
  size_t total_length = 0;
  for (auto& s : array) total_length += s.length();

  std::string str;
  str.reserve(total_length);
  for (auto& s : array) str.append(s);
  return str;
}

which is btw exactly what _FastAsciiArrayJoin attempts to do.




Vyacheslav Egorov


On Mon, Sep 1, 2014 at 11:20 AM, Isiah Meadows  wrote:

> That library rarely does type checking. This contributes a lot of
> speed to their overall algorithm. If you look at my benchmarks,
> clearly, removing type checking helps, but it doesn't help for all
> applications. Another thing is that they use 99% C-style for loops
> with numerical indices instead of for-in loops (which always require
> some type checking because they work with all Objects and Arrays). The
> code actually resembles Asm.js in its heavy use of numbers.
>
> Array#[push|pop]() is easily optimized for array instances, because
> they each compile down to a single common assembly instruction. Also,
> in the case of Array#pop(), if the value isn't used, then it can
> simply pop to the same register over and over again, making it easily
> surpass 100 million operations per second if properly optimized.
>
> Back to the initial topic, my main request isn't to remove
> type-checking, but to make a special case (or more optimal case) for
> Arrays in Array#join(), and especially if it is an array of Strings.
> This is a relatively fast snippet of C++ code:
>
> std::string join(std::string* array, int len) {
>   std::string str = "";
>   while (len) {
>     str += *(array + --len);
>   }
>   return str;
> }
>
> The Fast library could speed up some of their methods easily by
> reversing the iteration order for some methods (and I'm about to draft
> a quick patch to it).
>
> On Sun, Aug 31, 2014 at 9:22 AM, Jacob G  wrote:
> > You should take a look at this too: https://github.com/codemix/fast.js -
> > Functions written in JS are faster than the native functions. Is there
> > something to be done?
> >
> >> On Sunday, August 31, 2014 02:16:37 UTC+2, Isiah Meadows wrote:
> >>
> >> I profiled various native methods, comparing them to equivalent polyfills
> >> and special-cased ones. I compared the following functions:
> >>
> >> Math.abs(x)
> >> Array.prototype.pop()
> >> Math.ceil(x)
> >> Array.prototype.join(sep)
> >>
> >> I found the following things from testing in various browsers:
> >>
> >> Math.abs(x)
> >>
> >> Webkit is about twice as fast as V8 in the native implementation.
> >> Webkit's performance in the rest is on par with V8's.
> >> Similar performance between type-ignorant polyfills and native
> >> implementation (on all browsers)
> >>
> >> Array.prototype.pop()
> >>
> >> Firefox clearly hasn't optimized the special case for arrays natively.
> >> JS polyfills are insanely slow, with type checking making little
> >> difference.
> >>
> >> Math.ceil(x)
> >>
> >> JS polyfills are significantly slower, but that is explainable by the
> >> better bitwise manipulation available for floats/doubles/etc. in C/C++.
> >>
> >> Mine does it without branching, but a potentially better way is to
> >> decrement if less than 0 and truncate it.
> >>
> >> Webkit is a little faster, but not a lot.
> >>
> >> Array.prototype.join(sep)
> >>
> >> JS standards polyfill rather slow
> >> JS polyfill assuming an array is over twice as fast as the native
> >> implementation (If it optimizes for this case, it should structurally
> >>

Re: [v8-users] Problem with several threads trying to lock an isolate, and the withdrawal of v8::Locker's preemption

2014-06-12 Thread Vyacheslav Egorov
I would like to note that RequestInterrupt was not intended as a
replacement for preemption. We didn't want the callback executing any
JavaScript in the interrupted isolate, so we put the following requirement
on the interrupt callback:


*Registered |callback| must not reenter interrupted Isolate.*

This requirement is not checked right now, but nothing is guaranteed to
work if you try to start executing JavaScript in the interrupted isolate
from the callback, or from another thread (by unlocking the isolate in the
callback and allowing another thread to lock it).


Vyacheslav Egorov


On Thu, Jun 12, 2014 at 1:10 PM, juu  wrote:

> Ok, I didn't notice this API available since v8 3.25.
>
> I will have to wait for my team to migrate to a new version of v8 then ...
>
> Thanks
> Julien.
> On Thursday, June 12, 2014 1:44:27 AM UTC+2, Jochen Eisinger wrote:
>>
>>
>>
>>
>> On Tue, Jun 10, 2014 at 8:38 AM, juu  wrote:
>>
>>> Hello everyone,
>>>
>>> I'm trying to implement RequireJS on my JS Engine based on v8 (v8 3.21).
>>> I have a problem with asynchronous loading and evaluation of scripts.
>>>
>>> The main thread initializes v8: it creates its isolate, context, script,
>>> etc.
>>> When the main script is ready, the current isolate is locked and the
>>> script is run.
>>>
>>>  Once a *require(anotherScript)* is encountered (in my main script),
>>> another thread is created which is in charge of loading *anotherScript* and
>>> executing it as soon as possible.
>>>
>>> My problem is that the main thread locks the current isolate until the
>>> whole main script is executed, which leaves no chance for *anotherScript* to
>>> be called asynchronously; actually it's always executed synchronously,
>>> since *anotherScript* manages to lock the current isolate only once the
>>> main thread is finished and unlocks the current isolate.
>>>
>>> I use v8::Locker and v8::Unlocker to deal with my "multithreaded" use of
>>> v8. In my version of v8 (3.21), v8::Locker provides a preemption feature
>>> which enables me to give other threads a chance to lock v8
>>> periodically:
>>>
>>> /** Start preemption. When preemption is started, a timer is fired every n
>>> milliseconds that will switch between multiple threads that are in
>>> contention for the V8 lock. */
>>>   static void StartPreemption(int every_n_ms);
>>>
>>> /** Stop preemption.*/
>>>   static void StopPreemption();
>>>
>>> But... this feature is no longer available in later versions of v8
>>> (since 3.23).
>>> This post confirms it:
>>> https://groups.google.com/forum/#!searchin/v8-users/StartPreemption/v8-users/E5jtPC-scp8/H-2yz4Wj_SkJ
>>>
>>> So here are my questions:
>>>
>>> Is there any other way to achieve the preemption v8 used to provide?
>>> Am I supposed to do it myself? I don't think I can - I guess I can't
>>> interrupt/pause the execution properly myself...
>>>
>>
>> I guess you can do this by using the RequestInterrupt API?
>>
>> best
>> -jochen
>>
>>
>>
>>> Am I doing something wrong in my global use of v8 and multiple threads?
>>>
>>>
>>> Thanks a lot !
>>> Julien.
>>>
>>> --
>>> --
>>> v8-users mailing list
>>> v8-u...@googlegroups.com
>>>
>>> http://groups.google.com/group/v8-users
>>> ---
>>> You received this message because you are subscribed to the Google
>>> Groups "v8-users" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to v8-users+u...@googlegroups.com.
>>>
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>>  --
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users
> ---
> You received this message because you are subscribed to the Google Groups
> "v8-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to v8-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



Re: [v8-users] Re: PropertyCallbackInfo::This() return type change

2014-05-14 Thread Vyacheslav Egorov
Consider the following test:

void SelfGetter(Local<String> name,
                const v8::PropertyCallbackInfo<v8::Value>& info) {
  info.GetReturnValue().Set(info.This());
}


THREADED_TEST(GetterInThePrototypeChain) {
  LocalContext env;
  v8::Isolate* isolate = env->GetIsolate();
  v8::HandleScope scope(isolate);

  v8::Handle<v8::ObjectTemplate> obj = ObjectTemplate::New(isolate);
  obj->SetAccessor(v8_str("self"), SelfGetter);
  env->Global()->Set(v8_str("obj"), obj->NewInstance());
  CHECK(CompileRun("Number.prototype.__proto__ = obj;"
                   "var v = 42..self;"
                   "v + ' is a ' + typeof v")->Equals(v8_str("42 is a number")));
}




Vyacheslav Egorov


On Wed, May 14, 2014 at 10:19 AM, Sven Panne  wrote:

> On Tue, May 13, 2014 at 11:15 PM, Ben Noordhuis wrote:
>
>> Yang, isn't that an implementation detail leaking into the API?  [...]
>>
>
> Yup, it is... :-} Fix reverting the external API change under way:
> https://codereview.chromium.org/285643008/
>
> --
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users
> ---
> You received this message because you are subscribed to the Google Groups
> "v8-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to v8-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



Re: [v8-users] Intent to ship: ES6 Map & Set

2014-05-07 Thread Vyacheslav Egorov
What are the performance characteristics and memory footprint one can
expect from Map & Set?

I would like to point out that a lot of built-in features that V8 and other
JS VMs implement (e.g. Array.prototype.forEach) are never used because they
are perceived as slow (and they are actually slow for various reasons).

Can we proactively avoid falling down the same hole with ES6 features?

What about having an ES6-features (micro)benchmark suite to drive the
performance of these features across all browsers implementing them?
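
As a sketch of the kind of microbenchmark I mean (hand-written here for
illustration, not part of any existing suite), even something as simple as
timing bulk inserts and lookups would let engines be compared:

function benchMap(n) {
  var m = new Map();

  var start = Date.now();
  for (var i = 0; i < n; i++) m.set(i, i);      // bulk insertion
  var writeMs = Date.now() - start;

  start = Date.now();
  var sum = 0;
  for (var j = 0; j < n; j++) sum += m.get(j);  // bulk lookup
  var readMs = Date.now() - start;

  return { writeMs: writeMs, readMs: readMs, sum: sum };
}

benchMap(1e6);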


Vyacheslav Egorov


On Tue, May 6, 2014 at 9:22 PM, 'Erik Arvidsson' via v8-users <
v8-users@googlegroups.com> wrote:

> Map & Set are both part of ES6 [1], [2].
>
> They are shipping in Firefox since version 13 [3] and Internet Explorer 11
> [4]. They are also turned on by default for nightly WebKit/JSC.
>
> Adam Klein recently re-implemented the backing hash table used by both Map
> and Set to use an ordered hash table, which is a requirement for
> deterministic insertion order iteration. With that we were able to add
> support for forEach which we saw as a must have for parity with Firefox and
> Internet Explorer.
>
> This is not a full implementation of Map and Set. Most notably it does not
> include @iterator, entries, values nor keys. This is also the lowest common
> denominator between IE and FF. We plan to send out further intent to ship
> emails before we ship the remaining features of Map and Set.
>
> Owners: ad...@chromium.org, a...@chromium.org
>
> [1] http://people.mozilla.org/~jorendorff/es6-draft.html#sec-map-objects
> [2] http://people.mozilla.org/~jorendorff/es6-draft.html#sec-set-objects
> [3] https://developer.mozilla.org/en-US/Firefox/Releases/13
> [4] http://msdn.microsoft.com/en-us/library/ie/dn342892(v=vs.85).aspx
>
>  --
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users
> ---
> You received this message because you are subscribed to the Google Groups
> "v8-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to v8-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>



Re: [v8-users] Disappearing closure bindings

2013-12-04 Thread Vyacheslav Egorov
> Here is a simple example of the issue from the JS point of view:

You actually can't have a function declaration inside an if-statement but
engines allow that for compat reasons.

Now what you have written is equivalent to:

function preprocessor(source, url, listenerName) {
  function wrapSource(source, url, listenerName) {
    console.log('closeOverMe=' + closeOverMe);
    return source + '\n//' + postfix;
  }

  var closeOverMe;

  if (!window.wasCompiledPreviously) {
    closeOverMe = 'I am closed over';
    console.log('unclosed closeOverMe=' + closeOverMe);
    window.wasCompiledPreviously = true;
  }

  return wrapSource(source, url, listenerName);
}

Now you should be able to see why closeOverMe is undefined on the second
invocation.

Honestly, I am not sure I understand the intent of the code. A function
literal / declaration creates a new closure every time it is executed;
you can't cache it like that. You need to explicitly save it in a variable.

I would say that the clean JavaScript way to do this is something along
these lines:

var preprocessor = (function () {
  var wrapSource = null;

  function preprocessor(source, url, listenerName) {
    if (wrapSource === null) {
      var closeOverMe = 'I am closed over';
      wrapSource = function wrapSource(source, url, listenerName) {
        console.log('closeOverMe=' + closeOverMe);
        return source + '\n//' + postfix;
      };
    }
    return wrapSource(source, url, listenerName);
  }

  return preprocessor;
})();


Vyacheslav Egorov


On Thu, Dec 5, 2013 at 12:04 AM,  wrote:

> Let me start with a request for patience, I have a complex problem and I'm
> unsure on some of the V8 terminology.
>
> Chrome's DevTools supports "script preprocessing": from the DevTools you
> can reload a Web page and preprocess every thing that will go into V8 with
> a JS to JS preprocessor.  This allows tracing and runtime analysis tools
> based on recompilation to be implemented in JS.
>
> In applying this preprocessor with the traceur-compiler (
> https://github.com/google/traceur-compiler) I hit a snag: functions
> within the JS preprocessor sometimes reference 'undefined' rather than the
> object expected. ("sometimes" here is one kind of complication, the
> failures are deterministic and in simple cases the failure 100%,
> thankfully).  These undefined references are always pointing to objects
> created in closure environments.
>
> As the web page loads, scripts from the browser enter V8, V8 emits a
> before-compile event, the preprocessor runs and returns modified code, then
> V8 proceeds with its work.  The first event works; subsequent ones all fail.
>
> The preprocessor itself is running in a separate Context modeled after the
> Chrome browser's content-script mechanism. We compile the JS preprocessor
> into this Context to obtain a C++ reference to a function within the
> Context. Then we call the function from C++ every time we get the V8
> before-compile event.
>
> Here is a simple example of the issue from the JS point of view:
>
> function preprocessor(source, url, listenerName) {
>   if (!window.wasCompiledPreviously) {
>     var closeOverMe = 'I am closed over';
>     console.log('unclosed closeOverMe=' + closeOverMe);
>     function wrapSource(source, url, listenerName) {
>       console.log('closeOverMe=' + closeOverMe);
>       return source + '\n//' + postfix;
>     }
>     window.wasCompiledPreviously = true;
>   }
>   return wrapSource(source, url, listenerName);
> }
>
> On the first preprocessor call the console log messages are ok, but
> subsequent calls will have closeOverMe undefined. So the 'window' state is
> being saved between calls and the function wrapSource() can be called, but
> the thing that closeOverMe points to has gone away.
>
> I'm hoping that someone reading this far will say "Oh, that means you did not
> ..." about the V8 blink binding code.  If this does not ring any bells I'll
> have to start asking about the details of how Blink calls into V8 for this
> case.
>
> Thanks,
> jjb
>
> --
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users
> ---
> You received this message because you are subscribed to the Google Groups
> "v8-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to v8-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/groups/opt_out.
>



Re: [blink-dev] Re: [v8-users] Intent to Implement Promises in V8

2013-10-04 Thread Vyacheslav Egorov
Recently I was made aware of a userland promises library that Petka
Antonov (cc'ed) implemented with a focus on performance.

https://github.com/petkaantonov/bluebird

There seem to be some meaningful benchmarks mentioned there in the section
about Benchmarking.

You might be interested in taking a look at his code.


Vyacheslav Egorov


On Fri, Oct 4, 2013 at 7:27 PM, Yusuke SUZUKI wrote:

> We're currently working on adding a threading API, probably similar to
>> blink's WebThread.
>
>
> Sounds very nice. Providing embedder-side threading interfaces
> to V8 is needed for Promises.
> Is there already any discussion about design of a threading API?
>
>
> On Sat, Oct 5, 2013 at 2:50 AM, Jochen Eisinger wrote:
>
>> To clarify, we won't expose threads to the language, but clean-up the
>> thread usage of V8 internally, e.g. the optimizing compiler thread.
>>
>> best
>> -jochen
>>
>>
>> On Fri, Oct 4, 2013 at 7:37 PM, Dirk Pranke  wrote:
>>
>>> On Fri, Oct 4, 2013 at 3:29 AM, Jochen Eisinger wrote:
>>>
>>>>
>>>> We're currently working on adding a threading API, probably similar to
>>>> blink's WebThread.
>>>>
>>>>
>>> We are?
>>>
>>> -- Dirk
>>>
>>>
>>
>>  --
>> --
>> v8-users mailing list
>> v8-users@googlegroups.com
>> http://groups.google.com/group/v8-users
>> ---
>> You received this message because you are subscribed to the Google Groups
>> "v8-users" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to v8-users+unsubscr...@googlegroups.com.
>> For more options, visit https://groups.google.com/groups/opt_out.
>>
>
>  --
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users
> ---
> You received this message because you are subscribed to the Google Groups
> "v8-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to v8-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/groups/opt_out.
>



Re: [v8-users] Re: --nouse-idle-notification and "last resort gc"

2013-06-06 Thread Vyacheslav Egorov
Do you report the size of your enormous Judy array to v8 via
AdjustAmountOfExternallyAllocatedMemory? If you do, try disabling that.

Vyacheslav Egorov
On Jun 6, 2013 10:22 AM, "Hitesh Gupta"  wrote:

> Hi,
>   We are facing a similar problem. We have an XMPP server running over
> node.js on a machine with 3.8 GB of RAM available. However, at around 400mb of
> heap usage, v8 starts seeing a false-positive OOM and starts triggering last
> resort gc. The detailed problem description and the gc trace can be found at
> https://groups.google.com/forum/?fromgroups#!topic/v8-users/pnhQsNxUhs4 .
>
>   Please let us know if any cause or resolution has been identified for
> similar problems.
>
> Regards,
> Hitesh.
>
> On Monday, November 5, 2012 8:31:52 PM UTC+5:30, Joran Dirk Greef wrote:
>>
>> Thanks Vyacheslav.
>>
>> I thought it may be some kind of OOM situation, but was surprised that
>> this would be the case, given all the memory available to the process.
>> Running top command shows 32GB used memory but I assume this is all disk
>> cache, since there are no other user programs shown in top apart from the
>> node process itself which is shown to be around 2GB used memory. The node
>> process accesses files where the total data set is over 32GB so it makes
>> sense that Linux would grow the disk cache? Would something like
>> overcommit_memory=1 help V8 here? It seems like V8 is seeing a false
>> positive OOM. There really should be more than enough RAM.
>>
>> As to the kind of allocation, it seems to be caused by calling
>> buffer.toString, which drops out to C++ to convert the buffer to a string
>> which it passes back. So essentially any 1-2MB readFile('binary' or 'utf8'
>> or 'ascii') seems to trigger it. Interestingly enough, reading the file as
>> a pure buffer does not cause the allocation error and returns within a few
>> ms. And then converting the buffer to a string manually in JS does not
>> cause any further GC either.
>>
>> I will give your suggestion re: CollectAllAvailableGarbage a try and post
>> the results here.
>>
>> What I was wanting to do was to set the GC limits very high, as you say,
>> to try and prevent it from anything non-incremental, since the heap has
>> millions of persistent objects. I was hoping there would be a way to
>> configure this using flags, or make exposing gc cause V8 to refrain from
>> doing anything non-incremental, except when gc() is called.
>>
>> Your help is much appreciated.
>>
>> On Monday, November 5, 2012 4:43:30 PM UTC+2, Vyacheslav Egorov wrote:
>>>
>>> Hello Joran,
>>>
>>> "last resort gc" means that there was an allocation failure that a
>>> normal GC could "resolve". Basically you are in a kinda OOM situation.
>>> I am kinda curious what kind of allocation it is. Probably it is some
>>> very big object. It can be that allocation attempt does not correctly
>>> fall into allocating from LO space.
>>>
>>> One thing though is that last-resort GC could be much more lightweight
>>> for a node.js application than it is currently. I doubt 7 GCs in a row
>>> are very helpful. As a workaround you can go into
>>> Heap::CollectAllAvailableGarbage and replace everything inside with
>>>
>>> CollectGarbage(OLD_POINTER_SPACE, gc_reason);
>>>
>>> This should get rid of the 7 repetitive GCs. I think for an application
>>> like yours it makes perfect sense to set internal GC limits very high
>>> and let incremental GC crunch things instead of falling back to
>>> non-incremental marking. But there is currently no way to configure
>>> the GC like that.
>>> Vyacheslav Egorov
>>>
>>>
>>> On Mon, Nov 5, 2012 at 12:50 AM, Joran Dirk Greef 
>>> wrote:
>>> > Max-old-space-size is measured in MB not KB as you suggest.
>>> >
>>> > Further, max-new-space-size makes no difference to the GC trace given
>>> above,
>>> > whether it's passed as flag or not, big or small.
>>> >
>>> > On Monday, November 5, 2012 10:21:11 AM UTC+2, Yang Guo wrote:
>>> >>
>>> >> The short answer is: don't mess with GC settings if you don't know
>>> what
>>> >> you are doing.
>>> >>
>>> >> The long answer is: new space is the part of the heap where
>>> short-living
>>> >> objects are allocated. The GC scans new space on every collection and
>>> >> promotes long-living objects into

Re: [v8-users] Creating persistent weak handles on primitive values did not collected by GC?

2013-04-27 Thread Vyacheslav Egorov
Non-smi values are allocated on the heap, so they should be collected sooner
or later, though they might be temporarily stuck in some cache.

As Ben recommends, it's better to put only Objects & Strings into weak
persistent handles (though strings can also be stuck in some cache
for an indefinite amount of time).

Vyacheslav Egorov


On Sat, Apr 27, 2013 at 10:36 AM, Dmitry Azaraev
wrote:

> > Boolean values are eternal --- they never die. Small integers (31 bits
> > on ia32 and 32 bits on x64) are not even allocated in the heap; they are
> > essentially *values*, so weak reachability is undefined for them.
>Thanks. I tried allocating non-SMI values too and got the same result,
> so it looks like there is some additional rule. Maybe there is an easy way
> to detect which values can be used as weak handles?
>
>
> On Fri, Apr 26, 2013 at 10:54 PM, Vyacheslav Egorov 
> wrote:
>
>> > So is it possible that persistent weak handles built on top of
>> > primitive values are not collected?
>>
>> Boolean values are eternal --- they never die. Small integers (31 bits on
>> ia32 and 32 bits on x64) are not even allocated in the heap; they are
>> essentially *values*, so weak reachability is undefined for them.
>>
>> > is it safe to return a persistent handle via a raw return handle;
>> > instead of return handle_scope.Close(handle); ?
>>
>> Yes. You need to use Close only for Local handles. If you return a
>> Persistent handle you can just return it directly.
>>
>> --
>> Vyacheslav Egorov
>>
>>
>> On Fri, Apr 26, 2013 at 12:42 PM, Dmitry Azaraev wrote:
>>
>>> Hi.
>>>
>>> My question originally comes from the CEF V8 integration, described in
>>> the following issues:
>>> https://code.google.com/p/chromiumembedded/issues/detail?id=323 ,
>>> https://code.google.com/p/chromiumembedded/issues/detail?id=960 .
>>>
>>> In short, the problem looks like this:
>>> CEF creates persistent and weak handles for every created V8 value
>>> (Boolean, Integers). And in this case I found a memory leak when, for
>>> example, our native function returns a primitive value. For complex values
>>> (Object), this does not appear at all.
>>>
>>> So is it possible that persistent weak handles built on top of primitive
>>> values are not collected?
>>>
>>> And an additional question: is it safe to return a persistent handle via a
>>> raw return handle; instead of return handle_scope.Close(handle); ?
>>>
>>>  --
>>> --
>>> v8-users mailing list
>>> v8-users@googlegroups.com
>>> http://groups.google.com/group/v8-users
>>> ---
>>> You received this message because you are subscribed to the Google
>>> Groups "v8-users" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to v8-users+unsubscr...@googlegroups.com.
>>> For more options, visit https://groups.google.com/groups/opt_out.
>>>
>>>
>>>
>>
>> --
>> Vyacheslav Egorov
>> Software Engineer
>> Google Danmark ApS - Skt Petri Passage 5, 2 sal - 1165 København K -
>> CVR nr. 28 86 69 84
>>
>>  --
>> --
>> v8-users mailing list
>> v8-users@googlegroups.com
>> http://groups.google.com/group/v8-users
>> ---
>> You received this message because you are subscribed to a topic in the
>> Google Groups "v8-users" group.
>> To unsubscribe from this topic, visit
>> https://groups.google.com/d/topic/v8-users/616ZM3UWh2k/unsubscribe?hl=en.
>> To unsubscribe from this group and all its topics, send an email to
>> v8-users+unsubscr...@googlegroups.com.
>>
>> For more options, visit https://groups.google.com/groups/opt_out.
>>
>>
>>
>
>
>
> --
> Best regards,
>Dmitry
>
> --
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users
> ---
> You received this message because you are subscribed to the Google Groups
> "v8-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to v8-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/groups/opt_out.
>
>
>





Re: [v8-users] Creating persistent weak handles on primitive values did not collected by GC?

2013-04-26 Thread Vyacheslav Egorov
> So is it possible that persistent weak handles built on top of primitive
values are not collected?

Boolean values are eternal --- they never die. Small integers (31 bits on
ia32 and 32 bits on x64) are not even allocated in the heap; they are
essentially *values*, so weak reachability is undefined for them.

> is it safe to return a persistent handle via a raw return handle; instead of
return handle_scope.Close(handle); ?

Yes. You need to use Close only for Local handles. If you return a Persistent
handle you can just return it directly.

--
Vyacheslav Egorov


On Fri, Apr 26, 2013 at 12:42 PM, Dmitry Azaraev  wrote:

> Hi.
>
> My question originally comes from the CEF V8 integration, described in
> the following issues:
> https://code.google.com/p/chromiumembedded/issues/detail?id=323 ,
> https://code.google.com/p/chromiumembedded/issues/detail?id=960 .
>
> In short, the problem looks like this:
> CEF creates persistent and weak handles for every created V8 value
> (Boolean, Integers). And in this case I found a memory leak when, for
> example, our native function returns a primitive value. For complex values
> (Object), this does not appear at all.
>
> So is it possible that persistent weak handles built on top of primitive
> values are not collected?
>
> And an additional question: is it safe to return a persistent handle via a
> raw return handle; instead of return handle_scope.Close(handle); ?
>
>  --
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users
> ---
> You received this message because you are subscribed to the Google Groups
> "v8-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to v8-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/groups/opt_out.
>
>
>

--
Vyacheslav Egorov
Software Engineer
Google Danmark ApS - Skt Petri Passage 5, 2 sal - 1165 København K -
CVR nr. 28 86 69 84





Re: [v8-users] How to use --trap_on_deopt

2013-03-18 Thread Vyacheslav Egorov
This flag is not intended to be used by JavaScript developers.

It is intended to be used by VM developers.

You need to use a native debugger like gdb (not V8's built-in debugger) or
WinDBG (on Windows) to catch SIGTRAP and then you'll have to look at the
disassembly and the memory state to figure out what is going on.

So unless you have a deep understanding of V8 internals and are proficient
in reading assembly language, this flag is useless.

Vyacheslav Egorov


On Mon, Mar 18, 2013 at 5:19 PM, Nick Evans wrote:

> I would like to use the --trap_on_deopt flag ("put a break point before
> deoptimizing") because it sounds like a way to pause the program in a
> debugger, inspect the local variables and hopefully understand why a
> function has been deoptimised. Unfortunately I cannot find any examples on
> the web of how this flag is used.
>
> I had naively assumed it would just set a breakpoint when and where a
> function was deoptimised and invoke the debugger. Jakob and Yang have
> explained to me that it throws a SIGTRAP signal that needs to be caught
> (see link below). Unfortunately even with this knowledge I still cannot
> work out how to use this flag. All of the following commands result in
> "d8.exe has stopped working" without d8 printing anything:
>
> [d8 3.17.11, built with Visual Studio 2010, Windows 7 x64]
> d8 --trap_on_deopt
> d8 --debugger --trap_on_deopt
> d8 --trap_on_deopt test.js
> d8 --debugger --trap_on_deopt test.js
>
> d8 fails immediately even when it hasn't been passed a js file. d8 runs as
> expected when --trap_on_deopt is omitted.
>
> I have had more success with Node.js:
>
> [node-v0.10.0-x86.msi]
> node --trace_deopt --trap_on_deopt test.js
>
> The script will run up to the deopt but then quits to the OS, ignoring
> several nested catch...finally statements. Including the debug option
> yields:
>
> node --trace_deopt --trap_on_deopt debug test.js
> [deoptimize context: 671033d]
> < debugger listening on port 5858
> connecting... ok
> break in test.js:1
>
> before Node quits to the OS without running any of the script. Node runs
> as expected when --trap_on_deopt is omitted.
>
> I had assumed that this behaviour and the absence of any examples on the
> web indicated an underused and buggy feature but apparently not (
> http://code.google.com/p/v8/issues/detail?id=2583&thanks=2583&ts=1363468366).
> If the exception is thrown by d8/Node to the OS, and not to my catches or
> the debugger, then I'm really confused about what this flag does and how it
> can be used properly.
>
> I would be grateful if someone could provide an example of how to use
> --trap_on_deopt.
>
> --
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users
> ---
> You received this message because you are subscribed to the Google Groups
> "v8-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to v8-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/groups/opt_out.
>
>
>





Re: [v8-users] .map is slower than for

2013-02-04 Thread Vyacheslav Egorov
Andy,

Lack of deforestation is not really *the* main issue in this particular
example.

If you try the fastMap that I attached, you'll see that its overhead is at
least 80% lower.

Vyacheslav Egorov


On Mon, Feb 4, 2013 at 7:02 PM, Andrii Melnykov wrote:

>
> On Monday, February 4, 2013 5:14:05 PM UTC+2, Sven Panne wrote:
>>
>> On Mon, Feb 4, 2013 at 3:58 PM, Andrii Melnykov wrote:
>>
>>> http://hpaste.org/81784 contains a benchmark - slow_test() is twice as
>>> slow as fast_test() [...]
>>>
>>
>> Furthermore, we don't do any deforestation, which is "a bit" hard in
>> JavaScript.
>>
>
> It's all I wanted to hear, great answer. It's apparent lack of
> deforestation that worried me. If deforestation is not attempted at all,
> then no need to report specific cases :)
>
> Andy
>
> --
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users
> ---
> You received this message because you are subscribed to the Google Groups
> "v8-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to v8-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/groups/opt_out.
>
>
>





Re: [v8-users] .map is slower than for

2013-02-04 Thread Vyacheslav Egorov
I'd say it's a known issue that the generic Array built-ins are slower than
handwritten, less generic versions.

The array you are mapping over is extremely small, so all the overheads of a
generic implementation are highlighted. For example, %MoveArrayContents
amounts to 20% of the overhead, while in reality it just swaps around some
pointers.

If you are into functional programming (which I know you are :-)) you can
have your own, less generic version:

Array.prototype.fastMap = function (cb) {
  "use strict";
  var result = new Array(this.length);
  for (var i = 0; i < this.length; i++) result[i] = cb(this[i]);
  return result;
};
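
For example:

[1, 2, 3].fastMap(function (x) { return x * 2; });  // => [2, 4, 6]

(Caching this.length in a local variable would shave off a bit more, but the
version above keeps the sketch short.)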

However, I'd also argue that it is not impossible to optimize the overhead
away in the majority of common cases using a combination of inlining,
constant propagation and some other optimizations; e.g. the hasOwnProperty
check can be fused with the actual load or completely eliminated (depending
on whether the backing store is holey or not). It just requires some
plumbing. Even V8's approach to function inlining does not really work with
higher-order code (instead of checking closure identity, its "literal"
identity should be checked:
https://code.google.com/p/v8/issues/detail?id=2206).
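
A hand-written illustration of that inlining problem (my example, not taken
from the issue): every call to makeAdder below produces a distinct closure,
so an optimizer that keys its inlining decision on closure identity sees a
"different" callee at the same call site each time, even though the function
literal is the same:

function makeAdder(n) {
  // Same literal every time, but a fresh closure per call.
  return function (x) { return x + n; };
}

function sum(array, f) {
  var total = 0;
  for (var i = 0; i < array.length; i++) total += f(array[i]);
  return total;
}

// The call site inside sum() sees two different closures of one literal,
// which defeats closure-identity-based inlining.
sum([1, 2, 3], makeAdder(1));
sum([1, 2, 3], makeAdder(2));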

An alternative approach could provide specializations of map for various
backing storage types. But again, there is no plumbing in V8 to enable that.


Vyacheslav Egorov


On Mon, Feb 4, 2013 at 4:17 PM, Andreas Rossberg wrote:

> Moreover, 'map' has to make a hasOwnProperty check in every iteration.
>
> /Andreas
>
> On 4 February 2013 16:14, Sven Panne  wrote:
> > On Mon, Feb 4, 2013 at 3:58 PM, Andrii Melnykov  >
> > wrote:
> >>
> >> http://hpaste.org/81784 contains a benchmark - slow_test() is twice as
> >> slow as fast_test() [...]
> >
> >
> > The way Array.prototype.map is specified (see section 15.4.4.19 in the
> > ECMA spec) makes it very hard to implement efficiently. One has to create
> > a new array for the result and has to be prepared for the case when the
> > callback function modifies the array. Furthermore, we don't do any
> > deforestation, which is "a bit" hard in JavaScript. Therefore, fast_test()
> > basically does something different than slow_test(): It is optimized
> > knowing the fact that the callback function does not modify the underlying
> > array + it does the deforestation by hand, avoiding the need for an
> > intermediate array.
> >
> > In a nutshell: It shouldn't be a surprise that fast_test() is, well,
> > faster than slow_test()...
> >
> > --
> > --
> > v8-users mailing list
> > v8-users@googlegroups.com
> > http://groups.google.com/group/v8-users
> > ---
> > You received this message because you are subscribed to the Google Groups
> > "v8-users" group.
> > To unsubscribe from this group and stop receiving emails from it, send an
> > email to v8-users+unsubscr...@googlegroups.com.
> > For more options, visit https://groups.google.com/groups/opt_out.
> >
> >
>
> --
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users
> ---
> You received this message because you are subscribed to the Google Groups
> "v8-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to v8-users+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/groups/opt_out.
>
>
>





Re: [v8-users] Resize an Array in C++ ?

2012-12-07 Thread Vyacheslav Egorov
It should work.

Please post a complete reproduction of the problem that can be compiled:
the example above should not compile, because Set expects a handle, not a
raw number.

Vyacheslav Egorov


On Fri, Dec 7, 2012 at 3:24 PM, Paul Harris  wrote:

> Hi,
>
> How do I do the equivalent of:
> var x = [1,2,3];
> x.length = 1; // THIS BIT
>
> I tried doing
> Array * target = etc;
> target->Set( String::NewSymbol("length"), 1 );
> but that didn't work as expected, or crashed in the case of zero.
>
> ideas?
> Thanks,
> Paul
>

Re: [v8-users] What causes a function to be "optimized too many times"? How do I avoid it?

2012-11-08 Thread Vyacheslav Egorov
Figuring out which value requires looking at the IR dumped with
--trace-hydrogen.

As for logs: on Windows you can patch your chrome.exe as I describe here:

http://mrale.ph/blog/2012/06/21/v8s-flags-and-chrome-on-windows.html

and then a simple unix style redirection works from command prompt:

chrome.exe --no-sandbox --js-flags="--trace-opt --trace-deopt" > log.txt

Vyacheslav Egorov


On Thu, Nov 8, 2012 at 4:48 AM, Kevin Gadd  wrote:
> Interesting, I wonder why --trace-deopt isn't spitting out deopt notices for
> me. Maybe some of the output is being lost because I'm using WinDbg to
> capture it. I used to get deopt output there, though...
>
> Is there a way to tell which value is causing check-prototype-maps to fail?
> Is it a check performed on the this-reference?
>
> Thanks for taking a look, I appreciate it. I did some more testing using the
> release version of chrome and at present most JSIL code seems to perform
> dramatically better there - I'm seeing 4-5x performance regressions for some
> simple hot functions in Canary, like this one for example (source from a
> local build - haven't uploaded it to production yet because I'm wary of
> making things worse):
>
> function KinematicBody_get_DynamicAreaSubPx () {
>   var areaPosition = this._area.get_PositionSubPx();
>   var x = ((areaPosition.X - this.HalfWidthSubPx) | 0);
>   var y = ((areaPosition.Y - this.HalfHeightSubPx) | 0);
>   var w = ((this.HalfWidthSubPx * 2) | 0);
>   var h = ((this.HalfHeightSubPx * 2) | 0);
>   if (!((this._DynamicAreaSubPx.X === x) &&
>   (this._DynamicAreaSubPx.Y === y) &&
>   (this._DynamicAreaSubPx.Width === w) &&
> (this._DynamicAreaSubPx.Height === h))) {
> this._DynamicAreaSubPx = new ($T15())(x, y, w, h);
>   }
>   return this._DynamicAreaSubPx;
> }
>
> In that function all the direct property accesses aren't going through
> getter/setter functions, so there shouldn't be very much actually happening
> in there. This seems to be supported by it performing fine in release branch
> Chrome. It makes me wonder if some particular pattern in my generated code
> is causing newer revisions of V8 some grief (lazy initialization, perhaps?)
>
> Sven, let me know if I can provide you additional details (or chrome traces,
> or whatever) to help you investigate this.
>
> Thanks,
> -kg
>
>
>
> On Wed, Nov 7, 2012 at 9:29 AM, Vyacheslav Egorov 
> wrote:
>>
>> I asked because it is highly unlikely that any big application runs
>> without deopts.
>>
>> I just tried to run the game in Chrome Canary on Mac with
>> --js-flags="--trace-deopt --code-comments" and I saw many deopts.
>>
>> DrawScaleF constantly deopts on check-prototype-maps.
>>
>>  DEOPT: DrawScaleF at bailout #19, address 0x0, frame size 40
>> ;;; @292: check-prototype-maps.
>> [deoptimizing: begin 0x51b14d05 DrawScaleF @19]
>>   translating DrawScaleF => node=190, height=76
>> 0xbff70d18: [top + 128] <- 0x585febd1 ; [sp + 92] 0x585febd1 <Microsoft_Xna_Framework_Graphics_SpriteBatch>
>> 0xbff70d14: [top + 124] <- 0x24ed70d1 ; [sp + 88] 0x24ed70d1 <HTML5ImageAsset>
>> 0xbff70d10: [top + 120] <- 0x4e4b96d9 ; [sp + 84] 0x4e4b96d9 <Object>
>> 0xbff70d0c: [top + 116] <- 0x4e4e07d5 ; [sp + 80] 0x4e4e07d5 <Microsoft_Xna_Framework_Rectangle>
>> 0xbff70d08: [top + 112] <- 0x4e4cd1ed ; [sp + 76] 0x4e4cd1ed <Microsoft_Xna_Framework_Color>
>> 0xbff70d04: [top + 108] <- 0x00000000 ; [sp + 72] 0
>> 0xbff70d00: [top + 104] <- 0x462f1199 ; [sp + 68] 0x462f1199 <Microsoft_Xna_Framework_Vector2>
>> 0xbff70cfc: [top + 100] <- 0x00000002 ; [sp + 64] 1
>> 0xbff70cf8: [top + 96] <- 0x449a2edd ; [sp + 60] 0x449a2edd <Microsoft_Xna_Framework_Graphics_SpriteEffects>
>> 0xbff70cf4: [top + 92] <- 0x47b2609d ; [sp + 56] 0x47b2609d
>>
>> 0xbff70cf0: [top + 88] <- 0x223b0d4b ; caller's pc
>> 0xbff70cec: [top + 84] <- 0xbff70d28 ; caller's fp
>> 0xbff70ce8: [top + 80] <- 0x51b12e11 ; context
>> 0xbff70ce4: [top + 76] <- 0x51b14d05 ; function
>> 0xbff70ce0: [top + 72] <- 0x000000a0 ; [sp + 28] 80
>> 0xbff70cdc: [top + 68] <- 0x00000040 ; [sp + 20] 32
>> 0xbff70cd8: [top + 64] <- 0x00000020 ; [sp + 24] 16
>> 0xbff70cd4: [top + 60] <- 0x00000020 ; [sp + 12] 16
>> 0xbff70cd0: [top + 56] <- 0x51b14d05 ; [sp + 16] 0x51b14d05 <Function DrawScaleF>
>> 0xbff70ccc: [top + 52] <- 0x585febd1 ; [sp + 92] 0x58

Re: [v8-users] What causes a function to be "optimized too many times"? How do I avoid it?

2012-11-07 Thread Vyacheslav Egorov
I asked because it is highly unlikely that any big application runs
without deopts.

I just tried to run the game in Chrome Canary on Mac with
--js-flags="--trace-deopt --code-comments" and I saw many deopts.

DrawScaleF constantly deopts on check-prototype-maps.

 DEOPT: DrawScaleF at bailout #19, address 0x0, frame size 40
;;; @292: check-prototype-maps.
[deoptimizing: begin 0x51b14d05 DrawScaleF @19]
  translating DrawScaleF => node=190, height=76
0xbff70d18: [top + 128] <- 0x585febd1 ; [sp + 92] 0x585febd1 <Microsoft_Xna_Framework_Graphics_SpriteBatch>
0xbff70d14: [top + 124] <- 0x24ed70d1 ; [sp + 88] 0x24ed70d1 <HTML5ImageAsset>
0xbff70d10: [top + 120] <- 0x4e4b96d9 ; [sp + 84] 0x4e4b96d9 <Object>
0xbff70d0c: [top + 116] <- 0x4e4e07d5 ; [sp + 80] 0x4e4e07d5 <Microsoft_Xna_Framework_Rectangle>
0xbff70d08: [top + 112] <- 0x4e4cd1ed ; [sp + 76] 0x4e4cd1ed <Microsoft_Xna_Framework_Color>
0xbff70d04: [top + 108] <- 0x00000000 ; [sp + 72] 0
0xbff70d00: [top + 104] <- 0x462f1199 ; [sp + 68] 0x462f1199 <Microsoft_Xna_Framework_Vector2>
0xbff70cfc: [top + 100] <- 0x00000002 ; [sp + 64] 1
0xbff70cf8: [top + 96] <- 0x449a2edd ; [sp + 60] 0x449a2edd <Microsoft_Xna_Framework_Graphics_SpriteEffects>
0xbff70cf4: [top + 92] <- 0x47b2609d ; [sp + 56] 0x47b2609d

0xbff70cf0: [top + 88] <- 0x223b0d4b ; caller's pc
0xbff70cec: [top + 84] <- 0xbff70d28 ; caller's fp
0xbff70ce8: [top + 80] <- 0x51b12e11 ; context
0xbff70ce4: [top + 76] <- 0x51b14d05 ; function
0xbff70ce0: [top + 72] <- 0x000000a0 ; [sp + 28] 80
0xbff70cdc: [top + 68] <- 0x00000040 ; [sp + 20] 32
0xbff70cd8: [top + 64] <- 0x00000020 ; [sp + 24] 16
0xbff70cd4: [top + 60] <- 0x00000020 ; [sp + 12] 16
0xbff70cd0: [top + 56] <- 0x51b14d05 ; [sp + 16] 0x51b14d05 <Function DrawScaleF>
0xbff70ccc: [top + 52] <- 0x585febd1 ; [sp + 92] 0x585febd1 <Microsoft_Xna_Framework_Graphics_SpriteBatch>
0xbff70cc8: [top + 48] <- 0x24ed70d1 ; [sp + 88] 0x24ed70d1 <HTML5ImageAsset>
0xbff70cc4: [top + 44] <- 0x000002f0 ; [sp + 8] 376
0xbff70cc0: [top + 40] <- 0x00000010 ; [sp + 4] 8
0xbff70cbc: [top + 36] <- 0x00000020 ; [sp + 24] 16
0xbff70cb8: [top + 32] <- 0x00000020 ; [sp + 12] 16
0xbff70cb4: [top + 28] <- 0x000000a0 ; [sp + 28] 80
0xbff70cb0: [top + 24] <- 0x00000040 ; [sp + 20] 32
0xbff70cac: [top + 20] <- 0x00000020 ; [sp + 24] 16
0xbff70ca8: [top + 16] <- 0x00000020 ; [sp + 12] 16
0xbff70ca4: [top + 12] <- 0x4e4cd1ed ; [sp + 76] 0x4e4cd1ed <Microsoft_Xna_Framework_Color>
0xbff70ca0: [top + 8] <- 0x00000000 ; [sp + 72] 0
0xbff70c9c: [top + 4] <- 0x00000010 ; [sp + 0] 8
0xbff70c98: [top + 0] <- 0x00000010 ; eax 8
[deoptimizing: end 0x51b14d05 DrawScaleF => node=190, pc=0x497cac9d,
state=NO_REGISTERS, alignment=no padding, took 0.060 ms]
[removing optimized code for: DrawScaleF]

I do not see such deopt on Chrome 23 (though I did see some deopts of
this function). This indeed looks like an issue either with type
feedback or with generated code, though I can't be sure.

Sven recently was changing things in that neighborhood. I am CCing
him. I hope he will be able to help.
Vyacheslav Egorov


On Tue, Nov 6, 2012 at 4:16 PM, Kevin Gadd  wrote:
> Hi Vyacheslav,
>
> Yeah, as I said I ran with trace-opt, trace-bailout and trace-deopt turned
> on. So 'disabled optimization for' doesn't mean the function is deoptimized?
> That's really surprising to me, because I see a performance hit for those
> functions, and I assume that optimization being turned off would mean that
> the functions would have to run using unoptimized JIT output. That's not the
> case then? Does that mean that this error message doesn't matter, and it's
> intended that these functions keep getting recompiled until they hit the
> limit?
>
> It would be cool to know how to find out why the functions keep getting
> marked for recompilation, since the compiles seem to be taking time, but I
> guess that's less of an issue.
>
> Thanks,
> -kg
>
>
> On Tue, Nov 6, 2012 at 8:04 AM, Vyacheslav Egorov 
> wrote:
>>
>> Hi Kevin,
>>
>> Does it deoptimize?
>>
>> I do not see any deoptimizations in the log you have attached, were
>> you running with --trace-deopt?
>>
>> Vyacheslav Egorov
>>
>>
>> On Tue, Nov 6, 2012 at 12:33 AM, Kevin Gadd  wrote:
>> > Hi,
>> >
>> > I've been looking into some performance issues in Chrome Canary for a
>> > HTML5
>> > game I released a little while back. At present, Firefox Nightly runs
>> > this
>> > game a lot faster than Canary does, which is surprising to me because
>> > Chrome
>> > has a much better Canvas backend and used to do much better at running
>> > this
>> > game. From doing some profiling and comparing profiles between the
>> > browsers,
>> > I am pretty sure I am running into a V8 issue here - perhaps because
>> > something is 

Re: [v8-users] Does changing the type passed to a constructor change a hidden class (and other queries)?

2012-11-06 Thread Vyacheslav Egorov
In V8 currently most assumptions are made and checked at uses, not at
definitions.

Consider for example:

var p = new Point(1, 2);

function add(p) {
  return p.x + p.y
}

add(p);

Here the fact that p.x and p.y are numbers is checked at the +
operation. If add is optimized under these assumptions and you pass a
point that contains strings in x and y, then add will deoptimize. But
this will happen when you execute add, not when you create the point
with string values in x and y.
Vyacheslav Egorov


On Tue, Nov 6, 2012 at 3:44 PM, Wyatt  wrote:
> Interesting! But wouldn't each of these cases nullify any assumptions
> made by the type-specializing JIT?
>


Re: [v8-users] Does changing the type passed to a constructor change a hidden class (and other queries)?

2012-11-06 Thread Vyacheslav Egorov
The answer is no for each case.

V8 does not track types of values assigned to named properties.

Vyacheslav Egorov


On Tue, Nov 6, 2012 at 12:27 PM, Wyatt  wrote:
> With the following code, one hidden class is created:
>
> function Point(x, y) {
> this.x = x;
> this.y = y;
> }
> var p1 = new Point(11, 22);
> var p2 = new Point(33, 44);
>
> Will p2.x="aString"; change its hidden class?
>
> Will p2.x=undefined; change its hidden class?
>
> Will p3= new Point(42,"theAnswer"); create a new hidden class?
>
> I'm inclined to think that the answer is yes for each case...?
> Or at least each of these cases seems as if it could not be fully optimized.
>
> Any help is much appreciated!
>


Re: [v8-users] What causes a function to be "optimized too many times"? How do I avoid it?

2012-11-06 Thread Vyacheslav Egorov
Hi Kevin,

Does it deoptimize?

I do not see any deoptimizations in the log you have attached, were
you running with --trace-deopt?

Vyacheslav Egorov


On Tue, Nov 6, 2012 at 12:33 AM, Kevin Gadd  wrote:
> Hi,
>
> I've been looking into some performance issues in Chrome Canary for a HTML5
> game I released a little while back. At present, Firefox Nightly runs this
> game a lot faster than Canary does, which is surprising to me because Chrome
> has a much better Canvas backend and used to do much better at running this
> game. From doing some profiling and comparing profiles between the browsers,
> I am pretty sure I am running into a V8 issue here - perhaps because
> something is wrong with my JS.
>
> I ran the game with trace-opt, trace-bailout and trace-deopt turned on. I
> see tons and tons of marking and optimization happening while the game is
> running, and it never settles down, despite the fact that the game is not
> particularly dynamic once it gets going - it reaches a steady state where it
> is not generating tons of code on the fly, and types are not changing, so it
> shouldn't be necessary to constantly recompile functions.
>
> Worse, though, I see a lot of these messages:
>
> [disabled optimization for Game_EnqueueTick, reason: optimized too many
> times]
> [disabled optimization for KinematicBody_get_SupportingBody, reason:
> optimized too many times]
> [disabled optimization for SpriteBatch_InternalDraw, reason: optimized too
> many times]
> [disabled optimization for DrawScaleF, reason: optimized too many times]
> [disabled optimization for SpriteBatch_DeferBlit, reason: optimized too many
> times]
>
> From looking at the code, this appears to be a deopt that is hit once a
> function has been optimized 1000 times. I can't imagine why these functions
> would be optimized 1000 times in the first place, and the deopt seems to be
> hurting them because profiles in the Web Inspector show some of these
> functions as bottlenecks - but in SpiderMonkey they barely contribute to CPU
> time in comparison (and some of them are extremely simple).
>
> For example:
>
> function DrawScaleF (texture, position, sourceRectangle, color,
> rotation, origin, scale, effects, layerDepth) {
>   var sourceX = 0, sourceY = 0, sourceWidth = 0, sourceHeight = 0;
>   if (sourceRectangle !== null) {
> sourceX = sourceRectangle.X;
> sourceY = sourceRectangle.Y;
> sourceWidth = sourceRectangle.Width;
> sourceHeight = sourceRectangle.Height;
>   } else {
> sourceWidth = texture.Width;
> sourceHeight = texture.Height;
>   }
>
>   this.InternalDraw(
> texture, position.X, position.Y, sourceWidth, sourceHeight,
> sourceX, sourceY, sourceWidth, sourceHeight,
> color, rotation,
> origin.X, origin.Y,
> scale, scale,
> effects, layerDepth
>   );
> }
>
> This function is pretty simple, and the type information should basically
> never change. sourceRectangle is always either null or an instance of one
> specific class, texture is always an instance of one specific class, etc.
> The function isn't doing any arithmetic or calling complex functions, it's
> basically just a wrapper around InternalDraw.
>
> Normally I would expect this function to entirely get optimized out, or at
> least get reduced down to some really simple code. That appears to be what
> happens in SpiderMonkey.
>
> So, for stuff like this, what steps should I take to understand why a
> function is being recompiled lots of times, and how can I work around this?
> If I manage to get the hydrogen IR dumped, can I look for particular warning
> signs in the IR for these functions? Are there some other debug flags I can
> pass to the runtime to get diagnostic information here?
>
> If you want to test the game yourself with flags set, it's at
> http://www.playescapegoat.com/. Just playing one or two stages should be
> enough to generate lots of those messages.
>
> Possibly related: In the logs, sometimes I see it reoptimize the same
> function like ten times back to back. Is this right? It seems like it
> shouldn't happen.
>
> [marking DrawScaleF 0x2c69a038 for recompilation, reason: small function,
> ICs with typeinfo: 11/11 (100%)]
> [optimizing: DrawScaleF / 2c69a039 - took 0.000, 0.000, 0.000 ms]
> [marking DrawScaleF 0x2c69a038 for recompilation, reason: small function,
> ICs with typeinfo: 11/11 (100%)]
> [optimizing: DrawScaleF / 2c69a039 - took 0.000, 0.000, 0.000 ms]
> [marking DrawScaleF 0x2c69a038 for recompilation, reason: small function,
> ICs with typeinfo: 11/11 (100%)]
> [optimizing: DrawScaleF / 2c69a039 

Re: [v8-users] Re: [V8-Users] V8 3.6.6.25 With Max-Old-Space-Size Greater Than 1900MB?

2012-11-05 Thread Vyacheslav Egorov
Depends on the operating system you are running on.

Check out man backtrace if you are on a Mac/Linux/BSD-like OS.
Vyacheslav Egorov


On Mon, Nov 5, 2012 at 8:17 AM, Joran Greef  wrote:
> How can I get such a back trace?
>
> On 05 Nov 2012, at 6:14 PM, v8-users@googlegroups.com wrote:
>
>> Node should not be able to trigger last resort gc.
>>
>> It can be that recent changes in V8 changed allocation patterns for
>> some large object (array, properties backing store etc) and this now
>> causes last resort GC to happen.
>>
>> Unfortunately it is impossible to figure out what is going on unless
>> you can somehow get a back trace from inside
>> CollectAllAvailableGarbage.
>>
>> --
>> Vyacheslav Egorov
>>
>>
>> On Mon, Nov 5, 2012 at 7:08 AM, Joran Dirk Greef  wrote:
>>> In practice it's working perfectly now. I rolled Node from v0.8 back to v0.6
>>> and the false positive allocation errors are no longer happening. There's no
>>> more "last resort gc". Load has dropped from 100% to 1%. The gc trace looks
>>> normal now. I assumed the GC errors were due to the different version of V8
>>> bundled with Node. Perhaps it's something in Node triggering full GC
>>> repetitively? Would Node trigger GC by itself?
>>>
>>>
>>> On Monday, November 5, 2012 4:53:59 PM UTC+2, Vyacheslav Egorov wrote:
>>>>
>>>>> Recent GC changes are unable to handle millions of long-lived entities.
>>>>> V8 3.6.6.25 GC works perfectly.
>>>>
>>>> Contrary to what you might think, worst-case pause time for V8 3.6.x and
>>>> V8 3.7 - 3.15 should be roughly the same. V8 3.7 will also do 7 GCs in a
>>>> row as a last resort.
>>>>
>>>> However in 3.6, if you hit a full collection it will always pause your
>>>> app for much longer than an incremental collector of 3.7 and later
>>>> would (given that everything is tweaked correctly).
>


Re: [v8-users] V8 3.6.6.25 with max-old-space-size greater than 1900MB?

2012-11-05 Thread Vyacheslav Egorov
Node should not be able to trigger last resort gc.

It can be that recent changes in V8 changed allocation patterns for
some large object (array, properties backing store etc) and this now
causes last resort GC to happen.

Unfortunately it is impossible to figure out what is going on unless
you can somehow get a back trace from inside
CollectAllAvailableGarbage.

--
Vyacheslav Egorov


On Mon, Nov 5, 2012 at 7:08 AM, Joran Dirk Greef  wrote:
> In practice it's working perfectly now. I rolled Node from v0.8 back to v0.6
> and the false positive allocation errors are no longer happening. There's no
> more "last resort gc". Load has dropped from 100% to 1%. The gc trace looks
> normal now. I assumed the GC errors were due to the different version of V8
> bundled with Node. Perhaps it's something in Node triggering full GC
> repetitively? Would Node trigger GC by itself?
>
>
> On Monday, November 5, 2012 4:53:59 PM UTC+2, Vyacheslav Egorov wrote:
>>
>> > Recent GC changes are unable to handle millions of long-lived entities.
>> > V8 3.6.6.25 GC works perfectly.
>>
>> Contrary to what you might think, worst-case pause time for V8 3.6.x and
>> V8 3.7 - 3.15 should be roughly the same. V8 3.7 will also do 7 GCs in a
>> row as a last resort.
>>
>> However in 3.6, if you hit a full collection it will always pause your
>> app for much longer than an incremental collector of 3.7 and later
>> would (given that everything is tweaked correctly).
>>
>> --
>> Vyacheslav Egorov
>>
>>
>> On Mon, Nov 5, 2012 at 1:28 AM, Joran Dirk Greef 
>> wrote:
>> > Recent GC changes in V8 are wreaking havoc with a production app. GC
>> > traces
>> > are showing pauses of over 22 seconds. Recent GC changes are unable to
>> > handle millions of long-lived entities.
>> >
>> > V8 3.6.6.25 GC works perfectly.
>> >
>> > The one problem now is getting V8 3.6.6.25 to allow max-old-space-size
>> > greater than 1900 MB on Ubuntu.
>> >
>> > Is there any way to run V8 3.6.6.25 with max-old-space-size greater than
>> > 1900 MB?
>> >
>> > Or is there a slightly newer version than 3.6.6.25 which allows bigger
>> > heaps
>> > but without all the new GC work?
>> >
>> > Your help would be much appreciated.
>> >


Re: [v8-users] V8 3.6.6.25 with max-old-space-size greater than 1900MB?

2012-11-05 Thread Vyacheslav Egorov
> Recent GC changes are unable to handle millions of long-lived entities. V8 
> 3.6.6.25 GC works perfectly.

Contrary to what you might think, worst-case pause time for V8 3.6.x and
V8 3.7 - 3.15 should be roughly the same. V8 3.7 will also do 7 GCs in a
row as a last resort.

However in 3.6, if you hit a full collection it will always pause your
app for much longer than an incremental collector of 3.7 and later
would (given that everything is tweaked correctly).

--
Vyacheslav Egorov


On Mon, Nov 5, 2012 at 1:28 AM, Joran Dirk Greef  wrote:
> Recent GC changes in V8 are wreaking havoc with a production app. GC traces
> are showing pauses of over 22 seconds. Recent GC changes are unable to
> handle millions of long-lived entities.
>
> V8 3.6.6.25 GC works perfectly.
>
> The one problem now is getting V8 3.6.6.25 to allow max-old-space-size
> greater than 1900 MB on Ubuntu.
>
> Is there any way to run V8 3.6.6.25 with max-old-space-size greater than
> 1900 MB?
>
> Or is there a slightly newer version than 3.6.6.25 which allows bigger heaps
> but without all the new GC work?
>
> Your help would be much appreciated.
>


Re: [v8-users] Re: --nouse-idle-notification and "last resort gc"

2012-11-05 Thread Vyacheslav Egorov
Hello Joran,

"last resort gc" means that there was an allocation failure that a
normal GC could "resolve". Basically you are in a kinda OOM situation.
I am kinda curious what kind of allocation it is. Probably it is some
very big object. It can be that allocation attempt does not correctly
fall into allocating from LO space.

One thing though is that last resort GC could be much more lightweight
for a node.js application than it is currently. I doubt 7 GCs in a row
are very helpful. As a workaround you can go into
Heap::CollectAllAvailableGarbage and replace everything inside with

CollectGarbage(OLD_POINTER_SPACE, gc_reason);

This should get rid of the 7 repetitive GCs. I think for an application
like yours it makes perfect sense to set internal GC limits very high
and let the incremental GC crunch things instead of falling back to
non-incremental marking. But there is currently no way to configure the
GC like that.
Vyacheslav Egorov


On Mon, Nov 5, 2012 at 12:50 AM, Joran Dirk Greef  wrote:
> Max-old-space-size is measured in MB not KB as you suggest.
>
> Further, max-new-space-size makes no difference to the GC trace given above,
> whether it's passed as flag or not, big or small.
>
> On Monday, November 5, 2012 10:21:11 AM UTC+2, Yang Guo wrote:
>>
>> The short answer is: don't mess with GC settings if you don't know what
>> you are doing.
>>
>> The long answer is: new space is the part of the heap where short-living
>> objects are allocated. The GC scans new space on every collection and
>> promotes long-living objects into the old space. You are setting the new
>> space to ~19GB, which takes a while to scan. Furthermore, you are setting
>> the old space to only 19MB, limiting the part of the heap where long-living
>> objects are being moved to, hence the last resort GC. What you probably want
>> is to specify a large old space size, but leave the new space size at
>> default.
>>
>> Yang
>>
>> On Sunday, November 4, 2012 4:19:11 PM UTC+1, Joran Dirk Greef wrote:
>>>
>>> I am running Node v0.8.14 with --nouse_idle_notification --expose_gc
>>> --max_old_space_size=19000 --max_new_space_size=1900.
>>>
>>> I have a large object used as part of a BitCask style store, keeping a
>>> few million entries.
>>>
>>> Calling gc() manually takes a 3 seconds which is fine as I call it every
>>> 2 minutes.
>>>
>>> The machine has 32GB of RAM and all of this is available to the process,
>>> there is nothing else running.
>>>
>>> The process sits at around 1.9GB of RAM.
>>>
>>> I have found an interesting test case where async reading a 1mb file in
>>> Node takes longer and longer depending on how many entries are in the large
>>> object discussed above:
>>>
>>> Node.fs.readFile('test', 'binary', End.timer())
>>>   347745 ms: Scavenge 1617.4 (1660.4) -> 1611.1 (1660.4) MB, 0 ms
>>> [allocation failure].
>>>   350900 ms: Mark-sweep 1611.5 (1660.4) -> 1512.2 (1633.4) MB, 3153 ms
>>> [last resort gc].
>>>   354072 ms: Mark-sweep 1512.2 (1633.4) -> 1512.0 (1592.4) MB, 3171 ms
>>> [last resort gc].
>>>   357247 ms: Mark-sweep 1512.0 (1592.4) -> 1512.0 (1568.4) MB, 3175 ms
>>> [last resort gc].
>>>   360426 ms: Mark-sweep 1512.0 (1568.4) -> 1512.0 (1567.4) MB, 3178 ms
>>> [last resort gc].
>>>   363620 ms: Mark-sweep 1512.0 (1567.4) -> 1512.0 (1567.4) MB, 3193 ms
>>> [last resort gc].
>>>   366802 ms: Mark-sweep 1512.0 (1567.4) -> 1511.6 (1567.4) MB, 3182 ms
>>> [last resort gc].
>>>   369967 ms: Mark-sweep 1511.6 (1567.4) -> 1511.6 (1567.4) MB, 3164 ms
>>> [last resort gc].
>>> 2012-11-04T14:59:30.700Z INFO 22230ms
>>>
>>> Reading the 1mb file before the large object is created is fast, the
>>> bigger the object becomes the slower the file is to read.
>>>
>>> Why is last resort gc being called if gc is exposed and if the machine
>>> has more than enough RAM?
>>>
>>> What was interesting was that this behaviour does not happen for V8
>>> 3.6.6.25 and earlier.
>>>
>>> The reason I can't use 3.6.6.25 however is that the heap is limited to
>>> 1.9GB and I need more head room than that.
>>>
>>> Is there any way I can disable the last resort GC?
>


Re: [v8-users] make release.check fails as missing cpplint.py

2012-10-24 Thread Vyacheslav Egorov
> Please make sure you have depot_tools in your $PATH.

http://dev.chromium.org/developers/how-tos/install-depot-tools

--
Vyacheslav Egorov


On Wed, Oct 24, 2012 at 11:05 PM, Joe Millenbach  wrote:
> I'm trying to follow the directions on the build page to both build
> and test my shells.  But I get the below error after the builds
> complete...
>
> ...
> cctest/test-platform-linux.o
>   LINK(target) /home/jmillenbach/perf/v8/out/ia32.release/cctest
>   TOUCH /home/jmillenbach/perf/v8/out/ia32.release/obj.target/build/
> All.stamp
> make[1]: Leaving directory `/home/jmillenbach/perf/v8/out'
>>>> running presubmit tests
> Running C++ lint check...
> Error running cpplint.py. Please make sure you have depot_tools in
> your $PATH. Lint check skipped.
> ...
>
> Which is repeated a number of times.  Is there a package I need to
> install?  I did a search and nothing obvious popped up in Google.  I'm
> using Ubuntu 12.04 x64.  And is there a reason this isn't in the
> dependencies I already downloaded at the beginning of the build
> process?
>
> Thanks for your time,
> Joe
>


Re: [v8-users] Are empty functions optimized away whenever known?

2012-10-21 Thread Vyacheslav Egorov
Well, if you create a single global function you will at least save memory
and allocation time (as a function literal creates a new function every time
it is executed).

Additionally, if you always pass an empty function to async_http_get then it's
better to create a single function to help inlining, as explained in my
previous mail.

Anyway all this matters only on a very hot path.

Vyacheslav Egorov
On Oct 21, 2012 7:10 PM, "idleman"  wrote:

> Thanks for your answer!
>
> To make the question somewhat clearer: I invoke a huge number of asynchronous
> functions, but sometimes I just don't care about the result, while the
> function itself requires a callback to pass the result of the operation to.
> Would it in those cases be smarter to create a global do_nothing() function
> and pass it into all the asynchronous functions where I don't care about
> the result, than on invocation just create a new empty function:
>
> //new empty functions each time
> async_http_get("http://statics.com?webpage=abc";, function() { });
> async_flush(function() { });
>
> //or using a global do_nothing function
> function do_nothing() { }
>
> async_http_get("http://statics.com?webpage=abc";, do_nothing);
> async_flush(do_nothing);
>
> My question is whether V8 would more easily optimize away the "callback" call
> when using anonymous empty functions or not, because if it does, it would
> be pointless to create a global do_nothing in the first place.
>
> But from what I understood, V8 does no such optimization? What would be better
> in that case, using a global do_nothing() or not?
>
> Thanks in advance!
>
>
> On Sunday, 21 October 2012 at 16:15:19 UTC+2, Vyacheslav
> Egorov wrote:
>>
>> V8 does inline functions at call sites where the target is observed to be
>> always the same. The inlined body is guarded by an identity check against
>> the identity of the call target. If the guard fails, the code is deoptimized.
>>
>> Thus what matters is whether each call site is monomorphic (sees the
>> same function all the time) or megamorphic (sees different functions).
>>
>> Without seeing complete code it is hard to say whether you will help
>> inlining by creating a single empty function (inlining definitely will not
>> happen if you create new functions and send them to a single call site
>> again and again). But you will definitely save space.
>>
>> --
>> Vyacheslav Egorov
>>  On Oct 20, 2012 10:05 PM, "idleman"  wrote:
>>
>>> Hi,
>>>
>>> Are empty functions inlined whenever the function is known? Example:
>>>
>>> function do_nothing() { }
>>>
>>> //somewhere later in the code:
>>> var cb = do_nothing;
>>> cb(null, "Will this call be inlined/optimized away?");
>>>
>>> Will V8 actually call the function, even if it does nothing? I wonder
>>> because I want to know if it is smarter to create a do_nothing() function
>>> which will be reused over and over again (but not as obvious) or each time
>>> create an empty function { } directly in place and let V8 more easily
>>> optimize away the call.
>>>
>>> Thanks in advance!
>>>
>>>
>>>

Re: [v8-users] Are empty functions optimized away whenever known?

2012-10-21 Thread Vyacheslav Egorov
V8 does inline functions at call sites where the target is observed to be
always the same. The inlined body is guarded by an identity check against
the identity of the call target. If the guard fails, the code is deoptimized.

Thus what matters is whether each call site is monomorphic (sees the same
function all the time) or megamorphic (sees different functions).

Without seeing complete code it is hard to say whether you will help
inlining by creating a single empty function (inlining definitely will not
happen if you create new functions and send them to a single call site
again and again). But you will definitely save space.
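A minimal sketch of that distinction (callIt and doNothing are invented
for illustration):

function callIt(cb) { return cb(); }   // one call site, inside callIt
function doNothing() {}

// monomorphic: the call site always sees the same target, so it can be
// inlined behind an identity check
for (var i = 0; i < 1e5; i++) callIt(doNothing);

// megamorphic: a fresh closure arrives on every iteration, so the call
// site never sees the same target twice and will not be inlined
for (var i = 0; i < 1e5; i++) callIt(function () {});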

--
Vyacheslav Egorov
 On Oct 20, 2012 10:05 PM, "idleman"  wrote:

> Hi,
>
> Are empty functions inlined whenever the function is known? Example:
>
> function do_nothing() { }
>
> //somewhere later in the code:
> var cb = do_nothing;
> cb(null, "Will this call be inlined/optimized away?");
>
> Will V8 actually call the function, even if it does nothing? I wonder
> because I want to know if it is smarter to create a do_nothing() function
> which will be reused over and over again (but not as obvious) or each time
> create an empty function { } directly in place and let V8 more easily
> optimize away the call.
>
> Thanks in advance!
>
>
>

Re: [v8-users] Distinguish a floating-point argument from an integer?

2012-10-16 Thread Vyacheslav Egorov
No, this is not possible. No matter what they write in the source, 5 or 5.0,
it will be represented as 5.
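Concretely:

5 === 5.0       // true: both literals denote the same Number value
String(5.0)     // "5": the ".0" is gone before the C++ side ever sees it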

Vyacheslav Egorov
On Oct 16, 2012 9:31 PM, "Brandon Harvey"  wrote:

> I'd like to be able to do printf / snprintf style string formatting, on
> the C++ side, using numbers obtained from v8.  However, I'm not sure how to
> know whether to use %f or %d (etc.) in any given case.  I'd like to be able
> to reflect the intent of the Javascript writer -- if they wrote 5.0, I'd
> like to use %f, and if they wrote 5, I'd like to use %d.
>
> Brandon
>
> On Monday, October 15, 2012 10:22:52 PM UTC-7, Vyacheslav Egorov wrote:
>>
>> There are no integers in JavaScript so semantically they are identical.
>> It's an implementation detail that 5.0 is sometimes represented as 5. Why
>> do you want to distinguish them?
>>
>> Vyacheslav Egorov
>> On Oct 16, 2012 5:25 AM, "Brandon Harvey"  wrote:
>>
>>> I'd like to be able to know whether or not a particular Local<Value>
>>> (passed to me as part of any Arguments list) refers to an integral number
>>> (e.g. 5) or a floating-point style number (e.g. 5.0).  Is there any way to
>>> make that distinction?
>>>

Re: [v8-users] Distinguish a floating-point argument from an integer?

2012-10-15 Thread Vyacheslav Egorov
There are no integers in JavaScript so semantically they are identical.
It's an implementation detail that 5.0 is sometimes represented as 5. Why
do you want to distinguish them?

Vyacheslav Egorov
On Oct 16, 2012 5:25 AM, "Brandon Harvey"  wrote:

> I'd like to be able to know whether or not a particular Local<Value> (passed
> to me as part of any Arguments list) refers to an integral number (e.g. 5)
> or a floating-point style number (e.g. 5.0).  Is there any way to make that
> distinction?
>

Re: [v8-users] documentation for deopt bailouts

2012-10-11 Thread Vyacheslav Egorov
Hi,

> how do i see why the method is deopted? for example here:

If you run --trace-deopt with --code-comments then in most cases you will
get the LIR instruction that deopted in the output (though it is not
always correct). The only reliable way to figure out what caused a deopt is
to run with --print-opt-code and read the assembly around the deopt point.

> does that mean i should replace the for in with a for loop that iterates
> over Object.keys?

Yes.
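That is, a rewrite along these lines (process stands in for whatever the
loop body does):

// before: for-in, not fast case
for (var key in obj) {
  process(key, obj[key]);
}

// after: plain loop over Object.keys
var keys = Object.keys(obj);
for (var i = 0; i < keys.length; i++) {
  process(keys[i], obj[keys[i]]);
}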

--
Vyacheslav Egorov


On Thu, Oct 11, 2012 at 11:48 AM, Christoph Sturm  
wrote:
> I'm trying to optimize my node app with --trace-deopt
>
> how do i see why the method is deopted? for example here:
>
>  DEOPT: Wlbl.Checker.checkUrl at bailout #24, address 0x0, frame size 88
> [deoptimizing: begin 0x25d8e5e85e71 Wlbl.Checker.checkUrl @24]
>   translating Wlbl.Checker.checkUrl => node=260, height=40
>
> also when i log optimizer bailouts, i see this:
> Bailout in HGraphBuilder: @"exports.paramsToString": ForInStatement is not
> fast case
>
> does that mean i should replace the for in with a for loop that iterates
> over Object.keys?
>
> thanks
>  chris
>


Re: [v8-users] Re: ia32 bug?

2012-09-07 Thread Vyacheslav Egorov
Chromium's bindings layer uses External to associate opaque data with
V8 objects and callbacks (see the methods accepting Handle<Value> data in
the V8 API).

So making External into a non-Value might involve some bindings work.

--
Vyacheslav Egorov


On Fri, Sep 7, 2012 at 2:50 PM, Sven Panne  wrote:
> After several discussions, it is not so clear anymore what to do. First of
> all, SilkJS does not follow https://developers.google.com/v8/embed#dynamic
> on how to handle foreign (i.e. C/C++) pointers when embedding v8. The return
> value of External::New is supposed to live in an internal field, but it is
> *not* a valid JavaScript value, it is just a Foreign in disguise, sometimes
> optimized to a Smi. Our v8.h header is very confusing regarding this fact,
> and having External as a subclass of Value is basically wrong. Furthermore
> Value::IsExternal is completely broken. I can see 2 ways of fixing things:
>
> * Keep External's implementation basically as it is, i.e. either a Smi or
> a Foreign. If we do this, we should not keep External as a subclass of Value
> (perhaps a subclass of Data?) and we should remove the IsExternal predicate.
> This means that e.g. SilkJS has to change, following
> https://developers.google.com/v8/embed#dynamic. As it is, one can easily
> crash SilkJS by pure JavaScript.
>
> * Make External basically a convenience wrapper for a JavaScript object
> with an internal property containing a Foreign. This way we could keep
> External a subclass of Value and we could fix IsExternal. The downside is
> that all code already following
> https://developers.google.com/v8/embed#dynamic would basically do a useless
> double indirection, punishing people following that guide.
>
> We will discuss these options, there are good arguments for both of them...
>
> Cheers,
>S.
>


Re: [v8-users] Re: [V8-Users] Is There A Limit To Number Of Properties In An Object?

2012-08-21 Thread Vyacheslav Egorov
Minor correction: I obviously meant to say "way below" not "way beyond".

Vyacheslav Egorov
On Aug 21, 2012 8:03 AM, "Joran Greef"  wrote:

> Thank you Vyacheslav
>
> On 20 Aug 2012, at 8:41 PM, v8-users@googlegroups.com wrote:
>
> > I would say limit is around 2^24 entries (biggest fixed array can have
> approx 2^27 entries and hash table requires 3 entries per key-value pair
> and tries to maintain 50% occupancy). But overheads for mutating such a
> table become less than reasonable way beyond this point.
> >
> > Vyacheslav Egorov
> > On Aug 18, 2012 4:34 PM, "Joran Greef"  wrote:
> > I am using a vanilla {} as a hash with 24 byte string keys. It currently
> > has 5,500,000 entries.
> >
> > Is there a limit to the number of properties in such an Object?
> >

Re: [v8-users] Is there a limit to number of properties in an Object?

2012-08-20 Thread Vyacheslav Egorov
I would say the limit is around 2^24 entries (the biggest fixed array can
have approx 2^27 entries, and the hash table requires 3 entries per
key-value pair and tries to maintain 50% occupancy). But overheads for
mutating such a table become less than reasonable way beyond this point.
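Spelling that estimate out:

2^27 fixed-array entries / 3 entries per key-value pair ~ 4.5 * 10^7 pairs;
maintaining 50% occupancy halves that to ~ 2.2 * 10^7 ~ 2^24.4 pairs.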

Vyacheslav Egorov
 On Aug 18, 2012 4:34 PM, "Joran Greef"  wrote:

> I am using a vanilla {} as a hash with 24 byte string keys. It currently
> has 5,500,000 entries.
>
> Is there a limit to the number of properties in such an Object?
>

Re: [v8-users] garbage collection of anonymous functions

2012-08-20 Thread Vyacheslav Egorov
Hi Morten,

Listeners are implicitly referenced by a DOM node wrapper for a node to
which they are attached.

http://code.google.com/searchframe#OAMlx_jo-ck/src/third_party/WebKit/Source/WebCore/bindings/v8/V8GCController.cpp&exact_package=chromium&q=addimplicitreference&type=cs&l=332

This keeps them alive as long as wrapper is alive.

--
Vyacheslav Egorov
On Aug 20, 2012 6:04 PM, "Morten Olsen"  wrote:

> Hi,
>
> I'm struggling to figure out precisely what/where keeps an anonymous
> function alive when used as an event handler in the WebKit integration,
> example:
>
> document.getElementById("clickMe").addEventListener('click', function (e)
> { alert(e); });
>
> WebKit only keeps a weak pointer to the function, and I can't figure out
> how its kept alive through GC, as I can't seem to find a live pointer to it
> anywhere else, but I must be overlooking something.
>
> Regards, Morten
>

Re: [v8-users] Crankshaft without 64-bit hardware support

2012-08-13 Thread Vyacheslav Egorov
Here is another possibility: in low level IR explode double values
into pairs of low-level values.

d2 = Mul d0, d1

becomes something like

(i20, i21) = Mul (i00, i01), (i10, i11)

that would require a minor change in the pipeline to allow multiple
return values but I think it might be less painful than supporting
register pairs across the pipeline.

--
Vyacheslav Egorov


On Mon, Aug 13, 2012 at 11:34 AM, Vyacheslav Egorov
 wrote:
> Are you planning to allocate a couple of general purpose registers to
> represent a double? Do these registers have to be ordered? (e.g. do
> you need r_{N}, r_{N+1})
>
> Do you want a single build of V8 to work both on hardware that supports
> real double registers and hardware that does not?
>
> If you don't need binary portability then I don't think you need to
> extend unallocated policies. Just change "interpretation" of
> DOUBLE_REGISTER policy everywhere.
>
> Otherwise, yeah, you need new policies (which would require some bit
> stealing because LUnallocated is packed pretty tight already).
>
> In any case it would require some adjustments in allocator to get
> "what interferes with what" right.
>
> --
> Vyacheslav Egorov
>
>
> On Mon, Aug 13, 2012 at 7:53 AM, Evgeny Baskakov
>  wrote:
>> Hi guys,
>>
>> I'm looking for ways to modify the Crankshaft compilation mode to make it
>> work without 64-bit hardware registers.
>>
>> Could someone give me brief guidelines? Is it possible at all, without the
>> whole V8 codebase reworking?
>>
>> My first impulse is to extend the LUnallocated policies set and make the
>> codegen distinguish between single and coupled registers. Then, the lithium
>> codegen would use the CPU-based code stubs instead of native double-related
>> instructions. What pitfalls should I be aware of here?
>>
>> --Evgeny
>>


Re: [v8-users] Crankshaft without 64-bit hardware support

2012-08-13 Thread Vyacheslav Egorov
Are you planning to allocate a couple of general purpose registers to
represent a double? Do these registers have to be ordered? (e.g. do
you need r_{N}, r_{N+1})

Do you want a single build of V8 to work both on hardware that supports
real double registers and hardware that does not?

If you don't need binary portability then I don't think you need to
extend unallocated policies. Just change "interpretation" of
DOUBLE_REGISTER policy everywhere.

Otherwise, yeah, you need new policies (which would require some bit
stealing because LUnallocated is packed pretty tight already).

In any case it would require some adjustments in allocator to get
"what interferes with what" right.

--
Vyacheslav Egorov


On Mon, Aug 13, 2012 at 7:53 AM, Evgeny Baskakov
 wrote:
> Hi guys,
>
> I'm looking for ways to modify the Crankshaft compilation mode to make it
> work without 64-bit hardware registers.
>
> Could someone give me brief guidelines? Is it possible at all, without the
> whole V8 codebase reworking?
>
> My first impulse is to extend the LUnallocated policies set and make the
> codegen distinguish between single and coupled registers. Then, the lithium
> codegen would use the CPU-based code stubs instead of native double-related
> instructions. What pitfalls should I be aware of here?
>
> --Evgeny
>


Re: [v8-users] Re: trace bailout stopped working on node 0.7.7 (V8 3.9.24.7) up to latest release 0.8.4

2012-08-01 Thread Vyacheslav Egorov
There is no such flag now. Sorry.

--
Vyacheslav Egorov


On Tue, Jul 31, 2012 at 6:56 PM, Sławek Janecki  wrote:
> Anyone? Is there a flag in the new V8 that tells me why my function won't be
> optimized?
> In previous V8 versions the optimizer would try to optimize and bail out when
> necessary (giving me bailout info).
> I want to know WHY my function will not be optimized at all.
>
> Thanks
>
>
> On Sunday, July 29, 2012 2:09:34 AM UTC+2, Sławek Janecki wrote:
>>
>> Using Node.js up to version 0.7.6 (V8 3.9.17), when I use the --trace-bailout
>> flag I have output as expected (bailout info).
>>
>> Using Node from version 0.7.7 (V8 3.9.24.7) up to the latest release 0.8.4,
>> --trace-bailout doesn't show any info.
>>
>> I've tried simple scripts with try/catch and 'with' (100% bailouts).
>>
>> Tracing hydrogen output on both node/V8 (0.7.6 and 0.7.7) versions tells
>> that my test function (with try/catch and 'with') isn't optimized (it's
>> bailing out) but I don't see any info on node >= 0.7.7
>>
>> Something changed? In V8 sources bailout flag is in place.
>> Do I need to turn other flag on to get bailouts or this is a bug?
>>
>> Thanks
>


Re: [v8-users] Re: [V8-Users] 64-Bit Integers

2012-07-27 Thread Vyacheslav Egorov
V8 implements ECMAScript as described in ECMA-262, 5th edition.

V8 does not implement non-standard features, with two exceptions:

1) they are required for compatibility with existing code on the web;
2) they are highly likely to be included in the next standard (ES6)
(e.g. V8 experiments with block scoping, proxies, collections and
modules; some of these features as implemented now do not actually
match the newest drafts of the ES6 spec because the spec changed since
they were implemented... This is a major disadvantage of implementing
features before the spec is frozen).

That said there were actually some discussions about including 64bit
types into ES6 as part of Binary Data work (e.g.
http://wiki.ecmascript.org/doku.php?id=harmony:binary_data_discussion
). I don't know what the status of this is now. Any standardization
issues should be addressed to TC-39.

--
Vyacheslav Egorov


On Fri, Jul 27, 2012 at 4:43 PM, Joran Greef  wrote:
> Yes, it's not in the spec but are there ways and means to change this?
>
> Will Javascript be stuck forever with 51/52/53 bits?
>
> It would be great if V8 could support proper native 64-bit integers and 
> perhaps encourage other engines to do the same.
>
> Especially with the ubiquity of 64-bit elsewhere. It hampers common use 
> cases, e.g. hashing, compression.
>
> On 27 Jul 2012, at 2:50 PM, v8-users@googlegroups.com wrote:
>
>> On Fri, Jul 27, 2012 at 2:42 PM, Joran Greef  wrote:
>> Would it be possible to have proper support for native 64-bit integers and 
>> operations in Javascript?
>>
>> JS as a language does not specify them. It specifies a Number type and 
>> (IIRC) 51 (52? 53?) bits of integer precision.
>>
>> --
>> - stephan beal
>> http://wanderinghorse.net/home/stephan/
>> http://gplus.to/sgbeal
>>
>>


Re: [v8-users] recommended V8 GC settings for nodejs in heroku (hitting heroku memory limits)

2012-07-25 Thread Vyacheslav Egorov
You need to use something that gives you access to heap snapshots. I think
node-inspector does. But better ask on the node.js list.

Vyacheslav Egorov
On Jul 25, 2012 11:15 PM, "spollack"  wrote:

> I think you are probably right. after more testing, i can see that the GC
> is running, it just isn't freeing up very much. i probably am accidentally
> keeping references to state that i don't need. Are there any good tools to
> help identify what specifically might be getting held onto here?
>
> Thanks,
> Seth
>

Re: [v8-users] recommended V8 GC settings for nodejs in heroku (hitting heroku memory limits)

2012-07-25 Thread Vyacheslav Egorov
Yes, I suspect your heap is being paged out.

I don't know though why your app behaves so differently on different
machines. I suspect there is something in your/node.js/node packages'
code that is OS-specific and causes larger memory consumption (e.g.
it's buffering too many things in memory because writing stuff to
network/disk does not keep up with incoming data etc).

I don't think any GC tweaks will be helpful here.

--
Vyacheslav Egorov


On Tue, Jul 24, 2012 at 9:52 PM, spollack  wrote:
> here is a reading even closer to the peak:
> {"rss":542273536,"heapTotal":523510848,"heapUsed":503801408}. lots of heap
> usage!
>
> as another aside: sometimes i get surprising results for rss, where rss is
> considerably smaller than heapTotal/heapUsed in the middle of the run, and
> then it quickly returns to what i would expect (rss > heapTotal > heapUsed).
> For example, this reading from mid-run:
> {"rss":345870336,"heapTotal":516328896,"heapUsed":511765976}  Why would that
> be? is the heap getting swapped out?
>


Re: [v8-users] Integer division when operands and target are integers

2012-07-25 Thread Vyacheslav Egorov
It does not have to be in the Canonicalization pass (not every
optimization that replaces instructions fits into it).

I am not entirely sure that it makes perfect sense to perform such
optimization during range analysis itself though I do see at least one
reason why you would want to do that: our conditional range
propagation does not associate range information with uses, so [alpha
> 0] information becomes lost after range analysis. This can be worked
around by attaching more information to HUseListNode* during range
analysis.

Honestly speaking it's hard to speculate if some patch would be
accepted or not without actually seeing a patch.

--
Vyacheslav Egorov


On Wed, Jul 25, 2012 at 4:20 PM, Evan Wallace  wrote:
> Oh ok, that makes sense. So V8 generates an integer instruction only if the
> remainder is always zero. I've submitted this as issue 2258.
>
> I'm more than happy to contribute my patch but I'm new to V8 and my quick
> hack probably isn't the correct way to do it. It looks like instruction
> replacement should take place in the canonicalize pass but range information
> isn't available until range analysis. I did the optimization during range
> analysis both to make sure the preconditions hold (non-negative dividend and
> positive divisor) and to make sure the correct range is calculated for
> subsequent instructions. Would a patch that adds and removes instructions
> during range analysis be accepted?
>


Re: [v8-users] recommended V8 GC settings for nodejs in heroku (hitting heroku memory limits)

2012-07-24 Thread Vyacheslav Egorov
What is V8 heap usage: heapTotal and heapUsed parts reported by
process.memoryUsage?
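For reference, these can be logged periodically from inside the app (a
sketch using node's process.memoryUsage):

setInterval(function () {
  var m = process.memoryUsage();
  console.log('rss=' + m.rss + ' heapTotal=' + m.heapTotal +
              ' heapUsed=' + m.heapUsed);
}, 60 * 1000);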

--
Vyacheslav Egorov


On Tue, Jul 24, 2012 at 8:52 PM, spollack  wrote:
> I am running a nodejs app in heroku, and on certain datasets i'm going over
> the heroku memory limit of 512MB. I'm running node v0.6.6 with defaults. I
> can see via node's process.memoryUsage() that my RSS value does indeed go as
> high as 544MB on my test dataset, and ps shows similar results.
>
> What V8 GC settings would you recommend in order to keep the RSS lower?
>
> Are there any known memory behavior improvements of moving to node v0.6.20
> or v0.8.3 that would help me here?
>
> As an aside, running the same test locally (same node version, same code,
> same data) only hits a max RSS of 155MB, almost a factor of 4 different.
> Both are x86_64 machines, although my local machine is OSX Lion (11.4.0
> Darwin Kernel Version 11.4.0: Mon Apr 9 19:32:15 PDT 2012;
> root:xnu-1699.26.8~1/RELEASE_X86_64 x86_64) while heroku is Linux 2.6
> (2.6.32-343-ec2 #45+lp929941v1 SMP Tue Feb 21 14:07:44 UTC 2012 x86_64
> GNU/Linux). Any ideas why that is?
>
> Thanks.
>


Re: [v8-users] Integer division when operands and target are integers

2012-07-24 Thread Vyacheslav Egorov
Once type feedback has told hydrogen that a division happened to produce
a double value, it does not try to revert it to integer division.

The only exception is (quite fragile) MathFloorOfDiv optimization
performed by HUnaryMathOperation::Canonicalize.
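A sketch of the shape that canonicalization can catch (a and b are
placeholders; both must be int32 at runtime for this to apply):

// plain a / b is typed as double once feedback has seen a fractional
// result; flooring the quotient is what can become MathFloorOfDiv:
var q = Math.floor(a / b);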

Consider contributing your patch :-)

--
Vyacheslav Egorov


On Tue, Jul 24, 2012 at 8:18 PM, Evan Wallace  wrote:
> I've been trying to optimize image manipulation and I couldn't get V8 to
> emit integer division instructions. Does V8 currently emit any integer
> division instructions? It seems odd that it wouldn't because it does have
> the capability to emit them (see LDivI). I was going to submit a bug but
> wanted to check first that this really is the case.
>
> When using typed arrays, division causes lots of conversions to and from
> doubles. Since V8 does range analysis, it should be possible to emit integer
> division for at least the case with non-negative dividends and positive
> divisors when the target location is an integer. I hacked up a quick proof
> of concept yesterday and got an easy 2x speedup (for converting a
> premultiplied alpha image to non-premultiplied alpha). This puts V8 at the
> speed of optimized C code and seems like too good an optimization to pass
> up. This optimization would also be useful for tools like emscripten.
>
> function undoPremultiplication(image, w, h) {
>   for (var y = 0, i = 0; y < h; y++) {
> for (var x = 0; x < w; x++, i += 4) {
>   var alpha = image[i + 3];
>   if (alpha > 0) {
> image[i + 0] = image[i + 0] * 0xFF / alpha;
> image[i + 1] = image[i + 1] * 0xFF / alpha;
> image[i + 2] = image[i + 2] * 0xFF / alpha;
>   }
> }
>   }
> }
>


Re: [v8-users] Re: How to write efficient code for Node/MongoDB mixed use?

2012-07-23 Thread Vyacheslav Egorov
Yes.
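A hedged sketch of what that buys:

// both objects start from the empty-object root class and take the same
// transitions, so they end up with the same hidden class:
var a = {}; a.x = 1; a.y = 2;
var b = {}; b.x = 1; b.y = 2;

// a constructor starts from a different root (tied to its prototype), so
// structurally identical instances get a different hidden class:
function Point(x, y) { this.x = x; this.y = y; }
var c = new Point(1, 2);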
--
Vyacheslav Egorov


On Mon, Jul 23, 2012 at 4:48 PM, Sebastian Ferreyra Pons
 wrote:
>> Yes.  In a constructor function, the hidden class begins with an
>> element that also includes a prototype.  In two different constructor
>> functions these initial elements will be different, making their
>> hidden classes different.
>
> Does this mean that if I make sure that both the mongo driver and my code
> use the literal empty object {} for creating new objects they will share
> the same root class?
>
> On Friday, July 20, 2012 2:21:39 AM UTC-3, jMerliN wrote:
>>
>> Hi Sebastian,
>>
>> > It is my understanding that hidden classes are not shared between
>> > different
>> > constructors, even if I construct structurally identical objects with
>> > the
>> > same properties and in the same order. This seems to imply that
>> > deserialized objects coming from mongodb will not share the same hidden
>> > class as structurally identical objects created by the constructors,
>> > hence
>> > functions that use these objects will not be well optimized into native
>> > code. Am I right?
>>
>> Yes.  In a constructor function, the hidden class begins with an
>> element that also includes a prototype.  In two different constructor
>> functions these initial elements will be different, making their
>> hidden classes different.
>>
>> If you have hot code that is being impacted because you pass it both
>> objects you construct and ones from the MongoDB driver you're using,
>> one thing you can do is make your constructor take the MongoDB object
>> representation as an input.  Then you would be able to make your hot
>> code monomorphic.  If you don't have any hot code being impacted, it's
>> probably not something you should worry about unless your Mongo driver
>> is putting objects it constructs into dictionary mode for some reason.
>>
>> An example: http://jsfiddle.net/xznxP/
>>
>> "hot" only deoptimizes when given the raw object here, which has a
>> different hidden class.
>>
>> > Is there any hidden class inheritance built into v8? That is, if I
>> > create
>> > object o={a:1, b:2} and later add o.z=3, will native code optimized for
>> > the
>> > hidden class before the property-add still work unmodified afterwards?
>>
>> No.
>>
>> - Justin
>>
>> On Jul 19, 4:16 pm, Sebastian Ferreyra Pons 
>> wrote:
>> > I have two questions.
>> >
>> > #1
>> > I'm developing a Node.js/Mongodb web app.
>> >
>> > This means that objects used in the code will be created by at least two
>> > different code paths:
>> >
>> >1. Constructor functions
>> >2. Deserialization code in the mongo driver.
>> >
>> > It is my understanding that hidden classes are not shared between
>> > different
>> > constructors, even if I construct structurally identical objects with
>> > the
>> > same properties and in the same order. This seems to imply that
>> > deserialized objects coming from mongodb will not share the same hidden
>> > class as structurally identical objects created by the constructors,
>> > hence
>> > functions that use these objects will not be well optimized into native
>> > code. Am I right?
>> >
>> > #2
>> > Is there any hidden class inheritance built into v8? That is, if I
>> > create
>> > object o={a:1, b:2} and later add o.z=3, will native code optimized for
>> > the
>> > hidden class before the property-add still work unmodified afterwards?
>


Re: [v8-users] Re: De-optimization of hot function when constructor adds methods directly to object

2012-07-20 Thread Vyacheslav Egorov
> If there are only 25-35 allowable properties in a klass, you can
> potentially make a really fast check for this.

Yep, I know. You are describing exactly what I described above, just
in different words :-) It's an old and well-known way to implement
inheritance checks in single-inheritance languages (at least Oberon
compilers used it back in the 80s).

--
Vyacheslav Egorov


On Fri, Jul 20, 2012 at 12:41 AM, jMerliN  wrote:
>> This would be great but there is no easy way to check that two hidden
>> classes are compatible. Hidden classes are currently compared by
>> pointer equivalence, which boils down to two instructions (compare and
>> jump). Checking for inheritance would lead to pretty complicated
>> code. The most efficient way, it seems, to implement such a check is
>> to record the transition path in every map and then check whether a fixed
>> position in the transition path is equal to a fixed map. This is much more
>> complex and I am not sure it benefits any real-world code.
>
> I'll try to find a good real-world example of where this causes
> violent deopts from common practices.  I've seen it done quite a few
> times.
>
> If there are only 25-35 allowable properties in a klass, you can
> potentially make a really fast check for this.  If you store pointers
> to the klasses in a contiguous array such that higher indices are
> always superklass pointers of lower indices (regardless of
> transition), you can determine compatibility with 2 cmps (one compat,
> one bounds checking).  You could still do the normal cmp/jmp into
> optimized code, but if the cmp fails (not equal), you can do 2 more
> cmps (if > optimized-for-klass and < end of block) to determine if
> this is a parent klass, and if so you can jmp to the optimized code
> and only if those cmps fail do you deoptimize.
>
> The downside is that the generated optimized code would need to
> dereference once just to get the klass pointer, adding an extra few
> cycles to each optimized IC.  Though I suppose you could move
> that code out and do the actual klass pointer equiv cmp; if that fails
> then go back to this block and do a bounds check, and if it's a parent
> then jmp into the optimized code keeping the klass pointer, which
> pushes the extra work into the case that the klass pointers aren't
> equivalent but are compatible (which should be rare).  Storing those
> compat blocks would add a memory overhead and the non-monomorphic
> check can potentially prevent a deoptimization with a few more
> instructions.  It shouldn't reduce performance, though.
>
> You could also potentially partition such a compat block structure as
> to minimize the number of pointers needed to do a reasonable job at
> guarding against deoptimization from extended objects.
>
> On Jul 19, 1:00 pm, Vyacheslav Egorov  wrote:
>> Knowing that you are running it in node.js I can confirm that there is
>> indeed a difference between test/test2 properties. The reason is we
>> don't convert test to a CONSTANT_FUNCTION if object literal is not in
>> global scope. This is a heuristic that was based on the assumption
>> that top-level code is executed once and non-top-level code many times
>> (thus every time the object literal will have a different
>> map): https://github.com/v8/v8/blob/master/src/parser.cc#L4272-4279. In the
>> past we would not make test2 a CONSTANT_FUNCTION either because we
>> required the function to be in old space. I think we might want to change
>> this to make it consistent and I've filed a bug
>> (https://code.google.com/p/v8/issues/detail?id=2246). node.js wraps module
>> bodies in an anonymous function --- that is why the slowdown is not
>> reproducible in Chrome or the d8 shell:
>>
>> (function () {
>> var z = {test: function () {}};
>> z.test2 = function () {};
>> function foo(z) {
>>   var i;
>>   console.time('test speed');
>>   for (i = 0; i < 1000; i++) z.test();
>>   console.timeEnd('test speed');
>>   console.time('test2 speed');
>>   for (i = 0; i < 1000; i++) z.test2();
>>   console.timeEnd('test2 speed');
>>
>> }
>>
>> foo(z);
>> foo(z);
>>
>> })();
>> > The real issue in my example is that test is per-
>> > object and runTest is static, if runTest was assigned via this., it
>> > should only ever see one hidden class, unless you do something evil
>> > like .apply.
>>
>> This will not help because type feedback is currently shared between
>> all instances of the same function literal: V8 mostly gets type
>> feedback from IC stubs that are referenced by inline caches in
>> unoptimized code, and the unoptimized code object is the same for any
>> closure created from the same function literal.

[v8-users] Re: De-optimization of hot function when constructor adds methods directly to object

2012-07-19 Thread Vyacheslav Egorov
Knowing that you are running it in node.js I can confirm that there is
indeed a difference between test/test2 properties. The reason is we
don't convert test to a CONSTANT_FUNCTION if the object literal is not in
global scope. This is a heuristic that was based on the assumption
that top-level code is executed once and non-top-level code many times
(thus every time the object literal will have a different map):
https://github.com/v8/v8/blob/master/src/parser.cc#L4272-4279. In the
past we would not make test2 a CONSTANT_FUNCTION either because we
required the function to be in old space. I think we might want to change
this to make it consistent and I've filed a bug
(https://code.google.com/p/v8/issues/detail?id=2246). node.js wraps module
bodies in an anonymous function --- that is why the slowdown is not
reproducible in Chrome or the d8 shell:

(function () {
var z = {test: function () {}};
z.test2 = function () {};
function foo(z) {
  var i;
  console.time('test speed');
  for (i = 0; i < 1000; i++) z.test();
  console.timeEnd('test speed');
  console.time('test2 speed');
  for (i = 0; i < 1000; i++) z.test2();
  console.timeEnd('test2 speed');
}

foo(z);
foo(z);
})();

> The real issue in my example is that test is per-
> object and runTest is static, if runTest was assigned via this., it
> should only ever see one hidden class, unless you do something evil
> like .apply.

This will not help because type feedback is currently shared between
all instances of the same function literal: V8 mostly gets type
feedback from IC stubs that are referenced by inline caches in
unoptimized code, and the unoptimized code object is the same for any
closure created from the same function literal.
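
A hedged sketch of that sharing (illustrative only):

function make() {
  return function (o) { return o.x; };  // one function literal
}

var f1 = make();
var f2 = make();
f1({x: 1});        // feedback lands in the shared unoptimized code
f2({x: 2, y: 3});  // a second hidden class at the same .x site: the site
                   // is now polymorphic for f1 too, because both closures
                   // share the same ICs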

> On a related note, has there been any consideration for making v8 not
> de-optimize when a hidden class is ancestral to another (and therefore
> compatible)?

This would be great but there is no easy way to check that two hidden
classes are compatible. Hidden classes are currently compared by
pointer equivalence, which boils down to two instructions (compare and
jump). Checking for inheritance would lead to pretty complicated
code. The most efficient way, it seems, to implement such a check is
to record the transition path in every map and then check whether a fixed
position in the transition path is equal to a fixed map. This is much more
complex and I am not sure it benefits any real-world code.
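
A hypothetical JS model of that check (this is not V8 code; transitionPath
is an invented field standing in for the recorded path):

function isCompatibleMap(map, expected) {
  var path = map.transitionPath;               // [root, ..., map]
  var i = expected.transitionPath.length - 1;  // fixed position of `expected`
  return i < path.length && path[i] === expected;
}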

--
Vyacheslav Egorov

On Jul 19, 8:03 pm, jMerliN  wrote:
> Vyacheslav,
>
> When I run the code you posted, I see a much bigger discrepancy
> between test/test2 in the first pass and a slight reduction in test's
> time but still a large discrepancy on the second pass (indicating OSR
> happened during the first loop the first time around), similar to what
> I was seeing yesterday.  But that's running on Node.js, and I haven't
> re-built Node.js against the latest stable v8 code, but that issue is
> completely gone in the current nightly Canary build.
>
> I think I better understand the method issue now.  V8 actually treats
> methods set on this. differently from other properties; the assembly
> generated looks aggressively inlined.  If you cheat and set this.test
> to a number then to the method, it effectively disables those
> optimizations in V8 and you end up treating the object as a normal
> object, and even though it doesn't cause deoptimizations (all objects
> have the same hidden class), it's significantly slower than the
> inlined method call.  The real issue in my example is that test is per-
> object and runTest is static, if runTest was assigned via this., it
> should only ever see one hidden class, unless you do something evil
> like .apply.
>
> Though this test seems to indicate that this only occurs when building
> the hidden class:  http://pastebin.com/JbuLaEUt
>
> Even though it never deoptimizes, I'd expect each of those to have
> similar performance, but only the first Foobar created is performant.
>
> On a related note, has there been any consideration for making v8 not
> de-optimize when a hidden class is ancestral to another (and therefore
> compatible)?  I mean if you have {a: 7, b: 7} and you have a really
> hot loop that only touches a and b, then you add a c property, because
> it was transitioned from the proper hidden class for that hot loop to
> a superclass of it (with the same indices in its property access
> table), that hot function can assume it's the {a, b} hidden class.
> This is similar to how classical inheritance works (Foo extends Bar,
> functions that operate on Bar can also operate on Foo), but in this
> case a hidden class transition is a strict superset, which lets you
> make really nice assumptions.
>
> On Jul 19, 2:27 am, Vyacheslav Egorov  wrote:

Re: [v8-users] Version Performance

2012-07-19 Thread Vyacheslav Egorov
The major difference between 3.0 and the versions before it is Crankshaft ---
an adaptive compilation pipeline:
http://blog.chromium.org/2010/12/new-crankshaft-for-v8.html

I am not sure what the major difference between 2.x and 1.x was.

--
Vyacheslav Egorov


On Thu, Jul 19, 2012 at 5:07 PM, W Brimley  wrote:
> Does anyone know what the major differences are between versions 1.x, 2.x,
> and 3.x that contribute to the performance gains? For example, ArrayBuffers
> in version 3.x had quite an impact on performance. Are there other examples?
>
>


Re: [v8-users] De-optimization of hot function when constructor adds methods directly to object

2012-07-19 Thread Vyacheslav Egorov
Hi Justin,

V8's hidden classes are not limited to tracking the fields you assign to
an object; V8 also tries to capture the methods you assign (just like in
any object-oriented language, classes capture both data and behavior).

That is why the first and second objects produced by Foobar will have
different hidden classes --- they have different methods.
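
A minimal sketch of the effect (illustrative only):

function Foobar() {
  this.test = function () {};  // a fresh closure on every construction
}

var a = new Foobar();
var b = new Foobar();
// a and b get different hidden classes: the captured method values
// differ even though the layout is identical.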

As to your second question: they are not treated differently. If you
rewrite your test like this:

var z = {test: function () {}};
z.test2 = function () {};

function foo(z) {
  var i;
  console.time('test speed');
  for (i = 0; i < 1000; i++) z.test();
  console.timeEnd('test speed');
  console.time('test2 speed');
  for (i = 0; i < 1000; i++) z.test2();
  console.timeEnd('test2 speed');
}

foo(z);
foo(z);

You will see something like:

test speed: 38ms
test2 speed: 12ms
test speed: 11ms
test2 speed: 11ms

The truth is V8 optimizes the code while the first loop is still _running_
(this is called On-Stack Replacement, aka OSR). So the first "test speed"
measurement contains a sum of the time spent in unoptimized code, in the
compiler, and in optimized code, while the first "test2 speed" measurement
is purely time spent in optimized code. If you call the same code a second
time you see timing results purely for optimized code. This is why
benchmarks should always contain a warm-up phase to let the optimizing JIT
kick in.

Hope this explains it.

--
Vyacheslav Egorov


On Thu, Jul 19, 2012 at 3:04 AM, jMerliN  wrote:
> So I can't get my head around why this happens (I haven't dug through
> v8's code to try to figure it out either), but this is really
> inconsistent to me with how v8 constructs hidden classes in general.
> The following is running in Node.js v0.8.2 (V8 v3.11.10.12).
>
> Here's the code:
> http://pastebin.com/2gKWrfHp
>
> Here's the output, and the deopt trace:
> http://pastebin.com/WerQuGLZ
>
> Calling Foo.prototype.runTest with any Foo object results in similar
> performance (unless you change the hidden class, as expected).  Bar
> expectedly deoptimizes because abc is stored on the proto and isn't
> actually on the constructed object until the first call, causing the
> optimized function (once it gets hot, which is after the object has
> changed hidden class) to bailout on the next attempt with a new Bar
> object.
>
> It gets weird with Foobar.  test is added directly to the object, the
> only difference is that this is a function, not a primitive, but it
> seems like the hidden classes of objects from Foobar's constructor
> should be the same.  The first run is performant, equivalent to Foo
> (expected).  Though running the test again with a new Foobar
> deoptimizes it.  I can't at all understand why.
>
> Thanks,
> Justin
>


Re: [v8-users] Re: Is --trace-deopt actually usable? If so, how are you supposed to use it?

2012-07-18 Thread Vyacheslav Egorov
> Oh, if functions have hidden classes and changing them prevents the
> fast-path for .call and .apply, then setting debugName and displayName
> on all my functions isn't a very good idea... thanks.

Well, actually, if you set the same fields in the same order on _all_
functions that come into this .apply site then it should be fine
(unless you set too many fields, more than 14, or delete properties
--- which would cause the property backing store to be normalized) => they
will all have the same hidden class.
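
A minimal sketch of that advice (the property names are illustrative):

function foo() {}
function bar() {}

foo.debugName = "foo"; foo.displayName = "Foo";
bar.debugName = "bar"; bar.displayName = "Bar";
// Same properties added in the same order: foo and bar end up with the
// same hidden class, so an .apply site that sees both stays monomorphic.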

> I was under the impressions that bailouts were based on the shape of the
> code and deopts were based on type information

Yep, we have some corner cases where compile-time optimization is
limited to certain good cases that can be detected by looking at type
feedback. So if the type feedback does not look good we just bail out.

> do they apply to anything that can have properties (strings, functions, etc) 
> as well?

Well... how should I answer without being confusing :-) The short answer is
yes: objects, functions, and value wrappers (String, Number, etc.) have a
hidden class that changes when you add/remove properties; primitive
values that don't carry properties, like strings and numbers, don't have
one (or rather they don't change it, because you can't add/remove
properties on them).
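
A minimal sketch of the difference:

var s = "abc";
s.foo = 1;              // silently ignored (in sloppy mode): the primitive
                        // string carries no properties, nothing changes

var n = new Number(1);
n.foo = 1;              // the wrapper is a real object: it transitions to
                        // a new hidden class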

The long answer is: strictly speaking _every_ object in the V8 heap has a
thing called a Map that describes its layout. Objects that can carry
around properties (inheriting from v8::internal::JSObject:
https://github.com/v8/v8/blob/master/src/objects.h#L57-71) _might_
have their map changed when you add and remove properties. It does not
always happen, because not every map describes a "fast" properties
layout.

You can sometimes see deoptimizations that mention the check-map
instruction. It's the one that checks an object's layout by comparing
the object's map to an expected map.

> Does modifying an object's prototype cause
> its hidden class to change and deopt any functions that use it - like
> if I were to alter String.prototype or Number.prototype after some
> code had JITted?

If you add a property to a prototype then the JS object that represents
the prototype will get a new hidden class. If some optimized code was
checking this prototype and expecting a certain map --- that check will
fail when executed and the code will deopt. If some inline cache stub was
checking it --- that check will fail when the IC is used and the IC will
miss.
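
A minimal sketch of the effect:

String.prototype.first = function () { return this.charAt(0); };
// String.prototype now has a new hidden class: optimized code that
// checked its old map deopts at the next check, and IC stubs that
// checked it start missing.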

> Is a function's hidden class just based on whether
> you've made changes, or do, say, .bind() functions have a different
> hidden class from native ones like console.log?

Yeah, they actually have different ones due to the way we bootstrap
the built-ins. Built-in functions are actually slightly different
from normal functions because they do not have a .prototype property
by default. But even if you add one manually they will not transition
to the same hidden class as a normal function with .prototype; they
are just not linked together with a transition and are completely
separate. I am not exactly sure why.

Functions coming from different contexts (iframes) will have different
hidden classes.

Strict functions ("use strict";) will have different hidden classes
from non-strict ones.

--
Vyacheslav Egorov


On Wed, Jul 18, 2012 at 6:33 PM, Kevin Gadd  wrote:
> Oh, if functions have hidden classes and changing them prevents the
> fast-path for .call and .apply, then setting debugName and displayName
> on all my functions isn't a very good idea... thanks. That makes the
> slide's advice make more sense, and it also explains why my attempts
> to move the bailout into a child function weren't a success.
>
> I was
> under the impressions that bailouts were based on the shape of the
> code and deopts were based on type information - if the bailout can
> also occur because it doesn't have IC type information to show that
> .apply is builtin and the fn is a standard Function, that explains how
> I'm getting it in some of these contexts. I will test this out some
> and if I get good results I'll definitely write it up on my
> optimization page.
>
> https://github.com/kevingadd/JSIL/wiki/JavaScript-Performance-For-Madmen
>
> If any of the information on the above about V8 is wrong, please let
> me know so I can fix it :)
>
> P.S. Every example I've ever seen for Hidden Classes uses Objects. I
> foolishly assumed that as a result, they only applied to user-created
> objects - do they apply to anything that can have properties (strings,
> functions, etc) as well? Does modifying an object's prototype cause
> its hidden class to change and deopt any functions that use it - like
> if I were to alter String.prototype or Number.prototype after some
> code had JITted? Is a function's hidden class just based on whether
> you've made changes, or do, say, .bind() functions have a different
> hidden class from native ones like console.log?

Re: [v8-users] Re: Is --trace-deopt actually usable? If so, how are you supposed to use it?

2012-07-18 Thread Vyacheslav Egorov
>
>   return (function JSIL_ArrayEnumerator() {
> return state.ctorToCall.apply(this, arguments);
>   });
>
> Bailout in HGraphBuilder: @"JSIL_ArrayEnumerator": bad value context
> for arguments value

Interesting. There is a "small" detail that my slides do not mention:
.apply must be the builtin apply function and the expression should be
monomorphic.

Monomorphic example that will be optimized:

function apply() { arguments[0].apply(this, arguments); }

function foo() { }
function bar() { }

apply(foo);
apply(foo);
apply(bar);
apply(bar);
// Both foo and bar have same hidden classes.

Polymorphic example that will not be:

function apply() { arguments[0].apply(this, arguments); }

function foo() { }
function bar() { }

bar.foo = "aaa";  // After this point foo and bar have different hidden classes.

apply(foo);
apply(foo);
apply(bar);
apply(bar);

// now the .apply expression inside apply is not monomorphic and the
// compiler will say "bad value context for arguments value".

Did you patch Function.prototype.apply or add properties to your
functions? This might explain why .apply optimization gets confused.

> Bailout in HGraphBuilder: @"System_Threading_Interlocked_CompareExchange": 
> bad value context for arguments value

This one might be tricky. The assumptions V8 makes during compilation are
all based on type feedback. If argc was never equal to 4 before V8
tried to optimize System_Threading_Interlocked_CompareExchange, V8 just
will not know that the .apply there is the built-in Function.prototype.apply,
so it will bail out. I suggest avoiding .apply on rarely executed
branches in hot functions if possible.

Of course there might still be a possibility that the .apply access is
polymorphic, as described above.
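
One hedged illustration of that suggestion (f and g are made-up stand-ins):
warm up the rare branch before the function gets hot, so the .apply site
has feedback when optimization kicks in:

function g() {}

function f() {
  if (arguments.length === 4) return g.apply(this, arguments);
  return g(arguments[0]);
}

f(1);           // common path
f(1, 2, 3, 4);  // without an early call like this the .apply site has no
                // feedback when f is optimized, and compilation bails out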

> Your explanation on why the no-message deopts occur is helpful

To be precise I was referring to deoptimization that happens on
"deoptimize" instruction you quoted in your first mail.

[please do not hesitate to ask more questions!]

--
Vyacheslav Egorov


On Wed, Jul 18, 2012 at 6:06 PM, Kevin Gadd  wrote:
> Thanks for the detailed response. Unfortunately I didn't write down
> the example I saw with arguments.length causing it - it may have been
> me misreading the output, or perhaps it was from inlining? However,
> there are certainly a bunch of uses of fn.apply(this, arguments),
> which the presentation also said would be fine, and those are bailing
> out. Here are two examples (both generated by code at runtime, so if I
> can change the generated code to fix this, I'd love to know about it
> :D)
>
>   return (function JSIL_ArrayEnumerator() {
> return state.ctorToCall.apply(this, arguments);
>   });
>
> Bailout in HGraphBuilder: @"JSIL_ArrayEnumerator": bad value context
> for arguments value
>
>   return (function System_Threading_Interlocked_CompareExchange() {
>   var argc = arguments.length;
>   if (argc === 4) {
> return thisType["CompareExchange`1$559[!!0],!!0,!!0=!!0"].apply(this,
> arguments);
>   }
>
>   throw new Error('No overload of ' + name + ' can accept ' +
> (argc - offset) + ' argument(s).')
> });
>
> Bailout in HGraphBuilder:
> @"System_Threading_Interlocked_CompareExchange": bad value context for
> arguments value
>
> In total I see something like 30 'value context for arguments value'
> bailouts when starting this test case and almost all of them look like
> they should be okay based on that slide, so I must either have
> misinterpreted the slide or it's not correct anymore.
>
> Your explanation on why the no-message deopts occur is helpful; if I
> assume that they indicate polymorphism maybe I can use that
> information to try and zero in on locations within the function where
> polymorphism might be occurring and make some headway that way.
> Thanks.
>
> --hydrogen-filter sounds like *exactly* what I need, so thank you very
> much for mentioning that. :D
>
> -kg
>
> On Tue, Jul 17, 2012 at 11:52 PM, Vyacheslav Egorov
>  wrote:
>> Hi Kevin,
>>
>> To be absolutely honest all these flags historically were made by V8
>> developers for V8 developers. You usually can't interpret what they
>> print without understanding of how V8 works internally, how optimizing
>> compiler IR looks like etc. We advocate them for JS developers only
>> because there is nothing else available at the moment.
>>
>> [I was always convinced that V8 needs a more gui-sh thing that would
>> overlay events from the optimizing compiler over the source of
>> function but that is not so easy. I was playing with some prototypes
>> but at some point I gave up... It requires attaching source position
>> but at some point I gave up... It requires attaching source position
>> information to individual IR instructions (plus merging this
>> information somehow when we optimize code and remove redundant
>> instructions) and, to make it worse, the AST does not even have span
>> information attached to each node.]

[v8-users] Re: Is --trace-deopt actually usable? If so, how are you supposed to use it?

2012-07-17 Thread Vyacheslav Egorov
Hi Kevin,

To be absolutely honest all these flags historically were made by V8
developers for V8 developers. You usually can't interpret what they
print without understanding of how V8 works internally, how optimizing
compiler IR looks like etc. We advocate them for JS developers only
because there is nothing else available at the moment.

[I was always convinced that V8 needs a more gui-sh thing that would
overlay events from the optimizing compiler over the source of
function but that is not so easy. I was playing with some prototypes
but at some point I gave up... It requires attaching source position
information to individual IR instructions (plus merging this
information somehow when we optimize code and remove redundant
instructions) and, to make it worse, the AST does not even have span
information attached to each node... you can't say that the expression a +
b starts at position X and ends at position Y to correctly highlight
the whole offending expression.]

The deoptimization that you are mentioning in the first message
indicates that either the execution reached a part of the function
that was optimized before type feedback for it was available [this
happens a lot for big functions or functions with complicated control
flow and rarely executed parts] or you have a polymorphic property
access site that had a small degree of polymorphism at the moment of
compilation, but has now seen some new hidden class.

> To provide one example: I did some spelunking around with
> --trace-deopt and --trace-bailouts and found that in my codebase,
> basically any use of the 'arguments' object - even just checking
> 'arguments.length' - causes the entire function to be deoptimized.

Can you provide more information about this? What kind of --trace-
deopt/trace-bailout output made it look like arguments.length causes
deoptimization?

> Non-v8/non-chrome devs saying false things about V8
> performance isn't your fault

To be precise, the presentation you are linking to was made by me, and I
am a V8 dev.

> Thanks for the info about c1visualizer - I bet the memory limit was
> probably responsible for the flakiness and if I fiddle with JVM
> parameters it might work. I'll give it another try later on.

C1Visualizer has a major problem with its memory consumption. Big IR
dumps usually have to be either split into separate files (I do it
with a small script) or minimized by applying the --hydrogen-filter=foo
flag to block optimization of all functions that are not called foo.

--
Vyacheslav Egorov



On Jul 18, 12:42 am, Kevin Gadd  wrote:
> Thanks for the link to that video, I'll give it a look. Based on your
> suggestion I'll try doing a custom build of Chromium and see if the
> disassembly will let me make sense of things.
>
> The reason this is a real problem for me (and why I find the lack of
> documentation for this stuff in chromium frustrating) is that I'm
> machine-translating code from other languages to JS - hand-editing it
> to make it faster is something I can do for code I'm writing myself,
> but I can't do it in a compiler. The current nature of the performance
> feedback from V8 makes it more or less a black box and this is
> worsened by the fact that a large amount of the documentation I've
> found out there that claims to describe V8 performance characteristics
> is either wrong or outdated. When you profile an application in V8 and
> the web inspector says you're spending 50% of your time in a simple
> function, your only choice is to dig deeper to try and understand why
> that function is slow. You could solve this by offering line-level
> profiling data in your profiler, but I think that's probably a ton of
> work, so I'm not asking for that. ;)
>
> To provide one example: I did some spelunking around with
> --trace-deopt and --trace-bailouts and found that in my codebase,
> basically any use of the 'arguments' object - even just checking
> 'arguments.length' - causes the entire function to be deoptimized. Of
> course, there isn't a ton of documentation here, but
> http://s3.mrale.ph/nodecamp.eu/#57 along with other sources claim that
> this is not the case. So, either something creepy is happening in my
> test cases - more verbose feedback from V8 here, or concrete docs from
> the devs themselves would help - or the information being given to the
> public is wrong. Non-v8/non-chrome devs saying false things about V8
> performance isn't your fault, but it wouldn't hurt to try and prevent
> that by publishing good information in textual form on the web.
>
> I hope I'm not giving the impression that I think V8 is the only
> problem here either; JS performance in general is a labyrinth. Based
> on my experiences however, the best way to get info about V8
> performance...

Re: [v8-users] Destructors, a proposal of sorts

2012-07-11 Thread Vyacheslav Egorov
Actually finalization in reference counting GCs is much more
predictable than in GCs that mark through the heap.

Contrary to what you might think, --nouse-idle-notification does not
disable automatic GC in V8. What it does is tell V8 not to perform GC
actions (be it advancing the sweeper or the incremental marker, or doing a
full GC) in response to the IdleNotifications that the embedder (node.js in
this case) sends to V8.

If V8 sees fit (e.g. on allocation failure) it _will_ perform a GC and
you can't disable that.

[also running through all objects is kinda how GC works, though it
should be done in increments]

--
Vyacheslav Egorov


On Wed, Jul 11, 2012 at 5:02 PM, Michael Schwartz  wrote:
> Here's a pattern, using Canvas as an example:
>
> function foo() {
>   var c = new Canvas();
>   var ctx = c.getContext('2d');
>   var grad = ctx.createLinearGradient(0,0, 10,10);
>   // do something / render
> }
>
> There's now a native cairo_surface_t created (new Canvas).  The c variable
> has a this.surface referencing the native surface object.
> There's now a native cairo_context_t created (c.getContext). The ctx
> variable has a this.ctx referencing the native context object.
> There's now a native cairo_pattern_t created (c.createLinearGradient).  The
> grad variable has a this.pattern referencing the native pattern object.
>
> All three need to be cleaned up at some point.
>
> There are no "destroy" (destructor) methods defined in the W3C spec for
> Canvas, Context, Gradient, etc.  Nobody is going to call them (they don't
> client-side), unless writing non-portable SilkJS specific code.  And
> ideally, the code should be portable between client and server - that's one
> of the best features of JavaScript running on both sides.
>
> I'm fully aware of the issues with finalize and reference counting based GC,
> etc.   Stephan made his rant about garbage collection and destructor issues
> in that link I posted in my first message.  Things haven't changed.  He and
> I have been and still are developing complex APIs for the server side, and this
> is an issue we both face.  Surely every API designer faces the same problem.
> Assistance from V8 in addressing the problem would benefit a wide audience.
>
> As for my choice to force GC:
>
> http://blog.caustik.com/2012/04/08/scaling-node-js-to-100k-concurrent-connections/
>
> 2) V8's idle garbage collection is disabled via "--nouse-idle-notification"
>
> This was critical, as the server pre-allocates over 2 million JS Objects for
> the network topology. If you don’t disable idle garbage collection, you’ll
> see a full second of delay every few seconds, which would be an intolerable
> bottleneck to scalability and responsiveness. The delay appears to be caused
> by the garbage collector traversing this list of objects, even though none
> of them are actually candidates for garbage collection.
>
> In my case, I know when it is a good time to force a GC.  It is being done
> in process that is about to block in accept().  If it ties up a CPU core for
> a bit, it is not going to stop the other processes from running, nor is the
> GC going to pause the server in action.
>
> (The above is one of numerous WWW pages I've read about scaling NodeJS,
> garbage collection pauses during benchmarks, etc.)
>
>
> On Jul 11, 2012, at 7:38 AM, Andreas Rossberg wrote:
>
> On 11 July 2012 15:35, Michael Schwartz  wrote:
>>
>> GC finalization actually works for SilkJS.  In the HTTP server, there are
>> two nested loops.  The outer loop waits for connections, the inner loop
>> handles keep-alive requests.  At the end of the inner loop (e.g. when the
>> connection is about to be closed), I force a GC (or at least try to).
>
>
> Hm, I don't quite follow. If you actually have a specific point where you
> know that you want to dispose a resource, why is it impossible to dispose it
> directly at that point? Or if there are many of them, you could maintain a
> set/map of them.
>
> In any case, before you conclude that finalization is the answer to your
> problems, let me +1 Sven's recommendation on reading Boehm's paper on the
> subject. Finalization is pretty much an anti-pattern. There are some rare,
> low-level use cases, but usually it creates more problems than it solves.
> That's why we only provide it in the C++ API.
>
> Also, I should mention that forcing a GC manually is highly discouraged.
> That causes a full (major, non-incremental) collection, which generally is a
> very costly operation that can cause significant pause time, and basically
> defeats most of the goodness built into a modern GC.
>
> /Andreas
>
>


Re: [v8-users] Destructors, a proposal of sorts

2012-07-10 Thread Vyacheslav Egorov
First of all, for an arbitrary object in a garbage-collected
environment you can't get an instant notification when the object is no
longer referenced. The GC does not know that until it runs.

If you fully control the way objects are produced and consumed,
you can implement your own memory management (e.g. a reference-counting
based scheme) in pure JS (but it will be error prone).
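
A minimal sketch of such a scheme (closeFd is a hypothetical native
binding; getting every retain/release pair right is exactly what makes
this error prone):

function Resource(fd) {
  this.fd = fd;
  this.refs = 1;
}

Resource.prototype.retain = function () { this.refs++; return this; };

Resource.prototype.release = function () {
  if (--this.refs === 0) closeFd(this.fd);  // free the OS resource eagerly
};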

That said if you are doing server side JS you can easily expose
MakeWeak to your code. Some node.js modules do that.

--
Vyacheslav Egorov


On Tue, Jul 10, 2012 at 3:33 PM, mschwartz  wrote:
> I was reading this great commentary on V8:
> http://code.google.com/p/v8-juice/wiki/CommentaryOnV8
>
> I think most of it still applies.
>
> JavaScript doesn't have a concept of destructors like many OO languages, but
> in a server-side context, there's a real need for them.  Garbage collection
> in JavaScript is done behind the scenes on objects that are no longer
> referenced.  When would you call an object's destructor: 1) when it is no
> longer referenced, or 2) when it is garbage collected?
>
> I suggest that if we could have some "destructor" support in v8, that it be
> the former (no longer referenced).  It is critical in server-side
> environments that any (external) resources can be freed as soon as possible.
> For example, there are a limited number of file descriptors (fds) available
> to a Unix/Linux system, and if they're all tied up (open) in some JS object
> that's been dereferenced, the server can run out of them.  If you deal with
> fds in wrapped C++ classes made as weak references, those fds can be tied up
> for a long long time, until v8 maybe decides to GC and call the weak
> callback.  But if there were a mechanism to register a callback to be called
> when an object is no longer referenced, the fds could be closed and released
> to the OS/application for reuse long before V8 maybe decides to garbage
> collect.
>
> Generally, application developers pretty much know when they're effectively
> doing a C++ style delete of a JS object (by dereferencing it), so they could
> be forced to call an agreed upon method, say destroy(), at that time.  But I
> don't find that so elegant, and it's error prone and can lead to
> memory/resource leaks that are hard to track down.
>
> I note that JavaScript 1.8.5 provides this:
> https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Object/defineProperty
>
> It leads me to ponder a similar mechanism that might be trivially
> implemented by the v8 team to help us server-side developers out.  I mean,
> there are means to define getters and setter and with Harmony there are
> proxies.  Basically, a lot of functionality that was restricted to C++ code
> (like MakeWeak) are being exported to JS.
>
> It's time to provide a mechanism to allow us to define a MakeWeak callback
> in JS from JS, and ultimately to also implement a destructor type function
> when an object becomes fully dereferenced.
>
> Am I missing something that's already there?
>


Re: [v8-users] HTML5 Drag Drop feature crashing

2012-07-10 Thread Vyacheslav Egorov
This is not a V8 crash, the crash is in the WebKit. Please report it to them.

--
Vyacheslav Egorov


On Tue, Jul 10, 2012 at 2:16 PM, Manjunath G  wrote:
> Hi,
>
> When I try to test the HTML5 feature drag drop from
> http://html5demos.com/drag,
> v8 is crashing. Looks like crash is happening in pthread lock. Stack trace
> is as follows. Please can anyone help in debugging this.
>
>
> #0  0x022ab3bd in pthread_mutex_lock () from
> /opt/ThirdParty/lib/libpthread.so.0
> #1  0x01f15f76 in pthread_mutex_lock () from /opt/ThirdParty/lib/libc.so.6
> #2  0x002ff75d in WTF::Mutex::lock() () from ./.libs/libwebkitgtk-3.0.so.0
> #3  0x002de1c8 in WTF::strtod(char const*, char**) () from
> ./.libs/libwebkitgtk-3.0.so.0
> #4  0x002fa9d1 in WTF::charactersToDouble(unsigned short const*, unsigned
> int, bool*, bool*) ()
>from ./.libs/libwebkitgtk-3.0.so.0
> #5  0x003fd406 in WebCore::CSSParser::lex(void*) () from
> ./.libs/libwebkitgtk-3.0.so.0
>
> #6  0x00c5234b in cssyyparse(void*) () from ./.libs/libwebkitgtk-3.0.so.0
>
> #7  0x0040180b in
> WebCore::CSSParser::parseValue(WebCore::CSSMutableStyleDeclaration*, int,
> WTF::String const&, bool) () from ./.libs/libwebkitgtk-3.0.so.0
>
> #8  0x004024de in
> WebCore::CSSParser::parseValue(WebCore::CSSMutableStyleDeclaration*, int,
> WTF::String const&, bool, bool) () from ./.libs/libwebkitgtk-3.0.so.0
> #9  0x003f44fd in WebCore::CSSMutableStyleDeclaration::setProperty(int,
> WTF::String const&, bool, bool) () from ./.libs/libwebkitgtk-3.0.so.0
> #10 0x003f45d4 in WebCore::CSSMutableStyleDeclaration::setProperty(int,
> WTF::String const&, bool, int&) () from ./.libs/libwebkitgtk-3.0.so.0
> #11 0x00c262eb in
> WebCore::V8CSSStyleDeclaration::namedPropertySetter(v8::Local<v8::String>,
> v8::Local<v8::Value>, v8::AccessorInfo const&) () from
> ./.libs/libwebkitgtk-3.0.so.0
> #12 0x010a0e18 in
> v8::internal::JSObject::SetPropertyWithInterceptor(v8::internal::String*,
> v8::internal::Object*, PropertyAttributes, v8::internal::StrictModeFlag) ()
>from ./.libs/libwebkitgtk-3.0.so.0
> #13 0x in ?? ()
> (gdb) q
>
>
>
> Thanks in advance.
>
> Regards
> Manjunath
>


Re: [v8-users] Strict mode performance benefits

2012-07-10 Thread Vyacheslav Egorov
Strict mode actually does have a performance benefit in one peculiar case:
if you want to extend the String, Number, or Boolean prototype, declare the
functions you put on it strict. This will allow you to avoid coercion of a
primitive value to an object wrapper, which has a negative impact on
performance if the call is on a very hot path.
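
A minimal sketch of that case:

String.prototype.firstChar = function () {
  "use strict";
  // In a strict function `this` stays a primitive string; in a sloppy
  // function it would be coerced to a fresh String wrapper on every call.
  return this.charAt(0);
};

"hello".firstChar();  // "h", without allocating a wrapper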

Vyacheslav Egorov
On Jul 10, 2012 9:19 AM, "Jakob Kummerow"  wrote:

> On Tue, Jul 10, 2012 at 7:58 AM, Rohit  wrote:
>
>> Does V8's strict mode implementation offer any performance benefits?
>>
>
> No.
>
>

Re: [v8-users] CALL_AND_RETRY_2 memory errors and strange TODO message in heap-inl.h

2012-06-27 Thread Vyacheslav Egorov
> Is the overall performance of the GC design satisfactory enough that this
> is probably going to be a back-burner item for a while?

Not sure what you mean. It is not related to performance. It's more
about cleaner and safer code. Paradoxically, it's simpler to write a
cleaner and safer runtime when you just crash on OOM and don't have to
worry about inconsistent state inside your VM.

In some places it's simple to unroll/discard changes and rethrow the OOM
further; in some it should be possible to allow the Isolate to die alone,
without crashing the whole process; but overall it's a huge engineering
problem, as it requires an audit and refactoring of 4 years of code
written in the "no OOM or crash" paradigm.

--
Vyacheslav Egorov

On Thu, Jun 28, 2012 at 12:53 AM, Brian Wilson  wrote:
> Thanks,
>
> I've started by setting a breakpoint in
> v8::internal::V8::FatalProcessOutOfMemory to see how I end up here.
> Once the process is stopped, I can dig into what's actually going on
> with the process state.
>
>
> Thanks for the insight into the TODO message, so the eventual goal
> (of the TODO) is to attempt to allow some sort of recovery from this state?
> Is the overall performance of the GC design satisfactory enough that this
> is probably going to be a back-burner item for a while?
>
> Brian
>
>
> On Jun 27, 2012, at 6:32 PM, Vyacheslav Egorov wrote:
>
>> You need to catch it in the debugger to see what actually happens. It
>> can be either:
>>
>> - real OOM: OS refused to give memory to V8 (you seem to be confident
>> that this is not the case)
>> - heap size limit OOM: ensure that your heap size is not exceeding
>> --max-old-space-size limit (defaults: 700mb on ia32, 1400mb on x64).
>> - artificial OOM when trying to allocate an array or a string that is
>> too big (e.g. check constants SeqAsciiString::kMaxLength,
>> FixedArray::kMaxLength).
>>
>> The cryptic TODO itself comes from V8's early days. Short explanation:
>> some places in the V8 runtime (e.g. places that use methods of
>> v8::internal::Factory) do not expect allocation failures that cannot
>> be resolved by calling the GC (several times in the worst case), so
>> CALL_AND_RETRY can't return a Failure object to them (and V8 does not
>> use exceptions or longjmp in the runtime), so it has to fail. The TODO
>> reflects some hope that V8 might start handling OOMs more gracefully some
>> day in the future (which is not trivial, as an OOM might leave the VM in
>> an inconsistent state).
>>
>> --
>> Vyacheslav Egorov
>>
>>
>> On Wed, Jun 27, 2012 at 11:55 PM, Brian Wilson  wrote:
>>>
>>> I've been running into some issues lately where I see the message:
>>>
>>> FATAL ERROR: CALL_AND_RETRY_2 Allocation failed - process out of memory 
>>> running a program (it's built on Node.js, but I'm interested in tracing 
>>> this on the v8 level).
>>>
>>> From all indications there's plenty of free memory, plenty of heap space 
>>> and the ulimit is not set too low, but we're still seeing this issue.  Does 
>>> anyone have any suggestions on how to track down how we're triggering this 
>>> allocation failure?
>>>
>>> Incidentally, while browsing through heap-inl.h there's a cryptic "TODO...
>>> fix this." comment.
>>>
>>> 540     if (__maybe_object__->IsOutOfMemory() ||                           \
>>> 541         __maybe_object__->IsRetryAfterGC()) {                          \
>>> 542       /* TODO(1181417): Fix this. */                                   \
>>> 543       v8::internal::V8::FatalProcessOutOfMemory("CALL_AND_RETRY_2", true);\
>>> 544     }                                                                  \
>>>
>>>
>>> Can someone shed light on where that comment came from, what the issue was 
>>> and what fixed it?
>>>
>>>
>>> Thanks,
>>> Brian
>


Re: [v8-users] instanceof Array fails after ReattachGlobal

2012-06-27 Thread Vyacheslav Egorov
> So where are they kept (the built-ins)? Are they hooked in via the prototype
> chain to the global object?

Yes, they are. Quoting comments in v8.h:

  /**
   * Returns the global proxy object or global object itself for
   * detached contexts.
   *
   * Global proxy object is a thin wrapper whose prototype points to
   * actual context's global object with the properties like Object, etc.
   * This is done that way for security reasons (for more details see
   * https://wiki.mozilla.org/Gecko:SplitWindow).
   *
   * Please note that changes to global proxy object prototype most probably
   * would break VM---v8 expects only global object as a prototype of
   * global proxy object.
   *
   * If DetachGlobal() has been invoked, Global() would return actual global
   * object until global is reattached with ReattachGlobal().
   */
  Local<Object> Global();

>   So I can just create a new empty global object each time like this

Unfortunately no. Normal JavaScript objects do not work with
ReattachGlobal. It expects JSGlobalProxy (which cannot be created
through an API separately from a Context). Quoting v8.h:

 /**
   * Reattaches a global object to a context.  This can be used to
   * restore the connection between a global object and a context
   * after DetachGlobal has been called.
   *
   * \param global_object The global object to reattach to the
   *   context.  For this to work, the global object must be the global
   *   object that was associated with this context before a call to
   *   DetachGlobal.
   */
  void ReattachGlobal(Handle<Object> global_object);

Unfortunately it seems that your use case is not supported by V8's API.

--
Vyacheslav Egorov


On Wed, Jun 27, 2012 at 3:58 PM, MikeM  wrote:
>
> > There is no way. A new context means new built-in objects. Also,
> > reattaching the global does not change anything because the built-ins are
> > not on the global object itself.
>
> Holy Javascript Batman!  That explains a lot!
> So where are they kept (the built-ins)? Are they hooked in via the prototype
> chain to the global object?
>
> > If you want to reuse the same builtins you don't ultimately need a new
> > context. Just use the same one all the time.
>
> Excellent!  So I can just create a new empty global object each time like 
> this and attach to my existing context to give me the same built-ins but a 
> clean global?
> Persistent<Object> globalObject = Persistent<Object>::New(Object::New());
>


Re: [v8-users] CALL_AND_RETRY_2 memory errors and strange TODO message in heap-inl.h

2012-06-27 Thread Vyacheslav Egorov
You need to catch it in the debugger to see what actually happens. It
can be either:

- real OOM: OS refused to give memory to V8 (you seem to be confident
that this is not the case)
- heap size limit OOM: ensure that your heap size is not exceeding
--max-old-space-size limit (defaults: 700mb on ia32, 1400mb on x64).
- artificial OOM when trying to allocate an array or a string that is
too big (e.g. check constants SeqAsciiString::kMaxLength,
FixedArray::kMaxLength).
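
For illustration (a hedged sketch, not a test case), the last two cases can
be hit even when the OS has plenty of free memory:

var s = "x";
while (true) s = s + s;  // doubles until it trips either the old-space
                         // limit or the maximum string length and dies
                         // with the CALL_AND_RETRY error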

The cryptic TODO itself comes from V8's early days. Short explanation:
some places in the V8 runtime (e.g. places that use methods of
v8::internal::Factory) do not expect allocation failures that cannot
be resolved by calling the GC (several times in the worst case), so
CALL_AND_RETRY can't return a Failure object to them (and V8 does not
use exceptions or longjmp in the runtime), so it has to fail. The TODO
reflects some hope that V8 might start handling OOMs more gracefully some
day in the future (which is not trivial, as an OOM might leave the VM in
an inconsistent state).

--
Vyacheslav Egorov


On Wed, Jun 27, 2012 at 11:55 PM, Brian Wilson  wrote:
>
> I've been running into some issues lately where I see the message:
>
> FATAL ERROR: CALL_AND_RETRY_2 Allocation failed - process out of memory 
> running a program (it's built on Node.js, but I'm interested in tracing this 
> on the v8 level).
>
> From all indications there's plenty of free memory, plenty of heap space and 
> the ulimit is not set too low, but we're still seeing this issue.  Does 
> anyone have any suggestions on how to track down how we're triggering this 
> allocation failure?
>
> Incidentally, while browsing through heap-inl.h there's a cryptic "TODO...
> fix this." comment.
>
> 540     if (__maybe_object__->IsOutOfMemory() ||                             \
> 541         __maybe_object__->IsRetryAfterGC()) {                            \
> 542       /* TODO(1181417): Fix this. */                                     \
> 543       v8::internal::V8::FatalProcessOutOfMemory("CALL_AND_RETRY_2", true);\
> 544     }                                                                    \
>
>
> Can someone shed light on where that comment came from, what the issue was 
> and what fixed it?
>
>
> Thanks,
> Brian
>


Re: [v8-users] instanceof Array fails after ReattachGlobal

2012-06-27 Thread Vyacheslav Egorov
> How can I re-use the same built-ins each time?

There is no way. A new context means new built-in objects. Also,
reattaching the global does not change anything because the built-ins are
not on the global object itself.

If you want to reuse the same builtins you don't ultimately need a new
context. Just use the same one all the time.

> Also what about strings or custom objects of our own?

If you execute this code multiple times you get multiple foos and of
course instances produced by one foo are not instanceof another foo...
Just like in pure JS:

function mkfoo() {
  function foo() {}
  return new foo();
}

var o1 = mkfoo();
var o2 = mkfoo();

o1 instanceof o2.constructor // => false

--
Vyacheslav Egorov


On Tue, Jun 26, 2012 at 5:34 PM, MikeM  wrote:
>> The Array function in one context is different from the Array function in
>> another context, as each context is a separate world with its own
>> built-in objects.
> Right.  That was the purpose of the ReattachGlobal() in the code.
> My idea (possibly mis-guided), was to re-use the same set of built-in
> objects (or prototypes) between different executions.
> So that instanceOf would work properly.  How can I re-use the same built-ins
> each time?
> I suppose I could keep using the same Context over and over, but I would
> need a way to wipe out any local var declarations between executions and
> only keep the built-ins.
>
>
>> You can use Array.isArray which should work cross-context.
> Also what about strings or custom objects of our own?
>
>   function foo() {}
>   var x = new foo();
>   x instanceof foo;
>
> Thanks!
>
>
>
>


Re: [v8-users] instanceof Array fails after ReattachGlobal

2012-06-26 Thread Vyacheslav Egorov
The Array function in one context is different from the Array function in
another context, as each context is a separate world with its own
built-in objects.

You can use Array.isArray which should work cross-context.
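
A minimal sketch of the difference:

function isArrayAnywhere(value) {
  // Array.isArray checks the internal type rather than the identity of
  // the Array constructor, so it also recognizes arrays created in
  // another context.
  return Array.isArray(value);
}
// `value instanceof Array` would be false for such cross-context arrays.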

--
Vyacheslav Egorov


On Mon, Jun 25, 2012 at 9:08 PM, MikeM  wrote:
> The code below throws an exception because the saved property "arraytest"
> doesn't seem to be an array on the second execution of the script.
> I presume the built-in prototypes like "Array" must be different after
> reattaching the same global to the 2nd context.
> But why?
> The failure happens in the final test at the end of the code.
>
> 
> static inline v8::Local<v8::Value> CompileRun(const char* source) {
>   return v8::Script::Compile(v8::String::New(source))->Run();
> }
>
>   HandleScope scope;
>   Persistent<Context> ctxRequest = Context::New();
>
>   Local<String> foo = String::New("foo");
>   ctxRequest->SetSecurityToken(foo);
>   ctxRequest->Enter();  //Start execution of 1st request.
>
>   //Create an object we can use to store properties between requests.
>   Persistent<Object> sessionObject = Persistent<Object>::New(Object::New());
>   Local<Object> requestGlobal = ctxRequest->Global();
>
>   //Create a reference to session object on request global.
>   requestGlobal->Set(String::New("Session"), sessionObject);
>   TryCatch trycatch;
>
>   //Add property to the session object and save it. Add an empty array too.
>   CompileRun("Session.saveme = 42; if(Session.arraytest === undefined)
> {Session.arraytest = [];}");
>
>   //Makes sure we have an array and return value in saveme
>   Local<Value> result = Script::Compile(String::New("if(!(Session.arraytest
> instanceof Array)) {throw new Error('Failed instanceof Array test.');}
> Session.saveme"))->Run();
>   if(result.IsEmpty())
>   {
>  Handle<Value> exception = trycatch.Exception();
>  String::AsciiValue exception_str(exception);
>  printf("Exception: %s\n", *exception_str);
>   }
>   else
>   {
>  CHECK(!result->IsUndefined());
>  CHECK(result->IsInt32());
>  CHECK(42 == result->Int32Value());
>   }
>
>   //Save the the global and reattach to 2nd request later
>   Persistent<Object> requestSavedGlobal =
> Persistent<Object>::Persistent(ctxRequest->Global());
>   ctxRequest->DetachGlobal();
>   ctxRequest->Exit();
>
>   Persistent<Context> ctxRequest2 = Context::New();
>   ctxRequest2->ReattachGlobal(requestSavedGlobal);
>   ctxRequest2->SetSecurityToken(foo);
>   ctxRequest2->Enter();
>
>   requestSavedGlobal->Set(String::New("Session"), sessionObject);
>
>   //Check that we can get value of saved property in the session.
>   result = Script::Compile(String::New("if(!(Session.arraytest instanceof
> Array)) {throw new Error('Failed instanceof Array test.');}"))->Run();
>   if(result.IsEmpty())
>   {
>     Handle<Value> exception = trycatch.Exception();
>     String::AsciiValue exception_str(exception);
>     printf("Exception: %s\n", *exception_str);
>   }
>


Re: [v8-users] Re: [SIGSEGV] v8::HandleScope::HandleScope()

2012-06-16 Thread Vyacheslav Egorov
This assertion indeed indicates that you are trying to use V8 from a
thread that does not own an isolate.

You should get exclusive access to the isolate you are going to use with Locker:

https://github.com/v8/v8/blob/master/include/v8.h#L3638

--
Vyacheslav Egorov


On Fri, Jun 15, 2012 at 1:16 PM, Serega  wrote:
> #
> # Fatal error in ../src/isolate.h, line 440
> # CHECK(isolate != __null) failed
> #
>
> Program received signal SIGTRAP, Trace/breakpoint trap.
> [Switching to Thread 0x74d56700 (LWP 8551)]
> v8::internal::OS::DebugBreak () at ../src/platform-linux.cc:389
> 389     }
> (gdb) step
> v8::internal::OS::Abort () at ../src/platform-linux.cc:373
> 373       abort();
> (gdb) step
>
> Program received signal SIGABRT, Aborted.
> 0x76cbd445 in raise () from /lib/x86_64-linux-gnu/libc.so.6
> (gdb) step
> Single stepping until exit from function raise,
> which has no line number information.
> [Thread 0x74d56700 (LWP 8551) exited]
> [Thread 0x7fffef7fe700 (LWP 8552) exited]
> [Thread 0x7fffe700 (LWP 8550) exited]
> [Thread 0x75d58700 (LWP 8548) exited]
> [Thread 0x76559700 (LWP 8547) exited]
> [Thread 0x77ff7700 (LWP 8546) exited]
> [Thread 0x77fd5740 (LWP 8519) exited]
>
> Program terminated with signal SIGABRT, Aborted.
> The program no longer exists.
>
> Thank you!
> One more question: can you show an example of how to use an isolate in v8?
>
On 15 Jun, 13:38, Vyacheslav Egorov  wrote:
>> Hello,
>>
>> Please link against debug version of V8 to get more information about the 
>> crash.
>>
>> Also ensure that the thread that invokes your event_handler owns V8
>> Isolate if you have multiple threads using V8 concurrently.
>>
>> --
>> Vyacheslav Egorov
>>
>>
>>
>>
>>
>>
>>
>> On Fri, Jun 15, 2012 at 8:28 AM, Serega  wrote:
>> > Hello! I'm having a little trouble.
>>
>> > Program received signal SIGSEGV, Segmentation fault.
>> > [Switching to Thread 0x75f66700 (LWP 21828)]
>> > 0x772bcff7 in v8::HandleScope::HandleScope() () from /usr/lib/
>> > libv8.so
>> > (gdb) step
>> > Single stepping until exit from function _ZN2v811HandleScopeC2Ev,
>> > which has no line number information.
>> > [Thread 0x75f66700 (LWP 21828) exited]
>> > [Thread 0x7fffe77fe700 (LWP 21832) exited]
>> > [Thread 0x7fffe7fff700 (LWP 21831) exited]
>> > [Thread 0x74f64700 (LWP 21830) exited]
>> > [Thread 0x75765700 (LWP 21829) exited]
>> > [Thread 0x76767700 (LWP 21827) exited]
>> > [Thread 0x77ff7700 (LWP 21826) exited]
>>
>> > Program terminated with signal SIGSEGV, Segmentation fault.
>> > The program no longer exists.
>>
>> > Why can't I create a HandleScope in a function that is run through a
>> > pointer?
>>
>> > void *event_handler(...) {
>>
>> >      HandleScope handle_scope;
>>
>> > ...
>> > }
>>
>> > --
>> > v8-users mailing list
>> > v8-users@googlegroups.com
>> >http://groups.google.com/group/v8-users
>
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] ArrayBuffer fast access

2012-06-16 Thread Vyacheslav Egorov
> I couldn't find any fast access defined in the JIT compiler

There are fast paths for typed arrays inside V8, they are just not
called typed arrays :-) Look for "external arrays" instead.

For them V8 has both specialized IC stubs (e.g. load stub:
https://github.com/v8/v8/blob/master/src/ia32/stub-cache-ia32.cc#L3508
) and support in optimizing compiler pipeline (see IR instructions:
LoadExternalArrayPointer, LoadKeyedSpecializedArrayElement,
StoreKeyedSpecializedArrayElement).

I always use typed arrays when they are available (this communicates
my intent both to the JIT compiler and to a person reading my code).

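For illustration, a minimal sketch of the kind of code those fast paths
apply to:

// Keyed loads/stores on a typed array can use the specialized external
// array stubs and IR instructions instead of generic property lookup.
var samples = new Float64Array(1024);
function accumulate(samples) {
  var sum = 0;
  for (var i = 0; i < samples.length; i++) {
    sum += samples[i];
  }
  return sum;
}
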
--
Vyacheslav Egorov


On Sat, Jun 16, 2012 at 5:44 PM, Pablo Sole  wrote:
> Hello there,
>
> I'm embedding v8 into a binary instrumentation framework and I'm trying
> to use an ArrayBuffer/TypedBuffer for fast memory operations (like
> Tamarin/ActionScript does for the Memory object operations), but I
> couldn't find any fast access defined in the JIT compiler, so I suppose
> that a read/write to a TypedBuffer goes all the way through object and
> property resolution, though for this case it could just be a range
> check and a memory load/store operation.
>
> So, would a regular array of SMIs (SMIs in the
> indexes and in the values) without holes be faster than an ArrayBuffer? Is
> there any plan to provide a fast path for this case?
>
> Thanks,
>
> pablo.
>
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] [SIGSEGV] v8::HandleScope::HandleScope()

2012-06-15 Thread Vyacheslav Egorov
Hello,

Please link against the debug version of V8 to get more information about the crash.

Also ensure that the thread that invokes your event_handler owns V8
Isolate if you have multiple threads using V8 concurrently.

--
Vyacheslav Egorov

On Fri, Jun 15, 2012 at 8:28 AM, Serega  wrote:
> Hello! I'm having a little trouble.
>
> Program received signal SIGSEGV, Segmentation fault.
> [Switching to Thread 0x75f66700 (LWP 21828)]
> 0x772bcff7 in v8::HandleScope::HandleScope() () from /usr/lib/
> libv8.so
> (gdb) step
> Single stepping until exit from function _ZN2v811HandleScopeC2Ev,
> which has no line number information.
> [Thread 0x75f66700 (LWP 21828) exited]
> [Thread 0x7fffe77fe700 (LWP 21832) exited]
> [Thread 0x7fffe7fff700 (LWP 21831) exited]
> [Thread 0x74f64700 (LWP 21830) exited]
> [Thread 0x75765700 (LWP 21829) exited]
> [Thread 0x76767700 (LWP 21827) exited]
> [Thread 0x77ff7700 (LWP 21826) exited]
>
> Program terminated with signal SIGSEGV, Segmentation fault.
> The program no longer exists.
>
> Why can't I create a HandleScope in a function that is run through a
> pointer?
>
> void *event_handler(...) {
>
>      HandleScope handle_scope;
>
> ...
> }
>
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] SetAccessor and read only?

2012-06-08 Thread Vyacheslav Egorov
Actually CCing Michael and Andreas.
--
Vyacheslav Egorov

On Fri, Jun 8, 2012 at 10:49 PM, Vyacheslav Egorov  wrote:
> +mstarzinger +rossberg
>
> I think we need a repro and V8 version. I could not repro on ToT with
> an example like:
>
>  static Handle<Value> Getter(Local<String> property, const AccessorInfo&
> info) {
>  return Integer::New(24);
> }
>
>
> static Handle<Value> Foo(const Arguments& args) {
>  HandleScope scope;
>  Handle<Object> obj = Object::New();
>
>  obj->SetAccessor(String::New("foo"), &Getter, 0 /* setter */,
>                   Handle<Value>(),
>                   static_cast<AccessControl>(v8::DEFAULT),
>                   static_cast<PropertyAttribute>(v8::ReadOnly));
>
>  return scope.Close(obj);
> }
>
> var obj = Foo();
>
> var desc = Object.getOwnPropertyDescriptor(obj, "foo");
>
> print(desc.value);
> print(desc.get);
> print(desc.set);
> print(desc.writable);
> print(obj.foo);
> obj.foo = 42;
> print(obj.foo);
>
> % out/ia32.debug/d8 test.js
> 24
> undefined
> undefined
> false
> 24
> 24
>
>> What I want is a property that is writable but if not set should call the 
>> getter.
>
> I don't see how it fits into JavaScript object model.  If you have an
> accessor property without a setter you can't write into it ([[CanPut]]
> will be false).
>
>>  Is there a way to remove a V8 accessor?
>
> delete object.name would delete it. But there seems to be a different
> way, see below.
>
>> Would a call to Delete() make this a slow object?
>
> Yes.
>
> Different way is to use v8::Object::ForceSet to replace accessor with
> a real property. Good news: it will keep an object in fast mode if
> possible (if object has enough space for another fast property). Bad
> news: every time you replace an accessor with a normal data property
> with ForceSet you will get a different map (hidden class) because
> v8::internal::JSObject::ConvertDescriptorToField does not create a
> transition.
>
> // In theory accessor descriptors should be replaceable with data
> descriptors via [[DefineOwnProperty]] (Object.defineProperty) but
> there is some special handling in our code that prevents it.
> https://github.com/v8/v8/blob/master/src/runtime.cc#L4459-4478
>
> --
> Vyacheslav Egorov
>
> On Fri, Jun 8, 2012 at 9:33 PM, Erik Arvidsson  wrote:
>> The V8 WebKit bindings generates something like this:
>>
>> object->SetAccessor(name, getter, 0 /* setter */, data,
>> static_cast<AccessControl>(v8::DEFAULT),
>> static_cast<PropertyAttribute>(v8::ReadOnly));
>>
>> There are two really strange behaviors with this:
>>
>> 1. The descriptor for this reports this as writable:
>>
>> var descr = Object.getOwnPropertyDescriptor(object, name);
>> descr.writable  // true!
>>
>> 2. Setting the property works
>>
>> object.name = 42;
>> object.name  // 42
>>
>> However, if we remove the ReadOnly flag in the call to SetAccessor we
>> get a writable property that cannot be written to:
>>
>> object->SetAccessor(name, getter, 0 /* setter */, data,
>> static_cast<AccessControl>(v8::DEFAULT),
>> static_cast<PropertyAttribute>(v8::None));
>>
>> var descr = Object.getOwnPropertyDescriptor(object, name);
>> descr.writable  // true
>>
>> object.name = 42;
>> object.name  // not 42!
>>
>>
>> This is pretty strange. What I want is a property that is writable but
>> if not set should call the getter.
>>
>> One way I can implement this is to generate a setter too that when set
>> reconfigures the property. Is there a way to remove a V8 accessor?
>> Would a call to Delete() make this a slow object?
>>
>>
>> --
>> erik
>>
>> --
>> v8-users mailing list
>> v8-users@googlegroups.com
>> http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] SetAccessor and read only?

2012-06-08 Thread Vyacheslav Egorov
+mstarzinger +rossberg

I think we need a repro and V8 version. I could not repro on ToT with
an example like:

static Handle<Value> Getter(Local<String> property, const AccessorInfo& info) {
  return Integer::New(24);
}


static Handle<Value> Foo(const Arguments& args) {
  HandleScope scope;
  Handle<Object> obj = Object::New();

  obj->SetAccessor(String::New("foo"), &Getter, 0 /* setter */,
   Handle<Value>(),
   static_cast<AccessControl>(v8::DEFAULT),
   static_cast<PropertyAttribute>(v8::ReadOnly));

  return scope.Close(obj);
}

var obj = Foo();

var desc = Object.getOwnPropertyDescriptor(obj, "foo");

print(desc.value);
print(desc.get);
print(desc.set);
print(desc.writable);
print(obj.foo);
obj.foo = 42;
print(obj.foo);

% out/ia32.debug/d8 test.js
24
undefined
undefined
false
24
24

> What I want is a property that is writable but if not set should call the 
> getter.

I don't see how it fits into the JavaScript object model.  If you have an
accessor property without a setter you can't write into it ([[CanPut]]
will be false).

>  Is there a way to remove a V8 accessor?

delete object.name would delete it. But there seems to be a different
way, see below.

> Would a call to Delete() make this a slow object?

Yes.

A different way is to use v8::Object::ForceSet to replace an accessor with
a real property. Good news: it will keep an object in fast mode if
possible (if object has enough space for another fast property). Bad
news: every time you replace an accessor with a normal data property
with ForceSet you will get a different map (hidden class) because
v8::internal::JSObject::ConvertDescriptorToField does not create a
transition.

// In theory accessor descriptors should be replaceable with data
descriptors via [[DefineOwnProperty]] (Object.defineProperty) but
there is some special handling in our code that prevents it.
https://github.com/v8/v8/blob/master/src/runtime.cc#L4459-4478

--
Vyacheslav Egorov

On Fri, Jun 8, 2012 at 9:33 PM, Erik Arvidsson  wrote:
> The V8 WebKit bindings generates something like this:
>
> object->SetAccessor(name, getter, 0 /* setter */, data,
> static_cast<AccessControl>(v8::DEFAULT),
> static_cast<PropertyAttribute>(v8::ReadOnly));
>
> There are two really strange behaviors with this:
>
> 1. The descriptor for this reports this as writable:
>
> var descr = Object.getOwnPropertyDescriptor(object, name);
> descr.writable  // true!
>
> 2. Setting the property works
>
> object.name = 42;
> object.name  // 42
>
> However, if we remove the ReadOnly flag in the call to SetAccessor we
> get a writable property that cannot be written to:
>
> object->SetAccessor(name, getter, 0 /* setter */, data,
> static_cast<AccessControl>(v8::DEFAULT),
> static_cast<PropertyAttribute>(v8::None));
>
> var descr = Object.getOwnPropertyDescriptor(object, name);
> descr.writable  // true
>
> object.name = 42;
> object.name  // not 42!
>
>
> This is pretty strange. What I want is a property that is writable but
> if not set should call the getter.
>
> One way I can implement this is to generate a setter too that when set
> reconfigures the property. Is there a way to remove a V8 accessor?
> Would a call to Delete() make this a slow object?
>
>
> --
> erik
>
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] XOR two 31-bit unsigned integers much faster than XOR two 32-bit unsigned integers?

2012-05-30 Thread Vyacheslav Egorov
So I threw in some code: https://gist.github.com/2837822 That does
something similar to what you described (but not necessarily
completely similar): it just does some int32 bitwise operations.

On my machine the x64.release build of V8 takes 8ms (0.8 ms per key).

If I start casting the result to uint32 via >>> 0, I get to 36ms
(0.00036ms per key).

Limiting the number of bits inside the table does not improve anything
(which is quite expected with code shaped like that).

--
Vyacheslav Egorov

On Wed, May 30, 2012 at 7:10 PM, Joran Greef  wrote:
> Keeping everything as Int32 but casting using "return hash >>> 0"
> is 0.00022ms per key.
>
> Keeping everything as Int32 but casting using "return hash < 0 ? 4294967296
> + hash : hash" is 0.51ms per key, order of magnitude faster and still
> 32-bit.
>
>
> On Wednesday, May 30, 2012 6:25:39 PM UTC+2, Vyacheslav Egorov wrote:
>>
>> c) stop doing >>> 0 at the end;
>
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] XOR two 31-bit unsigned integers much faster than XOR two 32-bit unsigned integers?

2012-05-30 Thread Vyacheslav Egorov
Interesting. If unsigned integers are completely eliminated I would
expect it to be as fast as the 31-bit version. Strange to still see the
slowdown.

I would expect that the conversion to a bucket index is independent of the
hash sign; it should be the same: bucket_index = hash & mask; where mask =
number_of_buckets - 1 (and number_of_buckets is a power of 2).

--
Vyacheslav Egorov

On Wed, May 30, 2012 at 7:02 PM, Joran Greef  wrote:
> To clarify, Array instead of Uint32Array is slightly slower as expected:
> 0.54ms per key vs 0.48ms per key.
>
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] XOR two 31-bit unsigned integers much faster than XOR two 32-bit unsigned integers?

2012-05-30 Thread Vyacheslav Egorov
>>> is a logical right shift. It shifts bits right filling vacant positions 
>>> with 0 (as opposed to >> which fills vacant positions with sign bit).

In JavaScript >>> also performs ToUint32 on the input before doing
shift (as opposed to >> which converts input via ToInt32) so the
result is always a number from unsigned 32-bit integer range.

See: http://es5.github.com/#x11.7.3

x >>> 0 is basically ToUint32(x)

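A few concrete values, runnable in any JS shell:

(-1) >>> 0;            // 4294967295: ToUint32 wraps -1 into [0, 2^32)
(-1) >> 0;             // -1: ToInt32 keeps the sign
Math.pow(2, 32) >>> 0; // 0: the input is reduced modulo 2^32
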
--
Vyacheslav Egorov

On Wed, May 30, 2012 at 6:34 PM, Stephan Beal  wrote:
> On Wed, May 30, 2012 at 6:25 PM, Vyacheslav Egorov 
> wrote:
>>
>> c) stop doing >>> 0 at the end;
>
>
> A side question: what does >>>0 do? i have never seen the >>> operator
> before this thread and never seen a 0-bit shift anywhere.
>
> :-?
>
> --
> - stephan beal
> http://wanderinghorse.net/home/stephan/
> http://gplus.to/sgbeal
>
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] XOR two 31-bit unsigned integers much faster than XOR two 32-bit unsigned integers?

2012-05-30 Thread Vyacheslav Egorov
Is it essential that your hash should be uint32 not int32?

I would assume you can get a good performance and 32 bit of hash if
you stay with int32's:

a) fill your tables with (integer % Math.pow(2,32)) | 0 to force
uint32 into int32 and
b) make your table a typed array instead of normal array: new
Int32Array(256); [not necessary if you are running x64 version of V8]
c) stop doing >>> 0 at the end;

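A minimal sketch of (a)-(c) put together; rand32() here is a hypothetical
source of the 32-bit table entries:

// (b) typed backing store; (a) the "| 0" folds each value into int32 range.
var table = new Int32Array(256);
for (var i = 0; i < 256; i++) {
  table[i] = (rand32() % Math.pow(2, 32)) | 0;
}
// (c) the running hash stays int32; no trailing ">>> 0".
function hashByte(hash, byte) {
  return hash ^ table[byte];
}
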
--
Vyacheslav Egorov

On Wed, May 30, 2012 at 5:21 PM, Joran Greef  wrote:
> The difference comes through when changing "integer % Math.pow(2,32)"
> to "integer % Math.pow(2,31)" when pre-generating the hash tables.
>
> Hash tables containing 256 random 31-bit unsigned integers are pre-generated
> for every byte of key. The hash operates on fixed-length 20 byte keys.
>
> Each byte of the key is XORed with one of the integers in the table,
> depending on the position of the byte in the key, so the XOR is dealing with
> an 8-bit unsigned integer and a 31-bit unsigned integer.
>
> The result is cast to an unsigned integer by >>> 0.
>
> On Wednesday, May 30, 2012 5:00:55 PM UTC+2, Vyacheslav Egorov wrote:
>>
>> Minor correction: 31-bit tagged _signed_ integers are used on ia32, on
>> x64 you get 32-bit tagged _signed_ integers. Neither though are wide
>> enough to contain all values from unsigned 32-bit integer range.
>>
>> Thus if you are really using them as 32bit _unsigned_ integers, e.g.
>> you are doing something like:
>>
>> var a = (b ^ c) >>> 0; // force into uint32 and then use in
>> non-int32-truncating manner.
>>
>> then unfortunately even V8's optimizing compiler gets confused. It
>> does not have designated uint32 representation and does not try to
>> infer when int32 can be safely used instead of uint32 (another example
>> of this bug: http://code.google.com/p/v8/issues/detail?id=2097).
>>
>> I suggest you post your code here if possible so that we could take a
>> look.
>>
>> --
>> Vyacheslav Egorov
>>
>> On Wed, May 30, 2012 at 4:40 PM, Jakob Kummerow 
>> wrote:
>> > As long as you're running unoptimized code on a 32-bit V8, this is
>> > expected:
>> > 31-bit integers are stored directly as "small integers", the 32nd bit is
>> > used to tag them as such, whereas 32-bit integers are converted to
>> > doubles
>> > and stored as objects on the heap, which makes accessing them more
>> > expensive.
>> >
>> > When your code runs long enough for the optimizer to kick in, it should
>> > recognize this situation, use untagged 32-bit integer values in
>> > optimized
>> > code, and the difference between 31-bit and 32-bit values should go
>> > away. If
>> > it doesn't, please post a reduced test case that exhibits the behavior
>> > so
>> > that we can investigate. (Running the code for a second or so should be
>> > enough to get the full effect of optimization and make the initial
>> > difference negligible.)
>> >
>> >
>> > On Wed, May 30, 2012 at 4:31 PM, Joran Greef  wrote:
>> >>
>> >> I am implementing a table hash
>> >> (http://en.wikipedia.org/wiki/Tabulation_hashing) and noticed that a
>> >> table
>> >> hash using a table of 31-bit unsigned integers is almost an order of
>> >> magnitude faster than a table hash using a table of 32-bit unsigned
>> >> integers.
>> >>
>> >> The former has an average hash time of 0.7ms per 20 byte key for a
>> >> 31-bit hash, and the latter has an average hash time of 0.00034ms per
>> >> 20
>> >> byte key for a 32-bit hash.
>> >>
>> >> I figured that XOR on 8-bit integers would be faster than XOR on 16-bit
>> >> integers would be faster than XOR on 24-bit integers would be faster
>> >> than
>> >> XOR on 32-bit integers, but did not anticipate such a difference
>> >> between
>> >> 31-bit and 32-bit integers.
>> >>
>> >> Is there something regarding XOR that I may be missing that could
>> >> explain
>> >> the difference?
>> >>
>> >>
>> > --
>> > v8-users mailing list
>> > v8-users@googlegroups.com
>> > http://groups.google.com/group/v8-users
>
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] XOR two 31-bit unsigned integers much faster than XOR two 32-bit unsigned integers?

2012-05-30 Thread Vyacheslav Egorov
Minor correction: 31-bit tagged _signed_ integers are used on ia32, on
x64 you get 32-bit tagged _signed_ integers. Neither though is wide
enough to contain all values from the unsigned 32-bit integer range.

Thus if you are really using them as 32bit _unsigned_ integers, e.g.
you are doing something like:

var a = (b ^ c) >>> 0; // force into uint32 and then use in
non-int32-truncating manner.

then unfortunately even V8's optimizing compiler gets confused. It
does not have a designated uint32 representation and does not try to
infer when int32 can be safely used instead of uint32 (another example
of this bug: http://code.google.com/p/v8/issues/detail?id=2097).

I suggest you post your code here if possible so that we could take a look.

--
Vyacheslav Egorov

On Wed, May 30, 2012 at 4:40 PM, Jakob Kummerow  wrote:
> As long as you're running unoptimized code on a 32-bit V8, this is expected:
> 31-bit integers are stored directly as "small integers", the 32nd bit is
> used to tag them as such, whereas 32-bit integers are converted to doubles
> and stored as objects on the heap, which makes accessing them more
> expensive.
>
> When your code runs long enough for the optimizer to kick in, it should
> recognize this situation, use untagged 32-bit integer values in optimized
> code, and the difference between 31-bit and 32-bit values should go away. If
> it doesn't, please post a reduced test case that exhibits the behavior so
> that we can investigate. (Running the code for a second or so should be
> enough to get the full effect of optimization and make the initial
> difference negligible.)
>
>
> On Wed, May 30, 2012 at 4:31 PM, Joran Greef  wrote:
>>
>> I am implementing a table hash
>> (http://en.wikipedia.org/wiki/Tabulation_hashing) and noticed that a table
>> hash using a table of 31-bit unsigned integers is almost an order of
>> magnitude faster than a table hash using a table of 32-bit unsigned
>> integers.
>>
>> The former has an average hash time of 0.7ms per 20 byte key for a
>> 31-bit hash, and the latter has an average hash time of 0.00034ms per 20
>> byte key for a 32-bit hash.
>>
>> I figured that XOR on 8-bit integers would be faster than XOR on 16-bit
>> integers would be faster than XOR on 24-bit integers would be faster than
>> XOR on 32-bit integers, but did not anticipate such a difference between
>> 31-bit and 32-bit integers.
>>
>> Is there something regarding XOR that I may be missing that could explain
>> the difference?
>>
>>
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] Re: Low benchmark score on MIPS platform

2012-05-30 Thread Vyacheslav Egorov
Paul, I wonder if you have some reference MIPS numbers to share? Do
the numbers you see match Lawrence's?

Lawrence, I wonder what numbers you are aiming for? What numbers did
you see with your other JavaScript engine?

--
Vyacheslav Egorov

On Wed, May 30, 2012 at 12:00 PM, Lawrence  wrote:
> Hi Jakob,
>
>    Thanks for your reply and reminding me to switch building tool :)
>
>    I run the simulator on my linux PC so it should have a better
> performance than your cell phone.
>    Just want to compare the performance between V8 and the other
> javascript engine.
>
>    I have a MIPS hardware with 700MHz CPU clock.
>    As I talked, I want to replace my original javascript engine with
> V8.
>    However, the result isn't my expectation.
>
>    Really want to know if that's the normal result for the MIPS case
> on my side.
>
>
> Regards,
>
> On May 30, 4:46 pm, Jakob Kummerow  wrote:
>> You should use GYP/make instead of SCons to build V8 (call "make
>> dependencies" once, then simply "make mips.release -j8"), but that won't
>> change performance numbers.
>>
>> You seem to be running this on a pretty fast machine; for the MIPS
>> simulator I get a score of only 45.6 on my box. Simulators are slow, that's
>> expected.
>>
>> I don't have any MIPS hardware. The closest I have is a Nexus S which
>> scores roughly 900. Would you expect your MIPS hardware to be about one
>> third as fast as a 1GHz ARM A8?
>>
>>
>>
>>
>>
>>
>>
>> On Wed, May 30, 2012 at 5:43 AM, Lawrence  wrote:
>> > I build Version 3.11.6 for little-endian MIPS with command : scons
>> > arch=mips library=static sample=shell mode=release -j8 . Also setup CC
>> > CXX AR LD RANLIB CXXFLAGS well.
>>
>> > And got the following result:
>> > Richards: 542
>> > DeltaBlue: 343
>> > Crypto: 593
>> > RayTrace: 211
>> > EarleyBoyer: 979
>> > RegExp: 120
>> > Splay: 293
>> > NavierStokes: 268
>> > 
>> > Score (version 7): 348
>>
>> > Besides, I also built mips simulator and got the following result
>> > Richards: 65.2
>> > DeltaBlue: 94.6
>> > Crypto: 49.3
>> > RayTrace: 123
>> > EarleyBoyer: 133
>> > RegExp: 33.2
>> > Splay: 243
>> > NavierStokes: 37.1
>> > 
>> > Score (version 7): 78.8
>>
>> > Could anyone get good performance on MIPS platform or just I did
>> > something wrong ?
>>
>> > Regards,
>> > Lawrence
>
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] Re: Read-only SetIndexedPropertiesToExternalArrayData

2012-04-23 Thread Vyacheslav Egorov
V8 does not support read only external arrays.

--
Vyacheslav Egorov

On Mon, Apr 23, 2012 at 7:49 AM, Paul Harris  wrote:
> Any more information on this?
>
> I would like to make TypedArray access read-only too.
>
>
> On Tuesday, May 10, 2011 8:04:17 AM UTC+8, Henrik Lindqvist wrote:
>>
>> In the spec for ArrayBuffer there is a proposal for an option to make
>> the content read-only.
>>
>> http://www.khronos.org/registry/typedarray/specs/latest/
>>
>> For fast element access I'm using
>> SetIndexedPropertiesToExternalArrayData to implement TypedArray access
>> to ArrayBuffer data. Is there a way to prohibit element assignment,
>> making it read-only?
>
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] Locally-scoped version of Persistent?

2012-04-10 Thread Vyacheslav Egorov
A locally scoped version of Persistent is almost equivalent to Local,
except that a Local can never be weak.

> Local<Context> lcontext = pcontext;

It should be

Local<Context> lcontext = Local<Context>::New(pcontext);

--
Vyacheslav Egorov


On Fri, Apr 6, 2012 at 3:32 AM, Marcel Laverdet  wrote:
> Hey I'm wondering why there isn't a helper class for Persistent which
> will Dispose() a handle at the end of scope. It seems like right now v8
> encourages lots of unfriendly cleanup code such as:
>
> void hello() {
>   Persistent<Value> thing = Persistent<Value>::New(...);
>   ...
>   thing.Dispose();
> }
>
> This kind of code is difficult to maintain in many cases, and also
> vulnerable to memory leaks when using C++ exceptions. I'd like to see a
> version of Persistent that behaves similarly to std::unique_ptr. v8
> already has helper classes like this with Isolate::Scope and Context::Scope.
>
> Or perhaps there's a way to get what I want with local handles? I tried
> something like this to no avail:
>
> Persistent<Context> pcontext = Context::New(NULL, global);
> Local<Context> lcontext = pcontext;
> pcontext.Dispose();
>
> Any advise would be appreciated!
>
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] Re: How to get a function's prototype

2012-03-14 Thread Vyacheslav Egorov
operator* defined on a v8::Handle does not return a pointer to an
object, but rather a pointer to the slot which contains that pointer.

Why do you actually want to get a raw pointer to an object?

--
Vyacheslav Egorov


On Wed, Mar 14, 2012 at 1:44 PM, avasilev  wrote:
> Seems this approach will lead to nowhere - two absolutely identical,
> consecutive C++ calls which query the same property return different,
> consecutive pointers. It seems in C++ there is also some internal
> shadowing, and pointers (at least not *Handle) don't uniquely
> identify objects.
>
> On Mar 14, 2:29 pm, avasilev  wrote:
>> I'm trying to write a C++ app to test these values: I print C++
>> pointers and implement a js function, taking an object, and print this
>> object's pointer, so that I have a picture of both the Js and C++ sides.
>> However, I discovered a strange thing - the function's arguments,
>> transformed like this: *(args[0]->ToObject()), always appear as the
>> same pointer between function calls. If I try to print two object
>> arguments in the same call, these are consecutive addresses. So it
>> seems v8 passes different objects than the actual ones, somehow
>> shadowing the real object. The addresses are quite different from
>> those that I get in C++, which leads me to think they are allocated
>> on the stack. This all makes sense, but how is the shadowing mechanism
>> implemented, and is there a way to reach the original objects from
>> within the function?
>>
>> On Mar 13, 4:06 pm, Matthias Ernst  wrote:
>>
>>
>>
>>
>>
>>
>>
>> > On Tue, Mar 13, 2012 at 3:00 PM, avasilev  wrote:
>> > > Thanks,
>> > > I was just thinking that as there is GetPrototype() and SetPrototype()
>> > > for objects, which access '__proto__', there should be also for
>> > > 'prototype'.
>>
>> > Well, GetPrototype() has slightly different semantics, at least
>> > judging from the documentation WRT hidden prototypes.
>>
>> > > I'd like to use the topic to ask - what does GetPrototype() actually
>> > > return on a function object then? Is it func.prototype.__proto__?
>>
>> > I'd expect func.__proto__.
>> > In the Chrome console this evaluates as such:
>>
>> > (function() {}).__proto__
>> > function Empty() {}
>>
>> > Matthias
>>
>> > > On Mar 13, 3:50 pm, Matthias Ernst  wrote:
>> > >> On Tue, Mar 13, 2012 at 2:26 PM, avasilev  wrote:
>> > >> > Hello,
>> > >> > Is there a way to get a function's prototype, equivalent to the
>> > >> > function's 'prototype' property, e.g.:
>>
>> > >> > function Func()
>> > >> > {}
>> > >> > var a = Func.prototype;
>>
>> > >> > Using Object::GetPrototype() does not do the same and returns a
>> > >> > different value. Setting an object's prototype via SetPrototype() to
>> > >> > the property value gives the desired effect of instanceof recognizing
>> > >> > the object as constructed by the function. Setting the prototype to
>> > >> > the function's GetPrototype() does not achieve this.
>> > >> > From the doc I don't see a way to access the "prototype" property of a
>> > >> > function, besides getting it as an ordinary property via  func-
>> > >> >>Get(String::New("prototype"));
>> > >> > Am I missing something?
>>
>> > >> I don't think you are. Why should there be another way, apart from
>> > >> convenience? If JS specifies it as a property, especially not even a
>> > >> special one, then use the property accessor. You may of course argue
>> > >> that it's inconsistent with, say, Array::Length.
>>
>> > >> > Greetings
>> > >> > Alex
>>
>> > >> > --
>> > >> > v8-users mailing list
>> > >> > v8-users@googlegroups.com
>> > >> >http://groups.google.com/group/v8-users
>>
>> > > --
>> > > v8-users mailing list
>> > > v8-users@googlegroups.com
>> > >http://groups.google.com/group/v8-users
>
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] Error evaluating an object

2012-03-05 Thread Vyacheslav Egorov
JavaScript grammar has its peculiarities. The program "{ value: {a:
[1,2,3,4,5,6]} }" is parsed not as an object literal but as a block
statement with a labeled block statement inside, where the inner block
statement contains a labeled expression statement (so value and a
become labels, not property names).

You have to enclose a top-level expression like that in parentheses if
you want to ensure that it is parsed as an expression.

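A quick illustration in the shell:

{ value: {a: 1} }         // one label per block: parses, no error
{ value: {a: 1, b: 2} }   // SyntaxError: "1, b: 2" is not a valid statement
({ value: {a: 1, b: 2} }) // OK: parentheses force expression context
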
--
Vyacheslav Egorov


On Mon, Mar 5, 2012 at 1:02 PM, fdmanana  wrote:
> Hi,
>
> When trying to evaluate the following object expression in the shell,
> I get a syntax error:
>
> $ v8
> V8 version 3.8.9 [sample shell]
>> { value: {a: [1,2,3,4,5,6], b: 1} }
> (shell):1: SyntaxError: Unexpected token :
> { value: {a: [1,2,3,4,5,6], b: 1} }
>                             ^
> SyntaxError: Unexpected token :
>
> The syntax seems correct to me.
> Node.js's shell for e.g. doesn't complain about it:
>
> $ node
>> { value: {a: [1,2,3,4,5,6], b: 1} }
> { value: { a: [ 1, 2, 3, 4, 5, 6 ], b: 1 } }
>>
>
> If I remove the second property of the inner object it works:
>
> $ v8
>> { value: {a: [1,2,3,4,5,6]} }
> 1,2,3,4,5,6
>>
>
> Evaluating the code via v8::Script::Compile also produces the same
> error.
>
> I'm using V8 version 3.8.9 installed via HomeBrew on Mac OS X 10.7.3.
>
> Thanks.
>
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] BinaryOpStub_MUL_Alloc_HeapNumbers = ?

2012-01-17 Thread Vyacheslav Egorov
Here is an alternative (a bit more precise) test case:
http://jsperf.com/float64-vs-float/2

--
Vyacheslav Egorov

On Tue, Jan 17, 2012 at 10:46 AM, Daniel Clifford wrote:

> All the examples are optimized by Crankshaft. However, in the Float64Array
> case, storing the intermediate values in the array forces a memory access.
> When you use local variables, it's much faster, since the intermediate
> double operations and local variables are stored in registers, avoiding
> memory accesses and triggering only a single boxing operation at the
> "return".
>
> Danno
>
>
> On Tue, Jan 17, 2012 at 2:01 AM, Joseph Gentle  wrote:
>
>> I tried a simple test on jsperf to see if I can get a speedup from
>> float64 arrays:
>>
>> http://jsperf.com/float64-vs-float
>>
>> In this test, using float64 arrays end up slower than just using normal
>> variables. JSPerf tests are only run for a few seconds - is that long
>> enough for v8's optimizer to kick in properly? - Or is that benchmark
>> correct, and I'm just missing something?
>>
>> -J
>>
>>
>>
>> On Friday, 30 December 2011 07:33:14 UTC+11, Vyacheslav Egorov wrote:
>>>
>>> 2) There are fields mutated in the loop that contain floating point
>>> values. This currently requires boxing (and boxing requires heap
>>> allocation, heap allocation puts pressure on GC etc). I wonder if you can
>>> put typed arrays (e.g. Float64Array) to work here.
>>>
>>> --
>>> Vyacheslav Egorov
>>>
>>>  --
>> v8-users mailing list
>> v8-users@googlegroups.com
>> http://groups.google.com/group/v8-users
>>
>
>  --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users
>

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users

Re: [v8-users] BinaryOpStub_MUL_Alloc_HeapNumbers = ?

2011-12-30 Thread Vyacheslav Egorov
> I tracked down the deoptimization problem - a bug in
> another part of code was occasionally making the contact list the
> number zero instead of an empty list.

Oh, interesting. I did not pay much attention to the deoptimization because it
happened in tagged-to-i, so I assumed it was the first floating point value,
but apparently it was undefined.

> It seems like replacing the float value in a field with another float
> value shouldn't require an allocation. I would expect it to reuse the
> box of the previous field value..?

Unfortunately this is impossible because boxes are values and they can be
shared:

a.x = 1 / n;  // a.x will contain pointer to the boxed number (HeapNumber)
a.y = a.x;   // a.y points to the same HeapNumber as a.x.

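Extending the same example one step shows why the box cannot simply be
mutated in place:

a.x = 2 / n;  // must allocate a fresh HeapNumber; overwriting the shared
              // box in place would silently change a.y as well
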
--
Vyacheslav Egorov

On Fri, Dec 30, 2011 at 5:41 AM, Joseph Gentle  wrote:

> Cool, thanks. I tracked down the deoptimization problem - a bug in
> another part of code was occasionally making the contact list the
> number zero instead of an empty list. Fixing that gave me a ~15%
> performance boost. My library now performs <3x slower than the
> original C version, which is a huge improvement. I'd like to take that
> number down further still if I can though.
>
> I'll have a play with typed arrays soon. It seems like I should just
> replace some structures wholesale with float64 arrays. It'll be a bit
> nasty writing contacts[i * C_LENGTH + C_ROT_X] - but if I'm avoiding
> heap allocations, it'll be worth it. Since you can't guarantee that a
> number in javascript will remain constant, I imagine I'll want a
> compilation step which replaces all the constants with literals.
>
> It seems like replacing the float value in a field with another float
> value shouldn't require an allocation. I would expect it to reuse the
> box of the previous field value..?
>
> Thanks for the tip about inlining. I manually inlined a couple
> function calls earlier, but stopped when they stopped giving me
> performance gains. - Which makes sense considering applyImpulse was
> deoptimized. Once I've done everything I can think of, I'll take a
> good hard read through the source. Considering that I'm spending 35%
> of my time in that one function, its a pretty obvious place for
> optimization.
>
> Cheers
> Joseph
>
> On Fri, Dec 30, 2011 at 7:33 AM, Vyacheslav Egorov 
> wrote:
> > If you run with --print-code --code-comments you will see generated code
> (v8
> > should be built with objectprint=on disassembler=on) and you'll have to
> > locate bailout in the code and figure out why it happens.
> >
> > If it happens only once then the reason is probably that the function was
> > optimized before it got correct type feedback.
> >
> > I took a very quick look through 2nd version of code that V8 generates
> > for Arbiter.applyImpulse, without trying to understand what it does,
> just by
> > looking for inefficiencies. I don't see anything obvious but there are
> two
> > things:
> >
> > 1) V8 seems to exhaust inlining budget when trying to inline things into
> > applyImpulse. It leaves one call in the loop not inlined, which prevents
> > proper LICM and probably causes unnecessary boxing. If I relax inlining
> > budget by --nolimit-inlining I get 10% boost on the benchmark.
> >
> > 2) There are fields mutated in the loop that contain floating point
> values.
> > This currently requires boxing (and boxing requires heap allocation, heap
> > allocation puts pressure on GC etc). I wonder if you can put typed arrays
> > (e.g. Float64Array) to work here.
> >
> > --
> > Vyacheslav Egorov
> >
> >
> >
> > On Thu, Dec 29, 2011 at 4:18 AM, Joseph Gentle 
> wrote:
> >>
> >> Wow, thats awesome information. That would explain why the function in
> >> question is slow, and why inlining a couple of the function calls it
> makes
> >> decreases overall speed.
> >>
> >> How do I read the trace I get back? I'm getting this:
> >>
> >>  DEOPT: Arbiter.applyImpulse at bailout #49, address 0x0, frame size
> >> 264
> >> [deoptimizing: begin 0x1b70ac6a67f1 Arbiter.applyImpulse @49]
> >>   translating Arbiter.applyImpulse => node=432, height=216
> >> 0x7fff6f711630: [top + 248] <- 0x3ebe7f33eb9 ; [esp + 296]
> >> 0x3ebe7f33eb9 
> >> 0x7fff6f711628: [top + 240] <- 0x2457afa6b4ad ; caller's pc
> >> 0x7fff6f711620: [top + 232] <- 0x7fff6f7116c0 ; caller's fp
> >> 
> >>
> >> I assume address 0x0 means something the function is doing is hitting a
> >> null o

Re: [v8-users] BinaryOpStub_MUL_Alloc_HeapNumbers = ?

2011-12-29 Thread Vyacheslav Egorov
If you run with --print-code --code-comments you will see the generated code
(V8 should be built with objectprint=on disassembler=on) and you'll have to
locate the bailout in the code and figure out why it happens.

If it happens only once then the reason is probably that the function was
optimized before it got correct type feedback.

I took a very quick look through the 2nd version of the code that V8
generates for Arbiter.applyImpulse, without trying to understand what it does, just
by looking for inefficiencies. I don't see anything obvious but there are
two things:

1) V8 seems to exhaust inlining budget when trying to inline things into
applyImpulse. It leaves one call in the loop not inlined, which prevents
proper LICM and probably causes unnecessary boxing. If I relax inlining
budget by --nolimit-inlining I get 10% boost on the benchmark.

2) There are fields mutated in the loop that contain floating point values.
This currently requires boxing (and boxing requires heap allocation, heap
allocation puts pressure on GC etc). I wonder if you can put typed arrays
(e.g. Float64Array) to work here.

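A sketch of what (2) could look like, reusing the flat indexing scheme
Joseph describes elsewhere in the thread; maxContacts and the index names
are hypothetical:

// Hot floating-point state lives in a Float64Array instead of object
// fields, so stores write unboxed doubles (no HeapNumber allocation).
var C_ROT_X = 0, C_ROT_Y = 1, C_FIELDS = 2;
var contacts = new Float64Array(maxContacts * C_FIELDS);
function setRotation(i, x, y) {
  contacts[i * C_FIELDS + C_ROT_X] = x;
  contacts[i * C_FIELDS + C_ROT_Y] = y;
}
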
--
Vyacheslav Egorov


On Thu, Dec 29, 2011 at 4:18 AM, Joseph Gentle  wrote:

> Wow, thats awesome information. That would explain why the function in
> question is slow, and why inlining a couple of the function calls it makes
> decreases overall speed.
>
> How do I read the trace I get back? I'm getting this:
>
>  DEOPT: Arbiter.applyImpulse at bailout #49, address 0x0, frame size
> 264
> [deoptimizing: begin 0x1b70ac6a67f1 Arbiter.applyImpulse @49]
>   translating Arbiter.applyImpulse => node=432, height=216
> 0x7fff6f711630: [top + 248] <- 0x3ebe7f33eb9 ; [esp + 296]
> 0x3ebe7f33eb9 
> 0x7fff6f711628: [top + 240] <- 0x2457afa6b4ad ; caller's pc
> 0x7fff6f711620: [top + 232] <- 0x7fff6f7116c0 ; caller's fp
> 
>
> I assume address 0x0 means something the function is doing is hitting a
> null object. Does bailout #49 mean anything? The function is (later)
> repeatedly optimized and deoptimized again with bailout #8. How do I track
> these down?
>
> -J
>
>
> On Monday, 26 December 2011 23:56:31 UTC+11, Vyacheslav Egorov wrote:
>
>> This is a multiplication stub that is usually called from non-optimized
>> code (or optimized code that could not be appropriately specialized).
>> Non-optimizing compiler does not try to infer appropriate representation
>> for local variable so floating point numbers always get boxed.
>>
>> If this stub is high on the profile then it usually means that optimizing
>> compiler either failed to optimize hot function which does a lot of
>> multiplications or it failed to infer an optimal representation for some
>> reason.
>>
>> Bottom up profile should show which functions invoke the stub. Then you
>> should inspect --trace-opt --trace-bailout --trace-deopt output to figure
>> out what optimizer does with those function.
>>
>> --
>> Vyacheslav Egorov
>>
>> On Mon, Dec 26, 2011 at 7:00 AM, Joseph Gentle  wrote:
>>
>>> What does it mean when I see BinaryOpStub_MUL_Alloc_**HeapNumbers in my
>>> profile? Does that mean the compiler is putting local number variables on
>>> the heap? Why would it do that?
>>>
>>> -J
>>>
>>> --
>>> v8-users mailing list
>>> v8-u...@googlegroups.com
>>> http://groups.google.com/**group/v8-users<http://groups.google.com/group/v8-users>
>>
>>
>>  --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users
>

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users

Re: [v8-users] BinaryOpStub_MUL_Alloc_HeapNumbers = ?

2011-12-26 Thread Vyacheslav Egorov
This is a multiplication stub that is usually called from non-optimized
code (or optimized code that could not be appropriately specialized).
The non-optimizing compiler does not try to infer an appropriate
representation for local variables, so floating point numbers always get boxed.

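A hedged illustration of code whose unoptimized version would lean on this
stub:

// While unoptimized, every double result of "values[i] * f" is boxed
// into a fresh HeapNumber before being stored.
function scaleAll(values, f) {
  var out = new Array(values.length);
  for (var i = 0; i < values.length; i++) {
    out[i] = values[i] * f;
  }
  return out;
}
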
If this stub is high in the profile then it usually means that the
optimizing compiler either failed to optimize a hot function which does a
lot of multiplications or failed to infer an optimal representation for
some reason.

A bottom-up profile should show which functions invoke the stub. Then you
should inspect --trace-opt --trace-bailout --trace-deopt output to figure
out what the optimizer does with those functions.

--
Vyacheslav Egorov

On Mon, Dec 26, 2011 at 7:00 AM, Joseph Gentle  wrote:

> What does it mean when I see BinaryOpStub_MUL_Alloc_HeapNumbers in my
> profile? Does that mean the compiler is putting local number variables on
> the heap? Why would it do that?
>
> -J
>
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users

Re: [v8-users] Re: ASSERT(state_ != NEAR_DEATH);

2011-12-26 Thread Vyacheslav Egorov
Your weak callback (handle_weak) is empty and does not follow the contract.

Put object.Dispose(); there.

--
Vyacheslav Egorov

On Mon, Dec 26, 2011 at 12:55 PM, D C  wrote:

> Hi:
>   Thanks for you replay, I modify my source according to your words,
>   Thanks for your reply, I modified my source according to your words,
> but it still doesn't seem to work.
>
> void javascript_ctx_impl::call_obj_func (v8::Persistent<Object>
> object, const char* method, int argc, Handle<Value> argv[])
> {
>     HandleScope handle_scope;
>
>     Local<Value> cb = object->Get(String::New(method));
>
>     if (!cb->IsFunction()) {
>         std::cerr << "method = " << method << std::endl;
>         return;
>     }
>
>     Local<Function> do_action = Local<Function>::Cast(cb);
>
>     TryCatch try_catch;
> /*** ASSERT HERE **/
> /*** ASSERT HERE **/
> /*** ASSERT HERE **/
>     do_action->Call(object, argc, argv);
> /*** ASSERT HERE **/
> /*** ASSERT HERE **/
> /*** ASSERT HERE **/
>     if (try_catch.HasCaught()) {
>         v8::Local<Message> msg = try_catch.Message();
>         if (!msg->GetScriptResourceName().IsEmpty() &&
>             !msg->GetScriptResourceName()->IsUndefined())
>         {
>             v8::String::AsciiValue name (msg->GetScriptResourceName());
>             std::cerr << *name << std::endl;
>         }
>         else {
>             std::cerr << "call_obj_func: runtime error." << std::endl;
>         }
>     }
> }
>
> template 
> class write_handle : public handle_impl_base
> {
>public:
>     write_handle (boost::asio::io_service& io,
>                   v8::Persistent<Object> local,
>                   v8::Persistent<Object> h)
>         : handle_impl_base (io), handle_ (h), session_ (local)
>     {
>     }
>     public:
>     void operator () (const boost::system::error_code& ec,
>                       std::size_t bytes_transferred)
>     {
>         HandleScope handle_scope;
>         if (!ec) {
>             Handle<Value> args[3] = {
>                 js::instance ().safe_new_value (session_),
>                 js::instance ().safe_new_value ("TRUE"),
>                 js::instance ().safe_new_value (bytes_transferred)
>             };
>
>             js::instance ().call_obj_func (handle_, "onHandle", 3, args);
>         }
>         else {
>             Handle<Value> args[3] = {
>                 js::instance ().safe_new_value (session_),
>                 js::instance ().safe_new_value ("FALSE"),
>                 js::instance ().safe_new_value (bytes_transferred)
>             };
>             js::instance ().call_obj_func (handle_, "onHandle", 3, args);
>         }
>         handle_.Dispose (); session_.Dispose ();
>     }
>     static void handle_weak (Persistent<Value> object, void* parameter)
>     {
>     }
>     private:
>     v8::Persistent<Object> handle_;
>     v8::Persistent<Object> session_;
> };
>
> v8::Handle<Value> js_asio_socket_ip_tcp_function::async_write
> (const v8::Arguments& args)
> {
>     HandleScope hScope;
>     js_asio_socket_ip_tcp_function* native_obj =
>         unwrap(args.This());
>
>     if (args.Length () < 4) {
>         return ThrowException (Exception::TypeError(String::New(
>             "async_resolve need 4 parameters."))
>         );
>     }
>     /** Argument check here */
>     js_stream_function* s = unwrap (args[1]->ToObject ());
>     if (s == NULL) {
>         return ThrowException (Exception::TypeError(String::New(
>             "async_resolve parameter 2 error."))
>         );
>     }
>
>     v8::Local<Object> p0 = args[0]->ToObject ();
>     v8::Local<Uint32> p2 = args[2]->ToUint32 ();
>
>     v8::Persistent<Object> handle;
>     v8::Persistent<Object> sessin;
>
>     if (args[3]->ToObject ()->IsFunction ()) {
>         v8::Local<Function> f = v8::Local<Function>::Cast(args[3]->ToObject());
>         handle = v8::Persistent<Object>::New(f);
>     }
>     else {
>         handle = v8::Persistent<Object>::New (args[3]->ToObject ());
>     }
>
>     handle.MakeWeak (NULL, write_handle::handle_weak);
>     handle.MarkIndependent ();
>
>     sessin = v8::Persistent<Object>::New (p0);
>boost::asio::async_write (*(native_obj->socket_),
>boost::asio::buffer (s->get (), p2->Value ()),
>boost

Re: [v8-users] ASSERT(state_ != NEAR_DEATH);

2011-12-25 Thread Vyacheslav Egorov
Every weak callback should either revive (via ClearWeak or MakeWeak) or
destroy (via Dispose) the handle for which it was called. This contract is
described in v8.h, see the comment above WeakReferenceCallback definition.

You have a callback that does not follow this contract.

--
Vyacheslav Egorov
On Dec 25, 2011 3:41 PM, "D C"  wrote:

> Hi all:
> I've been stuck here for about two weeks.
> Here is my stack trace.
> =
>Agent.exe!v8::internal::OS::DebugBreak()  Line 930  C++
>Agent.exe!v8::internal::OS::Abort()  Line 925   C++
>Agent.exe!V8_Fatal(const char * file=0x00d8fcd8, int line=237, const
> char * format=0x00d7bd18, ...)  Line 59 C++
>Agent.exe!CheckHelper(const char * file=0x00d8fcd8, int line=237,
> const char * source=0x00d8fe34, bool condition=false)  Line 60 + 0x16
> bytes   C++
> >
> Agent.exe!v8::internal::GlobalHandles::Node::PostGarbageCollectionProcessing(v8::internal::Isolate
> * isolate=0x014b0068, v8::internal::GlobalHandles *
> global_handles=0x003ef478)  Line 237 + 0x27 bytesC++
>Agent.exe!
>
> v8::internal::GlobalHandles::PostGarbageCollectionProcessing(v8::internal::GarbageCollector
> collector=SCAVENGER)  Line 540 + 0x12 bytes C++
>Agent.exe!
> v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector
> collector=SCAVENGER, v8::internal::GCTracer * tracer=0x0012e0b4)  Line
> 822 + 0x15 bytesC++
>Agent.exe!
> v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace
> space=NEW_SPACE, v8::internal::GarbageCollector collector=SCAVENGER)
> Line 518 + 0x16 bytes   C++
>Agent.exe!
> v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace
> space=NEW_SPACE)  Line 443  C++
>Agent.exe!
> v8::internal::Factory::NewStringFromUtf8(v8::internal::Vector const > string={...}, v8::internal::PretenureFlag
> pretenure=NOT_TENURED)  Line 186 + 0xea bytes   C++
>Agent.exe!v8::String::New(const char * data=0x00d68254, int
> length=8)  Line 4410C++
>Agent.exe!javascript_ctx_impl::call_obj_func(v8::Handle
> object={...}, const char * method=0x00d68254, int argc=3,
> v8::Handle * argv=0x0012e4b0)  Line 111 + 0x12 bytes C++
>Agent.exe!read_handle::operator()(const
> boost::system::error_code & ec={...}, unsigned int
> bytes_transferred=661)  Line 226C++
> =
> It seems like each time the GC runs, String::New asserts. Who can help
> me with that?
>
>
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users
>

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users

Re: [v8-users] Fast Property Access in V8 JavaScript Engine

2011-12-17 Thread Vyacheslav Egorov
Yes, new Thing(true) and new Thing(false) will produce objects with
different hidden-classes (though all new Thing(x) for a fixed x will have
the same hidden class).

Sites that see both kinds of Things will go megamorphic (become slower than
monomorphic, yet faster than just going to the runtime). The optimizer will
try to deal with polymorphism by inlining a (limited) number of the
polymorphic cases.

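Spelled out as a sketch:

function Thing(x) {
  if (x) this.whatever = true;
  this.common = 0;
}
var a = new Thing(true);   // one hidden class
var b = new Thing(false);  // a different hidden class
function getCommon(t) { return t.common; }
getCommon(a);  // property-load site sees one map: monomorphic
getCommon(b);  // now it sees two: polymorphic, megamorphic if more follow
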
--
Vyacheslav Egorov

On Sat, Dec 17, 2011 at 8:51 PM, Marcel Laverdet wrote:

> I only have a loose understanding of v8's hidden classes and optimizer,
> but there's something I've always wondered about hidden classes that I
> figured I could hijack this thread for..
>
> How do hidden classes handle this case?
>
> function Thing(flag) {
>   if (flag) {
> this.whatever = true;
>   }
>   ...
> }
>
> From what I understand (basically just watching those videos of Lars Bak
> talk about v8 internals) it seems in this case, instances of Thing would
> have divergent hidden classes right from the very start. This would then
> ripple out into the optimizer leading to code that couldn't be optimized
> with fast property access.
>
> It seems like this would be a fairly common case.. is there something I'm
> missing that makes this not as bad as it seems?
>
>
> On Sat, Dec 17, 2011 at 10:19 AM, Vyacheslav Egorov 
> wrote:
>
>> It should be noted that hidden classes have at least two advantages:
>>
>> 1) they go beyond simple index to property mappings and can capture many
>> different aspects (types of backing stores, constant function properties,
>> type specific optimized construction stubs, estimated number of properties
>> that will be added to an object).
>>
>> 2) allow to optimize memory usage --- dictionary-based backing stores are
>> much less compact.
>>
>> Your suggestion of course is viable (and well known,
>> e.g. similar approach is used for example by LuaJIT2) optimization that can
>> be used to make monomorphic dictionary lookups faster.
>>
>> However it cannot replace hidden classes entirely as noted above.
>>
>> --
>> Vyacheslav Egorov
>>
>>
>> On Sat, Dec 17, 2011 at 11:31 AM, Vladimir Nesterovsky <
>> vladi...@nesterovsky-bros.com> wrote:
>>
>>> Yesterday I've seen an article about some design principles of V8
>>> JavaScript Engine (http://code.google.com/apis/v8/design.html). In
>>> particular V8 engine optimizes property access using "dynamically
>>> created hidden classes", they are derived when a new property is
>>> created (deleted) on the object.
>>>
>>> We would like to suggest a slightly different strategy, which exploits
>>> the cache matches, and does not require a dynamic hidden classes.
>>>
>>> Consider an implementation data type with following characteristics:
>>>
>>> a) object is implemented as a hash map of property id to property
>>> value: Map<ID, Value>;
>>> b) it stores data as an array of pairs and can be accessed directly:
>>> Pair<ID, Value> values[];
>>> c) property index can be acquired with a method: int index(ID);
>>>
>>> A pseudo code for the property access looks like this:
>>>
>>> pair = object.values[cachedIndex];
>>>
>>> if (pair.ID == propertyID)
>>> {
>>>   value = pair.Value;
>>> }
>>> else
>>> {
>>>  // Cache miss.
>>>  cachedIndex = object.index(propertyID);
>>>  value = object.values[cachedIndex].Value;
>>> }
>>>
>>> This approach brings us back to dictionary like implementation but
>>> with important optimization of array speed access when property index
>>> is cached, and with no dynamic hidden classes.
>>>
>>> See also
>>> http://www.nesterovsky-bros.com/weblog/2011/12/17/FastPropertyAccessInV8JavaScriptEngine.aspx
>>> --
>>> Vladimir Nesterovsky
>>>
>>> --
>>> v8-users mailing list
>>> v8-users@googlegroups.com
>>> http://groups.google.com/group/v8-users
>>>
>>
>>  --
>> v8-users mailing list
>> v8-users@googlegroups.com
>> http://groups.google.com/group/v8-users
>>
>
>  --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users
>

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users

Re: [v8-users] Fast Property Access in V8 JavaScript Engine

2011-12-17 Thread Vyacheslav Egorov
It should be noted that hidden classes have at least two advantages:

1) they go beyond simple index to property mappings and can capture many
different aspects (types of backing stores, constant function properties,
type specific optimized construction stubs, estimated number of properties
that will be added to an object).

2) allow to optimize memory usage --- dictionary-based backing stores are
much less compact.

Your suggestion is of course a viable (and well-known; a similar approach
is used, for example, by LuaJIT2) optimization that can be used to make
monomorphic dictionary lookups faster.

However it cannot replace hidden classes entirely as noted above.

--
Vyacheslav Egorov

On Sat, Dec 17, 2011 at 11:31 AM, Vladimir Nesterovsky <
vladi...@nesterovsky-bros.com> wrote:

> Yesterday I've seen an article about some design principles of V8
> JavaScript Engine (http://code.google.com/apis/v8/design.html). In
> particular V8 engine optimizes property access using "dynamically
> created hidden classes", they are derived when a new property is
> created (deleted) on the object.
>
> We would like to suggest a slightly different strategy, which exploits
> the cache matches, and does not require a dynamic hidden classes.
>
> Consider an implementation data type with following characteristics:
>
> a) the object is implemented as a hash map of property id to property
> value: Map<ID, Value>;
> b) it stores data as an array of pairs that can be accessed directly:
> Pair<ID, Value> values[];
> c) a property's index can be acquired with a method: int index(ID);
>
> A pseudo code for the property access looks like this:
>
> pair = object.values[cachedIndex];
>
> if (pair.ID == propertyID)
> {
>   value = pair.Value;
> }
> else
> {
>  // Cache miss.
>  cachedIndex = object.index(propertyID);
>  value = object.values[cachedIndex].Value;
> }
>
> This approach brings us back to a dictionary-like implementation, but
> with the important optimization of array-speed access when the property
> index is cached, and with no dynamic hidden classes.
>
> See also
> http://www.nesterovsky-bros.com/weblog/2011/12/17/FastPropertyAccessInV8JavaScriptEngine.aspx
> --
> Vladimir Nesterovsky
>
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users
>

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users

Re: [v8-users] Hint V8?

2011-12-16 Thread Vyacheslav Egorov
It will be optimized when V8 decides that it's hot (or maybe when V8
inlines it into some other function, etc.). "Hotness" is estimated based on
many factors, e.g. statistical profiling. You can use --trace-opt to see
which functions are being optimized by V8.

Why do you want to ensure that this function is optimized? If it is
frequently called and takes a non-trivial amount of time to execute --- it
will be optimized.

--
Vyacheslav Egorov

On Fri, Dec 16, 2011 at 9:11 PM, Egor Egorov  wrote:

>
> Yes, but how many calls do I have to fire until this optimisation takes
> place?
>
> On Friday, December 16, 2011 1:51:23 PM UTC+2, Vyacheslav Egorov wrote:
> []
>
>  call it with numbers only V8 will in the end generate code that is
>> specialized appropriately for the number case.
>>
>>  --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users
>

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users

Re: [v8-users] Hint V8?

2011-12-16 Thread Vyacheslav Egorov
You can't hint it explicitly.

But V8's optimizing compiler gathers type feedback from inline caches in
the non-optimized code.

Essentially this means that the program itself gives hints to V8 while it
executes. E.g., if you write a function like

function add(x, y) { return x + y; }

and you call it with numbers only, V8 will in the end generate code
that is specialized appropriately for the number case.
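
Conceptually (a simplified C++ sketch, not V8's actual data structures),
each such call site accumulates feedback like this:

// Hypothetical feedback cell: it stays monomorphic while only one
// operand kind is observed; the optimizing compiler then specializes
// code for monomorphic sites.
enum OperandKind { kNone, kSmi, kDouble, kGeneric };

struct FeedbackCell {
  OperandKind seen;
  FeedbackCell() : seen(kNone) {}
  void Record(OperandKind k) {
    if (seen == kNone) seen = k;          // first observation
    else if (seen != k) seen = kGeneric;  // polymorphic: give up
  }
};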

--
Vyacheslav Egorov

On Fri, Dec 16, 2011 at 12:23 PM, Egor Egorov  wrote:

> Let's suppose I have a function that expects an argument of a certain
> type, a number.
>
> What if I somehow hint the V8 compiler that here I expect only a number so
> that V8 optimises accordingly without guesses? Would such a hint make sense
> for V8 optimisation process?
>
>  --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users

Re: [v8-users] [couldn't find pc offset for node=0]

2011-10-12 Thread Vyacheslav Egorov
> Would this be a valuable bug report as is?

Yes, please file a bug with a repro so we can start looking.

--
Vyacheslav Egorov


On Wed, Oct 12, 2011 at 2:25 AM, Marcel Laverdet  wrote:
> I've managed to simplify the repro case a lot, removing the dependency on
> NodeJS and other externalities. The error should reproduce easily enough on
> other Lion computers, at least. It still requires the node-fibers library
> because the pthread invocations that are happening are difficult to
> detangle.
> Would this be a valuable bug report as is? I'm at a loss at this point.
> Also in addition to the failed assertion in the first message, sometimes I'm
> getting this:
> #
> # Fatal error in src/objects-inl.h, line 1652
> # CHECK(index >= 0 && index < this->length()) failed
> #
> On Tue, Oct 11, 2011 at 3:30 PM, Vyacheslav Egorov 
> wrote:
>>
>> Marcel,
>>
>> It's really hard to say anything without reproduction. It might be a
>> genuine bug in the deoptimizer.
>>
>> --
>> Vyacheslav Egorov
>>
>>
>>
>> On Tue, Oct 11, 2011 at 9:53 PM, Marcel Laverdet 
>> wrote:
>> > Hey there I'm trying to track down an issue with my v8 application (on
>> > NodeJS). This is on OS X Lion and x64 v8. I've noticed this error on v8
>> > 3.6.6 and also bleeding_edge.
>> > The issue is that every now and then I'm seeing this:
>> > [couldn't find pc offset for node=0]
>> > [method: Future.wait]
>> > [source:
>> > function () { Future.wait(this); return this.get(); }
>> > Bus error: 10
>> > This error seems to come from deoptimizer.cc and reproducing the error
>> > is
>> > difficult. The only thing strange about my application is that I'm using
>> > node-fibers [https://github.com/laverdet/node-fibers], which adds
>> > coroutine
>> > support to v8. Each coroutine is actually just a pthread and
>> > pthread_cond_signal is used to simulate coroutines** which play nicely
>> > with
>> > v8::Locker. So it seems that this issue may be related to threads,
>> > potentially 100's of threads using the same v8 isolate (locking and
>> > unlocking where appropriate).
>> > If I run this instead with a debug build of v8 I get this error:
>> > #
>> > # Fatal error in /Users/marcel/code/node/deps/v8/src/objects-inl.h, line
>> > 2996
>> > # CHECK(kind() == OPTIMIZED_FUNCTION) failed
>> > #
>> > I have NOT been able to reproduce this problem on an ia32 build with the
>> > same application and machine; it seems to be just x64.
>> > I'm wondering where I should start looking from here. Is this a bug in
>> > v8,
>> > should I work on distilling a test case for you guys?
>> > ** Actually the default version of node-fibers uses some not-supported
>> > v8
>> > hacking to get true coroutines, but you can modify the build to use
>> > pthreads
>> > instead which is totally within the confines of v8's API. All my testing
>> > was
>> > done with the pthread_cond_signal version of node-fibers.
>> >
>> > --
>> > v8-users mailing list
>> > v8-users@googlegroups.com
>> > http://groups.google.com/group/v8-users
>>
>> --
>> v8-users mailing list
>> v8-users@googlegroups.com
>> http://groups.google.com/group/v8-users
>
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] [couldn't find pc offset for node=0]

2011-10-11 Thread Vyacheslav Egorov
Marcel,

It's really hard to say anything without reproduction. It might be a
genuine bug in the deoptimizer.

--
Vyacheslav Egorov



On Tue, Oct 11, 2011 at 9:53 PM, Marcel Laverdet  wrote:
> Hey there I'm trying to track down an issue with my v8 application (on
> NodeJS). This is on OS X Lion and x64 v8. I've noticed this error on v8
> 3.6.6 and also bleeding_edge.
> The issue is that every now and then I'm seeing this:
> [couldn't find pc offset for node=0]
> [method: Future.wait]
> [source:
> function () { Future.wait(this); return this.get(); }
> Bus error: 10
> This error seems to come from deoptimizer.cc and reproducing the error is
> difficult. The only thing strange about my application is that I'm using
> node-fibers [https://github.com/laverdet/node-fibers], which adds coroutine
> support to v8. Each coroutine is actually just a pthread and
> pthread_cond_signal is used to simulate coroutines** which play nicely with
> v8::Locker. So it seems that this issue may be related to threads,
> potentially 100's of threads using the same v8 isolate (locking and
> unlocking where appropriate).
> If I run this instead with a debug build of v8 I get this error:
> #
> # Fatal error in /Users/marcel/code/node/deps/v8/src/objects-inl.h, line
> 2996
> # CHECK(kind() == OPTIMIZED_FUNCTION) failed
> #
> I have NOT been able to reproduce this problem on an ia32 build with the
> same application and machine; it seems to be just x64.
> I'm wondering where I should start looking from here. Is this a bug in v8,
> should I work on distilling a test case for you guys?
> ** Actually the default version of node-fibers uses some not-supported v8
> hacking to get true coroutines, but you can modify the build to use pthreads
> instead which is totally within the confines of v8's API. All my testing was
> done with the pthread_cond_signal version of node-fibers.
>
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] Re: Assertion in Isolate::Current()

2011-10-10 Thread Vyacheslav Egorov
You have to use Isolate::Enter or Isolate::Scope.

If you want to use the default isolate, just acquire the isolate lock with
a Locker.

v8.h includes very detailed comments. Please read them for further
information:

http://code.google.com/p/v8/source/browse/trunk/include/v8.h#3510
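
For example (a sketch against the 3.x-era API discussed in this thread;
the thread wrapper is whatever your code already uses):

#include <v8.h>

// Worker-thread body: lock and enter the isolate before creating any
// HandleScope; both scope objects undo themselves on return.
void WorkerThreadBody(v8::Isolate* isolate) {
  v8::Locker locker(isolate);                 // serialize access
  v8::Isolate::Scope isolate_scope(isolate);  // enter on this thread
  v8::HandleScope scope;                      // no longer asserts
  // ... create a context and run scripts here ...
}

For the default isolate, a plain "v8::Locker locker;" before the
HandleScope achieves the same effect.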

--
Vyacheslav Egorov

On Mon, Oct 10, 2011 at 1:16 PM, Adrian Basheer wrote:

> Hi,
>
> I am afraid I do not know how to enter an isolate (I don't remember it
> coming up in the v8 tutorial)...
>
> Can you help me please?
>
> Thanks!
>
> Adrian.
>
>
> On Mon, Oct 10, 2011 at 2:09 PM, Vyacheslav Egorov 
> wrote:
>
>> You cannot declare a handle scope in a thread that does not yet own a V8
>> Isolate. You should enter an Isolate first.
>>
>> --
>> Vyacheslav Egorov
>>
>>
>> On Mon, Oct 10, 2011 at 1:02 PM, Adrian  wrote:
>>
>>> Hi,
>>>
>>> This is the exact, complete program, that is failing..
>>>
>>> class TestThread : public Thread::Thread
>>>
>>> {
>>>
>>> public:
>>>
>>>   TestThread(){};
>>>
>>>   void run()
>>>
>>>   {
>>>
>>> v8::HandleScope scope;  // <-- This fails
>>>
>>>   }
>>>
>>> };
>>>
>>>
>>> int main(int argc, const char* argv[])
>>>
>>> {
>>>
>>>   v8::HandleScope scope; //<--This works
>>>
>>>   TestThread mythread;
>>>
>>>   mythread.start();
>>>
>>>   mythread.waitDone();
>>>
>>> }
>>>
>>>
>>>  --
>>> v8-users mailing list
>>> v8-users@googlegroups.com
>>> http://groups.google.com/group/v8-users
>>>
>>
>>  --
>> v8-users mailing list
>> v8-users@googlegroups.com
>> http://groups.google.com/group/v8-users
>>
>
>  --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users
>

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users

Re: [v8-users] Re: Assertion in Isolate::Current()

2011-10-10 Thread Vyacheslav Egorov
You cannot declare a handle scope in a thread that does not yet own a V8
Isolate. You should enter an Isolate first.

--
Vyacheslav Egorov

On Mon, Oct 10, 2011 at 1:02 PM, Adrian  wrote:

> Hi,
>
> This is the exact, complete program, that is failing..
>
> class TestThread : public Thread::Thread
>
> {
>
> public:
>
>   TestThread(){};
>
>   void run()
>
>   {
>
> v8::HandleScope scope;  // <-- This fails
>
>   }
>
> };
>
>
> int main(int argc, const char* argv[])
>
> {
>
>   v8::HandleScope scope; //<--This works
>
>   TestThread mythread;
>
>   mythread.start();
>
>   mythread.waitDone();
>
> }
>
>
>  --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users
>

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users

Re: [v8-users] Re: Large unexpected Code FreeStoreAllocationPolicy resize

2011-09-02 Thread Vyacheslav Egorov
Hi,

I have found only one instantiation of the List template with
T=Code* and P=FreeStoreAllocationPolicy; its type alias is CodeList
(see list.h).

The only instance of the CodeList I could find is allocated on the
stack (see KeyedIC::ComputeStub) and should not leak.

But this interpretation does not fit your trace. So I think we can
assume that the compiler merged some template methods together and it's
either

  List<Handle<Context> > entered_contexts_; or List<Context*> saved_contexts_;

that get resized when you perform Context::Enter.

My guess would be that you have unbalanced Context::Enter and
Context::Exit calls, so your stacks of entered/saved contexts grow without
bound.

Please check that they are balanced. You can also add some debug
prints into Context::Enter to see the length of the entered_contexts_ stack.
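
For example, a minimal sketch of the balanced pattern using the RAII
helper from the public API, so an Exit cannot be forgotten:

#include <v8.h>

// Enter on construction, Exit on destruction -- even on early returns --
// so the entered/saved context stacks cannot grow without bound.
void CallIntoV8(v8::Persistent<v8::Context> context,
                v8::Handle<v8::Function> fn,
                v8::Handle<v8::Object> recv) {
  v8::HandleScope scope;
  v8::Context::Scope context_scope(context);
  fn->Call(recv, 0, NULL);
}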

--
Vyacheslav Egorov


On Fri, Sep 2, 2011 at 4:09 PM, Stuart  wrote:
> The system is still running and has since allocated 2 x 150MB more
> using
> v8::internal::List::Resize
>
> The stack profile is similar; calling back into V8 using stored
> function and parameter handles.
>
> Any clues? Am I even correct in assuming this has been allocated for
> code and not data?
>
> Stuart.
>
>
>> v8::internal::List<v8::internal::Code*,v8::internal::FreeStoreAllocationPolicy>::Resize
>
> On Aug 31, 10:46 pm, Stuart  wrote:
>> I need some help understanding this call stack.
>>
>> The resize results in a 47MB allocation that never gets freed. This
>> happened twice during a 6 hour run.
>>
>> StackTrace Content
>>  v8director!v8::internal::List<v8::internal::Code*,v8::internal::FreeStoreAllocationPolicy>::Resize+22 bytes, 0xE6BF76
>>  v8director!v8::Context::Enter+199 bytes, 0xE1F2D7
>>  v8director!`anonymous namespace'::DecoupledCall::call
>>  v8director!
>> boost::asio::detail::completion_handler<...>::mf0<...>
>> DecoupledCall is making a synchronous (no locks) call into V8 using
>> previously stored persistent handles to a function and parameters. I
>> have seen this leak occur twice with this stack, but is exceptionally
>> rare; two times over thousands and thousands of iterations. I would
>> love to learn that it is related to this stack, but I suspect it's a
>> coincidence.
>>
>> The parameter v8::internal::Code suggests this is an allocation for
>> code. Why would V8 suddenly need 50MB of code storage after running
>> the same thing for 6 hours?
>>
>> What am I seeing? Could a closure cause this? Would any logging help?
>>
>> Stuart.
>
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users
>

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] Re: V8 Garbage collection in multitasking mode

2011-09-01 Thread Vyacheslav Egorov
> Would it produce weak references?  I think not, right?

No. When I am talking about weak references I mean v8 API Handles that
were made weak with the MakeWeak method.
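
For reference, a minimal sketch with the 3.x-era API (the helper name is
illustrative):

#include <v8.h>

// The callback fires when the GC finds the object only weakly
// reachable; it is responsible for disposing the handle.
void WeakCallback(v8::Persistent<v8::Value> object, void* parameter) {
  object.Dispose();
  object.Clear();
}

v8::Persistent<v8::Object> MakeWeakRef(v8::Handle<v8::Object> obj) {
  v8::Persistent<v8::Object> handle = v8::Persistent<v8::Object>::New(obj);
  handle.MakeWeak(NULL, WeakCallback);  // NULL: no extra parameter
  return handle;
}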

--
Vyacheslav Egorov


On Thu, Sep 1, 2011 at 11:09 PM, Dennis H  wrote:
> Hi Vyacheslav,
>
> I guess you are right: the additional context is not really needed there.
> Let me try to remove it and see if it improves GC.
>
> I am not really using any weakly reachable objects explicitly. I am not
> sure, though, whether they can potentially be created in JavaScript.
> My test case produces a GC-collectable string in an endless loop...
>
> var task = require("task"),
>    x  = 1;
> for (var i = 0; i < 10 ; i++) {
>    var t = task.createTask(function() {
>       var r = "";
>       while (1)   {
>          r = "BB " + x;
>       }
>    });
>    console.log("Task created  "  + i);
>    t.run();
> }
>
> Would it produce weak references?  I think not, right?
>
> Thanks,
> Denis
>
> On Sep 1, 12:47 pm, Vyacheslav Egorov  wrote:
>> > 1. Can I transfer an function object from script compiled in global
>> > context and main thread Isolate into the other thread's Isolate and
>> > execute there?
>>
>> No. You have to serialize it somehow (e.g. JSON) in one isolate and
>> deserialize it in another one.
>>
>> > 2. If not, Is what I am doing a valid v8 API usage at all? What could
>> > be other ways to trigger GC more often?
>>
>> Seems valid to me. But I am a bit confused about why forcing GC manually
>> helps so much.
>>
>> It's not completely clear, though, why you need a separate context
>> for your task: the function you pass as a value (not as source)
>> already has a context attached and will use it, not the context you
>> have created.
>>
>> Another thing: do you use weak handles extensively? V8 might be
>> overflowed by weakly reachable objects. That might cost a lot
>> especially if those weakly reachable objects hold onto contexts which
>> you seem to allocate for each task.
>>
>> --
>> Vyacheslav Egorov
>>
>> On Thu, Sep 1, 2011 at 8:34 PM, Dennis H  wrote:
>> > Hi Vyacheslav,
>>
>> > I see your point.
>>
>> > I don't use separate Isolate for each thread and all tasks use
>> > function objects compiled in the global Context/Isolate.
>> > This might be a trouble of course, but I was not sure there is a good
>> > way to create a function object
>> > in main Context and then transfer it to another Isolate/Context.
>>
>> > Essentially I wanted to achieve following usage:
>>
>> > // thread function
>> > function foo () {}
>>
>> > // starts execution in a separate thread
>> > task(foo);
>>
>> > I do make them run exclusively (at least I hope I do) by using
>>
>> >    Locker locker;
>> >    Locker::StartPreemption(preemption_interval);
>> >    HandleScope scope;
>>
>> >    context = Context::New();
>> >    context->Enter();
>>
>> > The code is in a fork from nodejs.You can see the code here:
>> >https://github.com/bfrancojr/node/blob/node-task/src/node_task.cc
>>
>> > The funny part is that if I create just a couple of threads the memory
>> > is stable, while I call V8::IdleNotification()  every 5 sec.
>>
>> > If I increase the number of tasks to 100 it will start to grow and
>> > the process will run out memory. However If I start to call
>> > V8::IdleNotification() every 0.3 seconds it fixes it again.
>>
>> > I've got a feeling the v8 treats the GC as one of the tasks, meaning
>> > that the more tasks I create the more time is allocated to 'garbage
>> > producers'
>> > and the smaller is a relative portion of time for GC. May be I am
>> > wrong.
>>
>> > So the questions are:
>>
>> > 1. Can I transfer an function object from script compiled in global
>> > context and main thread Isolate into the other thread's Isolate and
>> > execute there?
>> > 2. If not, Is what I am doing a valid v8 API usage at all? What could
>> > be other ways to trigger GC more often?
>>
>> > Thanks,
>> > Dennis
>>
>> > On Aug 31, 12:44 am, Vyacheslav Egorov  wrote:
>> >> Hi Dennis,
>>
>> >> V8's GC is stop-the-world type so you don't have to do anything
>> >> special to make it "keep up". If you are getting OOM that most

Re: [v8-users] Re: V8 Garbage collection in multitasking mode

2011-09-01 Thread Vyacheslav Egorov
> 1. Can I transfer a function object from a script compiled in the global
> context and main-thread Isolate into another thread's Isolate and
> execute it there?

No. You have to serialize it somehow (e.g. JSON) in one isolate and
deserialize it in another one.
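
A rough sketch of that workaround (illustrative helper names, 3.x-era
API, error handling omitted): run JSON.stringify in the source isolate,
carry the bytes across as a std::string, and run JSON.parse in the
target isolate.

#include <string>
#include <v8.h>

// Call in isolate A, inside an entered context: object -> JSON string.
std::string ToJson(v8::Handle<v8::Object> obj) {
  v8::HandleScope scope;
  v8::Context::GetCurrent()->Global()->Set(v8::String::New("__tmp"), obj);
  v8::Handle<v8::Value> result =
      v8::Script::Compile(v8::String::New("JSON.stringify(__tmp)"))->Run();
  v8::String::Utf8Value json(result);
  return std::string(*json ? *json : "");
}

// Call in isolate B, inside its own context and an active HandleScope:
// JSON string -> value.
v8::Handle<v8::Value> FromJson(const std::string& json) {
  v8::Context::GetCurrent()->Global()->Set(v8::String::New("__tmp"),
                                           v8::String::New(json.c_str()));
  return v8::Script::Compile(v8::String::New("JSON.parse(__tmp)"))->Run();
}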

> 2. If not, is what I am doing valid v8 API usage at all? What could
> be other ways to trigger GC more often?

Seems valid to me. But I am a bit confused about why forcing GC manually helps so much.

It's not completely clear, though, why you need a separate context
for your task: the function you pass as a value (not as source)
already has a context attached and will use it, not the context you
have created.

Another thing: do you use weak handles extensively? V8 might be
overflowed by weakly reachable objects. That might cost a lot
especially if those weakly reachable objects hold onto contexts which
you seem to allocate for each task.

--
Vyacheslav Egorov


On Thu, Sep 1, 2011 at 8:34 PM, Dennis H  wrote:
> Hi Vyacheslav,
>
> I see your point.
>
> I don't use separate Isolate for each thread and all tasks use
> function objects compiled in the global Context/Isolate.
> This might be a trouble of course, but I was not sure there is a good
> way to create a function object
> in main Context and then transfer it to another Isolate/Context.
>
> Essentially I wanted to achieve the following usage:
>
> // thread function
> function foo () {}
>
> // starts execution in a separate thread
> task(foo);
>
> I do make them run exclusively (at least I hope I do) by using
>
>    Locker locker;
>    Locker::StartPreemption(preemption_interval);
>    HandleScope scope;
>
>    context = Context::New();
>    context->Enter();
>
> The code is in a fork from nodejs.You can see the code here:
> https://github.com/bfrancojr/node/blob/node-task/src/node_task.cc
>
> The funny part is that if I create just a couple of threads the memory
> is stable, while I call V8::IdleNotification()  every 5 sec.
>
> If I increase the number of tasks to 100 it will start to grow and
> the process will run out memory. However If I start to call
> V8::IdleNotification() every 0.3 seconds it fixes it again.
>
> I've got a feeling the v8 treats the GC as one of the tasks, meaning
> that the more tasks I create the more time is allocated to 'garbage
> producers'
> and the smaller is a relative portion of time for GC. May be I am
> wrong.
>
> So the questions are:
>
> 1. Can I transfer an function object from script compiled in global
> context and main thread Isolate into the other thread's Isolate and
> execute there?
> 2. If not, Is what I am doing a valid v8 API usage at all? What could
> be other ways to trigger GC more often?
>
> Thanks,
> Dennis
>
> On Aug 31, 12:44 am, Vyacheslav Egorov  wrote:
>> Hi Dennis,
>>
>> V8's GC is stop-the-world type so you don't have to do anything
>> special to make it "keep up". If you are getting OOM that most
>> probably means you have a leak somewhere. Try tracing GC with
>> --trace-gc flag to see how heap grows.
>>
>> Also you can't run JavaScript in parallel on V8 unless you create
>> several isolates.
>>
>> If you are using a single isolate from many threads you have to ensure
>> that only one thread is executing JS at the given moment.
>>
>> --
>> Vyacheslav Egorov
>>
>> On Wed, Aug 31, 2011 at 2:28 AM, Dennis H  wrote:
>> > Dear v8 Developers,
>>
>> > I am relatively new to v8 internals, but here is what I found:
>> > I tried to create an app which has multiple tasks running in parallel.
>> > The v8::internal::Thread API seems to work fine, the trouble is I
>> > didn't find a good way to make garbage collection to keep up.
>>
>> > I do call the V8::IdleNotification() in the main event loop
>> > periodically, but it doesn't scale if number of threads is getting
>> > bigger. Essentially If I create a lot of tasks, the process would
>> > reliably run out of memory pretty quick.
>>
>> > How is the garbage collection supposed to be handled correctly with
>> > multiple threads?
>>
>> > I used v3.4.14.
>>
>> > Thanks,
>> > Dennis
>>
>> > --
>> > v8-users mailing list
>> > v8-users@googlegroups.com
>> >http://groups.google.com/group/v8-users
>
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users
>

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] Re: Garbage Collection very slow on Android with latest stable line

2011-09-01 Thread Vyacheslav Egorov
> I did this and I was still seeing the mark-sweep take about 120 ms+.
> While still too slow this leads me to think that it was not the
> compression part that was slow but the sweep.  I thought I was getting
> hammered by all the memmoves but it looks like the sweep was the real
> issue.

Let's additionally enable the --trace-gc-nvp flag to get timings for
the separate phases of the mark-sweep GC.

> 1)  Since we are in a limited memory environment we have no issue with
> setting a hard limit to how much memory V8 can use (Android will kill
> the application if you try to allocate more memory then the OS says is
> appropriate).  Is there a way to create just one young generation to a
> set size (say 16 megs), then force nothing but Scavenger sweeps and
> just keep using the one heap?  We would also need to know when we are
> getting close to the high water mark and then start forcing GC's every
> frame to get this our footprint down.
>

I am not sure I understand your question. But here are a number of concerns:

a) V8's heap has a non-orthogonal structure, so some spaces (code, map,
large object) are not managed by the scavenger.

b) the scavenger is a copying collector which performs best when there is
little to copy around. Managing the whole heap (32 MB in your case)
with the scavenger would completely destroy performance (it would be far
worse than MarkSweep).

c) Your heap is larger than 16 MB.

Also, here is a small idea for you: try bumping max_semispace_size_
and initial_semispace_size_ (see heap.cc). Increase
initial_semispace_size_ to 1 or 2 MB and max_semispace_size_ to 4 or 8
MB and see how it affects GC timings.

> 2) Is there a good way to limit how much data is swept durning the
> Mark-Sweep phase.  For example: I only want to sweep up to N objects,
> or maybe spend X amount of time durning the sweep phase?  It looks
> like I am spending most of my cycles in:  void
> LargeObjectSpace::FreeUnmarkedObjects().  It would be cool if I could
> just release N number of objects each GC call and then keep doing GC's
> every frame until all the unmarked objects are released.

This is possible. (We are doing similar stuff in our experimental GC,
developed on the experimental/gc branch, but it is not thoroughly tested on
ARM yet, so I am not advising you to use it.)

FreeUnmarkedObjects is a very simple routine and I would not expect it
to consume a lot of time.

Can you add some debug prints into it? For example like this:

Index: src/spaces.cc
===================================================================
--- src/spaces.cc   (revision 9090)
+++ src/spaces.cc   (working copy)
@@ -2932,6 +2932,12 @@


 void LargeObjectSpace::FreeUnmarkedObjects() {
+  double start = OS::TimeCurrentMillis();
+  int alive = 0;
+  int alivebytes = 0;
+  int freed = 0;
+  int freedbytes = 0;
+
   LargeObjectChunk* previous = NULL;
   LargeObjectChunk* current = first_chunk_;
   while (current != NULL) {
@@ -2941,7 +2947,11 @@
       heap()->mark_compact_collector()->tracer()->decrement_marked_count();
       previous = current;
       current = current->next();
+      alive++;
+      alivebytes += object->Size();
     } else {
+      freed++;
+      freedbytes += object->Size();
       // Cut the chunk out from the chunk list.
       LargeObjectChunk* current_chunk = current;
       current = current->next();
@@ -2962,6 +2972,14 @@
       current_chunk->Free(current_chunk->GetPage()->PageExecutability());
     }
   }
+  double end = OS::TimeCurrentMillis();
+  PrintF("FreeUnmarkedObjects took %d ms\n"
+         "%d bytes in %d objects alive, %d bytes in %d objects freed\n",
+         static_cast<int>(end - start),
+         alivebytes,
+         alive,
+         freedbytes,
+         freed);
 }


--
Vyacheslav Egorov, Software Engineer, V8 Team.
Google Denmark ApS.



On Thu, Sep 1, 2011 at 2:22 AM, Chris Jimison  wrote:
> Hi Vyacheslav,
>
>
>> editing flag-definitions.h to turn these flags on during compile time.
>> Alternatively you can call v8::V8::SetFlagsFromString to pass them to V8
>
> AWESOME.  That did the trick.  Here is an excerpt from my log
> statements:
>
> platform-posix.cc(10721): *** Scavenger GC Type Selected
platform-posix.cc(10721):  Profile Complete <PerformGarbageCollection> 5
> platform-posix.cc(10721): (149)Logging here
> platform-posix.cc(10721): Scavenge 29.5 -> 29.2 MB,
> platform-posix.cc(10721): (149)Logging here
> platform-posix.cc(10721): 5 ms.
> platform-posix.cc(10721): (149)Logging here
>
>
> platform-posix.cc(10721): *** Scavenger GC Type Selected
platform-posix.cc(10721):  Profile Complete <PerformGarbageCollection> 2
> platform-posix.cc(10721): (149)Logging here
> platform-posix.cc(10721): Scavenge 33.4 -> 32.5 MB,
> platform-posix.cc(10721): (149)Logging

Re: [v8-users] Crankshaft Preemption Mechanism

2011-08-31 Thread Vyacheslav Egorov
> but the optimized code only appears to be checking the stack limit at
> the function entry, not in the loop itself.

Yes. As I said in my previous email: the optimizing compiler eliminates
stack checks that are dominated by calls, because called functions
should at least have a stack check in the prologue.

However, there seems to be a minor issue here. Apparently the
HandleApiCall builtin does not perform any stack checks, thus breaking
HStackCheckEliminator's assumption that every call implies a stack
check.

If print in your example is a JS function, everything is fine, because
a JS function has a stack check in its prologue. But if print is an API
function, then this loop will have no interruption point, which is bad.

--
Vyacheslav Egorov


On Wed, Aug 31, 2011 at 10:39 PM, Kyle Morgan  wrote:
> Hi Vyacheslav,
>
> Allow me to demonstrate what I mean.  I ran the v8 shell with code
> containing the following function.
>
> function loop() {
>   for(var i = 0; i < 5; ++i) {
>     print(i);
>   }
> }
>
> It appears to be emitting the following code (optimized and unoptimized).
>
> --- Code ---
> kind = FUNCTION
> name = loop
> Instructions (size = 196)
> 0x7f6f3e26fca0     0  55             push rbp
> 0x7f6f3e26fca1     1  4889e5         REX.W movq rbp,rsp
> 0x7f6f3e26fca4     4  56             push rsi
> 0x7f6f3e26fca5     5  57             push rdi
> 0x7f6f3e26fca6     6  41ff7598       push [r13-0x68]
> 0x7f6f3e26fcaa    10  493b6508       REX.W cmpq rsp,[r13+0x8]
> 0x7f6f3e26fcae    14  7305           jnc 21  (0x7f6f3e26fcb5)
> 0x7f6f3e26fcb0    16  e82bf4fdff     call 0x7f6f3e24f0e0     ;; debug:
> statement 36
>                                                             ;; code: STUB,
> StackCheckStub, minor: 0
> 0x7f6f3e26fcb5    21  33c0           xorl rax,rax
> 0x7f6f3e26fcb7    23  488945e8       REX.W movq [rbp-0x18],rax
> 0x7f6f3e26fcbb    27  e94d00     jmp 109  (0x7f6f3e26fd0d)
> 0x7f6f3e26fcc0    32  ff7627         push [rsi+0x27]
> 0x7f6f3e26fcc3    35  ff75e8         push [rbp-0x18]
> 0x7f6f3e26fcc6    38  48b9919790636f7f REX.W movq rcx,0x7f6f63909791
>  ;; object: 0x7f6f63909791 
> 0x7f6f3e26fcd0    48  e80b98     call 0x7f6f3e2694e0     ;; debug:
> statement 76
>                                                             ;; code:
> contextual, CALL_IC, UNINITIALIZED, in_loop, argc = 1
> 0x7f6f3e26fcd5    53  488b75f8       REX.W movq rsi,[rbp-0x8]
> 0x7f6f3e26fcd9    57  488b45e8       REX.W movq rax,[rbp-0x18]
> 0x7f6f3e26fcdd    61  a801           test al,0x1
> 0x7f6f3e26fcdf    63  7405           jz 70  (0x7f6f3e26fce6)
> 0x7f6f3e26fce1    65  e89a83feff     call 0x7f6f3e258080     ;; debug:
> statement 43
>                                                             ;; debug:
> position 67
>                                                             ;; code: STUB,
> ToNumberStub, minor: 0
> 0x7f6f3e26fce6    70  4c01e0         REX.W addq rax,r12
> 0x7f6f3e26fce9    73  7004           jo 79  (0x7f6f3e26fcef)
> 0x7f6f3e26fceb    75  a801           test al,0x1
> 0x7f6f3e26fced    77  720d           jc 92  (0x7f6f3e26fcfc)
> 0x7f6f3e26fcef    79  4c29e0         REX.W subq rax,r12
> 0x7f6f3e26fcf2    82  4c89e2         REX.W movq rdx,r12
> 0x7f6f3e26fcf5    85  e80658feff     call 0x7f6f3e255500     ;; code:
> BINARY_OP_IC, UNINITIALIZED (id = 30)
> 0x7f6f3e26fcfa    90  a80d           test al,0xd
> 0x7f6f3e26fcfc    92  488945e8       REX.W movq [rbp-0x18],rax
> 0x7f6f3e26fd00    96  493b6508       REX.W cmpq rsp,[r13+0x8]
> 0x7f6f3e26fd04   100  7307           jnc 109  (0x7f6f3e26fd0d)
> 0x7f6f3e26fd06   102  e8d5f3fdff     call 0x7f6f3e24f0e0     ;; code: STUB,
> StackCheckStub, minor: 0
> 0x7f6f3e26fd0b   107  a801           test al,0x1
> 0x7f6f3e26fd0d   109  ff75e8         push [rbp-0x18]
> 0x7f6f3e26fd10   112  4b8d04a4       REX.W leaq rax,[r12+r12*4]
> 0x7f6f3e26fd14   116  5a             pop rdx
> 0x7f6f3e26fd15   117  488bca         REX.W movq rcx,rdx
> 0x7f6f3e26fd18   120  480bc8         REX.W orq rcx,rax
> 0x7f6f3e26fd1b   123  f6c101         testb rcx,0x1
> 0x7f6f3e26fd1e   126  730a           jnc 138  (0x7f6f3e26fd2a)
> 0x7f6f3e26fd20   128  483bd0         REX.W cmpq rdx,rax
> 0x7f6f3e26fd23   131  7c9b           jl 32  (0x7f6f3e26fcc0)
> 0x7f6f3e26fd25   133  e91d00     jmp 167  (0x7f6f3e26fd47)
> 0x7f6f3e26fd2a   138  e8313cfeff     call 0x7f6f3e253960     ;; debug:
> position 60
>                                                             ;; code:
> COMPARE_IC, UNINITIALIZED (id = 23)
> 0x7f6f3e26fd2f   143  a811           test al,0x11
> 0x7f6f3e26fd31   145  eb0b           jmp 158  (0x7f6f3e26fd3e)
> 0x7f6f3e26fd33   147  493b45b0       REX.W cmpq rax,[r13-0x50]
> 0x7f6f3e26fd37   151  7487           jz 32  (0

Re: [v8-users] Crankshaft Preemption Mechanism

2011-08-31 Thread Vyacheslav Egorov
Hi Kyle,

The optimizing compiler inserts stack checks (the HStackCheck instruction)
explicitly at the loop body's entry [1].

It also does an optimization pass [2] to remove redundant stack checks
that are dominated by function calls (as functions always do a stack
check in the prologue).

Stack checks are an important part of V8's interruption mechanism, so both
compilers emit them to make all loops interruptible.

[1] http://code.google.com/p/v8/source/browse/trunk/src/hydrogen.cc#2823
[2] http://code.google.com/p/v8/source/browse/trunk/src/hydrogen.cc#1247
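
Conceptually, the check emitted on each loop back edge boils down to
something like this (a standalone sketch, not V8's actual code):

#include <cstdint>
#include <cstdio>

// The runtime lowers stack_limit (possibly from another thread) to
// request an interruption; compiled loops poll it on every back edge.
static volatile uintptr_t stack_limit = 0;

static void StackCheck(uintptr_t stack_pointer) {
  if (stack_pointer < stack_limit) {
    // In V8 this is the call into the StackCheckStub, which gives the
    // runtime a chance to interrupt, preempt, or deoptimize.
    std::puts("interrupted");
  }
}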

--
Vyacheslav Egorov


On Wed, Aug 31, 2011 at 9:27 PM, Kyle  wrote:
> Hello,
>
> Some time ago I noticed that the v8 compile was inserting stack limit
> checks at the back edges of loops.  I later found out that this check
> was doubling as a preemption mechanism to interrupt potentially long-
> running code.  However, I've noticed that the hydrogen/lithium
> compiler included with crankshaft does not seem to include these
> checks.  Is there a particular reason for this?  Is the previous
> design for JavaScript preemption no longer being pursued?
>
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users
>

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] V8 Garbage collection in multitasking mode

2011-08-31 Thread Vyacheslav Egorov
Hi Dennis,

V8's GC is of the stop-the-world type, so you don't have to do anything
special to make it "keep up". If you are getting OOM, that most
probably means you have a leak somewhere. Try tracing the GC with the
--trace-gc flag to see how the heap grows.

Also you can't run JavaScript in parallel on V8 unless you create
several isolates.

If you are using a single isolate from many threads you have to ensure
that only one thread is executing JS at any given moment.
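
A minimal sketch of that discipline with the default isolate (3.x-era
API):

#include <v8.h>

// Every thread that wants to run JS takes the Locker first; the
// constructor blocks until no other thread holds the isolate.
void RunSomeJs(v8::Persistent<v8::Context> context) {
  v8::Locker locker;
  v8::HandleScope scope;
  v8::Context::Scope context_scope(context);
  v8::Script::Compile(v8::String::New("1 + 1"))->Run();
}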

--
Vyacheslav Egorov


On Wed, Aug 31, 2011 at 2:28 AM, Dennis H  wrote:
> Dear v8 Developers,
>
> I am relatively new to v8 internals, but here is what I found:
> I tried to create an app which has multiple tasks running in parallel.
> The v8::internal::Thread API seems to work fine, the trouble is I
> didn't find a good way to make garbage collection keep up.
>
> I do call the V8::IdleNotification() in the main event loop
> periodically, but it doesn't scale if number of threads is getting
> bigger. Essentially If I create a lot of tasks, the process would
> reliably run out of memory pretty quick.
>
> How is the garbage collection supposed to be handled correctly with
> multiple threads?
>
> I used v3.4.14.
>
> Thanks,
> Dennis
>
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users
>

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] Re: Garbage Collection very slow on Android with latest stable line

2011-08-30 Thread Vyacheslav Egorov
Hi Chris,

> I enabled these flags but I am not seeing any additional TTY spew or a
> data log file generated on the device (however we are building using
> ANT and I could have just screwed something up).

It should just print things to stdout when enabled. You can try
editing flag-definitions.h to turn these flags on at compile time.
Alternatively you can call v8::V8::SetFlagsFromString to pass them to V8.
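
E.g., a sketch of the latter (call it early, before any JS runs):

#include <cstring>
#include <v8.h>

// Pass GC-tracing flags to V8 programmatically instead of via argv.
void EnableGcTracing() {
  const char flags[] = "--trace-gc --trace-gc-verbose";
  v8::V8::SetFlagsFromString(flags, static_cast<int>(std::strlen(flags)));
}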

> The pauses were the same (about 1~2 ms per SCAVENGER Sweep) but the
> difference is that with this old version of V8 the exact same Javascript
> code would not trigger a MARK_COMPRESS sweep.

So did the old version only trigger scavenges? Or did it trigger
scavenges and mark-sweeps but no mark-compacts?

> Another thing I tried was just forcing all GC to SCAVENGER.  This had
> a much better impact for our use case.  While the SCAVENGER time jumped
> from 1~2 ms up to about 4~5 ms, it did this only once per second and the
> game ran very smoothly.  I have now exposed a custom flag in our
> version of V8 to turn off MARK_COMPRESS from Javascript code so that
> games when they are in long wait situations (waiting for file IO or
> Network Messaging) can turn on MARK_COMPRESS and if they are in a
> performance critical area they can turn it off.

If you disable the MARK_COMPACTOR then your app might start accumulating
garbage, because the scavenger can only collect things in the young
generation. Also, if you disable the MARK_COMPACTOR entirely, your app will
just crash with OOM when V8 hits the allocation limit in old space,
because none of the GCs it forces will free any memory in the old
space.

You can try to run the app with --never-compact to disable the compaction
phase of the mark-sweep collector and see whether it improves the situation
or not. This will lead to fragmentation, but is much more stable than
disabling full collection entirely.

--
Vyacheslav Egorov


On Tue, Aug 30, 2011 at 6:52 PM, Chris Jimison  wrote:
> Hi Vyacheslav,
>
> Sorry I took so long to get back to this.
>
>> What version were you using before upgrading to HEAD and how long were
>> pauses at that version?
>
> We were using a very old version of V8.  I would say from around
> January/February of 2010.  The person who I inherited this code from
> did not leave an SVN revision number anywhere I can find (and it looks
> like they removed any reference files inside of V8 that had this
> information).
>
>> how long were pauses at that version?
>
> The pauses were the same (about 1~2 ms per SCAVENGER Sweep) but the
> difference is that with this old version of V8 the exact same Javascript
> code would not trigger a MARK_COMPRESS sweep.  However when I upgraded
> V8 I am now seeing the MARK_COMPRESS trigger every 10 sec or so which
> introduces a very noticeable frame hitch.
>
>> Can you share a bit more about your use of V8?
>
> Sure.  So we have a native C++ game engine that handles all of our
> lower level logic (rendering with OpenGL, managing device touches,
> etc, etc) and once a GL frame we send a large Command String to the V8
> engine for processing (it has to be a string currently in order for us
> to maintain compatibility with iOS which does not use V8) and once the
> Javascript has completed the frame tick it will send back a Command
> String for the C++ engine to process.
>
> The general take away here is we are VERY string heavy in our
> processing.
>
>> It seems that promotion rate is not high (judging from 1-2ms
>> scavenging pauses) but it's hard to say anything without looking at GC
>> logs. (e.g. --trace-gc, --trace-gc-verbose ones; plus maybe some
>> additional debug prints in Heap::SelectGarbageCollector to see why V8
>> chooses MARK_COMPACTOR).
>
> I enabled these flags but I am not seeing any additional TTY spew or a
> data log file generated on the device (however we are building using
> ANT and I could have just screwed something up).  I did put some debug
> prints around the SelectGarbageCollector and found we are hitting the
> following case:
>
>  // Is enough data promoted to justify a global GC?
>  if (OldGenerationPromotionLimitReached()) {
>    isolate_->counters()->gc_compactor_caused_by_promoted_data()-
>>Increment();
>    return MARK_COMPACTOR;
>  }
>
> Interesting notes:
>
> One thing I am in the process of doing is moving all our string
> process messaging into C++.  This way the string objects can be more
> tightly managed and we keep the uber long frame command string out of
> V8 memory.  This had a positive impact for us in that it increased the
> amount of time from 10 sec to about 45 sec between MARK_COMPRESS
> sweeps.  However the MARK_COMPRESS still took about 200ms.
>
> Another thing I tried was just forcing all GC to SCAVENGER.  This had
> a m

Re: [v8-users] v8 on Linux on Power architecture (powerpc)

2011-08-26 Thread Vyacheslav Egorov
Hi,

The main issue here is that V8 does not have a PPC codegen.

You'll have to implement one (plus all the required arch-specific runtime
support) if you want to see V8 running on PowerPC.

--
Vyacheslav Egorov


On Fri, Aug 26, 2011 at 4:06 PM, swsyessws  wrote:
> Hello all, I am a new user of v8, and would like to see it running on Linux
> on the Power architecture. Can someone help make a list of potential
> issues/concerns/roadblocks/showstoppers to make that happen? It will be
> great to have a list in a priority order. Many thanks in advance!
>
> --
> v8-users mailing list
> v8-users@googlegroups.com
> http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users

