Re: [v8-users] How to remove CheckMaps instructions in an optimization phase

2014-11-05 Thread Vyacheslav Egorov
Hydrogen is also used to generate stubs, and there a CheckMaps deopt has
different semantics from a normal deopt - it is not reported in
--trace-deopt (which reports all normal JS function deopts). You have to
account for that in your experiments.

I however think that your experiment does not provide any actionable data.
Knowing that checks introduce X% overhead is of little use unless you
also know an algorithm that eliminates all of them while retaining
correctness.


Vyacheslav Egorov

On Wed, Nov 5, 2014 at 5:37 PM, Gabriel Southern souther...@gmail.com
wrote:

 Perhaps it would help if I explained my motivation.  I'm trying to
 evaluate the overhead of conditional deoptimization checks.  One way is by
 running workloads that have the checks in their normal configuration and
 measuring the runtime.  Then removing checks that were never triggered and
 rerunning the workload and comparing the runtime.  Obviously I understand
 this is not safe in the general case.

 For some workloads I was able to remove the call to DeoptimizeIf in
 LCodeGen::DoCheckMaps and the benchmark still ran correctly.  But if I
 removed the call to CompareMap(reg,map) I would get an error about
 unreachable code similar to what I posted earlier when I remove the
 CompareMaps hydrogen instructions.

 Aside from that, I want to be able to choose when to remove
 checks at the hydrogen instruction level, because later I will want to pick
 which functions to remove the checks from.  I would profile a benchmark
 first and see which functions have conditional deopts that are never
 triggered and then remove the deopts from those functions.

 Again this is all part of a performance evaluation study, not something to
 be used for production code.  I hope this makes sense, but if you think
 there's something I'm overlooking for why this won't work I'd be interested
 to know why.  From looking at the assembly code sequences that are
 generated I think this should be okay, but there's also obviously something
 I'm missing that is leading to the unreachable code error that I've seen.

 Thanks,

 -Gabriel

 On Wednesday, November 5, 2014 5:42:20 AM UTC-8, Jakob Kummerow wrote:

 Removing check instructions is so utterly wrong and dangerous that I
 can't bring myself to even try to help you. Just don't do it!


 On Wed, Nov 5, 2014 at 8:19 AM, Gabriel Southern south...@gmail.com
 wrote:

 I'm experimenting with removing deoptimization checks and I have a
 question about how to remove hydrogen instructions.

 I'm looking at a benchmark where the CheckMaps deoptimization checks are
 never triggered and I'm trying to remove them.  I know this is not safe in
 the general case, but when I traced the deoptimizations for this benchmark
 there were not any that were triggered because of CheckMaps.

 I've tried to follow the HDeadCodeEliminationPhase as a guide, because
 what I want to do is delete instructions that match certain criteria, so
 I thought that pass might be a good example.  The main loop in my pass is:

   for (int i = 0; i < graph()->blocks()->length(); ++i) {
     HBasicBlock* block = graph()->blocks()->at(i);
     for (HInstructionIterator it(block); !it.Done(); it.Advance()) {
       HInstruction* instr = it.Current();
       if (instr->opcode() == HValue::kCheckMaps) {
         instr->DeleteAndReplaceWith(NULL);
       }
     }
   }

 When I run this and just print the list of instructions that will be
 removed, the list looks okay.  However, if I actually delete the instruction
 I get a runtime error as follows:

 #
 # Fatal error in ../src/objects.cc, line 10380
 # unreachable code
 #

 === C stack trace ===

  1: V8_Fatal
  2: v8::internal::Code::FindAndReplace(v8::internal::Code::FindAndReplacePattern const&)
  3: v8::internal::CodeStub::GetCodeCopy(v8::internal::Code::FindAndReplacePattern const&)
  4: v8::internal::PropertyICCompiler::ComputeCompareNil(v8::internal::Handle<v8::internal::Map>, v8::internal::CompareNilICStub*)
  5: v8::internal::CompareNilIC::CompareNil(v8::internal::Handle<v8::internal::Object>)
  6: ??
  7: v8::internal::CompareNilIC_Miss(int, v8::internal::Object**, v8::internal::Isolate*)
  8: ??
 Segmentation fault (core dumped)

 I'm wondering if anyone has suggestions for what I can look at to
 understand what's going on and debug the problem.  Obviously the specific
 thing I'm trying to do of removing CheckMaps is not something that should
 work in general.  But I think it should be possible to remove a hydrogen
 instruction during the optimization phase.  I've tried to pattern my
 attempt off of the existing code, but obviously I'm missing something.  If
 anyone has suggestions about what I should try that is appreciated.

 Thanks,

 Gabriel

  --
 --
 v8-users mailing list
 v8-u...@googlegroups.com
 http://groups.google.com/group/v8-users
 ---
 You received this message because you are subscribed to the Google
 Groups v8-users group.
 To unsubscribe from

Re: [v8-users] Re: Array#join - better to special case for Array/etc.?

2014-09-01 Thread Vyacheslav Egorov
 Array#[push|pop]() is easily optimized for array instances, because
 they each compile down to a single common assembly instruction

Last time I checked, the Intel manual did not have jsapop/jsapush
instructions.

You need to do a few checks (length underflow, absence of holes in the
array, etc.), so a single instruction is unlikely (though potentially
possible in a loop under certain conditions --- but those conditions require
sophisticated optimizations to achieve, e.g. you need to hoist bounds
checks and sink the update of length out of the loop).
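To make the point concrete, here is a plain-JS sketch (my own illustration, not V8 code) of the checks a fast-path Array#pop still has to perform before it can be reduced to straight-line machine code:

```javascript
// Sketch of the fast-path checks for Array#pop (illustrative, not V8 code).
function popSketch(arr) {
  var len = arr.length;
  if (len === 0) return undefined;   // length underflow check
  var idx = len - 1;
  if (!(idx in arr)) {
    // Hole in the array: a real VM would bail out to the generic slow path
    // here (the value may come from the prototype chain).
    arr.length = idx;
    return undefined;
  }
  var value = arr[idx];
  arr.length = idx;                  // the length update a loop could sink
  return value;
}

console.log(popSketch([1, 2, 3]));   // 3
```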

 but to make a special case (or more optimal case) for Arrays in
Array#join(), and especially if it is an array of Strings.

There is such a case. It's called _FastAsciiArrayJoin.

 This is a relatively fast snippet of C++ code:

This might have O(n^2) runtime complexity or waste memory for the result (if
the library does capacity doubling on appends, which is a common strategy),
depending on how your C++ library reserves capacity for std::string.

std::string join(const std::vector<std::string>& array) {
  size_t total_length = 0;
  for (const auto& s : array) total_length += s.length();

  std::string str;
  str.reserve(total_length);
  for (const auto& s : array) str.append(s);
  return str;
}

which is btw exactly what _FastAsciiArrayJoin attempts to do.
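To illustrate why the two-pass shape matters, a JS-level sketch (my own illustration): repeated += may copy the accumulated prefix on each append, while Array#join lets the engine build the result in one pass (JS strings have no reserve(), so join plays that role):

```javascript
// Naive build: each += may copy the accumulated prefix again.
function joinNaive(parts) {
  var s = '';
  for (var i = 0; i < parts.length; i++) s += parts[i];
  return s;
}

// Join-style build: the engine can size the result once up front.
function joinFast(parts) {
  return parts.join('');
}

console.log(joinNaive(['foo', 'bar']) === joinFast(['foo', 'bar']));  // true
```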




Vyacheslav Egorov


On Mon, Sep 1, 2014 at 11:20 AM, Isiah Meadows impinb...@gmail.com wrote:

 That library rarely does type checking. This contributes a lot of
 speed to their overall algorithm. If you look at my benchmarks,
 clearly, removing type checking helps, but it doesn't help for all
 applications. Another thing is that they use 99% C-style for loops
 with numerical indices instead of for-in loops (which always require
 some type checking because they work with all Objects and Arrays). The
 code actually resembles Asm.js in its heavy use of numbers.

 Array#[push|pop]() is easily optimized for array instances, because
 they each compile down to a single common assembly instruction. Also,
 in the case of Array#pop(), if the value isn't used, then it can
 simply pop to the same register over and over again, making it easily
 surpass 100 million operations per second if properly optimized.

 Back to the initial topic, my main request isn't to remove
 type-checking, but to make a special case (or more optimal case) for
 Arrays in Array#join(), and especially if it is an array of Strings.
 This is a relatively fast snippet of C++ code:

 std::string join(std::string* array, int len) {
   std::string str = "";
   while (len) {
     str += *(array + --len);
   }
   return str;
 }

 The Fast library could speed up some of their methods easily by
 reversing the iteration order for some methods (and I'm about to draft
 a quick patch to it).

 On Sun, Aug 31, 2014 at 9:22 AM, Jacob G kurtext...@gmail.com wrote:
  You should take a look at this too: https://github.com/codemix/fast.js -
  Functions written in JS are faster than the native functions. Is there
  something to be done?
 
  Am Sonntag, 31. August 2014 02:16:37 UTC+2 schrieb Isiah Meadows:
 
  I profiled various native methods, comparing them to equivalent
 polyfills
  and special-cased ones. I compared the following functions:
 
  Math.abs(x)
  Array.prototype.pop()
  Math.ceil(x)
  Array.prototype.join(sep)
 
  I found the following things from testing in various browsers:
 
  Math.abs(x)
 
  Webkit is about twice as fast as V8 in the native implementation.
  Webkit's performance in the rest is on par with V8's.
  Similar performance between type-ignorant polyfills and native
  implementation (on all browsers)
 
  Array.prototype.pop()
 
  Firefox clearly hasn't optimized the special case for arrays natively.
  JS polyfills are insanely slow, with type checking making little
  difference.
 
  Math.ceil(x)
 
  JS polyfills are significantly slower, but that is explained by the
  better bitwise support for floats/doubles/etc. in C/C++.
 
  Mine does it without branching, but a potentially better way is to
  decrement if less than 0 and truncate it.
 
  Webkit is a little faster, but not a lot.
 
  Array.prototype.join(sep)
 
  JS standards polyfill rather slow
  JS polyfill assuming an array is over twice as fast as the native
  implementation (If it optimizes for this case, it should structurally
  resemble a Java Object[] internally).
 
  This really needs a special case (or better special case) for Arrays.
 
  I can't submit a patch for this yet, because of current CLA confusion
  (off-topic), but it should be relatively simple.
 

Re: [v8-users] Problem with several threads trying to lock an isolate, and the withdrawal of v8::Locker's preemption

2014-06-12 Thread Vyacheslav Egorov
I would like to note that RequestInterrupt was not intended as a
replacement for preemption. We didn't want the callback executing any
JavaScript in the interrupted isolate, so we put the following requirement
on the interrupt callback:


*Registered |callback| must not reenter interrupted Isolate.*

This requirement is not checked right now, but neither is anything
guaranteed to work if you try to start executing JavaScript in the
interrupted isolate from the callback or from another thread (by unlocking
the isolate in the callback and allowing another thread to lock it).


Vyacheslav Egorov


On Thu, Jun 12, 2014 at 1:10 PM, juu julien.vouilla...@gmail.com wrote:

 Ok, I didn't notice this API available since v8 3.25.

 I will have to wait for my team to migrate to a new version of v8 then ...

 Thanks
 Julien.
 On Thursday, June 12, 2014 1:44:27 AM UTC+2, Jochen Eisinger wrote:




 On Tue, Jun 10, 2014 at 8:38 AM, juu julien.v...@gmail.com wrote:

 Hello everyone,

 I'm trying to implement RequireJS on my JS engine based on V8 (v8 3.21).
 I have a problem with asynchronous loading and evaluation of scripts.

 The main thread initializes V8: it creates its isolate, context, script,
 etc.
 When the main script is ready, the current isolate is locked and the
 script is run.

 Once a *require(anotherScript)* is encountered (in my main script),
 another thread is created and is in charge of loading *anotherScript* and
 executing it as soon as possible.

 My problem is that the main thread locks the current isolate until the
 whole main script has executed, which leaves no chance for *anotherScript*
 to be called asynchronously; actually it's always executed synchronously,
 since *anotherScript* manages to lock the current isolate only once the
 main thread has finished and unlocked it.

 I use v8::Locker and v8::Unlocker to deal with my multithreaded use of
 v8. In my version of v8 (3.21), v8::Locker provides a preemption feature
 which enables me to give other threads a chance to lock v8
 periodically:

 /**
  * Start preemption. When preemption is started, a timer is fired every n
  * milliseconds that will switch between multiple threads that are in
  * contention for the V8 lock.
  */
 static void StartPreemption(int every_n_ms);

 /** Stop preemption. */
 static void StopPreemption();

 But ... this feature is no longer available in later versions of v8
 (since 3.23).
 This post confirms it:
 https://groups.google.com/forum/#!searchin/v8-users/StartPreemption/v8-users/E5jtPC-scp8/H-2yz4Wj_SkJ

 So here are my questions:

 Is there any other way to perform the preemption v8 used to provide?
 Am I supposed to do it myself? I don't think I can; I guess I can't
 interrupt/pause the execution properly myself...


 I guess you can do this by using the RequestInterrupt API?

 best
 -jochen



 Am I doing something wrong in my global use of v8 and multiple threads?


 Thanks a lot !
 Julien.





Re: [v8-users] Intent to ship: ES6 Map & Set

2014-05-07 Thread Vyacheslav Egorov
What are the performance characteristics and memory footprint one can
expect from Map & Set?

I would like to point out that a lot of built-in features that V8 and other
JS VMs implement (e.g. Array.prototype.forEach) are never used because they
are perceived as slow (and they are actually slow for various reasons).

Can we proactively avoid falling down the same hole with ES6 features?

What about having an ES6 features (micro)benchmark suite to drive the
performance of these features across all browsers implementing them?
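As a sketch of what a single entry in such a suite could look like (a hypothetical harness; the names and timing approach are made up for illustration):

```javascript
// Hypothetical microbenchmark entry: Map insert+lookup vs. a plain object.
function bench(name, fn, iterations) {
  var start = Date.now();
  for (var i = 0; i < iterations; i++) fn(i);
  var ms = Date.now() - start;
  console.log(name + ': ' + ms + ' ms for ' + iterations + ' iterations');
  return ms;
}

var map = new Map();
bench('Map set/get', function (i) {
  map.set(i, i);
  if (map.get(i) !== i) throw new Error('wrong value');
}, 100000);

var obj = {};
bench('object set/get', function (i) {
  obj[i] = i;
  if (obj[i] !== i) throw new Error('wrong value');
}, 100000);
```

A real suite would of course need warm-up runs and more careful timing; this only shows the shape of an entry.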


Vyacheslav Egorov


On Tue, May 6, 2014 at 9:22 PM, 'Erik Arvidsson' via v8-users 
v8-users@googlegroups.com wrote:

 Map & Set are both part of ES6 [1], [2].

 They have been shipping in Firefox since version 13 [3] and in Internet
 Explorer 11 [4]. They are also turned on by default for nightly WebKit/JSC.

 Adam Klein recently re-implemented the backing hash table used by both Map
 and Set to use an ordered hash table, which is a requirement for
 deterministic insertion order iteration. With that we were able to add
 support for forEach which we saw as a must have for parity with Firefox and
 Internet Explorer.

 This is not a full implementation of Map and Set. Most notably it does not
 include @iterator, entries, values nor keys. This is also the lowest common
 denominator between IE and FF. We plan to send out further intent to ship
 emails before we ship the remaining features of Map and Set.

 Owners: ad...@chromium.org, a...@chromium.org

 [1] http://people.mozilla.org/~jorendorff/es6-draft.html#sec-map-objects
 [2] http://people.mozilla.org/~jorendorff/es6-draft.html#sec-set-objects
 [3] https://developer.mozilla.org/en-US/Firefox/Releases/13
 [4] http://msdn.microsoft.com/en-us/library/ie/dn342892(v=vs.85).aspx



Re: [v8-users] Disappearing closure bindings

2013-12-04 Thread Vyacheslav Egorov
 Here is a simple example of the issue from the JS point of view:

You actually can't have a function declaration inside an if-statement, but
engines allow it for compat reasons.

Now what you have written is equivalent to:

function preprocessor(source, url, listenerName) {
  function wrapSource(source, url, listenerName) {
    console.log('closeOverMe=' + closeOverMe);
    return source + '\n//' + postfix;
  }

  var closeOverMe;

  if (!window.wasCompiledPreviously) {
    closeOverMe = 'I am closed over';
    console.log('unclosed closeOverMe=' + closeOverMe);
    window.wasCompiledPreviously = true;
  }

  return wrapSource(source, url, listenerName);
}

Now you should be able to see why closeOverMe is undefined on the second
invocation.
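A minimal standalone illustration of the hoisting part (the declaration of a var is hoisted to the top of the function; the assignment is not):

```javascript
// var declarations hoist to the top of the function; assignments do not.
function demo(first) {
  if (first) {
    var x = 'assigned';   // effectively: "var x;" up top, "x = ..." here
  }
  return x;               // undefined whenever the branch did not run
}

console.log(demo(true));   // 'assigned'
console.log(demo(false));  // undefined
```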

Honestly, I am not sure I understand the intent of the code. A function
literal / declaration creates a new closure every time it is executed;
you can't cache it like that. You need to explicitly save it in a variable.

I would say that the clean JavaScript way to do this is something along
these lines:

var preprocessor = (function () {
  var wrapSource = null;

  function preprocessor(source, url, listenerName) {
    if (wrapSource === null) {
      var closeOverMe = 'I am closed over';
      wrapSource = function wrapSource(source, url, listenerName) {
        console.log('closeOverMe=' + closeOverMe);
        return source + '\n//' + postfix;
      };
    }
    return wrapSource(source, url, listenerName);
  }

  return preprocessor;
})();


Vyacheslav Egorov


On Thu, Dec 5, 2013 at 12:04 AM, johnjbar...@chromium.org wrote:

 Let me start with a request for patience, I have a complex problem and I'm
 unsure on some of the V8 terminology.

 Chrome's DevTools supports script preprocessing: from the DevTools you
 can reload a Web page and preprocess everything that will go into V8 with
 a JS-to-JS preprocessor.  This allows tracing and runtime analysis tools
 based on recompilation to be implemented in JS.

 In applying this preprocessor with the traceur-compiler (
 https://github.com/google/traceur-compiler) I hit a snag: functions
 within the JS preprocessor sometimes reference 'undefined' rather than the
 expected object. ("Sometimes" here is one kind of complication; thankfully,
 the failures are deterministic, and in simple cases the failure rate is
 100%.)  These undefined references always point to objects
 created in closure environments.

 As the web page loads, scripts from the browser enter V8, V8 emits a
 before-compile event, the preprocessor runs and returns modified code, then
 V8 proceeds with its work.  The first event works; subsequent ones all fail.

 The preprocessor itself is running in a separate Context modeled after the
 Chrome browser's content-script mechanism. We compile the JS preprocessor
 into this Context to obtain a C++ reference to a function within the
 Context. Then we call the function from C++ every time we get the V8
 before-compile event.

 Here is a simple example of the issue from the JS point of view:

 function preprocessor(source, url, listenerName) {
   if (!window.wasCompiledPreviously) {
     var closeOverMe = 'I am closed over';
     console.log('unclosed closeOverMe=' + closeOverMe);
     function wrapSource(source, url, listenerName) {
       console.log('closeOverMe=' + closeOverMe);
       return source + '\n//' + postfix;
     }
     window.wasCompiledPreviously = true;
   }
   return wrapSource(source, url, listenerName);
 }

 On the first preprocessor call the console log messages are ok, but
 subsequent calls will have closeOverMe undefined. So the 'window' state is
 being saved between calls and the function wrapSource() can be called, but
 the thing that closeOverMe points to has gone away.

 I'm hoping that someone reading this far will say "Oh, that means you did
 not ..." in the V8 blink binding code.  If this does not ring any bells
 I'll have to start asking about the details of how Blink calls into V8 for
 this case.

 Thanks,
 jjb



Re: [blink-dev] Re: [v8-users] Intent to Implement Promises in V8

2013-10-04 Thread Vyacheslav Egorov
Recently I was made aware of a user-land promises library that Petka
Antonov (cc'ed) implemented with a focus on performance.

https://github.com/petkaantonov/bluebird

There seem to be some meaningful benchmarks mentioned there in the
Benchmarking section.

You might be interested in taking a look at his code.


Vyacheslav Egorov


On Fri, Oct 4, 2013 at 7:27 PM, Yusuke SUZUKI yusukesuz...@chromium.org wrote:

 We're currently working on adding a threading API, probably similar to
 blink's WebThread.


 Sounds very nice. Providing embedder-side threading interfaces to V8 is
 needed for Promises.
 Is there already any discussion about the design of a threading API?


 On Sat, Oct 5, 2013 at 2:50 AM, Jochen Eisinger joc...@chromium.org wrote:

 To clarify, we won't expose threads to the language, but clean up the
 thread usage of V8 internally, e.g. the optimizing compiler thread.

 best
 -jochen


 On Fri, Oct 4, 2013 at 7:37 PM, Dirk Pranke dpra...@chromium.org wrote:

 On Fri, Oct 4, 2013 at 3:29 AM, Jochen Eisinger joc...@chromium.org wrote:


 We're currently working on adding a threading API, probably similar to
 blink's WebThread.


 We are?

 -- Dirk





Re: [v8-users] Creating persistent weak handles on primitive values did not collected by GC?

2013-04-27 Thread Vyacheslav Egorov
Non-smi values are allocated on the heap, so they should be collected sooner
or later, though they might be temporarily stuck in some cache.

As Ben recommends, it's better to put only Objects & Strings into weak
persistent handles. [Though strings can also be stuck in some cache
for an indefinite amount of time.]
Vyacheslav Egorov


On Sat, Apr 27, 2013 at 10:36 AM, Dmitry Azaraev
dmitry.azar...@gmail.com wrote:

  Boolean values are eternal --- they never die. Small integers (31 bits
  on ia32 and 32 bits on x64) are not even allocated in the heap; they are
  essentially *values*, so weak reachability is undefined for them.

 Thanks. I tried allocating non-SMI values too and got the same result, so
 it looks like there is some additional rule. Maybe there is an easy way to
 detect which values can be used as weak handles?


 On Fri, Apr 26, 2013 at 10:54 PM, Vyacheslav Egorov 
 vego...@chromium.orgwrote:

  So it is possible that persistent weak handles built on top of
 primitive values did not collected?

 Boolean values are eternal --- they never die. Small integers (31 bits on
 ia32 and 32bits on x64) are not even allocated in the heap, they are
 *values* essentially so weak reachability is undefined for them.

  it is safe return persistent handle via raw return handle; instead of
 return handle_scope.Close(handle); ?

 Yes. You need to use Close only for Local handles. If you return
 Persistent handle you can just return that directly.

 --
 Vyacheslav Egorov


 On Fri, Apr 26, 2013 at 12:42 PM, Dmitry Azaraev fdd...@gmail.comwrote:

 Hi.

 My question initially originated from CEF V8 integration, described on
 next issues:
 https://code.google.com/p/chromiumembedded/issues/detail?id=323 ,
 https://code.google.com/p/chromiumembedded/issues/detail?id=960 .

 In short problem looks as:
 CEF creates persistent and weak handles for every created V8 value (
 Boolean, Integers ). And in this case i found memory leak, when for example
 our native function returns primitive value. For complex values (Object),
 this is did not appear at all.

 So it is possible that persistent weak handles built on top of primitive
 values did not collected?

 And additional question: it is safe return persistent handle via raw
 return handle; instead of return handle_scope.Close(handle); ?





 --
 Vyacheslav Egorov
 Software Engineer
 Google Danmark ApS - Skt Petri Passage 5, 2 sal - 1165 København K -
 CVR nr. 28 86 69 84







 --
 Best regards,
Dmitry









Re: [v8-users] Creating persistent weak handles on primitive values did not collected by GC?

2013-04-26 Thread Vyacheslav Egorov
 So is it possible that persistent weak handles built on top of primitive
values are not collected?

Boolean values are eternal --- they never die. Small integers (31 bits on
ia32 and 32 bits on x64) are not even allocated in the heap; they are
essentially *values*, so weak reachability is undefined for them.

 is it safe to return a persistent handle via a raw "return handle;" instead
of "return handle_scope.Close(handle);"?

Yes. You need to use Close only for Local handles. If you return a
Persistent handle you can just return it directly.

--
Vyacheslav Egorov


On Fri, Apr 26, 2013 at 12:42 PM, Dmitry Azaraev fdd...@gmail.com wrote:

 Hi.

 My question initially originated from CEF V8 integration, described on
 next issues:
 https://code.google.com/p/chromiumembedded/issues/detail?id=323 ,
 https://code.google.com/p/chromiumembedded/issues/detail?id=960 .

 In short, the problem looks like this:
 CEF creates persistent and weak handles for every created V8 value
 (Booleans, Integers). And in this case I found a memory leak when, for
 example, our native function returns a primitive value. For complex values
 (Object), it did not appear at all.

 So is it possible that persistent weak handles built on top of primitive
 values are not collected?

 And an additional question: is it safe to return a persistent handle via a
 raw "return handle;" instead of "return handle_scope.Close(handle);"?





--
Vyacheslav Egorov
Software Engineer
Google Danmark ApS - Skt Petri Passage 5, 2 sal - 1165 København K -
CVR nr. 28 86 69 84





Re: [v8-users] How to use --trap_on_deopt

2013-03-18 Thread Vyacheslav Egorov
This flag is not intended to be used by JavaScript developers.

It is intended to be used by VM developers.

You need to use a native debugger like gdb (not V8's built-in debugger) or
WinDBG (on Windows) to catch SIGTRAP and then you'll have to look at the
disassembly and the memory state to figure out what is going on.

So unless you have a deep understanding of V8 internals and are proficient
in reading assembly language, this flag is useless.

Vyacheslav Egorov


On Mon, Mar 18, 2013 at 5:19 PM, Nick Evans nick.evans8...@gmail.com wrote:

 I would like to use the --trap_on_deopt flag (put a break point before
 deoptimizing) because it sounds like a way to pause the program in a
 debugger, inspect the local variables and hopefully understand why a
 function has been deoptimised. Unfortunately I cannot find any examples on
 the web of how this flag is used.

 I had naively assumed it would just set a breakpoint when and where a
 function was deoptimised and invoke the debugger. Jakob and Yang have
 explained to me that it throws a SIGTRAP signal that needs to be caught
 (see link below). Unfortunately even with this knowledge I still cannot
 work out how to use this flag. All of the following commands result in
 d8.exe has stopped working without d8 printing anything:

 [d8 3.17.11, built with Visual Studio 2010, Windows 7 x64]
 d8 --trap_on_deopt
 d8 --debugger --trap_on_deopt
 d8 --trap_on_deopt test.js
 d8 --debugger --trap_on_deopt test.js

 d8 fails immediately even when it hasn't been passed a js file. d8 runs as
 expected when --trap_on_deopt is omitted.

 I have had more success with Node.js:

 [node-v0.10.0-x86.msi]
 node --trace_deopt --trap_on_deopt test.js

 The script will run up to the deopt but then quits to the OS, ignoring
 several nested catch...finally statements. Including the debug option
 yields:

 node --trace_deopt --trap_on_deopt debug test.js
 [deoptimize context: 671033d]
  debugger listening on port 5858
 connecting... ok
 break in test.js:1

 before Node quits to the OS without running any of the script. Node runs
 as expected when --trap_on_deopt is omitted.

 I had assumed that this behaviour and the absence of any examples on the
 web indicated an underused and buggy feature but apparently not (
 http://code.google.com/p/v8/issues/detail?id=2583&thanks=2583&ts=1363468366).
 If the exception is thrown by d8/Node to the OS, and not to my catches or
 the debugger, then I'm really confused about what this flag does and how it
 can be used properly.

 I would be grateful if someone could provide an example of how to use
 --trap_on_deopt.









Re: [v8-users] .map is slower than for

2013-02-04 Thread Vyacheslav Egorov
I'd say it's a known issue that generic Array built-ins are slower than
handwritten, less generic versions.

The array you are mapping over is extremely small, so all the overheads of a
generic implementation are highlighted. For example, %MoveArrayContents
amounts to 20% of the overhead, while in reality it just swaps around some
pointers.

If you are into functional programming (which I know you are :-)) you can
have your own, less generic version:

Array.prototype.fastMap = function (cb) {
  "use strict";
  var result = new Array(this.length);
  for (var i = 0; i < this.length; i++) result[i] = cb(this[i]);
  return result;
};

However I'd also argue that it is not impossible to optimize the overhead
away in the majority of common cases using a combination of inlining,
constant propagation and some other optimizations. E.g. the hasOwnProperty
check can be fused with the actual load or completely eliminated (depending
on whether the backing store is holey or not). It just requires some
plumbing. Even V8's approach to function inlining does not really work with
higher-order code (instead of checking closure identity, its literal
identity should be checked:
https://code.google.com/p/v8/issues/detail?id=2206).
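The closure-vs-literal identity point can be sketched concretely (the names below are illustrative, not from the thread, and the inlining behaviour described is V8's as of the time of the post):

```javascript
// Each call to makeAdder returns a *new* closure over the same function
// literal. Because the inliner guarded inlined code on closure identity
// rather than literal identity, a call site that sees many distinct
// closures of one literal behaves as if it were megamorphic.
function makeAdder(n) {
  return function (x) { return x + n; }; // same literal, fresh closure each call
}

function applyTwice(f, x) {
  return f(f(x)); // this call site observes a different closure every iteration
}

var total = 0;
for (var i = 0; i < 1000; i++) {
  total += applyTwice(makeAdder(i), i); // defeats closure-identity inlining
}
```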

An alternative approach could provide specializations of map for various
backing storage types. But again, there is no plumbing in V8 to enable that.


Vyacheslav Egorov


On Mon, Feb 4, 2013 at 4:17 PM, Andreas Rossberg rossb...@google.comwrote:

 Moreover, 'map' has to make a hasOwnProperty check in every iteration.

 /Andreas

 On 4 February 2013 16:14, Sven Panne svenpa...@chromium.org wrote:
  On Mon, Feb 4, 2013 at 3:58 PM, Andrii Melnykov andy.melni...@gmail.com
 
  wrote:
 
  http://hpaste.org/81784 contains a benchmark - slow_test() is twice as
  slow as fast_test() [...]
 
 
  The way Array.prototype.map is specified (see section 15.4.4.19 in the
 ECMA
  spec) makes it very hard to implement efficiently. One has to create a
 new
  array for the result and has to be prepared for the case when the
 callback
  function modifies the array. Furthermore, we don't do any deforestation,
  which is a bit hard in JavaScript. Therefore, fast_test() basically
 does
  something different than slow_test(): It is optimized knowing the fact
 that
  the callback function does not modify the underlying array + it does the
  deforestation by hand, avoiding the need for an intermediate array.
 
  In a nutshell: It shouldn't be a surprise that fast_test() is, well,
 faster
  than slow_test()...
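The original hpaste snippet is not reproduced here; a hypothetical pair in the same spirit illustrates the hand-deforestation being described (the function names mirror the thread, the bodies are invented):

```javascript
// Generic path: map allocates an intermediate array and performs the
// per-element spec bookkeeping; reduce then walks that array again.
function slow_test(xs) {
  return xs.map(function (x) { return x * x; })
           .reduce(function (a, b) { return a + b; }, 0);
}

// Hand-deforested path: one fused loop, no intermediate array, and no
// per-element hasOwnProperty-style checks.
function fast_test(xs) {
  var sum = 0;
  for (var i = 0; i < xs.length; i++) sum += xs[i] * xs[i];
  return sum;
}
```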
 
 
 









Re: [v8-users] What causes a function to be optimized too many times? How do I avoid it?

2012-11-08 Thread Vyacheslav Egorov
Figuring out which value requires looking at the IR dumped with
--trace-hydrogen.

As for logs: on Windows you can patch your chrome.exe as I describe here:

http://mrale.ph/blog/2012/06/21/v8s-flags-and-chrome-on-windows.html

and then a simple unix-style redirection works from the command prompt:

chrome.exe --no-sandbox --js-flags="--trace-opt --trace-deopt" > log.txt

Vyacheslav Egorov


On Thu, Nov 8, 2012 at 4:48 AM, Kevin Gadd kevin.g...@gmail.com wrote:
 Interesting, I wonder why --trace-deopt isn't spitting out deopt notices for
 me. Maybe some of the output is being lost because I'm using WinDbg to
 capture it. I used to get deopt output there, though...

 Is there a way to tell which value is causing check-prototype-maps to fail?
 Is it a check performed on the this-reference?

 Thanks for taking a look, I appreciate it. I did some more testing using the
 release version of chrome and at present most JSIL code seems to perform
 dramatically better there - I'm seeing 4-5x performance regressions for some
 simple hot functions in Canary, like this one for example (source from a
 local build - haven't uploaded it to production yet because I'm wary of
 making things worse):

 function KinematicBody_get_DynamicAreaSubPx () {
   var areaPosition = this._area.get_PositionSubPx();
   var x = ((areaPosition.X - this.HalfWidthSubPx) | 0);
   var y = ((areaPosition.Y - this.HalfHeightSubPx) | 0);
   var w = ((this.HalfWidthSubPx * 2) | 0);
   var h = ((this.HalfHeightSubPx * 2) | 0);
   if (!((this._DynamicAreaSubPx.X === x) &&
         (this._DynamicAreaSubPx.Y === y) &&
         (this._DynamicAreaSubPx.Width === w) &&
         (this._DynamicAreaSubPx.Height === h))) {
     this._DynamicAreaSubPx = new ($T15())(x, y, w, h);
   }
   return this._DynamicAreaSubPx;
 }

 In that function all the direct property accesses aren't going through
 getter/setter functions, so there shouldn't be very much actually happening
 in there. This seems to be supported by it performing fine in release branch
 Chrome. It makes me wonder if some particular pattern in my generated code
 is causing newer revisions of V8 some grief (lazy initialization, perhaps?)

 Sven, let me know if I can provide you additional details (or chrome traces,
 or whatever) to help you investigate this.

 Thanks,
 -kg



 On Wed, Nov 7, 2012 at 9:29 AM, Vyacheslav Egorov vego...@chromium.org
 wrote:

 I asked because it is highly unlikely that any big application runs
 without deopts.

 I just tried to run the game in Chrome Canary on Mac with
 --js-flags="--trace-deopt --code-comments" and I saw many deopts.

 DrawScaleF constantly deopts on check-prototype-maps.

  DEOPT: DrawScaleF at bailout #19, address 0x0, frame size 40
 ;;; @292: check-prototype-maps.
 [deoptimizing: begin 0x51b14d05 DrawScaleF @19]
   translating DrawScaleF = node=190, height=76
 0xbff70d18: [top + 128] - 0x585febd1 ; [sp + 92] 0x585febd1 a
 Microsoft_Xna_Framework_Graphics_SpriteBatch
 0xbff70d14: [top + 124] - 0x24ed70d1 ; [sp + 88] 0x24ed70d1 an
 HTML5ImageAsset
 0xbff70d10: [top + 120] - 0x4e4b96d9 ; [sp + 84] 0x4e4b96d9 an
 Object
 0xbff70d0c: [top + 116] - 0x4e4e07d5 ; [sp + 80] 0x4e4e07d5 a
 Microsoft_Xna_Framework_Rectangle
 0xbff70d08: [top + 112] - 0x4e4cd1ed ; [sp + 76] 0x4e4cd1ed a
 Microsoft_Xna_Framework_Color
 0xbff70d04: [top + 108] - 0x ; [sp + 72] 0
 0xbff70d00: [top + 104] - 0x462f1199 ; [sp + 68] 0x462f1199 a
 Microsoft_Xna_Framework_Vector2
 0xbff70cfc: [top + 100] - 0x0002 ; [sp + 64] 1
 0xbff70cf8: [top + 96] - 0x449a2edd ; [sp + 60] 0x449a2edd a
 Microsoft_Xna_Framework_Graphics_SpriteEffects
 0xbff70cf4: [top + 92] - 0x47b2609d ; [sp + 56] 0x47b2609d
 Number: 0.1290322580645161
 0xbff70cf0: [top + 88] - 0x223b0d4b ; caller's pc
 0xbff70cec: [top + 84] - 0xbff70d28 ; caller's fp
 0xbff70ce8: [top + 80] - 0x51b12e11 ; context
 0xbff70ce4: [top + 76] - 0x51b14d05 ; function
 0xbff70ce0: [top + 72] - 0x00a0 ; [sp + 28] 80
 0xbff70cdc: [top + 68] - 0x0040 ; [sp + 20] 32
 0xbff70cd8: [top + 64] - 0x0020 ; [sp + 24] 16
 0xbff70cd4: [top + 60] - 0x0020 ; [sp + 12] 16
 0xbff70cd0: [top + 56] - 0x51b14d05 ; [sp + 16] 0x51b14d05 JS
 Function DrawScaleF
 0xbff70ccc: [top + 52] - 0x585febd1 ; [sp + 92] 0x585febd1 a
 Microsoft_Xna_Framework_Graphics_SpriteBatch
 0xbff70cc8: [top + 48] - 0x24ed70d1 ; [sp + 88] 0x24ed70d1 an
 HTML5ImageAsset
 0xbff70cc4: [top + 44] - 0x02f0 ; [sp + 8] 376
 0xbff70cc0: [top + 40] - 0x0010 ; [sp + 4] 8
 0xbff70cbc: [top + 36] - 0x0020 ; [sp + 24] 16
 0xbff70cb8: [top + 32] - 0x0020 ; [sp + 12] 16
 0xbff70cb4: [top + 28] - 0x00a0 ; [sp + 28] 80
 0xbff70cb0: [top + 24] - 0x0040 ; [sp + 20] 32
 0xbff70cac: [top + 20] - 0x0020 ; [sp + 24] 16
 0xbff70ca8: [top + 16] - 0x0020 ; [sp + 12] 16
 0xbff70ca4: [top + 12

Re: [v8-users] What causes a function to be optimized too many times? How do I avoid it?

2012-11-07 Thread Vyacheslav Egorov
I asked because it is highly unlikely that any big application runs
without deopts.

I just tried to run the game in Chrome Canary on Mac with
--js-flags="--trace-deopt --code-comments" and I saw many deopts.

DrawScaleF constantly deopts on check-prototype-maps.

 DEOPT: DrawScaleF at bailout #19, address 0x0, frame size 40
;;; @292: check-prototype-maps.
[deoptimizing: begin 0x51b14d05 DrawScaleF @19]
  translating DrawScaleF => node=190, height=76
0xbff70d18: [top + 128] <- 0x585febd1 ; [sp + 92] 0x585febd1 a
Microsoft_Xna_Framework_Graphics_SpriteBatch
0xbff70d14: [top + 124] <- 0x24ed70d1 ; [sp + 88] 0x24ed70d1 an
HTML5ImageAsset
0xbff70d10: [top + 120] <- 0x4e4b96d9 ; [sp + 84] 0x4e4b96d9 an Object
0xbff70d0c: [top + 116] <- 0x4e4e07d5 ; [sp + 80] 0x4e4e07d5 a
Microsoft_Xna_Framework_Rectangle
0xbff70d08: [top + 112] <- 0x4e4cd1ed ; [sp + 76] 0x4e4cd1ed a
Microsoft_Xna_Framework_Color
0xbff70d04: [top + 108] <- 0x ; [sp + 72] 0
0xbff70d00: [top + 104] <- 0x462f1199 ; [sp + 68] 0x462f1199 a
Microsoft_Xna_Framework_Vector2
0xbff70cfc: [top + 100] <- 0x0002 ; [sp + 64] 1
0xbff70cf8: [top + 96] <- 0x449a2edd ; [sp + 60] 0x449a2edd a
Microsoft_Xna_Framework_Graphics_SpriteEffects
0xbff70cf4: [top + 92] <- 0x47b2609d ; [sp + 56] 0x47b2609d
Number: 0.1290322580645161
0xbff70cf0: [top + 88] <- 0x223b0d4b ; caller's pc
0xbff70cec: [top + 84] <- 0xbff70d28 ; caller's fp
0xbff70ce8: [top + 80] <- 0x51b12e11 ; context
0xbff70ce4: [top + 76] <- 0x51b14d05 ; function
0xbff70ce0: [top + 72] <- 0x00a0 ; [sp + 28] 80
0xbff70cdc: [top + 68] <- 0x0040 ; [sp + 20] 32
0xbff70cd8: [top + 64] <- 0x0020 ; [sp + 24] 16
0xbff70cd4: [top + 60] <- 0x0020 ; [sp + 12] 16
0xbff70cd0: [top + 56] <- 0x51b14d05 ; [sp + 16] 0x51b14d05 JS
Function DrawScaleF
0xbff70ccc: [top + 52] <- 0x585febd1 ; [sp + 92] 0x585febd1 a
Microsoft_Xna_Framework_Graphics_SpriteBatch
0xbff70cc8: [top + 48] <- 0x24ed70d1 ; [sp + 88] 0x24ed70d1 an
HTML5ImageAsset
0xbff70cc4: [top + 44] <- 0x02f0 ; [sp + 8] 376
0xbff70cc0: [top + 40] <- 0x0010 ; [sp + 4] 8
0xbff70cbc: [top + 36] <- 0x0020 ; [sp + 24] 16
0xbff70cb8: [top + 32] <- 0x0020 ; [sp + 12] 16
0xbff70cb4: [top + 28] <- 0x00a0 ; [sp + 28] 80
0xbff70cb0: [top + 24] <- 0x0040 ; [sp + 20] 32
0xbff70cac: [top + 20] <- 0x0020 ; [sp + 24] 16
0xbff70ca8: [top + 16] <- 0x0020 ; [sp + 12] 16
0xbff70ca4: [top + 12] <- 0x4e4cd1ed ; [sp + 76] 0x4e4cd1ed a
Microsoft_Xna_Framework_Color
0xbff70ca0: [top + 8] <- 0x ; [sp + 72] 0
0xbff70c9c: [top + 4] <- 0x0010 ; [sp + 0] 8
0xbff70c98: [top + 0] <- 0x0010 ; eax 8
[deoptimizing: end 0x51b14d05 DrawScaleF => node=190, pc=0x497cac9d,
state=NO_REGISTERS, alignment=no padding, took 0.060 ms]
[removing optimized code for: DrawScaleF]

I do not see such deopt on Chrome 23 (though I did see some deopts of
this function). This indeed looks like an issue either with type
feedback or with generated code, though I can't be sure.

Sven recently was changing things in that neighborhood. I am CCing
him. I hope he will be able to help.
Vyacheslav Egorov


On Tue, Nov 6, 2012 at 4:16 PM, Kevin Gadd kevin.g...@gmail.com wrote:
 Hi Vyacheslav,

 Yeah, as I said I ran with trace-opt, trace-bailout and trace-deopt turned
 on. So 'disabled optimization for' doesn't mean the function is deoptimized?
 That's really surprising to me, because I see a performance hit for those
 functions, and I assume that optimization being turned off would mean that
 the functions would have to run using unoptimized JIT output. That's not the
 case then? Does that mean that this error message doesn't matter, and it's
 intended that these functions keep getting recompiled until they hit the
 limit?

 It would be cool to know how to find out why the functions keep getting
 marked for recompilation, since the compiles seem to be taking time, but I
 guess that's less of an issue.

 Thanks,
 -kg


 On Tue, Nov 6, 2012 at 8:04 AM, Vyacheslav Egorov vego...@chromium.org
 wrote:

 Hi Kevin,

 Does it deoptimize?

 I do not see any deoptimizations in the log you have attached, were
 you running with --trace-deopt?

 Vyacheslav Egorov


 On Tue, Nov 6, 2012 at 12:33 AM, Kevin Gadd kevin.g...@gmail.com wrote:
  Hi,
 
  I've been looking into some performance issues in Chrome Canary for a
  HTML5
  game I released a little while back. At present, Firefox Nightly runs
  this
  game a lot faster than Canary does, which is surprising to me because
  Chrome
  has a much better Canvas backend and used to do much better at running
  this
  game. From doing some profiling and comparing profiles between the
  browsers,
  I am pretty sure I am running into a V8 issue here - perhaps because
  something is wrong with my JS.
 
  I ran the game with trace-opt, trace-bailout and trace-deopt turned on.
  I
  see tons and tons of marking

Re: [v8-users] Does changing the type passed to a constructor change a hidden class (and other queries)?

2012-11-06 Thread Vyacheslav Egorov
The answer is no for each case.

V8 does not track types of values assigned to a named properties.

Vyacheslav Egorov


On Tue, Nov 6, 2012 at 12:27 PM, Wyatt deltabathyme...@gmail.com wrote:
 With the following code, one hidden class is created:

 function Point(x, y) {
 this.x = x;
 this.y = y;
 }
 var p1 = new Point(11, 22);
 var p2 = new Point(33, 44);

 Will p2.x = "aString"; change its hidden class?

 Will p2.x = undefined; change its hidden class?

 Will p3 = new Point(42, "theAnswer"); create a new hidden class?

 I'm inclined to think that the answer is yes for each case..?
 Or at least each of these cases seems as if it could not be fully optimized.

 Any help is much appreciated!



Re: [v8-users] Does changing the type passed to a constructor change a hidden class (and other queries)?

2012-11-06 Thread Vyacheslav Egorov
In V8 currently most assumptions are made and checked at uses, not at
definitions.

Consider for example:

var p = new Point(1, 2);

function add(p) {
  return p.x + p.y
}

add(p);

Here the fact that p.x and p.y are numbers is checked at the +
operation. If add is optimized for these assumptions and you pass a
point that contains strings in x and y, then the function add will
deoptimize. But this will happen when you execute add, not when you
create a point with string values in x and y.
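A minimal sketch of this, assuming you run it under d8 or node with --trace-deopt to watch the deopt happen (exact trace output varies by V8 version):

```javascript
function Point(x, y) { this.x = x; this.y = y; }

// The "x and y are numbers" assumption is checked at the + here,
// not at the property stores inside the constructor.
function add(p) { return p.x + p.y; }

// Warm up: add gets optimized under the numbers-only assumption.
for (var i = 0; i < 100000; i++) add(new Point(1, 2));

var bad = new Point("a", "b"); // creating this point triggers nothing
var s = add(bad);              // the deoptimization happens here, inside add
```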
Vyacheslav Egorov


On Tue, Nov 6, 2012 at 3:44 PM, Wyatt deltabathyme...@gmail.com wrote:
 Interesting! But wouldn't each of these cases nullify any assumptions
 made by the type-specializing JIT?



Re: [v8-users] Re: --nouse-idle-notification and last resort gc

2012-11-05 Thread Vyacheslav Egorov
Hello Joran,

last resort gc means that there was an allocation failure that a
normal GC could not resolve. Basically you are in a kinda-OOM situation.
I am kinda curious what kind of allocation it is. Probably it is some
very big object. It can be that the allocation attempt does not correctly
fall into allocating from LO (large object) space.

One thing though is that last resort GC could be much more lightweight
for a node.js application than it is currently. I doubt 7 GCs in a row
are very helpful. As a workaround you can go into
Heap::CollectAllAvailableGarbage and replace everything inside with

CollectGarbage(OLD_POINTER_SPACE, gc_reason);

This should get rid of the 7 repetitive GCs. I think for an application
like yours it makes perfect sense to set internal GC limits very high
and let the incremental GC crunch things instead of falling back to
non-incremental marking. But there is currently no way to configure the
GC like that.
Vyacheslav Egorov


On Mon, Nov 5, 2012 at 12:50 AM, Joran Dirk Greef jo...@ronomon.com wrote:
 Max-old-space-size is measured in MB not KB as you suggest.

 Further, max-new-space-size makes no difference to the GC trace given above,
 whether it's passed as flag or not, big or small.

 On Monday, November 5, 2012 10:21:11 AM UTC+2, Yang Guo wrote:

 The short answer is: don't mess with GC settings if you don't know what
 you are doing.

 The long answer is: new space is the part of the heap where short-living
 objects are allocated. The GC scans new space on every collection and
 promotes long-living objects into the old space. You are setting the new
 space to ~19GB, which takes a while to scan. Furthermore, you are setting
 the old space to only 19MB, limiting the part of the heap where long-living
 objects are being moved to, hence the last resort GC. What you probably want
 is to specify a large old space size, but leave the new space size at
 default.

 Yang

 On Sunday, November 4, 2012 4:19:11 PM UTC+1, Joran Dirk Greef wrote:

 I am running Node v0.8.14 with --nouse_idle_notification --expose_gc
 --max_old_space_size=19000 --max_new_space_size=1900.

 I have a large object used as part of a BitCask style store, keeping a
 few million entries.

 Calling gc() manually takes 3 seconds, which is fine as I call it every
 2 minutes.

 The machine has 32GB of RAM and all of this is available to the process,
 there is nothing else running.

 The process sits at around 1.9GB of RAM.

 I have found an interesting test case where async reading a 1mb file in
 Node takes longer and longer depending on how many entries are in the large
 object discussed above:

 Node.fs.readFile('test', 'binary', End.timer())
   347745 ms: Scavenge 1617.4 (1660.4) -> 1611.1 (1660.4) MB, 0 ms
 [allocation failure].
   350900 ms: Mark-sweep 1611.5 (1660.4) -> 1512.2 (1633.4) MB, 3153 ms
 [last resort gc].
   354072 ms: Mark-sweep 1512.2 (1633.4) -> 1512.0 (1592.4) MB, 3171 ms
 [last resort gc].
   357247 ms: Mark-sweep 1512.0 (1592.4) -> 1512.0 (1568.4) MB, 3175 ms
 [last resort gc].
   360426 ms: Mark-sweep 1512.0 (1568.4) -> 1512.0 (1567.4) MB, 3178 ms
 [last resort gc].
   363620 ms: Mark-sweep 1512.0 (1567.4) -> 1512.0 (1567.4) MB, 3193 ms
 [last resort gc].
   366802 ms: Mark-sweep 1512.0 (1567.4) -> 1511.6 (1567.4) MB, 3182 ms
 [last resort gc].
   369967 ms: Mark-sweep 1511.6 (1567.4) -> 1511.6 (1567.4) MB, 3164 ms
 [last resort gc].
 2012-11-04T14:59:30.700Z INFO 22230ms

 Reading the 1mb file before the large object is created is fast, the
 bigger the object becomes the slower the file is to read.

 Why is last resort gc being called if gc is exposed and if the machine
 has more than enough RAM?

 What was interesting was that this behaviour does not happen for V8
 3.6.6.25 and earlier.

 The reason I can't use 3.6.6.25 however is that the heap is limited to
 1.9GB and I need more head room than that.

 Is there any way I can disable the last resort GC?



Re: [v8-users] V8 3.6.6.25 with max-old-space-size greater than 1900MB?

2012-11-05 Thread Vyacheslav Egorov
 Recent GC changes are unable to handle millions of long-lived entities. V8 
 3.6.6.25 GC works perfectly.

Contrary to what you might think worst pause time for V8 3.6.x and V8
3.7 - 3.15 should be roughly the same. V8 3.7 will also do 7 GCs in a
row as a last resort.

However in 3.6, if you hit a full collection, it will always pause your
app for much longer than the incremental collector of 3.7 and later
would (given that everything is tweaked correctly).

--
Vyacheslav Egorov


On Mon, Nov 5, 2012 at 1:28 AM, Joran Dirk Greef jo...@ronomon.com wrote:
 Recent GC changes in V8 are wreaking havoc with a production app. GC traces
 are showing pauses of over 22 seconds. Recent GC changes are unable to
 handle millions of long-lived entities.

 V8 3.6.6.25 GC works perfectly.

 The one problem now is getting V8 3.6.6.25 to allow max-old-space-size
 greater than 1900 MB on Ubuntu.

 Is there any way to run V8 3.6.6.25 with max-old-space-size greater than
 1900 MB?

 Or is there a slightly newer version than 3.6.6.25 which allows bigger heaps
 but without all the new GC work?

 Your help would be much appreciated.



Re: [v8-users] V8 3.6.6.25 with max-old-space-size greater than 1900MB?

2012-11-05 Thread Vyacheslav Egorov
Node should not be able to trigger last resort gc.

It can be that recent changes in V8 changed allocation patterns for
some large object (array, properties backing store etc) and this now
causes last resort GC to happen.

Unfortunately it is impossible to figure out what is going on unless
you can somehow get a back trace from inside
CollectAllAvailableGarbage.

--
Vyacheslav Egorov


On Mon, Nov 5, 2012 at 7:08 AM, Joran Dirk Greef jo...@ronomon.com wrote:
 In practice it's working perfectly now. I rolled Node from v0.8 back to v0.6
 and the false positive allocation errors are no longer happening. There's no
 more last resort gc. Load has dropped from 100% to 1%. The gc trace looks
 normal now. I assumed the GC errors were due to the different version of V8
 bundled with Node. Perhaps it's something in Node triggering full GC
 repetitively? Would Node trigger GC by itself?


 On Monday, November 5, 2012 4:53:59 PM UTC+2, Vyacheslav Egorov wrote:

  Recent GC changes are unable to handle millions of long-lived entities.
  V8 3.6.6.25 GC works perfectly.

 Contrary to what you might think worst pause time for V8 3.6.x and V8
 3.7 - 3.15 should be roughly the same. V8 3.7 will also do 7 GCs in a
 row as a last resort.

 However in 3.6, if you hit a full collection, it will always pause your
 app for much longer than the incremental collector of 3.7 and later
 would (given that everything is tweaked correctly).

 --
 Vyacheslav Egorov


 On Mon, Nov 5, 2012 at 1:28 AM, Joran Dirk Greef jo...@ronomon.com
 wrote:
  Recent GC changes in V8 are wreaking havoc with a production app. GC
  traces
  are showing pauses of over 22 seconds. Recent GC changes are unable to
  handle millions of long-lived entities.
 
  V8 3.6.6.25 GC works perfectly.
 
  The one problem now is getting V8 3.6.6.25 to allow max-old-space-size
  greater than 1900 MB on Ubuntu.
 
  Is there any way to run V8 3.6.6.25 with max-old-space-size greater than
  1900 MB?
 
  Or is there a slightly newer version than 3.6.6.25 which allows bigger
  heaps
  but without all the new GC work?
 
  Your help would be much appreciated.
 


Re: [v8-users] Re: [V8-Users] V8 3.6.6.25 With Max-Old-Space-Size Greater Than 1900MB?

2012-11-05 Thread Vyacheslav Egorov
Depends on the operating system you are running in.

Check out man backtrace if you are on Mac/Linux/BSD like OS.
Vyacheslav Egorov


On Mon, Nov 5, 2012 at 8:17 AM, Joran Greef jo...@ronomon.com wrote:
 How can I get such a back trace?

 On 05 Nov 2012, at 6:14 PM, v8-users@googlegroups.com wrote:

 Node should not be able to trigger last resort gc.

 It can be that recent changes in V8 changed allocation patterns for
 some large object (array, properties backing store etc) and this now
 causes last resort GC to happen.

 Unfortunately it is impossible to figure out what is going on unless
 you can somehow get a back trace from inside
 CollectAllAvailableGarbage.

 --
 Vyacheslav Egorov


 On Mon, Nov 5, 2012 at 7:08 AM, Joran Dirk Greef jo...@ronomon.com wrote:
 In practice it's working perfectly now. I rolled Node from v0.8 back to v0.6
 and the false positive allocation errors are no longer happening. There's no
 more last resort gc. Load has dropped from 100% to 1%. The gc trace looks
 normal now. I assumed the GC errors were due to the different version of V8
 bundled with Node. Perhaps it's something in Node triggering full GC
 repetitively? Would Node trigger GC by itself?


 On Monday, November 5, 2012 4:53:59 PM UTC+2, Vyacheslav Egorov wrote:

 Recent GC changes are unable to handle millions of long-lived entities..
 V8 3.6.6.25 GC works perfectly.

 Contrary to what you might think worst pause time for V8 3.6.x and V8
 3.7 - 3.15 should be roughly the same. V8 3.7 will also do 7 GCs in a
 row as a last resort.

 However in 3.6, if you hit a full collection, it will always pause your
 app for much longer than the incremental collector of 3.7 and later
 would (given that everything is tweaked correctly).



Re: [v8-users] Is empty functions optimized away whenever know?

2012-10-21 Thread Vyacheslav Egorov
V8 does inline functions at call sites where the target is observed to be
always the same. The inlined body is guarded by an identity check against
the identity of the call target. If the guard fails, the code is
deoptimized.

Thus what matters is whether each call site is monomorphic (sees the same
function all the time) or megamorphic (sees different functions).

Without seeing the complete code it is hard to say whether you will help
inlining by creating a single empty function (inlining definitely will not
happen if you create new functions and send them to a single call site
again and again). But you will definitely save space.

--
Vyacheslav Egorov
 On Oct 20, 2012 10:05 PM, idleman evoo...@gmail.com wrote:

 Hi,

 Are empty functions inlined whenever the function is known? Example:

 function do_nothing() { }

 //somewhere later in the code:
 var cb = do_nothing;
 cb(null, "Will this call be inlined/optimized away?");

 Will V8 actually call the function, even if it does nothing? I wonder
 because I want to know whether it is smarter to create a do_nothing()
 function which will be reused over and over again (but is less obvious) or
 to create an empty function() { } directly in place each time and let V8
 more easily optimize away the call.

 Thanks in advance!




Re: [v8-users] Is empty functions optimized away whenever know?

2012-10-21 Thread Vyacheslav Egorov
Well, if you create a single global function you will at least save memory
and allocation time (as a function literal creates a new function every time
it is executed).

Additionally, if you always pass an empty function to async_http_get then
it's better to create a single function to help inlining, as explained in my
previous mail.

Anyway, all this matters only on a very hot path.
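A sketch of the two options (callCb is a hypothetical stand-in for an API that takes a callback, kept synchronous so the example is self-contained):

```javascript
// A single shared no-op: allocated once, and every call site that
// receives it keeps seeing the same function object (monomorphic).
function do_nothing() {}

// Hypothetical callback-taking API, standing in for async_http_get etc.
function callCb(cb) { return cb(); }

for (var i = 0; i < 3; i++) {
  callCb(do_nothing);     // same function object on every iteration
  callCb(function () {}); // fresh closure allocated on every iteration
}
```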

Vyacheslav Egorov
On Oct 21, 2012 7:10 PM, idleman evoo...@gmail.com wrote:

 Thanks for your answer!

 To make the question somewhat clearer: I invoke a huge number of asynchronous
 functions, but sometimes I just don't care about the result, even though the
 function itself requires a callback to pass the result of the operation to.
 Would it in those cases be smarter to create a global do_nothing() function
 and pass it into all the asynchronous functions where I don't care about
 the result, rather than just creating a new empty function on each invocation:

 //new empty functions each time
 async_http_get("http://statics.com?webpage=abc", function() { });
 async_flush(function() { });

 //or using a global do_nothing function
 function do_nothing() { }

 async_http_get("http://statics.com?webpage=abc", do_nothing);
 async_flush(do_nothing);

 My question is whether V8 would more easily optimize away the callback call
 when using anonymous empty functions, because if it does, it would be
 pointless to create a global do_nothing in the first place.

 But from what I understood, V8 does no such optimization? What would be
 better in that case, using a global do_nothing() or not?

 Thanks in advance!


 On Sunday, 21 October 2012 at 16:15:19 UTC+2, Vyacheslav
 Egorov wrote:

 V8 does inline functions at call sites where the target is observed to be
 always the same. The inlined body is guarded by an identity check against
 the identity of the call target. If the guard fails, the code is
 deoptimized.

 Thus what matters is whether each call site is monomorphic (sees the
 same function all the time) or megamorphic (sees different functions).

 Without seeing the complete code it is hard to say whether you will help
 inlining by creating a single empty function (inlining definitely will not
 happen if you create new functions and send them to a single call site
 again and again). But you will definitely save space.

 --
 Vyacheslav Egorov
  On Oct 20, 2012 10:05 PM, idleman evo...@gmail.com wrote:

 Hi,

 Are empty functions inlined whenever the function is known? Example:

 function do_nothing() { }

 //somewhere later in the code:
 var cb = do_nothing;
 cb(null, "Will this call be inlined/optimized away?");

 Will V8 actually call the function, even if it does nothing? I wonder
 because I want to know whether it is smarter to create a do_nothing()
 function which will be reused over and over again (but is less obvious), or
 to create an empty function { } directly in place each time and let V8 more
 easily optimize away the call.

 Thanks in advance!



  --
 v8-users mailing list
 v8-u...@googlegroups.com
 http://groups.google.com/group/v8-users


Re: [v8-users] documentation for deopt bailouts

2012-10-11 Thread Vyacheslav Egorov
Hi,

 how do i see why the method is deopted? for example here:

If you run with --trace-deopt and --code-comments then in most cases you will
get the LIR instruction that deopted in the output (though it is not
always correct). The only reliable way to figure out what caused a deopt is
to run with --print-opt-code and read the assembly around the deopt point.
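Concretely, the workflow looks something like this (flag spellings as given in the post; availability and exact output vary across V8 and node versions, and the function below is invented):

```javascript
// Run with:
//   node --trace-deopt --code-comments app.js   // map deopts to LIR
//   d8 --print-opt-code app.js                  // dump optimized assembly
// A function shaped so that a later hidden-class change could deopt it:
function readX(o) { return o.x + 1; }
readX({ x: 1 });        // warms up with one hidden class
readX({ x: 2, y: 3 });  // a differently-shaped object may trigger a deopt
```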

 does that mean i should replace the for in with a for loop that iterates
 over Object.keys?

Yes.
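The suggested rewrite looks something like this (using the paramsToString name from the bailout message; the body and output format are invented for illustration):

```javascript
// Replace a for-in loop (which hit "ForInStatement is not fast case")
// with an indexed loop over Object.keys.
function paramsToString(params) {
  var keys = Object.keys(params);
  var parts = [];
  for (var i = 0; i < keys.length; i++) {
    parts.push(keys[i] + '=' + params[keys[i]]);
  }
  return parts.join('&');
}
```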

--
Vyacheslav Egorov


On Thu, Oct 11, 2012 at 11:48 AM, Christoph Sturm m...@christophsturm.com 
wrote:
 I'm trying to optimize my node app with --trace-deopt

 how do i see why the method is deopted? for example here:

  DEOPT: Wlbl.Checker.checkUrl at bailout #24, address 0x0, frame size 88
 [deoptimizing: begin 0x25d8e5e85e71 Wlbl.Checker.checkUrl @24]
   translating Wlbl.Checker.checkUrl => node=260, height=40

 also when i log optimizer bailouts, i see this:
 Bailout in HGraphBuilder: @exports.paramsToString: ForInStatement is not
 fast case

 does that mean i should replace the for in with a for loop that iterates
 over Object.keys?

 thanks
  chris



Re: [v8-users] Re: ia32 bug?

2012-09-07 Thread Vyacheslav Egorov
Chromium's bindings layer uses External to associate opaque data with
V8 objects and callbacks (see methods accepting HandleValue data in
V8 api).

So making External into a non-Value might involve some bindings work.

--
Vyacheslav Egorov


On Fri, Sep 7, 2012 at 2:50 PM, Sven Panne svenpa...@chromium.org wrote:
 After several discussions, it is not so clear anymore what to do. First of
 all, SilkJS does not follow https://developers.google.com/v8/embed#dynamic
 on how to handle foreign (i.e. C/C++) pointers when embedding v8. The return
 value of External::New is supposed to live in an internal field, but it is
 *not* a valid JavaScript value, it is just a Foreign in disguise, sometimes
 optimized to a Smi. Our v8.h header is very confusing regarding this fact,
 and having External as a subclass of Value is basically wrong. Furthermore
 Value::IsExternal is completely broken. I can see 2 ways of fixing things:

* Keep External's implementation basically as it is, i.e. either a Smi or
 a Foreign. If we do this, we should not keep External as a subclass of Value
 (perhaps a subclass of Data?) and we should remove the IsExternal predicate.
 This means that e.g. SilkJS has to change, following
 https://developers.google.com/v8/embed#dynamic. As it is, one can easily
 crash SilkJS from pure JavaScript.

* Make External basically a convenience wrapper for a JavaScript object
 with an internal property containing a Foreign. This way we could keep
 External a subclass of Value and we could fix IsExternal. The downside is
 that all code already following
 https://developers.google.com/v8/embed#dynamic would basically do a useless
 double indirection, punishing people following that guide.

 We will discuss these options, there are good arguments for both of them...

 Cheers,
S.



Re: [v8-users] Re: [V8-Users] Is There A Limit To Number Of Properties In An Object?

2012-08-21 Thread Vyacheslav Egorov
Minor correction: I obviously meant to say "way below", not "way beyond".

Vyacheslav Egorov
On Aug 21, 2012 8:03 AM, Joran Greef jo...@ronomon.com wrote:

 Thank you Vyacheslav

 On 20 Aug 2012, at 8:41 PM, v8-users@googlegroups.com wrote:

  I would say limit is around 2^24 entries (biggest fixed array can have
 approx 2^27 entries and hash table requires 3 entries per key-value pair
 and tries to maintain 50% occupancy). But overheads for mutating such a
 table become less than reasonable way beyond this point.
 
  Vyacheslav Egorov
  On Aug 18, 2012 4:34 PM, Joran Greef jo...@ronomon.com wrote:
  I am using a vanilla {} as a hash with 24 byte string keys. It currently
 has 5,500,000 entries.
 
  Is there a limit to the number of properties in such an Object?
 

Re: [v8-users] garbage collection of anonymous functions

2012-08-20 Thread Vyacheslav Egorov
Hi Morten,

Listeners are implicitly referenced by a DOM node wrapper for a node to
which they are attached.

http://code.google.com/searchframe#OAMlx_jo-ck/src/third_party/WebKit/Source/WebCore/bindings/v8/V8GCController.cpp?exact_package=chromium&q=addimplicitreference&type=cs&l=332

This keeps them alive as long as wrapper is alive.

--
Vyacheslav Egorov
On Aug 20, 2012 6:04 PM, Morten Olsen mortenol...@gmail.com wrote:

 Hi,

 I'm struggling to figure out precisely what/where keeps an anonymous
 function alive when used as an event handler in the WebKit integration,
 example:

 document.getElementById(clickMe).addEventListener('click', function (e)
 { alert(e); });

 WebKit only keeps a weak pointer to the function, and I can't figure out
 how it's kept alive through GC, as I can't seem to find a live pointer to it
 anywhere else, but I must be overlooking something.

 Regards, Morten


Re: [v8-users] Is there a limit to number of properties in an Object?

2012-08-20 Thread Vyacheslav Egorov
I would say limit is around 2^24 entries (biggest fixed array can have
approx 2^27 entries and hash table requires 3 entries per key-value pair
and tries to maintain 50% occupancy). But overheads for mutating such a
table become less than reasonable way beyond this point.
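The estimate above works out as follows (a back-of-the-envelope sketch of the stated numbers, not actual V8 internals):

```javascript
// Biggest fixed array ~2^27 slots, 3 slots per key-value pair,
// and the hash table maintains roughly 50% occupancy.
var maxSlots = Math.pow(2, 27);    // 134,217,728 backing-store slots
var maxPairs = maxSlots / 3;       // ~44.7M pairs if fully packed
var practical = maxPairs * 0.5;    // ~22.4M at 50% occupancy, roughly 2^24.4
```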

Vyacheslav Egorov
 On Aug 18, 2012 4:34 PM, Joran Greef jo...@ronomon.com wrote:

 I am using a vanilla {} as a hash with 24 byte string keys. It currently
 has 5,500,000 entries.

 Is there a limit to the number of properties in such an Object?


Re: [v8-users] Crankshaft without 64-bit hardware support

2012-08-13 Thread Vyacheslav Egorov
Are you planning to allocate a couple of general purpose registers to
represent a double? Do these registers have to be ordered? (e.g. do
you need r_{N}, r_{N+1})

Do you want a single build of V8 to work both on hardware that supports
real double registers and hardware that does not?

If you don't need binary portability then I don't think you need to
extend unallocated policies. Just change interpretation of
DOUBLE_REGISTER policy everywhere.

Otherwise, yeah, you need new policies (which would require some bit
stealing because LUnallocated is packed pretty tight already).

In any case it would require some adjustments in the allocator to get
what interferes with what right.

--
Vyacheslav Egorov


On Mon, Aug 13, 2012 at 7:53 AM, Evgeny Baskakov
evgeny.baska...@gmail.com wrote:
 Hi guys,

 I'm looking for ways to modify the Crankshaft compilation mode to make it
 work without 64-bit hardware registers.

 Could someone give me brief guidelines? Is it possible at all, without the
 whole V8 codebase reworking?

 My first impulse is to extend the LUnallocated policies set and make the
 codegen distinguish between single and coupled registers. Then, the lithium
 codegen would use the CPU-based code stubs instead of native double-related
 instructions. What pitfalls should I be aware of here?

 --Evgeny



Re: [v8-users] Crankshaft without 64-bit hardware support

2012-08-13 Thread Vyacheslav Egorov
Here is another possibility: in low level IR explode double values
into pairs of low-level values.

d2 = Mul d0, d1

becomes something like

(i20, i21) = Mul (i00, i01), (i10, i11)

That would require a minor change in the pipeline to allow multiple
return values, but I think it might be less painful than supporting
register pairs across the pipeline.
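The "exploded" representation can be illustrated in plain JavaScript: a 64-bit double viewed as a pair of 32-bit words, the way each double-valued IR value would be split into two low-level values (a sketch only; word order depends on endianness):

```javascript
// Two typed-array views over the same 8 bytes.
var buf = new ArrayBuffer(8);
var asDouble = new Float64Array(buf);
var asWords = new Uint32Array(buf);

asDouble[0] = 1.5;
var lo = asWords[0], hi = asWords[1];  // the (i0, i1) pair for this value

// Reassembling the pair recovers the original double.
asWords[0] = lo;
asWords[1] = hi;
// asDouble[0] is 1.5 again
```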

--
Vyacheslav Egorov


On Mon, Aug 13, 2012 at 11:34 AM, Vyacheslav Egorov
vego...@chromium.org wrote:
 Are you planning to allocate a couple of general purpose registers to
 represent a double? Do these registers have to be ordered? (e.g. do
 you need r_{N}, r_{N+1})

 Do you want a single build of V8 to work both on hardware that supports
 real double registers and hardware that does not?

 If you don't need binary portability then I don't think you need to
 extend unallocated policies. Just change interpretation of
 DOUBLE_REGISTER policy everywhere.

 Otherwise, yeah, you need new policies (which would require some bit
 stealing because LUnallocated is packed pretty tight already).

 In any case it would require some adjustments in the allocator to get
 what interferes with what right.

 --
 Vyacheslav Egorov


 On Mon, Aug 13, 2012 at 7:53 AM, Evgeny Baskakov
 evgeny.baska...@gmail.com wrote:
 Hi guys,

 I'm looking for ways to modify the Crankshaft compilation mode to make it
 work without 64-bit hardware registers.

 Could someone give me brief guidelines? Is it possible at all, without the
 whole V8 codebase reworking?

 My first impulse is to extend the LUnallocated policies set and make the
 codegen distinguish between single and coupled registers. Then, the lithium
 codegen would use the CPU-based code stubs instead of native double-related
 instructions. What pitfalls should I be aware of here?

 --Evgeny



Re: [v8-users] Re: trace bailout stopped working on node 0.7.7 (V8 3.9.24.7) up to latest release 0.8.4

2012-08-01 Thread Vyacheslav Egorov
There is no such flag now. Sorry.

--
Vyacheslav Egorov


On Tue, Jul 31, 2012 at 6:56 PM, Sławek Janecki jane...@gmail.com wrote:
 Anyone? Is there a flag in the new V8 that tells me why my function won't be
 optimized?
 In previous V8 versions the optimizer would try to optimize and bail out when
 necessary (giving me bailout info).
 I want to know WHY my function will not be optimized at all.

 Thanks


 On Sunday, July 29, 2012 2:09:34 AM UTC+2, Sławek Janecki wrote:

 Using Node.js up to version 0.7.6 (v8 3.9.17), when I use the --trace-bailout
 flag I get output as expected (bailout info).

 Using Node from version 0.7.7 (v8 3.9.24.7) up to the latest release 0.8.4,
 --trace-bailout doesn't show any info.

 I've tried simple scripts with try/catch and 'with' (100% bailouts).

 Tracing hydrogen output on both node/v8 (0.7.6 and 0.7.7) versions tells me
 that my test function (with try/catch and 'with') isn't optimized (it's
 bailing out), but I don't see any info on node >= 0.7.7.

 Something changed? In the V8 sources the bailout flag is in place.
 Do I need to turn another flag on to get bailouts, or is this a bug?

 Thanks



Re: [v8-users] Re: [V8-Users] 64-Bit Integers

2012-07-27 Thread Vyacheslav Egorov
V8 implements ECMAScript as described in ECMA-262 5th.

V8 does not implement non-standard features, with two exceptions:

1) they are required for compatibility with existing code on the web.
2) they are highly likely to be included in the next standard (ES6)
(e.g. V8 experiments with: block scoping, proxies, collections,
modules; some of these features as implemented now do not actually
match the newest drafts of the ES6 spec because the spec changed since
they were implemented... This is a major disadvantage of implementing
features before the spec is frozen).

That said, there were actually some discussions about including 64-bit
types into ES6 as part of the Binary Data work (e.g.
http://wiki.ecmascript.org/doku.php?id=harmony:binary_data_discussion
). I don't know what the status of this is now. Any standardization
issues should be addressed to TC-39.
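The precision limit being discussed is easy to demonstrate: IEEE-754 doubles carry 53 bits of integer precision, so exact integers stop at 2^53:

```javascript
// Why 64-bit integers do not fit in a JS Number.
var max = Math.pow(2, 53);   // 9007199254740992
var stillExact = max - 1;    // representable exactly
var lost = max + 1;          // rounds back to 2^53: max + 1 === max
```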

--
Vyacheslav Egorov


On Fri, Jul 27, 2012 at 4:43 PM, Joran Greef jo...@ronomon.com wrote:
 Yes, it's not in the spec but are there ways and means to change this?

 Will Javascript be stuck forever with 51/52/53 bits?

 It would be great if V8 could support proper native 64-bit integers and 
 perhaps encourage other engines to do the same.

 Especially with the ubiquity of 64-bit elsewhere. It hampers common use 
 cases, e.g. hashing, compression.

 On 27 Jul 2012, at 2:50 PM, v8-users@googlegroups.com wrote:

 On Fri, Jul 27, 2012 at 2:42 PM, Joran Greef jo...@ronomon.com wrote:
 Would it be possible to have proper support for native 64-bit integers and 
 operations in Javascript?

 JS as a language does not specify them. It specifies a Number type and 
 (IIRC) 51 (52? 53?) bits of integer precision.

 --
 - stephan beal
 http://wanderinghorse.net/home/stephan/
 http://gplus.to/sgbeal




Re: [v8-users] Integer division when operands and target are integers

2012-07-25 Thread Vyacheslav Egorov
It does not have to be in the Canonicalization pass (not every
optimization that replaces instructions fits into it).

I am not entirely sure that it makes perfect sense to perform such an
optimization during range analysis itself, though I do see at least one
reason why you would want to do that: our conditional range
propagation does not associate range information with uses, so [alpha
> 0] information becomes lost after range analysis. This can be worked
around by attaching more information to HUseListNode* during range
analysis.

Honestly speaking it's hard to speculate if some patch would be
accepted or not without actually seeing a patch.

--
Vyacheslav Egorov


On Wed, Jul 25, 2012 at 4:20 PM, Evan Wallace evan@gmail.com wrote:
 Oh ok, that makes sense. So V8 generates an integer instruction only if the
 remainder is always zero. I've submitted this as issue 2258.

 I'm more than happy to contribute my patch but I'm new to V8 and my quick
 hack probably isn't the correct way to do it. It looks like instruction
 replacement should take place in the canonicalize pass but range information
 isn't available until range analysis. I did the optimization during range
 analysis both to make sure the preconditions hold (non-negative dividend and
 positive divisor) and to make sure the correct range is calculated for
 subsequent instructions. Would a patch that adds and removes instructions
 during range analysis be accepted?



Re: [v8-users] recommended V8 GC settings for nodejs in heroku (hitting heroku memory limits)

2012-07-25 Thread Vyacheslav Egorov
Yes, I suspect your heap is being paged out.

I don't know, though, why your app behaves so differently on different
machines. I suspect there is something in your/node.js/node-package
code that is OS specific and causes larger memory consumption (e.g.
it's buffering too many things in memory because writing stuff to
network/disk does not keep up with incoming data, etc.).

I don't think any GC tweaks will be helpful here.

--
Vyacheslav Egorov


On Tue, Jul 24, 2012 at 9:52 PM, spollack s...@pollackphoto.com wrote:
 here is a reading even closer to the peak:
 {rss:542273536,heapTotal:523510848,heapUsed:503801408}. lots of heap
 usage!

 as another aside: sometimes I get surprising results for rss, where rss is
 considerably smaller than heapTotal/heapUsed in the middle of the run, and
 then it quickly returns to what I would expect (rss > heapTotal > heapUsed).
 For example, this reading from mid-run:
 {rss:345870336,heapTotal:516328896,heapUsed:511765976}. Why would that
 be? Is the heap getting swapped out?



Re: [v8-users] recommended V8 GC settings for nodejs in heroku (hitting heroku memory limits)

2012-07-25 Thread Vyacheslav Egorov
You need to use something that gives you access to heap snapshots. I think
node-inspector does. But better ask on the node.js list.

Vyacheslav Egorov
On Jul 25, 2012 11:15 PM, spollack s...@pollackphoto.com wrote:

 I think you are probably right. after more testing, i can see that the GC
 is running, it just isn't freeing up very much. i probably am accidentally
 keeping references to state that i don't need. Are there any good tools to
 help identify what specifically might be getting held onto here?

 Thanks,
 Seth


Re: [v8-users] Integer division when operands and target are integers

2012-07-24 Thread Vyacheslav Egorov
Once type feedback has told Hydrogen that a division happened to produce a
double value, it does not try to revert it to integer division.

The only exception is the (quite fragile) MathFloorOfDiv optimization
performed by HUnaryMathOperation::Canonicalize.

Consider contributing your patch :-)
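As I understand the message above, the MathFloorOfDiv pattern targets code shaped like the first function below; the `|0` truncation idiom is another common way code asks for an integer result (both sketches assume small non-negative integer inputs):

```javascript
// Shape the MathFloorOfDiv canonicalization can recognize (a sketch).
function floorDiv(a, b) {
  return Math.floor(a / b);
}

// Bitwise-or with 0 truncates toward zero, another integer-division idiom.
function truncDiv(a, b) {
  return (a / b) | 0;
}
```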

--
Vyacheslav Egorov


On Tue, Jul 24, 2012 at 8:18 PM, Evan Wallace evan@gmail.com wrote:
 I've been trying to optimize image manipulation and I couldn't get V8 to
 emit integer division instructions. Does V8 currently emit any integer
 division instructions? It seems odd that it wouldn't because it does have
 the capability to emit them (see LDivI). I was going to submit a bug but
 wanted to check first that this really is the case.

 When using typed arrays, division causes lots of conversions to and from
 doubles. Since V8 does range analysis, it should be possible to emit integer
 division for at least the case with non-negative dividends and positive
 divisors when the target location is an integer. I hacked up a quick proof
 of concept yesterday and got an easy 2x speedup (for converting a
 premultiplied alpha image to non-premultiplied alpha). This puts V8 at the
 speed of optimized C code and seems like too good an optimization to pass
 up. This optimization would also be useful for tools like emscripten.

 function undoPremultiplication(image, w, h) {
   for (var y = 0, i = 0; y < h; y++) {
     for (var x = 0; x < w; x++, i += 4) {
       var alpha = image[i + 3];
       if (alpha > 0) {
         image[i + 0] = image[i + 0] * 0xFF / alpha;
         image[i + 1] = image[i + 1] * 0xFF / alpha;
         image[i + 2] = image[i + 2] * 0xFF / alpha;
       }
     }
   }
 }



Re: [v8-users] recommended V8 GC settings for nodejs in heroku (hitting heroku memory limits)

2012-07-24 Thread Vyacheslav Egorov
What is V8 heap usage: heapTotal and heapUsed parts reported by
process.memoryUsage?
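For reference, those numbers come from Node's process.memoryUsage() (a Node-specific API; the exact field set varies by Node version): rss covers the whole process, while heapTotal/heapUsed describe the V8-managed heap.

```javascript
// Inspect the process's memory numbers.
var usage = process.memoryUsage();
console.log('rss      ', usage.rss);       // resident set size, whole process
console.log('heapTotal', usage.heapTotal); // V8 heap reserved
console.log('heapUsed ', usage.heapUsed);  // V8 heap actually in use
```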

--
Vyacheslav Egorov


On Tue, Jul 24, 2012 at 8:52 PM, spollack s...@pollackphoto.com wrote:
 I am running a nodejs app in heroku, and on certain datasets i'm going over
 the heroku memory limit of 512MB. I'm running node v0.6.6 with defaults. I
 can see via node's process.memoryUsage() that my RSS value does indeed go as
 high as 544MB on my test dataset, and ps shows similar results.

 What V8 GC settings would you recommend in order to keep the RSS lower?

 Are there any known memory behavior improvements of moving to node v0.6.20
 or v0.8.3 that would help me here?

 As an aside, running the same test locally (same node version, same code,
 same data) only hits a max RSS of 155MB, almost a factor of 4 different.
 Both are x86_64 machines, although my local machine is OSX Lion (11.4.0
 Darwin Kernel Version 11.4.0: Mon Apr 9 19:32:15 PDT 2012;
 root:xnu-1699.26.8~1/RELEASE_X86_64 x86_64) while heroku is Linux 2.6
 (2.6.32-343-ec2 #45+lp929941v1 SMP Tue Feb 21 14:07:44 UTC 2012 x86_64
 GNU/Linux). Any ideas why that is?

 Thanks.



Re: [v8-users] Re: How to write efficient code for Node/MongoDB mixed use?

2012-07-23 Thread Vyacheslav Egorov
Yes.
--
Vyacheslav Egorov


On Mon, Jul 23, 2012 at 4:48 PM, Sebastian Ferreyra Pons
ushiferre...@gmail.com wrote:
 Yes.  In a constructor function, the hidden class begins with an
 element that also includes a prototype.  In two different constructor
 functions these initial elements will be different, making their
 hidden classes different.

 Does this mean that if I make sure that both the mongo driver and my code
 uses the literal empty object {} for creating new objects they will share
 the same root class?

 On Friday, July 20, 2012 2:21:39 AM UTC-3, jMerliN wrote:

 Hi Sebastian,

  It is my understanding that hidden classes are not shared between
  different
  constructors, even if I construct structurally identical objects with
  the
  same properties and in the same order. This seems to imply that
  deserialized objects coming from mongodb will not share the same hidden
  class as structurally identical objects created by the constructors,
  hence
  functions that use these objects will not be well optimized into native
  code. Am I right?

 Yes.  In a constructor function, the hidden class begins with an
 element that also includes a prototype.  In two different constructor
 functions these initial elements will be different, making their
 hidden classes different.

 If you have hot code that is being impacted because you pass it both
 objects you construct and ones from the MongoDB driver you're using,
 one thing you can do is make your constructor take the MongoDB object
 representation as an input.  Then you would be able to make your hot
 code monomorphic.  If you don't have any hot code being impacted, it's
 probably not something you should worry about unless your Mongo driver
 is putting objects it constructs into dictionary mode for some reason.

 An example: http://jsfiddle.net/xznxP/

 hot only deoptimizes when given the raw object here, which has a
 different hidden class.
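A sketch of that workaround: funnel deserialized objects through your own constructor so hot code only ever sees one shape (`rawFromMongo`, `User`, and `hot` are invented names standing in for the driver's output and your code):

```javascript
// Normalize driver-built objects into one constructor-built shape.
function User(raw) {
  this.name = raw.name;
  this.age = raw.age;
}

var rawFromMongo = { name: 'ada', age: 36 };  // driver-built shape
var normalized = new User(rawFromMongo);      // constructor-built shape

function hot(u) { return u.age + 1; }         // stays monomorphic if it only
var result = hot(normalized);                 // ever sees User instances
```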

  Is there any hidden class inheritance built into v8? That is, if I
  create
  object o={a:1, b:2} and later add o.z=3, will native code optimized for
  the
  hidden class before the property-add still work unmodified afterwards?

 No.

 - Justin

 On Jul 19, 4:16 pm, Sebastian Ferreyra Pons ushiferre...@gmail.com
 wrote:
  I have two questions.
 
  #1
  I'm developing a Node.js/Mongodb web app.
 
  This means that objects used in the code will be created by at least two
  different code paths:
 
 1. Constructor functions
 2. Deserialization code in the mongo driver.
 
  It is my understanding that hidden classes are not shared between
  different
  constructors, even if I construct structurally identical objects with
  the
  same properties and in the same order. This seems to imply that
  deserialized objects coming from mongodb will not share the same hidden
  class as structurally identical objects created by the constructors,
  hence
  functions that use these objects will not be well optimized into native
  code. Am I right?
 
  #2
  Is there any hidden class inheritance built into v8? That is, if I
  create
  object o={a:1, b:2} and later add o.z=3, will native code optimized for
  the
  hidden class before the property-add still work unmodified afterwards?



Re: [v8-users] Re: De-optimization of hot function when constructor adds methods directly to object

2012-07-20 Thread Vyacheslav Egorov
 If there are only 25-35 allowable properties in a klass, you can
 potentially make a really fast check for this.

Yep, I know. You are describing exactly what I described above, just
in different words :-) It's an old and well-known way to implement
inheritance checks in single-inheritance languages (at least Oberon
compilers used it back in the 80s).

--
Vyacheslav Egorov


On Fri, Jul 20, 2012 at 12:41 AM, jMerliN jmer...@jmerlin.net wrote:
 This will be great but there is no easy way to check that two hidden
 classes are compatible. Hidden classes are currently compared by
 pointer equivalence, which boils down to two instructions (compare and
 jump). Checking for inheritance would lead to pretty complicated
 code. The most efficient way, it seems, to implement such a check is
 to record the transition path in every map and then check if a fixed
 position in the transition path is equal to a fixed map. This is much more
 complex and I am not sure it benefits any real-world code.

 I'll try to find a good real-world example of where this causes
 violent deops from common practices.  I've seen it done quite a few
 times.

 If there are only 25-35 allowable properties in a klass, you can
 potentially make a really fast check for this.  If you store pointers
 to the klasses in a contiguous array such that higher indices are
 always superklass pointers of lower indices (regardless of
 transition), you can determine compatibility with 2 cmps (one compat,
 one bounds checking).  You could still do the normal cmp/jmp into
 optimized code, but if the cmp fails (not equal), you can do 2 more
 cmps (if >= optimized-for-klass and < end of block) to determine if
 this is a parent klass, and if so you can jmp to the optimized code
 and only if those cmps fail do you deoptimize.

 The downside is that the generated optimized code would need to
 dereference once just to get the klass pointer, adding an extra few
 cycles to each optimized IC.  Though I suppose you could move
 that code out and do the actual klass-pointer-equivalence cmp first; if
 that fails, go back to this block and do a bounds check, and if it's a
 parent then jmp into the optimized code keeping the klass pointer, which
 pushes the extra work into the case where the klass pointers aren't
 equivalent but are compatible (which should be rare).  Storing those
 compat blocks would add a memory overhead, and the non-monomorphic
 check can potentially prevent a deoptimization with a few more
 instructions.  It shouldn't reduce performance, though.

 You could also potentially partition such a compat block structure as
 to minimize the number of pointers needed to do a reasonable job at
 guarding against deoptimization from extended objects.
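The contiguous-numbering idea above can be sketched in JavaScript: give every klass an index and a range covering its descendants, and "compatible" collapses to two comparisons (indices and klass names here are invented; real hidden-class maps are internal to V8):

```javascript
// Each klass gets [index, end): descendants occupy the half-open range.
var klasses = {
  Base:    { index: 0, end: 3 },  // Base and its descendants: [0, 3)
  Derived: { index: 1, end: 2 },  // a subklass of Base
  Other:   { index: 3, end: 4 }   // unrelated klass
};

// Two comparisons: the bounds check and the compat check.
function compatible(optimizedFor, actual) {
  return actual.index >= optimizedFor.index && actual.index < optimizedFor.end;
}
```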

 On Jul 19, 1:00 pm, Vyacheslav Egorov vego...@chromium.org wrote:
 Knowing that you are running it in node.js I can confirm that there is
 indeed a difference between test/test2 properties. The reason is we
 don't convert test to a CONSTANT_FUNCTION if the object literal is not in
 global scope. This is a heuristic that was based on the assumption
 that top-level code is executed once and non-top-level code many times
 (thus every time the object literal will have a different map):
 https://github.com/v8/v8/blob/master/src/parser.cc#L4272-4279. In the
 past we would not make test2 a CONSTANT_FUNCTION either because we
 required the function to be in old space. I think we might want to change
 this to make it consistent and I've filed a bug
 (https://code.google.com/p/v8/issues/detail?id=2246). node.js wraps module
 bodies in an anonymous function --- that is why the slowdown is not
 reproducible in Chrome or the d8 shell:

 (function () {
   var z = {test: function () {}};
   z.test2 = function () {};
   function foo(z) {
     var i;
     console.time('test speed');
     for (i = 0; i < 1000; i++) z.test();
     console.timeEnd('test speed');
     console.time('test2 speed');
     for (i = 0; i < 1000; i++) z.test2();
     console.timeEnd('test2 speed');
   }
   foo(z);
   foo(z);
 })();
  The real issue in my example is that test is per-object
  and runTest is static; if runTest was assigned via this., it
  should only ever see one hidden class, unless you do something evil
  like .apply.

 This will not help because type feedback is currently shared between
 all instances of the same function literal: V8 mostly gets type
 feedback from IC stubs that are referenced by inline caches in
 unoptimized code, and the unoptimized code object is the same for any
 closure created from the same function literal.

  On a related note, has there been any consideration for making v8 not
  de-optimize when a hidden class is ancestral to another (and therefore
  compatible)?

 This will be great but there is no easy way to check that two hidden
 classes are compatible. Hidden classes are currently compared by
 pointer equivalence, which boils down to two instructions (compare and
 jump). Checking for inheritance would lead to pretty complicated
 code. The most efficient way, it seems

Re: [v8-users] De-optimization of hot function when constructor adds methods directly to object

2012-07-19 Thread Vyacheslav Egorov
Hi Justin,

V8's hidden classes are not limited to tracking fields you assign to
an object, V8 also tries to capture methods you assign (just like in
any object-oriented language classes capture both data and behavior).

That is why first and second objects produced by Foobar will have
different hidden classes --- they have different methods.

As to your second question: they are not treated differently. If you
rewrite your test like this:

var z = {test: function () {}};
z.test2 = function () {};

function foo(z) {
  var i;
  console.time('test speed');
  for (i = 0; i < 1000; i++) z.test();
  console.timeEnd('test speed');
  console.time('test2 speed');
  for (i = 0; i < 1000; i++) z.test2();
  console.timeEnd('test2 speed');
}

foo(z);
foo(z);

You will see something like:

test speed: 38ms
test2 speed: 12ms
test speed: 11ms
test2 speed: 11ms

The truth is that V8 optimizes the code while the first loop is still
_running_ (this is called On-Stack Replacement, aka OSR). So the first
test speed measurement is a sum of time spent in unoptimized code, in
the compiler, and in optimized code, while the first test2 speed
measurement is purely time spent in optimized code. If you call the
same code a second time, you see timing results purely for optimized
code. This is why benchmarks should always contain a warm-up phase to
let the optimizing JIT kick in.
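
A minimal warm-up harness sketch (the names `bench` and `workload` are illustrative, not a real API): run the workload once before timing so the measured pass runs optimized code.

```javascript
// Warm-up benchmarking sketch. The first call lets unoptimized
// execution, compilation and OSR happen; only the second, warmed-up
// call is timed.
function bench(fn, n) {
  fn(n);                         // warm-up pass: triggers JIT optimization
  var start = Date.now();
  fn(n);                         // measured pass: runs optimized code
  return Date.now() - start;     // elapsed milliseconds
}

function workload(n) {
  var sum = 0;
  for (var i = 0; i < n; i++) sum += i;
  return sum;
}

var elapsed = bench(workload, 1e6);
```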

Hope this explains it.

--
Vyacheslav Egorov


On Thu, Jul 19, 2012 at 3:04 AM, jMerliN jmer...@jmerlin.net wrote:
 So I can't get my head around why this happens (I haven't dug through
 v8's code to try to figure it out either), but this is really
 inconsistent to me with how v8 constructs hidden classes in general.
 The following is running in Node.js v0.8.2 (V8 v3.11.10.12).

 Here's the code:
 http://pastebin.com/2gKWrfHp

 Here's the output, and the deopt trace:
 http://pastebin.com/WerQuGLZ

 Calling Foo.prototype.runTest with any Foo object results in similar
 performance (unless you change the hidden class, as expected).  Bar
 expectedly deoptimizes because abc is stored on the proto and isn't
 actually on the constructed object until the first call, causing the
 optimized function (once it gets hot, which is after the object has
 changed hidden class) to bailout on the next attempt with a new Bar
 object.

 It gets weird with Foobar.  test is added directly to the object, the
 only difference is that this is a function, not a primitive, but it
 seems like the hidden classes of objects from Foobar's constructor
 should be the same.  The first run is performant, equivalent to Foo
 (expected).  Though running the test again with a new Foobar
 deoptimizes it.  I can't at all understand why.

 Thanks,
 Justin

 --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users



Re: [v8-users] Version Performance

2012-07-19 Thread Vyacheslav Egorov
The major difference between 3.0 and those before it is Crankshaft ---
adaptive compilation pipeline:
http://blog.chromium.org/2010/12/new-crankshaft-for-v8.html

I am not sure what was the major difference between 2.x and 1.x

--
Vyacheslav Egorov


On Thu, Jul 19, 2012 at 5:07 PM, W Brimley cushingreg...@gmail.com wrote:
 Does anyone know the major differences between versions 1.x, 2.x, and
 3.x that contribute to performance gains? For example, ArrayBuffers in
 version 3.x had quite an impact on performance. Are there other examples?





[v8-users] Re: De-optimization of hot function when constructor adds methods directly to object

2012-07-19 Thread Vyacheslav Egorov
Knowing that you are running it in node.js I can confirm that there is
indeed a difference between test/test2 properties. The reason is we
don't convert test to a CONSTANT_FUNCTION if object literal is not in
global scope. This is a heuristic that was based on the assumption
that top level code is executed once and non-top-level many times
(thus every time object literal will have a different map):
https://github.com/v8/v8/blob/master/src/parser.cc#L4272-4279 . In the
past we would not make test2 a CONSTANT_FUNCTION either, because we
required the function to be in old space. I think we might want to
change this to make it consistent, and I've filed a bug
(https://code.google.com/p/v8/issues/detail?id=2246). node.js wraps
module bodies in an anonymous function --- that is why the slowdown is
not reproducible in Chrome or the d8 shell:

(function () {
var z = {test: function () {}};
z.test2 = function () {};
function foo(z) {
  var i;
  console.time('test speed');
  for (i = 0; i < 1000; i++) z.test();
  console.timeEnd('test speed');
  console.time('test2 speed');
  for (i = 0; i < 1000; i++) z.test2();
  console.timeEnd('test2 speed');
}

foo(z);
foo(z);
})();

 The real issue in my example is that test is per-
 object and runTest is static, if runTest was assigned via this., it
 should only ever see one hidden class, unless you do something evil
 like .apply.

This will not help because type-feedback is currently shared between
all instances of the same function literal: V8 mostly gets type-
feedback from IC-stubs that are  referenced by inline-caches in
unoptimized code and unoptimized code object is the same for any
closure created from the same function literal.

 On a related note, has there been any consideration for making v8 not
 de-optimize when a hidden class is ancestral to another (and therefore
 compatible)?

This would be great, but there is no easy way to check that two hidden
classes are compatible. Hidden classes are currently compared by
pointer equivalence, which boils down to two instructions (compare and
jump). Checking for inheritance would lead to pretty complicated code.
The most efficient way to implement such a check, it seems, is to
record the transition path in every map and then check whether a fixed
position in the transition path is equal to a fixed map. This is much
more complex, and I am not sure it benefits any real-world code.
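
As an illustration only (a plain-JavaScript model of the idea, not V8's actual implementation), recording the transition path in each map turns the ancestry check into a single indexed comparison:

```javascript
// Illustrative model: each "map" records the path of maps it was
// transitioned through, so an ancestry check is a single indexed
// comparison rather than a chain walk.
function makeMap(parent) {
  return { path: parent ? parent.path.concat(parent) : [] };
}

// expected is an ancestor of map iff it sits at the fixed position
// expected.path.length in map's transition path.
function descendsFrom(map, expected) {
  return map === expected || map.path[expected.path.length] === expected;
}

var root = makeMap(null);   // hidden class of {}
var ab = makeMap(root);     // after adding a, b
var abc = makeMap(ab);      // after adding c
```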

--
Vyacheslav Egorov

On Jul 19, 8:03 pm, jMerliN jmer...@jmerlin.net wrote:
 Vyacheslav,

 When I run the code you posted, I see a much bigger discrepancy
 between test/test2 in the first pass and a slight reduction in test's
 time but still a large discrepancy the second pass (indicating OSR
 happened during the first loop the first time around), similar to what
 I was seeing yesterday.  But that's running on Node.js, and I haven't
 re-built Node.js against the latest stable v8 code, but that issue is
 completely gone in the current nightly Canary build.

 I think I better understand the method issue now.  V8 actually treats
 methods set on this. differently than other properties, the assembly
 generated looks aggressively inlined.  If you cheat and set this.test
 to a number then to the method, it effectively disables those
 optimizations in V8 and you end up treating the object as a normal
 object, and even though it doesn't cause deoptimizations (all objects
 have the same hidden class), it's significantly slower than the
 inlined method call.  The real issue in my example is that test is per-
 object and runTest is static, if runTest was assigned via this., it
 should only ever see one hidden class, unless you do something evil
 like .apply.

 Though this test seems to indicate that this only occurs when building
 the hidden class:  http://pastebin.com/JbuLaEUt

 Even though it never deoptimizes, I'd expect each of those to have
 similar performance, but only the first Foobar created is performant.

 On a related note, has there been any consideration for making v8 not
 de-optimize when a hidden class is ancestral to another (and therefore
 compatible)?  I mean if you have {a: 7, b: 7} and you have a really
 hot loop that only touches a and b, then you add a c property, because
 it was transitioned from the proper hidden class for that hot loop to
 a superclass of it (with the same indices in its property access
 table), that hot function can assume it's the {a, b} hidden class.
 This is similar to how classical inheritance works (Foo extends Bar,
 functions that operate on Bar can also operate on Foo), but in this
 case a hidden class transition is a strict superset, which lets you
 make really nice assumptions.

 On Jul 19, 2:27 am, Vyacheslav Egorov vego...@chromium.org wrote:

  Hi Justin,

  V8's hidden classes are not limited to tracking fields you assign to
  an object, V8 also tries to capture methods you assign (just like in
  any object-oriented language classes capture both data and behavior).

  That is why first and second objects produced

[v8-users] Re: Is --trace-deopt actually usable? If so, how are you supposed to use it?

2012-07-18 Thread Vyacheslav Egorov
Hi Kevin,

To be absolutely honest all these flags historically were made by V8
developers for V8 developers. You usually can't interpret what they
print without understanding of how V8 works internally, how optimizing
compiler IR looks like etc. We advocate them for JS developers only
because there is nothing else available at the moment.

[I was always convinced that V8 needs a more GUI-ish thing that would
overlay events from the optimizing compiler over the source of a
function, but that is not so easy. I was playing with some prototypes
but at some point I gave up... It requires attaching source position
information to individual IR instructions (plus merging this
information somehow when we optimize code and remove redundant
instructions) and, to make it worse, the AST does not even have span
information attached to each node... you can't say that expression a +
b starts at position X and ends at position Y to correctly highlight
the whole offending expression.]

The deoptimization that you are mentioning in the first message
indicates that either the execution reached a part of the function
that was optimized before typefeedback for it was available [this
happens a lot for big functions or functions with complicated control
flow and rarely executed parts] or you have a polymorphic property
access site that had a small degree of polymorphism at the moment of
compilation, but now it saw some new hidden class.
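
The second cause can be sketched in plain JavaScript (a hypothetical example; the deoptimization itself is only observable with flags like --trace-deopt): a property-access site compiled while monomorphic later sees an object with a different hidden class.

```javascript
// The access site in getX is optimized after seeing only one hidden
// class; an object whose properties were added in a different order has
// a different hidden class and fails the optimized code's map check.
function getX(o) { return o.x; }

var mono = { x: 1 };
getX(mono);
getX(mono);                  // site is monomorphic: one shape, {x}

var other = { y: 2, x: 3 };  // different insertion order => new shape
var v = getX(other);         // a new hidden class reaches the same site
```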

 To provide one example: I did some spelunking around with
 --trace-deopt and --trace-bailouts and found that in my codebase,
 basically any use of the 'arguments' object - even just checking
 'arguments.length' - causes the entire function to be deoptimized.

Can you provide more information about this? What kind of
--trace-deopt/--trace-bailout output made it look like arguments.length
causes deoptimization?

 Non-v8/non-chrome devs saying false things about V8
 performance isn't your fault

To be precise, the presentation you are linking to was made by me, and
I am a V8 dev.

 Thanks for the info about c1visualizer - I bet the memory limit was
 probably responsible for the flakiness and if I fiddle with JVM
 parameters it might work. I'll give it another try later on.

C1Visualizer has a major problem with its memory consumption. Big IR
dumps usually have to be either split into separate files (I do it
with a small script) or minimized by applying --hydrogen-filter=foo
flag to block optimization of all functions that are not called foo.

--
Vyacheslav Egorov



On Jul 18, 12:42 am, Kevin Gadd kevin.g...@gmail.com wrote:
 Thanks for the link to that video, I'll give it a look. Based on your
 suggestion I'll try doing a custom build of Chromium and see if the
 disassembly will let me make sense of things.

 The reason this is a real problem for me (and why I find the lack of
 documentation for this stuff in chromium frustrating) is that I'm
 machine-translating code from other languages to JS - hand-editing it
 to make it faster is something I can do for code I'm writing myself,
 but I can't do it in a compiler. The current nature of the performance
 feedback from V8 makes it more or less a black box and this is
 worsened by the fact that a large amount of the documentation I've
 found out there that claims to describe V8 performance characteristics
 is either wrong or outdated. When you profile an application in V8 and
 the web inspector says you're spending 50% of your time in a simple
 function, your only choice is to dig deeper to try and understand why
 that function is slow. You could solve this by offering line-level
 profiling data in your profiler, but I think that's probably a ton of
 work, so I'm not asking for that. ;)

 To provide one example: I did some spelunking around with
 --trace-deopt and --trace-bailouts and found that in my codebase,
 basically any use of the 'arguments' object - even just checking
 'arguments.length' - causes the entire function to be deoptimized. Of
 course, there isn't a ton of documentation here, but
 http://s3.mrale.ph/nodecamp.eu/#57 along with other sources claim that
 this is not the case. So, either something creepy is happening in my
 test cases - more verbose feedback from V8 here, or concrete docs from
 the devs themselves would help - or the information being given to the
 public is wrong. Non-v8/non-chrome devs saying false things about V8
 performance isn't your fault, but it wouldn't hurt to try and prevent
 that by publishing good information in textual form on the web.

 I hope I'm not giving the impression that I think V8 is the only
 problem here either; JS performance in general is a labyrinth. Based
 on my experiences however, the best way to get info about V8
 performance tuning is to talk to a Chromium dev directly or hunt down
 YouTube videos of hour long presentations. This is pretty suboptimal
 for a developer who's trying to tackle a performance issue in the
 short term - Google is surprisingly bad at finding either of those two
 info sources

Re: [v8-users] Re: Is --trace-deopt actually usable? If so, how are you supposed to use it?

2012-07-18 Thread Vyacheslav Egorov

   return (function JSIL_ArrayEnumerator() {
 return state.ctorToCall.apply(this, arguments);
   });

 Bailout in HGraphBuilder: @JSIL_ArrayEnumerator: bad value context
 for arguments value

Interesting. There is a small detail that my slides do not mention:
.apply must be the builtin apply function and the expression should be
monomorphic.

Monomorphic example that will be optimized:

function apply() { arguments[0].apply(this, arguments); }

function foo() { }
function bar() { }

apply(foo);
apply(foo);
apply(bar);
apply(bar);
// Both foo and bar have same hidden classes.

Polymorphic example that will not be:

function apply() { arguments[0].apply(this, arguments); }

function foo() { }
function bar() { }

bar.foo = "aaa";  // After this point foo and bar have different hidden classes.

apply(foo);
apply(foo);
apply(bar);
apply(bar);

// now the .apply expression inside apply is not monomorphic and the
// compiler will say "bad value context for arguments value".

Did you patch Function.prototype.apply or add properties to your
functions? This might explain why .apply optimization gets confused.

 Bailout in HGraphBuilder: @System_Threading_Interlocked_CompareExchange: 
 bad value context for arguments value

This one might be tricky. Assumptions V8 makes during compilation are
all based on type-feedback. If argc was never equal to 4 before V8
tried to optimize System_Threading_Interlocked_CompareExchange V8 just
will not know that .apply there is a built-in Function.prototype.apply
so it will bailout. I suggest avoiding .apply on rarely executed
branches in hot functions if possible.

Of course there might still be a possibility that the .apply access is
polymorphic, as described above.
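
One way to follow that suggestion (a sketch; `hot` and `slowApply` are made-up names) is to move the rare .apply call into its own function so the hot function itself stays free of it:

```javascript
// Keep the rarely taken .apply call out of line: the common path in the
// hot function avoids .apply entirely.
function hot(fn, args) {
  if (args.length === 1) return fn(args[0]);  // common, apply-free path
  return slowApply(fn, args);                 // rare path, separate function
}

function slowApply(fn, args) {
  return fn.apply(null, args);                // .apply isolated here
}

var r1 = hot(Math.abs, [-5]);
var r2 = hot(Math.max, [1, 7]);
```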

 Your explanation on why the no-message deopts occur is helpful

To be precise I was referring to deoptimization that happens on
deoptimize instruction you quoted in your first mail.

[please do not hesitate to ask more questions!]

--
Vyacheslav Egorov


On Wed, Jul 18, 2012 at 6:06 PM, Kevin Gadd kevin.g...@gmail.com wrote:
 Thanks for the detailed response. Unfortunately I didn't write down
 the example I saw with arguments.length causing it - it may have been
 me misreading the output, or perhaps it was from inlining? However,
 there are certainly a bunch of uses of fn.apply(this, arguments),
 which the presentation also said would be fine, and those are bailing
 out. Here are two examples (both generated by code at runtime, so if I
 can change the generated code to fix this, I'd love to know about it
 :D)

   return (function JSIL_ArrayEnumerator() {
 return state.ctorToCall.apply(this, arguments);
   });

 Bailout in HGraphBuilder: @JSIL_ArrayEnumerator: bad value context
 for arguments value

   return (function System_Threading_Interlocked_CompareExchange() {
   var argc = arguments.length;
   if (argc === 4) {
 return thisType[CompareExchange`1$559[!!0],!!0,!!0=!!0].apply(this,
 arguments);
   }

   throw new Error('No overload of ' + name + ' can accept ' +
 (argc - offset) + ' argument(s).')
 });

 Bailout in HGraphBuilder:
 @System_Threading_Interlocked_CompareExchange: bad value context for
 arguments value

 In total I see something like 30 'value context for arguments value'
 bailouts when starting this test case and almost all of them look like
 they should be okay based on that slide, so I must either have
 misinterpreted the slide or it's not correct anymore.

 Your explanation on why the no-message deopts occur is helpful; if I
 assume that they indicate polymorphism maybe I can use that
 information to try and zero in on locations within the function where
 polymorphism might be occurring and make some headway that way.
 Thanks.

 --hydrogen-filter sounds like *exactly* what I need, so thank you very
 much for mentioning that. :D

 -kg

 On Tue, Jul 17, 2012 at 11:52 PM, Vyacheslav Egorov
 vego...@chromium.org wrote:
 Hi Kevin,

 To be absolutely honest all these flags historically were made by V8
 developers for V8 developers. You usually can't interpret what they
 print without understanding of how V8 works internally, how optimizing
 compiler IR looks like etc. We advocate them for JS developers only
 because there is nothing else available at the moment.

 [I was always convinced that V8 needs a more gui-sh thing that would
 overlay events from the optimizing compiler over the source of
 function but that is not so easy. I was playing with some prototypes
 but at some point I gave up... It requires attaching source position
 information to individual IR instructions (plus merging this
 information somehow when we optimize code and remove redundant
 instructions) and to make it worse: AST does not even have a span
 information attached to each node... you can't say that expression a +
 b starts a position X and ends at position Y to correctly highlight
 the whole offending expression.]

 The deoptimization that you are mentioning in the first message
 indicates that either

Re: [v8-users] Re: Is --trace-deopt actually usable? If so, how are you supposed to use it?

2012-07-18 Thread Vyacheslav Egorov
 Oh, if functions have hidden classes and changing them prevents the
 fast-path for .call and .apply, then setting debugName and displayName
 on all my functions isn't a very good idea... thanks.

Well, actually, if you set the same fields in the same order on _all_
functions that come into this .apply site then it should be fine
(unless you set too many fields, more than 14, or delete properties
--- which would cause the property backing store to be normalized):
they will all have the same hidden class.
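
For instance (a sketch based on the explanation above; `tag` is a hypothetical helper, and verifying the shared hidden class would need V8 natives syntax), funneling all property additions through one helper keeps the field order identical on every function:

```javascript
// Every tagged function gets the same fields in the same order, so
// (per the explanation above) their layouts can stay identical.
function tag(fn, name) {
  fn.debugName = name;     // same fields, same order, on every function
  fn.displayName = name;
  return fn;
}

var f = tag(function () { return 1; }, "f");
var g = tag(function () { return 2; }, "g");
```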

 I was under the impressions that bailouts were based on the shape of the
 code and deopts were based on type information

Yep, we have some corner cases where compile time optimization is
limited to certain good cases that can be detected by looking at type
feedback. So if the type feedback does not look good we just bail out.

 do they apply to anything that can have properties (strings, functions, etc) 
 as well?

Well... How should I answer, so as not to be confusing :-) The short
answer is yes: objects, functions, and value wrappers (String, Number,
etc.) have hidden classes that change when you add/remove properties;
primitive values that don't carry properties, like strings and
numbers, don't have one (or rather they don't change it, because you
can't add/remove properties on them).

Long answer is: strictly speaking _every_ object in V8 heap has a
thing called Map, that describes its layout. Objects that can carry
around properties (inheriting from v8::internal::JSObject:
https://github.com/v8/v8/blob/master/src/objects.h#L57-71) _might_
have their map changed when you add and remove properties. It does not
always happen, because not every map describes a fast-properties
layout.

You can sometimes see deoptimizations that mention check-map
instruction. It's the one that checks object layout by comparing
object's map to an expected map.

 Does modifying an object's prototype cause
 its hidden class to change and deopt any functions that use it - like
 if I were to alter String.prototype or Number.prototype after some
 code had JITted?

If you add property to a prototype then JS object that represents
prototype will get a new hidden class. If some optimized code was
checking this prototype and expecting certain map --- this check will
fail when executed and code will deopt. If some inline cache stub was
checking it --- this check will fail when this IC is used and IC will
miss.

 Is a function's hidden class just based on whether
 you've made changes, or do, say, .bind() functions have a different
 hidden class from native ones like console.log?

Yeah, they actually have different ones due to the way we bootstrap
the built-ins. Built-in functions are slightly different from normal
functions because they do not have a .prototype property by default.
But even if you add one manually they will not transition to the same
hidden class as a normal function with .prototype; they are just not
linked together with a transition and are completely separate. I am
not exactly sure why.

Functions coming from different contexts (iframes) will have different
hidden classes.

Strict functions (use strict;) will have different hidden classes
from non-strict ones.

--
Vyacheslav Egorov


On Wed, Jul 18, 2012 at 6:33 PM, Kevin Gadd kevin.g...@gmail.com wrote:
 Oh, if functions have hidden classes and changing them prevents the
 fast-path for .call and .apply, then setting debugName and displayName
 on all my functions isn't a very good idea... thanks. That makes the
 slide's advice make more sense, and it also explains why my attempts
 to move the bailout into a child function weren't a success.

 I was
 under the impression that bailouts were based on the shape of the
 code and deopts were based on type information - if the bailout can
 also occur because it doesn't have IC type information to show that
 .apply is builtin and the fn is a standard Function, that explains how
 I'm getting it in some of these contexts. I will test this out some
 and if I get good results I'll definitely write it up on my
 optimization page.

 https://github.com/kevingadd/JSIL/wiki/JavaScript-Performance-For-Madmen

 If any of the information on the above about V8 is wrong, please let
 me know so I can fix it :)

 P.S. Every example I've ever seen for Hidden Classes uses Objects. I
 foolishly assumed that as a result, they only applied to user-created
 objects - do they apply to anything that can have properties (strings,
 functions, etc) as well? Does modifying an object's prototype cause
 its hidden class to change and deopt any functions that use it - like
 if I were to alter String.prototype or Number.prototype after some
 code had JITted? Is a function's hidden class just based on whether
 you've made changes, or do, say, .bind() functions have a different
 hidden class from native ones like console.log?

 Thanks,
 -kg

 On Wed, Jul 18, 2012 at 9:26 AM, Vyacheslav Egorov vego...@chromium.org 
 wrote:

   return (function JSIL_ArrayEnumerator

Re: [v8-users] Destructors, a proposal of sorts

2012-07-11 Thread Vyacheslav Egorov
Actually finalization in reference counting GCs is much more
predictable than in GCs that mark through the heap.

Contrary to what you might think, --nouse-idle-notification does not
disable automatic GC in V8. What it does is tell V8 not to perform GC
actions (be it advancing the sweeper or the incremental marker, or
doing a full GC) in response to IdleNotifications that the embedder
(node.js in this case) sends to V8.

If V8 sees fit (e.g. on allocation failure) it _will_ perform a GC,
and you can't disable that.

[also running through all objects is kinda how GC works, though it
should be done in increments]

--
Vyacheslav Egorov


On Wed, Jul 11, 2012 at 5:02 PM, Michael Schwartz myk...@gmail.com wrote:
 Here's a pattern, using Canvas as an example:

 function foo() {
   var c = new Canvas();
   var ctx = c.getContext('2d');
   var grad = ctx.createLinearGradient(0,0, 10,10);
   // do something / render
 }

 There's now a native cairo_surface_t created (new Canvas).  The c variable
 has a this.surface referencing the native surface object.
 There's now a native cairo_context_t created (c.getContext). The ctx
 variable has a this.ctx referencing the native context object.
 There's now a native cairo_pattern_t created (c.createLinearGradient).  The
 grad variable has a this.pattern referencing the native pattern object.

 All three need to be cleaned up at some point.

 There are no destroy (destructor) methods defined in the W3C spec for
 Canvas, Context, Gradient, etc.  Nobody is going to call them (they don't
 client-side), unless writing non-portable SilkJS specific code.  And
 ideally, the code should be portable between client and server - that's one
 of the best features of JavaScript running on both sides.

 I'm fully aware of the issues with finalize and reference counting based GC,
 etc.   Stephan made his rant about garbage collection and destructor issues
 in that link I posted in my first message.  Things haven't changed.  He and
 I have been and still are developing complex API for server-side, and this
 is an issue we both face.  Surely every API designer faces the same problem.
 Assist from V8 in addressing the problem would benefit a wide audience.

 As for my choice to force GC:

 http://blog.caustik.com/2012/04/08/scaling-node-js-to-100k-concurrent-connections/

 2) V8's idle garbage collection is disabled via "--nouse-idle-notification"

 This was critical, as the server pre-allocates over 2 million JS Objects for
 the network topology. If you don’t disable idle garbage collection, you’ll
 see a full second of delay every few seconds, which would be an intolerable
 bottleneck to scalability and responsiveness. The delay appears to be caused
 by the garbage collector traversing this list of objects, even though none
 of them are actually candidates for garbage collection.

 In my case, I know when it is a good time to force a GC.  It is being done
 in process that is about to block in accept().  If it ties up a CPU core for
 a bit, it is not going to stop the other processes from running, nor is the
 GC going to pause the server in action.

 (The above is one of numerous WWW pages I've read about scaling NodeJS,
 garbage collection pauses during benchmarks, etc.)


 On Jul 11, 2012, at 7:38 AM, Andreas Rossberg wrote:

 On 11 July 2012 15:35, Michael Schwartz myk...@gmail.com wrote:

 GC finalization actually works for SilkJS.  In the HTTP server, there are
 two nested loops.  The outer loop waits for connections, the inner loop
 handles keep-alive requests.  At the end of the inner loop (e.g. when the
 connection is about to be closed), I force a GC (or at least try to).


 Hm, I don't quite follow. If you actually have a specific point where you
 know that you want to dispose a resource, why is it impossible to dispose it
 directly at that point? Or if there are many of them, you could maintain a
 set/map of them.

 In any case, before you conclude that finalization is the answer to your
 problems, let me +1 Sven's recommendation on reading Boehm's paper on the
 subject. Finalization is pretty much an anti-pattern. There are some rare,
 low-level use cases, but usually it creates more problems than it solves.
 That's why we only provide it in the C++ API.

 Also, I should mention that forcing a GC manually is highly discouraged.
 That causes a full (major, non-incremental) collection, which generally is a
 very costly operation that can cause significant pause time, and basically
 defeats most of the goodness built into a modern GC.

 /Andreas







Re: [v8-users] Strict mode performance benefits

2012-07-10 Thread Vyacheslav Egorov
Strict mode actually does have a performance benefit in one peculiar case:
if you want to extend the String, Number, or Boolean prototype, declare the
functions you put on it strict. This will allow you to avoid coercion of a
primitive value to an object wrapper, which has a negative impact on
performance if the call is on a very hot path.

Vyacheslav Egorov
On Jul 10, 2012 9:19 AM, Jakob Kummerow jkumme...@chromium.org wrote:

 On Tue, Jul 10, 2012 at 7:58 AM, Rohit rox...@gmail.com wrote:

 Does V8's strict mode implementation offer any performance benefits?


 No.




Re: [v8-users] HTML5 Drag Drop feature crashing

2012-07-10 Thread Vyacheslav Egorov
This is not a V8 crash; the crash is in WebKit. Please report it to them.

--
Vyacheslav Egorov


On Tue, Jul 10, 2012 at 2:16 PM, Manjunath G manjunath1...@gmail.com wrote:
 Hi,

 When I try to test the HTML5 feature drag drop from
 http://html5demos.com/drag,
 v8 is crashing. Looks like crash is happening in pthread lock. Stack trace
 is as follows. Please can anyone help in debugging this.


 #0  0x022ab3bd in pthread_mutex_lock () from
 /opt/ThirdParty/lib/libpthread.so.0
 #1  0x01f15f76 in pthread_mutex_lock () from /opt/ThirdParty/lib/libc.so.6
 #2  0x002ff75d in WTF::Mutex::lock() () from ./.libs/libwebkitgtk-3.0.so.0
 #3  0x002de1c8 in WTF::strtod(char const*, char**) () from
 ./.libs/libwebkitgtk-3.0.so.0
 #4  0x002fa9d1 in WTF::charactersToDouble(unsigned short const*, unsigned
 int, bool*, bool*) ()
from ./.libs/libwebkitgtk-3.0.so.0
 #5  0x003fd406 in WebCore::CSSParser::lex(void*) () from
 ./.libs/libwebkitgtk-3.0.so.0

 #6  0x00c5234b in cssyyparse(void*) () from ./.libs/libwebkitgtk-3.0.so.0

 #7  0x0040180b in
 WebCore::CSSParser::parseValue(WebCore::CSSMutableStyleDeclaration*, int,
 WTF::String const&, bool) () from ./.libs/libwebkitgtk-3.0.so.0

 #8  0x004024de in
 WebCore::CSSParser::parseValue(WebCore::CSSMutableStyleDeclaration*, int,
 WTF::String const&, bool, bool) () from ./.libs/libwebkitgtk-3.0.so.0
 #9  0x003f44fd in WebCore::CSSMutableStyleDeclaration::setProperty(int,
 WTF::String const&, bool, bool) () from ./.libs/libwebkitgtk-3.0.so.0
 #10 0x003f45d4 in WebCore::CSSMutableStyleDeclaration::setProperty(int,
 WTF::String const&, bool, int) () from ./.libs/libwebkitgtk-3.0.so.0
 #11 0x00c262eb in
 WebCore::V8CSSStyleDeclaration::namedPropertySetter(v8::Local<v8::String>,
 v8::Local<v8::Value>, v8::AccessorInfo const&) () from
 ./.libs/libwebkitgtk-3.0.so.0
 #12 0x010a0e18 in
 v8::internal::JSObject::SetPropertyWithInterceptor(v8::internal::String*,
 v8::internal::Object*, PropertyAttributes, v8::internal::StrictModeFlag) ()
from ./.libs/libwebkitgtk-3.0.so.0
 #13 0x in ?? ()
 (gdb) q



 Thanks in advance.

 Regards
 Manjunath




Re: [v8-users] instanceof Array fails after ReattachGlobal

2012-06-27 Thread Vyacheslav Egorov
 How can I re-use the same built-ins each time?

There is no way. New context means new built-in objects. Also
reattaching global does not change anything because built-ins are not
on the global object itself.

If you want to reuse the same builtins you don't actually need a new
context. Just use the same one all the time.

 Also what about strings or custom objects of our own?

If you execute this code multiple times you get multiple foos and of
course instances produced by one foo are not instanceof another foo...
Just like in pure JS:

function mkfoo() {
  function foo() {}
  return new foo();
}

var o1 = mkfoo();
var o2 = mkfoo();

o1 instanceof o2.constructor // = false

--
Vyacheslav Egorov


On Tue, Jun 26, 2012 at 5:34 PM, MikeM mi...@reteksolutions.com wrote:
 Array function in one context is different from Array function in
 another context as each context is a separate world with its own
 built-in objects.
 Right.  That was the purpose of the ReattachGlobal() in the code.
 My idea (possibly mis-guided), was to re-use the same set of built-in
 objects (or prototypes) between different executions.
 So that instanceOf would work properly.  How can I re-use the same built-ins
 each time?
 I suppose I could keep using the same Context over and over, but I would
 need a way to wipe out any local var declarations between executions and
 only keep the built-ins.


 You can use Array.isArray which should work cross-context.
 Also what about strings or custom objects of our own?

   function foo() {}
   var x = new foo();
   x instanceof foo;

 Thanks!




 --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] instanceof Array fails after ReattachGlobal

2012-06-27 Thread Vyacheslav Egorov
 So where are they kept (the built-ins)? Are they hooked in via the prototype chain 
 to the global object?

Yes, they are. Quoting comments in v8.h:

  /**
   * Returns the global proxy object or global object itself for
   * detached contexts.
   *
   * Global proxy object is a thin wrapper whose prototype points to
   * actual context's global object with the properties like Object, etc.
   * This is done that way for security reasons (for more details see
   * https://wiki.mozilla.org/Gecko:SplitWindow).
   *
   * Please note that changes to global proxy object prototype most probably
   * would break VM---v8 expects only global object as a prototype of
   * global proxy object.
   *
   * If DetachGlobal() has been invoked, Global() would return actual global
   * object until global is reattached with ReattachGlobal().
   */
  Local<Object> Global();

   So I can just create a new empty global object each time like this

Unfortunately no. Normal JavaScript objects do not work with
ReattachGlobal. It expects JSGlobalProxy (which cannot be created
through an API separately from a Context). Quoting v8.h:

 /**
   * Reattaches a global object to a context.  This can be used to
   * restore the connection between a global object and a context
   * after DetachGlobal has been called.
   *
   * \param global_object The global object to reattach to the
   *   context.  For this to work, the global object must be the global
   *   object that was associated with this context before a call to
   *   DetachGlobal.
   */
  void ReattachGlobal(Handle<Object> global_object);

Unfortunately it seems that your use case is not supported by V8's API.

--
Vyacheslav Egorov


On Wed, Jun 27, 2012 at 3:58 PM, MikeM mi...@reteksolutions.com wrote:

  There is no way. New context means new built-in objects. Also
  reattaching global does not change anything because built-ins are not
  on the global object itself.

 Holy Javascript Batman!  That explains a lot!
 So where are they kept (the built-ins)? Are they hooked in via the prototype chain 
 to the global object?

  If you want to reuse the same builtins you don't ultimately need a new
  context. Just use the same all the time.

 Excellent!  So I can just create a new empty global object each time like 
 this and attach to my existing context to give me the same built-ins but a 
 clean global?
 Persistent<Object> globalObject = Persistent<Object>::New(Object::New());

 --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] CALL_AND_RETRY_2 memory errors and strange TODO message in heap-inl.h

2012-06-27 Thread Vyacheslav Egorov
 Is the overall performance of the GC design satisfactory enough that this
 is probably going to be a back-burner item for a while?

Not sure what you mean. It is not related to performance. It's more
about cleaner and safer code. Paradoxically: it's simpler to write
cleaner and safer runtime when you just crash on OOM and don't have to
worry about inconsistent state inside your VM.

In some places it's simple to unroll/discard changes and rethrow the OOM
further; in others it should be possible to let an Isolate die alone,
without crashing the whole process. But overall it's a huge
engineering problem, as it requires auditing and refactoring 4 years of
code written in a crash-on-OOM paradigm.

--
Vyacheslav Egorov

On Thu, Jun 28, 2012 at 12:53 AM, Brian Wilson br...@kinvey.com wrote:
 Thanks,

 I've started by setting a breakpoint in
 v8::internal::V8::FatalProcessOutOfMemory to see how I end up here.
 Once the process is stopped, I can dig into what's actually going on
 with the process state.


 Thanks for the insight into the TODO message, so the eventual goal
 (of the TODO) is to attempt to allow some sort of recovery from this state?
 Is the overall performance of the GC design satisfactory enough that this
 is probably going to be a back-burner item for a while?

 Brian


 On Jun 27, 2012, at 6:32 PM, Vyacheslav Egorov wrote:

 You need to catch it in the debugger to see what actually happens. It
 can be either:

 - real OOM: OS refused to give memory to V8 (you seem to be confident
 that this is not the case)
 - heap size limit OOM: ensure that your heap size is not exceeding
 --max-old-space-size limit (defaults: 700mb on ia32, 1400mb on x64).
 - artificial OOM when trying to allocate an array or a string that is
 too big (e.g. check constants SeqAsciiString::kMaxLength,
 FixedArray::kMaxLength).
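 The third, "artificial" case can be probed from plain JavaScript in a
 modern engine (a sketch; exact limits and error messages vary by V8
 version, and in the 2012 code under discussion this path could instead
 reach FatalProcessOutOfMemory):

```javascript
// Asking for a string far beyond any V8 string-length limit
// (SeqAsciiString::kMaxLength and friends in the source) fails with a
// catchable RangeError in current engines instead of aborting.
let err = null;
try {
  'x'.repeat(Math.pow(2, 32)); // length 2^32 exceeds every string limit
} catch (e) {
  err = e;
}
console.log(err instanceof RangeError); // true
```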

 The cryptic TODO itself comes from V8's early days, short explanation:
 some places in V8 runtime (e.g. places that use methods of
 v8::internal::Factory) do not expect allocation failures that cannot
 be resolved by calling GC (several times in worst case), so
 CALL_AND_RETRY can't return Failure object to them (and V8 does not
 use exceptions or longjmp in runtime) so it has to fail. TODO reflects
 some hope that V8 might start handling OOMs more gracefully some day
 in the future (which is not trivial as OOM might leave VM in
 inconsistent state).

 --
 Vyacheslav Egorov


 On Wed, Jun 27, 2012 at 11:55 PM, Brian Wilson br...@kinvey.com wrote:

 I've been running into some issues lately where I see the message:

 "FATAL ERROR: CALL_AND_RETRY_2 Allocation failed - process out of memory" 
 running a program (it's built on Node.js, but I'm interested in tracing 
 this on the v8 level).

 From all indications there's plenty of free memory, plenty of heap space 
 and the ulimit is not set too low, but we're still seeing this issue.  Does 
 anyone have any suggestions on how to track down how we're triggering this 
 allocation failure?

 Incidentally, while browsing through heap-inl.h there's a cryptic "TODO... 
 fix this." comment.

 540     if (__maybe_object__->IsOutOfMemory() ||                               \
 541         __maybe_object__->IsRetryAfterGC()) {                              \
 542       /* TODO(1181417): Fix this. */                                       \
 543       v8::internal::V8::FatalProcessOutOfMemory("CALL_AND_RETRY_2", true); \
 544     }                                                                      \


 Can someone shed light on where that comment came from, what the issue was 
 and what fixed it?


 Thanks,
 Brian

 --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] instanceof Array fails after ReattachGlobal

2012-06-26 Thread Vyacheslav Egorov
Array function in one context is different from Array function in
another context as each context is a separate world with its own
built-in objects.

You can use Array.isArray which should work cross-context.

--
Vyacheslav Egorov


On Mon, Jun 25, 2012 at 9:08 PM, MikeM mi...@reteksolutions.com wrote:
 The code below throws an exception because the saved property arraytest
 doesn't seem to be an array on the second execution of the script.
 I presume the built-in prototypes like Array must be different after
 reattaching the same global to the 2nd context.
 But why?
 The failure happens in the final test at the end of the code.

 
 static inline v8::Local<v8::Value> CompileRun(const char* source) {
   return v8::Script::Compile(v8::String::New(source))->Run();
 }

   HandleScope scope;
   Persistent<Context> ctxRequest = Context::New();

   Local<Value> foo = String::New("foo");
   ctxRequest->SetSecurityToken(foo);
   ctxRequest->Enter();  //Start execution of 1st request.

   //Create an object we can use to store properties between requests.
   Persistent<Object> sessionObject = Persistent<Object>::New(Object::New());
   Local<Object> requestGlobal = ctxRequest->Global();

   //Create a reference to session object on request global.
   requestGlobal->Set(String::New("Session"), sessionObject);
   TryCatch trycatch;

   //Add property to the session object and save it. Add an empty array too.
   CompileRun("Session.saveme = 42; if(Session.arraytest === undefined)"
              "{Session.arraytest = [];}");

   //Make sure we have an array and return the value in saveme.
   Local<Value> result = Script::Compile(String::New("if(!(Session.arraytest"
     " instanceof Array)) {throw new Error('Failed instanceof Array test.');}"
     " Session.saveme"))->Run();
   if(result.IsEmpty())
   {
     Handle<Value> exception = trycatch.Exception();
     String::AsciiValue exception_str(exception);
     printf("Exception: %s\n", *exception_str);
   }
   else
   {
     CHECK(!result->IsUndefined());
     CHECK(result->IsInt32());
     CHECK(42 == result->Int32Value());
   }

   //Save the global and reattach to 2nd request later.
   Persistent<Object> requestSavedGlobal =
     Persistent<Object>::New(ctxRequest->Global());
   ctxRequest->DetachGlobal();
   ctxRequest->Exit();

   Persistent<Context> ctxRequest2 = Context::New();
   ctxRequest2->ReattachGlobal(requestSavedGlobal);
   ctxRequest2->SetSecurityToken(foo);
   ctxRequest2->Enter();

   requestSavedGlobal->Set(String::New("Session"), sessionObject);

   //Check that we can get value of saved property in the session.
   result = Script::Compile(String::New("if(!(Session.arraytest instanceof"
     " Array)) {throw new Error('Failed instanceof Array test.');}"))->Run();
   if(result.IsEmpty())
   {
     Handle<Value> exception = trycatch.Exception();
     String::AsciiValue exception_str(exception);
     printf("Exception: %s\n", *exception_str);
   }

 --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] ArrayBuffer fast access

2012-06-16 Thread Vyacheslav Egorov
 I couldn't find any fast access defined in the JIT compiler

There are fast paths for typed arrays inside V8, they are just not
called typed arrays :-) Look for external arrays instead.

For them V8 has both specialized IC stubs (e.g. load stub:
https://github.com/v8/v8/blob/master/src/ia32/stub-cache-ia32.cc#L3508
) and support in optimizing compiler pipeline (see IR instructions:
LoadExternalArrayPointer, LoadKeyedSpecializedArrayElement,
StoreKeyedSpecializedArrayElement).

I always use typed arrays when they are available (this communicates
my intent both to the JIT compiler and to a person reading my code).
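As a concrete example of the typed-array style (a sketch; the byte value observed depends on platform endianness):

```javascript
// Two typed-array views aliasing one ArrayBuffer: this is the access
// pattern that hits V8's external-array fast paths instead of generic
// object/property resolution.
const buf = new ArrayBuffer(16);
const words = new Int32Array(buf); // 4 x 32-bit view
const bytes = new Uint8Array(buf); // byte view of the same memory

words[0] = 0x01020304;
// The views share storage, so byte-level reads see the word's bytes:
// bytes[0] is 4 on little-endian machines and 1 on big-endian ones.
console.log(bytes[0]);
```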

--
Vyacheslav Egorov


On Sat, Jun 16, 2012 at 5:44 PM, Pablo Sole pablo.s...@gmail.com wrote:
 Hello there,

 I'm embedding v8 into a binary instrumentation framework and I'm trying
 to use an ArrayBuffer/TypedBuffer for fast memory operations (like
 Tamarin/ActionScript does for the Memory object operations), but I
 couldn't find any fast access defined in the JIT compiler, so I suppose
 that a read/write to a TypedBuffer goes all the way of an object and
 property resolution. Although, for this case it could just be a range
 check and a memory load/store operation.

 So, would it be faster to use a regular array of SMIs (SMIs in the
 indexes and in the values) without holes faster than an ArrayBuffer? Is
 there any plan to provide a fast path for this case?

 Thanks,

 pablo.

 --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] Re: [SIGSEGV] v8::HandleScope::HandleScope()

2012-06-16 Thread Vyacheslav Egorov
This assertion indeed indicates that you are trying to use V8 from a
thread that does not own an isolate.

You should get exclusive access to the isolate you are going to use with Locker:

https://github.com/v8/v8/blob/master/include/v8.h#L3638

--
Vyacheslav Egorov


On Fri, Jun 15, 2012 at 1:16 PM, Serega skripac...@gmail.com wrote:
 #
 # Fatal error in ../src/isolate.h, line 440
 # CHECK(isolate != __null) failed
 #

 Program received signal SIGTRAP, Trace/breakpoint trap.
 [Switching to Thread 0x74d56700 (LWP 8551)]
 v8::internal::OS::DebugBreak () at ../src/platform-linux.cc:389
 389     }
 (gdb) step
 v8::internal::OS::Abort () at ../src/platform-linux.cc:373
 373       abort();
 (gdb) step

 Program received signal SIGABRT, Aborted.
 0x76cbd445 in raise () from /lib/x86_64-linux-gnu/libc.so.6
 (gdb) step
 Single stepping until exit from function raise,
 which has no line number information.
 [Thread 0x74d56700 (LWP 8551) exited]
 [Thread 0x7fffef7fe700 (LWP 8552) exited]
 [Thread 0x7fffe700 (LWP 8550) exited]
 [Thread 0x75d58700 (LWP 8548) exited]
 [Thread 0x76559700 (LWP 8547) exited]
 [Thread 0x77ff7700 (LWP 8546) exited]
 [Thread 0x77fd5740 (LWP 8519) exited]

 Program terminated with signal SIGABRT, Aborted.
 The program no longer exists.

 Thank you!
 One Question more, can you show example, how to isolate v8?

 On 15 Jun, 13:38, Vyacheslav Egorov vego...@chromium.org wrote:
 Hello,

 Please link against debug version of V8 to get more information about the 
 crash.

 Also ensure that the thread that invokes your event_handler owns the V8
 Isolate if you have multiple threads using V8 concurrently.

 --
 Vyacheslav Egorov







 On Fri, Jun 15, 2012 at 8:28 AM, Serega skripac...@gmail.com wrote:
  Hello! I'm having a little trouble.

  Program received signal SIGSEGV, Segmentation fault.
  [Switching to Thread 0x75f66700 (LWP 21828)]
  0x772bcff7 in v8::HandleScope::HandleScope() () from /usr/lib/
  libv8.so
  (gdb) step
  Single stepping until exit from function _ZN2v811HandleScopeC2Ev,
  which has no line number information.
  [Thread 0x75f66700 (LWP 21828) exited]
  [Thread 0x7fffe77fe700 (LWP 21832) exited]
  [Thread 0x7fffe7fff700 (LWP 21831) exited]
  [Thread 0x74f64700 (LWP 21830) exited]
  [Thread 0x75765700 (LWP 21829) exited]
  [Thread 0x76767700 (LWP 21827) exited]
  [Thread 0x77ff7700 (LWP 21826) exited]

  Program terminated with signal SIGSEGV, Segmentation fault.
  The program no longer exists.

  Why can't I create a HandleScope in a function that is invoked through a
  pointer?

  void *event_handler(...) {

       HandleScope handle_scope;

  ...
  }

  --
  v8-users mailing list
  v8-users@googlegroups.com
 http://groups.google.com/group/v8-users

 --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] [SIGSEGV] v8::HandleScope::HandleScope()

2012-06-15 Thread Vyacheslav Egorov
Hello,

Please link against debug version of V8 to get more information about the crash.

Also ensure that the thread that invokes your event_handler owns the V8
Isolate if you have multiple threads using V8 concurrently.

--
Vyacheslav Egorov

On Fri, Jun 15, 2012 at 8:28 AM, Serega skripac...@gmail.com wrote:
 Hello! I'm having a little trouble.

 Program received signal SIGSEGV, Segmentation fault.
 [Switching to Thread 0x75f66700 (LWP 21828)]
 0x772bcff7 in v8::HandleScope::HandleScope() () from /usr/lib/
 libv8.so
 (gdb) step
 Single stepping until exit from function _ZN2v811HandleScopeC2Ev,
 which has no line number information.
 [Thread 0x75f66700 (LWP 21828) exited]
 [Thread 0x7fffe77fe700 (LWP 21832) exited]
 [Thread 0x7fffe7fff700 (LWP 21831) exited]
 [Thread 0x74f64700 (LWP 21830) exited]
 [Thread 0x75765700 (LWP 21829) exited]
 [Thread 0x76767700 (LWP 21827) exited]
 [Thread 0x77ff7700 (LWP 21826) exited]

 Program terminated with signal SIGSEGV, Segmentation fault.
 The program no longer exists.

 Why can't I create a HandleScope in a function that is invoked through a
 pointer?

 void *event_handler(...) {

      HandleScope handle_scope;

 ...
 }

 --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] SetAccessor and read only?

2012-06-08 Thread Vyacheslav Egorov
+mstarzinger +rossberg

I think we need a repro and V8 version. I could not repro on ToT with
an example like:

 static Handle<Value> Getter(Local<String> property, const AccessorInfo& info) {
  return Integer::New(24);
}


static Handle<Value> Foo(const Arguments& args) {
  HandleScope scope;
  Handle<Object> obj = Object::New();

  obj->SetAccessor(String::New("foo"), Getter, 0 /* setter */,
                   Handle<Value>(),
                   static_cast<v8::AccessControl>(v8::DEFAULT),
                   static_cast<v8::PropertyAttribute>(v8::ReadOnly));

  return scope.Close(obj);
}

var obj = Foo();

var desc = Object.getOwnPropertyDescriptor(obj, "foo");

print(desc.value);
print(desc.get);
print(desc.set);
print(desc.writable);
print(obj.foo);
obj.foo = 42;
print(obj.foo);

% out/ia32.debug/d8 test.js
24
undefined
undefined
false
24
24

 What I want is a property that is writable but if not set should call the 
 getter.

I don't see how it fits into the JavaScript object model.  If you have an
accessor property without a setter you can't write into it ([[CanPut]]
will be false).

  Is there a way to remove a V8 accessor?

delete object.name would delete it. But there seems to be a different
way, see below.

 Would a call to Delete() make this a slow object?

Yes.

A different way is to use v8::Object::ForceSet to replace an accessor with
a real property. Good news: it will keep an object in fast mode if
possible (if object has enough space for another fast property). Bad
news: every time you replace an accessor with a normal data property
with ForceSet you will get a different map (hidden class) because
v8::internal::JSObject::ConvertDescriptorToField does not create a
transition.

// In theory accessor descriptors should be replaceable with data
descriptors via [[DefineOwnProperty]] (Object.defineProperty) but
there is some special handling in our code that prevents it.
https://github.com/v8/v8/blob/master/src/runtime.cc#L4459-4478
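Erik's "generate a setter that reconfigures the property" idea can be sketched in plain JavaScript (`defineLazyWritable` is a hypothetical helper name; the WebKit bindings would do the equivalent through the C++ API):

```javascript
// A getter supplies the default value until the first write, at which
// point the setter replaces the accessor with a plain data property.
function defineLazyWritable(obj, name, getter) {
  Object.defineProperty(obj, name, {
    configurable: true,
    get: getter,
    set(value) {
      // First write: swap the accessor for a normal writable property.
      Object.defineProperty(obj, name, {
        configurable: true,
        enumerable: true,
        writable: true,
        value: value,
      });
    },
  });
}

const o = {};
defineLazyWritable(o, 'foo', () => 24);
console.log(o.foo); // 24, from the getter
o.foo = 42;         // reconfigures 'foo' into a data property
console.log(o.foo); // 42
```

Note the map-transition caveat above applies equally here: each reconfiguration gives the object a different hidden class.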

--
Vyacheslav Egorov

On Fri, Jun 8, 2012 at 9:33 PM, Erik Arvidsson a...@chromium.org wrote:
 The V8 WebKit bindings generate something like this:

 object->SetAccessor(name, getter, 0 /* setter */, data,
 static_cast<v8::AccessControl>(v8::DEFAULT),
 static_cast<v8::PropertyAttribute>(v8::ReadOnly)

 There are two really strange behaviors with this:

 1. The descriptor for this reports this as writable:

 var descr = Object.getOwnPropertyDescriptor(object, name);
 descr.writable  // true!

 2. Setting the property works

 object.name = 42;
 object.name  // 42

 However, if we remove the ReadOnly flag in the call to SetAccessor we
 get a writable property that cannot be written to:

 object->SetAccessor(name, getter, 0 /* setter */, data,
 static_cast<v8::AccessControl>(v8::DEFAULT),
 static_cast<v8::PropertyAttribute>(v8::None)

 var descr = Object.getOwnPropertyDescriptor(object, name);
 descr.writable  // true

 object.name = 42;
 object.name  // not 42!


 This is pretty strange. What I want is a property that is writable but
 if not set should call the getter.

 One way I can implement this is to generate a setter too that when set
 reconfigures the property. Is there a way to remove a V8 accessor?
 Would a call to Delete() make this a slow object?


 --
 erik

 --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] SetAccessor and read only?

2012-06-08 Thread Vyacheslav Egorov
Actually CCing Michael and Andreas.
--
Vyacheslav Egorov

On Fri, Jun 8, 2012 at 10:49 PM, Vyacheslav Egorov vego...@chromium.org wrote:
 +mstarzinger +rossberg

 I think we need a repro and V8 version. I could not repro on ToT with
 an example like:

  static Handle<Value> Getter(Local<String> property, const AccessorInfo& 
 info) {
  return Integer::New(24);
 }


 static Handle<Value> Foo(const Arguments& args) {
  HandleScope scope;
  Handle<Object> obj = Object::New();

  obj->SetAccessor(String::New("foo"), Getter, 0 /* setter */,
                   Handle<Value>(),
                   static_cast<v8::AccessControl>(v8::DEFAULT),
                   static_cast<v8::PropertyAttribute>(v8::ReadOnly));

  return scope.Close(obj);
 }

 var obj = Foo();

 var desc = Object.getOwnPropertyDescriptor(obj, "foo");

 print(desc.value);
 print(desc.get);
 print(desc.set);
 print(desc.writable);
 print(obj.foo);
 obj.foo = 42;
 print(obj.foo);

 % out/ia32.debug/d8 test.js
 24
 undefined
 undefined
 false
 24
 24

 What I want is a property that is writable but if not set should call the 
 getter.

 I don't see how it fits into the JavaScript object model.  If you have an
 accessor property without a setter you can't write into it ([[CanPut]]
 will be false).

  Is there a way to remove a V8 accessor?

 delete object.name would delete it. But there seems to be a different
 way, see below.

 Would a call to Delete() make this a slow object?

 Yes.

 A different way is to use v8::Object::ForceSet to replace an accessor with
 a real property. Good news: it will keep an object in fast mode if
 possible (if object has enough space for another fast property). Bad
 news: every time you replace an accessor with a normal data property
 with ForceSet you will get a different map (hidden class) because
 v8::internal::JSObject::ConvertDescriptorToField does not create a
 transition.

 // In theory accessor descriptors should be replaceable with data
 descriptors via [[DefineOwnProperty]] (Object.defineProperty) but
 there is some special handling in our code that prevents it.
 https://github.com/v8/v8/blob/master/src/runtime.cc#L4459-4478

 --
 Vyacheslav Egorov

 On Fri, Jun 8, 2012 at 9:33 PM, Erik Arvidsson a...@chromium.org wrote:
 The V8 WebKit bindings generates something like this:

 object->SetAccessor(name, getter, 0 /* setter */, data,
 static_cast<v8::AccessControl>(v8::DEFAULT),
 static_cast<v8::PropertyAttribute>(v8::ReadOnly)

 There are two really strange behaviors with this:

 1. The descriptor for this reports this as writable:

 var descr = Object.getOwnPropertyDescriptor(object, name);
 descr.writable  // true!

 2. Setting the property works

 object.name = 42;
 object.name  // 42

 However, if we remove the ReadOnly flag in the call to SetAccessor we
 get a writable property that cannot be written to:

 object->SetAccessor(name, getter, 0 /* setter */, data,
 static_cast<v8::AccessControl>(v8::DEFAULT),
 static_cast<v8::PropertyAttribute>(v8::None)

 var descr = Object.getOwnPropertyDescriptor(object, name);
 descr.writable  // true

 object.name = 42;
 object.name  // not 42!


 This is pretty strange. What I want is a property that is writable but
 if not set should call the getter.

 One way I can implement this is to generate a setter too that when set
 reconfigures the property. Is there a way to remove a V8 accessor?
 Would a call to Delete() make this a slow object?


 --
 erik

 --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] Re: Low benchmark score on MIPS platform

2012-05-30 Thread Vyacheslav Egorov
Paul, I wonder if you have some reference MIPS numbers to share? Do
the numbers you see match Lawrence's?

Lawrence, I wonder what numbers you are aiming for? What numbers did
you see with your other JavaScript engine?

--
Vyacheslav Egorov

On Wed, May 30, 2012 at 12:00 PM, Lawrence lawrence@gmail.com wrote:
 Hi Jokob,

    Thanks for your reply and reminding me to switch building tool :)

    I run the simulator on my linux PC so it should have a better
 performance than your cell phone.
    Just want to compare the performance between V8 and the other
 javascript engine.

    I have a MIPS hardware with 700MHz CPU clock.
    As I talked, I want to replace my original javascript engine with
 V8.
    However, the result isn't my expectation.

    Really want to know whether that's the normal result for the MIPS case,
 or something that happens only on my side.


 Regards,

 On May 30, 4:46 pm, Jakob Kummerow jkumme...@chromium.org wrote:
 You should use GYP/make instead of SCons to build V8 (call make
 dependencies once, then simply make mips.release -j8), but that won't
 change performance numbers.

 You seem to be running this on a pretty fast machine; for the MIPS
 simulator I get a score of only 45.6 on my box. Simulators are slow, that's
 expected.

 I don't have any MIPS hardware. The closest I have is a Nexus S which
 scores roughly 900. Would you expect your MIPS hardware to be about one
 third as fast as a 1GHz ARM A8?







 On Wed, May 30, 2012 at 5:43 AM, Lawrence lawrence@gmail.com wrote:
  I build Version 3.11.6 for little-endian MIPS with command : scons
  arch=mips library=static sample=shell mode=release -j8 . Also setup CC
  CXX AR LD RANLIB CXXFLAGS well.

  And got the following result:
  Richards: 542
  DeltaBlue: 343
  Crypto: 593
  RayTrace: 211
  EarleyBoyer: 979
  RegExp: 120
  Splay: 293
  NavierStokes: 268
  
  Score (version 7): 348

  Besides, I also built mips simulator and got the following result
  Richards: 65.2
  DeltaBlue: 94.6
  Crypto: 49.3
  RayTrace: 123
  EarleyBoyer: 133
  RegExp: 33.2
  Splay: 243
  NavierStokes: 37.1
  
  Score (version 7): 78.8

  Could anyone get good performance on MIPS platform or just I did
  something wrong ?

  Regards,
  Lawrence

 --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] XOR two 31-bit unsigned integers much faster than XOR two 32-bit unsigned integers?

2012-05-30 Thread Vyacheslav Egorov
Minor correction: 31-bit tagged _signed_ integers are used on ia32, on
x64 you get 32-bit tagged _signed_ integers. Neither though are wide
enough to contain all values from unsigned 32-bit integer range.

Thus if you are really using them as 32bit _unsigned_ integers, e.g.
you are doing something like:

var a = (b ^ c) >>> 0; // force into uint32 and then use in
non-int32-truncating manner.

then unfortunately even V8's optimizing compiler gets confused. It
does not have designated uint32 representation and does not try to
infer when int32 can be safely used instead of uint32 (another example
of this bug: http://code.google.com/p/v8/issues/detail?id=2097).

I suggest you post your code here if possible so that we could take a look.

--
Vyacheslav Egorov

On Wed, May 30, 2012 at 4:40 PM, Jakob Kummerow jkumme...@chromium.org wrote:
 As long as you're running unoptimized code on a 32-bit V8, this is expected:
 31-bit integers are stored directly as small integers, the 32nd bit is
 used to tag them as such, whereas 32-bit integers are converted to doubles
 and stored as objects on the heap, which makes accessing them more
 expensive.

 When your code runs long enough for the optimizer to kick in, it should
 recognize this situation, use untagged 32-bit integer values in optimized
 code, and the difference between 31-bit and 32-bit values should go away. If
 it doesn't, please post a reduced test case that exhibits the behavior so
 that we can investigate. (Running the code for a second or so should be
 enough to get the full effect of optimization and make the initial
 difference negligible.)


 On Wed, May 30, 2012 at 4:31 PM, Joran Greef jo...@ronomon.com wrote:

 I am implementing a table hash
 (http://en.wikipedia.org/wiki/Tabulation_hashing) and noticed that a table
 hash using a table of 31-bit unsigned integers is almost an order of
 magnitude faster than a table hash using a table of 32-bit unsigned
 integers.

 The former has an average hash time of 0.7ms per 20 byte key for a
 31-bit hash, and the latter has an average hash time of 0.00034ms per 20
 byte key for a 32-bit hash.

 I figured that XOR on 8-bit integers would be faster than XOR on 16-bit
 integers would be faster than XOR on 24-bit integers would be faster than
 XOR on 32-bit integers, but did not anticipate such a difference between
 31-bit and 32-bit integers.

 Is there something regarding XOR that I may be missing that could explain
 the difference?


 --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] XOR two 31-bit unsigned integers much faster than XOR two 32-bit unsigned integers?

2012-05-30 Thread Vyacheslav Egorov
Is it essential that your hash should be uint32 not int32?

I would assume you can get a good performance and 32 bit of hash if
you stay with int32's:

a) fill your tables with (integer % Math.pow(2,32)) | 0 to force
uint32 into int32 and
b) make your table a typed array instead of normal array: new
Int32Array(256); [not necessary if you are running x64 version of V8]
c) stop doing >>> 0 at the end;
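Joran's tabulation hash with those three changes applied might look like this (a sketch assuming 20-byte keys, as in the original post):

```javascript
// Tabulation hash kept entirely in int32: tables are Int32Array, values
// are forced into int32 with `| 0`, and there is no `>>> 0` at the end.
const KEY_LEN = 20;

// One 256-entry table of random int32 values per key-byte position.
const tables = [];
for (let i = 0; i < KEY_LEN; i++) {
  const t = new Int32Array(256);
  for (let j = 0; j < 256; j++) {
    t[j] = (Math.random() * 0x100000000) | 0; // force into int32
  }
  tables.push(t);
}

function tabulationHash(keyBytes) {
  let h = 0;
  for (let i = 0; i < KEY_LEN; i++) {
    h ^= tables[i][keyBytes[i]]; // int32 ^ int32 stays int32
  }
  return h; // already int32; no `>>> 0` coercion needed
}
```

Usage: `tabulationHash(new Uint8Array(20))` — the result is a deterministic int32 for a given key and table set.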

--
Vyacheslav Egorov

On Wed, May 30, 2012 at 5:21 PM, Joran Greef jo...@ronomon.com wrote:
 The difference comes through when changing integer % Math.pow(2,32)
 to integer % Math.pow(2,31) when pre-generating the hash tables.

 Hash tables containing 256 random 31-bit unsigned integers are pre-generated
 for every byte of key. The hash operates on fixed-length 20 byte keys.

 Each byte of the key is XOR-red with one of the integers in the table,
 depending on the position of the byte in the key, so the XOR is dealing with
 an 8-bit unsigned integer and a 31-bit unsigned integer.

 The result is cast to an unsigned integer by >>> 0.

 On Wednesday, May 30, 2012 5:00:55 PM UTC+2, Vyacheslav Egorov wrote:

 Minor correction: 31-bit tagged _signed_ integers are used on ia32, on
 x64 you get 32-bit tagged _signed_ integers. Neither though are wide
 enough to contain all values from unsigned 32-bit integer range.

 Thus if you are really using them as 32bit _unsigned_ integers, e.g.
 you are doing something like:

 var a = (b ^ c) >>> 0; // force into uint32 and then use in
 non-int32-truncating manner.

 then unfortunately even V8's optimizing compiler gets confused. It
 does not have designated uint32 representation and does not try to
 infer when int32 can be safely used instead of uint32 (another example
 of this bug: http://code.google.com/p/v8/issues/detail?id=2097).

 I suggest you post your code here if possible so that we could take a
 look.

 --
 Vyacheslav Egorov

 On Wed, May 30, 2012 at 4:40 PM, Jakob Kummerow jkumme...@chromium.org
 wrote:
  As long as you're running unoptimized code on a 32-bit V8, this is
  expected:
  31-bit integers are stored directly as small integers, the 32nd bit is
  used to tag them as such, whereas 32-bit integers are converted to
  doubles
  and stored as objects on the heap, which makes accessing them more
  expensive.
 
  When your code runs long enough for the optimizer to kick in, it should
  recognize this situation, use untagged 32-bit integer values in
  optimized
  code, and the difference between 31-bit and 32-bit values should go
  away. If
  it doesn't, please post a reduced test case that exhibits the behavior
  so
  that we can investigate. (Running the code for a second or so should be
  enough to get the full effect of optimization and make the initial
  difference negligible.)
 
 
  On Wed, May 30, 2012 at 4:31 PM, Joran Greef jo...@ronomon.com wrote:
 
  I am implementing a table hash
  (http://en.wikipedia.org/wiki/Tabulation_hashing) and noticed that a
  table
  hash using a table of 31-bit unsigned integers is almost an order of
  magnitude faster than a table hash using a table of 32-bit unsigned
  integers.
 
  The former has an average hash time of 0.00034ms per 20-byte key for a
  31-bit hash, and the latter has an average hash time of 0.7ms per
  20-byte key for a 32-bit hash.
 
  I figured that XOR on 8-bit integers would be faster than XOR on 16-bit
  integers would be faster than XOR on 24-bit integers would be faster
  than
  XOR on 32-bit integers, but did not anticipate such a difference
  between
  31-bit and 32-bit integers.
 
  Is there something regarding XOR that I may be missing that could
  explain
  the difference?
 
 
  --
  v8-users mailing list
  v8-users@googlegroups.com
  http://groups.google.com/group/v8-users

 --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] XOR two 31-bit unsigned integers much faster than XOR two 32-bit unsigned integers?

2012-05-30 Thread Vyacheslav Egorov
 >>> is a logical right shift. It shifts bits right filling vacant positions
 with 0 (as opposed to >> which fills vacant positions with the sign bit).

In JavaScript >>> also performs ToUint32 on the input before doing the
shift (as opposed to >> which converts the input via ToInt32) so the
result is always a number from the unsigned 32-bit integer range.

See: http://es5.github.com/#x11.7.3

x >>> 0 is basically ToUint32(x)

--
Vyacheslav Egorov

On Wed, May 30, 2012 at 6:34 PM, Stephan Beal sgb...@googlemail.com wrote:
 On Wed, May 30, 2012 at 6:25 PM, Vyacheslav Egorov vego...@chromium.org
 wrote:

 c) stop doing >>> 0 at the end;


 A side question: what does >>> 0 do? I have never seen the >>> operator
 before this thread and never seen a 0-bit shift anywhere.

 :-?

 --
 - stephan beal
 http://wanderinghorse.net/home/stephan/
 http://gplus.to/sgbeal

 --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] XOR two 31-bit unsigned integers much faster than XOR two 32-bit unsigned integers?

2012-05-30 Thread Vyacheslav Egorov
Interesting. If unsigned integers are completely eliminated I would
expect it to be as fast as the 31-bit version. Strange to still see
the slowdown.

I would expect the conversion to a bucket to be independent of the hash
sign; it should be the same: bucket_index = hash & mask; where mask =
number_of_buckets - 1 (and number_of_buckets is a power of 2).

--
Vyacheslav Egorov

On Wed, May 30, 2012 at 7:02 PM, Joran Greef jo...@ronomon.com wrote:
 To clarify, Array instead of Uint32Array is slightly slower as expected:
 0.54ms per key vs 0.48ms per key.

 --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] Locally-scoped version of Persistent&lt;T&gt;?

2012-04-10 Thread Vyacheslav Egorov
A locally scoped version of Persistent is almost equivalent to Local,
except that a Local can never be weak.

 Local<Context> lcontext = pcontext;

It should be

Local<Context> lcontext = Local<Context>::New(pcontext);

--
Vyacheslav Egorov


On Fri, Apr 6, 2012 at 3:32 AM, Marcel Laverdet mar...@laverdet.com wrote:
 Hey I'm wondering why there isn't a helper class for Persistent<T> which
 will Dispose() a handle at the end of scope. It seems like right now v8
 encourages lots of unfriendly cleanup code such as:

 void hello() {
   Persistent<Thing> thing = Persistent<Thing>::New(...);
   ...
   thing.Dispose();
 }

 This kind of code is difficult to maintain in many cases, and also
 vulnerable to memory leaks when using C++ exceptions. I'd like to see a
 version of Persistent<T> that behaves similarly to std::unique_ptr<T>. v8
 already has helper classes like this with Isolate::Scope and Context::Scope.

 Or perhaps there's a way to get what I want with local handles? I tried
 something like this to no avail:

 Persistent<Context> pcontext = Context::New(NULL, global);
 Local<Context> lcontext = pcontext;
 pcontext.Dispose();

 Any advice would be appreciated!

 --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] Re: How to get a function's prototype

2012-03-14 Thread Vyacheslav Egorov
operator* defined on a v8::Handle<T> does not return a pointer to an
object but rather a pointer to the place which contains that pointer.

Why do you actually want to get a raw pointer to an object?

--
Vyacheslav Egorov


On Wed, Mar 14, 2012 at 1:44 PM, avasilev alxvasi...@gmail.com wrote:
 Seems this approach will lead to nowhere - two absolutely identical,
 consecutive C++ calls which query the same property return different,
 consecutive pointers. It seems in C++ there is also some internal
 shadowing, and pointers (at least not *Handle<Object>) don't uniquely
 identify objects.

 On Mar 14, 2:29 pm, avasilev alxvasi...@gmail.com wrote:
 I'm trying to write a C++ app to test these values: I print C++
 pointers and implement a js function taking an object and print this
 object's pointer, so that I have a picture of both what JS and C++ see.
 However, I discovered a strange thing - the function's arguments,
 transformed like this: *(args[0]->ToObject()), always appear as the
 same pointer between function calls. If I try to print two object
 arguments in the same call, these are consecutive addresses. So it
 seems v8 passes different objects than the actual ones, somehow
 shadowing the real object. The addresses are quite different from
 those that I get in C++, which leads me to think they are allocated
 on the stack. This all makes sense, but how is the shadowing mechanism
 implemented, and is there a way to reach the original objects from
 within the function?

 On Mar 13, 4:06 pm, Matthias Ernst matth...@mernst.org wrote:







  On Tue, Mar 13, 2012 at 3:00 PM, avasilev alxvasi...@gmail.com wrote:
   Thanks,
   I was just thinking that as there is GetPrototype() and SetPrototype()
   for objects, which access '__proto__', there should be also for
   'prototype'.

  Well, GetPrototype() has slightly different semantics, at least
  judging from the documentation WRT hidden prototypes.

   I'd like to use the topic to ask - what does GetPrototype() actually
   return on a function object then? Is it func.prototype.__proto__?

  I'd expect func.__proto__.
  In the Chrome console this evaluates as such:

  (function() {}).__proto__
  function Empty() {}

  Matthias

   On Mar 13, 3:50 pm, Matthias Ernst matth...@mernst.org wrote:
   On Tue, Mar 13, 2012 at 2:26 PM, avasilev alxvasi...@gmail.com wrote:
Hello,
Is there a way to get a function's prototype, equivalent to the
function's 'prototype' property, e.g.:

function Func()
{}
var a = Func.prototype;

Using Object::GetPrototype() does not do the same and returns a
different value. Setting an object's prototype via SetPrototype() to
the property value gives the desired effect of instanceof recognizing
the object as constructed by the function. Setting the prototype to
the function's GetPrototype() does not achieve this.
From the doc I don't see a way to access the prototype property of a
 function, besides getting it as an ordinary property via
    func->Get(String::New("prototype"));
Am I missing something?

   I don't think you are. Why should there be another way, apart from
   convenience? If JS specifies it as a property, especially not even a
   special one, then use the property accessor. You may of course argue
   that it's inconsistent with, say, Array::Length.

Greetings
Alex

--
v8-users mailing list
v8-users@googlegroups.com
   http://groups.google.com/group/v8-users

   --
   v8-users mailing list
   v8-users@googlegroups.com
  http://groups.google.com/group/v8-users

 --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] BinaryOpStub_MUL_Alloc_HeapNumbers = ?

2012-01-17 Thread Vyacheslav Egorov
Here is an alternative (a bit more precise) test case:
http://jsperf.com/float64-vs-float/2

--
Vyacheslav Egorov

On Tue, Jan 17, 2012 at 10:46 AM, Daniel Clifford da...@chromium.orgwrote:

 All the examples are optimized by Crankshaft. However, in the Float64Array
 case, storing the intermediate values in the array forces a memory access.
 When you use local variables, it's much faster, since the intermediate
 double operations and local variables are stored in registers, avoiding
 memory accesses and triggering only a single boxing operation at the
 return.

 Danno


 On Tue, Jan 17, 2012 at 2:01 AM, Joseph Gentle jose...@gmail.com wrote:

 I tried a simple test on jsperf to see if I can get a speedup from
 float64 arrays:

 http://jsperf.com/float64-vs-float

  In this test, using float64 arrays ends up slower than just using normal
  variables. JSPerf tests are only run for a few seconds - is that long
  enough for v8's optimizer to kick in properly? Or is that benchmark
  correct, and I'm just missing something?

 -J



 On Friday, 30 December 2011 07:33:14 UTC+11, Vyacheslav Egorov wrote:

 2) There are fields mutated in the loop that contain floating point
 values. This currently requires boxing (and boxing requires heap
 allocation, heap allocation puts pressure on GC etc). I wonder if you can
 put typed arrays (e.g. Float64Array) to work here.

 --
 Vyacheslav Egorov

  --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users


  --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users


-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users

Re: [v8-users] BinaryOpStub_MUL_Alloc_HeapNumbers = ?

2011-12-30 Thread Vyacheslav Egorov
 I tracked down the deoptimization problem - a bug in
 another part of code was occasionally making the contact list the
 number zero instead of an empty list.

Oh, interesting. I did not pay much attention to the deoptimization because
it happened in tagged-to-i, so I assumed it was a floating point value,
but apparently it was undefined.

 It seems like replacing the float value in a field with another float
 value shouldn't require an allocation. I would expect it to reuse the
 box of the previous field value..?

Unfortunately this is impossible because boxes are values and they can be
shared:

a.x = 1 / n;  // a.x will contain pointer to the boxed number (HeapNumber)
a.y = a.x;   // a.y points to the same HeapNumber as a.x.

--
Vyacheslav Egorov

On Fri, Dec 30, 2011 at 5:41 AM, Joseph Gentle jose...@gmail.com wrote:

 Cool, thanks. I tracked down the deoptimization problem - a bug in
 another part of code was occasionally making the contact list the
 number zero instead of an empty list. Fixing that gave me a ~15%
 performance boost. My library now performs 3x slower than the
 original C version, which is a huge improvement. I'd like to take that
 number down further still if I can though.

 I'll have a play with typed arrays soon. It seems like I should just
 replace some structures wholesale with float64 arrays. It'll be a bit
 nasty writing contacts[i * C_LENGTH + C_ROT_X] - but if I'm avoiding
 heap allocations, it'll be worth it. Since you can't guarantee that a
 number in javascript will remain constant, I imagine I'll want a
 compilation step which replaces all the constants with literals.

 It seems like replacing the float value in a field with another float
 value shouldn't require an allocation. I would expect it to reuse the
 box of the previous field value..?

 Thanks for the tip about inlining. I manually inlined a couple
 function calls earlier, but stopped when they stopped giving me
 performance gains. - Which makes sense considering applyImpulse was
 deoptimized. Once I've done everything I can think of, I'll take a
 good hard read through the source. Considering that I'm spending 35%
 of my time in that one function, it's a pretty obvious place for
 optimization.

 Cheers
 Joseph

 On Fri, Dec 30, 2011 at 7:33 AM, Vyacheslav Egorov vego...@chromium.org
 wrote:
  If you run with --print-code --code-comments you will see generated code
 (v8
  should be build with objectprint=on disassembler=on) and you'll have to
  locate bailout in the code and figure out why it happens.
 
  If it happens only once then the reason it probably that the function was
  optimized before it got correct type feedback.
 
  I took a very quick look through the 2nd version of code that V8 generates
  for Arbiter.applyImpulse, without trying to understand what it does,
 just by
  looking for inefficiencies. I don't see anything obvious but there are
 two
  things:
 
  1) V8 seems to exhaust inlining budget when trying to inline things into
  applyImpulse. It leaves one call in the loop not inlined, which prevents
  proper LICM and probably causes unnecessary boxing. If I relax inlining
  budget by --nolimit-inlining I get 10% boost on the benchmark.
 
  2) There are fields mutated in the loop that contain floating point
 values.
  This currently requires boxing (and boxing requires heap allocation, heap
  allocation puts pressure on GC etc). I wonder if you can put typed arrays
  (e.g. Float64Array) to work here.
 
  --
  Vyacheslav Egorov
 
 
 
  On Thu, Dec 29, 2011 at 4:18 AM, Joseph Gentle jose...@gmail.com
 wrote:
 
  Wow, thats awesome information. That would explain why the function in
  question is slow, and why inlining a couple of the function calls it
 makes
  decreases overall speed.
 
  How do I read the trace I get back? I'm getting this:
 
   DEOPT: Arbiter.applyImpulse at bailout #49, address 0x0, frame size
  264
  [deoptimizing: begin 0x1b70ac6a67f1 Arbiter.applyImpulse @49]
translating Arbiter.applyImpulse = node=432, height=216
  0x7fff6f711630: [top + 248] - 0x3ebe7f33eb9 ; [esp + 296]
  0x3ebe7f33eb9 JS Object
  0x7fff6f711628: [top + 240] - 0x2457afa6b4ad ; caller's pc
  0x7fff6f711620: [top + 232] - 0x7fff6f7116c0 ; caller's fp
  
 
  I assume address 0x0 means something the function is doing is hitting a
  null object. Does bailout #49 mean anything? The function is (later)
  repeatedly optimized and deoptimized again with bailout #8. How do I
 track
  these down?
 
  -J
 
 
  On Monday, 26 December 2011 23:56:31 UTC+11, Vyacheslav Egorov wrote:
 
  This is a multiplication stub that is usually called from non-optimized
  code (or optimized code that could not be appropriately specialized).
  Non-optimizing compiler does not try to infer appropriate
 representation for
  local variable so floating point numbers always get boxed.
 
  If this stub is high on the profile then it usually means that
 optimizing
  compiler either failed to optimize hot function which does

Re: [v8-users] Re: ASSERT(state_ != NEAR_DEATH);

2011-12-26 Thread Vyacheslav Egorov
Your weak callback (handle_weak) is empty and does not follow the contract.

Put object.Dispose(); there.

--
Vyacheslav Egorov

On Mon, Dec 26, 2011 at 12:55 PM, D C thatway...@gmail.com wrote:

 Hi:
    Thanks for your reply. I modified my source according to your advice,
  but it still does not work.
  Here is my source snippet:

 void javascript_ctx_impl::call_obj_func (v8::Persistent<Object>
 object, const char* method, int argc, Handle<Value> argv[])
 {
    HandleScope handle_scope;

    Local<Value> cb = object->Get(String::New(method));

    if (!cb->IsFunction()) {
        std::cerr << "method = " << method << std::endl;
        return;
    }

    Local<Function> do_action = Local<Function>::Cast(cb);

    TryCatch try_catch;
 /*** ASSERT HERE **/
    do_action->Call(object, argc, argv);
 /*** ASSERT HERE **/
    if (try_catch.HasCaught()) {
        v8::Local<v8::Message> msg = try_catch.Message();
        if (!msg->GetScriptResourceName().IsEmpty() &&
            !msg->GetScriptResourceName()->IsUndefined())
        {
            v8::String::AsciiValue name (msg->GetScriptResourceName());
            std::cerr << *name << std::endl;
        }
        else {
            std::cerr << "call_obj_func: runtime error." << std::endl;
        }
    }
 }

  template <typename T>
  class write_handle : public handle_impl_base
  {
     public:
         write_handle (boost::asio::io_service& io,
                 v8::Persistent<Object> local,
                 v8::Persistent<Object> h)
         : handle_impl_base (io), handle_ (h), session_ (local)
         {
         }
     public:
         void operator () (const boost::system::error_code& ec,
                 std::size_t bytes_transferred)
         {
             HandleScope handle_scope;
             if (!ec) {
                 Handle<Value> args[3] = {
                     js::instance ().safe_new_value (session_),
                     js::instance ().safe_new_value (TRUE),
                     js::instance ().safe_new_value (bytes_transferred)
                 };

                 js::instance ().call_obj_func (handle_,
                     "onHandle", 3, args);
             }
             else {
                 Handle<Value> args[3] = {
                     js::instance ().safe_new_value (session_),
                     js::instance ().safe_new_value (FALSE),
                     js::instance ().safe_new_value (bytes_transferred)
                 };
                 js::instance ().call_obj_func (handle_,
                     "onHandle", 3, args);
             }
             handle_.Dispose (); session_.Dispose ();
         }
         static void handle_weak (Persistent<Value> object, void* parameter)
         {
         }
     private:
         v8::Persistent<Object> handle_;
         v8::Persistent<Object> session_;
  };

  v8::Handle<v8::Value> js_asio_socket_ip_tcp_function::async_write
  (const v8::Arguments& args)
  {
     HandleScope hScope;
     js_asio_socket_ip_tcp_function* native_obj =
         unwrap<js_asio_socket_ip_tcp_function>(args.This());

     if (args.Length () < 4) {
         return ThrowException (Exception::TypeError(String::New(
             "async_resolve need 4 parameters.")));
     }
     /** Argument check here */
     js_stream_function* s = unwrap<js_stream_function> (args[1]->ToObject ());
     if (s == NULL) {
         return ThrowException (Exception::TypeError(String::New(
             "async_resolve parameter 2 error.")));
     }

     v8::Local<v8::Object>  p0 = args[0]->ToObject ();
     v8::Local<v8::Integer> p2 = args[2]->ToUint32 ();

     v8::Persistent<Object> handle;
     v8::Persistent<Object> sessin;

     if (args[3]->ToObject ()->IsFunction ()) {
         v8::Local<v8::Function> f =
             v8::Local<v8::Function>::Cast(args[3]->ToObject());
         handle = v8::Persistent<v8::Object>::New(f);
     }
     else {
         handle = v8::Persistent<Object>::New (args[3]->ToObject ());
     }

     handle.MakeWeak (NULL, write_handle<void>::handle_weak);
     handle.MarkIndependent ();

     sessin = v8::Persistent<Object>::New (p0);
     boost::asio::async_write (*(native_obj->socket_),
         boost::asio::buffer (s->get (), p2->Value ()),
         boost::asio::transfer_all (),
         make_concrete_handle (write_handle<void> (native_obj->
             socket_->get_io_service (), sessin, handle))
     );
     return v8::Undefined ();
  }

  If you have any suggestions, I would be very thankful.


  On Dec 26, 8:15 AM, Vyacheslav Egorov vego...@chromium.org wrote:
  Every weak callback should either revive (via ClearWeak or MakeWeak) or
  destroy (via Dispose) the handle for which it was called. This contract
 is
  described in v8.h, see the comment above WeakReferenceCallback

Re: [v8-users] BinaryOpStub_MUL_Alloc_HeapNumbers = ?

2011-12-26 Thread Vyacheslav Egorov
This is a multiplication stub that is usually called from non-optimized
code (or optimized code that could not be appropriately specialized).
Non-optimizing compiler does not try to infer appropriate representation
for local variable so floating point numbers always get boxed.

If this stub is high on the profile then it usually means that optimizing
compiler either failed to optimize hot function which does a lot of
multiplications or it failed to infer an optimal representation for some
reason.

Bottom up profile should show which functions invoke the stub. Then you
should inspect --trace-opt --trace-bailout --trace-deopt output to figure
out what optimizer does with those function.

--
Vyacheslav Egorov

On Mon, Dec 26, 2011 at 7:00 AM, Joseph Gentle jose...@gmail.com wrote:

 What does it mean when I see BinaryOpStub_MUL_Alloc_HeapNumbers in my
 profile? Does that mean the compiler is putting local number variables on
 the heap? Why would it do that?

 -J

 --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users

Re: [v8-users] Fast Property Access in V8 JavaScript Engine

2011-12-17 Thread Vyacheslav Egorov
Yes, new Thing(true) and new Thing(false) will produce objects with
different hidden-classes (though all new Thing(x) for a fixed x will have
the same hidden class).

Sites that see both kinds of Things will go megamorphic (become slower than
monomorphic, yet faster than just going to the runtime). The optimizer will
try to deal with polymorphism by inlining a limited number of polymorphic
cases.

--
Vyacheslav Egorov

On Sat, Dec 17, 2011 at 8:51 PM, Marcel Laverdet mar...@laverdet.comwrote:

 I only have a loose understanding of v8's hidden classes and optimizer,
 but there's something I've always wondered about hidden classes that I
 figured I could hijack this thread for..

 How do hidden classes handle this case?

 function Thing(switch) {
   if (switch) {
 this.whatever = true;
   }
   ...
 }

 From what I understand (basically just watching those videos of Lars Bak
 talk about v8 internals) it seems in this case, instances of Thing would
 have divergent hidden classes right from the very start. This would then
 ripple out into the optimizer leading to code that couldn't be optimized
 with fast property access.

 It seems like this would be a fairly common case.. is there something I'm
 missing that makes this not as bad as it seems?


 On Sat, Dec 17, 2011 at 10:19 AM, Vyacheslav Egorov 
 vego...@chromium.orgwrote:

 It should be noted that hidden classes have at least two advantages:

 1) they go beyond simple index to property mappings and can capture many
 different aspects (types of backing stores, constant function properties,
 type specific optimized construction stubs, estimated number of properties
 that will be added to an object).

 2) they allow optimizing memory usage: dictionary-based backing stores are
 much less compact.

 Your suggestion is of course a viable and well-known optimization (a
 similar approach is used, for example, by LuaJIT2) that can be used to
 make monomorphic dictionary lookups faster.

 However it cannot replace hidden classes entirely as noted above.

 --
 Vyacheslav Egorov


 On Sat, Dec 17, 2011 at 11:31 AM, Vladimir Nesterovsky 
 vladi...@nesterovsky-bros.com wrote:

 Yesterday I've seen an article about some design principles of V8
 JavaScript Engine (http://code.google.com/apis/v8/design.html). In
 particular V8 engine optimizes property access using dynamically
 created hidden classes, they are derived when a new property is
 created (deleted) on the object.

 We would like to suggest a slightly different strategy, which exploits
 the cache matches, and does not require a dynamic hidden classes.

 Consider an implementation data type with following characteristics:

 a) object is implemented as a hash map of property id to property
 value: Map<ID, Value>;
 b) it stores data as an array of pairs and can be accessed directly:
 Pair<ID, Value> values[];
 c) property index can be acquired with a method: int index(ID);

 A pseudo code for the property access looks like this:

 pair = object.values[cachedIndex];

 if (pair.ID == propertyID)
 {
   value = pair.Value;
 }
 else
 {
  // Cache miss.
  cachedIndex = object.index(propertyID);
   value = object.values[cachedIndex].Value;
 }

 This approach brings us back to dictionary like implementation but
 with important optimization of array speed access when property index
 is cached, and with no dynamic hidden classes.

 See also
 http://www.nesterovsky-bros.com/weblog/2011/12/17/FastPropertyAccessInV8JavaScriptEngine.aspx
 --
 Vladimir Nesterovsky

 --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users


  --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users


  --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users


-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users

Re: [v8-users] Hint V8?

2011-12-16 Thread Vyacheslav Egorov
You can't hint it explicitly.

But V8's optimizing compiler gathers type feedback from inlined caches in
the non-optimized code.

Essentially this means that program itself gives hints to V8 while it
executes, e.g. if you write function like

function add(x, y) { return x + y; }

and you will call it with numbers only V8 will in the end generate code
that is specialized appropriately for the number case.

--
Vyacheslav Egorov

On Fri, Dec 16, 2011 at 12:23 PM, Egor Egorov egor.ego...@gmail.com wrote:

 Let's suppose I have a function that expects an argument of a certain
 type, a number.

 What if I somehow hint the V8 compiler that here I expect only a number so
 that V8 optimises accordingly without guesses? Would such a hint make sense
 for V8 optimisation process?

  --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users

Re: [v8-users] [couldn't find pc offset for node=0]

2011-10-11 Thread Vyacheslav Egorov
Marcel,

It's really hard to say anything without reproduction. It might be a
genuine bug in the deoptimizer.

--
Vyacheslav Egorov



On Tue, Oct 11, 2011 at 9:53 PM, Marcel Laverdet mar...@laverdet.com wrote:
 Hey there I'm trying to track down an issue with my v8 application (on
 NodeJS). This is on OS X Lion and x64 v8. I've noticed this error on v8
 3.6.6 and also bleeding_edge.
 The issue is that every now and then I'm seeing this:
 [couldn't find pc offset for node=0]
 [method: Future.wait]
 [source:
  function () { Future.wait(this); return this.get(); }
 Bus error: 10
 This error seems to come from deoptimizer.cc and reproducing the error is
 difficult. The only thing strange about my application is that I'm using
 node-fibers [https://github.com/laverdet/node-fibers], which adds coroutine
 support to v8. Each coroutine is actually just a pthread and
 pthread_cond_signal is used to simulate coroutines** which play nicely with
 v8::Locker. So it seems that this issue may be related to threads,
 potentially 100's of threads using the same v8 isolate (locking and
 unlocking where appropriate).
 If I run this instead with a debug build of v8 I get this error:
 #
 # Fatal error in /Users/marcel/code/node/deps/v8/src/objects-inl.h, line
 2996
 # CHECK(kind() == OPTIMIZED_FUNCTION) failed
 #
 I have NOT been able to reproduce this problem on an ia32 build with the
 same application and machine; it seems to be just x64.
 I'm wondering where I should start looking from here. Is this a bug in v8,
 should I work on distilling a test case for you guys?
 ** Actually the default version of node-fibers uses some not-supported v8
 hacking to get true coroutines, but you can modify the build to use pthreads
 instead which is totally within the confines of v8's API. All my testing was
 done with the pthread_cond_signal version of node-fibers.

 --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users

-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users


Re: [v8-users] Re: Assertion in Isolate::Current()

2011-10-10 Thread Vyacheslav Egorov
You can not declare handle scope in a thread that does not yet own V8
Isolate. You should enter an Isolate first.

--
Vyacheslav Egorov

On Mon, Oct 10, 2011 at 1:02 PM, Adrian adrianbash...@gmail.com wrote:

 Hi,

 This is the exact, complete program, that is failing..

 class TestThread : public Thread::Thread

 {

 public:

   TestThread(){};

   void TestThread::run()

   {

  v8::HandleScope scope; // <-- This fails

   }

 };


 int main(int argc, const char* argv[])

 {

   v8::HandleScope scope; // <-- This works

   TestThread mythread;

   mythread.start();

   mythread.waitDone();

 }


  --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users


-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users

Re: [v8-users] Re: Assertion in Isolate::Current()

2011-10-10 Thread Vyacheslav Egorov
You have to use Isolate::Enter or Isolate::Scope.

If you want to use the default isolate just acquire isolate lock with
Locker.

v8.h includes very detailed comments. Please read them for further
information:

http://code.google.com/p/v8/source/browse/trunk/include/v8.h#3510

--
Vyacheslav Egorov

On Mon, Oct 10, 2011 at 1:16 PM, Adrian Basheer adrianbash...@gmail.comwrote:

 Hi,

 I am afraid I do not know how to enter an isolate (I don't remember it
 coming up in the v8 tutorial)...

 Can you help me please?

 Thanks!

 Adrian.


 On Mon, Oct 10, 2011 at 2:09 PM, Vyacheslav Egorov 
 vego...@chromium.orgwrote:

 You can not declare handle scope in a thread that does not yet own V8
 Isolate. You should enter an Isolate first.

 --
 Vyacheslav Egorov


 On Mon, Oct 10, 2011 at 1:02 PM, Adrian adrianbash...@gmail.com wrote:

 Hi,

 This is the exact, complete program, that is failing..

 class TestThread : public Thread::Thread

 {

 public:

   TestThread(){};

   void TestThread::run()

   {

  v8::HandleScope scope; // <-- This fails

   }

 };


 int main(int argc, const char* argv[])

 {

    v8::HandleScope scope; // <-- This works

   TestThread mythread;

   mythread.start();

   mythread.waitDone();

 }


  --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users


  --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users


  --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users


-- 
v8-users mailing list
v8-users@googlegroups.com
http://groups.google.com/group/v8-users

Re: [v8-users] Re: Large unexpected Code FreeStoreAllocationPolicy resize

2011-09-02 Thread Vyacheslav Egorov
Hi,

I have found only one instantiation of the List<T, P> template with
T=Code* and P=FreeStoreAllocationPolicy: its type alias CodeList
(see list.h).

The only instance of the CodeList I could find is allocated on the
stack (see KeyedIC::ComputeStub) and should not leak.

But this interpretation does not fit your trace. So I think we can
assume that compiler merged some template methods together and it's
either

  List<Handle<Object> > entered_contexts_; or List<Context*> saved_contexts_;

that get resized when you perform Context::Enter.

My guess would be that you have unbalanced Context::Enter and
Context::Exit so your stacks of entered/saved contexts grow without
bounds.

Please check that they are balanced. You can also add some debug
prints into Context::Enter to see length of entered_contexts_ stack.

--
Vyacheslav Egorov


On Fri, Sep 2, 2011 at 4:09 PM, Stuart hapaliba...@googlemail.com wrote:
 The system is still running and has since allocated 2 x 150MB more
 using
 v8::internal::List<v8::internal::Code*,v8::internal::FreeStoreAllocationPolicy>::Resize

 The stack profile is similar; calling back into V8 using stored
 function and parameter handles.

 Any clues? Am I even correct in assuming this has been allocated for
 code and not data?

 Stuart.



 On Aug 31, 10:46 pm, Stuart hapaliba...@googlemail.com wrote:
 I need some help understanding this call stack.

 The resize results in a 47MB allocation that never gets freed. This
 happened twice during a 6 hour run.

 StackTrace Content
  v8director!v8::internal::List<v8::internal::Code*,v8::internal::FreeStoreAllocationPolicy>::Resize+22 bytes, 0xE6BF76
  v8director!v8::Context::Enter+199 bytes, 0xE1F2D7
  v8director!`anonymous namespace'::DecoupledCall::call
  v8director!boost::asio::detail::completion_handler<boost::_bi::bind_t<void,boost::_mfi::mf0<void,`anonymous

 DecoupledCall is making a synchronous (no locks) call into V8 using
 previously stored persistent handles to a function and parameters. I
 have seen this leak occur twice with this stack, but it is exceptionally
 rare; two times over thousands and thousands of iterations. I would
 love to learn that it is related to this stack, but I suspect it's a
 coincidence.

 The parameter v8::internal::Code suggests this is an allocation for
 code. Why would V8 suddenly need 50MB of code storage after running
 the same thing for 6 hours?

 What am I seeing? Could a closure cause this? Would any logging help?

 Stuart.



Re: [v8-users] Re: Garbage Collection very slow on Android with latest stable line

2011-09-01 Thread Vyacheslav Egorov
 I did this and I was still seeing the mark-sweep take about 120 ms+.
 While still too slow this leads me to think that it was not the
 compression part that was slow but the sweep.  I thought I was getting
 hammered by all the memmoves but it looks like the sweep was the real
 issue.

Let's additionally enable the --trace-gc-nvp flag to get timings for
the separate phases of a mark-sweep GC.

 1)  Since we are in a limited memory environment we have no issue with
 setting a hard limit to how much memory V8 can use (Android will kill
 the application if you try to allocate more memory than the OS says is
 appropriate).  Is there a way to create just one young generation to a
 set size (say 16 megs), then force nothing but Scavenger sweeps and
 just keep using the one heap?  We would also need to know when we are
 getting close to the high water mark and then start forcing GCs every
 frame to get our footprint down.


I am not sure I understand your question, but here are a number of concerns:

a) V8's heap does not have a uniform structure, so some spaces (code, map,
large object) are not managed by the scavenger.

b) The scavenger is a copying collector, which performs best when there is
little to copy around. Managing the whole heap (32 MB in your case)
with the scavenger would completely destroy performance (it would be far
worse than mark-sweep).

c) Your heap is larger than 16 mb.

Also, here is a small idea for you: try bumping max_semispace_size_
and initial_semispace_size_ (see heap.cc). Increase
initial_semispace_size_ to 1 or 2 MB and max_semispace_size_ to 4 or 8
MB and see how that affects GC timings.

 2) Is there a good way to limit how much data is swept during the
 Mark-Sweep phase?  For example: I only want to sweep up to N objects,
 or maybe spend X amount of time during the sweep phase?  It looks
 like I am spending most of my cycles in:  void
 LargeObjectSpace::FreeUnmarkedObjects().  It would be cool if I could
 just release N number of objects each GC call and then keep doing GC's
 every frame until all the unmarked objects are released.

This is possible. (We are doing similar stuff in our experimental GC
developed on the experimental/gc branch, but it is not thoroughly tested on
ARM yet, so I would not advise using it.)

FreeUnmarkedObjects is a very simple routine and I would not expect it
to consume a lot of time.

Can you add some debug prints into it? For example like this:

Index: src/spaces.cc
===================================================================
--- src/spaces.cc   (revision 9090)
+++ src/spaces.cc   (working copy)
@@ -2932,6 +2932,12 @@
 
 
 void LargeObjectSpace::FreeUnmarkedObjects() {
+  double start = OS::TimeCurrentMillis();
+  int alive = 0;
+  int alivebytes = 0;
+  int freed = 0;
+  int freedbytes = 0;
+
   LargeObjectChunk* previous = NULL;
   LargeObjectChunk* current = first_chunk_;
   while (current != NULL) {
@@ -2941,7 +2947,11 @@
       heap()->mark_compact_collector()->tracer()->decrement_marked_count();
       previous = current;
       current = current->next();
+      alive++;
+      alivebytes += object->Size();
     } else {
+      freed++;
+      freedbytes += object->Size();
       // Cut the chunk out from the chunk list.
       LargeObjectChunk* current_chunk = current;
       current = current->next();
@@ -2962,6 +2972,14 @@
       current_chunk->Free(current_chunk->GetPage()->PageExecutability());
     }
   }
+  double end = OS::TimeCurrentMillis();
+  PrintF("FreeUnmarkedObjects took %d ms\n"
+         "%d bytes in %d objects alive, %d bytes in %d objects freed\n",
+         static_cast<int>(end - start),
+         alivebytes,
+         alive,
+         freedbytes,
+         freed);
 }


--
Vyacheslav Egorov, Software Engineer, V8 Team.
Google Denmark ApS.



On Thu, Sep 1, 2011 at 2:22 AM, Chris Jimison cjimi...@ngmoco.com wrote:
 Hi Vyacheslav,


 editing flag-definitions.h to turn these flags on during compile time.
 Alternatively you can call v8::V8::SetFlagsFromString to pass them to V8

 AWESOME.  That did the trick.  Here is an excerpt from my log
 statements:

 platform-posix.cc(10721): *** Scavenger GC Type Selected
 platform-posix.cc(10721):  Profile Complete 
 PerformGarbageCollection 5
 platform-posix.cc(10721): (149)Logging here
 platform-posix.cc(10721): Scavenge 29.5 -> 29.2 MB,
 platform-posix.cc(10721): (149)Logging here
 platform-posix.cc(10721): 5 ms.
 platform-posix.cc(10721): (149)Logging here


 platform-posix.cc(10721): *** Scavenger GC Type Selected
 platform-posix.cc(10721):  Profile Complete 
 PerformGarbageCollection 2
 platform-posix.cc(10721): (149)Logging here
 platform-posix.cc(10721): Scavenge 33.4 -> 32.5 MB,
 platform-posix.cc(10721): (149)Logging here
 platform-posix.cc(10721): 3 ms.
 platform-posix.cc(10721): (149)Logging here


 platform-posix.cc(10721): *** Old Generation Promotion Limit Reached,
 MARK_COMPRESS GC Type selected
 platform-posix.cc(10721): (149)Logging here
 D/jni/NgAndroidApp.cpp(10721): (48)Finished

Re: [v8-users] Re: V8 Garbage collection in multitasking mode

2011-09-01 Thread Vyacheslav Egorov
 1. Can I transfer a function object from a script compiled in the global
 context and main thread Isolate into the other thread's Isolate and
 execute there?

No. You have to serialize it somehow (e.g. JSON) in one isolate and
deserialize it in another one.

 2. If not, Is what I am doing a valid v8 API usage at all? What could
 be other ways to trigger GC more often?

Seems valid to me. But I am a bit confused why forcing GC manually helps so much.

It's not completely clear, though, why you need a separate context
for your task: the function you pass as a value (not as source)
already has a context attached and will use it, not the context you
have created.

Another thing: do you use weak handles extensively? V8 might be
overwhelmed by weakly reachable objects. That might cost a lot
especially if those weakly reachable objects hold onto contexts which
you seem to allocate for each task.
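For intuition about what "weakly reachable" means, here is a sketch using std::weak_ptr as an analogy for a Persistent handle made weak with MakeWeak. This is only an analogy, not the V8 API:

```cpp
#include <cassert>
#include <memory>

// A weak reference does not keep its target alive; once the last
// strong reference is gone the object becomes reclaimable.
bool WeaklyHeldObjectSurvives() {
  std::weak_ptr<int> weak;
  {
    std::shared_ptr<int> strong = std::make_shared<int>(42);
    weak = strong;            // weak reference to a live object
    assert(!weak.expired());  // still alive: a strong reference exists
  }                           // last strong reference dropped here
  return !weak.expired();     // false: only weakly reachable, so reclaimed
}
```

In V8 terms, piling up many such weakly reachable objects, each keeping a whole context alive until its weak callback runs, gives the collector a lot of work per cycle.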

--
Vyacheslav Egorov


On Thu, Sep 1, 2011 at 8:34 PM, Dennis H gluck...@gmail.com wrote:
 Hi Vyacheslav,

 I see your point.

 I don't use a separate Isolate for each thread and all tasks use
 function objects compiled in the global Context/Isolate.
 This might be a trouble of course, but I was not sure there is a good
 way to create a function object
 in main Context and then transfer it to another Isolate/Context.

 Essentially I wanted to achieve following usage:

 // thread function
 function foo () {}

 // starts execution in a separate thread
 task(foo);

 I do make them run exclusively (at least I hope I do) by using

    Locker locker;
    Locker::StartPreemption(preemption_interval);
    HandleScope scope;

    context = Context::New();
    context->Enter();

 The code is in a fork from nodejs. You can see the code here:
 https://github.com/bfrancojr/node/blob/node-task/src/node_task.cc

 The funny part is that if I create just a couple of threads the memory
 is stable, as long as I call V8::IdleNotification() every 5 sec.

 If I increase the number of tasks to 100, it will start to grow and
 the process will run out of memory. However, if I start to call
 V8::IdleNotification() every 0.3 seconds it fixes it again.

 I've got a feeling that v8 treats the GC as one of the tasks, meaning
 that the more tasks I create, the more time is allocated to 'garbage
 producers' and the smaller the relative portion of time for GC becomes.
 Maybe I am wrong.

 So the questions are:

 1. Can I transfer a function object from a script compiled in the global
 context and main thread Isolate into the other thread's Isolate and
 execute there?
 2. If not, Is what I am doing a valid v8 API usage at all? What could
 be other ways to trigger GC more often?

 Thanks,
 Dennis

 On Aug 31, 12:44 am, Vyacheslav Egorov vego...@chromium.org wrote:
 Hi Dennis,

 V8's GC is stop-the-world type so you don't have to do anything
 special to make it keep up. If you are getting OOM that most
 probably means you have a leak somewhere. Try tracing GC with
 --trace-gc flag to see how heap grows.

 Also you can't run JavaScript in parallel on V8 unless you create
 several isolates.

 If you are using a single isolate from many threads you have to ensure
 that only one thread is executing JS at the given moment.

 --
 Vyacheslav Egorov

 On Wed, Aug 31, 2011 at 2:28 AM, Dennis H gluck...@gmail.com wrote:
  Dear v8 Developers,

  I am relatively new to v8 internals, but here is what I found:
  I tried to create an app which has multiple tasks running in parallel.
  The v8::internal::Thread API seems to work fine, the trouble is I
 didn't find a good way to make garbage collection keep up.

  I do call the V8::IdleNotification() in the main event loop
 periodically, but it doesn't scale as the number of threads gets
 bigger. Essentially, if I create a lot of tasks, the process would
 reliably run out of memory pretty quickly.

  How is the garbage collection supposed to be handled correctly with
  multiple threads?

  I used v3.4.14.

  Thanks,
  Dennis



Re: [v8-users] Re: V8 Garbage collection in multitasking mode

2011-09-01 Thread Vyacheslav Egorov
 Would it produce weak references?  I think not, right?

No. When I am talking about weak references I mean v8 API Handles that
were made weak with MakeWeak method.

--
Vyacheslav Egorov


On Thu, Sep 1, 2011 at 11:09 PM, Dennis H gluck...@gmail.com wrote:
 Hi Vyacheslav,

 I guess you are right: the additional context is not really needed there.
 Let me try to remove it and see if it improves GC.

 I am not really using any weakly reachable objects explicitly. I am not
 sure though whether they can potentially be created in JavaScript.
 My test case produces a GC-collectable string in an endless loop...

 var task = require("task"),
    x = 1;
 for (var i = 0; i < 10; i++) {
    var t = task.createTask(function() {
       var r = "";
       while (1) {
          r = "BB" + x;
       }
    });
    console.log("Task created " + i);
    t.run();
 }

 Would it produce weak references?  I think not, right?

 Thanks,
 Denis

 On Sep 1, 12:47 pm, Vyacheslav Egorov vego...@chromium.org wrote:
   1. Can I transfer a function object from a script compiled in the global
  context and main thread Isolate into the other thread's Isolate and
  execute there?

 No. You have to serialize it somehow (e.g. JSON) in one isolate and
 deserialize it in another one.

  2. If not, Is what I am doing a valid v8 API usage at all? What could
  be other ways to trigger GC more often?

 Seems valid to me. But I am a bit confused why forcing GC manually helps so
 much.

 It's not completely clear though why do you need a separate context
 for your task because the function you pass as value (not as source)
 already has a context attached and will use it but not the context you
 have created.

 Another thing: do you use weak handles extensively? V8 might be
 overflowed by weakly reachable objects. That might cost a lot
 especially if those weakly reachable objects hold onto contexts which
 you seem to allocate for each task.

 --
 Vyacheslav Egorov

 On Thu, Sep 1, 2011 at 8:34 PM, Dennis H gluck...@gmail.com wrote:
  Hi Vyacheslav,

  I see your point.

  I don't use a separate Isolate for each thread and all tasks use
  function objects compiled in the global Context/Isolate.
  This might be a trouble of course, but I was not sure there is a good
  way to create a function object
  in main Context and then transfer it to another Isolate/Context.

  Essentially I wanted to achieve following usage:

  // thread function
  function foo () {}

  // starts execution in a separate thread
  task(foo);

  I do make them run exclusively (at least I hope I do) by using

     Locker locker;
     Locker::StartPreemption(preemption_interval);
     HandleScope scope;

     context = Context::New();
      context->Enter();

  The code is in a fork from nodejs. You can see the code here:
 https://github.com/bfrancojr/node/blob/node-task/src/node_task.cc

  The funny part is that if I create just a couple of threads the memory
  is stable, while I call V8::IdleNotification()  every 5 sec.

  If I increase the number of tasks to 100 it will start to grow and
  the process will run out memory. However If I start to call
  V8::IdleNotification() every 0.3 seconds it fixes it again.

  I've got a feeling the v8 treats the GC as one of the tasks, meaning
  that the more tasks I create the more time is allocated to 'garbage
  producers'
  and the smaller is a relative portion of time for GC. May be I am
  wrong.

  So the questions are:

   1. Can I transfer a function object from a script compiled in the global
  context and main thread Isolate into the other thread's Isolate and
  execute there?
  2. If not, Is what I am doing a valid v8 API usage at all? What could
  be other ways to trigger GC more often?

  Thanks,
  Dennis

  On Aug 31, 12:44 am, Vyacheslav Egorov vego...@chromium.org wrote:
  Hi Dennis,

  V8's GC is stop-the-world type so you don't have to do anything
  special to make it keep up. If you are getting OOM that most
  probably means you have a leak somewhere. Try tracing GC with
  --trace-gc flag to see how heap grows.

  Also you can't run JavaScript in parallel on V8 unless you create
  several isolates.

  If you are using a single isolate from many threads you have to ensure
  that only one thread is executing JS at the given moment.

  --
  Vyacheslav Egorov

  On Wed, Aug 31, 2011 at 2:28 AM, Dennis H gluck...@gmail.com wrote:
   Dear v8 Developers,

   I am relatively new to v8 internals, but here is what I found:
   I tried to create an app which has multiple tasks running in parallel.
   The v8::internal::Thread API seems to work fine, the trouble is I
   didn't find a good way to make garbage collection keep up.

   I do call the V8::IdleNotification() in the main event loop
   periodically, but it doesn't scale as the number of threads gets
   bigger. Essentially, if I create a lot of tasks, the process would
   reliably run out of memory pretty quickly.

   How is the garbage collection supposed to be handled correctly with
   multiple threads

Re: [v8-users] V8 Garbage collection in multitasking mode

2011-08-31 Thread Vyacheslav Egorov
Hi Dennis,

V8's GC is a stop-the-world collector, so you don't have to do anything
special to make it keep up. If you are getting OOMs, that most
probably means you have a leak somewhere. Try tracing the GC with the
--trace-gc flag to see how the heap grows.

Also you can't run JavaScript in parallel on V8 unless you create
several isolates.

If you are using a single isolate from many threads you have to ensure
that only one thread is executing JS at the given moment.
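The exclusion discipline can be sketched with a plain std::mutex standing in for v8::Locker; the names and structure below are illustrative, not the real API:

```cpp
#include <cassert>
#include <mutex>
#include <thread>
#include <vector>

// One mutex plays the role of v8::Locker: only the holder may
// "execute JS" against the shared isolate state.
std::mutex isolate_mutex;
int isolate_state = 0;  // stand-in for mutable isolate state

void RunScript(int iterations) {
  for (int i = 0; i < iterations; ++i) {
    std::lock_guard<std::mutex> lock(isolate_mutex);  // like taking a Locker
    ++isolate_state;  // touch shared state only while holding the lock
  }
}

int RunTasks(int threads, int iterations) {
  std::vector<std::thread> pool;
  for (int t = 0; t < threads; ++t) pool.emplace_back(RunScript, iterations);
  for (std::thread& th : pool) th.join();
  return isolate_state;
}
```

Without the lock the increments would race; with it, every "script" runs exclusively, which is the guarantee Locker provides around a single isolate.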

--
Vyacheslav Egorov


On Wed, Aug 31, 2011 at 2:28 AM, Dennis H gluck...@gmail.com wrote:
 Dear v8 Developers,

 I am relatively new to v8 internals, but here is what I found:
 I tried to create an app which has multiple tasks running in parallel.
 The v8::internal::Thread API seems to work fine; the trouble is I
 didn't find a good way to make garbage collection keep up.

 I do call V8::IdleNotification() in the main event loop
 periodically, but it doesn't scale as the number of threads gets
 bigger. Essentially, if I create a lot of tasks, the process would
 reliably run out of memory pretty quickly.

 How is the garbage collection supposed to be handled correctly with
 multiple threads?

 I used v3.4.14.

 Thanks,
 Dennis



Re: [v8-users] Crankshaft Preemption Mechanism

2011-08-31 Thread Vyacheslav Egorov
Hi Kyle,

The optimizing compiler inserts stack checks (the HStackCheck instruction)
explicitly at the loop body's entry[1].

It also does an optimization pass[2] to remove redundant stack checks
that are dominated by function calls (as functions always do a stack
check in the prologue).

Stack checks are an important part of V8's interruption mechanism, so both
compilers emit them to make all loops interruptible.

[1] http://code.google.com/p/v8/source/browse/trunk/src/hydrogen.cc#2823
[2] http://code.google.com/p/v8/source/browse/trunk/src/hydrogen.cc#1247
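Conceptually, the mechanism amounts to a cheap check at every loop back edge that another thread can trigger. The model below uses an atomic flag instead of V8's real stack-limit comparison and is only a sketch:

```cpp
#include <atomic>
#include <cassert>

// Simplified model of loop interruption: generated code compares the
// stack pointer against a limit at each back edge; flipping the limit
// from another thread makes every running loop hit an interrupt point.
std::atomic<bool> interrupt_requested(false);

long RunLoop(long max_iterations) {
  long i = 0;
  for (; i < max_iterations; ++i) {
    // "Stack check" at the loop back edge.
    if (interrupt_requested.load()) break;
  }
  return i;
}
```

Removing a check that is dominated by a call is safe precisely because the callee performs the same comparison in its own prologue.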

--
Vyacheslav Egorov


On Wed, Aug 31, 2011 at 9:27 PM, Kyle kyle.n.mor...@gmail.com wrote:
 Hello,

 Some time ago I noticed that the v8 compiler was inserting stack limit
 checks at the back edges of loops.  I later found out that this check
 was doubling as a preemption mechanism to interrupt potentially long-
 running code.  However, I've noticed that the hydrogen/lithium
 compiler included with crankshaft does not seem to include these
 checks.  Is there a particular reason for this?  Is the previous
 design for JavaScript preemption no longer being pursued?



Re: [v8-users] Crankshaft Preemption Mechanism

2011-08-31 Thread Vyacheslav Egorov
 but the optimized code only appears to be checking the stack limit at
 the function entry, not in the loop itself.

Yes. As I said in my previous email: the optimizing compiler eliminates
stack checks that are dominated by calls, because called functions
should at least have a stack check in the prologue.

However, there seems to be a minor issue here. Apparently the HandleApiCall
builtin does not perform any stack checks, thus breaking
HStackCheckEliminator's assumption that every call implies a stack
check.

If print in your example is a JS function, everything is fine because a
JS function has a stack check in the prologue. But if print is an API
function, then this loop will have no interruption point, which is bad.

--
Vyacheslav Egorov


On Wed, Aug 31, 2011 at 10:39 PM, Kyle Morgan kyle.n.mor...@gmail.com wrote:
 Hi Vyacheslav,

 Allow me to demonstrate what I mean.  I ran the v8 shell with code
 containing the following function.

 function loop() {
   for (var i = 0; i < 5; ++i) {
     print(i);
   }
 }

 It appears to be emitting the following code (optimized and unoptimized).

 --- Code ---
 kind = FUNCTION
 name = loop
 Instructions (size = 196)
 0x7f6f3e26fca0     0  55             push rbp
 0x7f6f3e26fca1     1  4889e5         REX.W movq rbp,rsp
 0x7f6f3e26fca4     4  56             push rsi
 0x7f6f3e26fca5     5  57             push rdi
 0x7f6f3e26fca6     6  41ff7598       push [r13-0x68]
 0x7f6f3e26fcaa    10  493b6508       REX.W cmpq rsp,[r13+0x8]
 0x7f6f3e26fcae    14  7305           jnc 21  (0x7f6f3e26fcb5)
 0x7f6f3e26fcb0    16  e82bf4fdff     call 0x7f6f3e24f0e0     ;; debug:
 statement 36
                                                             ;; code: STUB,
 StackCheckStub, minor: 0
 0x7f6f3e26fcb5    21  33c0           xorl rax,rax
 0x7f6f3e26fcb7    23  488945e8       REX.W movq [rbp-0x18],rax
 0x7f6f3e26fcbb    27  e94d00     jmp 109  (0x7f6f3e26fd0d)
 0x7f6f3e26fcc0    32  ff7627         push [rsi+0x27]
 0x7f6f3e26fcc3    35  ff75e8         push [rbp-0x18]
 0x7f6f3e26fcc6    38  48b9919790636f7f REX.W movq rcx,0x7f6f63909791
  ;; object: 0x7f6f63909791 String[5]: print
 0x7f6f3e26fcd0    48  e80b98     call 0x7f6f3e2694e0     ;; debug:
 statement 76
                                                             ;; code:
 contextual, CALL_IC, UNINITIALIZED, in_loop, argc = 1
 0x7f6f3e26fcd5    53  488b75f8       REX.W movq rsi,[rbp-0x8]
 0x7f6f3e26fcd9    57  488b45e8       REX.W movq rax,[rbp-0x18]
 0x7f6f3e26fcdd    61  a801           test al,0x1
 0x7f6f3e26fcdf    63  7405           jz 70  (0x7f6f3e26fce6)
 0x7f6f3e26fce1    65  e89a83feff     call 0x7f6f3e258080     ;; debug:
 statement 43
                                                             ;; debug:
 position 67
                                                             ;; code: STUB,
 ToNumberStub, minor: 0
 0x7f6f3e26fce6    70  4c01e0         REX.W addq rax,r12
 0x7f6f3e26fce9    73  7004           jo 79  (0x7f6f3e26fcef)
 0x7f6f3e26fceb    75  a801           test al,0x1
 0x7f6f3e26fced    77  720d           jc 92  (0x7f6f3e26fcfc)
 0x7f6f3e26fcef    79  4c29e0         REX.W subq rax,r12
 0x7f6f3e26fcf2    82  4c89e2         REX.W movq rdx,r12
 0x7f6f3e26fcf5    85  e80658feff     call 0x7f6f3e255500     ;; code:
 BINARY_OP_IC, UNINITIALIZED (id = 30)
 0x7f6f3e26fcfa    90  a80d           test al,0xd
 0x7f6f3e26fcfc    92  488945e8       REX.W movq [rbp-0x18],rax
 0x7f6f3e26fd00    96  493b6508       REX.W cmpq rsp,[r13+0x8]
 0x7f6f3e26fd04   100  7307           jnc 109  (0x7f6f3e26fd0d)
 0x7f6f3e26fd06   102  e8d5f3fdff     call 0x7f6f3e24f0e0     ;; code: STUB,
 StackCheckStub, minor: 0
 0x7f6f3e26fd0b   107  a801           test al,0x1
 0x7f6f3e26fd0d   109  ff75e8         push [rbp-0x18]
 0x7f6f3e26fd10   112  4b8d04a4       REX.W leaq rax,[r12+r12*4]
 0x7f6f3e26fd14   116  5a             pop rdx
 0x7f6f3e26fd15   117  488bca         REX.W movq rcx,rdx
 0x7f6f3e26fd18   120  480bc8         REX.W orq rcx,rax
 0x7f6f3e26fd1b   123  f6c101         testb rcx,0x1
 0x7f6f3e26fd1e   126  730a           jnc 138  (0x7f6f3e26fd2a)
 0x7f6f3e26fd20   128  483bd0         REX.W cmpq rdx,rax
 0x7f6f3e26fd23   131  7c9b           jl 32  (0x7f6f3e26fcc0)
 0x7f6f3e26fd25   133  e91d00     jmp 167  (0x7f6f3e26fd47)
 0x7f6f3e26fd2a   138  e8313cfeff     call 0x7f6f3e253960     ;; debug:
 position 60
                                                             ;; code:
 COMPARE_IC, UNINITIALIZED (id = 23)
 0x7f6f3e26fd2f   143  a811           test al,0x11
 0x7f6f3e26fd31   145  eb0b           jmp 158  (0x7f6f3e26fd3e)
 0x7f6f3e26fd33   147  493b45b0       REX.W cmpq rax,[r13-0x50]
 0x7f6f3e26fd37   151  7487           jz 32  (0x7f6f3e26fcc0)
 0x7f6f3e26fd39   153  e90900     jmp 167  (0x7f6f3e26fd47)
 0x7f6f3e26fd3e   158  4885c0         REX.W testq rax,rax
 0x7f6f3e26fd41   161  0f8c79ff   jl 32  (0x7f6f3e26fcc0)
 0x7f6f3e26fd47   167  498b4598       REX.W movq rax,[r13-0x68]
 0x7f6f3e26fd4b   171  488be5

Re: [v8-users] v8 on Linux on Power architecture (powerpc)

2011-08-26 Thread Vyacheslav Egorov
Hi,

The main issue here is that V8 does not have a PPC codegen.

You'll have to implement one (plus all the required arch-specific runtime
support) if you want to see V8 running on PowerPC.

--
Vyacheslav Egorov


On Fri, Aug 26, 2011 at 4:06 PM, swsyessws swsyes...@gmail.com wrote:
 Hello all, I am a new user of v8, and would like to see it running on Linux
 on the Power architecture. Can someone help make a list of potential
 issues/concerns/roadblocks/showstoppers to make that happen? It will be
 great to have a list in a priority order. Many thanks in advance!



Re: [v8-users] Garbage Collection very slow on Android with latest stable line

2011-08-22 Thread Vyacheslav Egorov
Hi Chris,

What version were you using before upgrading to HEAD and how long were
pauses at that version? If there was a major degradation we are very
interested in reproducing it.

Can you share a bit more about your use of V8?

Do you use it inside the browser or do you embed it into your
(C/C++/Java) game directly?

If you want to reduce GC cost we'll first need to understand what
exactly is causing (compacting) GCs.

It seems that promotion rate is not high (judging from 1-2ms
scavenging pauses) but it's hard to say anything without looking at GC
logs. (e.g. --trace-gc, --trace-gc-verbose ones; plus maybe some
additional debug prints in Heap::SelectGarbageCollector to see why V8
chooses MARK_COMPACTOR).

--
Vyacheslav Egorov, Software Engineer, V8 Team
Google Denmark ApS


On Mon, Aug 22, 2011 at 8:01 PM, Chris Jimison cjimi...@ngmoco.com wrote:
 Hi all,

 We have a game engine that uses V8 on Android based phones.  I have
 just upgraded our version of V8 to use the current stable SVN line and
 I am seeing a HUGE slowdown on the GC.  When the GC does a
 MARK_COMPACTOR run I am seeing times jump up to 175 ms on a Nexus S
 phone (one of the faster android based phones) however non
 MARK_COMPACTOR times are at about 1-2 ms.  The MARK_COMPACTOR sweep
 happens about once every 10 - 15 seconds for us and it introduces a
 very noticeable frame rate hitch.

 So I have a couple of questions for the group.

 1) Is there anything I can do to potentially amortize this cost across
 multiple frames (or GC calls)?
 2) If not, is there any way I can speed this up?

 Thank you so much for any help or insights.

 -Chris



Re: [v8-users] Memory limits for V8 - max old generation size

2011-07-11 Thread Vyacheslav Egorov
 1) should I change just those values on the code and recompile?
 2) Is there a proper API to change that at runtime?

You can use --max-old-space-size=900 instead of recompiling.

Alternatively look at v8::SetResourceConstraints method.

 3) What is the maximum value that could possibly work considering a
modern 32-bit system? (2GB, 3GB, 4GB, ..., 64GB?)

1GB.

 4) Why this limitation?

There are certain assumptions built into GC and memory management routines.

More details are available in
http://code.google.com/p/v8/issues/detail?id=847

--
Vyacheslav Egorov


On Mon, Jul 11, 2011 at 8:51 PM, Allan Douglas R. de Oliveira 
allandoug...@gmail.com wrote:

 Hello,

 I'm getting some problems while trying to parse big amounts of data.
 My V8-based program crashes after reaching ~512MB of memory usage (on a
 2GB RAM Windows XP system), with:

 # Fatal error in CALL_AND_RETRY_2
 # Allocation failed - process out of memory

 Searching around and guessing I found out that this may be related to
 this lines on heap.cc, specially the max_old_generation_size:
 (...)
 #else
  reserved_semispace_size_(8*MB),
  max_semispace_size_(8*MB),
  initial_semispace_size_(512*KB),
  max_old_generation_size_(512*MB),
  max_executable_size_(128*MB),
  code_range_size_(0),
 #endif
 (...)

 So my questions are:
 1) should I change just those values on the code and recompile?
 2) Is there a proper API to change that at runtime?
 3) What is the maximum value that could possibly work considering a
  modern 32-bit system? (2GB, 3GB, 4GB, ..., 64GB?)
 4) Why this limitation?

 Thanks,
 Allan


Re: [v8-users] objdump of translated JS program

2011-07-06 Thread Vyacheslav Egorov
 In this line, the object at 0x7fe8125aa911 is not shown. I assume it is 
 placed in data section.

Well, there is no data section. The object itself lives in the heap.
If you look at the relocation information you can see that the object at
0x7fe8125aa911 is a two-element FixedArray.

You can extend Code::Disassemble to print all referenced objects
recursively but that would produce quite a large output with cycles.

Alternatively you can just put a breakpoint into
CodeGenerator::PrintCode, wait until V8 compiles and prints the
interesting function, and then inspect the heap state, print the objects
that are interesting to you, etc.

 In these two lines, the callees are not shown, I suppose CEntry and 
 StackCheck are both built-in functions of v8.

They will be printed if you pass --print-code-stubs to V8 (your shell
should be compiled with snapshot=off).

 Basically, my goal is to capture the runtime instruction trace (this is the easy
 part) and relate it back to javascript
 source code.

--code-comments will help you do that.

 Will --gdbjit-dump help? Is this option only available in debug build?

The GDBJIT interface produces objects that contain debugging information
(like PC-to-line mappings) but no code, so I don't think --gdbjit-dump
will help you.

--
Vyacheslav Egorov


On Wed, Jul 6, 2011 at 8:03 PM, Zhaoshi Zheng wingle...@gmail.com wrote:
 Vyacheslav,

 Thanks for your reply. Take this function for example, Benchmark in base.js
 of V8 benchmark suite:

 0x7fe7ed6c2a07 7  49ba11a95a12e87f REX.W movq r10,0x7fe8125aa911
 ;; object: 0x7fe8125aa911 FixedArray[2]

 In this line, the object at 0x7fe8125aa911 is not shown. I assume it is
 placed in data section.

 0x7fe7ed6c2a23    35  e8d8d8fdff call 0x7fe7ed6a0300 ;; debug:
 statement 0
  ;; code: STUB,
 CEntry, minor: 0

 0x7fe7ed6c2a2e    46  e80d23feff call 0x7fe7ed6a4d40 ;; code: STUB,
 StackCheck, minor: 0

 In these two lines, the callees are not shown, I suppose CEntry and
 StackCheck are both built-in functions of v8.

 Basically, my goal is to capture the runtime instruction trace (this is the easy
 part) and relate it back to javascript source code. Will --gdbjit-dump help?
 Is this option only available in debug build?

 --- Raw source ---
 // A benchmark has a name (string) and a function that will be run to
 // do the performance measurement. The optional setup and tearDown
 // arguments are functions that will be invoked before and after
 // running the benchmark, but the running time of these functions will
 // not be accounted for in the benchmark score.
 function Benchmark(name, run, setup, tearDown) {
   this.name = name;
   this.run = run;
   this.Setup = setup ? setup : function() { };
   this.TearDown = tearDown ? tearDown : function() { };
 }



 --- Code ---
 kind = FUNCTION
 Instructions (size = 72)
 0x7fe7ed6c2a00 0  55 push rbp
 0x7fe7ed6c2a01 1  4889e5 REX.W movq rbp,rsp
 0x7fe7ed6c2a04 4  56 push rsi
 0x7fe7ed6c2a05 5  57 push rdi
 0x7fe7ed6c2a06 6  56 push rsi
 0x7fe7ed6c2a07 7  49ba11a95a12e87f REX.W movq r10,0x7fe8125aa911
 ;; object: 0x7fe8125aa911 FixedArray[2]
 0x7fe7ed6c2a11    17  4152   push r10
 0x7fe7ed6c2a13    19  6a00   push 0x0
 0x7fe7ed6c2a15    21  6a00   push 0x0
 0x7fe7ed6c2a17    23  b80400 movl rax,0x4
 0x7fe7ed6c2a1c    28  498d9da0d792fe REX.W leaq rbx,[r13-0x16d2860]
 0x7fe7ed6c2a23    35  e8d8d8fdff call 0x7fe7ed6a0300 ;; debug:
 statement 0
  ;; code: STUB,
 CEntry, minor: 0
 0x7fe7ed6c2a28    40  493b6508   REX.W cmpq rsp,[r13+0x8]
 0x7fe7ed6c2a2c    44  7305   jnc 51  (0x7fe7ed6c2a33)
 0x7fe7ed6c2a2e    46  e80d23feff call 0x7fe7ed6a4d40 ;; code: STUB,
 StackCheck, minor: 0
 0x7fe7ed6c2a33    51  498b4598   REX.W movq rax,[r13-0x68]
 0x7fe7ed6c2a37    55  488be5 REX.W movq rsp,rbp  ;; debug:
 statement 513
  ;; js return
 0x7fe7ed6c2a3a    58  5d pop rbp
 0x7fe7ed6c2a3b    59  c20800 ret 0x8
 0x7fe7ed6c2a3e    62  cc int3
 0x7fe7ed6c2a3f    63  cc int3
 0x7fe7ed6c2a40    64  cc int3
 0x7fe7ed6c2a41    65  cc int3
 0x7fe7ed6c2a42    66  cc int3
 0x7fe7ed6c2a43    67  cc int3

 Deoptimization Output Data (deopt points = 0)

 Stack checks (size = 0)
 ast_id  pc_offset

 RelocInfo (size = 14)
 0x7fe7ed6c2a09  embedded object  (0x7fe8125aa911 FixedArray[2])
 0x7fe7ed6c2a23  statement position  (0)
 0x7fe7ed6c2a24  code target (STUB)  (0x7fe7ed6a0300)
 0x7fe7ed6c2a2f  code target (STUB)  (0x7fe7ed6a4d40)
 0x7fe7ed6c2a37  statement position  (513)
 0x7fe7ed6c2a37  js return


 On Wed, Jul 6, 2011 at 1:15 PM, Vyacheslav Egorov vego...@chromium.org
 wrote:

 Hi Albert,

 Can you clarify

Re: [v8-users] objdump of translated JS program

2011-07-06 Thread Vyacheslav Egorov
Short answer is:

There are two compilers: non-optimizing (aka full) and optimizing.

Every function starts non-optimized. V8 profiles the application as it
runs and tries to optimize hot functions making assumptions based on
type feedback it gathered during execution of non-optimized code.

--
Vyacheslav Egorov


On Wed, Jul 6, 2011 at 8:04 PM, Stephan Beal sgb...@googlemail.com wrote:
 On Wed, Jul 6, 2011 at 7:15 PM, Vyacheslav Egorov vego...@chromium.org
 wrote:

 V8 compiles different functions separately as application runs. It
 might compile the same function several times with different
 compilers.

 Just out of curiosity: can you give us (or point us to) an overview of the
 different modes and why one mode might be chosen over another in a different
 context? i'd be particularly interested in knowing why the same function
 might get compiled in different ways. (Again, just of curiosity, not because
 i want to optimize at that level.)
 --
 - stephan beal
 http://wanderinghorse.net/home/stephan/

 --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users



Re: [v8-users] Re: JQuery in V8;

2011-06-22 Thread Vyacheslav Egorov
Hi Ravi,

There is no such thing as "jQuery syntax". It's just a valid
JavaScript program. jQuery simply declares a global function called $.

If you want jQuery functionality in your program then you should
either embed jQuery into your program or implement the same features
from scratch.

--
Vyacheslav Egorov


On Wed, Jun 22, 2011 at 4:44 PM, Chinnu chinnu4...@gmail.com wrote:

 Thank you for the reply. So, you're suggesting checking the Chrome
 source code to see how they're dealing with jquery syntax?

 Thanks,
 Ravi


 On Jun 21, 11:08 pm, Fabio Kaminski fabiokamin...@gmail.com wrote:
 the DOM is not part of the vanilla JS engine... couldn't you just
 use the developer console of Chrome?







 On Tue, Jun 21, 2011 at 4:16 PM, Marcel Laverdet mar...@laverdet.com wrote:
  Please look into jsdom:
 https://github.com/tmpvar/jsdom

  On Wed, Jun 22, 2011 at 4:11 AM, Chinnu chinnu4...@gmail.com wrote:

  Hi,

  Is it possible to compile and run a script in JQuery syntax in V8
  engine? For example, can we run something like the following in V8?

  $(document).ready(function() {

            $('body').append('<iframe src="http://sampleurl.com"></iframe>');
  });

  I'm interested in getting the iframe source above. I have a program
  that hooks into V8 to handle some DOM callbacks (like document.write,
  etc.). I was wondering if it would be possible to handle the JQuery
  syntax.

  Please suggest.

  Thank you,
  Ravi

  --
  v8-users mailing list
  v8-users@googlegroups.com
 http://groups.google.com/group/v8-users

  --
  v8-users mailing list
  v8-users@googlegroups.com
 http://groups.google.com/group/v8-users

 --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users




Re: [v8-users] Other data struct alignment

2011-06-21 Thread Vyacheslav Egorov
Hi,

V8 will not work with 8 byte alignment without modification, because the code
V8 generates assumes a certain layout for some structures (e.g. both
v8::Arguments and v8::internal::Arguments).

I don't see any problem with 1 byte alignment though.

--
Vyacheslav Egorov


On Tue, Jun 21, 2011 at 9:45 AM, Roman Suvorov windj...@gmail.com wrote:
 Hi!

 Default data struct alignment in v8 for win32 (IA-32) is 4 byte
 alignment as I know.

 How can I compile v8 with 1 byte alignment and 8 byte alignment? These
 requirements are given by project. I cannot change it.

 In addition I need to be able to choose alignment in command line. For
 example:
 scons ... alignment=zp1 ...

 Will v8 work with such alignment?

 Thanks.

 --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users




Re: [v8-users] Newline in Eval

2011-05-26 Thread Vyacheslav Egorov
 v8::Script::Compile(v8::String::New("eval('1 + 1\n')"));

Try instead:

v8::Script::Compile(v8::String::New("eval('1 + 1\\n')"));

Note that you need to escape \ at the C level, otherwise you pass:

eval('1 + 1
')

to v8::Script::Compile, which is obviously incorrect.

--
Vyacheslav Egorov



On Thu, May 26, 2011 at 6:09 PM, mcot atm1...@gmail.com wrote:
 In the chrome dev shell this compiles and runs fine:

 eval('1 + 1\n')

 chrome = v8 version 3.1.8.14

 Testing on v8 release 7918 (3.7.7???) I get:

 ...
 v8::Handle<v8::Script> script =
 v8::Script::Compile(v8::String::New("eval('1 + 1\n')"));
 v8::Handle<v8::Value> result = script->Run();
 ...

 unknown:5: Uncaught SyntaxError: Unexpected token ILLEGAL

 So is this a regression, or is it more compliant with ES5 now?  What
 is the best way to go back to the old behavior?

 --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users




Re: [v8-users] JSFunctionIterateBody and grabbing the closure

2011-05-25 Thread Vyacheslav Egorov
Hi Behram,

Basically speaking, ObjectVisitor just gets all pointer slots in the object.
It gets no information about the meaning of those slots (certain
slots in code objects and external strings require special handling, so
the visitor gets information about them).

IterateBody is not recursive by default. It iterates the slots of a single object.
If caller wants to iterate objects recursively (e.g. to build a
transitive closure)
then it has to handle that manually.

So the only way to implement serialization on top of ObjectVisitor is
to serialize the heap as a whole, as a sequence of bytes (plus some
information to link things back together during deserialization) --- just
like serialize.h does.

Anything more 'clever' requires a separate visitor that actually
understands the meaning of the different fields of objects.

Hope this answers your question.

--
Vyacheslav Egorov


On Wed, May 25, 2011 at 8:59 AM, bmistree bmist...@stanford.edu wrote:
 I'm building an application on top of v8 that requires some objects to
 be checkpointed to some state and then later restored from that
 state.

 I am having trouble checkpointing closures.  Looking through the code,
 I noticed the function JSFunctionIterateBody in the JSFunction class
 (in objects.h).  What I was thinking of doing was writing my own
 ObjectVisitor that would be passed into each JSFunction's
 JSFunctionIterateBody method.  This ObjectVistor would then log all
 objects that it encountered.  I have a couple of questions though:

  1)  Would an objectVisitor passed into the JSFunctionIterateBody
 function run through all the contexts that are associated with that
 JSFunction?
  2)  When writing my objectVisitor, am I wrong in thinking that just
 writing the VisitPointers function would give me adequate information
 to reconstruct a JSFunction with its closure?  (Right now, I don't see
 how I'd be able to tell which object pointers would be associated with
 which contexts.)
  3)  Is there another way that I should be doing things?  I've looked
 through serialize.h, and think that it would be a bad idea to try to
 extend that for my application, and would like to do as much of the
 checkpointing and restoration from javascript as possible for
 flexibility and speedy development.

 Please let me know.  Thanks.
  -Behram

 --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users




Re: [v8-users] v8 issue with mobile version of Google Finance

2011-05-18 Thread Vyacheslav Egorov
Hi Petar,

Thanks for your report.

I have investigated the problem and determined that this is not a bug in V8.

I'll open an issue with Google Finance team.

--
Vyacheslav Egorov


On Wed, May 18, 2011 at 9:01 PM, Petar Jovanovic mips3...@gmail.com wrote:
 Recent versions of v8 seem to have an issue with a mobile version of
 Google Finance page, that is:

 http://www.google.com/m/finance

 I have checked some earlier versions of Google Chrome, and it seems
 that:

 - Google Chrome, version 6.0.472 with V8 engine 2.2. *is* able to
 display the mobile version of Google Finance page.
 - Google Chrome, version 7.0.517 with V8 engine 2.3.11.22 is *not*
 able to display the mobile version of Google Finance page.
 - The latest version of Google Chrome - that is 11.0.696 - with V8
 version 3.1.8.14 is *not* able to display it either.

 From this, I would assume that the issue is in v8, and I would guess
 the bug was introduced somewhere between V8 2.2 and V8 2.3.11.22.

 Additional testing has shown that the problematic change may have been
 committed between 2.3.2 (July 19, 2010) and 2.3.3 (July 21, 2010).

 Is anyone at Google V8/Chromium team aware of this?
 If so, have you tried to debug it and can you share your current
 findings on the script that is causing the problem?

 Thanks.

 Petar

 --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users




Re: [v8-users] Re: Unexpected declaration in current context runtime error

2011-05-05 Thread Vyacheslav Egorov
That's pretty weird. I would expect -fno-inline to supersede
__attribute__((always_inline)).

I did a small experiment with gcc version 4.4.3:

int foo (int i) __attribute__((always_inline));
int foo (int i) { return i+1; }
int main (int argc, char* argv[]) { return foo(argc); }

It does not inline foo when -fno-inline is enabled.

--
Vyacheslav Egorov


On Thu, May 5, 2011 at 10:54 AM, Evgeny Baskakov
evgeny.baska...@gmail.com wrote:
 Hi all,

 We've finally managed to overcome the unexpected declaration...
 error. As turned out, it was caused by a bug in our vendor's gcc
 compiler. In short, it generates broken code when function inlining is
 enabled.

 There is a subtle problem that prevents us from working around the gcc bug
 by just turning on the -fno-inline compiler switch. Namely, there
 are the following defines in global.h:

 #define INLINE(header) inline header  __attribute__((always_inline))

 The problem is that there is no way to disable the
 __attribute__((always_inline)) attribute externally (that is, without
 code modifications). Regardless of our case, I think it might sometimes
 be helpful to disable inlining entirely.

 So why not define the pragma conditionally, e.g.:

 #if !defined(DISABLE_INLINE)
  #define INLINE(header) inline header  __attribute__((always_inline))
 #else
  #define INLINE(header) inline header
 #endif

 I think such a modification would be quite harmless and somewhat
 helpful.


 On Apr 29, 5:48 pm, Kevin Millikin kmilli...@chromium.org wrote:
 Hi Evgeny,

 In the full codegen, the current context is kept in a dedicated register.

 Contexts form a linked list with the global context at the tail.  The
 intermediate contexts are either function contexts corresponding to a
 function's scope or with contexts corresponding to a with scope.  There is a
 field in every context pointing to the enclosing function context---and for
 function contexts this field should point to the function context itself.

 The assert is firing when we try to generate code to initialize a function
 (or const) that has to go in the context, because it is possibly referenced
 from some inner scope.  The assert checks that we don't try to do this with
 a 'with' or 'global' context, by checking that the value in the context
 register and that context's function context are identical.

 So, my guess is that the context chain is messed up somehow.  Perhaps you do
 have a function context, but its enclosing function context has not been
 initialized to properly point to itself.  Context initialization happens in
 the runtime function (which is platform independent) and also possibly in a
 platform-specific 'FastNewContext' stub.  I'm not sure if this stub is
 implemented for MIPS, but if it is, I'd look there first to make sure it's
 properly initializing the new context and returning the proper context.

 ;; Kevin

 On Fri, Apr 29, 2011 at 12:05 PM, Evgeny Baskakov evgeny.baska...@gmail.com







  wrote:
  Hi all,

  I just started using the MIPS port of V8. So far I'm using the 'shell'
  program to run JS code. Everything works fine unless the '--debug-
  code' switch is enabled. With the switch, an unexpected declaration
  in current context error error pops out (the full message text is
  below).

  A brief evaluation revealed that it only happens when a nested JS
  function accesses a variable that is defined in the enclosing function
  body. For instance, when the nested function SetupArray.getFunction()
  accesses the SetupArray.specialFunctions variable, in src/array.js.

  I do realize that the MIPS port is out of the official scope, but the
  full-codegen part (which pops out the error) is very similar in all V8
  ports. So could someone point out what the reason could be? What
  should be checked fist?

  Thanks.

  --

  And here goes the full error message text:

  abort: Unexpected declaration in current context.

   Stack trace 

  Security context: 0x2caa8991 JS Object#0#
     1: SetupArray [native array.js:1180] (this=0x2caa9391 JS
  Object#1#)
     2: /* anonymous */ [native array.js:1249] (this=0x2caa9391 JS
  Object#1#)

   Details 

  [1]: SetupArray [native array.js:1180] (this=0x2caa9391 JS
  Object#1#) {
   // stack-allocated locals
   var getFunction = 0x2cae8061 undefined
   // heap-allocated locals
   var a = 0x2cae8061 undefined
   // expression stack (top to bottom)
   [03] : 0
   [02] : 0
   [01] : 3301874
  - s o u r c e   c o d e -
  function SetupArray(){???%SetProperty($Array.prototype,constructor,
  $Array,2);???InstallFunctions($Array,
  2,$Array(?isArray,ArrayIsArray?));??var a=
  %SpecialArrayFunctions({});??function getFunction(b,c,d){?var g=c;?
  if(a.hasOwnProperty(b)){?g=a[b];?}?if(!(typeof(d)==='undefined')){?
  %FunctionSetLength(g,d);?}?return g...

  -
  }

  [2]: /* anonymous */ [native array.js:1249

Re: [v8-users] Cannot compile v8 with debuggersupport=off profilingsupport=off

2011-04-26 Thread Vyacheslav Egorov
Hi,

You are right --- this options are not well maintained.

There is an open issue regarding compilation with
profilingsupport=off:
http://code.google.com/p/v8/issues/detail?id=1271

I think a similar issue should be opened for debuggersupport=off.

--
Vyacheslav Egorov


On Tue, Apr 26, 2011 at 1:28 PM, evgeny.baskakov
evgeny.baska...@gmail.com wrote:
 Hi all,

 I am building version 3.3.1 under Linux ia32.

 Everything goes fine with the default scons settings, but switching on
 any of the options debuggersupport=off or profilingsupport=off
 breaks the compilation process.

 I use these options to strip off the debugger and profiler in order to
 achieve the smallest possible binary size of the v8 library.

 However, it looks like these two options are somewhat abandoned, as the
 compilation fails at numerous points in the code where variables like
 "debug", "profiling", and "ticker" are accessed (that is, they aren't
 protected by any preprocessor defines):

 src/compiler.cc:667: error: 'class v8::internal::Isolate' has no
 member named 'debug'
 src/runtime.cc:7301: error: 'class v8::internal::Isolate' has no
 member named 'debug'
 src/runtime.cc:7307: error: 'class v8::internal::Isolate' has no
 member named 'debug'
 src/log.cc:1553: error: 'ticker_' was not declared in this scope
 src/log.cc: In function 'void
 v8::internal::ComputeCpuProfiling(v8::internal::Sampler*, void*)':
 src/log.cc:1631: error: invalid use of incomplete type 'struct
 v8::internal::Sampler'
 src/log.h:138: error: forward declaration of 'struct
 v8::internal::Sampler'

 --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users




Re: [v8-users] Re: virtual-frame-heavy.cc: No such file or directory

2011-04-22 Thread Vyacheslav Egorov
Hi,

We removed this file (and many others related to obsolete classic
codegen) but did not update MSVC project. Sorry for that.

Just remove it (and any other MSVC will complain about) from the project.

We will update vcproj next week.

--
Vyacheslav Egorov


On Fri, Apr 22, 2011 at 4:32 PM, Matt  El mattzhao...@gmail.com wrote:
 PS. V8 - Revision 7677
 I think it's a bug

 On Apr 22, 3:37 pm, Matt  El mattzhao...@gmail.com wrote:
 When I compiled V8 on Windows, VS2005 complained:

   1virtual-frame-heavy.cc
   1c1xx : fatal error C1083: Cannot open source file: '..\..\src
 \virtual-frame-heavy.cc': No such file or directory

  It seems v8_base.vcproj needs the file.
  Could someone give me a hand? thanks

 --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users




Re: [v8-users] Re: v8-3.3.1 crash in a 64bit compile

2011-04-22 Thread Vyacheslav Egorov
Hi Ricky,

Yeah, that looks like a dead object.

I would suspect Handle misuse somewhere. However, it is hard to
diagnose. You need to look through your code and check that you have
HandleScopes in the proper places, that you use Persistent handles where
appropriate, and that you do not simply return Local handles without
properly closing the containing HandleScope.

--
Vyacheslav Egorov



On Thu, Apr 21, 2011 at 8:04 PM, Ricky Charlet rchar...@speakeasy.net wrote:
 BTW,
 This is trunk code from 4/21.

 On Apr 21, 10:51 am, Ricky Charlet rchar...@speakeasy.net wrote:
 Howdy,
     I'm new to v8. However my company has been using v8 since 1.3.
 I've got the task to investigate modernizing it. So I've got two
 variables in play here... I'm changing from v8-1.3 to v8-3.3.1 and
 also changing from a 32bit architecture to a 64 bit architecture. I'm
 suspecting the 64 bit change is causing my crash for the modest reason
 that there are so many casts in my path to the crash.

 OK, So I have v8-3.3.1 complied with
 `scons arch=x64 arch_size=64 mode=debug` and I've statically linked my
 code to libv8_g.a (renamed to libv8.a).

 My program is calling  v8::Array::Length in api.cc. I guess I'm
 calling length on a dead object because of the deadbee... in
 #1  0x006034e0 in v8::internal::HeapObject::map
 (this=0xdeadbeedbeadbe05)  at src/objects-inl.h:1176

 I've noticed many casts up and down the frame0 through frame5 stuff.
 That may or may not be germane to the issue and I did not ponder them
 very deeply before I just ran to this list to see if anyone else wants
 to chime in with some experience and wisdom here.

 Here is my gdb stack trace.

 Program received signal SIGSEGV, Segmentation fault.
 0x00603536 in v8::internal::HeapObject::map_word
 (this=0xdeadbeedbeadbe05)
     at src/objects-inl.h:1186
 1186      return MapWord(reinterpret_cast<uintptr_t>(READ_FIELD(this,
 kMapOffset)));
 (gdb) bt
 #0  0x00603536 in v8::internal::HeapObject::map_word (
     this=0xdeadbeedbeadbe05) at src/objects-inl.h:1186
 #1  0x006034e0 in v8::internal::HeapObject::map
 (this=0xdeadbeedbeadbe05)
     at src/objects-inl.h:1176
 #2  0x0060224a in v8::internal::Object::IsHeapNumber() ()
 #3  0x006026ae in v8::internal::Object::IsNumber() ()
 #4  0x006031f6 in v8::internal::Object::Number() ()
 #5  0x005fac84 in v8::Array::Length (this=0x147ded8) at src/
 api.cc:4297
 #6  0x004566c5 in mus_parser::_create_step
 (this=0x7fffdff0, obj=...)
     at ../../mus_parser_gen.cc:2840
 #7  0x0043f9d2 in mus_parser::_create_scenario
 (this=0x7fffdff0, obj=...)
     at ../../mus_parser.cc:276
 #8  0x00442764 in mus_parser::load (this=0x7fffdff0,
 musl=...)
     at ../../mus_parser.cc:681
 #9  0x00480ea6 in mus_test_builder::make_scenario
 (this=0x7fffe990,
     scheduler=0x7fffe820, obj=..., error=...) at ../../
 mus_test_builder.cc:200
 #10 0x0048014d in mus_test_builder::make_track
 (this=0x7fffe990,
     scheduler=0x7fffe820, obj=..., error=...) at ../../
 mus_test_builder.cc:141
 #11 0x0047ff1f in mus_test_builder::build_test_internal (
     this=0x7fffe990, scheduler=0x7fffe820, json=...,
 error=...)
     at ../../mus_test_builder.cc:125
 #12 0x0047fb8e in mus_test_builder::build_test
 (this=0x7fffe990,
     scheduler=0x7fffe820, json=..., error=...) at ../../
 mus_test_builder.cc:73
 #13 0x0040e3a5 in execute_json (opts=...) at ../../testr.cc:
 552
 #14 0x0040e816 in main (argc=0, argv=0x7fffec70) at ../../
 testr.cc:621
 (gdb) list
 1181      set_map_word(MapWord::FromMap(value));
 1182    }
 1183
 1184
 1185    MapWord HeapObject::map_word() {
 1186      return MapWord(reinterpret_cast<uintptr_t>(READ_FIELD(this,
 kMapOffset)));
 1187    }
 1188
 1189
 1190    void HeapObject::set_map_word(MapWord map_word) {
 (gdb) :q
 Undefined command: .  Try help.
 (gdb) frame 6
 #6  0x004566c5 in mus_parser::_create_step
 (this=0x7fffdff0, obj=...)
     at ../../mus_parser_gen.cc:2840
 2840            for (uint32_t n=0; n<variables->Length(); ++n) {
 (gdb) l
 2835                if (v_payload == 0) goto bummer;
 2836                v-payload(v_payload);
 2837            }
 2838
 2839            Handle<Array> variables = _array(obj, "variables");
 2840            for (uint32_t n=0; n<variables->Length(); ++n) {
 2841                Handle<Object> variable_obj = _object(variables, n);
 2842                mus_step_variable *variable = _create_variable(v,
 variable_obj);
 2843                if (variable == 0) goto bummer;
 2844                v-variables(variable);

 Hopefully,
 Ricky Charlet

 --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users




Re: [v8-users] Re: virtual-frame-heavy.cc: No such file or directory

2011-04-22 Thread Vyacheslav Egorov
Yes. They were removed from the codebase by
http://code.google.com/p/v8/source/detail?r=7542.
--
Vyacheslav Egorov


On Fri, Apr 22, 2011 at 6:26 PM, Matt  El mattzhao...@gmail.com wrote:
 Do you mean I should delete these files from v8_base.vcproj if VS2005
 complains them doesn't exist?

 On Apr 22, 10:45 pm, Vyacheslav Egorov vego...@chromium.org wrote:
 Hi,

 We removed this file (and many others related to obsolete classic
 codegen) but did not update MSVC project. Sorry for that.

 Just remove it (and any other MSVC will complain about) from the project.

 We will update vcproj next week.

 --
 Vyacheslav Egorov

 On Fri, Apr 22, 2011 at 4:32 PM, Matt  El mattzhao...@gmail.com wrote:

  PS. V8 - Revision 7677
  I think it's a bug

  On Apr 22, 3:37 pm, Matt  El mattzhao...@gmail.com wrote:
  When I compiled V8 on Windows, VS2005 complained:

    1virtual-frame-heavy.cc
    1c1xx : fatal error C1083: Cannot open source file: '..\..\src
  \virtual-frame-heavy.cc': No such file or directory

   It seems v8_base.vcproj needs the file.
   Could someone give me a hand? thanks

  --
  v8-users mailing list
  v8-users@googlegroups.com
 http://groups.google.com/group/v8-users



 --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users




Re: [v8-users] Re: v8-3.3.1 crash in a 64bit compile

2011-04-22 Thread Vyacheslav Egorov
 I need to review my own code

I recommend starting by taking a look at the _array function which
returned the Array in question.

 Why did Length() method fail to detect IsDeadCheck()?

IsDeadCheck checks whether the VM is dead, not whether the object itself
is dead. There is no way to detect whether a given object is still valid
or has died, because its space may already have been reused for something else.

--
Vyacheslav Egorov



On Fri, Apr 22, 2011 at 5:44 PM,  rchar...@speakeasy.net wrote:

 Thanks Vyacheslav,

    Thing #1: I do agree with you, I need to review my own code and track my 
 objects better and not try to ask for Length() on a dead object. Bummer.

    Thing #2: Why did the Length() method fail to detect IsDeadCheck()? It seems
 that my `this` pointer is a corrupted form of kZapValue. I have
 this=0xdeadbeedbeadbe05
 instead of 0xdeadbeeddeadbeed. I think the poignant question here is, "who
 corrupted my kZapValue?" But that is probably very difficult to answer.
 Bummer^2.

 --
 Ricky Charlet
 See Y' Later

 On Fri Apr 22  7:53 , Vyacheslav Egorov  sent:

Hi Ricky,

Yeah, that looks like a dead object.

I would suspect Handle misuse somewhere. However, it is hard to
diagnose. You need to look through your code and check that you have
HandleScopes in the proper places, that you use Persistent handles where
appropriate, and that you do not simply return Local handles without
properly closing the containing HandleScope.

--
Vyacheslav Egorov



On Thu, Apr 21, 2011 at 8:04 PM, Ricky Charlet rchar...@speakeasy.net wrote:
 BTW,
 This is trunk code from 4/21.

 On Apr 21, 10:51 am, Ricky Charlet rchar...@speakeasy.net wrote:
 Howdy,
     I'm new to v8. However my company has been using v8 since 1.3.
 I've got the task to investigate modernizing it. So I've got two
 variables in play here... I'm changing from v8-1.3 to v8-3.3.1 and
 also changing from a 32bit architecture to a 64 bit architecture. I'm
 suspecting the 64 bit change is causing my crash for the modest reason
 that there are so many casts in my path to the crash.

 OK, So I have v8-3.3.1 complied with
 `scons arch=x64 arch_size=64 mode=debug` and I've statically linked my
 code to libv8_g.a (renamed to libv8.a).

 My program is calling  v8::Array::Length in api.cc. I guess I'm
 calling length on a dead object because of the deadbee... in
 #1  0x006034e0 in v8::internal::HeapObject::map
 (this=0xdeadbeedbeadbe05)  at src/objects-inl.h:1176

 I've noticed many casts up and down the frame0 through frame5 stuff.
 That may or may not be germane to the issue and I did not ponder them
 very deeply before I just ran to this list to see if anyone else wants
 to chime in with some experience and wisdom here.

 Here is my gdb stack trace.

 Program received signal SIGSEGV, Segmentation fault.
 0x00603536 in v8::internal::HeapObject::map_word
 (this=0xdeadbeedbeadbe05)
     at src/objects-inl.h:1186
  1186      return MapWord(reinterpret_cast<uintptr_t>(READ_FIELD(this,
  kMapOffset)));
 (gdb) bt
 #0  0x00603536 in v8::internal::HeapObject::map_word (
     this=0xdeadbeedbeadbe05) at src/objects-inl.h:1186
 #1  0x006034e0 in v8::internal::HeapObject::map
 (this=0xdeadbeedbeadbe05)
     at src/objects-inl.h:1176
 #2  0x0060224a in v8::internal::Object::IsHeapNumber() ()
 #3  0x006026ae in v8::internal::Object::IsNumber() ()
 #4  0x006031f6 in v8::internal::Object::Number() ()
 #5  0x005fac84 in v8::Array::Length (this=0x147ded8) at src/
 api.cc:4297
 #6  0x004566c5 in mus_parser::_create_step
 (this=0x7fffdff0, obj=...)
     at ../../mus_parser_gen.cc:2840
 #7  0x0043f9d2 in mus_parser::_create_scenario
 (this=0x7fffdff0, obj=...)
     at ../../mus_parser.cc:276
 #8  0x00442764 in mus_parser::load (this=0x7fffdff0,
 musl=...)
     at ../../mus_parser.cc:681
 #9  0x00480ea6 in mus_test_builder::make_scenario
 (this=0x7fffe990,
     scheduler=0x7fffe820, obj=..., error=...) at ../../
 mus_test_builder.cc:200
 #10 0x0048014d in mus_test_builder::make_track
 (this=0x7fffe990,
     scheduler=0x7fffe820, obj=..., error=...) at ../../
 mus_test_builder.cc:141
 #11 0x0047ff1f in mus_test_builder::build_test_internal (
     this=0x7fffe990, scheduler=0x7fffe820, json=...,
 error=...)
     at ../../mus_test_builder.cc:125
 #12 0x0047fb8e in mus_test_builder::build_test
 (this=0x7fffe990,
     scheduler=0x7fffe820, json=..., error=...) at ../../
 mus_test_builder.cc:73
 #13 0x0040e3a5 in execute_json (opts=...) at ../../testr.cc:
 552
 #14 0x0040e816 in main (argc=0, argv=0x7fffec70) at ../../
 testr.cc:621
 (gdb) list
 1181      set_map_word(MapWord::FromMap(value));
 1182    }
 1183
 1184
 1185    MapWord HeapObject::map_word() {
  1186      return MapWord(reinterpret_cast<uintptr_t>(READ_FIELD(this,
  kMapOffset)));
 1187    }
 1188
 1189
 1190    void HeapObject::set_map_word(MapWord map_word) {
 (gdb) :q
 Undefined command: .  Try help

Re: [v8-users] Crankshaft on x86_64?

2011-04-04 Thread Vyacheslav Egorov
Yes, it is okay.

Version 3.2.0 (pushed to trunk 2011-03-07) enabled crankshaft by
default on x64 and ARM platforms.

Version 3.2.7 (pushed to trunk today) disabled classic codegenerator
completely.

Crankshaft is now the default on all platforms.

--
Vyacheslav Egorov


On Mon, Apr 4, 2011 at 3:53 PM, Egor Egorov egor.ego...@gmail.com wrote:
 Is it okay now to use V8 3.2.x x86_64 builds with crankshaft? I remember V.
 Egorov said something about crankshaft being disabled in x86_64 a month ago
 or so..?

 --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users



Re: [v8-users] V8 can't build success in Windows 7 VS2008

2011-03-30 Thread Vyacheslav Egorov
These warnings are harmless and should not affect V8. Are you sure
that the build actually failed?

--
Vyacheslav Egorov


On Wed, Mar 30, 2011 at 11:52 PM, Nightcola Lin nightc...@gmail.com wrote:
 Hi everyone,

 I am new to V8. Today I encountered a problem when I tried to build the
 solution.
 (I used VS2008 to open v8.sln, converted it to a 2008 solution, then built
 it.)

 When I try to build v8_base project, I got these two warning messages:
 1regexp-macro-assembler-irregexp.obj : warning LNK4221: no public
 symbols found; archive member will be inaccessible
 1frame-element.obj : warning LNK4221: no public symbols found;
 archive member will be inaccessible

 When I try to build the v8 project, I got these three warning messages:
 1v8_base.lib(frame-element.obj) : warning LNK4221: no public symbols
 found; archive member will be inaccessible
 1v8_base.lib(regexp-macro-assembler-irregexp.obj) : warning LNK4221:
 no public symbols found; archive member will be inaccessible
 1v8_base.lib(string-search.obj) : warning LNK4221: no public symbols
 found; archive member will be inaccessible

 I already searched this group and found some similar issues, but I still
 cannot build successfully.
 If anyone can give me a hand I would really appreciate it :)

 (P.S: I already follow the README.txt at \v8-read-only\tools
 \visual_studio\README.txt, Add my Python(2.7) path to system env)

 Thanks!

 --
 v8-users mailing list
 v8-users@googlegroups.com
 http://groups.google.com/group/v8-users




Re: [v8-users] Errors building v8 on Win7-32bit/cygwin

2011-03-23 Thread Vyacheslav Egorov
Hi Andrei,

It seems that the cygwin port was not updated after the isolates merge.

You can checkout a version prior to 3.2.4, e.g.
http://v8.googlecode.com/svn/tags/3.2.3/. It should build just fine.

--
Vyacheslav Egorov


On Wed, Mar 23, 2011 at 7:10 PM, Trastabuga lisper...@gmail.com wrote:
 I tried both trunk and beeding-edge, I got the same errors:
 src/platform-cygwin.cc:45:17: error: top.h: No such file or directory
 src/platform-cygwin.cc:290:51: error: macro "LOG" requires 2
 arguments, but only 1 given
 src/platform-cygwin.cc: In static member function ‘static void
 v8::internal::OS::LogSharedLibraryAddresses()’:
 src/platform-cygwin.cc:290: error: ‘LOG’ was not declared in this
 scope
 src/platform-cygwin.cc: In member function ‘void
 v8::internal::Sampler::PlatformData::Sample()’:
 src/platform-cygwin.cc:653: error: no matching function for call to
 ‘v8::internal::CpuProfiler::TickSampleEvent()’
 src/cpu-profiler.h:235: note: candidates are: static
 v8::internal::TickSample*
 v8::internal::CpuProfiler::TickSampleEvent(v8::internal::Isolate*)
 src/platform-cygwin.cc:658: error: ‘Top’ has not been declared
 src/platform-cygwin.cc:676: error: cannot call member function ‘void
 v8::internal::RuntimeProfiler::NotifyTick()’ without object
 src/platform-cygwin.cc: At global scope:
 src/platform-cygwin.cc:691: error: prototype for
 ‘v8::internal::Sampler::Sampler(int)’ does not match any in class
 ‘v8::internal::Sampler’
 src/platform.h:652: error: candidates are:
 v8::internal::Sampler::Sampler(const v8::internal::Sampler&)
 src/platform.h:652: error:
 v8::internal::Sampler::Sampler()
 src/platform.h:601: error:
 v8::internal::Sampler::Sampler(v8::internal::Isolate*, int)
 src/platform-cygwin.cc: In member function ‘void
 v8::internal::Sampler::Stop()’:
 src/platform-cygwin.cc:739: error: ‘Top’ has not been declared
 scons: *** [obj/release/platform-cygwin.o] Error 1
 scons: building terminated because of errors.

 Thank you,
 Andrei



Re: [v8-users] Function from String

2011-03-10 Thread Vyacheslav Egorov
You can use the built-in Function constructor:

v8::Local<v8::Function> MkFunction(v8::Handle<v8::String> body) {
  v8::HandleScope scope;

  // Get the global object
  v8::Local<v8::Object> global = v8::Context::GetCurrent()->Global();

  // Get the built-in Function constructor (see ECMA-262 5th edition, 15.3.2)
  v8::Local<v8::Function> function_ctor =
      v8::Local<v8::Function>::Cast(global->Get(v8::String::New("Function")));

  // Invoke the Function constructor to create a function with the given
  // body and no arguments
  v8::Handle<v8::Value> argv[1] = { body };
  v8::Local<v8::Object> function = function_ctor->NewInstance(1, argv);

  return scope.Close(v8::Local<v8::Function>::Cast(function));
}
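The same idea can be expressed directly in JavaScript (a hedged sketch; mkFunction and the sample object are illustrative names, not from the post): the built-in Function constructor compiles a body string into a callable, and call() supplies the explicit 'this' the question asks about.

```javascript
// Build a function from a source string via the built-in Function
// constructor, then invoke it with an explicit receiver.
function mkFunction(body) {
  return new Function(body); // the same constructor the C++ snippet looks up
}

var obj = { foo: function () { return 42; } };
var fn = mkFunction("return this.foo();");
console.log(fn.call(obj)); // → 42
```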

--
Vyacheslav Egorov, Software Engineer, V8 Team.
Google Denmark ApS.



On Thu, Mar 10, 2011 at 5:40 PM, Vasily vasily.stepa...@gmail.com wrote:
 Consider a v8::String variable containing some JS code.
 For instance:
 v8::Local<v8::String> source = v8::String::New("this.foo();");

 I want to evaluate js code with my explicitly defined 'this' keyword.
 First of all I found the v8::Function::Call() method, where I can pass
 the 'this' object with the recv argument (sic).

 My idea was to create a v8::Function:
 v8::Local<v8::FunctionTemplate> templ =
 v8::FunctionTemplate::New(eval, source);
 v8::Local<v8::Function> function = templ->GetFunction();

 where eval is:
 v8::Handle<v8::Value> eval(const v8::Arguments& args) {
   v8::Handle<v8::String> source =
       v8::Handle<v8::String>::Cast(args.Data());
   // I'm not talking about performance here. It's not real life ;)
   v8::Local<v8::Script> script = v8::Script::Compile(source);
   script->Run();
   return v8::Undefined();
 }

 and call this function like this:
 function->Call(myObj, 0, NULL);

 But what I found is that inside eval, args.This() points to my object
 (and that is good)... unfortunately, inside script->Run(), in the expression
 "this.foo();", 'this' means the Global object of the current context. And
 that is probably what the v8 developers expected... but it is definitely not
 good for me :)

 My dirty solution is to build a "function() { " + source + " }" string,
 then Compile, Run, and Cast the result to v8::Function.

 Is there any clever solution?

 Thanks.



Re: [v8-users] Closure optimization bug?

2011-02-28 Thread Vyacheslav Egorov
There is no bug in V8. You should not use delete to detach your
listener: it destroys onreadystatechange and all the hidden logic
associated with it.

Basically your sample reduces to:

var x = {
  set on(v) { this.on_ = v; },
  do_things: function () { this.on_(); }
};

function foo(cb) {
  x.on = cb;
  x.do_things();
}

foo(function () {
    console.log("I am the first callback!");
    delete this.on; // (1)
  });

foo(function () {
    console.log("I am the second callback!");
    delete this.on;
  });

This will print 'I am the first callback!' twice because delete
destroyed the 'on' property together with its setter, so the on_ field is
not getting updated.
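A minimal illustration of this point (the names here are mine, not from the post): delete removes an accessor property together with its setter, so later assignments create a plain data property and the backing field stops being updated.

```javascript
var x = {
  set on(v) { this.on_ = v; } // setter stores callbacks in on_
};

x.on = function first() {};   // setter runs: on_ = first
delete x.on;                  // removes the accessor (getter/setter) entirely
x.on = function second() {};  // plain data property; the setter is gone

console.log(x.on_.name);      // → first  (on_ was never updated)
console.log(x.on.name);       // → second (stored as a data property)
```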

--
Vyacheslav Egorov


On Fri, Feb 25, 2011 at 8:12 PM, Luis luis.mars...@gmail.com wrote:
 Google Chrome: 10.0.648.119 (Official Build 75907) beta
 V8: 3.0.12.24

 I have a problem where some identical calls to a function create
 different results, but the function makes no changes to the
 environment that should persist between calls or affect their results.
 Shouldn't the results be the same? I use an ECMAScript file 'test.es'
 and an XHTML file to load it (included at the end).

 Steps to produce this result:
 Load 'test.es' in Google Chrome.
 Open the JavaScript Console.
 Enter 'submit(client)' and read output 'fresh true'.
 Enter 'submit(client)' again and read output 'fresh false'.

 Observations:
 An anonymous closure defined by the 'submit' function appears to retain
 the value bound to the variable 'fresh' from a previous call. However,
 'submit' sets 'fresh' and creates a new closure on each call.
 Moreover, both outputs read 'fresh true' when I replace 'client' with
 'client0', so the behavior depends on some other factor.

 Questions:
 Shouldn't the results for the identical calls be identical?
 Why are the results identical for calls to 'client0' and not 'client'?
 Is this a bug?

 test.es:
 function submit(client) {
    var fresh = true, uri = '';
    client.fetch(uri,
                 function () {
                     console.debug('fresh', fresh),
                     fresh = false,
                     delete this.onreadystatechange;
                 });
 }
 var client = new XMLHttpRequest, client0 = {fetch : function (uri,
 continuation) {var self = this;
     setTimeout(function () {continuation.call(self);}, 0);}};
 if (!client.DONE)
    XMLHttpRequest.prototype.DONE = 4;
 client.fetch = function (uri, continuation) {
    /* sets this to request resource at uri and dispatch continuation
 on acquisition
       a continuation error forces continued request until
 continuation succeeds or a fatal error occurs */
    this.onreadystatechange = function (event) {
        if (this.readyState == this.DONE) {
            try {continuation.call(this, event);}
            catch (error) {
                if (error.fail)
                    this.requestURI(uri);
                else
                    debugger;
            }
        }
    },
    this.requestURI(uri);
 },
 client.requestURI = function(uri) {
    this.open('GET', uri),
    this.send();
 };

 The XHTML file:
 <html xmlns='http://www.w3.org/1999/xhtml'>
   <head>
     <meta http-equiv='Content-Script-Type'
           content='application/ecmascript' />
     <script src='test.es' type='application/ecmascript' />
   </head>
   <body />
 </html>



Re: [v8-users] Why Function.prototype.bind() is so slow?

2011-02-08 Thread Vyacheslav Egorov
 Can we expect that in nearest 3.1 version update?

Yes.

If nothing bad shows up on the buildbots, I expect it to be pushed to
trunk with the other changes tomorrow.

--
Vyacheslav Egorov


On Tue, Feb 8, 2011 at 11:30 AM, Egor Egorov egor.ego...@gmail.com wrote:
 Oh, now THAT's fast:
 glory:~ egor$ node test2.js
 DEBUG: Plain call: 1326ms
 DEBUG: .bind(): 4050ms
 DEBUG: getBinded(): 4064ms
 Thanks a lot! Can we expect that in nearest 3.1 version update?



Re: [v8-users] Why Function.prototype.bind() is so slow?

2011-02-07 Thread Vyacheslav Egorov
It is slow because:

a) it is very generic (take a look at the source
http://code.google.com/p/v8/source/browse/trunk/src/v8natives.js#1150);

b) it is not optimized for common cases (e.g. we could have provided an
optimized implementation for f.bind(this));

Bind returns a pretty complicated function (one that does an allocation
per call) which the optimizing code generator (aka Crankshaft) can't
optimize much.
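A hedged sketch of the generic bind pattern being described (this is illustrative, not V8's actual v8natives.js code): the bound wrapper concatenates argument arrays on every invocation, which is the per-call allocation that defeats optimization.

```javascript
// genericBind is a simplified stand-in for Function.prototype.bind:
// it captures a receiver and leading arguments.
function genericBind(fn, thisArg) {
  var bound = Array.prototype.slice.call(arguments, 2);
  return function () {
    // Per-call allocation: a fresh combined argument array every time.
    var args = bound.concat(Array.prototype.slice.call(arguments));
    return fn.apply(thisArg, args);
  };
}

function add(a, b) { return a + b; }
var add1 = genericBind(add, null, 1);
console.log(add1(2)); // → 3
```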

As you've probably noticed from the jsPerf results, we recently improved
the handling of closures in Crankshaft (so the "Closure" test is almost as
fast as the "Func call" test on Chrome 10.0.648).

--
Vyacheslav Egorov.



On Mon, Feb 7, 2011 at 2:03 PM, Egor Egorov egor.ego...@gmail.com wrote:

 I discovered this problem when I was trying to figure out why my
 data-intensive code is slow. Turns out, lots of bind calls.

 You can test it here: http://jsperf.com/call-vs-closure-to-pass-scope/2


