Re: [v8-users] Cryptic out-of-memory error

2017-05-11 Thread Michael Lippautz
V8 suffered from a virtual address space leak, which was fixed and
back-merged to 5.7 (https://bugs.chromium.org/p/v8/issues/detail?id=5945).

Great that it works for you now.


Re: [v8-users] Cryptic out-of-memory error

2017-05-11 Thread Andre Cunha
I have repeated the tests in V8 5.8.283.38, and indeed the problem is gone. 
The amount of virtual memory remains stable over time.

With regard to the cause of the problem, I managed to create a similar 
situation (increase in virtual memory consumption without increase in 
actual memory usage) using a loop like this:

while (true) {
  usleep(100);
  sbrk(4096 * 40);
}

My guess is that, in version 5.6, the program break of the process was 
increased when an Isolate was allocated, some of the allocated pages were 
never used, and the break was not decreased when Isolate::Dispose() was 
called. The memory the Isolate had occupied was nonetheless marked free, 
and thus reused by subsequent allocations, but each allocation still pushed 
the program break further. Since the extra pages were never referenced, no 
physical memory was committed to the process, yet the program break 
eventually reached its limit. That would explain the situation, but it's 
just a wild guess, and the problem is solved in 5.8 anyway.

Thank you for the support.
Andre


Re: [v8-users] Cryptic out-of-memory error

2017-05-11 Thread Jakob Kummerow
On Thu, May 11, 2017 at 3:38 PM, Jochen Eisinger wrote:

> Thank you for the detailed bug report.
>
> I tried reproducing this on the latest version of V8, but couldn't observe
> the behavior you described.
>
> Have you considered updating to at least the latest stable version of V8?
>

...which would be branch-heads/5.8 (currently 5.8.283.38)



Re: [v8-users] Cryptic out-of-memory error

2017-05-11 Thread Jochen Eisinger
Thank you for the detailed bug report.

I tried reproducing this on the latest version of V8, but couldn't observe
the behavior you described.

Have you considered updating to at least the latest stable version of V8?


Re: [v8-users] Cryptic out-of-memory error

2017-05-10 Thread Andre Cunha
I've managed to reproduce the problem using just V8's hello_world example 
(source code attached). I just added a loop around the creation and 
destruction of the Isolate (this is what happens in each cycle of my stress 
test). When I run the process and monitor it in "top", the RES column stays 
constant at around 26 MB, but the VIRT column grows indefinitely; after 
about 7 minutes, VIRT reaches around 33 GB and the process crashes (the 
"CommitLimit" value on my machine, obtained from /proc/meminfo, is 
35,511,816 kB).

Following Michael's suggestion, I changed src/heap/spaces.cc so that it 
prints a stack trace when it's about to return NULL; that trace is also 
attached. I use V8 5.6.326.42 on Fedora 25, x86_64.

Just to explain why I'm doing this test: in the library I'm working on, the 
user can create a certain kind of thread and send requests to it. Each 
thread needs to run JS code (received from the user), so it creates its own 
Isolate when it needs to, and destroys it when the Isolate is no longer 
necessary. One of our stress tests involves the constant creation and 
destruction of such threads, as well as constantly sending requests to the 
same thread. It was in this context that I found this problem.


Re: [v8-users] Cryptic out-of-memory error

2017-05-08 Thread Andre Cunha
@Michael Lippautz, I'll try adding a breakpoint that fires when 
AllocateChunk returns NULL; hopefully, I'll get more information about the 
problem.

@Jakob Kummerow, yes, I'm calling Isolate::Dispose() on every isolate after 
using it. I'll also watch the VIRT column and see if it shows any 
abnormality.

Thank you!


Re: [v8-users] Cryptic out-of-memory error

2017-05-08 Thread Jakob Kummerow
My guess would be an address space leak (should show up in the "VIRT"
column of "top" on Linux). Are you calling "isolate->Dispose()" on any
isolate you're done with?

On Mon, May 8, 2017 at 4:01 PM, Michael Lippautz 
wrote:

> V8 usually fails there if it cannot allocate a 512 KiB page from the
> operating system.
>
> You could try hooking in AllocateChunk [1] and see why it is returning
> NULL and trace back through the underlying calls.
>
> Best, Michael
>
> [1]: https://cs.chromium.org/chromium/src/v8/src/heap/spaces.cc?q=AllocateChunk=package:chromium=739
>
> On Mon, May 8, 2017 at 3:27 PM Andre Cunha 
> wrote:
>
>> Hello,
>>
>> I have embedded v8 into a project for the company I work for, and during
>> some stress tests, I've encountered a weird out-of-memory error. After
>> considerable investigation, I still have no idea of what might be going on,
>> so I'm reaching out to you in hope of some insight.
>>
>> So here is a summary of the scenario: in each test iteration, I create an
>> Isolate, run some short JS code fragments, and then destroy the isolate.
>> After the execution of each code fragment, I perform some variable
>> manipulations from my C++ code using V8's API, prior to running the next
>> fragment. I repeat thousands of such iterations over the same input (it's
>> valid), and I expect no memory leaks and no crashes. However, after about
>> 3 hours, V8 crashes with an out-of-memory error for no apparent reason.
>>
>> I have run the code through Valgrind and under AddressSanitizer, and no
>> memory leaks were detected. Additionally, I monitor memory consumption
>> throughout the test; the program's memory usage is stable, without any
>> peak, and when V8 crashes the system has a lot of available memory (more
>> than 5 GiB). I have used V8's API to get heap usage statistics after each
>> successful iteration; the values are always the same, and are shown below
>> (they are included in an attached file, typical_memory.txt):
>>
>> ScriptEngine::Run: finished running at 2017-05-05T13:20:34
>>   used_heap_size   : 46.9189 MiB
>>   total_heap_size  : 66.1562 MiB
>>   Space 0
>>     name   : new_space
>>     size   : 8 MiB
>>     used_size  : 2.47314 MiB
>>     available_size : 5.39404 MiB
>>   Space 1
>>     name   : old_space
>>     size   : 39.5625 MiB
>>     used_size  : 31.6393 MiB
>>     available_size : 5.51526 MiB
>>   Space 2
>>     name   : code_space
>>     size   : 10.4375 MiB
>>     used_size  : 6.16919 MiB
>>     available_size : 0 B
>>   Space 3
>>     name   : map_space
>>     size   : 8.15625 MiB
>>     used_size  : 6.63733 MiB
>>     available_size : 80 B
>>   Space 4
>>     name   : large_object_space
>>     size   : 0 B
>>     used_size  : 0 B
>>     available_size : 11.1015 GiB
>>
>> When V8 crashes, it prints a heap summary, which I'm sending attached
>> (file heap_after_error.txt). I also save a core dump. Sometimes, the
>> system crashes during the creation of an Isolate; sometimes, during the
>> creation of a Context; typically, it crashes during snapshot
>> deserialization. However, the top of the stack is always the same, and it's
>> reproduced below (also included attached, file stacktrace.txt).
>>
>> #7  v8::internal::OS::Abort () at ../../src/base/platform/platform-posix.cc:230
>> #8  0x7ff15a2f922f in v8::Utils::ReportOOMFailure
>> (location=0x7ff15b20f62e "Committing semi space failed.",
>> is_heap_oom=false) at ../../src/api.cc:381
>> #9  0x7ff15a2f918e in v8::internal::V8::FatalProcessOutOfMemory
>> (location=0x7ff15b20f62e "Committing semi space failed.",
>> is_heap_oom=false) at ../../src/api.cc:352
>> #10 0x7ff15aa3fefc in v8::internal::Heap::EnsureFromSpaceIsCommitted
>> (this=0x7ff12c0bdde0) at ../../src/heap/heap.cc:1234
>> #11 0x7ff15aa3ed34 in v8::internal::Heap::PerformGarbageCollection
>> (this=0x7ff12c0bdde0, collector=v8::internal::MARK_COMPACTOR,
>> gc_callback_flags=v8::kNoGCCallbackFlags) at
>> ../../src/heap/heap.cc:1308
>> #12 0x7ff15aa3e2ab in v8::internal::Heap::CollectGarbage
>> (this=0x7ff12c0bdde0, collector=v8::internal::MARK_COMPACTOR,
>> gc_reason=v8::internal::GarbageCollectionReason::kDeserializer,
>> collector_reason=0x7ff15b20f07a "GC in old space requested",
>> gc_callback_flags=v8::kNoGCCallbackFlags) at
>> ../../src/heap/heap.cc:1002
>> #13 0x7ff15a33cdee in v8::internal::Heap::CollectGarbage
>> (this=0x7ff12c0bdde0, space=v8::internal::OLD_SPACE,
>> gc_reason=v8::internal::GarbageCollectionReason::kDeserializer,
>> callbackFlags=v8::kNoGCCallbackFlags) at ../../src/heap/heap-inl.h:681
>> #14 0x7ff15aa3d069 in v8::internal::Heap::CollectAllGarbage
>> (this=0x7ff12c0bdde0, flags=2,
>> 

Re: [v8-users] Cryptic out-of-memory error

2017-05-08 Thread Michael Lippautz
V8 usually fails there if it cannot allocate a 512KiB page from the
operating system.

You could try hooking in AllocateChunk [1] and see why it is returning NULL
and trace back through the underlying calls.

Best, Michael

[1]:
https://cs.chromium.org/chromium/src/v8/src/heap/spaces.cc?q=AllocateChunk&sq=package:chromium&l=739
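
Before instrumenting AllocateChunk itself, one cheap check (assuming Linux, as the stack trace suggests) is whether the process is simply running out of virtual address space, in which case the underlying mmap call is what fails. Summing the ranges in /proc/<pid>/maps shows how much address space is already reserved; the parsing is factored out here so it also works on a captured copy of the file:

```python
# Sketch (assumes Linux): sum all mapped ranges from /proc/<pid>/maps
# contents to see how much virtual address space the process has claimed.

def total_mapped_bytes(maps_text):
    """Sum the sizes of all address ranges in /proc/<pid>/maps contents."""
    total = 0
    for line in maps_text.splitlines():
        if not line.strip():
            continue
        addr_range = line.split()[0]           # e.g. "7ff15a000000-7ff15a200000"
        start, end = addr_range.split("-")
        total += int(end, 16) - int(start, 16)
    return total

# Works on a live process ("/proc/self/maps") or a saved snapshot:
sample = ("00400000-00452000 r-xp 00000000 08:02 173521 /bin/demo\n"
          "7ff15a000000-7ff15a200000 rw-p 00000000 00:00 0\n")
print(total_mapped_bytes(sample))
```

Logging this once per test iteration would show whether reserved address space grows even while resident memory stays flat.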

On Mon, May 8, 2017 at 3:27 PM Andre Cunha  wrote:

> Hello,
>
> I have embedded v8 into a project for the company I work for, and during
> some stress tests, I've encountered a weird out-of-memory error. After
> considerable investigation, I still have no idea of what might be going on,
> so I'm reaching out to you in hope of some insight.
>
> So here is a summary of the scenario: in each test iteration, I create an
> Isolate, run some short JS code fragments, and then destroy the isolate.
> After the execution of each code fragment, I perform some variable
> manipulations from my C++ code using V8's API, prior to running the next
> fragment. I repeat thousands of such iterations over the same input (it's
> valid), and I expect no memory leaks and no crashes. However, after about 3
> hours, V8 crashes with an out-of-memory error for no apparent reason.
>
> I have run the code through valgrind and with address sanitizer enabled, and no
> memory leaks were detected. Additionally, I monitor memory consumption
> throughout the test; the program's memory usage is stable, without any
> peak, and when V8 crashes the system has a lot of available memory (more
> than 5 GiB). I have used V8's API to get heap usage statistics after each
> successful iteration; the values are always the same, and are shown below
> (they are included in an attached file, typical_memory.txt):
>
> ScriptEngine::Run: finished running at 2017-05-05T13:20:34
>   used_heap_size   : 46.9189 Mib
>   total_heap_size  : 66.1562 Mib
>   Space 0
> name   : new_space
> size   : 8 Mib
> used_size  : 2.47314 Mib
> available_size : 5.39404 Mib
>   Space 1
> name   : old_space
> size   : 39.5625 Mib
> used_size  : 31.6393 Mib
> available_size : 5.51526 Mib
>   Space 2
> name   : code_space
> size   : 10.4375 Mib
> used_size  : 6.16919 Mib
> available_size : 0 B
>   Space 3
> name   : map_space
> size   : 8.15625 Mib
> used_size  : 6.63733 Mib
> available_size : 80 B
>   Space 4
> name   : large_object_space
> size   : 0 B
> used_size  : 0 B
> available_size : 11.1015 Gib
>
> When V8 crashes, it prints a heap summary, which I'm sending attached
> (file heap_after_error.txt). I also save a core dump. Sometimes, the
> system crashes during the creation of an Isolate; sometimes, during the
> creation of a Context; typically, it crashes during snapshot
> deserialization. However, the top of the stack is always the same, and it's
> reproduced below (also included attached, file stacktrace.txt).
>
> #7  v8::internal::OS::Abort () at
> ../../src/base/platform/platform-posix.cc:230
> #8  0x7ff15a2f922f in v8::Utils::ReportOOMFailure
> (location=0x7ff15b20f62e "Committing semi space failed.",
> is_heap_oom=false) at ../../src/api.cc:381
> #9  0x7ff15a2f918e in v8::internal::V8::FatalProcessOutOfMemory
> (location=0x7ff15b20f62e "Committing semi space failed.",
> is_heap_oom=false) at ../../src/api.cc:352
> #10 0x7ff15aa3fefc in v8::internal::Heap::EnsureFromSpaceIsCommitted
> (this=0x7ff12c0bdde0) at ../../src/heap/heap.cc:1234
> #11 0x7ff15aa3ed34 in v8::internal::Heap::PerformGarbageCollection
> (this=0x7ff12c0bdde0, collector=v8::internal::MARK_COMPACTOR,
> gc_callback_flags=v8::kNoGCCallbackFlags) at
> ../../src/heap/heap.cc:1308
> #12 0x7ff15aa3e2ab in v8::internal::Heap::CollectGarbage
> (this=0x7ff12c0bdde0, collector=v8::internal::MARK_COMPACTOR,
> gc_reason=v8::internal::GarbageCollectionReason::kDeserializer,
> collector_reason=0x7ff15b20f07a "GC in old space requested",
> gc_callback_flags=v8::kNoGCCallbackFlags) at
> ../../src/heap/heap.cc:1002
> #13 0x7ff15a33cdee in v8::internal::Heap::CollectGarbage
> (this=0x7ff12c0bdde0, space=v8::internal::OLD_SPACE,
> gc_reason=v8::internal::GarbageCollectionReason::kDeserializer,
> callbackFlags=v8::kNoGCCallbackFlags) at ../../src/heap/heap-inl.h:681
> #14 0x7ff15aa3d069 in v8::internal::Heap::CollectAllGarbage
> (this=0x7ff12c0bdde0, flags=2,
> gc_reason=v8::internal::GarbageCollectionReason::kDeserializer,
> gc_callback_flags=v8::kNoGCCallbackFlags) at ../../src/heap/heap.cc:848
> #15 0x7ff15aa3fe84 in v8::internal::Heap::ReserveSpace
> (this=0x7ff12c0bdde0, reservations=0x7ff148fe6078, maps=0x7ff148fe60f8) at
> ../../src/heap/heap.cc:1215
>
> In the heap summary that gets printed, I have noted some apparent
> discrepancies with the typical data I get 

[v8-users] Cryptic out-of-memory error

2017-05-08 Thread Andre Cunha
Hello,

I have embedded v8 into a project for the company I work for, and during 
some stress tests, I've encountered a weird out-of-memory error. After 
considerable investigation, I still have no idea of what might be going on, 
so I'm reaching out to you in hope of some insight.

So here is a summary of the scenario: in each test iteration, I create an 
Isolate, run some short JS code fragments, and then destroy the isolate. 
After the execution of each code fragment, I perform some variable 
manipulations from my C++ code using V8's API, prior to running the next 
fragment. I repeat thousands of such iterations over the same input (it's 
valid), and I expect no memory leaks and no crashes. However, after about 3 
hours, V8 crashes with an out-of-memory error for no apparent reason.
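
A toy model (not V8 code; all names here are made up for illustration) of the leak hypothesized later in this thread shows how an OOM can occur with plenty of free physical memory: freed blocks are reused, so resident usage stays flat, but every allocation still bumps the process "program break" by a few reserved-but-never-touched pages, so virtual address space grows without bound across create/dispose iterations.

```python
# Toy model: an allocator whose program break only grows. Freed blocks go
# on a free list and are reused (so no tool sees a leak), but each alloc
# still reserves slack pages that are never touched or returned to the OS.

PAGE = 4096

class BreakOnlyGrowsAllocator:
    def __init__(self):
        self.brk = 0          # top of the program break (virtual space claimed)
        self.live = 0         # bytes actually in use at any moment
        self.free_list = []   # freed blocks: reusable, never returned to the OS

    def alloc(self, size):
        self.brk += 2 * PAGE  # slack reserved on every call, never touched
        self.live += size
        if self.free_list:    # reuse a freed block if available
            off, _ = self.free_list.pop()
            return off
        off = self.brk
        self.brk += size
        return off

    def free(self, off, size):
        self.live -= size
        self.free_list.append((off, size))

a = BreakOnlyGrowsAllocator()
for _ in range(1000):          # 1000 "create isolate / dispose" iterations
    blk = a.alloc(8 * PAGE)
    a.free(blk, 8 * PAGE)

print(a.live)                  # 0: nothing for valgrind to report
print(a.brk)                   # yet megabytes of address space are gone
```

Under this model the process eventually exhausts its virtual address space even though memory-usage monitoring and leak checkers report nothing wrong.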

I have run the code through valgrind and with address sanitizer enabled, and no
memory leaks were detected. Additionally, I monitor memory consumption 
throughout the test; the program's memory usage is stable, without any 
peak, and when V8 crashes the system has a lot of available memory (more 
than 5 GiB). I have used V8's API to get heap usage statistics after each
successful iteration; the values are always the same, and are shown below 
(they are included in an attached file, typical_memory.txt):

ScriptEngine::Run: finished running at 2017-05-05T13:20:34
  used_heap_size   : 46.9189 Mib
  total_heap_size  : 66.1562 Mib
  Space 0
name   : new_space
size   : 8 Mib
used_size  : 2.47314 Mib
available_size : 5.39404 Mib
  Space 1
name   : old_space
size   : 39.5625 Mib
used_size  : 31.6393 Mib
available_size : 5.51526 Mib
  Space 2
name   : code_space
size   : 10.4375 Mib
used_size  : 6.16919 Mib
available_size : 0 B
  Space 3
name   : map_space
size   : 8.15625 Mib
used_size  : 6.63733 Mib
available_size : 80 B
  Space 4
name   : large_object_space
size   : 0 B
used_size  : 0 B
available_size : 11.1015 Gib
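
One gap in monitoring like the above: heap statistics and resident-memory checks can both look healthy while virtual address space leaks. On Linux (an assumption; the stack trace points at the POSIX platform layer), the VmSize field of /proc/<pid>/status tracks virtual size, and logging it per iteration exposes that kind of growth. A minimal sketch, with parsing factored out so it also runs on a saved snapshot:

```python
# Sketch: extract VmSize (virtual size, in KiB) from the contents of
# /proc/<pid>/status. Resident-memory metrics miss address-space leaks.

def vm_size_kib(status_text):
    """Return the VmSize value in KiB from /proc/<pid>/status contents."""
    for line in status_text.splitlines():
        if line.startswith("VmSize:"):
            return int(line.split()[1])
    raise ValueError("no VmSize line found")

# Per-iteration use in a stress test (Linux only):
#   with open("/proc/self/status") as f:
#       print("iteration", i, "VmSize:", vm_size_kib(f.read()), "kB")

sample = "Name:\tstress\nVmSize:\t  123456 kB\nVmRSS:\t   65536 kB\n"
print(vm_size_kib(sample))   # 123456
```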

When V8 crashes, it prints a heap summary, which I'm sending attached (file 
heap_after_error.txt). I also save a core dump. Sometimes, the system 
crashes during the creation of an Isolate; sometimes, during the creation 
of a Context; typically, it crashes during snapshot deserialization. 
However, the top of the stack is always the same, and it's reproduced below 
(also included attached, file stacktrace.txt).

#7  v8::internal::OS::Abort () at 
../../src/base/platform/platform-posix.cc:230
#8  0x7ff15a2f922f in v8::Utils::ReportOOMFailure 
(location=0x7ff15b20f62e "Committing semi space failed.", 
is_heap_oom=false) at ../../src/api.cc:381
#9  0x7ff15a2f918e in v8::internal::V8::FatalProcessOutOfMemory 
(location=0x7ff15b20f62e "Committing semi space failed.", 
is_heap_oom=false) at ../../src/api.cc:352
#10 0x7ff15aa3fefc in v8::internal::Heap::EnsureFromSpaceIsCommitted 
(this=0x7ff12c0bdde0) at ../../src/heap/heap.cc:1234
#11 0x7ff15aa3ed34 in v8::internal::Heap::PerformGarbageCollection 
(this=0x7ff12c0bdde0, collector=v8::internal::MARK_COMPACTOR,
gc_callback_flags=v8::kNoGCCallbackFlags) at ../../src/heap/heap.cc:1308
#12 0x7ff15aa3e2ab in v8::internal::Heap::CollectGarbage 
(this=0x7ff12c0bdde0, collector=v8::internal::MARK_COMPACTOR,
gc_reason=v8::internal::GarbageCollectionReason::kDeserializer, 
collector_reason=0x7ff15b20f07a "GC in old space requested",
gc_callback_flags=v8::kNoGCCallbackFlags) at ../../src/heap/heap.cc:1002
#13 0x7ff15a33cdee in v8::internal::Heap::CollectGarbage 
(this=0x7ff12c0bdde0, space=v8::internal::OLD_SPACE,
gc_reason=v8::internal::GarbageCollectionReason::kDeserializer, 
callbackFlags=v8::kNoGCCallbackFlags) at ../../src/heap/heap-inl.h:681
#14 0x7ff15aa3d069 in v8::internal::Heap::CollectAllGarbage 
(this=0x7ff12c0bdde0, flags=2,
gc_reason=v8::internal::GarbageCollectionReason::kDeserializer, 
gc_callback_flags=v8::kNoGCCallbackFlags) at ../../src/heap/heap.cc:848
#15 0x7ff15aa3fe84 in v8::internal::Heap::ReserveSpace 
(this=0x7ff12c0bdde0, reservations=0x7ff148fe6078, maps=0x7ff148fe60f8) at 
../../src/heap/heap.cc:1215

In the heap summary that gets printed, I have noted some apparent 
discrepancies with the typical data I get from the API (shown above): for 
example, the summary says the size of the old space is 4067328 bytes (= 
3.88 MiB), not the typical 39.56 MiB I get from the API.

I have dived into V8 garbage collection, but still couldn't make sense of 
the error message ("Committing semi space failed"). So, I'd like to know 
under which circumstances this error can happen, and how it's possible that 
it only happens occasionally, given that each test iteration is identical 
to the others and there are no detectable memory leaks.

If you need more