[v8-users] ThreadSanitizer finds data race in concurrent creation of Isolates

2017-05-17 Thread Andre Cunha
Hello,

I'm working on a library where multiple threads can run JS code 
independently at the same time. In the library, each thread creates and 
uses its own Isolate. However, I noticed that clang's ThreadSanitizer was 
complaining about a data race during Isolate creation. I managed to 
reproduce the problem outside our library, and I'm sending both the code 
and the sanitizer's error message (the code needs to be compiled with 
-std=c++11 and -pthread). I'm using V8 5.8.283.38, but I repeated the test 
on the master branch and got the same result. What the test does is spawn 
10 threads; each one of them just creates and destroys Isolates repeatedly.

I have found a recent issue in the bug tracker [1] that reports a similar 
problem, but the associated commit is not included in V8 5.8, and the problem 
is still reproducible on the master branch anyway.

So, my question is: I know that multiple threads can run JS code 
simultaneously, provided that each thread has its own Isolate; but does 
Isolate creation itself need to be synchronized (i.e., serialized across threads)?

Best regards,
Andre

[1] https://bugs.chromium.org/p/v8/issues/detail?id=5807

#include <thread>
#include <vector>

#include "v8.h"
#include "libplatform/libplatform.h"

using namespace v8;

const size_t kNumThreads = 10;
const size_t kNumIterations = 1000;

int main() {
  V8::InitializeICUDefaultLocation("./v8/out.gn/x64.debug/");
  V8::InitializeExternalStartupData("./v8/out.gn/x64.debug/");
  Platform* platform = platform::CreateDefaultPlatform();
  V8::InitializePlatform(platform);
  V8::Initialize();

  std::vector<std::thread> threads(kNumThreads);
  for (size_t i = 0; i < kNumThreads; ++i) {
    threads[i] = std::thread([]() {
      for (size_t j = 0; j < kNumIterations; ++j) {
        Isolate::CreateParams create_params;
        create_params.array_buffer_allocator =
            v8::ArrayBuffer::Allocator::NewDefaultAllocator();
        Isolate* isolate = Isolate::New(create_params);
        {
          Isolate::Scope isolate_scope(isolate);
        }
        isolate->Dispose();
        delete create_params.array_buffer_allocator;
      }
    });
  }

  for (std::thread& t : threads) {
t.join();
  }

  V8::Dispose();
  V8::ShutdownPlatform();
  delete platform;
  return 0;
}
==
WARNING: ThreadSanitizer: data race (pid=19032)
  Atomic read of size 1 at 0x7fa59714e078 by thread T9:
#0 pthread_mutex_lock  (stress+0x00439dc5)
#1 v8::base::LockNativeHandle(pthread_mutex_t*) 
/home/andre/Develop/v8/v8/out.gn/x64.debug/../../src/base/platform/mutex.cc:57:16
 (libv8_libbase.so+0x00021284)
#2 void std::_Bind_simple::_M_invoke<>(std::_Index_tuple<>) 
/usr/bin/../lib/gcc/x86_64-redhat-linux/6.3.1/../../../../include/c++/6.3.1/functional:1390:18
 (stress+0x004b2cb8)
#3 std::_Bind_simple::operator()() 
/usr/bin/../lib/gcc/x86_64-redhat-linux/6.3.1/../../../../include/c++/6.3.1/functional:1380:16
 (stress+0x004b2c68)
#4 std::thread::_State_impl >::_M_run() 
/usr/bin/../lib/gcc/x86_64-redhat-linux/6.3.1/../../../../include/c++/6.3.1/thread:196:13
 (stress+0x004b2a9c)
#5 execute_native_thread_routine 
/usr/src/debug/gcc-6.3.1-20161221/obj-x86_64-redhat-linux/x86_64-redhat-linux/libstdc++-v3/src/c++11/../../../../../libstdc++-v3/src/c++11/thread.cc:83
 (libstdc++.so.6+0x000bb5ce)

  Previous write of size 1 at 0x7fa59714e078 by thread T8:
#0 pthread_mutex_init  (stress+0x0042580a)
#1 v8::base::InitializeNativeHandle(pthread_mutex_t*) 
/home/andre/Develop/v8/v8/out.gn/x64.debug/../../src/base/platform/mutex.cc:23:12
 (libv8_libbase.so+0x0002108e)
#2 void std::_Bind_simple::_M_invoke<>(std::_Index_tuple<>) 
/usr/bin/../lib/gcc/x86_64-redhat-linux/6.3.1/../../../../include/c++/6.3.1/functional:1390:18
 (stress+0x004b2cb8)
#3 std::_Bind_simple::operator()() 
/usr/bin/../lib/gcc/x86_64-redhat-linux/6.3.1/../../../../include/c++/6.3.1/functional:1380:16
 (stress+0x004b2c68)
#4 std::thread::_State_impl >::_M_run() 
/usr/bin/../lib/gcc/x86_64-redhat-linux/6.3.1/../../../../include/c++/6.3.1/thread:196:13
 (stress+0x004b2a9c)
#5 execute_native_thread_routine 
/usr/src/debug/gcc-6.3.1-20161221/obj-x86_64-redhat-linux/x86_64-redhat-linux/libstdc++-v3/src/c++11/../../../../../libstdc++-v3/src/c++11/thread.cc:83
 (libstdc++.so.6+0x000bb5ce)

  Location is global 'v8::base::entropy_mutex' of size 56 at 0x7fa59714e070 
(libv8_libbase.so+0x00034078)

  Thread T9 (tid=19042, running) created by main thread at:
#0 pthread_create  
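
In case it is useful to anyone hitting the same report: until this is 
clarified, a defensive option is to serialize only the Isolate lifecycle calls 
behind a process-wide mutex, while JS execution itself stays fully parallel. A 
minimal sketch (the wrapper functions below are made up for illustration and 
are not part of V8's API):

#include <mutex>
#include "v8.h"

namespace {
// Hypothetical process-wide guard around Isolate creation/destruction only.
std::mutex isolate_lifecycle_mutex;
}  // namespace

v8::Isolate* NewIsolateSerialized(const v8::Isolate::CreateParams& params) {
  std::lock_guard<std::mutex> lock(isolate_lifecycle_mutex);
  return v8::Isolate::New(params);
}

void DisposeIsolateSerialized(v8::Isolate* isolate) {
  std::lock_guard<std::mutex> lock(isolate_lifecycle_mutex);
  isolate->Dispose();
}

The stress test above would then call NewIsolateSerialized() and 
DisposeIsolateSerialized() instead of Isolate::New() and isolate->Dispose().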

Re: [v8-users] Cryptic out-of-memory error

2017-05-11 Thread Andre Cunha
I have repeated the tests in V8 5.8.283.38, and indeed the problem is gone. 
The amount of virtual memory remains stable over time.

With regard to the cause of the problem, I managed to create a similar 
situation (increase in virtual memory consumption without increase in 
actual memory usage) using a loop like this:

while (true) {
  usleep(100);
  sbrk(4096 * 40);
}
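
A self-contained version of that experiment, in case anyone wants to reproduce 
the symptom (Linux only, no V8 involved; it merely mimics the suspected 
allocation pattern):

#include <unistd.h>  // usleep(), sbrk()

// Grow the program break by 40 pages every 100 microseconds without ever
// touching the new pages: in "top", VIRT climbs steadily while RES stays
// flat, until the commit limit is eventually exhausted.
int main() {
  while (true) {
    usleep(100);
    sbrk(4096 * 40);
  }
}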

I would guess that, in version 5.6, the program break of the process was 
increased whenever an Isolate was allocated, some of the allocated pages were 
never actually used, and the program break was not decreased again when 
Isolate::Dispose() was called. The memory the Isolate had occupied was 
nonetheless marked free and reused by subsequent allocations, yet each new 
allocation still pushed the program break further. Since the extra pages were 
never referenced, no physical memory was actually allocated to the process, 
but the program break eventually hit its limit. That could explain the 
situation, but it's just a wild guess, and the problem is solved in 5.8 anyway.

Thank you for the support.
Andre

On Thursday, May 11, 2017 at 10:45:30 AM UTC-3, Jakob Kummerow wrote:
>
> On Thu, May 11, 2017 at 3:38 PM, Jochen Eisinger <joc...@chromium.org 
> > wrote:
>
>> Thank you for the detailed bug report.
>>
>> I tried reproducing this on the latest version of V8, but couldn't 
>> observe the behavior you described.
>>
>> Have you considered updating to at least the latest stable version of V8?
>>
>
> ...which would be branch-heads/5.8 (currently 5.8.283.38)
>  
>
>>
>> On Wed, May 10, 2017 at 7:50 PM Andre Cunha <andre.l...@gmail.com 
>> > wrote:
>>
>>> I've managed to reproduce the problem using just V8's hello_world 
>>> example (source code attached). I just added a loop around the creation and 
>>> destruction of the Isolate (this is what happens in each cycle of my stress 
>>> test). When I run the process and monitor it in "top", the RES column stays 
>>> constant at around 26 MB, but the VIRT column grows indefinitely; after 
>>> about 7 minutes, the VIRT column reaches around 33 GB, and the process 
>>> crashes (the value of "CommitLimit" in my machine, got from /proc/meminfo, 
>>> is 35,511,816 kB).
>>>
>>> Following Michael's suggestion, I changed file src/heap/spaces.cc so 
>>> that it prints a stack trace when it's about to return NULL. I'm also 
>>> sending the stack trace attached. I use V8 5.6.326.42 in Fedora 25, x86_64.
>>>
>>> Just to explain why I'm doing this test: in the library I'm working on, 
>>> the user can create a certain kind of thread and send requests to it. Each 
>>> thread needs to run JS code (received from the user), so it creates its own 
>>> Isolate when it needs to, and destroys it when the Isolate is no longer 
>>> necessary. One of our stress tests involves the constant creation and 
>>> destruction of such threads, as well as constantly sending requests to the 
>>> same thread. It was in this context that I found this problem.
>>>
>>> On Monday, May 8, 2017 at 12:50:37 PM UTC-3, Andre Cunha wrote:
>>>>
>>>> @Michael Lippautz, I'll try adding a breakpoint if AllocateChunk 
>>>> returns NULL; hopefully, I'll get more information about the problem.
>>>>
>>>> @Jakob Kummerow, yes, I'm calling Isolate::Dispose() in every isolate 
>>>> after using it. I'll also observe the VIRT column and see if it shows any 
>>>> abnormality.
>>>>
>>>> Thank you!
>>>>
>>>> On Monday, May 8, 2017 at 11:07:44 AM UTC-3, Jakob Kummerow wrote:
>>>>>
>>>>> My guess would be an address space leak (should show up in the "VIRT" 
>>>>> column of "top" on Linux). Are you calling "isolate->Dispose()" on any 
>>>>> isolate you're done with?
>>>>>
>>>>> On Mon, May 8, 2017 at 4:01 PM, Michael Lippautz <mlip...@chromium.org
>>>>> > wrote:
>>>>>
>>>>>> V8 usually fails there if it cannot allocate a 512KiB page from the 
>>>>>> operating system.
>>>>>>
>>>>>> You could try hooking in AllocateChunk [1] and see why it is 
>>>>>> returning NULL and trace back through the underlying calls. 
>>>>>>
>>>>>> Best, Michael
>>>>>>
>>>>>> [1]: 
>>>>>> https://cs.chromium.org/chromium/src/v8/src/heap/spaces.cc?q=AllocateChunk=package:chromium=739

Re: [v8-users] Cryptic out-of-memory error

2017-05-10 Thread Andre Cunha
I've managed to reproduce the problem using just V8's hello_world example 
(source code attached). I just added a loop around the creation and 
destruction of the Isolate (this is what happens in each cycle of my stress 
test). When I run the process and monitor it in "top", the RES column stays 
constant at around 26 MB, but the VIRT column grows indefinitely; after 
about 7 minutes, the VIRT column reaches around 33 GB, and the process 
crashes (the value of "CommitLimit" on my machine, obtained from /proc/meminfo, 
is 35,511,816 kB).

Following Michael's suggestion, I changed file src/heap/spaces.cc so that 
it prints a stack trace when it's about to return NULL. I'm also sending 
the stack trace attached. I use V8 5.6.326.42 in Fedora 25, x86_64.

Just to explain why I'm doing this test: in the library I'm working on, the 
user can create a certain kind of thread and send requests to it. Each 
thread needs to run JS code (received from the user), so it creates its own 
Isolate when it needs to, and destroys it when the Isolate is no longer 
necessary. One of our stress tests involves the constant creation and 
destruction of such threads, as well as constantly sending requests to the 
same thread. It was in this context that I found this problem.

On Monday, May 8, 2017 at 12:50:37 PM UTC-3, Andre Cunha wrote:
>
> @Michael Lippautz, I'll try adding a breakpoint if AllocateChunk returns 
> NULL; hopefully, I'll get more information about the problem.
>
> @Jakob Kummerow, yes, I'm calling Isolate::Dispose() in every isolate 
> after using it. I'll also observe the VIRT column and see if it shows any 
> abnormality.
>
> Thank you!
>
> On Monday, May 8, 2017 at 11:07:44 AM UTC-3, Jakob Kummerow wrote:
>>
>> My guess would be an address space leak (should show up in the "VIRT" 
>> column of "top" on Linux). Are you calling "isolate->Dispose()" on any 
>> isolate you're done with?
>>
>> On Mon, May 8, 2017 at 4:01 PM, Michael Lippautz <mlip...@chromium.org> 
>> wrote:
>>
>>> V8 usually fails there if it cannot allocate a 512KiB page from the 
>>> operating system.
>>>
>>> You could try hooking in AllocateChunk [1] and see why it is returning 
>>> NULL and trace back through the underlying calls. 
>>>
>>> Best, Michael
>>>
>>> [1]: 
>>> https://cs.chromium.org/chromium/src/v8/src/heap/spaces.cc?q=AllocateChunk=package:chromium=739
>>>
>>> On Mon, May 8, 2017 at 3:27 PM Andre Cunha <andre.l...@gmail.com> wrote:
>>>
>>>> Hello,
>>>>
>>>> I have embedded v8 into a project for the company I work for, and 
>>>> during some stress tests, I've encountered a weird out-of-memory error. 
>>>> After considerable investigation, I still have no idea of what might be 
>>>> going on, so I'm reaching out to you in hope of some insight.
>>>>
>>>> So here is a summary of the scenario: in each test iteration, I create 
>>>> an Isolate, run some short JS code fragments, and then destroy the 
>>>> isolate. 
>>>> After the execution of each code fragment, I perform some variable 
>>>> manipulations from my C++ code using V8's API, prior to running the next 
>>>> fragment. I repeat thousands of such iterations over the same input (it's 
>>>> valid), and I expect no memory leaks and no crashes. However, after about 
>>>> 3 
>>>> hours, V8 crashes with an out-of-memory error of no apparent reason.
>>>>
>>>> I have run the code though valgrind and using address sanitizing, and 
>>>> no memory leaks were detected. Additionally, I monitor memory consumption 
>>>> throughout the test; the program's memory usage is stable, without any 
>>>> peak, and when V8 crashes the system has a lot of available memory (more 
>>>> than 5 Gib). I have used V8's API to get heap usage statistics after each 
>>>> successful iteration; the values are always the same, and are shown below 
>>>> (they are included in an attached file, typical_memory.txt):
>>>>
>>>> ScriptEngine::Run: finished running at 2017-05-05T13:20:34
>>>>   used_heap_size   : 46.9189 Mib
>>>>   total_heap_size  : 66.1562 Mib
>>>>   Space 0
>>>> name   : new_space
>>>> size   : 8 Mib
>>>> used_size  : 2.47314 Mib
>>>> available_size : 5.39404 Mib
>>>>   Space 1
>>>> name   : old_space
>>>> size   : 39.5625 Mib

[v8-users] Re: unable to locate module needed for external types:.dwo file not exist

2017-05-09 Thread Andre Cunha
I don't know of a better solution, but the one I use is to create, in the 
directory from which I'm running lldb (or gdb), a symlink to the "obj" 
directory inside V8's directory structure. On my computer, the command 
looks like this:

$ ln -s /home/andre/Develop/v8/v8/out.gn/x64.debug/obj

Now lldb/gdb should find the dwo files.

On Tuesday, May 9, 2017 at 1:27:21 AM UTC-3, Early wrote:
>
> Hi, I use lldb to debug v8. But when I use lldb's `print` command to output 
> some variables, I always get errors like the ones below.
> What's the reason for that?
> (lldb) n
> Process 27213 stopped
> * thread #1, name = 'unittests', stop reason = step over
> frame #0: 0x55fec318 
> unittests`v8::internal::SourcePositionTableIterator::is_statement(this=0x7fffd5b8)
>  
> const at source-position-table.h:88
>85 }
>86 bool is_statement() const {
>87   DCHECK(!done());
> -> 88   return current_.is_statement;
>89 }
>90 bool done() const { return index_ == kDone; }
>91   
> (lldb) fr v
> (const v8::internal::SourcePositionTableIterator *) this = 
> 0x7fffd5b8
> (lldb) p current_
> warning: (x86_64) /home/zhujianchen/work/v8/out.gn/x64.debug/unittests 
> 0x1386fcf6: DW_AT_specification(0x0005bb61) has no decl
>
> warning: (x86_64) /home/zhujianchen/work/v8/
> out.gn/x64.debug/obj/v8_base/interpreter-irregexp.dwo 0x000b: unable 
> to locate module needed for external types: 
> obj/v8_base/interpreter-irregexp.dwo
> error: 'obj/v8_base/interpreter-irregexp.dwo' does not exist
> Debugging will be degraded due to missing types. Rebuilding your project 
> will regenerate the needed module files.
> warning: (x86_64) /home/zhujianchen/work/v8/
> out.gn/x64.debug/obj/v8_base/regexp-macro-assembler-irregexp.dwo 
> 0x000b: unable to locate module needed for external types: 
> obj/v8_base/regexp-macro-assembler-irregexp.dwo
> error: 'obj/v8_base/regexp-macro-assembler-irregexp.dwo' does not exist
> Debugging will be degraded due to missing types. Rebuilding your project 
> will regenerate the needed module files.
> warning: (x86_64) /home/zhujianchen/work/v8/
> out.gn/x64.debug/obj/third_party/icu/icuuc/cwchar.dwo 0x000b: unable 
> to locate module needed for external types: 
> obj/third_party/icu/icuuc/cwchar.dwo
> error: 'obj/third_party/icu/icuuc/cwchar.dwo' does not exist
> Debugging will be degraded due to missing types. Rebuilding your project 
> will regenerate the needed module files.
> warning: (x86_64) /home/zhujianchen/work/v8/
> out.gn/x64.debug/obj/third_party/icu/icuuc/ucnv_ct.dwo 0x000b: unable 
> to locate module needed for external types: 
> obj/third_party/icu/icuuc/ucnv_ct.dwo
> error: 'obj/third_party/icu/icuuc/ucnv_ct.dwo' does not exist
> Debugging will be degraded due to missing types. Rebuilding your project 
> will regenerate the needed module files.
> warning: (x86_64) /home/zhujianchen/work/v8/
> out.gn/x64.debug/obj/third_party/icu/icuuc/ucnv_lmb.dwo 0x000b: 
> unable to locate module needed for external types: 
> obj/third_party/icu/icuuc/ucnv_lmb.dwo
> error: 'obj/third_party/icu/icuuc/ucnv_lmb.dwo' does not exist
> Debugging will be degraded due to missing types. Rebuilding your project 
> will regenerate the needed module files.
> warning: (x86_64) /home/zhujianchen/work/v8/
> out.gn/x64.debug/obj/third_party/icu/icuuc/ucnv_u7.dwo 0x000b: unable 
> to locate module needed for external types: 
> obj/third_party/icu/icuuc/ucnv_u7.dwo
> error: 'obj/third_party/icu/icuuc/ucnv_u7.dwo' does not exist
> Debugging will be degraded due to missing types. Rebuilding your project 
> will regenerate the needed module files.
> warning: (x86_64) /home/zhujianchen/work/v8/
> out.gn/x64.debug/obj/third_party/icu/icuuc/ucnvhz.dwo 0x000b: unable 
> to locate module needed for external types: 
> obj/third_party/icu/icuuc/ucnvhz.dwo
> error: 'obj/third_party/icu/icuuc/ucnvhz.dwo' does not exist
> Debugging will be degraded due to missing types. Rebuilding your project 
> will regenerate the needed module files.
> warning: (x86_64) /home/zhujianchen/work/v8/
> out.gn/x64.debug/obj/third_party/icu/icuuc/ucnvisci.dwo 0x000b: 
> unable to locate module needed for external types: 
> obj/third_party/icu/icuuc/ucnvisci.dwo
> error: 'obj/third_party/icu/icuuc/ucnvisci.dwo' does not exist
> Debugging will be degraded due to missing types. Rebuilding your project 
> will regenerate the needed module files.
> warning: (x86_64) /home/zhujianchen/work/v8/
> out.gn/x64.debug/obj/third_party/icu/icuuc/ucnvscsu.dwo 0x000b: 
> unable to locate module needed for external types: 
> obj/third_party/icu/icuuc/ucnvscsu.dwo
> error: 'obj/third_party/icu/icuuc/ucnvscsu.dwo' does not exist
> Debugging will be degraded due to missing types. Rebuilding your project 
> will regenerate the needed module files.
> warning: (x86_64) /home/zhujianchen/work/v8/
> out.gn/x64.debug/obj/third_party/icu/icuuc/wintz.dwo 0x000b: unable 
> to locate 

Re: [v8-users] Cryptic out-of-memory error

2017-05-08 Thread Andre Cunha
@Michael Lippautz, I'll try adding a breakpoint if AllocateChunk returns 
NULL; hopefully, I'll get more information about the problem.

@Jakob Kummerow, yes, I'm calling Isolate::Dispose() in every isolate after 
using it. I'll also observe the VIRT column and see if it shows any 
abnormality.

Thank you!

On Monday, May 8, 2017 at 11:07:44 AM UTC-3, Jakob Kummerow wrote:
>
> My guess would be an address space leak (should show up in the "VIRT" 
> column of "top" on Linux). Are you calling "isolate->Dispose()" on any 
> isolate you're done with?
>
> On Mon, May 8, 2017 at 4:01 PM, Michael Lippautz <mlip...@chromium.org 
> > wrote:
>
>> V8 usually fails there if it cannot allocate a 512KiB page from the 
>> operating system.
>>
>> You could try hooking in AllocateChunk [1] and see why it is returning 
>> NULL and trace back through the underlying calls. 
>>
>> Best, Michael
>>
>> [1]: 
>> https://cs.chromium.org/chromium/src/v8/src/heap/spaces.cc?q=AllocateChunk=package:chromium=739
>>
>> On Mon, May 8, 2017 at 3:27 PM Andre Cunha <andre.l...@gmail.com 
>> > wrote:
>>
>>> Hello,
>>>
>>> I have embedded v8 into a project for the company I work for, and during 
>>> some stress tests, I've encountered a weird out-of-memory error. After 
>>> considerable investigation, I still have no idea of what might be going on, 
>>> so I'm reaching out to you in hope of some insight.
>>>
>>> So here is a summary of the scenario: in each test iteration, I create 
>>> an Isolate, run some short JS code fragments, and then destroy the isolate. 
>>> After the execution of each code fragment, I perform some variable 
>>> manipulations from my C++ code using V8's API, prior to running the next 
>>> fragment. I repeat thousands of such iterations over the same input (it's 
>>> valid), and I expect no memory leaks and no crashes. However, after about 3 
>>> hours, V8 crashes with an out-of-memory error of no apparent reason.
>>>
>>> I have run the code though valgrind and using address sanitizing, and no 
>>> memory leaks were detected. Additionally, I monitor memory consumption 
>>> throughout the test; the program's memory usage is stable, without any 
>>> peak, and when V8 crashes the system has a lot of available memory (more 
>>> than 5 Gib). I have used V8's API to get heap usage statistics after each 
>>> successful iteration; the values are always the same, and are shown below 
>>> (they are included in an attached file, typical_memory.txt):
>>>
>>> ScriptEngine::Run: finished running at 2017-05-05T13:20:34
>>>   used_heap_size   : 46.9189 Mib
>>>   total_heap_size  : 66.1562 Mib
>>>   Space 0
>>> name   : new_space
>>> size   : 8 Mib
>>> used_size  : 2.47314 Mib
>>> available_size : 5.39404 Mib
>>>   Space 1
>>> name   : old_space
>>> size   : 39.5625 Mib
>>> used_size  : 31.6393 Mib
>>> available_size : 5.51526 Mib
>>>   Space 2
>>> name   : code_space
>>> size   : 10.4375 Mib
>>> used_size  : 6.16919 Mib
>>> available_size : 0 B
>>>   Space 3
>>> name   : map_space
>>> size   : 8.15625 Mib
>>> used_size  : 6.63733 Mib
>>> available_size : 80 B
>>>   Space 4
>>> name   : large_object_space
>>> size   : 0 B
>>> used_size  : 0 B
>>> available_size : 11.1015 Gib
>>>
>>> When V8 crashes, it prints a heap summary, which I'm sending attached 
>>> (file heap_after_error.txt). I also save a core dump. Sometimes, the 
>>> system crashes during the creation of an Isolate; sometimes, during the 
>>> creation of a Context; typically, it crashes during snapshot 
>>> deserialization. However, the top of the stack is always the same, and it's 
>>> reproduced below (also included attached, file stacktrace.txt).
>>>
>>> #7  v8::internal::OS::Abort () at 
>>> ../../src/base/platform/platform-posix.cc:230
>>> #8  0x7ff15a2f922f in v8::Utils::ReportOOMFailure 
>>> (location=0x7ff15b20f62e "Committing semi space failed.", 
>>> is_heap_oom=false) at ../../src/api.cc:381
#9  0x7ff15a2f918e in v8::internal::V8::FatalProcessOutOfMemory

[v8-users] Cryptic out-of-memory error

2017-05-08 Thread Andre Cunha
Hello,

I have embedded v8 into a project for the company I work for, and during 
some stress tests, I've encountered a weird out-of-memory error. After 
considerable investigation, I still have no idea of what might be going on, 
so I'm reaching out to you in hope of some insight.

So here is a summary of the scenario: in each test iteration, I create an 
Isolate, run some short JS code fragments, and then destroy the isolate. 
After the execution of each code fragment, I perform some variable 
manipulations from my C++ code using V8's API, prior to running the next 
fragment. I repeat thousands of such iterations over the same input (it's 
valid), and I expect no memory leaks and no crashes. However, after about 3 
hours, V8 crashes with an out-of-memory error for no apparent reason.

I have run the code through Valgrind and with AddressSanitizer, and no 
memory leaks were detected. Additionally, I monitor memory consumption 
throughout the test; the program's memory usage is stable, without any 
peak, and when V8 crashes the system has a lot of available memory (more 
than 5 Gib). I have used V8's API to get heap usage statistics after each 
successful iteration; the values are always the same, and are shown below 
(they are included in an attached file, typical_memory.txt):

ScriptEngine::Run: finished running at 2017-05-05T13:20:34
  used_heap_size   : 46.9189 Mib
  total_heap_size  : 66.1562 Mib
  Space 0
name   : new_space
size   : 8 Mib
used_size  : 2.47314 Mib
available_size : 5.39404 Mib
  Space 1
name   : old_space
size   : 39.5625 Mib
used_size  : 31.6393 Mib
available_size : 5.51526 Mib
  Space 2
name   : code_space
size   : 10.4375 Mib
used_size  : 6.16919 Mib
available_size : 0 B
  Space 3
name   : map_space
size   : 8.15625 Mib
used_size  : 6.63733 Mib
available_size : 80 B
  Space 4
name   : large_object_space
size   : 0 B
used_size  : 0 B
available_size : 11.1015 Gib
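
(These numbers come from V8's heap statistics API; the snippet below is only a 
sketch of the relevant calls, not the exact code that produced the report 
above:)

#include <cstdio>
#include "v8.h"

// Sketch: dump overall heap usage plus per-space numbers for an isolate,
// using v8::HeapStatistics and v8::HeapSpaceStatistics.
void DumpHeapStatistics(v8::Isolate* isolate) {
  v8::HeapStatistics heap;
  isolate->GetHeapStatistics(&heap);
  std::printf("used_heap_size : %zu\n", heap.used_heap_size());
  std::printf("total_heap_size: %zu\n", heap.total_heap_size());

  for (size_t i = 0; i < isolate->NumberOfHeapSpaces(); ++i) {
    v8::HeapSpaceStatistics space;
    isolate->GetHeapSpaceStatistics(&space, i);
    std::printf("  %s: size=%zu used=%zu available=%zu\n",
                space.space_name(), space.space_size(),
                space.space_used_size(), space.space_available_size());
  }
}

It has to be called on the thread that currently owns the isolate (e.g. right 
after each script run).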

When V8 crashes, it prints a heap summary, which I'm sending attached (file 
heap_after_error.txt). I also save a core dump. Sometimes, the system 
crashes during the creation of an Isolate; sometimes, during the creation 
of a Context; typically, it crashes during snapshot deserialization. 
However, the top of the stack is always the same, and it's reproduced below 
(also included attached, file stacktrace.txt).

#7  v8::internal::OS::Abort () at 
../../src/base/platform/platform-posix.cc:230
#8  0x7ff15a2f922f in v8::Utils::ReportOOMFailure 
(location=0x7ff15b20f62e "Committing semi space failed.", 
is_heap_oom=false) at ../../src/api.cc:381
#9  0x7ff15a2f918e in v8::internal::V8::FatalProcessOutOfMemory 
(location=0x7ff15b20f62e "Committing semi space failed.", 
is_heap_oom=false) at ../../src/api.cc:352
#10 0x7ff15aa3fefc in v8::internal::Heap::EnsureFromSpaceIsCommitted 
(this=0x7ff12c0bdde0) at ../../src/heap/heap.cc:1234
#11 0x7ff15aa3ed34 in v8::internal::Heap::PerformGarbageCollection 
(this=0x7ff12c0bdde0, collector=v8::internal::MARK_COMPACTOR,
gc_callback_flags=v8::kNoGCCallbackFlags) at ../../src/heap/heap.cc:1308
#12 0x7ff15aa3e2ab in v8::internal::Heap::CollectGarbage 
(this=0x7ff12c0bdde0, collector=v8::internal::MARK_COMPACTOR,
gc_reason=v8::internal::GarbageCollectionReason::kDeserializer, 
collector_reason=0x7ff15b20f07a "GC in old space requested",
gc_callback_flags=v8::kNoGCCallbackFlags) at ../../src/heap/heap.cc:1002
#13 0x7ff15a33cdee in v8::internal::Heap::CollectGarbage 
(this=0x7ff12c0bdde0, space=v8::internal::OLD_SPACE,
gc_reason=v8::internal::GarbageCollectionReason::kDeserializer, 
callbackFlags=v8::kNoGCCallbackFlags) at ../../src/heap/heap-inl.h:681
#14 0x7ff15aa3d069 in v8::internal::Heap::CollectAllGarbage 
(this=0x7ff12c0bdde0, flags=2,
gc_reason=v8::internal::GarbageCollectionReason::kDeserializer, 
gc_callback_flags=v8::kNoGCCallbackFlags) at ../../src/heap/heap.cc:848
#15 0x7ff15aa3fe84 in v8::internal::Heap::ReserveSpace 
(this=0x7ff12c0bdde0, reservations=0x7ff148fe6078, maps=0x7ff148fe60f8) at 
../../src/heap/heap.cc:1215

In the heap summary that gets printed, I have noted some apparent 
discrepancies with the typical data I get from the API (shown above): for 
example, the summary says the size of the old space is 4067328 bytes (= 
3.88 Mib), not the typical 39.56 Mib I get from the API.

I have dived into V8 garbage collection, but still couldn't make sense of 
the error message ("Committing semi space failed"). So, I'd like to know 
under which circumstances this error can happen, and how it's possible that 
it only happens occasionally, given that each test iteration is identical 
to the others and there are no detectable memory leaks.

If you need more 

[v8-users] Linking issues when sysroot is used

2017-01-30 Thread Andre Cunha
Hello,

I had some issues while integrating v8 with the project I'm working on, so 
I have some questions and (maybe) a suggestion, and I'd like to hear what 
you think about that.

I was experiencing some cryptic std::bad_alloc errors. While investigating 
them, I realized that v8, by default, uses a sysroot image (Debian Wheezy 
for v8 5.4) to build itself (this fact is not mentioned anywhere in the 
Wiki). The use of a sysroot causes libv8.so to 
have weak versions of the symbols from the C++ standard library. However, 
if you link against libv8.so like this:

$ clang++ -L/path/to/v8/out.gn/x64.debug -lv8 -o test test.cc

These weak symbols override some symbols from the system's libstdc++.so, 
since the linker won't search for them in libstdc++.so after having found 
them in libv8.so. This can be seen by using the linker's trace-symbol 
option (I'll use std::__detail::_Prime_rehash_policy::_M_need_rehash, used 
by unordered_map, as an example, since this function was giving me trouble).

$ # With v8
$ clang++ -std=c++11 -L/home/andre/Develop/v8/v8/out.gn/x64.debug -lv8 
-Wl,--trace-symbol=_ZNKSt8__detail20_Prime_rehash_policy14_M_need_rehashEmmm -o 
test test.cc
/tmp/andre/test-56ed0e.o: reference to 
_ZNKSt8__detail20_Prime_rehash_policy14_M_need_rehashEmmm
/home/andre/Develop/v8/v8/out/x64.debug/obj.target/src/libv8.so: 
definition of _ZNKSt8__detail20_Prime_rehash_policy14_M_need_rehashEmmm

$ # Without v8
$ 
clang++ -std=c++11 
-Wl,--trace-symbol=_ZNKSt8__detail20_Prime_rehash_policy14_M_need_rehashEmmm -o 
test test.cc
/tmp/andre/test-7b1f80.o: reference to 
_ZNKSt8__detail20_Prime_rehash_policy14_M_need_rehashEmmm
/usr/bin/../lib/gcc/x86_64-redhat-linux/6.3.1/libstdc++.so: definition of 
_ZNKSt8__detail20_Prime_rehash_policy14_M_need_rehashEmmm

What ended up happening in my project was that my cc files were compiled 
against the system's standard library, but v8 wasn't. That led to the 
following scenario:

   1. My code calls function X from the standard library.
   2. Control flows to the version of function X present in the system's 
   standard library.
   3. Function X calls another function Y from the standard library.
   4. Control moves to the version of function Y present in libv8.so, due 
   to the linker's symbol resolution.
   5. Error occurs due to different versions of the standard library.

I found that I could pass use_sysroot=false to gn, and that solved the 
problem, since it produces a libv8.so that contains only undefined 
references to (instead of weak versions of) symbols from the standard 
library. (use_sysroot=false doesn't seem to work with gyp; I know gyp 
support is deprecated, but I'm mentioning it anyway).
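
Another quick way to check which kind of libv8.so a given build produced 
(illustrative command; adjust the path to your output directory):

$ nm -C -D out.gn/x64.debug/libv8.so | grep _M_need_rehash

With the sysroot build, the symbol shows up as a weak definition inside 
libv8.so; with use_sysroot=false it should only appear as undefined ("U"), to 
be resolved from the system's libstdc++.so at link time.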

So, my question for you is: is this behavior expected? If I'm integrating 
v8 with a project of mine, should I always compile it with 
use_sysroot=false? If that is the case, shouldn't this fact be mentioned in 
the Wiki?

Thank you, and apologies for the long email.

Andre



[v8-users] ICU options in GN

2016-10-19 Thread Andre Cunha
Hello,

The Building with GYP Wiki page describes three ways in which I can use ICU 
while building v8 with 
GYP: I can disable i18n support altogether (i18nsupport=off), use a custom 
version of ICU (icu_gyp_path=...), or use the system's ICU 
(use_system_icu=1). However, when I build with GN, the only option I can 
find is disabling i18n support (v8_enable_i18n_support=false). I tried 
passing use_system_icu=1 to GN, but I got an error.

So, is it possible to use a custom or the system's ICU with GN?

Thank you,
Andre



[v8-users] Re: How to link when building my own Hello-World sample, embedding V8, with VS2015

2016-09-29 Thread Andre Cunha
You need to statically link against libplatform and libbase. I never 
compiled v8 on Windows, but on Linux the necessary files (with .o 
extension) are generated inside out.gn/x64.release/obj/v8_libplatform 
and out.gn/x64.release/obj/v8_libbase.
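
On Linux, a rough sketch of the resulting link line for a hello-world style 
program (paths are illustrative; with a component build you also link against 
libv8.so via -lv8):

$ clang++ -std=c++11 -I/path/to/v8/include hello-world.cc \
    -L/path/to/v8/out.gn/x64.release -lv8 \
    /path/to/v8/out.gn/x64.release/obj/v8_libplatform/*.o \
    /path/to/v8/out.gn/x64.release/obj/v8_libbase/*.o \
    -lpthread -o hello-world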

On Thursday, September 29, 2016 at 9:34:57 AM UTC-3, DaManuell wrote:
>
> I successfully built "v8" (tags/5.3.332.45) using the instructions on the 
> page Building with GN.
> The arguments to gn were:
> is_component_build = true
> is_debug = false
> target_cpu="x86"
>
> After ninja -C out.gn/x86Release the files v8_hello_world.exe and 
> v8_shell.exe ran just fine.
>
> Then, I wanted to build my own "Hello World" program using a fresh new 
> VS2015 project, but merely copy/pasting hello-world.cc.
>
> I had compiler errors, fixed by adding some directories to the project 
> property "C++/General/Additional include directories"
>
> I had linker errors, fixed by linking against v8.dll.lib
>
> But I still have linker errors, about the unresolved external  "class 
> v8::Platform * __cdecl v8::platform::CreateDefaultPlatform(int)"
>
> I think I need to link against some "platform" static library file. Where 
> is it? How do I build it?
>



Re: [v8-users] Re: Need to re-run gn for build options to take effect

2016-09-22 Thread Andre Cunha
Excellent! Thank you.

Andre
On Thursday, September 22, 2016 at 10:35:04 AM UTC-3, Michael Achenbach 
wrote:
>
> This is now fixed by http://crbug.com/648583
>
> Now gn should automatically rebuild after using v8gen. No need to pass -p 
> (unless you want early checking) or to call gn manually.
>
> On Tuesday, September 20, 2016 at 3:42:28 PM UTC+2, Andre Cunha wrote:
>>
>> I tried the following sequence of commands:
>>
>> $ tools/dev/v8gen.py x64.release -- is_component_build=true 
>> v8_enable_i18n_support=false
>> $ gn gen out.gn/x64.release
>> $ ninja -C out.gn/x64.release
>>
>> And it works. I also tried:
>>
>> $ tools/dev/v8gen.py -p x64.release -- is_component_build=true 
>> v8_enable_i18n_support=false
>> $ ninja -C out.gn/x64.release
>>
>> And it works too.
>>
>> Thank you very much.
>> Andre
>>
>> On Tuesday, September 20, 2016 at 10:29:19 AM UTC-3, Michael Achenbach 
>> wrote:
>>>
>>> I actually think this is a bug in our work flow. I filed 
>>> http://crbug.com/648583
>>>
>>> You can also pass -p to v8gen.py to let it perform an additional gn run 
>>> over the extra cmd line parameters automatically.
>>>
>>> On Tuesday, September 20, 2016 at 9:16:56 AM UTC+2, Jochen Eisinger 
>>> wrote:
>>>>
>>>> v8gen creates the input for gn. gn creates the input for ninja. ninja 
>>>> builds the binaries.
>>>>
>>>> You can use gn gen out.gn/x64.release instead of gn args if you don't 
>>>> intend to change the args anyways.
>>>>
>>>> best
>>>> -jochen
>>>>
>>>> On Mon, Sep 19, 2016 at 9:19 PM Andre Cunha <andre.l...@gmail.com> 
>>>> wrote:
>>>>
>>>>> PS: the same behavior applies to i18n support in debug mode. When I 
>>>>> run v8gen.py with "v8_enable_i18n_support=false", the build process still 
>>>>> generates libraries that depend on ICU. I need to run gn and ninja again 
>>>>> to 
>>>>> get rid of the dependency.
>>>>>
>>>>> On Monday, September 19, 2016 at 3:55:08 PM UTC-3, Andre Cunha wrote:
>>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> I'm building v8 5.4 using the following commands (Ubuntu 14.04, x64):
>>>>>>
>>>>>> $ fetch v8
>>>>>> $ cd v8
>>>>>> $ git checkout remotes/branch-heads/5.4
>>>>>> $ gclient sync
>>>>>> $ tools/dev/v8gen.py x64.release -- is_component_build=true 
>>>>>> v8_enable_i18n_support=false
>>>>>> $ ninja -C out.gn/x64.release
>>>>>>
>>>>>> However, when I do this, libv8.so is *not* generated in 
>>>>>> out.gn/x64.release. In order for it to be generated, I need to do:
>>>>>>
>>>>>> $ gn args out.gn/x64.release
>>>>>> # Close vi (no need to save).
>>>>>> $ ninja -C out.gn/x64.release
>>>>>>
>>>>>> Then, 851 files are (re)compiled and libv8.so is generated.
>>>>>>
>>>>>> Why do I need to re-run gn for the options to take effect? Am I 
>>>>>> missing some step in the building process?
>>>>>>
>>>>>> Thank you!
>>>>>> Andre
>>>>>>
>>>>> -- 
>>>>> -- 
>>>>> v8-users mailing list
>>>>> v8-u...@googlegroups.com
>>>>> http://groups.google.com/group/v8-users
>>>>> --- 
>>>>> You received this message because you are subscribed to the Google 
>>>>> Groups "v8-users" group.
>>>>> To unsubscribe from this group and stop receiving emails from it, send 
>>>>> an email to v8-users+u...@googlegroups.com.
>>>>> For more options, visit https://groups.google.com/d/optout.
>>>>>
>>>>



Re: [v8-users] Undefined reference to `v8::platform::CreateDefaultPlatform(int)`

2016-09-21 Thread Andre Cunha
Okay. I'll try to link statically, then.

Thank you!
Andre

On Wednesday, September 21, 2016 at 5:52:12 AM UTC-3, Jochen Eisinger wrote:
>
> I filed https://bugs.chromium.org/p/v8/issues/detail?id=5412 as feature 
> request
>
> On Wed, Sep 21, 2016 at 10:50 AM Jochen Eisinger <joc...@chromium.org 
> > wrote:
>
>> libplatform currently does not support dynamic linking.
>>
>> On Tue, Sep 20, 2016 at 7:43 PM Andre Cunha <andre.l...@gmail.com 
>> > wrote:
>>
>>> Hello,
>>>
>>> I'm trying to dynamically load libplatform in an application I'm 
>>> writing. I searched through the available gn options, but I couldn't find 
>>> an option to build libplatform as a shared object, so I did it by hand:
>>>
>>> $ gcc -shared -o libv8_libplatform.so obj/v8_lib{platform,base}/*.o
>>>
>>> I use this both in debug and in release mode, but when I link my 
>>> application against libplatform in release mode, I get "undefined reference 
>>> to `v8::platform::CreateDefaultPlatform(int)`". This doesn't happen in 
>>> debug mode. After some investigation, I realized that the aforementioned 
>>> function is exported in debug mode, but is hidden in release mode:
>>>
>>> # Debug
>>> $ nm -C libv8_libplatform.so | grep CreateDefaultPlatform
> 0001ad70 T v8::platform::CreateDefaultPlatform(int)
>>>
>>> # Release
>>> $ nm -C libv8_libplatform.so | grep CreateDefaultPlatform
> 3b20 t v8::platform::CreateDefaultPlatform(int)
>>>
>>> I inspected the ninja files, and realized this happens because object 
>>> files are compiled with -fvisibility=default in debug mode, and with 
>>> -fvisibility=hidden in release mode.
>>>
>>> So, my question is: is this function not supposed to be used in release 
>>> applications? Should I, then, link statically against the many individual 
>>> .o files needed (as the hello-world.cc example does)? If so, what's the 
>>> best way to do this?
>>>
>>> Thanks in advance,
>>> Andre
>>>
>>> -- 
>>> -- 
>>> v8-users mailing list
>>> v8-u...@googlegroups.com 
>>> http://groups.google.com/group/v8-users
>>> --- 
>>> You received this message because you are subscribed to the Google 
>>> Groups "v8-users" group.
>>> To unsubscribe from this group and stop receiving emails from it, send 
>>> an email to v8-users+u...@googlegroups.com .
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>



[v8-users] Undefined reference to `v8::platform::CreateDefaultPlatform(int)`

2016-09-20 Thread Andre Cunha
Hello,

I'm trying to dynamically load libplatform in an application I'm writing. I 
searched through the available gn options, but I couldn't find an option to 
build libplatform as a shared object, so I did it by hand:

$ gcc -shared -o libv8_libplatform.so obj/v8_lib{platform,base}/*.o

I use this both in debug and in release mode, but when I link my 
application against libplatform in release mode, I get "undefined reference 
to `v8::platform::CreateDefaultPlatform(int)`". This doesn't happen in 
debug mode. After some investigation, I realized that the aforementioned 
function is exported in debug mode, but is hidden in release mode:

# Debug
$ nm -C libv8_libplatform.so | grep CreateDefaultPlatform
0001ad70 T v8::platform::CreateDefaultPlatform(int)

# Release
$ nm -C libv8_libplatform.so | grep CreateDefaultPlatform
3b20 t v8::platform::CreateDefaultPlatform(int)

I inspected the ninja files, and realized this happens because object files 
are compiled with -fvisibility=default in debug mode, and with 
-fvisibility=hidden in release mode.
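
For illustration, this is all that -fvisibility controls (the snippet below is 
not V8 code):

// Compiled with -fvisibility=hidden, any symbol that is not explicitly
// exported gets hidden visibility and ends up as a local symbol ("t") in the
// resulting shared object; explicitly exported symbols stay global ("T").
__attribute__((visibility("default")))
void exported_even_when_hidden() {}  // "T" in both debug and release builds

void hidden_unless_exported() {}     // "T" with =default, "t" with =hidden

So the release behavior presumably just means CreateDefaultPlatform(int) is 
not explicitly annotated for export in that configuration.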

So, my question is: is this function not supposed to be used in release 
applications? Should I, then, link statically against the many individual 
.o files needed (as the hello-world.cc example does)? If so, what's the 
best way to do this?

Thanks in advance,
Andre



Re: [v8-users] Re: Need to re-run gn for build options to take effect

2016-09-20 Thread Andre Cunha
I tried the following sequence of commands:

$ tools/dev/v8gen.py x64.release -- is_component_build=true 
v8_enable_i18n_support=false
$ gn gen out.gn/x64.release
$ ninja -C out.gn/x64.release

And it works. I also tried:

$ tools/dev/v8gen.py -p x64.release -- is_component_build=true 
v8_enable_i18n_support=false
$ ninja -C out.gn/x64.release

And it works too.

Thank you very much.
Andre

On Tuesday, September 20, 2016 at 10:29:19 AM UTC-3, Michael Achenbach 
wrote:
>
> I actually think this is a bug in our work flow. I filed 
> http://crbug.com/648583
>
> You can also pass -p to v8gen.py to let it perform an additional gn run 
> over the extra cmd line parameters automatically.
>
> On Tuesday, September 20, 2016 at 9:16:56 AM UTC+2, Jochen Eisinger wrote:
>>
>> v8gen creates the input for gn. gn creates the input for ninja. ninja 
>> builds the binaries.
>>
>> You can use gn gen out.gn/x64.release instead of gn args if you don't 
>> intend to change the args anyways.
>>
>> best
>> -jochen
>>
>> On Mon, Sep 19, 2016 at 9:19 PM Andre Cunha <andre.l...@gmail.com> wrote:
>>
>>> PS: the same behavior applies to i18n support in debug mode. When I run 
>>> v8gen.py with "v8_enable_i18n_support=false", the build process still 
>>> generates libraries that depend on ICU. I need to run gn and ninja again to 
>>> get rid of the dependency.
>>>
>>> On Monday, September 19, 2016 at 3:55:08 PM UTC-3, Andre Cunha wrote:
>>>>
>>>> Hello,
>>>>
>>>> I'm building v8 5.4 using the following commands (Ubuntu 14.04, x64):
>>>>
>>>> $ fetch v8
>>>> $ cd v8
>>>> $ git checkout remotes/branch-heads/5.4
>>>> $ gclient sync
>>>> $ tools/dev/v8gen.py x64.release -- is_component_build=true 
>>>> v8_enable_i18n_support=false
>>>> $ ninja -C out.gn/x64.release
>>>>
>>>> However, when I do this, libv8.so is *not* generated in 
>>>> out.gn/x64.release. In order for it to be generated, I need to do:
>>>>
>>>> $ gn args out.gn/x64.release
>>>> # Close vi (no need to save).
>>>> $ ninja -C out.gn/x64.release
>>>>
>>>> Then, 851 files are (re)compiled and libv8.so is generated.
>>>>
>>>> Why do I need to re-run gn for the options to take effect? Am I missing 
>>>> some step in the building process?
>>>>
>>>> Thank you!
>>>> Andre
>>>>
>>> -- 
>>> -- 
>>> v8-users mailing list
>>> v8-u...@googlegroups.com
>>> http://groups.google.com/group/v8-users
>>> --- 
>>> You received this message because you are subscribed to the Google 
>>> Groups "v8-users" group.
>>> To unsubscribe from this group and stop receiving emails from it, send 
>>> an email to v8-users+u...@googlegroups.com.
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>



[v8-users] Re: Need to re-run gn for build options to take effect

2016-09-19 Thread Andre Cunha
PS: the same behavior applies to i18n support in debug mode. When I run 
v8gen.py with "v8_enable_i18n_support=false", the build process still 
generates libraries that depend on ICU. I need to run gn and ninja again to 
get rid of the dependency.

On Monday, September 19, 2016 at 3:55:08 PM UTC-3, Andre Cunha wrote:
>
> Hello,
>
> I'm building v8 5.4 using the following commands (Ubuntu 14.04, x64):
>
> $ fetch v8
> $ cd v8
> $ git checkout remotes/branch-heads/5.4
> $ gclient sync
> $ tools/dev/v8gen.py x64.release -- is_component_build=true 
> v8_enable_i18n_support=false
> $ ninja -C out.gn/x64.release
>
> However, when I do this, libv8.so is *not* generated in out.gn/x64.release. 
> In order for it to be generated, I need to do:
>
> $ gn args out.gn/x64.release
> # Close vi (no need to save).
> $ ninja -C out.gn/x64.release
>
> Then, 851 files are (re)compiled and libv8.so is generated.
>
> Why do I need to re-run gn for the options to take effect? Am I missing 
> some step in the building process?
>
> Thank you!
> Andre
>



[v8-users] Need to re-run gn for build options to take effect

2016-09-19 Thread Andre Cunha
Hello,

I'm building v8 5.4 using the following commands (Ubuntu 14.04, x64):

$ fetch v8
$ cd v8
$ git checkout remotes/branch-heads/5.4
$ gclient sync
$ tools/dev/v8gen.py x64.release -- is_component_build=true 
v8_enable_i18n_support=false
$ ninja -C out.gn/x64.release

However, when I do this, libv8.so is *not* generated in out.gn/x64.release. 
In order for it to be generated, I need to do:

$ gn args out.gn/x64.release
# Close vi (no need to save).
$ ninja -C out.gn/x64.release

Then, 851 files are (re)compiled and libv8.so is generated.

Why do I need to re-run gn for the options to take effect? Am I missing 
some step in the building process?

Thank you!
Andre



Re: [v8-users] Building v8

2016-09-16 Thread Andre Cunha
Hi,

Is there an option to build a shared version of libplatform? I'm upgrading 
a project I'm working on from v8 5.3 (with static linking) to 5.4 (with 
dynamic linking), and I use libplatform. I used to link against 
libv8_libplatform.a, but now I cannot find an option in gn that 
automatically generates libv8_libplatform.so. As a workaround, I generate 
the shared library by hand:

$ cd out.gn/x64.debug
$ gcc -shared -o libv8_libplatform.so obj/v8_libplatform/*.o

Is there a better way to do this?

Thank you very much.
Andre

On Friday, September 16, 2016 at 2:11:01 AM UTC-3, Jochen Eisinger wrote:
>
> by default, we build thin archives which are suitable for static linking 
> against other apps, and yes, you will need the .o files around for that.
>
> If you'd rather have shared libraries (.so files), set the gn 
> arg is_component_build = true
>
> br
> -jochen
>
> On Thu, Sep 15, 2016 at 5:07 PM Travis Sharp  > wrote:
>
>> I've followed the current instructions for building v8 with GN on 
>> https://github.com/v8/v8/wiki/Building%20with%20GN but after further 
>> inspection it looks as if the build only links the .o output instead of 
>> creating libraries for use in other applications.
>>
>> Am I missing a step or is this intended? I am trying to use the output 
>> library in another application.
>>
>> V8 Build-Head 5.4, Linux x64
>>
>> -- 
>> -- 
>> v8-users mailing list
>> v8-u...@googlegroups.com 
>> http://groups.google.com/group/v8-users
>> --- 
>> You received this message because you are subscribed to the Google Groups 
>> "v8-users" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to v8-users+u...@googlegroups.com .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
