The 3.3 GB binary is caused by `is_debug = true`. That configuration is very useful for debugging, but it is both slow and large. For release-mode binaries, set `is_debug = false`.
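Concretely, taking the args.gn from your message below, flipping that one flag, and dropping the two toggles discussed next, you end up with something like the following. I haven't built this exact set myself, so treat it as a sketch rather than a guaranteed-working config:

target_cpu = "x64"
target_os = "linux"
is_debug = false
is_component_build = false
v8_monolithic = true
v8_static_library = false
v8_use_external_startup_data = false
treat_warnings_as_errors = false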
Regarding `is_clang` and `use_custom_libcxx`: drop them both to get the configuration we test and support. You will then also need to compile the rest of your application with clang and libc++, which is an obstacle for some projects. Alternatively, you can continue to use g++ and libstdc++ and hope that it works, or debug/fix it when it doesn't; we'd probably even accept reasonably non-intrusive patches for upstreaming.

Remember to run `gclient sync` after checking out a different git commit (there is a command sketch at the bottom of this message). I'm not sure whether that explains your 13.8 difficulties.

On Wed, Jun 4, 2025 at 8:06 PM Jay Hayes <[email protected]> wrote:

> Hi all,
>
> We are experimenting with building v8 for embedding within our C++
> application.
>
> Following the guidance at https://v8.dev/docs/embed, it seems the doc is
> for 13.1. When I try to build newer code using the *x64.release.sample*
> config, the build fails very quickly:
>
> ../../src/base/vector.h:296:32: error: no member named
> 'make_unique_for_overwrite' in namespace 'std'
>
> I then discovered this issue,
> https://issues.chromium.org/issues/377222400, which speaks to the fact
> that the doc might be out of date.
>
> Using the suggested config from the linked issue above:
>
> target_cpu = "x64"
> target_os = "linux"
> is_debug = true
> is_component_build = false
> v8_monolithic = true
> v8_static_library = false
> v8_use_external_startup_data = false
> use_custom_libcxx = false
> is_clang = false
> treat_warnings_as_errors = false
>
> $ ninja -v -C out.gn/x64.release v8_monolith
>
> $ g++ -I. -Iinclude samples/hello-world.cc -o hello_world -fno-rtti
> -fuse-ld=lld -lv8_monolith -lv8_libbase -lv8_libplatform
> -Lout.gn/x64.release/obj/ -pthread -std=c++20 -DV8_COMPRESS_POINTERS
> -DV8_ENABLE_SANDBOX
>
> I was able to compile 13.7 (13.7.152.8) on Ubuntu 24.04 and link the hello
> world example to the v8 static libs, however I noticed the hello_world
> executable (which did run successfully) was a whopping 3.3 GB?
>
> Trying to build 13.8 (13.8.258.2) using the same config fails to build
> successfully:
>
> FAILED: mksnapshot
> "python3" "../../build/toolchain/gcc_link_wrapper.py"
> --output="./mksnapshot" -- g++ -pie -Wl,--fatal-warnings -Wl,--build-id
> -fPIC -Wl,-z,noexecstack -Wl,-z,relro -Wl,-z,now -m64 -Wl,-z,defs
> -Wl,--as-needed --sysroot=../../build/linux/debian_bullseye_amd64-sysroot
> -rdynamic -pie -Wl,--disable-new-dtags -Wl,--gc-sections -o "./mksnapshot"
> -Wl,--start-group @"./mksnapshot.rsp" -Wl,--end-group -latomic -ldl
> -lpthread -lrt -Wl,--start-group
> <<<truncated for brevity>>>
> /usr/bin/ld:
> obj/build/rust/allocator/libbuild_srust_sallocator_callocator.rlib(libbuild_srust_sallocator_callocator.build_srust_sallocator_callocator.ff715ce24295f21e-cgu.0.rcgu.o):
> in function `__rustc::__rust_alloc_error_handler':
> ./../../build/rust/allocator/lib.rs:106:(.text._RNvCsiiodXzYB6cW_7___rustc26___rust_alloc_error_handler+0x10):
> undefined reference to `rust_allocator_internal::alloc_error_handler_impl()'
> collect2: error: ld returned 1 exit status
> ninja: build stopped: subcommand failed.
>
> Currently, some of the gn config args that enable v8 (13.7) to build
> and link the example successfully are documented as deprecated:
>
> use_custom_libcxx = false
> is_clang = false
>
> I've read other posts mentioning that clang is the only compiler supported
> going forward, which furthers my belief that I'm using an incorrect build
> configuration.
>
> My question is: what is the correct gn config one should use to build the
> latest (13.8) v8 code and be able to link the hello world example
> successfully?
>
> Thanks in advance for any suggestions/guidance!
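To make the `gclient sync` remark above concrete, the sequence I had in mind looks roughly like this (the tag is the 13.8 one from your message, and it assumes a normal depot_tools-based v8 checkout; adjust names as needed):

$ cd v8
$ git checkout 13.8.258.2          # or whichever release revision you want to build
$ gclient sync                     # re-syncs the deps (sysroot, toolchain, rust pieces) for that revision
$ gn gen out.gn/x64.release        # with an args.gn like the one sketched above
$ ninja -C out.gn/x64.release v8_monolith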
