Re: Comparing compilation time of random code in C++, D, Go, Pascal and Rust
On 10/27/2016 02:43 AM, Sebastien Alaiwan wrote:

> From the article: "Surprise: C++ without optimizations is the fastest! A few other surprises: Rust also seems quite competitive here. D starts out comparatively slow."
>
> These benchmarks seem to support the idea that it's not the parsing which is slow, but the code generation phase. If code generation/optimization is the bottleneck, a "ccache-for-D" ("dcache"?) tool might be very beneficial. (However, then why do C++ standard committee members believe that replacing text-based #includes with C++ modules ("import") will speed up compilation by an order of magnitude?)

How many source files are used? If all the functions are packed into one large source file, or just a small handful, then the tests would accidentally be working around C++'s infamous #include slowdowns.
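A toy model of front-end work shows why the file count matters here (the function and all numbers below are illustrative, not taken from the benchmark): with textual #include, every translation unit re-parses the shared headers, whereas a module (or a single packed source file) pays that cost only once.

```python
def parsed_lines(n_files, file_lines, header_lines, modules=False):
    """Toy model of front-end parsing work.

    With textual #include, each of the n_files translation units
    re-parses the shared header; with modules (or one big file),
    the header text is processed only once."""
    header_cost = header_lines if modules else n_files * header_lines
    return n_files * file_lines + header_cost

# 100 files of 200 lines sharing a 50,000-line expanded header:
textual = parsed_lines(100, 200, 50_000)                # 5,020,000 lines
modular = parsed_lines(100, 200, 50_000, modules=True)  #    70,000 lines
single  = parsed_lines(1, 20_000, 50_000)               #    70,000 lines
```

Note that the single-file case costs the same as the modular case in this model, which is exactly the "accidental workaround" concern: a benchmark that packs everything into one source file never exercises the redundant header re-parsing that modules are meant to eliminate.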
Re: Comparing compilation time of random code in C++, D, Go, Pascal and Rust
On Thursday, 27 October 2016 at 06:43:15 UTC, Sebastien Alaiwan wrote:

> From the article: "Surprise: C++ without optimizations is the fastest! A few other surprises: Rust also seems quite competitive here. D starts out comparatively slow."
>
> These benchmarks seem to support the idea that it's not the parsing which is slow, but the code generation phase. If code generation/optimization is the bottleneck, a "ccache-for-D" ("dcache"?) tool might be very beneficial.

See https://johanengelen.github.io/ldc/2016/09/17/LDC-object-file-caching.html

I also have a working dcache implementation in LDC, but it still needs some polishing.

-Johan
Re: Silicon Valley D Meetup - October 27, 2016 - "D runtime infrastructure for your projects" by Ilya Yaroshenko
On Wednesday, 19 October 2016 at 22:07:04 UTC, Ali Çehreli wrote:

> We're excited to have Ilya as our guest speaker this month! From Ilya: "I will talk about the future D runtime infrastructure which is required for the D ecosystem to be desirable for business at large scale. A concept I would like to discuss can be called 'better C'."
>
> Tags: Mir, libmir/cpuid, nothrow @nogc, LDC, LLVM, GPU, Julia, Eigen, C++, workforce, async I/O, generic building blocks, HTTP2.
>
> Read more about Ilya's work on Mir: http://blog.mir.dlang.io/glas/benchmark/openblas/2016/09/23/glas-gemm-benchmark.html
>
> Logistics: We'll have some food and drink starting at 7. Ilya will join us at 7:30 by Google Hangouts. http://www.meetup.com/D-Lang-Silicon-Valley/events/234353529/ We will post the Google Hangouts link here when we create it at the time of the meetup.
>
> Ali

Will Ilya's talk be recorded? I would love to hear more about "better C". If not, is there anywhere else I can read about it?
Re: Comparing compilation time of random code in C++, D, Go, Pascal and Rust
On Wednesday, 19 October 2016 at 17:05:18 UTC, Gary Willoughby wrote:

> This was posted on twitter a while ago: Comparing compilation time of random code in C++, D, Go, Pascal and Rust http://imgur.com/a/jQUav

Very interesting, thanks for sharing!

From the article: "Surprise: C++ without optimizations is the fastest! A few other surprises: Rust also seems quite competitive here. D starts out comparatively slow."

These benchmarks seem to support the idea that it's not the parsing which is slow, but the code generation phase. If code generation/optimization is the bottleneck, a "ccache-for-D" ("dcache"?) tool might be very beneficial. (However, then why do C++ standard committee members believe that replacing text-based #includes with C++ modules ("import") will speed up compilation by an order of magnitude?)

Working simultaneously on comparably sized C++ and D projects, I believe that a "dcache" (using hashes of the AST?) might be useful: the average project build time in my company is lower for the C++ projects than for the D projects (we're using "ccache g++ -O3" and "gdc -O3").
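The ccache idea can be captured in a few lines: key the cache on a hash of the source, and on a hit skip the compiler entirely, so both parsing and codegen cost nothing. The following is only a sketch of the mechanism, with all names illustrative; a real tool like ccache also hashes the compiler flags and every transitively included file, and hashing the AST instead (as suggested above) would additionally survive whitespace and comment edits.

```python
import hashlib
import shutil
from pathlib import Path

def cached_compile(src: Path, obj: Path, cache_dir: Path, compile_fn):
    """Compile `src` to `obj`, reusing a cached object file when the
    source bytes are unchanged -- the core of what ccache does for C/C++.

    `compile_fn(src, obj)` stands in for the real compiler invocation.
    Returns True on a cache hit, False on a miss."""
    cache_dir.mkdir(parents=True, exist_ok=True)
    # Sketch only: a real cache key must also cover compiler flags and
    # all included/imported files, or it will serve stale objects.
    key = hashlib.sha256(src.read_bytes()).hexdigest()
    cached = cache_dir / (key + ".o")
    if cached.exists():
        shutil.copy(cached, obj)   # hit: parsing and codegen both skipped
        return True
    compile_fn(src, obj)           # miss: pay the full compilation cost
    shutil.copy(obj, cached)
    return False
```

On a rebuild where only a few sources changed, everything else becomes a cache hit, which is exactly where ccache makes the C++ builds above cheaper than the uncached gdc builds.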