Re: textattr library for text colors and attributes available in D
On Thursday, 8 November 2018 at 13:37:08 UTC, Shriramana Sharma wrote: https://github.com/jamadagni/textattr/ textattr is a library and command-line tool that makes it easier to add color and attributes to beautify the terminal output of your program, by translating human-readable specs into ANSI escape codes. The library is available for C, C++, Python and D. C++ and Python use the C code for internal processing, but the D code is a separate implementation so that textattr.d can be included directly in a D compilation command without requiring any external linking. Copyright: Shriramana Sharma, 2018 License: BSD-2-Clause

I was thinking of something like this the other day:

struct ColoredText(int color) { string payload; alias payload this; }

with a writeln()-like function that checks whether the args are template instances of ColoredText. Or, even more versatile:

@Color!Blue string b1, b2; @Color!Red string r1, r2; @Color!Green int i1, i2;

with a writeln()-like function that inspects the UDAs of the args.
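A minimal sketch of the first idea, assuming hypothetical names (`colored`, `cwriteln`) and a simplified mapping to standard ANSI foreground codes; this is not the actual textattr API:

```d
import std.stdio : write, writeln;

// Standard ANSI foreground color codes (hypothetical enum names).
enum Red = 31, Green = 32, Blue = 34;

struct ColoredText(int color)
{
    string payload;
    alias payload this;
}

// Convenience constructor so the color is carried in the type.
auto colored(int color)(string s) { return ColoredText!color(s); }

// A writeln()-like function that checks whether each argument is a
// template instance of ColoredText and wraps it in escape codes if so.
void cwriteln(Args...)(Args args)
{
    foreach (arg; args)
    {
        static if (is(typeof(arg) == ColoredText!c, int c))
            write("\x1b[", c, "m", arg.payload, "\x1b[0m");
        else
            write(arg);
    }
    writeln();
}

void main()
{
    cwriteln("status: ", colored!Green("OK"), " in ", 42, " ms");
}
```

The `is(typeof(arg) == ColoredText!c, int c)` pattern deduces the color parameter from the template instance, so the dispatch happens entirely at compile time.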
Re: D compilation is too slow and I am forking the compiler
On Friday, 23 November 2018 at 19:21:03 UTC, Walter Bright wrote: On 11/23/2018 5:23 AM, welkam wrote: Currently D reads all the files that are passed on the command line before starting lexing/parsing, but in principle we could start lexing/parsing after the first file is read. In fact we could start after the first file's first line is read. DMD used to do that. But it was removed because: 1. nobody understood the logic 2. it didn't seem to make a difference You can still see the vestiges in the static if (ASYNCREAD) blocks in the code. I didn't expect huge wins. This would be useful when you start your computer and the files have to be read from old spinning rust and the project has many files. Otherwise the files will be cached and memcpy is fast. I was surprised at how fast modern computers copy data from one place to another. Speaking of memcpy, here is a video you might like. It covers memcpy, assembler and a bit of compiler work. It's an easy watch for when you want to relax. Level1 Diagnostic: Fixing our Memcpy Troubles (for Looking Glass) https://www.youtube.com/watch?v=idauoNVwWYE
Re: D compilation is too slow and I am forking the compiler
On 11/23/2018 5:23 AM, welkam wrote: Currently D reads all the files that are passed on the command line before starting lexing/parsing, but in principle we could start lexing/parsing after the first file is read. In fact we could start after the first file's first line is read. DMD used to do that. But it was removed because: 1. nobody understood the logic 2. it didn't seem to make a difference You can still see the vestiges in the static if (ASYNCREAD) blocks in the code.
Re: D compilation is too slow and I am forking the compiler
On 11/23/2018 6:37 AM, welkam wrote: Your post on reddit received more comments than the D front end's inclusion into GCC. If you had titled your post differently you probably wouldn't have had such success, so from my perspective it's a net positive. Sure, there are a few people who took the wrong message, but there are more people who saw your post. It definitely shows the value of a provocative title!
Re: D compilation is too slow and I am forking the compiler
On 11/23/2018 2:12 AM, Jacob Carlborg wrote: Would it be possible to have one string table per thread and merge them to one single shared string table before continuing with the next phase? It'd probably be even slower because one would have to rewrite all the pointers into the string table.
Re: D compilation is too slow and I am forking the compiler
On Friday, 23 November 2018 at 14:32:39 UTC, Vladimir Panteleev wrote: On Friday, 23 November 2018 at 13:23:22 UTC, welkam wrote: If we run these steps in different threads on the same core with SMT we could better use the core's resources: reading files with the kernel, decoding UTF-8 with vector instructions and lexing/parsing with scalar operations, while all communication is done through the L1 and L2 cache. You might save some pages from the data cache, but by doing more work at once, the code might stop fitting in the execution-related caches (code pages, microcode, branch prediction) instead.

It's not about saving TLB pages or fitting better in cache. Compilers are considered streaming applications - they don't utilize CPU caches effectively. You can't read one character and emit machine code, then read the next character; you have to go over all the data multiple times while you modify it. I can find white papers, if you are interested, where people test GCC with different cache architectures and it doesn't make much of a difference. GCC is a popular application for testing caches. Here is profiling data from DMD:

Performance counter stats for 'dmd -c main.d':

          600.77 msec task-clock:u              #     0.803 CPUs utilized
               0      context-switches:u        #     0.000 K/sec
               0      cpu-migrations:u          #     0.000 K/sec
          33,209      page-faults:u             # 55348.333 M/sec
   1,072,289,307      cycles:u                  # 1787148.845 GHz
     870,175,210      stalled-cycles-frontend:u #    81.15% frontend cycles idle
     721,897,927      stalled-cycles-backend:u  #    67.32% backend cycles idle
     881,895,208      instructions:u            #     0.82 insn per cycle
                                                #     0.99 stalled cycles per insn
     171,211,752      branches:u                # 285352920.000 M/sec
      11,287,327      branch-misses:u           #     6.59% of all branches

     0.747720395 seconds time elapsed
     0.497698000 seconds user
     0.104165000 seconds sys

The most important number in this conversation is 0.82 insn per cycle. My CPU can do ~2 IPC, so there are plenty of CPU resources available. New Intel desktop processors are designed to do 4 insn/cycle.
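The 0.82 insn/cycle figure follows directly from the counters above, as a quick sanity check shows:

```d
void main()
{
    // Counter values taken from the perf stat output above.
    immutable double instructions = 881_895_208;
    immutable double cycles = 1_072_289_307;
    immutable double ipc = instructions / cycles;
    // perf reports this rounded to two decimals as "0.82 insn per cycle".
    assert(ipc > 0.82 && ipc < 0.83);
}
```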
What is limiting DMD's performance is slow RAM and data fetching, not what you listed.

Code pages - you mean the TLB here?

Microcode cache - not all processors have one, and those that do only benefit trivial loops. DMD has complex loops.

Branch prediction - more entries in the branch predictor won't help here, because branches are missed because the data is unpredictable, not because there are too many branches. Also, the branch misprediction penalty is around 30 cycles, while reading from RAM can be over 200 cycles.

L1 code cache - you didn't mention this, but running those tasks in SMT mode might thrash the L1 cache, so execution might not be optimal.

Instead of parallel reading of imports, what DMD needs is more data-oriented data structures instead of its old OOP-inspired ones. I'll give an example of why that is the case. Consider struct { bool isAlive; }. If you want to read that bool, the CPU fetches a full 64-byte cache line. That means that for one byte of information the CPU fetches 64 bytes of data, resulting in 1/64 = 0.015625, or ~1.6% signal-to-noise ratio. This is terrible! AFAIK DMD doesn't make this particular mistake, but it is full of large structs and classes that are not efficient to read. To fix this we need to split those large data structures into smaller ones that only contain what is needed for a particular algorithm. I predict a 2x speed improvement if we transformed all the data structures in DMD. That's an improvement without improving the algorithms, only changing the data structures. This is getting too long, so I will stop right now.
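A sketch of the kind of hot/cold split being described here, with hypothetical field names (this is illustrative, not actual DMD code):

```d
// "OOP-style" fat node: reading isAlive drags in a cache line that is
// mostly unrelated cold data.
struct FatNode
{
    bool isAlive;        // 1 byte of useful data for a liveness pass...
    long[7] otherFields; // ...sharing its cache line with 56 cold bytes
}

// Data-oriented alternative: each pass gets a densely packed array of
// only the field it needs, so one 64-byte cache line now holds 64
// liveness flags instead of 1.
struct NodeTable
{
    bool[] isAlive;          // hot data for the liveness pass
    long[7][] otherFields;   // cold data kept out of the way
}

size_t countAlive(const NodeTable t)
{
    size_t n;
    foreach (flag; t.isAlive)  // streams through packed flags
        if (flag) ++n;
    return n;
}

void main()
{
    NodeTable t;
    t.isAlive = [true, false, true];
    t.otherFields.length = 3;
    assert(countAlive(t) == 2);
}
```

This is the classic array-of-structs to struct-of-arrays transformation: the content is the same, but the memory traffic per pass drops by roughly the ratio of hot bytes to total struct size.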
Re: D compilation is too slow and I am forking the compiler
On Thursday, 22 November 2018 at 04:48:09 UTC, Vladimir Panteleev wrote: Sorry about that. I'll have to think of two titles next time, one for the D community and one for everyone else. If it's of any consolation, the top comments in both discussion threads point out that the title is inaccurate on purpose. Your post on reddit received more comments than the D front end's inclusion into GCC. If you had titled your post differently you probably wouldn't have had such success, so from my perspective it's a net positive. Sure, there are a few people who took the wrong message, but there are more people who saw your post.
Re: D compilation is too slow and I am forking the compiler
On Friday, 23 November 2018 at 13:23:22 UTC, welkam wrote: If we run these steps in different threads on the same core with SMT we could better use the core's resources: reading files with the kernel, decoding UTF-8 with vector instructions and lexing/parsing with scalar operations, while all communication is done through the L1 and L2 cache. You might save some pages from the data cache, but by doing more work at once, the code might stop fitting in the execution-related caches (code pages, microcode, branch prediction) instead.
Re: D compilation is too slow and I am forking the compiler
On Wednesday, 21 November 2018 at 10:56:02 UTC, Walter Bright wrote: Wouldn't it be awesome to have the lexing/parsing of the imports all done in parallel? From my testing, lexing/parsing takes a small amount of the build time, so running it in parallel might be a small gain. We should consider running in parallel more heavy-hitting features like CTFE and templates. Since we are in wish land, here are my wishes. Currently D reads all the files that are passed on the command line before starting lexing/parsing, but in principle we could start lexing/parsing after the first file is read. In fact we could start after the first file's first line is read. Out of all the operations before the semantic pass, reading from the hard disk should be the slowest, so it might be possible to decode UTF-8, lex and parse at the speed of reading from the hard disk. If we run these steps in different threads on the same core with SMT we could better use the core's resources: reading files with the kernel, decoding UTF-8 with vector instructions and lexing/parsing with scalar operations, while all communication is done through the L1 and L2 cache. I thought about using memory-mapped files to unblock file reading as a first step, but the lack of good documentation about memory-mapped files and the lack of a thorough understanding of the front end made me postpone this modification. It's a change with little benefit. The main difficulty in getting that to work is dealing with the shared string table. At the beginning of parsing, a thread could get a read-only shared slice of the string table. All strings not in the table are put in a local string table. After parsing, the tables are merged and the shared slice is updated, so a new thread could start with a bigger table. This assumes that the table is not sorted.
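A rough sketch of that per-thread scheme: each parser thread works against a read-only snapshot of the shared table, interns unseen strings locally, and merges them back under a lock afterwards. All type and member names here are hypothetical, not DMD's actual StringTable API, and the snapshot handling is deliberately simplified (a real implementation would publish an immutable copy rather than a live reference):

```d
import core.sync.mutex : Mutex;
import std.algorithm.searching : canFind;

final class SharedStringTable
{
    private uint[string] ids;  // interned string -> id
    private Mutex lock;

    this() { lock = new Mutex; }

    // Read-only view handed to a parser thread at the start of parsing.
    const(uint[string]) snapshot() const { return ids; }

    // Merge a thread-local table, assigning ids to unseen strings.
    void merge(string[] localStrings)
    {
        lock.lock();
        scope (exit) lock.unlock();
        foreach (s; localStrings)
            if (s !in ids)
                ids[s] = cast(uint) ids.length;
    }
}

struct LocalInterner
{
    const(uint[string]) shared_; // snapshot, never written
    string[] fresh;              // strings absent from the snapshot

    void intern(string s)
    {
        if (s !in shared_ && !fresh.canFind(s))
            fresh ~= s;
    }
}

void main()
{
    auto tab = new SharedStringTable;
    tab.merge(["foo", "bar"]);

    auto local = LocalInterner(tab.snapshot());
    local.intern("foo"); // already shared: skipped
    local.intern("baz"); // new: recorded locally

    tab.merge(local.fresh); // after parsing, publish local entries
    assert("baz" in tab.snapshot());
}
```

Note this matches the "table is not sorted" assumption in the post: ids are assigned in merge order, so no global ordering has to be preserved across threads.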
Re: The New Fundraising Campaign
On Friday, 23 November 2018 at 12:56:49 UTC, Robert M. Münch wrote: Hi, will I get a donation certificate? I haven't found anything about it. Not a certificate, but a receipt with all the information you need for tax deductions.
Re: The New Fundraising Campaign
On Friday, 23 November 2018 at 10:20:22 UTC, Martin Tschierschke wrote: Sorry to annoy you, but these links have to be integrated into the donate page https://dlang.org/foundation/donate.html Yes, I know. I want to do more than just add the link, however. I want to integrate the campaign menu, and that means I have to set aside some time to determine how best to add the integration code into the DDOC for the site. It's one of many items on my TODO list and I'll get to it soon. I gave my 25 bucks and want this topic to stay on top! So please put it on top! The campaign (https://www.flipcause.com/secure/cause_pdetails/NDUwNTY=) is now at $614 (goal: $3000), so keeping up the pace since the first donation (11/10) would bring in the missing $2386 in approximately 51 days: => We need another 74 supporters with an average of $32. I'll do another push in the first week of December, on Twitter and the blog. And as we get closer to the deadline, I'll send out more reminders.
Re: The New Fundraising Campaign
On 2018-11-10 16:09:12 +, Mike Parker said: I've just published a new blog post describing our new fundraising campaign. TL;DR: We want to pay a Pull Request Manager to thin out the pull request queues and coordinate between relevant parties on newer pull requests so they don't go stale. We've launched a three-month campaign, and Nicholas Wilson has agreed to do the work. Hi, will I get a donation certificate? I haven't found anything about it. -- Robert M. Münch http://www.saphirion.com smarter | better | faster
Re: The New Fundraising Campaign
On Saturday, 10 November 2018 at 16:09:12 UTC, Mike Parker wrote: I've just published a new blog post describing our new fundraising campaign. TL;DR: We want to pay a Pull Request Manager to thin out the pull request queues and coordinate between relevant parties on newer pull requests so they don't go stale. We've launched a three-month campaign, and Nicholas Wilson has agreed to do the work. We have high hopes that this will help reduce frustration for current and future contributors. And we will be grateful for your support in making it happen. Please read the blog post for more details: https://dlang.org/blog/2018/11/10/the-new-fundraising-campaign/ For the impatient: https://www.flipcause.com/secure/cause_pdetails/NDUwNTY= Sorry to annoy you, but these links have to be integrated into the donate page https://dlang.org/foundation/donate.html or, even better, a hint about this campaign should be on the home page, too. I gave my 25 bucks and want this topic to stay on top! So please put it on top! The campaign (https://www.flipcause.com/secure/cause_pdetails/NDUwNTY=) is now at $614 (goal: $3000), so keeping up the pace since the first donation (11/10) would bring in the missing $2386 in approximately 51 days: => We need another 74 supporters with an average of $32. Regards mt.
Re: LDC 1.13.0-beta2
On 2018-11-21 11:43, kinke wrote: Glad to announce the second beta for LDC 1.13: * Based on D 2.083.0+ (yesterday's DMD stable). * The Windows packages are now fully self-sufficient, i.e., a Visual Studio/C++ Build Tools installation isn't required anymore. * Substantial debug info improvements. * New command-line option `-fvisibility=hidden` to hide functions/globals not marked as export, to reduce the size of shared libraries. Full release log and downloads: https://github.com/ldc-developers/ldc/releases/tag/v1.13.0-beta2 I see that the bundled Dub has been updated to the latest version. Awesome, thanks. -- /Jacob Carlborg
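As a hedged illustration of the new `-fvisibility=hidden` option described in the release notes (the symbol names and build command below are made up for the example):

```d
// lib.d - built, for example, with: ldc2 -shared -fvisibility=hidden lib.d
// With -fvisibility=hidden, only symbols marked `export` remain visible
// in the resulting shared library; everything else gets hidden
// visibility, shrinking the dynamic symbol table.

export extern (C) int publicEntryPoint(int x)
{
    return internalHelper(x) + 1;
}

// Not marked export: hidden from the library's dynamic symbol table
// when compiled with -fvisibility=hidden, though still callable from
// within the library itself.
extern (C) int internalHelper(int x)
{
    return x * 2;
}
```

On Linux, inspecting the resulting library with `nm -D` should then list `publicEntryPoint` but not `internalHelper`.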
Re: D compilation is too slow and I am forking the compiler
On 2018-11-21 11:56, Walter Bright wrote: Wouldn't it be awesome to have the lexing/parsing of the imports all done in parallel? The main difficulty in getting that to work is dealing with the shared string table. Would it be possible to have one string table per thread and merge them to one single shared string table before continuing with the next phase? -- /Jacob Carlborg
Re: LDC 1.13.0-beta2
On Wednesday, 21 November 2018 at 10:43:55 UTC, kinke wrote: Glad to announce the second beta for LDC 1.13: * Based on D 2.083.0+ (yesterday's DMD stable). * The Windows packages are now fully self-sufficient, i.e., a Visual Studio/C++ Build Tools installation isn't required anymore. * Substantial debug info improvements. * New command-line option `-fvisibility=hidden` to hide functions/globals not marked as export, to reduce the size of shared libraries. Full release log and downloads: https://github.com/ldc-developers/ldc/releases/tag/v1.13.0-beta2 Thanks to all contributors! Nice! It seems that the Docker image has not been updated since 1.9.0 (but it is tagged as "automated build"). Could you please update that image? https://hub.docker.com/r/dlanguage/ldc/ Andrea