Re: Why I'm Excited about D
On 4/8/15 4:59 AM, Jacob Carlborg wrote: On 2015-04-07 19:46, Ary Borenszweig wrote: It's true that Ruby is slow, but only because their priority is correctness. I don't think it's so much about correctness, it's rather the complicated features it supports, like metaprogramming. eval and bindings are causing problems; promoting Fixnum to Bignum when a value doesn't fit is also a problem. The autopromotion of Fixnum to Bignum is interesting. It always leads to correct code, although it's slow. Without this feature you start to realize how weak other languages are. For example, take the simple problem of parsing a number out of a string. In Ruby it's:
~~~
"123".to_i #=> 123
~~~
But this also works:
~~~
"12398123091823091823091823091823091820318203123".to_i
#=> 12398123091823091823091823091823091820318203123
~~~
In a typed language one would be forced to make a decision about the return type of to_i, and maybe raise an exception or signal an error somehow if the value doesn't fit in an Int32 or Int64. In Ruby you just forget about these little problems: it will always work and give the correct result. That's one of the reasons I think it's bad to say that Ruby is not a "correct" language.
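To make the autopromotion concrete, here is a small Ruby demo (mine, not from the thread). Note that in Ruby >= 2.4 Fixnum and Bignum were unified into a single Integer class, but the behavior described above is unchanged: arithmetic and parsing never silently overflow or truncate.

```ruby
# Integer autopromotion: arithmetic that overflows a machine word is
# transparently promoted to an arbitrary-precision integer.
small = 2 ** 62                 # fits in a machine word
big   = small * small           # promoted, no overflow
puts big                        #=> 21267647932558653966460912964485513216

# Parsing never truncates either:
n = "12398123091823091823091823091823091820318203123".to_i
puts n > 2 ** 64                #=> true, far beyond any fixed-size integer
```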
Re: Why I'm Excited about D
On 4/7/15 3:34 PM, deadalnix wrote: On Tuesday, 7 April 2015 at 18:01:53 UTC, Ary Borenszweig wrote: On 4/7/15 2:16 PM, deadalnix wrote: On Tuesday, 7 April 2015 at 08:58:57 UTC, ixid wrote: Or to be more consistent with UFCS: foreach (name; names.parallel) { name.writeln; } no.please wat unreadable.is.ufcs.using.over Yes, I don't like "writeln" being used with UFCS, it's an abuse. My point is that every language has WATs :-)
Re: Why I'm Excited about D
On 4/7/15 2:16 PM, deadalnix wrote: On Tuesday, 7 April 2015 at 08:58:57 UTC, ixid wrote: Or to be more consistent with UFCS: foreach (name; names.parallel) { name.writeln; } no.please wat
Re: Why I'm Excited about D
On 4/6/15 8:51 PM, Adam Hawkins wrote: Hello everyone, this is my first post on the forum. I've been investigating the language for the past few weeks. I was able to complete my first useful program thanks to very helpful people in #d on IRC. The experience made me very interested in the language and improving the community around it. I'm primarily a Ruby developer (have been for about the last 7-8 years) doing web stuff with significant JavaScript work as well. I wrote a blog post on why I'm excited about D. You can read it here: http://hawkins.io/2015/04/excited-about-d/. I've been reading the forums here so I can see that there is a focus on improving the marketing for the language and growing the community. I see most of the effort is geared towards C++ programmers, but have you considered looking at us dynamic languages folk? I see a big upside for us. Moving from Ruby to D (my case) gives me power & performance. I still have OOP techniques, but I also have functional things like closures and all that good stuff. The only trade-off in the Ruby case is metaprogramming. All in all I think there is a significant value promise for folks like me doing backend services. Regardless, I figured it might be interesting to hear about some experience coming to the language from a different perspective. Cheers! "Ruby was never intended to be correct" -> I think Ruby is the most correct language I've seen around.
~~~
a = []
a << a
p a         #=> [[...]]
p a == a[0] #=> true
~~~
This is just an example. Using Ruby and reading its source code I found so many things that they get right, like border cases, that I'm surprised you say that. It's true that Ruby is slow, but only because their priority is correctness.
Re: Benchmark of D against other languages
On 4/2/15 11:20 PM, Laeeth Isharc wrote: On Tuesday, 31 March 2015 at 18:20:05 UTC, cym13 wrote: I found this repository (reddit!) that hosts common benchmarks for many languages such as D, Nim, Go, python, C, etc... It uses only standard structures not to influence the benchmark. https://github.com/kostya/benchmarks Thanks for this. BTW, some of these benchmarks were taken from attractivechaos here. He seems to be working in bioinformatics and does not consider himself a programmer by profession, but he has written some libraries others use, and is clearly bright and thoughtful. He is pro-D and sad that people have not recognized its qualities. https://attractivechaos.wordpress.com/ He contributed some samples and pull requests to the Crystal repository and he's always very humble and makes good observations and suggestions. I don't recall seeing him say anything bad about any language: just some benchmarks and how they behave; he lets the benchmarks (and code) speak for him :-)
Re: unittests are really part of the build, not a special run
On 4/2/15 3:32 AM, Jacob Carlborg wrote: On 2015-04-01 21:28, Ary Borenszweig wrote: No, it's actually much simpler but less powerful. This is because the language is not as dynamic as Ruby. But we'd like to keep things as simple as possible. Can't you implement that using macros? We can. But then it becomes harder to understand what's going on. In RSpec I don't quite understand what's going on really, and I like a bit of magic but not too much of it. In fact with macros it's not that simple, because you need to remember the context where you are defining stuff, so it might require adding those capabilities to macros, which would complicate the language. But right now you get these things: This sounds all great. But lowering groups and examples to classes and methods takes it to the next level. Somebody also started writing a minitest clone: https://github.com/ysbaddaden/minitest.cr . Implementing a DSL on top of that using regular code or macros should be possible. But right now the features we have are enough.
Re: unittests are really part of the build, not a special run
On 4/1/15 3:57 PM, Jacob Carlborg wrote: On 2015-04-01 20:04, Ary Borenszweig wrote: By the way, this is the way we do it in Crystal. The source code for the spec library is here, if you need some inspiration: https://github.com/manastech/crystal/tree/master/src/spec . It's just 687 lines long. Ahhh, looks like my old buddy RSpec :). Does it do all the fancy things with classes, instances and inheritance, that is, each describe block is a class and each it block is an instance method? No, it's actually much simpler but less powerful. This is because the language is not as dynamic as Ruby. But we'd like to keep things as simple as possible. But right now you get these things:
1. You can generate many tests in a simple way:
~~~
[1, 2, 3].each do |num|
  it "works for #{num}" do
    ...
  end
end
~~~
2. You get a summary of all the failures and the lines of the specs that failed. Also, you get errors similar to RSpec's for matchers, and a command line is printed for each failing spec so you can rerun it separately. These are the most useful RSpec features for me.
3. You can get dots for each spec or the names of the specs (-format option).
4. You can run a spec given its line number or a regular expression for its name.
Eventually it will have more features, as the language evolves, but for now this has proven to be very useful :-) Another good thing about it being just a library is that others send pull requests and patches, and this is easier to understand than some internal logic built into the compiler (compiler code is always harder).
Re: unittests are really part of the build, not a special run
On Monday, 30 March 2015 at 22:20:08 UTC, Andrei Alexandrescu wrote: This is a tooling issue. I think D's built-in "unittest" blocks are a mistake. Yes, they are simple and for simple functions and algorithms they work pretty well. However, when you have a big complex project you start having other needs:
1. Named unit-tests, so you can better find what failed
2. Better error messages for assertions
3. Better output to rerun failed tests
4. Setup and teardown hooks
5. Different outputs depending on use case
All of this can be done with a library solution. D should have a very good library solution in phobos and it should be encouraged to use that. DMD could even know about this library and have special commands to trigger the tests. The problem is that you can start with "unittest" blocks, but then you realize you need more, so what do you do? You combine both? You can't! I'd say, deprecate "unittest" and write a good test library. You can still provide it for backwards compatibility. By the way, this is the way we do it in Crystal. The source code for the spec library is here, if you need some inspiration: https://github.com/manastech/crystal/tree/master/src/spec . It's just 687 lines long.
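The points above can be sketched in very little code. This is a toy runner I wrote for illustration (every name here is made up; it is not the Crystal spec library, nor minitest): named tests, setup/teardown hooks, and per-test failure reporting in about thirty lines of Ruby.

```ruby
# A minimal sketch of a library-based test runner: named tests (1),
# failure collection and reporting by name (2, 3), setup/teardown (4).
class TinyRunner
  def initialize
    @tests = []
    @setup = proc {}
    @teardown = proc {}
  end

  def setup(&blk)    = @setup = blk
  def teardown(&blk) = @teardown = blk
  def test(name, &body) = @tests << [name, body]

  # Run every test; a failure doesn't abort the run, and each result
  # is recorded under the test's name.
  def run
    results = {}
    @tests.each do |name, body|
      @setup.call
      begin
        body.call
        results[name] = :pass
      rescue => e
        results[name] = "fail: #{e.message}"
      ensure
        @teardown.call
      end
    end
    results
  end
end

runner = TinyRunner.new
log = []
runner.setup    { log << :setup }
runner.teardown { log << :teardown }
runner.test("addition works") { raise "1+1 != 2" unless 1 + 1 == 2 }
runner.test("always fails")   { raise "intentional" }
results = runner.run
p results   # both tests reported by name, one pass and one failure
```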
Re: Benchmark of D against other languages
On 3/31/15 7:27 PM, deadalnix wrote: On Tuesday, 31 March 2015 at 22:15:58 UTC, Ary Borenszweig wrote: But in Crystal he also uses classes and doesn't mark methods as final. And it's faster than D. Not familiar with their way of doing. Can you explain the crystal semantic ? You can read how method dispatch works here: http://crystal-lang.org/2015/03/04/crystal-0.6.1-released.html#method-dispatch
Re: Benchmark of D against other languages
On 3/31/15 3:44 PM, Andrei Alexandrescu wrote: On 3/31/15 11:35 AM, cym13 wrote: On Tuesday, 31 March 2015 at 18:32:25 UTC, Meta wrote: On Tuesday, 31 March 2015 at 18:20:05 UTC, cym13 wrote: I found this repository (reddit!) that hosts common benchmarks for many languages such as D, Nim, Go, python, C, etc... It uses only standard structures not to influence the benchmark. https://github.com/kostya/benchmarks Can you provide the Reddit link? Right off on the Brainfuck example, the author used a class instead of a struct, and none of the methods were marked final. Here it is: http://www.reddit.com/r/programming/comments/30y9mk I don't think the author is an experienced D programmer, but maybe that's why I find it interesting. It shows what a new, naïve user can expect as a first result. Oh boy all classes with one-liner non-final methods. Manu must be dancing a gig right now :o). -- Andrei But in Crystal he also uses classes and doesn't mark methods as final. And it's faster than D. This is not to start a war; maybe one day I'll have to use D in my workplace, so it better be nice and fast (so it's better if all tools are good). At least in this case, the compiler should detect that the class isn't inherited and that you are creating an executable out of the program, so it can't be a library, and mark the methods as final. And with escape analysis (something that Crystal also lacks, but other languages have) the compiler could figure out that the class doesn't need to be allocated on the heap. In short, I think we need a smarter compiler, not more keywords to optimize our code.
Re: [WORK] groupBy is in! Next: aggregate
On 1/26/15 2:34 PM, Andrei Alexandrescu wrote: On 1/26/15 8:11 AM, H. S. Teoh via Digitalmars-d wrote: On Mon, Jan 26, 2015 at 11:26:04AM +, bearophile via Digitalmars-d wrote: Russel Winder: but is its name "group by" as understood by the rest of the world? Nope... [...] I proposed to rename it but it got shot down. *shrug* We still have a short window of time to sort this out, before 2.067 is released... My suggestion was to keep the name but change the code of your groupBy implementation to return tuple(key, lazyValues) instead of just lazyValues. That needs to happen only for binary predicates; unary predicates will all have alternating true/false keys. Seems that would please everyone. Andrei That's much harder to implement than what it does right now. I don't know how you'd manage to do the lazyValues thing: you'd need to make multiple passes over the range. Again, other languages return an associative array in this case.
Re: [WORK] groupBy is in! Next: aggregate
On 1/23/15 7:30 PM, bearophile wrote: H. S. Teoh: What you describe could be an interesting candidate to add, though. It could iterate over distinct values of the predicate, and traverse the forward range (input ranges obviously can't work unless you allocate, which makes it no longer lazy) each time. This, however, has O(n*k) complexity where k is the number of distinct predicate values. Let's allocate, creating an associative array inside the grouping function :-) Bye, bearophile All languages I know do this for `group by` (because of the complexity involved), and I think it's ok to do so.
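Allocating an associative array inside the grouping function, as bearophile suggests, is essentially what Ruby's built-in Enumerable#group_by does. A minimal sketch of that eager, hash-backed approach (the function name `eager_group_by` is mine, for illustration):

```ruby
# An eager group_by built on a hash: one O(n) pass, one hash allocation,
# mirroring what Ruby's Enumerable#group_by does internally.
def eager_group_by(enum)
  groups = Hash.new { |h, k| h[k] = [] }  # auto-create empty buckets
  enum.each { |x| groups[yield(x)] << x } # one pass over the input
  groups
end

result = eager_group_by([1, 4, 2, 4, 5, 2, 3, 9]) { |x| x % 3 }
p result
# same grouping as the built-in:
# [1, 4, 2, 4, 5, 2, 3, 9].group_by { |x| x % 3 }
```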
Re: [WORK] groupBy is in! Next: aggregate
On 1/23/15 8:54 PM, Andrei Alexandrescu wrote: On 1/23/15 1:36 PM, H. S. Teoh via Digitalmars-d wrote: On Fri, Jan 23, 2015 at 08:44:05PM +, via Digitalmars-d wrote: [...] You are talking about two different functions here. group by and partition by. The function that has been implemented is often called partition by. [...] It's not too late to rename it, since we haven't released it yet. We still have a little window of time to make this change if necessary. Andrei? Returning each group as a tuple sounds like a distinct, albeit related, function. It can probably be added separately. We already have partition() functions that actually partition a range into two subranges, so adding partitionBy with a different meaning may be confusing. -- Andrei Another name might be chunkBy: it returns chunks that are grouped by some logic.
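For what it's worth, Ruby's Enumerable#chunk already has exactly the "chunkBy" semantics: it groups consecutive elements by the block's value, and it also yields the key, i.e. the tuple(key, values) shape discussed in this thread. A quick demo using the same numbers as the D example:

```ruby
# Enumerable#chunk groups *consecutive* elements by the block's value
# and yields [key, values] pairs -- chunking semantics with the key kept.
data = [293, 453, 600, 929, 339, 812, 222, 680, 529, 768]
chunks = data.chunk { |a| a & 1 }.to_a
p chunks
#=> [[1, [293, 453]], [0, [600]], [1, [929, 339]],
#    [0, [812, 222, 680]], [1, [529]], [0, [768]]]
```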
Re: [WORK] groupBy is in! Next: aggregate
On 1/23/15 3:08 PM, Andrei Alexandrescu wrote: So H.S. Teoh awesomely took https://github.com/D-Programming-Language/phobos/pull/2878 to completion. We now have a working and fast relational "group by" facility. See it at work!
~~~
#!/usr/bin/rdmd
void main() {
    import std.algorithm, std.stdio;
    [293, 453, 600, 929, 339, 812, 222, 680, 529, 768]
        .groupBy!(a => a & 1)
        .writeln;
}
~~~
[[293, 453], [600], [929, 339], [812, 222, 680], [529], [768]]
The next step is to define an aggregate() function, which is a lot similar to reduce() but works on ranges of ranges and aggregates a function over each group. Continuing the previous example:
~~~
[293, 453, 600, 929, 339, 812, 222, 680, 529, 768]
    .groupBy!(a => a & 1)
    .aggregate!max
    .writeln;
~~~
should print: [453, 600, 929, 812, 529, 768] The aggregate function should support aggregating several functions at once, e.g. aggregate!(min, max) etc. Takers? Andrei
In most languages group by yields a tuple of {group key, group values}. For example (Ruby or Crystal):
~~~
a = [1, 4, 2, 4, 5, 2, 3, 7, 9]
groups = a.group_by { |x| x % 3 }
puts groups #=> {1 => [1, 4, 4, 7], 2 => [2, 5, 2], 0 => [3, 9]}
~~~
In C# it's also called group by: http://www.dotnetperls.com/groupby Java: http://docs.oracle.com/javase/8/docs/api/java/util/stream/Collectors.html#groupingBy-java.util.function.Function- SQL: http://www.w3schools.com/sql/sql_groupby.asp So I'm not sure "groupBy" is a good name for this.
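In the {key => values} style, the aggregate step Andrei describes is just a map over the groups. A quick Ruby sketch (mine, not from the post) of aggregating max, and of aggregating several functions at once:

```ruby
# Aggregating over key/value groups: group first, then map each group's
# values through the aggregate function(s).
a = [1, 4, 2, 4, 5, 2, 3, 7, 9]
groups = a.group_by { |x| x % 3 }

maxes = groups.map { |_key, values| values.max }
p maxes #=> [7, 5, 9]

# Several aggregates at once, like aggregate!(min, max):
min_max = groups.map { |_key, values| [values.min, values.max] }
p min_max #=> [[1, 7], [2, 5], [3, 9]]
```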
Re: Like Go/Rust, why not to have "func" keyword before function declaration
On 1/19/15 9:17 PM, Walter Bright wrote: On 1/19/2015 2:49 PM, Ary Borenszweig wrote: So... how do you search for a function definition in D without an IDE? I do a text search for the name of the function. I've been programming in C, C++, and D for 30 years without an IDE. It never occurred to me that this was not doable :-) But the results will also contain invocations of that function. Do you go one by one until you find the definition?
Re: Like Go/Rust, why not to have "func" keyword before function declaration
On 1/19/15 7:54 PM, Andrei Alexandrescu wrote: On 1/19/15 2:49 PM, Ary Borenszweig wrote: On 1/19/15 6:25 PM, Andrei Alexandrescu wrote: On 1/19/15 12:51 PM, Alexey T. wrote: Source will be much easier to read if a func declaration begins with a keyword, "def" or "func". E.g. func myName(params...): typeOfResult; or func myName(params...) -> typeOfResult; easier to read and PARSE. The next D version may allow this, with compatibility with the old syntax (C-like, where typeOfResult is the first identifier). No. -- Andrei How do you search for a function definition? In Ruby I search "def some_name" and I find it. In Go I can probably search "func some_name". In Rust, "fn some_name". Browsing some C code for Ruby I search with the regex "^some_name" because they have the convention of writing functions like this:
~~~
return_type
function_name(...)
{
}
~~~
It works, but if you stop following that convention you are lost. So... how do you search for a function definition in D without an IDE? I abandon D and switch to Ruby. -- Andrei Thanks for the answer.
Re: Like Go/Rust, why not to have "func" keyword before function declaration
On 1/19/15 6:25 PM, Andrei Alexandrescu wrote: On 1/19/15 12:51 PM, Alexey T. wrote: Source will be much easier to read if a func declaration begins with a keyword, "def" or "func". E.g. func myName(params...): typeOfResult; or func myName(params...) -> typeOfResult; easier to read and PARSE. The next D version may allow this, with compatibility with the old syntax (C-like, where typeOfResult is the first identifier). No. -- Andrei How do you search for a function definition? In Ruby I search "def some_name" and I find it. In Go I can probably search "func some_name". In Rust, "fn some_name". Browsing some C code for Ruby I search with the regex "^some_name" because they have the convention of writing functions like this:
~~~
return_type
function_name(...)
{
}
~~~
It works, but if you stop following that convention you are lost. So... how do you search for a function definition in D without an IDE?
Re: Use proper frameworks for building dlang.org
On 1/19/15 1:42 PM, Andrei Alexandrescu wrote: On 1/19/15 12:33 AM, Jacob Carlborg wrote: On 2015-01-19 03:31, Andrei Alexandrescu wrote: So now minification and gzipping are the culprit? I don't quite understand. There are plenty existing web frameworks that already have solved, what it seems like, all the problems you're trying to solve now. You're just shooting them down because it's not Ddoc. There might be a misunderstanding over what problems I'm trying to solve. -- Andrei What are you trying to solve?
Re: The ugly truth about ddoc
On 1/18/15 11:18 PM, Andrei Alexandrescu wrote: TL;DR: I've uploaded new menu colors at http://erdani.com/d/, this time aiming for a more martian red ethos. Please let me know. == So I was looking at the css today (original at http://paste.ofcode.org/fHGT24YASrWu3rnMYLdm4C taken from the zip at http://cssmenumaker.com/menu/modern-jquery-accordion-menu) and it was quite unwieldy to experiment with. For example, the same color appears hardcoded in a number of places; whenever changing one I'd need to change all, or miss some important instances (as it happened with my first experiment). I'm sure experts must have tools for allowing things like variables and macros for css creation. Sass (http://sass-lang.com/) and Less (http://lesscss.org/) come to mind. Any decent web framework comes with support for them. I think you will eventually find that ddoc is Turing-complete and will implement one of the above transpilers in it. And css minification isn't just about gzip. It's also about removing whitespace and compacting stuff. Can you code a ddoc macro for that?
Re: Use proper frameworks for building dlang.org
On 1/18/15 7:24 AM, Jacob Carlborg wrote: Lately Andrei has worked a lot on improving the dlang.org site in various ways. To me it's getting clearer and clearer that Ddoc is not the right tool for building a web site. Especially the latest "improvement" [1] shows that it's not a good idea to reinvent the wheel, especially when it's not an improvement at all. Why don't we instead make use of a proper framework both on the server side and client side. Personally I would go with Ruby on Rails but I know that most of you here would hate that, so a better suggestion would probably be vibe.d. For the client side I'm thinking Bootstrap and jQuery. The biggest reason why I would prefer Rails is because I know everything that is needed is already implemented and easily available. I cannot say the same thing about vibe.d. But it might be enough for dlang.org, I don't know. What do you think? [1] http://forum.dlang.org/thread/m9f558$lbb$1...@digitalmars.com I agree with you, of course. But more than anything, you need a real designer.
Re: Is anyone working on a D source code formatting tool?
On 1/11/15 3:48 PM, Walter Bright wrote: On 1/11/2015 9:45 AM, Stefan Koch wrote: I'm currently writing a parser-generator that will be able to transform the generated parse tree back into source automatically. Writing a rule-based formatter should be pretty doable. Formatting the AST into text is straightforward, dmd already does that for .di file generation. The main problem is what to do about comments, which don't fit into the grammar. A secondary problem is what to do when the line length limit is exceeded, such as for long expressions. The way I did it in Descent (I copied the logic from JDT) is to parse the code into an AST, and then walk the AST in sync with a lexer. So if you have this: void /* comment */ foo() {} the AST would be a FunctionDecl (whatever the name is), so you'd expect a type (consume that AST node, in sync with the lexer), then check for comments/newlines/etc., skip/print them, then consume the name, check for comments/newlines/etc. That way the AST doesn't have to know anything about comments, but comments need to be known by the lexer (via a flag, probably). Considering how flexible JDT's formatter is, I think this solution is pretty good.
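The "walk the AST in sync with a lexer" idea can be sketched in a few lines. This is a toy illustration (all names here are made up; it is not Descent's or JDT's actual code): the AST walk knows nothing about comments, and before emitting each token the grammar expects, any pending comment tokens from the lexer are flushed through verbatim.

```ruby
# Toy formatter: the "AST walk" calls expect() for each grammar token;
# comment tokens sitting between grammar tokens are flushed first, so
# the AST never needs comment nodes.
Token = Struct.new(:type, :text)

class SyncFormatter
  def initialize(tokens)
    @tokens = tokens
    @out = +""
  end

  # Emit pending comments, then the token the AST walk expects next.
  def expect(type)
    while @tokens.first&.type == :comment
      @out << @tokens.shift.text << " "   # comments pass through verbatim
    end
    tok = @tokens.shift
    raise "expected #{type}, got #{tok&.type}" unless tok&.type == type
    @out << tok.text << " "
  end

  # "AST walk" for a parameterless function declaration with empty body.
  def format_function_decl
    expect(:type); expect(:ident)
    expect(:lparen); expect(:rparen)
    expect(:lbrace); expect(:rbrace)
    @out.strip
  end
end

tokens = [
  Token.new(:type, "void"),
  Token.new(:comment, "/* comment */"),   # sits between grammar tokens
  Token.new(:ident, "foo"),
  Token.new(:lparen, "("), Token.new(:rparen, ")"),
  Token.new(:lbrace, "{"), Token.new(:rbrace, "}"),
]
out = SyncFormatter.new(tokens).format_function_decl
p out #=> "void /* comment */ foo ( ) { }"
```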
Re: D and Nim
On 1/5/15 8:01 AM, bearophile wrote: Ary Borenszweig: Is there proof of the percentage of bugs caused by incorrectly mutating variables that were supposed to be immutable? I don't know, probably not, but the progress in language design is still in its pre-quantitative phase (note: I think Rust variables are constant by default, and mutable on request with "mut"). It's not just a matter of bugs, it's also a matter of making the code simpler so you can understand better and faster what a function is doing and how. You said "Computer Science has found that the right default for variables is to have them immutable". I don't think "Rust == Computer Science". Otherwise their compiler would be fast (Computer Science knows how to do fast compilers). At least I like that they are introducing a new feature to their language that none other has: lifetimes and borrows. But I find it very hard to read their code. Take a look for example at the lerp function defined in this article: http://www.willusher.io/2014/12/30/porting-a-ray-tracer-to-rust-part-1/ Rust:
~~~
pub fn lerp<T: Mul<f32, T> + Add<T, T> + Copy>(t: f32, a: &T, b: &T) -> T {
    *a * (1.0 - t) + *b * t
}
~~~
C++:
~~~
template <typename T>
T lerp(float t, const T &a, const T &b) {
    return a * (1.f - t) + b * t;
}
~~~
I don't remember ever having such a bug in my life. Perhaps you are very good, but a language like D must be designed for more common programmers like Kenji Hara, Andrei Alexandrescu, or Raymond Hettinger. I don't think those are common programmers :-)
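For contrast, the duck-typed version of the same function is one line, with no bounds to spell out, which is part of why the Rust signature feels heavy (and, of course, there is no compile-time checking either; that's the trade). A small Ruby demo of my own:

```ruby
# Duck-typed lerp: any value supporting * (by a Float) and + works,
# no trait bounds or template constraints to declare.
def lerp(t, a, b)
  a * (1.0 - t) + b * t
end

p lerp(0.5, 0.0, 10.0)                      #=> 5.0
p lerp(0.5, Complex(0, 0), Complex(10, 4))  #=> (5.0+2.0i)
```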
Re: D and Nim
On 1/5/15 1:54 AM, bearophile wrote: Vlad Levenfeld: Can the compiler automatically make variables immutable if it can prove that they are never changed in some code? This is very different from what I am saying. C compilers don't go adding a "const" annotation to your source code (but perhaps the Rust compiler warns about mut variables that don't get mutated). Inferring that a D template function is pure because it contains no side effects is not the same thing as stating it is pure in the source code. In the second case you get an error if you try to print from the function; it's part of the function contract. Bye, bearophile Is there proof of the percentage of bugs caused by incorrectly mutating variables that were supposed to be immutable? I don't remember ever having such a bug in my life.
Re: D and Nim
On 1/5/15 12:42 AM, Walter Bright wrote: On 1/4/2015 6:11 PM, Ary Borenszweig wrote: But the main D developers are using dmd, written in C++. I'm not sure they have written large D programs, as big as a compiler (but correct me if I'm wrong). Does Javascript count? https://github.com/DigitalMars/DMDScript Definitely! I take back my word then.
Re: D and Nim
On 1/4/15 8:17 PM, anonymous wrote: On Sunday, 4 January 2015 at 21:46:09 UTC, Ary Borenszweig wrote: On 1/4/15 3:10 PM, Jonathan wrote: Hey folks, I've been recently checking out Nim/rod and feel like it takes a lot of inspiration from D (I think the creator was in the D community too at some point). How do you think it compares? What areas make D, in principle, a better choice? To give you my background, I like creating games (mostly using SDL bindings) using new languages, aiming for the most efficient yet concise way to write the engine and game logic. FYI, this is NOT a language war thread. I'm just curious about what separates them at a principle level. In my opinion Nim is superior to D in every aspect (and I say this as my personal opinion, not to trigger a language war). You do want a language war, because you're spewing too much bullshit. I dabbled in both d and nim/rod. All that's interesting in nim is what's taken from d. As I said, it's just my personal opinion. Others have said D is superior to Nim and they gave their reasons. There are examples of D code in these two repos: https://github.com/logicchains/LPATHBench https://github.com/kostya/benchmarks Take a look, for example, at the first one in D and Nim: https://github.com/logicchains/LPATHBench/blob/master/d.d https://github.com/logicchains/LPATHBench/blob/master/nim.nim According to the writeup: https://github.com/logicchains/LPATHBench/blob/master/writeup.md Nim is faster than D. And it does so with much less code. Bullshit. dmd is easy to beat. also json parsing? library issue. Then look at kostya/benchmarks: D is always behind Nim (except matmul, where they are similar, but all statically compiled languages are similar in that one). And Nim's code is always shorter and cleaner. (and before you reply to this with "but if you add pure nothrow @safe @abracadabra", continue reading) Bullshit. The main difference is that nim has significant whitespace. Code looks shorter because there's less {}. 
Second difference is that nim has code at top level. Great for short benchmarks but awful in large code. I actually meant all those annotations (pure nothrow safe immutable final) that appear every time someone wants to get their code to run fast. There was a time I liked D. But now to make the code fast you have to annotate things with pure nothrow @safe to make sure the compiler generates fast code. This leads to code that's uglier and harder to understand. Bullshit. That stuff makes d more modular than nim. How does "pure nothrow @safe" make things more modular? Another point is that Nimrod has CTFE but does so with a virtual machine, so I'm sure it's faster than D in that aspect. How does that make the language superior? Bullshit again. Many have complained that CTFE can take about 2 gigs of memory so they can't compile their programs (I think related to vibe.d templates). If that memory was garbage collected there would be no problem. Of course, DMD could have a GC, Walter said it before, but it slowed things down. Nim compiles itself in 2.5~5 seconds with a GC on (I think, please correct me if this is wrong). In any case Crystal compiles itself in about the same time with a GC on, so disabling a GC for speed shouldn't be an excuse. Then, Nim is written in Nim. How does that make the language superior? Bullshit again. Having the compiler be written in itself is a good way to immediately have the developers of the language get the feeling of the language, find bugs and improve it. ddmd But the main D developers are using dmd, written in C++. I'm not sure they have written large D programs, as big as a compiler (but correct me if I'm wrong). Having a compiler written in D can make things more stable, and the authors can improve the language as they get immediate feedback. At least that's how it feels when I develop Crystal. Nim has 363 issues according to https://github.com/Araq/Nim/issues . 
D has 2444 according to https://issues.dlang.org/buglist.cgi?component=DMD&limit=0&order=bug_status%2Cpriority%2Cassigned_to%2Cbug_id&product=D&query_format=advanced&resolution=--- Bullshit. That's because nimrod is less popular than dmd. Jesus, I can't believe you can smoke that. You might be right, I didn't think of that. Also, because the compiler is written in itself, everything is garbage collected, so there are no worries when doing CTFE (D's CTFE consumes a lot of memory, I read in this newsgroup). Nim compiles itself in between 2.5 and 5 seconds. How does that make the language superior? Bullshit again. Also, I get the feeling that D has too many features and not all of them work in harmony with the rest of them. So people always find small bugs and others suggest workarounds and eventually people learn to program in a WDD way (Workaround-development-driven).
Re: D and Nim
On 1/4/15 11:09 PM, weaselcat wrote: On Monday, 5 January 2015 at 01:56:20 UTC, Ary Borenszweig wrote: On 1/4/15 9:27 PM, bearophile wrote: Walter Bright: Nim: for neighbour in nodes[nodeId].neighbours: D: foreach(immutable route neighbour; nodes[nodeID].neighbours){ Correctly written D: foreach (neighbour; nodes[nodeID].neighbours){ I don't agree, the good D way is: foreach (immutable neighbour; nodes[nodeID].neighbours) { D programmers should apply const/immutable to every variable that doesn't need to mutate. Why? Implies intention of the variable at declaration & gives the compiler more opportunities for optimization The first, maybe (for me it's noise: if I want to mutate it, I do, otherwise I don't). For the second, the compiler can tell that if you don't assign anything more to it then it's immutable. So I'm not sure the second one is true.
Re: D and Nim
On 1/4/15 8:17 PM, anonymous wrote: On Sunday, 4 January 2015 at 21:46:09 UTC, Ary Borenszweig wrote: On 1/4/15 3:10 PM, Jonathan wrote: Bullshit. dmd is easy to beat. also json parsing? library issue. If there are library issues (like a slow json parser, or an unusable one), these should be top priority instead of adding more features (C++ interop?) to the language. Once everything is smooth more things can be added. Otherwise there's always the feeling that things aren't quite stable and fast enough.
Re: D and Nim
On 1/4/15 9:27 PM, bearophile wrote: Walter Bright: Nim: for neighbour in nodes[nodeId].neighbours: D: foreach(immutable route neighbour; nodes[nodeID].neighbours){ Correctly written D: foreach (neighbour; nodes[nodeID].neighbours){ I don't agree, the good D way is: foreach (immutable neighbour; nodes[nodeID].neighbours) { D programmers should apply const/immutable to every variable that doesn't need to mutate. Why?
Re: D and Nim
On 1/4/15 8:32 PM, Elie Morisse wrote: On Sunday, 4 January 2015 at 18:10:52 UTC, Jonathan wrote: - No conditional evaluation of code It has 'when', which is similar to static if. I would say that's conditional evaluation of code.
Re: D and Nim
On 1/4/15 3:10 PM, Jonathan wrote: Hey folks, I've been recently checking out Nim/rod and feel like it takes a lot of inspiration from D (I think the creator was in the D community too at some point). How do you think it compares? What areas make D, in principle, a better choice? To give you my background, I like creating games (mostly using SDL bindings) using new languages, aiming for the most efficient yet concise way to write the engine and game logic. FYI, this is NOT a language war thread. I'm just curious about what separates them at a principle level. In my opinion Nim is superior to D in every aspect (and I say this as my personal opinion, not to trigger a language war). There are examples of D code in these two repos: https://github.com/logicchains/LPATHBench https://github.com/kostya/benchmarks Take a look, for example, at the first one in D and Nim: https://github.com/logicchains/LPATHBench/blob/master/d.d https://github.com/logicchains/LPATHBench/blob/master/nim.nim According to the writeup: https://github.com/logicchains/LPATHBench/blob/master/writeup.md Nim is faster than D. And it does so with much less code. Then look at kostya/benchmarks: D is always behind Nim (except matmul, where they are similar, but all statically compiled languages are similar in that one). And Nim's code is always shorter and cleaner. (and before you reply to this with "but if you add pure nothrow @safe @abracadabra", continue reading) There was a time I liked D. But now to make the code fast you have to annotate things with pure nothrow @safe to make sure the compiler generates fast code. This leads to code that's uglier and harder to understand. Another point is that Nimrod has CTFE but does so with a virtual machine, so I'm sure it's faster than D in that aspect. Then, Nim is written in Nim. Having the compiler be written in itself is a good way to immediately have the developers of the language get the feeling of the language, find bugs and improve it. 
Nim has 363 issues according to https://github.com/Araq/Nim/issues . D has 2444 according to https://issues.dlang.org/buglist.cgi?component=DMD&limit=0&order=bug_status%2Cpriority%2Cassigned_to%2Cbug_id&product=D&query_format=advanced&resolution=--- . Also, because the compiler is written in itself, everything is garbage collected, so there are no worries when doing CTFE (D's CTFE consumes a lot of memory, I read in this newsgroup). Nim compiles itself in between 2.5 and 5 seconds. Also, I get the feeling that D has too many features and not all of them work in harmony with the rest. So people always find small bugs, others suggest workarounds, and eventually people learn to program in a WDD way (workaround-driven development). Back to LPATHBench: I find things like minimallyInitializedArray and uninitializedArray, which are great for optimizing things, but it's sad that one has to use these special functions instead of regular ones (idiomatic code) to achieve better performance. Also, "uninitialized" sounds unsafe... And then you must compile your code with -noboundscheck to get more performance, but that's so unsafe... But then, both D and Nim have things which I dislike: too many built-in things. Static arrays, arrays, sequences, etc. Can't these just be implemented in D/Nim? Why the need for special built-in types with special operations? Anyway, just my opinion :-)
Re: Improving ddoc
On 1/1/15 2:35 PM, "Ola Fosheim Grøstad" wrote: On Thursday, 1 January 2015 at 17:19:09 UTC, Ary Borenszweig wrote: What's cross-library-indexing? You mean show documentation for many libraries at once? Yes, many libraries, source code with built-in links, links to GitHub with line numbers, docs for other languages when D wrappers are provided. The ideal is to propagate as much useful information as possible to a normalized universal format that makes it easy to build intelligent information systems using some kind of deduction. You should generate a database, not a document. HTML is underpowered and assumes that domain-specific semantic information is encoded in a different layer like RDF, but that leads to solutions that are overly complicated compared to a domain-specific XML format. I really don't understand why you say that. I just started writing a documentation generator for the Crystal programming language. http://crystal-lang.org/api/ You can read about its features here: http://crystal-lang.org/2014/12/31/crystal-0.5.6-release.html Now, I go here http://dlang.org/phobos/std_file.html and I can't believe that after 10+ years of development, D documentation still doesn't have a basic feature like inter-linking between types (I even once submitted a PR, but it wasn't accepted because other code was written on top of it and then the merge was hard to do). For example: http://dlang.org/phobos/std_file.html#.timeLastModified I should be able to click SysTime and go to that type's definition. But, DDoc can generate LaTeX. Then, take a look at Rust. Guess what they use for their documentation? Markdown! Here's a web framework called Iron: http://ironframework.io/ Here are the API docs: http://ironframework.io/doc/iron/ Let's take a look at enum.Method: http://ironframework.io/doc/iron/method/enum.Method.html See that red "String" text? Click it and it takes you here: http://doc.rust-lang.org/nightly/collections/string/struct.String.html Wow!
It took you to the String definition on an entirely different host! Back to Crystal's docs: below every method there's a "View Source" link that takes you to that code on GitHub. No such thing in D. Note that all of this was done with just Markdown (HTML) and by making the documentation generator understand and use the language as much as possible, something which DDoc doesn't do. I once submitted a PR to fix that, but it was ignored. I would suggest that D use a simple language to write the documentation, and then a powerful tool that understands the semantics of the language and lets you generate good-looking, easily browsable documentation. Neither JSON, XML, YAML nor macros are needed for that.
Re: Improving ddoc
On 1/1/15 1:23 PM, "Ola Fosheim Grøstad" wrote: On Thursday, 1 January 2015 at 15:01:13 UTC, Ary Borenszweig wrote: it is. There's no need for macros. There's no need to generate JSON, XML, YAML, PDF or anything other than HTML, which is quite universal and accessible now. You only need to generate XML with high-quality semantic markup for programming languages. From XML to other formats there are more options than this thread can handle... The semantics of HTML are too weak to build high-quality cross-library-indexing and precise search rankings. What's cross-library-indexing? You mean show documentation for many libraries at once?
Re: Improving ddoc
On 12/31/14 4:50 PM, Andrei Alexandrescu wrote: Hello, In wake of the recent discussions on improving ddoc syntax we're looking at doing something about it. Please discuss any ideas you might have here. Thanks! One simple starter would be to allow one escape character, e.g. the backtick (`), as a simple way to expand macros: instead of $(MACRO arg1, arg2) one can write `MACRO arg1, arg2`. Andrei 1. Add "* foo" syntax for lists 2. Add **bold** and __bold__ 3. Add *italic* and _italic_ 4. Make `some text` automatically link to other D code. For example `std.algorithm.any` would automatically link to http://dlang.org/phobos/std_algorithm.html#.any . This must work for types, functions, etc. If it doesn't resolve to a symbol, just put it inside ... 5. Make [text](url) denote a link. 6. Remove macros. Walter said: "Oh, Markdown can't be used to change the typography, generate TOCs, etc.?". Well, you don't need to do those things. Changing the typography will make it look ugly. You need a TOC? That's the job of the documentation tool (the binary), not the documentation syntax. Basically, use Markdown :-) Keep DDoc as it is now. Use it for your website if you want, to write books or whatever. But for documentation don't use it as it is. There's no need for macros. There's no need to generate JSON, XML, YAML, PDF or anything other than HTML, which is quite universal and accessible now. I repeat: keep DDoc, enhance it, use it as dog food for your websites, books, etc. Use something simpler and less powerful for documenting types and functions.
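To make points 1-5 above concrete, here is a sketch of what such a doc comment could look like. The exact syntax is my own invention, not an agreed-upon design, and the symbol names are only illustrative; this is a markup fragment, not compilable code:

```d
/**
 * Checks whether any element of the range satisfies the predicate.
 * Here `std.algorithm.find` would render as an automatic link to
 * that symbol (point 4), and the list below uses point 1:
 *
 * * Returns as soon as a match is found
 * * **Stops** iterating at the first match (point 2 for bold)
 *
 * See [the range overview](http://dlang.org/phobos/std_range.html)
 * for background (point 5 for links).
 */
bool any(alias pred, Range)(Range range);
```

The point of the sketch is that every construct is readable as plain text in the source, with no macro table to consult.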
Re: Worst Phobos documentation evar!
On 12/31/14 7:46 PM, Dicebot wrote: On Wednesday, 31 December 2014 at 22:41:41 UTC, Ary Borenszweig wrote: You are right. I browsed some Phobos code and saw the documentation; it looks clean and nice. The only exception is std.algorithm, which is full of macros and barely readable. So where is that other macro code? The one that has $(BLANKLINE) or $(COMMA) or $(DASH), and why is it needed? IMHO the biggest issue is not inline documentation for functions but more higher-level stuff like http://dlang.org/arrays.html - it was all in DDOC too last time I checked and changing anything about it is hardly a pleasure. Inline documentation only suffers when more pretty stuff like tables gets added. It looks quite clean to me: https://github.com/D-Programming-Language/dlang.org/blob/master/arrays.dd Except for the fact that instead of the familiar HTML tags there are macros. I see macros for: Paragraphs: $(P) Lists: $(UL), $(LI) Links: $(HTTP) These are very, very, very common in documentation and sites. Couldn't DDoc provide nice, readable ways of dealing with these?
Re: Worst Phobos documentation evar!
On 12/31/14 7:43 PM, Ary Borenszweig wrote: On 12/31/14 7:14 PM, Walter Bright wrote: On 12/31/2014 1:23 PM, Andrei Alexandrescu wrote: And it's no wonder why there are so many alternatives (rake, nake, etc.) ... Neither of which successful :o) I googled nake and couldn't find any references to it. Oh, it's for Nimrod (Now Nim) This one: https://github.com/fowlmouth/nake I'm not saying that it's popular, but people keep inventing things to avoid makefiles (and I think it's good to have this for a language).
Re: Worst Phobos documentation evar!
On 12/31/14 6:46 PM, Walter Bright wrote: On 12/31/2014 12:55 PM, Ary Borenszweig wrote: Definitely, because Markdown is not a macro system, it's a documentation tool. I write a lot of documentation. A macro system has saved enormous amounts of effort. Night and day, really. Not having a macro system is like using a programming language that does not have functions. Ketmar, for example, complained mightily about Ddoc markup. I suggested he simply email me the Phobos documentation he wants to write, and I'd mark it up for him and submit the PRs. He admitted he is not interested in actually writing any documentation. Ddoc is not the real issue, at least for him. I'll extend the same offer to you. Email me the fixed Phobos docs. I'll mark them up and submit PRs. The thing is, with Ddoc you don't actually have to write any markup. You can write: /*** This function does blah, blah, blah. Blah, blah, blah is an amazing algorithm invented by I. M. Nerdly. Params: x = the awesome input value Returns: insightful description of the return value Example: --- int foo(int x) { ... stunning D code ... } --- ***/ You are right. I browsed some Phobos code and saw the documentation; it looks clean and nice. The only exception is std.algorithm, which is full of macros and barely readable. So where is that other macro code? The one that has $(BLANKLINE) or $(COMMA) or $(DASH), and why is it needed?
Re: Worst Phobos documentation evar!
On 12/31/14 7:14 PM, Walter Bright wrote: On 12/31/2014 1:23 PM, Andrei Alexandrescu wrote: And it's no wonder why there are so many alternatives (rake, nake, etc.) ... Neither of which successful :o) I googled nake and couldn't find any references to it. Oh, it's for Nimrod (Now Nim)
Re: Worst Phobos documentation evar!
On 12/27/14 10:00 PM, Walter Bright wrote: This is so bad there isn't even a direct link to it, it hides in shame. Just go here: http://dlang.org/phobos/std_encoding.html#.transcode and scroll up one entry. Here it is: size_t encode(Tgt, Src, R)(in Src[] s, R range); Encodes c in units of type E and writes the result to the output range R. Returns the number of Es written. Let me enumerate the awesomeness of its awfulness: 1. No 'Return:' block, though it obviously returns a value. 2. No 'Params:' block, though it obviously has parameters. 3. No 'Example:' block 4. No comparison with other 'encode' functions in the same module. 5. No description of what 'Tgt' is. 6. No description of what 'Src' is. 7. No clue where the variable 'c' comes from. 8. No clue where the type 'E' comes from. 9. 'R' is a type, not an instance. 10. I suspect it has something to do with UTF encodings, but there is no clue. This one is missing some docs too: http://dlang.org/phobos/std_math.html#abs
Re: Worst Phobos documentation evar!
On 12/29/14 10:38 PM, Andrei Alexandrescu wrote: On 12/29/14 2:30 PM, Dicebot wrote: DDOC is probably one of D features with pretty idea and hardly usable design. I wish we had something like Markdown instead - can never remember Phobos macros to use and usually just resort to using plain text instead. I'm with Walter here - I find Markdown and its ilk inferior to macro systems. -- Andrei Definitely, because Markdown is not a macro system, it's a documentation tool.
Re: Worst Phobos documentation evar!
On 12/31/14 4:09 PM, Walter Bright wrote: On 12/31/2014 7:03 AM, Jacob Carlborg wrote: On 2014-12-30 00:52, Walter Bright wrote: (And I should ask, what if you wanted a | in the Markdown?) Just type a |. You don't need to escape most Markdown symbols in the middle of text. And when you want a | in a table entry? Most of the time it's not a problem, see above. And when it is, how do you escape them? Backslash character.
Re: Worst Phobos documentation evar!
On 12/31/14 5:33 PM, Walter Bright wrote: On 12/31/2014 11:59 AM, Anon wrote: A backslash. Y'know, the unambiguous, familiar-to-all-programmers, really-hard-to-mistype thing that almost everything but HTML and DDoc use for escaping? Yeah, the reason that people have added WYSIWYG string literals to languages :-) Which still look readable (and the backslash is just one character of noise so it's also not that bad).
Re: Worst Phobos documentation evar!
On 12/31/14 4:07 PM, Walter Bright wrote: On 12/31/2014 5:03 AM, Ary Borenszweig wrote: And it's no wonder why there are so many alternatives (rake, nake, etc.) Which one has a better text macro system? A real programming language without text macro systems.
Re: Worst Phobos documentation evar!
On 12/31/14 4:14 PM, Walter Bright wrote: On 12/31/2014 6:29 AM, Jacob Carlborg wrote: On 2014-12-30 01:10, Walter Bright wrote: It's not a hack. The macro system is designed to work that way. All markup systems require some sort of escape mechanism. Including Markdown. You don't need to escape all the special symbols used in Markdown, only in certain places. In Markdown, if you start a line with a star, '*', it will be interpreted as the beginning of an unordered list. But if you write a star in the middle of text it will just output a star, as expected. I know that Markdown formatting is context sensitive. And what happens if you want to have a * at the beginning of the line of output? And a | in a table entry? And so on for each of the context-sensitive things? You use a backslash character, like in almost every programming language, JSON, etc. http://daringfireball.net/projects/markdown/syntax#backslash Compare \* to $(STAR)
Re: Worst Phobos documentation evar!
On 12/31/14 9:17 AM, Jacob Carlborg wrote: On 2014-12-30 02:51, Walter Bright wrote: (And actually, the Ddoc macro system very closely resembles the one used by make, as that is a simple and effective one, well known by programmers.) "make" has to be the worst tool ever created. It's not just me that has that opinion [1]. That you even consider this a positive argument baffles me. Or rather not: if you like "make" I can see why you like Ddoc. [1] http://www.conifersystems.com/whitepapers/gnu-make/ Agreed. I try to avoid makefiles as much as I can. And it's no wonder why there are so many alternatives (rake, nake, etc.)
Re: Worst Phobos documentation evar!
On 12/30/14 3:57 PM, ketmar via Digitalmars-d wrote: On Tue, 30 Dec 2014 13:18:05 + Russel Winder via Digitalmars-d wrote: Markdown is inadequate for more than single page HTML which is exactly what API reference documentation is! a list of functions with explanations, some samples and a brief overview. this is why markdown-like language is a good choice. stop writing Charles Dickens' novels in source code, please! ;-) Yes, exactly. If you need to add special HTML beyond what Markdown offers you, then you are doing it wrong. My question is: why do D docs need more than the basics?
Re: Worst Phobos documentation evar!
On 12/29/14 10:49 PM, ketmar via Digitalmars-d wrote: On Mon, 29 Dec 2014 15:49:10 -0800 Walter Bright via Digitalmars-d wrote: On 12/29/2014 2:40 PM, Adam D. Ruppe wrote: Ddoc isn't too bad, but trying to document examples in dom.d turned into a mess of /// finds $(LT)foo/$(GT) quickly and I couldn't stand it. I'd make a macro: XML=$(LT)$0/$(GT) I use custom macros all the time in Ddoc. If you aren't, you're not doing it right :-) that's why ddoc is completely unusable either for reading "as is" or for generating separate documentation. i was very excited about built-in documentation generator in D, and now i'm not using it at all. i rarely generating stand-alone docs, they are just not handy for me. i prefer to read documentation right in the source (yet i still want to have an option to generate stand-alone files). did you tried to read Phobos documentation in Phobos sources? those macros are pure visual noise. i don't mind if D will understand one of the Markdown variants, or textile, or rss -- anything that is READABLE without preprocessing, yet can be easily processed to another format. i don't mind learning another markdown dialect if i can easily read it without preprocessing. that's why i'm not using doxygen too: it's noisy. seems that most document generator authors are sure that only generated documentation matters, so source documentation can be of any uglyness. yet if documentation is hard to read without preprocessor, it is hard to write it too! so people will tend to avoid writing it, and they will especially avoid polishing it, 'cause it's write-only, contaminated and hard to fix. D documentation WILL be bad until ddoc will start to understand some markdown-like mostly macro-free markup language. Those are exactly my thoughts.
Re: Worst Phobos documentation evar!
On 12/29/14 8:49 PM, Walter Bright wrote: On 12/29/2014 2:40 PM, Adam D. Ruppe wrote: Ddoc isn't too bad, but trying to document examples in dom.d turned into a mess of /// finds $(LT)foo/$(GT) quickly and I couldn't stand it. I'd make a macro: XML=$(LT)$0/$(GT) I use custom macros all the time in Ddoc. If you aren't, you're not doing it right :-) Macros are for code, not for documentation. When wanting to contribute documentation you'll have to learn which macros the author defined and which ones to use. Again, this makes it harder to write docs, not easier.
Re: Worst Phobos documentation evar!
On 12/27/14 10:00 PM, Walter Bright wrote: This is so bad there isn't even a direct link to it, it hides in shame. Just go here: http://dlang.org/phobos/std_encoding.html#.transcode and scroll up one entry. Here it is: size_t encode(Tgt, Src, R)(in Src[] s, R range); Encodes c in units of type E and writes the result to the output range R. Returns the number of Es written. Let me enumerate the awesomeness of its awfulness: 1. No 'Return:' block, though it obviously returns a value. 2. No 'Params:' block, though it obviously has parameters. 3. No 'Example:' block 4. No comparison with other 'encode' functions in the same module. 5. No description of what 'Tgt' is. 6. No description of what 'Src' is. 7. No clue where the variable 'c' comes from. 8. No clue where the type 'E' comes from. 9. 'R' is a type, not an instance. 10. I suspect it has something to do with UTF encodings, but there is no clue. After programming in Ruby for a long time (and I think in Python it's the same) I came to the conclusion that all the sections (Return, Params, Example) just make writing the documentation a harder task. For example: ~~~ /* * Returns a lowered-case version of a string. * * Params: * - x: the string to be lowered case * Return: the string in lower cases */ string lowercase(string x) ~~~ It's kind of redundant. True, there might be something more to say about the parameters or the return value, but if you are reading the documentation then you might as well read a whole paragraph explaining everything about it. For example, this is the documentation for the String#downcase method in Ruby: ~~~ def downcase(str) Returns a copy of `str` with all uppercase letters replaced with their lowercase counterparts. The operation is locale insensitive—only characters “A” to “Z” are affected. Note: case replacement is effective only in ASCII region. "hEllO".downcase #=> "hello" ~~~ Note how the documentation directly mentions the parameters. 
There's also an example snippet there, that is denoted by indenting code (similar to Markdown). I think it would be much better to use Markdown for the documentation, as it is so popular and easy to read (although not that easy to parse). Then it would be awesome if the documentation could be smarter, providing semi-automatic links. For example writing "#string.join" would create a link to that function in that module (the $(XREF ...) is very noisy and verbose).
Re: What is the D plan's to become a used language?
On 12/23/14, 2:08 PM, Adam D. Ruppe wrote: On Tuesday, 23 December 2014 at 17:01:13 UTC, Russel Winder via Digitalmars-d wrote: If there was a way of mocking (so that you can run integration tests without the actual network) With my cgi.d, I made a command line interface that triggers the library the same as a network does. This works even if you compile it with the embedded http server: $ ./hellocgi GET / foo=bar Cache-Control: private, no-cache="set-cookie" Expires: 0 Pragma: no-cache Content-Type: text/html; charset=utf-8 hello It was really easy for me to implement and it gives easy access to any function for testing and debugging. It also lets me use the shell as a kind of web console too. This is something that often amazes me that everyone doesn't do. This looks interesting. How do you specify what the mock should respond with?
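I can't speak for cgi.d's actual API, but the idea Adam describes is easy to sketch in another language. The names here (`handle`, the argument layout) are my own invention: the same handler function backs both a real server and a command-line entry point, so "mocking" the network is just a direct function call.

```ruby
# A toy request handler; in a real app an HTTP server would route to
# this. It can also be driven straight from the shell, cgi.d-style:
#   ruby app.rb GET / foo=bar
def handle(method, path, params)
  { status: 200, body: "hello #{params['foo'] || 'world'}" }
end

if __FILE__ == $0
  method, path, *pairs = ARGV
  params = pairs.to_h { |p| p.split('=', 2) }   # "foo=bar" -> {"foo"=>"bar"}
  puts handle(method, path, params)[:body]
end
```

Tests then call `handle` directly with fabricated requests, with no sockets involved, which is exactly the property being praised.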
Re: Do everything in Java…
On 12/5/14, 12:11 PM, Chris wrote: On Friday, 5 December 2014 at 15:03:39 UTC, Ary Borenszweig wrote: On 12/5/14, 9:42 AM, Chris wrote: On Friday, 5 December 2014 at 12:06:55 UTC, Nemanja Boric wrote: The good thing about unit tests is that they tell you when you break existing code. That's the great thing about unittests, and the reason why I write unittests. I work on a fairly complex code base and every now and then there's a new feature requested. Implementing features involves several to dozen of modules to be changed, and there's no way that I could guarantee that feature implementation didn't change behaviour of the existing code. I both hate and love when I `make` compiles and unittest fails. But you'll realize that soon enough anyway. This is not good enough for me. Sometimes "soon enough" means week or two before somebody actually notice the bug in the implementation (again, very complex project that's simply not hand-testable), and that's definitively not soon enough keeping in mind amount of $$$ that you wasted into air. On Friday, 5 December 2014 at 11:53:11 UTC, Chris wrote: On Friday, 5 December 2014 at 09:27:16 UTC, Paulo Pinto wrote: On Friday, 5 December 2014 at 02:25:20 UTC, Walter Bright wrote: On 12/4/2014 5:32 PM, ketmar via Digitalmars-d wrote: http://www.teamten.com/lawrence/writings/java-for-everything.html i didn't read the article, but i bet that this is just another article about his language of preference and how any other language he tried doesn't have X or Y or Z. and those X, Y and Z are something like "not being on market for long enough", "vendor ACME didn't ported ACMElib to it", "out staff is trained in G but not in M" and so on. boring. From the article: "Most importantly, the kinds of bugs that people introduce most often aren’t the kind of bugs that unit tests catch. With few exceptions (such as parsers), unit tests are a waste of time." Not my experience with unittests, repeated over decades and with different languages. 
Unit tests are a huge win, even with statically typed languages. Yes, but they cannot test everything. GUI code is especially ugly as it requires UI automation tooling. They do exist, but only enterprise customers are willing to pay for it. This is why WPF has UI automation built-in. The biggest problem with unit tests are managers that want to see shiny reports, like those produced by tools like Sonar. Teams then spend a ridiculous amount of time writing superfluous unit tests just to match milestone targets. Just because code has tests, doesn't mean the tests are testing what they should. But if they reach the magical percentage number then everyone is happy. -- Paulo Now is the right time to confess. I hardly ever use unit tests although it's included (and encouraged) in D. Why? When I write new code I "unit test" as I go along, with debug writefln("result %s", result); and stuff like this. Stupid? Unprofessional? I don't know. It works. I once started to write unit tests only to find out that indeed they don't catch bugs, because you only put into unit tests what you know (or expect) at a given moment (just like the old writefln()). The bugs I, or other people, discover later would usually not be caught by unit tests simply because you write for your own expectations at a given moment and don't realize that there are millions of other ways to go astray. So the bugs are usually due to a lack of imagination or a tunnel vision at the moment of writing code. This will be reflected in the unit tests as well. So why bother? You merely enshrine your own restricted and circular logic in "tests". Which reminds me of maths when teachers would tell us "And see, it makes perfect sense!", yeah, because they laid down the rules themselves in the first place. The same goes for comparing your output to some "gold standard". The program claims to have an accuracy of 98%. Sure, because you wrote for the gold standard and not for the real world where it drastically drops to 70%.
The good thing about unit tests is that they tell you when you break existing code. But you'll realize that soon enough anyway. Yes, yes, yes. Unit tests can be useful in cases like this. But I don't think that they are _the_ way to cope with bugs. It's more like "stating the obvious", and bugs are hardly ever obvious, else they wouldn't be bugs. Unit tests are not for detecting bugs. They are only useful for: 1. Making sure things work (a bit). 2. Making sure things continue to work when you refactor or introduce new code. 3. When a new bug is found you can write a test for it that will make that bug impossible to ever resurrect. 4. Show how code is supposed to be used. Again, their purpose is not to detect bugs, but to build more robust software.
Re: Do everything in Java…
On 12/5/14, 9:42 AM, Chris wrote: On Friday, 5 December 2014 at 12:06:55 UTC, Nemanja Boric wrote: The good thing about unit tests is that they tell you when you break existing code. That's the great thing about unittests, and the reason why I write unittests. I work on a fairly complex code base and every now and then there's a new feature requested. Implementing features involves several to dozen of modules to be changed, and there's no way that I could guarantee that feature implementation didn't change behaviour of the existing code. I both hate and love when I `make` compiles and unittest fails. But you'll realize that soon enough anyway. This is not good enough for me. Sometimes "soon enough" means week or two before somebody actually notice the bug in the implementation (again, very complex project that's simply not hand-testable), and that's definitively not soon enough keeping in mind amount of $$$ that you wasted into air. On Friday, 5 December 2014 at 11:53:11 UTC, Chris wrote: On Friday, 5 December 2014 at 09:27:16 UTC, Paulo Pinto wrote: On Friday, 5 December 2014 at 02:25:20 UTC, Walter Bright wrote: On 12/4/2014 5:32 PM, ketmar via Digitalmars-d wrote: http://www.teamten.com/lawrence/writings/java-for-everything.html i didn't read the article, but i bet that this is just another article about his language of preference and how any other language he tried doesn't have X or Y or Z. and those X, Y and Z are something like "not being on market for long enough", "vendor ACME didn't ported ACMElib to it", "out staff is trained in G but not in M" and so on. boring. From the article: "Most importantly, the kinds of bugs that people introduce most often aren’t the kind of bugs that unit tests catch. With few exceptions (such as parsers), unit tests are a waste of time." Not my experience with unittests, repeated over decades and with different languages. Unit tests are a huge win, even with statically typed languages. Yes, but they cannot test everything. 
GUI code is especially ugly as it requires UI automation tooling. They do exist, but only enterprise customers are willing to pay for it. This is why WPF has UI automation built-in. The biggest problem with unit tests are managers that want to see shiny reports, like those produced by tools like Sonar. Teams then spend a ridiculous amount of time writing superfluous unit tests just to match milestone targets. Just because code has tests, doesn't mean the tests are testing what they should. But if they reach the magical percentage number then everyone is happy. -- Paulo Now is the right time to confess. I hardly ever use unit tests although it's included (and encouraged) in D. Why? When I write new code I "unit test" as I go along, with debug writefln("result %s", result); and stuff like this. Stupid? Unprofessional? I don't know. It works. I once started to write unit tests only to find out that indeed they don't catch bugs, because you only put into unit tests what you know (or expect) at a given moment (just like the old writefln()). The bugs I, or other people, discover later would usually not be caught by unit tests simply because you write for your own expectations at a given moment and don't realize that there are millions of other ways to go astray. So the bugs are usually due to a lack of imagination or a tunnel vision at the moment of writing code. This will be reflected in the unit tests as well. So why bother? You merely enshrine your own restricted and circular logic in "tests". Which reminds me of maths when teachers would tell us "And see, it makes perfect sense!", yeah, because they laid down the rules themselves in the first place. The same goes for comparing your output to some "gold standard". The program claims to have an accuracy of 98%. Sure, because you wrote for the gold standard and not for the real world where it drastically drops to 70%. The good thing about unit tests is that they tell you when you break existing code.
But you'll realize that soon enough anyway. Yes, yes, yes. Unit tests can be useful in cases like this. But I don't think that they are _the_ way to cope with bugs. It's more like "stating the obvious", and bugs are hardly ever obvious, else they wouldn't be bugs. Unit tests are not for detecting bugs. They are only useful for: 1. Making sure things work (a bit). 2. Making sure things continue to work when you refactor or introduce new code. 3. When a new bug is found you can write a test for it that will make that bug impossible to ever resurrect. 4. Show how code is supposed to be used. Again, their purpose is not to detect bugs, but to build more robust software.
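Point 3 in the list above deserves an example. The function below is entirely hypothetical (my own invention, not from any of the posts), but the pattern is what matters: once a bug is found and fixed, a test pins the fixed behaviour down so the bug can never silently resurrect.

```ruby
# Hypothetical bug: strip_comment used to cut at the first '#'
# even when it appeared inside double quotes. After the fix, the
# assertions below guard against the bug ever coming back.
def strip_comment(line)
  in_quote = false
  line.each_char.with_index do |c, i|
    in_quote = !in_quote if c == '"'
    return line[0...i].rstrip if c == '#' && !in_quote
  end
  line
end

# The regression tests: the second one is precisely the old bug.
raise "regression" unless strip_comment('x = 1  # set x') == 'x = 1'
raise "regression" unless strip_comment('s = "a#b"') == 's = "a#b"'
```

Note that these tests don't *find* the bug; a user did. They only make sure a future refactoring can't reintroduce it, which is exactly the distinction the post draws.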
Re: Do everything in Java…
On 12/5/14, 8:53 AM, Chris wrote: On Friday, 5 December 2014 at 09:27:16 UTC, Paulo Pinto wrote: On Friday, 5 December 2014 at 02:25:20 UTC, Walter Bright wrote: On 12/4/2014 5:32 PM, ketmar via Digitalmars-d wrote: Now is the right time to confess. I hardly ever use unit tests although it's included (and encouraged) in D. Why? When I write new code I "unit test" as I go along, with debug writefln("result %s", result); and stuff like this. Stupid? Unprofessional? I don't know. It works. You should try writing a compiler without unit tests.
Re: Do everything in Java…
On 12/4/14, 10:47 AM, Russel Winder via Digitalmars-d wrote: It's an argument for Java over Python specifically but a bit more general in reality. This stood out for me: "…other languages like D and Go are too new to bet my work on." http://www.teamten.com/lawrence/writings/java-for-everything.html Very interesting read. But the world of humans still has time to grow and evolve, and humans always try to do better, you can't stop that. He says Java is verbose and "so what?". Well, couldn't it be less verbose and still be that good? Could you be very DRY (Don't Repeat Yourself) in a language that's statically typed, but with good type inference and very good performance, superior to those of VM languages? Yes, you can. You shouldn't stop there. OK, use Java now, but don't stop there. Try to think of new ideas, new languages. At least as a hobby. If Python makes you happy and Java not, but Java gets the work done, who cares? I don't want to spend my time in the world being unhappy but doing work (which probably isn't for my own utility, and probably isn't for anyone's *real* utility), I'd rather be happy. Just my 2 cents :-)
Re: Do everything in Java…
On 12/4/14, 2:11 PM, Ary Borenszweig wrote: On 12/4/14, 10:47 AM, Russel Winder via Digitalmars-d wrote: It's an argument for Java over Python specifically but a bit more general in reality. This stood out for me: "…other languages like D and Go are too new to bet my work on." http://www.teamten.com/lawrence/writings/java-for-everything.html Very interesting read. But the world of humans still has time to grow and evolve, and humans always try to do better, you can't stop that. He says Java is verbose and "so what?". Well, couldn't it be less verbose and still be that good? Could you be very DRY (Don't Repeat Yourself) in a language that's statically typed, but with good type inference and very good performance, superior to those of VM languages? Yes, you can. You shouldn't stop there. OK, use Java now, but don't stop there. Try to think of new ideas, new languages. At least as a hobby. If Python makes you happy and Java not, but Java gets the work done, who cares? I don't want to spend my time in the world being unhappy but doing work (which probably isn't for my own utility, and probably isn't for anyone's *real* utility), I'd rather be happy. Just my 2 cents :-) Like, cool, Java helped Twitter improve their search engine. Yes, Twitter has some real value for humanity.
Re: Overload using nogc
On 11/21/14, 12:36 AM, Jonathan Marler wrote: Has the idea of function overloading via nogc been explored?

~~~
void func() @nogc
{
    // logic that does not use GC
}
void func()
{
    // logic that uses GC
}
void main(string[] args) // @nogc
{
    // if main is @nogc, then the @nogc version of func
    // will be called, otherwise, the GC version will be
    func();
}
~~~

This could be useful for the standard library to expose different implementations based on whether or not the application is using the GC. If you have a version that doesn't use the GC, what's the reason to prefer one that uses it?
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On 11/21/14, 1:32 PM, Andrei Alexandrescu wrote: On 11/21/14 6:17 AM, Ary Borenszweig wrote: On 11/21/14, 5:45 AM, Walter Bright wrote: On 11/21/2014 12:10 AM, bearophile wrote: Walter Bright: All you're doing is trading 0 crossing for 0x7FFF crossing issues, and pretending the problems have gone away. I'm not pretending anything. I am asking which of the two solutions leads to fewer problems/bugs in practical programming. So far I've seen the unsigned solution and I've seen it's highly bug-prone. I'm suggesting that having a bug and detecting the bug are two different things. The 0-crossing bug is easier to detect, but that doesn't mean that shifting the problem to 0x7FFF crossing bugs is making the bug count less. BTW, granted the 0x7FFF problems exhibit the bugs less often, but paradoxically this can make the bug worse, because then it only gets found much, much later in supposedly tested & robust code. Is this true? Do you have some examples of buggy code? http://googleresearch.blogspot.com/2006/06/extra-extra-read-all-about-it-nearly.html "This bug can manifest itself for arrays whose length (in elements) is 2^30 or greater (roughly a billion elements)" How often does that happen in practice? Every time you read a DVD image :o). I should say that in my doctoral work it was often the case I'd have very large arrays. Oh, sorry, I totally forgot that when you open a DVD with VLC it reads the whole thing to memory.
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On 11/21/14, 11:47 AM, ketmar via Digitalmars-d wrote: On Fri, 21 Nov 2014 11:17:06 -0300 Ary Borenszweig via Digitalmars-d wrote: "This bug can manifest itself for arrays whose length (in elements) is 2^30 or greater (roughly a billion elements)" How often does that happen in practice? once in almost ten years is too often, as for me. i think that the answer must be "never". either no bug, or the code is broken. and one of the worst kinds of code is the code that "works most of the time", but is still broken. You see, if you don't use a BigNum for everything then you will always have hidden bugs, be it with int, uint or whatever. The thing is that with int, bugs are much less frequent than with uint. So I don't know why you'd rather have uint than int...
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On 11/21/14, 11:29 AM, ketmar via Digitalmars-d wrote: On Fri, 21 Nov 2014 19:31:23 +1100 Daniel Murphy via Digitalmars-d wrote: "bearophile" wrote in message news:lkcltlokangpzzdzz...@forum.dlang.org... From my experience in coding in D they are far more unlikely than sign-related bugs of array lengths. Here's a simple program to calculate the relative size of two files, that will not work correctly with unsigned lengths.

~~~
module sizediff;

import std.file;
import std.stdio;

void main(string[] args)
{
    assert(args.length == 3, "Usage: sizediff file1 file2");
    auto l1 = args[1].read().length;
    auto l2 = args[2].read().length;
    writeln("Difference: ", l1 - l2);
}
~~~

The two ways this can fail (that I want to highlight) are: 1. If either file is too large to fit in a size_t the result will (probably) be wrong 2. If file2 is bigger than file1 the result will be wrong If length was signed, problem 2 would not exist, and problem 1 would be more likely to occur. I think it's clear that signed lengths would work for more possible realistic inputs. no, the problem 2 just becomes hidden. while the given code works most of the time, it is still broken. So how would you solve problem 2?
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On 11/21/14, 5:45 AM, Walter Bright wrote: On 11/21/2014 12:10 AM, bearophile wrote: Walter Bright: All you're doing is trading 0 crossing for 0x7FFF crossing issues, and pretending the problems have gone away. I'm not pretending anything. I am asking which of the two solutions leads to fewer problems/bugs in practical programming. So far I've seen the unsigned solution and I've seen it's highly bug-prone. I'm suggesting that having a bug and detecting the bug are two different things. The 0-crossing bug is easier to detect, but that doesn't mean that shifting the problem to 0x7FFF crossing bugs is making the bug count less. BTW, granted the 0x7FFF problems exhibit the bugs less often, but paradoxically this can make the bug worse, because then it only gets found much, much later in supposedly tested & robust code. Is this true? Do you have some examples of buggy code? http://googleresearch.blogspot.com/2006/06/extra-extra-read-all-about-it-nearly.html "This bug can manifest itself for arrays whose length (in elements) is 2^30 or greater (roughly a billion elements)" How often does that happen in practice?
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On 11/20/14, 5:02 AM, Walter Bright wrote: On 11/19/2014 5:03 PM, H. S. Teoh via Digitalmars-d wrote: If this kind of unsafe mixing wasn't allowed, or required explict casts (to signify "yes I know what I'm doing and I'm prepared to face the consequences"), I suspect that bearophile would be much happier about this issue. ;-) Explicit casts are worse than the problem - they can easily cause bugs. As for me personally, I like having a complete set of signed and unsigned integral types at my disposal. It's like having a full set of wrenches that are open end on one end and boxed on the other :-) Most of the time either end will work, but sometimes only one will. Now, if D were a non-systems language like Basic, Go or Java, unsigned types could be reasonably dispensed with. But D is a systems programming language, and it ought to have available types that match what the hardware supports. Nobody is saying to remove unsigned types from the language. They have their uses. It's just that using them for an array's length leads to subtle bugs. That's all.
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On 11/20/14, 6:47 AM, Andrei Alexandrescu wrote: On 11/20/14 12:18 AM, Don wrote: On Wednesday, 19 November 2014 at 17:55:26 UTC, Andrei Alexandrescu wrote: On 11/19/14 6:04 AM, Don wrote: Almost everybody seems to think that unsigned means positive. It does not. That's an exaggeration. With only a bit of care one can use D's unsigned types for positive numbers. Please let's not reduce the matter to black and white. Andrei Even the responses in this thread indicate that about half of the people here don't understand unsigned. "unsigned" means "I want to use modulo 2^^n arithmetic". It does not mean "this is an integer which cannot be negative". Using modulo 2^^n arithmetic is *weird*. If you are using uint/ulong to represent a non-negative integer, you are using the incorrect type. "With only a bit of care one can use D's unsigned types for positive numbers." I do not believe that statement to be true. I believe that bugs caused by unsigned calculations are subtle and require an extraordinary level of diligence. I showed an example at DConf that I had found in production code. It's particularly challenging in D because of the widespread use of 'auto':

~~~
auto x = foo();
auto y = bar();
auto z = baz();
if (x - y > z) { ... }
~~~

This might be a bug, if one of these functions returns an unsigned type. Good luck finding that. Note that if all functions return unsigned, there isn't even any signed-unsigned mismatch. I believe the correct statement is "With only a bit of care one can use D's unsigned types for positive numbers and believe that one's code is correct, even though it contains subtle bugs." Well I'm sorry but I quite disagree. -- Andrei I don't think disagreeing without a reason (like the one Don gave above) is good. You could show us the benefits of unsigned types over signed types (possibly considering that not every program in the world needs an array with 2^64 elements).
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On 11/19/14, 9:54 PM, FrankLike wrote: How is that a bug? Can you provide some code that exhibits this? If you compile the dfl library to 64 bit, you will find this error: core.sys.windows.windows.WaitForMultipleObjects(uint nCount,void** lpHandles,) is not callable using argument types (ulong,void**,...) The 'WaitForMultipleObjects' function is in dmd2/src/druntime/src/core/sys/windows/windows.d. The first argument is dfl's value; it comes from a 'length', whose type is size_t, which is now 'ulong' on 64 bit. So druntime must stay consistent with Phobos on size_t. Or should it match the Windows API by changing the size_t to int? Sorry, maybe I wasn't clear. I asked "how a negative length can be a bug" (because you can't set a negative length, so it can't really happen).
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On 11/19/14, 10:21 AM, ketmar via Digitalmars-d wrote: On Wed, 19 Nov 2014 10:03:34 + Don via Digitalmars-d wrote: No! No! No! This is completely wrong. Unsigned does not mean "positive". It means "no sign", and therefore "wrapping semantics". eg length - 4 > 0, if length is 2. Weird consequence: using subtraction with an unsigned type is nearly always a bug. negative length is a bug too. How is that a bug? Can you provide some code that exhibits this?
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On 11/19/14, 1:46 PM, Andrei Alexandrescu wrote: On 11/19/14 2:03 AM, Don wrote: We have a builtin type that is deadly but seductive. I agree this applies to C and C++. Not quite to D. -- Andrei See my response to Don. Don't you think that's counter-intuitive?
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On 11/19/14, 7:03 AM, Don wrote: On Tuesday, 18 November 2014 at 18:23:52 UTC, Marco Leise wrote: > Weird consequence: using subtraction with an unsigned type is nearly always a bug. I wish D hadn't called unsigned integers 'uint'. They should have been called '__uint' or something. They should look ugly. You need a very, very good reason to use an unsigned type. We have a builtin type that is deadly but seductive. I agree. An array's length makes sense as an unsigned ("an array can't have a negative length, right?") but it leads to the bugs you say. For example:

~~~
import std.stdio;

void main() {
    auto a = [1, 2, 3];
    auto b = [1, 2, 3, 4];
    if (a.length - b.length > 0) {
        writeln("Can you spot the bug that easily?");
    }
}
~~~

Yes, it makes sense, but at the same time it leads to super unintuitive math operations being involved. Rust made the same mistake and now a couple of times I've seen bugs like these being reported. Never seen them in Java or .Net though. I wonder why...
Re: Optimization fun
On 11/7/14, 4:16 PM, H. S. Teoh via Digitalmars-d wrote: On Fri, Nov 07, 2014 at 04:06:44PM -0300, Ary Borenszweig via Digitalmars-d wrote: [...] Is the code public? I'd like to port it to other languages and see how they behave, see if this is a general problem or just something specific to D. [...] I haven't posted the code anywhere yet, but I could certainly send you a copy if you want. However, the test input I'm using is not public, so you'll have to craft your own input files for testing purposes. The public input files I do have are too simple to pose a significant challenge to the solver (plus, they default to BFS instead of A*, so you might get different results from what I described here). T Then I don't think I'll be able to do any useful benchmark/profile with that. Thanks anyway :-)
Re: Optimization fun
On 11/6/14, 7:58 PM, H. S. Teoh via Digitalmars-d wrote: So today, I was playing around with profiling and optimizing my sliding block puzzle solver, and found some interesting things: 1) The GC could use some serious improvement: it just so happens that the solver's algorithm only ever needs to allocate memory, never release it (it keeps a hash of visited states that only grows, never shrinks). The profiler indicated that one of the hotspots is the GC. So I added an option to the program to call GC.disable() if a command line option is given. The result is a 40%-50% reduction in running time. 2) Auto-decoding is evil: the solver currently uses a naïve char[] representation for the puzzle board, and uses countUntil() extensively to locate things on the board. The original code had:

~~~
auto offset = currentBoard.countUntil(ch);
~~~

which, of course, triggers autodecoding (even though there is actually no reason to do so: ch is a char, and therefore countUntil could've used strchr instead). Simply changing that to:

~~~
auto offset = currentBoard.representation.countUntil(ch);
~~~

gave an additional 20%-30% performance boost. 3) However, countUntil remains at the top of the profiler's list of hotspots. DMD's optimizer was rather disappointing here, so I looked at gdc's output instead. Digging into the disassembly, I saw that it was using a foreach loop over the char[], but for some reason even gdc -O3 failed to simplify that loop significantly (I'm not sure why -- maybe what threw off the optimizer is the array indexing + length check operation, where in C one would normally just bump the pointer instead?). Eventually, I tried writing a manual loop:

~~~
auto ptr = current.ptr;
auto c = current.length;
while (c > 0 && *ptr != m.id)
    ptr++;
cert.headOffset = ptr - current.ptr;
~~~

This pushed gdc far enough in the right direction that it finally produced a 3-instruction inner loop. Strangely enough, the assembly had no check for (c > 0)...
could it be because there is an assert following the loop claiming that that will never happen, therefore gdc elided the check? I thought Walter hasn't gotten around to implementing optimizations based on assert yet?? Anyway, with this optimization, I managed to shave off another 3%-5% of the total running time. On that note, at one point I was wondering why gdc -O3 didn't generate a "rep scasb" for the inner loop instead of manually incrementing the loop pointer; but a little research revealed that I'm about 6-7 years out of date: since about 2008 gcc's optimizer has not used "rep scasb" because on modern hardware it has atrocious performance -- much worse than a manually-written C loop! So much for "built-in" string processing that used to be touted in the old days. Yet another proof of the rule that overly-complex CPU instruction sets rarely actually perform better. Anyway, after everything is said and done, a puzzle that used to take about 20 seconds to solve now only takes about 6 seconds. W00t! T Hi, Is the code public? I'd like to port it to other languages and see how they behave, see if this is a general problem or just something specific to D. Thanks!
Re: Programming Language for Games, part 3
On 11/1/14, 8:31 AM, bearophile wrote: Third part of "A Programming Language for Games", by Jonathan Blow: https://www.youtube.com/watch?v=UTqZNujQOlA Discussions: http://www.reddit.com/r/programming/comments/2kxi89/jonathan_blow_a_programming_language_for_games/ His language seems to disallow comparisons of different types:

~~~
void main() {
    int x = 10;
    assert(x == 10.0); // Refused.
}
~~~

I like the part about compile-time tests for printf: http://youtu.be/UTqZNujQOlA?t=38m6s The same strategy is used to validate game data statically: http://youtu.be/UTqZNujQOlA?t=55m12s A screenshot for the printf case: http://oi57.tinypic.com/2m5b680.jpg That is called a linter. A general linter works on an abstract syntax tree, possibly with type annotations. His "linter" only works on functions. I guess he will extend it later, but he's not inventing anything new. My opinion is that he knows C++ a lot and he's tired of some of its problems, so he's inventing a language around them. I don't think that's a good way to design a language. D can run (some) stuff at compile time. Crystal can run (any) stuff at compile time. Rust too. Many modern languages have already understood that it is very important to run things at compile time, be it to generate code or to check things. I can understand his excitement because I got excited too when I was able to run stuff at compile time :-) About the bytecode he generates: as someone said in the reddit discussion, having to maintain two separate language implementations (compiled and interpreted) can lead to small and subtle bugs. And running code via an interpreter is slower than compiled code, even if the interpreter is really good. So I don't think the bytecode stuff is a really good idea. Also, why have a dynamic array as a built-in? You can implement it yourself with pointers...
Re: Program logic bugs vs input/environmental errors
On 9/27/14, 8:15 PM, Walter Bright wrote: This issue comes up over and over, in various guises. I feel like Yosemite Sam here: https://www.youtube.com/watch?v=hBhlQgvHmQ0 In that vein, Exceptions are for either being able to recover from input/environmental errors, or report them to the user of the application. When I say "They are NOT for debugging programs", I mean they are NOT for debugging programs. assert()s and contracts are for debugging programs. Here's another +1 for exceptions. I want to add a slash command to Slack (https://slack.zendesk.com/hc/en-us/articles/201259356-Slash-Commands). So, for example, when I say: /bot random phrase This hits a web server that processes that request and returns a random phrase. Now, imagine I have an assert in my application. When the web server hits the assertion it shuts down and the user doesn't get a response. What I'd like to do is to trap that assertion, tell the user that there's a problem, and send me an email telling me to debug it and fix it. That way the user can continue using the bot and meanwhile I can fix the bug. In the real world, where you don't want unhappy users, asserts don't work. Walter: how can you do that with an assertion triggering?
Re: RFC: std.json sucessor
On 10/18/14, 4:53 PM, Sean Kelly wrote: On Friday, 17 October 2014 at 18:27:34 UTC, Ary Borenszweig wrote: Once it's done you can compare its performance against other languages with this benchmark: https://github.com/kostya/benchmarks/tree/master/json Wow, the C++ Rapid parser is really impressive. I threw together a test with my own parser for comparison, and Rapid still beat it. It's the first parser I've encountered that's faster.

~~~
Ruby        0.4995479721139979  0.49977992077421846  0.49981146157805545   7.53s, 2330.9Mb
Python      0.499547972114      0.499779920774       0.499811461578       12.01s, 1355.1Mb
C++ Rapid   0.499548            0.49978              0.499811              1.75s, 1009.0Mb
JEP (mine)  0.49954797          0.49977992           0.49981146            2.38s,  203.4Mb
~~~

Yes, C++ Rapid seems to be really, really fast. It has some SSE2/SSE4-specific optimizations and I guess a lot more. I have to investigate more in order to do something similar :-)
Re: RFC: std.json sucessor
On 8/21/14, 7:35 PM, Sönke Ludwig wrote: Following up on the recent "std.jgrandson" thread [1], I've picked up the work (a lot earlier than anticipated) and finished a first version of a loose blend of said std.jgrandson, vibe.data.json and some changes that I had planned for vibe.data.json for a while. I'm quite pleased by the results so far, although without a serialization framework it still misses a very important building block. Code: https://github.com/s-ludwig/std_data_json Docs: http://s-ludwig.github.io/std_data_json/ DUB: http://code.dlang.org/packages/std_data_json Destroy away! ;) [1]: http://forum.dlang.org/thread/lrknjl$co7$1...@digitalmars.com Once it's done you can compare its performance against other languages with this benchmark: https://github.com/kostya/benchmarks/tree/master/json
Re: Program logic bugs vs input/environmental errors
On 10/15/14, 4:25 AM, Walter Bright wrote: On 10/14/2014 11:23 PM, Jacob Carlborg wrote: On 2014-10-15 07:57, Walter Bright wrote: Why do you need non-fatal unittests? I don't know if this would cause problems with the current approach. But most unit test frameworks do NOT stop on the first failure, as D does. They catch the exception, continue with the next test and in the end print a final report. I understand that, but I don't think that is what Dicebot is looking for. He's looking to recover from unittests, not just continue. I think this means you can't get stack traces for exceptions thrown in unit tests, right?
Re: Will D ever get optional named parameters?
On 10/13/14, 4:18 PM, Walter Bright wrote: On 10/13/2014 7:23 AM, Ary Borenszweig wrote: On 10/13/14, 5:47 AM, Walter Bright wrote: On 10/13/2014 1:29 AM, "岩倉 澪" wrote: Are there good reasons not to add something like this to the language, or is it simply a matter of doing the work? Has it been discussed much? Named parameters interact badly with overloading. Could you give an example? Nothing requires function overloads to use the same names in the same order for parameters. "color" can be the name for parameter 1 in one overload and for parameter 3 in another and not be there at all for a third. Parameters need not be named in D:

~~~
int foo(long);
int foo(ulong x);
~~~

Named parameters are often desired so that default arguments need not be in order at the end:

~~~
int foo(int x = 5, int y);
int foo(int y, int z);
~~~

To deal with all this, a number of arbitrary rules will have to be created. Overloading is already fairly complex, with the implemented notions of partial ordering. Even if this could all be settled, is it worth it? Can anyone write a document explaining this to people? Do people really want pages and pages of specification for this? One simple thing we did in Crystal is to allow invoking a function with named arguments only for arguments that have a default value. For example:

~~~
void foo(int x, int y = 2, int z = 3) { ... }

foo(x, y: 10);
foo(x, y: 10, z: 20);
foo(x, z: 30);
~~~

But not this:

~~~
foo(x: 10)
~~~

The logic behind this is that named arguments are usually wanted when you want to replace one of the default values while keeping the others' defaults. You could specify names for arguments that don't have a default value, but that only gives a small readability aid. Changing a default value in the middle is a new feature. This greatly simplifies the logic, since parameter reordering can only happen for names that have default values and you can always fill the gaps. Also, default values can still appear last in a function signature.
Re: Will D ever get optional named parameters?
On 10/13/14, 5:47 AM, Walter Bright wrote: On 10/13/2014 1:29 AM, "岩倉 澪" wrote: Are there good reasons not to add something like this to the language, or is it simply a matter of doing the work? Has it been discussed much? Named parameters interact badly with overloading. Could you give an example?
Re: Program logic bugs vs input/environmental errors
On 9/27/14, 8:15 PM, Walter Bright wrote: This issue comes up over and over, in various guises. I feel like Yosemite Sam here: https://www.youtube.com/watch?v=hBhlQgvHmQ0 In that vein, Exceptions are for either being able to recover from input/environmental errors, or report them to the user of the application. When I say "They are NOT for debugging programs", I mean they are NOT for debugging programs. assert()s and contracts are for debugging programs. For me, assert is useless. We are developing a language using LLVM as our backend. If you give LLVM something it doesn't like, you get something like this:

~~~
Assertion failed: (S1->getType() == S2->getType() &&
"Cannot create binary operator with two operands of differing type!"),
function Create, file Instructions.cpp, line 1850.
Abort trap: 6
~~~

That is what the user gets when there is a bug in the compiler, at least when we are generating invalid LLVM code. And that's one of the good paths, if you compiled LLVM with assertions, because otherwise I guess it's undefined behaviour. What I'd like to do, as a compiler, is to catch those errors and tell the user: "You've found a bug in the app, could you please report it at this URL? Thank you.". We can't: the assert is there and we can't change it. Now, this is when you interface with C++/C code. But inside our language code we always use exceptions so that programmers can choose what to do in case of an error. With assert you lose that possibility. Raising an exception is costly, but that should happen in exceptional cases. Installing an exception handler is cost-free, so I don't see why there is a need for a less powerful construct like assert.
Re: What are the worst parts of D?
On 9/24/14, 3:20 AM, Jacob Carlborg wrote: On 24/09/14 07:37, Walter Bright wrote: So help out! You always say we should help out instead of complaining. But where are all the users that want C++ support? Let them implement it instead, and let us focus on the actual D users we have now. Maybe Facebook needs D to interface with C++?
Re: [Semi OT] Language for Game Development talk
On 9/19/14, 9:52 PM, bearophile wrote: but currently Rust seems to ignore several kinds of correctness, focusing only on two kinds Could you tell us which those two kinds are, and which other kinds of correctness are ignored? Just to learn more about Rust. Thanks!
Re: why does DMD compile "hello world" to about 500 _kilobytes_ on Mac OS X [x86_64]?!?
On 8/31/14, 8:51 PM, Abe wrote: Please note: 502064 bytes!!! [for the curious: 490.296875 kilobytes] The real question is: why does size matter to you? A simple "hello world" program in Go is 2 megabytes. That's four times the size in D. I don't know if people complain about that. I guess efficiency matters most.
Re: Using D
On 8/25/14, 1:26 PM, ketmar via Digitalmars-d wrote: On Mon, 25 Aug 2014 16:08:52 + via Digitalmars-d wrote: Beta was static and compiled directly to asm. it's not hard to compile a dynamic language to native code. what is hard is to make this code fast. this requires a very sophisticated compiler which can eliminate as many indirect calls as possible. that's why we have the ability to create non-virtual methods in languages like D or C++. "everything is an object" is a nice concept, but it has its price. Not at all. In Crystal everything is an object, it compiles to native code and it's super fast. All methods are virtual (and there's actually no way to make a method non-virtual). The trick is to not use virtual tables, but do multiple dispatch (or only use virtual tables when needed). If you have:

~~~
a = Foo.new
a.some_method
~~~

then it's obvious to the compiler that some_method belongs to Foo: no virtual call involved, no virtual table lookup, etc., just a direct call. If you have:

~~~
x = 1.abs
~~~

1 is still an object, only its memory representation is 32 bits, and the method turns out to be just like a function call. To me, the real problem with OOP is to automatically relate it to virtual tables, interfaces, etc.
Re: Unused variables and bugs
On 8/22/14, 6:46 PM, bearophile wrote: Currently a group of people is trying to design a language pushing the idea of design by committee to the extreme; its future is a peer-reviewed language meant to be used for scientific programming. I've taken a look at its syntax and I was not happy with the current work in progress. What language is that?
Re: RFC: std.json sucessor
On 8/22/14, 1:24 PM, Sönke Ludwig wrote: On 22.08.2014 16:53, Ary Borenszweig wrote: On 8/22/14, 3:33 AM, Sönke Ludwig wrote: Without a serialization framework it would in theory work like this:

~~~
JSONValue v = parseJSON(`{"age": 10, "name": "John"}`);
auto p = new Person(v["name"].get!string, v["age"].get!int);
~~~

unfortunately the operator overloading doesn't work like this currently, so this is needed:

~~~
JSONValue v = parseJSON(`{"age": 10, "name": "John"}`);
auto p = new Person(
    v.get!(Json[string])["name"].get!string,
    v.get!(Json[string])["age"].get!int);
~~~

But does this parse the whole JSON into JSONValue? I want to create a Person without creating an intermediate JSONValue for the whole JSON. Can this be done? That would be done by the serialization framework. Instead of using parseJSON(), it could use parseJSONStream() to populate the Person instance on the fly, without putting the whole JSON into memory. But I'd like to leave that for a later addition, because we'd otherwise end up with duplicate functionality once std.serialization gets finalized. Manually it would work similar to this:

~~~
auto nodes = parseJSONStream(`{"age": 10, "name": "John"}`);
with (JSONParserNode.Kind) {
    enforce(nodes.front == objectStart);
    nodes.popFront();
    while (nodes.front != objectEnd) {
        auto key = nodes.front.key;
        nodes.popFront();
        if (key == "name") person.name = nodes.front.literal.string;
        else if (key == "age") person.age = nodes.front.literal.number;
    }
}
~~~

Cool, that looks good :-)
Re: RFC: std.json sucessor
On 8/22/14, 3:33 AM, Sönke Ludwig wrote: On 22.08.2014 02:42, Ary Borenszweig wrote: Say I have a class Person with name (string) and age (int) with a constructor that receives both. How would I create an instance of a Person from a JSON with the JSON stream? Suppose the JSON is this:

~~~
{"age": 10, "name": "John"}
~~~

And the class is this:

~~~
class Person {
    this(string name, int age) {
        // ...
    }
}
~~~

Without a serialization framework it would in theory work like this:

~~~
JSONValue v = parseJSON(`{"age": 10, "name": "John"}`);
auto p = new Person(v["name"].get!string, v["age"].get!int);
~~~

unfortunately the operator overloading doesn't work like this currently, so this is needed:

~~~
JSONValue v = parseJSON(`{"age": 10, "name": "John"}`);
auto p = new Person(
    v.get!(Json[string])["name"].get!string,
    v.get!(Json[string])["age"].get!int);
~~~

But does this parse the whole JSON into JSONValue? I want to create a Person without creating an intermediate JSONValue for the whole JSON. Can this be done?
Re: RFC: std.json sucessor
On 8/21/14, 7:35 PM, Sönke Ludwig wrote: Following up on the recent "std.jgrandson" thread [1], I've picked up the work (a lot earlier than anticipated) and finished a first version of a loose blend of said std.jgrandson, vibe.data.json and some changes that I had planned for vibe.data.json for a while. I'm quite pleased by the results so far, although without a serialization framework it still misses a very important building block. Code: https://github.com/s-ludwig/std_data_json Docs: http://s-ludwig.github.io/std_data_json/ DUB: http://code.dlang.org/packages/std_data_json Say I have a class Person with name (string) and age (int) with a constructor that receives both. How would I create an instance of a Person from a JSON with the JSON stream? Suppose the JSON is this:

~~~
{"age": 10, "name": "John"}
~~~

And the class is this:

~~~
class Person {
    this(string name, int age) {
        // ...
    }
}
~~~
Re: Why does D rely on a GC?
On 8/18/14, 9:05 PM, bearophile wrote: Ary Borenszweig: It's very smart, yes. But it takes half an hour to compile the compiler itself. I think this is mostly a back-end issue. How much time does it take to compile ldc2? Can't they create a Rust with dmc back-end? :o) Not all hope is lost, though: https://github.com/rust-lang/rust/issues/16624 With such bugs, one can expect a lot of performance improvements in the future :-)
Re: Why does D rely on a GC?
On 8/19/14, 12:01 PM, Ary Borenszweig wrote: On 8/19/14, 11:51 AM, bearophile wrote: Ary Borenszweig: Then here someone from the team says he can't see a way to improve the performance by an order of magnitude: https://www.mail-archive.com/rust-dev@mozilla.org/msg02856.html (but I don't know how true that is) Can't they remove some type inference from the language? Type inference is handy (but I write down all type signatures in Haskell, sometimes even for nested functions) but if it costs so much in compilation time then perhaps isn't it a good idea to remove some type inference from Rust? Bye, bearophile Crystal has *global* type inference. If you look at the compiler's code you will find very few type annotations (mostly for generic types and for argument type restrictions). Compiling the compiler takes 6 seconds (recompiling it takes 3 seconds). D also has auto, Nimrod has let, and both compilers are very fast. I don't think type inference is what makes their compiler slow. Here are the full stats:

~~~
$ time CFG_VERSION=1 CFG_RELEASE=0 rustc -Z time-passes src/librustc/lib.rs
time: 0.519 s   parsing
time: 0.026 s   gated feature checking
time: 0.000 s   crate injection
time: 0.170 s   configuration 1
time: 0.083 s   plugin loading
time: 0.000 s   plugin registration
time: 1.803 s   expansion
time: 0.326 s   configuration 2
time: 0.309 s   maybe building test harness
time: 0.321 s   prelude injection
time: 0.363 s   assigning node ids and indexing ast
time: 0.023 s   checking that all macro invocations are gone
time: 0.031 s   external crate/lib resolution
time: 0.046 s   language item collection
time: 1.250 s   resolution
time: 0.027 s   lifetime resolution
time: 0.000 s   looking for entry point
time: 0.023 s   looking for plugin registrar
time: 0.063 s   freevar finding
time: 0.126 s   region resolution
time: 0.025 s   loop checking
time: 0.047 s   stability index
time: 0.126 s   type collecting
time: 0.050 s   variance inference
time: 0.265 s   coherence checking
time: 17.294 s  type checking
time: 0.044 s   check static items
time: 0.190 s   const marking
time: 0.037 s   const checking
time: 0.378 s   privacy checking
time: 0.080 s   intrinsic checking
time: 0.070 s   effect checking
time: 0.843 s   match checking
time: 0.184 s   liveness checking
time: 1.569 s   borrow checking
time: 0.518 s   kind checking
time: 0.033 s   reachability checking
time: 0.204 s   death checking
time: 0.835 s   lint checking
time: 0.000 s   resolving dependency formats
time: 25.645 s  translation
time: 1.325 s   llvm function passes
time: 0.766 s   llvm module passes
time: 40.950 s  codegen passes
time: 46.521 s  LLVM passes
time: 0.607 s   running linker
time: 3.372 s   linking

real    1m46.062s
user    1m41.727s
sys     0m3.333s
~~~

So apparently type checking takes a long time, and also generating the llvm code. But it seems way too much for what it is. Also, the list seems way too big. It's ok from a purist point of view, to make the compiler nice and clean. But that's not a good way to make a fast compiler. The sad thing is that Mozilla is behind the project, so people are really excited about it. Other languages don't have a big corporation behind them and have faster compilers (and nicer languages, I think ^_^).
Re: Why does D rely on a GC?
On 8/19/14, 11:51 AM, bearophile wrote: Ary Borenszweig: Then here someone from the team says he can't see a way to improve the performance by an order of magnitude: https://www.mail-archive.com/rust-dev@mozilla.org/msg02856.html (but I don't know how true that is) Can't they remove some type inference from the language? Type inference is handy (but I write down all type signatures in Haskell, sometimes even for nested functions) but if it costs so much in compilation time then perhaps isn't it a good idea to remove some type inference from Rust? Bye, bearophile Crystal has *global* type inference. If you look at the compiler's code you will find very few type annotations (mostly for generic types and for argument type restrictions). Compiling the compiler takes 6 seconds (recompiling it takes 3 seconds). D also has auto, Nimrod has let, and both compilers are very fast. I don't think type inference is what makes their compiler slow. Here are the full stats: $ time CFG_VERSION=1 CFG_RELEASE=0 rustc -Z time-passes src/librustc/lib.rs time: 0.519 s parsing time: 0.026 s gated feature checking time: 0.000 s crate injection time: 0.170 s configuration 1 time: 0.083 s plugin loading time: 0.000 s plugin registration time: 1.803 s expansion time: 0.326 s configuration 2 time: 0.309 s maybe building test harness time: 0.321 s prelude injection time: 0.363 s assigning node ids and indexing ast time: 0.023 s checking that all macro invocations are gone time: 0.031 s external crate/lib resolution time: 0.046 s language item collection time: 1.250 s resolution time: 0.027 s lifetime resolution time: 0.000 s looking for entry point time: 0.023 s looking for plugin registrar time: 0.063 s freevar finding time: 0.126 s region resolution time: 0.025 s loop checking time: 0.047 s stability index time: 0.126 s type collecting time: 0.050 s variance inference time: 0.265 s coherence checking time: 17.294 s type checking time: 0.044 s check static items time: 0.190 s const marking
time: 0.037 s const checking time: 0.378 s privacy checking time: 0.080 s intrinsic checking time: 0.070 s effect checking time: 0.843 s match checking time: 0.184 s liveness checking time: 1.569 s borrow checking time: 0.518 s kind checking time: 0.033 s reachability checking time: 0.204 s death checking time: 0.835 s lint checking time: 0.000 s resolving dependency formats time: 25.645 s translation time: 1.325 s llvm function passes time: 0.766 s llvm module passes time: 40.950 s codegen passes time: 46.521 s LLVM passes time: 0.607 s running linker time: 3.372 s linking real 1m46.062s user 1m41.727s sys 0m3.333s So apparently type checking takes a long time, and also generating the llvm code. But it seems way too much for what it is.
Re: Why does D rely on a GC?
On 8/19/14, 10:55 AM, Daniel Murphy wrote: "Ary Borenszweig" wrote in message news:lsviva$2ip0$1...@digitalmars.com... Actually, it's 26m to just compile Rust without LLVM. Take a look at this: Funny, the DDMD frontend compiles in ~6 seconds. Nimrod's compiler takes 5 seconds. Crystal takes 6 seconds. I think compiling ldc2 also takes about the same time. When I compile librustc with stats on, Rust takes 1.1 seconds just to *parse* the code. I think there's something very bad in their code...
Re: Why does D rely on a GC?
On 8/19/14, 3:50 AM, Paulo Pinto wrote: On Monday, 18 August 2014 at 23:48:24 UTC, Ary Borenszweig wrote: On 8/18/14, 8:51 AM, bearophile wrote: Jonathan M Davis: The biggest reason is memory safety. With a GC, it's possible to make compiler guarantees about memory safety, whereas with manual memory management, it isn't. Unless you have a very smart type system and you accept some compromises (Rust also uses a reference counter in some cases, but I think most allocations don't need it). Bye, bearophile It's very smart, yes. But it takes half an hour to compile the compiler itself. The compilation speed is caused by the C++ code in their compiler backend (LLVM), which gets compiled at least twice during the bootstrapping process. Actually, it's 26m to just compile Rust without LLVM. Take a look at this: https://twitter.com/steveklabnik/status/496774607610052608 Then here someone from the team says he can't see a way to improve the performance by an order of magnitude: https://www.mail-archive.com/rust-dev@mozilla.org/msg02856.html (but I don't know how true that is) And you have to put all those unwrap and types everywhere, I don't think it's fun or productive that way. There I fully agree. If they don't improve the usability of lifetimes, I don't see Rust being adopted by average developers. This. But maybe programming in a safe way is inherently hard? Who knows...
Re: Why does D rely on a GC?
On 8/18/14, 8:51 AM, bearophile wrote: Jonathan M Davis: The biggest reason is memory safety. With a GC, it's possible to make compiler guarantees about memory safety, whereas with manual memory management, it isn't. Unless you have a very smart type system and you accept some compromises (Rust also uses a reference counter in some cases, but I think most allocations don't need it). Bye, bearophile It's very smart, yes. But it takes half an hour to compile the compiler itself. And you have to put all those unwrap and types everywhere, I don't think it's fun or productive that way.
Re: C++'s std::rotate
On 8/11/14, 12:29 AM, Andrei Alexandrescu wrote: Hello, In which algorithms would one use std::rotate?
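One classic client of rotate is the insertion step of insertion sort (or binary insertion into a sorted prefix). A sketch in D, where std.algorithm's bringToFront plays the role of C++'s std::rotate; the insertInOrder helper name is made up for illustration:

```d
import std.algorithm : bringToFront;

// a[0 .. i] is sorted; rotate a[i] into its sorted position.
void insertInOrder(int[] a, size_t i)
{
    size_t pos = 0;
    while (pos < i && a[pos] <= a[i]) ++pos;   // find insertion point
    bringToFront(a[pos .. i], a[i .. i + 1]);  // "rotate" a[i] down to pos
}
```

Calling this for i = 1 .. a.length - 1 yields an insertion sort; std::rotate is also the workhorse behind algorithms like stable_partition and in-place merging.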
Re: Behaviour of AAs after initialization
On 8/7/14, 3:57 PM, Andrei Alexandrescu wrote: On 8/7/14, 10:35 AM, Puming wrote: On Thursday, 7 August 2014 at 16:53:24 UTC, H. S. Teoh via Digitalmars-d wrote: It's really just the .init value of null which causes odd behaviour with empty AAs. Fun fact: void changeAA(int[string] aa) { aa["a"] = 123; } // Null AA: int[string] aa1; // null assert(aa1.length == 0); changeAA(aa1); // no effect for most of the new users the WAT part is actually here :-) One function we could and should use is one that makes an AA that is empty but not null. Right now one needs to use goofy methods such as adding and then removing a key. -- Andrei It still won't be intuitive for newcomers or for anyone not knowing that function. I would invert it: declaring an associative array makes it non-null. Then you can choose, with a function, to initialize to null. This would follow the principle of least surprise.
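The null-vs-empty distinction and Andrei's "goofy" add-then-remove workaround can be shown end to end (changeAA is the helper from the quoted code):

```d
void changeAA(int[string] aa) { aa["a"] = 123; }

void main()
{
    int[string] aa1;      // null AA: no implementation allocated yet
    changeAA(aa1);        // no effect: the callee allocates its own copy
    assert(aa1.length == 0);

    int[string] aa2;
    aa2["tmp"] = 0;       // the "goofy" workaround: add a key...
    aa2.remove("tmp");    // ...then remove it, forcing allocation
    changeAA(aa2);        // empty-but-not-null: callee shares the same AA
    assert(aa2["a"] == 123);
}
```

Once the AA's implementation exists, it behaves like a reference type; the surprise is confined to the null .init state, which is exactly Ary's argument for making declaration allocate.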
Re: Phobos PR: `call` vs. `bindTo`
On 8/5/14, 4:29 PM, H. S. Teoh via Digitalmars-d wrote: There are currently two Phobos PR's that implement essentially the same functionality, but in slightly different ways: https://github.com/D-Programming-Language/phobos/pull/1255 https://github.com/D-Programming-Language/phobos/pull/2343 From the discussion on Github, it seems to me that we should only introduce one new function rather than two similar but not-quite-the-same functions. Since the discussion seems stuck on Github, I thought I should bring it here to the larger community to see if we can reach a consensus (who am I kidding... but one can hope :-P) on: (1) What name to use (bring on the bikeshed rainbow) (2) Exactly what functionality should be included. (3) Which PR to merge. T It's called "variable declaration" in imperative languages, and I think D has that.
Re: assert semantic change proposal
On 8/5/14, 5:26 PM, H. S. Teoh via Digitalmars-d wrote: On Tue, Aug 05, 2014 at 05:09:43PM -0300, Ary Borenszweig via Digitalmars-d wrote: On 8/5/14, 3:55 PM, H. S. Teoh via Digitalmars-d wrote: On Tue, Aug 05, 2014 at 11:18:46AM -0700, Jeremy Powers via Digitalmars-d wrote: Furthermore, I think Walter's idea to use asserts as a source of optimizer hints is a very powerful concept that may turn out to be a revolutionary feature in D. LLVM already has it. It's not revolutionary: http://llvm.org/docs/LangRef.html#llvm-assume-intrinsic Even better, so there's precedent for this. Even if it's only exposed at the LLVM level, rather than the source language. Introducing this at the source language level (like proposed in D) is a good step forward IMO. By the way, I think Walter said "assert can be potentially used to make optimizations" not "Oh, I just had an idea! We could use assert to optimize code". I think the code already does this. Of course, we would have to look at the source code to find out... If the code already does this, then what are we arguing about? Exactly. I think the OP doesn't know that Walter wasn't proposing any semantic change in assert. Walter was just stating how assert works for him (or should work, but probably some optimizations are not implemented). We should ask Walter, but I think he's offline...
Re: assert semantic change proposal
On 8/5/14, 3:55 PM, H. S. Teoh via Digitalmars-d wrote: On Tue, Aug 05, 2014 at 11:18:46AM -0700, Jeremy Powers via Digitalmars-d wrote: Furthermore, I think Walter's idea to use asserts as a source of optimizer hints is a very powerful concept that may turn out to be a revolutionary feature in D. LLVM already has it. It's not revolutionary: http://llvm.org/docs/LangRef.html#llvm-assume-intrinsic By the way, I think Walter said "assert can be potentially used to make optimizations" not "Oh, I just had an idea! We could use assert to optimize code". I think the code already does this. Of course, we would have to look at the source code to find out... By the way, most of the time in this list I hear "We could use this and that feature to allow better optimizations" and no optimizations are ever implemented. Look at all those pure @safe nothrow const that you have to put and yet you don't get any performance boost from them.
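A sketch of what "assert as an optimizer hint" (the llvm.assume idea) would mean in D; whether the compiler actually exploits this is exactly what the thread is debating, so this is illustrative only:

```d
// In a -release build the runtime check is stripped. Under
// assert-as-assume semantics the optimizer may *additionally*
// treat d != 0 as a proven fact from this point on...
int div(int x, int d)
{
    assert(d != 0);
    if (d == 0) return 0;   // ...and delete this branch as dead code
    return x / d;
}
```

The contentious part is the combination: the check is gone, yet its condition is still assumed true, so a violated assert turns into undefined behavior instead of a clean halt.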
Re: assume, assert, enforce, @safe
On 8/1/14, 5:16 PM, eles wrote: On Friday, 1 August 2014 at 17:43:27 UTC, Timon Gehr wrote: On 08/01/2014 07:19 PM, Sebastiaan Koppe wrote: The debug and the release build may be subjected to different input and hence traverse different traces of abstract states. It is not valid to say that an assertion will never fail just because it hasn't failed yet. Yes, but it is the same for C apps. There, you have no assertions in the release build, the release build is optimized (I imagine very few would use -O0 on it...), then the segfault happens. Good luck with the debugger finding the bug in the source code. This is why debug builds exist, to reproduce problems and to investigate the bugs. The problem, then, is trying to copy everything C and C++ do and putting it in D...
Re: assume, assert, enforce, @safe
On 8/1/14, 2:19 PM, Sebastiaan Koppe wrote: If assertions are disabled in release builds, and you specifically instruct the compiler to build one, are you not assuming that the assertions will hold? Then what is wrong with extending those assumptions to the optimizer? Unless the assertions trigger in debug build, you will not end up with bugs in the release. If assertions don't trigger in debug build then assertions don't trigger in release. That's false.
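Ary's objection in one picture: an assert is only exercised on the inputs the debug build happens to see, so "never fired in debug" proves nothing about release. A hypothetical sketch (the function name is made up):

```d
// This assert never fired during debug testing, because every test
// happened to pass a small i. A release-mode user passing
// i >= a.length still violates it; the debug runs proved nothing
// about inputs that were never tried.
int at(int[] a, size_t i)
{
    assert(i < a.length);   // holds on every *tested* input only
    return a[i];
}
```

This is Timon Gehr's point in the next post: debug and release traverse different traces of program states, so an assertion's silence in one says nothing about the other.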
Re: assume, assert, enforce, @safe
On 7/31/14, 4:54 PM, H. S. Teoh via Digitalmars-d wrote: On Thu, Jul 31, 2014 at 07:39:54PM +, Jonathan M Davis via Digitalmars-d wrote: On Thursday, 31 July 2014 at 19:36:34 UTC, H. S. Teoh via Digitalmars-d wrote: On Thu, Jul 31, 2014 at 03:43:48PM -0300, Ary Borenszweig via Digitalmars-d wrote: [...] What do you suggest to use to check program bugs? assert() Then you are potentially releasing programs with bugs that are of undefined behavior, instead of halting the program immediately. Isn't that already what you're doing with the current behaviour of assert? Not only in D, but also in C/C++. Yes, but I think that his point was that he wants a way to check programming bugs in release mode, and Walter was saying not to use enforce for checking programming bugs. So, that leaves the question of how to check them in release mode, since assertions won't work in release mode. But the answer to that is normally to not compile in release mode. And I believe that dmd gives enough control over that that you can get everything that -release does without disabling assertions using flags other than -release. [...] Ah, I see. But doesn't that just mean that you shouldn't use -release, period? AFAIK, the only thing -release does is to remove various safety checks, like array bounds checks, asserts, contracts (which are generally written using asserts), etc.. I'd think that Ary wouldn't want any of these disabled, so he shouldn't use -release at all. There's already -O and -inline to enable what people generally expect from a release build, so -release wouldn't really be needed at all. Right? T That's exactly my point, thank you for summing that up :-) I don't see the point of having a "-release" flag. It should be renamed to "-a-bit-faster-but-unsafe". I think there are other languages that do quite well in terms of performance without disabling bounds checks and other stuff.
Re: assume, assert, enforce, @safe
On 7/31/14, 4:34 PM, H. S. Teoh via Digitalmars-d wrote: On Thu, Jul 31, 2014 at 03:43:48PM -0300, Ary Borenszweig via Digitalmars-d wrote: On 7/31/14, 4:37 AM, Walter Bright wrote: On 7/30/2014 4:05 PM, Ary Borenszweig wrote: On 7/30/14, 7:01 PM, Walter Bright wrote: I'd like to sum up my position and intent on all this. 7. using enforce() to check for program bugs is utterly wrong. enforce() is a library creation, the core language does not recognize it. What do you suggest to use to check program bugs? assert() Then you are potentially releasing programs with bugs that are of undefined behavior, instead of halting the program immediately. Isn't that already what you're doing with the current behaviour of assert? Not only in D, but also in C/C++. T I don't program in those languages, and if I did I would always use exceptions (at least in C++). I don't want to compromise the safety of my programs and if they fail I want to get a clean backtrace, not some random undefined behaviour resulting in a segfault.
Re: assume, assert, enforce, @safe
On 7/31/14, 4:37 AM, Walter Bright wrote: On 7/30/2014 4:05 PM, Ary Borenszweig wrote: On 7/30/14, 7:01 PM, Walter Bright wrote: I'd like to sum up my position and intent on all this. 7. using enforce() to check for program bugs is utterly wrong. enforce() is a library creation, the core language does not recognize it. What do you suggest to use to check program bugs? assert() Then you are potentially releasing programs with bugs that are of undefined behavior, instead of halting the program immediately.
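The distinction Walter is drawing, in code. std.exception.enforce is the real Phobos function; parsePort and its logic are made up for illustration:

```d
import std.conv : to;
import std.exception : enforce;

int parsePort(string s)
{
    immutable port = s.to!int;  // bad text throws ConvException by itself

    // enforce: validates *external input*; throws an Exception and
    // survives -release, so it is the tool for expected error cases.
    enforce(port > 0 && port < 65536, "port out of range");

    // assert: documents an *internal invariant* (here, established by
    // the enforce above); removed by -release, which is exactly the
    // undefined-behavior risk Ary objects to.
    assert(port > 0);
    return port;
}
```

Walter's position is that enforce belongs to error handling, not bug detection; Ary's counter is that if assert is the only sanctioned bug check, -release ships with that check removed.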
Re: assume, assert, enforce, @safe
On 7/30/14, 7:01 PM, Walter Bright wrote: I'd like to sum up my position and intent on all this. 7. using enforce() to check for program bugs is utterly wrong. enforce() is a library creation, the core language does not recognize it. What do you suggest to use to check program bugs?