Re: Evolutionary Programming!
On Tuesday, 5 January 2016 at 16:10:21 UTC, Jason Jeffory wrote: Any more thoughts?

I empathize with you, Jason. It's kind of like biological evolution, which has progressed through organisms spawning new generations and dying, versus some humans' search for immortality. Being free from aging and disease would lead to a different kind of evolution happening within the same generation, through technology. This new language would not have to die out and be replaced as progress is made, and it would be capable of evolving without growing enormous scars like C++.

Key to a clean evolution is robust upgrade-ability of source code, IMO. If the language designers add the dimension of @safe as an afterthought, an upgrade script could be run on old source code that would tag all valid functions as @safe (or better: tag unsafe functions as @unsafe).

I'm leaning toward live-editing ASTs instead of raw text, for robust, quick upgrades and quick compilation times. The tree could be stored in XML/JSON/binary, but be edited in a different view. AST editing would also fix the issue that beauty is in the eye of the beholder (programmer), as the same program tree could be visualized/skinned in different ways. The same programmer could also be writing programs with different defaults (such as @safe/@unsafe) depending on context (short-term shell scripts vs. aviation software).

The reason similar AST projects have failed in the past, AFAIK, is that it's very hard to build pleasant-to-use editors and viewers/diff-tools. Programmers are married to their editors (https://www.youtube.com/watch?v=qzC5H5xrr-E oh Andrei :) ). Bikeshedding in language forums would also go down a lot if programmers could re-skin keywords, brackets, indentation etc. :)

Regarding the struggle for immortality, I think the death/life cycle still provides a way of evolution that is preferable in many ways. Having different languages provides immunization from madness that might take down the "one true language".
I would love to see AST-based/structured languages succeed alongside text-based languages like D some day, and see the degree of duplicated programmer hours go down. Cheers, Chris
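The "same tree, different skins" idea above can be sketched in a few lines. This is a hypothetical illustration (in C++ rather than D, so it can stand alone): one tiny AST node type and two renderers that display the identical program tree with different surface syntax. All names are invented for the example.

```cpp
#include <cassert>
#include <string>

// One AST node: an operator with two children, or a leaf literal.
struct Node {
    std::string value;          // operator symbol or literal text
    const Node* left = nullptr;
    const Node* right = nullptr;
};

// Skin 1: C-family infix with brackets.
std::string renderInfix(const Node& n) {
    if (!n.left) return n.value;
    return "(" + renderInfix(*n.left) + " " + n.value + " " +
           renderInfix(*n.right) + ")";
}

// Skin 2: Lisp-style prefix. Same tree, different "skin".
std::string renderPrefix(const Node& n) {
    if (!n.left) return n.value;
    return "(" + n.value + " " + renderPrefix(*n.left) + " " +
           renderPrefix(*n.right) + ")";
}
```

A diff tool or editor working on the tree would never see either rendering; each programmer picks the view they like.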
Re: Invariants are useless the way they are defined
On Monday, 26 August 2013 at 18:25:21 UTC, Dicebot wrote: On Monday, 26 August 2013 at 18:15:25 UTC, H. S. Teoh wrote: This forces the library vendor to have to ship two versions of the binaries, one compiled with -release, one not. I thought that was common practice - contracts are not the only troublemakers here; the same goes, for example, for debug symbols or bounds checks.

Why not have the compiler generate internal versions of all public methods of classes with invariants, which do not perform invariant checking but contain the actual function body?

    class Foo {
        private int bar() const { return 5; }
        public int baz() { return bar() * bar(); }
        public int third() { return baz() * 2; }
        invariant() { assert(bar() < 6); }
    }

..gets translated into..

    class Foo {
        private int bar() const { return 5; }
        public int baz() {
            invariant();
            int r = __internal_baz();
            invariant();
            return r;
        }
        private int __internal_baz() { return bar() * bar(); }
        public int third() {
            invariant();
            int r = __internal_third();
            invariant();
            return r;
        }
        private int __internal_third() { return baz() * 2; }
        invariant() { assert(bar() < 6); }
    }

And then, in release builds, simply avoid doing this generation of __internal_ methods, keeping the ABI (int baz() and int third()) the same. A variation would be to have __internal_ methods call __internal_ methods:

    private int __internal_third() { return __internal_baz() * 2; }

but that diminishes ease of debugging. Anyway, this kind of code transformation is probably impractical for a number of reasons. I just felt like bikeshedding.
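The proposed lowering can be mimicked by hand. Below is a sketch (in C++, so it can stand alone outside the D compiler) of the same shape: public methods check the invariant on entry and exit, then delegate to private internal bodies. Method names mirror the D example; checkInvariant stands in for the compiler-called invariant().

```cpp
#include <cassert>

class Foo {
    int bar() const { return 5; }

    // The class invariant, checked at public-method boundaries.
    void checkInvariant() const { assert(bar() < 6); }

    // The actual bodies, free of invariant checks (the "__internal_" forms).
    int internalBaz() const { return bar() * bar(); }
    int internalThird() const { return baz() * 2; }  // calls the *checked* baz()

public:
    int baz() const {
        checkInvariant();
        int r = internalBaz();
        checkInvariant();
        return r;
    }
    int third() const {
        checkInvariant();
        int r = internalThird();
        checkInvariant();
        return r;
    }
};
```

A release build would simply define checkInvariant as a no-op, leaving the public signatures, and hence the ABI of the checked entry points, unchanged.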
Re: Official D Grammar
On Tuesday, 2 April 2013 at 19:00:21 UTC, Tobias Pankrath wrote: I'm wondering if it's possible to mechanically check that what's in the grammar is how DMD behaves. Take the grammar, (randomly) generate strings with it, and check whether DMD complains. You'd need a parse-only, don't-check-semantics flag, though. This will not check whether the strings are parsed correctly by DMD, nor whether invalid strings are rejected, but it would be a start.

An alternative idea for ensuring that documentation and implementation are in sync might be to list the full grammar definition as a data structure that can be used both as input for the parser and as input for a tool that generates the documentation. Theoretically possible. :) Just look at Philippe Sigaud's Pegged.
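The "generate strings from the grammar and feed them to the parser" idea is easy to prototype on a toy grammar. The sketch below (C++, not tied to the D grammar or DMD) expands random derivations of a two-rule expression grammar and checks each result against a hand-written recognizer standing in for "DMD with a parse-only flag". The grammar, rule names, and depth limit are all invented for illustration.

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <random>
#include <string>
#include <vector>

// Toy grammar:  Expr -> Term | Term "+" Expr ;  Term -> "1" | "(" Expr ")"
using Rules = std::map<std::string, std::vector<std::vector<std::string>>>;

const Rules grammar = {
    {"Expr", {{"Term"}, {"Term", "+", "Expr"}}},
    {"Term", {{"1"}, {"(", "Expr", ")"}}},
};

// Randomly expand a symbol; past a depth limit, always take the first
// (shortest) production so the derivation terminates.
std::string generate(const std::string& sym, std::mt19937& rng, int depth) {
    auto it = grammar.find(sym);
    if (it == grammar.end()) return sym;  // terminal symbol
    const auto& prods = it->second;
    std::size_t pick = depth > 6 ? 0 : rng() % prods.size();
    std::string out;
    for (const auto& s : prods[pick]) out += generate(s, rng, depth + 1);
    return out;
}

// Recursive-descent recognizer for the same grammar (the "reference parser").
bool parseExpr(const std::string& s, std::size_t& i);

bool parseTerm(const std::string& s, std::size_t& i) {
    if (i < s.size() && s[i] == '1') { ++i; return true; }
    if (i < s.size() && s[i] == '(') {
        ++i;
        if (!parseExpr(s, i)) return false;
        if (i < s.size() && s[i] == ')') { ++i; return true; }
    }
    return false;
}

bool parseExpr(const std::string& s, std::size_t& i) {
    if (!parseTerm(s, i)) return false;
    if (i < s.size() && s[i] == '+') { ++i; return parseExpr(s, i); }
    return true;
}

bool accepts(const std::string& s) {
    std::size_t i = 0;
    return parseExpr(s, i) && i == s.size();
}
```

Every generated string must be accepted; as the post notes, this tests only one direction (valid strings are not rejected), not that invalid strings fail.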
Re: CTFE and DI: The Crossroads of D
On Thursday, 10 May 2012 at 17:37:59 UTC, Adam Wilson wrote: On Thu, 10 May 2012 09:56:06 -0700, Steven Schveighoffer schvei...@yahoo.com wrote: On Thu, 10 May 2012 12:04:44 -0400, deadalnix deadal...@gmail.com wrote: On 10/05/2012 17:54, Steven Schveighoffer wrote: On Thu, 10 May 2012 10:47:59 -0400, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: On 5/10/12 6:17 AM, Steven Schveighoffer wrote: On Wed, 09 May 2012 23:00:07 -0400, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote:

Actually the point here is to still be able to benefit from automated .di generation while opportunistically marking certain functions as "put the body in the .di file".

If you aren't going to strip the files, I don't see the point in it.

Inlining.

No, I mean if dmd -H isn't going to strip the files, what is the point of dmd -H? I can already copy the .d to .di and have inlining/CTFE, or simply use the .d directly. At this point, in order to get CTFE to work, you have to keep just about everything, including private imports. If we want to ensure CTFE works, dmd -H becomes a glorified cp. If we have some half-assed guess at what could be CTFE'd (which is growing by the day), then it's likely to not fit with the goals of the developer running dmd -H. -Steve

If you can CTFE, you can know what is CTFEable. If it is currently half-assed, then work on it and provide a better tool.

There is already a better tool -- cp. I ask again, what is the benefit of .di generation if it is mostly a glorified (faulty?) copy operation? As Adam points out in his original post, ensuring CTFE availability may not be (and is likely not) why you are creating a .di file. Plus, what isn't CTFEable today may be CTFEable tomorrow. Inlining is one thing, because that's an optimization that has a valid fallback. CTFE does not. -Steve

Exactly this. I am currently in the process of changing the DRuntime makefiles such that some of the files are not processed as DI's.
This allows Phobos CTFE dependencies on the DRT to remain valid while still allowing DI's to be generated for the parts where they matter, with the goal of making both a shared and a static library build of the DRT. The tool I am using to accomplish this feat? cp. It works, it delivers exactly what we need, and it *is not* a broken operation like the current DI generation. Like Steve said, most people generating DI files are not really worried about CTFE working; in fact they almost undoubtedly *know* that they are breaking CTFE, yet they choose to do it anyway. They have their reasons, and frankly, it doesn't concern us as compiler writers if those reasons don't line up with our personal moral world-view. Our job is to provide a tool that DOES WHAT PEOPLE EXPECT. Otherwise they will move on to one that does. If people expected DI generation to be a glorified (and not broken) copy operation, they would (and do) use cp.

How about: dmd -H mySource.d --keepImplementation MyClass.fooMethod ? It should be good enough for makefiles, as in the case of core.time/dur, but it gets a bit hairy with overloads (append [0] to select specific ones?). Maybe it requires semantic information, though.
Re: dereferencing null
On Tuesday, 6 March 2012 at 15:46:54 UTC, foobar wrote: On Tuesday, 6 March 2012 at 10:19:19 UTC, Timon Gehr wrote: This is quite close, but real support for non-nullable types means that they are the default and checked statically, ideally using data flow analysis.

I agree that non-nullable types should be made the default and statically checked, but data flow analysis here is redundant. Consider:

    T foo = ..;  // T is not-nullable
    T? bar = ..; // T? is nullable
    bar = foo;   // legal implicit coercion T -> T?
    foo = bar;   // compile-time type mismatch error
    // correct way:
    if (bar) { // make sure bar isn't null
        // compiler knows that cast(T)bar is safe
        foo = bar;
    }

of course we can employ additional syntax sugar such as:

    foo = bar || default_value;

furthermore:

    foo.method(); // legal
    bar.method(); // compile-time error

it's all easily implementable in the type system.

I agree with the above and would also suggest something along the lines of:

    assert (bar) { // make sure it isn't null in debug builds
        bar.method(); // legal
    }

The branchy null-check would then disappear in build configurations with asserts disabled.
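The T vs. T? split can be approximated today in C++ with std::optional playing the role of the nullable "T?" and a plain value or reference as the non-nullable default. This is only an analogy, not the proposed D feature, and the function names below are invented; the explicit if (s) check is the analogue of the if (bar) narrowing above.

```cpp
#include <cassert>
#include <optional>
#include <string>

// "T": the parameter cannot be absent, so no check is needed inside.
int lengthOf(const std::string& s) {
    return static_cast<int>(s.size());
}

// "T?": the caller may pass nothing; the value must be unwrapped explicitly.
int lengthOrZero(const std::optional<std::string>& s) {
    if (s) return lengthOf(*s);  // only inside the branch is the value usable
    return 0;
}
```

Unlike the proposed D design, C++ does not reject a bare *s outside the check at compile time; the analogy captures the API shape, not the static guarantee.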
Re: Lexer and parser generators using CTFE
On Wednesday, 29 February 2012 at 16:41:22 UTC, Andrei Alexandrescu wrote: On 2/28/12 7:16 PM, Christopher Bergqvist wrote: What am I failing to pick up on?

Barrier of entry and granularity of approach, I think. Currently if one wants to parse some simple grammar, there are options such as (a) do it by hand, (b) use boost::spirit, or (c) use lex/yacc. Parsing by hand has the obvious disadvantages. Using boost::spirit has a steep learning curve and tends to create very contorted grammar representations, full of representation noise, and scales very poorly. Using lex/yacc is hamfisted - there's an additional build step, generated files to deal with, and the related logistics, which make lex/yacc a viable choice only for big grammars.

An efficient, integrated parser generator would lower the barrier of entry dramatically - if we play our cards right, even a sprintf specifier string could be parsed simpler and faster using an embedded grammar, instead of painfully writing the recognizer by hand. Parsing config files, XML, JSON, CSV, various custom file formats and many others - all would be a few lines away. Ideally, a user who has a basic understanding of grammars should have an easier time using a small grammar to parse simple custom formats than writing the parsing code by hand. Andrei

Thanks for your response. The lowered barrier of entry in parsing something like a customized JSON format or config files is nice, and something I could see myself using. I'm still skeptical about the level of killer-featureness, but I would be glad to be proven wrong.
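To make the sprintf example concrete, here is the "painfully writing the recognizer by hand" side of the argument: a hand-rolled recognizer for a simplified specifier of the form % [flags: - or 0] [width] [. precision] [d|f|s]. Real printf specifiers are considerably richer; this toy (all names invented) just shows the bookkeeping an embedded grammar would replace.

```cpp
#include <cassert>
#include <cctype>
#include <cstddef>
#include <string>

struct Spec {
    std::string flags;
    int width = -1;      // -1 means "not given"
    int precision = -1;
    char conversion = 0;
};

// Hand-written recognizer; every optional piece needs its own loop and check.
bool parseSpec(const std::string& s, Spec& out) {
    std::size_t i = 0;
    if (i >= s.size() || s[i] != '%') return false;
    ++i;
    while (i < s.size() && (s[i] == '-' || s[i] == '0')) out.flags += s[i++];
    while (i < s.size() && std::isdigit(static_cast<unsigned char>(s[i])))
        out.width = (out.width < 0 ? 0 : out.width) * 10 + (s[i++] - '0');
    if (i < s.size() && s[i] == '.') {
        ++i;
        out.precision = 0;
        while (i < s.size() && std::isdigit(static_cast<unsigned char>(s[i])))
            out.precision = out.precision * 10 + (s[i++] - '0');
    }
    if (i < s.size() && (s[i] == 'd' || s[i] == 'f' || s[i] == 's')) {
        out.conversion = s[i++];
        return i == s.size();
    }
    return false;
}
```

With an embedded grammar, this whole function would shrink to roughly the one-line textual form of the specifier grammar itself, which is Andrei's point.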
Re: Lexer and parser generators using CTFE
On Tuesday, 28 February 2012 at 08:36:13 UTC, CTFE-4-the-win wrote: On Tuesday, 28 February 2012 at 07:59:16 UTC, Andrei Alexandrescu wrote: I'm starting a new thread on this because I think the matter is of strategic importance. We all felt for a long time that there's a lot of potential in CTFE, and potential applications have been discussed more than a few times, ranging from format-string parsing to DSLs and parser generators. Such feats are now approaching fruition because a number of factors converge:

* Dmitry Olshansky's regex library (now in Phobos) generates efficient D code straight from regexen.
* The scope and quality of CTFE has improved enormously, making more advanced uses possible and even relatively easy (thanks Don!)
* Hisayuki Mima implemented a parser generator in only 3000 lines of code (sadly, no comments or documentation yet :o))
* With the occasion of that announcement we also found out that Philippe Sigaud already has a competing design and implementation of a parser generator.

This is the kind of stuff I've had an eye on for the longest time. I'm saying it's of strategic importance because CTFE technology, though not new and already available in some languages, has unique powers when combined with other features of D. With CTFE we get to do things that are quite literally impossible to do in other languages. We need to have an easy-to-use, complete, seamless, and efficient lexer-parser generator combo in Phobos, pronto. The lexer itself could use a character-level PEG or a classic automaton, and emit tokens for consumption by a parser generator. The two should work in perfect tandem (no need for glue code). At the end of the day, defining a complete lexer+parser combo for a language should be just a few lines longer than the textual representation of the grammar itself. What do you all think? Let's get this project off the ground! Thanks, Andrei

Definitely, I applaud this initiative!
I've long been of the opinion that CTFE parsing is D's killer feature, one which would allow me to sneak D into a nameless above-average-size company. ;) I agree that the current direction of D in this area is impressive. However, I fail to see a killer feature in generating a lexer/parser at compile time instead of at run time. A run-time generator would benefit from not having to execute within the limited CTFE environment, and would always be on par in that respect. A compile-time generator would internalize the generation and compilation of the result (with possible glue code), simplifying the build process somewhat. What am I failing to pick up on?
Re: GCC 4.6
Why not split this NG in two? d-pragmatism - concrete stuff, TDPL + absolutely necessary adjustments, which are probably discussed first in the other NG... d-theory - a place to discuss the future of D, stuff with a longer timeline. Or maybe we should accept this NG as being a mix of both, and note that at least d-announce is d-pragmatism condensed, kind of.

On Thu, Mar 31, 2011 at 10:26 AM, so s...@so.so wrote: On Thu, 31 Mar 2011 05:09:44 +0300, jasonw u...@webmails.org wrote: You hit the nail on the head here. I see two real problems with his messages: 1) he's force-fitting every possible language feature he learns into D. Clearly some features are useful, others are not, and this is why many of bearophile's ideas fail and generate endless debates and unnecessary noise. He can't see that the features just don't fit in.

This is not true; there are ideas here from many others as well that generate endless debates. The reason, as far as I can see, is not always that the ideas just don't fit in. The reasons IMO are the chain of command and the resources. Take the last long discussion on named arguments: I don't think anyone was against it. Another thing is that a few of us are evasive about some questions.

If you lack the vision of good language design as a whole, you shouldn't start suggesting new features like this. I'd appreciate it more if we didn't introduce new concepts in this way.

This is an oxymoron; by that logic there is not a single soul on earth with that vision. You just dismissed the whole of academia - isn't this the way it operates? Don't you think this is harmful? Why does D2 exist? Wasn't D1 enough?

2) Programming language design requires rigorous definition of terms and other things. The D community doesn't encourage using precise, well-defined, unique terms. This leads to subtleties and other problems in the discussions. Again, I think the best place for general PL discussion is somewhere else, preferably in academia.
I'm sorry to say this, but I probably need to learn how to put him in the kill file. The whole bearophile phenomenon takes place on an isolated island somewhere in the dark corners of D's history. The bug reports and benchmarks are priceless, but these lectures about other languages often aren't. I agree he should slow down with the proposals; at the same time, people had better stop the ad hominem attacks. If that is the way to go, we all need to shut up.
Re: Splitter.opSlice(), ranges and const strings
On Thu, Feb 24, 2011 at 11:38 AM, Jonathan M Davis jmdavisp...@gmx.com wrote: On Thursday 24 February 2011 01:53:33 spir wrote: On 02/24/2011 08:39 AM, Jonathan M Davis wrote: On Wednesday 23 February 2011 22:41:53 Christopher Bergqvist wrote: Hi! I've run into an issue which I don't understand. Boiled down code:

    import std.regex;
    void main() {
        //string str = "sdf";       // works
        //const string str = "sdf"; // doesn't work
        immutable str = "sdf";      // doesn't work
        auto pat = regex(", *");
        auto split = splitter(str, pat);
    }

Error: /Library/Compilers/dmd2/osx/bin/../../src/phobos/std/regex.d(3022): Error: this is not mutable

Should splitter() be able to cope with const/immutable ranges? (That's with the latest official v2.052 dmd/phobos distribution for mac. I got the same error before upgrading from v2.051 also.)

Pretty much _nothing_ copes with const or immutable ranges. And if you think about it, it generally makes sense. You can't pop the front off of a const or immutable range. So, how could you possibly process it? There are some cases where having tail const with ranges would work (assuming that we could have tail const with ranges - which we currently can't), but on the whole, const and immutable ranges don't really make sense. They can hold const or immutable data, but a const or immutable range is pretty useless on the whole.

That's one question I've been wondering about for months (but always forget to ask): why should /collection/ traversal shrink them? Why does the regular range stepping function (popFront) read, for arrays:

    this.data = this.data[1..$];

instead of:

    ++this.cursor;

??? It should then be called e.g. stepFront, which imo expresses the semantics of traversal/iteration much better. I guess the issue expressed in this thread is /invented/ by the regular process of ranges, precisely by popFront. There is no reason to mutate a collection just to traverse it! And then, how do you traverse the collection again?

    unittest {
        auto a = [1,2,3];
        while (!a.empty()) { write(a.front(), ' '); a.popFront(); }
        writeln();
        // below, nothing is written to the terminal
        while (!a.empty()) { write(a.front(), ' '); a.popFront(); }
        writeln();
    }

??? Collection traversal _doesn't_ shrink them. Ranges get shrunk when you iterate over them. That's how they work. They're a view into the collection/container. They shrink. It's like car and cdr in Lisp, or head and tail in Haskell. Iterating over a range is very much like processing an slist (as in the functional-language type slist, not singly linked lists or the type in std.container). Now, arrays are a bit funny in that they're kind of both ranges and containers. As Steven pointed out in a recent thread, for a dynamically allocated array, it's like the container is the GC heap, and an array is a range over a portion of that heap. So, traversing an array as a range shrinks it but does not affect its actual container - the GC heap. Static arrays, on the other hand, really do own their memory and are actual containers - hence why you have to slice them to pass them to any range-based functions. And you have to slice any other container type as well if you want to pass it to a range-based function. So, iterating over a collection does _not_ shrink the collection. Iterating over a _range_ does, but a range is a view into a container - a slice of it - so you're just shrinking your view of it as you process it. - Jonathan M Davis

I was thinking of something like a C++ const std::vector, but thinking of ranges as mutable views into possibly-const data seems helpful. Thanks!
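The view-vs.-container point translates directly into other languages. Here is a minimal sketch in C++ of a D-style input range over a vector: popFront shrinks only the view (like D's data = data[1..$]), and traversing again just means taking a fresh slice. The IntRange type is invented for illustration.

```cpp
#include <cassert>
#include <vector>

// A shrinking view into someone else's storage, in the style of a D range.
struct IntRange {
    const int* first;
    const int* last;
    bool empty() const { return first == last; }
    int front() const { return *first; }
    void popFront() { ++first; }  // shrinks the view, never the container
};

// "Slicing" the container to get a range over it.
IntRange rangeOf(const std::vector<int>& v) {
    return IntRange{v.data(), v.data() + v.size()};
}
```

Consuming the range leaves the vector untouched; re-traversal is a second call to rangeOf, which is exactly the answer to spir's "how do you traverse the collection again?".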
Re: DMD versions
I've had an idea lately on that note. I think it would be cool if rdmd (not standard dmd) had support for this style of import magic:

    // @grab url:http://someserver.com/somelib/v1.0/src/somelib/somemodule.d size:4321 sha1:2fd4e1c67a2d28fced849ee1bb76e7391b93eb12
    import somelib.somemodule;

The hash and size of the file are there to make it very hard for someone to take over someserver.com and put malicious code there which people would download and run. rdmd could also have options for turning this feature off/on, or an interactive mode where the user has to accept each download. Let's roll some local caching in there while we're at it also. :) Omitting the hash and size could be useful for unstable code, but such imports should not be used without conscious input from the user. One could be allowed to view the code before typing yes and having the code put into the cache.

On Wed, Feb 23, 2011 at 9:16 AM, Russel Winder rus...@russel.org.uk wrote: On Tue, 2011-02-22 at 22:29 -0800, Walter Bright wrote: [ . . . ] Just for fun, try: dmd -man !! That presupposes you are connected to the Internet. Much of the time I am not. I appreciate this is an almost heretical position to be in, but mobile Internet hasn't actually arrived, despite being sold. Interestingly, or not, Go allows for imports to refer to non-local Bazaar, Mercurial and Git repositories without local caching, which means you can't compile code unless you are connected to the Internet. Don't let D go (!) this route. -- Russel. = Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.win...@ekiga.net 41 Buckmaster Road m: +44 7770 465 077 xmpp: rus...@russel.org.uk London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder
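The verification step of the @grab idea is the easy part to sketch. The C++ fragment below checks size first, then a digest, before a downloaded module would be admitted to the cache. Note the loud caveat: the real scheme calls for SHA-1, which the C++ standard library does not provide, so std::hash is used here purely as a stand-in; the function name is invented.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <string>

// Sketch of the trust check only. A real implementation would compute the
// SHA-1 of `content` and compare it to the sha1: field from the @grab line;
// std::hash is a stand-in because the standard library has no SHA-1.
bool moduleIsTrusted(const std::string& content,
                     std::size_t expectedSize,
                     std::size_t expectedDigest) {
    if (content.size() != expectedSize) return false;  // cheap size check first
    return std::hash<std::string>{}(content) == expectedDigest;
}
```

Only content that passes both checks would be cached and compiled; anything else falls back to the proposed interactive "view and accept" path.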
Splitter.opSlice(), ranges and const strings
Hi! I've run into an issue which I don't understand. Boiled down code:

    import std.regex;
    void main() {
        //string str = "sdf";       // works
        //const string str = "sdf"; // doesn't work
        immutable str = "sdf";      // doesn't work
        auto pat = regex(", *");
        auto split = splitter(str, pat);
    }

Error: /Library/Compilers/dmd2/osx/bin/../../src/phobos/std/regex.d(3022): Error: this is not mutable

Should splitter() be able to cope with const/immutable ranges? (That's with the latest official v2.052 dmd/phobos distribution for mac. I got the same error before upgrading from v2.051 also.)
Re: Suggestion: New D front page
bearophile:
- Thank you for making me aware of Skywriter/Ace. I am all for using that, as long as it doesn't incur significantly longer load times for the page.
- My opinion is that the D Zen is something newcomers are interested in reading before delving any further into downloads or videos.

Denis:
- I was thinking that the D Zen section should serve to introduce the language, so maybe it could be changed to include your paragraph, or maybe we should remove the motto and have your paragraph instead, to keep it less crowded.
- Announcements/commits are mainly there to show that the language is in fact very much alive.
- I was thinking that the links at the top would be used by recurring users to get to library refs etc. without scrolling. Maybe they could be mouse-over drop-down menus, to keep the page less crowded but at the same time reduce the number of page-jumps. Drop-downs could be hard to get to behave, though.

David Gileadi: Thank you for the support! It's becoming more apparent to me that the darker look is appreciated by a part of the audience (Russel, Tomek etc.), so I understand the current design.

Andrei: I'm happy that you like the direction!

Adam D. Ruppe:
- The idea of building the page in D itself seemed like wishful thinking to me. Your rapid progress is jaw-dropping! I did not know that there already existed such powerful web libraries for D; good job! Sandboxing would be the main issue of using such a powerful language as I see it, so running the compiler in a VM seems like a good step.
- I like the idea of rotating code samples, but I think a "More examples" link leading to a page or set of pages dedicated to examples would be better than tabs.

Since Walter's word is one of the heaviest around here, I wonder what his thoughts are on this?
Suggestion: New D front page
Hi! I have been putting some free time into creating a design skeleton for a new http://www.d-programming-language.org/ front page: http://digitalpoetry.se/D%20website/D%20overview%20design.png

My main concern is presenting newcomers with an inspiring and relevant first impression of D. I think there is lots to gain by having a more alive front page not based on Ddoc (the rest of the site could still be based on it). I have not attempted adding any visual style to the design myself, since it's not one of my strengths. It should be made to fit better with the overall theme of d-programming-language.org (although IMO that's currently a bit too dark and foreboding). I must confess to being heavily inspired by http://ooc-lang.org and http://cobra-language.com.

As creating this would take a significant time investment, I suggest that some of the more complex sections of the page could be released after the initial version. I have some background in web development, but have been doing almost exclusively professional C++ games development during the last 4 years. I would not mind putting some more work into this, but am also hopeful that some others in the D community desire to contribute. Constructive feedback with a minimum of bikeshedding is welcome. (Please avoid discussions about specific textual content for now; it's just placeholders.) Cheers, Chris
Re: D without a GC
Would std.typecons.scoped!(T) fit? http://svn.dsource.org/projects/phobos/trunk/phobos/std/typecons.d I can't figure out why it's not in the generated reference docs but exists in the source. Maybe it hasn't been tested enough yet.

On 3 Jan 2011, at 15:33, Ulrik Mikaelsson ulrik.mikaels...@gmail.com wrote: 2011/1/3 Iain Buclaw ibuc...@ubuntu.com: == Quote from bearophile (bearophileh...@lycos.com)'s article: Dmitry Olshansky: As stated in this proposal they are quite useless, e.g. they are easily implemented via mixin with alloca. Thank you for your comments. Here I have added some answers to your comments: http://d.puremagic.com/issues/show_bug.cgi?id=5348 Bye, bearophile

Some thoughts on your proposal:

    void bar(int len) {
        Foo* ptr = cast(Foo*)alloca(len * Foo.sizeof);
        if (ptr == null)
            throw new Exception("alloca failed");
        Foo[] arr = ptr[0 .. len];
        foreach (ref item; arr)
            item = Foo.init;
        // some code here
        writeln(arr);
        foreach (ref item; arr)
            item.__dtor();
    }

1) Why call the dtors manually? Forgive me if I'm wrong, but IIRC alloca will free the memory when the stack exits from the current frame level. :)

The dtor doesn't free memory; it allows the class to perform cleanup on destruction, such as closing any native FDs etc. Actually, AFAICT, the above code should also, after item = Foo.init, call item.__ctor() to allow the constructor to run. IMHO, stdlibs should provide wrappers for this type of functionality. An application writer should not have to know the intricate details of __ctor and __dtor as opposed to memory management etc. A template stackAllocate!(T) would be nice.
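C++ makes the same two hidden steps explicit, which shows what a stackAllocate!(T)-style wrapper would have to hide: run the constructor on raw stack storage (placement new) and later run the destructor by hand. This is a sketch, not a library design; Tracked and useStackObject are invented, and a fixed buffer stands in for alloca for portability.

```cpp
#include <cassert>
#include <new>

// A type whose destructor has an observable side effect, so we can verify
// that manual destruction actually ran.
struct Tracked {
    int* dtorCount;
    explicit Tracked(int* c) : dtorCount(c) {}
    ~Tracked() { ++*dtorCount; }
};

int useStackObject() {
    int dtors = 0;
    {
        // Raw stack storage, playing the role of alloca's return value.
        alignas(Tracked) unsigned char buf[sizeof(Tracked)];
        Tracked* t = new (buf) Tracked(&dtors);  // explicit constructor call
        // ... use *t here ...
        t->~Tracked();                           // explicit destructor call
        // The storage itself is reclaimed automatically when the frame exits,
        // just like alloca memory, which is why only the dtor is manual.
    }
    return dtors;
}
```

Exactly as the post argues, the ctor/dtor bookkeeping is easy to get wrong by hand, which is the case for wrapping it once in a library template.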
Re: Spec#, nullables and more
Does D have anything comparable to C++ references, à la void nullCheckLessFunction(const std::string& notNullStr) {...}, or does it only have the equivalent of void nullCheckingRequired(const std::string* mightByNullStr) {...}?
Re: Spec#, nullables and more
On Sat, Nov 6, 2010 at 12:23 PM, Denis Koroskin 2kor...@gmail.com wrote: On Sat, 06 Nov 2010 14:06:20 +0300, Christopher Bergqvist ch...@digitalpoetry.se wrote: Does D have anything comparable to C++ references, à la void nullCheckLessFunction(const std::string& notNullStr) {...}, or does it only have the equivalent of void nullCheckingRequired(const std::string* mightByNullStr) {...}?

void nullCheckLessFunction(ref const(string) notNullStr) { .. }

I made two comparison snippets between D and C++: http://ideone.com/VPzz6 (D) and http://ideone.com/HzFRB (C++). I feel like C++ is one small step ahead of D in this respect. It's not possible to trust that C++ references are non-null, but at least they serve as concise documentation of the expected contents, and tend to make the surrounding code perform the null check up front, before dereferencing from pointer to C++ reference.
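The pattern described above, spelled out in C++: the pointer-taking entry point performs the null check once, up front, and then hands a reference to the inner function, which by construction can no longer receive null. Function names are invented for the sketch.

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Takes a reference: callers cannot pass null, so no check is needed here.
std::size_t nullCheckLessLength(const std::string& s) {
    return s.size();
}

// Takes a pointer: might be null, so the check happens exactly once,
// at the pointer-to-reference boundary.
std::size_t lengthOrZero(const std::string* maybe) {
    if (maybe == nullptr) return 0;      // null check up front
    return nullCheckLessLength(*maybe);  // dereference into a reference
}
```

As the post concedes, a determined caller can still forge a null reference in C++, so this documents and localizes the check rather than statically guaranteeing non-null the way a true non-nullable type would.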
Re: Marketing D [ was Re: GCC 4.6 ]
Yes, though I don't want to run Walter into bankruptcy. ;) Honestly, I do think it would change the perception of the language in a beneficial way if one could say that the whole reference compiler infrastructure were _unquestionably_ open source.

On Nov 3, 2010, at 21:17, Jérôme M. Berger jeber...@free.fr wrote: dsimcha wrote: == Quote from Christopher Bergqvist (quasiconsci...@gmail.com)'s article: Would it be possible to organize a bounty for having the backend released under an OSI-approved license? Vote++. I understand that this has worked in the past, though I don't remember off the top of my head what the project was. The first one I know of is Blender. I believe there have been a couple of others since. Jerome -- mailto:jeber...@free.fr http://jeberger.free.fr Jabber: jeber...@jabber.fr
Re: Marketing D [ was Re: GCC 4.6 ]
Would it be possible to organize a bounty for having the backend released under an OSI-approved license? On Oct 31, 2010, at 22:48, Jeff Nowakowski j...@dilacero.org wrote: On 10/31/2010 05:12 PM, Walter Bright wrote: D is fully open source. No, Walter, it isn't, and you should know this by now considering all the past discussion. All the back-end work you're doing is source available. Open source was coined in 1998 by people with a precise meaning: See http://www.opensource.org/docs/osd and http://www.opensource.org/history . In particular, free redistribution and derived works are fundamental to the open source definition.
Re: assert(false) in release == splinter in eye
On Tue, Oct 12, 2010 at 2:37 AM, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: assert(false) could be in an ideal world replaced by an intrinsic called e.g. halt() that looks and feels like a regular function but is recognized by the compiler. No new keyword would be needed. But I don't think that would mark a significant improvement in the language. Would it be possible to change the compiler to only treat assert(false); specially, avoiding treating enum a = 0; assert(a); or more complex constructs that way?
assert(false) in release == splinter in eye
Hi, Time for some Sunday nitpicking. While reading TDPL, one thing that stuck out to me was the special behavior of assert(false). Consider the following program, compiled with -release:

    void main() {
        int a = 0;
        assert(a);
    }

That program will run without errors. Changing the type of variable a from int to enum results in the program segfaulting, thanks to the compiler being able to know the value of the expression a at compile time and inserting a HLT/halt instruction. Having the ability to change something subtle in a more complex expression or series of expressions, without realizing you made a compile-time assert(false) which crashes your program, feels ugly. I would prefer it if assert() didn't have this special type of behavior, and that a halt keyword or equivalent was introduced. What do you think? / Chris
Re: assert(false) in release == splinter in eye
Thanks for the support, guys. :) Unfortunately, halt would still need to be a keyword if one wants to keep the full behavior of assert(0), where the compiler knows that it affects the control flow of the program.

Legal:

    int main() { assert(0); }

Illegal (Error: function D main has no return statement, but is expected to return a value of type int):

    int main() { int a = 0; assert(a); }

2010/10/10 Tomek Sowiński j...@ask.me: Christopher Bergqvist wrote: Hi, Time for some Sunday nitpicking. While reading TDPL, one thing that stuck out to me was the special behavior of assert(false). Consider the following program compiled with -release. void main() { int a = 0; assert(a); } That program will run without errors. Changing the type of variable a from int to enum results in the program segfaulting thanks to the compiler being able to know the value of the expression a at compile time and inserting a HLT/halt instruction. Having the ability to change something subtle in a more complex expression or series of expressions without realizing you made a compile time assert(false) which crashes your program feels ugly. I would prefer it if assert() didn't have this special type of behavior, and that a halt keyword or equivalent was introduced. What do you think?

I have the same feeling. 'halt' is good; 'fail' is good too. It doesn't have to be a keyword; a function in object.d would suffice. BTW, does anybody know the reason for the assert(0) infernal syntax? -- Tomek