Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Saturday, 8 September 2018 at 14:20:10 UTC, Laeeth Isharc wrote:
> Religions have believers but not supporters - in fact saying you are a supporter says you are not a member of that faith or community.

If you are a supporter of Jesus Christ's efforts, then you most certainly are a Christian. If you are a supporter of the Pope, then you may or may not be Catholic, but you most likely are Christian or sympathise with the faith.

Programming languages are more like power tools. You may be a big fan of Makita and dislike using other power tools like Bosch and DeWalt, or you may have different preferences based on the situation, or you may accept whatever you have at hand. Being a supporter is stretching it, though... although I am sure that people who only have Makita in their toolbox feel that they are supporting the company.

> Social institutions need support to develop - language is a very old human institution, and programming languages have more similarity with natural languages along certain dimensions (I'm aware that NLP is your field) than some recognise.

Sounds like a fallacy.

> So, why shouldn't a language have supporters? I give some money to the D Foundation - this is called providing support.

If you hope to gain some kind of return for it, or consequences that you benefit from, then it is more like obtaining support and influence through providing funds. I.e. paying for support...

> It's odd - if something isn't useful for me then either I just move on and find something that is, or I try to directly act myself or organise others to improve it so it is useful. I don't stand there grumbling at the toolmakers whilst taking no positive action to make that change happen.

Pointing out that there is a problem that needs to be solved in order to reach a state where the tool is applicable in a production line... is not grumbling. It is healthy. Whether that leads to positive actions (changes in policies) can only be affected through politics, not "positive action".
It doesn't help to buy a new, bigger and better motor if the transmission is broken.
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Saturday, September 8, 2018 8:05:04 AM MDT Laeeth Isharc via Digitalmars-d wrote:
> On Thursday, 6 September 2018 at 20:15:22 UTC, Jonathan M Davis wrote:
>> On Thursday, September 6, 2018 1:04:45 PM MDT aliak via Digitalmars-d wrote:
>>> D makes the code-point case default and hence that becomes the simplest to use. But unfortunately, the only thing I can think of that requires code point representations is when dealing specifically with unicode algorithms (normalization, etc). Here's a good read on code points: https://manishearth.github.io/blog/2017/01/14/stop-ascribing-meaning-to-unicode-code-points/
>>>
>>> tl;dr: application logic does not need or want to deal with code points. For speed units work, and for correctness, graphemes work.
>>
>> I think that it's pretty clear that code points are objectively the worst level to be the default. Unfortunately, changing it to _anything_ else is not going to be an easy feat at this point. But if we can first ensure that Phobos in general doesn't rely on it (i.e. in general, it can deal with ranges of char, wchar, dchar, or graphemes correctly rather than assuming that all ranges of characters are ranges of dchar), then maybe we can figure something out. Unfortunately, while some work has been done towards that, what's mostly happened is that folks have complained about auto-decoding without doing much to improve the current situation. There's a lot more to this than simply ripping out auto-decoding even if every D user on the planet agreed that outright breaking almost every existing D program to get rid of auto-decoding was worth it. But as with too many things around here, there's a lot more talking than working. And actually, as such, I should probably stop discussing this and go do something useful.
> A tutorial page linked from the front page with some examples would go a long way to making it easier for people. If I had time and understood strings enough to explain to others I would try to make a start, but unfortunately neither are true.

Writing up an article on proper Unicode handling in D is on my todo list, but my todo list of things to do for D is long enough that I don't know when I'm going to get to it.

> And if we are doing things right with RCString, then isn't it easier to make the change with that first - which is new so can't break code - and in some years when people are used to working that way update Phobos (compiler switch in beginning and have big transition a few years after that).

Well, I'm not actually convinced that what we have for RCString right now _is_ doing the right thing, but even if it is, that doesn't fix the issue that string doesn't do the right thing, and code needs to take that into account - especially if it's generic code. The better job we do at making Phobos code work with arbitrary ranges of characters, the less of an issue that is, but you're still pretty much forced to deal with it in a number of cases if you want your code to be efficient or if you want a function to be able to accept a string and return a string rather than a wrapper range.

Using RCString in your code would reduce how much you had to worry about it, but it doesn't completely solve the problem. And if you're doing stuff like writing a library for other people to use, then you definitely can't just ignore the issue. So, an RCString that handles Unicode sanely will definitely help, but it's not really a fix. And plenty of code is still going to be written to use strings (especially when -betterC is involved). RCString is going to be another option, but it's not going to replace string. Even if RCString became the most common string type to use (which I question is going to ever happen), dynamic arrays of char, wchar, etc.
are still going to exist in the language and are still going to have to be handled correctly. Phobos won't be able to assume that all of the code out there is using RCString and not string.

The combination of improving Phobos so that it works properly with ranges of characters in general (and not just strings or ranges of dchar) and having an alternate string type that does the right thing will definitely help and will need to be done if we have any hope of actually removing auto-decoding, but even with all of that, I don't see how it would be possible to really deprecate the old behavior. We _might_ be able to do something if we're willing to deprecate std.algorithm and std.range (since std.range gives you the current definitions of the range primitives for arrays, and std.algorithm publicly imports std.range), but you still then have the problem of two different definitions of the range primitives for arrays and all of the problems that that causes (even if it's only for the deprecation period). So, strings would end up behaving drastically differently with range-based functions
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, September 6, 2018 3:15:59 PM MDT aliak via Digitalmars-d wrote:
> On Thursday, 6 September 2018 at 20:15:22 UTC, Jonathan M Davis wrote:
>> On Thursday, September 6, 2018 1:04:45 PM MDT aliak via Digitalmars-d wrote:
>>> D makes the code-point case default and hence that becomes the simplest to use. But unfortunately, the only thing I can think of that requires code point representations is when dealing specifically with unicode algorithms (normalization, etc). Here's a good read on code points: https://manishearth.github.io/blog/2017/01/14/stop-ascribing-meaning-to-unicode-code-points/
>>>
>>> tl;dr: application logic does not need or want to deal with code points. For speed units work, and for correctness, graphemes work.
>>
>> I think that it's pretty clear that code points are objectively the worst level to be the default. Unfortunately, changing it to _anything_ else is not going to be an easy feat at this point. But if we can first ensure that Phobos in general doesn't rely on it (i.e. in general, it can deal with ranges of char, wchar, dchar, or graphemes correctly rather than assuming that all ranges of characters are ranges of dchar), then maybe we can figure something out. Unfortunately, while some work has been done towards that, what's mostly happened is that folks have complained about auto-decoding without doing much to improve the current situation. There's a lot more to this than simply ripping out auto-decoding even if every D user on the planet agreed that outright breaking almost every existing D program to get rid of auto-decoding was worth it. But as with too many things around here, there's a lot more talking than working. And actually, as such, I should probably stop discussing this and go do something useful.
>> - Jonathan M Davis

> Is there a unittest somewhere in phobos you know that one can be pointed to that shows the handling of these 4 variations you say should be dealt with first? Or maybe a PR that did some of this work that one could investigate?
>
> I ask so I can see in code what it means to make something not rely on autodecoding and deal with ranges of char, wchar, dchar or graphemes.
>
> Or a current "easy" bugzilla issue maybe that one could try a hand at?

Not really. The handling of this has generally been too ad-hoc. There are plenty of examples of handling different string types, and there are a few handling different ranges of character types, but there's a distinct lack of tests involving graphemes. And the correct behavior for each is going to depend on what exactly the function does - e.g. almost certainly, the correct thing for filter to do is to not do anything special for ranges of characters at all and just filter on the element type of the range (even though it would almost always be incorrect to filter a range of char unless it's known to be all ASCII), while on the other hand, find is clearly designed to handle different encodings. So, it needs to be able to find a dchar or grapheme in a range of char. And of course, there's the issue of how normalization should be handled (if at all).

A number of the tests in std.utf and std.string do a good job of testing Unicode strings of varying encodings, and std.utf does a good job overall of testing ranges of char, wchar, and dchar which aren't strings, but I'm not sure that anything in Phobos outside of std.uni currently does anything with ranges of graphemes. std.conv.to does have some tests for ranges of char, wchar, and dchar due to a bug fix. e.g.
// bugzilla 15800
@safe unittest
{
    import std.utf : byCodeUnit, byChar, byWchar, byDchar;

    assert(to!int(byCodeUnit("10")) == 10);
    assert(to!int(byCodeUnit("10"), 10) == 10);
    assert(to!int(byCodeUnit("10"w)) == 10);
    assert(to!int(byCodeUnit("10"w), 10) == 10);
    assert(to!int(byChar("10")) == 10);
    assert(to!int(byChar("10"), 10) == 10);
    assert(to!int(byWchar("10")) == 10);
    assert(to!int(byWchar("10"), 10) == 10);
    assert(to!int(byDchar("10")) == 10);
    assert(to!int(byDchar("10"), 10) == 10);
}

but there are no grapheme tests, and no Unicode characters are involved (though I'm not sure that much in std.conv really needs to worry about Unicode characters). So, there are tests scattered all over the place which do pieces of what they need to be doing, but I'm not sure that there are currently any that test the full range of character ranges that they really need to be testing. As with testing reference type ranges, such tests have generally been added only when fixing a specific bug, and there hasn't been a sufficient effort to just go through all of the affected functions and add appropriate tests. And unfortunately, unlike with reference type ranges, the correct behavior of a function when faced with ranges of different character types is going to be highly dependent on what the function does.
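The point above about filter being wrong on a range of char unless it's all ASCII is easy to see outside D as well: a filter applied at the code-unit level can tear a multi-byte sequence apart. A quick sketch (in Python rather than D, with bytes standing in for UTF-8 code units; the string and the filtered byte value are just illustrative choices):

```python
# "é" encodes to two UTF-8 code units: 0xC3 0xA9.
s = "héllo"
raw = s.encode("utf-8")

# A naive "filter" over code units that happens to drop the byte 0xA9
# removes the second half of the two-byte sequence for "é"...
kept = bytes(b for b in raw if b != 0xA9)

# ...so the result is no longer valid UTF-8.
try:
    kept.decode("utf-8")
    valid = True
except UnicodeDecodeError:
    valid = False

print(valid)  # False: filtering code units broke the encoding
```

This is why a correct generic filter either operates on the element type as-is (and leaves the caller responsible for choosing byCodeUnit/byGrapheme) or must be encoding-aware.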
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 14:42:14 UTC, Chris wrote:
> On Thursday, 6 September 2018 at 14:30:38 UTC, Guillaume Piolat wrote:
>> On Thursday, 6 September 2018 at 13:30:11 UTC, Chris wrote:
>>> And autodecode is a good example of experts getting it wrong, because, you know, you cannot be an expert in all fields.
>>
>> I think the problem was that it was discovered too late. There are very valid reasons not to talk about auto-decoding again:
>> - it's too late to remove because of breakage
>> - attempts at removing it were _already_ tried
>> - it has been debated to DEATH
>> - there is an easy work-around
>> So any discussion _now_ would have the very same structure of the discussion _then_, and would lead to the exact same result. It's quite tragic. And I urge the real D supporters to let such conversations die (topics debated to death) as soon as they appear.
>
> The real supporters? So it's a religion? For me it's about technology and finding a good tool for a job.

Religions have believers but not supporters - in fact, saying you are a supporter says you are not a member of that faith or community. You can support the Catholic Church's efforts to relieve poverty in XYZ country without being a core part of that effort directly.

Social institutions need support to develop - language is a very old human institution, and programming languages have more similarity with natural languages along certain dimensions (I'm aware that NLP is your field) than some recognise. So, why shouldn't a language have supporters?

I give some money to the D Foundation - this is called providing support. Does that make me a zealot, or someone who confuses a computer programming language with a religion? I don't think so. I give money to the Foundation because it's a win-win. It makes me happy to support the development of things that are beautiful, and it's commercially a no-brainer because of the incidental benefits it brings.
Probably I would do so without those benefits, but on the other hand the best choices in life often end up solving problems you weren't even planning on solving and maybe didn't know you had.

Does that make me a monomaniac who thinks D should be used everywhere, and only D - the one true language? I don't think so. I confess to being excited by the possibility of writing web applications in D, but that has much more to do with Javascript and the ecosystem than it does D. And on the other hand - even though I have supported the development of a Jupyter kernel for D (something that conceivably could make Julia less necessary) - I'm planning on doing more with Julia, because it's a better solution for some of our commercial problems than anything else I could find, including D. Does using Julia mean we will write less D? No - being able to do more work productively means writing more code, probably including more D, Python and C#.

I suggest the problem is in fact the entitlement of people who expect others to give them things for free without recognising that some appreciation would be in order, and that if one can help, doing so in whatever way possible is probably the right thing to do, even if it's in a small way in the beginning. This is of course a well-known challenge of open-source projects in general, but it's my belief it's a fleeting period already passing for D.

You know, sometimes it's clear from the way someone argues that it isn't about what they say. If the things they claimed were problems were in fact anti-problems (merits), they would make different arguments but with the same emotional tone.

It's odd - if something isn't useful for me then either I just move on and find something that is, or I try to directly act myself or organise others to improve it so it is useful. I don't stand there grumbling at the toolmakers whilst taking no positive action to make that change happen.
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 20:15:22 UTC, Jonathan M Davis wrote:
> On Thursday, September 6, 2018 1:04:45 PM MDT aliak via Digitalmars-d wrote:
>> D makes the code-point case default and hence that becomes the simplest to use. But unfortunately, the only thing I can think of that requires code point representations is when dealing specifically with unicode algorithms (normalization, etc). Here's a good read on code points: https://manishearth.github.io/blog/2017/01/14/stop-ascribing-meaning-to-unicode-code-points/
>>
>> tl;dr: application logic does not need or want to deal with code points. For speed units work, and for correctness, graphemes work.
>
> I think that it's pretty clear that code points are objectively the worst level to be the default. Unfortunately, changing it to _anything_ else is not going to be an easy feat at this point. But if we can first ensure that Phobos in general doesn't rely on it (i.e. in general, it can deal with ranges of char, wchar, dchar, or graphemes correctly rather than assuming that all ranges of characters are ranges of dchar), then maybe we can figure something out. Unfortunately, while some work has been done towards that, what's mostly happened is that folks have complained about auto-decoding without doing much to improve the current situation. There's a lot more to this than simply ripping out auto-decoding even if every D user on the planet agreed that outright breaking almost every existing D program to get rid of auto-decoding was worth it. But as with too many things around here, there's a lot more talking than working. And actually, as such, I should probably stop discussing this and go do something useful.
>
> - Jonathan M Davis

A tutorial page linked from the front page with some examples would go a long way to making it easier for people. If I had time and understood strings enough to explain to others I would try to make a start, but unfortunately neither are true.
And if we are doing things right with RCString, then isn't it easier to make the change with that first - which is new, so can't break code - and in some years, when people are used to working that way, update Phobos (compiler switch in the beginning, and the big transition a few years after that)?

Isn't this one of the challenges created by the tension between D being both a high-level and a low-level language? The higher the aim, the more problems you will encounter getting there. That's okay.

And isn't the obstacle to removing auto-decoding that it seems to be a monolithic challenge of overwhelming magnitude, whereas if we could figure out some steps to eat the elephant one mouthful at a time (which might mean starting with RCString) then it would seem less intimidating? It will take years anyway perhaps - but so what?
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 17:19:01 UTC, Joakim wrote:
> No, Swift counts grapheme clusters by default, so it gives 1. I suggest you read the linked Swift chapter above. I think it's the wrong choice for performance, but they chose to emphasize intuitiveness for the common case.

I'd like to point out that Swift spent a lot of time reworking how strings are handled. If my memory serves me well, they reworked strings from version 2 to 3 and finalized them in version 4: "Swift 4 includes a faster, easier to use String implementation that retains Unicode correctness and adds support for creating, using and managing substrings."

It took them somewhere along the lines of two years to get string handling to an acceptable and predictable state. And it annoyed the Swift user base greatly, but a lot of changes got made to reach a stable API.

Being honest, I personally find Swift an easier language, despite it lacking IDE support on several platforms and having no official Windows compiler.
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 20:15:22 UTC, Jonathan M Davis wrote:
> On Thursday, September 6, 2018 1:04:45 PM MDT aliak via Digitalmars-d wrote:
>> D makes the code-point case default and hence that becomes the simplest to use. But unfortunately, the only thing I can think of that requires code point representations is when dealing specifically with unicode algorithms (normalization, etc). Here's a good read on code points: https://manishearth.github.io/blog/2017/01/14/stop-ascribing-meaning-to-unicode-code-points/
>>
>> tl;dr: application logic does not need or want to deal with code points. For speed units work, and for correctness, graphemes work.
>
> I think that it's pretty clear that code points are objectively the worst level to be the default. Unfortunately, changing it to _anything_ else is not going to be an easy feat at this point. But if we can first ensure that Phobos in general doesn't rely on it (i.e. in general, it can deal with ranges of char, wchar, dchar, or graphemes correctly rather than assuming that all ranges of characters are ranges of dchar), then maybe we can figure something out. Unfortunately, while some work has been done towards that, what's mostly happened is that folks have complained about auto-decoding without doing much to improve the current situation. There's a lot more to this than simply ripping out auto-decoding even if every D user on the planet agreed that outright breaking almost every existing D program to get rid of auto-decoding was worth it. But as with too many things around here, there's a lot more talking than working. And actually, as such, I should probably stop discussing this and go do something useful.
>
> - Jonathan M Davis

Is there a unittest somewhere in phobos you know that one can be pointed to that shows the handling of these 4 variations you say should be dealt with first? Or maybe a PR that did some of this work that one could investigate?
I ask so I can see in code what it means to make something not rely on autodecoding and deal with ranges of char, wchar, dchar or graphemes. Or a current "easy" bugzilla issue maybe that one could try a hand at?
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, September 6, 2018 1:04:45 PM MDT aliak via Digitalmars-d wrote:
> D makes the code-point case default and hence that becomes the simplest to use. But unfortunately, the only thing I can think of that requires code point representations is when dealing specifically with unicode algorithms (normalization, etc). Here's a good read on code points: https://manishearth.github.io/blog/2017/01/14/stop-ascribing-meaning-to-unicode-code-points/
>
> tl;dr: application logic does not need or want to deal with code points. For speed units work, and for correctness, graphemes work.

I think that it's pretty clear that code points are objectively the worst level to be the default. Unfortunately, changing it to _anything_ else is not going to be an easy feat at this point. But if we can first ensure that Phobos in general doesn't rely on it (i.e. in general, it can deal with ranges of char, wchar, dchar, or graphemes correctly rather than assuming that all ranges of characters are ranges of dchar), then maybe we can figure something out. Unfortunately, while some work has been done towards that, what's mostly happened is that folks have complained about auto-decoding without doing much to improve the current situation. There's a lot more to this than simply ripping out auto-decoding even if every D user on the planet agreed that outright breaking almost every existing D program to get rid of auto-decoding was worth it. But as with too many things around here, there's a lot more talking than working. And actually, as such, I should probably stop discussing this and go do something useful.

- Jonathan M Davis
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 16:44:11 UTC, H. S. Teoh wrote:
> On Thu, Sep 06, 2018 at 02:42:58PM +0000, Dukc via Digitalmars-d wrote:
>> On Thursday, 6 September 2018 at 14:17:28 UTC, aliak wrote:
>>> // D
>>> auto a = "á";
>>> auto b = "á";
>>> auto c = "\u200B";
>>> auto x = a ~ c ~ a;
>>> auto y = b ~ c ~ b;
>>>
>>> writeln(a.length); // 2 wtf
>>> writeln(b.length); // 3 wtf
>>> writeln(x.length); // 7 wtf
>>> writeln(y.length); // 9 wtf
> [...]
>
> This is an unfair comparison. In the Swift version you used .count, but here you used .length, which is the length of the array, NOT the number of characters or whatever you expect it to be. You should rather use .count and specify exactly what you want to count, e.g., byCodePoint or byGrapheme.
>
> I suspect the Swift version will give you unexpected results if you did something like compare "á" to "a\u301", for example (which, in case it isn't obvious, are visually identical to each other, and as far as an end user is concerned, should only count as 1 grapheme).
>
> Not even normalization will help you if you have a string like "a\u301\u302": in that case, the *only* correct way to count the number of visual characters is byGrapheme, and I highly doubt Swift's .count will give you the correct answer in that case. (I expect that Swift's .count will count code points, as is the usual default in many languages, which is unfortunately wrong when you're thinking about visual characters, which are called graphemes in Unicode parlance.)
>
> And even in your given example, what should .count return when there's a zero-width character? If you're counting the number of visual places taken by the string (e.g., you're trying to align output in a fixed-width terminal), then *both* versions of your code are wrong, because zero-width characters do not occupy any space when displayed. If you're counting the number of code points, though, e.g., to allocate the right buffer size to convert to dstring, then you want to count the zero-width character as 1 rather than 0. And that's not to mention double-width characters, which should count as 2 if you're outputting to a fixed-width terminal.
>
> Again I say, you need to know how Unicode works. Otherwise you can easily deceive yourself to think that your code (both in D and in Swift and in any other language) is correct, when in fact it will fail miserably when it receives input that you didn't think of. Unicode is NOT ASCII, and you CANNOT assume there's a 1-to-1 mapping between "characters" and display length. Or 1-to-1 mapping between any of the various concepts of string "length", in fact.
>
> In ASCII, array length == number of code points == number of graphemes == display width. In Unicode, array length != number of code points != number of graphemes != display width.
>
> Code written by anyone who does not understand this is WRONG, because you will inevitably end up using the wrong value for the wrong thing: e.g., array length for number of code points, or number of code points for display length. Not even .byGrapheme will save you here; you *need* to understand that zero-width and double-width characters exist, and what they imply for display width. You *need* to understand the difference between code points and graphemes. There is no single default that will work in every case, because there are DIFFERENT CORRECT ANSWERS depending on what your code is trying to accomplish. Pretending that you can just brush all this detail under the rug of a single number is just deceiving yourself, and will inevitably result in wrong code that will fail to handle Unicode input correctly.
>
> T

It's a totally fair comparison. .count in Swift is the equivalent of .length in D; you use that to get the size of an array, etc. They've just implemented string.length as string.byGrapheme.walkLength. So it's intuitively correct (and yes, slower). If you didn't want the default, though, then you could also specify what "view" over characters you want. E.g.

let a = "á̂"
a.count                 // 1 <-- Yes, it is exactly as expected
a.unicodeScalars.count  // 3
a.utf8.count            // 5

I don't really see any issues with a zero-width character. If you want to deal with screen width (i.e. pixel space), that's not the same as how many characters are in a string. And it doesn't matter whether you go byGrapheme or byCodePoint or byCodeUnit, because none of those represent a single column on screen. A zero-width character is 0 *width* but it's still *one* character. There's no .length/size/count in any language (that I've heard of) that'll give you your screen space from their string type. You query the font API for that, as that depends on font size, kerning, style and face.

And again, I agree you need to know how unicode works. I don't argue that at all. I'm just saying that having the default be incorrect for application logic is just silly, and when people have to do things like string.representation.normalize.byGrapheme or whatever to
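For what it's worth, the 2/3/7/9 numbers in the D snippet and the 3/5 from Swift's scalar and UTF-8 views are easy to reproduce at the byte and code-point level. A quick sketch (in Python rather than D or Swift; Python's stdlib has no grapheme counter, so only code units and code points are shown, and the grapheme count of 1 is taken from Swift's behavior as quoted above):

```python
a = "\u00e1"   # precomposed "á": 1 code point, 2 UTF-8 code units
b = "a\u0301"  # "a" + combining acute: 2 code points, 3 UTF-8 code units
c = "\u200b"   # zero-width space: 1 code point, 3 UTF-8 code units
x = a + c + a
y = b + c + b

# D's .length counts UTF-8 code units, hence the "wtf" numbers in the thread:
print(len(a.encode("utf-8")))  # 2
print(len(b.encode("utf-8")))  # 3
print(len(x.encode("utf-8")))  # 7
print(len(y.encode("utf-8")))  # 9

# The "á̂" from the Swift example is "a" + combining acute + combining circumflex:
s = "a\u0301\u0302"
print(len(s))                  # 3 code points (cf. a.unicodeScalars.count)
print(len(s.encode("utf-8")))  # 5 code units (cf. a.utf8.count)
# Swift's a.count would be 1 (one grapheme cluster); counting graphemes in
# Python needs a third-party library such as `regex` (the \X pattern).
```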
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, September 6, 2018 10:44:11 AM MDT H. S. Teoh via Digitalmars-d wrote:
> On Thu, Sep 06, 2018 at 02:42:58PM +0000, Dukc via Digitalmars-d wrote:
>> On Thursday, 6 September 2018 at 14:17:28 UTC, aliak wrote:
>>> // D
>>> auto a = "á";
>>> auto b = "á";
>>> auto c = "\u200B";
>>> auto x = a ~ c ~ a;
>>> auto y = b ~ c ~ b;
>>>
>>> writeln(a.length); // 2 wtf
>>> writeln(b.length); // 3 wtf
>>> writeln(x.length); // 7 wtf
>>> writeln(y.length); // 9 wtf
> [...]
>
> This is an unfair comparison. In the Swift version you used .count, but here you used .length, which is the length of the array, NOT the number of characters or whatever you expect it to be. You should rather use .count and specify exactly what you want to count, e.g., byCodePoint or byGrapheme.
>
> I suspect the Swift version will give you unexpected results if you did something like compare "á" to "a\u301", for example (which, in case it isn't obvious, are visually identical to each other, and as far as an end user is concerned, should only count as 1 grapheme).
>
> Not even normalization will help you if you have a string like "a\u301\u302": in that case, the *only* correct way to count the number of visual characters is byGrapheme, and I highly doubt Swift's .count will give you the correct answer in that case. (I expect that Swift's .count will count code points, as is the usual default in many languages, which is unfortunately wrong when you're thinking about visual characters, which are called graphemes in Unicode parlance.)
>
> And even in your given example, what should .count return when there's a zero-width character? If you're counting the number of visual places taken by the string (e.g., you're trying to align output in a fixed-width terminal), then *both* versions of your code are wrong, because zero-width characters do not occupy any space when displayed. If you're counting the number of code points, though, e.g., to allocate the right buffer size to convert to dstring, then you want to count the zero-width character as 1 rather than 0. And that's not to mention double-width characters, which should count as 2 if you're outputting to a fixed-width terminal.
>
> Again I say, you need to know how Unicode works. Otherwise you can easily deceive yourself to think that your code (both in D and in Swift and in any other language) is correct, when in fact it will fail miserably when it receives input that you didn't think of. Unicode is NOT ASCII, and you CANNOT assume there's a 1-to-1 mapping between "characters" and display length. Or 1-to-1 mapping between any of the various concepts of string "length", in fact.
>
> In ASCII, array length == number of code points == number of graphemes == display width. In Unicode, array length != number of code points != number of graphemes != display width.
>
> Code written by anyone who does not understand this is WRONG, because you will inevitably end up using the wrong value for the wrong thing: e.g., array length for number of code points, or number of code points for display length. Not even .byGrapheme will save you here; you *need* to understand that zero-width and double-width characters exist, and what they imply for display width. You *need* to understand the difference between code points and graphemes. There is no single default that will work in every case, because there are DIFFERENT CORRECT ANSWERS depending on what your code is trying to accomplish. Pretending that you can just brush all this detail under the rug of a single number is just deceiving yourself, and will inevitably result in wrong code that will fail to handle Unicode input correctly.

Indeed.
And unfortunately, the net result is that a large percentage of the string-processing code out there is going to be wrong, and I don't think that there's any way around that, because Unicode is simply too complicated for the average programmer to understand it (sad as that may be) - especially when most of them don't want to have to understand it. Really, I'd say that there are only three options that even might be sane if you really have the flexibility to design a proper solution: 1. Treat strings as ranges of code units by default. 2. Don't allow strings to be ranges, to be iterated, or indexed. They're opaque types. 3. Treat strings as ranges of graphemes. If strings are treated as ranges of code units by default (particularly if they're UTF-8), you'll get failures very quickly if you're dealing with non-ASCII, and you screw up the Unicode handling. It's also by far the most performant solution and in many cases is exactly the right thing to do. Obviously, something like byCodePoint or byGrapheme would then be needed in the cases where code points or graphemes are the appropriate level to iterate at. If strings are opaque types (with ways to get ranges over code units, code points, etc.), that
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 16:44:11 UTC, H. S. Teoh wrote: On Thu, Sep 06, 2018 at 02:42:58PM +, Dukc via Digitalmars-d wrote: On Thursday, 6 September 2018 at 14:17:28 UTC, aliak wrote: > // D > auto a = "á"; > auto b = "á"; > auto c = "\u200B"; > auto x = a ~ c ~ a; > auto y = b ~ c ~ b; > > writeln(a.length); // 2 wtf > writeln(b.length); // 3 wtf > writeln(x.length); // 7 wtf > writeln(y.length); // 9 wtf [...] This is an unfair comparison. In the Swift version you used .count, but here you used .length, which is the length of the array, NOT the number of characters or whatever you expect it to be. You should rather use .count and specify exactly what you want to count, e.g., byCodePoint or byGrapheme. I suspect the Swift version will give you unexpected results if you did something like compare "á" to "a\u301", for example (which, in case it isn't obvious, are visually identical to each other, and as far as an end user is concerned, should only count as 1 grapheme). Not even normalization will help you if you have a string like "a\u301\u302": in that case, the *only* correct way to count the number of visual characters is byGrapheme, and I highly doubt Swift's .count will give you the correct answer in that case. (I expect that Swift's .count will count code points, as is the usual default in many languages, which is unfortunately wrong when you're thinking about visual characters, which are called graphemes in Unicode parlance.) No, Swift counts grapheme clusters by default, so it gives 1. I suggest you read the linked Swift chapter above. I think it's the wrong choice for performance, but they chose to emphasize intuitiveness for the common case. I agree with most of the rest of what you wrote about programmers having no silver bullet to avoid Unicode's and languages' complexity.
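Swift's grapheme-cluster counting can be roughly approximated in other languages for the simple combining-mark case. A hedged Python sketch (the name `rough_grapheme_count` is invented here; real UAX #29 grapheme segmentation handles far more cases and needs a dedicated library):

```python
import unicodedata

def rough_grapheme_count(s: str) -> int:
    # Crude approximation of Swift's .count: don't count combining marks,
    # so 'a' + U+0301 counts as one user-perceived character. Real grapheme
    # segmentation (UAX #29) also covers ZWJ emoji, Hangul jamo, regional
    # indicators, and more.
    return sum(1 for ch in s if not unicodedata.combining(ch))

print(rough_grapheme_count("a\u0301"))  # 1, like Swift's .count
print(len("a\u0301"))                   # 2 code points
```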
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thu, Sep 06, 2018 at 02:42:58PM +, Dukc via Digitalmars-d wrote: > On Thursday, 6 September 2018 at 14:17:28 UTC, aliak wrote: > > // D > > auto a = "á"; > > auto b = "á"; > > auto c = "\u200B"; > > auto x = a ~ c ~ a; > > auto y = b ~ c ~ b; > > > > writeln(a.length); // 2 wtf > > writeln(b.length); // 3 wtf > > writeln(x.length); // 7 wtf > > writeln(y.length); // 9 wtf [...] This is an unfair comparison. In the Swift version you used .count, but here you used .length, which is the length of the array, NOT the number of characters or whatever you expect it to be. You should rather use .count and specify exactly what you want to count, e.g., byCodePoint or byGrapheme. I suspect the Swift version will give you unexpected results if you did something like compare "á" to "a\u301", for example (which, in case it isn't obvious, are visually identical to each other, and as far as an end user is concerned, should only count as 1 grapheme). Not even normalization will help you if you have a string like "a\u301\u302": in that case, the *only* correct way to count the number of visual characters is byGrapheme, and I highly doubt Swift's .count will give you the correct answer in that case. (I expect that Swift's .count will count code points, as is the usual default in many languages, which is unfortunately wrong when you're thinking about visual characters, which are called graphemes in Unicode parlance.) And even in your given example, what should .count return when there's a zero-width character? If you're counting the number of visual places taken by the string (e.g., you're trying to align output in a fixed-width terminal), then *both* versions of your code are wrong, because zero-width characters do not occupy any space when displayed. If you're counting the number of code points, though, e.g., to allocate the right buffer size to convert to dstring, then you want to count the zero-width character as 1 rather than 0. 
And that's not to mention double-width characters, which should count as 2 if you're outputting to a fixed-width terminal. Again I say, you need to know how Unicode works. Otherwise you can easily deceive yourself to think that your code (both in D and in Swift and in any other language) is correct, when in fact it will fail miserably when it receives input that you didn't think of. Unicode is NOT ASCII, and you CANNOT assume there's a 1-to-1 mapping between "characters" and display length. Or 1-to-1 mapping between any of the various concepts of string "length", in fact. In ASCII, array length == number of code points == number of graphemes == display width. In Unicode, array length != number of code points != number of graphemes != display width. Code written by anyone who does not understand this is WRONG, because you will inevitably end up using the wrong value for the wrong thing: e.g., array length for number of code points, or number of code points for display length. Not even .byGrapheme will save you here; you *need* to understand that zero-width and double-width characters exist, and what they imply for display width. You *need* to understand the difference between code points and graphemes. There is no single default that will work in every case, because there are DIFFERENT CORRECT ANSWERS depending on what your code is trying to accomplish. Pretending that you can just brush all this detail under the rug of a single number is just deceiving yourself, and will inevitably result in wrong code that will fail to handle Unicode input correctly. T -- It's amazing how careful choice of punctuation can leave you hanging:
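The inequality chain above (array length != code points != graphemes != display width) is easy to demonstrate outside D as well. A small Python sketch, stdlib only; note that Python's `len` counts code points and the stdlib has no grapheme segmentation, so the grapheme axis is only hinted at via normalization:

```python
import unicodedata

s = "a\u0301"  # 'a' + U+0301 combining acute: renders as one character
print(len(s.encode("utf-8")))                # 3 -- UTF-8 code units
print(len(s))                                # 2 -- code points
print(len(unicodedata.normalize("NFC", s)))  # 1 -- recomposed to U+00E1

# Display width is yet another axis: an East Asian "wide" character is a
# single code point but occupies two columns in a fixed-width terminal.
print(unicodedata.east_asian_width("\u4f60"))  # 'W'
```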
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thu, Sep 6, 2018 at 4:45 PM Dukc via Digitalmars-d < digitalmars-d@puremagic.com> wrote: > On Thursday, 6 September 2018 at 14:17:28 UTC, aliak wrote: > > // D > > auto a = "á"; > > auto b = "á"; > > auto c = "\u200B"; > > auto x = a ~ c ~ a; > > auto y = b ~ c ~ b; > > > > writeln(a.length); // 2 wtf > > writeln(b.length); // 3 wtf > > writeln(x.length); // 7 wtf > > writeln(y.length); // 9 wtf > > > > writeln(a == b); // false wtf > > writeln("ááá".canFind("á")); // false wtf > > > > I had to copy-paste that because I wondered how the last two can > be false. They are because á is encoded differently. if you > replace all occurences of it with a grapheme that fits to one > code point, the results are: > > 2 > 2 > 7 > 7 > true > true >

import std.stdio;
import std.algorithm : canFind;
import std.uni : normalize;

void main() {
    auto a = "á".normalize;
    auto b = "á".normalize;
    auto c = "\u200B".normalize;
    auto x = a ~ c ~ a;
    auto y = b ~ c ~ b;

    writeln(a.length); // 2
    writeln(b.length); // 2
    writeln(x.length); // 7
    writeln(y.length); // 7

    writeln(a == b); // true
    writeln("ááá".canFind("á".normalize)); // true
}
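For comparison, the same normalization fix works in other languages too. A Python sketch, assuming one input is precomposed (NFC) and the other decomposed (NFD):

```python
import unicodedata

a = "\u00e1"   # "á" as one precomposed code point
b = "a\u0301"  # "á" as 'a' + combining acute accent
print(a == b)  # False: raw code-point comparison

na = unicodedata.normalize("NFC", a)
nb = unicodedata.normalize("NFC", b)
print(na == nb)                    # True after normalizing both sides
print(nb in "\u00e1\u00e1\u00e1")  # substring search succeeds as well
```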
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 14:42:14 UTC, Chris wrote: Usually a sign to move on... You have said that at least 10 times in this very thread. Doomsayers are as old as D. It will be doing OK.
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 14:17:28 UTC, aliak wrote: Hehe, it's already a bit laughable that correctness is not preferred. // Swift let a = "á" let b = "á" let c = "\u{200B}" // zero width space let x = a + c + a let y = b + c + b print(a.count) // 1 print(b.count) // 1 print(x.count) // 3 print(y.count) // 3 print(a == b) // true print("ááá".range(of: "á") != nil) // true // D auto a = "á"; auto b = "á"; auto c = "\u200B"; auto x = a ~ c ~ a; auto y = b ~ c ~ b; writeln(a.length); // 2 wtf writeln(b.length); // 3 wtf writeln(x.length); // 7 wtf writeln(y.length); // 9 wtf writeln(a == b); // false wtf writeln("ááá".canFind("á")); // false wtf writeln(cast(ubyte[]) a); // [195, 161] writeln(cast(ubyte[]) b); // [97, 204, 129] At least for equality, it doesn't seem far fetched to me that both are not considered equal if they are not the same.
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 14:30:38 UTC, Guillaume Piolat wrote: On Thursday, 6 September 2018 at 13:30:11 UTC, Chris wrote: And autodecode is a good example of experts getting it wrong, because, you know, you cannot be an expert in all fields. I think the problem was that it was discovered too late. There are very valid reasons not to talk about auto-decoding again: - it's too late to remove because breakage - attempts at removing it were _already_ tried - it has been debated to DEATH - there is an easy work-around So any discussion _now_ would have the very same structure of the discussion _then_, and would lead to the exact same result. It's quite tragic. And I urge the real D supporters to let such conversation die (topics debated to death) as soon as they appear. The real supporters? So it's a religion? For me it's about technology and finding a good tool for a job. why shouldn't users be allowed to give feedback? Straw-man. I meant in _general_, not necessarily autodecode ;) If we don't get over _some_ technical debate, the only thing that is achieved is a loss of time for everyone involved. Translation: "Nothing to see here, move along!" Usually a sign to move on...
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 14:17:28 UTC, aliak wrote: // D auto a = "á"; auto b = "á"; auto c = "\u200B"; auto x = a ~ c ~ a; auto y = b ~ c ~ b; writeln(a.length); // 2 wtf writeln(b.length); // 3 wtf writeln(x.length); // 7 wtf writeln(y.length); // 9 wtf writeln(a == b); // false wtf writeln("ááá".canFind("á")); // false wtf I had to copy-paste that because I wondered how the last two can be false. They are because á is encoded differently. If you replace all occurrences of it with a grapheme that fits in one code point, the results are: 2 2 7 7 true true
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 14:33:27 UTC, rikki cattermole wrote: Either decide a list of conditions before we can break to remove it, or yes let's let this idea go. It isn't helping anyone. Can't you just mark it as deprecated and provide a library compatibility range (100% compatible)? Then people will just update their code to use the range... This should be possible to achieve using automated source-to-source translation in most cases.
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On 07/09/2018 2:30 AM, Guillaume Piolat wrote: On Thursday, 6 September 2018 at 13:30:11 UTC, Chris wrote: And autodecode is a good example of experts getting it wrong, because, you know, you cannot be an expert in all fields. I think the problem was that it was discovered too late. There are very valid reasons not to talk about auto-decoding again: - it's too late to remove because breakage - attempts at removing it were _already_ tried - it has been debated to DEATH - there is an easy work-around So any discussion _now_ would have the very same structure of the discussion _then_, and would lead to the exact same result. It's quite tragic. And I urge the real D supporters to let such conversation die (topics debated to death) as soon as they appear. +1 Either decide a list of conditions before we can break to remove it, or yes let's let this idea go. It isn't helping anyone.
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 13:30:11 UTC, Chris wrote: And autodecode is a good example of experts getting it wrong, because, you know, you cannot be an expert in all fields. I think the problem was that it was discovered too late. There are very valid reasons not to talk about auto-decoding again:
- it's too late to remove because of breakage
- attempts at removing it were _already_ tried
- it has been debated to DEATH
- there is an easy work-around
So any discussion _now_ would have the very same structure of the discussion _then_, and would lead to the exact same result. It's quite tragic. And I urge the real D supporters to let such conversations die (topics debated to death) as soon as they appear. Why shouldn't users be allowed to give feedback? Straw-man. If we don't get over _some_ technical debate, the only thing that is achieved is a loss of time for everyone involved.
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Wednesday, 5 September 2018 at 22:00:27 UTC, H. S. Teoh wrote: Because grapheme decoding is SLOW, and most of the time you don't even need it anyway. SLOW as in, it will easily add a factor of 3-5 (if not worse!) to your string processing time, which will make your natively-compiled D code a laughing stock of interpreted languages like Python. It will make autodecoding look like an optimization(!). Hehe, it's already a bit laughable that correctness is not preferred.

// Swift
let a = "á"
let b = "á"
let c = "\u{200B}" // zero width space
let x = a + c + a
let y = b + c + b

print(a.count) // 1
print(b.count) // 1
print(x.count) // 3
print(y.count) // 3
print(a == b) // true
print("ááá".range(of: "á") != nil) // true

// D
auto a = "á";
auto b = "á";
auto c = "\u200B";
auto x = a ~ c ~ a;
auto y = b ~ c ~ b;

writeln(a.length); // 2 wtf
writeln(b.length); // 3 wtf
writeln(x.length); // 7 wtf
writeln(y.length); // 9 wtf
writeln(a == b); // false wtf
writeln("ááá".canFind("á")); // false wtf

Tell me which one would cause the giggles again? If speed is preferred over correctness (which I very much disagree with, but for argument's sake...) then code points are still the wrong choice. So, speed was obviously (??) not the reason to prefer code points as the default. Here's a read on how Swift 4 strings behave. Absolutely amazing work there: https://oleb.net/blog/2017/11/swift-4-strings/ Grapheme decoding is really only necessary when (1) you're typesetting a Unicode string, and (2) you're counting the number of visual characters taken up by the string (though grapheme counting even in this case may not give you what you want, thanks to double-width characters, zero-width characters, etc. -- though it can form the basis of correct counting code). Yeah nah. Those are not the only 2 cases *ever* where grapheme decoding is correct. I don't think one can list all the cases where grapheme decoding is the correct behavior.
Off the top of me head you've already forgotten comparisons. And on top of that, comparing and counting have a bajillion* use cases. * number is an exaggeration. For all other cases, you really don't need grapheme decoding, and being forced to iterate over graphemes when unnecessary will add a horrible overhead, worse than autodecoding does today. As opposed to being forced to iterate with incorrect results? I understand that it's slower. I just don't think that justifies incorrect output. I agree with everything you've said next though, that people should understand Unicode. // Seriously, people need to get over the fantasy that they can just use Unicode without understanding how Unicode works. Most of the time, you can get the illusion that it's working, but actually 99% of the time the code is actually wrong and will do the wrong thing when given an unexpected (but still valid) Unicode string. You can't drive without a license, and even if you try anyway, the chances of ending up in a nasty accident are pretty high. People *need* to learn how to use Unicode properly before complaining about why this or that doesn't work the way they thought it should work. I agree that you should know about Unicode. And maybe you can't be correct 100% of the time, but you can very well get much closer than where D is right now. And yeah, you can't drive without a license, but most cars hopefully don't show you an incorrect speedometer reading because it produces faster drivers. T -- Gone Chopin. Bach in a minuet. Lol :D
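The comparison case raised here can be handled with a small canonical-equivalence helper rather than full grapheme decoding. A Python sketch (the helper name `canon_eq` is invented for illustration):

```python
import unicodedata

def canon_eq(x: str, y: str) -> bool:
    # Equal up to Unicode canonical equivalence: normalize both sides
    # to NFC before comparing, so precomposed and decomposed forms match.
    return unicodedata.normalize("NFC", x) == unicodedata.normalize("NFC", y)

print("\u00e1" == "a\u0301")          # False: different code point sequences
print(canon_eq("\u00e1", "a\u0301"))  # True: same user-perceived text
```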
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 11:01:55 UTC, Guillaume Piolat wrote: So Unicode in D works EXACTLY as expected, yet people in this thread act as if the house is on fire. Expected by whom? The Unicode expert or the user? D dying because of auto-decoding? Who can possibly think that in their right mind? Nobody, it's just another major issue to be fixed. The worst part of this forum is that suddenly everyone, by virtue of posting in a newsgroup, is an anointed language design expert. Let me break that to you: core developers are language experts. The rest of us are users, and that doesn't necessarily make us qualified to design a language. Calm down. I for my part never said I was an expert on language design. Number one: experts do make mistakes too, there is nothing wrong with that. And autodecode is a good example of experts getting it wrong, because, you know, you cannot be an expert in all fields. I think the problem was that it was discovered too late. Number two: why shouldn't users be allowed to give feedback? Engineers and developers need feedback, else we'd still be using CLIs, wouldn't we. The user doesn't need to be an expert to know what s/he likes and doesn't like, and developers / engineers often have a different point of view as to what is important / annoying etc. That's why IT companies introduced customer service, because the direct interaction between developers and users would often end badly (disgruntled customers).
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 11:01:55 UTC, Guillaume Piolat wrote: Let me break that to you: core developers are language experts. The rest of us are users, and that doesn't necessarily make us qualified to design a language. Who?
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 11:43:31 UTC, ag0aep6g wrote: You say that D users shouldn't need a '"Unicode license" before they do anything with strings'. And you say that Python 3 gets it right (or maybe less wrong than D). But here we see that Python requires a similar amount of Unicode knowledge. Without your Unicode license, you couldn't make sense of `len` giving different results for two strings that look the same. So both D and Python require a Unicode license. But on top of that, D also requires an auto-decoding license. You need to know that `string` is both a range of code points and an array of code units. And you need to know that `.length` belongs to the array side, not the range side. Once you know that (and more), things start making sense in D. You'll need some basic knowledge of Unicode, if you deal with strings, that's for sure. But you don't need a "license" and it certainly shouldn't be used as an excuse for D's confusing nature when it comes to strings. Unicode is confusing enough, so you don't need to add another layer of complexity to confuse users further. And most certainly you shouldn't blame the user for being confused. Afaik, there's no warning label with an accompanying user manual for string handling. My point is: D doesn't require more Unicode knowledge than Python. But D's auto-decoding gives `string` a dual nature, and that can certainly be confusing. It's part of why everybody dislikes auto-decoding. D should be clear about it. I think it's too late for `string` to change its behavior (i.e. "á".length = 1). If you wanna change `string`'s behavior now, maybe a compiler switch would be an option for the transition period: -autodecode=off. Maybe a new type of string could be introduced that behaves like one would expect, say `ustring` for correct Unicode handling. Or `string` does that and you introduce a new type for high performance tasks (`rawstring` would unfortunately be confusing). 
The thing is that even basic things like string handling are complicated and flawed so that I don't want to use D for any future projects and I don't have the time to wait until it gets fixed one day, if it ever will get fixed that is. Neither does it seem to be a priority as opposed to other things that are maybe less important for production. But at least I'm wiser after this thread, since it has been made clear that things are not gonna change soon, at least not soon enough for me. This is why I'll file for D-vorce :) Will it be difficult? Maybe at the beginning, but it will make things easier in the long run. And at the end of the day, if you have to fix and rewrite parts of your code again and again due to frequent language changes, you might as well port it to a different PL altogether. But I have no hard feelings, it's a practical decision I had to make based on pros and cons. [snip]
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On 09/06/2018 12:40 PM, Chris wrote: To avoid this you have to normalize and recompose any decomposed characters. I remember that Mac OS X used (and still uses?) decomposed characters by default, so when you typed 'á' into your cli, it would automatically decompose it to 'a' + acute. `string` however returns len=2 for composed characters too. If you do a lot of string handling it will come back to bite you sooner or later. You say that D users shouldn't need a '"Unicode license" before they do anything with strings'. And you say that Python 3 gets it right (or maybe less wrong than D). But here we see that Python requires a similar amount of Unicode knowledge. Without your Unicode license, you couldn't make sense of `len` giving different results for two strings that look the same. So both D and Python require a Unicode license. But on top of that, D also requires an auto-decoding license. You need to know that `string` is both a range of code points and an array of code units. And you need to know that `.length` belongs to the array side, not the range side. Once you know that (and more), things start making sense in D. My point is: D doesn't require more Unicode knowledge than Python. But D's auto-decoding gives `string` a dual nature, and that can certainly be confusing. It's part of why everybody dislikes auto-decoding. (Not saying that Python is free from such pitfalls. I simply don't know the language well enough.)
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 11:19:14 UTC, Chris wrote: One problem imo is that they mixed the terms up: "Grapheme: A minimally distinctive unit of writing in the context of a particular writing system." In linguistics a grapheme is not a single character like "á" or "g". It may also be a combination of characters like in English spelling ("s" + "h") that maps to a phoneme (e.g. ship, shut, shadow). In German this sound is written as "sch" as in "Schiff" (ship) (but not always, cf. "s" in "Stange"). Sorry, this should read "In linguistics a grapheme is not _necessarily_ _only_ a single character like "á" or "g"."
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 10:44:45 UTC, Joakim wrote: [snip] You're not being fair here, Chris. I just saw this SO question that I think exemplifies how most programmers react to Unicode: "Trying to understand the subtleties of modern Unicode is making my head hurt. In particular, the distinction between code points, characters, glyphs and graphemes - concepts which in the simplest case, when dealing with English text using ASCII characters, all have a one-to-one relationship with each other - is causing me trouble. Seeing how these terms get used in documents like Matthias Bynens' JavaScript has a unicode problem or Wikipedia's piece on Han unification, I've gathered that these concepts are not the same thing and that it's dangerous to conflate them, but I'm kind of struggling to grasp what each term means. The Unicode Consortium offers a glossary to explain this stuff, but it's full of "definitions" like this: Abstract Character. A unit of information used for the organization, control, or representation of textual data. ... ... Character. ... (2) Synonym for abstract character. (3) The basic unit of encoding for the Unicode character encoding. ... ... Glyph. (1) An abstract form that represents one or more glyph images. (2) A synonym for glyph image. In displaying Unicode character data, one or more glyphs may be selected to depict a particular character. ... Grapheme. (1) A minimally distinctive unit of writing in the context of a particular writing system. ... Most of these definitions possess the quality of sounding very academic and formal, but lack the quality of meaning anything, or else defer the problem of definition to yet another glossary entry or section of the standard. So I seek the arcane wisdom of those more learned than I. How exactly do each of these concepts differ from each other, and in what circumstances would they not have a one-to-one relationship with each other?" 
https://stackoverflow.com/questions/27331819/whats-the-difference-between-a-character-a-code-point-a-glyph-and-a-grapheme Honestly, Unicode is a mess, and I believe we will all have to dump the Unicode standard and start over one day. Until that fine day, there is no neat solution to how to handle it, no matter how much you'd like to think so. Also, much of the complexity actually comes from the complexity of the various language alphabets, so that cannot be waved away no matter what standard you come up with, though Unicode certainly adds more unneeded complexity on top, which is why it should be dumped. One problem imo is that they mixed the terms up: "Grapheme: A minimally distinctive unit of writing in the context of a particular writing system." In linguistics a grapheme is not a single character like "á" or "g". It may also be a combination of characters like in English spelling ("s" + "h") that maps to a phoneme (e.g. ship, shut, shadow). In German this sound is written as "sch" as in "Schiff" (ship) (but not always, cf. "s" in "Stange"). Since Unicode is such a difficult beast to deal with, I'd say D (or any PL for that matter) needs, first and foremost, a clear policy about what's the default behavior - not ad hoc patches. Then maybe a strategy as to how the default behavior can be turned on and off, say for performance reasons. One way _could_ be a compiler switch to turn the default behavior on/off: -unicode or -uni or -utf8 or whatever, or maybe better a library solution like `ustring`. If you need high performance and checks are no issue for the most part (web crawling, data harvesting etc), get rid of autodecoding. Once you need to check for character/grapheme correctness (e.g. translation tools), make it available through something like `to!ustring`. Whichever way: be clear about it. But don't let the unsuspecting user use `string` and get bitten by it.
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Wednesday, 5 September 2018 at 07:48:34 UTC, Chris wrote:

import std.array : array;
import std.stdio : writefln;
import std.uni : byCodePoint, byGrapheme;
import std.utf : byCodeUnit;

void main() {
    string first = "á";
    writefln("%d", first.length); // prints 2

    auto firstCU = "á".byCodeUnit; // type is `ByCodeUnitImpl` (!)
    writefln("%d", firstCU.length); // prints 2

    auto firstGr = "á".byGrapheme.array; // type is `Grapheme[]`
    writefln("%d", firstGr.length); // prints 1

    auto firstCP = "á".byCodePoint.array; // type is `dchar[]`
    writefln("%d", firstCP.length); // prints 1

    dstring second = "á";
    writefln("%d", second.length); // prints 1 (That was easy!)

    // DMD64 D Compiler v2.081.2
}

So Unicode in D works EXACTLY as expected, yet people in this thread act as if the house is on fire. D dying because of auto-decoding? Who can possibly think that in their right mind? The worst part of this forum is that suddenly everyone, by virtue of posting in a newsgroup, is an anointed language design expert. Let me break that to you: core developers are language experts. The rest of us are users, and that doesn't necessarily make us qualified to design a language.
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 09:35:27 UTC, Chris wrote: On Thursday, 6 September 2018 at 08:44:15 UTC, nkm1 wrote: On Wednesday, 5 September 2018 at 07:48:34 UTC, Chris wrote: On Tuesday, 4 September 2018 at 21:36:16 UTC, Walter Bright wrote: Autodecode - I've suffered under that, too. The solution was fairly simple. Append .byCodeUnit to strings that would otherwise autodecode. Annoying, but hardly a showstopper. import std.array : array; import std.stdio : writefln; import std.uni : byCodePoint, byGrapheme; import std.utf : byCodeUnit; void main() { string first = "á"; writefln("%d", first.length); // prints 2 auto firstCU = "á".byCodeUnit; // type is `ByCodeUnitImpl` (!) writefln("%d", firstCU.length); // prints 2 auto firstGr = "á".byGrapheme.array; // type is `Grapheme[]` writefln("%d", firstGr.length); // prints 1 auto firstCP = "á".byCodePoint.array; // type is `dchar[]` writefln("%d", firstCP.length); // prints 1 dstring second = "á"; writefln("%d", second.length); // prints 1 (That was easy!) // DMD64 D Compiler v2.081.2 } And this has what to do with autodecoding? Nothing. I was just pointing out how awkward some basic things can be. autodecoding just adds to it in the sense that it's a useless overhead but will keep string handling in a limbo forever and ever and ever. TBH, it looks like you're just confused about how Unicode works. None of that is something particular to D. You should probably address your concerns to the Unicode Consortium. Not that they care. I'm actually not confused since I've been dealing with Unicode (and encodings in general) for quite a while now. Although I'm not a Unicode expert, I know what the operations above do and why. I'd only expect a modern PL to deal with Unicode correctly and have some guidelines as to the nitty-gritty. Since you understand Unicode well, enlighten us: what's the best default format to use for string iteration? 
You can argue that D chose the wrong default by having the stdlib auto-decode to code points in several places, and Walter and a host of the core D team would agree with you, and you can add me to the list too. But it's not clear there should be a default format at all, other than whatever you started off with, particularly for a programming language that values performance like D does, as each format choice comes with various speed vs. correctness trade-offs. Therefore, the programmer has to understand that complexity and make his own choice. You're acting like there's some obvious choice for how to handle Unicode that we're missing here, when the truth is that _no programming language knows how to handle Unicode well_, since handling a host of world languages in a single format is _inherently unintuitive_ and has significant efficiency tradeoffs between the different formats. And once again, it's the user's fault as in having some basic assumptions about how things should work. The user is just too stupid to use D properly - that's all. I know this type of behavior from the management of pubs and shops that had to close down, because nobody would go there anymore. Do you know the book "Crónica de una muerte anunciada" (Chronicle of a Death Foretold) by Gabriel García Márquez? "The central question at the core of the novella is how the death of Santiago Nasar was foreseen, yet no one tried to stop it."[1] [1] https://en.wikipedia.org/wiki/Chronicle_of_a_Death_Foretold#Key_themes You're not being fair here, Chris. I just saw this SO question that I think exemplifies how most programmers react to Unicode: "Trying to understand the subtleties of modern Unicode is making my head hurt. In particular, the distinction between code points, characters, glyphs and graphemes - concepts which in the simplest case, when dealing with English text using ASCII characters, all have a one-to-one relationship with each other - is causing me trouble.
Seeing how these terms get used in documents like Matthias Bynens' JavaScript has a unicode problem or Wikipedia's piece on Han unification, I've gathered that these concepts are not the same thing and that it's dangerous to conflate them, but I'm kind of struggling to grasp what each term means. The Unicode Consortium offers a glossary to explain this stuff, but it's full of "definitions" like this: Abstract Character. A unit of information used for the organization, control, or representation of textual data. ... ... Character. ... (2) Synonym for abstract character. (3) The basic unit of encoding for the Unicode character encoding. ... ... Glyph. (1) An abstract form that represents one or more glyph images. (2) A synonym for glyph image. In displaying Unicode character data, one or more glyphs may be selected to depict a particular character. ... Grapheme. (1) A minimally distinctive unit of writing in the context of a particular writing system. ... Mo
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 10:22:22 UTC, ag0aep6g wrote: On 09/06/2018 09:23 AM, Chris wrote: Python 3 gives me this: print(len("á")) 1 Python 3 also gives you this: print(len("á")) 2 (The example might not survive transfer from me to you if Unicode normalization happens along the way.) That's when you enter the 'á' as 'a' followed by U+0301 (combining acute accent). So Python's `len` counts in code points, like D's std.range does (auto-decoding). To avoid this you have to normalize and recompose any decomposed characters. I remember that Mac OS X used (and still uses?) decomposed characters by default, so when you typed 'á' into your cli, it would automatically decompose it to 'a' + acute. `string` however returns len=2 for composed characters too. If you do a lot of string handling it will come back to bite you sooner or later.
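The composed vs. decomposed distinction ag0aep6g describes can be reproduced directly in Python 3, the language being compared against here (a minimal sketch using only the standard library's unicodedata module):

```python
import unicodedata

composed = "\u00e1"     # 'á' as the single precomposed code point U+00E1
decomposed = "a\u0301"  # 'a' followed by U+0301 COMBINING ACUTE ACCENT

print(len(composed))    # 1 code point
print(len(decomposed))  # 2 code points, same visual character

# Normalizing to NFC recomposes the sequence; the two become equal.
nfc = unicodedata.normalize("NFC", decomposed)
print(len(nfc))         # 1
print(nfc == composed)  # True
```

This is why Python's len() can report either 1 or 2 for what looks like the same 'á': it counts code points, so the answer depends on whether normalization happened somewhere along the way.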
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On 09/06/2018 09:23 AM, Chris wrote: Python 3 gives me this: print(len("á")) 1 Python 3 also gives you this: print(len("á")) 2 (The example might not survive transfer from me to you if Unicode normalization happens along the way.) That's when you enter the 'á' as 'a' followed by U+0301 (combining acute accent). So Python's `len` counts in code points, like D's std.range does (auto-decoding).
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 08:44:15 UTC, nkm1 wrote: On Wednesday, 5 September 2018 at 07:48:34 UTC, Chris wrote: On Tuesday, 4 September 2018 at 21:36:16 UTC, Walter Bright wrote: Autodecode - I've suffered under that, too. The solution was fairly simple. Append .byCodeUnit to strings that would otherwise autodecode. Annoying, but hardly a showstopper. import std.array : array; import std.stdio : writefln; import std.uni : byCodePoint, byGrapheme; import std.utf : byCodeUnit; void main() { string first = "á"; writefln("%d", first.length); // prints 2 auto firstCU = "á".byCodeUnit; // type is `ByCodeUnitImpl` (!) writefln("%d", firstCU.length); // prints 2 auto firstGr = "á".byGrapheme.array; // type is `Grapheme[]` writefln("%d", firstGr.length); // prints 1 auto firstCP = "á".byCodePoint.array; // type is `dchar[]` writefln("%d", firstCP.length); // prints 1 dstring second = "á"; writefln("%d", second.length); // prints 1 (That was easy!) // DMD64 D Compiler v2.081.2 } And this has what to do with autodecoding? Nothing. I was just pointing out how awkward some basic things can be. autodecoding just adds to it in the sense that it's a useless overhead but will keep string handling in a limbo forever and ever and ever. TBH, it looks like you're just confused about how Unicode works. None of that is something particular to D. You should probably address your concerns to the Unicode Consortium. Not that they care. I'm actually not confused since I've been dealing with Unicode (and encodings in general) for quite a while now. Although I'm not a Unicode expert, I know what the operations above do and why. I'd only expect a modern PL to deal with Unicode correctly and have some guidelines as to the nitty-gritty. And once again, it's the user's fault as in having some basic assumptions about how things should work. The user is just too stupid to use D properly - that's all. 
I know this type of behavior from the management of pubs and shops that had to close down, because nobody would go there anymore. Do you know the book "Crónica de una muerte anunciada" (Chronicle of a Death Foretold) by Gabriel García Márquez? "The central question at the core of the novella is how the death of Santiago Nasar was foreseen, yet no one tried to stop it."[1] [1] https://en.wikipedia.org/wiki/Chronicle_of_a_Death_Foretold#Key_themes
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Wednesday, 5 September 2018 at 07:48:34 UTC, Chris wrote: On Tuesday, 4 September 2018 at 21:36:16 UTC, Walter Bright wrote: Autodecode - I've suffered under that, too. The solution was fairly simple. Append .byCodeUnit to strings that would otherwise autodecode. Annoying, but hardly a showstopper. import std.array : array; import std.stdio : writefln; import std.uni : byCodePoint, byGrapheme; import std.utf : byCodeUnit; void main() { string first = "á"; writefln("%d", first.length); // prints 2 auto firstCU = "á".byCodeUnit; // type is `ByCodeUnitImpl` (!) writefln("%d", firstCU.length); // prints 2 auto firstGr = "á".byGrapheme.array; // type is `Grapheme[]` writefln("%d", firstGr.length); // prints 1 auto firstCP = "á".byCodePoint.array; // type is `dchar[]` writefln("%d", firstCP.length); // prints 1 dstring second = "á"; writefln("%d", second.length); // prints 1 (That was easy!) // DMD64 D Compiler v2.081.2 } And this has what to do with autodecoding? Welcome to my world! TBH, it looks like you're just confused about how Unicode works. None of that is something particular to D. You should probably address your concerns to the Unicode Consortium. Not that they care.
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 07:54:09 UTC, Joakim wrote: On Thursday, 6 September 2018 at 07:23:57 UTC, Chris wrote: On Wednesday, 5 September 2018 at 22:00:27 UTC, H. S. Teoh wrote: // Seriously, people need to get over the fantasy that they can just use Unicode without understanding how Unicode works. Most of the time, you can get the illusion that it's working, but actually 99% of the time the code is actually wrong and will do the wrong thing when given an unexpected (but still valid) Unicode string. You can't drive without a license, and even if you try anyway, the chances of ending up in a nasty accident is pretty high. People *need* to learn how to use Unicode properly before complaining about why this or that doesn't work the way they thought it should work. T Python 3 gives me this: print(len("á")) 1 and so do other languages. The same Python 3 that people criticize for having unintuitive unicode string handling? https://learnpythonthehardway.org/book/nopython3.html Is it asking too much to ask for `string` (not `dstring` or `wstring`) to behave as most people would expect it to behave in 2018 - and not like Python 2 from days of yore? But of course, D users should have a "Unicode license" before they do anything with strings. (I wonder is there a different license for UTF8 and UTF16 and UTF32, Big / Little Endian, BOM? Just asking.) Yes and no, unicode is a clusterf***, so every programming language is having problems with it. So again, for the umpteenth time, it's the users' fault. I see. Ironically enough, it was the language developers' lack of understanding of Unicode that led to string handling being a nightmare in D in the first place. Oh lads, if you were politicians I'd say that with this attitude you're gonna lose the next election. I say this, because many times the posts by (core) developers remind me so much of politicians who are completely detached from the reality of the people. Right oh! 
You have a point that it was D devs' ignorance of unicode that led to the current auto-decoding problem. But let's have some nuance here, the problem ultimately is unicode. Yes, Unicode is a beast that is hard to tame. But there is, afaik, not even a proper plan to tackle the whole thing in D, just patches. D has autodecoding which slows things down but doesn't even work correctly at the same time. However, it cannot be removed due to massive code breakage. So you sacrifice speed for security (fine) - but the security doesn't even exist. So what's the point? Also, there aren't any guidelines about how to use strings in different contexts. So after a while your code ends up being a mess of .byCodePoint / .byGrapheme / string / dstring whatever, and you never know if you really got it right or not (performance-wise and otherwise). We're talking about a basic functionality like string handling. String handling is very important these days (data harvesting, translation tools) and IT is used all over the world where you have to deal with different alphabets that are outside the ASCII range. And because it's such a basic functionality, you don't want to waste time having to think about it.
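For comparison, the code-unit vs. code-point distinction that .byCodeUnit and .byCodePoint expose in D maps roughly onto Python's bytes and str views (a sketch; the correspondence is approximate, since D's string is immutable(char)[] holding UTF-8 code units while Python's str abstracts over its internal storage):

```python
s = "\u00e1"  # precomposed 'á'

# UTF-8 code units (roughly D: string.length / .byCodeUnit)
print(len(s.encode("utf-8")))  # 2

# Code points (roughly D: .byCodePoint / dstring.length)
print(len(s))                  # 1
```

So "how long is this string?" already has two defensible answers before graphemes even enter the picture, which is exactly the mess described above.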
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On 06/09/2018 7:54 PM, Joakim wrote: On Thursday, 6 September 2018 at 07:23:57 UTC, Chris wrote: On Wednesday, 5 September 2018 at 22:00:27 UTC, H. S. Teoh wrote: // Seriously, people need to get over the fantasy that they can just use Unicode without understanding how Unicode works. Most of the time, you can get the illusion that it's working, but actually 99% of the time the code is actually wrong and will do the wrong thing when given an unexpected (but still valid) Unicode string. You can't drive without a license, and even if you try anyway, the chances of ending up in a nasty accident is pretty high. People *need* to learn how to use Unicode properly before complaining about why this or that doesn't work the way they thought it should work. T Python 3 gives me this: print(len("á")) 1 and so do other languages. The same Python 3 that people criticize for having unintuitive unicode string handling? https://learnpythonthehardway.org/book/nopython3.html Is it asking too much to ask for `string` (not `dstring` or `wstring`) to behave as most people would expect it to behave in 2018 - and not like Python 2 from days of yore? But of course, D users should have a "Unicode license" before they do anything with strings. (I wonder is there a different license for UTF8 and UTF16 and UTF32, Big / Little Endian, BOM? Just asking.) Yes and no, unicode is a clusterf***, so every programming language is having problems with it. So again, for the umpteenth time, it's the users' fault. I see. Ironically enough, it was the language developers' lack of understanding of Unicode that led to string handling being a nightmare in D in the first place. Oh lads, if you were politicians I'd say that with this attitude you're gonna lose the next election. I say this, because many times the posts by (core) developers remind me so much of politicians who are completely detached from the reality of the people. Right oh! 
You have a point that it was D devs' ignorance of unicode that led to the current auto-decoding problem. But let's have some nuance here, the problem ultimately is unicode. Let's also be realistic here, when D was being designed UTF-16 was touted as being 'the' solution you should support e.g. Java had it retrofitted shortly before D. So it isn't anyone's fault on D's end.
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 07:23:57 UTC, Chris wrote: On Wednesday, 5 September 2018 at 22:00:27 UTC, H. S. Teoh wrote: // Seriously, people need to get over the fantasy that they can just use Unicode without understanding how Unicode works. Most of the time, you can get the illusion that it's working, but actually 99% of the time the code is actually wrong and will do the wrong thing when given an unexpected (but still valid) Unicode string. You can't drive without a license, and even if you try anyway, the chances of ending up in a nasty accident is pretty high. People *need* to learn how to use Unicode properly before complaining about why this or that doesn't work the way they thought it should work. T Python 3 gives me this: print(len("á")) 1 and so do other languages. The same Python 3 that people criticize for having unintuitive unicode string handling? https://learnpythonthehardway.org/book/nopython3.html Is it asking too much to ask for `string` (not `dstring` or `wstring`) to behave as most people would expect it to behave in 2018 - and not like Python 2 from days of yore? But of course, D users should have a "Unicode license" before they do anything with strings. (I wonder is there a different license for UTF8 and UTF16 and UTF32, Big / Little Endian, BOM? Just asking.) Yes and no, unicode is a clusterf***, so every programming language is having problems with it. So again, for the umpteenth time, it's the users' fault. I see. Ironically enough, it was the language developers' lack of understanding of Unicode that led to string handling being a nightmare in D in the first place. Oh lads, if you were politicians I'd say that with this attitude you're gonna lose the next election. I say this, because many times the posts by (core) developers remind me so much of politicians who are completely detached from the reality of the people. Right oh! You have a point that it was D devs' ignorance of unicode that led to the current auto-decoding problem. 
But let's have some nuance here, the problem ultimately is unicode.
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Thursday, 6 September 2018 at 07:23:57 UTC, Chris wrote: Seriously, people need to get over the fantasy that they can just use Unicode without understanding how Unicode works. Most of the time, you can get the illusion that it's working, but actually 99% of the time the code is actually wrong and will do the wrong thing when given an unexpected (but still valid) Unicode string. Is it asking too much to ask for `string` (not `dstring` or `wstring`) to behave as most people would expect it to behave in 2018 - and not like Python 2 from days of yore? I agree with Chris. That boat has sailed, so D2 should just go full throttle with the original design and auto decode to graphemes, regardless of the performance.
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Wednesday, 5 September 2018 at 22:00:27 UTC, H. S. Teoh wrote: // Seriously, people need to get over the fantasy that they can just use Unicode without understanding how Unicode works. Most of the time, you can get the illusion that it's working, but actually 99% of the time the code is actually wrong and will do the wrong thing when given an unexpected (but still valid) Unicode string. You can't drive without a license, and even if you try anyway, the chances of ending up in a nasty accident is pretty high. People *need* to learn how to use Unicode properly before complaining about why this or that doesn't work the way they thought it should work. T Python 3 gives me this: print(len("á")) 1 and so do other languages. Is it asking too much to ask for `string` (not `dstring` or `wstring`) to behave as most people would expect it to behave in 2018 - and not like Python 2 from days of yore? But of course, D users should have a "Unicode license" before they do anything with strings. (I wonder is there a different license for UTF8 and UTF16 and UTF32, Big / Little Endian, BOM? Just asking.) So again, for the umpteenth time, it's the users' fault. I see. Ironically enough, it was the language developers' lack of understanding of Unicode that led to string handling being a nightmare in D in the first place. Oh lads, if you were politicians I'd say that with this attitude you're gonna lose the next election. I say this, because many times the posts by (core) developers remind me so much of politicians who are completely detached from the reality of the people. Right oh!
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Wed, Sep 05, 2018 at 09:33:27PM +, aliak via Digitalmars-d wrote: [...] > The dstring is only ok because the 2 code units fit in a dchar right? > But all the other ones are as expected right? And dstring will be wrong once you have non-precomposed diacritics and other composing sequences. > Seriously... why is it not graphemes by default for correctness > whyyy! Because grapheme decoding is SLOW, and most of the time you don't even need it anyway. SLOW as in, it will easily add a factor of 3-5 (if not worse!) to your string processing time, which will make your natively-compiled D code a laughing stock of interpreted languages like Python. It will make autodecoding look like an optimization(!). Grapheme decoding is really only necessary when (1) you're typesetting a Unicode string, and (2) you're counting the number of visual characters taken up by the string (though grapheme counting even in this case may not give you what you want, thanks to double-width characters, zero-width characters, etc. -- though it can form the basis of correct counting code). For all other cases, you really don't need grapheme decoding, and being forced to iterate over graphemes when unnecessary will add a horrible overhead, worse than autodecoding does today. // Seriously, people need to get over the fantasy that they can just use Unicode without understanding how Unicode works. Most of the time, you can get the illusion that it's working, but actually 99% of the time the code is actually wrong and will do the wrong thing when given an unexpected (but still valid) Unicode string. You can't drive without a license, and even if you try anyway, the chances of ending up in a nasty accident is pretty high. People *need* to learn how to use Unicode properly before complaining about why this or that doesn't work the way they thought it should work. T -- Gone Chopin. Bach in a minuet.
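Teoh's point that code points are not graphemes can be illustrated with a deliberately naive sketch in Python (stdlib only). Note this is only an approximation: real grapheme segmentation follows Unicode UAX #29 and must also handle ZWJ emoji sequences, regional indicators, Hangul jamo, etc., which this version ignores:

```python
import unicodedata

def naive_grapheme_count(s: str) -> int:
    # Count code points that are not combining marks: each base
    # character starts a new (approximate) grapheme cluster, and
    # trailing combining marks attach to it.
    return sum(1 for ch in s if unicodedata.combining(ch) == 0)

s = "a\u0301"  # 'a' + U+0301 combining acute: one visual character
print(len(s))                   # 2 code points
print(naive_grapheme_count(s))  # 1 (approximate) grapheme
```

Even a code-point view (Python's len, D's dstring) over-counts here, which is why counting "visual characters" genuinely requires grapheme segmentation, and why that segmentation is so much more expensive than iterating code units.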
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Wednesday, 5 September 2018 at 07:48:34 UTC, Chris wrote: On Tuesday, 4 September 2018 at 21:36:16 UTC, Walter Bright wrote: Autodecode - I've suffered under that, too. The solution was fairly simple. Append .byCodeUnit to strings that would otherwise autodecode. Annoying, but hardly a showstopper. import std.array : array; import std.stdio : writefln; import std.uni : byCodePoint, byGrapheme; import std.utf : byCodeUnit; void main() { string first = "á"; writefln("%d", first.length); // prints 2 auto firstCU = "á".byCodeUnit; // type is `ByCodeUnitImpl` (!) writefln("%d", firstCU.length); // prints 2 auto firstGr = "á".byGrapheme.array; // type is `Grapheme[]` writefln("%d", firstGr.length); // prints 1 auto firstCP = "á".byCodePoint.array; // type is `dchar[]` writefln("%d", firstCP.length); // prints 1 dstring second = "á"; writefln("%d", second.length); // prints 1 (That was easy!) // DMD64 D Compiler v2.081.2 } Welcome to my world! [snip] The dstring is only ok because the 2 code units fit in a dchar right? But all the other ones are as expected right? Seriously... why is it not graphemes by default for correctness whyyy!
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On 9/4/2018 5:37 PM, bachmeier wrote: Having to deal with the possibility that others might have any of twelve different compiler versions installed just isn't sustainable. Back in the bad old DOS days, my compiler depended on the Microsoft linker, which was helpfully included on the DOS distribution disks (!) The problem, however, was Microsoft kept changing the linker, and every linker was different. At one point I had my "linker disk" which was packed with every version of MS-Link I could find. Now that was unsustainable. The eventual solution was Bjorn Freeman-Benson wrote a linker (BLINK) which we then used. When it had a bug, we fixed it. When we shipped a compiler, it had a predictable linker with it. It made all the difference in the world. Hence my penchant for "controlling our destiny" that I've remarked on now and then. It's also why the DMD toolchain is boost licensed - nobody is subject to our whims.
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Tuesday, 4 September 2018 at 21:36:16 UTC, Walter Bright wrote: Autodecode - I've suffered under that, too. The solution was fairly simple. Append .byCodeUnit to strings that would otherwise autodecode. Annoying, but hardly a showstopper.

import std.array : array;
import std.stdio : writefln;
import std.uni : byCodePoint, byGrapheme;
import std.utf : byCodeUnit;

void main() {
    string first = "á";
    writefln("%d", first.length);  // prints 2
    auto firstCU = "á".byCodeUnit; // type is `ByCodeUnitImpl` (!)
    writefln("%d", firstCU.length); // prints 2
    auto firstGr = "á".byGrapheme.array; // type is `Grapheme[]`
    writefln("%d", firstGr.length); // prints 1
    auto firstCP = "á".byCodePoint.array; // type is `dchar[]`
    writefln("%d", firstCP.length); // prints 1
    dstring second = "á";
    writefln("%d", second.length); // prints 1 (That was easy!)
    // DMD64 D Compiler v2.081.2
}

Welcome to my world! [snip]
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Friday, 24 August 2018 at 19:26:40 UTC, Walter Bright wrote: On 8/24/2018 6:04 AM, Chris wrote: For about a year I've had the feeling that D is moving too fast and going nowhere at the same time. D has to slow down and get stable. D is past the experimental stage. Too many people use it for real world programming and programmers value and _need_ both stability and consistency. Every programmer who says this also demands new (and breaking) features. I realize I'm responding to this discussion after a long time, but this is the first chance I've had to return to this thread... What you write is correct. There's nothing wrong with wanting both change and stability, because there are right ways to change the language and wrong ways to change the language. If you have a stable compiler release for which you know there will be no breaking changes for the next two years, you can distribute your code to someone else and know it will work. It's not unreasonable to say "Your compiler is three years old, you need to upgrade it." You will not receive a phone call from someone that doesn't know anything about D in the middle of your workday inquiring about why the program no longer compiles. Having to deal with the possibility that others might have any of twelve different compiler versions installed just isn't sustainable.
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Tuesday, 4 September 2018 at 21:36:16 UTC, Walter Bright wrote: On 9/1/2018 4:12 AM, Chris wrote: Hope is usually the last thing to die. But one has to be wise enough to see that sometimes there is nothing one can do. As things are now, for me personally D is no longer an option, because of simple basic things, like autodecode, a flaw that will be there forever, poor support for industry technologies (Android, iOS) and the constant "threat" of code breakage. The D language developers don't seem to understand the importance of these trivial matters. I'm not just opinionating, by now I have no other _choice_ but to look for alternatives - and I do feel a little bit sad. Android, iOS - Contribute to help make it better. It would help if the main official compiler supported those operating systems. That would mean adding ARM support to DMD. Or a much simpler solution, use an existing backend that has ARM support built in to it and is maintained by a much larger established group of individuals. Say like how some languages, like Rust, do.
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On 9/4/2018 12:59 PM, Timon Gehr wrote: [...] Thanks for the great explanation! Not sure I thoroughly understand it, though. Therefore, D immutable/pure are both too strong and too weak: they prevent @system code from implementing value representations that internally use mutation (therefore D cannot implement its own runtime system, or alternatives to it), and it does not prevent pure @safe code from leaking reference identities of immutable value representations: pure @safe naughty(immutable(int[]) xs){ return cast(long)xs.ptr; } (In fact, it is equally bad that @safe weakly pure code can depend on the address of mutable data.) Would it make sense to disallow such casts in pure code? What other adjustments would you suggest?
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On 9/1/2018 4:12 AM, Chris wrote: Hope is usually the last thing to die. But one has to be wise enough to see that sometimes there is nothing one can do. As things are now, for me personally D is no longer an option, because of simple basic things, like autodecode, a flaw that will be there forever, poor support for industry technologies (Android, iOS) and the constant "threat" of code breakage. The D language developers don't seem to understand the importance of these trivial matters. I'm not just opinionating, by now I have no other _choice_ but to look for alternatives - and I do feel a little bit sad. Autodecode - I've suffered under that, too. The solution was fairly simple. Append .byCodeUnit to strings that would otherwise autodecode. Annoying, but hardly a showstopper. Android, iOS - Contribute to help make it better. Breakage - I've dealt with this, too. The language changes have been usually just some minor edits. The more serious problems were the removal of some Phobos packages. I dealt with this by creating the undeaD library: https://github.com/dlang/undeaD
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On 29.08.2018 22:01, Walter Bright wrote: On 8/29/2018 10:50 AM, Timon Gehr wrote: D const/immutable is stronger than immutability in Haskell (which is usually _lazy_). I know Haskell is lazy, but don't see the connection with a weaker immutability guarantee. In D, you can't have a lazy value within an immutable data structure (__mutable will fix this). In any case, isn't immutability a precept of FP? Yes, but it's at a higher level of abstraction. The important property of a (lazy) functional programming language is that a language term can be deterministically assigned a value for each concrete instance of an environment in which it is well-typed (i.e., values for all free variables of the term). Furthermore, the language semantics can be given as a rewrite system such that each rewrite performed by the system preserves the semantics of the rewritten term. I.e., terms change, but their values are preserved (immutable). [1] To get this property, it is crucially important the functional programming system does not leak reference identities of the underlying value representations. This is sometimes called referential transparency. Immutability is a means to this end. (If references allow mutation, you can detect reference equality by modifying the underlying object through one reference and observing that the data accessed through some other reference changes accordingly.) Under the hood, functional programming systems simulate term rewriting in some way, ultimately using mutable data structures. Similarly, in D, the garbage collector is allowed to change data that has been previously typed as immutable, and it can type-cast data that has been previously typed as mutable to immutable. However, it is impossible to write a GC or Haskell-like programs in D with pure functions operating on immutable data, because of constraints the language puts on user code that druntime is not subject to. 
Therefore, D immutable/pure are both too strong and too weak: they prevent @system code from implementing value representations that internally use mutation (therefore D cannot implement its own runtime system, or alternatives to it), and it does not prevent pure @safe code from leaking reference identities of immutable value representations: pure @safe naughty(immutable(int[]) xs){ return cast(long)xs.ptr; } (In fact, it is equally bad that @safe weakly pure code can depend on the address of mutable data.) [1] E.g.: (λa b. a + b) 2 3 and 10 `div` 2 are two terms whose semantics are given as the mathematical value 5. During evaluation, terms change: (λa b. a + b) 2 3 ⇝ 2 + 3 ⇝ 5 10 `div` 2 ⇝ 5 However, each intermediate term still represents the same value.
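A rough analogy to the `naughty` example above, sketched in Python rather than D: here `id()` plays the role of `cast(long)xs.ptr`, letting otherwise value-oriented code observe the reference identity of structurally equal immutable values, which is exactly the leak that breaks referential transparency:

```python
# Two structurally equal immutable values (tuples are immutable)...
a = tuple([1, 2, 3])
b = tuple([1, 2, 3])
assert a == b

# ...are nonetheless observably distinct objects: id() exposes the
# underlying representation's address, so "pure"-looking code can
# distinguish values that a referentially transparent semantics
# would treat as identical.
print(id(a) == id(b))  # False: equal values, different identities
```

This is why a language that wants true referential transparency must hide representation addresses entirely, not merely mark the data immutable.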
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Tuesday, 4 September 2018 at 13:34:03 UTC, TheSixMillionDollarMan wrote: I think D's 'core' problem, is that it's trying to compete with, what are now, widely used, powerful, and well supported languages, with sophisticated ecosystems in place already. C/C++/Java/C# .. just for beginners. Yes, I believe there was an academic early on that allowed students to use D, but when C++17 (etc) came about he held the view that it would be better for D to align its semantics with C++. He was met with silence, except me, who supported that view. D is too much like C++ for a skilled modern C++ programmer to switch over, but D semantics are also too different to compile to C++ in a useful manner. Then it's also trying to compete with startup languages (Go, Rust) - and some of those languages have billion dollar organisations behind them, not to mention the talent levels of their *many* designers and contributors. Ok, so my take on this is that Rust is in the same group as D right now, and I consider it experimental as I am not convinced that it is sufficient for effective low level programming. Although Rust has more momentum, it depends too much on a single entity (with unclear profitability) despite being open sourced, just like D. Too much singular ownership. Go is also open source in theory, but if we put legalities aside then I think it is having the traits of a proprietary language. They are building up a solid runtime, and it has massive momentum within services, but the language itself is somewhat primitive and messy. Go could be a decent compilation target for a high level language. That said , I think most languages don't compete directly with other languages, but compete within specific niches. Rust: for writing services and command line programs where C++ would have been a natural candidate, but for people who want a higher level language or dislike C++. Go: for writing web-services where Python, Java and C# is expected to be too resource-intensive. 
D: based on what seems to be recurring themes in the forums D seems to be used by independent programmers (personal taste?) and programmers in finance that find interpreted languages too slow and aren't willing to adopt C++. C++ is much more than just a language. It's an established, international treaty on what the language must be. Yes, it is an ISO-standard, and evolves using a global standardisation community as input. As such it evolves with the feedback from a wide range of user groups by the nature of the process. That is not a statement about the quality of D. It's a statement about the competitive nature of programming languages. It kinda is both, but the issue is really what you aim to be supporting and what you do to move in that direction. When there is no focus on any particular use case, just language features, then it becomes very difficult to move and difficult to engage people in a way that make them pull in the same direction. I wonder if this has already happened to D. No, it mostly comes down to a lack of focus and a process to back it up. Also, memory management should be the first feature to nail down, should come before language semantics... I just do not see, how D can even defeat its' major competitors. But are they really competitors? Is D predominantly used for writing web-services? What is D primarily used for? Fast scripting-style programming? Instead D could be a place where those competitors come to look for great ideas (which, as I understand it, does occur .. ranges for example). No, there are thousands of such languages. Each university has a handful of languages that they create in order to back their comp.sci. research. No need to focus on performance in that setting. You seem to be saying that, raising money so you can pay people, is enough. But I wonder about that. 
There has to be a focus based on analysis of where you can be a lot better for a specific use scenario, define the core goals that will enable something valuable for that scenario, then cut back on secondary ambitions and set up a process to achieve those core goals (pertaining to a concrete usage scenario). Being everything for everybody isn't really a strategy. Unless you are Amazon, and not even then. Without defining a core usage scenario you cannot really evaluate the situation or the process that has to be set up to change the situation... Well, I've said this stuff many times before.
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Friday, 24 August 2018 at 13:21:25 UTC, Jonathan M Davis wrote: On Friday, August 24, 2018 6:05:40 AM MDT Mike Franklin via Digitalmars-d wrote: > You're basically trying to bypass the OS' public API if > you're trying to bypass libc. No, I'm trying to bypass libc and use the OS API directly. And my point is that most OSes consider libc to be their OS API (Linux doesn't, but it's very much abnormal in that respect). Well, it used to be the case that it was normal to call the OS directly by using traps, but since the context switch is so expensive on modern CPUs we have a situation where the calling stub is a fraction of the calling cost these days. Thus most don't bother with it. What usually can happen if you don't use the C stubs with dynamic linkage is that your precompiled program won't work with new versions of the OS. But that can also happen with static linkage. Trying to bypass it means reimplementing core OS functionality and risking all of the bugs that go with it. It is the right thing to do for a low level language. Why have libc as a dependency if you want to enable hardware oriented programming? Using existing libraries also puts limits on low level language semantics. If you're talking about avoiding libc functions like strcmp that's one thing, but if you're talking about reimplementing stuff that uses syscalls, then honestly, I think that you're crazy. No, it isn't a crazy position; why the hostile tone? Libc isn't available in many settings. Not even in WebAssembly.
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
The first search engines were created in 1993, google came along in 1998 after at least two dozen others in that list, and didn't make a profit till 2001. Some of those early competitors were giant "billion dollar global companies," yet it's google that dominates the web search engine market today. Why is that? Their original PageRank algorithm. Basically, they found an efficient way of modelling a random surfer who keeps clicking outgoing links from each page, and thus got better search result rankings. It was a matter of timing.
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Tuesday, 4 September 2018 at 14:23:33 UTC, Joakim wrote: The first search engines were created in 1993, google came along in 1998 after at least two dozen others in that list, and didn't make a profit till 2001. Some of those early competitors were giant "billion dollar global companies," yet it's google that dominates the web search engine market today. Why is that? Well, for one, resources don't matter for software on the internet as much as ideas. It's not that resources don't matter, but that they take a back seat to your fundamental design and the ideas behind it. Google had a $100k angel round in 1998 and a $25 million Series A in 1999. The difference between Google and the $12 billion-ish valued Lycos of the time was not insurmountable: $25 million was enough to hire dozens of developers, lease offices, and buy the hardware they needed. Similarly, we don't need Google-level funding to produce a developer ecosystem that's sufficiently polished not to be a blocker for corporate VS-only types who rely on autocomplete. But we need a bit more than $4k for that, or it's always going to be someone's personal project that's mostly complete but might be abandoned in six months.
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Tuesday, 4 September 2018 at 13:34:03 UTC, TheSixMillionDollarMan wrote: On Tuesday, 4 September 2018 at 01:36:53 UTC, Mike Parker wrote: On Monday, 3 September 2018 at 18:26:57 UTC, Chris wrote: And of course, low manpower and funding aren't the complete picture. Management also play a role. Both Walter and Andrei have freely admitted they are not managers and that they're learning as they go. Mistakes have been made. In hindsight, some decisions should have gone a different way. But that is not the same as not caring, or not understanding. So please, don't attribute any disingenuous motives to any of the core team members. They all want D to succeed. Identifying core problems and discussing ways to solve them is a more productive way to spend our bandwidth. I think D's 'core' problem, is that it's trying to compete with, what are now, widely used, powerful, and well supported languages, with sophisticated ecosystems in place already. C/C++/Java/C# .. just for beginners. Then it's also trying to compete with startup languages (Go, Rust) - and some of those languages have billion dollar organisations behind them, not to mention the talent levels of their *many* designers and contributors. C++ is much more than just a language. It's an established, international treaty on what the language must be. Java is backed by Oracle (one of the largest organisations in the world). Go is backed by Google...Rust by Mozilla...(both billion dollar global companies). So one has to wonder, what would motivate a person (or an organisation) to focus their attention on D. That is not a statement about the quality of D. It's a statement about the competitive nature of programming languages. If you've ever read 'No Contest - The Case Against Competition' by Alfie Kohn, then you'd know (or at least you might agree with that statement) that competition is not an inevitable part of human nature. "It warps recreation by turning the playing field into a battlefield." 
I wonder if this has already happened to D. D should, perhaps, focus on being a place for recreation, where one can focus on technical excellence, instead of trying to compete in the battlefield. I just do not see how D can even defeat its major competitors. Instead D could be a place where those competitors come to look for great ideas (which, as I understand it, does occur .. ranges for example). In any case, you have to work out what it is that is going to motivate people to focus their attention on D. You seem to be saying that raising money so you can pay people is enough. But I wonder about that. That's a good question, let me see if I can answer it. Do you know what the first search engine for the web was and when it was created? It wasn't Yahoo, google, or Bing: https://en.m.wikipedia.org/wiki/Web_search_engine#History The first search engines were created in 1993, google came along in 1998 after at least two dozen others in that list, and didn't make a profit till 2001. Some of those early competitors were giant "billion dollar global companies," yet it's google that dominates the web search engine market today. Why is that? Well, for one, resources don't matter for software on the internet as much as ideas. It's not that resources don't matter, but that they take a back seat to your fundamental design and the ideas behind it. And coming up with that design and ideas takes time, the "developmental stage" that Laeeth refers to above. In that incubation stage, you're better off _not_ having a bunch of normal users who want a highly polished product, just a bunch of early adopters who can give you good feedback and are okay with rough edges. For D, that means all the advanced features don't fully play together well yet, and there are various bugs here and there. To use it, you have to be okay with that. 
Now, it's a fair question to ask when D will leave that developmental stage and get more resources towards that polish, as Chris asks, and I'm not saying I know the answers to those questions. And let me be clear: as long as you don't push the envelope with mixing those advanced D features and are okay working around some bugs here and there, you're probably good now. But simply asserting that others are rushing full-speed ahead with more resources and therefore they will win completely misunderstands how the game has changed online. Resources do matter, but they're not the dominant factor like they used to be for armies or manufacturing. Ideas are now the dominant factor, and D has plenty of those. ;)
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Tuesday, 4 September 2018 at 01:36:53 UTC, Mike Parker wrote: On Monday, 3 September 2018 at 18:26:57 UTC, Chris wrote: And of course, low manpower and funding aren't the complete picture. Management also play a role. Both Walter and Andrei have freely admitted they are not managers and that they're learning as they go. Mistakes have been made. In hindsight, some decisions should have gone a different way. But that is not the same as not caring, or not understanding. So please, don't attribute any disingenuous motives to any of the core team members. They all want D to succeed. Identifying core problems and discussing ways to solve them is a more productive way to spend our bandwidth. I think D's 'core' problem, is that it's trying to compete with, what are now, widely used, powerful, and well supported languages, with sophisticated ecosystems in place already. C/C++/Java/C# .. just for beginners. Then it's also trying to compete with startup languages (Go, Rust) - and some of those languages have billion dollar organisations behind them, not to mention the talent levels of their *many* designers and contributors. C++ is much more than just a language. It's an established, international treaty on what the language must be. Java is backed by Oracle (one of the largest organisations in the world). Go is backed by Google...Rust by Mozilla...(both billion dollar global companies). So one has to wonder, what would motivate a person (or an organisation) to focus their attention on D. That is not a statement about the quality of D. It's a statement about the competitive nature of programming languages. If you've ever read 'No Contest - The Case Against Competition' by Alfie Kohn, then you'd know (or at least you might agree with that statement) that competition is not an inevitable part of human nature. "It warps recreation by turning the playing field into a battlefield." I wonder if this has already happened to D. 
D should, perhaps, focus on being a place for recreation, where one can focus on technical excellence, instead of trying to compete in the battlefield. I just do not see how D can even defeat its major competitors. Instead D could be a place where those competitors come to look for great ideas (which, as I understand it, does occur .. ranges for example). In any case, you have to work out what it is that is going to motivate people to focus their attention on D. You seem to be saying that raising money so you can pay people is enough. But I wonder about that.
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Tuesday, 4 September 2018 at 09:40:23 UTC, Ecstatic Coder wrote: But it seems that the latest version of "std.file.copy" now completely ignores the "PreserveAttributes.no" argument on Windows, which made recent Windows builds of Resync fail on read-only files. Very typical... While D remains my favorite file scripting language, I must admit that this is quite disappointing for such an old language, compared to similar languages like Crystal. Windows can simply be a pain to work with, though. Look at Crystal itself: it doesn't support Windows natively, as far as I know, so of course you won't see Windows-specific bugs in Crystal...
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Thursday, 23 August 2018 at 03:50:44 UTC, Shachar Shemesh wrote: On 22/08/18 21:34, Ali wrote: On Wednesday, 22 August 2018 at 17:42:56 UTC, Joakim wrote: Pretty positive overall, and the negatives he mentions are fairly obvious to anyone paying attention. Yea, I agree, the negatives are not really negative. Walter, no matter how smart he is, is one man who can only work on so many things at the same time. It's a chicken-and-egg situation: D needs more core contributors, and to get more contributors it needs more users, and to get more users it needs more core contributors. No, no and no. I was holding out on replying to this thread to see how the community would react. The vibe I'm getting, however, is that the people who are seeing D's problems have given up on effecting change. It is no secret that when I joined Weka, I was the sole D detractor in a company quite enamored with the language. I used to have quite heated water cooler debates about that point of view. Every single one of the people rushing to defend D at the time has since come around. There is still some debate on whether, points vs. counterpoints, choosing D was a good idea, but the overwhelming consensus inside Weka today is that D has *fatal* flaws and no path to fixing them. And by "fatal", I mean flaws that are likely to literally kill the language. And the thing that brought them around is not my power of persuasion. The thing that brought them around was spending a couple of years working with the language on an every-day basis. And you will notice this in the way Weka employees talk on this forum: except me, they have all disappeared. You used to see Idan, Tomer and Eyal post here. Where are they? This forum is hostile to criticism, and generally tries to keep everyone using D the same way. If you're cutting edge D, the forum is almost no help at all. 
Consensus among former posters here is that it is generally a waste of time, so almost everyone left, and those who didn't, stopped posting. And it's not just Weka. I've had a chance to talk in private to some other developers. Quite a lot have serious, fundamental issues with the language. You will notice none of them speaks up on this thread. They don't see the point. No technical project is born great. If you want a technical project to be great, the people working on it have to focus on its *flaws*. The D community just doesn't do that. To sum it up: fatal flaws + no path to fixing + no push from the community = inevitable eventual death. With great regrets, Shachar Same feeling here btw. I regularly have to face strange bugs while updating the compiler or its libraries. For instance, my Resync tool used to work fine both on Windows and Linux. But it seems that the latest version of "std.file.copy" now completely ignores the "PreserveAttributes.no" argument on Windows, which made recent Windows builds of Resync fail on read-only files. Very typical... While D remains my favorite file scripting language, I must admit that this is quite disappointing for such an old language, compared to similar languages like Crystal.
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Tuesday, 4 September 2018 at 05:38:49 UTC, Iain Buclaw wrote: On 4 September 2018 at 04:19, Laeeth Isharc via Digitalmars-d wrote: On Monday, 3 September 2018 at 16:07:21 UTC, RhyS wrote: A good example being the resources going into DMD, LDC, GDC... 3 compilers for one language, when even well funded languages stick to one compiler. And now some people think it's a good idea to have DMD also cross compile because "it's not hard to do". No, maybe not, but who will do all the testing, what resources are going to be spent when things do not work for some users (and the negative impact on their experience)... It's a long list but people do not look past this. It sounds like fun, let's think / or do it. What resources do you think go into GDC? I think Iain would love to hear about all these resources because I am not sure he has been made aware of them, because they don't exist beyond him and possibly a tiny number of others helping in part at certain stages. *Looks behind self* *Looks under desk* *Looks under keyboard* There must be resources somewhere, but none appear to be within reach. :-) If Iain had a beer for every person that complained about the effort spent by team GDC without having first thanked him and his vast team then... People are sometimes quite disconnected from reality. At least I have no other explanation for people demanding others do this or do that without doing the minimum necessary to make it appealing for others to work on it. I mean my experience is that you can pay people a lot of money and ask them beforehand do you want to work on X, and it's no guarantee they actually will be willing to when it comes to it. Programmers in general can be very independent-minded people, and if somebody is looking for especially meek and compliant people then if you have come to the D forums you are in the wrong place! One can be much more persuasive with positive words than complaints. 
Most people are well aware of that, so if they are complaining it's in my judgement because they want to complain. People with high standards will do that when they feel powerless. I'm not talking here about notorious highly intelligent trolls like Ola and sock-puppets who never seem to actually write code in D. But nobody who can keep up here is powerless. It's possible to change the world you know, and from the most unpromising start. Forget about what's realistic, and focus on what you want to achieve. Believe me, you can achieve an awful lot from the most unpromising start. People talk about how most people are not super-hackers and one shouldn't expect them to manage without polish. Well, hacker is a state of mind, a way of being in the world. Ask Iain if his self-conception is as a super-hacker with l33t skillz that a mere professional programmer couldn't match and you might be surprised (I think his self-conception might be wrong, but that's Dunning-Kruger in action for you). It's really much more about values and virtues than capabilities. Are you able to tolerate discomfort and the accurate initial feeling of conscious incompetence? Because that's what real learning feels like once you leave the spoon-feeding stream of education. D is a gift to the world from Walter, Andrei, and those who contributed after it was begun. Just demanding people do stuff for you without doing anything to contribute back - that's not how life works. I don't think I have ever seen this degree of a feeling of entitlement in my life! And I've been working in finance since 1993. If someone doesn't want to pay money towards the development of IDE integration, and doesn't want to do any work themselves, then the least they could do is draw up a feature list of what's missing and find a way to help from time to time with the organisation of the work. That's the only way things ever get done anyway. Have you noticed how the documentation has gotten much better? Runnable examples too. 
Did that happen because people complained? No - it happened because Seb Wilzbach (and maybe others) took the initiative to make it happen and did the work themselves. A little money goes a long way in open source. So if you're a company and you're complaining and not donating money to the Foundation, then what exactly do you expect? We have a few support contracts with MongoDB (a choice made before I got involved); the legal fees alone were 20k and we pay about 30k USD a year. If a few companies contributed at that scale to the Foundation, that's at least a couple of full-time developers. And if you disagree with Andrei and Walter's choices about priorities, you know you can just direct where the money should be spent, as we are with SAoC.
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Monday, 3 September 2018 at 16:41:32 UTC, Iain Buclaw wrote: 15 years ago, people were complaining that there was only one D compiler. It is ironic that people now complain that there are too many. One needs multiple implementations to confirm the accuracy of the language specification. D still has one implementation, i.e. one compiler frontend with multiple backends, distributed as multiple executables (with tweaks). Anyway, I think people complained about the first and only compiler being non-free. That's not relevant now, of course.
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Tuesday, 4 September 2018 at 01:36:53 UTC, Mike Parker wrote: On Monday, 3 September 2018 at 18:26:57 UTC, Chris wrote: I think this sort of misunderstanding is the source of a lot of friction on this forum. Some users think (or in my case: thought) that D will be a sound and stable language one day, a language they can use for loads of stuff, while the leadership prefers to keep it at a stage where they can test ideas to see what works and what doesn't, wait let me rephrase this, where the user can test other people's ideas with every new release. First of all, thanks a lot for your answer. I appreciate it. I have to know where I'm standing in order to be able to plan ahead. I don't think this is unreasonable. D is not a petri dish for testing ideas. It's not an experiment. It's a serious language. Walter, Andrei, and everyone involved in maintaining and developing it want to see it succeed. They aren't just treading water, wasting everyone's time. And I know you keep hearing this, but I'll say it anyway: most of the development is based on volunteer work, and trying to get volunteers to do anything they don't want to do is like asking them to voluntarily have their teeth pulled when they don't need to. I have no doubt at all that Walter, Andrei et al are 100% serious about D, as in "a professional tool", and I do not question their expertise and experience. However, for a bit more than a year I've been under the impression that scarce resources are spent on features and details that are not critical to production when you use D, while more basic things that are sometimes not related to D as such are put on the long finger. Walter has said that people come to him and ask what they should work on. He provides them with a list of priority tasks. Then they go off and work on something else. That's the nature of unsponsored open-source development and has been the biggest challenge for D for years. I can imagine that. 
This is why volunteers are not the way to go when it comes to core development and the ecosystem. This is why foundations with a lot of funding and IT companies spend a lot of resources on these two aspects. I have high hopes that some of this can be turned around by raising more money and I have long-term plans to try and facilitate that over the coming months. With more money, we can get people to work on targeted tasks even when they have no vested interest in it. We can pull in full-time coders, maybe get someone to oversee PR reviews so they don't stay open so long, fund development of broader ecosystem projects. There isn't anyone involved in the core D development who isn't aware of the broader issues in the community or the ecosystem, and they are working to bring improvements. I have been around this community since 2003. From my perspective, it has been one continuous march of progress. Sometimes slow, sometimes fast, but always moving, always getting better. And right now there are more people involved than ever in moving it forward. Unfortunately, there are also more demands on more fronts than ever. There are longtime users who become increasingly frustrated when the issues that matter to them still aren't resolved, newcomers who have no context of all the progress that has been made and instead hold D in comparison to Rust, Go, and other languages that have stronger backing and more manpower. That's perfectly legitimate. Which is what I've been talking about in this thread. D is too old to live with its parents :) Too many people already use it in production or are interested in it. There are a) longtime users who still have to put up with OSS style hacks and are growing tired of it (after years of putting in a lot of effort). There are b) new users who are put off by the lack of a smooth ecosystem. And both groups are told that "that's the way we do things around here". And of course, low manpower and funding aren't the complete picture. 
Management also play a role. Both Walter and Andrei have freely admitted they are not managers and that they're learning as they go. Mistakes have been made. In hindsight, some decisions should have gone a different way. But that is not the same as not caring, or not understanding. Exactly. As I said in an earlier message, it's not just the money (or the lack thereof), it's the approach. In my opinion, both Walter and Andrei should hire a manager who is not involved in the core development. If you're involved in the core development you cannot be managing things at the same time and "learn as you go along". That's a recipe for disaster. It's not a good idea to mix management and development because they are two completely different things, and you might end up not being good at either of them. A manager could lay down a practical road map based on what users need _most_, secure funding and direct the funding towards the most urgent issues. Everyone
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Tuesday, 4 September 2018 at 01:36:53 UTC, Mike Parker wrote: D is not a petri dish for testing ideas. It's not an experiment. Well, the general consensus for programming languages is that a language is experimental (or proprietary) until it is fully specced out as a stable formal standard with multiple _independent_ implementations...
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Tuesday, 4 September 2018 at 02:19:20 UTC, Laeeth Isharc wrote: On Monday, 3 September 2018 at 16:07:21 UTC, RhyS wrote: On Monday, 3 September 2018 at 15:41:48 UTC, Laurent Tréguier wrote: Yes. It almost sounds like a smooth experience would be a bad thing to have, especially with the classic "you don't need an IDE anyway" speech. Editing experience seems often dismissed as unimportant, when it's one of the first things new users will come across when trying out D. First impressions can matter a lot. I didn't give a "you don't need an IDE" speech, and I didn't say a smooth experience was a bad thing. I know, I know. But it's always the same story: whenever people wonder about D's state of IDE integration, other people will often just say that vim + terminal is enough. But in my experience a strong reality orientation leads to good things coming out of life, and telling the universe it should be something different from what it is is a recipe for misery and suffering, and why would you do that to yourself? So if you want the world to be different, come up with a plan. It could be "I am going to donate X dollars a month to the Foundation to fund IDE development", or it could be figuring out how you can help with the work in whatever way. But just grumbling - I really think that mistakes the nature of the situation, not to mention human psychology. You can accomplish things with a vision that's thought through and inspires others. Negativity is part of being creative but not if you stop there. I stated it in an earlier post: I've been working on editor/IDE integration myself because of this. I don't just grumble, although I do like grumbling a lot :)
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Monday, 3 September 2018 at 22:30:47 UTC, Chris wrote: On Monday, 3 September 2018 at 18:52:45 UTC, Laurent Tréguier wrote: On Monday, 3 September 2018 at 18:26:57 UTC, Chris wrote: it should come with a warning label that says "D is in many parts still at an experimental stage and ships with no guarantees whatsoever. Use at your own risk." Well it comes with the Boost license that says: `THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND` You know exactly what I mean, don't you? I think I know what you mean. But licenses are not decorative. If it says "WITHOUT WARRANTY OF ANY KIND", it means that it actually comes without warranty of any kind.
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On 4 September 2018 at 04:19, Laeeth Isharc via Digitalmars-d wrote: > On Monday, 3 September 2018 at 16:07:21 UTC, RhyS wrote: >> >> A good example being the resources going into DMD, LDC, GDC... 3 Compilers >> for one language, when even well funded languages stick to one compiler. And >> now some people think its a good idea to have DMD also cross compile because >> "its not hard to do". No, maybe not but who will do all the testing, what >> resources are going to spend when things do not work for some users ( and >> the negative impact on their experience )... Its a long list but people do >> not look past this. It sounds like fun, lets think / or do it. > > > What resources do you think go into GDC? I think Iain would love to hear > about all these resources because I am not sure he has been made aware of > them because they don't exist beyond him and possibly a tiny number of > others helping in part at certain stages. > *Looks behind self* *Looks under desk* *Looks under keyboard* There must be resources somewhere, but none appear to be within reach. :-)
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Mon, 3 Sep 2018 at 19:35, Laeeth Isharc via Digitalmars-d wrote: > > On Tuesday, 4 September 2018 at 02:24:25 UTC, Manu wrote: > > On Mon, 3 Sep 2018 at 18:45, Laeeth Isharc via Digitalmars-d > > wrote: > >> > >> On Monday, 3 September 2018 at 17:15:03 UTC, Laurent Tréguier > >> wrote: > >> > On Monday, 3 September 2018 at 16:55:10 UTC, Jonathan M > >> > Davis wrote: > >> >> Most of the work that gets done is the stuff that the folks > >> >> contributing think is the most important - frequently what > >> >> is most important for them for what they do, and very few > >> >> (if any) of the major contributors use or care about IDEs > >> >> for their own use. And there's tons to do that has nothing > >> >> to do with IDEs. There are folks who care about it enough > >> >> to work on it, which is why projects such as VisualD exist > >> >> at all, and AFAIK, they work reasonably well, but the only > >> >> two ways that they're going to get more work done on them > >> >> than is currently happening is if the folks who care about > >> >> that sort of thing contribute or if they donate money for > >> >> it to be worked on. Not long ago, the D Foundation > >> >> announced that they were going to use donations to pay > >> >> someone to work on his plugin for Visual Studio Code: > >> >> > >> >> https://forum.dlang.org/post/rmqvglgccmgoajmhy...@forum.dlang.org > >> >> > >> >> So, if you want stuff like that to get worked on, then > >> >> donate or pitch in. > >> >> > >> >> The situation with D - both with IDEs and in general - has > >> >> improved greatly over time even if it may not be where you > >> >> want it to be. But if you're ever expecting IDE support to > >> >> be a top priority of many of the contributors, then you're > >> >> going to be sorely disappointed. 
It's the sort of thing > >> >> that we care about because we care about D being > >> >> successful, but it's not the sort of thing that we see any > >> >> value in whatsoever for ourselves, and selfish as it may > >> >> be, when we spend the time to contribute to D, we're > >> >> generally going to work on the stuff that we see as having > >> >> the most value for getting done what we care about. And > >> >> there's a lot to get done which impacts pretty much every D > >> >> user and not just those who want something that's > >> >> IDE-related. > >> >> > >> >> - Jonathan M Davis > >> > > >> > The complaints I have is exactly why I'm myself maintaining > >> > plugins for VSCode, Atom, and others soon. Don't worry, I > >> > still > >> > think D is worth putting some time and effort into and I know > >> > actions generally get more things done than words. > >> > I also know that tons of stuff is yet to be done in regards > >> > to > >> > the actual compilers and such. > >> > > >> > It just baffles me a bit to see the state of D in this > >> > department, when languages like Go or Rust (hooray for yet > >> > another comparison to Go and Rust) are a lot younger, but > >> > already have what looks like very good tooling. > >> > Then again they do have major industry players backing them > >> > though... > >> > >> Why is Go's IDE support baffling? It was a necessity to > >> achieve Google's commercial aims, I should think. > >> > >> " > >> The key point here is our programmers are Googlers, they’re not > >> researchers. They’re typically, fairly young, fresh out of > >> school, probably learned Java, maybe learned C or C++, probably > >> learned Python. They’re not capable of understanding a > >> brilliant > >> language but we want to use them to build good software. So, > >> the > >> language that we give them has to be easy for them to > >> understand > >> and easy to adopt." 
> >> – Rob Pike > >> > >> I don't know the story of Rust, but if I were working on a > >> project as large as Firefox I guess I would want an IDE too! > >> Whereas it doesn't seem like it's so important to some of D's > >> commercial users because they have a different context. > >> > >> I don't think it's overall baffling that D hasn't got the best > >> IDE support of emerging languages. The people that contribute > >> to it, as Jonathan says, seem to be less interested in IDEs > >> and no company has found it important enough to pay someone > >> else to work on it. So far anyway but as adoption grows maybe > >> that will change. > > > > It's been a key hurdle for as long as I've been around here. > > I've been saying for 10 years that no company I've ever worked > > at can > > take D seriously without industry standard IDE support. > > My feeling is that we have recently reached MVP status... > > that's a > > huge step, 10 years in the making ;) > > I think it's now at a point where more people *wouldn't* reject > > it on > > contact than those who would. But we need to go much further to > > make > > developers genuinely comfortable, and thereby go out of their > > way to > > prefer using D over C++ and pitch as such to their managers. > > Among all developers I've demo-ed or
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Tuesday, 4 September 2018 at 02:24:25 UTC, Manu wrote: On Mon, 3 Sep 2018 at 18:45, Laeeth Isharc via Digitalmars-d wrote: On Monday, 3 September 2018 at 17:15:03 UTC, Laurent Tréguier wrote: > On Monday, 3 September 2018 at 16:55:10 UTC, Jonathan M > Davis wrote: >> Most of the work that gets done is the stuff that the folks >> contributing think is the most important - frequently what >> is most important for them for what they do, and very few >> (if any) of the major contributors use or care about IDEs >> for their own use. And there's tons to do that has nothing >> to do with IDEs. There are folks who care about it enough >> to work on it, which is why projects such as VisualD exist >> at all, and AFAIK, they work reasonably well, but the only >> two ways that they're going to get more work done on them >> than is currently happening is if the folks who care about >> that sort of thing contribute or if they donate money for >> it to be worked on. Not long ago, the D Foundation >> announced that they were going to use donations to pay >> someone to work on his plugin for Visual Studio Code: >> >> https://forum.dlang.org/post/rmqvglgccmgoajmhy...@forum.dlang.org >> >> So, if you want stuff like that to get worked on, then >> donate or pitch in. >> >> The situation with D - both with IDEs and in general - has >> improved greatly over time even if it may not be where you >> want it to be. But if you're ever expecting IDE support to >> be a top priority of many of the contributors, then you're >> going to be sorely disappointed. It's the sort of thing >> that we care about because we care about D being >> successful, but it's not the sort of thing that we see any >> value in whatsoever for ourselves, and selfish as it may >> be, when we spend the time to contribute to D, we're >> generally going to work on the stuff that we see as having >> the most value for getting done what we care about. 
And >> there's a lot to get done which impacts pretty much every D >> user and not just those who want something that's >> IDE-related. >> >> - Jonathan M Davis > > The complaints I have are exactly why I'm myself maintaining > plugins for VSCode, Atom, and others soon. Don't worry, I > still > think D is worth putting some time and effort into and I know > actions generally get more things done than words. > I also know that tons of stuff is yet to be done in regards > to > the actual compilers and such. > > It just baffles me a bit to see the state of D in this > department, when languages like Go or Rust (hooray for yet > another comparison to Go and Rust) are a lot younger, but > already have what looks like very good tooling. > Then again they do have major industry players backing them > though... Why is Go's IDE support baffling? It was a necessity to achieve Google's commercial aims, I should think. " The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt." – Rob Pike I don't know the story of Rust, but if I were working on a project as large as Firefox I guess I would want an IDE too! Whereas it doesn't seem like it's so important to some of D's commercial users because they have a different context. I don't think it's overall baffling that D hasn't got the best IDE support of emerging languages. The people that contribute to it, as Jonathan says, seem to be less interested in IDEs and no company has found it important enough to pay someone else to work on it. So far anyway but as adoption grows maybe that will change. It's been a key hurdle for as long as I've been around here. 
I've been saying for 10 years that no company I've ever worked at can take D seriously without industry standard IDE support. My feeling is that we have recently reached MVP status... that's a huge step, 10 years in the making ;) I think it's now at a point where more people *wouldn't* reject it on contact than those who would. But we need to go much further to make developers genuinely comfortable, and thereby go out of their way to prefer using D over C++ and pitch as such to their managers. Among all developers I've demo-ed or introduced recently, I can say for certain that developer enthusiasm is driven by their perception of the tooling in the order of 10x more than the language. That's only because you insist on working for companies where people use IDEs and think the ones that don't must be in boring industries :) Kidding aside, would you care to enumerate what capabilities are missing that would tip the balance for such people were they to be there? And then would you care to estimate the degree of w
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Mon, 3 Sep 2018 at 18:45, Laeeth Isharc via Digitalmars-d wrote: > > On Monday, 3 September 2018 at 17:15:03 UTC, Laurent Tréguier > wrote: > > On Monday, 3 September 2018 at 16:55:10 UTC, Jonathan M Davis > > wrote: > >> Most of the work that gets done is the stuff that the folks > >> contributing think is the most important - frequently what is > >> most important for them for what they do, and very few (if > >> any) of the major contributors use or care about IDEs for > >> their own use. And there's tons to do that has nothing to do > >> with IDEs. There are folks who care about it enough to work on > >> it, which is why projects such as VisualD exist at all, and > >> AFAIK, they work reasonably well, but the only two ways that > >> they're going to get more work done on them than is currently > >> happening is if the folks who care about that sort of thing > >> contribute or if they donate money for it to be worked on. Not > >> long ago, the D Foundation announced that they were going to > >> use donations to pay someone to work on his plugin for Visual > >> Studio Code: > >> > >> https://forum.dlang.org/post/rmqvglgccmgoajmhy...@forum.dlang.org > >> > >> So, if you want stuff like that to get worked on, then donate > >> or pitch in. > >> > >> The situation with D - both with IDEs and in general - has > >> improved greatly over time even if it may not be where you > >> want it to be. But if you're ever expecting IDE support to be > >> a top priority of many of the contributors, then you're going > >> to be sorely disappointed. It's the sort of thing that we care > >> about because we care about D being successful, but it's not > >> the sort of thing that we see any value in whatsoever for > >> ourselves, and selfish as it may be, when we spend the time to > >> contribute to D, we're generally going to work on the stuff > >> that we see as having the most value for getting done what we > >> care about. 
And there's a lot to get done which impacts pretty > >> much every D user and not just those who want something that's > >> IDE-related. > >> > >> - Jonathan M Davis > > > > The complaints I have are exactly why I'm myself maintaining > > plugins for VSCode, Atom, and others soon. Don't worry, I still > > think D is worth putting some time and effort into and I know > > actions generally get more things done than words. > > I also know that tons of stuff is yet to be done in regards to > > the actual compilers and such. > > > > It just baffles me a bit to see the state of D in this > > department, when languages like Go or Rust (hooray for yet > > another comparison to Go and Rust) are a lot younger, but > > already have what looks like very good tooling. > > Then again they do have major industry players backing them > > though... > > Why is Go's IDE support baffling? It was a necessity to achieve > Google's commercial aims, I should think. > > " > The key point here is our programmers are Googlers, they’re not > researchers. They’re typically, fairly young, fresh out of > school, probably learned Java, maybe learned C or C++, probably > learned Python. They’re not capable of understanding a brilliant > language but we want to use them to build good software. So, the > language that we give them has to be easy for them to understand > and easy to adopt." > – Rob Pike > > I don't know the story of Rust, but if I were working on a > project as large as Firefox I guess I would want an IDE too! > Whereas it doesn't seem like it's so important to some of D's > commercial users because they have a different context. > > I don't think it's overall baffling that D hasn't got the best > IDE support of emerging languages. The people that contribute to > it, as Jonathan says, seem to be less interested in IDEs and no > company has found it important enough to pay someone else to work > on it. So far anyway but as adoption grows maybe that will > change. 
It's been a key hurdle for as long as I've been around here. I've been saying for 10 years that no company I've ever worked at can take D seriously without industry standard IDE support. My feeling is that we have recently reached MVP status... that's a huge step, 10 years in the making ;) I think it's now at a point where more people *wouldn't* reject it on contact than those who would. But we need to go much further to make developers genuinely comfortable, and thereby go out of their way to prefer using D over C++ and pitch as such to their managers. Among all developers I've demo-ed or introduced recently, I can say for certain that developer enthusiasm is driven by their perception of the tooling in the order of 10x more than the language.
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Monday, 3 September 2018 at 16:07:21 UTC, RhyS wrote: On Monday, 3 September 2018 at 15:41:48 UTC, Laurent Tréguier wrote: Yes. It almost sounds like a smooth experience would be a bad thing to have, especially with the classic "you don't need an IDE anyway" speech. Editing experience seems often dismissed as unimportant, when it's one of the first things new users will come across when trying out D. First impressions can matter a lot. I didn't give a "you don't need an IDE" speech, and I didn't say a smooth experience was a bad thing. But in my experience a strong reality orientation leads to good things coming out of life, and telling the universe it should be something different from what it is is a recipe for misery and suffering, and why would you do that to yourself? So if you want the world to be different, come up with a plan. It could be "I am going to donate X dollars a month to the Foundation to fund IDE development", or it could be figuring out how you can help with the work in whatever way. But just grumbling - I really think that mistakes the nature of the situation, not to mention human psychology. You can accomplish things with a vision that's thought through and inspires others. Negativity is part of being creative, but not if you stop there. It's the same issue that has kept Linux on the desktop stuck with almost no growth. It's easy to break things (nvidia graphics driver *lol*), and too much is focused on the CLI, so people who do have an issue and are not systems users quickly run into a swamp. Too many resources are split among too many distributions, graphical desktops, etc. Choice is good, but too much choice means projects are starved for resources, compatibility becomes an issue, bugs are even more present, ... Chromebooks and Android seem to be doing okay. I run Linux on the desktop and have done full-time since 2014. Maybe you're right that it's not for everyone at this point. And so? 
There just wasn't a path for people to put effort into making it utterly easy for non-technical people beyond a certain point. Does that mean Linux or Linux on the desktop has failed? I don't think so. It's just not for everyone. It's interesting to see Microsoft making it possible to run Linux on Windows - turns out a minority audience can be an important audience. A good example being the resources going into DMD, LDC, GDC... 3 compilers for one language, when even well-funded languages stick to one compiler. And now some people think it's a good idea to have DMD also cross-compile because "it's not hard to do". No, maybe not, but who will do all the testing, and what resources are going to be spent when things do not work for some users (and the negative impact on their experience)... It's a long list, but people do not look past this. It sounds like fun, so let's think about it / do it. What resources do you think go into GDC? I think Iain would love to hear about all these resources, because I am not sure he has been made aware of them, because they don't exist beyond him and possibly a tiny number of others helping in part at certain stages. It's just so frustrating that a lot of people here do not understand. Most programmers are not open-source developers, they are not coding gods, they are simply people who want things to go smoothly. Install the compiler, install a well-supported graphical IDE (and no, VIM does not count!), read some simple documentation, and off we go... We are not looking to be bug testers, core code implementers, etc... Sure, and probably most people would be better off at this point using a language that makes getting started easy. One doesn't need to appeal to most people to succeed. That's just a pragmatic statement of the obvious. In time it will change, but I don't see how recognising your observation could rationally lead anyone to do something differently from what they would have done before. 
To change the world you need a goal and a first cut at a plan for getting there. Whether the goal is entirely realistic is much less important than having a plan to begin. And I speak from experience here, having at certain points not much more than that. Selfish, ... sure ... but this is how D gains more people. The more people work with your language, the more potential people you have who slowly become interested in helping out. I disagree. At this point the way for D to appeal to more people is to increase its appeal just a bit more to those who are already on the cusp of using D or would be if they had looked into it, and to those who use D already in some way but could use it more. The way for D to appeal to more people is not to address the complaints of those who spend more time writing on the forum grumbling but don't use it much, because in my experience you do much better appealing to the people who are your best customers than to those who tell you if only you could do X there woul
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Monday, 3 September 2018 at 17:15:03 UTC, Laurent Tréguier wrote: On Monday, 3 September 2018 at 16:55:10 UTC, Jonathan M Davis wrote: Most of the work that gets done is the stuff that the folks contributing think is the most important - frequently what is most important for them for what they do, and very few (if any) of the major contributors use or care about IDEs for their own use. And there's tons to do that has nothing to do with IDEs. There are folks who care about it enough to work on it, which is why projects such as VisualD exist at all, and AFAIK, they work reasonably well, but the only two ways that they're going to get more work done on them than is currently happening is if the folks who care about that sort of thing contribute or if they donate money for it to be worked on. Not long ago, the D Foundation announced that they were going to use donations to pay someone to work on his plugin for Visual Studio Code: https://forum.dlang.org/post/rmqvglgccmgoajmhy...@forum.dlang.org So, if you want stuff like that to get worked on, then donate or pitch in. The situation with D - both with IDEs and in general - has improved greatly over time even if it may not be where you want it to be. But if you're ever expecting IDE support to be a top priority of many of the contributors, then you're going to be sorely disappointed. It's the sort of thing that we care about because we care about D being successful, but it's not the sort of thing that we see any value in whatsoever for ourselves, and selfish as it may be, when we spend the time to contribute to D, we're generally going to work on the stuff that we see as having the most value for getting done what we care about. And there's a lot to get done which impacts pretty much every D user and not just those who want something that's IDE-related. - Jonathan M Davis The complaints I have are exactly why I'm myself maintaining plugins for VSCode, Atom, and others soon. 
Don't worry, I still think D is worth putting some time and effort into and I know actions generally get more things done than words. I also know that tons of stuff is yet to be done in regards to the actual compilers and such. It just baffles me a bit to see the state of D in this department, when languages like Go or Rust (hooray for yet another comparison to Go and Rust) are a lot younger, but already have what looks like very good tooling. Then again they do have major industry players backing them though... Why is Go's IDE support baffling? It was a necessity to achieve Google's commercial aims, I should think. " The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt." – Rob Pike I don't know the story of Rust, but if I were working on a project as large as Firefox I guess I would want an IDE too! Whereas it doesn't seem like it's so important to some of D's commercial users because they have a different context. I don't think it's overall baffling that D hasn't got the best IDE support of emerging languages. The people that contribute to it, as Jonathan says, seem to be less interested in IDEs and no company has found it important enough to pay someone else to work on it. So far anyway but as adoption grows maybe that will change.
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Monday, 3 September 2018 at 18:26:57 UTC, Chris wrote: I think this sort of misunderstanding is the source of a lot of friction on this forum. Some users think (or in my case: thought) that D will be a sound and stable language one day, a language they can use for loads of stuff, while the leadership prefers to keep it at a stage where they can test ideas to see what works and what doesn't, wait let me rephrase this, where the user can test other people's ideas with every new release. D is not a petri dish for testing ideas. It's not an experiment. It's a serious language. Walter, Andrei, and everyone involved in maintaining and developing it want to see it succeed. They aren't just treading water, wasting everyone's time. And I know you keep hearing this, but I'll say it anyway: most of the development is based on volunteer work, and trying to get volunteers to do anything they don't want to do is like asking them to voluntarily have their teeth pulled when they don't need to. Walter has said that people come to him and ask what they should work on. He provides them with a list of priority tasks. Then they go off and work on something else. That's the nature of unsponsored open-source development and has been the biggest challenge for D for years. I have high hopes that some of this can be turned around by raising more money and I have long-term plans to try and facilitate that over the coming months. With more money, we can get people to work on targeted tasks even when they have no vested interest in it. We can pull in full-time coders, maybe get someone to oversee PR reviews so they don't stay open so long, fund development of broader ecosystem projects. There isn't anyone involved in the core D development who isn't aware of the broader issues in the community or the ecosystem, and they are working to bring improvements. I have been around this community since 2003. From my perspective, it has been one continuous march of progress. 
Sometimes slow, sometimes fast, but always moving, always getting better. And right now there are more people involved than ever in moving it forward. Unfortunately, there are also more demands on more fronts than ever. There are longtime users who become increasingly frustrated when the issues that matter to them still aren't resolved, and newcomers who have no context of all the progress that has been made and instead hold D in comparison to Rust, Go, and other languages that have stronger backing and more manpower. That's perfectly legitimate. And of course, low manpower and funding aren't the complete picture. Management also plays a role. Both Walter and Andrei have freely admitted they are not managers and that they're learning as they go. Mistakes have been made. In hindsight, some decisions should have gone a different way. But that is not the same as not caring, or not understanding. So please, don't attribute any disingenuous motives to any of the core team members. They all want D to succeed. Identifying core problems and discussing ways to solve them is a more productive way to spend our bandwidth.
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Monday, 3 September 2018 at 18:52:45 UTC, Laurent Tréguier wrote: On Monday, 3 September 2018 at 18:26:57 UTC, Chris wrote: it should come with a warning label that says "D is in many parts still at an experimental stage and ships with no guarantees whatsoever. Use at your own risk." Well it comes with the Boost license that says: `THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND` You know exactly what I mean, don't you?
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Monday, September 3, 2018 12:26:57 PM MDT Chris via Digitalmars-d wrote: > There is no real plan and > only problems that someone deems interesting or challenging at a > given moment are tackled. If they solve a problem for a lot of > users, it's only a side effect. The advent of a D Foundation > hasn't changed anything in this regard, and it seems not to be > just a financial issue. It's the mentality. In other words, D is > still unreliable, and if that is what the community wants, fine, but > instead of promoting it as a substitute for C/C++, Java etc. it > should come with a warning label that says "D is in many parts > still at an experimental stage and ships with no guarantees > whatsoever. Use at your own risk." That would save both the > language developers and (potential) users a lot of headaches. > > I think this sort of misunderstanding is the source of a lot of > friction on this forum. Some users think (or in my case: thought) > that D will be a sound and stable language one day, a language > they can use for loads of stuff, while the leadership prefers to > keep it at a stage where they can test ideas to see what works > and what doesn't, wait let me rephrase this, where the user can > test other people's ideas with every new release. Plenty of people - whole companies included - use D for real projects and products. It is an extremely powerful tool which can be used for real work. Is it as polished as some other languages? Maybe not, but it's plenty stable for real world use. And it's continually improving. All programming languages and tools are "used at your own risk." They all come with their own sets of pros and cons. If what you want is a language that doesn't change much, then there are plenty of other choices, just like there are plenty of languages that change all the time. 
Over time D has become more stable, and it doesn't change anywhere near as rapidly as it used to, but if you don't like how it works or is developed, then feel free to go elsewhere. Those of us who stick around find that its pros outweigh its cons. Plenty of folks disagree with us, and they've chosen different languages, which is just fine. In any case, I have better things to do than argue about whether D is a solid, useful language or not. It's the language that I prefer. I'm going to use it as much as I can, and I'm going to continue to contribute to it. If you don't like where D is, and you don't think that it's worth your time to contribute to it, then that's perfectly fine, but it's a waste of my time to continue to argue about it. I spend too much of my time in this newsgroup as it is, and this sort of argument doesn't contribute anything to improving D. - Jonathan M Davis
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Monday, 3 September 2018 at 14:26:46 UTC, Laeeth Isharc wrote: I just spoke with Dicebot about work stuff. He incidentally mentioned what I said before based on my impressions. The people doing work with a language have better things to do than spend a lot of time on forums. And I think in open source you earn the right to be listened to by doing work of some kind. He said (which I knew already) it was an old post he didn't put up in the end - somebody discovered it in his repo. He is working full-time as a consultant with me for Symmetry and is writing D as part of that role. I don't think that indicates he didn't mean his criticisms, and maybe one could learn from those. But a whole thread triggered by this is quite entertaining. Interesting, I did not realize that he had left Sociomantic. Even if he did not release the article, I think it's a good idea that we take some of his criticisms to heart. I, for one, agree with at least a few of them, and as we've seen, so do others.
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Monday, 3 September 2018 at 14:26:46 UTC, Laeeth Isharc wrote: I just spoke with Dicebot about work stuff. He incidentally mentioned what I said before based on my impressions. The people doing work with a language have better things to do than spend a lot of time on forums. And I think in open source you earn the right to be listened to by doing work of some kind. He said (which I knew already) it was an old post he didn't put up in the end - somebody discovered it in his repo. He is working full-time as a consultant with me for Symmetry and is writing D as part of that role. I don't think that indicates he didn't mean his criticisms, and maybe one could learn from those. But a whole thread triggered by this is quite entertaining. I'm the person who found the post, and I'm enjoying the reading... and I'm learning something too! I'm amused by the number of different topics, minus one, the original: why feature branches are not an option in DLangLand. /Paolo
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Monday, 3 September 2018 at 18:26:57 UTC, Chris wrote: it should come with a warning label that says "D is in many parts still at an experimental stage and ships with no guarantees whatsoever. Use at your own risk." Well it comes with the Boost license that says: `THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND`
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Monday, 3 September 2018 at 16:55:10 UTC, Jonathan M Davis wrote: Most of the work that gets done is the stuff that the folks contributing think is the most important - frequently what is most important for them for what they do, and very few (if any) of the major contributors use or care about IDEs for their own use. And there's tons to do that has nothing to do with IDEs. There are folks who care about it enough to work on it, which is why projects such as VisualD exist at all, and AFAIK, they work reasonably well, but the only two ways that they're going to get more work done on them than is currently happening is if the folks who care about that sort of thing contribute or if they donate money for it to be worked on. Not long ago, the D Foundation announced that they were going to use donations to pay someone to work on his plugin for Visual Studio Code: https://forum.dlang.org/post/rmqvglgccmgoajmhy...@forum.dlang.org So, if you want stuff like that to get worked on, then donate or pitch in. The situation with D - both with IDEs and in general - has improved greatly over time even if it may not be where you want it to be. But if you're ever expecting IDE support to be a top priority of many of the contributors, then you're going to be sorely disappointed. It's the sort of thing that we care about because we care about D being successful, but it's not the sort of thing that we see any value in whatsoever for ourselves, and selfish as it may be, when we spend the time to contribute to D, we're generally going to work on the stuff that we see as having the most value for getting done what we care about. And there's a lot to get done which impacts pretty much every D user and not just those who want something that's IDE-related. - Jonathan M Davis Dear Jonathan, you've just said it. There is no real plan and only problems that someone deems interesting or challenging at a given moment are tackled. 
If they solve a problem for a lot of users, it's only a side effect. The advent of a D Foundation hasn't changed anything in this regard, and it seems not to be just a financial issue. It's the mentality. In other words, D is still unreliable, and if that is what the community wants, fine, but instead of promoting it as a substitute for C/C++, Java, etc. it should come with a warning label that says "D is in many parts still at an experimental stage and ships with no guarantees whatsoever. Use at your own risk." That would save both the language developers and (potential) users a lot of headaches. I think this sort of misunderstanding is the source of a lot of friction on this forum. Some users think (or in my case: thought) that D will be a sound and stable language one day, a language they can use for loads of stuff, while the leadership prefers to keep it at a stage where they can test ideas to see what works and what doesn't, wait let me rephrase this, where the user can test other people's ideas with every new release.
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Monday, September 3, 2018 11:15:03 AM MDT Laurent Tréguier via Digitalmars-d wrote: > It just baffles me a bit to see the state of D in this > department, when languages like Go or Rust (hooray for yet > another comparison to Go and Rust) are a lot younger, but already > have what looks like very good tooling. > Then again they do have major industry players backing them > though... The dynamics are fundamentally different when you're paying someone to work on something. As I understand it, in addition to whatever volunteer work is done, Google and Mozilla pay people to work on those languages. And when you're doing that, it's trivial enough to say that you think that something matters enough to pay someone to work on it even if it's not something that anyone contributing actually wants to do or really cares about having for themselves. Relatively little time has been spent contributing to D by people who are paid to work on it. Even if both Walter and Andrei agreed that something should be treated as top priority, aside from paying someone to work on it through the D Foundation, they really can't make anyone work on it. What gets done is usually what the contributors care about. That's one reason why donations could end up being a game changer over time. It makes it possible to pay someone to do something that no contributors want to spend their free time doing. - Jonathan M Davis
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Monday, 3 September 2018 at 16:55:10 UTC, Jonathan M Davis wrote: Most of the work that gets done is the stuff that the folks contributing think is the most important - frequently what is most important for them for what they do, and very few (if any) of the major contributors use or care about IDEs for their own use. And there's tons to do that has nothing to do with IDEs. There are folks who care about it enough to work on it, which is why projects such as VisualD exist at all, and AFAIK, they work reasonably well, but the only two ways that they're going to get more work done on them than is currently happening is if the folks who care about that sort of thing contribute or if they donate money for it to be worked on. Not long ago, the D Foundation announced that they were going to use donations to pay someone to work on his plugin for Visual Studio Code: https://forum.dlang.org/post/rmqvglgccmgoajmhy...@forum.dlang.org So, if you want stuff like that to get worked on, then donate or pitch in. The situation with D - both with IDEs and in general - has improved greatly over time even if it may not be where you want it to be. But if you're ever expecting IDE support to be a top priority of many of the contributors, then you're going to be sorely disappointed. It's the sort of thing that we care about because we care about D being successful, but it's not the sort of thing that we see any value in whatsoever for ourselves, and selfish as it may be, when we spend the time to contribute to D, we're generally going to work on the stuff that we see as having the most value for getting done what we care about. And there's a lot to get done which impacts pretty much every D user and not just those who want something that's IDE-related. - Jonathan M Davis The complaints I have are exactly why I'm maintaining plugins for VSCode, Atom, and soon others myself.
Don't worry, I still think D is worth putting some time and effort into, and I know actions generally get more things done than words. I also know that tons of stuff is yet to be done with regard to the actual compilers and such. It just baffles me a bit to see the state of D in this department, when languages like Go or Rust (hooray for yet another comparison to Go and Rust) are a lot younger, but already have what looks like very good tooling. Then again, they do have major industry players backing them...
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Monday, September 3, 2018 9:41:48 AM MDT Laurent Tréguier via Digitalmars-d wrote: > On Monday, 3 September 2018 at 15:23:12 UTC, Chris wrote: > > On Monday, 3 September 2018 at 14:26:46 UTC, Laeeth Isharc > > > > wrote: > >> On Monday, 3 September 2018 at 11:32:42 UTC, Chris wrote: > >>> [...] > >> > >> D has never been about smooth experiences! That's a > >> commercial benefit if you think that hormesis brings benefits > >> and you are not looking for programmers of the trained-monkey, > >> strap a few APIs together type. > > > > It's high time it got a bit smoother if you want people to use > > it. Is everybody who doesn't use cli and knows all compiler > > flags by heart a coding monkey? Has it ever occurred to you > > that people want a smooth experience so they can concentrate on > > a job and get done with it? > > Yes. It almost sounds like a smooth experience would be a bad > thing to have, especially with the classic "you don't need an IDE > anyway" speech. Editing experience seems often dismissed as > unimportant, when it's one of the first things new users will > come across when trying out D. First impressions can matter a lot. Most of the work that gets done is the stuff that the folks contributing think is the most important - frequently what is most important for them for what they do, and very few (if any) of the major contributors use or care about IDEs for their own use. And there's tons to do that has nothing to do with IDEs. There are folks who care about it enough to work on it, which is why projects such as VisualD exist at all, and AFAIK, they work reasonably well, but the only two ways that they're going to get more work done on them than is currently happening is if the folks who care about that sort of thing contribute or if they donate money for it to be worked on. 
Not long ago, the D Foundation announced that they were going to use donations to pay someone to work on his plugin for Visual Studio Code: https://forum.dlang.org/post/rmqvglgccmgoajmhy...@forum.dlang.org So, if you want stuff like that to get worked on, then donate or pitch in. The situation with D - both with IDEs and in general - has improved greatly over time even if it may not be where you want it to be. But if you're ever expecting IDE support to be a top priority of many of the contributors, then you're going to be sorely disappointed. It's the sort of thing that we care about because we care about D being successful, but it's not the sort of thing that we see any value in whatsoever for ourselves, and selfish as it may be, when we spend the time to contribute to D, we're generally going to work on the stuff that we see as having the most value for getting done what we care about. And there's a lot to get done which impacts pretty much every D user and not just those who want something that's IDE-related. - Jonathan M Davis
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On 3 September 2018 at 18:07, RhyS via Digitalmars-d wrote: > > Too many resources split among too many distributions, graphical desktops > etc. Choice is good but too much choice means projects are starved for > resources, compatibility issues arise, bugs are even more present, ... > > A good example being the resources going into DMD, LDC, GDC... three compilers > for one language, when even well-funded languages stick to one compiler. This is an argument that has been batted to death and rebutted for nearly two decades now. 15 years ago, people were complaining that there was only one D compiler. It is ironic that people now complain that there are too many.
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Monday, 3 September 2018 at 15:41:48 UTC, Laurent Tréguier wrote: Yes. It almost sounds like a smooth experience would be a bad thing to have, especially with the classic "you don't need an IDE anyway" speech. Editing experience seems often dismissed as unimportant, when it's one of the first things new users will come across when trying out D. First impressions can matter a lot. It's the same issue that has kept desktop Linux stuck with almost no growth. It's easy to break things (the Nvidia graphics driver, *lol*), and so much is focused on the CLI that people who run into an issue and are not systems users quickly sink into a swamp. Too many resources are split among too many distributions, graphical desktops, etc. Choice is good, but too much choice means projects are starved for resources, compatibility issues arise, bugs are even more present... A good example is the resources going into DMD, LDC, GDC... three compilers for one language, when even well-funded languages stick to one compiler. And now some people think it's a good idea to have DMD cross-compile as well because "it's not hard to do". No, maybe not, but who will do all the testing? What resources will be spent when things do not work for some users (and what about the negative impact on their experience)? It's a long list, but people do not look past it. It sounds like fun, so let's do it. It's just so frustrating that a lot of people here do not understand: most programmers are not open-source developers, they are not coding gods, they are simply people who want things to go smoothly. Install the compiler, install a well-supported graphical IDE (and no, Vim does not count!), read some simple documentation, and off we go... We are not looking to be bug testers, core code implementers, etc. Selfish... sure... but this is how D gains more people. The more people work with your language, the more potential contributors you have who slowly become interested in helping out.
But when D dangles the carrot in front of the cart instead of the mule, do not be so surprised that a lot of people find D extremely frustrating and have a love-hate relationship with it.
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Monday, 3 September 2018 at 15:23:12 UTC, Chris wrote: On Monday, 3 September 2018 at 14:26:46 UTC, Laeeth Isharc wrote: On Monday, 3 September 2018 at 11:32:42 UTC, Chris wrote: [...] D has never been about smooth experiences! That's a commercial benefit if you think that hormesis brings benefits and you are not looking for programmers of the trained-monkey, strap a few APIs together type. It's high time it got a bit smoother if you want people to use it. Is everybody who doesn't use cli and knows all compiler flags by heart a coding monkey? Has it ever occurred to you that people want a smooth experience so they can concentrate on a job and get done with it? Yes. It almost sounds like a smooth experience would be a bad thing to have, especially with the classic "you don't need an IDE anyway" speech. Editing experience seems often dismissed as unimportant, when it's one of the first things new users will come across when trying out D. First impressions can matter a lot.
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Monday, 3 September 2018 at 14:26:46 UTC, Laeeth Isharc wrote: On Monday, 3 September 2018 at 11:32:42 UTC, Chris wrote: [...] D has never been about smooth experiences! That's a commercial benefit if you think that hormesis brings benefits and you are not looking for programmers of the trained-monkey, strap a few APIs together type. It's high time it got a bit smoother if you want people to use it. Is everybody who doesn't use cli and knows all compiler flags by heart a coding monkey? Has it ever occurred to you that people want a smooth experience so they can concentrate on a job and get done with it? It's a question of developmental stages too. I was a late developer as a person, but then I continued to develop into my 30s and perhaps 40s too. For human beings there are different kinds of people and implicit life strategies and natural fits with niches. Some are quick to grow up, but stop developing sooner and others mature more slowly but this process may continue long after others are done. I'm not saying a computer language is like a human being, but it is in part an organic phenomenon and social institutions develop according to their own logic and rhythm in my experience of studying them. This is not a yoga class. D is a late developer, and I think that's because it is a tremendously ambitious language. What use case is D intended to target? Well it's not like that - it's a general purpose programming language at a time when people have given up on that idea and think that it simply must be that you pick one tool for the job and couldn't possibly have a tool that does many different kind of things reasonably well. So the kind of use cases D is suited for depends much more on the capabilities and virtues of the people using it than is the case for other languages. (In truth in a negative sense that's true also of other languages - Go was designed to be easy to learn and to use for people who didn't have much programming experience). 
It's not D's usefulness I'm concerned with; it can do a lot of things. It's just a bit awkward to use in production and there's no reason why things should still be like in 2010. [...] Sure. I would agree with what you write but say that it's a case of capabilities and hormesis too sometimes. Nassim Taleb told a story about checking into a hotel and seeing a guy in a suit tip the bellboy to carry his bags upstairs. Later on he saw the same guy in a gym lifting weights (and I think on a Nautilus-type machine, which is much inferior to free weights). So any tool can make you lazy, and yet any tool - no matter how shiny, polished, and expensive - sometimes will break, and then if you are afraid of the command line or just very out of practice you can end up utterly helpless. It's a matter of balance, to be sure. Funny story, but this is not the place for esoteric contemplations. [...] I didn't find the experience last time I tried to be worse than just going through the Android C/C++ native SDK instructions. The first time I tried it was quite tough, as I struggled to even build the compiler because the instructions weren't quite right. I disagree about it not being maintainable, as it's much easier to keep something working when you understand it and can reason about it, but it's harder to use in the beginning, for sure. I think that the point for Android and ARM is not the build process but integration with Java APIs. If you can't figure out a build process that mostly just worked when I tried it and that doesn't have too much dark magic, I fear for how easy you are going to find JNI. (JNI is fine, but building a D project on Android requires less demanding technical capabilities.) I know JNI, I've connected D with Java (and vice versa) a few times. [...] You had one or two people who stubbornly devoted considerable parts of their lives to getting D to build on Android.
And instead of saying what a remarkable achievement, and thank you so much for this work, and this is very cool but we really should consider in a constructive manner how to make this easy to use, you are saying I want more! Fair enough - it's a free society, although I don't think you were ever promised that the Android experience would be something different from what it is. I never gave out about the guys (I think one of them is Joakim) who made it possible in the end, because without their efforts we wouldn't have anything. I'm just surprised they don't get more full-time support to wrap it up nicely. But I really am not surprised that people burn out doing open source. It's very odd to see, because I came back to this world after a long break. My first 'open source' contribution was to part of Tom Jennings' work on FidoNet in 1989 - an improvement to some node routing table, and in those days people used to be pretty appreciative. Same thing with Chuck Forsberg who invented ZModem and
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Monday, 3 September 2018 at 11:32:42 UTC, Chris wrote: On Sunday, 2 September 2018 at 12:07:17 UTC, Laeeth Isharc wrote: That's why the people that adopt D will inordinately be principals not agents in the beginning. They will either be residual claimants on earnings or will have acquired the authority to make decisions without persuading a committee that makes decisions on the grounds of social factors. If D becomes another C++ ? C++ was ugly from the beginning (in my personal subjective assessment) whereas D was designed by people with good taste. That's why it appeals inordinately to people with good taste. [snip] Be that as it may, however, you forget the fact that people "with good taste" who have (had) an intrinsic motivation to learn D are also very critical people who take no bs, else they wouldn't have ended up using D in the first place. Since they've already learned a lot of concepts etc. with D over the years, it's technically easy for them to move on to either an easier language or one that offers more or less the same features as D. I don't think so. If we are talking about the set of technically very capable people with an aesthetic sense then I don't think easier or feature set in a less beautiful way is appealing. This is based on revealed preference, because the conversations I have with technically very capable people that know many other languages as well or better than D go like "what compensation are you expecting? X. But if it's to write D, I can be flexible" and so on. Template meta-programming in D is quite simple. C++ has many of the features that D has. Therefore it's easy to do template meta-programming in C++, and just as easy for others to read your code in C++ as D? I don't think so. Having learnt the concepts in D and that it can be beautiful and easy kind of ruins you for inferior approaches. BTW I was grumbling about some C# wrapper code written manually. 
It talks to a C-style API (connected to an internal C++ code base developed before I became involved). So you have a low-level C# side declaration of the C function that returns an exception string by argument. Then you have a C# declaration of a wrapper function that throws an exception if the exception string is not empty. Then you have a layer on top that puts the class back together. Then you have a high-level wrapper layer. Then you have the bit that talks to Excel. I thought surely there must be decent code generation possibilities in C#. It's not too bad as a language. I looked it up. Microsoft say use HTML templates. Well, okay... but I'm not sure I like the trade-off of having to do stuff like that versus having to deal with some pain at the command line now and then. So once they're no longer happy with the way things are, they can dive into any language fast enough for the cost of transition to be low. You're making an implicit empirical statement that I don't believe to be accurate based on my experience. I would say if a representative programmer from the D community decides the costs no longer offset the benefits, then sure, they can learn another language, because the representative programmer here is pretty talented. But so what?
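[Editorially, the wrapper layering described above is the kind of boilerplate D's templates can collapse. A minimal sketch - all names here (`c_compute`, `checked`) are hypothetical, not taken from the code base under discussion - assuming a C-style function that signals failure with a nonzero return code and an error string written into a caller-supplied buffer:]

```d
import std.string : fromStringz;

// Hypothetical C-style API: nonzero return code means failure, and an
// error message is written into the caller-supplied buffer.
extern (C) int c_compute(double x, double* result, char* err, size_t errLen);

// One template generates the throwing wrapper, instead of a hand-written
// declaration-plus-wrapper pair for every function in the API.
double checked(alias cfn)(double x)
{
    double result;
    char[256] err = 0;
    if (cfn(x, &result, err.ptr, err.length) != 0)
        throw new Exception(err.ptr.fromStringz.idup);
    return result;
}

// Usage: auto y = checked!c_compute(2.0);  // throws instead of returning a code
```

[The fixed signature keeps the sketch short; with std.traits the wrapper could be made generic over any function of this shape.]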
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Monday, 3 September 2018 at 11:32:42 UTC, Chris wrote: On Sunday, 2 September 2018 at 12:07:17 UTC, Laeeth Isharc wrote: That's why the people that adopt D will inordinately be principals not agents in the beginning. They will either be residual claimants on earnings or will have acquired the authority to make decisions without persuading a committee that makes decisions on the grounds of social factors. If D becomes another C++ ? C++ was ugly from the beginning (in my personal subjective assessment) whereas D was designed by people with good taste. That's why it appeals inordinately to people with good taste. [snip] Be that as it may, however, you forget the fact that people "with good taste" who have (had) an intrinsic motivation to learn D are also very critical people who take no bs, else they wouldn't have ended up using D in the first place. Since they've already learned a lot of concepts etc. with D over the years, it's technically easy for them to move on to either an easier language or one that offers more or less the same features as D. So once they're no longer happy with the way things are, they can dive into any language fast enough for the cost of transition to be low. One has to be practical too. Yes! And being practical involves recognising that different objectives, starting points and considerations apply to different situations and contexts. Programming involves more than just features and concepts. Good, out of the box system integration (e.g. Android, iOS) is important too and he who ignores this simple truth will have to pay a high price. Important for whom? It depends a lot! Ask Sociomantic, Bastian, Weka if the lack of Android or iOS integration is a big problem for them, and I don't think you will get the answer that it is important. For what I am doing, Android or iOS would be nice, but it doesn't need to be out of the box, and you can do quite a lot on Android already.
I compiled ldc on my Huawei watch, which I never expected to be possible, though given it has 4 GB of RAM it's not that surprising. JNI is not that bad, though it could certainly be made easier with a bit of work. And I haven't tried, but I guess you could write the GUI stuff in Python or Lua for a simple app and do the heavy lifting with D. Of course, for the ecosystem generally, yes, it matters. That's why developers of new languages are so keen on giving users a smooth experience when it comes to app development and cross compilation, which leads me to the next point: IDEs. D has never been about smooth experiences! That's a commercial benefit if you think that hormesis brings benefits and you are not looking for programmers of the trained-monkey, strap a few APIs together type. It's a question of developmental stages too. I was a late developer as a person, but then I continued to develop into my 30s and perhaps 40s too. For human beings there are different kinds of people and implicit life strategies and natural fits with niches. Some are quick to grow up, but stop developing sooner, and others mature more slowly, but this process may continue long after others are done. I'm not saying a computer language is like a human being, but it is in part an organic phenomenon, and social institutions develop according to their own logic and rhythm in my experience of studying them. D is a late developer, and I think that's because it is a tremendously ambitious language. What use case is D intended to target? Well, it's not like that - it's a general purpose programming language at a time when people have given up on that idea and think that it simply must be that you pick one tool for the job and couldn't possibly have a tool that does many different kinds of things reasonably well. So the kind of use cases D is suited for depends much more on the capabilities and virtues of the people using it than is the case for other languages.
(In truth in a negative sense that's true also of other languages - Go was designed to be easy to learn and to use for people who didn't have much programming experience). No. You don't need an IDE to develop in D. Indeed, and much less so than with some other languages, because you can understand the code that's out of focus more easily and hold more of it in your head and reason about it. I personally use Sublime and vim, but tools are very personal because problems are different and people think differently, and there's not much upside in engaging in a holy war about tools. However, an IDE can a) make coding comfortable and b) boost your productivity. Sure - it can do for some people in some cases. As to a): maybe you just grow tired of the text editor & cli approach and you just want to click a few buttons to fix imports or correct typos and be done with it, and as to b): all this helps to boost your productivity, especially when you can easily set up an app or a web service with a few mouse
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Monday, 3 September 2018 at 06:29:02 UTC, Pjotr Prins wrote: One thing I want to add is that we ought to be appreciative of the work people put in - much of it in their spare time. I wonder if W&A and others sometimes despair for the lack of appreciation they get. Guido van Rossum burning out (W, notably, was the one to post that here first) is a shame. Even though he created a language which I find less tasteful, he did not deserve to be treated like that. Simple. I feel the same. There is no need to put a huge burden on them even if there is something that cannot be fixed. A good subset of a language is still a good language, I think. Powerful, expressive, precise - that's important for me.
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Monday, 3 September 2018 at 06:29:02 UTC, Pjotr Prins wrote: Hear, hear! Even though some languages like Julia, Rust and Go are much better funded than D - and their creators have excellent taste in different ways - they still have to go through similar evolutionary steps. There is no fast path. Whatever design decision you make, you always end up fixing bugs and corner cases. I was amazed how far behind Rust's debugger support was last year (I witnessed a talk at FOSDEM). They are catching up, but it just goes to show... No programming language is ever finished. But most programming languages try to get the basics right first and then add new features. If you want to run, you have to learn how to walk first. Languages take time to evolve, but we shouldn't be in a situation where the fixing of basic bugs and flaws is considered part of the "long term goals". One thing I want to add is that we ought to be appreciative of the work people put in - much of it in their spare time. I wonder if W&A and others sometimes despair for the lack of appreciation they get. Guido van Rossum burning out (W, notably, was the one to post that here first) is a shame. Even though he created a language which I find less tasteful, he did not deserve to be treated like that. Simple. I hold both Walter and Andrei (and all the other great contributors) in high esteem, and D was the right tool for me back in the day. Without it things would have been a lot harder. But I think D is past the laboratory stage, and as a user I feel that our actual experience counts for less than design experiments. Respect goes both ways; after all, it's the users who keep a programming language alive. If there isn't something fundamentally wrong in the communication between the leadership / language developers and the users, why do we get posts like this: "Thanks! Please add anything you think is missing to https://github.com/dlang/dlang.org/pull/2453 since Walter doesn't seem to be interested."
https://forum.dlang.org/post/mxgyoflrsibeyavvm...@forum.dlang.org Not good.
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Sunday, 2 September 2018 at 12:07:17 UTC, Laeeth Isharc wrote: That's why the people that adopt D will inordinately be principals not agents in the beginning. They will either be residual claimants on earnings or will have acquired the authority to make decisions without persuading a committee that makes decisions on the grounds of social factors. If D becomes another C++ ? C++ was ugly from the beginning (in my personal subjective assessment) whereas D was designed by people with good taste. That's why it appeals inordinately to people with good taste. [snip] Be that as it may, however, you forget the fact that people "with good taste" who have (had) an intrinsic motivation to learn D are also very critical people who take no bs, else they wouldn't have ended up using D in the first place. Since they've already learned a lot of concepts etc. with D over the years, it's technically easy for them to move on to either an easier language or one that offers more or less the same features as D. So once they're no longer happy with the way things are, they can dive into any language fast enough for the cost of transition to be low. One has to be practical too. Programming involves more than just features and concepts. Good, out of the box system integration (e.g. Android, iOS) is important too and he who ignores this simple truth will have to pay a high price. That's why developers of new languages are so keen on giving users a smooth experience when it comes to app development and cross compilation, which leads me to the next point: IDEs. No. You don't need an IDE to develop in D. However, an IDE can a) make coding comfortable and b) boost your productivity. As to a): maybe you just grow tired of the text editor & cli approach and you just want to click a few buttons to fix imports or correct typos and be done with it, and as to b): all this helps to boost your productivity, especially when you can easily set up an app or a web service with a few mouse clicks.
In D, if you want to do something with ARM/Android you will invariably end up with a potpourri of build scripts and spaghetti lines full of compiler flags etc. Not smooth; it takes a lot of time to set up manually and it's not easily maintainable. Doable, yes, but just because something is doable doesn't mean it's advisable, nor that people will actually bother doing it. I'm under the impression that the D Foundation doesn't pay much attention to these things once they are kind of "doable" and somebody has volunteered to "look into it", with no guarantee whatsoever as to if and when it will be available to users. And if there are complaints, hey, it's not "official", ask the guy who's looking into it. Not very professional. See, that doesn't really give you confidence in D, and it gives you an uneasy feeling. There's nothing worse in software development than programming while thinking "Am I actually wasting my time here?", and of course, you become reluctant to start anything new in D - which is only natural.
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Sunday, 2 September 2018 at 12:07:17 UTC, Laeeth Isharc wrote: I've only been programming since 1983, so I had the benefit of high level languages like BBC BASIC, C, a Forth I wrote myself, and Modula 3. And although I had to write a disassembler, at least I had assemblers built in. Programming using a hex keypad is not that satisfying after a while. It takes a long time to develop a language, its ecosystem and community. Hear, hear! Even though some languages like Julia, Rust and Go are much better funded than D - and their creators have excellent taste in different ways - they still have to go through similar evolutionary steps. There is no fast path. Whatever design decision you make, you always end up fixing bugs and corner cases. I was amazed how far behind Rust's debugger support was last year (I witnessed a talk at FOSDEM). They are catching up, but it just goes to show... One thing I want to add is that we ought to be appreciative of the work people put in - much of it in their spare time. I wonder if W&A and others sometimes despair for the lack of appreciation they get. Guido van Rossum burning out (W, notably, was the one to post that here first) is a shame. Even though he created a language which I find less tasteful, he did not deserve to be treated like that. Simple.
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Sunday, 2 September 2018 at 14:48:34 UTC, lurker wrote: after the beta i tried the final again - just to be fair. 1.) install d, install visual d. 2.) trying to look at options under visual d without a project crashes VS2017 - latest service pack. 3.) VS2017 displays a problem on startup 4.) creating the dummy project - compile for x64. error: something is missing. 5.) deinstall everything and wait for another year this crap does not even work out of the box - what else is not tested in D? i guess you don't intend to draw crowds to D and just keep talking about how to do this and that a little better in the compiler pet project. is D that dead that the releases are not tested, or do you want to keep all windows users out? Oh, and your error is probably that you're missing the C++ build tools, which come with the correct linker for x64, as far as I remember.
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Sunday, 2 September 2018 at 14:48:34 UTC, lurker wrote: after the beta i tried the final again - just to be fair. 1.) install d, install visual d. 2.) trying to look at options under visual d without a project crashes VS2017 - latest service pack. 3.) VS2017 displays a problem on startup. 4.) creating the dummy project - compile for x64: error, something is missing. 5.) deinstall everything and wait for another year. this crap does not even work out of the box - what else is not tested in D? i guess you don't intend to draw crowds to D and just keep talking about how to do this and that a little better in the compiler pet project. is D that dead that the releases are not tested, or do you want to keep all windows users out? Visual D is not official, remember that. Most people would go with VS Code anyway.
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Sun, 2 Sep 2018 at 16:05, Andre Pany via Digitalmars-d wrote: > > On Sunday, 2 September 2018 at 14:48:34 UTC, lurker wrote: > > after the beta i tried the final again - just to be fair. > > > > 1.) install d, install visual d. > > 2.) trying to look at options under visual d without a > > project crashes VS2017 - latest service pack. > > 3.) VS2017 displays a problem on startup. > > 4.) creating the dummy project - compile for x64: error, > > something is missing. > > 5.) deinstall everything and wait for another year. > > > > this crap does not even work out of the box - what else is not > > tested in D? > > > > i guess you don't intend to draw crowds to D and just keep > > talking about how to do this and that a little better in the > > compiler pet project. > > > > is D that dead that the releases are not tested, or do you want > > to keep all windows users out? > > There are a lot of motivated people here willing to help you > get your issue solved if you provide the details. > > I can confirm that DMD is working like a charm for me (different > visual studio versions on build servers, MS build tools on local > pc). I use IntelliJ instead of Visual Studio, but that is only my > personal preference. > > Kind regards > Andre I'm currently installing VS2017 to test your anecdote (I've been on 2015 for ages)... although I use VS2017 at work and haven't had any problems.
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Sunday, 2 September 2018 at 14:48:34 UTC, lurker wrote: after the beta i tried the final again - just to be fair. 1.) install d, install visual d. 2.) trying to look at options under visual d without a project crashes VS2017 - latest service pack. 3.) VS2017 displays a problem on startup. 4.) creating the dummy project - compile for x64: error, something is missing. 5.) deinstall everything and wait for another year. this crap does not even work out of the box - what else is not tested in D? i guess you don't intend to draw crowds to D and just keep talking about how to do this and that a little better in the compiler pet project. is D that dead that the releases are not tested, or do you want to keep all windows users out? There are a lot of motivated people here willing to help you get your issue solved if you provide the details. I can confirm that DMD is working like a charm for me (different Visual Studio versions on build servers, MS build tools on the local PC). I use IntelliJ instead of Visual Studio, but that is only my personal preference. Kind regards Andre
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Sunday, 2 September 2018 at 14:48:34 UTC, lurker wrote: after the beta i tried the final again - just to be fair. 1.) install d, install visual d. 2.) trying to look at options under visual d without a project crashes VS2017 - latest service pack. 3.) VS2017 displays a problem on startup. 4.) creating the dummy project - compile for x64: error, something is missing. 5.) deinstall everything and wait for another year. this crap does not even work out of the box - what else is not tested in D? i guess you don't intend to draw crowds to D and just keep talking about how to do this and that a little better in the compiler pet project. is D that dead that the releases are not tested, or do you want to keep all windows users out? Don't worry, that is just the beginning... It's like that in all aspects. The design is fundamentally flawed and no one seems to care (or recognize it). Software is far too complex nowadays to be using the mentalities of the '60s and '70s.
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
after the beta i tried the final again - just to be fair. 1.) install d, install visual d. 2.) trying to look at options under visual d without a project crashes VS2017 - latest service pack. 3.) VS2017 displays a problem on startup. 4.) creating the dummy project - compile for x64: error, something is missing. 5.) deinstall everything and wait for another year. this crap does not even work out of the box - what else is not tested in D? i guess you don't intend to draw crowds to D and just keep talking about how to do this and that a little better in the compiler pet project. is D that dead that the releases are not tested, or do you want to keep all windows users out?
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Saturday, 1 September 2018 at 12:33:49 UTC, rjframe wrote: On Thu, 23 Aug 2018 15:35:45 +, Joakim wrote: * Language complexity Raise your hand if you know how a class with both opApply and the get/next/end functions behaves when you pass it to foreach. How about a struct? Does it matter if it allows copying or not? The language was built because C++ was deemed too complex! Please see the thread about lazy [1] for a case where a question actually has an answer, but nobody seems to know it (and the person who does know it is hard pressed to explain the nuance that triggers this). By this rationale, C++ should be dead by now. Why do you think it's fatal to D? It's worth noting that C++ isn't always chosen for its technical merits. It's a well-known language whose more or less standard status in certain domains means it's the default choice; C++ is sometimes used for projects in which Stroustrup would say it's obviously the wrong language for the job. D is far more likely to require justification based on technical merit. If D becomes another C++, why bother taking a chance with D when you can just use C++, use a well-supported, commonly used compiler, and hire from a bigger pool of jobseekers? That's why the people who adopt D will disproportionately be principals, not agents, in the beginning. They will either be residual claimants on earnings or will have acquired the authority to make decisions without persuading a committee that decides on the grounds of social factors. If D becomes another C++? C++ was ugly from the beginning (in my personal, subjective assessment), whereas D was designed by people with good taste. That's why it appeals disproportionately to people with good taste. In Hong Kong we had some difficulty hiring a support person for a trading floor. In some cases I spoke to the most senior person in HK at even large and well-known funds (a small office in this case), and they simply were not good enough.
Thanks to someone from the D community I met a headhunter who used to be at Yandex but realized the money was better as a headhunter. They don't have many financial clients, I think, and don't have connections on the talent side in finance. But the runners-up were by far better than anyone we had found through other sources, and the best was outstanding. Good job, I said. It's funny that the person we hired came from a big bank, when other headhunters are looking in the same place and know that world better. By the way, how many people did you interact with to find X? In London, if a headhunter puts 10 people before you and you are really pretty happy, then that's a good day. He said two hundred! And they had to come up with a hiring test too. So the basic reason they could find good people in technology in finance when others couldn't is that they have much better taste. Do you see? The others knew many more people, they had experience doing it, and somebody who had to persuade a committee would have found it hard to justify. Programming ability follows a Pareto curve - see the best versus the rest. There might be many more C++, Python and C# programmers, but the incidence of outstanding ones is lower than in the D community, for the very reason that only someone obtuse or very smart will learn D for career reasons - intrinsic motivation draws the highest talent. It depends whether your model of people doing work is an army of intelligent trained monkeys or a force made up of small elite groups of the best people you have ever worked with. Of course the general of the trained-monkey army is going to be difficult to persuade. And so? On the other hand, someone who is smart and has good taste and has earned the right to decide - D is a less popular language that has fewer tutorials and less shiny IDE and debugger support.
Well, if you're a small company and you are directly or in effect a proxy owner of the residual (i.e. an owner of some kind), it's a pragmatic question, and saying nobody got fired for buying IBM misses the point, because the success is yours and the failure is yours and you can't pass the buck. The beauty of being the underdog is that it's easy to succeed. You don't need to be the top dog, and in fact it's not strategically wise to do something that might make them think you stand a chance - let them think what they want. The underdog just needs to keep improving and keep getting more adoption, which I don't have much doubt is happening. Modern people can be like children in their impatience sometimes! I've only been programming since 1983, so I had the benefit of high-level languages like BBC BASIC, C, a Forth I wrote myself, and Modula 3. And although I had to write a disassembler, at least I had assemblers built in. Programming using a hex keypad is not that satisfying after a while. It takes a long time to develop a language, its ecosystem and community. An S curve is quite
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Saturday, 1 September 2018 at 18:35:30 UTC, TheSixMillionDollarMan wrote: On Saturday, 1 September 2018 at 12:33:49 UTC, rjframe wrote: [...] Stroustrup also said that "achieving any degree of compatibility [with C/C++] is very hard, as the C/C++ experience shows" (reference: http://stroustrup.com/hopl-almost-final.pdf, 2007 - he refers to D on page 42, btw; that was 11 years ago now). And yet D is very intent on doing just that, while also treading its own path. I personally think this is why D has not taken off as many would hope. It's hard. I think it's also why D won't take off, as many hope. It's hard. Stroustrup was correct (back in the 90's). Yes, it really is hard. Made even harder now, since C++ has evolved into a 'constantly' moving target... D++
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Saturday, 1 September 2018 at 21:18:27 UTC, Nick Sabalausky (Abscissa) wrote: On 09/01/2018 07:12 AM, Chris wrote: Hope is usually the last thing to die. But one has to be wise enough to see that sometimes there is nothing one can do. As things are now, for me personally D is no longer an option, because of simple basic things, like autodecode, a flaw that will be there forever, poor support for industry technologies (Android, iOS) Much as I hate to agree, that IS one thing where I'm actually in the same boat: My primary current paid project centers around converting some legacy Flash stuff to...well, to NOT Flash obviously. I *want* to use D for this very badly. But I'm not. I'm using Unity3D because: 1. For our use right now: It has ready-to-go out-of-the-box WebAsm support (or is it asm.js? Whatever...I can't keep up with the neverending torrent of rubble-bouncing from the web client world.) 2. For our use later: It has ready-to-go out-of-the-box iOS/Android support (along with just about any other platform we could ever possibly hope to care about). 3. It has all the robust multimedia functionality we need ready-to-go on all platforms (actually, its capabilities are totally overkill for us, but that's not a bad problem to have). 4. C# isn't completely totally horrible. I will be migrating the server back-end to D, but I *really* wish I could be doing the client-side in D too, even if that meant having to build an entire 2D engine off nothing more than SDL. Unfortunately, I just don't feel I can trust the D experience to be robust enough on those platforms right now, and I honestly have no idea when or even if it will get there (Maybe I'm wrong on that. I hope I am. But that IS my impression even as the HUUUGE D fan I am.) "when or even if" I'm in the same situation but I can't wait anymore. Apps are everywhere these days and if you can't provide some sort of app, you're not in a good position. 
It's the reality of things; it's not a game - for many of us our jobs depend on it. Btw, why did I get this message yesterday: "Your message has been saved, and will be posted after being approved by a moderator." My message hasn't shown up yet as it hasn't been approved yet ;)
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On 09/01/2018 07:12 AM, Chris wrote: Hope is usually the last thing to die. But one has to be wise enough to see that sometimes there is nothing one can do. As things are now, for me personally D is no longer an option, because of simple basic things, like autodecode, a flaw that will be there forever, poor support for industry technologies (Android, iOS) Much as I hate to agree, that IS one thing where I'm actually in the same boat: My primary current paid project centers around converting some legacy Flash stuff to...well, to NOT Flash obviously. I *want* to use D for this very badly. But I'm not. I'm using Unity3D because: 1. For our use right now: It has ready-to-go out-of-the-box WebAsm support (or is it asm.js? Whatever...I can't keep up with the neverending torrent of rubble-bouncing from the web client world.) 2. For our use later: It has ready-to-go out-of-the-box iOS/Android support (along with just about any other platform we could ever possibly hope to care about). 3. It has all the robust multimedia functionality we need ready-to-go on all platforms (actually, its capabilities are totally overkill for us, but that's not a bad problem to have). 4. C# isn't completely totally horrible. I will be migrating the server back-end to D, but I *really* wish I could be doing the client-side in D too, even if that meant having to build an entire 2D engine off nothing more than SDL. Unfortunately, I just don't feel I can trust the D experience to be robust enough on those platforms right now, and I honestly have no idea when or even if it will get there (Maybe I'm wrong on that. I hope I am. But that IS my impression even as the HUUUGE D fan I am.)
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Saturday, 1 September 2018 at 12:33:49 UTC, rjframe wrote: C++ is sometimes used for projects in which Stroustrup would say it's obviously the wrong language for the job. D is far more likely to require justification based on technical merit. If D becomes another C++, why bother taking a chance with D when you can just use C++, use a well-supported, commonly used compiler, and hire from a bigger pool of jobseekers? Stroustrup also said that "achieving any degree of compatibility [with C/C++] is very hard, as the C/C++ experience shows" (reference: http://stroustrup.com/hopl-almost-final.pdf, 2007 - he refers to D on page 42, btw; that was 11 years ago now). And yet D is very intent on doing just that, while also treading its own path. I personally think this is why D has not taken off as many would hope. It's hard. I think it's also why D won't take off, as many hope. It's hard. Stroustrup was correct (back in the 90's). Yes, it really is hard. Made even harder now, since C++ has evolved into a 'constantly' moving target...
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Thu, 23 Aug 2018 14:29:23 +, bachmeier wrote: > Weka is an awesome project, but I don't know that most people > considering D should use your experience as the basis of their decision. > At least in my areas, I expect considerable growth in the usage of D > over the next 10 years. Maybe it won't see much traction as a C++ > replacement for large projects like Weka. As long as D calls itself a systems language (which I believe is still the case), the experience of organizations building large systems is extremely important -- for organizations that want to build large systems.
Re: D is dead (was: Dicebot on leaving D: It is anarchy driven development in all its glory.)
On Thu, 23 Aug 2018 15:35:45 +, Joakim wrote: >> * Language complexity >> >> Raise your hand if you know how a class with both opApply and the >> get/next/end functions behaves when you pass it to foreach. >> How about a struct? Does it matter if it allows copying or not? >> >> The language was built because C++ was deemed too complex! Please see >> the thread about lazy [1] for a case where a question actually has an >> answer, but nobody seems to know it (and the person who does know it is >> hard pressed to explain the nuance that triggers this). > > By this rationale, C++ should be dead by now. Why do you think it's > fatal to D? It's worth noting that C++ isn't always chosen for its technical merits. It's a well-known language whose more or less standard status in certain domains means it's the default choice; C++ is sometimes used for projects in which Stroustrup would say it's obviously the wrong language for the job. D is far more likely to require justification based on technical merit. If D becomes another C++, why bother taking a chance with D when you can just use C++, use a well-supported, commonly-used compiler, and hire from a bigger pool of jobseekers?
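For readers wondering about the quoted "opApply plus range primitives" question: a minimal D sketch of the ambiguity (the struct name `Both` and the printed values are purely illustrative; per the D language specification's foreach rules, `opApply` is checked before the input-range primitives when both are present):

```d
import std.stdio;

// Illustrative type that defines BOTH iteration protocols:
// the input-range primitives (empty/front/popFront) and opApply.
struct Both
{
    int i = 0;

    // Range interface: would yield 0, 1, 2.
    @property bool empty() const { return i >= 3; }
    @property int front() const { return i; }
    void popFront() { ++i; }

    // opApply: yields 100, 101, 102 instead.
    int opApply(scope int delegate(int) dg)
    {
        foreach (x; 100 .. 103)
            if (auto r = dg(x))
                return r;
        return 0;
    }
}

void main()
{
    // Per the spec, foreach looks for opApply before the range
    // primitives, so this iterates the opApply sequence (100 101 102),
    // not the range sequence (0 1 2).
    foreach (x; Both())
        write(x, " ");
    writeln();
}
```

The point of the complaint stands regardless of which protocol wins: the answer exists in the spec, but few users can produce it from memory, which is exactly the kind of complexity the quoted post is criticizing.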