Re: [OT] Android
On Fri, Oct 19, 2018 at 06:34:50PM +, Joakim via Digitalmars-d wrote: > On Thursday, 18 October 2018 at 19:37:24 UTC, H. S. Teoh wrote: [...] > > Eventually I resorted to generating Java code from D for some of the > > most painful repetitive parts, and the way things are looking, I'm > > likely to be doing a lot more of that. I fear the way things are going > > will have me essentially writing a D to Java compiler at some point! > > Why not just use the Android port of D? I want to. But I couldn't get the cross-compiler set up properly: https://forum.dlang.org/post/mailman.4361.1539811552.29801.digitalmar...@puremagic.com If I can get past that hurdle, Java is going out the window pronto. :-D T -- Everybody talks about it, but nobody does anything about it! -- Mark Twain
Re: [OT] Is this a feature is any Linux terminal?
On 10/15/18 2:00 AM, Nick Sabalausky (Abscissa) wrote: Unfortunately, Tilix doesn't appear to support using envvars from the current terminal in the custom command above (if that would even be possible), so I'll have to manually change SESSION_NAME_HERE to my KDevelop session name once per session. (Or always use "curr" or something for whatever session I'm currently working in.) Ooohh, oohh!!! I can just use a file instead of an envvar: kdevelop -s `cat ~/.kdev-sess` $1:$2 $ echo name-of-desired-kdevelop-session > ~/.kdev-sess (Or better yet, wrap that in a trivial script.)
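The "trivial script" wrapped around that could look something like the following sketch. Everything here follows the post's idea (session name read from `~/.kdev-sess`); the helper name `kdev_open_argv` is hypothetical, and only the command line is built, not actually run:

```python
from pathlib import Path
import tempfile

# Sketch of the wrapper-script idea: read the KDevelop session name
# from a file like ~/.kdev-sess and assemble the jump-to-line command.
# (kdev_open_argv is a hypothetical helper; we only build the argv,
# we don't launch kdevelop here.)
def kdev_open_argv(file: str, line: int, sess_file: Path) -> list:
    session = sess_file.read_text().strip()
    return ["kdevelop", "-s", session, f"{file}:{line}"]

# Example, using a temporary stand-in for ~/.kdev-sess:
with tempfile.TemporaryDirectory() as d:
    sess = Path(d) / ".kdev-sess"
    sess.write_text("name-of-desired-kdevelop-session\n")
    assert kdev_open_argv("source/app.d", 17, sess) == \
        ["kdevelop", "-s", "name-of-desired-kdevelop-session", "source/app.d:17"]
```

Switching sessions is then just rewriting the one-line file, exactly as in the `echo ... > ~/.kdev-sess` example.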
Re: [OT] Is this a feature is any Linux terminal?
On 10/14/18 10:31 PM, Gerald wrote: Tilix supports this. You can define a custom regex and then use the values extracted by the regex to launch an editor to load the file at the right line number. https://gnunn1.github.io/tilix-web/manual/customlinks/ The screenshot shows a configuration that does this for gedit. Awesome! With all the terminal emulators out there, I figured I couldn't be the only one to think of this. I'll admit, I have a very strong personal distaste for UIs that use the GNOME interface guidelines. But, this is such a useful feature, I think I'll be using it anyway when I'm coding. And heck, maybe someday I'll just whip up a term of my own :) I've been using KDevelop as my editor lately, and it doesn't support this *quite* as nicely as I would like. But this configuration does seem to be working for me: Regex: (.*)\(([0-9]+)\):.* Cmd: kdevelop -s SESSION_NAME_HERE $1:$2 Unfortunately, Tilix doesn't appear to support using envvars from the current terminal in the custom command above (if that would even be possible), so I'll have to manually change SESSION_NAME_HERE to my KDevelop session name once per session. (Or always use "curr" or something for whatever session I'm currently working in.) Still though, this will be really nice to have working. Thanks.
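To illustrate, the regex above applied to a typical dmd-style error line yields the file in $1 and the line number in $2 (the sample error message is made up):

```python
import re

# The custom-link regex from the post; group 1 becomes $1 (the file),
# group 2 becomes $2 (the line number).
pattern = re.compile(r'(.*)\(([0-9]+)\):.*')

sample = "source/app.d(17): Error: undefined identifier `foo`"
m = pattern.match(sample)
assert m is not None
assert (m.group(1), m.group(2)) == ("source/app.d", "17")
# The configured command then expands to:
#   kdevelop -s SESSION_NAME_HERE source/app.d:17
```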
Re: [OT] Is this a feature is any Linux terminal?
On 10/14/18 10:28 PM, Basile B. wrote: VTE can certainly do this. It's the library many people use to embed a terminal in their app (or to make terminals, like Tilix). You can look at the API to get a better idea of what's possible https://developer.gnome.org/vte/0.48/VteTerminal.html. Click on message is certainly possible. I use VTE in Coedit but so far the only advanced feature is that it follows the projects path and/or the editor file path dynamically. Useful but far from what you describe, which is done elsewhere. I think that Konsole must be based on something similar to VTE (i.e a library) because the way it's integrated in Dolphin for example shows that it's not just a window that's hosted (it follows the path selected in the explorer... oh i remember where i stole the idea now...) Cool, I didn't know about VTE. On the desktop, I'm more KDE/Qt, but apparently, it seems Qt has an equivalent: qtermwidget. Don't know whether it supports the same thing though. Will have to look into it.
Re: [OT] Is this a feature is any Linux terminal?
On Sunday, 14 October 2018 at 23:28:04 UTC, Nick Sabalausky (Abscissa) wrote: But it just occurred to me: There's no reason any ordinary terminal emulator couldn't be written to do the same thing. A setting for a custom regex to look for, another setting for a command to run when the line is clicked on. That should be about it. The user's editor would have to support some kind of "editor --jump-to..." feature, but aside from that...well, why the heck not? The terminal emulator I've been using (Konsole) doesn't appear to have anything like that, AFAICT. But I'm not really married to Konsole. Anyone know of another terminal with a feature like this? Tilix supports this. You can define a custom regex and then use the values extracted by the regex to launch an editor to load the file at the right line number. https://gnunn1.github.io/tilix-web/manual/customlinks/ The screenshot shows a configuration that does this for gedit.
Re: [OT] Is this a feature is any Linux terminal?
On Sunday, 14 October 2018 at 23:28:04 UTC, Nick Sabalausky (Abscissa) wrote: Was just thinking about this: I've often liked the idea of having a terminal emulator built into my code editor, so it could auto-highlight errors/etc and do jump-to-line on ANY variation of build command, without having to set up a custom build tool in the editor for "this is the exact command to build my project". (Yes, I know that full IDEs...and even many editors support jump-to-line, but they generally don't support running arbitrary variations of commands without setting up a very specific instance of a command ahead of time). But it just occurred to me: There's no reason any ordinary terminal emulator couldn't be written to do the same thing. A setting for a custom regex to look for, another setting for a command to run when the line is clicked on. That should be about it. The user's editor would have to support some kind of "editor --jump-to..." feature, but aside from that...well, why the heck not? The terminal emulator I've been using (Konsole) doesn't appear to have anything like that, AFAICT. But I'm not really married to Konsole. Anyone know of another terminal with a feature like this? VTE can certainly do this. It's the library many people use to embed a terminal in their app (or to make terminals, like Tilix). You can look at the API to get a better idea of what's possible https://developer.gnome.org/vte/0.48/VteTerminal.html. Click on message is certainly possible. I use VTE in Coedit but so far the only advanced feature is that it follows the projects path and/or the editor file path dynamically. Useful but far from what you describe, which is done elsewhere. I think that Konsole must be based on something similar to VTE (i.e a library) because the way it's integrated in Dolphin for example shows that it's not just a window that's hosted (it follows the path selected in the explorer... oh i remember where i stole the idea now...)
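The proposed feature (a configurable regex plus a command template run on click) can be sketched in a few lines. This is a hypothetical illustration; the `editor --jump-to` command and the dmd-style regex are stand-ins, not any real terminal's API:

```python
import re

# Sketch of the idea: the terminal scans each output line against a
# user-configured regex; clicking a matching line runs a user-configured
# command with $1/$2 replaced by the capture groups.
# Both the regex and the command template below are made-up examples.
LINK_RE = re.compile(r'(.*)\(([0-9]+)\):.*')      # dmd-style "file(line): message"
CMD_TEMPLATE = ["editor", "--jump-to", "$1:$2"]   # hypothetical editor CLI

def click_command(line):
    """Return the argv to run when this output line is clicked, or None."""
    m = LINK_RE.match(line)
    if m is None:
        return None
    subst = lambda s: s.replace("$1", m.group(1)).replace("$2", m.group(2))
    return [subst(arg) for arg in CMD_TEMPLATE]

assert click_command("src/foo.d(42): Error: undefined identifier") == \
    ["editor", "--jump-to", "src/foo.d:42"]
assert click_command("Building project...") is None
```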
Re: OT: Bad translations
On Thursday, 27 September 2018 at 07:03:51 UTC, Andrea Fontana wrote: On Thursday, 27 September 2018 at 05:15:01 UTC, Ali Çehreli wrote: A delicious Turkish dessert is "kabak tatlısı", made of squash. Now, it so happens that "kabak" also means "zucchini" in Turkish. Imagine my shock when I came across that dessert recipe in English that used zucchini as the ingredient! :) Ali You can't even imagine how many Italian words and recipes are distorted... Andrea +1 :-P
Re: OT: Bad translations
On Thursday, 27 September 2018 at 05:15:01 UTC, Ali Çehreli wrote: A delicious Turkish dessert is "kabak tatlısı", made of squash. Now, it so happens that "kabak" also means "zucchini" in Turkish. Imagine my shock when I came across that dessert recipe in English that used zucchini as the ingredient! :) Ali You can't even imagine how many Italian words and recipes are distorted... Andrea
Re: OT: Bad translations
On Wednesday, September 26, 2018 11:15:01 PM MDT Ali Çehreli via Digitalmars-d wrote: > A delicious Turkish dessert is "kabak tatlısı", made of squash. Now, it > so happens that "kabak" also means "zucchini" in Turkish. Imagine my > shock when I came across that dessert recipe in English that used > zucchini as the ingredient! :) Was it any good? ;) - Jonathan M Davis
Re: OT: Bad translations
A delicious Turkish dessert is "kabak tatlısı", made of squash. Now, it so happens that "kabak" also means "zucchini" in Turkish. Imagine my shock when I came across that dessert recipe in English that used zucchini as the ingredient! :) Ali
Re: OT: Bad translations
On Wednesday, 26 September 2018 at 12:57:21 UTC, ShadoLight wrote: On Wednesday, 26 September 2018 at 02:12:07 UTC, Ali Çehreli wrote: On 09/24/2018 08:17 AM, 0xEAB wrote: > - Non-idiomatic translations of tech terms [2] [snip] English message was something like "No memory left" and the German translation was "No memory on the left hand side" :) Ali Not sure if this was not just some urban legend, but there was a delightful story back in the late 80s/early 90s about the early translation programs. They were in particular not very good at idiomatic translations, so people would play with idiomatic expressions from language X (say english) to language Y, and then back from Y to X - and then see what was returned. Apparently the expression "the spirit is willing but the flesh is weak" translated to Russian and back was returned by one such program as: "The vodka is good but the meat is rotten!" In case you missed it, this was widely shared in the tech news a month or so ago: https://translate.google.fr/?hl=fr#so/en/ngoo%20m%20goon%20goob%20goo%20goo%20goo%20mgoo%20goo%20goo%20goo%20goo%20goo%20m%20goo There's still progress to be made.
Re: OT: Bad translations
On Wednesday, 26 September 2018 at 02:12:07 UTC, Ali Çehreli wrote: On 09/24/2018 08:17 AM, 0xEAB wrote: > - Non-idiomatic translations of tech terms [2] [snip] English message was something like "No memory left" and the German translation was "No memory on the left hand side" :) Ali Not sure if this was not just some urban legend, but there was a delightful story back in the late 80s/early 90s about the early translation programs. They were in particular not very good at idiomatic translations, so people would play with idiomatic expressions from language X (say english) to language Y, and then back from Y to X - and then see what was returned. Apparently the expression "the spirit is willing but the flesh is weak" translated to Russian and back was returned by one such program as: "The vodka is good but the meat is rotten!"
Re: OT: Bad translations
On Wednesday, 26 September 2018 at 02:12:07 UTC, Ali Çehreli wrote: On 09/24/2018 08:17 AM, 0xEAB wrote: > - Non-idiomatic translations of tech terms [2] This is something I had heard from a Digital Research programmer in the early 90s: English message was something like "No memory left" and the German translation was "No memory on the left hand side" :) The K&R in German was of the same "quality". That happens when the translator is not an IT person himself.
Re: OT: Bad translations
On Wednesday, 26 September 2018 at 02:12:07 UTC, Ali Çehreli wrote: On 09/24/2018 08:17 AM, 0xEAB wrote: > - Non-idiomatic translations of tech terms [2] This is something I had heard from a Digital Research programmer in the early 90s: English message was something like "No memory left" and the German translation was "No memory on the left hand side" :) My ex-girlfriend tried to learn SQL from a book that had gotten a prize for its use of Norwegian. As a result, every single concept used a different name from what everybody else uses, and while it may be possible to learn some SQL from this, it made googling an absolute nightmare. Just imagine a whole book saying CHOOSE for SELECT, IF for WHERE, and USING instead of FROM - only worse, since it's a different language. It even used SQL pseudo-code with these made-up names, and showed how to translate it to proper SQL as more of an afterthought. -- Simen
Re: [OT] college
On 03/09/2018 7:05 PM, Joakim wrote: One of the root causes of that dysfunction is there's way too much software written. Open source has actually helped alleviate this, because instead of every embedded or server developer who needs an OS kernel convincing management that they should write their own, they now have a hard time justifying it when a free, OSS kernel like linux is out there, which is why so many of those places use linux now. Of course, you'd often like to modify the kernel and linux may not be stripped down or modular enough for some, but there's always other OSS kernels like Minix or Contiki for them. Yes, but 30 years ago it was actually realistic, and even best practice, to write your own OS as part of deploying e.g. a game. Can't say that now, even in the microcontroller space.
Re: [OT] college
On Sunday, 2 September 2018 at 19:30:58 UTC, Nick Sabalausky (Abscissa) wrote: On 09/02/2018 05:43 AM, Joakim wrote: Most will be out of business within a decade or two, as online learning takes their place. I kinda wish I could agree with that, but schools are too much of a sacred cow to be going anywhere anytime soon. And for that matter, the online ones still have to tackle many of the same challenges anyway, WRT successful and effective teaching. Really the only difference is "physical classroom vs no physical classroom". Well, that and maybe price, but the community colleges have had the uni's well beat on price for a long time (even manage to do a good job teaching certain things, depending on the instructor), but they haven't made the uni's budge: The best they've been able to do is establish themselves as a supplement to the uni's, where people start out with some of their gen-ed classes at the (comparatively) cheap community colleges for the specific purpose of later transferring to a uni. That's because what the current online efforts do is simply slap the in-class curricula online, whereas what really needs to be done is completely change what's taught, away from the incoherent mix of theory and Java that basically describes every degree (non-CS too), and how it's tested and certified. When that happens, the unis will collapse, because online learning will be so much better at a fraction of the cost. As for sacred cows, the newspaper business was one of them, ie Journalism, but it's on death's door, as I pointed out in this forum years ago: https://en.m.wikipedia.org/wiki/File:Naa_newspaper_ad_revenue.svg There are a lot of sacred cows getting butchered by the internet, college will be one of the easier ones to get rid of. On Sunday, 2 September 2018 at 21:07:20 UTC, Nick Sabalausky (Abscissa) wrote: On 09/01/2018 03:47 PM, Everlast wrote: It's because programming is done completely wrong. 
All we do is program like it's 1952 all wrapped up in a nice box and bow tie. WE should have tools and a compiler design that all work interconnected with complete graphical interfaces that aren't based in the text gui world (an IDE is just a fancy text editor). I'm talking about 3D code representation using graphics so projects can be navigated visually in a dynamic way and many other things.

There are really two main, but largely independent, aspects to what you're describing: visual representation, and physical interface.

A. Visual representation: By visual representation, I mean "some kind of text, or UML-ish diagrams, or 3D environment, etc". What's important to keep in mind here is: The *fundamental concepts* involved in programming are inherently abstract, and thus equally applicable to whatever visual representation is used. If you're going to make a diagram-based or VR-based programming tool, it will still be using the same fundamental concepts that are already established in text-based programming: imperative loops, conditionals and variables; functional/declarative immutability, purity and higher-order funcs; encapsulation; pipelines (like ranges); etc. And indeed, all GUI-based programming tools have worked this way. Because how *else* are they going to work? If what you're really looking for is something that replaces or transcends all of those existing, fundamental programming concepts, then what you're *really* looking for is a new fundamental programming concept, not a visual representation. And once you DO invent a new fundamental programming concept, being abstract, it will again be applicable to a variety of possible visual representations. That said, it is true some concepts may be more readily amenable to certain visual representations than others. But, at least for all the currently-known concepts, any combination of concept and representation can certainly be made to work.

B.
Physical interface: By this I mean both actual input devices (keyboards, controllers, pointing devices) and also the mappings from their affordances (ie, what you can do with them: push button x, tilt stick's axis Y, point, move, rotate...) to specific actions taken on the visual representation (navigate, modify, etc.) The mappings, of course, tend to be highly dependent on the visual representation (although, theoretically, they don't strictly HAVE to be). The devices themselves, less so: For example, many of us use a pointing device to help us navigate text. Meanwhile, 3D modelers/animators find it's MUCH more efficient to deal with their 3D models and environments by including heavy use of the keyboard in their workflow instead of *just* a mouse and/or Wacom alone. An important point here is that using a keyboard has a tendency to be much more efficient for a much wider range of interactions than, say, a pointing device, like a mouse or touchscreen. There are some
Re: [OT] college
On 09/02/2018 05:43 AM, Joakim wrote: Most will be out of business within a decade or two, as online learning takes their place. I kinda wish I could agree with that, but schools are too much of a sacred cow to be going anywhere anytime soon. And for that matter, the online ones still have to tackle many of the same challenges anyway, WRT successful and effective teaching. Really the only difference is "physical classroom vs no physical classroom". Well, that and maybe price, but the community colleges have had the uni's well beat on price for a long time (even manage to do a good job teaching certain things, depending on the instructor), but they haven't made the uni's budge: The best they've been able to do is establish themselves as a supplement to the uni's, where people start out with some of their gen-ed classes at the (comparatively) cheap community colleges for the specific purpose of later transferring to a uni.
Re: [OT] Leverage Points
On Thursday, 30 August 2018 at 11:45:00 UTC, Joakim wrote: (Quoting from the article I think). Kuhn and Lakatos. Paradigm shifts don't take place when the dominant paradigm is defeated by logical or empirical means. Paradigm shifts take place when for some reason people say "how about we stop talking about that, and start talking about this instead". Not sure why you'd call that anything other than defeat. :) FWIW, it's the point of Lakatos's work: he argues that a paradigm can't be defeated by logical or empirical means. It takes zero effort to not do anything, so status quo is easily maintained.
Re: [OT] Leverage Points
On Monday, 20 August 2018 at 12:26:25 UTC, Laeeth Isharc wrote: On Monday, 20 August 2018 at 11:55:33 UTC, Joakim wrote: "So how do you change paradigms? Thomas Kuhn, who wrote the seminal book about the great paradigm shifts of science, has a lot to say about that. In a nutshell, you keep pointing at the anomalies and failures in the old paradigm, you keep speaking and acting, loudly and with assurance, from the new one, you insert people with the new paradigm in places of public visibility and power. You don’t waste time with reactionaries; rather you work with active change agents and with the vast middle ground of people who are open-minded." (Quoting from the article I think). Kuhn and Lakatos. Paradigm shifts don't take place when the dominant paradigm is defeated by logical or empirical means. Paradigm shifts take place when for some reason people say "how about we stop talking about that, and start talking about this instead". Not sure why you'd call that anything other than defeat. :) I think he described certain political changes in the Western World beginning in the mid to late 60s rather well. I don't think it describes how changes in the sphere of voluntary (non-political ie market and genuine civil society) activity unfold. Supposing it were a good idea (which it isn't), how would one be able to insert people in places of public visibility and power who put forward a point of view that is very different from the prevailing one? Only via a program of entryism, and I don't think that in the end much good will come of that. By convincing those with power/visibility that the contrary view is worth integrating? Look at Microsoft's about-face on open source over a couple decades, going from denigrating it to buying open-source producing or supporting companies like Xamarin and Github and open-sourcing several of their own projects, as an example.
So I think the original author has cause and effect the wrong way around (not too surprisingly because he is talking about things that relate to politics and activism). [NB one shouldn't mention the Club of Rome without mentioning what a failure their work was, and it was predictably and indeed predicted to be a failure for the exact same reasons it failed]. It isn't that you insert people representing the new paradigm in positions of influence and power. It is that people from the emerging new paradigm - which is nothing, a bunch of no-hopers, misfits and losers viewed from a conventional perspective - by virtue of the fact that it has something useful to say and has drawn highly talented people who recognise that start ever so slowly to begin things and eventually to accomplish things - still on the fringes - and over time this snowballs. After a while it turns out that they are no longer on the fringes but right at the centre of things, in part because the centre has moved. The best illustration of this phenomenon was I think in a work of fiction - Neal Stephenson's Cryptonomicon. I never expected someone to write a novel based on a mailing list - the cypherpunks. It was about as surprising to me then as it would be to see Dlang - the movie - today. And of course that itself was an early indication that the ideas and ways of thinking represented by what was originally quite a small community were on the ascent. I agree that she's looking at it from the point of view of governmental change for her environmental agenda, whereas the market is more likely to have entirely new institutions - it used to be new _companies_, but with the internet it's now increasingly decentralized operations like the community behind bitcoin or BitTorrent... or D - form that become much more important than the old ones: creative destruction.
So, significantly open-source Android replaces mostly closed Windows as the dominant OS used by most consumers for personal computing, rather than Microsoft really getting the new religion much. This pretty much reflects what Laeeth always says about finding principals who can make their own decisions about using D. "Places of public visibility and power" for D are commercial or open-source projects that attract attention for being well done or at least popular. Well - I understand what you mean, but I don't recognise this as being my point. Principals who can make their own decisions probably aren't today highly visible and visibly powerful. The latter comes much later on in the development of a project, movement or scene and if you're visible it's a tax that takes time away from doing real work. By the time you're on the front cover of Time or The Economist, it's as often as not the beginning of the end - at least for anything vital. You're misreading what she wrote: she only said that you place new people in positions where they have some visibility or power, again because of her emphasis on government change, not that you convince the
Re: [OT] Leverage Points
On Friday, 24 August 2018 at 03:06:40 UTC, Jonathan Marler wrote: I don't have much influence on the first 4 types of "leverage points" in D, but I have a suggestion for a new "rule of the system" (5th most important type of leverage point). Require reviews from any user before merging their pull requests. There's a number of ways you could implement the requirement, maybe every PR that a user creates needs to have at least 1 review of another PR associated with it. You could require more or less reviews depending on the size of the PR queue. You could also look at developer's "review to pull request" ratio. Interesting idea. Just to get an idea, I wrote a script to calculate some of this data (github.com/marler8997/githubstats). Here's the data for dmd, sorted by review to pr ratio: […] Interesting data as well. Seeing that relatively few have a review/pr ratio > 1, you may be onto something. (The list seems to have an issue with ordering though, for those that reviewed without having PRs. Attributing them a ratio of > 1 would be fairer than 0).
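The proposed metric (reviews divided by pull requests, per user) is straightforward to compute once you have the two event lists; a toy sketch, with hypothetical data standing in for what a script like githubstats pulls from the GitHub API:

```python
from collections import Counter

# Toy data: who authored each PR and each review. In reality this
# would come from the GitHub API, as in the githubstats script.
pr_authors = ["alice", "alice", "bob", "alice", "carol"]
review_authors = ["bob", "bob", "bob", "carol", "alice", "bob"]

pr_counts = Counter(pr_authors)
review_counts = Counter(review_authors)

def review_pr_ratio(user):
    # Give reviewers with no PRs an infinite ratio instead of zero,
    # which also addresses the sort-order issue noted in the reply.
    n_prs = pr_counts.get(user, 0)
    return float("inf") if n_prs == 0 else review_counts.get(user, 0) / n_prs

assert review_pr_ratio("bob") == 4.0        # 4 reviews, 1 PR
assert review_pr_ratio("alice") == 1 / 3    # 1 review, 3 PRs
```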
Re: [OT] "I like writing in D" - Hans Zimmer
For anyone not quite getting all the sharp jokes here, this might help: https://www.basicmusictheory.com/d-major-key-signature On Sun, 2018-08-26 at 17:25 +, Meta via Digitalmars-d wrote: > On Sunday, 26 August 2018 at 11:46:17 UTC, Olivier Pisano wrote: […] > > > > Moreover, D is written using two sharp signs, which gives me > > ideas. > > D = C## -- Russel.
===
Dr Russel Winder      t: +44 20 7585 2200
41 Buckmaster Road    m: +44 7770 465 077
London SW11 1EN, UK   w: www.russel.org.uk
Re: [OT] "I like writing in D" - Hans Zimmer
On Sunday, 26 August 2018 at 11:46:17 UTC, Olivier Pisano wrote: On Wednesday, 22 August 2018 at 22:51:58 UTC, Piotrek wrote: You may already know that from youtube. It seems D starts getting traction even among musicians: https://www.youtube.com/watch?v=yCX1Ze3OcKo&feature=youtu.be&t=64 That really put a smile on my face :D And it would be a nice example of a D advertising campaign ;) Cheers, Piotrek Moreover, D is written using two sharp signs, which gives me ideas. D = C##
Re: [OT] "I like writing in D" - Hans Zimmer
On Wednesday, 22 August 2018 at 22:51:58 UTC, Piotrek wrote: You may already know that from youtube. It seems D starts getting traction even among musicians: https://www.youtube.com/watch?v=yCX1Ze3OcKo&feature=youtu.be&t=64 That really put a smile on my face :D And it would be a nice example of a D advertising campaign ;) Cheers, Piotrek Moreover, D is written using two sharp signs, which gives me ideas.
Re: [OT] "I like writing in D" - Hans Zimmer
On Wednesday, 22 August 2018 at 22:51:58 UTC, Piotrek wrote: You may already know that from youtube. It seems D starts getting traction even among musicians: https://www.youtube.com/watch?v=yCX1Ze3OcKo&feature=youtu.be&t=64 That really put a smile on my face :D And it would be a nice example of a D advertising campaign ;) Cheers, Piotrek D is a great key, there's a number of beautiful compositions written in it. I think most notable one is D-spacito. People liked it so much they are waiting for the second version.
Re: [OT] "I like writing in D" - Hans Zimmer
On Wednesday, 22 August 2018 at 22:51:58 UTC, Piotrek wrote: You may already know that from youtube. It seems D starts getting traction even among musicians: https://www.youtube.com/watch?v=yCX1Ze3OcKo&feature=youtu.be&t=64 That really put a smile on my face :D And it would be a nice example of a D advertising campaign ;) Cheers, Piotrek We don't deserve Hans Zimmer
Re: [OT] Leverage Points
On Saturday, 18 August 2018 at 13:33:43 UTC, Andrei Alexandrescu wrote: A friend recommended this article: http://donellameadows.org/archives/leverage-points-places-to-intervene-in-a-system/ I found it awesome and would recommend to anyone in this community. Worth a close read - no skimming, no tl;rd etc. The question applicable to us - where are the best leverage points in making the D language more successful. Andrei I don't have much influence on the first 4 types of "leverage points" in D, but I have a suggestion for a new "rule of the system" (5th most important type of leverage point). Require reviews from any user before merging their pull requests. There's a number of ways you could implement the requirement, maybe every PR that a user creates needs to have at least 1 review of another PR associated with it. You could require more or less reviews depending on the size of the PR queue. You could also look at developer's "review to pull request" ratio. Just to get an idea, I wrote a script to calculate some of this data (github.com/marler8997/githubstats). 
Here's the data for dmd, sorted by review to pr ratio:

user                        review/pr    reviews  open_prs  merged_prs  closed_prs
ZombineDev                  25           250      0         7           3
stefan-koch-sociomantic     19.5         39       0         2           0
andralex                    17.9583      431      0         17          7
jacob-carlborg              16.18        809      2         41          7
kubasz                      12           12       1         0           0
dkgroot                     9            18       0         2           0
trikko                      8            8        1         0           0
timotheecour                6.5          65       0         3           7
iain-buclaw-sociomantic     6            6        0         1           0
majiang                     6            6        0         1           0
JinShil                     5.858895706  955      6         129         28
TurkeyMan                   5.529411765  94       1         15          1
thewilsonator               5.1          46       2         6           1
Geod24                      4.5          117      6         15          5
marler8997                  4.155172414  241      0         30          28
dmdw64                      4            4        0         1           0
leitimmel                   4            4        0         1           0
schveiguy                   3.8          23       0         5           1
atilaneves                  3.727272727  41       1         7           3
DmitryOlshansky             3.16667      19       1         2           3
tgehr                       3.1          56       1         16          1
wilzbach                    2.946428571  990      25        250         61
FeepingCreature             2.9          29       3         6           1
mathias-lang-sociomantic    2.846153846  111      0         30          9
belm0                       2.7          8        0         2           1
n8sh                        2.5          5        1         1           0
dgileadi                    2.5          10       1         2           1
UplinkCoder                 2.186813187  199      3         52          36
rikkimax                    2            6        1         0           2
EyalIO                      2            2        0         1           0
MoritzMaxeiner              2            2        1         0           0
rtbo                        2            2        0         1           0
belka-ew                    2            2        0         1           0
RazvanN7                    1.89333      284      8         116         26
ntrel                       1.846153846  48       2         21          3
nemanja-boric-sociomantic   1.8          9        0         3           2
MetaLang                    1.8          9        0         3           2
joakim-noah                 1.571428571  11       1         4           2
Darredevil                  1.5          3        0         1           1
skl131313                   1.5          9        1         3           2
JackStouffer                1.5          3        0         2           0
arBmind                     1.5          3        0         1           1
CyberShadow                 1.474576271  87       0         53          6
BBasile                     1.36         34       0         11          14
Burgos                      1.3          4        0         3           0
ibuclaw                     1.32967033   484      15        293         56
klickverbot                 1.327586207  77       0
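A note on reading the data: the review/pr ratio appears to be reviews divided by the user's total PRs (open + merged + closed). That interpretation is my inference from the numbers, not anything documented by the script, but it checks out on a few rows:

```python
# Sanity check of the review/pr column, assuming
# ratio = reviews / (open_prs + merged_prs + closed_prs).
rows = {
    # user: (reviews, open_prs, merged_prs, closed_prs, reported_ratio)
    "ZombineDev":  (250, 0, 7, 3, 25.0),
    "JinShil":     (955, 6, 129, 28, 5.858895706),
    "CyberShadow": (87, 0, 53, 6, 1.474576271),
}

for user, (reviews, open_, merged, closed, reported) in rows.items():
    ratio = reviews / (open_ + merged + closed)
    assert abs(ratio - reported) < 1e-6, (user, ratio, reported)
```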
Re: [OT] "I like writing in D" - Hans Zimmer
On Wednesday, 22 August 2018 at 22:51:58 UTC, Piotrek wrote: You may already know that from youtube. It seems D starts getting traction even among musicians: https://www.youtube.com/watch?v=yCX1Ze3OcKo&feature=youtu.be&t=64 That really put a smile on my face :D And it would be a nice example of a D advertising campaign ;) Cheers, Piotrek LOL I didn't get it at first! I was looking at the computers expecting some code!
Re: [OT] Leverage Points
On Saturday, 18 August 2018 at 13:33:43 UTC, Andrei Alexandrescu wrote: where are the best leverage points in making the D language more successful. I'm still internalizing the article and thinking about how it applies to the "D system", but I've always thought facilitating the incorporation of GDC into GCC to be the single most accelerating thing we could do to gain more adoption. It somewhat fits into *7. The gain around driving positive feedback loops*. But there's risk associated with that. Walter has often said that "build it and they will come" is a Hollywood myth, but I disagree. Part of the reason why D hasn't achieved mass adoption, isn't because it's not marketed well, but because it has a number of flaws. Most of us see the *potential* of D, and are able to look past the flaws, with the faith (hopefully not misplaced) that they will one day be addressed. Others only see the flaws and the appeal of other programming languages with more resources, better management, more talent, and especially more velocity toward their goals. I often worry that if we encourage adoption, before we have something worthy of adoption, we'll only leave users with a bad taste in their mouth [0]. I've already seen a number of people, some major contributors, leave D for greener pastures. Most of the contributors that built the D runtime and did the majority of bug fixing in the compiler are gone now. At this point in time, I can only recommend D professionally to teams that are risk takers, have the aptitude to solve their own problems, and have the resources and willingness to be D contributors. We should probably be looking more for leverage points to help us better capitalize on the resources and talent we have and bring in more. Unfortunately I'm seeing an over-correction in *8. The strength of negative feedback loops, relative to the impacts they are trying to correct against*. 
As we try to get contributors to focus on the things that matter (at least to the powers that be), we frustrate them until they close their pull requests or just give up [1] [2]. It took me a few years to find my "in", and I'm still not really "in", but I learned that the *little things* that some consider a distraction are how people get started contributing to D.

I've often said that we actually don't need more contributors, but more reviewers. There's a catch to that, though: they're not going to become reviewers if they can't first become contributors. So perhaps I need to correct my perspective.

So, I'll close with this: We should probably be more welcoming to those willing to contribute, let them work on the little stuff that's important to them, throw them a bone or two, review their pull requests in a timely manner, etc. I think those contributors will eventually become our reviewers, and they will eventually lessen the burden so veterans can focus on the things that they think are higher priorities. This is a positive feedback loop: help people become positive contributors, and those contributors will eventually help the next generation.

I think there are a few little things the leadership, especially, can do to prime that pump, starting with being more active, helpful, and gracious with things that are currently sitting in the PR queue. Though it's a two-way street; some contributors could be more cooperative as well. Walter and a few others have been quite gracious to me [3] [4]. I've tried to pay that forward and help other contributors find their "in", but I'm still not able to review and make decisions about many things, so I'm only of limited help. I don't think others have been treated as well.
Mike

[0] https://issues.dlang.org/show_bug.cgi?id=14100 (the link in that issue no longer exists, but let's just say the user wasn't happy with D)
[1] https://github.com/dlang/dmd/pulls?q=is%3Apr+author%3Amarler8997+is%3Aclosed
[2] https://github.com/dlang/dmd/pull/8378
[3] https://github.com/dlang/dmd/pull/7395#issuecomment-349200847
[4] https://github.com/dlang/dmd/pull/7055#issuecomment-320006283
Re: [OT] Leverage Points
On Wednesday, 22 August 2018 at 13:17:00 UTC, Kagamin wrote: On Monday, 20 August 2018 at 03:57:10 UTC, John Carter wrote: * Choice. ie. Programmers _want_ to use it, not are constrained to use it. * For programming activity, not new projects. ie. The era of vast tracts of green field programming is long gone. We're mostly in the era of tinker toys and tidying.

That's a matter of choice; some are tidying, but there's a lot of green field programming even in C, and new languages are all green fields.

I suspect if you actually lean over the shoulder of the vast majority of programmers earning their daily bread, they aren't writing a brand new program... they're enhancing and fixing an existing one.

There is a big difference between "Doing a lot of" and "Being Good at". That's why you can't be tidying all the time; you can improve, but you can't become good this way.

Oh, I would argue it's the best way. Or this wouldn't be funny: http://bonkersworld.net/building-software

By tidying I mean refactoring legacy code that is way too large and complex to rewrite all at once. Nobody is going to do deep refactoring; example: C/C++ (well, you mention them too) and pretty much everything. And it's that large because it accumulated garbage, and a rewrite will cut it to a manageable size; example: s2n (fun fact: it's written in C, but uses slices for safety just like D).

Whenever I see a rewrite which claims it has made things so wondrously simpler / better, closer inspection reveals it does wondrously less, and supports wondrously less legacy cruft. Thus I do not believe these "experiments" have isolated the effect of deleting unneeded or little-used features and support for legacy platforms from the effect of rewriting vs refactoring.

Nobody is going to do deep refactoring

That, I believe, could be the paradigm-shifting advantage of D. Every time I have written a refactoring or code analysis tool for C or C++, the preprocessor has amplified the complexity of my task by orders of magnitude.
And for every transformation I might propose, it is incredibly hard to guarantee that it is safe and behaviour-preserving, a sentiment echoed by every optimization pass writer for C/C++.
Re: [OT] Leverage Points
On Wednesday, 22 August 2018 at 13:28:37 UTC, Kagamin wrote: On Monday, 20 August 2018 at 08:31:15 UTC, Dave Jones wrote: That's what I'm trying to say. I'm sure posts like that are popular within the D community but they are not going to make much headway bringing new users in. We had "D parser smokes the competition" posts.

Unfortunately, with all the D parsers that smoked the competition, we are mostly stuck with std.xml (dxml might change this) and std.json, because those other projects never made it into the stdlib for one reason or another (not being 100% range based, not supporting XYZ memory allocator).
Re: [OT] Leverage Points
On Monday, 20 August 2018 at 03:57:10 UTC, John Carter wrote: * Choice. ie. Programmers _want_ to use it, not are constrained to use it. * For programming activity, not new projects. ie. The era of vast tracts of green field programming is long gone. We're mostly in the era of tinker toys and tidying.

That's a matter of choice; some are tidying, but there's a lot of green field programming even in C, and new languages are all green fields.

There is a big difference between "Doing a lot of" and "Being Good at". That's why you can't be tidying all the time; you can improve, but you can't become good this way.

By tidying I mean refactoring legacy code that is way too large and complex to rewrite all at once.

Nobody is going to do deep refactoring; example: C/C++ (well, you mention them too) and pretty much everything. And it's that large because it accumulated garbage, and a rewrite will cut it to a manageable size; example: s2n (fun fact: it's written in C, but uses slices for safety just like D).
Re: [OT] Leverage Points
On Monday, 20 August 2018 at 08:31:15 UTC, Dave Jones wrote: in production. I'm not trying to be negative but if Nim or Rust released a blog post saying "We made find faster" is it going to get you to try them out? Is it enough of an enticement to get over your preconceptions about those languages and to think maybe they are worth a try?

The majority of the page views on the blog overall come from reddit, twitter and (for the posts that are shared there) HN. That particular post generated a lot of feedback in the reddit comments, much of it positive. The same for Walter's BetterC posts. That sort of content is what people like to discuss, and when that discussion is positive it's a net win for D. Whether that specific post brought anyone in is irrelevant. It certainly influenced opinions about D to some degree.

Programming languages aren't impulse buys. When you read enough thoughtful articles about a language and see enough positive discussion about it, it will be more likely to come to mind later on down the road when you're looking for something new. I'm working on another project right now that I intend to use together with the blog to continue to build that sort of capital.

As for the content, I separate that which I target toward the D community and that which I target outside the community. The former goes to /r/d_language and the latter to /r/programming. Invariably, the latter gets many more page views. How that translates into a conversion ratio in actually bringing people to give D a try I couldn't say. I only measure feedback in terms of page views and discussion.

I'm continually learning new things about the content, from little things about how seemingly innocuous lines can set off a massive negative thread on reddit, to broader concepts about what kinds of content do well with the right taglines. That influences how I write my own posts, what sort of content I'm looking for at any given moment, and how I edit posts.
I'm also always on the lookout for new ideas. The type and quality of content is not a concern from my perspective. I've got a good handle on that. The bigger issue is quantity. I need more people submitting content. Period.
Re: [OT] Leverage Points
On Monday, 20 August 2018 at 12:26:25 UTC, Laeeth Isharc wrote: On Monday, 20 August 2018 at 11:55:33 UTC, Joakim wrote: Finally, regarding leverage, I keep pointing out that mobile has seen a resurgence of AoT-compiled native languages, but nobody seems to be trying D out in that fertile terrain, other than me. I did try, but it's not exactly easy to make a complete app in D, even on Android. It would be great if there were some way to automatically wrap the APIs. Right now, the Android port is more suited for writing some performant libraries that run as part of an existing Android app. The kind of polish you're looking for will only come with early adopters pitching in to smooth out those rough edges. If we had autowrap for JNI and could dump the types and method prototypes as part of the pre-build process, what would the next stage be to be able to just call Android APIs from D and have them work? JNI isn't that bad (I know it's deprecated) and I used it already from D in a semi-wrapped way. So I wonder how much more work it would be to have autowrap for JNI. I didn't use reflection on the Java side because I wasn't wrapping that much code. Are there XML descriptions of Android APIs you could use to generate wrappers? For example, could we make something like this for D? https://github.com/opencollab/giws https://en.wikipedia.org/wiki/GIWS_(software) The above requires the user to specify the types in XML, but I guess you can dump them via reflection. I have done some work on wrapping given the types in the internal code below (which won't build by itself). It was written in a hurry and I didn't know Java, D, or JNI very well at the time: https://github.com/kaleidicassociates/import-java-d
Re: [OT] Leverage Points
On Monday, 20 August 2018 at 08:31:15 UTC, Dave Jones wrote: On Monday, 20 August 2018 at 03:04:30 UTC, Mike Parker wrote: On Sunday, 19 August 2018 at 19:52:44 UTC, Dave Jones wrote: What you need is a blog post saying the GC has been made 4x faster. Stuff like that, hey we made D much better now, not stuff about some corporate user who does targeted advertising. If you look through the blog, you'll find posts like that. One of the most-viewed is titled, 'Find Was Too Damn Slow, So We Fixed It' [1]. There are a variety of posts that we've published. I started the series on Funkwerk last year because we needed more posts about D being used in production. I'm not trying to be negative but if Nim or Rust released a blog post saying "We made find faster" is it going to get you to try them out?

That is the wrong question to be asking. It isn't how branding works (just because D doesn't try to manufacture an image doesn't mean that that itself doesn't create a brand). A post like that is one element in a campaign that gets across what D is like as a language and a community. I would guess many people that have no intention of trying D might read it because it's an interesting topic covered in an interesting way. By far not every post needs to be a call to action, and in fact people that try to do that become extremely annoying and get filtered out. That's an old-fashioned approach to marketing that I don't think works today.

Is it enough of an enticement to get over your preconceptions about those languages and to think maybe they are worth a try?

I think the relevant question is at the margin of activation energy - the person poised on the edge, not the representative Reddit or Hacker News poster. D is a very practical general-purpose language, and that means most users over time will be in enterprises, given that I guess most code is written in enterprises (or maybe academe - and lots of academic code isn't really open-sourced even if it perhaps should be).
Large enterprises aren't going to be early adopters of things they didn't create themselves. And people in SMEs have a different calculus from the representative influential person that talks publicly about technology. Have you noticed too how people that actually use D in their business don't spend much time on forums?

That's what I'm trying to say. I'm sure posts like that are popular within the D community but they are not going to make much headway bringing new users in.

I disagree. I started using D before the blog, but it was that kind of thing that drew me in, and one way and another, as a consequence, more new users than me have been brought in.

But the extension of that is that you need to have something enticing to write about and there seems to be very little happening at the moment. DPP is probably the most interesting thing happening atm.

I think there is lots of interesting stuff happening: dpp (no more manual writing of bindings); Android AArch64; WebAssembly; continuing improvements in C++ interop; Symmetry Autumn of Code; D running in Jupyter (it excites me, even if nobody else); opMove; the take-off of Weka (from what I have heard); Binderoo generating C# wrappers for D programmatically; a really quite useful betterC (you can use a lot of the language and library now); the betterC version of Phobos will keep growing thanks to Seb's work on testing; no-GC exceptions; DIP1000 and scope; LDC fuzzing and profile-guided optimisation; GDC moving towards inclusion in GCC finally; adoption of D in bioinformatics; other games companies following in Remedy's footsteps. I haven't even had time to follow the forums or GitHub much, but that's all just off the top of my head.
Re: [OT] Leverage Points
On Monday, 20 August 2018 at 11:55:33 UTC, Joakim wrote: "So how do you change paradigms? Thomas Kuhn, who wrote the seminal book about the great paradigm shifts of science, has a lot to say about that. In a nutshell, you keep pointing at the anomalies and failures in the old paradigm, you keep speaking and acting, loudly and with assurance, from the new one, you insert people with the new paradigm in places of public visibility and power. You don't waste time with reactionaries; rather you work with active change agents and with the vast middle ground of people who are open-minded." (Quoting from the article, I think.)

Kuhn and Lakatos. Paradigm shifts don't take place when the dominant paradigm is defeated by logical or empirical means. Paradigm shifts take place when for some reason people say "how about we stop talking about that, and start talking about this instead". I think he described certain political changes in the Western world beginning in the mid-to-late 60s rather well. I don't think it describes how changes in the sphere of voluntary (non-political, ie market and genuine civil society) activity unfold.

Supposing it were a good idea (which it isn't), how would one be able to insert people in places of public visibility and power who put forward a point of view that is very different from the prevailing one? Only via a program of entryism, and I don't think that in the end much good will come of that. So I think the original author has cause and effect the wrong way around (not too surprisingly, because he is talking about things that relate to politics and activism). [NB one shouldn't mention the Club of Rome without mentioning what a failure their work was, and it was predictably, and indeed predicted, to be a failure for the exact same reasons it failed.]

It isn't that you insert people representing the new paradigm in positions of influence and power.
It is that people from the emerging new paradigm - which is nothing, a bunch of no-hopers, misfits and losers viewed from a conventional perspective - by virtue of the fact that it has something useful to say and has drawn highly talented people who recognise that, start ever so slowly to begin things and eventually to accomplish things - still on the fringes - and over time this snowballs. After a while it turns out that they are no longer on the fringes but right at the centre of things, in part because the centre has moved.

The best illustration of this phenomenon was, I think, in a work of fiction - Neal Stephenson's Cryptonomicon. I never expected someone to write a novel based on a mailing list - the cypherpunks. It was about as surprising to me then as it would be to see Dlang - The Movie - today. And of course that itself was an early indication that the ideas and ways of thinking represented by what was originally quite a small community were on the ascent.

This pretty much reflects what Laeeth always says about finding principals who can make their own decisions about using D. "Places of public visibility and power" for D are commercial or open-source projects that attract attention for being well done or at least popular.

Well - I understand what you mean, but I don't recognise this as being my point. Principals who can make their own decisions probably aren't today highly visible and visibly powerful. The latter comes much later in the development of a project, movement or scene, and if you're visible it's a tax that takes time away from doing real work. By the time you're on the front cover of Time or The Economist, it's as often as not the beginning of the end - at least for anything vital.

We're doing both: most of the material on the D blog and my own D interviews are not with corporate representatives. We could stand for more of the latter though, especially the big successes, because people are more influenced by them.
I'm not saying it's a bad thing to go for big stories. But it's a mistake to place attention where people today naturally tend to. It doesn't matter what influences most people - it matters what influences the person who is poised on the edge of adopting D more widely, adopting D as a beginning, or who would be if they knew of the language. The latter is quite a different sort, I think.

Liran at Weka picked up D because he saw Kent Beck post on Twitter about Facebook's Warp, written in D (or maybe it was a linter), and it seemed like an answer to a particular problem he had (if I am remembering correctly). It wasn't because of a grand thing - it was because of a little thing that seemed like it might be a creative solution to a real problem.

Signal:noise is much higher away from the limelight too. It's far better to have a high share of attention in some specific domains or interest groups than a low share of attention of some enormous market.

Many devs use large corporate deployments as a litmus test of
Re: [OT] Leverage Points
On Monday, 20 August 2018 at 04:46:35 UTC, Laeeth Isharc wrote: On Sunday, 19 August 2018 at 18:49:53 UTC, Joakim wrote: On Saturday, 18 August 2018 at 13:33:43 UTC, Andrei Alexandrescu wrote: A friend recommended this article: http://donellameadows.org/archives/leverage-points-places-to-intervene-in-a-system/ I found it awesome and would recommend it to anyone in this community. Worth a close read - no skimming, no tl;dr etc. The question applicable to us: where are the best leverage points in making the D language more successful?

I read the whole thing; it pretty much jibes with what I've already realized after decades of observation, but good to see it all laid out and prioritized, as Jonathan said. I thought this paragraph was particularly relevant to D:

"So how do you change paradigms? Thomas Kuhn, who wrote the seminal book about the great paradigm shifts of science, has a lot to say about that. In a nutshell, you keep pointing at the anomalies and failures in the old paradigm, you keep speaking and acting, loudly and with assurance, from the new one, you insert people with the new paradigm in places of public visibility and power. You don't waste time with reactionaries; rather you work with active change agents and with the vast middle ground of people who are open-minded."

This pretty much reflects what Laeeth always says about finding principals who can make their own decisions about using D. "Places of public visibility and power" for D are commercial or open-source projects that attract attention for being well done or at least popular.

Read Vilfredo Pareto on the circulation of the elites, Toynbee on the role of creative minorities, and Ibn Khaldun on civilisational cycles. There's not much point focusing on the influential and powerful people and projects of today - they have too much else going on; powerful people tend to become a bit disconnected from reality and complacent, and they and their hangers-on have too much vested in the status quo to change.
When you have nothing, you have not much to lose, but after considerable success most people start to move to wanting to keep what they have. This doesn't bring open-mindedness to new ideas or approaches.

Sure, and though I've not read any of those books, where did I suggest going after the "influential and powerful"? I simply echoed your statement about going after principals who are free to make their own path, who, as you've stated before, are usually at startups or small projects where everything doesn't have to get past a committee.

But we live in a dynamic economy, and the winners of tomorrow might look unremarkable today. Linus said it was just a hobby project, nothing big like Minix. Would you have thought a few German PhDs had a chance with no capital, starting amidst a bad financial crisis and using a language that was then of questionable stability and commercial viability?

Yes, the next great kernel developer or Sociomantic is looking for the language to write their project with now. Hopefully, D will be the right choice for them. I'm not sure we're doing a good job of publicizing those we have, though; here's a comment from the proggit thread on BBasile's recent post about writing a language in D: "I keep seeing articles telling me why D is so great, but nothing of note ever gets written in D." https://www.reddit.com/r/programming/comments/97q9sq/comment/e4b36st

I don't think it matters a lot what people like that think. In aggregate, yes, but as Andrei says, people are looking for an excuse not to learn a new language. Somebody actually ready to try D will sooner or later come across the organisations-using-D page and see that the situation is a bit different.

Looking at his proggit comment history now, he seems exactly like the kind of intelligent, opinionated sort D should be attracting.

I don't think he was looking to dismiss D. He could have looked harder, we could have marketed harder: there's blame to go around. I'll put out an email to Don.
Maybe Laeeth would be willing to do an interview.

Sounds like a good idea.

Alright, I'll email you soon. On the OSS front, I sent several interview questions to Iain earlier this year about GDC, after he agreed to an interview; no responses yet. Tough to blame others for being ignorant of D's successes when we don't do enough to market it.

I think we are still in very early stages. Lots of companies on the orgs-using-D page I don't know much about. The Arabia weather channel has a YouTube video on their use of D, but I don't speak Arabic. Hunt, the Chinese toy company, is interesting. The Chinese tech scene is huge and very creative, possibly more so than the US in some ways. You might ask EMSI and also AdRoll. By early days I mean it's better to look for interesting stories where people are doing real work on a small scale with D than trying to find super-impressive success stories only. We're doing bo
Re: [OT] Leverage Points
On Monday, 20 August 2018 at 03:04:30 UTC, Mike Parker wrote: On Sunday, 19 August 2018 at 19:52:44 UTC, Dave Jones wrote: What you need is a blog post saying the GC has been made 4x faster. Stuff like that, hey we made D much better now, not stuff about some corporate user who does targeted advertising. If you look through the blog, you'll find posts like that. One of the most-viewed is titled, 'Find Was Too Damn Slow, So We Fixed It' [1]. There are a variety of posts that we've published. I started the series on Funkwerk last year because we needed more posts about D being used in production.

I'm not trying to be negative, but if Nim or Rust released a blog post saying "We made find faster" is it going to get you to try them out? Is it enough of an enticement to get over your preconceptions about those languages and to think maybe they are worth a try? That's what I'm trying to say. I'm sure posts like that are popular within the D community but they are not going to make much headway bringing new users in.

But the extension of that is that you need to have something enticing to write about, and there seems to be very little happening at the moment. DPP is probably the most interesting thing happening atm.
Re: [OT] Leverage Points
On Sunday, 19 August 2018 at 18:49:53 UTC, Joakim wrote: On Saturday, 18 August 2018 at 13:33:43 UTC, Andrei Alexandrescu wrote: A friend recommended this article: http://donellameadows.org/archives/leverage-points-places-to-intervene-in-a-system/ I found it awesome and would recommend it to anyone in this community. Worth a close read - no skimming, no tl;dr etc. The question applicable to us: where are the best leverage points in making the D language more successful?

I read the whole thing; it pretty much jibes with what I've already realized after decades of observation, but good to see it all laid out and prioritized, as Jonathan said. I thought this paragraph was particularly relevant to D:

"So how do you change paradigms? Thomas Kuhn, who wrote the seminal book about the great paradigm shifts of science, has a lot to say about that. In a nutshell, you keep pointing at the anomalies and failures in the old paradigm, you keep speaking and acting, loudly and with assurance, from the new one, you insert people with the new paradigm in places of public visibility and power. You don't waste time with reactionaries; rather you work with active change agents and with the vast middle ground of people who are open-minded."

This pretty much reflects what Laeeth always says about finding principals who can make their own decisions about using D. "Places of public visibility and power" for D are commercial or open-source projects that attract attention for being well done or at least popular.

Read Vilfredo Pareto on the circulation of the elites, Toynbee on the role of creative minorities, and Ibn Khaldun on civilisational cycles. There's not much point focusing on the influential and powerful people and projects of today - they have too much else going on; powerful people tend to become a bit disconnected from reality and complacent, and they and their hangers-on have too much vested in the status quo to change.
When you have nothing, you have not much to lose, but after considerable success most people start to move to wanting to keep what they have. This doesn't bring open-mindedness to new ideas or approaches.

But we live in a dynamic economy, and the winners of tomorrow might look unremarkable today. Linus said it was just a hobby project, nothing big like Minix. Would you have thought a few German PhDs had a chance with no capital, starting amidst a bad financial crisis and using a language that was then of questionable stability and commercial viability? New things often start small, growing at the fringe where there's no competition, because at that point it's not obvious to others that there is even an opportunity there.

It's much better to appeal to new projects or commercial projects where people are in pain and therefore open-minded, because suffering will do that to you. D is a general-purpose, quite ambitious language, so I wouldn't necessarily expect that there is a pattern by industry or sector. Probably it will be organic and grass-roots. You have one unusual person in an unusual situation who is open to trying something different. And in the beginning it might not look like much, particularly to outsiders.

Note that when you start a small business it usually takes a long time before you hire significant numbers of people. Yet in the US, SMEs create more than 100% of the net new jobs. So there is a lag between people starting to play with D and them doing a lot in it or hiring many people to work with it. Five years even isn't a long time. Perceptions also take a long time to change, but they do tend to catch up with reality eventually.

When I started looking at D in 2014 it really wasn't yet ready for primetime. The compiler would crash too often for comfort, and I wasn't even trying to do anything clever. The std.algorithm docs were perfectly clear - if you had the training or sort of mind that understood formalisms.
But I tried to interest one ex-trader in D - he could work with Python, but back then he was absolutely terrified of the Phobos documentation. It's much better today, but the reaction from past improvements is still unfolding.

Little things like dpp/dtoh combined with betterC can make a huge difference, I think. Being able to incrementally replace a C codebase without having to do lots of work porting headers (and keeping them in sync) brings down the cost of trying D a lot. If dpp works for C++ so you can just #include, then even better, but it will take some time. I am trying to persuade Atila to add the possibility of just handling some types as opaque. You can always write shims for the parts of the API you need, but at least this way you can #include C++ headers and get something.

I'm not sure we're doing a good job of publicizing those we have, though; here's a comment from the proggit thread on BBasile's recent post about writing a language in D: "I keep seeing articles telling me why D is so grea
Re: [OT] Leverage Points
On Saturday, 18 August 2018 at 22:20:57 UTC, Walter Bright wrote: On 8/18/2018 9:59 AM, Jonathan Marler wrote: In your mind, what defines the D language's level of success? It no longer needs me or Andrei.

I think that is a pretty weak measure. Stroustrup and Matsumoto are still actively tending their babies decades later. A better measure is that "it is the language of choice for programming activity." Note the fine print there.

* Choice. ie. Programmers _want_ to use it, not are constrained to use it.
* For programming activity, not new projects. ie. The era of vast tracts of green field programming is long gone. We're mostly in the era of tinker toys and tidying.

By tinker toys I mean gluing and configuring large frameworks and packages together. While the industry does a huge amount of tinker toy development, and has massive package and dependency management tools, we're still not Good at it. There is a big difference between "Doing a lot of" and "Being Good at". By tidying I mean refactoring legacy code that is way too large and complex to rewrite all at once. ie. A "successful" language of the 2020's is one that can "play nice" with the vast pile of legacy.

In increasing order of effectiveness:

12. Constants, parameters, numbers (such as subsidies, taxes, standards). The cost of starting to use D.

11. The sizes of buffers and other stabilizing stocks, relative to their flows. The size of the pool of people who know of, and know, D.

10. The structure of material stocks and flows (such as transport networks, population age structures). Current projects, tools and packages using D.

9. The lengths of delays, relative to the rate of system change. Time to fix a bug, time to answer a newbie's question, time to handle and roll out a DIP.

8. The strength of negative feedback loops, relative to the impacts they are trying to correct against. The effect of bad experiences with D. Bugs in compiler and libraries.

7. The gain around driving positive feedback loops. The positive effects on productivity and programmer happiness in using D.

6. The structure of information flows (who does and does not have access to information). How well does information flow from the experts to the newbies? How hard is it to create and get accepted new info?

5. The rules of the system (such as incentives, punishments, constraints). Incentives tend to be "My change, my suggestion, my package was accepted", or accepted after a bunch of constructive feedback suggestions. Punishments tend to be rejection, especially dismissive or insulting rejection. Constraints tend to be the number of, and pain due to, bureaucratic hoops one has to jump through.

4. The power to add, change, evolve, or self-organize system structure. There is a strong push to lock the language standard and standard library down solid. But this merely results in a language and language ecosystem (cough, Python 3) that cannot evolve. A better goal would be to provide rewrite tools that would allow the language ecosystem to evolve with the language. ie. You need a compiler that reads AND rewrites code!

3. The goals of the system. If the D language evolution is directed at ever smaller and less relevant corners of programming activity, yes, it will die. If it is directed at enabling and enriching ever larger portions of programming activity, it will thrive.

2. The mindset or paradigm out of which the system — its goals, structure, rules, delays, parameters — arises. ie. Is the mindset to create a language, or to create a language that is so compelling it will dominate the language landscape?

1. The power to transcend paradigms. D is fairly well positioned to take on many of the current language paradigms. This question is more about: if a new one comes along, does D absorb it? Or wilt?

Again I come back to that rewrite tool. How fast can you evolve the language, the standard library and the whole language ecosystem? If the answer is, like Python 3 or Perl, "sorry, it takes years", or, like C++, "we're going to dance carefully on the head of a pin for decades to avoid obsoleting anything", then your rate of evolution will be the same as or slower than the competing languages.
Re: [OT] Leverage Points
On Sunday, 19 August 2018 at 19:52:44 UTC, Dave Jones wrote: What you need is a blog post saying the GC has been made 4x faster. Stuff like that, hey we made D much better now, not stuff about some corporate user who does targeted advertising. If you look through the blog, you'll find posts like that. One of the most-viewed is titled 'Find Was Too Damn Slow, So We Fixed It' [1]. There are a variety of posts that we've published. I started the series on Funkwerk last year because we needed more posts about D being used in production. But we're constantly in need of content of all types. So anyone who was involved in obtaining a 4x speedup in garbage collection and knows the details well enough to write about it is invited to do so.
Re: [OT] Leverage Points
On Sunday, 19 August 2018 at 21:59:15 UTC, Guillaume Piolat wrote: On Sunday, 19 August 2018 at 19:52:44 UTC, Dave Jones wrote: I'm of the complete opposite opinion. Everyone likes to make money, especially more than the industry average, and we should push the narrative that using D lets you print money in unsuspecting markets (and that's really not far from the truth). That's a hard argument to make. I mean it's a good selling point, but how do you convince people that D actually does what you say it does? On Reddit recently there was this comment: https://www.reddit.com/r/programming/comments/97q9sq/why_d_is_a_good_choice_for_writing_a_language/e4ce7kx Who wants to be the competitor getting crushed by the competition because of not using a nimbler, faster language to develop in?* Yet that sort of thing happens a hell of a lot in practice. Constant factors matter a lot when you work on high-performance software; if you can develop 30% faster for the same result then it's a huge competitive advantage. Yeah of course, but we're talking about blog posts, press releases, what will get people to even bother clicking on the posts to actually read them. Of course productivity is a big sell, but I think it's also important to be seen to be making progress on the language and ecosystem. And you're talking about getting non-D users to click. It's not just about what's important, it's about what will make people take notice. I think that doesn't really move the needle; every native programmer knows that native languages are approximately as fast, and that the fastest program had more engineering hours in it. It is _possible_ to have the faster program in any (native) language; now _how long_ will it take? However, if you can have something more featureful with less effort that doesn't run slower, then it's appealing. Benchmarks where development time is missing just tell half the story. I didn't mean to say that runtime performance is all that's important, although I completely understand why it looked like that. What I'm trying to say is that to generate interest, the posts or articles have to have a bit of a bang. Either show real progress, or real advantage.
Re: [OT] Leverage Points
On Sunday, 19 August 2018 at 19:52:44 UTC, Dave Jones wrote: Stuff like that, hey we made D much better now, not stuff about some corporate user who does targeted advertising. I'm of the complete opposite opinion. Everyone likes to make money, especially more than the industry average, and we should push the narrative that using D lets you print money in unsuspecting markets (and that's really not far from the truth). On Reddit recently there was this comment: https://www.reddit.com/r/programming/comments/97q9sq/why_d_is_a_good_choice_for_writing_a_language/e4ce7kx Who wants to be the competitor getting crushed by the competition because of not using a nimbler, faster language to develop in?* Yet that sort of thing happens a hell of a lot in practice. Constant factors matter a lot when you work on high-performance software; if you can develop 30% faster for the same result then it's a huge competitive advantage. I'm not saying stuff like that isn't valuable, just that it's not gonna crank the faucet very much compared with stuff like "The D xml parser smokes the competition". I think that doesn't really move the needle; every native programmer knows that native languages are approximately as fast, and that the fastest program had more engineering hours in it. It is _possible_ to have the faster program in any (native) language; now _how long_ will it take? However, if you can have something more featureful with less effort that doesn't run slower, then it's appealing. Benchmarks where development time is missing just tell half the story.
Re: [OT] Leverage Points
On Sunday, 19 August 2018 at 19:11:03 UTC, Mike Parker wrote: On Sunday, 19 August 2018 at 18:49:53 UTC, Joakim wrote: they got their team trained up on D. We could stand to talk more about Sociomantic, D's biggest corporate success so far, I'll put out an email to Don. I've got a series on Sociomantic in the works for the blog. What you need is a blog post saying the GC has been made 4x faster. Stuff like that, hey we made D much better now, not stuff about some corporate user who does targeted advertising. I'm not saying stuff like that isn't valuable, just that it's not gonna crank the faucet very much compared with stuff like "The D xml parser smokes the competition". It would also help dispel the impression that D is kind of stagnant.
Re: [OT] Leverage Points
On Sunday, 19 August 2018 at 18:49:53 UTC, Joakim wrote: they got their team trained up on D. We could stand to talk more about Sociomantic, D's biggest corporate success so far, I'll put out an email to Don. I've got a series on Sociomantic in the works for the blog.
Re: [OT] Leverage Points
On Saturday, 18 August 2018 at 13:33:43 UTC, Andrei Alexandrescu wrote: A friend recommended this article: http://donellameadows.org/archives/leverage-points-places-to-intervene-in-a-system/ I found it awesome and would recommend to anyone in this community. Worth a close read - no skimming, no tl;dr etc. The question applicable to us - where are the best leverage points in making the D language more successful. I read the whole thing, pretty much jibes with what I've already realized after decades of observation, but good to see it all laid out and prioritized, as Jonathan said. I thought this paragraph was particularly relevant to D: "So how do you change paradigms? Thomas Kuhn, who wrote the seminal book about the great paradigm shifts of science, has a lot to say about that. In a nutshell, you keep pointing at the anomalies and failures in the old paradigm, you keep speaking and acting, loudly and with assurance, from the new one, you insert people with the new paradigm in places of public visibility and power. You don’t waste time with reactionaries; rather you work with active change agents and with the vast middle ground of people who are open-minded." This pretty much reflects what Laeeth always says about finding principals who can make their own decisions about using D. "Places of public visibility and power" for D are commercial or open-source projects that attract attention for being well done or at least popular. I'm not sure we're doing a good job of publicizing those we have though, here's a comment from the proggit thread on BBasile's recent post about writing a language in D: "I keep seeing articles telling me why D is so great, but nothing of note ever gets written in D." https://www.reddit.com/r/programming/comments/97q9sq/comment/e4b36st Of course, all he has to do is go to the front page of dlang.org and follow those links others gave him, but maybe he means something really big like google's search engine. 
We could probably stand to publicize D's commercial successes more. I've been trying to put together an interview blog post with Weka about their use of D, got some answers this summer, but no response in months to a follow-up question about how they got their team trained up on D. We could stand to talk more about Sociomantic, D's biggest corporate success so far, I'll put out an email to Don. Maybe Laeeth would be willing to do an interview. On the OSS front, I've sent several interview questions to Iain earlier this year about gdc, after he agreed to an interview, no responses yet. Tough to blame others for being ignorant of D's successes when we don't do enough to market it. Finally, regarding leverage, I keep pointing out that mobile has seen a resurgence of AoT-compiled native languages, but nobody seems to be trying D out in that fertile terrain, other than me.
Re: [OT] Leverage Points
On 20/08/2018 1:51 AM, Jonathan Marler wrote: On Saturday, 18 August 2018 at 22:20:57 UTC, Walter Bright wrote: On 8/18/2018 9:59 AM, Jonathan Marler wrote: In your mind, what defines the D language's level of success? It no longer needs me or Andrei. Yes, I think this state would be a good indicator of success. This requires attracting developers with strong technical ability and good leadership to manage it. I think it requires cultivating a community that rewards good work and encourages contribution. When I was heavily contributing, it was because of people like Seb and Mike who would review pull requests and tried to keep the flow of work moving. But many times it was quashed by other developers, and eventually it didn't make sense for me to contribute anymore when dozens of hours of good work can't get through. If this doesn't change, D won't be able to keep good developers. We need a dedicated project manager to facilitate communication and to keep PRs and issues moving. Nobody to my knowledge is taking on this role, and Walter definitely isn't able to do it (which he shouldn't be doing anyway). It may be easier to ask a company to donate somebody to fill this role than it would be to get developers from them. Either way, we need to hire somebody into this role. Because right now, we haven't got somebody who sits on the fence about issues, whose only goal is to keep everybody working together. This release of dmd should have had a fully reloadable frontend in it. But alas, somebody disagreed with me on fundamental enough points that the PR is now pretty much dead after sitting since DConf. Worse than that, it was only the beginning of the PRs required to make it happen. Point is, somebody should have either forced me to make a change that I disagreed with or had it pulled. But alas, all I see is my desire to rewrite the parser growing (a bad sign).
Re: [OT] Leverage Points
On Saturday, 18 August 2018 at 22:20:57 UTC, Walter Bright wrote: On 8/18/2018 9:59 AM, Jonathan Marler wrote: In your mind, what defines the D language's level of success? It no longer needs me or Andrei. Yes, I think this state would be a good indicator of success. This requires attracting developers with strong technical ability and good leadership to manage it. I think it requires cultivating a community that rewards good work and encourages contribution. When I was heavily contributing, it was because of people like Seb and Mike who would review pull requests and tried to keep the flow of work moving. But many times it was quashed by other developers, and eventually it didn't make sense for me to contribute anymore when dozens of hours of good work can't get through. If this doesn't change, D won't be able to keep good developers. I posed this question to Andrei because I really want to know the answer. The success of a language can mean very different things to each person. The most important aspect of D for me is its continuing progress towards stability/robustness. Though I would say that the language could be considered the best in the world for its balance of safety, performance and practicality, it is very far from perfect. In my mind, D becomes more successful as the language itself becomes better. And if D doesn't continue to improve, it will be supplanted by new languages, which continue to be created at an astounding rate. Others may consider D's popularity to be the most important indicator of D's success. I think everyone would agree this is important; however, I would much rather use a good language on my own than a mediocre language with everyone else. I will also say that in order to read that article and apply it to "D's success", you most certainly need to know exactly what that means in order to identify what D's leverage points are. It was an interesting article. Many of the concepts were familiar, and it was interesting to see them all laid out in a simple model and prioritized. Thanks for the link, Andrei.
Re: [OT] Leverage Points
On 8/18/2018 9:59 AM, Jonathan Marler wrote: In your mind, what defines the D language's level of success? It no longer needs me or Andrei.
Re: [OT] Leverage Points
On Saturday, 18 August 2018 at 13:33:43 UTC, Andrei Alexandrescu wrote: A friend recommended this article: http://donellameadows.org/archives/leverage-points-places-to-intervene-in-a-system/ I found it awesome and would recommend to anyone in this community. Worth a close read - no skimming, no tl;dr etc. The question applicable to us - where are the best leverage points in making the D language more successful. Andrei In your mind, what defines the D language's level of success?
Re: [OT] Re: C's Biggest Mistake on Hacker News
On Tuesday, 31 July 2018 at 22:55:08 UTC, Laeeth Isharc wrote: Dpp doesn't work with STL yet. I asked Atila how long to #include vector and he thought maybe two months of full-time work. That's not out of the question in time, but we have too much else to do right now. I'm not sure if recent mangling improvements help and how much that changes things. But DPP keeps improving as does extern (C++) and probably one way and another it will work for quite a lot. Calypso makes cpp classes work as both value and reference types. I don't know the limit of what's possible without such changes - seems like C++ mangling is improving by leaps and bounds but I don't know when it will be dependable for templates. Yes OK, thanks. It's not that relevant what Andrei or Walter might think because it's a community-led project and we will make progress if somebody decides to spend their time working on it, or a company lends a resource for the same purpose. I'm sure they are all in favour of greater cpp interoperability, but I don't think the binding constraint is will from the top, but rather people willing and able to do the work. I think the DIP system has greatly improved the situation, but for anyone thinking of embarking on a lot of work for something like e.g. the GC, you do need to feel that there will be a good chance of it being adopted - otherwise all that work could go to waste. And if one wants to see it go faster then one can logically find a way to help with the work or contribute financially. I don't think anything else will make a difference. Agreed entirely. Same thing with Calypso. It's not ready yet to be integrated in a production compiler so it's an academic question as to the leadership's view about it. Where I'm coming from is that writing and maintaining something as large and complex as Calypso requires a whole heap both of motivation and also of encouragement from the sidelines - and especially from Walter and/or Andrei. 
If someone starts to feel that the backing is not there, then it's very, very hard to maintain motivation, particularly on infrastructure-related code that, if not integrated by Walter, will always be hard for people to use and therefore not be widely adopted. To be fair to Walter though, this is a really intractable problem for him. He could adopt something like Calypso, and then find the original maintainer loses interest. That would leave Walter either needing to maintain someone else's complex code, or trying to extricate himself from code he had already integrated. Also, there is no guarantee, in this particular case, that as C++ evolves it will still be possible to use Calypso's strategy. Of course there are other very good reasons why adopting it is problematic. Still, it leaves the developer struggling, I expect, to maintain motivation. Considering the above, knowing the general direction that Walter/Andrei want to take D would be a great help in deciding what larger projects are worth undertaking. It seems to me, anyway (big caveat).
Re: [OT] Re: C's Biggest Mistake on Hacker News
On Sunday, 29 July 2018 at 09:35:06 UTC, Abdulhaq wrote: On Saturday, 28 July 2018 at 14:45:19 UTC, Paolo Invernizzi wrote: I forgot the link... here it is: https://www.quantamagazine.org/to-make-sense-of-the-present-brains-may-predict-the-future-20180710 An interesting article. I found that Dennett's Consciousness Explained, which is presumably debunked old hat by now, is full of interesting experiments and speculation about how we model things in our mind and how our perceptions feed into that. It's a long time since I read it but if I remember correctly he shows how we seem to have a kind of mental theatre which has an expectation of what will come next from the senses, leading to interesting mistakes in perception. It's a useful model of how the mind works. That website often carries good articles about new maths as well. Me and my colleague are pretty different, in the approach to that kind of stuff... Maybe I'll post on the Forum a 'Request for D Advocacy', a-la PostgreSQL, so the community can try to address some of his concerns about modern D, and lower his discomfort! :-P If you can explain to me what is the _direction_ of D in terms of interfacing with large C++ libraries it would be very much appreciated! I'd love to be using D for some of my projects but I have a perception that using e.g. VTK is still a difficult thing to do from D. Is that still true? What is the long term plan for D, is it extern(C++), a binding technology? Is there any interest in Calypso from the upper echelons? I want to know where D is trying to go, not just where it is now. I want to know if anyone has got their heart in it. My CV says my main languages are Java, Python and D. That last one is mainly wishful thinking at the moment. I wish it wasn't! Make me believe, Paulo! Well we are hiring D programmers in London and HK in case it's interesting. Dpp doesn't work with STL yet. I asked Atila how long to #include vector and he thought maybe two months of full-time work. 
That's not out of the question in time, but we have too much else to do right now. I'm not sure if recent mangling improvements help and how much that changes things. But DPP keeps improving as does extern (C++) and probably one way and another it will work for quite a lot. Calypso makes cpp classes work as both value and reference types. I don't know the limit of what's possible without such changes - seems like C++ mangling is improving by leaps and bounds but I don't know when it will be dependable for templates. It's not that relevant what Andrei or Walter might think because it's a community-led project and we will make progress if somebody decides to spend their time working on it, or a company lends a resource for the same purpose. I'm sure they are all in favour of greater cpp interoperability, but I don't think the binding constraint is will from the top, but rather people willing and able to do the work. And if one wants to see it go faster then one can logically find a way to help with the work or contribute financially. I don't think anything else will make a difference. Same thing with Calypso. It's not ready yet to be integrated in a production compiler so it's an academic question as to the leadership's view about it.
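For readers wondering what extern(C++) already buys you today, without dpp or Calypso: for plain, non-template functions it is essentially a mangling declaration, so D links straight against separately compiled C++ object files. A minimal sketch (the C++ function clampAdd is hypothetical, standing in for any ordinary C++ symbol):

```d
// Suppose a C++ translation unit, compiled separately, defines:
//   int clampAdd(int a, int b, int hi) { return std::min(a + b, hi); }
// Declaring it extern(C++) makes dmd/ldc emit the matching C++-mangled
// name, so the D and C++ object files link together directly.
extern(C++) int clampAdd(int a, int b, int hi);

void main()
{
    import std.stdio : writeln;
    // Resolved at link time against the C++ object code.
    writeln(clampAdd(3, 4, 5));
}
```

Templates and STL types are exactly where this stops being simple, which is the gap dpp and Calypso are trying to close.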
Re: [OT] Re: C's Biggest Mistake on Hacker News
On Tuesday, 31 July 2018 at 12:02:55 UTC, Kagamin wrote: On Saturday, 28 July 2018 at 19:55:56 UTC, bpr wrote: Are the Mozilla engineers behind it deluded in that they eschew GC and exceptions? I doubt it. They are trying to outcompete Chrome in bugs too. You're not Mozilla. And why do you mention exceptions, but not bounds checking? Firefox has been complete garbage on my work computer ever since the Quantum update. Works fine at home though.
Re: [OT] Re: C's Biggest Mistake on Hacker News
On Saturday, 28 July 2018 at 19:55:56 UTC, bpr wrote: Are the Mozilla engineers behind it deluded in that they eschew GC and exceptions? I doubt it. They are trying to outcompete Chrome in bugs too. You're not Mozilla. And why do you mention exceptions, but not bounds checking? Here we kind of agree. If D is going to support a GC, I want a state-of-the-art precise GC like Go has. Go's GC is far from state of the art; it trades everything for low latency and ease of configuration.
Re: [OT] Re: C's Biggest Mistake on Hacker News
On Saturday, 28 July 2018 at 21:44:10 UTC, Abdulhaq wrote: On Saturday, 28 July 2018 at 21:27:12 UTC, bpr wrote: I hear you. You're looking (roughly) for a better Java/Go/Scala, and I'm looking for a better C/C++/Rust, at least for what I work on now. I don't think D can be both right now, and the language which can satisfy both of us doesn't exist yet, though D is close. Yes, this. In the light of D's experience, is it even possible to have a language that satisfies both? I believe that the tension between low- and high-level features makes it nearly impossible, that tracing GC is one of those difficult problems that rules out satisfying both sets of users optimally, and that the best D (and C++ and Nim) can do is to be "mediocre to good, but not great" at both the low-level (C/Rust) domain and high-level domains simultaneously. There are far fewer players in the low-level space, which is why I see D more as a competitor there, and welcome the DasBetterC and noGC initiatives so that D can be a great low-level language and maybe just a good high-level one.
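As a small illustration of the noGC side of that trade-off: D can already verify at compile time that a hot path never touches the collector, so the low-level camp gets a static guarantee rather than a coding convention. A sketch (sumSquares is just an invented example):

```d
// @nogc makes the compiler reject anything in this function that could
// allocate on the GC heap; nothrow additionally rules out GC-backed
// exception allocation on this path.
@nogc nothrow int sumSquares(const(int)[] xs)
{
    int total = 0;
    foreach (x; xs)
        total += x * x;   // plain arithmetic: no hidden allocations
    return total;
}

// This variant would be rejected at compile time, because `new`
// allocates with the GC:
//   int[] grow() @nogc { return new int[4]; }
```

The point is that opting out is per-function and checked by the compiler, which is what makes a "great low-level, good high-level" split plausible in one language.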
Re: [OT] Re: C's Biggest Mistake on Hacker News
On Saturday, 28 July 2018 at 14:45:19 UTC, Paolo Invernizzi wrote: I forgot the link... here it is: https://www.quantamagazine.org/to-make-sense-of-the-present-brains-may-predict-the-future-20180710 An interesting article. I found that Dennett's Consciousness Explained, which is presumably debunked old hat by now, is full of interesting experiments and speculation about how we model things in our mind and how our perceptions feed into that. It's a long time since I read it but if I remember correctly he shows how we seem to have a kind of mental theatre which has an expectation of what will come next from the senses, leading to interesting mistakes in perception. It's a useful model of how the mind works. That website often carries good articles about new maths as well. Me and my colleague are pretty different, in the approach to that kind of stuff... Maybe I'll post on the Forum a 'Request for D Advocacy', a-la PostgreSQL, so the community can try to address some of his concerns about modern D, and lower his discomfort! :-P If you can explain to me what is the _direction_ of D in terms of interfacing with large C++ libraries it would be very much appreciated! I'd love to be using D for some of my projects but I have a perception that using e.g. VTK is still a difficult thing to do from D. Is that still true? What is the long term plan for D, is it extern(C++), a binding technology? Is there any interest in Calypso from the upper echelons? I want to know where D is trying to go, not just where it is now. I want to know if anyone has got their heart in it. My CV says my main languages are Java, Python and D. That last one is mainly wishful thinking at the moment. I wish it wasn't! Make me believe, Paulo!
Re: [OT] Re: C's Biggest Mistake on Hacker News
On 07/28/2018 05:43 AM, Laeeth Isharc wrote: > It's not that bad calling D from Java. Running D's GC in a thread that is started by an external runtime (like Java's) can be problematic. If a D function on another D-runtime thread needs to run a collection, the GC will not know about this Java thread and won't stop it. One outcome is a crash if this thread continues to allocate while the other one is collecting. The solution is having to call thread_attachThis() upon entry to the D function and thread_detachThis() upon exit. However, there are bugs with these functions, for which I posted a pull request (and abandoned it because of 32-bit OS X test failures). I think a better option would be to forget about all that and not do any GC in the D function that is called from Java. This simple function should just send a message to a D-runtime thread and return back to Java. Ali
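A rough sketch of the attach/detach pattern Ali describes, assuming a JNI-style entry point (the Java class and method names here are made up for illustration; thread_attachThis and thread_detachThis are the druntime calls in question):

```d
import core.thread : thread_attachThis, thread_detachThis;

// Hypothetical JNI entry point, matching a Java declaration like:
//   package com.example;  class Poker { native void poke(); }
// The Java-started thread is unknown to the D runtime until we attach it.
extern(C) export void Java_com_example_Poker_poke(void* jniEnv, void* jobj)
{
    thread_attachThis();             // register this thread with the D GC
    scope(exit) thread_detachThis(); // deregister on every exit path

    // ... D code that may allocate or trigger a collection safely ...
}
```

Given the bugs Ali mentions in these functions, the safer route remains his last suggestion: keep the Java-facing function allocation-free and hand any real work to a thread the D runtime already owns.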
Re: [OT] Re: C's Biggest Mistake on Hacker News
On 7/28/2018 7:09 AM, Laeeth Isharc wrote: Opportunities are abundant where people aren't looking because they don't want to. My father told me I wasn't at all afraid of hard work. I could lie down right next to it and go to sleep.
Re: [OT] Re: C's Biggest Mistake on Hacker News
On Saturday, 28 July 2018 at 21:27:12 UTC, bpr wrote: I hear you. You're looking (roughly) for a better Java/Go/Scala, and I'm looking for a better C/C++/Rust, at least for what I work on now. I don't think D can be both right now, and that the language which can satisfy both of us doesn't exist yet, though D is close. Yes, this. In the light of D's experience, is it even possible to have a language that satisfies both?
Re: [OT] Re: C's Biggest Mistake on Hacker News
On Saturday, 28 July 2018 at 20:34:37 UTC, Abdulhaq wrote: On Saturday, 28 July 2018 at 19:55:56 UTC, bpr wrote: On Saturday, 28 July 2018 at 15:36:43 UTC, Abdulhaq wrote: I think that I no longer fall into the category of developer that D is after. D is targeting pedal-to-the-metal requirements, and I don't need that. TBH I think 99% of developers don't need it. I'm 99% sure you just made that number up ;-) Sure, I plucked it out of thin air. But I do think of the software development world as an inverted pyramid in terms of performance demands and headcount. At the bottom of my inverted pyramid I have Linux and Windows. This code needs to be as performant and as bug-free as possible. C/C++/D shine at this stuff. However, I number those particular developers in the thousands. The developers at Mozilla working on the browser internals, for example, are unaccounted for in your analysis. As are the developers where I work. I think a great bulk of developers, though, sit at the application development layer. They are pumping out great swathes of Java etc. Users of Spring and dozens of other frameworks. C++ is usually the wrong choice for this type of work, but can be adopted in a mistaken bid for performance. I don't know that the great bulk of developers work in Java. And how many are churning out all that JavaScript and PHP code? Hence I think that the number of developers who really need top performance is much smaller than the number who don't. I'd be willing to accept that, but I have no idea what the actual numbers are. If I had to write CFD code, and I'd love to have a crack, then I'd really be wanting to use D for its expressiveness and performance. But because of the domain that I do work in, I feel that I am no longer in D's target demographic. If I had to write CFD code, and I wanted to scratch an itch to use a new language, I'd probably pick Julia, because that community is made up of scientific computing experts. D might be high on my list, but not likely the first choice. C++ would be in there too :-(. I remember the subject of write barriers coming up in order (I think?) to improve the GC. Around that time Walter said he would not change D in any way that would reduce performance by even 1%. Here we kind of agree. If D is going to support a GC, I want a state-of-the-art precise GC like Go has. That may rule out some D features, or incur some cost that high-performance programmers don't like, or even suggest two kinds of pointer (a la Modula-3/Nim), which Walter also dislikes. Hence I feel that D is ruling itself out of the application developer market. At this stage in its life, I don't think D should try to be all things to all programmers, but rather focus on doing a few things way better than the competition. That's totally cool with me, but it took me a long time to realise that it was the case, and that therefore it was less promising to me than it had seemed before. I hear you. You're looking (roughly) for a better Java/Go/Scala, and I'm looking for a better C/C++/Rust, at least for what I work on now. I don't think D can be both right now, and the language which can satisfy both of us doesn't exist yet, though D is close.
Re: [OT] Re: C's Biggest Mistake on Hacker News
On Saturday, 28 July 2018 at 19:55:56 UTC, bpr wrote: On Saturday, 28 July 2018 at 15:36:43 UTC, Abdulhaq wrote: I think that I no longer fall into the category of developer that D is after. D is targeting pedal-to-the-metal requirements, and I don't need that. TBH I think 99% of developers don't need it. I'm 99% sure you just made that number up ;-) Sure, I plucked it out of thin air. But I do think of the software development world as an inverted pyramid in terms of performance demands and headcount. At the bottom of my inverted pyramid I have Linux and Windows. This code needs to be as performant and as bug-free as possible. C/C++/D shine at this stuff. However, I number those particular developers in the thousands. Then we have driver writers. Performance is important here, but as a user I feel that I wish they would concentrate on the 'bug-free' part a bit more. Especially those cowboys who develop printer and bluetooth drivers. Of course, according to them it's the hardware that stinks. These guys and gals number in the tens of thousands. Yes, I made that up. Then we have a layer up, libc developers and co. Then platform developers. Unity, Lumberyard for games. Apache. I think a great bulk of developers, though, sit at the application development layer. They are pumping out great swathes of Java etc. Users of Spring and dozens of other frameworks. C++ is usually the wrong choice for this type of work, but can be adopted in a mistaken bid for performance. And how many are churning out all that JavaScript and PHP code? Hence I think that the number of developers who really need top performance is much smaller than the number who don't. For you, perhaps. I currently work mostly at a pretty low level and I'm pretty sure it's not just self-delusion that causes us to use C++ at that low level. Perhaps you've noticed the rise of Rust lately? Are the Mozilla engineers behind it deluded in that they eschew GC and exceptions? I doubt it. I mostly prefer higher-level languages with GCs, but nothing in life is free, and GC has significant costs. If I had to write CFD code, and I'd love to have a crack, then I'd really be wanting to use D for its expressiveness and performance. But because of the domain that I do work in, I feel that I am no longer in D's target demographic. I remember the subject of write barriers coming up in order (I think?) to improve the GC. Around that time Walter said he would not change D in any way that would reduce performance by even 1%. Hence I feel that D is ruling itself out of the application developer market. That's totally cool with me, but it took me a long time to realise that it was the case, and that therefore it was less promising to me than it had seemed before.
Re: [OT] Re: C's Biggest Mistake on Hacker News
On Saturday, 28 July 2018 at 15:36:43 UTC, Abdulhaq wrote: I think that I no longer fall into the category of developer that D is after. D is targeting pedal-to-the-metal requirements, and I don't need that. TBH I think 99% of developers don't need it. I'm 99% sure you just made that number up ;-) For those developers who don't need the performance usually achieved with C or C++, and can tolerate GC overheads, there are, IMO, better languages than D. I'm not saying that here to be inflammatory, just that I believe performance is a very big part of the attractiveness of D. If you're mostly working on Android, then Kotlin seems like your best option for a non-Java language. It seems OK, there's a Kotlin native in the works, the tooling is fine, there's a REPL, etc. I like it better than I like Go. We like to think we do and we love to marvel at the speed of improved code, but like prediction, it's overrated ;-) For you, perhaps. I currently work mostly at a pretty low level and I'm pretty sure it's not just self delusion that causes us to use C++ at that low level. Perhaps you've noticed the rise of Rust lately? Are the Mozilla engineers behind it deluded in that they eschew GC and exceptions? I doubt it. I mostly prefer higher level languages with GCs, but nothing in life is free, and GC has significant costs.
Re: [OT] Re: C's Biggest Mistake on Hacker News
On Saturday, 28 July 2018 at 12:43:55 UTC, Laeeth Isharc wrote: It's tough when dealing with genuine - Knightian uncertainty or even more radical versions. When one doesn't even know the structure of the problem then maximising expected utility doesn't work. One can look at capacities - Choquet and the like - but then it's harder to say something useful about what you should do. Sounds interesting, I'll look into it. But it's a loop and one never takes a final decision to master D. Also habits, routines and structures _do_ shape perception. In truth I avoid discussions that are really just arguing about definitions of words, but you made a couple of sweeping bumper-stickery comments That's entertaining. I've not been accused of that before! Bear in mind also I tend to write on my phone. I think I was just in need of a decent conversation. I didn't mean it in an accusatory manner :-). TBH I read those comments as coming from a D advocate who was in a motivational mood. They triggered a debate in me that has been wanting to come out, but I rarely contribute to forums these days. Yes I read Kahneman et al papers for the first time in '92 in the university library. I speed-read his book, and I thought it was a bad book. I work with a specialist in making decisions under uncertainty - she was the only person able to articulate to George Soros how he made money because he certainly couldn't, and she is mentioned in the preface to the revised version of Alchemy. She has the same view as me - behavioural finance is largely a dead end. One learns much more by going straight to the neuroeconomics and incorporating also the work of Dr Iain McGilchrist. Kahneman makes a mistake in his choice of dimension. There's analytic and intuitive/gestalt and in my experience people making high-stakes decisions are much less purely analytical than a believer in the popular Kahneman might suggest.
What I said about prediction being overrated isn't controversial amongst a good number of the best traders and business people in finance. You might read Nassim Taleb also. You're way ahead of me here, obviously. I didn't read any Taleb until he made an appearance at the local bookshop. It was The Black Swan and it didn't say anything that hadn't independently occurred to me already. However, for some reason it seemed to be a revelation to a lot of people. Well it's a pity the D Android ecosystem isn't yet mature. Still I remain in awe of the stubborn accomplishment of the man (with help) who got LDC to run on Android. It's not that bad calling D from Java. Some day I will see if I can help automate that - Kai started working on it already I think. D as a programming language has numerous benefits over Java, but trying to analyse why I would nevertheless choose Kotlin/Java for Android development: * The Android work I do largely does not need low-level performance. The important thinking that is done is the user interface, how communication with the servers should look for good performance, caching etc. Designing good algorithms. * Having done the above, I want a low-friction way of getting that into code. That requires a decent expressive language with a quality build system that can churn out an APK without me having to think too hard about it. Kotlin/JDK8 are good enough and Android Studio helps a lot. * Given the above, choosing D to implement some of the code would just be a cognitive and time overhead. It's no reflection on D in any way, it's just that all the tooling is for Java and the platform API/ABI is totally designed to host Java. * "The man who (with help) got LDC to run on Android". The team, with the best will in the world, is too small to answer all the questions that the world of pain known as Android can throw up. Why doesn't this build for me? Gradle is killing me... Dub doesn't seem to be working right after the upgrade to X.Y...
it works on my LG but not my Samsung... I've upgraded this but now that doesn't work anymore... * Will there be a functioning team in 5 years' time? Will they support older versions of Android? Can I develop on Windows? Or Linux? Why not?, etc., etc. Since you already know D you need to answer a different question. What's the chance the compiler will die on the relevant horizon, and how bad will it be for me if that happens. Personally I'm not worried. If D should disappear in a few years, it wouldn't be the end of the world to port things. I just don't think that's very likely. I answered the Android question already; as for engineering/scientific work (I design/develop engineering frameworks/tools for wing designers) Python has bindings to NumPy, Qt, CAD kernels, data visualisation tools. Python is fast enough to string those things together and run the overarching algorithms, GUIs, launch trade studies, SciPy optimisations. It has even more expressiv
Re: [OT] Re: C's Biggest Mistake on Hacker News
On Saturday, 28 July 2018 at 14:09:44 UTC, Laeeth Isharc wrote: On Saturday, 28 July 2018 at 13:55:31 UTC, Paolo Invernizzi wrote: Perceptions, expectations, prediction... an easy read I suggest on the latest trends [1], if someone is interested... I forgot the link... here it is: https://www.quantamagazine.org/to-make-sense-of-the-present-brains-may-predict-the-future-20180710 Yes - it's a competitive advantage, but opportunity often comes dressed in work clothes. Curiosity is the salt of evolution... for example I'm now intrigued by The Master and His Emissary, I have to read it. And another curiosity: I studied in the '90s in Milano, what was your thought on Hayek, von Mises, in those times? Classical economics was so boring... We're in an era when most people are not used to discomfort and have an inordinate distaste for it. If you're fine with that and make decisions as best you can based on objective factors (objectivity being something quite different from 'evidence-based' because of the drunk/lamppost issue) then there is treasure everywhere (to steal Andrey's talk title). Opportunities are abundant where people aren't looking because they don't want to. Me and my colleague are pretty different, in the approach to that kind of stuff... Maybe I'll post on the Forum a 'Request for D Advocacy', à la PostgreSQL, so the community can try to address some of his concerns about modern D, and lower his discomfort! :-P /Paolo
Re: [OT] Re: C's Biggest Mistake on Hacker News
On Saturday, 28 July 2018 at 13:55:31 UTC, Paolo Invernizzi wrote: On Saturday, 28 July 2018 at 12:43:55 UTC, Laeeth Isharc wrote: each project I start I give some very hard thought about which development environment I'm going to use, and D is often one of those options. The likely future of D on the different platforms is an important part of that assessment, hence 'predicting' the future of D, hard and very unreliable though that is, is an important element in some of my less trivial decisions. Since you already know D you need to answer a different question. What's the chance the compiler will die on the relevant horizon, and how bad will it be for me if that happens. Personally I'm not worried. If D should disappear in a few years, it wouldn't be the end of the world to port things. I just don't think that's very likely. Of course it depends on your context. The people who use D at work seem to be more principals who have the right to take the best decision as they see it than agents who must persuade others who are the real decision-makers. That's a recipe for quiet adoption that's dispersed across many industries initially and for the early adopters of D being highly interesting people. Since, as the Wharton professor Adam Grant observes, we are in an age where positive disruptors can achieve a lot within an organisation, that's also rather interesting. A very interesting discussion... really. Perceptions, expectations, prediction... an easy read I suggest on the latest trends [1], if someone is interested... BTW, Laeeth is right in the last two paragraphs. I was one of the 'principals' who took the decision to use D in production, 14 years ago, and he described the reasoning of that era very well. Today I'm still convinced that the adoption of D is a competitive advantage for a company; I definitely have to work to improve my bad temper (eheh) to persuade my current CTO to give it another chance. /Paolo (btw, I'm the CEO...) Thanks for the colour, Paolo.
Yes - it's a competitive advantage, but opportunity often comes dressed in work clothes. We're in an era when most people are not used to discomfort and have an inordinate distaste for it. If you're fine with that and make decisions as best you can based on objective factors (objectivity being something quite different from 'evidence-based' because of the drunk/lamppost issue) then there is treasure everywhere (to steal Andrey's talk title). Opportunities are abundant where people aren't looking because they don't want to.
Re: [OT] Re: C's Biggest Mistake on Hacker News
On Saturday, 28 July 2018 at 12:43:55 UTC, Laeeth Isharc wrote: each project I start I give some very hard thought about which development environment I'm going to use, and D is often one of those options. The likely future of D on the different platforms is an important part of that assessment, hence 'predicting' the future of D, hard and very unreliable though that is, is an important element in some of my less trivial decisions. Since you already know D you need to answer a different question. What's the chance the compiler will die on the relevant horizon, and how bad will it be for me if that happens. Personally I'm not worried. If D should disappear in a few years, it wouldn't be the end of the world to port things. I just don't think that's very likely. Of course it depends on your context. The people who use D at work seem to be more principals who have the right to take the best decision as they see it than agents who must persuade others who are the real decision-makers. That's a recipe for quiet adoption that's dispersed across many industries initially and for the early adopters of D being highly interesting people. Since, as the Wharton professor Adam Grant observes, we are in an age where positive disruptors can achieve a lot within an organisation, that's also rather interesting. A very interesting discussion... really. Perceptions, expectations, prediction... an easy read I suggest on the latest trends [1], if someone is interested... BTW, Laeeth is right in the last two paragraphs. I was one of the 'principals' who took the decision to use D in production, 14 years ago, and he described the reasoning of that era very well. Today I'm still convinced that the adoption of D is a competitive advantage for a company; I definitely have to work to improve my bad temper (eheh) to persuade my current CTO to give it another chance. /Paolo (btw, I'm the CEO...)
Re: [OT] Re: C's Biggest Mistake on Hacker News
On Saturday, 28 July 2018 at 11:09:28 UTC, Abdulhaq wrote: On Friday, 27 July 2018 at 23:42:47 UTC, Laeeth Isharc wrote: For me, I think that managing money is about choosing to expose your capital intelligently to the market, balancing the risk of loss against the prospective gain and considering this in a portfolio sense. Prediction doesn't really come into that. I think this apparent difference of opinion is down to different definitions of the word prediction. When I say prediction I mean the assessment of what are the possible futures for a scenario and how likely each one is. It can be conscious or unconscious. I think my understanding of the word is not an uncommon one. By my definition, when you balance the risk of loss (i.e. predict how likely you are to lose money) against the prospective gain (i.e. multiply the probability of each possible outcome by its reward and sum the total to get a prospective value) then you are, by my definition at least, making predictions. It's tough when dealing with genuine - Knightian uncertainty or even more radical versions. When one doesn't even know the structure of the problem then maximising expected utility doesn't work. One can look at capacities - Choquet and the like - but then it's harder to say something useful about what you should do. And I think when dealing with human action and institutions we are in a world of uncertainty more often than not. It's not the prediction that matters but what you do. It's habits, routines, perception, adaptation and actions that matter. I agree they are integral to our behaviour, but habits and routines do not involve the element of prediction. Perceptions come before and actions take place after the decision process is made (conscious or not) and so don't factor into this discussion for me. But it's a loop and one never takes a final decision to master D. Also habits, routines and structures _do_ shape perception. 
In truth I avoid discussions that are really just arguing about definitions of words, but you made a couple of sweeping bumper-stickery comments That's entertaining. I've not been accused of that before! Bear in mind also I tend to write on my phone. that trying to predict things was usually a waste of time and as an alternative we should 'be the change...'. I wholeheartedly agree we should 'be the change...' but it's not an alternative to making predictions, it goes hand in hand with it. I'm sure you've read Kahneman's Thinking, Fast and Slow. You made a generalisation that applies to the 'fast' part. I'm saying your universal rule is wrong because of the slow part. Yes I read Kahneman et al papers for the first time in '92 in the university library. I speed-read his book, and I thought it was a bad book. I work with a specialist in making decisions under uncertainty - she was the only person able to articulate to George Soros how he made money because he certainly couldn't, and she is mentioned in the preface to the revised version of Alchemy. She has the same view as me - behavioural finance is largely a dead end. One learns much more by going straight to the neuroeconomics and incorporating also the work of Dr Iain McGilchrist. Kahneman makes a mistake in his choice of dimension. There's analytic and intuitive/gestalt and in my experience people making high-stakes decisions are much less purely analytical than a believer in the popular Kahneman might suggest. What I said about prediction being overrated isn't controversial amongst a good number of the best traders and business people in finance. You might read Nassim Taleb also. I learnt D many years ago just after Andrei's book came out. I love it but it's on the shelf at the moment for me. I rarely get time for side projects these days but when I do I want them to run on Android with easy access to all the APIs and without too much ado in the build setup. 
They must continue to work and be supported with future versions of Android. At work, on Windows, JDK8/JavaFX/Eclipse/maven and python/numpy/Qt/OpenCascade/VTK hit the spot. Well it's a pity the D Android ecosystem isn't yet mature. Still I remain in awe of the stubborn accomplishment of the man (with help) who got LDC to run on Android. It's not that bad calling D from Java. Some day I will see if I can help automate that - Kai started working on it already I think. each project I start I give some very hard thought about which development environment I'm going to use, and D is often one of those options. The likely future of D on the different platforms is an important part of that assessment, hence 'predicting' the future of D, hard and very unreliable though that is, is an important element in some of my less trivial decisions. Since you already know D you need to answer a different question. What's the chance the compiler will die on the relevant horizon, and how bad will it be for me if that happens.
Re: OT: First-Class Statistical Missing Values Support in Julia 0.7
On Thursday, 21 June 2018 at 15:11:51 UTC, jmh530 wrote: The Julia folks have done some interesting work with missing values that I thought might be of interest [1, 2]. Looks like it would be pretty easy to do something similar in D with either unions or Algebraic. The time-consuming part would be making sure everything works seamlessly with mathematical functions. [1] https://julialang.org/blog/2018/06/missing [2] https://www.reddit.com/r/programming/comments/8spmca/firstclass_statistical_missing_values_support_in/ That was interesting. I wonder how smoothly it could be implemented in D, because it has to make sure that everything that computes with a missing value stays missing, and I believe it'll be a little challenging.
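For the curious, here is one minimal sketch of the "unions or Algebraic" idea from the post above. The names (`Missing`, `MaybeDouble`, `add`) are hypothetical, not from any library, and a real design would lift operators generically rather than writing one function per operation:

```d
import std.variant : Algebraic;

struct Missing {}
alias MaybeDouble = Algebraic!(double, Missing);

// Missingness propagates through arithmetic, like Julia's `missing`.
MaybeDouble add(MaybeDouble a, MaybeDouble b)
{
    if (a.peek!Missing !is null || b.peek!Missing !is null)
        return MaybeDouble(Missing.init);
    return MaybeDouble(a.get!double + b.get!double);
}

unittest
{
    assert(add(MaybeDouble(1.0), MaybeDouble(2.0)).get!double == 3.0);
    assert(add(MaybeDouble(1.0), MaybeDouble(Missing.init)).peek!Missing !is null);
}
```

As the post notes, the time-consuming part would be wiring this through every mathematical function; a single templated lifting helper would keep that tractable.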
Re: OT - Replacing strings with slices in C# - high performance improvement
On Saturday, 21 April 2018 at 20:54:32 UTC, Steven Schveighoffer wrote: I'm all for a string type and auto-decoding, so we can get rid of auto-decoding for char arrays. I've floated the idea of having the String type not be a range in order to solve this problem once and for all. In order to get a range from a String, you'd have to call toCodeUnits, toCodePoints, or toGraphemes, which would all be range returning member functions. That way, the user is in charge of the iteration every time, and there's no "magic" involved. I might whip up a proof-of-concept of a String (without RC but with SSO) later when I have free time. It'd be useful in some of my projects.
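A rough sketch of that non-range `String` idea (the type itself is hypothetical, but the adapters `byCodeUnit`, `byUTF` and `byGrapheme` already exist in std.utf/std.uni):

```d
import std.utf : byCodeUnit, byUTF;
import std.uni : byGrapheme;

// Deliberately NOT a range itself, so nothing auto-decodes behind the
// user's back; every iteration must pick a level explicitly.
struct String
{
    private immutable(char)[] data;

    auto toCodeUnits()  { return data.byCodeUnit; }  // range of char
    auto toCodePoints() { return data.byUTF!dchar; } // range of dchar
    auto toGraphemes()  { return data.byGrapheme; }  // range of Grapheme
}

unittest
{
    import std.algorithm : count;
    auto s = String("héllo");
    assert(s.toCodeUnits.count == 6);  // 'é' is two UTF-8 code units
    assert(s.toCodePoints.count == 5); // but one code point
}
```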
Re: OT - Replacing strings with slices in C# - high performance improvement
On Saturday, 21 April 2018 at 19:15:58 UTC, Steven Schveighoffer wrote: An RCString could have slicing just like C#. And it doesn't prevent "raw slicing" with char arrays. FWIW, I support having a string library type and have been advocating for it for years (I'd love to have my char arrays back as arrays of chars). But it MUST support slicing to pass muster in D. A slice of a String could also return a String, and have some property raw or toArray to get the underlying slice.
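A sketch of what that slicing interface could look like, with the reference-counting machinery elided (`raw` here stands in for the hypothetical raw/toArray escape hatch mentioned above):

```d
struct RCString
{
    private immutable(char)[] data; // a real RCString would refcount this

    // Slicing returns the wrapper type, as in C#, so code stays in the
    // non-auto-decoding world by default.
    RCString opSlice(size_t lo, size_t hi) { return RCString(data[lo .. hi]); }
    size_t opDollar() const { return data.length; }

    // Escape hatch back to a plain char slice.
    @property immutable(char)[] raw() const { return data; }
}

unittest
{
    auto s = RCString("hello world");
    assert(s[6 .. $].raw == "world");
}
```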
Re: OT - Replacing strings with slices in C# - high performance improvement
On 4/21/18 4:31 PM, Jack Stouffer wrote: On Saturday, 21 April 2018 at 16:08:13 UTC, Steven Schveighoffer wrote: Since when? Since Andrei came up with the RCStr concept. Even a non-RC String type would still solve our auto decoding problem while also allowing us to do SSO. Rereading your post, I misunderstood. I thought you implied that we would be getting rid of slicing from strings, but that's not what you meant. I'm all for a string type and auto-decoding, so we can get rid of auto-decoding for char arrays. -Steve
Re: OT - Replacing strings with slices in C# - high performance improvement
On Saturday, 21 April 2018 at 16:08:13 UTC, Steven Schveighoffer wrote: Since when? -Steve Since Andrei came up with the RCStr concept. Even a non-RC String type would still solve our auto decoding problem while also allowing us to do SSO.
Re: OT - Replacing strings with slices in C# - high performance improvement
On 4/21/18 2:37 PM, Seb wrote: On Saturday, 21 April 2018 at 16:08:13 UTC, Steven Schveighoffer wrote: On 4/20/18 8:27 PM, Jack Stouffer wrote: On Friday, 20 April 2018 at 16:33:44 UTC, rumbu wrote: .NET Core 2.1 was announced, with emphasis on using Span instead of classic String class all around the framework. For people not familiar with C#, Span is similar to a D array slice. https://blogs.msdn.microsoft.com/dotnet/2018/04/18/performance-improvements-in-net-core-2-1/ And we’re trying to move towards a string library type and away from raw slices :) Since when? At least 2 1/2 years: https://forum.dlang.org/thread/56b1224a.7050...@erdani.com An RCString could have slicing just like C#. And it doesn't prevent "raw slicing" with char arrays. FWIW, I support having a string library type and have been advocating for it for years (I'd love to have my char arrays back as arrays of chars). But it MUST support slicing to pass muster in D. -Steve
Re: OT - Replacing strings with slices in C# - high performance improvement
On Saturday, 21 April 2018 at 16:08:13 UTC, Steven Schveighoffer wrote: On 4/20/18 8:27 PM, Jack Stouffer wrote: On Friday, 20 April 2018 at 16:33:44 UTC, rumbu wrote: .NET Core 2.1 was announced, with emphasis on using Span instead of classic String class all around the framework. For people not familiar with C#, Span is similar to a D array slice. https://blogs.msdn.microsoft.com/dotnet/2018/04/18/performance-improvements-in-net-core-2-1/ And we’re trying to move towards a string library type and away from raw slices :) Since when? -Steve At least 2 1/2 years: https://forum.dlang.org/thread/56b1224a.7050...@erdani.com
Re: OT - Replacing strings with slices in C# - high performance improvement
On 4/20/18 8:27 PM, Jack Stouffer wrote: On Friday, 20 April 2018 at 16:33:44 UTC, rumbu wrote: .NET Core 2.1 was announced, with emphasis on using Span instead of classic String class all around the framework. For people not familiar with C#, Span is similar to a D array slice. https://blogs.msdn.microsoft.com/dotnet/2018/04/18/performance-improvements-in-net-core-2-1/ And we’re trying to move towards a string library type and away from raw slices :) Since when? -Steve
Re: OT - Replacing strings with slices in C# - high performance improvement
On Friday, 20 April 2018 at 16:33:44 UTC, rumbu wrote: .NET Core 2.1 was announced, with emphasis on using Span instead of classic String class all around the framework. For people not familiar with C#, Span is similar to a D array slice. https://blogs.msdn.microsoft.com/dotnet/2018/04/18/performance-improvements-in-net-core-2-1/ And we’re trying to move towards a string library type and away from raw slices :)
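For readers coming from C#, the analogy holds because a D slice, like Span&lt;char&gt;, is just a pointer-and-length view into existing memory:

```d
void main()
{
    string s = "hello world";
    string word = s[6 .. $]; // a view into s: no copy, no allocation
    assert(word == "world");
    assert(word.ptr == s.ptr + 6); // same underlying memory
}
```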
Re: [OT] Unity migrating parts of their engine from C++ into High Performace C# (HPC#)
On Tuesday, 3 April 2018 at 04:50:15 UTC, rumbu wrote: On Monday, 2 April 2018 at 22:55:58 UTC, Meta wrote: On Monday, 2 April 2018 at 20:19:17 UTC, rumbu wrote: void foo<T>(IRange<T> someRange) { //do something with someRange even if it's a struct //this includes code completion and other IDE specific stuff. } In D, template constraints are not very clean and they do not have IDE support: void foo(T)(T someRange) if (isInputRange!T) { } Worth mentioning is that doing this necessarily causes the struct to be boxed. I would not be surprised if they ban structs inheriting from interfaces. HPC# allows interface inheritance, but does not box structs. It's clearly stated in the video (15:30). In fact, boxing would bring up the GC, and GC is not allowed in HPC#. Oh, that's really neat (I was on mobile and could not watch the video).
Re: [OT] Unity migrating parts of their engine from C++ into High Performace C# (HPC#)
On Tuesday, 3 April 2018 at 07:57:36 UTC, rikki cattermole wrote: On 03/04/2018 7:43 PM, aliak wrote: On Tuesday, 3 April 2018 at 05:24:02 UTC, rikki cattermole wrote: Shame we don't have signatures, then we'd have similar functionality only better! https://github.com/rikkimax/DIPs/blob/master/DIPs/DIP1xxx-RC.md Is there an ETA on this being submitted for consideration? Or has it been already? Cheers No ETA, it is a major addition comparable to classes and I want to get it right. Ok. Really looking forward to it! :)
Re: [OT] Unity migrating parts of their engine from C++ into High Performace C# (HPC#)
On Tuesday, 3 April 2018 at 05:24:02 UTC, rikki cattermole wrote: Shame we don't have signatures, then we'd have similar functionality only better! https://github.com/rikkimax/DIPs/blob/master/DIPs/DIP1xxx-RC.md +1000 This would be a very interesting development.
Re: [OT] Unity migrating parts of their engine from C++ into High Performace C# (HPC#)
On Tuesday, 3 April 2018 at 07:29:11 UTC, Dukc wrote: On Monday, 2 April 2018 at 17:30:20 UTC, Paulo Pinto wrote: - No code that would trigger GC is allowed Impressive! It definitely won't be anything like D or Rust in the systems field anyway, but C# usually feels so reliant on GC that this sounds wondrous nonetheless! Especially if, even without its core feature, classes, it's worthy as a C++ replacement where applicable. I should really check this at some point. The relevant point is that according to Miguel de Icaza's interview at Unity's booth, the ongoing focus on performance was one of the contributors to the C# ref-related improvements. Then you have developers like Mike Acton, who has a strong point of view on where C++ is going in game development and has decided to join Unity and contribute to HPC# instead.
Re: [OT] Unity migrating parts of their engine from C++ into High Performace C# (HPC#)
On 03/04/2018 7:43 PM, aliak wrote: On Tuesday, 3 April 2018 at 05:24:02 UTC, rikki cattermole wrote: Shame we don't have signatures, then we'd have similar functionality only better! https://github.com/rikkimax/DIPs/blob/master/DIPs/DIP1xxx-RC.md Is there an ETA on this being submitted for consideration? Or has it been already? Cheers No ETA, it is a major addition comparable to classes and I want to get it right.
Re: [OT] Unity migrating parts of their engine from C++ into High Performace C# (HPC#)
On Monday, 2 April 2018 at 18:54:28 UTC, 12345swordy wrote: Interesting that they are going with the "No classes allowed" approach. It looks like the bullet points can be done in "better C" mode of D. I think you'll find Sarn's work on Xanthe quite interesting: https://forum.dlang.org/post/qjowkuwstewnmdune...@forum.dlang.org https://forum.dlang.org/post/ntnvrdoqgjpuogoxe...@forum.dlang.org D provides some remarkable features (type introspection, mixins, templates, etc...) that could make something like this possible in D. Regardless, I've been pushing for a way to deallocate classes in a @nogc context, which apparently not many people here seem to care about. I think people care, but it's a difficult problem to solve due to the burden of some of D's technical debt. I know you are aware of the proposed ProtoObject, which may provide an eventual solution. I, for one, am considering abandoning classes altogether and looking for ways to build on Sarn's aforementioned work to make object hierarchies in D using only structs. Mike
Re: [OT] Unity migrating parts of their engine from C++ into High Performace C# (HPC#)
On Tuesday, 3 April 2018 at 05:24:02 UTC, rikki cattermole wrote: Shame we don't have signatures, then we'd have similar functionality only better! https://github.com/rikkimax/DIPs/blob/master/DIPs/DIP1xxx-RC.md Is there an ETA on this being submitted for consideration? Or has it been already? Cheers
Re: [OT] Unity migrating parts of their engine from C++ into High Performace C# (HPC#)
On Monday, 2 April 2018 at 17:30:20 UTC, Paulo Pinto wrote: - No code that would trigger GC is allowed Impressive! It definitely won't be anything like D or Rust in the systems field anyway, but C# usually feels so reliant on GC that this sounds wondrous nonetheless! Especially if, even without its core feature, classes, it's worthy as a C++ replacement where applicable. I should really check this at some point.
Re: [OT] Unity migrating parts of their engine from C++ into High Performace C# (HPC#)
On 03/04/2018 11:01 AM, Meta wrote: On Monday, 2 April 2018 at 22:55:58 UTC, Meta wrote: On Monday, 2 April 2018 at 20:19:17 UTC, rumbu wrote: On Monday, 2 April 2018 at 18:54:28 UTC, 12345swordy wrote: - Only structs are used, no classes; - .NET collections are replaced by native collections that manage their own memory - No code that would trigger GC is allowed - Compiler is aware of Unity features and is able to exploit SIMD, by doing auto-vectorization, and transparently transform struct fields into optimal representations The struct type in C# is more versatile than D's equivalent, mainly because of the fact that you can inherit interfaces. You can have template constraints in D but this is not as user friendly as a struct interface. So in C# you can write code like this: interface IRange<T> { void popFront(); bool empty(); T front(); } struct MyRange: IRange<int> { //implementation } void foo<T>(IRange<T> someRange) { //do something with someRange even if it's a struct //this includes code completion and other IDE specific stuff. } In D, template constraints are not very clean and they do not have IDE support: void foo(T)(T someRange) if (isInputRange!T) { } Worth mentioning is that doing this necessarily causes the struct to be boxed. I would not be surprised if they ban structs inheriting from interfaces. To clarify, the struct will be boxed when passing it to a function that accepts an IFoo, or if you do `IFoo foo = someStruct` or the like. Shame we don't have signatures, then we'd have similar functionality only better! https://github.com/rikkimax/DIPs/blob/master/DIPs/DIP1xxx-RC.md
Re: [OT] Unity migrating parts of their engine from C++ into High Performace C# (HPC#)
On Monday, 2 April 2018 at 22:55:58 UTC, Meta wrote: On Monday, 2 April 2018 at 20:19:17 UTC, rumbu wrote: void foo<T>(IRange<T> someRange) { //do something with someRange even if it's a struct //this includes code completion and other IDE specific stuff. } In D, template constraints are not very clean and they do not have IDE support: void foo(T)(T someRange) if (isInputRange!T) { } Worth mentioning is that doing this necessarily causes the struct to be boxed. I would not be surprised if they ban structs inheriting from interfaces. HPC# allows interface inheritance, but does not box structs. It's clearly stated in the video (15:30). In fact, boxing would bring up the GC, and GC is not allowed in HPC#.
Re: [OT] Unity migrating parts of their engine from C++ into High Performace C# (HPC#)
On Monday, 2 April 2018 at 20:19:17 UTC, rumbu wrote: On Monday, 2 April 2018 at 18:54:28 UTC, 12345swordy wrote: - Only structs are used, no classes; - .NET collections are replaced by native collections that manage their own memory - No code that would trigger GC is allowed - Compiler is aware of Unity features and is able to exploit SIMD, by doing auto-vectorization, and transparently transform struct fields into optimal representations The struct type in C# is more versatile than D's equivalent, mainly because of the fact that you can inherit interfaces. You can have template constraints in D but this is not as user friendly as a struct interface. So in C# you can write code like this: interface IRange<T> { void popFront(); bool empty(); T front(); } struct MyRange: IRange<int> { //implementation } void foo<T>(IRange<T> someRange) { //do something with someRange even if it's a struct //this includes code completion and other IDE specific stuff. } In D, template constraints are not very clean and they do not have IDE support: void foo(T)(T someRange) if (isInputRange!T) { } You can do that in D too - see Atila's concepts library.
Re: [OT] Unity migrating parts of their engine from C++ into High Performace C# (HPC#)
On Tuesday, 3 April 2018 at 01:31:12 UTC, Laeeth Isharc wrote: On Monday, 2 April 2018 at 20:19:17 UTC, rumbu wrote: On Monday, 2 April 2018 at 18:54:28 UTC, 12345swordy wrote: - Only structs are used, no classes; - .NET collections are replaced by native collections that manage their own memory - No code that would trigger GC is allowed - Compiler is aware of Unity features and is able to exploit SIMD, by doing auto-vectorization, and transparently transform struct fields into optimal representations The struct type in C# is more versatile than D's equivalent, mainly because of the fact that you can inherit interfaces. You can have template constraints in D but this is not as user friendly as a struct interface. So in C# you can write code like this: interface IRange<T> { void popFront(); bool empty(); T front(); } struct MyRange: IRange<int> { //implementation } void foo<T>(IRange<T> someRange) { //do something with someRange even if it's a struct //this includes code completion and other IDE specific stuff. } In D, template constraints are not very clean and they do not have IDE support: void foo(T)(T someRange) if (isInputRange!T) { } You can do that in D too - see Atila's concepts library. interface IFoo { int foo(int i, string s) @safe; double lefoo(string s) @safe; } @implements!(Foo, IFoo) struct Foo { int foo(int i, string s) @safe { return 0; } double lefoo(string s) @safe { return 0; } } // doesn't compile /* @implements!(Oops, IFoo) struct Oops {} */
Re: [OT] Unity migrating parts of their engine from C++ into High Performance C# (HPC#)
On Monday, 2 April 2018 at 22:55:58 UTC, Meta wrote:
> On Monday, 2 April 2018 at 20:19:17 UTC, rumbu wrote:
> [...]
>
> Worth mentioning is that doing this necessarily causes the struct to be boxed. I would not be surprised if they ban structs from inheriting interfaces.

To clarify, the struct will be boxed when passing it to a function that accepts an IFoo, or if you do `IFoo foo = someStruct` or the like.
Re: [OT] Unity migrating parts of their engine from C++ into High Performance C# (HPC#)
On Monday, 2 April 2018 at 20:19:17 UTC, rumbu wrote:
> On Monday, 2 April 2018 at 18:54:28 UTC, 12345swordy wrote:
> [...]
>
> The struct type in C# is more versatile than D's equivalent, mainly because you can inherit interfaces. [...] So in C# you can write code like this:
>
>     struct MyRange : IRange { /* implementation */ }
>     void foo(IRange someRange) { /* works even though MyRange is a struct */ }
> [...]

Worth mentioning is that doing this necessarily causes the struct to be boxed. I would not be surprised if they ban structs from inheriting interfaces.
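The boxing point matters for Unity's no-GC goal: calling through the interface type heap-allocates a boxed copy of the struct, while a generic constraint dispatches on the struct directly. A minimal C# sketch of the difference (the `IRange`/`MyRange` names follow rumbu's example; everything else here is illustrative, not Unity's actual code):

```csharp
using System;

interface IRange
{
    bool Empty { get; }
}

struct MyRange : IRange
{
    public bool Empty => true;
}

class BoxingDemo
{
    // Accepting the interface type boxes the struct argument:
    // a copy is allocated on the GC heap at the call site.
    static bool ViaInterface(IRange r) => r.Empty;

    // A generic constraint calls the struct's member directly
    // (constrained call), so no boxing occurs for value types.
    static bool ViaGeneric<T>(T r) where T : IRange => r.Empty;

    static void Main()
    {
        var mr = new MyRange();
        Console.WriteLine(ViaInterface(mr)); // mr is boxed here; prints True
        Console.WriteLine(ViaGeneric(mr));   // no boxing; prints True
    }
}
```

This is presumably why a banned pattern would be the `IFoo foo = someStruct` conversion rather than interface inheritance itself: the generic-constraint form keeps the IDE-friendly contract without touching the GC heap.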
Re: [OT] Unity migrating parts of their engine from C++ into High Performance C# (HPC#)
On Monday, 2 April 2018 at 18:54:28 UTC, 12345swordy wrote:
> - Only structs are used, no classes;
> - .NET collections are replaced by native collections that manage their own memory;
> - No code that would trigger the GC is allowed;
> - The compiler is aware of Unity features and is able to exploit SIMD by doing auto-vectorization, and transparently transforms struct fields into optimal representations.

The struct type in C# is more versatile than D's equivalent, mainly because you can inherit interfaces. You can have template constraints in D, but they are not as user friendly as a struct interface. So in C# you can write code like this:

    interface IRange<T>
    {
        void popFront();
        bool empty();
        T front();
    }

    struct MyRange : IRange<int>
    {
        // implementation
    }

    void foo(IRange<int> someRange)
    {
        // do something with someRange even though it's a struct;
        // this includes code completion and other IDE-specific stuff.
    }

In D, template constraints are not very clean and they don't have IDE support:

    void foo(T)(T someRange) if (isInputRange!T)
    {
    }
Re: [OT] Unity migrating parts of their engine from C++ into High Performance C# (HPC#)
On Monday, 2 April 2018 at 17:30:20 UTC, Paulo Pinto wrote:
> A bit off topic, yet still relevant given the ongoing attempts to make D usable in the games industry.
>
> At this year's GDC, Unity gave a few talks about new subsystems being transitioned from internal engine code in C++ into upper layers written in C#. For that purpose they are introducing a new compiler, based on LLVM and targeting a subset of C# features, which they call High Performance C#.
>
> Key points of the subset:
>
> - Only structs are used, no classes;
> - .NET collections are replaced by native collections that manage their own memory;
> - No code that would trigger the GC is allowed;
> - The compiler is aware of Unity features and is able to exploit SIMD by doing auto-vectorization, and transparently transforms struct fields into optimal representations.
>
> They plan to take advantage of the new C# 7.x ref type features after integrating the new compiler infrastructure.
>
> "Unity at GDC - C# to Machine Code"
> https://www.youtube.com/watch?v=NF6kcNS6U80
>
> Also relevant:
>
> "Job System & Entity Component System"
> https://www.youtube.com/watch?v=kwnb9Clh2Is
>
> "A Data Oriented Approach to Using Component Systems"
> https://www.youtube.com/watch?v=p65Yt20pw0g
>
> Some might remember Mike Acton's talk at CppCon about data-oriented programming; he is one of the developers leading this effort.
>
> https://www.mcvuk.com/development/exclusive-unity-takes-a-principled-step-into-triple-a-performance-at-gdc

Interesting that they are going with the "no classes allowed" approach. It looks like the bullet points can be done in D's -betterC mode. Regardless, I've been pushing for a way to deallocate classes in a @nogc context, which apparently not many people here seem to care about.
Re: [OT] - Re: Could someone take a look at DIP PR 109?
On 28/03/18 21:23, Nick Treleaven wrote:
> On Wednesday, 28 March 2018 at 06:43:15 UTC, Shachar Shemesh wrote:
> > For those too lazy to click on the link,
>
> BTW, it's not the reader's 'laziness', it's basic courtesy on the part of the poster to the newsgroup to provide a meaningful subject line for a thread.

The subject of this thread was not "hey, I wrote a cool DIP, have a look". Had that been the case, I'd have your back 100%. The subject was "I wrote a DIP and the steps that the procedure says should have happened have not".

> Thanks for writing the DIP though ;-)

You're welcome.

Shachar
Re: OT: Behaviour of Experienced Programmers Towards Newcomers
On Sunday, 18 March 2018 at 06:28:11 UTC, Amorphorious wrote:
> And who the fuck are you? See, it's funny how you say I'm a noob with mental problems that says shit about people, yet you are doing THE EXACT SAME THING! At the very least, you are no better than me, in fact worse, because you pretend you are all high and mighty and then throw your underhanded attacks in.

Hey. I understand that someone saying you have mental problems can be taken as an attack. I think the person who made the comment should not have said it. I think we all have mental problems... it comes from being human ;-) I volunteer in the mental health sector, so I know mental health issues and being human seem highly correlated ;-)

In any case, you are clearly a very intelligent person (based on my analysis of your previous discussions over a long... period of time), so why not use your brain to benefit people instead of attacking them? Try to explain how people are wrong, so they can learn. Don't call people morons. It's pointless, and just reflects badly on you.
Re: OT: Behaviour of Experienced Programmers Towards Newcomers
On Saturday, 17 March 2018 at 06:46:17 UTC, Uknown wrote:
> https://opensource.com/article/18/3/avoid-humiliating-newcomers
>
> It's a blog post about how expert programmers sometimes treat newcomers badly. I haven't really noticed any of what he mentions in the D community, as most of the regular members are very polite and friendly, but I thought it was an important read nonetheless.

Most assholes in programming forums are intermediate level, and young. They've got enough experience to overvalue their own opinion, and they are young enough that they get offended easily when someone disagrees with them.
Re: OT: Behaviour of Experienced Programmers Towards Newcomers
On Sunday, 18 March 2018 at 06:28:11 UTC, Amorphorious wrote:
> We might as well add an IQ test to it; the one with the lower IQ kills himself and does the rest of humanity a favor? Or is this another deal you will reject?

Just wow.
Re: OT: Behaviour of Experienced Programmers Towards Newcomers
That is enough. Both of you please stop now.
Re: OT: Behaviour of Experienced Programmers Towards Newcomers
On Saturday, 17 March 2018 at 10:28:27 UTC, Joakim wrote:
> On Saturday, 17 March 2018 at 07:01:53 UTC, rumbu wrote:
> > On Saturday, 17 March 2018 at 06:46:17 UTC, Uknown wrote:
> > > I haven't really noticed any of what he mentions in the D community
> >
> > 3 days ago: https://forum.dlang.org/post/ylngefsfuwqodaprw...@forum.dlang.org
>
> That guy's some nutjob who only hangs out in the Learn forum, continually changing nicks and sometimes going off like that (I think I went off on one of his prior nicks); he basically has nothing to do with the D community.

Prove it, idiot. I'm pretty sure you are that guy you are talking about.

> He's no expert, as psychoRabbit says, merely a noob with some mental problems.

And who the fuck are you? See, it's funny how you say I'm a noob with mental problems that says shit about people, yet you are doing THE EXACT SAME THING! At the very least, you are no better than me, in fact worse, because you pretend you are all high and mighty and then throw your underhanded attacks in.

At least I have a point, you don't. You are just using personal attacks. So, the fact is, you are illogical and moronic. You say: this guy attacks people, then you attack me... and yet you think you are justified. RIIGHHHT. I'm sure the rest of your buddies here will defend you though.

Better yet, how about we both go get a mental analysis and the one that is more mentally insane checks himself into the mental ward? Deal? We might as well add an IQ test to it; the one with the lower IQ kills himself and does the rest of humanity a favor? Or is this another deal you will reject?
Re: OT: Behaviour of Experienced Programmers Towards Newcomers
On Saturday, 17 March 2018 at 07:16:22 UTC, Jonathan M Davis wrote:
> On Saturday, March 17, 2018 07:01:53 rumbu via Digitalmars-d wrote:
> > On Saturday, 17 March 2018 at 06:46:17 UTC, Uknown wrote:
> > > I haven't really noticed any of what he mentions in the D community
> >
> > 3 days ago: https://forum.dlang.org/post/ylngefsfuwqodaprw...@forum.dlang.org
>
> Unfortunately, we do periodically have folks act like that around here, but fortunately, for the most part, it's folks who don't stick around long, and our regular posters are generally well-behaved.
>
> - Jonathan M Davis

This. Most people around here are really nice :)
Re: OT: Behaviour of Experienced Programmers Towards Newcomers
On Saturday, 17 March 2018 at 07:01:53 UTC, rumbu wrote:
> On Saturday, 17 March 2018 at 06:46:17 UTC, Uknown wrote:
> > I haven't really noticed any of what he mentions in the D community
>
> 3 days ago: https://forum.dlang.org/post/ylngefsfuwqodaprw...@forum.dlang.org

That guy's some nutjob who only hangs out in the Learn forum, continually changing nicks and sometimes going off like that (I think I went off on one of his prior nicks); he basically has nothing to do with the D community. He's no expert, as psychoRabbit says, merely a noob with some mental problems.