Re: Value closures (no GC allocation)
On Thursday, 25 May 2017 at 05:03:15 UTC, Stanislav Blinov wrote: Sigh... You're right. We should've renamed the file at the same time as removing the number from the contents, because "internets". And I shouldn't have snapped like that. I apologize. It's all good :-)
Re: DIP 1008 Preliminary Review Round 1
On 5/23/2017 3:40 PM, Martin Nowak wrote: Why does it have to be refcounted? Seems like there is only ever one reference to the current exception (the catch variable). Rethrowing the catch variable makes for 2 references. Has staticError been considered? It has a potential issue with multiple nested exceptions, but otherwise works fine. https://github.com/dlang/druntime/blob/bc832b18430ce1c85bf2dded07bbcfe348ff0813/src/core/exception.d#L683 Doesn't work for chained exceptions.
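The staticError idea mentioned above can be pictured with a simplified sketch (NOT the actual druntime implementation; `StaticException` and the buffer layout are illustrative): the error object lives in preallocated static storage, so producing it needs no GC allocation — and the overwrite behavior shows exactly why nested/chained exceptions are a problem.

```d
import core.lifetime : emplace;

// Simplified sketch of the staticError idea referenced above (NOT the
// actual druntime implementation): the exception object lives in
// preallocated static storage, so obtaining it requires no GC allocation.
class StaticException : Exception
{
    this(string msg) { super(msg); }
}

// One preallocated, suitably aligned buffer, reused for every error.
private align(16) ubyte[__traits(classInstanceSize, StaticException)] _buf;

StaticException staticError(string msg)
{
    // (Re)construct the exception in the static buffer. This is the
    // weakness noted above: a second error raised while the first is
    // still in flight (nested/chained exceptions) overwrites it.
    return emplace!StaticException(_buf[], msg);
}

void main()
{
    auto a = staticError("first");
    assert(a.msg == "first");

    auto b = staticError("second");
    assert(a is b);            // same object every time...
    assert(a.msg == "second"); // ...so the first error's state is gone
}
```

The last two assertions are the "doesn't work for chained exceptions" objection in miniature: both calls hand back the same instance, so the earlier error's state is clobbered.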
Re: Value closures (no GC allocation)
On Thursday, 25 May 2017 at 04:53:14 UTC, Mike Parker wrote: On Thursday, 25 May 2017 at 03:14:14 UTC, Stanislav Blinov wrote: Mike, I've read that README several times over. Yes, a mistake was made, a number self-assigned. Yes, that is not how it's done. Yes, we know. Yes. Thanks for the reminder. All is fixed. Thanks. I wasn't criticizing. It wasn't fixed when I clicked the link, so I just wanted to make sure everyone's on the same page. I can't assume everyone has gone over the README. Sigh... You're right. We should've renamed the file at the same time as removing the number from the contents, because "internets". And I shouldn't have snapped like that. I apologize.
Re: Value closures (no GC allocation)
On Thursday, 25 May 2017 at 03:14:14 UTC, Stanislav Blinov wrote: Mike, I've read that README several times over. Yes, a mistake was made, a number self-assigned. Yes, that is not how it's done. Yes, we know. Yes. Thanks for the reminder. All is fixed. Thanks. I wasn't criticizing. It wasn't fixed when I clicked the link, so I just wanted to make sure everyone's on the same page. I can't assume everyone has gone over the README.
update list of organisations using D to refer to blog posts and talks
Could we try to keep the list of organisations using D updated to include links to talks given since the list was made? E.g. Weka, Remedy Games. Similarly, could we add blog posts (e.g. the recent one on eBay) to the list as we do for talks? Maybe we could also add some more concise quotes to this page. E.g. Tamedia say "D is not only a highly productive language, but also a great hiring filter." But there is nothing for Weka. If we could contact Netflix to get their confirmation and approval to mention their use of D for machine learning, I think their name would carry some weight. Bear in mind that this page may be visited by people who have influence on decisions by their organisations, and who are just taking a quick look and won't dig deeper unless their interest is piqued. Then also, on the front page under Community there should be a "Notable Projects" page for non-commercial open-source projects like Tilix (née Terminix). Finally - I think having a "channels" or domains page that gives a primer on the domain-specific resources available would make it a bit easier for people looking into the language. E.g. bioinformatics, numerical computing, automated translation from other languages, web services, etc. The knowledge is there, but it's a bit fragmented. I would do it myself, but unfortunately don't have much time. Thanks. Laeeth.
Re: std.functional.memoize : thread local or __gshared memoization?
On Thursday, 25 May 2017 at 01:17:41 UTC, Timothee Cour wrote: thanks; i think docs for this should still make that clear. How about adding memoizeShared for shared variables? There definitely are use cases for this. Perhaps we should first actually properly document and implement what shared *is*.
Re: Value closures (no GC allocation)
On Thursday, 25 May 2017 at 03:42:11 UTC, Adam D. Ruppe wrote: On Thursday, 25 May 2017 at 03:30:38 UTC, Stanislav Blinov wrote: Captures, if any, should be explicit. That is all :) The current behavior is fine in most cases. The explicit by-value capture handles the remaining cases. In my opinion, it is not, because it's utterly silent unless you either stumble upon it via @nogc or a compiler switch. By inference, adding capture syntax at all will also complicate the whole thing. Yeah, there's some complication in adding it but it is worth it because it gives something new without breaking anything old. P.S. have you seen the TODO at the bottom of that section? Yeah, I'm EXTREMELY against removing the current behavior. I'd kill the whole thing just to avoid that. I understand perfectly, wasn't exactly easy contemplating that "maybe" either. In any case, we should not get ahead of ourselves. I'm not insisting on that particular change, I've brought it up for consideration, opened a discussion, and it looks like it worked :)
Re: Value closures (no GC allocation)
On Thursday, 25 May 2017 at 03:30:38 UTC, Stanislav Blinov wrote: Captures, if any, should be explicit. That is all :) The current behavior is fine in most cases. The explicit by-value capture handles the remaining cases. By inference, adding capture syntax at all will also complicate the whole thing. Yeah, there's some complication in adding it but it is worth it because it gives something new without breaking anything old. P.S. have you seen the TODO at the bottom of that section? Yeah, I'm EXTREMELY against removing the current behavior. I'd kill the whole thing just to avoid that.
Re: Value closures (no GC allocation)
On Thursday, 25 May 2017 at 03:10:04 UTC, Adam D. Ruppe wrote: "Currently FunctionLiterals that capture their outer context (i.e. closures/delegates) require an allocation and the garbage collector. " Not necessarily true, make sure you actually mention `scope delegate` and `alias` params that do not return it. Those capture but do not allocate. It's an infant document, wording is out there somewhere... "We will be proposing a syntax redundant w.r.t. current behavior (i.e. capture by reference), so maybe we should consider proposing deprecation?" Don't do that. Why not? I, personally, have a simple, but solid, justification: Captures, if any, should be explicit. That is all :) That is solely my opinion, hence the cautious "maybe"... "Capture by reference" I'm against that, no need adding it and it complicates the whole thing. For example, "ref int _i;" as a struct member; there's no such thing in D. (the compiler could do it but still). And you'd have to explain the lifetime. Just no point doing this, the current behavior is completely fine for this. By inference, adding capture syntax at all will also complicate the whole thing. IMHO, we either should have one, or the other, but not both. P.S. have you seen the TODO at the bottom of that section?
Re: Value closures (no GC allocation)
On Thursday, 25 May 2017 at 01:18:22 UTC, Mike Parker wrote: label: if (self.bored) goto disclaimer; We're well aware. The file name is not indicative of anything. The README outlines the procedure for DIP submission, including the format of the filename. The concern is that if you include a number in the filename, it opens the door to people referring to it by that number. We want to avoid that. [1] https://github.com/dlang/DIPs/blob/master/README.md // warning, loops until self.bored is true goto label; disclaimer: Mike, I've read that README several times over. Yes, a mistake was made, a number self-assigned. Yes, that is not how it's done. Yes, we know. Yes. Thanks for the reminder. All is fixed.
Re: Value closures (no GC allocation)
On Wednesday, 24 May 2017 at 20:15:37 UTC, Vittorio Romeo wrote: If you're interested in contributing, please let me know and I'll add you as a collaborator. can i just edit it on the site? but a few comments: "Currently FunctionLiterals that capture their outer context (i.e. closures/delegates) require an allocation and the garbage collector. " Not necessarily true, make sure you actually mention `scope delegate` and `alias` params that do not return it. Those capture but do not allocate. "We will be proposing a syntax redundant w.r.t. current behavior (i.e. capture by reference), so maybe we should consider proposing deprecation?" Don't do that. "// or simply, using existing syntax: //auto l = () => writeln("hello!");" That existing one will actually make a function, not a delegate, since it doesn't access any locals. "struct anonymous_l" That should be `static struct`. "Capture by reference" I'm against that, no need adding it and it complicates the whole thing. For example, "ref int _i;" as a struct member; there's no such thing in D. (the compiler could do it but still). And you'd have to explain the lifetime. Just no point doing this, the current behavior is completely fine for this.
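The lowering being discussed (a value-capturing lambda becoming a plain struct holding copies of the captured variables, so no GC closure allocation) can be sketched as follows. The name `ValueClosure` is illustrative only, not syntax from the draft DIP:

```d
import std.stdio;

// Hypothetical lowering of a value-capturing lambda along the lines
// discussed above: the captured variable becomes an ordinary struct
// member (a copy), so no GC closure allocation is needed.
struct ValueClosure
{
    int _i; // captured by value; a by-*reference* capture would need
            // something like `ref int _i;`, which is not valid D --
            // the objection to reference capture raised above
    int opCall() { return _i * 2; }
}

void main()
{
    int i = 21;

    // Today's syntax: this delegate captures `i` from the enclosing
    // stack frame and may allocate a GC closure.
    auto gcClosure = () => i * 2;

    // Value-capture alternative: copy `i` into a stack struct.
    // (Declared inside a function, it would need to be `static struct`
    // to avoid an implicit context pointer, as noted above.)
    auto vc = ValueClosure(i);

    assert(gcClosure() == vc()); // both yield 42
    writeln(vc());
}
```

Note how the `ref int _i;` comment captures the lifetime objection: a by-value member has trivial, obvious lifetime, while a by-reference member would need the compiler to invent a construct D structs don't have.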
Re: Value closures (no GC allocation)
On Thursday, 25 May 2017 at 01:13:08 UTC, Stanislav Blinov wrote: On Thursday, 25 May 2017 at 00:34:45 UTC, MysticZach wrote: As a matter of procedure, you don't want to assign a DIP number to it yourself. I'm pretty sure that is for the DIP manager. We're well aware. The file name is not indicative of anything. The README outlines the procedure for DIP submission, including the format of the filename. The concern is that if you include a number in the filename, it opens the door to people referring to it by that number. We want to avoid that. [1] https://github.com/dlang/DIPs/blob/master/README.md
Re: std.functional.memoize : thread local or __gshared memoization?
thanks; i think docs for this should still make that clear. How about adding memoizeShared for shared variables? There definitely are use cases for this. On Wed, May 24, 2017 at 5:19 PM, Jonathan M Davis via Digitalmars-d wrote: > On Wednesday, May 24, 2017 16:56:49 Timothee Cour via Digitalmars-d wrote: >> I could look at source to figure it out but others might wonder and I >> couldn't find it in the docs in >> https://dlang.org/library/std/functional/memoize.html whether memoize >> works per thread (thread local) or globally (__gshared) > > Definitely thread-local, and there would be problems if it were shared, > since for that to work properly, it would really need to be given either > shared data or data that implicitly converted to shared. And while __gshared > might skirt the compiler yelling at you, it would have the same issues; it's > just that the compiler wouldn't be yelling at you about them, so it would be > harder to catch them and more likely that you'd get weird, > hard-to-track-down bugs. > > - Jonathan M Davis >
Re: Value closures (no GC allocation)
On Thursday, 25 May 2017 at 00:34:45 UTC, MysticZach wrote: I've created a very WIP draft here: https://github.com/SuperV1234/DIPs/blob/master/DIPs/DIP1009.md If you're interested in contributing, please let me know and I'll add you as a collaborator. As a matter of procedure, you don't want to assign a DIP number to it yourself. I'm pretty sure that is for the DIP manager. We're well aware. The file name is not indicative of anything.
Re: Value closures (no GC allocation)
On Wednesday, 24 May 2017 at 20:15:37 UTC, Vittorio Romeo wrote: On Monday, 22 May 2017 at 15:17:24 UTC, Stanislav Blinov wrote: On Monday, 22 May 2017 at 14:06:54 UTC, Vittorio Romeo wrote: On Sunday, 21 May 2017 at 20:25:14 UTC, Adam D. Ruppe wrote: Blah. Well, let's go ahead and formally propose the C++ syntax, our library solutions are all fat. Are you going to create a DIP for this? I would be happy to contribute, but I don't feel confident enough to create a DIP on my own (I just started learning the language) :) The three of us could do it together through the magics of github. I've created a very WIP draft here: https://github.com/SuperV1234/DIPs/blob/master/DIPs/DIP1009.md If you're interested in contributing, please let me know and I'll add you as a collaborator. As a matter of procedure, you don't want to assign a DIP number to it yourself. I'm pretty sure that is for the DIP manager.
Re: std.functional.memoize : thread local or __gshared memoization?
On Wednesday, May 24, 2017 16:56:49 Timothee Cour via Digitalmars-d wrote: > I could look at source to figure it out but others might wonder and I > couldn't find it in the docs in > https://dlang.org/library/std/functional/memoize.html whether memoize > works per thread (thread local) or globally (__gshared) Definitely thread-local, and there would be problems if it were shared, since for that to work properly, it would really need to be given either shared data or data that implicitly converted to shared. And while __gshared might skirt the compiler yelling at you, it would have the same issues; it's just that the compiler wouldn't be yelling at you about them, so it would be harder to catch them and more likely that you'd get weird, hard-to-track-down bugs. - Jonathan M Davis
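The thread-local behavior described above can be demonstrated with a small sketch (`slowDouble` and the counter are illustrative): both the memoization cache and a plain module-level variable are thread-local by default in D, so each thread starts with an empty cache.

```d
import std.functional : memoize;
import core.thread : Thread;

int calls; // thread-local by default in D, just like memoize's cache

int slowDouble(int x)
{
    ++calls;       // counts underlying calls in *this* thread only
    return 2 * x;
}

alias fastDouble = memoize!slowDouble;

void main()
{
    fastDouble(10);
    fastDouble(10);          // cache hit: slowDouble not called again
    assert(calls == 1);

    // A new thread gets its own empty cache (and its own `calls`).
    auto t = new Thread({
        fastDouble(10);      // cache miss: this thread's first call
        assert(calls == 1);  // this thread's own counter
    });
    t.start();
    t.join();

    assert(calls == 1); // main thread's counter is unchanged
}
```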
std.functional.memoize : thread local or __gshared memoization?
I could look at source to figure it out but others might wonder and I couldn't find it in the docs in https://dlang.org/library/std/functional/memoize.html whether memoize works per thread (thread local) or globally (__gshared)
Re: DIP 1008 Preliminary Review Round 1
On Tuesday, 23 May 2017 at 22:40:43 UTC, Martin Nowak wrote: The proposal is a very mechanical fix, throwing several special cases at one specific problem. Why does it have to be refcounted? Seems like there is only ever one reference to the current exception (the catch variable). The only thing that seems necessary is to require scope on catch variable declarations, so that people do not escape the class reference, then this info might be used by the runtime to free exception objects after the catch handler is done. Amaury put a bit more words into that. http://forum.dlang.org/post/gtqsojgqqaorubcsn...@forum.dlang.org Has staticError been considered? It has a potential issue with multiple nested exceptions, but otherwise works fine. https://github.com/dlang/druntime/blob/bc832b18430ce1c85bf2dded07bbcfe348ff0813/src/core/exception.d#L683 I'm trying to understand your and Amaury's point. Normally, when you say `new` you get memory from the GC allocator. The @nogc attribute is supposed to prevent this, if I understand it correctly. Are you saying that `@nogc` as such is misconceived, because what a good language feature should really be doing is identifying and preventing new memory that _can't_ be deterministically destroyed? Is the problem here with the @nogc attribute? Because I think Walter's goal with this DIP is to make it so that you can put @nogc on _called_ functions that throw using `new`. Whereas your solution is to ignore that `new Exception` allocation, on account of the fact that you can deterministically destroy the Exception, provided you use `scope` catch blocks at, or above, the call site. Your solution might actually have its priorities straight, and `@nogc` may be designed badly because it clumps all GC allocations into one big basket. However, getting new memory from the GC still could trigger a collection cycle, which is what @nogc was created for, and simply knowing that you can reliably destroy the allocated memory doesn't change that. 
Thus, if I understand correctly, you and Amaury are arguing that `@nogc` as currently designed is a false goal to be chasing, that the more important goal is memory that can be deterministically destroyed, and therefore it distresses you that the language may be altered to chase the false prize of `@nogc` everywhere, instead of focusing on a real prize worth attaining?
Re: Value closures (no GC allocation)
On Wednesday, 24 May 2017 at 20:15:37 UTC, Vittorio Romeo wrote: I've created a very WIP draft here: https://github.com/SuperV1234/DIPs/blob/master/DIPs/DIP1009.md If you're interested in contributing, please let me know and I'll add you as a collaborator. Yep, I've made a small PR to make myself visible :)
Re: DMD VS2017 Support
On Wednesday, 24 May 2017 at 03:21:56 UTC, Vladimir Panteleev wrote: On Tuesday, 23 May 2017 at 23:11:30 UTC, Jolly James wrote: Come on, let's be honest: if DMD has no x64 linker, VS integration is not a bit optional. Unlike some other operating systems, 64-bit Windows versions can run 32-bit software just fine. If you require targeting Win64 (or the Microsoft C runtime), that is specific to your use case. That's true. But when D's GC does not trigger early enough, my program runs out of memory (due to the 2 GB limit of a 32-bit process); x64 solves this more or less. As a side note: I am talking about allocating huge arrays. And, again, Visual Studio is not required if you want to target Win64 - only the necessary toolchain components (linker and C runtime), which you can also obtain from an SDK. So you are telling me "not working at all" is not worth releasing a fix? xD Again, everything should already be working. We are talking about a convenience feature that automatically sets up the D compiler's configuration file, nothing more. That's all there is to the "integration". You can trivially do the same thing by hand, or set up your environment accordingly - all in a fraction of the time it took you to write these pointless complaints on this forum. I admit, you are right on some points. Nevertheless, have you ever tried letting a novice configure this stuff? The 'sc.ini' file is neither easy to read nor logical. In my particular case I am overwhelmed by this task.
Re: Ali's slides from his C++Now talk
On Wednesday, May 24, 2017 13:18:37 Ali Çehreli via Digitalmars-d wrote: > On 05/24/2017 08:19 AM, Manu via Digitalmars-d wrote: > > On 24 May 2017 at 09:31, Joakim via Digitalmars-d > > > > Hehe, I'm honoured to be quoted... verbatim :) > > Sorry for trimming the last part of your quote. ;) Well, you have to cut it off at some point, or you're just repeating everything that he's ever said since the start of the quote. :) - Jonathan M Davis
Re: Ali's slides from his C++Now talk
On 05/24/2017 08:19 AM, Manu via Digitalmars-d wrote: On 24 May 2017 at 09:31, Joakim via Digitalmars-d Hehe, I'm honoured to be quoted... verbatim :) Sorry for trimming the last part of your quote. ;) Ali
Re: Value closures (no GC allocation)
On Monday, 22 May 2017 at 15:17:24 UTC, Stanislav Blinov wrote: On Monday, 22 May 2017 at 14:06:54 UTC, Vittorio Romeo wrote: On Sunday, 21 May 2017 at 20:25:14 UTC, Adam D. Ruppe wrote: Blah. Well, let's go ahead and formally propose the C++ syntax, our library solutions are all fat. Are you going to create a DIP for this? I would be happy to contribute, but I don't feel confident enough to create a DIP on my own (I just started learning the language) :) The three of us could do it together through the magics of github. I've created a very WIP draft here: https://github.com/SuperV1234/DIPs/blob/master/DIPs/DIP1009.md If you're interested in contributing, please let me know and I'll add you as a collaborator.
Re: Runtime?
On Wednesday, 24 May 2017 at 15:22:55 UTC, Stanislav Blinov wrote: It's a bit of pre-allocation and some internal bookkeeping. void main() { import std.stdio; import core.memory; writeln(GC.stats); } // used, free Stats(256, 1048320) The remaining .7MB could probably be attributed to some internal data structures. Thanks, that puts D in a better light. I assume that of the 0.7MB, part is the executable (0.24MB) loaded into memory. +1 for the answer. :)
Re: Simplifying druntime and phobos by getting rid of "shared static this()" blocks
BTW the best outcome of this is a faster initOnce. Steve, take a look at it pliz pliz? -- Andrei
Re: Simplifying druntime and phobos by getting rid of "shared static this()" blocks
On 5/24/17 12:40 PM, Andrei Alexandrescu wrote: On 5/24/17 4:49 PM, Steven Schveighoffer wrote: On 5/23/17 3:47 PM, Andrei Alexandrescu wrote: https://github.com/dlang/phobos/pull/5421 Looking forward to more in the same vein, please contribute! We have 25 left in phobos and 12 in druntime. A big one will be making the GC lazily initialize itself. -- Andrei So every time I do: writeln(...) It has to go through a check to see if it's initialized? Using a delegate? The delegate is not called in the steady state. OK, I worry about inlineability as DMD has issues when you are using delegates sometimes. Unfortunately, I think the compiler isn't smart enough to realize that after one check of the boolean returns true, it could just access the File handle directly. Has the performance of this been tested? Always a good idea. My test bed: void main() { import std.stdio; foreach (i; 0 .. 10_000_000) writeln("1234567890"); } On my laptop using dmd, phobos master, best of 21 runs using "time test >/dev/null": 1.371 seconds. With initOnce: 1.469 seconds. Yuck! So I added double checking: https://github.com/dlang/phobos/pull/5421/commits/6ef3b5a6eacfe82239b7bbc4b0bc9f38adc6fe91 With the double checking: 1.372 seconds. So back to sanity. Thanks for asking me to measure! This is pretty decent. I still prefer the static ctor solution, but this is workable. -Steve
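The "double checking" that recovered the performance can be pictured with a hedged sketch (one possible shape, not the actual Phobos code; `expensiveResource` and the flag are illustrative): a cheap unsynchronized flag test skips the synchronized `initOnce` machinery entirely in the steady state, which is why the delegate is never called after the first access.

```d
import std.concurrency : initOnce;

private __gshared int* expensiveResource; // illustrative lazily-built global
private bool seen; // thread-local fast-path flag (one per thread)

int* getResource()
{
    if (!seen) // fast path: plain load, no lock, no delegate call
    {
        // Slow path: synchronized; the initializer runs exactly once
        // across all threads, on first access.
        initOnce!expensiveResource(new int(42));
        seen = true;
    }
    return expensiveResource;
}

void main()
{
    assert(*getResource() == 42);
    assert(getResource() is getResource()); // same object; init ran once
}
```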
Re: Anyone tried to emscripten a D/SDL game?
On Wednesday, 24 May 2017 at 17:06:55 UTC, Guillaume Piolat wrote: On Wednesday, 24 May 2017 at 17:00:51 UTC, Nick Sabalausky "Abscissa" wrote: Anyone have any experience (successful or unsuccessful) attempting this? Any info on the current state of it, or pitfalls, or pointers for getting started? http://code.alaiwan.org/wp/?p=103 It seems that Dart can be compiled to WASM https://medium.com/dartlang/dart-on-llvm-b82e83f99a70 Is it possible to do the same with D?
Re: Anyone tried to emscripten a D/SDL game?
On Wednesday, 24 May 2017 at 17:06:55 UTC, Guillaume Piolat wrote: http://code.alaiwan.org/wp/?p=103 Awesome, thanks!
Re: Anyone tried to emscripten a D/SDL game?
On Wednesday, 24 May 2017 at 17:00:51 UTC, Nick Sabalausky "Abscissa" wrote: Anyone have any experience (successful or unsuccessful) attempting this? Any info on the current state of it, or pitfalls, or pointers for getting started? http://code.alaiwan.org/wp/?p=103
Anyone tried to emscripten a D/SDL game?
Anyone have any experience (successful or unsuccessful) attempting this? Any info on the current state of it, or pitfalls, or pointers for getting started?
Re: Simplifying druntime and phobos by getting rid of "shared static this()" blocks
On 5/24/17 4:49 PM, Steven Schveighoffer wrote: On 5/23/17 3:47 PM, Andrei Alexandrescu wrote: https://github.com/dlang/phobos/pull/5421 Looking forward to more in the same vein, please contribute! We have 25 left in phobos and 12 in druntime. A big one will be making the GC lazily initialize itself. -- Andrei So every time I do: writeln(...) It has to go through a check to see if it's initialized? Using a delegate? The delegate is not called in the steady state. Has the performance of this been tested? Always a good idea. My test bed: void main() { import std.stdio; foreach (i; 0 .. 10_000_000) writeln("1234567890"); } On my laptop using dmd, phobos master, best of 21 runs using "time test >/dev/null": 1.371 seconds. With initOnce: 1.469 seconds. Yuck! So I added double checking: https://github.com/dlang/phobos/pull/5421/commits/6ef3b5a6eacfe82239b7bbc4b0bc9f38adc6fe91 With the double checking: 1.372 seconds. So back to sanity. Thanks for asking me to measure! Andrei
Re: Simplifying druntime and phobos by getting rid of "shared static this()" blocks
On Wednesday, 24 May 2017 at 15:49:58 UTC, Steven Schveighoffer wrote: On 5/23/17 3:47 PM, Andrei Alexandrescu wrote: https://github.com/dlang/phobos/pull/5421 Looking forward to more in the same vein, please contribute! We have 25 left in phobos and 12 in druntime. A big one will be making the GC lazily initialize itself. -- Andrei So every time I do: writeln(...) It has to go through a check to see if it's initialized? Using a delegate? It also copies every argument four times.
Re: Weak Eco System?
On 22 May 2017 at 18:56, qznc via Digitalmars-d wrote: > On Thursday, 18 May 2017 at 05:43:48 UTC, Manu wrote: > >> On 17 May 2017 at 00:51, Benro via Digitalmars-d < >> digitalmars-d@puremagic.com> wrote: >> >> [...] >>> >>> 4 Hours work. Discouraged and gave up after this. >>> >>> >> Visual Studio proper is the only IDE that 'just works' well, VisualD is >> very good. >> MonoDevelop also has good 'just works' support last I checked, but >> debugging is much better in Visual Studio. >> > > We could use a more precise statement than "just works". > I said: "'just works' well"... It's a deliberately imprecise statement. For me, it means, it didn't require ANY action from me to make it work, and it 'just works' well enough that I rarely think about it; it tends to do what I expect, when I expect it to, as compared to other industry standard tooling. Since this is a Visual Studio plugin, it is being measured against the VC++ and C# experiences.
Re: Simplifying druntime and phobos by getting rid of "shared static this()" blocks
On 5/23/17 4:42 PM, Andrei Alexandrescu wrote: On 05/23/2017 03:47 PM, Andrei Alexandrescu wrote: https://github.com/dlang/phobos/pull/5421 Looking forward to more in the same vein, please contribute! We have 25 left in phobos and 12 in druntime. A big one will be making the GC lazily initialize itself. -- Andrei In the same vein: https://github.com/dlang/phobos/pull/5423 -- Andrei This one I 100% agree with! -Steve
Re: Simplifying druntime and phobos by getting rid of "shared static this()" blocks
On 5/23/17 3:47 PM, Andrei Alexandrescu wrote: https://github.com/dlang/phobos/pull/5421 Looking forward to more in the same vein, please contribute! We have 25 left in phobos and 12 in druntime. A big one will be making the GC lazily initialize itself. -- Andrei So every time I do: writeln(...) It has to go through a check to see if it's initialized? Using a delegate? Has the performance of this been tested? I agree the stdiobase thing is ugly. We could also fix this by improving the cycle detection mechanism (e.g. you could tag the static ctor that initializes the handles to say it doesn't depend on any other static ctors, and just put it in stdio). -Steve
Re: Off-topic discussions (WAS: OT: Re: My two cents on what D needs to be more successful...)
On 5/22/17 8:21 PM, Mike Parker wrote: On Monday, 22 May 2017 at 20:49:22 UTC, Ecstatic Coder wrote: Then, with all due respect, please remove these posts... IMHO, they are so incredibly off-topic that I don't see why they should remain here to pollute the pages of a D language forum. The forum is a web frontend for a newsgroup server, which also has a mailing list interface. I think the NG server does support message deletion, and that may cause it to disappear from the web frontend, but it will still be in NG readers and inboxes. It does support that. When I used Opera for my NNTP client, I could actually remove my own posts (I would do this if I made a huge typo, or posted by accident). I don't think Thunderbird supports that (and Opera discontinued their client). However, I think the web forum software doesn't obey those deletions. And it's so darned fast, I'm sure it picks the erroneous message up before deletion ;) -Steve
Re: Runtime?
On Wednesday, 24 May 2017 at 15:07:42 UTC, Steven Schveighoffer wrote: However, I don't know the true answer. It may actually be 1.7MB of "bookkeeping", but I doubt that... It's a bit of pre-allocation and some internal bookkeeping. void main() { import std.stdio; import core.memory; writeln(GC.stats); } // used, free Stats(256, 1048320) The remaining .7MB could probably be attributed to some internal data structures.
Re: My two cents on what D needs to be more successful...
On Wednesday, 24 May 2017 at 14:49:28 UTC, Adam D. Ruppe wrote: On Wednesday, 24 May 2017 at 12:09:02 UTC, jmh530 wrote: Just wanted to say that I was glad you did this. It makes it that much easier to play around with your stuff. So you use the subpackages or try to use the top level thing? Also have you had trouble with the ~master vs tag thing? dub just doesn't agree with my dev method at all. I had only noticed it was on dub recently so haven't had a chance to play around with it that way. I had only used it the old-fashioned way.
Re: Ali's slides from his C++Now talk
On 24 May 2017 at 09:31, Joakim via Digitalmars-d < digitalmars-d@puremagic.com> wrote: > Enjoying going through these: > > http://ddili.org/AliCehreli_CppNow_2017_Competitive_Advantag > e_with_D.no_pause.pdf > > Ali really has a gift for explaining stuff, we're lucky to have him. > Hehe, I'm honoured to be quoted... verbatim :)
Re: Runtime?
On 5/24/17 6:34 AM, Moritz Maxeiner wrote: On Wednesday, 24 May 2017 at 07:54:04 UTC, Wulfklaue wrote: Great ... accidentally pressed send. So my question was: Why does even a simple empty statement like this, compiled with DMD, show under Windows an almost 1.7MB memory usage? Because the garbage collector (GC) allocates a sizable chunk of memory from the operating system at program startup, from which it then in turn allocates memory for you when you use things like dynamic closures, `new XYZ`, etc. Note, you can usually allocate close to the entire address space, but if you never access it, it's not actually used in the RAM of your computer. So it's possible D is pre-allocating a bunch of space from the OS, but it's not actually consuming RAM. However, I don't know the true answer. It may actually be 1.7MB of "bookkeeping", but I doubt that. The GC structures aren't *that* large, I'd expect at most 10 pages for the small-chunk bins, and a few more for bits. Possibly the static segment is large, but 1.7 MB is around 400+ pages. It's likely that most of that is just a pre-allocation of a large contiguous space "just in case", but it's not actually wired to RAM yet. -Steve
Re: Idea: Reverse Type Inference
On 5/24/17 10:58 AM, Steven Schveighoffer wrote: On 5/24/17 10:28 AM, Ola Fosheim Grøstad wrote: Ok, well, I see templates in this context as a variation of overloading, just with the template parametric type being a set of types i.e. all types that the program provides, minus the ones prevented by contraints. I guess it is a matter of vantage point. It would be odd to allow this for templates only and not provide it for functions... I believe that IFTI would be much more straightforward than overloading, because there is no concern about implicit conversion. In fact, you could simulate overloading of return values based on IFTI instantiation: void fooImpl(ref int retval, int x) { ... } void fooImpl(ref string retval, int x) { ... } T foo(T)(int x) { T t; fooImpl(t, x); return t; } int x = foo(1); string y = foo(2); -Steve
Re: Runtime?
On Wednesday, 24 May 2017 at 14:26:42 UTC, Wulfklaue wrote: On Wednesday, 24 May 2017 at 10:34:02 UTC, Moritz Maxeiner wrote: Because the garbage collector (GC) allocates a sizable chunk of memory from the operating system at program startup, from which it then in turn allocates memory for you when you use things like dynamic closures, `new XYZ`, etc. Well, that sounds just silly. Before making such a judgment you may want to consider the cost of system calls, how memory fragmentation affects a conservative GC, and why we cannot have a different kind of GC without unacceptable trade-offs. Yes, it is. It is the price we have to pay for having a GC. If the overhead bothers you, you might want to compare it with other garbage-collected languages (such as Go). Go is a different animal because of the concurrent GC. Not the point.
Re: Idea: Reverse Type Inference
On 5/24/17 10:28 AM, Ola Fosheim Grøstad wrote: On Wednesday, 24 May 2017 at 13:03:37 UTC, Steven Schveighoffer wrote: This is different. It's IFTI based on return type. Well, the way I see it it is a special case of top-down type inference. Yes, you also have to instantiate the template, but I assume that happens after type inference is complete? So my point was more: why not cover the general case, if you are going to add it as a feature? auto p = Point(row.get("x"), row.get("y")); // with better IFTI Ok, well, I see templates in this context as a variation of overloading, just with the template parametric type being a set of types i.e. all types that the program provides, minus the ones prevented by contraints. I guess it is a matter of vantage point. It would be odd to allow this for templates only and not provide it for functions... I believe that IFTI would be much more straightforward than overloading, because there is no concern about implicit conversion. -Steve
Re: My two cents on what D needs to be more successful...
On Wednesday, 24 May 2017 at 12:09:02 UTC, jmh530 wrote: Just wanted to say that I was glad you did this. It makes it that much easier to play around with your stuff. So do you use the subpackages, or try to use the top-level thing? Also, have you had trouble with the ~master vs tag thing? dub just doesn't agree with my dev method at all.
Re: My two cents on what D needs to be more successful...
On Wednesday, 24 May 2017 at 04:14:39 UTC, rikki cattermole wrote: They can be removed, but that means projects stop being built, hence not wanting to remove them. That old one hasn't actually even compiled for over a year - I still sometimes get bug reports that things don't build cuz of deprecation errors, and it is because of that old package. There's nothing gained and some lost. I think all packages without a push for over two years should be removed or reassigned to a new owner (make the dub name refer to a newer fork of the original git repo).
Re: Runtime?
On Wednesday, 24 May 2017 at 14:26:42 UTC, Wulfklaue wrote: Well, that sounds just silly. You can bypass it by doing an `extern(C)` main in D... but really, the 1 MB preallocation isn't much for small programs and for larger programs it will be used anyway, so you aren't actually losing anything.
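A minimal sketch of the bypass mentioned above (assumed setup: compile with `dmd -betterC` or otherwise link without druntime). With an `extern(C)` entry point, druntime is never initialized, so there is no GC pre-allocation, but also none of the GC-backed features:

```d
// druntime is not initialized here: no GC reserve is allocated,
// but `new`, dynamic closures, AAs, etc. are unavailable.
extern (C) int main()
{
    return 0;
}
```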
Re: Idea: Reverse Type Inference
On Wednesday, 24 May 2017 at 13:03:37 UTC, Steven Schveighoffer wrote: This is different. It's IFTI based on return type. Well, the way I see it, it is a special case of top-down type inference. Yes, you also have to instantiate the template, but I assume that happens after type inference is complete? So my point was more: why not cover the general case, if you are going to add it as a feature? auto p = Point(row.get("x"), row.get("y")); // with better IFTI Ok, well, I see templates in this context as a variation of overloading, just with the template parametric type being a set of types, i.e. all types that the program provides, minus the ones prevented by constraints. I guess it is a matter of vantage point. It would be odd to allow this for templates only and not provide it for functions...
Re: Runtime?
On Wednesday, 24 May 2017 at 10:34:02 UTC, Moritz Maxeiner wrote: Because the garbage collector (GC) allocates a sizable chunk of memory from the operating system at program startup, from which it then in turn allocates memory for you when you use things like dynamic closures, `new XYZ`, etc. Well, that sounds just silly. Yes, it is. It is the price we have to pay for having a GC. If the overhead bothers you, you might want to compare it with other garbage collected languages (such as Go). Go is a different animal because of the concurrent GC.
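To see the GC's own reserve rather than OS-level figures, druntime can report it directly via `core.memory.GC.stats` (available in recent druntime versions) - a quick way to check how much of the startup footprint is the pre-allocated pool:

```d
import core.memory : GC;
import std.stdio : writeln;

void main()
{
    auto s = GC.stats();
    // usedSize + freeSize approximates what the GC has
    // taken from the operating system so far.
    writeln("GC used: ", s.usedSize, " bytes");
    writeln("GC free: ", s.freeSize, " bytes");
}
```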
Re: Ali's slides from his C++Now talk
On 05/24/2017 01:34 AM, Wulfklaue wrote: Page 68: "CTFE is being wastly improved by Stefan Koch.". Wastly? ;) I was paging through quickly so nobody would notice. And of course I must have meant CTFE perforwance... :o) Ali
Re: My two cents on what D needs to be more successful...
On Wednesday, 24 May 2017 at 04:04:53 UTC, Adam D. Ruppe wrote: On Tuesday, 23 May 2017 at 21:21:45 UTC, Guillaume Piolat wrote: [...] So actually, the dub thing for mine is http://code.dlang.org/packages/arsd-official/~master the other was a third party thing that is now obsolete (and imo this shows one of the weaknesses of dub right now... the list order is just most recent explicit update and old things can't be removed afaik) [...] oh my simpledisplay does that too! and 2d/3d drawing. and combined with minigui does user interface (though minigui still isn't quite done) I am tempted to package my stuff with dmd as a one-stop download... but idk if it is really beneficial since my stuff is so easy to download and use anyway with my policy of independent files. Actually the dub version is much easier/convenient for me. Ha ha
Re: Idea: Reverse Type Inference
On 5/23/17 5:20 AM, Ola Fosheim Grøstad wrote: On Monday, 22 May 2017 at 10:13:02 UTC, Sebastiaan Koppe wrote: Over the past weeks I have been noticing a specific case where it happens. I call it reverse type inference, simply because it goes against the normal evaluation order. I think what you want, in the general sense, is basically overloading based on return type. Which I think is a very useful and cool feature. This is different. It's IFTI based on return type. BTW, Swift does this too, and I have to say it's useful in a lot of cases. For example, any time you have a variant-like interface and an already existing type, something like this looks so much cleaner:

struct Point { int x; int y; }
auto row = getRowFromDB();

auto p = Point(row.get!int("x"), row.get!int("y")); // currently required
auto p = Point(row.get!(typeof(Point.x))("x"), row.get!(typeof(Point.y))("y")); // more generic, but horribly ugly
auto p = Point(row.get("x"), row.get("y")); // with better IFTI

-Steve
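For reference, the first two variants from Steve's example compile today; here they are in a self-contained form, with `Row` as a hypothetical stand-in for the DB row type (`getRowFromDB` is not shown in the thread):

```d
struct Point { int x; int y; }

// Stand-in for a DB row: get!T parses the stored string as T.
struct Row
{
    import std.conv : to;
    string[string] data;
    T get(T)(string name) { return data[name].to!T; }
}

void main()
{
    auto row = Row(["x": "3", "y": "4"]);

    // Works today, but repeats the field types:
    auto p = Point(row.get!int("x"), row.get!int("y"));

    // Generic but verbose - the "horribly ugly" variant from the post:
    auto q = Point(row.get!(typeof(Point.x))("x"),
                   row.get!(typeof(Point.y))("y"));

    assert(p == q && p.x == 3 && p.y == 4);
}
```

The third variant, `Point(row.get("x"), row.get("y"))`, is the one that would require the proposed return-type-driven IFTI.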
Re: My two cents on what D needs to be more successful...
On Wednesday, 24 May 2017 at 04:04:53 UTC, Adam D. Ruppe wrote: On Tuesday, 23 May 2017 at 21:21:45 UTC, Guillaume Piolat wrote: http://code.dlang.org/packages/arsd So actually, the dub thing for mine is http://code.dlang.org/packages/arsd-official/~master Just wanted to say that I was glad you did this. It makes it that much easier to play around with your stuff.
Re: Ali's slides from his C++Now talk
On Wednesday, 24 May 2017 at 08:34:14 UTC, Wulfklaue wrote: Page 68: "CTFE is being wastly improved by Stefan Koch.". Wastly? ;) I assume they meant "vastly".
Re: Runtime?
On Wednesday, 24 May 2017 at 07:54:04 UTC, Wulfklaue wrote: Great ... accidentally pressed send. So my question was: Why does even a simple empty program like this, compiled with DMD, show under Windows an almost 1.7 MB memory usage? Because the garbage collector (GC) allocates a sizable chunk of memory from the operating system at program startup, from which it then in turn allocates memory for you when you use things like dynamic closures, `new XYZ`, etc.

void main() { while(true){} }

The same in C/C++ is simply 0.1 MB. This is why I asked the question if the runtime is somehow responsible? Yes, it is. It is the price we have to pay for having a GC. If the overhead bothers you, you might want to compare it with other garbage collected languages (such as Go). PS: This might belong in Learn, instead of General.
Re: Any video editing folks n da house?
On Wednesday, 24 May 2017 at 09:27:59 UTC, Andrei Alexandrescu wrote: I'm thinking publicly available videos so the footage is already out there. Publicly available videos are already compressed. Rikki needs the original source video and audio. Working on a compressed video to create another compressed video simply results in a lower-quality video (and a less professional-looking one).
Re: Any video editing folks n da house?
rikki cattermole wrote: > On 23/05/2017 9:23 PM, Andrei Alexandrescu wrote: >> Would be great to have a video intro of D featuring a mix of testimonies >> from a few folks, fragments from the existing DConf materials, etc. Some >> of that cool ukulele upbeat music, too. Anyone would enjoy taking up >> such a project? -- Andrei > > If you can provide the raw footage, I'm sure a few of us will give it a > go (I have licenses for music which I can use for this stuff). > I'm thinking publicly available videos so the footage is already out there.
Re: Ali's slides from his C++Now talk
On Wednesday, 24 May 2017 at 03:57:07 UTC, Ali Çehreli wrote: Not yet but I heard the videos will be posted here: https://www.youtube.com/user/BoostCon Ali Page 68: "CTFE is being wastly improved by Stefan Koch.". Wastly? ;)
Re: Runtime?
Great ... accidentally pressed send. So my question was: Why does even a simple empty program like this, compiled with DMD, show under Windows an almost 1.7 MB memory usage?

void main() { while(true){} }

The same in C/C++ is simply 0.1 MB. This is why I asked the question if the runtime is somehow responsible?
Re: Runtime?
On Tuesday, 23 May 2017 at 15:36:45 UTC, Wulfklaue wrote: A quick question. After watching the DLang 2017 conference, there was mention about the runtime optimizing that is going on. Will this have an impact on the default memory usage (on small projects)? Why does even a simple empty: void main() { while(true) {} }