Re: shared - i need it to be useful
On Thursday, 18 October 2018 at 17:17:37 UTC, Atila Neves wrote: [snip] Assuming this world... how do you use shared? https://github.com/atilaneves/fearless I had posted your library before to no response... I had two questions, if you'll indulge me. The first is perhaps more wrt automem. I noticed that I couldn't use automem's Unique with @safe currently. Is there any way to make it @safe, perhaps with dip1000? Second, Rust's borrow checker allows only one mutable borrow at a time. This is kind of like Exclusive, but it is enforced at compile-time, rather than with GC/RC. Is this something that can be incorporated into fearless?
Re: shared - i need it to be useful
On Wednesday, 17 October 2018 at 07:20:20 UTC, Manu wrote: [snip] Oh bollocks... everyone has been complaining about this for at least the 10 years I've been here! [snip] As far as I had known from reading the forums, shared was not feature complete. Also, are you familiar with Atila's fearless library for safe sharing? https://github.com/atilaneves/fearless
Re: shared - i need it to be useful
On Wednesday, 17 October 2018 at 05:40:41 UTC, Walter Bright wrote: On 10/15/2018 11:46 AM, Manu wrote: [...] Shared has one incredibly valuable feature - it allows you, the programmer, to identify data that can be accessed by multiple threads. There are so many ways that data can be shared, the only way to comprehend what is going on is to build a wall around shared data. (The exception to this is immutable data. Immutable data does not need synchronization, so there is no need to distinguish between shared and unshared immutable data.) [snip] Isn't that also true for isolated data (data that only allows one alias)?
Re: shared - i need it to be useful
On Monday, 15 October 2018 at 20:44:35 UTC, Manu wrote: [snip] Are you saying `is(immutable(int) == shared) == true`? From the spec: "Applying any qualifier to immutable T results in immutable T. This makes immutable a fixed point of qualifier combinations and makes types such as const(immutable(shared T)) impossible to create." Example:

```
import std.stdio : writeln;

void main()
{
    writeln(is(immutable(int) == shared immutable(int)) == true); // prints true
}
```
Re: shared - i need it to be useful
On Monday, 15 October 2018 at 18:46:45 UTC, Manu wrote: Okay, so I've been thinking on this for a while... I think I have a pretty good feel for how shared is meant to be. 1. shared should behave exactly like const, except in addition to inhibiting write access, it also inhibits read access. Are you familiar with reference capabilities[1] in the pony language? They describe many of them in terms of read/write uniqueness. Another way they describe them [2] is in denying aliases, like deny global read alias. [1] https://tutorial.ponylang.io/capabilities/reference-capabilities.html [2] See page 12-14: http://www.doc.ic.ac.uk/~scd/Pony-WG2.16.pdf
Re: Please don't do a DConf 2018, consider alternatives
On Wednesday, 3 October 2018 at 18:46:02 UTC, Joakim wrote: Except that you can also view the videos at home, then discuss them later at a conference, which is the actual suggestion here. Maybe that would work better with a smaller group? I imagine some people are too busy to do that beforehand. Another thing that might work would be to have everybody read through the presentations beforehand and then just have questions. That doesn't work so well when there are live code examples though.
Re: filtered imports
On Thursday, 13 September 2018 at 17:54:03 UTC, Vladimir Panteleev wrote: [snip] However, it is solved with an alias: alias write = std.stdio.write; (or using selective imports or fully-qualified identifiers for either module of course). That's a nice trick!
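For anyone curious, here is a minimal self-contained sketch of the trick, using the classic clash between std.stdio.write and std.file.write (this reconstruction is my own, not from the thread):

```d
import std.stdio;
import std.file;

// Both std.stdio and std.file declare `write`, so a bare call would be
// ambiguous. A module-level alias picks the overload set we want:
alias write = std.stdio.write;

void main()
{
    write("hello\n"); // unambiguously std.stdio.write
}
```

The alias works because a declaration in the current module takes precedence over imported symbols during lookup.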
Re: More fun with autodecoding
On Wednesday, 12 September 2018 at 12:45:15 UTC, Nicholas Wilson wrote: Overloads: [snip] Good point.
Re: More fun with autodecoding
On Tuesday, 11 September 2018 at 02:00:29 UTC, Nicholas Wilson wrote: [snip] https://github.com/dlang/DIPs/pull/131 will help narrow down the cause. I like it, but I worry people would find multiple ifs confusing. The first line of the comment is about using static asserts and in contracts, but it looks like static asserts are allowed in in contracts for functions [1]. You can do the same thing in structs/classes with invariant blocks (but in contracts are not allowed). So basically, the same behavior for if can be reduced to in contracts with static asserts already. Multiple ifs would just be a slightly less verbose way to accomplish the same thing. I suppose one issue might be that contracts are not compiled in during release mode, but I think release only impacts normal asserts, not static asserts. Is there any reason why this is not sufficient? [1] https://run.dlang.io/is/lu6nQ0
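To make the point concrete, here is a small sketch of a static assert inside an in contract (names are made up; whether -release affects the static check is exactly the open question above, since static asserts fire during semantic analysis at instantiation, not at run time):

```d
// The static assert acts like a template constraint, checked when the
// template is instantiated; the ordinary assert is a run-time contract.
T twice(T)(T x)
in
{
    static assert(__traits(isArithmetic, T), "T must be arithmetic");
    assert(x < T.max / 2);
}
do
{
    return cast(T)(x * 2);
}

void main()
{
    assert(twice(21) == 42);
    // twice("hi"); // would fail the static assert at compile time
}
```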
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Friday, 24 August 2018 at 17:12:53 UTC, H. S. Teoh wrote: [snip] This is probably completely unrealistic, but I've been thinking about the possibility of adding *all* D codebases to the CI infrastructure, including personal projects and what-not. Set it up such that any breakages send a notification to the author(s) in advance of a PR being checked in, so that they have time to respond. I'm not sure how this would work in practice since you have to deal with dead / unmaintained projects and/or slow/unresponsive authors, and some PRs you might want to push through regardless of breakage. But it would be nice to know exactly how much code we're breaking out there. A worthy goal. If you could get some download statistics from dub (e.g. total downloads over the past month), then you could probably create a few buckets and rules to make sure that there aren't breakages in the most downloaded projects, while not worrying about dead projects that aren't being downloaded anyway.
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Friday, 24 August 2018 at 16:00:10 UTC, bachmeier wrote: You simply can't share a D program with anyone else. It's an endless cycle of compiler upgrades and figuring out how to fix code that stops compiling. It doesn't work for those of us that are busy. Why there is not a stable branch with releases once a year is quite puzzling. (And no, "just use the old compiler" is not an answer.) ...hmm...I can't recall anyone ever suggesting to have a stable branch. It's a good idea. That being said, I see forward progress on reducing breakage. The CI infrastructure has improved a lot and there are a number of dub projects that also get checked.
Re: Is @safe still a work-in-progress?
On Thursday, 23 August 2018 at 23:36:07 UTC, Chris M. wrote: Heck, now that I'm looking at it, DIP25 seems like a more restricted form of Rust's lifetimes. Let me know if I'm just completely wrong about this, but [snip] Check out DIP1000 https://github.com/dlang/DIPs/blob/master/DIPs/DIP1000.md
Re: Is @safe still a work-in-progress?
On Friday, 17 August 2018 at 14:26:07 UTC, H. S. Teoh wrote: [...] And that is exactly why the whole implementation of @safe is currently rather laughable. By blacklisting rather than whitelisting, we basically open the door wide open to loopholes -- anything that we haven't thought of yet could potentially be a @safe-breaking combination, and we wouldn't know until somebody discovers and reports it. Sadly, it seems there is little interest in reimplementing @safe to use whitelisting instead of blacklisting. T Fundamentally, I see it as a good idea. Walter has talked about how important memory safety is for D. People thinking their @safe code is safe is a big problem when that turns out not to be the case. Imagine the black eye D would have if a company was hacked because of something like this. IMO, the problem is that you can't just replace @safe as it is now. You could introduce something like @whitelist or @safewhitelist and begin implementing it, but it would probably be some time before it could replace @safe, i.e. once @whitelist only breaks code that is genuinely unsafe.
Re: [OT] Re: C's Biggest Mistake on Hacker News
On Tuesday, 31 July 2018 at 12:02:55 UTC, Kagamin wrote: On Saturday, 28 July 2018 at 19:55:56 UTC, bpr wrote: Are the Mozilla engineers behind it deluded in that they eschew GC and exceptions? I doubt it. They are trying to outcompete Chrome in bugs too. You're not Mozilla. And why you mention exceptions, but not bounds checking? Firefox has been complete garbage on my work computer ever since the Quantum update. Works fine at home though.
Re: ndslice v2 is coming soon
On Sunday, 29 July 2018 at 06:17:29 UTC, 9il wrote: Hi, PR: https://github.com/libmir/mir-algorithm/pull/143 Features === * Slice and Series are C++ ABI compatible without additional wrappers. See example: https://github.com/libmir/mir-algorithm/tree/devel/cpp_example * Intuitive API with default params and without explicit dimension packs

```
alias Slice = mir_slice;
struct mir_slice(Iterator, size_t N = 1, SliceKind kind = Contiguous)
```

For example, double[] analog is just Slice!(double*) / mir_slice. Best, Ilya Yaroshenko This might break some stuff in numir that depends on the current behavior, but certainly looks interesting. I haven't had a chance to go through everything as that's a big PR.
Re: std.experimental.collections.rcstring and its integration in Phobos
On Wednesday, 18 July 2018 at 11:56:39 UTC, Seb wrote: [snip] Yes, Array is a reference-counted Array, but it also has a reference-counted allocator. I see. Is it really a good idea to make the ownership/lifetime strategy part of the container? What happens when you want to make nogc collections for lists, trees, etc? You have to make multiple versions for unique/ref counted/some new strategy? I would think it is more generic to have it a separate wrapper that handles the ownership/lifetime strategy, like what exists in automem and C++'s smart pointers...though automem looks like it has a separate type for Unique_Array rather than including it in Unique...so I suppose that potentially has the same issue...
Re: std.experimental.collections.rcstring and its integration in Phobos
On Wednesday, 18 July 2018 at 11:56:39 UTC, Seb wrote: [snip] I think part of the above design decision connects in with why rcstring stores the data as ubytes, even for wchar and dchar. Recent comments suggest that it is related to auto-decoding. Yes rcstring doesn't do any auto-decoding and hence stores its data as a ubyte array. My sense is that an rcstring that does not have auto-decoding, even if it requires more work to get working with Phobos, is a better solution over the long run. What do you mean by this? Just that there are a lot of complaints about D's auto-decoding of strings. Not doing any auto-decoding seems like a good long-run design decision, even if it makes some things more difficult.
Re: std.experimental.collections.rcstring and its integration in Phobos
On Tuesday, 17 July 2018 at 15:21:30 UTC, Seb wrote: So we managed to revive the rcstring project and it's already a PR for Phobos: [snip] I'm glad this is getting worked on. It feels like something that D has been working towards for a while. Unfortunately, I haven't (yet) watched the collections video from DConf and don't see a presentation on the website. Because of that, I don't really understand some of the design decisions. For instance, I don't really understand how RCIAllocator is different from the old IAllocator (the documentation could use some work, IMO). It looks like RCIAllocator is part of what drives the reference counting semantics, but it also looks like Array has some support for reference counting, like addRef, that invokes RCIAllocator somehow. But Array also has some support for gc_allocator as the default, so my cursory examination suggests that Array is not really intended to be an RCArray... So at that point I started wondering why not just have String as an alias of Array, akin to how D currently treats strings as aliases of dynamic arrays. If there is stuff in rcstring now that isn't in Array, then that could be included in Array as a compile-time specialization for the relevant types (at the cost of bloating Array). And then leave it up to the user how to allocate. I think part of the above design decision connects with why rcstring stores the data as ubytes, even for wchar and dchar. Recent comments suggest that it is related to auto-decoding. My sense is that an rcstring that does not have auto-decoding, even if it requires more work to get working with Phobos, is a better solution over the long run.
Re: Safe Memory Management and Ownership.
On Friday, 13 July 2018 at 17:12:26 UTC, Atila Neves wrote: Rust can do that because it enforces it at compile-time. A D solution wouldn't be able to do anything more with immutable borrows. Hmm, thinking on this a little more...it does seem difficult...but I don't think the problem is with immutable borrows. I think the issue is with the exclusivity of Rust's borrowing. D's immutable is transitive, so if you're using immutable at some point, then no one else can modify it anyway. So you should only be able to immutably borrow something that's immutable anyway. Rust, by contrast, allows immutable borrows of mutable data. In some sense what Rust does corresponds more to const (or maybe head const), but it's more than just const. A Rust immutable borrow of mutable data prevents the mutable data from being modified during the borrow. The Rust example below involves an immutable borrow of mutable data, but it fails to compile because you modify x while it is borrowed. If you put y in a separate scope, then it compiles because x is no longer being borrowed after y exits the scope.

```
fn main() {
    let mut x = 5;
    let y = &x;
    x += 1;
}
```

This exclusivity affects mutable borrows as well. Rust's mutable borrows are exclusive: only one is allowed at a time, so the same trickiness applies. The example below does not compile because y controls x.

```
fn main() {
    let mut x = 5;
    let y = &mut x;
    x += 1;
}
```

The only thing that made sense to me about implementing this at compile-time was a template parameter that could disable things like opAssign. https://run.dlang.io/is/FvJvFv But being able to change the template parameter is tricky. You can cast it at run-time, but other than that it's beyond me. On a separate note, I didn't have any success working with automem and const/immutable types. https://run.dlang.io/is/el4h3e
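For illustration, the template-parameter idea can be sketched roughly as follows (all names here are hypothetical, and this fixes mutability per type at compile time; changing it at run time is exactly the tricky part mentioned above):

```d
// A value whose writability is a compile-time flag. When frozen,
// opAssign is @disabled, so assignments fail to compile.
struct Value(T, bool frozen = false)
{
    private T payload;

    static if (!frozen)
        void opAssign(T rhs) { payload = rhs; }
    else
        @disable void opAssign(T rhs);

    T get() const { return payload; }
}

void main()
{
    Value!int x;
    x = 5;                                   // fine: mutable instance
    Value!(int, true) y = Value!(int, true)(5);
    // y = 6;                                // compile error: opAssign is disabled
    assert(x.get == 5 && y.get == 5);
}
```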
Re: Safe Memory Management and Ownership.
On Friday, 13 July 2018 at 14:47:59 UTC, jmh530 wrote: Sounds interesting. I imagine you could specialize this depending on mutability. Rust allows only one mutable borrow, but eliminated I swear I must have dyslexia or something. Eliminated should be unlimited.
Re: Safe Memory Management and Ownership.
On Friday, 13 July 2018 at 12:43:20 UTC, Atila Neves wrote: The only thing I got from this are that "smooth references" are like Rust's borrows. Which just gave me the idea to add this member function to `Unique`: scope ref T borrow(); I have to think about @safety guarantees but it should be ok with DIP 1000. Atila Sounds interesting. I imagine you could specialize this depending on mutability. Rust allows only one mutable borrow, but eliminated immutable borrows, but you can't mix them. You could also place some restrictions, like dis-allow borrows, only allow immutable borrows, etc.
Re: REPL semantics
On Thursday, 12 July 2018 at 22:17:29 UTC, Luís Marques wrote: I actually never tried the existing REPLs, what are your issues with them? No Windows support. For drepl: "Works on any OS with full shared library support by DMD (currently linux, OSX, and FreeBSD)."
Re: REPL semantics
On Thursday, 12 July 2018 at 22:24:19 UTC, Luís Marques wrote: Right. Hopefully there aren't too many weird cases once that is generalized to other corners of the language. I also never used REPLs for major development, only for debugging and minor tests, so I don't have experience with that style of development where you code everything in the REPL and then save the whole state, which makes it harder for me to evaluate how important certain REPL features are. I primarily use REPLs for prototyping. It just makes some things much easier. For instance, I can load up some functions, libraries, and data and then run a bunch of different statistics models without needing to compile everything again. I can also plot everything as needed without recompiling.
Re: REPL semantics
On Thursday, 12 July 2018 at 21:15:46 UTC, Luís Marques wrote: On Thursday, 12 July 2018 at 20:33:04 UTC, jmh530 wrote: On Thursday, 12 July 2018 at 19:07:15 UTC, Luís Marques wrote: Most REPLs I've used are for languages with dynamic typing. Perhaps take a look at a C REPL and see what it does? Well, cling calls the original function:

```
[cling]$ #import
[cling]$ void foo(long x) { printf("long\n"); }
[cling]$ void bar() { foo(42); }
[cling]$ void foo(int x) { printf("int\n"); }
[cling]$ bar()
long
```

...but to me that doesn't mean much. If it was the other way around (bar was updated to call foo(int)) I think I could safely conclude that it was an intended consequence. But the actual behavior can easily be explained by the fact that that's the most straightforward implementation (especially for a REPL that uses an existing C++ frontend, like clang). I was looking for a more fundamental answer: what would the user prefer to happen? I think most people, at least most people who have used REPLs before, would think that the above should print int. But this is because most REPLs are used with dynamic languages. I don't doubt that it makes sense that it is easier to implement such that it prints long. You're compiling each line as it comes in, so bar compiles to some machine code that can only depend on the definition of foo at the time it is compiled. I think the mental model of someone coming from a dynamic language would be as if bar is dynamically re-compiled when foo(int x) is entered.
Re: REPL semantics
On Thursday, 12 July 2018 at 19:07:15 UTC, Luís Marques wrote: Consider a D REPL session like this:

```
void bar(long x) { writeln(x); }
void foo() { bar(42); }
42
void bar(int) {}
```

Assuming implementation complexity is not an issue, what do you feel is the more natural semantics for a REPL? Should foo now call bar(int), or should it still call bar(long)? (feel free to generalize the issue) I was curious to see what the existing REPLs did, but they seem to have bit rotted and no longer compile. Most REPLs I've used are for languages with dynamic typing. Perhaps take a look at a C REPL and see what it does?
Re: Copy Constructor DIP
On Thursday, 12 July 2018 at 15:42:29 UTC, Luís Marques wrote: On Thursday, 12 July 2018 at 15:33:03 UTC, Andrei Alexandrescu wrote: Again: not the charter of this DIP, so you should ask yourself, not us, this question. Look, I understand it can be frustrating to have a concrete design proposal derailed by a myriad of speculative questions. But if we suspect that design decision of a DIP might interact poorly with other plausible future D features, should we not at least express our concerns and hopes? By the time the other DIPs come out it might be too late to address the concerns. In any case, I hope my comments were not too out of bounds of the discussion. If so, I'm sorry. I like the idea of implicit conversions (until I've been convinced otherwise at least), but I don't necessarily think this DIP will interact poorly with it. They could be implemented with a new opImplicitCast. Less elegantly, you could have special behavior when @implicit is used with opCast.
Re: Sutter's ISO C++ Trip Report - The best compliment is when someone else steals your ideas....
On Wednesday, 11 July 2018 at 16:17:30 UTC, Jacob Carlborg wrote: The boot time of my computer was reduced from several minutes to around 30 seconds when I switch to SSD disks. My NVMe ssd is very fast.
Re: dmd optimizer now converted to D!
On Tuesday, 3 July 2018 at 23:05:00 UTC, rikki cattermole wrote: On that note, I have a little experiment that I'd like to see done. How would the codegen change, if you were to triple the time the optimizer had to run? Would it make any difference to compile DMD with LDC?
Re: Interoperability, code & marketing
On Tuesday, 3 July 2018 at 15:23:48 UTC, Seb wrote: In the past A&W became a bit more conservative with creating new repos in the dlang organization and typically only agree to move things there if it "has been proven to be successfully adopted by the community". Though I'm more than happy to create a repo (or move repos) to dlang-community to get the ball rolling. I would say the value is putting it all together with a consistent experience and story. The final location, dlang-community or otherwise, is maybe less important.
Re: Interoperability, code & marketing
On Tuesday, 3 July 2018 at 14:40:52 UTC, Nicholas Wilson wrote: [snip] I think we should have an official repository where such code and documentation lives (i.e. on the dlang github). This would also be a good place to have links to other interoperability successes in D like pyd, dpp, embedr, luad, autowrap etc. and put the D interoperability story out there, coherently and in one place. [snip] Sounds like a great idea to me.
OT: First-Class Statistical Missing Values Support in Julia 0.7
The Julia folks have done some interesting work with missing values that I thought might be of interest [1, 2]. Looks like it would be pretty easy to do something similar in D with either unions or Algebraic. The time-consuming part would be making sure everything works seamlessly with mathematical functions. [1] https://julialang.org/blog/2018/06/missing [2] https://www.reddit.com/r/programming/comments/8spmca/firstclass_statistical_missing_values_support_in/
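As a rough illustration of the Algebraic approach (all names here are hypothetical, and the hard part, making this seamless across mathematical functions, is untouched):

```d
import std.variant : Algebraic;

struct Missing {}

// A double that may be missing, sketched with std.variant.Algebraic.
alias MaybeDouble = Algebraic!(double, Missing);

// Missing propagates through arithmetic, like Julia's `missing`.
MaybeDouble add(MaybeDouble a, MaybeDouble b)
{
    if (a.peek!Missing || b.peek!Missing)
        return MaybeDouble(Missing());
    return MaybeDouble(a.get!double + b.get!double);
}

void main()
{
    auto x = MaybeDouble(1.5);
    auto m = MaybeDouble(Missing());
    assert(add(x, x).get!double == 3.0);
    assert(add(x, m).peek!Missing !is null);
}
```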
Re: An (old/new?) pattern to utilize phobos better with @nogc
On Saturday, 16 June 2018 at 11:58:47 UTC, Dukc wrote: [snip] What are your thoughts? Do you agree with this coding pattern? I like it.
Re: Safe and performant actor model in D
On Thursday, 14 June 2018 at 13:24:06 UTC, Atila Neves wrote: [snip] I need to think about how to do isolated properly. I'll look at vibe.d for inspiration. I took a look at it yesterday, but the class version depended on a long mixin that I didn't feel like fully examining... I did notice that vibe.d's Isolated can be used both safely and unsafely though.
Re: Safe and performant actor model in D
On Wednesday, 13 June 2018 at 13:50:54 UTC, Russel Winder wrote: [snip] Does D have move semantics at the program level or does the use of a garbage collector abrogate the ability of a programmer to have unique references to heap objects. Rust does this by default and Pony allows this and other options with iso, val, ref, etc. Of course the vibe.d Isolated type is similar to the Pony iso. I did not know about vibe.d Isolated.
Re: grain: mir, LLVM, GPU, CUDA, dynamic neural networks
On Tuesday, 12 June 2018 at 11:10:30 UTC, Per Nordlöw wrote: I just discovered https://github.com/ShigekiKarita/grain which seems like a very ambitious and active project for making dynamic neural networks run on the GPU using D in front of mir and CUDA. Are there any long-term goals around this project except for the title? It would great if someone (author) could write a little background-knowledge (tutorial) around the subject of dynamic neural networks that assists all the details in the examples at https://github.com/ShigekiKarita/grain/tree/master/example Further, could parts of grain be refactored out into some generic CUDA-library for use in domains other than dynamic neural networks? Looks interesting, though it seems the author has only just recently tagged the first two releases (3-4 days ago). That doesn't mean that I don't agree with your suggestions (more examples/tutorials, separate GPU & autograd/NN library), just maybe the author has been more focused on basic functionality for now.
Re: Ideas for students' summer projects
On Tuesday, 22 May 2018 at 16:27:05 UTC, Eduard Staniloiu wrote: Hello, everyone! We, at UPB, have initiated D's participation to ROSEdu Summer of Code, see http://soc.rosedu.org/2018/. I will be mentoring a student over the summer and I was wondering if you have any suggestions for a project. If there is a library or feature that you would like just drop an idea. The proposed idea should be something that can be done in 8-10 weeks, though, ideally, we hope that the student/s will continue to contribute to the community after the summer ends. Let the brainstorming begin! GSOC ideas would obviously be a good place to start. https://wiki.dlang.org/GSOC_2018_Ideas
Re: A pattern I'd like to see more of - Parsing template parameter tuples
On Tuesday, 22 May 2018 at 15:25:47 UTC, Jonathan M Davis wrote: Honestly, I hate named arguments in general. This situation is one of the few places I've ever run into where I thought that they made any sense. [snip] It's quite literally the only reason I ever want named arguments.
Re: Extend the call site default argument expansion mechanism?
On Wednesday, 16 May 2018 at 09:01:29 UTC, Simen Kjærås wrote: [snip]

```
struct Foo(int x)
{
    int n = x;
    auto opDispatch(string s)() if (s == "bar")
    {
        n++;
        return n;
    }
}

unittest
{
    int y = 0;
    with(Foo!1())
    {
        y = bar; // Works!
    }
    assert(y == 2);
}
```

-- Simen Thanks for catching that. Any idea why the original was having problems?
Re: Extend the call site default argument expansion mechanism?
On Tuesday, 15 May 2018 at 15:02:36 UTC, jmh530 wrote: [snip] Note: it's not an issue if Foo is not a struct. This was fixed in Bug 6400 [1]. The issue is with template instances. I have filed a new enhancement request [2]. [1] https://issues.dlang.org/show_bug.cgi?id=6400 [2] https://issues.dlang.org/show_bug.cgi?id=18863
Re: Extend the call site default argument expansion mechanism?
On Tuesday, 15 May 2018 at 14:52:46 UTC, Steven Schveighoffer wrote: [snip] It seems opDispatch isn't being used in the with statement. That seems like a bug, or maybe a limitation. I'm not sure how "with" works, but I assumed it would try calling as a member, and then if it doesn't work, try the call normally. Probably it's checking to see if it has that member first. Annoying... -Steve Looks like with statements ignore opDispatch:

```
struct Foo(int x)
{
    auto opDispatch(string s)() if (s == "bar")
    {
        return x++;
    }
}

void main()
{
    int y = 0;
    with(Foo!1)
    {
        y = bar; // error: undefined identifier bar
    }
    assert(y == 2);
}
```
Re: Extend the call site default argument expansion mechanism?
On Tuesday, 15 May 2018 at 14:26:48 UTC, Yuxuan Shui wrote: [snip] Example: https://run.dlang.io/is/RV2xIH Sadly with(WithAlloc!alloc) doesn't work. (If you have to use withAlloc.func everywhere, it kind of destroys the point, doesn't it?) Yeah, I know. I tried it, but couldn't figure out how to get the with statement working with it.
Re: Extend the call site default argument expansion mechanism?
On Tuesday, 15 May 2018 at 13:16:21 UTC, Steven Schveighoffer wrote: [snip] Hm... neat idea. Somehow, opDispatch can probably be used to make this work even more generically (untested):

```
struct WithAlloc(alias alloc)
{
    auto opDispatch(string s, Args...)(auto ref Args args)
        if (__traits(compiles, mixin(s ~ "(args, alloc)")))
    {
        mixin("return " ~ s ~ "(args, alloc);");
    }
}
```

-Steve Example: https://run.dlang.io/is/RV2xIH
Re: Extend the call site default argument expansion mechanism?
On Friday, 11 May 2018 at 11:42:07 UTC, Dukc wrote: [snip] Doesn't this basically mean including the implicits Martin Odersky talked about at DConf in D? I don't know whether it's a good idea all-in-all, but assuming the arguments can be used at compile time I can already see a big use case: killing autodecoding without breaking code. Something like:

```
auto front(C, bool disableDecoding = __NODECODE__)(inout C[] string)
{
    static if (disableDecoding) {...}
    else {...}
}
```

I'm not sure whether this makes sense...but what about instead of implicits, you allow a template to have type-erased parameters, basically to optionally mimic the behavior of Java's generics. That way the allocator could be included in the type and checked at compile-time, but it wouldn't be known at run-time (not sure whether that's a positive or not).
Re: Sealed classes - would you want them in D?
On Friday, 11 May 2018 at 14:05:25 UTC, KingJoffrey wrote: [snip] Actually, it is completely on topic. (although I understand that many on this forum are very eager to shut down any discussion about fixing class encapsulation in D, for some reason). i.e, to be more specific.. so you can understand...my reply to 'do I want sealed classes in D', is simply no. What I want first, is a class that can properly encapsulate itself. Until that occurs, any talk about expanding the class concept with yet more attributes (that probably won't mean what you think they mean), like sealed, is just irrelevant and pushes the problem of broken encapsulation even further down peoples code paths. private is not private at all in D, and because of this, classes are fundamentally broken in D (by design apparently). Now.. I really do have better ways to spend my time. I've made my point. Nobody who uses D seems to think in a similar way, apparently, so I leave it at that. I think the last point in the conversation was "write a DIP". Nothing is going to change unless someone does that. Personally, I don't agree with the idiomatic D doesn't use classes much argument. If that's the case, then they should be removed from the language. The language supports OOP/inheritance and if this is something that makes that experience better, then let it stand on its merits. But a new keyword will not get added without a DIP.
Re: Sealed classes - would you want them in D?
On Thursday, 10 May 2018 at 13:47:16 UTC, rikki cattermole wrote: [snip] Adding a keyword like sealed isn't desirable. I'm trying to find fault with the concept, but it definitely is tough. You basically want protected, but only for specific packages, otherwise final. protected(foo, final) My read was that he wants an escape hatch on final so that he can extend the type within a module without having to worry about people in other modules extending it. So if he has, in one module:

```
module foo;

class A { }
final class B : A { }
```

then he wants to be able to create a new final class C : B { } there, while keeping class B final so that no one can extend it in another module.
Re: Binderoo additional language support?
On Thursday, 10 May 2018 at 07:42:36 UTC, Laeeth Isharc wrote: I made a start at writing a Jupyter library for writing kernels in D. Not sure how long it will be till it's finished, but it is something in time we will need. Note that one would then need to write a D kernel on top, but that bit should be easy. Fantastic.
Re: D GPU execution module: A survey of requirements.
On Thursday, 10 May 2018 at 00:10:07 UTC, H Paterson wrote: Welp... It's not quite what I would have envisioned, but seems to fill the role. Thanks for pointing Dcompute out to me - I only found it mentioned in a dead link on the D wiki. Time to find a new project... I'm sure the people who work on Dcompute (or libmir) would appreciate any help you're willing to provide.
Re: Binderoo additional language support?
On Wednesday, 9 May 2018 at 19:50:41 UTC, Ethan wrote: Been putting that off until the initial proper stable release, it's still in a pre-release phase. But tl;dr - It acts as an intermediary layer between a host application written in C++/.NET and libraries written in D. And as it's designed for rapid iteration, it also supports recompiling the D libraries and reloading them on the fly. Full examples and documentation will be coming. Great. Thanks.
Re: partially mutable immutable type problem, crazy idea
On Tuesday, 8 May 2018 at 22:31:10 UTC, Yuxuan Shui wrote: [snip] This doesn't compile for me on run.dlang.io:

```
onlineapp.d(22): Error: template onlineapp.f cannot deduce function from argument types !()(B), candidates are:
onlineapp.d(1):        onlineapp.f(T)(immutable T a)
```
Re: Binderoo additional language support?
On Monday, 7 May 2018 at 17:28:55 UTC, Ethan wrote: 13 responses so far. Cheers to those 13. I don't really understand what to use binderoo for. So rather than fill out the questionnaire, maybe I would just recommend you do some work on the wiki, a blog post, or some simple examples.
Re: Found on proggit: Krug, a new experimental programming language, compiler written in D
On Tuesday, 1 May 2018 at 18:46:20 UTC, H. S. Teoh wrote: Well, yes. Of course the whole idea behind big O is asymptotic behaviour, i.e., behaviour as n becomes arbitrarily large. Unfortunately, as you point out below, this is not an accurate depiction of the real world: [snip] The example I like to use is parallel computing. Sure, throwing 8 cores at a problem might be the most efficient with a huge amount of data, but with a small array there's so much overhead that it's way slower than a single processor algorithm.
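A minimal sketch of what I mean, using std.parallelism (the array size and names are just illustrative): for a tiny array, the serial sum typically wins because the parallel version pays task-scheduling overhead before any work gets done.

```d
// Sketch: serial vs. parallel reduction over a small array.
// With only 100 elements, taskPool.reduce's thread/task overhead
// usually exceeds the cost of the summation itself.
import std.algorithm.iteration : sum;
import std.array : array;
import std.datetime.stopwatch : AutoStart, StopWatch;
import std.parallelism : taskPool;
import std.range : iota;
import std.stdio : writeln;

void main()
{
    auto small = iota(100).array;

    auto sw = StopWatch(AutoStart.yes);
    immutable s1 = small.sum;                      // single-threaded
    immutable t1 = sw.peek;

    sw.reset();
    immutable s2 = taskPool.reduce!"a + b"(small); // multi-threaded
    immutable t2 = sw.peek;

    assert(s1 == s2);
    writeln("serial: ", t1, "  parallel: ", t2);
}
```

With a large enough array the ranking flips, which is exactly the asymptotic-vs-real-world point.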
Re: Frustrated with dmd codegen bug
On Tuesday, 24 April 2018 at 20:18:55 UTC, Basile B. wrote: [snip] In the report you forgot to mention that the bug is only visible with -O -profile. With just -O the provided test case works fine. I ran the test case on run.dlang.io with just -O and it still happens for me.
Re: Frustrated with dmd codegen bug
On Tuesday, 24 April 2018 at 18:53:02 UTC, H. S. Teoh wrote: [snip] That's definitely weird. Problem seems to go away with a static array. Seems somehow related to impl[0]. Re-writing that as *impl.ptr and breaking apart some of the logic might help narrow down the issue. bool method(int v) { int wordIdx = v >> 6; int bitIdx = v & 0b0011_1111; func(); if (impl.length < wordIdx) { import std.stdio : writeln; auto temp1 = (1UL << bitIdx); writeln(1UL << bitIdx); //testing with v=200, prints 256 writeln(*impl.ptr & 256); //prints 0 auto temp2 = (*impl.ptr & temp1); //program killed //writeln(*impl.ptr & 256); //if uncommented, program not killed return temp2 != 0; } else { return false; } }
Re: http://www.graalvm.org
On Sunday, 22 April 2018 at 23:16:53 UTC, jmh530 wrote: I think there's an option so that LLVM bitcode can be used on it. So conceivably, you could compile with LDC to LLVM bitcode and then run that on Graal. Here is the documentation [1] for this feature. There is some discussion about linking standard libraries in Rust/C++ that might be relevant. [1] http://www.graalvm.org/docs/reference-manual/languages/llvm/
Re: http://www.graalvm.org
On Sunday, 22 April 2018 at 15:13:18 UTC, Robert M. Münch wrote: "GraalVM is a universal virtual machine for running applications written in JavaScript, Python 3, Ruby, R, JVM-based languages like Java, Scala, Kotlin, and LLVM-based languages such as C and C++." They use a special protocol to make data access from different languages transparent and very low cost. Perhaps worth an experiment to see if D can benefit from it. I think there's an option so that LLVM bitcode can be used on it. So conceivably, you could compile with LDC to LLVM bitcode and then run that on Graal.
Re: D vs nim
On Friday, 20 April 2018 at 11:07:30 UTC, Russel Winder wrote: Has anyone got Pony on their list of interesting languages? I had spent some time looking over the reference capabilities [1], but I'm not sure I have the time to actually program in the language. The isolated type seemed like the most interesting take-away. I think someone on the D forums had been trying to get something similar. [1] https://tutorial.ponylang.org/capabilities/reference-capabilities.html
Re: #dbugfix Issue 16486 200$
On Friday, 30 March 2018 at 06:11:22 UTC, 9il wrote: [snip] They are used in mir-lapack [5]. The bug fix also required for mir (Sparse, CompressedTensor), and for the future Dlang image library. I was experimenting with one of the open methods matrix examples [1] and trying to replace the virtual methods with templates. My example [3] does it statically and with structs so that it could be used with mir. However, the moment you try to make it a little bit more complicated and allow the type of the underlying matrix data to be templated, then it hits up against this issue and you can't rely on aliases any more. I would support a DIP. [1] https://github.com/jll63/openmethods.d/tree/master/examples/matrix/source [2] http://www.di.unipi.it/~nids/docs/templates_vs_inheritance.html [3] https://run.dlang.io/gist/9da210e321af95e1ce373b5a6621e4f8
Re: #dbugfix Issue 16486 200$
On Friday, 20 April 2018 at 16:03:40 UTC, jmh530 wrote: On Friday, 30 March 2018 at 06:11:22 UTC, 9il wrote: [...] I was experimenting with one of the open methods matrix examples [1] and trying to replace the virtual methods with templates. My example [3] does it statically and with structs so that it could be used with mir. However, the moment you try to make it a little bit more complicated and allow the type of the underlying matrix data to be templated, then it hits up against this issue and you can't rely on aliases any more. I would support a DIP. [1] https://github.com/jll63/openmethods.d/tree/master/examples/matrix/source [2] http://www.di.unipi.it/~nids/docs/templates_vs_inheritance.html [3] https://run.dlang.io/gist/9da210e321af95e1ce373b5a6621e4f8 Sorry, the stupid thing keeps telling me it's down for maintenance.
Re: #dbugfix 18493
On Monday, 16 April 2018 at 13:01:44 UTC, Radu wrote: A blocker for more advanced 'betterC' usage. https://issues.dlang.org/show_bug.cgi?id=18493
Re: What are AST Macros?
On Monday, 9 April 2018 at 15:30:33 UTC, Stefan Koch wrote: [snip] Using templates to introspect and manipulate types is like using a hammer's flat back to remove a nail. It _can_ be done, but with an absurd amount of work. You just have to pound at the wall around the nail until the wall around the nail has disintegrated :) This is not an exaggeration. Templates used for introspection (or anything else really that's modestly complex) are equally hard to reason about for compilers and for programmers. I guess programmers have an advantage when it comes to _efficient_ pattern recognition. Could AST macros replace things like @safe/@nogc and enable the user to create their own (like a @supersafe that disallows @trusted)?
Re: Thoughts on Herb Sutter's Metaclasses?
On Tuesday, 10 April 2018 at 09:32:49 UTC, Chris Katko wrote: Wow, that thread had very little discussion, and a huge amount of bickering over whether someone actually understood what someone else might have said. My take-away was that it can be done in D, but would be simpler with AST macros and Walter is against AST macros.
Re: =void in struct definition
On Monday, 9 April 2018 at 11:15:14 UTC, Stefan Koch wrote: Not semantically, but you might consider it a performance bug. This particular one could be fixed, but I cannot say how messy the details are. There is potential for code that silently relies on the behavior and would break in very non-obvious ways if we fixed it. If the fix causes non-obvious breakage, then why not a DIP for an opInit that overrides the default initialization and has the desired new functionality? Though it would be annoying to have two ways of doing the same thing...
Re: Deprecating this(this)
On Saturday, 31 March 2018 at 23:38:06 UTC, Andrei Alexandrescu wrote: [snip] * immutable and const are very difficult, but we have an attack (assuming copy construction gets taken care of) Would it be easier if the const/immutable containers were considered separate types? For instance, in the code below, there is InoutFoo and then Foo takes InoutFoo as an alias this (you could do the same thing with immutable, but then you’d have to include two get functions). This would be like inheriting the InoutFoo. With better syntax, InoutFoo would be something like inout(Foo) and the compiler could recognize that the mutable constructor is also defined and to call that when appropriate. struct InoutFoo { int a; this(int b) inout { this.a = b; } int get() inout { return a; } } struct Foo { InoutFoo inoutfoo; alias inoutfoo this; this(int b) { a = b; } void set(int b) { a = b; } } void main() { auto x = immutable InoutFoo(1); auto y = Foo(1); assert(is(typeof(y) : typeof(x))); //x.a++; //not allowed y.a++; assert(x.a == 1); assert(y.a == 2); assert(x.get == 1); y.set(3); assert(y.get == 3); }
Re: newCTFE Status March 2018
On Saturday, 31 March 2018 at 10:38:53 UTC, Simen Kjærås wrote: [snip] So 1.6 years is 10%, the total is 16 years, and there's 14.4 years left. So 2032. -- So, 60% of the time, it works every time. https://www.youtube.com/watch?v=pjvQFtlNQ-M
Re: #dbugfix Issue 16486 200$
On Friday, 30 March 2018 at 15:49:30 UTC, jmh530 wrote: On Friday, 30 March 2018 at 15:21:26 UTC, jmh530 wrote: [snip] Doesn't extend to multiple template parameters that well... [snip] This works, but it's ugly... template testFunction(T, U = TestAlias!T, alias V = TemplateOf!(U), W = TemplateArgsOf!U[1]) { void testFunction(V!(T, W) arg) {} }
Re: #dbugfix Issue 16486 200$
On Friday, 30 March 2018 at 15:21:26 UTC, jmh530 wrote: [snip] Doesn't extend to multiple template parameters that well... import std.traits : TemplateOf; struct TestType(T, U) {} alias TestAlias(T) = TestType!(T, int); template testFunction(T, alias U = TemplateOf!(TestAlias!T)) { void testFunction(U!(T, int) arg) {} //don't want to have to put the int here } void main() { TestAlias!int testObj; testFunction(testObj); }
Re: #dbugfix Issue 16486 200$
On Friday, 30 March 2018 at 13:56:45 UTC, Stefan Koch wrote: On Friday, 30 March 2018 at 06:11:22 UTC, 9il wrote: [1] https://issues.dlang.org/show_bug.cgi?id=16486 Ah that is an interesting bug which further demonstrates that templates are a tricky thing :) Basically you cannot _generally_ prove that one template just forwards to another. Therefore you have to create separate types. And since you create separate types the alias is not an alias but a separate template. Solving this may be possible for special cases but in the general case is infeasible. What about something like this (using the reduced example from the bug report): import std.traits : TemplateOf; struct TestType(T) {} alias TestAlias(T) = TestType!T; template testFunction(T, alias U = TemplateOf!(TestAlias!T)) { void testFunction(U!T arg) {} } void main() { TestAlias!int testObj; testFunction(testObj); }
Re: #dbugfix Issue 16486 200$
On Friday, 30 March 2018 at 06:11:22 UTC, 9il wrote: Hello, Bugfix for the Issue 16486 [1] (originally [2]) is required for mir-algorithm types [3], [4]. For example, packed triangular matrix can be represented as Slice!(Contiguous, [1], StairsIterator!(T*)) Slice!(Contiguous, [1], RetroIterator!(MapIterator!(StairsIterator!(RetroIterator!(T*)), retro))) They are used in mir-lapack [5]. The bug fix also required for mir (Sparse, CompressedTensor), and for the future Dlang image library. Workarounds aren't interesting. 200$ - bounty ( I can pay directly or transfer money to the Dlang Foundation ) Best Regards, Ilya Yaroshenko [1] https://issues.dlang.org/show_bug.cgi?id=16486 [2] https://issues.dlang.org/show_bug.cgi?id=16465 [3] http://docs.algorithm.dlang.io/latest/mir_ndslice_slice.html#Slice [4] http://docs.algorithm.dlang.io/latest/mir_series.html#Series [5] https://github.com/libmir/mir-lapack/blob/master/source/mir/lapack.d Given the recent blog post on std.variant, it occurs to me that this enhancement would also make writing functions that take Option types much easier. I'm adopting some of the code from the blog post below: import std.variant; alias Null = typeof(null); //for convenience alias Option(T) = Algebraic!(T, Null); Option!size_t indexOf(int[] haystack, int needle) { foreach (size_t i, int n; haystack) if (n == needle) return Option!size_t(i); return Option!size_t(null); } auto foo(T)(VariantN!(T.sizeof, T, typeof(null)) x) { return x; } auto bar(T : Option!U, U)(T x) { return x; } auto baz(T)(Option!T x) { return x; } void main() { import std.stdio : writeln; int[] a = [4, 2, 210, 42, 7]; Option!size_t index = a.indexOf(42); writeln(index.foo!size_t); //works //writeln(index.foo); //doesn't work //writeln(index.bar); //doesn't work //writeln(index.baz); //doesn't work }
Re: D vs nim
On Wednesday, 22 April 2015 at 06:03:07 UTC, Timothee Cour wrote: [snip] I would like to refocus this thread on feature set and how it compares to D, not on flame wars about brackets or language marketing issues. In the comparison you made https://github.com/timotheecour/D_vs_nim/ you say the CTFE engine for nim is a register VM. Stefan Koch's new CTFE is a bytecode interpreter. Is there an advantage of one over the other?
Re: rvalues -> ref (yup... again!)
On Friday, 23 March 2018 at 22:01:44 UTC, Manu wrote: Can you please explain these 'weirdities'? What are said "major unintended consequences"? Explain how the situation if implemented would be any different than the workaround? This seems even simpler than the pow thing to me. Rewrite: func(f()); as: { auto __t0 = f(); func(__t0); } How is that worse than the code you have to write: T temp = f(); T zero = 0; func(temp, zero); I feel like this example wasn't really concrete enough for me. I wrote a version below that I think made it a little clearer for myself. - import std.stdio : writeln; struct Foo { int data; } int foo(Foo x) { writeln("here"); return x.data; } int foo(ref Foo x) { writeln("there"); return x.data; } void main() { auto x = Foo(5); auto y = foo(x); writeln(y); auto z = foo(Foo(5)); writeln(z); }
Re: #dbugfix 17592
On Friday, 23 March 2018 at 21:55:52 UTC, Jonathan M Davis wrote: [snip] Walter and Andrei have been discussing putting together a DIP with a "ProtoObject" which will be the new root class below Object where ProtoObject itself has only the bare minimum required to work as a class (no monitor object, no toString, no opEquals, etc.). Classes could then derive from ProtoObject directly instead of from Object, and then they could define any of the functions that are currently on Object with whatever attributes they wanted (or not define them at all). The DIP has not yet been written, and the details still need to be ironed out, but that's the gist of the direction that's currently being considered. - Jonathan M Davis Object would derive from ProtoObject, right?
Re: CTFE ^^ (pow)
On Friday, 23 March 2018 at 18:09:01 UTC, Manu wrote: [snip] Like, in this particular project, being able to generate all tables at compile time is the thing that distinguishes the D code from the C++ code; it's the *whole point*... If I have to continue to generate tables offline and paste big tables of data in the source code (and then re-generate them manually when I change something), then the situation is identical to C++; therefore, stick with C++. You can use import expressions, but then you have to parse the string at compile time to turn it into something useful, I suppose.
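A minimal sketch of that approach (the file name is illustrative, and the file has to be made visible with the -J string-import switch): initializing an enum forces the parsing through CTFE, so the table exists at compile time.

```d
// Sketch: build a table at compile time from a string import.
// Compile with e.g. `dmd -J. app.d`, where table.csv contains "1,2,3".
import std.algorithm.iteration : map, splitter;
import std.array : array;
import std.conv : to;
import std.string : strip;

// enum initializers are evaluated at compile time, so the
// parsing below happens in CTFE, not at run time.
enum int[] table = import("table.csv")
    .splitter(',')
    .map!(s => s.strip.to!int)
    .array;

static assert(table.length > 0);
```

It works, but as Manu implies, writing the compile-time parser is where the real effort goes once the format is nontrivial.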
Re: #dbugfix 17592
On Thursday, 22 March 2018 at 21:37:40 UTC, Basile B. wrote: [snip] I don't say that's the solution. I think there's no solution. I'm not sure there's no solution, but there's definitely no EASY solution. And when I say solution, I don't mean a resolution of this specific issue. I mean to resolve the fundamental issue, which is being able to clean up memory in betterC or @nogc. There was a discussion last year [1] about adding destructors. According to the spec, class deallocators were deprecated in D2, though confusingly I can't write a void delete() {} function, only delete(void*) {}. The delete function (outside of a class) is deprecated in 2.079 though. I'd guess that if deterministic destructors are added to the language, then ~this() and ~this() @nogc inheritance becomes much less of an issue. [1] https://forum.dlang.org/thread/blbydqiupdtmgdunj...@forum.dlang.org?page=1
Re: #dbugfix 17592
On Wednesday, 21 March 2018 at 14:04:58 UTC, Adam D. Ruppe wrote: In Simen's example, the child information is not available at compile time. This line here: A a = new B(); discards the static type. The compiler could probably cheat and figure it out anyway in this example, but suppose: [snip] There are a few interesting things in the class destructor part of the spec. For instance: "There can be only one destructor per class, the destructor does not have any parameters, and has no attributes. It is always virtual." If the destructor has no attributes, how can it be @nogc? Also: "The destructor for the super class automatically gets called when the destructor ends. There is no way to call the super destructor explicitly." This means that you can't write something like below. Actually below gives the correct error. The problem is that if you remove the @nogc on A, then the result is wonky (and no compile-time error) and if they both have @nogc then you can't call destroy. import std.stdio : writeln; class A { @nogc ~this() { writeln("destroy A"); } } class B : A { ~this() { writeln("destroy B"); destroy(super); } } void main() { A a = new B; }
Re: -betterC is amazing, make (/keep making) it more sophisticated!
On Wednesday, 21 March 2018 at 22:48:36 UTC, Seb wrote: I heard that Walter recently ported his DMC++ to D and I heard that someone was working on this, so chances aren't too bad that this might happen ;-) You might check out Atila's github page (I don't think it's ready for release yet).
Re: Flaw in DIP1000? Returning a Result Struct in DIP1000
On Wednesday, 21 March 2018 at 17:13:40 UTC, Jack Stouffer wrote: [snip] How can we return non-scoped result variables constructed from scope variables without copies? If you re-wrote this so that it just had pointers, would it be simpler? Below is my attempt, not sure it's the same... struct Foo { int b; } struct Bar { Foo* a; } Bar bar(scope int* a) @safe { Bar res; Foo x = Foo(*a); res.a = &x; return res; } void main() @safe { int x = 1; bar(&x); }
Re: D beyond the specs
On Friday, 16 March 2018 at 19:15:16 UTC, bachmeier wrote: The point is that there is no "fundamental" reason someone using a computer uses a qwerty keyboard. If you are to ask "what makes the qwerty keyboard the best choice for someone using a computer?" you are not going to have any luck finding the answer (or worse, you will find an answer after sufficient data mining). Similarly for programming language usage. There may have been perfectly good reasons for the early adopters of D, but it's not going to help to look for features of the D language that fit certain cultures better. It may be as simple as someone getting introduced to the D language because of a typo in a Google search. Your "fundamental" reasons are more like "technical" reasons than "economic" reasons. Should a large company buy qwerty keyboards or some other kind? Should a worker invest time in learning how to use a qwerty keyboard or some other kind? Those are questions of economic decision-making. The question that is relevant to decision-makers is rarely about "what keyboard layout is best." Rather it is, how much marginal benefit is there in investing time in learning to use a qwerty keyboard vs. another kind, and what do I have to give up in order to obtain that benefit? If the marginal benefit of learning the standard keyboard layout is larger than some other kind and the cost is approximately the same, then everyone (except some iconoclasts) is going to learn qwerty. This sort of analysis applies to programming languages in exactly the same way. If I'm a company, do I build products using language X or language Y? If I'm a person, do I spend N hours learning language X or language Y (or do the next best thing you can do...March Madness?). What if I already know language X? Then it's pure marginal cost to learn language Y. C programmers don't just switch to D or Rust or whatever the moment they see it has some "technical" features that are better. That's not what we observe.
The marginal benefit has to exceed the marginal cost.
Re: D beyond the specs
On Friday, 16 March 2018 at 16:02:07 UTC, bachmeier wrote: Allow me to put on my economist hat and say you might be looking for explanations when none are required. Much of programming language adoption involves choosing languages others are using (see, well, any conversation about programming languages if you don't think it matters, or even the continued use of C++). There doesn't have to be a reason to settle on a particular language. Perfect example is the qwerty keyboard. There's nothing special about a qwerty keyboard. That is the arrangement of keys that some guy randomly chose many decades back. We continue to use qwerty because that's what we use - not for any particular reason. We use qwerty because that's what the first commercially successful typewriter used. When computers came about, they needed to get people to transition over. Keeping qwerty was the optimal decision because of marginal costs and marginal benefits, not just random decisions. Its creator didn't choose it randomly. He put the keys that were most common where it was easiest to get at them, but the machine jammed if people were typing too quickly, so he made you type the most common letters with your left hand instead of your right. Some people bring up the Dvorak keyboard, but the evidence that it was better was scant and the marginal benefit of switching was too small to justify the cost.
Re: Do we need Mat, Vec, TMmat, Diag, Sym and other matrix types?
On Thursday, 15 March 2018 at 05:04:42 UTC, 9il wrote: [snip] BTW, could you please help with the following issue?! struct S(int b, T) { } alias V(T) = S!(1, T); auto foo (T)(V!T v) { } void main() { V!double v; foo(v); } Error: template onlineapp.foo cannot deduce function from argument types !()(S!(1, double)), candidates are: onlineapp.d(7): onlineapp.foo(T)(V!T v) This is issue 16486 [1], which is very similar to 16465 [2], and it seems like one should be closed as a duplicate. [1] https://issues.dlang.org/show_bug.cgi?id=16486 [2] https://issues.dlang.org/show_bug.cgi?id=16465
Re: Do we need Mat, Vec, TMmat, Diag, Sym and other matrix types?
On Thursday, 15 March 2018 at 12:49:22 UTC, jmh530 wrote: [snip] It looks like it should expand the alias earlier. No problem with auto foo (T)(S!(1, T) v) {}; This issue also shows up in mir.ndslice.traits. I had to do the equivalent of isV below. It doesn't work to do the alternate version. However, given that you have the traits, you can use them in a template constraint. So you have to repeat yourself in the trait once, rather than bunches of times in each function that calls them. enum bool isV(T) = is(T : S!(1, U), U); enum bool isV_alternate(T) = is(T : V!(U), U);
Re: Do we need Mat, Vec, TMmat, Diag, Sym and other matrix types?
On Thursday, 15 March 2018 at 05:04:42 UTC, 9il wrote: [snip] BTW, could you please help with the following issue?! struct S(int b, T) { } alias V(T) = S!(1, T); auto foo (T)(V!T v) { } void main() { V!double v; foo(v); } Error: template onlineapp.foo cannot deduce function from argument types !()(S!(1, double)), candidates are: onlineapp.d(7):onlineapp.foo(T)(V!T v) It looks like it should expand the alias earlier. No problem with auto foo (T)(S!(1, T) v) {};
Re: Do we need Mat, Vec, TMmat, Diag, Sym and other matrix types?
On Wednesday, 14 March 2018 at 23:30:55 UTC, Sam Potter wrote: [snip] OTOH, the fact that D doesn't have a REPL may kill it from the get-go (hard to do exploratory data analysis). There is one in dlang-community (there might be others, can't recall), but it does not yet support Windows and there isn't much documentation. I agree it would be nice to have a D Jupyter kernel that was easy to use. Also, I've been playing with run.dlang.io lately and while it's not the same thing as a REPL, it's one of the easiest ways to play around with D (and a select few libraries, including mir-algorithm).
Re: core.math and std.math
On Thursday, 15 March 2018 at 00:16:05 UTC, Manu wrote: Why does core.math exist? It's basically empty, but with a couple of select functions which seem arbitrarily chosen... Isn't core.math for compiler intrinsics? The corresponding functions in std.math call the core.math versions.
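A small sketch of what I mean: the few functions in core.math, such as sqrt, are recognized by the compiler and can be lowered directly to hardware instructions, while std.math's public functions wrap them.

```d
// Sketch: calling a core.math intrinsic directly.
// core.math.sqrt has float, double, and real overloads that
// the compiler treats as intrinsics.
import core.math : sqrt;

void main()
{
    assert(sqrt(4.0f) == 2.0f); // float overload
    assert(sqrt(9.0) == 3.0);   // double overload
}
```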
Re: Do we need Mat, Vec, TMmat, Diag, Sym and other matrix types?
On Wednesday, 14 March 2018 at 20:21:15 UTC, Sam Potter wrote: Sure. The key word in my statement was "ideally". :-) For what it's worth, there is already an "informal spec" in the form of the high-level interface for numerical linear algebra and sci. comp. that has been developed (over three decades?) in MATLAB. This spec has been replicated (more or less) in Julia, Python, Octave, Armadillo/Eigen, and others. I'm not aware of all the subtleties involved in incorporating it into any standard library, let alone D's, but maybe this is an interesting place where D could get an edge over other competing languages. Considering that people in Python land have picked up D as a "faster Python", there might be more traction here than is readily apparent [snip] libmir [1] originally started as std.experimental.ndslice (that component is now mir-algorithm). They had removed it from std.experimental because it wasn't stable enough yet and needed to make breaking changes. I think it's doing just fine as a standalone library, rather than part of the standard library. As this thread makes clear, there's certainly more work to be done on it, but I'm sure Ilya would appreciate any feedback or assistance. I'm sympathetic to your point about D getting an edge by having a better linear algebra experience. I came to D for faster Python/R/Matlab (and not C++), though if I need to do something quickly, I still defer to Python/R. However, if you look at the TIOBE index, R and Matlab are at 18/20. Python is quite a bit higher, but its growth in popularity was not largely due to the Numpy/Scipy ecosystem. So while I think that D could get more traction if libmir turns itself into a premiere linear algebra library, we should be realistic that linear algebra is a relatively small segment of how people use programming languages.
Maybe some firms might be willing to pay for more support, though (if a user could replace pandas with libmir, I would imagine some financial firms might be interested). [1] https://github.com/libmir
Re: Do we need Mat, Vec, TMmat, Diag, Sym and other matrix types?
On Wednesday, 14 March 2018 at 16:16:55 UTC, Andrei Alexandrescu wrote: Has row-major fallen into disuse? [snip] C has always been row major and is not in disuse (the GSL library has gsl_matrix and that is row-major). However, Fortran and many linear algebra languages/frameworks have also always been column major. Some libraries allow both. I use Stan for Bayesian statistics. It has an array type and a matrix type. Arrays are row-major and matrices are column major. (Now that I think on it, an array of matrices would probably resolve most of my needs for an N-dimensional column major type)
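The layout difference itself boils down to one line of index arithmetic; a sketch (not mir code, just the textbook formulas) for an m x n matrix stored in a flat array:

```d
// Row-major: rows are contiguous; column-major: columns are contiguous.
size_t rowMajor(size_t i, size_t j, size_t m, size_t n) { return i * n + j; }
size_t colMajor(size_t i, size_t j, size_t m, size_t n) { return i + j * m; }

void main()
{
    // 2 x 3 matrix: stepping along a row moves by 1 element in
    // row-major storage but by m elements in column-major storage,
    // and vice versa for stepping down a column.
    assert(rowMajor(0, 1, 2, 3) == 1);
    assert(colMajor(0, 1, 2, 3) == 2);
    assert(rowMajor(1, 0, 2, 3) == 3);
    assert(colMajor(1, 0, 2, 3) == 1);
}
```

Which stride is 1 is exactly what drives the cache-locality and interoperability concerns discussed here.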
Re: Do we need Mat, Vec, TMmat, Diag, Sym and other matrix types?
On Wednesday, 14 March 2018 at 05:01:38 UTC, 9il wrote: Maybe we should use only column major order. --Ilya In my head I had been thinking that the Mat type you want to introduce would be just an alias to a 2-dimensional Slice with a particular SliceKind and iterator. Am I right on that? If that's the case, why not introduce a Tensor type that corresponds to Slice but in column-major storage and then have Mat (or Matrix) be an alias to 2-dimensional Tensor with a particular SliceKind and iterator.
Re: Do we need Mat, Vec, TMmat, Diag, Sym and other matrix types?
On Tuesday, 13 March 2018 at 21:04:28 UTC, bachmeier wrote: What I have settled on is Row(x,2), which returns a range that works with foreach. I tried x[_,2] to return Row(x,2) but didn't like reading it, so I went with x[_all,2] instead. Similarly for Col(x,2) and x[2,_all]. The exact form is bikeshedding and shouldn't make much difference. I use ByRow(x) and ByColumn(x) to iterate over the full matrix. This Row(x, 2) is essentially the same approach as Armadillo (it also has rows, cols, span). mir's select isn't quite the same thing. _all is interesting. mir's byDim can iterate by both rows and columns. IME, if you try to mix row-order and column-order, or 0-based indexing and 1-based indexing, it's too complicated to write correct code that interacts with other libraries. I think you need to choose one and go with it. Fair enough. mir uses a row-order 0-based indexing approach by default. That's fine, I'm used to it at this point. What I was thinking about was that Slice's definition would change from struct Slice(SliceKind kind, size_t[] packs, Iterator) to struct Slice(SliceKind kind, size_t[] packs, Iterator, MemoryLayout layout = rowLayout) so that the user has control over changing it on an object-by-object basis. Ideally, they would keep it the same across the entire program. Nevertheless, I would still prefer it so that all functions in mir provide the same result regardless of what layout is chosen (not sure you can say that about switching to 0-based indexing...). The idea would be that whatever is built on top of it shouldn't need to care about the layout. However, due to cache locality, some programs might run faster depending on the layout chosen. With respect to interacting with libraries, I agree that a user should choose either row-order or column-order and stick to it. But what options are available for the user of a column-major language (or array library) to call mir if mir only makes available functions that handle row-major layouts?
RCppArmadillo doesn't have an issue because both R and Armadillo are column-major. Going the other way, you'd probably know better than I would, but it looks like the only way in embedr to assign a D matrix to an RMatrix is by copying values. If a matrix were already in column-major form, how much easier would it be to interact with R?
Re: Do we need Mat, Vec, TMmat, Diag, Sym and other matrix types?
On Tuesday, 13 March 2018 at 16:40:13 UTC, 9il wrote: On Tuesday, 13 March 2018 at 14:13:02 UTC, jmh530 wrote: [snip] I'm not sure I understand what your syntax solution does... matrix(j, i) == matrix[i, j] (reversed order) Hopefully, I made the issue clearer in my response to Martin. The issue with row/column-major order is one part performance and one part interoperability with other matrix libraries. Perhaps most importantly, imagine how confusing a new user would find that syntax... A free-standing function, such as the simple one below, might be less confusing. Also, this is a solution for the Matrix type, but not so much the Slice type.

auto reverseIndex(T)(T x, size_t i, size_t j)
{
    return x[j, i];
}

The reverseIndex function is convenient, but suppose you are looping through each element of the second column. Is there any performance advantage to using reverseIndex to do so? I suspect not, because you still have to jump around in memory: the underlying storage is row-major. You may not notice this effect when the CPU is able to pre-fetch the whole matrix and keep it in cache, but as the matrix gets larger it no longer fits in cache and the layout starts to matter more. You might also break vector operations. Personally, while I think it's important to think about, I don't consider it a hugely pressing issue so long as the API is flexible enough that the functionality can be added in the future.
Re: Do we need Mat, Vec, TMmat, Diag, Sym and other matrix types?
On Tuesday, 13 March 2018 at 15:47:36 UTC, Martin Tschierschke wrote: I think for mathematics it is more important for easy handling, to be able to get the element of a matrix a_ij by a(i,j) and not only by a[i-1,j-1]. [snip] I didn't really address this in my other post. What you're talking about is 0-based vs. 1-based indexing. Most languages force you to choose, though a few let you specify the type of index you want (Pascal, Chapel, and Ada come to mind). While I've thought that the way Chapel does domains is cool, I never gave much thought to implementing optional 0- or 1-based indexing in D. Now that I'm thinking about it, I don't see why it couldn't be implemented. For instance, there's nothing stopping you from writing a function like the one below that has the same behavior.

auto access(T)(T x, size_t i, size_t j)
{
    return x.opIndex(i - 1, j - 1);
}

What you really care about is the nice syntax. In that case, you could write an opIndex function that has different behavior based on a template parameter in Slice. Even something simple like the following might work.

auto ref opIndex(Indexes...)(Indexes indexes) @safe
    if (isIndexSlice!Indexes)
{
    static if (defaultIndexingBehavior)
    {
        return this.opIndex!(indexes.length)([indexes]);
    }
    else
    {
        Indexes newIndexes;
        foreach (i, e; indexes)
        {
            newIndexes[i] = e - 1;
        }
        return this.opIndex!(indexes.length)([newIndexes]);
    }
}

The time-consuming part is that you'd have to go through all of mir where it relies on opIndex and ensure that both sets of behavior work.
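As a standalone sketch of the same idea, a thin wrapper type can shift indices down by one before forwarding them. OneBased is a made-up name for illustration; a real implementation would wrap Slice rather than a flat array.

```d
// Hypothetical 1-based indexing wrapper over row-major flat storage.
struct OneBased(T)
{
    T[] data;
    size_t cols;

    // a[1, 1] is the top-left element, matching the math notation a_11.
    ref T opIndex(size_t i, size_t j)
    {
        return data[(i - 1) * cols + (j - 1)];
    }
}
```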
Re: Do we need Mat, Vec, TMmat, Diag, Sym and other matrix types?
On Tuesday, 13 March 2018 at 15:47:36 UTC, Martin Tschierschke wrote: On Tuesday, 13 March 2018 at 14:13:02 UTC, jmh530 wrote: [...] https://en.wikipedia.org/wiki/Row-_and_column-major_order I think for mathematics it is more important for easy handling, to be able to get the element of a matrix a_ij by a(i,j) and not only by a[i-1,j-1]. The underlying storage concept is less important and depends just on the used libs, which should be the best (fastest) available for the purpose. The underlying storage format is important for performance, especially cache lines. For instance, calculate the sum of the columns of a matrix stored in row-major format vs. column-major format. If it is stored column-wise, you can just loop right down the column. In LAPACKE (note the E), the first parameter on just about every function is a variable controlling whether the data is in row-major or column-major order. By contrast, I believe the original LAPACK (without E) was written for Fortran and uses column-major storage [1]. The documentation for LAPACKE notes: "Note that using row-major ordering may require more memory and time than column-major ordering, because the routine must transpose the row-major order to the column-major order required by the underlying LAPACK routine." It is also relevant if people who use mir want to interact with libraries that use different memory layouts. Alternately, people who use column-major languages like Fortran might want to call mir code. [1] mir-lapack uses Canonical slices in many of these functions. I assume this is correct, but I have a nagging feeling that I should compare the results of some of these functions with another language to really convince myself... When you increment an iterator on a canonical slice, it still goes in row order.
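The column-sum example from the post can be written out to show where the stride comes from. This is a plain-D sketch with made-up function names, independent of mir or LAPACK.

```d
// Sum each column of a rows x cols matrix stored row-major: the inner loop
// jumps by `cols` elements per step, which is cache-unfriendly for large matrices.
double[] columnSumsRowMajor(const double[] data, size_t rows, size_t cols)
{
    auto sums = new double[](cols);
    sums[] = 0.0;
    foreach (j; 0 .. cols)
        foreach (i; 0 .. rows)
            sums[j] += data[i * cols + j]; // stride of cols
    return sums;
}

// Same computation with column-major storage: each column is contiguous,
// so the inner loop just walks straight down memory.
double[] columnSumsColMajor(const double[] data, size_t rows, size_t cols)
{
    auto sums = new double[](cols);
    sums[] = 0.0;
    foreach (j; 0 .. cols)
        foreach (i; 0 .. rows)
            sums[j] += data[j * rows + i]; // unit stride
    return sums;
}
```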
Re: Do we need Mat, Vec, TMmat, Diag, Sym and other matrix types?
On Tuesday, 13 March 2018 at 13:02:45 UTC, Ilya Yaroshenko wrote: On Tuesday, 13 March 2018 at 12:23:23 UTC, jmh530 wrote: On Tuesday, 13 March 2018 at 10:35:15 UTC, 9il wrote: On Tuesday, 13 March 2018 at 04:35:53 UTC, jmh530 wrote: [snip] What's TMMat? TMat is a transposed matrix. Not sure for now if it would be required. There are some people who like being able to specify whether a matrix has column or row layout. Would an option to control this be the same thing? Good point. Would matrix(j, i) syntax solve this issue? One of the reasons to introduce Mat is API simplicity. ndslice has 3 compile-time params. I hope we would have only one type for Mat, like Mat!double. I'm not sure I understand what your syntax solution does... But I agree that there is a benefit from API simplicity. It would probably be easier to just say Mat is row-major and have another type that is column-major (or have the options in ndslice). Nevertheless, it can't hurt to look at what other matrix libraries do. Eigen's (C++ library) Matrix class uses template arguments to set storage order (_Options). It looks like Eigen has six template arguments. https://eigen.tuxfamily.org/dox/classEigen_1_1Matrix.html Numpy does the same thing at run-time https://docs.scipy.org/doc/numpy/reference/generated/numpy.array.html Also, many of the languages that emphasize linear algebra strongly (Fortran, Matlab, etc.) use column-major order. Row-major order is most common in C-based languages. https://en.wikipedia.org/wiki/Row-_and_column-major_order
Re: Do we need Mat, Vec, TMmat, Diag, Sym and other matrix types?
On Tuesday, 13 March 2018 at 10:35:15 UTC, 9il wrote: On Tuesday, 13 March 2018 at 04:35:53 UTC, jmh530 wrote: [snip] What's TMMat? TMat is a transposed matrix. Not sure for now if it would be required. There are some people who like being able to specify whether a matrix has column or row layout. Would an option to control this be the same thing?
Re: Do we need Mat, Vec, TMmat, Diag, Sym and other matrix types?
On Tuesday, 13 March 2018 at 12:16:27 UTC, jmh530 wrote: Some kind of improvement that replaces 0 .. $ with some shorter syntax has been brought up in the past. https://github.com/libmir/mir-algorithm/issues/53 Sorry for double post.
Re: Do we need Mat, Vec, TMmat, Diag, Sym and other matrix types?
On Tuesday, 13 March 2018 at 10:39:29 UTC, 9il wrote: On Tuesday, 13 March 2018 at 05:36:06 UTC, J-S Caux wrote: Your suggestion [4] that matrix[i] returns a Vec is perhaps too inflexible. What one needs sometimes is to return a row, or a column of a matrix, so a notation like matrix[i, ..] or matrix[.., j] returning respectively a row or column would be useful. auto row = matrix[i]; // or matrix[i, 0 .. $]; auto col = matrix[0 .. $, j]; Some kind of improvement that replaces 0 .. $ with some shorter syntax has been brought up in the past. https://github.com/libmir/mir-algorithm/issues/53
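For reference, D's operator overloading can support the matrix[i, 0 .. $] / matrix[0 .. $, j] style directly via opSlice and opDollar. Below is a minimal standalone sketch (Mat is a made-up type, and the column view is copied for simplicity rather than returned as a strided view, as mir would do).

```d
import std.algorithm : map;
import std.array : array;
import std.range : iota;

// Minimal row-major matrix supporting m[i, a .. b] and m[a .. b, j].
struct Mat
{
    double[] data;
    size_t rows, cols;

    // m[a .. b, ...] lowers to opSlice!dim(a, b)
    size_t[2] opSlice(size_t dim)(size_t a, size_t b) { return [a, b]; }
    // $ in dimension dim lowers to opDollar!dim
    size_t opDollar(size_t dim)() { return dim == 0 ? rows : cols; }

    double opIndex(size_t i, size_t j) { return data[i * cols + j]; }

    // m[i, a .. b]: a row segment, contiguous in row-major storage
    double[] opIndex(size_t i, size_t[2] js)
    {
        return data[i * cols + js[0] .. i * cols + js[1]];
    }

    // m[a .. b, j]: a column segment; strided in memory, copied here
    double[] opIndex(size_t[2] is_, size_t j)
    {
        return iota(is_[0], is_[1]).map!(i => data[i * cols + j]).array;
    }
}
```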
Re: Do we need Mat, Vec, TMmat, Diag, Sym and other matrix types?
On Tuesday, 13 March 2018 at 03:37:36 UTC, 9il wrote: [snip] 4. matrix[i] returns a Vec and increases ARC, matrix[i, j] returns the content of the cell. I'm not 100% sold on matrix[i] returning a Vec instead of a 1-dimensional matrix. R does something similar, and you have to convert things back to a matrix for some computations more often than I'd like. If functions can easily take both Mat and Vec types in a relatively painless fashion, then I wouldn't have an issue with it. 5. Clever `=` expression-based syntax. For example: // performs a CBLAS call of GEMM and does zero memory allocations C = alpha * A * B + beta * C; You might want to explain this in more detail. I saw "expression" and my head went to expression templates, but that doesn't seem to be what you're talking about (overloading opAssign?). I have a lot of work for the next months, but am looking for a good opportunity to make Mat happen. +1 With respect to the title, the benefit of special matrix types is that we can call functions (lapack or otherwise) that are optimized for those types. If you want the best performance for mir, then I think that's what it would take. I'm not sure how much you've thought about this. For instance, I understand from graphics libraries that if you're only working with a particular size of matrix (say 3x3), then you can generate faster code than if you're working with general matrices. In addition, performance is not the only thing a new user to mir would care about. They likely would also care about ease of use [1] and documentation. Hopefully these continue to improve. What's TMMat? Diag seems like it would be a special case of sparse matrices, though diag is probably simpler to implement. [1] Would it be seamless to add a Mat to a Diag? Also, what happens to the API when you add 10 different matrix types and need to think about all the interactions?
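The fixed-size point can be illustrated with a 3x3 type whose dimensions are compile-time constants, so the compiler is free to fully unroll and inline the loops. Mat3 is a made-up name; this is a sketch of the graphics-library case, not a proposal for mir's API.

```d
// A 3x3 matrix with statically known size: no heap allocation, and the
// triple loop below is a candidate for complete unrolling by the compiler.
struct Mat3
{
    double[9] a; // row-major storage on the stack

    Mat3 mul(const ref Mat3 rhs) const
    {
        Mat3 r;
        foreach (i; 0 .. 3)
            foreach (j; 0 .. 3)
            {
                double s = 0.0;
                foreach (k; 0 .. 3)
                    s += a[i * 3 + k] * rhs.a[k * 3 + j];
                r.a[i * 3 + j] = s;
            }
        return r;
    }
}
```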