Re: Abstract Database Interface
I am working on a similar project, named SQLd[1]. If you are interested, we can join forces and work together :) IRC Nick: Robik [1]: http://github.com/robik/SQLd That might be a good idea. I haven't done much for supporting different databases, so getting more backend support would be quite nice. What might work well is for me to just refactor my code to sit on top of your existing database classes. So far, I've really just been playing around, but if people show enough interest, I'd like to play with the idea a while longer. :)
Re: Abstract Database Interface
Looking at the API used in this example I would say that it's not very interesting and not very ActiveRecord-like. I think this looks more interesting and more like ActiveRecord: class Person : Model { } void main () { auto p = new Person; p.name = "John Doe"; p.save(); p = Person.where!(x => x.name == "John Doe"); } But when you start to use associations it won't be as nice looking as ActiveRecord due to the not so nice mixin syntax. What we need is AST macros and user defined attributes/annotations. With that, associations could potentially look like this: class Foo : Model {} class Person : Model { @hasMany Foo; } It's definitely not ActiveRecord, but my goal is just to take some inspiration from it, not to duplicate it. I'm very concerned about efficiency, which is why I'm using structs, and I like hard-coding the fields into the structure so there's some documentation of what the record is supposed to hold and so the compiler can optimize it more heavily. It will probably be a little less pretty, but it'll work, and that's what really matters. At some point, I might implement an interface to generate SQL queries with function calls, but for now, just manually writing the queries really isn't hard, and it provides a significant speed boost for systems like SQLite that compile queries down to bytecode because it's easier to reuse the query object.
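A minimal sketch of the struct-based record idea described in the reply above (all names here are hypothetical, invented for illustration, not from any actual library):

```d
import std.stdio;

// Hypothetical sketch: a record is a plain struct with hard-coded
// fields, so the schema is documented in the type itself and field
// access compiles to plain offsets instead of hash table lookups.
struct Person
{
    long id;
    string name;
}

// The matching query is written by hand; engines such as SQLite can
// prepare it once and reuse the compiled statement object.
enum personByName = "SELECT id, name FROM people WHERE name = ?";

void main()
{
    auto p = Person(1, "John Doe");
    writeln(p.name); // statically resolved field access
}
```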
Re: Abstract Database Interface
It's definitely not ActiveRecord, but my goal is just to take some inspiration from it, not to duplicate it. I'm very concerned about efficiency, which is why I'm using structs, and I like hard-coding the fields into the structure so there's some documentation of what the record is supposed to hold and so the compiler can optimize it more heavily. It will probably be a little less pretty, but it'll work, and that's what really matters. At some point, I might implement an interface to generate SQL queries with function calls, but for now, just manually writing the queries really isn't hard, and it provides a significant speed boost for systems like Sqlite that compile queries down to bytecode because it's easier to reuse the query object. My point was just that you removed the key features and soul of ActiveRecord. Without these features it's just like any other ORM library. -- /Jacob Carlborg
Re: Abstract Database Interface
My point was just that you removed the key features and soul of ActiveRecord. Without these features it's just like any other ORM library. That's a good point. I haven't had any experience with other ORM libraries, so ActiveRecord was the closest thing that came to mind. I definitely do want to eventually capture some of ActiveRecord's features, but probably not all of them. I feel like the solution should be implemented in a way that fits well with a statically typed language, so I'll definitely have to drop some of the features. It won't be quite as nice to use, but it will be simpler in some ways, which is one of my primary goals as a developer. Tools like ActiveRecord are more fun to use, but thinking of all the hash table lookups makes me cringe. :) If and when the library matures, though, I might think about adding some more ActiveRecord-like features if enough people miss them.
Re: Abstract Database Interface
On 2012-10-29 15:42, BLM768 wrote: That's a good point. I haven't had any experience with other ORM libraries, so ActiveRecord was the closest thing that came to mind. I definitely do want to eventually capture some of ActiveRecord's features, but probably not all of them. I feel like the solution should be implemented in a way that fits well with a statically typed language, so I'll definitely have to drop some of the features. It won't be quite as nice to use, but it will be simpler in some ways, which is one of my primary goals as a developer. Tools like ActiveRecord are more fun to use, but thinking of all the hash table lookups makes me cringe. :) If and when the library matures, though, I might think about adding some more ActiveRecord-like features if enough people miss them. You can have a look at DataMapper. That's also for Ruby but it's not specific for SQL, if I recall correctly. Have a look at some ORM library written in Scala, I would guess they can be quite innovative and it's statically typed. http://squeryl.org/index.html http://datamapper.org/ -- /Jacob Carlborg
Re: Abstract Database Interface
You can have a look at DataMapper. That's also for Ruby but it's not specific for SQL, if I recall correctly. Have a look at some ORM library written in Scala, I would guess they can be quite innovative and it's statically typed. http://squeryl.org/index.html http://datamapper.org/ Those libraries definitely look interesting. I should probably consider some type of NoSQL database support... Thanks for the links!
Re: Abstract Database Interface
On 2012-10-29 18:43, BLM768 wrote: You can have a look at DataMapper. That's also for Ruby but it's not specific for SQL, if I recall correctly. Have a look at some ORM library written in Scala, I would guess they can be quite innovative and it's statically typed. http://squeryl.org/index.html http://datamapper.org/ Those libraries definitely look interesting. I should probably consider some type of NoSQL database support... If I recall correctly, Squeryl uses Scala AST macros to support a query syntax that in D would look as below: class Person : Model { } void main () { auto p = new Person; p.name = "John Doe"; p.save(); p = Person.where!(x => x.name == "John Doe"); } -- /Jacob Carlborg
Re: Remus
Not interested, huh? Funny, that's not what I had expected.
Re: Remus
Namespace: Not interested, huh? Funny, that's not what I had expected. Maybe they'd appreciate more something that improves the life of regular D programmers. There are many possible ways to do that, like trying to design features that quite probably will be added to D, or trying library ideas that will become part of Phobos, trying new GC features, trying new Phobos modules, and so on and on. Otherwise you risk creating another Delight (http://delight.sourceforge.net/ ) that no one uses; it's just a waste of time for you too. Bye, bearophile
Re: Remus
On Tuesday, 9 October 2012 at 21:31:48 UTC, bearophile wrote: use statements are converted to one or more aliases and namespaces to (mixin) templates. But what are they useful for? Namespaces can be useful for organizational reasons. For example they can be used for grouping a collection of items under one roof. However you can already accomplish this and more using a struct along with static members. struct io { static { void print() { writeln("foo"); } } } io.print(); Plus structs come with additional abilities that can turn a simple namespace into a much more capable one, for example by adding in ctors and dtors. --rt
Re: Abstract Database Interface
If I recall correctly, Squeryl uses Scala AST macros to support a query syntax that in D would look as below: class Person : Model { } void main () { auto p = new Person; p.name = "John Doe"; p.save(); p = Person.where!(x => x.name == "John Doe"); } If you make x some fancy wrapper type containing more fancy wrapper types with overloaded equality operators that return some sort of Expression class instead of a boolean, you might actually be able to get this to work with only D's current features. However, that would kind of destroy the hope of efficiency. :) What might be nice is a database written in D that completely eschews SQL in favor of a native API. I might have to play with that eventually, but I'll probably give it a while because it would be a huge project, and, like most people, I'm under time constraints. :)
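For what it's worth, the wrapper-type idea can be sketched in current D, though not with `==` itself, since struct `opEquals` must return `bool`; a named method (hypothetical, made up for this sketch) has to stand in for the operator:

```d
// Illustrative only: a comparison builds a query fragment (an
// "Expression") instead of evaluating to a boolean.
struct Expr { string sql; }

struct Col
{
    string name;
    // opEquals must return bool in D, so a named method stands in
    // for the overloaded equality operator described in the post.
    Expr eq(string value)
    {
        return Expr(name ~ " = '" ~ value ~ "'");
    }
}

void main()
{
    auto name = Col("name");
    assert(name.eq("John Doe").sql == "name = 'John Doe'");
}
```

The string concatenation at runtime is exactly the efficiency cost mentioned above; a real library would presumably build a prepared statement instead.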
Re: Remus
I wonder if you could use a named public import to create something like a namespace. On 30 Oct 2012 00:25, Rob T r...@ucora.com wrote: On Tuesday, 9 October 2012 at 21:31:48 UTC, bearophile wrote: use statements are converted to one or more aliases and namespaces to (mixin) templates. But what are they useful for? Namespaces can be useful for organizational reasons. For example they can be used for grouping a collection of items under one roof. However you can already accomplish this and more using a struct along with static members. struct io { static { void print() { writeln("foo"); } } } io.print(); Plus structs come with additional abilities that can turn a simple namespace into a much more capable one, for example by adding in ctors and dtors. --rt
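For reference, a renamed (named) import already gives namespace-like scoping in D today:

```d
// A renamed import forces access through the chosen prefix,
// behaving much like a namespace for the imported module.
import io = std.stdio;

void main()
{
    io.writeln("hello");  // OK: accessed through the prefix
    // writeln("hello");  // error: writeln is not visible unprefixed
}
```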
Re: Why D is annoying =P
On Monday, 29 October 2012 at 05:43:43 UTC, H. S. Teoh wrote: On Mon, Oct 29, 2012 at 01:28:51AM -0400, Nick Sabalausky wrote: Did someone say PHP? ;) I thought I heard Javascript... :-P Both seem to have their own issues, although I'd say javascript is worse than PHP. (at least PHP runs on the server).
Re: Another day in the ordeal of cartesianProduct
On Saturday, 27 October 2012 at 16:19:43 UTC, H. S. Teoh wrote: [snip] I think there is some merit to being able to declare concepts; for example: // This concept matches any type that has fields that satisfy // what's specified inside the body. concept InputRange(T) { bool empty; // this matches @property bool empty() too T front; void popFront(); } auto myRangeBasedFunc(InputRange r) { r.popBack(); // compile-time error: InputRange does // not define .popBack } This way, *both* the user of the template and the template writer are kept honest (i.e., they both have to conform to the requirements of an input range). The compiler can thus statically check the template for correctness *before* it ever gets instantiated. Not only so, the compiler will be able to generate a meaningful error message when the templates fail to match -- it can tell the user the template didn't match 'cos struct X that you tried to pass to it isn't an InputRange, nor a ForwardRange, ... etc.. We've had a short discussion with Jacob Carlborg about almost exactly this syntax (http://forum.dlang.org/post/jukabm$1btd$1...@digitalmars.com) in the context of better compiler messages. I will quote my concerns: = void foo (InputRange range); 1. How would you signify `foo` is generic? For the compiler it's probably possible - by the type of `InputRange`, but that will be one more special case. What about the user? 2. How would you put 2 constraints on `range`? 3. Also you forgot to address: template isInputRange(R) { enum bool isInputRange = is(typeof( (inout int _dummy=0) { R r = void; /// how would you express this /// b.t.w. ? range.d comment on this line is: can define a range object. = end quote However I do agree it would be nice if the compiler verifies the template body against constraints. I am not a compiler writer, but I wonder if it's possible with current syntax.
Re: [RFC] ColorD
On 10/25/2012 3:27 PM, Jens Mueller wrote: Anybody have an idea how to do this on Linux? Much of this is implemented in one way or another as part of the source code for MicroEmacs, downloadable from digitalmars.com. https://github.com/DigitalMars/me
Re: To avoid some linking errors
On 2012-10-28 22:57, Walter Bright wrote: I am baffled why a programmer with even a modest skill level in any language would not know what a symbol in a programming language is. Welcome to the real world :) I see the point of that, and at one point optlink did demangle names. But that didn't change anything. There was also a filter one could run the linker output through that would demangle the names, but nobody found that useful, either, and it fell by the wayside. You'll see the same complaints from the same people appearing for C code being linked, which does not have mangled names. You still don't get any source location. http://www.digitalmars.com/ctg/OptlinkErrorMessages.html#symbol_undefined That's only for optlink. I even wrote an entry in this book http://www.amazon.com/Things-Every-Programmer-Should-Know/dp/0596809484 about it. *Every* programmer should know what a linker does. I agree with you, but again, that's not the world we live in. On the other hand, why does, say, a PHP (insert your favorite dynamic programming language that doesn't use a linker) programmer need to know what a linker is? -- /Jacob Carlborg
Re: To avoid some linking errors
On 2012-10-28 22:38, David Nadlinger wrote: Do you really think that your typical Java programmer is familiar with the term »symbol« in the compiler/linker sense? Also, don't underestimate the perceived scariness/ugliness of mangled names in linker error messages. I'm pretty much fluent in reading D mangled names by now, but most newcomers definitely aren't. That, coupled with the absence of the typical source location information (IDE integration!), is probably enough to make encountering such errors a significantly more unpleasant experience for most people than compiler errors. Again, maybe not for you, maybe not for me, but I think it is clear that this is a problem to some, so the discussion should not be about talking the problem away, but rather about evaluating possible solutions/mitigation strategies in terms of feasibility (e.g. name demangling in linker output?). I completely agree. I can handle the mangled symbols as well but it would be much nicer with demangled symbols, source location and so on. Just because I can read HTML I'm not surfing the web with curl, I use a GUI browser that renders the HTML. It's much nicer that way. -- /Jacob Carlborg
Re: Why D is annoying =P
On 2012-10-29 07:42, Era Scarecrow wrote: Both seem to have their own issues, although I'd say javascript is worse than PHP. (at least PHP runs on the server). You can run JavaScript on the server too. -- /Jacob Carlborg
Re: To avoid some linking errors
On 29-Oct-12 01:38, David Nadlinger wrote: On Sunday, 28 October 2012 at 20:59:25 UTC, Walter Bright wrote: It baffles me that programmers would find undefined symbol hard to make sense of. Do you really think that your typical Java programmer is familiar with the term »symbol« in the compiler/linker sense? Also, don't underestimate the perceived scariness/ugliness of mangled names in linker error messages. I'm pretty much fluent in reading D mangled names by now, but most newcomers definitely aren't. So true. That, coupled with the absence of the typical source location information (IDE integration!), is probably enough to make encountering such errors a significantly more unpleasant experience for most people than compiler errors. Again, maybe not for you, maybe not for me, but I think it is clear that this is a problem to some, so the discussion should not be about talking the problem away, but rather about evaluating possible solutions/mitigation strategies in terms of feasibility (e.g. name demangling in linker output?). Indeed, when dmd works as a driver and invokes a linker, can't it just pipe its output through ddemangle? -- Dmitry Olshansky
Re: Can't use a C++ class from a DLL
28.10.2012 23:52, Artie wrote: I have a DLL with a C++ class and a factory function that creates it. The aim is to load the DLL, get an instance of the class and use it. The interface of the DLL is as follows: - class IBank { public: virtual const char* APIENTRY getLastError() = 0; virtual const char* APIENTRY getDetail(char* detail) = 0; virtual const bool APIENTRY deposit(unsigned long number, double amount) = 0; virtual const bool APIENTRY withdraw(unsigned long number, double amount) = 0; virtual const double APIENTRY getBalance(unsigned long number) = 0; virtual const bool APIENTRY transfer(unsigned long numberFrom, IBank* bankTo, unsigned long numberTo, double amount) = 0; virtual const bool APIENTRY transferAccept(IBank* bankFrom, unsigned long numberTo, double amount) = 0; }; - I've followed the instructions given at dlang.org to interface to C/C++ code but got no success. If I use extern(C++) at the place in D code where the extern declaration is required I get an access violation when calling any method. On the other hand, if I use extern(Windows, C or Pascal) I can call a method successfully, except that I get a wrong return value.
The D interface is declared as follows: - extern (Windows) interface IBank { const char* getLastError(); const char* getDetail(char* detail); const bool deposit(uint number, double amount); const bool withdraw(uint number, double amount); const double getBalance(uint number); const bool transfer(uint numberFrom, IBank* bankTo, uint numberTo, double amount); const bool transferAccept(IBank* bankFrom, uint numberTo, double amount); } export extern (C) IBank Get(); - And the main program in D that uses the DLL: - module main; import std.stdio; import core.runtime; import core.sys.windows.windows; import std.string; import std.conv; import ibank; int main() { alias extern(C) IBank function() getBankInstance; FARPROC pDllFunctionVBank, pDllFunctionSberbank; // Load DLL file void* handleVBank = Runtime.loadLibrary("vbank.dll"); void* handleSberbank = Runtime.loadLibrary("sberbank.dll"); if ( (handleVBank is null) || (handleSberbank is null) ) { writeln("Couldn't find necessary DLL files"); return 1; } getBankInstance get1 = cast(getBankInstance) GetProcAddress(handleVBank, "Get".toStringz); getBankInstance get2 = cast(getBankInstance) GetProcAddress(handleSberbank, "Get".toStringz); if ( get1 is null || get2 is null ) { writeln("Couldn't load factory functions"); return 2; } getBankInstance get; IBank vbank = (*get1)(); IBank sberbank = get2(); uint sbnum = 100500; uint vbnum = 128500; writeln("You have an account in Sberbank (100500)"); auto balance = sberbank.getBalance(sbnum); writefln("getBalance(%d) = %s", sbnum, balance); bool res = sberbank.withdraw(sbnum, 500.0); writefln("withdraw(%d, %f) = %s", sbnum, 500.0, res); writeln("You got it!"); ...
- The output I get is (in case I use extern (Windows, C or Pascal)): - You have an account in Sberbank (100500) getBalance(100500) = -nan got into GenericBank::getBalance() // this is an output from a method called inside the DLL account number = 100500 // inside the DLL balance is 1100 // inside the DLL withdraw(100500, 500.00) = false You got it! - First, to interact with C++ `interface` you need: --- extern(C++) interface Ixxx { ... } --- Your `IBank` C++ functions are declared as `APIENTRY` which is almost definitely defined as `__stdcall`. So the correct interface declaration is: --- extern(C++) interface IBank { extern(Windows) const char* getLastError(); ... } --- As all your functions are `APIENTRY`, write `extern(Windows):` before them. And use `c_ulong` as analogue of `unsigned long`. So full correct `IBank` interface declaration here: --- import core.stdc.config: c_ulong; extern(C++) interface IBank { extern(Windows): const char* getLastError(); const char* getDetail(char* detail); bool deposit(c_ulong number, double amount); bool withdraw(c_ulong number, double amount); double getBalance(c_ulong number); bool transfer(c_ulong numberFrom, IBank* bankTo, c_ulong numberTo, double amount); bool transferAccept(IBank* bankFrom, c_ulong numberTo, double amount); }; --- -- Денис В. Шеломовский Denis V. Shelomovskij
Re: Can't use a C++ class from a DLL
On Monday, 29 October 2012 at 12:11:11 UTC, Denis Shelomovskij wrote: const char* getLastError(); const char* getDetail(char* detail); These return values should be const(char)* and the method shouldn't be const.
Re: Can't use a C++ class from a DLL
29.10.2012 16:40, Jakob Ovrum wrote: On Monday, 29 October 2012 at 12:11:11 UTC, Denis Shelomovskij wrote: const char* getLastError(); const char* getDetail(char* detail); These return values should be const(char)* and the method shouldn't be const. Sorry, my bad. -- Денис В. Шеломовский Denis V. Shelomovskij
Re: Can't use a C++ class from a DLL
As all your functions are `APIENTRY`, write `extern(Windows):` before them. And use `c_ulong` as analogue of `unsigned long`. So full correct `IBank` interface declaration here: --- import core.stdc.config: c_ulong; extern(C++) interface IBank { extern(Windows): const char* getLastError(); const char* getDetail(char* detail); bool deposit(c_ulong number, double amount); bool withdraw(c_ulong number, double amount); double getBalance(c_ulong number); bool transfer(c_ulong numberFrom, IBank* bankTo, c_ulong numberTo, double amount); bool transferAccept(IBank* bankFrom, c_ulong numberTo, double amount); }; --- Thank you very much, Denis. It was quite confusing to mix extern(C++) and extern(Windows). And I also thank Jakob for syntax specification. BTW, it's said in the ABI reference that `unsigned long` must be substituted with `uint`. And it seems to work fine for the data I used in the example.
Re: Can't use a C++ class from a DLL
Artie apple2...@mail.ru wrote in message news:uhdpnavdyokxigczl...@forum.dlang.org... BTW, it's said in the ABI reference that `unsigned long` must be substituted with `uint`. And it seems to work fine for the data I used in the example. unsigned int and unsigned long are the same size in 32 bit C/C++, but are mangled differently when using C++ name mangling. unsigned long may not be 32 bits on all platforms, so to portably match the size used by the native C/C++ compiler you should use the c_ulong aliases. The problem with name mangling is avoided in this case as you're not using C++ name mangling, you're using stdcall name mangling, which only keeps track of argument sizes, not their types.
isDroppable range trait for slicing to end
More often than not, we want to slice a range all the way to the end, and we have to use the clumsy r[0 .. r.length] syntax. What's worse is that when a range is infinite, there is no real way to slice to the end, unless you just repeatedly popFront. This is a real shame, because a lot of infinite ranges (sequence, cycle, repeat, ...) support random access, but not slice to end. They *could* slice to end if the language allowed it. I'd like to introduce a new primitive: popFrontN. You may recognize this as a standalone function in range.d: it is. I propose we improve this scheme by allowing ranges to directly implement this function themselves. Then, popFrontN will defer to that function's implementation. This would allow certain infinite ranges (such as sequence) to provide a popFrontN implementation, even though they aren't sliceable. From there, I'd like to introduce a new trait isDroppable: This trait will answer true if a range naturally supports the popFrontN primitive (or is already sliceable). So what makes this so interesting? Not only does it give new performance possibilities, it also unlocks new possibilities for the implementation of algorithms: A LOT of algorithms take a special quick route when the input ranges are sliceable, random access, and hasLength. Blatant examples of this are find, copy, or as a general rule, anything that iterates on two ranges at once. The thing though is that they never actually *really* require sliceability, nor querying length. All they want is to be able to write return r[i .. r.length], but return r.drop(i) would work *just* as well. Another thing which makes this isDroppable notion interesting is that the drop operation guarantees the returned range's type is that of the original range, unlike hasSlicing, which doesn't really guarantee it: some infinite ranges can be sliced, but the returned slice (obviously) is not infinite...
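A rough sketch of the proposed trait, assuming the member-`popFrontN` convention from the post (neither `isDroppable` nor a member `popFrontN` exists in Phobos; the names follow the proposal, not the library):

```d
import std.range : hasSlicing, isInputRange;

// Hypothetical trait from the proposal: a range is "droppable" if it
// already supports slicing, or if it defines its own popFrontN member
// that can skip ahead without visiting each element.
enum bool isDroppable(R) = isInputRange!R &&
    (hasSlicing!R || is(typeof((R r) { r.popFrontN(1); })));

// Arrays support slicing, so they qualify:
static assert(isDroppable!(int[]));
```

An algorithm could then dispatch with `static if (isDroppable!R)` and call `r.drop(i)` instead of requiring full slicing plus length.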
Re: isDroppable range trait for slicing to end
On Monday, 29 October 2012 at 14:33:01 UTC, monarch_dodra wrote: [SNIP] Extension: An extension to this proposal is the function extractSlice. This function would *ONLY* require isDroppable. It would be implemented as: // auto extractSlice(R)(R r, size_t i, size_t j) { static if (hasSlicing!R) return r[i .. j]; else return r.drop(i).takeExactly(j - i); } // What makes this notion interesting is that it works both on sliceable ranges AND infinite ranges, but does not pretend to have back-assignment to the original range. This is very interesting, because it would allow hasSlicing to turn down infinite ranges, but we'd still have a way to extract a range out of those infinite ranges. Another added bonus would be that certain non-infinite ranges, in particular immutable ranges, are considered not sliceable, because they can't be back-assignable. extractSlice would allow us to extract a slice out of those ranges, even if we can't assign them back...
Re: isDroppable range trait for slicing to end
On Monday, 29 October 2012 at 14:34:04 UTC, monarch_dodra wrote: [SNIP] I have a preliminary Pull Request to illustrate my point here: https://github.com/D-Programming-Language/phobos/pull/908 Currently, I only added popFrontN on the ranges on which it is most obvious, you should get the point. I'm very sorry if I'm not very clear. Explaining things by writing is not my strong suit. What would you think of this proposal?
Re: Can't use a C++ class from a DLL
On Monday, 29 October 2012 at 14:01:09 UTC, Daniel Murphy wrote: Artie apple2...@mail.ru wrote in message news:uhdpnavdyokxigczl...@forum.dlang.org... BTW, it's said in the ABI reference that `unsigned long` must be substituted with `uint`. And it seems to work fine for the data I used in the example. unsigned int and unsigned long are the same size in 32 bit C/C++, but are mangled differently when using C++ name mangling. unsigned long may not be 32 bits on all platforms, so to portably match the size used by the native C/C++ compiler you should use the c_ulong aliases. The problem with name mangling is avoided in this case as you're not using C++ name mangling, you're using stdcall name mangling, which only keeps track of argument sizes, not their types. That makes sense. I was unaware of such details. Thanks a lot.
Re: Another day in the ordeal of cartesianProduct
On 27/10/12 00:45, H. S. Teoh wrote: http://d.puremagic.com/issues/show_bug.cgi?id=8900 :-( (The code there is called cartesianProd but it's the reduced code, so it doesn't really compute the cartesian product. But that's where it's from.) So far, the outstanding blockers for cartesianProduct are: 1) Compiler bug which causes unittest failure: std/range.d(4629): Error: variable lower used before set std/range.d(4630): Error: variable upper used before set (Jonathan had a pull request with a Phobos workaround for this, which I _think_ is already merged, but the autotester is still failing at this point. :-/) 2) Issue 8542 (crosstalk between template instantiations) 3) And now, issue 8900 (zip fails to compile with repeat(char[])) So there's still no joy for cartesianProduct. :-( I'm getting a bit frustrated with the Phobos bugs related to ranges and std.algorithm. I think we need to increase the number of unittests. And by that I mean, GREATLY increase the number of unittests. Most of the current tests are merely sanity tests for the most common usage patterns, most basic types, or tests added as a result of fixed bugs. This is inadequate. We need to actively unittest corner cases, rare combinations, unusual usages, etc.. Torture test various combinations of range constructs. Algorithms. Nested range constructs. Nested algorithms. Deliberately cook up nasty tests that try their best to break the code by using unusual parameters, unusual range-like objects, strange data, etc.. Go beyond the simple cases to test non-trivial things. We need unittests that pass unusual structs and objects into the range constructs and algorithms, and make sure they actually work as we have been _assuming_ they should. I have a feeling there are a LOT of bugs lurking in there behind overlooked corner cases, off by 1 errors, and other such careless slips, as well as code that only works for basic types like arrays, which starts breaking when you hand it something non-trivial. 
All these issues must be weeded out and prevented from slipping back in. Here's a start: - Create a set of structs/classes (inside a version(unittest) block) that are input, forward, bidirectional, output, etc., ranges, that are NOT merely arrays. - There should be some easy way, perhaps using std.random, of creating non-trivial instances of these things. These should be put in a separate place, perhaps outside the std/ subdirectory, where they can be imported into unittest blocks by std.range, std.algorithm, whatever else that needs extensive testing. - Use these ranges as input for testing range constructs and algorithms. - For best results, use a compile-time loop to loop over a given combination of these range types, and run them through the same set of tests. This will improve the currently spotty test coverage. Perhaps provide some templated functions that, given a set of range types (from the above structs/classes) and a set of functions, run through all combinations of them to make sure they all work. (We run unittests separately anyway, we aren't afraid of long-running tests.) T I think that unit tests aren't very effective without code coverage. One fairly non-disruptive thing we could do: implement code coverage for templates. Currently, templates get no code coverage numbers. We could do a code-coverage equivalent for templates: which lines actually got instantiated? I bet this would show _huge_ gaps in the existing test suite.
Re: assert(false, ...) doesn't terminate program?!
On 27/10/12 20:39, H. S. Teoh wrote: On Sat, Oct 27, 2012 at 08:26:21PM +0200, Andrej Mitrovic wrote: On 10/27/12, H. S. Teoh hst...@quickfur.ath.cx wrote: writeln(how did the assert not trigger??!!); // how did we get here?! Maybe related to -release? [...] Haha, you're right, the assert is compiled out because of -release. But I disassembled the code, and didn't see the auto x = 1/toInt() either. Is the compiler optimizing that away? Yes, and I don't know on what basis it thinks it's legal to do that. Also, is that even a good idea? Shouldn't we be throwing an exception here instead of trying to trigger integer division by zero (which may not even terminate the program, depending on the OS, etc.)? The intention was that it should behave _exactly_ like an integer division by zero. It's bug 8021, BTW.
Re: isDroppable range trait for slicing to end
29/10/2012 14:33, monarch_dodra wrote: More often than not, we want to slice a range all the way to the end, and we have to use the clumsy r[0 .. r.length] syntax. That's supposed to be r[]. What's worse is that when a range is infinite, there is no real way to slice to the end, unless you just repeatedly popFront. Slice to the end was meant to be this: r[x..$] This is a real shame, because a lot of infinite ranges (sequence, cycle, repeat, ...) support random access, but not slice to end. They *could* slice to end if the language allowed it. The real shame is a compiler bug that prevented $ from ever working except in a few special cases. (that and the special meaning of length inside of []). I'd like to introduce a new primitive: popFrontN. You may recognize this as a standalone function in range.d: it is. I propose we improve this scheme by allowing ranges to directly implement this function themselves. Then, popFrontN will defer to that function's implementation. This would allow certain infinite ranges (such as sequence) to provide a popFrontN implementation, even though they aren't sliceable. Introducing new things as part of the range definition (capability) is a costly move. Paying that cost to address a vague special case - not worth it. From there, I'd like to introduce a new trait isDroppable: This trait will answer true if a range naturally supports the popFrontN primitive (or is already sliceable). So what makes this so interesting? Not only does it give new performance possibilities, it also unlocks new possibilities for the implementation of algorithms: Where? I thought it was about unlocking a certain pattern for infinite RA ranges, not that much of benefit elsewhere. A LOT of algorithms take a special quick route when the input ranges are sliceable, random access, and hasLength. Blatant examples of this are find, copy, or as a general rule, anything that iterates on two ranges at once.
The thing though is that they never actually *really* require sliceability, nor querying length. All they want is to be able to write return r[i .. r.length], but return r.drop(i) would work *just* as well. Your drop(i) would still have to know the length for anything finite. TBH all the stuff that is (if ever) specialized in std.algorithm: a) tries to optimize with strings b) tries to optimize with arrays c) sometimes catches general RA or slicing I'm pushing 2 things of c) type in one pull, yet I don't see it as a common thing at all; e.g. presently copy doesn't care about RA/slicing, only for built-in arrays. Most of the time things are downgraded to Input/ForwardRange and worked from that - simple, slow and generic. Another thing which makes this isDroppable notion interesting is that the dropped range guarantees the returned range's type is that of the original range, unlike hasSlicing, which doesn't really guarantee it: some infinite ranges can be sliced, but the returned slice (obviously) is not infinite... Interesting. I'm thinking that simply defining an opDollar to return a special marker type and overloading opSlice should work: struct InfRange{ ... EndMarker opDollar(); InfRange opSlice(size_t start, EndMarker dummy); SomeOtherRangeType opSlice(size_t start, size_t end); ... } And if you omit the second overload it doesn't have normal slicing. isDroppable then tests specifically slicing with dollar. -- Dmitry Olshansky
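The opDollar sketch above can be fleshed out into a compilable toy (names like EndMarker and InfRepeat are illustrative, not Phobos API):

```d
import std.range;

struct EndMarker {}

// Toy infinite range: repeats a single value forever.
struct InfRepeat(T)
{
    T value;
    size_t offset;

    enum bool empty = false;            // marks the range as infinite
    @property T front() { return value; }
    void popFront() { ++offset; }
    T opIndex(size_t) { return value; } // random access

    EndMarker opDollar() { return EndMarker.init; }

    // r[x .. $]: stays infinite, so the slice keeps the original type.
    InfRepeat opSlice(size_t start, EndMarker)
    {
        return InfRepeat(value, offset + start);
    }

    // r[x .. y]: a finite slice necessarily has another type.
    auto opSlice(size_t start, size_t end)
    {
        return value.repeat(end - start);
    }
}

void main()
{
    auto r = InfRepeat!int(42);
    auto tail = r[10 .. $];   // same (infinite) type as r
    auto piece = r[0 .. 3];   // finite, Take-like slice
    assert(tail.front == 42);
    assert(piece.length == 3);
}
```

Omitting the second opSlice overload then leaves only the one-sided r[x .. $] form, which is exactly what a trait could test for.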
Re: isDroppable range trait for slicing to end
On Monday, 29 October 2012 at 15:41:23 UTC, Peter Alexander wrote: On Monday, 29 October 2012 at 14:33:01 UTC, monarch_dodra wrote: I'd like to introduce a new primitive: popFrontN. http://dlang.org/phobos/std_range.html#popFrontN On Monday, 29 October 2012 at 14:33:01 UTC, monarch_dodra wrote: You may recognize this as a standalone function in range.d I think you missed the point.
Re: isDroppable range trait for slicing to end
On Monday, 29 October 2012 at 15:43:47 UTC, monarch_dodra wrote: On Monday, 29 October 2012 at 15:41:23 UTC, Peter Alexander wrote: On Monday, 29 October 2012 at 14:33:01 UTC, monarch_dodra wrote: I'd like to introduce a new primitive: popFrontN. http://dlang.org/phobos/std_range.html#popFrontN On Monday, 29 October 2012 at 14:33:01 UTC, monarch_dodra wrote: You may recognize this as a standalone function in range.d I think you missed the point. Correct. Sorry about that :-) Carry on...
Re: isDroppable range trait for slicing to end
On Monday, 29 October 2012 at 14:33:01 UTC, monarch_dodra wrote: I'd like to introduce a new primitive: popFrontN. http://dlang.org/phobos/std_range.html#popFrontN
Re: Another day in the ordeal of cartesianProduct
On 10/28/12 8:28 AM, Peter Alexander wrote: On Saturday, 27 October 2012 at 13:06:09 UTC, Andrei Alexandrescu wrote: On 10/27/12 8:23 AM, Peter Alexander wrote: Retrofitting some sort of structure to templates will be a Herculean task, but I think it has to happen. It is clear to me that the development process we use now (write the template, try a few instantiations, pray) is unsustainable beyond simple templates. It's not clear to me at all. The mechanism works very well and is more expressive than alternatives used by other languages. I'm not sure I can agree it works well. For example, here's what happened with bug 8900 mentioned in the OP: std.range.zip creates a Zip object, which has a Tuple member. Tuple has a toString function, which calls formatElement, which calls formatValue, which calls formatRange, which (when there's a range of characters) has a code path for right-aligning the range. To right-align the range it needs to call walkLength. The problem arises when you zip an infinite range of characters e.g. repeat('a'). This proves nothing at all. So this has to do with invoking walkLength against an infinite range. At the time I wrote walkLength, infinite ranges were an experimental notion that I was ready to remove if there wasn't enough practical support for it. So I didn't even think of the connection, which means the restriction wouldn't have likely made it into the definition of walkLength regardless of the formalism used. Before pull request 880, walkLength accepted infinite ranges and just returned size_t.max. Pull request 880 added a constraint to walkLength to stop it accepting infinite ranges. Suddenly, you cannot zip a range of infinite chars because of a seemingly unrelated change. The connection is obvious and is independent qualitatively of other cases of if you change A and B uses it, B may change in behavior too. It's a pattern old as dust in programming. 
Anyway, I'm not sure whether this is clear as day: expressing constraints as Booleans or C++ concepts style or Gangnam style doesn't influence this case in the least. This would have been caught if there was a unit test, but there wasn't, and as Dijkstra says, testing "can be a very effective way to show the presence of bugs, but it is hopelessly inadequate for showing their absence". There are probably other places that are broken, and many changes in the future will just introduce more bugs without tests. Maybe we have different standards for "working well", but to me at least, this isn't what working well looks like. Working well in this case would look like this: - The person that put together pull request 880 would add the template constraint to walkLength. - On the next compile he would get this error: "formatRange potentially calls walkLength with an infinite range" (or something along those lines). - The person fixes formatRange, and all is well. No need for unit tests; it's all caught as soon as possible, without need for instantiation. But this works today and has nothing to do with retrofitting structure to templates. Nothing. Nothing. Andrei
Re: isDroppable range trait for slicing to end
On 10/29/12 3:14 PM, Dmitry Olshansky wrote: [snip] 3:14 PM? Since you're in the future, please let me know when the market opens :o). Andrei
Re: isDroppable range trait for slicing to end
On 10/29/12 11:43 AM, monarch_dodra wrote: On Monday, 29 October 2012 at 15:41:23 UTC, Peter Alexander wrote: On Monday, 29 October 2012 at 14:33:01 UTC, monarch_dodra wrote: I'd like to introduce a new primitive: popFrontN. http://dlang.org/phobos/std_range.html#popFrontN On Monday, 29 October 2012 at 14:33:01 UTC, monarch_dodra wrote: You may recognize this as a standalone function if range.d I think you missed the point ... which I think Dmitry destroyed. Andrei
Re: Another day in the ordeal of cartesianProduct
On Monday, 29 October 2012 at 14:47:37 UTC, Don Clugston wrote: One fairly non-disruptive thing we could do: implement code coverage for templates. Currently, templates get no code coverage numbers. We could do a code-coverage equivalent for templates: which lines actually got instantiated? I bet this would show _huge_ gaps in the existing test suite. That's a good step forward, but I don't think it solves (what appears to me to be) the most common issue: incorrect/missing template constraints. auto average(R)(R r) if (isForwardRange!R) { return reduce!"a + b"(r) / walkLength(r); } (This code can have full coverage for some ranges, but also needs to check !isInfinite!R, and probably other things). These are difficult because every time a constraint changes, you need to go round and update the constraints of all calling functions. It's like when you change the type of a function argument, or the return type; except that with template constraints, nothing is checked until instantiation.
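For what it's worth, a sketch of the fix the example calls for - tightening the constraint so infinite ranges are rejected at the call site rather than failing deep inside:

```d
import std.algorithm : reduce;
import std.range;

// The constraint now rejects infinite ranges up front, so walkLength
// is only reachable when it is known to terminate.
auto average(R)(R r)
    if (isForwardRange!R && !isInfinite!R)
{
    return reduce!"a + b"(r) / walkLength(r);
}

void main()
{
    assert(average([2, 4, 6]) == 4);

    // average(repeat(1)); // now rejected at compile time: repeat(1) is infinite
}
```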
Kickstarter and Conference
I went to buy early bird tickets for the D conference through Kickstarter. However, they only allow payments through Amazon, which I refuse to use. Is there any way to sign up at the $250 level without going through Amazon?
Re: Another day in the ordeal of cartesianProduct
On Monday, 29 October 2012 at 15:48:11 UTC, Andrei Alexandrescu wrote: On 10/28/12 8:28 AM, Peter Alexander wrote: For example, here's what happened with bug 8900 mentioned in the OP: std.range.zip creates a Zip object, which has a Tuple member. Tuple has a toString function, which calls formatElement, which calls formatValue, which calls formatRange, which (when there's a range of characters) has a code path for right-aligning the range. To right-align the range it needs to call walkLength. The problem arises when you zip an infinite range of characters e.g. repeat('a'). This proves nothing at all. So this has to do with invoking walkLength against an infinite range. At the time I wrote walkLength, infinite ranges were an experimental notion that I was ready to remove if there wasn't enough practical support for it. So I didn't even think of the connection, which means the restriction wouldn't have likely made it into the definition of walkLength regardless of the formalism used. You're misunderstanding. walkLength used to allow infinite ranges. Recently, a commit added a constraint to walkLength to disallow infinite ranges. After this commit, all the unit tests still passed, but at least one bug was introduced (bug 8900). That's the problem: a change occurred that introduced a bug, but the type system failed to catch it before the change was committed. Something like typeclasses would have caught the bug before commit and without unit tests. The connection is obvious and is independent qualitatively of other cases of if you change A and B uses it, B may change in behavior too. It's a pattern old as dust in programming. Anyway, I'm not sure whether this is clear as day: expressing constraints as Booleans or C++ concepts style or Gangnam style doesn't influence this case in the least. If I change A and B uses it, I expect B to give an error or at least a warning at compile time where possible. This doesn't happen. 
With template constraints, you don't get an error until you try to instantiate the template. This is too late in my opinion. I would like this to give an error: void foo(R)(R r) if (isForwardRange!R) { r.popBack(); } It doesn't, not until you try to use it at least, and even then it only gives you an error if you try it with a non-bidirectional forward range. If this did give an error, bug 8900 (and many others) would never have happened. The problem with constraints vs. something like typeclasses or C++ concepts is that constraint predicates are not possible to enforce pre-instantiation. They have too much freedom of expression. Working well in this case would look like this: - The person that put together pull request 880 would add the template constraint to walkLength. - On the next compile he would get this error: formatRange potentially calls walkLength with an infinite range. (or something along those lines). - The person fixes formatRange, and all is well. No need for unit tests, it's all caught as soon as possible without need for instantiation. But this works today and has nothing to do with retrofitting structure to templates. Nothing. Nothing. It doesn't work today. This isn't a fabricated example. This happened. walkLength changed its constraint, everything still compiled, and all the unit tests passed. There was no error, no hint that things were broken, nothing. Problems only started to arise when the poor OP tried to implement cartesianProduct. This should never have happened. Typeclasses or C++ concepts wouldn't have allowed it to happen. This is the kind of structure that templates need.
Re: isDroppable range trait for slicing to end
29/10/2012 15:50, Andrei Alexandrescu wrote: On 10/29/12 3:14 PM, Dmitry Olshansky wrote: [snip] 3:14 PM? Since you're in the future, please let me know when the market opens :o). Just got Windows 8 Pro installed; my copy must have come with a time-machine addition. Need to read these feature charts more carefully next time :) On a serious note, I recall there is a problem with date/time functionality on Win8 that manifests itself as an assertion in std.datetime unittests. Need to ping Jonathan about it and work out something. -- Dmitry Olshansky
Re: Kickstarter and Conference
On 10/29/2012 9:11 AM, John Mandeville wrote: I went to buy early bird tickets for the D conference through Kickstarter. However, they only allow payments through Amazon, which I refuse to use. Is there any way to sign up at the $250 level without going through Amazon? Unfortunately, that's how Kickstarter works. The only other way would be to send Andrei a check, and he could post to Kickstarter on your behalf. Sending Digital Mars a check won't work, as Kickstarter won't let the project originator pledge to his own project.
Re: Why D is annoying =P
On Sunday, 28 October 2012 at 05:28:20 UTC, H. S. Teoh wrote: I have to say I was surprised to find out about this after I started using D; after first reading TDPL, I had the impression that it was supposed to do the right thing (and the right thing being == on each field, which I assumed the compiler would optimize into a bitwise compare where possible). Let's fix this. There is a problem with floating point members, since nan values will always compare false. There are other things in D that bother me, such as how struct pointers are managed, where they sometimes behave exactly like class references, but not always. The inconsistent behaviour is a trap for the programmer to fall into, and it leads to subtle, hard-to-find errors. For example: struct S { int a; ~this() {} } class C { int a; ~this() {} } // identical behaviour auto Sp = new S; auto Cr = new C; // identical behaviour writeln(Sp.a); writeln(Cr.a); // identical behaviour auto Sp2 = Sp; auto Cr2 = Cr; writeln(Sp2.a); writeln(Cr2.a); // ? assert(Sp2 == Sp); assert(Cr2 == Cr); // different behaviour! clear(Sp); clear(Cr); The last two lines compile OK, but clear(Sp) does not invoke the destructor while clear(Cr) does. clear(*Sp) works, but why allow clear(Sp) if it's a pointless operation? Is this a bug in the compiler or with the language design? Subtle differences like this are very nasty. The code should behave identically for both the struct pointer and the class reference, and if this is not possible for some obscure reason, then the compiler should fail at the clear(Sp) line. If I invoke with clear(*Sp) it works, but the inconsistency makes template code that takes in both a struct or a class impossible to do (or at best needlessly difficult to do). --rt
Re: Kickstarter and Conference
On 10/29/12 12:11 PM, John Mandeville wrote: I went to buy early bird tickets for the D conference through Kickstarter. However, I they only allow payments through Amazon which I refuse to use. Is there any way to sign up at the $250 dollar level without going through Amazon? Not for the time being, sorry. Andrei
Re: assert(false, ...) doesn't terminate program?!
On 10/29/2012 7:51 AM, Don Clugston wrote: On 27/10/12 20:39, H. S. Teoh wrote: On Sat, Oct 27, 2012 at 08:26:21PM +0200, Andrej Mitrovic wrote: On 10/27/12, H. S. Teoh hst...@quickfur.ath.cx wrote: writeln("how did the assert not trigger??!!"); // how did we get here?! Maybe related to -release? [...] Haha, you're right, the assert is compiled out because of -release. But I disassembled the code, and didn't see the auto x = 1/toInt() either. Is the compiler optimizing that away? Yes, and I don't know on what basis it thinks it's legal to do that. Because x is a dead assignment, and so the 1/ is removed. Divide by 0 faults are not considered a side effect. I think the code would be better written as: if (toInt() == 0) throw new Error("divide by zero"); If you really must have a divide by zero fault, if (toInt() == 0) divideByZero(); where: void divideByZero() { static int x; *cast(int*)0 = x / 0; }
Re: To avoid some linking errors
On 10/29/2012 2:49 AM, Jacob Carlborg wrote: You'll see the same complaints from the same people appearing for C code being linked, which does not have mangled names. You still don't get any source location. It's usually pretty obvious, but when it isn't, I use: grep -r symbol_name *.d or whatever search function your IDE has. http://www.digitalmars.com/ctg/OptlinkErrorMessages.html#symbol_undefined That's only for optlink. True, but the suggestions for what to do next apply to any linker. *Every* programmer should know what a linker does. I agree with you, but again, that's not the world we live in. On the other hand, why does, say, a PHP (insert your favorite dynamic programming language that doesn't use a linker) programmer need to know what a linker is? Because it's a fundamental tool for programmers, despite PHP not using it. It's like knowing what a CPU register is.
Re: isDroppable range trait for slicing to end
On Monday, 29 October 2012 at 15:48:47 UTC, Andrei Alexandrescu wrote: On 10/29/12 11:43 AM, monarch_dodra wrote: I think you missed the point ... which I think Dmitry destroyed. Andrei The only point he contested was the optimization opportunities in std.algorithm. I agree that optimization opportunities are not enough to warrant new concepts, but that wasn't my main point. What I was saying is that they are there. (PS: There is currently a pull request for making copy exploit doubly RA ranges) My main point is that slicing a range to its end *is* something important, and we currently have nothing to provide this functionality, when we could (easily). The argument "I'm thinking that simply defining an opDollar to return a special marker type and overloading opSlice should work" works, but brings its own issues to the table. Inside template code, it would render hasSlicing *even more* complex: If an infinite range indeed has slicing, then what exactly does it mean? - Does it mean you can slice between two indexes? - Does it guarantee you can slice to the end with opDollar? - Does it mean you can do both? - Would it imply that r[0 .. 1] would have a different type from r[0 .. $]? - Would it imply that r = r[0 .. $] is legal? - What about r = r[0 .. 10]? And still, that'd be if anybody actually used opDollar... *cough* The solution I'm proposing barely requires anything new we don't already have (popFrontN). I'm saying we can exploit the existence of this method to clearly separate the two (currently conflicting) notions of slicing we currently have: *On one hand, we can have the hasSlicing ranges, where we can clearly write r = r[0 .. 10]; any day of the week, no matter the range.
*On the other hand, we'd have isDroppable, which would give you two limited features for those ranges that don't satisfy hasSlicing: **Slice to the end, with guaranteed assignability to the original: r = r.drop(10); **Extract a slice, but with the explicit notion you *won't* get back-assignability: auto myNewSlice = r.extractSlice(0, 10); Note that this extractSlice notion would save a bit of functionality for immutable ranges which *would* have slicing, but since they don't support assignment, don't actually verify hasSlicing...
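To make the proposal concrete, here is one possible sketch of the isDroppable trait and drop. The names come from the proposal itself; the implementation details (member-popFrontN detection, the Naturals example range) are my own guesses, not Phobos code:

```d
import std.range;

// Sketch of the proposed trait: a range is droppable if it is fully
// sliceable, or if it implements popFrontN itself (ideally in O(1)).
enum isDroppable(R) = isInputRange!R &&
    (hasSlicing!R || __traits(hasMember, R, "popFrontN"));

// drop: advance n elements while preserving the original range's type.
R drop(R)(R r, size_t n) if (isDroppable!R)
{
    static if (hasSlicing!R && hasLength!R)
        return r[n .. r.length];
    else
    {
        r.popFrontN(n);   // the range's own O(1) implementation
        return r;
    }
}

// Infinite arithmetic sequence with an O(1) member popFrontN.
struct Naturals
{
    size_t n;
    enum bool empty = false;
    @property size_t front() { return n; }
    void popFront() { ++n; }
    void popFrontN(size_t k) { n += k; }
}

void main()
{
    static assert(isDroppable!Naturals);
    auto r = Naturals(0).drop(10);   // same type as Naturals, no slicing needed
    assert(r.front == 10);
}
```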
Re: Another day in the ordeal of cartesianProduct
On 10/29/12 12:21 PM, Peter Alexander wrote: On Monday, 29 October 2012 at 15:48:11 UTC, Andrei Alexandrescu wrote: On 10/28/12 8:28 AM, Peter Alexander wrote: For example, here's what happened with bug 8900 mentioned in the OP: std.range.zip creates a Zip object, which has a Tuple member. Tuple has a toString function, which calls formatElement, which calls formatValue, which calls formatRange, which (when there's a range of characters) has a code path for right-aligning the range. To right-align the range it needs to call walkLength. The problem arises when you zip an infinite range of characters e.g. repeat('a'). This proves nothing at all. So this has to do with invoking walkLength against an infinite range. At the time I wrote walkLength, infinite ranges were an experimental notion that I was ready to remove if there wasn't enough practical support for it. So I didn't even think of the connection, which means the restriction wouldn't have likely made it into the definition of walkLength regardless of the formalism used. You're misunderstanding. walkLength used to allow infinite ranges. Recently, a commit added a constraint to walkLength to disallow infinite ranges. After this commit, all the unit tests still passed, but at least one bug was introduced (bug 8900). I thought I understood the matter rather well. That's the problem: a change occurred that introduced a bug, but the type system failed to catch it before the change was committed. Something like typeclasses would have caught the bug before commit and without unit tests. Yes, but what gets ignored here is that typeclasses have a large cognitive cost to everyone involved. I think typeclasses generally don't pull their weight. Besides I think template constraints are more powerful because they operate on arbitrary Boolean expressions instead of types. The connection is obvious and is independent qualitatively of other cases of if you change A and B uses it, B may change in behavior too. 
It's a pattern old as dust in programming. Anyway, I'm not sure whether this is clear as day: expressing constraints as Booleans or C++ concepts style or Gangnam style doesn't influence this case in the least. If I change A and B uses it, I expect B to give an error or at least a warning at compile time where possible. This doesn't happen. With template constraints, you don't get an error until you try to instantiate the template. That's also at compile time, just a tad later. This is too late in my opinion. I think there's a marked difference between compile-time and run-time. Instantiation time does not make a big enough difference to bring a big gun into the mix. I would like this to give an error: void foo(R)(R r) if (isForwardRange!R) { r.popBack(); } It doesn't, not until you try to use it at least, and even then it only gives you an error if you try it with a non-bidirectional forward range. So then a unittest with a minimal mock forward range should be created. I understand your concern, but please understand that typeclasses are too big a weight for what they do. If this did give an error, bug 8900 (and many others) would never have happened. How many others? (Honest question.) The problem with constraints vs. something like typeclasses or C++ concepts is that constraint predicates are not possible to enforce pre-instantiation. They have too much freedom of expression. Freedom of expression is also a strength. (The problem with C++ concepts is that they almost sunk C++.) Working well in this case would look like this: - The person that put together pull request 880 would add the template constraint to walkLength. - On the next compile he would get this error: formatRange potentially calls walkLength with an infinite range. (or something along those lines). - The person fixes formatRange, and all is well. No need for unit tests, it's all caught as soon as possible without need for instantiation.
But this works today and has nothing to do with retrofitting structure to templates. Nothing. Nothing. It doesn't work today. This isn't a fabricated example. It's just blown out of proportion. This happened. walkLength changed its constraint, everything still compiled, and all the unit tests passed. There was no error, no hint that things were broken, nothing. Problems only started to arise when the poor OP tried to implement cartesianProduct. This should never have happened. Typeclasses or C++ concepts wouldn't have allowed it to happen. This is the kind of structure that templates need. We will not add C++ concepts or typeclasses to D. Andrei
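The "unittest with a minimal mock forward range" suggested above can be sketched like this (a hypothetical test pattern, not existing Phobos code - the mock deliberately implements only the forward-range primitives):

```d
import std.range;

// A deliberately minimal forward range: exactly empty/front/popFront/save
// and nothing else (no back/popBack, no length, no slicing).
struct MinimalForwardRange(T)
{
    T[] data;
    @property bool empty() { return data.length == 0; }
    @property T front() { return data[0]; }
    void popFront() { data = data[1 .. $]; }
    @property MinimalForwardRange save() { return this; }
}

static assert( isForwardRange!(MinimalForwardRange!int));
static assert(!isBidirectionalRange!(MinimalForwardRange!int));

// The foo from the example above: claims to need only a forward range...
void foo(R)(R r) if (isForwardRange!R) { r.popBack(); }

void main()
{
    // ...so instantiating it with the minimal mock exposes the lie
    // at compile time, with no arrays accidentally supplying popBack:
    static assert(!__traits(compiles, foo(MinimalForwardRange!int.init)));
}
```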
Re: To avoid some linking errors
On 29/10/2012 18:38, Walter Bright wrote: On 10/29/2012 2:49 AM, Jacob Carlborg wrote: You'll see the same complaints from the same people appearing for C code being linked, which does not have mangled names. You still don't get any source location. It's usually pretty obvious, but when it isn't, I use: grep -r symbol_name *.d or whatever search function your IDE has. http://www.digitalmars.com/ctg/OptlinkErrorMessages.html#symbol_undefined That's only for optlink. True, but the suggestions for what to do next apply to any linker. *Every* programmer should know what a linker does. I agree with you, but again, that's not the world we live in. On the other hand, why does, say, a PHP (insert your favorite dynamic programming language that doesn't use a linker) programmer need to know what a linker is? Because it's a fundamental tool for programmers, despite PHP not using it. It's like knowing what a CPU register is. Haha, I know a lot of professional programmers who do not know the difference between the stack and the heap but still are able to write useful code. You really seem to be lurking way too much on newsgroups with knowledgeable people ;)
Re: To avoid some linking errors
On Sunday, 28 October 2012 at 20:59:25 UTC, Walter Bright wrote: On 10/28/2012 1:34 PM, deadalnix wrote: As Andrei stated, the linker's native language is encrypted klingon. It baffles me that programmers would find undefined symbol hard to make sense of. _D3yeah9whats82__T4soS36_D4hard4toFZv9__lambda1FNaNbNfiiZbVE3understand9about12this ;-) Seriously though, it's irrelevant. The fact is a lot of programmers, especially new programmers or ones from programming languages that don't use linkers find link errors scary and confusing. Pretending otherwise gets us nowhere. Saying it baffles you why things are this way gets us nowhere. Saying that they should understand gets us nowhere.
Re: Another day in the ordeal of cartesianProduct
Andrei Alexandrescu: Yes, but what gets ignored here is that typeclasses have a large cognitive cost to everyone involved. Such costs are an interesting discussion topic :-) To put people up to speed a bit: http://en.wikipedia.org/wiki/Type_class A bit of explanations regarding Rust ones: https://air.mozilla.org/rust-typeclasses/ Deeper info from one of the original designers: http://homepages.inf.ed.ac.uk/wadler/topics/type-classes.html I think typeclasses generally don't pull their weight. Rust and Haskell designers think otherwise, it seems. We will not add C++ concepts or typeclasses to D. I agree that maybe now it's too much late to add them to D2. But we are free to discuss about the topic. I didn't know much about typeclasses years ago when people were designing D2 :-( I have studied them only recently while learning Haskell. Bye, bearophile
Re: To avoid some linking errors
On 10/29/12 2:10 PM, Peter Alexander wrote: On Sunday, 28 October 2012 at 20:59:25 UTC, Walter Bright wrote: On 10/28/2012 1:34 PM, deadalnix wrote: As Andrei stated, the linker's native language is encrypted klingon. It baffles me that programmers would find undefined symbol hard to make sense of. _D3yeah9whats82__T4soS36_D4hard4toFZv9__lambda1FNaNbNfiiZbVE3understand9about12this ;-) Seriously though, it's irrelevant. The fact is a lot of programmers, especially new programmers or ones from programming languages that don't use linkers, find link errors scary and confusing. Pretending otherwise gets us nowhere. Saying it baffles you why things are this way gets us nowhere. Saying that they should understand gets us nowhere. I agree (and was about to post something very close to this). I've heard many times about this particular baffling, and it's one of those cases in which clearly people who are otherwise competent have quite a bit of difficulty. So one reasonable resolution is "well, that's how people are", and that you think differently doesn't solve the matter one bit, so let's see what steps to take on improving it. From what I can tell, here's how to solve linker error issues: 1. Automatic demangling of the symbols involved must be in place. 2. For undefined symbols, there must be a reference at source file and line level of where they are referred to - /all/ places! 3. For multiply defined symbols, there must be a reference at source file and line level for each definition. I understand there are technical difficulties in implementing the above, but that doesn't justify being baffled. Being baffled is not an option. Andrei
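Point 1 is partly available already: druntime ships a demangler in core.demangle, which a compiler or linker wrapper could apply to the symbols in error messages. A small illustration (the module-qualified output varies with the file name, so none is shown):

```d
import core.demangle : demangle;
import std.stdio;

int answer() { return 42; }

void main()
{
    // .mangleof yields the linker-level name of a symbol;
    // demangle maps it back to readable D syntax.
    auto mangled = answer.mangleof;
    writeln(mangled, "  ->  ", demangle(mangled));
}
```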
Re: Another day in the ordeal of cartesianProduct
On 10/29/12 2:16 PM, bearophile wrote: Andrei Alexandrescu: Yes, but what gets ignored here is that typeclasses have a large cognitive cost to everyone involved. Such costs are an interesting discussion topic :-) To put people up to speed a bit: http://en.wikipedia.org/wiki/Type_class A bit of explanations regarding Rust ones: https://air.mozilla.org/rust-typeclasses/ Deeper info from one of the original designers: http://homepages.inf.ed.ac.uk/wadler/topics/type-classes.html For those who wouldn't know how to search the Net, these indeed are quite appropriate. I think typeclasses generally don't pull their weight. Rust and Haskell designers think otherwise, it seems. It's a matter of what priorities the language has and what other features are available overlapping with the projected usefulness. Andrei
Re: Why D is annoying =P
On Mon, Oct 29, 2012 at 06:34:14PM +0100, Rob T wrote: [...] There is a problem with floating point memebers, since nan values will always compare false. This is IEEE specification. There is no problem. T -- Frank disagreement binds closer than feigned agreement.
Re: Another day in the ordeal of cartesianProduct
Andrei Alexandrescu: For those who wouldn't know how to search the Net, these indeed are quite appropriate. People are often a bit lazy in clicking on links present in newsgroup messages, and they are even lazier in searching for things. So I've seen numerous times that if you give them links they will be more willing to read. This is especially true about topics they know nothing about, because they have a higher cognitive impedance. Google gives many results about this topic, but not all of them are good. That link from Mozilla is a little video that doesn't tell a lot. The papers from Wadler are the ones Haskell typeclasses come from; they are harder, but they are more fundamental. But starting from those papers is hard, so it's better to start from some examples and simpler explanations. In the Haskell wiki there are more practical texts on this topic, and comparisons: http://www.haskell.org/tutorial/classes.html http://www.haskell.org/haskellwiki/OOP_vs_type_classes Plus an online free chapter of what maybe is the best book about Haskell: http://book.realworldhaskell.org/read/using-typeclasses.html It's a matter of what priorities the language has and what other features are available overlapping with the projected usefulness. I agree. On the other hand, the main point of this thread is that someone perceives the current features of D as not good enough or not sufficient. Even if their perception is wrong (and you are an expert in this field, so we trust your words), I think the topic is worth discussing. Bye, bearophile
Decimal Floating Point types.
Speaking on behalf of Dejan, he expressed a wish to have such a type in D (e.g. such that assert(3.6 * 10 == 36.0) holds - which may not always be true on all architectures with binary floating point). As a new backend type is maybe out of the question, perhaps we should create a new library type for the job - e.g. _Decimal32, _Decimal64, _Decimal128. Thoughts? Regards Iain.
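Until then, a library type along these lines can be approximated with scaled integers. Below is a deliberately tiny sketch (fixed scale, no rounding modes or exponents - real IEEE 754-2008 decimal32/64/128 types would need all of that, and the names here are illustrative):

```d
// Toy decimal value with a fixed scale of 10^-4, stored in an integer.
// Because the scale is a power of ten, values like 3.6 are exact,
// unlike in binary floating point.
struct Decimal
{
    long units;                 // value * scale
    enum scale = 10_000;

    // fromLiteral(3, 6_000) represents 3.6000 exactly.
    static Decimal fromLiteral(long integral, long frac4)
    {
        return Decimal(integral * scale + frac4);
    }

    Decimal opBinary(string op : "*")(long rhs) const
    {
        return Decimal(units * rhs);
    }

    bool opEquals(Decimal rhs) const { return units == rhs.units; }
}

void main()
{
    auto d = Decimal.fromLiteral(3, 6_000);        // exactly 3.6
    assert(d * 10 == Decimal.fromLiteral(36, 0));  // holds exactly in decimal
}
```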
Re: Another day in the ordeal of cartesianProduct
On Monday, 29 October 2012 at 17:56:26 UTC, Andrei Alexandrescu wrote: We will not add C++ concepts or typeclasses to D. Ok. There is no point continuing this discussion then.
Re: To avoid some linking errors
On Mon, Oct 29, 2012 at 07:05:33PM +0100, Faux Amis wrote: On 29/10/2012 18:38, Walter Bright wrote: On 10/29/2012 2:49 AM, Jacob Carlborg wrote: [...] I agree with you, but again, that's not the world we live in. On the other hand, why does, say, an PHP (insert your favorite dynamic programming language that doesn't use a linker) programmer need to know what a linker is? Because it's a fundamental tool for programmers, despite PHP not using it. It's like knowing what a CPU register is. Haha, I know a lot of professional programmers who do not know the difference between the stack and the heap but still are able to write useful code. The kind of code the average professional programmer produces is ... shall I say, underwhelming? It's the kind of thing that makes me consider career switches. Raising the bar for programmer qualification will do the world a lot of good. T -- In theory, there is no difference between theory and practice.
Re: Decimal Floating Point types.
On Mon, Oct 29, 2012 at 07:43:35PM +0100, Iain Buclaw wrote: Speaking on behalf of Dejan, he expressed a wish to have such a type in D. (eg: such that assert(3.6 * 10 == 36.0) - which may not always be true on all architectures). As maybe a new backend type is out of the question. Perhaps we should create a new library type for the job - eg: _Decimal32, _Decimal64, _Decimal128. [...] Implementing it in the library makes sense. T -- There's light at the end of the tunnel. It's the oncoming train.
Re: isDroppable range trait for slicing to end
On 10/29/2012 5:40 PM, monarch_dodra wrote: On Monday, 29 October 2012 at 15:48:47 UTC, Andrei Alexandrescu wrote: On 10/29/12 11:43 AM, monarch_dodra wrote: I think you missed the point ... which I think Dmitry destroyed. Andrei I'd clarify that I'm not against the _trait_ itself, isDroppable. The name doesn't sit well with me, but the idea of testing whether a range supports a limited form of slicing, i.e. a[x..$], is a good one. I'd call it limited slicing or one-sided slicing. Everything else in the post - a definite no. The only point he contested was the optimization opportunities in std.algorithm. That was an observation. I'm curious how many things you can sensibly do with an infinite range directly (no take, takeExactly) that would benefit from being able to iterate it by common index. I have reason to believe the set is small at best. I agree that optimization opportunities are not enough to warrant new concepts, but that wasn't my main point. But they are there is what I was saying. (PS: There is currently a pull request for making copy exploit doubly-RA ranges) Yeah, that's mine... Now the challenge. Quick! How many infinite RA ranges with assignable elements do we have in Phobos presently? My main point is that slicing a range to its end *is* something important, and we currently have nothing to provide this functionality, when we could (easily). The argument: I'm thinking that simply defining an opDollar to return a special marker type and overloading opSlice should work. It works, but brings its own issues to the table. Inside template code, it would render hasSlicing *even more* complex: If an infinite range indeed has slicing, then what exactly does it mean? Basically, it wasn't defined precisely. And I don't see how problematic it is to refine the definition. - Does it mean you can slice between two indexes? - Does it guarantee you can slice to the end with opDollar? - Does it mean you can do both? - Would it imply that r[0 .. 1] would have a different type from r[0 .. $]? - Would it imply that r = r[0 .. $] is legal? - What about r = r[0 .. 10]? I'll comment more on these at the bottom. The gist is: all of this boils down to one question that adding a popFrontN can't solve: the semantics of slicing an infinite range on 2 indexes. Everything else is trivial to nail down. And still, that'd be if anybody actually used opDollar... *cough* Introducing a new hook for programmers to implement because currently opDollar isn't used (and I told you why) is a bad idea. It is making a new *convention* that bypasses an existing one built into the language. The solution I'm proposing barely requires anything new we don't already have (popFrontN). It requires something new from users: implement another way to slice a range. While presently popFrontN already works in O(1) for stuff that has [x..$] slicing. Put it another way: library solutions are nice and usable as long as they blend with the core language. Let's not repeat the (few) mistakes of STL. I'm saying we can exploit the existence of this method to clearly separate the two (currently conflicting) notions of slicing we currently have: *On one hand, we can have the hasSlicing ranges, where one can clearly write r = r[0 .. 10]; any day of the week, no matter the range. *On the other end, we'd have isDroppable, which would give you two limited features for those ranges that don't satisfy hasSlicing: **Slice to the end with guaranteed assignability to the original: r = r.drop(10); So all of the above can be put into the following statements: - all RA ranges have $ that is the end of range - a slice is self-assignable in any case - an infinite range just plain can't support slicing on 2 indexes (it has limited slicing, or one-sided slicing, not full slicing) I'd argue that any RA range can support slicing simply because supporting popFront/popBack is required.
I believe there are no precedents where implementing these won't amount to: a) adding start/end indexes on top of the underlying RA payload b) using some kind of random-access pointer(s) provided natively For infinite ranges hasSlicing is false, limitedSlicing (isDroppable) is true. So I suggest we make the function popFrontN more clever w.r.t. infinite ranges with a limited form of slicing. That's all. And you are correct to notice it misses an optimization in this case. And the constraint should be fixed to isDroppable/limitedSlicing. It's a losing battle to add fixed-range slicing to InfiniteRange. Arguably, for infinite ranges the way to slice is: a[x..y] --- a.drop(x).takeExactly(y-x) Because it doesn't have full slicing, that is, hasSlicing. Clear as day. Note that drop(x) will get the speed-up. **Extract a slice, but with the explicit notion you *won't* get back-assignability: auto myNewSlice = r.extractSlice(0, 10); Another primitive, or is that UFCS in the works? Now when to use it? I'd hate to see everything
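A rough sketch of the trait under discussion might look like this; the name and the exact test are illustrative, not a settled design:

```d
import std.range.primitives : isInputRange;

// A range is "droppable" if r[n .. $] compiles and the result can be
// assigned back to r -- the limited, one-sided form of slicing, as
// opposed to full hasSlicing with two arbitrary indexes.
enum bool isDroppable(R) = isInputRange!R && is(typeof((R r) {
    r = r[1 .. $];
}));

unittest
{
    static assert(isDroppable!(int[]));  // dynamic arrays slice to $
    static assert(!isDroppable!int);     // not even a range
}
```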
Re: Decimal Floating Point types.
On 10/29/2012 6:43 PM, Iain Buclaw wrote: Speaking on behalf of Dejan, he expressed a wish to have such a type in D. (eg: such that assert(3.6 * 10 == 36.0) - which may not always be true on all architectures). As maybe a new backend type is out of the question. Perhaps we should create a new library type for the job - eg: _Decimal32, _Decimal64, _Decimal128. Thoughts? I recall there was a proposal for Phobos with both fixed decimal floating point types and arbitrary precision variants. And, taking the role of the good jinn: https://github.com/andersonpd/decimal/tree/master/decimal (seems very much alive and kicking) Regards Iain. -- Dmitry Olshansky
Re: To avoid some linking errors
On 2012-10-29 19:05, Faux Amis wrote: Haha, I know a lot of professional programmers who do not know the difference between the stack and the heap but still are able to write useful code. You really seem to be lurking way too much on newsgroups with knowledgeable people ;) I completely agree. I can probably count the number of PHP developers who know what a CPU register is on one hand. Let me add to that: Ruby, JavaScript, Python and a bunch of other languages. -- /Jacob Carlborg
Re: isDroppable range trait for slicing to end
On Monday, 29 October 2012 at 19:20:34 UTC, Dmitry Olshansky wrote: [SNIP] I'll need some time to fully digest your points. Thank you for the full reply. I'd like Jonathan's opinions on this too, he is knee deep in hasSlicing right now...
Re: Why D is annoying =P
On Monday, 29 October 2012 at 18:39:41 UTC, H. S. Teoh wrote: On Mon, Oct 29, 2012 at 06:34:14PM +0100, Rob T wrote: [...] There is a problem with floating point members, since NaN values will always compare false. This is the IEEE specification. There is no problem. T If struct comparisons compare value by value, and both have NaN values in one member variable, then the two otherwise completely identical structs will always compare false even though they are identical. I think it would be best to have the ability to compare structs in two ways: one as struct equivalence, which would be the default method, the other as value-by-value equivalence, which would have to be programmer-defined through overloading of the == operator. --rt
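Rob T's two comparison modes could be sketched like so. A minimal example; the opEquals policy shown (NaN members treated as equal) is just one possible programmer-defined choice:

```d
import std.math : isNaN;

struct Plain
{
    double x;   // double.init is NaN
}

struct NanAware
{
    double x;

    // Programmer-defined equivalence: treat two NaN members as equal,
    // otherwise fall back to ordinary IEEE comparison.
    bool opEquals(const NanAware rhs) const
    {
        return (x.isNaN && rhs.x.isNaN) || x == rhs.x;
    }
}

void main()
{
    Plain a, b;
    assert(a != b);   // default member-wise ==: NaN never equals NaN

    NanAware c, d;
    assert(c == d);   // identical structs now compare equal
}
```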
Re: To avoid some linking errors
On 2012-10-29 19:19, Andrei Alexandrescu wrote: I agree (and was about to post something very close to this). I've heard many times about this particular baffling, and it's one of those cases in which clearly people who are otherwise competent have quite a bit of difficulty. So one reasonable resolution is well that's how people are, and that you think differently doesn't solve the matter one bit, so let's see what steps to take on improving it. From what I can tell here's how to solve linker error issues: 1. Automatic demangling of the symbols involved must be in place. 2. For undefined symbols, there must be reference at source file and line level of where they are referred - /all/ places! 3. For multiply defined symbols, there must be reference at source file and line level for each definition. I understand there are technical difficulties in implementing the above, but that doesn't justify being baffled. Being baffled is not an option. Well said. -- /Jacob Carlborg
Re: To avoid some linking errors
On 2012-10-29 20:00, H. S. Teoh wrote: The kind of code the average professional programmer produces is ... shall I say, underwhelming? It's the kind of thing that makes me consider career switches. Raising the bar for programmer qualification will do the world a lot of good. Absolutely. The bar is not set very high. -- /Jacob Carlborg
Re: To avoid some linking errors
On 29/10/2012 20:00, H. S. Teoh wrote: On Mon, Oct 29, 2012 at 07:05:33PM +0100, Faux Amis wrote: On 29/10/2012 18:38, Walter Bright wrote: On 10/29/2012 2:49 AM, Jacob Carlborg wrote: [...] I agree with you, but again, that's not the world we live in. On the other hand, why does, say, a PHP (insert your favorite dynamic programming language that doesn't use a linker) programmer need to know what a linker is? Because it's a fundamental tool for programmers, despite PHP not using it. It's like knowing what a CPU register is. Haha, I know a lot of professional programmers who do not know the difference between the stack and the heap but still are able to write useful code. The kind of code the average professional programmer produces is ... shall I say, underwhelming? It's the kind of thing that makes me consider career switches. Raising the bar for programmer qualification will do the world a lot of good. T I am not sure that fewer programmers would be a good thing for the world. But I have seen people who know all the ins and outs of their machine create horrible code which I will never allow to merge, and people who barely know what a pointer is produce very clean, readable and usable code.
Re: To avoid some linking errors
On Mon, 29 Oct 2012, Andrei Alexandrescu wrote: On 10/29/12 2:10 PM, Peter Alexander wrote: On Sunday, 28 October 2012 at 20:59:25 UTC, Walter Bright wrote: Seriously though, it's irrelevant. The fact is a lot of programmers, especially new programmers or ones from programming languages that don't use linkers find link errors scary and confusing. Pretending otherwise gets us nowhere. Saying it baffles you why things are this way gets us nowhere. Saying that they should understand gets us nowhere. I agree (and was about to post something very close to this). I've heard many times about this particular baffling, and it's one of those cases in which clearly people who are otherwise competent have quite a bit of difficulty. So one reasonable resolution is well that's how people are, and that you think differently doesn't solve the matter one bit, so let's see what steps to take on improving it. From what I can tell here's how to solve linker error issues: 1. Automatic demangling of the symbols involved must be in place. 2. For undefined symbols, there must be reference at source file and line level of where they are referred - /all/ places! 3. For multiply defined symbols, there must be reference at source file and line level for each definition. I understand there are technical difficulties in implementing the above, but that doesn't justify being baffled. Being baffled is not an option. Andrei There's another angle to this: 1) It's been stated more than once that one of the goals for D is to achieve a user base of over 1 million users. 2) I assert that there aren't more than 1 million programmers with the level of expertise and experience required to understand what happens during compilation to a sufficient degree that they feel comfortable with the tool chains that D (and c and c++) have today. Conclusion, the tool chains must get more user friendly.
Re: To avoid some linking errors
On 10/29/12, Brad Roberts bra...@puremagic.com wrote: Conclusion, the tool chains must get more user friendly. Yep. Just compare: $ dmd -c test.d -oftest.obj Optlink: $ link test.obj test.obj(test) Error 42: Symbol Undefined _D4test1A3fooMFZv Unilink: $ ulink test.obj Error: Unresolved external 'test.A.foo()' referenced from 'test.obj'
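For what it's worth, the demangling Unilink does is already available in druntime, so a front end or wrapper script could post-process Optlink's output. A small sketch:

```d
import core.demangle : demangle;
import std.stdio : writeln;

void main()
{
    // Turns the mangled name from the Optlink message back into a
    // readable D signature (roughly "void test.A.foo()" here).
    writeln(demangle("_D4test1A3fooMFZv"));
}
```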
Re: isDroppable range trait for slicing to end
On Monday, October 29, 2012 20:26:38 Dmitry Olshansky wrote: Need to ping Jonathon about it and work out something. I believe that Microsoft did something stupid with one of their functions which makes it function slightly differently around a DST switch on Windows 8 than it used to, so the unit tests fail when verifying that the times come out correctly around a DST switch (since they don't anymore on Windows 8). But I haven't had time yet to thoroughly investigate what exactly is causing it. Microsoft definitely sucks when it comes to time-handling stuff, but they're usually better about backwards compatibility than they appear to have been here. - Jonathan M Davis
Re: isDroppable range trait for slicing to end
Jonathan M Davis wrote: On Monday, October 29, 2012 20:26:38 Dmitry Olshansky wrote: Need to ping Jonathon about it and work out something. I believe that Microsoft did something stupid with one of their functions which makes it function slightly differently around a DST switch on Windows 8 than it used to, so the unit tests fail when verifying that the times come out correctly around a DST switch (since they don't anymore on Windows 8). But I haven't had time yet to thoroughly investigate what exactly is causing it. Microsoft definitely sucks when it comes to time-handling stuff, but they're usually better about backwards compatibility than they appear to have been here. I find it amazing how many bugs your unittests catch. Jens
Command Line Order + Linker Errors
I'm running into some inexplicable linker errors when trying to compile a project. I've tried two command lines to compile the project that I thought were equivalent except for the names of the output files: // emptymain.d: void main(){} // test.d: unittest { double[double] weights = [1:1.2, 4:2.3]; import std.stdio; writeln("PASSED"); } dmd -unittest emptymain.d test.d // Linker errors dmd -unittest test.d emptymain.d // Works Additionally, the linker errors only occur under a custom version of druntime. Don't try to reproduce them under the stock version. (For the curious, it's the precise heap scanning fork from https://github.com/rainers/druntime/tree/precise_gc2 . I'm trying to get precise heap scanning ready for prime time.) My real question, though, is why should the order of these files on the command line matter, and does this suggest a compiler or linker bug?
Re: Command Line Order + Linker Errors
On Monday, 29 October 2012 at 20:56:02 UTC, dsimcha wrote: My real question, though, is why should the order of these files on the command line matter and does this suggest a compiler or linker bug? What exactly are the errors you are getting? My first guess would be templates (maybe the precise GC RTInfo ones?) – determining which template instances to emit into what object files is non-trivial, and DMD is currently known to contain a few related bugs. The fact that the problem also appears when compiling all source files at once is somewhat special, though. David
Re: Command Line Order + Linker Errors
The messages are below. The exact messages are probably not useful but I included them since you asked. I meant to specify, though, that they're all undefined reference messages. Actually, none of these issues occur at all when compilation of the two files is done separately, regardless of what order the object files are passed to DMD for linking: dmd -c -unittest test.d dmd -c -unittest emptymain.d dmd -unittest test.o emptymain.o # Works dmd -unittest emptymain.o test.o # Works emptymain.o:(.data._D68TypeInfo_S6object26__T16AssociativeArrayTdTdZ16AssociativeArray4Slot6__initZ+0x80): undefined reference to `_D11gctemplates77__T11RTInfoImpl2TS6object26__T16AssociativeArrayTdTdZ16AssociativeArray4SlotZ11RTInfoImpl2yG2m' emptymain.o:(.data._D73TypeInfo_S6object26__T16AssociativeArrayTdTdZ16AssociativeArray9Hashtable6__initZ+0x80): undefined reference to `_D11gctemplates82__T11RTInfoImpl2TS6object26__T16AssociativeArrayTdTdZ16AssociativeArray9HashtableZ11RTInfoImpl2yG2m' emptymain.o:(.data._D69TypeInfo_S6object26__T16AssociativeArrayTdTdZ16AssociativeArray5Range6__initZ+0x80): undefined reference to `_D11gctemplates78__T11RTInfoImpl2TS6object26__T16AssociativeArrayTdTdZ16AssociativeArray5RangeZ11RTInfoImpl2yG2m' emptymain.o:(.data._D149TypeInfo_S6object26__T16AssociativeArrayTdTdZ16AssociativeArray5byKeyMFNdZS6object26__T16AssociativeArrayTdTdZ16AssociativeArray5byKeyM6Result6Result6__initZ+0x80): undefined reference to `_D11gctemplates86__T11RTInfoImpl2TS6object26__T16AssociativeArrayTdTdZ16AssociativeArray5byKeyM6ResultZ11RTInfoImpl2yG2m' emptymain.o:(.data._D153TypeInfo_S6object26__T16AssociativeArrayTdTdZ16AssociativeArray7byValueMFNdZS6object26__T16AssociativeArrayTdTdZ16AssociativeArray7byValueM6Result6Result6__initZ+0x80): undefined reference to `_D11gctemplates88__T11RTInfoImpl2TS6object26__T16AssociativeArrayTdTdZ16AssociativeArray7byValueM6ResultZ11RTInfoImpl2yG2m' emptymain.o: In function 
`_D11gctemplates66__T6bitmapTS6object26__T16AssociativeArrayTdTdZ16AssociativeArrayZ6bitmapFZG2m': test.d:(.text._D11gctemplates66__T6bitmapTS6object26__T16AssociativeArrayTdTdZ16AssociativeArrayZ6bitmapFZG2m+0x1b): undefined reference to `_D11gctemplates71__T10bitmapImplTS6object26__T16AssociativeArrayTdTdZ16AssociativeArrayZ10bitmapImplFPmZv' On Monday, 29 October 2012 at 21:08:52 UTC, David Nadlinger wrote: On Monday, 29 October 2012 at 20:56:02 UTC, dsimcha wrote: My real question, though, is why should the order of these files on the command line matter and does this suggest a compiler or linker bug? What exactly are the errors you are getting? My first guess would be templates (maybe the precise GC RTInfo ones?) – determining which template instances to emit into what object files is non-trivial, and DMD is currently known to contain a few related bugs. The fact that the problem also appears when compiling all source files at once is somewhat special, though. David
Re: isDroppable range trait for slicing to end
On Monday, October 29, 2012 21:56:36 Jens Mueller wrote: I find it amazing how many bugs your unittests catch. That's why they're there. It's far too easy to miss a corner case and end up with mostly working but still buggy code (especially with date/time stuff). At one point, I had a bug with B.C. years that ended in 99 that I only caught when I made some of the tests more thorough. Being thorough seems to be the only way to catch all those sorts of problems. And in spite of all of that, I've still had a bug or two in the calculations when it was merged into Phobos (long since fixed). DST switches are particularly nasty though, particularly since Microsoft absolutely sucks at time stuff, including the fact that it has a pitifully small number of time zones, and most of them are wrong. Testing that stuff across platforms is a major PITA, but I've tried very hard to guarantee that the behavior is the same across systems. Unfortunately, it looks like I'm going to have to spend some time figuring out how to hack around Windows 8's stupidity though. - Jonathan M Davis
Re: To avoid some linking errors
On Sunday, 28 October 2012 at 13:39:26 UTC, bearophile wrote: This code compiles with no errors, and then later the linker gives a Symbol Undefined: abstract class A { public void foo(); } class B : A {} void main() {} Interestingly, adding abstract to the method results in no linker error when compiling. And if a new B is created, a compiler error is provided instead of a linker error. I'm in favor of getting some line numbers over the less informative linker error.
Re: To avoid some linking errors
On 10/29/2012 1:19 PM, Brad Roberts wrote: There's another angle to this: 1) It's been stated more than once that one of the goals for D is to achieve a user base of over 1 million users. 2) I assert that there aren't more than 1 million programmers with the level of expertise and experience required to understand what happens during compilation to a sufficient degree that they feel comfortable with the tool chains that D (and c and c++) have today. Conclusion, the tool chains must get more user friendly. Stroustrup estimates more than 3 million C++ users in 2004. http://www.stroustrup.com/bs_faq.html#number-of-C++-users There are probably more than that many C users.
Re: To avoid some linking errors
On Mon, 29 Oct 2012, Walter Bright wrote: On 10/29/2012 1:19 PM, Brad Roberts wrote: There's another angle to this: 1) It's been stated more than once that one of the goals for D is to achieve a user base of over 1 million users. 2) I assert that there aren't more than 1 million programmers with the level of expertise and experience required to understand what happens during compilation to a sufficient degree that they feel comfortable with the tool chains that D (and C and C++) have today. Conclusion, the tool chains must get more user friendly. Stroustrup estimates more than 3 million C++ users in 2004. http://www.stroustrup.com/bs_faq.html#number-of-C++-users There are probably more than that many C users. Think there's any chance that 1/3 of the existing C++ users are going to switch to D? In the next year? Me neither. The majority of D users are newish developers or developers without history in the C style compilation model. It's just foreign, and the majority aren't interested in having to learn about issues that the higher level languages don't require. It's friction. It needs to be reduced. I say all of the above having been, essentially, a member of the C style compilation model camp, exclusively.
Re: To avoid some linking errors
On 29/10/2012 22:34, Walter Bright wrote: On 10/29/2012 1:19 PM, Brad Roberts wrote: There's another angle to this: 1) It's been stated more than once that one of the goals for D is to achieve a user base of over 1 million users. 2) I assert that there aren't more than 1 million programmers with the level of expertise and experience required to understand what happens during compilation to a sufficient degree that they feel comfortable with the tool chains that D (and C and C++) have today. Conclusion, the tool chains must get more user friendly. Stroustrup estimates more than 3 million C++ users in 2004. http://www.stroustrup.com/bs_faq.html#number-of-C++-users There are probably more than that many C users. In 2004 C++ was at the top of its game. http://www.tiobe.com/content/paperinfo/tpci/C__.html But I also think 1 million is on the low side. It is probably closer to 5 million (a third of the roughly 15 million developers writing C, C++ and the like), excluding the feel comfortable part of course. http://stackoverflow.com/questions/453880/how-many-developers-are-there-in-the-world I also wanted to say something about expectations and reasons to switch..
Re: To avoid some linking errors
On 10/29/2012 2:28 PM, Jesse Phillips wrote: On Sunday, 28 October 2012 at 13:39:26 UTC, bearophile wrote: This code compiles with no errors, and then later the linker gives a Symbol Undefined: abstract class A { public void foo(); } class B : A {} void main() {} Interestingly, adding abstract to the method will result in no linker error for compiling. That's because by saying abstract you're telling the compiler that there is no implementation for A.foo(), which is fundamentally different from saying that A.foo() is defined elsewhere. The compiler inserts a 0 in the vtbl[] slot for it, even though it won't let you try to call it. And if a new B is created then a compiler error is provided instead of a linker error. Abstract types are really a different thing from 'it's defined somewhere else'. I'm for getting some line numbers over the less informative linker error. The object file format does not support line numbers for symbol references and definitions. None of the 4 supported formats (OMF, ELF, Mach-O, MsCoff) has that. Even the symbolic debug info doesn't have line numbers for references, just for definitions.
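The distinction Walter describes can be shown side by side; the linking-failure case is kept in comments since it is, by construction, the one that cannot build:

```d
// Case 1: truly abstract -- the compiler puts a null entry in the
// vtbl and statically rejects calls, so nothing is left for the
// linker to resolve.
abstract class A1
{
    abstract void foo();
}

class B1 : A1
{
    override void foo() {}  // slot filled; B1 is instantiable
}

// Case 2 (bearophile's example): a body-less, non-abstract declaration
// promises a definition elsewhere, so the linker must find the symbol.
// This is what produces the Symbol Undefined error:
//
//   abstract class A2 { public void foo(); }
//   class B2 : A2 {}   // compiles fine, fails at link time

void main()
{
    A1 a = new B1();
    a.foo();
}
```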
Re: To avoid some linking errors
On 10/29/2012 3:11 PM, Brad Roberts wrote: It's friction. It needs to be reduced. Short of building the linking code into dmd, the options are fairly limited. Note that I did build the librarian code into dmd, instead of leaving it as a separate utility (lib.exe, ar), and have been pretty pleased with the results. But the librarian is a trivial piece of code.
Re: Decimal Floating Point types.
On 29 October 2012 23:30, Dmitry Olshansky dmitry.o...@gmail.com wrote: On 10/29/2012 6:43 PM, Iain Buclaw wrote: Speaking on behalf of Dejan, he expressed a wish to have such a type in D. (eg: such that assert(3.6 * 10 == 36.0) - which may not always be true on all architectures). As maybe a new backend type is out of the question. Perhaps we should create a new library type for the job - eg: _Decimal32, _Decimal64, _Decimal128. Thoughts? I recall there was a proposal for Phobos with both fixed decimal floating point types and arbitrary precision variants. And taking the role of good jinn: https://github.com/andersonpd/decimal/tree/master/decimal (seems very much alive and kicking) Regards Iain. -- Dmitry Olshansky Looks like just the ticket - however, on a brief overview, it is still very incomplete. Regards, -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
DMD on Haiku?
Hello D-folks! I was just wondering if it would be possible to make DMD build out of the box for Haiku (haiku-os.org) with the source from the official DMD repo. Haiku is pretty darn POSIX compliant so the actual porting isn't much of a problem. DMD has run on Haiku before, a while ago, and shouldn't have any problem doing it now. From what I hear from the Haiku community, it was just a matter of adding a bunch of ifeq Haiku conditionals and such to make it build and run fine. What I want though is to get these things into the main source of DMD; applying patches and stuff like that is a pain, and it is so much better to just be able to clone and build without problems. So what I wanted to ask is: would Digital Mars accept a pull request to make DMD build on Haiku to their main branch on Github? I just wanted to know for sure before I go ahead and fork DMD to do this. Cheers!
Re: Decimal Floating Point types.
On Monday, 29 October 2012 at 18:43:36 UTC, Iain Buclaw wrote: Speaking on behalf of Dejan, he expressed a wish to have such a type in D. (eg: such that assert(3.6 * 10 == 36.0) - which may not always be true on all architectures). As maybe a new backend type is out of the question. Perhaps we should create a new library type for the job - eg: _Decimal32, _Decimal64, _Decimal128. Thoughts? Regards Iain. I would definitely want these. They are necessary for doing math on currency values. --rt
Re: Decimal Floating Point types.
On Mon, Oct 29, 2012 at 11:41:45PM +0100, Rob T wrote: On Monday, 29 October 2012 at 18:43:36 UTC, Iain Buclaw wrote: Speaking on behalf of Dejan, he expressed a wish to have such a type in D. (eg: such that assert(3.6 * 10 == 36.0) - which may not always be true on all architectures). As maybe a new backend type is out of the question. Perhaps we should create a new library type for the job - eg: _Decimal32, _Decimal64, _Decimal128. Thoughts? Regards Iain. I would definitely want these. They are necessary for doing math on currency values. [...] I thought it was better to use fixed-point with currency? Or at least, so I've heard. T -- Mediocrity has been pushed to extremes.
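Fixed-point is indeed the usual recommendation for currency. A minimal sketch of the idea; the type and helper names here are made up for illustration:

```d
// Store an exact integer count of cents; addition of money amounts
// then never suffers binary floating point rounding error.
struct Money
{
    long cents;

    Money opBinary(string op : "+")(Money rhs) const
    {
        return Money(cents + rhs.cents);
    }
}

// Convenience constructor: money(1, 99) is $1.99.
Money money(long dollars, long cents = 0)
{
    return Money(dollars * 100 + cents);
}

unittest
{
    // 0.10 + 0.20 == 0.30 exactly -- unlike the binary-float case,
    // where 0.1 + 0.2 != 0.3.
    assert(money(0, 10) + money(0, 20) == money(0, 30));
}

void main() {}
```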
Re: DMD on Haiku?
On Mon, 29 Oct 2012, Isak Andersson wrote: Hello D-folks! I was just wondering if it would be possible to make DMD build out of the box for Haiku (haiku-os.org) with the source from the official DMD repo. Haiku is pretty darn POSIX compliant so the actual porting isn't much of a problem. DMD has ran on Haiku before a while ago and shouldn't have any problem doing it now. From what I hear from the Haiku community it was just to add a bunch of ifeq Haiku and stuff to make it build and run fine. What I want though is to get these things in to the main source of DMD, applying patches and stuff like that is a pain, it is so much better to just be able to clone and build without problems. So what I wanted to ask is: would Digital Mars accept a pull request to make DMD build on Haiku to their main branch on Github? I just wanted to know for sure before I go ahead and fork DMD to do this. Cheers! Is someone in the haiku community willing to step up and keep it working? Contribute a box to run an auto-tester client? Unless the answers to both of the above are 'yes', then it's just about guaranteed to break again at some point. IMHO, every platform that wants to be supported should meet that bar.
Re: Command Line Order + Linker Errors
On 10/29/2012 10:24 PM, dsimcha wrote: The messages are below. The exact messages are probably not useful but I included them since you asked. I meant to specify, though, that they're all undefined reference messages. Actually, none of these issues occur at all when compilation of the two files is done separately, regardless of what order the object files are passed to DMD for linking: dmd -c -unittest test.d dmd -c -unittest emptymain.d dmd -unittest test.o emptymain.o # Works dmd -unittest emptymain.o test.o # Works emptymain.o:(.data._D68TypeInfo_S6object26__T16AssociativeArrayTdTdZ16AssociativeArray4Slot6__initZ+0x80): undefined reference to `_D11gctemplates77__T11RTInfoImpl2TS6object26__T16AssociativeArrayTdTdZ16AssociativeArray4SlotZ11RTInfoImpl2yG2m' I had similar ones as well. As reported in some other mail, the workaround is to create an alias to the AssociativeArray type: double[double] aa; alias AssociativeArray!(double,double) _workaround; It seems the compiler does not always completely instantiate the template AssociativeArray!(Key,Value) when the type Value[Key] is used. Definitely a compiler bug.
Re: Command Line Order + Linker Errors
On Monday, 29 October 2012 at 20:56:02 UTC, dsimcha wrote: My real question, though, is why should the order of these files on the command line matter and does this suggest a compiler or linker bug? This would be a bug. Recently this was closed since the example is working. http://d.puremagic.com/issues/show_bug.cgi?id=4318 Not claiming it to be a wrong choice, just may be relevant to what you are seeing.
Re: Make [was Re: SCons and gdc]
On Saturday, 27 October 2012 at 18:11:30 UTC, Russel Winder wrote: Or it says you know Make but not SCons. All build frameworks have their computational models, idiosyncrasies, and points of pain. Make and SCons both have these. I definitely do not like Make. The scripts are made out of garbage, and maintaining garbage just produces more waste. Unfortunately for me, my attempts to make use of SCons are not encouraging. It may be better than Make, but not enough for me to settle down with it. The problem most people have when moving from Make to SCons is that they think in Make computational models and idioms. It takes a while to get over these and appreciate that SCons is very different from Make even though it is fundamentally the same. The two problems I mentioned were encountered almost immediately. These are the inability to scan subfolders recursively, and the inability to build to a level above the source folder. I don't think that either requirement has anything to do with thinking in terms of Make. It could be that solving these two deficiencies may be enough to keep me going with SCons, I don't know. Hummm... Whilst I am a fan of out of source tree builds I have always built within the project tree, so I have never noticed that trying to build in a directory that can only be reached from .. of the SConstruct appears to be impossible – without use of symbolic links. Will you put in the bug report or should I? I don't think it's a bug, because it's actually documented as a feature. It may however be a bug in terms of the assumptions about how applications should be built. Does Make? CMake, Autotools, Waf? I have only used Make, and as bad as it is, at least I can scan subfolders with one built-in command. Yes and no. Clearly there is a core of idiomatic things that every build framework should have. Then there is stuff that is unique. SCons is far too rigid with the assumptions it makes, and IMO some of the assumptions are plain wrong.
For example, building to a location out of the source tree has the obvious advantage that your source tree remains a source tree. I don't understand how anyone can consider this unusual or unnecessary. If a source tree is to be a tree containing source code, then recursive scanning and building out of the tree are essential requirements. SCons, however, assumes that your source tree must be flat, and that your source tree must be polluted with build files. SCons depends only on Python. What are these other dependencies that you speak of? You are correct, only Python, which on a Linux system is normally installed by default. I was referring to the need to manually build SCons from a source repository in order to get the latest D support. I know I'm in the bleeding-edge zone when it comes to D, so a certain amount of hacking is needed, but I'd like to minimize it as much as possible. At this point I'm considering looking at those old build tools written in D; perhaps I can patch one of them up to get it to do what I want. Or fix SCons? I thought of that, however in order to fix SCons I would have to learn a lot about SCons, and also learn Python. The flaws that I see with SCons are so basic that I probably would not fit in with the SCons culture, so I see nothing but pain in trying to fix it. I'm also learning D, and would rather spend more of my time learning D than something else. My only interest in SCons is using it, not fixing it, and I have no interest in learning Python. As far as I am aware there are no D-coded build frameworks that can handle C, C++, Fortran, D, LaTeX, Vala, Haskell, OCaml, Java, Scala. I'm currently only interested in building C++ and D; generalized tools that can manage multiple languages tend to be much more complex than I need. (*) Think SCons → Python → Monty Python. That's how I view most of what is going on in programming land. --rt
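As an aside on the recursive-scanning complaint: since SCons build scripts are ordinary Python, one can sidestep the lack of a recursive source scan with a few lines of plain Python. The sketch below is a generic helper under assumed conventions (the directory layout and extension list are hypothetical), not an SCons feature:

```python
import os

def collect_sources(root, exts=(".d", ".cc")):
    """Walk the tree under `root` and return all files whose
    extension is in `exts`, as paths relative to `root`."""
    sources = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1] in exts:
                sources.append(
                    os.path.relpath(os.path.join(dirpath, name), root))
    return sorted(sources)
```

Inside an SConscript one could then write something like `env.Program('app', collect_sources('src'))` rather than listing every file by hand.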
Re: DMD on Haiku?
On Monday, 29 October 2012 at 22:57:41 UTC, Brad Roberts wrote: On Mon, 29 Oct 2012, Isak Andersson wrote: Hello D-folks! I was just wondering if it would be possible to make DMD build out of the box for Haiku (haiku-os.org) with the source from the official DMD repo. Haiku is pretty darn POSIX compliant, so the actual porting isn't much of a problem. DMD has run on Haiku before, a while ago, and shouldn't have any problem doing it now. From what I hear from the Haiku community, it was just a matter of adding a bunch of ifeq Haiku conditionals and such to make it build and run fine. What I want, though, is to get these things into the main source of DMD; applying patches and stuff like that is a pain, and it is so much better to just be able to clone and build without problems. So what I wanted to ask is: would Digital Mars accept a pull request to make DMD build on Haiku into their main branch on Github? I just wanted to know for sure before I go ahead and fork DMD to do this. Cheers! Is someone in the Haiku community willing to step up and keep it working? Contribute a box to run an auto-tester client? Unless the answers to both of the above are 'yes', then it's just about guaranteed to break again at some point. IMHO, every platform that wants to be supported should meet that bar. Well, I would say that I am pretty willing to do both those things. At least if I have the knowledge to do it! I'm not 100% clear on what the second requirement means. Having a box running 24/7 that can run automated tests at any time? Or just running the tests occasionally (like once or twice a week or so, or even just in time for every new DMD release)?
Re: Decimal Floating Point types.
On Oct 29, 2012, at 3:51 PM, H. S. Teoh hst...@quickfur.ath.cx wrote: On Mon, Oct 29, 2012 at 11:41:45PM +0100, Rob T wrote: On Monday, 29 October 2012 at 18:43:36 UTC, Iain Buclaw wrote: Speaking on behalf of Dejan, he expressed a wish to have such a type in D (e.g. such that assert(3.6 * 10 == 36.0) holds - which may not always be true on all architectures). As a new backend type is maybe out of the question, perhaps we should create a new library type for the job - e.g. _Decimal32, _Decimal64, _Decimal128. Thoughts? Regards, Iain. I would definitely want these. They are necessary for doing math on currency values. [...] I thought it was better to use fixed-point with currency? With currency, as in most other instances, you usually want to do your rounding once at the end of the calculation, or at least control exactly where and how the rounding is done. Rounding implicitly on each step means money effectively vanishing into the ether, and people tend not to like that.
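The point about controlling where rounding happens can be illustrated with Python's decimal module (used here only because it is easy to demonstrate; a D _Decimal library type would play the same role). The price and tax rate are made-up values:

```python
from decimal import Decimal, ROUND_HALF_UP

# Binary floating point cannot represent 0.1 exactly, so small
# errors creep in across repeated operations.
binary_sum = sum([0.10] * 3)   # not exactly 0.3

# With a decimal type the operands are exact, and rounding happens
# only where we explicitly ask for it - once, at the end.
price = Decimal("19.99")
rate = Decimal("0.0825")       # hypothetical tax rate
total = (price * (1 + rate)).quantize(Decimal("0.01"),
                                      rounding=ROUND_HALF_UP)
```

Here 19.99 * 1.0825 = 21.639175 exactly, and only the final quantize step rounds it to 21.64, with an explicitly chosen rounding mode.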
Re: To avoid some linking errors
On 10/29/12 6:13 PM, Walter Bright wrote: On 10/29/2012 3:11 PM, Brad Roberts wrote: It's friction. It needs to be reduced. Short of building the linking code into dmd, the options are fairly limited. Why can't the linking code be built into dmd? I am baffled :o). Andrei
Re: To avoid some linking errors
Andrei Alexandrescu: Why can't the linking code be built into dmd? I am baffled :o). This is possible, but a better question is how much work is required to do this? Walter was very slowly translating the current linker from disassembly to C. If and once that program is all C, it's probably not too hard to convert it to D, merge it with the dmd binary, and improve it in some ways. Bye, bearophile
Imports with versions
There are some updates on the Java-like language Ceylon: http://ceylon-lang.org/blog/2012/10/29/ceylon-m4-analytical-engine/ One of the features of Ceylon that seems interesting is the module imports: http://ceylon-lang.org/documentation/1.0/reference/structure/module/#descriptor An example:

doc "An example module."
module com.example.foo "1.2.0" {
    import com.example.bar "3.4.1";
    import org.example.whizzbang "0.5";
}

I think it helps avoid version troubles. A possible syntax for D:

import std.random(2.0);
import std.random(2.0+);

Bye, bearophile
Re: Decimal Floating Point types.
On Monday, 29 October 2012 at 22:49:16 UTC, H. S. Teoh wrote: I thought it was better to use fixed-point with currency? Or at least, so I've heard. Depends on the application. Years ago (10?) I made a little interest-calculating C program; floating point and rounding gave me so many issues with being 1-2 pennies off that I went with an int and just took the bottom 2 digits as pennies. That worked far better than fighting with the floating-point issues. I do remember reading somewhere that the double could be used to accurately calculate money up to 18 digits, but I can't remember if there was a specific mode you had to tell the FPU.
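The integer-cents approach described above can be sketched as follows. This is a simplified illustration, not the original C program; the rate and amounts are made up, and the rate is expressed in basis points so the whole computation stays in integers:

```python
def add_simple_interest(cents, rate_basis_points):
    """Apply simple interest to an amount held as integer cents.
    The rate is in basis points (1/100 of a percent), so the
    interest is cents * rate / 10000, rounded half up - and the
    rounding happens exactly once, at the end."""
    numerator = cents * rate_basis_points
    interest = (numerator + 5_000) // 10_000  # integer round-half-up
    return cents + interest

balance = 10_000                              # $100.00 held as cents
balance = add_simple_interest(balance, 525)   # 5.25% -> $105.25
```

The "bottom 2 digits as pennies" display is then just `balance // 100` dollars and `balance % 100` cents, with no floating point anywhere.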
Re: Make [was Re: SCons and gdc]
On Tue, Oct 30, 2012 at 12:19:46AM +0100, Rob T wrote: [...] Scons is far too rigid with the assumptions it makes, and IMO some of the assumptions are plain wrong. For example, building to a location out of the source tree has the obvious advantage that your source tree remains a source tree. I don't understand how anyone can consider this unusual or not necessary. If a source tree is to be a tree containing source code, then recursive scanning and building out of the tree is an essential requirement. Scons however assumes that your source tree must be flat, and that your source tree must be polluted with build files. I don't know where you got this assumption from, but it's plain wrong. SCons supports out-of-source-tree builds. In fact, I have a project in which I generate multiple builds from the same source tree (and I can build *all* build variants in a single command, parallelized - something that will cause make to keel over and die). The only bug is that SCons assumes that the source tree and build tree(s) must be under a common root, which may not be the case. Nevertheless, even this is not a fatal problem, as you can just put your SConstruct in the directory above the source tree, and then you can build to other subdirectories easily. SCons depends only on Python. What are these other dependencies that you speak of? You are correct, only Python, which on a Linux system is normally installed by default. I was referring to the need to manually build SCons from a source repository in order to get the latest D support. I know I'm in the bleeding-edge zone when it comes to D, so a certain amount of hacking is needed, but I'd like to minimize it as much as possible. This is just what one puts up with when working with bleeding-edge technology. If there weren't kinks in the works, it'd be mainstream already. [...] As far as I am aware there are no D coded build frameworks that can handle C, C++, Fortran, D, LaTeX, Vala, Haskell, OCaml, Java, Scala. 
I'm currently only interested in building C++ and D; generalized tools that can manage multiple languages tend to be much more complex than I need. To each his own, but I honestly don't see what's so difficult about this:

# src/SConscript
Import('env')
env.Program('mydprogram', ['main.d', 'abc.d', 'def.cc'])

# lib1/SConscript
Import('env')
env.Library('mylib1', ['mod1.d', 'mod2.d'])

# lib2/SConscript
Import('env')
env.Library('mylib2', ['mod2.d', 'mod3.d'])

# SConstruct
objdir = 'build'
env = Environment()
Export('env')
env.SConscript('src/SConscript', build_dir=objdir)
env.SConscript('lib1/SConscript', build_dir=objdir)
env.SConscript('lib2/SConscript', build_dir=objdir)

Main program in src/, two libraries in lib1 and lib2, and everything builds in build/ instead of the respective source trees. No problem. I even threw in a C++ file for kicks. Now granted, SCons does have its own flaws, but railing about how useless it is when one hasn't even bothered to learn what it can do sounds rather unfair to me. T -- Ruby is essentially Perl minus Wall.
Re: To avoid some linking errors
On Tuesday, October 30, 2012 01:45:31 bearophile wrote: Andrei Alexandrescu: Why can't the linking code be built into dmd? I am baffled :o). This is possible, but a better question is how much work is required to do this? Walter was very slowly translating the current linker from disassembly to C. If and once that program is all C, it's probably not too hard to convert it to D, merge it with the dmd binary, and improve it in some ways. Depending, it should be fairly easy to just wrap the linker call and have dmd process its output and present something saner when there's an error. That could be a bit fragile though, since it would likely depend on the exact formatting of linker error messages. Better integration than that could be quite a bit more work. I think that it's fairly clear that in the long run, we want something like this, but I don't know if it's worth doing right now or not. - Jonathan M Davis
Re: To avoid some linking errors
On Tue, 30 Oct 2012, bearophile wrote: Andrei Alexandrescu: Why can't the linking code be built into dmd? I am baffled :o). This is possible, but a better question is how much work is required to do this? Walter was very slowly translating the current linker from disassembly to C. If and once that program is all C, it's probably not too hard to convert it to D, merge it with the dmd binary, and improve it in some ways. Bye, bearophile Built in? Absolutely not. There's no way that it's architecturally wise to have the linker as a part of the compiler binary. Able to usefully interact with the linker? Absolutely. To be clear, I'm certain that Andrei was kidding / making a joke at Walter's expense.
Re: To avoid some linking errors
On Tue, 30 Oct 2012, Jonathan M Davis wrote: On Tuesday, October 30, 2012 01:45:31 bearophile wrote: Andrei Alexandrescu: Why can't the linking code be built into dmd? I am baffled :o). This is possible, but a better question is how much work is required to do this? Walter was very slowly translating the current linker from disassembly to C. If and once that program is all C, it's probably not too hard to convert it to D, merge it with the dmd binary, and improve it in some ways. Depending, it should be fairly easy to just wrap the linker call and have dmd process its output and present something saner when there's an error. That could be a bit fragile though, since it would likely depend on the exact formatting of linker error messages. Better integration than that could be quite a bit more work. I think that it's fairly clear that in the long run, we want something like this, but I don't know if it's worth doing right now or not. - Jonathan M Davis If someone wants to work on it, I'm sure no one would stop them. In fact, someone did a specific case already. But for the Top Men to engage on? Almost certainly not. My point was to recognize that there's room for improvement and that improvement is important for adoption, not to start working on it now. -- If someone wanted to take on an ambitious task, one of the key problems with output munging is the parsability of the output (which applies to the compiler, linker, etc. - all the compiler-chain tools). Few (none?) of them output text that's designed for parsability (though some make it relatively easy). It would be interesting to design a structured format and write scripts to sit between the various components to handle adapting the output. 
Restated via an example:

today: compiler invokes tools and just passes on output
ideal (_an_ ideal, don't nitpick): compiler invokes tool, which returns structured output, and uses that
intermediate that's likely easier to achieve: compiler invokes a script that invokes the tool (passing args) and fixes the output to match the structured format

pros:
+ compiler only needs to understand one format
+ one script per tool (also a con, but on the pro side, each script is focused in what it needs to understand and care about)
+ no need to tear into each tool to restructure its i/o code

cons:
- will likely force some form of lowest common denominator
- more overhead due to extra parsing and processes

I used the term script, but don't read much into that; I'm just implying that it's small and doesn't have to do much. Now that I've written it up... might actually be fun to do, but I've got too many in-flight projects as it is, so I'll resist starting on it. Later, Brad
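The "intermediate" option above could be sketched like this. Everything here is hypothetical - the error-line pattern is one GNU-ld-style message, and the field names of the structured record are invented for illustration; a real per-tool script would own the pattern for its tool and emit one agreed-upon format:

```python
import json
import re

# Hypothetical pattern for a GNU-ld-style error line such as:
#   main.o: undefined reference to `foo'
# Real linker output varies by tool and version, which is exactly
# the fragility discussed above; isolating the pattern in a small
# per-tool script keeps that fragility out of the compiler.
UNDEF_RE = re.compile(
    r"^(?P<object>\S+): undefined reference to `(?P<symbol>[^']+)'$")

def structure_line(line):
    """Convert one raw tool-output line into a structured record.
    Unrecognized lines are passed through so no information is lost
    (the lowest-common-denominator con from the list above)."""
    m = UNDEF_RE.match(line.strip())
    if m:
        return {"kind": "undefined-reference",
                "object": m.group("object"),
                "symbol": m.group("symbol")}
    return {"kind": "raw", "text": line.rstrip("\n")}

record = structure_line("main.o: undefined reference to `foo'\n")
print(json.dumps(record))
```

The compiler would then consume only the structured records, never the raw text, so only the small adapter script needs updating when a tool's message format changes.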
Re: To avoid some linking errors
Brad Roberts: To be clear, I'm certain that Andrei was kidding / making a joke at Walter's expense. Oh, I see, I have missed the joke again, sorry :-) Bye, bearophile