Re: Distributor's wishlist and questions for D
On Thursday, 21 April 2016 at 01:01:01 UTC, Matthias Klumpp wrote:
> ## How complete are the free compilers?
> ## Why is every D compiler shipping its own version of Phobos?

These two can be answered at once. LDC and GDC share the same frontend code as DMD, but not the glue layer and backend (don't worry, it's only the DMD backend that has the more restrictive licence). Language and library development happens in DMD and with regard to DMD's current capabilities. Changes to DMD and druntime often require effort to port to LDC and GDC, due to the different backends. So an LDC or GDC release is complete w.r.t. a given past DMD version, but new features may have been added in more recent DMDs.

The constraints on shipping a shared Phobos between them:

- ABI compatibility: we don't have it, even across compiler versions.
- It would hold back Phobos development (e.g. "you can't do that, because GDC doesn't support it yet").
- You would still need to ship a separate runtime for each compiler, so you don't really gain anything.
Re: [PRs] How to update on Github
On Tuesday, 19 April 2016 at 13:05:35 UTC, tcak wrote:
> On Thursday, 21 May 2015 at 10:39:46 UTC, ZombineDev wrote:
> > Basically you need to clone your fork to your computer, add an "upstream" remote pointing to github.com/D-Programming-Language/[repo name, e.g. phobos], pull the new changes from upstream, and optionally update GitHub by pushing to origin (origin is normally your fork on GitHub). It may sound complicated doing this from the command line, but after a few times you'll get used to it.
> Please put this information somewhere. Due to the fear of being told to squash commits, I do not want to do any commits anymore.

If all you need to do is squash commits, just `git rebase -i HEAD~N`, where N is some number at least as big as the number of commits back that you're interested in messing around with, then read the instructions that will appear. Once you're done, `git push --force` to update your branch. I would recommend making a copy of the whole repository locally before any of that, just in case you mess something up, at least while you're not as confident with git.
Re: Recursive vs. iterative constraints
On Saturday, 16 April 2016 at 02:42:55 UTC, Andrei Alexandrescu wrote:
> So the constraint on chain() is:
>
> ```d
> Ranges.length > 0 &&
> allSatisfy!(isInputRange, staticMap!(Unqual, Ranges)) &&
> !is(CommonType!(staticMap!(ElementType, staticMap!(Unqual, Ranges))) == void)
> ```
>
> Noice. Now, an alternative is to express it as a recursive constraint:
>
> ```d
> (Ranges.length == 1 && isInputRange!(Unqual!(Ranges[0]))) ||
> (Ranges.length == 2 && isInputRange!(Unqual!(Ranges[0])) &&
>     isInputRange!(Unqual!(Ranges[1])) &&
>     !is(CommonType!(ElementType!(Ranges[0]), ElementType!(Ranges[1])) == void)) ||
> is(typeof(chain(rs[0 .. $ / 2], chain(rs[$ / 2 .. $]
> ```
>
> In the latter case there's no need for additional helpers but the constraint is a bit more bulky. Pros? Cons? Preferences? Andrei

Very strong preference for the first. The second is so much harder to read (not everyone is great at thinking recursively) and also could depend on the implementation of chain if the return type must be inferred from the body.
Re: Release D 2.071.0
On Tuesday, 5 April 2016 at 22:43:05 UTC, Martin Nowak wrote:
> Glad to announce D 2.071.0. http://dlang.org/download.html This release fixes many long-standing issues with imports and the module system. See the changelog for more details. http://dlang.org/changelog/2.071.0.html -Martin

Apologies to homebrew users for the delay; all sorted now, 2.071.0 is the new dmd stable.
Re: Sample Rate
On Saturday, 9 April 2016 at 14:15:38 UTC, Nordlöw wrote:
> Has anybody thought more than I have about representing the sample rate of a sampled signal collected from sources such as microphones and digital radio receivers? With it we could automatically relate DFT/FFT bins to real frequencies and other cool stuff. Maybe we could make it part of the standard solution for linear algebra processing and units of measurement in D. Destroy.

I don't have time to do much on this, but would be happy to advise and/or answer questions if anyone wants to get into it. I've spent an unhealthy number of hours in discrete Fourier space, as has my computer. Damn it, now I'm thinking about it...

The units are easy (either 1/s or 2*pi/s), but in the DFT the topology of the space is the important/difficult thing (it's a torus, which in 1-D is just a circle). When dealing with arrays in Fourier space, you can make a lot of things easy by implementing indexing in terms of an integer type modulo N, but there are often tricks to avoid having to do so many %s. Compilers seem unpredictably fantastic or terrible at optimising this sort of code.

P.S. Very basic definitions that might be vaguely useful, just because I have them lying around: https://dl.dropboxusercontent.com/u/910836/fourier.pdf
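The modulo-N indexing described above can be sketched as a small wrapper; `CircularView` is a hypothetical name for illustration, not anything in Phobos:

```d
/// Hypothetical helper (not in Phobos): an array view whose indexing
/// wraps modulo the length, matching the circular topology of DFT bins.
struct CircularView(T)
{
    T[] data;

    ref T opIndex(ptrdiff_t i)
    {
        immutable n = cast(ptrdiff_t) data.length;
        // D's % keeps the dividend's sign, so add n before the final
        // reduction to make negative indices wrap correctly.
        return data[((i % n) + n) % n];
    }
}

void main()
{
    auto bins = CircularView!double([0.0, 1.0, 2.0, 3.0]);
    assert(bins[-1] == 3.0); // bin -1 is bin n-1
    assert(bins[5] == 1.0);  // wraps past the end
}
```

The double `%` per access is exactly the kind of cost the tricks mentioned above try to avoid, e.g. by iterating in two contiguous runs instead of wrapping per element.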
Re: Faster sort?
On Thursday, 7 April 2016 at 09:25:46 UTC, John Colvin wrote:
> On Thursday, 7 April 2016 at 08:53:32 UTC, Andrea Fontana wrote:
> > On Thursday, 7 April 2016 at 08:41:51 UTC, John Colvin wrote:
> > > *hence my example of compiling one module to an object file and then compiling the other and linking them, without ever importing one from the other.
> > If I move boolSort() to another module, I can't use auto as the return type. The return type is a "voldemort" type returned by std.range.chain. How can I define it?
> You could create a dummy function (or even the real function, just with a different name) that creates the same type and use typeof(myDummyFunction(myDummyArgs)) when making the definition.

Correction: use http://dlang.org/phobos/std_traits.html#ReturnType instead of all that typeof mess.
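A minimal sketch of the ReturnType approach; `makeRange` here is a stand-in for a real function returning a Voldemort type:

```d
import std.range : chain, iota;
import std.traits : ReturnType;

// A function whose return type is a "Voldemort" type (chain's private Result).
auto makeRange()
{
    return chain(iota(0, 3), iota(10, 13));
}

// ReturnType lets us name that type in a declaration, e.g. at module scope:
ReturnType!makeRange stored;

void main()
{
    import std.algorithm : equal;

    stored = makeRange();
    assert(stored.equal([0, 1, 2, 10, 11, 12]));
}
```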
Re: Faster sort?
On Thursday, 7 April 2016 at 08:53:32 UTC, Andrea Fontana wrote:
> On Thursday, 7 April 2016 at 08:41:51 UTC, John Colvin wrote:
> > *hence my example of compiling one module to an object file and then compiling the other and linking them, without ever importing one from the other.
> If I move boolSort() to another module, I can't use auto as the return type. The return type is a "voldemort" type returned by std.range.chain. How can I define it?

You could create a dummy function (or even the real function, just with a different name) that creates the same type and use typeof(myDummyFunction(myDummyArgs)) when making the definition.
Re: Faster sort?
On Thursday, 7 April 2016 at 09:01:14 UTC, tsbockman wrote:
> On Thursday, 7 April 2016 at 08:23:09 UTC, John Colvin wrote:
> > But it definitely can eliminate an unused result. My prediction: you took an array and sorted it, then did nothing with the result, so it rightly concluded that there was no point doing the sort. In any given case the compiler could be removing some or all of the work. Laborious approach that defeats the optimisations you don't want while keeping the ones you do: [...]
> It's easier to just output the result in some form. I typically calculate and display a checksum of some sort.

Take care with this approach. For example, calling a pure function (whether D pure or some optimiser-inferred sort of purity) repeatedly in a loop can sometimes cause the loop to be reduced to a single iteration (that doesn't apply in this case, because of randomly generating data each iteration).
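A sketch of the checksum idea tsbockman describes, keeping the sorted results observable so the optimiser can't legally discard the work (sizes and iteration counts are illustrative):

```d
import std.algorithm : sort;
import std.random : uniform;
import std.stdio : writeln;

void main()
{
    ulong checksum;
    foreach (_; 0 .. 100)
    {
        auto arr = new int[](1024);
        foreach (ref x; arr)
            x = uniform(0, 1000);   // fresh random data each iteration

        sort(arr);                  // the work being benchmarked

        // Fold something that depends on the result into a checksum.
        checksum += arr[0] + arr[$ - 1];
    }
    // Printing the checksum makes the sorted results observable,
    // so the sorts cannot be optimised away as dead code.
    writeln(checksum);
}
```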
Re: Faster sort?
On Thursday, 7 April 2016 at 08:33:40 UTC, Andrea Fontana wrote:
> On Thursday, 7 April 2016 at 08:23:09 UTC, John Colvin wrote:
> > But it definitely can eliminate an unused result. My prediction: you took an array and sorted it, then did nothing with the result, so it rightly concluded that there was no point doing the sort. In any given case the compiler could be removing some or all of the work.
> But it should remove the result if I replace boolSort() with sort() too; instead it takes 10 seconds to run.

Not necessarily. It totally depends on the implementation details and the exact way the optimiser works. It might be interesting and informative for you to explore exactly why a particular version of a particular compiler with particular compilation flags will inline and elide one sort function but not another, but I would recommend just not letting the compiler see the source code of the benchmark and the sorting at the same time*, then you know neither will be inlined and also no extra attributes will be inferred and unrealistically taken advantage of.

*hence my example of compiling one module to an object file and then compiling the other and linking them, without ever importing one from the other.
Re: Faster sort?
On Thursday, 7 April 2016 at 07:57:11 UTC, Andrea Fontana wrote:
> On Wednesday, 6 April 2016 at 18:54:08 UTC, tsbockman wrote:
> > On Wednesday, 6 April 2016 at 08:15:39 UTC, Andrea Fontana wrote:
> > > Using ldmd2 -O -release -noboundscheck -inline sort.d && ./sort instead:
> > > 2 inputs: Faster: 0 hnsecs, Phobos: 33 μs and 5 hnsecs
> > > ...
> > > 65536 inputs: Faster: 0 hnsecs (???), Phobos: 7 secs, 865 ms, 450 μs, and 6 hnsecs
> > Can you share the benchmark code? "0 hnsecs" results generally mean that your test was too simple and/or didn't have any obvious side effects, and so the optimizer just removed it completely.
> A simple test just written:
>
> ```d
> Duration total;
> foreach(_; 0..1)
> {
>     auto arr = generate!(() => uniform(0,2)).map!(x => cast(bool)x).take(65536).array;
>     StopWatch sw;
>     sw.start;
>     boolSort(arr);
>     total += sw.peek.to!Duration;
>     sw.stop;
> }
> ```
>
> andrea@ububocs:/tmp$ ./sort
> 0 hnsecs
>
> I don't think the compiler can remove a randomly generated array...

But it definitely can eliminate an unused result. My prediction: you took an array and sorted it, then did nothing with the result, so it rightly concluded that there was no point doing the sort. In any given case the compiler could be removing some or all of the work. Laborious approach that defeats the optimisations you don't want while keeping the ones you do:

```
% cat modA.d
float[] codeToBenchmark(int someParam, float[] someOtherParam)
{
    /* blah blah code */
}

% cat modB.d
// Do not import modA here
float[] codeToBenchmark(int, float[]);

void main()
{
    // start loop and timers:
    codeToBenchmark(/* blah params */);
    // end timers and loop
}

% ldc2 -c modA.d
% ldc2 modA.o modB.d
% ./modB
```

I really need to write an article about benchmarking in D...
Re: Any usable SIMD implementation?
On Tuesday, 5 April 2016 at 08:34:32 UTC, Walter Bright wrote: On 4/4/2016 11:10 PM, 9il wrote: It is impossible to deduce from that combination that Xeon Phi has 32 FP registers. Since dmd doesn't generate specific code for a Xeon Phi, having a compile time switch for it is meaningless. "Since the compiler never generates AVX or AVX2" - this is definitely not true; see, for example, LLVM vectorization and SLP vectorization. dmd is not LLVM. The particular design and limitations of the dmd backend shouldn't be used to define D. In the extreme, your argument would imply that there's no point having version(ARM) built in to the language, because dmd doesn't support it. It's entirely practical to compile code with different source code, link them *both* into the executable, and switch between them based on runtime detection of the CPU. This approach is complex. Not at all. Used to do it all the time in the DOS world (FPU vs emulation). I just want a unified instrument to receive CT information about target and optimization switches. It is OK if this information has different switches on different compilers. Optimizations simply do not transfer from one compiler to another, whether the switch is the same or not. They are highly implementation dependent. Auto vectorization is only an example (maybe a bad one). I would use SIMD vectors, but I need CT information about the target CPU, because it is impossible to build optimal BLAS kernels without it! I still don't understand why you cannot just set '-version=xxx' on the command line and then switch off that version in your custom code. So you're suggesting that libraries invent their own list of versions for specific architectures / CPU features, which the user then has to specify somehow on the command line? I want to be able to write code that uses standardised versions that work across various D compilers, with the user only needing to type e.g. -march=native on GDC and get the fastest possible code.
Re: No aa.byKey.length?
On Monday, 4 April 2016 at 02:32:56 UTC, Yuxuan Shui wrote: On Monday, 4 April 2016 at 00:50:27 UTC, Jonathan M Davis wrote: On Sunday, April 03, 2016 23:46:10 John Colvin via Digitalmars-d-learn wrote: On Saturday, 2 April 2016 at 16:00:51 UTC, Jonathan M Davis wrote: > [...] Maybe aa.byKey().takeExactly(aa.length) Yeah, that's a clever workaround. - Jonathan M Davis So should we not add length to byKey? Yes. But until that happens, my workaround allows you to carry on getting work done :)
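For reference, a minimal sketch of the workaround in action:

```d
import std.range : takeExactly, hasLength;

void main()
{
    int[string] aa = ["one": 1, "two": 2, "three": 3];

    // aa.byKey has no .length, but the AA itself knows how many keys
    // it has, so pairing the two is safe:
    auto keys = aa.byKey.takeExactly(aa.length);

    static assert(hasLength!(typeof(keys))); // now usable where hasLength is required
    assert(keys.length == 3);
}
```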
Re: No aa.byKey.length?
On Saturday, 2 April 2016 at 16:00:51 UTC, Jonathan M Davis wrote:
> On Saturday, April 02, 2016 15:38:30 Ozan via Digitalmars-d-learn wrote:
> > On Friday, 1 April 2016 at 20:50:32 UTC, Yuxuan Shui wrote:
> > > Why? This is annoying when I need to feed it into a function that requires hasLength.
> > aa.keys.length
> That allocates an array. Doing that would be like doing aa.byKey().array().length. And associative arrays already have length. You can do auto len = aa.length; The problem is when you want to operate on a range, and the function that you want to pass it to wants length on the range. If byKey returned a range with length, then that would work, but since it doesn't, it doesn't. Having other ways to get the length doesn't help. - Jonathan M Davis

Maybe aa.byKey().takeExactly(aa.length)
Re: Any usable SIMD implementation?
On Thursday, 31 March 2016 at 08:23:45 UTC, Martin Nowak wrote: I'm currently working on a templated arrayop implementation (using RPN to encode ASTs). So far things worked out great, but now I got stuck b/c apparently none of the D compilers has a working SIMD implementation (maybe GDC has but it's very difficult to work w/ the 2.066 frontend). https://github.com/MartinNowak/druntime/blob/arrayOps/src/core/internal/arrayop.d https://github.com/MartinNowak/dmd/blob/arrayOps/src/arrayop.d I don't want to do anything fancy, just unaligned loads, stores, and integral mul/div. Is this really the current state of SIMD or am I missing sth.? -Martin Am I being stupid or is core.simd what you want?
Re: Concatenative Programming Languages
On Wednesday, 30 March 2016 at 22:14:11 UTC, BLM768 wrote: On Wednesday, 30 March 2016 at 20:53:02 UTC, Shammah Chancellor wrote: I just stumbled on this wikipedia article: https://en.wikipedia.org/wiki/Concatenative_programming_language Seems like D falls under that category? -S. Not really. UFCS allows the syntax "x.foo.bar.baz", which is similar to a concatenative syntax, but the existence of "x" in the expression means it's not purely concatenative. In a purely concatenative language, "foo bar baz" would produce a function that pipelines those three functions. "foo.bar.baz" in D would produce a compiler error. import std.functional : pipe; alias allThree = pipe!(foo, bar, baz); :)
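A complete sketch of the pipe version, with placeholder functions standing in for foo, bar and baz:

```d
import std.functional : pipe;

int foo(int x) { return x + 1; }
int bar(int x) { return x * 2; }
int baz(int x) { return x - 3; }

// pipe composes left to right: allThree(x) is baz(bar(foo(x))).
alias allThree = pipe!(foo, bar, baz);

void main()
{
    assert(allThree(5) == baz(bar(foo(5))));
    assert(allThree(5) == 9); // (5 + 1) * 2 - 3
}
```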
Re: Beta D 2.071.0-b2
On Wednesday, 30 March 2016 at 16:00:34 UTC, Luís Marques wrote: On Wednesday, 30 March 2016 at 15:48:28 UTC, John Colvin wrote: That would be me. Waiting for merge: https://github.com/Homebrew/homebrew/pull/50539 Thanks! Would it be against the homebrew spirit for the DMD recipe to link to some URL like <...latest-devel.tar.gz>? After all, that already happens with the --HEAD version, which doesn't link to any specific git commit. That way we wouldn't have to wait for the homebrew merges. There's the issue of the hash, but the --HEAD version doesn't have that either, and https://dlang.org should be trusted. I'm 99.9% certain the Homebrew core devs wouldn't allow something like that.
Re: Beta D 2.071.0-b2
On Wednesday, 30 March 2016 at 13:04:08 UTC, Luís Marques wrote: On Wednesday, 30 March 2016 at 11:03:51 UTC, Martin Nowak wrote: Second beta for the 2.071.0 release. http://dlang.org/download.html#dmd_beta http://dlang.org/changelog/2.071.0.html Please report any bugs at https://issues.dlang.org -Martin Who maintains the homebrew recipe? The --devel package is still at beta 1. That would be me. Waiting for merge: https://github.com/Homebrew/homebrew/pull/50539
Re: Beta D 2.071.0-b1
On Thursday, 24 March 2016 at 01:49:25 UTC, Martin Nowak wrote: First beta for the 2.071.0 release. This release comes with many import and lookup related changes and fixes. You might see a lot of deprecation warnings b/c of these changes. We've added the -transition=import and -transition=checkimports [¹] switches to ease updating existing code. http://dlang.org/download.html#dmd_beta http://dlang.org/changelog/2.071.0.html Please report any bugs at https://issues.dlang.org -Martin [¹]: -transition=checkimports currently has a bug that creates false positive warnings about the $ symbols, this will be fixed in the next beta (Bugzilla 15825) As usual, `brew update && brew reinstall dmd --devel` :)
Re: Females in the community.
On Thursday, 17 March 2016 at 16:17:46 UTC, Karabuta wrote:
> Are there any female programmers using D? :) Moreover, the social media representation of D sucks. I think we need a female, at least someone soft and mortal who actually understands how to communicate and build a community. Coders suck at these things and it's not helping. This is not about gender balance crap, it's about building a community. Forgive me for my brutal opinion. Destroy :)

Wow, stereotype much? "soft and mortal". Yikes.

P.S. what's with calling women "females"? Is it an Americanism? It sounds super weird to a British ear; we'd normally only say "female" in a technical setting or about an animal, so it can sound a bit disrespectful.
Re: Is D a good choice for embedding python/octave/julia
On Sunday, 13 March 2016 at 13:02:16 UTC, Bastien wrote: Hi, apologies for what may be a fairly obvious question to some. ## The background: I have been tasked with building software to process data output by scientific instruments for non-experts - basically with GUI, menus, easy config files (JSON or similar) - and the ability to do some serious number crunching. [...] If the other language has some C api that can be called to interpret code then you can do so from D as well. See also e.g. https://github.com/ariovistus/pyd to make this easier for python.
Re: Dpaste modifies application output?
On Tuesday, 15 March 2016 at 11:48:48 UTC, Temtaime wrote:
> Hi! http://dpaste.dzfl.pl/93d518c713b5 On dpaste it's ["a\n\nb"] but it should be ["a\r\n\rb"]. I've tested with dmd on Windows, Linux and Mac: all is OK, and only dpaste returns an incorrect result. Why so? I wrote to them using the contact form but there seems to be no reply.

dpaste has problems with escape codes and also with non-ASCII characters. It's been this way for a while; I don't know why.
Re: write to file array by lines
On Tuesday, 15 March 2016 at 10:58:16 UTC, Suliman wrote:
> I have got: string[] total_content; and I am appending data to it on every iteration: total_content ~= somedata; then: File file = File(`D:\code\2vlad\result.txt`, "a+"); file.write(total_content); I need to write it to the file line by line, like: somedataline1 somedataline2 somedataline3 I tried total_content ~= somedata ~ "\n", but in the result I am getting all the data on one line: somedataline1 "\n" somedataline2 "\n" somedataline3 "\n" What am I doing wrong? I know about split, but how can it be called at writing time?

http://forum.dlang.org/group/learn would be more appropriate for this sort of question.
Re: A comparison between C++ and D
On Wednesday, 9 March 2016 at 20:14:13 UTC, Adam D. Ruppe wrote:
> ```d
> import std.stdio;
>
> @nogc int delegate(int) dg;
>
> int helper() @nogc
> {
>     int a = 50;
>
>     struct MyFunctor
>     {
>         int a;
>         @nogc this(int a) { this.a = a; }
>         // the function itself
>         @nogc int opCall(int b) { return a + b; }
>     }
>
>     // capture a by value
>     // WARNING: I stack allocated here but set it to a global var,
>     // this is wrong; it should probably be malloc'd, but since
>     // I know I am just using it here, it is OK
>     auto myfunc = MyFunctor(a);
>     dg = &myfunc.opCall;
>
>     return dg(10);
> }
>
> void main()
> {
>     writeln(helper());
> }
> ```

typeof(&myfunc.opCall) == int delegate(int b) @nogc ... what magic is this? I had no idea that taking the address of opCall would give me a delegate.
Re: Named arguments via struct initialization in functions
On Wednesday, 9 March 2016 at 07:30:31 UTC, Edwin van Leeuwen wrote: On Sunday, 6 March 2016 at 17:35:38 UTC, Seb wrote: [...] In ggplotd I often use named tuples as an "anonymous" struct: Tuple!(double,"x")( 0.0 ) I also added a merge function that will return a tuple containing the merged named tuples: Tuple!(double,"x",string,"colour")(-1, "black").merge(Tuple!(double,"x")(0.0)) returns: Tuple!(double,"x",string,"colour")(0, "black"); As an aside, the merge function also works with structs, so you can do the following: struct Point { double x; double y; } Tuple!(double,"x",string,"colour")(-1, "black").merge(Point(1,2)) returns: Tuple!(double,"x",double,"y",string,"colour")(1, 2, "black"); It works reasonably well, except that the tuples require a lot of typing. Slightly tangentially, you might be interested in this: https://github.com/D-Programming-Language/phobos/pull/4043
Re: Speed kills
On Wednesday, 9 March 2016 at 21:01:13 UTC, H. S. Teoh wrote: On Wed, Mar 09, 2016 at 08:30:10PM +, Jon D via Digitalmars-d wrote: On Tuesday, 8 March 2016 at 14:14:25 UTC, ixid wrote: >[...] In the case of std.algorithm.sum, the focus is on accuracy rather than performance. It does some extra work to ensure maximum accuracy in the result, so it shouldn't be expected to have top performance. Granted, though, the docs could be clearer about this accuracy vs. performance tradeoff. Please file a bug on this (or better yet, submit a PR for it). In any case, 4 times slower sounds a bit excessive... it would be good to investigate why this is happening and fix it. I think you'd be good at reviewing this: https://github.com/D-Programming-Language/phobos/pull/4069
Re: Speed kills
On Wednesday, 9 March 2016 at 14:04:40 UTC, Andrei Alexandrescu wrote: On 3/9/16 9:03 AM, John Colvin wrote: On Wednesday, 9 March 2016 at 13:26:45 UTC, Andrei Alexandrescu wrote: On 03/08/2016 09:14 AM, ixid wrote: [...] Whoa. What's happening there? Do we have anyone on it? -- Andrei Ilya has long term plans for this, but I have a short-term fix that will buy us a very large performance improvement here (if my old benchmarks were correct). Give me a few mins to prep the pull request :) Thanks much! -- Andrei https://github.com/D-Programming-Language/phobos/pull/4069
Re: Speed kills
On Wednesday, 9 March 2016 at 13:26:45 UTC, Andrei Alexandrescu wrote: On 03/08/2016 09:14 AM, ixid wrote: Since I posted this thread I've learned std.algorithm.sum is 4 times slower than a naive loop sum. Even if this is for reasons of accuracy this is exactly what I am talking about- this is a hidden iceberg of terrible performance that will reflect poorly on D. That's so slow the function needs a health warning. Whoa. What's happening there? Do we have anyone on it? -- Andrei Ilya has long term plans for this, but I have a short-term fix that will buy us a very large performance improvement here (if my old benchmarks were correct). Give me a few mins to prep the pull request :)
Re: RAII and classes
On Wednesday, 9 March 2016 at 10:48:30 UTC, cym13 wrote:
> On Wednesday, 9 March 2016 at 10:28:06 UTC, John Colvin wrote:
> > Potential for leaking references from alias this aside, is there some reason that I shouldn't do this for all my C++-like RAII needs: [...]
> That's almost literally what std.typecons.scoped does.

Ok, I forgot std.typecons.scoped, nothing to see here.
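For the record, a minimal sketch of the std.typecons.scoped equivalent; the `destroyed` flag is only there to make the deterministic destruction observable:

```d
import std.typecons : scoped;

__gshared bool destroyed;

class A
{
    int x = 42;
    ~this() { destroyed = true; }
}

void useScoped()
{
    auto a = scoped!A();    // instance lives on the stack, no GC allocation
    assert(a.x == 42);      // usable like an A reference via alias this
}                           // destructor runs deterministically here

void main()
{
    useScoped();
    assert(destroyed);      // the dtor ran at end of scope, not at a GC collection
}
```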
RAII and classes
Potential for leaking references from alias this aside, is there some reason that I shouldn't do this for all my C++-like RAII needs:

```d
class A
{
    ~this()
    {
        import std.stdio;
        writeln("hello");
    }
}

auto RAII(T)() if (is(T == class))
{
    struct Inner
    {
        private ubyte[__traits(classInstanceSize, T)] buff;
        T c;
        alias c this;
        ~this() { destroy(c); }
    }

    Inner tmp;
    import std.conv : emplace;
    tmp.c = tmp.buff.emplace!T;
    return tmp;
}

void main()
{
    auto a = RAII!A;
}
```
Re: Pitching D to a gang of Gophers
On Saturday, 5 March 2016 at 11:05:09 UTC, Dmitry Olshansky wrote: I'm having an opportunity to do a small tech-talk on things D in a eCommerce shop that is currently sold on Go (migrating to SOA from PHP monolith). I do not intend that to become Go vs D battle but it gives the context. [...] Have you seen this? http://www.jtolds.com/writing/2016/03/go-channels-are-bad-and-you-should-feel-bad/ I'm not sure if it's all correct and how to compare the situation to D, but it was interesting to read.
Re: Good project: stride() with constant stride value
On Friday, 4 March 2016 at 23:33:40 UTC, Andrei Alexandrescu wrote: On 03/04/2016 04:19 PM, H. S. Teoh via Digitalmars-d wrote: Why not rather improve dmd optimization, so that such manual optimizations are no longer necessary? As I mentioned, optimizing the use of stride in large (non-inlined) functions is a tall order. -- Andrei It seems to me that if the stride is available in the calling scope as usable for a stride template parameter (i.e. as a compile-time value), then it would also be just as available to the optimiser after the trivial inlining of stride (note: not any arbitrarily complex code that contains stride, just stride itself). Sure, if you nest it away in un-inlineable constructs then it won't be easily optimised, but you wouldn't be able to use it as a template parameter then anyway. Do you have a concrete example where the optimisation(s) you want cannot be done with `stride` as it is?
Re: Good project: stride() with constant stride value
On Friday, 4 March 2016 at 16:45:42 UTC, Andrei Alexandrescu wrote: Currently we have a very useful stride() function that allows spanning a random access range with a specified step, e.g. 0, 3, 6, 9, ... for step 3. I've run some measurements recently and it turns out a compile-time-known stride is a lot faster than a variable. So I was thinking to improve Stride(R) to take an additional parameter: Stride(R, size_t step = 0). If step is 0, then use a runtime-valued stride as until now. If nonzero, Stride should use that compile-time step. Takers? Andrei Surely after inlining (I mean real inlining, not dmd) it makes no difference, a constant is a constant? I remember doing tests of things like that and finding that not only did it not make a difference to performance, ldc produced near-identical asm either way.
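To illustrate John's point: with the current stride, the step is a runtime value that is nonetheless a literal at most call sites, so after real inlining an optimiser sees it as a constant anyway (this is a sketch of current usage, not of the proposed Stride(R, step) API):

```d
import std.algorithm : equal;
import std.range : iota, stride;

void main()
{
    // The step 3 is a literal here; once stride's range machinery is
    // inlined, constant propagation can treat it exactly like a
    // compile-time template parameter would.
    auto r = iota(0, 12).stride(3);
    assert(r.equal([0, 3, 6, 9]));
}
```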
Re: std.database
On Thursday, 3 March 2016 at 11:16:03 UTC, Dejan Lekic wrote: On Tuesday, 1 March 2016 at 21:00:30 UTC, Erik Smith wrote: I'm back to actively working on a std.database specification & implementation. It's still unstable, minimally tested, and there is plenty of work to do, but I wanted to share an update on my progress. I suggest you call the package stdx.db - it is not (and may not become) a standard package, so `std` is out of question. If it is supposed to be *proposed* as standard package, then `stdx` is good because that is what some people have used in the past (while others used the ugly std.experimental. for the same purpose). I humbly believe that this effort **must** be collaborative as such package is doomed to fail if done wrong. std.experimental, ugly or not, is what is in phobos. See std.experimental.allocator, std.experimental.logger and std.experimental.ndslice
Re: Compile time performance for metaprogramming is somewhat inconsistent
On Thursday, 3 March 2016 at 02:03:01 UTC, maik klein wrote:
> Consider the following code:
>
> ```d
> void main()
> {
>     import std.stdio;
>     import std.range : iota, join;
>     import std.algorithm.iteration : map;
>     import std.conv : to;
>     import std.meta : aliasSeqOf, staticMap, AliasSeq;
>
>     enum types = "AliasSeq!(" ~ iota(0, 1).map!(i => to!string(i)).join(",") ~ ")";
>     alias t = AliasSeq!(mixin(types));
>     //alias t1 = aliasSeqOf!(iota(0, 1));
> }
> ```
>
> 't' compiles on my machine in ~3.5 seconds while 't1' needs ~1 minute to compile. It seems that mixins are just way more performant than template instantiations. Any ideas why? What causes the slowdown and what can I improve?

What happens if you add a few extra branches to std.meta.aliasSeqOf, e.g. https://github.com/D-Programming-Language/phobos/commit/5d2cdf103bd697b8ff1a939c204dd2ed0eec0b59 ? Only a linear improvement, but maybe worth a try.
Re: Why is mangling different for separate compilation?
On Sunday, 28 February 2016 at 12:59:53 UTC, Atila Neves wrote: On Saturday, 27 February 2016 at 11:31:53 UTC, Joakim wrote: On Saturday, 27 February 2016 at 11:27:39 UTC, Walter Bright wrote: On 2/27/2016 1:12 AM, Atila Neves wrote: I've had similar problems in the past with template mixins. It seems D's compile-time features don't mix with any kind of separate compilation, which is a shame. Any ideas on how unit tests should be named? Why has the additional count been added? You're already using the line number to differentiate unit test blocks. For unit test blocks that are all on one line? ;) I guess that makes sense. And it'd link! Atila You could always add an additional number to uniquely identify them if there are multiple unittests on one line. It would seem weird to have a special case in the grammar for unittests.
Re: Clojure vs. D in creating immutable lists that are almost the same.
On Saturday, 27 February 2016 at 23:19:51 UTC, w0rp wrote: On Saturday, 27 February 2016 at 22:31:28 UTC, Brother Bill wrote: Clojure supports immutable lists that allow adding and removing elements, and yet still have excellent performance. For D language, what are the recommended techniques to use functional programming, without massive copying of data and garbage collection, so that it remains immutable. That is, how to create one-off changes to an immutable data structure, while keeping the original immutable, as well as the one-off change, and maintain good performance. Thank you I think this is a property of linked lists which could possibly be advantageous. However, I would keep in mind that memory layout is very important when it comes to execution speed, and that slices of memory are unbeatable in that regard. That's worth stating first. I think for linked lists, you can always create a new node which points to another node. So you start with element a as immutable, then you take a head element b and point to a, so you get b : a, then c : b : a, etc. So you can create larger and larger immutable linked lists because you never actually change a list, you just produce a new list with an element pointing to the head of a previous list. I'm not sure if Phobos has something suitable for this, but you could always implement your own singly linked list in such a manner pretty easily. I would be tempted just to use slices instead, though. Linked lists are rarely better. Often people use a lot more advanced structures than linked lists for immutable data structures. http://www.infoq.com/presentations/Functional-Data-Structures-in-Scala
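The cons-sharing idea w0rp describes can be sketched as a minimal persistent list (a hypothetical hand-rolled structure, not a Phobos type):

```d
// A node is immutable; "adding" an element allocates one new node that
// points at the existing list, so old and new versions share structure.
struct List
{
    int head;
    immutable(List)* tail;
}

immutable(List)* cons(int head, immutable(List)* tail)
{
    return new immutable(List)(head, tail);
}

void main()
{
    immutable(List)* a = cons(1, null);   // [1]
    immutable(List)* b = cons(2, a);      // [2, 1]
    immutable(List)* c = cons(3, a);      // [3, 1], also built from a

    assert(b.tail is a && c.tail is a);   // structural sharing, no copying
    assert(a.head == 1 && b.head == 2 && c.head == 3);
}
```

Prepending is O(1) and never invalidates an older version, which is exactly the one-off-change behaviour asked about; the trade-off against slices is the pointer-chasing memory layout noted above.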
Re: Pseudo-random numbers in [0, n), covering all numbers in n steps?
On Friday, 26 February 2016 at 19:35:38 UTC, Joseph Rushton Wakeling wrote: On Thursday, 25 February 2016 at 17:27:25 UTC, Andrei Alexandrescu wrote: This could be fixed by devising a PRNG that takes a given period n and generates all numbers in [0, n) in exactly n steps. On reflection, I have a nasty feeling there's a fundamental problem with this proposed approach. It's this: if you're relying on the PRNG having a period of n in which it covers exactly once each of the numbers from [0, n), then you're essentially outsourcing the random aspect of the permutation to the _seeding_ of this generator. Now, what would be an appropriate seed-generation-mechanism to guarantee that this PRNG can select from all possible permutations with uniform probability? This was what I was trying to get at in my initial post, but I failed to get the idea across properly.
Re: Normal distribution
On Friday, 26 February 2016 at 18:23:41 UTC, Andrei Alexandrescu wrote: On 02/20/2016 09:06 AM, Edwin van Leeuwen wrote: On Saturday, 20 February 2016 at 14:01:22 UTC, Andrei Alexandrescu wrote: Do we have a good quality converter of uniform numbers to Gaussian-distributed numbers around? -- Andrei There is one in dstats: https://github.com/DlangScience/dstats/blob/master/source/dstats/random.d#L266 Thanks! I ended up using this. Is someone working on adding Gaussians to phobos? -- Andrei https://github.com/WebDrake/hap is intended as a replacement for std.random and includes distributions (see e.g. https://github.com/WebDrake/hap/blob/master/source/hap/random/distribution.d#L441). Also, remember http://dconf.org/2015/talks/wakeling.html
Re: Calling python code from D
On Friday, 26 February 2016 at 17:15:02 UTC, Wyatt wrote: On Thursday, 25 February 2016 at 22:28:52 UTC, jmh530 wrote: I think PyD is really your best option. That's what I figured, but I wanted to be sure because, well... http://pyd.readthedocs.org/en/latest/embed.html ...these are some sparse docs. I did stumble into them, but it feels like a bit of a work-in-progress or second-class citizen, so I was kind of hoping someone else had taken the torch and run with it. Maybe I'll have to shave a yak. :/ -Wyatt Docs are quite sparse, but it mostly works as expected. I have a WIP cleanup of the library in my fork. It won't help with docs of course...
Re: Pseudo-random numbers in [0, n), covering all numbers in n steps?
On Thursday, 25 February 2016 at 17:27:25 UTC, Andrei Alexandrescu wrote: So we have https://dlang.org/phobos/std_random.html#.randomCover which needs to awkwardly allocate memory to keep track of the portions of the array already covered. This could be fixed by devising a PRNG that takes a given period n and generates all numbers in [0, n) in exactly n steps. However, I've had difficulty finding such PRNGs. Most want the maximum period possible so they're not concerned with a given period. Any insights? I don't think that's a good idea. A PRNG is a closed path through a state space, and it doesn't matter where you start on said path: you're going to follow the same closed path through the state space. I don't know of an algorithm for generating random permutations that isn't in-place (or that doesn't need O(N) storage), but I'm not an expert on the topic so maybe one does exist.
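For contrast with the fixed-period-PRNG idea, the standard in-place approach is the Fisher-Yates shuffle (my illustration, not from the thread; std.random.randomShuffle already provides this): it is uniform over all n! permutations but mutates the array, which is exactly the O(N)-state cost being discussed.

```d
import std.random : Random, uniform;

// Classic Fisher-Yates: swap each element with a uniformly chosen
// earlier-or-equal position, walking from the back of the array.
void fisherYates(T)(T[] arr, ref Random rng)
{
    foreach_reverse (i; 1 .. arr.length)
    {
        immutable j = uniform(0, i + 1, rng); // j in [0, i]
        auto tmp = arr[i];
        arr[i] = arr[j];
        arr[j] = tmp;
    }
}

void main()
{
    import std.algorithm.sorting : sort;

    auto rng = Random(42);
    auto a = [0, 1, 2, 3, 4];
    fisherYates(a, rng);

    // The result is still a permutation of the original elements.
    auto s = a.dup;
    sort(s);
    assert(s == [0, 1, 2, 3, 4]);
}
```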
Re: Unum II announcement
On Tuesday, 23 February 2016 at 13:46:33 UTC, Charles wrote: On Tuesday, 23 February 2016 at 08:49:50 UTC, John Colvin wrote: I saw you looking for heavy math users. I work with quite a few actuaries, but I probably wouldn't be able to convince them to use anything if there wasn't a way to use it with either SAS or R. SAS can import C functions, but that's about it in terms of interop. If you don't find people with D, this might be an opportunity. There is https://bitbucket.org/bachmeil/dmdinline2 This seems to be the opposite of what I'd need unfortunately. Why not? You can easily wrap that inside some R and no-one would know it was D.
Re: Unum II announcement
On Tuesday, 23 February 2016 at 01:08:38 UTC, Charles wrote: On Monday, 22 February 2016 at 21:27:31 UTC, Nick B wrote: On Monday, 22 February 2016 at 17:15:54 UTC, Charles wrote: [...] Slide 12, 0101 is repeated. The top [...] I will check with John re this error. [...] It's likely that we cannot add the Notes to the PDF, which is why I recommended to everyone to download the presentation and read it via Powerpoint; then you can see all the Notes. Nick I saw you looking for heavy math users. I work with quite a few actuaries, but I probably wouldn't be able to convince them to use anything if there wasn't a way to use it with either SAS or R. SAS can import C functions, but that's about it in terms of interop. If you don't find people with D, this might be an opportunity. There is https://bitbucket.org/bachmeil/dmdinline2
Re: Normal distribution
On Saturday, 20 February 2016 at 14:01:22 UTC, Andrei Alexandrescu wrote: Do we have a good quality converter of uniform numbers to Gaussian-distributed numbers around? -- Andrei There is this, from years ago: https://github.com/DlangScience/dstats/blob/master/source/dstats/random.d#L266 and the range wrappers also in that module. There is also of course https://github.com/WebDrake/hap/blob/master/source/hap/random/distribution.d#L507 But as you will remember from many messages from Joseph Rushton Wakeling, he's been blocked by difficulties with the range API, which also apply to the dstats version.
Re: Installing DUB on OSX
On Thursday, 18 February 2016 at 23:28:43 UTC, Joel wrote: On Thursday, 18 February 2016 at 16:33:51 UTC, John Colvin wrote: [...] I don't think I put 'sudo brew' at any point (I can't remember). I hope I haven't broken my OSX! [...] Did you recently upgrade OS X? Anyway, you should probably work through the suggestions from brew doctor, then brew update, then brew upgrade.
Re: Vibe.d Copyright
On Thursday, 18 February 2016 at 16:14:09 UTC, Chris wrote: Just to say that the copyright notice on the vibe.d website should be updated. In the API it still says "Copyright © 2012-2015 RejectedSoftware e.K." In the license it still says "Copyright (c) 2012-2014, rejectedsoftware e.K." and RejectedSoftware is spelled differently. vibe.d bug reports belong here: https://github.com/rejectedsoftware/vibe.d/issues
Re: Installing DUB on OSX
On Thursday, 18 February 2016 at 07:52:11 UTC, Joel wrote: On Thursday, 18 February 2016 at 07:11:23 UTC, Joel wrote: I had dub installed in a folder that meant I had to put 'sudo dub' to run it. I've tried to fix the problem, but where do you put it (also I tried one place, but couldn't put it in that folder)? I've now tried 'brew install dub' and 'brew upgrade dub', but they come up with, 'Warning: dub-0.9.22 already installed', or 'Error: dub 0.9.22 already installed'. Sounds like you have some problems with your homebrew. What does `brew doctor` give? Did you accidentally use `sudo brew` at some point?
Re: Another new io library
On Wednesday, 17 February 2016 at 07:15:01 UTC, Steven Schveighoffer wrote: On 2/17/16 1:58 AM, Rikki Cattermole wrote: A few things: https://github.com/schveiguy/iopipe/blob/master/source/iopipe/traits.d#L126 why isn't that used more especially with e.g. window? After all, window seems like a very well used word... Not sure what you mean. I don't like that a stream isn't inherently an input range. This seems to me like a good place to use this abstraction by default. What is front for an input stream? A byte? A character? A word? A line? Why not just say it's a ubyte and then compose with ranges from there?
Re: @nogc for structs, blocks or modules?
On Tuesday, 16 February 2016 at 03:13:48 UTC, maik klein wrote: On Tuesday, 16 February 2016 at 02:47:38 UTC, WebFreak001 wrote: On Tuesday, 16 February 2016 at 02:42:06 UTC, maik klein wrote: It just seems very annoying to add @nogc to every function. you can mark everything as @nogc with

// gc functions here
@nogc:
// nogc functions here
void foo() {}

Thanks, this should probably be added to https://dlang.org/spec/attribute.html#nogc I just realized that I can't even use @nogc because pretty much nothing in phobos uses @nogc You probably can; remember that templates have their attributes inferred.
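A compilable sketch of both points made above, the `@nogc:` attribute label and per-instantiation attribute inference for templates (helper names `twice` and `apply` are mine):

```d
@nogc: // everything below this label is checked as @nogc

int twice(int x) { return 2 * x; }

// Templates get their attributes inferred per instantiation, so generic
// Phobos-style code is usable from @nogc callers when the instantiation
// itself doesn't allocate.
auto apply(alias f, T)(T x) { return f(x); }

void main()
{
    assert(twice(21) == 42);
    assert(apply!twice(3) == 6); // inferred @nogc for this instantiation
}
```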
Re: An important pull request: accessing shared affix for immutable data
On Saturday, 13 February 2016 at 00:30:58 UTC, Andrei Alexandrescu wrote: On 02/12/2016 06:52 PM, deadalnix wrote: [...] I think we're good there. -- Andrei Is there somewhere where I / others can see an explanation of how "we're good"? Those sound like genuine problems.
Re: Procedural drawing using ndslice
On Thursday, 11 February 2016 at 13:05:41 UTC, Claude wrote: Hello, I come from the C world and try to do some procedural terrain generation, and I thought ndslice would help me to make things look clean, but I'm very new to those semantics and I need help. Here's my problem: I have a C-style rough implementation of a function drawing a disk into a 2D buffer. Here it is:

import std.math;
import std.stdio;

void draw(ref float[16][16] buf, int x0, int y0, int x1, int y1)
{
    float xc = cast(float)(x0 + x1) / 2;
    float yc = cast(float)(y0 + y1) / 2;
    float xr = cast(float)(x1 - x0) / 2;
    float yr = cast(float)(y1 - y0) / 2;

    float disk(size_t x, size_t y)
    {
        float xx, yy;
        xx = (x - xc) / xr;
        yy = (y - yc) / yr;
        return 1.0 - sqrt(xx * xx + yy * yy);
    }

    for (int y = 0; y < 16; y++)
    {
        for (int x = 0; x < 16; x++)
        {
            buf[x][y] = disk(x, y);
            writef(" % 3.1f", buf[x][y]);
        }
        writeln("");
    }
}

void main()
{
    float[16][16] buf;
    draw(buf, 2, 2, 10, 10);
}

The final buffer contains values where positive floats are the inside of the disk, negative are outside, and 0's represent the perimeter of the disk. I would like to simplify the code of draw() to make it look more something like:

Slice!(stuff) draw(int x0, int y0, int x1, int y1)
{
    float disk(size_t x, size_t y)
    {
        // ...same as above
    }
    return Slice!stuff.something!disk.somethingElseMaybe;
}

Is it possible? Do I need to back-up the slice with an array, or could the slice be used lazily and modified as I want using some other drawing functions:

auto diskNoiseSlice = diskSlice.something!AddNoiseFunction;

...until I do a:

auto buf = mySlice.array;

...where the buffer would be allocated in memory and filled with the values according to all the drawing primitives I used on the slice.
I had a go at trying the sort of thing you are talking about: http://dpaste.dzfl.pl/8f9da4f4cc34 That won't work with std.experimental.ndslice in 2.070.0, so either use dmd git master or use the latest version of ndslice in mir (https://github.com/DlangScience/mir).
Re: Just because it's a slow Thursday on this forum
On Thursday, 11 February 2016 at 21:38:42 UTC, H. S. Teoh wrote: On Thu, Feb 11, 2016 at 03:38:42PM -0500, Nick Sabalausky via Digitalmars-d wrote: On 02/11/2016 11:22 AM, H. S. Teoh via Digitalmars-d wrote: >[...] My understanding is that's the whole point of the "dump" function being discussed. Unless I misunderstood? IMO `dump` is worthwhile but `print` seems little more than an alias for `writefln`. I can't find enough justification to warrant `print`. (Next thing you know, newbies will be asking why there's both `print` and `write` that do the same thing except different.) T yeah, dump is really useful, print is a bit marginal.
Re: Just because it's a slow Thursday on this forum
On Monday, 8 February 2016 at 13:37:19 UTC, Andrei Alexandrescu wrote: On 2/7/16 7:11 PM, John Colvin wrote: alias dump = dumpTo!stdout; alias errDump = dumpTo!stderr; I'm hoping for something with a simpler syntax, a la dump!(stdout, "x") where stdout is optional. -- Andrei

How about this, which allows you to specify variables as alias parameters (i.e. without strings) as well. It could be a lot neater if a static assert is used in the body instead of using template constraints, but obviously that has its downsides.

import std.stdio : File;
import std.traits : isSomeString;
import std.meta : allSatisfy;

private template isAlias(a ...) if (a.length == 1)
{
    enum isAlias = __traits(compiles, { alias b = a[0]; })
        && is(typeof(a[0]));
}

private template isStringValue(a ...) if (a.length == 1)
{
    enum isStringValue = isSomeString!(typeof(a[0]));
}

private template isStringOrAlias(a ...) if (a.length == 1)
{
    /* can't use templateOr in the dump template constraints
     * because `Error: template instance F!(a) cannot use local 'a'
     * as parameter to non-global template templateOr(T...)` */
    enum isStringOrAlias = isAlias!a || isStringValue!a;
}

mixin template dump(alias file, Args ...)
    if (is(typeof(file) == File) && Args.length > 0
        && allSatisfy!(isStringOrAlias, Args))
{
    auto _unused_dump = {
        import std.traits : Select;
        // can put expressions directly in Select with
        // https://github.com/D-Programming-Language/phobos/pull/3978
        enum sep = ", ";
        enum term = "\n";
        foreach (i, arg; Args)
        {
            static if (isSomeString!(typeof(arg)))
                file.write(arg, " = ", mixin(arg),
                    Select!(i < Args.length - 1, sep, term));
            else
                file.write(__traits(identifier, Args[i]), " = ", arg,
                    Select!(i < Args.length - 1, sep, term));
        }
        return false;
    }();
}

mixin template dump(Args ...)
    if (Args.length > 0 && allSatisfy!(isStringOrAlias, Args))
{
    import std.stdio : stdout;
    mixin .dump!(stdout, Args);
}

unittest
{
    import std.stdio;
    int a = 3, b = 4;
    mixin dump!q{ a + b };
    mixin dump!(stderr, "a - b");
    mixin dump!a;
    mixin dump!(stderr, a, b);
}
Re: IDE - Coedit 2 rc1
On Monday, 8 February 2016 at 07:25:49 UTC, Dominikus Dittes Scherkl wrote: On Monday, 8 February 2016 at 07:05:15 UTC, Suliman wrote: Cool! Thanks! But do you have any plans to reimplement it from Pascal to В to get it's more native... B? What is B? https://en.wikipedia.org/wiki/B_(programming_language) but obviously he meant D.
Re: Dconf 2015 talks...
On Monday, 8 February 2016 at 19:46:19 UTC, Joseph Rushton Wakeling wrote: [snip] This might be a stupid idea, but perhaps there's something useful in it: Determinism isn't the same thing as "one long chain of numbers that everybody reads from". It can be acceptable to seed a set of reasonable pseudo-random number generators with consecutive integers (indeed seeding randomly can be dangerous because of the birthday problem). More generally, any change of the state of the rng in "seed-space" should produce an output equivalent to taking a sample from the output distribution. Can you not have a random number generator make a copy of itself like this:

struct RNG
{
    State state;
    static State.ModifierT modifier;

    this(this)
    {
        this.state.alterBy(modifier++);
        // recalculate output sample etc...
    }
}

Then any time you copy a RNG, the copy is kicked on to a new path in state-space. Basically we're deterministically re-seeding on copy.
Re: Just because it's a slow Thursday on this forum
On Sunday, 7 February 2016 at 23:26:05 UTC, Andrei Alexandrescu wrote: On 02/04/2016 09:46 PM, Tofu Ninja wrote: On Thursday, 4 February 2016 at 15:33:41 UTC, Andrei Alexandrescu wrote: https://github.com/D-Programming-Language/phobos/pull/3971 -- Andrei People on github were asking for a dump function so they could do int a = 5; dump!("a"); // prints "a = 5" Here's a working version if anyone wants it but you have to use it like mixin dump!("a"); // mixin template dump(Names ... ) { auto _unused_dump = { import std.stdio : writeln, write; foreach(i,name; Names) { write(name, " = ", mixin(name), (i
Re: reduce -> fold?
On Friday, 29 January 2016 at 20:40:18 UTC, Andrei Alexandrescu wrote: On 01/29/2016 08:56 AM, Dragos Carp wrote: On Friday, 29 January 2016 at 13:11:34 UTC, Luís Marques wrote: [...] But not in python, where "accumulate"[1] is the generic equivalent of C++ "partial_sum"[2]. I like "fold" more. BTW this week, a colleague of mine implemented a python "accumulate" in D. Is there any interest to contribute it to Phobos? How should this be named? [1] https://docs.python.org/3/library/itertools.html#itertools.accumulate [2] http://en.cppreference.com/w/cpp/algorithm/partial_sum That'd be interesting if (a) lazy and (b) general a la https://dlang.org/library/std/range/recurrence.html. -- Andrei I wrote a bit about this sort of thing here: https://github.com/D-Programming-Language/phobos/pull/2991#issuecomment-141816906
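For reference, Phobos did eventually gain a lazy, general "accumulate" along the lines discussed above: std.algorithm.iteration.cumulativeFold (added in a later release than the one current in this thread). A quick sketch:

```d
import std.algorithm.iteration : cumulativeFold;
import std.algorithm.comparison : equal;

void main()
{
    // Lazy running sum, analogous to Python's itertools.accumulate
    // and C++'s std::partial_sum.
    auto partials = [1, 2, 3, 4].cumulativeFold!((a, b) => a + b);
    assert(partials.equal([1, 3, 6, 10]));
}
```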
Re: reduce -> fold?
On Wednesday, 3 February 2016 at 21:45:04 UTC, Timon Gehr wrote: On 02/03/2016 09:12 PM, Atila Neves wrote: https://github.com/D-Programming-Language/phobos/pull/3968 I think fold should be nothrow, but maybe that's just me. It's also a massive pain to make it that way, so I didn't for now. Returning Unqual!(ElementType!R).init makes no sense though. The "correct" result of fold!f([]) is a (often, the) value 'a' such that for any 'b', 'f(a,b)==b' (which is the canonical choice of "seed"), but there is no way for fold to infer such a value. I wish we had some standardised way to express what the identities (and maybe inverses) are for a given type under given operations.
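The identity-element problem above can be seen concretely with the existing seeded reduce (my illustration): when the caller supplies the seed, the empty-range case is well defined because the caller, not the library, knows the identity of the operation.

```d
import std.algorithm.iteration : reduce;

void main()
{
    int[] empty;
    // With an explicit seed, folding an empty range just yields the seed.
    // 0 is the identity of +; for * the right seed would be 1. The library
    // has no way to infer this, which is the complaint about .init above.
    assert(reduce!((a, b) => a + b)(0, empty) == 0);
    assert(reduce!((a, b) => a + b)(0, [1, 2, 3]) == 6);
}
```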
Re: D vs Rust
On Friday, 29 January 2016 at 08:23:38 UTC, Ola Fosheim Grøstad wrote: On Friday, 29 January 2016 at 07:01:07 UTC, Sönke Ludwig wrote: Am 29.01.2016 um 00:18 schrieb Ola Fosheim Grøstad: D is closer to C++ style templating and OO, and currently focus on enabling binding to non-template C++ libraries. Small correction: Should be "binding to template based C++ libraries" - non-template libraries have worked more or less for a while now. I was thinking of Walter's work on supporting C++ exceptions as completing the effort to bind to non-templated libraries; exceptions being the "return value" for failure. Is there an effort to support templated libraries? They are often fully inlined and header-only? It depends what you mean by templated. I believe the interoperability work is for the results of instantiated templates, not on the templates themselves.
Re: Beta D 2.070.0-b2
On Sunday, 17 January 2016 at 20:52:20 UTC, Martin Nowak wrote: Second and last beta for the 2.070.0 release. http://dlang.org/download.html#dmd_beta http://dlang.org/changelog/2.070.0.html Please report any bugs at https://issues.dlang.org -Martin % dmd --version DMD64 D Compiler v2.069 Copyright (c) 1999-2015 by Digital Mars written by Walter Bright should be DMD64 D Compiler v2.070 Copyright (c) 1999-2016 by Digital Mars written by Walter Bright
Re: [dlang.org] new forum design - preview
On Wednesday, 13 January 2016 at 06:01:41 UTC, Vladimir Panteleev wrote: http://beta.forum.dlang.org/ https://github.com/CyberShadow/DFeed/pull/51

I tried using this a bit and it's ... frustrating. I'll try and describe the thought process of a visit: I load beta.forum.dlang.org, fullscreen, at 1680x1050. All text looks very slightly out-of-focus and the bold text is far too tightly packed. I notice that there are lines of text that are truncated (post titles), despite having loads of whitespace free on the page. This is instantly irritating. I click on a post and it is loaded in horizontal split mode. The right panel of the split view extends significantly lower than the left panel. The navigation column extends down further still. There is some wasted vertical space above the footer and a *lot* wasted below it. In the actual post window, I now have a line length below 60 characters, which is way too small for me; I prefer closer to 80 for reading (also I don't want code to start getting wrapped below 80). Nested quotations in replies end up with very restricted line lengths. I click the "Toggle navigation" button. With a line length of 82 and less wasted vertical space around the footer, I'm much happier. However, now I've lost the left navigation column and the header bar.

A few summary points/suggestions: The horizontal split layout looks horribly cramped in the default view, which is very irritating to look at, given the large white spaces either side of the content. Users can always resize the window to make line lengths smaller, but if you've capped them too low they can't do anything to make them longer. Having the "toggle navigation" button is nice for focus (less clutter, on demand), please keep it, but don't use it as an excuse for the design to be rubbish without it. I like the *option* to hide all the navigation, but I shouldn't have to use it just to get reasonable line lengths.
Re: local import hijacking
On Friday, 15 January 2016 at 08:15:50 UTC, Iain Buclaw wrote: On 15 Jan 2016 9:12 am, "Russel Winder via Digitalmars-d" < digitalmars-d@puremagic.com> wrote: In this mindset D is certainly stable enough for production, it is not beta software. DMD is the playground compiler, GDC the conservative but solid one, and LDC the core production tool. -- Russel. Thanks for putting it so eloquently, Russell. Iain. The difficulty is that gdc includes a lot of long-standing bugs that are fixed upstream.
Re: [dlang.org] new forum design - preview
On Wednesday, 13 January 2016 at 21:35:15 UTC, tsbockman wrote: On Wednesday, 13 January 2016 at 20:11:07 UTC, Jacob Carlborg wrote: On 2016-01-13 14:55, Vladimir Panteleev wrote: As soon as anyone comes up with a way to fit it into the design that doesn't look awful. I don't think this [1] looks so awful. [1] https://drive.google.com/open?id=0B7UtafxGD9vEX0NVYXlyWHhDX3c Yes. Please add this; the need to scroll all the way down to find the pager has been annoying me ever since I started reading these forums. It's probably the main reason I switched over to using the horizontal-split view, although now I could never give that up.
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Wednesday, 13 January 2016 at 01:43:21 UTC, John Colvin wrote: On Wednesday, 13 January 2016 at 01:39:26 UTC, John Colvin wrote: On Wednesday, 13 January 2016 at 00:31:48 UTC, Andrei Alexandrescu wrote: [...] I would completely agree, except that we have builtin types that don't obey this rule. I'd be all in favour of sticking with total orders, but it does make it hard (impossible?) to make a proper drop-in replacement for the builtin floating point numbers (including wrappers, e.g. std.typecons.Typedef can't handle nans correctly) or to properly handle comparisons between custom types and builtin floating points (as mentioned by tsbockman). I am all for keeping it simple here, but I still think there's a problem. https://issues.dlang.org/show_bug.cgi?id=15561 https://github.com/D-Programming-Language/phobos/pull/3927
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Tuesday, 12 January 2016 at 21:27:38 UTC, Timon Gehr wrote: On 01/12/2016 10:02 PM, John Colvin wrote: On Tuesday, 12 January 2016 at 20:52:51 UTC, Timon Gehr wrote: On 01/12/2016 07:27 PM, John Colvin wrote: ...

struct S
{
    auto opCmp(S rhs) { return float.nan; }
    bool opEquals(S rhs) { return false; }
}

unittest
{
    S a, b;
    assert(!(a == b));
    assert(!(a < b));
    assert(!(a <= b));
    assert(!(a > b));
    assert(!(a >= b));
}

what about classes and Object.opCmp? You can introduce a new opCmp signature in your subclass, but == is enforced to be reflexive for class objects. So this approach only really works for structs. (And for structs, it is obviously a hack.) I actually quite like it. Also, checking the generated assembly for an IntWithNaN type: no floating point comparisons in sight (with gdc and ldc at least, don't really care if dmd does something a bit dumb)
Re: [dlang.org] new forum design - preview
On Wednesday, 13 January 2016 at 06:01:41 UTC, Vladimir Panteleev wrote: http://beta.forum.dlang.org/ https://github.com/CyberShadow/DFeed/pull/51 Look pretty, but not using the full width makes it a big thumbs-down from me. I love the horizontal split mode and it doesn't work well in such a narrow space.
Re: D and C APIs
On Tuesday, 12 January 2016 at 10:43:40 UTC, Russel Winder wrote: On Tue, 2016-01-12 at 08:12 +, Atila Neves via Digitalmars-d wrote: On Monday, 11 January 2016 at 17:25:26 UTC, Russel Winder wrote: > I am guessing that people have an answer to this: > > D making use of a C API needs a D module adapter. This can > either be constructed by hand (well it can, but…), or it can > be auto generated from the C header files and then hand > massaged (likely far better). I think the only tool for this > on Linux is DStep. > > This is all very well for a static unchanging API, but what > about C APIs that are generated from elsewhere? This > requires constant update of the D modules. Do people just do > this by hand? > > Is the pain of creating a V4L D module set worth the effort > rather than just suffering the pain of writing in C++? This is the kind of thing I wrote reggae for. CMake is an alternative, but I'd rather write D than CMake script. CMake scripts are hideous in that the language is like nothing else, other than perhaps m4 macros. They should have used Lisp. Or Python. I must try Reggae at some stage, but for now I need to progress this Me TV rewrite. D and Rust provide so many barriers to effective use of a C library, that I am resorting to using C++. Yes you have to do extra stuff to avoid writing C code, but nowhere near the amount you have to to create D and Rust adaptors. What's so hard about writing a few function prototypes, aliases and enums? It's annoying that we have to do it, but compared to writing the rest of a project it's always going to be a tiny amount of work. For a lot of projects you can only bind what you actually need, I often just pretend that I have already written the bindings then write whatever lines are necessary to get it to compile!
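To make concrete how small a hand-written binding can be, here is an illustrative example of my own (binding C's sqrt from libm rather than anything V4L-specific): the entire "adaptor" is one extern(C) prototype matching the C signature.

```d
// The whole binding: declare the C function's signature with C linkage.
// dmd links against libc/libm by default, so the symbol resolves.
extern (C) double sqrt(double x) nothrow @nogc;

void main()
{
    assert(sqrt(9.0) == 3.0);
}
```

Structs and enums from the C headers are translated the same way, field by field; the point in the post stands that this is mechanical but rarely large if you only bind what you call.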
Re: D and C APIs
On Tuesday, 12 January 2016 at 13:24:48 UTC, Russel Winder wrote: On Tue, 2016-01-12 at 11:05 +, John Colvin via Digitalmars-d wrote: […] What's so hard about writing a few function prototypes, aliases and enums? It's annoying that we have to do it, but compared to writing the rest of a project it's always going to be a tiny amount of work. I started there but gave up quite quickly as there are two levels of API here, both of which are needed to use the higher-level API as it refers directly to low-level structs and stuff. There is the kernel device drivers level, which defines the low-level API, and then there is libdvbv5 which provides a (slightly) higher C API – with all the idiocies of a C API for doing high-level library programming :-( I have found a Rust wrapper of the kernel API, but that would mean writing all the libdvbv5 equivalent myself before writing the application code. There is no equivalent D version and certainly no easy way of wrapping libdvbv5 in D without it. Go has problems with C APIs and no sensible GTK3 wrapper. For a lot of projects you can only bind what you actually need, I often just pretend that I have already written the bindings then write whatever lines are necessary to get it to compile! The problem is that this is easy in C++ for a C API, but not for D or Rust using the same C API. C++ can use the C stuff directly, D and Rust need an adaptor.

I agree it's easier in C++, but what I mean is literally doing something like:

1) write code pretending you've got complete bindings
2) try to compile
3) write the bare minimum bindings necessary to make it compile
4) goto 1

It's amazing how little of an API often ends up being used and therefore how little binding code you have to write. Alternatively you can write the bindings immediately when you use them, but I prefer not having to do the context switch between writing application and bindings quite as often as that.
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Tuesday, 12 January 2016 at 19:44:18 UTC, Fool wrote: On Tuesday, 12 January 2016 at 19:21:47 UTC, John Colvin wrote: Note that a non-reflexive <= doesn't imply anything about ==. Non-reflexive '<=' does not make any sense at all. It might be a bit of a mess, agreed, but nonetheless: assert(!(float.nan <= float.nan));
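A runnable version of the check above, extended one step (my example): with builtin floats, `<=` fails reflexivity for NaN, and that fact is independent of what `==` does; here both happen to be false.

```d
void main()
{
    double n = double.nan;

    // <= is non-reflexive once NaN is involved...
    assert(!(n <= n));

    // ...and separately, == is also false for NaN. Neither fact
    // implies the other: they are distinct relations.
    assert(!(n == n));
    assert(n != n);
}
```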
opCmp, [partial/total/pre]orders, custom floating point types etc.
Background: Some important properties for binary relations on sets that are somewhat similar to the normal ≤/≥ on the real numbers or integers are:

a ≤ a (reflexivity);
if a ≤ b and b ≤ a, then a = b (antisymmetry);
if a ≤ b and b ≤ c, then a ≤ c (transitivity);
a ≤ b or b ≤ a (totality, implies reflexivity).

Definitions: A preorder obeys reflexivity and transitivity. A partial order obeys reflexivity, transitivity and antisymmetry. A total order obeys transitivity, antisymmetry and totality. A total preorder obeys transitivity and totality but not antisymmetry.

Examples: Arrays ordered by length, vectors ordered by euclidean length, complex numbers ordered by absolute value etc. are all total preorders. Integers with ≤ or ≥ form a total order. float/double/real obey antisymmetry and transitivity but not reflexivity or totality.

Implementations in D: Total order: opCmp with "consistent" opEquals to enforce antisymmetry. Total preorder: opCmp with "inconsistent" opEquals to break antisymmetry. Preorder or partial order: not possible in D, opCmp insists on totality. Antisymmetry and transitivity but not reflexivity or totality, e.g. custom float: not possible in D, opCmp insists on totality (no way for opCmp to signify nan comparisons, either with nan (reflexivity) or others (totality & reflexivity)).

Solutions to the above problems:

1) opCmp - or some extended, renamed version of it - needs 4 return values: greater, lesser, equal and neither/unequal/incomparable. This would be the value that is returned when e.g. either side is nan.

or, less intrusively and more (runtime) efficiently:

2) Introduce a new special function `bool opCmpOrdered(T rhs)` that, if defined, is used to short-circuit a comparison. Any previous lowering to `a.opCmp(b) [<>]=? 0` (as in https://dlang.org/spec/operatoroverloading.html#compare) would now lower to `a.opCmpOrdered(b) && a.opCmp(b) [<>]=? 0`. E.g. `a >= b` becomes `a.opCmpOrdered(b) && a.opCmp(b) >= 0`.
If opCmpOrdered isn't defined the lowering is unchanged from before (or opCmpOrdered defaults to true, same thing...). Bigger example: a custom float type

struct MyFloat
{
    // ...

    bool isNaN() { /* ... */ }

    bool opCmpOrdered(MyFloat rhs)
    {
        if (this.isNaN || rhs.isNaN)
            return false;
        else
            return true;
    }

    int opCmp(MyFloat rhs)
    {
        // can assume neither are nan
        /* ... */
    }

    bool opEquals(MyFloat rhs)
    {
        if (this.isNaN || rhs.isNaN)
            return false;
        else
            /* ... */
    }
}

unittest
{
    MyFloat a, b; // has .init as nan, of course :)

    static void allFail(MyFloat a, MyFloat b)
    {
        // all of these should short-circuit because
        // opCmpOrdered will return false
        assert(!(a == b));
        assert(!(a < b));
        assert(!(a <= b));
        assert(!(a > b));
        assert(!(a >= b));
    }

    allFail(a, b);
    a = 3;
    allFail(a, b);

    b = 4;
    assert(a != b);
    assert(a < b);
    assert(a <= b);
    assert(!(a > b));
    assert(!(a >= b));

    a = 4;
    assert(a == b);
    assert(!(a < b));
    assert(a <= b);
    assert(!(a > b));
    assert(a >= b);
}

P.S. This is not just about floats! It is also very useful for making types that represent missing data (e.g. encapsulating using int.min for a missing value). I can only come up with strained examples for preorders and partial orders that I would want people using < and > for, so I won't speak of them here.

P.P.S. Note that I am *not* trying to extend D's operator overloading to make > and < usable for arbitrary binary relations, like in C++. This small change is strictly within the realm of what <, > and = are already used for (in D, with floats). I'm convinced that if you wouldn't read it out loud as something like "less/fewer/smaller than" or "greater/more/bigger than", you shouldn't be using < or >, you should name a separate function; I don't think this proposal encourages violating that principle.
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Tuesday, 12 January 2016 at 18:36:32 UTC, Ilya Yaroshenko wrote: On Tuesday, 12 January 2016 at 18:27:15 UTC, John Colvin wrote: Background: Some important properties for binary relations on sets that are somewhat similar to the normal ≤/≥ on the real numbers or integers are: [...] http://dlang.org/phobos/std_math.html#.cmp ? --Ilya That doesn't solve the whole problem: because std.math.cmp isn't the default comparator, you can't use a totally ordered float type as a drop-in replacement for the builtin float types. A more interesting question it brings up, though, is: does the approach of imposing a (somewhat arbitrary) total order work for other types where you would normally use a less "strict" ordering? Does it work well for missing-data representations?
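To illustrate the total order std.math.cmp imposes (my example, not from the thread): it orders every double, NaNs included, so it is usable as a sort predicate where the builtin `<` is not well behaved.

```d
import std.algorithm.sorting : sort;
import std.math : cmp, isNaN;

void main()
{
    auto xs = [2.0, double.nan, -double.infinity, 1.0];

    // cmp defines a total order over all doubles; positive NaN sorts
    // after +infinity, negative NaN before -infinity.
    sort!((a, b) => cmp(a, b) < 0)(xs);

    assert(xs[0] == -double.infinity);
    assert(xs[1] == 1.0 && xs[2] == 2.0);
    assert(isNaN(xs[3])); // the (positive) NaN ends up last
}
```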
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Tuesday, 12 January 2016 at 19:13:29 UTC, John Colvin wrote: On Tuesday, 12 January 2016 at 19:00:11 UTC, Andrei Alexandrescu wrote: On 01/12/2016 01:27 PM, John Colvin wrote: Preorder or partial order: not possible in D, opCmp insists on totality. The way I look at it, a partial order would implement opCmp and opEqual such that a < b, b < a, and a == b are simultaneously false for unordered objects. Would that float your boat? -- Andrei a<=b and b<=a must also be false. That would work for a partial order, yes. Unfortunately, that's not possible with the current opCmp design, hence my 2 suggestions for improvements (I'm pretty sure the second one is better). The key thing is to have a design that doesn't enforce totality. s/totality/reflexivity which also implies it can't force totality. Note that a non-reflexive <= doesn't imply anything about ==.
Re: Official Announcement: 'Learning D' is Released
On Tuesday, 12 January 2016 at 20:32:57 UTC, jmh530 wrote: I'm not sure when you would want to use dynamic bindings. When you want to have control over the process of loading a library e.g. if you want it to be an optional dependency at runtime.
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Tuesday, 12 January 2016 at 21:12:08 UTC, tsbockman wrote: On Tuesday, 12 January 2016 at 20:56:41 UTC, John Colvin wrote: Please consider the second design I proposed? It's small, simple, has no impact on existing code and works in the right direction (library types can emulate / act as replacements for builtins) as opposed to the other way (library types are second class). If non-total ordering is going to be supported, I don't understand what's wrong with just allowing this:

bool opCmp(string op, T)(T right) const { }

As an alternative to the current:

bool opEquals(T)(T right) const { }
int opCmp(T)(T right) const { }

Make it a compile-time error for a type to implement both. There is no need to deprecate the current system - people can even be encouraged to continue using it, in the very common case where it can actually express the desired logic. This approach is simple and breaks no existing code. It is also optimally efficient with respect to runtime performance. I would kind of like that (it would definitely allow me to do what I want, as well as anything else I have failed to notice I need yet), but it flies quite strongly against Walter's (and mine to some extent) views that we'll only end up with C++-like abuse of the overloading if we allow that. Having > and < overloaded separately is asking for trouble. Another possibility would be to introduce opCmpEquals(T)(T rhs) to handle [<>]= explicitly.
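For concreteness, a hedged sketch of what tsbockman's templated signature could express. Today's compilers won't route the comparison operators through a templated opCmp, so the unittest calls it explicitly; under the proposal `a < b` would lower to something like `a.opCmp!"<"(b)`. The wrapper type and its fields are invented for illustration:

```d
// Sketch of the proposed templated comparison.  The signature is from
// the post; PartialFloat is an invented demo type.
struct PartialFloat
{
    double value; // double.init is NaN, so default is "unordered"

    bool opCmp(string op)(PartialFloat rhs) const
    {
        if (value != value || rhs.value != rhs.value)
            return false; // NaN involved: every ordering comparison fails
        return mixin("value " ~ op ~ " rhs.value");
    }
}

unittest
{
    auto a = PartialFloat(1), b = PartialFloat(2), n = PartialFloat();
    assert(a.opCmp!"<"(b));
    assert(!n.opCmp!"<="(b) && !n.opCmp!">="(b)); // unordered both ways
}
```

Note how the one-template form can make `<=` and `>=` both false simultaneously, which the single `int opCmp` return value cannot express.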
Re: Official Announcement: 'Learning D' is Released
On Tuesday, 12 January 2016 at 22:00:32 UTC, jmh530 wrote: On Tuesday, 12 January 2016 at 21:10:28 UTC, John Colvin wrote: When you want to have control over the process of loading a library e.g. if you want it to be an optional dependency at runtime. I've seen the example in the book. I'm just not sure why you would want an optional runtime dependency. Anything in your application / library that relies on resources that may or may not be available on the user's system. E.g. plugins the user wants to load.
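For the record, a minimal sketch of what "control over loading" looks like at the lowest level on POSIX, using core.sys.posix.dlfcn directly. The library and symbol names here are invented; real projects typically go through a binding loader such as Derelict rather than hand-rolling this:

```d
import core.sys.posix.dlfcn : dlopen, dlsym, RTLD_LAZY;

// Try to load an optional plugin at runtime; returns false if the
// library simply isn't installed on the user's system.
// "libmyplugin.so" and "plugin_init" are invented names.
bool tryLoadPlugin()
{
    void* handle = dlopen("libmyplugin.so", RTLD_LAZY);
    if (handle is null)
        return false; // optional dependency absent: degrade gracefully

    auto init = cast(void function()) dlsym(handle, "plugin_init");
    if (init !is null)
        init();
    return true;
}
```

With static bindings the program would fail to start at all when the library is missing; this is exactly the behaviour dynamic bindings let you avoid.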
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Tuesday, 12 January 2016 at 20:04:26 UTC, Andrei Alexandrescu wrote: On 01/12/2016 03:01 PM, John Colvin wrote: On Tuesday, 12 January 2016 at 19:28:36 UTC, Andrei Alexandrescu wrote: On 01/12/2016 02:13 PM, John Colvin wrote: a<=b and b<=a must also be false. Would the advice "Only use < and == for partially-ordered data" work? -- Andrei If by that you mean "Only use <= or >= on data that defines a total ordering"* I guess it would work, but it has some pretty big downsides: 1) Annoying to use. 2) You have to use the opCmp return 0 (which normally means a[<>]=b && b[<>]=a) to mean "not comparable". 3) Not enforceable. Because of 2 you'll always get true if you use >= or <= on any a pair that doesn't have a defined ordering. 4) inefficient (have to do both < and == separately which can be a lot more work than <=). *would be safer to say "types that define", but strictly speaking... I'd be in favor of giving people the option to disable the use of <= and >= for specific data. It's a simple and logical approach. -- Andrei Having thought about this a bit more, it doesn't fix the problem: It doesn't enable custom float types that are on par with builtins, doesn't enable transparent "missing-value" types and doesn't make tsbockman's checked integer types (or other custom types) work properly and transparently with builtin floats. The points 1, 2 and 4 from above still stand. Also - the big problem - it requires antisymmetry, which means no preorders. One of the great things about D's opCmp and opEquals is that it separates `a==b` from `a<=b && b<=a`, which enables it to express types without antisymmetric ordering (see original post for examples). What you're describing would be a frustrating situation where you have to choose between breaking antisymmetry and breaking totality, but never both. Please consider the second design I proposed?
It's small, simple, has no impact on existing code and works in the right direction (library types can emulate / act as replacements for builtins) as opposed to the other way (library types are second class).
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Tuesday, 12 January 2016 at 20:52:51 UTC, Timon Gehr wrote: On 01/12/2016 07:27 PM, John Colvin wrote: ...

struct S
{
    auto opCmp(S rhs) { return float.nan; }
    bool opEquals(S rhs) { return false; }
}

unittest
{
    S a, b;
    assert(!(a == b));
    assert(!(a < b));
    assert(!(a <= b));
    assert(!(a > b));
    assert(!(a >= b));
}

What about classes and Object.opCmp?
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Tuesday, 12 January 2016 at 20:52:51 UTC, Timon Gehr wrote: On 01/12/2016 07:27 PM, John Colvin wrote: ...

struct S
{
    auto opCmp(S rhs) { return float.nan; }
    bool opEquals(S rhs) { return false; }
}

unittest
{
    S a, b;
    assert(!(a == b));
    assert(!(a < b));
    assert(!(a <= b));
    assert(!(a > b));
    assert(!(a >= b));
}

Interesting, I'll have to think more about this. Pretty ugly to have to use floating point instructions for every comparison, no matter the actual data, but maybe there's something here...
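The trick works because `a < b` lowers to `a.opCmp(b) < 0`, and every ordering comparison against a NaN result is false. To illustrate that it generalizes beyond the all-comparisons-fail case, here is a hedged sketch of a missing-data type whose opCmp returns an ordinary number when both operands are present and float.nan only when one is missing (the type and its members are invented):

```d
// Invented demo type: opCmp returns a normal ordering value when both
// sides are present, and float.nan when either is missing, so every
// ordering comparison involving a missing value is false.
struct MaybeInt
{
    bool present = false;
    int value;

    float opCmp(MaybeInt rhs) const
    {
        if (!present || !rhs.present)
            return float.nan; // unordered
        return value < rhs.value ? -1 : value > rhs.value ? 1 : 0;
    }

    bool opEquals(MaybeInt rhs) const
    {
        return present && rhs.present && value == rhs.value;
    }
}

unittest
{
    MaybeInt missing, three = MaybeInt(true, 3), four = MaybeInt(true, 4);
    assert(three < four);
    assert(!(missing < three) && !(missing >= three));
    assert(missing != three);
}
```

The ordinary-comparison path still returns a float here, which is the "pretty ugly" cost mentioned above: the result of opCmp must be compared against 0 in floating point even when the data is integral.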
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Tuesday, 12 January 2016 at 19:50:57 UTC, Fool wrote: On Tuesday, 12 January 2016 at 19:48:35 UTC, Fool wrote: On Tuesday, 12 January 2016 at 19:46:47 UTC, John Colvin wrote: On Tuesday, 12 January 2016 at 19:44:18 UTC, Fool wrote: Non-reflexive '<=' does not make any sense at all. It might be a bit of a mess, agreed, but nonetheless: assert(!(float.nan <= float.nan)); Agreed, but in case of float '<=' is not an order at all. By the way, that implies that the result of sorting an array of float by default comparison is undefined unless the array does not contain NaN. Didn't think of that. Yikes. Should we change the default predicate of std.algorithm.sort to std.math.cmp when ElementType!R is floating point?
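To make the suggestion concrete, this is what opting in to std.math.cmp as the sort predicate looks like today (std.math.cmp imposes the IEEE total order, which places positive NaNs after all numbers):

```d
import std.algorithm : sort;
import std.math : cmp, isNaN;

void main()
{
    auto a = [3.0, double.nan, 1.0, 2.0];

    // The default predicate `a < b` is not a strict weak ordering once
    // NaNs are present, so sorting with it is undefined.  std.math.cmp
    // imposes a total order instead.
    a.sort!((x, y) => cmp(x, y) < 0);

    assert(a[0] == 1.0 && a[1] == 2.0 && a[2] == 3.0);
    assert(a[3].isNaN); // positive NaN sorts after all numbers
}
```

Making this the default predicate for floating-point element types, as suggested, would cost some performance on the common NaN-free case, which is presumably the trade-off to discuss.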
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Tuesday, 12 January 2016 at 19:28:36 UTC, Andrei Alexandrescu wrote: On 01/12/2016 02:13 PM, John Colvin wrote: a<=b and b<=a must also be false. Would the advice "Only use < and == for partially-ordered data" work? -- Andrei If by that you mean "Only use <= or >= on data that defines a total ordering"* I guess it would work, but it has some pretty big downsides: 1) Annoying to use. 2) You have to use the opCmp return 0 (which normally means a[<>]=b && b[<>]=a) to mean "not comparable". 3) Not enforceable. Because of 2 you'll always get true if you use >= or <= on any a pair that doesn't have a defined ordering. 4) inefficient (have to do both < and == separately which can be a lot more work than <=). *would be safer to say "types that define", but strictly speaking...
Re: "Good PR" mechanical check
On Tuesday, 12 January 2016 at 17:22:16 UTC, Walter Bright wrote: On 1/12/2016 6:53 AM, Adam D. Ruppe wrote: I'm pretty sure dfmt is up to the task in 99% of cases already. The last 1% always takes 99% of the dev time :-( But in this case, the 1% doesn't actually have to be fixed (although of course, the smaller the better), it's just the 1% of the work left to be done manually, where currently we do 100% manually.
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Tuesday, 12 January 2016 at 19:00:11 UTC, Andrei Alexandrescu wrote: On 01/12/2016 01:27 PM, John Colvin wrote: Preorder or partial order: not possible in D, opCmp insists on totality. The way I look at it, a partial order would implement opCmp and opEqual such that a < b, b < a, and a == b are simultaneously false for unordered objects. Would that float your boat? -- Andrei a<=b and b<=a must also be false. That would work for a partial order, yes. Unfortunately, that's not possible with the current opCmp design, hence my 2 suggestions for improvements (I'm pretty sure the second one is better). The key thing is to have a design that doesn't enforce totality.
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Tuesday, 12 January 2016 at 22:28:13 UTC, Andrei Alexandrescu wrote: On 01/12/2016 03:56 PM, John Colvin wrote: Please consider the second design I proposed? I don't think it solves a large problem. -- Andrei Ok. Would you consider any solution, or is that a "leave it broken"? I think I can find a way around the problem for my purposes in the short term. However, for other people implementing custom types I think it is important, it's a dirty corner that needs sorting out. The more you get to know D, the more of them you find, the more frustrating it gets seeing they aren't likely to get fixed...
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Wednesday, 13 January 2016 at 00:31:48 UTC, Andrei Alexandrescu wrote: On 01/12/2016 06:52 PM, John Colvin wrote: On Tuesday, 12 January 2016 at 22:28:13 UTC, Andrei Alexandrescu wrote: On 01/12/2016 03:56 PM, John Colvin wrote: Please consider the second design I proposed? I don't think it solves a large problem. -- Andrei Ok. Would you consider any solution, or is that a "leave it broken"? I'd leave it to a named function. Using the built-in comparison for exotic orderings is bound to confuse users. BTW not sure you know, but D used to have a number of floating point operators like !<>=. Even those didn't help. -- Andrei I would completely agree, except that we have builtin types that don't obey this rule. I'd be all in favour of sticking with total orders, but it does make it hard (impossible?) to make a proper drop-in replacement for the builtin floating point numbers (including wrappers, e.g. std.typecons.Typedef can't handle nans correctly) or to properly handle comparisons between custom types and builtin floating points (as mentioned by tsbockman). I am all for keeping it simple here, but I still think there's a problem.
Re: opCmp, [partial/total/pre]orders, custom floating point types etc.
On Wednesday, 13 January 2016 at 01:39:26 UTC, John Colvin wrote: On Wednesday, 13 January 2016 at 00:31:48 UTC, Andrei Alexandrescu wrote: [...] I would completely agree, except that we have builtin types that don't obey this rule. I'd be all in favour of sticking with total orders, but it does make it hard (impossible?) to make a proper drop-in replacement for the builtin floating point numbers (including wrappers, e.g. std.typecons.Typedef can't handle nans correctly) or to properly handle comparisons between custom types and builtin floating points (as mentioned by tsbockman). I am all for keeping it simple here, but I still think there's a problem. https://issues.dlang.org/show_bug.cgi?id=15561
Re: Beta D 2.070.0-b1
On Sunday, 3 January 2016 at 19:24:57 UTC, Martin Nowak wrote: First beta for the 2.070.0 release. Still a few things missing from the changelog, there is a new package std.experimental.ndslice, and native (DWARF based) exception handling on linux. http://dlang.org/download.html#dmd_beta http://dlang.org/changelog/2.070.0.html Please report any bugs at https://issues.dlang.org -Martin Very pleased to say that all of DlangScience worked with this beta, no regressions/breakage.
Re: GDC includes from LDC
On Sunday, 10 January 2016 at 16:23:24 UTC, Russel Winder wrote: Iain, Playing with the SCons tests, I am heading to the hypothesis that, at least on Debian Sid, if both gdc and ldc packages are installed, then gdc picks up the D source files from the ldc package in preference to the ones from the gdc package.

scons: Building targets ...
gdc -I. -c -o foo.o foo.d
scons: building terminated because of errors.
STDERR =
/usr/include/d/core/stdc/config.d:28:3: error: static if conditional cannot be at global scope
 static if( (void*).sizeof > int.sizeof )
 ^
scons: *** [foo.o] Error 1

|> dpkg -S /usr/include/d/core/stdc/config.d
libphobos2-ldc-dev: /usr/include/d/core/stdc/config.d

I think this is the same problem I have on OS X.
Re: Anyone using DMD to build 32bit on OS X?
On Sunday, 10 January 2016 at 17:12:40 UTC, Jacob Carlborg wrote: I've implemented native TLS in DMD on OS X for 64bit. Now the question is, does it need to work for 32bit as well? The easiest would be to drop the 32bit support all together. Other options would be to continue to use emulate TLS on 32bit or implement native TLS for 32bit as well. I would prefer to not have to do this for 32bit as well. As far as I know we haven't released a 32bit binary of DMD for a very long time. It would be very rare to find a Mac that cannot run 64bit binaries. Native TLS on OS X would mean that the runtime requirements for the binaries produced by DMD on OS X would be OS X 10.7 (Lion) or later. Hopefully that should not be a problem since it's several years (and versions) old. I definitely don't care about 32 bit on OS X. However, I see no need to drop it if the current TLS emulation works.
Re: Self-Modifying code for user settings optimization
On Saturday, 9 January 2016 at 11:38:06 UTC, Rikki Cattermole wrote: Enums are free and global variables may have cache misses issue An enum isn't guaranteed to be embedded in the instruction stream, there's still plenty of opportunities for cache misses.
Re: Self-Modifying code for user settings optimization
On Saturday, 9 January 2016 at 14:55:27 UTC, Rikki Cattermole wrote: On 10/01/16 3:50 AM, John Colvin wrote: On Saturday, 9 January 2016 at 11:38:06 UTC, Rikki Cattermole wrote: Enums are free and global variables may have cache misses issue An enum isn't guaranteed to be embedded in the instruction stream, there's still plenty of opportunities for cache misses. enum FOO = true; static if (FOO) { doThis(); } else { doThat(); } No need for enum to be embedded in the instruction stream. Because it won't be. The else block just doesn't get compiled in. Of course, I just meant that when reading a global or an enum, enum isn't necessarily cheaper. static if f.t.w.
Re: noob in c macro preprocessor hell converting gsl library header files
On Wednesday, 6 January 2016 at 13:36:03 UTC, data pulverizer wrote: I have been converting C numeric libraries and depositing them here: https://github.com/dataPulverizer. So far I have glpk and nlopt converted on a like-for-like C function basis. I am now stuck on the gsl library, primarily because of the C preprocessor code, which I am very new to. The following few are particularly baffling to me:

#define INLINE_FUN extern inline
// used in gsl_pow_int.h:
INLINE_FUN double gsl_pow_2(const double x) { return x*x; }

Could I just ignore the INLINE_FUN and use alias for function pointer declaration? For example:

alias gsl_pow_2 = double gsl_pow_2(const(double) x);

Yes, you should be able to ignore INLINE_FUN.

double gsl_pow_2(const double x);

is the correct declaration.

#define INLINE_DECL
// used in interpolation/gsl_interp.h:
INLINE_DECL size_t gsl_interp_bsearch(const double x_array[], double x, size_t index_lo, size_t index_hi);

I would guess the same as for INLINE_FUN? Yes.

#define GSL_VAR extern
// used in rng/gsl_rng.h:
GSL_VAR const gsl_rng_type *gsl_rng_borosh13;

Perhaps GSL_VAR can be ignored and I could use:

gsl_rng_borosh13 const(gsl_rng_type)*;

It should be

extern gsl_rng_type* gsl_rng_borosh13;

I have been using these kinds of fixes and have not been able to get the rng module to recognise the ported functions, meaning that something has been lost in translation. I am currently getting the following error:

$ gsl_rng_print_state
rng_example.o: In function `_Dmain':
rng_example.d:(.text._Dmain+0x13): undefined reference to `gsl_rng_print_state'
collect2: error: ld returned 1 exit status

I can't seem to call any of the functions, but the types are recognized. Thanks in advance

I think you might have some confusion between function declarations:

T myFunction(Q myArg);

function pointer type declarations:

alias MyFunctionPointerType = T function(Q myArg);

and function pointer declarations:

MyFunctionPointerType myFunctionPointer;
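Putting the advice above together, a hedged sketch of what the translated declarations could look like. The struct is treated as opaque here purely for illustration, and note one assumption beyond the reply: on current compilers a C global usually also needs `__gshared` alongside `extern` — check the real headers before relying on any of this:

```d
// Hedged sketch of translated gsl declarations (not verified against
// the actual gsl headers; gsl_rng_type is left opaque here).
extern (C):

struct gsl_rng_type; // opaque for illustration

// INLINE_FUN / INLINE_DECL can simply be dropped; plain declarations
// suffice, since the definitions live in the compiled C library:
double gsl_pow_2(const double x);
size_t gsl_interp_bsearch(const(double)* x_array, double x,
                          size_t index_lo, size_t index_hi);

// GSL_VAR is C's `extern`; in D the global also needs __gshared so it
// isn't treated as thread-local:
extern __gshared const(gsl_rng_type)* gsl_rng_borosh13;
```

The linker error in the post is consistent with gsl_pow_2-style inline functions: if a symbol only ever existed as `extern inline` in a header, the library may not export it, so the declaration compiles but the link fails.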
Re: Better docs for D (WIP)
On Tuesday, 5 January 2016 at 18:34:20 UTC, JohnCK wrote: On Tuesday, 5 January 2016 at 18:09:57 UTC, Andrei Alexandrescu wrote: Is the recent http://wiki.dlang.org/Contributing_to_dlang.org along the lines of what you need? What other sort of documentation would you find useful? I took a look at that link, and you know what would be (at least for me) more useful? A "Let's write a doc" example, for example: creating a sample function and documenting it, step by step. I really think that would be many times more useful. You see that pattern used a lot, not only for docs but for explaining things in general; recently we saw this in another topic about writing a scalable chat using vibe.d! JohnCK. +1 That wiki article is a great reference, but it's pretty daunting when someone just wants to make a small change.
Re: Proposal: Database Engine for D
On Monday, 4 January 2016 at 12:28:47 UTC, Russel Winder wrote: I must now try creating a D version of the pytest.mark.parametrize decorator – unless someone already has and I have just missed it. A quick look at pytest.mark.parametrize suggests it could be implemented with UDAs and a test-runner that finds all declarations in a module (recursively), does all the relevant logic (e.g. "that's got more than one instance of parametrize, so do some sort of cartesian product of the inputs") and actually runs the tests. The main thing that Python has here over D is that D's UDAs can't directly modify the function they're attached to, but I don't think that's necessary for parametrize. Interestingly, functions can query their own attributes:

@(3) auto attr() { return __traits(getAttributes, attr)[0]; }

unittest { assert(attr() == 3); }

not sure when I'd use that though...
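To sketch the idea described above — all names here are invented, this is not an existing library — a parametrize UDA plus a tiny runner that instantiates the test once per value:

```d
import std.traits : getUDAs;

// Invented UDA: holds the values the test should run with.
struct parametrize(T) { T[] values; }

@(parametrize!int([1, 2, 3]))
void checkSquare(int x) { assert(x * x >= 0); }

// Tiny runner: pull the UDA off the function and call it per value.
// A full runner would walk a module recursively and take cartesian
// products when several parametrize attributes are attached.
void run(alias fun, T)()
{
    foreach (uda; getUDAs!(fun, parametrize!T))
        foreach (v; uda.values)
            fun(v);
}

unittest { run!(checkSquare, int)(); }
```

Since UDAs are inert data, all the behaviour lives in the runner — which is exactly the division of labour the post describes.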
Re: extern(C++, ns)
On Sunday, 3 January 2016 at 14:50:51 UTC, Jacob Carlborg wrote: On 2016-01-03 08:17, Manu via Digitalmars-d wrote: I'll try and reduce this one again... Have you tried using dustmite [1] to reduce the code? It will automatically modify/reduce the source code as long as the issue persists. When it no longer can modify the source code while the issue still persist it's complete and you have a reduced test case. Hopefully small enough. The result is usually small enough that if you have proprietary code can rename the symbols, if necessary. [1] https://github.com/CyberShadow/DustMite it can obfuscate the output for you with --obfuscate
Re: immutable promise broken in unions?
On Saturday, 2 January 2016 at 10:04:47 UTC, Shriramana Sharma wrote:

import std.stdio;

union EarthLocation
{
    struct { immutable double lon, lat, alt; }
    double[3] data;
}

void main()
{
    EarthLocation d = {data: [4, 5, 6]};
    writeln(d.data);
    d.data = [1, 2, 3];
    writeln(d.data);
}

I get the output:

[4, 5, 6]
[1, 2, 3]

I thought the promise of `immutable` was: never changes, whether via this interface or otherwise. How does then the above work? Using DMD 2.069.2 on Kubuntu 64 bit. You are manually breaking immutable by making a union of immutable and mutable data and then writing to the mutable reference. This is roughly equivalent to casting away immutable and then writing to the reference. It's a bug in your code. All references to the same data should be 1) either immutable or const or all the references should be 2) either mutable or const (assuming the data was never immutable). Anything else is dangerous.
Re: immutable promise broken in unions?
On Saturday, 2 January 2016 at 12:08:48 UTC, Meta wrote: On Saturday, 2 January 2016 at 12:07:31 UTC, John Colvin wrote: You are manually breaking immutable by making a union of immutable and mutable data and then writing to the mutable reference. This is roughly equivalent to casting away immutable and then writing to the reference. It's a bug in your code. All references to the same data should be 1) either immutable or const or all the references should be 2) either mutable or const (assuming the data was never immutable). Anything else is dangerous. Surely the compiler should disallow this. It makes it trivial to break the type system otherwise. Casting away immutable can sometimes be necessary (e.g. when talking to other languages), so I'm not sure it should be disallowed, but it'd be great if it was somehow easier to catch these bugs.
Re: Why isn't field-wise constructor automatic for structs and not classes?
On Saturday, 2 January 2016 at 02:12:19 UTC, Shriramana Sharma wrote: If I have: struct TimeSpan { double start, end; } Then both the following automatically work: auto s = TimeSpan(); auto t = TimeSpan(1, 2); But if I make it a class (I need to) then I have to explicitly define a field-wise constructor else only a constructor with no args is automatically defined. Why can't the field-wise functionality be automatic for classes too? Strictly speaking you aren't calling a constructor there, you're writing a struct literal.
Re: Why isn't field-wise constructor automatic for structs and not classes?
On Saturday, 2 January 2016 at 14:57:58 UTC, Shriramana Sharma wrote: John Colvin wrote: Strictly speaking you aren't calling a constructor there, you're writing a struct literal. Why do you say I'm not calling a constructor? https://dlang.org/spec/struct.html#struct-literal And that still doesn't answer the question of why can't we have an automatic field-wise constructor for classes... Classes aren't as simple as structs: they have hidden members, inherited members... Technically speaking the compiler is even allowed to change the field ordering. There may be other reasons I'm not aware of / am not thinking of right now.
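Since the language won't generate a field-wise constructor for classes, a common workaround is to write one once using tupleof. A hedged sketch — this is a known idiom rather than anything from the thread, so check it against your class's exact layout:

```d
// Hand-written field-wise constructor: `typeof(this.tupleof)` expands
// to one parameter per declared field, in declaration order.
class TimeSpan
{
    double start, end;

    this() {} // keep a no-argument constructor available
    this(typeof(this.tupleof) args) { this.tupleof = args; }
}

unittest
{
    auto t = new TimeSpan(1, 2);
    assert(t.start == 1 && t.end == 2);
}
```

The same two lines work unchanged if fields are added later, which is the main attraction over spelling the parameter list out by hand.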
Re: Voting For std.experimental.ndslice
On Wednesday, 30 December 2015 at 21:39:54 UTC, Ilya Yaroshenko wrote: On Tuesday, 29 December 2015 at 18:08:52 UTC, Andrei Alexandrescu wrote: On 12/29/2015 11:28 AM, Robert burner Schadek wrote: On Tuesday, 29 December 2015 at 16:11:00 UTC, Ilya Yaroshenko wrote: OK, lets discuss every function. That is exactly the problem. It is not about the documentation of the individual functions; it is about the documentation binding the functions together and giving the idea of the library. Hopefully this is something that you or someone else could help by creating pull requests. Any volunteers? -- Andrei Does it means that the PR can be merged? --Ilya If there's a time constraint, perhaps we could merge it for 2.070 but keep adding documentation updates to both master and release branches?
Re: Pain when changing DMD version...
On Thursday, 24 December 2015 at 17:20:02 UTC, JerryR wrote: On Thursday, 24 December 2015 at 14:48:46 UTC, John Colvin wrote: Often when you see breakage it's the compiler actually enforcing a pre-existing rule that the code in question broke. So that made me think: is there any flag that I could turn on to pass over those errors? JerryR. No, because then we'd be stuck supporting every piece of code that used to compile, whether or not it was ever legal code. Illegal code that compiles is a bug; bugs must be fixed. There are some changes that could be handled in the way you describe, e.g. the -dip25 flag. Doing more of these risks getting into complicated interactions between them. It's a reasonable request, but it's not going to happen except in carefully limited cases.
Re: Pain when changing DMD version...
On Thursday, 24 December 2015 at 17:17:39 UTC, JerryR wrote: On Thursday, 24 December 2015 at 16:05:18 UTC, bachmeier wrote: But 2.060 was released in 2012... Yes I know it's old but and the reason was to avoid breakage that already had happened before. I know that sometimes this (Breakage) is inevitable as the language grows, but my concern here is that the language isn't new, and it's in version +2, and again I was changing from 2.060 to 2.066... which looking now is already lagged since the new version is 2.069. JerryR. I would strongly recommend moving straight to 2.069. It will take very little extra effort compared to going to 2.066 and there's been a lot of improvements and bugfixes since then. Some of the failures to compile you're seeing might even be bugs in 2.066!