Re: Speeding up text file parser (BLAST tabular format)
I had some luck building a local copy of LLVM in my home directory, using a Linux version about as old as yours (LLVM 3.5, in my case), running configure with --prefix=/home/andrew/llvm so make install would install it somewhere I had permissions. Then I changed the cmake command to:

    cmake -L -DLLVM_CONFIG="/home/andrew/llvm/bin/llvm-config" ..

and I got a working install of ldc. Make yourself a cup of tea while you wait if you try it, though; LLVM took about an hour and a half to compile.

On Tuesday, 15 September 2015 at 13:49:04 UTC, Fredrik Boulund wrote:
On Tuesday, 15 September 2015 at 10:01:30 UTC, John Colvin wrote:
try this: https://dlangscience.github.io/resources/ldc-0.16.0-a2_glibc2.11.3.tar.xz
Nope, :(

    $ ldd ldc2
    ./ldc2: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by ./ldc2)
    linux-vdso.so.1 => (0x7fff2ffd8000)
    libpthread.so.0 => /lib64/libpthread.so.0 (0x00318a00)
    libdl.so.2 => /lib64/libdl.so.2 (0x00318a40)
    libncurses.so.5 => /lib64/libncurses.so.5 (0x00319bc0)
    librt.so.1 => /lib64/librt.so.1 (0x00318a80)
    libz.so.1 => /lib64/libz.so.1 (0x00318ac0)
    libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00318dc0)
    libm.so.6 => /lib64/libm.so.6 (0x003189c0)
    libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00318c00)
    libc.so.6 => /lib64/libc.so.6 (0x00318980)
    /lib64/ld-linux-x86-64.so.2 (0x00318940)
    libtinfo.so.5 => /lib64/libtinfo.so.5 (0x00319900)

Thanks for trying though!
Re: foreach(line; f.byLine) produces core.exception.InvalidMemoryOperationError@(0) in 2.067 but not 2.066
Thanks very much for your help, it seemed to work a treat (I hope :))! Compiling ldc wasn't too bad: make the changes to runtime/phobos/std/stdio.d and then build as normal, no problem. Unittests are passing and it handles that file perfectly.

On Tuesday, 15 September 2015 at 16:11:06 UTC, Martin Krejcirik wrote:
On Tuesday, 15 September 2015 at 15:28:23 UTC, Andrew Brown wrote:
A very naive question: would it be possible in this case to backport it into gdc/ldc by copying the pull request and building the compiler from source, or would this get me into a world of pain?
Cherry-picking should work and merge cleanly. I have done it for DMD 2.067. I don't know how difficult it is to recompile Phobos and Druntime with LDC/GDC though.
Re: foreach(line; f.byLine) produces core.exception.InvalidMemoryOperationError@(0) in 2.067 but not 2.066
On Tuesday, 15 September 2015 at 14:55:42 UTC, Martin Krejcirik wrote: For reference, it was this PR: https://github.com/D-Programming-Language/phobos/pull/3089 which fixed the same issue for me. A very naive question: would it be possible in this case to backport it into gdc/ldc by copying the pull request and building the compiler from source, or would this get me into a world of pain?
Re: foreach(line; f.byLine) produces core.exception.InvalidMemoryOperationError@(0) in 2.067 but not 2.066
On Tuesday, 15 September 2015 at 14:19:13 UTC, Daniel Kozák wrote:
Which OS?
It's CentOS release 6.5 (Final). I tried dmd 2.068.1 and the problem disappeared. Thanks very much for the advice; I can stick to old gdc for speed until ldc catches up to 2.068. Best, Andrew
Re: Reading and converting binary file 2 bits at a time
Thanks very much for all the help, your advice worked a treat. One final question: originally I was defining the struct inside the main loop and it was using 4 bytes per field rather than 2 bits, e.g.:

    import std.bitmanip;
    import std.stdio;

    struct Crumb1 {
        mixin(bitfields!(
            ubyte, "one", 2,
            ubyte, "two", 2,
            ubyte, "three", 2,
            ubyte, "four", 2));
    }

    void main() {
        struct Crumb2 {
            mixin(bitfields!(
                ubyte, "one", 2,
                ubyte, "two", 2,
                ubyte, "three", 2,
                ubyte, "four", 2));
        }
        writeln(Crumb1.sizeof, " ", Crumb2.sizeof);
    }

outputs: 1 16

Is this correct behaviour? Andrew
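The 1 vs 16 difference looks like nested-struct context pointers rather than anything bitfields-specific -- that's my reading, not something confirmed in the thread. A struct declared inside a function is a nested struct and carries a hidden pointer to the enclosing stack frame; declaring it static (or at module scope) drops that pointer, as this sketch shows:

```d
import std.bitmanip;
import std.stdio;

// At module scope there is no enclosing frame, so the struct is just
// its one byte of packed bitfields.
struct Crumb {
    mixin(bitfields!(
        ubyte, "one",   2,
        ubyte, "two",   2,
        ubyte, "three", 2,
        ubyte, "four",  2));
}

void main() {
    writeln(Crumb.sizeof); // 1
    // Inside a function, `static` suppresses the hidden context pointer
    // that a plain nested struct would carry.
    static struct Local {
        mixin(bitfields!(
            ubyte, "a", 2, ubyte, "b", 2, ubyte, "c", 2, ubyte, "d", 2));
    }
    writeln(Local.sizeof); // 1
}
```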
Re: Reading and converting binary file 2 bits at a time
On Thursday, 27 August 2015 at 09:26:55 UTC, rumbu wrote:
On Thursday, 27 August 2015 at 09:00:02 UTC, Andrew Brown wrote:
Hi, I need to read a binary file, and then process it two bits at a time. But I'm a little stuck on the first step. So far I have:

    import std.file;
    import std.stdio;

    void main() {
        auto f = std.file.read("binaryfile");
        auto g = cast(bool[]) f;
        writeln(g);
    }

but all the values of g then are just true, could you tell me what I'm doing wrong? I've also looked at the bitmanip module, I couldn't get it to help, but is that the direction I should be looking? Thanks very much Andrew

    auto bytes = cast(ubyte[]) read("binaryfile");
    foreach (b; bytes) {
        writeln((b & 0xC0) >> 6); // bits 7, 6
        writeln((b & 0x30) >> 4); // bits 5, 4
        writeln((b & 0x0C) >> 2); // bits 3, 2
        writeln((b & 0x03));      // bits 1, 0
    }

That's lovely, thank you. One quick question: the length of the file is not a multiple of the length of ubyte, but the cast still seems to work. Do you know how it converts a truncated final section? Thanks again Andrew
Reading and converting binary file 2 bits at a time
Hi, I need to read a binary file, and then process it two bits at a time. But I'm a little stuck on the first step. So far I have:

    import std.file;
    import std.stdio;

    void main() {
        auto f = std.file.read("binaryfile");
        auto g = cast(bool[]) f;
        writeln(g);
    }

but all the values of g then are just true; could you tell me what I'm doing wrong? I've also looked at the bitmanip module; I couldn't get it to help, but is that the direction I should be looking? Thanks very much Andrew
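A short sketch of why everything prints as true -- this is my reading of the cast semantics, not something spelled out in the thread. The array cast reinterprets the raw bytes in place, so each byte becomes one bool, and nonzero bytes read as true:

```d
import std.stdio;

void main() {
    // cast(bool[]) does not convert values; it reinterprets the bytes.
    // A typical binary file has mostly nonzero bytes, so it shows up
    // as (almost) all `true`.
    ubyte[] raw = [0x00, 0x01, 0xC7, 0x00];
    bool[] flags = cast(bool[]) raw;
    writeln(flags);             // only the zero bytes read as false
    // Keeping the data as ubyte[] preserves the actual byte values:
    writeln(cast(ubyte[]) raw);
}
```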
Re: GDC fails to link with GSL and fortran code
I'm getting some idea how difficult compilers can be to maintain across distros, but I'm probably still a long way from knowing the true horror. Thank you for taking the time to help me out, and then, when my immediate problem was solved, more time to help me learn something. I see there's a lot of discussion on these forums about the state of the documentation. Between googling historical questions here and people's willingness to help on this forum, I've always found my answers. I think that's quite amazing. Andrew

On Tuesday, 17 March 2015 at 21:00:46 UTC, Johannes Pfau wrote:
On Tue, 17 Mar 2015 12:13:44, "Andrew Brown" wrote:
Thank you very much for your replies, I now have 2 solutions to my problem! Both compiling on a virtual machine running Debian wheezy, and using gcc to do the linking, produced executables that would run on the cluster. Compiling with the verbose flags for linker and compiler produced the following output:

failed gdc attempt: http://dpaste.com/0Z5V4PV
successful dmd attempt: http://dpaste.com/0S5WKJ5
successful use of gcc to link: http://dpaste.com/0YYR39V

It seems a bit of a mess, with various libraries in various places. I'll see if I can get to the bottom of it; I think it'll be a learning experience. Thanks again for the swift and useful help and guidance. Andrew

GCC's verbose output can indeed be quite confusing, but if you know what to look for it's possible to find some useful information :-) In your case the linker messages hinted at a problem with libc. And as there were only a few errors, it's likely a version compatibility problem.
If you search for libc.so in these logs you'll find this:

Failed GDC:

    attempt to open /usr/lib/../lib64/libc.so succeeded
    opened script file /usr/lib/../lib64/libc.so
    opened script file /usr/lib/../lib64/libc.so
    attempt to open /lib64/libc.so.6 succeeded
    /lib64/libc.so.6

GCC:

    attempt to open /software/lib/gcc/x86_64-redhat-linux/4.9.1/../../../../lib64/libc.so succeeded
    opened script file /software/lib/gcc/x86_64-redhat-linux/4.9.1/../../../../lib64/libc.so
    opened script file /software/lib/gcc/x86_64-redhat-linux/4.9.1/../../../../lib64/libc.so
    attempt to open /software/lib64/libc.so.6 succeeded
    /software/lib64/libc.so.6

The binary gdc searches for libraries in the 'usual' places, including /usr/lib64. Your gcc doesn't search in /usr/lib64 but in /software. You seem to have an incompatible libc in /usr/lib64 which gets picked up by gdc. This is one reason why binary compiler releases are difficult to maintain, and we usually recommend compiling gdc from source. DMD avoids this mess by simply calling the local gcc instead of ld to link. GCC unfortunately doesn't support this and forces us to always call the linker directly.
Re: GDC fails to link with GSL and fortran code
Thank you very much for your replies, I now have 2 solutions to my problem! Both compiling on a virtual machine running Debian wheezy, and using gcc to do the linking, produced executables that would run on the cluster. Compiling with the verbose flags for linker and compiler produced the following output:

failed gdc attempt: http://dpaste.com/0Z5V4PV
successful dmd attempt: http://dpaste.com/0S5WKJ5
successful use of gcc to link: http://dpaste.com/0YYR39V

It seems a bit of a mess, with various libraries in various places. I'll see if I can get to the bottom of it; I think it'll be a learning experience. Thanks again for the swift and useful help and guidance. Andrew

On Monday, 16 March 2015 at 19:22:18 UTC, Johannes Pfau wrote:
On Mon, 16 Mar 2015 16:44:45, "Andrew Brown" wrote:
Hi, I'm trying to compile code which calls C and Fortran routines from D on the Linux cluster at work. I've managed to get it to work with all 3 compilers on my laptop, but LDC and GDC fail on the cluster (though DMD works perfectly). I'm using the precompiled compiler binaries on these systems; the cluster doesn't have the prerequisites for building them myself and I don't have admin rights. For GDC the commands I run are:

    gcc -c C_code.c Fortran_code.f
    gdc D_code.d C_code.o Fortran_code.f -lblas -lgsl -lgslcblas -lm -lgfortran -o out

You could try to do the linking with the local compiler:

    gdc -c D_code.d
    gcc D_code.o C_code.o Fortran_code.o -lgphobos2 -lpthread -lblas -lgsl -lgslcblas -lm -L path/to/x86_64-gdcproject-linux-gnu/lib/

The error messages are:

    /software/lib64/libgsl.so: undefined reference to `memcpy@GLIBC_2.14'
    /software/lib64/libgfortran.so.3: undefined reference to `clock_gettime@GLIBC_2.17'
    /software/lib64/libgfortran.so.3: undefined reference to `secure_getenv@GLIBC_2.17'
    collect2: error: ld returned 1 exit status

Seems like the binary GDC toolchain somehow picks up a wrong libc. The toolchains are built with GLIBC 2.14.
But IIRC we don't ship the libc in the binary packages (for native compilers) and it should pick up the local libc. Please run gdc with the '-v' and '-Wl,--verbose' options and post a link to the full output.

I can remove the gsl messages by statically linking to libgsl.a, but this doesn't solve the gfortran issues. If anyone knows a way round these issues, I'd be very grateful. I'd also eventually like to find a way to easily share Linux binaries with people, so they can use this code without these kinds of headaches. If anyone has any advice for making this portable, that would also help me out a lot.

Usually the best option is to compile on old Linux systems. Binaries often run on newer systems but not on older ones. You could set up Debian wheezy or an older version in a VM or using docker. Or you use docker.io ;-) I personally think the docker approach is kind of overkill, but avoiding compatibility issues is one of docker's main selling points.
GDC fails to link with GSL and fortran code
Hi, I'm trying to compile code which calls C and Fortran routines from D on the Linux cluster at work. I've managed to get it to work with all 3 compilers on my laptop, but LDC and GDC fail on the cluster (though DMD works perfectly). I'm using the precompiled compiler binaries on these systems; the cluster doesn't have the prerequisites for building them myself and I don't have admin rights. For GDC the commands I run are:

    gcc -c C_code.c Fortran_code.f
    gdc D_code.d C_code.o Fortran_code.f -lblas -lgsl -lgslcblas -lm -lgfortran -o out

The error messages are:

    /software/lib64/libgsl.so: undefined reference to `memcpy@GLIBC_2.14'
    /software/lib64/libgfortran.so.3: undefined reference to `clock_gettime@GLIBC_2.17'
    /software/lib64/libgfortran.so.3: undefined reference to `secure_getenv@GLIBC_2.17'
    collect2: error: ld returned 1 exit status

I can remove the gsl messages by statically linking to libgsl.a, but this doesn't solve the gfortran issues. If anyone knows a way round these issues, I'd be very grateful. I'd also eventually like to find a way to easily share Linux binaries with people, so they can use this code without these kinds of headaches. If anyone has any advice for making this portable, that would also help me out a lot. Thanks very much Andrew
Re: std.algorithm.sort error with default predicate
Is it chain you are after to concatenate the objects and sort them together? http://dlang.org/phobos/std_range.html#.chain You'd need to cast them all to the same type. On Monday, 7 July 2014 at 20:50:06 UTC, Archibald wrote: On Monday, 7 July 2014 at 20:17:16 UTC, bearophile wrote: Archibald: Using std.algorithm.sort(a,b,c,d,e) But isn't std.algorithm.sort accepting only one argument? Bye, bearophile Sorry, it's sort(zip(a,b,c,d,e))
Re: Doing exercise from book, but I'm getting error with splitter
I'm giving up

On Monday, 16 June 2014 at 16:49:46 UTC, Andrew Brown wrote:
Sorry, comments split over two lines, this should work:

    import std.stdio, std.array, std.string; // need to import std.array

    void main() {
        ulong[string] dictionary; // the length property is ulong, not uint
        foreach (line; stdin.byLine()) {
            foreach (word; splitter(strip(line))) {
                if (word in dictionary) continue;
                auto newID = dictionary.length;
                // dictionaries need immutable keys; you can create
                // these with .idup
                dictionary[word.idup] = newID;
                writeln(newID, '\t', word);
            }
        }
    }
Re: Doing exercise from book, but I'm getting error with splitter
Sorry, comments split over two lines, this should work:

    import std.stdio, std.array, std.string; // need to import std.array

    void main() {
        ulong[string] dictionary; // the length property is ulong, not uint
        foreach (line; stdin.byLine()) {
            foreach (word; splitter(strip(line))) {
                if (word in dictionary) continue;
                auto newID = dictionary.length;
                // dictionaries need immutable keys; you can create
                // these with .idup
                dictionary[word.idup] = newID;
                writeln(newID, '\t', word);
            }
        }
    }

On Monday, 16 June 2014 at 16:46:37 UTC, Andrew Brown wrote:
I think you can find splitter in std.array. I had a few other problems compiling your code; I could get this version to work:

    import std.stdio, std.array, std.string; // need to import std.array

    void main() {
        ulong[string] dictionary; // the length property is ulong, not uint
        foreach (line; stdin.byLine()) {
            foreach (word; splitter(strip(line))) {
                if (word in dictionary) continue;
                auto newID = dictionary.length;
                dictionary[word.idup] = newID; // dictionaries need immutable keys, which you can create with .idup
                writeln(newID, '\t', word);
            }
        }
    }

Good luck! Andrew

On Monday, 16 June 2014 at 16:38:15 UTC, Sanios wrote:
Hello guys, first off I don't know if I'm writing in the correct section, but I've got a problem. I'm actually reading a D guide book and trying to do it like it is in the book. My code is:

    import std.stdio, std.string;

    void main() {
        uint[string] dictionary;
        foreach (line; stdin.byLine()) {
            foreach (word; splitter(strip(line))) {
                if (word in dictionary) continue;
                auto newID = dictionary.length;
                dictionary[word] = newID;
                writeln(newID, '\t', word);
            }
        }
    }

And I'm getting this - Error: undefined identifier splitter. It seems like std.string doesn't contain splitter.
Re: Doing exercise from book, but I'm getting error with splitter
I think you can find splitter in std.array. I had a few other problems compiling your code; I could get this version to work:

    import std.stdio, std.array, std.string; // need to import std.array

    void main() {
        ulong[string] dictionary; // the length property is ulong, not uint
        foreach (line; stdin.byLine()) {
            foreach (word; splitter(strip(line))) {
                if (word in dictionary) continue;
                auto newID = dictionary.length;
                dictionary[word.idup] = newID; // dictionaries need immutable keys, which you can create with .idup
                writeln(newID, '\t', word);
            }
        }
    }

Good luck! Andrew

On Monday, 16 June 2014 at 16:38:15 UTC, Sanios wrote:
Hello guys, first off I don't know if I'm writing in the correct section, but I've got a problem. I'm actually reading a D guide book and trying to do it like it is in the book. My code is:

    import std.stdio, std.string;

    void main() {
        uint[string] dictionary;
        foreach (line; stdin.byLine()) {
            foreach (word; splitter(strip(line))) {
                if (word in dictionary) continue;
                auto newID = dictionary.length;
                dictionary[word] = newID;
                writeln(newID, '\t', word);
            }
        }
    }

And I'm getting this - Error: undefined identifier splitter. It seems like std.string doesn't contain splitter.
Re: passing predicates to lowerBound, or alternatively, how lazy is map?
So I was hoping for a learning experience, and I got it. With a little playing around, looking at Phobos, and TDPL, I think I've figured out how lowerBound gets its predicate: it learns it from assumeSorted. So I can do this:

    order.assumeSorted!((a, b) => numbers[a] < numbers[b])
         .lowerBound(order[4])

and I'll retrieve the indices of the 4 smallest numbers. Not useful for my current purposes, but it's getting me closer to figuring out how higher-level functions and ranges work. Thanks Andrew
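To make that pattern concrete, here is a small self-contained sketch (the data and variable names are invented for illustration): assumeSorted attaches the comparison, and lowerBound then binary-searches the index array through it.

```d
import std.algorithm : makeIndex;
import std.range : assumeSorted;
import std.stdio;

void main() {
    double[] numbers = [3.5, 1.0, 4.0, 1.5, 9.0];
    auto order = new size_t[numbers.length];
    numbers.makeIndex(order); // order now sorts numbers ascending

    // assumeSorted carries the predicate, so lowerBound compares
    // indices *through* the numbers array:
    auto sorted = order.assumeSorted!((a, b) => numbers[a] < numbers[b]);
    auto below = sorted.lowerBound(order[3]); // indices of the 3 smallest
    writeln(below.length); // 3
}
```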
Re: passing predicates to lowerBound, or alternatively, how lazy is map?
On Wednesday, 11 June 2014 at 13:25:03 UTC, John Colvin wrote:
On Wednesday, 11 June 2014 at 13:20:37 UTC, Andrew Brown wrote:
You are correct. assumeSorted and lowerBound will provide better time complexity than countUntil
I'm sorry, one final question because I think I'm close to understanding. Map produces a forward range (lazily) but not a random access range? Therefore, lowerBound will move along this range until the pred is not true? This means it would be better to do:

    numbers.indexed(order).assumeSorted.lowerBound

than:

    map!(a => numbers[a])(order).assumeSorted.lowerBound

as the lowerBound will be faster on a random access range as produced by indexed?
map preserves the random access capabilities of its source. An array is random access, therefore map applied to an array is also random access. There isn't any practical difference between indices.map!((i) => src[i])() and src.indexed(indices) that I know of.
That's great, thank you very much for taking the time to answer.
Re: passing predicates to lowerBound, or alternatively, how lazy is map?
You are correct. assumeSorted and lowerBound will provide better time complexity than countUntil I'm sorry, one final question because I think I'm close to understanding. Map produces a forward range (lazily) but not a random access range? Therefore, lowerBound will move along this range until the pred is not true? This means it would be better to do: numbers.indexed(order).assumeSorted.lowerBound than: map(a => numbers[a])(order).assumeSorted.lowerBound as the lowerBound will be faster on a random access range as produced by indexed?
Re: passing predicates to lowerBound, or alternatively, how lazy is map?
map is fully lazy. However, if you've already got the sorted indices in `order`, I would do this: auto numLessThanN = numbers.indexed(order).countUntil!((x) => x >= N)(); That indexed command is perfect though, does the trick, thank you very much.
Re: passing predicates to lowerBound, or alternatively, how lazy is map?
My question about this is how lazy is map? Will it work on every value of order and then pass it to lowerBound, or could it work to evaluate only those values asked by lowerBound? I guess probably not, but could a function be composed that worked in this way? Thank you very much Andrew map is fully lazy. However, if you've already got the sorted indices in `order`, I would do this: auto numLessThanN = numbers.indexed(order).countUntil!((x) => x >= N)(); Thanks for the reply, I'm going to have a lot of numbers though. I guess compared to the time it will take me to sort them, it makes no difference, but is it right that countUntil will take linear time? If I can figure out lowerBound, then I have my answer in log(n) time? Best Andrew
passing predicates to lowerBound, or alternatively, how lazy is map?
Hi there, The problem this question is about is now solved, by writing my own binary search algorithm, but I'd like to ask it anyway as I think I could learn a lot from the answers. The problem was: given an array of numbers, double[] numbers, and an ordering from makeIndex, size_t[] order, I want to count how many numbers are less than a number N. The obvious way would be to use lowerBound from std.range, but I can't work out how to pass it a predicate like "numbers[a] < b". Could someone explain the template:

    auto lowerBound(SearchPolicy sp = SearchPolicy.binarySearch, V)(V value)
        if (isTwoWayCompatible!(predFun, ElementType!Range, V));

The alternative way I thought to do it was to combine map with lowerBound, i.e.:

    map!(a => numbers[a])(order).assumeSorted
                                .lowerBound(N)
                                .length

My question about this is: how lazy is map? Will it work on every value of order and then pass it to lowerBound, or could it work to evaluate only those values asked for by lowerBound? I guess probably not, but could a function be composed that worked in this way? Thank you very much Andrew
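As the replies later in the thread note, map is lazy and preserves random access over an array, so the combination does work in O(log n). A runnable sketch of the approach described above (data invented for illustration):

```d
import std.algorithm : makeIndex, map;
import std.range : assumeSorted;
import std.stdio;

void main() {
    double[] numbers = [3.5, 1.0, 4.0, 1.5, 9.0];
    auto order = new size_t[numbers.length];
    numbers.makeIndex(order); // indices that sort numbers ascending
    double N = 4.0;

    // map is lazy and random-access over the array `order`, so
    // lowerBound binary-searches it, touching only O(log n) elements:
    auto count = order.map!(a => numbers[a])
                      .assumeSorted
                      .lowerBound(N)
                      .length;
    writeln(count); // 3 values are less than N
}
```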
Re: Different random shuffles generated when compiled with gdc than with dmd
Having read more of the debate, I think coverage is more important than reproducibility. From my point of view, I'm not sure if there's much point in giving reproducible wrong answers.
Re: Different random shuffles generated when compiled with gdc than with dmd
Thank you for hunting down the difference, in my case it's not a deal breaking problem. I can just specify the compiler and language version, then the results become reproducible. And I'm sure I'll appreciate the performance boost! On Sunday, 1 June 2014 at 12:11:22 UTC, Ivan Kazmenko wrote: On Saturday, 31 May 2014 at 21:22:48 UTC, Joseph Rushton Wakeling via Digitalmars-d-learn wrote: On 31/05/14 22:37, Joseph Rushton Wakeling via Digitalmars-d-learn wrote: On 30/05/14 22:45, monarch_dodra via Digitalmars-d-learn wrote: Didn't you make changes to how and when the global PRNG is popped and accessed in randomShuffle? I figured it *could* be an explanation. I think it's more likely that the culprit is either your set of patches to the Mersenne Twister, or the patches made to uniform() (which is called by partialShuffle). I'll look more deeply into this. It's due to the the updated uniform() provided in this pull request: https://github.com/D-Programming-Language/phobos/commit/fc48d56284f19bf171780554b63b4ae83808b894 I second the thought that reproducibility across different versions is an important feature of any random generation library. Sadly, I didn't use a language yet which supported such a flavor of reproducibility for a significant period of time in its default random library, so I have to use my own randomness routines when it matters. I've reported my concern [1] at the moment of breakage, but apparently it didn't convince people. Perhaps I should make a more significant effort next time (like a pull request) for the things that matter to me. Well, now I know it does matter for others, at least. In short, if uniform() has to be tweaked, the sooner it happens, the better. Alternatively, the library design could allow different uniform() implementations to be plugged in, and provide legacy implementations along with the current (default) one. In that case, all one has to do to reproduce the old behavior is to plug the appropriate one in. 
[1] http://forum.dlang.org/thread/vgmdoyyqhcqurpmob...@forum.dlang.org#post-gjuprkxzmcbdixtbucea:40forum.dlang.org
Re: Different random shuffles generated when compiled with gdc than with dmd
Looking at old data, it is the dmd version that's changed, so I think this is the likely reason. Andrew On Friday, 30 May 2014 at 20:45:23 UTC, monarch_dodra wrote: On Friday, 30 May 2014 at 18:41:55 UTC, Joseph Rushton Wakeling via Digitalmars-d-learn wrote: On 30/05/14 18:13, monarch_dodra via Digitalmars-d-learn wrote: Are you sure you are compiling with the same version of dmd and gdc? Fixes were made to the rand.d library in the latest release, which could explain the difference you are observing. Which fixes are you thinking of here ... ? I don't recall anything that ought to alter the behaviour of the standard random number generator. Didn't you make changes to how and when the global PRNG is popped and accessed in randomShuffle? I figured it *could* be an explanation.
Re: Different random shuffles generated when compiled with gdc than with dmd
GDC version 4.8.2, I guess that's my problem. This is what happens when you let Ubuntu look after your packages. Thank you very much! Andrew

On Friday, 30 May 2014 at 16:13:49 UTC, monarch_dodra wrote:
On Friday, 30 May 2014 at 13:39:18 UTC, Andrew Brown wrote:
Hi there, The following code:

    void main() {
        import std.array : array;
        import std.stdio : writeln;
        import std.random : rndGen, randomShuffle;
        import std.range : iota;

        rndGen.seed(12);
        int[] temp = iota(10).array;
        randomShuffle(temp);
        writeln(temp);
    }

writes [1, 8, 4, 2, 0, 7, 5, 6, 9, 3] if it's compiled with dmd, but [1, 7, 4, 6, 2, 9, 5, 0, 3, 8] with gdc. ... Andrew
Are you sure you are compiling with the same version of dmd and gdc? Fixes were made to the rand.d library in the latest release, which could explain the difference you are observing.
Re: Different random shuffles generated when compiled with gdc than with dmd
I'd like it to be predictable given the seed, right now it's predictable given the seed and the compiler. Is this a bug, shouldn't the random number process be completely defined in the language? I'm not trying to misuse it like the PHP crowd :) It's for a piece of scientific software: I'm hoping people will test the significance of their results with a stochastic process, and for reproducibility we need to allow other people to recreate their analysis exactly (hence the seed). For my purposes I don't need truly random, I just need it not to repeat too quickly. Thanks Andrew On Friday, 30 May 2014 at 14:09:20 UTC, Wanderer wrote: I must note if the sequence is predictable, it's not random anymore, it's pseudo-random at most. Also, if anyone interested, PHP had such way to generate predictable sequences in the past, but after it was horribly misused by various people for crypto keys/password generation purposes, they have forbidden it completely, so only non-predictable sequences in PHP from now on. Maybe, just maybe, it makes sense not to stand on the same rack twice.
Different random shuffles generated when compiled with gdc than with dmd
Hi there, The following code:

    void main() {
        import std.array : array;
        import std.stdio : writeln;
        import std.random : rndGen, randomShuffle;
        import std.range : iota;

        rndGen.seed(12);
        int[] temp = iota(10).array;
        randomShuffle(temp);
        writeln(temp);
    }

writes [1, 8, 4, 2, 0, 7, 5, 6, 9, 3] if it's compiled with dmd, but [1, 7, 4, 6, 2, 9, 5, 0, 3, 8] with gdc. I'd like to allow the users to specify an integer if they wish to give a deterministic set of permutations. This won't matter if I only distribute a binary created with a given compiler (aside from scaring me into thinking I need to know more about random number generation), but it's an annoyance when I'm trying to check my code is doing what it should. Is there a better way to specify an integer seed so both compilers will agree on the permutations they generate? Thanks very much Andrew
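One partial-mitigation sketch (my suggestion, not from the thread): pass an explicitly seeded local engine to randomShuffle instead of seeding the global rndGen. This keeps the sequence independent of anything else in the program that draws from the global generator, though, as the thread found, it still cannot protect against changes to the library's algorithms between compiler or Phobos versions.

```d
import std.array : array;
import std.random : Mt19937, randomShuffle;
import std.range : iota;
import std.stdio : writeln;

void main() {
    // A locally seeded Mersenne Twister: deterministic for this seed,
    // and not shared with other consumers of random numbers.
    auto rng = Mt19937(12);
    int[] temp = iota(10).array;
    randomShuffle(temp, rng);
    writeln(temp);
}
```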
Re: Write double to text file, then read it back later without losing precision
I'm sure it will be, thank you very much.

On Monday, 19 May 2014 at 15:57:53 UTC, bearophile wrote:
Andrew Brown:
I would like to write a double to a text file as hexadecimal and then read it back in without losing information.
Is this good enough for you?

    void main() {
        import std.stdio, std.math;

        auto fout = File("ouput.txt", "w");
        fout.writef("%a", PI);
        fout.close;

        auto fin = File("ouput.txt");
        real r;
        fin.readf("%f", &r);
        fin.close;
        assert(PI == r);
    }

Bye, bearophile
Write double to text file, then read it back later without losing precision
I would like to write a double to a text file as hexadecimal and then read it back in without losing information. Could someone tell me whether this is possible? It would happen in the same program, would I have to worry about different architectures? Thanks very much Andrew
Re: Seg fault when calling C code
I guess my confusion came about because in the page about interfacing with C, there's a static array example where parameters are given in terms D understands:

    extern (C) {
        void foo(ref int[3] a); // D prototype
    }

I guess D has no problem translating that into a simple pointer that C can deal with. I assumed the same would be true of dynamic arrays, but maybe the leap is too far? And I've finally got round to seeing the table above; it clearly says that a C array is equivalent to a D pointer. Sorry for being a time waster.
Re: Seg fault when calling C code
On Friday, 16 May 2014 at 14:52:17 UTC, Marc Schütz wrote:
On Friday, 16 May 2014 at 11:42:35 UTC, Kagamin wrote:
For example, windows headers do use C++ &-references in function signatures and msdn provides code examples using that convention, the equivalent in D is ref.
But that's extern(C++), not extern(C)...
I guess my confusion came about because in the page about interfacing with C, there's a static array example where parameters are given in terms D understands:

    extern (C) {
        void foo(ref int[3] a); // D prototype
    }

I guess D has no problem translating that into a simple pointer that C can deal with. I assumed the same would be true of dynamic arrays, but maybe the leap is too far?
Re: Seg fault when calling C code
That worked a treat! Thank you very much!

On Thursday, 15 May 2014 at 21:11:54 UTC, Ali Çehreli wrote:
On 05/15/2014 01:55 PM, Andrew Brown wrote:

> extern(C) {
>     void regress(int nInd, int nCov, ref double[] x, ref double[] y,
>                  ref double[] rOut);
> }

I don't think that should even be allowed. C functions should not know or be compatible with 'ref' D parameters. Define the arguments as simple 'double *' or 'const double *'. That makes sense because your C function is defined that way anyway.

> void main() {
>     int nInd = 5;
>     int nCov = 3;
>     double[] x = new double[nCov * nInd];
>     // ...
>     regress(5, 3, x, y, residuals);

You want to pass the address of the first array member: x.ptr (&(x[0]) would work as well). (Same for y.) Otherwise, what ends up happening is that the address of the x and y slices are passed and C has no idea of what that is. Ali
Seg fault when calling C code
I'm trying to calculate residuals after fitting linear regression, and I've got some code in C using the GSL which should do it. Everything works fine if I use static arrays (below, defining X[15], y[5] etc.). Trouble is, I won't know the number of individuals or covariates until runtime, so I'm stuck using dynamic arrays. But passing them to C causes a seg fault. I'm very sure I'm doing something (lots of things) stupid. If someone could point out what, I'd be very grateful. Thanks very much. Andrew

Here's a toy D example:

    import std.stdio;

    extern(C) {
        void regress(int nInd, int nCov, ref double[] x, ref double[] y,
                     ref double[] rOut);
    }

    void main() {
        int nInd = 5;
        int nCov = 3;
        double[] x = new double[nCov * nInd];
        x = [1, 4, 3, 1, 4, 3, 1, 4, 2, 1, 6, 7, 1, 3, 2];
        double[] y = new double[nInd];
        y = [5, 3, 4, 1, 5];
        double[] residuals = new double[nInd];
        regress(5, 3, x, y, residuals);
        writeln(residuals);
    }

and the C code it calls:

    #include <gsl/gsl_multifit.h>

    void regress(int nInd, int nCov, double *x, double *y, double *rOut) {
        int i, j;
        gsl_matrix *xMat, *cov;
        gsl_vector *yVec, *c, *r;
        double chisq;

        xMat = gsl_matrix_alloc(nInd, nCov);
        yVec = gsl_vector_alloc(nInd);
        r = gsl_vector_alloc(nInd);
        c = gsl_vector_alloc(nCov);
        cov = gsl_matrix_alloc(nCov, nCov);

        for (i = 0; i < nInd; i++) {
            gsl_vector_set(yVec, i, *(y + i));
            for (j = 0; j < nCov; j++)
                gsl_matrix_set(xMat, i, j, *(x + i * nCov + j));
        }

        gsl_multifit_linear_workspace *work =
            gsl_multifit_linear_alloc(nInd, nCov);
        gsl_multifit_linear(xMat, yVec, c, cov, &chisq, work);
        gsl_multifit_linear_residuals(xMat, yVec, c, r);
        gsl_multifit_linear_free(work);

        for (i = 0; i < nInd; i++)
            rOut[i] = gsl_vector_get(r, i);
    }
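For completeness, a minimal self-contained sketch of the fix described in the reply above: declare the C parameters as plain pointers and pass .ptr, not the slice. The fillDiffs function is a hypothetical stand-in defined in D with C linkage (so the example links on its own), not the GSL regression routine.

```d
import std.stdio;

// A C `double *` parameter maps to a plain pointer in D, never to
// `ref double[]`. Defining the stand-in with extern(C) in the same file
// keeps the sketch self-contained while exercising the C calling
// convention.
extern(C) void fillDiffs(int n, const(double)* y, double* rOut) {
    foreach (i; 0 .. n)
        rOut[i] = y[i] - y[0]; // hypothetical stand-in computation
}

void main() {
    double[] y = [5, 3, 4, 1, 5];
    auto residuals = new double[y.length];
    // Pass .ptr (the address of the first element), not the slice itself:
    fillDiffs(cast(int) y.length, y.ptr, residuals.ptr);
    writeln(residuals); // [0, -2, -1, -4, 0]
}
```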