On Friday, 14 July 2017 at 20:52:54 UTC, Jonathan M Davis wrote:
And of course, this whole issue is incredibly confusing to anyone
coming to D - especially those who aren't well-versed in Unicode.
Right on. Thanks for your very clear summary (the whole thing,
not just the last line!). Much ap
On Thursday, 10 March 2016 at 10:58:41 UTC, thedeemon wrote:
On Wednesday, 9 March 2016 at 15:14:02 UTC, Gerald Jansen wrote:
enum n = 100_000_000; // some big number
auto a = new ulong[](n);
auto b = new char[8][](n);
struct S { ulong x; char[8] y; }
auto c = new S[](n);
will the l
I've studied [1] and [2] but don't understand everything there.
Hence these dumb questions:
Given
enum n = 100_000_000; // some big number
auto a = new ulong[](n);
auto b = new char[8][](n);
struct S { ulong x; char[8] y; }
auto c = new S[](n);
will the large memory blocks allocated
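Assuming the truncated question is whether the GC will scan these large blocks for pointers, here is a minimal sketch (not from the post): GC.query reports the attributes of the block holding an array, and NO_SCAN means the block is skipped during marking.

import core.memory : GC;
import std.stdio : writeln;

void main()
{
    enum n = 100_000;                  // smaller than the post's n, just for the check
    auto a = new ulong[](n);           // ulong contains no pointers
    auto info = GC.query(a.ptr);       // block info for the array's backing memory
    writeln("block size: ", info.size);
    writeln("NO_SCAN:    ", (info.attr & GC.BlkAttr.NO_SCAN) != 0);
}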
On Tuesday, 26 January 2016 at 22:36:31 UTC, H. S. Teoh wrote:
...
So the moral of the story is: avoid large numbers of small
allocations. If you have to do it, consider consolidating your
allocations into a series of allocations of large(ish) buffers
instead, and taking slices of the buffers.
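A minimal sketch (mine, not from the thread) of that advice: one large buffer, sliced into fixed-size views, instead of one allocation per record.

import std.stdio : writeln;

void main()
{
    enum recordSize = 64;
    enum records = 100_000;

    // one big allocation instead of 100_000 small ones
    auto pool = new ubyte[](recordSize * records);

    ubyte[][] views;
    views.reserve(records);
    foreach (i; 0 .. records)
        views ~= pool[i * recordSize .. (i + 1) * recordSize]; // slicing allocates nothing

    writeln(views.length, " records backed by a single GC block");
}

Slices keep the whole pool alive, so this trades a little peak memory for far fewer GC allocations and collections.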
On Tuesday, 26 January 2016 at 20:54:34 UTC, Chris Wright wrote:
On Tue, 26 Jan 2016 18:16:28 +0000, Gerald Jansen wrote:
On Thursday, 21 January 2016 at 21:24:49 UTC, H. S. Teoh wrote:
While this is no fancy range-based code, and one might say
it's more hackish and C-like than idiomatic D, t
On Thursday, 21 January 2016 at 21:24:49 UTC, H. S. Teoh wrote:
While this is no fancy range-based code, and one might say it's
more hackish and C-like than idiomatic D, the problem is that
current D compilers can't quite optimize range-based code to
this extent yet. Perhaps in the future opt
On Thursday, 21 January 2016 at 09:39:30 UTC, data pulverizer
wrote:
I have been reading large text files with D's csv file reader
and have found it slow compared to R's read.table function
This great blog post has an optimized FastReader for CSV files:
http://tech.adroll.com/blog/data/2014/11
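For comparison, a minimal sketch (not the FastReader from the post; file name and layout are assumed) of the byLine-plus-splitter style that usually beats std.csv on plain numeric files:

import std.stdio : File, writeln;
import std.algorithm : splitter;
import std.conv : to;

void main()
{
    double total = 0;
    auto f = File("data.csv");                 // placeholder file name
    foreach (line; f.byLine)                   // byLine reuses its buffer: no per-line allocation
        foreach (field; line.splitter(','))
            total += field.to!double;
    writeln("sum of all fields: ", total);
}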
On Saturday, 3 October 2015 at 22:21:08 UTC, Gerald Jansen wrote:
My simple test program is here:
http://dpaste.dzfl.pl/329023e651c4.
An alternative link to the program (that doesn't try to run it)
http://codepad.org/FbHJJqYM.
In this great article [1] there is a brief section on buffered
output to files. Also in this thread [2] I was advised to use
explicitly buffered output for maximum performance. This left me
perplexed: surely any high-level routines already use buffered
IO, no?
[1]
http://nomad.so/2015/09/wor
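A minimal sketch (mine, with placeholder names) of the explicit buffering the article and thread recommend: accumulate output in memory and write it with one call instead of many small writes.

import std.stdio : File;
import std.array : appender;
import std.format : formattedWrite;

void main()
{
    auto buf = appender!(char[])();
    foreach (i; 0 .. 1_000_000)
        buf.formattedWrite("%d\t%d\n", i, i * i);  // accumulate formatted text in memory

    auto f = File("out.txt", "w");                 // placeholder file name
    f.rawWrite(buf.data);                          // a single large write
}

A single rawWrite avoids the per-call locking and formatting overhead that many small writef calls incur, even though C stdio does buffer underneath.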
On Sunday, 23 August 2015 at 16:00:19 UTC, Tony wrote:
/usr/bin/ld: cannot find -lcurl
Just the other day I had a similar problem (compiling vibenews,
ld complained of missing -levent and -lssl), which I managed to
solve simply by installing the development versions of the
libraries (i.e. li
On Thursday, 14 May 2015 at 17:12:07 UTC, John Colvin wrote:
Would it be OK if I showed some parts of this code as examples
in my DConf talk in 2 weeks?
Sure!!!
John Colvin's improvements to my D program seem to have resolved
the problem.
(http://forum.dlang.org/post/ydgmzhlspvvvrbeem...@forum.dlang.org
and http://dpaste.dzfl.pl/114d5a6086b7).
I have rerun my tests and now the picture is a bit different (see
tables below).
In the middle table I have
On Wednesday, 13 May 2015 at 12:16:19 UTC, weaselcat wrote:
On Wednesday, 13 May 2015 at 09:01:05 UTC, Gerald Jansen wrote:
On Wednesday, 13 May 2015 at 03:19:17 UTC, thedeemon wrote:
In case of Python's parallel.Pool() separate processes do the
work without any synchronization issues. In case
On Wednesday, 13 May 2015 at 13:40:33 UTC, John Colvin wrote:
On Wednesday, 13 May 2015 at 11:33:55 UTC, John Colvin wrote:
On Tuesday, 12 May 2015 at 18:14:56 UTC, Gerald Jansen wrote:
On Tuesday, 12 May 2015 at 16:35:23 UTC, Rikki Cattermole
wrote:
On 13/05/2015 4:20 a.m., Gerald Jansen wrot
On Wednesday, 13 May 2015 at 14:11:25 UTC, Gerald Jansen wrote:
On Wednesday, 13 May 2015 at 11:33:55 UTC, John Colvin wrote:
On Tuesday, 12 May 2015 at 18:14:56 UTC, Gerald Jansen wrote:
On Tuesday, 12 May 2015 at 16:35:23 UTC, Rikki Cattermole
wrote:
On 13/05/2015 4:20 a.m., Gerald Jansen wr
On Wednesday, 13 May 2015 at 11:33:55 UTC, John Colvin wrote:
On Tuesday, 12 May 2015 at 18:14:56 UTC, Gerald Jansen wrote:
On Tuesday, 12 May 2015 at 16:35:23 UTC, Rikki Cattermole
wrote:
On 13/05/2015 4:20 a.m., Gerald Jansen wrote:
At the risk of great embarrassment ... here's my program:
ht
On Wednesday, 13 May 2015 at 03:19:17 UTC, thedeemon wrote:
In case of Python's parallel.Pool() separate processes do the
work without any synchronization issues. In case of D's
std.parallelism it's just threads inside one process and they
do fight for some locks, thus this result.
Okay, so t
On Tuesday, 12 May 2015 at 20:58:16 UTC, Vladimir Panteleev wrote:
On Tuesday, 12 May 2015 at 18:14:56 UTC, Gerald Jansen wrote:
On Tuesday, 12 May 2015 at 16:35:23 UTC, Rikki Cattermole
wrote:
On 13/05/2015 4:20 a.m., Gerald Jansen wrote:
At the risk of great embarrassment ... here's my progra
On Tuesday, 12 May 2015 at 17:45:54 UTC, thedeemon wrote:
On Tuesday, 12 May 2015 at 17:02:19 UTC, Gerald Jansen wrote:
About 3.5 million lines read by main(), 0.5 to 2 million lines
read and 3.5 million lines written by runTraits (aka runJob).
Each GC allocation in D is a locking operation (
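A minimal sketch (mine, with a placeholder file name) of keeping those locking allocations out of the hot loop: byLine reuses one internal buffer, and .idup is applied only to the few lines that must outlive the loop.

import std.stdio : File;
import std.algorithm : startsWith;

void main()
{
    string[] kept;
    auto f = File("input.txt");          // placeholder file name
    foreach (line; f.byLine)             // buffer is reused: no allocation per line
    {
        if (line.startsWith("#"))        // retain only header/comment lines, say
            kept ~= line.idup;           // allocate (and take the GC lock) only here
    }
}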
On Tuesday, 12 May 2015 at 19:14:23 UTC, Laeeth Isharc wrote:
But if you disable the logging does that change things?
There is only a tiny bit of logging happening.
And are you using optimization on gdc?
gdc -Ofast -march=native -frelease
Also try byLineFast, e.g.
http://forum.dlang.org/th
On Tuesday, 12 May 2015 at 16:35:23 UTC, Rikki Cattermole wrote:
On 13/05/2015 4:20 a.m., Gerald Jansen wrote:
At the risk of great embarrassment ... here's my program:
http://dekoppel.eu/tmp/pedupg.d
Would it be possible to give us some example data?
I might give it a go to try rewriting it to
On Tuesday, 12 May 2015 at 16:46:42 UTC, thedeemon wrote:
On Tuesday, 12 May 2015 at 14:59:38 UTC, Gerald Jansen wrote:
The output of /usr/bin/time is as follows:
Lang  Jobs   User  System  Elapsed  %CPU
Py       2  79.24    2.16  0:48.90   166
D        2  19.41   10.14  0:17.96   164
Py      30
At the risk of great embarrassment ... here's my program:
http://dekoppel.eu/tmp/pedupg.d
As per Rick's first suggestion (thanks) I added
import core.memory : GC;

void main()
{
    GC.disable;
    GC.reserve(1024 * 1024 * 1024);   // pre-allocate 1 GiB so the GC starts with room
    // ... rest of the program
}
... to no avail.
thanks for all the help so far.
Gerald
ps. I am using G
Thanks Ali. I have tried putting GC.disable() in both main and
runJob, but the timing behaviour did not change. The python
version works in a similar fashion and also has automatic GC. I
tend to think that is not the (biggest) problem.
The program is long and newbie-ugly ... but I could put it
I am a data analyst trying to learn enough D to decide whether to
use D for a new project rather than Python + Fortran. I have
recoded a non-trivial Python program to do some simple parallel
data processing (using the map function in Python's
multiprocessing module and parallel foreach in D).
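For the D side, a minimal sketch (mine, with invented data) of the parallel foreach from std.parallelism that the thread goes on to tune:

import std.parallelism : parallel;
import std.stdio : writeln;

void main()
{
    auto jobs = [1, 2, 3, 4, 5, 6, 7, 8];
    auto results = new long[](jobs.length);

    // iterations are distributed over the default task pool's worker threads
    foreach (i, job; parallel(jobs))
        results[i] = cast(long) job * job;

    writeln(results);
}

Unlike Python's multiprocessing.Pool, these workers are threads in a single process, so they share one GC, which is the source of the contention discussed elsewhere in the thread.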