Ok, so IIUC the audience is academic, but consists of people interested in using D as a means to an end rather than computer scientists? I use D for bioinformatics, which IIUC has similar requirements to econometrics. From my point of view:

I'd emphasize the following:

Native efficiency. (Important for large datasets and Monte Carlo simulations.)

Garbage collection. (Important because it makes it much easier to write non-trivial data structures that don't leak memory, and statistical analyses are a lot easier if the data is structured well.)
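
To give a feel for what that buys you, here's a minimal toy sketch of my own (not from any library): a binary search tree built out of plain GC-allocated class nodes, with no destructor and no free() anywhere.

class Node
{
    double key;
    Node left, right;
    this(double key) { this.key = key; }
}

Node insert(Node root, double key)
{
    // Grow the tree by allocating nodes as needed; the GC owns them all.
    if (root is null) return new Node(key);
    if (key < root.key) root.left  = insert(root.left, key);
    else                root.right = insert(root.right, key);
    return root;
}

void main()
{
    Node root;
    foreach (x; [3.0, 1.0, 4.0, 1.5, 9.0])
        root = insert(root, x);
    // No cleanup code: once the tree becomes unreachable, the GC reclaims it.
}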

Ranges/std.range/builtin arrays and associative arrays. (Again, these make data handling a pleasure.)
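
A minimal sketch of what I mean, with made-up data: a lazy range pipeline over a builtin array, plus an associative array doing one-pass category counts.

import std.algorithm : filter, map;
import std.array : array;
import std.stdio : writeln;

void main()
{
    double[] income = [21_000, 54_000, 33_000, 87_000, 12_500];

    // Lazy pipeline over a builtin array; nothing is copied until
    // .array materialises the result.
    auto inThousands = income
        .filter!(x => x > 20_000)
        .map!(x => x / 1000.0);
    writeln(inThousands.array);   // [21, 54, 33, 87]

    // Builtin associative array: counts per category in one pass.
    int[string] counts;
    foreach (region; ["north", "south", "north", "west"])
        ++counts[region];
    writeln(counts);
}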

Templates. (They make it easier to write algorithms that aren't overly specialized to the data structures they operate on. The same can be done with OO containers, but that requires more boilerplate and compromises efficiency.)
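
For example, here's a toy sketch of a generic weighted sum (made up for illustration, but representative): one template that works on a builtin array, a lazily generated range, or any other iterable, with no virtual dispatch and no per-container overloads.

import std.range : iota, isInputRange;
import std.stdio : writeln;

double weightedSum(R)(R values, double weight) if (isInputRange!R)
{
    double total = 0;
    foreach (v; values)
        total += weight * v;
    return total;
}

void main()
{
    double[] arr = [1.0, 2.0, 3.0];
    writeln(weightedSum(arr, 0.5));          // builtin array
    writeln(weightedSum(iota(1, 4), 0.5));   // lazy range, no allocation
}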

Disclaimer: The next two items (std.parallelism and dstats) are things I'm the primary designer and implementer of. I intentionally put them last so it doesn't look like a shameless plug.

std.parallelism. (Important because you can easily parallelize your simulation, etc.)
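
For instance, a toy Monte Carlo pi estimate with the work split across cores via taskPool.parallel (any embarrassingly parallel simulation follows the same pattern):

import std.parallelism : taskPool;
import std.random : Random, uniform, unpredictableSeed;
import std.stdio : writeln;

void main()
{
    enum nPerChunk = 100_000;
    auto hitsPerChunk = new size_t[](64);

    // Each chunk of iterations runs on a worker thread with its own RNG,
    // so no mutable state is shared between workers.
    foreach (ref hits; taskPool.parallel(hitsPerChunk))
    {
        auto rng = Random(unpredictableSeed);
        foreach (_; 0 .. nPerChunk)
        {
            immutable x = uniform(0.0, 1.0, rng);
            immutable y = uniform(0.0, 1.0, rng);
            if (x * x + y * y <= 1.0) ++hits;
        }
    }

    size_t total = 0;
    foreach (h; hitsPerChunk) total += h;
    writeln(4.0 * total / (hitsPerChunk.length * nPerChunk));
}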

dstats (https://github.com/dsimcha/dstats). Important because a lot of statistical analysis code is already implemented for you. It's admittedly very basic compared to e.g. R or Matlab, but in many cases it's also better integrated and more efficient. I'd say it has the 15% of the functionality that covers ~70% of use cases. I welcome contributors to add more stuff to it. I imagine economists would be interested in time series, which is currently a big area of missing functionality.
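
Roughly how it looks in use, writing from memory (check the repo for the exact module and function names; here I'm assuming mean/median/stdev in dstats.summary and pearsonCor in dstats.cor):

import dstats.summary : mean, median, stdev;
import dstats.cor : pearsonCor;
import std.stdio : writeln;

void main()
{
    // Made-up series purely for illustration.
    double[] gdpGrowth = [2.1, 1.8, 3.0, 0.4, 2.6];
    double[] inflation = [1.5, 1.7, 2.2, 0.9, 2.0];

    writeln("mean:   ", mean(gdpGrowth));
    writeln("median: ", median(gdpGrowth));
    writeln("stdev:  ", stdev(gdpGrowth));
    writeln("corr:   ", pearsonCor(gdpGrowth, inflation));
}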
