On Tuesday, 16 February 2016 at 15:03:36 UTC, Jakob Jenkov wrote:

I cannot speak on behalf of the D community. In my opinion, it isn't D itself that needs a big data strategy; it's the users of D who need one.

I am originally a Java developer. Java developers create all kinds of crazy tools all the time. Lots fail, but some survive and grow big, like Spark.

D developers need to do the same. Just jump into it. Make it your hobby project in D, then see where it takes you.

Good attitude. Nevertheless, I think there is a much larger population of people who would want to use D for everyday data analysis if packages could replicate much of what people currently do in R and Python.

If the OP really wants to contribute to big data projects in D, he might want to start with things that will more easily allow D to interact with existing libraries.

For instance, Google's MR4C allows native C code to be run inside a Hadoop instance. Adding support for D might be doable:

http://google-opensource.blogspot.com/2015/02/mapreduce-for-c-run-native-code-in.html

There is likely value in writing bindings to machine learning libraries. I did a quick search of machine learning libraries, and much of what I found is written in C++. I don't have much expertise in writing bindings to C++ libraries myself.
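For what it's worth, D can declare C++ functions directly via its extern(C++) linkage, so a binding can be fairly thin. A minimal sketch, assuming a hypothetical C++ library `mlkit` exposing a `predict` function (the namespace, function name, and signature here are invented purely for illustration):

```d
// Sketch of the D side of a C++ binding. Everything named "mlkit" is
// hypothetical; a real binding would mirror the actual library's headers.

extern(C++, mlkit)  // maps to the C++ namespace `mlkit`
{
    // Matches a C++ declaration such as:
    //   double predict(const double* features, size_t n);
    double predict(const(double)* features, size_t n);
}

void main()
{
    double[3] features = [0.1, 0.2, 0.3];
    // Requires compiling and linking against the C++ library, e.g.:
    //   dmd app.d -L-lmlkit -L-lstdc++
    double score = predict(features.ptr, features.length);
}
```

The harder parts in practice are C++ classes, templates, and exceptions, which extern(C++) only partially covers; many existing D bindings sidestep this by wrapping the C++ library behind a plain C API first.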
