As a data scientist, I feel that Nim has tremendous potential for data science, 
machine learning and deep learning.

In particular, it's currently non-trivial to bridge the gap between deep 
learning research (mostly Python, sometimes Lua) and production (C for 
embedded devices, JavaScript for web services, ...).

For the past 3 months I've been working on Arraymancer, a tensor library that 
currently provides a subset of NumPy's functionality with a fast and ergonomic 
API. It features (a short usage sketch follows the list):

  * Creating tensors from nested sequences and arrays (even 10 levels of nesting)
  * Pretty printing of up to 4D tensors (I would need help to generalize this)
  * Slicing with Nim syntax
  * Slices can be mutated
  * Reshaping, broadcasting, and concatenating tensors, as well as permuting 
their dimensions
  * Universal functions
  * Accelerated matrix and vector operations using BLAS
  * Iterators (on values, coordinates, axis)
  * Aggregates and statistics (sum, mean, and a generic higher-order 
aggregation function)
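
To give a feel for the API, here is a minimal usage sketch assembled from the 
README; the exact proc names (`toTensor`, `reshape`, `sum`, `mean`) and the `_` 
slicing placeholder reflect my reading of the current README and may change as 
the library evolves:

```nim
import arraymancer

# Build a 2x2 tensor from nested sequences
let a = @[@[1.0, 2.0],
          @[3.0, 4.0]].toTensor

# Slicing with Nim syntax: `_` spans a whole dimension
echo a[0, _]          # first row
echo a[_, 1]          # second column

# Slices of a `var` tensor can be mutated in place
var b = @[@[1.0, 2.0],
          @[3.0, 4.0]].toTensor
b[0, 0] = 10.0

# Reshaping and BLAS-accelerated matrix multiplication
echo a.reshape(4, 1)
echo a * a            # 2x2 matrix product

# Aggregates and statistics
echo a.sum            # 10.0
echo a.mean           # 2.5
```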



Next steps (in no particular order) include:

  * Adding CUDA support using Andrea's nimcuda package
  * Adding neural network / deep learning functions
  * Improving the documentation and publishing the library on Nimble



The library: 
[https://github.com/mratsim/Arraymancer](https://github.com/mratsim/Arraymancer)

I welcome your feedback and your expected use cases. I would especially love to 
hear about the pain points people have with deep learning and with putting deep 
learning models into production.
