I will reply for Neo and then let mratsim speak for ArrayMancer. Neo was born first (at least, its predecessor linalg was). I tried to give it an interface more familiar to people coming from linear algebra (everything there is expressed in terms of vectors and matrices), as opposed to the popular Numpy approach where everything is an n-dimensional tensor. Not that tensors are foreign to mathematics - but there are many kinds of tensors in mathematics, and they are a bit more complicated than simply an n-dimensional table, so I left that part for later (I have not decided on the interface yet). Instead, I focused on providing bindings to the BLAS, LAPACK and CUDA libraries, so that the actual computations run fast.
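To make the contrast concrete, here is a small sketch of the two styles. Python/Numpy is used purely for illustration (it is the approach named above); Neo's actual Nim API is different and is not shown here:

```python
import numpy as np

# "Everything is an n-dimensional tensor" (the Numpy approach): a vector,
# a matrix and a 3-D array are all the same kind of object, distinguished
# only by their shape.
v = np.array([1.0, 2.0, 3.0])           # shape (3,)     - a 1-D tensor
m = np.array([[1.0, 2.0], [3.0, 4.0]])  # shape (2, 2)   - a 2-D tensor
t = np.zeros((2, 3, 4))                 # shape (2, 3, 4) - a 3-D tensor
print(v.ndim, m.ndim, t.ndim)           # 1 2 3

# A linear-algebra-first design instead makes vectors and matrices distinct
# types, and operations keep their mathematical meaning, e.g. the
# matrix-vector product:
w = m @ np.array([1.0, 1.0])            # shape (2,)
print(w)                                # [3. 7.]
```

The trade-off: the tensor style gives one uniform API for any rank, while the vector/matrix style catches rank mismatches at the type level.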
For similar reasons, it is only lately that I introduced operations such as the element-wise product of vectors, which does not have a natural mathematical interpretation (but it is there at last). I also tried to design Neo in such a way that it could accommodate matrices and vectors allocated on the stack. My recent interest is in adapting Neo to work with sparse matrices, both on the CPU and on the GPU side - which is complicated, since - unlike BLAS for dense matrices - there is no common approach across the two. I would also like to focus on matrix decompositions and other features from LAPACK (unfortunately, these are not always implemented on CUDA). I also moved the actual library bindings into separate packages - [CUDA](https://github.com/unicredit/nimcuda), [BLAS](https://github.com/unicredit/nimblas), [LAPACK](https://github.com/unicredit/nimlapack) - so that they can be used by other libraries in the Nim ecosystem.

All that said, work on Neo has slowed down quite a bit. In general, I have not had much time for Nim lately - I still plan to go on with Neo, but I have made no actual progress in the last few months. ArrayMancer is currently more active, but I will let mratsim speak about its features.
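As a side note on the sparse-matrix point: the most common CPU-side layout is CSR (compressed sparse row). A minimal, library-agnostic sketch in plain Python of the format and a matrix-vector product over it (names here are illustrative, not Neo's API; GPU libraries such as cuSPARSE use their own variants of these formats, which is part of why there is no single common interface):

```python
# The matrix being stored:
#     | 1 0 2 |
# A = | 0 0 3 |
#     | 4 5 0 |

values  = [1.0, 2.0, 3.0, 4.0, 5.0]  # non-zero entries, row by row
col_idx = [0,   2,   2,   0,   1  ]  # column of each non-zero entry
row_ptr = [0, 2, 3, 5]               # row i spans values[row_ptr[i]:row_ptr[i+1]]

def csr_matvec(values, col_idx, row_ptr, x):
    """Sparse matrix-vector product y = A @ x for A in CSR form."""
    y = [0.0] * (len(row_ptr) - 1)
    for i in range(len(y)):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

print(csr_matvec(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]
```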