On Monday, 7 December 2020 at 12:28:39 UTC, data pulverizer wrote:
On Monday, 7 December 2020 at 02:14:41 UTC, 9il wrote:
I don't know. Tensors aren't so complex. The complex part is a design that allows Mir to construct and iterate various kinds of lazy tensors of any complexity and still have a fairly universal API, all of which is boosted by the fact that the user-provided kernel (lambda) function is optimized by the compiler without overhead.

I agree that a basic tensor is not hard to implement, but the specific design to choose is not always obvious. Your benchmarks show that design choices have a large impact on performance, and performance is certainly a very important consideration in tensor design.

For example, I had no idea that your ndslice variant was using more than one array internally to achieve its performance - it wasn't obvious to me.

The ndslice tensor type uses exactly one iterator. However, the iterator is generic, and lazy iterators may contain any number of other iterators and pointers.
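To make that concrete, here is a minimal sketch (assuming the mir-algorithm dub package and its mir.ndslice module) of how lazy tensors still expose a single, albeit composite, iterator:

```d
// Requires the mir-algorithm dub package.
import mir.ndslice;

void main()
{
    // An eager 2x3 tensor: its single iterator is a plain pointer.
    auto a = iota(2, 3).slice;

    // A lazy element-wise map: still one iterator, but a composite one
    // that wraps a's pointer together with the lambda.
    auto b = a.map!(x => x * 2);
    assert(b[1, 2] == 10);

    // Zipping two tensors again yields a single composite iterator,
    // which internally holds both underlying iterators.
    auto c = zip(a, b).map!"a + b";
    assert(c[1, 2] == 15);
}
```

So even though a lazy expression may reference several arrays, from the tensor's point of view there is always exactly one iterator.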