Hello,

I've been having an issue with the following code, which is part of a library to 
automatically compute gradients of basic operations. [Full 
code](https://github.com/mratsim/nim-rmad/blob/master/src/autograd.nim)

The idea is to perform an operation (+, *, -, ...) and at the same time record 
its gradient transformation as closures (type BackProp).

Currently I can store BackProp[T]: proc(_: T): T. It works fine for float32, 
for example, but I need to generalize it to linalg's Matrix32, DMatrix32, 
Vector32 and DVector32.

Here is the current working implementation:
    
    
    type
      BackProp[T] = proc (gradient: T): T {.noSideEffect.}
      
      Node[T] = object
        weights: array[2, BackProp[T]]
        parents: array[2,int]
      
      Context*[T] = object
        nodes: ref seq[Node[T]]
    
    proc newContext*[T]: Context[T] {.noSideEffect.} =
      result.nodes = new seq[Node[T]]
      result.nodes[] = @[]
    
    let ctx = newContext[float32]() # parentheses needed: without them we get the proc itself, not a Context
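
To make the closure-recording idea concrete, here is roughly how a node could be added with this float32 instantiation. This is a minimal sketch of my own; the multiplication example and the inline lambdas are not the library's actual API:

```nim
type
  BackProp[T] = proc (gradient: T): T {.noSideEffect.}
  Node[T] = object
    weights: array[2, BackProp[T]]
    parents: array[2, int]
  Context[T] = object
    nodes: ref seq[Node[T]]

proc newContext[T](): Context[T] {.noSideEffect.} =
  result.nodes = new seq[Node[T]]
  result.nodes[] = @[]

let ctx = newContext[float32]()

# Record a multiplication a * b with a = 3, b = 4.
# The gradients d(a*b)/da = b and d(a*b)/db = a are captured as closures.
let a = 3'f32
let b = 4'f32
ctx.nodes[].add(Node[float32](
  weights: [
    proc (g: float32): float32 {.noSideEffect.} = g * b,
    proc (g: float32): float32 {.noSideEffect.} = g * a
  ],
  parents: [0, 1]
))

echo ctx.nodes[][0].weights[0](1'f32)  # gradient w.r.t. a is b, prints 4.0
```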
    

Now I need BackProp to be T -> U, where T and U may be different types or 
sometimes the same. For example, I will have a series of nodes with BackProp 
closures float32 -> Vector32 and Vector32 -> Matrix32.

I can get this far:
    
    
    import linalg
    
    type
      BackProp[T, U] = proc (gradient: T): U {.noSideEffect.}
      
      Node[T, U] = object
        weights: array[2, BackProp[T, U]]
        parents: array[2, int] #ref indices to parent nodes
      
      Context*[T, U] = object
        nodes: ref seq[Node[T, U]]
    
    proc newContext*[T, U]: Context[T, U] {.noSideEffect.} =
      result.nodes = new seq[Node[T, U]]
      result.nodes[] = @[]
    
    let ctx = newContext[float32, Matrix32[2, 2]]()
    

But now there is no way to add a Matrix32[2, 2] -> Matrix32[4, 4] closure; I'm 
stuck with the single float32 -> Matrix32[2, 2] instantiation.

I've tried the two following snippets, which both fail:
    
    
    import linalg
    
    type
      
      Tensor[float32] = float32 or DVector32
      
      BackProp[T] = proc (gradient: Tensor[T]): Tensor[T] {.noSideEffect.}
      
      Node[T] = object
        weights: array[2, BackProp[T]]
        parents: array[2, int] #ref indices to parent nodes
      
      Context*[T] = object
        nodes: ref seq[Node[T]]
    
    proc newContext*(T: typedesc[SomeReal]): Context[T] {.noSideEffect.} =
      result.nodes = new seq[Node[T]]
      result.nodes[] = @[]
    
    let ctx = newContext(float32)
    

The error is: "Cannot evaluate at compile time: T"

And I also tried removing the T generic parameter:
    
    
    import linalg
    
    type
      
      Tensor = float32 or DVector32
      
      BackProp = proc (gradient: Tensor): Tensor {.noSideEffect.}
      
      Node = object
        weights: array[2, BackProp]
        parents: array[2, int] #ref indices to parent nodes
      
      Context* = object
        nodes: ref seq[Node]
    
    proc newContext*(T: typedesc[SomeReal]): Context {.noSideEffect.} =
      result.nodes = new seq[Node]
      result.nodes[] = @[]
    
    let ctx = newContext(float32)
    

Error: 'BackProp' is not a concrete type.

I tried wrapping all the types (float32, Vector32, DVector32, Matrix32, 
DMatrix32) in an object variant, but I also get "Vector32 is not a concrete 
type", probably because Vector32 in linalg is actually Vector32[N: static[int]].
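
For reference, here is a reduced version of that object-variant attempt with linalg swapped out for concrete stand-in types (seq[float32] instead of DVector32; my simplification, not the real types). With fully concrete branch types it compiles and runs fine, which is why I suspect the static[int] parameter is the culprit:

```nim
import sequtils  # for mapIt

type
  TensorKind = enum tkScalar, tkVector

  # Stand-in variant: every branch is a concrete type, unlike
  # Vector32[N: static[int]] in linalg.
  Tensor = object
    case kind: TensorKind
    of tkScalar: s: float32
    of tkVector: v: seq[float32]  # stand-in for DVector32

  BackProp = proc (gradient: Tensor): Tensor {.noSideEffect.}

  Node = object
    weights: array[2, BackProp]
    parents: array[2, int]

# A sample gradient transformation: doubles scalars and vectors alike.
let double: BackProp = proc (g: Tensor): Tensor {.noSideEffect.} =
  case g.kind
  of tkScalar: Tensor(kind: tkScalar, s: g.s * 2)
  of tkVector: Tensor(kind: tkVector, v: g.v.mapIt(it * 2))

echo double(Tensor(kind: tkScalar, s: 1.5'f32)).s  # prints 3.0
```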

I also tried using a tuple (BackProp[T, U], BackProp[T, U]) instead of an 
array, but it still doesn't compile, with the same errors.

Is there a solution to have Node[Closure[float32, Vector32]], 
Node[Closure[Vector32, Matrix32]] and Node[Closure[Matrix32, Matrix32]] in the 
same seq (Closure being BackProp in this case)?

If interested, here is the full background on [why I'm passing a closure 
instead of just a value](https://github.com/mratsim/nim-rmad/issues/2).
