Github user kaknikhil commented on a diff in the pull request:

    https://github.com/apache/madlib/pull/243#discussion_r175871655
  
    --- Diff: src/modules/convex/mlp_igd.cpp ---
    @@ -170,6 +289,24 @@ mlp_igd_final::run(AnyType &args) {
         return state;
     }
     
    +
    +/**
    + * @brief Perform the multilayer perceptron final step
    + */
    +AnyType
    +mlp_minibatch_final::run(AnyType &args) {
    +    // We request a mutable object. Depending on the backend, this might
    +    // perform a deep copy.
    +    MLPMiniBatchState<MutableArrayHandle<double> > state = args[0];
    +    // Aggregates that haven't seen any data just return Null.
    +    if (state.algo.numRows == 0) { return Null(); }
    +
    +    L2<MLPModelType>::lambda = state.task.lambda;
    +    state.algo.loss = state.algo.loss/static_cast<double>(state.algo.numRows);
    +    state.algo.loss += L2<MLPModelType>::loss(state.task.model);
    +    return state;
    --- End diff ---
    
    I noticed that the minibatch `AlgoState` does not have an incr model, unlike the igd `AlgoState`. Do you think it makes sense to add a comment to explain this?
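    
    For context, the finalization in the diff amounts to averaging the accumulated loss over the rows seen and then adding the L2 penalty on the model. A minimal standalone sketch of that arithmetic (the function name, the plain weight vector, and the 0.5 factor are illustrative assumptions, not MADlib's actual API):
    
    ```cpp
    #include <cassert>
    #include <cmath>
    #include <cstddef>
    #include <vector>
    
    // Hypothetical sketch: average the accumulated loss over num_rows,
    // then add an L2 regularization term on the model weights.
    double finalize_loss(double accumulated_loss, std::size_t num_rows,
                         const std::vector<double>& weights, double lambda) {
        double loss = accumulated_loss / static_cast<double>(num_rows);
        double l2 = 0.0;
        for (double w : weights) l2 += w * w;
        return loss + 0.5 * lambda * l2;  // 0.5 factor is a common convention
    }
    ```
    
    (As in the diff's early-return on `numRows == 0`, a caller would skip this step entirely when no rows were aggregated.)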

