Github user kaknikhil commented on a diff in the pull request:

    https://github.com/apache/madlib/pull/243#discussion_r175627412

--- Diff: src/modules/convex/mlp_igd.cpp ---
@@ -130,6 +145,90 @@ mlp_igd_transition::run(AnyType &args) {
     return state;
 }

+/**
+ * @brief Perform the multilayer perceptron minibatch transition step
+ *
+ * Called for each tuple.
+ */
+AnyType
+mlp_minibatch_transition::run(AnyType &args) {
+    // For the first tuple: args[0] is nothing more than a marker that
+    // indicates that we should do some initial operations.
+    // For other tuples: args[0] holds the computation state until the last tuple
+    MLPMiniBatchState<MutableArrayHandle<double> > state = args[0];
+
+    // initialize the state if this is the first tuple
+    if (state.algo.numRows == 0) {
+        if (!args[3].isNull()) {
+            MLPMiniBatchState<ArrayHandle<double> > previousState = args[3];
+            state.allocate(*this, previousState.task.numberOfStages,
+                           previousState.task.numbersOfUnits);
+            state = previousState;
+        } else {
+            // configuration parameters
+            ArrayHandle<double> numbersOfUnits = args[4].getAs<ArrayHandle<double> >();
--- End diff --

Is it possible to reuse the code that gets the values from the args parameter? I noticed that the IGD transition function `mlp_igd_transition` has exactly the same code.
---