[
https://issues.apache.org/jira/browse/MADLIB-1206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16440358#comment-16440358
]
Frank McQuillan edited comment on MADLIB-1206 at 4/17/18 6:37 PM:
------------------------------------------------------------------
When the mini-batch preprocessor is run with grouping, MLP should accept only
exactly the same grouping. Currently, MLP with mini-batching will run with no
grouping or with any grouping columns, which gives erroneous results. We need
to trap this error.
was (Author: fmcquillan):
When the mini-batch preprocessor is run with grouping, MLP should accept only
exactly the same grouping. Currently, MLP with mini-batching will run with no
grouping or with any grouping columns, which gives erroneous results.
One approach is to do the same as with 'independent_varname' and
'dependent_varname' for the MLP parameter names when using mini-batching: the
grouping parameter could similarly be hardcoded to 'grouping_col'.
Let me know if that seems like a reasonable approach.
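The error-trapping described above could be sketched as a simple validation step: compare the grouping columns recorded by the mini-batch preprocessor with those passed to MLP, and fail fast on any mismatch. This is an illustrative sketch, not MADlib source; the function name and the comma-separated column-list convention are assumptions.

```python
def validate_grouping_cols(preprocessor_grouping, mlp_grouping):
    """Raise if the MLP grouping columns differ from those used when the
    mini-batch preprocessor created the packed input table.

    Both arguments are comma-separated column lists (or None/empty for no
    grouping), as hypothetical stand-ins for what would be read from the
    preprocessor's summary table and the MLP call's grouping_col parameter.
    """
    pre = {c.strip() for c in (preprocessor_grouping or "").split(",") if c.strip()}
    mlp = {c.strip() for c in (mlp_grouping or "").split(",") if c.strip()}
    if pre != mlp:
        raise ValueError(
            "MLP grouping_col must match the grouping used by the mini-batch "
            "preprocessor: expected {0}, got {1}".format(sorted(pre), sorted(mlp)))
```

With this check, running MLP with no grouping (or a different grouping) against a preprocessed table that was grouped would raise an error instead of silently producing wrong results.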
> Add mini batch based gradient descent support to MLP
> ----------------------------------------------------
>
> Key: MADLIB-1206
> URL: https://issues.apache.org/jira/browse/MADLIB-1206
> Project: Apache MADlib
> Issue Type: New Feature
> Components: Module: Neural Networks
> Reporter: Nandish Jayaram
> Assignee: Rahul Iyer
> Priority: Major
> Fix For: v1.14
>
>
> Mini-batch gradient descent is typically the algorithm of choice when
> training a neural network.
> MADlib currently supports IGD; we may have to add extensions to include
> mini-batch as a solver for MLP. Other modules will continue to use the
> existing IGD that does not support mini-batching. Later JIRAs will move other
> modules over one at a time to use the new mini-batch GD.
> The related JIRA that pre-processes the input data to be consumed by
> mini-batch is https://issues.apache.org/jira/browse/MADLIB-1200
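To make the proposed solver concrete, the update rule of mini-batch gradient descent can be sketched on a simple linear model: shuffle the rows each epoch, then apply one gradient step per batch using the batch-averaged gradient. This is an illustrative sketch of the general technique only; the function name, hyperparameters, and model are not MADlib's API.

```python
import random

def minibatch_gd(X, y, lr=0.1, batch_size=2, epochs=200, seed=0):
    """Mini-batch gradient descent for least-squares on a linear model.

    X is a list of feature rows, y the targets. Each epoch shuffles the row
    order, then takes one gradient step per mini-batch, averaging the
    squared-error gradient over the rows in the batch.
    """
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    n = len(X)
    for _ in range(epochs):
        order = list(range(n))
        rng.shuffle(order)
        for start in range(0, n, batch_size):
            batch = order[start:start + batch_size]
            grad = [0.0] * len(w)
            for i in batch:
                pred = sum(wj * xj for wj, xj in zip(w, X[i]))
                err = pred - y[i]
                for j, xj in enumerate(X[i]):
                    grad[j] += err * xj
            for j in range(len(w)):
                w[j] -= lr * grad[j] / len(batch)
    return w
```

For an MLP the per-row gradient would come from backpropagation instead of the linear residual, but the batching and update structure is the same.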
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)