[ https://issues.apache.org/jira/browse/MADLIB-413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095586#comment-16095586 ]

Cooper Sloan commented on MADLIB-413:
-------------------------------------

Proposed interface:

optimizer_params,     -- optional, default NULL
                         Parameters for optimization, given as a
                         comma-separated string of key-value pairs.

    learning_rate_init DOUBLE PRECISION,    -- Default: 0.001
                                               Initial learning rate.
    learning_rate_policy VARCHAR,           -- Default: 'constant'
                                               One of 'constant', 'exp', 'inv', 'step'.
                                               For 'constant': learning_rate = learning_rate_init
                                               For 'exp':      learning_rate = learning_rate_init * gamma^(iter)
                                               For 'inv':      learning_rate = learning_rate_init * (1 + gamma*iter)^(-power)
                                               For 'step':     learning_rate = learning_rate_init * gamma^(floor(iter/iterations_per_step))
                                                 (The learning rate is multiplied by gamma
                                                 every iterations_per_step iterations.)
                                               Where iter is the current iteration of SGD.
    gamma DOUBLE PRECISION,                 -- Default: 0.1
                                               Decay rate for the learning rate.
                                               Valid for learning_rate_policy = 'exp', 'inv', or 'step'.
    power DOUBLE PRECISION,                 -- Default: 0.5
                                               Exponent for learning_rate_policy = 'inv'.
    iterations_per_step INTEGER,            -- Default: 1000
                                               Number of iterations to run before decreasing
                                               the learning rate by a factor of gamma.
                                               Valid for learning_rate_policy = 'step'.
    -- Rest unchanged
    n_iterations INTEGER,                   -- Default: 100
                                               Number of iterations per try.
    n_tries INTEGER,                        -- Default: 1
                                               Total number of training cycles, with
                                               random initializations to avoid local minima.
    tolerance DOUBLE PRECISION,             -- Default: 0.001
                                               If the change in loss between two iterations
                                               is less than the tolerance, training stops
                                               even if n_iterations has not been reached.
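
To make the policies above concrete, here is a minimal sketch (plain Python, not MADlib code) of how each learning_rate_policy value would translate into a per-iteration rate. The names and defaults mirror the proposal above; the example values at the bottom are illustrative only.

{code}
import math

def learning_rate(iter_num,
                  learning_rate_init=0.001,
                  learning_rate_policy='constant',
                  gamma=0.1,
                  power=0.5,
                  iterations_per_step=1000):
    """Return the SGD learning rate for iteration iter_num."""
    if learning_rate_policy == 'constant':
        return learning_rate_init
    if learning_rate_policy == 'exp':
        # Decays by a factor of gamma every iteration.
        return learning_rate_init * gamma ** iter_num
    if learning_rate_policy == 'inv':
        # Smooth polynomial decay controlled by gamma and power.
        return learning_rate_init * (1.0 + gamma * iter_num) ** (-power)
    if learning_rate_policy == 'step':
        # Drops by a factor of gamma every iterations_per_step iterations.
        return learning_rate_init * gamma ** math.floor(iter_num / iterations_per_step)
    raise ValueError("unknown learning_rate_policy: %s" % learning_rate_policy)

# Example: with the 'step' policy, gamma=0.5 and iterations_per_step=100,
# the rate is halved at iterations 100, 200, 300, ...
print(learning_rate(250, 0.01, 'step', gamma=0.5, iterations_per_step=100))  # 0.0025
{code}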

> Neural Networks - MLP - Phase 1
> -------------------------------
>
>                 Key: MADLIB-413
>                 URL: https://issues.apache.org/jira/browse/MADLIB-413
>             Project: Apache MADlib
>          Issue Type: New Feature
>          Components: Module: Neural Networks
>            Reporter: Caleb Welton
>            Assignee: Cooper Sloan
>             Fix For: v1.12
>
>
> Multilayer perceptron with backpropagation
> Modules:
> * mlp_classification
> * mlp_regression
> Interface
> {code}
> source_table VARCHAR,
> output_table VARCHAR,
> independent_varname VARCHAR, -- Column name for input features, should be a real-valued array
> dependent_varname VARCHAR, -- Column name for target values, should be a real-valued array of size 1 or greater
> hidden_layer_sizes INTEGER[], -- Number of units per hidden layer (can be empty or null, in which case, no hidden layers)
> optimizer_params VARCHAR, -- Specified below
> weights VARCHAR, -- Column name for weights. Weights the loss for each input vector. Column should contain positive real values
> activation_function VARCHAR, -- One of 'sigmoid' (default), 'tanh', 'relu', or any prefix (e.g. 't', 's')
> grouping_cols
> )
> {code}
> where
> {code}
> optimizer_params: -- eg "step_size=0.5, n_tries=5"
> {
> step_size DOUBLE PRECISION, -- Learning rate
> n_iterations INTEGER, -- Number of iterations per try
> n_tries INTEGER, -- Total number of training cycles, with random initializations to avoid local minima.
> tolerance DOUBLE PRECISION, -- Maximum distance between weights before training stops (or until it reaches n_iterations)
> }
> {code}
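
For reference, a call against the proposed interface might look like the sketch below. The function name mlp_classification comes from the Modules list above; the table name, column names, and parameter values are hypothetical.

{code}
SELECT madlib.mlp_classification(
    'iris_data',            -- source_table (hypothetical)
    'iris_mlp',             -- output_table (hypothetical)
    'attributes',           -- independent_varname: real-valued array column
    'class',                -- dependent_varname
    ARRAY[10, 10],          -- hidden_layer_sizes: two hidden layers of 10 units
    'learning_rate_init=0.01, learning_rate_policy=step,
     gamma=0.5, iterations_per_step=500, n_iterations=2000',
    NULL,                   -- weights
    'tanh',                 -- activation_function
    NULL                    -- grouping_cols
);
{code}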


