[
https://issues.apache.org/jira/browse/MADLIB-1228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16438065#comment-16438065
]
Frank McQuillan commented on MADLIB-1228:
-----------------------------------------
{code}
madlib=# SELECT madlib.mlp_classification(
madlib(# 'iris_data', -- Source table
madlib(# 'mlp_model', -- Destination table
madlib(# 'attributes', -- Input features
madlib(# 'class_text', -- Label
madlib(# ARRAY[5], -- Number of units per layer
madlib(# 'learning_rate_init=0.003,
madlib'# n_iterations=10,
madlib'# tolerance=0', -- Optimizer params
madlib(# 'tanh', -- Activation function
madlib(# NULL, -- Default weight (1)
madlib(# FALSE, -- No warm start
madlib(# TRUE -- verbose
madlib(# );
{code}
{code}
CONTEXT: PL/Python function "mlp_classification"
INFO: Iteration: 1, Loss: <1.55963489538>
CONTEXT: PL/Python function "mlp_classification"
INFO: Iteration: 2, Loss: <1.23667772344>
CONTEXT: PL/Python function "mlp_classification"
INFO: Iteration: 3, Loss: <1.16610526261>
CONTEXT: PL/Python function "mlp_classification"
INFO: Iteration: 4, Loss: <1.10136683078>
CONTEXT: PL/Python function "mlp_classification"
INFO: Iteration: 5, Loss: <1.04155851998>
CONTEXT: PL/Python function "mlp_classification"
INFO: Iteration: 6, Loss: <0.985984531389>
CONTEXT: PL/Python function "mlp_classification"
INFO: Iteration: 7, Loss: <0.934109242645>
CONTEXT: PL/Python function "mlp_classification"
INFO: Iteration: 8, Loss: <0.885519287008>
CONTEXT: PL/Python function "mlp_classification"
INFO: Iteration: 9, Loss: <0.839893581408>
CONTEXT: PL/Python function "mlp_classification"
{code}
> MLP verbose - print loss for all iterations in verbose mode
> -----------------------------------------------------------
>
> Key: MADLIB-1228
> URL: https://issues.apache.org/jira/browse/MADLIB-1228
> Project: Apache MADlib
> Issue Type: Bug
> Components: Module: Neural Networks
> Reporter: Nandish Jayaram
> Assignee: Nandish Jayaram
> Priority: Minor
> Fix For: v1.14
>
>
> For MLP running in verbose mode for n iterations, we currently print the loss
> only for iterations 2 through n-1. We should instead print the loss for
> iterations 1 through n-1, both:
> * when the run uses all max iterations, and
> * when the run converges early (i.e., stops before max iterations, in which
> case print the loss for iterations 1 through converging_iteration-1).
> Note that the loss for the last iteration can be found in the output table.
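A minimal sketch of the desired behavior (hypothetical helper, not MADlib's actual implementation): given the per-iteration losses of a run, verbose mode should log iterations 1 through n-1, while the nth (final) loss is left for the output table. With n_iterations=10 this matches the transcript above, which logs iterations 1 through 9.

```python
def verbose_losses(losses):
    """Return the (iteration, loss) pairs to log in verbose mode:
    iterations 1 through n-1. The final loss is not logged; it can
    be read from the output table instead."""
    return list(enumerate(losses[:-1], start=1))

# Example: a 4-iteration run logs losses for iterations 1-3.
for i, loss in verbose_losses([1.56, 1.24, 1.17, 1.10]):
    print("INFO:  Iteration: %d, Loss: <%s>" % (i, loss))
```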
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)