fmcquillan99 edited a comment on pull request #564:
URL: https://github.com/apache/madlib/pull/564#issuecomment-833762941


   Found this reference, FYI:
   
   https://machinelearningmastery.com/adam-optimization-algorithm-for-deep-learning/
   ```
   TensorFlow: learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08.
   Keras: lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0.
   Blocks: learning_rate=0.002, beta1=0.9, beta2=0.999, epsilon=1e-08, decay_factor=1.
   Lasagne: learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08
   Caffe: learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08
   MxNet: learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-8
   Torch: learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-8
   ```
   so I think we are good on defaults.
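   
   For reference, those three knobs map onto the standard Adam update from Kingma & Ba (which the article above walks through); epsilon only guards the division, so 1e-8 vs 1e-08 is the same value:
   ```
   % Standard Adam update; g_t is the minibatch gradient at step t.
   m_t = \beta_1 m_{t-1} + (1 - \beta_1)\, g_t
   v_t = \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2
   \hat{m}_t = m_t / (1 - \beta_1^t), \qquad \hat{v}_t = v_t / (1 - \beta_2^t)
   \theta_t = \theta_{t-1} - \alpha\, \hat{m}_t / (\sqrt{\hat{v}_t} + \epsilon)
   ```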
   
   Convergence tests seem reasonable to me on MNIST:
   
   (0)
   The only comment I have is that the summary table does not show the newly
   added params, but I think we are good for this PR; we can add those at a
   later time.
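   
   If anyone wants to eyeball what the summary table does record today, the quick check is just (this is the table created alongside the output table in the runs below):
   ```
   -- The newly added solver params are not listed here yet; see the comment above.
   SELECT * FROM mnist_result_summary;
   ```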
   
   
   (1)
   SGD
   
   ```
   DROP TABLE IF EXISTS mnist_result, mnist_result_summary, mnist_result_standardization;
   
   SELECT madlib.mlp_classification(
       'mnist_train_packed',        -- Packed table from preprocessor
       'mnist_result',              -- Destination table
       'independent_varname',       -- Independent
       'dependent_varname',         -- Dependent
       ARRAY[32,16,8],              -- Hidden layer sizes
       'learning_rate_init=0.1,
       n_iterations=30,
       learning_rate_policy=const,
       lambda=0.0001,
       tolerance=0',                -- Optimizer params (lambda = regularization)
       'tanh',                      -- Activation function
       '',                          -- No weights
       FALSE,                       -- No warmstart
       TRUE);                       -- Verbose
   ```
   produces
   ```
   INFO:  Iteration: 1, Loss: <1.12811093016>
   INFO:  Iteration: 2, Loss: <0.755332954208>
   INFO:  Iteration: 3, Loss: <0.461081268555>
   INFO:  Iteration: 4, Loss: <0.392769982268>
   INFO:  Iteration: 5, Loss: <0.353609188378>
   INFO:  Iteration: 6, Loss: <0.328416770476>
   INFO:  Iteration: 7, Loss: <0.311067824611>
   INFO:  Iteration: 8, Loss: <0.29605826858>
   INFO:  Iteration: 9, Loss: <0.281167481845>
   INFO:  Iteration: 10, Loss: <0.27973147345>
   INFO:  Iteration: 11, Loss: <0.269297460736>
   INFO:  Iteration: 12, Loss: <0.254955499335>
   INFO:  Iteration: 13, Loss: <0.248963262336>
   INFO:  Iteration: 14, Loss: <0.256747431985>
   INFO:  Iteration: 15, Loss: <0.246768890782>
   INFO:  Iteration: 16, Loss: <0.238747122866>
   INFO:  Iteration: 17, Loss: <0.229354761771>
   INFO:  Iteration: 18, Loss: <0.229097229694>
   INFO:  Iteration: 19, Loss: <0.225774866697>
   INFO:  Iteration: 20, Loss: <0.223041187279>
   INFO:  Iteration: 21, Loss: <0.222356207281>
   INFO:  Iteration: 22, Loss: <0.211434093527>
   INFO:  Iteration: 23, Loss: <0.220283965978>
   INFO:  Iteration: 24, Loss: <0.214014665111>
   INFO:  Iteration: 25, Loss: <0.211513338037>
   INFO:  Iteration: 26, Loss: <0.201904671094>
   INFO:  Iteration: 27, Loss: <0.198295676438>
   INFO:  Iteration: 28, Loss: <0.205999603598>
   INFO:  Iteration: 29, Loss: <0.200219470875>
   
    train_accuracy_percent 
   ------------------------
                     97.88
   
    test_accuracy_percent 
   -----------------------
                    94.61
   ```
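   
   For context, the accuracy figures in each run come from the usual predict-then-compare step. A sketch of that query follows; the unpacked test table `mnist_test`, its `id`/label columns, and the `estimated_*` output column name are assumptions on my part:
   ```
   -- Sketch only: assumes an unpacked test table mnist_test(id, dependent_varname);
   -- mlp_predict with pred_type='response' writes one predicted label per row.
   DROP TABLE IF EXISTS mnist_prediction;
   SELECT madlib.mlp_predict(
       'mnist_result',              -- Model table from the training run
       'mnist_test',                -- Test data (assumed table name)
       'id',                        -- Row id column (assumed)
       'mnist_prediction',          -- Output table
       'response');                 -- Return the predicted class label
   
   SELECT round(100.0 * sum(CASE WHEN p.estimated_dependent_varname = t.dependent_varname
                                 THEN 1 ELSE 0 END) / count(*), 2)
          AS test_accuracy_percent
   FROM mnist_prediction p
   JOIN mnist_test t USING (id);
   ```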
   
   
   (2a)
   Adam
   
   ```
   DROP TABLE IF EXISTS mnist_result, mnist_result_summary, mnist_result_standardization;
   
   SELECT madlib.mlp_classification(
       'mnist_train_packed',        -- Packed table from preprocessor
       'mnist_result',              -- Destination table
       'independent_varname',       -- Independent
       'dependent_varname',         -- Dependent
       ARRAY[32,16,8],              -- Hidden layer sizes
       'learning_rate_init=0.001,
       n_iterations=30,
       tolerance=0,
       solver=adam',
       'tanh',                      -- Activation function
       '',                          -- No weights
       FALSE,                       -- No warmstart
       TRUE);                       -- Verbose
   ```
   produces
   ```
   INFO:  Iteration: 1, Loss: <2.49219154056>
   INFO:  Iteration: 2, Loss: <2.2406470304>
   INFO:  Iteration: 3, Loss: <1.61176099252>
   INFO:  Iteration: 4, Loss: <1.19194514036>
   INFO:  Iteration: 5, Loss: <0.865425647132>
   INFO:  Iteration: 6, Loss: <0.647833798039>
   INFO:  Iteration: 7, Loss: <0.519056510181>
   INFO:  Iteration: 8, Loss: <0.439596032868>
   INFO:  Iteration: 9, Loss: <0.388246910742>
   INFO:  Iteration: 10, Loss: <0.352024571426>
   INFO:  Iteration: 11, Loss: <0.323541956657>
   INFO:  Iteration: 12, Loss: <0.302386538786>
   INFO:  Iteration: 13, Loss: <0.282025422098>
   INFO:  Iteration: 14, Loss: <0.265449691502>
   INFO:  Iteration: 15, Loss: <0.251999601889>
   INFO:  Iteration: 16, Loss: <0.240175123876>
   INFO:  Iteration: 17, Loss: <0.230883110179>
   INFO:  Iteration: 18, Loss: <0.219696470998>
   INFO:  Iteration: 19, Loss: <0.207799722097>
   INFO:  Iteration: 20, Loss: <0.20234630213>
   INFO:  Iteration: 21, Loss: <0.193858612755>
   INFO:  Iteration: 22, Loss: <0.186792467696>
   INFO:  Iteration: 23, Loss: <0.180067407853>
   INFO:  Iteration: 24, Loss: <0.172177625363>
   INFO:  Iteration: 25, Loss: <0.167304499739>
   INFO:  Iteration: 26, Loss: <0.160973077191>
   INFO:  Iteration: 27, Loss: <0.155808075056>
   INFO:  Iteration: 28, Loss: <0.150402168905>
   INFO:  Iteration: 29, Loss: <0.147615500149>
   
    train_accuracy_percent 
   ------------------------
                     98.26
   
    test_accuracy_percent 
   -----------------------
                    94.20
   ```
   
   
   (2b)
   Adam with inv learning rate policy
   
   ```
   DROP TABLE IF EXISTS mnist_result, mnist_result_summary, mnist_result_standardization;
   
   SELECT madlib.mlp_classification(
       'mnist_train_packed',        -- Packed table from preprocessor
       'mnist_result',              -- Destination table
       'independent_varname',       -- Independent
       'dependent_varname',         -- Dependent
       ARRAY[32,16,8],              -- Hidden layer sizes
       'learning_rate_init=0.005,
       learning_rate_policy=inv,
       n_iterations=30,
       tolerance=0,
       solver=adam',
       'tanh',                      -- Activation function
       '',                          -- No weights
       FALSE,                       -- No warmstart
       TRUE);                       -- Verbose
   ```
   produces
   ```
   INFO:  Iteration: 1, Loss: <1.52466331297>
   INFO:  Iteration: 2, Loss: <0.924587347568>
   INFO:  Iteration: 3, Loss: <0.493045155817>
   INFO:  Iteration: 4, Loss: <0.412553532763>
   INFO:  Iteration: 5, Loss: <0.370052940698>
   INFO:  Iteration: 6, Loss: <0.345010269101>
   INFO:  Iteration: 7, Loss: <0.320095146365>
   INFO:  Iteration: 8, Loss: <0.301508145497>
   INFO:  Iteration: 9, Loss: <0.283750298649>
   INFO:  Iteration: 10, Loss: <0.279589959721>
   INFO:  Iteration: 11, Loss: <0.265455039814>
   INFO:  Iteration: 12, Loss: <0.255105666621>
   INFO:  Iteration: 13, Loss: <0.250431891656>
   INFO:  Iteration: 14, Loss: <0.241122220136>
   INFO:  Iteration: 15, Loss: <0.231721293923>
   INFO:  Iteration: 16, Loss: <0.228791303605>
   INFO:  Iteration: 17, Loss: <0.221401451739>
   INFO:  Iteration: 18, Loss: <0.214964261915>
   INFO:  Iteration: 19, Loss: <0.209435823398>
   INFO:  Iteration: 20, Loss: <0.204085836822>
   INFO:  Iteration: 21, Loss: <0.205478206792>
   INFO:  Iteration: 22, Loss: <0.199285867146>
   INFO:  Iteration: 23, Loss: <0.195479904077>
   INFO:  Iteration: 24, Loss: <0.187364367993>
   INFO:  Iteration: 25, Loss: <0.18745233773>
   INFO:  Iteration: 26, Loss: <0.178917124831>
   INFO:  Iteration: 27, Loss: <0.180600455093>
   INFO:  Iteration: 28, Loss: <0.176480296692>
   INFO:  Iteration: 29, Loss: <0.173755668857>
   
    train_accuracy_percent 
   ------------------------
                     97.94
   
    test_accuracy_percent 
   -----------------------
                    94.57
   ```
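   
   One note on (2b): I'm assuming `inv` follows the usual inverse-decay convention here (worth double-checking against the MLP docs), i.e. the step size shrinks with the iteration count roughly as:
   ```
   % Common "inv" schedule (assumed form); \gamma and p are the decay parameters.
   \eta_t = \eta_0 \, (1 + \gamma t)^{-p}
   ```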
   
   
   (2c)
   Adam, 100 iterations
   
   ```
   DROP TABLE IF EXISTS mnist_result, mnist_result_summary, mnist_result_standardization;
   
   SELECT madlib.mlp_classification(
       'mnist_train_packed',        -- Packed table from preprocessor
       'mnist_result',              -- Destination table
       'independent_varname',       -- Independent
       'dependent_varname',         -- Dependent
       ARRAY[32,16,8],              -- Hidden layer sizes
       'learning_rate_init=0.001,
       n_iterations=100,
       tolerance=0,
       solver=adam',
       'tanh',                      -- Activation function
       '',                          -- No weights
       FALSE,                       -- No warmstart
       TRUE);                       -- Verbose
   ```
   produces
   ```
   INFO:  Iteration: 1, Loss: <1.50391047367>
   INFO:  Iteration: 2, Loss: <1.03610856958>
   INFO:  Iteration: 3, Loss: <0.492809028471>
   INFO:  Iteration: 4, Loss: <0.407187444134>
   INFO:  Iteration: 5, Loss: <0.363707512677>
   INFO:  Iteration: 6, Loss: <0.335086226849>
   INFO:  Iteration: 7, Loss: <0.316462503268>
   INFO:  Iteration: 8, Loss: <0.295613192977>
   INFO:  Iteration: 9, Loss: <0.280867397987>
   INFO:  Iteration: 10, Loss: <0.271135767834>
   INFO:  Iteration: 11, Loss: <0.262014316967>
   INFO:  Iteration: 12, Loss: <0.248649398201>
   INFO:  Iteration: 13, Loss: <0.242932013154>
   INFO:  Iteration: 14, Loss: <0.2284545217>
   INFO:  Iteration: 15, Loss: <0.222326038477>
   INFO:  Iteration: 16, Loss: <0.217182451059>
   INFO:  Iteration: 17, Loss: <0.209832004773>
   INFO:  Iteration: 18, Loss: <0.202990027516>
   INFO:  Iteration: 19, Loss: <0.199974575885>
   INFO:  Iteration: 20, Loss: <0.203494704896>
   INFO:  Iteration: 21, Loss: <0.195589183763>
   INFO:  Iteration: 22, Loss: <0.190845037047>
   INFO:  Iteration: 23, Loss: <0.187225464853>
   INFO:  Iteration: 24, Loss: <0.183047759837>
   INFO:  Iteration: 25, Loss: <0.176157725721>
   INFO:  Iteration: 26, Loss: <0.177116976914>
   INFO:  Iteration: 27, Loss: <0.169277549204>
   INFO:  Iteration: 28, Loss: <0.169430259667>
   INFO:  Iteration: 29, Loss: <0.163695246384>
   INFO:  Iteration: 30, Loss: <0.165187825701>
   INFO:  Iteration: 31, Loss: <0.160460530916>
   INFO:  Iteration: 32, Loss: <0.15676983394>
   INFO:  Iteration: 33, Loss: <0.159008487362>
   INFO:  Iteration: 34, Loss: <0.151747414116>
   INFO:  Iteration: 35, Loss: <0.151538403231>
   INFO:  Iteration: 36, Loss: <0.148640650905>
   INFO:  Iteration: 37, Loss: <0.147534094157>
   INFO:  Iteration: 38, Loss: <0.1437882125>
   INFO:  Iteration: 39, Loss: <0.141317622766>
   INFO:  Iteration: 40, Loss: <0.141255905518>
   INFO:  Iteration: 41, Loss: <0.138791777293>
   INFO:  Iteration: 42, Loss: <0.138347790221>
   INFO:  Iteration: 43, Loss: <0.13656995305>
   INFO:  Iteration: 44, Loss: <0.131324773709>
   INFO:  Iteration: 45, Loss: <0.131856327107>
   INFO:  Iteration: 46, Loss: <0.133657152149>
   INFO:  Iteration: 47, Loss: <0.130236274115>
   INFO:  Iteration: 48, Loss: <0.127130788791>
   INFO:  Iteration: 49, Loss: <0.127103624619>
   INFO:  Iteration: 50, Loss: <0.121606153613>
   INFO:  Iteration: 51, Loss: <0.120243623487>
   INFO:  Iteration: 52, Loss: <0.12548381008>
   INFO:  Iteration: 53, Loss: <0.122825989048>
   INFO:  Iteration: 54, Loss: <0.120458194512>
   INFO:  Iteration: 55, Loss: <0.121511041338>
   INFO:  Iteration: 56, Loss: <0.120670982109>
   INFO:  Iteration: 57, Loss: <0.115641426847>
   INFO:  Iteration: 58, Loss: <0.115738045094>
   INFO:  Iteration: 59, Loss: <0.116625583836>
   INFO:  Iteration: 60, Loss: <0.113520583102>
   INFO:  Iteration: 61, Loss: <0.120114127302>
   INFO:  Iteration: 62, Loss: <0.114046563283>
   INFO:  Iteration: 63, Loss: <0.112257682264>
   INFO:  Iteration: 64, Loss: <0.112863918192>
   INFO:  Iteration: 65, Loss: <0.111775769015>
   INFO:  Iteration: 66, Loss: <0.109602495064>
   INFO:  Iteration: 67, Loss: <0.109533523979>
   INFO:  Iteration: 68, Loss: <0.111775703654>
   INFO:  Iteration: 69, Loss: <0.102105870983>
   INFO:  Iteration: 70, Loss: <0.102377880918>
   INFO:  Iteration: 71, Loss: <0.103988486496>
   INFO:  Iteration: 72, Loss: <0.102840824709>
   INFO:  Iteration: 73, Loss: <0.10485504056>
   INFO:  Iteration: 74, Loss: <0.102915215065>
   INFO:  Iteration: 75, Loss: <0.102753940579>
   INFO:  Iteration: 76, Loss: <0.100258262563>
   INFO:  Iteration: 77, Loss: <0.103495527492>
   INFO:  Iteration: 78, Loss: <0.0981754175822>
   INFO:  Iteration: 79, Loss: <0.0985121203699>
   INFO:  Iteration: 80, Loss: <0.097624467522>
   INFO:  Iteration: 81, Loss: <0.0993185049517>
   INFO:  Iteration: 82, Loss: <0.0993352457337>
   INFO:  Iteration: 83, Loss: <0.0951777919375>
   INFO:  Iteration: 84, Loss: <0.0956579996975>
   INFO:  Iteration: 85, Loss: <0.0979769525761>
   INFO:  Iteration: 86, Loss: <0.0974653850973>
   INFO:  Iteration: 87, Loss: <0.0959539633258>
   INFO:  Iteration: 88, Loss: <0.0907081664635>
   INFO:  Iteration: 89, Loss: <0.096466239031>
   INFO:  Iteration: 90, Loss: <0.0941570237729>
   INFO:  Iteration: 91, Loss: <0.0909757537818>
   INFO:  Iteration: 92, Loss: <0.0953944147618>
   INFO:  Iteration: 93, Loss: <0.0940320137924>
   INFO:  Iteration: 94, Loss: <0.0914005258803>
   INFO:  Iteration: 95, Loss: <0.0852576882136>
   INFO:  Iteration: 96, Loss: <0.0903479009159>
   INFO:  Iteration: 97, Loss: <0.0891995754537>
   INFO:  Iteration: 98, Loss: <0.0874552962718>
   INFO:  Iteration: 99, Loss: <0.09178683068>
   
    train_accuracy_percent 
   ------------------------
                     99.07
   
    test_accuracy_percent 
   -----------------------
                    94.36
   ```
   
   
   
   (3)
   RMSprop
   
   ```
   DROP TABLE IF EXISTS mnist_result, mnist_result_summary, mnist_result_standardization;
   
   SELECT madlib.mlp_classification(
       'mnist_train_packed',        -- Packed table from preprocessor
       'mnist_result',              -- Destination table
       'independent_varname',       -- Independent
       'dependent_varname',         -- Dependent
       ARRAY[32,16,8],              -- Hidden layer sizes
       'learning_rate_init=0.001,
       n_iterations=30,
       tolerance=0,
       solver=rmsprop',
       'tanh',                      -- Activation function
       '',                          -- No weights
       FALSE,                       -- No warmstart
       TRUE);                       -- Verbose
   ```
   produces
   ```
   INFO:  Iteration: 1, Loss: <2.41494422478>
   INFO:  Iteration: 2, Loss: <2.16954988045>
   INFO:  Iteration: 3, Loss: <1.37535161983>
   INFO:  Iteration: 4, Loss: <0.909528765916>
   INFO:  Iteration: 5, Loss: <0.6663504217>
   INFO:  Iteration: 6, Loss: <0.540761381153>
   INFO:  Iteration: 7, Loss: <0.467453554261>
   INFO:  Iteration: 8, Loss: <0.419246452355>
   INFO:  Iteration: 9, Loss: <0.384879605672>
   INFO:  Iteration: 10, Loss: <0.358759255292>
   INFO:  Iteration: 11, Loss: <0.336633156705>
   INFO:  Iteration: 12, Loss: <0.31752323948>
   INFO:  Iteration: 13, Loss: <0.301066795201>
   INFO:  Iteration: 14, Loss: <0.287779652293>
   INFO:  Iteration: 15, Loss: <0.274361089313>
   INFO:  Iteration: 16, Loss: <0.26582802731>
   INFO:  Iteration: 17, Loss: <0.253090334172>
   INFO:  Iteration: 18, Loss: <0.244221928119>
   INFO:  Iteration: 19, Loss: <0.235076125529>
   INFO:  Iteration: 20, Loss: <0.22712747899>
   INFO:  Iteration: 21, Loss: <0.221659572685>
   INFO:  Iteration: 22, Loss: <0.21359874004>
   INFO:  Iteration: 23, Loss: <0.207138262823>
   INFO:  Iteration: 24, Loss: <0.201992161324>
   INFO:  Iteration: 25, Loss: <0.195631825847>
   INFO:  Iteration: 26, Loss: <0.191176863008>
   INFO:  Iteration: 27, Loss: <0.187077118102>
   INFO:  Iteration: 28, Loss: <0.181589100317>
   INFO:  Iteration: 29, Loss: <0.177400916474>
   
    train_accuracy_percent 
   ------------------------
                     97.76
   
    test_accuracy_percent 
   -----------------------
                    94.34
   ```
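   
   For completeness, the RMSprop run in (3) corresponds to the standard update (Hinton's formulation; epsilon placement varies slightly across implementations):
   ```
   % E[g^2]_t is a running average of the squared minibatch gradient.
   E[g^2]_t = \rho\, E[g^2]_{t-1} + (1 - \rho)\, g_t^2
   \theta_t = \theta_{t-1} - \eta\, g_t / (\sqrt{E[g^2]_t} + \epsilon)
   ```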