[ https://issues.apache.org/jira/browse/MADLIB-1454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Frank McQuillan updated MADLIB-1454:
------------------------------------
Description:
For Hyperband, write the "best so far" to the console so that the user knows how
things are progressing.
Note that we need to keep track of the global best; it might not be the one from the
last iteration (see the sketch after the example output below).
Change console output from:
{code}
INFO: *** Diagonally evaluating 9 configs under bracket=2 & round=0 with 1 iterations ***
CONTEXT: PL/Python function "madlib_keras_automl"
INFO:
Time for training in iteration 1: 9.76507210732 sec
DETAIL:
Training set after iteration 1:
mst_key=2: metric=0.683333337307, loss=0.626947939396
mst_key=8: metric=0.683333337307, loss=0.556752383709
mst_key=3: metric=0.683333337307, loss=0.604624867439
mst_key=6: metric=0.324999988079, loss=1.01775479317
mst_key=1: metric=0.691666662693, loss=0.918690085411
mst_key=7: metric=0.324999988079, loss=1.09102141857
mst_key=9: metric=0.683333337307, loss=0.615454554558
mst_key=4: metric=0.774999976158, loss=0.571036159992
mst_key=5: metric=0.324999988079, loss=1.10194396973
Validation set after iteration 1:
mst_key=2: metric=0.600000023842, loss=0.67598927021
mst_key=8: metric=0.600000023842, loss=0.62441021204
mst_key=3: metric=0.600000023842, loss=0.669852972031
mst_key=6: metric=0.366666674614, loss=0.984160840511
mst_key=1: metric=0.600000023842, loss=0.923334658146
mst_key=7: metric=0.366666674614, loss=1.07771503925
mst_key=9: metric=0.600000023842, loss=0.699421286583
mst_key=4: metric=0.866666674614, loss=0.607381045818
mst_key=5: metric=0.366666674614, loss=1.09954810143
CONTEXT: PL/Python function "madlib_keras_automl"
INFO: *** Diagonally evaluating 3 configs under bracket=2 & round=1, 3 configs under bracket=1 & round=0 with 3 iterations ***
CONTEXT: PL/Python function "madlib_keras_automl"
INFO:
Time for training in iteration 1: 4.84015893936 sec
DETAIL:
Training set after iteration 1:
mst_key=8: metric=0.925000011921, loss=0.353324443102
mst_key=4: metric=0.949999988079, loss=0.424594521523
mst_key=11: metric=0.675000011921, loss=0.846702694893
mst_key=3: metric=0.808333337307, loss=0.382121056318
mst_key=12: metric=0.916666686535, loss=0.384196609259
mst_key=10: metric=0.683333337307, loss=0.701473772526
Validation set after iteration 1:
mst_key=8: metric=0.933333337307, loss=0.42084941268
mst_key=4: metric=0.933333337307, loss=0.476406633854
mst_key=11: metric=0.600000023842, loss=0.854079544544
mst_key=3: metric=0.899999976158, loss=0.417265832424
mst_key=12: metric=0.899999976158, loss=0.450416505337
mst_key=10: metric=0.600000023842, loss=0.728042304516
CONTEXT: PL/Python function "madlib_keras_automl"
INFO:
Time for training in iteration 2: 4.80181288719 sec
DETAIL:
Training set after iteration 2:
mst_key=8: metric=0.941666662693, loss=0.286089539528
mst_key=4: metric=0.925000011921, loss=0.373028248549
mst_key=11: metric=0.683333337307, loss=0.609232187271
mst_key=3: metric=0.833333313465, loss=0.291878581047
mst_key=12: metric=0.908333361149, loss=0.300016224384
mst_key=10: metric=0.983333349228, loss=0.382896214724
Validation set after iteration 2:
mst_key=8: metric=0.933333337307, loss=0.338641613722
mst_key=4: metric=1.0, loss=0.436057478189
mst_key=11: metric=0.600000023842, loss=0.658753097057
mst_key=3: metric=0.766666650772, loss=0.339546382427
mst_key=12: metric=0.933333337307, loss=0.341486483812
mst_key=10: metric=1.0, loss=0.442664504051
CONTEXT: PL/Python function "madlib_keras_automl"
INFO:
Time for training in iteration 3: 5.17401909828 sec
DETAIL:
Training set after iteration 3:
mst_key=8: metric=0.966666638851, loss=0.196135208011
mst_key=4: metric=0.958333313465, loss=0.243382230401
mst_key=11: metric=0.941666662693, loss=0.395315974951
mst_key=3: metric=0.966666638851, loss=0.171766787767
mst_key=12: metric=0.866666674614, loss=0.283820331097
mst_key=10: metric=0.833333313465, loss=0.313775897026
Validation set after iteration 3:
mst_key=8: metric=0.966666638851, loss=0.214255988598
mst_key=4: metric=1.0, loss=0.268849998713
mst_key=11: metric=0.899999976158, loss=0.45996800065
mst_key=3: metric=1.0, loss=0.157373458147
mst_key=12: metric=0.800000011921, loss=0.340971261263
mst_key=10: metric=0.766666650772, loss=0.365937292576
CONTEXT: PL/Python function "madlib_keras_automl"
{code}
to
{code}
INFO: *** Diagonally evaluating 9 configs under bracket=2 & round=0 with 1 iterations ***
CONTEXT: PL/Python function "madlib_keras_automl"
INFO:
Time for training in iteration 1: 9.76507210732 sec
DETAIL:
Training set after iteration 1:
mst_key=2: metric=0.683333337307, loss=0.626947939396
mst_key=8: metric=0.683333337307, loss=0.556752383709
mst_key=3: metric=0.683333337307, loss=0.604624867439
mst_key=6: metric=0.324999988079, loss=1.01775479317
mst_key=1: metric=0.691666662693, loss=0.918690085411
mst_key=7: metric=0.324999988079, loss=1.09102141857
mst_key=9: metric=0.683333337307, loss=0.615454554558
mst_key=4: metric=0.774999976158, loss=0.571036159992
mst_key=5: metric=0.324999988079, loss=1.10194396973
Validation set after iteration 1:
mst_key=2: metric=0.600000023842, loss=0.67598927021
mst_key=8: metric=0.600000023842, loss=0.62441021204
mst_key=3: metric=0.600000023842, loss=0.669852972031
mst_key=6: metric=0.366666674614, loss=0.984160840511
mst_key=1: metric=0.600000023842, loss=0.923334658146
mst_key=7: metric=0.366666674614, loss=1.07771503925
mst_key=9: metric=0.600000023842, loss=0.699421286583
mst_key=4: metric=0.866666674614, loss=0.607381045818
mst_key=5: metric=0.366666674614, loss=1.09954810143
Best training metric so far:
mst_key=4: metric=0.774999976158, loss=0.571036159992
Best validation metric so far:
mst_key=4: metric=0.866666674614, loss=0.607381045818
CONTEXT: PL/Python function "madlib_keras_automl"
INFO: *** Diagonally evaluating 3 configs under bracket=2 & round=1, 3 configs under bracket=1 & round=0 with 3 iterations ***
CONTEXT: PL/Python function "madlib_keras_automl"
INFO:
Time for training in iteration 1: 4.84015893936 sec
DETAIL:
Training set after iteration 1:
mst_key=8: metric=0.925000011921, loss=0.353324443102
mst_key=4: metric=0.949999988079, loss=0.424594521523
mst_key=11: metric=0.675000011921, loss=0.846702694893
mst_key=3: metric=0.808333337307, loss=0.382121056318
mst_key=12: metric=0.916666686535, loss=0.384196609259
mst_key=10: metric=0.683333337307, loss=0.701473772526
Validation set after iteration 1:
mst_key=8: metric=0.933333337307, loss=0.42084941268
mst_key=4: metric=0.933333337307, loss=0.476406633854
mst_key=11: metric=0.600000023842, loss=0.854079544544
mst_key=3: metric=0.899999976158, loss=0.417265832424
mst_key=12: metric=0.899999976158, loss=0.450416505337
mst_key=10: metric=0.600000023842, loss=0.728042304516
CONTEXT: PL/Python function "madlib_keras_automl"
INFO:
Time for training in iteration 2: 4.80181288719 sec
DETAIL:
Training set after iteration 2:
mst_key=8: metric=0.941666662693, loss=0.286089539528
mst_key=4: metric=0.925000011921, loss=0.373028248549
mst_key=11: metric=0.683333337307, loss=0.609232187271
mst_key=3: metric=0.833333313465, loss=0.291878581047
mst_key=12: metric=0.908333361149, loss=0.300016224384
mst_key=10: metric=0.983333349228, loss=0.382896214724
Validation set after iteration 2:
mst_key=8: metric=0.933333337307, loss=0.338641613722
mst_key=4: metric=1.0, loss=0.436057478189
mst_key=11: metric=0.600000023842, loss=0.658753097057
mst_key=3: metric=0.766666650772, loss=0.339546382427
mst_key=12: metric=0.933333337307, loss=0.341486483812
mst_key=10: metric=1.0, loss=0.442664504051
CONTEXT: PL/Python function "madlib_keras_automl"
INFO:
Time for training in iteration 3: 5.17401909828 sec
DETAIL:
Training set after iteration 3:
mst_key=8: metric=0.966666638851, loss=0.196135208011
mst_key=4: metric=0.958333313465, loss=0.243382230401
mst_key=11: metric=0.941666662693, loss=0.395315974951
mst_key=3: metric=0.966666638851, loss=0.171766787767
mst_key=12: metric=0.866666674614, loss=0.283820331097
mst_key=10: metric=0.833333313465, loss=0.313775897026
Validation set after iteration 3:
mst_key=8: metric=0.966666638851, loss=0.214255988598
mst_key=4: metric=1.0, loss=0.268849998713
mst_key=11: metric=0.899999976158, loss=0.45996800065
mst_key=3: metric=1.0, loss=0.157373458147
mst_key=12: metric=0.800000011921, loss=0.340971261263
mst_key=10: metric=0.766666650772, loss=0.365937292576
Best training metric so far:
mst_key=8: metric=0.966666638851, loss=0.196135208011
Best validation metric so far:
mst_key=8: metric=0.966666638851, loss=0.214255988598
CONTEXT: PL/Python function "madlib_keras_automl"
{code}
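A minimal sketch of how the global best could be tracked across iterations before being
written to the console. The helper names below are hypothetical and this is not the
actual MADlib implementation (which runs inside the PL/Python function); it only
illustrates keeping a running global best rather than reporting the last iteration's best.
{code}
# Minimal sketch, assuming hypothetical helper names; not the actual MADlib code.
# Keep a running "global best" across all brackets/rounds/iterations, since the
# best config may not come from the last iteration evaluated.

def update_global_best(best_so_far, iteration_results, higher_is_better=True):
    """Return the (mst_key, metric, loss) tuple that is best so far.

    iteration_results: dict mapping mst_key -> (metric, loss) for one iteration.
    best_so_far: previous best tuple, or None before the first iteration.
    """
    for mst_key, (metric, loss) in iteration_results.items():
        if best_so_far is None:
            best_so_far = (mst_key, metric, loss)
            continue
        _, best_metric, _ = best_so_far
        better = metric > best_metric if higher_is_better else metric < best_metric
        if better:
            best_so_far = (mst_key, metric, loss)
    return best_so_far


def format_best(label, best):
    # Render one "best so far" line in the same style as the per-config lines.
    mst_key, metric, loss = best
    return "{0}:\nmst_key={1}: metric={2}, loss={3}".format(label, mst_key, metric, loss)


# Example usage with two configs from iteration 1 above; inside the database the
# message would be emitted via plpy.info() instead of print().
best_train = None
best_valid = None
train_iter_1 = {4: (0.774999976158, 0.571036159992),
                5: (0.324999988079, 1.10194396973)}
valid_iter_1 = {4: (0.866666674614, 0.607381045818),
                5: (0.366666674614, 1.09954810143)}
best_train = update_global_best(best_train, train_iter_1)
best_valid = update_global_best(best_valid, valid_iter_1)
print(format_best("Best training metric so far", best_train))
print(format_best("Best validation metric so far", best_valid))
{code}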
was:
For Hyperband, write the "best so far" to the console so that user knows how
things are progressing.
Note need to keep track of global best, it might not be the one from the last
iteration.
Change console output from:
{code}
INFO: *** Diagonally evaluating 9 configs under bracket=2 & round=0 with 1
iterations ***
CONTEXT: PL/Python function "madlib_keras_automl"
INFO:
Time for training in iteration 1: 9.76507210732 sec
DETAIL:
Training set after iteration 1:
mst_key=2: metric=0.683333337307, loss=0.626947939396
mst_key=8: metric=0.683333337307, loss=0.556752383709
mst_key=3: metric=0.683333337307, loss=0.604624867439
mst_key=6: metric=0.324999988079, loss=1.01775479317
mst_key=1: metric=0.691666662693, loss=0.918690085411
mst_key=7: metric=0.324999988079, loss=1.09102141857
mst_key=9: metric=0.683333337307, loss=0.615454554558
mst_key=4: metric=0.774999976158, loss=0.571036159992
mst_key=5: metric=0.324999988079, loss=1.10194396973
Validation set after iteration 1:
mst_key=2: metric=0.600000023842, loss=0.67598927021
mst_key=8: metric=0.600000023842, loss=0.62441021204
mst_key=3: metric=0.600000023842, loss=0.669852972031
mst_key=6: metric=0.366666674614, loss=0.984160840511
mst_key=1: metric=0.600000023842, loss=0.923334658146
mst_key=7: metric=0.366666674614, loss=1.07771503925
mst_key=9: metric=0.600000023842, loss=0.699421286583
mst_key=4: metric=0.866666674614, loss=0.607381045818
mst_key=5: metric=0.366666674614, loss=1.09954810143
CONTEXT: PL/Python function "madlib_keras_automl"
INFO: *** Diagonally evaluating 3 configs under bracket=2 & round=1, 3 configs
under bracket=1 & round=0 with 3 iterations ***
CONTEXT: PL/Python function "madlib_keras_automl"
INFO:
Time for training in iteration 1: 4.84015893936 sec
DETAIL:
Training set after iteration 1:
mst_key=8: metric=0.925000011921, loss=0.353324443102
mst_key=4: metric=0.949999988079, loss=0.424594521523
mst_key=11: metric=0.675000011921, loss=0.846702694893
mst_key=3: metric=0.808333337307, loss=0.382121056318
mst_key=12: metric=0.916666686535, loss=0.384196609259
mst_key=10: metric=0.683333337307, loss=0.701473772526
Validation set after iteration 1:
mst_key=8: metric=0.933333337307, loss=0.42084941268
mst_key=4: metric=0.933333337307, loss=0.476406633854
mst_key=11: metric=0.600000023842, loss=0.854079544544
mst_key=3: metric=0.899999976158, loss=0.417265832424
mst_key=12: metric=0.899999976158, loss=0.450416505337
mst_key=10: metric=0.600000023842, loss=0.728042304516
CONTEXT: PL/Python function "madlib_keras_automl"
INFO:
Time for training in iteration 2: 4.80181288719 sec
DETAIL:
Training set after iteration 2:
mst_key=8: metric=0.941666662693, loss=0.286089539528
mst_key=4: metric=0.925000011921, loss=0.373028248549
mst_key=11: metric=0.683333337307, loss=0.609232187271
mst_key=3: metric=0.833333313465, loss=0.291878581047
mst_key=12: metric=0.908333361149, loss=0.300016224384
mst_key=10: metric=0.983333349228, loss=0.382896214724
Validation set after iteration 2:
mst_key=8: metric=0.933333337307, loss=0.338641613722
mst_key=4: metric=1.0, loss=0.436057478189
mst_key=11: metric=0.600000023842, loss=0.658753097057
mst_key=3: metric=0.766666650772, loss=0.339546382427
mst_key=12: metric=0.933333337307, loss=0.341486483812
mst_key=10: metric=1.0, loss=0.442664504051
CONTEXT: PL/Python function "madlib_keras_automl"
INFO:
Time for training in iteration 3: 5.17401909828 sec
DETAIL:
Training set after iteration 3:
mst_key=8: metric=0.966666638851, loss=0.196135208011
mst_key=4: metric=0.958333313465, loss=0.243382230401
mst_key=11: metric=0.941666662693, loss=0.395315974951
mst_key=3: metric=0.966666638851, loss=0.171766787767
mst_key=12: metric=0.866666674614, loss=0.283820331097
mst_key=10: metric=0.833333313465, loss=0.313775897026
Validation set after iteration 3:
mst_key=8: metric=0.966666638851, loss=0.214255988598
mst_key=4: metric=1.0, loss=0.268849998713
mst_key=11: metric=0.899999976158, loss=0.45996800065
mst_key=3: metric=1.0, loss=0.157373458147
mst_key=12: metric=0.800000011921, loss=0.340971261263
mst_key=10: metric=0.766666650772, loss=0.365937292576
CONTEXT: PL/Python function "madlib_keras_automl"
{code}
to
{code}
INFO: *** Diagonally evaluating 9 configs under bracket=2 & round=0 with 1
iterations ***
CONTEXT: PL/Python function "madlib_keras_automl"
INFO:
Time for training in iteration 1: 9.76507210732 sec
DETAIL:
Training set after iteration 1:
mst_key=2: metric=0.683333337307, loss=0.626947939396
mst_key=8: metric=0.683333337307, loss=0.556752383709
mst_key=3: metric=0.683333337307, loss=0.604624867439
mst_key=6: metric=0.324999988079, loss=1.01775479317
mst_key=1: metric=0.691666662693, loss=0.918690085411
mst_key=7: metric=0.324999988079, loss=1.09102141857
mst_key=9: metric=0.683333337307, loss=0.615454554558
mst_key=4: metric=0.774999976158, loss=0.571036159992
mst_key=5: metric=0.324999988079, loss=1.1019439697
Validation set after iteration 1:
mst_key=2: metric=0.600000023842, loss=0.67598927021
mst_key=8: metric=0.600000023842, loss=0.62441021204
mst_key=3: metric=0.600000023842, loss=0.669852972031
mst_key=6: metric=0.366666674614, loss=0.984160840511
mst_key=1: metric=0.600000023842, loss=0.923334658146
mst_key=7: metric=0.366666674614, loss=1.07771503925
mst_key=9: metric=0.600000023842, loss=0.699421286583
mst_key=4: metric=0.866666674614, loss=0.607381045818
mst_key=5: metric=0.366666674614, loss=1.09954810143
Best training metric so far:
mst_key=4: metric=0.774999976158, loss=0.571036159992
Best validation metric so far:
mst_key=4: metric=0.866666674614, loss=0.607381045818
CONTEXT: PL/Python function "madlib_keras_automl"
INFO: *** Diagonally evaluating 3 configs under bracket=2 & round=1, 3 configs
under bracket=1 & round=0 with 3 iterations ***
CONTEXT: PL/Python function "madlib_keras_automl"
INFO:
Time for training in iteration 1: 4.84015893936 sec
DETAIL:
Training set after iteration 1:
mst_key=8: metric=0.925000011921, loss=0.353324443102
mst_key=4: metric=0.949999988079, loss=0.424594521523
mst_key=11: metric=0.675000011921, loss=0.846702694893
mst_key=3: metric=0.808333337307, loss=0.382121056318
mst_key=12: metric=0.916666686535, loss=0.384196609259
mst_key=10: metric=0.683333337307, loss=0.701473772526
Best training metric in iteration 1:
mst_key=4: metric=0.949999988079, loss=0.424594521523
Validation set after iteration 1:
mst_key=8: metric=0.933333337307, loss=0.42084941268
mst_key=4: metric=0.933333337307, loss=0.476406633854
mst_key=11: metric=0.600000023842, loss=0.854079544544
mst_key=3: metric=0.899999976158, loss=0.417265832424
mst_key=12: metric=0.899999976158, loss=0.450416505337
mst_key=10: metric=0.600000023842, loss=0.728042304516
CONTEXT: PL/Python function "madlib_keras_automl"
INFO:
Time for training in iteration 2: 4.80181288719 sec
DETAIL:
Training set after iteration 2:
mst_key=8: metric=0.941666662693, loss=0.286089539528
mst_key=4: metric=0.925000011921, loss=0.373028248549
mst_key=11: metric=0.683333337307, loss=0.609232187271
mst_key=3: metric=0.833333313465, loss=0.291878581047
mst_key=12: metric=0.908333361149, loss=0.300016224384
mst_key=10: metric=0.983333349228, loss=0.382896214724
Validation set after iteration 2:
mst_key=8: metric=0.933333337307, loss=0.338641613722
mst_key=4: metric=1.0, loss=0.436057478189
mst_key=11: metric=0.600000023842, loss=0.658753097057
mst_key=3: metric=0.766666650772, loss=0.339546382427
mst_key=12: metric=0.933333337307, loss=0.341486483812
mst_key=10: metric=1.0, loss=0.442664504051
CONTEXT: PL/Python function "madlib_keras_automl"
INFO:
Time for training in iteration 3: 5.17401909828 sec
DETAIL:
Training set after iteration 3:
mst_key=8: metric=0.966666638851, loss=0.196135208011
mst_key=4: metric=0.958333313465, loss=0.243382230401
mst_key=11: metric=0.941666662693, loss=0.395315974951
mst_key=3: metric=0.966666638851, loss=0.171766787767
mst_key=12: metric=0.866666674614, loss=0.283820331097
mst_key=10: metric=0.833333313465, loss=0.313775897026
Validation set after iteration 3:
mst_key=8: metric=0.966666638851, loss=0.214255988598
mst_key=4: metric=1.0, loss=0.268849998713
mst_key=11: metric=0.899999976158, loss=0.45996800065
mst_key=3: metric=1.0, loss=0.157373458147
mst_key=12: metric=0.800000011921, loss=0.340971261263
mst_key=10: metric=0.766666650772, loss=0.365937292576
Best training metric so far:
mst_key=8: metric=0.966666638851, loss=0.196135208011
Best validation metric so far:
mst_key=8: metric=0.966666638851, loss=0.214255988598
CONTEXT: PL/Python function "madlib_keras_automl"
{code}
> DL - Write best so far to console for autoML methods
> -----------------------------------------------------
>
> Key: MADLIB-1454
> URL: https://issues.apache.org/jira/browse/MADLIB-1454
> Project: Apache MADlib
> Issue Type: Improvement
> Components: Deep Learning
> Reporter: Frank McQuillan
> Assignee: Advitya Gemawat
> Priority: Minor
> Fix For: v1.18.0
>
>