saicoco opened a new issue #16298: [Gluon with estimator] loss is unchanged?
URL: https://github.com/apache/incubator-mxnet/issues/16298
 
 
   ## Description
   Training a UNet with a resnet18_v2 backbone on the Kaggle steel defect dataset through the Gluon estimator API: the training cross-entropy loss stays at 0.6932 on every batch and never decreases.
   
   ## Environment info (Required)
   Kaggle kernel.
   
   Package used (Python/R/Scala/Julia):
   Python
   
   ## Error Message:
   There is no error or stack trace; the training log below shows the cross-entropy loss stuck at 0.6932 (each batch line appears three times in the original output):
   ```
   [Epoch 0][Batch 0][Samples 16] time/batch: 0.475s train dice_coefficent: 0.0003, train ce_loss: 0.6932
   [Epoch 0][Batch 0][Samples 16] time/batch: 0.475s train dice_coefficent: 0.0003, train ce_loss: 0.6932
   [Epoch 0][Batch 0][Samples 16] time/batch: 0.475s train dice_coefficent: 0.0003, train ce_loss: 0.6932
   [Epoch 0][Batch 1][Samples 32] time/batch: 0.405s train dice_coefficent: 0.0013, train ce_loss: 0.6932
   [Epoch 0][Batch 1][Samples 32] time/batch: 0.405s train dice_coefficent: 0.0013, train ce_loss: 0.6932
   [Epoch 0][Batch 1][Samples 32] time/batch: 0.405s train dice_coefficent: 0.0013, train ce_loss: 0.6932
   [Epoch 0][Batch 2][Samples 48] time/batch: 0.349s train dice_coefficent: 0.0021, train ce_loss: 0.6932
   [Epoch 0][Batch 2][Samples 48] time/batch: 0.349s train dice_coefficent: 0.0021, train ce_loss: 0.6932
   [Epoch 0][Batch 2][Samples 48] time/batch: 0.349s train dice_coefficent: 0.0021, train ce_loss: 0.6932
   [Epoch 0][Batch 3][Samples 64] time/batch: 0.324s train dice_coefficent: 0.0016, train ce_loss: 0.6932
   [Epoch 0][Batch 3][Samples 64] time/batch: 0.324s train dice_coefficent: 0.0016, train ce_loss: 0.6932
   [Epoch 0][Batch 3][Samples 64] time/batch: 0.324s train dice_coefficent: 0.0016, train ce_loss: 0.6932
   [Epoch 0][Batch 4][Samples 80] time/batch: 0.302s train dice_coefficent: 0.0016, train ce_loss: 0.6932
   [Epoch 0][Batch 4][Samples 80] time/batch: 0.302s train dice_coefficent: 0.0016, train ce_loss: 0.6932
   [Epoch 0][Batch 4][Samples 80] time/batch: 0.302s train dice_coefficent: 0.0016, train ce_loss: 0.6932
   [Epoch 0][Batch 5][Samples 96] time/batch: 0.277s train dice_coefficent: 0.0016, train ce_loss: 0.6932
   [Epoch 0][Batch 5][Samples 96] time/batch: 0.277s train dice_coefficent: 0.0016, train ce_loss: 0.6932
   [Epoch 0][Batch 5][Samples 96] time/batch: 0.277s train dice_coefficent: 0.0016, train ce_loss: 0.6932
   [Epoch 0][Batch 6][Samples 112] time/batch: 0.292s train dice_coefficent: 0.0014, train ce_loss: 0.6932
   [Epoch 0][Batch 6][Samples 112] time/batch: 0.292s train dice_coefficent: 0.0014, train ce_loss: 0.6932
   [Epoch 0][Batch 6][Samples 112] time/batch: 0.292s train dice_coefficent: 0.0014, train ce_loss: 0.6932
   [Epoch 0][Batch 7][Samples 128] time/batch: 0.281s train dice_coefficent: 0.0014, train ce_loss: 0.6932
   [Epoch 0][Batch 7][Samples 128] time/batch: 0.281s train dice_coefficent: 0.0014, train ce_loss: 0.6932
   [Epoch 0][Batch 7][Samples 128] time/batch: 0.281s train dice_coefficent: 0.0014, train ce_loss: 0.6932
   [Epoch 0][Batch 8][Samples 144] time/batch: 0.274s train dice_coefficent: 0.0015, train ce_loss: 0.6932
   [Epoch 0][Batch 8][Samples 144] time/batch: 0.274s train dice_coefficent: 0.0015, train ce_loss: 0.6932
   [Epoch 0][Batch 8][Samples 144] time/batch: 0.274s train dice_coefficent: 0.0015, train ce_loss: 0.6932
   [Epoch 0][Batch 9][Samples 160] time/batch: 0.299s train dice_coefficent: 0.0017, train ce_loss: 0.6932
   [Epoch 0][Batch 9][Samples 160] time/batch: 0.299s train dice_coefficent: 0.0017, train ce_loss: 0.6932
   [Epoch 0][Batch 9][Samples 160] time/batch: 0.299s train dice_coefficent: 0.0017, train ce_loss: 0.6932
   [Epoch 0][Batch 10][Samples 176] time/batch: 0.275s train dice_coefficent: 0.0016, train ce_loss: 0.6932
   [Epoch 0][Batch 10][Samples 176] time/batch: 0.275s train dice_coefficent: 0.0016, train ce_loss: 0.6932
   [Epoch 0][Batch 10][Samples 176] time/batch: 0.275s train dice_coefficent: 0.0016, train ce_loss: 0.6932
   [Epoch 0][Batch 11][Samples 192] time/batch: 0.286s train dice_coefficent: 0.0015, train ce_loss: 0.6932
   [Epoch 0][Batch 11][Samples 192] time/batch: 0.286s train dice_coefficent: 0.0015, train ce_loss: 0.6932
   [Epoch 0][Batch 11][Samples 192] time/batch: 0.286s train dice_coefficent: 0.0015, train ce_loss: 0.6932
   [Epoch 0][Batch 12][Samples 208] time/batch: 0.300s train dice_coefficent: 0.0014, train ce_loss: 0.6932
   ```
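
   For what it's worth, 0.6932 is ln(2), i.e. exactly the sigmoid binary cross-entropy of a prediction stuck at 0.5, which suggests the network output is not moving at all. A quick check of that arithmetic (my own sketch, assuming `SigmoidBinaryCrossEntropyLoss` is the loss in use; the notebook may differ):
   
   ```python
   # Sanity check: a logit of 0 gives sigmoid(0) == 0.5, and its binary
   # cross-entropy against any 0/1 label is ln(2) ~= 0.6931 -- the value
   # the training log is stuck at.
   import math
   from mxnet import nd
   from mxnet.gluon.loss import SigmoidBinaryCrossEntropyLoss
   
   print(math.log(2))                         # 0.6931...
   
   loss_fn = SigmoidBinaryCrossEntropyLoss(from_sigmoid=False)
   logits = nd.zeros((1, 1))                  # raw score 0 -> probability 0.5
   labels = nd.ones((1, 1))
   print(loss_fn(logits, labels).asscalar())  # ~0.6931
   ```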
   
   ## Minimum reproducible example
   The full example is the Kaggle kernel linked under Steps to reproduce: https://www.kaggle.com/jiageng/mxnet-gluon-baseline
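
   For readers without Kaggle access, this is a rough sketch of the training setup, assuming the MXNet 1.5-style `gluon.contrib.estimator.Estimator` API; `build_unet_resnet18()` and `train_loader` are hypothetical placeholders for the notebook's model builder and DataLoader, not its actual code:
   
   ```python
   import mxnet as mx
   from mxnet import gluon
   from mxnet.gluon.contrib.estimator import Estimator
   from mxnet.gluon.loss import SigmoidBinaryCrossEntropyLoss
   
   ctx = mx.gpu() if mx.context.num_gpus() > 0 else mx.cpu()
   
   # Hypothetical stand-in for the notebook's UNet-on-resnet18_v2 builder.
   net = build_unet_resnet18()
   net.initialize(mx.init.Normal(sigma=0.02), ctx=ctx)
   
   loss_fn = SigmoidBinaryCrossEntropyLoss()
   trainer = gluon.Trainer(net.collect_params(), 'adam', {'learning_rate': 1e-3})
   
   est = Estimator(net=net,
                   loss=loss_fn,
                   metrics=mx.metric.Loss('ce_loss'),
                   trainer=trainer,
                   context=ctx)
   
   # `train_loader` is a gluon.data.DataLoader over the steel defect dataset.
   est.fit(train_data=train_loader, epochs=5)
   ```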
   
   ## Steps to reproduce
   1. Run the Kaggle kernel at https://www.kaggle.com/jiageng/mxnet-gluon-baseline
   2. Watch the training log: `train ce_loss` stays at 0.6932 on every batch.
   
   ## What have you tried to solve it?
   
   1. Changed the initializer from `mx.init.Normal(sigma=0.02)` to `mx.init.Xavier()` (see the sketch after this list).
   2. Changed the learning rate.
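
   Roughly what those two changes look like in Gluon (my sketch, reusing `net`, `trainer`, and `ctx` from the estimator sketch above, not the notebook's exact code):
   
   ```python
   import mxnet as mx
   
   # 1. Swap the initializer: re-initialize all parameters with Xavier
   #    instead of Normal(sigma=0.02).
   net.initialize(mx.init.Xavier(), ctx=ctx, force_reinit=True)
   
   # 2. Change the learning rate on the existing trainer.
   trainer.set_learning_rate(1e-4)
   ```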
   
