That gives the same problem, sadly.
I have found the solution, though. The trainer/optimizer state is not exported along with the other data unless you explicitly call `trainer.save_states()`, so the rise in loss after resuming happens because the optimizer has lost its accumulated state (momentum buffers, update counts, and so on) and effectively starts from scratch. It's a bit odd that the official docs/examples don't mention having to save your trainer states explicitly when discussing saving and loading models to resume training. --- [Visit Topic](https://discuss.mxnet.apache.org/t/difference-between-exported-and-imported-model-results/6837/3)
