Generally speaking, we all know the main advantage of incremental
learning is saving memory: you never have to load the whole dataset at
once.
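
For example, here is a minimal sketch (made-up shapes and chunk counts)
of out-of-core learning with scikit-learn's partial_fit, where only one
chunk has to be in memory at a time:

    # The chunk loop stands in for reading successive pieces of a large
    # dataset from disk; only the current chunk lives in memory.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.RandomState(0)
    clf = SGDClassifier(random_state=0)
    classes = np.array([0, 1])  # all labels must be declared up front

    for _ in range(100):  # 100 chunks of 1,000 rows each
        X_chunk = rng.randn(1000, 20)
        y_chunk = (X_chunk[:, 0] + X_chunk[:, 1] > 0).astype(int)
        clf.partial_fit(X_chunk, y_chunk, classes=classes)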

The question on Stack Exchange
<https://cs.stackexchange.com/questions/51260/what-are-the-advantages-of-online-learning-when-training-neural-networks>
makes the same point.

But what are the disadvantages?

What I know from my experiments comes down to the two points below:

   1. Train with subsets of the data, *but they shouldn't be too small*.
      I prepared very small datasets and the prediction results were much
      worse.

   2. When training for a very long time, some older behavior gets
      forgotten over the many training epochs (see the sketch after this
      list).
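
As a rough illustration of point 2 (a hedged sketch with synthetic data
and made-up parameters, not a benchmark), continuing a booster with
xgboost's xgb_model argument on batches from a shifted distribution can
make accuracy on the original distribution drift:

    import numpy as np
    import xgboost as xgb

    rng = np.random.RandomState(0)

    def make_batch(shift, n=5000):
        # Synthetic binary task whose decision boundary moves with `shift`.
        X = rng.randn(n, 10)
        y = (X[:, 0] > shift).astype(int)
        return X, y

    params = {"objective": "binary:logistic", "max_depth": 3, "eta": 0.1}

    # Train on distribution A and keep a held-out slice of it.
    X_a, y_a = make_batch(shift=0.0)
    X_hold, y_hold = make_batch(shift=0.0, n=2000)
    booster = xgb.train(params, xgb.DMatrix(X_a, label=y_a),
                        num_boost_round=50)

    def acc(b):
        pred = (b.predict(xgb.DMatrix(X_hold)) > 0.5).astype(int)
        return (pred == y_hold).mean()

    print("accuracy on A after training on A:", acc(booster))

    # Keep updating on batches from a shifted distribution B; the newly
    # added trees fit B, so predictions on A tend to drift.
    for _ in range(10):
        X_b, y_b = make_batch(shift=1.5)
        booster = xgb.train(params, xgb.DMatrix(X_b, label=y_b),
                            num_boost_round=20, xgb_model=booster)

    print("accuracy on A after many updates on B:", acc(booster))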

That's all from my experience training incrementally with *xgboost*.

Are there any other disadvantages?