On 11/22/2015 10:49 PM, Raf wrote:
Hi guys!

Just a newbie question...

I guess the bigger the dataset, the more examples are analyzed and the
more accurate the model generated by the swarming process will be.
Unfortunately, though, I noticed a clear increase in swarming duration
directly correlated with the dataset's size. What's a good rule of thumb?
How big should the dataset be to generate a good-enough model
without wasting time and resources?



Thanks a lot :)




Hello Raf,

I was told once that 3,000 lines should be enough.

Wakan
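If you want to try that rule of thumb without regenerating your data, one option is to trim the input file to its first 3,000 data rows before swarming. Here is a minimal sketch using only the Python standard library; the file paths are hypothetical, and `header_rows=3` assumes the NuPIC input convention of three header rows (field names, field types, special flags) — adjust it for a plain CSV.

```python
import csv
import itertools

def truncate_dataset(src_path, dst_path, max_rows=3000, header_rows=3):
    """Copy the header rows plus at most `max_rows` data rows from src to dst.

    header_rows=3 follows the NuPIC input file convention (field names,
    field types, special flags); use header_rows=1 for an ordinary CSV.
    """
    with open(src_path, newline="") as src, \
         open(dst_path, "w", newline="") as dst:
        reader = csv.reader(src)
        writer = csv.writer(dst)
        # Copy the header rows unchanged, then at most max_rows data rows.
        writer.writerows(itertools.islice(reader, header_rows))
        writer.writerows(itertools.islice(reader, max_rows))
```

You could then point the swarm description at the truncated file and compare model quality against a run on the full dataset to see whether the extra rows actually help.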
