0 features. I can train a random forest
> > classifier in sklearn which works well. I would however like to see
> > the most important features.
> >
> > I tried simply printing out forest.feature_importances_ but this takes
> > about 1 second per feature making about 40,000 seconds overall. This
> > is much much longer than th
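For reference, a minimal sketch of getting the top features from a fitted forest (toy data, illustrative parameter values). In scikit-learn, `feature_importances_` is a property that re-aggregates the importances across all trees on every access, so the key is to read it once into a local array rather than re-accessing it per feature:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy stand-in for the real data set
X, y = make_classification(n_samples=200, n_features=20, random_state=0)
forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Read the property ONCE; it returns a plain NumPy array,
# and indexing that array afterwards is cheap.
importances = forest.feature_importances_

# Indices of the 10 most important features, largest first
top10 = np.argsort(importances)[::-1][:10]
for i in top10:
    print(f"feature {i}: importance {importances[i]:.4f}")
```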
The positive class percentage is 30%.
On the forums and StackOverflow, the usual suggestion is to pass
class_weight='balanced' to the tree, which re-weights the classes
inversely proportional to their frequencies (effectively up-weighting
the minority class). However, I don't see how that helps minimize the
false negatives (FN).
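A minimal sketch (toy data, illustrative parameters, not your actual setup) of checking the effect directly: fit with and without class_weight='balanced', count FN from the confusion matrix, and additionally lower the decision threshold below 0.5, which by construction can only reduce FN:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Toy set with roughly 30% positives, mirroring the imbalance above
X, y = make_classification(n_samples=2000, weights=[0.7, 0.3], random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

for cw in (None, "balanced"):
    clf = RandomForestClassifier(class_weight=cw, random_state=0).fit(Xtr, ytr)
    tn, fp, fn, tp = confusion_matrix(yte, clf.predict(Xte)).ravel()
    print(f"class_weight={cw!r}: FN={fn}, FP={fp}")

# Lowering the decision threshold trades FP for FN: every example
# predicted positive at 0.5 is still positive at 0.3, so FN cannot rise.
proba = clf.predict_proba(Xte)[:, 1]
preds = (proba >= 0.3).astype(int)
```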
Any suggestions?
Best,
Nadim
--