Actually, I found the answer. Both modify the loss function that the various 
algorithms optimise; I include some links below.

If we pass both class_weight and sample_weight, then the final cost / weight 
applied to each sample is a combination of the two.
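To make the combination concrete, here is a minimal sketch (an assumption on my part, not taken from the scikit-learn source: it assumes the effective per-sample weight is the product of the sample weight and the weight of that sample's class):

```python
import numpy as np

# Toy labels and weights (illustrative values only)
y = np.array([0, 0, 1, 1, 1])
sample_weight = np.array([1.0, 2.0, 1.0, 0.5, 1.0])
class_weight = {0: 2.0, 1: 3.0}

# Look up each sample's class weight, then multiply by its sample weight
per_class = np.array([class_weight[label] for label in y])
effective_weight = sample_weight * per_class

print(effective_weight)  # [2.  4.  3.  1.5 3. ]
```

Under that assumption, passing class_weight={0: 2.0, 1: 3.0} together with the sample weights above would be equivalent to passing only sample_weight=effective_weight.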

I have a follow-up question: in which scenario would we use both? Why do some 
estimators allow passing weights both as a dict at initialisation 
(class_weight) and as sample weights in fit? What is the logic? I found it a 
bit confusing at first.

Thank you!

https://stackoverflow.com/questions/30805192/scikit-learn-random-forest-class-weight-and-sample-weight-parameters

https://stackoverflow.com/questions/30972029/how-does-the-class-weight-parameter-in-scikit-learn-work/30982811#30982811

Soledad Galli
https://www.trainindata.com/

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Thursday, December 3, 2020 11:55 AM, Sole Galli via scikit-learn 
<scikit-learn@python.org> wrote:

> Hello team,
>
> What is the difference in the implementation of class_weight and 
> sample_weight in those algorithms that support both, like random forest or 
> logistic regression?
>
> Do both modify the loss function? In a similar way?
>
> Thank you!
>
> Sole
_______________________________________________
scikit-learn mailing list
scikit-learn@python.org
https://mail.python.org/mailman/listinfo/scikit-learn