Thank you, guys! Very helpful :)
Soledad Galli
https://www.trainindata.com/
‐‐‐ Original Message ‐‐‐
On Friday, December 4, 2020 12:06 PM, mrschots wrote:
I have been using both in time-series classification. I put an exponential
decay in sample_weights AND pass the class weights as a dictionary.
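A minimal sketch of that setup (toy data and a hypothetical half-life, just to
show the mechanics, not my actual pipeline):

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
n_samples = 1000
X = rng.randn(n_samples, 5)
y = rng.randint(0, 2, n_samples)

# Most recent sample last; older samples get exponentially smaller weights.
half_life = 200  # hypothetical decay parameter
age = np.arange(n_samples)[::-1]  # age 0 = most recent observation
sample_weight = 0.5 ** (age / half_life)

# Class weights go in as a dict, the per-sample decay goes into fit().
clf = LogisticRegression(class_weight={0: 1.0, 1: 3.0})
clf.fit(X, y, sample_weight=sample_weight)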
BR/Schots
On Fri, Dec 4, 2020 at 12:01 PM, Nicolas Hug
wrote:
Basically passing class weights should be equivalent to passing
per-class-constant sample weights.
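A quick sketch illustrating that equivalence (toy data; any estimator that
supports both would do):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, weights=[0.8, 0.2], random_state=0)

# Option 1: class weights as a dict.
clf_cw = LogisticRegression(class_weight={0: 1.0, 1: 5.0}).fit(X, y)

# Option 2: the same weights as per-class-constant sample weights.
sw = np.where(y == 1, 5.0, 1.0)
clf_sw = LogisticRegression().fit(X, y, sample_weight=sw)

# Both fits solve the same weighted problem, so the coefficients match
# up to solver tolerance.
print(np.allclose(clf_cw.coef_, clf_sw.coef_))  # True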
> Why do some estimators allow passing weights both as a dict in the
> init and as sample weights in fit? What's the logic?

SW is a per-sample property (aligned with X and y), so we avoid passing
it in __init__, which should only hold data-independent parameters.
Actually, I found the answer. Both seem to modify the loss function that the
various algorithms optimise; below I include some links.
If we pass class_weight and sample_weight, then the final cost / weight is a
combination of both.
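A small sketch of that combination (toy arrays; sklearn's
compute_sample_weight expands the dict to per-sample values):

import numpy as np
from sklearn.utils.class_weight import compute_sample_weight

y = np.array([0, 0, 1, 1, 1])
sample_weight = np.array([1.0, 2.0, 1.0, 1.0, 0.5])
class_weight = {0: 1.0, 1: 4.0}

# Expand the per-class dict to one weight per sample...
cw_per_sample = compute_sample_weight(class_weight, y)  # [1. 1. 4. 4. 4.]

# ...and the effective weight in the loss is the product of the two.
effective = sample_weight * cw_per_sample
print(effective)  # [1. 2. 4. 4. 2.]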
I have a follow-up question: in which scenario would we use both?
Hello team,
What is the difference in the implementation of class_weight and sample_weight
in those algorithms that support both, like random forest or logistic
regression?
Do both modify the loss function, and in a similar way?
Thank you!
Sole