Hello everybody!
I would like to offer a new feature for consideration.
Here is my presentation:
https://github.com/Mathemilda/ElbowMethodForK-means/blob/master/Elbow_Method_for_K-Means_Clustering.ipynb
Thanks for your time! If the feature is accepted, could you please
tell me what the conventions are?
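For concreteness, the proposal can be sketched with scikit-learn's existing public API. The selection rule below (largest second difference of inertia) is one simple stand-in for "finding the elbow", not necessarily the notebook's exact method:

```python
# Hypothetical sketch of an elbow-method helper, built only on the
# public scikit-learn API (KMeans and its inertia_ attribute).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

ks = range(1, 9)
inertias = [
    KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
    for k in ks
]

# One simple "elbow" rule: pick the k where the inertia curve bends
# the most, i.e. where the second difference is largest.
second_diff = np.diff(inertias, n=2)
elbow = ks[int(np.argmax(second_diff)) + 1]
print("elbow at k =", elbow)
```

Different elbow rules (e.g. distance-to-chord) can give different answers on the same curve, which is part of why an API for this needs discussion.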
Hi Josh.
Yes, as I mentioned briefly in my second email, you could start a
scikit-learn-contrib project that implements these.
Or, if possible, show how to use Aequitas with sklearn.
This would be interesting since it probably requires some changes to the
API, as our scorers have no side-information.
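To illustrate the signature problem: scorers built with make_scorer only ever see (estimator, X, y), so a group-aware metric like the hypothetical one below has no channel through which to receive the sensitive attribute:

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Hypothetical fairness metric: absolute difference in positive
    prediction rates between the two groups in `sensitive`.

    Note the extra `sensitive` argument -- exactly the side-information
    that the current scorer signature cannot supply.
    """
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rates = [y_pred[sensitive == g].mean() for g in np.unique(sensitive)]
    return abs(rates[0] - rates[1])

y_pred = np.array([1, 1, 0, 0, 1, 0])
sensitive = np.array([0, 0, 0, 1, 1, 1])
# Group 0 positive rate is 2/3, group 1 is 1/3, so the gap is 1/3.
print(demographic_parity_difference(y_pred, sensitive))
```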
Hi Andy,
Yes, good point, and thank you for your thoughts. The Aequitas project stood
out to me more for its flowchart than its auditing software because, as you
mention, you always fail the report if you include all the measures!
Just as with choosing a machine learning algorithm, there is no single
fairness metric that is right for every problem.
Would be great for sklearn-contrib, though!
On 10/29/18 1:36 AM, Feldman, Joshua wrote:
Hi,
I was wondering if there's any interest in adding fairness metrics to
sklearn. Specifically, I was thinking of implementing the metrics
described here:
https://dsapp.uchicago.edu/projects/aequitas/
Hi Josh.
I think this would be cool to add at some point, but I'm not sure that
time is now.
I'm a bit surprised by their "fairness report": it includes four
different fairness metrics that conflict with one another.
If all of them are included, then you always fail the fairness report,
right?
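A toy illustration of that conflict (the rates are the usual confusion-matrix quantities; the data is made up): when base rates differ between groups, equalizing one metric, here precision, can still leave others such as FPR and FNR unequal.

```python
import numpy as np

def group_rates(y_true, y_pred):
    """Per-group confusion-matrix rates for binary labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    return {
        "fpr": fp / (fp + tn),
        "fnr": fn / (fn + tp),
        "precision": tp / (tp + fp),
    }

# Toy predictions for two groups with different base rates.
a = group_rates([1, 1, 0, 0], [1, 0, 1, 0])  # base rate 1/2
b = group_rates([1, 0, 0, 0], [1, 1, 0, 0])  # base rate 1/4
print(a)  # precision 0.5, fpr 0.5, fnr 0.5
print(b)  # precision 0.5, fpr 1/3, fnr 0.0
```

Precision parity holds here while FPR and FNR parity both fail, so a report that requires all of them at once is unsatisfiable in general.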
Hey Adrin.
Thanks for your input.
I had also thought about the first one. It might be a bit tricky to
maintain, but it would be quite helpful.
I'm not entirely sure about the second. How much detail should there be
on an algorithm?
The math behind the variational inference in some of the Bayesian models,
for instance, is quite involved.