On Mon, Dec 12, 2011 at 05:12:38PM +0100, Alexandre Gramfort wrote:
> and then I'll see what people want to work on. I have a personal
> interest in randomized linear models
I'll be working on randomized linear models, and also on the
semi-supervised pull request (I have lost track of it).
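(As background on the randomized linear models mentioned here, a minimal
sketch of the stability-selection idea: repeatedly subsample the data,
randomly rescale the features, refit a Lasso, and count how often each
feature is selected. The parameters and resampling scheme below are
illustrative assumptions, not the actual implementation being discussed.)

import numpy as np
from sklearn.linear_model import Lasso

def randomized_lasso_scores(X, y, alpha=0.01, scaling=0.5,
                            n_resampling=200, random_state=0):
    """Fraction of resampled Lasso fits in which each feature is selected."""
    rng = np.random.RandomState(random_state)
    n_samples, n_features = X.shape
    counts = np.zeros(n_features)
    for _ in range(n_resampling):
        # Subsample half of the data without replacement.
        idx = rng.permutation(n_samples)[: n_samples // 2]
        # Randomly down-weight each feature (the "randomized" part).
        weights = rng.uniform(scaling, 1.0, size=n_features)
        coef = Lasso(alpha=alpha).fit(X[idx] * weights, y[idx]).coef_
        counts += coef != 0
    return counts / n_resampling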
Hi,
I think that I will focus on the Hierarchical clustering PR:
https://github.com/scikit-learn/scikit-learn/pull/444
(Alexandre, perhaps we could sync for the review.)
Best,
Vincent
2011/12/12 Alexandre Gramfort:
> Hi,
>
> I'll start with:
>
> https://github.com/scikit-learn/scikit-learn/pull/438
Hi,
I'll start with:
https://github.com/scikit-learn/scikit-learn/pull/438
and
https://github.com/scikit-learn/scikit-learn/pull/444
and then I'll see what people want to work on. I have a personal
interest in randomized linear models.
Alex
On Mon, Dec 12, 2011 at 11:06 AM, Gilles Louppe wrote:
Hi Gilles,
That would be great indeed! I'll push some updates for the PR this
evening and document some of the places where input
(comments/thoughts) would be much appreciated.
thanks,
Peter
2011/12/12 Gilles Louppe:
> Hi list,
>
> During the sprint, I plan to review @pprett's pull request on Gradient
> Tree Boosting.
Hi list,
During the sprint, I plan to review @pprett's pull request on Gradient
Tree Boosting. It is also my intention to implement parallel
construction and prediction of forests of trees.
I also have some ideas concerning the tree module, like computing
variable importance (which is already included ...)
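(To illustrate the parallel forest construction mentioned above, a minimal
sketch that fits bootstrapped trees in separate joblib jobs and averages
their predictions. The helper names and the probability-averaging scheme
are assumptions for illustration, not the planned implementation.)

import numpy as np
from joblib import Parallel, delayed
from sklearn.tree import DecisionTreeClassifier

def _fit_one_tree(X, y, seed):
    # Fit a single tree on a bootstrap sample of the training data.
    rng = np.random.RandomState(seed)
    idx = rng.randint(0, X.shape[0], X.shape[0])
    return DecisionTreeClassifier(random_state=seed).fit(X[idx], y[idx])

def fit_forest_parallel(X, y, n_trees=100, n_jobs=2):
    # Trees are independent, so construction is embarrassingly parallel.
    return Parallel(n_jobs=n_jobs)(
        delayed(_fit_one_tree)(X, y, seed) for seed in range(n_trees))

def predict_forest(trees, X):
    # Average per-tree class probabilities; assumes every bootstrap sample
    # saw all classes, so the probability columns line up across trees.
    proba = np.mean([tree.predict_proba(X) for tree in trees], axis=0)
    return trees[0].classes_[np.argmax(proba, axis=1)]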
Hi All!
I also plan to work on scikit-learn during the sprint, but from
France. I'd like to implement HAP, the hierarchical version of
affinity propagation.
On 8 December 2011 15:27, Mathieu Blondel wrote:
> Hi everyone,
>
> I've edited the wiki with ideas for the two main tasks I'm planning to
> work on during the sprint (K-means improvements and Random Projections).
Hi everyone,
I've edited the wiki with ideas for the two main tasks I'm planning to
work on during the sprint (K-means improvements and Random
Projections).
https://github.com/scikit-learn/scikit-learn/wiki/Upcoming-events
Feel free to add the tasks you plan to work on, so that we can avoid
overlap.
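(On the random-projections task above, a minimal sketch of the
Johnson-Lindenstrauss idea with a dense Gaussian projection matrix; the
function name and the choice of a Gaussian matrix are illustrative
assumptions, not the planned scikit-learn API.)

import numpy as np

def gaussian_random_projection(X, n_components, random_state=0):
    """Project X onto n_components random Gaussian directions.

    Pairwise distances are approximately preserved (Johnson-Lindenstrauss)
    when n_components is large enough relative to log(n_samples).
    """
    rng = np.random.RandomState(random_state)
    n_features = X.shape[1]
    # Entries ~ N(0, 1/n_components), so squared norms are preserved in
    # expectation after projection.
    R = rng.normal(scale=1.0 / np.sqrt(n_components),
                   size=(n_features, n_components))
    return np.dot(X, R)

# Example: reduce 1000-dimensional data to 50 dimensions.
X = np.random.RandomState(42).rand(100, 1000)
X_small = gaussian_random_projection(X, n_components=50)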