> Date: Wed, 29 Oct 2014 14:57:45 +0100
> From: Olivier Grisel
> Subject: Re: [Scikit-learn-general] Fast Johnson-Lindenstrauss Transform
> To: scikit-learn-general
>
> Indeed this is quite a new method and we have a policy of only
> including well-established, widely used algorithms.
Hi everyone,
I'm thinking of adding the Unrestricted Fast Johnson-Lindenstrauss
Transform [1] to the random_projections module and would like to ask
whether anyone is already working on this.
(If you know of a competing algorithm that would be worth looking at,
please let me know ;))
Thanks,
M
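For background, one well-known member of the FJLT family is the subsampled randomised Hadamard transform. A rough numpy/scipy sketch (the normalisation and all names here are mine, not taken from [1]):

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.RandomState(0)
n, d, k = 100, 64, 16                        # samples, input dim (power of two), target dim
X = rng.randn(n, d)

D = rng.choice([-1.0, 1.0], size=d)          # random sign flips
H = hadamard(d) / np.sqrt(d)                 # orthonormal Hadamard matrix
idx = rng.choice(d, size=k, replace=False)   # uniform coordinate subsampling

# project: sign-flip, rotate by H, keep k coordinates, rescale
X_proj = (X * D).dot(H[:, idx]) * np.sqrt(d / float(k))
```

The sign-flip plus Hadamard rotation spreads each sample's energy across all coordinates, which is what makes uniform subsampling safe.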
Has anyone worked on simulated annealing or similar algorithms for
parameter search?
Michal
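Not aware of anything built in. As a toy sketch of what simulated-annealing parameter search could look like (the score function is a made-up stand-in for a cross-validation score, and all names are hypothetical):

```python
import math
import random

def score(params):
    # stand-in for a CV score; peaks at C=1.0, gamma=0.1
    c, g = params
    return -(math.log10(c) ** 2 + (math.log10(g) + 1) ** 2)

def neighbour(params, rng):
    # perturb each parameter multiplicatively, i.e. a step in log-space
    c, g = params
    return (c * 10 ** rng.uniform(-0.5, 0.5),
            g * 10 ** rng.uniform(-0.5, 0.5))

def anneal(start, n_iter=500, t0=1.0, seed=0):
    rng = random.Random(seed)
    current, best = start, start
    for i in range(n_iter):
        t = t0 * (1 - i / n_iter) + 1e-9       # linear cooling schedule
        cand = neighbour(current, rng)
        delta = score(cand) - score(current)
        # always accept improvements; accept worse moves with prob exp(delta/t)
        if delta > 0 or rng.random() < math.exp(delta / t):
            current = cand
            if score(current) > score(best):
                best = current
    return best

best = anneal((100.0, 0.001))
```

Log-space neighbour steps suit parameters like C and gamma that are usually searched over several decades.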
> To reuse models, the easiest way is to call
> fit() again on the remembered model, with the right portion of training
> data (and parameters if using grid search). [I am sorry this requires a
> patch/branch rather than a gist, but this functionality necessitates a
> polymorphic implementation.]
Hi,
I am working on a problem where, in addition to the cross-validation
scores, I would like to record the full fitted classifiers for further
analysis (visualisation, etc.). Is there a way to do this?
I tried to build a custom scoring function that returns a tuple of
different metrics.
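One workaround is to skip the scoring-function route and run the CV loop by hand, cloning the estimator per fold and keeping each fitted copy. A sketch against the current API (module paths have moved between versions):

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

X = np.random.RandomState(0).randn(40, 3)
y = (X[:, 0] > 0).astype(int)

base = LogisticRegression()
scores, models = [], []
for train, test in KFold(n_splits=4).split(X):
    clf = clone(base)                        # fresh copy, same parameters
    clf.fit(X[train], y[train])
    scores.append(clf.score(X[test], y[test]))
    models.append(clf)                       # keep the fitted classifier itself
```

Each entry of `models` can then be inspected or plotted after the fact.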
I was wondering what would work better for distributing cross-validation
jobs: IPython parallel or Spark? I tried with IPython parallel in the
past but remember having some issues with jobs crashing etc.
Michal
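For what it's worth, joblib (which scikit-learn itself uses behind its n_jobs options) is a third option for single-machine parallelism. A minimal sketch with a stand-in fold function:

```python
from joblib import Parallel, delayed

def fit_and_score(fold_id):
    # stand-in for "clone, fit and score one CV fold"
    return fold_id, fold_id * 0.5

# run the folds on two workers; results come back in input order
results = Parallel(n_jobs=2)(delayed(fit_and_score)(i) for i in range(8))
```

Unlike a cluster scheduler, this keeps everything in one process tree, which sidesteps most of the job-crashing bookkeeping.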
On 29/11/13 12:40, scikit-learn-general-requ...@lists.sourceforge.net wrote:
Hi everyone,
I submitted a pull request to enable grid_search with failing
classifiers. Has anyone had time to look at it?
Thanks,
Michal
On 08/11/13 17:56, Michal Romaniuk wrote:
> Did anyone work on this problem (exceptions raised by classifiers in
> grid search) since? I would be happy to do some work to fix this
> problem, but would need some advice.
I could have a look, if only I could figure out how to create a second
fork of scikit-learn on GitHub... The fork I already have contains the
proposed change to grid_search that I submitted.
Michal
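GitHub only allows one fork per repository per account, but one fork can carry any number of topic branches, so a second fork isn't needed. A throwaway demo of the idea (with the real fork you'd run the branch commands inside your existing scikit-learn clone and then `git push origin <branch>`):

```shell
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial"
git branch grid-search-fix        # the change already submitted
git checkout -q -b second-change  # the new, independent change
git branch                        # all branches coexist in one clone
```

In the real workflow the equivalent is `git checkout -b second-change` in the existing clone, keeping each proposed change on its own branch.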
On 19/11/13 19:15, scikit-learn-general-requ...@lists.sourceforge.net wrote:
> Apparently those are all regressions.
Hi,
Is the master branch on Github supposed to pass all tests? I get one
error and two failed tests:
======================================================================
ERROR: Ensures that checks return valid sparse matrices.
> Subject: Re: [Scikit-learn-general] Random forest with zero features
> From: Andy
> Date: 11/11/13 04:37
>
> Hi Michal.
> Thanks for wanting to work on this.
> Could you please open an issue? That makes it easier to track the progress.
Did anyone work on this problem (exceptions raised by classifiers in
grid search) since? I would be happy to do some work to fix this
problem, but would need some advice.
It seems to me that the easiest way around the issue is to wrap the call
to clf.fit() in a try statement and catch the exception.
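A stdlib-only sketch of that idea (the classifier and the error_score default are illustrative, not existing sklearn API):

```python
class FragileClassifier:
    """Toy classifier that fails to fit for some parameter values."""
    def __init__(self, gamma):
        self.gamma = gamma
    def fit(self, X, y):
        if self.gamma <= 0:
            raise ValueError("gamma must be positive")
        return self
    def score(self, X, y):
        return 1.0 / self.gamma

def safe_fit_score(clf, X, y, error_score=float("nan")):
    try:
        clf.fit(X, y)
    except Exception:
        return error_score          # record the failure, keep searching
    return clf.score(X, y)

# a grid search over gamma survives the failing setting
results = [safe_fit_score(FragileClassifier(g), None, None)
           for g in (-1.0, 0.5, 2.0)]
```

The failing parameter combination gets a sentinel score instead of aborting the whole search, so the remaining grid points are still evaluated.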
I'm not sure if it's feasible but it would be nice to have links to
github sources in the online docs. When I'm writing my own transforms, I
often browse the docs for something with a similar interface and look up
the sources to see how it's implemented. A direct link would be useful :-)
Cheers,
M
Hi,
I'm trying to figure out this example:
http://scikit-learn.org/stable/auto_examples/cluster/plot_feature_agglomeration_vs_univariate_selection.html#example-cluster-plot-feature-agglomeration-vs-univariate-selection-py
I've looked at hierarchical.py and WardAgglomeration seems to just
return the mean of the features in each cluster.
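In current versions the class is called FeatureAgglomeration, and its transform pools (by default, averages) the features within each cluster. A quick check, assuming only that numpy and a recent scikit-learn are available:

```python
import numpy as np
from sklearn.cluster import FeatureAgglomeration

X = np.random.RandomState(0).randn(20, 6)
agglo = FeatureAgglomeration(n_clusters=2)
Xt = agglo.fit_transform(X)   # each output column pools one cluster of features

# reconstruct the pooled columns by hand from the fitted cluster labels
manual = np.column_stack([X[:, agglo.labels_ == i].mean(axis=1)
                          for i in range(2)])
```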
/26 Michal Romaniuk :
>> Working with the debugger, here is what's happening:
>>
>> param1 and param2 are both numpy object arrays (containing numerical
>> arrays). So param1.flat[0] gives an array and param2.flat[0] also gives
>> an array. And numpy seems to con
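The underlying numpy behaviour can be reproduced in miniature (my own toy example, not the clone() code itself): `==` on arrays is element-wise, so the result has no single truth value.

```python
import numpy as np

param1 = np.arange(3)
param2 = np.arange(3)

elementwise = (param1 == param2)        # array([ True,  True,  True])

try:
    # this is what a plain `if param1 == param2:` effectively does
    ambiguous = bool(param1 == param2)
except ValueError:
    ambiguous = None                    # "truth value ... is ambiguous"
```

So any parameter-comparison code has to special-case array-valued parameters (e.g. via `np.array_equal`) rather than rely on `==`.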
16:26, Andreas Mueller wrote:
> On 07/26/2013 05:15 PM, Michal Romaniuk wrote:
>> So I followed the advice from Andreas but now I get this error when
>> calling clone:
>>
>> PATH/scikit-learn/sklearn/base.pyc in clone(estimator, safe)
>> 66
> Inherit from BaseEstimator and ClassifierMixin, then implement fit,
> predict and __init__ (to set the parameters).
>
> Cheers,
> Andy
>
> On 07/25/2013 10:05 PM, Michal Romaniuk wrote:
>> Hi,
>>
>> I'm working on a customized classifier and I would like it to be
>> compatible
Hi,
I'm working on a customized classifier and I would like it to be
compatible with sklearn, so that I can use it with pipelines,
GridSearchCV and replicate it using sklearn's clone function. I've
looked at the code for some classifiers but I'm not sure which base
classes to use. Is there any documentation on this?
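The usual pattern is to inherit from BaseEstimator and ClassifierMixin. A minimal, clone-compatible sketch with a made-up decision rule:

```python
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin, clone

class MeanThresholdClassifier(BaseEstimator, ClassifierMixin):
    """Toy rule: predict 1 when the row mean exceeds a threshold."""

    def __init__(self, threshold=0.0):
        # __init__ must only store the parameters, unchanged and under
        # the same names -- that is what makes clone() and grid search work
        self.threshold = threshold

    def fit(self, X, y):
        self.classes_ = np.unique(y)   # learned state goes in *_ attributes
        return self

    def predict(self, X):
        return (np.asarray(X).mean(axis=1) > self.threshold).astype(int)

est = MeanThresholdClassifier(threshold=0.5)
copy = clone(est)                      # fresh, unfitted copy, same parameters
```

BaseEstimator supplies get_params/set_params (so pipelines, GridSearchCV and clone all work), and ClassifierMixin supplies a default accuracy score method.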
Here is an example script:

import numpy
from sklearn import ensemble

y = numpy.random.random_integers(0, 1, 100)
X = numpy.zeros((100, 0))  # 100 samples, zero features
rf = ensemble.RandomForestClassifier()
rf.fit(X, y)
Michal
> I am not sure I understand. Please provide a minimalistic
> reproduction script (10 lines max).
What is the default behaviour for random forests with zero features? It
seems to me that it just gives an error (although I'm not 100% sure if
that's the cause). This is a problem when using a feature selection step
and searching a grid for a good feature selection parameter.
Occasionally there might be parameter settings that select zero features.