Hi,
thanks for your reply.
1. I tested about 100 samples with sklearn. In my example there was only
one sample for readability and simplicity.
In short: I read the image with OpenCV, then detect a region of interest
and extract the digits through contouring. These are machine-written
digits, but
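A minimal sketch of that kind of pipeline, assuming dark digits on a light
background (the file name and threshold settings are made up, and note that
cv2.findContours returns three values on OpenCV 3 instead of the two shown
here):

import cv2

# Hypothetical input image: machine-written digits, dark on light.
img = cv2.imread("digits.png", cv2.IMREAD_GRAYSCALE)

# Otsu threshold, inverted so the digits become white blobs.
_, thresh = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)

# One external contour per digit (OpenCV >= 4 return signature).
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Sort bounding boxes left to right and crop each digit.
boxes = sorted(cv2.boundingRect(c) for c in contours)
digits = [thresh[y:y + h, x:x + w] for (x, y, w, h) in boxes]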
Hi,
I am curious about a few things:
1. What samples did you use to test your classifier? A single sample is
hardly enough to judge its accuracy.
2. Did you try to fine-tune the hyperparameters of your SVM?
3. You might be interested in this blog post; the author gets a very impressive
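On point 2, a minimal sketch of what such tuning could look like (the data
and grid values are made up; on current scikit-learn GridSearchCV lives in
sklearn.model_selection, while older versions had it in sklearn.grid_search):

from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Made-up data standing in for the digit features.
X, y = make_classification(n_samples=100, n_features=10, random_state=0)

# C and gamma are the two hyperparameters that matter most for an RBF SVM.
param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)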
If I try to do something like:
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy data: 100 samples, 10 features, binary labels.
X = np.random.random((100, 10))
y = np.random.randint(2, size=100)

estimator = RandomForestClassifier()
estimator.fit(X, y)

# Wrapping the fitted forest in asarray unpacks its sub-estimators:
a = np.asarray([estimator])
a is a list of the individual DecisionTreeClassifiers
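The unpacking happens because a fitted ensemble is sequence-like (it exposes
__len__ and __getitem__ over its estimators_), so np.asarray recurses into
it. If the goal is to keep the forest itself in an array, a minimal sketch
of one workaround:

# Store the estimator as the single element of an object array instead.
a = np.empty(1, dtype=object)
a[0] = estimator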
Replying to myself...
The cause of the reported "problem" is that the classifier samples have
empty strips on both sides, so if I shrink my_array to 6 columns and add
empty columns on both sides, I get the expected value: zero.
But still, results from this approach unfortunately can't beat Tesseract
for m
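A minimal numpy sketch of that re-padding, assuming the digit was cropped
to 6 columns and one blank column is added on each side (the pad width and
image height are made-up values):

import numpy as np

# Hypothetical tightly cropped digit: 8 rows, 6 columns.
digit = np.random.randint(0, 255, size=(8, 6))

# Add an empty (zero) column on each side to match the training
# samples, which have blank strips on both sides.
padded = np.pad(digit, ((0, 0), (1, 1)), mode="constant", constant_values=0)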
Hi Tim,
In principle, what you describe corresponds exactly to the decision tree
algorithm. You partition the input space into smaller subspaces, on which
you recursively build sub-decision trees.
In practice, however, I would not split things by hand, unless you are
interested in discovering add
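A minimal sketch of letting the tree find that recursive partition itself
(the toy data is made up; export_text is only available in recent
scikit-learn versions):

from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Made-up two-class problem.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

# The tree recursively splits the input space on its own.
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# Print the learned partition as nested if/else rules.
print(export_text(tree))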
Thanks Manoj!
BTW, if you use the Rackspace Cloud account for your next benchmarking
session, please thank them at the end of your blog post.
--
Olivier
Hi.
I have updated my blog with my progress this week.
http://manojbits.wordpress.com/2014/05/23/releasing-the-gil-and-coordinate-descent/.
On Tue, May 20, 2014 at 10:09 PM, Hamzeh Alsalhi wrote:
> Thank you! I will be happy to keep this mailing list updated with links to
> blog posts ideally
2014-05-23 11:35 GMT+02:00 Gilles Louppe :
> Thanks! This is really cool! I think I'll try to reproduce some of them and
> put one or two in my slides.
I used Fabian's extension_profiler to produce these.
https://github.com/fabianp/extension_profiler
Thanks! This is really cool! I think I'll try to reproduce some of them and
put one or two in my slides.
On 23 May 2014 11:29, Lars Buitinck wrote:
> 2014-05-23 11:08 GMT+02:00 Gilles Louppe :
> > Thanks! Oh, I would be interested in seeing them. Could you send me the
> > link if you still have t
Hello,
a naive question about what I should do and what already exists in scikit-learn.
I have a classification problem with two classes, and I know that one
of my features has two different distributions for one of the classes.
Example made up on the spot (real life is more complicated
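A made-up sketch of that setup, where the feature is bimodal for class 1
(all numbers here are invented for illustration):

import numpy as np

rng = np.random.RandomState(0)

# Class 0: the feature comes from a single mode.
x0 = rng.normal(loc=0.0, scale=1.0, size=100)

# Class 1: the feature is a mixture of two modes.
centers = rng.choice([-3.0, 3.0], size=100)
x1 = rng.normal(loc=centers, scale=1.0)

X = np.concatenate([x0, x1]).reshape(-1, 1)
y = np.concatenate([np.zeros(100), np.ones(100)])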
2014-05-23 11:08 GMT+02:00 Gilles Louppe :
> Thanks! Oh, I would be interested in seeing them. Could send me the link if
> you still have them?
Here's one with quicksort:
https://camo.githubusercontent.com/914d542cd2bfc0f0f996b16e272c82645f2b1c15/68747470733a2f2f662e636c6f75642e6769746875622e636f6
Hi Lars,
Thanks! Oh, I would be interested in seeing them. Could you send me the link if
you still have them?
Thanks,
Gilles
On 23 May 2014 11:05, Lars Buitinck wrote:
> 2014-05-22 8:13 GMT+02:00 Gilles Louppe :
> > Just to let you know, my talk "Accelerating Random Forests in
> > Scikit-Lea
2014-05-22 8:13 GMT+02:00 Gilles Louppe :
> Just to let you know, my talk "Accelerating Random Forests in
> Scikit-Learn" was approved for EuroScipy'14. Details can be found at
> https://www.euroscipy.org/2014/schedule/presentation/9/.
>
> My slides are far from being ready, but my intention i