matrix (n samples and m features)
>
> >>> data
> array([[1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0,
>         0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0],
>        [0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0,
>         0, 1, 0, 1,
Hi Mukesh,
I was getting the following error from your code in my environment (Python
2.7.11 - Anaconda 2.4.1, scikit-learn 0.17) on Mac OS X 10.9 for the
following line:
Y = lb.fit_transform(y_train_text)
> ValueError: You appear to be using a legacy multi-label data
> representation. Sequence of sequences are no longer supported; use a
> binary array or sparse matrix instead.
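That error means `LabelBinarizer` was given multi-label targets as a list of label sequences, which newer scikit-learn versions reject. A minimal sketch of the usual fix (the label names here are made up for illustration) is to convert with `MultiLabelBinarizer` instead:

```python
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical multi-label targets in the "legacy" sequence-of-sequences
# format that LabelBinarizer no longer accepts.
y_train_text = [("sports",), ("news", "politics"), ("sports", "news")]

# MultiLabelBinarizer turns the label sequences into a binary
# indicator matrix, one column per label.
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(y_train_text)

print(mlb.classes_)  # column order of the labels
print(Y)             # shape (n_samples, n_labels)
```

The resulting binary matrix can then be fed to `OneVsRestClassifier` or any estimator that accepts multi-label indicator targets.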
I had a similar situation, so I created a larger training set with roughly
equal class membership by randomly sampling with replacement from the
training set. Results were much better during CV (against the inflated
training set) and also against the held-out test set (from the original
training set).
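The oversampling described above can be sketched with plain NumPy; the data shapes and class sizes here are invented for illustration:

```python
import numpy as np

rng = np.random.RandomState(42)

# Toy imbalanced training set: 90 samples of class 0, 10 of class 1.
X = np.vstack([rng.randn(90, 3), rng.randn(10, 3) + 2.0])
y = np.array([0] * 90 + [1] * 10)

# Sample each class with replacement up to the majority-class count,
# yielding a larger, roughly balanced training set.
n_target = np.bincount(y).max()
idx = np.concatenate([
    rng.choice(np.where(y == c)[0], size=n_target, replace=True)
    for c in np.unique(y)
])
rng.shuffle(idx)
X_bal, y_bal = X[idx], y[idx]

print(np.bincount(y_bal))  # both classes now have n_target samples
```

As noted in the thread, CV scores on the inflated set are optimistic because duplicated samples can appear in both folds, so the held-out test set from the original data is the honest check.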
Hi Anders,
>> The problem as I see it is the "tearing it down" bit; I don't want the
jobs shutting down before the user has had a chance to get the resulting
data, but I suspect that if we let users shut them down themselves, a lot
of them will sit around for no reason.
With Amazon EMR you read and write
Thank you, Fernando. Per your suggestion I have added the link to the
project on GitHub; the README.md contains links to the
nbviewer.ipython.org-hosted notebooks (I copied the pattern from the
Python for Data Analysis link a few entries up).
-sujit
On Tue, May 27, 2014 at 6:41 PM, Fernando Perez wrote:
>
> On Sun, May 25, 2014 at 08:59:56AM -0700, Sujit Pal wrote:
> > Hello sklearners,
>
> > I apologize in advance if this is regarded as a shameless plug, but...
>
> > I rewrote the R exercises for the statlearning course (from Stanford
> > University, conducted
Hello sklearners,
I apologize in advance if this is regarded as a shameless plug, but...
I rewrote the R exercises for the statlearning course (from Stanford
University, conducted by professors Trevor Hastie and Rob Tibshirani) into
a set of 9 Python notebooks. All the algorithms used in my code
I believe there is already a recommender framework in the scikits family
called crab?
http://muricoca.github.io/crab/
A few days back, one of the committers to sklearn mentioned that he
detected code in crab that looked like his own. Given that there is so much
reuse, would it
+1 - FWIW very good idea!
On May 2, 2013, at 2:52 AM, Jaques Grobler wrote:
> Perhaps we should consider adding a link to the SciPy lecture notes in the
> docs? Or would this be a bit out of our scope? Then people new to Python
> will be able to find these great notes via the 'Getting Start
Many thanks for posting these links. I am a recent user of scikit-learn,
having been introduced to it via Kaggle example code. Both tutorials were
very helpful for me; too bad Olivier did not have time to complete all the
stuff he planned to talk about, but I have downloaded both presentations an
FWIW, and it's probably a bit off-topic (sorry about that), I have an
implementation of the Bob-Alice example from Wikipedia's HMM page here;
it may be helpful to some:
http://sujitpal.blogspot.com/2013/03/the-wikipedia-bob-alice-hmm-example.html
I think it would make a nice tutorial example too, it
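For readers who haven't seen it, the Bob-Alice example is the Rainy/Sunny HMM from Wikipedia's Viterbi article: hidden weather states, observed activities. A minimal Viterbi decoder for it, using the probabilities from that article, looks like this:

```python
# Viterbi decoding for the Wikipedia Rainy/Sunny ("Bob and Alice") HMM.
states = ("Rainy", "Sunny")
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {
    "Rainy": {"Rainy": 0.7, "Sunny": 0.3},
    "Sunny": {"Rainy": 0.4, "Sunny": 0.6},
}
emit_p = {
    "Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
    "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1},
}

def viterbi(obs):
    # V[t][s] = probability of the best state path ending in s at time t.
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            # Best predecessor for state s at this step.
            prob, prev = max((V[-2][p] * trans_p[p][s] * emit_p[s][o], p)
                             for p in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return V[-1][best], path[best]

prob, best_path = viterbi(["walk", "shop", "clean"])
print(best_path, prob)  # ['Sunny', 'Rainy', 'Rainy'] 0.01344
```

The same model can of course be decoded with a library HMM implementation; the hand-rolled version just makes the dynamic programming explicit, which is what makes it a nice tutorial example.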