I strongly advise raising an error. Very, very strongly.
Being lax about ambiguous inputs makes prototyping and interactive usage
easier: less typing, and the system gets it right most of the time.
However, it makes production use and debugging complex code much harder.
Indeed, errors, that m
I vote for 3.
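For what it's worth, the ambiguity is easy to state in NumPy terms. A minimal sketch (the reshape calls show the two possible readings of an input with shape (N,)):

```python
import numpy as np

# A 1-D array is ambiguous: N samples with 1 feature, or 1 sample with N features?
x = np.arange(5)                # shape (5,)

as_samples = x.reshape(-1, 1)   # read as 5 samples, 1 feature  -> shape (5, 1)
as_features = x.reshape(1, -1)  # read as 1 sample, 5 features  -> shape (1, 5)
```

Both readings are plausible, which is exactly why silently picking one can burn users in production code.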
On Fri, May 1, 2015 at 6:27 PM, Andreas Mueller wrote:
> Hi all.
> A quick question on future API.
> What should happen if a user passes an X with shape (N,), in other words
> X.ndim == 1?
>
> This is unfortunately not really consistent in scikit-learn right now.
> Three things are possible:
Hi all.
A quick question on future API.
What should happen if a user passes an X with shape (N,), in other words
X.ndim == 1?
This is unfortunately not really consistent in scikit-learn right now.
Three things are possible:
1) Raise an error
2) N = n_features, that is X contains a single sample
It should. If not, please report a bug.
On 05/01/2015 11:16 AM, Pagliari, Roberto wrote:
> I agree with you.
> I'm just not sure whether scikit-learn would handle that or not.
> thank you,
> *From:* Michael Eickenberg [michael.e
I agree with you.
I'm just not sure whether scikit-learn would handle that or not.
thank you,
From: Michael Eickenberg [michael.eickenb...@gmail.com]
Sent: Friday, May 01, 2015 11:13 AM
To: scikit-learn-general@lists.sourceforge.net
Subject: Re: [Scikit-learn-gen
What do you expect a classifier to predict on a label that it has never seen
during training? If there were structure in the target, such as an order,
then an appropriate regression model may be able to infer unseen targets from
that structure. But in classification this information is entirely absent.
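To illustrate the point with a toy example (a hypothetical hand-rolled nearest-centroid classifier, not scikit-learn code): whatever input you feed it at predict time, a classifier can only emit labels it saw during fit.

```python
import numpy as np

class NearestCentroidToy:
    """Toy nearest-centroid classifier (hypothetical, for illustration only)."""

    def fit(self, X, y):
        # The set of predictable labels is fixed here, at fit time.
        self.classes_ = np.unique(y)
        self.centroids_ = np.array(
            [X[y == c].mean(axis=0) for c in self.classes_]
        )
        return self

    def predict(self, X):
        # Distance from each sample to each class centroid.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        # Predictions are indices into classes_: unseen labels are impossible.
        return self.classes_[d.argmin(axis=1)]

X = np.array([[0.0], [1.0], [9.0], [10.0]])
y = np.array([0, 0, 1, 1])
clf = NearestCentroidToy().fit(X, y)

# Even wildly out-of-range inputs map back onto the training classes.
preds = clf.predict(np.array([[100.0], [-5.0]]))  # -> [1, 0]
```

The same constraint holds for real classifiers: the output space is whatever `fit` saw, so a test label absent from training can never be predicted.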
Michael
Hi Sebastian,
if classes/labels are the same for both training and test, that should not be a
problem. I've done that and never seen any issues. As far as I can see, scikit-learn
automatically maps classes to integers from 0 to n_classes - 1,
which is something Spark, for example, does not.
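For illustration, that 0 to n_classes - 1 mapping behaves roughly like the following sketch (a hand-rolled stand-in for what scikit-learn's LabelEncoder does internally; the function name is mine):

```python
def encode_labels(labels):
    """Map arbitrary labels to integers 0..n_classes-1, sorted order."""
    classes = sorted(set(labels))
    mapping = {c: i for i, c in enumerate(classes)}
    return [mapping[label] for label in labels], classes

encoded, classes = encode_labels(["dog", "cat", "dog", "bird"])
# encoded -> [2, 1, 2, 0], classes -> ["bird", "cat", "dog"]
```

Because the mapping is derived from the labels themselves, users can pass strings or arbitrary integers and never have to renumber anything by hand.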