On 01/04/2016 03:31 PM, Joel Nothman wrote:
> FWIW: features that I have had to remove include format strings with
> implicit arg numbers, set literals, dict comprehensions, perhaps
> ordered dicts / counters. We are already clandestinely using argparse
> in benchmark code.
You probably just w
I have many times committed code and had to fix it for Python 2.6.
FWIW: features that I have had to remove include format strings with
implicit arg numbers, set literals, dict comprehensions, perhaps ordered
dicts / counters. We are already clandestinely using argparse in benchmark
code.
Most of t
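For concreteness, the 2.7-only constructs listed above can be put next to their 2.6-compatible spellings (a minimal illustration, not taken from the scikit-learn codebase):

```python
# Features that require Python >= 2.7 (they break on 2.6),
# alongside their 2.6-compatible spellings.

# Format strings with implicit argument numbers (2.7+):
new = "{} and {}".format("a", "b")
old = "{0} and {1}".format("a", "b")          # 2.6-compatible
assert new == old == "a and b"

# Set literals (2.7+):
s_new = {1, 2, 3}
s_old = set([1, 2, 3])                        # 2.6-compatible
assert s_new == s_old

# Dict comprehensions (2.7+):
d_new = {k: k ** 2 for k in range(3)}
d_old = dict((k, k ** 2) for k in range(3))   # 2.6-compatible
assert d_new == d_old

# collections.OrderedDict and collections.Counter were added in 2.7,
# and argparse entered the standard library in 2.7 (on 2.6 it was a
# separate package).
from collections import OrderedDict, Counter
import argparse
```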
Thanks Yaroslav, I'll try it out.
On 4 January 2016 at 19:05, Yaroslav Halchenko wrote:
>
> On Mon, 04 Jan 2016, William Shipman wrote:
>
> >Hi Scikit-learn community,
>
> >Was there any progress in including DueCredit? I am building my list of
> >references for each algorithm I tested and would welcome something
> >automated like this.
On Mon, Jan 04, 2016 at 01:22:12PM -0500, Andreas Mueller wrote:
> I'm not sure I'm for dropping 2.6 for the sake of dropping 2.6.
I agree. I find the attitude of the post that I mentioned a bit
annoying: dropping support for the sake of forcing people to move isn't a
good thing. It should bring
You didn't use a OneVsRestClassifier. SGDClassifier by itself can only do
multi-class classification, not multi-label.
It needs to be GridSearchCV(OneVsRestClassifier(SGDClassifier()), ...)
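A minimal sketch of that wrapping, assuming a recent scikit-learn (the toy data and the alpha grid are made up for illustration; the wrapped estimator's parameters are reached through the "estimator__" prefix):

```python
# Sketch: grid search over a multi-label one-vs-rest wrapper.
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.multiclass import OneVsRestClassifier

X, Y = make_multilabel_classification(n_samples=200, n_classes=3,
                                      random_state=0)

# Parameters of the inner SGDClassifier are addressed with the
# "estimator__" prefix in the parameter grid.
search = GridSearchCV(
    OneVsRestClassifier(SGDClassifier(max_iter=1000, tol=1e-3,
                                      random_state=0)),
    param_grid={"estimator__alpha": [1e-4, 1e-3, 1e-2]},
    cv=3,
)
search.fit(X, Y)
```

After fitting, search.best_params_ holds the selected alpha, and predictions come back as a label-indicator matrix of shape (n_samples, n_classes).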
On 01/04/2016 02:15 AM, Startup Hire wrote:
Providing the full stack trace here: [code in previous email]
# Tuning hyper-pa
Check out auto-sklearn and the datasets they use, as well as tpot and
this pr: https://github.com/scikit-learn/scikit-learn/pull/5185
On 12/20/2015 12:37 PM, olologin wrote:
> Hi folks.
>
> I'm looking for hard examples of the hyperparameter search problem. I've
> just implemented scikit-learn compat
Hi Ola.
Sorry for the late reply, I've been offline over the holidays.
Unfortunately, there is no owner of the cross_decomposition module at
the moment, which is probably the main reason it is not in a great state.
You can also have a look at
https://github.com/scikit-learn/scikit-learn/issues/
Hi Sonali.
Your skill-set seems great for a GSoC with scikit-learn.
In recent years we have found ourselves quite limited in
mentoring resources.
Many of the core-devs are very busy, and we already have many
contributions waiting for reviews.
If you are interested in working on s
Yeah, being able to resume or retry the download would be nice, I
think. PR welcome.
On 12/31/2015 05:58 AM, Toasted Corn Flakes wrote:
> Currently, if you interrupt check_fetch_lfw() (which downloads about 200 MB of
> data), the incomplete lfw-funneled.tgz stays on disk, and running it again
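One way to avoid the stale partial file (a sketch, not scikit-learn's actual fetcher; the helper name is made up): stream into a temporary ".part" file, delete it on any failure, and rename it into place only on success.

```python
# Sketch (assumption, not the real scikit-learn downloader): an
# interrupted download never leaves a half-written archive behind,
# because the final filename only appears after a successful rename.
import os
import shutil
import urllib.request

def fetch_atomically(url, dest_path):
    part_path = dest_path + ".part"
    try:
        with urllib.request.urlopen(url) as resp, open(part_path, "wb") as out:
            shutil.copyfileobj(resp, out)
    except BaseException:
        # Remove the partial file so a rerun starts cleanly.
        if os.path.exists(part_path):
            os.remove(part_path)
        raise
    os.rename(part_path, dest_path)  # atomic on POSIX filesystems
```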
That's really odd.
scikit-learn 0.17 should definitely work with numpy 1.10.
This looks more like a linker failure, so it seems strange that it would
depend on the numpy version.
On 01/04/2016 08:00 AM, Joe Cammisa wrote:
folks, just to follow up on this in case anyone else runs into the
same problem
Happy new year!
I think it would be cool to hear from matplotlib what their experience was.
I'm not sure I'm for dropping 2.6 for the sake of dropping 2.6.
What would we actually gain?
There are two fixes in sklearn/utils/fixes.py that we could remove, I think.
Also: what does dropping 2.6 mean?
On Mon, 04 Jan 2016, William Shipman wrote:
>Hi Scikit-learn community,
>Was there any progress in including DueCredit? I am building my list of
>references for each algorithm I tested and would welcome something
>automated like this.
It wasn't included, but duecredit carries a n
Yes. Each model should have a likelihood function defined which computes
the BIC/AIC. In linear regression, with the error normality assumption,
it would be RSS/(n − p − 1), as far as I know.
What do you think?
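Under the Gaussian-error assumption above, one common form of the BIC for an ordinary least-squares fit is n·log(RSS/n) + k·log(n), dropping constant terms. A sketch with synthetic data (the function name and dataset are illustrative):

```python
# BIC for an OLS fit under Gaussian errors:
#   BIC = n * log(RSS / n) + k * log(n)
# where k counts the fitted parameters (coefficients + intercept).
import numpy as np

def ols_bic(X, y):
    n, p = X.shape
    Xd = np.column_stack([np.ones(n), X])        # add an intercept column
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    rss = np.sum((y - Xd @ beta) ** 2)
    k = p + 1
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=100)
print(ols_bic(X, y))
```

Lower BIC is better; comparing this value across candidate models is what an AIC/BIC-based model selection would do.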
2016-01-02 13:39 GMT+01:00 Gael Varoquaux :
> On Fri, Jan 01, 2016 at 08:41:56P
Hi Scikit-learn community,
Was there any progress in including DueCredit? I am building my list of
references for each algorithm I tested and would welcome something
automated like this.
Regards,
William.
On 31 August 2015 at 16:56, Yaroslav Halchenko wrote:
>
> On Sun, 30 Aug 2015, Mathieu B
My own opinion, after reading the link in the original message in this thread,
is to drop Python 2.6 support.
Perhaps Python Weekly and LWN will report on this to increase visibility.
As a follow-up, I'd like to point out
http://www.snarky.ca/why-python-3-exists
Dale Smith, Ph.D.
Data Scientist
For what it’s worth, pandas is dropping 3.3 in our next release (0.18, maybe
end of this month). We’re possibly dropping 2.6 as well, but it might get one
more release.
-Tom
> On Jan 4, 2016, at 7:28 AM, Gael Varoquaux
> wrote:
>
> Happy new year everybody,
>
> As a new year resolution, I su
Happy new year everybody,
As a new year resolution, I suggest that we drop Python 2.6
compatibility.
For an argumentation in this favor, see
http://www.snarky.ca/stop-using-python-2-6 (I don't buy everything there,
but the core idea is there).
For us, this will mean more usage of context manager
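The message is cut off here, but one concrete context-manager gain from dropping 2.6 is the multi-manager "with" statement, which only arrived in Python 2.7 (a small illustration, not code from scikit-learn):

```python
# Two context managers in a single "with" -- valid on Python 2.7+,
# a SyntaxError on 2.6 (which needed nested blocks or contextlib.nested).
import os
import tempfile

d = tempfile.mkdtemp()
path_a = os.path.join(d, "a.txt")
path_b = os.path.join(d, "b.txt")

with open(path_a, "w") as fa, open(path_b, "w") as fb:
    fa.write("hello")
    fb.write("world")
```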
folks, just to follow up on this in case anyone else runs into the same
problem: after much mucking around, I conclude that the issue is due to a
change that took place in numpy between versions numpy-1.9 and
numpy-1.10. In my experience, scikit-learn-0.17 cannot be successfully
built against