On Wed, Apr 23, 2014 at 12:57 AM, Gael Varoquaux
wrote:
> On Tue, Apr 22, 2014 at 08:11:30PM -0700, Faraz Mirzaei wrote:
>> I'm putting together a regression test for a LogisticRegression classifier.
>> After fixing random_state, I get identical results on Linux. But the same
>> random_state leads
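For reference, a minimal sketch of a platform-tolerant version of such a
regression test (the dataset, tolerances, and reference handling are my own
illustrative assumptions, not Faraz's setup):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
clf = LogisticRegression(random_state=0).fit(X, y)

# Compare against stored reference coefficients with a tolerance rather
# than exact equality, so 32-bit vs 64-bit rounding differences still pass.
reference_coef = clf.coef_.copy()  # in a real test, load these from disk
np.testing.assert_allclose(clf.coef_, reference_coef, rtol=1e-5)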
On Wed, Dec 4, 2013 at 9:58 AM, Olivier Grisel wrote:
> As a user I must confess that I like the flat numpy API, both in
> interactive sessions and in regular code. The main con is that it's
> often hard to find the source code of a particular class or function,
> especially when it's a builtin ob
On Mon, Dec 2, 2013 at 4:17 PM, Gael Varoquaux
wrote:
> On Tue, Dec 03, 2013 at 06:56:14AM +1100, Joel Nothman wrote:
>> As for "There should be one-- and preferably only one --obvious way to
>> do it," Gaël, I feel there are times where the one obvious way to do it
>> should be conditioned on wh
On Sat, Aug 24, 2013 at 1:41 PM, Peter Prettenhofer <
peter.prettenho...@gmail.com> wrote:
> Hi,
>
> while investigating some 64bit vs. 32bit differences in the GBRT code I
> stumbled upon this ticket [1].
> Even though it's quite old, I can confirm that it holds on my virtual
> machines (np 1.7.1)
On Fri, Mar 15, 2013 at 10:24 AM, wrote:
>> Having both margins fixed is an unlikely situation, especially for
>> confusion matrices. Your case looks like one with one margin fixed. Could
>> you elaborate more on the final goal of your attempt?
>
> I would like to investigate more on Cohen's kappa (
>
On Fri, Mar 15, 2013 at 9:05 AM, wrote:
> Dear ScikitLearners,
>
> I hope that I'm not too far off topic...
>
> Given a confusion matrix (trained in scikit-learn):
> [[186 187]
>  [119 997]]
>
> I calculate these variables:
> exp_class0 = conf_matrix[0].sum()
> exp_class1 = conf_matrix[1].sum()
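For context, a sketch of Cohen's kappa computed from the posted matrix under
the standard definition (variable names other than conf_matrix are mine):

import numpy as np

conf_matrix = np.array([[186, 187],
                        [119, 997]])

n = float(conf_matrix.sum())
po = np.trace(conf_matrix) / n                     # observed agreement
row_marginals = conf_matrix.sum(axis=1)            # exp_class0, exp_class1
col_marginals = conf_matrix.sum(axis=0)
pe = (row_marginals * col_marginals).sum() / n**2  # chance agreement
kappa = (po - pe) / (1 - pe)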
On Wed, Mar 13, 2013 at 10:48 AM, Lars Buitinck wrote:
> 2013/3/13 :
>> I have a problem where fsolve goes into a range where the values are
>> nan. After that it goes into an endless loop, as far as I can tell.
>
> I think you intended to send this to scipy-dev? This is
> scikit-learn-general. W
Preliminary question; I haven't had time yet to look closely:
>>> scipy.__version__
'0.9.0'
I have a problem where fsolve goes into a range where the values are
nan. After that it goes into an endless loop, as far as I can tell.
Something like this has been fixed for optimize.fmin_bfgs. Was there
(Trying a second time without links, to see if I'm still flagged as a spam sender.)
josef
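An illustrative repro of the failure mode described above (my own
construction, not Josef's original code): a function that returns nan
outside its domain can send fsolve into a region where every evaluation
is nan, after which it cannot converge.

import numpy as np
from scipy.optimize import fsolve

def f(x):
    return np.sqrt(x) - 10.0   # nan for x < 0

# A bad starting point puts the iterates in the nan region; full_output
# exposes the (non-)convergence status instead of failing silently.
root, info, status, msg = fsolve(f, x0=-5.0, full_output=True)
print(status, msg)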
On Mon, Oct 22, 2012 at 11:30 AM, wrote:
> On Mon, Oct 22, 2012 at 10:51 AM, federico vaggi
> wrote:
>> Josef,
>>
>> could you explain that in slightly more detail? I'm afraid I'm not
>> familiar with the literature y
On Mon, Oct 22, 2012 at 10:33 AM, wrote:
> On Mon, Oct 22, 2012 at 10:05 AM, federico vaggi
> wrote:
>> Hi Gael,
>>
>> I took the time to dig a little bit, and found some MATLAB code
>> written by Diego di Bernardo.
>>
>> http://dibernardo.tigem.it/wiki/index.php/Network_Inference_by_Reverse-eng
On Mon, Oct 22, 2012 at 10:05 AM, federico vaggi
wrote:
> Hi Gael,
>
> I took the time to dig a little bit, and found some MATLAB code
> written by Diego di Bernardo.
>
> http://dibernardo.tigem.it/wiki/index.php/Network_Inference_by_Reverse-engineering_NIR
>
> He has a closed form result (the for
On Fri, Oct 19, 2012 at 8:01 AM, wrote:
> On Fri, Oct 19, 2012 at 7:22 AM, Lars Buitinck wrote:
>> 2012/10/19 Peter Prettenhofer :
>>> BTW: Has anybody of you looked into patsy [1]? They have plenty of
>>> functionality for this kind of encodings (they call it treatment
>>> coding [2]).
>>
>> I
On Fri, Oct 19, 2012 at 7:22 AM, Lars Buitinck wrote:
> 2012/10/19 Peter Prettenhofer :
>> BTW: Has anybody of you looked into patsy [1]? They have plenty of
>> functionality for this kind of encodings (they call it treatment
>> coding [2]).
>
> It doesn't seem to use scipy.sparse, which for me w
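For reference, a small sketch of patsy's treatment coding; the column
name and data are made up for illustration:

import pandas as pd
from patsy import dmatrix

df = pd.DataFrame({"city": ["tokyo", "paris", "paris", "nyc"]})
design = dmatrix("C(city, Treatment)", df)
print(design)   # intercept plus k-1 dummy columns, reference level dropped

Note that the result is a dense DesignMatrix (an ndarray subclass), which
is Lars's point above: patsy does not emit scipy.sparse output.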
On Thu, Oct 18, 2012 at 1:57 PM, Aron Culotta wrote:
> The results I get from DPGMM are not what I expect. E.g.:
>
> >>> import sklearn.mixture
> >>> sklearn.__version__
> '0.12-git'
> >>> data = [[1.1],[0.9],[1.0],[1.2],[1.0], [6.0],[6.1],[6.1]]
> >>> m = sklearn.mixture.DPGMM(n_components=5, n_iter=
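A runnable completion of the truncated snippet, with an assumed n_iter
(the original value is cut off). DPGMM is the 0.12-era API and was later
replaced by sklearn.mixture.BayesianGaussianMixture:

import sklearn.mixture

data = [[1.1], [0.9], [1.0], [1.2], [1.0], [6.0], [6.1], [6.1]]
m = sklearn.mixture.DPGMM(n_components=5, n_iter=100)  # n_iter assumed
m.fit(data)
print(m.predict(data))  # ideally two effective clusters, near 1.0 and 6.0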
On Fri, Jun 15, 2012 at 4:50 PM, Yaroslav Halchenko wrote:
>
> On Fri, 15 Jun 2012, josef.p...@gmail.com wrote:
>> https://github.com/PyMVPA/PyMVPA/blob/master/mvpa2/misc/dcov.py#L160
>> looks like a double sum, but Wikipedia only has one sum, an elementwise product.
>
> sorry -- I might be slow -- w
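For context, a sketch of the sample distance covariance in the Székely &
Rizzo formulation (my own illustration): double-center each pairwise
distance matrix, then average the elementwise product. The "double sum"
and the "elementwise product" are the same quantity written two ways.

import numpy as np
from scipy.spatial.distance import pdist, squareform

def dcov_sq(x, y):
    # x, y are 1-d samples of equal length n
    a = squareform(pdist(x[:, None]))   # pairwise |x_i - x_j|
    b = squareform(pdist(y[:, None]))
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    return (A * B).mean()   # == (1 / n**2) * sum_ij A_ij * B_ij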
On Fri, Jun 15, 2012 at 4:20 PM, Yaroslav Halchenko wrote:
> Here is a comparison to output of my code (marked with >):
>
> 0.00458652660079 0.788017364828 0.00700027844478 0.00483928213727
>> 0.145564526722 0.480124905375 0.422482399359 0.217567496918
> 6.50616752373e-07 7.99461373461e-05 0.0070
On Fri, Jun 15, 2012 at 3:50 PM, wrote:
> On Fri, Jun 15, 2012 at 10:45 AM, Yaroslav Halchenko
> wrote:
>>
>> On Fri, 15 Jun 2012, Satrajit Ghosh wrote:
>>> hi yarik,
>>> here is my attempt:
>>>
>>> [1]https://github.com/satra/scikit-learn/blob/enh/covariance/sklearn/covariance/distan
On Fri, Jun 15, 2012 at 10:45 AM, Yaroslav Halchenko
wrote:
>
> On Fri, 15 Jun 2012, Satrajit Ghosh wrote:
>> hi yarik,
>> here is my attempt:
>>
>> [1]https://github.com/satra/scikit-learn/blob/enh/covariance/sklearn/covariance/distance_covariance.py
>> i'll look at your code in det
On Sun, Jan 22, 2012 at 3:34 PM, Andreas wrote:
> Hi everybody.
> While reviewing the label propagation PR, I thought about the pairwise
> rbf functions.
> Would it be possible to compute an sparse, approximate RBF kernel matrix
> using ball trees?
> The idea would be that if the distance between
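A sketch of the idea (parameter values are illustrative): entries beyond
a cutoff radius contribute exp(-gamma * d**2) ~ 0, so a tree query for
neighbors within that radius yields the only entries worth storing.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def sparse_rbf_kernel(X, gamma=1.0, radius=3.0):
    nn = NearestNeighbors(radius=radius, algorithm='ball_tree').fit(X)
    graph = nn.radius_neighbors_graph(X, mode='distance')  # CSR of distances
    graph.data = np.exp(-gamma * graph.data ** 2)          # RBF on stored entries
    graph.setdiag(1.0)                                     # ensure K(x, x) = 1
    return graph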
On Tue, Dec 27, 2011 at 4:36 PM, Nelle Varoquaux
wrote:
> Hi all,
>
> Despite this not being directly related to scikit-learn, I hope to benefit
> from the experience of machine learning developers:
> I'm currently studying an article on Variable Duration HMMs (VDHMMs), and I
> am seeking advice
On Mon, Dec 19, 2011 at 10:19 AM, Jieyun Fu wrote:
> It's Wald's Z test in here. Search for Wald Test in
>
> http://userwww.sfsu.edu/~efc/classes/biol710/logistic/logisticreg.htm
>
> In ordinary linear regression they like to call it t-statistics. I guess
> it's just a terminology thing.
Since t
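For reference, the Wald z statistic is just the estimate divided by its
standard error, with the p-value taken from the normal distribution (the
numbers below are made up):

from scipy import stats

beta_hat, se = 0.85, 0.31            # illustrative estimate and std. error
z = beta_hat / se
p_value = 2 * stats.norm.sf(abs(z))  # two-sided Wald test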
On Thu, Dec 8, 2011 at 10:51 AM, Gael Varoquaux
wrote:
> On Thu, Dec 08, 2011 at 10:44:23AM -0500, Yaroslav Halchenko wrote:
>> > and function calls of type 'f(arg1, arg2=1, **kwargs)'.
>> hm... not sure what you mean (there was some change on how keyword args
>> are handled but can't recall now),
On Wed, Nov 9, 2011 at 12:20 PM, Virgile Fritsch
wrote:
> Reminds me of the PR by Robert about performing clustering from similarity
> matrix or directly from the data.
> So I would be in favour of having a X_is_cov keyword.
>
> Sorry for biasing the discussion with cov_init, I answered too quickly
On Wed, Nov 9, 2011 at 10:59 AM, Lars Buitinck wrote:
> 2011/11/9 Virgile Fritsch :
>> Did you notice the `cov_init` parameter?, or maybe it was added after your
>> comment?
>
> OOPS, sorry.
In my reading of the code, cov_init is just the starting matrix; the
updating is still based on emp_cov.
J
On Wed, Nov 9, 2011 at 6:21 AM, Gael Varoquaux
wrote:
> Hi list,
>
> I'd like to ask for comments on the GraphLasso pull request that I have
> put in. I think that it is ready for merge, even though it has been in
> development for a short amount of time, because I have been working on
> similar a
On Wed, Oct 26, 2011 at 10:50 PM, Alexandre Passos
wrote:
> On Wed, Oct 26, 2011 at 22:48, Robert Layton wrote:
>> You have it correct. I haven't done this level of algebra for a while, so
>> I'll need to work it out.
>> If I have it correctly, I should be able to do:
>> log(a!) + log(b!) + log((
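Since factorials overflow quickly, the log-factorials in such expressions
are usually computed with the log-gamma function, log(n!) = gammaln(n + 1);
a small sketch (the counts a, b are illustrative):

import numpy as np
from scipy.special import gammaln

def log_factorial(n):
    return gammaln(np.asarray(n, dtype=float) + 1)

a, b = 20, 30   # e.g. contingency-table margin counts
total = log_factorial(a) + log_factorial(b)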
On Thu, Oct 20, 2011 at 10:08 AM, Mathieu Blondel wrote:
> On Thu, Oct 20, 2011 at 10:41 PM, Virgile Fritsch
> wrote:
>> In both cases, we would need to write some more lines of code to deal with a
>> np.matrix input (whether it is for converting it or rejecting it, we need to
>> test the object
On Sat, Oct 15, 2011 at 4:12 PM, Pietro Berkes wrote:
> On Sat, Oct 15, 2011 at 9:07 PM, wrote:
>> On Sat, Oct 15, 2011 at 3:57 PM, Pietro Berkes wrote:
>>> I wish there was a native numpy function for this case, which is
>>> fairly common in information theory quantities.
>>> As a workaround,
On Sat, Oct 15, 2011 at 3:57 PM, Pietro Berkes wrote:
> I wish there was a native numpy function for this case, which is
> fairly common in information theory quantities.
> As a workaround, I sometimes use these reasonably efficient utility functions:
>
> def log0(x):
> """Robust 'entropy' loga
On Thu, Oct 13, 2011 at 11:29 PM, Robert Layton wrote:
> That makes sense. I'll add an optional eps value, and handle the case of 0
> when it comes up.
> Thanks,
> Robert
>
> On 14 October 2011 14:23, Skipper Seabold wrote:
>>
>> On Thu, Oct 13, 2011 at 11:10 PM, Robert Layton
>> wrote:
>> > I'm
On Thu, Oct 13, 2011 at 11:10 PM, Robert Layton wrote:
> I'm working on adding Adjusted Mutual Information, and need to calculate the
> Mutual Information.
> I think I have the algorithm itself correct, except for the fact that
> whenever a cell of the contingency matrix is 0, a nan appears and propagates t
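A sketch of the standard fix (my own illustration): since p * log(p) -> 0
as p -> 0, the zero cells of the contingency matrix contribute nothing and
can simply be skipped, which avoids the nan without introducing an eps:

import numpy as np

def mutual_info(contingency):
    c = np.asarray(contingency, dtype=float)
    pxy = c / c.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0   # only nonzero cells contribute
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px * py)[nz]))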
On Mon, Oct 3, 2011 at 6:43 PM, Satrajit Ghosh wrote:
> hi gael,
>
>>
>> In the scikit, there is a convention that everything that is a 'score' is
>> 'bigger is better'. The reason is that it enables black box optimizers to
>> tune parameters or select models based on this score. I wouldn't like
>
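That convention is what scikit-learn's scorer API encodes; a sketch with
the present-day make_scorer, whose greater_is_better flag negates error
metrics so that "bigger is better" holds uniformly:

from sklearn.metrics import make_scorer, mean_squared_error

neg_mse = make_scorer(mean_squared_error, greater_is_better=False)
# GridSearchCV(..., scoring=neg_mse) then maximizes the negated score,
# i.e. minimizes the MSE.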
On Sun, Sep 25, 2011 at 11:12 AM, Lars Buitinck wrote:
> 2011/9/25 :
> > The predict_proba values are just nonlinear monotonic transformations of the
> > parameters. So the difference is only in specifying the convergence
> > tolerance.
>
> That's what I thought, and I'd be so lazy to let the client de
On Sun, Sep 25, 2011 at 7:57 AM, Lars Buitinck wrote:
> 2011/9/25 Mathieu Blondel :
> > On Sun, Sep 25, 2011 at 7:05 PM, Lars Buitinck
> wrote:
> >
> > That seems very similar to Kamal Nigam's semi-supervised Naive-Bayes.
>
> That's right. The first difference is the initialization, where Nigam
Can you add a 'superseded' note and a link on
http://pypi.python.org/pypi/scikits.learn/0.8.1 ?
I think I looked at the wrong pypi page this morning.
Josef
On Sat, Sep 24, 2011 at 8:33 AM, wrote:
>
>
> On Sat, Sep 24, 2011 at 8:28 AM, Vlad Niculae wrote:
>
>> Import errors were the real issue here.
On Sat, Sep 24, 2011 at 8:28 AM, Vlad Niculae wrote:
> Import errors were the real issue here. The failing test is a numerical
> stability issue that has already been raised somewhere else.
>
> Thank you for your help confirming that the latest build works, and my
> deepest apologies for the bugg
On Sat, Sep 24, 2011 at 8:08 AM, Vlad Niculae wrote:
> The date should be 24th I think since I uploaded it late at night.
> You can get it from PyPI:
> http://pypi.python.org/pypi?:action=display&name=scikit-learn&version=0.9
> I sure hope it will work, there have been two success stories on this
On Sat, Sep 24, 2011 at 7:52 AM, Vlad Niculae wrote:
> Hi Josef,
> Does this (still) happen using the installer uploaded yesterday evening?
>
I downloaded it this morning after I saw Gael's initial message, but maybe
the new file hasn't propagated yet on SourceForge:
scikit-learn-0.9.win32-py2.6
On Sat, Sep 24, 2011 at 5:20 AM, Gael Varoquaux <
gael.varoqu...@normalesup.org> wrote:
> On Thu, Sep 22, 2011 at 09:25:30AM -0700, hesety wrote:
> > >>> from sklearn import svm
>
> > Traceback (most recent call last):
> > File "<stdin>", line 1, in <module>
> > from sklearn import svm
> > File "C:\Pytho
On Fri, Sep 23, 2011 at 2:56 PM, Mathieu Blondel wrote:
> On Sat, Sep 24, 2011 at 3:17 AM, Skipper Seabold
> wrote:
>
> > I thought the main goal when we started talking about dropping scikits
> > was simply to avoid having namespace packages. It sounds like this is
> > no longer the focus?
>
> W