I'd like to ask why `BayesianRidge` and `ARDRegression` check
convergence during fitting using the change in the learned coefficients
rather than the marginal log likelihood (MLL).
I know that most iterative algorithms need some objective
function against which convergence is checked.
In Bayesian inference, like variational …
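For context, the coefficient-based stopping rule the question refers to can be sketched as below. This is a minimal illustration, not the actual scikit-learn internals; the function and variable names (`converged`, `coef_old`, `coef_new`, `tol`) are hypothetical.

```python
# Hedged sketch: stop iterating when the L1 change in the coefficient
# vector between two successive iterations falls below a tolerance.

def converged(coef_old, coef_new, tol=1e-3):
    """Return True when the L1 change in the coefficients is below tol."""
    return sum(abs(a - b) for a, b in zip(coef_old, coef_new)) < tol

# Successive coefficient estimates from two iterations:
print(converged([1.0, 2.0], [1.0004, 1.9996]))  # small change -> True
print(converged([1.0, 2.0], [1.5, 2.5]))        # large change -> False
```

A rule like this only needs the coefficient vectors themselves, whereas an MLL-based check requires evaluating the marginal likelihood at each step.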
> In my case, I write library code that takes a base estimator and I
> want to inspect the parameters (I don't have control over the base
> estimator).
ok fair enough.
Is the use case something like knowing whether the regularization
parameter is `alpha` or `C`?
>>> if "C" in clf.get_params():
...     pass
> An argument in favor of making it public is that set_params is public.
2012/1/16 Mathieu Blondel :
> In my case, I write library code that takes a base estimator and I
> want to inspect the parameters (I don't have control over the base
> estimator).
>
> An argument in favor of making it public is that set_params is public.
+1
--
Lars Buitinck
Scientific programmer
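The library-code use case Mathieu describes can be sketched as follows. `DummySVC` and `MyWrapper` are hypothetical stand-ins for illustration, not scikit-learn classes; the only assumption is that the base estimator exposes a public `get_params()`.

```python
# Hedged sketch: library code that takes a base estimator it does not
# control and inspects its parameters through a public get_params().

class DummySVC:
    def __init__(self, C=1.0):
        self.C = C

    def get_params(self):
        # A well-behaved estimator exposes its constructor arguments.
        return {"C": self.C}

class MyWrapper:
    def __init__(self, base_estimator):
        self.base_estimator = base_estimator

    def describe_regularization(self):
        params = self.base_estimator.get_params()
        if "C" in params:
            return "C-style regularization, C=%r" % params["C"]
        if "alpha" in params:
            return "alpha-style regularization, alpha=%r" % params["alpha"]
        return "no known regularization parameter"

print(MyWrapper(DummySVC(C=10.0)).describe_regularization())
```

With a private `_get_params`, the wrapper above would have to reach into a leading-underscore method, which is exactly the awkwardness the rename removes.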
On Mon, Jan 16, 2012 at 11:35 PM, Alexandre Gramfort wrote:
> I am not against this but I have to admit I don't fully understand the
> motivation, as _get_params basically inspects __init__ and hence is
> already visible to the user.
In my case, I write library code that takes a base estimator and I
want to inspect the parameters (I don't have control over the base
estimator).
I am not against this but I have to admit I don't fully understand the
motivation, as _get_params basically inspects __init__ and hence is
already visible to the user.
could you write a tiny piece of code that would make the decision obvious?
Alex
On Mon, Jan 16, 2012 at 2:37 PM, Olivier Grisel wrote:
2012/1/16 Mathieu Blondel :
> On Mon, Jan 16, 2012 at 10:29 PM, Olivier Grisel wrote:
>
>> +1 for going on with the merge of ndarray / sparse matrix implementations.
>
> +1. When you have abstract code that is representation-independent,
> being able to use the same estimator transparently is a real comfort.
2012/1/16 Mathieu Blondel :
> I want to make _get_params public (i.e., rename it to get_params) and
I would +1 making this method an "optional yet recommended API for
well-behaved scikit-learn-style estimators". If we do so, we should
update the contributors' guide to make this explicit.
In particular …
On Mon, Jan 16, 2012 at 10:29 PM, Olivier Grisel wrote:
> +1 for going on with the merge of ndarray / sparse matrix implementations.
+1. When you have abstract code that is representation-independent,
being able to use the same estimator transparently is a real comfort.
> However that won't solve …
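The comfort of "representation-independent" code can be sketched as below. `MeanPredictor` and `fit_and_score` are hypothetical illustrations: because the helper only calls the estimator API and never touches `X` directly, it works unchanged for any input representation the base estimator accepts (dense arrays, sparse matrices, and so on).

```python
# Hedged sketch: abstract code that only relies on the fit/predict API,
# making no assumptions about how X is stored.

class MeanPredictor:
    def fit(self, X, y):
        # Predict the mean of y regardless of the representation of X.
        self.mean_ = sum(y) / len(y)
        return self

    def predict(self, X):
        return [self.mean_ for _ in range(len(X))]

def fit_and_score(estimator, X, y):
    # Only estimator API calls: no assumptions about X's representation.
    preds = estimator.fit(X, y).predict(X)
    return sum((p - t) ** 2 for p, t in zip(preds, y)) / len(y)

print(fit_and_score(MeanPredictor(), [[0], [1], [2]], [0.0, 1.0, 2.0]))
```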
2012/1/16 Lars Buitinck :
> 2012/1/16 Mathieu Blondel :
>> I wrote a class which takes a base estimator in its constructor. For
>> efficiency reasons, it is best if the estimator supports dense input.
>> I would thus like to issue a warning if the given estimator supports
>> only sparse input (as is the case of e.g. svm.sparse.LinearSVC).
I want to make _get_params public (i.e., rename it to get_params) and
deprecate _get_params (for backward-compatibility). Any objection?
Mathieu
--
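The proposed rename with a backward-compatible deprecation could look something like the sketch below. `Estimator` is a hypothetical stand-in, not the actual scikit-learn base class.

```python
import warnings

# Hedged sketch: make get_params the public API and keep a deprecated
# _get_params alias that warns and delegates.

class Estimator:
    def __init__(self, alpha=1.0):
        self.alpha = alpha

    def get_params(self):
        # Public API: return constructor parameters as a dict.
        return {"alpha": self.alpha}

    def _get_params(self):
        # Deprecated alias kept for backward compatibility.
        warnings.warn("_get_params is deprecated; use get_params instead",
                      DeprecationWarning)
        return self.get_params()

est = Estimator(alpha=0.5)
print(est.get_params())  # {'alpha': 0.5}
```

Old callers of `_get_params` keep working but see a `DeprecationWarning`, which is the usual pattern for an API rename.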
2012/1/16 Mathieu Blondel :
> I wrote a class which takes a base estimator in its constructor. For
> efficiency reasons, it is best if the estimator supports dense input.
> I would thus like to issue a warning if the given estimator supports
> only sparse input (as is the case of e.g. svm.sparse.LinearSVC). This …
On Mon, Jan 16, 2012 at 9:46 PM, Olivier Grisel
wrote:
> Since we dropped python 2.5 support I think we could use an
> @accept_input class decorators to make this kind of static
> declarations more syntactically pleasing.
Excellent idea!
Mathieu
Since we dropped python 2.5 support I think we could use an
@accept_input class decorators to make this kind of static
declarations more syntactically pleasing.
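One way the `@accept_input` idea could look is sketched below. Nothing like this decorator exists in scikit-learn; the names (`accept_input`, `_accepted_input`, `supports_dense`) are hypothetical.

```python
# Hedged sketch: a class decorator that records which input
# representations an estimator declares support for.

def accept_input(*kinds):
    """Attach the set of accepted input kinds ('dense', 'sparse') to a class."""
    def decorate(cls):
        cls._accepted_input = frozenset(kinds)
        return cls
    return decorate

@accept_input("dense", "sparse")
class SomeEstimator:
    pass

@accept_input("sparse")
class SparseOnlyEstimator:
    pass

def supports_dense(estimator_cls):
    # Default to an empty set when the class makes no declaration.
    return "dense" in getattr(estimator_cls, "_accepted_input", ())

print(supports_dense(SomeEstimator))        # True
print(supports_dense(SparseOnlyEstimator))  # False
```

The declaration stays purely static (a class attribute), so wrapper code can inspect it without instantiating or fitting anything.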
Hello everyone,
I wrote a class which takes a base estimator in its constructor. For
efficiency reasons, it is best if the estimator supports dense input.
I would thus like to issue a warning if the given estimator supports
only sparse input (as is the case of e.g. svm.sparse.LinearSVC). This
raises …
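The warning Mathieu describes could be implemented along these lines. The `_accepted_input` attribute is a hypothetical declaration (scikit-learn had no such mechanism at the time); everything here is an illustrative sketch.

```python
import warnings

# Hedged sketch: warn when a wrapped base estimator declares that it
# only supports sparse input.

class SparseOnlyEstimator:
    _accepted_input = frozenset(["sparse"])

def check_dense_support(estimator):
    # Assume dense support when the estimator makes no declaration.
    accepted = getattr(estimator, "_accepted_input", frozenset(["dense"]))
    if "dense" not in accepted:
        warnings.warn("estimator supports only sparse input; "
                      "expect densification overhead")

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    check_dense_support(SparseOnlyEstimator())
print(len(caught))  # 1
```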
On 01/16/2012 10:12 AM, Andreas wrote:
> On 01/16/2012 10:07 AM, Andreas wrote:
>
>> On 01/16/2012 09:44 AM, Andreas wrote:
>>
>>
>>> Hi Everybody.
>>> I'm still trying to hack at the trees. This time I stumbled across the
>>> computation of the Gini index.
>>> Could someone please explain this to me? …
On Mon, Jan 16, 2012 at 10:13:44AM +0100, Andreas wrote:
> I'm not sure I used the right preposition, though.
> Hacking at the trees probably means hacking in the woods.
> I guess I should just be hacking the trees, which
> makes more sense in either context.
Actually no, if you look up the literature …
On 01/16/2012 10:06 AM, Olivier Grisel wrote:
> 2012/1/16 Andreas:
>
>> Hi Everybody.
>> I'm still trying to hack at the trees.
>>
> Which is an etymologically valid attitude.
>
>http://etymonline.com/?search=hack
>
> As for your question, I let the tree experts answer :)
>
>
I'm …
On 01/16/2012 10:07 AM, Andreas wrote:
> On 01/16/2012 09:44 AM, Andreas wrote:
>
>> Hi Everybody.
>> I'm still trying to hack at the trees. This time I stumbled across the
>> computation of the Gini index.
>> Could someone please explain this to me?
> Hastie, Tibshirani and Friedman told me this is computed as …
On Mon, Jan 16, 2012 at 10:06:18AM +0100, Olivier Grisel wrote:
> 2012/1/16 Andreas :
> > I'm still trying to hack at the trees.
> Which is an etymologically valid attitude.
> http://etymonline.com/?search=hack
As long as you are not doing it with a tray
(http://www.youtube.com/watch?v=Sv5iEK…)
2012/1/16 Andreas :
> Hi Everybody.
> I'm still trying to hack at the trees.
Which is an etymologically valid attitude.
http://etymonline.com/?search=hack
As for your question, I let the tree experts answer :)
--
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel
On 01/16/2012 09:44 AM, Andreas wrote:
> Hi Everybody.
> I'm still trying to hack at the trees. This time I stumbled across the
> computation of the Gini index.
> Could someone please explain this to me?
> Hastie, Tibshirani and Friedman told me this is computed as
> \sum_{k} p_{mk}*(1- p_{mk})
> where k enumerates the classes and m denotes a node (I …
Hi Everybody.
I'm still trying to hack at the trees. This time I stumbled across the
computation of the Gini index.
Could someone please explain this to me?
Hastie, Tibshirani and Friedman told me this is computed as
\sum_{k} p_{mk}*(1- p_{mk})
where k enumerates the classes and m denotes a node (I …
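The quoted formula, \sum_{k} p_{mk}*(1 - p_{mk}) with p_{mk} the fraction of samples of class k in node m, is easy to compute directly. A minimal sketch (the function name `gini` is just illustrative):

```python
# Hedged sketch of the Gini index from the formula quoted above:
# sum over classes k of p_mk * (1 - p_mk), where p_mk is the class-k
# proportion in node m.

def gini(class_counts):
    total = sum(class_counts)
    props = [c / total for c in class_counts]
    return sum(p * (1.0 - p) for p in props)

print(gini([5, 5]))   # two balanced classes -> 0.5
print(gini([10, 0]))  # pure node -> 0.0
```

A pure node gives 0, and for two classes the maximum 0.5 is reached at a 50/50 split, matching the textbook definition.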