Dear Michael,
Thanks for your reply. In my case, the original coefficient matrix is very
large, with ~10,000 elements, but there are actually only a few hundred
independent elements in it, owing to a symmetry in my data.
I know how
Dear Guoqiang,
It sounds as though you could just throw all the irrelevant variables away
and then do an ordinary least squares or ridge regression on what you keep;
that is, if I understand correctly that you have already successfully
identified the support.
If this is not the case, could you try
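A minimal sketch of that first route, assuming the known support is
available as a boolean mask; the names X, y, and support_mask are
illustrative, with synthetic data standing in for the real problem:

    import numpy as np
    from sklearn.linear_model import Ridge

    # Synthetic stand-in: 100 samples, 50 features, only 5 truly relevant
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 50))
    true_coef = np.zeros(50)
    true_coef[:5] = rng.normal(size=5)
    y = X @ true_coef + 0.01 * rng.normal(size=100)

    # The support is assumed to be known in advance, as in the question
    support_mask = true_coef != 0

    # Throw the irrelevant columns away; fit ridge (or OLS) on what is kept
    model = Ridge(alpha=1e-3).fit(X[:, support_mask], y)

    # Re-embed the fitted values into the full coefficient layout
    coef_full = np.zeros(X.shape[1])
    coef_full[support_mask] = model.coef_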
Dear all,
I am using the LASSO model to optimize a huge sparse coefficient matrix, W.
Luckily, I already know how many independent elements there are and how they
are distributed in the coefficient matrix. What I want to obtain now is just
the values of those independent elements. Is there a way to define such a
model?
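For the symmetry-tied case (equal elements rather than just zeros), one
possible reparameterization, not spelled out in the thread, is to write the
full coefficient vector as a linear map T applied to the independent
elements, so that X @ w_full = (X @ T) @ w_indep and any standard solver can
work on the reduced design matrix. A sketch under that assumption, with all
names illustrative:

    import numpy as np
    from sklearn.linear_model import Lasso

    # Hypothetical tying map T: here each independent element simply appears
    # twice in the full (flattened) coefficient vector, so T is (10, 5).
    n_indep = 5
    T = np.kron(np.eye(n_indep), np.ones((2, 1)))

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, T.shape[0]))
    w_indep_true = rng.normal(size=n_indep)
    y = X @ (T @ w_indep_true) + 0.01 * rng.normal(size=200)

    # Solve for the independent elements only, then expand back
    model = Lasso(alpha=1e-3).fit(X @ T, y)
    w_full = T @ model.coef_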
On Fri, Jan 01, 2016 at 08:41:56PM +0100, Marco De Nadai wrote:
> I would expose it through a score function. In this way it can be called to
> evaluate 2 models (let's say model A with 4 params and model B with 10).
> Moreover, this could also be called by feature_selection.RFECV.
OK, but BIC is
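For reference, a rough sketch of the kind of score function being proposed,
assuming a Gaussian linear model so that BIC reduces to
n * log(RSS / n) + k * log(n); the function name and the way k is counted
are illustrative, not an existing scikit-learn API:

    import numpy as np

    def bic_score(estimator, X, y):
        """Negative BIC of a fitted linear model under a Gaussian noise
        assumption: BIC = n * log(RSS / n) + k * log(n), where k counts
        the fitted parameters. Returned negated so that greater is better,
        matching scikit-learn's scorer convention (e.g. for RFECV)."""
        n = X.shape[0]
        resid = y - estimator.predict(X)
        rss = float(resid @ resid)
        k = np.count_nonzero(estimator.coef_)
        if getattr(estimator, "fit_intercept", False):
            k += 1
        return -(n * np.log(rss / n) + k * np.log(n))

Comparing model A (4 parameters) against model B (10), as in the example
above, would then amount to calling bic_score on each fitted model and
keeping the one with the higher value.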