In which version of sklearn are the above-mentioned 'make_pipeline' and
'make_union' defined?
When I read through some examples, the idea of using FeatureUnion and
Pipeline seems easy enough, I guess. The former combines the features obtained
from each of the individual estimators given as input, whereas the latter u
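For reference, a minimal sketch of how the two helpers compose estimators
(the PCA / SelectKBest / LogisticRegression choices here are only
placeholders, not anything from the thread):

import numpy as np
from sklearn.pipeline import make_pipeline, make_union
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest
from sklearn.linear_model import LogisticRegression

# make_union builds a FeatureUnion: the transformers run side by side and
# their outputs are concatenated column-wise
features = make_union(PCA(n_components=2), SelectKBest(k=1))

# make_pipeline builds a Pipeline: the steps run one after another
model = make_pipeline(features, LogisticRegression())

X = np.random.RandomState(0).randn(50, 4)
y = (X[:, 0] > 0).astype(int)
model.fit(X, y)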
Kyle is facing the same question for his incremental PCA PR:
https://github.com/scikit-learn/scikit-learn/pull/3285
On Monday, June 30, 2014, Michael Eickenberg wrote:
Hi Sean,
this has been mentioned in an issue
https://github.com/scikit-learn/scikit-learn/pull/3107 along with the
changes necessary to invert the whitening properly (if you look at files
changed).
While we are at it, Alex Gramfort asked me to ask whether anybody sees good
reasons *not* to invert
The model selection properties of the lasso are such that it will select a
certain number of variables that suit it best. If you like, you can run a
lasso, take the maximum degree among the active (nonzero) terms, and call it
the polynomial order of your problem.
I maintain that the set
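A sketch of that suggestion (the data, the degree=3 cap and the CV settings
are made up for illustration):

import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.RandomState(0)
x = np.linspace(-1, 1, 200)
y = 1.0 + 2.0 * x - 3.0 * x**2 + 0.05 * rng.randn(x.size)

# columns of Xpoly: 1, x, x^2, x^3
Xpoly = PolynomialFeatures(degree=3).fit_transform(x[:, None])
lasso = LassoCV(cv=5).fit(Xpoly[:, 1:], y)  # drop the constant column, LassoCV fits the intercept

active_degrees = np.flatnonzero(lasso.coef_) + 1  # column i corresponds to degree i + 1
order = active_degrees.max() if active_degrees.size else 0
print("effective polynomial order:", order)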
Hi
Why don't PCA and Probabilistic PCA calculate the inverse transform
properly when whitening is enabled? AFAIK, all that is required is to (in
addition) multiply by explained_variance?
sean
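As far as I understand it, the missing piece amounts to something like this
(note it is the square root of explained_variance_ that restores the scale;
the data below are just random for illustration):

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.randn(100, 5)

pca = PCA(n_components=3, whiten=True).fit(X)
Xw = pca.transform(X)

# undo the whitening, then the usual inverse mapping, then re-add the mean
X_rec = np.dot(Xw * np.sqrt(pca.explained_variance_), pca.components_) + pca.mean_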
Note 2: In summary, I want the coefficients a_i without having to pre-define
either the degree of the polynomial to fit (n) or the amount of
regularization to apply (alpha), always preferring the simpler model
(fewer coefficients).
-fernando
On Sun, Jun 29, 2014 at 6:52 PM, Fernando Paolo wrote:
Michael and Mathieu, thanks for your answers!
Perhaps I should explain my problem better, so you may have a better
suggestion on how to approach it. I have several datasets of the form f =
y(x), and I need to fit a 'linear', 'quadratic' or 'cubic' polynomial to
these data. So I want to (i) *automa
Hi,
Suppose I wanted to test the independence of two boolean variables using
Chi-Square:
>>> X = numpy.vstack(([[0,0]] * 18, [[0,1]] * 7, [[1,0]] * 42, [[1,1]] * 33))
>>> X.shape
(100, 2)
I'd like to understand the difference between doing:
>>> sklearn.feature_selection.chi2(X[:,[0]], X[:,1])
(
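For comparison, the textbook independence test on the 2x2 contingency table
would look like this (a sketch using scipy's chi2_contingency, which, as far
as I know, is not exactly what sklearn's chi2 computes):

import numpy as np
from scipy.stats import chi2_contingency

# counts of the (first variable, second variable) pairs from X above
table = np.array([[18, 7],
                  [42, 33]])
chi2_stat, p_value, dof, expected = chi2_contingency(table)
print(chi2_stat, p_value)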
Hi Fernando,
On Sun, Jun 29, 2014 at 1:53 PM, Fernando Paolo wrote:
> Hello,
>
> I must be missing something obvious because I can't find the "actual"
> coefficients of the polynomial fitted using LassoCV. That is, for a 3rd
> degree polynomial
>
> p = a0 + a1 * x + a2 * x^2 + a3 * x^3
>
> I w
Running your code and then plotting
plt.plot(x, lasso_predict)
plt.plot(x, y)
shows a pretty good correspondence (r^2 = .97)!
I think the problem is that Xpoly does not contain the raw polynomial terms
that you use for prediction.
If you do
p_lasso = a[0] + a[1] * Xpoly[:, 1] + a[2] * Xpoly[:, 2] + a[3]
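that is, rebuild the curve from the same design-matrix columns that went into
the fit rather than from raw powers of x. A self-contained toy version of that
point (names and data made up, not your actual script):

import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

x = np.linspace(-1, 1, 200)
y = 1.0 - 2.0 * x + 0.5 * x**3

# standardized polynomial columns, as Xpoly might have been built
Xpoly = StandardScaler().fit_transform(
    PolynomialFeatures(degree=3, include_bias=False).fit_transform(x[:, None]))
lasso = LassoCV(cv=5).fit(Xpoly, y)

p_raw = lasso.intercept_ + np.dot(np.column_stack([x, x**2, x**3]), lasso.coef_)  # raw powers: wrong scale
p_fit = lasso.intercept_ + np.dot(Xpoly, lasso.coef_)  # the columns actually used in the fit
print(np.allclose(p_fit, lasso.predict(Xpoly)))  # True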