https://stats.stackexchange.com/questions/69157/why-do-we-need-to-normalize-data-before-principal-component-analysis-pca
On Thursday, May 24, 2018, 4:41:07 PM PDT, Shiheng Duan wrote:

> Hello all,
> I wonder whether it is necessary or correct to do a z-score transformation before PCA?
Hi,

That depends entirely on the nature of your data and on whether the standard deviations of the individual feature axes/columns carry some form of importance. Note that, all else being equal, PCA will bias its loadings towards columns with large standard deviations (meaning that high-variance columns will dominate the leading principal components).
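As a quick illustration of this variance bias (an illustrative sketch; the data here are synthetic, not from the example discussed):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
base = rng.normal(size=500)
# Two strongly correlated features on very different scales.
X = np.column_stack([
    base + 0.1 * rng.normal(size=500),            # std ~ 1
    100.0 * (base + 0.1 * rng.normal(size=500)),  # std ~ 100
])

# Without scaling, the first component is almost entirely the
# large-variance column.
pc1_raw = PCA(n_components=1).fit(X).components_[0]

# After z-scoring, both columns load comparably on the first component.
pc1_scaled = PCA(n_components=1).fit(StandardScaler().fit_transform(X)).components_[0]

print(np.abs(pc1_raw))     # e.g. [~0.01, ~1.00]
print(np.abs(pc1_scaled))  # e.g. [~0.71, ~0.71]
```

Whether that bias is a bug or a feature depends on whether the raw scales of your columns are meaningful; that is the judgment call you have to make before deciding to z-score.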
Hello all,

I wonder whether it is necessary or correct to do a z-score transformation before PCA? I didn't see any preprocessing of the face images in the "Faces recognition example using eigenfaces and SVMs" example, link:
I did some more tests. The issue I brought up may be related to the custom kernel.
On Thursday, May 24, 2018, 12:49:34 PM PDT, Gael Varoquaux wrote:
On Thu, May 24, 2018 at 09:35:00PM +0530, aijaz qazi wrote:
> scikit-multilearn is misleading.
Yes, but I am not sure what scikit-learn should do about this.
Gaël
___
scikit-learn mailing list
scikit-learn@python.org
https://mail.python.org/mailman/listinfo/scikit-learn
I have an SVR model that uses a custom kernel, as follows:

1)
sgk = dual_laplace_gaussian_swarm(ss)
svr_cust_sig = SVR(kernel=sgk, C=C_Value, epsilon=epsilon_value)
svr_fit = svr_cust_sig.fit(X, y)
# X is an array of shape [93, 24]; each row is a time point and the
# columns are the variables
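For context, a custom kernel passed to SVR must be a callable taking two arrays of shapes (n_samples_X, n_features) and (n_samples_Y, n_features) and returning the (n_samples_X, n_samples_Y) Gram matrix. The `dual_laplace_gaussian_swarm` factory itself isn't shown in the message, so the sketch below is a hypothetical stand-in following the same pattern (the factory name, the `gamma` parameter, and the 50/50 mix of Gaussian and Laplacian kernels are all assumptions, not the poster's code):

```python
import numpy as np
from sklearn.metrics.pairwise import laplacian_kernel, rbf_kernel
from sklearn.svm import SVR

def dual_laplace_gaussian(gamma):
    """Return a kernel callable mixing Gaussian (RBF) and Laplacian kernels.

    A sum of positive semidefinite kernels is itself a valid kernel, so
    SVR can train on the resulting Gram matrix.
    """
    def kernel(X, Y):
        return 0.5 * rbf_kernel(X, Y, gamma=gamma) + 0.5 * laplacian_kernel(X, Y, gamma=gamma)
    return kernel

rng = np.random.RandomState(0)
X = rng.normal(size=(93, 24))  # same shape as the data described above
y = rng.normal(size=93)

svr = SVR(kernel=dual_laplace_gaussian(0.1), C=1.0, epsilon=0.1)
svr.fit(X, y)
print(svr.predict(X[:3]).shape)
```

When debugging a custom kernel, it is worth checking that the callable returns a symmetric, positive semidefinite matrix for kernel(X, X); a non-PSD Gram matrix can make the SVR optimizer behave erratically.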
scikit-multilearn is misleading.

Regards,
Aijaz A. Qazi