Hi,

I'm new to scikit-learn; I'm using linear_model.BayesianRidge to fit a 
linear model, after which I'd like to find the MAP model estimate and 
its posterior probability.

The parameters of the MAP model are right there in coef_ and intercept_. 
For its probability: since the posterior is Gaussian, the posterior peak 
value can be calculated from the learnt 'precision' values. So in this 
simple example, the last line calculates it in the log domain:

from sklearn import linear_model
from numpy import log, sqrt, pi
X = [[0., 0.], [1., 1.], [2., 2.], [3., 3.]]
Y = [0., 1., 2., 3.01]
clf = linear_model.BayesianRidge()
clf.fit(X, Y)
logmapval = (len(clf.coef_) * log(clf.lambda_) + log(clf.alpha_)
             - log(sqrt(2 * pi)))


So I'd like to confirm a couple of things please:
  * Am I right in thinking the precision lambda_ is the reciprocal of 
the variance (not the reciprocal of the stdev)?
  * alpha_ - am I right to believe this is the precision for the noise 
on Y, not the regularisation parameter? In the doc for the non-Bayesian 
RidgeRegression "alpha" is used to denote the regularisation parameter, 
but in this model the regularisation is implied by the prior on the 
coefficients.
  * Given the above, is it correct to include log(clf.alpha_) in the 
probability calculation?
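(For what it's worth, here's a quick sanity check I'd suggest — a sketch, not anything from the docs: fit BayesianRidge on synthetic data whose noise variance is known. If alpha_ really is the noise precision, i.e. the reciprocal of the noise *variance*, then 1/clf.alpha_ should land near the true variance. The data-generating numbers below are arbitrary choices for illustration.)

import numpy as np
from sklearn.linear_model import BayesianRidge

# Synthetic regression problem with known noise level
rng = np.random.RandomState(0)
n = 5000
X = rng.randn(n, 2)
true_w = np.array([1.5, -2.0])
noise_std = 0.5                      # known noise standard deviation
y = X @ true_w + noise_std * rng.randn(n)

clf = BayesianRidge()
clf.fit(X, y)

# If alpha_ is the noise precision, 1/alpha_ estimates the noise variance
noise_var_est = 1.0 / clf.alpha_
print(noise_var_est)                 # expect something near 0.25

With enough samples the estimate should sit close to noise_std**2, which would confirm alpha_ is a precision in the 1/variance sense rather than 1/stdev.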

Thanks for any advice you can give -
Dan

-- 
Dan Stowell
Postdoctoral Research Assistant
Centre for Digital Music
Queen Mary, University of London
Mile End Road, London E1 4NS
http://www.elec.qmul.ac.uk/digitalmusic/people/dans.htm
http://www.mcld.co.uk/
_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general