Hi Sujit,
Firstly, I must express my deep appreciation and respect for your kind help 
and excellent knowledge. If you and I were in the same place, I would come to 
thank you in person.
I will try your approach with Spark MLlib SVD.
For Linear Regression, Ax = b, I actually want to view the variables and 
coefficients the other way around, as in (1): x1 * a1 + x2 * a2 + ... + xn * an = b, 
which is a single linear formula. The training data set consists of n point 
tuples [a1i, a2i, ..., ani, bi] taken directly from [A, b] (the variables), and 
the coefficient vector x = [x1, x2, ..., xn]^T could then be obtained by MLlib 
linear regression.
 
However, when I tested Spark MLlib linear regression with point tuples of 
dimension greater than 6, it needed more than 100,000 iterations to reach a 
sufficiently accurate solution for the coefficients. The time cost is too 
high, and it would become tremendous when the dimension is in the hundreds.
In effect, I am working on an optimization problem with a specific model not in 
MLlib: an objective quadratic function f(x1, x2, ..., xn) with many linear 
constraint conditions. I use the Lagrange method to convert the problem into a 
linear system of equations. My last question is whether Spark is well suited to 
this kind of optimization, whether I should directly use 
org.apache.spark.mllib.optimization, or some other way, or whether Spark is 
simply not very convenient for this application...
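As a sanity check of the Lagrange conversion described above, here is a minimal NumPy sketch (not Spark, and with a made-up tiny problem): an equality-constrained quadratic objective is reduced, via the Lagrange stationarity conditions, to one symmetric linear (KKT) system that can then be solved by any linear solver.

```python
import numpy as np

# Hypothetical small example: minimize f(x) = 1/2 x^T Q x + c^T x
# subject to the linear constraint A x = b, via Lagrange multipliers.
# Stationarity of L(x, l) = f(x) + l^T (A x - b) gives the KKT system:
#   [ Q  A^T ] [ x ]   [ -c ]
#   [ A   0  ] [ l ] = [  b ]
Q = np.array([[2.0, 0.0], [0.0, 2.0]])   # positive-definite quadratic term
c = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]])               # one constraint: x1 + x2 = 1
b = np.array([1.0])

n, m = Q.shape[0], A.shape[0]
K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
rhs = np.concatenate([-c, b])
sol = np.linalg.solve(K, rhs)            # solve the combined linear system
x, lam = sol[:n], sol[n:]
print(x)        # minimizer of the constrained problem
print(A @ x)    # satisfies the constraint, equals b
```

For this toy problem the solver returns x = [0, 1], which satisfies both the stationarity equations and the constraint; the same reduction scales to many constraints, where the KKT matrix just grows accordingly.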
Thank you very much!
Zhiliang

     On Saturday, October 24, 2015 12:41 AM, Sujit Pal <sujitatgt...@gmail.com> 
wrote:
   

 Hi Zhiliang,
For a system of equations AX = y, Linear Regression gives you a best-fit 
estimate of the coefficient vector A for a matrix of feature variables X and 
the corresponding target variable y over a subset of your data. OTOH, what you 
are looking for here is to solve for x in a system of equations Ax = b, where A 
and b are known and you want the vector x.
This Math Stackexchange page [2] explains the math in more detail, but 
basically...
A * x = b can be rewritten as x = A.I * b, where A.I is the pseudo-inverse of 
A. You can get the pseudo-inverse of A using SVD (Spark MLlib supports SVD 
[1]). The SVD decomposition writes A as a product of three matrices:
A = U * S * V.T
and the pseudo-inverse can be written as:
A.I = V * S.I * U.T
where S.I is obtained by inverting the nonzero singular values on the diagonal 
of S.
Then x can be found by multiplying A.I with b.
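To make the steps above concrete, here is a small NumPy illustration of the same math (in Spark you would instead get U, s, and V from RowMatrix.computeSVD and do the multiplications with distributed matrices; the matrix A below is a made-up example):

```python
import numpy as np

A = np.array([[4.0, 0.0],
              [0.0, 2.0],
              [0.0, 0.0]])          # hypothetical 3x2 system matrix
b = np.array([8.0, 2.0, 0.0])

# SVD decomposition: A = U * S * V.T
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Pseudo-inverse: A.I = V * S.I * U.T, inverting the nonzero singular values
S_inv = np.diag(1.0 / s)
A_pinv = Vt.T @ S_inv @ U.T

x = A_pinv @ b                      # least-squares solution of A x = b
print(x)
```

The result matches NumPy's built-in np.linalg.pinv(A) @ b; for rank-deficient matrices only the nonzero singular values should be inverted, which is what makes the pseudo-inverse well defined even when a true inverse does not exist.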
-sujit
[1] https://spark.apache.org/docs/1.2.0/mllib-dimensionality-reduction.html
[2] http://math.stackexchange.com/questions/458404/how-can-we-compute-pseudoinverse-for-any-matrix

On Fri, Oct 23, 2015 at 2:19 AM, Zhiliang Zhu <zchl.j...@yahoo.com> wrote:

Hi Sujit, and All,
Currently I am in great difficulty, and I am eager to get some help from you.
There is a big linear system of equations: Ax = b, where A has N rows and N 
columns, N is very large, and b = [0, 0, ..., 0, 1]^T. I need to solve it to 
get x = [x1, x2, ..., xn]^T.
The simple solution would be to get inverse(A) and then x = inverse(A) * b. 
A would be some JavaRDD<Iterable<Double>>; however, while RDD/matrix has 
add/multiply/transpose APIs, there is no inverse API!
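As an aside on the math (a plain NumPy sketch, not distributed Spark): even when an inverse exists, one normally avoids forming inverse(A) explicitly and instead solves the system directly, which is cheaper and numerically more stable. The matrix below is a made-up small example in the shape of the problem described above:

```python
import numpy as np

N = 4
A = np.array([[4.0, 1.0, 0.0, 0.0],
              [1.0, 3.0, 1.0, 0.0],
              [0.0, 1.0, 3.0, 1.0],
              [0.0, 0.0, 1.0, 2.0]])
b = np.zeros(N)
b[-1] = 1.0                     # b = [0, ..., 0, 1]^T as in the question

x = np.linalg.solve(A, b)       # direct solve, no explicit inverse formed
print(np.allclose(A @ x, b))    # the solution satisfies the system
```

A direct solve is O(N^3) like inversion, but with a smaller constant and better conditioning; for truly large N the distributed SVD route discussed in the reply above is the more practical option.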
Then, how could I conveniently get inverse(A), or solve the linear system of 
equations some other way? In Spark MLlib there is linear regression, whose 
training process solves for the coefficients of a specific linear model, that 
is, Ax = y, trained on (x, y) pairs to get A. Could this be used to solve the 
linear system of equations? I could not decide.
I must show my deep appreciation for all your help.
Thank you very much!
Zhiliang





  
