Re: [julia-users] Julia backslash performance vs MATLAB backslash

2015-01-06 Thread Tim Davis
Most of my code in SuiteSparse is under my copyright, not the University of
Florida.  (I'm now at Texas A&M by the way ...
http://faculty.cse.tamu.edu/davis )

Most of SuiteSparse is LGPL or GPL, but the Factorize package itself is
2-clause BSD (attached).

So you can use the Factorize package as you wish.  Factorize does
connect to sparse Cholesky (chol in MATLAB), sparse LU, etc., but those are
different packages (and all of them are GPL or LGPL).  The backslash
polyalgorithm is in Factorize itself, however, and is thus 2-clause BSD.





On Mon, Jan 5, 2015 at 10:29 PM, Viral Shah vi...@mayin.org wrote:

 This is similar to the FFTW situation, where the license is held by MIT.

 -viral

  On 06-Jan-2015, at 8:14 am, Viral Shah vi...@mayin.org wrote:
 
  I believe that it is the University of Florida that owns the copyright and
 they would lose licensing revenue. I would love it too if we could have
 these under the MIT license, but it may not be a realistic expectation.
 
  Looking at the paper is the best way to go. Jiahao has already produced
 the pseudocode in the issue, and we do similar things in our dense \.
 
  -viral
 
  On 6 Jan 2015 07:31, Kevin Squire kevin.squ...@gmail.com wrote:
  Since Tim wrote the code (presumably?), couldn't he give permission to
 license it under MIT?  (Assuming he was okay with that, of course!).
 
  Cheers,
 Kevin
 
  On Mon, Jan 5, 2015 at 3:09 PM, Stefan Karpinski ste...@karpinski.org
 wrote:
  A word of legal caution: Tim, I believe some (all?) of your SuiteSparse
 code is GPL and since Julia is MIT (although not all libraries are), we can
 look at pseudocode but not copy GPL code while legally keeping the MIT
 license on Julia's standard library.
 
  Also, thanks so much for helping with this.
 
 
  On Mon, Jan 5, 2015 at 4:09 PM, Ehsan Eftekhari e.eftekh...@gmail.com
 wrote:
  Following your advice, I tried the code again, this time I also used
 MUMPS solver from https://github.com/lruthotto/MUMPS.jl
  I used a 42x43x44 grid. These are the results:
 
  MUMPS: elapsed time: 2.09091471 seconds
  lufact: elapsed time: 5.01038297 seconds (9952832 bytes allocated)
  backslash: elapsed time: 16.604061696 seconds (80189136 bytes allocated,
 0.45% gc time)
 
  and in Matlab:
  Elapsed time is 5.423656 seconds.
 
  Thanks a lot Tim and Viral for your quick and helpful comments.
 
  Kind regards,
  Ehsan
 
 
  On Monday, January 5, 2015 9:56:12 PM UTC+1, Viral Shah wrote:
  Thanks, that is great. I was wondering about the symmetry checker - we
 have the naive one currently, but I can just use the CHOLMOD one now.
 
  -viral
 
 
 
   On 06-Jan-2015, at 2:22 am, Tim Davis da...@tamu.edu wrote:
  
   oops.  Yes, your factorize function is broken.  You might try mine
 instead, in my
   factorize package.
  
   I have a symmetry-checker in CHOLMOD.  It checks whether the matrix is
 symmetric and has a positive diagonal.  I think I have a MATLAB interface for it
 too.  The code is efficient,
   since it doesn't form A transpose, and it quits early as soon as
 asymmetry is detected.
  
   It does rely on the fact that MATLAB requires its sparse matrices to
 have sorted row indices
   in each column, however.
  
   On Mon, Jan 5, 2015 at 2:43 PM, Viral Shah vi...@mayin.org wrote:
   Tim - thanks for the reference. The paper will come in handy. This is
 a longstanding issue that we just haven’t got around to addressing yet,
 but perhaps now is a good time.
  
   https://github.com/JuliaLang/julia/issues/3295
  
   We have a very simplistic factorize() for sparse matrices that must
 have been implemented as a stopgap. This is what it currently does and that
 explains everything.
  
    # placing factorize here for now. Maybe add a new file
    function factorize(A::SparseMatrixCSC)
        m, n = size(A)
        if m == n
            Ac = cholfact(A)
            Ac.c.minor == m && ishermitian(A) && return Ac
        end
        return lufact(A)
    end
  
   -viral
  
  
  
On 06-Jan-2015, at 1:57 am, Tim Davis da...@tamu.edu wrote:
   
That does sound like a glitch in the \ algorithm, rather than in
 UMFPACK.  The OpenBLAS is pretty good.
   
This is very nice in Julia:
   
F = lufact (d["M"]) ; F \ d["RHS"]
   
That's a great idea to have a factorization object like that.  I
 have a MATLAB toolbox that does
the same thing, but it's not a built-in function inside MATLAB.
 It's written in M, so it can be slow for
small matrices.   With it, however, I can do:
   
F = factorize (A) ; % does an LU, Cholesky, QR, SVD, or
 whatever.  Uses my polyalgorithm for \.
x = F\b ;
   
I can do S = inverse(A); which returns a factorization, not an
 inverse, but with a flag
set so that S*b does A\b (and yes, S\b would do A*b, since S keeps a
 copy of A inside it, as well).
   
You can also specify the factorization, such as
   
 F=factorize(A, 'lu')
F=factorize(A,'svd') ; etc.
   
It's in SuiteSparse/MATLAB_tools/Factorize, if you're interested.
 I've suggested the same feature to The MathWorks.

Re: [julia-users] Julia backslash performance vs MATLAB backslash

2015-01-06 Thread Tim Davis
CSparse doesn't have a proper Bunch-Kaufman LDL^T with 2-by-2
pivoting.  It only allows 1-by-1 pivots, and it doesn't check whether they
are small.  MA57 is the best code out there for that, but if MUMPS can
do it then that would be the best option.  My SuiteSparse is missing this
feature.
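
For dense symmetric indefinite systems, Julia's Base already wraps LAPACK's
Bunch-Kaufman factorization (1-by-1 and 2-by-2 pivots); it is only the sparse
analogue that is missing. A minimal sketch, assuming the Julia 0.3-era bkfact
API:

    A = randn(50, 50); A = A + A'   # dense symmetric, typically indefinite
    b = randn(50)
    F = bkfact(A)                   # LAPACK sytrf: block-diagonal D, 1x1/2x2 pivots
    x = F \ b                       # solve using the Bunch-Kaufman factors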

On Tue, Jan 6, 2015 at 5:00 PM, Tony Kelman t...@kelman.net wrote:

 That's good to hear on the BSD license, and thanks for correcting our
 misunderstanding of the SuiteSparse licensing situation.

 One thing I'd like us to be clearer about in our notation for Julia is
 when we say ldltfact, we only have a modified-Cholesky, diagonal-D variant
 of the LDL^T factorization hooked up. In the Factorize paper and in Matlab,
 ldl is a Bunch-Kaufman LDL^T where D is a block-diagonal matrix with either
 1x1 or 2x2 blocks. Most of the people in this thread are well aware of
 this, but for the sake of any who aren't I'd like us to make the
 distinction more clear. Bunch-Kaufman block-diagonal LDL^T can handle
 symmetric indefinite matrices, and do useful things like give you the
 inertia (number of positive, negative, and zero eigenvalues) of the matrix.
 This is mandatory in many optimization applications. Modified-Cholesky with
 diagonal D can only handle semidefinite matrices, or with extra care, some
 special classes like quasidefinite that appear in a subset (e.g. convex
 optimization) of symmetric problems.
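
 A minimal sketch of that distinction in Julia terms, assuming the 0.3-era
 sparse API (the matrix is shifted to be positive definite so the diagonal-D
 variant applies; for an indefinite matrix this factorization can fail, where
 a Bunch-Kaufman code would pivot with 2x2 blocks instead):

     B = sprandn(100, 100, 0.05)
     S = (B + B')/2 + 100*speye(100)   # symmetric, made positive definite by the shift
     F = ldltfact(S)                   # CHOLMOD LDL^T: D is strictly diagonal
     x = F \ ones(100)                 # solve with the factorization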

 I believe CSparse does have a Bunch-Kaufman LDL^T implementation, but it's
 not as high-performance as say Cholmod or UMFPACK. MATLAB uses HSL MA57 for
 sparse ldl, which is an excellent well-regarded piece of Fortran code but
 unfortunately is not under a redistributable license. The public-domain
 MUMPS code has this functionality and is exposed by several Julia packages,
 none of which currently meet the cross-platform build system requirements
 that Base Julia has.

 -Tony


 On Tuesday, January 6, 2015 1:13:55 PM UTC-8, Stefan Karpinski wrote:

 2-clause BSD is basically MIT-equivalent, so that works.

 On Tue, Jan 6, 2015 at 2:49 PM, Tim Davis da...@tamu.edu wrote:

 Most of my code in SuiteSparse is under my copyright, not the University
 of Florida.  (I'm now at Texas A&M by the way ...
 http://faculty.cse.tamu.edu/davis )

 Most of SuiteSparse is LGPL or GPL, but the Factorize package itself is
 2-clause BSD (attached).

 So you can use the Factorize package as you wish.  Factorize does
 connect to sparse Cholesky (chol in MATLAB), sparse LU, etc., but those
 are different packages (and all of them are GPL or LGPL).  The backslash
 polyalgorithm is in Factorize itself, however, and is thus 2-clause BSD.





 On Mon, Jan 5, 2015 at 10:29 PM, Viral Shah vi...@mayin.org wrote:

 This is similar to the FFTW situation, where the license is held by MIT.

 -viral

  On 06-Jan-2015, at 8:14 am, Viral Shah vi...@mayin.org wrote:
 
  I believe that it is the University of Florida that owns the copyright
 and they would lose licensing revenue. I would love it too if we could have
 these under the MIT license, but it may not be a realistic expectation.
 
  Looking at the paper is the best way to go. Jiahao has already
 produced the pseudocode in the issue, and we do similar things in our
 dense \.
 
  -viral
 
  On 6 Jan 2015 07:31, Kevin Squire kevin@gmail.com wrote:
  Since Tim wrote the code (presumably?), couldn't he give permission
 to license it under MIT?  (Assuming he was okay with that, of course!).
 
  Cheers,
 Kevin
 
  On Mon, Jan 5, 2015 at 3:09 PM, Stefan Karpinski 
 ste...@karpinski.org wrote:
  A word of legal caution: Tim, I believe some (all?) of your
 SuiteSparse code is GPL and since Julia is MIT (although not all libraries
 are), we can look at pseudocode but not copy GPL code while legally keeping
 the MIT license on Julia's standard library.
 
  Also, thanks so much for helping with this.
 
 
  On Mon, Jan 5, 2015 at 4:09 PM, Ehsan Eftekhari e.eft...@gmail.com
 wrote:
  Following your advice, I tried the code again, this time I also used
 MUMPS solver from https://github.com/lruthotto/MUMPS.jl
  I used a 42x43x44 grid. These are the results:
 
  MUMPS: elapsed time: 2.09091471 seconds
  lufact: elapsed time: 5.01038297 seconds (9952832 bytes allocated)
  backslash: elapsed time: 16.604061696 seconds (80189136 bytes
 allocated, 0.45% gc time)
 
  and in Matlab:
  Elapsed time is 5.423656 seconds.
 
  Thanks a lot Tim and Viral for your quick and helpful comments.
 
  Kind regards,
  Ehsan
 
 
  On Monday, January 5, 2015 9:56:12 PM UTC+1, Viral Shah wrote:
  Thanks, that is great. I was wondering about the symmetry checker -
 we have the naive one currently, but I can just use the CHOLMOD one now.
 
  -viral
 
 
 
   On 06-Jan-2015, at 2:22 am, Tim Davis da...@tamu.edu wrote:
  
   oops.  Yes, your factorize function is broken.  You might try mine
 instead, in my
   factorize package.
  
   I have a symmetry-checker in CHOLMOD.  It checks whether the matrix is
 symmetric

Re: [julia-users] Re: Julia backslash performance vs MATLAB backslash

2015-01-05 Thread Tim Davis
That does sound like a glitch in the \ algorithm, rather than in
UMFPACK.  The OpenBLAS is pretty good.

This is very nice in Julia:

F = lufact (d["M"]) ; F \ d["RHS"]

That's a great idea to have a factorization object like that.  I have a
MATLAB toolbox that does
the same thing, but it's not a built-in function inside MATLAB.  It's
written in M, so it can be slow for
small matrices.   With it, however, I can do:

F = factorize (A) ; % does an LU, Cholesky, QR, SVD, or whatever.  Uses
my polyalgorithm for \.
x = F\b ;

I can do S = inverse(A); which returns a factorization, not an inverse, but
with a flag
set so that S*b does A\b (and yes, S\b would do A*b, since S keeps a copy
of A inside it, as well).
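
A rough Julia analogue of that inverse() trick, as a hypothetical sketch (the
Inverse type and inverse() function below are illustrative only, not Tim's
MATLAB code): store the factorization, and make multiplication perform the
solve.

    import Base: *, \

    immutable Inverse{TF,TA}
        F::TF   # factorization of A, e.g. from lufact
        A::TA   # keep A around so S \ b can apply A * b
    end
    inverse(A) = Inverse(lufact(A), A)

    *(S::Inverse, b::AbstractVector) = S.F \ b   # S * b behaves like A \ b
    \(S::Inverse, b::AbstractVector) = S.A * b   # S \ b behaves like A * b

Usage: S = inverse(A); x = S * b solves the system without ever forming inv(A).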

You can also specify the factorization, such as

 F=factorize(A, 'lu')
F=factorize(A,'svd') ; etc.

It's in SuiteSparse/MATLAB_tools/Factorize, if you're interested.  I've
suggested the same
feature to The MathWorks.

My factorize function includes a backslash polyalgorithm, if you're
interested in taking a look.

Algorithm 930: FACTORIZE: an object-oriented linear system solver for
MATLAB T. A. Davis, ACM Transactions on Mathematical Software, Vol 39,
Issue 4, pp. 28:1 - 28:18, 2013.
http://faculty.cse.tamu.edu/davis/publications_files/Factorize_an_object_oriented_linear_system_solver_for_MATLAB.pdf

On Mon, Jan 5, 2015 at 2:11 PM, Viral Shah vi...@mayin.org wrote:

 The BLAS will certainly make a difference, but OpenBLAS is reasonably
 good.

 I also wonder what is happening in our \ polyalgorithm. The profile
 suggests the code is trying Cholesky decomposition, but it really shouldn't
 since the matrix is not symmetric. If I just do the lufact(), which
 essentially calls Umfpack, I can match Matlab timing:

 @time F = lufact(d["M"]); F \ d["RHS"];

 -viral


 On Tuesday, January 6, 2015 12:31:34 AM UTC+5:30, Tim Davis wrote:

 The difference could be the BLAS.  MATLAB comes with its own BLAS
 library, and the performance
 of the BLAS has a huge impact on the performance of UMFPACK, particularly
 for 3D discretizations.

 On Mon, Jan 5, 2015 at 6:21 AM, Ehsan Eftekhari e.eftekh...@gmail.com
 wrote:

 I'm solving a diffusion equation in Matlab on a 3D uniform grid (31x32x33)
 and Julia. I use \ to solve the linear system of equations. Here is
 the performance of the linear solver in Julia:
 elapsed time: 2.743971424 seconds (35236720 bytes allocated)

 and Matlab (I used spparms('spumoni',1) to see what \ does in Matlab):
 sp\: bandwidth = 1056+1+1056.
 sp\: is A diagonal? no.
 sp\: is band density (0.00) > bandden (0.50) to try banded solver? no.
 sp\: is A triangular? no.
 sp\: is A morally triangular? no.
 sp\: is A a candidate for Cholesky (symmetric, real positive diagonal)?
 no.
 sp\: use Unsymmetric MultiFrontal PACKage with automatic reordering.
 sp\: UMFPACK's factorization was successful.
 sp\: UMFPACK's solve was successful.
 Elapsed time is 0.819120 seconds.

 I have uploaded the sparse matrix (M) and the right-hand side (RHS)
 vectors in a mat file here:
 https://drive.google.com/open?id=0B8OOfC6oWXEPV2xYTWFMZTljU00&authuser=0

 I read in the documents that Julia uses Umfpack for sparse matrices. My
 question is why umfpack is faster when it is called from matlab?

 The matlab and julia codes are here:
 https://drive.google.com/open?id=0B8OOfC6oWXEPbXFnYlh2TFBKV1k&authuser=0
 https://drive.google.com/open?id=0B8OOfC6oWXEPdlNfOEFKbnV5MlE&authuser=0

 and the FVM codes are here:
 https://github.com/simulkade/FVTool
 https://github.com/simulkade/JFVM

 Thanks a lot in advance,

 Ehsan


 On Wednesday, June 5, 2013 8:39:15 AM UTC+2, Viral Shah wrote:

 I guess it is the last 20 years of sparse solver work packed into one
 function. Not many fields can boast of providing this level of usability
 out of their work. :-)

 There is a class of people who believe that things like \ encourage
 blackbox usage, with people doing stuff they do not understand, and there
 are others who believe in standing on the shoulders of giants.

 I find that we have taken a good approach in Julia, where we have \ and
 it will have the perfect polyalgorithm at some point. But, you also have
 the option of digging deeper with interfaces such as lufact(), cholfact(),
 qrfact(), and finally, even if that does not work out for you, call the
 LAPACK and SuiteSparse functions directly.

 -viral

 On Wednesday, June 5, 2013 9:42:12 AM UTC+5:30, Stefan Karpinski wrote:

 Goodness. This is why there needs to be a polyalgorithm – no mortal
 user could know all of this stuff!


 On Tue, Jun 4, 2013 at 11:11 PM, Viral Shah vi...@mayin.org wrote:

 Doug,

  Ideally, the backslash needs to look for diagonal matrices, triangular
  matrices and permutations thereof, banded matrices, and least squares
  problems (non-square). In case it is square, symmetric and hermitian, with
  a heavy diagonal(?), cholesky can be attempted, with a fallback to LU. I
  believe we do some of this in the dense \ polyalgorithm, but I am not sure
  if we look for the banded cases yet.

Re: [julia-users] Re: Julia backslash performance vs MATLAB backslash

2015-01-05 Thread Tim Davis
The difference could be the BLAS.  MATLAB comes with its own BLAS library,
and the performance
of the BLAS has a huge impact on the performance of UMFPACK, particularly
for 3D discretizations.

On Mon, Jan 5, 2015 at 6:21 AM, Ehsan Eftekhari e.eftekh...@gmail.com
wrote:

 I'm solving a diffusion equation in Matlab on a 3D uniform grid (31x32x33)
 and Julia. I use \ to solve the linear system of equations. Here is
 the performance of the linear solver in Julia:
 elapsed time: 2.743971424 seconds (35236720 bytes allocated)

 and Matlab (I used spparms('spumoni',1) to see what \ does in Matlab):
 sp\: bandwidth = 1056+1+1056.
 sp\: is A diagonal? no.
 sp\: is band density (0.00) > bandden (0.50) to try banded solver? no.
 sp\: is A triangular? no.
 sp\: is A morally triangular? no.
 sp\: is A a candidate for Cholesky (symmetric, real positive diagonal)? no.
 sp\: use Unsymmetric MultiFrontal PACKage with automatic reordering.
 sp\: UMFPACK's factorization was successful.
 sp\: UMFPACK's solve was successful.
 Elapsed time is 0.819120 seconds.

 I have uploaded the sparse matrix (M) and the right-hand side (RHS)
 vectors in a mat file here:
 https://drive.google.com/open?id=0B8OOfC6oWXEPV2xYTWFMZTljU00&authuser=0

 I read in the documents that Julia uses Umfpack for sparse matrices. My
 question is why umfpack is faster when it is called from matlab?

 The matlab and julia codes are here:
 https://drive.google.com/open?id=0B8OOfC6oWXEPbXFnYlh2TFBKV1k&authuser=0
 https://drive.google.com/open?id=0B8OOfC6oWXEPdlNfOEFKbnV5MlE&authuser=0

 and the FVM codes are here:
 https://github.com/simulkade/FVTool
 https://github.com/simulkade/JFVM

 Thanks a lot in advance,

 Ehsan
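
 A hypothetical harness for reproducing this comparison, assuming the MAT.jl
 package and that the uploaded file is saved locally as data.mat with the
 variables M and RHS (the filename here is made up):

     using MAT                     # Pkg.add("MAT") if needed
     d = matread("data.mat")       # hypothetical local copy of the uploaded file
     M, RHS = d["M"], d["RHS"]
     @time x1 = M \ RHS                    # Julia's sparse backslash polyalgorithm
     @time (F = lufact(M); x2 = F \ RHS)   # UMFPACK directly, as Viral suggested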


 On Wednesday, June 5, 2013 8:39:15 AM UTC+2, Viral Shah wrote:

 I guess it is the last 20 years of sparse solver work packed into one
 function. Not many fields can boast of providing this level of usability
 out of their work. :-)

 There is a class of people who believe that things like \ encourage
 blackbox usage, with people doing stuff they do not understand, and there
 are others who believe in standing on the shoulders of giants.

 I find that we have taken a good approach in Julia, where we have \ and
 it will have the perfect polyalgorithm at some point. But, you also have
 the option of digging deeper with interfaces such as lufact(), cholfact(),
 qrfact(), and finally, even if that does not work out for you, call the
 LAPACK and SuiteSparse functions directly.
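
 The three layers in runnable form, using a small dense system; the last call
 uses the 0.3-era internal LAPACK wrapper path, so treat it as illustrative:

     A = randn(100, 100); b = randn(100)
     x1 = A \ b                    # 1. the backslash polyalgorithm
     F = lufact(A); x2 = F \ b     # 2. an explicit factorization object
     x3 = Base.LinAlg.LAPACK.gesv!(copy(A), copy(b))[1]   # 3. LAPACK directly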

 -viral

 On Wednesday, June 5, 2013 9:42:12 AM UTC+5:30, Stefan Karpinski wrote:

 Goodness. This is why there needs to be a polyalgorithm – no mortal user
 could know all of this stuff!


 On Tue, Jun 4, 2013 at 11:11 PM, Viral Shah vi...@mayin.org wrote:

 Doug,

 Ideally, the backslash needs to look for diagonal matrices, triangular
 matrices and permutations thereof, banded matrices and the least squares
 problems (non-square). In case it is square, symmetric and hermitian, with
 a heavy diagonal(?), cholesky can be attempted, with a fallback to LU. I
 believe we do some of this in the dense \ polyalgorithm, but I am not sure
 if we look for the banded cases yet.
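
 A rough sketch of that check-and-fallback logic for the sparse square case
 (illustrative only, not Base's code; the diagonal/triangular/banded fast
 paths are left as comments):

     function polybackslash(A::SparseMatrixCSC, b::Vector)
         m, n = size(A)
         m == n || error("rectangular case (least squares via QR) omitted here")
         # fast paths omitted: diagonal, (permuted) triangular, banded ...
         if ishermitian(A)
             try
                 return cholfact(A) \ b   # succeeds only for positive definite A
             catch
             end                          # indefinite: fall through to LU
         end
         return lufact(A) \ b             # UMFPACK handles the general square case
     end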

 This is what Octave does:
 http://www.gnu.org/software/octave/doc/interpreter/Sparse-Linear-Algebra.html#Sparse-Linear-Algebra

 This is Tim's Factorize for solving linear and least squares systems:
 http://www.cise.ufl.edu/research/sparse/SuiteSparse/current/SuiteSparse/MATLAB_Tools/Factorize/Doc/factorize_demo.html

 -viral


 On Tuesday, June 4, 2013 8:18:39 PM UTC+5:30, Douglas Bates wrote:

 On Thursday, May 30, 2013 10:10:59 PM UTC-5, Mingming Wang wrote:

 Hi,

  I am trying to port my MATLAB program to Julia. The for loop is about
  25% faster. But the backslash is about 10 times slower. It seems in MATLAB,
  the backslash is parallelized automatically. Is there any plan in Julia to
  do this? BTW, the matrix I am solving is sparse and symmetric.


 For a sparse symmetric matrix try

 cholfact(A)\b

 The simple

 A\b

 call will always use an LU decomposition from UMFPACK.






Re: [julia-users] Julia backslash performance vs MATLAB backslash

2015-01-05 Thread Tim Davis
oops.  Yes, your factorize function is broken.  You might try mine instead,
in my
factorize package.

I have a symmetry-checker in CHOLMOD.  It checks whether the matrix is
symmetric and has a positive diagonal.  I think I have a MATLAB interface
for it too.
The code is efficient,
since it doesn't form A transpose, and it quits early as soon as asymmetry
is detected.

It does rely on the fact that MATLAB requires its sparse matrices to have
sorted row indices
in each column, however.
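
A hand-rolled Julia analogue of that early-exit check, leaning on the same
sorted-row-index invariant (which Julia's SparseMatrixCSC shares); this is an
illustrative sketch, not the CHOLMOD code:

    # Check numerical symmetry without forming A', quitting at the first mismatch.
    function issym_early(A::SparseMatrixCSC)
        m, n = size(A)
        m == n || return false
        cp, rv, nz = A.colptr, A.rowval, A.nzval
        for j in 1:n
            for p in cp[j]:(cp[j+1]-1)
                i = rv[p]
                i == j && continue              # diagonal entries match trivially
                # binary search for the mirror entry (j,i) in column i
                col = sub(rv, cp[i]:(cp[i+1]-1))
                r = searchsortedfirst(col, j)
                (r <= length(col) && col[r] == j) || return false
                nz[cp[i]-1+r] == nz[p] || return false   # early exit on asymmetry
            end
        end
        return true
    end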

On Mon, Jan 5, 2015 at 2:43 PM, Viral Shah vi...@mayin.org wrote:

 Tim - thanks for the reference. The paper will come in handy. This is a
 longstanding issue that we just haven’t got around to addressing yet, but
 perhaps now is a good time.

 https://github.com/JuliaLang/julia/issues/3295

 We have a very simplistic factorize() for sparse matrices that must have
 been implemented as a stopgap. This is what it currently does and that
 explains everything.

 # placing factorize here for now. Maybe add a new file
 function factorize(A::SparseMatrixCSC)
     m, n = size(A)
     if m == n
         Ac = cholfact(A)
         Ac.c.minor == m && ishermitian(A) && return Ac
     end
     return lufact(A)
 end
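
 A minimal reordering of that stopgap, checking symmetry before attempting
 Cholesky (a sketch of the obvious fix, not the final Base version):

     function factorize2(A::SparseMatrixCSC)
         m, n = size(A)
         if m == n && ishermitian(A)
             try
                 return cholfact(A)   # works when A is also positive definite
             catch
             end                      # symmetric but indefinite: fall back to LU
         end
         return lufact(A)
     end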

 -viral



  On 06-Jan-2015, at 1:57 am, Tim Davis da...@tamu.edu wrote:
 
  That does sound like a glitch in the \ algorithm, rather than in
 UMFPACK.  The OpenBLAS is pretty good.
 
  This is very nice in Julia:
 
  F = lufact (d["M"]) ; F \ d["RHS"]
 
  That's a great idea to have a factorization object like that.  I have a
 MATLAB toolbox that does
  the same thing, but it's not a built-in function inside MATLAB.  It's
 written in M, so it can be slow for
  small matrices.   With it, however, I can do:
 
  F = factorize (A) ; % does an LU, Cholesky, QR, SVD, or whatever.
 Uses my polyalgorithm for \.
  x = F\b ;
 
  I can do S = inverse(A); which returns a factorization, not an inverse,
 but with a flag
  set so that S*b does A\b (and yes, S\b would do A*b, since S keeps a
 copy of A inside it, as well).
 
  You can also specify the factorization, such as
 
   F=factorize(A, 'lu')
  F=factorize(A,'svd') ; etc.
 
  It's in SuiteSparse/MATLAB_tools/Factorize, if you're interested.  I've
 suggested the same
  feature to The MathWorks.
 
  My factorize function includes a backslash polyalgorithm, if you're
 interested in taking a look.
 
  Algorithm 930: FACTORIZE: an object-oriented linear system solver for
 MATLAB T. A. Davis, ACM Transactions on Mathematical Software, Vol 39,
 Issue 4, pp. 28:1 - 28:18, 2013.
 
 http://faculty.cse.tamu.edu/davis/publications_files/Factorize_an_object_oriented_linear_system_solver_for_MATLAB.pdf
 
  On Mon, Jan 5, 2015 at 2:11 PM, Viral Shah vi...@mayin.org wrote:
  The BLAS will certainly make a difference, but OpenBLAS is reasonably
 good.
 
  I also wonder what is happening in our \ polyalgorithm. The profile
 suggests the code is trying Cholesky decomposition, but it really shouldn't
 since the matrix is not symmetric. If I just do the lufact(), which
 essentially calls Umfpack, I can match Matlab timing:
 
  @time F = lufact(d["M"]); F \ d["RHS"];
 
  -viral
 
 
  On Tuesday, January 6, 2015 12:31:34 AM UTC+5:30, Tim Davis wrote:
  The difference could be the BLAS.  MATLAB comes with its own BLAS
 library, and the performance
  of the BLAS has a huge impact on the performance of UMFPACK,
 particularly for 3D discretizations.
 
  On Mon, Jan 5, 2015 at 6:21 AM, Ehsan Eftekhari e.eftekh...@gmail.com
 wrote:
  I'm solving a diffusion equation in Matlab on a 3D uniform grid (31x32x33)
 and Julia. I use \ to solve the linear system of equations. Here is
 the performance of the linear solver in Julia:
  elapsed time: 2.743971424 seconds (35236720 bytes allocated)
 
  and Matlab (I used spparms('spumoni',1) to see what \ does in Matlab):
  sp\: bandwidth = 1056+1+1056.
  sp\: is A diagonal? no.
  sp\: is band density (0.00) > bandden (0.50) to try banded solver? no.
  sp\: is A triangular? no.
  sp\: is A morally triangular? no.
  sp\: is A a candidate for Cholesky (symmetric, real positive diagonal)?
 no.
  sp\: use Unsymmetric MultiFrontal PACKage with automatic reordering.
  sp\: UMFPACK's factorization was successful.
  sp\: UMFPACK's solve was successful.
  Elapsed time is 0.819120 seconds.
 
  I have uploaded the sparse matrix (M) and the right-hand side (RHS)
 vectors in a mat file here:
  https://drive.google.com/open?id=0B8OOfC6oWXEPV2xYTWFMZTljU00&authuser=0
 
  I read in the documents that Julia uses Umfpack for sparse matrices. My
 question is why umfpack is faster when it is called from matlab?
 
  The matlab and julia codes are here:
  https://drive.google.com/open?id=0B8OOfC6oWXEPbXFnYlh2TFBKV1k&authuser=0
  https://drive.google.com/open?id=0B8OOfC6oWXEPdlNfOEFKbnV5MlE&authuser=0
 
  and the FVM codes are here:
  https://github.com/simulkade/FVTool
  https://github.com/simulkade/JFVM
 
  Thanks a lot in advance,
 
  Ehsan
 
 
  On Wednesday, June 5, 2013 8:39:15 AM UTC+2, Viral Shah wrote:
  I