[petsc-users] Using direct solvers in parallel

2012-05-15 Thread Anton Popov
On 5/15/12 9:25 AM, Thomas Witkowski wrote: > I made some comparisons of using umfpack, superlu, superlu_dist and mumps to solve systems with sparse matrices arising from the finite element method. The size of the matrices ranges from around 5 to more than 3 million unknowns. I used 1, 2, 4
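All four packages discussed here are driven through the same PETSc interface: a KSP of type preonly with an LU preconditioner, where the factorization package is picked either in code or on the command line. Below is a minimal sketch, written against a recent PETSc (PetscCall() and PCFactorSetMatSolverType(); the spelling current in 2012 was PCFactorSetMatSolverPackage() and -pc_factor_mat_solver_package). It is an illustration, not code posted in the thread.

#include <petscksp.h>

/* Solve A x = b with a direct solver chosen at run time, e.g.
 *   ./solve -pc_factor_mat_solver_type mumps          (parallel)
 *   ./solve -pc_factor_mat_solver_type superlu_dist   (parallel)
 *   ./solve -pc_factor_mat_solver_type umfpack        (sequential only)
 */
PetscErrorCode SolveDirect(Mat A, Vec b, Vec x)
{
  KSP ksp;
  PC  pc;

  PetscFunctionBeginUser;
  PetscCall(KSPCreate(PetscObjectComm((PetscObject)A), &ksp));
  PetscCall(KSPSetOperators(ksp, A, A));
  PetscCall(KSPSetType(ksp, KSPPREONLY));   /* no Krylov iterations, factor + solve only */
  PetscCall(KSPGetPC(ksp, &pc));
  PetscCall(PCSetType(pc, PCLU));
  PetscCall(PCFactorSetMatSolverType(pc, MATSOLVERMUMPS)); /* a default; the command-line option overrides it */
  PetscCall(KSPSetFromOptions(ksp));        /* lets -pc_factor_mat_solver_type select the package */
  PetscCall(KSPSolve(ksp, b, x));
  PetscCall(KSPDestroy(&ksp));
  PetscFunctionReturn(PETSC_SUCCESS);
}

Because the package is selected at run time, the same executable can be rerun with umfpack on one process and with superlu_dist or mumps on several, which is exactly the comparison being made in this thread.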

[petsc-users] Using direct solvers in parallel

2012-05-15 Thread Thomas Witkowski
On 15.05.2012 10:00, Dave May wrote: > Ah okay. Thanks for the timings. > Have you monitored the CPU usage when you're using umfpack? > On my machine, it's definitely not running on a single process, so I wouldn't consider it a sequential solver. Yes, the CPU usage is 100% and not more. If

[petsc-users] Using direct solvers in parallel

2012-05-15 Thread Dave May
Ah okay. Thanks for the timings. Have you monitored the CPU usage when you're using umfpack? On my machine, it's definitely not running on a single process, so I wouldn't consider it a sequential solver. On 15 May 2012 09:54, Thomas Witkowski wrote: > On 15.05.2012 09:36, Dave May wrote:

[petsc-users] Using direct solvers in parallel

2012-05-15 Thread Xiangdong Liang
There is another sparse direct solver that PETSc supports: PaStiX. You can try it via --download-pastix. Xiangdong On Tue, May 15, 2012 at 5:34 AM, Anton Popov wrote: > On 5/15/12 9:25 AM, Thomas Witkowski wrote: >> I made some comparisons of using umfpack, superlu, superlu_dist and mumps
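To make the PaStiX suggestion concrete: the package is enabled when PETSc is configured and is then selected like any other factorization backend. The sketch below uses current PETSc spellings (the 2012-era runtime option was -pc_factor_mat_solver_package) and assumes PTScotch is pulled in as the ordering library, which is the usual pairing; treat it as an illustration rather than a quote from the thread.

/* 1. Build PETSc with PaStiX support (done once, outside the program):
 *      ./configure --download-pastix --download-ptscotch ...
 *
 * 2. Select PaStiX as the LU backend, either in code as below or at run time:
 *      -ksp_type preonly -pc_type lu -pc_factor_mat_solver_type pastix
 */
#include <petscksp.h>

PetscErrorCode UsePastix(PC pc)
{
  PetscFunctionBeginUser;
  PetscCall(PCSetType(pc, PCLU));
  PetscCall(PCFactorSetMatSolverType(pc, MATSOLVERPASTIX));
  PetscFunctionReturn(PETSC_SUCCESS);
}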

[petsc-users] Using direct solvers in parallel

2012-05-15 Thread Thomas Witkowski
On 15.05.2012 09:36, Dave May wrote: > I have seen similar behaviour comparing umfpack and superlu_dist, however the difference wasn't enormous; possibly umfpack was a factor of 1.2-1.4 times faster on 1-4 cores. > What sort of time differences are you observing? Can you post the numbers
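For producing comparable numbers across packages and core counts, PETSc's built-in logging is the usual tool: run with -log_view (named -log_summary in the releases current at the time of this thread), and if factorization and solve time should be reported separately, push named log stages around them. A minimal sketch under those assumptions:

#include <petscksp.h>

/* Wrap the factorization and the triangular solves in named log stages so that
 * -log_view (or -log_summary in older PETSc) reports their times separately. */
PetscErrorCode TimedSolve(KSP ksp, Vec b, Vec x)
{
  PetscLogStage setup, solve;

  PetscFunctionBeginUser;
  PetscCall(PetscLogStageRegister("Factorization", &setup));
  PetscCall(PetscLogStageRegister("Solve", &solve));

  PetscCall(PetscLogStagePush(setup));
  PetscCall(KSPSetUp(ksp));           /* symbolic + numeric factorization happens here */
  PetscCall(PetscLogStagePop());

  PetscCall(PetscLogStagePush(solve));
  PetscCall(KSPSolve(ksp, b, x));     /* forward/backward substitution only */
  PetscCall(PetscLogStagePop());
  PetscFunctionReturn(PETSC_SUCCESS);
}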

[petsc-users] Using direct solvers in parallel

2012-05-15 Thread Hong Zhang
Thomas: > I attached my data to this mail. For the largest matrix, umfpack failed after allocating 4 GB of memory. I have not tried to figure out what the problem is there. As you can see, for these matrices the distributed solvers are ... umfpack is a sequential package. 4GB+ likely exce
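Hong's point is that a sequential package must hold the entire factorization on one process, so memory rather than time is usually what fails first. To see how much resident memory the factorization actually takes per process, PETSc can report it; the sketch below uses PetscMemoryGetCurrentUsage() from current PETSc as an illustration, not a routine mentioned in the thread.

#include <petscksp.h>

/* Print the current resident memory per process, e.g. right after KSPSetUp(),
 * to see how close a sequential package such as UMFPACK gets to the RAM of one node. */
PetscErrorCode ReportMemory(MPI_Comm comm)
{
  PetscLogDouble mem;

  PetscFunctionBeginUser;
  PetscCall(PetscMemoryGetCurrentUsage(&mem));
  PetscCall(PetscSynchronizedPrintf(comm, "resident memory: %g MB\n", (double)(mem / 1048576.0)));
  PetscCall(PetscSynchronizedFlush(comm, PETSC_STDOUT));
  PetscFunctionReturn(PETSC_SUCCESS);
}

In recent releases the option -memory_view gives a similar per-process summary at the end of the run without any code changes.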

[petsc-users] Using direct solvers in parallel

2012-05-15 Thread Dave May
I have seen similar behaviour comparing umfpack and superlu_dist, however the difference wasn't enormous; possibly umfpack was a factor of 1.2-1.4 times faster on 1-4 cores. What sort of time differences are you observing? Can you post the numbers somewhere? However, umfpack will not work on a di

[petsc-users] Using direct solvers in parallel

2012-05-15 Thread Thomas Witkowski
I made some comparisons of using umfpack, superlu, superlu_dist and mumps to solve systems with sparse matrices arising from the finite element method. The size of the matrices ranges from around 5 to more than 3 million unknowns. I used 1, 2, 4, 8 and 16 nodes to make the benchmark. Now, I wond
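A self-contained driver for this kind of benchmark can keep the solver choice entirely on the command line, so the same executable is rerun with different packages and process counts. The sketch below is one possible setup, written against current PETSc (PetscCall(), -pc_factor_mat_solver_type); the file name system.dat and the option -f are placeholders, not details from the thread.

#include <petscksp.h>

/* Minimal benchmark driver: load A and b from a PETSc binary file and solve
 * with whatever direct solver is requested on the command line, e.g.
 *   mpiexec -n 4 ./bench -f system.dat -ksp_type preonly -pc_type lu \
 *           -pc_factor_mat_solver_type superlu_dist -log_view
 */
int main(int argc, char **argv)
{
  Mat         A;
  Vec         b, x;
  KSP         ksp;
  PetscViewer viewer;
  char        file[PETSC_MAX_PATH_LEN];
  PetscBool   flg;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCall(PetscOptionsGetString(NULL, NULL, "-f", file, sizeof(file), &flg));
  PetscCheck(flg, PETSC_COMM_WORLD, PETSC_ERR_USER, "Specify the linear system with -f <file>");

  PetscCall(PetscViewerBinaryOpen(PETSC_COMM_WORLD, file, FILE_MODE_READ, &viewer));
  PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
  PetscCall(MatSetFromOptions(A));
  PetscCall(MatLoad(A, viewer));
  PetscCall(VecCreate(PETSC_COMM_WORLD, &b));
  PetscCall(VecSetFromOptions(b));
  PetscCall(VecLoad(b, viewer));
  PetscCall(PetscViewerDestroy(&viewer));
  PetscCall(VecDuplicate(b, &x));

  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetOperators(ksp, A, A));
  PetscCall(KSPSetFromOptions(ksp)); /* solver, package, and logging all come from the options */
  PetscCall(KSPSolve(ksp, b, x));

  PetscCall(KSPDestroy(&ksp));
  PetscCall(VecDestroy(&x));
  PetscCall(VecDestroy(&b));
  PetscCall(MatDestroy(&A));
  PetscCall(PetscFinalize());
  return 0;
}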