On 15.05.2012 10:00, Dave May wrote:
> Ah okay. Thanks for the timings.
>
> Have you monitored the CPU usage when you are using umfpack?
> On my machine, it's definitely not running on a single process,
> so I wouldn't consider it a sequential solver.
Yes, the CPU usage is 100% and not more.
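What typically explains the difference between these two observations: umfpack itself is serial, but if the PETSc build links it against a threaded BLAS (MKL, OpenBLAS and the like; this is only an assumption about Dave's build), the factorization can keep several cores busy. Pinning the BLAS to one thread makes the comparison strictly sequential; with ./app standing in for the actual benchmark executable:

    OMP_NUM_THREADS=1 MKL_NUM_THREADS=1 ./app -ksp_type preonly \
        -pc_type lu -pc_factor_mat_solver_package umfpack

(-pc_factor_mat_solver_package is the option name from the PETSc releases of that time; it was later renamed -pc_factor_mat_solver_type.)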
There is another sparse direct solver that PETSc supports: PaStiX. You
can try it via the configure option --download-pastix.
Xiangdong
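For what it's worth, a sketch of how that is wired in (assuming a from-source PETSc build; PaStiX needs a graph partitioner, so --download-ptscotch is added here as a guess, and ./app is again a placeholder for the benchmark executable):

    ./configure --download-pastix --download-ptscotch
    mpirun -np 4 ./app -ksp_type preonly -pc_type lu \
        -pc_factor_mat_solver_package pastix

The same switch selects umfpack, superlu, superlu_dist or mumps, so every package in the comparison can be driven from one executable.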
Thomas:
> I attached my data to this mail. For the largest matrix, umfpack failed
> after allocating 4 GB of memory. I have not tried to figure out what's the
> problem there. As you can see, for these matrices the distributed solvers
> are slower.
umfpack is a sequential package. 4 GB+ likely exceeds the memory available to
a single process.
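On the question of what actually ran in each case: adding -ksp_view to a run should make PETSc print the PC setup, including which factorization package was used, which is a quick way to confirm that the timings being compared really came from the intended solver (./app is a placeholder):

    mpirun -np 4 ./app -ksp_type preonly -pc_type lu \
        -pc_factor_mat_solver_package superlu_dist -ksp_view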
I have seen similar behaviour comparing umfpack and superlu_dist,
however the difference wasn't enormous; possibly umfpack was a factor
of 1.2-1.4 times faster on 1 - 4 cores.
What sort of time differences are you observing? Can you post the
numbers somewhere?
However, umfpack will not work on a distributed matrix.
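On posting the numbers: the usual way to collect them on the PETSc side (assuming the solves go through KSP; the profiling option was -log_summary in the releases of that era and became -log_view later; ./app is again a placeholder) is:

    mpirun -np 8 ./app -ksp_type preonly -pc_type lu \
        -pc_factor_mat_solver_package mumps -log_summary

The MatLUFactorSym, MatLUFactorNum and MatSolve rows of that report separate symbolic factorization, numeric factorization and the triangular solves, which is where the interesting differences between the packages usually show up.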
I made some comparisons of using umfpack, superlu, superlu_dist and
mumps to solve systems with sparse matrices arising from the finite element
method. The sizes of the matrices range from around 5 to more than 3
million unknowns. I used 1, 2, 4, 8 and 16 nodes to make the benchmark.
Now, I wonder why the sequential umfpack is still the fastest of these
solvers in these runs.
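Since the setup matters for interpreting such numbers, here is a minimal, self-contained sketch of the kind of driver this comparison typically uses. It is a guess at the setup, not Thomas's actual code: a 1D Laplacian stands in for the real FEM matrices, error checking is omitted, and the call signatures (KSPSetOperators, MatCreateVecs) follow recent PETSc releases, which differ slightly from the 3.2/3.3 ones current in 2012.

#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat      A;
  Vec      b, x;
  KSP      ksp;
  PC       pc;
  PetscInt i, Istart, Iend, n = 1000;

  PetscInitialize(&argc, &argv, NULL, NULL);

  /* Assemble a small 1D Laplacian as a stand-in for the FEM matrix. */
  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
  MatSetFromOptions(A);
  MatSetUp(A);
  MatGetOwnershipRange(A, &Istart, &Iend);
  for (i = Istart; i < Iend; i++) {
    if (i > 0)     MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES);
    if (i < n - 1) MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES);
    MatSetValue(A, i, i, 2.0, INSERT_VALUES);
  }
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

  MatCreateVecs(A, &x, &b);
  VecSet(b, 1.0);

  /* Direct solve: KSPPREONLY + PCLU. The factorization package
     (umfpack, superlu, superlu_dist, mumps, pastix) is chosen at run
     time with -pc_factor_mat_solver_package <name>, so one binary
     covers the whole comparison. */
  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, A);
  KSPSetType(ksp, KSPPREONLY);
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCLU);
  KSPSetFromOptions(ksp);
  KSPSolve(ksp, b, x);

  KSPDestroy(&ksp);
  VecDestroy(&x);
  VecDestroy(&b);
  MatDestroy(&A);
  PetscFinalize();
  return 0;
}

Run with, e.g., mpirun -np 4 ./app -pc_factor_mat_solver_package mumps. Note that on more than one process PETSc's built-in LU cannot factor the distributed matrix, so one of the external packages has to be selected there.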