Hi.

I ran across the same issue before, but after some tests I still can't 
beat the speed of the built-in K = sparse(I, J, V). If the local element 
matrices are expensive to compute, the assembly of I, J and V can be done 
in parallel, but the overall speed will still be limited by the memory 
copies.
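To make that concrete, here is a minimal sketch of the pattern, using 0.4-era SharedArrays and @parallel. The 1D mesh, the element count, and the constant 2x2 "element matrix" are made-up placeholders standing in for an expensive local computation; only the structure (fill I, J, V in parallel, then call sparse once) is the point.

```julia
addprocs(3)  # spawn worker processes on the shared-memory machine

nelem = 100            # number of elements (placeholder)
ndof  = nelem + 1      # 1D mesh: element e couples dofs e and e+1

entries = 4 * nelem    # each 2x2 element matrix contributes 4 triplets
I = SharedArray(Int, entries)
J = SharedArray(Int, entries)
V = SharedArray(Float64, entries)

# each element writes into its own disjoint slice of I, J, V,
# so no locking is needed between processes
@sync @parallel for e = 1:nelem
    dofs = [e, e + 1]
    Ke = [1.0 -1.0; -1.0 1.0]   # stand-in for an expensive local matrix
    k = 4 * (e - 1)
    for a = 1:2, b = 1:2
        k += 1
        I[k] = dofs[a]
        J[k] = dofs[b]
        V[k] = Ke[a, b]
    end
end

# sparse() sums duplicate (i, j) pairs, which performs the assembly;
# sdata() exposes the underlying arrays of the SharedArrays
K = sparse(sdata(I), sdata(J), sdata(V), ndof, ndof)
```

Note that the final sparse() call runs serially on the master process over the full triplet arrays, which is exactly the memory-copy cost mentioned above.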



On Wednesday, October 21, 2015 at 5:11:35 AM UTC-2, mac wrote:


> Hi, 
>
> I am trying to speed up my Julia finite element code. Right now I use the 
> built-in sparse solver to solve the linear system in parallel, and the 
> solving step is very fast. But my system matrix assembly is done serially 
> using a single process, and it's slow. I would like to speed this up by 
> assembling the system matrix and vector in parallel. I am executing the 
> code on a shared-memory machine (a 12-core workstation). Can someone give 
> me a very simple example of the following to help me get started: 
>
> Let's say we have three matrices: A (dense), B (dense) and C (sparse). 
> All three can be shared arrays. I would like to have several processes 
> running in parallel that fetch a set of elements from A and B, do some 
> simple arithmetic, and store the results into the sparse matrix C. 
>
> I am treating A as a matrix containing the nodal coordinates and B as 
> containing the element info. Starting from such an example, I would 
> eventually convert my code so that each process computes an element 
> matrix and assembles it into the big sparse system matrix in parallel. 
> Is this approach efficient? 
>
> Thank you. 
>
