Fande or John,
Could one of you give it a try? Thanks.
--Junchao Zhang
---------- Forwarded message ---------
From: Junchao Zhang <jczh...@mcs.anl.gov>
Date: Thu, Jul 4, 2019 at 8:21 AM
Subject: Re: [petsc-dev] Slowness of PetscSortIntWithArrayPair in MatAssembly
To: Fande Kong <fdkong...@gmail.com>
Fande,
I wrote tests but could not reproduce the error. I pushed a commit that
changed the
Could you debug it or paste the stack trace? Since it is a segfault, it should
be easy.
--Junchao Zhang
On Wed, Jul 3, 2019 at 5:16 PM Fande Kong <fdkong...@gmail.com> wrote:
Thanks Junchao,
But there is still a segmentation fault. I guess you could write some consecutive
integers to test your
Fande and John,
Could you try jczhang/feature-better-quicksort-pivot? It passed Jenkins tests
and I could not imagine why it failed on yours.
A hash table has its own cost. We'd better get quicksort right and see how it
performs before rewriting the code.
--Junchao Zhang
On Tue, Jul 2, 2019 at 2:
John Peterson writes:
>> Do you add values many times into the same location? The array length
>> will be the number of misses to the local part of the matrix. We could
>> (and maybe should) make the stash use a hash instead of building the
>> array with multiplicity and combining duplicates la
Try this to see if it helps:
diff --git a/src/sys/utils/sorti.c b/src/sys/utils/sorti.c
index 1b07205a..90779891 100644
--- a/src/sys/utils/sorti.c
+++ b/src/sys/utils/sorti.c
@@ -294,7 +294,8 @@ static PetscErrorCode PetscSortIntWithArrayPair_Private(PetscInt *L,PetscInt *J,
   }
   PetscFun
John Peterson writes:
> On Tue, Jul 2, 2019 at 1:44 PM Jed Brown wrote:
>
>> Fande Kong via petsc-dev writes:
>>
>> > Hi Developers,
>> >
>> > John just noticed that the matrix assembly was slow when having a
>> > sufficient amount of off-diagonal entries. It was not an MPI issue since I was a
Fande Kong via petsc-dev writes:
> Hi Developers,
>
> John just noticed that the matrix assembly was slow when having a sufficient
> amount of off-diagonal entries. It was not an MPI issue since I was able to
> reproduce the issue using two cores on my desktop, that is, "mpirun -n 2".
>
> I turned
Is it because the array is already sorted?
--Junchao Zhang
On Tue, Jul 2, 2019 at 12:13 PM Fande Kong via petsc-dev <petsc-dev@mcs.anl.gov> wrote:
Hi Developers,
John just noticed that the matrix assembly was slow when having a sufficient
amount of off-diagonal entries. It was not an MPI i