Ok, thanks.

On Mon, Jan 11, 2021 at 1:38 PM Matthew Knepley <[email protected]> wrote:

> On Mon, Jan 11, 2021 at 4:32 PM Sam Guo <[email protected]> wrote:
>
>> A follow-up question: if I call preallocation, is there any
>> performance difference between
>>
>> MatSetValues(mat,1,&I,1,&J,&v,INSERT_VALUES); // insert values one by one
>> (see <https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatSetValues.html#MatSetValues>)
>>
>> vs
>>
>> MatSetValues(Mat mat,PetscInt m,const PetscInt idxm[],PetscInt n,const PetscInt idxn[],const PetscScalar v[],InsertMode addv); // insert multiple values at once
>>
>> My input is three arrays (row/column/value triplets):
>>
>> vector<int> r;
>> vector<int> c;
>> vector<double> a;
>>
>> where r/c are not sorted by rows/cols. I don't want to waste memory/time
>> creating idxm/idxn unless there is a performance penalty.
>>
>>
> It is cheaper to insert many values at once than a sequence of single
> values. However, allocating memory is much, much more expensive. As with
> most performance questions, there is no substitute for experiments.
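>
> For concreteness, here is a minimal sketch of the two call patterns being
> compared, assuming the matrix is already created and preallocated and that
> the index/value arrays use PetscInt/PetscScalar; the helper names are just
> for illustration and error checking is omitted:
>
>   #include <petscmat.h>
>   #include <cstddef>
>   #include <map>
>   #include <utility>
>   #include <vector>
>
>   // Variant 1: insert the unsorted triplets one entry at a time.
>   void InsertOneByOne(Mat mat, const std::vector<PetscInt> &r,
>                       const std::vector<PetscInt> &c,
>                       const std::vector<PetscScalar> &a)
>   {
>     for (std::size_t k = 0; k < a.size(); ++k)
>       MatSetValues(mat, 1, &r[k], 1, &c[k], &a[k], INSERT_VALUES);
>   }
>
>   // Variant 2: group the triplets by row, then insert one row per call.
>   // The temporary per-row buffers are the extra memory/time referred to
>   // above.
>   void InsertByRow(Mat mat, const std::vector<PetscInt> &r,
>                    const std::vector<PetscInt> &c,
>                    const std::vector<PetscScalar> &a)
>   {
>     std::map<PetscInt,
>              std::pair<std::vector<PetscInt>, std::vector<PetscScalar>>> rows;
>     for (std::size_t k = 0; k < a.size(); ++k) {
>       rows[r[k]].first.push_back(c[k]);
>       rows[r[k]].second.push_back(a[k]);
>     }
>     for (auto &row : rows)
>       MatSetValues(mat, 1, &row.first,
>                    (PetscInt)row.second.first.size(),
>                    row.second.first.data(), row.second.second.data(),
>                    INSERT_VALUES);
>   }
>
>   // Both variants must be followed by
>   //   MatAssemblyBegin(mat, MAT_FINAL_ASSEMBLY);
>   //   MatAssemblyEnd(mat, MAT_FINAL_ASSEMBLY);
>
> Timing both variants on your actual data, as suggested above, is the only
> reliable way to see which one wins for your problem sizes.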
>
>   Thanks,
>
>      Matt
>
>
>> Thanks,
>>
>> Sam
>>
>>
>>
>> On Mon, Jan 11, 2021 at 12:27 PM Sam Guo <[email protected]> wrote:
>>
>>> Thanks!
>>>
>>> On Mon, Jan 11, 2021 at 12:25 PM Matthew Knepley <[email protected]>
>>> wrote:
>>>
>>>> On Mon, Jan 11, 2021 at 3:00 PM Sam Guo <[email protected]> wrote:
>>>>
>>>>> Dear PETSc Dev Team,
>>>>>    The documentation recommends calling  both of the above
>>>>> preallocation routines for simplicity. Do we waste memory by calling both?
>>>>>
>>>>
>>>> No. Only one will function, depending on the matrix type.
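>>>>
>>>> For example, here is a minimal sketch of the pattern the quoted
>>>> documentation recommends; the sizes N, d_nz, and o_nz are placeholders:
>>>>
>>>>   #include <petscmat.h>
>>>>
>>>>   // (inside a function, after PetscInitialize)
>>>>   Mat      A;
>>>>   PetscInt N = 100, d_nz = 5, o_nz = 2;  // placeholder sizes
>>>>
>>>>   MatCreate(PETSC_COMM_WORLD, &A);
>>>>   MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, N, N);
>>>>   MatSetType(A, MATAIJ);
>>>>   // Only the call matching the run-time type takes effect:
>>>>   // MatSeqAIJSetPreallocation on one process, MatMPIAIJSetPreallocation
>>>>   // otherwise. The non-matching call is ignored, so calling both does
>>>>   // not allocate anything extra.
>>>>   MatSeqAIJSetPreallocation(A, d_nz, NULL);
>>>>   MatMPIAIJSetPreallocation(A, d_nz, NULL, o_nz, NULL);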
>>>>
>>>>   Thanks,
>>>>
>>>>      Matt
>>>>
>>>>
>>>>> Thanks,
>>>>> Sam
>>>>> MATAIJ = "aij" - A matrix type to be used for sparse matrices. This
>>>>> matrix type is identical to MATSEQAIJ when constructed with a single
>>>>> process communicator, and MATMPIAIJ otherwise. As a result, for single
>>>>> process communicators, MatSeqAIJSetPreallocation() is supported, and
>>>>> similarly MatMPIAIJSetPreallocation() is supported for communicators
>>>>> controlling multiple processes. It is recommended that you call both of
>>>>> the above preallocation routines for simplicity.
>>>>> <https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MATAIJ.html#MATAIJ>
>>>>>
>>>>
>>>>
>>>> --
>>>> What most experimenters take for granted before they begin their
>>>> experiments is infinitely more interesting than any results to which their
>>>> experiments lead.
>>>> -- Norbert Wiener
>>>>
>>>> https://www.cse.buffalo.edu/~knepley/
>>>>
>>>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
>
