Dear PETSc team,
I have tried to load a test mesh available in Gmsh's demos directory
(share/doc/gmsh/demos/simple_geo/filter.geo, attached to this email) as a
DMPlex.
So I produced a msh4 file by doing:
gmsh -3 filter.geo -o /tmp/test.msh4
Then I used src/dm/impls/plex/tutorials/ex2.c
'--with-cuda-gencodearch=70',
--Junchao Zhang
On Tue, May 18, 2021 at 6:29 AM Mark Adams wrote:
> Damn, I am getting this problem on Summit and did a clean configure.
> I removed the Kokkos arch=70 line and added
> '--with-cudac-gencodearch=70',
>
> Any ideas?
>
> < Number of SNES
I found a variety of things on the web, below. I don't fully understand this yet,
but for the even case it seems one simply modifies the input matrix before the FFT:
http://www.fftw.org/faq/section3.html#centerorigin
https://stackoverflow.com/questions/5915125/fftshift-ifftshift-c-c-source-code
Hi Matthew,
Thank you very much for your quick answer, as always!
On Tue, May 18, 2021 at 17:46, Matthew Knepley wrote:
> On Tue, May 18, 2021 at 5:19 AM Thibault Bridel-Bertomeu <
> thibault.bridelberto...@gmail.com> wrote:
>
>> Dear all,
>>
>> I am playing around with creating DMPlex from
On Tue, May 18, 2021 at 5:19 AM Thibault Bridel-Bertomeu <
thibault.bridelberto...@gmail.com> wrote:
> Dear all,
>
> I am playing around with creating DMPlex from a periodic Gmsh mesh (still
> in the same finite volume code that solves the Euler equations) because I
> want to run some classical
Dear all,
I tried to implement the function fftshift from numpy (i.e. swap the
half-spaces of all axes) for row vectors in a matrix using the
following code
void fft_shift(Mat fft_matrix) {
    PetscScalar *mat_ptr;
    MatDenseGetArray(fft_matrix, &mat_ptr);
    PetscInt r_0, r_1;
On Tue, May 18, 2021 at 8:18 AM Karin wrote:
> Dear PETSc team,
>
> I have tried to load a test mesh available in Gmsh's demos directory
> (share/doc/gmsh/demos/simple_geo/filter.geo, attached to this email) as a
> DMPlex.
> So I produced a msh4 file by doing:
> gmsh -3 filter.geo -o
configure prints the information about CUDA at the end of its run; you can
check that output to see which gencodearch was actually used.
I have a new MR where PETSc records the gencodearch it was built with; then,
when your program starts up CUDA, it verifies that the hardware supports
the
; but I do not like the approach of having a second matrix as temporary
> > storage space. Are there more efficient approaches possible using
> > PETSc functions?
> >
> > Thanks!
> >
> > Regards,
> >
> > Roland Richter
> >
>
--
Sajid Ali (he/him) | PhD Candidate
Applied Physics
Northwestern University
s-sajid-ali.github.io
On Tue, May 18, 2021 at 12:16 PM Thibault Bridel-Bertomeu <
thibault.bridelberto...@gmail.com> wrote:
> Hi Matthew,
>
> Thank you very much for your quick answer, as always!
>
Cool.
> On Tue, May 18, 2021 at 17:46, Matthew Knepley wrote:
>
>> On Tue, May 18, 2021 at 5:19 AM Thibault