I have a not-so-trivial but compact test case for you. I will try it out with the recipe you gave :).
Regards,

Arjen

On Mon, 21 Jul 2025 at 13:31, Andre Vehreschild <ve...@gmx.de> wrote:
> Hi all,
>
> we are looking for people having old and/or modern Fortran codes available
> that use Coarrays more or less intensively. Jerry has built a test branch
> on gcc's git, so testing is easier than usual:
>
> > For those who need some guidance to the test branch:
> >
> > $ git clone git://gcc.gnu.org/git/gcc.git
> >
> > $ cd gcc
> > $ git checkout remotes/origin/devel/gfortran-test
> > $ git switch -c gfortran-test
> >
> > Configure and build as usual in a separate directory, not the source
> > directory.
> >
> > cd .. ; mkdir build ; cd build ; ../gcc/configure --prefix=<PREFIX>
> > gmake install
> >
> > Replace <PREFIX> with a writeable full path on your system, e.g.
> > ${HOME}/gcc-16
> >
> > To use the new gcc use:
> >
> > export PATH=${HOME}/gcc-16/bin:$PATH
> >
> > and
> >
> > export LD_LIBRARY_PATH=${HOME}/gcc-16/lib64:$LD_LIBRARY_PATH
> >
> > or
> >
> > export LD_LIBRARY_PATH=${HOME}/gcc-16/lib:$LD_LIBRARY_PATH
> >
> > depending on how your OS names the library directory. Just have a look
> > into gcc-16 and use lib64 if it is present, else use lib.
>
> We'd like everyone to test the new caf_shmem library and report back any
> problems, like "does not compile", "does not run" or "hangs during
> execution". If you can narrow down the problem, that would be of great
> help. If you can also share (whether in private or in public) any code
> that has issues, please do not hesitate to contact me or the gfortran
> mailing list.
>
> To compile your Fortran coarray code, add -lcaf_shmem instead of
> -lcaf_mpi if you previously used OpenCoarrays. When using the
> OpenCoarrays compile helper `caf`, replace it with `gfortran
> -fcoarray=lib` for compiling and `gfortran -fcoarray=lib -lcaf_shmem`
> for linking.
>
> caf_shmem is a multi-process shared-memory library for using coarrays
> with gfortran from version 16 on. It can provide great speed improvements
> in comparison to MPI-based implementations, but is limited to a single
> node where all CPUs can share memory.
>
> Any feedback is greatly appreciated.
>
> Thanks and regards,
> Andre
> --
> Andre Vehreschild * Email: vehre ad gmx dot de
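P.S. For anyone who wants a quick smoke test before trying a real code: a minimal coarray program along the following lines should be enough to check that caf_shmem compiles, links and runs (the file name hello_caf.f90 is just an example, not something from the branch):

  ! hello_caf.f90 -- minimal coarray smoke test
  program hello_caf
     implicit none
     integer :: counter[*]     ! scalar coarray: one copy per image
     integer :: i

     counter = this_image()    ! every image records its own index
     sync all                  ! make all images' writes visible to image 1

     if (this_image() == 1) then
        do i = 1, num_images()
           write (*, '(a,i0,a,i0)') 'image ', i, ' reports ', counter[i]
        end do
     end if
  end program hello_caf

Compiled and linked as described above, e.g.

  gfortran -fcoarray=lib hello_caf.f90 -lcaf_shmem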