Thanks, I have verified that PetscPrintf with an installed lib works as
expected on my Mac (on ex73f90t).
I'll keep looking on Cori.
Thanks,
Mark
On Mon, Mar 16, 2020 at 10:37 PM Jacob Faibussowitsch wrote:
> May be an issue printing to STDOUT; HPC systems sometimes redirect STDOUT to
> some other log files ...
May be an issue printing to STDOUT; HPC systems sometimes redirect STDOUT to some
other log files. Maybe try redirecting your PetscInfo() or PetscPrintf() output to a
file. For PetscInfo it is just a filename after the info flag: ./yourCode -info
somefile. As for PetscPrintf, you will have to change t...
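Jacob's message is cut off above, but one way to do both redirections might look
like the sketch below (the file names and message text are made up for
illustration; PetscFOpen/PetscFPrintf open and write only from rank 0 of the
communicator):

/* Run as:  ./yourCode -info somefile   to send PetscInfo output to "somefile".
 * The code below sends PetscPrintf-style output to "debug.log" instead of stdout. */
#include <petscsys.h>

int main(int argc, char **argv)
{
  PetscErrorCode ierr;
  FILE          *fd;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  /* Only rank 0 of the communicator actually opens and writes the file */
  ierr = PetscFOpen(PETSC_COMM_WORLD, "debug.log", "w", &fd);CHKERRQ(ierr);
  ierr = PetscFPrintf(PETSC_COMM_WORLD, fd, "reached this point\n");CHKERRQ(ierr);
  ierr = PetscFClose(PETSC_COMM_WORLD, fd);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}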
On Mon, Mar 16, 2020 at 10:04 PM Satish Balay wrote:
> Wrt fortran I/O - you can try adding calls to flush() after that.
> Similarly C has fflush()
>
This is in zmatnestf.c, so it is C code that implements the Fortran stubs; custom
in this case.
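(Satish's flush suggestion applied to a plain printf() inside such a C stub might
look like the sketch below; the helper name and message are hypothetical, not the
actual zmatnestf.c code.)

/* Hedged sketch: a debug print inside a C Fortran-stub file, flushed
 * immediately so the output is not lost in a buffered/redirected STDOUT. */
#include <stdio.h>

static void debug_checkpoint(const char *where)
{
  printf("[stub] reached %s\n", where);
  fflush(stdout);   /* force the buffered output out right away */
}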
>
> Satish
>
>
> On Mon, 16 Mar 2020, Mark Adams wrote:
>
> > I ...
On Mon, Mar 16, 2020 at 9:50 PM Jacob Faibussowitsch wrote:
> Hello Mark,
>
> If you are using PetscInfo to print, PetscInfo requires that you pass
> “-info” as a command line flag in order to print anything. Sounds simple
> enough to miss; did you run your code with it, i.e. ./yourCode -info?
>
Wrt fortran I/O - you can try adding calls to flush() after that. Similarly C
has fflush()
Satish
On Mon, 16 Mar 2020, Mark Adams wrote:
> I am trying to debug an application code that works with v3.7 but fails
> with master. The code works for "normal" solvers but for a solver that uses
FieldSplit it fails ...
Hello Mark,
If you are using PetscInfo to print, PetscInfo requires that you pass “-info”
as a command line flag in order to print anything. Sounds simple enough to
miss; did you run your code with it, i.e. ./yourCode -info?
For what it's worth, PetscPrintf on PETSC_COMM_WORLD will always print ...
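A small sketch of the distinction Jacob describes (the message text is made up):

/* The PetscInfo() line prints only when the program is run with -info;
 * the PetscPrintf() line on PETSC_COMM_WORLD always prints (from rank 0). */
#include <petscsys.h>

static PetscErrorCode report_progress(void)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = PetscInfo(NULL, "entered solver setup\n");CHKERRQ(ierr);               /* needs ./yourCode -info */
  ierr = PetscPrintf(PETSC_COMM_WORLD, "entered solver setup\n");CHKERRQ(ierr); /* prints unconditionally */
  PetscFunctionReturn(0);
}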
I am trying to debug an application code that works with v3.7 but fails
with master. The code works for "normal" solvers but for a solver that uses
FieldSplit it fails. It looks like vectors are not getting created from
MatCreateVecs with a matrix that is a MatNest (I can't run the code).
I have p...
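For reference, a hedged standalone sketch of the call pattern Mark describes (the
sizes and the 2x2 diagonal-block layout are made up, not the application code):

/* Build a small MatNest and ask it for work vectors with MatCreateVecs(). */
#include <petscmat.h>

int main(int argc, char **argv)
{
  PetscErrorCode ierr;
  Mat            A00, A11, sub[4] = {NULL, NULL, NULL, NULL}, N;
  Vec            x, b;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  ierr = MatCreateAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, 4, 4, 1, NULL, 0, NULL, &A00);CHKERRQ(ierr);
  ierr = MatCreateAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, 4, 4, 1, NULL, 0, NULL, &A11);CHKERRQ(ierr);
  ierr = MatAssemblyBegin(A00, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A00, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyBegin(A11, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A11, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  sub[0] = A00; sub[3] = A11;                    /* diagonal blocks of a 2x2 nest */
  ierr = MatCreateNest(PETSC_COMM_WORLD, 2, NULL, 2, NULL, sub, &N);CHKERRQ(ierr);
  ierr = MatCreateVecs(N, &x, &b);CHKERRQ(ierr); /* the call Mark suspects is not producing usable vectors */
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&b);CHKERRQ(ierr);
  ierr = MatDestroy(&N);CHKERRQ(ierr);
  ierr = MatDestroy(&A00);CHKERRQ(ierr);
  ierr = MatDestroy(&A11);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}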
On Mon, 16 Mar 2020 at 20:26, Zhang, Hong wrote:
>
>
> I am a bit confused. Isn’t this required when one uses MPI-IO?
>
For MPI-IO, of course! Every process must call MPI_File_open().
I'm NOT talking/asking about the MPI-IO path. Our MPI-IO code is just fine,
no objections/concerns about it.
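A minimal sketch of that point (the file name is made up): MPI_File_open() is
collective, so every rank in the communicator participates in the open.

#include <mpi.h>

int main(int argc, char **argv)
{
  MPI_File fh;

  MPI_Init(&argc, &argv);
  /* Collective over MPI_COMM_WORLD: all ranks must make this call */
  MPI_File_open(MPI_COMM_WORLD, "data.bin", MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);
  MPI_File_close(&fh);
  MPI_Finalize();
  return 0;
}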
Lisandro Dalcin writes:
>> I'm not sure of this suggested change, in that a
>> "bad for MPI-IO" workload (like each rank randomly seeking around a big
>> file) might not be better with rank 0 acting as a service rank.
>>
>
> Please note my main question is unrelated to MPI-IO. It is about the ...
>
On Mar 16, 2020, at 12:12 PM, Lisandro Dalcin <dalc...@gmail.com> wrote:
On Mon, 16 Mar 2020 at 16:35, Jed Brown <j...@jedbrown.org> wrote:
Lisandro Dalcin <dalc...@gmail.com> writes:
> Currently, binary viewers using POSIX file descriptors with READ mode open
> the file ...
On Mon, 16 Mar 2020 at 16:35, Jed Brown wrote:
> Lisandro Dalcin writes:
>
> > Currently, binary viewers using POSIX file descriptors with READ mode open
> > the file in ALL processes in the communicator. For WRITE mode, only process
> > zero opens the file.
> >
> > The current PetscViewerBinaryXXX ...
Lisandro Dalcin writes:
> Currently, binary viewers using POSIX file descriptors with READ mode open
> the file in ALL processes in the communicator. For WRITE mode, only process
> zero opens the file.
>
> The current PetscViewerBinaryXXX APIs make it really unnecessary to open
> the file in all ...
Currently, binary viewers using POSIX file descriptors with READ mode open
the file in ALL processes in the communicator. For WRITE mode, only process
zero opens the file.
The current PetscViewerBinaryXXX APIs make it really unnecessary to open
the file in all processes for READ. I would like to g...
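A hedged sketch of the READ-mode usage being discussed (the helper and file
names are made up):

/* Open a binary viewer with FILE_MODE_READ on a communicator and load a Vec. */
#include <petscvec.h>

static PetscErrorCode load_vec_binary(MPI_Comm comm, const char fname[], Vec *x)
{
  PetscErrorCode ierr;
  PetscViewer    viewer;

  PetscFunctionBeginUser;
  ierr = PetscViewerBinaryOpen(comm, fname, FILE_MODE_READ, &viewer);CHKERRQ(ierr);
  ierr = VecCreate(comm, x);CHKERRQ(ierr);
  ierr = VecLoad(*x, viewer);CHKERRQ(ierr);   /* type/size of *x are set from the file */
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}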