[petsc-users] KSP Set Operators to be calculated before RHS

2015-11-27 Thread Mostafa Ghadamyari
Hi all, I've used PETSc to develop my SIMPLE-algorithm CFD code. The SIMPLE algorithm has its own way of handling the non-linearity of the Navier-Stokes equations, so I only used PETSc's KSP solvers. In the SIMPLE algorithm, the diagonal coefficient of the matrix is used in the right-hand side for implicit

[petsc-users] Solving/creating SPD systems

2015-11-27 Thread Justin Chang
Hi all, Say I have a saddle-point system for the mixed Poisson equation: [I, -grad; -div, 0] [u; p] = [0; -f]. The above is symmetric but indefinite. I have heard that one could make the above symmetric and positive definite (SPD). How would I do that? And if that's the case, would this
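One standard route to an SPD system here is block elimination: the first block row gives u in terms of p, and substituting into the second row leaves a Schur-complement equation in p alone. A sketch of the algebra, assuming the block system as written in the question and suitable boundary conditions:

```latex
% Saddle-point system from the question:
\begin{pmatrix} I & -\operatorname{grad} \\ -\operatorname{div} & 0 \end{pmatrix}
\begin{pmatrix} u \\ p \end{pmatrix}
=
\begin{pmatrix} 0 \\ -f \end{pmatrix}
% First block row: u - \operatorname{grad} p = 0, so u = \operatorname{grad} p.
% Substituting into the second block row:
-\operatorname{div}(\operatorname{grad} p) = -f
\quad\Longleftrightarrow\quad
-\Delta p = -f .
% The operator -\Delta is SPD (given appropriate boundary conditions),
% and u is recovered afterwards as u = \operatorname{grad} p.
```

This is only a sketch of the elimination; in a discrete mixed formulation the reduced operator is the corresponding discrete Schur complement rather than the exact Laplacian.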

Re: [petsc-users] parallel IO messages

2015-11-27 Thread Fande Kong
Hi Barry, You are most likely right. I can't be 100% sure because this happens randomly. I have tried several tests, and all of them passed. Any reason to put SIGTRAP into the IO system? Thanks, Fande, On Fri, Nov 27, 2015 at 2:29 PM, Barry Smith wrote: > > SIGTRAP is a way a process can interact with

Re: [petsc-users] running applications with 64 bit indices

2015-11-27 Thread Barry Smith
> On Nov 27, 2015, at 4:47 PM, Randall Mackie wrote: > > I’ve been struggling to get an application running, which was compiled with > 64 bit indices. > > It runs fine locally on my laptop with a petsc-downloaded mpich (and is > Valgrind clean). > > On our cluster, with Intel MPI, it crashes

[petsc-users] running applications with 64 bit indices

2015-11-27 Thread Randall Mackie
I’ve been struggling to get an application running, which was compiled with 64 bit indices. It runs fine locally on my laptop with a petsc-downloaded mpich (and is Valgrind clean). On our cluster, with Intel MPI, it crashes immediately. When I say immediately, I put a goto end of program right

Re: [petsc-users] parallel IO messages

2015-11-27 Thread Barry Smith
SIGTRAP is a way a process can interact with itself or another process asynchronously. It is possible that in all the mess of HDF5/MPI IO/OS code that manages getting the data in parallel from the MPI process memory to the hard disk some of the code uses SIGTRAP. PETSc, by default, always tra

Re: [petsc-users] parallel IO messages

2015-11-27 Thread Fande Kong
Thanks, Barry. I was also wondering why this happens randomly. Any explanation? If this were something in PETSc, shouldn't it happen every time? Thanks, Fande Kong, On Fri, Nov 27, 2015 at 1:20 PM, Barry Smith wrote: > > Edit PETSC_ARCH/include/petscconf.h and add > > #if !defined(PETSC_MISSING_S

Re: [petsc-users] master branch option "-snes_monitor_solution"

2015-11-27 Thread Ed Bueler
Barry -- Works great for me in next and master. Having value "draw" is perfectly natural, as is default behavior. Ed On Wed, Nov 25, 2015 at 4:54 PM, Barry Smith wrote: > > Ed, > >I have fixed the error in the branch barry/update-monitors now in next > for testing. > >There is one

Re: [petsc-users] parallel IO messages

2015-11-27 Thread Barry Smith
Edit PETSC_ARCH/include/petscconf.h and add #if !defined(PETSC_MISSING_SIGTRAP) #define PETSC_MISSING_SIGTRAP #endif then do make gnumake It is possible that the system you are using uses SIGTRAP in managing the IO; by making the change above you are telling PETSc to ignore SIGTRAPs. Let
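The edit Barry describes, laid out as it would appear in the header (a config fragment, not standalone code; the file path depends on your PETSC_ARCH):

```c
/* Added to $PETSC_ARCH/include/petscconf.h, then rebuilt with "make gnumake".
 * Defining PETSC_MISSING_SIGTRAP tells PETSc not to trap SIGTRAP itself,
 * since the HDF5/MPI-IO/OS stack may use SIGTRAP internally. */
#if !defined(PETSC_MISSING_SIGTRAP)
#define PETSC_MISSING_SIGTRAP
#endif
```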

Re: [petsc-users] parallel IO messages

2015-11-27 Thread Fande Kong
Hi Dave, This does not always happen. I am trying to get performance measurements, so PETSc is compiled with --with-debugging=no. I will try later. Thanks, Fande, On Fri, Nov 27, 2015 at 12:08 PM, Dave May wrote: > There is little information in this stack trace. > You would get more inform

Re: [petsc-users] parallel IO messages

2015-11-27 Thread Fande Kong
Hi Matt, Thanks for your reply. I put my application data into PETSc Vec and IS objects, which take advantage of the HDF5 viewer (which you implemented). In fact, I did not add any new output or input functions. Thanks, Fande, On Fri, Nov 27, 2015 at 12:08 PM, Matthew Knepley wrote: > On Fri, Nov 27, 2015 at 1:

Re: [petsc-users] parallel IO messages

2015-11-27 Thread Dave May
There is little information in this stack trace. You would get more information if you use a debug build of PETSc, e.g. configure with --with-debugging=yes. It is recommended to always debug problems using a debug build of PETSc and a debug build of your application. Thanks, Dave On 27 November
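Dave's suggestion as a command sequence (a sketch only; the remaining configure options must match whatever your production build uses, e.g. compilers and MPI paths):

```shell
# Reconfigure PETSc with debugging enabled so stack traces carry
# line numbers and PETSc's own error checking is active.
./configure --with-debugging=yes
make all
# Then rebuild and rerun your application against this debug build.
```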

Re: [petsc-users] parallel IO messages

2015-11-27 Thread Matthew Knepley
On Fri, Nov 27, 2015 at 1:05 PM, Fande Kong wrote: > Hi all, > > I implemented a parallel IO based on the Vec and IS which uses HDF5. I am > testing this loader on a supercomputer. I occasionally (not always) > encounter the following errors (using 8192 cores): > What is different from the curre

Re: [petsc-users] question about MPI_Bcast and 64-bit-indices

2015-11-27 Thread Randall Mackie
Thanks Barry and Jose. > On Nov 27, 2015, at 10:27 AM, Barry Smith wrote: > > > Use MPIU_INTEGER for Fortran > > >> On Nov 27, 2015, at 12:09 PM, Jose E. Roman wrote: >> >> >>> El 27 nov 2015, a las 19:00, Randall Mackie >>> escribió: >>> >>> If my program is compiled using 64-bit-in

Re: [petsc-users] question about MPI_Bcast and 64-bit-indices

2015-11-27 Thread Barry Smith
Use MPIU_INTEGER for Fortran > On Nov 27, 2015, at 12:09 PM, Jose E. Roman wrote: > > >> El 27 nov 2015, a las 19:00, Randall Mackie escribió: >> >> If my program is compiled using 64-bit-indices, and I have an integer >> variable defined as PetscInt, what is the right way to broadcast
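Barry's fix, sketched in Fortran as it would apply to the call in the question (the variable name n is taken from the original message; this fragment assumes PETSc's Fortran headers, a root rank of 0, and an MPI launch, so it is not runnable standalone):

```fortran
! Broadcast a PetscInt portably across 32- and 64-bit index builds:
! MPIU_INTEGER is PETSc's MPI datatype that always matches PetscInt
! in Fortran (the C-side counterpart is MPIU_INT), whereas MPI_INTEGER
! only matches when PETSc is built without --with-64-bit-indices.
PetscInt :: n
PetscErrorCode :: ierr
call MPI_Bcast(n, 1, MPIU_INTEGER, 0, PETSC_COMM_WORLD, ierr)
```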

Re: [petsc-users] question about MPI_Bcast and 64-bit-indices

2015-11-27 Thread Jose E. Roman
> El 27 nov 2015, a las 19:00, Randall Mackie escribió: > > If my program is compiled using 64-bit-indices, and I have an integer > variable defined as PetscInt, what is the right way to broadcast that using > MPI_Bcast? > > I currently have: > > call MPI_Bcast(n, 1, MPI_INTEGER, … > > whic

[petsc-users] question about MPI_Bcast and 64-bit-indices

2015-11-27 Thread Randall Mackie
If my program is compiled using 64-bit-indices, and I have an integer variable defined as PetscInt, what is the right way to broadcast that using MPI_Bcast? I currently have: call MPI_Bcast(n, 1, MPI_INTEGER, … which is the right way to do it for regular integers, but what do I use in place of

Re: [petsc-users] .pc file does not include dependencies

2015-11-27 Thread Satish Balay
On Fri, 27 Nov 2015, Arne Morten Kvarving wrote: > On 25/11/15 20:29, Satish Balay wrote: > > On Wed, 25 Nov 2015, Satish Balay wrote: > > > > > I'll check why libs are listed as libfoo.a instead of -lfoo in this file. > > Ok - the following patch should fix the issue. Could you try it out? > > >

Re: [petsc-users] .pc file does not include dependencies

2015-11-27 Thread Arne Morten Kvarving
On 25/11/15 20:29, Satish Balay wrote: On Wed, 25 Nov 2015, Satish Balay wrote: I'll check why libs are listed as libfoo.a instead of -lfoo in this file. Ok - the following patch should fix the issue. Could you try it out? Sorry for the late response; time zone differences and out-of-officin