Hi Barry,

Thanks for your advice.
You are right on this. Somehow there was an inconsistency in the right-hand side (true RHS + time-stepping contribution to the diagonal matrix) that I used for the finite-difference Jacobian. If I use the callback function to recompute my RHS just before I call MatMFFDSetBase, then it works like a charm. But now I end up computing my RHS three times: the first time is for the true RHS, and the other two are for the finite-difference Jacobian. In my previous buggy version, I only computed the RHS twice. If possible, could you elaborate on your comment "Also be careful about petsc_baserhs", so that I can understand what was going on in my buggy version?

Besides, for a parallel implementation, my code already has its own partitioning method. Is it possible to let PETSc read in a user-defined partition? If not, what is a better way to do this?

Many thanks,
Feng

________________________________
From: Barry Smith <[email protected]>
Sent: 11 March 2021 22:15
To: feng wang <[email protected]>
Cc: [email protected] <[email protected]>
Subject: Re: [petsc-users] Questions on matrix-free GMRES implementation

Feng,

  The first thing to check is that, for each linear solve that involves a new operator (new values in the base vector), the MFFD matrix knows it is using a new operator. The easiest way is to call MatMFFDSetBase() before each such solve. Also be careful about petsc_baserhs: when you change the base vector's values, you also need to change the petsc_baserhs values to the function evaluation at that point.

  If that is correct, I would check with a trivial function evaluator to make sure the infrastructure is all set up correctly. For example, use a 1 4 1 operator applied matrix-free.

  Barry
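For reference, the per-solve sequence described above might look roughly like the sketch below, reusing the variable names from the code quoted further down. Here petsc_rhs and petsc_dq are made-up names for the linear-system right-hand side and solution vectors, and it is assumed this runs inside a cFdDomain member function so that FormFunction_mf and this are in scope.

    /* the new base state for this nonlinear iteration is already in petsc_csv */
    ierr = FormFunction_mf(this, petsc_csv, petsc_baserhs); CHKERRQ(ierr);       /* refresh the base function value at petsc_csv */
    ierr = MatMFFDSetBase(petsc_A_mf, petsc_csv, petsc_baserhs); CHKERRQ(ierr);  /* tell the MFFD matrix about the new base point */
    ierr = KSPSolve(petsc_ksp, petsc_rhs, petsc_dq); CHKERRQ(ierr);              /* linear solve with the updated operator */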
On Mar 11, 2021, at 7:35 AM, feng wang <[email protected]> wrote:

Dear All,

I am new to PETSc and trying to implement a matrix-free GMRES. I have assembled an approximate Jacobian matrix just for preconditioning. After reading some previous questions on this topic, my approach is as follows.

The matrix-free matrix is created as:

    ierr = MatCreateMFFD(*A_COMM_WORLD, iqe*blocksize, iqe*blocksize, PETSC_DETERMINE, PETSC_DETERMINE, &petsc_A_mf); CHKERRQ(ierr);
    ierr = MatMFFDSetFunction(petsc_A_mf, FormFunction_mf, this); CHKERRQ(ierr);

The KSP linear operator is set up as:

    ierr = KSPSetOperators(petsc_ksp, petsc_A_mf, petsc_A_pre); CHKERRQ(ierr);  // petsc_A_pre is my assembled preconditioning matrix

Before calling KSPSolve, I do:

    ierr = MatMFFDSetBase(petsc_A_mf, petsc_csv, petsc_baserhs); CHKERRQ(ierr);  // petsc_csv holds the flow states, petsc_baserhs is the pre-computed right-hand side

The callback function is defined as:

    PetscErrorCode cFdDomain::FormFunction_mf(void *ctx, Vec in_vec, Vec out_vec)
    {
        cFdDomain *user_ctx;

        cout << "FormFunction_mf called\n";

        // in_vec:  flow states
        // out_vec: right-hand side + diagonal contributions from the CFL number

        user_ctx = (cFdDomain*)ctx;

        // get perturbed conservative variables from petsc
        user_ctx->petsc_getcsv(in_vec);

        // compute the new right-hand side
        user_ctx->petsc_fd_rhs();

        // copy the new right-hand side to the output vector
        user_ctx->petsc_setrhs(out_vec);

        return 0;
    }

The linear system I am solving is (J+D)x = RHS. J is the Jacobian matrix; D is a diagonal matrix used to stabilise the solution at the start and reduced gradually as the solution progresses, so as to recover Newton's method. I add D*x to the true right-hand side when the non-linear function is evaluated for the finite-difference Jacobian, so the finite difference actually computes (J+D)*dx.

The code runs but diverges in the end. If I don't use matrix-free and instead use my approximate Jacobian matrix, GMRES works, so something is wrong with my matrix-free implementation. Have I missed something? Besides, is there a way to check whether the finite-difference Jacobian is computed correctly in a matrix-free implementation?

Thanks for your help in advance.
Feng
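On the question of checking the finite-difference Jacobian: one rough consistency check (a sketch, not from the thread) is to apply both the matrix-free operator and the assembled approximate Jacobian petsc_A_pre to the same random vector and compare the results; since petsc_A_pre is only an approximation, the two products should agree roughly rather than to machine precision. For a small test case, MatComputeOperator() (MatComputeExplicitOperator() in older PETSc releases) can also build an explicit copy of the matrix-free operator for inspection.

    /* Sketch: compare the matrix-free product against the assembled approximate Jacobian.
       Assumes MatMFFDSetBase() has already been called with the current base state. */
    Vec       x, y_mf, y_pre;
    PetscReal norm_diff, norm_mf;
    ierr = MatCreateVecs(petsc_A_mf, &x, &y_mf); CHKERRQ(ierr);
    ierr = VecDuplicate(y_mf, &y_pre); CHKERRQ(ierr);
    ierr = VecSetRandom(x, NULL); CHKERRQ(ierr);
    ierr = MatMult(petsc_A_mf, x, y_mf); CHKERRQ(ierr);    /* finite-difference (J+D)*x */
    ierr = MatMult(petsc_A_pre, x, y_pre); CHKERRQ(ierr);  /* approximate (J+D)*x */
    ierr = VecAXPY(y_pre, -1.0, y_mf); CHKERRQ(ierr);
    ierr = VecNorm(y_pre, NORM_2, &norm_diff); CHKERRQ(ierr);
    ierr = VecNorm(y_mf, NORM_2, &norm_mf); CHKERRQ(ierr);
    ierr = PetscPrintf(PETSC_COMM_WORLD, "relative difference: %g\n", (double)(norm_diff/norm_mf)); CHKERRQ(ierr);
    ierr = VecDestroy(&x); CHKERRQ(ierr);
    ierr = VecDestroy(&y_mf); CHKERRQ(ierr);
    ierr = VecDestroy(&y_pre); CHKERRQ(ierr);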

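As an illustration of the trivial test Barry suggests above (a 1 4 1 operator applied matrix-free), a throwaway callback of the same form as FormFunction_mf could look like the sketch below. The function name is made up, and this serial version ignores values owned by neighbouring processes, so it is only meant for a one-process test.

    /* Sketch: out_vec = T*in_vec, where T is the tridiagonal [1 4 1] operator.
       Because T is linear, an MFFD matrix built on this function should reproduce
       T up to roundoff, which makes it easy to verify the surrounding setup. */
    PetscErrorCode FormFunction_141(void *ctx, Vec in_vec, Vec out_vec)
    {
        PetscErrorCode     ierr;
        PetscInt           i, n;
        const PetscScalar *x;
        PetscScalar       *y;

        ierr = VecGetLocalSize(in_vec, &n); CHKERRQ(ierr);
        ierr = VecGetArrayRead(in_vec, &x); CHKERRQ(ierr);
        ierr = VecGetArray(out_vec, &y); CHKERRQ(ierr);
        for (i = 0; i < n; i++) {
            y[i] = 4.0*x[i];
            if (i > 0)   y[i] += x[i-1];
            if (i < n-1) y[i] += x[i+1];
        }
        ierr = VecRestoreArrayRead(in_vec, &x); CHKERRQ(ierr);
        ierr = VecRestoreArray(out_vec, &y); CHKERRQ(ierr);
        return 0;
    }

Plugging this into MatMFFDSetFunction() in place of FormFunction_mf and comparing MatMult() of the MFFD matrix with a hand-assembled tridiagonal matrix would show whether the infrastructure around the matrix-free operator is set up correctly.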