y stable for an SSP method to preserve
> >> it. It sounds like yours is not, so maybe there is no particular
> >> benefit to using SSP over any other method (but likely tighter time step
> >> restriction).
> >>
> >> Manuel Valera writes:
> >>
gly stable for an SSP method to preserve
> it. It sounds like yours is not, so maybe there is no particular
> benefit to using SSP over any other method (but likely tighter time step
> restriction).
>
> Manuel Valera writes:
>
> > To correct for the deformation of the si
to
happening right now.
On Wed, Oct 9, 2019 at 3:47 PM Jed Brown wrote:
> Manuel Valera writes:
>
> > Thanks,
> >
> > My time integration schemes are all explicit, sorry if this is a very
> > atypical setup. This is similar to the barotropic splitting but not exactl
see it this could be solved by obtaining the intermediate stages
and then updating them accordingly; is this not possible to do?
On Wed, Oct 9, 2019 at 3:40 PM Jed Brown wrote:
> Manuel Valera writes:
>
> > Thanks for the answer, I will read the mentioned example, but to clarify
t numeration are different algorithms, and
each TS in the 2nd numeration generates a different RHS.
What Jed is suggesting is to create an overarching routine that does
everything in the first list within one single step?
Thanks,
On Wed, Oct 9, 2019 at 3:24 PM Jed Brown wrote:
> Manuel V
Hello,
I have a set of equations which are co-dependent when integrating in time;
this means the velocities u,v,w need a component from the Temperature and
Salinity integration at the same intermediate step. The same holds for
Temperature and Salinity, which need the current velocities (at the intermediate
PM Manuel Valera wrote:
> Hello all,
>
> I finally implemented the TS routine operating in several DAs at the same
> time, hacking it as you suggested. I still have a problem with my algorithm
> though. It is not DMDA related so there's that.
>
> My algorithm
>> dimensions. You may need to be careful on the boundaries depending on the
>> types of boundary conditions.
>>
>
> Yes, SNES ex30 does exactly this. However, I still recommend looking at
> DMStag. Patrick created it because managing the DMDA
> became such a headache
:
> On Tue, Sep 17, 2019 at 6:15 PM Manuel Valera via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
>> Hello, petsc users,
>>
>> I have integrated the TS routines in my code, but I just noticed I didn't
>> do it optimally. I was using 3 different TS objects
Hello, petsc users,
I have integrated the TS routines in my code, but I just noticed I didn't
do it optimally. I was using 3 different TS objects to integrate
velocities, temperature and salinity, and it works, but only for small DTs.
I suspect the intermediate Runge-Kutta states are out of phase and
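For reference, a minimal sketch (not from the thread; grid sizes, the field layout, and the trivial RHS are placeholders) of the single-TS arrangement suggested elsewhere in the archive: pack the velocity and tracer DMDAs into one DMComposite, hand it to one explicit Runge-Kutta TS, and evaluate all right-hand sides in a single callback so every field sees the same intermediate stage.

#include <petscts.h>
#include <petscdmda.h>
#include <petscdmcomposite.h>

/* Hypothetical combined RHS: every sub-vector corresponds to the same RK stage. */
static PetscErrorCode RHSFunction(TS ts, PetscReal t, Vec U, Vec F, void *ctx)
{
  DM             pack;
  Vec            uvel, utrac, fvel, ftrac;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = TSGetDM(ts, &pack);CHKERRQ(ierr);
  ierr = DMCompositeGetAccess(pack, U, &uvel, &utrac);CHKERRQ(ierr);
  ierr = DMCompositeGetAccess(pack, F, &fvel, &ftrac);CHKERRQ(ierr);
  /* evaluate the momentum RHS into fvel using uvel AND utrac, and the
     T/S RHS into ftrac using utrac AND uvel, all at the same stage */
  ierr = VecSet(fvel, 0.0);CHKERRQ(ierr);   /* placeholder physics */
  ierr = VecSet(ftrac, 0.0);CHKERRQ(ierr);  /* placeholder physics */
  ierr = DMCompositeRestoreAccess(pack, U, &uvel, &utrac);CHKERRQ(ierr);
  ierr = DMCompositeRestoreAccess(pack, F, &fvel, &ftrac);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

int main(int argc, char **argv)
{
  DM             daVel, daTracer, pack;
  Vec            U;
  TS             ts;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  /* 3 velocity components on one DMDA, 2 tracers (T,S) on another; sizes are placeholders */
  ierr = DMDACreate3d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                      DMDA_STENCIL_STAR, 16, 16, 16, PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
                      3, 1, NULL, NULL, NULL, &daVel);CHKERRQ(ierr);
  ierr = DMSetUp(daVel);CHKERRQ(ierr);
  ierr = DMDACreate3d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                      DMDA_STENCIL_STAR, 16, 16, 16, PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
                      2, 1, NULL, NULL, NULL, &daTracer);CHKERRQ(ierr);
  ierr = DMSetUp(daTracer);CHKERRQ(ierr);

  ierr = DMCompositeCreate(PETSC_COMM_WORLD, &pack);CHKERRQ(ierr);
  ierr = DMCompositeAddDM(pack, daVel);CHKERRQ(ierr);
  ierr = DMCompositeAddDM(pack, daTracer);CHKERRQ(ierr);
  ierr = DMCreateGlobalVector(pack, &U);CHKERRQ(ierr);

  ierr = TSCreate(PETSC_COMM_WORLD, &ts);CHKERRQ(ierr);
  ierr = TSSetDM(ts, pack);CHKERRQ(ierr);
  ierr = TSSetType(ts, TSRK);CHKERRQ(ierr);                       /* one explicit RK integrator */
  ierr = TSSetRHSFunction(ts, NULL, RHSFunction, NULL);CHKERRQ(ierr);
  ierr = TSSetFromOptions(ts);CHKERRQ(ierr);
  ierr = TSSolve(ts, U);CHKERRQ(ierr);

  ierr = VecDestroy(&U);CHKERRQ(ierr);
  ierr = TSDestroy(&ts);CHKERRQ(ierr);
  ierr = DMDestroy(&pack);CHKERRQ(ierr);
  ierr = DMDestroy(&daVel);CHKERRQ(ierr);
  ierr = DMDestroy(&daTracer);CHKERRQ(ierr);
  return PetscFinalize();
}
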
> Note: what the people who give these types of presentations forget to
> emphasize enough is that, though these techniques can give good benefits
> on GPUs for some algorithms, utilizing lower precision on CPUs doesn't
> generally benefit you much.
>
>
> > On Jul 23, 2019,
Hello,
I was wondering if PETSc has some form of a low-precision linear solver
algorithm like the one seen in:
https://www.dropbox.com/s/rv5quc3k72qdpmp/iciam_lowprec19.pdf?dl=0
I understand this treatment is coming from one of the NAG library
developers,
Thanks,
Hi petsc devs and users,
Is there an analogue to reshape, as in the Fortran / MATLAB functions, for
petsc arrays? I am looking into converting a 3D DMDA array of sizes (nx,ny,nz)
into a 2D MATMPI matrix of size (nx) x (ny) x (nz) by (nx) x (ny) x
(nz), that will be used as the laplacian matrix in a
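One way to avoid the reshape entirely (a hedged sketch, not from the thread: it assumes the DMDA was created with dof=1 and stencil width 1, and the coefficients and boundary handling are placeholders) is to let the DMDA build the (nx*ny*nz) by (nx*ny*nz) operator with DMCreateMatrix and fill it with MatSetValuesStencil, so PETSc handles the 3D-to-matrix index mapping:

#include <petscdmda.h>

/* Hedged sketch: 7-point Laplacian assembled through a dof=1, stencil-width-1 DMDA.
   DMCreateMatrix returns the operator already sized, preallocated, and distributed
   to match the DMDA, so no explicit reshape is needed. */
static PetscErrorCode AssembleLaplacian(DM da, Mat *Aout)
{
  Mat            A;
  MatStencil     row, col[7];
  PetscScalar    v[7];
  PetscInt       M, N, P, xs, ys, zs, xm, ym, zm, i, j, k, n;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = DMCreateMatrix(da, &A);CHKERRQ(ierr);
  ierr = DMDAGetInfo(da, NULL, &M, &N, &P, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL);CHKERRQ(ierr);
  ierr = DMDAGetCorners(da, &xs, &ys, &zs, &xm, &ym, &zm);CHKERRQ(ierr);
  for (k = zs; k < zs + zm; k++) {
    for (j = ys; j < ys + ym; j++) {
      for (i = xs; i < xs + xm; i++) {
        row.i = i; row.j = j; row.k = k; row.c = 0;
        n = 0;
        v[n] = 6.0;  col[n] = row; n++;                      /* diagonal; boundary conditions simplified */
        if (i > 0)     { v[n] = -1.0; col[n] = row; col[n].i = i - 1; n++; }
        if (i < M - 1) { v[n] = -1.0; col[n] = row; col[n].i = i + 1; n++; }
        if (j > 0)     { v[n] = -1.0; col[n] = row; col[n].j = j - 1; n++; }
        if (j < N - 1) { v[n] = -1.0; col[n] = row; col[n].j = j + 1; n++; }
        if (k > 0)     { v[n] = -1.0; col[n] = row; col[n].k = k - 1; n++; }
        if (k < P - 1) { v[n] = -1.0; col[n] = row; col[n].k = k + 1; n++; }
        ierr = MatSetValuesStencil(A, 1, &row, n, col, v, INSERT_VALUES);CHKERRQ(ierr);
      }
    }
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  *Aout = A;
  PetscFunctionReturn(0);
}
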
vectors as function and input for the TS RHS
function.
I'll write back if I have further questions,
Thanks so much,
On Tue, Jul 9, 2019 at 1:32 PM Matthew Knepley wrote:
> On Tue, Jul 9, 2019 at 2:39 PM Manuel Valera via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
On Tue, Jul 9, 2019 at 11:27 AM Smith, Barry F. wrote:
>
>
> > On Jul 8, 2019, at 6:53 PM, Manuel Valera via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
> >
> > Hi Zhang,
> >
> > Thanks to your help i have implemented the TS routine for my tem
is that they are in a 4D DMDA array with 3 degrees of freedom; any
suggestions on how to implement this? Does TS support arrays with multiple
degrees of freedom?
Thanks,
On Thu, Jul 4, 2019 at 9:18 PM Zhang, Hong wrote:
>
>
> On Jul 3, 2019, at 3:10 PM, Manuel Valera wrote:
>
> Thanks
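On the multiple-degrees-of-freedom question above: TS only ever sees a Vec, so a DMDA created with dof=3 works directly; the components are simply interleaved in the vector. A hedged sketch (the physics is a placeholder) of indexing such an array inside a TS right-hand-side callback with DMDAVecGetArrayDOF:

#include <petscts.h>
#include <petscdmda.h>

/* Hedged sketch of a TS RHS for a 3D DMDA created with dof = 3 (e.g. u,v,w interleaved). */
static PetscErrorCode RHSFunction(TS ts, PetscReal t, Vec U, Vec F, void *ctx)
{
  DM             da;
  Vec            Uloc;
  PetscScalar    ****u, ****f;          /* indexed as [k][j][i][component] */
  PetscInt       xs, ys, zs, xm, ym, zm, i, j, k, c;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = TSGetDM(ts, &da);CHKERRQ(ierr);
  ierr = DMGetLocalVector(da, &Uloc);CHKERRQ(ierr);
  ierr = DMGlobalToLocalBegin(da, U, INSERT_VALUES, Uloc);CHKERRQ(ierr);
  ierr = DMGlobalToLocalEnd(da, U, INSERT_VALUES, Uloc);CHKERRQ(ierr);
  ierr = DMDAVecGetArrayDOF(da, Uloc, &u);CHKERRQ(ierr);
  ierr = DMDAVecGetArrayDOF(da, F, &f);CHKERRQ(ierr);
  ierr = DMDAGetCorners(da, &xs, &ys, &zs, &xm, &ym, &zm);CHKERRQ(ierr);
  for (k = zs; k < zs + zm; k++)
    for (j = ys; j < ys + ym; j++)
      for (i = xs; i < xs + xm; i++)
        for (c = 0; c < 3; c++)
          f[k][j][i][c] = -u[k][j][i][c];   /* placeholder physics: simple decay */
  ierr = DMDAVecRestoreArrayDOF(da, Uloc, &u);CHKERRQ(ierr);
  ierr = DMDAVecRestoreArrayDOF(da, F, &f);CHKERRQ(ierr);
  ierr = DMRestoreLocalVector(da, &Uloc);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
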
Hi PETSc,
I am trying to implement the time stepping routines in my model; I have a
working Runge-Kutta time scheme that goes through the following steps:
- Interpolate u,v,w to the time advancing variable position.
- Calculate nonlinear coefficients and advect velocities with a
> On Fri, Mar 22, 2019 at 4:09 PM Manuel Valera wrote:
>
>> Hello,
>>
>> I repeated the timings with the -log_sync option and now i get for 200
>> processors / 20 nodes:
>>
>>
>> ---
.
>
> --Junchao Zhang
>
>
> On Wed, Mar 20, 2019 at 4:44 PM Zhang, Junchao via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
>>
>>
>> On Wed, Mar 20, 2019 at 4:18 PM Manuel Valera wrote:
>>
>>> Thanks for your answe
Sorry, I meant 20 cores at one node. OK, I will retry with -log_sync and come
back. Thanks for your help.
On Wed, Mar 20, 2019 at 2:43 PM Zhang, Junchao wrote:
>
>
> On Wed, Mar 20, 2019 at 4:18 PM Manuel Valera wrote:
>
>> Thanks for your answer, so for example i have a log fo
n extra MPI_Barrier for each event to let them start
> at the same time. With that, it is easier to interpret the number.
> src/vec/vscat/examples/ex4.c is a tiny example for VecScatter logging.
>
> --Junchao Zhang
>
>
> On Wed, Mar 20, 2019 at 2:58 PM Manuel Valera via petsc-users
Hello,
I am working on timing my model, which we made MPI scalable using petsc
DMDAs. I want to know more about the output log and how to calculate the
total communication time for my runs; so far I see we have "MPI Messages"
and "MPI Message Lengths" in the log, along with VecScatterEnd and
y fail :)
>
> --Junchao Zhang
>
>
> On Tue, Mar 12, 2019 at 8:20 PM Manuel Valera wrote:
>
>> Hi Mr Zhang, thanks for your reply,
>>
>> I just checked your branch out, reconfigured and recompiled and i am
>> still getting the same error from my last email
'? See VecScatter changes:
>>
>> https://www.mcs.anl.gov/petsc/documentation/changes/dev.html
>>
>> Manuel Valera via petsc-users writes:
>>
>> > Hello,
>> >
>> > I just updated petsc from the repo to the latest master branch version
ared-libraries=1 --known-64-bit-blas-indices=0
> [0]PETSC ERROR: #4 User provided function() line 0 in unknown file
> ------
> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
Now, the interesting part is, if
Hello,
I just updated petsc from the repo to the latest master branch version, and
a compilation problem popped up; it seems like the variable types are not
being recognized properly. What I have, as a minimum working example, is:
#include
> #include
> #include
> #include
> #include
,
Thanks for your help,
Manuel
On Sat, Oct 6, 2018 at 4:45 AM Matthew Knepley wrote:
> On Fri, Oct 5, 2018 at 6:49 PM Manuel Valera wrote:
>
>> Hello,
>>
>> I'm trying to do a simple variable interpolation, from a cell center to a
>> face in a staggered grid, my
Hello,
I'm trying to do a simple variable interpolation from a cell center to a
face in a staggered grid. My model data management is done with DMDAs, with
two different DMs, one for each cell position.
I already did this task in a Fortran-only version of the model using the 4
closest neighbors
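As a hedged illustration only (one pair of compatible DMDAs with matching decomposition, the cell-center DM with stencil width of at least 1, x-faces only, boundary faces left untouched), the ghost update plus simple averaging could look like the sketch below; the DMStag suggestion made earlier in the archive handles staggered locations natively.

#include <petscdmda.h>

/* Hedged sketch: average a cell-centered field onto x-faces,
   face(i) = 0.5*(cell(i-1) + cell(i)), using a ghosted local vector. */
static PetscErrorCode CenterToXFace(DM daCell, Vec cellGlobal, DM daFace, Vec faceGlobal)
{
  Vec               cellLocal;
  const PetscScalar ***c;
  PetscScalar       ***f;
  PetscInt          xs, ys, zs, xm, ym, zm, i, j, k;
  PetscErrorCode    ierr;

  PetscFunctionBeginUser;
  ierr = DMGetLocalVector(daCell, &cellLocal);CHKERRQ(ierr);
  ierr = DMGlobalToLocalBegin(daCell, cellGlobal, INSERT_VALUES, cellLocal);CHKERRQ(ierr);
  ierr = DMGlobalToLocalEnd(daCell, cellGlobal, INSERT_VALUES, cellLocal);CHKERRQ(ierr);
  ierr = DMDAVecGetArrayRead(daCell, cellLocal, &c);CHKERRQ(ierr);
  ierr = DMDAVecGetArray(daFace, faceGlobal, &f);CHKERRQ(ierr);
  ierr = DMDAGetCorners(daFace, &xs, &ys, &zs, &xm, &ym, &zm);CHKERRQ(ierr);
  for (k = zs; k < zs + zm; k++)
    for (j = ys; j < ys + ym; j++)
      for (i = xs; i < xs + xm; i++)
        if (i > 0) f[k][j][i] = 0.5 * (c[k][j][i-1] + c[k][j][i]); /* boundary faces need their own rule */
  ierr = DMDAVecRestoreArrayRead(daCell, cellLocal, &c);CHKERRQ(ierr);
  ierr = DMDAVecRestoreArray(daFace, faceGlobal, &f);CHKERRQ(ierr);
  ierr = DMRestoreLocalVector(daCell, &cellLocal);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
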
> Thanks,
>>
>> On Wed, Sep 12, 2018 at 2:20 PM, Matthew Knepley
>> wrote:
>>
>>> On Wed, Sep 12, 2018 at 5:13 PM Manuel Valera
>>> wrote:
>>>
>>>> Hello guys,
>>>>
>>>> I am working in a multi-gpu cluster and
OK then, how can I try getting more than one GPU with the same number of
MPI processes?
Thanks,
On Wed, Sep 12, 2018 at 2:20 PM, Matthew Knepley wrote:
> On Wed, Sep 12, 2018 at 5:13 PM Manuel Valera wrote:
>
>> Hello guys,
>>
>> I am working in a multi-gpu cluster
Hello guys,
I am working on a multi-GPU cluster and I want to request 2 or more GPUs;
how can I do that from PETSc? Evidently mpirun -n # is for requesting
processors, but what if I want to use one MPI process but several GPUs
instead?
Also, I understand the GPU handles the linear system
, Matthew Knepley wrote:
> On Wed, Aug 29, 2018 at 5:49 PM Manuel Valera wrote:
>
>> Update:
>>
>> I made it work like you suggested, Barry; I had to comment out the code line
>> that sets up the pc_type saviennacl, and that way I am getting as ksp_view:
>>
>> KSP Object:
at 2:21 PM, Manuel Valera wrote:
> Ok, executing with:
>
> mpirun -n 2 ./gcmLEP.GPU tc=TestCases/LockRelease/LE_6x6x6/
> jid=tiny_cuda_test_n1 -pc_type bjacobi -pc_sub_type saviennacl -ksp_view
>
>
> I get:
>
>
> SETTING GPU TYPES
> Matrix type: mpiaijvienn
has sent help message help-mpi-api.txt /
mpi-abort
[node50:77836] Set MCA parameter "orte_base_help_aggregate" to 0 to see all
help / error messages
On Wed, Aug 29, 2018 at 2:02 PM, Smith, Barry F. wrote:
> Please send complete error message
>
>
> > On Aug 29, 2018,
Yeah, no, sorry, I get the same error with -pc_type bjacobi -sub_pc_type
SAVIENNACL: "Currently only handles ViennaCL matrices".
Thanks, and let me know of any progress on this issue,
On Wed, Aug 29, 2018 at 1:37 PM, Manuel Valera wrote:
> Awesome, thanks!
>
> On Wed, Aug
Awesome, thanks!
On Wed, Aug 29, 2018 at 1:29 PM, Smith, Barry F. wrote:
>
>
> > On Aug 29, 2018, at 3:26 PM, Manuel Valera wrote:
> >
> >
> >
> > You may need to use just plain PCBJACOBI or PCASM for parallelism and
> then SAVIENNACL sequentially on ea
e it work?
Thanks,
> > On Aug 29, 2018, at 1:50 PM, Manuel Valera wrote:
> >
> > Hi everyone,
> >
> > Thanks for your responses, i understand communicating on this way to
> this level of technicality can be hard, i still think we can work ways to
> solve this pr
On Wed, Aug 29, 2018 at 11:50 AM, Manuel Valera wrote:
> Hi everyone,
>
> Thanks for your responses, I understand communicating in this way at this
> level of technicality can be hard, but I still think we can work out ways to
> solve this problem,
>
> I can say the fo
you run in a debugger and provide a complete backtrace?
>
> Thanks and best regards,
> Karli
>
>
> On 08/29/2018 01:33 AM, Manuel Valera wrote:
>
>> Talked too fast,
>>
>> After fixing that problem, i tried more than one mpi processor and got
>> the following
g in 1 MPI processor + GPU, but I would like to call
at least 16 MPI processors + GPU to do the rest of the data management that
is not part of the main laplacian on the MPI side, with the laplacian solution on
the GPU; is this currently possible?
Thanks for your help,
On Tue, Aug 28, 2018 at 4:21 PM, Man
ries to do
> what you would need would help us a great deal in understanding what you
> are trying to do.
>
> Barry
>
>
>
>
>
> > On Aug 28, 2018, at 1:18 PM, Manuel Valera wrote:
> >
> > Matthew, PetscMalloc gives the same error,
> >
> >
are attempting to put a value at column 10980
>
>Barry
>
>
> > On Aug 15, 2018, at 9:44 PM, Manuel Valera wrote:
> >
> > Thanks Matthew and Barry,
> >
> > Now my code looks like:
> >
> > call DMSetMatrixPreallocateOnly(daDummy,PETSC_TRUE
aDummy,A,ierr)
>
> and remove the rest. You need to set the type of Mat you want the DM to
> return BEFORE you create the matrix.
>
> Barry
>
>
>
> > On Aug 15, 2018, at 4:45 PM, Manuel Valera wrote:
> >
> > OK, thanks for clarifying that, I wasn't s
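Barry's advice above, as a hedged C sketch (the ViennaCL type is just the one this thread is about, and a ViennaCL-enabled build is assumed): set the matrix type on the DM before asking it for the matrix, and let DMCreateMatrix do the preallocation instead of calling MatMPIAIJSetPreallocation by hand. The command-line equivalent is -dm_mat_type aijviennacl when DMSetFromOptions is used.

#include <petscdmda.h>

/* Hedged sketch: the DM returns a correctly typed and preallocated matrix;
   no manual MatMPIAIJSetPreallocation is needed. */
static PetscErrorCode CreateSystemMatrix(DM da, Mat *A)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = DMSetMatType(da, MATAIJVIENNACL);CHKERRQ(ierr);  /* must happen BEFORE DMCreateMatrix */
  ierr = DMCreateMatrix(da, A);CHKERRQ(ierr);              /* type and preallocation come from the DMDA */
  PetscFunctionReturn(0);
}
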
2018 at 2:32 PM, Matthew Knepley wrote:
> On Wed, Aug 15, 2018 at 5:20 PM Manuel Valera wrote:
>
>> It seems it can be summarized as: I do not know how to preallocate a DM matrix
>> correctly.
>>
>
> There is only one matrix type, Mat. There are no separate DM matrices. A
> DM
,
Thanks,
On Wed, Aug 15, 2018 at 2:15 PM, Matthew Knepley wrote:
> On Wed, Aug 15, 2018 at 4:53 PM Manuel Valera wrote:
>
>> Thanks Matthew,
>>
>> I try to do that when calling:
>>
>> call MatMPIAIJSetPreallocation(A,19,PETSC_NULL_INTEGER,19,PETSC_
>> N
-viennacl
Thanks,
On Wed, Aug 15, 2018 at 1:53 PM, Manuel Valera wrote:
> Thanks Matthew,
>
> I try to do that when calling:
>
> call MatMPIAIJSetPreallocation(A,19,PETSC_NULL_INTEGER,19,PETSC_
> NULL_INTEGER,ierr)
>
> But I am not aware of how to do this for the DM if
:
> On Wed, Aug 15, 2018 at 4:39 PM Manuel Valera wrote:
>
>> Hello PETSc devs,
>>
>> I am running into an error when trying to use the MATMPIAIJVIENNACL
>> Matrix type in MPI calls, the same code runs for MATSEQAIJVIENNACL type in
>> one processor. The err
Hello PETSc devs,
I am running into an error when trying to use the MATMPIAIJVIENNACL matrix
type in MPI calls; the same code runs for the MATSEQAIJVIENNACL type in one
processor. The error happens when calling MatSetValues for this specific
configuration. It does not occur when using MPI DMMatrix
Thanks, I was able to find the bug and correct it; it is working now.
I was calling the wrong DM for some DA,
Regards,
On Tue, Aug 14, 2018 at 2:13 PM, Jed Brown wrote:
> Manuel Valera writes:
>
> > Thanks Jed,
> >
> > I got the attached, it looks is coming
Thanks Jed,
I got the attached; it looks like it is coming from one of my routines,
CorrectU4Pressure.F90. What other information can I get from this log?
Thanks,
On Tue, Aug 14, 2018 at 12:57 PM, Jed Brown wrote:
> Manuel Valera writes:
>
> > Hello everyone,
> >
> > I am
Hello everyone,
I am working on running part of my code on a GPU; recently I was able to
run the whole model using one P100 GPU and one processor with good timing,
but using --with-debugging=1 as a configure argument.
With this in mind I compiled PETSc in a separate folder with the same exact
Knepley wrote:
> On Tue, Jul 10, 2018 at 2:11 PM Manuel Valera wrote:
>
>> Hi guys,
>>
>> It's me with another basic question, this time i need to find the global
>> maximum and minimum of DMDA array to do an average, it is usually operated
>> over a local v
Hi guys,
It's me with another basic question: this time I need to find the global
maximum and minimum of a DMDA array to do an average. It is usually operated
on as a local vector, but it has a global vector too; the code works as
intended on one core but it comes up with different values for this
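A hedged sketch of the usual fix: take the reduction on the global (distributed) vector with VecMax/VecMin, which are collective and return the same value on every rank, instead of scanning only the local array.

#include <petscvec.h>

/* Hedged sketch: global max/min (and their midpoint "average") of a distributed field. */
static PetscErrorCode GlobalRange(Vec global, PetscReal *avg)
{
  PetscReal      vmax, vmin;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = VecMax(global, NULL, &vmax);CHKERRQ(ierr);  /* collective: same result on every rank */
  ierr = VecMin(global, NULL, &vmin);CHKERRQ(ierr);
  *avg = 0.5 * (vmax + vmin);
  PetscFunctionReturn(0);
}
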
that
tomorrow,
Thanks,
On Mon, Jul 2, 2018 at 4:52 PM, Smith, Barry F. wrote:
>
>
> > On Jul 2, 2018, at 6:48 PM, Manuel Valera wrote:
> >
> >
> >
> > On Mon, Jul 2, 2018 at 4:23 PM, Smith, Barry F.
> wrote:
> >
> >
> > > On Jul 2, 2018,
on
globaltolocalbegin/end?
Thanks,
On Mon, Jul 2, 2018 at 3:42 PM, Manuel Valera wrote:
>
>
> On Mon, Jul 2, 2018 at 3:04 PM, Smith, Barry F.
> wrote:
>
>>
>>First make sure that getCenterInfo(daScalars, xstart, ystart, zstart,
>> xend, yend, zend) returns what it
NSERT_VALUES doesn't work with DMDAs; is this still an issue or has it
been fixed? I understand it looks like these are two different
problems in my code,
Thanks,
>
>Barry
>
>
>
>
> > On Jul 2, 2018, at 2:58 PM, Manuel Valera wrote:
> >
> > Hi guys,
>
Hi guys,
I've noticed a bug in my code that seems to happen right after a call to
DMGlobalToLocalBegin/End and I can't seem to find the reason; it goes like
this:
I create the DMDA (daScalars) with the following:
bx = DM_BOUNDARY_GHOSTED
> by = DM_BOUNDARY_PERIODIC
> bz = DM_BOUNDARY_GHOSTED
>
Knepley <knep...@gmail.com> wrote:
> On Wed, May 2, 2018 at 4:19 PM, Manuel Valera <mvaler...@sdsu.edu> wrote:
>
>> Hello guys,
>>
>> We are working in writing a paper about the parallelization of our model
>> using PETSc, which is very exciting since is
Hello guys,
We are working on writing a paper about the parallelization of our model
using PETSc, which is very exciting since it is the first time we see our
model scaling, but so far I feel my results for the laplacian solver could
be much better.
For example, using CG/Multigrid I get less than
I get it, thanks; that's a strong argument I will tell my advisor about.
Have a great day,
On Wed, Apr 25, 2018 at 12:30 PM, Smith, Barry F. <bsm...@mcs.anl.gov>
wrote:
>
>
> > On Apr 25, 2018, at 2:12 PM, Manuel Valera <mvaler...@sdsu.edu> wrote:
> >
> >
> Best regards,
> Karli
>
>
>
>
> On 04/25/2018 08:26 PM, Manuel Valera wrote:
>
>> Hi,
>>
>> I'm running scaling tests on my system to check why my scaling is so
>> poor, and after following the MPIVersion guidelines my scaling.log output
>> looks
Hi,
I'm running scaling tests on my system to check why my scaling is so poor,
and after following the MPIVersion guidelines my scaling.log output looks
like this:
Number of MPI processes 1 Processor names node37
Triad:        12856.9252   Rate (MB/s)
Number of MPI processes 1 Processor names
It looks like it's working now :)
I needed to set up the DMMatrix and that did the trick,
Thanks,
On Mon, Apr 9, 2018 at 6:13 PM, Manuel Valera <mvaler...@sdsu.edu> wrote:
> Oh ok, thanks Matt,
>
> I think the problem is that i am not using DMCreateMatrix at all but a
> re
I don't have anything for DMSetOptionsPrefix
On Mon, Apr 9, 2018 at 4:55 PM, Manuel Valera <mvaler...@sdsu.edu> wrote:
>
>
> On Mon, Apr 9, 2018 at 4:53 PM, Matthew Knepley <knep...@gmail.com> wrote:
>
>> On Mon, Apr 9, 2018 at 7:52 PM, Manuel Valera <mvaler...@
On Mon, Apr 9, 2018 at 4:53 PM, Matthew Knepley <knep...@gmail.com> wrote:
> On Mon, Apr 9, 2018 at 7:52 PM, Manuel Valera <mvaler...@sdsu.edu> wrote:
>
>> OK thanks, I'm learning more every day,
>>
>> I still get the same error; I am running with -dm_vec_type
wrote:
> On Mon, Apr 9, 2018 at 7:27 PM, Manuel Valera <mvaler...@sdsu.edu> wrote:
>
>> On Mon, Apr 9, 2018 at 4:09 PM, Matthew Knepley <knep...@gmail.com>
>> wrote:
>>
>>> On Mon, Apr 9, 2018 at 6:12 PM, Manuel Valera <mvaler...@sdsu.edu>
>>>
On Mon, Apr 9, 2018 at 4:09 PM, Matthew Knepley <knep...@gmail.com> wrote:
> On Mon, Apr 9, 2018 at 6:12 PM, Manuel Valera <mvaler...@sdsu.edu> wrote:
>
>> Hello guys,
>>
>> I've made advances in my CUDA acceleration project, as you remember i
>> have a C
Hello guys,
I've made advances in my CUDA acceleration project; as you remember, I have
a CFD model in need of better execution times.
So far I have been able to solve the pressure system on the GPU and the
rest in serial, using PETSc only for this pressure solve; the library I got
to work was
on
exactly when Open MPI kills them.
--
[valera@node50 alone]$
Thanks,
On Wed, Mar 14, 2018 at 1:52 PM, Matthew Knepley <knep...@gmail.com> wrote:
> On Thu, Mar 15, 2018 at 4:01 AM, Manuel Valer
to the valera/petsc/cuda
build,
should I just delete the petsc installation folder and start over?
Thanks,
On Wed, Mar 14, 2018 at 11:36 AM, Matthew Knepley <knep...@gmail.com> wrote:
> On Thu, Mar 15, 2018 at 3:25 AM, Manuel Valera <mvaler...@mail.sdsu.edu>
> wrote:
>
>> yeah
@node50 alone]$
I made sure there is a call to Vec/MatSetFromOptions() there; I am loading
the matrix from a petsc binary in this case,
Thanks,
On Wed, Mar 14, 2018 at 11:22 AM, Matthew Knepley <knep...@gmail.com> wrote:
> On Thu, Mar 15, 2018 at 3:19 AM, Manuel Valera <mvaler...@m
.
--
On Wed, Mar 14, 2018 at 11:16 AM, Matthew Knepley <knep...@gmail.com> wrote:
> On Thu, Mar 15, 2018 at 3:12 AM, Manuel Valera <mvaler...@mail.sdsu.edu>
> wrote:
>
>> Thanks, got this error:
>>
M, Matthew Knepley <knep...@gmail.com> wrote:
> On Thu, Mar 15, 2018 at 2:46 AM, Manuel Valera <mvaler...@mail.sdsu.edu>
> wrote:
>
>> Ok lets try that, if i go to /home/valera/testpetsc/arch
>> -linux2-c-opt/tests/src/snes/examples/tutorials there is runex19.sh and
.
# --
# 2 total processes failed to start
ok snes_tutorials-ex19_1 # SKIP Command failed so no diff
Is this the one I should be running?
On Wed, Mar 14, 2018 at 10:39 AM, Matthew Knepley <knep...@gmail.com> wrote:
> On Thu, Mar 15, 2018 at 2:27 AM, Manuel Valera <mvaler...@mail.sdsu.
wrote:
> On Fri, Mar 9, 2018 at 3:05 AM, Manuel Valera <mvaler...@mail.sdsu.edu>
> wrote:
>
>> Hello all,
>>
>> I am working on porting a linear solver into GPUs for timing purposes, so
>> far i've been able to compile and run the CUSP libraries and compile PETS
Hello all,
I am working on porting a linear solver to GPUs for timing purposes. So
far I've been able to compile and run the CUSP libraries and compile PETSc
to be used with CUSP and ViennaCL; after the initial runs I noticed some
errors, they are different for different flags, and I would
wrote:
> On Sat, Feb 18, 2017 at 2:44 PM, Manuel Valera <mval...@mail.sdsu.edu>
> wrote:
>
>> thanks guys that helped a lot!
>>
>> I think I got it now, I copy the code I created in case you want to
>> suggest something or maybe use it as an example...
>>
in a
! distributed fashion using the DMDA objects from PETSc.
! Manuel Valera 1/20/17
! Arguments:
! da = DMDA array (3D) already created and setup
! veczero =
! globalv =
! localv = local chunk each processor works in.
! array = the array
Hello,
My question this time is just whether there is a way to distribute a 3D array
that is located at rank zero over the processors, if possible using the DMDAs;
I'm trying not to do a lot of initialization I/O in parallel.
Thanks for your time,
Manuel
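A hedged sketch of one standard route (not from the thread; filling the data in natural ordering is the caller's responsibility): put the rank-0 data into the sequential vector returned by VecScatterCreateToZero, reverse-scatter it into a DMDA natural vector, and convert that into the distributed global vector.

#include <petscdmda.h>

/* Hedged sketch: scatter a full 3D array that lives only on rank 0 into the
   distributed global vector of a DMDA, going through the natural ordering. */
static PetscErrorCode DistributeFromRankZero(DM da, const PetscScalar *rank0data, Vec global)
{
  Vec            natural, seq;
  VecScatter     tozero;
  PetscMPIInt    rank;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);
  ierr = DMDACreateNaturalVector(da, &natural);CHKERRQ(ierr);
  ierr = VecScatterCreateToZero(natural, &tozero, &seq);CHKERRQ(ierr);
  if (rank == 0) {                      /* fill the rank-0 vector in natural (i fastest) ordering */
    PetscScalar *s;
    PetscInt     len, p;
    ierr = VecGetLocalSize(seq, &len);CHKERRQ(ierr);
    ierr = VecGetArray(seq, &s);CHKERRQ(ierr);
    for (p = 0; p < len; p++) s[p] = rank0data[p];
    ierr = VecRestoreArray(seq, &s);CHKERRQ(ierr);
  }
  /* reverse scatter: rank-0 sequential vector -> distributed natural vector */
  ierr = VecScatterBegin(tozero, seq, natural, INSERT_VALUES, SCATTER_REVERSE);CHKERRQ(ierr);
  ierr = VecScatterEnd(tozero, seq, natural, INSERT_VALUES, SCATTER_REVERSE);CHKERRQ(ierr);
  ierr = DMDANaturalToGlobalBegin(da, natural, INSERT_VALUES, global);CHKERRQ(ierr);
  ierr = DMDANaturalToGlobalEnd(da, natural, INSERT_VALUES, global);CHKERRQ(ierr);
  ierr = VecScatterDestroy(&tozero);CHKERRQ(ierr);
  ierr = VecDestroy(&seq);CHKERRQ(ierr);
  ierr = VecDestroy(&natural);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
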
wrote:
>
>For DMDAVecGetArrayF90 you need to declare the "array" arguments as
> Fortran pointers you don't declare them like
>
> > u0(-1:IMax+2,-1:JMax+1,-1:KMax+1)
>
>
>
> > On Feb 4, 2017, at 7:34 PM, Matthew Knepley <knep...@gmail.com> wrote:
> >
> > O
,localv,array) !
use PetscObjects, only :: ierr
! Umbrella program to update and communicate the arrays in a
! distributed fashion using the DMDA objects from PETSc.
! Manuel Valera 1/20/17
! Arguments:
! da = DMDA array either 1d, 2d or 3d, already created and setup
! Umbrella program to update and communicate the arrays in a
! distributed fashion using the DMDA objects from PETSc.
! Manuel Valera 1/20/17
! Arguments:
! da = DMDA array either 1d, 2d or 3d, already created and setup
! globalv = global vector to be operated
to the (local) imax.
!
Could someone explain a little bit more about these functions,
petsc_to_local() and local_to_petsc(), and especially why
transform_petsc_us() and transform_us_petsc() are used?
Thanks,
Manuel
On Thu, Jan 19, 2017 at 2:01 PM, Manuel Valera <mval...@mail.sdsu.edu>
.
that is it for now, thanks for your time,
Manuel Valera
algebraic multigrid!
>
>For some reason AMG doesn't like your pressure matrix (even though AMG
> generally loves pressure matrices). What do you have for boundary
> conditions for your pressure?
>
>Please run with -ksp_view_mat binary -ksp_view_rhs binary and then send
> the resul
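For completeness, the files written by -ksp_view_mat binary -ksp_view_rhs binary (by default a file named binaryoutput) can be read back in a small driver to reproduce the solve; a hedged sketch assuming the default file name:

#include <petscksp.h>

/* Hedged sketch: read back the matrix and right-hand side written by
   -ksp_view_mat binary -ksp_view_rhs binary (default file "binaryoutput"). */
int main(int argc, char **argv)
{
  PetscViewer    viewer;
  Mat            A;
  Vec            b;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "binaryoutput", FILE_MODE_READ, &viewer);CHKERRQ(ierr);
  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatLoad(A, viewer);CHKERRQ(ierr);
  ierr = VecCreate(PETSC_COMM_WORLD, &b);CHKERRQ(ierr);
  ierr = VecLoad(b, viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
  /* ... KSPCreate / KSPSetOperators / KSPSolve here to reproduce the solve ... */
  ierr = VecDestroy(&b);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  return PetscFinalize();
}
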
;
>make stream NPMAX=4
>
> run in the PETSc directory.
>
>
>
> > On Jan 7, 2017, at 7:38 PM, Manuel Valera <mval...@mail.sdsu.edu> wrote:
> >
> > Ok great, i tried those command line args and this is the result:
> >
> > when i use -p
MatSetValues calls =0
not using I-node (on process 0) routines
but still the timing is terrible.
On Sat, Jan 7, 2017 at 5:28 PM, Jed Brown <j...@jedbrown.org> wrote:
> Manuel Valera <mval...@mail.sdsu.edu> writes:
>
> > Awesome Matt and Jed,
> >
> >
,
Manuel
On Sat, Jan 7, 2017 at 4:34 PM, Jed Brown <j...@jedbrown.org> wrote:
> Manuel Valera <mval...@mail.sdsu.edu> writes:
>
> > I was able to find the bug, it was the outer loop bound in the same
> fashion
om> wrote:
> On Sat, Jan 7, 2017 at 5:33 PM, Manuel Valera <mval...@mail.sdsu.edu>
> wrote:
>
>> Thanks Barry and Matt,
>>
>> I was able to detect a bug that I just solved; as suggested, the loop
>> parameters weren't updated as they should be, now they are and th
Thank you Matthew,
On Sat, Jan 7, 2017 at 1:49 PM, Matthew Knepley <knep...@gmail.com> wrote:
> On Sat, Jan 7, 2017 at 3:32 PM, Manuel Valera <mval...@mail.sdsu.edu>
> wrote:
>
>> Hi Devs, hope you are having a great weekend,
>>
>> I could finally par
,
'pc_hypre_boomeramg_nodal_coarsen','1',ierr)
call PetscOptionsSetValue(PETSC_NULL_OBJECT,
'pc_hypre_boomeramg_vec_interp_variant','1',ierr)
What are your thoughts ?
Thanks,
Manuel
On Fri, Jan 6, 2017 at 1:58 PM, Manuel Valera <mval...@mail.sdsu.edu> wrote:
> Awesome, that did i
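If those BoomerAMG options are set programmatically, a hedged C equivalent is below (note the leading dashes; the values must be set before KSPSetFromOptions/PCSetFromOptions runs, and a hypre-enabled build is assumed):

#include <petscsys.h>

/* Hedged sketch: the same BoomerAMG options set in C, with the leading dashes;
   they are picked up the next time KSPSetFromOptions/PCSetFromOptions runs. */
static PetscErrorCode SetBoomerAMGOptions(void)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = PetscOptionsSetValue(NULL, "-pc_type", "hypre");CHKERRQ(ierr);
  ierr = PetscOptionsSetValue(NULL, "-pc_hypre_type", "boomeramg");CHKERRQ(ierr);
  ierr = PetscOptionsSetValue(NULL, "-pc_hypre_boomeramg_nodal_coarsen", "1");CHKERRQ(ierr);
  ierr = PetscOptionsSetValue(NULL, "-pc_hypre_boomeramg_vec_interp_variant", "1");CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
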
> > On Jan 6, 2017, at 3:29 PM, Manuel Valera <mval...@mail.sdsu.edu> wrote:
> >
> > Thanks Dave,
> >
> > I think it is interesting that it never gave an error on this; after adding the
> VecAssembly calls it still shows the same behavior, without complaining. I
> did:
CHKERRQ(ierr)
endif
CHKERRQ(ierr)
Thanks.
On Fri, Jan 6, 2017 at 12:44 PM, Dave May <dave.mayhe...@gmail.com> wrote:
>
>
> On 6 January 2017 at 20:24, Manuel Valera <mval...@mail.sdsu.edu> wrote:
>
>> Great help Barry, i totally ha
Thanks immensely for your help,
Manuel
On Thu, Jan 5, 2017 at 4:39 PM, Barry Smith <bsm...@mcs.anl.gov> wrote:
>
> > On Jan 5, 2017, at 6:21 PM, Manuel Valera <mval...@mail.sdsu.edu> wrote:
> >
> > Hello Devs is me again,
> >
> > I'm trying to
more than one
processor, what would be a better approach?
Thanks once again,
Manuel
On Wed, Jan 4, 2017 at 3:30 PM, Manuel Valera <mval...@mail.sdsu.edu> wrote:
> Thanks, I had no idea how to debug and read those logs; that solved this
> issue at least (I was sending a
nep...@gmail.com> wrote:
> On Wed, Jan 4, 2017 at 5:21 PM, Manuel Valera <mval...@mail.sdsu.edu>
> wrote:
>
>> I did a PetscBarrier just before calling the vicariate routine and I'm
>> pretty sure I'm calling it from every processor; the code looks like this:
>>
>
> Fro
from 0
entering POInit from 1
entering POInit from 2
entering POInit from 3
Still hangs in the same way,
Thanks,
Manuel
On Wed, Jan 4, 2017 at 2:55 PM, Manuel Valera <mval...@mail.sdsu.edu> wrote:
> Thanks for the answers !
zes() before VecSetType()
>
> Thanks,
> Dave
>
>
> On Wed, 4 Jan 2017 at 23:21, Manuel Valera <mval...@mail.sdsu.edu> wrote:
>
> Hello all, happy new year,
>
> I'm working on parallelizing my code, it worked and provided some results
> when i just called more
Hello all, happy new year,
I'm working on parallelizing my code; it worked and provided some results
when I just called more than one processor, but it created artifacts because I
didn't need one image of the whole program in each processor, conflicting
with each other.
Since the pressure solver is
Knepley <knep...@gmail.com> wrote:
> On Fri, Nov 4, 2016 at 7:37 PM, Manuel Valera <mval...@mail.sdsu.edu>
> wrote:
>
>> Hello all,
>>
>> I'm reviving this old post because we are trying to share the petsc
>> objects from outside our iteration routine, th