The second one should absolutely be slower than the first (because it
actually iterates through the indices you pass in with an indirection), and
the first should not get slower the more you run it.
Depending on your environment, I recommend using a profiling tool on the
code and lo
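For reference, a minimal sketch of the two variants being compared; the loop version is reconstructed from the call names in the quoted mail below, so the function names, the index handling and the ADD_VALUES mode are assumptions, not the poster's actual code:

#include <petscvec.h>

/* Variant 1: one fused kernel, x <- x + 1.0 * r. */
PetscErrorCode add_with_axpy(Vec x, Vec r)
{
  PetscErrorCode ierr;
  ierr = VecAXPY(x, 1.0, r);CHKERRQ(ierr);
  return 0;
}

/* Variant 2: element by element through VecSetValue; every entry goes
   through an index indirection plus the generic stash/assembly machinery,
   which is why it is expected to be slower. */
PetscErrorCode add_with_setvalues(Vec x, Vec r)
{
  PetscErrorCode     ierr;
  const PetscScalar *vals;
  PetscInt           lo, hi, i;

  ierr = VecGetOwnershipRange(r, &lo, &hi);CHKERRQ(ierr);
  ierr = VecGetArrayRead(r, &vals);CHKERRQ(ierr);
  for (i = lo; i < hi; i++) {
    ierr = VecSetValue(x, i, vals[i - lo], ADD_VALUES);CHKERRQ(ierr);
  }
  ierr = VecRestoreArrayRead(r, &vals);CHKERRQ(ierr);
  ierr = VecAssemblyBegin(x);CHKERRQ(ierr);
  ierr = VecAssemblyEnd(x);CHKERRQ(ierr);
  return 0;
}

Running with -log_view (or an external profiler) will show how much time ends up in VecSetValues/VecAssembly versus the single VecAXPY.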
On 6 January 2017 at 22:31, Łukasz Kasza wrote:
>
>
> Dear PETSc Users,
>
> Please consider the following 2 snippets, which do exactly the same thing
> (calculate a sum of two vectors):
> 1.
> VecAXPY(amg_level_x[level],1.0,amg_level_residuals[level]);
>
> 2.
> Ve
Dear PETSc Users,
Please consider the following 2 snippets, which do exactly the same thing
(calculate a sum of two vectors):
1.
VecAXPY(amg_level_x[level],1.0,amg_level_residuals[level]);
2.
VecGetArray(amg_level_residuals[level], &values);
VecSetValu
Awesome, that did it, thanks once again.
On Fri, Jan 6, 2017 at 1:53 PM, Barry Smith wrote:
>
> Take the scatter out of the if () since everyone does it and get rid of
> the VecView().
>
> Does this work? If not where is it hanging?
>
>
> > On Jan 6, 2017, at 3:29 PM, Manuel Valera wrote
Take the scatter out of the if () since everyone does it and get rid of the
VecView().
Does this work? If not where is it hanging?
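A minimal sketch in C of the pattern being suggested, assuming the scatter context and the sequential vector come from VecScatterCreateToZero; the function and variable names here are placeholders, not the code from the thread:

#include <petscvec.h>

/* Sketch only: dist is the distributed vector, seq and ctx are assumed to
   come from VecScatterCreateToZero(dist, &ctx, &seq), and n/ind/vals are
   the data that lives on rank 0. Only rank 0 inserts values, but the
   assembly and scatter calls are collective, so every rank makes them
   (outside any if (rank == 0) block). */
PetscErrorCode fill_on_root_and_distribute(Vec dist, Vec seq, VecScatter ctx,
                                           PetscInt n, const PetscInt *ind,
                                           const PetscScalar *vals)
{
  PetscErrorCode ierr;
  PetscMPIInt    rank;

  ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);
  if (rank == 0) {
    ierr = VecSetValues(seq, n, ind, vals, INSERT_VALUES);CHKERRQ(ierr);
  }
  ierr = VecAssemblyBegin(seq);CHKERRQ(ierr);
  ierr = VecAssemblyEnd(seq);CHKERRQ(ierr);
  /* Reverse scatter: the vector arguments are swapped relative to the
     SCATTER_FORWARD usage of the same context. */
  ierr = VecScatterBegin(ctx, seq, dist, INSERT_VALUES, SCATTER_REVERSE);CHKERRQ(ierr);
  ierr = VecScatterEnd(ctx, seq, dist, INSERT_VALUES, SCATTER_REVERSE);CHKERRQ(ierr);
  return 0;
}

The point is that VecAssemblyBegin/End and VecScatterBegin/End are collective: a rank that skips them leaves the others waiting, which shows up as a hang rather than an error.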
> On Jan 6, 2017, at 3:29 PM, Manuel Valera wrote:
>
> Thanks Dave,
>
> I think it is interesting that it never gave an error on this; after adding the
> vecass
Thanks Dave,
I think it is interesting that it never gave an error on this; after adding the
VecAssembly calls it still shows the same behavior, without complaining. I
did:
if(rankl==0)then
call VecSetValues(bp0,nbdp,ind,Rhs,INSERT_VALUES,ierr)
call VecAssemblyBegin(bp0,ierr) ; call VecAssem
On 6 January 2017 at 20:24, Manuel Valera wrote:
> Great help Barry, I had totally overlooked that option (it is explicit in
> the VecScatterBegin help page but not in VecScatterCreateToZero, as I
> read later)
>
> So I used that and it works partially: it scatters the values assigned in
> ro
Great help Barry, I had totally overlooked that option (it is explicit in
the VecScatterBegin help page but not in VecScatterCreateToZero, as I
read later).
So I used that and it works partially: it scatters the values assigned in
root but not the rest, if I call VecScatterBegin from outside ro
Yes, the option MatSetOption(M, MAT_NEW_NONZERO_LOCATION_ERR, PETSC_FALSE)
seems to be the path of least resistance. Especially as it is something I
am doing out of my
own curiosity and not part of anything larger.
I might have to bug you again very soon on how to optimize or move forward
based on
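For context, a minimal sketch of how that option is typically used; the matrix M and the inserted entry are placeholders, and the surrounding function is hypothetical:

#include <petscmat.h>

/* Sketch only: with MAT_NEW_NONZERO_LOCATION_ERR set to PETSC_FALSE,
   inserting a value at a location that is not part of the existing nonzero
   pattern is allowed instead of generating an error. */
PetscErrorCode allow_new_nonzeros(Mat M, PetscInt i, PetscInt j, PetscScalar v)
{
  PetscErrorCode ierr;
  ierr = MatSetOption(M, MAT_NEW_NONZERO_LOCATION_ERR, PETSC_FALSE);CHKERRQ(ierr);
  ierr = MatSetValue(M, i, j, v, INSERT_VALUES);CHKERRQ(ierr);
  ierr = MatAssemblyBegin(M, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(M, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  return 0;
}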
Great, you should now be able to remove the extra options I had you add.
> -fieldsplit_0_ksp_type gmres -fieldsplit_0_ksp_pc_side right
> -fieldsplit_1_ksp_type gmres -fieldsplit_1_ksp_pc_side right)
> On Jan 6, 2017, at 5:17 AM, Karin&NiKo wrote:
>
> Barry,
>
> you are goddamn right -
On Fri, Jan 6, 2017 at 10:08 AM, Patrick Begou <
patrick.be...@legi.grenoble-inp.fr> wrote:
> Hi Matthew,
>
> Using the debugger I finally found the problem. It is related to MPI. In
> src/sys/objects/pinit.c line 779, PETSc tests the availability of
> PETSC_HAVE_MPI_INIT_THREAD and this is set to T
On Fri, Jan 6, 2017 at 8:52 AM, Rochan Upadhyay wrote:
> Constraints come from so-called cohomology conditions. In practical
> applications,
> they arise when you couple field models (e.g. Maxwell's equations) with
> lumped
> models (e.g. circuit equations). They are described in this paper:
> h
On Fri, 6 Jan 2017, Klaij, Christiaan wrote:
> Satish,
>
> Our sysadmin is not keen on downgrading glibc.
sure
> I'll stick with "--with-shared-libraries=0" for now
that's fine.
> and wait for SL7.3 with intel 17.
Well, they are not related, so if you can you should upgrade to
intel-17 [irre
Hi Matthew,
Using the debugger I finally found the problem. It is related to MPI. In
src/sys/objects/pinit.c line 779, PETSc tests the availability of
PETSC_HAVE_MPI_INIT_THREAD and this is set to True because my OpenMPI version
is compiled with --enable-mpi-thread-multiple.
However the call t
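As a generic illustration only (this is not PETSc's actual code in pinit.c), the MPI call reached on that path looks like the following; the thread level requested here is an assumption:

#include <mpi.h>
#include <stdio.h>

/* Standalone illustration: initialize MPI with a requested thread level and
   print the level the library actually provides. An OpenMPI built with
   --enable-mpi-thread-multiple can grant MPI_THREAD_MULTIPLE here. */
int main(int argc, char **argv)
{
  int provided;
  MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
  printf("provided thread level = %d (MPI_THREAD_MULTIPLE = %d)\n",
         provided, MPI_THREAD_MULTIPLE);
  MPI_Finalize();
  return 0;
}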
"Mark W. Lohry" writes:
> I have an unsteady problem I'm trying to solve for steady state. The regular
> time-accurate stepping works fine (uses around 5 Newton iterations with 100
> Krylov iterations each per time step) with beuler stepping.
>
>
> But when changing only TSType to pseudo it loo
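For reference, a minimal sketch of the switch being described; the TS object is assumed to be already set up, and the increment value is just an illustrative default:

#include <petscts.h>

/* Sketch only: switch an existing TS from backward Euler to
   pseudo-timestepping for a steady-state solve. The same choice can be made
   at run time with -ts_type pseudo instead of -ts_type beuler. */
PetscErrorCode switch_to_pseudo(TS ts)
{
  PetscErrorCode ierr;
  ierr = TSSetType(ts, TSPSEUDO);CHKERRQ(ierr);
  ierr = TSPseudoSetTimeStepIncrement(ts, 1.1);CHKERRQ(ierr); /* dt growth factor */
  return 0;
}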
Hi,
first, this was a user error and I totally acknowledge this, but I
wonder if this might be an oversight in your error checking: if you
configure gamg with ilu/asm smoothing, and are stupid enough to have set
the number of smoother cycles to 0, your program churns along and
apparently con
Constraints come from so-called cohomology conditions. In practical
applications,
they arise when you couple field models (e.g. Maxwell's equations) with
lumped
models (e.g. circuit equations). They are described in this paper:
http://gmsh.info/doc/preprints/gmsh_homology_preprint.pdf
In their mat
It is not the first time I have had this problem, and my aim was now to try to solve
it instead of ignoring tests. The environment seems coherent (see below).
I'll try to run in debug mode to investigate where the code hangs.
Patrick
[begou@kareline tutorials]$ make ex19
mpicc -o ex19.o -c -Wall -
On Fri, Jan 6, 2017 at 2:39 AM, Patrick Begou <
patrick.be...@legi.grenoble-inp.fr> wrote:
> Hi Matthew,
>
> Launching ex19 manually shows only one process consuming CPU time; after
> 952 min I killed the job this morning.
>
> [begou@kareline tutorials]$ make ex19
> mpicc -o ex19.o -c -Wall -Wwrit
Hi Matthew,
Launching ex19 manually shows only one process consuming CPU time; after 952 min
I killed the job this morning.
[begou@kareline tutorials]$ make ex19
mpicc -o ex19.o -c -Wall -Wwrite-strings -Wno-strict-aliasing
-Wno-unknown-pragmas -fvisibility=hidden -g3
-I/kareline/data/begou/
Satish,
Our sysadmin is not keen on downgrading glibc. I'll stick with
"--with-shared-libraries=0" for now and wait for SL7.3 with intel 17. Thanks
for filing the bugreport at RHEL, very curious to see their response.
Chris
dr. ir. Christiaan Klaij | CFD Researcher | Research & Development
M