MatSolverPackage became MatSolverType.
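A minimal sketch of what the rename means in code, assuming the context is choosing an
external factorization package such as MUMPS for a PC (the solver choice here is
illustrative, not from the original message); only the names changed, not the behavior:

  #include <petscpc.h>

  PetscErrorCode select_mumps(PC pc)
  {
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = PCSetType(pc, PCLU);CHKERRQ(ierr);
  #if PETSC_VERSION_LT(3,9,0)
    ierr = PCFactorSetMatSolverPackage(pc, MATSOLVERMUMPS);CHKERRQ(ierr); /* old name */
  #else
    ierr = PCFactorSetMatSolverType(pc, MATSOLVERMUMPS);CHKERRQ(ierr);    /* new name  */
  #endif
    PetscFunctionReturn(0);
  }

If I remember correctly, the run-time option was renamed the same way, from
-pc_factor_mat_solver_package to -pc_factor_mat_solver_type.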
> On Mar 5, 2018, at 1:35 PM, Danyang Su wrote:
>
> Hi Barry and Matt,
>
> The compiling problem is likely caused by the PETSc version installed on my
> computer. After updating to the PETSc-dev version, the ex1f example works fine.
> However, I canno
> On Mar 5, 2018, at 1:10 PM, Nelson David Rufus wrote:
>
> Thanks Matt. I will try to use a 32-bit integer to see if the problem goes
> away. I had a query about compatibility when reading and writing files with
> code compiled with different int variants. For example, in the past, I've noticed
>
Hi Barry and Matt,
The compiling problem is likely caused by the PETSc version installed on
my computer. After updating to the PETSc-dev version, the ex1f example works
fine. However, I cannot compile this example under PETSc-3.8.3. After
updating to the PETSc-dev version, I encounter another c
On Mon, Mar 5, 2018 at 2:10 PM, Nelson David Rufus wrote:
> Thanks Matt. I will try to use a 32-bit integer to see if the problem goes
> away. I had a query about compatibility when reading and writing files with
> code compiled with different int variants. For example, in the past, I've
> noticed that
Thanks Matt. I will try to use a 32-bit integer to see if the problem goes
away. I had a query about compatibility when reading and writing files with
code compiled with different int variants. For example, in the past, I've
noticed that a file written using an int32 PETSc installation (using code
simila
On Mon, Mar 5, 2018 at 1:52 PM, Nelson David Rufus wrote:
> Hello,
>
> I am writing a vector to a PETSc binary file and subsequently reading the
> same in another part of the code.
> This works well for smaller vector sizes but fails for a large vector
> (vectorsize=204800;
Hello,
I am writing a vector to a PETSc binary file and subsequently reading the
same in another part of the code.
This works well for smaller vector sizes but fails for a large vector
(vectorsize=204800; PetscScalar=PetscComplex). For a large vector, when
the read vector is probed for values,
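For reference, a minimal sketch of the write/read round trip being described (the file
name and the surrounding setup are placeholders, not the poster's code); the same calls
are used whether PETSc was built with real or complex scalars:

  #include <petscvec.h>

  /* Write a Vec to a PETSc binary file and read it back into a new Vec. */
  PetscErrorCode write_then_read(MPI_Comm comm, Vec x)
  {
    PetscErrorCode ierr;
    PetscViewer    viewer;
    Vec            y;

    PetscFunctionBeginUser;
    ierr = PetscViewerBinaryOpen(comm, "vec.dat", FILE_MODE_WRITE, &viewer);CHKERRQ(ierr);
    ierr = VecView(x, viewer);CHKERRQ(ierr);
    ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);

    ierr = VecCreate(comm, &y);CHKERRQ(ierr);
    ierr = PetscViewerBinaryOpen(comm, "vec.dat", FILE_MODE_READ, &viewer);CHKERRQ(ierr);
    ierr = VecLoad(y, viewer);CHKERRQ(ierr);   /* sizes are taken from the file header */
    ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
    ierr = VecDestroy(&y);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }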
On Mon, 5 Mar 2018, Smith, Barry F. wrote:
> > My Poisson equation solve is probably the biggest culprit. I call it through:
> >
> > call hypre_solver(p_array,q_p_array)
> >
> > So in this case, I first create a global variable with
> > PetscLogEventRegister, and then before/after the subroutine, I cal
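The logging pattern being described, sketched in C rather than the original Fortran (the
event name and the solver call are placeholders):

  #include <petscsys.h>

  static PetscLogEvent POISSON_SOLVE;   /* the "global variable" holding the event */

  /* Register once, near PetscInitialize(). */
  PetscErrorCode register_events(void)
  {
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = PetscLogEventRegister("PoissonSolve", 0, &POISSON_SOLVE);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }

  /* Wrap the solver call so its time shows up as a separate line in -log_view. */
  PetscErrorCode timed_poisson_solve(void)
  {
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = PetscLogEventBegin(POISSON_SOLVE, 0, 0, 0, 0);CHKERRQ(ierr);
    /* ... the equivalent of call hypre_solver(p_array, q_p_array) goes here ... */
    ierr = PetscLogEventEnd(POISSON_SOLVE, 0, 0, 0, 0);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }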
> On Mar 5, 2018, at 9:59 AM, TAY wee-beng wrote:
>
>
> On 5/3/2018 11:43 AM, Smith, Barry F. wrote:
>> 360 processes
>>
>> KSPSolve 99 1.0 2.6403e+02 1.0 6.67e+10 1.1 2.7e+05 9.9e+05 5.1e+02 15100 17 42 19 15100 17 42 19 87401
>>
>> 1920 processes
>>
>> KSPSolve
On 5/3/2018 11:43 AM, Smith, Barry F. wrote:
360 processes
KSPSolve 99 1.0 2.6403e+02 1.0 6.67e+10 1.1 2.7e+05 9.9e+05 5.1e+02 15100 17 42 19 15100 17 42 19 87401
1920 processes
KSPSolve 99 1.0 2.3184e+01 1.0 1.32e+10 1.2 1.5e+06 4.3e+05 5.1e+02 4100 17 42 19 410
On Mon, Mar 5, 2018 at 9:55 AM, Åsmund Ervik wrote:
> Hmm, there are many elements to consider here. Learning to use Plex might be
> a better use of my time than setting up and maintaining a custom DMDA
> repartitioning. But DMDAs are a more natural fit to what I want to do...
>
> However, I'm not
Hmm, there are many elements to consider here. Learning to use Plex might be a
better use of my time than setting up and maintaining a custom DMDA
repartitioning. But DMDAs are a more natural fit to what I want to do...
However, I'm not in a huge rush to do this immediately. Can I ping you back
s
Matthew Knepley writes:
> On Mon, Mar 5, 2018 at 9:01 AM, Tobin Isaac wrote:
>
>> This is a somewhat incomplete description of the steps in linear
>> partitioning. The rest can be accomplished with PetscSF calls, but I
>> should wrap it up in a PetscPartitioner because it's a mistake-prone
>> o
On Mon, Mar 5, 2018 at 9:25 AM, Åsmund Ervik wrote:
> My only argument "against" using Plex is that I don't understand how to
> use it. Is there a simple example anywhere that shows how to set up a 1D
> simplicial (?) mesh, and then just get/return data between vectors
> associated with the Plex a
On 5 March 2018 at 13:56, Åsmund Ervik wrote:
> As Jed suggests, computing the (re)partitioning is straightforward in my
> 1D case. We're not planning to move this to multiple dimensions (we have
> another type of solver for that).
>
> So if it's possible to expose the repartitioning code for DAs
My only argument "against" using Plex is that I don't understand how to use it.
Is there a simple example anywhere that shows how to set up a 1D simplicial (?)
mesh, and then just get/return data between vectors associated with the Plex
and (local) Fortran arrays on each proc? I don't have any KS
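Whatever mesh object is used, the data-movement half of the question usually reduces to
the standard DM local/global pattern below (a C sketch; Fortran would use the same calls
plus VecGetArrayF90/VecRestoreArrayF90). The per-cell work is a placeholder:

  #include <petscdm.h>

  /* Pull ghosted local data out of a DM-attached global Vec, work on it as a
     plain array, and push it back. The same calls work for DMPlex and DMDA. */
  PetscErrorCode work_on_local_data(DM dm, Vec global)
  {
    PetscErrorCode ierr;
    Vec            local;
    PetscScalar   *a;
    PetscInt       n, i;

    PetscFunctionBeginUser;
    ierr = DMGetLocalVector(dm, &local);CHKERRQ(ierr);
    ierr = DMGlobalToLocalBegin(dm, global, INSERT_VALUES, local);CHKERRQ(ierr);
    ierr = DMGlobalToLocalEnd(dm, global, INSERT_VALUES, local);CHKERRQ(ierr);

    ierr = VecGetArray(local, &a);CHKERRQ(ierr);
    ierr = VecGetLocalSize(local, &n);CHKERRQ(ierr);
    for (i = 0; i < n; i++) a[i] *= 2.0;   /* placeholder for the real per-cell work */
    ierr = VecRestoreArray(local, &a);CHKERRQ(ierr);

    ierr = DMLocalToGlobalBegin(dm, local, INSERT_VALUES, global);CHKERRQ(ierr);
    ierr = DMLocalToGlobalEnd(dm, local, INSERT_VALUES, global);CHKERRQ(ierr);
    ierr = DMRestoreLocalVector(dm, &local);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }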
On Mon, Mar 5, 2018 at 9:01 AM, Tobin Isaac wrote:
> This is a somewhat incomplete description of the steps in linear
> partitioning. The rest can be accomplished with PetscSF calls, but I
> should wrap it up in a PetscPartitioner because it's a mistake-prone
> operation.
>
Jed likes to do ever
This is a somewhat incomplete description of the steps in linear partitioning.
The rest can be accomplished with PetscSF calls, but I should wrap it up in a
PetscPartitioner because it's a mistake-prone operation.
On March 5, 2018 8:31:42 AM EST, Jed Brown wrote:
>Dave May writes:
>
>> For a
As Jed suggests, computing the (re)partitioning is straightforward in my 1D
case. We're not planning to move this to multiple dimensions (we have another
type of solver for that).
So if it's possible to expose the repartitioning code for DAs, I'd be very
happy to go this route. Is it a lot of w
Dave May writes:
> For a 1D problem such as yours, I would use your favourite graph
> partitioner (Metis,Parmetis, Scotch) together with your cell based
> weighting and repartition the data yourself.
That's overkill in 1D. You can MPI_Allreduce(SUM) and MPI_Scan(SUM) the
weights, then find the
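A sketch of that idea in plain MPI, assuming per-cell weights are already available; the
function name and the ownership rule (split cumulative weight at each cell's midpoint)
are illustrative, and actually moving the data afterwards is not shown:

  #include <mpi.h>

  /* Assign each local cell a destination rank so that every rank ends up with
     roughly the same total weight. */
  void weighted_owners(MPI_Comm comm, int nlocal, const double *w, int *owner)
  {
    int    size, i;
    double wlocal = 0.0, wtotal, wbefore, prefix;

    MPI_Comm_size(comm, &size);
    for (i = 0; i < nlocal; i++) wlocal += w[i];

    MPI_Allreduce(&wlocal, &wtotal, 1, MPI_DOUBLE, MPI_SUM, comm);  /* total weight   */
    MPI_Scan(&wlocal, &wbefore, 1, MPI_DOUBLE, MPI_SUM, comm);      /* inclusive scan */
    wbefore -= wlocal;                                              /* weight on lower ranks */

    prefix = wbefore;
    for (i = 0; i < nlocal; i++) {
      double mid = prefix + 0.5 * w[i];          /* cumulative weight at this cell's middle */
      int    r   = (int)(mid * size / wtotal);   /* rank whose ideal slice contains it      */
      owner[i]   = (r < size) ? r : size - 1;
      prefix    += w[i];
    }
  }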
On Mon, Mar 5, 2018 at 8:17 AM, Dave May wrote:
>
>
> On 5 March 2018 at 09:29, Åsmund Ervik wrote:
>
>> Hi all,
>>
>> We have a code that solves the 1D multiphase Euler equations, using some
>> very expensive thermodynamic calls in each cell in each time step. The
>> computational time for diff
On 5 March 2018 at 09:29, Åsmund Ervik wrote:
> Hi all,
>
> We have a code that solves the 1D multiphase Euler equations, using some
> very expensive thermodynamic calls in each cell in each time step. The
> computational time for different cells varies significantly in the spatial
> direction (d
On Mon, Mar 5, 2018 at 4:29 AM, Åsmund Ervik wrote:
> Hi all,
>
> We have a code that solves the 1D multiphase Euler equations, using some
> very expensive thermodynamic calls in each cell in each time step. The
> computational time for different cells varies significantly in the spatial
> direct
Have you tried with inexact shift-and-invert, i.e. with an iterative KSP
instead of MUMPS? Or with preconditioned eigensolvers (GD or JD)?
If you want, send the matrix to my personal address and I will try a few things.
Jose
> On 5 Mar 2018, at 19:50, Santiago Andres Triana wrote:
>
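For what it's worth, a sketch of what inexact shift-and-invert could look like with the
SLEPc C API; the target value and the GMRES/block-Jacobi choices are placeholders, not a
recommendation for this particular matrix:

  #include <slepceps.h>

  /* Configure an existing EPS for shift-and-invert with an iterative inner KSP
     instead of a direct solver such as MUMPS. */
  PetscErrorCode inexact_sinvert(EPS eps, PetscScalar sigma)
  {
    PetscErrorCode ierr;
    ST             st;
    KSP            ksp;
    PC             pc;

    PetscFunctionBeginUser;
    ierr = EPSGetST(eps, &st);CHKERRQ(ierr);
    ierr = STSetType(st, STSINVERT);CHKERRQ(ierr);
    ierr = STGetKSP(st, &ksp);CHKERRQ(ierr);
    ierr = KSPSetType(ksp, KSPGMRES);CHKERRQ(ierr);   /* iterative instead of LU */
    ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
    ierr = PCSetType(pc, PCBJACOBI);CHKERRQ(ierr);
    ierr = EPSSetTarget(eps, sigma);CHKERRQ(ierr);
    ierr = EPSSetWhichEigenpairs(eps, EPS_TARGET_MAGNITUDE);CHKERRQ(ierr);
    /* Alternatively, a preconditioned eigensolver: EPSSetType(eps, EPSGD) or EPSJD. */
    PetscFunctionReturn(0);
  }

The same setup should be reachable from the command line with options such as
-st_type sinvert -st_ksp_type gmres -st_pc_type bjacobi, or -eps_type gd / -eps_type jd
for the preconditioned solvers.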
Dear Jose,
Thanks for your reply. The problem I deal with (rotational fluid dynamics)
involves a very small parameter, the Ekman number, which needs to be as
small as possible, hopefully 10^-10 or smaller (typical of the molten core
of a planet). I have noticed (as have other authors before) that roun
Hi all,
We have a code that solves the 1D multiphase Euler equations, using some very
expensive thermodynamic calls in each cell in each time step. The computational
time for different cells varies significantly in the spatial direction (due to
different thermodynamic states), and varies slowly