I see. Does that s^2 memory scaling mean that sparse direct solvers are not meant to be used beyond a certain point? I.e. if the supercomputer I'm using doesn't have enough memory per core to store even a single row of the factored matrix, then I'm out of luck?
On Fri, Jan 31, 2014 at 9:51 PM, Jed Brown <[email protected]> wrote:

> David Liu <[email protected]> writes:
>
> > Hi, I'm solving a 3d problem with mumps. When I increased the grid size
> > to 70x60x20 with 6 unknowns per point, I started noticing that the
> > program was crashing at runtime at the factoring stage, with the mumps
> > error code:
> >
> > -17 The internal send buffer that was allocated dynamically by MUMPS on
> > the processor is too small. The user should increase the value of
> > ICNTL(14) before calling MUMPS again.
> >
> > However, when I increase the grid spacing in the z direction by about
> > 50%, this crash does not happen.
> >
> > Why would the amount of memory an LU factorization uses depend on an
> > overall numerical factor (for part of the matrix at least) like this?
>
> I'm not sure exactly what you're asking, but the complexity of direct
> solves depends on the minimal vertex separators in the sparse
> matrix/graph. Yours will be s=60*20*6 (more if your stencil needs
> second neighbors). The memory usage scales with s^2 and the
> factorization time scales with s^3.
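As a back-of-envelope check on the separator argument in the quoted reply, here is a minimal sketch. It assumes the separator is the smallest grid cross-section (the 60x20 face) times the 6 unknowns per point, and double-precision (8-byte) entries; the constants hidden in the O(s^2) and O(s^3) estimates are ignored.

```python
# Rough cost estimate for a sparse direct solve on a 70x60x20 grid
# with 6 unknowns per point, based on the minimal vertex separator.
# Assumption: the separator is the smallest grid face (60x20) times
# the number of unknowns per point; constants in O(.) are dropped.
nx, ny, nz = 70, 60, 20
dofs = 6

s = ny * nz * dofs            # separator size: 60*20*6 = 7200
mem_entries = s ** 2          # dense factor block on the separator, O(s^2)
flops = s ** 3                # factorization work, O(s^3)

bytes_per_entry = 8           # double precision
print(f"s = {s}")
print(f"memory ~ {mem_entries * bytes_per_entry / 1e9:.2f} GB")
print(f"work   ~ {flops:.2e} flops")
```

This gives s = 7200, roughly 0.4 GB for the separator's dense block alone, and on the order of 4e11 flops for the factorization, which is consistent with the problem becoming memory-sensitive at this grid size.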
