Re: [petsc-users] PETSc interface to MueLu

2016-01-28 Thread victor.magri
Dear Hong, According to this link http://www.fastmath-scidac.org/software/mlmuelu.html MueLu is the successor to ML and should support a larger number of scalar types. Also, according to this presentation https://cfwebprod.sandia.gov/cfdocs/CompResearch/docs/MueLuOverview_TUG2013.pdf

Re: [petsc-users] repartition for dynamic load balancing

2016-01-28 Thread Barry Smith
> On Jan 28, 2016, at 11:11 AM, Xiangdong wrote: > > Yes, it can be either DMDA or DMPlex. For example, I have a 1D DMDA with Nx=10 > and np=2. At the beginning each processor owns 5 cells. After some simulation > time, I found that repartitioning the 10 cells into 3 and 7 is

Re: [petsc-users] PETSc interface to MueLu

2016-01-28 Thread Hong
Victor, What are the differences between MueLu and ML? Hong On Thu, Jan 28, 2016 at 3:04 AM, wrote: > Dear PETSc developers, > > Is it possible to create an interface for MueLu (given its dependencies on > other Trilinos packages)? Do you plan to do that in the

Re: [petsc-users] repartition for dynamic load balancing

2016-01-28 Thread Matthew Knepley
On Thu, Jan 28, 2016 at 11:36 AM, Xiangdong wrote: > What functions/tools can I use for dynamic migration in the DMPlex framework? > In this paper, http://arxiv.org/abs/1506.06194, we explain how to use the DMPlexMigrate() function to redistribute data. In the future, it's likely

Re: [petsc-users] repartition for dynamic load balancing

2016-01-28 Thread Matthew Knepley
On Thu, Jan 28, 2016 at 1:37 PM, Dave May wrote: > > > On Thursday, 28 January 2016, Matthew Knepley wrote: > >> On Thu, Jan 28, 2016 at 11:36 AM, Xiangdong wrote: >> >>> What functions/tools can I use for dynamic migration in

[petsc-users] repartition for dynamic load balancing

2016-01-28 Thread Dave May
On Thursday, 28 January 2016, Matthew Knepley wrote: > On Thu, Jan 28, 2016 at 11:36 AM, Xiangdong wrote: > >> What functions/tools can I use for dynamic migration in the DMPlex framework? >> > > In this

[petsc-users] MatCreateSeqDense

2016-01-28 Thread Bhalla, Amneet Pal S
Hi Folks, Is there a way to get back the user-allocated raw data pointer (column-major order) passed to MatCreateSeqDense() from the Mat object? Thanks, --Amneet

Re: [petsc-users] MatCreateSeqDense

2016-01-28 Thread Matthew Knepley
On Thu, Jan 28, 2016 at 4:53 PM, Bhalla, Amneet Pal S wrote: > Hi Folks, > > Is there a way to get back the user-allocated raw data pointer > (column-major order) passed to MatCreateSeqDense() from the Mat > object? >
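A minimal sketch, not taken from the (truncated) reply: the routine I would expect it to point at is MatDenseGetArray(), which hands back the column-major storage of a SeqDense matrix; when the matrix was created around a user-supplied buffer, the returned pointer is that same buffer. Sizes and fill values below are illustrative only.

#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat          A;
  PetscScalar *user_data, *array;
  PetscInt     m = 4, n = 3, i;

  PetscInitialize(&argc, &argv, NULL, NULL);

  /* user-allocated column-major buffer */
  PetscMalloc1(m * n, &user_data);
  for (i = 0; i < m * n; i++) user_data[i] = (PetscScalar)i;

  /* the Mat wraps this buffer; it does not copy it or take ownership */
  MatCreateSeqDense(PETSC_COMM_SELF, m, n, user_data, &A);
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

  MatDenseGetArray(A, &array);   /* array == user_data in this case */
  /* array[i + j*m] is A(i,j) */
  MatDenseRestoreArray(A, &array);

  MatDestroy(&A);
  PetscFree(user_data);          /* the caller still owns the buffer */
  PetscFinalize();
  return 0;
}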

Re: [petsc-users] repartition for dynamic load balancing

2016-01-28 Thread Xiangdong
I am thinking of using ParMETIS to repartition the mesh (based on newly updated vertex weights), and then some functions (maybe DMPlexMigrate) to redistribute the data. I will look into Matt's paper to see whether it is possible. Thanks. Xiangdong On Thu, Jan 28, 2016 at 2:41 PM, Matthew
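A sketch of the workflow being described, under the assumption of a PETSc build with ParMETIS (--download-parmetis) and a version that exposes the PetscPartitioner object: DMPlexDistribute() re-runs the partitioner on an already distributed mesh and returns both the new DM and a PetscSF describing the point migration, while DMPlexMigrate() is the lower-level routine driven by such an SF. Setting ParMETIS vertex weights is not shown; error checking is omitted for brevity.

#include <petscdmplex.h>

/* dm_in: the currently distributed mesh; dm_out: the rebalanced mesh (may be
   NULL on a single process, where no redistribution happens) */
static PetscErrorCode Rebalance(DM dm_in, DM *dm_out)
{
  PetscPartitioner part;
  PetscSF          migrationSF;

  PetscFunctionBeginUser;
  DMPlexGetPartitioner(dm_in, &part);
  PetscPartitionerSetType(part, PETSCPARTITIONERPARMETIS);
  DMPlexDistribute(dm_in, 0, &migrationSF, dm_out);   /* 0 = no overlap */
  if (*dm_out) {
    /* migrationSF describes which points moved where; it can drive
       DMPlexMigrate / PetscSF broadcasts to move field data attached
       to the old mesh onto the new distribution */
    PetscSFDestroy(&migrationSF);
  }
  PetscFunctionReturn(0);
}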

Re: [petsc-users] MatCreateSeqDense

2016-01-28 Thread Barry Smith
> On Jan 28, 2016, at 6:23 PM, Bhalla, Amneet Pal S > wrote: > > Thanks! > > Another related question: If I do something like this: > > double* data; > > // do stuff with data > data[i] = ... > > Mat A; > MatCreateSeqDense(...,data,..., ); > > // do more stuff with

Re: [petsc-users] MatCreateSeqDense

2016-01-28 Thread Bhalla, Amneet Pal S
Thanks! Another related question: If I do something like this: double* data; // do stuff with data data[i] = ... Mat A; MatCreateSeqDense(...,data,..., ); // do more stuff with data data[i] = .. Now would the matrix A reflect the change (i.e., updated A[i][j]) without making an explicit call
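A small check of the question above, written as a sketch rather than taken from the (truncated) answer: a SeqDense matrix created this way stores its entries directly in the user buffer, so later writes to the buffer are visible through the Mat; the MatAssemblyBegin/End pair after the modification is included here only as the conservative way to flag the change.

#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat         A;
  PetscScalar data[4] = {1.0, 2.0, 3.0, 4.0};   /* 2x2, column-major */
  PetscScalar v;
  PetscInt    row = 0, col = 0;

  PetscInitialize(&argc, &argv, NULL, NULL);

  MatCreateSeqDense(PETSC_COMM_SELF, 2, 2, data, &A);
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

  data[0] = 42.0;                               /* modify the user buffer */
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);      /* conservatively flag the change */
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

  MatGetValues(A, 1, &row, 1, &col, &v);        /* v is now 42.0 */
  PetscPrintf(PETSC_COMM_SELF, "A(0,0) = %g\n", (double)PetscRealPart(v));

  MatDestroy(&A);
  PetscFinalize();
  return 0;
}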

[petsc-users] PETSc interface to MueLu

2016-01-28 Thread victor.magri
Dear PETSc developers, is it possible to create an interface for MueLu (given its dependencies on other Trilinos packages)? Do you plan to do that in the future? Thank you! -- Victor A. P. Magri - PhD student Dept. of Civil, Environmental and Architectural Eng. University of Padova Via
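For reference, the existing Trilinos interface in PETSc is to ML through the PCML preconditioner; a MueLu interface would presumably be exposed the same way (a -pc_type muelu option is purely hypothetical here). The sketch below assumes a PETSc build configured with --download-ml and uses a 1-D Laplacian only as a placeholder problem; error checking is omitted.

#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat      A;
  Vec      x, b;
  KSP      ksp;
  PC       pc;
  PetscInt i, n = 100, Istart, Iend;

  PetscInitialize(&argc, &argv, NULL, NULL);

  /* placeholder operator: 1-D Laplacian */
  MatCreateAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, n, n, 3, NULL, 3, NULL, &A);
  MatGetOwnershipRange(A, &Istart, &Iend);
  for (i = Istart; i < Iend; i++) {
    if (i > 0)   MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES);
    if (i < n-1) MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES);
    MatSetValue(A, i, i, 2.0, INSERT_VALUES);
  }
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
  MatCreateVecs(A, &x, &b);
  VecSet(b, 1.0);

  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, A);
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCML);        /* algebraic multigrid from Trilinos/ML */
  KSPSetFromOptions(ksp);     /* or select at run time with -pc_type ml */
  KSPSolve(ksp, b, x);

  KSPDestroy(&ksp); MatDestroy(&A); VecDestroy(&x); VecDestroy(&b);
  PetscFinalize();
  return 0;
}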

Re: [petsc-users] MPI_AllReduce error with -xcore-avx2 flags

2016-01-28 Thread Bikash Kanungo
Hi Jose, Here is the complete error message: [0]PETSC ERROR: - Error Message -- [0]PETSC ERROR: Invalid argument [0]PETSC ERROR: Scalar value must be same on all processes, argument # 3 [0]PETSC ERROR: See

Re: [petsc-users] MPI_AllReduce error with -xcore-avx2 flags

2016-01-28 Thread Jose E. Roman
> On 28 Jan 2016, at 9:13, Bikash Kanungo wrote: > > Hi Jose, > > Here is the complete error message: > > [0]PETSC ERROR: - Error Message > -- > [0]PETSC ERROR: Invalid argument > [0]PETSC

Re: [petsc-users] MPI_AllReduce error with -xcore-avx2 flags

2016-01-28 Thread Bikash Kanungo
Yeah, I suspected linear dependence. But I was puzzled by the error occurring on one machine and not the other. Even on the machine where it failed, it failed for some runs and passed successfully for others. So it suggests that the vector norm is almost zero in certain cases (i.e., in the runs
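For illustration only (not code from the thread): the failure mode being discussed is typically a division by a numerically zero norm when a vector is linearly dependent on the previous ones, which can produce Inf/NaN values that then differ between processes and trip the "Scalar value must be same on all processes" check. A guard of the following kind is one way to detect it; the tolerance and the handling of the dependent vector are assumptions.

#include <petscvec.h>

/* Normalize v only if its norm is safely above tol; otherwise treat it as
   linearly dependent and zero it out.  VecNorm is collective, so nrm is the
   same on every rank and the branch is taken consistently. */
static PetscErrorCode SafeNormalize(Vec v, PetscReal tol, PetscBool *ok)
{
  PetscReal nrm;

  PetscFunctionBeginUser;
  VecNorm(v, NORM_2, &nrm);
  *ok = (PetscBool)(nrm > tol);
  if (*ok) VecScale(v, 1.0 / nrm);
  else     VecSet(v, 0.0);
  PetscFunctionReturn(0);
}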

Re: [petsc-users] repartition for dynamic load balancing

2016-01-28 Thread Xiangdong
What functions/tools can I use for dynamic migration in the DMPlex framework? Can you also name some external mesh management systems? Thanks. Xiangdong On Thu, Jan 28, 2016 at 12:21 PM, Barry Smith wrote: > > > On Jan 28, 2016, at 11:11 AM, Xiangdong

Re: [petsc-users] repartition for dynamic load balancing

2016-01-28 Thread Xiangdong
Yes, it can be either DMDA or DMPlex. For example, I have a 1D DMDA with Nx=10 and np=2. At the beginning each processor owns 5 cells. After some simulation time, I found that repartitioning the 10 cells into 3 and 7 is better for load balancing. Is there an easy/efficient way to migrate data from one
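One way to realize the 3/7 split described above, sketched under the assumption of exactly two MPI ranks: DMDACreate1d() accepts an lx[] array giving the number of cells each rank should own, so a second DMDA can be built with the new ownership and the field copied over; the copy itself (e.g. via a VecScatter between the two global vectors) is only indicated, and the 3/7 numbers are taken from the email.

#include <petscdmda.h>

int main(int argc, char **argv)
{
  DM       da_old, da_new;
  Vec      u_old, u_new;
  PetscInt Nx = 10;
  PetscInt lx_new[2] = {3, 7};   /* desired ownership after repartitioning */

  PetscInitialize(&argc, &argv, NULL, NULL);

  /* original layout: with two ranks PETSc splits the 10 cells as 5 + 5 */
  DMDACreate1d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, Nx, 1, 1, NULL, &da_old);
  DMSetUp(da_old);
  DMCreateGlobalVector(da_old, &u_old);

  /* new layout with explicit local sizes 3 and 7 (lx must sum to Nx) */
  DMDACreate1d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, Nx, 1, 1, lx_new, &da_new);
  DMSetUp(da_new);
  DMCreateGlobalVector(da_new, &u_new);

  /* ... build a VecScatter from u_old's layout to u_new's layout and copy ... */

  VecDestroy(&u_old); VecDestroy(&u_new);
  DMDestroy(&da_old); DMDestroy(&da_new);
  PetscFinalize();
  return 0;
}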