Dear Hong,
According to this link
http://www.fastmath-scidac.org/software/mlmuelu.html [1]
MueLu is the successor to ML and should support a larger number of scalar
types. Also according to this presentation
https://cfwebprod.sandia.gov/cfdocs/CompResearch/docs/MueLuOverview_TUG2013.pdf
> On Jan 28, 2016, at 11:11 AM, Xiangdong wrote:
>
> Yes, it can be either DMDA or DMPlex. For example, I have 1D DMDA with Nx=10
> and np=2. At the beginning each processor owns 5 cells. After some simulation
> time, I found that repartitioning the 10 cells into 3 and 7 is
Victor,
What are the differences between MueLu and ML?
Hong
On Thu, Jan 28, 2016 at 3:04 AM, wrote:
> Dear PETSc developers,
>
> is it possible to create an interface for MueLu (given its dependencies on
> other Trilinos packages)? Do you plan to do that in the
On Thu, Jan 28, 2016 at 11:36 AM, Xiangdong wrote:
> What functions/tools can I use for dynamic migration in DMPlex framework?
>
In this paper, http://arxiv.org/abs/1506.06194, we explain how to use the
DMPlexMigrate() function to redistribute data.
In the future, it's likely
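The redistribution described in the paper goes through DMPlexDistribute(), which both computes a new partition and returns a PetscSF describing the point migration (the same star forest that DMPlexMigrate() consumes). A minimal sketch, assuming an already-created DMPlex and the current PETSc error-handling macros (older releases use ierr/CHKERRQ):

```c
#include <petscdmplex.h>

/* Redistribute an existing DMPlex across the communicator's processes.
   A sketch only, not a complete application. */
static PetscErrorCode RedistributeMesh(DM *dm)
{
  DM      dmDist = NULL;
  PetscSF sf     = NULL;

  PetscFunctionBeginUser;
  /* overlap = 0: no ghost cells. The returned PetscSF records the
     point migration and can drive data movement as in DMPlexMigrate(). */
  PetscCall(DMPlexDistribute(*dm, 0, &sf, &dmDist));
  if (dmDist) { /* NULL when no redistribution occurred */
    PetscCall(PetscSFDestroy(&sf));
    PetscCall(DMDestroy(dm));
    *dm = dmDist;
  }
  PetscFunctionReturn(PETSC_SUCCESS);
}
```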
On Thu, Jan 28, 2016 at 1:37 PM, Dave May wrote:
>
>
> On Thursday, 28 January 2016, Matthew Knepley wrote:
>
>> On Thu, Jan 28, 2016 at 11:36 AM, Xiangdong wrote:
>>
>>> What functions/tools can I use for dynamic migration in
On Thursday, 28 January 2016, Matthew Knepley wrote:
> On Thu, Jan 28, 2016 at 11:36 AM, Xiangdong wrote:
>
>> What functions/tools can I use for dynamic migration in DMPlex framework?
>>
>
> In this
Hi Folks,
Is there a way to get back the user-allocated raw data pointer (in
column-major order) that was passed to MatCreateSeqDense() from the Mat object?
Thanks,
--Amneet
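One way to recover the storage is MatDenseGetArray(): for a matrix created with MatCreateSeqDense() and a non-NULL user array, it returns that same column-major buffer. A minimal sketch, assuming an existing Mat A:

```c
/* Sketch: recover the column-major storage of a SeqDense Mat A.
   For a Mat created via MatCreateSeqDense(comm, m, n, data, &A) with
   non-NULL data, this returns the user-supplied buffer. */
PetscScalar *a;
PetscInt     lda;

PetscCall(MatDenseGetArray(A, &a));
PetscCall(MatDenseGetLDA(A, &lda));
/* entry (i, j) lives at a[i + j*lda] */
PetscCall(MatDenseRestoreArray(A, &a));
```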
On Thu, Jan 28, 2016 at 4:53 PM, Bhalla, Amneet Pal S
wrote:
> Hi Folks,
>
> Is there a way to get back the user-allocated raw data pointer (in
> column-major order) that was passed to MatCreateSeqDense() from the Mat
> object?
>
I am thinking of using ParMETIS to repartition the mesh (based on newly
updated weights for the vertices) and then using some functions (maybe
DMPlexMigrate) to redistribute the data. I will look into Matt's paper to
see whether this is possible.
Thanks.
Xiangdong
On Thu, Jan 28, 2016 at 2:41 PM, Matthew
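That plan maps onto the DMPlex partitioner interface: select ParMETIS as the PetscPartitioner, then redistribute with DMPlexDistribute(). A hedged sketch, assuming PETSc was configured with ParMETIS (e.g. --download-parmetis) and dm is an existing DMPlex:

```c
/* Sketch: choose ParMETIS for the mesh partition, then redistribute. */
PetscPartitioner part;
DM               dmDist = NULL;

PetscCall(DMPlexGetPartitioner(dm, &part));
PetscCall(PetscPartitionerSetType(part, PETSCPARTITIONERPARMETIS));
PetscCall(DMPlexDistribute(dm, 0, NULL, &dmDist));
if (dmDist) { /* NULL when no redistribution occurred */
  PetscCall(DMDestroy(&dm));
  dm = dmDist;
}
```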
> On Jan 28, 2016, at 6:23 PM, Bhalla, Amneet Pal S
> wrote:
>
> Thanks!
>
> Another related question: If I do something like this:
>
> double* data;
>
> // do stuff with data
> data[i] = ...
>
> Mat A;
> MatCreateSeqDense(...,data,..., );
>
> // do more stuff with
Thanks!
Another related question: If I do something like this:
double* data;
// do stuff with data
data[i] = ...
Mat A;
MatCreateSeqDense(...,data,..., );
// do more stuff with data
data[i] = ..
Now, would the matrix A reflect the change (i.e., an updated A[i][j]) without
making an explicit call
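For reference, MatCreateSeqDense() with a non-NULL array shares the caller's storage rather than copying it, so in-place writes to data are visible through A. A minimal sketch (the 2x2 values are hypothetical):

```c
/* Sketch: the Mat shares the caller's column-major storage. */
PetscScalar data[4] = {1.0, 2.0, 3.0, 4.0}; /* columns (1,2) and (3,4) */
Mat         A;

PetscCall(MatCreateSeqDense(PETSC_COMM_SELF, 2, 2, data, &A));
data[0] = 10.0; /* also changes A(0,0) */
/* Assembly remains the safe convention before using A elsewhere */
PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
PetscCall(MatDestroy(&A));
```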
Dear PETSc developers,
is it possible to create an interface for MueLu (given its dependencies
on other Trilinos packages)? Do you plan to do that in the future?
Thank you!
--
Victor A. P. Magri - PhD student
Dept. of Civil, Environmental and Architectural Eng.
University of Padova
Via
Hi Jose,
Here is the complete error message:
[0]PETSC ERROR: - Error Message
--
[0]PETSC ERROR: Invalid argument
[0]PETSC ERROR: Scalar value must be same on all processes, argument # 3
[0]PETSC ERROR: See
> On 28 Jan 2016, at 9:13, Bikash Kanungo wrote:
>
> Hi Jose,
>
> Here is the complete error message:
>
> [0]PETSC ERROR: - Error Message
> --
> [0]PETSC ERROR: Invalid argument
> [0]PETSC
Yeah, I suspected linear dependence. But I was puzzled by the error
occurring on one machine and not the other. Even on the machine where it
failed, it failed for some runs and passed for others. This suggests that
the vector norm is almost zero in certain cases (i.e., in the runs
What functions/tools can I use for dynamic migration in DMPlex framework?
Can you also name some external mesh management systems? Thanks.
Xiangdong
On Thu, Jan 28, 2016 at 12:21 PM, Barry Smith wrote:
>
> > On Jan 28, 2016, at 11:11 AM, Xiangdong
Yes, it can be either DMDA or DMPlex. For example, I have 1D DMDA with
Nx=10 and np=2. At the beginning each processor owns 5 cells. After some
simulation time, I found that repartitioning the 10 cells into 3 and 7 is
better for load balancing. Is there an easy/efficient way to migrate data
from one