Sounds good, thanks. I’ve also been looking into Elemental, but the documentation seems outdated and I can’t find good examples on how to use it. I have the LLNL fork installed.
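For reference, a minimal sketch of driving Elemental through PETSc's MATELEMENTAL type (this assumes PETSc itself was configured with Elemental support, e.g. --download-elemental; the size N and the values below are placeholders). The main difference from MATDENSE is that ownership follows Elemental's element-cyclic layout, so the locally owned entries come from MatGetOwnershipIS() rather than MatGetOwnershipRange():

#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat             A;
  IS              rows, cols;
  const PetscInt *ridx, *cidx;
  PetscInt        nr, nc, i, j, N = 16; /* placeholder global size */

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
  PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, N, N));
  PetscCall(MatSetType(A, MATELEMENTAL));
  PetscCall(MatSetFromOptions(A));
  PetscCall(MatSetUp(A));

  /* Elemental uses a 2D element-cyclic layout, so the owned entries are
     described by index sets rather than a contiguous row range. */
  PetscCall(MatGetOwnershipIS(A, &rows, &cols));
  PetscCall(ISGetLocalSize(rows, &nr));
  PetscCall(ISGetLocalSize(cols, &nc));
  PetscCall(ISGetIndices(rows, &ridx));
  PetscCall(ISGetIndices(cols, &cidx));
  for (i = 0; i < nr; i++) {
    for (j = 0; j < nc; j++) {
      /* Placeholder values; insert only locally owned entries. */
      PetscCall(MatSetValue(A, ridx[i], cidx[j], (PetscScalar)(ridx[i] + cidx[j]), INSERT_VALUES));
    }
  }
  PetscCall(ISRestoreIndices(rows, &ridx));
  PetscCall(ISRestoreIndices(cols, &cidx));
  PetscCall(ISDestroy(&rows));
  PetscCall(ISDestroy(&cols));
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

  PetscCall(MatDestroy(&A));
  PetscCall(PetscFinalize());
  return 0;
}

The small tests under src/mat/tests in the PETSc source tree are probably the closest thing to worked MATELEMENTAL examples at the moment.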
Thanks,
-Damyn

> On Oct 28, 2023, at 8:56 AM, Matthew Knepley <knep...@gmail.com> wrote:
>
> On Fri, Oct 27, 2023 at 3:54 PM Damyn Chipman <damynchip...@u.boisestate.edu> wrote:
>> Yeah, I’ll make an issue and use a modified version of this test routine.
>>
>> Does anything change if I will be using MATSCALAPACK matrices instead of the built in MATDENSE?
>
> No, that is likely worse.
>
>> Like I said, I will be computing Schur complements and need to use a parallel and dense matrix format.
>
> I do not understand the communication pattern, but it is possible that Elemental would be slightly faster since it has some cool built-in communication operations, however it might be more programming.
>
> Thanks,
>
>    Matt
>
>> -Damyn
>>
>>> On Oct 26, 2023, at 10:01 AM, Matthew Knepley <knep...@gmail.com> wrote:
>>>
>>> On Wed, Oct 25, 2023 at 11:55 PM Damyn Chipman <damynchip...@u.boisestate.edu> wrote:
>>>> Great thanks, that seemed to work well. This is something my algorithm will do fairly often (“elevating” a node’s communicator to a communicator that includes siblings). The matrices formed are dense but low rank. With MatCreateSubMatrix, it appears I do a lot of copying from one Mat to another. Is there a way to do it with array copying or pointer movement instead of copying entries?
>>>
>>> We could make a fast path for dense that avoids MatSetValues(). Can you make an issue for this? The number one thing that would make this faster is to contribute a small test. Then we could run it continually when putting in the fast path to make sure we are preserving correctness.
>>>
>>> Thanks,
>>>
>>>    Matt
>>>
>>>> -Damyn
>>>>
>>>>> On Oct 24, 2023, at 9:51 PM, Jed Brown <j...@jedbrown.org> wrote:
>>>>>
>>>>> You can place it in a parallel Mat (that has rows or columns on only one rank or a subset of ranks) and then MatCreateSubMatrix with all new rows/columns on a different rank or subset of ranks.
>>>>>
>>>>> That said, you usually have a function that assembles the matrix and you can just call that on the new communicator.
>>>>>
>>>>> Damyn Chipman <damynchip...@u.boisestate.edu> writes:
>>>>>
>>>>>> Hi PETSc developers,
>>>>>>
>>>>>> In short, my question is this: Does PETSc provide a way to move or copy an object (say a Mat) from one communicator to another?
>>>>>>
>>>>>> The more detailed scenario is this: I’m working on a linear algebra solver on quadtree meshes (i.e., p4est). I use communicator subsets in order to facilitate communication between siblings or nearby neighbors. When performing linear algebra across siblings (a group of 4), I need to copy a node’s data (i.e., a Mat object) from a sibling’s communicator to the communicator that includes the four siblings. From what I can tell, I can only copy a PETSc object onto the same communicator.
>>>>>>
>>>>>> My current approach will be to copy the raw data from the Mat on one communicator to a new Mat on the new communicator, but I wanted to see if there is a more “elegant” approach within PETSc.
>>>>>>
>>>>>> Thanks in advance,
>>>>>>
>>>>>> Damyn Chipman
>>>>>> Boise State University
>>>>>> PhD Candidate
>>>>>> Computational Sciences and Engineering
>>>>>> damynchip...@u.boisestate.edu
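For reference, a minimal sketch of the raw-copy approach described in the original message above, assuming a recent PETSc (PetscCall()/PETSC_SUCCESS error handling), a sequential MATSEQDENSE source owned by rank root of the larger communicator bigcomm, and the default leading dimension (otherwise query MatDenseGetLDA()). The helper name CopyDenseToComm is made up for illustration:

#include <petscmat.h>

/* Hypothetical helper: copy a sequential dense Mat that lives on rank `root`
   into a new parallel MATDENSE distributed over all of `bigcomm`. */
static PetscErrorCode CopyDenseToComm(Mat Aseq, PetscMPIInt root, MPI_Comm bigcomm, Mat *Apar)
{
  PetscInt     M = 0, N = 0, rstart, rend, i, j;
  PetscScalar *buf;
  PetscMPIInt  rank, cnt;

  PetscFunctionBeginUser;
  PetscCallMPI(MPI_Comm_rank(bigcomm, &rank));
  if (rank == root) PetscCall(MatGetSize(Aseq, &M, &N));
  PetscCallMPI(MPI_Bcast(&M, 1, MPIU_INT, root, bigcomm));
  PetscCallMPI(MPI_Bcast(&N, 1, MPIU_INT, root, bigcomm));

  /* Ship the raw column-major data of the sequential matrix to every rank. */
  PetscCall(PetscMalloc1(M * N, &buf));
  if (rank == root) {
    const PetscScalar *a;
    PetscCall(MatDenseGetArrayRead(Aseq, &a));
    PetscCall(PetscArraycpy(buf, a, M * N));
    PetscCall(MatDenseRestoreArrayRead(Aseq, &a));
  }
  PetscCall(PetscMPIIntCast(M * N, &cnt));
  PetscCallMPI(MPI_Bcast(buf, cnt, MPIU_SCALAR, root, bigcomm));

  /* Create the parallel dense matrix on the larger communicator and fill
     the locally owned rows from the broadcast buffer. */
  PetscCall(MatCreateDense(bigcomm, PETSC_DECIDE, PETSC_DECIDE, M, N, NULL, Apar));
  PetscCall(MatGetOwnershipRange(*Apar, &rstart, &rend));
  for (i = rstart; i < rend; i++) {
    for (j = 0; j < N; j++) PetscCall(MatSetValue(*Apar, i, j, buf[i + j * M], INSERT_VALUES));
  }
  PetscCall(MatAssemblyBegin(*Apar, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(*Apar, MAT_FINAL_ASSEMBLY));
  PetscCall(PetscFree(buf));
  PetscFunctionReturn(PETSC_SUCCESS);
}

Note this broadcasts the whole matrix to every rank, which is fine for small low-rank sibling blocks but does not scale; filling the local block through MatDenseGetArrayWrite() instead of the MatSetValue() loop would also avoid the insertion overhead discussed earlier in the thread.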
>>>>
>>>
>>>
>>> --
>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
>>> -- Norbert Wiener
>>>
>>> https://www.cse.buffalo.edu/~knepley/
>
>
> --
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
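Lastly, a sketch of the approach Jed describes above: put the data in a parallel Mat on the sibling communicator with all of its rows on one rank, then let MatCreateSubMatrix() move it to the layout you actually want. The helper name ElevateDense is made up, and the iscol convention used here (each rank lists the columns of its diagonal block in the new matrix) is the one documented on the MatCreateSubMatrix manual page:

#include <petscmat.h>

/* Hypothetical helper: given a parallel MATDENSE whose rows currently all sit
   on one rank of its communicator, produce a copy with the usual even
   row/column split, letting MatCreateSubMatrix() do the data movement. */
static PetscErrorCode ElevateDense(Mat Askew, Mat *Anew)
{
  MPI_Comm    comm;
  PetscLayout rmap, cmap;
  IS          isrow, iscol;
  PetscInt    M, N, rstart, rend, cstart, cend;

  PetscFunctionBeginUser;
  PetscCall(PetscObjectGetComm((PetscObject)Askew, &comm));
  PetscCall(MatGetSize(Askew, &M, &N));

  /* Work out the PETSC_DECIDE-style target layout for rows and columns. */
  PetscCall(PetscLayoutCreate(comm, &rmap));
  PetscCall(PetscLayoutSetSize(rmap, M));
  PetscCall(PetscLayoutSetUp(rmap));
  PetscCall(PetscLayoutGetRange(rmap, &rstart, &rend));
  PetscCall(PetscLayoutCreate(comm, &cmap));
  PetscCall(PetscLayoutSetSize(cmap, N));
  PetscCall(PetscLayoutSetUp(cmap));
  PetscCall(PetscLayoutGetRange(cmap, &cstart, &cend));

  /* Each rank asks for the rows it should own in the new matrix, plus the
     columns of its "diagonal block". All communication happens inside
     MatCreateSubMatrix(). */
  PetscCall(ISCreateStride(comm, rend - rstart, rstart, 1, &isrow));
  PetscCall(ISCreateStride(comm, cend - cstart, cstart, 1, &iscol));
  PetscCall(MatCreateSubMatrix(Askew, isrow, iscol, MAT_INITIAL_MATRIX, Anew));

  PetscCall(ISDestroy(&isrow));
  PetscCall(ISDestroy(&iscol));
  PetscCall(PetscLayoutDestroy(&rmap));
  PetscCall(PetscLayoutDestroy(&cmap));
  PetscFunctionReturn(PETSC_SUCCESS);
}

Askew itself would be assembled beforehand with something like MatCreateDense(sibcomm, rank == root ? M : 0, PETSC_DECIDE, M, N, localdata, &Askew) (names are placeholders); the upside over a hand-rolled broadcast is that PETSc handles the communication, which is the code path the dense fast path discussed above would presumably speed up.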