And for those who've said that the future is exclusively GPUs, the most efficient HPC machine in the world right now is CPU only.
https://www.top500.org/green500/lists/2019/11/
On Nov 18, 2019 19:52, Jed Brown wrote:
This and OpenMP target are the recommended models for Aurora.
On Nov 18, 2019 19:13, "Balay, Satish via petsc-dev" wrote:
Ah - ok - so we need to use this oneapi for aurora..
https://hothardware.com/news/intel-ponte-vecchio-7nm-exascale-gpu-for-hpc-market
Satish
Matthew Knepley writes:
> On Wed, Oct 23, 2019 at 4:59 PM Jed Brown wrote:
>
>> Matthew Knepley writes:
>>
>> > That is an unreliable check for Z. You would not eliminate the case
>> > where you give --with-Z, but the check fails, so Z is not available,
>> > but you do not find out until
"Smith, Barry F. via petsc-dev" writes:
> Some idiot logged what they did, but not why they did it.
>
> commit bf108f309acab50613e150419c680842cf4b8a05 (HEAD)
> Author: Barry Smith
> Date: Thu Mar 18 20:40:53 2004 -0600
>
> bk-changeset-1.2063.1.1
>
"Balay, Satish via petsc-dev" writes:
> Just a reminder:
>
> We've had regular changes to CI - [.gitlab-ci.yaml] - so it's generally
> a good idea to rebase branches to latest maint/master before starting
> any new test pipeline.
>
> One can always check if .gitlab-ci.yaml was updated with:
>
>
"Smith, Barry F." writes:
>> On Oct 26, 2019, at 9:09 AM, Jed Brown wrote:
>>
>> "Smith, Barry F." writes:
>>
>>> The proposed fix is #if defined(PETSC_USE_AVX512_KERNELS) && && && && &&
>>> in https://gitlab.com/petsc/petsc/merge_requests/2213/diffs
>>
>> Looks fine; approved.
>>
>>>
"Smith, Barry F." writes:
> The proposed fix is #if defined(PETSC_USE_AVX512_KERNELS) && && && && &&
> in https://gitlab.com/petsc/petsc/merge_requests/2213/diffs
Looks fine; approved.
> but note that PETSC_USE_AVX512_KERNELS does not even do a configure check to
> make sure it is valid.
"Smith, Barry F. via petsc-dev" writes:
>This needs to be fixed properly with a configure test(s) and not with huge
> and inconsistent checks like this
>
> #if defined(PETSC_HAVE_IMMINTRIN_H) && defined(__AVX512F__) &&
> defined(PETSC_USE_REAL_DOUBLE) && !defined(PETSC_USE_COMPLEX) &&
>
IMO, Figures 2 and 7+ are more interesting when the x axis (vector size)
is replaced by execution time. We don't scale by fixing the resource
and increasing the problem size, we choose the global problem size based
on accuracy/model complexity and choose a Pareto tradeoff of execution
time with
Matthew Knepley writes:
> That is an unreliable check for Z. You would not eliminate the case
> where you give --with-Z, but the check fails, so Z is not available,
> but you do not find out until checking X or Y.
You can verify that Z works in a fraction of a second, but building X
may take
Matthew Knepley writes:
> On Wed, Oct 23, 2019 at 2:27 PM Jed Brown wrote:
>
>> Matthew Knepley via petsc-dev writes:
>>
>> > On Wed, Oct 23, 2019 at 11:24 AM Faibussowitsch, Jacob via petsc-dev <
>> > petsc-dev@mcs.anl.gov> wrote:
>> >> As I am largely unfamiliar with the internals of the
Matthew Knepley via petsc-dev writes:
> On Wed, Oct 23, 2019 at 11:24 AM Faibussowitsch, Jacob via petsc-dev <
> petsc-dev@mcs.anl.gov> wrote:
>> As I am largely unfamiliar with the internals of the configure process,
>> this is potentially more of an involved change than I am imagining, given
Switch to search (we really don't need to invoke Python for this) and slap a %
on the end of anything where there might be extra.
make -f gmakefile test globsearch='mat_tests-ex128_1
mat_tests-ex37_nsize-2_mat_type-mpibaij_mat_block_size%'
Scott Kruger writes:
> When we created the map of
Yeah, it's missing the numeric block size. The following works
/usr/bin/make -f gmakefile test globsearch='mat_tests-ex128_1
mat_tests-ex37_nsize-2_mat_type-mpibaij_mat_block_size-1'
Also, globsearch can be replaced by search in this usage.
"Smith, Barry F. via petsc-dev" writes:
> May
All "developers" are listed as able to grant (optional) approvals --
approval from codeowners/integrators is still needed regardless of those
optional approvals. We should perhaps remove that because I don't know
a way to have some able to approve without the notification problem you
mention
Pierre Jolivet via petsc-dev writes:
> On Oct 20, 2019, at 6:07 PM, "Smith, Barry F." wrote:
>
>>
>> The reason the code works this way is that normally
>> -ksp_error_if_not_converged is propagated into the inner (and innerer)
>> solves and normally it is desirable that these inner solves
Pierre Jolivet via petsc-dev writes:
>> On 21 Oct 2019, at 7:52 AM, Smith, Barry F. wrote:
>>
>>
>>
>>> On Oct 21, 2019, at 12:23 AM, Pierre Jolivet
>>> wrote:
>>>
>>>
>>>
On 21 Oct 2019, at 7:11 AM, Smith, Barry F. wrote:
> On Oct 20, 2019, at 11:52 PM,
Pierre Jolivet via petsc-dev writes:
>> On 16 Oct 2019, at 8:01 PM, Zhang, Junchao wrote:
>>
>> The value of "owner" should fit in PetscMPIInt.
>
> Are you implying that BuildSystem always promotes PetscInt to be able to
> store a PetscMPIInt (what if you configure with 32 bit indices and a
Stefano Zampini writes:
> I just took a look at the ISGENERAL code. ISSetBlockSize_General just sets
> the block size of the layout (??)
> ISGetIndices always return the data->idx memory.
> So, a more profound question is: what is the model behind setting the block
> size on a ISGENERAL? And
Stefano Zampini via petsc-dev writes:
>> Thoughts and/or comments? Would it make sense to add an
>> ISGetBlockIndices/ISRestoreBlockIndices or would that be too confusing for
>> the user?
>
> That would be more general and I think it makes sense, and should pair with
> ISGetBlockSize
What
I think this thread got dropped when I was on travel (two months ago and
I'm just now getting back to it, eek!). Matt, could you please comment
on this model?
Jed Brown via petsc-dev writes:
> Matthew Knepley writes:
>
>>>> >> The local points could be distinct
"Smith, Barry F. via petsc-dev" writes:
> Is this one process with one subdomain? (And hence no meaningful overlap
> since there is nothing to overlap?) And you expect to get the "exact" answer
> on one iteration?
>
> Please run the right preconditioned GMRES with -pc_asm_type [restrict
Karl Rupp writes:
>> Do you have any experience with nsparse?
>>
>> https://github.com/EBD-CREST/nsparse
>>
>> I've seen claims that it is much faster than cuSPARSE for sparse
>> matrix-matrix products.
>
> I haven't tried nsparse, no.
>
> But since the performance comes from a hardware
Do you have any experience with nsparse?
https://github.com/EBD-CREST/nsparse
I've seen claims that it is much faster than cuSPARSE for sparse
matrix-matrix products.
Karl Rupp via petsc-dev writes:
> Hi Richard,
>
> CPU spGEMM is about twice as fast even on the GPU-friendly case of a
>
Matthew Knepley via petsc-dev writes:
> Can anyone think of a way to get a better message from
We could register all types and implement PetscViewerCreate_HDF5() to
raise an error when not configured with HDF5. The "downside" is that
-help would show implementations that aren't supported by
"Smith, Barry F. via petsc-dev" writes:
>> On Sep 22, 2019, at 11:26 PM, Balay, Satish wrote:
>>
>> Even-though a fix addresses a breakage in a single build - that change
>> could break other things so it's generally best to run a full test.
>
> Sure before a merge we want everything tested
> On Sep 22, 2019, at 8:35 AM, Jed Brown via petsc-dev wrote:
>
> Karl Rupp writes:
>
>>> I wonder if the single-node latency bugs on AC922 are related to these
>>> weird performance results.
>>>
>>> https://docs.google.com/spreadsheets
Karl Rupp writes:
>> I wonder if the single-node latency bugs on AC922 are related to these
>> weird performance results.
>>
>> https://docs.google.com/spreadsheets/d/1amFJIbpvs9oJcUc-WntsFHO_C0LE7xFJeor-oElt0LY/edit#gid=0
>>
>
> Thanks for these numbers!
> Intra-Node > Inter-Node is indeed
"Smith, Barry F." writes:
>> On Sep 21, 2019, at 11:43 PM, Jed Brown wrote:
>>
>> "Smith, Barry F." writes:
>>
>>> Jed,
>>>
>>> What does latency as a function of message size mean? It is in the plots
>>
>> It's just the wall-clock time to ping-pong a message of that size. All
>> the
on to being network bandwidth limited for large sizes.
>
>> On Sep 21, 2019, at 11:15 PM, Jed Brown via petsc-dev
>> wrote:
>>
>> Karl Rupp via petsc-dev writes:
>>
>>> Hi Junchao,
>>>
>>> thanks, these numbers are interesting.
>>>
Karl Rupp via petsc-dev writes:
> Hi Junchao,
>
> thanks, these numbers are interesting.
>
> Do you have an easy way to evaluate the benefits of a CUDA-aware MPI vs.
> a non-CUDA-aware MPI that still keeps the benefits of your
> packing/unpacking routines?
>
> I'd like to get a feeling of
For an AIJ matrix with 32-bit integers, this is 1 flops/6 bytes, or 165
GB/s for the node for the best case (42 ranks).
My understanding is that these systems have 8 channels of DDR4-2666 per
socket, which is ~340 GB/s of theoretical bandwidth on a 2-socket
system, and 270 GB/s STREAM Triad
"Smith, Barry F. via petsc-dev" writes:
> When using valgrind it is important to understand that it does not
> immediately make a report when it finds an uninitialized memory, it only
> makes a report when an uninitialized memory would cause a change in the
> program flow (like in an if
"Smith, Barry F." writes:
>> Satish and Barry: Do we need the Error codes or can I revert to previous
>> functionality?
>
> I think it is important to display the error codes.
>
> How about displaying at the bottom how to run the broken tests? You already
> show how to run them with the
Hapla Vaclav via petsc-dev writes:
> On 20 Sep 2019, at 19:59, Scott Kruger <kru...@txcorp.com> wrote:
>
>
> On 9/20/19 10:44 AM, Hapla Vaclav via petsc-dev wrote:
> I used to copy the command actually run by the test harness, change to
> example's directory and paste the command
Pierre Jolivet via petsc-dev writes:
> Hello,
> Given a Mat A, I’d like to know if there is an implementation available for
> doing C=A*B
> I was previously using MatHasOperation(A, MATOP_MATMAT_MULT, )
> but the result is not correct in at least two cases:
Do you want MATOP_MAT_MULT and
Stefano Zampini writes:
> So, for example, including petscmat.h we get all the constructors,
> including "petscdm.h" we don't get DMPlexCreate... (BTW, this should
> be spelled DMCreatePlex if we follow the Mat convention)
That's a legacy convention for DM. There is a usage difference in that
Václav Hapla via petsc-dev writes:
> On 19 September 2019, 12:23:43 CEST, Matthew Knepley wrote:
>>On Thu, Sep 19, 2019 at 6:21 AM Matthew Knepley
>>wrote:
>>
>>> On Thu, Sep 19, 2019 at 6:20 AM Stefano Zampini
>>
>>> wrote:
>>>
So why is it in the vec package?
>>>
>>> Its in IS, so if
"Mills, Richard Tran via petsc-dev" writes:
> On 9/12/19 6:33 AM, Jed Brown via petsc-dev wrote:
> [...]
>> https://docs.gitlab.com/ee/user/project/code_owners.html
>>
>> We currently require approval from Integration (of which you are a
>> member) an
", but it confuses me since it says you can
>> use it to match "Participate", but you can't do something like:
>> Email all new issues, but only show me the MR's I am mentioned
>> in or own.
>>
>> Scott
>>
>>
>> On 9/12/19 7:39 AM,
Matthew Knepley via petsc-dev writes:
> On Thu, Sep 12, 2019 at 9:05 AM Balay, Satish via petsc-dev <
> petsc-dev@mcs.anl.gov> wrote:
>
>> When a new MR is created, approval rules default to 'Integration' and
>> 'Team'
>>
>> So everyone in the team probably receives emails on all MRs. Now that
Please fork the repository on GitLab
(https://gitlab.com/petsc/petsc/-/forks/new) and push your branch to the
fork, then make a merge request. If you become a regular contributor,
we can give you push privileges to the main repository.
Pierre Gosselet via petsc-dev writes:
> Dear all,
> I am
Can we query HDF5 to determine whether it supports zlib? When shipping
shared libraries, some people will use a different libhdf5, so it'd be
better to determine this at run-time.
"Smith, Barry F. via petsc-dev" writes:
>Vaclav,
>
> At the time of the PR Jed complained about all the
"Smith, Barry F." writes:
> Jed,
>
> Good recall. We could use the new flag that indicates the block size was
> never set by the user to allow a change from the 1?
Yeah, I thought that had been the idea behind -1, but the code doesn't seem to
enforce it.
"Smith, Barry F. via petsc-dev" writes:
> It seems reasonable at SetUp time to make it 1. If we need to have the
> information that user never set it (I don't know why we would want this) then
> that can go into a new flag.
I think I recall code paths in which the blocksize is set after
Lisandro Dalcin via petsc-dev writes:
>>If this line was protected with #if defined(PETSC_HAVE_METIS) and
>> PETSc was not installed with ParMetis, but only Metis would the code run
>> correctly? Or is it somehow that even though you are only using metis here
>> you still need parmetis? For
"Smith, Barry F. via petsc-dev" writes:
>> Our Metis wrapper is marked as a sequential one, but since you are linking
>> libmetis with MPI, this is problematic for some configurations.
>
> What is your work flow? Are you using --prefix to compile particular
> combinations of external
FYI, the source for this example is here:
https://bitbucket.org/psanan/sphinx_scratch/src/master/introductory_tutorial_ksp.rst
(raw)
https://bitbucket.org/psanan/sphinx_scratch/raw/a19b48b61e50181e754becb57fc6ff36d7639005/introductory_tutorial_ksp.rst
I'm concerned that the code is copied in
Matthew Knepley writes:
>>> >> The local points could be distinct for
>>> >> both fields and coordinates, with the global SF de-duplicating the
>>> >> periodic points for fields, versus leaving them distinct for
>>> >> coordinates.
>>> >
>>> >
>>> > Oh, no I would never do that.
>>>
>>> Can you
"Smith, Barry F." writes:
>> On Aug 14, 2019, at 5:58 PM, Jed Brown wrote:
>>
>> "Smith, Barry F." writes:
>>
On Aug 14, 2019, at 2:37 PM, Jed Brown wrote:
Mark Adams via petsc-dev writes:
> On Wed, Aug 14, 2019 at 2:35 PM Smith, Barry F.
> wrote:
>
"Smith, Barry F." writes:
>> On Aug 14, 2019, at 2:37 PM, Jed Brown wrote:
>>
>> Mark Adams via petsc-dev writes:
>>
>>> On Wed, Aug 14, 2019 at 2:35 PM Smith, Barry F. wrote:
>>>
Mark,
Would you be able to make one run using single precision? Just single
Brad Aagaard via petsc-dev writes:
> Q2 is often useful in problems with body forces (such as gravitational
> body forces), which tend to have linear variations in stress.
It's similar on the free-surface Stokes side, where pressure has a
linear gradient and must be paired with a stable
Mark Adams writes:
> On Wed, Aug 14, 2019 at 3:37 PM Jed Brown wrote:
>
>> Mark Adams via petsc-dev writes:
>>
>> > On Wed, Aug 14, 2019 at 2:35 PM Smith, Barry F.
>> wrote:
>> >
>> >>
>> >> Mark,
>> >>
>> >>Would you be able to make one run using single precision? Just single
>> >>
Mark Adams via petsc-dev writes:
> On Wed, Aug 14, 2019 at 2:35 PM Smith, Barry F. wrote:
>
>>
>> Mark,
>>
>>Would you be able to make one run using single precision? Just single
>> everywhere since that is all we support currently?
>>
>>
> Experience in engineering at least is single
Matthew Knepley writes:
> On Wed, Aug 14, 2019 at 11:46 AM Jed Brown wrote:
>
>> Matthew Knepley writes:
>>
>> > On Tue, Aug 13, 2019 at 7:35 PM Stefano Zampini <
>> stefano.zamp...@gmail.com>
>> > wrote:
>> >
>> >>
>>
Jed Brown via petsc-dev writes:
> Matthew Knepley writes:
>
>> On Tue, Aug 13, 2019 at 7:35 PM Stefano Zampini
>> wrote:
>>
>>>
>>>
>>> On Aug 14, 2019, at 1:19 AM, Jed Brown via petsc-dev <
>>> petsc-dev@mcs.anl.gov> wr
Matthew Knepley writes:
> On Tue, Aug 13, 2019 at 7:35 PM Stefano Zampini
> wrote:
>
>>
>>
>> On Aug 14, 2019, at 1:19 AM, Jed Brown via petsc-dev <
>> petsc-dev@mcs.anl.gov> wrote:
>>
>> [Cc: petsc-dev]
>>
>> Also,
[Cc: petsc-dev]
Also, why is our current mode of localized coordinates preferred over
the coordinate DM being non-periodic? Is the intent to preserve that
for every point in a DM, the point is also valid in the coordinate DM?
Can there be "gaps" in a chart?
I've been digging around in the
https://bitbucket.org/petsc/petsc/issues/333/use-64-bit-indices-for-row-offsets-in
"Smith, Barry F." writes:
> Make an issue
>
>
>> On Jul 30, 2019, at 7:00 PM, Jed Brown wrote:
>>
>> "Smith, Barry F. via petsc-users" writes:
>>
>>> The reason this worked for 4 processes is that the
"Smith, Barry F. via petsc-users" writes:
>The reason this worked for 4 processes is that the largest count in that
> case was roughly 6,653,750,976/4 which does fit into an int. PETSc only needs
> to know the number of nonzeros on each process, it doesn't need to know the
> amount across
"Smith, Barry F." writes:
> I don't know what it means.
>
> I just know that for several years the result of the test said the MPI
> libraries were not shared. I don't think that changed anything the rest of
> configure did.
Can we delete it?
$ git grep '\.shared\b' config
Does this mean we've been incorrectly identifying shared libraries all this
time?
"Smith, Barry F. via petsc-dev" writes:
> Jed and Matt,
>
>I have two problems with the MPI shared library check goes back to at
> least 3.5
>
> 1) Executing:
Dave May via petsc-dev writes:
> I'd describe how to use the binary dump and how to generate vtk files.
>
> The first is the most universal as it's completely generic and does not
> depend on a dm, thus users with their own mesh data structure and or don't
> have a mesh at all can use it. Would
X11 plotting is unreliable (requires installing something non-obvious)
on anything but Linux. I use it in live demos, but it's so limited I
wouldn't recommend it to users. VTK is more discoverable/explorable for
users; install a binary and make all the plots they want.
Patrick Sanan via
If you are thinking about attending the American Geophysical Union Fall
Meeting (Dec 9-13 in San Francisco), please consider submitting an
abstract to this interdisciplinary session. Abstracts are due July 31.
T003: Advances in Computational Geosciences
This session highlights advances in the
"Smith, Barry F. via petsc-dev" writes:
> Satish,
>
> I am confused. I checked out the commit just before this commit and do
>
> $ touch src/mat/interface/matrix.c
> $ make -j 12 -f gmakefile.test test globsearch="snes*tests*ex1*"
Use "-f gmakefile" if you want to include library build
"Zhang, Junchao" writes:
> A side question: Do lossy compressors have value for PETSc?
Perhaps if they're very fast, but I think it's usually not PETSc's place
to be performing such compression due to tolerances being really subtle.
There certainly is a place for preconditioning using reduced
"Smith, Barry F." writes:
> Sorry, I wasn't clear. Just meant something simpler. Compress the matrix to
> copy it to the GPU for faster transfers (and uncompress it appropriately on
> the GPU).
Oh, perhaps. Probably not relevant with NVLink (because it's nearly as fast as
DRAM), but could
by breadth-first search or similar. But we'd need to demo that
use specifically.
"Smith, Barry F." writes:
> CPU to GPU? Especially matrices?
>
>> On Jul 11, 2019, at 9:05 AM, Jed Brown via petsc-dev
>> wrote:
>>
>> Zstd is a remark
Zstd is a remarkably good compressor. I've experimented with it for
compressing column indices for sparse matrices on structured grids and
(after a simple transform: subtracting the row number) gotten
decompression speed in the neighborhood of 10 GB/s (i.e., faster per
core than DRAM). I've been
Matthew Knepley writes:
> On Mon, Jul 8, 2019 at 10:37 PM Jed Brown via petsc-dev <
> petsc-dev@mcs.anl.gov> wrote:
>
>> "Smith, Barry F. via petsc-dev" writes:
>>
>> >> On Jul 8, 2019, at 9:53 PM, Jakub Kruzik via petsc-dev <
>>
"Smith, Barry F." writes:
>> There is some nontrivial infrastructure that would be needed for this
>> model.
>>
>> 1. This new component needs to be built into a new library such as
>> libpetsc-plugin.a (when static).
>>
>> 2. Users need to know when they should link this module. They'll
"Smith, Barry F. via petsc-dev" writes:
>> On Jul 8, 2019, at 9:53 PM, Jakub Kruzik via petsc-dev
>> wrote:
>>
>> Just to clarify, the suggested solution is a plug-in sitting anywhere in the
>> PETSc source tree with postponed compilation and using
>> __attribute__((constructor)) to
John Peterson writes:
>> Do you add values many times into the same location? The array length
>> will be the number of misses to the local part of the matrix. We could
>> (and maybe should) make the stash use a hash instead of building the
>> array with multiplicity and combining duplicates
John Peterson writes:
> On Tue, Jul 2, 2019 at 1:44 PM Jed Brown wrote:
>
>> Fande Kong via petsc-dev writes:
>>
>> > Hi Developers,
>> >
>> > John just noticed that the matrix assembly was slow when having
>> sufficient
>> > amount of off-diagonal entries. It was not a MPI issue since I was
Fande Kong via petsc-dev writes:
> Hi Developers,
>
> John just noticed that the matrix assembly was slow when having sufficient
> amount of off-diagonal entries. It was not a MPI issue since I was able to
> reproduce the issue using two cores on my desktop, that is, "mpirun -n 2".
>
> I turned
"Smith, Barry F. via petsc-dev" writes:
> Does it make sense to recommend/suggest git bash for Windows as an
> alternative/in addition to Cygwin?
I would love to be able to recommend git-bash and/or WSL2 (which now
includes a full Linux kernel). I don't have a system on which to test,
but
Matthew Knepley writes:
> On Sat, Jun 29, 2019 at 8:39 AM Jed Brown wrote:
>
>> Matthew Knepley writes:
>>
>> > On Fri, Jun 28, 2019 at 4:37 PM Jed Brown wrote:
>> >
>> >> Matthew Knepley writes:
>> >>
>> >> > On Fri, Jun 28, 2019 at 2:04 PM Smith, Barry F. via petsc-dev <
>> >> >
Matthew Knepley writes:
> On Fri, Jun 28, 2019 at 4:37 PM Jed Brown wrote:
>
>> Matthew Knepley writes:
>>
>> > On Fri, Jun 28, 2019 at 2:04 PM Smith, Barry F. via petsc-dev <
>> > petsc-dev@mcs.anl.gov> wrote:
>> >
>> >>
>> >> You are right, these do not belong in petscconf.h
>> >>
>> >
>>
Matthew Knepley writes:
> On Fri, Jun 28, 2019 at 2:04 PM Smith, Barry F. via petsc-dev <
> petsc-dev@mcs.anl.gov> wrote:
>
>>
>> You are right, these do not belong in petscconf.h
>>
>
> The problematic thing here is hiding information from users of
> PETSc. If you are a user that counts on
We have a lot of lines like this
$ grep -c HAVE_LIB $PETSC_ARCH/include/petscconf.h
96
but only four of these are ever checked in src/. Delete them?
IMO, unused stuff should not go into petscconf.h. We have to scroll up
past these lines every time configure crashes. These are apparently all
> trash. I am not sure trying to "patch up" the old model is the best
> approach; though maybe it is.
>
> Barry
>
>
>> On Jun 28, 2019, at 10:41 AM, Jed Brown via petsc-dev
>> wrote:
>>
>> If we configure with --download-pnetcdf (version 1.
"Balay, Satish" writes:
> On Fri, 28 Jun 2019, Jed Brown via petsc-dev wrote:
>
>> If we configure with --download-pnetcdf (version 1.9.0), then update the
>> PETSc repository to use a new version (1.11.2), then re-run ./configure
>> --download-pnetcdf, we get a
If we configure with --download-pnetcdf (version 1.9.0), then update the
PETSc repository to use a new version (1.11.2), then re-run ./configure
--download-pnetcdf, we get a warning making us look like dolts:
===
Matthew Knepley writes:
> On Wed, Jun 26, 2019 at 3:42 PM Jed Brown via petsc-dev <
> petsc-dev@mcs.anl.gov> wrote:
>
>> "Smith, Barry F." writes:
>>
>> >> On Jun 26, 2019, at 1:53 PM, Jed Brown wrote:
>> >>
>> >> "S
"Smith, Barry F." writes:
>> On Jun 26, 2019, at 1:53 PM, Jed Brown wrote:
>>
>> "Smith, Barry F." writes:
>>
>>> It is still a PC, it may as part of its computation solve an eigenvalue
>>> problem but its use is as a PC, hence does not belong in SLEPc.
>>
>> Fine; it does not belong in
"Smith, Barry F." writes:
>> On Jun 26, 2019, at 1:53 PM, Jed Brown wrote:
>>
>> "Smith, Barry F." writes:
>>
>>> It can be a plug-in whose source sits in the PETSc source tree, even in
>>> the PC directory. It gets built by the PETSc build system after the
>>> build system installs PETSc
"Smith, Barry F." writes:
> It is still a PC, it may as part of its computation solve an eigenvalue
> problem but its use is as a PC, hence does not belong in SLEPc.
Fine; it does not belong in src/ksp/pc/.
"Smith, Barry F." writes:
> It can be a plug-in whose source sits in the PETSc source tree, even in the
> PC directory. It gets built by the PETSc build system after the
> build system installs PETSc and SLEPc (in the Spack world it would have its
> own Spack file that just depends on PETSc
Jed Brown writes:
> Patrick Sanan writes:
>
>> How about a plug-in PC implementation, compiled as its own dynamic library,
>> depending on both SLEPc and PETSc?
>
> Of course, but such a thing would need its own continuous integration, etc.
We could develop a better system for packaging and
"Smith, Barry F." writes:
>> You can implement and register a PC in SLEPc (it would go in libslepc.so).
>
> It makes no sense to have a PC in SLEPc.
We're talking about a PC that is implemented by iteratively solving an
eigenproblem.
Patrick Sanan writes:
> How about a plug-in PC implementation, compiled as its own dynamic library,
> depending on both SLEPc and PETSc?
Of course, but such a thing would need its own continuous integration, etc.
Matthew Knepley writes:
> On Wed, Jun 26, 2019 at 1:05 PM Jed Brown wrote:
>
>> Matthew Knepley writes:
>>
>> > On Wed, Jun 26, 2019 at 12:45 PM Jed Brown wrote:
>> >
>> >> Matthew Knepley writes:
>> >>
>> >> >> You can implement and register a PC in SLEPc (it would go in
>> >> libslepc.so).
Matthew Knepley writes:
> On Wed, Jun 26, 2019 at 12:45 PM Jed Brown wrote:
>
>> Matthew Knepley writes:
>>
>> >> You can implement and register a PC in SLEPc (it would go in
>> libslepc.so).
>> >>
>> >
>> > I think this is the bad workflow solution. What Barry suggested will work
>> > and be
Matthew Knepley writes:
>> You can implement and register a PC in SLEPc (it would go in libslepc.so).
>>
>
> I think this is the bad workflow solution. What Barry suggested will work
> and be MUCH easier for a developer. Isn't
> the point of our tools to make our lives easier, not to enforce
"Smith, Barry F. via petsc-dev" writes:
>> On Jun 26, 2019, at 9:56 AM, Balay, Satish via petsc-dev
>> wrote:
>>
>> On Wed, 26 Jun 2019, Jakub Kruzik via petsc-dev wrote:
>>
>>> Hello,
>>>
>>> as I mentioned in PR #1819, I would like to use SLEPc in PETSc.
>>>
>>> Currently when PETSc is
"Hapla Vaclav" writes:
>> On 20 Jun 2019, at 15:56, Vaclav Hapla wrote:
>>
>>
>>
>>> On 20 Jun 2019, at 15:52, Vaclav Hapla wrote:
>>>
>>>
>>>
On 20 Jun 2019, at 15:15, Hapla Vaclav wrote:
> On 20 Jun 2019, at 15:14, Jed Brown wrote:
>
> Hapla
Hapla Vaclav via petsc-dev writes:
>> On 20 Jun 2019, at 14:28, PETSc checkBuilds
>> wrote:
>>
>>
>>
>> Dear PETSc developer,
>>
>> This email contains listings of contributions attributed to you by
>> `git blame` that caused compiler errors or warnings in PETSc automated
>> testing.
Alexander Lindsay writes:
> I'm assuming this would be served out of an Argonne domain?
No, gitlab.com.
> On Sun, Jun 16, 2019 at 12:49 PM Jed Brown via petsc-dev <
> petsc-dev@mcs.anl.gov> wrote:
>
>> "Zhang, Hong via petsc-dev" writes:
>>
>>
Patrick Sanan writes:
>> It ought, I suppose, be possible to write a plugin that adds links
>> automagically to all keywords in formatted source, but I don't know the
>> details of how these are written.
>>
> Sounds like Jed's suggesting that this could be done with a script similar
> to the one