Hi Everybody,
I have been using DEAL_II_LIBRARIES and related variables from
deal.IIConfig.cmake to build Python extensions against deal.II. These
variables do not seem to be available any longer in installed .cmake files,
unless I am missing something? However, they are still documented as
exis
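For context, this is roughly how such a build can be wired up using deal.II's exported macros rather than the raw variables — a minimal sketch, assuming a recent deal.II install; the target name `pydealii_ext` and source file are made up:

```cmake
# Hypothetical CMakeLists.txt fragment for a Python extension module
# linked against deal.II. DEAL_II_SETUP_TARGET attaches the include
# paths and libraries that used to be exposed via DEAL_II_LIBRARIES.
find_package(deal.II 9.5 REQUIRED HINTS ${DEAL_II_DIR})
deal_ii_initialize_cached_variables()

add_library(pydealii_ext MODULE wrapper.cc)
deal_ii_setup_target(pydealii_ext)
```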
On 11/15/23 13:57, Marc BAKRY wrote:
I won't hesitate. At the moment, I tried to change the template from
… to … and to make the corresponding
changes in the FE_RaviartThomas class; however, it seems that it is not
/that/ easy (I triggered /a lot/ of errors at compile time, among them
in the FET
Dear David,
The Digital Alliance of Canada (Compute Canada) has deal.II on their
cluster through the use of the module system:
https://docs.alliancecan.ca/wiki/Available_software
You could reach out to them and ask them for their module file. They are
generally very chill.
If you need help reac
In addition to what Wolfgang wrote,
Sometimes the OS will map the cores of a CPU in the following order:
Physical-Logical-Physical-Logical-Physical-Logical etc.
Consequently, if your processor supports hyperthreading through the use of
logical cores, running with cores 0-1 means you are essentially
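On Linux you can inspect the actual numbering before pinning anything — a quick check, assuming util-linux's `lscpu` is available:

```shell
# Show how logical CPU ids map onto physical cores and sockets.
# If two CPU ids share the same CORE id, they are hyperthread
# siblings and will compete for the same execution units.
lscpu --extended=CPU,CORE,SOCKET
```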
I won't hesitate. At the moment, I tried to change the template from … to … and to make the corresponding changes in
the FE_RaviartThomas class; however, it seems that it is not *that* easy (I
triggered *a lot* of errors at compile time, among them in the FETools
namespace). As an intermediate step, I
On 11/15/23 04:09, Abbas Ballout wrote:
If I run mpirun -np 2, the assemble times are 25.8 seconds and the MUMPS
solve times are 51.7 seconds.
If I run mpirun --cpu-set 0-1 -np 2, the assemble times are 26 seconds
(unchanged) but the solve times are at 94.9 seconds!
Is this normal and expecte
Dear Peter,
Thank you very much. I will look into that PR and try to understand it.
Best,
Ce
Peter Munch wrote on Wednesday, November 15, 2023 at 20:12:
> Hi Ce Qin,
>
> There is a WIP PR for adding a fallback path to MGTransferMF:
> https://github.com/dealii/dealii/pull/12924. The PR is already 2 years
> old and has
Thanks, Wolfgang. I will abide by those guidelines.
On Tuesday, November 14, 2023 at 10:15:29 PM UTC-5 Wolfgang Bangerth wrote:
> On 11/14/23 16:25, Alex Quinlan wrote:
> >
> > I'm curious what your thoughts are on this approach. I imagine it could
> have
> > an advantage in some situations,
Hi everyone,
I'm working with a group at another university on a deal.II-powered project
(yay). I'd like to set things up as a module (i.e., do module load
dealii/9.5.1) so that everyone there is using the same version of everything.
Are there any published versions of deal.II modules for vario
Hi Greg,
good that you found a working solution! However, I would suggest against
using `VectorTools::interpolate_to_different_mesh` and instead using
`SolutionTransfer` or `parallel::distributed::SolutionTransfer` to move the
refinement information from a coarse mesh to a fine mesh. You can take a
look
Hi Ce Qin,
There is a WIP PR for adding a fallback path to MGTransferMF:
https://github.com/dealii/dealii/pull/12924. The PR is already 2 years old
and has merge conflicts with master. But I guess the changes would be
similar on master.
Best,
Peter
On Wednesday, 15 November 2023 at 08:30:00 U
ztdep,
You got me curious
You can output the results of the Kelly error estimator with something like
this in the data output:

Vector<float> estimated_error_per_cell(triangulation.n_active_cells());
KellyErrorEstimator<dim>::estimate(dof_handler, QGauss<dim - 1>(fe.degree + 1),
This isn't a deal.II problem per se.
I am trying to run a number of simulations with different parameters of the
same code, the underlying solver being MUMPS. I am using mpirun
--cpu-set … to bind and isolate the different simulations to different
cores (as I believe I should).
Profilin
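As an aside on the invocation itself, isolating two independent runs on disjoint physical cores might look like this — a sketch only, with hypothetical core ids (0,2 and 4,6 assume an interleaved physical/logical layout; check your machine's map first) and a made-up binary name:

```shell
# Hypothetical: pin two independent simulations to disjoint sets of
# physical cores so their MUMPS solves do not share hyperthread
# siblings. Core ids and ./sim are placeholders.
mpirun --cpu-set 0,2 --bind-to core -np 2 ./sim --param a.prm &
mpirun --cpu-set 4,6 --bind-to core -np 2 ./sim --param b.prm &
wait
```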