On 2/16/22 20:16, blais...@gmail.com wrote:
Dear all,
I hope you are well.
It seems MuParser does not, by default, allow for the use of the error
function (erf). We are solving a Stefan problem right now and I'd like to
be able to compare the solution to the analytical one to make a unit test
out of it. Regrettably, the analytical solution…
Hi,
I am new to deal.II. I downloaded the deal.II dmg and installed it on my Mac, but after I open the deal.II terminal I am really confused by the information shown below. My question is: what is the difference between an ordinary terminal and the deal.II terminal? It seems valid to run deal.II codes by makefile through…
On 2/16/22 10:44, Uclus Heis wrote:
Dear Wolfgang,
It seems this "solution.h5" instead of f_output was a misspelling mistake; I have the right f_output in my code. I can see that the problem here is that there are processes without cells, which may be a bug in my implementation, as I would like to use all the processes. How could…
In the code snippet you showed, you are always writing to "solution.h5", not
to the f_output stream.
Dear Wolfgang,
Thank you for the suggestions. I updated the code as you mentioned. I do
not fully understand your point about me not writing into the f_output
stream. Is data_out.write_hdf5_parallel() not writing the data? However, I
still have the error when running with more than 1 MPI rank.
Uclus,
the usual style is to create the DataOut object every time you write into a
file, and then let the variable die at the end of the scope, rather than
keeping it around for the next time you want to create output. You might
want to take a look at how time-dependent programs do this.
Dear Timo,
Thank you for the comments.
I am now doing one file per iteration. However, when running with mpirun
with more than 1 MPI rank, I get an error in the writing function.
My code right now looks like:
DataOut<dim> data_out;
data_out.attach_dof_handler(dof_handler);
for (int freq_iter = 0; ... )  // so…
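Combining the advice from earlier in the thread, a per-iteration HDF5 write might look roughly like the following sketch. This assumes deal.II and an MPI build; I have not compiled it against your code, and the function name output_results, the filename scheme, and the DataOutFilterFlags arguments are assumptions on my part, so please check the DataOut and DataOutBase documentation:

```cpp
// Sketch only: assumes deal.II headers and an MPI-enabled build.
template <int dim>
void output_results(const DoFHandler<dim> &dof_handler,
                    const Vector<double>  &solution,
                    const unsigned int     freq_iter)
{
  // Local DataOut: constructed for this write only, destroyed at the
  // end of the function rather than reused across iterations.
  DataOut<dim> data_out;
  data_out.attach_dof_handler(dof_handler);
  data_out.add_data_vector(solution, "solution");
  data_out.build_patches();

  // Filter duplicate vertices; the second flag requests XDMF/HDF5 output.
  DataOutBase::DataOutFilter data_filter(
    DataOutBase::DataOutFilterFlags(true, true));
  data_out.write_filtered_data(data_filter);

  const std::string filename =
    "solution-" + std::to_string(freq_iter) + ".h5";  // one file per iteration
  data_out.write_hdf5_parallel(data_filter, filename, MPI_COMM_WORLD);
}
```

All ranks must call write_hdf5_parallel collectively, so make sure the call is not inside a branch that only some processes take.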