Thank you for your reply, Prof. Bangerth. I believe I am already using 
distributed computing in my code, but I am not sure that is true for the 
post-processing part. I use write_vtu and write_pvtu_record to write the 
output files. Is that the best way to do it with distributed computing? Is 
there an example where distributed computing is used in the post-processing, 
so that I can compare it against my code and see if it can be improved?
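
For reference, my output code roughly follows the pattern sketched below. 
This is only a sketch, not the exact code: names such as dof_handler, 
solution, cycle, and mpi_communicator stand in for the actual variables, and 
dim is the usual space-dimension template parameter.

  #include <deal.II/base/mpi.h>
  #include <deal.II/base/utilities.h>
  #include <deal.II/numerics/data_out.h>

  #include <fstream>
  #include <string>
  #include <vector>

  using namespace dealii;

  // Build the patches for the locally owned part of the solution.
  DataOut<dim> data_out;
  data_out.attach_dof_handler(dof_handler);
  data_out.add_data_vector(solution, "solution");
  data_out.build_patches();

  // Every MPI rank writes its own .vtu piece ...
  const unsigned int this_rank =
    Utilities::MPI::this_mpi_process(mpi_communicator);

  std::ofstream output("solution-" + Utilities::int_to_string(cycle, 2) +
                       "." + Utilities::int_to_string(this_rank, 4) + ".vtu");
  data_out.write_vtu(output);

  // ... and rank 0 additionally writes the .pvtu record that ties the
  // per-rank pieces together for visualization.
  if (this_rank == 0)
    {
      std::vector<std::string> filenames;
      for (unsigned int i = 0;
           i < Utilities::MPI::n_mpi_processes(mpi_communicator);
           ++i)
        filenames.push_back("solution-" + Utilities::int_to_string(cycle, 2) +
                            "." + Utilities::int_to_string(i, 4) + ".vtu");

      std::ofstream pvtu_output("solution-" +
                                Utilities::int_to_string(cycle, 2) + ".pvtu");
      data_out.write_pvtu_record(pvtu_output, filenames);
    }

Is this the recommended way to do the output in a distributed setting, or 
should it be organized differently?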

Thanks in advance,
Raghunandan.

On Tuesday, August 23, 2022 at 12:06:37 PM UTC-5 Wolfgang Bangerth wrote:

> On 8/23/22 07:22, Raghunandan Pratoori wrote:
> >
> > I have 2 sets of dof_handlers. The one that is giving me a problem has
> > 6,440,067 dofs and it becomes 6x when initializing history_field_stress
> > or history_field_strain. I also plan on increasing these in future
> > simulations.
>
> Well, at 8 bytes per 'double' variable, you need 6,440,067 * 6 * 8 =
> 309,123,216 bytes = 309 MB of memory to do what you want to do for just
> this one step. If your machine does not have 309 MB left, you are bound
> to get the error you describe. There is nothing you can do about this
> other than (i) get a machine with more memory, or (ii) use distributed
> computing where the 6M unknowns are distributed across more than just
> one machine.
>
> Best
> W.
>
> -- 
> ------------------------------------------------------------------------
> Wolfgang Bangerth email: bang...@colostate.edu
> www: http://www.math.colostate.edu/~bangerth/
>
>
