Remember: localize_to_one() only pulls the vector down to one processor
(processor 0 by default). If you need a full copy of the vector on every
processor then you should use localize().

Derek
On Thu, Apr 16, 2015 at 11:25 AM John Peterson <[email protected]> wrote:

> On Thu, Apr 16, 2015 at 8:52 AM, ernestol <[email protected]> wrote:
>
> > On 2015-04-16 09:18, John Peterson wrote:
> >
> >> On Wed, Apr 15, 2015 at 7:55 PM, ernestol <[email protected]> wrote:
> >>
> >>> Hi all,
> >>>
> >>> I wonder if there is a simple way to get the global solution when
> >>> running the code in parallel?
> >>>
> >>> I tried:
> >>>
> >>>    const System& system = es.get_system("System");
> >>>    const unsigned short int variable_num =
> >>> system.variable_number("variable");
> >>>    const unsigned int dim = mesh.mesh_dimension();
> >>>    std::vector<Number> sys_soln;
> >>>    system.update_global_solution (sys_soln, 0);
> >>>
> >>> And also created this function
> >>>
> >>> void Solution(const EquationSystems& es,const MeshBase& mesh,string
> >>> s){
> >>>    std::vector<Number> soln;
> >>>    std::vector<std::string> names;
> >>>    es.build_variable_names(names);
> >>>    es.build_solution_vector(soln);
> >>>    ofstream myfile;
> >>>    myfile.open(s);
> >>>    for(unsigned int i=0;i<mesh.n_nodes();i++){
> >>>      const unsigned int n_vars = names.size();
> >>>      for(unsigned int c=0;c<n_vars;c++){
> >>>        myfile << scientific << " " << soln[i*n_vars + c];
> >>>      }
> >>>      myfile << endl;
> >>>    }
> >>>    myfile.close();
> >>> }
> >>>
> >>> However, both only work in serial. In parallel, the first gives me
> >>> only zeros in sys_soln, and the second gives me a PETSc error.
> >>>
> >>
> >> Out of curiosity, what does calling
> >>
> >> system.solution->print_global();
> >>
> >> do?  I'm a bit skeptical about your loop over nodes and vars... it
> >> might work for this one case, but be aware that it probably won't work
> >> if there are element and/or scalar dofs in the solution vector.
> >>
> >
> > It prints the solution without a problem both in serial and parallel.
> > However, what I need is a vector containing the solution; I am only
> > printing it to check that I actually obtained it. I will need the values
> > at each node so I can couple them with a discrete part of the model.
> > All the approaches I have tried work in serial but not in parallel.
> > Any ideas?
> >
>
> OK, sorry, misunderstood what you were after.
>
> In that case, the solution posted by Vasileios Vavourakis should work.  I'm
> not sure if localize_to_one() calls close() on the vector beforehand, so
> why don't you try doing that as well?
>
> --
> John
>
> ------------------------------------------------------------------------------
> BPM Camp - Free Virtual Workshop May 6th at 10am PDT/1PM EDT
> Develop your own process in accordance with the BPMN 2 standard
> Learn Process modeling best practices with Bonita BPM through live
> exercises
> http://www.bonitasoft.com/be-part-of-it/events/bpm-camp-virtual-event?utm_source=Sourceforge_BPM_Camp_5_6_15&utm_medium=email&utm_campaign=VA_SF
> _______________________________________________
> Libmesh-users mailing list
> [email protected]
> https://lists.sourceforge.net/lists/listinfo/libmesh-users
>
