libmesh-users@lists.sourceforge.net;
Roy Stogner
Sent: Thu Jan 28 08:57:51 2010
Subject: Re: [Libmesh-users] Fwd: further data Re: weird problem
Sorry for the late reply. However, why is it OK on Intel 32-bit in debug mode? There
is no problem on the 32-bit cluster. Thanks a lot.
Regards,
Yujie
On Wed, Jan 27, 2010 at 2:42 PM, Kirk, Benjamin (JSC-EG311) <
benjamin.kir...@nasa.gov> wrote:
> I got different timings using "METHOD=pro" and "METHOD=dbg". You can
> find the details in the following tables. In "dbg", the problem is always
> there. However, in "pro", the problem disappears. Any advice? In both
> cases I ran the code on a slave node. Thanks a lot.
Whoops, pr
On Wed, 27 Jan 2010, Yujie wrote:
> In "dbg", the problem is always there.
I hadn't noticed before that you were running in debug mode. Bad
performance is inherent to METHOD=dbg; we turn off optimization, we
test every assertion, we sometimes even add extra "double-checking"
libMesh code, GNU l
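Roughly speaking, the difference looks like the sketch below. This is an illustrative stand-in, not libMesh's actual internals, but it shows the kind of per-call and per-iteration checking a dbg-style build pays for and an optimized build compiles out:

#include <cassert>
#include <cstddef>
#include <vector>

// In a debug-style build the asserts below are compiled in and the loop is
// unoptimized; in an optimized build NDEBUG is defined, the asserts vanish,
// and the compiler optimizes the loop.
double dot(const std::vector<double> &a, const std::vector<double> &b)
{
  assert(a.size() == b.size());              // argument sanity check
  double sum = 0.;
  for (std::size_t i = 0; i < a.size(); ++i)
    {
      assert(i < b.size());                  // extra "double-checking" per iteration
      sum += a[i] * b[i];
    }
  return sum;
}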
Dear Ben and Roy,
I got different timings using "METHOD=pro" and "METHOD=dbg". You can
find the details in the following tables. In "dbg", the problem is always
there. However, in "pro", the problem disappears. Any advice? In both
cases I ran the code on a slave node. Thanks a lot.
On Wed, 27 Jan 2010, Yujie wrote:
> Thanks, Roy. Do I need to compile PETSc with the same parameter,
> that is "-pg"?
I don't believe so. You just need -pg as a compiler parameter on the
objects you want profiling data from and on the linker, and setting
METHOD should have done that.
> I get
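To make that workflow concrete, a stand-alone toy program works the same way. The file and function names below are invented for illustration, and the exact flags a METHOD=pro build adds may differ slightly:

// Build and profile (the -pg flag must appear at compile time *and* link time):
//   g++ -O2 -pg profile_demo.C -o profile_demo
//   ./profile_demo                        (writes gmon.out in the working directory)
//   gprof ./profile_demo gmon.out | less  (flat profile plus call graph)
#include <cmath>
#include <cstddef>
#include <cstdio>

// Deliberately expensive function so it dominates the flat profile.
double hot_loop(std::size_t n)
{
  double s = 0.;
  for (std::size_t i = 1; i <= n; ++i)
    s += std::sqrt(static_cast<double>(i));
  return s;
}

int main()
{
  std::printf("%f\n", hot_loop(50000000u));
  return 0;
}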
Thanks, Roy. Do I need to compile PETSc with the same parameter, that is
"-pg"? I get link errors when I compile libMesh.
Regards,
Yujie
On Wed, Jan 27, 2010 at 10:59 AM, Roy Stogner wrote:
>
> On Wed, 27 Jan 2010, Yujie wrote:
>
> Thank you very much for your reply. I will recompile the code with "METHOD=pro".
On Wed, Jan 27, 2010 at 11:00 AM, Kirk, Benjamin (JSC-EG311) <
benjamin.kir...@nasa.gov> wrote:
>> Can you confirm that the problem doesn't exist on one processor? What are
>> the details of the mesh you are using??
>
> You know, if we want to try repeating this ourselves, I believe Paul
> saw a relatively long find_global_indices() execution time by simply
> ultra-refining (~500K elements)
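For reference, a driver along the lines below should produce a mesh in that size range. The header paths and constructor signatures are taken from recent libMesh releases and are assumptions here, not a tested reproduction of Paul's case:

#include "libmesh/libmesh.h"
#include "libmesh/mesh.h"
#include "libmesh/mesh_generation.h"
#include "libmesh/mesh_refinement.h"

using namespace libMesh;

int main (int argc, char ** argv)
{
  LibMeshInit init (argc, argv);

  Mesh mesh (init.comm());

  // 20x20x20 hexes = 8,000 coarse elements...
  MeshTools::Generation::build_cube (mesh, 20, 20, 20);

  // ...and two uniform refinements multiply that by 8^2,
  // giving 512,000 active elements (~500K).
  MeshRefinement refinement (mesh);
  refinement.uniformly_refine (2);

  // Depending on the mesh type and partitioner configuration, the
  // renumbering/partitioning work (where find_global_indices() can
  // show up) is what becomes measurable at this size.
  mesh.print_info ();

  return 0;
}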
On Wed, 27 Jan 2010, Yujie wrote:
> Thank you very much for your reply. I will recompile the code with
> "METHOD=pro". What is "gprof"?
A userspace profiling utility; you've probably got it installed
already.
If you have a cluster to yourself you might also check out oprofile -
a bit more of a
Dear Ben,
Thank you very much for your reply. I will recompile the code with
"METHOD=pro". What is "gprof"?
As for the AMD x86_64-based cluster, I actually run the code on the master node
and one slave node, with 2 CPUs. The same problem is there.
Regards,
Yujie
On Wed, Jan 27, 2010 at 10:42 AM, Kirk, Benjamin (JSC-EG311) <
benjamin.kir...@nasa.gov> wrote:
On Wed, 27 Jan 2010, Kirk, Benjamin (JSC-EG311) wrote:
>> It only gets used for I/O and the cost should scale more slowly than
>> solves, though; for large implicit 2D/3D problems it shouldn't be an
>> issue even on inefficient MPI implementations.
>
> Yes, this issue is bizarre indeed. The code
>> When I sent the following email to the libmesh mailing list, I ran into a
>> problem because of the size of the email. Could you give me some
>> advice regarding this problem? Thanks a lot.
>
> It looks like it made it through eventually; just a little late.
I had to approve it based on size, and it was
On Wed, 27 Jan 2010, Yujie wrote:
> When I sent the following email to the libmesh mailing list, I ran into a
> problem because of the size of the email. Could you give me some
> advice regarding this problem? Thanks a lot.
It looks like it made it through eventually; just a little late.
I'm not sure if