How can I assign one MPI rank per shared-memory node on my own computer? I
thought that running the program serially with the option "--n_threads=4"
would do the trick, but it doesn't seem to help. It might be that the rest
of my code, the part that is not threaded, is simply too slow. Thanks in
advance.
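
In other words, what I am after is something like this (assuming mpiexec is
the right launcher on my machine and that "--n_threads" is what controls the
thread count; "my_app" is just a stand-in for my executable):

    # one MPI rank on the whole workstation, 4 threads inside that rank
    mpiexec -np 1 ./my_app --n_threads=4

    # the pure-MPI run I was comparing against
    mpiexec -np 4 ./my_app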



On Wed, Aug 27, 2014 at 11:06 PM, Derek Gaston <[email protected]> wrote:

> Just a heads up: we have been using "--enable-static --disable-shared"
> exclusively on one of our clusters... so it's getting tested :-)
>
> Miguel: I would still recommend going with shared libraries... generally
> clusters have slow filesystems and link times for huge static executables
> can be atrocious...
>
>
> On Wed, Aug 27, 2014 at 11:20 PM, Roy Stogner <[email protected]>
> wrote:
>
>>
>>
>> On Wed, 27 Aug 2014, Miguel Angel Salazar de Troya wrote:
>>
>>> Thanks. ParallelMesh::allgather() is going to be pretty useful for
>>> me. Yes, the part of my code that would run on the entire mesh would
>>> be pretty small. Could I rebuild the ParallelMesh on each processor
>>> once I'm done?
>>>
>>
>> Not so much "rebuild" as "unbuild".
>> ParallelMesh::delete_remote_elements() will remove all the
>> non-semilocal elements, cutting your memory use to O(n_elem/n_proc)
>> instead of O(n_elem) again.
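
Just to check that I am reading this right: is the rough sketch below the
intended pattern? (build_square() is only a stand-in for however the mesh
actually gets built in my code.)

    #include "libmesh/libmesh.h"
    #include "libmesh/parallel_mesh.h"
    #include "libmesh/mesh_generation.h"

    using namespace libMesh;

    int main (int argc, char ** argv)
    {
      LibMeshInit init (argc, argv);

      // Distributed mesh: each processor only holds its semilocal elements.
      ParallelMesh mesh (init.comm());
      MeshTools::Generation::build_square (mesh, 100, 100);

      // Temporarily serialize so every processor sees the entire mesh.
      mesh.allgather();

      // ... the small piece of my code that needs the whole mesh ...

      // "Unbuild": drop the non-semilocal elements again, so memory use
      // goes back to O(n_elem/n_proc) instead of O(n_elem).
      mesh.delete_remote_elements();

      return 0;
    }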
>>
>>
>>> Also, one question related to MPI and clusters. I have little
>>> experience with clusters, only some MPI homework. I've thought of
>>> compiling my program with static libraries to avoid the hassle of
>>> installing the libraries on the cluster. Is this a recommended
>>> option? Is this possible with libMesh?
>>>
>>
>> I wouldn't say it's recommended, but it should definitely be possible.
>>
>> I generally do personal builds of whatever I don't trust the
>> cluster/supercomputer sysadmins to do correctly, but those are still
>> shared library builds.  I'd recommend the same to you, except: we're
>> getting ready to release libMesh 1.0 and I'll bet nobody has tested a
>> "--enable-static --disable-shared" build recently.  If you're
>> considering doing so I'd appreciate hearing whether it worked or
>> whether you encountered problems.
>>
>> Thanks,
>> ---
>> Roy
>>
>>
>>
>


-- 
*Miguel Angel Salazar de Troya*
Graduate Research Assistant
Department of Mechanical Science and Engineering
University of Illinois at Urbana-Champaign
(217) 550-2360
[email protected]