Hi everyone,

We were able to compile the toolkit on MareNostrum 4 without any further 
problems ("unset MPI" did the trick), and I want to run some further 
tests before providing the configuration script.
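
For reference, the relevant part of the environment setup I used is 
sketched below; this is only a sketch, and the exact module names and 
versions on MareNostrum 4 still need to be confirmed (that is part of 
the tests I mentioned):

   module load intel     # Intel compiler suite
   module load impi      # keep Intel MPI (and mpirun) available
   unset MPI             # hide the cluster-wide $MPI variable from the
                         # Cactus configure stage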

What is the standard way to include new configuration files in the 
toolkit? Should I send them to someone in particular?

cheers,
Helvi

On 2017-10-18 19:16, hwitek wrote:
> Hi Miguel, everyone,
> 
> Regarding your first point: I had a similar problem on its predecessor,
> MareNostrum 3.
> The trick was to unset the global MPI variable, so in my bashrc I had
> something like
> 
> module load intel
> ...
> unset MPI
> 
> I hope this helps! I'll also have a go at the compilation myself
> tonight.
> 
> cheers,
> Helvi
> 
> 
> On 2017-10-18 18:52, Miguel Zilhão wrote:
>> Hi all,
>> 
>> I've been compiling the latest ET on MareNostrum 4, and there are a
>> couple of issues I'd like to ask about:
>> 
>> 1. On MareNostrum, a lot of MPI-related tools are bundled into modules.
>> In particular, one of the default modules that is loaded is called
>> 'impi'. This module sets the global variable $MPI to 'impi', i.e.
>> 
>>   $ echo $MPI
>>   impi
>> 
>> This severely interferes with the Cactus build process. At the
>> configuration stage I get the following:
>> 
>>   Configuring with flesh MPI
>>     Warning: use of flesh MPI via MPI option is deprecated and should
>>     be replaced with the thorn ExternalLibraries/MPI and its MPI_DIR option
>>     MPI selected, but no known MPI method - what is "impi" ?
>> 
>> To work around this problem, I unloaded the module (module unload
>> impi), which undefines the $MPI variable. After this, the configuration
>> stage works fine. This seems like a bug to me, though: a cluster-wide
>> global variable called $MPI seems like a natural thing to exist on many
>> of these machines... should the Cactus build rely on such a variable
>> not existing?
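
Side note: the deprecation warning above points at the recommended 
route, namely configuring MPI through the thorn ExternalLibraries/MPI 
and its MPI_DIR option rather than through the flesh. A sketch of the 
corresponding optionlist fragment is below; the path is a placeholder, 
and $MPI may still need to be unset so that the flesh does not try to 
configure its own MPI:

   # optionlist fragment: point the ExternalLibraries/MPI thorn at Intel MPI
   MPI_DIR = /path/to/intel/impi   # placeholder; e.g. the value of
                                   # $I_MPI_ROOT, if the impi module sets it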
>> 
>> The other inconvenience is that unloading the impi module also removes
>> the mpirun command from $PATH, so one has to unload the module to
>> compile the code and then load it back in order to run it.
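
One way to avoid the unload/reload dance might be to strip $MPI from 
the environment only for the build, leaving the impi module (and hence 
mpirun) loaded in the shell. A sketch, assuming a bash-like shell and 
GNU env; "sim" and "mn4.cfg" below are placeholder names for the 
configuration and the optionlist:

   # configure and build with $MPI removed from the build environment only
   env -u MPI make sim-config options=mn4.cfg
   env -u MPI make sim

   # or equivalently, in a subshell, so the parent shell keeps $MPI:
   ( unset MPI && make sim )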
>> 
>> 2. I was not able to compile with any of the provided Intel compilers
>> (with gcc it worked fine). I know there are known issues with some
>> versions of the Intel compilers; is there some sort of list of Intel
>> compiler versions that are known to work? I could maybe ask the
>> technical support whether they could make those specific versions
>> available...
>> 
>> thanks,
>> Miguel

-- 
===========================
Dr. Helvi Witek
Marie-Curie Research Fellow
Dep. Fisica Quantica i Astrofisica
Universitat de Barcelona
===========================
_______________________________________________
Users mailing list
Users@einsteintoolkit.org
http://lists.einsteintoolkit.org/mailman/listinfo/users
