It gets complicated when you do both MPI and k-point
parallelization. In large calculations there are usually fewer k-points. Would
it be possible to test MPI with the local scratch without k-point
parallelization (i.e., running the k-points sequentially)? This would help to
mitigate the problems mentioned by Mi
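For such a test, the job distribution is controlled by the .machines file. A minimal sketch (hostnames and core counts are placeholders; the syntax follows the conventions described in the WIEN2k user's guide): with only a single "1:" job line, there is just one parallel job, so all k-points are worked through sequentially by that one MPI job.

```
# .machines sketch for an MPI-only test run (hostnames are placeholders):
# a single "1:" line = one MPI job; k-points then run sequentially through it
lapw0:node1:64
1:node1:64
granularity:1
extrafine:1
```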
Hello César,
To perform parallel calculations you do need a shared directory between
all nodes. As you have described '/home' appears to be a form of shared
storage.
What it is intended for is, of course, not known to us. If it is
shared, there is no direct reason it cannot function for wie
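Whether a directory really is shared can be checked quickly. A minimal sketch (assumes bash and passwordless ssh between nodes; the hostname `node2` is a placeholder): create a marker file on one node and look for it from another.

```shell
# Check whether /home is actually shared between nodes:
# create a marker file locally, then try to see it from another node.
marker=~/shared_test_$$
touch "$marker"
# On a shared filesystem the file is visible from node2 as well:
ssh -o BatchMode=yes -o ConnectTimeout=5 node2 "ls $marker" 2>/dev/null \
  || echo "not visible from node2 (or ssh failed)"
rm "$marker"
```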
Hi,
I'm doing some tests on the Memento cluster of the University of Zaragoza
on a TiC system with 100 k-points, using 4 nodes with 64 CPUs per node. It is a
system that does not share RAM or hard disks between nodes during
calculations. Initially the parallel computation with Wien2k stopped in the
fi
Dear Wien2k users,
A new version of wien2wannier (1.0-beta), the interface from Wien2k to
Wannier90, is available at
http://www.ifp.tuwien.ac.at/forschung/arbeitsgruppen/cms/software-download/wien2wannier/
The new version is tagged as a “beta” release until it has been more
thoroughly tested
You might have a look at http://www.wien2k.at/reg_user/benchmark/ , and
you can search the mailing list for OMP_NUM_THREADS. You will find several
instructive discussions there.
Stefaan
On 13/02/2014 9:40, shamik chakrabarti wrote:
Dear wien2k users,
we have successfully installed
Dear wien2k users,
we have successfully installed wien2k 13. We are using a
system with 16 CPUs.
I have set OMP_NUM_THREADS=16 by editing .bash_profile as
export OMP_NUM_THREADS=16
However, it is still using only 1 CPU out of 16.
So what could be the proper value of
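One common pitfall (a sketch, assuming bash): editing .bash_profile does not affect shells that are already open, so the variable must actually be exported in the shell that launches the run. Note also that OMP_NUM_THREADS only controls OpenMP threading (in WIEN2k this mainly affects the threaded linear-algebra routines); it is not the same as k-point or MPI parallelization, which is configured via the .machines file.

```shell
# After editing .bash_profile, either log in again or source it so the
# variable reaches the current shell:
source ~/.bash_profile 2>/dev/null || true   # no-op if the file is absent here
export OMP_NUM_THREADS=16                    # same line as in .bash_profile
echo "OMP_NUM_THREADS=$OMP_NUM_THREADS"      # prints OMP_NUM_THREADS=16
```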