[Wien] value of OMP_NUM_THREAD
Dear wien2k users, we have successfully installed wien2k 13 on a system with 16 CPUs. I have set OMP_NUM_THREADS=16 by adding "export OMP_NUM_THREADS=16" to .bash_profile. However, it is still using only 1 CPU out of 16. What would be the proper value of OMP_NUM_THREADS so that all 16 CPUs work together? Thanks in advance.
-- Shamik Chakrabarti, Senior Research Fellow, Dept. of Physics & Meteorology, Material Processing & Solid State Ionics Lab, IIT Kharagpur, Kharagpur 721302, INDIA
___ Wien mailing list Wien@zeus.theochem.tuwien.ac.at http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien SEARCH the MAILING-LIST at: http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html
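A common pitfall here, independent of Wien2k: editing .bash_profile only affects new login shells, so an already-running session (or a batch job started earlier) never sees the variable. A minimal sanity check, assuming bash, is to export the variable and confirm the current shell actually holds it:

```shell
#!/bin/sh
# Sanity check: export OMP_NUM_THREADS and confirm the running shell sees it.
# Editing ~/.bash_profile alone does not change an existing session; run
# "source ~/.bash_profile" or log in again for the change to take effect.
export OMP_NUM_THREADS=16
echo "OMP_NUM_THREADS=$OMP_NUM_THREADS"
```

Note also that OMP_NUM_THREADS only controls OpenMP-threaded code paths (for instance a threaded BLAS/LAPACK library, if Wien2k was linked against one); spreading a Wien2k run over many CPUs is normally done through k-point or MPI parallelization, which is configured separately and is not affected by this variable.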
[Wien] wien2wannier release 1.0-beta
Dear Wien2k users, A new version of wien2wannier (1.0-beta), the interface from Wien2k to Wannier90, is available at http://www.ifp.tuwien.ac.at/forschung/arbeitsgruppen/cms/software-download/wien2wannier/ The new version is tagged as a “beta” release until it has been more thoroughly tested. Nonetheless, it is fully functional and you are encouraged to use this version. Bug reports to wien2wann...@ifp.tuwien.ac.at will be appreciated. New features include:
* more flexible and powerful specification of initial projections (e.g., arbitrary rotations are supported)
* fixed handling of k-points for various lattice types
* wien2wannier may now be used under the terms of the GNU GPL
Please see the file ‘NEWS’ in the distribution for more information.
-- Elias Assmann (TU Wien) Wien2Wannier: maximally localized Wannier functions from linearized augmented plane waves http://www.ifp.tuwien.ac.at/forschung/arbeitsgruppen/cms/software-download/wien2wannier/
[Wien] Configuring SCRATCH variable for parallel computation
Hi, I'm doing some tests on the Memento cluster of the University of Zaragoza, on the TiC system with 100 k-points, using 4 nodes with 64 CPUs per node. It is a system that does not share RAM or hard disks between nodes during calculations. Initially the parallel computation with Wien2k stopped in the first cycle because of a file system problem: the variable $SCRATCH pointed to the local hard disks of each node used in the parallel computation. Fortunately I was able to finish the calculation by redirecting $SCRATCH to the /home directory, which is shared by all nodes. The calculation finished fine and the result is correct, but I think that something is wrong; Wien2k does not seem to be originally designed for a parallel calculation using /home as SCRATCH. Is it correct to use the /home directory as SCRATCH in Wien2k? Can this cause problems in the OS of Memento's cluster or in future wien2k calculations? In fact, I'm having other problems with wien2k on other systems, but I'm not sure whether they are caused by SCRATCH pointing to the /home directory or not. Thank you for your attention; any comment is appreciated. Sincerely,
Dr. César de la Fuente. Depto. de Física de la Materia Condensada. Edificio Torres-Quevedo EINA-Universidad de Zaragoza. C/María de Luna 3, 50018-Zaragoza (SPAIN).
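For reference, the two placements of $SCRATCH being contrasted here can be sketched as follows (a minimal illustration; the paths and the TMPDIR fallback are hypothetical, not taken from the Memento cluster):

```shell
#!/bin/sh
# Illustrative only: the two common choices for Wien2k's $SCRATCH.
#
# Per-node local disk: fast I/O, but files written on one node are
# invisible to the others, e.g.
#   export SCRATCH=/tmp/$USER/wien2k
#
# Shared filesystem: visible from every node, but typically slower
# and subject to /home quotas, e.g.
#   export SCRATCH=$HOME/wien2k_scratch
SCRATCH="${TMPDIR:-/tmp}/$USER/wien2k"   # hypothetical local-disk choice
mkdir -p "$SCRATCH"
echo "SCRATCH=$SCRATCH"
```

Which choice is safe depends on how the run is parallelized: files that only one process reads back can live on local disk, while anything the case directory (or another node) must see needs shared storage.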
Re: [Wien] Configuring SCRATCH variable for parallel computation
Hello César, To perform parallel calculations you do need a directory shared between all nodes. As you describe it, /home appears to be a form of shared storage. What it is intended for is of course not known to us. If it is shared, there is no direct reason it cannot work for Wien2k; however, the problems that might occur are:
- the connection to the shared storage is too slow, if it is not meant for transferring files back and forth during a calculation
- there is not enough space, if it is just meant for temporary storage and user settings
- /home is meant for the login system and not for heavy user I/O
Maybe contact the person who set up this cluster and ask what they recommend. Regards, Michael Sluydts
César de la Fuente wrote on 13/02/2014 17:09:
Re: [Wien] Configuring SCRATCH variable for parallel computation
It gets complicated when you do both MPI and k-point parallelization. In large calculations there are usually fewer k-points. Would it be possible to test MPI with the local scratch without k-point parallelization (i.e., with the k-points run sequentially)? This would help to mitigate the problems mentioned by Michael. Oleg
On Thu, Feb 13, 2014 at 11:15 AM, Michael Sluydts michael.sluy...@ugent.be wrote:
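For context on the distinction Oleg draws, Wien2k's parallel mode is steered by a .machines file in the case directory. A minimal sketch contrasting pure k-point parallelization with an MPI run is given below; the hostnames are illustrative, and the exact syntax should be checked against the Wien2k user's guide for your version:

```
# k-point parallelization: one line per parallel job (weight:host);
# each job can keep its scratch files on the node it runs on
1:node1
1:node2
granularity:1
extrafine:1

# MPI parallelization (hypothetical 16-core node): weight:host:ncores
# 1:node1:16
```

With k-points run sequentially (no 1:host lines for k-parallel jobs) and only MPI active, the local scratch disks can be exercised in isolation, which is what makes Oleg's test informative.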