Re: [SIESTA-L] undefined reference to `getarg_' undefined reference to `iargc_'
Hi, I'm no expert either, but according to the web, gfortran already has those functions built in. So, an easy workaround is: edit f2kcli.F90 and, right after the "MODULE F2KCLI" statement, type " #define GFORTRAN " (which means the module is not compiled at all). After this you'll get the denchar executable; hopefully it will also do its job (I didn't try it myself). Good luck, Roberto On Tue, 28 Jul 2009, ?? ? wrote: Dear Siesta users, on the page http://fisica.ehu.es/ag/siesta-extra/release.notes_2.0.2 there is a sentence, but when somebody tries to compile denchar with gfortran they immediately receive an error: /home/zhachuk/siesta-2.0.1/Src/f2kcli.F90:212: undefined reference to `getarg_' f2kcli.o: In function `__f2kcli__command_argument_count': /home/zhachuk/siesta-2.0.1/Src/f2kcli.F90:122: undefined reference to `iargc_' collect2: ld returned 1 exit status make: *** [denchar] Error 1 zhac...@solaris:~/siesta-2.0.1/Src>
[SIESTA-L] Reading different density matrices
Hello list, I'm using the "siesta as a subroutine" feature; I understand that the DM is kept in memory between calls to the siesta processes. My question: is it possible to clear the in-memory DM between siesta calls, so as to force reading it from file? Ok, it sounds awkward, but believe me, I know what I'm doing. I figured out that a possible solution would be to close and re-open the connection with each call, like restarting from fresh; cumbersome but doable. I wonder if there is a neater solution though ... is there? Thanks a lot, Roberto
Re: [SIESTA-L] K-grid cutoff problem
Hi, You must type " kgrid_cutoff ", not " kgrid cutoff ", just as it is written in the manual. Good luck. R. On Fri, 24 Jul 2009, Elham Beheshti wrote: Dear all, I am a new user of Siesta. I am working on graphene and zigzag carbon nanotubes. I have encountered a serious problem with defining the k-grid cutoff in my jobs. Here is a part of my input: kgrid cutoff 15.003 Ang But I have this in my output file: siesta: k-grid: Number of k-points = 1 siesta: k-grid: Cutoff (effective) = 2.460 Ang siesta: k-grid: Supercell and displacements siesta: k-grid: 0 1 0 0.000 siesta: k-grid: 1 0 0 0.000 siesta: k-grid: 0 0 1 0.000 Naive supercell factors: 441 Which gives a 2.46 Ang cutoff and just the gamma point!!! I tried kgrid_Monkhorst_Pack with a 31x31x1 grid for graphene, but from the outputs I can say that the effective k-grid cutoff is totally independent of the mesh I've defined in my input and it only changes by changing the length of the vector in the Z direction (which is perpendicular to the graphene plane) in my unit cell. Could anybody guide me on why I get these weird outputs?! Thanks in advance, Elham Beheshti, Department of Electrical Engineering, University of British Columbia
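For reference, the fix amounts to a single corrected line in the fdf input (same value as in the original input; only the underscore in the keyword name changes):

kgrid_cutoff 15.003 Ang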
[SIESTA-L] Velocity units in XV file
Hello list, This is the first time I launch a dynamics calculation with Siesta, and I want to start from a definite point in phase space. From the manual I understand one should use a "previous" *.XV file together with " MD.UseSaveXV .true. ". What are the velocity units in the *.XV file? Bohr/s? Thanks, Roberto
Re: [SIESTA-L] Negative vibrational frequencies
Hello Andrei, Marcos, and the rest of the list, I'm back, after a long journey, for the benefit of those who may build upon the experience (mistakes) of others ... The many imaginary frequencies I got before were the result of a silly mistake: I had pasted the FC matrices of two independent runs in the wrong order; apologies for wasting your brain power :( . After the fix, a few imaginary frequencies still remained though. The solution was to use a smaller DM.Tolerance, currently 0.0001, compared to 0.001 before. The latter was fine for a perfect-lattice two-atom cell (compared to the literature), but apparently not good enough for a larger box containing a single vacancy. The side effect of this stricter convergence criterion, however, was to roughly double the computing time ... so that it becomes hardly practical as a general procedure :( ... but that's another story ... Cheers, Roberto
Re: [SIESTA-L] Negative vibrational frequencies
Hi Andrei, Yes, I meant the system is well relaxed, Max(forces) < 0.02 eV/Ang, and the same holds for the total sum. On the other hand, the MeshCutoff I'm using, 450 Ry, based on many previous runs, gave me no reason to suspect "important" eggbox effects; though, true, I've never performed a systematic test on the subject. I'll test for the box's uniform displacement, thanks for the suggestion. Also, my box contains defects, thus anharmonicity might be an issue. I should agree, however, that if atomic displacements of some 0.02 Ang do impact the frequencies, the problem may be ill posed from the beginning. Moreover, if this happens together with a bad MeshCutoff choice, intuitively I'd say the situation can only become worse ... Cheers, Roberto On Wed, 15 Apr 2009, apost...@uni-osnabrueck.de wrote: Dear Andrei Thanks a lot for your answer; unfortunately you've confirmed my fears on the real meaning of negative frequencies. The forces level of the configuration is not larger than 0.02 eV/Ang, which is half the default (zero) value. I thus expected to be on the safe side, but as you suggest this is apparently not the case ;( . Dear Roberto: what do you mean by "The forces level of the configuration is not larger than 0.02 eV/Ang, which is half the default (zero) value"? That you relaxed your system well before doing the phonons? That's OK, but you should really check MeshCutoff first. What you should care about is that the sum of forces over all atoms remains small enough (typically within +/- 0.1 eV/Ang), and this independently of a uniform displacement of all atoms. Making this eggbox test fixes your necessary value of MeshCutoff. Perhaps the default displacement, 0.04 Bohr, is also a bit too large? This is about how anharmonic the vibrations are. Usually this parameter does not change much, and in any case it cannot be responsible for imaginary frequencies. Best regards Andrei
Re: [SIESTA-L] Negative vibrational frequencies
Dear Andrei, Thanks a lot for your answer; unfortunately you've confirmed my fears on the real meaning of negative frequencies. The forces level of the configuration is not larger than 0.02 eV/Ang, which is half the default (zero) value. I thus expected to be on the safe side, but as you suggest this is apparently not the case ;( . Perhaps the default displacement, 0.04 Bohr, is also a bit too large? Anyway, I'll redo the calculations playing with the MeshCutoff then. Thanks again, Roberto On Wed, 15 Apr 2009, apost...@uni-osnabrueck.de wrote: Dear Roberto, the frequencies appearing in VIBRA as negative are in fact imaginary. The reason for their appearance is overly noisy forces (big numerical noise in the forces because of an insufficient MeshCutoff) and/or calculation of phonons too far from equilibrium (insufficient structure relaxation prior to the phonon calculation). In a calculation of Gamma phonons you'd have three acoustic frequencies which must be close enough to zero; numerically they may deviate a bit, by ~0.1; if some of them are negative, this is no problem. Otherwise, big negative (imaginary) frequencies mean a problem. Usually you should check MeshCutoff, check the relaxation and repeat the phonon calculation. Sincerely, Andrei Postnikov
[SIESTA-L] Negative vibrational frequencies
Hello everybody, Perhaps some of you can give a quick answer to the following question. I'm computing the phonon spectrum at the \gamma point for a rather big unit cell (some 50 atoms) using the VIBRA package. The point is that about half of the frequencies are coming out NEGATIVE, though not IMAGINARY. This is somewhat unexpected to me, though not impossible, because after all the solutions are combinations of $ \exp(\pm \omega t) $. I wonder then if those negative numbers really mean imaginary, or in other words, should I worry about the result? Thanks a lot. Cheers, Roberto
[SIESTA-L] Bug in VIBRA ?
Hello everybody, I believe I have found a bug in VIBRA 0.2 :( . There seems to be an inconsistency between k-lines given in fractional coordinates and the lattice structure specification. Specifically, for phonons in the hcp structure, the following LatticeConstant 3.1433 Ang %block LatticeParameters 1. 1. 1.6226 90. 90. 120. %endblock LatticeParameters ... BandLinesScale ReciprocalLatticeVectors %block BandLines 1 0.0 0.0 0.0 \Gamma 20 0.3 0.3 0.0 K 10 0.5 0.0 0.0 M 20 0.0 0.0 0.0 \Gamma 20 0.0 0.0 0.5 A %endblock BandLines does not work correctly, whereas replacing the 1st block above with the next one works fine: LatticeConstant 3.1433 Ang %block LatticeVectors 1.0 0.0 0.0 -0.5 0.866025404 0.0 0.0 0.0 1.6226 %endblock LatticeVectors "Does not work correctly" means, e.g., that the two k-lines 1 0.0 0.0 0.0 \Gamma 10 0.5 0.0 0.0 M 1 0.0 0.0 0.0 \Gamma 10 0.0 0.5 0.0 M that should yield the same curve in fact DO NOT. If anyone is interested I can provide the needed files. That's it, bye Roberto
Re: [SIESTA-L] changes of energy for a system under strain
Hi Julie, The same happened to us for a metallic system. The cure seems to be the use of a quite dense k-point grid (like 3,000-4,000 k-points for the single-atom primitive cell) and keeping the Monkhorst-Pack grid fixed across the different deformations. Comments from the list appreciated. Regards, Roberto On Fri, 20 Feb 2009, Julie Smart wrote: Hello all, I have a question about the changes of the total energy of a system under strain. I have first relaxed my system with "Variablecell false" and "MaxForceTol 0.04 eV/Ang", and then I turned on "Variablecell true" and decreased "MaxForceTol" to 0.01 eV/Ang to have the lattice parameters optimized as well. When I change the lattice parameter in the "z direction" to put strain on the system, sometimes the energy goes lower than the energy of the relaxed system!!! It may also happen when I apply the strain in further steps. I mean it first increases and then I see the energy decrease!!! This is occurring even though I have tried the optimization of my system checking different "energy shifts" and setting this parameter to the one that makes the energy of my system go the lowest! The basis and the pseudo have also been checked very carefully. Please let me know where I am going wrong. Regards, Julie
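A sketch of what such a dense, fixed grid could look like in the fdf input (a 15x15x15 Monkhorst-Pack mesh gives 3375 k-points for a single-atom cell; the numbers are purely illustrative):

%block kgrid_Monkhorst_Pack
  15   0   0   0.0
   0  15   0   0.0
   0   0  15   0.0
%endblock kgrid_Monkhorst_Pack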
Re: [SIESTA-L] Obviously poor PARALLEL performance compared to VASP
Hi, Use the keyword "ParallelOverK" and see. The default parallelization is over orbitals, which is less efficient. Regards, Roberto On Wed, 18 Feb 2009, Mehmet Topsakal wrote: Hi, I'm an experienced user of VASP. Nowadays I'm trying to learn Siesta. During my simple tests I have realized that the parallelization of Siesta is clearly poorer than VASP's. I'm using the latest Intel ifort (11) and MKL 10.1. My system is a quad-core Xeon 2.33 with InfiniBand (4 cores per node). I've chosen "siesta-2.0.2/Tests/si64/" as input. To see more realistic effects, I have increased the k-points, distorted 1 atom and increased the mesh cutoff. Going from 2 to 4 to 8 CPUs, the parallelization of VASP is nearly linear. However, Siesta's performance is poor. The CLOCK results are as below. 2CPU - Start of run 0.000 -- end of scf step 65.702 -- end of scf step 123.005 -- end of scf step 179.978 -- end of scf step 236.833 -- end of scf step 293.613 -- end of scf step 350.316 -- end of scf step 407.082 -- end of scf step 463.780 -- end of scf step 520.457 -- end of scf step 577.072 -- end of scf step 633.713 -- end of scf step 690.372 -- end of scf step 747.105 -- end of scf step 803.680 -- end of scf step 860.304 -- end of scf step 916.886 -- end of scf step 920.865 --- end of geometry step 920.887 4CPU - Start of run 0.000 -- end of scf step 52.757 -- end of scf step 99.481 -- end of scf step 145.754 -- end of scf step 191.974 -- end of scf step 238.180 -- end of scf step 284.612 -- end of scf step 330.736 -- end of scf step 377.200 -- end of scf step 423.579 -- end of scf step 469.623 -- end of scf step 515.901 -- end of scf step 561.912 -- end of scf step 608.275 -- end of scf step 654.488 -- end of scf step 700.990 -- end of scf step 747.432 -- end of scf step 749.578 --- end of geometry step 749.604 End of run 749.843 8CPU - Start of run 0.000 -- end of scf step 57.490 -- end of scf step 106.014 -- end of scf step 154.452 -- end of scf step 202.971 -- end of scf step 251.328 -- end of scf step 299.604 -- end of scf step 348.336 -- end of scf step 396.550 -- end of scf step 445.203 -- end of scf step 493.459 -- end of scf step 541.900 -- end of scf step 590.203 -- end of scf step 638.980 -- end of scf step 687.550 -- end of scf step 735.906 -- end of scf step 784.315 -- end of scf step 785.593 --- end of geometry step 785.667 End of run 786.080 I've done more tests and observed no difference. Even the serial version is faster than some parallel jobs :(. Am I doing something WRONG? Thanks. I'm sending my input and output file below. # - # FDF for a cubic c-Si supercell with 64 atoms # # E. Artacho, April 1999 # - SystemName 64-atom silicon SystemLabel si64 NumberOfAtoms 64 NumberOfSpecies 1 %block ChemicalSpeciesLabel 1 14 Si %endblock ChemicalSpeciesLabel PAO.BasisSize SZ PAO.EnergyShift 300 meV LatticeConstant 5.430 Ang %block LatticeVectors 2.000 0.000 0.000 0.000 2.000 0.000 0.000 0.000 2.000 %endblock LatticeVectors %block kgrid_Monkhorst_Pack 7 0 0 0.0 0 7 0 0.0 0 0 7 0.0 %endblock kgrid_Monkhorst_Pack MeshCutoff 100.0 Ry MaxSCFIterations 50 DM.MixingWeight 0.3 DM.NumberPulay 3 DM.Tolerance 1.d-4 DM.UseSaveDM SolutionMethod diagon ElectronicTemperature 25 meV MD.TypeOfRun cg MD.NumCGsteps 0 MD.MaxCGDispl 0.1 Ang MD.MaxForceTol 0.04 eV/Ang AtomicCoordinatesFormat ScaledCartesian %block AtomicCoordinatesAndAtomicSpecies 0.1000 0.1000 0.1000 1 # Si 1 0.250 0.250 0.250 1 # Si 2 0.0000
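In fdf terms the suggestion above is a single line (the full keyword name, as used elsewhere in this archive, is Diag.ParallelOverK):

Diag.ParallelOverK .true.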
Re: [SIESTA-L] Errors in PARALLEL run: collective abort of all ranks
Hi, I guess this is because 32 nodes are too many for such a small system. Have you tried something like 4 or so nodes? Regards, Roberto On Wed, 18 Feb 2009, ?? ? wrote: Dear SIESTA users, could you please help me to resolve a problem with a parallel run. It works fine with Diag.ParallelOverK .true. But in default mode (parallel over orbitals) I have errors. The test system is very small, just a slab of 8 Si atoms. Regards, Ruslan The errors are as follows: Siesta Version: siesta-2.0.1 Architecture : x86_64-unknown-linux-gnu--Intel Compiler flags: /mnt/storage/home/siesta/siesta-2.0.1/mpif90_siesta -g PARALLEL version * Running on 32 nodes in parallel Start of run: 19-FEB-2009 0:12:48 ... some info, then: * Maximum dynamic memory allocated = 1 MB siesta: == Begin MD step = 1 == superc: Internal auxiliary supercell: 3 x 6 x 1 = 18 superc: Number of atoms, orbitals, and projectors: 144 1872 2304 outcell: Unit cell vectors (Ang): 7.663884 0.00 0.00 0.00 3.831942 0.00 0.00 0.00 40.00 outcell: Cell vector modules (Ang) : 7.663884 3.831942 40.00 outcell: Cell angles (23,13,12) (deg): 90. 90. 90. outcell: Cell volume (Ang**3): 1174.7023 InitMesh: MESH = 72 x 36 x 360 = 933120 InitMesh: Mesh cutoff (required, used) = 200.000 223.865 Ry * Maximum dynamic memory allocated = 41 MB rank 21 in job 1 cn03_58443 caused collective abort of all ranks exit status of rank 21: return code 13 rank 19 in job 1 cn03_58443 caused collective abort of all ranks exit status of rank 19: killed by signal 9 rank 18 in job 1 cn03_58443 caused collective abort of all ranks exit status of rank 18: killed by signal 9 rank 17 in job 1 cn03_58443 caused collective abort of all ranks exit status of rank 17: killed by signal 9 rank 16 in job 1 cn03_58443 caused collective abort of all ranks exit status of rank 16: killed by signal 9 rank 10 in job 1 cn03_58443 caused collective abort of all ranks exit status of rank 10: return code 13 rank 9 in job 1 cn03_58443 caused collective abort of all ranks exit status of rank 9: return code 13 rank 6 in job 1 cn03_58443 caused collective abort of all ranks exit status of rank 6: killed by signal 9 rank 3 in job 1 cn03_58443 caused collective abort of all ranks exit status of rank 3: killed by signal 9 rank 2 in job 1 cn03_58443 caused collective abort of all ranks exit status of rank 2: return code 13 rank 1 in job 1 cn03_58443 caused collective abort of all ranks exit status of rank 1: return code 13
Re: [SIESTA-L] xyz format input problem
Hi, In your input file I count 25 atoms, whereas some lines earlier you say to read 27 ... Regards, R. On Fri, 13 Feb 2009, Sridhar Neelamraju wrote: Hello all, I have just started using Siesta. I am attaching my input file. My system is basically an organic molecule with gold clusters on either side (27 atoms in total) and I want to do a PDOS calculation for this. Now, the problem is that siesta does not like my input file. I get an error from Coor.F saying "At line 182 of Coor.F Fortran runtime error: Bad real number in item 1". Line 182 of the code reads some variable called "iunit", an integer! I assume not being able to read this is causing the problem. It does recognise the xyz format. I have compared the AtomicCoordinatesAndAtomicSpecies block with the samples given. I don't see what is going wrong. Could this be because of the value I give to the LatticeConstant? Any help is much appreciated. Also, does it help to use the z-matrix formulation? I find it easier to work with xyz formats. Thanks Sridhar
Re: [SIESTA-L] Fwd: crystal energy calculation
Hi, ... also, in GaN for example, again we have 2 Ga and 2 N in the unit cell, right, so how could I specify the basis I have for Ga, for example, given that I have only mentioned the ghost atom with a negative sign of the atomic number? All in all, how could I have both a real Ga and a ghost Ga basis? ... Well, you could cheat with the atomic species. Just say you have Ga1 and Ga2, both sharing the same pseudo and basis (as real Ga); Ga1 gets +Z and Ga2 gets -Z. How about that? ... ;) Bye, bye, Roberto
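A sketch of how that two-species trick could look in the fdf input (the species indices and the ghost label are illustrative, and a pseudopotential file matching the ghost label would still need to be present, e.g. as a copy of the real one; the negative atomic number is what marks the ghost):

%block ChemicalSpeciesLabel
  1   31  Ga         # real Ga
  2  -31  Ga_ghost   # ghost Ga: same pseudo and basis, negative atomic number
  3    7  N
%endblock ChemicalSpeciesLabel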
Re: [SIESTA-L] Fwd: crystal energy calculation
Hi, I've told you: I never did that type of calculation. BUT, if I remember correctly, if atom X gets atomic number "n", then the corresponding ghost atom is specified with "-n". No less, no more; you shouldn't worry about the ghost atom basis, Siesta should take care of it. Search for "ghost" in the manual, it should be there. Regards, Roberto On Thu, 12 Feb 2009, Sarah Lebedev wrote: Hello, Thanks for replying. Could you please let me know: 1. how you specify the basis of the ghost atom in the input file? 2. If we have 3 different types of atoms in the unit cell, let's say BaTiO3, how do we consider the ghost atoms then? Regards, Sarah ...
Re: [SIESTA-L] Fwd: crystal energy calculation
Hello Sarah, I never did that type of calculation, so please, sages out there, correct me if I'm wrong ... Let's say you want to calculate Fe, which is bcc. 1st, perform a calculation with the standard, cubic, two-atom unit cell. 2nd, perform the same calculation (keeping lattice parameter, basis, etc.) but putting ghosts at the vertices, namely, keeping only the atom at the center. Finally, compare the two and you'll get the cohesive energy. Best, Roberto On Thu, 12 Feb 2009, Sarah Lebedev wrote: *I am very sorry to re-send my email, but I really need to know the answer.* Dear Siesta users, I am sorry, I have not got my answer reading the whole archive about this subject. Therefore, I hope you would kindly let me know the answer to the following question: We all know that for crystals we have the following relation: *Cohesive Energy (CE) = (energy of free atom) - (energy of atom in a crystal).* I am working on GaN bulk and nanostructures. 1. Regarding the energy of the free atom, as I read the archive, it seems that we have to consider the ghost atoms surrounding the main atom to get the energy of the free atom?! Or is it enough to have the right basis of the atom and calculate its energy? 2. Energy of atom in a crystal: when we calculate the energy of the bulk, how do we then get the energy of the atom in the structure? 3. I would be grateful if you let me know the additional terms I have to consider in order to calculate the cohesive energy of a slab too. Regards, Sarah
Re: [SIESTA-L] Phonons with Siesta
Hello Lydia, Please check the attachment for a response I got from Andrei P. regarding the structure of the *.FC file. On Fri, 12 Dec 2008, Lydia Nemec wrote: ... restart function. I don't know how many atoms siesta will move ... All of them, back and forth, unless you tell it which ones. Have a look at the manual. This allows you to continue FC runs. ... It seems to me that SIESTA completely forgets about the symmetries of the ... Correct, SIESTA is not meant for using symmetries. They're unimportant within its context. ... of the block. Can it be that I just need to calculate the six atoms of the middle unit cell, one after another, assemble the resulting FC files into one FC file, and Vibra will do the rest for me? ... No, if you do that, phonons of wavelength longer than your 6-atom cell will be lost (and even the remaining ones will probably be poorly evaluated) ... Can I use the order-N solution method with a phonon (FC and Vibra) calculation? I think the order-N algorithm just calculates the gamma point; if so, wouldn't it be useless for a phonon calculation, as I will need a good k-grid for reasonable results? ... You're confusing phonon k's with electron k's. So no, it wouldn't be useless. Regards, Roberto
Re: [SIESTA-L] phonon spectrum
Hi, I don't know anything about the Vibra tools, but I guess you can always cheat those tools by saying your system is composed of only two atoms. Of course, you also must supply the correct FC matrix for those two, extracted from Siesta's *.FC file. Best, Roberto On Fri, 24 Oct 2008, Zhanyu Ning wrote: Dear all, Is there anyone who has experience with phonon calculations in siesta? I'm trying to calculate the phonon spectrum with siesta. I only want to calculate the forces over two atoms, keeping all the others fixed. In siesta, I can specify this by using MD.FCfirst and MD.FClast; then I have the system.FC file. However, it seems the analysis tool in the Utils/Vibra/ directory can only deal with the force matrix of all the atoms in the system. I'm wondering, is there any way to calculate the phonon modes just based on these two atoms? Thanks for your consideration! Zhanyu
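A minimal fdf sketch of the setup described in the question (atom indices are illustrative; only atoms 1 and 2 are displaced during the force-constant run):

MD.TypeOfRun  FC
MD.FCfirst    1
MD.FClast     2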
Re: [SIESTA-L] I met this problem when compile parallel version ofsiesta-2.0
Hi Jonathan, You're not able to link in the BLACS library. The double slash here: .../LIB//blacs* ... /export/home/student/code/BLACS/LIB//blacsCinit_MPI-LINUX-0.a /export/home/student/code/BLACS/LIB//blacsF77init_MPI-LINUX-0.a /export/home/student/code/BLACS/LIB//blacs_MPI-LINUX-0.a ... is certainly a mistake. It comes from your arch.make ... BLACS=/export/home/student/code/BLACS/LIB/ ... Drop the last slash (after "LIB") and retry. Regards, Roberto
Re: [SIESTA-L] Structure of *.FC file
Thank you Andrei for the detailed answer. The key points were the 6N-line block and the eV/Ang^2 units (!). I was trying with "wc -l *.FC" plus some guesses, but couldn't come up with a really convincing answer. Best, Roberto On Thu, 23 Oct 2008, [EMAIL PROTECTED] wrote: ... Dear Roberto, the FC file contains -F/D (F: forces; D: displacements) in units of eV/(Ang^2). ...
[SIESTA-L] Structure of *.FC file
Hello everybody, I'm computing the FC matrix for the first time :), and by now I'm sure the calculation will hit the wall-clock time limit. I see Siesta moving each atom back and forth along the three coordinate axes, so I can guess what it's doing. But what exactly is the structure of the *.FC file being written? I need to know at least in order to continue the current run. Thanks a lot, Roberto
Re: [SIESTA-L] Qtot convergence pb is back
Thank you Eduardo for your time and commitment! Unfortunately I had already tried all of that, with no success. I'm confident however that I'll find some solution; then I'll post it to the list for the benefit of the other users. Cheers, Roberto Hello, Sorry, this is another problem. It has nothing to do with the master/slave calculation, it happens in "normal" siesta runs. Try using a different diagonalization option: Diag.DivideAndConquer .false. Diag.AllInOne .true. ...
Re: [SIESTA-L] Qtot convergence pb is back
Hello Eduardo, But the slave Siesta is not waiting: the calculation proceeds and spits out the "Qtot WARNING". 1st CALL siesta: iscf Eharris(eV) E_KS(eV) FreeEng(eV) dDmax Ef(eV) siesta:1 -62761.6906 -62761.6902 -62761.6902 0.0007 -2.8547 timer: Routine,Calls,Time,% = IterSCF12967.293 93.61 elaps: Routine,Calls,Wall,% = IterSCF1 496.914 93.59 siesta: E_KS(eV) = -62761.6902 2nd CALL siesta: iscf Eharris(eV) E_KS(eV) FreeEng(eV) dDmax Ef(eV) siesta:1 -62232.7174 -62761.6903 -62765.8366 0.1123 -2.8559 siesta:2 -62232.7194 -62761.7688 -62765.9673 0.1089 -2.8559 siesta: WARNING: Qtot, Tr[D*S] = 587.00 565.368007 siesta:3 -62232.7922 -62764.3826 -62768.5811 0.0955 -2.8548 This is gfortran 4.2.3, which includes a FLUSH intrinsic, so as long as Siesta calls it there shouldn't be problems. I positively know that pipe communication was broken in 4.1 (it simply didn't work). THAT problem however is gone in 4.2. My other boxes use 4.3.x, thus I thought of upgrading the compiler, but I don't really like the idea ... and it may not solve the problem either :/. I'm inclined to think there are memory issues somewhere, namely, the Siesta "state" is perhaps different between the two calls. Sort of a variable that should be zero really is zero in the 1st call, but not in the 2nd because of the effect of the previous one. Best, Roberto On Wed, 22 Oct 2008, Eduardo Anglada wrote: Hello Roberto, Then the problem is that your compiler buffers I/O. As the buffer isn't full there is no flush of the communication channel and the slave siesta waits forever. Almost all compilers have an option which turns off the buffering. Best, Eduardo
Re: [SIESTA-L] Qtot convergence pb is back
Hi Eduardo, Sorry for my misleading use of "run", I should have said "call" instead: It is the 2nd call to siesta_forces (within a series between siesta_launch and siesta_quit) that doesn't work. And the problem is only with my new Linux box; older ones run fine ... :/ . Best, Roberto Dear Roberto, Do you mean that restarting doesn't work? If this is the problem, remove the *.forces and *.coords files. Best, Eduardo 1st CALL siesta: iscf Eharris(eV) E_KS(eV) FreeEng(eV) dDmax Ef(eV) siesta:1 -62761.6906 -62761.6902 -62761.6902 0.0007 -2.8547 timer: Routine,Calls,Time,% = IterSCF12967.293 93.61 elaps: Routine,Calls,Wall,% = IterSCF1 496.914 93.59 siesta: E_KS(eV) = -62761.6902 2nd CALL siesta: iscf Eharris(eV) E_KS(eV) FreeEng(eV) dDmax Ef(eV) siesta:1 -62232.7174 -62761.6903 -62765.8366 0.1123 -2.8559 siesta:2 -62232.7194 -62761.7688 -62765.9673 0.1089 -2.8559 siesta: WARNING: Qtot, Tr[D*S] = 587.00 565.368007 siesta:3 -62232.7922 -62764.3826 -62768.5811 0.0955 -2.8548
Re: [SIESTA-L] Error of SIESTA parallel compile
Ok, but there's a Makefile that builds it; try "make" there, then. I don't know why your (complex) arch.make doesn't take care of that already; it should come out automatically. Bye. Roberto On Tue, 21 Oct 2008, HeeSung Choi wrote: I looked into ./Src/MPI but there is no mpi_siesta.mod in there. Thanks. HeeSung
Re: [SIESTA-L] Error of SIESTA parallel compile
Hi, Look into ./Src/MPI , it should be there. It's a module built from mpi.F . Good luck. Roberto. On Tue, 21 Oct 2008, HeeSung Choi wrote: Dear all, I am trying to compile the siesta 2.0 parallel version. But I don't know how to solve the following error. What is mpi_siesta.mod? How do I generate it? Compilation architecture to be used: pgf90 If this is not what you want, create the right arch.make file using the models in Sys Hit ^C to abort... ==> Incorporating information about present compilation (compiler and flags) make[1]: Entering directory `/net/jj/ph/u1/hsc081000/siesta-2.0/Src' pgf90 -c -g -O0 -DCDF -DMPI compinfo.F90 make[1]: Leaving directory `/net/jj/ph/u1/hsc081000/siesta-2.0/Src' [EMAIL PROTECTED] -f compinfo.F90 pgf90 -c -g -O0 -DCDF -DMPI sys.F PGF90-F-0004-Unable to open MODULE file mpi_siesta.mod (sys.F: 24) PGF90/x86-64 Linux 7.1-2: compilation aborted make: *** [sys.o] Error 2 == Here is my arch.make: .SUFFIXES: .SUFFIXES: .f .F .o .a .f90 .F90 SIESTA_ARCH=pgf90 # SIESTA_HOME=/net/jj/ph/u1/hsc081000/siesta-2.0/Src BLACS_HOME= /net/jj/ph/u1/hsc081000/local/BLACS LAPACK_HOME=/net/jj/ph/u1/hsc081000/local/lapack-3.1.1 SCALAPACK_HOME= /net/jj/ph/u1/hsc081000/local/scalapack-1.8.0 MPI_HOME= /usr/mpi/pgi/mvapich-1.0.0/lib NETCDF_HOME=/net/jj/ph/u1/hsc081000/local/netcdf-4.0 # FC=pgf90 FC_ASIS=$(FC) # #FFLAGS= -O3 #-traceback FFLAGS= -g -O0 #-traceback #FFLAGS_EXTRA=-I$(MPI_HOME) -I$(NETCDF_HOME)/includes #FFLAGS_MPI= -I$(MPI_HOME) FFLAGS_DEBUG= -g -O0 RANLIB= echo LDFLAGS= #-static #-Vaxlib #SYS=nag SYS=bad SP_KIND=4 DP_KIND=8 KINDS=$(SP_KIND) $(DP_KIND) # netcdf NETCDF_INTERFACE=libnetcdf.a NETCDF_INCLUDE=$(NETCDF_HOME)/includes DEFS_CDF=-DCDF # mpi #MPI_INTERFACE=libmpichf90.a MPI_INTERFACE=libmpi_f90.a MPI_INCLUDE=/usr/mpi/gcc/mvapich-1.0.0/include MPI_LIBS=/usr/mpi/gcc/mvapich-1.0.0/lib DEFS_MPI=-DMPI # HOME_LIB=/net/jj/ph/u1/hsc081000/siesta-2.0/Src/Libs BLAS_LIBS=$(LAPACK_HOME)/libblas.a \ BLACS_LIBS=$(BLACS_HOME)/libblacsF77init.a $(BLACS_HOME)/libblacsCinit.a $(BLACS_HOME)/libblacs.a \ LAPACK_LIBS=$(LAPCAK_HOME)/liblapack.a \ SCALAPACK_LIBS=$(SCALAPACK_HOME)/libscalapack.a \ COMP_LIBS= NETCDF_LIB=$(NETCDF_HOME)/libsrc/.libs/ -lnetcdf LIBS=-L$(HOME_LIB)$(SCALAPACK_LIBS)$(BLACS_LIBS)$(LAPACK_LIBS)$(BLAS_LIBS)$(NETCDF_LIBS)$(MPI_LIBS) # Thanks. HeeSung
Re: [SIESTA-L] ZnO modeling
Hi, Check DM.MixingWeight; probably a small value is needed, like 0.05 or so. Also check DM.NumberPulay and DM.NumberBroyden; something like 4 should be reasonable. Bye, bye, Roberto On Tue, 21 Oct 2008, Ravi Agrawal wrote: Hi, I am trying to use SIESTA to model the ZnO bulk structure. However, I ran into convergence issues. The dDmax value does not seem to decrease beyond a certain value. The output looks like this: siesta: 141-3882.8445-3868.6803-3868.6946 1.5756 -3.4114 siesta: 142-3882.8432-3868.6789-3868.6907 1.5591 -3.4098 siesta: 143-3882.8447-3868.6803-3868.6947 1.5759 -3.4113 siesta: 144-3882.8438-3868.6789-3868.6906 1.5593 -3.4097 siesta: 145-3882.8444-3868.6803-3868.6946 1.5757 -3.4114 siesta: 146-3882.8432-3868.6789-3868.6907 1.5592 -3.4098 siesta: 147-3882.8447-3868.6803-3868.6947 1.5760 -3.4113 siesta: 148-3882.8438-3868.6789-3868.6906 1.5594 -3.4097 siesta: 149-3882.8444-3868.6803-3868.6946 1.5759 -3.4114 siesta: 150-3882.8432-3868.6789-3868.6907 1.5594 -3.4098 siesta: 151-3882.8447-3868.6803-3868.6947 1.5761 -3.4113 siesta: 152-3882.8438-3868.6789-3868.6906 1.5595 -3.4097 siesta: 153-3882.8444-3868.6803-3868.6946 1.5760 -3.4114 siesta: 154-3882.8432-3868.6789-3868.6907 1.5595 -3.4098 siesta: 155-3882.8447-3868.6803-3868.6947 1.5763 -3.4113 siesta: 156-3882.8438-3868.6789-3868.6906 1.5597 -3.4097 siesta: 157-3882.8444-3868.6803-3868.6946 1.5761 -3.4114 siesta: 158-3882.8432-3868.6789-3868.6907 1.5596 -3.4098 siesta: 159-3882.8447-3868.6803-3868.6947 1.5764 -3.4113 siesta: 160-3882.8438-3868.6789-3868.6906 1.5598 -3.4097 siesta: 161-3882.8444-3868.6803-3868.6946 1.5763 -3.4114 siesta: 162-3882.8432-3868.6789-3868.6907 1.5598 -3.4098 siesta: 163-3882.8447-3868.6803-3868.6947 1.5765 -3.4113 siesta: 164-3882.8438-3868.6789-3868.6906 1.5599 -3.4097 siesta: 165-3882.8444-3868.6803-3868.6946 1.5764 -3.4114 siesta: 166-3882.8432-3868.6789-3868.6907 1.5599 -3.4098 siesta: 167-3882.8447-3868.6803-3868.6947 1.5767 -3.4113 siesta: 168-3882.8438-3868.6789-3868.6906 1.5601 -3.4097 siesta: 169-3882.8444-3868.6803-3868.6946 1.5765 -3.4114 I am also copying the input file below. Any help will be appreciated. SystemName ZnO Crystal SystemLabel zno NumberOfAtoms 4 NumberOfSpecies 2 MeshCutoff 500 Ry %block ChemicalSpeciesLabel 1 30 Zn # Species index, atomic number, species label 2 8 O %endblock ChemicalSpeciesLabel AtomicCoordinatesFormat Ang # c = 5.20658; a = 3.249925; u = 0.389 LatticeConstant 5.20658 Ang #%block LatticeParameters # 1.0811 1.0811 1. 90 90 60 #%endblock LatticeParameters %block LatticeVectors 1.081139 0.00 0.00 0.540569 0.936294 0.00 0.00 0.00 1.00 %endblock LatticeVectors %block AtomicCoordinatesAndAtomicSpecies 2.81452 1.62496 2.60329 1 2.81452 1.62496 4.62865 2 5.62904 3.24993 0.0 1 5.62904 3.24993 2.02536 2 %endblock AtomicCoordinatesAndAtomicSpecies kgrid_cutoff 5.0 Bohr maxSCFIterations 200
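In fdf terms, the suggestion above amounts to something like the following (the values are only indicative starting points):

DM.MixingWeight  0.05
DM.NumberPulay   4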
Re: [SIESTA-L] Qtot convergence pb is back
Hi again guys, Yeah I know, I promised the previous one to be the last question, but I'm almost there; and of course you have the right not to answer :-) ... The problem has to do with using the "siesta_forces" interface (i.e. communication through pipes): a standard parallel siesta job runs fine. So it's like the 2nd run (i.e. 2nd pipe read) is not equivalent to the 1st one ... garbage kept somewhere? Cheers, Roberto ... Bottom line, it is still a weak indicator. It could be anything related to the FFTs, the diagonaliser and many more bits. ... 1st run siesta: iscf Eharris(eV) E_KS(eV) FreeEng(eV) dDmax Ef(eV) siesta:1 -62761.6906 -62761.6902 -62761.6902 0.0007 -2.8547 timer: Routine,Calls,Time,% = IterSCF12967.293 93.61 elaps: Routine,Calls,Wall,% = IterSCF1 496.914 93.59 siesta: E_KS(eV) = -62761.6902 2nd run siesta: iscf Eharris(eV) E_KS(eV) FreeEng(eV) dDmax Ef(eV) siesta:1 -62232.7174 -62761.6903 -62765.8366 0.1123 -2.8559 siesta:2 -62232.7194 -62761.7688 -62765.9673 0.1089 -2.8559 siesta: WARNING: Qtot, Tr[D*S] = 587.00 565.368007 siesta:3 -62232.7922 -62764.3826 -62768.5811 0.0955 -2.8548
Re: [SIESTA-L] Qtot convergence pb is back
Hello guys, Thanks for your prompt replies. I've tried all of them (including Salvador's suggestion of changing "nitmax" to 1000 in fermid.F) and none of them worked ... :( . Apparently, however, there's a problem with my NEW Linux box, because after trying an older one everything went smoothly. So this is a partial solution at least. Abusing your patience, a final question. I've noticed that on the 2nd run the E_KS value matches the one of the 1st run; on the contrary, Eharris is off quite a bit. Isn't this telling me what/where I should look? Thanks a lot. Best, Roberto -- Here's the story, any comments are most welcome. I run the SCF twice on the same configuration (yes, so stupid is my program's work flow). The 1st time I start from a well converged density matrix; everything goes fine, siesta: iscf Eharris(eV) E_KS(eV) FreeEng(eV) dDmax Ef(eV) siesta:1 -62761.6904 -62761.6853 -62761.6853 0.0008 -2.8547 timer: Routine,Calls,Time,% = IterSCF12795.838 90.28 elaps: Routine,Calls,Wall,% = IterSCF1 466.974 90.30 siesta: E_KS(eV) = -62761.6905 Now, see what happens the 2nd time siesta: iscf Eharris(eV) E_KS(eV) FreeEng(eV) dDmax Ef(eV) siesta:1 -62232.7213 -62761.6956 -62765.8419 0.1123 -2.8559 siesta: WARNING: Qtot, Tr[D*S] = 587.00 565.139728 ...
Re: [SIESTA-L] SIESTA parallel compile
Hi again, I don't know anything about ( pgf90 + mvapich ) but it seems your MPI installation is having trouble compiling "mpi_include.f90" within Siesta's ./Src/MPI . Maybe you can take it separately and tinker a bit to see what's wrong. Bye, bye. R. On Thu, 16 Oct 2008, HeeSung Choi wrote: Dear all, I get the following message when I compile the parallel version of siesta: make[1]: Entering directory `/net/jj/ph/u1/hsc081000/siesta-2.0.1/Src/MPI' pgf90 -c -O3 -I/usr/mpi/pgi/mvapich-1.0.0/include -I/net/jj/ph/u1/hsc081000/siesta-2.0.1/netcdf-4.0/includes Interfaces.f90 PGF90-F-0004-Corrupt or Old Module file ./mpi__include.mod (Interfaces.f90: 223) PGF90/x86-64 Linux 7.1-2: compilation aborted make[1]: *** [Interfaces.o] error 2 make[1]: Leaving directory `/net/jj/ph/u1/hsc081000/siesta-2.0.1/Src/MPI' make: *** [libmpi_f90.a] error 2 How do I solve this error? My arch.make is the following: SIESTA_ARCH=pgf90 # SIESTA_HOME=/net/jj/ph/u1/hsc081000/siesta-2.0.1/Src BLACS_HOME= /net/jj/ph/u1/hsc081000/siesta-2.0.1/BLACS SCALAPACK_HOME= /net/jj/ph/u1/hsc081000/siesta-2.0.1/scalapack-1.8.0 MPI_HOME= /usr/mpi/pgi/mvapich-1.0.0 NETCDF_HOME=/net/jj/ph/u1/hsc081000/siesta-2.0.1/netcdf-4.0 # FC=pgf90 FC_ASIS=$(FC) # FFLAGS= -O3 #-OPT:Ofast -OPT:ro=0 FFLAGS_EXTRA=-I$(MPI_HOME)/include -I$(NETCDF_HOME)/includes FFLAGS_MPI= -I$(MPI_HOME)/include FFLAGS_DEBUG=-g -O0 RANLIB=echo LDFLAGS= -O3 #-OPT:Ofast -OPT:ro=0 # netcdf #NETCDF_INTERFACE=libnetcdf.a NETCDF_INTERFACE=libnetcdf_f90.a DEFS_CDF=-DCDF NETCDF_INCLUDE=$(NETCDF_HOME)/includes NETCDF_LIB=$(NETCDF_HOME)/libsrc # mpi MPI_INTERFACE=libmpi_f90.a MPI_INCLUDE=$(MPI_HOME)/include DEFS_MPI=-DMPI # LIBS= -L$(SIESTA_HOME)/Libs -lblas \ -L$(SIESTA_HOME)/Libs -llapack \ -L$(SIESTA_HOME)/Libs -lscalapack \ -L$(SIESTA_HOME)/Libs -lblacsF77init -lblacsCinit -lblacs \ -L$(MPI_HOME)/lib -lfmpich -lmpich -lpmpich \ Thanks HeeSung
[SIESTA-L] Qtot convergence pb is back
Hello everybody, I've seen the posting by Salvador Barraza-Lopez of this April saying that, by moving from siesta1.3p to siesta-2.0, his SCF convergence problem "WARNING: Qtot, Tr[D*S] = ..." was gone. Well, not quite apparently: I'm using siesta-2.0.1 and stumbled on the same issue. Here's the story, any comments are most welcome. I run the SCF twice on the same configuration (yes, so stupid is my program's work flow). The 1st time I start from a well converged density matrix; everything goes fine, siesta: iscf Eharris(eV) E_KS(eV) FreeEng(eV) dDmax Ef(eV) siesta:1 -62761.6904 -62761.6853 -62761.6853 0.0008 -2.8547 timer: Routine,Calls,Time,% = IterSCF12795.838 90.28 elaps: Routine,Calls,Wall,% = IterSCF1 466.974 90.30 siesta: E_KS(eV) = -62761.6905 Now, see what happens 2nd time siesta: iscf Eharris(eV) E_KS(eV) FreeEng(eV) dDmax Ef(eV) siesta:1 -62232.7213 -62761.6956 -62765.8419 0.1123 -2.8559 siesta: WARNING: Qtot, Tr[D*S] = 587.00 565.139728 and after some more iterations and several of these messages the calculation turns weird. Besides the problem in itself, the bad guy seems to be the new density matrix being written right after the 1st run. Best regards, Roberto PS: in case it matters, this is a (48 + 1) atoms metallic box and the calculation is performed in parallel.
Re: [SIESTA-L] Vacancy formation energy
On Fri, 8 Aug 2008, Roberto Veiga wrote: > I got a problem doing that... the formation of a vacancy is exothermic! If I > do the calculation in this way: > > Ef=Et(64)-Et(63)-Et(64)/64 > But it's the other way around, man! ... so you got it right ;-) Cheers. R. > That's bizarre! > > Roberto Veiga > PhD student > INSA-Lyon >
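For reference, with the sign convention implied by the correction above (a 64-site box containing one vacancy), the vacancy formation energy reads $ E_f = Et(63) - \frac{63}{64}\, Et(64) $, which comes out positive when creating the vacancy costs energy.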
Re: [SIESTA-L] Vacancy formation energy
Sorry Marcos, error cancellation is important here, because absolute energy convergence may never be assured, least of all for big boxes. Thus one should use Et(1) = Et(64)/64 or, equivalently, scale the result for the perfect-lattice box of 64 Fe atoms to 63. Regards, Roberto On Fri, 1 Aug 2008, Marcos Verissimo Alves wrote: > Hi Roberto, > > This seems to be ok. Just one thing: make sure Et(1) is the total energy > of the isolated atom (that is, place it in a box big enough that you have > it isolated). Also, since you can write down the unit cell of bcc-Fe with > a single atom, you can simply take the energy of the unit cell, multiply > it by 64 and save yourself the computation time for calculating Et(64). > > Marcos >
Re: [SIESTA-L] parallel version and mpi
Ciao Antonio, I did compile Siesta 2.0.1 with gfortran and openmpi, no problems. Check the arch.make in the attachment. Regards, Roberto On Wed, 28 May 2008, antonio aliano wrote: I am A. Aliano, a researcher from Politecnico di Torino, and I'm trying to use SIESTA in parallel. Unfortunately I met some difficulties compiling and linking the MPI library of SIESTA. The main problem seems to be that I'm not able to compile the version of siesta I have (2.0) with the openmpi libraries. What can I do? Is it possible to use siesta with openmpi? Or should I use the usual MPI library? Thanks for your attention regards A.A. # # This file is part of the SIESTA package. # # Copyright (c) Fundacion General Universidad Autonoma de Madrid: # E.Artacho, J.Gale, A.Garcia, J.Junquera, P.Ordejon, D.Sanchez-Portal # and J.M.Soler, 1996-2006. # # Use of this software constitutes agreement with the full conditions # given in the SIESTA license, as signed by all legitimate users. # .SUFFIXES: .SUFFIXES: .f .F .o .a .f90 .F90 SIESTA_ARCH=i686-pc-linux-gnu--Intel FPP= FPP_OUTPUT= FC=mpif90 RANLIB=ranlib SYS=bsd FFLAGS=-O2 FPPFLAGS= -DFC_HAVE_FLUSH -DFC_HAVE_ABORT -DMPI LDFLAGS= FCFLAGS_fixed_f= FCFLAGS_free_f90= FPPFLAGS_fixed_F= FPPFLAGS_free_F90= BLAS_LIBS=-lcblas -lf77blas -latlas LAPACK_LIBS=-llapack BLACS_LIBS=-lblacsF77init -lblacsCinit -lblacs SCALAPACK_LIBS=-lscalapack COMP_LIBS= NETCDF_LIBS= NETCDF_INTERFACE= LIBS=$(SCALAPACK_LIBS) $(BLACS_LIBS) $(LAPACK_LIBS) $(BLAS_LIBS) $(NETCDF_LIBS) #SIESTA needs an F90 interface to MPI #This will give you SIESTA's own implementation #If your compiler vendor offers an alternative, you may change #to it here. MPI_INTERFACE=libmpi_f90.a MPI_INCLUDE=/usr/local/include #Dependency rules are created by autoconf according to whether #discrete preprocessing is necessary or not. .F.o: $(FC) -c $(FFLAGS) $(INCFLAGS) $(FPPFLAGS) $(FPPFLAGS_fixed_F) $< .F90.o: $(FC) -c $(FFLAGS) $(INCFLAGS) $(FPPFLAGS) $(FPPFLAGS_free_F90) $< .f.o: $(FC) -c $(FFLAGS) $(INCFLAGS) $(FCFLAGS_fixed_f) $< .f90.o: $(FC) -c $(FFLAGS) $(INCFLAGS) $(FCFLAGS_free_f90) $<
[SIESTA-L] Zr GGA + basis
Hello everybody, This is a sort of newbie question. I'm looking for a consistent pair of a GGA Zr pseudo plus a basis for Siesta. Does anybody possess such a thing and is willing to share? Would it be ok to use the (LDA) basis already available for Siesta together with a GGA Zr pseudo (such as the one recently made available from ABINIT)? Thanks a lot, Roberto
[SIESTA-L] Speeding up the SCF cycle
Dear list, Please bear with me for a while; I need to provide minimal details to make a meaningful/understandable argument. Thanks in advance. I'm doing simulations with metallic boxes, particularly CG-type runs, either using Siesta's own CG or driving it from outside. In such structural optimizations one goes from configuration {R}, where an SCF cycle must be performed, to a nearby one {R+dR}, where a new SCF cycle must be performed. I'm worried by the apparent lack of efficiency in dealing with the 2nd SCF: it seems that knowledge of the density matrix at {R}, DM(R), is not especially helpful for SCF(R+dR), even though the configurations are pretty close ... :( I believe that SCF(R+dR) could be sped up considerably by knowing some estimate of DM(R+dR) to use as a starting point instead of DM(R). Now, this is exactly what Eq. (A5) of Siesta's paper, J. Phys. Condens. Matter Vol. 14, 2745 (2002), provides. Apparently the code for such a calculation is already there, so my questions are: Do you think it is worth trying? Would it be too difficult to implement? What are the routines involved? ... and probably some others, but that's enough for now. Best regards, Roberto
Re: [SIESTA-L] MPI error
Hi Auluck, Try playing with the "BlockSize" keyword, particularly a small value such as 8 or so; also try Diag.Memory > 1 . If using the diagon method, try Diag.ParallelOverK instead; this most likely will cure the issue. Which libraries/compiler are you using? Intel's ifc and mkl should not give those errors. Cheers, Roberto On Tue, 25 Mar 2008, Sushil Auluck wrote: > Hi, > I am encountering the following error when running siesta in parallel. > Could anyone suggest a remedy... > s.auluck > > > > > outcell: Cell vector modules (Ang) : 10.86 10.86 10.86 > > outcell: Cell angles (23,13,12) (deg): 90. 90. 90. > > outcell: Cell volume (Ang**3): 1280.8241 > > > > iodm: Reading Density Matrix from files > > > > InitMesh: MESH = 48 x 48 x 48 = 110592 > > InitMesh: Mesh cutoff (required, used) = 40.000 53.991 Ry > > > > * Maximum dynamic memory allocated = 16 MB > > [cli_0]: aborting job: > > Fatal error in MPI_Comm_group: Invalid communicator, error stack: > > MPI_Comm_group(149): MPI_Comm_group(comm=0x0, group=0x7fbfff6e64) failed > > MPI_Comm_group(73).: Invalid communicator > > [cli_1]: aborting job: > > Fatal error in MPI_Comm_group: Invalid communicator, error stack: > > MPI_Comm_group(149): MPI_Comm_group(comm=0x0, group=0x7fbfff6e64) failed > > MPI_Comm_group(73).: Invalid communicator > > rank 0 in job 1 master.iitk.com_51313 caused collective abort of > > all ranks > > exit status of rank 0: return code 1 > > [EMAIL PROTECTED] work1]$ > > > > -- > > Prof. Sushil Auluck Phone: +91-512-6798177(Home) > Department of Physics +91-512-6797306/7092(Work) > Indian Institute of Technology Mobile: +91-9358470794 > Kanpur 208016 (UP) e-mail: [EMAIL PROTECTED] > India e-mail: [EMAIL PROTECTED] > http://www.iitk.ac.in/phy/People/phy_facvis.html > http://www.iitk.ac.in/phy/New01/profile_SA.html >
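In fdf terms, the suggestions above would look something like this (the values are only indicative):

BlockSize           8
Diag.Memory         1.5
Diag.ParallelOverK  .true.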
[SIESTA-L] Question on siesta_forces
Hello list, I've got a question on the "siesta_forces" helper routine that's worrying me: I do not understand why the returned value for the energy does not match the converged E_KS but is a bit smaller (check the example below). Would someone please explain this to me in a few words? Thanks sincerely, Roberto PS: This is Siesta version 2.0.1 . siesta: Program's energy decomposition (eV): siesta: Eions = 101251.465017 siesta: Ena = 23678.543702 siesta: Ekin = 18426.819752 siesta: Enl = 5879.470617 siesta: DEna = 500.598593 siesta: DUscf = 10.120630 siesta: DUext = 0.00 siesta: Exc = -7503.978529 siesta: eta*DQ = 0.00 siesta: Emadel = 0.00 siesta: Ekinion = 0.00 siesta: Eharris = -60259.892298 siesta: Etot = -60259.890252 siesta: FreeEng = -60259.890252 siesta: iscf Eharris(eV) E_KS(eV) FreeEng(eV) dDmax Ef(eV) siesta:1 -60259.8923 -60259.8903 -60259.8903 0.0004 -2.8576 timer: Routine,Calls,Time,% = IterSCF15154.107 95.34 elaps: Routine,Calls,Wall,% = IterSCF1 861.802 95.48 siesta: E_KS(eV) = -60259.8922 siesta: E_KS - E_eggbox = -60259.8922 <- HERE'S ONE siesta: Atomic forces (eV/Ang): Tot -0.000444 -0.000738 -0.000387 Max 0.034829 Res 0.012640 sqrt( Sum f_i^2 / 3N ) Max 0.034829 constrained Stress-tensor-Voigt (kbar): 6.51 16.61 10.41 0.00 0.00 5.34 Target enthalpy (eV/cell) -60263.9046 siesta: Stress tensor (static) (eV/Ang**3): 0.0040610.020.003331 0.020.010366 -0.01 0.003332 -0.010.006497 siesta: Pressure (static): -11.17478434 kBar siesta: Stress tensor (total) (eV/Ang**3): 0.0040610.020.003331 0.020.010366 -0.01 0.003332 -0.010.006497 siesta: Pressure (total): -11.17478434 kBar forcesToPipe: energy (eV) = -60259.956581 <-- AND HERE THE OTHER ONE forcesToPipe: stress (eV/Ang**3) = 0.0040610.020.003331 0.020.010366 -0.01 0.003332 -0.010.006497 forcesToPipe: forces (eV/Ang) = -0.008820 0.012304 0.007603 -0.010401 -0.004651 -0.000996 0.008850 0.012268 -0.007605
Re: [SIESTA-L] "Species not found" error message - Parallelsiesta.
Hi Kamaran, This may sound a bit stupid but I seem to recall that the last time I got a message like that, it was because the *.psf was missing ... ;) Cheers, Roberto On Mon, 17 Mar 2008, Kamaram Munira wrote: > I did mention the number of species, here is the input fdf file. > > - > SystemName Au > SystemLabel Au > NumberOfAtoms 105 > LongOutput true > NumberOfSpecies 1 > > > DM.UseSaveDM .true. > DM.MixingWeight 0.1 > MaxSCFIterations 50 > > WriteCoorXmol .true. > #DM.MixSCF1 .true. > > > > PAO.BasisType split > PAO.BasisSize DZ > > %block ChemicalSpeciesLabel > 1 79 Au # Species index, atomic number, species label > %endblock ChemicalSpeciesLabel > > > #LatticeConstant 3.52 Ang > #%block LatticeVectors > # 0.707106 0.0 0.0 > # 0.353550 0.61237 0.0 > # 0.00 0.0 20.0 > #%endblock LatticeVectors > > #LatticeConstant 1 Ang > #%block LatticeVectors > # 9.0 0.00 0.0 > # 0.0 9.00 0.0 > # 0.0 0.0 7.066550 > #endblock LatticeVectors > > MeshCutoff 200 Ry > > DM.Tolerance 0.10E-03 > > %block k_grid_Monkhorst_Pack > 5 0 0 0.0 > 0 5 0 0.0 > 0 0 5 0.0 > %endblock k_grid_Monkhorst_Pack > > > AtomicCoordinatesFormat Ang > %block AtomicCoordinatesAndAtomicSpecies < Au3.xyz > > xc.functional LDA # Exchange-correlation functional > xc.authors CA # Exchange-correlation version > > AtomCoorFormatOut Ang > > > > > %block ProjectedDensityOfStates > -10.00 10.00 0.2500 1500 eV > %endblock ProjectedDensityOfStates > saveHS true > > SolutionMethod diagon > WriteCoorXmol .true. > WriteBands true > SaveTotalPotential true > > > > > On Mon, 17 Mar 2008 15:05:48 +0100 > Eduardo Anglada <[EMAIL PROTECTED]> wrote: > > Hi, > > > > Maybe the line with the number of species is missing? > > Post your input so we can take a look. > > Regards > > Eduardo > > > > > > On 15/03/2008, at 15:00, Kamaram Munira wrote: > > > >> I have successfully run Siesta in serial mode till now. Recently, I > >> am trying to run the parallel version and I get an error message that > >> "Species not found" even though I have the Chemical and Species > >> Block in my fdf file. I know I am getting the error from the Chemical.f > >> file. How do I go about fixing it? Any help would be much appreciated. > >> > >> Thanks > >> -Kamaram > >> > >> --- > >> Kamaram Munira > >> Graduate Research Assistant > >> University of Virginia > >> Charlottesville, VA-22903. > >> > >> > > --- > Kamaram Munira > Graduate Research Assistant > University of Virginia > Charlottesville, VA-22903. >
[SIESTA-L] Rejected postings
Hello everybody, It's me again, with a (small) complaint :( . Each time I send a message to the list I get a "rejected message" notification from [EMAIL PROTECTED] saying I've sent the same message twice. Of course I didn't do that, so there must be something in my mail header that is confusing this dummy (it does not happen with other lists I'm subscribed to). Is this a known issue? How can I fix this misbehaviour? Thanks. Cheers, Roberto
[SIESTA-L] Forces and free energy
Hello list, I've got a couple of questions, probably for the developers. 1) This one is a little fancy. On p. 77 of the manual (for 2.0.1), regarding parallelism on the spatial grid, it is stated "... 2-D block cyclic distribution ..."; because "cyclic" has a quite specific meaning everywhere else, and I haven't seen a keyword like "BlockSize" in the context of the spatial grid ... I mean, is it really cyclic? 2) Here's a less trivial one. I'm using Siesta as a subroutine; I need the forces in particular, which, they say, are calculated as derivatives of the free energy (FreeE). However the call to siesta_forces returns the energy (Etot), not FreeE. Let's say I want to be strict (particularly because I'm using a finite Fermi-Dirac temperature): is it just a matter of substituting FreeE for Etot in the call to "forcesToPipe" within siesta.F? Thanks a lot. Best Roberto
Re: [SIESTA-L] Parellelization overview
Thanks a lot for everything, Eduardo. On Thu, 6 Mar 2008, Eduardo Anglada wrote: > ... The total time in diagon is almost three times larger in the parallel run! This is probably because the system is tiny. Going to 4 CPUs the situation gets worse. ... That's what surprised me most about this whole story: these are 2 processes that "live" on the same motherboard, communicating through the bus, and they take almost as long as a single one!! In fact I ran the test between two local machines (i.e., 1 process on each, connected via the network) and the difference between kp and orb is barely significant (different hardware, different compiler, different libraries). That cluster must have a hardware and/or software problem. I think I'll sit down with the admin for a while to ponder the matter ... :/ Best regards, Roberto
[SIESTA-L] siesta.size file
Hello list, The manual (2.0.1) mentions a file, "siesta.size", that's supposed to report array storage requirements; however, I've never seen this file being produced. Is there a controlling keyword for it or what? Thanks. Best, Roberto
Re: [SIESTA-L] Parellelization overview
Ah!, I almost forgot: if you need the complete output(s) (standard output), just tell me. Of course I have no problem with you taking a look at it. See you soon, Roberto On Wed, 5 Mar 2008, Eduardo Anglada wrote: > > Hello Roberto, > To see what is going on I need the timings > that siesta prints at the end (elapsed and the other one). > If you don't want me to see the fdf, the final part > should be enough for me. > Regards > Eduardo
Re: [SIESTA-L] Parellelization overview
Hello Eduardo, Thanks for your interest. What you asked for is in the attachment, for all the runs. Is there any place where I can browse Siesta's flow diagram? That would give me, I think, a basis for looking at the numbers I'm sending you with some understanding. Cordially, Roberto On Wed, 5 Mar 2008, Eduardo Anglada wrote: > > Hello Roberto, > To see what is going on I need the timings > that siesta prints at the end (elapsed and the other one). > If you don't want me to see the fdf, the final part > should be enough for me. > Regards > Eduardo > Para_Eduardo_A.tar.gz Description: Binary data
Re: [SIESTA-L] Parellelization overview
Hi Oleksandr, Thanks for your observations/suggestions. On Wed, 5 Mar 2008, Oleksandr Voznyy wrote: > First of all, SIESTA 2 is more demanding for k-point parallelization. > The number of k-points should be a multiple of the number of processors. > So 2 and 4 processors are not a good choice for 405 points. 5 and 9 > would be better. Ok, but that's not going to change the picture. A small remainder out of 405 k-points is just fine tuning. Agree? > Secondly, a slower run for orb. is expected for such a small system. > You have to reduce the BlockSize to 2 for this case. I think it would be > better. Hmmm, I didn't tweak BlockSize. But a smaller block size will also increase the communication overhead, right? > Third, you should compare not the total run time, but rather the time > per one SCF cycle (see the CLOCK file). Those times were for one SCF: I just ran one SCF up to convergence and extracted the time from the CLOCK file. Cheers, Roberto
Re: [SIESTA-L] Parellelization overview
Hello list and Eduardo in particular, I found a little time to do the runs myself: I feel like I owe an apology to the lady for being somewhat distrustful :/. It looks like anything that travels through the network out of the computing boxes gets punished, though that's not the whole story. Here are the runs; the notation is natural; "time" is the number at the bottom of the CLOCK file.

type of run    time (sec)
--
serial           1618
kp.2-proc         989
orb.2-proc       1660   (seen this ???)
kp.4-proc         839
orb.4-proc       1908
kp.8-proc         832
orb.8-proc       exceeded wall time

The sample is a 2-atom cell, 23 basis functions each. 9x9x9 k-points ---> 405, Siesta says. superc: 640 atoms, 14720 orbitals, 12800 projectors. InitMesh: 36x36x72 = 93312 Siesta version: 2.0.1 Libraries: cmkl-8, mkl[scalapack,blacs,lapack] Compiler: mpiifort-9 ( -O3 -xP -mp ) The cluster is made out of dual Xeon boxes (2 physical processors per motherboard), EMT64, 1GB RAM, Gigabit Ethernet. They are for exclusive use and I verified they were fully allocated to my job in every run (e.g. the 2-proc runs reside entirely in one box). I saved copies of stdout, CLOCK, and out.fdf . Anyone interested, just ask. That's it. Best, Roberto
Re: [SIESTA-L] Parellelization overview
Hello Eduardo, Thanks for the interest, but I suggest we forget the subject for a while. I don't have the real numbers; it was a kind of general comment from my colleague. I also suspect other variables entering the picture, like memory exhaustion --> disc swapping --> meaningless comparisons :/. I promise to gather more precise information on the matter and, if deemed worthwhile, I'll post it. Cheers, Roberto On Tue, 4 Mar 2008, Eduardo Anglada wrote: > > Dear Roberto, > Could you post the timings found at the end of each siesta run > (with and without the parallel over k)? > Maybe that gives us a hint of what is going on. > Cheers, > Eduardo > >
Re: [SIESTA-L] DSTEQR error
Hi Eduardo, Thanks for your reply and suggestion; that's probably the most robust solution. Especially after having tried a system based on Intel and mkl: no troubles. Unfortunately it's too slow for my needs. In the meantime I've found a configuration that gives the error right at the start, so I played with a few parameters: 1) Increased Diag.Memory to 1.3 ---> ok 2) Increased BlockSize to 16 ---> ok 3) Shut off Diag.DivideAndConquer ---> "eigenproblem failed to converge" So, for the moment, I feel like sticking to 1) + 2). Best regards, Roberto On Fri, 29 Feb 2008, Eduardo Anglada wrote: > Dear Roberto > > Maybe you could try the intel mpi and mkl (scalapack) as > they are really well tested. If it is not possible then you could > try using the standard lapack instead of atlas; the speed impact > will be small and most probably it will work. > > Regards, > > Eduardo > > On 27/02/2008, at 23:24, R.C.Pasianot wrote: > > > Hello list, > > > > I'm rather new to Siesta. Anyway, I have compiled 2.0.1 with > > gfortran under linux, using Scalapack+Atlas and Open MPI. > > Tests run ok, but sometimes I meet the sort of troubles reported > > below. It seems (not 100% sure) that it happens only with the parallel > > version (4 nodes), > > > > - > > > > siesta: iscf Eharris(eV) E_KS(eV) FreeEng(eV) dDmax Ef(eV) > > siesta:1 -61031.2316 -61031.2391 -61035.2006 0.0271 -2.8833 > > siesta:2 -61033.1173 -61031.1476 -61035.1156 0.3658 -2.8973 > > siesta:3 -61031.2323 -61031.2362 -61035.1600 0.0253 -2.8837 > > siesta:4 -61031.2308 -61031.2355 -61035.1990 0.0231 -2.8836 > > siesta:5 -61031.2324 -61031.2338 -61035.1986 0.0170 -2.8837 > > {1,1}: On entry to DSTEQR parameter number -23 had an > > illegal value > > {0,0}: On entry to DSTEQR parameter number -23 had an > > illegal value > > 4 processes killed (possibly by Open MPI) > > > > - > > > > To be straight: has anybody seen this error before? > > > > Thanks, > > > > Roberto > > > > >
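For the record, the 1) + 2) combination kept above corresponds to these fdf lines:

Diag.Memory  1.3
BlockSize    16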
Re: [SIESTA-L] Parellelization overview
Hello Eduardo, Thanks for your reply. The short story: My understanding is that Scalapack should put much more stress on the communication hardware than parallelization over k. But apparently the story below challenges this view. More details: See, a colleague of mine says she's not seeing a (significant) performance gain while running in parallel (10 or so nodes) with respect to the serial case. Apparently there's a hardware issue with the cluster, but anyway, the Wien2K guys are running their stuff fairly OK (parallel over k). So I suggested that she go with Siesta parallel over k, at the expense of needing (quite a lot) more RAM (correct?). But the outcome was about the same as before :( . Best regards, Roberto > snipped > I *think* they are sent back one by one, could you specify what you are interested in? > > > > 3) it seems that for k parallelization each process is self-sufficient > and therefore memory-hungry, while the Scalapack way essentially > divides memory needs. Correct? > > With the parallel over K option each Hamiltonian is completely stored > in each processor > so there is no need for scalapack. When H is too big then it's > scattered over all the processors, > so you need to call scalapack. > . snipped
[SIESTA-L] DSTEQR error
Hello list, I'm rather new to Siesta. Anyway, I have compiled 2.0.1 with gfortran under linux, using Scalapack+Atlas and Open MPI. Tests run ok, but sometimes I meet the sort of troubles reported below. It seems (not 100% sure) that it happens only with the parallel version (4 nodes), - siesta: iscf Eharris(eV) E_KS(eV) FreeEng(eV) dDmax Ef(eV) siesta:1 -61031.2316 -61031.2391 -61035.2006 0.0271 -2.8833 siesta:2 -61033.1173 -61031.1476 -61035.1156 0.3658 -2.8973 siesta:3 -61031.2323 -61031.2362 -61035.1600 0.0253 -2.8837 siesta:4 -61031.2308 -61031.2355 -61035.1990 0.0231 -2.8836 siesta:5 -61031.2324 -61031.2338 -61035.1986 0.0170 -2.8837 {1,1}: On entry to DSTEQR parameter number -23 had an illegal value {0,0}: On entry to DSTEQR parameter number -23 had an illegal value 4 processes killed (possibly by Open MPI) - To be straight: has anybody seen this error before? Thanks, Roberto
[SIESTA-L] Parellelization overview
Hello everybody, This is a newbie question. I wish to have some overview of the parallelization process implemented in Siesta. I mean general things, like: 1) how is I/O handled (does process 0 take care of everything?), 2) while parallelizing over k, are the results sent back one by one or in bunches?, 3) it seems that for k parallelization each process is self-sufficient and therefore memory-hungry, while the Scalapack way essentially divides the memory needs. Correct? The above are just examples; in a nutshell, is there any place where I can look these things up? Thanks a lot, Roberto