[Pw_forum] published PBE data
I think you should use Google; you will find many papers. Best.

Arles V. Gil Rebaza
Instituto de Física La Plata, Argentina

2011/3/7 Tram Bui:
> Hi Everyone,
> I have one more quick question: could you point me to any published PBE
> work (references) on SiC data.
>
> Thank you very much,
>
> Tram Bui
> M.S. Materials Science & Engineering
> trambui at u.boisestate.edu
>
> ___
> Pw_forum mailing list
> Pw_forum at pwscf.org
> http://www.democritos.it/mailman/listinfo/pw_forum

--
###-> Arles V. <-###
[Pw_forum] Re: problem in MPI running of QE (16 processors)
Dear all,

I tried to use full paths, but it didn't give positive results. It wrote an error message:

    application called MPI_Abort(MPI_COMM_WORLD, 0) - process 0

On 7 March 2011 10:30, Alexander Kvashnin wrote:
> Thanks, I tried to use "<" instead of "-in"; it also didn't work.
> OK, I will try to use full paths for input and output, and will report the
> result.
>
> --- Original message ---
> From: Omololu Akin-Ojo
> Sent: 7 March 2011, 9:56
> To: PWSCF Forum
> Subject: Re: [Pw_forum] Re: problem in MPI running of QE (16 processors)
>
> Try to see if specifying the full paths helps.
> E.g., try something like:
>
> mpiexec /home/MyDir/bin/pw.x -in /scratch/MyDir/graph.inp > /scratch/MyDir/graph.out
>
> (where /home/MyDir/bin is the full path to your pw.x and
> /scratch/MyDir/graph.inp is the full path to your input)
>
> (I see you use "-in" instead of "<" to indicate the input. I don't
> know too much, but _perhaps_ you could also _try_ using "<" instead of
> "-in".)
>
> o.
>
> On Mon, Mar 7, 2011 at 7:31 AM, Alexander Kvashnin wrote:
> > Yes, I wrote
> >
> > #PBS -l nodes=16:ppn=4
> >
> > And the user guide of MIPT-60 says that mpiexec must choose the number of
> > processors automatically; that's why I didn't write anything else.
> >
> > From: Huiqun Zhou
> > Sent: 7 March 2011, 7:52
> > To: PWSCF Forum
> > Subject: Re: [Pw_forum] problem in MPI running of QE (16 processors)
> >
> > How did you specify the number of nodes and procs per node in your job
> > script?
> >
> > #PBS -l nodes=?:ppn=?
> >
> > zhou huiqun
> > @earth sciences, nanjing university, china
> >
> > ----- Original Message -----
> > From: Alexander G. Kvashnin
> > To: PWSCF Forum
> > Sent: Saturday, March 05, 2011 2:53 AM
> > Subject: Re: [Pw_forum] problem in MPI running of QE (16 processors)
> >
> > I create a PBS task on the supercomputer MIPT-60 where I write
> >
> > mpiexec ../../espresso-4.2.1/bin/pw.x -in graph.inp > output.opt
> >
> > all other
> [quoted text truncated]

--
Sincerely yours,
Alexander G. Kvashnin

Student
Moscow Institute of Physics and Technology    http://mipt.ru/
141700, Institutsky lane 9, Dolgoprudny, Moscow Region, Russia

Junior research scientist
Technological Institute for Superhard and Novel Carbon Materials
http://www.ntcstm.troitsk.ru/
142190, Central'naya St. 7a, Troitsk, Moscow Region, Russia
[Pw_forum] calculation wouldn't run using espresso-4.2.1 but did run with espresso-4.1.3
For isolated C, you need to do a spin-polarized calculation using the
occupations card. See example 11 for details.

--
Duy Le
PhD Student
Department of Physics
University of Central Florida.

"Men don't need hand to do things"

On Mon, Mar 7, 2011 at 7:05 PM, Tram Bui wrote:
> Dear Emine,
> Thank you for your response. And to answer your chain of questions :),
> this is what I have got:
> - First, my installation was successful. I have done tons of calculations
> for single silicon as well as silicon carbide systems and everything works
> fine, except when it comes to this single-carbon-atom calculation.
> - Second, the error message was: "the convergence was not achieved after
> 100 iterations" (so you can see that it took really long for this
> calculation but no result was given in the end).
> - Third, we tried the calculation on my thesis advisor's computer; it
> didn't work and gave the same problem.
> - Fourth, I have tried to do the calculation using an older version of QE,
> and it worked!! But again, isn't the newer version supposed to work better
> than the older one? Not to mention that it should give more accurate
> results.
> So now neither my advisor nor I can figure out why it is not working
> properly in espresso-4.2.1, and I really appreciate any help from
> everyone!
>
> Thank you,
> Tram
>
> On Mon, Mar 7, 2011 at 4:44 PM, Emine Kucukbenli wrote:
>>
>> Dear Tram Bui,
>> Doesn't it bug you that such a simple calculation, which almost 'tests'
>> pw.x, doesn't work in your installation but seems to work for everyone
>> else? :)
>>
>> OK, sorry, let's get serious: was your installation successful? What is
>> the error message? How does it stop? Can you reproduce the same problem on
>> another machine/compiler, etc.?
>> What have you done to locate the problem?
>> Yadda yadda.. the usual questions which I think you should have asked
>> yourself before posting.. :)
>>
>> emine kucukbenli, phd student, sissa, italy
>>
>> Quoting Tram Bui:
>>
>>> Hi Everyone,
>>> I posted a question regarding the single-atom calculation for a carbon
>>> simple cubic system last month. I was using the ultrasoft
>>> pseudopotential of C, C.pbe-van_ak.UPF. The calculation ran fine using
>>> espresso-4.1.3 (older version), but not espresso-4.2.1 (latest version).
>>> So would you let me know what might have been my problem? Was it my
>>> input file or the new version of Quantum ESPRESSO? I also attached my
>>> input file here for more info.
>>>
>>> Regards,
>>> Tram Bui
>>> M.S. Materials Science & Engineering
>>> trambui at u.boisestate.edu
>>
>> SISSA Webmail https://webmail.sissa.it/
>> Powered by Horde http://www.horde.org/
>
> --
> Tram Bui
> M.S. Materials Science & Engineering
> trambui at u.boisestate.edu
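[Editor's note: Duy Le's advice above (spin polarization plus a fixed-occupations card) would look roughly like the fragment below for a lone C atom in a large cubic cell. This is a hedged sketch modeled on the documented pw.x input format, not the input from the thread or from example 11; the cell size, cutoffs, and band count are placeholder values, and the occupation rows simply encode the atomic 2s²2p² triplet ground state (3 spin-up, 1 spin-down electrons).]

```
&system
    ibrav = 1, celldm(1) = 20.0,   ! large cubic box to isolate the atom (placeholder)
    nat = 1, ntyp = 1, nbnd = 4,   ! 4 valence electrons with an ultrasoft C pseudopotential
    ecutwfc = 30.0, ecutrho = 240.0,
    nspin = 2,                     ! spin-polarized, as Duy Le suggests
    occupations = 'from_input',    ! occupations taken from the card below
 /
...
OCCUPATIONS
1.0 1.0 1.0 0.0
1.0 0.0 0.0 0.0
```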
[Pw_forum] ecfixed makes the volume different?
Dear Forum,

Could anyone give me some hints about how to adjust "qcutz, q2sigma,
ecfixed" in vc-cp? I am having some difficulties.

After wave-function initialization, the input file for zero pressure and
zero temperature is given below. (After the zero-pressure, zero-temperature
run, I will raise the temperature with the Verlet algorithm.) But when I use
different values of ecfixed with fixed qcutz and q2sigma, the volumes at
equilibrium differ. Some are close to the volume obtained without
"qcutz, q2sigma, ecfixed"; some differ from it by about 100 (out of a total
of around 3000). (In my case, I am sure fnosep, wmass, and emass are good
enough.)

I attached the volume-evolution figure. In the figure, steps 200-1200
correspond to the input file below; before step 200 is wave-function
initialization, and after step 1200 I increased the temperature.

Thank you very much!

WANG Riping
2011.3.7

    calculation = 'vc-cp' ,
    prefix = 'SiO2-mog' ,
    restart_mode = 'restart' ,
    nstep = 1000 ,
    iprint = 1 ,
    isave = 10 ,
    dt = 5.0 ,
    ndr = 51 ,
    ndw = 52 ,
    tstress = .TRUE. ,
    tprnfor = .TRUE. ,
    saverho = .TRUE. ,
    disk_io = 'high' ,
    /ekin_conv_thr = 1.0d-3 ,
    /etot_conv_thr = 5.0d-3 ,
    /forc_conv_thr = 1.0d-2 ,
    pseudo_dir = '~/espresso/pseudo' ,
    outdir = './' ,
 /
    ibrav = 14 ,
    celldm(1) = 16.021289268 ,
    celldm(2) = 0.558448710 ,
    celldm(3) = 1.245135500 ,
    celldm(4) = -0.28830 ,
    celldm(5) = -0.007432130 ,
    celldm(6) = 0.64680 ,
    nat = 36 ,
    ntyp = 2 ,
    ecutwfc = 30 ,
    ecutrho = 240.0 ,
    nr1b = 16 , nr2b = 16 , nr3b = 16 ,
    qcutz = 150.0 ,
    q2sigma = 2 ,
    ecfixed = 16.8 ,
 /
    electron_dynamics = 'sd' ,
    emass = 400 ,
    emass_cutoff = 3. ,
 /
    ion_dynamics = 'sd' ,
 /
    cell_dynamics = 'pr' ,
    press = 0 ,
    wmass = 300 ,
 /
ATOMIC_SPECIES
  O  16.00 O.pbe-van_ak.UPF
  Si 28.00 Si.pbe-n-van.UPF
ATOMIC_POSITIONS (crystal)
...
--
**
WANG Riping
Ph.D student, Institute for Study of the Earth's Interior, Okayama University,
827 Yamada, Misasa, Tottori-ken 682-0193, Japan
Tel: +81-858-43-3739 (Office), 1215 (Inst)
E-mail: wang.riping.81 at gmail.com
**

-- next part --
A non-text attachment was scrubbed...
Name: volume.p
Type: application/octet-stream
Size: 3736 bytes
Desc: not available
Url : http://www.democritos.it/pipermail/pw_forum/attachments/20110307/172de12c/attachment.obj
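[Editor's note: the three parameters being tuned here implement the smooth "constant effective cutoff" correction for variable-cell runs. As a reference point, and as a paraphrase of the QE input documentation worth double-checking against your version, each plane wave's kinetic term G² (in Ry) is replaced by:]

```latex
% Modified kinetic-energy functional for variable-cell dynamics (my
% paraphrase of the qcutz/q2sigma/ecfixed documentation; G^2 in Ry).
E_{\mathrm{kin}}(G) \;=\; G^{2} \;+\; \mathrm{qcutz}
  \left[\, 1 + \operatorname{erf}\!\left(
    \frac{G^{2} - \mathrm{ecfixed}}{\mathrm{q2sigma}} \right) \right]
```

With q2sigma as small as 2 Ry the erf step around ecfixed is sharp, so moving ecfixed relative to the occupied G-shells can plausibly shift the equilibrium volume, which may be related to the scatter described above.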
[Pw_forum] calculation wouldn't run using espresso-4.2.1 but did run with espresso-4.1.3
Dear Emine,

Thank you for your response. And to answer your chain of questions :), this
is what I have got:
- First, my installation was successful. I have done tons of calculations
for single silicon as well as silicon carbide systems and everything works
fine, except when it comes to this single-carbon-atom calculation.
- Second, the error message was: "the convergence was not achieved after
100 iterations" (so you can see that it took really long for this
calculation but no result was given in the end).
- Third, we tried the calculation on my thesis advisor's computer; it
didn't work and gave the same problem.
- Fourth, I have tried to do the calculation using an older version of QE,
and it worked!! But again, isn't the newer version supposed to work better
than the older one? Not to mention that it should give more accurate
results.

So now neither my advisor nor I can figure out why it is not working
properly in espresso-4.2.1, and I really appreciate any help from everyone!

Thank you,
Tram

On Mon, Mar 7, 2011 at 4:44 PM, Emine Kucukbenli wrote:
> Dear Tram Bui,
> Doesn't it bug you that such a simple calculation, which almost 'tests'
> pw.x, doesn't work in your installation but seems to work for everyone
> else? :)
>
> OK, sorry, let's get serious: was your installation successful? What is
> the error message? How does it stop? Can you reproduce the same problem on
> another machine/compiler, etc.?
> What have you done to locate the problem?
> Yadda yadda.. the usual questions which I think you should have asked
> yourself before posting.. :)
>
> emine kucukbenli, phd student, sissa, italy
>
> Quoting Tram Bui:
>
>> Hi Everyone,
>> I posted a question regarding the single-atom calculation for a carbon
>> simple cubic system last month. I was using the ultrasoft pseudopotential
>> of C, C.pbe-van_ak.UPF. The calculation ran fine using espresso-4.1.3
>> (older version), but not espresso-4.2.1 (latest version). So would you
>> let me know what might have been my problem? Was it my input file or the
>> new version of Quantum ESPRESSO? I also attached my input file here for
>> more info.
>>
>> Regards,
>> Tram Bui
>> M.S. Materials Science & Engineering
>> trambui at u.boisestate.edu
>
> SISSA Webmail https://webmail.sissa.it/
> Powered by Horde http://www.horde.org/

--
Tram Bui
M.S. Materials Science & Engineering
trambui at u.boisestate.edu
[Pw_forum] published PBE data
Hi Everyone,

I have one more quick question: could you point me to any published PBE
work (references) on SiC data.

Thank you very much,

Tram Bui
M.S. Materials Science & Engineering
trambui at u.boisestate.edu
[Pw_forum] calculation wouldn't run using espresso-4.2.1 but did run with espresso-4.1.3
Hi Everyone,

I posted a question regarding the single-atom calculation for a carbon
simple cubic system last month. I was using the ultrasoft pseudopotential of
C, C.pbe-van_ak.UPF. The calculation ran fine using espresso-4.1.3 (older
version), but not espresso-4.2.1 (latest version). So would you let me know
what might have been my problem? Was it my input file or the new version of
Quantum ESPRESSO? I also attached my input file here for more info.

Regards,
Tram Bui
M.S. Materials Science & Engineering
trambui at u.boisestate.edu

-- next part --
A non-text attachment was scrubbed...
Name: a2.in
Type: application/octet-stream
Size: 482 bytes
Desc: not available
Url : http://www.democritos.it/pipermail/pw_forum/attachments/20110307/d3c8f9f3/attachment.obj
[Pw_forum] problem in MPI running of QE (16 processors)
How did you specify the number of nodes and procs per node in your job
script?

#PBS -l nodes=?:ppn=?

zhou huiqun
@earth sciences, nanjing university, china

----- Original Message -----
From: Alexander G. Kvashnin
To: PWSCF Forum
Sent: Saturday, March 05, 2011 2:53 AM
Subject: Re: [Pw_forum] problem in MPI running of QE (16 processors)

I create a PBS task on the supercomputer MIPT-60 where I write

mpiexec ../../espresso-4.2.1/bin/pw.x -in graph.inp > output.opt

All other variants of this line, such as

mpiexec -np 16 ../../espresso-4.2.1/bin/pw.x -in graph.inp > output.opt

don't work. Maybe this number of processors is too small for a parallel
calculation with QE?

On 4 March 2011 21:37, Eyvaz Isaev wrote:

Dear Alexander,

How do you run a job? You should launch a command like (some parameters are
omitted)

mpirun -np 16 -maxtime 30 ./pw.x < scf.in > scf.out

The easiest way to be added to the forum list is subscribing to this forum.
Please visit http://www.pwscf.org/contacts.php
Please also provide your affiliation.

Best regards,
Eyvaz.

---
Prof. Eyvaz Isaev,
Department of Physics, Chemistry, and Biology (IFM), Linkoping University, Sweden
Theoretical Physics Department, Moscow State Institute of Steel & Alloys, Russia
isaev at ifm.liu.se, eyvaz_isaev at yahoo.com

From: Alexander G. Kvashnin
To: pw_forum at pwscf.org
Sent: Fri, March 4, 2011 9:07:46 PM
Subject: [Pw_forum] problem in MPI running of QE (16 processors)

Hello,

I have a problem when I run the parallel version of QE (16 procs): I see
the following line in the output file:

Parallel version (MPI), running on 1 processors

And it works using only 1 processor, although this is the MPI version.
Please help me with my problem. Thank you!

--
Sincerely yours,
Alexander G. Kvashnin
[Pw_forum] Re: problem in MPI running of QE (16 processors)
You should make sure that mpiexec is used correctly. Try one of the samples
at this link:
http://hamilton.nuigalway.ie/teaching/AOS/NINE/mpi-first-examples.html

--
Duy Le
PhD Student
Department of Physics
University of Central Florida.

"Men don't need hand to do things"

On Mon, Mar 7, 2011 at 11:24 AM, Alexander G. Kvashnin wrote:
> Dear all,
>
> I tried to use full paths, but it didn't give positive results. It wrote
> an error message:
> application called MPI_Abort(MPI_COMM_WORLD, 0) - process 0
>
> On 7 March 2011 10:30, Alexander Kvashnin wrote:
>>
>> Thanks, I tried to use "<" instead of "-in"; it also didn't work.
>> OK, I will try to use full paths for input and output, and will report
>> the result.
>>
>> --- Original message ---
>> From: Omololu Akin-Ojo
>> Sent: 7 March 2011, 9:56
>> To: PWSCF Forum
>> Subject: Re: [Pw_forum] Re: problem in MPI running of QE (16 processors)
>>
>> Try to see if specifying the full paths helps.
>> E.g., try something like:
>>
>> mpiexec /home/MyDir/bin/pw.x -in /scratch/MyDir/graph.inp >
>> /scratch/MyDir/graph.out
>>
>> (where /home/MyDir/bin is the full path to your pw.x and
>> /scratch/MyDir/graph.inp is the full path to your input)
>>
>> (I see you use "-in" instead of "<" to indicate the input. I don't
>> know too much, but _perhaps_ you could also _try_ using "<" instead of
>> "-in".)
>>
>> o.
>>
>> On Mon, Mar 7, 2011 at 7:31 AM, Alexander Kvashnin wrote:
>> > Yes, I wrote
>> >
>> > #PBS -l nodes=16:ppn=4
>> >
>> > And the user guide of MIPT-60 says that mpiexec must choose the number
>> > of processors automatically; that's why I didn't write anything else.
>> >
>> > From: Huiqun Zhou
>> > Sent: 7 March 2011, 7:52
>> > To: PWSCF Forum
>> > Subject: Re: [Pw_forum] problem in MPI running of QE (16 processors)
>> >
>> > How did you specify the number of nodes and procs per node in your job
>> > script?
>> >
>> > #PBS -l nodes=?:ppn=?
>> >
>> > zhou huiqun
>> > @earth sciences, nanjing university, china
>> >
>> > ----- Original Message -----
>> > From: Alexander G. Kvashnin
>> > To: PWSCF Forum
>> > Sent: Saturday, March 05, 2011 2:53 AM
>> > Subject: Re: [Pw_forum] problem in MPI running of QE (16 processors)
>> >
>> > I create a PBS task on the supercomputer MIPT-60 where I write
>> >
>> > mpiexec ../../espresso-4.2.1/bin/pw.x -in graph.inp > output.opt
>> >
>> > all other
>>
>> [quoted text truncated]

--
Sincerely yours,
Alexander G. Kvashnin

Student
Moscow Institute of Physics and Technology    http://mipt.ru/
141700, Institutsky lane 9, Dolgoprudny, Moscow Region, Russia

Junior research scientist
Technological Institute for Superhard and Novel Carbon Materials
http://www.ntcstm.troitsk.ru/
142190, Central'naya St. 7a, Troitsk, Moscow Region, Russia
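[Editor's note: putting together the suggestions in this thread (a matching `nodes`/`ppn` request, full paths, and an explicit process count) a PBS job-script fragment might look like the sketch below. The paths, the walltime, and the 64-rank count (16 nodes x 4 procs/node) are placeholder assumptions, not values confirmed by the thread; check your cluster's documentation for the correct mpiexec invocation.]

```shell
#!/bin/bash
#PBS -l nodes=16:ppn=4          # request 16 nodes x 4 procs/node = 64 MPI ranks
#PBS -l walltime=01:00:00       # placeholder walltime
cd "$PBS_O_WORKDIR"

# Full paths, as suggested above; adjust to your own directories.
PW=/home/MyDir/bin/pw.x
INP=/scratch/MyDir/graph.inp
OUT=/scratch/MyDir/graph.out

# If pw.x still reports "running on 1 processors", the launcher is not
# picking up the rank count from PBS; try passing it explicitly with -np.
mpiexec -np 64 "$PW" -in "$INP" > "$OUT"

# Equivalent form using stdin redirection instead of -in:
# mpiexec -np 64 "$PW" < "$INP" > "$OUT"
```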
[Pw_forum] DOS: tetrahedron method + spin-orbit coupling
On Mar 7, 2011, at 10:30 , Iurii TIMROV wrote: > Thank you very much for your help! thank you very much for YOUR help! Paolo --- Paolo Giannozzi, Dept of Chemistry, Univ. Udine, via delle Scienze 208, 33100 Udine, Italy Phone +39-0432-558216, fax +39-0432-558222
[Pw_forum] DOS: tetrahedron method + spin-orbit coupling
> you will need for sure to modify routine sumkt as in the attached
> file. Not sure this will solve all problems, though
>
> P.

Dear Paolo,

I managed to solve the problem. Two routines had to be modified, namely
"sumkt" (as you said) and "tweights" (both attached). The problem was that
the case nspin=4 (i.e. noncolin=.true.) was not treated properly: there was
a loop

    do ns = 1, nspin
      ...
    enddo

I replaced it with (in the spirit of dost.f90):

    if (nspin == 4) then
      nspin0 = 1
    else
      nspin0 = nspin
    endif
    do ns = 1, nspin0
      ...
    enddo

I tested the code with these modifications; it works correctly. This bug is
still present in the latest CVS version of espresso, so it can now be
removed.

Thank you very much for your help!

Best regards,
Iurii Timrov

Iurii TIMROV
Doctorant (PhD student)
Laboratoire des Solides Irradies
Ecole Polytechnique
F-91128 Palaiseau
+33 1 69 33 45 08
timrov at theory.polytechnique.fr

-- next part --
A non-text attachment was scrubbed...
Name: sumkt.f90
Type: text/x-fortran
Size: 3009 bytes
Desc: not available
Url : http://www.democritos.it/pipermail/pw_forum/attachments/20110307/dbfe6930/attachment.bin

-- next part --
A non-text attachment was scrubbed...
Name: tweights.f90
Type: text/x-fortran
Size: 6818 bytes
Desc: not available
Url : http://www.democritos.it/pipermail/pw_forum/attachments/20110307/dbfe6930/attachment-0001.bin
[Pw_forum] Re: problem in MPI running of QE (16 processors)
Thanks, I tried to use "<" instead of "-in"; it also didn't work.
OK, I will try to use full paths for input and output, and will report the
result.

--- Original message ---
From: Omololu Akin-Ojo
Sent: 7 March 2011, 9:56
To: PWSCF Forum
Subject: Re: [Pw_forum] Re: problem in MPI running of QE (16 processors)

Try to see if specifying the full paths helps.
E.g., try something like:

mpiexec /home/MyDir/bin/pw.x -in /scratch/MyDir/graph.inp > /scratch/MyDir/graph.out

(where /home/MyDir/bin is the full path to your pw.x and
/scratch/MyDir/graph.inp is the full path to your input)

(I see you use "-in" instead of "<" to indicate the input. I don't know too
much, but _perhaps_ you could also _try_ using "<" instead of "-in".)

o.

On Mon, Mar 7, 2011 at 7:31 AM, Alexander Kvashnin wrote:
> Yes, I wrote
>
> #PBS -l nodes=16:ppn=4
>
> And the user guide of MIPT-60 says that mpiexec must choose the number of
> processors automatically; that's why I didn't write anything else.
>
> From: Huiqun Zhou
> Sent: 7 March 2011, 7:52
> To: PWSCF Forum
> Subject: Re: [Pw_forum] problem in MPI running of QE (16 processors)
>
> How did you specify the number of nodes and procs per node in your job
> script?
>
> #PBS -l nodes=?:ppn=?
>
> zhou huiqun
> @earth sciences, nanjing university, china
>
> ----- Original Message -----
> From: Alexander G. Kvashnin
> To: PWSCF Forum
> Sent: Saturday, March 05, 2011 2:53 AM
> Subject: Re: [Pw_forum] problem in MPI running of QE (16 processors)
>
> I create a PBS task on the supercomputer MIPT-60 where I write
>
> mpiexec ../../espresso-4.2.1/bin/pw.x -in graph.inp > output.opt
>
> all other
[quoted text truncated]
[Pw_forum] Re: problem in MPI running of QE (16 processors)
Yes, I wrote

#PBS -l nodes=16:ppn=4

And the user guide of MIPT-60 says that mpiexec must choose the number of
processors automatically; that's why I didn't write anything else.

--- Original message ---
From: Huiqun Zhou
Sent: 7 March 2011, 7:52
To: PWSCF Forum
Subject: Re: [Pw_forum] problem in MPI running of QE (16 processors)

How did you specify the number of nodes and procs per node in your job
script?

#PBS -l nodes=?:ppn=?

zhou huiqun
@earth sciences, nanjing university, china

----- Original Message -----
From: Alexander G. Kvashnin
To: PWSCF Forum
Sent: Saturday, March 05, 2011 2:53 AM
Subject: Re: [Pw_forum] problem in MPI running of QE (16 processors)

I create a PBS task on the supercomputer MIPT-60 where I write

mpiexec ../../espresso-4.2.1/bin/pw.x -in graph.inp > output.opt

all other variants of this line, such as

mpiexec -np 16 ../../espresso-4.2.1/bin/pw.x -in graph.inp > output.opt

does [quoted text truncated]
[Pw_forum] Re: problem in MPI running of QE (16 processors)
Try to see if specifying the full paths helps.
E.g., try something like:

mpiexec /home/MyDir/bin/pw.x -in /scratch/MyDir/graph.inp > /scratch/MyDir/graph.out

(where /home/MyDir/bin is the full path to your pw.x and
/scratch/MyDir/graph.inp is the full path to your input)

(I see you use "-in" instead of "<" to indicate the input. I don't know too
much, but _perhaps_ you could also _try_ using "<" instead of "-in".)

o.

On Mon, Mar 7, 2011 at 7:31 AM, Alexander Kvashnin wrote:
> Yes, I wrote
>
> #PBS -l nodes=16:ppn=4
>
> And the user guide of MIPT-60 says that mpiexec must choose the number of
> processors automatically; that's why I didn't write anything else.
>
> From: Huiqun Zhou
> Sent: 7 March 2011, 7:52
> To: PWSCF Forum
> Subject: Re: [Pw_forum] problem in MPI running of QE (16 processors)
>
> How did you specify the number of nodes and procs per node in your job
> script?
>
> #PBS -l nodes=?:ppn=?
>
> zhou huiqun
> @earth sciences, nanjing university, china
>
> ----- Original Message -----
> From: Alexander G. Kvashnin
> To: PWSCF Forum
> Sent: Saturday, March 05, 2011 2:53 AM
> Subject: Re: [Pw_forum] problem in MPI running of QE (16 processors)
>
> I create a PBS task on the supercomputer MIPT-60 where I write
>
> mpiexec ../../espresso-4.2.1/bin/pw.x -in graph.inp > output.opt
>
> all other variants of this line, such as
>
> mpiexec -np 16 ../../espresso-4.2.1/bin/pw.x -in graph.inp > output.opt
>
> does
> [quoted text truncated]

--
* Seek GOD! *