[Pw_forum] pseudo potential for Yb and U

2011-03-08 Thread partha sarathi ghosh
Hello QE users,
I need PAW potentials (for use with QE) for Yb and U in my calculations. If
anyone has them, please share them with me. Could anyone also point me to
references or documents on generating PAW pseudopotentials with QE for
high-Z elements like these, where spin-orbit coupling must be included?
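
(For reference: PAW datasets can be generated with the atomic code ld1.x
distributed with QE, and spin-orbit coupling requires a fully relativistic
run, i.e. rel=2 in the &input namelist. The sketch below is schematic and
untested; the electronic configuration, pseudization channels and cutoff
radii are illustrative placeholders, and for rel=2 the state cards may also
need the total angular momentum j, so consult Doc/INPUT_LD1 before use:

    &input
       title   = 'Yb'
       zed     = 70.0
       rel     = 2            ! solve the full Dirac equation
       config  = '[Xe] 4f14 6s2'
       iswitch = 3            ! generate a pseudopotential
       dft     = 'PBE'
    /
    &inputp
       lpaw          = .true.   ! produce a PAW dataset
       pseudotype    = 3
       file_pseudopw = 'Yb.rel-pbe-paw.UPF'
    /
    3
    4F  4  3  14.00  0.00  1.8  2.2
    6S  2  0   2.00  0.00  1.8  2.2
    5D  3  2   0.00  0.00  1.8  2.2

Any dataset generated this way needs careful testing against all-electron
results before production use.)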

With warm regards
Partha Ghosh
B.A.R.C., INDIA


[Pw_forum] atomic positions in xsf from cppp.x

2011-03-08 Thread Riping WANG
Dear Forum,

When I use cppp.x to generate an xsf file from a vc-cp run with nstep=1000,
the xsf file contains cell vectors and atomic positions.
What do these cell vectors and atomic positions stand for: the structure
averaged over the 1000 steps, the structure averaged over one period, or
just the last configuration of the calculation?

Part of the input file follows:
 
calculation = 'vc-cp' ,
prefix = 'SiO2' ,
restart_mode = 'restart' ,
nstep = 1000 ,
ndr = 51 ,
ndw = 52 ,
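
(For context, a minimal cppp.x input that writes the trajectory frames to an
animated xsf file, so individual configurations can be inspected, might look
like the sketch below; the variable names should be double-checked against
Doc/INPUT_CPPP, and ndr is assumed to point at the restart written above
with ndw = 52:

    &inputpp
       prefix    = 'SiO2'
       outdir    = './'
       ndr       = 52
       output    = 'xsf'        ! output format
       fileout   = 'SiO2.xsf'
       ldynamics = .true.       ! dump a sequence of frames, not one snapshot
       nframes   = 10
    /

The value of nframes is illustrative.)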

Thank you very much.

WANG Riping
2011.3.8




-- 
**
WANG Riping
Ph.D student,
Institute for Study of the Earth's Interior,Okayama University,
827 Yamada, Misasa, Tottori-ken 682-0193, Japan
Tel: +81-858-43-3739(Office), 1215(Inst)
E-mail: wang.riping.81 at gmail.com
**


[Pw_forum] НА: Re: problem in MPI running of QE (16 processors)

2011-03-08 Thread Huiqun Zhou
Alexander,

According to your reply to my message, you actually requested 64 CPU cores
(16 nodes, 4 cores per node). This should be no problem unless your cluster's
usage policy prohibits it. Once upon a time, we had such a policy on our
cluster: a job could occupy at most 32 CPU cores; otherwise it was put into
the sequential queue.

Maybe you should ask your administrator whether there is such a policy ...

zhou huiqun
@earth sciences, nanjing university, china
 
  ----- Original Message -----
  From: Alexander G. Kvashnin 
  To: PWSCF Forum 
  Sent: Tuesday, March 08, 2011 12:24 AM
  Subject: Re: [Pw_forum] НА: Re: problem in MPI running of QE (16 processors)


  Dear all



  I tried to use full paths, but it did not help. It wrote the error
message:


  application called MPI_Abort(MPI_COMM_WORLD, 0) - process 0




  On 7 March 2011 10:30, Alexander Kvashnin  wrote:

Thanks. I tried to use "<" instead of "-in"; it also didn't work.
OK, I will try to use full paths for input and output and will report the
result.

----- Original Message -----
From: Omololu Akin-Ojo 
Sent: 7 March 2011, 9:56
To: PWSCF Forum 
Subject: Re: [Pw_forum] НА: Re: problem in MPI running of QE (16 processors)

Try to see if specifying the full paths helps.
E.g., try something like:

mpiexec /home/MyDir/bin/pw.x -in  /scratch/MyDir/graph.inp >
/scratch/MyDir/graph.out

(where /home/MyDir/bin is the full path to your pw.x and
/scratch/MyDir/graph.inp is the full path to your input)

(I see you use "-in" instead of "<" to indicate the input. I don't
know too much, but _perhaps_ you could also _try_ using "<" instead of
"-in".)

o.

On Mon, Mar 7, 2011 at 7:31 AM, Alexander Kvashnin  wrote:
> Yes, I wrote
>
> #PBS -l nodes=16:ppn=4
>
> And the MIPT-60 user guide says that mpiexec must choose the number of
> processors automatically; that's why I didn't specify anything else
>
>
> ----- Original Message -----
> From: Huiqun Zhou 
> Sent: 7 March 2011, 7:52
> To: PWSCF Forum 
> Subject: Re: [Pw_forum] problem in MPI running of QE (16 processors)
>
> How did you specify the number of nodes and processors per node in your job
> script?
>
> #PBS -l nodes=?:ppn=?
>
> zhou huiqun
> @earth sciences, nanjing university, china
>
>
> ----- Original Message -----
> From: Alexander G. Kvashnin
> To: PWSCF Forum
> Sent: Saturday, March 05, 2011 2:53 AM
> Subject: Re: [Pw_forum] problem in MPI running of QE (16 processors)
> I create a PBS task on the MIPT-60 supercomputer, where I write
>
> mpiexec ../../espresso-4.2.1/bin/pw.x -in graph.inp > output.opt
> all other

[the rest of the quoted message has been truncated]
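
(Putting the advice in this thread together, a complete PBS script with
absolute paths might look like the sketch below; the paths and job name are
hypothetical placeholders, and only the resource request line comes from the
thread itself:

    #!/bin/bash
    #PBS -l nodes=16:ppn=4
    #PBS -N graphene
    # run from the directory the job was submitted from
    cd $PBS_O_WORKDIR
    mpiexec /home/MyDir/bin/pw.x -in /scratch/MyDir/graph.inp > /scratch/MyDir/graph.out

If mpiexec does not pick up the processor count from PBS automatically,
adding an explicit count, e.g. mpiexec -np 64, is worth trying.)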



  -- 

  Sincerely yours
  Alexander G. Kvashnin

  Student
  Moscow Institute of Physics and Technology  http://mipt.ru/
  141700, Institutsky lane 9, Dolgoprudny, Moscow Region, Russia

  Junior research scientist
  Technological Institute for Superhard
  and Novel Carbon Materials  http://www.ntcstm.troitsk.ru/
  142190, Central'naya St. 7a, Troitsk, Moscow Region, Russia


[Pw_forum] calculation wouldn't run using espresso-4.2.1 but did run with espresso-4.1.3

2011-03-08 Thread Paolo Giannozzi
Tram Bui wrote:

> isn't the newer version supposed to work better than the older one?

it is (after all, this is why there are new versions), but new versions
fix old bugs and introduce new ones. This is clearly a bug and
will be fixed soon. Meanwhile, use
K_POINTS tpiba
1
0.0 0.0 0.0 1.0
instead of
K_POINTS gamma
It is slower, but it converges very quickly (it should converge
in exactly the same way, but it doesn't, for some obscure reason).
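
(For clarity, the replacement card as it sits in the pw.x input, a fragment
only, with everything else unchanged:

    K_POINTS tpiba
    1
    0.0 0.0 0.0 1.0

i.e. a single k-point at the origin, in units of 2*pi/a, with weight 1.0,
instead of the gamma-only code path selected by K_POINTS gamma.)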

P.
-- 
Paolo Giannozzi, IOM-Democritos and University of Udine, Italy


[Pw_forum] НА: Re: problem in MPI running of QE (16 processors)

2011-03-08 Thread Alexander G. Kvashnin
OK, I'll try to contact my administrator again and sort out this problem
with the machine.

Thank you!

On 8 March 2011 12:14, Paolo Giannozzi  wrote:

> Alexander G. Kvashnin wrote:
>
> > Previously I also used 16 nodes, when calculating with ABINIT, and
> > there was no problem running it.
>
> I don't know why you cannot run QE in parallel, but I know for sure
> that it is not a QE problem: it is either a problem of your machine,
> or QE wasn't properly compiled for parallel execution.
>
> P.
> --
> Paolo Giannozzi, IOM-Democritos and University of Udine, Italy



-- 
Sincerely yours
Alexander G. Kvashnin

Student
Moscow Institute of Physics and Technology  http://mipt.ru/
141700, Institutsky lane 9, Dolgoprudny, Moscow Region, Russia

Junior research scientist
Technological Institute for Superhard
and Novel Carbon Materials
http://www.ntcstm.troitsk.ru/
142190, Central'naya St. 7a, Troitsk, Moscow Region, Russia



[Pw_forum] НА: Re: problem in MPI running of QE (16 processors)

2011-03-08 Thread Alexander G. Kvashnin
Previously I also used 16 nodes, when calculating with ABINIT, and there was
no problem running it.
I asked my administrator about it; he said that everything is alright with
the policy.

On 8 March 2011 07:48, Huiqun Zhou  wrote:

>  Alexander,
>
> According to your reply to my message, you actually requested 64 CPU cores
> (16 nodes, 4 cores per node). This should be no problem unless your
> cluster's usage policy prohibits it. Once upon a time, we had such a policy
> on our cluster: a job could occupy at most 32 CPU cores; otherwise it was
> put into the sequential queue.
>
> Maybe you should ask your administrator whether there is such a policy ...
>
> zhou huiqun
> @earth sciences, nanjing university, china
>
> [the rest of the quoted thread is reproduced earlier in this digest and
> has been trimmed]


-- 
Sincerely yours
Alexander G. Kvashnin

Student
Moscow Institute of Physics and Technology  http://mipt.ru/
141700, Institutsky lane 9, Dolgoprudny, Moscow Region, Russia

Junior research scientist
Technological Institute for Superhard
and Novel Carbon Materials
http://www.ntcstm.troitsk.ru/
142190, Central'naya St. 7a, Troitsk, Moscow Region, Russia



[Pw_forum] НА: Re: problem in MPI running of QE (16 processors)

2011-03-08 Thread Paolo Giannozzi
Alexander G. Kvashnin wrote:

> Previously I also used 16 nodes, when calculating with ABINIT, and there
> was no problem running it.

I don't know why you cannot run QE in parallel, but I know for sure
that it is not a QE problem: it is either a problem of your machine,
or QE wasn't properly compiled for parallel execution.

P.
-- 
Paolo Giannozzi, IOM-Democritos and University of Udine, Italy