Hi Ahskan
There are a couple of ways you can do this. The first, and probably the
simplest, is to hook directly into the pwscf libraries with some Fortran
code; look at PP/pw_export.f90 for inspiration in that regard.
Alternatively, you can actually run pw_export.x itself, which will give you
the data written out to file, ready for post-processing.
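If you go the Fortran route, a rough skeleton of such a hook (untested, and
only a sketch: the module and routine names below, e.g. mp_startup,
environment_start, read_file and stop_pp, are taken from other QE
post-processing codes and should be checked against your QE version, and
the hard-coded prefix/tmp_dir are placeholders) could look like this:

  PROGRAM wfc_hook
    ! Minimal post-processing skeleton in the spirit of PP/pw_export.f90:
    ! start the parallel environment, read the output of a previous pw.x
    ! run, then access the data through the PWscf modules.
    USE mp_global,   ONLY : mp_startup
    USE environment, ONLY : environment_start, environment_end
    USE io_global,   ONLY : ionode, stdout
    USE io_files,    ONLY : prefix, tmp_dir
    USE wvfct,       ONLY : nbnd
    USE klist,       ONLY : nks
    IMPLICIT NONE
    !
    CALL mp_startup()
    CALL environment_start('WFC_HOOK')
    !
    ! prefix/outdir of the previous pw.x run (hard-coded placeholders;
    ! a real code would read them from an &inputpp namelist like the PP codes)
    prefix  = 'pwscf'
    tmp_dir = './tmp/'
    !
    CALL read_file()   ! loads the data file and the wavefunctions
    !
    IF (ionode) WRITE(stdout,'(5x,"bands =",i5,"   k-points =",i5)') nbnd, nks
    !
    ! ... your own processing of the wavefunction data goes here ...
    !
    CALL environment_end('WFC_HOOK')
    CALL stop_pp()
  END PROGRAM wfc_hook
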
On Wed, 2014-03-26 at 17:01 +0100, vborisov wrote:
> init : 432.25s CPU 18519.07s WALL ( 1 calls)

In PWCOND/src/do_cond.f90, place some calls to print_clock('init')
between the calls to start_clock('init') and stop_clock('init').
With a few attempts you should be able to locate where the time is
actually being spent.
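For what it's worth, the instrumentation could look roughly like this (a
sketch only: the "..." comments stand for the actual contents of
do_cond.f90, which are not reproduced here):

  ! sketch: PWCOND/src/do_cond.f90 with extra print_clock calls
  CALL start_clock('init')
  !
  ! ... first part of the initialization ...
  CALL print_clock('init')   ! added: time accumulated up to this point
  !
  ! ... second part of the initialization ...
  CALL print_clock('init')   ! added: moving these calls around narrows
                             ! down where the wall time is spent
  !
  ! ... rest of the initialization ...
  CALL stop_clock('init')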
>
> Notice the large difference between the CPU and the WALL times
> for the init subroutine. This was observed during parallel
> execution with different numbers of processors, for both the
> 5.0.1 and 5.0.2 versions, and on different architectures using
> the OpenMPI environment.
>
> I would very much appreciate any help with these matters.
>
> With kind regards,
> Vladislav Borisov
>
>
> Max Planck Institute of Microstructure Physics
> Weinberg 2, 06120, Halle (Saale), Germany
> Tel No: +49 345 5525448
> Fax No: +49 345 5525446
> Email: vborisov at mpi-halle.mpg.de
>
Dear all,
I noticed a problem while using the PWCOND code for calculating
the transmission for more than one k-point within the two-dimensional
BZ.
Whereas there is no problem with the conductance calculation for
a single k-point, the following error message appears once I provide
a list with n>=
Could you please help me to properly add the manufacturer name, and the
city and state of its location.
Best regards,
Dmitry
Sorry, I do not think so.
On Wed, Mar 26, 2014 at 1:07 AM, kulwinder kaur
wrote:
>
> Hello QE users,
>
> How can I find the thermal conductivity using the Quantum ESPRESSO code?
>
> --
> Regards
> Kulwinder Kaur
> Physics Department
> Panjab University, Chandigarh
On Mar 26, 2014, at 12:33 AM, Alexander G. Kvashnin
wrote:
> mpirun -np 2 /qe-dir/bin/pw-gpu.x -in input > output
>
> And it will start with 2 MPI processes and get 2 GPUs on my host. Is it correct?
Yes. But if 2 MPI processes are enough to run the calculation, I suggest
running the code in serial.