On Wed, 10 Jan 2007, Oleksandr Voznyy wrote:
Hi,
Until recently I thought that checking the convergence of the total energy
vs. the k-grid cutoff was enough.
However, I have now found that while the total energy can be very well
converged, the Fermi-level position is not, and requires an at least
twice-denser k-grid (and ~4 times more time).
Here is my example
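(For context — not the poster's actual input, which was cut off here — a k-sampling convergence test in SIESTA is typically driven by one of two fdf keywords: a real-space cutoff that SIESTA converts to a grid, or an explicit Monkhorst-Pack block. An illustrative fragment; values are arbitrary:)

```
# Illustrative fdf fragment (not the poster's input). Use ONE of the two
# mechanisms below, not both.

# Option 1: real-space cutoff; SIESTA derives the k-grid from it
kgrid_cutoff 15.0 Ang

# Option 2: explicit Monkhorst-Pack grid (integer matrix + displacements)
%block kgrid_Monkhorst_Pack
  8 0 0  0.0
  0 8 0  0.0
  0 0 8  0.0
%endblock kgrid_Monkhorst_Pack
```

To test convergence of the Fermi level, one would rerun with increasing grid density (e.g. 8, 12, 16 divisions) and compare Ef from the output, not just the total energy.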
Hi,
There are some keywords that reduce memory usage in parallel
calculations, such as ON.LowerMemory and a couple of others; check the
manual. Besides these, you can increase the number of nodes (pretty
obvious), shrink the basis set (obvious, too), or use basis orbitals
with a
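(For reference, a couple of those memory-related fdf flags look like the fragment below. This is an illustrative sketch, not part of the original reply; availability and defaults depend on the SIESTA version, so check the manual:)

```
# Illustrative fdf fragment; verify keywords against your version's manual.
ON.LowerMemory  .true.   # order-N solver: trade some speed for memory
DirectPhi       .true.   # recompute orbital values on the fly
                         # instead of storing them in memory
```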
Dear friends,
Can anybody give me a pseudopotential for Li?
Thanks in advance.
Regards,
Saswata
Hi,
Sincere apologies if such a message has already been posted to this
list. I have been trying to compile the parallel version of SIESTA
1.3/2.0 for a while, but with no luck.
Our system is a 16-node Beowulf cluster running ROCKS 4.x with Red Hat
Enterprise Linux 4.x. These have Pentium 4
Hi Rodrigo,
Looking at your arch.make, I can read:
BLAS_LIBS=/usr/local/lib/blas_LINUX.a
LAPACK_LIBS=/usr/local/lib/lapack_LINUX.a
and
GUIDE=/opt/intel/mkl/8.0.1/lib/em64t/libguide.a
LAPACK=/opt/intel/mkl/8.0.1/lib/em64t/libmkl_lapack.a
BLAS=/opt/intel/mkl/8.0.1/lib/em64t/libmkl_em64t.a
You
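(The reply is cut off here, so the advice that followed is unknown. For reference only: defining both the netlib archives and the MKL libraries in one arch.make can lead to mixing two BLAS/LAPACK implementations in the same link line, which can cause duplicate or mismatched symbols. A consistent MKL-only sketch, using the paths quoted above — a hypothetical fragment, not the original reply's recommendation:)

```make
# Hypothetical consolidated arch.make fragment: pick ONE BLAS/LAPACK stack.
MKL_DIR     = /opt/intel/mkl/8.0.1/lib/em64t
BLAS_LIBS   = $(MKL_DIR)/libmkl_em64t.a
LAPACK_LIBS = $(MKL_DIR)/libmkl_lapack.a $(MKL_DIR)/libguide.a -lpthread
```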
I also found something weird. I was optimizing a nanostructure, and if
I do a grep for "Max" on the output:
siesta: iscf   Eharris(eV)   E_KS(eV)   FreeEng(eV)   dDmax   Ef(eV)
Max    0.077525
Max    0.077525    constrained
* Maximum dynamic memory allocated = 188 MB
siesta: iscf   Eharris(eV)   E_KS(eV)
Dear SIESTA developers,
I actually tested the memory usage with the SZP basis against the SZ
basis. For bulk Si, SZP calculations require twice as much memory as SZ
calculations. However, for the nanowire (2600 atoms), the memory usage
jumps from 3.4 GB to more than 14 GB!
Is this normal?
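(A back-of-the-envelope sketch, not an official answer: for Si, a SZ basis has 4 orbitals per atom (one s, three p) and SZP has 9 (plus five d polarization orbitals). Memory for quantities stored per orbital *pair* grows with the square of the orbital count, so when the sparsity pattern is dense enough — as in a large nanowire — the ratio can approach (9/4)² ≈ 5 rather than the factor of 2 seen for small bulk cells:)

```python
# Back-of-the-envelope estimate of SZP vs SZ memory for Si.
# Assumed orbital counts: SZ = 4 (s + 3p), SZP = 9 (+ 5 d polarization).
orbitals_sz = 4
orbitals_szp = 9

# Pair-quantity (dense-matrix) memory scales with the square of orbitals.
dense_ratio = (orbitals_szp / orbitals_sz) ** 2
print(f"dense-limit memory ratio SZP/SZ ~ {dense_ratio:.2f}")  # ~5.06

# The reported nanowire jump, 3.4 GB -> 14 GB:
reported_ratio = 14 / 3.4
print(f"reported ratio ~ {reported_ratio:.2f}")  # ~4.12
```

The reported factor of ~4 sits between the sparse (~2) and dense (~5) limits, which is at least consistent with this picture.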
Dear SIESTA developers,
I am running SIESTA for some silicon nanowires; the memory of each node
in my cluster is 4 GB. When the number of atoms is less than 1000, vmem
and mem are roughly the same, for example:
resources_used.mem = 2213804kb
resources_used.vmem = 2258976kb
Then after that, the