Hi,

There are some keywords that reduce the memory usage in parallel
calculations, like ON.LowerMemory and a couple of others; check the
manual. Besides these, you can increase the number of nodes (pretty
obvious), shrink the basis set (obvious, too), use basis orbitals with a
narrower localization range and a lower O(N) cutoff distance (to make the
matrices sparser and reduce their dimensions), and finally, reduce the
grid cutoff. That's all I can think of for now.
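For example, something along these lines in the FDF input could be a
starting point. The values are only illustrative, and you should
double-check the keyword names against the manual for your SIESTA version:

   ON.LowerMemory     .true.    # slower but less memory-hungry O(N) routines
   PAO.BasisSize      SZ        # smaller basis than SZP/DZP
   PAO.EnergyShift    0.03 Ry   # larger shift -> shorter orbital ranges
   ON.RcLWF           8.0 Bohr  # smaller localization radius for the O(N) functions
   MeshCutoff         120. Ry   # lower real-space grid cutoff

Keep in mind that a smaller basis, shorter orbital ranges and a lower mesh
cutoff all affect the accuracy, so you should re-converge your results
with respect to these parameters.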

Regards,
Vasilii

2007/1/10, Cherry Y. Yates <[EMAIL PROTECTED]>:

I also found something weird. I was optimizing a
nanostructure, and if I do a grep for "max":

siesta: iscf   Eharris(eV)      E_KS(eV)   FreeEng(eV)   dDmax  Ef(eV)
   Max    0.077525
   Max    0.077525    constrained
* Maximum dynamic memory allocated =   188 MB
siesta: iscf   Eharris(eV)      E_KS(eV)   FreeEng(eV)   dDmax  Ef(eV)
   Max    0.063488
   Max    0.063488    constrained
* Maximum dynamic memory allocated =   188 MB
siesta: iscf   Eharris(eV)      E_KS(eV)   FreeEng(eV)   dDmax  Ef(eV)
   Max    0.167946
   Max    0.167946    constrained
* Maximum dynamic memory allocated =   188 MB
siesta: iscf   Eharris(eV)      E_KS(eV)   FreeEng(eV)   dDmax  Ef(eV)
   Max    0.067086
   Max    0.067086    constrained
* Maximum dynamic memory allocated =  2146 MB
siesta: iscf   Eharris(eV)      E_KS(eV)   FreeEng(eV)   dDmax  Ef(eV)
   Max    0.055845
   Max    0.055845    constrained
* Maximum dynamic memory allocated =  2146 MB
siesta: iscf   Eharris(eV)      E_KS(eV)   FreeEng(eV)   dDmax  Ef(eV)
   Max    0.044563
   Max    0.044563    constrained
* Maximum dynamic memory allocated =  2146 MB
* Maximum dynamic memory allocated : Node    0 =  2146 MB
* Maximum memory occured during rdiag


As you can see, the maximum dynamic memory allocated jumps
from 188 MB to 2146 MB. Is that all right? Thanks,

Cherry

--- "Cherry Y. Yates" <[EMAIL PROTECTED]> wrote:

> Dear SIESTA developers
>
> Actually I tested the memory usage with the SZP basis against the SZ
> basis. For bulk Si, SZP calculations require twice as much memory as
> the SZ calculations. However, for the nanowire (2600 atoms), the memory
> usage jumps from 3.4 GB to more than 14 GB! Is this normal? Please
> give me some hints!
>
> Thanks,
>
> Cherry
>
>
> --- "Cherry Y. Yates" <[EMAIL PROTECTED]> wrote:
>
> > Dear SIESTA developers,
> >
> > I am running SIESTA for some silicon nanowires; the memory of each
> > node in my cluster is 4 GB. When the system has fewer than 1000
> > atoms, vmem and mem are roughly the same, for example:
> >
> >     resources_used.mem = 2213804kb
> >     resources_used.vmem = 2258976kb
> >
> > Beyond that, the vmem increases quickly, e.g., for an 1800-atom
> > system,
> >
> >     resources_used.mem = 3894052kb
> >     resources_used.vmem = 6427792kb
> >
> > It looks like SIESTA knows 4 GB is the limit for my machine and
> > starts using swap. Is that right? Do you know how to cut the memory
> > usage to fit my machine? It really slows things down a lot...
> >
> > Thanks,
> >
> > Cherry




