OK, so I assume that you have (correctly, which is very important!) converted the
Siesta data to either the VASP or the Gaussian format. I will also assume that
you are using the latest version of the Bader code which, according to the
website, includes fixes to some very fundamental things.

I don't know this version in detail, but I got hold of Ed Sanville's
personal code and tried to interface the Siesta data to it. I ran into quite a
few problems at the time, which Ed and I could not really solve properly.
For example, Ed's program needed the core charge density from the
all-electron calculation, and the ones that came out of Siesta's atom
program were not good enough, for whatever reason - from the grid used by
Ed's code, which is very specific to the PLATO code, to some numerical noise
that I saw close to the position of the nucleus, which would lead to an
incorrect assignment of its position. Another thing is that I could not make it
work for non-orthogonal (i.e., monoclinic, hexagonal, etc.) lattices. I was in
Loughborough with Ed and, in the end, we could not make it work properly in the
three days I spent there.

Now, I am not completely sure of what I am going to say here, but I think
that in the case of the UTexas code the fineness of the mesh you use for
writing the charge density could be an issue. In Ed Sanville's program, the
all-electron charge densities provided additional information on the location
of the nuclear positions through the calculation of gradients and
Laplacians, which prevented an incorrect assignment of the Bader volumes.
This is not present in the UTexas code, so I think a very fine mesh could be
of great help.

One good thing to do, if you can, would be to perform a Gaussian or VASP
calculation for this simple system - which is already problematic enough
because of the H atoms! - check that it gives good results, then find out
the minimum mesh fineness you need (still in VASP or Gaussian), and finally
set a mesh cutoff (now in Siesta) such that the mesh divisions of your cell
are comparable to those of VASP or Gaussian. You can relax using the usual
200-300 Ry mesh cutoff, but a charge density written onto a much finer grid
(800 Ry and up) will probably work better. If you don't have access to either
VASP or Gaussian, then you could simply perform a convergence study of the
Bader charges as a function of the mesh cutoff:

1 - Relax your system with the usual 200-300 Ry mesh cutoff; you can
already use the grid generated by this cutoff as the initial step.
2 - After that, vary the mesh cutoff, allowing a full SCF cycle to complete
(but do not relax further), save the charge densities, and then do a Bader
calculation for each of the computed grids (a rough sketch of such a loop
follows below).
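
Just to make that recipe concrete, here is a minimal Python sketch of such a
convergence loop. It is only a sketch under assumptions of mine: the
SystemLabel is "mysystem", there is a template.fdf with the relaxed geometry
and an existing MeshCutoff line, siesta and the UTexas bader executable are in
your PATH, and "rho_to_cube" is just a placeholder for whatever .RHO-to-cube
conversion you already use (for instance the grid2cube utility in Siesta's
Util/ directory); adapt the names and invocations to your own setup.

#!/usr/bin/env python
# Bader-charge convergence vs. MeshCutoff - a rough sketch, not a polished tool.
# Assumptions: SystemLabel "mysystem", a template.fdf (relaxed geometry, with a
# MeshCutoff line to replace), siesta and bader in the PATH, and "rho_to_cube"
# standing in for your own .RHO-to-cube converter (e.g. grid2cube from Util/).
import re
import shutil
import subprocess

cutoffs = [300, 400, 500, 600, 800]   # Ry values to scan

for ec in cutoffs:
    # Take the input with the relaxed geometry and change only the mesh cutoff.
    fdf = open("template.fdf").read()
    fdf = re.sub(r"MeshCutoff\s+\S+\s*Ry", "MeshCutoff %d. Ry" % ec, fdf)
    fdf += "\nMD.NumCGsteps 0\n"        # SCF only, do not relax further
    fdf += "SaveRho true\n"             # write mysystem.RHO on this grid
    open("mysystem.fdf", "w").write(fdf)

    subprocess.check_call("siesta < mysystem.fdf > mysystem.%d.out" % ec,
                          shell=True)

    # Convert the .RHO file to a Gaussian cube file (placeholder command)
    # and run the UTexas bader program on it.
    subprocess.check_call("rho_to_cube mysystem.RHO mysystem.%d.cube" % ec,
                          shell=True)
    subprocess.check_call(["bader", "mysystem.%d.cube" % ec])

    # The bader code writes the per-atom charges to ACF.dat; keep a copy.
    shutil.copy("ACF.dat", "ACF.%d.dat" % ec)

When the charges in the successive ACF files stop changing (to within, say,
0.01 e), you can take that mesh cutoff as converged for the Bader analysis.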

Please let us know if your efforts are successful - this is a problem of
great interest for everyone on the list, since Mulliken charges can be
completely unreliable.

Good luck,

Marcos

On Fri, May 7, 2010 at 3:36 AM, Fen Hong <gm030...@gmail.com> wrote:

> Marcos,
> I use this http://theory.cm.utexas.edu/bader/
> Linux x86 (ifort): http://theory.cm.utexas.edu/bader/download/bader_lnx_32.tar.gz
> there is no need to compile it, just decompression is ok