Dear Marcos,
Thank you very much for your guidance.
The OS was reinstalled rather than upgraded, so after installing the OS,
MPICH, the MKL libraries, etc. were all recompiled (I use MKL).
It would be strange if this were due to the OS; we use the same type of
OS, only a newer version.
We export the paths of the MPI and MKL libraries and set 'ulimit -s
unlimited' in my .bashrc, roughly as in the sketch below (the install
paths there are only placeholders, not our exact ones).
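# sketch of the environment setup in .bashrc; paths are placeholders
export MPICH_HOME=/opt/mpich2-1.0.8
export MKLROOT=/opt/intel/cmkl/10.1.1.019
export PATH=$MPICH_HOME/bin:$PATH
export LD_LIBRARY_PATH=$MKLROOT/lib/em64t:$MPICH_HOME/lib:$LD_LIBRARY_PATH
ulimit -s unlimited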
Do you think I should change some FLAGS? Could this affect the result?
For convenience, the arch.make is attached again below.
Thank you all for your time.
Best regards,
Shizheng

PS:
The MPI is mpich2-1.0.8, with ifort v10.1 and MKL 10.1.1.019.

The arch.make is:

#
# This file is part of the SIESTA package.
#
# Copyright (c) Fundacion General Universidad Autonoma de Madrid:
# E.Artacho, J.Gale, A.Garcia, J.Junquera, P.Ordejon, D.Sanchez-Portal
# and J.M.Soler, 1996- .
#
# Use of this software constitutes agreement with the full conditions
# given in the SIESTA license, as signed by all legitimate users.
#
.SUFFIXES:
.SUFFIXES: .f .F .o .a .f90 .F90

SIESTA_ARCH=intel64_RHEL5.4

FPP=
FPP_OUTPUT=
FC=mpif90
RANLIB=ranlib

SYS=nag

SP_KIND=4
DP_KIND=8
KINDS=$(SP_KIND) $(DP_KIND)

FFLAGS=-g -O1 -static
FFLAGS_DEBUG=-g
FFLAGS_CHECKS=-g -O0 -debug full -traceback -C
FPPFLAGS= -DMPI -DFC_HAVE_FLUSH -DFC_HAVE_ABORT
LDFLAGS=-Vaxlib -static

ARFLAGS_EXTRA=

FCFLAGS_fixed_f=
FCFLAGS_free_f90=
FPPFLAGS_fixed_F=
FPPFLAGS_free_F90=

SCALAPACK=-L/opt/intel/cmkl/10.1.1.019/lib/em64t -lmkl_scalapack_ilp64
SOLVER=-L/opt/intel/cmkl/10.1.1.019/lib/em64t -lmkl_solver_ilp64
BLACS= -L/opt/intel/cmkl/10.1.1.019/lib/em64t -lmkl_blacs_ilp64
LAPACK=-L/opt/intel/cmkl/10.1.1.019/lib/em64t -lmkl_lapack
ITHREAD=-L/opt/intel/cmkl/10.1.1.019/lib/em64t -lmkl_sequential
ICORE=-L/opt/intel/cmkl/10.1.1.019/lib/em64t -lmkl_core
BLAS= -L/opt/intel/cmkl/10.1.1.019/lib/em64t -lmkl_intel_ilp64
GUIDE=-L/opt/intel/cmkl/10.1.1.019/lib/em64t -lguide
PTHREAD=-lpthread

COMP_LIBS=

NETCDF_LIBS=
NETCDF_INTERFACE=

LIBS=$(SCALAPACK) $(SOLVER) $(BLACS) $(LAPACK) $(ITHREAD) $(ICORE) \
     $(BLAS) $(GUIDE) $(PTHREAD)

#SIESTA needs an F90 interface to MPI
#This will give you SIESTA's own implementation
#If your compiler vendor offers an alternative, you may change
#to it here.
MPI_INTERFACE=libmpi_f90.a
MPI_INCLUDE=/usr/local/include

#Dependency rules are created by autoconf according to whether
#discrete preprocessing is necessary or not.
.F.o:
       $(FC) -c $(FFLAGS) $(INCFLAGS) $(FPPFLAGS) $(FPPFLAGS_fixed_F)  $<
.F90.o:
       $(FC) -c $(FFLAGS) $(INCFLAGS) $(FPPFLAGS) $(FPPFLAGS_free_F90) $<
.f.o:
       $(FC) -c $(FFLAGS) $(INCFLAGS) $(FCFLAGS_fixed_f)  $<
.f90.o:
       $(FC) -c $(FFLAGS) $(INCFLAGS) $(FCFLAGS_free_f90)  $<

2010/6/9 Marcos Veríssimo Alves <marcos.verissimo.al...@gmail.com>:
> Shizheng,
>
> If you have upgraded your OS, then you will likely have to
> recompile mpich, scalapack and blacs from scratch. As far as I know, siesta
> has no problems with mpich in general.
>
> Cheers,
>
> Marcos
>
> On Jun 9, 2010 1:58 PM, "shizheng wen" <chsz...@gmail.com> wrote:
>
> Dear Marcos,
> Thank you very much!
> First, I tried to run a small system, C6H4N2H4, with mpirun -np 2
> siesta < input.fdf > output.out.
> The same error appeared.
> Then I recompiled the program and ran it again. The same!
> The only difference from before is that we use a newer version of
> Linux: Red Hat Enterprise Linux Server release 5.3.
> Does siesta have some problem with mpich2-1.0.8?
> On the mailing list it was said that there is a similar problem with
> version 11 of the Intel compiler, but not with 10.1.
>
> Thank you very much in advance!
>
> Best regards,
> Shizheng
>
> PS:
>
> The input file:
>
> # test mpi
>
> SystemName         diam
> SystemLabel        diam_opt
>
>
> NumberOfAtoms        16
> NumberOfSpecies      3
>
> %block ChemicalSpeciesLabel
>   1  6   C
>   2  7   N
>   3  1   H
> %endblock ChemicalSpeciesLabel
>
> PAO.BasisSize       DZP
>
> #LatticeConstant       1. Ang
>
> #%block LatticeVectors
> #    4.16   0.0000   0.0000
> #    0.0000   4.16   0.0000
> #    0.0000   0.0000   8.32
> #%endblock LatticeVectors
>
> AtomicCoordinatesFormat  Ang
>
> %block AtomicCoordinatesAndAtomicSpecies
>   0.696236   -1.200334   -0.005944   1
>  -0.696186   -1.200391    0.005941   1
>  -1.421078   -0.000046    0.011328   1
>  -0.696242    1.200335    0.005889   1
>   0.696193    1.200390   -0.005947   1
>   1.421078    0.000053   -0.011397   1
>   1.229238   -2.148265   -0.017528   3
>  -1.229144   -2.148338    0.017781   3
>  -1.229228    2.148274    0.017497   3
>   1.229131    2.148350   -0.017724   3
>   2.827689    0.000037   -0.090437   2
>   3.253412   -0.829442    0.304092   3
>  -2.827680   -0.000084    0.090510   2
>  -3.253443    0.829583   -0.303581   3
>  -3.253580   -0.828983   -0.305057   3
>   3.253544    0.829123    0.304785   3
> %endblock AtomicCoordinatesAndAtomicSpecies
>
>
> #%block kgrid_Monkhorst_Pack
> #   20    0     0    0.0
> #    0   20     0    0.0
> #    0    0    20    0.0
> #%endblock kgrid_Monkhorst_Pack
>
> xc.functional           GGA      # 'LDA', 'GGA'
> xc.authors              PBE      # 'CA'='PZ', 'PW92', 'PBE'
> SpinPolarized           F
> FixSpin                 F
> TotalSpin               0.0
> NonCollinearSpin        F
> MeshCutoff              300.0 Ry
> MaxSCFIterations        500
>
> DM.MixingWeight           0.1
> DM.NumberPulay            7       # Pulay convergency accelerator
> DM.MixSCF1                F
> DM.PulayOnFile            F       # Store in memory ('F') or in files ('T')
> DM.Tolerance              5.0E-5
> DM.UseSaveDM              T
>
>
> NeglNonOverlapInt          T      # F does not neglect
> SolutionMethod             diagon
> ElectronicTemperature      300 K  # Default value
>
>
> MD.TypeOfRun                    CG
> #MD.VariableCell                 T
> #MD.PreconditionVariableCell     5.    Ang
> MD.NumCGsteps                  500
> MD.MaxCGDispl                   0.01  Ang
> MD.MaxForceTol                  0.02  eV/Ang
>
>
> WriteCoorXmol                   F
> SaveElectrostaticPotential      T
> SaveHS                          F     # Save the Hamiltonian and Overlap matrices
> SaveRho                         F     # Save the valence pseudocharge density
> SaveDeltaRho                    F
> WriteDenchar                    F     # Write Denchar output
> WriteDMT                        T
> WriteEigenvalues                T
> WriteMullikenPop                1
> LongOutput                      T
> UseSaveData                     T
> WriteDM                         T
>
> last part of the output file:
>
> InitMesh: MESH =   144 x   108 x    72 =     1119744
> InitMesh: Mesh cutoff (required, used) =   300.000   309.331 Ry
>
> * Maximum dynamic memory allocated =    68 MB
> rank 1 in job 3  pc2_40434   caused collective abort of all ranks
>
> exit status of rank 1: return code 1
>
> rank 0 in job 3  pc2_40434   caused collective abort of all ranks
>
> exit status of rank 0: return code 1
>
> And in the screen is:
>
> Fatal error in MPI_Comm_size: Invalid communicator, error stack:
> MPI_Comm_size(112): MPI_Comm_size(...
>
> MPI_Comm_size(70).: Invalid communicator 98.9  0.4   0:06.55
>
> Fatal error in MPI_Comm_size: Invalid communicator, error stack:
>
> MPI_Comm_size(112): MPI_Comm_size(comm=0x5b, size=0x13c46e8) failedion/0
> MPI_Comm_size(70).: Invalid communicator[cli_1]: aborting job:softirqd/0
> Fatal error in MPI_Comm_size: Invalid communicator, error stack:chdog/0
> MPI_Comm_size(112): MPI_Comm_size(comm=0x5b, size=0x13c46e8) failedion/1
> MPI_Comm_size(70).: Invalid communicator  0.0  0.0   0:00.00
>
> 2010/6/9 Marcos Veríssimo Alves <marcos.verissimo.al...@gmail.com>:
>
>> Shizheng,
>> Have you tried it for small systems? 12 GB Ram for Siesta is
>> enough for running syst...
>
> --
>
> Sincerely,
> Shizheng Wen
> Dept. of Chem., NENU.
> Changchun, Jilin, China



-- 
Sincerely,
Shizheng Wen
Dept. of Chem., NENU.
Changchun, Jilin, China
