@gus
We are not able to build HPL successfully.

I think it has something to do with BLAS.

I cannot find a BLAS tar file on the net; I found an RPM, but the
installation steps are written for a tar file.

*locate blas* gave us the following result:

*[root@ccomp1 hpl]# locate blas
/hpl/include/hpl_blas.h
/hpl/makes/Make.blas
/hpl/src/blas
/hpl/src/blas/HPL_daxpy.c
/hpl/src/blas/HPL_dcopy.c
/hpl/src/blas/HPL_dgemm.c
/hpl/src/blas/HPL_dgemv.c
/hpl/src/blas/HPL_dger.c
/hpl/src/blas/HPL_dscal.c
/hpl/src/blas/HPL_dswap.c
/hpl/src/blas/HPL_dtrsm.c
/hpl/src/blas/HPL_dtrsv.c
/hpl/src/blas/HPL_idamax.c
/hpl/src/blas/ccomp
/hpl/src/blas/i386
/hpl/src/blas/ccomp/Make.inc
/hpl/src/blas/ccomp/Makefile
/hpl/src/blas/i386/Make.inc
/hpl/src/blas/i386/Makefile
/usr/include/boost/numeric/ublas
/usr/include/boost/numeric/ublas/banded.hpp
/usr/include/boost/numeric/ublas/blas.hpp
/usr/include/boost/numeric/ublas/detail
/usr/include/boost/numeric/ublas/exception.hpp
/usr/include/boost/numeric/ublas/expression_types.hpp
/usr/include/boost/numeric/ublas/functional.hpp
/usr/include/boost/numeric/ublas/fwd.hpp
/usr/include/boost/numeric/ublas/hermitian.hpp
/usr/include/boost/numeric/ublas/io.hpp
/usr/include/boost/numeric/ublas/lu.hpp
/usr/include/boost/numeric/ublas/matrix.hpp
/usr/include/boost/numeric/ublas/matrix_expression.hpp
/usr/include/boost/numeric/ublas/matrix_proxy.hpp
/usr/include/boost/numeric/ublas/matrix_sparse.hpp
/usr/include/boost/numeric/ublas/operation.hpp
/usr/include/boost/numeric/ublas/operation_blocked.hpp
/usr/include/boost/numeric/ublas/operation_sparse.hpp
/usr/include/boost/numeric/ublas/storage.hpp
/usr/include/boost/numeric/ublas/storage_sparse.hpp
/usr/include/boost/numeric/ublas/symmetric.hpp
/usr/include/boost/numeric/ublas/traits.hpp
/usr/include/boost/numeric/ublas/triangular.hpp
/usr/include/boost/numeric/ublas/vector.hpp
/usr/include/boost/numeric/ublas/vector_expression.hpp
/usr/include/boost/numeric/ublas/vector_of_vector.hpp
/usr/include/boost/numeric/ublas/vector_proxy.hpp
/usr/include/boost/numeric/ublas/vector_sparse.hpp
/usr/include/boost/numeric/ublas/detail/concepts.hpp
/usr/include/boost/numeric/ublas/detail/config.hpp
/usr/include/boost/numeric/ublas/detail/definitions.hpp
/usr/include/boost/numeric/ublas/detail/documentation.hpp
/usr/include/boost/numeric/ublas/detail/duff.hpp
/usr/include/boost/numeric/ublas/detail/iterator.hpp
/usr/include/boost/numeric/ublas/detail/matrix_assign.hpp
/usr/include/boost/numeric/ublas/detail/raw.hpp
/usr/include/boost/numeric/ublas/detail/returntype_deduction.hpp
/usr/include/boost/numeric/ublas/detail/temporary.hpp
/usr/include/boost/numeric/ublas/detail/vector_assign.hpp
/usr/lib/libblas.so.3
/usr/lib/libblas.so.3.1
/usr/lib/libblas.so.3.1.1
/usr/lib/openoffice.org/basis3.0/share/gallery/htmlexpo/cublast.gif
/usr/lib/openoffice.org/basis3.0/share/gallery/htmlexpo/cublast_.gif
/usr/share/backgrounds/images/tiny_blast_of_red.jpg
/usr/share/doc/blas-3.1.1
/usr/share/doc/blas-3.1.1/blasqr.ps
/usr/share/man/manl/intro_blas1.l.gz*
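For what it's worth, the locate output above already shows a BLAS runtime installed
from the RPM (/usr/lib/libblas.so.3.1.1), so no tar file should be needed. What HPL's
link step usually wants is the development package, which provides the unversioned
libblas.so symlink and/or the static libblas.a. A rough sketch, assuming a yum-based
system like Fedora/CentOS (package names may differ on your distribution):

```shell
# The shared library is already installed from the RPM:
#   /usr/lib/libblas.so.3.1.1
# The -devel package adds the unversioned symlink / static archive that a
# link line such as "-lblas" or "LAlib = .../libblas.a" expects.
yum install blas blas-devel

# Verify what got installed:
ls -l /usr/lib/libblas.*
```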

When we try to build using the following command:
*# make arch=ccomp*

it gives this error:
*Makefile:47: Make.inc: No such file or directory
make[2]: *** No rule to make target `Make.inc'.  Stop.
make[2]: Leaving directory `/hpl/src/auxil/ccomp'
make[1]: *** [build_src] Error 2
make[1]: Leaving directory `/hpl'
make: *** [build] Error 2*
The *ccomp* folder is created, but the *xhpl* binary is not.
Is there some problem with the config file?
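The "Make.inc: No such file or directory" error is usually not a BLAS problem at all:
each per-architecture directory under src/ gets a Make.inc link pointing at a
top-level Make.<arch> file, and *make arch=ccomp* can only create those links if
/hpl/Make.ccomp exists. A sketch of the usual fix, assuming the standard HPL source
layout (the template name below is only an example; pick whichever file in setup/ is
closest to your machine):

```shell
cd /hpl
# Start from one of the shipped templates in setup/ -- the exact name here
# is an example; choose the one matching your compiler and BLAS.
cp setup/Make.Linux_PII_CBLAS Make.ccomp

# Edit Make.ccomp and set at least:
#   ARCH   = ccomp
#   TOPdir = /hpl
#   MPdir / MPlib  -> where your MPI is installed
#   LAdir / LAlib  -> where your BLAS is installed (e.g. /usr/lib)

make arch=ccomp clean_arch_all   # recreates the per-directory Make.inc links
make arch=ccomp                  # should end with bin/ccomp/xhpl
```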




On Wed, Apr 22, 2009 at 11:40 AM, Ankush Kaul <ankush.rk...@gmail.com> wrote:

> I feel the above problem occurred due to installing the MPICH package; now
> even normal MPI programs are not running.
> What should we do? We even tried *yum remove mpich*, but it says there are
> no packages to remove.
> Please help!
>
>   On Wed, Apr 22, 2009 at 11:34 AM, Ankush Kaul <ankush.rk...@gmail.com> wrote:
>
>> We are facing another problem: we were trying to install different
>> benchmarking packages.
>>
>> Now whenever we try to run the *mpirun* command (which was working
>> perfectly before) we get this error:
>> *usr/local/bin/mpdroot: open failed for root's mpd conf filempdtrace
>> (__init__ 1190): forked process failed; status=255*
>>
>> What's the problem here?
>>
>>
>>
>> On Tue, Apr 21, 2009 at 11:45 PM, Gus Correa <g...@ldeo.columbia.edu> wrote:
>>
>>> Hi Ankush
>>>
>>> Ankush Kaul wrote:
>>>
>>>> @Eugene
>>>> They are OK, but we wanted something better, which would more clearly
>>>> show the difference between using a single PC and the cluster.
>>>>
>>>> @Prakash
>>>> I had a problem running the programs, as they were compiling with mpcc
>>>> and not mpicc.
>>>>
>>>> @gus
>>>> We are trying to figure out the HPL config; it's quite complicated.
>>>>
>>>
>>> I sent you some sketchy instructions to build HPL,
>>> on my last message to this thread.
>>> I built HPL and ran it here yesterday that way.
>>> Did you try my suggestions?
>>> Where did you get stuck?
>>>
>>>> Also, the locate command lists lots of confusing results.
>>>>
>>> I would say the list is just long, not really confusing.
>>> You can find what you need if you want.
>>> Pipe the output of locate through "more", and search carefully.
>>> If you are talking about BLAS try "locate libblas.a" and
>>> "locate libgoto.a".
>>> Those are the libraries you need, and if they are not there
>>> you need to install one of them.
>>> Read my previous email for details.
>>> I hope it will help you get HPL working, if you are interested on HPL.
>>>
>>> I hope this helps.
>>>
>>> Gus Correa
>>> ---------------------------------------------------------------------
>>> Gustavo Correa
>>> Lamont-Doherty Earth Observatory - Columbia University
>>> Palisades, NY, 10964-8000 - USA
>>> ---------------------------------------------------------------------
>>>
>>>> @jeff
>>>> I think you are correct; we may have installed Open MPI without VT
>>>> support, but is there anything we can do now?
>>>>
>>>> One more thing: I found this program but don't know how to run it:
>>>> http://www.cis.udel.edu/~pollock/367/manual/node35.html
>>>>
>>>> Thanks to all you guys for putting in so much effort to help us out.
>>>>
>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> users mailing list
>>>> us...@open-mpi.org
>>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>>>
>>>
>>>
>>
>>
>
