Hi Heshan,

Thank you for replying so promptly.

There are a number of items that are confusing; perhaps you could
address each of them individually.

First, for context: I am running on a parallel file system and searching the full NT database.

1)  The README for mpiblast says you need to run mpiformatdb first
(with arguments to specify the fragment size, number of fragments, etc.).
However, your paper "Efficient Data Access for Parallel BLAST" states that
one of the goals of mpiblast-PIO was to avoid this and instead do dynamic
partitioning. So exactly what is required in terms of pre-partitioning?
(My assumed invocation is sketched below.)
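
For concreteness, here is the sort of mpiformatdb command I have been
assuming based on the mpiblast 1.4 README (the fragment count of 32 is an
arbitrary placeholder; please correct me if the PIO release uses different
flags):

    # Format NT as a nucleotide (-p F) database split into 32 fragments
    mpiformatdb -N 32 -i nt -p F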

2)  I assume all the documentation that is relevant specifically to
mpiblast-PIO is in the first 43 lines of your README, starting at
"Changes between 1.4.0 and 1.4.0-pio test release".  Is that correct?

3)  Regarding Section 4 of your paper: you indicate that, at the time the
paper was written, pioBLAST could not handle very large databases. Has that
issue been addressed? If so, what do I need to do to handle NT?

4) mpiblast requires that 2 extra processes be allocated to handle I/O and
master/slave coordination. Does mpiblast-PIO still require the extra
processes, or is all of that transparent now? (See the sizing example below.)
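
To be explicit about how I have been sizing runs with classic mpiblast
(assuming the two extra processes are still needed):

    # 16 fragment workers + 2 extra processes = 18 MPI processes (my assumption)
    mpirun -np 18 mpiblast -p blastn -d nt -i query.fa -o results.txt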

5)  I will be running on a large cluster with 4 CPUs per node; each node is
diskless, since we have a parallel file system. How does your code decide how
big to make a fragment so that it is cache friendly? Or do I specify the
fragment size myself? If I specify it, are there any guidelines for sizing a
fragment relative to total memory? (All 4 CPUs share the memory on a node;
the arithmetic I have in mind is sketched below.)
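
To make that concrete, here is the back-of-the-envelope sizing I have been
assuming (all numbers are hypothetical):

    # Suppose each node has 8 GB of RAM shared by 4 CPUs (one worker per CPU)
    # and the formatted NT database is roughly 20 GB:
    #   per-worker memory budget = 8 GB / 4      = 2 GB
    #   minimum fragment count   = 20 GB / 2 GB  = 10 fragments
    # In practice more fragments would be needed to leave headroom for
    # BLAST's own working set.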

6) Do you know if anyone at LLNL has any experience using mpiblast-PIO?

7) Regarding the --enable-mpi-atomicity flag, you say "Use this option if
missing data is observed in the output file." That sentence does not make
sense to me: how can you observe missing data? If you mean that we should run
everything with regular BLAST as well as with mpiblast-PIO and then compare
the output, that is a bit absurd; we need correct results that we can rely on
without a second reference run. Is there a performance penalty or some other
disadvantage to using the atomicity flag? If not, then why not make it the
default? (My assumed usage is sketched below.)
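
For reference, this is where I assume the flag belongs, given your note that
it applies to the configuration process (please correct me if it is actually
a runtime option):

    # My assumption: atomicity is enabled when configuring the build
    ./configure --enable-mpi-atomicity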

8) Is there a current paper published on mpiblast-PIO (later than the one
quoted above)?

9) Am I missing something that answers my questions?

It looks like the mpiblast-PIO project is very valuable, and I appreciate all
the work you and your colleagues have done. However, I believe it deserves
some simple documentation on how to use it, unless I'm missing something.

Thanks so much.

Peter




[EMAIL PROTECTED] wrote:

Peter,

The usage of mpiblast-pio is very similar to that of mpiblast 1.4. The most
important difference is the extra execution flag "--use-master-write". You
need to set this flag when running mpiblast-pio on non-parallel file systems.
An optional flag "--enable-mpi-atomicity" and a macro "CONFIG_LIBC" may also
apply to the configuration process, depending on the platform. Their usage is
explained in the README file in the mpiblast-pio package.
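
For example, on a non-parallel (e.g. NFS-mounted) file system the invocation
would look something like this (the flags other than --use-master-write are
the usual blastall-style mpiblast options):

    mpirun -np 18 mpiblast -p blastn -d nt -i query.fa -o results.txt \
        --use-master-write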

One reminder: mpiblast-pio currently supports only the OCT_2004 NCBI toolbox.
Support for newer versions of the NCBI toolbox will be included in the next
release. Let me know if you have further questions.

Thanks,
Heshan


Where can I find documentation on how to use mpiblast-PIO?

The README and INSTALL files do not have this information,
nor does the document "mpiBLAST 1.4 - pio Design Document" by
Heshan Lin.

Thanks,

Peter Williams
===============================================
Peter L. Williams, PhD        email: [EMAIL PROTECTED]  phone: 925-422-3832
Computer Scientist   Center for Applied Scientific Computing,
Mail Stop L-560   Lawrence Livermore National Laboratory
Livermore, CA 94550, USA



