It should be about the same in mpich or openmpi. I think these days you just pass a --with-pvfs2=<path_to_your_pvfs2_installation_directory> argument at configure time for whichever one you are building, and that's it.
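
For example, something along these lines (the install path is just a placeholder; adjust to wherever your PVFS2/OrangeFS tree is installed):

  # MPICH: pass the flag straight to configure
  ./configure --with-pvfs2=/opt/orangefs ...

  # OpenMPI: forward the same flag to ROMIO
  ./configure --with-io-romio-flags="--with-pvfs2=/opt/orangefs" ...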

-Phil

On 07/18/2013 11:04 AM, Becky Ligon wrote:
Yves:

Take a look at www.orangefs.org/documentation. There is a section on setting up ROMIO with PVFS in the installation guide that might help you. Be aware, though, that this documentation is old. You won't need to use the patches, but the basic process is the same, as far as I know.

Hope this helps!
Becky


On Thu, Jul 18, 2013 at 2:39 AM, Yves Revaz <[email protected]> wrote:


    Ok, thanks a lot, Phil! This helped me a lot.

    When using the prefix "ufs:", both my mpich2 and openmpi
    work fine now. However, they still fail with the "pvfs2:" prefix,
    so I guess neither has been compiled with pvfs2 support.

    I recompiled openmpi-1.6.5 with the flag
    --with-io-romio-flags="--with-file-system=pvfs2+ufs+nfs"

    But now, even if I use the "ufs:" prefix, I get:

    MPI_File_open(): MPI_ERR_OTHER: known error not in list

    ....

    Do I need to recompile orangefs, specifying
    --with-mpi pointing to my new openmpi-1.6.5 directory?

    Any help welcome,

    yves

    On 07/13/2013 01:28 PM, Phil Carns wrote:

        On 07/12/2013 04:27 AM, Yves Revaz wrote:


            Ok, thanks a lot for your help! Indeed, there is a problem
            when opening the file. Here is the message I get:

            MPI_File_open(): Other I/O error , error stack:
            ADIO_RESOLVEFILETYPE(754): Specified filesystem is not available
            rank 0 in job 1  regor2.obs_51694   caused collective abort of all ranks
              exit status of rank 0: return code 255


            Is there an issue with mpich2?


        I'm not an expert in the error messages that might appear
        here, but it looks like MPICH was not compiled with support
        for PVFS.  You can try a couple of experiments to check:

        - add a "pvfs2:" prefix to the file name.  This will force
        MPICH to use the pvfs2 MPI-IO driver instead of trying to
        detect the file system so that you can be sure of which driver
        it is using (and whether it works or not)

        - if the above doesn't work, then you can add the "ufs:"
        prefix to the file name instead.  This will force MPI-IO to
        use the generic Unix file system driver to access PVFS through
        the PVFS mount point instead of the PVFS library.  It is not
        as efficient, but it will work.
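
        For example, assuming a hypothetical mount point of /mnt/pvfs2,
        the two variants would look like this in the open call (just a
        sketch; substitute your own path):

            MPI_File_open(MPI_COMM_WORLD, "pvfs2:/mnt/pvfs2/testfile",
                          MPI_MODE_WRONLY | MPI_MODE_CREATE,
                          MPI_INFO_NULL, &myfile);

            MPI_File_open(MPI_COMM_WORLD, "ufs:/mnt/pvfs2/testfile",
                          MPI_MODE_WRONLY | MPI_MODE_CREATE,
                          MPI_INFO_NULL, &myfile);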

            By the way, could you confirm that MPI-IO on PVFS works
            only with mpich2?
            I usually use openmpi.


        OpenMPI will work as well.  It uses essentially the same
        MPI-IO implementation (ROMIO), which includes PVFS support.

        -Phil


            Thanks in advance,

            yves




            On 07/09/2013 04:06 PM, Phil Carns wrote:

                Hi Yves,

                It seems like your MPI_File_open() call is probably
                failing since no file is being created at all, but you
                aren't checking the return code of MPI_File_open() so
                it is hard to tell what the problem is.  Could you
                modify your MPI_File_open() call to be like this:

                int ret;
                int resultlen;
                char msg[MPI_MAX_ERROR_STRING];

                ret = MPI_File_open(MPI_COMM_WORLD, "testfile",
                                    MPI_MODE_WRONLY | MPI_MODE_CREATE,
                                    MPI_INFO_NULL, &myfile);
                if (ret != MPI_SUCCESS)
                {
                    MPI_Error_string(ret, msg, &resultlen);
                    fprintf(stderr, "MPI_File_open(): %s\n", msg);
                    return(-1);
                }

                That will not only check the error code from the open
                call but also print a human-readable error message if it
                fails.

                You will probably want to check the return code of all
                of your other MPI_File_*** calls as well.
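
                For instance, a small helper along these lines (just a
                sketch, not from your code) keeps that from getting too
                verbose:

                /* illustrative helper: check an MPI return code and abort
                 * with a readable message if it indicates failure */
                #define CHECK_MPI(ret)                                    \
                    do {                                                  \
                        if ((ret) != MPI_SUCCESS) {                       \
                            char _msg[MPI_MAX_ERROR_STRING];              \
                            int _len;                                     \
                            MPI_Error_string((ret), _msg, &_len);         \
                            fprintf(stderr, "MPI error: %s\n", _msg);     \
                            MPI_Abort(MPI_COMM_WORLD, 1);                 \
                        }                                                 \
                    } while (0)

                /* usage: */
                CHECK_MPI(MPI_File_write(myfile, buf, BUFSIZE, MPI_INT, &stat));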

                thanks,
                -Phil

                On 07/09/2013 09:44 AM, Yves Revaz wrote:

                    On 07/09/2013 03:39 PM, Wei-keng Liao wrote:

                        Are you running from a PVFS directory?
                        If so, please run the commands "mount" and
                        "pwd" to confirm that.

                        I usually use the full file path name in
                        MPI_File_open().

                    ok, I just tried, but the problem remains....



                        Wei-keng

                        On Jul 9, 2013, at 5:10 AM, Yves Revaz wrote:

                            Dear Wei-keng,

                            Thanks for the corrections. I improved the
                            code and ran it again,
                            and I get the following output:


                            PE00: write at offset = 0
                            PE00: write at offset = 10
                            PE00: write at offset = 20
                            PE00: write at offset = 30
                            PE00: write count = 0
                            PE00: write count = 0
                            PE00: write count = 0
                            PE00: write count = 0

                            and in addition, the output file is still
                            not created!
                            So I really suspect an issue with the
                            parallel file system.

                            yves



                            On 07/08/2013 06:15 PM, Wei-keng Liao wrote:

                                    MPI_Exscan(&b, &offset, 1, MPI_INT,
                                               MPI_SUM, MPI_COMM_WORLD);

                                Your offset is of type MPI_Offset, an
                                8-byte integer. Variable b is of type
                                int, a 4-byte integer.

                                Please try changing the data type of b
                                to MPI_Offset and use
                                    MPI_Exscan(&b, &offset, 1, MPI_OFFSET,
                                               MPI_SUM, MPI_COMM_WORLD);
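
                                If it helps, a minimal sketch of that
                                change (keeping the rest of the program
                                as-is) would be:

                                    /* declare b and offset as MPI_Offset
                                       instead of int */
                                    MPI_Offset b, offset;

                                    offset = 0;  /* MPI_Exscan does not define
                                                    the result on rank 0, so keep
                                                    this initialization */
                                    b = BUFSIZE;
                                    MPI_Exscan(&b, &offset, 1, MPI_OFFSET,
                                               MPI_SUM, MPI_COMM_WORLD);
                                    disp = offset * (MPI_Offset)sizeof(int);
                                    printf("PE%2.2i: write at offset = %lld\n",
                                           me, (long long)offset);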


                                Wei-keng

                                On Jul 8, 2013, at 10:58 AM, Yves
                                Revaz wrote:

                                    Dear List,

                                    I'm facing a problem with the
                                    mpi-io on pvfs.
                                    I'm using orangefs-2.8.7 on 6
                                    server nodes.

                                    I recently tried to play with
                                    mpi-io as pvfs is designed to
                                    support parallel access to a
                                    single file.

                                    I used the following simple code
                                    (see below), where different
                                    processes open a file and each
                                    writes at a different position in
                                    the file.

                                    I compiled it with mpich2, as pvfs
                                    seems to support only this version
                                    for the parallel access facilities,
                                    right?

                                    When running my test code on a
                                    classical file system, the code works
                                    perfectly and I get the following
                                    output:

                                    mpirun -n 4 ./a.out
                                    PE03: write at offset = 30
                                    PE03: write count = 10
                                    PE00: write at offset = 0
                                    PE00: write count = 10
                                    PE01: write at offset = 10
                                    PE01: write count = 10
                                    PE02: write at offset = 20
                                    PE02: write count = 10

                                    as expected a file "testfile" is
                                    created.
                                    However, the same code acessing my
                                    pvfs file system gives:

                                    mpirun -n 4 ./a.out
                                    PE00: write at offset = 0
                                    PE00: write count = -68457862
                                    PE02: write at offset = 20
                                    PE02: write count = 32951150
                                    PE01: write at offset = 10
                                    PE01: write count = -110085322
                                    PE03: write at offset = 30
                                    PE03: write count = -268114218

                                    and no file "testfile" is created.

                                    Am I doing something wrong? Do I
                                    need to compile orangefs or mpich2
                                    with particular options?

                                    Thanks a lot for your help,

                                    yves



                                    My simple code:
                                    ---------------------

                                    #include "mpi.h"
                                    #include<stdio.h>
                                    #define BUFSIZE 10

                                    int main(int argc, char *argv[])
                                    {
                                         int i,  me, buf[BUFSIZE];
                                         int b, offset, count;
                                         MPI_File myfile;
                                         MPI_Offset disp;
                                         MPI_Status stat;

                                         MPI_Init(&argc,&argv);
MPI_Comm_rank(MPI_COMM_WORLD,&me);
                                         for (i=0; i<BUFSIZE; i++)
                                             buf[i] = me*BUFSIZE + i;

                                         MPI_File_open(MPI_COMM_WORLD,
                                    "testfile", MPI_MODE_WRONLY |
                                    MPI_MODE_CREATE,
                                     MPI_INFO_NULL,&myfile);

                                         offset = 0;
                                         b = BUFSIZE;
                                         MPI_Exscan(&b,&offset, 1,
                                    MPI_INT, MPI_SUM,MPI_COMM_WORLD);
                                         disp = offset*sizeof(int);
                                         printf("PE%2.2i: write at
                                    offset = %d\n", me, offset);

                                         MPI_File_set_view(myfile, disp,
                                                           MPI_INT,
                                    MPI_INT, "native", MPI_INFO_NULL);
                                         MPI_File_write(myfile, buf,
                                    BUFSIZE, MPI_INT,
                                    &stat);
                                         MPI_Get_count(&stat,
                                    MPI_INT,&count);
                                         printf("PE%2.2i: write count
                                    = %d\n", me, count);
                                         MPI_File_close(&myfile);

                                         MPI_Finalize();

                                         return 0;
                                    }

    --
    ---------------------------------------------------------------------
      Dr. Yves Revaz
      Laboratory of Astrophysics
      Ecole Polytechnique Fédérale de Lausanne (EPFL)
      Observatoire de Sauverny     Tel : +41 22 379 24 28
      51. Ch. des Maillettes       Fax : +41 22 379 22 05
      1290 Sauverny                e-mail : [email protected]
      SWITZERLAND                  Web : http://people.epfl.ch/yves.revaz
    ---------------------------------------------------------------------

--
Becky Ligon
OrangeFS Support and Development
Omnibond Systems
Anderson, South Carolina

_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
