Hi, Becky
Thanks for the update.
I have another Linux box that runs an older Linux kernel, 2.6.23.9-85.fc8.
When I tried your experiment 1 (i.e., "ls" a non-existent PVFS file, create
the file using pvfs2-touch, and then "ls" the file again),
I got the error below.
data ::~(4:33pm) #1010 ls /orangefs/wkliao/tmpfile
ls: cannot access /orangefs/wkliao/tmpfile: No such file or directory
data ::~(4:33pm) #1011 pvfs2-touch /orangefs/wkliao/tmpfile
data ::~(4:34pm) #1012 ls /orangefs/wkliao/tmpfile
ls: cannot access /orangefs/wkliao/tmpfile: No such file or directory
Then I tried your experiment 2 as root. The same error occurred.
[root@data wkliao]# echo 3 > /proc/sys/vm/drop_caches
[root@data wkliao]# ls /orangefs/wkliao/tmpfile2
ls: cannot access /orangefs/wkliao/tmpfile2: No such file or directory
[root@data wkliao]# pvfs2-touch /orangefs/wkliao/tmpfile2
[root@data wkliao]# ls /orangefs/wkliao/tmpfile2
ls: cannot access /orangefs/wkliao/tmpfile2: No such file or directory
[root@data wkliao]# cat /proc/sys/vm/drop_caches
3
It looks like the problem also occurs in kernel 2.6.23.9-85.fc8.
As for bullet 3 of your additional information: in my original email, I had
problems both with the file name prefixed with "pvfs2:" and with no prefix.
The only case that ran correctly was the one with the "ufs:" prefix.
Wei-keng
On Mar 30, 2015, at 3:48 PM, Becky Ligon wrote:
> Wei-keng:
>
> Just wanted to give you an update: I have discovered that newer kernels
> (i.e., 3.10.0-123.20.1.el7.x86_64, distributed via CentOS 7.0) do not
> exhibit the problem that is found with kernel 2.6.32-504.12.2.el6.x86_64,
> distributed by Scientific Linux 6.6. As I mentioned before, kernel
> 2.6.32-358.14.1.el6.x86_64, distributed by Scientific Linux 6.4, didn't
> exhibit the problem behavior either.
>
> When I run your program coll_write without prefixing the /path/to/filename,
> MPI issues an lstat through the kernel, which passes the request to PVFS.
> PVFS returns "file not found", and the kernel marks that filename as an
> invalid inode, storing this information in its dcache (directory cache).
> MPI then uses the PVFS libraries to complete the creation of the file. The
> libraries bypass the kernel, so the kernel doesn't know that this file has
> been created. Since the inode has been marked invalid in the kernel, the
> "ls" command sees this file as "file-not-found". This kernel behavior is
> wrong: the "ls" *SHOULD* be passed to the PVFS kernel module and on to the
> client core, where a request for this file can be sent to the servers. Since
> I can't change the behavior of the kernel, I am experimenting to see whether
> we need to mark the inode invalid in the first place. This is where my
> efforts have gotten me so far.
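>
> For what it's worth, the failing sequence can be distilled into a few lines
> of C. This is only a sketch of the lookup/create/lookup ordering described
> above (the path is hypothetical, and it assumes pvfs2-touch is on PATH); on
> an affected kernel the second stat() still fails with ENOENT:
>
>     #include <errno.h>
>     #include <stdio.h>
>     #include <stdlib.h>
>     #include <string.h>
>     #include <sys/types.h>
>     #include <sys/stat.h>
>
>     int main(void)
>     {
>         /* hypothetical file on a PVFS mount; adjust to your setup */
>         const char *path = "/orangefs/wkliao/tmpfile";
>         struct stat sb;
>
>         /* 1. lookup through the kernel: PVFS returns ENOENT and the
>          *    kernel caches a negative (invalid) entry for this name */
>         if (stat(path, &sb) != 0)
>             printf("before create: %s\n", strerror(errno));
>
>         /* 2. create the file with the PVFS user-space tools; this
>          *    bypasses the kernel, so the cached negative entry is
>          *    never invalidated */
>         system("pvfs2-touch /orangefs/wkliao/tmpfile");
>
>         /* 3. lookup again: on an affected kernel this still reports
>          *    ENOENT even though the file now exists on the servers */
>         if (stat(path, &sb) != 0)
>             printf("after create: %s\n", strerror(errno));
>         return 0;
>     }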
>
> Additional information:
>
> 1. I can reproduce this problem by issuing an "ls" on a non-existent PVFS
> file, then creating the file using pvfs2-touch, and then trying to "ls" the
> file again.
>
> 2. If you drop the dcache (echo 3 > /proc/sys/vm/drop_caches), you can "ls"
> the newly created file with no problem.
>
> 3. With your coll_write program, if you prefix the file name with pvfs2 or
> ufs ({pvfs2|ufs}:/path/to/filename), the system behaves properly. When the
> pvfs2 prefix is used, nothing goes through the kernel; everything goes
> through the libraries. When the ufs prefix is used, everything goes through
> the kernel and on to PVFS.
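>
> As a concrete illustration (a sketch only, not taken from coll_write.c),
> the prefix is simply part of the file name handed to MPI_File_open; ROMIO
> strips it off and uses it to select the ADIO driver:
>
>     #include <mpi.h>
>
>     int main(int argc, char **argv)
>     {
>         MPI_File fh;
>         MPI_Init(&argc, &argv);
>
>         /* "pvfs2:" routes everything through the PVFS libraries,
>          * "ufs:" routes everything through the kernel, and no prefix
>          * lets ROMIO choose, which triggers the kernel lstat described
>          * above */
>         MPI_File_open(MPI_COMM_WORLD, "pvfs2:/orangefs/wkliao/testfile",
>                       MPI_MODE_CREATE | MPI_MODE_WRONLY,
>                       MPI_INFO_NULL, &fh);
>
>         MPI_File_close(&fh);
>         MPI_Finalize();
>         return 0;
>     }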
>
>
> Becky
>
> On Tue, Mar 17, 2015 at 6:10 PM, Becky Ligon <[email protected]> wrote:
> I'm heading out for the evening but will jump on the problem tomorrow!
>
> Thanks for finding this problem!
>
> Becky
>
> On Tue, Mar 17, 2015 at 6:07 PM, Wei-keng Liao <[email protected]>
> wrote:
> Alright! It is good to know.
> I will stop messing around with my builds and wait for your good news.
> Thanks.
>
> Wei-keng
>
> On Mar 17, 2015, at 5:05 PM, Becky Ligon wrote:
>
> > [bligon@SL6 wkliao]$ ls -l /mnt/test/wkliao
> > ls: cannot access /mnt/test/wkliao/testfile: No such file or directory
> > total 0
> > ?????????? ? ? ? ? ? testfile
> >
> >
> > On Tue, Mar 17, 2015 at 6:04 PM, Becky Ligon <[email protected]> wrote:
> > Okay. I just recreated your problem. There is something amiss with the
> > newer kernel. Let me work on it and get back to you!
> >
> > Becky
> >
> > On Tue, Mar 17, 2015 at 6:01 PM, Wei-keng Liao
> > <[email protected]> wrote:
> > % cat /etc/redhat-release
> > Red Hat Enterprise Linux Server release 6.6 (Santiago)
> >
> >
> > Wei-keng
> >
> > On Mar 17, 2015, at 5:01 PM, Becky Ligon wrote:
> >
> > > issue: cat /etc/redhat-release
> > >
> > > On Tue, Mar 17, 2015 at 6:00 PM, Becky Ligon <[email protected]> wrote:
> > > Yes, but are you running CentOS, SL, ?????
> > >
> > > On Tue, Mar 17, 2015 at 5:54 PM, Wei-keng Liao
> > > <[email protected]> wrote:
> > > The command uname -a shows
> > >
> > > Linux bigdata.eecs.northwestern.edu 2.6.32-504.8.1.el6.x86_64 #1 SMP Fri Dec 19 12:09:25 EST 2014 x86_64 x86_64 x86_64 GNU/Linux
> > >
> > > Wei-keng
> > >
> > > On Mar 17, 2015, at 4:52 PM, Becky Ligon wrote:
> > >
> > > > Good!
> > > >
> > > > I'm working on getting the system up and running with the newer
> > > > kernel.
> > > >
> > > > Which distro are you using?
> > > >
> > > > Becky
> > > >
> > > > On Tue, Mar 17, 2015 at 5:51 PM, Wei-keng Liao
> > > > <[email protected]> wrote:
> > > > Versioning is not an issue, as my older version of orangefs is on a
> > > > different machine.
> > > > This machine is a fresh install for Orangefs, BerkeleyDB, and MPICH.
> > > >
> > > > Wei-keng
> > > >
> > > > On Mar 17, 2015, at 4:48 PM, Becky Ligon wrote:
> > > >
> > > > > Could it be that you have a versioning issue here? Somehow, you have
> > > > > a mix of 2.9.1 and some older version?
> > > > >
> > > > > I was told by our tester that he always tests with --enable-shared.
> > > > > I will try it both ways. Maybe that's not the case!
> > > > >
> > > > > Becky
> > > > >
> > > > > On Tue, Mar 17, 2015 at 5:43 PM, Wei-keng Liao
> > > > > <[email protected]> wrote:
> > > > > Here is what /var/log/messages shows when I restart the pvfs2
> > > > > server/client:
> > > > >
> > > > > Mar 16 12:20:50 bigdata kernel: pvfs2: module version 2.9.1- unloaded
> > > > > Mar 16 12:20:53 bigdata kernel: pvfs2: pvfs2_init called with debug mask: "none" (0x00000000)
> > > > > Mar 16 12:20:53 bigdata kernel: pvfs2: module version 2.9.1- loaded
> > > > > Mar 16 12:20:55 bigdata kernel: PVFS: kernel debug mask has been modified to "none" (0x00000000)
> > > > > Mar 16 12:20:55 bigdata kernel: PVFS: client debug mask has been modified to "none" (0x00000000)
> > > > >
> > > > > Are you saying that MPICH requires PVFS2 to be built with the
> > > > > --enable-shared option?
> > > > > I don't know about this, as I had an older version of pvfs2 running
> > > > > fine, and it was not built with that option.
> > > > >
> > > > > Rob, do you know?
> > > > >
> > > > >
> > > > > Wei-keng
> > > > >
> > > > > On Mar 17, 2015, at 4:35 PM, Becky Ligon wrote:
> > > > >
> > > > > > I'm also updating my kernel to 2.6.32-504.12.2.el6 and will rerun
> > > > > > my previous tests to see if the kernel is the problem.
> > > > > >
> > > > > > In /var/log/messages, you should also see a message like:
> > > > > >
> > > > > > Mar 17 15:15:42 SL6 kernel: pvfs2: module version 2.9.1- loaded
> > > > > >
> > > > > > If you are not seeing this message, then it appears the kernel
> > > > > > module did not get loaded.
> > > > > >
> > > > > > My understanding is that MPI requires the shared libraries.
> > > > > >
> > > > > > Becky
> > > > > >
> > > > > >
> > > > > >
> > > > > > On Tue, Mar 17, 2015 at 5:27 PM, Wei-keng Liao
> > > > > > <[email protected]> wrote:
> > > > > > I tried building pvfs2 with --enable-shared before, and the same
> > > > > > error occurred.
> > > > > >
> > > > > > I am going to try installing orangefs on one computer only, without
> > > > > > using any symbolic links.
> > > > > > I will let you know.
> > > > > >
> > > > > > I see the following in the pvfs2 client log file:
> > > > > > [D 03/16/2015 12:20:55] [INFO]: Mapping pointer 0x7ff8c0087000 for I/O.
> > > > > > [D 03/16/2015 12:20:55] [INFO]: Mapping pointer 0x197e000 for I/O.
> > > > > >
> > > > > > I see the following in /var/log/messages:
> > > > > > Mar 16 10:19:35 bigdata kernel: pvfs2: module version 2.9.1- unloaded
> > > > > > Mar 16 10:19:37 bigdata kernel: pvfs2: pvfs2_init called with debug mask: "none" (0x00000000)
> > > > > >
> > > > > >
> > > > > >
> > > > > > Wei-keng
> > > > > >
> > > > > > On Mar 17, 2015, at 4:04 PM, Becky Ligon wrote:
> > > > > >
> > > > > > > Try removing the symbolic link. Shut everything down; recreate
> > > > > > > your storage, and restart.
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > If that doesn't work, then I want you to rebuild the software
> > > > > > > using --enable-shared as one of your configure options. Recreate
> > > > > > > everything, including your storage. Before you start the client,
> > > > > > > you will have to add LD_LIBRARY_PATH to your environment and
> > > > > > > point it to the OFS installation lib directory. Recompile your
> > > > > > > MPI programs, making sure you have the following set:
> > > > > > >
> > > > > > > C_INCLUDE_PATH contains the OFS include directory, and
> > > > > > > LIBRARY_PATH includes the OFS lib directory.
> > > > > > >
> > > > > > >
> > > > > > > Are you seeing any errors in the pvfs2-client.log file or
> > > > > > > /var/log/messages?
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > If none of the above works, then we can turn on kernel debugging
> > > > > > > and/or client debugging to see where the problem is coming from.
> > > > > > >
> > > > > > > Let me know!
> > > > > > >
> > > > > > >
> > > > > > > Becky
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > On Tue, Mar 17, 2015 at 4:44 PM, Wei-keng Liao
> > > > > > > <[email protected]> wrote:
> > > > > > >
> > > > > > > The command "mount" shows the /home directory is not NFS-mounted
> > > > > > > and there is no NFS client running on this machine (the metadata
> > > > > > > server).
> > > > > > > > > > /dev/sda7 on /home type ext4 (rw,usrquota)
> > > > > > >
> > > > > > > The command "df" also shows /home is not NFS-mounted.
> > > > > > > bigdata::~(3:21pm) #1003 df
> > > > > > > Filesystem                  1K-blocks       Used  Available Use% Mounted on
> > > > > > > /dev/sda2                    30106576    8053816   20516760  29% /
> > > > > > > tmpfs                        16414648          0   16414648   0% /dev/shm
> > > > > > > /dev/sda1                     3997376      90920    3696744   3% /boot
> > > > > > > /dev/sda7                   850331204  373303148  433827084  47% /home
> > > > > > > /dev/sda3                    50264772      53104   47651668   1% /tmp
> > > > > > > /dev/sda6                     9948012     875016    8560996  10% /var
> > > > > > > tcp://bigdata:3334/orangefs 3228520448 1581383680 1647136768  49% /orangefs
> > > > > > >
> > > > > > >
> > > > > > > My guess is that it is still the kernel module that is not
> > > > > > > behaving correctly.
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > Wei-keng
> > > > > > >
> > > > > > > On Mar 17, 2015, at 3:26 PM, Becky Ligon wrote:
> > > > > > >
> > > > > > > > I also meant to say that if your storage is on an NFS mount,
> > > > > > > > move your filesystem onto local storage, with no links or NFS mounts.
> > > > > > > >
> > > > > > > > Becky
> > > > > > > >
> > > > > > > > On Tue, Mar 17, 2015 at 4:14 PM, Becky Ligon
> > > > > > > > <[email protected]> wrote:
> > > > > > > > [bligon@SL6 wkliao]$ ls -l /mnt/test/wkliao
> > > > > > > > total 91804
> > > > > > > > -rw-r--r-- 1 bligon bligon 31000000 Mar 17 16:06 testfile
> > > > > > > > -rw-r--r-- 1 bligon bligon 31000000 Mar 17 16:06 testfile.pvfs2
> > > > > > > > -rw-rw-rw- 1 bligon bligon 32000000 Mar 17 16:05 testfile.ufs
> > > > > > > >
> > > > > > > >
> > > > > > > > [bligon@SL6 wkliao]$ cat /proc/sys/pvfs2/acache/timeout-msecs
> > > > > > > > 60000
> > > > > > > > [bligon@SL6 wkliao]$ cat /proc/sys/pvfs2/ncache/timeout-msecs
> > > > > > > > 60000
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > I was able to run your programs and issue an "ls" with no
> > > > > > > > problems, even with timeouts set at 60 seconds. My environment
> > > > > > > > is slightly different: my kernel is 2.6.32-358.14.1.el6.x86_64.
> > > > > > > >
> > > > > > > > Is your storage on an NFS-mounted directory? You said in an
> > > > > > > > earlier post that /files1 was a symbolic link to /home. Does
> > > > > > > > /home reside on an NFS-mounted filesystem? If not, then try
> > > > > > > > recreating your storage without the symbolic link.
> > > > > > > >
> > > > > > > > Becky
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > On Tue, Mar 17, 2015 at 4:00 PM, Becky Ligon
> > > > > > > > <[email protected]> wrote:
> > > > > > > > Got it. Thanks.
> > > > > > > >
> > > > > > > > I've installed mpich 3.1.4 on my local vm and have created a
> > > > > > > > filesystem just like yours. Now, I'm going to try running your
> > > > > > > > program and see what shakes out!
> > > > > > > >
> > > > > > > > Becky
> > > > > > > >
> > > > > > > > On Tue, Mar 17, 2015 at 3:42 PM, Wei-keng Liao
> > > > > > > > <[email protected]> wrote:
> > > > > > > > Sorry, please try again.
> > > > > > > >
> > > > > > > > Wei-keng
> > > > > > > >
> > > > > > > > On Mar 17, 2015, at 2:22 PM, Becky Ligon wrote:
> > > > > > > >
> > > > > > > > > Wei-keng:
> > > > > > > > >
> > > > > > > > > I can't access coll_write.c. Permission denied.
> > > > > > > > >
> > > > > > > > > Can you grant me the appropriate permissions so I can copy
> > > > > > > > > the code?
> > > > > > > > >
> > > > > > > > > Thanks!
> > > > > > > > > Becky
> > > > > > > > >
> > > > > > > > > On Mon, Mar 16, 2015 at 5:31 PM, Becky Ligon
> > > > > > > > > <[email protected]> wrote:
> > > > > > > > > Thanks! Still working on it!
> > > > > > > > >
> > > > > > > > > Becky
> > > > > > > > >
> > > > > > > > > On Mon, Mar 16, 2015 at 5:07 PM, Wei-keng Liao
> > > > > > > > > <[email protected]> wrote:
> > > > > > > > > > How did you compile your programs?
> > > > > > > > >
> > > > > > > > > My test program can be found in
> > > > > > > > > http://www.ece.northwestern.edu/~wkliao/coll_write.c
> > > > > > > > >
> > > > > > > > > bigdata::~/TEST_PROG(3:57pm) #1053 make coll_write
> > > > > > > > > mpicc -g -o coll_write coll_write.c
> > > > > > > > >
> > > > > > > > > The command I ran:
> > > > > > > > > mpiexec -n 2 coll_write /orangefs/wkliao/testfile
> > > > > > > > >
> > > > > > > > > bigdata::~/TEST_PROG(3:58pm) #1055 mpicc --version
> > > > > > > > > gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-11)
> > > > > > > > > Copyright (C) 2010 Free Software Foundation, Inc.
> > > > > > > > > This is free software; see the source for copying conditions.
> > > > > > > > > There is NO
> > > > > > > > > warranty; not even for MERCHANTABILITY or FITNESS FOR A
> > > > > > > > > PARTICULAR PURPOSE.
> > > > > > > > >
> > > > > > > > > bigdata::~/TEST_PROG(3:58pm) #1056 mpichversion
> > > > > > > > > MPICH Version: 3.1.4
> > > > > > > > > MPICH Release date: Fri Feb 20 15:02:56 CST 2015
> > > > > > > > > MPICH Device: ch3:nemesis
> > > > > > > > > MPICH configure: --enable-g=debug --disable-fast --enable-shared --enable-fc --enable-cxx --enable-romio --with-file-system=ufs+pvfs2 CC=gcc CXX=g++ FC=gfortran CFLAGS=-g -O0 FCFLAGS=-g -O0
> > > > > > > > > MPICH CC: gcc -g -O0 -g -O0
> > > > > > > > > MPICH CXX: g++ -g -O0
> > > > > > > > > MPICH F77: gfortran -g -O0
> > > > > > > > > MPICH FC: gfortran -g -O0 -g -O0
> > > > > > > > >
> > > > > > > > > > Can you send me a listing of the /files1 directories where
> > > > > > > > > > your storage is located?
> > > > > > > > >
> > > > > > > > > /files1 is a symbolic link to the local disk mounted at /home
> > > > > > > > >
> > > > > > > > > bigdata::~(3:55pm) #1043 ls -l /files1
> > > > > > > > > total 0
> > > > > > > > > lrwxrwxrwx 1 root root 21 Mar 12 09:15 orangefs -> /home/files1/orangefs
> > > > > > > > >
> > > > > > > > > bigdata::~(3:55pm) #1044 ldd /usr/local/sbin/pvfs2-client
> > > > > > > > >         linux-vdso.so.1 => (0x00007fff54db6000)
> > > > > > > > >         librt.so.1 => /lib64/librt.so.1 (0x0000003fc2400000)
> > > > > > > > >         libm.so.6 => /lib64/libm.so.6 (0x0000003fc1800000)
> > > > > > > > >         libdl.so.2 => /lib64/libdl.so.2 (0x0000003fc2000000)
> > > > > > > > >         libpthread.so.0 => /lib64/libpthread.so.0 (0x0000003fc1c00000)
> > > > > > > > >         libssl.so.10 => /usr/lib64/libssl.so.10 (0x0000003fcb000000)
> > > > > > > > >         libcrypto.so.10 => /usr/lib64/libcrypto.so.10 (0x0000003fc8400000)
> > > > > > > > >         libc.so.6 => /lib64/libc.so.6 (0x0000003fc1400000)
> > > > > > > > >         /lib64/ld-linux-x86-64.so.2 (0x0000003fc1000000)
> > > > > > > > >         libgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 (0x0000003fca800000)
> > > > > > > > >         libkrb5.so.3 => /lib64/libkrb5.so.3 (0x0000003fcac00000)
> > > > > > > > >         libcom_err.so.2 => /lib64/libcom_err.so.2 (0x0000003fc8c00000)
> > > > > > > > >         libk5crypto.so.3 => /lib64/libk5crypto.so.3 (0x0000003fca400000)
> > > > > > > > >         libz.so.1 => /lib64/libz.so.1 (0x0000003fc2800000)
> > > > > > > > >         libkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x0000003fc9800000)
> > > > > > > > >         libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x0000003fc9000000)
> > > > > > > > >         libresolv.so.2 => /lib64/libresolv.so.2 (0x0000003fc3400000)
> > > > > > > > >         libselinux.so.1 => /lib64/libselinux.so.1 (0x0000003fc3000000)
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Wei-keng
> > > > > > > > >
> > > > > > > > > On Mar 16, 2015, at 3:49 PM, Becky Ligon wrote:
> > > > > > > > >
> > > > > > > > > > Also:
> > > > > > > > > >
> > > > > > > > > > ldd ./pvfs2-client
> > > > > > > > > >
> > > > > > > > > > Becky
> > > > > > > > > >
> > > > > > > > > > On Mon, Mar 16, 2015 at 4:47 PM, Becky Ligon
> > > > > > > > > > <[email protected]> wrote:
> > > > > > > > > > Thanks!
> > > > > > > > > >
> > > > > > > > > > How did you compile your programs?
> > > > > > > > > >
> > > > > > > > > > Just checking everything. So far, all of your settings
> > > > > > > > > > look good.
> > > > > > > > > >
> > > > > > > > > > Can you send me a listing of the /files1 directories where
> > > > > > > > > > your storage is located?
> > > > > > > > > >
> > > > > > > > > > Becky
> > > > > > > > > >
> > > > > > > > > > On Mon, Mar 16, 2015 at 4:32 PM, Wei-keng Liao
> > > > > > > > > > <[email protected]> wrote:
> > > > > > > > > > Here are the outputs.
> > > > > > > > > >
> > > > > > > > > > bigdata::~(3:27pm) #1030 /sbin/lsmod | grep pvfs
> > > > > > > > > > pvfs2 139546 2
> > > > > > > > > >
> > > > > > > > > > bigdata::~(3:27pm) #1031 ps aux | grep pvfs
> > > > > > > > > > root      5775  0.5  0.0 321788 11228 ? Ssl 12:20 1:02 /usr/local/sbin/pvfs2-server --pidfile /var/run/pvfs2.pid /etc/orangefs-server.conf
> > > > > > > > > > root      5791  0.0  0.0  39640   472 ? Ss  12:20 0:00 /usr/local/sbin/pvfs2-client -p /usr/local/sbin/pvfs2-client-core --logfile /files1/orangefs/client.log --logstamp=datetime -a 0 -n 0
> > > > > > > > > > root      5792  0.2  0.0  79904 27532 ? SL  12:20 0:25 pvfs2-client-core --child -a 0 -n 0 --logtype file -L /files1/orangefs/client.log --logstamp datetime
> > > > > > > > > > wkliao    7037  0.0  0.0 105316   864 pts/3 S+ 15:27 0:00 grep pvfs
> > > > > > > > > >
> > > > > > > > > > bigdata::~(3:27pm) #1032 cat /etc/pvfs2tab
> > > > > > > > > > tcp://bigdata:3334/orangefs /orangefs pvfs2 default,noauto 0 0
> > > > > > > > > >
> > > > > > > > > > bigdata::~(3:27pm) #1033 /bin/mount
> > > > > > > > > > /dev/sda2 on / type ext4 (rw)
> > > > > > > > > > proc on /proc type proc (rw)
> > > > > > > > > > sysfs on /sys type sysfs (rw)
> > > > > > > > > > devpts on /dev/pts type devpts (rw,gid=5,mode=620)
> > > > > > > > > > tmpfs on /dev/shm type tmpfs (rw)
> > > > > > > > > > /dev/sda1 on /boot type ext4 (rw)
> > > > > > > > > > /dev/sda7 on /home type ext4 (rw,usrquota)
> > > > > > > > > > /dev/sda3 on /tmp type ext4 (rw)
> > > > > > > > > > /dev/sda6 on /var type ext4 (rw)
> > > > > > > > > > none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
> > > > > > > > > > sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
> > > > > > > > > > nfsd on /proc/fs/nfsd type nfsd (rw)
> > > > > > > > > > tcp://bigdata:3334/orangefs on /orangefs type pvfs2 (rw)
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Wei-keng
> > > > > > > > > >
> > > > > > > > > > On Mar 16, 2015, at 3:21 PM, Becky Ligon wrote:
> > > > > > > > > >
> > > > > > > > > > > Can you also issue:
> > > > > > > > > > >
> > > > > > > > > > > /bin/mount
> > > > > > > > > > >
> > > > > > > > > > > so I can see if the filesystem is mounted correctly?
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > On Mon, Mar 16, 2015 at 4:13 PM, Becky Ligon
> > > > > > > > > > > <[email protected]> wrote:
> > > > > > > > > > > On your system where you have the client and server
> > > > > > > > > > > running, can you send me the output from the following
> > > > > > > > > > > commands:
> > > > > > > > > > >
> > > > > > > > > > > /sbin/lsmod | grep pvfs
> > > > > > > > > > > ps aux | grep pvfs
> > > > > > > > > > > cat /etc/pvfs2tab
> > > > > > > > > > >
> > > > > > > > > > > On Mon, Mar 16, 2015 at 4:01 PM, Becky Ligon
> > > > > > > > > > > <[email protected]> wrote:
> > > > > > > > > > > I'm running a test case to see if I can reproduce your
> > > > > > > > > > > problem. I'll get back with you shortly.
> > > > > > > > > > >
> > > > > > > > > > > Becky
> > > > > > > > > > >
> > > > > > > > > > > On Mon, Mar 16, 2015 at 3:40 PM, Wei-keng Liao
> > > > > > > > > > > <[email protected]> wrote:
> > > > > > > > > > > Here you go.
> > > > > > > > > > >
> > > > > > > > > > > http://www.ece.northwestern.edu/~wkliao/orangefs-server.conf
> > > > > > > > > > >
> > > > > > > > > > > Wei-keng
> > > > > > > > > > >
> > > > > > > > > > > On Mar 16, 2015, at 2:13 PM, Becky Ligon wrote:
> > > > > > > > > > >
> > > > > > > > > > > > PVFS2_SERVER_CONF=/etc/orangefs-server.conf
> > > > > > > > > > > >
> > > > > > > > > > > > On Mon, Mar 16, 2015 at 3:10 PM, Wei-keng Liao
> > > > > > > > > > > > <[email protected]> wrote:
> > > > > > > > > > > > I am not sure which file that is.
> > > > > > > > > > > >
> > > > > > > > > > > > Wei-keng
> > > > > > > > > > > >
> > > > > > > > > > > > On Mar 16, 2015, at 2:04 PM, Becky Ligon wrote:
> > > > > > > > > > > >
> > > > > > > > > > > > > Now send me a copy of your server config file.
> > > > > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > > > On Mon, Mar 16, 2015 at 2:37 PM, Wei-keng Liao
> > > > > > > > > > > > > <[email protected]> wrote:
> > > > > > > > > > > > > The two requested files can be found at the following
> > > > > > > > > > > > > URLs.
> > > > > > > > > > > > >
> > > > > > > > > > > > > http://www.ece.northwestern.edu/~wkliao/config.log
> > > > > > > > > > > > > http://www.ece.northwestern.edu/~wkliao/pvfs2-server
> > > > > > > > > > > > >
> > > > > > > > > > > > > Wei-keng
> > > > > > > > > > > > >
> > > > > > > > > > > > > On Mar 16, 2015, at 1:29 PM, Becky Ligon wrote:
> > > > > > > > > > > > >
> > > > > > > > > > > > > > Can you also send me your config.log file from when
> > > > > > > > > > > > > > you compiled the source?
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Becky
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > On Mon, Mar 16, 2015 at 2:28 PM, Becky Ligon
> > > > > > > > > > > > > > <[email protected]> wrote:
> > > > > > > > > > > > > > If you are experimenting with OrangeFS, then having
> > > > > > > > > > > > > > one metadata and 4 data servers is fine.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Can you send me your pvfs2-server init file, the
> > > > > > > > > > > > > > one used with the /sbin/service command?
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Becky
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > On Mon, Mar 16, 2015 at 1:27 PM, Wei-keng Liao
> > > > > > > > > > > > > > <[email protected]> wrote:
> > > > > > > > > > > > > > On Mar 16, 2015, at 12:02 PM, Becky Ligon wrote:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Wei-keng:
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Did you umount and mount the filesystem? If not,
> > > > > > > > > > > > > > > umount the filesystem, restart the client core,
> > > > > > > > > > > > > > > and then mount the filesystem again.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Yes. My restart command ran "/sbin/service
> > > > > > > > > > > > > > pvfs2-server restart";
> > > > > > > > > > > > > > the script contains both client and server
> > > > > > > > > > > > > > commands, including the client's umount and mount.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > > I also suggest that you define your environment
> > > > > > > > > > > > > > > so that each of your machines (bigdata, bigdata1,
> > > > > > > > > > > > > > > bigdata2, bigdata3) has its pvfs server
> > > > > > > > > > > > > > > configured to handle both I/O and metadata. To do
> > > > > > > > > > > > > > > this, you will have to recreate the filesystem.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > My OrangeFS is newly created, and the test program
> > > > > > > > > > > > > > is the first one run in parallel on it.
> > > > > > > > > > > > > > Isn't my configuration (one metadata server
> > > > > > > > > > > > > > and 4 data servers) legitimate for an OrangeFS setup?
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Again, my MPI test program ran 2 processes locally
> > > > > > > > > > > > > > on the metadata server, which is also a data server
> > > > > > > > > > > > > > and a client.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Wei-keng
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Becky
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > On Mon, Mar 16, 2015 at 11:30 AM, Wei-keng Liao
> > > > > > > > > > > > > > > <[email protected]> wrote:
> > > > > > > > > > > > > > > Hi, Becky
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > I tried the command option "-a 0 -n 0" and
> > > > > > > > > > > > > > > restarted the client/server, but the same issue
> > > > > > > > > > > > > > > persists.
> > > > > > > > > > > > > > > The pvfs2-ping command shows one metadata server
> > > > > > > > > > > > > > > and 4 data servers.
> > > > > > > > > > > > > > > I ran my test program on the metadata server.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > meta servers:
> > > > > > > > > > > > > > > tcp://bigdata:3334 Ok
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > data servers:
> > > > > > > > > > > > > > > tcp://bigdata:3334 Ok
> > > > > > > > > > > > > > > tcp://bigdata1:3334 Ok
> > > > > > > > > > > > > > > tcp://bigdata2:3334 Ok
> > > > > > > > > > > > > > > tcp://bigdata3:3334 Ok
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Wei-keng
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > On Mar 16, 2015, at 8:26 AM, Becky Ligon wrote:
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > Caching is still an issue if you have servers
> > > > > > > > > > > > > > > > on more than one machine and those servers
> > > > > > > > > > > > > > > > provide metadata. Even in a one-server
> > > > > > > > > > > > > > > > environment, it could make a difference.
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > The "ls" command uses the kernel module and
> > > > > > > > > > > > > > > > client core, which in turn use the caches,
> > > > > > > > > > > > > > > > while the pvfs2-ls command does not.
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > If you don't have the appropriate sudo
> > > > > > > > > > > > > > > > permissions to modify the /proc filesystem,
> > > > > > > > > > > > > > > > then you can start the client with the caches
> > > > > > > > > > > > > > > > turned off.
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > Example:
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > ./pvfs2-client -a 0 -n 0
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > If you execute pvfs2-client --help, you will
> > > > > > > > > > > > > > > > see these options.
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > Becky
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > Sent from my iPhone
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > >> On Mar 15, 2015, at 5:05 PM, Wei-keng Liao
> > > > > > > > > > > > > > > >> <[email protected]> wrote:
> > > > > > > > > > > > > > > >>
> > > > > > > > > > > > > > > >> I assume that after 60 seconds the client
> > > > > > > > > > > > > > > >> will flush the cache.
> > > > > > > > > > > > > > > >> Please note I am running the orangefs client
> > > > > > > > > > > > > > > >> and server on the same machine.
> > > > > > > > > > > > > > > >> In this case, should caching be an issue?
> > > > > > > > > > > > > > > >>
> > > > > > > > > > > > > > > >> Long after 60 seconds from the file creation,
> > > > > > > > > > > > > > > >> the ls command still could not find the file.
> > > > > > > > > > > > > > > >>
> > > > > > > > > > > > > > > >> I got permission denied when running the two
> > > > > > > > > > > > > > > >> echo commands you suggested, even though I DO
> > > > > > > > > > > > > > > >> have sudo permission. I also tried editing
> > > > > > > > > > > > > > > >> those files with vi but got the error
> > > > > > > > > > > > > > > >> "/proc/sys/pvfs2/acache/timeout-msecs" E667:
> > > > > > > > > > > > > > > >> Fsync failed
> > > > > > > > > > > > > > > >>
> > > > > > > > > > > > > > > >> Also, how do I set this automatically after
> > > > > > > > > > > > > > > >> system reboot?
> > > > > > > > > > > > > > > >>
> > > > > > > > > > > > > > > >> Wei-keng
> > > > > > > > > > > > > > > >>
> > > > > > > > > > > > > > > >>> On Mar 15, 2015, at 2:16 PM, Becky Ligon
> > > > > > > > > > > > > > > >>> wrote:
> > > > > > > > > > > > > > > >>>
> > > > > > > > > > > > > > > >>> Wei-keng:
> > > > > > > > > > > > > > > >>>
> > > > > > > > > > > > > > > >>> This is most likely a caching issue with the
> > > > > > > > > > > > > > > >>> client. By default, we set the cache to time
> > > > > > > > > > > > > > > >>> out after 60 seconds, which may be too high
> > > > > > > > > > > > > > > >>> for your environment. Or you may have deleted
> > > > > > > > > > > > > > > >>> and recreated a file with the same name
> > > > > > > > > > > > > > > >>> outside of the client where you are seeing
> > > > > > > > > > > > > > > >>> the question marks, in which case the cache
> > > > > > > > > > > > > > > >>> would be wrong for that file.
> > > > > > > > > > > > > > > >>>
> > > > > > > > > > > > > > > >>> To verify, turn off caching to see if this
> > > > > > > > > > > > > > > >>> resolves your problem:
> > > > > > > > > > > > > > > >>>
> > > > > > > > > > > > > > > >>> As root on your client machine:
> > > > > > > > > > > > > > > >>>
> > > > > > > > > > > > > > > >>> echo "0" > /proc/sys/pvfs2/acache/timeout-msecs
> > > > > > > > > > > > > > > >>> echo "0" > /proc/sys/pvfs2/ncache/timeout-msecs
> > > > > > > > > > > > > > > >>>
> > > > > > > > > > > > > > > >>> If this change fixes your problem, try
> > > > > > > > > > > > > > > >>> setting the timeout-msecs to something more
> > > > > > > > > > > > > > > >>> appropriate for your environment.
> > > > > > > > > > > > > > > >>>
> > > > > > > > > > > > > > > >>> Becky
> > > > > > > > > > > > > > > >>>
> > > > > > > > > > > > > > > >>> On Sun, Mar 15, 2015 at 11:43 AM, Wei-keng
> > > > > > > > > > > > > > > >>> Liao <[email protected]> wrote:
> > > > > > > > > > > > > > > >>> Hi
> > > > > > > > > > > > > > > >>>
> > > > > > > > > > > > > > > >>> I am having problems with OrangeFS 2.9.1 and
> > > > > > > > > > > > > > > >>> MPICH 3.1.4.
> > > > > > > > > > > > > > > >>>
> > > > > > > > > > > > > > > >>> Here are my system settings:
> > > > > > > > > > > > > > > >>> Linux Kernel 2.6.32
> > > > > > > > > > > > > > > >>> Berkeley DB version 6.1.19
> > > > > > > > > > > > > > > >>>
> > > > > > > > > > > > > > > >>> A simple MPI test program that calls
> > > > > > > > > > > > > > > >>> MPI_File_open and MPI_File_write_all
> > > > > > > > > > > > > > > >>> is used, running two processes on the same
> > > > > > > > > > > > > > > >>> host.
> > > > > > > > > > > > > > > >>>
> > > > > > > > > > > > > > > >>> The MPI commands I used, with prefixed file
> > > > > > > > > > > > > > > >>> names to force specific ADIO drivers:
> > > > > > > > > > > > > > > >>> mpiexec -n 2 coll_write /orangefs/wkliao/testfile
> > > > > > > > > > > > > > > >>> mpiexec -n 2 coll_write pvfs2:/orangefs/wkliao/testfile.pvfs2
> > > > > > > > > > > > > > > >>> mpiexec -n 2 coll_write ufs:/orangefs/wkliao/testfile.ufs
> > > > > > > > > > > > > > > >>> The first two will use the pvfs2 driver and
> > > > > > > > > > > > > > > >>> the third the ufs driver.
> > > > > > > > > > > > > > > >>>
> > > > > > > > > > > > > > > >>> Here is what I see when running the "ls -l"
> > > > > > > > > > > > > > > >>> and "pvfs2-ls -l" commands.
> > > > > > > > > > > > > > > >>>
> > > > > > > > > > > > > > > >>> % ls -l /orangefs/wkliao/
> > > > > > > > > > > > > > > >>> ls: cannot access /orangefs/wkliao/testfile: No such file or directory
> > > > > > > > > > > > > > > >>> ls: cannot access /orangefs/wkliao/testfile.pvfs2: No such file or directory
> > > > > > > > > > > > > > > >>> total 31252
> > > > > > > > > > > > > > > >>> ?????????? ? ?      ?            ?            ? testfile
> > > > > > > > > > > > > > > >>> ?????????? ? ?      ?            ?            ? testfile.pvfs2
> > > > > > > > > > > > > > > >>> -rw------- 1 wkliao users 32000000 Mar 13 18:55 testfile.ufs
> > > > > > > > > > > > > > > >>>
> > > > > > > > > > > > > > > >>> % pvfs2-ls -l /orangefs/wkliao/
> > > > > > > > > > > > > > > >>> -rw-r--r-- 1 wkliao users 31000000 2015-03-13 18:55 testfile
> > > > > > > > > > > > > > > >>> -rw------- 1 wkliao users 32000000 2015-03-13 18:55 testfile.ufs
> > > > > > > > > > > > > > > >>> -rw-r--r-- 1 wkliao users 31000000 2015-03-13 18:55 testfile.pvfs2
> > > > > > > > > > > > > > > >>>
> > > > > > > > > > > > > > > >>> My config.log file for building orangefs
> > > > > > > > > > > > > > > >>> can be found at this URL:
> > > > > > > > > > > > > > > >>> http://www.ece.northwestern.edu/~wkliao/config.log
> > > > > > > > > > > > > > > >>>
> > > > > > > > > > > > > > > >>> Wei-keng
> > > > > > > > > > > > > > > >>>
> > > > > > > > > > > > > > > >>> _______________________________________________
> > > > > > > > > > > > > > > >>> Pvfs2-users mailing list
> > > > > > > > > > > > > > > >>> [email protected]
> > > > > > > > > > > > > > > >>> http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
> > > > > > > > > > > > > > > >>
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > --
> > > > > > > > > > > > > > > Becky Ligon
> > > > > > > > > > > > > > > Research Associate
> > > > > > > > > > > > > > > Clemson University
> > > > > > > > > > > > > > > Clemson, SC
> > > > > > > > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > --
> > > > > > > > > > > > > > Becky Ligon
> > > > > > > > > > > > > > Research Associate
> > > > > > > > > > > > > > Clemson University
> > > > > > > > > > > > > > Clemson, SC
> > > > > > > > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > --
> > > > > > > > > > > > > > Becky Ligon
> > > > > > > > > > > > > > Research Associate
> > > > > > > > > > > > > > Clemson University
> > > > > > > > > > > > > > Clemson, SC
> > > > > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > > > --
> > > > > > > > > > > > > Becky Ligon
> > > > > > > > > > > > > Research Associate
> > > > > > > > > > > > > Clemson University
> > > > > > > > > > > > > Clemson, SC
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > --
> > > > > > > > > > > > Becky Ligon
> > > > > > > > > > > > Research Associate
> > > > > > > > > > > > Clemson University
> > > > > > > > > > > > Clemson, SC
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > --
> > > > > > > > > > > Becky Ligon
> > > > > > > > > > > Research Associate
> > > > > > > > > > > Clemson University
> > > > > > > > > > > Clemson, SC
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > --
> > > > > > > > > > > Becky Ligon
> > > > > > > > > > > Research Associate
> > > > > > > > > > > Clemson University
> > > > > > > > > > > Clemson, SC
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > --
> > > > > > > > > > > Becky Ligon
> > > > > > > > > > > Research Associate
> > > > > > > > > > > Clemson University
> > > > > > > > > > > Clemson, SC
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > --
> > > > > > > > > > Becky Ligon
> > > > > > > > > > Research Associate
> > > > > > > > > > Clemson University
> > > > > > > > > > Clemson, SC
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > --
> > > > > > > > > > Becky Ligon
> > > > > > > > > > Research Associate
> > > > > > > > > > Clemson University
> > > > > > > > > > Clemson, SC
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > --
> > > > > > > > > Becky Ligon
> > > > > > > > > Research Associate
> > > > > > > > > Clemson University
> > > > > > > > > Clemson, SC
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > --
> > > > > > > > > Becky Ligon
> > > > > > > > > Research Associate
> > > > > > > > > Clemson University
> > > > > > > > > Clemson, SC
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > --
> > > > > > > > Becky Ligon
> > > > > > > > Research Associate
> > > > > > > > Clemson University
> > > > > > > > Clemson, SC
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > --
> > > > > > > > Becky Ligon
> > > > > > > > Research Associate
> > > > > > > > Clemson University
> > > > > > > > Clemson, SC
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > --
> > > > > > > > Becky Ligon
> > > > > > > > Research Associate
> > > > > > > > Clemson University
> > > > > > > > Clemson, SC
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > --
> > > > > > > Becky Ligon
> > > > > > > Research Associate
> > > > > > > Clemson University
> > > > > > > Clemson, SC
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > Becky Ligon
> > > > > > Research Associate
> > > > > > Clemson University
> > > > > > Clemson, SC
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Becky Ligon
> > > > > Research Associate
> > > > > Clemson University
> > > > > Clemson, SC
> > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Becky Ligon
> > > > Research Associate
> > > > Clemson University
> > > > Clemson, SC
> > >
> > >
> > >
> > >
> > > --
> > > Becky Ligon
> > > Research Associate
> > > Clemson University
> > > Clemson, SC
> > >
> > >
> > >
> > > --
> > > Becky Ligon
> > > Research Associate
> > > Clemson University
> > > Clemson, SC
> >
> >
> >
> >
> > --
> > Becky Ligon
> > Research Associate
> > Clemson University
> > Clemson, SC
> >
> >
> >
> > --
> > Becky Ligon
> > Research Associate
> > Clemson University
> > Clemson, SC
>
>
>
>
> --
> Becky Ligon
> Research Associate
> Clemson University
> Clemson, SC
>
>
>
> --
> Becky Ligon
> Research Associate
> Clemson University
> Clemson, SC
_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users