Hi all,
pvfs2-cp and pvfs2-rm are working as well. I copied a 256 MB file to
PVFS and then copied it back out to a different filename; diff reports
that the two files are identical.
I have also tried pvfs2-stat, pvfs2-ls, pvfs2-lsplus, and pvfs2-viewdist.
While copying the large file, I see a lot of expected messages arrive
before the matching receive has been posted. I believe Sam or RobL
mentioned that when there are multiple messages per operation, the
next receive is not posted until the previous receive has completed.
Is that correct? If so, would it be possible to post all of the
receives at once, or would that break some of the other BMI methods?
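To make sure I am asking the right thing, here is a rough sketch of what
I have in mind (not the actual bmi_mx code; post_recv() and test_recv()
are made-up stand-ins for whatever the module does around
mx_irecv()/mx_test()):

#include <stdint.h>

struct recv_frag {
    void     *buffer;
    uint32_t  length;
    uint64_t  tag;      /* match info for this fragment */
    void     *request;  /* opaque handle returned by post_recv() */
};

/* hypothetical helpers, assumed to wrap mx_irecv()/mx_test() */
extern void *post_recv(void *buffer, uint32_t length, uint64_t tag);
extern int   test_recv(void *request);  /* nonzero when complete */

/* what I think happens now: the next receive is not posted until the
 * previous one completes, so later fragments arrive before a matching
 * receive exists and are handled as unexpected at the MX level */
static void recv_sequential(struct recv_frag *frags, int count)
{
    int i;

    for (i = 0; i < count; i++) {
        frags[i].request = post_recv(frags[i].buffer,
                                     frags[i].length, frags[i].tag);
        while (!test_recv(frags[i].request))
            ;  /* spin here just for the sketch */
    }
}

/* what I am asking about: pre-post every receive, then reap them */
static void recv_preposted(struct recv_frag *frags, int count)
{
    int i;

    for (i = 0; i < count; i++)
        frags[i].request = post_recv(frags[i].buffer,
                                     frags[i].length, frags[i].tag);

    for (i = 0; i < count; i++)
        while (!test_recv(frags[i].request))
            ;  /* spin here just for the sketch */
}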
Scott
On Dec 27, 2006, at 3:13 PM, Scott Atchley wrote:
Hi all,
Both pvfs2-ping and pvfs2-statfs seem to be working. Does
everything below seem reasonable?
Thanks,
Scott
% pvfs2-ping -m /mnt/pvfs2
(1) Parsing tab file...
(2) Initializing system interface...
(3) Initializing each file system found in tab file: /etc/pvfs2tab...
PVFS2 servers: mx://fog33:0:3
Storage name: pvfs2-fs
Local mount point: /mnt/pvfs2
/mnt/pvfs2: Ok
(4) Searching for /mnt/pvfs2 in pvfstab...
PVFS2 servers: mx://fog33:0:3
Storage name: pvfs2-fs
Local mount point: /mnt/pvfs2
meta servers:
mx://fog33:0:3
data servers:
mx://fog33:0:3
(5) Verifying that all servers are responding...
meta servers:
mx://fog33:0:3 Ok
data servers:
mx://fog33:0:3 Ok
(6) Verifying that fsid 1318064247 is acceptable to all servers...
Ok; all servers understand fs_id 1318064247
(7) Verifying that root handle is owned by one server...
Root handle: 1048576
Ok; root handle is owned by exactly one server.
=============================================================
The PVFS2 filesystem at /mnt/pvfs2 appears to be correctly configured.
% pvfs2-statfs -m /mnt/pvfs2
aggregate statistics:
---------------------------------------
fs_id: 1318064247
total number of servers (meta and I/O): 1
handles available (meta and I/O): 4294967290
handles total (meta and I/O): 4294967294
bytes available: 23916113920
bytes total: 28573982720
NOTE: The aggregate total and available statistics are calculated
based on an algorithm that assumes data will be distributed evenly;
thus the free space is equal to the smallest I/O server capacity
multiplied by the number of I/O servers. If this number seems
unusually small, then check the individual server statistics below
to look for problematic servers.
meta server statistics:
---------------------------------------
server: mx://fog33:0:3
RAM bytes total : 1058672640
RAM bytes free : 396337152
uptime (seconds) : 2276629
load averages : 33184 21376 14912
handles available: 4294967290
handles total : 4294967294
bytes available : 23916113920
bytes total : 28573982720
mode: serving both metadata and I/O data
I/O server statistics:
---------------------------------------
server: mx://fog33:0:3
RAM bytes total : 1058672640
RAM bytes free : 396337152
uptime (seconds) : 2276629
load averages : 33184 21376 14912
handles available: 4294967290
handles total : 4294967294
bytes available : 23916113920
bytes total : 28573982720
mode: serving both metadata and I/O data
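If I follow the NOTE above, with a single server the aggregate is
trivially min(23916113920) * 1 = 23916113920 bytes available, which is
what is reported. As a made-up example, two I/O servers with 20 GB and
30 GB free would be reported as 2 * 20 GB = 40 GB aggregate rather
than 50 GB.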
% cat ../etc/fs.conf
<Defaults>
UnexpectedRequests 50
EventLogging none
LogStamp datetime
BMIModules bmi_mx
FlowModules flowproto_multiqueue
PerfUpdateInterval 1000
ServerJobBMITimeoutSecs 30
ServerJobFlowTimeoutSecs 30
ClientJobBMITimeoutSecs 300
ClientJobFlowTimeoutSecs 300
ClientRetryLimit 5
ClientRetryDelayMilliSecs 2000
</Defaults>
<Aliases>
Alias fog33 mx://fog33:0:3
</Aliases>
<Filesystem>
Name pvfs2-fs
ID 1318064247
RootHandle 1048576
FlowBufferSizeBytes 1048576
<MetaHandleRanges>
Range fog33 4-2147483650
</MetaHandleRanges>
<DataHandleRanges>
Range fog33 2147483651-4294967297
</DataHandleRanges>
<StorageHints>
TroveSyncMeta yes
TroveSyncData no
</StorageHints>
</Filesystem>
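If I am reading the handle ranges right, they line up with the statfs
numbers above: the meta range 4-2147483650 and the data range
2147483651-4294967297 each cover 2147483647 handles, for 4294967294 in
total, matching "handles total"; "handles available" (4294967290) is
that total minus the four handles presumably already in use by the
filesystem itself.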
% cat ../etc/server.conf-fog33
StorageSpace /scratch
HostID "mx://fog33:0:3"
LogFile /nfs/home/atchley/projects/pvfs2/pvfs2-mx/bin/../log
_______________________________________________
Pvfs2-developers mailing list
Pvfs2-developers@beowulf-underground.org
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-developers