Please don't reply to lustre-devel. Instead, comment in Bugzilla by using the 
following link:
https://bugzilla.lustre.org/show_bug.cgi?id=12365



This is really weird.  Unfortunately, I can't reproduce it.  It occurred on a
clean filesystem with 3 loopback OSTs - all I did was run sanity test 78 a few
times (it failed, but shouldn't have, apparently for similar reasons).

[EMAIL PROTECTED] tests]# lfs setstripe /mnt/lustre/foo 0 -1 -1
[EMAIL PROTECTED] tests]# ./directio rdwr /mnt/lustre/foo/f78 0 50 1048576
directio on /mnt/lustre/foo/f78 for 50x1048576 bytes
Write error Success (rc = 48234496, len = 52428800)
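
A note on the "Write error Success" line above: 48234496 is 46 of the
50x1048576 bytes requested, so the write() apparently came back short rather
than failing outright, and a short write leaves errno at 0, which strerror()
reports as "Success".  A minimal sketch of that kind of check (an illustration
only, not the actual directio.c source):

/*
 * Minimal sketch of the kind of check that produces the message above
 * (an assumption for illustration -- this is not the actual directio.c
 * source).  A write() that comes back short returns the byte count it
 * did manage and leaves errno untouched, so strerror(errno) says
 * "Success" even though the test treats it as a failure.
 */
#define _GNU_SOURCE             /* for O_DIRECT */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        size_t len = 50 * 1048576;      /* 50x1048576 bytes, as in the run above */
        char *buf;
        ssize_t rc;
        int fd;

        if (argc < 2) {
                fprintf(stderr, "usage: %s <file>\n", argv[0]);
                return 1;
        }
        /* O_DIRECT needs an aligned buffer */
        if (posix_memalign((void **)&buf, 4096, len))
                return 1;
        memset(buf, 0, len);

        fd = open(argv[1], O_WRONLY | O_CREAT | O_DIRECT, 0644);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        rc = write(fd, buf, len);
        if (rc != (ssize_t)len) {
                /* rc = 48234496 means only 46 of the 50 MB were written;
                 * errno is still 0, hence "Write error Success". */
                printf("Write error %s (rc = %zd, len = %zu)\n",
                       strerror(errno), rc, len);
                return 1;
        }
        return 0;
}
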
[EMAIL PROTECTED] tests]# lctl df
usage: df <input> [output]
[EMAIL PROTECTED] tests]# lfs df /mnt/lustre
UUID                 1K-blocks      Used Available  Use% Mounted on
lustre-MDT0000_UUID      34984      6380     28604   18% /mnt/lustre[MDT:0]
lustre-OST0000_UUID      46856     22808     24048   48% /mnt/lustre[OST:0]
lustre-OST0001_UUID      46856     22280     24576   47% /mnt/lustre[OST:1]
lustre-OST0002_UUID      46856     23304     23552   49% /mnt/lustre[OST:2]

filesystem summary:     140568     68392     72176   48% /mnt/lustre

Note that the write failed but none of the OSTs are full.  The console shows:

LustreError: 1418:0:(obd_class.h:928:obd_brw_rqset()) error from callback: rc = -28
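
For reference, -28 is -ENOSPC ("No space left on device"), which matches the
write stopping short even though lfs df shows free space on every OST.  A
trivial check of that mapping:

/* Trivial check that the -28 in the console message is -ENOSPC. */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
        printf("%d -> %s\n", -ENOSPC, strerror(ENOSPC));
        /* prints: -28 -> No space left on device */
        return 0;
}
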

I was also able to reproduce the problem using dd:

[EMAIL PROTECTED] tests]# lfs df /mnt/lustre
UUID                 1K-blocks      Used Available  Use% Mounted on
lustre-MDT0000_UUID      34984      6372     28612   18% /mnt/lustre[MDT:0]
lustre-OST0000_UUID      46856      6920     39936   14% /mnt/lustre[OST:0]
lustre-OST0001_UUID      46856      6920     39936   14% /mnt/lustre[OST:1]
lustre-OST0002_UUID      46856     41800      5056   89% /mnt/lustre[OST:2]

filesystem summary:     140568     55640     84928   39% /mnt/lustre

[EMAIL PROTECTED] tests]# dd if=/dev/zero of=/mnt/lustre/test1 bs=1M count=35
dd: writing `/mnt/lustre/test1': No space left on device
21+0 records in
20+0 records out
[EMAIL PROTECTED] tests]# !lfs
lfs df /mnt/lustre
UUID                 1K-blocks      Used Available  Use% Mounted on
lustre-MDT0000_UUID      34984      6372     28612   18% /mnt/lustre[MDT:0]
lustre-OST0000_UUID      46856      6920     39936   14% /mnt/lustre[OST:0]
lustre-OST0001_UUID      46856     27528     19328   58% /mnt/lustre[OST:1]
lustre-OST0002_UUID      46856     42760      4096   91% /mnt/lustre[OST:2]

filesystem summary:     140568     77208     63360   54% /mnt/lustre
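
For reference, dd stops at the first write that fails, so "21+0 records in /
20+0 records out" means the 21st 1 MB write() got the error and only 20 MB
reached the file.  A stripped-down sketch of that copy loop (illustration only,
not dd itself; the block count is the one used above):

/*
 * Stripped-down sketch of a dd-style copy loop (illustration only, not
 * dd itself).  When a write fails mid-copy, the block has already been
 * read but not written, so "records in" ends up one higher than
 * "records out".
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define BS      (1024 * 1024)           /* bs=1M */
#define COUNT   35                      /* count=35 */

int main(void)
{
        static char buf[BS];
        long in = 0, out = 0;
        int ifd = open("/dev/zero", O_RDONLY);
        int ofd = open("/mnt/lustre/test1", O_WRONLY | O_CREAT | O_TRUNC, 0644);

        if (ifd < 0 || ofd < 0)
                return 1;

        for (int i = 0; i < COUNT; i++) {
                if (read(ifd, buf, BS) != BS)
                        break;
                in++;                           /* records in */
                if (write(ofd, buf, BS) != BS) {
                        /* e.g. ENOSPC from the OST backing this stripe */
                        fprintf(stderr, "dd: writing: %s\n", strerror(errno));
                        break;
                }
                out++;                          /* records out */
        }
        printf("%ld+0 records in\n%ld+0 records out\n", in, out);
        return 0;
}
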

The console shows:

LustreError: 30415:0:(client.c:574:ptlrpc_check_status()) @@@ type == PTL_RPC_MSG_ERR, err == -28  [EMAIL PROTECTED] x106735/t0 o4->[EMAIL PROTECTED]@tcp:28 lens 384/352 ref 2 fl Rpc:R/0/0 rc 0/-28

I wanted to see if this affected v1_6_0_RC4, so I shut down Lustre and started
that version instead, and it worked.  Unfortunately, when I restarted HEAD, it
also worked, and I have been unable to reproduce the problem since.
