[Re-adding the list because I failed to last time.]
Interesting! I don't think I've seen local nodes get their usage wrong
like that before, but there are a lot of storage systems I don't have
much experience with. The aggregate usage stats across a Ceph cluster
are derived from the local output o
On Mon, Feb 24, 2014 at 11:48 PM, Markus Goldberg wrote:
> Hi Gregory,
> here we go:
>
> root@bd-a:/mnt/myceph#
> root@bd-a:/mnt/myceph# ls -la
> insgesamt 4
> drwxr-xr-x 1 root root 25928099891213 Feb 24 14:14 .
> drwxr-xr-x 4 root root 4096 Aug 30 10:34 ..
drwx------ 1 root root 2592
Hi Gregory,
here we go:
root@bd-a:/mnt/myceph#
root@bd-a:/mnt/myceph# ls -la
insgesamt 4
drwxr-xr-x 1 root root 25928099891213 Feb 24 14:14 .
drwxr-xr-x 4 root root 4096 Aug 30 10:34 ..
drwx------ 1 root root 25920394954765 Feb 7 10:07 Backup
drwxr-xr-x 1 root root 32826961870 Feb 2
Hrm, yeah, that patch actually went in prior to 3.9 (it's older than I
remember!). What's the output of "ls -l" from the root of the Ceph
hierarchy, and what's the output of "ceph osd dump"?
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Sat, Feb 22, 2014 at 12:09 AM, Marku
Hi Gregory,
I'm running kernel 3.13, which is much newer than the original kernel of
Ubuntu 13.04:
root@bd-a:/mnt/myceph/Backup/bs3/tapes# uname -a
Linux bd-a 3.13.0-031300-generic #201401192235 SMP Mon Jan 20 03:36:48
UTC
Markus
On 21.02.2014 20:59, Gregory Farnum wrote:
I haven't done the math, but it's probably a result of how the df
command interprets the output of the statfs syscall. We changed the
fr_size and block_size units we report to make it work more
consistently across different systems "recently"; I don't know if that
change was before or after the ker
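As an aside, the arithmetic Greg is describing can be sketched in a few lines of Python. This is a minimal illustration of how a df-style tool turns the statfs/statvfs counters into byte totals; the helper name and the path are placeholders of mine, not anything from this thread:

```python
# Sketch: how a df-style tool derives sizes from statvfs counters.
# If the filesystem reports f_frsize inconsistently with its block
# counts, the computed totals come out wrong even though the raw
# counters themselves are fine -- which is the class of bug Greg
# is alluding to.
import os

def df_style_sizes(path):
    st = os.statvfs(path)
    total = st.f_blocks * st.f_frsize  # size = block count * fragment size
    free = st.f_bfree * st.f_frsize
    used = total - free
    return total, used, free

total, used, free = df_style_sizes("/")
print(total, used, free)
```

(df's "Avail" column actually uses f_bavail, the blocks available to unprivileged users, which is why Used + Avail can be less than Size.)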
Hi,
no, it is certain that the backup files really are that big. The output of
the du command is correct.
The files were rsynced from another system, which is not cephfs.
Markus
On 21.02.2014 13:34, Yan, Zheng wrote:
I think the result reported by df is correct. It's likely you have
lots of sparse files in cephfs.
For sparse files, cephfs increases the "used" space by the full file size. See
http://ceph.com/docs/next/dev/differences-from-posix/
Yan, Zheng
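The sparse-file effect Zheng describes is easy to demonstrate with generic POSIX accounting (a hedged sketch, not cephfs itself; the file size is arbitrary and the temp file is my own placeholder). The apparent size, which cephfs charges against "used" space, can be far larger than the blocks actually allocated:

```python
# Sketch: apparent size vs. allocated blocks for a sparse file.
# du counts st_blocks (allocated storage), while cephfs accounts
# "used" space by the full apparent size -- hence df > du.
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.seek(100 * 1024 * 1024 - 1)  # seek ~100 MB out without writing
    f.write(b"\0")                 # a single real byte at the end
    path = f.name

st = os.stat(path)
apparent = st.st_size            # ~100 MB: what cephfs accounts for
allocated = st.st_blocks * 512   # blocks actually on disk, usually tiny
print(apparent, allocated)
os.unlink(path)
```

Running `du -h` vs `du -h --apparent-size` on such a file shows the same gap from the shell.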
On Fri, Feb 21, 2014 at 6:13 PM, Markus Goldberg wrote:
Hi,
this is ceph 0.77, Ubuntu 13.04 (ceph-server and ceph client)
the df command gives goofy results:
root@bd-a:/mnt/myceph/Backup/bs3/tapes#
root@bd-a:/mnt/myceph/Backup/bs3/tapes# df -h .
Dateisystem             Größe  Benutzt  Verf.  Verw%  Eingehängt auf
xxx.xxx.xxx.xxx:6789:/  60T