I'm testing cephfs. I have 3 nodes, with 2 hard disks and one ssd on each.
cephfs is set to put metadata on ssd and data on hdd.
With the two pools set to size = 3, untarring a 19 GB tar file with 90K files
in it takes 4.5 minutes.
With size = 2, it takes 40 sec. (The tar file is stored in a file
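The replica count being compared here is the pool's size attribute. A sketch of how the two test configurations might be toggled (the pool names cephfs_data and cephfs_metadata are assumptions, not taken from the thread):

```shell
# 3x replication: each client write is acked only after all three
# replicas have it, which is what the 4.5-minute run measures.
ceph osd pool set cephfs_data size 3
ceph osd pool set cephfs_metadata size 3

# 2x replication for the faster run.
ceph osd pool set cephfs_data size 2
ceph osd pool set cephfs_metadata size 2

# min_size is separate: how many replicas must be available
# before the pool accepts I/O at all.
ceph osd pool get cephfs_data min_size
```

Note that size = 2 trades durability for latency: losing one node leaves only a single copy of any data written since the failure.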
We have a ZFS file system with a billion (smallish) files. We back up using zfs
send / receive to a separate system, and write tapes with zfs send. It stores
files on HDD, but metadata on SSD. It would be totally impractical to back up to
tape using something like tar from HDD with that many files.
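The workflow described above operates on snapshot streams rather than individual files, which is why the file count doesn't matter. A minimal sketch (the pool/dataset names tank/fs and backuppool/fs, the host backuphost, and the tape device /dev/nst0 are all assumptions):

```shell
# Take a snapshot; zfs send serializes it as a single stream,
# so the billion files are never walked individually.
zfs snapshot tank/fs@backup-2022-12-08

# Replicate to the separate system over ssh.
zfs send tank/fs@backup-2022-12-08 | ssh backuphost zfs receive -F backuppool/fs

# The same stream can be written to tape directly.
zfs send tank/fs@backup-2022-12-08 > /dev/nst0
```

Incremental runs would use zfs send -i with the previous snapshot, sending only the blocks changed since then.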
Thanks. That's the behavior I was hoping for.
From: Gregory Farnum
Sent: Thursday, December 8, 2022 12:57 PM
To: Charles Hedrick
Cc: Manuel Holtgrewe; Dhairya Parmar; ceph-users@ceph.io
Subject: Re: [ceph-users] Re: what happens if a server crashes
for an operation that
won't actually be possible to complete. I'm assuming that it won't happen
during a cephadm upgrade.
From: Manuel Holtgrewe
Sent: Thursday, December 8, 2022 12:38 PM
To: Charles Hedrick
Cc: Gregory Farnum; Dhairya Parmar; ceph-users@ceph.io
upgrade? Is
that done in a way that won't generate errors in user code?
From: Gregory Farnum
Sent: Thursday, December 8, 2022 11:44 AM
To: Manuel Holtgrewe
Cc: Charles Hedrick; Dhairya Parmar; ceph-users@ceph.io
Subject: Re: [ceph-users] Re: what happens if a server crashes
reboot, and then
continue. However, there's an obvious performance penalty for this.
From: Gregory Farnum
Sent: Thursday, December 8, 2022 2:08 AM
To: Dhairya Parmar
Cc: Charles Hedrick; ceph-users@ceph.io
Subject: Re: [ceph-users] Re: what happens if a server crashes
I believe some operations in cephfs are asynchronous. That means the server
acknowledges them before the data has been written to stable storage. Does that
mean there are failure scenarios in which a write or close will return an
error, or fail silently?
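Whatever the server does, user code that needs durability can force the data to stable storage and check every step, so a deferred write-back error surfaces as a visible failure rather than being lost. A minimal sketch using GNU coreutils (sync with a file operand, coreutils >= 8.24, issues fsync(2) on that file; out.dat is a placeholder path, not from the thread):

```shell
# Buffered write: this may only reach the page cache / client cache.
printf 'payload\n' > out.dat || { echo "write failed" >&2; exit 1; }

# Force the file to stable storage; on CephFS this blocks until the
# cluster acknowledges the data as durable.
sync out.dat || { echo "fsync failed: data may not be durable" >&2; exit 1; }

echo ok
```

Checking the return of the final flush (and of close, in languages that expose it) is what turns a "silent" asynchronous failure into a reportable error.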