> - any suggestions what SSD to buy? e.g. an OCZ Vertex 4?

OCZ is IMO a troubled company and their products are seriously hit-or-miss. I 
would go with Intel, Samsung, or Micron/Corsair, in that order.

> - are there any suggestions on what size SSD to use for a
> certain size

It all depends on your workload. The "big boys" generally put at least 5% of 
backend capacity in SSD. For normal singleton server/desktops you'd be 
hard-pressed to see any value. If you're running virtualized servers, the 
combined I/O patterns seen at the central array degenerate to random I/O and 
that is where tiering or Bcache can be useful. The individual server running 
its virtualized workload might see some benefit too, but probably not to nearly 
the same extent. If you're using NFS datastores, running FS-Cache on the 
virtualization host might be a very useful thing, but only if the local 
SSD/fast HD is big enough to hold the entire file, since virtual disk files 
tend to be sizable.
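For the NFS-datastore case, a minimal FS-Cache sketch might look like the 
following (the export path, mount point, and cache directory are hypothetical; 
adjust to your setup):

```shell
# Point cachefilesd at a directory on the local SSD (hypothetical path).
# In /etc/cachefilesd.conf:
#   dir /var/cache/fscache
systemctl start cachefilesd

# Mount the NFS datastore with the 'fsc' option to enable local caching.
mount -t nfs -o fsc filer:/export/datastore /mnt/datastore
```

Whether this helps depends entirely on whether the working set, i.e. the 
virtual disk files, fits in the filesystem holding the cache directory.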

> - is it possible to use more than 1 SSD as a bcache for the same
>   storage?

Use LVM segments, i.e. put <N> SSDs in a VG pool and then, when allocating the 
LV, specify extent ranges from some or all of them. When you need to remove an 
SSD, use pvmove to have LVM vacate the extents on that drive. Then you can 
remove it at will.
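A sketch of that, with hypothetical device names (`/dev/sdb`, `/dev/sdc`) 
standing in for two SSDs:

```shell
# Pool two SSDs into one volume group.
pvcreate /dev/sdb /dev/sdc
vgcreate ssdpool /dev/sdb /dev/sdc

# Allocate the LV from explicit extent ranges on both PVs,
# leaving extents free so pvmove has somewhere to migrate to later.
lvcreate -n cachelv -l 1000 ssdpool /dev/sdb:0-499 /dev/sdc:0-499

# Later, to pull /dev/sdc out: migrate its extents to the remaining
# PVs (requires enough free extents elsewhere in the VG), then drop it.
pvmove /dev/sdc
vgreduce ssdpool /dev/sdc
pvremove /dev/sdc
```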

> - does bcache work with e.g. iscsi/nbd remote storage?
>   - then maybe it would be nice if it is possible to temporarily
>     bring down the remote-storage + bcache combination when the
>     network connection goes down

If your iSCSI target is unreachable, your I/O will stall and time out 
regardless of any local Bcache. Oh, you might get lucky for a couple of 
seconds, but after that, boom. Engineer your network-attached storage 
correctly.

> - is it possible to swap via bcache?

Swap is generally a continuous write, so no. If you're swapping, you need to 
fix the underlying problem, not play games with trying to make swap "faster". 
You can of course mount your swap device from something that is locally 
attached and cut out the bandwidth and latency of an over-the-wire target.
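As a sketch, putting swap on a hypothetical local partition (`/dev/sda3`) and 
giving it priority over anything network-backed:

```shell
mkswap /dev/sda3            # format the local partition as swap
swapon -p 10 /dev/sda3      # higher -p wins; give network swap a lower priority
swapon --show               # verify active swap devices and their priorities
```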

> - does it keep track of checksums/crcs of the data it writes to the SSD?
>   e.g. it writes a block to the SSD and then later on it reads that
>   block back from the SSD (to write to e.g. HDD) and verifies that
>   what it got back is what it expected

No. If you want ZFS's features, use that. This whole silent-block-corruption 
thing is a marketing gimmick by Sun because at some point 10+ years ago they 
used a family of REALLY buggy controllers. Oh sure, if you've got petabytes 
upon petabytes you'll run into it eventually. If you use crap hardware with 
half-baked drivers or shitty consumer hardware you may run into it too. But so 
you know, the PCIe buses use checksums and erasure coding, as does the SATA/SAS 
chip when sending data down the wire to the drive interface. And the drive 
itself does checksums and in fact writes the data using erasure coding too. So 
about the only time that 512 bytes is just 512 bytes of "fragile", unprotected 
data is when it's in RAM, be it system, controller, or disk. Not to say that 
bit-flips don't happen where the drive mis-writes or mis-reads, but the 
occurrence is vanishingly rare and the drive already has mechanisms to deal 
with it.
