As a regular btrfs user I can tell you that there is no hot data tracking in btrfs yet. Some people seem to use bcache together with btrfs, and they occasionally come asking for help on the mailing list.

Raid5/6 have received a few fixes recently, and it *may* soon be worth trying out raid5/6 for data while keeping metadata in raid1/10 (I would rather lose a file or two than the entire filesystem) - see the command sketch below.
I had plans to run some tests on this a while ago, but forgot about it.
As all good citizens should, remember to keep good backups. The last time I tested raid5/6 I ran into issues fairly easily. For what it's worth - raid1/10 seems pretty rock solid as long as you have enough disks (hint: you need more than two for raid1 if you want to stay safe).
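
If you want to experiment with that split, something along these lines should do it (device names and mount point are just examples - adjust for your setup, and I have not re-verified this recently):

  # new filesystem: raid5 for data, raid1 for metadata
  mkfs.btrfs -d raid5 -m raid1 /dev/sdb /dev/sdc /dev/sdd

  # or convert an existing filesystem in place
  btrfs balance start -dconvert=raid5 -mconvert=raid1 /mnt/pool

  # check which profiles are actually in use afterwards
  btrfs filesystem df /mnt/pool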

As for dedupe, there is (to my knowledge) nothing fully automatic yet. You have to run a userspace program to scan your filesystem, but the actual deduplication is done in the kernel. duperemove seemed to work quite well when I tested it, though there may be some performance implications.
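
If you want to try it, the basic invocation looks roughly like this (the directory and hashfile paths are just examples):

  # scan recursively and submit duplicate extents to the kernel for dedupe
  duperemove -dr /srv/data

  # keeping a hashfile lets later runs skip files that have not changed
  duperemove -dr --hashfile=/var/tmp/dedupe.hash /srv/data

Without -d it only reports what it would deduplicate, which is a safe way to get a feel for the potential savings first.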

Roy Sigurd Karlsbakk wrote:
Hi all

I've been following this project on and off for quite a few years, and I wonder 
if anyone has looked into tiered storage on it. With tiered storage, I mean hot 
data lying on fast storage and cold data on slow storage. I'm not talking about 
caching (where you just keep a copy of the hot data on the fast storage).

And btw, how far are raid[56] and block-level dedup from something useful in 
production?

Kind regards

roy
--
Roy Sigurd Karlsbakk
(+47) 98013356
http://blogg.karlsbakk.net/
GPG Public key: http://karlsbakk.net/roysigurdkarlsbakk.pubkey.txt
--
The good you shall carve in stone, the bad you shall write in snow.

