On 2016-05-29 22:45, Ferry Toth wrote:
On Sun, 29 May 2016 12:33:06 -0600, Chris Murphy wrote:
On Sun, May 29, 2016 at 12:03 PM, Holger Hoffstätte
wrote:
On 05/29/16 19:53, Chris Murphy wrote:
But I'm skeptical of bcache using a hidden area historically for the
bootloader, to put its device metadata.
On Sun, 29 May 2016 12:33:06 -0600, Chris Murphy wrote:
> On Sun, May 29, 2016 at 12:03 PM, Holger Hoffstätte
> wrote:
>> On 05/29/16 19:53, Chris Murphy wrote:
>>> But I'm skeptical of bcache using a hidden area historically for the
>>> bootloader, to put its device metadata. I didn't realize that was the case.
On Sun, May 29, 2016 at 12:03 PM, Holger Hoffstätte
wrote:
> On 05/29/16 19:53, Chris Murphy wrote:
>> But I'm skeptical of bcache using a hidden area historically for the
>> bootloader, to put its device metadata. I didn't realize that was the
>> case. Imagine if LVM were to stuff metadata into the MBR gap, or mdadm. Egads.
On 05/29/16 19:53, Chris Murphy wrote:
> But I'm skeptical of bcache using a hidden area historically for the
> bootloader, to put its device metadata. I didn't realize that was the
> case. Imagine if LVM were to stuff metadata into the MBR gap, or
> mdadm. Egads.
On the matter of bcache in general
On Sun, May 29, 2016 at 12:23 AM, Andrei Borzenkov wrote:
> On 20.05.2016 20:59, Austin S. Hemmelgarn wrote:
>> On 2016-05-20 13:02, Ferry Toth wrote:
>>> We have 4 1TB drives in MBR, 1MB free at the beginning, grub on all 4,
>>> then 8GB swap, then all the rest btrfs (no LVM used). The 4 btrfs
>> partitions are in the same pool, which is in btrfs RAID10 format.
On 20.05.2016 20:59, Austin S. Hemmelgarn wrote:
> On 2016-05-20 13:02, Ferry Toth wrote:
>> We have 4 1TB drives in MBR, 1MB free at the beginning, grub on all 4,
>> then 8GB swap, then all the rest btrfs (no LVM used). The 4 btrfs
>> partitions are in the same pool, which is in btrfs RAID10 format.
On 2016-05-20 18:26, Henk Slager wrote:
Yes, sorry, I took a shortcut in the discussion and jumped to a
method for avoiding this 0.5-2% slowdown that you mention. (Or a
kernel crashing in bcache code due to corrupt SB on a backing device
or corrupted caching device contents).
I am actually a bit
Adding bcache protective superblocks is a one-time procedure which can be done
online. The bcache devices act as normal HDD if not attached to a
caching SSD. It's really less pain than you may think. And it's a
solution available now. Converting back later is easy: Just detach the
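The detach/attach dance described above goes through bcache's sysfs knobs. Here is a minimal sketch: the `/sys/block/bcache0/bcache/{detach,attach}` paths are the standard bcache interface, but the helper names and the `sysfs_dir` parameter are my own invention, added mainly so the sketch can be exercised against a scratch directory instead of real hardware.

```python
from pathlib import Path

def detach_backing_device(sysfs_dir="/sys/block/bcache0/bcache"):
    # Writing "1" to the detach knob asks the kernel to flush dirty data
    # and detach the backing device from its cache set; afterwards it
    # behaves like a plain HDD, protective superblock still in place.
    (Path(sysfs_dir) / "detach").write_text("1\n")

def attach_backing_device(cset_uuid, sysfs_dir="/sys/block/bcache0/bcache"):
    # Re-attaching later is just writing the cache set UUID back.
    (Path(sysfs_dir) / "attach").write_text(cset_uuid + "\n")
```

Run as root against the real sysfs path; without a caching device attached the bcache device is a no-op passthrough, so the conversion is reversible.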
On Fri, May 20, 2016 at 7:59 PM, Austin S. Hemmelgarn
wrote:
> On 2016-05-20 13:02, Ferry Toth wrote:
>>
>> We have 4 1TB drives in MBR, 1MB free at the beginning, grub on all 4,
>> then 8GB swap, then all the rest btrfs (no LVM used). The 4 btrfs
>> partitions are in the same pool, which is in btrfs RAID10 format.
On 2016-05-20 13:02, Ferry Toth wrote:
We have 4 1TB drives in MBR, 1MB free at the beginning, grub on all 4,
then 8GB swap, then all the rest btrfs (no LVM used). The 4 btrfs
partitions are in the same pool, which is in btrfs RAID10 format. /boot
is in subvolume @boot.
If you have GRUB installed
On Fri, 20 May 2016 08:03:12 -0400, Austin S. Hemmelgarn wrote:
> On 2016-05-19 19:23, Henk Slager wrote:
>> On Thu, May 19, 2016 at 8:51 PM, Austin S. Hemmelgarn
>> wrote:
>>> On 2016-05-19 14:09, Kai Krakow wrote:
On Wed, 18 May 2016 22:44:55 +0000 (UTC), Ferry Toth wrote:
>>
On 2016-05-19 19:23, Henk Slager wrote:
On Thu, May 19, 2016 at 8:51 PM, Austin S. Hemmelgarn
wrote:
On 2016-05-19 14:09, Kai Krakow wrote:
On Wed, 18 May 2016 22:44:55 +0000 (UTC), Ferry Toth wrote:
On Tue, 17 May 2016 20:33:35 +0200, Kai Krakow wrote:
Bcache is actually low maintenance
On 2016-05-19 17:01, Kai Krakow wrote:
On Thu, 19 May 2016 14:51:01 -0400, "Austin S. Hemmelgarn" wrote:
For a point of reference, I've
got a pair of 250GB Crucial MX100's (they cost less than 0.50 USD per
GB when I got them and provide essentially the same power-loss
protections that the high end Intel SSD's do) which have
Making sure the module is not
needed/loaded/used the next boot can be done by changing the
start-sector of the partition from 2048 to 2064. In gdisk one has to
change the alignment to 16 first, otherwise it refuses. And of
course, also first flush+stop+de-register bcache for the HDD.
The
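The arithmetic behind that 2048-to-2064 move is worth spelling out. As I understand the on-disk format, bcache's default data offset on a backing device is 16 sectors (8 KiB of superblock area), which is exactly what the shift skips; the constants below restate that assumption rather than read it from any device.

```python
SECTOR = 512                      # bytes per logical sector
OLD_START, NEW_START = 2048, 2064 # partition start sectors, before/after

shift = NEW_START - OLD_START     # sectors the partition start moves
print(shift, shift * SECTOR)      # 16 sectors, 8192 bytes

# bcache's default backing-device data offset is 16 sectors (8 KiB),
# so a 16-sector shift makes the new partition start exactly where the
# cached filesystem's data already starts -- the bcache superblock is
# simply left outside the partition.
BCACHE_DATA_OFFSET_SECTORS = 16
assert shift == BCACHE_DATA_OFFSET_SECTORS

# Why gdisk wants alignment 16 first: 2064 is a multiple of 16 but not
# of the default 2048-sector alignment.
assert NEW_START % 16 == 0 and NEW_START % 2048 != 0
```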
On Thu, 19 May 2016 14:51:01 -0400, "Austin S. Hemmelgarn" wrote:
> For a point of reference, I've
> got a pair of 250GB Crucial MX100's (they cost less than 0.50 USD per
> GB when I got them and provide essentially the same power-loss
> protections that the high end Intel SSD's do) which have
(like backups) and sequential access is detected and bypassed
by bcache. It won't invalidate your valuable "hot data" in the cache.
It works really well.
I'd even recommend formatting filesystems with a bcache protective
superblock (aka formatting the backing device) even if you're not gonna use
A protective superblock still in place doesn't
hurt then. Bcache is a no-op without a caching device attached.
> Even at home, I would just throw in a low cost ssd next to the hdd if
> it was as simple as device add. But I wouldn't want to store my
> photo/video collection on just ssd, too expen
On Tue, 17 May 2016 20:33:35 +0200, Kai Krakow wrote:
> On Tue, 17 May 2016 07:32:11 -0400, "Austin S. Hemmelgarn" wrote:
>
>> On 2016-05-17 02:27, Ferry Toth wrote:
>> > On Mon, 16 May 2016 01:05:24 +0200, Kai Krakow wrote:
>> >
>> >> On Sun, 15 May 2016 21:11:11 +0000 (UTC), Duncan <1i5t5.dun...@cox.net> wrote:
On Tue, 17 May 2016 07:32:11 -0400, "Austin S. Hemmelgarn" wrote:
> On 2016-05-17 02:27, Ferry Toth wrote:
> > On Mon, 16 May 2016 01:05:24 +0200, Kai Krakow wrote:
> >
> >> On Sun, 15 May 2016 21:11:11 +0000 (UTC), Duncan <1i5t5.dun...@cox.net> wrote:
> >>
> [...]
> >
> >>
> >
On 2016-05-17 02:27, Ferry Toth wrote:
On Mon, 16 May 2016 01:05:24 +0200, Kai Krakow wrote:
On Sun, 15 May 2016 21:11:11 +0000 (UTC), Duncan <1i5t5.dun...@cox.net> wrote:
Ferry Toth posted on Sun, 15 May 2016 12:12:09 +0000 as excerpted:
You can go there with only one additional HDD
On Mon, 16 May 2016 01:05:24 +0200, Kai Krakow wrote:
> On Sun, 15 May 2016 21:11:11 +0000 (UTC), Duncan <1i5t5.dun...@cox.net> wrote:
>
>> Ferry Toth posted on Sun, 15 May 2016 12:12:09 +0000 as excerpted:
>>
>
> You can go there with only one additional HDD as temporary storage. Just
>
On 2016-05-15 08:12, Ferry Toth wrote:
Is there anything going on in this area?
We have btrfs in RAID10 using 4 HDD's for many years now with a rotating
scheme of snapshots for easy backup. <10% files (bytes) change between
oldest snapshot and the current state.
However, the filesystem seems to become very slow, probably due to the
> > Simply telling the allocator to prefer new files to go to the ssd
> > and move away unpopular stuff to hdd during balance should do the
> > trick, or am I wrong?
> >
> > Are none of the big users looking into this?
>
> Hot data tracking remains on the list of requested features, but at
> btrfs is probably a complex thing with lots of overhead.
>
> Simply telling the allocator to prefer new files to go to the ssd and
> move away unpopular stuff to hdd during balance should do the trick, or
> am I wrong?
>
> Are none of the big users looking into this?
Hot data tracking remains on the list of requested features, but at
Is there anything going on in this area?
We have btrfs in RAID10 using 4 HDD's for many years now with a rotating
scheme of snapshots for easy backup. <10% files (bytes) change between
oldest snapshot and the current state.
However, the filesystem seems to become very slow, probably due to the
On Mon, Oct 29, 2012 at 12:30 PM, wrote:
> From: Zhi Yong Wu
>
> NOTE:
>
> The patchset can be obtained via my kernel dev git on github:
> g...@github.com:wuzhy/kernel.git hot_tracking
> If you're interested, you can also review them via
> https://github.com/wuzhy/kernel/commits/hot_trac
On Mon, Oct 29, 2012 at 6:30 PM, Andi Kleen wrote:
> zwu.ker...@gmail.com writes:
>>
>> TODO List:
>>
>> 1.) Need to do scalability or performance tests.
>
> You're changing some of the most performance critical code in the
> kernel. This step is absolutely not optional.
Ah, I know, but now I need
zwu.ker...@gmail.com writes:
>
> TODO List:
>
> 1.) Need to do scalability or performance tests.
You're changing some of the most performance critical code in the
kernel. This step is absolutely not optional.
-Andi
--
a...@linux.intel.com -- Speaking for myself only
From: Zhi Yong Wu
NOTE:
The patchset can be obtained via my kernel dev git on github:
g...@github.com:wuzhy/kernel.git hot_tracking
If you're interested, you can also review them via
https://github.com/wuzhy/kernel/commits/hot_tracking
For more info, please check hot_tracking.txt in D
From: Zhi Yong Wu
Miscellaneous features that implement hot data tracking
and generally make the hot data functions a bit more friendly.
Signed-off-by: Zhi Yong Wu
---
fs/direct-io.c      |  6 ++
mm/filemap.c        |  6 ++
mm/page-writeback.c | 12
mm
vfs: add hot tracking support
Zhi Yong Wu (14):
vfs,hot_track: introduce private radix tree structures
vfs,hot_track: initialize and free key data structures
vfs,hot_track: add the function for collecting I/O frequency
vfs,hot_track: add two map arrays
vfs,hot_track: add hooks to enable hot dat
On Tue, Oct 16, 2012 at 4:42 AM, Dave Chinner wrote:
> On Wed, Oct 10, 2012 at 06:07:22PM +0800, zwu.ker...@gmail.com wrote:
>> From: Zhi Yong Wu
>>
>> NOTE:
>>
>> The patchset is currently posted out mainly to make sure
>> it is going in the correct direction and hope to get some
>> helpful comments from other guys.
On Thu, Oct 18, 2012 at 1:17 PM, Dave Chinner wrote:
> On Thu, Oct 18, 2012 at 12:44:47PM +0800, Zhi Yong Wu wrote:
>> On Thu, Oct 18, 2012 at 12:29 PM, Dave Chinner wrote:
>> > On Wed, Oct 17, 2012 at 04:57:14PM +0800, Zhi Yong Wu wrote:
>> >> On Tue, Oct 16, 2012 at 4:42 AM, Dave Chinner wrote
On Thu, Oct 18, 2012 at 12:44:47PM +0800, Zhi Yong Wu wrote:
> On Thu, Oct 18, 2012 at 12:29 PM, Dave Chinner wrote:
> > On Wed, Oct 17, 2012 at 04:57:14PM +0800, Zhi Yong Wu wrote:
> >> On Tue, Oct 16, 2012 at 4:42 AM, Dave Chinner wrote:
> >> > On Wed, Oct 10, 2012 at 06:07:22PM +0800, zwu.ker.
On Thu, Oct 18, 2012 at 12:29 PM, Dave Chinner wrote:
> On Wed, Oct 17, 2012 at 04:57:14PM +0800, Zhi Yong Wu wrote:
>> On Tue, Oct 16, 2012 at 4:42 AM, Dave Chinner wrote:
>> > On Wed, Oct 10, 2012 at 06:07:22PM +0800, zwu.ker...@gmail.com wrote:
>> >> From: Zhi Yong Wu
>
>> > (*) Tested o
On Wed, Oct 17, 2012 at 04:57:14PM +0800, Zhi Yong Wu wrote:
> On Tue, Oct 16, 2012 at 4:42 AM, Dave Chinner wrote:
> > On Wed, Oct 10, 2012 at 06:07:22PM +0800, zwu.ker...@gmail.com wrote:
> >> From: Zhi Yong Wu
> > (*) Tested on an empty 17TB XFS filesystem with:
> >
> > $ sudo mkfs.xfs -f
On Tue, Oct 16, 2012 at 4:42 AM, Dave Chinner wrote:
> On Wed, Oct 10, 2012 at 06:07:22PM +0800, zwu.ker...@gmail.com wrote:
>> From: Zhi Yong Wu
>>
>> NOTE:
>>
>> The patchset is currently posted out mainly to make sure
>> it is going in the correct direction and hope to get some
>> helpful comments from other guys.
On Wed, Oct 10, 2012 at 06:07:22PM +0800, zwu.ker...@gmail.com wrote:
> From: Zhi Yong Wu
>
> NOTE:
>
> The patchset is currently posted out mainly to make sure
> it is going in the correct direction and hope to get some
> helpful comments from other guys.
> For more information, please check h
On Mon, Oct 15, 2012 at 8:39 AM, Zheng Liu wrote:
> On Wed, Oct 10, 2012 at 06:07:22PM +0800, zwu.ker...@gmail.com wrote:
>> From: Zhi Yong Wu
>>
>> NOTE:
>>
>> The patchset is currently posted out mainly to make sure
>> it is going in the correct direction and hope to get some
>> helpful comments from other guys.
vfs: initialize and free main data structures
vfs: add function for collecting raw access info
vfs: add two map arrays
vfs: add hooks to enable hot data tracking
vfs: add function for updating map arrays
vfs: add aging function for old map info
vfs: add one wq to update map info periodically
vfs: reg
From: Zhi Yong Wu
Miscellaneous features that implement hot data tracking
and generally make the hot data functions a bit more friendly.
Signed-off-by: Zhi Yong Wu
---
fs/direct-io.c    |  8
fs/hot_tracking.h |  5 +
mm/filemap.c      |  7 +++
mm/page
>> >> From: Zhi Yong Wu
>> >>
>> >> Miscellaneous features that implement hot data tracking
>> >> and generally make the hot data functions a bit more friendly.
>> >>
>> >> Signed-off-by: Zhi Yong Wu
>> >> ---
>> >> fs/dire
On Thu, Sep 27, 2012 at 02:28:12PM +0800, Zhi Yong Wu wrote:
> On Thu, Sep 27, 2012 at 11:54 AM, Dave Chinner wrote:
> > On Sun, Sep 23, 2012 at 08:56:31PM +0800, zwu.ker...@gmail.com wrote:
> >> From: Zhi Yong Wu
> >>
> >> Miscellaneous features that
On Thu, Sep 27, 2012 at 11:54 AM, Dave Chinner wrote:
> On Sun, Sep 23, 2012 at 08:56:31PM +0800, zwu.ker...@gmail.com wrote:
>> From: Zhi Yong Wu
>>
>> Miscellaneous features that implement hot data tracking
>> and generally make the hot data functions a bit more fr
On Sun, Sep 23, 2012 at 08:56:31PM +0800, zwu.ker...@gmail.com wrote:
> From: Zhi Yong Wu
>
> Miscellaneous features that implement hot data tracking
> and generally make the hot data functions a bit more friendly.
>
> Signed-off-by: Zhi Yong Wu
> ---
> fs/direct-i
From: Zhi Yong Wu
Miscellaneous features that implement hot data tracking
and generally make the hot data functions a bit more friendly.
Signed-off-by: Zhi Yong Wu
---
fs/direct-io.c               | 10 ++
include/linux/hot_tracking.h | 11 +++
mm/filemap.c
updating access frequency
vfs: add one new mount option '-o hottrack'
vfs: add init and exit support
vfs: introduce one hash table
vfs: enable hot data tracking
vfs: fork one kthread to update data temperature
vfs: add 3 new ioctl interfaces
vfs: add debugfs support
vfs: add docu
https://btrfs.wiki.kernel.org/index.php?title=Main_Page advertises "Hot
data tracking and moving to faster devices" but as far as I could find
(http://permalink.gmane.org/gmane.comp.file-systems.btrfs/15884) the
patches have never been merged. What is known about the future of this feature?
On Thursday 03 of May 2012 15:09:25 Waxhead wrote:
> David Sterba wrote:
> > On Sat, Feb 11, 2012 at 05:49:41AM +0100, Timo Witte wrote:
> >> What happened to the hot data tracking feature in btrfs? There are a lot
> >> of old patches from aug 2010, but it looks like the feature has been
David Sterba wrote:
On Sat, Feb 11, 2012 at 05:49:41AM +0100, Timo Witte wrote:
What happened to the hot data tracking feature in btrfs? There are a lot
of old patches from aug 2010, but it looks like the feature has been
completely removed from the current version of btrfs. Is this feature
On Sat, Feb 11, 2012 at 05:49:41AM +0100, Timo Witte wrote:
> What happened to the hot data tracking feature in btrfs? There are a lot
> of old patches from aug 2010, but it looks like the feature has been
> completely removed from the current version of btrfs. Is this feature
>
What happened to the hot data tracking feature in btrfs? There are a lot
of old patches from aug 2010, but it looks like the feature has been
completely removed from the current version of btrfs. Is this feature
still on the roadmap?
From: Ben Chociej
Adds hot_inode_tree and hot_range_tree structs to keep track of
frequently accessed files and ranges within files. Trees contain
hot_{inode,range}_items representing those files and ranges, each of
which contains a btrfs_freq_data struct with its frequency of access
metrics (num
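The two-level layout described above (an inode tree whose items each carry a range tree, with per-item frequency metrics) can be sketched in a few lines. This is an illustrative model only: the names are loosely patterned on the patch's `hot_inode_tree`/`hot_range_tree`/`btrfs_freq_data`, but plain dicts stand in for the kernel's radix/RB trees and the field set is a guess from the summary, not the actual struct definitions.

```python
from dataclasses import dataclass, field

@dataclass
class FreqData:
    # frequency-of-access metrics, loosely modeled on btrfs_freq_data
    num_reads: int = 0
    num_writes: int = 0
    last_read_time: float = 0.0
    last_write_time: float = 0.0

@dataclass
class HotRangeItem:
    start: int                  # byte offset of the range within the file
    length: int
    freq: FreqData = field(default_factory=FreqData)

@dataclass
class HotInodeItem:
    ino: int
    freq: FreqData = field(default_factory=FreqData)
    # per-inode range tree: offset -> HotRangeItem (an RB tree in-kernel)
    ranges: dict = field(default_factory=dict)

class HotInodeTree:
    """Two-level structure: inode tree -> per-inode range tree."""

    def __init__(self):
        self.inodes = {}        # ino -> HotInodeItem

    def record_read(self, ino, offset, length, now):
        # Bump counters at both levels: the whole file and the range hit.
        item = self.inodes.setdefault(ino, HotInodeItem(ino))
        item.freq.num_reads += 1
        item.freq.last_read_time = now
        rng = item.ranges.setdefault(offset, HotRangeItem(offset, length))
        rng.freq.num_reads += 1
        rng.freq.last_read_time = now
```

The point of the nesting is that a mostly-cold file can still have one hot range worth relocating on its own.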
From: Ben Chociej
Miscellaneous features that implement hot data tracking, enable hot data
migration to faster media, and generally make the hot data functions a
bit more friendly.
ctree.h: Add the root hot_inode_tree and heat hashlists. Defines some
mount options and inode flags for turning
On Tue, 2010-07-27 at 15:29 -0700, Tracy Reed wrote:
> On Tue, Jul 27, 2010 at 05:00:18PM -0500, bchoc...@gmail.com spake thusly:
> > The long-term goal of these patches, as discussed in the Motivation
> > section at the end of this message, is to enable Btrfs to perform
> > automagic relocation of hot data to fast media like SSD. This goal has
On Wed, 28 Jul 2010 09:18:23 am Ben Chociej wrote:
> Yes, that's correct. It's likely not going to be a cache in the
> traditional sense, since the entire capacity of both HDD and SSD would
> be available.
To me that sounds like an HSM type arrangement, with most frequently used data
on the high
On Tue, Jul 27, 2010 at 6:10 PM, Diego Calleja wrote:
> On Wednesday, 28 July 2010 00:00:18, bchoc...@gmail.com wrote:
>> With Btrfs's COW approach, an external cache (where data is moved to
>> SSD, rather than just cached there) makes a lot of sense. Though these
>
> As I understand it,
On Wednesday, 28 July 2010 00:00:18, bchoc...@gmail.com wrote:
> With Btrfs's COW approach, an external cache (where data is moved to
> SSD, rather than just cached there) makes a lot of sense. Though these
As I understand it, what your project intends to do is to move "hot"
data to a SSD
On Tue, Jul 27, 2010 at 05:00:18PM -0500, bchoc...@gmail.com spake thusly:
> The long-term goal of these patches, as discussed in the Motivation
> section at the end of this message, is to enable Btrfs to perform
> automagic relocation of hot data to fast media like SSD. This goal has
> been motiva
From: Ben Chociej
Miscellaneous features that enable hot data tracking features, open the
door for future hot data migration to faster media, and generally make
the hot data functions a bit more friendly.
ctree.h: Add the root hot_inode_tree and heat hashlists. Defines some
mount options and
INTRODUCTION:
This patch series adds experimental support for tracking data
temperature in Btrfs. Essentially, this means maintaining some key
stats (like number of reads/writes, last read/write time, frequency of
reads/writes), then distilling those numbers down to a single
"temperature" value th
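The "distilling" step the introduction describes, turning raw counters into one temperature, can be sketched like this. The weights and formula here are purely illustrative inventions; the actual patches compute their own heat value from similar inputs (read/write counts, recency, and access frequency).

```python
import math

def temperature(num_reads, num_writes, age_seconds, accesses_per_hour):
    # Activity: log-scaled so huge counters don't dominate; writes are
    # weighted double on the (illustrative) guess that dirtied data is
    # "hotter" than merely-read data.
    activity = math.log1p(num_reads + 2 * num_writes)
    # Recency: decays smoothly as the last access gets older (in hours).
    recency = 1.0 / (1.0 + age_seconds / 3600.0)
    # Frequency: a recent access rate boosts the final score.
    return activity * recency * (1.0 + accesses_per_hour)
```

A file hammered a minute ago scores far above one untouched for a day, which is all a relocation policy needs from the metric: a total order over candidates, not absolute units.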