fs/f2fs/sysfs.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
--- linux-next-20180706.orig/fs/f2fs/sysfs.c
+++ linux-next-20180706/fs/f2fs/sysfs.c
@@ -9,6 +9,7 @@
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
+#include
On 2018/7/7 9:13, Jaegeuk Kim wrote:
> On 07/05, Chao Yu wrote:
>> f2fs is focused on flash-based storage, so let's enable real-time
>> discard by default; if users don't want it enabled, the 'nodiscard'
>> mount option should be used at mount time.
>>
>> Signed-off-by: Chao Yu
>> ---
>> fs/f2fs/super.c
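As a standalone sketch of the default-on behavior discussed above (the option bit and function names here are hypothetical simplifications modeled on f2fs's set_opt()/clear_opt() bitmask style, not the actual fs/f2fs/super.c code):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical mount-option bit; the real flags live in fs/f2fs/f2fs.h. */
#define OPT_DISCARD 0x1u

struct sb_opts { unsigned int opt; };

/* Enable real-time discard by default; "nodiscard" opts out. */
static void parse_options(struct sb_opts *sbi, const char *opts)
{
	sbi->opt = OPT_DISCARD;            /* default: discard on */
	if (opts && strstr(opts, "nodiscard"))
		sbi->opt &= ~OPT_DISCARD;  /* user asked to disable it */
}
```

The point of the patch is only the changed default: discard now has to be explicitly turned off rather than explicitly turned on.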
Hi Jaegeuk,
On 2018/7/7 9:12, Jaegeuk Kim wrote:
> Hi Chao,
>
> I'm hitting some messages below during the fault injection test. I'll dig into the
> issue later, but meanwhile could you review this patch again?
Oh, okay, let me check this patch again.
Thanks,
>
> Thanks,
>
> On 06/28, Chao Yu wrote:
On 2018/7/7 9:10, Jaegeuk Kim wrote:
> On 07/07, Chao Yu wrote:
>> Hi Jaegeuk,
>>
>> On 2018/7/7 6:49, Jaegeuk Kim wrote:
>>> On 07/05, Chao Yu wrote:
If discard IOs are blocked by user IO, do not fall back to selecting and
issuing discards at a lower granularity; retry with the current granularity.
>>>
On 2018/7/7 9:08, Jaegeuk Kim wrote:
> On 07/07, Chao Yu wrote:
>> Hi Jaegeuk,
>>
>> On 2018/7/7 6:45, Jaegeuk Kim wrote:
>>> On 07/04, Chao Yu wrote:
From: Chao Yu
Some devices have small max_{hw,}discard_sectors, so that in
__blkdev_issue_discard(), one big discard bio can be split
When unmounting f2fs in force mode, it can get stuck in io_schedule()
because of pending IOs on meta_inode.
io_schedule+0xd/0x30
wait_on_page_bit_common+0xc6/0x130
__filemap_fdatawait_range+0xbd/0x100
filemap_fdatawait_keep_errors+0x15/0x40
sync_inodes_sb+0x1cf/0x240
sync_filesystem+0x52/0x90
gene
On 07/05, Chao Yu wrote:
> f2fs is focused on flash-based storage, so let's enable real-time
> discard by default; if users don't want it enabled, the 'nodiscard'
> mount option should be used at mount time.
>
> Signed-off-by: Chao Yu
> ---
> fs/f2fs/super.c | 7 +++
> 1 file changed, 3 insertions(+
Hi Chao,
I'm hitting some messages below during the fault injection test. I'll dig into the
issue later, but meanwhile could you review this patch again?
Thanks,
On 06/28, Chao Yu wrote:
> https://bugzilla.kernel.org/show_bug.cgi?id=200221
>
> - Overview
> BUG() in clear_inode() when mounting and un-
On 07/07, Chao Yu wrote:
> Hi Jaegeuk,
>
> On 2018/7/7 7:23, Jaegeuk Kim wrote:
> > On 07/05, Chao Yu wrote:
> >> For small-granularity discards whose size is smaller than 64KB, if we
> >> issue those kinds of discards ordered by size, their IOs will be spread
> >> across the entire logical address space, so th
On 07/07, Chao Yu wrote:
> Hi Jaegeuk,
>
> On 2018/7/7 6:49, Jaegeuk Kim wrote:
> > On 07/05, Chao Yu wrote:
> >> If discard IOs are blocked by user IO, do not fall back to selecting and
> >> issuing discards at a lower granularity; retry with the current granularity.
> >
> > We need to stop as soon as possible
On 07/07, Chao Yu wrote:
> Hi Jaegeuk,
>
> On 2018/7/7 6:45, Jaegeuk Kim wrote:
> > On 07/04, Chao Yu wrote:
> >> From: Chao Yu
> >>
> >> Some devices have small max_{hw,}discard_sectors, so that in
> >> __blkdev_issue_discard(), one big discard bio can be split
> >> into multiple small size
Hi Jaegeuk,
On 2018/7/7 7:23, Jaegeuk Kim wrote:
> On 07/05, Chao Yu wrote:
>> For small-granularity discards whose size is smaller than 64KB, if we
>> issue those kinds of discards ordered by size, their IOs will be spread
>> across the entire logical address space, so that in the FTL, the L2P
>> table will be updated
>>
Hi Jaegeuk,
On 2018/7/7 6:49, Jaegeuk Kim wrote:
> On 07/05, Chao Yu wrote:
>> If discard IOs are blocked by user IO, do not fall back to selecting and
>> issuing discards at a lower granularity; retry with the current granularity.
>
> We need to stop as soon as possible since user activity comes. Later, discar
Hi Jaegeuk,
On 2018/7/7 6:45, Jaegeuk Kim wrote:
> On 07/04, Chao Yu wrote:
>> From: Chao Yu
>>
>> Some devices have small max_{hw,}discard_sectors, so that in
>> __blkdev_issue_discard(), one big discard bio can be split
>> into multiple small discard bios, resulting in heavy load on the IO
>>
On 2018/7/7 6:32, Jaegeuk Kim wrote:
> On 07/04, Chao Yu wrote:
>> f2fs's recovery flow relies on the dnode block link list: fsynced
>> file recovery depends on the previous dnode's persistence in the list, so
>> during fsync() we should wait for all regular inodes' dnodes to be written
>> back before issuing flush.
On 07/05, Chao Yu wrote:
> For small-granularity discards whose size is smaller than 64KB, if we
> issue those kinds of discards ordered by size, their IOs will be spread
> across the entire logical address space, so that in the FTL, the L2P table
> will be updated randomly, resulting in a bad wear rate for the table.
>
> In th
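The ordering concern above can be sketched in userspace: issuing small discards in LBA order keeps the FTL's L2P updates sequential instead of random. The struct below is a hypothetical simplification loosely modeled on f2fs's discard command, not the real structure:

```c
#include <assert.h>
#include <stdlib.h>

/* A pending discard command (hypothetical simplification). */
struct dcmd { unsigned long long lstart; unsigned long long len; };

static int by_lba(const void *a, const void *b)
{
	const struct dcmd *x = a, *y = b;
	return (x->lstart > y->lstart) - (x->lstart < y->lstart);
}

/* Issue small (<64KB) discards in LBA order so the FTL updates its
 * L2P table at adjacent offsets rather than scattered ones. */
static void sort_small_discards(struct dcmd *cmds, size_t n)
{
	qsort(cmds, n, sizeof(*cmds), by_lba);
}
```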
On 07/05, Chao Yu wrote:
> If discard IOs are blocked by user IO, do not fall back to selecting and
> issuing discards at a lower granularity; retry with the current granularity.
We need to stop as soon as possible once user activity comes. Later, the discard
thread will try again in another idle time. What's yo
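The policy Chao proposes can be written as a tiny decision function: when user IO is pending, keep the current granularity for the next idle window instead of dropping to a finer one. This is a sketch of the thread discussion, not the actual f2fs discard-thread code:

```c
#include <assert.h>

/* Pick the granularity for the next discard pass (sketch).
 * While user IO is pending, stay at the current granularity and
 * retry later; only when idle do we move toward finer discards. */
static int next_granularity(int cur, int user_io_pending)
{
	if (user_io_pending)
		return cur;                 /* retry same granularity later */
	return cur > 1 ? cur - 1 : 1;       /* otherwise go one step finer */
}
```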
On 07/04, Chao Yu wrote:
> From: Chao Yu
>
> Some devices have small max_{hw,}discard_sectors, so that in
> __blkdev_issue_discard(), one big discard bio can be split
> into multiple small discard bios, resulting in heavy load on the IO
> scheduler and device, which can hang other sync IO for a long time.
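The splitting effect described above is easy to quantify: the bio count grows with the size of the range divided by the device cap. This sketch models only the cap; the real __blkdev_issue_discard() also applies alignment and granularity rules not shown here:

```c
#include <assert.h>

/* How many bios a discard of nr_sects sectors needs when each bio is
 * capped at max_discard_sectors (sketch of the splitting behavior). */
static unsigned long split_count(unsigned long nr_sects,
				 unsigned long max_discard_sectors)
{
	if (!max_discard_sectors)
		return 0;  /* device does not support discard */
	return (nr_sects + max_discard_sectors - 1) / max_discard_sectors;
}
```

A 512MB discard (1M sectors) against a device capped at 256 sectors turns into thousands of small bios, which is the scheduler load the patch is worried about.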
On 07/03, Chao Yu wrote:
> The fsyncer will wait on all regular writeback dnode pages before flushing;
> if there are async dnode pages blocked by the IO scheduler, it may degrade
> fsync performance.
So, it'd be better to keep tracking what we really need to wait for first.
>
> In this patch, we
On 07/04, Chao Yu wrote:
> f2fs's recovery flow relies on the dnode block link list: fsynced
> file recovery depends on the previous dnode's persistence in the list, so
> during fsync() we should wait for all regular inodes' dnodes to be written
> back before issuing flush.
We don't need to wait for all
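The dependency being argued about can be modeled in a few lines: recovery walks the fsynced-dnode chain, so any predecessor that never reached disk cuts off everything behind it. This is a hypothetical in-memory model of the discussion, not f2fs's on-disk format:

```c
#include <assert.h>
#include <stddef.h>

/* In-memory model of the fsynced-dnode link list (hypothetical). */
struct dnode { int persisted; struct dnode *next; };

/* Count how many fsynced dnodes recovery can actually reach:
 * the walk stops at the first dnode that was never written back. */
static int recoverable(struct dnode *head)
{
	int n = 0;
	for (struct dnode *d = head; d && d->persisted; d = d->next)
		n++;
	return n;
}
```

This is why Chao wants to wait for earlier dnode writeback before flushing, and why Jaegeuk counters that waiting on *all* of them is more than the chain actually requires.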
Let's flush journal NAT entries to speed up the next run.
Signed-off-by: Jaegeuk Kim
---
fs/f2fs/node.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
index 29237aeca041..0f076fb0d828 100644
--- a/fs/f2fs/node.c
+++ b/fs/f2fs/node.c
@@ -2613,6 +261
Once we shut down f2fs, we have to flush stale pages in order to unmount
the filesystem. To make this stable, we need to stop fault injection as well.
Signed-off-by: Jaegeuk Kim
---
fs/f2fs/checkpoint.c | 1 +
fs/f2fs/data.c | 4
fs/f2fs/f2fs.h | 7 +++
fs/f2fs/file.c
This fixes getting the page type after fscrypt's pullback: the page in the
bio is a bounce page, but we need the control page, which is what we
originally submitted.
Signed-off-by: Jaegeuk Kim
---
fs/f2fs/data.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/
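The bounce/control relationship behind that fix can be sketched as follows. In the kernel, fscrypt's encrypted bounce page carries a link back to the originally submitted (control) page, which fscrypt_control_page() follows; the struct and lookup below are a standalone userspace model of that idea, not the real kernel types:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal model: a bounce page's private field points back at the
 * original (control) page that the filesystem submitted. */
struct page { void *private; int is_bounce; };

/* Resolve a bio page to the page the caller actually submitted. */
static struct page *control_page(struct page *p)
{
	return p->is_bounce ? (struct page *)p->private : p;
}
```

The bug fixed here is classifying the page type from the bounce page instead of first resolving it back to the control page.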
This fixes handling of unaligned DIO by falling back to buffered writes.
Signed-off-by: Jaegeuk Kim
---
fs/f2fs/data.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index e66379961804..6e8e78bb64a7 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -2425,7 +
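The fallback condition amounts to a block-alignment test on the request's position and length; this is a simplified sketch of the idea in the one-line patch, not the actual f2fs direct-IO check:

```c
#include <assert.h>

/* Return nonzero when a direct-IO request must take the buffered
 * path because pos or len is not block-aligned (sketch; blocksize
 * is assumed to be a power of two). */
static int use_buffered_io(unsigned long long pos, unsigned long long len,
			   unsigned int blocksize)
{
	return ((pos | len) & (blocksize - 1)) ? 1 : 0;
}
```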
Hi Wen,
I've updated two patches today for these issues; could you please test them?
On 2018/7/6 9:30, Xu, Wen wrote:
> Thanks very much! I would like to provide any further help or testing.
>
> -Wen
>
>> On Jul 5, 2018, at 9:13 PM, Chao Yu wrote:
>>
>> Hi Wen,
>>
>> On 2018/7/6 3:19, Xu, Wen w
On Fri, Jun 29, 2018 at 11:30:55AM -0600, Ross Zwisler wrote:
> On Sat, Jun 16, 2018 at 07:00:46PM -0700, Matthew Wilcox wrote:
> > Signed-off-by: Matthew Wilcox
> > ---
> <>
> > +static void *dax_make_page_entry(struct page *page, void *entry)
> > +{
> > + pfn_t pfn = page_to_pfn_t(page);
> > +
My fuzzer still randomly fuzzes the bytes in the image, but whatever it writes,
it will fix the checksum in CP blocks afterwards. F2FS only has a CRC check in
CP, so it is not very hard for me to study the existing code and do this. I
just want to reach more code by passing the CRC checks.
Thanks,
Wen
Hi Wen,
On 2018/7/6 9:30, Xu, Wen wrote:
> Thanks very much! I would like to provide any further help or testing.
I found something interesting: our key metadata in the checkpoint pack is
already protected by a checksum. In the image you attached, the checksum
value is correct, but still some key m
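What Wen's fuzzer does can be sketched as "mutate, then re-seal": after corrupting bytes, recompute the CRC over the block body and write it back so the checkpoint still passes the check. This uses plain CRC32 and a made-up block layout (payload followed by a trailing 4-byte CRC); the real f2fs_crc32() seed and checksum offset differ:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Bitwise CRC32 (IEEE polynomial), no lookup table. */
static uint32_t crc32_sw(const uint8_t *p, size_t n)
{
	uint32_t c = 0xFFFFFFFFu;
	while (n--) {
		c ^= *p++;
		for (int k = 0; k < 8; k++)
			c = (c >> 1) ^ (0xEDB88320u & -(c & 1u));
	}
	return ~c;
}

/* Re-seal a block after fuzzing: recompute the CRC over the payload
 * and store it little-endian in the trailing 4 bytes (layout is a
 * hypothetical simplification of the CP block). */
static void fix_checksum(uint8_t *blk, size_t blksz)
{
	uint32_t c = crc32_sw(blk, blksz - 4);
	blk[blksz - 4] = c & 0xFF;
	blk[blksz - 3] = (c >> 8) & 0xFF;
	blk[blksz - 2] = (c >> 16) & 0xFF;
	blk[blksz - 1] = (c >> 24) & 0xFF;
}
```

That is also why a checksum alone cannot catch this class of corruption: the fuzzer makes any mutated image self-consistent again.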