On 09/23/2017 12:35 PM, Hans van Kranenburg wrote:
> Hi,
>
> When looking around in the kernel code, I ran into this (hash.h):
>
> u32 btrfs_crc32c(u32 crc, const void *address, unsigned int length);
>
> [...]
>
> static inline u64 btrfs_extref_hash(u64 parent_objectid, const char *name, int
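From memory, the full helper is roughly the following (a sketch, not a verbatim quote; the exact definition may differ between kernel versions):

u32 btrfs_crc32c(u32 crc, const void *address, unsigned int length);

/*
 * Key offset of an extended inode ref: crc32c of the name, seeded with the
 * parent objectid (note the u64 parent_objectid is passed as the u32 crc seed)
 */
static inline u64 btrfs_extref_hash(u64 parent_objectid, const char *name,
				    int len)
{
	return (u64)btrfs_crc32c(parent_objectid, name, len);
}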
On Sun, 24 Sep 2017 19:43:05 +0300, Andrei Borzenkov wrote:
> On 24.09.2017 16:53, Fuhrmann, Carsten wrote:
> > Hello,
> >
> > 1)
> > I used direct write (no page cache) but I didn't disable the disk
> > cache of the HDD/SSD itself. In all tests I wrote 1GB and looked
> > at
On 24.09.2017 16:53, Fuhrmann, Carsten wrote:
> Hello,
>
> 1)
> I used direct write (no page cache) but I didn't disable the disk cache of
> the HDD/SSD itself. In all tests I wrote 1GB and looked at the runtime of
> that write process.
So "latency" on your diagram means total time to write 1GiB
On Fri, Sep 15, 2017 at 03:06:51PM -0600, Liu Bo wrote:
> commit 4246a0b63bd8 ("block: add a bi_error field to struct bio")
> changed the logic of how dio read endio reports errors.
>
> For single stripe dio read, %bio->bi_status reflects the error before
> verifying checksum, and now we're
On Sun, Sep 24, 2017 at 05:17:19PM +0300, Nikolay Borisov wrote:
> >>> However, such a whack-a-mole fix will eventually become a nightmare to maintain.
> >>>
> >>> What about integrating all such validation checkers into one place?
> >>> So the fsck part will only need to check their cross references without
On Fri, Sep 22, 2017 at 12:11:18PM -0600, Liu Bo wrote:
> The local bio_list may have pending bios when doing cleanup, it can
> end up with a memory leak if they don't get freed.
I was wondering if we could make a common helper that would call
rbio_orig_end_io and do a while (..) bio_put() loop, but
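Roughly what I have in mind, as a sketch only (the helper name is made up and the exact error type may differ):

static void rbio_end_io_and_drain(struct btrfs_raid_bio *rbio,
				  struct bio_list *bio_list,
				  blk_status_t err)
{
	struct bio *bio;

	/* end the original rbio with the error ... */
	rbio_orig_end_io(rbio, err);

	/* ... and drop whatever is still pending on the local list */
	while ((bio = bio_list_pop(bio_list)))
		bio_put(bio);
}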
On Sat, Sep 23, 2017 at 09:09:24AM +0800, Qu Wenruo wrote:
>
>
> On 2017-09-23 08:48, Liu Bo wrote:
> > On Sat, Sep 23, 2017 at 08:46:55AM +0800, Qu Wenruo wrote:
> >>
> >>
> >> On 2017-09-23 07:36, Liu Bo wrote:
> >>> This uses a bool 'do_backup' to make this piece of code easier to understand.
> >>>
>
On Wed, Sep 20, 2017 at 05:50:18PM -0600, Liu Bo wrote:
> The kernel oops happens at
>
> kernel BUG at fs/btrfs/extent_io.c:2104!
> ...
> RIP: clean_io_failure+0x263/0x2a0 [btrfs]
>
> It shows that the read-repair code is using an improper mirror index.
> This is due to the fact that compression
On Wed, Sep 20, 2017 at 05:50:19PM -0600, Liu Bo wrote:
> Currently, even if the underlying disk reports an I/O failure, the
> compressed read endio still goes on to verify checksums and reports the
> failure as a checksum error.
>
> In fact, if some IO has failed while reading a compressed data
> extent,
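The ordering being aimed for looks roughly like this (a sketch of the idea, not the actual patch; field and helper names are from memory and may differ):

static void end_compressed_bio_read(struct bio *bio)
{
	struct compressed_bio *cb = bio->bi_private;

	if (bio->bi_status) {
		/*
		 * The device already reported an I/O error; record it and
		 * skip checksum verification so the failure is not
		 * misreported as a csum mismatch.
		 */
		cb->errors = 1;
		goto out;
	}

	/* only when the I/O itself succeeded does verifying checksums make sense */
	/* ... check_compressed_csum() and friends ... */

out:
	/* ... drop the pending count and finish cb on the last bio ... */
	bio_put(bio);
}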
Hello,
1)
I used direct write (no page cache) but I didn't disable the disk cache of the
HDD/SSD itself. In all tests I wrote 1GB and looked at the runtime of that
write process.
I ran every test 5 times with different block sizes (2k, 8k, 32k, 128k, 512k).
Those values are on the x-axis. On
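For what it's worth, the core of such a direct-write test boils down to something like the following (a minimal sketch, not the actual benchmark; the file name and block size are placeholders, and O_DIRECT requires the buffer and the request size to be aligned to the device's logical sector size):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const size_t bs = 2048;              /* block size under test */
	const size_t total = 1UL << 30;      /* 1 GiB */
	void *buf;
	int fd;

	/* O_DIRECT needs an aligned buffer; 4096 covers common sector sizes */
	if (posix_memalign(&buf, 4096, bs))
		return 1;
	memset(buf, 0xab, bs);

	fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC | O_DIRECT, 0644);
	if (fd < 0)
		return 1;

	for (size_t done = 0; done < total; done += bs)
		if (write(fd, buf, bs) != (ssize_t)bs)
			return 1;

	fsync(fd);
	close(fd);
	free(buf);
	return 0;
}

Timing that loop and dividing 1 GiB by the elapsed time gives the throughput for each block size.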
Hello,
I ran a few performance tests comparing mdadm, hardware RAID and btrfs
RAID. I noticed that the performance for small block sizes (2k) is very bad on
SSDs in general and on HDDs for sequential writing.
I wonder about that result, because the wiki says that btrfs is very
effective
On Fri, Sep 22, 2017 at 02:56:49PM +0900, Misono, Tomohiro wrote:
> Summary:
> Cleanup mount path by avoiding calling btrfs_mount() twice.
It would be great to get rid of that, but please do it in smaller steps,
each of them bisectable and preserving the existing functionality.
Patch 1/3
On 2017-09-24 21:24, Fuhrmann, Carsten wrote:
Hello,
I ran a few performance tests comparing mdadm, hardware RAID and btrfs
RAID. I noticed that the performance for small block sizes (2k) is very bad on
SSDs in general and on HDDs for sequential writing.
2K is smaller than the minimal
On 2017-09-24 21:53, Fuhrmann, Carsten wrote:
Hello,
1)
I used direct write (no page cache) but I didn't disable the disk cache of the
HDD/SSD itself. In all tests I wrote 1GB and looked at the runtime of that
write process.
Are you writing all the 1G into one file?
Or into different
All my points are clear for this patchset:
I know I removed one function, and my reasons are:
1) Little or no usage
And it's counter-intuitive.
2) Dead code (neither tested nor well documented)
3) A possible workaround exists
I can add several extra reasons, as I stated before, but the number of
reasons won't
On 09/24/2017 12:10 PM, Anand Jain wrote:
>
>
>> All my points are clear for this patchset:
>> I know I removed one function, and my reasons are:
>> 1) Little or no usage
>> And it's counter-intuitive.
>> 2) Dead code (neither tested nor well documented)
>> 3) A possible workaround exists
>>
>> I can add
1)
Every test has its own file. So the 2k blocksize test writes to a different file
than the 4k blocksize test. In the end there are 5 files on the disk (2k,
8k,...)
2)
Well, I think it is 2 as well, since for 4k and higher the performance is much
better.
I'm going to test with -o max_inline and test
On Fri, Sep 22, 2017 at 05:21:27PM -0600, Liu Bo wrote:
> We had a bug in the btrfs compression code which could end up with a
> kernel panic.
>
> This adds a regression test for the bug, and I've also sent a
> kernel patch to fix it.
>
> The patch is "Btrfs: fix kernel oops while reading
On 19.09.2017 13:00, Qu Wenruo wrote:
>
>
> On 2017-09-19 17:48, Su Yue wrote:
>>
>>
>> On 09/19/2017 04:48 PM, Qu Wenruo wrote:
>>>
>>>
>>> On 2017-09-19 16:32, Su Yue wrote:
The lowmem check does not skip an invalid type in extent_inline_ref and then
calls btrfs_extent_inline_ref_size(type)
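A minimal sketch of the guard that is needed before asking for the size (the error reporting, the 'len' variable and the return convention are placeholders, not the actual lowmem-check code):

	switch (type) {
	case BTRFS_TREE_BLOCK_REF_KEY:
	case BTRFS_SHARED_BLOCK_REF_KEY:
	case BTRFS_EXTENT_DATA_REF_KEY:
	case BTRFS_SHARED_DATA_REF_KEY:
		break;
	default:
		/*
		 * Unknown/corrupted inline ref type: report it instead of
		 * feeding it to btrfs_extent_inline_ref_size().
		 */
		error("unknown extent inline ref type: %d", type);
		return -EINVAL;
	}

	len = btrfs_extent_inline_ref_size(type);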