Hi, David,
Many tests in xfstests, such as btrfs/007, btrfs/008 and btrfs/016,
fail with the following patch applied.
fefbab75 btrfs-progs: send-stream: check number of read bytes from stream
This is because cmds-receive.c:do_receive() judges the end of the
stream by the return value 1 from
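The end-of-stream convention described above can be sketched in C as follows. This is a hypothetical helper, not the actual btrfs-progs code; all names are illustrative. The point is distinguishing a clean EOF at a command boundary (return 1) from a short read that truncates a command (error):

```c
#include <stdio.h>
#include <unistd.h>

/*
 * Illustrative stream reader: returns 1 for a clean end of stream
 * (EOF exactly at a command boundary), 0 for a complete read, and
 * -1 for an I/O error or a stream truncated mid-command.
 */
static int read_stream_buf(int fd, char *buf, size_t len)
{
	size_t total = 0;

	while (total < len) {
		ssize_t n = read(fd, buf + total, len - total);

		if (n < 0)
			return -1;	/* real I/O error */
		if (n == 0)
			break;		/* EOF */
		total += n;
	}
	if (total == 0)
		return 1;		/* clean end of stream */
	if (total < len)
		return -1;		/* stream truncated mid-command */
	return 0;			/* full buffer read */
}
```

A caller loop would then treat 1 as a normal termination and anything negative as a corrupt stream.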
Commit c8b978188c ("Btrfs: Add zlib compression support") produces
data corruption when reading a file with a hole positioned after an
inline extent. btrfs_get_extent will return uninitialized kernel memory
instead of zero bytes in the hole.
Commit 93c82d5750 ("Btrfs: zero page past end of
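For reference, the expected semantics can be illustrated with plain POSIX calls on any filesystem (the file path and sizes are arbitrary): bytes in a hole must read back as zeros, even when the hole follows a write small enough that btrfs may store it as an inline extent.

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/*
 * Create a small file (small enough that btrfs may store it as an
 * inline extent), then extend it with ftruncate() so a hole follows
 * the data.  Returns 0 if every hole byte reads back as zero, -1
 * otherwise (the bug described above would return garbage instead).
 */
static int hole_reads_as_zero(const char *path)
{
	char buf[4096];
	int fd, i;

	fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
	if (fd < 0)
		return -1;
	if (write(fd, "abc", 3) != 3 || ftruncate(fd, sizeof(buf)) != 0) {
		close(fd);
		return -1;
	}
	memset(buf, 0xff, sizeof(buf));	/* poison the buffer first */
	if (pread(fd, buf, sizeof(buf), 0) != (ssize_t)sizeof(buf)) {
		close(fd);
		return -1;
	}
	close(fd);
	for (i = 3; i < (int)sizeof(buf); i++)
		if (buf[i] != 0)
			return -1;
	return 0;
}
```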
On Mon, 2016-11-28 at 06:53 +0300, Andrei Borzenkov wrote:
> If you allow any write to filesystem before resuming from hibernation
> you risk corrupted filesystem. I strongly believe that "ro" must be
> really read-only
You're aware that "ro" already doesn't mean "no changes to the block
device"
On 28.11.2016 06:37, Christoph Anton Mitterer wrote:
> On Sat, 2016-11-26 at 14:12 +0100, Goffredo Baroncelli wrote:
>> I can't agree. If the filesystem is mounted read-only this behavior
>> may be correct; but in other cases I don't see any reason not to
>> correct wrong data even in the read case.
On Sat, 2016-11-26 at 14:12 +0100, Goffredo Baroncelli wrote:
> I can't agree. If the filesystem is mounted read-only this behavior
> may be correct; but in other cases I don't see any reason not to
> correct wrong data even in the read case. If your RAM is unreliable
> you have big problems
On Mon, Nov 28, 2016 at 10:50:30AM +0800, Qu Wenruo wrote:
> Any comment?
Sorry for the late review, I'm planning to look at them this week.
Thanks,
Eryu
Ulli Horlacher posted on Mon, 28 Nov 2016 01:38:29 +0100 as excerpted:
> Ok, then next question :-)
>
> What is better (for a single user workstation): using mount option
> "autodefrag" or call "btrfs filesystem defragment -r" (-t ?) via nightly
> cronjob?
>
> So far, I use neither.
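For the cron variant, a possible nightly crontab entry looks like the following. The path, schedule and target extent size are illustrative; `-r` recurses and `-t` sets the extent size below which extents are considered for defragmentation:

```crontab
# Illustrative /etc/crontab entry: defragment the whole filesystem
# nightly at 03:00, targeting extents smaller than 32MiB.
0 3 * * * root btrfs filesystem defragment -r -t 32M /
```

Note that on kernels without snapshot-aware defrag, defragmenting can unshare extents with snapshots, so either option should be weighed against snapshot usage.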
First
Any comment?
At 11/22/2016 04:38 PM, Qu Wenruo wrote:
Despite the scrub test cases in fstests, there is not even one test case
that really checks whether scrub can recover data.
In fact, btrfs scrub for RAID56 will even corrupt correct data stripes.
So let's start from the needed facilities and
2016-10-19 14:16 GMT+03:00 Dāvis Mosāns :
>
> Basically, on a multi-disk btrfs filesystem a few sectors on one HDD
> became unreadable, and when trying to delete one folder on that
> filesystem I got this in the log
>
[...]
dmesg now on 4.8.11
[ 3825.603883] WARNING: CPU: 5 PID: 15736
The newly introduced qgroup reserved space trace points are normally
nested inside several common qgroup operations.
Some other trace points, however, are not well placed to cooperate with
them, causing confusing output.
This patch re-arranges trace_btrfs_qgroup_release_data() and
Goldwyn Rodrigues has exposed and fixed a bug which underflows btrfs
qgroup reserved space and leads to a non-writable fs.
This reminds us that we don't have enough underflow checks for qgroup
reserved space.
For the underflow case, we should not actually underflow the numbers but
warn and keep the qgroup
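The clamp-and-warn idea can be sketched like this. The function name and message are illustrative, not the kernel code; the point is that the counter saturates at zero instead of wrapping around to a huge unsigned value:

```c
#include <stdio.h>

/*
 * Illustrative underflow guard: instead of letting the reserved-space
 * counter wrap around on underflow, warn and clamp it to zero so the
 * filesystem stays writable.
 */
static unsigned long long qgroup_sub_reserved(unsigned long long reserved,
					      unsigned long long num_bytes)
{
	if (num_bytes > reserved) {
		fprintf(stderr,
			"warning: qgroup reserved space underflow (%llu > %llu)\n",
			num_bytes, reserved);
		return 0;	/* clamp instead of wrapping around */
	}
	return reserved - num_bytes;
}
```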
Introduce the following trace points:
qgroup_update_reserve
qgroup_meta_reserve
These trace points are handy to trace qgroup reserve space related
problems.
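Assuming the new events land under the usual btrfs group in tracefs (the exact paths and event names depend on the kernel and this patch's naming), they could be enabled like this:

```shell
# Enable the new qgroup trace points via tracefs (requires root).
echo 1 > /sys/kernel/debug/tracing/events/btrfs/qgroup_update_reserve/enable
echo 1 > /sys/kernel/debug/tracing/events/btrfs/qgroup_meta_reserve/enable
# Watch the reserve/release activity live.
cat /sys/kernel/debug/tracing/trace_pipe
```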
Signed-off-by: Qu Wenruo
---
v2:
None
v3:
Separate from trace point timing modification patch.
v4:
Change
At 11/27/2016 07:16 AM, Goffredo Baroncelli wrote:
On 2016-11-26 19:54, Zygo Blaxell wrote:
On Sat, Nov 26, 2016 at 02:12:56PM +0100, Goffredo Baroncelli wrote:
On 2016-11-25 05:31, Zygo Blaxell wrote:
[...]
BTW Btrfs in RAID1 mode corrects the data even in the read case. So
Have you
On Sat 2016-11-26 (11:27), Kai Krakow wrote:
> > I have vmware and virtualbox VMs on btrfs SSD.
> As a side note: I don't think you can use "nodatacow" just for one
> subvolume while the other subvolumes of the same btrfs are mounted
> differently. The wiki is just wrong here.
>
> The list of
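The usual per-directory alternative to a per-subvolume mount option is setting the C (NOCOW) file attribute, as `chattr +C` does; files created inside a directory with the flag inherit it. A hedged sketch of that via the standard ioctl (names and error handling are illustrative):

```c
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/fs.h>
#include <unistd.h>

/*
 * Set the NOCOW attribute (equivalent to chattr +C) on a file or
 * directory.  On a directory, newly created files inherit the flag.
 * Returns 0 on success, -1 on failure (e.g. filesystems that do not
 * support the flag, or non-empty files on btrfs).
 */
static int set_nocow(const char *path)
{
	int fd, flags, ret = -1;

	fd = open(path, O_RDONLY);
	if (fd < 0)
		return -1;
	if (ioctl(fd, FS_IOC_GETFLAGS, &flags) == 0) {
		flags |= FS_NOCOW_FL;
		if (ioctl(fd, FS_IOC_SETFLAGS, &flags) == 0)
			ret = 0;
	}
	close(fd);
	return ret;
}
```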
At 11/26/2016 02:26 AM, David Sterba wrote:
Hi,
I have comments regarding the code organization, not really the raid56
functionality itself.
Thanks for the comment.
On Fri, Oct 28, 2016 at 10:31:36AM +0800, Qu Wenruo wrote:
For anyone who wants to try it, it can be fetched from my repo:
I have reinstalled my system from scratch (for remote receiving of the
snapshot).
It now uses the 4.8.8-100.fc23.x86_64 kernel, and I have a separate
partition for '''possibly dangerous''' tests.
I do not remember the exact sequence that caused the corruption of the file
system, but now it
On Sun, Nov 27, 2016 at 12:16:34AM +0100, Goffredo Baroncelli wrote:
> On 2016-11-26 19:54, Zygo Blaxell wrote:
> > On Sat, Nov 26, 2016 at 02:12:56PM +0100, Goffredo Baroncelli wrote:
> >> On 2016-11-25 05:31, Zygo Blaxell wrote:
> [...]
> >>
> >> BTW Btrfs in RAID1 mode corrects the data even in