On 2019/6/22 11:11 PM, Andrei Borzenkov wrote:
[snip]
>
> 10:/mnt # dd if=/dev/urandom of=test/file bs=1M count=100 seek=0
> conv=notrunc
> 100+0 records in
> 100+0 records out
> 104857600 bytes (105 MB, 100 MiB) copied, 0.685532 s, 153 MB/s
> 10:/mnt # sync
> 10:/mnt # btrfs qgroup show .
> qgro
On 2019/6/23 3:55 PM, Qu Wenruo wrote:
>
>
> On 2019/6/22 11:11 PM, Andrei Borzenkov wrote:
> [snip]
>>
>> 10:/mnt # dd if=/dev/urandom of=test/file bs=1M count=100 seek=0
>> conv=notrunc
>> 100+0 records in
>> 100+0 records out
>> 104857600 bytes (105 MB, 100 MiB) copied, 0.685532 s, 153 MB/s
>>
On 23.06.2019 11:08, Qu Wenruo wrote:
>
>
> On 2019/6/23 3:55 PM, Qu Wenruo wrote:
>>
>>
>> On 2019/6/22 11:11 PM, Andrei Borzenkov wrote:
>> [snip]
>>>
>>> 10:/mnt # dd if=/dev/urandom of=test/file bs=1M count=100 seek=0
>>> conv=notrunc
>>> 100+0 records in
>>> 100+0 records out
>>> 104857600 bytes
From: Su Yue
Since commit 8dd3e5dc2df5
("btrfs-progs: tests: fix misc-tests/029 to run on NFS") added NFS
compatibility, it calls run_mayfail() at the end of the test.
However, run_mayfail() always returns the original exit code. If the test
case is not running on NFS, the last `run_mayf
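The exit-code propagation described above can be sketched as follows. This is a minimal stand-in for the real run_mayfail() helper in btrfs-progs' test suite, assuming only the behavior named in the text: the command is allowed to fail, but its original exit status is still returned to the caller.

```shell
#!/bin/sh
# Sketch of a run_mayfail-style helper: run the command, tolerate
# failure, but propagate the command's original exit code.
run_mayfail() {
	"$@"
	ret=$?
	if [ "$ret" -ne 0 ]; then
		echo "failed (ignored): $* (exit code $ret)" >&2
	fi
	return "$ret"
}

run_mayfail true
echo "after true: $?"
run_mayfail false
echo "after false: $?"
```

If such a helper is the last command in a test script, the whole test inherits that non-zero status even though the failure was meant to be tolerated, which matches the problem the patch discusses.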
On 2019/6/23 6:15 PM, Andrei Borzenkov wrote:
[snip]
>> If the last command reports qgroup mismatch, then it means qgroup is
>> indeed incorrect.
>>
>
> no error reported.
Then it's not a bug, and it should be explained by btrfs extent booking behavior.
> 10:/home/bor # btrfs ins dump-tree -t 258 /de
Hi all,
I have a ReadyNAS device with 4 4TB disks. It was working all right
for a couple of years. At one point the system became read-only, and
after a reboot the data is inaccessible.
Can anyone give some advice on how to recover data from the file system?
System details are:
root@Dyskietka:~# uname -a
Linu
On 23.06.2019 14:29, Qu Wenruo wrote:
>
>
> BTW, so many fragmented extents, this normally means your system has
> very high memory pressure or lack of memory, or lack of on-disk space.
It is a 1GiB QEMU VM running vanilla Tumbleweed with the GNOME desktop; nothing
runs except the user GNOME session. Does it fi
On 2019/6/23 9:42 PM, Andrei Borzenkov wrote:
> On 23.06.2019 14:29, Qu Wenruo wrote:
>>
>>
>> BTW, so many fragmented extents, this normally means your system has
>> very high memory pressure or lack of memory, or lack of on-disk space.
>
> It is a 1GiB QEMU VM with vanilla Tumbleweed with GNOME desk
On Tue, Apr 23, 2019 at 07:06:51PM -0400, Zygo Blaxell wrote:
> I had a test filesystem that ran out of unallocated space, then ran
> out of metadata space during a snapshot delete, and forced readonly.
> The workload before the failure was a lot of rsync and bees dedupe
> combined with random snap
Greetings!
When using btrfs with multiple devices in a "single" mode, is it
possible to force some files and directories onto one drive and some to
the other? Or at least specify "single" mode on a specific device for
some directories and "DUP" for some others.
The following scenario, if it is po
On Thu, Jun 20, 2019 at 01:00:50PM +0800, Qu Wenruo wrote:
> On 2019/6/20 7:45 AM, Zygo Blaxell wrote:
> > On Sun, Jun 16, 2019 at 12:05:21AM +0200, Claudius Winkel wrote:
> >> What should I do now ... to use btrfs safely? Should I not use it with
> >> DM-crypt
> >
> > You might need to disable wri
On 2019/6/24 4:45 AM, Zygo Blaxell wrote:
> On Thu, Jun 20, 2019 at 01:00:50PM +0800, Qu Wenruo wrote:
>> On 2019/6/20 7:45 AM, Zygo Blaxell wrote:
>>> On Sun, Jun 16, 2019 at 12:05:21AM +0200, Claudius Winkel wrote:
What should I do now ... to use btrfs safely? Should I not use it with
D
On 2019-06-23 4:45 p.m., Zygo Blaxell wrote:
> Model Family:     Western Digital Green
> Device Model:     WDC WD20EZRX-00DC0B0
> Firmware Version: 80.00A80
>
> Change the query to 1-30 power cycles, and we get another model with
> the same firmware version string:
>
> Model Family: Western D
On Mon, Jun 24, 2019 at 08:46:06AM +0800, Qu Wenruo wrote:
> On 2019/6/24 4:45 AM, Zygo Blaxell wrote:
> > On Thu, Jun 20, 2019 at 01:00:50PM +0800, Qu Wenruo wrote:
>> On 2019/6/20 7:45 AM, Zygo Blaxell wrote:
[...]
> So the worst-case scenario really happens in the real world: badly implemented
> flush/fu
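For context on why flush behavior matters here: btrfs' CoW ordering depends on the drive actually persisting data when the kernel issues a cache flush, and the application-level analogue of that flush is fsync. A minimal, self-contained illustration (the temporary file is just for the example) uses dd's conv=fsync, which forces an fsync before dd exits:

```shell
#!/bin/sh
# Write 4 KiB and fsync before exiting. Once dd returns, the data is
# supposed to be durable; a drive that acknowledges a cache flush
# without actually persisting the data breaks this assumption, and
# with it the write-ordering guarantees CoW filesystems rely on.
tmpfile=$(mktemp)
dd if=/dev/zero of="$tmpfile" bs=4k count=1 conv=fsync 2>/dev/null
wc -c < "$tmpfile"
rm -f "$tmpfile"
```

The kernel can only request the flush; whether the platters (rather than the drive's volatile cache) hold the data when the command completes is up to the firmware, which is the point being made about firmware quality above.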
On Sun, Jun 23, 2019 at 10:45:50PM -0400, Remi Gauvin wrote:
> On 2019-06-23 4:45 p.m., Zygo Blaxell wrote:
>
> > Model Family:     Western Digital Green
> > Device Model:     WDC WD20EZRX-00DC0B0
> > Firmware Version: 80.00A80
> >
> > Change the query to 1-30 power cycles, and we get another model with
On Mon, Jun 24, 2019 at 12:37:51AM -0400, Zygo Blaxell wrote:
> On Sun, Jun 23, 2019 at 10:45:50PM -0400, Remi Gauvin wrote:
> > On 2019-06-23 4:45 p.m., Zygo Blaxell wrote:
> >
> > > Model Family:     Western Digital Green
> > > Device Model:     WDC WD20EZRX-00DC0B0
> > > Firmware Version: 80.00A80
> > >
On 2019/6/24 12:29 PM, Zygo Blaxell wrote:
[...]
>
>> Btrfs relies more on the hardware to implement barrier/flush properly,
>> or CoW can be easily ruined.
>> If the firmware is only tested (if tested) against such fs, it may be
>> the problem of the vendor.
> [...]
>>> WD Green and Black are l