On Sat, Apr 17, 2021 at 4:03 PM Florian Franzeck wrote:
>
> Dear users,
>
> I need help to recover from a btrfs error after a power cut
>
> btrfs-progs v5.4.1
>
> Linux banana 5.4.0-72-generic #80-Ubuntu SMP Mon Apr 12 17:35:00 UTC
> 2021 x86_64 x86_64 x86_64
`btrfs rescue chunk-recover`
Unfortunately I'm not well versed enough in btrfs to know precisely what
that would do, whether it would help at all, or whether it would just make
things worse. I also don't know how to use that less corrupted DUPlicated
version of the FS tree to get the disk mountable again, w
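For anyone weighing the same call, a minimal sketch of a cautious ordering, assuming the filesystem is on /dev/md1 as in the dmesg output below (device and mount point are illustrative). The read-only steps cost nothing; chunk-recover rewrites the chunk tree, so it should come last, ideally after imaging the device:

    # Read-only diagnostics first; neither of these writes to the disk.
    btrfs check --readonly /dev/md1
    mount -o ro,usebackuproot /dev/md1 /mnt

    # Last resort: rebuild the chunk tree by scanning the whole device.
    # This DOES write to the device, so image it first if at all possible.
    btrfs rescue chunk-recover -v /dev/md1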
Dear users,
I need help to recover from a btrfs error after a power cut
btrfs-progs v5.4.1
Linux banana 5.4.0-72-generic #80-Ubuntu SMP Mon Apr 12 17:35:00 UTC
2021 x86_64 x86_64 x86_64 GNU/Linux
dmesg output:
[ 30.330824] BTRFS info (device md1): disk space caching is enabled
> Sent: Monday, 29 March 2021 at 13:36
> From: "Josef Bacik"
> To: "B A" , "Chris Murphy" ,
> "btrfs kernel mailing list"
> Cc: "Qu Wenruo"
> Subject: Re: Aw: Re: Re: Help needed with filesystem errors: parent tr
Hi Chris,
> Sent: Tuesday, 30 March 2021 at 20:17
> From: "Chris Murphy"
>
> On Tue, Mar 30, 2021 at 2:44 AM B A wrote:
> >
> > > Sent: Tuesday, 30 March 2021 at 00:07
> > > From: "Chris Murphy"
> > > […]
> > > EVO or PRO? And what does its /proc/mounts line look like?
> >
> > M
other disks are failing at that spot (given it's
> > > raid6), correct?
> >
> > That's the general idea.
> >
> > > If so I would be comfortable giving that a shot. I do
> > > expect that while doing a replace and reading the same LBA from the
On Tue, Mar 30, 2021 at 2:44 AM B A wrote:
>
>
> > Sent: Tuesday, 30 March 2021 at 00:07
> > From: "Chris Murphy"
> > To: "B A"
> > Cc: "Btrfs BTRFS"
> > Subject: Re: Help needed with filesystem errors: parent tran
> > other disks are failing at that spot (given it's
> > raid6), correct?
>
> That's the general idea.
>
> > If so I would be comfortable giving that a shot. I do
> > expect that while doing a replace and reading the same LBA from the
> > disk, it will just crash again and ruin my replace.
>
&
On Tue, Mar 30, 2021 at 03:01:57PM +0200, Bas Hulsken wrote:
> I followed your advice, Zygo and Chris, and did both:
> 1) smartctl -l scterc,70,70 /dev/sdX for all 4 drives in the array (the
> drives do support this)
> 2) echo 180 > /sys/block/sdX/device/timeout for all 4 drives
>
> with that I at
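A scripted form of those two steps, as a sketch; the loop over sda through sdd stands in for the four array members and must be adjusted to the real device names:

    # SCT ERC is set in tenths of a second, so 70 = 7 seconds; the 180 s
    # kernel timeout leaves the drive, not the kernel, to report failures.
    for dev in a b c d; do
        smartctl -l scterc,70,70 /dev/sd$dev
        echo 180 > /sys/block/sd$dev/device/timeout
    done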
> Sent: Tuesday, 30 March 2021 at 00:07
> From: "Chris Murphy"
> To: "B A"
> Cc: "Btrfs BTRFS"
> Subject: Re: Help needed with filesystem errors: parent transid verify failed
>
> On Sun, Mar 28, 2021 at 9:41 AM B A wrote:
>
On Sun, Mar 28, 2021 at 9:41 AM B A wrote:
>
> * Samsung 840 series SSD (SMART data looks fine)
EVO or PRO? And what does its /proc/mounts line look like?
Total_LBAs_Written?
--
Chris Murphy
On Mon, Mar 29, 2021 at 02:03:06PM +0200, Bas Hulsken wrote:
> Dear list,
> due to a disk intermittently failing in my 4 disk array, I'm getting
> "transid verify failed" errors on my btrfs filesystem (see attached
> dmesg | grep -i btrfs dump in btrfs_dmesg.txt).
Scary! But in this case, it lo
> When I run a scrub, the
> bad disk (/dev/sdd) becomes unresponsive, so I'm hesitant to try that
> again (happened 3 times now, and was the root cause of the transid
> verify failed errors possibly, at least they did not show up earlier
> than the failed scrub).
Is the dmesg filtered? An
On 3/29/21 4:42 AM, B A wrote:
Sent: Monday, 29 March 2021 at 08:09
From: "Chris Murphy"
To: "B A" , "Btrfs BTRFS"
Cc: "Qu Wenruo" , "Josef Bacik"
Subject: Re: Re: Help needed with filesystem errors: parent transid verify
failed
[…]
Dear list,
due to a disk intermittently failing in my 4 disk array, I'm getting "transid
verify failed" errors on my btrfs filesystem (see attached dmesg | grep -i
btrfs dump in btrfs_dmesg.txt). When I run a scrub, the bad disk (/dev/sdd)
becomes unresponsive, so I'm hesitant to try that
again
> Sent: Monday, 29 March 2021 at 08:09
> From: "Chris Murphy"
> To: "B A" , "Btrfs BTRFS"
> Cc: "Qu Wenruo" , "Josef Bacik"
> Subject: Re: Re: Help needed with filesystem errors: parent transid verify
> failed
>
On Mon, Mar 29, 2021 at 1:34 AM B A wrote:
>
> This is a very old BTRFS filesystem created with Fedora *23* i.e. a linux
> kernel and btrfs-progs around version 4.2. It was probably created 2015-10-31
> with Fedora 23 beta and kernel 4.2.4 or 4.2.5.
>
> I ran `btrfs scrub` about a month ago with
> Sent: Monday, 29 March 2021 at 01:02
> From: "Chris Murphy"
> To: "B A"
> Cc: "Btrfs BTRFS"
> Subject: Re: Help needed with filesystem errors: parent transid verify failed
>
> On Sun, Mar 28, 2021 at 9:41 AM B A wrote:
> >
>
On Sun, Mar 28, 2021 at 7:02 PM Chris Murphy wrote:
>
> Can you post the output from both:
>
> btrfs insp dump-t -b 1144783093760 /dev/dm-0
> btrfs insp dump-t -b 1144881201152 /dev/dm-0
I'm not sure if those dumps will contain filenames, so check them.
It's OK to remove filenames before posting.
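A sketch of how one might capture and sanitize those dumps before posting; the sed pattern is illustrative and relies on dump-tree printing file names after "name:":

    for addr in 1144783093760 1144881201152; do
        btrfs insp dump-t -b $addr /dev/dm-0
    done > dump.txt
    # Blank out any file names before sharing.
    sed -i 's/name: .*/name: <redacted>/' dump.txt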
On Sun, Mar 28, 2021 at 9:41 AM B A wrote:
>
> Dear btrfs experts,
>
>
> On my desktop PC, I have 1 btrfs partition on a single SSD device with 3
> subvolumes (/, /home, /var). Whenever I boot my PC, after logging in to
> GNOME, the btrfs partition is being remounted as ro due to errors. This is
Dear btrfs experts,
On my desktop PC, I have 1 btrfs partition on a single SSD device with 3
subvolumes (/, /home, /var). Whenever I boot my PC, after logging in to GNOME,
the btrfs partition is being remounted as ro due to errors. This is the dmesg
output at that time:
> [ 616.155392] BTRFS
btrfs inspect-internal --help shows some incomplete sentences, as shown
below:
btrfs inspect-internal --help
btrfs inspect-internal min-dev-size [options]
Get the minimum size the device can be shrunk to. The
btrfs inspect-internal dump-tree [options] [ ..]
This
d it with
> unassigned devices.
> And then I realized the following error in my Unraid Cache Devices
>
>
>
> Unfortunately I cannot mount it again.
> Can you please help me?
I don't know anything about unraid. The attached dmesg contains:
[ 3660.395013] BTRFS info (device
mount it again.
Can you please help me?
Thank you in advance
Best Regards
Patrick
Attached is the information requested.
root@JARVIS:/dev# uname -a
Linux JARVIS 4.19.107-Unraid #1 SMP Thu Mar 5 13:55:57 PST 2020 x86_64 Intel(R)
Core(TM) i5-9600K CPU @ 3.70GHz GenuineIntel GNU/Linux
root
File System"
Unfortunately I cannot mount it again.
Can you please help me?
Thank you in advance
Best Regards
Patrick
Attached is the information requested.
root@JARVIS:/dev# uname -a
Linux JARVIS 4.19.107-Unraid #1 SMP Thu Mar 5 13:55:57 PST 2020 x86_64
Intel(R) Core(TM) i5-9600K CPU
On Mon, Jan 4, 2021 at 11:09 AM André Isidro da Silva
wrote:
>
> I'm sure it used to be one, but indeed it seems that a TYPE is missing
> in /dev/sda10; gparted says it's unknown.
> It seems there is no trace of the fs. I'm trying to recall any other
> operations I might have done, but if it was s
I'm sure it used to be one, but indeed it seems that a TYPE is missing
in /dev/sda10; gparted says it's unknown.
It seems there is no trace of the fs. I'm trying to recall any other
operations I might have done, but if it was something else I can't
remember what could have been. I used cfdisk, t
> operations but mount/cp. This partition was my data partition, I thought
> it was safe to use for this process, since I was just copying files from
> it. I do have a backup, but it's old so I'll still lose a lot... help.
First, make no changes, attempt no repairs. Next, save history
files from
it. I do have a backup, but it's old so I'll still lose a lot... help.
I ended up giving up on my system partition after this happened to my
/home. I'm reinstalling on ext4 for the time being, so I should have a
running system to fill in the logs missing from this mail, written from my
phone haha.
Regards,
André
btrfs balance start|status support both the short and long options
-v|--verbose, but failed to show them in the --help output. This patch fixes
the --help text.
Signed-off-by: Anand Jain
---
cmds/balance.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/cmds/balance.c b/cmds/balance.c
>>> backup_chunk_root: 22020096 gen: 156
> >>> level: 1
> >>> backup_extent_root: 340721664 gen: 166
> >>> level: 2
> >>> backup_fs_root: 338608128
ot: 57311232 gen: 167
>>> level: 1
>>> backup_chunk_root: 22020096 gen: 156
>>> level: 1
>>> backup_extent_root: 69419008 gen: 167
>>> level: 2
>>>
backup_num_devices: 1
> >
> > backup 2:
> > backup_tree_root: 340721664 gen: 168
> > level: 1
> > backup_chunk_root: 22020096 gen: 156
> > level: 1
> >
> backup_bytes_used: 243939692544
> backup_num_devices: 1
>
> backup 3:
> backup_tree_root: 57311232 gen: 165
> level: 1
> backup_chunk_root: 22020096 gen: 156
> level: 1
>
> level: 2
> backup_dev_root: 345358336 gen: 168
> level: 1
> backup_csum_root: 353320960 gen: 168
> level: 2
> backup_total_bytes: 375567417344
> backup_bytes_used: 243939
at 11:50
> wrote:
>
>
>
>
> On 2019/9/22 2:34 PM, Felix Koop wrote:
> > Hello,
> >
> > I need help accessing a btrfs-filesystem. When I try to mount the fs, I
> > get the following error:
> >
> > # mount -t btrfs /dev/md/1 /mnt
> > mount: /
On 2019/9/22 2:34 PM, Felix Koop wrote:
> Hello,
>
> I need help accessing a btrfs-filesystem. When I try to mount the fs, I
> get the following error:
>
> # mount -t btrfs /dev/md/1 /mnt
> mount: /mnt: wrong fs type, bad option, bad superblock on /dev/md1,
> missing co
Hello,
I need help accessing a btrfs-filesystem. When I try to mount the fs, I
get the following error:
# mount -t btrfs /dev/md/1 /mnt
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/md1,
missing codepage or helper program, or other error.
When I then try to check the fs, this
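That mount error is the kernel's generic failure text; the btrfs-specific reason lands in the kernel log, so the usual next step, as other threads here do, is something like:

    mount -t btrfs /dev/md/1 /mnt || dmesg | grep -i btrfs | tail -n 20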
On 2019/8/25 2:41 PM, Patrick Dijkgraaf wrote:
> Hi Qu,
>
> At the end of my initial post, I mentioned that I was finally
> able to mount the volume using:
>
> mount -o usebackuproot,ro /dev/sdh2 /mnt/data
>
> The chunk tree and super blocks dumps were taken after that.
>
> Now I noticed
Hi Qu,
At the end of my initial post, I mentioned that I was finally
able to mount the volume using:
mount -o usebackuproot,ro /dev/sdh2 /mnt/data
The chunk tree and super blocks dumps were taken after that.
Now I noticed that I was able to mount the volume without special
options (same k
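For reference, the backup roots that usebackuproot falls back to can be inspected without mounting; a sketch, assuming /dev/sdh2 as in the mount line above:

    # -f prints the full superblock, including the backup_roots array
    # that -o usebackuproot tries in turn.
    btrfs inspect-internal dump-super -f /dev/sdh2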
On 2019/8/24 8:05 PM, Patrick Dijkgraaf wrote:
> Thanks for the quick reply!
> See responses inline.
>
> On Sat, 2019-08-24 at 19:01 +0800, Qu Wenruo wrote:
>> On 2019/8/24 2:48 PM, Patrick Dijkgraaf wrote:
>>> Hi all,
>>>
>>> My server hung this morning, and I had to hard-reset it. I did not
>>>
downloadable here: https://kwek.duckstad.net/tree.txt
> And, have you tried to mount using different devices? If some super
> blocks got corrupted, using a different device to mount may help.
> (With that said, it's better to run dump-super for each device.)
Tried it
devices? If some super
blocks got corrupted, using a different device to mount may help.
(With that said, it's better to run dump-super for each device.)
>
> FS config is shown below:
> [root@cornelis ~]# btrfs fi show
> Label: 'cornelis-btrfs' uuid: ac64351
Hi all,
My server hung this morning, and I had to hard-reset it. I did not
apply any updates. After the reboot, my FS won't mount:
[Sat Aug 24 08:16:31 2019] BTRFS error (device sde2): super_total_bytes
92017957797888 mismatch with fs_devices total_rw_bytes 184035915595776
[Sat Aug 24 08:16:31 20
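Newer btrfs-progs ship a targeted repair for exactly this super_total_bytes vs. total_rw_bytes mismatch; a sketch, assuming the filesystem is unmounted and that your progs version already has the subcommand:

    # Rewrites the per-device size fields in the superblocks so they
    # agree again; point it at any one member device.
    btrfs rescue fix-device-size /dev/sde2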
On Fri, Jul 26, 2019 at 10:56 AM Chris Murphy wrote:
>
> Looks like Fedora have run into this in automated testing, and think
> it's causing OS installations to hang, which would be a release
> blocking bug. Is this really a Btrfs bug, or is it something else that
> Btrfs is merely exposing?
> http
Looks like Fedora have run into this in automated testing, and think
it's causing OS installations to hang, which would be a release
blocking bug. Is this really a Btrfs bug, or is it something else that
Btrfs is merely exposing?
https://bugzilla.redhat.com/show_bug.cgi?id=1733388
I've seen a varia
On Wed, Jul 03, 2019 at 11:12:10PM +0200, Peter Zijlstra wrote:
> On Wed, Jul 03, 2019 at 09:54:06AM -0400, Josef Bacik wrote:
> > Hello,
> >
> > I've been seeing a variation of the following splat recently and I have no
> > earthly idea what it's trying to tell me.
>
> That you have a lock cycle
b0
> ksys_mount+0x7e/0xd0
> __x64_sys_mount+0x21/0x30
> do_syscall_64+0x4a/0x1b0
> entry_SYSCALL_64_after_hwframe+0x49/0xbe
>
> -> #0 (&fs_info->reloc_mutex){+.+.}:
> lock_acquire+0xb0/0x1a0
> __mutex_lock+0x81/0x8f0
>
0x81/0x8f0
btrfs_record_root_in_trans+0x3c/0x70
start_transaction+0xaa/0x510
btrfs_dirty_inode+0x49/0xe0
file_update_time+0xc7/0x110
btrfs_page_mkwrite+0x152/0x4f0
do_page_mkwrite+0x2b/0x70
do_wp_page+0x4b1/0x5e0
__handle_mm_fault+0x6b8/0x10e0
[root@TYOMIX tyomix]# btrfs restore -D /dev/sda3 /dev/null
checksum verify failed on 1048576 found E4E3BDB6 wanted
checksum verify failed on 1048576 found E4E3BDB6 wanted
bad tree block 1048576, bytenr mismatch, want=1048576, have=0
ERROR: cannot read chunk root
Could not open r
Hi Chris,
On Fri, 5 Apr 2019 at 17:45, Chris Murphy wrote:
>
> On Thu, Apr 4, 2019 at 11:51 PM Glenn Trigg wrote:
> >
> > Hi Chris,
> >
> > Thanks for spending the time and energy to help me look into this.
> >
> > btrfs restore isn't very hap
On Thu, Apr 4, 2019 at 11:51 PM Glenn Trigg wrote:
>
> Hi Chris,
>
> Thanks for spending the time and energy to help me look into this.
>
> btrfs restore isn't very happy either, so I guess I'll wait until
> btrfs-progs v5.0 comes out and see if that helps.
>
&
Hi Chris,
Thanks for spending the time and energy to help me look into this.
btrfs restore isn't very happy either, so I guess I'll wait until
btrfs-progs v5.0 comes out and see if that helps.
btrfs restore says...
% ./btrfs restore -v -D /dev/sda1 /data2
This is a dry-run, no files
On Sun, Mar 31, 2019 at 11:48 PM Glenn Trigg wrote:
>
> Hi Chris,
>
> After booting the fedora usb stick (running rc2), I got the same results.
>
> On Mon, 1 Apr 2019 at 08:35, Chris Murphy wrote:
> >
> > On Sat, Mar 30, 2019 at 5:43 PM Glenn Trigg wrote:
> ...
> >
> > I'm confused because "can'
Hi Chris,
After booting the fedora usb stick (running rc2), I got the same results.
On Mon, 1 Apr 2019 at 08:35, Chris Murphy wrote:
>
> On Sat, Mar 30, 2019 at 5:43 PM Glenn Trigg wrote:
...
>
> I'm confused because "can't read superblock" isn't found in fs/btrfs.
> I'm only finding it in fs/g
On Sat, Mar 30, 2019 at 5:43 PM Glenn Trigg wrote:
>
> Hi Chris,
>
> Thanks for replying.
>
> On Fri, 29 Mar 2019 at 13:27, Chris Murphy wrote:
> ...
> > Seem in conflict. I don't really understand how the kernel complains
> > about a bad super and yet user space tools say they're all OK. What
>
> That's usually bad.
>
>
> > Other system information is:
> > % uname -a
> > Linux izen 4.18.0-16-generic #17-Ubuntu SMP Fri Feb 8 00:06:57 UTC
> > 2019 x86_64 x86_64 x86_64 GNU/Linux
>
> It looks like extent tree corruption so I don't think it'll he
On Thu, Mar 28, 2019 at 8:27 PM Chris Murphy wrote:
>> So I suggest 5.0.4, or 4.19.32, or you can be brave and
> download this, image it to a USB stick (dd if=file of=/dev/ bs=1M
> oflag=direct) which of course will erase everything on the stick.
>
> https://kojipkgs.fedoraproject.org/compose/rawhi
168376320 wanted 37601 found 37700
That's usually bad.
> Other system information is:
> % uname -a
> Linux izen 4.18.0-16-generic #17-Ubuntu SMP Fri Feb 8 00:06:57 UTC
> 2019 x86_64 x86_64 x86_64 GNU/Linux
It looks like extent tree corruption so I don't think it'll hel
I wonder why this is not getting any replies?
On Sat, 23 Mar 2019 at 11:45, Glenn Trigg wrote:
>
> Hi,
>
> Since mailing this I have tried using the more recent utils - version
> btrfs-progs v4.20.2.
>
> I still have not had any success in getting the filesystem to a
> mountable state and I have
On 26.03.2019 1:26, berodual_xyz wrote:
> Dear all,
>
> on a large btrfs based filesystem (multi-device raid0 - all devices okay,
> nodatacow,nodatasum...) I experienced severe filesystem corruption, most
> likely due to a hard reset with inflight data.
> The system cannot mount (also not with "ro,
): has skinny extents
> > [33814.361708] BTRFS error (device sdd): parent transid verify failed on
> > 1048576 wanted 60234 found 60230
> > [33814.361764] BTRFS error (device sdd): failed to read chunk root
> > [33814.373140] BTRFS error (device sdd): open_ctree failed
&
BTRFS error (device sdd): failed to read chunk root
> [33814.373140] BTRFS error (device sdd): open_ctree failed
> ##
>
>
> Again, thank you very much for all help!
>
>
>
> Sent with ProtonMail Secure Email.
>
> ‐‐‐ Original Message ‐‐‐
> On M
t;tree dump". Do you mean
btrfs-debug-tree? Or btrfs-image? Or something else? In any case, none
of those are likely to help all that much. The metadata is
corrupted in a way that shouldn't ever happen, and where it's really
hard to work out how to fix it, even with an actual human ex
1048576 wanted 60234 found 60230
[33814.361764] BTRFS error (device sdd): failed to read chunk root
[33814.373140] BTRFS error (device sdd): open_ctree failed
##
Again, thank you very much for all help!
Sent with ProtonMail Secure Email.
‐‐‐ Original Message ‐‐‐
On Monday, March 25, 2019 11
On Mon, Mar 25, 2019 at 10:26:29PM +0000, berodual_xyz wrote:
> Dear all,
>
> on a large btrfs based filesystem (multi-device raid0 - all devices okay,
> nodatacow,nodatasum...)
Ouch. I think the only thing you could have done to make the FS
more fragile is mounting with nobarrier(*). Frankl
the net say to run "btrfs check --init-extent-tree" but I would
like to reach out first.
btrfs progs version is 4.20.2 and kernel is 4.20.17
Thank you for any help! Much appreciated!
Marcel
Sent with ProtonMail Secure Email.
Hi,
Since mailing this I have tried using the more recent utils - version
btrfs-progs v4.20.2.
I still have not had any success in getting the filesystem to a
mountable state and I have now also tried recovering files with btrfs
restore, also with no success. The restore output is:
% ./btrfs res
Hello,
I had some random machine freezing events which I suspected was due to
issues with a raid1 filesystem and kernel module crashes. I attempted
to use the information available to get the filesystem into a good
state where "btrfs check" and "btrfs scrub" would not have any errors,
however I fe
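For completeness, the usual shape of a btrfs restore attempt, as a sketch; the device and destination are placeholders, and the destination must live on a different, healthy filesystem:

    # Dry run: -D lists what would be recovered without writing anything.
    btrfs restore -v -D /dev/sda1 /dev/null

    # Real run: -i continues past errors, -s picks up snapshots too.
    btrfs restore -v -i -s /dev/sda1 /mnt/recovered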
On 2019/2/5 7:45 PM, Moritz M wrote:
>
>
> On 2019-02-04 14:55, Qu Wenruo wrote:
>> On 2019/2/4 8:49 PM, Moritz M wrote:
You're using qgroups, which are known to cause huge performance overhead for
balance.
We have upcoming patches to solve it, but they're not going to mainline
>
On 2019-02-04 14:55, Qu Wenruo wrote:
On 2019/2/4 8:49 PM, Moritz M wrote:
You're using qgroups, which are known to cause huge performance overhead
for balance.
We have upcoming patches to solve it, but they're not going to mainline
before v5.1 kernel.
So please disable qgroups if you're not usin
You're using qgroups, which are known to cause huge performance overhead for
balance.
We have upcoming patches to solve it, but they're not going to mainline
before v5.1 kernel.
So please disable qgroups if you're not using them actively.
Thanks, I was not aware that I had turned it on. Is
btrfs quota disable
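A sketch of checking for and turning off qgroups, per the advice above; the mount point / is an assumption:

    # If this prints a qgroup table, quotas are enabled on this filesystem.
    btrfs qgroup show /
    # Turn them off; balance then skips the qgroup accounting updates.
    btrfs quota disable /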
On 2019/2/4 7:47 PM, Moritz M wrote:
> Hi,
>
> I'm running a Ubuntu server with a btrfs RAID1 consisting of three HDDs.
>
> I do balancing daily via
>
>> btrfs balance start -dusage=50 -dlimit=2 -musage=50 -mlimit=4 /
>
> It usually takes between 1 - 10 minutes.
>
> But today the server was u
Hi,
I'm running a Ubuntu server with a btrfs RAID1 consisting of three HDDs.
I do balancing daily via
btrfs balance start -dusage=50 -dlimit=2 -musage=50 -mlimit=4 /
It usually takes between 1 - 10 minutes.
But today the server was unresponsive (no ssh connect possible, no
direct login via
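When a balance leaves the machine unresponsive, its progress can be checked, and it can be stopped, from another shell; a sketch:

    btrfs balance status -v /   # which block groups are being relocated
    btrfs balance cancel /      # returns once the current block group is done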
So, does anyone have any suggestion on how I might recover some of the
data? If not, I'll cut my losses and create a new array...
Thanks!
--
Groet / Cheers,
Patrick Dijkgraaf
On Tue, 2018-12-04 at 10:58 +0100, Patrick Dijkgraaf wrote:
> Hi, thanks again.
> Please see answers inline.
>
>
Hi Chris,
they're SATA.
smartctl -x gives:
SCT Error Recovery Control command not supported
So it seems like we can't do anything with it.
--
Groet / Cheers,
Patrick Dijkgraaf
On Tue, 2018-12-04 at 12:38 -0700, Chris Murphy wrote:
> On Tue, Dec 4, 2018 at 3:09 AM Patrick Dijkgraaf
> wrote:
Thomas Mohr posted on Thu, 06 Dec 2018 12:31:15 +0100 as excerpted:
> We wanted to convert a file system to a RAID0 with two partitions.
> Unfortunately we had to reboot the server during the balance operation
> before it could complete.
>
> Now following happens:
>
> A mount attempt of the arra
30523392
ERROR: failed to repair root items: Operation not permitted
Any ideas what is going on or how to recover the file system? I would
greatly appreciate your help!!!
best,
Thomas
uname -a:
Linux server2 4.19.5-1-default #1 SMP PREEMPT Tue Nov 27 19:56:09 UTC
2018 (6210279) x86_64
On Tue, Dec 4, 2018 at 3:09 AM Patrick Dijkgraaf
wrote:
>
> Hi Chris,
>
> See the output below. Any suggestions based on it?
If they're SATA drives, they may not support SCT ERC; and if they're
SAS, depending on what controller they're behind, smartctl might need
a hint to properly ask the drive
Hi Chris,
See the output below. Any suggestions based on it?
Thanks!
--
Groet / Cheers,
Patrick Dijkgraaf
On Mon, 2018-12-03 at 20:16 -0700, Chris Murphy wrote:
> Also useful information for autopsy, perhaps not for fixing, is to
> know whether the SCT ERC value for every drive is less than t
> > > > I have been a happy BTRFS user for quite some time. But now I'm
> > > > facing
> > > > a potential ~45TB dataloss... :-(
> > > > I hope someone can help!
> > > >
> > > > I have Server A and Server B. Both having a 20-devices BTRFS
>
Also useful information for autopsy, perhaps not for fixing, is to
know whether the SCT ERC value for every drive is less than the
kernel's SCSI driver block device command timeout value. It's super
important that the drive reports an explicit read failure before the
read command is considered fail
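A sketch of that comparison, assuming SATA drives sda through sdd; SCT ERC is reported in tenths of a second, the kernel timeout in seconds:

    for dev in sda sdb sdc sdd; do
        echo "== $dev =="
        smartctl -l scterc /dev/$dev          # drive's error recovery limit
        cat /sys/block/$dev/device/timeout    # kernel's command timeout
    done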
On 2018/12/3 4:30 AM, Andrei Borzenkov wrote:
> On 02.12.2018 23:14, Patrick Dijkgraaf wrote:
>> I have some additional info.
>>
>> I found the reason the FS got corrupted. It was a single failing drive,
>> which caused the entire cabinet (containing 7 drives) to reset. So the
>> FS suddenly lost 7 d
>> On Sat, 2018-12-01 at 07:57 +0800, Qu Wenruo wrote:
>>> On 2018/11/30 9:53 PM, Patrick Dijkgraaf wrote:
>>>> Hi all,
>>>>
>>>> I have been a happy BTRFS user for quite some time. But now I'm
>>>> facing
>>>> a potential
>> On 2018/11/30 9:53 PM, Patrick Dijkgraaf wrote:
>>> Hi all,
>>>
>>> I have been a happy BTRFS user for quite some time. But now I'm
>>> facing
>>> a potential ~45TB dataloss... :-(
>>> I hope someone can help!
>>>
>>> I have Server A and Server
On 02.12.2018 23:14, Patrick Dijkgraaf wrote:
> I have some additional info.
>
> I found the reason the FS got corrupted. It was a single failing drive,
> which caused the entire cabinet (containing 7 drives) to reset. So the
> FS suddenly lost 7 drives.
>
This remains a mystery for me. btrfs is mark
> On Sat, 2018-12-01 at 07:57 +0800, Qu Wenruo wrote:
> > On 2018/11/30 9:53 PM, Patrick Dijkgraaf wrote:
> > > Hi all,
> > >
> > > I have been a happy BTRFS user for quite some time. But now I'm
> > > facing
> > > a potential ~45TB dataloss... :-(
> > >
> > I have been a happy BTRFS user for quite some time. But now I'm
> > facing
> > a potential ~45TB dataloss... :-(
> > I hope someone can help!
> >
> > I have Server A and Server B. Both having a 20-devices BTRFS RAID6
> > filesystem. Because of known RAID5/6 risks, Server B was a backup
> > of
>
On 2018/11/30 9:53 PM, Patrick Dijkgraaf wrote:
> Hi all,
>
> I have been a happy BTRFS user for quite some time. But now I'm facing
> a potential ~45TB dataloss... :-(
> I hope someone can help!
>
> I have Server A and Server B. Both having a 20-devices BTRFS RAID
Hi all,
I have been a happy BTRFS user for quite some time. But now I'm facing
a potential ~45TB dataloss... :-(
I hope someone can help!
I have Server A and Server B. Both having a 20-devices BTRFS RAID6
filesystem. Because of known RAID5/6 risks, Server B was a backup of
Server A.
Explicitly state that -d requires root privileges.
Also, update some option handling with regard to the -d option.
Signed-off-by: Misono Tomohiro
---
Documentation/btrfs-subvolume.asciidoc | 3 ++-
cmds-subvolume.c | 8
2 files changed, 10 insertions(+), 1 deletion(-)
Currently "sub list -o" lists only child subvolumes of the specified
path. So, update the help message and variable name accordingly.
Signed-off-by: Misono Tomohiro
---
Documentation/btrfs-subvolume.asciidoc | 2 +-
cmds-subvolume.c | 10 +-
2 files
From: Jeff Mahoney
The usage definitions for send and receive follow the command
definitions, which use them. This works because we declare them
in commands.h. When we move to using cmd_struct as the entry point,
these declarations will be removed, breaking the commands. Since
that would be an
int argc, char **argv)
{
-	int has_help = 0;
-	int has_full = 0;
+	bool has_help = false;
+	bool has_full = false;
 	int i;
 	for (i = 0; i < shift; i++) {
 		if (strcmp(argv[i], "--help") == 0)
-			has_
> -	int has_full = 0;
> +	bool has_help = false;
> +	bool has_full = false;
>  	int i;
>
>  	for (i = 0; i < shift; i++) {
>  		if (strcmp(argv[i], "--help") == 0)
> -			has_help = 1;
> +			has_help = true;
>