On Mon, Jun 24, 2019 at 11:31:35AM -0600, Chris Murphy wrote:
> Right. The questions I have: should Btrfs (or any file system) be able
> to detect such devices and still protect the data? i.e. for the file
I have more than 600 industrial machines all around the world.
After a few fs corruptions (ext
On Mon, Jun 24, 2019 at 11:31:35AM -0600, Chris Murphy wrote:
> On Sun, Jun 23, 2019 at 7:52 PM Qu Wenruo wrote:
> >
> >
> >
> > On 2019/6/24 4:45 AM, Zygo Blaxell wrote:
> > > I first observed these correlations back in 2016. We had a lot of WD
> > > Green and Black drives in service at the time-
On Sun, Jun 23, 2019 at 7:52 PM Qu Wenruo wrote:
>
>
>
> On 2019/6/24 4:45 AM, Zygo Blaxell wrote:
> > I first observed these correlations back in 2016. We had a lot of WD
> > Green and Black drives in service at the time--too many to replace or
> > upgrade them all early--so I looked for a workaround
On 2019/6/24 12:29 PM, Zygo Blaxell wrote:
[...]
>
>> Btrfs relies more heavily on the hardware to implement barrier/flush properly,
>> or CoW can be easily ruined.
>> If the firmware is only tested (if tested) against such fs, it may be
>> the problem of the vendor.
> [...]
>>> WD Green and Black are l
On Mon, Jun 24, 2019 at 12:37:51AM -0400, Zygo Blaxell wrote:
> On Sun, Jun 23, 2019 at 10:45:50PM -0400, Remi Gauvin wrote:
> > On 2019-06-23 4:45 p.m., Zygo Blaxell wrote:
> >
> > > Model Family: Western Digital Green Device Model: WDC WD20EZRX-00DC0B0
> > > Firmware Version: 80.00A80
> > >
On Sun, Jun 23, 2019 at 10:45:50PM -0400, Remi Gauvin wrote:
> On 2019-06-23 4:45 p.m., Zygo Blaxell wrote:
>
> > Model Family: Western Digital Green Device Model: WDC WD20EZRX-00DC0B0
> > Firmware Version: 80.00A80
> >
> > Change the query to 1-30 power cycles, and we get another model with
On Mon, Jun 24, 2019 at 08:46:06AM +0800, Qu Wenruo wrote:
> On 2019/6/24 4:45 AM, Zygo Blaxell wrote:
> > On Thu, Jun 20, 2019 at 01:00:50PM +0800, Qu Wenruo wrote:
> >> On 2019/6/20 7:45 AM, Zygo Blaxell wrote:
[...]
> So the worst-case scenario really does happen in the real world: badly implemented
> flush/fua
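For what it's worth, the kernel's view of whether a drive has a volatile write cache, and therefore whether it sends flush/FUA requests to it at all, can be read from sysfs (a sketch; the sdX device name is a placeholder):

    # "write back" means the kernel issues flush/FUA to this device;
    # "write through" means it assumes there is no volatile cache to flush
    cat /sys/block/sdX/queue/write_cache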
On 2019-06-23 4:45 p.m., Zygo Blaxell wrote:
> Model Family: Western Digital Green Device Model: WDC WD20EZRX-00DC0B0
> Firmware Version: 80.00A80
>
> Change the query to 1-30 power cycles, and we get another model with
> the same firmware version string:
>
> Model Family: Western D
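As an aside, the model/firmware strings and power-cycle counts being correlated here are the ones SMART reports; on any single drive they can be read with smartmontools (device name is a placeholder):

    # Identity block: Model Family, Device Model, Firmware Version
    smartctl -i /dev/sdX
    # Attribute table; Power_Cycle_Count is usually attribute 12
    smartctl -A /dev/sdX | grep -i power_cycle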
On 2019/6/24 4:45 AM, Zygo Blaxell wrote:
> On Thu, Jun 20, 2019 at 01:00:50PM +0800, Qu Wenruo wrote:
>> On 2019/6/20 7:45 AM, Zygo Blaxell wrote:
>>> On Sun, Jun 16, 2019 at 12:05:21AM +0200, Claudius Winkel wrote:
What should I do now ... to use btrfs safely? Should I not use it with
DM-crypt?
On Thu, Jun 20, 2019 at 01:00:50PM +0800, Qu Wenruo wrote:
> On 2019/6/20 7:45 AM, Zygo Blaxell wrote:
> > On Sun, Jun 16, 2019 at 12:05:21AM +0200, Claudius Winkel wrote:
> >> What should I do now ... to use btrfs safely? Should I not use it with
> >> DM-crypt
> >
> > You might need to disable write caching
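The advice cut off above presumably refers to the drive's volatile write cache; a minimal sketch of that workaround, assuming an ATA disk at a placeholder device name:

    # Show the current write-cache setting
    hdparm -W /dev/sdX
    # Turn the volatile write cache off (costs write performance; the setting
    # may not survive a power cycle, so it usually belongs in a udev rule or boot script)
    hdparm -W 0 /dev/sdX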
On 2019/6/20 7:45 AM, Zygo Blaxell wrote:
> On Sun, Jun 16, 2019 at 12:05:21AM +0200, Claudius Winkel wrote:
>> Thanks for the help.
>>
>> I got my data back.
>>
>> But now I'm thinking... how did it come to this?
>>
>> Was it LUKS, the dm-crypt layer?
>
> dm-crypt is fine. dm-crypt is not a magical tool
On Sun, Jun 16, 2019 at 12:05:21AM +0200, Claudius Winkel wrote:
> Thanks for the help.
>
> I got my data back.
>
> But now I'm thinking... how did it come to this?
>
> Was it LUKS, the dm-crypt layer?
dm-crypt is fine. dm-crypt is not a magical tool for creating data loss
in Linux storage stacks. I've
Thanks for the help.
I got my data back.
But now I'm thinking... how did it come to this?
Was it LUKS, the dm-crypt layer?
What did I do wrong? An old Ubuntu kernel? Ubuntu 18.04.
What should I do now ... to use btrfs safely? Should I not use it with
DM-crypt?
Or even use ZFS instead...
On 11/06/2019 at
On 2019/6/11 6:53 PM, claud...@winca.de wrote:
> Hi guys,
>
> You are my last resort. I was so happy to use BTRFS, but now I really hate
> it.
>
>
> Linux CIA 4.15.0-51-generic #55-Ubuntu SMP Wed May 15 14:27:21 UTC 2019
> x86_64 x86_64 x86_64 GNU/Linux
> btrfs-progs v4.15.1
So old kernel and o
Hi guys,
You are my last resort. I was so happy to use BTRFS, but now I really hate
it.
Linux CIA 4.15.0-51-generic #55-Ubuntu SMP Wed May 15 14:27:21 UTC 2019
x86_64 x86_64 x86_64 GNU/Linux
btrfs-progs v4.15.1
btrfs fi show
Label: none uuid: 9622fd5c-5f7a-4e72-8efa-3d56a462ba85
To
Austin S. Hemmelgarn posted on Tue, 31 Jan 2017 07:45:42 -0500 as
excerpted:
>> There's actually a btrfs-undelete script on github that turns the
>> otherwise multiple manual steps into a nice, smooth, undelete
>> operation. Or at least it's supposed to. I've never actually used it,
>> tho I have
Michael,
That's great news. Well done. ext4 works just fine for most cases.
If you wish to experiment I might suggest more work on your part (just
what you need, right?) by using btrfs for smaller file systems
(perhaps just root, maybe /var, /bin etc.) but try installing zfs for
large file systems
Thank you all for your help.
Magically, btrfs-find-root worked today. (I attached the steps at the end)
I don't think I changed anything. The btrfs-progs version is still 4.1
because I tried different tagged versions (starting from 4.9) from the
cloned git repo.
The btrfs-find-root on the working
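For context, the usual pairing of btrfs-find-root with restore looks roughly like this; the device, bytenr, and target directory are placeholders:

    # List candidate tree roots (bytenr + generation) on the damaged filesystem
    btrfs-find-root /dev/sdXn
    # Pull files out read-only, using one of the reported tree root byte numbers
    btrfs restore -v -t <bytenr> /dev/sdXn /mnt/rescue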
On 2017-01-30 23:58, Duncan wrote:
Oliver Freyermuth posted on Sat, 28 Jan 2017 17:46:24 +0100 as excerpted:
Just don't count on restore to save your ***; always treat whatever it
manages to bring back as a pleasant surprise. That way, having it fail
won't be a downside, while having it work, if
On 30/01/17 22:37, Michael Born wrote:
> Also, I'm not interested in restoring the old Suse 13.2 system. I just
> want some configuration files from it.
If all you really want is to get some important information from some
specific config files, and it is so important it is worth an hour or so
of
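A sketch of that targeted approach with btrfs restore, assuming the dd image can be passed to restore directly (otherwise attach it with losetup first); note that --path-regex has to match every parent directory along the path:

    # Extract just /etc/fstab from the image without ever mounting it
    btrfs restore -v --path-regex '^/(|etc(|/fstab))$' sysimage.img /tmp/rescued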
Oliver Freyermuth posted on Sat, 28 Jan 2017 17:46:24 +0100 as excerpted:
>> Just don't count on restore to save your ***; always treat whatever it
>> manages to bring back as a pleasant surprise. That way, having it fail
>> won't be a downside, while having it work, if it does, will always be
>> an upside.
Michael Born posted on Mon, 30 Jan 2017 22:07:00 +0100 as excerpted:
> On 30.01.2017 at 21:51, Chris Murphy wrote:
>> On Mon, Jan 30, 2017 at 1:02 PM, Michael Born
>> wrote:
>>> Hi btrfs experts.
>>>
>>> Hereby I apply for the stupidity of the month award.
>>
>> There's still another day :-D
>>
Hello, Michael,
Yes, you would certainly run the risk of doing more damage with dd, so
if you have an alternative, use that, and avoid dd. If nothing else
works and you need the files, you might try it as a last resort.
My guess (and it is only a guess) is that if the image is close to the
same
Hi Gordon,
I'm quite sure this is not a good idea.
I do understand that dd-ing a running system will miss some changes
done to the file system while copying. I'm surprised that I didn't end
up with some corrupted files, but with no files at all.
Also, I'm not interested in restoring the old Suse
<<
Hi btrfs experts.
Hereby I apply for the stupidity of the month award.
>>
I have no doubt that I will mount a serious challenge to you for
that title, so you haven't won yet.
Why not dd the image back onto the original partition (or another
partition identical in size) and see if that is
On 30.01.2017 at 22:20, Chris Murphy wrote:
> On Mon, Jan 30, 2017 at 2:07 PM, Michael Born wrote:
>> The files I'm interested in (fstab, NetworkManager.conf, ...) didn't
>> change for months. Why would they change in the moment I copy their
>> blocks with dd?
>
> They didn't change. The file system
On Mon, Jan 30, 2017 at 2:20 PM, Chris Murphy wrote:
> What people do with huge databases, which have this same problem,
> they'll take a volume snapshot. This first commits everything in
> flight, freezes the fs so no more changes can happen, then takes a
> snapshot, then unfreezes the original
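A minimal sketch of that freeze-then-image sequence using fsfreeze rather than a volume snapshot; mount point, device, and output path are placeholders, and the output file must live on a different, unfrozen filesystem:

    # Flush everything in flight and block further writes
    fsfreeze -f /mnt/data
    # Image the now-quiescent partition
    dd if=/dev/sdXn of=/media/external/data.img bs=1M status=progress
    # Allow writes again
    fsfreeze -u /mnt/data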
On Mon, Jan 30, 2017 at 2:07 PM, Michael Born wrote:
>
>
> On 30.01.2017 at 21:51, Chris Murphy wrote:
>> On Mon, Jan 30, 2017 at 1:02 PM, Michael Born
>> wrote:
>>> Hi btrfs experts.
>>>
>>> Hereby I apply for the stupidity of the month award.
>>
>> There's still another day :-D
>>
>>
>>
>>>
>
On 01/30/2017 10:07 PM, Michael Born wrote:
>
>
> On 30.01.2017 at 21:51, Chris Murphy wrote:
>> On Mon, Jan 30, 2017 at 1:02 PM, Michael Born
>> wrote:
>>> Hi btrfs experts.
>>>
>>> Hereby I apply for the stupidity of the month award.
>>
>> There's still another day :-D
>>
>>
>>
>>>
>>> Before
On 30.01.2017 at 21:51, Chris Murphy wrote:
> On Mon, Jan 30, 2017 at 1:02 PM, Michael Born wrote:
>> Hi btrfs experts.
>>
>> Hereby I apply for the stupidity of the month award.
>
> There's still another day :-D
>
>
>
>>
>> Before switching from Suse 13.2 to 42.2, I copied my / partition with dd
On Mon, Jan 30, 2017 at 1:02 PM, Michael Born wrote:
> Hi btrfs experts.
>
> Hereby I apply for the stupidity of the month award.
There's still another day :-D
>
> Before switching from Suse 13.2 to 42.2, I copied my / partition with dd
> to an image file - while the system was online/running.
On 01/30/2017 09:02 PM, Michael Born wrote:
> Hi btrfs experts.
>
> Hereby I apply for the stupidity of the month award.
> But maybe you can help me restore my dd backup or extract some
> files from it?
>
> Before switching from Suse 13.2 to 42.2, I copied my / partition with dd
> to an image
Hi btrfs experts.
Hereby I apply for the stupidity of the month award.
But maybe you can help me restore my dd backup or extract some
files from it?
Before switching from Suse 13.2 to 42.2, I copied my / partition with dd
to an image file - while the system was online/running.
Now, I can't
On 2017-01-28 00:00, Duncan wrote:
Austin S. Hemmelgarn posted on Fri, 27 Jan 2017 07:58:20 -0500 as
excerpted:
On 2017-01-27 06:01, Oliver Freyermuth wrote:
I'm also running 'memtester 12G' right now, which at least tests 2/3
of the memory. I'll leave that running for a day or so, but of course
On 01/29/2017 08:52 PM, Oliver Freyermuth wrote:
> On 29.01.2017 at 20:28, Hans van Kranenburg wrote:
>> On 01/29/2017 08:09 PM, Oliver Freyermuth wrote:
[..whaaa.. text.. see previous message..]
>>> Wow - this nice python toolset really makes it easy, bigmomma holding your
>>> hands ;-) .
On 29.01.2017 at 20:28, Hans van Kranenburg wrote:
> On 01/29/2017 08:09 PM, Oliver Freyermuth wrote:
>>> [..whaaa.. text.. see previous message..]
>> Wow - this nice python toolset really makes it easy, bigmomma holding your
>> hands ;-) .
>>
>> Indeed, I get exactly the same output you did show
On 01/29/2017 08:09 PM, Oliver Freyermuth wrote:
>> [..whaaa.. text.. see previous message..]
> Wow - this nice python toolset really makes it easy, bigmomma holding your
> hands ;-) .
>
> Indeed, I get exactly the same output you did show in your example, which
> almost matches my manual changes
On 29.01.2017 at 17:44, Hans van Kranenburg wrote:
> On 01/29/2017 03:02 AM, Oliver Freyermuth wrote:
>> On 28.01.2017 at 23:27, Hans van Kranenburg wrote:
>>> On 01/28/2017 10:04 PM, Oliver Freyermuth wrote:
On 26.01.2017 at 12:01, Oliver Freyermuth wrote:
> On 26.01.2017 at 11:00, Hugo Mills wrote:
On 01/29/2017 03:02 AM, Oliver Freyermuth wrote:
> On 28.01.2017 at 23:27, Hans van Kranenburg wrote:
>> On 01/28/2017 10:04 PM, Oliver Freyermuth wrote:
>>> On 26.01.2017 at 12:01, Oliver Freyermuth wrote:
On 26.01.2017 at 11:00, Hugo Mills wrote:
>We can probably talk you through
On 28.01.2017 at 23:27, Hans van Kranenburg wrote:
> On 01/28/2017 10:04 PM, Oliver Freyermuth wrote:
>> On 26.01.2017 at 12:01, Oliver Freyermuth wrote:
>>> On 26.01.2017 at 11:00, Hugo Mills wrote:
We can probably talk you through fixing this by hand with a decent
hex editor. I've
On 01/28/2017 10:04 PM, Oliver Freyermuth wrote:
> On 26.01.2017 at 12:01, Oliver Freyermuth wrote:
>> On 26.01.2017 at 11:00, Hugo Mills wrote:
>>>We can probably talk you through fixing this by hand with a decent
>>> hex editor. I've done it before...
>>>
>> That would be nice! Is it fine via the mailing list?
On 26.01.2017 at 12:01, Oliver Freyermuth wrote:
> On 26.01.2017 at 11:00, Hugo Mills wrote:
>>We can probably talk you through fixing this by hand with a decent
>> hex editor. I've done it before...
>>
> That would be nice! Is it fine via the mailing list?
> Potentially, the instructions could
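Before any hex editing, the logical block number from the mount error has to be turned into a physical device offset; a sketch of extracting the node for inspection, reusing the numbers from this thread and assuming the default 16 KiB nodesize:

    # Map the btrfs logical address to physical offset(s) on the device
    btrfs-map-logical -l 35028992 /dev/sdb1
    # Copy the metadata node out at the reported physical offset for editing
    dd if=/dev/sdb1 of=node.bin bs=1 skip=<physical_offset> count=16384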
Hi Duncan,
thanks for your extensive reply!
On 28.01.2017 at 06:00, Duncan wrote:
> All three options apparently default to 64K (as that's what I see here
> and I don't believe I've changed them), but can be changed. See the
> kernel options help and where it points for more.
>
Indeed, I h
On 28.01.2017 at 13:37, Janos Toth F. wrote:
> I usually compile my kernels with CONFIG_X86_RESERVE_LOW=640 and
> CONFIG_X86_CHECK_BIOS_CORRUPTION=N because 640 kilobytes seems like a
> very cheap price to pay in order to avoid worrying about this (and
> to skip the associated checking + monitoring).
I usually compile my kernels with CONFIG_X86_RESERVE_LOW=640 and
CONFIG_X86_CHECK_BIOS_CORRUPTION=N because 640 kilobytes seems like a
very cheap price to pay in order to avoid worrying about this (and
to skip the associated checking + monitoring).
Out of curiosity (after reading this email) I set the
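For anyone wanting to compare, the two options can be checked against the running kernel's build config (assuming the distro installs /boot/config-*):

    # Inspect the low-memory reservation and BIOS corruption scanning settings
    grep -E 'CONFIG_X86_RESERVE_LOW|CONFIG_X86_CHECK_BIOS_CORRUPTION' /boot/config-"$(uname -r)"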
Austin S. Hemmelgarn posted on Fri, 27 Jan 2017 07:58:20 -0500 as
excerpted:
> On 2017-01-27 06:01, Oliver Freyermuth wrote:
>>> I'm also running 'memtester 12G' right now, which at least tests 2/3
>>> of the memory. I'll leave that running for a day or so, but of course
>>> it will not provide a
On 2017-01-27 06:01, Oliver Freyermuth wrote:
I'm also running 'memtester 12G' right now, which at least tests 2/3 of the
memory. I'll leave that running for a day or so, but of course it will not
provide a clear answer...
A small update: while the online memtester is still showing no errors
> I'm also running 'memtester 12G' right now, which at least tests 2/3 of the
> memory. I'll leave that running for a day or so, but of course it will not
> provide a clear answer...
A small update: while the online memtester is still showing no errors, I
checked old syslogs from the machine
>It's on line 248 of the paste:
>
> 246. key (5547032576 EXTENT_ITEM 204800) block 596426752 (36403) gen 20441
> 247. key (5561905152 EXTENT_ITEM 184320) block 596443136 (36404) gen 20441
> 248. key (15606380089319694336 UNKNOWN.76 303104) block 596459520 (36405)
> gen 20441
> 249. ke
On Thu, Jan 26, 2017 at 10:36:55AM +0100, Oliver Freyermuth wrote:
> Hi and thanks for the quick reply!
>
> Am 26.01.2017 um 10:25 schrieb Hugo Mills:
> >Can you post the output of "btrfs-debug-tree -b 35028992
> > /dev/sdb1", specifically the 5 or so entries around item 243. It is
> > quite
Hi and thanks for the quick reply!
Am 26.01.2017 um 10:25 schrieb Hugo Mills:
>Can you post the output of "btrfs-debug-tree -b 35028992
> /dev/sdb1", specifically the 5 or so entries around item 243. It is
> quite likely that you have bad RAM, and the output will help confirm
> that.
>
Since
On Thu, Jan 26, 2017 at 10:18:40AM +0100, Oliver Freyermuth wrote:
> Hi,
>
> I have just encountered on mount of one of my filesystems (after a clean
> reboot...):
> [ 495.303313] BTRFS critical (device sdb1): corrupt node, bad key order:
> block=35028992, root=1, slot=243
> [ 495.315642] BT
Hi,
I have just encountered on mount of one of my filesystems (after a clean
reboot...):
[ 495.303313] BTRFS critical (device sdb1): corrupt node, bad key order:
block=35028992, root=1, slot=243
[ 495.315642] BTRFS critical (device sdb1): corrupt node, bad key order:
block=35028992, root=1,
At 01/23/2017 07:15 PM, Sebastian Gottschall wrote:
Hello again
By the way, the init-extent-tree is still running (now almost 7 days).
Is there any chance to find out how long it will take in the end?
Sebastian
I think it may have hit a dead loop.
If its output doesn't loop (from a large
Hello again
By the way, the init-extent-tree is still running (now almost 7 days).
Is there any chance to find out how long it will take in the end?
Sebastian
On 20.01.2017 at 02:08, Qu Wenruo wrote:
At 01/19/2017 06:06 PM, Sebastian Gottschall wrote:
Hello
I have a question. After a power outage
On 20.01.2017 at 09:05, Duncan wrote:
Sebastian Gottschall posted on Thu, 19 Jan 2017 11:06:19 +0100 as
excerpted:
I have a question. After a power outage my system ended up in an
unrecoverable state using btrfs (kernel 4.9).
Since I'm running --init-extent-tree for 3 days now, I'm asking how
On 20.01.2017 at 02:08, Qu Wenruo wrote:
At 01/19/2017 06:06 PM, Sebastian Gottschall wrote:
Hello
I have a question. After a power outage my system ended up in an
unrecoverable state using btrfs (kernel 4.9).
Since I'm running --init-extent-tree for 3 days now, I'm asking how long
this process
Sebastian Gottschall posted on Thu, 19 Jan 2017 11:06:19 +0100 as
excerpted:
> I have a question. After a power outage my system ended up in an
> unrecoverable state using btrfs (kernel 4.9).
> Since I'm running --init-extent-tree for 3 days now, I'm asking how long
> this process normally takes
At 01/19/2017 06:06 PM, Sebastian Gottschall wrote:
Hello
I have a question. After a power outage my system ended up in an
unrecoverable state using btrfs (kernel 4.9).
Since I'm running --init-extent-tree for 3 days now, I'm asking how long
this process normally takes and why it outputs millions
Hello
I have a question. After a power outage my system ended up in an
unrecoverable state using btrfs (kernel 4.9).
Since I'm running --init-extent-tree for 3 days now, I'm asking how long
this process normally takes and why it outputs millions of lines like
Backref 1562890240 root 262 owne
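For reference, the operation under discussion is invoked roughly as below; teeing the output to a log makes it possible to check whether the same Backref lines keep reappearing, i.e. whether it is looping (device name is a placeholder, and newer btrfs-progs may require an explicit --repair alongside):

    # Rebuild the extent tree from scratch -- a slow, last-resort repair
    btrfs check --init-extent-tree /dev/sdXn 2>&1 | tee /tmp/init-extent-tree.log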
From: Martin Wilck
This patch series contains all changes I made to the btrfs tools
in the course of analyzing and repairing the corruption I described
in my other mail to linux-btrfs titled "A story of btrfs corruption
and recovery".
The bottom line of this patch set is: 1) have the tools continue
On 04/10/13 19:32, Duncan wrote:
> Martin posted on Fri, 04 Oct 2013 16:47:19 +0100 as "condensed":
>
>> There are ad-hoc comments for various commands to recover from filesystem
>> errors.
>>
>> But what do they actually do, and when should which command be used?
>> What do they do exactly, and what are the indicators to try using them?
Martin posted on Fri, 04 Oct 2013 16:47:19 +0100 as "condensed":
> There are ad-hoc comments for various commands to recover from filesystem
> errors.
>
> But what do they actually do, and when should which command be used?
> What do they do exactly, and what are the indicators to try using them?
> Or
There are ad-hoc comments for various commands to recover from filesystem
errors.
But what do they actually do, and when should which command be used?
(The wiki gives scant indication other than to 'blindly' try things...)
There's:
mount "-o recovery,noatime"
btrfsck:
--repair
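A rough escalation ladder matching those commands, from least to most invasive; device and paths are placeholders, and on newer kernels the mount option is spelled rescue=usebackuproot instead of recovery:

    # 1. Mount read-only, falling back to backup tree roots
    mount -o ro,recovery,noatime /dev/sdXn /mnt
    # 2. Copy data out without mounting at all
    btrfs restore -v /dev/sdXn /mnt/rescue
    # 3. Read-only consistency check first; --repair only as a last resort
    btrfs check /dev/sdXn
    btrfs check --repair /dev/sdXn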