On 01/27/2017 07:31 AM, Christoph Anton Mitterer wrote:
On Thu, 2017-01-26 at 11:10 +0800, Qu Wenruo wrote:
Would you please try the lowmem_tests branch of my repo?
That branch contains special debug output for the case you
encountered, which should help to track it down.
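For anyone wanting to try the same thing, a minimal sketch of building and
running such a branch (the repository URL is not quoted here, so the clone
URL and device name below are placeholders):

  $ git clone -b lowmem_tests https://github.com/<qu-repo>/btrfs-progs.git
  $ cd btrfs-progs
  $ ./autogen.sh && ./configure && make
  # Run the low-memory check mode against the unmounted device:
  $ ./btrfs check --mode=lowmem /dev/sdX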
On 28.01.2017 at 23:27, Hans van Kranenburg wrote:
> On 01/28/2017 10:04 PM, Oliver Freyermuth wrote:
>> On 26.01.2017 at 12:01, Oliver Freyermuth wrote:
>>> On 26.01.2017 at 11:00, Hugo Mills wrote:
>>>> We can probably talk you through fixing this by hand with a decent
>>>> hex editor.
2017-01-28 11:03 GMT+03:00, Rich Gannon:
> Hello Btrfs users and devs,
>
> I've gone searching through the mailing list dating back to 2014 or so and
> never found a positive answer - mostly guesses as to the answer,
> albeit with good theories. I have two separate
On 01/28/2017 10:04 PM, Oliver Freyermuth wrote:
> On 26.01.2017 at 12:01, Oliver Freyermuth wrote:
>> On 26.01.2017 at 11:00, Hugo Mills wrote:
>>> We can probably talk you through fixing this by hand with a decent
>>> hex editor. I've done it before...
>>>
>> That would be nice! Is it fine
On 26.01.2017 at 12:01, Oliver Freyermuth wrote:
> On 26.01.2017 at 11:00, Hugo Mills wrote:
>> We can probably talk you through fixing this by hand with a decent
>> hex editor. I've done it before...
>>
> That would be nice! Is it fine via the mailing list?
> Potentially, the instructions
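Before anyone reaches for a hex editor, a minimal read-only sanity check is
worth doing (device name is a placeholder; always work on a dd image copy,
never the live device). The primary btrfs superblock sits at byte offset
0x10000, with the magic "_BHRfS_M" 0x40 bytes into it:

  # Dump the superblock magic, read-only:
  $ dd if=/dev/sdX bs=1 skip=$((0x10040)) count=8 2>/dev/null | xxd
  00000000: 5f42 4852 6653 5f4d                      _BHRfS_M

If that magic does not read back as expected, you are looking at the wrong
offset or the wrong device, and no edits should be attempted.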
This same file system (which crashed again with the same errors) is also
giving this output during a metadata or data balance:
Jan 27 19:42:47 my_machine kernel: [  335.018123] BTRFS info (device sda1): no csum found for inode 28472371 start 2191360
Jan 27 19:42:47 my_machine kernel: [
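The inode number in that message can be mapped back to a path on the mounted
filesystem (mount point below is a placeholder); "no csum found" is commonly
seen for nodatacow files, which carry no checksums:

  $ btrfs inspect-internal inode-resolve 28472371 /path/to/mountpoint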
Hello,
Of course I can't retrieve the data from before the balance, but here
is the data from now:
root@vmhost:~# btrfs fi show /tmp/mnt/curlybrace
Label: 'curlybrace'  uuid: f471bfca-51c4-4e44-ac72-c6cd9ccaf535
        Total devices 1 FS bytes used 752.38MiB
        devid    1 size 2.00GiB used 1.90GiB
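Since the device is almost fully allocated (1.90GiB of 2.00GiB), a natural
next step is to see how that allocation splits across data and metadata:

  $ btrfs filesystem df /tmp/mnt/curlybrace
  $ btrfs filesystem usage /tmp/mnt/curlybrace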
Hi Duncan,
thanks for your extensive reply!
On 28.01.2017 at 06:00, Duncan wrote:
> All three options apparently default to 64K (as that's what I see here
> and I don't believe I've changed them), but can be changed. See the
> kernel options help and where it points for more.
>
Indeed, I
On 28.01.2017 at 13:37, Janos Toth F. wrote:
> I usually compile my kernels with CONFIG_X86_RESERVE_LOW=640 and
> CONFIG_X86_CHECK_BIOS_CORRUPTION=N because 640 kilobytes seems like a
> very cheap price to pay in order to avoid worrying about this (and
> skip the associated checking + monitoring).
Out of curiosity (after reading this email) I set
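Both of those are real x86 Kconfig symbols; a quick way to check what a
running kernel was built with (the config file location varies by distro):

  $ grep -E 'CONFIG_X86_(RESERVE_LOW|CHECK_BIOS_CORRUPTION)' /boot/config-$(uname -r)
  CONFIG_X86_RESERVE_LOW=640
  # CONFIG_X86_CHECK_BIOS_CORRUPTION is not set

The output shown is what the settings quoted above would look like; the
stock default is CONFIG_X86_RESERVE_LOW=64, as noted earlier in the thread.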
Signed-off-by: Lakshmipathi.G
---
tests/fsck-tests/026-check-inode-link/test.sh | 34 +++
1 file changed, 34 insertions(+)
create mode 100755 tests/fsck-tests/026-check-inode-link/test.sh
diff --git
On 27.01.2017 23:03, Austin S. Hemmelgarn wrote:
> On 2017-01-27 11:47, Hans Deragon wrote:
>> On 2017-01-24 14:48, Adam Borowski wrote:
>>
>>> On Tue, Jan 24, 2017 at 01:57:24PM -0500, Hans Deragon wrote:
>>>
If I remove 'ro' from the mount options, I cannot get the filesystem
mounted because of
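Assuming the degraded-RAID scenario this thread is about (device and mount
point are placeholders), the usual shape of the problem is that a read-only
degraded mount succeeds while the read-write one is refused, with the
kernel's stated reason in the log:

  $ mount -o ro,degraded /dev/sdX /mnt    # generally still works
  $ mount -o degraded /dev/sdX /mnt       # the failing read-write attempt
  $ dmesg | tail                          # shows why the mount was refused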
On Fri, 27 Jan 2017, Christoph Hellwig wrote:
On Fri, Jan 27, 2017 at 11:40:42AM -0500, Theodore Ts'o wrote:
The reason why I'm nervous is that nojournal mode is not a common
configuration, and "wait until production systems start failing" is
not a strategy that I or many SRE-types find
Hello Btrfs users and devs,
I've gone searching through the mailing list dating back to 2014 or so
and never found a positive answer - mostly guesses as to the
answer, albeit with good theories. I have two separate questions I'm
looking to have answered with the latest known facts,
On Fri, Jan 27, 2017 at 11:40:42AM -0500, Theodore Ts'o wrote:
> The reason why I'm nervous is that nojournal mode is not a common
> configuration, and "wait until production systems start failing" is
> not a strategy that I or many SRE-types find comforting.
What does SRE stand for?