On 2018/12/7 7:15 AM, Michael Wade wrote:
> Hi Qu,
>
> Me again! Having formatted the drives and rebuilt the RAID array I
> seem to be having the same problem as before (no power cut this
> time [I bought a UPS]).
But strangely, your super block shows it has a log tree, which means
either
On 2018-12-05 14:50, Roman Mamedov wrote:
Hello,
To migrate my FS to a different physical disk, I have added a new empty device
to the FS, then ran the remove operation on the original one.
Now my FS has only devid 2:
Label: 'p1' uuid: d886c190-b383-45ba-9272-9f00c6a10c50
Total
On 4.12.18 22:14, Wilson, Ellis wrote:
> On 12/4/18 8:07 AM, Nikolay Borisov wrote:
>> On 3.12.18 20:20, Wilson, Ellis wrote:
>>> With 14TB drives available today, it doesn't take more than a handful of
>>> drives to result in a filesystem that takes around a minute to mount.
>>> As
On 12/4/18 8:07 AM, Nikolay Borisov wrote:
> On 3.12.18 20:20, Wilson, Ellis wrote:
>> With 14TB drives available today, it doesn't take more than a handful of
>> drives to result in a filesystem that takes around a minute to mount.
>> As a result of this, I suspect this will become an
On 04/12/2018 at 03:52, Chris Murphy wrote:
> On Mon, Dec 3, 2018 at 1:04 PM Lionel Bouton
> wrote:
>> On 03/12/2018 at 20:56, Lionel Bouton wrote:
>>> [...]
>>> Note : recently I tried upgrading from 4.9 to 4.14 kernels, various
>>> tuning of the io queue (switching between classic
On 2018/12/4 9:07 PM, Nikolay Borisov wrote:
>
>
> On 3.12.18 20:20, Wilson, Ellis wrote:
>> Hi all,
>>
>> Many months ago I promised to graph how long it took to mount a BTRFS
>> filesystem as it grows. I finally had (made) time for this, and the
>> attached is the result of my
On Fri, Nov 30, 2018 at 06:27:58PM +0200, Nikolay Borisov wrote:
>
>
> > On 30.11.18 17:22, Chris Mason wrote:
> > > On 29 Nov 2018, at 12:37, Nikolay Borisov wrote:
> > >
> > >> On 29.11.18 18:43, Jean Fobe wrote:
> >>> Hi all,
> >>> I've been studying LZ4 and other compression
On 3.12.18 20:20, Wilson, Ellis wrote:
> Hi all,
>
> Many months ago I promised to graph how long it took to mount a BTRFS
> filesystem as it grows. I finally had (made) time for this, and the
> attached is the result of my testing. The image is a fairly
> self-explanatory graph,
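The measurements discussed in this thread boil down to timing `mount` as the filesystem grows. A minimal sketch of collecting one data point; the device and mountpoint names are examples, not from the thread, and the timing helper assumes GNU date (`%N`):

```shell
#!/bin/sh
# Time a command and print the elapsed wall-clock milliseconds.
time_ms() {
    start=$(date +%s%N)                 # GNU date: nanoseconds since epoch
    "$@" >/dev/null 2>&1
    end=$(date +%s%N)
    echo $(( (end - start) / 1000000 ))
}

# Hypothetical usage (mounting requires root; names are illustrative):
#   echo "mount took $(time_ms mount /dev/sdb1 /mnt/test) ms"
```

Repeating this after each batch of data is written gives the mount-time-vs-size curve the thread graphs.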
On Mon, Dec 3, 2018 at 1:04 PM Lionel Bouton
wrote:
>
> On 03/12/2018 at 20:56, Lionel Bouton wrote:
> > [...]
> > Note : recently I tried upgrading from 4.9 to 4.14 kernels, various
> > tuning of the io queue (switching between classic io-schedulers and
> > blk-mq ones in the virtual machines)
On 2018/12/4 2:20 AM, Wilson, Ellis wrote:
> Hi all,
>
> Many months ago I promised to graph how long it took to mount a BTRFS
> filesystem as it grows. I finally had (made) time for this, and the
> attached is the result of my testing. The image is a fairly
> self-explanatory graph, and
Hi,
On 12/3/18 8:56 PM, Lionel Bouton wrote:
>
> On 03/12/2018 at 19:20, Wilson, Ellis wrote:
>>
>> Many months ago I promised to graph how long it took to mount a BTRFS
>> filesystem as it grows. I finally had (made) time for this, and the
>> attached is the result of my testing. The
On 03/12/2018 at 20:56, Lionel Bouton wrote:
> [...]
> Note : recently I tried upgrading from 4.9 to 4.14 kernels, various
> tuning of the io queue (switching between classic io-schedulers and
> blk-mq ones in the virtual machines) and BTRFS mount options
> (space_cache=v2,ssd_spread) but there
Hi,
On 03/12/2018 at 19:20, Wilson, Ellis wrote:
> Hi all,
>
> Many months ago I promised to graph how long it took to mount a BTRFS
> filesystem as it grows. I finally had (made) time for this, and the
> attached is the result of my testing. The image is a fairly
> self-explanatory graph,
On 30.11.18 17:22, Chris Mason wrote:
> On 29 Nov 2018, at 12:37, Nikolay Borisov wrote:
>
>> On 29.11.18 18:43, Jean Fobe wrote:
>>> Hi all,
>>> I've been studying LZ4 and other compression algorithms on the
>>> kernel, and seen other projects such as zram and ubifs using the
On 29 Nov 2018, at 12:37, Nikolay Borisov wrote:
> On 29.11.18 18:43, Jean Fobe wrote:
>> Hi all,
>> I've been studying LZ4 and other compression algorithms on the
>> kernel, and seen other projects such as zram and ubifs using the
>> crypto api. Is there a technical reason for not
On 29.11.18 18:43, Jean Fobe wrote:
> Hi all,
> I've been studying LZ4 and other compression algorithms on the
> kernel, and seen other projects such as zram and ubifs using the
> crypto api. Is there a technical reason for not using the crypto api
> for compression (and possibly for
On Fri 23-11-18 19:53:11, Amir Goldstein wrote:
> On Fri, Nov 23, 2018 at 3:34 PM Amir Goldstein wrote:
> > > So open_by_handle() should work fine even if we get mount_fd of /subvol1
> > > and handle for inode on /subvol2. mount_fd in open_by_handle() is really
> > > only used to get the
On Fri, Nov 23, 2018 at 3:34 PM Amir Goldstein wrote:
>
> On Fri, Nov 23, 2018 at 2:52 PM Jan Kara wrote:
> >
> > Changed subject to better match what we discuss and added btrfs list to CC.
> >
> > On Thu 22-11-18 17:18:25, Amir Goldstein wrote:
> > > On Thu, Nov 22, 2018 at 3:26 PM Jan Kara
On Fri, Nov 23, 2018 at 2:52 PM Jan Kara wrote:
>
> Changed subject to better match what we discuss and added btrfs list to CC.
>
> On Thu 22-11-18 17:18:25, Amir Goldstein wrote:
> > On Thu, Nov 22, 2018 at 3:26 PM Jan Kara wrote:
> > >
> > > On Thu 22-11-18 14:36:35, Amir Goldstein wrote:
> >
On Thu, Nov 22, 2018 at 6:07 AM Tomasz Chmielewski wrote:
>
> On 2018-11-22 21:46, Nikolay Borisov wrote:
>
> >> # echo w > /proc/sysrq-trigger
> >>
> >> # dmesg -c
> >> [ 931.585611] sysrq: SysRq : Show Blocked State
> >> [ 931.585715] task PC stack pid father
> >> [
On 2018/11/22 10:03 PM, Roman Mamedov wrote:
> On Thu, 22 Nov 2018 22:07:25 +0900
> Tomasz Chmielewski wrote:
>
>> Spot on!
>>
>> Removed "discard" from fstab and added "ssd", rebooted - no more
>> btrfs-cleaner running.
>
> Recently there has been a bugfix for TRIM in Btrfs:
>
> btrfs:
On Thu, 22 Nov 2018 22:07:25 +0900
Tomasz Chmielewski wrote:
> Spot on!
>
> Removed "discard" from fstab and added "ssd", rebooted - no more
> btrfs-cleaner running.
Recently there has been a bugfix for TRIM in Btrfs:
btrfs: Ensure btrfs_trim_fs can trim the whole fs
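The fix reported above, removing `discard` from fstab and adding `ssd`, can be expressed as a small edit of the options field. A sketch, assuming GNU sed; review the output before writing anything back to /etc/fstab:

```shell
#!/bin/sh
# Rewrite fstab-style lines read from stdin: replace the "discard"
# mount option with "ssd" in the options field, leave everything else.
swap_discard_for_ssd() {
    # Match "discard" only as a whole option (preceded by start-of-line,
    # a comma, or whitespace, and followed by end, comma, or whitespace).
    sed 's/\(^\|[,[:space:]]\)discard\($\|[,[:space:]]\)/\1ssd\2/'
}

# Example:
#   echo 'UUID=abc / btrfs defaults,discard 0 1' | swap_discard_for_ssd
```

With the discard mount option gone, trimming can instead be done periodically with `fstrim` (for example via the systemd fstrim.timer unit), which is what the TRIM bugfix mentioned above affects.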
On 2018-11-22 21:46, Nikolay Borisov wrote:
# echo w > /proc/sysrq-trigger
# dmesg -c
[ 931.585611] sysrq: SysRq : Show Blocked State
[ 931.585715] task PC stack pid father
[ 931.590168] btrfs-cleaner D 0 1340 2 0x8000
[ 931.590175] Call Trace:
[
On 22.11.18 14:31, Tomasz Chmielewski wrote:
> Yet another system upgraded to 4.19 and showing strange issues.
>
> btrfs-cleaner is showing as ~90-100% busy in iotop:
>
> TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
> 1340 be/4 root 0.00 B/s 0.00
On 2018/11/17 5:49 PM, Serhat Sevki Dincer wrote:
> Hi,
>
> On my second attempt to convert my 698 GiB USB HDD from ext4 to btrfs
> with btrfs-progs 4.19 from Manjaro (kernel 4.14.80):
>
> I identified bad files with
> $ find . -type f -exec cat {} > /dev/null \;
> This revealed 6 corrupted
Hi,
On my second attempt to convert my 698 GiB USB HDD from ext4 to btrfs
with btrfs-progs 4.19 from Manjaro (kernel 4.14.80):
I identified bad files with
$ find . -type f -exec cat {} > /dev/null \;
This revealed 6 corrupted files, I deleted them. Tried it again with
no error message.
Then I
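The `find ... -exec cat` scan above surfaces read errors but does not record which files failed. A variant that prints the failing paths; the helper name is mine, not from the thread:

```shell
#!/bin/sh
# Print the names of files that cannot be read end-to-end.
# On btrfs, checksum failures surface as read errors, so cat fails.
list_unreadable() {
    for f in "$@"; do
        cat "$f" >/dev/null 2>&1 || printf '%s\n' "$f"
    done
}

# Hypothetical usage over a whole tree, collecting the bad paths:
#   find . -type f -exec sh -c 'cat "$1" >/dev/null 2>&1 || echo "$1"' _ {} \; > bad_files.txt
```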
On Thu, Nov 15, 2018 at 10:39 AM Juan Alberto Cirez
wrote:
>
> Is BTRFS mature enough to be deployed on a production system to underpin
> the storage layer of a 16+ IP-camera-based NVR (or VMS if you prefer)?
>
> Based on our limited experience with BTRFS (1+ year) under the above
> scenario the
On 11/16/2018 03:51 AM, Nikolay Borisov wrote:
On 15.11.18 20:39, Juan Alberto Cirez wrote:
Is BTRFS mature enough to be deployed on a production system to underpin
the storage layer of a 16+ IP-camera-based NVR (or VMS if you prefer)?
Based on our limited experience with BTRFS (1+
On 2018-11-15 13:39, Juan Alberto Cirez wrote:
Is BTRFS mature enough to be deployed on a production system to underpin
the storage layer of a 16+ IP-camera-based NVR (or VMS if you prefer)?
For NVR, I'd say no. BTRFS does pretty horribly with append-only
workloads, even if they are WORM
On Thu, 15 Nov 2018 11:39:58 -0700
Juan Alberto Cirez wrote:
> Is BTRFS mature enough to be deployed on a production system to underpin
> the storage layer of a 16+ IP-camera-based NVR (or VMS if you prefer)?
What are you looking to gain from using Btrfs on an NVR system? It doesn't
sound like
On 15.11.18 20:39, Juan Alberto Cirez wrote:
> Is BTRFS mature enough to be deployed on a production system to underpin
> the storage layer of a 16+ IP-camera-based NVR (or VMS if you prefer)?
>
> Based on our limited experience with BTRFS (1+ year) under the above
> scenario the answer
Hi Qu,
I created an issue for this feature request, so that it is not lost:
https://github.com/kdave/btrfs-progs/issues/153
Thanks for the help again! Now unsubscribing from this list.
Regards,
Attila
On Tue, Nov 6, 2018 at 9:23 AM Qu Wenruo wrote:
> On 2018/11/6 4:17 PM, Attila Vangel wrote:
On 2018/11/6 4:17 PM, Attila Vangel wrote:
> Hi Qu,
>
> Thanks again for the help.
> In my case:
>
> $ sudo btrfs check /dev/nvme0n1p2
> Opening filesystem to check...
> checksum verify failed on 18811453440 found E4E3BDB6 wanted
> checksum verify failed on 18811453440 found E4E3BDB6
Hi Qu,
Thanks again for the help.
In my case:
$ sudo btrfs check /dev/nvme0n1p2
Opening filesystem to check...
checksum verify failed on 18811453440 found E4E3BDB6 wanted
checksum verify failed on 18811453440 found E4E3BDB6 wanted
bad tree block 18811453440, bytenr mismatch,
On 2018/11/6 2:01 AM, Attila Vangel wrote:
> Hi,
>
> TL;DR: I want to save data from my unmountable btrfs partition.
> I saw some commands in another thread "Salvage files from broken btrfs".
> I use the most recent Manjaro live (kernel: 4.19.0-3-MANJARO,
> btrfs-progs 4.17.1-1) to execute these
On Mon, Nov 5, 2018 at 6:27 AM, Austin S. Hemmelgarn
wrote:
> On 11/4/2018 11:44 AM, waxhead wrote:
>>
>> Sterling Windmill wrote:
>>>
>>> Out of curiosity, what led to you choosing RAID1 for data but RAID10
>>> for metadata?
>>>
> >>> I've flip-flopped between these two modes myself after finding
Hi,
Stupid gmail has put my email (or Qu's reply?) into spam, so I just saw
the reply after I sent my reply (gmail asked me whether to remove it
from spam).
Anyway here is the requested output. Thanks for the help!
$ sudo btrfs check /dev/nvme0n1p2
Opening filesystem to check...
checksum verify
Hi,
TL;DR: I want to save data from my unmountable btrfs partition.
I saw some commands in another thread "Salvage files from broken btrfs".
I use the most recent Manjaro live (kernel: 4.19.0-3-MANJARO,
btrfs-progs 4.17.1-1) to execute these commands.
$ sudo mount -o ro,nologreplay
On 11/4/2018 11:44 AM, waxhead wrote:
Sterling Windmill wrote:
Out of curiosity, what led to you choosing RAID1 for data but RAID10
for metadata?
I've flip-flopped between these two modes myself after finding out
that BTRFS RAID10 doesn't work how I would've expected.
Wondering what made you
Sterling Windmill wrote:
Out of curiosity, what led to you choosing RAID1 for data but RAID10
for metadata?
I've flip-flopped between these two modes myself after finding out
that BTRFS RAID10 doesn't work how I would've expected.
Wondering what made you choose your configuration.
Thanks!
Out of curiosity, what led to you choosing RAID1 for data but RAID10
for metadata?
I've flip-flopped between these two modes myself after finding out
that BTRFS RAID10 doesn't work how I would've expected.
Wondering what made you choose your configuration.
Thanks!
On Fri, Nov 2, 2018 at 3:55
Duncan wrote:
waxhead posted on Fri, 02 Nov 2018 20:54:40 +0100 as excerpted:
Note that I tend to interpret the btrfs de st / output as if the error
was NOT fixed even if (seems clearly that) it was, so I think the output
is a bit misleading... just saying...
See the btrfs-device manpage,
waxhead posted on Fri, 02 Nov 2018 20:54:40 +0100 as excerpted:
> Note that I tend to interpret the btrfs de st / output as if the error
> was NOT fixed even if (seems clearly that) it was, so I think the output
> is a bit misleading... just saying...
See the btrfs-device manpage, stats
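The point the btrfs-device manpage makes here is that the stats counters are cumulative: they stay nonzero after an error has been fixed, until reset with `btrfs device stats -z`. A sketch that totals the counters from the command's output; the sample lines follow the documented `[/dev/...].counter value` shape:

```shell
#!/bin/sh
# Sum all error counters in `btrfs device stats` output read from stdin.
# Input lines look like: [/dev/sda].write_io_errs   0
total_dev_errors() {
    awk '{ sum += $NF } END { print sum + 0 }'   # +0 so empty input prints 0
}

# Hypothetical usage (requires a mounted btrfs filesystem):
#   btrfs device stats / | total_dev_errors
#   btrfs device stats -z /    # reset the counters once the cause is fixed
```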
On 2018/11/2 4:40 AM, Attila Vangel wrote:
> Hi,
>
> Somehow my btrfs partition got broken. I use Arch, so my kernel is
> quite new (4.18.x).
> I don't remember exactly the sequence of events. At some point it was
> accessible in read-only, but unfortunately I did not take backup
> immediately.
On 2018/10/28 6:58 AM, Dave wrote:
> I'm using btrfs and snapper on a system with an SSD. On this system
> when I run `snapper -c root ls` (where `root` is the snapper config
> for /), the process takes a very long time and top shows the following
> process using 100% of the CPU:
>
>
On Sat, Oct 20, 2018 at 1:34 PM Filipe Manana wrote:
>
> On Sat, Oct 20, 2018 at 9:27 PM Liu Bo wrote:
> >
> > On Fri, Oct 19, 2018 at 7:09 PM Andrew Nelson
> > wrote:
> > >
> > > I am having an issue with btrfs resize in Fedora 28. I am attempting
> > > to enlarge my Btrfs partition. Every
On Mon, Oct 22, 2018 at 10:06 AM Andrew Nelson
wrote:
>
> OK, an update: After unmounting and running btrfs check, the drive
> reverted to reporting the old size. Not sure if this was due to
> unmounting / mounting or doing btrfs check. Btrfs check should have
> been running in readonly mode.
It
OK, an update: After unmounting and running btrfs check, the drive
reverted to reporting the old size. Not sure if this was due to
unmounting / mounting or doing btrfs check. Btrfs check should have
been running in readonly mode. Since it looked like something was
wrong with the resize process, I
On Sun, Oct 21, 2018 at 6:05 AM Andrew Nelson wrote:
>
> Also, is the drive in a safe state to use? Is there anything I should
> run on the drive to check consistency?
It should be in a safe state. You can verify it running "btrfs check
/dev/" (it's a readonly operation).
If you are able to
Also, is the drive in a safe state to use? Is there anything I should
run on the drive to check consistency?
On Sat, Oct 20, 2018 at 10:02 PM Andrew Nelson
wrote:
>
> I have run the "btrfs inspect-internal dump-tree -t 1" command, but
> the output is ~55 MB. Is there something in particular you
I have run the "btrfs inspect-internal dump-tree -t 1" command, but
the output is ~55 MB. Is there something in particular you are looking
for in this?
On Sat, Oct 20, 2018 at 1:34 PM Filipe Manana wrote:
>
> On Sat, Oct 20, 2018 at 9:27 PM Liu Bo wrote:
> >
> > On Fri, Oct 19, 2018 at 7:09 PM
On Sat, Oct 20, 2018 at 9:27 PM Liu Bo wrote:
>
> On Fri, Oct 19, 2018 at 7:09 PM Andrew Nelson
> wrote:
> >
> > I am having an issue with btrfs resize in Fedora 28. I am attempting
> > to enlarge my Btrfs partition. Every time I run "btrfs filesystem
> > resize max $MOUNT", the command runs
On Fri, Oct 19, 2018 at 7:09 PM Andrew Nelson wrote:
>
> I am having an issue with btrfs resize in Fedora 28. I am attempting
> to enlarge my Btrfs partition. Every time I run "btrfs filesystem
> resize max $MOUNT", the command runs for a few minutes and then hangs
> forcing the system to be
On Wed, Oct 17, 2018 at 10:29 AM Libor Klepáč wrote:
>
> Hello,
> I have a new 32GB SSD in my Intel NUC, installed Debian 9 on it, using btrfs as
> a rootfs.
> Then I created subvolumes /system and /home and moved the system there.
>
> System was installed using kernel 4.9.x and filesystem created
On 2018/10/16 11:25 PM, Anton Shepelev wrote:
> Qu Wenruo to Anton Shepelev:
>
>>> On all our servers with BTRFS, which are otherwise working
>>> normally, `btrfs check /' complains that
>>>
>>> Superblock bytenr is larger than device size
>>> Couldn't open file system
>>>
>> Please try latest
Qu Wenruo to Anton Shepelev:
>>On all our servers with BTRFS, which are otherwise working
>>normally, `btrfs check /' complains that
>>
>>Superblock bytenr is larger than device size
>>Couldn't open file system
>>
>Please try latest btrfs-progs and see if btrfs check
>reports any error.
It is
On 2018/10/16 10:05 PM, Anton Shepelev wrote:
> Hello, all
>
> On all our servers with BTRFS, which are otherwise working
> normally, `btrfs check /' complains that
Btrfs check shouldn't continue on a mount point.
Latest one would report error like:
Opening filesystem to check...
ERROR: not
On 10/14/2018 07:08 PM, waxhead wrote:
In case BTRFS fails to WRITE to a disk, what happens?
Does the bad area get mapped out somehow?
There was a proposed patch; it's not convincing because the disk does
the bad block relocation part transparently to the host, and if the disk runs
out of
On 2018-10-14 07:08, waxhead wrote:
In case BTRFS fails to WRITE to a disk, what happens?
Does the bad area get mapped out somehow? Does it try again until it
succeed or until it "times out" or reach a threshold counter?
Does it eventually try to write to a different disk (in case of using
the
On 2018/10/14 7:08 PM, waxhead wrote:
> In case BTRFS fails to WRITE to a disk, what happens?
Normally it should return an error when we flush the disk.
And in that case, the error will lead to a transaction abort and the fs goes
RO to prevent further corruption.
> Does the bad area get mapped out somehow?
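Qu's description, a flush error aborting the transaction and the fs going read-only, leaves a trace in the kernel log. A sketch that checks a captured log excerpt for it; the exact message wording is an assumption, so adjust the pattern to your kernel's output:

```shell
#!/bin/sh
# Return success (0) if a kernel log excerpt, read from stdin, shows a
# btrfs transaction abort / forced-readonly event.
btrfs_went_ro() {
    grep -Eqi 'BTRFS.*(forced readonly|transaction.*abort)'
}

# Hypothetical usage:
#   dmesg | btrfs_went_ro && echo "filesystem was forced read-only"
```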
Hey Filipe,
thanks for the feedback. I ran the command again with -vv. Below are
the last commands logged by btrfs receive to stderr:
mkfile o2138798-5016457-0
rename
leonard/mail/lists/emacs-orgmode/new/1530428589.M675528862P21583Q6R28ec1af3.leonard-xps13
-> o2138802-5207521-0
rename
Filipe Manana - 05.10.18, 17:21:
> On Fri, Oct 5, 2018 at 3:23 PM Martin Steigerwald
wrote:
> > Hello!
> >
> > On ThinkPad T520 after battery was discharged and machine just
> > blacked out.
> >
> > Is that some sign of regular consistency check / replay or something
> > to investigate
On Fri, Oct 5, 2018 at 3:23 PM Martin Steigerwald wrote:
>
> Hello!
>
> On ThinkPad T520 after battery was discharged and machine just blacked
> out.
>
> Is that some sign of regular consistency check / replay or something to
> investigate further?
I think it's harmless, if anything were messed
On Tue, Oct 2, 2018 at 7:02 AM Leonard Lausen wrote:
>
> Hello,
>
> does anyone have an idea about below issue? It is a severe issue as it
> renders btrfs send / receive dysfunctional and it is not clear if there
> may be a data corruption issue hiding in the current send / receive
> code.
>
>
Hello,
does anyone have an idea about below issue? It is a severe issue as it
renders btrfs send / receive dysfunctional and it is not clear if there
may be a data corruption issue hiding in the current send / receive
code.
Thank you.
Best regards
Leonard
Leonard Lausen writes:
> Hello!
>
> I
On 2018-09-25 16:31, Nikolay Borisov wrote:
On 25.09.2018 11:20, sunny.s.zhang wrote:
On 2018-09-20 02:36, Liu Bo wrote:
On Mon, Sep 17, 2018 at 5:28 PM, sunny.s.zhang
wrote:
Hi All,
My OS (4.1.12) panicked in kmem_cache_alloc, which is called by
btrfs_get_or_create_delayed_node.
I found that
On 25.09.2018 11:20, sunny.s.zhang wrote:
>
> On 2018-09-20 02:36, Liu Bo wrote:
>> On Mon, Sep 17, 2018 at 5:28 PM, sunny.s.zhang
>> wrote:
>>> Hi All,
>>>
>>> My OS (4.1.12) panicked in kmem_cache_alloc, which is called by
>>> btrfs_get_or_create_delayed_node.
>>>
>>> I found that the freelist of
On 2018-09-20 00:12, Nikolay Borisov wrote:
On 19.09.2018 02:53, sunny.s.zhang wrote:
Hi Duncan,
Thank you for your advice. I understand what you mean. But I have
reviewed the latest btrfs code, and I think the issue still exists.
At line 71, if the function btrfs_get_delayed_node runs
On 2018-09-20 02:36, Liu Bo wrote:
On Mon, Sep 17, 2018 at 5:28 PM, sunny.s.zhang wrote:
Hi All,
My OS (4.1.12) panicked in kmem_cache_alloc, which is called by
btrfs_get_or_create_delayed_node.
I found that the freelist of the slub is wrong.
crash> struct kmem_cache_cpu 887e7d7a24b0
struct
Adrian Bastholm posted on Thu, 20 Sep 2018 23:35:57 +0200 as excerpted:
> Thanks a lot for the detailed explanation.
> About "stable hardware/no lying hardware": I'm not running any raid
> hardware, was planning on just software raid. three drives glued
> together with "mkfs.btrfs -d raid5
On 2018-09-20 05:35 PM, Adrian Bastholm wrote:
> Thanks a lot for the detailed explanation.
> About "stable hardware/no lying hardware": I'm not running any raid
> hardware, was planning on just software raid. three drives glued
> together with "mkfs.btrfs -d raid5 /dev/sdb /dev/sdc /dev/sdd".
On Thu, Sep 20, 2018 at 3:36 PM Adrian Bastholm wrote:
>
> Thanks a lot for the detailed explanation.
> About "stable hardware/no lying hardware": I'm not running any raid
> hardware, was planning on just software raid.
Yep. I'm referring to the drives, their firmware, cables, logic board,
its
Thanks a lot for the detailed explanation.
About "stable hardware/no lying hardware": I'm not running any raid
hardware, was planning on just software raid. three drives glued
together with "mkfs.btrfs -d raid5 /dev/sdb /dev/sdc /dev/sdd". Would
this be a safer bet, or would You recommend running
On Thu, Sep 20, 2018 at 11:23 AM, Adrian Bastholm wrote:
> On Mon, Sep 17, 2018 at 2:44 PM Qu Wenruo wrote:
>
>>
>> Then I strongly recommend to use the latest upstream kernel and progs
>> for btrfs. (thus using Debian Testing)
>>
>> And if anything went wrong, please report asap to the mail
On Wed, Sep 19, 2018 at 1:41 PM, Jürgen Herrmann wrote:
> Am 13.9.2018 14:35, schrieb Nikolay Borisov:
>>
>> On 13.09.2018 15:30, Jürgen Herrmann wrote:
>>>
>>> OK, I will install kdump later and perform a dump after the hang.
>>>
>>> One more noob question beforehand: does this dump contain
On Mon, Sep 17, 2018 at 2:44 PM Qu Wenruo wrote:
>
> Then I strongly recommend to use the latest upstream kernel and progs
> for btrfs. (thus using Debian Testing)
>
> And if anything went wrong, please report asap to the mail list.
>
> Especially for fs corruption, that's the ghost I'm always
Hi,
> You may try to run the show command under strace to see where it blocks.
any recommendations for strace options?
On Friday, September 14, 2018 1:25:30 PM CEST David Sterba wrote:
> Hi,
>
> thanks for the report, I've forwarded it to the issue tracker
>
On 13.9.2018 14:35, Nikolay Borisov wrote:
On 13.09.2018 15:30, Jürgen Herrmann wrote:
OK, I will install kdump later and perform a dump after the hang.
One more noob question beforehand: does this dump contain sensitive
information, for example the luks encryption key for the disk etc? A
On 13.9.2018 18:22, Chris Murphy wrote:
(resend to all)
On Thu, Sep 13, 2018 at 9:44 AM, Nikolay Borisov
wrote:
On 13.09.2018 18:30, Chris Murphy wrote:
This is the 2nd or 3rd thread containing hanging btrfs send, with
kernel 4.18.x. The subject of one is "btrfs send hung in pipe_wait"
On Mon, Sep 17, 2018 at 5:28 PM, sunny.s.zhang wrote:
> Hi All,
>
> My OS (4.1.12) panicked in kmem_cache_alloc, which is called by
> btrfs_get_or_create_delayed_node.
>
> I found that the freelist of the slub is wrong.
>
> crash> struct kmem_cache_cpu 887e7d7a24b0
>
> struct kmem_cache_cpu {
>
On 19.09.2018 02:53, sunny.s.zhang wrote:
> Hi Duncan,
>
> Thank you for your advice. I understand what you mean. But I have
> reviewed the latest btrfs code, and I think the issue still exists.
>
> At line 71, if the function btrfs_get_delayed_node runs over this
> line, then switch to
On 2018/9/19 8:35 AM, sunny.s.zhang wrote:
>
> On 2018-09-19 08:05, Qu Wenruo wrote:
>>
>> On 2018/9/18 8:28 AM, sunny.s.zhang wrote:
>>> Hi All,
>>>
>>> My OS (4.1.12) panicked in kmem_cache_alloc, which is called by
>>> btrfs_get_or_create_delayed_node.
>> Any reproducer?
>>
>> Anyway we need a
On 2018-09-19 08:05, Qu Wenruo wrote:
On 2018/9/18 8:28 AM, sunny.s.zhang wrote:
Hi All,
My OS (4.1.12) panicked in kmem_cache_alloc, which is called by
btrfs_get_or_create_delayed_node.
Any reproducer?
Anyway we need a reproducer as a testcase.
I have had a try, but could not reproduce yet.
On 2018/9/18 8:28 AM, sunny.s.zhang wrote:
> Hi All,
>
> My OS (4.1.12) panicked in kmem_cache_alloc, which is called by
> btrfs_get_or_create_delayed_node.
Any reproducer?
Anyway we need a reproducer as a testcase.
The code looks
>
> I found that the freelist of the slub is wrong.
>
> crash>
Hi Duncan,
Thank you for your advice. I understand what you mean. But I have
reviewed the latest btrfs code, and I think the issue still exists.
At line 71, if the function btrfs_get_delayed_node runs over this
line, then switches to another process, which runs over line 1282 and releases
the
On Tue, Sep 18, 2018 at 06:28:37PM +, Gervais, Francois wrote:
> > No. It is already possible (by setting received UUID); it should not be
> > made too open to easy abuse.
>
>
> Do you mean edit the UUID in the byte stream before btrfs receive?
No, there's an ioctl to change the received
On 18.09.2018 21:28, Gervais, Francois wrote:
>> No. It is already possible (by setting received UUID); it should not be
>> made too open to easy abuse.
>
>
> Do you mean edit the UUID in the byte stream before btrfs receive?
>
No, I mean setting received UUID on subvolume. Unfortunately, it is
> No. It is already possible (by setting received UUID); it should not be
> made too open to easy abuse.
Do you mean edit the UUID in the byte stream before btrfs receive?
On 18.09.2018 20:56, Gervais, Francois wrote:
>
> Hi,
>
> I'm trying to apply a btrfs send diff (done through -p) to another subvolume
> with the same content as the proper parent but with a different uuid.
>
> I looked through btrfs receive and I get the feeling that this is not
> possible
Add Junxiao
On 2018-09-18 13:05, Duncan wrote:
sunny.s.zhang posted on Tue, 18 Sep 2018 08:28:14 +0800 as excerpted:
My OS (4.1.12) panicked in kmem_cache_alloc, which is called by
btrfs_get_or_create_delayed_node.
I found that the freelist of the slub is wrong.
[Not a dev, just a btrfs list
sunny.s.zhang posted on Tue, 18 Sep 2018 08:28:14 +0800 as excerpted:
> My OS (4.1.12) panicked in kmem_cache_alloc, which is called by
> btrfs_get_or_create_delayed_node.
>
> I found that the freelist of the slub is wrong.
[Not a dev, just a btrfs list regular and user, myself. But here's a
Sorry, correcting some errors:
Process A (btrfs_evict_inode)           Process B
call btrfs_remove_delayed_node          call btrfs_get_delayed_node
                                        node = ACCESS_ONCE(btrfs_inode->delayed_node);
BTRFS_I(inode)->delayed_node = NULL;
> If your primary concern is to make the fs as stable as possible, then
> keep snapshots to a minimum, and avoid any functionality you won't
> use, like qgroup, routine balance, or RAID5/6.
>
> And keep the necessary btrfs specific operations to minimal, like
> subvolume/snapshot (and don't
On 2018/9/17 7:55 PM, Adrian Bastholm wrote:
>> Well, I'd say Debian is really not your first choice for btrfs.
>> The kernel is really old for btrfs.
>>
>> My personal recommendation is to use a rolling release distribution like
>> vanilla Archlinux, whose kernel is already at 4.18.7 now.
>
> I just
> Well, I'd say Debian is really not your first choice for btrfs.
> The kernel is really old for btrfs.
>
> My personal recommendation is to use a rolling release distribution like
> vanilla Archlinux, whose kernel is already at 4.18.7 now.
I just upgraded to Debian Testing which has the 4.18 kernel
>
On Sun, Sep 16, 2018 at 2:11 PM, Adrian Bastholm wrote:
> Thanks for answering Qu.
>
>> At this timing, your fs is already corrupted.
>> I'm not sure about the reason, it can be a failed CoW combined with
>> powerloss, or corrupted free space cache, or some old kernel bugs.
>>
>> Anyway, the
On Sun, Sep 16, 2018 at 7:58 AM, Adrian Bastholm wrote:
> Hello all
> Actually I'm not trying to get any help any more, I gave up BTRFS on
> the desktop, but I'd like to share my efforts to fix my
> problems, in the hope that I can help some poor noob like me.
There's almost no useful
On 2018/9/16 9:58 PM, Adrian Bastholm wrote:
> Hello all
> Actually I'm not trying to get any help any more, I gave up BTRFS on
> the desktop, but I'd like to share my efforts to fix my
> problems, in the hope that I can help some poor noob like me.
>
> I decided to use BTRFS after reading the
Hi,
thanks for the report, I've forwarded it to the issue tracker
https://github.com/kdave/btrfs-progs/issues/148
The show command uses the information provided by blkid, which presumably
caches it. The default behaviour of 'fi show' is to skip mount checks,
so the delays are likely caused by
On 2018/9/14 1:52 PM, Nikolay Borisov wrote:
>
>
> On 14.09.2018 02:17, Qu Wenruo wrote:
>>
>>
>> On 2018/9/14 12:37 AM, Nikolay Borisov wrote:
>>>
>>>
>>> On 13.09.2018 19:15, Serhat Sevki Dincer wrote:
> -1 seems to be EPERM, is your device write-protected, readonly or
> something
1 - 100 of 5464 matches