On Thu, 13 Aug 2015 00:34:19 +0200,
Marc Joliet wrote:
[...]
> Since this is the root file system, I haven't gotten a copy of the actual
> output
> of "btrfs check", though I have run it from an initramfs rescue shell. The
> output I saw there was much like the following (taken from an Email b
I'm getting the following trace on a daily basis when stacking a lot of
cp --reflink commands.
Something like the following, where file a is 80 GB:
cp --reflink=always a b
modify b
cp --reflink=always b c
modify c
cp --reflink=always c d
modify d
...
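A loop form of that reproducer, as a minimal sketch (the file names and the
dd-based "modify" step are illustrative, not the original workload):
prev=a
for cur in b c d e f g; do
	cp --reflink=always "$prev" "$cur"
	# "modify": rewrite 1 MiB somewhere in the copy to force CoW
	dd if=/dev/urandom of="$cur" bs=1M count=1 \
	   seek=$((RANDOM % 1024)) conv=notrunc
	prev=$cur
done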
[57623.099897] INFO: task cp:1319 blocked for more than 120 seconds
root@toy02:~# df -T /data
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/sdb btrfs 3906909856 140031696 3765056176 4% /data
root@toy02:~# btrfs filesystem show /data
Label: data uuid: 411af13f-6cae-4f03-99dc-5941acb3135b
Total devices 2 FS bytes use
Marc Joliet posted on Thu, 13 Aug 2015 09:05:41 +0200 as excerpted:
> Here's the actual output now, obtained via btrfs-progs 4.0.1 from an
> initramfs emergency shell:
>
> checking extents
> checking free space cache
> checking fs roots
> root 5 inode 8338813 errors 2000, link count wrong
> u
Seen today:
[150110.712196] ------------[ cut here ]------------
[150110.776995] kernel BUG at fs/btrfs/inode.c:3230!
[150110.904472] invalid opcode: 0000 [#1] SMP
[150110.904472] Modules linked in: dm_mod netconsole ipt_REJECT
nf_reject_ipv4 xt_multiport iptable_filter ip_tables x_tables
cpufreq
Btrfs has a problem when defragmenting a file that has a large fragmented
range: it leaves the tail extent as a separate extent instead of merging it
with the previous extents.
This makes generic/018 recognize the above regression.
Meanwhile, I find that in the case of 'write backwards sync but contig
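A quick way to check whether the tail extent gets merged, assuming xfs_io and
filefrag are available (path and sizes are illustrative; generic/018 does
essentially this with more cases):
f=/mnt/btrfs/frag
# 16 backwards O_SYNC writes leave the file heavily fragmented
for i in $(seq 15 -1 0); do
	xfs_io -f -s -c "pwrite $((i * 65536)) 65536" "$f" >/dev/null
done
filefrag -v "$f"                  # many extents expected
btrfs filesystem defragment "$f" && sync
filefrag -v "$f"                  # buggy kernels keep a separate tail extent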
On Thu, 13 Aug 2015 08:29:19 +0000 (UTC),
Duncan <1i5t5.dun...@cox.net> wrote:
> Marc Joliet posted on Thu, 13 Aug 2015 09:05:41 +0200 as excerpted:
>
> > Here's the actual output now, obtained via btrfs-progs 4.0.1 from an
> > initramfs emergency shell:
> >
> > checking extents checking free s
On Thu, Aug 13, 2015 at 9:47 AM, Liu Bo wrote:
> Btrfs has a problem when defragmenting a file that has a large fragmented
> range: it leaves the tail extent as a separate extent instead of merging it
> with the previous extents.
>
> This makes generic/018 recognize the above regression.
>
> Meanwhile,
On Sun, 21 Jun 2015 07:21:03 +0000,
Paul Jones wrote:
> > -----Original Message-----
> > From: Lutz Euler [mailto:lutz.eu...@freenet.de]
> > Sent: Sunday, 21 June 2015 12:11 AM
> > To: Christian; Paul Jones; Austin S Hemmelgarn
> > Cc: linux-btrfs@vger.kernel.org
> > Subject: RE: trim not workin
On Thu, Aug 13, 2015 at 10:43 AM, Filipe David Manana
wrote:
> On Thu, Aug 13, 2015 at 9:47 AM, Liu Bo wrote:
>> Btrfs has a problem when defragmenting a file that has a large fragmented
>> range: it leaves the tail extent as a separate extent instead of merging it
>> with the previous extents.
>>
On Thu, Aug 13, 2015 at 01:33:22PM +1000, David Seikel wrote:
> I don't actually think that this is a BTRFS problem, but it's showing
> symptoms within BTRFS, and I have no other clues, so maybe the BTRFS
> experts can help me figure out what is actually going wrong.
>
> I'm a sysadmin working for
On 2015-08-12 15:30, Chris Murphy wrote:
> On Wed, Aug 12, 2015 at 12:44 PM, Konstantin Svist wrote:
>> On 08/06/2015 04:10 AM, Austin S Hemmelgarn wrote:
>>> On 2015-08-05 17:45, Konstantin Svist wrote:
>>>> Hi,
>>>> I've been running btrfs on Fedora for a while now, with bedup --defrag
>>>> running in a night-tim
A couple of observations:
1. BTRFS currently has no knowledge of multipath or anything like that.
In theory it should work fine as long as the multiple device instances
all point to the same storage directly (including having identical block
addresses), but we still need to add proper handling
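To make that observation concrete: with two paths to the same disk, both
device nodes carry the same filesystem UUID, and the kernel keeps whichever
path it scanned last for that devid. A sketch (the output lines are
illustrative; the UUID is the one from the report above):
blkid /dev/sdb /dev/sde
# /dev/sdb: UUID="411af13f-6cae-4f03-99dc-5941acb3135b" TYPE="btrfs"
# /dev/sde: UUID="411af13f-6cae-4f03-99dc-5941acb3135b" TYPE="btrfs"
btrfs device scan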
On Wed 2015-08-12 (11:03), Chris Murphy wrote:
> On Wed, Aug 12, 2015 at 7:07 AM, Ulli Horlacher
> wrote:
>
> > /dev/sdb and /dev/sde are in reality the same physical disk!
>
> When does all of this confusion happen? Is it already confused before
> mkfs or only after mkfs or only after mount?
I
On Thu 2015-08-13 (15:34), anand jain wrote:
> > root@toy02:~# df -T /data
> > Filesystem Type 1K-blocks Used Available Use% Mounted on
> > /dev/sdb btrfs 3906909856 140031696 3765056176 4% /data
> >
> > root@toy02:~# btrfs filesystem show /data
> > Label: data uuid: 411af13f-6
On Thu 2015-08-13 (07:44), Austin S Hemmelgarn wrote:
> 2. Be _VERY_ careful using BTRFS on top of _ANY_ kind of shared storage.
> Most non-clustered filesystems will have issues if multiply mounted,
> but in almost all cases I've personally seen, it _WILL_ cause
> irreparable damage to a BTR
Hi Filipe,
Any reason to not run here fsstress like the test from patch 2? Doing
the device delete with a non-empty fs is a lot more interesting and
can help find bugs and regressions in the future.
You are reviewing v2. fsstress was added in v4.
Also this test needs the latest btrfs-p
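For reference, the v4 shape of the test is roughly the following sketch
(standard xfstests helper and variable names; the exact patch isn't quoted
here):
_scratch_pool_mkfs >> $seqres.full 2>&1
_scratch_mount
# populate the fs so that device delete has real extents to relocate
$FSSTRESS_PROG -d $SCRATCH_MNT/stress -n 1000 -p 4 >> $seqres.full 2>&1
# remove one device from the now non-empty pool
$BTRFS_UTIL_PROG device delete $SCRATCH_DEV $SCRATCH_MNT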
+# this test requires the device mapper error target
+#
+_require_dmerror()
+{
+	_require_command "$DMSETUP_PROG" dmsetup
+
+	$DMSETUP_PROG targets | grep error >/dev/null 2>&1
+	if [ $? -eq 0 ]
+	then
+		:
+	else
+		_notrun "This test requires dm error support"
+	fi
+}
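For context, a test using it would gate itself and then build the error
target roughly like so (the target name and the all-error table are
illustrative):
_require_dmerror

# replace the scratch device with a target that fails all I/O
size=$(blockdev --getsz $SCRATCH_DEV)
$DMSETUP_PROG create error-test --table "0 $size error"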
So the test always fails due to a mismatch with this expected golden output:
-Label: none uuid:
+Label: none uuid:
Total devices FS bytes used
devid size used path SCRATCH_DEV
devid size used path /dev/mapper/error-test
The extra space after "uuid:
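The usual fix on the xfstests side is to normalize the volatile fields before
comparing against the golden output; a sketch of such a filter (the regexes
are illustrative, not the actual patch):
_filter_btrfs_fi_show()
{
	sed -e "s/uuid: *[0-9a-f-]*/uuid: <UUID>/" \
	    -e "s,/dev/mapper/error-test,SCRATCH_DEV," \
	    -e "s/ *$//"
}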
On Thu 2015-08-13 (14:02), Ulli Horlacher wrote:
> On Thu 2015-08-13 (15:34), anand jain wrote:
>
> > > root@toy02:~# df -T /data
> > > Filesystem Type 1K-blocks Used Available Use% Mounted on
> > > /dev/sdb btrfs 3906909856 140031696 3765056176 4% /data
> > >
> > > root@toy02:
On 08/13/2015 10:55 PM, Ulli Horlacher wrote:
> On Thu 2015-08-13 (14:02), Ulli Horlacher wrote:
>> On Thu 2015-08-13 (15:34), anand jain wrote:
>>> root@toy02:~# df -T /data
>>> Filesystem Type 1K-blocks Used Available Use% Mounted on
>>> /dev/sdb btrfs 3906909856 140031696 3765056176 4%
Hi,
I use qgroups for subvolumes in Rockstor and have been noticing this
behavior for a while (at least since the 3.18 days). The behavior is that I
get "Disk quota exceeded" errors before even hitting 70% usage. Here's
a simple demonstration of the problem.
[root@rock-dev ~]# btrfs fi show singlepool
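A generic sequence that reproduces this class of report, as a sketch (the
device, mount point, and sizes are hypothetical):
mkfs.btrfs -f /dev/sdc && mount /dev/sdc /mnt/pool
btrfs quota enable /mnt/pool
btrfs subvolume create /mnt/pool/share1
btrfs qgroup limit 1G /mnt/pool/share1
# writes start failing with EDQUOT well before the 1 GiB limit
dd if=/dev/zero of=/mnt/pool/share1/f bs=1M count=700
btrfs qgroup show -re /mnt/pool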
On Thu, Aug 13, 2015 at 11:22 AM, Suman Chakravartula
wrote:
> Hi,
>
> I use qgroups for subvolumes in Rockstor and have been noticing this
> behavior for a while (at least since the 3.18 days). The behavior is that I
> get "Disk quota exceeded" errors before even hitting 70% usage. Here's
> a simple de
On Thu, Aug 13, 2015 at 9:44 PM, Austin S Hemmelgarn
wrote:
> 3. See the warnings about doing block level copies and LVM snapshots of
> BTRFS volumes, the same applies to using it on DRBD currently as well (with
> the possible exception of remote DRBD nodes (ie, ones without a local copy
> of the
Hi,
I think I might be having this problem too. 12 x 4TB RAID10 (original mkfs,
not converted from ext or whatnot). Says it has ~6TiB left. CentOS 7. Dual Xeon
CPU. 32GB RAM. ELRepo kernel 4.1.5. fstab options:
noatime,autodefrag,compress=zlib,space_cache,nossd,noauto,x-systemd.automount
Som
On Fri, Aug 14, 2015 at 08:32:46AM +1000, Gareth Pye wrote:
> On Thu, Aug 13, 2015 at 9:44 PM, Austin S Hemmelgarn
> wrote:
> > 3. See the warnings about doing block level copies and LVM snapshots of
> > BTRFS volumes, the same applies to using it on DRBD currently as well (with
> > the possible e
Hi, I was looking at qgroups in Linux 4.2 and noticed that the code to handle
subvolume deletion was removed and replaced with a comment:
/*
 * TODO: Modify related function to add related node/leaf to
 * dirty_extent_root, for later qgroup accounting.
 *
 * Currently, this function does nothing.
 */
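Until that TODO is implemented, qgroup numbers simply go stale when a
subvolume is deleted; the workaround is an explicit rescan, sketched here
(paths are illustrative):
btrfs subvolume delete /mnt/pool/share1
btrfs qgroup show /mnt/pool      # stale numbers linger for the deleted subvolume
btrfs quota rescan -w /mnt/pool  # -w: wait for the rescan to finish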
On Thu, Aug 13, 2015 at 3:23 AM, Marc Joliet wrote:
> Speaking as a user, since "fstrim -av" still always outputs 0 bytes trimmed
> on my system: what's the status of this? Did anybody ever file a bug report?
Since I'm not having this problem with my SSD, I'm not in a position
to provide any me
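One quick check before filing a report is whether the whole stack advertises
discard support at all; if any layer shows zero granularity, fstrim above it
will always report 0 bytes:
lsblk --discard
# DISC-GRAN and DISC-MAX of 0 on the SSD, or on an intermediate
# dm-crypt/LVM layer, mean discards never reach the device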
On Thu, Aug 13, 2015 at 4:38 PM, Vincent Olivier wrote:
> Hi,
>
> I think I might be having this problem too. 12 x 4TB RAID10 (original mkfs,
> not converted from ext or whatnot). Says it has ~6TiB left. CentOS 7. Dual
> Xeon CPU. 32GB RAM. ELRepo kernel 4.1.5. fstab options:
> noatime,autode
> noatime,autode
On Thu, 13 Aug 2015 09:55:10 +0000, Hugo Mills
wrote:
> On Thu, Aug 13, 2015 at 01:33:22PM +1000, David Seikel wrote:
> > I don't actually think that this is a BTRFS problem, but it's
> > showing symptoms within BTRFS, and I have no other clues, so maybe
> > the BTRFS experts can help me figure ou
I would have been surprised if any generic file system coped well with
being mounted in several locations at once; DRBD appears to fight
really hard to avoid that happening :)
And yeah, I'm doing the second thing; I've successfully switched which
of the servers is active a few times with no ill eff
Below is the result of testing a corrupted filesystem. What's going on here?
The kernel message log and the btrfs output don't tell me how many errors
there were. Also the data is RAID-0 (the default for a filesystem created with
2 devices) so if this was in a data area it should have lost da
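For per-device error counts that are easier to read than the kernel log,
these are worth running (the mount point is illustrative):
btrfs device stats /mnt        # persistent read/write/corruption counters
btrfs scrub start -B /mnt      # -B: foreground, prints a summary with error totals
dmesg | grep -i 'btrfs'        # csum failure lines name the affected device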
Chris Murphy posted on Thu, 13 Aug 2015 17:19:41 -0600 as excerpted:
> Well I think others have suggested 3000 snapshots and quite a few things
> will get very slow. But then also you have autodefrag and I forget the
> interaction of this with many snapshots since the snapshot aware defrag
> code
I have 2 snapshots a few days apart for incrementally backing up the volume but
that's it.
I'll try without autodefrag tomorrow.
Vincent
-----Original Message-----
From: "Chris Murphy"
Sent: Thursday, August 13, 2015 19:19
To: "Btrfs BTRFS"
Subject: Re: mount btrfs takes 30 minutes, btrfs che
I'll try without autodefrag anyways tomorrow just to make sure.
And then file a bug report too with however it decides to behave.
Vincent
-----Original Message-----
From: "Duncan" <1i5t5.dun...@cox.net>
Sent: Thursday, August 13, 2015 20:30
To: linux-btrfs@vger.kernel.org
Subject: Re: mount btrf