On Oct 26, 2014, at 7:40 PM, Qu Wenruo quwen...@cn.fujitsu.com wrote:
BTW what's the output of 'df' command?
Jasper,
What do you get for the conventional df command when this btrfs volume is
mounted? Thanks.
Chris Murphy
Original Message
Subject: Re: Problem converting data raid0 to raid1: enospc errors
during balance
From: Chris Murphy li...@colorremedies.com
To: Qu Wenruo quwen...@cn.fujitsu.com
Date: Mon, 27 Oct 2014 12:40
On Oct 27, 2014, at 9:56 AM, Jasper Verberk jverb...@hotmail.com wrote:
These are the results of a normal df:
http://paste.debian.net/128932/
The mountpoint is /data.
OK, so this is with the new computation in kernel 3.17 (which I think
contains a bug, counting free space twice); so [...]
Original Message
Subject: Problem converting data raid0 to raid1: enospc errors during
balance
From: Jasper Verberk jverb...@hotmail.com
To: linux-btrfs@vger.kernel.org linux-btrfs@vger.kernel.org
Date: Fri, 24 Oct 2014 21:32
Hello,
I'm trying to change my 4-disk btrfs data from raid0 to raid1. The metadata is
already in raid1, and now I'm [...]
On Oct 26, 2014, at 7:40 PM, Qu Wenruo quwen...@cn.fujitsu.com wrote:
Hi,
Although I'm not completely sure, it seems that you really ran out of
space.
[1] Your array won't hold raid1 for 1.97T of data.
Your array has used up 1.97T of raid0 data, which takes 1.97T of raw space
as raid0. But if converted to [...]
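Qu's arithmetic can be sketched numerically: btrfs raid1 keeps two copies of every chunk on two different devices, so 1.97T of data needs roughly 3.94T of raw space. A rough estimate of raid1-usable capacity (ignoring metadata/system chunk overhead and chunk granularity; the device sizes below are hypothetical, since the excerpt doesn't give Jasper's actual disk sizes):

```python
def raid1_usable(device_sizes):
    """Rough usable capacity for btrfs raid1: every chunk is mirrored
    on two different devices, so capacity is capped both by half the
    raw total and by how much the largest device can pair with the
    rest of the array."""
    total = sum(device_sizes)
    largest = max(device_sizes)
    rest = total - largest
    if largest >= rest:
        return rest        # the smaller devices are the bottleneck
    return total // 2

# Four hypothetical 1 TB devices: 4 TB raw -> ~2 TB usable as raid1,
# so 1.97 TB of data only just fits, with essentially no headroom
# left for balance to relocate chunks into.
print(raid1_usable([1000, 1000, 1000, 1000]))  # -> 2000
```

With the array that close to the raid1 limit, ENOSPC during the conversion balance is the expected outcome.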
On Oct 26, 2014, at 7:40 PM, Qu Wenruo quwen...@cn.fujitsu.com wrote:
[...] BTRFS info (device sdc): relocating block group 1103101952 flags 9
[60103.168554] BTRFS info (device sdc): relocating block group 4194304 flags 4
[60103.619552] BTRFS info (device sdc): relocating block group 0 flags 2
[60104.040776] BTRFS info (device sdc): 1046 enospc errors during balance
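The "flags" values in these relocation messages are bitmasks built from the block-group type bits in the kernel's btrfs on-disk format headers (the BTRFS_BLOCK_GROUP_* constants). A small decoder sketch makes the log lines above readable:

```python
# Block-group flag bits as defined in the btrfs on-disk format
# (BTRFS_BLOCK_GROUP_* in the kernel sources).
FLAGS = {
    1 << 0: "DATA",
    1 << 1: "SYSTEM",
    1 << 2: "METADATA",
    1 << 3: "RAID0",
    1 << 4: "RAID1",
    1 << 5: "DUP",
    1 << 6: "RAID10",
    1 << 7: "RAID5",
    1 << 8: "RAID6",
}

def decode_flags(value):
    """Turn a numeric block-group flags value into a readable name."""
    names = [name for bit, name in FLAGS.items() if value & bit]
    return "|".join(names) or "single"

print(decode_flags(9))   # -> DATA|RAID0 (the raid0 data being converted)
print(decode_flags(4))   # -> METADATA
print(decode_flags(2))   # -> SYSTEM
```

So flags 9 in the log above is exactly the raid0 data being converted, while flags 4 and 2 are metadata and system block groups.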
On Oct 24, 2014, at 7:32 AM, Jasper Verberk jverb...@hotmail.com wrote:
[60104.040776] BTRFS info (device sdc): 1046 enospc errors during balance
To get more information, remount with enospc_debug option, and try to convert
again.
root@BlackMesa:/mnt# btrfs --version
Btrfs v3.14.1
Did that, you can find the result in the pastebin.
http://paste.debian.net/128552/
Subject: Re: Problem converting data raid0 to raid1: enospc errors during
balance
From: li...@colorremedies.com
Date: Fri, 24 Oct 2014 11:11:21 -0600
To: linux-btrfs
On Oct 24, 2014, at 11:49 AM, Jasper Verberk jverb...@hotmail.com wrote:
Did that, you can find the result in the pastebin.
http://paste.debian.net/128552/
Maybe Josef or David will have some idea.
I'd still take a btrfs-image and btrfs check with current btrfs-progs.
Chris Murphy
On Tue, 22 Jul 2014 03:26:39 +0000 (UTC), Duncan 1i5t5.dun...@cox.net wrote:
Marc Joliet posted on Tue, 22 Jul 2014 01:30:22 +0200 as excerpted:
And now that the background deletion of the old snapshots is done, the file
system ended up at:
# btrfs filesystem df
On 20/07/14 14:59, Duncan wrote:
Marc Joliet posted on Sun, 20 Jul 2014 12:22:33 +0200 as excerpted:
On the other hand, the wiki [0] says that defragmentation (and
balancing) is optional, and the only reason stated for doing either is
because they will have an impact on performance.
Yes. That's what threw off the other guy as [...]
On Sun, 20 Jul 2014 21:44:40 +0200, Marc Joliet mar...@gmx.de wrote:
[...]
What I did:
- delete the single largest file on the file system, a 12 GB VM image, along
with all subvolumes that contained it
- rsync it over again
[...]
I do want to point out, though, that doing [...]
Marc Joliet posted on Tue, 22 Jul 2014 01:30:22 +0200 as excerpted:
And now that the background deletion of the old snapshots is done, the file
system ended up at:
# btrfs filesystem df /run/media/marcec/MARCEC_BACKUP
Data, single: total=219.00GiB, used=140.13GiB
System, DUP: [...]
On Sat, 19 Jul 2014 19:11:00 -0600, Chris Murphy li...@colorremedies.com wrote:
I'm seeing this also in the 2nd dmesg:
[ 249.893310] BTRFS error (device sdg2): free space inode generation (0) did not match free space cache generation (26286)
So you could try umounting the volume. And [...]
On Sat, 19 Jul 2014 18:53:03 -0600, Chris Murphy li...@colorremedies.com wrote:
On Jul 19, 2014, at 2:58 PM, Marc Joliet mar...@gmx.de wrote:
On Sat, 19 Jul 2014 22:10:51 +0200, Marc Joliet mar...@gmx.de wrote:
[...]
Another random idea: the number of errors decreased the [...]
On Sun, 20 Jul 2014 02:39:27 +0000 (UTC), Duncan 1i5t5.dun...@cox.net wrote:
Chris Murphy posted on Sat, 19 Jul 2014 11:38:08 -0600 as excerpted:
I'm not sure of the reason for the "BTRFS info (device sdg2): 2 enospc
errors during balance" but it seems informational rather than either a
warning or a problem. I'd treat ext4-btrfs converted file systems to be
something [...]
On Sun, 20 Jul 2014 12:22:33 +0200, Marc Joliet mar...@gmx.de wrote:
[...]
I'll try this and see, but I think I have more files >1GB than would account
for this error (which comes towards the end of the balance, when only a few
chunks are left). I'll see what find /mnt -type f -size +1G finds.
Marc Joliet posted on Sun, 20 Jul 2014 12:22:33 +0200 as excerpted:
On the other hand, the wiki [0] says that defragmentation (and
balancing) is optional, and the only reason stated for doing either is
because they will have impact on performance.
Yes. That's what threw off the other guy as
[...] volume. So this isn't related to your sdg2 and enospc error;
it's a different problem.
I'm not sure of the reason for the "BTRFS info (device sdg2): 2 enospc errors
during balance" but it seems informational rather than either a warning or a
problem. I'd treat ext4-btrfs converted file systems [...]
Begin forwarded message:
Huh, turns out the Reply-To was to Chris Murphy, so here it is again for the
whole list.
Date: Sat, 19 Jul 2014 20:34:34 +0200
From: Marc Joliet mar...@gmx.de
To: Chris Murphy li...@colorremedies.com
Subject: Re: ENOSPC errors during balance
On Sat, 19 Jul [...]
On Sat, 19 Jul 2014 22:10:51 +0200, Marc Joliet mar...@gmx.de wrote:
[...]
Another random idea: the number of errors decreased the second time I ran
balance (from 4 to 2), I could run another full balance and see if it keeps
decreasing.
Well, this time there were still 2 ENOSPC errors. But [...]
On Sat, Jul 19, 2014 at 11:38:08AM -0600, Chris Murphy wrote:
[96241.882138] ata2.00: exception Emask 0x1 SAct 0x7ffe0fff SErr 0x0 action 0x6 frozen
[96241.882139] ata2.00: ATA error. fis:0x21
[96241.882142] ata2.00: failed command: READ FPDMA QUEUED
[96241.882148] ata2.00: cmd [...]
[...] btrfs: relocating block group 11607454253056 flags 129
[ 410.528075] btrfs: relocating block group 11605306769408 flags 129
[ 411.957809] btrfs: 62 enospc errors during balance
Extra info:
Data, RAID1: total=3.26TiB, used=3.25TiB
Data, RAID5: total=124.00GiB, used=123.90GiB
System, RAID1: total=32.00MiB, used=508.00KiB
System, single: total=4.00MiB, used=0.00
Metadata, RAID1: total=9.00GiB, used=7.97GiB
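The "Extra info" listing above can be read mechanically: for each (kind, profile) line, total is the allocated chunk space and used is what is actually filled, and when used is nearly equal to total across the board, balance has almost no slack to relocate chunks into. A parsing sketch, assuming output in exactly this format (note that older btrfs-progs printed "used=0.00" with no unit for empty chunks, which this simple pattern would skip):

```python
import re

# Sample lines in the btrfs filesystem df format quoted above.
SAMPLE = """\
Data, RAID1: total=3.26TiB, used=3.25TiB
Data, RAID5: total=124.00GiB, used=123.90GiB
Metadata, RAID1: total=9.00GiB, used=7.97GiB
"""

UNIT = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30, "TiB": 2**40}

def parse(text):
    """Return {(kind, profile): (total_bytes, used_bytes)}."""
    pat = re.compile(
        r"(\w+), (\w+): total=([\d.]+)(\wiB), used=([\d.]+)(\wiB)")
    out = {}
    for kind, prof, t, tu, u, uu in pat.findall(text):
        out[(kind, prof)] = (float(t) * UNIT[tu], float(u) * UNIT[uu])
    return out

# Show how little slack each allocation has left.
for key, (total, used) in parse(SAMPLE).items():
    print(key, f"free={(total - used) / 2**30:.2f}GiB")
```

Run against the numbers above, every profile is within a few GiB of full, which matches the 62 ENOSPC errors the balance reported.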
Arjen Nienhuis posted on Tue, 22 Apr 2014 23:52:43 +0200 as excerpted:
Did you try running another balance soft?
I tried a few. The only thing that happens is that more empty blocks get
allocated. I also tried convert=single,profiles=raid5. I ended up with a
few empty 'single' blocks. (The [...]
[...] due to a cable or controller issue that is later fixed). User issues
btrfs filesystem balance. Alas, this scenario ends in an error, "btrfs: 1
enospc errors during balance", with the raid1 staying degraded.
Here's the test procedure in detail:
Testing was done using vanilla linux-3.12 (x86_64) plus [...]
Hugo Mills posted on Fri, 15 Nov 2013 12:38:41 + as excerpted:
I also wonder: Would btrfs try to write _two_ copies of everything to
_one_ remaining device of a degraded two-disk raid1?
No. It would have to degrade from RAID-1 to DUP to do that (and I
think we prevent DUP data for some [...]
Hello,
With kernel 3.7.10 patched with "Btrfs: limit the global reserve to 512mb"
(the problem was occurring also without this patch, but seemed to be even worse).
At the start of balance:
Data: total=31.85GB, used=9.96GB
System: total=4.00MB, used=16.00KB
Metadata: total=1.01GB, used=696.17MB