3.430147] BTRFS info (device sda2): relocating block group 4778521264128 flags system|raid10
[4020286.363882] BTRFS info (device sda2): found 35 extents
[4020292.904640] BTRFS info (device sda2): 2 enospc errors during balance
# btrfs fi usage /data
WARNING: RAID56 detected, not implemented
I already tried balancing with dusage=1. That removed all of the nearly-empty
allocated space. It still doesn't allow me to do the conversion, though, as
the space then just gets reallocated.
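(For reference, that usage-filtered balance would have looked something like this, with /data being the mountpoint used above:)

# btrfs balance start -dusage=1 /data

The dusage=1 filter only rewrites data chunks that are at most 1% used, so it frees nearly-empty allocations without shuffling much data.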
On Oct 27, 2014, at 10:54 AM, Jasper Verberk wrote:
> I actually tried going to raid1 with kernel 3.10 before; the reason I
> updated to 3.17 was to see if this would fix the error.
>
> It still remained, though…
If you have btrfs-progs you could do a btrfs check without repair and see if
it reports any problems.
> Subject: Re: Problem converting data raid0 to raid1: enospc errors during
> balance
On Oct 27, 2014, at 9:56 AM, Jasper Verberk wrote:
> These are the results of a normal df:
>
> http://paste.debian.net/128932/
>
> The mountpoint is /data.
OK, so this is with the new computation in kernel 3.17 (which I think contains
a bug, counting free space twice); so now it shows avail…
> Original Message
> Subject: Re: Problem converting data raid0 to raid1: enospc errors during
> balance
> From: Chris Murphy
> To: Qu Wenruo
> Date: 27 Oct 2014 12:40
>
>> On Oct 26, 2014, at 7:40 PM, Qu Wenruo wrote:
>
> BTW what's the output of 'df' command?
Jasper,
What do you get for the conventional df command when this btrfs volume is
mounted? Thanks.
Chris Murphy
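(That would be something like the following, with /data as the mountpoint given earlier in the thread:)

# df -h /data
# btrfs filesystem df /data

Comparing the two outputs shows how much of the discrepancy comes from df's single "avail" number versus btrfs's per-profile accounting.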
On Oct 26, 2014, at 7:40 PM, Qu Wenruo wrote:
> Hi,
>
> Although I'm not completely sure, it seems that you really ran out of space.
>
> [1] Your array won't hold raid1 for 1.97T of data.
> Your array used up 1.97T of raid0 data; it takes 1.97T of raw space as raid0.
> But if converted to raid1, it will…
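(The raw-space arithmetic behind that, assuming the 1.97T figure above, is roughly:

  raid0: 1.97T of data -> 1.97T of raw disk space
  raid1: 1.97T of data -> 2 x 1.97T = 3.94T of raw disk space

since raid1 keeps two copies of every extent, the conversion only fits if the four disks together offer about 3.94T of usable space.)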
Subject: Problem converting data raid0 to raid1: enospc errors during
balance
From: Jasper Verberk
To: linux-btrfs@vger.kernel.org
Date: 24 Oct 2014 21:32
Hello,
I'm trying to change my 4-disk btrfs data from raid0 to raid1. The metadata is
already in raid1 and now I'…
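(The conversion in question would presumably have been started with something like the following; /data is the mountpoint mentioned later in the thread:)

# btrfs balance start -dconvert=raid1 /data

Only -dconvert is needed here since the metadata is already raid1; otherwise -mconvert=raid1 could be added in the same run.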
On Oct 24, 2014, at 11:49 AM, Jasper Verberk wrote:
> Did that, you can find the result in the pastebin.
>
> http://paste.debian.net/128552/
Maybe Josef or David will have some idea.
I'd still take a btrfs-image and btrfs check with current btrfs-progs.
Chris Murphy
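(With current btrfs-progs those two steps would look roughly like this, run against the unmounted filesystem; /dev/sdX stands in for any one device of the array:)

# btrfs check /dev/sdX
# btrfs-image -c9 -t4 /dev/sdX /tmp/fs.image

btrfs check is read-only unless --repair is passed, and btrfs-image captures a compressed metadata-only image that developers can inspect.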
> Subject: Re: Problem converting data raid0 to raid1: enospc errors during
> balance
> From: li...@colorremedies.com
> Date: Fri, 24 Oct 2014 11:11:21 -0600
On Oct 24, 2014, at 7:32 AM, Jasper Verberk wrote:
>
> [60104.040776] BTRFS info (device sdc): 1046 enospc errors during balance
To get more information, remount with the enospc_debug option and try to
convert again.
> root@BlackMesa:/mnt# btrfs --version
> Btrfs v3.14.1
There are…
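(The suggested remount would be something like this, with /mnt matching the prompt quoted above:)

# mount -o remount,enospc_debug /mnt

With enospc_debug set, the kernel should dump extra space-accounting detail when the next balance hits ENOSPC.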
[60102.776406] BTRFS info (device sdc): relocating block group 1103101952 flags 9
[60103.168554] BTRFS info (device sdc): relocating block group 4194304 flags 4
[60103.619552] BTRFS info (device sdc): relocating block group 0 flags 2
[60104.040776] BTRFS info (device sdc): 1046 enospc errors during balance
Marc Joliet posted on Tue, 22 Jul 2014 01:30:22 +0200 as excerpted:
> And now that the background deletion of the old snapshots is done, the file
> system ended up at:
>
> # btrfs filesystem df /run/media/marcec/MARCEC_BACKUP
> Data, single: total=219.00GiB, used=140.13GiB
> System, DUP: total=…
On Sun, 20 Jul 2014 21:44:40 +0200, Marc Joliet wrote:
[...]
> What I did:
>
> - delete the single largest file on the file system, a 12 GB VM image, along
> with all subvolumes that contained it
> - rsync it over again
[...]
I want to point out at this point, though, that doing those two steps…
Oh, and because I'm forgetful, here's the new dmesg output. The new content
(relative to dmesg4) starts at line 2513.
--
Marc Joliet
--
"People who think they know everything really annoy those of us who know we
don't" - Bjarne Stroustrup
Marc Joliet posted on Sun, 20 Jul 2014 12:22:33 +0200 as excerpted:
> On the other hand, the wiki [0] says that defragmentation (and
> balancing) is optional, and the only reason stated for doing either is
> because they "will have impact on performance".
Yes. That's what threw off the other guy
On Sun, 20 Jul 2014 12:22:33 +0200, Marc Joliet wrote:
[...]
> I'll try this and see, but I think I have more files >1GB than would account
> for this error (which comes towards the end of the balance when only a few
> chunks are left). I'll see what "find /mnt -type f -size +1G" finds :) .
No…
Chris Murphy posted on Sat, 19 Jul 2014 11:38:08 -0600 as excerpted:
> I'm not sure of the reason for the "BTRFS info (device sdg2): 2 enospc
> errors during balance" but it seems informational rather than either a
> warning or problem. I'd treat ext4->btrfs…
I'm seeing this also in the 2nd dmesg:
[ 249.893310] BTRFS error (device sdg2): free space inode generation (0) did not match free space cache generation (26286)
So you could try unmounting the volume, and doing a one-time mount with the
clear_cache mount option. Give it some time to rebuild the free space cache…
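(That one-time mount would be along these lines; the device name comes from the error message above and /mnt is a placeholder:)

# umount /mnt
# mount -o clear_cache /dev/sdg2 /mnt

clear_cache makes btrfs throw away and rebuild the free space cache, which is why the volume should be left mounted for a while afterwards.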
On Sat, Jul 19, 2014 at 11:38:08AM -0600, Chris Murphy wrote:
> [96241.882138] ata2.00: exception Emask 0x1 SAct 0x7ffe0fff SErr 0x0 action 0x6 frozen
> [96241.882139] ata2.00: Ata error. fis:0x21
> [96241.882142] ata2.00: failed command: READ FPDMA QUEUED
> [96241.882148] ata2.00: cmd 60/08:00:…
On Sat, 19 Jul 2014 22:10:51 +0200, Marc Joliet wrote:
[...]
> Another random idea: the number of errors decreased the second time I ran
> balance (from 4 to 2), I could run another full balance and see if it keeps
> decreasing.
Well, this time there were still 2 ENOSPC errors. But I can show…
Begin forwarded message:
Huh, turns out the Reply-To was to Chris Murphy, so here it is again for the
whole list.
Date: Sat, 19 Jul 2014 20:34:34 +0200
From: Marc Joliet
To: Chris Murphy
Subject: Re: ENOSPC errors during balance
On Sat, 19 Jul 2014 11:38:08 -0600, Chris Murphy wrote:
a btrfs raid10 volume. So this isn't related to your sdg2 and enospc error,
it's a different problem.
I'm not sure of the reason for the "BTRFS info (device sdg2): 2 enospc errors
during balance" but it seems informational rather than either a warning or
problem. I'd treat ext4->btrfs…
Arjen Nienhuis posted on Tue, 22 Apr 2014 23:52:43 +0200 as excerpted:
>> Did you try running another balance soft?
>
> I tried a few. The only thing that happens is that more empty blocks get
> allocated. I also tried convert=single,profiles=raid5. I ended up with a
> few empty 'single' blocks.
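(For reference, those two variants would look something like this; /mnt is a placeholder mountpoint:)

# btrfs balance start -dconvert=raid1,soft /mnt
# btrfs balance start -dconvert=single,profiles=raid5 /mnt

The soft filter skips chunks that already have the target profile, and profiles=raid5 restricts the balance to chunks still carrying the raid5 profile.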
[...] btrfs: relocating block group 13338255818752 flags 129 ...
[ 409.184588] btrfs: relocating block group 11607454253056 flags 129
[ 410.528075] btrfs: relocating block group 11605306769408 flags 129
[ 411.957809] btrfs: 62 enospc errors during balance
Extra info:
Data, RAID1: total=3.26TiB, used=3.25TiB
Data, RAID5: total=124.00GiB, used=123.90GiB
System, RAID1: total=32.00MiB, used=508.00KiB
System, single: total=4.00MiB, used=0.00
Metadata, …
Hugo Mills posted on Fri, 15 Nov 2013 12:38:41 +0000 as excerpted:
>> I also wonder: Would btrfs try to write _two_ copies of everything to
>> _one_ remaining device of a degraded two-disk raid1?
>
> No. It would have to degrade from RAID-1 to DUP to do that (and I
> think we prevent DUP data for…
missing device re-appears (think of e.g. a storage device that temporarily
became unavailable due to a cable or controller issue that is later fixed).
User issues "btrfs filesystem balance".
Alas, this scenario ends in an error "btrfs: 1 enospc errors during balance",
with the raid1 staying degraded.
Here's the test procedure in detail:
Testing was done using v…
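(The snippet cuts off here; a reproduction of the scenario it describes might be sketched with loop devices along these lines, all paths being placeholders:)

# truncate -s 3G /tmp/d1.img /tmp/d2.img
# DEV1=$(losetup -f --show /tmp/d1.img)
# DEV2=$(losetup -f --show /tmp/d2.img)
# mkfs.btrfs -d raid1 -m raid1 $DEV1 $DEV2
# mount $DEV1 /mnt

Simulate the outage: unmount, detach one device, remount degraded, and write:

# umount /mnt
# losetup -d $DEV2
# mount -o degraded $DEV1 /mnt
# dd if=/dev/zero of=/mnt/file bs=1M count=512

Then let the device re-appear and try to re-sync:

# umount /mnt
# DEV2=$(losetup -f --show /tmp/d2.img)
# btrfs device scan
# mount $DEV1 /mnt
# btrfs filesystem balance /mnt

The final balance is the step that reportedly fails with "btrfs: 1 enospc errors during balance".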
Hello,
With kernel 3.7.10 patched with "Btrfs: limit the global reserve to 512mb"
(the problem was occurring also without this patch, but seemed to be even worse).
At the start of balance:
Data: total=31.85GB, used=9.96GB
System: total=4.00MB, used=16.00KB
Metadata: total=1.01GB, used=696.17MB
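(Given that roughly 22GB of data chunks are allocated but empty at that point, a usage-filtered balance is one way to return them to unallocated space; /mnt is a placeholder mountpoint:)

# btrfs balance start -dusage=10 /mnt

Data chunks no more than 10% used get rewritten and packed together, and their freed allocations become available again.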