Hello,
for around two or three years now I've been using btrfs for incremental VM backups.
some data:
- volume size 60TB
- around 2000 subvolumes
- each differential backup stacks on top of a subvolume
- compress-force=zstd
- space_cache=v2
- no quota / qgroup
this works fine since Kernel 4.14 except
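The stacked differential backups described above are commonly built from read-only snapshots plus incremental btrfs send/receive. A minimal dry-run sketch of that pattern (all paths, names and the DRY_RUN default are illustrative assumptions, not the setup from this thread):

```shell
#!/bin/sh
# Sketch: stacked differential backups via read-only snapshots plus
# incremental send. Paths and the dry-run default are assumptions.
DRY_RUN=${DRY_RUN:-1}         # default: print commands, don't run them
SRC=/vmbackup/vm-100          # subvolume holding one VM image
SNAPDIR=/vmbackup/.snapshots  # read-only snapshots accumulate here
TODAY=$(date +%Y-%m-%d)

run() {
    # Execute the given command, or just print it in dry-run mode.
    if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi
}

run btrfs subvolume snapshot -r "$SRC" "$SNAPDIR/$TODAY"

# Pick the newest snapshot other than today's as the parent; if none
# exists yet, a full (non-incremental) send is needed.
PARENT=$(ls "$SNAPDIR" 2>/dev/null | grep -v "^$TODAY\$" | sort | tail -1)
if [ -n "$PARENT" ]; then
    run btrfs send -p "$SNAPDIR/$PARENT" "$SNAPDIR/$TODAY"  # | btrfs receive ...
else
    run btrfs send "$SNAPDIR/$TODAY"                        # | btrfs receive ...
fi
```

Each snapshot only references the blocks that changed since its parent, which is what lets ~2000 subvolumes stack on a 60TB volume.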
On 14.11.2017 18:45, Andrei Borzenkov wrote:
> On 14.11.2017 12:56, Stefan Priebe - Profihost AG wrote:
>> Hello,
>>
>> after a controller firmware bug / failure I have a broken btrfs.
>>
>> # parent transid verify failed on 181846016 wanted 143404 found 143399
Hello,
after a controller firmware bug / failure I have a broken btrfs.
# parent transid verify failed on 181846016 wanted 143404 found 143399
running repair, fsck or zero-log always results in the same failure message:
extent-tree.c:2725: alloc_reserved_tree_block: BUG_ON `ret` triggered,
value
Hello,
after a power failure i have a btrfs volume which isn't mountable.
dmesg shows:
parent transid verify failed on 181846016 wanted 143404 found 143399
If i run:
btrfs check --repair /dev/mapper/crypt_md1
The output is:
parent transid verify failed on 181846016 wanted 143404 found 143399
On 05.09.2017 07:58, Stefan Priebe - Profihost AG wrote:
> Hello,
>
> while experiencing slow btrfs volumes I switched to kernel v4.13 and to
> space_cache=v2.
...
>
> Is btrfs trying too hard to find free space?
Even though nobody replied, I'll reply to myself. I could completely
Hello,
On 04.09.2017 20:32, Stefan Priebe - Profihost AG wrote:
> On 04.09.2017 15:28, Timofey Titovets wrote:
>> 2017-09-04 15:57 GMT+03:00 Stefan Priebe - Profihost AG
>> <s.pri...@profihost.ag>:
>>> On 04.09.2017 12:53, Henk Slager wrote:
Hello,
while experiencing slow btrfs volumes I switched to kernel v4.13 and to
space_cache=v2.
But I'm still seeing slow performance and single kworker processes
using 100% CPU.
Tracing the kworker process shows me:
# sed 's/.*: //' /trace | sort | uniq -c | sort -n
21595
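The aggregation above can be reproduced on any function-trace dump: the pipeline strips the timestamp/PID prefix and counts how often each traced call appears. A self-contained sketch with a tiny hard-coded sample (the sample lines are made up for illustration; on a real system the input would be the tracefs trace file):

```shell
#!/bin/sh
# Count which kernel functions dominate a function-trace dump.
# A made-up three-line sample stands in for the real trace file.
cat > /tmp/trace.sample <<'EOF'
kworker/u24:4-13405 [003] 344186.202623: rb_next <-find_free_extent
kworker/u24:4-13405 [003] 344186.202624: rb_next <-find_free_extent
kworker/u24:4-13405 [003] 344186.202625: up_read <-find_free_extent
EOF

# Strip everything up to the last ": " (task/CPU/timestamp prefix),
# then count each "function <-caller" pair, most frequent last.
sed 's/.*: //' /tmp/trace.sample | sort | uniq -c | sort -n
# prints counts like: "2 rb_next <-find_free_extent" as the last line
```

The most frequent entries at the bottom of the output point at the hot path, here find_free_extent's rb-tree walk.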
On 04.09.2017 15:28, Timofey Titovets wrote:
> 2017-09-04 15:57 GMT+03:00 Stefan Priebe - Profihost AG
> <s.pri...@profihost.ag>:
>> On 04.09.2017 12:53, Henk Slager wrote:
>>> On Sun, Sep 3, 2017 at 8:32 PM, Stefan Priebe - Profihost AG
>>> <s
On 04.09.2017 12:53, Henk Slager wrote:
> On Sun, Sep 3, 2017 at 8:32 PM, Stefan Priebe - Profihost AG
> <s.pri...@profihost.ag> wrote:
>> Hello,
>>
>> i'm trying to speed up big btrfs volumes.
>>
>> Some facts:
>> - Kernel will be 4.13-rc7
Hello,
i'm trying to speed up big btrfs volumes.
Some facts:
- Kernel will be 4.13-rc7
- needed volume size is 60TB
Currently, without any SSDs, I get the best speed with:
- 4x HW RAID 5 (1GB controller memory) of 4TB 3.5" devices
and using btrfs as RAID 0 for data and metadata on top of
kworker/u24:4-13405 [003] 344186.202623: __get_raid_index
<-find_free_extent
kworker/u24:4-13405 [003] 344186.202623: up_read <-find_free_extent
kworker/u24:4-13405 [003] 344186.202623: btrfs_put_block_group
<-find_free_extent
kworker/u24:4-13405 [
Hello,
this still happens with space_cache v2. I don't think it is
space_cache-related, is it?
Stefan
On 17.08.2017 09:43, Stefan Priebe - Profihost AG wrote:
> while mounting the device the dmesg is full of:
> [ 1320.325147] [] ? kthread_park+0x60/0x60
> [ 1440.330008] INFO: task btrfs-
transaction_kthread+0x1d5/0x240 [btrfs]
[ 1680.341258] [] kthread+0xeb/0x110
[ 1680.341262] [] ret_from_fork+0x3f/0x70
[ 1680.343062] DWARF2 unwinder stuck at ret_from_fork+0x3f/0x70
Stefan
On 17.08.2017 07:47, Stefan Priebe - Profihost AG wrote:
> i've backported the free space cache tree to
(new chunk + new parity)
>
> 2. The maximum compressed write (128k) would require the update of 1 chunk on
> each of the 4 data disks + 1 parity write
>
>
>
> Stefan what mount flags do you use?
>
> kos
>
>
>
> - Original Message -
> From: "Roman Mamedov" <r...@romanrm.net>
>
>
> Stefan what mount flags do you use?
noatime,compress-force=zlib,noacl,space_cache,skip_balance,subvolid=5,subvol=/
Greets,
Stefan
> kos
>
>
>
> - Original Message -
> From: "Roman Mamedov" <r...@romanrm.net>
> To: "Konstantin
2.92TiB
Stefan
>
> p.s.
> you can also check the thread "Btrfs + compression = slow performance and high
> cpu usage"
>
> - Original Message -
> From: "Stefan Priebe - Profihost AG" <s.pri...@profihost.ag>
> To: "Marat Khalili" &l
On 16.08.2017 08:53, Marat Khalili wrote:
>> I've one system where a single kworker process is using 100% CPU;
>> sometimes a second process comes up with 100% CPU [btrfs-transacti]. Is
>> there anything I can do to get the old speed again or find the culprit?
>
> 1. Do you use quotas
Hello,
I've one system where a single kworker process is using 100% CPU;
sometimes a second process comes up with 100% CPU [btrfs-transacti]. Is
there anything I can do to get the old speed again or find the culprit?
Greets,
Stefan
Hello,
thanks. But is there any way to recover from this error? Like removing
the item or so? Data loss isn't a problem; it's just that reconstructing
the whole FS would take quite a long time.
Stefan
On 10.05.2017 11:54, Hugo Mills wrote:
> On Wed, May 10, 2017 at 11:20:58AM +0200, Stefan Pri
Hello,
here's the output:
# for block in 163316514816 163322413056 163325722624; do echo $block;
btrfs-debug-tree -b $block /dev/mapper/crypt_md0|sed -re 's/(\t| )name:
.*/\1name: HIDDEN/'; done
163316514816
btrfs-progs v4.8.5
leaf 163316514816 items 188 free space 1387 generation 86739 owner
Hi,
On 10.05.2017 09:48, Martin Steigerwald wrote:
> Stefan Priebe - Profihost AG - 10.05.17, 09:02:
>> I'm now trying btrfs progs 4.10.2. Is anybody out there who can tell me
>> something about the expected runtime or how to fix bad key ordering?
>
> I had a similar
On 10.05.2017 09:40, Hugo Mills wrote:
> On Wed, May 10, 2017 at 09:36:30AM +0200, Stefan Priebe - Profihost AG wrote:
>> Hello Roman,
>>
>> the FS is mountable. It just goes readonly when trying to write some data.
>>
>> The kernel msgs are:
>> BTRFS cr
readonly
BTRFS info (device dm-2): delayed_refs has NO entry
Greets,
Stefan
On 10.05.2017 09:18, Roman Mamedov wrote:
> On Wed, 10 May 2017 09:02:46 +0200
> Stefan Priebe - Profihost AG <s.pri...@profihost.ag> wrote:
>
>> how to fix bad key ordering?
>
> You
I'm now trying btrfs progs 4.10.2. Is anybody out there who can tell me
something about the expected runtime or how to fix bad key ordering?
Greets,
Stefan
On 06.05.2017 07:56, Stefan Priebe - Profihost AG wrote:
> It's still running. Is this the normal behaviour? Is there any other
It's still running. Is this the normal behaviour? Is there any other way
to fix the bad key ordering?
Greets,
Stefan
On 02.05.2017 08:29, Stefan Priebe - Profihost AG wrote:
> Hello list,
>
> I wanted to check an FS because it has bad key ordering.
>
> But btrfscheck is now
Hello list,
I wanted to check an FS because it has bad key ordering.
But btrfsck has now been running for 7 days. Current output:
# btrfsck -p --repair /dev/mapper/crypt_md0
enabling repair mode
Checking filesystem on /dev/mapper/crypt_md0
UUID: 37b15dd8-b2e1-4585-98d0-cc0fa2a5a7c9
bad key ordering
Hello Qu,
still no one on this one? Or is this solved in another way in 4.10 or
4.11, or is compression just experimental? I haven't seen a note on this.
Thanks,
Stefan
On 27.02.2017 14:43, Stefan Priebe - Profihost AG wrote:
> Hi,
>
> can please anybody comment on that one? Jos
Thanks Qu, removing BTRFS_I from the inode fixes this issue for me.
Greets,
Stefan
On 14.03.2017 03:50, Qu Wenruo wrote:
>
>
> At 03/13/2017 09:26 PM, Stefan Priebe - Profihost AG wrote:
>>
>> On 13.03.2017 08:39, Qu Wenruo wrote:
>>>
>>>
On 13.03.2017 08:39, Qu Wenruo wrote:
>
>
> At 03/13/2017 03:26 PM, Stefan Priebe - Profihost AG wrote:
>> Hi Qu,
>>
>> On 13.03.2017 02:16, Qu Wenruo wrote:
>>>
>>> At 03/13/2017 04:49 AM, Stefan Priebe - Profihost AG wrote:
Hi Qu,
On 13.03.2017 02:16, Qu Wenruo wrote:
>
> At 03/13/2017 04:49 AM, Stefan Priebe - Profihost AG wrote:
>> Hi Qu,
>>
>> while V5 was running fine against the openSUSE-42.2 kernel (based on
>> v4.4).
>
> Thanks for the test.
>
>> V7 re
Hi Qu,
While V5 was running fine against the openSUSE-42.2 kernel (based on v4.4),
V7 results in an OOPS for me:
BUG: unable to handle kernel NULL pointer dereference at 01f0
IP: [] __endio_write_update_ordered+0x33/0x140 [btrfs]
PGD 14e18d4067 PUD 14e1868067 PMD 0
Oops: [#1] SMP
Hi,
can anybody please comment on this one? Josef? Chris? I still need these
patches to be able to let btrfs run for more than 24 hours without ENOSPC
issues.
Greets,
Stefan
On 27.02.2017 08:22, Qu Wenruo wrote:
>
>
> At 02/25/2017 04:23 PM, Stefan Priebe - Profihost AG wrote:
Dear Qu,
any news on your branch? I still don't see it merged anywhere.
Greets,
Stefan
On 04.01.2017 17:13, Stefan Priebe - Profihost AG wrote:
> Hi Qu,
>
> On 01.01.2017 10:32, Qu Wenruo wrote:
>> Hi Stefan,
>>
>> I'm trying to push it to for-next (will be
Hi,
is there any chance to optimize btrfs_find_space_for_alloc / rb_next on
big devices?
I've plenty of free space, but most of the time there's only low I/O yet
high CPU usage. perf top shows:
60,41% [kernel] [k] rb_next
9,74% [kernel] [k]
https://www.marc.info/?l=linux-btrfs&m=148338312525137&w=2
Stefan
> Thanks,
> Qu
>
> On 12/31/2016 03:31 PM, Stefan Priebe - Profihost AG wrote:
>> Any news on this series? I can't see it in 4.9 nor in 4.10-rc
>>
>> Stefan
>>
>> Am 11.11.2016 um 09:39 schrieb Wang
Any news on this series? I can't see it in 4.9 nor in 4.10-rc
Stefan
On 11.11.2016 09:39, Wang Xiaoguang wrote:
> When having compression enabled, Stefan Priebe often got ENOSPC errors
> though fs still has much free space. Qu Wenruo also has submitted a
> fstests test case
Isn't there a way to move free space back to unallocated space again?
On 03.12.2016 05:43, Andrei Borzenkov wrote:
> On 01.12.2016 18:48, Chris Murphy wrote:
>> On Thu, Dec 1, 2016 at 7:10 AM, Stefan Priebe - Profihost AG
>> <s.pri...@profihost.ag> wrote:
>>>
On 01.12.2016 16:48, Chris Murphy wrote:
> On Thu, Dec 1, 2016 at 7:10 AM, Stefan Priebe - Profihost AG
> <s.pri...@profihost.ag> wrote:
>>
>> On 01.12.2016 14:51, Hans van Kranenburg wrote:
>>> On 12/01/2016 09:12 AM, Andrei Borzenkov wrote:
>>>
On 01.12.2016 14:51, Hans van Kranenburg wrote:
> On 12/01/2016 09:12 AM, Andrei Borzenkov wrote:
>> On Thu, Dec 1, 2016 at 10:49 AM, Stefan Priebe - Profihost AG
>> <s.pri...@profihost.ag> wrote:
>> ...
>>>
>>> Custom 4.4 kernel with patches up t
On 01.12.2016 09:12, Andrei Borzenkov wrote:
> On Thu, Dec 1, 2016 at 10:49 AM, Stefan Priebe - Profihost AG
> <s.pri...@profihost.ag> wrote:
> ...
>>
>> Custom 4.4 kernel with patches up to 4.10. But i already tried 4.9-rc7
>> which does the same.
>>
On 01.12.2016 00:02, Chris Murphy wrote:
> On Wed, Nov 30, 2016 at 2:03 PM, Stefan Priebe - Profihost AG
> <s.pri...@profihost.ag> wrote:
>> Hello,
>>
>> # btrfs balance start -v -dusage=0 -musage=1 /ssddisk/
>> Dumping filters: flags 0x7, state 0x
Hello,
# btrfs balance start -v -dusage=0 -musage=1 /ssddisk/
Dumping filters: flags 0x7, state 0x0, force is off
DATA (flags 0x2): balancing, usage=0
METADATA (flags 0x2): balancing, usage=1
SYSTEM (flags 0x2): balancing, usage=1
ERROR: error during balancing '/ssddisk/': No space left on
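When balance itself fails with ENOSPC, a commonly used workaround is to start with empty chunks (usage=0) and raise the usage filter step by step, so each pass frees space that the next pass can relocate into. A dry-run sketch (the step values and the mount point default are arbitrary assumptions, not advice from this thread):

```shell
#!/bin/sh
# Gradually rebalance: relocate low-usage chunks first so later,
# fuller passes have unallocated space to work with.
# MNT, the usage steps and the dry-run default are assumptions.
MNT=${MNT:-/ssddisk}
DRY_RUN=${DRY_RUN:-1}   # print the commands instead of running them

for u in 0 5 10 25 50; do
    cmd="btrfs balance start -dusage=$u -musage=$u $MNT"
    if [ "$DRY_RUN" = 1 ]; then echo "$cmd"; else $cmd || break; fi
done
```

usage=0 only reclaims completely empty chunks and needs no free space at all, which is why it is the usual first step.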
On 23.11.2016 19:23, Holger Hoffstätte wrote:
> On 11/23/16 18:21, Stefan Priebe - Profihost AG wrote:
>> On 04.11.2016 20:20, Liu Bo wrote:
>>> If we have
>>>
>>> |0--hole--4095||4096--preallocate--12287|
>>>
>>> instead of using prea
Hi,
sorry, the last mail was from the wrong box.
On 04.11.2016 20:20, Liu Bo wrote:
> If we have
>
> |0--hole--4095||4096--preallocate--12287|
>
> instead of using preallocated space, an 8K direct write will just
> create a new 8K extent and it'll end up with
>
> |0--new
On 12.11.2016 03:18, Liu Bo wrote:
> On Wed, Nov 09, 2016 at 09:19:21PM +0100, Stefan Priebe - Profihost AG wrote:
>> Hello,
>>
>> found this one from 2014:
>> https://patchwork.kernel.org/patch/5551651/
>>
>> is this still valid?
>
> The spac
Hello,
found this one from 2014:
https://patchwork.kernel.org/patch/5551651/
is this still valid?
On 09.11.2016 09:09, Stefan Priebe - Profihost AG wrote:
> Dear list,
>
> even though there's a lot of free space on my disk:
>
> # df -h /vmbackup/
> Filesystem
Dear list,
even though there's a lot of free space on my disk:
# df -h /vmbackup/
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/stripe0-backup   37T   24T   13T  64% /backup
# btrfs filesystem df /backup/
Data, single: total=23.75TiB, used=22.83TiB
System, DUP:
Hi,
currently I've an FS which triggers this one on mount despite originally
having 50% disk free - but btrfs-progs fails too.
# btrfs check --repair -p /dev/vdb1
enabling repair mode
couldn't open RDWR because of unsupported option features (3).
ERROR: cannot open file system
[ 164.378512]
, used=155045216256,
pinned=0, reserved=0, may_use=524288, readonly=65536
Greets,
Stefan
On 29.09.2016 09:27, Stefan Priebe - Profihost AG wrote:
> On 29.09.2016 09:13, Wang Xiaoguang wrote:
>>>> I found that compress sometime report ENOSPC error even in 4.8-rc8,
>>>
Hello list,
just want to report again that I've not seen a single ENOSPC message with
this series applied.
Now working fine for 18 days.
Stefan
On 14.10.2016 15:09, Stefan Priebe - Profihost AG wrote:
>
> On 06.10.2016 04:51, Wang Xiaoguang wrote:
>> This issue was revealed
On 17.10.2016 03:50, Qu Wenruo wrote:
> At 10/17/2016 02:54 AM, Stefan Priebe - Profihost AG wrote:
>> On 16.10.2016 00:37, Hans van Kranenburg wrote:
>>> Hi,
>>>
>>> On 10/15/2016 10:49 PM, Stefan Priebe - Profihost AG wrote:
>>>>
>>
On 16.10.2016 21:48, Hans van Kranenburg wrote:
> On 10/16/2016 08:54 PM, Stefan Priebe - Profihost AG wrote:
>> On 16.10.2016 00:37, Hans van Kranenburg wrote:
>>> On 10/15/2016 10:49 PM, Stefan Priebe - Profihost AG wrote:
>>>>
>>>> cp --reflink=
On 16.10.2016 00:37, Hans van Kranenburg wrote:
> Hi,
>
> On 10/15/2016 10:49 PM, Stefan Priebe - Profihost AG wrote:
>>
>> cp --reflink=always sometimes takes very long (i.e. 25-35 minutes).
>>
>> An example:
>>
>> source file:
>> #
Hello,
cp --reflink=always sometimes takes very long (i.e. 25-35 minutes).
An example:
source file:
# ls -la vm-279-disk-1.img
-rw-r--r-- 1 root root 204010946560 Oct 14 12:15 vm-279-disk-1.img
target file after around 10 minutes:
# ls -la vm-279-disk-1.img.tmp
-rw-r--r-- 1 root root
Hi,
On 14.10.2016 15:19, Stefan Priebe - Profihost AG wrote:
> Dear Julian,
>
> On 14.10.2016 14:26, Julian Taylor wrote:
>> On 10/14/2016 08:28 AM, Stefan Priebe - Profihost AG wrote:
>>> Hello list,
>>>
>>> while running the same workload on two
Dear Julian,
On 14.10.2016 14:26, Julian Taylor wrote:
> On 10/14/2016 08:28 AM, Stefan Priebe - Profihost AG wrote:
>> Hello list,
>>
>> while running the same workload on two machines (single xeon and a dual
>> xeon) both with 64GB RAM.
>>
>> I need
ion path.
>
> With this patch, we can fix these false enospc error for compression.
>
> Signed-off-by: Wang Xiaoguang <wangxg.f...@cn.fujitsu.com>
Tested-by: Stefan Priebe <s.pri...@profihost.ag>
Works fine for 8 days now - no ENOSPC errors anymore.
Greets,
Stefan
ve_meta
>Just as a place holder.
> 2) Increase *accurate* outstanding_extents at set_bit_hooks()
>This is the real increaser.
> 3) Decrease *INACCURATE* outstanding_extents before returning
>This makes outstanding_extents to correct value.
>
> For 128M BTRFS_MAX_EXTENT
Hello list,
while running the same workload on two machines (a single Xeon and a dual
Xeon), both with 64GB RAM,
I need to run echo 3 >/proc/sys/vm/drop_caches every 15-30 minutes to
keep the speed as good as on the non-NUMA system. I'm not sure whether
this is related to NUMA.
Is there any sysctl
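The drop_caches workaround described above can be automated until the root cause is found; a crontab sketch (the file path and the 20-minute interval are arbitrary assumptions, and note this drops caches system-wide, hurting warm caches too - a stopgap, not a fix):

```shell
# /etc/cron.d/drop-caches - sketch, runs as root.
# Flush page cache, dentries and inodes every 20 minutes.
*/20 * * * * root /bin/sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches'
```

The leading sync is there so dirty pages are written back before the caches are dropped.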
Dear Wang,
On 06.10.2016 05:04, Wang Xiaoguang wrote:
> Hi,
>
> On 09/29/2016 03:27 PM, Stefan Priebe - Profihost AG wrote:
>> Am 29.09.2016 um 09:13 schrieb Wang Xiaoguang:
>>>>> I found that compress sometime report ENOSPC error even in 4.8-rc8,
>>
The main difference between the systems is:
- Single Xeon => no OOM
- Dual Xeon / NUMA => OOM
Both have 64GB mem.
On 07.10.2016 11:33, Holger Hoffstätte wrote:
> On 10/07/16 09:17, Wang Xiaoguang wrote:
>> Hi,
>>
>> On 10/07/2016 03:03 PM, Stefan Priebe - P
Hi Wang,
currently it's working fine on this system - no ENOSPC errors. But
it will take a week to be sure they don't come back.
Thanks!
Greets,
Stefan
On 06.10.2016 05:04, Wang Xiaoguang wrote:
> Hi,
>
> On 09/29/2016 03:27 PM, Stefan Priebe - Profihost AG wrote:
>> A
Hi Holger,
On 07.10.2016 11:33, Holger Hoffstätte wrote:
> On 10/07/16 09:17, Wang Xiaoguang wrote:
>> Hi,
>>
>> On 10/07/2016 03:03 PM, Stefan Priebe - Profihost AG wrote:
>>> Dear Wang,
>>>
>>> can't use v4.8.0 as i always get OOMs an
On 07.10.2016 10:07, Wang Xiaoguang wrote:
> hello,
>
> On 10/07/2016 04:06 PM, Stefan Priebe - Profihost AG wrote:
>> and it shows:
>>
>> PAG | scan 33829e5 | steal 1968e3 | stall 0 | |
>>| | swin 257071 | swo
03:03 PM, Stefan Priebe - Profihost AG wrote:
>> Dear Wang,
>>
>> can't use v4.8.0 as i always get OOMs and total machine crashes.
>>
>> Complete traces with your patch and some more btrfs patches applied (in
>> the hope in fixes the OOM but it did not):
>>
| | |
| | vmcom 2.8G | vmlim 35.1G |
Greets,
Stefan
On 07.10.2016 09:17, Wang Xiaoguang wrote:
> Hi,
>
> On 10/07/2016 03:03 PM, Stefan Priebe - Profihost AG wrote:
>> Dear Wang,
>>
>> can't use v4.8.0 as i always get OOMs and total machine crashes.
>>
>> C
On 07.10.2016 09:17, Wang Xiaoguang wrote:
> Hi,
>
> On 10/07/2016 03:03 PM, Stefan Priebe - Profihost AG wrote:
>> Dear Wang,
>>
>> can't use v4.8.0 as i always get OOMs and total machine crashes.
>>
>> Complete traces with your patch and some mor
Hi,
>
> On 09/29/2016 03:27 PM, Stefan Priebe - Profihost AG wrote:
>> Am 29.09.2016 um 09:13 schrieb Wang Xiaoguang:
>>>>> I found that compress sometime report ENOSPC error even in 4.8-rc8,
>>>>> currently
>>>> I cannot confirm tha
Thanks Wang,
I applied them both on top of vanilla v4.8 - I hope this is OK. I'll
report back on what happens.
Greets,
Stefan
On 06.10.2016 05:04, Wang Xiaoguang wrote:
> Hi,
>
> On 09/29/2016 03:27 PM, Stefan Priebe - Profihost AG wrote:
>> Am 29.09.2016 um 09:13 schrieb
Hi,
On 29.09.2016 12:03, Adam Borowski wrote:
> On Thu, Sep 29, 2016 at 09:27:01AM +0200, Stefan Priebe - Profihost AG wrote:
>> On 29.09.2016 09:13, Wang Xiaoguang wrote:
>>>>> I found that compress sometime report ENOSPC error even in 4.8-rc8,
On 29.09.2016 09:13, Wang Xiaoguang wrote:
>>> I found that compress sometime report ENOSPC error even in 4.8-rc8,
>>> currently
>> I cannot confirm that as I do not have enough space to test this without
>> compression ;-( But yes, I've compression enabled.
> I might not get you, my poor
On 29.09.2016 08:55, Wang Xiaoguang wrote:
> Hi,
>
> On 09/29/2016 02:49 PM, Stefan Priebe - Profihost AG wrote:
>> Hi,
>>
>> On 28.09.2016 14:10, Wang Xiaoguang wrote:
>>> OK, I see.
>>> But given that you often run into enospc errors
Hi,
On 28.09.2016 14:10, Wang Xiaoguang wrote:
> OK, I see.
> But given that you often run into ENOSPC errors, can you work out a
> reproducer script according to your workload? That would give us great
> help.
I tried hard to reproduce it but i can't get it to reproduce with a test
script.
On 28.09.2016 15:44, Holger Hoffstätte wrote:
>> Good idea but it does not. I hope i can reproduce this with my already
>> existing testscript which i've now bumped to use a 37TB partition and
>> big files rather than a 15GB part and small files. If i can reproduce it
>> i can also check
Dear Holger,
first thanks for your long e-mail.
On 28.09.2016 14:47, Holger Hoffstätte wrote:
> On 09/28/16 13:35, Wang Xiaoguang wrote:
>> hello,
>>
>> On 09/28/2016 07:15 PM, Stefan Priebe - Profihost AG wrote:
>>> Dear list,
>>>
>>> i
On 28.09.2016 14:10, Wang Xiaoguang wrote:
> hello,
>
> On 09/28/2016 08:02 PM, Stefan Priebe - Profihost AG wrote:
>> Hi Xiaoguang Wang,
>>
>> On 28.09.2016 13:35, Wang Xiaoguang wrote:
>>> hello,
>>>
>>> On 09/28/2016 07:15
Hi Xiaoguang Wang,
On 28.09.2016 13:35, Wang Xiaoguang wrote:
> hello,
>
> On 09/28/2016 07:15 PM, Stefan Priebe - Profihost AG wrote:
>> Dear list,
>>
>> is there any chance anybody wants to work with me on the following issue?
> Though I'm also somew
Dear list,
is there any chance anybody wants to work with me on the following issue?
BTRFS: space_info 4 has 18446742286429913088 free, is not full
BTRFS: space_info total=98247376896, used=77036814336, pinned=0,
reserved=0, may_use=1808490201088, readonly=0
i get this nearly every day.
Here
: Show Blocked State
but nothing more.
Stefan
On 22.09.2016 16:28, Chris Mason wrote:
>
>
> On 09/22/2016 02:41 AM, Stefan Priebe - Profihost AG wrote:
>> Hi,
>>
>> i always encounter btrfs deadlocks / hung tasks, when i have a lot of
>> cached mem and i'm doing
Hi,
this is vanilla linux 4.8-rc6 and i still have ENOSPC issues with btrfs
- caused by wrong space_tree entries.
[ 9736.921995] [ cut here ]
[ 9736.923342] WARNING: CPU: 1 PID: 23942 at fs/btrfs/extent-tree.c:5734
btrfs_free_block_groups+0x35e/0x440 [btrfs]
[
Hi,
today I've seen this one with 4.8-rc5, and the system became
unresponsive.
BUG: workqueue lockup - pool cpus=14 node=1 flags=0x0 nice=0 stuck for 33s!
BUG: workqueue lockup - pool cpus=14 node=1 flags=0x0 nice=-20 stuck for
33s!
Showing busy workqueues and worker pools:
workqueue
trfs_alloc_data_chunk_ondemand(). Either method
> will
> work.
>
> But given that delete_unused_bgs_mutex's name length is longer than
> bg_delete_sem,
> I choose the first method, to create a new struct rw_semaphore bg_delete_sem
> and
> delete delete_unused_bgs_mutex :
).d/
# cat > /etc/systemd/system/$(systemd-escape --suffix=mount -p
/foo/bar/baz).d/timeout.conf <
> 2016-08-29 9:28 GMT+03:00 Stefan Priebe - Profihost AG
> <s.pri...@profihost.ag>:
>> Hi Qu,
>>
>> On 29.08.2016 03:48, Qu Wenruo wrote:
>>>
>>>
Hi Josef,
this still happens with current 4.8-rc* releases. Anything I can do to
debug this? Maybe insert some code to check for an under- or overflow in
the code?
Stefan
On 14.08.2016 17:22, Stefan Priebe - Profihost AG wrote:
> Hi Josef,
>
> anything i could do or test
Hi Qu,
On 29.08.2016 03:48, Qu Wenruo wrote:
>
>
> At 08/29/2016 04:15 AM, Stefan Priebe - Profihost AG wrote:
>> Hi,
>>
>> i'm trying to get my 60TB btrfs volume to mount with systemd at boot.
>> But this always fails with: "mounting timed out. St
Hi,
i'm trying to get my 60TB btrfs volume to mount with systemd at boot.
But this always fails with: "mounting timed out. Stopping." after 90s.
I can't find any fstab setting for systemd to raise this timeout.
There's just x-systemd.device-timeout, but that controls how long to
wait for
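For reference, newer systemd versions accept a per-mount timeout either as the fstab option x-systemd.mount-timeout= or via a drop-in on the generated mount unit. A sketch of the drop-in variant (the /vmbackup mount point and the 30-minute value are assumptions; the drop-in directory name must match the escaped mount unit, see systemd-escape --suffix=mount -p):

```ini
# /etc/systemd/system/vmbackup.mount.d/timeout.conf
# Sketch: give a large btrfs volume 30 minutes to mount instead of
# the default 90 seconds. Path and value are illustrative assumptions.
[Mount]
TimeoutSec=30min
```

x-systemd.device-timeout only covers waiting for the device to appear; it is the mount unit's own TimeoutSec that governs how long the mount command itself may run.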
Hi Josef,
anything i could do or test? Results with a vanilla next branch are the
same.
Stefan
On 11.08.2016 08:09, Stefan Priebe - Profihost AG wrote:
> Hello,
>
> the backtrace and info on umount looks the same:
>
> [241910.341124] [ cut here ]
&
]---
[241915.982893] BTRFS: space_info 4 has 114577997824 free, is not full
[241916.045103] BTRFS: space_info total=307627032576, used=193048903680,
pinned=0, reserved=0, may_use=688537059328, readonly=131072
Greets,
Stefan
On 10.08.2016 23:31, Stefan Priebe - Profihost AG wrote:
> Hi Jo
=5
SYSTEM (flags 0x2): balancing, usage=5
dmesg:
[203784.411189] BTRFS info (device dm-0): 114 enospc errors during balance
uname -r 4.7.0-rc6-29043-g8b8b08c
Greets,
Stefan
On 08.08.2016 08:17, Stefan Priebe - Profihost AG wrote:
> Am 04.08.2016 um 13:40 schrieb Stefan Priebe - Profih
On 04.08.2016 13:40, Stefan Priebe - Profihost AG wrote:
> On 29.07.2016 23:03, Josef Bacik wrote:
>> On 07/29/2016 03:14 PM, Omar Sandoval wrote:
>>> On Fri, Jul 29, 2016 at 12:11:53PM -0700, Omar Sandoval wrote:
>>>> On Fri, Jul 29, 2016 at 08:40:26PM +0
On 29.07.2016 23:03, Josef Bacik wrote:
> On 07/29/2016 03:14 PM, Omar Sandoval wrote:
>> On Fri, Jul 29, 2016 at 12:11:53PM -0700, Omar Sandoval wrote:
>>> On Fri, Jul 29, 2016 at 08:40:26PM +0200, Stefan Priebe - Profihost
>>> AG wrote:
>>>> Dear li
On 29.07.2016 21:14, Omar Sandoval wrote:
> On Fri, Jul 29, 2016 at 12:11:53PM -0700, Omar Sandoval wrote:
>> On Fri, Jul 29, 2016 at 08:40:26PM +0200, Stefan Priebe - Profihost AG wrote:
>>> Dear list,
>>>
>>> i'm seeing btrfs no space messages f
On 29.07.2016 21:11, Omar Sandoval wrote:
> On Fri, Jul 29, 2016 at 08:40:26PM +0200, Stefan Priebe - Profihost AG wrote:
>> Dear list,
>>
>> i'm seeing btrfs no space messages frequently on big filesystems (> 30TB).
>>
>> In all cases i'm getting a trac
Dear list,
i'm seeing btrfs no space messages frequently on big filesystems (> 30TB).
In all cases I'm getting a trace like this one, with a space_info warning
(since commit [1]). Could someone please be so kind as to help me
debug / fix this bug? I'm using space_cache=v2 on all those systems.
On 20.07.2016 09:35, Holger Hoffstätte wrote:
> On 07/20/16 07:31, Stefan Priebe - Profihost AG wrote:
>> Hi list,
>>
>> while I didn't have the problem for some months, I'm now getting ENOSPC
>> on a regular basis on one host.
>
> Well, it's getting b
here we go...
On 20.07.2016 08:31, Wang Xiaoguang wrote:
> hello,
>
> On 07/20/2016 01:31 PM, Stefan Priebe - Profihost AG wrote:
>> Hi list,
>>
>> while I didn't have the problem for some months, I'm now getting ENOSPC
>> on a regular basis on one host.
Hi list,
while I didn't have the problem for some months, I'm now getting ENOSPC
on a regular basis on one host.
It would be great if someone can help me debugging this.
Some basic informations:
# touch /vmbackup/abc
touch: cannot touch `/vmbackup/abc': No space left on device
# df -h /vmbackup/
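When touch fails with ENOSPC although df still shows free space, the raw space is usually fully allocated to chunks while one chunk type (often metadata) is exhausted; comparing allocated vs. used per block-group type makes that visible. A sketch that parses `btrfs filesystem df`-style output (the sample lines are hard-coded for illustration, not from the affected system):

```shell
#!/bin/sh
# Show how full each block-group type's allocation is. On a real
# system the input would be: btrfs filesystem df /vmbackup
sample='Data, single: total=23.75TiB, used=22.83TiB
Metadata, DUP: total=60.00GiB, used=58.50GiB
System, DUP: total=8.00MiB, used=2.61MiB'

printf '%s\n' "$sample" | awk -F'[=,]' '
{
    # $1 = block-group type, $3 = allocated, $5 = used. The sketch
    # assumes allocated and used share a unit within each line.
    printf "%-10s allocated=%-10s used=%-10s (%.1f%% of allocation)\n",
           $1, $3, $5, ($5 + 0) / ($3 + 0) * 100
}'
```

A type sitting near 100% of its allocation while the device has no unallocated space left is the classic shape of this failure.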