Re: 6TB partition, Data only 2TB - aka When you haven't hit the "usual" problem

2016-08-25 Thread Lutz Vieweg

On 08/05/2016 10:03 PM, Gabriel C wrote:

On 04.08.2016 18:53, Lutz Vieweg wrote:


I was today hit by what I think is probably the same bug:
A btrfs on a close-to-4TB sized block device, only half filled
to almost exactly 2 TB, suddenly says "no space left on device"
upon any attempt to write to it. The filesystem was NOT automatically
switched to read-only by the kernel, I should mention.

Re-mounting (which is a pain as this filesystem is used for
$HOMEs of a multitude of active users who I have to kick from
the server for doing things like re-mounting) removed the symptom
for now, but from what I can read in linux-btrfs mailing list
archives, it is pretty likely the symptom will re-appear.

Here are some more details:

Software versions:

linux-4.6.1 (vanilla from kernel.org)

...


dmesg output from the time the "no space left on device"-symptom
appeared:


[5171203.601620] WARNING: CPU: 4 PID: 23208 at fs/btrfs/inode.c:9261 
btrfs_destroy_inode+0x263/0x2a0 [btrfs]

...

[5171230.306037] WARNING: CPU: 18 PID: 12656 at fs/btrfs/extent-tree.c:4233 
btrfs_free_reserved_data_space_noquota+0xf3/0x100 [btrfs]


Sounds like the bug I hit too ..

To fix this you'll need:

crazy@zwerg:~/Work/linux-git$ git show 8b8b08cbf
commit 8b8b08cbfb9021af4b54b4175fc4c51d655aac8c
Author: Chris Mason 
Date:   Tue Jul 19 05:52:36 2016 -0700

 Btrfs: fix delalloc accounting after copy_from_user faults


Thanks for this hint!

Yesterday (20 days after the first time this bug struck us, and after
re-mounting the filesystem) we were hit by the same bug again - twice! -
once in the morning, and again in the evening.

That called for immediate action, and short of reverting the whole setup
to XFS, installing a new kernel with the above (and other) btrfs fix(es)
was the one thing I could try.

The system is now running linux-4.7.2, which does contain those patches.
If that doesn't fix it, we're really running out of options.
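
(One way to double-check that a given stable release actually carries the
fix is to search its history for the patch subject - a sketch, assuming a
local clone of a kernel tree that has the v4.7.2 tag:

$ git log --oneline --grep='fix delalloc accounting after copy_from_user faults' v4.7.2

If the commit shows up, the backport is present in that release.)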

Regards,

Lutz Vieweg




Re: 6TB partition, Data only 2TB - aka When you haven't hit the "usual" problem

2016-08-05 Thread Gabriel C

On 04.08.2016 18:53, Lutz Vieweg wrote:
> 
> I was today hit by what I think is probably the same bug:
> A btrfs on a close-to-4TB sized block device, only half filled
> to almost exactly 2 TB, suddenly says "no space left on device"
> upon any attempt to write to it. The filesystem was NOT automatically
> switched to read-only by the kernel, I should mention.
> 
> Re-mounting (which is a pain as this filesystem is used for
> $HOMEs of a multitude of active users who I have to kick from
> the server for doing things like re-mounting) removed the symptom
> for now, but from what I can read in linux-btrfs mailing list
> archives, it is pretty likely the symptom will re-appear.
> 
> Here are some more details:
> 
> Software versions:
>> linux-4.6.1 (vanilla from kernel.org)
...
> 
> dmesg output from the time the "no space left on device"-symptom
> appeared:
> 
>> [5171203.601620] WARNING: CPU: 4 PID: 23208 at fs/btrfs/inode.c:9261 
>> btrfs_destroy_inode+0x263/0x2a0 [btrfs]


> ...
>> [5171230.306037] WARNING: CPU: 18 PID: 12656 at fs/btrfs/extent-tree.c:4233 
>> btrfs_free_reserved_data_space_noquota+0xf3/0x100 [btrfs]


Sounds like the bug I hit too ..

To fix this you'll need:


crazy@zwerg:~/Work/linux-git$ git show 8b8b08cbf
commit 8b8b08cbfb9021af4b54b4175fc4c51d655aac8c
Author: Chris Mason 
Date:   Tue Jul 19 05:52:36 2016 -0700

Btrfs: fix delalloc accounting after copy_from_user faults

Commit 56244ef151c3cd11 was almost but not quite enough to fix the
reservation math after btrfs_copy_from_user returned partial copies.

Some users are still seeing warnings in btrfs_destroy_inode, and with a
long enough test run I'm able to trigger them as well.

This patch fixes the accounting math again, bringing it much closer to
the way it was before the sectorsize conversion Chandan did.  The
problem is accounting for the offset into the page/sector when we do a
partial copy.  This one just uses the dirty_sectors variable which
should already be updated properly.

Signed-off-by: Chris Mason 
cc: sta...@vger.kernel.org # v4.6+

diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index f3f61d1..bcfb4a2 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -1629,13 +1629,11 @@ again:
 * managed to copy.
 */
if (num_sectors > dirty_sectors) {
-   /*
-* we round down because we don't want to count
-* any partial blocks actually sent through the
-* IO machines
-*/
-   release_bytes = round_down(release_bytes - copied,
- root->sectorsize);
+
+   /* release everything except the sectors we dirtied */
+   release_bytes -= dirty_sectors <<
+   root->fs_info->sb->s_blocksize_bits;
+
if (copied > 0) {
spin_lock(&BTRFS_I(inode)->lock);
BTRFS_I(inode)->outstanding_extents++;


Re: 6TB partition, Data only 2TB - aka When you haven't hit the "usual" problem

2016-08-05 Thread Lutz Vieweg

On 08/05/2016 02:12 PM, Austin S. Hemmelgarn wrote:

> If you stick to single disk

We do, all our btrfs filesystems reside on one single block device,
redundancy is provided by a DRBD layer below.

> don't use quota groups

We don't use any quotas.

> stick to reasonably sized filesystems (not more than a few TB)

We do, currently 4 TB max, because that's the only way to utilize
different physical storage devices for different filesystem instances
such that we can back them up in parallel within reasonable time.

> and avoid a couple of specific unconventional storage configurations below it

Configurations like what?

> The whole issue with
> databases is often a non-issue for desktop users in my experience

Well, try a "cat" on a sqlite file that has been used by some ordinary
desktop software (like a browser) for a year - and you'll experience
horrible performance, due to the extreme amount of fragments.

(Having to manually "de-fragment" a filesystem periodically is something
that I had considered a thing of the past when I started using BSD's hfs
instead of the Amiga FFS in the late 1980s... ;-)

> and if you think VM image performance is bad, you should really be looking
> at using real block storage instead of a file (seriously, this will usually
> get you a bigger performance boost than using ext4 or XFS over BTRFS as an
> underlying filesystem will).

Sure, assigning block devices to each VM would be even better, but
also much less convenient for operations. It's a feature here that any
user can start a new VM instance (without root privileges) at any
time, and that the images used by those VMs are part of the incremental
backup that stores only differences, not "whole files that have been changed".

>> We sure do - actually, the possibility to "run daily backups from a
>> snapshot while write performance remains acceptable" is the one and
>> only reason for me to use btrfs rather than xfs for those $HOME dirs.
>> In every other aspect (stability, performance, suitability for
>> storing VM-images or database-files) xfs wins for me.
>> And the btrfs advantage "file system based snapshot being more
>> performant than block device based snapshot" may fade away
>> with the replacement of magnetic disks with SSDs in the long run.
> I'm going to respond to the two parts of this separately:
> 1. As far as snapshot performance, you'd be surprised. I've got pretty good
> consumer grade SSD's that can do a sustained 250MB/s write speed, which means
> that to be as fast as a snapshot, the data set would have to be less than 25MB

No, I'm talking about LVM snapshots, which utilize Copy-On-Write
on the block device level. Creating such an LVM snapshot is
as quick as creating a btrfs snapshot, regardless of the size.
The only significant drawback of the LVM snapshot is that whenever
data is written to the filesystem, that causes copy operations from
one part of the (currently magnetic) storage to another part, and
that seriously hurts the write performance.

(Of course, it would not be a reasonable option to take a block device
snapshot by first copying all the data on it.)

> 2. As far as snapshots being the only advantage of BTRFS, that's just bogus.
> XFS does have metadata checksumming now, but that provides no protection for
> data, just metadata.

We check for bit-rot on the block device level: once a week, DRBD verifies
the integrity of the data by reading from both redundant storage devices
and comparing the checksums.
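
(For reference, this kind of weekly check maps to DRBD's built-in online
verify - a minimal sketch, assuming a hypothetical resource name of r0 and
a verify-alg configured for it; the command can then be run from a weekly
cron job:

drbdadm verify r0

Any blocks that differ between the two nodes are reported in the kernel log
and marked out-of-sync.)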

So far, we have never encountered a single bit-rot error, even though the
underlying physical storage devices are "cheap SATA disks".

> XFS also doesn't have transparent compression support

I have no use for that. Disk space is relatively cheap, cheap enough
that we don't bother with RAID-5 or such, but use the "full redundancy"
provided by a shared-nothing DRBD setup.

> filesystems can't be shrunk

I enlarged XFS filesystems multiple times while in use, which worked well.
I never had to shrink a filesystem, and I cannot imagine how such a use case
could arise for me.

> and it stores no backups of any metadata except super-blocks.

Which is fine with me, as redundancy is provided on the block device level
by DRBD.

> While the compression and filesystem shrinking may not be needed in
> your use case, the data integrity features are almost certainly an advantage.

Btrfs sure has some nifty features, and I understand that for some,
features like "subvolumes" or "deduplication" are important.

But a hundred great features cannot make up for a lack of stability,
therefore I would love to see those ENOSPC-related issues resolved
rather than more fancy features being built :-)

Regards,

Lutz Vieweg



Re: 6TB partition, Data only 2TB - aka When you haven't hit the "usual" problem

2016-08-05 Thread Austin S. Hemmelgarn

On 2016-08-05 06:56, Lutz Vieweg wrote:

On 08/04/2016 10:30 PM, Chris Murphy wrote:

Keep in mind the list is rather self-selecting for problems. People
who aren't having problems are unlikely to post their non-problems to
the list.


True, but the number of people inclined to post a bug report to
the list is also a lot smaller than the number of people who
experienced problems.

Personally, I know at least 2 Linux users who happened to
get a btrfs filesystem as part of upgrading to a newer Suse
distribution on their PC, and both of them experienced
trouble with their filesystems that caused them to re-install
without using btrfs. They weren't interested enough in what filesystem
they use to bother investigating what happened in detail or to issue
bug-reports.

I'm afraid that btrfs' reputation has already taken damage
from the combination of "early deployment as a root filesystem
to unsuspecting users" and "being at a development stage where
users are likely to experience trouble at some time".

FWIW, the 'early deployment' thing is an issue of the distributions 
themselves, and most people who have come to me personally complaining 
about BTRFS have understood this after I explained it to them.


As far as the rest, it's hit or miss whether you have issues.  I've been 
using BTRFS on all my personal systems since about 3.14, and have had 
zero issues with data loss or filesystem corruption (or horrible 
performance) since about 3.18 that were actually BTRFS issues (it's 
helped me ID a lot of marginal hardware though).  In fact, I had more 
issues trying to use ZFS for a year than I've had in the now multiple 
years of using BTRFS, and in the case of BTRFS, I was actually able to 
fix things.  I know quite a few people (and a number of big companies 
for that matter) who have been running BTRFS for longer and had fewer 
issues too.  The biggest issue is that the risks involved aren't well 
characterized, although most filesystems have that same issue.


If you stick to single disk or raid1 mode, don't use quota groups (which 
at least SUSE does by default now), stick to reasonably sized 
filesystems (not more than a few TB), and avoid a couple of specific 
unconventional storage configurations below it, BTRFS works fine.  The 
whole issue with databases is often a non-issue for desktop users in my 
experience, and if you think VM image performance is bad, you should 
really be looking at using real block storage instead of a file 
(seriously, this will usually get you a bigger performance boost than 
using ext4 or XFS over BTRFS as an underlying filesystem will).



c. Take some risk and use 4.8 rc1 once it's out. Just make sure to
keep backups.


We sure do - actually, the possibility to "run daily backups from a
snapshot while write performance remains acceptable" is the one and
only reason for me to use btrfs rather than xfs for those $HOME dirs.
In every other aspect (stability, performance, suitability for
storing VM-images or database-files) xfs wins for me.
And the btrfs advantage "file system based snapshot being more
performant than block device based snapshot" may fade away
with the replacement of magnetic disks with SSDs in the long run.

I'm going to respond to the two parts of this separately:
1. As far as snapshot performance, you'd be surprised. I've got pretty 
good consumer grade SSD's that can do a sustained 250MB/s write speed, 
which means that to be as fast as a snapshot, the data set would have to 
be less than 25MB (and that's being generous, snapshots usually take 
less than 0.1s to create on my system).  Where the turnover point occurs 
varies of course based on storage bandwidth, but I don't see it being 
very likely that SSD's will obsolete snapshotting any time soon.  Even 
if disks suddenly get the ability to run at full bandwidth of the link 
they're on, a SAS3 disk (12Gbit/s signaling, practical bandwidth of 
about 1GB/s) would have a turnover point of about 100MB, and an NVMe 
device on a PCIe 4.0 X16 link (31.51GB/s theoretical bandwidth) would 
have a turnover point of 3.1GB.  In theory, a high-end NVDIMM might be 
able to do better than a snapshot, but it probably couldn't get much 
faster right now than twice the speed of a PCIe 4.0 X16 link, which 
means that it would likely have a turnover point of about 6.2GB.  In 
comparison, it's not unusual to need a snapshot of a data set in excess 
of a terabyte in size.  (The arithmetic behind these turnover points is 
sketched after point 2 below.)
2. As far as snapshots being the only advantage of BTRFS, that's just 
bogus. XFS does have metadata checksumming now, but that provides no 
protection for data, just metadata.  XFS also doesn't have transparent 
compression support, filesystems can't be shrunk, and it stores no 
backups of any metadata except super-blocks.  While the compression and 
filesystem shrinking may not be needed in your use case, the data 
integrity features are almost certainly an advantage.
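
(Rough sketch of the turnover-point arithmetic referenced in point 1,
using the figures quoted above:

  turnover size  ≈  sustained write bandwidth  ×  snapshot creation time

     250 MB/s  × 0.1 s ≈  25 MB    (consumer SSD)
       1 GB/s  × 0.1 s ≈ 100 MB    (SAS3 disk, ~1GB/s practical)
   31.51 GB/s  × 0.1 s ≈ 3.1 GB    (NVMe on PCIe 4.0 X16)

Below the turnover size, copying the data outright could in principle be as
fast as taking a snapshot; above it, the snapshot wins.)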


Re: 6TB partition, Data only 2TB - aka When you haven't hit the "usual" problem

2016-08-05 Thread Lutz Vieweg

On 08/04/2016 10:30 PM, Chris Murphy wrote:

Keep in mind the list is rather self-selecting for problems. People
who aren't having problems are unlikely to post their non-problems to
the list.


True, but the number of people inclined to post a bug report to
the list is also a lot smaller than the number of people who
experienced problems.

Personally, I know at least 2 Linux users who happened to
get a btrfs filesystem as part of upgrading to a newer Suse
distribution on their PC, and both of them experienced
trouble with their filesystems that caused them to re-install
without using btrfs. They weren't interested enough in what filesystem
they use to bother investigating what happened in detail or to issue
bug-reports.

I'm afraid that btrfs' reputation has already taken damage
from the combination of "early deployment as a root filesystem
to unsuspecting users" and "being at a development stage where
users are likely to experience trouble at some time".


c. Take some risk and use 4.8 rc1 once it's out. Just make sure to
keep backups.


We sure do - actually, the possibility to "run daily backups from a
snapshot while write performance remains acceptable" is the one and
only reason for me to use btrfs rather than xfs for those $HOME dirs.
In every other aspect (stability, performance, suitability for
storing VM-images or database-files) xfs wins for me.
And the btrfs advantage "file system based snapshot being more
performant than block device based snapshot" may fade away
with the replacement of magnetic disks with SSDs in the long run.


Regards,

Lutz Vieweg



Re: 6TB partition, Data only 2TB - aka When you haven't hit the "usual" problem

2016-08-04 Thread Chris Murphy
On Thu, Aug 4, 2016 at 10:53 AM, Lutz Vieweg  wrote:

> The amount of threads on "lost or unused free space" without resolutions
> in the btrfs mailing list archive is really frightening. If these
> symptoms commonly re-appear with no fix in sight, I'm afraid I'll have
> to either resort to using XFS (with ugly block-device based snapshots
> for backup) or try my luck with OpenZFS :-(

Keep in mind the list is rather self-selecting for problems. People
who aren't having problems are unlikely to post their non-problems to
the list.

It'll be interesting to see what other suggestions you get, but I see
it as basically three options in order of increasing risk+effort.

a. Try the clear_cache mount option (one time) and let the file system
stay mounted so the cache is recreated. If the problem happens soon
after again, try nospace_cache. This might buy you time before 4.8 is
out, which has a bunch of new enospc code in it.

b. Recreate the file system. For reasons not well understood, some
file systems just get stuck in this state with bogus enospc claims.

c. Take some risk and use 4.8 rc1 once it's out. Just make sure to
keep backups. I have no idea to what degree the new enospc code can
help well used existing systems already having enospc issues, vs the
code prevents the problem from happening in the first place. So you
may end up at b. anyway.
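
(To make option a. concrete - a minimal sketch, assuming the device and
mountpoint reported elsewhere in this thread; per the above, clear_cache
only needs to be passed on one mount, after which the free-space cache is
rebuilt while the filesystem stays mounted:

umount /data3
mount -o clear_cache /dev/mapper/cryptedResourceData3 /data3

If the bogus ENOSPC returns soon afterwards, the same mount can be done
with -o nospace_cache instead.)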


-- 
Chris Murphy


Re: 6TB partition, Data only 2TB - aka When you haven't hit the "usual" problem

2016-08-04 Thread Lutz Vieweg

Hi,

I was today hit by what I think is probably the same bug:
A btrfs on a close-to-4TB sized block device, only half filled
to almost exactly 2 TB, suddenly says "no space left on device"
upon any attempt to write to it. The filesystem was NOT automatically
switched to read-only by the kernel, I should mention.

Re-mounting (which is a pain as this filesystem is used for
$HOMEs of a multitude of active users who I have to kick from
the server for doing things like re-mounting) removed the symptom
for now, but from what I can read in linux-btrfs mailing list
archives, it is pretty likely the symptom will re-appear.

Here are some more details:

Software versions:

linux-4.6.1 (vanilla from kernel.org)
btrfs-progs v4.1


Info obtained while the symptom occurred (before re-mount):

> btrfs filesystem show /data3
Label: 'data3'  uuid: f4c69d29-62ac-4e15-a825-c6283c8fd74c
Total devices 1 FS bytes used 2.05TiB
devid    1 size 3.64TiB used 2.16TiB path /dev/mapper/cryptedResourceData3


(/dev/mapper/cryptedResourceData3 is a dm-crypt device,
which is based on a DRBD block device, which is based
on locally attached SATA disks on two servers - no trouble
with that setup for years, no I/O-errors or such, same
kind of block-device stack also used for another btrfs
and some XFS filesystems.)


> btrfs filesystem df /data3
Data, single: total=2.11TiB, used=2.01TiB
System, single: total=4.00MiB, used=256.00KiB
Metadata, single: total=48.01GiB, used=36.67GiB
GlobalReserve, single: total=512.00MiB, used=5.52MiB
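
(Worth noting, from the two outputs above: only 2.16TiB of the 3.64TiB device
is allocated to chunks, i.e. roughly 3.64TiB - 2.16TiB ≈ 1.48TiB is still
unallocated, and the data chunks themselves still have about
2.11TiB - 2.01TiB ≈ 0.1TiB free - so this is not the usual "everything
already allocated to chunks" ENOSPC situation described in the btrfs FAQ.)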


Currently and at the time the bug occurred no snapshots existed
on "/data3". A snapshot is created once per night, a backup
created, then the snapshot is removed again.
There is lots of mixed I/O-activity during the day, both from interactive
users and from automatic build processes and such.
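
(A minimal sketch of what such a nightly cycle can look like - the snapshot
name, the backup destination and the use of rsync are hypothetical
placeholders, only the btrfs subvolume commands are standard:

# create a read-only snapshot to back up from
btrfs subvolume snapshot -r /data3 /data3/.nightly-snap
# run the backup against the frozen view (placeholder for the real backup tool)
rsync -a /data3/.nightly-snap/ /backup/data3/
# remove the snapshot once the backup has completed
btrfs subvolume delete /data3/.nightly-snap
)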

dmesg output from the time the "no space left on device"-symptom
appeared:


[5171203.601620] WARNING: CPU: 4 PID: 23208 at fs/btrfs/inode.c:9261 
btrfs_destroy_inode+0x263/0x2a0 [btrfs]
[5171203.602719] Modules linked in: dm_snapshot dm_bufio fuse btrfs xor 
raid6_pq nf_conntrack_ipv4 nf_defrag_ipv4 xt_conntrack nf_conntrack ipt_REJECT 
nf_reject_ipv4 tun ebtable_filter ebtables ip6table_filter ip6_tables 
iptable_filter drbd lru_cache bridge stp llc kvm_amd kvm irqbypass 
ghash_clmulni_intel amd64_edac_mod ses edac_mce_amd enclosure edac_core 
sp5100_tco pcspkr k10temp fam15h_power sg i2c_piix4 shpchp acpi_cpufreq nfsd 
auth_rpcgss nfs_acl lockd grace sunrpc ip_tables xfs libcrc32c dm_crypt mgag200 
drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm ixgbe 
crct10dif_pclmul crc32_pclmul crc32c_intel igb ahci libahci aesni_intel 
glue_helper libata lrw gf128mul ablk_helper mdio cryptd ptp serio_raw 
i2c_algo_bit pps_core i2c_core dca sd_mod dm_mirror dm_region_hash dm_log dm_mod

...

[5171203.617358] Call Trace:
[5171203.618543]  [] dump_stack+0x4d/0x6c
[5171203.619568]  [] __warn+0xe3/0x100
[5171203.620660]  [] warn_slowpath_null+0x1d/0x20
[5171203.621779]  [] btrfs_destroy_inode+0x263/0x2a0 [btrfs]
[5171203.622716]  [] destroy_inode+0x3b/0x60
[5171203.623774]  [] evict+0x11c/0x180

...

[5171230.306037] WARNING: CPU: 18 PID: 12656 at fs/btrfs/extent-tree.c:4233 
btrfs_free_reserved_data_space_noquota+0xf3/0x100 [btrfs]
[5171230.310298] Modules linked in: dm_snapshot dm_bufio fuse btrfs xor 
raid6_pq nf_conntrack_ipv4 nf_defrag_ipv4 xt_conntrack nf_conntrack ipt_REJECT 
nf_reject_ipv4 tun ebtable_filter ebtables ip6table_filter ip6_tables 
iptable_filter drbd lru_cache bridge stp llc kvm_amd kvm irqbypass 
ghash_clmulni_intel amd64_edac_mod ses edac_mce_amd enclosure edac_core 
sp5100_tco pcspkr k10temp fam15h_power sg i2c_piix4 shpchp acpi_cpufreq nfsd 
auth_rpcgss nfs_acl lockd grace sunrpc ip_tables xfs libcrc32c dm_crypt mgag200 
drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm ixgbe 
crct10dif_pclmul crc32_pclmul crc32c_intel igb ahci libahci aesni_intel 
glue_helper libata lrw gf128mul ablk_helper mdio cryptd ptp serio_raw 
i2c_algo_bit pps_core i2c_core dca sd_mod dm_mirror dm_region_hash dm_log dm_mod

...

[5171230.341755] Call Trace:
[5171230.344119]  [] dump_stack+0x4d/0x6c
[5171230.346444]  [] __warn+0xe3/0x100
[5171230.348709]  [] warn_slowpath_null+0x1d/0x20
[5171230.350976]  [] 
btrfs_free_reserved_data_space_noquota+0xf3/0x100 [btrfs]
[5171230.353212]  [] btrfs_clear_bit_hook+0x27f/0x350 [btrfs]
[5171230.355392]  [] ? free_extent_state+0x1a/0x20 [btrfs]
[5171230.357556]  [] clear_state_bit+0x66/0x1d0 [btrfs]
[5171230.359698]  [] __clear_extent_bit+0x224/0x3a0 [btrfs]
[5171230.361810]  [] ? btrfs_update_reserved_bytes+0x45/0x130 
[btrfs]
[5171230.363960]  [] extent_clear_unlock_delalloc+0x7a/0x2d0 
[btrfs]
[5171230.366079]  [] ? kmem_cache_alloc+0x17d/0x1f0
[5171230.368204]  [] ? __btrfs_add_ordered_extent+0x43/0x310 
[btrfs]
[5171230.370350]  [] ? __btrfs_add_ordered_extent+0x1fb/0x310 
[btrfs]
[5171230.372491]  [] cow_file_range+0x28a/0x460 [btrfs]

Re: 6TB partition, Data only 2TB - aka When you haven't hit the "usual" problem

2016-01-01 Thread cheater00 .
Here is the info requested, if that helps anyone.

# uname -a
Linux SX20S 4.3.0-040300rc7-generic #201510260712 SMP Mon Oct 26
11:27:59 UTC 2015 i686 i686 i686 GNU/Linux
# aptitude show btrfs-tools
Package: btrfs-tools
State: installed
Automatically installed: no
Version: 4.2.1+ppa1-1~ubuntu15.10.1
# btrfs --version
btrfs-progs v4.2.1
# btrfs fi show Media
Label: 'Media'  uuid: b397b7ef-6754-4ba4-8b1a-fbf235aa1cf8
Total devices 1 FS bytes used 1.92TiB
devid    1 size 5.46TiB used 1.93TiB path /dev/sdd1

btrfs-progs v4.2.1
# btrfs fi usage Media
Overall:
Device size:   5.46TiB
Device allocated:   1.93TiB
Device unallocated:   3.52TiB
Device missing: 0.00B
Used:   1.93TiB
Free (estimated):   3.53TiB (min: 1.76TiB)
Data ratio:  1.00
Metadata ratio:  2.00
Global reserve: 512.00MiB (used: 0.00B)

Data,single: Size:1.92TiB, Used:1.92TiB
   /dev/sdd1   1.92TiB

Metadata,single: Size:8.00MiB, Used:0.00B
   /dev/sdd1   8.00MiB

Metadata,DUP: Size:5.00GiB, Used:3.32GiB
   /dev/sdd1  10.00GiB

System,single: Size:4.00MiB, Used:0.00B
   /dev/sdd1   4.00MiB

System,DUP: Size:8.00MiB, Used:224.00KiB
   /dev/sdd1  16.00MiB

Unallocated:
   /dev/sdd1   3.52TiB



# btrfs-show-super /dev/sdd1
superblock: bytenr=65536, device=/dev/sdd1
-
csum 0xae174f16 [match]
bytenr 65536
flags 0x1
( WRITTEN )
magic _BHRfS_M [match]
fsid b397b7ef-6754-4ba4-8b1a-fbf235aa1cf8
label Media
generation 11983
root 34340864
sys_array_size 226
chunk_root_generation 11982
root_level 1
chunk_root 21135360
chunk_root_level 1
log_root 0
log_root_transid 0
log_root_level 0
total_bytes 6001173463040
bytes_used 2115339448320
sectorsize 4096
nodesize 16384
leafsize 16384
stripesize 4096
root_dir 6
num_devices 1
compat_flags 0x0
compat_ro_flags 0x0
incompat_flags 0x61
( MIXED_BACKREF |
 BIG_METADATA |
 EXTENDED_IREF )
csum_type 0
csum_size 4
cache_generation 11983
uuid_tree_generation 11983
dev_item.uuid 819e1c8a-5e55-4992-81d3-f22fdd088dc9
dev_item.fsid b397b7ef-6754-4ba4-8b1a-fbf235aa1cf8 [match]
dev_item.type 0
dev_item.total_bytes 6001173463040
dev_item.bytes_used 2124972818432
dev_item.io_align 4096
dev_item.io_width 4096
dev_item.sector_size 4096
dev_item.devid 1
dev_item.dev_group 0
dev_item.seek_speed 0
dev_item.bandwidth 0
dev_item.generation 0

I did mount Media -o enospc_debug and now mount shows:
/dev/sdd1 on /media/cheater/Media type btrfs
(rw,nosuid,nodev,enospc_debug,_netdev)


On Wed, Dec 30, 2015 at 11:13 PM, Chris Murphy  wrote:
> kernel and btrfs-progs versions
> and output from:
> 'btrfs fi show '
> 'btrfs fi usage '
> 'btrfs-show-super '
> 'df -h'
>
> Then umount the volume, and mount with option enospc_debug, and try to
> reproduce the problem, then include everything from dmesg from the
> time the volume was mounted.
>
> --
> Chris Murphy

On Sat, Jan 2, 2016 at 3:09 AM, cheater00 .  wrote:
> I have been unable to reproduce so far.


Re: 6TB partition, Data only 2TB - aka When you haven't hit the "usual" problem

2016-01-01 Thread cheater00 .
I have been unable to reproduce so far.


6TB partition, Data only 2TB - aka When you haven't hit the "usual" problem

2015-12-30 Thread cheater00 .
Hi,
I have a 6TB partition here; it filled up while still just under 2TB
were on it. btrfs fi df showed that Data is 1.92TB:

Data, single: total=1.92TiB, used=1.92TiB
System, DUP: total=8.00MiB, used=224.00KiB
System, single: total=4.00MiB, used=0.00B
Metadata, DUP: total=5.00GiB, used=3.32GiB
Metadata, single: total=8.00MiB, used=0.00B
GlobalReserve, single: total=512.00MiB, used=0.00B

btrfs fs resize max . did nothing, I also tried resize -1T and resize
+1T and that did nothing as well. On IRC I was directed to this:

https://btrfs.wiki.kernel.org/index.php/FAQ#if_your_device_is_large_.28.3E16GiB.29

"When you haven't hit the "usual" problem

If the conditions above aren't true (i.e. there's plenty of
unallocated space, or there's lots of unused metadata allocation),
then you may have hit a known but unresolved bug. If this is the case,
please report it to either the mailing list, or IRC. In some cases, it
has been possible to deal with the problem, but the approach is new,
and we would like more direct contact with people experiencing this
particular bug."

What do I do now? It's kind of important to me to get that free space.
I'm really jonesing for that free space.

Thanks.

(btw, so far I haven't been able to follow up on that unrelated thread
from a while back. But I hope to be able to do that sometime in
January.)


Re: 6TB partition, Data only 2TB - aka When you haven't hit the "usual" problem

2015-12-30 Thread Chris Murphy
kernel and btrfs-progs versions
and output from:
'btrfs fi show '
'btrfs fi usage '
'btrfs-show-super '
'df -h'

Then umount the volume, and mount with option enospc_debug, and try to
reproduce the problem, then include everything from dmesg from the
time the volume was mounted.
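
(A minimal sketch of that procedure, filled in with the device and mountpoint
that appear in the reply with the requested info; enospc_debug simply makes
btrfs log additional detail when an ENOSPC condition is hit:

umount /media/cheater/Media
mount -o enospc_debug /dev/sdd1 /media/cheater/Media
# ... reproduce the problem, then capture the kernel log since the mount:
dmesg > enospc-dmesg.txt
)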

-- 
Chris Murphy