Hey.
Without the last patches on 4.17:
checking extents
checking free space cache
checking fs roots
ERROR: errors found in fs roots
Checking filesystem on /dev/mapper/system
UUID: 6050ca10-e778-4d08-80e7-6d27b9c89b3c
found 619543498752 bytes used, error(s) found
total csum bytes: 602382204
total
Hello!
FS_IOC_FIEMAP on btrfs seems to be returning fe_physical values that
don't always correspond to the actual on-disk data locations. For some
files the values match, but e.g. for this file:
# filefrag -v foo
Filesystem type is: 9123683e
File size of foo is 4096 (1 block of 4096 bytes)
ext:
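
For anyone who wants to poke at this directly rather than through filefrag, below is a minimal, untested sketch that queries FS_IOC_FIEMAP and prints each extent's fe_logical/fe_physical pair. The structures and flags are the standard ones from linux/fiemap.h; the file name and 32-extent cap are just illustrative:

/*
 * Dump FIEMAP extents for a file: logical offset, "physical" address,
 * length, flags.  Minimal sketch; real code should loop until it sees
 * FIEMAP_EXTENT_LAST instead of assuming 32 extents is enough.
 */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>
#include <linux/fiemap.h>

int main(int argc, char **argv)
{
	if (argc != 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}

	int fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	size_t sz = sizeof(struct fiemap) + 32 * sizeof(struct fiemap_extent);
	struct fiemap *fm = calloc(1, sz);

	fm->fm_start = 0;
	fm->fm_length = ~0ULL;			/* map the whole file */
	fm->fm_flags = FIEMAP_FLAG_SYNC;	/* flush delalloc first */
	fm->fm_extent_count = 32;

	if (ioctl(fd, FS_IOC_FIEMAP, fm) < 0) {
		perror("FS_IOC_FIEMAP");
		return 1;
	}

	for (unsigned i = 0; i < fm->fm_mapped_extents; i++) {
		struct fiemap_extent *fe = &fm->fm_extents[i];

		printf("ext %u: logical %llu physical %llu len %llu flags 0x%x\n",
		       i,
		       (unsigned long long)fe->fe_logical,
		       (unsigned long long)fe->fe_physical,
		       (unsigned long long)fe->fe_length,
		       fe->fe_flags);
	}

	free(fm);
	close(fd);
	return 0;
}

As far as I know, btrfs fills fe_physical with addresses in its own logical address space (the numbers the chunk tree later maps to per-device offsets), so values that don't match the raw device layout are expected rather than corruption; on a fresh single-device filesystem some addresses happen to coincide, which would explain why the values match for some files.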
On 27.10.2018 18:45, Lennert Buytenhek wrote:
> Hello!
>
> FS_IOC_FIEMAP on btrfs seems to be returning fe_physical values that
> don't always correspond to the actual on-disk data locations. For some
> files the values match, but e.g. for this file:
>
> # filefrag -v foo
> Filesystem type is: 9123683e
On Wed, Oct 24, 2018 at 01:07:25PM +0800, Qu Wenruo wrote:
> > saruman:/mnt/btrfs_pool1# btrfs balance start -musage=80 -v .
> > Dumping filters: flags 0x6, state 0x0, force is off
> > METADATA (flags 0x2): balancing, usage=80
> > SYSTEM (flags 0x2): balancing, usage=80
> > Done, had to relocat
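
(For reference, the ioctl behind that command line looks roughly like the untested sketch below, assuming the balance constants exported in the uapi linux/btrfs.h. Note how "flags 0x6" in the dump is METADATA|SYSTEM and "flags 0x2" on each filter line is the usage-filter bit.)

/*
 * Roughly what `btrfs balance start -musage=80` issues: balance only
 * metadata and system chunks that are at most 80% used.  Untested
 * sketch; constants per the uapi linux/btrfs.h.
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/btrfs.h>

int main(int argc, char **argv)
{
	if (argc != 2) {
		fprintf(stderr, "usage: %s <mountpoint>\n", argv[0]);
		return 1;
	}

	int fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	struct btrfs_ioctl_balance_args args;
	memset(&args, 0, sizeof(args));

	/* 0x6 in the dump above: metadata + system chunks */
	args.flags = BTRFS_BALANCE_METADATA | BTRFS_BALANCE_SYSTEM;

	/* 0x2 on each filter line: only the usage filter is active */
	args.meta.flags = BTRFS_BALANCE_ARGS_USAGE;
	args.meta.usage = 80;
	args.sys.flags = BTRFS_BALANCE_ARGS_USAGE;
	args.sys.usage = 80;

	/* Blocks until the balance completes, fails or is cancelled. */
	if (ioctl(fd, BTRFS_IOC_BALANCE_V2, &args) < 0) {
		perror("BTRFS_IOC_BALANCE_V2");
		return 1;
	}

	printf("balance done\n");
	close(fd);
	return 0;
}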
On 2018-10-27 01:42 PM, Marc MERLIN wrote:
>
> I've been using btrfs for a long time now but I've never had a
> filesystem where I had 15GB apparently unusable (7%) after a balance.
>
The space isn't unusable. It's just allocated. (It's used in the sense
that it's reserved for data chunks.)
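
The allocated-versus-used distinction is exactly what BTRFS_IOC_SPACE_INFO reports (it is the ioctl behind btrfs filesystem df): total_bytes is what has been allocated to chunks of each type, used_bytes is what is actually filled. An untested sketch, with the mount point taken from argv:

/*
 * Print allocated (total_bytes) vs. actually used (used_bytes) per
 * btrfs space type, to show "allocated but not full" chunks.
 * Untested sketch; structures per the uapi linux/btrfs.h.
 */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/btrfs.h>

int main(int argc, char **argv)
{
	if (argc != 2) {
		fprintf(stderr, "usage: %s <mountpoint>\n", argv[0]);
		return 1;
	}

	int fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* First call with zero slots just asks how many entries exist. */
	struct btrfs_ioctl_space_args probe = { .space_slots = 0 };
	if (ioctl(fd, BTRFS_IOC_SPACE_INFO, &probe) < 0) {
		perror("BTRFS_IOC_SPACE_INFO");
		return 1;
	}

	size_t sz = sizeof(probe) +
		    probe.total_spaces * sizeof(struct btrfs_ioctl_space_info);
	struct btrfs_ioctl_space_args *args = calloc(1, sz);
	args->space_slots = probe.total_spaces;

	if (ioctl(fd, BTRFS_IOC_SPACE_INFO, args) < 0) {
		perror("BTRFS_IOC_SPACE_INFO");
		return 1;
	}

	for (unsigned i = 0; i < args->total_spaces; i++) {
		struct btrfs_ioctl_space_info *s = &args->spaces[i];

		printf("type 0x%llx: allocated %llu, used %llu\n",
		       (unsigned long long)s->flags,
		       (unsigned long long)s->total_bytes,
		       (unsigned long long)s->used_bytes);
	}

	free(args);
	close(fd);
	return 0;
}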
On Sat, Oct 27, 2018 at 02:12:02PM -0400, Remi Gauvin wrote:
> On 2018-10-27 01:42 PM, Marc MERLIN wrote:
>
> >
> > I've been using btrfs for a long time now but I've never had a
> > filesystem where I had 15GB apparently unusable (7%) after a balance.
> >
>
> The space isn't unusable. It's just allocated.
On 27.10.2018 21:12, Remi Gauvin wrote:
> On 2018-10-27 01:42 PM, Marc MERLIN wrote:
>
>>
>> I've been using btrfs for a long time now but I've never had a
>> filesystem where I had 15GB apparently unusable (7%) after a balance.
>>
>
> The space isn't unusable. It's just allocated. (It's used in the
> sense that it's reserved for data chunks.)
On 2018-10-27 04:19 PM, Marc MERLIN wrote:
> Thanks for confirming. Because I always have snapshots for btrfs
> send/receive, defrag will duplicate as you say, but once the older
> snapshots get freed up, the duplicate blocks should go away, correct?
>
> Back to usage, thanks for pointing out tha
I'm using btrfs and snapper on a system with an SSD. On this system
when I run `snapper -c root ls` (where `root` is the snapper config
for /), the process takes a very long time and top shows the following
process using 100% of the CPU:
kworker/u8:6+btrfs-qgroup-rescan
I have multiple comput
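
One way to confirm it really is a quota rescan chewing CPU is to ask the kernel for the rescan state directly. Below is an untested sketch using BTRFS_IOC_QUOTA_RESCAN_STATUS; the args layout is from the uapi linux/btrfs.h, and my reading is that a non-zero flags field means a rescan is in progress, which matches how btrfs-progs reports it:

/*
 * Query qgroup rescan state: is one running, and how far has it got?
 * Untested sketch; args layout per the uapi linux/btrfs.h.
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/btrfs.h>

int main(int argc, char **argv)
{
	if (argc != 2) {
		fprintf(stderr, "usage: %s <mountpoint>\n", argv[0]);
		return 1;
	}

	int fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	struct btrfs_ioctl_quota_rescan_args args;
	memset(&args, 0, sizeof(args));

	if (ioctl(fd, BTRFS_IOC_QUOTA_RESCAN_STATUS, &args) < 0) {
		perror("BTRFS_IOC_QUOTA_RESCAN_STATUS");
		return 1;
	}

	if (args.flags)		/* non-zero: rescan in progress */
		printf("qgroup rescan running, progress at objectid %llu\n",
		       (unsigned long long)args.progress);
	else
		printf("no qgroup rescan in progress\n");

	close(fd);
	return 0;
}

If nothing on the system actually needs qgroups, running btrfs quota disable on the filesystem is the usual way to stop the rescans altogether.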
On 2018/10/28 1:42 AM, Marc MERLIN wrote:
> On Wed, Oct 24, 2018 at 01:07:25PM +0800, Qu Wenruo wrote:
>>> saruman:/mnt/btrfs_pool1# btrfs balance start -musage=80 -v .
>>> Dumping filters: flags 0x6, state 0x0, force is off
>>> METADATA (flags 0x2): balancing, usage=80
>>> SYSTEM (flags 0x2): balancing, usage=80
On 2018/10/28 6:58 AM, Dave wrote:
> I'm using btrfs and snapper on a system with an SSD. On this system
> when I run `snapper -c root ls` (where `root` is the snapper config
> for /), the process takes a very long time and top shows the following
> process using 100% of the CPU:
>
> kworker/u8:6+btrfs-qgroup-rescan
On Sun, Oct 28, 2018 at 07:27:22AM +0800, Qu Wenruo wrote:
> > I can't drop all the snapshots since at least two is used for btrfs
> > send/receive backups.
> > However, if I delete more snapshots, and do a full balance, you think
> > it'll free up more space?
>
> No.
>
> You're already too worri
On 2018/10/27 11:45 PM, Lennert Buytenhek wrote:
> Hello!
>
> FS_IOC_FIEMAP on btrfs seems to be returning fe_physical values that
> don't always correspond to the actual on-disk data locations. For some
> files the values match, but e.g. for this file:
>
> # filefrag -v foo
> Filesystem type is: 9123683e