On 2018-03-05 10:28, Christoph Hellwig wrote:
On Sat, Mar 03, 2018 at 06:59:26AM +, Duncan wrote:
> Indeed. Preallocation with COW doesn't make the sense it does on an
> overwrite-in-place filesystem.

It makes a whole lot of sense; it is just a little harder to implement.
There is no reason not to preallocate specific space, or if you
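Preallocation itself is visible from userspace with fallocate(1); a minimal, filesystem-agnostic sketch (the file name and size are illustrative, not from the thread):

```shell
# fallocate(1) asks the filesystem to reserve blocks before any data is
# written.  On a CoW filesystem this guarantees space for the first write,
# but later overwrites may still allocate fresh extents.
fallocate -l 16777216 prealloc.img            # reserve 16 MiB up front
stat -c '%s bytes apparent, %b blocks allocated' prealloc.img
```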
vinayak hegde posted on Thu, 01 Mar 2018 14:56:46 +0530 as excerpted:
> This will happen over and over again until we have completely
> overwritten the original extent, at which point your space usage will go
> back down to ~302g. We split big extents with cow, so unless you've got
> lots of space
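The transient blow-up described above can be bounded with simple arithmetic. The 302 GiB figure comes from the thread; the worst case assumes almost every byte is rewritten before the last original byte is:

```shell
# Until every byte of the original extents has been overwritten, btrfs keeps
# those extents referenced, while each random CoW write allocates a new one.
orig_gib=302           # size of the fully written file (from the thread)
rewritten_gib=302      # worst case: nearly the whole file rewritten once
pinned_gib=$((orig_gib + rewritten_gib))
echo "up to ${pinned_gib} GiB referenced transiently"
# prints: up to 604 GiB referenced transiently
```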
On 2018-03-01 05:18, Andrei Borzenkov wrote:
On Thu, Mar 1, 2018 at 12:26 PM, vinayak hegde wrote:
> No, there is no opened file which was deleted; I did umount and mount
> again, and rebooted as well.
>
> I think I am hitting the below issue: a lot of random writes were
> happening, the file is not fully written, and it's a sparse file.
> Let me try with disabling COW.
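Sparse files are one reason du and apparent size diverge; a filesystem-agnostic sketch (the file name is illustrative):

```shell
# Holes in a sparse file occupy no data extents, so the apparent size
# (stat %s) can far exceed the allocated size (stat %b * 512, or du).
truncate -s 1G sparse.img                # a 1 GiB hole, no data written
stat -c 'apparent: %s  allocated blocks: %b' sparse.img
du -h sparse.img                         # reports (close to) zero
```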
On 2018-02-28 14:54, Duncan wrote:
Austin S. Hemmelgarn posted on Wed, 28 Feb 2018 14:24:40 -0500 as
excerpted:

>> I believe this effect is what Austin was referencing when he suggested
>> the defrag, tho defrag won't necessarily /entirely/ clear it up. One
>> way to be /sure/ it's cleared up would be to rewrite the entire file,
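Rewriting the whole file, as suggested, can be done with a plain copy. On btrfs the --reflink=never flag matters: newer coreutils default to --reflink=auto, and a reflink clone would share the very extents you are trying to release (file names are illustrative):

```shell
# Sequentially rewriting the file lets the old, partially-overwritten
# extents be freed once nothing references them.
cp --reflink=never bigfile bigfile.new   # force a real byte copy, not a clone
mv bigfile.new bigfile                   # replace the fragmented original
sync                                     # let the old extents be released
```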
On 2018-02-28 14:09, Duncan wrote:
vinayak hegde posted on Tue, 27 Feb 2018 18:39:51 +0530 as excerpted:

> I am using btrfs, but I am seeing du -sh and df -h showing a huge size
> difference on ssd.
>
> mount:
> /dev/drbd1 on /dc/fileunifier.datacache type btrfs
> (rw,noatime,nodiratime,flushoncommit,discard,nospace_cache,recovery,commit=5,subvolid=5,subvol=/)
On Wed, Feb 28, 2018 at 9:01 AM, vinayak hegde wrote:
> I ran full defragment and balance both, but it didn't help.

Showing the same information immediately after the full defragment would be helpful.

> My created and accounting usage files match the du -sh output.
> But I am not getting why btrfs internals use so much extra space.
> My worry is that I will get a no-space error earlier than I expect.
> Is it expected with btrfs internal that it wil
On 2018-02-27 08:09, vinayak hegde wrote:
> I am using btrfs, but I am seeing du -sh and df -h showing a huge size
> difference on ssd.
>
> mount:
> /dev/drbd1 on /dc/fileunifier.datacache type btrfs
> (rw,noatime,nodiratime,flushoncommit,discard,nospace_cache,recovery,commit=5,subvolid=5,subvol=/)
>
> du -sh /dc/fileunifier.datacache/ - 331G
>
> df -h
> /de
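To see where the difference goes, btrfs's own accounting tools are more informative than plain df. These need the filesystem mounted and, typically, root; the path follows the original report:

```shell
# Break down allocation by chunk type and by shared/exclusive extents.
btrfs filesystem df /dc/fileunifier.datacache      # data/metadata/system usage
btrfs filesystem usage /dc/fileunifier.datacache   # raw vs. usable accounting
btrfs filesystem du -s /dc/fileunifier.datacache   # shared vs. exclusive bytes
```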