On Sun, Jul 12, 2015 at 2:54 AM, Martin Steigerwald <mar...@lichtvoll.de> wrote:
> Note however: Even without "sync" BTRFS will defragment the file. It may
> just take a while till the new extents are written. So there is no need to
> call "sync" after btrfs fi defrag.

My purpose in calling sync in that transcript was to ensure each
invocation of filefrag returned the correct results.
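
For anyone playing along at home, the pattern was roughly this (the
path and the extent count are illustrative, not from a real run):

  $ btrfs filesystem defragment /path/to/file
  $ sync   # flush the newly allocated extents to disk
  $ filefrag /path/to/file
  /path/to/file: 12 extents found

Without the sync, filefrag could still report the old, fragmented
layout, because the rewritten extents hadn't been written out yet.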

> Why are you trying to defragment anyway? What are you trying to achieve /
> solve?

There are several reasons. Some are technical and some are emotional.

Technical reason 1: I sometimes play computer games that are delivered
through Steam. If you're not familiar with it, Steam is a sort of
online storefront application for Linux and other platforms that--I
believe--delivers game patches by overwriting selected ranges in each
game's files. Lots of small overwrites are the main culprit behind
small files with thousands or tens of thousands of fragments on btrfs.
All I know for sure is that some game files have become heavily
fragmented and load times seem excessive, especially for games that
are old relative to my hardware.
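
(To put "heavily fragmented" in perspective, here's the kind of check
I mean; the game name and the extent count are hypothetical:

  $ cd ~/.local/share/Steam/steamapps/common/SomeGame
  $ filefrag data.pak
  data.pak: 18329 extents found

The actual numbers vary per game, but that's the order of magnitude
I've been seeing.)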

Technical reason 2: Whenever a new Ubuntu release comes out, I make
sure to download and burn the ISO before I upgrade my OS. Usually I
download the ISO using bittorrent, whose random-order piece writes
create heavily fragmented downloads that severely impact read speeds
from my non-SSD hard disk. That fragmentation drives up disk load
considerably when I seed the download back to other people. I've
solved this problem in
the past by pausing the bittorrent client, copying the file to another
location, deleting the original, moving the copy back into place with
the same name, and resuming the client, but that's a pain. I'd rather
defragment instead.
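
In case it's useful to anyone, that manual workaround looks roughly
like this (filename hypothetical):

  $ # pause the torrent first
  $ cp ubuntu.iso ubuntu.iso.defrag   # sequential copy lands mostly contiguous
  $ rm ubuntu.iso
  $ mv ubuntu.iso.defrag ubuntu.iso
  $ # then resume the torrent and keep seeding

when what I'd really like is just:

  $ btrfs fi defrag ubuntu.iso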

Emotional reason 1: Heavily fragmented files on rotating media just
feel slovenly to me, like a person wearing dirty clothes or a bathroom
that hasn't been cleaned in a while.

Emotional reason 2: I switched from Windows to Linux in the late 90s.
At that time, RAM was expensive, so I often found myself in situations
where my kernel was dropping pages from the filesystem cache to make
room for application memory. Rereading those files later was very slow
if they were fragmented, so I religiously defragmented my Windows 95
and 98 machines every week. When I switched to Linux, I was dismayed
to learn that ext2 had no defragmentation tool, and I was very
skeptical of claims that "Linux doesn't need defragmentation",
especially considering that most of the people making those claims
probably had no idea how fragmented their own filesystems were. In the
years that followed, I was continually disappointed as stable
defragmentation tools did not appear for ext2 or any other popular
Linux filesystem, so I'm relieved and excited not to be in that
predicament anymore.

>> I'm now seeing that recursive defragging doesn't work the way I expect.
>> Running
>>
>> $ btrfs fi defrag -r /path/to
>>
>> returns almost immediately and does not reduce the number of fragments
>> in /path/to/file. However, running
>>
>> $ btrfs fi defrag /path/to/file
>>
>> does reduce the number of fragments.
>
> Well, I have no idea about this one. I have the same behavior with
>
>> >> btrfs --version: Btrfs v3.17
>
> v4.0

Thanks for checking,
Eric