On 9/13/2012 5:21 AM, Veljko wrote:
> On Tue, Sep 11, 2012 at 05:44:46PM -0500, Stan Hoeppner wrote:
>> On 9/11/2012 10:29 AM, Jon Dowland wrote:
>>
>>> Actually, lots and lots of small files is the worst use-case for
>>> rsnapshot, and the reason I believe it should be avoided. It creates
>>> large hard-link trees, and with lots and lots of small files the
>>> filesystem metadata for the trees can consume more space than the
>>> files themselves. Also, performing operations that need to recurse
>>> over large link trees (such as simply removing an old increment) can
>>> be very slow in that case.
>>
>> Which is why I recommend XFS.  It is exceptionally fast at traversing
>> large btrees.  You'll need the 3.2 bpo kernel for Squeeze.  The old as
>> dirt 2.6.32 kernel doesn't contain any of the recent (last 3 years)
>> metadata optimizations.

> Unlike my boss, whom I failed to persuade to buy a RAID card, you
> convinced me to use XFS. I created a 1TB LV for backup (and will resize
> it when necessary). Will the default XFS settings be OK in my case?

Due to its allocation group design, continually growing an XFS
filesystem in such small increments, with this metadata-heavy backup
workload, will yield very poor performance.  Additionally, putting an
XFS filesystem atop an LV is not recommended, as it cannot properly
align journal writeout to the underlying RAID stripe width.  While this
is more critical with parity arrays, it also affects non-parity striped
arrays.
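
(For reference, stripe geometry can also be passed to mkfs.xfs by hand
with the -d su=N,sw=M options.  A hypothetical example, assuming a
64KiB md chunk size and a 4-drive RAID10, i.e. 2 data spindles:

  # su = md chunk size, sw = number of data spindles
  # (4 drives in RAID10 = 2 data spindles)
  mkfs.xfs -d su=64k,sw=2 /dev/somevolume

But on an LV you have no guarantee the volume even starts on a stripe
boundary, so manual values may still not line up.)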

Thus my advice to you is:

Do not use LVM.  Directly format the RAID10 device using the mkfs.xfs
defaults.  mkfs.xfs will read the md configuration and automatically
align the filesystem to the stripe width.
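
A minimal sketch of what that looks like, assuming the array is
/dev/md0 and the mount point is /backup (both hypothetical names):

  # mkfs.xfs queries the md layer and picks sunit/swidth itself
  mkfs.xfs /dev/md0
  mount /dev/md0 /backup
  # verify the detected geometry; sunit/swidth should be non-zero
  # and match the md chunk size and data spindle count
  xfs_info /backup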

When the filesystem reaches 85% capacity, add 4 more drives and create
another RAID10 array.  At that point we'll teach you how to create a
linear device of the two arrays and grow XFS across the 2nd array.
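
To give a rough idea of the shape of it (a sketch only -- device names
are hypothetical, do it offline, and have a backup first): mdadm can
concatenate the two arrays into a superblock-less linear array with
--build, which leaves the data on the first array intact, after which
xfs_growfs extends the filesystem into the new space:

  umount /backup
  # --build creates a legacy array with no on-disk metadata, so the
  # existing XFS filesystem at the start of /dev/md0 is preserved
  mdadm --build /dev/md2 --level=linear --raid-devices=2 /dev/md0 /dev/md1
  mount /dev/md2 /backup
  # grow XFS to fill the concatenated device
  xfs_growfs /backup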

-- 
Stan

