Re: [BackupPC-users] Filesystem Recommendation for 100 TB

2020-04-30 Thread G.W. Haywood via BackupPC-users

Hi there,

On Tue, Apr 28, 2020 at 1:02 PM Andrew Maksymowsky wrote:


I have no strong preference for either xfs or zfs (our team is
comfortable with either); I was mainly just curious to hear about what
folks were using and if they've run into any major issues or found
particular file-system features they really like when coupled with
backuppc.


Data volumes of the systems I back up approach those with which you're
working, and I have had no issues with ext4.  Being very conservative
about filesystem choice now (after a disastrous outing with ReiserFS,
a little over a decade ago) I haven't yet taken the plunge with any of
the more modern filesystems.  It's probably past time for me to put a
toe in the water once more, but there are always more pressing issues
and I *really* don't need another episode like that with Reiser.

At one time I routinely used to modify the BackupPC GUI to display the
ext4 inode usage on BackupPC systems, but happily I no longer need to
do that. :)  Although I'd have said my systems tend to have lots of
small files, typically they're only using a few percent of inode
capacity at a few tens of percent of storage capacity; I have no clue what the
fragmentation is like, and likely won't unless something bites me.
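
For anyone who wants to keep an eye on inode and space headroom without
patching the GUI, a minimal sketch in Python along these lines would do;
the pool path /var/lib/backuppc is an assumption, so substitute your own
$TopDir:

#!/usr/bin/env python3
"""Report inode and block usage for the filesystem holding the BackupPC pool."""
import os

POOL_PATH = "/var/lib/backuppc"  # assumption: adjust to your BackupPC $TopDir

def report(path: str) -> None:
    st = os.statvfs(path)
    # Inodes: f_files is the total count, f_ffree the number still free.
    inodes_total = st.f_files
    inodes_used = inodes_total - st.f_ffree
    # Space: f_frsize * f_blocks is the filesystem size in bytes.
    bytes_total = st.f_frsize * st.f_blocks
    bytes_used = st.f_frsize * (st.f_blocks - st.f_bfree)
    print(f"{path}:")
    if inodes_total:  # ZFS and btrfs may report 0 here; their inodes are dynamic
        print(f"  inodes: {inodes_used}/{inodes_total}"
              f" ({100.0 * inodes_used / inodes_total:.1f}% used)")
    print(f"  space:  {bytes_used / 1e12:.2f} of {bytes_total / 1e12:.2f} TB"
          f" ({100.0 * bytes_used / bytes_total:.1f}% used)")

if __name__ == "__main__":
    report(POOL_PATH)

On ext4 the inode count is fixed when the filesystem is created, which is
exactly why it is worth watching; filesystems such as ZFS allocate inodes
dynamically, which is the point made elsewhere in this thread about ZFS
having no inode limit.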

There's no RAID here at all, but there are LVMs, so snapshots became
possible whatever the filesystem.  Although at one time I thought I'd
be using snapshots a lot, and sometimes did, now I seem not to bother
with them.  Large databases tend to be few in number and can probably
be backed up better using the tools provided by the database system
itself; directories containing database files and VMs are specifically
excluded in my BackupPC configurations; some routine data collection
like security camera video is treated specially in the config too, and
what's left is largely configuration and users' home directories.  All
machines run Linux or similar, thankfully no Windows boxes any more.

Just to state one possibly obvious point: the ability to prevent the
filesystem used by BackupPC from writing access times (mounting with
noatime) will probably be important to most users, although I'm aware
that you're interested more in the reliability of the system and this
is a performance issue.  On 1 Gbit/s networks I see backup data rates
ranging from 20 MByte/s for a full backup to 3 GByte/s for an
incremental.  Obviously the network is not the bottleneck, and from
that point of view I think the filesystem probably doesn't matter;
you're looking at CPU, I/O (think SSDs?) and very likely RAM too,
e.g. for rsync transfers, whose memory use can be surprising.
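
If you want to confirm that the pool filesystem really is mounted without
atime updates, a quick Python check like the one below works on Linux (it
just parses /proc/self/mounts); /var/lib/backuppc is again an assumed path:

#!/usr/bin/env python3
"""Check whether the filesystem holding the BackupPC pool is mounted noatime."""
import os

POOL_PATH = "/var/lib/backuppc"  # assumption: adjust to your BackupPC $TopDir

def pool_mount(path):
    """Return (mount point, fstype, options) of the longest mount containing path."""
    real = os.path.realpath(path)
    best = ("", "", [])
    with open("/proc/self/mounts") as mounts:
        for line in mounts:
            _dev, mnt, fstype, opts, *_ = line.split()
            if mnt == "/" or real == mnt or real.startswith(mnt.rstrip("/") + "/"):
                if len(mnt) > len(best[0]):
                    best = (mnt, fstype, opts.split(","))
    return best

if __name__ == "__main__":
    mnt, fstype, opts = pool_mount(POOL_PATH)
    print(f"{POOL_PATH} is on {mnt} ({fstype}), mounted with: {','.join(opts)}")
    if "noatime" in opts:
        print("Good: access-time updates are disabled.")
    elif "relatime" in opts:
        print("relatime limits the damage, but noatime avoids the writes entirely.")
    else:
        print("Consider adding noatime for this mount in /etc/fstab.")

The same check is worth re-running after any remount or fstab change.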

HTH

--

73,
Ged.


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Filesystem Recommendation for 100 TB

2020-04-29 Thread Andrew Maksymowsky
Thanks Robert and everyone for your feedback. Really appreciate it!

On Apr 28, 2020, at 1:45 PM, Robert Trevellyan
<robert.trevell...@gmail.com> wrote:

I've been using ZFS for storage on Ubuntu Server for several years now. Among 
other things, the server runs two BackupPC 4 instances in LXC containers. One 
backs up local machines, the other backs up cloud servers. I haven't run into 
any serious problems. My use case is on a much smaller scale than yours in 
terms of storage and number of hosts, but the backups in both BPC instances 
include large numbers of small files. IMO the fact that ZFS has no limit on 
inodes is one of the attributes that makes it a good choice, aside from the
obvious (reliability and scale).

Robert Trevellyan


On Tue, Apr 28, 2020 at 1:02 PM Andrew Maksymowsky
<andrew.maksymow...@sickkids.ca> wrote:
I have no strong preference for either xfs or zfs (our team is comfortable with 
either); I was mainly just curious to hear about what folks were using and if
they've run into any major issues or found particular file-system features they 
really like when coupled with backuppc.

Most of the files we'll be backing up are fairly small (under a few mb). We've 
got a handful of large databases that we'll also be backing up dumps of.

Thanks !

Andrew

From: Robert Trevellyan <robert.trevell...@gmail.com>
Sent: April 28, 2020 12:31 PM
To: General list for user discussion, questions and support
<backuppc-users@lists.sourceforge.net>
Subject: Re: [BackupPC-users] Filesystem Recommendation for 100 TB

Any reason not to use ZFS?

Robert Trevellyan


On Tue, Apr 28, 2020 at 11:59 AM Andrew Maksymowsky
<andrew.maksymow...@sickkids.ca> wrote:
Hello,

I believe the last time this was asked was a few years ago and I was wondering 
if anything has changed.
We’ve been running backuppc for a few years and now have new server for it with 
100 TB of space in hardware raid 6 array.

We’re wondering what the recommended filesystem for backuppc would be on an 
array of this size ?

Our priority is stability over performance. We’ll be running ubuntu as the 
operating system.

Most of the hosts we’re backing up are fairly small (under 1 TB) linux servers. 
Right now we’re backing up about 100 servers for a total of around 20 TB. 
(We’re expecting the new server to last a few years).

Thanks !

- Andrew




Re: [BackupPC-users] Filesystem Recommendation for 100 TB

2020-04-28 Thread Robert Trevellyan
I've been using ZFS for storage on Ubuntu Server for several years now.
Among other things, the server runs two BackupPC 4 instances in LXC
containers. One backs up local machines, the other backs up cloud servers.
I haven't run into any serious problems. My use case is on a much smaller
scale than yours in terms of storage and number of hosts, but the backups
in both BPC instances include large numbers of small files. IMO the fact
that ZFS has no limit on inodes is one of the attributes that makes it a
good choice, aside from the obvious (reliability and scale).

Robert Trevellyan


On Tue, Apr 28, 2020 at 1:02 PM Andrew Maksymowsky <
andrew.maksymow...@sickkids.ca> wrote:

> I have no strong preference for either xfs or zfs (our team is comfortable
> with either); I was mainly just curious to hear about what folks were using
> and if they've run into any major issues or found particular file-system
> features they really like when coupled with backuppc.
>
> Most of the files we'll be backing up are fairly small (under a few mb).
> We've got a handful of large databases that we'll also be backing up dumps
> of.
>
> Thanks !
>
> Andrew
> --
> *From:* Robert Trevellyan 
> *Sent:* April 28, 2020 12:31 PM
> *To:* General list for user discussion, questions and support <
> backuppc-users@lists.sourceforge.net>
> *Subject:* Re: [BackupPC-users] Filesystem Recommendation for 100 TB
>
> Any reason not to use ZFS?
>
> Robert Trevellyan
>
>
> On Tue, Apr 28, 2020 at 11:59 AM Andrew Maksymowsky <
> andrew.maksymow...@sickkids.ca> wrote:
>
> Hello,
>
> I believe the last time this was asked was a few years ago and I was
> wondering if anything has changed.
> We’ve been running backuppc for a few years and now have new server for it
> with 100 TB of space in hardware raid 6 array.
>
> We’re wondering what the recommended filesystem for backuppc would be on
> an array of this size ?
>
> Our priority is stability over performance. We’ll be running ubuntu as the
> operating system.
>
> Most of the hosts we’re backing up are fairly small (under 1 TB) linux
> servers. Right now we’re backing up about 100 servers for a total of around
> 20 TB. (We’re expecting the new server to last a few years).
>
> Thanks !
>
> - Andrew
>
> 
>


Re: [BackupPC-users] Filesystem Recommendation for 100 TB

2020-04-28 Thread Michael Huntley
I think zfs is perfectly acceptable as well.

Cheers,

Mph

> On Apr 28, 2020, at 9:31 AM, Robert Trevellyan  
> wrote:
> 
> 
> Any reason not to use ZFS?
> 
> Robert Trevellyan
> 
> 
>> On Tue, Apr 28, 2020 at 11:59 AM Andrew Maksymowsky 
>>  wrote:
>> Hello,
>> 
>> I believe the last time this was asked was a few years ago and I was 
>> wondering if anything has changed.
>> We’ve been running backuppc for a few years and now have new server for it 
>> with 100 TB of space in hardware raid 6 array.
>> 
>> We’re wondering what the recommended filesystem for backuppc would be on an 
>> array of this size ?
>> 
>> Our priority is stability over performance. We’ll be running ubuntu as the 
>> operating system.
>> 
>> Most of the hosts we’re backing up are fairly small (under 1 TB) linux 
>> servers. Right now we’re backing up about 100 servers for a total of around 
>> 20 TB. (We’re expecting the new server to last a few years).
>> 
>> Thanks !
>> 
>> - Andrew
>> 
>> 
>> 


Re: [BackupPC-users] Filesystem Recommendation for 100 TB

2020-04-28 Thread Andrew Maksymowsky
I have no strong preference for either xfs or zfs (our team is comfortable with 
either); I was mainly just curious to hear about what folks were using and if
they've run into any major issues or found particular file-system features they 
really like when coupled with backuppc.

Most of the files we'll be backing up are fairly small (under a few mb). We've 
got a handful of large databases that we'll also be backing up dumps of.

Thanks !

Andrew

From: Robert Trevellyan 
Sent: April 28, 2020 12:31 PM
To: General list for user discussion, questions and support 

Subject: Re: [BackupPC-users] Filesystem Recommendation for 100 TB

Any reason not to use ZFS?

Robert Trevellyan


On Tue, Apr 28, 2020 at 11:59 AM Andrew Maksymowsky
<andrew.maksymow...@sickkids.ca> wrote:
Hello,

I believe the last time this was asked was a few years ago and I was wondering 
if anything has changed.
We’ve been running backuppc for a few years and now have new server for it with 
100 TB of space in hardware raid 6 array.

We’re wondering what the recommended filesystem for backuppc would be on an 
array of this size ?

Our priority is stability over performance. We’ll be running ubuntu as the 
operating system.

Most of the hosts we’re backing up are fairly small (under 1 TB) linux servers. 
Right now we’re backing up about 100 servers for a total of around 20 TB. 
(We’re expecting the new server to last a few years).

Thanks !

- Andrew





Re: [BackupPC-users] Filesystem Recommendation for 100 TB

2020-04-28 Thread Brad Alexander
I use ZFS for all of my FreeBSD boxes, and it is stable and robust. I am in
the process of preparing to convert my current backuppc 3.3.1 installation
to a backuppc 4 in a jail on my FreeNAS with the backuppc pool living on a
FreeNAS ZFS pool.

On Tue, Apr 28, 2020 at 12:32 PM Robert Trevellyan <
robert.trevell...@gmail.com> wrote:

> Any reason not to use ZFS?
>
> Robert Trevellyan
>
>
> On Tue, Apr 28, 2020 at 11:59 AM Andrew Maksymowsky <
> andrew.maksymow...@sickkids.ca> wrote:
>
>> Hello,
>>
>> I believe the last time this was asked was a few years ago and I was
>> wondering if anything has changed.
>> We’ve been running backuppc for a few years and now have new server for
>> it with 100 TB of space in hardware raid 6 array.
>>
>> We’re wondering what the recommended filesystem for backuppc would be on
>> an array of this size ?
>>
>> Our priority is stability over performance. We’ll be running ubuntu as
>> the operating system.
>>
>> Most of the hosts we’re backing up are fairly small (under 1 TB) linux
>> servers. Right now we’re backing up about 100 servers for a total of around
>> 20 TB. (We’re expecting the new server to last a few years).
>>
>> Thanks !
>>
>> - Andrew
>>
>> 
>>


Re: [BackupPC-users] Filesystem Recommendation for 100 TB

2020-04-28 Thread Robert Trevellyan
Any reason not to use ZFS?

Robert Trevellyan


On Tue, Apr 28, 2020 at 11:59 AM Andrew Maksymowsky <
andrew.maksymow...@sickkids.ca> wrote:

> Hello,
>
> I believe the last time this was asked was a few years ago and I was
> wondering if anything has changed.
> We’ve been running backuppc for a few years and now have new server for it
> with 100 TB of space in hardware raid 6 array.
>
> We’re wondering what the recommended filesystem for backuppc would be on
> an array of this size ?
>
> Our priority is stability over performance. We’ll be running ubuntu as the
> operating system.
>
> Most of the hosts we’re backing up are fairly small (under 1 TB) linux
> servers. Right now we’re backing up about 100 servers for a total of around
> 20 TB. (We’re expecting the new server to last a few years).
>
> Thanks !
>
> - Andrew
>
> 
>


Re: [BackupPC-users] Filesystem Recommendation for 100 TB

2020-04-28 Thread Michael Huntley
I’m still enjoying xfs.

Cheers! 

Mph

> On Apr 28, 2020, at 8:58 AM, Andrew Maksymowsky 
>  wrote:
> 
> Hello,
> 
> I believe the last time this was asked was a few years ago and I was 
> wondering if anything has changed.
> We’ve been running backuppc for a few years and now have new server for it 
> with 100 TB of space in hardware raid 6 array.
> 
> We’re wondering what the recommended filesystem for backuppc would be on an 
> array of this size ?
> 
> Our priority is stability over performance. We’ll be running ubuntu as the 
> operating system.
> 
> Most of the hosts we’re backing up are fairly small (under 1 TB) linux 
> servers. Right now we’re backing up about 100 servers for a total of around 
> 20 TB. (We’re expecting the new server to last a few years).
> 
> Thanks !
> 
> - Andrew
> 
> 
> 


Re: [BackupPC-users] Filesystem Recommendation for 100 TB

2020-04-28 Thread Doug Lytle
>>> We’re wondering what the recommended filesystem for backuppc would be on an 
>>> array of this size ?

Depends on the file sizes that you are backing up.  I've read that XFS (my
preferred filesystem) has issues with lots of small files.

Doug







[BackupPC-users] Filesystem Recommendation for 100 TB

2020-04-28 Thread Andrew Maksymowsky
Hello,

I believe the last time this was asked was a few years ago and I was wondering 
if anything has changed.
We’ve been running backuppc for a few years and now have new server for it with 
100 TB of space in hardware raid 6 array.

We’re wondering what the recommended filesystem for backuppc would be on an 
array of this size ?

Our priority is stability over performance. We’ll be running ubuntu as the 
operating system.

Most of the hosts we’re backing up are fairly small (under 1 TB) linux servers. 
Right now we’re backing up about 100 servers for a total of around 20 TB. 
(We’re expecting the new server to last a few years).

Thanks !

- Andrew





Re: [BackupPC-users] Filesystem?

2013-12-02 Thread Michael Stowe

Personally, I'd recommend xfs, for a number of reasons including speed and
stability.  Your mileage may vary, especially depending on your type of
files and the configuration of the array.

 Hi,
 I'm using BackupPC 3.2.1-4 (official Debian 7 package).
 I'm going to configure an external storage (Coraid) in order to backup
 several
 server (mostly Linux).
 What kind of file system do you suggest?
 Array is 7 TB large (raid6).
 Thank you very much




Re: [BackupPC-users] Filesystem?

2013-12-02 Thread Hans Kraus
On 02.12.2013 16:00, absolutely_f...@libero.it wrote:
 Hi,
 I'm using BackupPC 3.2.1-4 (official Debian 7 package).
 I'm going to configure an external storage (Coraid) in order to backup several
 server (mostly Linux).
 What kind of file system do you suggest?
 Array is 7 TB large (raid6).
 Thank you very much

Hi,
I've chosen Ext4, having faced the same problem some months ago. The
reason behind this decision was that Ext4 seemed the best 'general
purpose' FS. Maybe one of the developers can shed more light on this.

Regards, Hans





Re: [BackupPC-users] Filesystem?

2013-12-02 Thread Brad Alexander
My original backuppc server, back in the day, used reiserfs3. The latest
incarnation uses ext4. Both have been reliable, though reiser3 is long in
the tooth... and doesn't play well with multicore machines. :)




On Mon, Dec 2, 2013 at 2:15 PM, Hans Kraus h...@hanswkraus.com wrote:

 On 02.12.2013 16:00, absolutely_f...@libero.it wrote:
  Hi,
  I'm using BackupPC 3.2.1-4 (official Debian 7 package).
  I'm going to configure an external storage (Coraid) in order to backup
 several
  server (mostly Linux).
  What kind of file system do you suggest?
  Array is 7 TB large (raid6).
  Thank you very much

 Hi,
 I've chosen Ext4, having faced the same problem some months ago. The
 reason behind this decision was that Ext4 seemed the best 'general
 purpose' FS. Maybe one of the developers can shed more light on this.

 Regards, Hans






Re: [BackupPC-users] Filesystem?

2013-12-02 Thread Carl Cravens
My experience troubleshooting I/O performance over iSCSI is that Ext4 
journaling has a much higher CPU overhead than XFS does.  Papers I've read show 
evidence that modern XFS journaling scales better (better performance) than 
Ext4 as disks grow larger.  http://lwn.net/Articles/476263/

As a sysadmin, I like XFS management tools better than I do Ext4's.

On 12/02/2013 01:15 PM, Hans Kraus wrote:
 On 02.12.2013 16:00, absolutely_f...@libero.it wrote:
 Hi,
 I'm using BackupPC 3.2.1-4 (official Debian 7 package).
 I'm going to configure an external storage (Coraid) in order to backup 
 several
 server (mostly Linux).
 What kind of file system do you suggest?
 Array is 7 TB large (raid6).
 Thank you very much

 Hi,
 I've chosen Ext4, having faced the same problem some months ago. The
 reason behind this decision was that Ext4 seemed the best 'general
 purpose' FS. Maybe one of the developers can shed more light on this.

 Regards, Hans





-- 
Carl D Cravens (ccrav...@excelii.com), Ext 228 (620.327.1228)
Lead System Architect



Re: [BackupPC-users] Filesystem?

2013-12-02 Thread Sabuj Pattanayek
I've been doing some ZFS on Linux vs XFS benchmarking and I'm seeing that
ZFS is performing slightly better than XFS on reads and writes but sucks on
deletes. If you're not going to be doing lots of deletes and need the
ability to expand (e.g. you're thinking of using LVM) then ZFS may be a nice
alternative to XFS+LVM. ZFS also has built-in compression, and so far in my
benchmarks (using lzjb) with it turned on, random and sequential reads and
writes are slightly slower than without compression and still a few
seconds faster than XFS (which has no compression).
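
For anyone who wants to eyeball a rough version of this on their own
hardware, here is a crude Python sketch of the sequential-write and delete
timing; the scratch directory, file count and file size are arbitrary
assumptions, and this is nowhere near a proper benchmark (no cache
dropping, no random I/O), just a way to see the delete behaviour described
above:

#!/usr/bin/env python3
"""Crude sequential-write and delete timing for a target filesystem."""
import os
import time

TARGET_DIR = "/mnt/test/benchtmp"   # assumption: scratch dir on the fs under test
FILE_COUNT = 200                    # arbitrary
FILE_SIZE = 8 * 1024 * 1024         # 8 MiB per file, arbitrary

def bench():
    os.makedirs(TARGET_DIR, exist_ok=True)
    chunk = os.urandom(FILE_SIZE)  # incompressible data, so compression can't flatter writes
    paths = [os.path.join(TARGET_DIR, f"bench_{i:04d}.dat") for i in range(FILE_COUNT)]

    start = time.monotonic()
    for p in paths:
        with open(p, "wb") as f:
            f.write(chunk)
            f.flush()
            os.fsync(f.fileno())   # force data out so we time the disk, not the page cache
    write_secs = time.monotonic() - start

    start = time.monotonic()
    for p in paths:
        os.remove(p)
    delete_secs = time.monotonic() - start

    total_mb = FILE_COUNT * FILE_SIZE / (1024 * 1024)
    print(f"wrote   {total_mb:.0f} MiB in {write_secs:.2f}s ({total_mb / write_secs:.1f} MiB/s)")
    print(f"deleted {FILE_COUNT} files in {delete_secs:.2f}s")

if __name__ == "__main__":
    bench()

Deletes are where ZFS fell behind in the comparison above, so the second
number is the interesting one here.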


On Mon, Dec 2, 2013 at 4:35 PM, Carl Cravens ccrav...@excelii.com wrote:

 My experience troubleshooting I/O performance over iSCSI is that Ext4
 journaling has a much higher CPU overhead than XFS does.  Papers I've read
 show evidence that modern XFS journaling scales better (better
 performance) than Ext4 as disks grow larger.
 http://lwn.net/Articles/476263/

 As a sysadmin, I like XFS management tools better than I do Ext4's.

 On 12/02/2013 01:15 PM, Hans Kraus wrote:
  On 02.12.2013 16:00, absolutely_f...@libero.it wrote:
  Hi,
  I'm using BackupPC 3.2.1-4 (official Debian 7 package).
  I'm going to configure an external storage (Coraid) in order to backup
 several
  server (mostly Linux).
  What kind of file system do you suggest?
  Array is 7 TB large (raid6).
  Thank you very much
 
  Hi,
  I've chosen Ext4, having faced the same problem some months ago. The
  reason behind this decision was that Ext4 seemed the best 'general
  purpose' FS. Maybe one of the developers can shed more light on this.
 
  Regards, Hans
 
 
 
 
 

 --
 Carl D Cravens (ccrav...@excelii.com), Ext 228 (620.327.1228)
 Lead System Architect




Re: [BackupPC-users] Filesystem?

2013-12-02 Thread Russ Poyner
I'm a big fan of zfs, but the compression won't be a factor since backuppc 
already compresses the data. My one backuppc box runs FreeBSD with the data on 
zfs and compression off. 

I like managing storage with the zpool and zfs commands much better than md,  
lvm and ext4. 

RP

Sent from my U.S. Cellular® Smartphone

 Original message 
From: Sabuj Pattanayek sab...@gmail.com 
Date: 12/02/2013  4:42 PM  (GMT-06:00) 
To: General list for user discussion,  questions and support 
backuppc-users@lists.sourceforge.net 
Subject: Re: [BackupPC-users] Filesystem? 
 
I've been doing some ZFS on linux vs XFS benchmarking and I'm seeing that ZFS 
is performing slightly better than XFS on reads and writes but sucks on 
deletes. If you're not going to be doing lots of deletes and need the ability 
to expand (e.g. thinking of using LVM) then ZFS may be a nice alternative to 
XFS+LVM. ZFS also has built-in compression, and so far in my benchmarks
(using lzjb) with it turned on, random and sequential reads and writes are slightly
slower than without compression and still a few seconds faster than XFS
(which has no compression).


On Mon, Dec 2, 2013 at 4:35 PM, Carl Cravens ccrav...@excelii.com wrote:
My experience troubleshooting I/O performance over iSCSI is that Ext4 
journaling has a much higher CPU overhead than XFS does.  Papers I've read show 
evidence that modern XFS journaling scales better (better performance) than 
Ext4 as disks grow larger.  http://lwn.net/Articles/476263/

As a sysadmin, I like XFS management tools better than I do Ext4's.

On 12/02/2013 01:15 PM, Hans Kraus wrote:
 On 02.12.2013 16:00, absolutely_f...@libero.it wrote:
 Hi,
 I'm using BackupPC 3.2.1-4 (official Debian 7 package).
 I'm going to configure an external storage (Coraid) in order to backup 
 several
 server (mostly Linux).
 What kind of file system do you suggest?
 Array is 7 TB large (raid6).
 Thank you very much

 Hi,
 I've chosen Ext4, having faced the same problem some months ago. The
 reason behind this decision was that Ext4 seemed the best 'general
 purpose' FS. Maybe one of the developers can shed more light on this.

 Regards, Hans





--
Carl D Cravens (ccrav...@excelii.com), Ext 228 (620.327.1228)
Lead System Architect



Re: [BackupPC-users] Filesystem?

2013-12-02 Thread Sharuzzaman Ahmat Raslan
My customer is storing BackupPC data on an ext3 filesystem. No known issues
exist for this customer, though I have not performed a performance comparison
with other filesystems.




On Mon, Dec 2, 2013 at 11:00 PM, absolutely_f...@libero.it wrote:

 Hi,
 I'm using BackupPC 3.2.1-4 (official Debian 7 package).
 I'm going to configure an external storage (Coraid) in order to backup
 several
 server (mostly Linux).
 What kind of file system do you suggest?
 Array is 7 TB large (raid6).
 Thank you very much







-- 
Sharuzzaman Ahmat Raslan


Re: [BackupPC-users] Filesystem?

2013-12-02 Thread Christian Völker
I tried several filesystems.
I ended up with ext3, because:
- ZFS is not very common in my Linux distros, and for backup purposes I
  can't fiddle around for long in a restore case.
- XFS has no way to shrink (which I use from time to time).
- BTRFS and GFS are AFAIK cluster filesystems and overkill here.
- All others are more or less outdated...



On 02.12.2013 16:00, absolutely_f...@libero.it wrote:
 Hi,
 I'm using BackupPC 3.2.1-4 (official Debian 7 package).
 I'm going to configure an external storage (Coraid) in order to backup 
 several 
 server (mostly Linux).
 What kind of file system do you suggest?
 Array is 7 TB large (raid6).
 Thank you very much






Re: [BackupPC-users] Filesystem separation

2011-01-26 Thread Rob Owens
On Wed, Jan 26, 2011 at 01:40:19AM +, John Goerzen wrote:
 Rob Owens rowens at ptd.net writes:
 
  One reason I always specify the --one-file-system argument for rsync is
  that it prevents me from accidentally backing up an NFS share.  Since I use
  BackupPC for all the computers on my LAN, the data in the NFS share gets
  backed up when I back up the server that is hosting/exporting the share.
  
  Same thing goes for the occasional fuse share.  In particular, I've
  started using encfs and I certainly wouldn't want a copy of my encrypted
  data to get backed up unencrypted, just because BackupPC happened to be
  running when I had an encrypted volume mounted.
 
 That is a reasonable point, and a good idea.  I'm used to doing that with 
 other
 backup software as well.  But I'm still not understanding why the manual says 
 a
 *restore* is easier.
 
I don't know the answer, but it might have to do with preventing a
restore operation from attempting to restore over NFS, or to any other
share which might be mounted read-only.  

Maybe the author could speak up...

-Rob



[BackupPC-users] Filesystem separation

2011-01-25 Thread John Goerzen
Hi,

In reading the manual for parameters such as the tar, rsync, etc. share, 
I see:

Alternatively, rather than backup all the file systems as a single 
share (/), it is easier to restore a single file system if you backup 
each file system separately.

Can anyone tell me why this is easier?  Can't one select the subset of 
the backup to restore out of a whole filesystem backup anyhow?

-- John




Re: [BackupPC-users] Filesystem separation

2011-01-25 Thread Rob Owens
On Tue, Jan 25, 2011 at 06:17:12PM -0600, John Goerzen wrote:
 Hi,
 
 In reading the manual for parameters such as the tar, rsync, etc. share, 
 I see:
 
 Alternatively, rather than backup all the file systems as a single 
 share (/), it is easier to restore a single file system if you backup 
 each file system separately.
 
 Can anyone tell me why this is easier?  Can't one select the subset of 
 the backup to restore out of a whole filesystem backup anyhow?
 
One reason I always specify the --one-file-system argument for rsync is
that it prevents me from accidentally backing up an NFS share.  Since I use
BackupPC for all the computers on my LAN, the data in the NFS share gets
backed up when I back up the server that is hosting/exporting the share.

Same thing goes for the occasional fuse share.  In particular, I've
started using encfs and I certainly wouldn't want a copy of my encrypted
data to get backed up unencrypted, just because BackupPC happened to be
running when I had an encrypted volume mounted.
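
If you want to see exactly what --one-file-system will skip before trusting
it, a dry run makes that visible. Here is a small Python sketch; the source
and destination paths are hypothetical, and nothing is written because rsync
runs with --dry-run:

#!/usr/bin/env python3
"""Preview which paths rsync would skip when --one-file-system is added."""
import subprocess

SRC = "/srv/"                  # hypothetical source with NFS/fuse mounts underneath
DEST = "/tmp/rsync-preview/"   # hypothetical destination, untouched in a dry run

def file_list(extra_args):
    cmd = ["rsync", "-a", "--dry-run", "--itemize-changes", *extra_args, SRC, DEST]
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    paths = set()
    for line in out.splitlines():
        parts = line.split(maxsplit=1)
        if len(parts) == 2:       # "<itemize flags> <path>"
            paths.add(parts[1])
    return paths

if __name__ == "__main__":
    everything = file_list([])
    one_fs = file_list(["--one-file-system"])
    skipped = sorted(everything - one_fs)
    print(f"{len(skipped)} paths live on other filesystems and would be skipped:")
    for path in skipped[:50]:
        print(" ", path)

The same flag can go into BackupPC's rsync arguments ($Conf{RsyncArgs} or
the equivalent in your version) so that every host gets it.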

-Rob



Re: [BackupPC-users] Filesystem separation

2011-01-25 Thread John Goerzen
Rob Owens rowens at ptd.net writes:

 One reason I always specify the --one-file-system argument for rsync is
 that it prevents me from accidentally backing up an NFS share.  Since I use
 BackupPC for all the computers on my LAN, the data in the NFS share gets
 backed up when I back up the server that is hosting/exporting the share.
 
 Same thing goes for the occasional fuse share.  In particular, I've
 started using encfs and I certainly wouldn't want a copy of my encrypted
 data to get backed up unencrypted, just because BackupPC happened to be
 running when I had an encrypted volume mounted.

That is a reasonable point, and a good idea.  I'm used to doing that with other
backup software as well.  But I'm still not understanding why the manual says a
*restore* is easier.

-- John




Re: [BackupPC-users] Filesystem corruption: consistency of the pool

2010-12-10 Thread martin f krafft
also sprach Jeffrey J. Kosowsky backu...@kosowsky.org [2010.12.09.1538 +0100]:
   I did a test run of this tool and it took 12 days to run across the
   pool. I cannot take the backup machine offline for so long. Is it
   possible to run this while BackupPC runs in the background?
 
 It can run while backuppc is running though it will obviously miss
 some new files added by backuppc after you started running the
 program. My routine is non-destructive (it doesn't 'fix' anything) so
 it shouldn't conflict.

Oh, so how do I fix the problems it finds (there are plenty it
reports)?

 Or if you trust it to detect and fix it all in one step:
BackupPC_fixLinks.pl -f [ optional output file to capture all the
detections and status's]

Unfortunately, BackupPC_fixlinks.pl needs jLib.pm, which doesn't
seem to be in Debian's backuppc 3.1.0 :(


Btw, do you plan to track your very useful scripts in some sort of
VCS or integrate them with the BackupPC main source?

Thanks,

-- 
martin | http://madduck.net/ | http://two.sentenc.es/
 
auch der mutigste von uns hat nur selten den mut zu dem,
 was er eigentlich weiß.
 - friedrich nietzsche
 
spamtraps: madduck.bo...@madduck.net




Re: [BackupPC-users] Filesystem corruption: consistency of the pool

2010-12-09 Thread martin f krafft
also sprach Jeffrey J. Kosowsky backu...@kosowsky.org [2010.11.17.0059 +0100]:
 I wrote two programs that might be helpful here:
 1. BackupPC_digestVerify.pl
If you use rsync with checksum caching then this program checks the
(uncompressed) contents of each pool file against the stored md4
checksum. This should catch any bit errors in the pool. (Note
though that I seem to recall that the checksum only gets stored the
second time a file in the pool is backed up so some pool files may
not have a checksum included - I may be wrong since it's been a
while...)

I did a test run of this tool and it took 12 days to run across the
pool. I cannot take the backup machine offline for so long. Is it
possible to run this while BackupPC runs in the background?

 2. BackupPC_fixLinks.pl
This program scans through both the pool and pc trees to look for
wrong, duplicate, or missing links. It can fix most errors.

And this?

How else do you suggest I run it?

Thanks,

-- 
martin | http://madduck.net/ | http://two.sentenc.es/
 
remember, half the people are below average.
 
spamtraps: madduck.bo...@madduck.net




Re: [BackupPC-users] Filesystem corruption: consistency of the pool

2010-12-09 Thread Jeffrey J. Kosowsky
martin f krafft wrote at about 09:53:25 +0100 on Thursday, December 9, 2010:
  also sprach Jeffrey J. Kosowsky backu...@kosowsky.org [2010.11.17.0059 
  +0100]:
   I wrote two programs that might be helpful here:
   1. BackupPC_digestVerify.pl
  If you use rsync with checksum caching then this program checks the
  (uncompressed) contents of each pool file against the stored md4
  checksum. This should catch any bit errors in the pool. (Note
  though that I seem to recall that the checksum only gets stored the
  second time a file in the pool is backed up so some pool files may
  not have a checksum included - I may be wrong since it's been a
  while...)
  
  I did a test run of this tool and it took 12 days to run across the
  pool. I cannot take the backup machine offline for so long. Is it
  possible to run this while BackupPC runs in the background?

It can run while backuppc is running, though it will obviously miss
some new files added by backuppc after you started running the
program. My routine is non-destructive (it doesn't 'fix' anything), so
it shouldn't conflict.
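
As a generic illustration of that kind of non-destructive sweep (this is
not BackupPC's pool layout or its md4 checksum-caching scheme, which the
scripts above handle; it is just a sketch in Python of building and
re-checking a digest manifest that can run while other processes add files):

#!/usr/bin/env python3
"""Build or verify a SHA-256 manifest for a directory tree, read-only."""
import hashlib
import json
import os
import sys

def digest(path, bufsize=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def walk(root):
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            yield os.path.join(dirpath, name)

def build(root, manifest):
    entries = {p: digest(p) for p in walk(root)}
    with open(manifest, "w") as f:
        json.dump(entries, f)
    print(f"stored {len(entries)} digests in {manifest}")

def verify(root, manifest):
    with open(manifest) as f:
        entries = json.load(f)
    bad = missing = 0
    for path, want in entries.items():
        if not os.path.exists(path):
            missing += 1          # may simply have been expired since the manifest was built
        elif digest(path) != want:
            bad += 1
            print(f"MISMATCH {path}")
    print(f"checked {len(entries)} files: {bad} mismatches, {missing} now missing")

if __name__ == "__main__":
    mode, root, manifest = sys.argv[1], sys.argv[2], sys.argv[3]
    build(root, manifest) if mode == "build" else verify(root, manifest)

Usage would be something like "sweep.py build <dir> manifest.json" once and
"sweep.py verify <dir> manifest.json" later; files added after the build are
simply not checked, which mirrors the caveat above.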

  
   2. BackupPC_fixLinks.pl
  This program scans through both the pool and pc trees to look for
  wrong, duplicate, or missing links. It can fix most errors.
  
  And this?
I don't think I understand the question...
(note I posted a slightly updated version on the group last night)
  
  How else do you suggest I run it?
Look at the usage info ;)
Or if you trust it to detect and fix it all in one step:
   BackupPC_fixLinks.pl -f [ optional output file to capture all the
   detections and statuses]

Or to do it sequentially:
   Detect:
   BackupPC_fixlinks.pl   [output file]
   Fix:
   BackupPC_fixlinks.pl  -l [output file]



Re: [BackupPC-users] Filesystem corruption: consistency of the pool

2010-12-09 Thread Robin Lee Powell
On Thu, Dec 09, 2010 at 09:53:25AM +0100, martin f krafft wrote:
 also sprach Jeffrey J. Kosowsky backu...@kosowsky.org [2010.11.17.0059 
 +0100]:
  I wrote two programs that might be helpful here:
  1. BackupPC_digestVerify.pl
 If you use rsync with checksum caching then this program checks the
 (uncompressed) contents of each pool file against the stored md4
 checksum. This should catch any bit errors in the pool. (Note
 though that I seem to recall that the checksum only gets stored the
 second time a file in the pool is backed up so some pool files may
 not have a checksum included - I may be wrong since it's been a
 while...)
 
 I did a test run of this tool and it took 12 days to run across the
 pool. I cannot take the backup machine offline for so long. Is it
 possible to run this while BackupPC runs in the background?
 
  2. BackupPC_fixLinks.pl
 This program scans through both the pool and pc trees to look for
 wrong, duplicate, or missing links. It can fix most errors.
 
 And this?

I don't know about the first one, but BackupPC_fixLinks.pl can
*definitely* be run while BackupPC runs.

For serious corruption, you may want to grab the patch I posted a
few days ago; it makes the run *much* slower, but on the plus side
it will fix more errors.

OTOH, the errors it fixes only waste disk space, they don't actually
break BackupPC's ability to function at all.

-Robin

-- 
http://singinst.org/ :  Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which this parrot
is dead is ti poi spitaki cu morsi, but this sentence is false
is na nei.   My personal page: http://www.digitalkingdom.org/rlp/

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Filesystem corruption: consistency of the pool

2010-12-09 Thread Jeffrey J. Kosowsky
Robin Lee Powell wrote at about 11:20:26 -0800 on Thursday, December 9, 2010:
  On Thu, Dec 09, 2010 at 09:53:25AM +0100, martin f krafft wrote:
   also sprach Jeffrey J. Kosowsky backu...@kosowsky.org [2010.11.17.0059 
   +0100]:
I wrote two programs that might be helpful here:
1. BackupPC_digestVerify.pl
   If you use rsync with checksum caching then this program checks the
   (uncompressed) contents of each pool file against the stored md4
   checksum. This should catch any bit errors in the pool. (Note
   though that I seem to recall that the checksum only gets stored the
   second time a file in the pool is backed up so some pool files may
   not have a checksum included - I may be wrong since it's been a
   while...)
   
   I did a test run of this tool and it took 12 days to run across the
   pool. I cannot take the backup machine offline for so long. Is it
   possible to run this while BackupPC runs in the background?
   
2. BackupPC_fixLinks.pl
   This program scans through both the pool and pc trees to look for
   wrong, duplicate, or missing links. It can fix most errors.
   
   And this?
  
  I don't know about the first one, but BackupPC_fixLinks.pl can
  *definitely* be run while BackupPC runs.
  
  For serious corruption, you may want to grab the patch I posted a
  few days ago; it makes the run *much* slower, but on the plus side
  it will fix more errors.

I would suggest instead using the version I posted last night...
It should be much faster though still slow and may avoid some issues...

  
  OTOH, the errors it fixes only waste disk space, they don't actually
  break BackupPC's ability to function at all.
  

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Filesystem corruption: consistency of the pool

2010-12-09 Thread Robin Lee Powell
On Thu, Dec 09, 2010 at 03:03:37PM -0500, Jeffrey J. Kosowsky wrote:
 Robin Lee Powell wrote at about 11:20:26 -0800 on Thursday, December 9, 2010:
   On Thu, Dec 09, 2010 at 09:53:25AM +0100, martin f krafft wrote:
also sprach Jeffrey J. Kosowsky backu...@kosowsky.org [2010.11.17.0059 
 +0100]:
 I wrote two programs that might be helpful here:
 1. BackupPC_digestVerify.pl
If you use rsync with checksum caching then this program checks the
(uncompressed) contents of each pool file against the stored md4
checksum. This should catch any bit errors in the pool. (Note
though that I seem to recall that the checksum only gets stored the
second time a file in the pool is backed up so some pool files may
not have a checksum included - I may be wrong since it's been a
while...)

I did a test run of this tool and it took 12 days to run across the
pool. I cannot take the backup machine offline for so long. Is it
possible to run this while BackupPC runs in the background?

 2. BackupPC_fixLinks.pl
This program scans through both the pool and pc trees to look for
wrong, duplicate, or missing links. It can fix most errors.

And this?
   
   I don't know about the first one, but BackupPC_fixLinks.pl can
   *definitely* be run while BackupPC runs.
   
   For serious corruption, you may want to grab the patch I posted a
   few days ago; it makes the run *much* slower, but on the plus side
   it will fix more errors.
 
 I would suggest instead using the version I posted last night...
 It should be much faster though still slow and may avoid some
 issues...

Well, I meant that version *plus* my patch. :D

Will your new version catch the "this has multiple hard links but
not into the pool" error I was seeing?  (If so, yay! and thank you!)

-Robin

-- 
http://singinst.org/ :  Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which this parrot
is dead is ti poi spitaki cu morsi, but this sentence is false
is na nei.   My personal page: http://www.digitalkingdom.org/rlp/

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Filesystem corruption: consistency of the pool

2010-12-09 Thread Jeffrey J. Kosowsky
Robin Lee Powell wrote at about 12:06:24 -0800 on Thursday, December 9, 2010:
  On Thu, Dec 09, 2010 at 03:03:37PM -0500, Jeffrey J. Kosowsky wrote:
   Robin Lee Powell wrote at about 11:20:26 -0800 on Thursday, December 9, 
   2010:
 On Thu, Dec 09, 2010 at 09:53:25AM +0100, martin f krafft wrote:
  also sprach Jeffrey J. Kosowsky backu...@kosowsky.org 
   [2010.11.17.0059 +0100]:
   I wrote two programs that might be helpful here:
   1. BackupPC_digestVerify.pl
  If you use rsync with checksum caching then this program checks 
   the
  (uncompressed) contents of each pool file against the stored md4
  checksum. This should catch any bit errors in the pool. (Note
  though that I seem to recall that the checksum only gets stored 
   the
  second time a file in the pool is backed up so some pool files 
   may
  not have a checksum included - I may be wrong since it's been a
  while...)
  
  I did a test run of this tool and it took 12 days to run across the
  pool. I cannot take the backup machine offline for so long. Is it
  possible to run this while BackupPC runs in the background?
  
   2. BackupPC_fixLinks.pl
  This program scans through both the pool and pc trees to look for
  wrong, duplicate, or missing links. It can fix most errors.
  
  And this?
 
 I don't know about the first one, but BackupPC_fixLinks.pl can
 *definitely* be run while BackupPC runs.
 
 For serious corruption, you may want to grab the patch I posted a
 few days ago; it makes the run *much* slower, but on the plus side
 it will fix more errors.
   
   I would suggest instead using the version I posted last night...
   It should be much faster though still slow and may avoid some
   issues...
  
  Well, I meant that version *plus* my patch. :D

My version does what the patch you posted a couple of days ago does, only
faster & probably better (i.e. your version may miss some cases where
there are pool dups and unlinked pc files with multiple links).


  Will your new version catch the "this has multiple hard links but
  not into the pool" error I was seeing?  (If so, yay! and thank you!)
  


I don't know what error you are referring to. My version simply
extends to also test pc files with more than one link and fix them as
appropriate, though I haven't tested it.

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Filesystem corruption: consistency of the pool

2010-12-09 Thread Robin Lee Powell
On Thu, Dec 09, 2010 at 03:15:41PM -0500, Jeffrey J. Kosowsky wrote:
 Robin Lee Powell wrote at about 12:06:24 -0800 on Thursday,
 December 9, 2010:
   On Thu, Dec 09, 2010 at 03:03:37PM -0500, Jeffrey J. Kosowsky
   wrote:
Robin Lee Powell wrote at about 11:20:26 -0800 on Thursday,
December 9, 2010:
  On Thu, Dec 09, 2010 at 09:53:25AM +0100, martin f krafft
  wrote:
   also sprach Jeffrey J. Kosowsky backu...@kosowsky.org
   [2010.11.17.0059 +0100]:
I wrote two programs that might be helpful here:
1. BackupPC_digestVerify.pl
   If you use rsync with checksum caching then this
   program checks the (uncompressed) contents of each
   pool file against the stored md4 checksum. This
   should catch any bit errors in the pool. (Note
   though that I seem to recall that the checksum only
   gets stored the second time a file in the pool is
   backed up so some pool files may not have a
   checksum included - I may be wrong since it's been
   a while...)
   
   I did a test run of this tool and it took 12 days to run
   across the pool. I cannot take the backup machine
   offline for so long. Is it possible to run this while
   BackupPC runs in the background?
   
2. BackupPC_fixLinks.pl
   This program scans through both the pool and pc
   trees to look for wrong, duplicate, or missing
   links. It can fix most errors.
   
   And this?
  
  I don't know about the first one, but BackupPC_fixLinks.pl
  can *definitely* be run while BackupPC runs.
  
  For serious corruption, you may want to grab the patch I
  posted a few days ago; it makes the run *much* slower, but
  on the plus side it will fix more errors.

I would suggest instead using the version I posted last
night... It should be much faster though still slow and may
avoid some issues...
   
   Well, I meant that version *plus* my patch. :D
 
 My version does what the patch you posted a couple of days ago does, only
 faster & probably better (i.e. your version may miss some cases
 where there are pool dups and unlinked pc files with multiple
 links).

I repeat my assertion that you are my hero.  :)

-Robin

-- 
http://singinst.org/ :  Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which this parrot
is dead is ti poi spitaki cu morsi, but this sentence is false
is na nei.   My personal page: http://www.digitalkingdom.org/rlp/

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Filesystem corruption: consistency of the pool

2010-11-16 Thread Jeffrey J. Kosowsky
martin f krafft wrote at about 19:16:52 +0100 on Wednesday, November 3, 2010:
  Hello,
  
  My filesystem holding the backuppc pool was corrupted. While e2fsck
  managed to fix it all and now doesn't complain anymore, I am a bit
  scared that the backuppc pool isn't consistent anymore.
  
  Is there a tool to check the consistency of the pool?
  
  Is there a tool to repair an inconsistent pool?
  

I wrote two programs that might be helpful here:
1. BackupPC_digestVerify.pl
   If you use rsync with checksum caching then this program checks the
   (uncompressed) contents of each pool file against the stored md4
   checksum. This should catch any bit errors in the pool. (Note
   though that I seem to recall that the checksum only gets stored the
   second time a file in the pool is backed up so some pool files may
   not have a checksum included - I may be wrong since it's been a
   while...)

2. BackupPC_fixLinks.pl
   This program scans through both the pool and pc trees to look for
   wrong, duplicate, or missing links. It can fix most errors.

The second program is on the wiki somewhere.
I will attach below a copy of the first program.
I find that the above two routines do a pretty good job of checking
for corruption in the pc and pool trees.

-

#!/usr/bin/perl
#
#
# BackupPC_digestVerify.pl
#   
#
# DESCRIPTION
#   Check contents of cpool and/or pc tree entries (or the entire tree) 
#   against the stored rsync md4 checksum digests (when available)
#
# AUTHOR
#   Jeff Kosowsky
#
# COPYRIGHT
#   Copyright (C) 2010  Jeff Kosowsky
#
#   This program is free software; you can redistribute it and/or modify
#   it under the terms of the GNU General Public License as published by
#   the Free Software Foundation; either version 2 of the License, or
#   (at your option) any later version.
#
#   This program is distributed in the hope that it will be useful,
#   but WITHOUT ANY WARRANTY; without even the implied warranty of
#   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#   GNU General Public License for more details.
#
#   You should have received a copy of the GNU General Public License
#   along with this program; if not, write to the Free Software
#   Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
#
#
#
# Version 0.1, released Nov 2010
#
#


use strict;
use Getopt::Std;

use lib "/usr/share/BackupPC/lib";
use BackupPC::Xfer::RsyncDigest;
use BackupPC::Lib;
use File::Find;

use constant RSYNC_CSUMSEED_CACHE => 32761;
use constant DEFAULT_BLOCKSIZE    => 2048;


my $dotfreq=100;
my %opts;
if ( !getopts("cCpdv", \%opts) || @ARGV !=1
     || ($opts{c} + $opts{C} + $opts{p} > 1)
     || ($opts{d} + $opts{v} > 1)) {
    print STDERR <<EOF;
usage: $0 [-c|-C|-p] [-d|-v] [File or Directory]
  Verify Rsync digest in compressed files containing digests.
  Ignores directories and files without digests
  Only prints if digest does not match content unless verbose flag
  (firstbyte = 0xd7)
  Options:
    -c   Consider path relative to cpool directory
    -C   Entry is a single cpool file name (no path)
    -p   Consider path relative to pc directory
    -d   Print a '.' for every $dotfreq digest checks
    -v   Verbose - print result of each check

EOF
    exit(1);
}

die("BackupPC::Lib->new failed\n") if ( !(my $bpc = BackupPC::Lib->new) );
#die("BackupPC::Lib->new failed\n") if ( !(my $bpc = BackupPC::Lib->new("", "", "", 1)) ); #No user check

my $Topdir = $bpc->TopDir();
my $root;
$root = $Topdir . "/pc/"    if $opts{p};
$root = "$bpc->{CPoolDir}/" if $opts{c};
$root =~ s|//*|/|g;

my $path = $ARGV[0];
if ($opts{C}) {
    $path = $bpc->MD52Path($ARGV[0], 1, $bpc->{CPoolDir});
    $path =~ m|(.*/)|;
    $root = $1;
}
else {
    $path = $root . $ARGV[0];
}
my $verbose  = $opts{v};
my $progress = $opts{d};

die "$0: Cannot read $path\n" unless (-r $path);


my ($totfiles, $totdigfiles, $totbadfiles) = (0, 0, 0);
find(\&verify_digest, $path);
print "\n" if $progress;
print "Looked at $totfiles files including $totdigfiles digest files of which $totbadfiles have bad digests\n";
exit;

sub verify_digest {
    return -200 unless (-f);
    $totfiles++;
    return -200 unless -s $_ > 0;
    return -201 unless BackupPC::Xfer::RsyncDigest->fileDigestIsCached($_);
    #Not cached type (i.e. first byte not 0xd7)
    $totdigfiles++;

    my $ret = BackupPC::Xfer::RsyncDigest->digestAdd($_, DEFAULT_BLOCKSIZE,
                                                     RSYNC_CSUMSEED_CACHE, 2);  #2=verify
    #Note setting blocksize=0 results in using the default blocksize of 2048 also,
    #but it generates an error message
    #Also leave out final protocol_version input since by setting it undefined we
    #make it read it from the digest.

Re: [BackupPC-users] Filesystem corruption: consistency of the pool

2010-11-04 Thread martin f krafft
also sprach Les Mikesell lesmikes...@gmail.com [2010.11.03.2156 +0100]:
 Yes, anything that is not linked by a current backup will be removed in 
 the nightly runs.

I might thus want to disable that cronjob for now.

 The more subtle problem is that the corruption may 
 have overwritten the contents of existing files - but I'm pretty sure 
 that a full run will detect any content differences and fix things up. 
 There would be a chance that it would miss something if blocks in the 
 middle of a file changed and you are using the --checksum-seed option 
 with rsync, though.  In that case it would use the cached checksums 
 appended to the files instead of verifying all the way through.

RsyncCsumCacheVerifyProb can help here too. I might just set it to
1.0
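
(For reference, that knob lives in config.pl; a minimal sketch of the change --
the parameter is the stock BackupPC 3.x one, and 1.0 is simply the "verify
everything" setting discussed above, not the shipped default:)

  # config.pl -- re-verify cached rsync checksums on every file during
  # full backups instead of only a small random sample.
  $Conf{RsyncCsumCacheVerifyProb} = 1.0;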

-- 
martin | http://madduck.net/ | http://two.sentenc.es/
 
kermit: why are there so many songs about rainbows?
fozzy: that's part of what rainbows do.
 
spamtraps: madduck.bo...@madduck.net


digital_signature_gpg.asc
Description: Digital signature (see http://martin-krafft.net/gpg/)
--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Filesystem corruption: consistency of the pool

2010-11-03 Thread martin f krafft
Hello,

My filesystem holding the backuppc pool was corrupted. While e2fsck
managed to fix it all and now doesn't complain anymore, I am a bit
scared that the backuppc pool isn't consistent anymore.

Is there a tool to check the consistency of the pool?

Is there a tool to repair an inconsistent pool?

Thanks,

-- 
martin | http://madduck.net/ | http://two.sentenc.es/
 
be the change you want to see in the world
 -- mahatma gandhi
 
spamtraps: madduck.bo...@madduck.net


digital_signature_gpg.asc
Description: Digital signature (see http://martin-krafft.net/gpg/)
--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Filesystem corruption: consistency of the pool

2010-11-03 Thread Carl Wilhelm Soderstrom
On 11/03 07:16 , martin f krafft wrote:
 Is there a tool to check the consistency of the pool?
 
 Is there a tool to repair an inconsistent pool?

Run full backups on all hosts, then BackupPC_nightly?

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Filesystem corruption: consistency of the pool

2010-11-03 Thread Les Mikesell
On 11/3/2010 1:16 PM, martin f krafft wrote:
 Hello,

 My filesystem holding the backuppc pool was corrupted. While e2fsck
 managed to fix it all and now doesn't complain anymore, I am a bit
 scared that the backuppc pool isn't consistent anymore.

 Is there a tool to check the consistency of the pool?

The part that is important is the hardlinks from the files in the
backup directories under pc to the correct contents, and there is
probably no way to check or fix them.  There would be some chance that
the corruption overwrote some content or fsck removed some if you had 
inodes claiming the same space.
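
(For what it's worth, a rough way to at least *find* pc-tree files that have no
second hard link -- i.e. nothing holding them in the pool -- is to look at the
link count; a minimal sketch, assuming the common /var/lib/backuppc layout, so
adjust the path to your TopDir. Note that empty files and per-backup log files
legitimately show a link count of 1, so treat the output as a hint only:)

  #!/usr/bin/perl
  # List pc-tree files whose link count is 1 (no pool link).
  use strict;
  use warnings;
  use File::Find;

  my $pcdir = '/var/lib/backuppc/pc';      # adjust to your TopDir/pc
  find(sub {
      return unless -f $_;                 # regular files only
      my $nlink = (lstat($_))[3];          # stat field 3 = link count
      print "$File::Find::name\n" if $nlink == 1;
  }, $pcdir);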

 Is there a tool to repair an inconsistent pool?

I'd run new full backups as soon as practical. That will at least fix up 
anything missing in the latest run which is usually the most important.

-- 
   Les Mikesell
lesmikes...@gmail.com



--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Filesystem corruption: consistency of the pool

2010-11-03 Thread martin f krafft
also sprach Carl Wilhelm Soderstrom chr...@real-time.com [2010.11.03.2020 
+0100]:
 Run full backups on all hosts, then BackupPC_nightly?

also sprach Les Mikesell lesmikes...@gmail.com [2010.11.03.2022 +0100]:
 I'd run new full backups as soon as practical. That will at least
 fix up anything missing in the latest run which is usually the
 most important.

Yeah, that's surely a good idea. I was wondering mostly about
cleanup actually.

I assume BackupPC_nightly removes everything from the pool with
a link count == 1. Hence, the worst that could happen is that all
previous backups would be rendered invalid, no?
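
(If you want to see how much the next nightly would reclaim, counting cpool
files whose link count has dropped to 1 gives a quick estimate; a rough sketch,
assuming the cpool sits under /var/lib/backuppc -- adjust for your TopDir:)

  #!/usr/bin/perl
  # Count and size cpool files that no longer have a pc-tree link.
  use strict;
  use warnings;
  use File::Find;

  my $cpool = '/var/lib/backuppc/cpool';   # adjust to your TopDir/cpool
  my ($orphans, $bytes) = (0, 0);
  find(sub {
      return unless -f $_;
      my ($nlink, $size) = (lstat($_))[3, 7];   # link count, size in bytes
      if ($nlink == 1) { $orphans++; $bytes += $size }
  }, $cpool);
  printf "%d orphaned pool files, %.1f MB\n", $orphans, $bytes / 1e6;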

Thanks guys,

-- 
martin | http://madduck.net/ | http://two.sentenc.es/
 
they that can give up essential liberty
 to obtain a little temporary safety
 deserve neither liberty nor safety.
  -- benjamin franklin
 
spamtraps: madduck.bo...@madduck.net


digital_signature_gpg.asc
Description: Digital signature (see http://martin-krafft.net/gpg/)
--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Filesystem corruption: consistency of the pool

2010-11-03 Thread Les Mikesell
On 11/3/2010 2:26 PM, martin f krafft wrote:

 I'd run new full backups as soon as practical. That will at least
 fix up anything missing in the latest run which is usually the
 most important.

 Yeah, that's surely a good idea. I was wondering mostly about
 cleanup actually.

 I assume BackupPC_nightly removes everything from the pool with
 a link count == 1. Hence, the worst that could happen is that all
 previous backups would be rendered invalid, no?

Yes, anything that is not linked by a current backup will be removed in 
the nightly runs.  The more subtle problem is that the corruption may 
have overwritten the contents of existing files - but I'm pretty sure 
that a full run will detect any content differences and fix things up. 
There would be a chance that it would miss something if blocks in the 
middle of a file changed and you are using the --checksum-seed option 
with rsync, though.  In that case it would use the cached checksums 
appended to the files instead of verifying all the way through.

-- 
   Les Mikesell
lesmikes...@gmail.com

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] filesystem recommendation

2007-09-29 Thread Doug Lytle
Ben Nickell wrote:

 Doug,

 Can I ask what method or command you used to copy the data to the new 
 LVM?  (see my new thread on this subject for the whole story)

Ben,

Unlike most people here, our BackupPC server is for catastrophic 
recovery only.  We aren't using it for archival purposes.  This allows 
me to completely purge the pool every now and then without causing a 
ruckus.  The data that I moved was config files and some other data that 
is easily moved.

Doug



-- 
Ben Franklin quote:

Those who would give up Essential Liberty to purchase a little Temporary 
Safety, deserve neither Liberty nor Safety.



-
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] filesystem recommendation

2007-09-28 Thread Ben Nickell
Doug Lytle wrote:
 Josh Marshall wrote:
   
   
 
   
 I use xfs on all my installations and feel that's the best mix of 
 performance and reliability. I use the standard mkfs.xfs but I've read 

   
 
 Just a note on this,

 I've recently purchased two 500GB drives that I wanted to add to my XFS 
 LVM.  It turns out that you can't resize an XFS partition.  I ended up 
 having to recreate the LVM.  I moved the data over to 1 of the drives, 
 recreated the LVM using reiserfs, copied the data over to the new LVM.  
 Then I added the 2nd drive and resized the partition.

 Dou

Doug,

Can I ask what method or command you used to copy the data to the new 
LVM?  (see my new thread on this subject for the whole story)

Thanks,
Ben

-
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] filesystem recommendation

2007-09-27 Thread Doug Lytle
Josh Marshall wrote:
   
 
 I use xfs on all my installations and feel that's the best mix of 
 performance and reliability. I use the standard mkfs.xfs but I've read 

   
Just a note on this,

I've recently purchased two 500GB drives that I wanted to add to my XFS 
LVM.  It turns out that you can't resize an XFS partition.  I ended up 
having to recreate the LVM.  I moved the data over to 1 of the drives, 
recreated the LVM using reiserfs, copied the data over to the new LVM.  
Then I added the 2nd drive and resized the partition.

Doug



-- 
Ben Franklin quote:

Those who would give up Essential Liberty to purchase a little Temporary 
Safety, deserve neither Liberty nor Safety.



-
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] filesystem recommendation

2007-09-27 Thread David Rees
On 9/27/07, Doug Lytle [EMAIL PROTECTED] wrote:
 I've recently purchased two 500GB drives that I wanted to add to my XFS
 LVM.  It turns out that you can't resize an XFS partition.  I ended up
 having to recreate the LVM.

You can resize an XFS partition, you need to use the xfs_growfs utility.

-Dave

-
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] filesystem recommendation

2007-09-24 Thread Ski Kacoroski
While I agree with Josh that raid 5 is slower than raid10, I have over
1500 clients backing up to 8 backuppc servers running raid5 on 3ware
cards (about 600GB of data on each server which is around 1TB of data
from the clients).  I think you need to think more about what is your
load and backup window.  Depending on your situation, you may not have
to move to raid 10.

I went with multiple smaller servers because in my testing I found that
the 3ware cards could only handle 4 streams (jobs) at a time before
they crapped out and the system load skyrocketed.  During testing
I tried raid10, raid1, and several different file systems.  Things
may be different with the newer 3ware cards as I have not tested them.

For a file system, I have used ext3 or reiserfs successfully (all my
new installations are ext3).

cheers,

ski

On Mon, 24 Sep 2007 08:23:47 +1000 Josh Marshall
[EMAIL PROTECTED] wrote:
 Ben Nickell wrote:
  I am creating a new filesystem for backuppc that will be about 3.4
  Tb. It consistes of 6 750gb SATA drives in RAID 5 on 3ware raid
  controller also using LVM.  (though not to span arrays, just for
  flexibility) 
 I strongly recommend you don't use RAID5. The read and write
 performance is nowhere near as good as RAID10 and that is what
 BackupPC's bottleneck is.
  Does anyone have any filesystem tuning ideas or options they used
  to create their filesystem that you think work well, particularly
  for large filesystems?  If so, please share the mkfs command line
  you used to create your filesystem.

 I use xfs on all my installations and feel that's the best mix of 
 performance and reliability. I use the standard mkfs.xfs but I've
 read having the journal on a separate disk makes an enourmous write
 speed difference.
 
 Regards,
 Josh.
 


-- 
When we try to pick out anything by itself, we find it
 connected to the entire universeJohn Muir

Chris Ski Kacoroski, [EMAIL PROTECTED], 206-501-9803

-
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] filesystem recommendation

2007-09-24 Thread daniel berteaud
Le Sat, 22 Sep 2007 20:45:22 -0400,
Doug Lytle [EMAIL PROTECTED] a écrit :

 Ben Nickell wrote:
  so I'm 
  thinking of moving to something else that journals data in
  additional to metadata but feel free to try to talk me out
  changing.  Any thing I 
 

 

Just to share my experience.
I've set up a backup server running BackupPC 3.0 on an SME Server (CentOS
based).
For the disks I've configured a big RAID5 array (on a PERC 5/i) with 9x750
GB + 1x750 GB as a hot spare.
Then, with LVM, I've created a logical volume of about 3.5 TB which
I've formatted as ext3. I keep the free space for some future use.

Everything is working, and the bottleneck wasn't the disk but the
processor (an Intel Xeon Dual Core 2.8 GHz), so I've added a second
processor and now I can reach the maximum performance of the disk array.
But even when the disk array is the bottleneck, maximum performance is
rarely reached because backups occur at different times.

-- 
Daniel Berteaud
FIREWALL-SERVICES SARL.
Société de Services en Logiciels Libres
Technopôle Montesquieu
33650 MARTILLAC
Tel : 05 56 64 15 32
Fax : 05 56 64 82 05
Mail: [EMAIL PROTECTED]

-
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] filesystem recommendation

2007-09-24 Thread Les Mikesell
daniel berteaud wrote:

 Just to share my experience.
 I've setup a Backup server running BackupPC 3.0 on a smeserver (centos
 based)

How much trouble was it to install backuppc on SME server?  (For those 
who don't know, this is an appliance-like server setup with all
administration done through a simple web interface).

-- 
   Les Mikesell
[EMAIL PROTECTED]



-
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] filesystem recommendation

2007-09-23 Thread Josh Marshall
Ben Nickell wrote:
 I am creating a new filesystem for backuppc that will be about 3.4 Tb.  
 It consistes of 6 750gb SATA drives in RAID 5 on 3ware raid controller 
 also using LVM.  (though not to span arrays, just for flexibility)
   
I strongly recommend you don't use RAID5. The read and write performance 
is nowhere near as good as RAID10 and that is what BackupPC's bottleneck is.
 Does anyone have any filesystem tuning ideas or options they used to 
 create their filesystem that you think work well, particularly for large 
 filesystems?  If so, please share the mkfs command line you used to 
 create your filesystem.
   
I use xfs on all my installations and feel that's the best mix of 
performance and reliability. I use the standard mkfs.xfs but I've read 
having the journal on a separate disk makes an enourmous write speed 
difference.

Regards,
Josh.

-
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


[BackupPC-users] filesystem recommendation

2007-09-22 Thread Ben Nickell
I have been very happy with backuppc so far. Thanks for the great program. 

I am creating a new filesystem for backuppc that will be about 3.4 TB.
It consists of six 750 GB SATA drives in RAID 5 on a 3ware RAID controller,
also using LVM (though not to span arrays, just for flexibility).

Does anyone have any filesystem tuning ideas or options they used to
create their filesystem that they think work well, particularly for large
filesystems?  If so, please share the mkfs command line you used to
create your filesystem.

I have been using reiserfs, but have had to do reiserfsck
--rebuild-tree a couple of times on my old 1.5 TB filesystem, so I'm
thinking of moving to something else that journals data in addition to
metadata, but feel free to try to talk me out of changing.  Anything I
should watch out for? (such as how do I ensure I have enough inodes if I
choose ext3)


Thanks and Best Regards,
Ben Nickell

-
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] filesystem recommendation

2007-09-22 Thread Doug Lytle
Ben Nickell wrote:
 so I'm 
 thinking of moving to something else that journals data in additional to 
 metadata but feel free to try to talk me out changing.  Any thing I 

   

Here is some info on File systems:

http://en.wikipedia.org/wiki/Comparison_of_file_systems

Doug


-- 
Ben Franklin quote:

Those who would give up Essential Liberty to purchase a little Temporary 
Safety, deserve neither Liberty nor Safety.



-
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


[BackupPC-users] Filesystem benchmarks

2007-03-28 Thread John Pettitt




Following the extended discussion of system benchmarks here are some
actual numbers from a FreeBSD box - if anybody has the time to run
similar numbers on linux boxes I will happily collate the data.

John

2.93 GHz Celeron D, 768 MB ram FreeBSD 6.2

bonnie++ -f 0 -d . -s 3072 -n 10:10:10:10 

Key:
IDE          = 80 GB IDE, soft updates, atime on
IDE-R1-atime = 300 GB RAID 1 (mirror) IDE, atime on, soft updates
IDE-R1       = 300 GB RAID 1 (mirror) IDE, no atime, soft updates
IDE-R1-sync  = 300 GB RAID 1 (mirror) IDE, no atime, sync
IDE-R1-async = 300 GB RAID 1 (mirror) IDE, no atime, async
SATA-R10     = 1.5 TB RAID 10 SATA on 3ware 9500S-12, no atime, soft updates

v4   = reiserfs v4 from Namesys on a 2.4 GHz Xeon
ext3 = ext3 from Namesys on a 2.4 GHz Xeon

Version 1.93c  (Concurrency 1; per-character tests skipped by -f)

                     ---Sequential Output---    -Sequential Input-   --Random--
                      --Block--    -Rewrite-        --Block--         --Seeks--
Machine        Size   K/sec  %CP   K/sec  %CP      K/sec  %CP         /sec  %CP
IDE              3G   40181   22   12106    6      36944   12         99.4    7
IDE-R1           3G   34511   20   14857    8      54482   19        121.7    9
IDE-R1-atime     3G   34426   21   14832    8      54402   18        122.1    9
IDE-R1-sync      3G    4904    8    4248    4      53750   18        103.7    8
IDE-R1-async     3G   34405   20   14877    8      53579   18        122.6    9
SATA-R10         3G   85375   53   25188   14      49751   17        454.1   33

v4               3G   37579   19   15657   11      41531   11        105.8    0
ext3             3G   35221   22   10987    4      41105    6         90.9    0

                     ------Sequential Create------    --------Random Create--------
files 10:10:10:10    -Create--  --Read---  -Delete-   -Create--  --Read---  -Delete-
Machine               /sec %CP   /sec %CP  /sec %CP    /sec %CP   /sec %CP  /sec %CP
IDE                    236   8   5071  73  9954  35     229   7   4354  68  +++++ +++
IDE-R1                 460  19   5524  76  13606 86     301  10   4576  65  +++++ +++
IDE-R1-atime           377  15   4368  63  11580 70     395  13   5061  78  11642  90
IDE-R1-sync            107  12   6027  86  +++++ +++    112  12   5466  83  19644  82
IDE-R1-async           370  15   4609  66  18905 49     376  12   5583  84  11427  90
SATA-R10               973  41   7365  98  13281 84    1079  38   5877  79  12875  85

v4                     570  39    746  17  1435  23     513  40    104   2    951  15
ext3                   221   8    364   4   853   4     204   7     99   1    306   2


Notes:
The 3ware card is somewhat bus-limited because it's in a 32-bit PCI slot;
in a 64-bit slot I'd expect better sequential read performance. This also
drives up the CPU numbers due to bus contention.

The read numbers for ext3 and reiserfs look suspect.

Stripe size for the 3ware RAID 10 is 256k.

All filesystems were live and had other files on them - virgin filesystems
may perform very differently.

Conclusion:
ufs2 is pretty similar to ext2, if not a little better, but not as fast
as reiser4.

sync is a big drag, but async makes almost no difference over soft
updates.

atime/noatime doesn't make a whole lot of difference on this test.



-
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Filesystem benchmarks

2007-03-28 Thread Brien Dieterle




Here are some benchmarks I ran last week. I think it's important
to balance the -s size with the -n numbers so that you are
dealing with the same amount of data, otherwise caching can bite you
and you can get misleading results. Therefore, I used a 10k file size,
and adjusted the number of files to give approximately the same amount
of megabytes that was specified with -s, i.e. 200MB of block IO and
200MB of small-file IO. I only did raid5 vs single disk comparisons on
the 200, 800, and 3200MB data points, for time reasons :-)

I think these results show three things going on. 1) the steady decline
of ext3 performance as # files increases. 2) that raid5 and multiple
disks really don't significantly increase read performance
for small files. 3) once you're outside the caching zones things go
badly: get more ram :-)

Then again, my hardware might just be shoddy :-)


Machine info:
DL380-G4
dual cpu 3.20GHz
Debian Sarge 2.6.8-3-686-smp
Smart Array 6i controller with 128MB BBC, 50/50r/w
Machine Ram limited to 256MB with kernel param mem=256M
Ext3 filesystems, defaults


logical drives created with defaults:

Raid-5 array: 

Smart Array 6i in Slot 0
 logicaldrive 1
 Size: 1 TB
 Fault Tolerance: RAID 5
 Heads: 255
 Sectors per Track: 32
 Cylinders: 65535
 Stripe Size: 64 KB
 Array Accelerator: Enabled

Single Disk:

Smart Array 6i in Slot 0
 logicaldrive 2
 Size: 279 GB
 Fault Tolerance: RAID 0
 Heads: 255
 Sectors per Track: 32
 Cylinders: 65535
 Stripe Size: 128 KB
 Status: Ok
 Array Accelerator: Enabled






200MB:

Single DISK
bonnie -u root -s200 -n20:1:1:10 -r0 -f

Version 1.03 --Sequential Output-- --Sequential Input-
--Random-
 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--
--Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP
/sec %CP
mirror 200M 144896 45 15404 4 159673
9 8163 6
 --Sequential Create-- Random
Create
 -Create-- --Read--- -Delete-- -Create-- --Read---
-Delete--
files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
/sec %CP
 20:1:1/10 4523 41 1976 6 + +++ 6038 55 408 1
25162 99
mirror,200M,,,144896,45,15404,4,,,159673,9,8163.2,6,20:1:1/10,4523,41,1976,6,+,+++,6038,55,408,1,25162,99

5 Disk Raid5

Version 1.03 --Sequential Output-- --Sequential
Input- --Random-
 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--
--Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP
/sec %CP
mirror 200M 01 72 35381 10 110627 12
10372 11
 --Sequential Create-- Random
Create
 -Create-- --Read--- -Delete-- -Create-- --Read---
-Delete--
files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
/sec %CP
 20:1:1/10 6753 63 2855 9 + +++ 10416 96 421 1
25088 99
mirror,200M,,,01,72,35381,10,,,110627,12,10372.3,11,20:1:1/10,6753,63,2855,9,+,+++,10416,96,421,1,25088,99



300MB

Single DISK
bonnie -u root -s300 -n30:1:1:10 -r0 -f

Version 1.03 --Sequential Output-- --Sequential Input-
--Random-
 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--
--Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP
/sec %CP
mirror 300M 69272 23 19904 6 45771 7
1439 2
 --Sequential Create-- Random
Create
 -Create-- --Read--- -Delete-- -Create-- --Read---
-Delete--
files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
/sec %CP
 30:1:1/10 3972 43 2067 6 45890 93 2960 32 196 1
21817 99
mirror,300M,,,69272,23,19904,6,,,45771,7,1439.0,2,30:1:1/10,3972,43,2067,6,45890,93,2960,32,196,1,21817,99




400MB

Single DISK
bonnie -u root -s400 -n40:1:1:10 -r0 -f

Version 1.03 --Sequential Output-- --Sequential Input-
--Random-
 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--
--Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP
/sec %CP
mirror 400M 57034 19 20791 6 52027 8
669.3 1
 --Sequential Create-- Random
Create
 -Create-- --Read--- -Delete-- -Create-- --Read---
-Delete--
files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
/sec %CP
 40:1:1/10 2844 37 2142 6 41534 82 2965 39 171 1
18650 98
mirror,400M,,,57034,19,20791,6,,,52027,8,669.3,1,40:1:1/10,2844,37,2142,6,41534,82,2965,39,171,1,18650,98



800MB

Single DISK:
bonnie -u root -s800 -n80:1:1:10 -r0 -f


Version 1.03 --Sequential Output-- --Sequential Input-
--Random-
 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--
--Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP
/sec %CP
mirror 800M 52857 19 23641 7 58240 8
419.0 0
 --Sequential Create-- Random
Create
 -Create-- --Read--- -Delete-- -Create-- --Read---
-Delete--
files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
/sec %CP
 80:1:1/10 3310 70 2858 9 35289 66 2309 49 170 1
8127 68
mirror,800M,,,52857,19,23641,7,,,58240,8,419.0,0,80:1:1/10,3310,70,2858,9,35289,66,2309,49,170,1,8127,68


5 Disk Raid5:
Version 1.03 --Sequential 

Re: [BackupPC-users] Filesystem recommendation?

2007-01-08 Thread Carl Wilhelm Soderstrom
On 12/30 10:23 , Michael Mansour wrote:
 Personally I'd be trouble-shooting your ext3 problems and working them out, 
 since ext3 by default offers quite a bit of data resilience.

I use reiserfs for BackupPC data pools (and ext3 for the rest of the OS).
Partly because with reiserfs you don't have to worry about running out of
inodes. Also, reiserfs is very fast at creating lots of little files, and it
resizes nicely.

It's nothing special at rewriting or deleting files, and may in fact be
slower than ext3 in some ways. (Hans Reiser's claims to the contrary). But
it works well enough for the purpose of backuppc's pool; and better than
most.

I know people who use XFS tho, and report very good success with it.

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com

-
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


[BackupPC-users] Filesystem recommendation?

2006-12-29 Thread John Villalovos
I was wondering if people had any recommendations for a filesystem to
use with BackupPC?

I am currently using ext3 but now I can't fsck the drive anymore.  So
I would like to move to a different filesystem.

Any suggestions are appreciated.

Thanks,
John

-
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Filesystem recommendation?

2006-12-29 Thread Michael Mansour
Hi John,

 I was wondering if people had any recommendations for a filesystem to
 use with BackupPC?
 
 I am currently using ext3 but now I can't fsck the drive anymore.  So
 I would like to move to a different filesystem.

ext3 is one of the most stable Linux filesystems around. As this is storing
backups of your other servers/PCs, I'd think that this is something you'd
like to keep instead of moving to something less stable.

There are heaps of others you can choose from though, including reiserfs, jfs,
xfs, etc., but each has its good points and bad points depending on your
requirements and even the Linux distribution you're using.

Personally I'd be trouble-shooting your ext3 problems and working them out, 
since ext3 by default offers quite a bit of data resilience.

Regards,

Michael.

 Any suggestions are appreciated.
 
 Thanks,
 John
 
--- End of Original Message ---


-
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


[BackupPC-users] filesystem versus host backups

2006-05-10 Thread Matt Wette

Hi,

I am using BackupPC at home for three Win32 boxes.  I would like to
add in my Linux box as well.  However, if my understanding is correct,
backuppc (2.1.2) seems to be set up for one transport method and one
set of filesystems which gets backed up on each host.  I would like
to be able to use different transport methods for different
filesystems on different hosts.  Am I wrong and this is possible, or
if not possible, is it planned for a future release?

Here is my impression of current situation:

  config.pl:
  $Conf{XferMethod} = 'smb';
  $Conf{SmbShareName} = ['Sam', 'Mary', 'Matt'];

  hosts:
  pc1  0  family
  pc2  0  family
  pc3  0  family

Here is what I'd like to see:

  filesystems:
  #filesys-spec  X  meth host filesystem
  pc1-Sam0  smb  pc1  Sam
  pc2-Mary   0  smb  pc2  Mary
  pc3-Matt   0  tar  pc3  /home/Matt


---
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


RE: [BackupPC-users] filesystem versus host backups

2006-05-10 Thread Justin Best
 I would like
 to be able to use different transport methods for different
 filesystems on different hosts.

This is extremely easy to do. Put your 'default' settings in config.pl and
create a 'per-PC' config file to specify settings that are unique to a
particular host.

So, in config.pl, set:
   $Conf{XferMethod} = 'smb';

Then, create a pc3.pl file and set
   $Conf{XferMethod} = 'tar';

If you're using the Debian packages, this new file should be put into the
/etc/backuppc/ directory, alongside config.pl

All that needs to be in the pc3.pl file are the settings you need to be
different. No need to specify all the settings that are already defined
properly in config.pl
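
(Putting that together, a minimal sketch of the two files -- the share path and
values are made-up examples, and the file locations are the Debian ones
mentioned above; other installs keep per-host config elsewhere:)

  # /etc/backuppc/config.pl -- site-wide defaults (smb for the Windows boxes)
  $Conf{XferMethod}   = 'smb';
  $Conf{SmbShareName} = ['Sam', 'Mary', 'Matt'];

  # /etc/backuppc/pc3.pl -- only the settings that differ for this host
  $Conf{XferMethod}   = 'tar';
  $Conf{TarShareName} = ['/home/Matt'];    # directory to back up via tar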

Hope that helps!


Justin Best
503.906.7611 Voice
561.828.0496 Fax



---
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] filesystem benchmark results

2006-03-10 Thread Les Mikesell
On Fri, 2006-03-10 at 12:57, Matt wrote:

 I'm not going to discuss this proposal any further.  The way that backuppc
 (and other programs) work is too ingrained in people's thinking to realize
 that there are other ways to do things.
   
 
 I generally dislike such statements. They  stop otherwise healthy
 discussions.   That said,  you may be right here. If only because
 implementing your proposal may imply changing a lot of the core
 structure of backuppc and a fresh start, stealing some pieces from
 backuppc, might be a better approach.

Of course there are other ways to do things, but they aren't
necessarily going to be better.  I'm not convinced that there
is much you can do to maintain any kind of structured order
over a long term when you are adding files from multiple
sources simultaneously and expiring them more or less randomly.
You might make it faster to traverse the directory of one
host or the pool, but in my usage those are rare operations.
You could also make it easier to do normal file-based copies
of an existing archive/pool, but there are other approaches
to this too.

 BTW, I barely understand how backuppc is working. Its workings are
 surely not ingrained in my mind. On the contrary, I am still struggling
 to understand why distinguishing full and incremental backups is
 necessary if one uses rsync.  To me this seems like a relic of
 tape archives. Same for doing a full based on the last full, not the last
 filled-in incremental.

Rsync incrementals turn on the option to skip files where the length
and timestamp match, making them considerably faster but more likely
to accidentally miss a changed file. However, since they always work
against the last full, you end up re-copying things that were copied
in previous incrementals.  There is some room for improvement here.  
-- 
  Les Mikesell
   [EMAIL PROTECTED]



---
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] filesystem benchmark results

2006-03-10 Thread Matt
Les Mikesell wrote:

Of course there are other ways to do things, but they aren't
necessarily going to be better.  I'm not convinced that there
is much you can do to maintain any kind of structured order
over a long term when you are adding files from multiple
sources simultaneously and expiring them more or less randomly.
  

It's not really random!  The data are expiring because a backup of a
host expires.  As I said, Dirvish's performance was more than an order
of magnitude better.  It uses cross-links but it keeps the original
tree structure for each host.  To me this shows that there has to be
a better way to do things, and Dave's proposal seems right on target.

You might make it faster to traverse the directory of one
host or the pool, but in my usage those are rare operations.
You could also make it easier to do normal file-based copies
of an existing archive/pool, but there are other approaches
to this too.
  

Maybe, but none is as simple, and with the DB managing the metadata one
may be able to keep the transparency without much cost.  Filename mangling
isn't needed anymore.



---
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] filesystem benchmark results

2006-03-10 Thread Les Mikesell
On Fri, 2006-03-10 at 14:41, Matt wrote:
 I'm not convinced that there
 is much you can do to maintain any kind of structured order
 over a long term when you are adding files from multiple
 sources simultaneously and expiring them more or less randomly.
   

 It's not really random!   The data are expiring because a backup of a
 host expires.

You might speed up access to the directory listing you are
expiring, but the data files are still going to be randomly
located.  They will probably have been accumulated by several
simultaneous runs with the first unique copy from any host
being the one that gets saved.

 As I said. Dirvish's performance was more than an order
 of magnitude better.It uses cross-links but it keeps the original
 tree  structure for  each host.   To me this  shows that there has to be
 is a better way to do things and Dave's proposal seems right on target.

Are you comparing uncompressed native rsync runs to the perl version
that handles backuppc's compressed files?  There are more variables
involved than the link structure.  Also, backuppc may be running
several backups at once, which is a good thing if you have a fast
server and slow or remote clients. 

-- 
  Les Mikesell
   [EMAIL PROTECTED]
 



---
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] filesystem benchmark results

2006-03-10 Thread Matt
Les Mikesell wrote:

Are you comparing uncompressed native rsync runs to the perl version
that handles backuppc's compressed files?

More or less yes.  Dirvish boils down to a perl wrapper around rsync
--link-dest=DIR ...
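
(For anyone who hasn't seen that approach: the heart of it is one rsync call
per backup run; a stripped-down sketch of the idea -- not Dirvish itself, and
the host and paths are made up:)

  #!/usr/bin/perl
  # Hardlink-tree snapshots, dirvish/rsnapshot style: unchanged files become
  # hard links to the previous image, changed files are transferred anew.
  use strict;
  use warnings;
  use POSIX qw(strftime);

  my $src   = 'client:/home/';                 # example source
  my $vault = '/backups/client';               # example destination tree
  my $today = strftime('%Y-%m-%d', localtime);
  my ($last) = reverse sort glob("$vault/*");  # newest previous image, if any

  my @cmd = ('rsync', '-aH', '--delete');
  push @cmd, "--link-dest=$last" if $last;     # link unchanged files to it
  push @cmd, $src, "$vault/$today";
  system(@cmd) == 0 or die "rsync failed: $?\n";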

I haven't tested backuppc without compression -- after all this was one
reason for my switch. If storing uncompressed buys me back the order of
magnitude in speed I may be  tempted to use it.  My assumption is that
the directory structure of the pool is the main reason for the
performance degradation.

  There are more variables
involved than the link structure.  Also, backuppc may be running
several backups at once, which is a good thing if you have a fast
server and slow or remote clients. 
  

No, it's just one client. Rough numbers off the top of my head:

Full backup, 1 client, 1.4 TB, 5e5 -- 1e6 files, against an existing
backup, I'd guess 5% changes during the week: 21 h
Incremental backup of the client: 1.5 h
Dirvish, same client, same server: 15 min





Re: [BackupPC-users] filesystem benchmark results

2006-03-09 Thread David Brown
On Tue, Mar 07, 2006 at 09:23:36AM -0600, Carl Wilhelm Soderstrom wrote:

 I'm experimenting with an external firewire drive enclosure, and I formatted
 it with 3 different filesystems, then used bonnie++ to generate 10GB of
 sequential data, and 1,024,000 small files between 1000 and 100 bytes in
 size.
 
 I tried it with xfs, reiserfs, and ext3; and contrary to a lot of hype out
 there, ext3 seems to have won the race for random file reads and deletes
 (which is what BackupPC seems to be really heavy on).

Unfortunately, the resultant filesystem bears very little resemblance to the
file tree that backuppc writes.  I'm not sure there is any utility that
creates this kind of tree, and I would argue that backuppc shouldn't be
creating one either, since it is so hard on the filesystem.

Basically, you need to first create a deep tree (like a filesystem), and
then hardlink all of those files into something like a pool, in a very
different order than they were put into the tree.

Then, create another tree, except some of the files should be fresh, and
some should be hardlinks back to the pool (or to the first tree).  Then the
new files should be linked into the pool.
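
For anyone who wants to benchmark something closer to the real workload, a
generator along these lines would reproduce that pattern (a rough sketch
only; the file counts, size range, 90% reuse rate and the two-level pool
layout are arbitrary choices for illustration, not BackupPC's actual scheme):

    import hashlib, os, random

    def make_backup(root, pool, n_files=100_000, prev_pool_files=()):
        """Build a backup-like tree under root, then hard-link its new
        contents into pool in shuffled order (pool order != tree order)."""
        created = []
        for i in range(n_files):
            d = os.path.join(root, f"d{i % 64:02d}")
            os.makedirs(d, exist_ok=True)
            path = os.path.join(d, f"f{i}")
            if prev_pool_files and random.random() < 0.9:   # "unchanged" file
                os.link(random.choice(prev_pool_files), path)
            else:                                            # "fresh" file
                with open(path, "wb") as f:
                    f.write(os.urandom(random.randint(100, 1000)))
                created.append(path)
        random.shuffle(created)
        pool_files = list(prev_pool_files)
        for path in created:
            with open(path, "rb") as f:
                digest = hashlib.md5(f.read()).hexdigest()
            pdir = os.path.join(pool, digest[:2], digest[2:4])
            os.makedirs(pdir, exist_ok=True)
            ppath = os.path.join(pdir, digest)
            if not os.path.exists(ppath):
                os.link(path, ppath)
            pool_files.append(ppath)
        return pool_files

    # First "backup", then a second one that mostly hard-links into the pool.
    first = make_backup("/mnt/test/pc/host1/0", "/mnt/test/pool")
    make_backup("/mnt/test/pc/host1/1", "/mnt/test/pool", prev_pool_files=first)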

Programs like backuppc are the only things I know of that create such trees,
and the performance of a given filesystem on this kind of tree isn't really
going to correlate much with that filesystem's performance on any other task.
Most filesystems optimize assuming that files will tend to stay in the
directory they were created in.  Creating this massive pool of links to files
in diverse places completely breaks those optimizations.

Honestly, you probably won't ever find a filesystem that handles the
backuppc pool very well.  I think the solution is to change backuppc to not
create multiple trees, but to store the filesystem tree in some kind of
database, and just store the files themselves in the pool.  Database
engines are optimized to be able to handle multiple indexing into the data,
whereas filesystems are not (and aren't likely to be, either).

As far as implementing this pool-only storage goes, it is important to
create the file in the proper directory first, which means the hash must be
known before it can be written.  Of course, if there is a database, there
is no reason to make the filenames part of the hash rather than just
sequential integers, using a unique key in the database table.

Dave




Re: [BackupPC-users] filesystem benchmark results

2006-03-09 Thread Matt
David Brown wrote:

 I think the solution is to change backuppc to not

create multiple trees, but to store the filesystem tree in some kind of
database, and just store the files themselves in the pool.  Database
engines are optimized to be able to handle multiple indexing into the data,
whereas filesystems are not (and aren't likely to be, either).

As far as implementing this pool-only storage goes, it is important to
create the file in the proper directory first, which means the hash must be
known before it can be written.  Of course, if there is a database, there
is no reason to make the filenames part of the hash rather than just
sequential integers, using a unique key in the database table.
  


Wouldn't it be better to keep the directory structure of the
(compressed) files and keep hash and attributes in the DB?  After all,
this is how the data are received and how they will be accessed  during
a restore.

Here I show my heritage: I had been using dirvish for two years before
switching to backuppc.  I switched to backuppc because of the built-in
file compression and, more importantly, because dirvish duplicates renamed
files on the backup.  However, I never had performance problems, even
though dirvish creates a lot of hard links just like backuppc.

Speed-wise, backuppc really sucks compared to dirvish.  With dirvish I was
able to back up 1.4 TB in 30 minutes; most of the time was spent by rsync
collecting and transmitting the file lists.  Now a full backup takes almost
a day, even if the amount of new data is negligible (checksum caching
enabled).

... Matt




Re: [BackupPC-users] filesystem benchmark results

2006-03-09 Thread Kanwar Ranbir Sandhu
On Thu, 2006-09-03 at 18:31 -0800, Matt wrote:
 Speed-wise, backuppc really sucks compared to dirvish.  With dirvish I was
 able to back up 1.4 TB in 30 minutes; most of the time was spent by rsync
 collecting and transmitting the file lists.  Now a full backup takes almost
 a day, even if the amount of new data is negligible (checksum caching
 enabled).

I don't know about dirvish, but I concur: backuppc is slow.  Even LAN-based
backups are not as fast as they should be.

Still, backuppc is a great app.  I haven't found anything else that is
as easy to set up, use, and maintain.  If backuppc could be sped up, it
would be perfect.

Seems to me backuppc would benefit immensely from a few more developers.
I think a marketing campaign is in order.  It'll pique interest amongst
the OSS community, and hopefully attract a couple of developers.
Donations to the project might be a great way to encourage development
as well.

Regards,

Ranbir

-- 
Kanwar Ranbir Sandhu
Linux 2.6.15-1.1831_FC4 i686 GNU/Linux 
22:16:41 up 1 day, 18 min, 3 users, load average: 0.48, 0.33, 0.20 






Re: [BackupPC-users] filesystem benchmark results

2006-03-09 Thread David Brown
On Thu, Mar 09, 2006 at 06:31:51PM -0800, Matt wrote:

 Wouldn't it be better to keep the directory structure of the
 (compressed) files and keep hash and attributes in the DB?  After all,
 this is how the data are received and how they will be accessed  during
 a restore.

Even that isn't all that important.  Just store the files into the pool in
the order they are traversed.  The DB stores hashes and attributes (and the
real tree).  Each new file, after storing it, has a full hash computed, and
the DB checked.  If it is unique, leave it; if it duplicates an existing
pool file, delete the new copy and update the DB.

Since the DB maintains the mapping, the names don't need to be very
interesting; I was thinking of something like

  1  2  3  4  5  6  7  8  9
  1x/10  1x/11  ...  1x/19
  2x/20  2x/21  ...
  ...
  9x/98  9x/99
  1x/1x/00  ...

or some other traversal where the names get deeper as the numbers get
longer.  On something like reiserfs (especially reiser4) there isn't much
particular reason to not just put all of the files in a single directory
(at least according to them).  'ls' might struggle with that, though.

By storing the files in the same order as the traversal, they will likely
stay near files that will be retrieved at a similar time.
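
In code, a numbering scheme like that could look something like the sketch
below (purely illustrative; the exact fan-out and the 'x' suffix are just a
guess at the layout Dave describes, and the function name is mine):

    def pool_path(n: int) -> str:
        """Map a sequential integer to a path whose depth grows with the
        number of digits, e.g. 7 -> '7', 42 -> '4x/42', 100 -> '1x/0x/100'."""
        digits = str(n)
        parts = [d + "x" for d in digits[:-2]]   # one level per digit beyond the last two
        if len(digits) >= 2:
            parts.append(digits[-2] + "x")
        parts.append(digits)
        return "/".join(parts)

    # >>> [pool_path(i) for i in (7, 19, 99, 100, 12345)]
    # ['7', '1x/19', '9x/99', '1x/0x/100', '1x/2x/3x/4x/12345']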

Dave




[BackupPC-users] filesystem benchmark results

2006-03-07 Thread Carl Wilhelm Soderstrom
I'm experimenting with an external firewire drive enclosure, and I formatted
it with 3 different filesystems, then used bonnie++ to generate 10GB of
sequential data, and 1,024,000 small files between 100 and 1000 bytes in
size.

I tried it with xfs, reiserfs, and ext3; and contrary to a lot of hype out
there, ext3 seems to have won the race for random file reads and deletes
(which is what BackupPC seems to be really heavy on).

Reiserfs of course wins hands-down when it comes to *creating* files, but
isn't always so good at reading them back or deleting them.

Am I missing something here? Am I mis-interpreting the data? Is there anyone
else out there with more bonnie experience than I, who can suggest other
things to try to gain more surety about this?

Of course, one of the nice things about Reiserfs is that you don't have to
worry about running out of inodes. For that alone, it is likely worthwhile
on backuppc storage filesystems.

This was done with an 800MHz Dell X200 laptop with an Adaptec external drive
enclosure, attached via firewire (400M). The filesystem was re-created
between each run, then the same bonnie command re-run.
In my copious spare time, I should try this on another testbed machine I
have. (Also, more runs on the same box, since they seem to vary somewhat).

bonnie++ 1.03 results (Adaptec enclosure / 250GB Maxtor; sequential I/O size
reported as 1M; per-character results omitted; small-file spec 50:1000:100
across 64 directories):

             --Sequential Output--   -Sequential Input-   --Random--
             Block K/s %CP  Rewrite K/s %CP   Block K/s %CP   Seeks/s %CP
xfs               9494   7        4944   4       10659   3      98.3   0
reiserfs          9800   9        5003   4       10343   3      86.9   0
ext3              9814   9        5004   4       10407   3      86.6   0

             ------Sequential Create------    --------Random Create--------
             Create/s %CP  Read/s %CP  Del/s %CP   Create/s %CP  Read/s %CP  Del/s %CP
xfs               933  14   53578  99    842  13       1074  17   40257  80    478  10
reiserfs         5016  92    8231  21   8086  94       3591  88     796   4    524   9
ext3             6820  62   47836  98   1301   5       6738  61   44981  94   1198   5


-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com




Re: [BackupPC-users] filesystem benchmark results

2006-03-07 Thread Carl Wilhelm Soderstrom
On 03/07 04:43 , Guus Houtzager wrote:
 I think you're right. I have 2 suggestions for additional testing. It's my 
 experience that backuppc became really really slow after a few weeks when 
 more data began to accumulate. Could you test ext3 again, but with a few 
 million more files? I'm also rather interested to know if the dir_index 
 option of ext3 makes any difference. Could you try that too (mke2fs -j -O 
 dir_index /dev/whatever) please? 

I created that filesystem with the dir_index option already. :)

 You can let bonnie use softlinks or hardlinks instead of real files in the 
 test, so maybe that would be a nice additional test to run.

yeah, thanks for the reminder.

  Of course, one of the nice things about Reiserfs is that you don't have to
  worry about running out of inodes. For that alone, it is likely worthwhile
  on backuppc storage filesystems.
 
 mke2fs -j -T news should take care of that.

I've had problems with that in the past; can't remember exactly what they
were. I tend to just leave that option alone these days.

 What kernel did you use?

2.6.14. Debian 2.6.14-2-686, specifically.

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com




Re: [BackupPC-users] filesystem benchmark results

2006-03-07 Thread Les Mikesell
On Tue, 2006-03-07 at 09:23, Carl Wilhelm Soderstrom wrote:

 Am I missing something here? Am I mis-interpreting the data? Is there anyone
 else out there with more bonnie experience than I, who can suggest other
 things to try to gain more surety about this?

See if you can find a benchmark program called 'postmark'.  This
used to be available from NetApp but I haven't been able to find
a copy recently.  It specifically tests creation and deletion
of lots of small files.  When I used it years ago it showed
the then-current version of Reiserfs was much faster at this
than ext2.

-- 
  Les Mikesell
   [EMAIL PROTECTED]






Re: [BackupPC-users] filesystem benchmark results

2006-03-07 Thread Carl Wilhelm Soderstrom
On 03/07 09:54 , Les Mikesell wrote:
 See if you can find a benchmark program called 'postmark'.  This
 used to be available from NetApp but I haven't been able to find
 a copy recently.  It specifically tests creation and deletion
 of lots of small files.  When I used it years ago it showed
 the then-current version of Reiserfs was much faster at this
 than ext2.

$ apt-cache search postmark
postmark - File system benchmark from NetApp

whaddya know? I'll have to give it a try. thanks for the pointer!

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com




Re: [BackupPC-users] filesystem benchmark results

2006-03-07 Thread David Brown
On Tue, Mar 07, 2006 at 09:23:36AM -0600, Carl Wilhelm Soderstrom wrote:

 I'm experimenting with an external firewire drive enclosure, and I formatted
 it with 3 different filesystems, then used bonnie++ to generate 10GB of
 sequential data, and 1,024,000 small files between 1000 and 100 bytes in
 size.
 
 I tried it with xfs, reiserfs, and ext3; and contrary to a lot of hype out
 there, ext3 seems to have won the race for random file reads and deletes
 (which is what BackupPC seems to be really heavy on).

Unfortunately, the resultant filesystem bears very little resemblance to the
file tree that backuppc writes.  I'm not sure there is any utility that
creates this kind of tree, and I would argue that backuppc shouldn't be
creating one either, since it is so hard on the filesystem.

Basically, you need to first create a deep tree (like a filesystem), and
then hardlink all of those files into something like a pool, in a very
different order than they were put into the tree.

Then, create another tree, except some of the files should be fresh, and
some should be hardlinks back to the pool (or to the first tree).  Then the
new files should be linked into the pool.

Programs like backuppc are the only things I know of that create such trees,
and the performance of a given filesystem on this kind of tree isn't really
going to correlate much with that filesystem's performance on any other task.
Most filesystems optimize assuming that files will tend to stay in the
directory they were created in.  Creating this massive pool of links to files
in diverse places completely breaks those optimizations.

Honestly, you probably won't ever find a filesystem that handles the
backuppc pool very well.  I think the solution is to change backuppc to not
create multiple trees, but to store the filesystem tree in some kind of
database, and just store the files themselves in the pool.  Database
engines are optimized to be able to handle multiple indexing into the data,
whereas filesystems are not (and aren't likely to be, either).

As far as implementing this pool-only storage goes, it is important to
create the file in the proper directory first, which means the hash must be
known before it can be written.  Of course, if there is a database, there
is no reason to make the filenames part of the hash rather than just
sequential integers, using a unique key in the database table.
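
As a rough illustration of what that might look like (a sketch only, using
SQLite; the table layout, SHA-1 hashing and /srv/backuppc paths are my own
stand-ins, not a proposed BackupPC schema):

    import hashlib, os, shutil, sqlite3

    db = sqlite3.connect("/srv/backuppc/index.db")
    db.execute("""CREATE TABLE IF NOT EXISTS pool (
                    id   INTEGER PRIMARY KEY,   -- doubles as the pool file name
                    hash TEXT UNIQUE,
                    size INTEGER)""")
    db.execute("""CREATE TABLE IF NOT EXISTS tree (
                    backup INTEGER, path TEXT, pool_id INTEGER,
                    mode INTEGER, mtime INTEGER)""")

    def store(backup_id, rel_path, tmp_file, mode, mtime):
        """Move tmp_file into the pool at most once; record the tree entry."""
        with open(tmp_file, "rb") as f:
            h = hashlib.sha1(f.read()).hexdigest()
        row = db.execute("SELECT id FROM pool WHERE hash = ?", (h,)).fetchone()
        if row:
            pool_id = row[0]
            os.unlink(tmp_file)              # duplicate content: drop the copy
        else:
            cur = db.execute("INSERT INTO pool (hash, size) VALUES (?, ?)",
                             (h, os.path.getsize(tmp_file)))
            pool_id = cur.lastrowid
            # Move the file before committing: a crash here leaves at worst an
            # orphan pool file to garbage-collect, never a dangling DB row.
            shutil.move(tmp_file, f"/srv/backuppc/pool/{pool_id}")
        db.execute("INSERT INTO tree VALUES (?, ?, ?, ?, ?)",
                   (backup_id, rel_path, pool_id, mode, mtime))
        db.commit()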

Dave




Re: [BackupPC-users] filesystem benchmark results

2006-03-07 Thread Carl Wilhelm Soderstrom
On 03/07 08:14 , David Brown wrote:
 Unfortunately, the resultant filesystem has very little resemblance to the
 file tree that backuppc writes.  I'm not sure if there is any utility that
 creates this kind of tree, and I would argue that backuppc shouldn't be
 either, since it is so hard on the filesystem.
 
 Basically, you need to first create a deep tree (like a filesystem), and
 then hardlink all of those files into something like a pool, in a very
 different order than they were put into the tree.

ok. point taken.
Bonnie does create a very shallow tree for these files, but it's only a
directory or two deep.

What we really need is a Backuppc-specific benchmark. I don't suppose
there's an easy way to take the storage engine out of backuppc, put it into
some sort of test harness, and run some benchmarks with it?

The only thing like this that I can conceive of would be to take a copy of
the backuppc pool before a BackupPC_nightly run and time that run, then
restore the copy of the pool onto a different filesystem type and try the
same thing again.  Rather laborious, and not necessarily accurate.

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com




Re: [BackupPC-users] filesystem benchmark results

2006-03-07 Thread Paul Fox
  The depth isn't really the issue.  It is that they are created under one
  tree, and hardlinked to another tree.  The normal FS optimization of
  putting the inodes of files in a given directory near each other breaks
  down, and the directories in the pool end up with files of very diverse
  inodes.
  
  Just running a 'du' on my pool takes several seconds for each leaf
  directory, very heavily thrashing the drive.
  
  If you copy a backup pool, either with 'cp -a' or tar (something that will
  preserve the hardlinks), the result will either be the same, or the pool
  will be more efficient and the pc trees will be very inefficient.  It all
  depends on which tree the backup copies first.

to clarify -- in the normal case, where the backup data is
usually not read, but only written, the current filesystems are
okay, right?  it's only when you want to preserve or copy your
pool that there's an issue?  (or am i neglecting something?  i
might well be.)

if this is mostly true, then creating a better data copier might
be productive.  i thought there was work some time ago to allow
listing the files to be copied in inode order, using an external
tool that pre-processed the tree.  what happened with that?
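
a sketch of that pre-processing idea (my own illustration, not whatever tool
that earlier work produced): stat everything first, then read the files back
in inode order so the disk isn't seeking all over the place.  note that this
does not preserve hard links, so it is only half of a real pool copier:

    import os, shutil, sys

    def copy_in_inode_order(src, dst):
        entries = []
        for root, _dirs, files in os.walk(src):
            for name in files:
                path = os.path.join(root, name)
                entries.append((os.lstat(path).st_ino, path))
        entries.sort()                        # read roughly in on-disk order
        for _ino, path in entries:
            target = os.path.join(dst, os.path.relpath(path, src))
            os.makedirs(os.path.dirname(target), exist_ok=True)
            shutil.copy2(path, target)

    if __name__ == "__main__":
        copy_in_inode_order(sys.argv[1], sys.argv[2])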

  I still say it is going to be a lot easier to change how backuppc works
  than it is going to be to find a filesystem that will deal with this very
  unusual use case well.

but having the backup pools exist in the native filesystem in a
(relatively) transparent way is a huge part of backuppc's
attraction.

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 36.9 degrees)




Re: [BackupPC-users] filesystem benchmark results

2006-03-07 Thread Les Mikesell
On Tue, 2006-03-07 at 11:10, David Brown wrote:

 The depth isn't really the issue.  It is that they are created under one
 tree, and hardlinked to another tree.  The normal FS optimization of
 putting the inodes of files in a given directory near each other breaks
 down, and the directories in the pool end up with files of very diverse
 inodes.
 
 Just running a 'du' on my pool takes several seconds for each leaf
 directory, very heavily thrashing the drive.

If it hurts, don't do it.  The only operation in backuppc that
traverses directories is the nightly run to remove the expired
links and it only has to go through the pool. Most operations
look things up by name.

 I still say it is going to be a lot easier to change how backuppc works
 than it is going to be to find a filesystem that will deal with this very
 unusual use case well.

All you'll do by trying is lose the atomic nature of the hardlinks.
You aren't ever going to have the data at the same time you know all
of its names, so you can't store them close together.  Just throw in
lots of RAM and let caching do the best it can.

-- 
  Les Mikesell
   [EMAIL PROTECTED]






Re: [BackupPC-users] filesystem benchmark results

2006-03-07 Thread David Brown
On Tue, Mar 07, 2006 at 11:49:40AM -0600, Les Mikesell wrote:

  I still say it is going to be a lot easier to change how backuppc works
  than it is going to be to find a filesystem that will deal with this very
  unusual use case well.
 
 All you'll do by trying is lose the atomic nature of the hardlinks.
 You aren't ever going have the data at the same time you know all
 of it's names so you can store them close together.  Just throw in
 lots of ram and let caching do the best it can.

Any reasonable SQL database would do this very well.  Doing operations
atomically is fundamental, and indexing diversely added data is an
important feature.

The caching doesn't generally help at all, because the nodes are only
touched once, and that is very out of order.

Dave




Re: [BackupPC-users] filesystem benchmark results

2006-03-07 Thread Les Mikesell
On Tue, 2006-03-07 at 11:55, David Brown wrote:
  
  All you'll do by trying is lose the atomic nature of the hardlinks.
  You aren't ever going have the data at the same time you know all
  of it's names so you can store them close together.  Just throw in
  lots of ram and let caching do the best it can.
 
 Any reasonable SQL database would do this very well.  Doing operations
 atomically is fundamental, and indexing diversely added data is an
 important feature.

The piece that has to be atomic has to do with the actual pool
file, so unless you move the data into the database as well
you can't atomically manage the links or ever be sure that
they are actually correct.  And if you move the data in, I
suspect you'll find that databases aren't as efficient as
you thought.

 The caching doesn't generally help at all, because the nodes are only
 touched once, and that is very out of order.

Add enough RAM to hold the pool inodes.  That's what your
SQL vendor is going to say about the database too.

-- 
  Les Mikesell
   [EMAIL PROTECTED]






Re: [BackupPC-users] filesystem benchmark results

2006-03-07 Thread Paul Fox
   okay, right?  it's only when you want to preserve or copy your
   pool that there's an issue?  (or am i neglecting something?  i
   might well be.)
  
  Even just the normal process of looking at the pool, either to see if a
  file is present, or as part of the cleanup scan is much slower.

noted.

  The pools wouldn't change.  The backup trees themselves are not really
  transparent, anyway.  The names are mangled, and the attributes are stored
  in an attribute file.  I would suspect that people browse backups using the
  web interface more than they try to glean anything from the 'pc'
  directories.

but when one just wants to look at a file, you _can_ just cd there. 

  If someone really wanted to, they could write a fuse plugin that would
  present the backup directory as a real tree, complete with attributes, and
  visible at any particular time.  This would be a useful browsing method.

this is a good idea, in any case.

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 36.7 degrees)




Re: [BackupPC-users] filesystem benchmark results

2006-03-07 Thread David Brown
On Tue, Mar 07, 2006 at 12:15:50PM -0600, Les Mikesell wrote:

 The piece that has to be atomic has to do with the actual pool
 file, so unless you move the data into the database as well
 you can't atomically manage the links or ever be sure that
 they are actually correct.  And if you move the data in, I
 suspect you'll find that databases aren't as efficient as
 you thought.

There are no links.  Each file has one entry, under the pool directory.
All that has to be managed is the creation and deletion of these files.  It
is not difficult to recover from a crash in either of these scenarios, as
long as the ordering is done right.

  The caching doesn't generally help at all, because the nodes are only
  touched once, and that is very out of order.
 
 Add enough RAM to hold the pool inodes.  That's what your
 SQL vendor is going to say about the database too.

It doesn't help.  I have plenty of RAM.  The problem is that over the
course of the day the pool inodes leave the cache.  Then, on the next pool
scan, they get fetched in a very out-of-order fashion.

Dave




RE: [BackupPC-users] filesystem benchmark results

2006-03-07 Thread Brown, Wade ASL (GE Healthcare)
 
I agree.   It is sometimes nice to be able to step down the client's
tree and look for a specific file.

So, what's the drawback of using a database to manage the tree?
Obviously, you only have a single hash tree that contains all backups
and you wouldn't be able to browse it for a specific file.  I suppose it
would also require additional tools to browse, pull and clean.

- Wade





-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Paul
Fox
Sent: Tuesday, March 07, 2006 12:37 PM
To: backuppc-users@lists.sourceforge.net
Subject: Re: [BackupPC-users] filesystem benchmark results 

   okay, right?  it's only when you want to preserve or copy your
   pool that there's an issue?  (or am i neglecting something?  i
   might well be.)
  
  Even just the normal process of looking at the pool, either to see if a
  file is present, or as part of the cleanup scan is much slower.

noted.

  The pools wouldn't change.  The backup trees themselves are not really
  transparent, anyway.  The names are mangled, and the attributes are stored
  in an attribute file.  I would suspect that people browse backups using the
  web interface more than they try to glean anything from the 'pc'
  directories.

but when one just wants to look at a file, you _can_ just cd there.

  If someone really wanted to, they could write a fuse plugin that would
  present the backup directory as a real tree, complete with attributes, and
  visible at any particular time.  This would be a useful browsing method.

this is a good idea, in any case.

paul
=-
 paul fox, [EMAIL PROTECTED] (arlington, ma, where it's 36.7 degrees)







Re: [BackupPC-users] filesystem benchmark results

2006-03-07 Thread Les Mikesell
On Tue, 2006-03-07 at 12:46, David Brown wrote:

 There are no links.  Each file has one entry, under the pool directory.
 All that has to be managed is creation and deletion of these files.  It is
 not difficult to be able to easily recover from a crash in either of these
 scenarios, as long as ordering is done right.

But you need database entries pointing to these files, and the
database-entry and file-creation steps must be coordinated, along
with removal of the files themselves when the last database entry
referencing them expires.  If you crash during any of these steps,
how do you find the incomplete part?

  Add enough RAM to hold the pool inodes.  That's what your
  SQL vendor is going to say about the database too.
 
 It doesn't help.  I have plenty of RAM.  The problem is that over the
 course of the day, the pool nodes leave the cache.  Then, next pool scan,
 they get fetched, in a very out-of-order fashion.

Hmmm... maybe it would pay to have a periodic cron job to stat()
the pool files just to keep those inodes in cache. 
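
Something along these lines run from cron would do it (a sketch; the pool
path is a guess, and whether the inodes actually stay resident depends on
RAM and whatever else is competing for the cache):

    #!/usr/bin/env python
    # Walk the pool and stat() every file so the inodes get pulled into
    # (and hopefully stay in) the kernel's cache.
    import os

    POOL = "/var/lib/backuppc/cpool"      # adjust to the local pool location

    count = 0
    for root, _dirs, files in os.walk(POOL):
        for name in files:
            try:
                os.lstat(os.path.join(root, name))
                count += 1
            except OSError:               # file expired mid-walk; ignore
                pass
    print(count, "pool files stat()ed")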

-- 
  Les Mikesell
[EMAIL PROTECTED]



