Re: [BackupPC-users] backuppc 3.0.0: another xfs problem?

2008-12-23 Thread Chris Robertson
thomat...@gmail.com wrote:
> How dangerous is it to run xfs without write barriers?

http://oss.sgi.com/projects/xfs/faq.html#nulls

As long as your computer shuts down properly, sends a flush to the 
drives, and the drives manage to clear their on-board cache before power 
is removed or the chip set is reset, it's not dangerous at all.  :o)
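
If you do run without barriers, one quick sanity check is whether the drives'
volatile write cache is even enabled (nothing cached, nothing to lose).  A
rough sketch with hdparm, assuming a plain SATA disk at /dev/sda (hypothetical
device; disks hidden behind a hardware RAID controller may not answer this):

   # show whether the drive's volatile write cache is enabled
   hdparm -W /dev/sda
   # turn the write cache off if you must run nobarrier without battery backup
   hdparm -W0 /dev/sda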

Here's a thread from SGI's XFS mailing list from before XFS on Linux had 
barrier support:
http://oss.sgi.com/archives/xfs/2005-06/msg00149.html

Here's an informative thread on LKML:
http://lkml.org/lkml/2006/5/19/33
An analysis of the performance hit due to barriers (and a fairly vague 
suggestion for a solution) can be found at:
http://lkml.org/lkml/2006/5/22/278

The executive summary is that you can use xfs_db to change the log 
(journal) to version 2, which allows larger buffers, which reduces "the 
impact the barriers have (fewer, larger log IOs, so fewer barriers)."
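
For what it's worth, a rough sketch of that conversion (the device name is
made up, and the filesystem has to be unmounted while xfs_db writes to it):

   umount /var/lib/backuppc
   # flip the superblock feature bit for the version 2 log
   xfs_db -x -c "version log2" /dev/sdb1
   # remount with more, larger in-memory log buffers (256k needs a v2 log)
   mount -o noatime,logbufs=8,logbsize=256k /dev/sdb1 /var/lib/backuppc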

Chris

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc 3.0.0: another xfs problem?

2008-12-22 Thread thomathom
How dangerous is it to run xfs without write barriers?

On 12/22/08, Chris Robertson  wrote:
> dan wrote:
>> I guess that updatedb thing reinforces my arguement about not seeing
>> any mixed load tests.  ext3 handles these situations pretty good,
>> maybe XFS does not...
>
> Write barriers really harmed XFS performance on my setup (16 Seagate
> ES.2 spindles attached to an Adaptec 51645 utilizing hardware RAID6).
> iostat was showing a peak of 400 tps with barriers.  Mounting nobarrrier
> raised that limit to to over 20,000.  Obviously your mileage may vary.
> Two interesting data points to note, it appears that LVM doesn't support
> barriers
> (http://hightechsorcery.com/2008/06/linux-write-barriers-write-caching-lvm-and-filesystems),
> and ext3 (and ext4) don't use barriers by default
> (http://lwn.net/Articles/282958/).  The design allows for this without
> as much risk as might be expected (http://lwn.net/Articles/283168/).
>
> Back to XFS, allocation groups, unless specified at file system creation
> are calculated on file system size, and can have a great effect on
> performance when multiple threads are contenting for FS access.
> Changing the journal size can also have an effect on performance, but
> again, this is only possible at creation.  Changing the number  and size
> of log buffers is a mount time modification, and might also have a
> decent effect on performance.  The kernel documentation has more
> information on this.
>
>>
>> By the way, I read that EXT4 should allow for EXT3>EXT4 upgrades.
>
> Same thing for btrfs.  Neat stuff.
>
>>   One(of many) nice things about EXT4 is delayed writes which
>> essentially means write re-ordering to mask/reduce I/O bottlenecks.
>
> Which XFS already has...  But they are affected by write barriers.
>
>>   Hopefully EXT4 will become stable pretty soon!
>
> Agreed.  As a side note, development on Reiser4 is ongoing:
> http://marc.info/?l=reiserfs-devel&r=1&w=2
>
> Finally, if you are sleeping too well at night because you think your
> data is safe, I have a couple of papers your might be interested in that
> I stumbled across while fact checking:
>
> Model-Based Failure Analysis of Journaling File Systems (covers ext3,
> Reiserfs, and JFS):
> http://www.cs.wisc.edu/adsl/Publications/sfa-dsn05.pdf
>
> Failure Analysis of SGI XFS File System:
> http://pages.cs.wisc.edu/~vshree/xfs.pdf
>
> Chris
>


-- 
http://resc.smugmug.com/

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc 3.0.0: another xfs problem?

2008-12-22 Thread dan
I'm pretty excited (as excited as one gets about filesystems) about the
next-generation filesystems that are progressing.  btrfs looks to be a very
good concept and I hope that translates into a very good filesystem.

I'm glad to hear that reiserfs4 is continuing, though I think they may want
to change the name and distance the project from Hans' name.  I really
believe that reiserfs3 was the best of the current generation of filesystems.
I used to use gentoo when I jumped ship from redhat and rpm hell.  reiserfs
was always vastly superior on gentoo, as its small-file handling is very
good and gentoo/compiling from source benefited from that.  I have not
really tried reiserfs3 with backuppc, as I have been on ext3 and am happy
enough that ext3 is reliable and fast enough.



On Mon, Dec 22, 2008 at 4:14 PM, Chris Robertson  wrote:

> dan wrote:
> > I guess that updatedb thing reinforces my arguement about not seeing
> > any mixed load tests.  ext3 handles these situations pretty good,
> > maybe XFS does not...
>
> Write barriers really harmed XFS performance on my setup (16 Seagate
> ES.2 spindles attached to an Adaptec 51645 utilizing hardware RAID6).
> iostat was showing a peak of 400 tps with barriers.  Mounting nobarrrier
> raised that limit to to over 20,000.  Obviously your mileage may vary.
> Two interesting data points to note, it appears that LVM doesn't support
> barriers
> (
> http://hightechsorcery.com/2008/06/linux-write-barriers-write-caching-lvm-and-filesystems
> ),
> and ext3 (and ext4) don't use barriers by default
> (http://lwn.net/Articles/282958/).  The design allows for this without
> as much risk as might be expected (http://lwn.net/Articles/283168/).
>
> Back to XFS, allocation groups, unless specified at file system creation
> are calculated on file system size, and can have a great effect on
> performance when multiple threads are contenting for FS access.
> Changing the journal size can also have an effect on performance, but
> again, this is only possible at creation.  Changing the number  and size
> of log buffers is a mount time modification, and might also have a
> decent effect on performance.  The kernel documentation has more
> information on this.
>
> >
> > By the way, I read that EXT4 should allow for EXT3>EXT4 upgrades.
>
> Same thing for btrfs.  Neat stuff.
>
> >   One(of many) nice things about EXT4 is delayed writes which
> > essentially means write re-ordering to mask/reduce I/O bottlenecks.
>
> Which XFS already has...  But they are affected by write barriers.
>
> >   Hopefully EXT4 will become stable pretty soon!
>
> Agreed.  As a side note, development on Reiser4 is ongoing:
> http://marc.info/?l=reiserfs-devel&r=1&w=2
>
> Finally, if you are sleeping too well at night because you think your
> data is safe, I have a couple of papers your might be interested in that
> I stumbled across while fact checking:
>
> Model-Based Failure Analysis of Journaling File Systems (covers ext3,
> Reiserfs, and JFS):
> http://www.cs.wisc.edu/adsl/Publications/sfa-dsn05.pdf
>
> Failure Analysis of SGI XFS File System:
> http://pages.cs.wisc.edu/~vshree/xfs.pdf
>
> Chris
>
>
--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc 3.0.0: another xfs problem?

2008-12-22 Thread Chris Robertson
dan wrote:
> I guess that updatedb thing reinforces my arguement about not seeing 
> any mixed load tests.  ext3 handles these situations pretty good, 
> maybe XFS does not... 

Write barriers really harmed XFS performance on my setup (16 Seagate 
ES.2 spindles attached to an Adaptec 51645 utilizing hardware RAID6).  
iostat was showing a peak of 400 tps with barriers.  Mounting with nobarrier 
raised that limit to over 20,000.  Obviously your mileage may vary.  
Two interesting data points to note: it appears that LVM doesn't support 
barriers 
(http://hightechsorcery.com/2008/06/linux-write-barriers-write-caching-lvm-and-filesystems), 
and ext3 (and ext4) don't use barriers by default 
(http://lwn.net/Articles/282958/).  The design allows for this without 
as much risk as might be expected (http://lwn.net/Articles/283168/).

Back to XFS: the number of allocation groups, unless specified at file system 
creation, is calculated from the file system size, and can have a great 
effect on performance when multiple threads are contending for FS access.  
Changing the journal size can also affect performance, but again, this is 
only possible at creation.  Changing the number and size of log buffers is a 
mount-time modification, and might also have a decent effect on performance.  
The kernel documentation has more information on this.
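
To make the creation-time vs. mount-time split concrete, a hypothetical
invocation (device name and numbers are invented, not a recommendation):

   # creation-time knobs: allocation group count, journal (log) size and version
   mkfs.xfs -f -d agcount=16 -l size=128m,version=2 /dev/sdb1
   # mount-time knobs: number and size of the in-memory log buffers
   mount -o noatime,logbufs=8,logbsize=256k /dev/sdb1 /var/lib/backuppc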

>
> By the way, I read that EXT4 should allow for EXT3>EXT4 upgrades.

Same thing for btrfs.  Neat stuff.

>   One(of many) nice things about EXT4 is delayed writes which 
> essentially means write re-ordering to mask/reduce I/O bottlenecks.

Which XFS already has...  But they are affected by write barriers.

>   Hopefully EXT4 will become stable pretty soon!

Agreed.  As a side note, development on Reiser4 is ongoing: 
http://marc.info/?l=reiserfs-devel&r=1&w=2

Finally, if you are sleeping too well at night because you think your 
data is safe, I have a couple of papers you might be interested in that 
I stumbled across while fact checking:

Model-Based Failure Analysis of Journaling File Systems (covers ext3, 
Reiserfs, and JFS):
http://www.cs.wisc.edu/adsl/Publications/sfa-dsn05.pdf

Failure Analysis of SGI XFS File System:
http://pages.cs.wisc.edu/~vshree/xfs.pdf

Chris

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc 3.0.0: another xfs problem?

2008-12-20 Thread dan
I guess that updatedb thing reinforces my argument about not seeing any
mixed-load tests.  ext3 handles these situations pretty well; maybe XFS does
not...

By the way, I read that EXT4 should allow for EXT3 -> EXT4 upgrades.  One (of
many) nice things about EXT4 is delayed writes, which essentially means write
re-ordering to mask/reduce I/O bottlenecks.  Hopefully EXT4 will become
stable pretty soon!



On Sat, Dec 20, 2008 at 8:31 PM, Thomas Smith  wrote:

> Hi,
> The server seems to be at a good level of performance now (1 hour and
> 45 minutes), thank you all for your help!
>
> Retrospective, for people coming across this thread later and wanting
> to fix backuppc xfs performance problems:
>
> To fix this problem, I set the noatime and nodiratime options on the
> filesystem (I modified fstab and ran "mount -o
> remount,noatime,nodiratime /var/lib/backuppc").  This cut the time in
> half.  I also noticed that updatedb was indexing the filesystem, and
> stopping this cut the time by a further 80%.  Updatedb had been
> running under ext3, too, but hadn't slowed the FS down anywhere near
> as much.  I switched to XFS because I thought I would be making
> archives a lot, then deleting them, and it took about 45 minutes to
> delete some of these files on ext3 (some of them are 400GB or so).
> But it turns out that I'm not doing this anyway, so if I had the
> chance I would switch back, since under the rest of the load
> conditions of backuppc, ext3 clearly performs better for me.
> Unfortunately, it takes 2 or 3 days to do this switch, so it might not
> happen for a while.
>
> Thanks again!
> -Thomas
>
> On Sat, Dec 20, 2008 at 2:47 PM, dan  wrote:
> > true enough.
> >
> > I have been doing a lot of expirimentation with opensolaris and zfs for
> > backuppc.  It is a bit of a pain getting backuppc working on opensolaris,
> > specifically CPAN stuff.
> >
> > I am still in testing  but ZFS seems to be an ideal filesystem for
> > backuppc.  SUN claims that it is essentially bulletproof.  I want to see
> for
> > myself.  I am now running this test setup on 10 seagate sata 7200.11
> disks
> > in a single pool.  I turn on light compression using lzof to maintain
> > performance.  This setup performs very very well and is in a raidz2 which
> is
> > similar to raid6.
> >
> > I still run ext3 on my primary backup server and wont be changeing that
> > unless ZFS works out or ext4,btfs,tux3 get to market and are stable.
> >
> > On Sat, Dec 20, 2008 at 12:10 PM, Tino Schwarze 
> > wrote:
> >>
> >> On Fri, Dec 19, 2008 at 10:13:08AM -0700, dan wrote:
> >>
> >> > If the disk usage is the same as before the pool, the issue isnt
> >> > hardlinks
> >> > not being maintained.  I am not convinced that XFS is an ideal
> >> > filesystem.
> >> > I'm sure it has it's merits, but I have lost data on 3 filesystems
> ever,
> >> > FAT*, XFS and NTFS.  I have never lost data on reiserfs3 or ext2,3.
> >>
> >> I've lost data on reiserfs, but it's been a while ago. I've been using
> >> XFS for my BackupPC pool for about 2 years now and it's performance is
> >> okay (the pool used to be reiserfs). Since I also changed hardware, I
> >> cannot compare 1:1. Perceived performance of XFS vs. reiserfs is about
> >> the same.
> >>
> >> > I say switch back to ext3.
> >>
> >> Which isn't that easy given a reasonably large BackuPC pool...
> >>
> >> Tino.
> >>
> >> --
> >> "What we nourish flourishes." - "Was wir nähren erblüht."
> >>
> >> www.lichtkreis-chemnitz.de
> >> www.craniosacralzentrum.de
> >>
> >>
> >>
> --
> >
> >
> >
> --
> >
> >
> >
>
>
>
> --
> http://resc.smugmug.com/
>
>
>
--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/

Re: [BackupPC-users] backuppc 3.0.0: another xfs problem?

2008-12-20 Thread Thomas Smith
Hi,
The server seems to be at a good level of performance now (1 hour and
45 minutes), thank you all for your help!

Retrospective, for people coming across this thread later and wanting
to fix backuppc xfs performance problems:

To fix this problem, I set the noatime and nodiratime options on the
filesystem (I modified fstab and ran "mount -o
remount,noatime,nodiratime /var/lib/backuppc").  This cut the time in
half.  I also noticed that updatedb was indexing the filesystem, and
stopping this cut the time by a further 80%.  Updatedb had been
running under ext3, too, but hadn't slowed the FS down anywhere near
as much.  I switched to XFS because I thought I would be making
archives a lot, then deleting them, and it took about 45 minutes to
delete some of these files on ext3 (some of them are 400GB or so).
But it turns out that I'm not doing this anyway, so if I had the
chance I would switch back, since under the rest of the load
conditions of backuppc, ext3 clearly performs better for me.
Unfortunately, it takes 2 or 3 days to do this switch, so it might not
happen for a while.
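
For reference, the fstab entry for the pool now looks roughly like this (the
device path will differ on your system):

   # /etc/fstab -- BackupPC pool without access-time updates
   /dev/mapper/vg0-backuppc  /var/lib/backuppc  xfs  noatime,nodiratime  0  0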

Thanks again!
-Thomas

On Sat, Dec 20, 2008 at 2:47 PM, dan  wrote:
> true enough.
>
> I have been doing a lot of expirimentation with opensolaris and zfs for
> backuppc.  It is a bit of a pain getting backuppc working on opensolaris,
> specifically CPAN stuff.
>
> I am still in testing  but ZFS seems to be an ideal filesystem for
> backuppc.  SUN claims that it is essentially bulletproof.  I want to see for
> myself.  I am now running this test setup on 10 seagate sata 7200.11 disks
> in a single pool.  I turn on light compression using lzof to maintain
> performance.  This setup performs very very well and is in a raidz2 which is
> similar to raid6.
>
> I still run ext3 on my primary backup server and wont be changeing that
> unless ZFS works out or ext4,btfs,tux3 get to market and are stable.
>
> On Sat, Dec 20, 2008 at 12:10 PM, Tino Schwarze 
> wrote:
>>
>> On Fri, Dec 19, 2008 at 10:13:08AM -0700, dan wrote:
>>
>> > If the disk usage is the same as before the pool, the issue isnt
>> > hardlinks
>> > not being maintained.  I am not convinced that XFS is an ideal
>> > filesystem.
>> > I'm sure it has it's merits, but I have lost data on 3 filesystems ever,
>> > FAT*, XFS and NTFS.  I have never lost data on reiserfs3 or ext2,3.
>>
>> I've lost data on reiserfs, but it's been a while ago. I've been using
>> XFS for my BackupPC pool for about 2 years now and it's performance is
>> okay (the pool used to be reiserfs). Since I also changed hardware, I
>> cannot compare 1:1. Perceived performance of XFS vs. reiserfs is about
>> the same.
>>
>> > I say switch back to ext3.
>>
>> Which isn't that easy given a reasonably large BackuPC pool...
>>
>> Tino.
>>
>> --
>> "What we nourish flourishes." - "Was wir nähren erblüht."
>>
>> www.lichtkreis-chemnitz.de
>> www.craniosacralzentrum.de
>>
>>



-- 
http://resc.smugmug.com/

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc 3.0.0: another xfs problem?

2008-12-20 Thread dan
true enough.

I have been doing a lot of experimentation with opensolaris and zfs for
backuppc.  It is a bit of a pain getting backuppc working on opensolaris,
specifically the CPAN stuff.

I am still testing, but ZFS seems to be an ideal filesystem for
backuppc.  Sun claims that it is essentially bulletproof; I want to see for
myself.  I am now running this test setup on 10 Seagate SATA 7200.11 disks
in a single pool.  I turn on light compression using lzof to maintain
performance.  This setup performs very well and is in a raidz2, which is
similar to RAID6.
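
Roughly, the pool setup amounts to the following (device names are
placeholders, and compression=on just selects ZFS's default lightweight
algorithm):

   # double-parity pool across the 10 disks
   zpool create backuppool raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
       c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0
   # light compression for the whole pool
   zfs set compression=on backuppool
   # a filesystem in the pool to hold BackupPC's TopDir
   zfs create backuppool/backuppc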

I still run ext3 on my primary backup server and won't be changing that
unless ZFS works out or ext4, btrfs, or tux3 get to market and are stable.

On Sat, Dec 20, 2008 at 12:10 PM, Tino Schwarze wrote:

> On Fri, Dec 19, 2008 at 10:13:08AM -0700, dan wrote:
>
> > If the disk usage is the same as before the pool, the issue isnt
> hardlinks
> > not being maintained.  I am not convinced that XFS is an ideal
> filesystem.
> > I'm sure it has it's merits, but I have lost data on 3 filesystems ever,
> > FAT*, XFS and NTFS.  I have never lost data on reiserfs3 or ext2,3.
>
> I've lost data on reiserfs, but it's been a while ago. I've been using
> XFS for my BackupPC pool for about 2 years now and it's performance is
> okay (the pool used to be reiserfs). Since I also changed hardware, I
> cannot compare 1:1. Perceived performance of XFS vs. reiserfs is about
> the same.
>
> > I say switch back to ext3.
>
> Which isn't that easy given a reasonably large BackuPC pool...
>
> Tino.
>
> --
> "What we nourish flourishes." - "Was wir nähren erblüht."
>
> www.lichtkreis-chemnitz.de
> www.craniosacralzentrum.de
>
>
--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc 3.0.0: another xfs problem?

2008-12-20 Thread Tino Schwarze
On Fri, Dec 19, 2008 at 10:13:08AM -0700, dan wrote:

> If the disk usage is the same as before the pool, the issue isnt hardlinks
> not being maintained.  I am not convinced that XFS is an ideal filesystem.
> I'm sure it has it's merits, but I have lost data on 3 filesystems ever,
> FAT*, XFS and NTFS.  I have never lost data on reiserfs3 or ext2,3.

I've lost data on reiserfs, but it was a while ago. I've been using
XFS for my BackupPC pool for about 2 years now and its performance is
okay (the pool used to be reiserfs). Since I also changed hardware, I
cannot compare 1:1. Perceived performance of XFS vs. reiserfs is about
the same.

> I say switch back to ext3.

Which isn't that easy given a reasonably large BackupPC pool...

Tino.

-- 
"What we nourish flourishes." - "Was wir nähren erblüht."

www.lichtkreis-chemnitz.de
www.craniosacralzentrum.de

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc 3.0.0: another xfs problem?

2008-12-20 Thread dan
I would suggest (on linux anyway) that you stick with ext3, unless it is
incapable of handling your pool data, until ext4 is marked stable.  Then look
at btrfs or tux3 and see what their roadmaps say.  ext3 is a good
filesystem: it is fast and reliable.  XFS and JFS are ports from other
systems and have not had the same attention given to them.  I would note
that you don't see threads that say "ext3 corrupted my data" or "ext3 too
slow" very often, but you do see this for XFS and JFS.

Last year I would have suggested reiserfs, because it is also a very good
filesystem (especially for the small files and deletes that backuppc needs),
but its unclear future steers me away from it.

On Sat, Dec 20, 2008 at 1:13 AM, Anand Gupta  wrote:

>
>
> On Sat, Dec 20, 2008 at 6:22 AM, Chris Robertson wrote:
>
>> dan wrote:
>> > If the disk usage is the same as before the pool, the issue isnt
>> > hardlinks not being maintained.  I am not convinced that XFS is an
>> > ideal filesystem.  I'm sure it has it's merits, but I have lost data
>> > on 3 filesystems ever, FAT*, XFS and NTFS.  I have never lost data on
>> > reiserfs3 or ext2,3.
>> >
>> > Additionally, I am not convinced that it performs any better than ext3
>> > in real world workloads.  I have see many comparisons showing XFS
>> > marginally faster in some operations, and much faster for file
>> > deletions and a few other things, but these are all simulated
>> > workloads and I have never seen a comparison running all of these
>> > various operations in mixed operation.  how about mixing 100MB random
>> > reads with 10MB sequential writes on small files and deleting 400
>> > hardlinks?
>> >
>> > I say switch back to ext3.
>>
>> Creating or resizing (you do a proper fsck before and after resizing,
>> don't you?) an ext3 filesystem greater than about 50GB is painful.  The
>> larger the filesystem, the more painful it gets.  Having to guess the
>> number of inodes you are going to need at filesystem creation is a nice
>> bonus.
>>
>> EXT4, btrfs, or Tux3 can't get here (and stable!) fast enough.
>>
>
> Has anyone tried JFS ? I have been using JFS in production for over a year
> now with several volumes of 2T+. I have found the performance satisfactory
> atleast for my needs. Besides once in a while when someone pulls the plug of
> a switch (the volumes serve iscsi volumes), we have to run fsck, which again
> is very fast and recovers without any problems. Just a thought.
>
> Thanks and Regards,
>
> Anand
>
>
>
--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc 3.0.0: another xfs problem?

2008-12-20 Thread Anand Gupta
On Sat, Dec 20, 2008 at 6:22 AM, Chris Robertson  wrote:

> dan wrote:
> > If the disk usage is the same as before the pool, the issue isnt
> > hardlinks not being maintained.  I am not convinced that XFS is an
> > ideal filesystem.  I'm sure it has it's merits, but I have lost data
> > on 3 filesystems ever, FAT*, XFS and NTFS.  I have never lost data on
> > reiserfs3 or ext2,3.
> >
> > Additionally, I am not convinced that it performs any better than ext3
> > in real world workloads.  I have see many comparisons showing XFS
> > marginally faster in some operations, and much faster for file
> > deletions and a few other things, but these are all simulated
> > workloads and I have never seen a comparison running all of these
> > various operations in mixed operation.  how about mixing 100MB random
> > reads with 10MB sequential writes on small files and deleting 400
> > hardlinks?
> >
> > I say switch back to ext3.
>
> Creating or resizing (you do a proper fsck before and after resizing,
> don't you?) an ext3 filesystem greater than about 50GB is painful.  The
> larger the filesystem, the more painful it gets.  Having to guess the
> number of inodes you are going to need at filesystem creation is a nice
> bonus.
>
> EXT4, btrfs, or Tux3 can't get here (and stable!) fast enough.
>

Has anyone tried JFS? I have been using JFS in production for over a year
now with several volumes of 2T+. I have found the performance satisfactory,
at least for my needs. Besides, once in a while when someone pulls the plug
on a switch (the volumes serve iSCSI volumes), we have to run fsck, which
again is very fast and recovers without any problems. Just a thought.

Thanks and Regards,

Anand
--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc 3.0.0: another xfs problem?

2008-12-19 Thread Chris Robertson
dan wrote:
> If the disk usage is the same as before the pool, the issue isnt 
> hardlinks not being maintained.  I am not convinced that XFS is an 
> ideal filesystem.  I'm sure it has it's merits, but I have lost data 
> on 3 filesystems ever, FAT*, XFS and NTFS.  I have never lost data on 
> reiserfs3 or ext2,3. 
>
> Additionally, I am not convinced that it performs any better than ext3 
> in real world workloads.  I have see many comparisons showing XFS 
> marginally faster in some operations, and much faster for file 
> deletions and a few other things, but these are all simulated 
> workloads and I have never seen a comparison running all of these 
> various operations in mixed operation.  how about mixing 100MB random 
> reads with 10MB sequential writes on small files and deleting 400 
> hardlinks?
>
> I say switch back to ext3.

Creating or resizing (you do a proper fsck before and after resizing, 
don't you?) an ext3 filesystem greater than about 50GB is painful.  The 
larger the filesystem, the more painful it gets.  Having to guess the 
number of inodes you are going to need at filesystem creation is a nice 
bonus.
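
As a sketch of what that guessing game looks like (device name and the
bytes-per-inode figure are invented, not recommendations):

   # one inode per 4KB of space, since a BackupPC pool is millions of small files
   mkfs.ext3 -i 4096 -L backuppc /dev/vg0/backuppc
   # growing it later means a full fsck on either side of the resize
   e2fsck -f /dev/vg0/backuppc
   resize2fs /dev/vg0/backuppc
   e2fsck -f /dev/vg0/backuppc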

EXT4, btrfs, or Tux3 can't get here (and stable!) fast enough.

Chris

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc 3.0.0: another xfs problem?

2008-12-19 Thread dan
If the disk usage is the same as before for the pool, the issue isn't
hardlinks not being maintained.  I am not convinced that XFS is an ideal
filesystem.  I'm sure it has its merits, but I have lost data on 3
filesystems ever: FAT*, XFS and NTFS.  I have never lost data on reiserfs3
or ext2/3.

Additionally, I am not convinced that it performs any better than ext3 in
real-world workloads.  I have seen many comparisons showing XFS marginally
faster in some operations, and much faster for file deletions and a few
other things, but these are all simulated workloads and I have never seen a
comparison running all of these various operations mixed together.  How
about mixing 100MB random reads with 10MB sequential writes on small files
while deleting 400 hardlinks?

I say switch back to ext3.
--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc 3.0.0: another xfs problem?

2008-12-19 Thread Thomas Smith
Hi again!

I checked and most of the files have lots of hard links, so that's good.

Today the nightly run only took 10 hours, down from 14, down from 22.
Other than the filesystem tweaks (noatime), is there any reason for
the nightly cleanups to take less and less time?

Thanks!  This is getting more reasonable.
-Thomas

On Thu, Dec 18, 2008 at 10:15 PM, Thomas Smith  wrote:
> Hi everyone, thanks for the help!
>
> Today around noon I remounted the backup disk with noatime, and then
> it only took another three hours, rather than another 10, which is
> exciting.  I just remounted with nodiratime so we'll see if that makes
> any difference in tonight's run.
>
> I'm pretty sure that the hard links were not exploded into multiple
> files, because there's several terabytes of overlapped data between
> computers, but the pool is much smaller.  Of course, this doesn't
> preclude some other brokenness in the backup.  To make the copy of the
> pool, I stopped backuppc (/etc/init.d/backuppc stop), then made an LVM
> snapshot of the backup logical volume, then made a tarfile of that
> (with GNU tar, which handles hard links!).
>
> The XFS filesystem is only 1/2 full, so that shouldn't be a problem.
> The computer has 2G of memory, and yeah, there were some backups
> running at the time.  I killed those today, so that might also have
> helped it finish faster.  There were no errors in the backuppc log,
> though there were some backups happening---not long-running ones,
> though, I think.  Nothing else happens on this computer, which is a
> Core 2 Duo machine with three SATA-connected 750GB 7200RPM disks, with
> the backup volume a 3-way stripe LVM volume with 4MB physical extents.
>
> One weird thing about the files I back up is that a lot of them are
> very similar PGM images, which don't differ in the first and last 128K
> or whatever, so there are some very long hash chains: "Pool hashing
> gives 523 repeated files with longest chain 182".  I don't think that
> this would cause hours and hours of problems, though.
>
> If the problem is out-of-order inodes, is there some sort of XFS
> tuning program that can optimize this?
>
> If it gets down to a more manageable time, I'll probably just split
> the nightly clean across several days, as you say.
>
> Thank you again!
>
> -Thomas
>
> On Thu, Dec 18, 2008 at 8:33 PM, Holger Parplies  wrote:
>> Hi,
>>
>> Adam Goryachev wrote on 2008-12-19 10:56:44 +1100 [Re: [BackupPC-users] 
>> backuppc 3.0.0: another xfs problem?]:
>>> Jeffrey J. Kosowsky wrote:
>>> >
>>> > I don't think that BackupPC_nightly checks for hard link dups between
>>> > the pc/ and pool/ directories.
>>
>> I fully agree on that point.
>>
>>> I would advise that you confirm whether or not your hard links were
>>> restored properly:
>>> cd /var/lib/backuppc/pool/3/3/3
>>> for file in `ls`
>>
>> Don't you trust shell globbing? ;-)
>>
>>> do
>>>   stat $file|grep Links|awk '{print $5" "$6}'
>>> done
>>
>> You mean
>>
>>cd /var/lib/backuppc/cpool/3/3/3
>>perl -e 'foreach (<*>) { $l {(stat $_) [3]} ++; }
>> foreach (sort {$a <=> $b} keys %l) {
>> print "$l{$_} files have $_ links\n";
>> }'
>>
>> ? ;-)
>>
>> Actually, if your hard links have *not* been restored correctly, your 700GB
>> tar file will have been unpacked to occupy significantly more space on your
>> destination device (at least twice the amount, checking *before the first run
>> of BackupPC_nightly*). I would almost be surprised if a 700GB pool was, in
>> fact, restored correctly - see all of the "copying the pool" discussions for
>> details.
>>
>>> The other possibility is that xfs is that much slower on your hardware,
>>> with your mount options, etc... Perhaps look at the backuppc wiki for
>>> some suggestions on improving performance on xfs.
>>
>> I can think of one additional point to note. Your files will have been 
>> created
>> in a different order than before (probably the first instance of each inode
>> encountered is "beneath" a different directory than it was on your old pool).
>> If a directory references inodes scattered all over the disk and all these
>> inodes need to be read (such as for determining their link count), this will
>> be significantly slower than if the inodes are all neatly stored near each
> other. Even reading the inodes in numerical order (instead of by appearance
> in the directory) speeds up things (IO::Dirent optimization). [...]

Re: [BackupPC-users] backuppc 3.0.0: another xfs problem?

2008-12-18 Thread Jeffrey J. Kosowsky
Holger Parplies wrote at about 02:33:44 +0100 on Friday, December 19, 2008:
 > Hi,
 > 
 > Adam Goryachev wrote on 2008-12-19 10:56:44 +1100 [Re: [BackupPC-users] 
 > backuppc 3.0.0: another xfs problem?]:
 > > I would advise that you confirm whether or not your hard links were
 > > restored properly:
 > > cd /var/lib/backuppc/pool/3/3/3
 > > for file in `ls`
 > 
 > Don't you trust shell globbing? ;-)
 > 
 > > do
 > >stat $file|grep Links|awk '{print $5" "$6}'
 > > done
 > 
 > You mean
 > 
 > cd /var/lib/backuppc/cpool/3/3/3
 > perl -e 'foreach (<*>) { $l {(stat $_) [3]} ++; }
 >  foreach (sort {$a <=> $b} keys %l) {
 >  print "$l{$_} files have $_ links\n";
 >  }'
 > 
 > ? ;-)

Although I am in awe of Perl as much as the next guy, I prefer the
following to quickly check for files with only 1 link:
find /var/lib/backuppc/cpool/3/3/3/ -type f -links 1
;-)

Now Perl shines if you want to tabulate but that was more than
the OP was trying to do ;)

Although... the following works nicely too (and uses less system &
user time on my machine):

   find /var/lib/backuppc/cpool/3/3/3 -type f -printf "%n\n" \
       | gsl-histogram 1 1000 | awk '{if ($3 > 0) print $1" "$3}'

where gsl-histogram is a GNU program that happens to be lying around on
my machine.

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc 3.0.0: another xfs problem?

2008-12-18 Thread Thomas Smith
Hi everyone, thanks for the help!

Today around noon I remounted the backup disk with noatime, and then
it only took another three hours, rather than another 10, which is
exciting.  I just remounted with nodiratime so we'll see if that makes
any difference in tonight's run.

I'm pretty sure that the hard links were not exploded into multiple
files, because there are several terabytes of overlapping data between
computers, but the pool is much smaller.  Of course, this doesn't
preclude some other brokenness in the backup.  To make the copy of the
pool, I stopped backuppc (/etc/init.d/backuppc stop), then made an LVM
snapshot of the backup logical volume, then made a tarfile of that
(with GNU tar, which handles hard links!).
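
In commands, that backup-of-the-backup was roughly the following (volume
names, snapshot size and destination path are from memory, so treat it as a
sketch):

   /etc/init.d/backuppc stop
   # snapshot the pool's logical volume so the copy is consistent
   lvcreate -s -n backuppc-snap -L 20G /dev/vg0/backuppc
   mount -o ro /dev/vg0/backuppc-snap /mnt/snap
   # GNU tar preserves the hard links inside the archive
   tar -cf /mnt/otherserver/backuppc-pool.tar -C /mnt/snap .
   umount /mnt/snap
   lvremove -f /dev/vg0/backuppc-snap
   /etc/init.d/backuppc start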

The XFS filesystem is only 1/2 full, so that shouldn't be a problem.
The computer has 2G of memory, and yeah, there were some backups
running at the time.  I killed those today, so that might also have
helped it finish faster.  There were no errors in the backuppc log,
though there were some backups happening---not long-running ones,
though, I think.  Nothing else happens on this computer, which is a
Core 2 Duo machine with three SATA-connected 750GB 7200RPM disks, with
the backup volume a 3-way stripe LVM volume with 4MB physical extents.

One weird thing about the files I back up is that a lot of them are
very similar PGM images, which don't differ in the first and last 128K
or whatever, so there are some very long hash chains: "Pool hashing
gives 523 repeated files with longest chain 182".  I don't think that
this would cause hours and hours of problems, though.

If the problem is out-of-order inodes, is there some sort of XFS
tuning program that can optimize this?

If it gets down to a more manageable time, I'll probably just split
the nightly clean across several days, as you say.

Thank you again!

-Thomas

On Thu, Dec 18, 2008 at 8:33 PM, Holger Parplies  wrote:
> Hi,
>
> Adam Goryachev wrote on 2008-12-19 10:56:44 +1100 [Re: [BackupPC-users] 
> backuppc 3.0.0: another xfs problem?]:
>> Jeffrey J. Kosowsky wrote:
>> >
>> > I don't think that BackupPC_nightly checks for hard link dups between
>> > the pc/ and pool/ directories.
>
> I fully agree on that point.
>
>> I would advise that you confirm whether or not your hard links were
>> restored properly:
>> cd /var/lib/backuppc/pool/3/3/3
>> for file in `ls`
>
> Don't you trust shell globbing? ;-)
>
>> do
>>   stat $file|grep Links|awk '{print $5" "$6}'
>> done
>
> You mean
>
>cd /var/lib/backuppc/cpool/3/3/3
>perl -e 'foreach (<*>) { $l {(stat $_) [3]} ++; }
> foreach (sort {$a <=> $b} keys %l) {
> print "$l{$_} files have $_ links\n";
> }'
>
> ? ;-)
>
> Actually, if your hard links have *not* been restored correctly, your 700GB
> tar file will have been unpacked to occupy significantly more space on your
> destination device (at least twice the amount, checking *before the first run
> of BackupPC_nightly*). I would almost be surprised if a 700GB pool was, in
> fact, restored correctly - see all of the "copying the pool" discussions for
> details.
>
>> The other possibility is that xfs is that much slower on your hardware,
>> with your mount options, etc... Perhaps look at the backuppc wiki for
>> some suggestions on improving performance on xfs.
>
> I can think of one additional point to note. Your files will have been created
> in a different order than before (probably the first instance of each inode
> encountered is "beneath" a different directory than it was on your old pool).
> If a directory references inodes scattered all over the disk and all these
> inodes need to be read (such as for determining their link count), this will
> be significantly slower than if the inodes are all neatly stored near each
> other. Even reading the inodes in numerical order (instead of by appearance
> in the directory) speeds up things (IO::Dirent optimization).
> So, while your source pool was *not* in the ideal condition (for
> BackupPC_nightly) of having the files stored neatly near each other (with
> respect to appearance in the pool directories), your current pool may (or may
> not) behave significantly worse - though I doubt we're talking of a factor of
> 22 here. It might be a combination of several things.
>
> - How much memory does your BackupPC server have? How much of that is
>  available as disk cache?
> - What happens between 01:00:01 and 23:11:42 (the "[...]" in your quote from
>  the log file)? Are there backups running during this time? Many? Any error
>  messages?
> - Is the machine doing anything significant that is unrelated to BackupPC
>   during this time? [...]

Re: [BackupPC-users] backuppc 3.0.0: another xfs problem?

2008-12-18 Thread Holger Parplies
Hi,

Adam Goryachev wrote on 2008-12-19 10:56:44 +1100 [Re: [BackupPC-users] 
backuppc 3.0.0: another xfs problem?]:
> Jeffrey J. Kosowsky wrote:
> >
> > I don't think that BackupPC_nightly checks for hard link dups between
> > the pc/ and pool/ directories.

I fully agree on that point.

> I would advise that you confirm whether or not your hard links were
> restored properly:
> cd /var/lib/backuppc/pool/3/3/3
> for file in `ls`

Don't you trust shell globbing? ;-)

> do
>   stat $file|grep Links|awk '{print $5" "$6}'
> done

You mean

cd /var/lib/backuppc/cpool/3/3/3
perl -e 'foreach (<*>) { $l {(stat $_) [3]} ++; }
 foreach (sort {$a <=> $b} keys %l) {
 print "$l{$_} files have $_ links\n";
 }'

? ;-)

Actually, if your hard links have *not* been restored correctly, your 700GB
tar file will have been unpacked to occupy significantly more space on your
destination device (at least twice the amount, checking *before the first run
of BackupPC_nightly*). I would almost be surprised if a 700GB pool was, in
fact, restored correctly - see all of the "copying the pool" discussions for
details.

> The other possibility is that xfs is that much slower on your hardware,
> with your mount options, etc... Perhaps look at the backuppc wiki for
> some suggestions on improving performance on xfs.

I can think of one additional point to note. Your files will have been created
in a different order than before (probably the first instance of each inode
encountered is "beneath" a different directory than it was on your old pool).
If a directory references inodes scattered all over the disk and all these
inodes need to be read (such as for determining their link count), this will
be significantly slower than if the inodes are all neatly stored near each
other. Even reading the inodes in numerical order (instead of by appearance
in the directory) speeds up things (IO::Dirent optimization).
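
As a crude illustration of the inode-order effect from the shell (this is not
what BackupPC does internally, it just makes the idea visible):

   # stat one pool directory's files in inode order instead of directory order
   find /var/lib/backuppc/cpool/3/3/3 -maxdepth 1 -type f -printf '%i %p\n' \
       | sort -n | cut -d' ' -f2- | xargs -d '\n' stat > /dev/null
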
So, while your source pool was *not* in the ideal condition (for
BackupPC_nightly) of having the files stored neatly near each other (with
respect to appearance in the pool directories), your current pool may (or may
not) behave significantly worse - though I doubt we're talking of a factor of
22 here. It might be a combination of several things.

- How much memory does your BackupPC server have? How much of that is
  available as disk cache?
- What happens between 01:00:01 and 23:11:42 (the "[...]" in your quote from
  the log file)? Are there backups running during this time? Many? Any error
  messages?
- Is the machine doing anything significant that is unrelated to BackupPC
  during this time?
- Sorry if this is a stupid question: are any read errors reported for the
  disk in question?

You *can* spread out the nightly cleanup process over several nights
($Conf{BackupPCNightlyPeriod}). If you do that, does the time it takes
decrease proportionally? Do all passes take the same time?
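
For reference, that is a single line in config.pl (the value 4 here is just
an example):

   # split the pool traversal so a complete nightly sweep takes 4 nights
   $Conf{BackupPCNightlyPeriod} = 4;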

Regards,
Holger

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc 3.0.0: another xfs problem?

2008-12-18 Thread Adam Goryachev
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Jeffrey J. Kosowsky wrote:
> Paul Mantz wrote at about 10:19:45 -0800 on Thursday, December 18, 2008:
>  > Hello Thomas,
>  >
>  > Did the BackupPC_nightly jobs take 22 hours on the 17th as well?  If
>  > they didn't, I would suspect that since you restored the TopDir from a
>  > tarball, that the hardlinking wasn't handled correctly in the tar
>  > compression.  BackupPC_nightly would have been re-establishing the
>  > de-duplication between the pc/ and pool/ directories all in one go,
>  > which could reasonably take that long.
>  >
>
> I don't think that BackupPC_nightly checks for hard link dups between
> the pc/ and pool/ directories. I believe that it only checks for pool
> files with nlink=1 (which are deleted) and for "chains" that need to
> be renumbered due to holes in the numbering. (also weekly, it updates
> the backupInfo files and deletes old logs). Unless there is a bug in
> the code or filesystem issues, it's hard to see what would cause such
> a relatively "simple" and "linear" routine to bog down like that.
>
> In fact, the whole reason I had to write my BackupPC_fixLinks.pl
> routine was to fix the various cases of pool duplications and missing
> links between pc/ and pool/

I would advise that you confirm whether or not your hard links were
restored properly:
cd /var/lib/backuppc/pool/3/3/3
for file in `ls`
do
stat $file|grep Links|awk '{print $5" "$6}'
done

If they all come back as one or two, then you will see that you
haven't properly restored your pool. You should have some link counts of at
least 10 or more, depending on the number of backups you keep, the number of
machines you back up, etc...

The other possibility is that xfs is that much slower on your hardware,
with your mount options, etc... Perhaps look at the backuppc wiki for
some suggestions on improving performance on xfs.

Regards,
Adam

- --
Adam Goryachev
Website Managers
www.websitemanagers.com.au
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iEYEARECAAYFAklK4zwACgkQGyoxogrTyiWY+wCfdsGDRrLMgTK7MeAnzzSb2ry4
9yIAn11AayXFW+QGyydxBiZgZgGkqID0
=ifhp
-END PGP SIGNATURE-

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc 3.0.0: another xfs problem?

2008-12-18 Thread Jeffrey J. Kosowsky
Paul Mantz wrote at about 10:19:45 -0800 on Thursday, December 18, 2008:
 > Hello Thomas,
 > 
 > Did the BackupPC_nightly jobs take 22 hours on the 17th as well?  If
 > they didn't, I would suspect that since you restored the TopDir from a
 > tarball, that the hardlinking wasn't handled correctly in the tar
 > compression.  BackupPC_nightly would have been re-establishing the
 > de-duplication between the pc/ and pool/ directories all in one go,
 > which could reasonably take that long.
 > 

I don't think that BackupPC_nightly checks for hard link dups between
the pc/ and pool/ directories. I believe that it only checks for pool
files with nlink=1 (which are deleted) and for "chains" that need to
be renumbered due to holes in the numbering. (also weekly, it updates
the backupInfo files and deletes old logs). Unless there is a bug in
the code or filesystem issues, it's hard to see what would cause such
a relatively "simple" and "linear" routine to bog down like that.

In fact, the whole reason I had to write my BackupPC_fixLinks.pl
routine was to fix the various cases of pool duplications and missing
links between pc/ and pool/.






--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc 3.0.0: another xfs problem?

2008-12-18 Thread Chris Robertson
Thomas Smith wrote:
> Hi,
>
> No, it continues to take 22 hours or so each day.
>
> -Thomas

How is your XFS volume mounted?  Did you add the "noatime" and 
"nodiratime" directives?  If you have battery backed storage, I would 
highly recommend using "nobarrier" as well 
(http://oss.sgi.com/projects/xfs/faq.html#wcache_persistent).
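
That is, something along these lines (pool mount point assumed to be
/var/lib/backuppc; only add nobarrier if the cache really is battery-backed,
and put the same options into /etc/fstab so they survive a reboot):

   mount -o remount,noatime,nodiratime,nobarrier /var/lib/backuppc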

How full is the XFS partition?  Performance suffers greatly when the 
filesystem usage rises above about 80%.

Chris

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc 3.0.0: another xfs problem?

2008-12-18 Thread Thomas Smith
Hi,

No, it continues to take 22 hours or so each day.

-Thomas

On Thu, Dec 18, 2008 at 1:19 PM, Paul Mantz  wrote:
> Hello Thomas,
>
> Did the BackupPC_nightly jobs take 22 hours on the 17th as well?  If
> they didn't, I would suspect that since you restored the TopDir from a
> tarball, that the hardlinking wasn't handled correctly in the tar
> compression.  BackupPC_nightly would have been re-establishing the
> de-duplication between the pc/ and pool/ directories all in one go,
> which could reasonably take that long.
>
> On Thu, Dec 18, 2008 at 8:25 AM, Thomas Smith  wrote:
>> Hi,
>>
>> I'm running BackupPC 3.0.0 under Ubuntu.  I'm having problems similar
>> to the ones some people have had with 3.1.0 on XFS, but I also did
>> some other odd things before this started happening, so I want to
>> relate the whole scenario, in case I did something else to break it.
>>
>> I accidentally corrupted the filesystem /var/lib/backuppc lived on.
>> Luckily, I had a backup of the backup filesystem (which contains about
>> 700GB of compressed data), just a giant tarball of the whole thing,
>> sitting on another server.  It was from a couple weeks ago.  So I blew
>> away the old (ext3) volume from my LVM volume group and put a new XFS
>> filesystem in its place, then I extracted the tarball, giving me a
>> several-week-old snapshot of BackupPC.  Then I just started BackupPC
>> up again.  Now, BackupPC_nightly takes a zillion years to run:
>>
>> 2008-12-16 01:00:01 Running 2 BackupPC_nightly jobs from 0..15 (out of 0..15)
>> 2008-12-16 01:00:01 Running BackupPC_nightly -m 0 127 (pid=7491)
>> 2008-12-16 01:00:01 Running BackupPC_nightly 128 255 (pid=7492)
>> [...]
>> 2008-12-16 23:11:42 Finished  admin1  (BackupPC_nightly 128 255)
>> 2008-12-16 23:12:42 BackupPC_nightly now running BackupPC_sendEmail
>> 2008-12-16 23:12:45 Finished  admin  (BackupPC_nightly -m 0 127)
>>
>> It used to take about an hour, now it takes 22 hours.
>>
>> I found mailing list threads about similar problems in 3.1.0, but they
>> seemed to have to do with IO::Dirent.pm, which isn't even installed on
>> my system.  Should I update to 3.1.0 and then install the patch for
>> the Dirent problem?  Or is it maybe not an XFS bug, and I did
>> something wrong when I restored the BackupPC filesystem?  Something
>> else?
>>
>> Thank you for your help,
>> -Thomas
>>
>> --
>> http://resc.smugmug.com/
>>
>>
>
>
>
> --
> Paul Mantz
> http://www.mcpantz.org
> BackupPC - Network Backup with De-Duplication http://www.backuppc.com
> Zmanda - Open source backup and recovery http://www.zmanda.com/
>
>



-- 
http://resc.smugmug.com/

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backuppc 3.0.0: another xfs problem?

2008-12-18 Thread Paul Mantz
Hello Thomas,

Did the BackupPC_nightly jobs take 22 hours on the 17th as well?  If
they didn't, I would suspect that, since you restored the TopDir from a
tarball, the hardlinking wasn't handled correctly in the tar
compression.  BackupPC_nightly would have been re-establishing the
de-duplication between the pc/ and pool/ directories all in one go,
which could reasonably take that long.

On Thu, Dec 18, 2008 at 8:25 AM, Thomas Smith  wrote:
> Hi,
>
> I'm running BackupPC 3.0.0 under Ubuntu.  I'm having problems similar
> to the ones some people have had with 3.1.0 on XFS, but I also did
> some other odd things before this started happening, so I want to
> relate the whole scenario, in case I did something else to break it.
>
> I accidentally corrupted the filesystem /var/lib/backuppc lived on.
> Luckily, I had a backup of the backup filesystem (which contains about
> 700GB of compressed data), just a giant tarball of the whole thing,
> sitting on another server.  It was from a couple weeks ago.  So I blew
> away the old (ext3) volume from my LVM volume group and put a new XFS
> filesystem in its place, then I extracted the tarball, giving me a
> several-week-old snapshot of BackupPC.  Then I just started BackupPC
> up again.  Now, BackupPC_nightly takes a zillion years to run:
>
> 2008-12-16 01:00:01 Running 2 BackupPC_nightly jobs from 0..15 (out of 0..15)
> 2008-12-16 01:00:01 Running BackupPC_nightly -m 0 127 (pid=7491)
> 2008-12-16 01:00:01 Running BackupPC_nightly 128 255 (pid=7492)
> [...]
> 2008-12-16 23:11:42 Finished  admin1  (BackupPC_nightly 128 255)
> 2008-12-16 23:12:42 BackupPC_nightly now running BackupPC_sendEmail
> 2008-12-16 23:12:45 Finished  admin  (BackupPC_nightly -m 0 127)
>
> It used to take about an hour, now it takes 22 hours.
>
> I found mailing list threads about similar problems in 3.1.0, but they
> seemed to have to do with IO::Dirent.pm, which isn't even installed on
> my system.  Should I update to 3.1.0 and then install the patch for
> the Dirent problem?  Or is it maybe not an XFS bug, and I did
> something wrong when I restored the BackupPC filesystem?  Something
> else?
>
> Thank you for your help,
> -Thomas
>
> --
> http://resc.smugmug.com/
>
>



-- 
Paul Mantz
http://www.mcpantz.org
BackupPC - Network Backup with De-Duplication http://www.backuppc.com
Zmanda - Open source backup and recovery http://www.zmanda.com/

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] backuppc 3.0.0: another xfs problem?

2008-12-18 Thread Thomas Smith
Hi,

I'm running BackupPC 3.0.0 under Ubuntu.  I'm having problems similar
to the ones some people have had with 3.1.0 on XFS, but I also did
some other odd things before this started happening, so I want to
relate the whole scenario, in case I did something else to break it.

I accidentally corrupted the filesystem /var/lib/backuppc lived on.
Luckily, I had a backup of the backup filesystem (which contains about
700GB of compressed data), just a giant tarball of the whole thing,
sitting on another server.  It was from a couple weeks ago.  So I blew
away the old (ext3) volume from my LVM volume group and put a new XFS
filesystem in its place, then I extracted the tarball, giving me a
several-week-old snapshot of BackupPC.  Then I just started BackupPC
up again.  Now, BackupPC_nightly takes a zillion years to run:

2008-12-16 01:00:01 Running 2 BackupPC_nightly jobs from 0..15 (out of 0..15)
2008-12-16 01:00:01 Running BackupPC_nightly -m 0 127 (pid=7491)
2008-12-16 01:00:01 Running BackupPC_nightly 128 255 (pid=7492)
[...]
2008-12-16 23:11:42 Finished  admin1  (BackupPC_nightly 128 255)
2008-12-16 23:12:42 BackupPC_nightly now running BackupPC_sendEmail
2008-12-16 23:12:45 Finished  admin  (BackupPC_nightly -m 0 127)

It used to take about an hour, now it takes 22 hours.

I found mailing list threads about similar problems in 3.1.0, but they
seemed to have to do with IO::Dirent.pm, which isn't even installed on
my system.  Should I update to 3.1.0 and then install the patch for
the Dirent problem?  Or is it maybe not an XFS bug, and I did
something wrong when I restored the BackupPC filesystem?  Something
else?

Thank you for your help,
-Thomas

-- 
http://resc.smugmug.com/

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/