Re: [BackupPC-users] BackupPC writes to disk very slow

2018-12-06 Thread Bedynek, Matthew J. via BackupPC-users
Ari,

I have been using BackupPC with ZFS on Linux successfully for a few years now.  
It was simply convenient, as I had a lot of old hardware with 3 TB drives that 
I was able to repurpose for the task.  We have about a dozen hosts with either 
~60 TB or ~160 TB usable in RAID-Z1 plus a hot spare.  Despite having had no 
issues, I would probably redo them as raidz2 with no hot spare.
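As a rough sketch of what that would look like (pool and device names here are 
placeholders, not my actual layout):

  zpool create -o ashift=12 backuppool raidz2 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
  zfs create backuppool/backuppc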

I use config management to set the number of concurrent jobs per host based on 
the number of cores on the system.  The data being backed up ranges from many 
small files to a few very large files.  The only tweaking I've done is to 
configure many incrementals with one full every 90 days (just to reduce 
traffic to primary storage).
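The relevant settings are roughly the following (values here are illustrative, 
not our exact numbers):

  $Conf{MaxBackups}  = 16;      # concurrent backups, set to about the core count
  $Conf{FullPeriod}  = 89.97;   # one full roughly every 90 days
  $Conf{IncrPeriod}  = 0.97;    # incrementals daily
  $Conf{IncrKeepCnt} = 90;      # keep enough incrementals to span the full period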

I also disable compression in BackupPC and let the file system handle it.  
Other than the manual act of rebuilding the ZFS modules between kernel 
upgrades, it works very well!
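Concretely, the compression split is just this on the BackupPC side:

  $Conf{CompressLevel} = 0;    # store pool files uncompressed

and on the ZFS side (dataset name is only an example):

  zfs set compression=lz4 backuppool/backuppc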

Matt 


-----Original Message-----
From: Ari Sovijärvi  
Sent: Sunday, December 2, 2018 10:46 AM
To: backuppc-users@lists.sourceforge.net
Subject: Re: [BackupPC-users] BackupPC writes to disk very slow

On 2.12.2018 15.32, Tapio Lehtonen wrote:
> On the new BackupPC host, backups go very slowly. I believe I have 
> determined that the network connection is not at fault: from a Windows 
> machine, Speedtest.net shows a little less than gigabit speeds, and from 
> this BackupPC host speedtest-cli shows over 800 Mbit/s.

Out of curiosity, have you experimented with other filesystems? I have a couple 
of relatively large setups (pool at ~9 TB) with ext4, and those still crunch 
through backups happily.

XFS has been a bit hit-and-miss for me. I know many swear by it, but where I've 
tested it I've hit enough random problems that I haven't bothered with it any 
more.

I've recently been experimenting with separating the pc and pool directories, 
so the pc directories are on SSD storage and the pool on HDDs. 
The jury is still out on the real-life speedup from this, but it seems possible 
with BackupPC 4.
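The layout I'm trying is nothing fancier than giving each tree its own mount 
point, roughly like this (devices and paths are only an example; $Conf{TopDir} 
stays where it is, so BackupPC still sees one tree):

  mount /dev/nvme0n1p1 /var/lib/backuppc/pc      # per-host trees on SSD
  mount /dev/md0       /var/lib/backuppc/cpool   # dedup pool on HDD
                                                 # (mount pool/ the same way if uncompressed)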

--
Ari Sovijärvi



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Rsync issue

2018-10-04 Thread Bedynek, Matthew J.
All,



I am using BackupPC 4.2.1 on a Red Hat 7.5 host to back up a rather large 
repository of data.  I believe things worked OK with tar, but after we changed 
the file system we switched back to rsync.  We have plenty of other hosts of 
roughly equal size that back up fine, so I am not sure whether there is 
something specific about this path that causes the issue.



I have worked with Craig in the past to troubleshoot and provide data.  I 
suspect he or someone else will ask me to turn up logging and repeat the run.  
I can provide the logs on request -- just not here, since they would be very large.



It seems our environment is good for testing and finding issues, since some of 
our backups are large both in total size and in number of files.



Thanks and take care!



(..list of files..)

[receiver] io timeout after 72035 seconds -- exiting

Done: 0 errors, 0 filesExist, 0 sizeExist, 0 sizeExistComp, 0 filesTotal, 0 
sizeTotal, 0 filesNew, 0 sizeNew, 0 sizeNewComp, 233977 inode

rsync error: timeout in data send/receive (code 30) at io.c(140) 
[receiver=3.0.9.12]

rsync_bpc: connection unexpectedly closed (39 bytes received so far) [generator]

DoneGen: 0 errors, 14204 filesExist, 809628 sizeExist, 809628 sizeExistComp, 
131190 filesTotal, 2915859542718 sizeTotal, 0 filesNew, 0 sizeNew, 0 
sizeNewComp, 305814 inode

rsync error: error in rsync protocol data stream (code 12) at io.c(629) 
[generator=3.0.9.12]

rsync_bpc exited with fatal status 12 (3072) (rsync error: error in rsync 
protocol data stream (code 12) at io.c(629) [generator=3.0.9.12])

Xfer PIDs are now

Got fatal error during xfer (rsync error: error in rsync protocol data stream 
(code 12) at io.c(629) [generator=3.0.9.12])

Backup aborted (rsync error: error in rsync protocol data stream (code 12) at 
io.c(629) [generator=3.0.9.12])

BackupFailCleanup: nFilesTotal = 131190, type = full, BackupCase = 1, inPlace = 
1, lastBkupNum =

Keeping non-empty backup #0 (/backups/BackupPC/pc/cncs5to9-8-repo/0)

Running BackupPC_refCountUpdate -h cncs5to9-8-repo -f on cncs5to9-8-repo

Xfer PIDs are now 12182

BackupPC_refCountUpdate: cncs5to9-8-repo #0 inodeLast set to 305812 (was 1)

BackupPC_refCountUpdate: host cncs5to9-8-repo got 0 errors (took 141 secs)

Xfer PIDs are now

Finished BackupPC_refCountUpdate (running time: 142 sec)
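The io timeout above is suspiciously close to BackupPC's stock 
$Conf{ClientTimeout} of 72000 seconds, so one thing I plan to try (only a guess 
at the cause, not a confirmed fix) is raising it for this host:

  $Conf{ClientTimeout} = 144000;   # per-host override; 72000 is the default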


Thanks,

Matt


Re: [BackupPC-users] Replication

2017-05-19 Thread Bedynek, Matthew J.
> Is it possible to replicate BackupPC across datacenters?

I do not replicate them with zfs send, but I do run about a dozen hosts with a 
mixture of v3 and v4 on ZFS on Linux.

If built out right, you can get good I/O performance on ZFS for backups.
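If I were to replicate, the obvious route would be incremental snapshot 
replication of the BackupPC dataset, roughly along these lines (names are 
placeholders and I have not run this in production):

  zfs snapshot backuppool/backuppc@weekly
  zfs send -R -i backuppool/backuppc@lastweek backuppool/backuppc@weekly | \
      ssh offsite zfs receive -F backuppool/backuppc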


Re: [BackupPC-users] BackupPC 4.1.1 and NFS shares

2017-04-28 Thread Bedynek, Matthew J.
On Apr 21, 2017, at 7:39 PM, Les Mikesell wrote:

> If you think about what rsync is supposed to do, it doesn't make much
> sense to run both ends locally accessing data over NFS.

I would think it depends.  I have found many cases where running rsync between 
local mounts is preferable to using cp or a tar pipe.

I would have to look at the rsync implementation with a little more scrutiny, 
but I suspect rsync provides greater flexibility in tuning what it copies.  To 
be clear, it is a very minor concern for my purposes and nothing worth getting 
in a twist over.  The types of directories I back up tend to only grow between 
incrementals, so it wasn’t really a concern, more of a question.
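The typical case I mean is refreshing a copy that already mostly exists, where 
something like

  rsync -aH --delete /mnt/nfs/project/ /mnt/local/project/

only transfers what changed and cleans up deletions, whereas cp -a or a tar 
pipe rewrites everything on every run.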

I did, however, run into another problem: my backups using tar have been 
freezing.  I do not know whether I am hitting some size limitation; the same 
jobs configured on version 3, which use rsync, work fine.

> For any file that is not skipped by the timestamp/length check in
> incrementals, you are going to read the entire file over NFS so rsync
> can compute the differences (where the usual point is to only send the
> differences over the network).

I seem to recall there was a change in how files are compared, but I also know 
that incrementals with rsync on v3 ran in a fraction of the time of a full 
backup, so it must have had some way to skip doing a full file comparison.
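(For reference, plain rsync behaves the same way: by default it skips any file 
whose size and mtime match, and only reads file contents when forced to, e.g.

  rsync -a /src/ /dst/              # quick check: size + mtime only
  rsync -a --checksum /src/ /dst/   # read and hash every file on both sides

so presumably the v3 rsync transfer relied on the same quick check.)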


> Is there any way you can run rsync remotely against the NFS host instead?

I back up from two types of hosts: Lustre and NFS. I figured that it made far 
more sense to provide a direct mount rather than use rsync over SSH or stand up 
an rsync daemon.  I would also run the risk of having to have hosts

matt


Re: [BackupPC-users] Multiple issues on newly installed 4.1.0.

2017-04-24 Thread Bedynek, Matthew J.
> Thanks for trying to get some debug log files.  I was able to reproduce the
> problem, and I just pushed a fix (maybe not the final one) to git.
>
> Could you please test that change?  You can either apply the diff (see
> below), manually replace BackupPC_tarExtract (edit the "use lib" path to
> make sure it is correct), or reinstall using makeDist / configure.pl from
> git master.

I will test this later today; I am just waiting for a few jobs to complete.  
Thanks for taking the time out of your day to help us!
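For the git-master route, I assume the steps are roughly the following 
(untested on my side; check makeDist's usage output in case it needs extra 
arguments):

  git clone https://github.com/backuppc/backuppc.git
  cd backuppc
  perl makeDist                      # builds a dist tarball under dist/
  cd dist && tar xzf BackupPC-*.tar.gz && cd BackupPC-*/
  perl configure.pl                  # re-run the installer over the existing install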

-matt


Re: [BackupPC-users] New BackupPC 4 install & --acls option

2017-04-24 Thread Bedynek, Matthew J.
Craig,

> I just discovered on my Ubuntu 16.04 server, rsync_bpc (and also rsync)
> don't detect that acls are supported when you build them from source,
> which is wrong.  I found out that you have to install a dev acl library
> first:


ACLs are something I was curious about as well.  It appears tar supports them, 
so if one includes the '--acls' option for tar, will BackupPC store them?
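What I had in mind was simply appending the flag to the client tar command, 
something like the following (untested; it assumes GNU tar built with ACL 
support, and whether BackupPC_tarExtract preserves the extended headers on the 
server side is exactly my question):

  $Conf{TarClientCmd} = '$sshPath -q -x -n -l root $host'
                      . ' env LC_ALL=C $tarPath -c -v -f - -C $shareName+'
                      . ' --totals --acls';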


[BackupPC-users] BackupPC 4.1.1 and NFS shares

2017-04-21 Thread Bedynek, Matthew J.
All,

With version 3 I am using rsync instead of tar to back up an NFS share to which 
the BackupPC host has direct access.  This has worked great, with the exception 
that linking takes forever when there are a large number of files.  I simply 
didn't have much luck getting version 3 to work with tar, and I got rsync 
working rather easily, so I stopped there.

In the version 4 change log I noted that there were some improvements to 
linking, so I decided to give it a try.  However, with version 4 there have 
been changes to the rsync transfer such that I am forced to use tar for a 
local copy.


My V3 rsync config looked like:

$Conf{ClientNameAlias} = 'localhost';
$Conf{BackupFilesExclude} = {
  '*' => [
    '/x/y/SEQ/IPTS-*'
  ]
};
$Conf{RsyncClientCmd} = 'sudo -u username $rsyncPath $argList+';
$Conf{RsyncClientRestoreCmd} = 'sudo -u username $rsyncPath $argList+';


I believe RsyncClientCmd and RsyncClientRestoreCmd are gone in v4.  I did get 
rsync to work with v4, but it seems to ssh to localhost, which consumes 
additional host resources.
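The workaround I have been considering (a sketch only, not something I have put 
into production) is pointing $Conf{RsyncSshArgs} at a tiny wrapper that throws 
away the hostname and runs the client rsync locally via sudo, e.g. a 
hypothetical /usr/local/bin/rsync-local-shell:

  #!/bin/sh
  # rsync invokes: <shell> <host> <command...>; drop the host and run the
  # remainder locally under the desired account
  shift
  exec sudo -u username "$@"

with $Conf{RsyncSshArgs} = ['-e', '/usr/local/bin/rsync-local-shell']; in the 
host config, which should avoid the ssh-to-localhost hop.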

Rsync isn't a big deal since I have tar working now, but am I correct in 
reading that rsync might be better for incremental backups in terms of 
handling deletions?

-matt


Re: [BackupPC-users] Multiple issues on newly installed 4.1.0.

2017-04-21 Thread Bedynek, Matthew J.
Craig,

It was very short, so I'll probably redo it today with more debugging.  The 
high point is that I saw similar behavior.

tarExtract: copyInodes: finished getAll()
tarExtract: copyInodes: finished getAll()
tarExtract: copyInodes: finished getAll()
tarExtract: copyInodes: finished getAll()
tarExtract: copyInodes: finished getAll()
tarExtract: copyInodes: finished getAll()
tarExtract: copyInodes: finished getAll()
tarExtract: copyInodes: finished getAll()
tarExtract: copyInodes: finished getAll()
tarExtract: copyInodes: finished getAll()
tarExtract: copyInodes: finished getAll()
tarExtract: copyInodes: finished getAll()
tarExtract: copyInodes: finished getAll()
tarExtract: *** stack smashing detected ***: /usr/bin/perl terminated
tarExtract: === Backtrace: =
tarExtract: /lib64/libc.so.6(__fortify_fail+0x37)[0x7fcb4317b047]
tarExtract: /lib64/libc.so.6(__fortify_fail+0x0)[0x7fcb4317b010]
tarExtract: 
/usr/lib64/perl5/vendor_perl/auto/BackupPC/XS/XS.so(+0x7a80)[0x7fcb3b602a80]
tarExtract: 
/usr/lib64/perl5/CORE/libperl.so(Perl_pp_entersub+0x58f)[0x7fcb4447842f]
tarExtract: 
/usr/lib64/perl5/CORE/libperl.so(Perl_runops_standard+0x16)[0x7fcb44470ba6]
tarExtract: /usr/lib64/perl5/CORE/libperl.so(perl_run+0x355)[0x7fcb4440d9a5]
tarExtract: /usr/bin/perl[0x400d99]
tarExtract: /lib64/libc.so.6(__libc_start_main+0xf5)[0x7fcb4308db35]
tarExtract: /usr/bin/perl[0x400dd1]
tarExtract: === Memory map: 

[..]
BackupPC_tarExtract exited with fail status 6
Xfer PIDs are now
Got fatal error during xfer (BackupPC_tarExtract exited with fail status 6)
Backup aborted (BackupPC_tarExtract exited with fail status 6)
BackupFailCleanup: nFilesTotal = 1469, type = full, BackupCase = 4, inPlace = 
0, lastBkupNum = 2
Keeping non-empty backup #2 (/backups/...)
Running BackupPC_refCountUpdate -h m8u-test -f on m8u-test
Xfer PIDs are now 7965
BackupPC_refCountUpdate: host m8u-test got 0 errors (took 17 secs)



I somehow missed that.  Can you re-send it to me, please?

Matt


Re: [BackupPC-users] Multiple issues on newly installed 4.1.0.

2017-04-20 Thread Bedynek, Matthew J.

On Apr 20, 2017, at 10:35 PM, Craig Barratt wrote:

> Jens,
>
> Thanks for including interesting parts of the XferLOG file.  Could you
> re-run with XferLogLevel set to 6?  Feel free to send the log file (or
> interesting parts) to me directly, rather than posting to the list.

I see similar behavior here, except that I got a crash dump from Perl.  I 
posted a message previously with a section of the log, but if you're 
interested I can collect a log as well.


tarExtract: copyInodes: finished getAll()

I see many of those in my log file; in addition, I have to stop the job twice 
in the UI for it to cancel.

matt