host=$(echo "$path" | cut -d'/' -f4)
backup_number=$(echo "$path" | cut -d'/' -f5)
# backup_number=$(echo "$path" | cut -d'/' -f6)
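For what it's worth, a worked example of how those fields line up; the
sample path here is made up, assuming a layout where the host sits in the
4th '/'-separated field and the backup number in the 5th:

    path=/snapshots/backuppc/myhost/123/etc/passwd   # hypothetical path
    host=$(echo "$path" | cut -d'/' -f4)             # -> myhost
    backup_number=$(echo "$path" | cut -d'/' -f5)    # -> 123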
ough.
*However*, ZFSonLinux has had *plenty* of sparse-file-related bugs to do
with snapshots and similar. It was getting so bad at one point that I
nearly stopped trusting the filesystem (my mind has been more at ease
since I stopped having the time to follow mailing list discussions).
I d
try to rename it atomically into the
destination file, and fail because that temporary file has been dropped on
the floor. But reading the files for backup poses no such problems.
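For reference, a minimal sketch of the write-then-rename pattern being
described (the paths and the data producer are made up); the failure mode
above is the mv failing because the temporary file has vanished:

    tmp=$(mktemp /backups/.dest.XXXXXX)
    some_data_source >"$tmp"      # hypothetical producer of the file data
    mv -- "$tmp" /backups/dest    # rename(2) is atomic within one filesystem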
--
Tim Connors
https://github.com/backuppc/backuppc/issues/87
Also, unfortunately, this will affect all your previous backups, so there's
no way of recovering what your filesystem actually looked like last night.
--
Tim Connors
l the hardlinks. But I
suspect here it's a case of finding the mapping of the file from each
backup (that info is not in $machine/XferLOG.4218.z, so I don't know where
else it would be), and then truncating or removing it (since there are no
more hardlinks), while also taking care of legacy v3 files.
ives,
I'd tell backuppc to turn off compression and munging of the pool, and let
ZFS do it.
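A sketch of that division of labour, with a made-up dataset name:

    # ZFS side: let the filesystem compress the pool.
    zfs set compression=lz4 tank/backuppc
    # BackupPC side, in config.pl: stop compressing in the application,
    # so ZFS sees compressible data:
    #   $Conf{CompressLevel} = 0;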
I used JFS 10 years ago, and "niche buggy product" would be my description
for it. Basically, go with the well tested popular FSs,
2.0l/part20d,20h/high-timeres/galout/output/data/pview_columns/g02.processed.followsmc.columns-xproj:yproj:zproj:xscreen:yscreen:vlsr:vgsr:vsub:rp:th_lmc:ph_lmc:x_smc:y_smc:z_smc:x_mw:y_mw:z_mw:x_sun:y_sun:z_sun.pview.dat
--
Tim Connors
> ally is best with its own server. And with the amount of data
> you're quoting, make sure you're using gigabit Ethernet at minimum.
Nah, I was running on 100M for a long time quite successfully. The
bottleneck is disk on the server anyway.
--
Tim Connors
---
line 130.\x{a}') called at /usr/lib/perl5/Fuse.pm line 130
Fuse::main('mountopts', 'ro,default_permissions,allow_other',
'mountpoint', '/snapshots', 'threaded', 0, 'debug', 1, 'open', ...) called at
/home/tconnors/
sync
usage) might be a big win. I'm sure ZFS is a little quicker than that,
given that it's not done in Perl.
--
Tim Connors
failures.
Slightly different RPMs from the different generations of disks in your
RAID10, as someone else suggested, might introduce resonances in your
enclosure that would cause more vibration than identical drives.
It's all too tricky. I
rebuild,
suffer the likely consequence of losing another disk when rebuilding
RAID6, and you still have a valid array.
Worst case, a fairly likely occurrence with RAID10: lose that second disk
and you lose all your data. (Roughly speaking, in a 2N-disk RAID10 the
chance that the second failure lands on the dead disk's mirror partner is
1/(2N-1), and that one loss kills the whole array.)
Care for your data ==> don't use RAID10.
--
Tim Connors
--
when I was doing
night shift, this *wasn't* at 4am!), and you won't even notice that it's
busy.
Then wrap locate up in a simple CGI script to present to your users,
instead of training them how to use locate on the command line.
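A minimal sketch of such a wrapper, assuming a plain-text response is
acceptable and the query arrives in QUERY_STRING (the script name is
made up):

    #!/bin/sh
    # locate-search.cgi -- trivial read-only front end to locate(1).
    echo "Content-Type: text/plain"
    echo
    # Crude decoding: turn '+' back into spaces; real code should also
    # handle %XX escapes and sanitise the input.
    pattern=$(printf '%s' "$QUERY_STRING" | tr '+' ' ')
    locate -- "$pattern" | head -n 500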
--
Tim Connors
---
On Thu, 22 Apr 2010, Les Mikesell wrote:
> On 4/22/2010 9:37 AM, Tim Connors wrote:
> >
> > I've seen this problem too, and it's not filesystem issues.
> >
> > It's either rsync or the perl library (and I very strongly suspect the
> > latter) t
On Wed, 21 Apr 2010, Les Mikesell wrote:
> On 4/21/2010 7:47 AM, Les Mikesell wrote:
> > Philippe Bruhat (BooK) wrote:
> >> On Wed, Apr 21, 2010 at 08:45:29AM +0200, Philippe Bruhat (BooK) wrote:
> >>> On Wed, Apr 21, 2010 at 08:35:03AM +0200, Philippe Bruhat (BooK) wrote:
> I understand this
On Tue, 22 Dec 2009, Sebastiaan van Erk wrote:
> Question 1:
>
> The first thing I noticed is that trashClean runs every 5 minutes, preventing
> my disks from spinning down. I changed the configuration setting to once every
> 7 days, but I was wondering if this is likely to cause any problems...
I
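For reference, the setting in question appears to be
$Conf{TrashCleanSleepSec} in config.pl (300 seconds by default, if memory
serves); a sketch of the change being described:

    # config.pl -- wake trashClean once a week instead of every 5 minutes,
    # so otherwise-idle disks get a chance to spin down.
    $Conf{TrashCleanSleepSec} = 7 * 24 * 3600;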
On Sat, 19 Dec 2009, Holger Parplies wrote:
> Hi,
>
> Jeffrey J. Kosowsky wrote on 2009-12-18 15:36:48 -0500 [Re: [BackupPC-users]
> Unexpected call BackupPC::Xfer::RsyncFileIO->unlink(...)]:
> > Jeffrey J. Kosowsky wrote at about 13:11:37 -0500 on Monday, November 2,
> > 2009:
> > > Unexpected
On Fri, 18 Dec 2009, Malik Recoing. wrote:
> The Holy Doc says ( Barratt:Design:operation:2 ): "it checks each file in the
> backup to see if it is identical to an existing file from any previous backup
> of any PC. It does this without needing to write the file to disk."
>
> But it doesn't say
A few days ago, backuppc spontaneously started getting stuck on a
partition on a machine that it had been handling fine for 120 days.
The logfile gives:
2009-12-02 08:46:04 incr backup 122 complete, 38 files, 46425552 bytes, 0
xferErrs (0 bad files, 0 bad shares, 0 other)
2009-12-03 0
I know there have been threads on here debating the relative merits of
throwing out all of the data on a backup disk that suffered major
filesystem damage, vs. trying to fix it and letting backuppc regenerate
any files that might have corrupted data (via setting a
RsyncCsumCacheVerifyProb of 1 and redoi
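For context, that setting lives in config.pl (the default, as I recall,
is a small probability like 0.01); a sketch of the verify-everything
variant being described:

    # config.pl -- check cached checksums against the real file contents
    # for every file on the next full, so corrupted pool files get
    # detected and rewritten rather than silently reused.
    $Conf{RsyncCsumCacheVerifyProb} = 1;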
2009-05-04 21:14:47 unexpected repeated share name / skipped
2009-05-04 21:14:47 full backup started for directory /boot (baseline backup
#89)
2009-05-04 21:15:10 full backup started for directory /dos (baseline backup #89)
--
Tim Connors
backuppc fails to back up '/' if it is not specified before anything else.
In fact, no subtree can be specified before its parent, or the parent
won't be backed up, because the directory already exists by the time
backuppc performs the test for a repeated share name (the ordering sketch
below illustrates this). If it is a wise idea
to keep
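A sketch of the ordering constraint in config.pl terms (the share names
here are hypothetical):

    # config.pl -- parents must come before their subtrees:
    $Conf{RsyncShareName} = ['/', '/boot', '/dos'];  # '/' gets backed up
    # $Conf{RsyncShareName} = ['/boot', '/'];        # '/' would be skipped
    #                                                # as a repeated share name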
On Wed, 24 Dec 2008, Holger Parplies wrote:
> Hi,
>
> Glassfox wrote on 22.12.2008 at 16:50:24 [[BackupPC-users] missing rdev on
> device...]:
> > I tried a complete restore on my localhost (which also runs BackupPC) today
> > and got a lot of error messages like this:
> >
> > "2008-12-22 21:54
Hi,
There is a way of finding the annotated history of a single directory:
http:///backuppc/index.cgi?action=dirHistory&host=&share=/&dir=/
But I was wondering whether it was currently possible, or whether it was
easy to add support to show the changes of an entire hierarchy (starting
from some nominated
t
should be rebooted and force fscked as soon as possible, as per the thread
above).
--
Tim Connors
't so busy
screwing around doing other things. Has anyone done anything similar?
--
Tim Connors