[BackupPC-users] BackupPC4 currently unresolvable on opensuse tumbleweed

2024-02-21 Thread gregrwm
BackupPC4 currently shows as unresolvable for tumbleweed on
https://build.opensuse.org/package/show/home:ecsos:Backup/BackupPC4,
something about having choices for sendmail and openssl.

is there hope?

(i never have been particularly interested in it sending mail.  or the
gui.  i love it for using rsync, for the intelligently shared and
compressed pool, and clever expiry.)
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] files removed

2023-05-06 Thread gregrwm
On Sat, May 6, 2023 at 11:31 AM G.W. Haywood wrote:

> On Fri, 5 May 2023, gregrwm wrote:
> > if so, umm..!!  heck, isn't that one of the times when backups are sorely
> > wanted?  when the original flakes or fades and no longer has a valid copy
> > of the file?
>
> I think you may be misinterpreting message, or jumping to unwarranted
> conclusions, or possibly both...
> Have you actually verified that something you're fond of has been lost?
>

i see messages at startup that clue me in that there are some issues with
the drive, i suspect some sort of media fade.  the "originals" of the files
"removed" are in one of my directories of older versions, so i'm not crying
if they're gone, but certainly disappointed if backuppc hastily deleted its
copies due to "mismatch" with the "originals".

> > do they mean the backed-up copies were removed?  because they no longer
> > match the "original" file?
>
> I can't comment, I no longer keep the V3 code lying around.  The message
> that's spooking you doesn't seem to be in the V4 code.  I had a quick look
> for anything which might resemble the same thing but it was very quick and
> I wasn't amazed when I didn't get any results.
>

that's probably good news, likely the bp4 behavior is better

> You seem to be updating a failed backup.  Does that sound right?
>

yes.  a backup was done ages ago, not sure but maybe even before this
laptop was re-installed on a "new" (used) disc.  recently i made an attempt
or two that were non-starters, and then this seemingly mostly good run.
i'm backing up through a vpn tunnel which is only up when i bring it up,
hence i invoke manually.

> Have you actually got
> at least one of what you consider to be a *complete* full backup?  And
> have you also got a bunch of completed incremental backups?  Have you
> any reason to believe that the 'complete' backups are not complete?
>

so far i've excluded vast swaths of the os to pull the configs i've
modified and data i've created through the vpn tunnel first.  it hasn't run
enough to create any incrementals yet.  next i'll pare the exclusions, and
eventually eliminate them.

> version 4.0.0alpha of BackupPC was released ten
> years ago next month.  V4 offers advantages over V3.  Just sayin'...
>

the production backup workhorse is still bp3 on ubuntu as of yet.  i only
recently emerged from the ubuntu cocoon to tumbleweed, an up-to-date,
robust, and stable distribution; bp4 is right in the main repos.


[BackupPC-users] files removed

2023-05-05 Thread gregrwm
hi folks,
i'm curious about the "file removed" messages below, they're from running
"sudo -ubackuppc /u*/s*/b*/b*/*_dump -v $c" [ubuntu focal backuppc 3.3.2-3]

do they mean the backed-up copies were removed?  because they no longer
match the "original" file?

if so, umm..!!  heck, isn't that one of the times when backups are sorely
wanted?  when the original flakes or fades and no longer has a valid copy
of the file?

maybe that's a bug fixed in backuppc4?
thank you,
greg

full backup started for directory /; updating partial #1
started full dump, share=/
Running: ssh -qxp993 -oStrictHostKeyChecking=accept-new 192.168.128.234
sudo rsync --server --sender --numeric-ids --perms --owner --group -D
--links >
Xfer PIDs are now 919942
xferPids 919942
Xfer PIDs are now 919942,919943
xferPids 919942,919943
Unexpected call
BackupPC::Xfer::RsyncFileIO->unlink(etc/fonts/conf.d/30-metric-aliases.conf)
Unexpected call BackupPC::Xfer::RsyncFileIO->unlink(etc/sgml/catalog)
Unexpected call
BackupPC::Xfer::RsyncFileIO->unlink(f/etc/fonts/conf.d/30-metric-aliases.conf)
Unexpected call BackupPC::Xfer::RsyncFileIO->unlink(f/etc/sgml/catalog)
f/fk/home/greg/^/usr~local~bin~tmux-master2021-02-08Mon17:51+move.diff:ut:
md4 doesn't match: will retry in phase 1; file removed
Unexpected call
BackupPC::Xfer::RsyncFileIO->unlink(f/removedtoomuch/etc/resolv.conf)
f/fk/home/greg/^/usr~local~bin~vimuf: md4 doesn't match: will retry in
phase 1; file removed
f/fk/home/k/^/usr~local~bin~tmux-master2021-02-08Mon17:51+move.diff:ut: md4
doesn't match: will retry in phase 1; file removed
f/fk/home/k/^/usr~local~bin~vimuf: md4 doesn't match: will retry in phase
1; file removed
f/removedtoomuch/home/k/^/usr~local~bin~tmux-master2021-02-08Mon17:51+move.diff:ut:
md4 doesn't match: will retry in phase 1; file removed
f/removedtoomuch/home/k/^/usr~local~bin~vimuf: md4 doesn't match: will
retry in phase 1; file removed
f/fk/home/greg/^/usr~local~bin~tmux-master2021-02-08Mon17:51+move.diff:ut:
fatal error: md4 doesn't match on retry; file removed
MD4 does't agree: fatal error on #12579
(f/fk/home/greg/^/usr~local~bin~tmux-master2021-02-08Mon17:51+move.diff:ut)
f/fk/home/greg/^/usr~local~bin~vimuf: fatal error: md4 doesn't match on
retry; file removed
MD4 does't agree: fatal error on #12584
(f/fk/home/greg/^/usr~local~bin~vimuf)
f/fk/home/k/^/usr~local~bin~tmux-master2021-02-08Mon17:51+move.diff:ut:
fatal error: md4 doesn't match on retry; file removed
MD4 does't agree: fatal error on #13059
(f/fk/home/k/^/usr~local~bin~tmux-master2021-02-08Mon17:51+move.diff:ut)
f/fk/home/k/^/usr~local~bin~vimuf: fatal error: md4 doesn't match on retry;
file removed
MD4 does't agree: fatal error on #13067 (f/fk/home/k/^/usr~local~bin~vimuf)
f/removedtoomuch/home/k/^/usr~local~bin~tmux-master2021-02-08Mon17:51+move.diff:ut:
fatal error: md4 doesn't match on retry; file removed
MD4 does't agree: fatal error on #62477
(f/removedtoomuch/home/k/^/usr~local~bin~tmux-master2021-02-08Mon17:51+move.diff:ut)
f/removedtoomuch/home/k/^/usr~local~bin~vimuf: fatal error: md4 doesn't
match on retry; file removed
MD4 does't agree: fatal error on #62482
(f/removedtoomuch/home/k/^/usr~local~bin~vimuf)
Done: 93690 files, 74691618577 bytes
full backup complete


[BackupPC-users] rsync FROM backuppc

2022-10-29 Thread gregrwm
i have an old copy of a VM (from before it was migrated to a different
server), and backuppc captured more recent changes, before a problem
occurred on the server the VM has been running on recently.

the catch is that backuppc captures the content of the VM's filesystems,
but that's not quite enough to recreate the VM itself.

so what would be ideal is to run rsync out of what backuppc captured into
the old copy of the VM.

sure i can restore the entire new contents of the VM, and then run rsync to
update the old copy of the VM.

my question is, perhaps there's something more direct that would work?  but
i don't think a 'tarCreate|ssh tar' pipeline can behave quite like rsync.

and yes i'd also welcome any ideas or pointers on how to better organize
backing up KVM VMs.
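[editor's note] The two-step approach mentioned above (restore everything, then rsync into the old copy) can be sketched with BackupPC_tarCreate, the standard extraction tool. The host name "vmhost", the share "/", the backup number, and all paths below are assumptions, not details from this thread; the sketch only builds and prints the commands rather than running them:

```shell
# dry-run sketch of restore-then-rsync; "vmhost", share "/", the backup
# number -1 (most recent), and all paths are assumptions
stage=/var/tmp/vmhost-restore
extract="sudo -u backuppc /usr/share/backuppc/bin/BackupPC_tarCreate -h vmhost -n -1 -s / . | tar -xpf - -C $stage"
sync="rsync -aHAXS --delete $stage/ /mnt/old-vm-root/"
printf '%s\n' "mkdir -p $stage" "$extract" "$sync"
```

Running the printed commands by hand keeps the pool untouched; `-n -1` asks BackupPC_tarCreate for the most recent backup.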


Re: [BackupPC-users] Serious error: last backup directory doesn't exist!!! Need to remove back to last filled backup

2022-06-14 Thread gregrwm
On Tue, Jun 14, 2022 at 5:25 PM G.W. Haywood via BackupPC-users <
backuppc-users@lists.sourceforge.net> wrote:

> Hi there,
>
> On Tue, 14 Jun 2022, gregrwm wrote:
> > ... interrupted BackupPC_dump.  on the next invocation i got:
> > 2022-06-12 21:35:02 Serious error: last backup
> > /var/lib/backuppc/pc/avocado/32 directory doesn't exist!!!  Need to
> remove
> > back to last filled backup
> > 2022-06-12 21:35:02 Deleting backup 14
> > 2022-06-12 21:35:08 Deleting backup 15
> > 2022-06-12 21:35:14 Deleting backup 16
> > 2022-06-12 21:35:20 Deleting backup 17
> > 2022-06-12 21:35:27 Deleting backup 18
> > 2022-06-12 21:35:34 Deleting backup 19
> > 2022-06-12 21:35:41 Deleting backup 22
> > 2022-06-12 21:35:47 Deleting backup 30
> > 2022-06-12 21:36:00 Deleting backup 32
> >
> > wow.  not too robust!  doesn't that seem like an inordinate consequence?
>
> Mr. Kosowsky didn't specifically address the robustness issue so I'll
> chime in here about that.  No, it doesn't seem inordinate if you think
> about how BackupPC manages backups.  The non-filled backups are based
> on a filled backup.  If you don't have that, then backups which are
> based on it are useless so there's no point in keeping them.  I think
> the moral of the story is that if you care about your backups, don't
> do what you did (nor anything like it) without taking precautions.
>
> Having said that I don't generally mess with BackupPC (whether it's in
> the middle of doing something or not).  After a couple of false starts
> (which must have been at least partly my fault, while I was migrating
> from V3 to V4) once I got version 4 settled in it has never put a foot
> wrong backing up dozens of machines, which aren't even all in the same
> country, with tens of terabytes of data.  Occasionally I recover files
> and directories from the backups; it's often much easier than fetching
> them from the backed up machines directly.  I've found that doing this
> makes me more confident of BackupPC.  That then makes it more likely
> that when I need to fetch more files I'll grab backups rather than go
> to the originals.  It gives me a warm fuzzy feeling I suppose, to know
> the recovered backed up data is exactly what I expect it to be, so I'm
> that much more confident that if I needed it because I've managed to
> lose the original then it would be there for me.
>
> I have no axe to grind.  I'm not in any way connected with BackupPC
> development nor with the developers, I'm just a very satisfied user
> and I thought that a message which could be seen as critical needed
> something to balance it.  Of course there will be faults to be fixed
> in any even moderately complex software.  BackupPC is probably a bit
> more than just moderately complex, but I've found it very robust if
> treated with reasonable care.
>

we've been using backuppc3 for many years now, it's still chugging along
and i'm just getting backuppc4 going.  yes i'm quite happy with it and find
it robust.  hence my surprise that an interrupt can have this consequence
in backuppc4.  the 'robust' merit is called on the carpet if a power loss
can result in significant loss of backups.

offhand i'm not really seeing why an interrupt would result in the loss of
a prior backup directory.  i'd be surprised to learn that a prior backup is
ever moved or renamed and thus subject to loss if interrupted, but what
else could it be?  if an existing backup directory is ever renamed or moved
i would consider that a design flaw worth correcting.  i would think it
preferable that once a backup directory gets its number, it keeps it for
its entire lifetime, no renames, no moves.


Re: [BackupPC-users] backuppc4 just idle, no backups

2022-06-14 Thread gregrwm
actually what was going on was i wasn't putting my hosts.pl files in the
right directory.  thanks again for the -v tip.
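[editor's note] For anyone hitting the same wall: a quick way to list where a v4 install is expected to look for per-host overrides, given a hosts file. The `/etc/BackupPC` prefix is an assumption (distros vary); v4 typically reads `$ConfDir/pc/HOST.pl`:

```shell
# print the per-host config path BackupPC v4 is expected to read for each
# host in a hosts file ($ConfDir/pc/HOST.pl; /etc/BackupPC is an assumption)
hosts_to_configs() {
    awk '!/^#/ && NF { print "/etc/BackupPC/pc/" $1 ".pl" }' "$1"
}
```

Usage: `hosts_to_configs /etc/BackupPC/hosts`, then check each printed path exists.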


On Fri, May 6, 2022 at 9:52 PM gregrwm  wrote:

> thank you!  just what i needed!
>
> in backuppc3 i set BackupsDisable to 1 in config.pl, and set it to 0 in
> just the hosts.pl i want to backup.  This apparently doesn't work in
> backuppc4.  Too bad, that was handy.  But whatever, got it going, thanks
> again.
>
>
> On Fri, May 6, 2022 at 7:02 PM Norman J. Goldstein 
> wrote:
>
>> I have found it useful at times to run
>> /usr/share/BackupPC/bin/BackupPC_dump from the command line e.g.
>>
>>BackupPC_dump -f -v CLIENTNAME
>>
>> for a full backup with verbose messages.  Run this as the backuppc user.
>>
>>
>> On 2022-05-06 16:22, gregrwm wrote:
>>
>> thanks for your reply
>> they're all 0 already
>> any other ideas?
>>
>>
>> On Fri, May 6, 2022 at 6:07 PM Norman Goldstein 
>> wrote:
>>
>>> If you have clients with dhcp set to 1 in the /etc/BackupPC/hosts file,
>>> try setting the dhcp to 0.  I had a similar problem, and this worked for my
>>> situation.
>>>
>>>
>>> On 2022-05-06 15:59, gregrwm wrote:
>>>
>>> when i invoke BackupPC_dump it always just says "nothing to do".  why
>>> would it be doing that?
>>>
>>>
>>> On Fri, May 6, 2022 at 4:51 PM gregrwm  wrote:
>>>
>>>> i'm trying to get backuppc4 working on manjaro.  it's up and sends me
>>>> mail like if i remove a host from the hosts file, but it's not backing up
>>>> any hosts.  The logs look mostly normal, mentioning wakeups, nightly, and
>>>> pool cleaning, but there's no mention of even trying to do any backups.
>>>> Any ideas like what to try or what to look for?
>>>>
>>>> $Conf{IncrPeriod} = 0.97;
>>>> $Conf{BackupsDisable} = 0;
>>>> $Conf{BlackoutGoodCnt} = -1;
>>>>


[BackupPC-users] Serious error: last backup directory doesn't exist!!! Need to remove back to last filled backup

2022-06-13 Thread gregrwm
i hadn't prepared my config quite as i'd intended, and expected a rather
long wait, so i interrupted BackupPC_dump.
on the next invocation i got:
2022-06-12 21:35:02 Serious error: last backup
/var/lib/backuppc/pc/avocado/32 directory doesn't exist!!!  Need to remove
back to last filled backup
2022-06-12 21:35:02 Deleting backup 14
2022-06-12 21:35:08 Deleting backup 15
2022-06-12 21:35:14 Deleting backup 16
2022-06-12 21:35:20 Deleting backup 17
2022-06-12 21:35:27 Deleting backup 18
2022-06-12 21:35:34 Deleting backup 19
2022-06-12 21:35:41 Deleting backup 22
2022-06-12 21:35:47 Deleting backup 30
2022-06-12 21:36:00 Deleting backup 32

wow.  not too robust!  doesn't that seem like an inordinate consequence?

(manjaro backuppc 4.4.0-4)


[BackupPC-users] are 2 backuppc hosts safe from each other's activity?

2022-05-14 Thread gregrwm
i have brought up backuppc4 on a new kvm guest, and still also have
backuppc3 on another kvm guest.  but is my assumption correct?  i assume
the backuppc hosts themselves keep the record of which backups were done
last, and when.  i might be wrong tho, maybe that state is stored on each
client as it is backed up?  if i am right, there should be no problem with
the two backuppc hosts taking leapfrog backups from the same set of
clients.  if i am wrong, neither backuppc host could be trusted to gather complete
backups.  can someone confirm?
tia,
greg


Re: [BackupPC-users] backuppc4 just idle, no backups

2022-05-06 Thread gregrwm
thank you!  just what i needed!

in backuppc3 i set BackupsDisable to 1 in config.pl, and set it to 0 in
just the hosts.pl i want to backup.  This apparently doesn't work in
backuppc4.  Too bad, that was handy.  But whatever, got it going, thanks
again.


On Fri, May 6, 2022 at 7:02 PM Norman J. Goldstein 
wrote:

> I have found it useful at times to run
> /usr/share/BackupPC/bin/BackupPC_dump from the command line e.g.
>
>BackupPC_dump -f -v CLIENTNAME
>
> for a full backup with verbose messages.  Run this as the backuppc user.
>
>
> On 2022-05-06 16:22, gregrwm wrote:
>
> thanks for your reply
> they're all 0 already
> any other ideas?
>
>
> On Fri, May 6, 2022 at 6:07 PM Norman Goldstein  wrote:
>
>> If you have clients with dhcp set to 1 in the /etc/BackupPC/hosts file,
>> try setting the dhcp to 0.  I had a similar problem, and this worked for my
>> situation.
>>
>>
>> On 2022-05-06 15:59, gregrwm wrote:
>>
>> when i invoke BackupPC_dump it always just says "nothing to do".  why
>> would it be doing that?
>>
>>
>> On Fri, May 6, 2022 at 4:51 PM gregrwm  wrote:
>>
>>> i'm trying to get backuppc4 working on manjaro.  it's up and sends me
>>> mail like if i remove a host from the hosts file, but it's not backing up
>>> any hosts.  The logs look mostly normal, mentioning wakeups, nightly, and
>>> pool cleaning, but there's no mention of even trying to do any backups.
>>> Any ideas like what to try or what to look for?
>>>
>>> $Conf{IncrPeriod} = 0.97;
>>> $Conf{BackupsDisable} = 0;
>>> $Conf{BlackoutGoodCnt} = -1;
>>>
>>


Re: [BackupPC-users] backuppc4 just idle, no backups

2022-05-06 Thread gregrwm
thanks for your reply
they're all 0 already
any other ideas?


On Fri, May 6, 2022 at 6:07 PM Norman Goldstein  wrote:

> If you have clients with dhcp set to 1 in the /etc/BackupPC/hosts file,
> try setting the dhcp to 0.  I had a similar problem, and this worked for my
> situation.
>
>
> On 2022-05-06 15:59, gregrwm wrote:
>
> when i invoke BackupPC_dump it always just says "nothing to do".  why
> would it be doing that?
>
>
> On Fri, May 6, 2022 at 4:51 PM gregrwm  wrote:
>
>> i'm trying to get backuppc4 working on manjaro.  it's up and sends me
>> mail like if i remove a host from the hosts file, but it's not backing up
>> any hosts.  The logs look mostly normal, mentioning wakeups, nightly, and
>> pool cleaning, but there's no mention of even trying to do any backups.
>> Any ideas like what to try or what to look for?
>>
>> $Conf{IncrPeriod} = 0.97;
>> $Conf{BackupsDisable} = 0;
>> $Conf{BlackoutGoodCnt} = -1;
>>
>


Re: [BackupPC-users] backuppc4 just idle, no backups

2022-05-06 Thread gregrwm
when i invoke BackupPC_dump it always just says "nothing to do".  why would
it be doing that?


On Fri, May 6, 2022 at 4:51 PM gregrwm  wrote:

> i'm trying to get backuppc4 working on manjaro.  it's up and sends me mail
> like if i remove a host from the hosts file, but it's not backing up any
> hosts.  The logs look mostly normal, mentioning wakeups, nightly, and pool
> cleaning, but there's no mention of even trying to do any backups.  Any
> ideas like what to try or what to look for?
>
> $Conf{IncrPeriod} = 0.97;
> $Conf{BackupsDisable} = 0;
> $Conf{BlackoutGoodCnt} = -1;
>


[BackupPC-users] backuppc4 just idle, no backups

2022-05-06 Thread gregrwm
i'm trying to get backuppc4 working on manjaro.  it's up and sends me mail
(e.g. if i remove a host from the hosts file), but it's not backing up any
hosts.  The logs look mostly normal, mentioning wakeups, nightly, and pool
cleaning, but there's no mention of even trying to do any backups.  Any
ideas like what to try or what to look for?

$Conf{IncrPeriod} = 0.97;
$Conf{BackupsDisable} = 0;
$Conf{BlackoutGoodCnt} = -1;


[BackupPC-users] does BackupPC_dump omit anything important?

2022-05-05 Thread gregrwm
at this point i'm using blackout to suspend backuppc scheduling, and via a
cron script invoking BackupPC_dump for each host (right after updating its
list of shares to match its current set of kvm guests, and using
DumpPreShareCmd to snapshot and guestmount the guest storage for each
"share").

this seems like it may be working well, tho i'm curious if as a result
anything of import, such as BackupPC_link, might be getting missed?
Because, before, logs such as /var/lib/backuppc/log/LOG.8.z mentioned
"Running BackupPC_link" for each host, eg:

>2022-04-26 04:06:26 Started incr backup on avocado (pid=1029206,
share=/mnt/^BACKUPPC/g193)
>2022-04-26 05:02:38 Started incr backup on avocado (pid=1029206, share=/)
>2022-04-26 05:13:49 Finished incr backup on avocado
>2022-04-26 05:13:49 *Running BackupPC_link* avocado (pid=1030494)
>2022-04-26 05:14:08 Finished avocado (BackupPC_link avocado)

but now logs such as /var/lib/backuppc/log/LOG.1.z no longer show any
mention of backups invoked via BackupPC_dump.

if BackupPC_link, and/or other things, are being omitted now, what is the
result?

how or what might i look at to see more?
tia,
greg
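[editor's note] If BackupPC_link is indeed skipped when BackupPC_dump is invoked directly (in v3 the daemon normally runs the link step after a dump finishes), one hedged workaround is to ask the daemon to queue the backup via BackupPC_serverMesg, so it schedules both the dump and the subsequent link itself. The host name and bin path below are assumptions; the helper only prints the command:

```shell
# dry-run sketch: queue a backup through the daemon instead of calling
# BackupPC_dump directly, so BackupPC_link still runs afterwards
# ("avocado" and the bin path are assumptions)
queue_backup() {
    # arg: host; the trailing 1 requests a full backup, 0 an incremental
    printf 'sudo -u backuppc /usr/share/backuppc/bin/BackupPC_serverMesg backup %s %s backuppc 1\n' "$1" "$1"
}
queue_backup avocado
```

The daemon then logs the run in LOG as usual, which would also answer the "no mention in the logs" observation.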


[BackupPC-users] VDO

2022-04-13 Thread gregrwm
i'm prepared to try backuppc4, storing to a (rhel8) VDO (block level dedup)
volume, thinking it may dedup significant portions of large files that are
mostly but not completely the same.  is this sensible?  Why or why not?
Beyond just dedup, should i expect better performance with VDO compression,
backuppc compression, both, or neither?  in your answer are you considering
space performance primarily, or time performance primarily?  Please
explain?  Thank you!  i expect a pool size of roughly 1t.


[BackupPC-users] how might i backup the hosts in the order i wish?

2021-10-25 Thread gregrwm
i want certain hosts done first, so they are more likely finished when
things get busy in the morning.

i could probably forsake the backuppc wakeup procedure and invoke either
BackupPC_dump or _serverMesg from a script.

but with a tidbit of insider knowledge maybe i could let backuppc initiate
the backups and still get the order i want.

might there be a magic poke, tweak, or finagle to set the order?  any clues?
thank you,
greg
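[editor's note] Lacking a scheduler knob for ordering, one hedged approach is the script route mentioned above: keep the hosts in an ordered list and queue them through the daemon one after another with BackupPC_serverMesg (the daemon still caps concurrency via $Conf{MaxBackups}, so queue order roughly becomes start order). Host names and the bin path are assumptions; this sketch only prints the queue commands in order:

```shell
# dry-run sketch: emit queue commands in the exact order wanted
# (host names and bin path are assumptions)
ordered_hosts="firsthost secondhost lasthost"
for h in $ordered_hosts; do
    echo "sudo -u backuppc /usr/share/backuppc/bin/BackupPC_serverMesg backup $h $h backuppc 0"
done
```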


[BackupPC-users] guestfish rsync-out

2021-03-20 Thread gregrwm
is there a way to make guestfish rsync-out work with backuppc?

otherwise i guess the way to backup kvm guests is using guestmount
(preceded by snapshot-create-as and followed by blockcommit), which all
works, tho if i could figure out how, i'd refrain from mounting the
guest filesystems on the host.


Re: [BackupPC-users] double hop rsync

2021-03-16 Thread gregrwm
On Tue, Mar 16, 2021 at 10:50 AM Alexander Kobel  wrote:

> Hi Greg,
> On 3/16/21 4:27 PM, gregrwm wrote:
> > On Tue, Mar 16, 2021 at 8:45 AM  backu...@kosowsky.org>> wrote:
> > gregrwm wrote at about 19:59:53 -0500 on Monday, March 15, 2021:
> >  > i'm trying to use a double hop rsync to backup a server that can
> only be
> >  > reached indirectly.  a simple test of a double hop rsync to the
> target
> >  > server seems to work:
> >  >
> >  >   #  sudo -ubackuppc rsync -PHSAXaxe"ssh -xq 192.168.128.11 ssh
> -xq"
> >  > --rsync-path=sudo\ /usr/bin/rsync 192.168.1.243:
> /var/log/BackupPC/.bashrc
> >  > /tmp
> >  > receiving incremental file list
> >  > .bashrc
> >  > 231 100%  225.59kB/s0:00:00 (xfr#1, to-chk=0/1)
> >  >   0#
> >  >
> >  > which demonstrates that the backuppc keys, sudo settings, and
> double hop
> >  > rsync all work.
> >  >
> >  > here's my double hop settings:
> >  > $Conf{RsyncClientCmd} = 'ssh -xq 192.168.128.11 ssh -xq
> 192.168.1.243 sudo
> >  > /usr/bin/rsync $argList+';
> >  > $Conf{ClientNameAlias} = '192.168.128.11';
> >
> > Why don't you try using the 'jump' host option on ssh.
> > -J 192.168.128.11
> >
> > seems like a really good idea.  so i tried:
> >
> > $Conf{RsyncClientCmd} = 'ssh -xqJ192.168.128.11 sudo /usr/bin/rsync
> $argList+';
> > $Conf{ClientNameAlias} = '192.168.1.243';
> >
> > and got:
> > Got remote protocol 1851877475
> > Fatal error (bad version): channel 0: open failed: connect failed: Name
> or service not known
> > stdio forwarding failed
> > Can't write 1298 bytes to socket
> > fileListReceive() failed
> >
> > if you've any ideas how to tweak that and try again i'm eager,
>
> any luck with the ProxyJump config option? I use this in my BackupPC
> user's ~/.ssh/config to keep the BackupPC config as clean as possible.
> See, e.g., https://wiki.gentoo.org/wiki/SSH_jump_host#Multiple_jumps
>
> Probably, in your case it would be something like
>
> Host client
> HostName    192.168.1.243
> ProxyJump   192.168.128.11
>
> HTH,
> Alex
>

and the winning magic incantation is...
$Conf{RsyncClientCmd} = 'ssh -xqJ192.168.128.11 192.168.1.243  sudo
/usr/bin/rsync $argList+';
$Conf{ClientNameAlias} = '127.0.0.1';

thank you alex and @kosowsky


Re: [BackupPC-users] double hop rsync

2021-03-16 Thread gregrwm
On Tue, Mar 16, 2021 at 8:45 AM  wrote:

> gregrwm wrote at about 19:59:53 -0500 on Monday, March 15, 2021:
>  > i'm trying to use a double hop rsync to backup a server that can only be
>  > reached indirectly.  a simple test of a double hop rsync to the target
>  > server seems to work:
>  >
>  >   #  sudo -ubackuppc rsync -PHSAXaxe"ssh -xq 192.168.128.11 ssh -xq"
>  > --rsync-path=sudo\ /usr/bin/rsync 192.168.1.243:
> /var/log/BackupPC/.bashrc
>  > /tmp
>  > receiving incremental file list
>  > .bashrc
>  > 231 100%  225.59kB/s0:00:00 (xfr#1, to-chk=0/1)
>  >   0#
>  >
>  > which demonstrates that the backuppc keys, sudo settings, and double hop
>  > rsync all work.
>  >
>  > here's my double hop settings:
>  > $Conf{RsyncClientCmd} = 'ssh -xq 192.168.128.11 ssh -xq 192.168.1.243
> sudo
>  > /usr/bin/rsync $argList+';
>  > $Conf{ClientNameAlias} = '192.168.128.11';
>
> Why don't you try using the 'jump' host option on ssh.
> -J 192.168.128.11
>

seems like a really good idea.  so i tried:

$Conf{RsyncClientCmd} = 'ssh -xqJ192.168.128.11 sudo /usr/bin/rsync
$argList+';
$Conf{ClientNameAlias} = '192.168.1.243';

and got:
Got remote protocol 1851877475
Fatal error (bad version): channel 0: open failed: connect failed: Name or
service not known
stdio forwarding failed
Can't write 1298 bytes to socket
fileListReceive() failed

if you've any ideas how to tweak that and try again i'm eager,
thank you,
greg


Re: [BackupPC-users] what's the best way to backup qcow2 files (on centos8)?

2021-03-16 Thread gregrwm
On Mon, Mar 15, 2021 at 7:19 PM gregrwm  wrote:

> problem:  blockcommit only works if the guest is running.
>
> so, not a problem with backuppc, but a problem with how to use it.
>
> currently i'm using:
> DumpPreShareCmd:
> virsh snapshot-create-as   --atomic --no-metadata --disk-only
> --diskspec=
> mkdir -p /mnt/point
> guestmount -iroallow_root -a /mnt/point
>
> DumpPostShareCmd:
> guestunmount /mnt/point
> virsh blockcommit   --active --pivot --delete
>
> but blockcommit only works if the guest is running.
>
> is there an approach that works, whether the guest is running, stopped, or
> stops or starts during the backup?
>

methods i'm not considering:
just copy the qcow2 files.
copy files out from inside a running vm.
stop the vm and then copy files.

"QCOW2 backing files & overlays"
(https://kashyapc.fedorapeople.org/virt/lc-2012/snapshots-handout.html)
...discusses internal snapshots, which to me raises the question, how might
i mount (or guestmount) an internal snapshot so i can copy files out of it?


[BackupPC-users] double hop rsync

2021-03-15 Thread gregrwm
i'm trying to use a double hop rsync to backup a server that can only be
reached indirectly.  a simple test of a double hop rsync to the target
server seems to work:

  #  sudo -ubackuppc rsync -PHSAXaxe"ssh -xq 192.168.128.11 ssh -xq"
--rsync-path=sudo\ /usr/bin/rsync 192.168.1.243:/var/log/BackupPC/.bashrc
/tmp
receiving incremental file list
.bashrc
231 100%  225.59kB/s0:00:00 (xfr#1, to-chk=0/1)
  0#

which demonstrates that the backuppc keys, sudo settings, and double hop
rsync all work.

here's my double hop settings:
$Conf{RsyncClientCmd} = 'ssh -xq 192.168.128.11 ssh -xq 192.168.1.243 sudo
/usr/bin/rsync $argList+';
$Conf{ClientNameAlias} = '192.168.128.11';

fwiw my soon-to-be-decommissioned prior backuppc server is still backing up
the target server without issue.  it's still located where it can reach the
target server directly.  it uses:
$Conf{RsyncClientCmd} = 'ssh -xq $host sudo /usr/bin/rsync $argList+';

but while the new backuppc is working fine for servers it can reach
directly, for the double hop it says:  "fileListReceive failed"

any ideas?


[BackupPC-users] what's the best way to backup qcow2 files (on centos8)?

2021-03-15 Thread gregrwm
problem:  blockcommit only works if the guest is running.

so, not a problem with backuppc, but a problem with how to use it.

currently i'm using:
DumpPreShareCmd:
virsh snapshot-create-as   --atomic --no-metadata --disk-only
--diskspec=
mkdir -p /mnt/point
guestmount -iroallow_root -a /mnt/point

DumpPostShareCmd:
guestunmount /mnt/point
virsh blockcommit   --active --pivot --delete

but blockcommit only works if the guest is running.

is there an approach that works, whether the guest is running, stopped, or
stops or starts during the backup?
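one shape that might come closer (an untested sketch; the domain name, image path, and snapshot name are placeholder values): wrap the pre-share step in a script that branches on virsh domstate, so a stopped guest's qcow2 is mounted directly and only a running guest gets the snapshot/blockcommit treatment.  it still doesn't cover a guest that starts or stops mid-backup:

```shell
#!/bin/sh
# sketch of a DumpPreShareCmd wrapper; DOM and IMG are placeholders
DOM=guest1
IMG=/var/lib/libvirt/images/guest1.qcow2    # base qcow2 of the guest
mkdir -p /mnt/point
if [ "$(virsh domstate "$DOM")" = running ]; then
    # running guest: disk-only snapshot so the base image stops changing
    virsh snapshot-create-as "$DOM" bpcsnap --atomic --no-metadata --disk-only
fi
# stopped or snapshotted, the base image is now quiescent; mount it read-only
guestmount -iroallow_root -a "$IMG" /mnt/point
```

the post-share wrapper would mirror it: guestunmount /mnt/point always, and the blockcommit/pivot only when domstate is still running.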


Re: [BackupPC-users] BackupPC/XS.pm

2020-03-20 Thread gregrwm
trying to get started with backuppcfs, found libbackuppc-xs-perl in ubuntu
repos, but now wondering where to find BackupPC/DirOps.pm:

  0#  apt-get install libbackuppc-xs-perl
...
  0#  PERLLIB=/usr/share/backuppc/lib ./backuppcfs.pl /mnt/b
Can't locate BackupPC/DirOps.pm in @INC (you may need to install the
BackupPC::DirOps module) (@INC contains: /usr/local/BackupPC/lib
/usr/share/backuppc/lib /etc/perl
/usr/local/lib/x86_64-linux-gnu/perl/5.26.1
/usr/local/share/perl/5.26.1 /usr/lib/x86_64-linux-gnu/perl5/5.26
/usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl/5.26 /usr/share/perl/5.26
/usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base) at ./
backuppcfs.pl line 68.
BEGIN failed--compilation aborted at ./backuppcfs.pl line 68.
*  2#  apt-get install libbackuppc-dirops-perl
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package libbackuppc-dirops-perl
  100#  apt-cache search backuppc
backuppc - high-performance, enterprise-grade system for backing up PCs
libfile-rsyncp-perl - Perl based implementation of an Rsync client
libio-dirent-perl - Perl module for accessing dirent structs returned by
readdir
hobbit-plugins - plugins for the Xymon network monitor
libbackuppc-xs-perl - Perl module for BackupPC
nagios-plugins-contrib - Plugins for nagios compatible monitoring systems
  0#


On Fri, Mar 20, 2020 at 10:13 AM gregrwm  wrote:

> trying to get started with backuppcfs.  what and where is BackupPC/XS.pm?
> it's not in dpkg-query -L backuppc.  is it because the version of
> backuppcfs.pl i have is for backuppc4?  and not compatible with
> backuppc3?  if so is there a version for backuppc3?
>
>   0#  PERLLIB=/usr/share/backuppc/lib ./backuppcfs.pl /mnt/b
> Can't locate BackupPC/XS.pm in @INC (you may need to install the
> BackupPC::XS module) (@INC contains: /usr/local/BackupPC/lib
> /usr/share/backuppc/lib /etc/perl
> /usr/local/lib/x86_64-linux-gnu/perl/5.26.1 /usr/local/share/perl/5.26.1
> /usr/lib/x86_64-linux-gnu/perl5/5.26 /usr/share/perl5
> /usr/lib/x86_64-linux-gnu/perl/5.26 /usr/share/perl/5.26
> /usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base) at ./
> backuppcfs.pl line 67.
> BEGIN failed--compilation aborted at ./backuppcfs.pl line 67.
>   2#  dpkg-query -W backuppc
> backuppc3.3.1-4ubuntu1
>   0#  cat /etc/lsb-release
> DISTRIB_ID=Ubuntu
> DISTRIB_RELEASE=18.04
> DISTRIB_CODENAME=bionic
> DISTRIB_DESCRIPTION="Ubuntu 18.04.3 LTS"
>   0#
>
> tia,
> greg
>


[BackupPC-users] BackupPC/XS.pm

2020-03-20 Thread gregrwm
trying to get started with backuppcfs.  what and where is BackupPC/XS.pm?
it's not in dpkg-query -L backuppc.  is it because the version of
backuppcfs.pl i have is for backuppc4?  and not compatible with backuppc3?
if so is there a version for backuppc3?

  0#  PERLLIB=/usr/share/backuppc/lib ./backuppcfs.pl /mnt/b
Can't locate BackupPC/XS.pm in @INC (you may need to install the
BackupPC::XS module) (@INC contains: /usr/local/BackupPC/lib
/usr/share/backuppc/lib /etc/perl
/usr/local/lib/x86_64-linux-gnu/perl/5.26.1 /usr/local/share/perl/5.26.1
/usr/lib/x86_64-linux-gnu/perl5/5.26 /usr/share/perl5
/usr/lib/x86_64-linux-gnu/perl/5.26 /usr/share/perl/5.26
/usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base) at ./
backuppcfs.pl line 67.
BEGIN failed--compilation aborted at ./backuppcfs.pl line 67.
  2#  dpkg-query -W backuppc
backuppc3.3.1-4ubuntu1
  0#  cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.3 LTS"
  0#

tia,
greg


Re: [BackupPC-users] Digest::MD5 vs md5sum

2015-09-16 Thread gregrwm
>
> Where did you get the idea to 4b $s? You might try echo -n ning it
> instead...  Hope that helps. Not sure what you are actually trying to do,
> though.


in order to copy my backups through a narrow pipe i just selected all the
most recent full backups and sent them via rsync.  later it became
expedient to make the new location the backuppc host, so i then wanted to
(re)create cpool on the new backuppc host.  unfortunately
BackupPC_fixLinks.pl wasn't quite working.  it was presuming that any
multiply linked file was already in the pool.  well my rsync copy preserved
quite a lot of hardlinks, so that clearly wasn't true for me.  i did
eventually work out a simple fix for BackupPC_fixLinks.pl and will be glad
to share it if anyone's interested.

meanwhile i was looking at whipping up NewFileList files so i could use
BackupPC_link, hence i was trying to work out how to recompute the custom
backuppc md5, using this post:

Re: [BackupPC-devel] Hash (MD4?) Algorithm used for Pool
> From: Craig Barratt  - 2005-08-19 09:23:30
>

> Roy Keene writes:
> >  Can you describe what is hashed and using which algorithm is used
> > to determine the pool hash name ?
>

> Sorry about the delay in replying - I'm on vacation this week.
>
> It's a little arcane, but here it is.  The MD5 digest is used
> on the following data:
>
>- for files <= 256K we use the file size and the whole file, ie:
>
> MD5([4 byte file size, file contents])
>
>- for files <= 1M we use the file size, the first 128K and
>  the last 128K.
>- for files > 1M, we use the file size, the first 128K and
>  the 8th 128K (ie: the 128K up to 1MB)...
>

> One thing that is not clear is what perl does when the fileSize
> is bigger than 4GB.  In particular, we start off with:
>
> $md5->add($fileSize);
>
> I suspect that this will be the real file size modulo 2^32 (ie: the
> lower 4 bytes of the file size).


so that led me to presume i should represent the size as 4 bytes containing
32bits of binary.  be that as it may, you were correct, it works with the
size as a decimal string, neither padded nor truncated.  i verified it with
files of various sizes:

>$ alias zcat=/usr/share/?ackup??/bin/BackupPC_zcat
>$ alias li=ls' -aFqldi --color --time-style=+%F" %a "%T'
>$ um()(s=$( cat $1|wc -c) m=$((echo -n $s; cat $1|sf)|md5sum);echo $(li $1) $m s=$s)  #li & md5 of uncompressed argfile
>$ zm()(s=$(zcat $1|wc -c) m=$((echo -n $s;zcat $1|sf)|md5sum);echo $(li $1) $m s=$s)  #li & md5 of  compressed argfile
>$ sf()if [ $s -le $((256*1024)) ];then cat  #select filedata for md5
>>   else head -c1M|(head -c128K;tail -c128K)
>>   fi
>$ um /etc/papersize
>349898 -rw-r--r-- 1 root root 3 2014-01-01 Wed 17:39:09 /etc/papersize
7a59e82651106239413a38eb30735991 - s=3
>$ zm cpool/7/a/5/7a59e82651106239413a38eb30735991
>19720838 -rw-r- 41 backuppc backuppc 11 2015-04-23 Thu 02:30:49
cpool/7/a/5/7a59e82651106239413a38eb30735991
7a59e82651106239413a38eb30735991 - s=3
>$ zm cpool/4/6/0/46052bcabfe39626ccbcee2b709ce1a8
>7620687 -rw-r- 2 backuppc backuppc 1287 2014-01-04 Sat 03:15:17
cpool/4/6/0/46052bcabfe39626ccbcee2b709ce1a8
46052bcabfe39626ccbcee2b709ce1a8 - s=1276
>$ zm cpool/4/e/2/4e2ae5ba88b4a31a9995b6d7bbee9ca6
>22095546 -rw-r- 4 backuppc backuppc 7615 2008-04-05 Sat 19:10:45
cpool/4/e/2/4e2ae5ba88b4a31a9995b6d7bbee9ca6
4e2ae5ba88b4a31a9995b6d7bbee9ca6 - s=43116
>$ zm cpool/4/6/0/4603834cfc3e7323a83c54d542d191a8
>5738140 -rw-r- 19 backuppc backuppc 126642 2015-04-23 Thu 02:06:36
cpool/4/6/0/4603834cfc3e7323a83c54d542d191a8
4603834cfc3e7323a83c54d542d191a8 - s=641020
>$ zm cpool/4/e/2/4e226deede40ab1199eb2ebdbf220995
>7825485 -rw-r- 3 backuppc backuppc 202772654 2010-01-22 Fri 19:31:57
cpool/4/e/2/4e226deede40ab1199eb2ebdbf220995
4e226deede40ab1199eb2ebdbf220995 - s=204085248
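restating the verified recipe as one self-contained function (a sketch assuming GNU coreutils; bpc_digest and the /tmp path are illustrative names, not backuppc tools):

```shell
# compute the backuppc v3 pool digest of an uncompressed file
bpc_digest()(
  f=$1 s=$(($(wc -c <"$f")))          # uncompressed size, as a decimal string
  { printf %s "$s"                    # prepend the size -- NOT 4 binary bytes
    if [ "$s" -le $((256*1024)) ]; then
      cat "$f"                        # <=256K: whole file
    else
      # first 128K plus last 128K of the first 1M; for files in (256K,1M]
      # that is the last 128K, for files >1M it is the 8th 128K block
      head -c1M "$f" | (head -c128K; tail -c128K)
    fi
  } | md5sum | cut -d' ' -f1
)
printf 'a4\n' >/tmp/papersize.copy    # same 3 bytes as /etc/papersize above
bpc_digest /tmp/papersize.copy        # transcript above shows 7a59e82651106239413a38eb30735991
```

the single else branch handles both the (256K,1M] and >1M cases, since the last 128K of the first 1M is exactly the 8th 128K block for large files.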


[BackupPC-users] Digest::MD5 vs md5sum

2015-09-15 Thread gregrwm
the following commands demonstrate that either Digest::MD5 and gnu md5sum
are incompatible, or i haven't got the backuppc md5 formula quite right.
can anyone set me straight?  the commands below merely use the filesize,
and the whole file since it's <256K, feed both to md5sum, and show the
result adjacent the filename.  the below shows /etc/papersize and its
poolfile:

$ alias zcat=/usr/share/?ackup??/bin/BackupPC_zcat
> $ alias li=ls' -aFqldi --color --time-style=+%F" %a "%T'
> $ 1b()(printf \\x$(printf %x $1))
> $ 2b()(1b $(($1  /  256));1b $(($1%  256)))
> $ 4b()(2b $(($1%2**32/65536));2b $(($1%65536))) #write rightmost 4 bytes
> $ 4b 513 |xxd   #4b sanity tests
> 00000000: 0000 0201
> $ 4b 65539   |xxd
> 00000000: 0001 0003
> $ 4b $((2**32-7))|xxd
> 00000000: ffff fff9
> $ 4b $((2**32+9))|xxd
> 00000000: 0000 0009
> $ md5sum --version|head -1
> md5sum (GNU coreutils) 8.13
> $ um()(s=$( cat $1|wc -c) m=$((4b $s; cat $1|sf)|md5sum);echo $(li $1) $m
> s=$s) #li & md5 of uncompressed argfile
> $ zm()(s=$(zcat $1|wc -c) m=$((4b $s;zcat $1|sf)|md5sum);echo $(li $1) $m
> s=$s) #li & md5 of  compressed argfile
> $ sf()if [ $s -le $(( 256*1024)) ];then
> cat #select filedata for md5
> >   elif [ $s -le $((1024*1024)) ];then
> >  dd bs=128K count=1
> >  tail -c128K
> >   else
> >  dd bs=128K count=1
> >  dd bs=128K skip=6 count=1
> >   fi
> $ cat /etc/papersize
> a4
> $ xxd /etc/papersize
> 00000000: 6134 0a  a4.
> $ um /etc/papersize
> 349898 -rw-r--r-- 1 root root 3 2014-01-01 Wed 17:39:09 /etc/papersize
> 12cf0b6059ccc2701eb9e55277f161c2 - s=3
> $ zm /var/lib/backuppc/pc/127.0.0.1/0/f%2f/fetc/fpapersize
> 19720838 -rw-r- 40 backuppc backuppc 11 2015-04-23 Thu 02:30:49
> /var/lib/backuppc/pc/127.0.0.1/0/f%2f/fetc/fpapersize
> 12cf0b6059ccc2701eb9e55277f161c2 - s=3
> $ zm /var/lib/backuppc/cpool/7/a/5/*991
> 19720838 -rw-r- 40 backuppc backuppc 11 2015-04-23 Thu 02:30:49
> /var/lib/backuppc/cpool/7/a/5/7a59e82651106239413a38eb30735991
> 12cf0b6059ccc2701eb9e55277f161c2 - s=3
>


[BackupPC-users] BackupPC_fixLinks.pl

2015-04-22 Thread gregrwm
i've used BackupPC_fixLinks.pl in the past with success, on RHEL.  i'm
trying to use it again, this time on ubuntu:

 # sudo -ubackuppc -H ./BackupPC_fixLinks.pl -f -q
 Use of qw(...) as parentheses is deprecated at
 /usr/share/backuppc/lib/BackupPC/Storage/Text.pm line 302.
 Use of qw(...) as parentheses is deprecated at
 /usr/share/backuppc/lib/BackupPC/Lib.pm line 1425.
 String found found where operator expected at ./BackupPC_fixLinks.pl line 243,
 near "warnerr "Can't read : $File::Find::name\n""
 (Do you need to predeclare warnerr?)
 String found where operator expected at ./BackupPC_fixLinks.pl line 257,
 near "warnerr "Can't stat: $File::Find::name\n""
 (Do you need to predeclare warnerr?)
 String found where operator expected at ./BackupPC_fixLinks.pl line 261,
 near "warnerr "Hole in pool chain at $root$prevsuffix"
 (Do you need to predeclare warnerr?)
 String found where operator expected at ./BackupPC_fixLinks.pl line 269,
 near "warnerr "Parent not a file or unreadable: $File::Find::dir/$parent\n""
 (Do you need to predeclare warnerr?)
 Global symbol "%Conf" requires explicit package name at
 ./BackupPC_fixLinks.pl line 55.
 Global symbol "$dryrun" requires explicit package name at
 ./BackupPC_fixLinks.pl line 90.
 Global symbol "$dryrun" requires explicit package name at
 ./BackupPC_fixLinks.pl line 94.
 Global symbol "%Conf" requires explicit package name at
 ./BackupPC_fixLinks.pl line 100.
 Global symbol "%Conf" requires explicit package name at
 ./BackupPC_fixLinks.pl line 224.
 Global symbol "$dryrun" requires explicit package name at
 ./BackupPC_fixLinks.pl line 232.
 syntax error at ./BackupPC_fixLinks.pl line 243, near "warnerr "Can't read
 : $File::Find::name\n"
 syntax error at ./BackupPC_fixLinks.pl line 257, near "warnerr "Can't
 stat: $File::Find::name\n"
 syntax error at ./BackupPC_fixLinks.pl line 261, near "warnerr "Hole in
 pool chain at $root$prevsuffix"
 syntax error at ./BackupPC_fixLinks.pl line 269, near "warnerr "Parent not
 a file or unreadable: $File::Find::dir/$parent\n"
 ./BackupPC_fixLinks.pl has too many errors.


any possible clues about an updated or ported version somewhere?


[BackupPC-users] fulls consistently align

2015-04-20 Thread gregrwm
iiuc backuppc currently has no feature to keep several or many full
backups from consistently aligning on the same day.  indeed it's quite
likely that many full backups will consistently align on the same day,
eg if the hosts were added on the same day and follow the same schedule.

i prefer that the load/time of doing full backups be dispersed from
being aligned on the same day.  currently any such dispersion must be
contrived manually, and either gets lost whenever downtime happens, or
requires separate config files for each host/filesystem.

so if there's an enhancement queue i suggest an option to specify if
full backups are desired on certain days, or if it's desired that they
are dispersed from all occurring together.

for an example, if 4 hosts are defined, and if fulls are preferred on
sundays, and if fulls are desired every 2 weeks, it would be nicest if
the sizes of the last full backups were noted, the smallest and largest
full backups done on one sunday, and the remaining full backups on the
next sunday, and it remained/returned to being so even after any
downtime.
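in the meantime the only dispersal i know of is manual, eg slightly different per-host $Conf{FullPeriod} values so fulls drift apart instead of staying aligned (the values below are made-up examples):

```perl
# per-host overrides, eg hostA.pl / hostB.pl (illustrative values):
$Conf{FullPeriod} = 13.7;   # hostA: fulls roughly every 2 weeks
$Conf{FullPeriod} = 14.3;   # hostB: drifts ~0.6 day per cycle relative to hostA
```

but as noted above, that's contrived manually and doesn't survive downtime.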



Re: [BackupPC-users] scheduler ignores BackupPC_dump

2013-10-13 Thread gregrwm
  i want a backup of my server as of now, so i start one with
BackupPC_dump.
  before it finishes, the scheduler starts another one, which executes
  simultaneously.  seems like trouble.

 The server/scheduler doesn't know about programs you start from the
 command line.  If you are doing a one-off manually, why not just use
 the web interface to start a backup?   If you need to do it from cron
 or some other program you can use BackupPC_serverMesg to tell the
 server what you want it to do:


http://sourceforge.net/apps/mediawiki/backuppc/index.php?title=ServerMesg_commands

thank you les, i'll try that next time.

as for BackupPC_dump, either it ought to feed back the same activity
awareness as BackupPC_serverMesg, or its documentation ought to warn of the
risk and advise against its use.


[BackupPC-users] scheduler ignores BackupPC_dump

2013-10-12 Thread gregrwm
i want a backup of my server as of now, so i start one with BackupPC_dump.
before it finishes, the scheduler starts another one, which executes
simultaneously.  seems like trouble.


Re: [BackupPC-users] scheduler ignores BackupPC_dump

2013-10-12 Thread gregrwm
 i want a backup of my server as of now, so i start one with
 BackupPC_dump.  before it finishes, the scheduler starts another one, which
 executes simultaneously.  seems like trouble.


(another simultaneous backup of the same server (localhost))


Re: [BackupPC-users] BackupPC Pool synchronization?

2013-03-02 Thread gregrwm

  i'm using a simple procedure i cooked up to maintain a third copy at a
  third physical location using as little bandwidth as possible.  it simply
  looks at each pc/*/backups, selects the most recent full and most recent
  incremental (plus any partial or /new), and copies them across the wire,
  together with the most recently copied full & incremental set
  [...]
  that's already there (and not bothering with the cpool), which, for me, is
  a happily sized set of hardlinks that rsync can actually manage (ymmv).
  [...]  if it's of interest i could share it.

 I think this fills a useful use case, so yeah I would say send it to
 the mailing list.


#currently running as pull, tho can run as push with minor mods:
#(root bash)
rbh=remote.backuppc.host
b=/var/lib/BackupPC/pc
cd /local/third/copy
sb='sudo -ubackuppc'
ssp="$sb ssh -p"                    #nonstandard ssh port
ssb="$ssp $rbh cd $b&&"
from_to=($rbh:$b/* .)
fob=$ssb
df=($(df -m .))
prun=                               #prun=--delete-excluded to prune /local/third/copy down to most recent backups only
echo df=${df[10]} prun=$prun        #show current filespace and prun setting
[ ! -s iMRFIN ]&&{ touch iMRFIN ||exit $?;}         #most recent finished set
[ ! -s iMRUNF ]&&{ touch iLRUNF ||exit $?;}||{ cat iMRUNF>>iLRUNF||exit $?;}    #most recent and less recent unfinished sets
$fob 'echo " --include=*/new"       #any unfinished backups
   for m in */backups;do unset f i  #look at all pc/*/backups files
   while read -r r;do r=($r)
   [[ ${r[1]} = full ]]&&fu[f++]=$r
   [[ ${r[1]} = incr ]]&&in[i++]=$r
   [[ ${r[1]} = partial ]]&&echo " --include=${m%backups}$r"    #any incomplete backups
   done <$m
   [[ $f -gt 0 ]]&&echo " --include=${m%backups}${fu[f-1]}"     #most recent full
   [[ $i -gt 0 ]]&&echo " --include=${m%backups}${in[i-1]}"     #most recent incremental
   done' >|iMRUNF ||echo badexit;head -99 i*    #show backup sets included for transfer
rc=255;while [[ $rc = 255 ]];do date            #reconnect if 255 (connection dropped)
   #note some special custom excludes are on a separate line
   rsync -qPHSae"$ssp" --rsync-path='sudo rsync' $(cat iMRFIN iLRUNF iMRUNF) $prun --exclude=/*/*/ \
  --exclude=fNAVupdate --exclude=fDownloads --exclude=\*Personal --exclude='*COPY of C*' \
  ${from_to[@]}
   rc=$?;echo rc=$rc;if [ $rc = 0 ];then mv iMRUNF iMRFIN;rm iLRUNF;fi;done;df -m .


Re: [BackupPC-users] BackupPC Pool synchronization?

2013-03-01 Thread gregrwm
i'm using a simple procedure i cooked up to maintain a third copy at a
third physical location using as little bandwidth as possible.  it simply
looks at each pc/*/backups, selects the most recent full and most recent
incremental (plus any partial or /new), and copies them across the wire,
together with the most recently copied full & incremental set (plus any
incompletely copied sets), using rsync, with its hardlink copying
feature.  thus my third location has a copy of the most recent (already
compressed) pc/ tree data, using rsync to avoid copying stuff over the wire
that's already there (and not bothering with the cpool), which, for me, is
a happily sized set of hardlinks that rsync can actually manage (ymmv).  i
have successfully used this together with a script to recreate the cpool
if/when necessary.  if it's of interest i could share it.


Re: [BackupPC-users] wakeup command?

2011-07-07 Thread gregrwm
  On 6/23/2011 3:59 PM, gregrwm wrote:
   is there a command that triggers the equivalent of a wakeup?  normally i
   only want 1 wakeup per day, yet for special circumstances i often find
   myself editing in a wakeup a couple minutes hence and triggering a reload.
 
 Les Mikesell wrote on 2011-06-23 16:19:52 -0500 [Re: [BackupPC-users] wakeup 
 command?]:
  Normally you'd have moderately frequent wakeups where the actual
  scheduling of the runs is controlled by other settings (which are
  checked at each wakeup).  Is there some reason that is a problem?

generally i want backups to run at one specific time only, but i want
specifically requested backups to be allowed anytime.

On Thu, Jun 23, 2011 at 21:15, Holger Parplies wb...@parplies.de wrote:
 In any case, no there is no command that triggers the equivalent of a wakeup,
 but the part you are probably interested in - running backups which are due -
 can be triggered with

        BackupPC_serverMesg backup all

ty!
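for cron use that would look like (path as in a debian-style install; adjust for yours):

```shell
# ask the running server to start whatever backups are due, as a wakeup would
sudo -u backuppc /usr/share/backuppc/bin/BackupPC_serverMesg backup all
```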



[BackupPC-users] wakeup command?

2011-06-23 Thread gregrwm
is there a command that triggers the equivalent of a wakeup?  normally i only 
want 1 wakeup per day, yet for special circumstances i often find myself 
editing in a wakeup a couple minutes hence and triggering a reload.



Re: [BackupPC-users] BackupPC_dump works but Unable to read 4 bytes at wakeup

2011-06-23 Thread gregrwm
some of my machines don't respond to ping, i replaced it with true in the
config.


[BackupPC-users] BackupPC_dump works but Unable to read 4 bytes at wakeup

2011-06-22 Thread gregrwm
backuppc here backs up localhost, 2 more boxes on the lan, and a centos VPS 
over the internet.  i just upgraded the 3 local boxes to natty (this all worked 
under maverick).

at wakeuptime, only localhost and the centos VPS succeed.  the 2 other local 
boxes log "Unable to read 4 bytes".  but they succeed via 
/usr/share/backuppc/bin/BackupPC_dump.  so i don't get it.  what could be wrong?



Re: [BackupPC-users] BackupPC_dump works but Unable to read 4 bytes at wakeup

2011-06-22 Thread gregrwm
oops, sorry, one had a filesystem that wasn't mounted, and the other
was off, and no, they don't succeed to run backuppc_dump in that
condition!
