> The system got itself into this state from a standard yum update.
That's why you want to stick to all packaged modules whenever
possible. Over time, dependencies can change and the packaged
versions will update together. You can probably update a cpan module
to the correct version manually bu
d issues depending on what you have connected and
your cabling.
-
Les Mikesell
[email protected]
___
BackupPC-users mailing list
[email protected]
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https:
e you'll probably need to
interact with the database servers with commands backuppc can send to
get dumps that won't change during the copy.
-- Les Mikesell
[email protected]
up using BackupPC? Or am I
> missing something?
VSS gives you a frozen snapshot that you can copy without contention
or worry about changes. It doesn't protect against future
corruption or other problems with that disk or filesystem.
--
Les
e the place to start.
--
Les Mikesell
[email protected]
e interferes with wifi -
but you'd want to use ethernet for backups anyway.
--
Les Mikesell
[email protected]
ill be that helpful. One thing to
consider, though, is that the tiny CPU fans that some of the kits
provide are remarkably noisy. I ended up swapping with a fanless
Flirc case but you might want to check some reviews or youtube
demonstrations before picking anything.
--
Les Mikesell
but there are others and you can find some
YouTube demonstrations/reviews of them.
--
Les Mikesell
[email protected]
lly need to know anything to build
your own package if that is ever necessary.
--
Les Mikesell
[email protected]
going to be too complicated for most casual users to
tackle on their own.
--
Les Mikesell
[email protected]
On Mon, Jan 11, 2021 at 10:49 AM Jan Stransky
wrote:
>
> Sorry, I did not read full thread... But I use a Docker image, that is
> very easy at home.
Do you lose any efficiency between the docker image and the mounted storage?
--
Les Mikesell
lesmikes...@
rpret it.
--
Les Mikesell
[email protected]
o a whole lot of reading before it finds something that
doesn't match what the server already has.
--
Les Mikesell
[email protected]
ntent to ellipses that you can ignore or
expand that it doesn't really matter whether the new part is at the
top or bottom. But yes if you need to reply to different parts it
should be interwoven.
-- Les Mikesell
[email protected]
o it doesn't
matter much whether the new part was on the top or bottom. And when
you reply, if you expand the old part it shows as >prefixed if you
want to interleave your reply or trim the quoted part.
-- Les Mikesell
[email protected]
asional
> backuppc gremlin of disappearing files in that I can find the cpool
> file and revert it from past snapshots.
Are you sure that the disappearing files aren't a quirk of btrfs in
the first place?
--
Les Mikesell
[email protected]
've always found its tools
to be good at repairs (I had trouble with XFS long ago, back when its
speed of creating/deleting files made it seem worth using).
--
Les Mikesell
[email protected]
ad of FUSE or questionable legality.
--
Les Mikesell
[email protected]
1/13/zfs_linux/
Torvalds declared: "Don't use ZFS. It's that simple."
So, there's always freebsd...
--
Les Mikesell
[email protected]
I don't think it was very
well publicized. If you connect 2 drives configured as time machine
backups and keep them both connected it will automatically alternate
between them. Since it runs every hour, I think this is a more robust
scheme than using raid for redundancy with the things that
fully! – be the
> solution.
Isn't rrsync just a perl wrapper to start rsync but ensure it is only
accessing a permitted subdirectory? If so it should be a matter of
tweaking the command to start it to be compatible.
--
Les Mikesell
[email protected]
On Wed, Feb 10, 2021 at 1:58 PM wrote:
>
> 4. Further, along that line, while sudoer has been well-tested,
About that
https://www.helpnetsecurity.com/2021/01/27/cve-2021-3156/
--
Les Mikesell
[email protected]
re sensitive to out-of-spec
cables or something and auto-switch down.
--
Les Mikesell
[email protected]
em into
subversion or cvs (perhaps editing out any timestamps inserted by the
retrieval process so only 'real' changes will show as a new commit
with differences) and having a viewvc wrapper to make the changes easy
to spot.
--
Les Mikesel
ry configurable and will
notify you about all kinds of things. I used it very extensively, but
again it was several years ago - I'm retired now.
--
Les Mikesell
[email protected]
f
tiny files that could feasibly be archived as a tar or zip file
instead of stored separately?
Also, rsync versions newer than 3.x are supposed to handle it better.
Is your server side extremely old?
https://rsync.samba.org/FAQ.html#4
--
Les Mikesell
lesmikes...@gma
NS or puts the IP in ClientAliasName he still needs to rename
the existing backups made with the IP as the hostname. Does that take
more than just changing the host and renaming the directory under pc/
to match?
--
Les Mikesell
lesmike
lvable name or IP address that will work on your network to reach
it. It can also be used if you want to split a single large host into
separate runs with subsets of directories. You can make it appear
like separate hosts for scheduling while still pointing to the same
actual target.
--
Les Mikes
et using netbios
names it would be the client's own idea of its name that matters.
--
Les Mikesell
[email protected]
you only
include specific directories you may miss later additions or changes
on the host.
--
Les Mikesell
[email protected]
expectations.
I think I recall something about the third rsync backup getting a
speedup too, where the block checksums are saved on the 2nd run after a
file has been copied. Not sure if that still applies to v4.
--
Les Mikesell
[email protected]
u mentioned 'which' system you
are talking about. I'd guess it was some linux distribution, but
their packaging and updates work differently.
--
Les Mikesell
[email protected]
proach for most people will be
to install the packages built for your linux distribution so it will
stay updated with the rest of the system along with its dependencies.
And then the final configuration will depend somewhat on the package
and distribution defaults as well as your local clients.
--
escue-data-recovery-services.product.100458004.html
I'm not even sure you have to be a member to order it online. I've
generally had better luck with Seagate than WD but I think they are
mostly the same these days.
--
Les Mikesell
[email protected]
ting the backuppc pool on NFS.
--
Les Mikesell
[email protected]
e:
https://kifarunix.com/install-backuppc-on-debian-11/
--
Les Mikesell
[email protected]
d miss
new/changed/moved files in an incremental but get them in a full.
--
Les Mikesell
[email protected]
be to use windows VSS snapshots. You can search
the mail list for other discussions but there is a link here to a tool
to do that.
https://sourceforge.net/p/backuppc/mailman/message/37189039/
--
Les Mikesell
[email protected]
y
not be large but most tools to copy it will act as though the empty
parts were filled with nulls. Rsync might handle them these days but
may still take the time to send the stream of nulls. But in any case
they are rarely used on Windows.
--
at least with the
rsync back then and slower CPUs. It could be that rsync tried to
write them all at the other end too.
--
Les Mikesell
[email protected]
ks with the old drive mounted?
Then on the odd chance that you need something from it, just fire up
that VM again.
--
Les Mikesell
[email protected]
On Thu, Aug 4, 2022 at 11:36 AM backuppc--- via BackupPC-users
wrote:
>
>
> Right, I guess I should have mentioned that I don't trust that old HDD
> anymore. Plus keeping backups of two old clients that only exist in BackupPC
> has been a bad idea because if my backups get corrupted or deleted,
sk operation now has to run the whole contents
of each file over the network.
--
Les Mikesell
[email protected]
umentation would be a helpful thing to do -- it was
> written for a purpose!
The log location is almost certainly a distribution-packaging choice, so
along with recommending reading documentation you might also point out
that the source documentation may not match what is actually installed
it wasn't. What's
> involved in creating it post-hoc?
>
Is there some reason you don't use the packaged version for your linux
distribution?
--
Les Mikesell
[email protected]
them before finding they are
duplicates and converting to hardlinks and only skips matches from a
previous backup of the same host. Maybe it is calculating sizes from
the transfers.
--
Les Mikesell
[email protected]
ly the right idea but
also failing and continuing too quickly. 'fdisk -l' might be better
but needs to run as root, or maybe just adding ';sleep 2' after your
ls command would work.
--
Les Mikesell
[email protected]
imes? And if the admin gets the email that it hasn't
been backed up, he can call the guy up and remind him...
--
Les Mikesell
[email protected]
nts frequently
> have duplicated files scattered around. I used BackupPC for website backups;
> my chain length was approximately equal to the number of WordPress sites I
> was hosting.
>
Identical files are not collisions to backuppc - they are de-duplicated.
--
Les M
in a production system
you should really have a formal version control system and deployment
process to fix that sort of thing.
--
Les Mikesell
[email protected]
reinstall script would pull the backup from backuppc. But there is a
little too much black magic dealing with differences in Linux versions
for me to trust that it would work if I tried to change it.
--
Les Mikesell
[email protected]
On Thu, Aug 10, 2023 at 9:52 AM G.W. Haywood via BackupPC-users
wrote:
>
>> > ... bootable image ... integrate that into backuppc ...
>
> My head hurts. :)
>
Saving headaches would be the point... If you've ever had to
re-create a one-off custom filesystem layout and get enough stuff
installed t
S, that complete
read will be over a network before it can decide how to reduce network
use sending to the rsync partner.
--
Les Mikesell
[email protected]
luttered them up so much with extra utilities etc. that I take the
opportunity to start from scratch and just move the documents I know I
need. Having a backup of the old machine helps with that since I know
I can retrieve anything I missed later.
of clients. I was following
> this guide but I get this error in the XferLOG.bad.z file:
>
You are supposed to use the rsyncd backup method in backuppc with that
setup, not rsync. The difference is that rsyncd expects to connect
directly with a listening rsync daemon, not start one with ss
expert on
that myself but you can probably find what has previously been posted
to this list if you search.
--
Les Mikesell
[email protected]
/backuppc/pc/localhost'
will tell you.
--
Les Mikesell
[email protected]
On Thu, Feb 8, 2024 at 2:46 PM daggs via BackupPC-users
wrote:
>
> any ideas? maybe to use the cli interface?
>
If normal permissions and mount status are OK, the next thing that
could be preventing writing is selinux. Is it enabled/enforcing?
--
Les Mikesell
lesmikes...@
al file. Send it where you have space - or test again restoring
only a small directory.
--
Les Mikesell
[email protected]
tar image file instead you could use the web
interface to download it on some other computer with space.
--
Les Mikesell
[email protected]
tput to an appropriate tar restore command. Note that the extract
will happen in your current directory unless you include a -C path
option, and ownership is going to be set according to the numeric user
id on the original file which may not match when on a different
machine.
--
Les Mikesell
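The -C and numeric-ownership points can be demonstrated with a self-contained throwaway archive (editor's sketch using temp directories, so it is safe to try anywhere):

```shell
set -e
# Build a tiny tar, then extract it into a separate directory instead of
# the current one.
workdir=$(mktemp -d)
mkdir -p "$workdir/src/etc" "$workdir/restore"
echo "hello" > "$workdir/src/etc/motd"
tar -C "$workdir/src" -cf "$workdir/backup.tar" .
# Without -C this would unpack into the current directory; --numeric-owner
# keeps the numeric uid/gid stored in the archive instead of remapping
# by user name on the destination machine.
tar -xpf "$workdir/backup.tar" -C "$workdir/restore" --numeric-owner
cat "$workdir/restore/etc/motd"   # → hello
```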
track. I'm retired and a little fuzzy on this myself. Also,
is the drive mounted as /mnt or something under /mnt? I'd cd into
the top of that mount point to run the command instead of using -C.
Also, tar needs a '-' after the
'*'. Otherwise it expands to the list
of files in your current directory before the command sees it.
--
Les Mikesell
[email protected]
e acceptable key pairs for sshd in
CS-9. I found this article saying you either have to configure the
crypto policy to accept SHA1 or generate new key pairs with an
acceptable format.
https://serverfault.com/questions/1095898/how-can-i-use-a-legacy-ssh-rsa-key-on-centos-9
able name).
--
Les Mikesell
[email protected]
you understand the difference between the rsync and
rsyncd methods? Rsyncd expects a standalone rsync daemon listening
on the client and backs up 'shares' in the rsyncd.conf setup. The
rsync method connects over ssh to the client and either needs to
connect as root on the client or ha
eds to run as root to access most files and even then Selinux can
prevent access. Try disabling Selinux to see if that allows access.
--
Les Mikesell
[email protected]
server are pinging
> - first step of backup takes place (calculating size and files list)
> - then timeout.
>
If the failing hosts are behind a stateful firewall or NAT gateway the
connection may be timing out since rsync can have long pauses.
Enabling keepalives on the ssh connection might fix
test, but I think adding -o
ServerAliveInterval=60 after $sshPath in $Conf{RsyncClientCmd} should
work. Is there some sort of error listed in your Xferlog for the
failed backups? Another somewhat remote possibility is that you have
some sort of file corruption on the clients causing the remote
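Against the stock v3-style command, that change would look roughly like this (editor's sketch — check your own config.pl for the exact command in use):

```
# Default v3 rsync-over-ssh command plus a keepalive, so firewall/NAT
# state doesn't expire while rsync is quietly comparing file lists:
$Conf{RsyncClientCmd} = '$sshPath -q -x -o ServerAliveInterval=60'
    . ' -l root $host $rsyncPath $argList+';
```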
l chances of
recovery from either a drive or controller issue. And if you have
multiple locations with good connectivity, run another backuppc
instance in a different location. The price is right for the software.
--
Les Mikesell
[email protected]
uilding when you replace it. I always liked
the simplicity of software raid1 (mirrored) because you could recover
from any surviving disk on any computer with a compatible interface or
even a USB adapter and a laptop, but realistically most raid
controllers are good enough. Also having the s
ad most of the
parts necessary to get a bootable system image with its up to date
drivers and configuration into a file that backuppc could pick up and
some minor tweaks could make it restore itself from backuppc instead
of a tar image or the other options it already has.
ed to fix anyway. If you are going to mention
every possible obscure problem, I once (long ago) had a machine where
the RAM had a stuck bit making everything it wrote to disk unreliable
including the mirroring process. But I think modern hardware makes
that unlikely.
--
L
tefirewalls between the server and
target? Sometimes those will time out and block an established
connection that appears to be idle for some time - and sometimes rsync
can appear idle for a long time.
--
Les Mikesell
[email protected]
ly way to copy them
involves reading through the unused space. You might be able to
identify these files, exclude them, and find a better backup approach
(for example databases usually have their own way to make a consistent
backup).
--
Les Mikesell
[email protected]
On Wed, May 7, 2014 at 8:52 AM, Philippe MALADJIAN
wrote:
> Le 07/05/2014 15:06, Les Mikesell a écrit :
>
>> On Wed, May 7, 2014 at 1:22 AM, Philippe MALADJIAN
>> wrote:
>>>>>
>>>>> My backuppc server has 6 GB of RAM; I have 118,751 files for 9.1
latest copy (filled with
the backing full if it is an incremental) and you get all the shares.
--
Les Mikesell
[email protected]
up, if that is what you mean about the
other web site.
However, if you have installed the backuppc package from the EPEL
repository, it should take care of everything for you, including
making the cgi script setuid. All you should have to do is follow
the instructions in /etc/httpd/conf.d/Back
the case of large files with changes. The server
has to uncompress the previous version to copy it while merging in the
new changes.
Also note that if you use the checksum-seed option, the 3rd full
backup will be faster than the 2nd since the server won't have to
uncompress the matching unc
; I001.FCS and that is the last one it shows in the backup. The folder
> contains a LOT of files with names starting with J-Z, but none of those are
> getting backed up.
What does the Xfer Error Summary for that backup run say was the
reason they were skipped?
--
Les Mikesell
is if I002.FCS is locked, it should just skip and move
> to the next file...not sure what I am doing wrong.
It did move on. But note that the next thing was an unsuccessful
listing of the rest of the files. I don't know what that means
either. Maybe filesystem corruption or conten
to downgrade and send the whole
list.
--
Les Mikesell
[email protected]
e you really want the remote server to run backuppc (over a
vpn if necessary) using rsync or rsyncd as the transport.
--
Les Mikesell
[email protected]
On Thu, May 29, 2014 at 10:53 AM, J.L. Blom wrote:
>
> Thanks for the quick reply. I assume I have to login to the backup
> server as root and then generate the keys as backuppc has no password
> for logging in as user backuppc.
Log in as root, then "su - backuppc", or if your installation does n
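The key-generation step can be tried safely like this (editor's sketch using a temp directory; in practice the backuppc user's real ~/.ssh would be used, and the client name is a placeholder):

```shell
set -e
# Generate a passphrase-less keypair for unattended backups.
keydir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$keydir/id_ed25519"
# The .pub half goes into ~root/.ssh/authorized_keys on each client,
# e.g.: ssh-copy-id -i "$keydir/id_ed25519.pub" root@clienthost
cat "$keydir/id_ed25519.pub"
```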
folder . >/restore/restore.tar
where '.' will match all the files/directories and the /restore folder
has to already exist.
--
Les Mikesell
[email protected]
"/cygdrive/c/shadow/d/svn_backup/Daily/svn(10.06.14).tgz"
>
> Is it the extra dots or brackets in the file path or name? If so, how can I
> get around this? My clients last backup had 349416 "vanishing" files.
>
What that is 'supposed' to mean is that s
ript that deletes the shadow copy after
the backup completes and check later to see if it is still there with
files the backup says went missing.
--
Les Mikesell
[email protected]
lib/backuppc/cpool/test
> root@backuppc:/var/lib/backuppc/pc/localhost# ls -l /var/lib/backuppc/cpool/
> total 0
> -rw-r--r-- 2 root root 0 Jun 19 2014 test
>
>
>
> My backuppc server is Debian wheezy and the nfs server is Centos 6
>
>
>
> any help or hint will
loads what I believe is a cgi.
That means apache really isn't configured with a scriptalias or
handler for the cgi.
>
> Using Ubuntu 14 & Xampp lamp distro.
>
What do you mean by xampp lamp? Ubuntu should have a package in the
distribution repository.
not php. Are you using the ubuntu deb package? It
should set the web server up for you but I'm only familiar with the rpm
version.
--
Les Mikesell
[email protected]
backuppc package from the EPEL
yum repository - specifically because they rarely update things in
non-backward compatible ways to minimize that kind of breakage.
--
Les Mikesell
[email protected]
what
> could be the issue.
>
Are you using the /BackupPC URL as set up by the ScriptAlias in
/etc/httpd/conf.d/BackupPC.conf? You'll also need to set up
authentication with the htpasswd command mentioned there in a comment
near the top.
--
Les Mikes
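The relevant part of that packaged Apache config looks roughly like this (editor's sketch — the exact cgi path and file names vary by package):

```
# /etc/httpd/conf.d/BackupPC.conf (excerpt, paths are assumptions):
ScriptAlias /BackupPC /usr/share/BackupPC/sbin/BackupPC_Admin
<Location /BackupPC>
    AuthType Basic
    AuthName "BackupPC"
    # Created with: htpasswd -c /etc/BackupPC/apache.users yourlogin
    AuthUserFile /etc/BackupPC/apache.users
    Require valid-user
</Location>
```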
On Fri, Jun 20, 2014 at 3:19 PM, Francisco Suarez
wrote:
> Les, I got it working on CentOs, thanks lot. configured apache with all
> dependencies.
>
You may also need the perl-suidperl package if you have trouble with
permissions.
--
Les Mikesell
lesmikes...@
quire
valid-user' in the
section.
Note that your browser will cache your credentials if you haven't
completely closed it, though.
--
Les Mikesell
[email protected]
nUsers} =
to the user that will have admin rights in /etc/BackupPC/config.pl.
Then you can do everything else in the web interface.
--
Les Mikesell
[email protected]
ce downloads is open/insecure)
This means you configured backuppc to use rsyncd with a login and
password but you configured an rsync to run in standalone/daemon mode
on the target host without requiring a login/password.
--
configure backuppc to transfer
> only changed files and merge them with the previous backup? I want to backup
> 200GB of email files.
Yes, that is the way it is supposed to work. If the files are mbox
format (many messages in one file) it can be rather slow to merge the
changes but it does
lot more. If they
aren't being linked into the next backup number's tree (like they
should be), where are they? And if they are, why can't you see them?
--
Les Mikesell
[email protected]
iguration problem cause rsync based backups to hang and fail.
I'm missing something here. The smb config has nothing to do with
rsync based xfers. Does the xfer log show any files copied? Maybe
the box is just very slow at compressing files and wa
has not changed during the backups 1,2,3...
I think you are the first person to report anything like that, at
least without seeing errors in the logs. I don't know where to start
to debug it. I'd be inclined to start with a fresh install in a VM to
get something that works nor