Re: Home made backup system

2019-12-26 Thread rhkramer
Thanks for the reply and the useful explanations (and for acknowledging the 
limits of your personal knowledge).  I will add one question / comment 
down below:

On Thursday, December 26, 2019 10:23:54 AM Greg Wooledge wrote:
> For most people, it comes down to "when you can't write to the device
> any more, you throw it away and get another".

I guess that is the rub, and a question: I'll want to throw it away when I 
can't read from it anymore.  (But only after I somehow either copy all the 
still-good stuff off it, or have another copy that is still readable.)

(I once read a thread about long term archival storage where they suggested a 
scheme (I think it involved CDs at the time) of making one (or more) additional 
copies of the CD every year, and probably starting with more than one copy (CD) 
of the data to be archived -- though I really don't remember the details.  
(IIRC, the objective was trying to maintain the data intact for 100 
years or something like that.)

I wonder whether failures typically start with a failure to write or a failure 
to read?  I suspect it depends on a lot of factors, e.g., the age of the writing 
-- I mean, under the wrong conditions (for someone looking for long term backup / 
archival storage), something might be written successfully, but 20 years later 
might not be readable.

Of course, I am unlikely to need my backups for more than a few months, so 
most of the above is probably moot. ;-)



Re: Home made backup system

2019-12-26 Thread Thomas Schmitt
Hi,

Greg Wooledge wrote:
> > > Remember, tar was designed for magnetic tapes,
> > > which are read sequentially.  It provides no way for a reader to learn
> > > that file xyz is at byte offset 31337 and that it should skip ahead to
> > > that point if it only wants that one file.

rhkra...@gmail.com wrote:
> > Just to confirm, I assume that is true ("no way to skip ahead to byte
> > 31337") even if the underlying media is a (somewhat random access) disk
> > instead of (serial access) tape?

It is about not knowing what byte address to skip to.
tar is simply a sequence of file containers: file header, data, next file
header, data, and so on.
What is lacking is a kind of directory that records where a particular file
begins.

There are archivers which have such a catalog. With some quibbling, this
constitutes a filesystem.


> > In other words, I suspect it would be more reliable if it functioned a
> > little bit more like a WORM (Write Once, Read Many) type device

That would be CD-R, DVD-R, DVD+R, and BD-R media.


> > data is appended by  writing in previously unused locations
> > rather than deleting some data,

That's called multi-session. It has other advantages beyond reducing the wear
of media. Typical filesystem for multi-session on write-once media is ISO 9660.
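
As an illustration, a first and a later session could be written like this
(device path and directories are made up; growisofs is one common tool for
this, xorriso another):

  # first session on a blank disc:
  growisofs -Z /dev/dvd -R -J /home/user/backup-week01
  # later sessions are appended; the existing data stays untouched:
  growisofs -M /dev/dvd -R -J /home/user/backup-week02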


Greg Wooledge wrote:
> "Write Once, Read Many" is an entirely different data storage paradigm.
> Think of a large dusty vault full of optical media.

One can destroy them physically. Put them stack-wise into the oven at 200 C / 400 F
for 10 minutes. Wear robust gloves when bending and breaking the hot media.
Single discs can be destroyed with the help of a lighter.


> Very expensive, and very niche.

One can buy 25 GB BD-R media for less than a dollar, 50 GB for less than 2 dollars.
The usefulness depends on the storage constraints of original and backup.


> You can't reuse the medium, nor do you WANT to

If you want to re-use media, there are CD-RW, DVD-RW, DVD+RW, DVD-RAM, and BD-RE.
Multi-session is possible on them with ISO 9660 filesystems.


Have a nice day :)

Thomas



Re: Home made backup system

2019-12-26 Thread Charles Curley
On Thu, 26 Dec 2019 09:51:59 -0500
rhkra...@gmail.com wrote:

> Again, I assume (I know what assume does) that "USB mass-storage
> device that acts like a hard drive" is (or might be) a pen drive type
> of device.  I've had a lot of bad luck (well, more bad luck than I'd
> like) with that kind of device, and I suspect that the problem is
> more likely to occur when parts of the device are erased to allow
> something new to be written to it.

When I first started working with the technology behind what we now
call flash drives, back in the late Pliocene, their capacities were
measured in bits, and I think I worked with 256 bit and 512 bit
devices. At that time, you had to read several bytes worth, modify the
relevant bits, and write out the several bytes worth to make a change,
much like changing a sector on a floppy disk or hard drive.

As you conjectured, device life was measured in write cycles, usually
on the order of tens or hundreds of write cycles.

Today all of that is still more or less true, except the capacities
and lives of the devices are greatly extended.

And one other change: When I was working with these things, the host
computer's operating system device driver had to take care of all that,
including using different "sectors" to spread out the wear and avert
device failure. Today, all of that is "under the hood" of the flash
drive, completely invisible to the host computer.

This is similar to the evolution of the hard drive. Way back when five
megabytes was a lot of hard drive (I started working with the Seagate
ST-506), the operating system driver had to worry about encoding,
tracks, sectors, and heads, error correction, and detecting bad sectors and
re-mapping them. SCSI, and later IDE, moved all that onto the drive
itself, and all the OS sees is a linear expanse of sectors.

So much of what you conjectured indeed goes on, but on the flash device,
and at a level utterly and completely invisible to the host operating
system. And it almost certainly does it better than most of us here
could do it.

-- 
Does anybody read signatures any more?

https://charlescurley.com
https://charlescurley.com/blog/



Re: Home made backup system

2019-12-26 Thread Greg Wooledge
On Thu, Dec 26, 2019 at 09:51:59AM -0500, rhkra...@gmail.com wrote:
> Just to confirm, I assume that is true ("no way to skip ahead to byte 31337") 
> even if the underlying media is a (somewhat random access) disk instead of 
> (serial access) tape?

Correct.  There's no central index inside the tar archive that says
"file xyz begins at byte 12345".  This is by design, so that you can
append new content to an existing tar archive.  When you append a new
file to an existing archive, you simply drop a new metadata header
record, and then the new content.  So, the entire archive is a long
string of

header file header file header file 

The only way to find a file is to read the entire thing from the beginning
until you find the file you want.
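
One way to see the difference (the archive and member names here are made up):

  # tar scans from the start of the archive even to pull out a single member:
  time tar -xf big.tar some/deep/file.txt
  # a format with a central directory, e.g. zip, can seek straight to it:
  time unzip big.zip some/deep/file.txt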

> Again, I assume (I know what assume does) that "USB mass-storage device that 
> acts like a hard drive" is (or might be) a pen drive type of device.

Yes.

> I've had 
> a lot of bad luck (well, more bad luck than I'd like) with that kind of 
> device, and I suspect that the problem is more likely to occur when parts of 
> the device are erased to allow something new to be written to it.
> 
> In other words, I suspect it would be more reliable if it functioned a little 
> bit more like a WORM (Write Once, Read Many) type device

"Write Once, Read Many" is an entirely different data storage paradigm.
Think of a large dusty vault full of optical media.  Once you've backed up
your full database (or whatever) to one of these media, it goes into
the vault.  You can't reuse the medium, nor do you WANT to, for legal
reasons.  You've chosen this technology specifically because it CANNOT
be altered once written, and therefore gives you some sort of debatably
reliable legal trail of evidence.  "On May 7th, this is what we had."

Very expensive, and very niche.

> -- not that the whole 
> device necessarily has to be written in one go, but more that, for highest 
> reliability, data is appended by writing in previously unused locations 
> rather than deleting some data, and then writing new data in previously used 
> and erased locations.

I am not an expert in solid state storage, so I won't even try to
address the questions about long-term reliability of various USB mass
storage devices.

For most people, it comes down to "when you can't write to the device
any more, you throw it away and get another".

> I don't know whether rsync, in the normal course of events will delete 
> (erase) 
> and write data in previously used locations, but it would be helpful to have 
> comments, with respect to:
> 
>* whether rsync will rewrite to previously used locations, [...]

Rsync does not operate at the disk sector level.  It operates at the
file level.  If you've modified a file since the last backup, then rsync
knows it needs to modify the backed-up copy of the file.  It will use
various algorithms to decide whether it should just copy the entire
file from the source, or try to preserve pieces of the file that are
already on the destination.

The main goal there is to reduce the transmission of bytes from a
source host to a destination host, because one of rsync's main use
cases is backing up files across a network.

Since you're focusing on the case where there's no network involved,
a lot of that work is just not relevant.  In the end, as far as I
understand it, rsync will create a new file on the destination, which
contains the new content (however it gets the new content).  Then the
older copy of the file will be deleted.
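
In rsync terms, an illustrative sketch (not a description of every case):

  # default: build the updated file under a temporary name, then rename it
  # over the old copy
  rsync -a /src/ /backup/
  # --inplace writes directly into the existing destination file instead
  rsync -a --inplace /src/ /backup/
  # -W / --whole-file skips the delta algorithm entirely, which is common
  # advice for purely local copies
  rsync -a -W /src/ /backup/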

How the storage device's controller works (how it decides which parts
of the device get the new file, how the part where the old file used to
be get recycled, etc.) is outside of rsync's purview, and definitely
outside of *my* personal knowledge.



Re: Home made backup system

2019-12-26 Thread rhkramer
Thanks for addressing this -- I have a few questions I want to ask for my own 
edification / clarification:

On Thursday, December 26, 2019 08:18:12 AM Greg Wooledge wrote:
> The drawback of using tar is that it creates an *archive* of files -- that
> is, a single file (or byte stream) that contains a mashup of metadata and
> file contents.  If you want to extract one file from this archive, you
> have to read the entire archive from the beginning until you find the
> file you're looking for.  Remember, tar was designed for magnetic tapes,
> which are read sequentially.  It provides no way for a reader to learn
> that file xyz is at byte offset 31337 and that it should skip ahead to
> that point if it only wants that one file.

Just to confirm, I assume that is true ("no way to skip ahead to byte 31337") 
even if the underlying media is a (somewhat random access) disk instead of 
(serial access) tape?
 
> For most people, a backup using rsync to a removable *random access*
> medium (an external hard drive, or USB mass-storage device that acts
> like a hard drive) is a much better fit for their needs.

Again, I assume (I know what assume does) that "USB mass-storage device that 
acts like a hard drive" is (or might be) a pen drive type of device.  I've had 
a lot of bad luck (well, more bad luck than I'd like) with that kind of 
device, and I suspect that the problem is more likely to occur when parts of 
the device are erased to allow something new to be written to it.

In other words, I suspect it would be more reliable if it functioned a little 
bit more like a WORM (Write Once, Read Many) type device -- not that the whole 
device necessarily has to be written in one go, but more that, for highest 
reliability, data is appended by writing in previously unused locations 
rather than deleting some data, and then writing new data in previously used 
and erased locations.

I once looked into the rsync type of thing (for example, I read the author's 
thesis back in the day) but I don't remember all I'd like to remember.  
(Including, I don't remember if he used the term rsync in the thesis; maybe it 
was rcopy or something.)

I don't know whether rsync, in the normal course of events will delete (erase) 
and write data in previously used locations, but it would be helpful to have 
comments, with respect to:

   * whether rsync will rewrite to previously used locations (I think it 
does -- I mean, I think under certain circumstances (maybe based on certain 
options), e.g., if a file is deleted from the "working space", that file is (or 
can be) deleted from the rsynced backup, and then that space can be reused)

   * if when you say a "USB mass-storage device that acts like a hard drive" 
you refer to (or include) a pendrive type device

   * your experience as to the reliability of a pendrive type device, either 
in a WORM type usage (as described above) or when rewriting over previously 
used areas

Thanks!





Re: Home made backup system

2019-12-26 Thread Greg Wooledge
On Thu, Dec 26, 2019 at 08:18:12AM -0500, Greg Wooledge wrote:
> On Wed, Dec 25, 2019 at 11:07:22AM -0800, David Christensen wrote:
> > > I was amazed that nobody yet considered tar.

Sorry... that sentence was actually written by Franco Martelli.  I
replied to the wrong email.



Re: Home made backup system

2019-12-26 Thread Greg Wooledge
On Wed, Dec 25, 2019 at 11:07:22AM -0800, David Christensen wrote:
> > I was amazed that nobody yet considered tar.

The best use case for tar is creating a full backup to removable media
(magnetic tapes are literally what it was designed for -- the "t" stands
for tape).

The drawback of using tar is that it creates an *archive* of files -- that
is, a single file (or byte stream) that contains a mashup of metadata and
file contents.  If you want to extract one file from this archive, you
have to read the entire archive from the beginning until you find the
file you're looking for.  Remember, tar was designed for magnetic tapes,
which are read sequentially.  It provides no way for a reader to learn
that file xyz is at byte offset 31337 and that it should skip ahead to
that point if it only wants that one file.

Tar also provides no means to *update* the copy of a file contained
within an existing archive.  I.e. you can't do any kind of incremental
or differential backup with it -- not realistically.  The closest you
could come would be appending a series of binary patches to the end
of the existing archive.

(Appending to an archive only works if the archive is uncompressed, by
the way.)
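
For example (file names made up; behavior as in GNU tar):

  tar -rf backup.tar new-file.txt      # OK: appends to a plain archive
  tar -rzf backup.tar.gz new-file.txt  # fails: compressed archives cannot
                                       # be appended to
  # -u appends a *newer copy* of a changed file; the old copy stays inside
  tar -uf backup.tar changed-file.txt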

It's certainly not *wrong* to do backups using tar, but for a lot of
people, it's not the strategy they want to employ.

For most people, a backup using rsync to a removable *random access*
medium (an external hard drive, or USB mass-storage device that acts
like a hard drive) is a much better fit for their needs.



Re: Home made backup system

2019-12-25 Thread David Christensen

On 2019-12-25 08:42, Franco Martelli wrote:

> On 18/12/19 at 18:02, rhkra...@gmail.com wrote:
>> Aside / Admission: I don't backup all that I should and as often as I should,
>> so I'm looking for ways to improve.  One thought I have is to write my own
>> backup "system" and use it, and I've thought about that a little, and provide
>> some of my thoughts below.
>> ...
> 
> I was amazed that nobody yet considered tar. My backup with tar is based
> on a script that invokes tar, reading two hidden files, .tarExclude and
> .tarInclude:
> 
> ~# cat .tarExclude
> /home/myuser/.cache
> /home/myuser/.kde
> /home/myuser/.mozilla/firefox/.default
> /home/myuser/VirtualBox\ VMs
> /home/myuser/Shared
> /home/myuser/Sources
> /home/myuser/Video
> /home/myuser/Scaricati
> /home/myuser/Modelli
> /home/myuser/Documenti
> /home/myuser/Pubblici
> /home/myuser/Desktop
> /home/myuser/Immagini
> /home/myuser/Musica
> /home/myuser/linux-source-4.19
> 
> ~# cat .tarInclude
> /home/myuser
> /root/
> /etc/
> /usr/local/bin/
> /usr/local/etc/
> /boot/grub/grub.cfg
> /boot/config-4.19.67
> 
> then the script invokes the tar command this way:
> 
> /bin/tar -X /root/.tarExclude -zcpvf /tmp/$f -T /root/.tarInclude
> 
> the $f variable holds the filename; the archive is moved to a USB stick
> once tested with the command:
> 
> /bin/tar ztf /tmp/$f >/dev/null
> 
> one thing to take care of: the -X switch must come before the -T switch,
> otherwise the tar command fails.
> HTH



tar(1) is very flexible:

1.  I tend to formulate my archive jobs by host and (sub-)directory -- 
e.g. tinkywinky:/home, cvs:/var/local/cvs, etc.


2.  Within each archive job, I include everything by default and then 
specify what to exclude via the various --exclude* options.


3.  For a given host and directory, I may have multiple archive jobs 
that are run at different frequencies (daily, weekly, monthly, etc.). 
Frequent jobs exclude the most files and infrequent jobs exclude few (or 
none).


4.  The --exclude-tag* options have the advantage (and risk) that the 
administrator(s) and user(s) can maintain archive exclusion tag files 
(e.g. ".noarchive") throughout the live filesystem as archiving 
requirements change over time.  This reduces or eliminates the need for 
the administrator to make changes to the archiving scripts, 
configuration files, and job files.


5.  All that said, I do have one VPS whose archive job is inverted by 
design -- based at root, exclude everything by default, and specify what 
to include via the --files-from option.
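
A sketch of points 4 and 5 with GNU tar (the paths are illustrative):

  # point 4: skip the contents of any directory containing a ".noarchive"
  # tag file (the tag file itself is kept as a marker):
  tar --exclude-tag=.noarchive -czpf /tmp/home.tar.gz /home
  # point 5: inverted job -- archive only the paths listed in a file:
  tar -czpf /tmp/vps.tar.gz --files-from=/root/archive-list.txt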



David



Re: Home made backup system

2019-12-25 Thread Franco Martelli
On 18/12/19 at 18:02, rhkra...@gmail.com wrote:
> Aside / Admission: I don't backup all that I should and as often as I should, 
> so I'm looking for ways to improve.  One thought I have is to write my own 
> backup "system" and use it, and I've thought about that a little, and provide 
> some of my thoughts below.
> ...

I was amazed that nobody yet considered tar. My backup with tar is based
on a script that invokes tar, reading two hidden files, .tarExclude and
.tarInclude:

~# cat .tarExclude
/home/myuser/.cache
/home/myuser/.kde
/home/myuser/.mozilla/firefox/.default
/home/myuser/VirtualBox\ VMs
/home/myuser/Shared
/home/myuser/Sources
/home/myuser/Video
/home/myuser/Scaricati
/home/myuser/Modelli
/home/myuser/Documenti
/home/myuser/Pubblici
/home/myuser/Desktop
/home/myuser/Immagini
/home/myuser/Musica
/home/myuser/linux-source-4.19

~# cat .tarInclude
/home/myuser
/root/
/etc/
/usr/local/bin/
/usr/local/etc/
/boot/grub/grub.cfg
/boot/config-4.19.67

then the script invokes the tar command this way:

/bin/tar -X /root/.tarExclude -zcpvf /tmp/$f -T /root/.tarInclude

the $f variable holds the filename; the archive is moved to a USB stick
once tested with the command:

/bin/tar ztf /tmp/$f >/dev/null

one thing to take care of: the -X switch must come before the -T switch,
otherwise the tar command fails.
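
Putting the pieces together, a minimal wrapper along these lines (the
filename scheme and USB mount point are illustrative, not the actual
script):

  #!/bin/sh
  # build a dated archive name, create it, verify it, then move it off-box
  f="backup-$(date +%Y%m%d).tar.gz"
  /bin/tar -X /root/.tarExclude -zcpvf "/tmp/$f" -T /root/.tarInclude
  # only move the archive to the stick if it reads back cleanly:
  if /bin/tar ztf "/tmp/$f" >/dev/null; then
      mv "/tmp/$f" /media/usbstick/
  fi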
HTH

Merry Xmas

-- 
Franco Martelli



Re: Home made backup system

2019-12-23 Thread Celejar
On Mon, 23 Dec 2019 20:11:07 -0600
Nate Bargmann  wrote:

> Thanks for the tips!

Sure! Let us know if you hack together anything interesting.

Celejar



Re: Home made backup system

2019-12-23 Thread Nate Bargmann
Thanks for the tips!

- Nate

-- 

"The optimist proclaims that we live in the best of all
possible worlds.  The pessimist fears this is true."

Web: https://www.n0nb.us
Projects: https://github.com/N0NB
GPG fingerprint: 82D6 4F6B 0E67 CD41 F689 BBA6 FB2C 5130 D55A 8819





Re: Home made backup system

2019-12-23 Thread Celejar
On Thu, 19 Dec 2019 14:25:24 -0600
Nate Bargmann  wrote:

> I also use rsnapshot on this machine to backup to another drive in the
> same case.  I'd thought about off site, perhaps AWS or such but haven't
> spent enough time trying to figure out how I might do that with
> rsnapshot.

One way to do this is by just using something like rclone (which speaks
AWS) to sync the rsnapshot backup to AWS. I do this with borg backups: I
used to do it with hubiC, which unfortunately has been offline for a
while, and I currently sync a borg backup with a c14 cold storage repository
using a tool I wrote for this purpose:

https://github.com/tmo1/c14sync

(rclone doesn't have the capability to automate the moving of data into
and out of c14's "safes" (cold storage repositories), or at least it
didn't when I wrote my utility, but in general, rclone is the standard
tool to sync local data with cloud storage providers, assuming you
don't have access to the cloud storage via traditional protocols like
ssh and rsync.)
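
The basic invocation is simple ('remote' stands for whatever backend you
have configured with 'rclone config'; the paths are illustrative):

  # mirror the local rsnapshot tree to the configured cloud remote:
  rclone sync /backup/rsnapshot remote:rsnapshot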

Celejar



Re: Home made backup system

2019-12-21 Thread Klaus Fuerstberger
On 18.12.19 at 18:02, rhkra...@gmail.com wrote:
> A purpose of sending this to the mailing-list is to find out if there already 
> exists a solution (or parts of a solution) close to what I'm thinking about 
> (no sense re-inventing the wheel), or if someone thinks I've overlooked 
> something or making a big mistake.

For my Linux-based servers I use Dirvish, an rsync-based backup
solution which works with hardlinks, so you can always have a backup of
the whole root tree of your servers and save a lot of space. It works
locally and also remotely over ssh, and you can add pre and post scripts,
for example to stop and start database servers or make database snapshots:

http://dirvish.org/

Klaus



Re: Fetchnews (was Re: Home made backup system)

2019-12-21 Thread songbird
rhkra...@gmail.com wrote:
> On Friday, December 20, 2019 09:40:28 PM songbird wrote:
>> Kenneth Parker wrote:
>
>> > Could you please ship me a personal email, on how you configured gmane
>> > and LKML to read debian-user?
>
>>   i'd rather post public messages as that way if anyone
>> else is reading along or searching they can also use the
>> information if they like.  that's why i like usenet.
>
>
> +1, and thanks from the peanut gallery

  you're welcome!

  i should also say that when you first subscribe to a 
group on gmane it will not need any setup that i
recall.  however, when you reply to your first message
from a gmane group it will send you a confirmation e-mail
asking you to make sure it was really you who sent the
message.  you only need to do this once per gmane
group you actually reply to.


  songbird



Re: Fetchnews (was Re: Home made backup system)

2019-12-21 Thread Ralph Katz
On 12/20/19 7:40 PM, songbird wrote:

[snip] ...[configuring gmane to read debian-user]

> 
>   gmane is a mail to usenet gateway service.
> 
>   when you install leafnode and your favorite newsreader 
> and get them configured you will still have to download
> an active list from the news service provider and then
> have to subscribe to each group you would like to use.
> 
>   so in the case of LKML the group name is:
> 
> gmane.linux.kernel 
>   which is an alias used for the linux kernel mailing list.
> 
>   when you use your newsagent to search for that group
> you will have to pull articles to read (via whatever you
> use to get messages).  leafnode is what i use because i
> understand how it works.  other people use gnus, but 
> i've always been used to trn, rn, tin like interfaces
> so slrn works well for me.
> 
>   so basically there are three chunks to set up to get
> going.  leafnode, a newsreader/writer and the news
> acct itself.  none of this is super easy but doable to
> anyone who likes to poke at linux/debian, etc.
> 
>   i'd rather post public messages as that way if anyone
> else is reading along or searching they can also use the
> information if they like.  that's why i like usenet.
> 
> 
>   songbird
> 
> 

I read several newsgroups on gmane from time to time with thunderbird.
Nothing needs to be installed.  Setup is easy:
- Set up a new newsgroup account in thunderbird; name: news.gmane.org,
enter your SMTP server.
- Server settings:  Type: NNTP, Name: news.gmane.org  port: 119
- choose newsgroups:  gmane.linux.debian.user
- download messages.

I choose to post directly to the debian-user list as it posts faster
than thru gmane and still keeps the threading.
gmane provides a very effective means to quickly browse and read newsgroups.

Regards
Ralph





Re: Fetchnews (was Re: Home made backup system)

2019-12-21 Thread rhkramer
On Friday, December 20, 2019 09:40:28 PM songbird wrote:
> Kenneth Parker wrote:

> > Could you please ship me a personal email, on how you configured gmane
> > and LKML to read debian-user?

>   i'd rather post public messages as that way if anyone
> else is reading along or searching they can also use the
> information if they like.  that's why i like usenet.


+1, and thanks from the peanut gallery



Re: Fetchnews (was Re: Home made backup system)

2019-12-20 Thread Kenneth Parker
On Fri, Dec 20, 2019 at 9:41 PM songbird  wrote:

> Kenneth Parker wrote:
> >songbird wrote:
> ...
> >>   check out eternal-september.org  :)  no binaries.  just
> >> text.  that is all i want to read anyways.
>

You may see a sea7kenp username pop up occasionally.



> > Could you please ship me a personal email, on how you configured gmane
> > and LKML to read debian-user?
>
>   gmane is a mail to usenet gateway service.
>

(Google doesn't like leafnode, by the way).

>
>   when you install leafnode and your favorite newsreader
> and get them configured you will still have to download
> an active list from the news service provider and then
> have to subscribe to each group you would like to use.
>
>   so in the case of LKML the group name is:
>
> gmane.linux.kernel
>   which is an alias used for the linux kernel mailing list.
>
>   when you use your newsagent to search for that group
> you will have to pull articles to read (via whatever you
> use to get messages).  leafnode is what i use because i
> understand how it works.  other people use gnus, but
> i've always been used to trn, rn, tin like interfaces
> so slrn works well for me.
>
>   so basically there are three chunks to set up to get
> going.  leafnode, a newsreader/writer and the news
> acct itself.  none of this is super easy but doable to
> anyone who likes to poke at linux/debian, etc.
>

Many thanks.  I just got approval to use gmane and leafnode for purposes
such as this.  There might be some excitement at the Eye Blink Universe (
http://eyeblinkuniverse.com) in the next few months.

>
>   i'd rather post public messages as that way if anyone
> else is reading along or searching they can also use the
> information if they like.  that's why i like usenet.
>

Fair enough.  Someone else is sure to combine your two unrelated tasks on
the same line, just like I did.

Kenneth Parker


Re: Fetchnews (was Re: Home made backup system)

2019-12-20 Thread songbird
Kenneth Parker wrote:
>songbird wrote:
...
>>   check out eternal-september.org  :)  no binaries.  just
>> text.  that is all i want to read anyways.
>>
>
> Thanks!  Name Servers couldn't find it without the "www" in front.  I am
> investigating it now.
>
> Not likely to get too far down the nntp "Rabbit Hole" tonight,  but will
> look closer at what's there.
>
> Could you please ship me a personal email, on how you configured gmane and
> LKML to read debian-user?

  gmane is a mail to usenet gateway service.

  when you install leafnode and your favorite newsreader 
and get them configured you will still have to download
an active list from the news service provider and then
have to subscribe to each group you would like to use.

  so in the case of LKML the group name is:

gmane.linux.kernel 
  which is an alias used for the linux kernel mailing list.

  when you use your newsagent to search for that group
you will have to pull articles to read (via whatever you
use to get messages).  leafnode is what i use because i
understand how it works.  other people use gnus, but 
i've always been used to trn, rn, tin like interfaces
so slrn works well for me.

  so basically there are three chunks to set up to get
going.  leafnode, a newsreader/writer and the news
acct itself.  none of this is super easy but doable to
anyone who likes to poke at linux/debian, etc.

  i'd rather post public messages as that way if anyone
else is reading along or searching they can also use the
information if they like.  that's why i like usenet.


  songbird



Re: Fetchnews (was Re: Home made backup system)

2019-12-20 Thread Kenneth Parker
On Fri, Dec 20, 2019 at 7:47 PM songbird  wrote:

> Kenneth Parker wrote:
> > songbird wrote:
> ...
> >>   i only use a few commands regularly and have them either
> >> aliased or stuck in history for me in my .bashrc
> >> (i start every session by history -c to get rid of
> >> anything and then use history -s "command" so pretty
> >> much my routine when signing on in the morning is to
> >> do !1 and then !2, !3 if i need to do a dist-upgrade.
> >>
> >> !1 is apt-get update & fetchnews
> >>
> >
> > I opened a possible Hornet's Nest (at least in my understanding) on a
> > Public Server that I administer.   I had not seen "fetchnews" before and,
> > thinking it might give info on Upgradeable Packages, tried it, getting a
> > message about "leafnode" not found.
>
>   i think that would be apt-listchanges.


Bingo!  That's what I thought you were doing!  I installed that also, just
now, and now have a tool to display changes.


> i don't use
> it; i just use the apt-get update or apt-get dist-upgrade output
> to see what is potentially being changed and make sure there's
> nothing in there too scary before i answer Y to the download
> and update prompt.
>
>
> > Fair enough.  So, "apt-get install leafnode", which starts asking me when
> > it's supposed to Fetch this News.  Finally smelling something fishy, I
> > clicked on "none", and am examining what I just did to a "Debian-Like"
> > Ubuntu 16.04 Server.
> >
> > Obviously, I am partly to blame, as the "&" means multiple commands on
> one
> > Command Line.  It only "seemed" to be related to the "Apt-get update"
> > command.
>
>   oops sorry!
>

I didn't finish configuring it, so No Problem!

> > So, Mr. Songbird, are we talking nntp here?  (That takes me back
> > several years!)

>
>   yes!  :)  it is how i read this group and write back
> via gmane.  it is much faster way to read LKML and a
> bunch of other lists too than via a web interface.  i can
> skim a few thousand posts in just a few moments and
> pick out the stuff that interests me.  mark the rest all
> read and it's done.
>
>
> > On Topic for here, is to make sure others don't make my error.  Off Topic
> > (and feel free to personal reply me, Songbird) is my curiosity on, what
> > remains of our nntp News System.  Are there still good Servers out there?
> > (Last time I checked, I found a smelly batch of Spam, mixed with Make
> Money
> > Fast and Porn, which caused me to back off!)
>
>   check out eternal-september.org  :)  no binaries.  just
> text.  that is all i want to read anyways.
>

Thanks!  Name Servers couldn't find it without the "www" in front.  I am
investigating it now.

Not likely to get too far down the nntp "Rabbit Hole" tonight,  but will
look closer at what's there.

Could you please ship me a personal email, on how you configured gmane and
LKML to read debian-user?

Thank you and best regards,

Kenneth Parker


Re: Fetchnews (was Re: Home made backup system)

2019-12-20 Thread songbird
Kenneth Parker wrote:
> songbird wrote:
...
>>   i only use a few commands regularly and have them either
>> aliased or stuck in history for me in my .bashrc
>> (i start every session by history -c to get rid of
>> anything and then use history -s "command" so pretty
>> much my routine when signing on in the morning is to
>> do !1 and then !2, !3 if i need to do a dist-upgrade.
>>
>> !1 is apt-get update & fetchnews
>>
>
> I opened a possible Hornet's Nest (at least in my understanding) on a
> Public Server that I administer.   I had not seen "fetchnews" before and,
> thinking it might give info on Upgradeable Packages, tried it, getting a
> message about "leafnode" not found.

  i think that would be apt-listchanges.  i don't use
it; i just use the apt-get update or apt-get dist-upgrade output
to see what is potentially being changed and make sure there's
nothing in there too scary before i answer Y to the download
and update prompt.


> Fair enough.  So, "apt-get install leafnode", which starts asking me when
> it's supposed to Fetch this News.  Finally smelling something fishy, I
> clicked on "none", and am examining what I just did to a "Debian-Like"
> Ubuntu 16.04 Server.
>
> Obviously, I am partly to blame, as the "&" means multiple commands on one
> Command Line.  It only "seemed" to be related to the "Apt-get update"
> command.

  oops sorry!


> So, Mr. Songbird, are we talking  nntp here?  (That takes me back several
> years!)

  yes!  :)  it is how i read this group and write back
via gmane.  it is much faster way to read LKML and a
bunch of other lists too than via a web interface.  i can
skim a few thousand posts in just a few moments and
pick out the stuff that interests me.  mark the rest all
read and it's done.


> On Topic for here, is to make sure others don't make my error.  Off Topic
> (and feel free to personal reply me, Songbird) is my curiosity on, what
> remains of our nntp News System.  Are there still good Servers out there?
> (Last time I checked, I found a smelly batch of Spam, mixed with Make Money
> Fast and Porn, which caused me to back off!)

  check out eternal-september.org  :)  no binaries.  just
text.  that is all i want to read anyways.


>> !2 is apt-get upgrade
>> !3 is apt-get dist-upgrade
>
>
>
>
> But, in your Personal response, I'm interested in if there are, properly
> Moderated Newsgroups that actually Work?

  there used to be one i was busy in which is now moribund like
many of the others, but you can still find some active enough
corners for some good conversations in good ol' text.  even in ones
that are not moderated you can filter out people who you'd 
rather not read any more.


> Thank you and best regards,
>
> Kenneth Parker (Doesn't sing like a Bird, but took College Chorus Classes).

  i also have fetchnews and postnews as separate entries in
the history but as a habit the first thing i want to do when
i sign on is get the news and update my package lists.


  songbird



Fetchnews (was Re: Home made backup system)

2019-12-20 Thread Kenneth Parker
On Thu, Dec 19, 2019 at 11:29 AM songbird  wrote:

> Greg Wooledge wrote:
> ...
> > History expansion is a bloody nightmare.  I recommend simply turning
> > it off and living without it.  Of course, that's a personal preference,
> > and you're free to continue banging your head against it, if you feel
> > that the times it helps you outweigh the times that it hurts you.
>
>   i only use a few commands regularly and have them either
> aliased or stuck in history for me in my .bashrc
> (i start every session by history -c to get rid of
> anything and then use history -s "command" so pretty
> much my routine when signing on in the morning is to
> do !1 and then !2, !3 if i need to do a dist-upgrade.
>
> !1 is apt-get update & fetchnews
>

I opened a possible Hornet's Nest (at least in my understanding) on a
Public Server that I administer.   I had not seen "fetchnews" before and,
thinking it might give info on Upgradeable Packages, tried it, getting a
message about "leafnode" not found.

Fair enough.  So, "apt-get install leafnode", which starts asking me when
it's supposed to Fetch this News.  Finally smelling something fishy, I
clicked on "none", and am examining what I just did to a "Debian-Like"
Ubuntu 16.04 Server.

Obviously, I am partly to blame, as the "&" means multiple commands on one
Command Line.  It only "seemed" to be related to the "Apt-get update"
command.

So, Mr. Songbird, are we talking  nntp here?  (That takes me back several
years!)

On Topic for here, is to make sure others don't make my error.  Off Topic
(and feel free to personal reply me, Songbird) is my curiosity on, what
remains of our nntp News System.  Are there still good Servers out there?
(Last time I checked, I found a smelly batch of Spam, mixed with Make Money
Fast and Porn, which caused me to back off!)

> !2 is apt-get upgrade
> !3 is apt-get dist-upgrade




But, in your Personal response, I'm interested in if there are, properly
Moderated Newsgroups that actually Work?

Thank you and best regards,

Kenneth Parker (Doesn't sing like a Bird, but took College Chorus Classes).


Re: Home made backup system

2019-12-19 Thread Keith Bainbridge

On 19/12/19 4:02 am, rhkra...@gmail.com wrote:

> Aside / Admission: I don't backup all that I should and as often as I should,
> so I'm looking for ways to improve.  One thought I have is to write my own
> backup "system" and use it, and I've thought about that a little,



I understand. For a while I used mc to copy/update files to USB. I had 
to watch the copy - time consuming. Unreliable, because it was manual.


I also realised that I need to be able to go back, and not just to last 
week or last month. The files I change often, I may use only a few days a year.


I get that if you create a file and never touch it again, the rotating 
back-ups are useful.   I use timeshift for system partition back-ups 
(and it has saved me several times).



Anyway, I use rsync with its backup option to a backup/year/month/date/hour 
tree using $NAMEs. The scripts are on the destination to ensure that the 
destination is available.


Each USB drive is listed in /etc/fstab to prevent automount on insertion. 
Mount options include noauto,noexec.  root's cron mounts the 
destination, remounts it exec, then runs the script, which ends with an 
unmount command:


2 * * * * mount /mnt/g502 && mount -o remount,exec /mnt/g502/ && cd 
/mnt/g502  && ./daily.sh


extract from daily.sh:


DAY=`date +%Y%b%d`
NOW=`date +%Y%b%d%H`
HOUR=`date +%H`
YEAR=`date +%Y`

cd /mnt/g502

cp /home/keith/rsyncExclusionList.txt ./

date   >>  ./copydailyStarted
echo "g502" >>  ./copydailyStarted

#$DAY >>
#date >> /mnt/g502/copydailyStarted
#$NOW >>  /mnt//g502/copydailyStarted

mkdir -p ./rsynccBackupp/$DAY/$HOUR   # -p so the new $DAY level is created too

rsync -rubvLH --backup-dir=./rsynccBackupp/$DAY/$HOUR  --exclude 
'**ache' --exclude '.thunderb**' --exclude '**mozilla**' --exclude 
'**mzzlla**' --exclude '**eamonkey**' --exclude '**hromium** ' 
/mnt/data/keith/ ./



The --exclude bits aren't working yet; neither did an 
--exclude-from list.
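
A guess at a fix (untested here): rsync's list option is spelled
--exclude-from, and the stray trailing space inside '**hromium** ' above
would keep that pattern from ever matching. Something like:

  rsync -rubvLH --backup-dir=./rsynccBackupp/$DAY/$HOUR \
      --exclude-from=./rsyncExclusionList.txt \
      /mnt/data/keith/ ./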



Anyhow, just my 2 bobs worth.




--
Keith Bainbridge

kkeith.bainbridge.3...@gmail.com
+61 (0)447 667 468



Re: Home made backup system

2019-12-19 Thread David Christensen

On 2019-12-19 21:04, David Christensen wrote:

> So, ~47 snapshots of ~892 GB of data.  That is ~51 TB.


Correction -- 42 TB.


David



Re: Home made backup system

2019-12-19 Thread David Christensen

On 2019-12-19 09:45, ghe wrote:


> How about writing a little script for rsync saying how you want it to
> backup, what to backup, and what not to backup and set cron jobs for
> when you want it to run. In the cron jobs, tell it to write to different
> directories, so as to keep several days of backups.


The fundamental problem is duplication.


Here is the data on my SOHO server:

2019-12-19 20:33:28 toor@soho2 ~
# du -sg /jail/cvs/var/local/cvs /jail/samba/var/local/samba
1   /jail/cvs/var/local/cvs
891 /jail/samba/var/local/samba


So, ~892 GB of live data.


Here are the snapshots (backups):

2019-12-19 20:46:37 toor@soho2 ~
# ls -1 /jail/cvs/var/local/cvs/.zfs/snapshot 
/jail/samba/var/local/samba/.zfs/snapshot

/jail/cvs/var/local/cvs/.zfs/snapshot:
manual-20190530-1804
manual-20190530-1830
manual-20191209-1728
manual-20191209-1741
manual-20191209-1802
zfs-auto-snap_d-2019-12-07-00h07
zfs-auto-snap_d-2019-12-08-00h07
zfs-auto-snap_d-2019-12-14-00h07
zfs-auto-snap_d-2019-12-15-00h07
zfs-auto-snap_d-2019-12-16-00h07
zfs-auto-snap_d-2019-12-17-00h07
zfs-auto-snap_d-2019-12-18-00h07
zfs-auto-snap_d-2019-12-19-00h07
zfs-auto-snap_f-2019-09-05-20h12
zfs-auto-snap_f-2019-09-07-00h00
zfs-auto-snap_f-2019-09-15-23h00
zfs-auto-snap_f-2019-09-19-22h48
zfs-auto-snap_f-2019-10-05-23h12
zfs-auto-snap_f-2019-10-07-20h00
zfs-auto-snap_f-2019-10-15-20h00
zfs-auto-snap_f-2019-11-03-14h36
zfs-auto-snap_f-2019-11-14-19h36
zfs-auto-snap_f-2019-11-15-21h12
zfs-auto-snap_f-2019-11-25-19h48
zfs-auto-snap_f-2019-11-29-17h00
zfs-auto-snap_f-2019-12-19-20h36
zfs-auto-snap_h-2019-12-18-20h02
zfs-auto-snap_h-2019-12-18-21h02
zfs-auto-snap_h-2019-12-18-22h02
zfs-auto-snap_h-2019-12-18-23h02
zfs-auto-snap_h-2019-12-19-00h02
zfs-auto-snap_h-2019-12-19-01h02
zfs-auto-snap_h-2019-12-19-02h02
zfs-auto-snap_h-2019-12-19-03h02
zfs-auto-snap_h-2019-12-19-04h02
zfs-auto-snap_h-2019-12-19-05h02
zfs-auto-snap_h-2019-12-19-06h02
zfs-auto-snap_h-2019-12-19-07h02
zfs-auto-snap_h-2019-12-19-08h02
zfs-auto-snap_h-2019-12-19-09h02
zfs-auto-snap_h-2019-12-19-10h02
zfs-auto-snap_h-2019-12-19-11h02
zfs-auto-snap_h-2019-12-19-12h02
zfs-auto-snap_h-2019-12-19-13h02
zfs-auto-snap_h-2019-12-19-14h02
zfs-auto-snap_h-2019-12-19-15h02
zfs-auto-snap_h-2019-12-19-16h02
zfs-auto-snap_h-2019-12-19-17h02
zfs-auto-snap_h-2019-12-19-18h02
zfs-auto-snap_h-2019-12-19-19h02
zfs-auto-snap_h-2019-12-19-20h02
zfs-auto-snap_m-2019-09-01-00h17
zfs-auto-snap_m-2019-10-01-00h17
zfs-auto-snap_m-2019-11-01-00h17
zfs-auto-snap_m-2019-12-01-00h17
zfs-auto-snap_w-2019-11-17-00h12
zfs-auto-snap_w-2019-11-24-00h12
zfs-auto-snap_w-2019-12-01-00h12
zfs-auto-snap_w-2019-12-08-00h12
zfs-auto-snap_w-2019-12-15-00h12

/jail/samba/var/local/samba/.zfs/snapshot:
manual-20190530-1804
manual-20190530-1830
manual-20191210-1736
zfs-auto-snap_d-2019-12-09-00h07
zfs-auto-snap_d-2019-12-10-00h07
zfs-auto-snap_d-2019-12-14-00h07
zfs-auto-snap_d-2019-12-15-00h07
zfs-auto-snap_d-2019-12-16-00h07
zfs-auto-snap_d-2019-12-17-00h07
zfs-auto-snap_d-2019-12-18-00h07
zfs-auto-snap_d-2019-12-19-00h07
zfs-auto-snap_f-2019-12-08-11h36
zfs-auto-snap_f-2019-12-19-20h36
zfs-auto-snap_h-2019-12-18-20h02
zfs-auto-snap_h-2019-12-18-21h02
zfs-auto-snap_h-2019-12-18-22h02
zfs-auto-snap_h-2019-12-18-23h02
zfs-auto-snap_h-2019-12-19-00h02
zfs-auto-snap_h-2019-12-19-01h02
zfs-auto-snap_h-2019-12-19-02h02
zfs-auto-snap_h-2019-12-19-03h02
zfs-auto-snap_h-2019-12-19-04h02
zfs-auto-snap_h-2019-12-19-05h02
zfs-auto-snap_h-2019-12-19-06h02
zfs-auto-snap_h-2019-12-19-07h02
zfs-auto-snap_h-2019-12-19-08h02
zfs-auto-snap_h-2019-12-19-09h02
zfs-auto-snap_h-2019-12-19-10h02
zfs-auto-snap_h-2019-12-19-11h02
zfs-auto-snap_h-2019-12-19-12h02
zfs-auto-snap_h-2019-12-19-13h02
zfs-auto-snap_h-2019-12-19-14h02
zfs-auto-snap_h-2019-12-19-15h02
zfs-auto-snap_h-2019-12-19-16h02
zfs-auto-snap_h-2019-12-19-17h02
zfs-auto-snap_h-2019-12-19-18h02
zfs-auto-snap_h-2019-12-19-19h02
zfs-auto-snap_h-2019-12-19-20h02
zfs-auto-snap_m-2019-09-01-00h17
zfs-auto-snap_m-2019-10-01-00h17
zfs-auto-snap_m-2019-11-01-00h17
zfs-auto-snap_m-2019-12-01-00h17
zfs-auto-snap_w-2019-11-17-00h12
zfs-auto-snap_w-2019-11-24-00h12
zfs-auto-snap_w-2019-12-01-00h12
zfs-auto-snap_w-2019-12-08-00h12
zfs-auto-snap_w-2019-12-15-00h12


So, ~47 snapshots of ~892 GB of data.  That is ~51 TB.  My backup disks 
are 2.9 TB.



ZFS with de-duplication and compression consumes 1.16 TB for the live 
filesystem plus all snapshots:


2019-12-19 20:39:53 toor@soho2 ~
# zpool list p2
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
p2    4.06T  1.16T  2.90T        -         -     3%    28%  1.13x  ONLINE  -


Multiple rsync destination directories are not an option for me.


David



Re: Home made backup system

2019-12-19 Thread Nate Bargmann
I also use rsnapshot on this machine to backup to another drive in the
same case.  I'd thought about off site, perhaps AWS or such but haven't
spent enough time trying to figure out how I might do that with
rsnapshot.

- Nate

-- 

"The optimist proclaims that we live in the best of all
possible worlds.  The pessimist fears this is true."

Web: https://www.n0nb.us
Projects: https://github.com/N0NB
GPG fingerprint: 82D6 4F6B 0E67 CD41 F689 BBA6 FB2C 5130 D55A 8819





Re: Home made backup system

2019-12-19 Thread Charles Curley
On Thu, 19 Dec 2019 10:45:22 -0700
ghe  wrote:

> How about writing a little script for rsync saying how you want it to
> backup, what to backup, and what not to backup and set cron jobs for
> when you want it to run. In the cron jobs, tell it to write to
> different directories, so as to keep several days of backups.

Or look into rsnapshot, which does all this and more.

-- 
Does anybody read signatures any more?

https://charlescurley.com
https://charlescurley.com/blog/



Re: Home made backup system

2019-12-19 Thread Celejar
On Wed, 18 Dec 2019 12:02:56 -0500
rhkra...@gmail.com wrote:

> Aside / Admission: I don't backup all that I should and as often as I should, 
> so I'm looking for ways to improve.  One thought I have is to write my own 
> backup "system" and use it, and I've thought about that a little, and provide 
> some of my thoughts below.
> 
> A purpose of sending this to the mailing-list is to find out if there already 
> exists a solution (or parts of a solution) close to what I'm thinking about 
> (no sense re-inventing the wheel), or if someone thinks I've overlooked 
> something or making a big mistake.

There are certainly tools that do at least most of what you want. For
example, I use rsnapshot, basically a front-end to rsync that is
designed to harness rsync's power to streamline the taking of
incremental backups.
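
For a flavor of it, a minimal configuration fragment (the paths are
illustrative; rsnapshot insists on tabs, not spaces, between fields):

  snapshot_root   /backup/rsnapshot/
  retain  daily   7
  retain  weekly  4
  backup  /home/  localhost/

with cron entries invoking 'rsnapshot daily' and 'rsnapshot weekly' at
appropriate times.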

...

>* the backups should be in formats such that I can access them by a 
> variety 
> of other tools (as appropriate) if I need to -- if I backup an entire 
> directory or partition, I should be able to easily access and restore any 
> particular file from within that backup, and do so even if encrypted (i.e., 
> encryption would be done by "standard programs" (a bad example might be 
> ccrypt) that I could use "outside" of the backup system.

rsnapshot uses rsync + hardlinks to recreate the portions of
the filesystem that you want to back up (source) to wherever you tell it
to (target). That recreated filesystem can be accessed in any way that
the original filesystem can - no special tools are required for access
or recovery.

>* the bash subroutine (command) that I write should basically do the 
> following:
> 
>   * check that the specified target exists (for things like removable 
> drives or NAS type things) and has (sufficient) space (not sure I can tell 
> that 

rsnapshot does have a check for target availability. I don't think it
can check for sufficient space before initiating a backup - as you note,
it's a tricky thing to do - but it does have a 'du' option to report on
the target's current level of usage.

> until after backup is attempted) (or an encrypted drive that is not mounted / 
> unencrypted, i.e., available to write to)

>   * if the right conditions don't exist (above) tell me (I'm thinking of 
> an email as email is something that always gets my attention, maybe not 
> immediately, but soon enough)

rsnapshot will fail with an error code if something is wrong - assuming
you run it from cron, cron will email the error message.

>   * if the right conditions do exist, invoke the commands to backup the 
> files
> 
>   * if the backup is unsuccessful for any reason, notify me (email again)

As above.

>   * optionally notify me that the backup was successful (at least to the 
> extent of writing something)

By default rsnapshot prints nothing to stdout upon success (although
it does have a 'verbose' option), but it does log a 'success' message to
syslog, which I suppose you can keep an eye on with a log analyzer
(something like logwatch). Alternatively, I just reconfigured my
rsnapshot deployment to run rsnapshot with this wrapper, which results
in a notification for success but not for failure (since rsnapshot
pulls backups from the source, and in my case, the laptop it's
backing up is often not present, I would normally be flooded with
unnecessary failure notices):

*

#!/bin/sh

# usage: 'rsnapshot-script x', where 'x' is a backup interval defined in the
# rsnapshot configuration file

# only attempt the backup if the source host ('lila') answers on the ssh port
if nc -z lila 22 2>/dev/null
then
    echo "Running 'rsnapshot $1' ..."
    if rsnapshot "$1"
    then echo Success
    fi
fi

*

>   * optionally actually do something to confirm that the backup is 
> readable 
> / usable (need to think about what that could be -- maybe write it (to /tmp 
> or 
> to a ramdrive), do something like a checksum (e.g., sha-256 or whatever makes 
> sense) on it and the original file, and confirm they match

rsnapshot has a hook system that allows you to add commands to be run
by it.
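
For instance (the script paths are made up), rsnapshot.conf accepts
entries like:

  cmd_preexec     /usr/local/bin/pre-backup-check.sh
  cmd_postexec    /usr/local/bin/verify-backup.sh

which run before and after each backup pass.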

>   * ???
> 
> All of the commands invoked by the script should be parameters so that the 
> commands can be easily changed in the future (e.g., cp / tar / rsync, sha-256 
> or whatever, ccrypt or whatever, etc.) 

rsnapshot has configuration options 'cmd_cp', 'cmd_rm', 'cmd_rsync',
'cmd_ssh', 'cmd_logger', 'cmd_du' to do exactly that.

> Then the master script (actually probably scripts, e.g. one or more each for 
> hourly, daily, weekly, ... backups) would be invoked by cron (or maybe 
> include 
> the at command? --my computers run 24/7 unless they crash, but for others, at 
> or something similar might be a better choice) would invoke that subroutine / 
> command for each file, directory, or partition to be backed up, specifying 
> the 
> commands to use, what files to backup, where to back them up, encrypted or 
> not, 
> compressed or not, tarred or not, etc.

rsnapshot does all this, via coordination with its configuration file.

Re: Home made backup system

2019-12-19 Thread ghe


How about writing a little script for rsync saying how you want it to
backup, what to backup, and what not to backup and set cron jobs for
when you want it to run. In the cron jobs, tell it to write to different
directories, so as to keep several days of backups.

Not as smart as amanda (it'll backup more than necessary), but I think
it'll do the job with a whole lot less configuration.
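
A minimal sketch of that idea (the paths are illustrative):

  #!/bin/sh
  # one directory per weekday gives a rolling seven days of backups
  dest=/backup/$(date +%a)
  mkdir -p "$dest"
  rsync -a --delete /home/ "$dest/"

driven by a cron entry such as:

  0 2 * * * /usr/local/bin/daily-backup.sh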

I use something like this to backup a domain a thousand miles away.

-- 
Glenn English



Re: Home made backup system

2019-12-19 Thread songbird
Greg Wooledge wrote:
...
> History expansion is a bloody nightmare.  I recommend simply turning
> it off and living without it.  Of course, that's a personal preference,
> and you're free to continue banging your head against it, if you feel
> that the times it helps you outweigh the times that it hurts you.

  i only use a few commands regularly and have them either
aliased or stuck in history for me in my .bashrc
(i start every session by history -c to get rid of
anything and then use history -s "command" so pretty
much my routine when signing on in the morning is to
do !1 and then !2, !3 if i need to do a dist-upgrade.

!1 is apt-get update & fetchnews
!2 is apt-get upgrade
!3 is apt-get dist-upgrade


...
> ... and then, to add insult to injury, the command with the failed history
> expansion isn't even recorded in the shell's history, so you can't just
> "go up" and edit the line.  You have to start all over from scratch, or
> copy and paste the command with the mouse like some kind of Windows user.

  ha, yeah...

  i rarely use shell recording or other tools like
that but once in a while i've been rescued by my
habit of cat'ing the contents of a file to the terminal
to look at it instead of using an editor (and having
an infinite scroll window).


  songbird



Re: Home made backup system

2019-12-19 Thread tomas
On Thu, Dec 19, 2019 at 08:51:51AM -0500, Greg Wooledge wrote:
> On Thu, Dec 19, 2019 at 10:03:57AM +0200, Andrei POPESCU wrote:
> > On Mi, 18 dec 19, 21:42:21, rhkra...@gmail.com wrote:
> > > On Wednesday, December 18, 2019 12:26:04 PM to...@tuxteam.de wrote:
> > > >   #!/bin/bash
> > > >   home=${HOME:-~}
> > 
> > It will set the variable 'home' to the value of the variable 'HOME' if 
> > set (yes, case matters), otherwise to '~'.
> 
> It appears to expand the ~, rather than assigning a literal ~ character
> to the variable.

For bash, it's in the docs:

Quoth the man page:

   ${parameter:-word}
  Use Default Values.  If parameter is unset or null, the expansion
  of word is substituted.  Otherwise, the value of parameter is
  substituted.

For the rest...

I agree that the script is full of bashisms. I usually don't care very
much when it's a script "to use around home". Whenever scripts get
larger or more widely distributed, I put in some effort.

But thanks for your (as always) insightful comments!

[...]

> So, home=${HOME:-~} seems like some sort of belt-and-suspenders fallback
> check in case the script is executed in a context where $HOME hasn't been
> set.  Maybe in a systemd service or something similar?  That's all I
> can think of.

You are right: HOME belongs to the blessed shell variables (in bash, at
least). Moreover, tilde expansion is done, according to the docs, using
HOME.

Quoth (again) the man page:

  HOME   The home directory of the current user; the default argument
 for the cd builtin command.  The value of this variable is
 also used when performing tilde expansion.

In practical terms:

  tomas@trotzki:~$ export HOME=rumpelstilzchen
  tomas@trotzki:/home/tomas$ echo ~
  rumpelstilzchen

:-)

So this whole "fallback to tilde" thing is redundant (at least in bash)!

Cheers
-- tomás




Re: Home made backup system

2019-12-19 Thread Greg Wooledge
On Thu, Dec 19, 2019 at 09:47:03AM +0100, to...@tuxteam.de wrote:
> So this "if" means:
> 
>   if   ## if
>   test ##
>   -z "$home"   ## the value of $home is empty
>   -o   ## or
>   \!   ## there is NOT
>   -d "$home"   ## a directory named "$home"
>## we're homeless.

Expanding on what I said in a previous message, the reason this is not
portable is because parsing this kind of expression is hard, and shells
did not all agree on how to do it.

So rather than try to enforce some kind of difficult parsing within
test, POSIX decided to scrap the whole thing.  In POSIX's wording:

  The XSI extensions specifying the -a and -o binary primaries and the
  '(' and ')' operators have been marked obsolescent. (Many expressions
  using them are ambiguously defined by the grammar depending on the
  specific expressions being evaluated.) Scripts using these expressions
  should be converted to the forms given below.

Shells that don't support binary -o and -a are compliant by default, and
shells that DO support it are simply offering an extension.  BUT, this is
only true for some expressions involving -o and -a.  Not all expressions.

What POSIX actually settled on for the test command is a strict
interpretation based on the number of arguments passed.

https://pubs.opengroup.org/onlinepubs/9699919799/utilities/test.html

  0 arguments:
Exit false (1).

  1 argument:
Exit true (0) if $1 is not null; otherwise, exit false.

  2 arguments:
If $1 is '!', exit true if $2 is null, false if $2 is not null.

If $1 is a unary primary, exit true if the unary test is true,
false if the unary test is false.

Otherwise, produce unspecified results.

  3 arguments:
If $2 is a binary primary, perform the binary test of $1 and $3.

If $1 is '!', negate the two-argument test of $2 and $3.

[OB XSI]  If $1 is '(' and $3 is ')', perform the unary test of
$2.   On systems that do not support the XSI option, the results
are unspecified if $1 is '(' and $3 is ')'.

Otherwise, produce unspecified results.

  4 arguments:
If $1 is '!', negate the three-argument test of $2, $3, and $4.

[OB XSI]  If $1 is '(' and $4 is ')', perform the two-argument
test of $2 and $3.   On systems that do not support the XSI option,
the results are unspecified if $1 is '(' and $4 is ')'.

Otherwise, the results are unspecified.

  >4 arguments:
The results are unspecified.


So... your binary -o and -a are only allowed as extensions in one of the
"results are unspecified" cases, e.g. when there are 5 or more arguments
given to test.  Your code above has 6 arguments, so this is allowable, if
a given shell chooses to attempt it.  Bash is one of the shells that does.

Still, you shouldn't be writing this type of code.  If you're going
to require bash extensions, just go all in and use [[ -z $v || ! -d $v ]]
instead.  Otherwise, string together two test commands.

(Also remember that test -a is a legacy synonym for test -e, so a shell
that wants to parse binary -a first has to figure out whether it's
looking at a unary -a or a binary -a.  Bash's [[ || ]] doesn't have
that problem.)

The POSIX page actually goes into a lot more detail about some of the
historical glitches with test.  It's worth a read.

https://pubs.opengroup.org/onlinepubs/9699919799/utilities/test.html#tag_20_128_16



Re: Home made backup system

2019-12-19 Thread Greg Wooledge
On Thu, Dec 19, 2019 at 10:03:57AM +0200, Andrei POPESCU wrote:
> On Mi, 18 dec 19, 21:42:21, rhkra...@gmail.com wrote:
> > On Wednesday, December 18, 2019 12:26:04 PM to...@tuxteam.de wrote:
> > >   #!/bin/bash
> > >   home=${HOME:-~}
> 
> It will set the variable 'home' to the value of the variable 'HOME' if 
> set (yes, case matters), otherwise to '~'.

It appears to expand the ~, rather than assigning a literal ~ character
to the variable.

wooledg:~$ x=${FOO:-~}; echo "$x"
/home/wooledg

I'm not sure I would trust this, though.  Even if the standards require
this behavior (and I'd have to lawyer my way through them to try to
figure out whether they actually DO require it), I wouldn't trust all
shell implementations to get it right.

And in any case, $HOME and ~ should normally both be the same thing,
so long as the ~ isn't quoted, and the $HOME isn't single-quoted.
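
A quick illustration (generic prompt; the output assumes a user whose
home directory is /home/user):

  $ echo ~ "$HOME" '~'
  /home/user /home/user ~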

   Tilde Expansion
   [...]
   If this login name is the null string, the tilde is replaced with the
   value of the shell parameter HOME.  If HOME is unset, the home
   directory of the user executing the shell is substituted instead.

So, home=${HOME:-~} seems like some sort of belt-and-suspenders fallback
check in case the script is executed in a context where $HOME hasn't been
set.  Maybe in a systemd service or something similar?  That's all I
can think of.

If that's the intent, then I might prefer something more explicit,
and less likely to trigger an obscure shell bug, like:

if [ -z "$HOME" ]; then HOME=~; export HOME; fi

Then you can simply use $HOME in the rest of the script.

(See also .  And if you're
a set -u person, too bad.  Mangle it for -u compatibility yourself.  You
should know how, or else you shouldn't be using -u.)
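
For the record, one -u-safe spelling would be a sketch like this;
"${HOME-}" expands to empty when HOME is unset instead of tripping -u:

  if [ -z "${HOME-}" ]; then HOME=~; export HOME; fi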



Re: Home made backup system

2019-12-19 Thread Greg Wooledge
On Thu, Dec 19, 2019 at 09:53:46AM +0100, to...@tuxteam.de wrote:
> > ...
> > >>   if test -z "$home" -o \! -d "$home" ; then

The main issue here is that the use of the binary -o and -a operators
in "test" or "[" is not portable.  It might work in bash's implementation
of test (sometimes), but you can't count on it in other shells.

The preferred way to write this in a bash script would be:

if [[ -z $home || ! -d $home ]]; then

Or, in an sh script:

if test -z "$home" || test ! -d "$home"; then

or:

if [ -z "$home" ] || [ ! -d "$home" ]; then


> > the backslash is just protecting the ! operator 
> > which is the not operator on what follows.

In the shell, backslash is a form of quoting.  \! is exactly the same as
'!' but it's one character shorter, so you'll see people use the shorter
form a lot of the time.

You don't actually NEED to quote a lone ! character in a shell command.

wooledg:~$ echo hi !
hi !

However, when the ! character is NOT all alone, in bash's default
interactive mode (with history expansion enabled), certain !x
combinations can trigger unexpected and undesired history expansion.

wooledg:~$ set -o histexpand
wooledg:~$ echo hi!!
echo hiset -o histexpand
hiset -o histexpand

So, some people who have run into this in the past have probably
developed a defense mechanism of "always quote ! characters, no matter
what".  Which isn't wrong... but even then, it's not always enough.

History expansion is a bloody nightmare.  I recommend simply turning
it off and living without it.  Of course, that's a personal preference,
and you're free to continue banging your head against it, if you feel
that the times it helps you outweigh the times that it hurts you.
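
If you do want it off permanently, a one-line sketch for ~/.bashrc
(standard bash, interactive shells only):

  set +o histexpand    # equivalently: set +H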

wooledg:~$ set -o histexpand
wooledg:~$ echo "Oh Yeah!.mp3"
bash: !.mp3: event not found

... and then, to add insult to injury, the command with the failed history
expansion isn't even recorded in the shell's history, so you can't just
"go up" and edit the line.  You have to start all over from scratch, or
copy and paste the command with the mouse like some kind of Windows user.



Re: Home made backup system

2019-12-19 Thread tomas
On Wed, Dec 18, 2019 at 10:38:26PM -0500, songbird wrote:
> rhkra...@gmail.com wrote:
> ...
> >>   if test -z "$home" -o \! -d "$home" ; then
> >
> > What does the -o \! do -- hmm, I guess \! is a bash "reference" to the owner 
> > -- 
>   no, -o is logical or in that context.

Yes, exactly: it's not bash operating on that, but test [1],
so for bash it's a plain old parameter passed to test.

> the backslash is just protecting the ! operator 
> which is the not operator on what follows.

Again, this is supposed to be passed to test unharmed,
so the \ is telling bash "nothing to see here, pass
along".

>   i'm not going to go any further with reading
> whatever script that is.  i don't want to be
> here all evening.  ;)

Shell can be entertaining, can't it [2]?

Cheers

[1] Of course, this was a little white lie: there is a /bin/test,
   but for bash "test" is a builtin, so it's part of bash anyway;
   it just behaves as if it were a binary. Oh, my ;-)

[2] Recommended: Greg Wooledge's pages. He's a regular here. He
   knows much more about shells than me!
   https://mywiki.wooledge.org/

-- tomás


signature.asc
Description: Digital signature


Re: Home made backup system

2019-12-19 Thread tomas
On Wed, Dec 18, 2019 at 09:42:21PM -0500, rhkra...@gmail.com wrote:
> Thanks to all who replied!
> 
> This script (or elements of it) looks useful to me, but I don't fully 
> understand it -- I plan to work my way through it -- I have a few questions 
> now, I'm sure I will have more after I get past the first 3 (or more 
> encouraging to me, first 6) lines.
> 
> Questions below:
> 
> On Wednesday, December 18, 2019 12:26:04 PM to...@tuxteam.de wrote:
> > On Wed, Dec 18, 2019 at 12:02:56PM -0500, rhkra...@gmail.com wrote:
> 
> >   #!/bin/bash
> >   home=${HOME:-~}
> 
> What does that line do, or more specifically, what does the :-~ do -- note 
> the 
> following:

The "-" doesn't belong to the "~" but to the ":" ;-)

The construction is (see the section "Parameter Expansion" in the bash
manual):

   ${parameter:-word}
 Use Default Values.  If parameter is unset or null, the expansion
 of word is substituted.  Otherwise, the value of parameter is
 substituted.

("parameter" is the bash manual's jargon for what we colloquially call
"shell variable").

So this means: "if HOME is set, then use that. Otherwise use whatever
tilde ('~') expands to".
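
For illustration, the three cases (a sketch with a throwaway variable):

  unset v;   echo "${v:-fallback}"   # v unset -> fallback
  v='';      echo "${v:-fallback}"   # v null  -> fallback
  v='hello'; echo "${v:-fallback}"   # v set   -> hello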

This is my way to find a home, but to allow the script's user to override
it by setting HOME to some other value.
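
For example (hypothetical invocation, assuming the script is saved as
backup.sh):

  HOME=/home/otheruser ./backup.sh   # back up a different tree this run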

> rhk@s19:/rhk/git_test$ echo ${HOME:-~}
> /home/rhk
> rhk@s19:/rhk/git_test$ echo ${HOME}
> /home/rhk
> 
> >   if test -z "$home" -o \! -d "$home" ; then
> 
> What does the -o \! do -- hmm, I guess \! is a bash "reference" to the owner 
> -- 
> I guess I should look for it in man bash...

No, this exclamation mark ain't for bash -- it's an argument to "test"
which it interprets as "not". Since the "!" can mean something to bash
in some contexts, as you found out, I escaped it with the "\" [1].

So this "if" means:

  if   ## if
  test ##
  -z "$home"   ## the value of $home is empty
  -o   ## or
  \!   ## there is NOT
  -d "$home"   ## a directory named "$home"
   ## we're homeless.

> I'm sure I'll have more questions as I continue, but that is enough for me 
> for 
> tonight.

Questions welcome!

Cheers

[1] Actually, on revisiting things, I would tend to write '!' instead
   of \! these days.

-- tomás


signature.asc
Description: Digital signature


Re: Home made backup system

2019-12-19 Thread Andrei POPESCU
On Mi, 18 dec 19, 21:42:21, rhkra...@gmail.com wrote:
> On Wednesday, December 18, 2019 12:26:04 PM to...@tuxteam.de wrote:
> 
> >   #!/bin/bash
> >   home=${HOME:-~}
> 
> What does that line do, or more specifically, what does the :-~ do -- note 
> the 
> following:

It will set the variable 'home' to the value of the variable 'HOME' if 
set (yes, case matters), otherwise to '~'.

See the bash manpage, section 'Parameter Expansion'.

Kind regards,
Andrei
-- 
http://wiki.debian.org/FAQsFromDebianUser


signature.asc
Description: PGP signature


Re: Home made backup system

2019-12-18 Thread David Christensen

On 2019-12-18 09:02, rhkra...@gmail.com wrote:

[rhkramer's original post quoted in full; elided.  See "Home made backup
system", 2019-12-18, rhkramer, at the end of this thread.]


I wrote and use a homebrew backup and archive solution that started with 
a Perl script to invoke rsync (backup) and tar/gzip (archive) over ssh 
from a central server according to configurable job files.  My thinking was:


1.  Use lowest-common denominator

Re: Home made backup system

2019-12-18 Thread songbird
rhkra...@gmail.com wrote:
...
>>   if test -z "$home" -o \! -d "$home" ; then
>
> What does the -o \! do -- hmm, I guess \! is a bash "reference" to the owner 
> -- 
  no, -o is logical or in that context.
the backslash is just protecting the ! operator 
which is the not operator on what follows.

  i'm not going to go any further with reading
whatever script that is.  i don't want to be
here all evening.  ;)

  when searching the bash man pages you have to
be aware of context as some of the operators
and options are used in many places but have 
quite different meanings.


  songbird



Re: Home made backup system

2019-12-18 Thread Charles Curley
On Wed, 18 Dec 2019 12:02:56 -0500
rhkra...@gmail.com wrote:

> Aside / Admission: I don't backup all that I should and as often as I
> should, so I'm looking for ways to improve.  One thought I have is to
> write my own backup "system" and use it, and I've thought about that
> a little, and provide some of my thoughts below.

There are different backup programs for different purposes. Some
thoughts:
http://charlescurley.com/blog/posts/2019/Nov/02/backups-on-linux/



-- 
Does anybody read signatures any more?

https://charlescurley.com
https://charlescurley.com/blog/



Re: Home made backup system

2019-12-18 Thread rhkramer
Thanks to all who replied!

This script (or elements of it) looks useful to me, but I don't fully 
understand it -- I plan to work my way through it -- I have a few questions 
now, I'm sure I will have more after I get past the first 3 (or more 
encouraging to me, first 6) lines.

Questions below:

On Wednesday, December 18, 2019 12:26:04 PM to...@tuxteam.de wrote:
> On Wed, Dec 18, 2019 at 12:02:56PM -0500, rhkra...@gmail.com wrote:

>   #!/bin/bash
>   home=${HOME:-~}

What does that line do, or more specifically, what does the :-~ do -- note the 
following:

rhk@s19:/rhk/git_test$ echo ${HOME:-~}
/home/rhk
rhk@s19:/rhk/git_test$ echo ${HOME}
/home/rhk

>   if test -z "$home" -o \! -d "$home" ; then

What does the -o \! do -- hmm, I guess \! is a bash "reference" to the owner -- 
I guess I should look for it in man bash...

Hmm, but that means (in bash) the "history number" of the command

"  \! the history number of this command"

> echo "can't backup the homeless, sorry"
> exit 1
>   fi

I'm sure I'll have more questions as I continue, but that is enough for me for 
tonight.

>   backup=/media/backup/${home#/}
>   rsync -av --delete --filter="merge $home/.backup/filter" $home/ $backup/
>   echo -n "syncing..."
>   sync
>   echo " done."
>   df -h
> 
> I mount a USB stick (currently 128G) on /media/backup (the stick has a
> LUKS encrypted file system on it) and invoke backup.
> 
> The only non-quite obvious thing is the option
> 
>   --filter="merge $home/.backup/filter"
> 
> which controls what (not) to back up. This one has a list of excludes
> (much shortened) like so
> 
>   - /.cache/
>   [...much elided...]
>   - /.xsession-errors
>   - /tmp
>   dir-merge .backup-filter
> 
> The last line is interesting: it tells rsync to merge a file .backup-filter
> in each directory it visits -- so I can exclude huge subdirs I don't need
> to keep (e.g. because they are easy to re-build, etc.).
> 
> One example of that: I've a subdirectory virt, where I keep virtual images
> and install media. Then virt/.backup-filter looks like this:
> 
>   + /.backup-filter
>   + /notes
>   - /*
> 
> i.e. "just keep .backup-filter and notes, ignore the rest".
> 
> This scheme has served me well over the last ten years. It does have its
> limitations: it's sub-optimal with huge files, and it probably won't scale
> well for huge amounts of data.
> 
> But it's easy to use and easy to understand.
> 
> Cheers
> -- t



Re: Home made backup system

2019-12-18 Thread elvis
If you don't want to reinvent the wheel, and have more than one computer to 
backup...


try Bacula  www.bacula.org


does everything you want

On 19/12/19 3:02 am, rhkra...@gmail.com wrote:

[rhkramer's original post quoted in full; elided.  See "Home made backup
system", 2019-12-18, rhkramer, at the end of this thread.]


--
If we aren't supposed to eat animals, why are they made of meat?



Re: Home made backup system

2019-12-18 Thread tomas
On Wed, Dec 18, 2019 at 12:02:56PM -0500, rhkra...@gmail.com wrote:
> Aside / Admission: I don't backup all that I should and as often as I should, 
> so I'm looking for ways to improve [...]

> Part of the reason for doing my own is that I don't want to be trapped into 
> using a system that might disappear or change and leave me with a problem.

I just use rsync. The whole thing is driven from a minimalist script:

  #!/bin/bash
  home=${HOME:-~}
  if test -z "$home" -o \! -d "$home" ; then
echo "can't backup the homeless, sorry"
exit 1
  fi
  backup=/media/backup/${home#/}
  rsync -av --delete --filter="merge $home/.backup/filter" $home/ $backup/
  echo -n "syncing..."
  sync
  echo " done."
  df -h

I mount a USB stick (currently 128G) on /media/backup (the stick has a
LUKS encrypted file system on it) and invoke backup.

The only non-quite obvious thing is the option

  --filter="merge $home/.backup/filter"

which controls what (not) to back up. This one has a list of excludes
(much shortened) like so

  - /.cache/
  [...much elided...]
  - /.xsession-errors
  - /tmp
  dir-merge .backup-filter

The last line is interesting: it tells rsync to merge a file .backup-filter
in each directory it visits -- so I can exclude huge subdirs I don't need
to keep (e.g. because they are easy to re-build, etc.).

One example of that: I've a subdirectory virt, where I keep virtual images
and install media. Then virt/.backup-filter looks like this:

  + /.backup-filter
  + /notes
  - /*

i.e. "just keep .backup-filter and notes, ignore the rest".

This scheme has served me well over the last ten years. It does have its
limitations: it's sub-optimal with huge files, and it probably won't scale
well for huge amounts of data.

But it's easy to use and easy to understand.

Cheers
-- t


signature.asc
Description: Digital signature


Re: Home made backup system

2019-12-18 Thread billium

On 18/12/2019 17:02, rhkra...@gmail.com wrote:

[rhkramer's original post quoted in full; elided.  See "Home made backup
system", 2019-12-18, rhkramer, at the end of this thread.]

The rsync web site has some good examples.  There is a daily rotating 
one, and overall backups also.  I use these to back up to a Debian NAS 
and a VPS.





Re: Home made backup system

2019-12-18 Thread Levente
It depends on what you want to back up.  If it is code or text files, use
git.  If it is photos, videos, or mostly binary files, use some script and
magnetic tapes.


Levente

On Wed, Dec 18, 2019, 18:03  wrote:

> [rhkramer's original post quoted in full and truncated by the archive;
> elided.  See "Home made backup system", 2019-12-18, rhkramer, below.]

Home made backup system

2019-12-18 Thread rhkramer
Aside / Admission: I don't back up all that I should, or as often as I should, 
so I'm looking for ways to improve.  One thought I have is to write my own 
backup "system" and use it, and I've thought about that a little, and provide 
some of my thoughts below.

A purpose of sending this to the mailing-list is to find out if there already 
exists a solution (or parts of a solution) close to what I'm thinking about 
(no sense re-inventing the wheel), or if someone thinks I've overlooked 
something or making a big mistake.

Part of the reason for doing my own is that I don't want to be trapped into 
using a system that might disappear or change and leave me with a problem.  (I 
subscribe to a mailing list for one particular backup system, and I wrote to 
that list with my concerns and a little bit of my thoughts about my own system.  
At the time, I was hoping for a "universal" configuration file (a file that 
would specify what, where, when, and how each file, directory, or partition to 
be backed up would be treated), one that could be read and acted upon by a 
great variety of backup programs, and maybe all future ones.)

The only response I got (IIRC) was that since their program was open source, 
it would never go away.  (Yet, if I'm not mixing up backup programs, they were 
transitioning from Python 2 to Python 3 as the underlying language -- 
I'm not sure Python 2 will ever go completely away, or become non-functional, 
but it reinforces my belief / fear that any (complex?) backup program, even 
an open source one, would someday become unusable.)

So, here are my thoughts:

After I thought about (hoped for) a universal config file for backup programs 
and it seeming that no such thing exists (not surprising), I thought I'd try 
to create my own -- this morning as I thought about it a little more (despite 
a headache and a non-working car that I should be working on), I thought that 
the simplest thing for me to do is write a bash script and a bash subroutine, 
something along these lines:

   * the backups should be in formats such that I can access them by a variety 
of other tools (as appropriate) if I need to -- if I backup an entire 
directory or partition, I should be able to easily access and restore any 
particular file from within that backup, and do so even if encrypted (i.e., 
encryption would be done by "standard programs" (a bad example might be 
ccrypt) that I could use "outside" of the backup system.

   * the bash subroutine (command) that I write should basically do the 
following:

  * check that the specified target exists (for things like removable 
drives or NAS type things) and has (sufficient) space (not sure I can tell that 
until after backup is attempted) (or an encrypted drive that is not mounted / 
unencrypted, i.e., available to write to)

  * if the right conditions don't exist (above) tell me (I'm thinking of 
an email as email is something that always gets my attention, maybe not 
immediately, but soon enough)

  * if the right conditions do exist, invoke the commands to backup the 
files

  * if the backup is unsuccessful for any reason, notify me (email again)

  * optionally notify me that the backup was successful (at least to the 
extent of writing something)

  * optionally, actually do something to confirm that the backup is readable 
/ usable (need to think about what that could be -- maybe write it out (to /tmp 
or to a ramdrive), run a checksum (e.g., sha-256 or whatever makes sense) on it 
and on the original file, and confirm they match; see the sketch after this 
list)

  * ???
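
A minimal sketch of that verification step (hypothetical paths;
sha256sum is from GNU coreutils, and mail is whatever mailer front end
your system provides):

  src="$HOME/data/important.tar.gz"      # original file (hypothetical)
  dst="/media/backup/important.tar.gz"   # its backup copy
  if [ "$(sha256sum < "$src")" = "$(sha256sum < "$dst")" ]; then
      echo "backup of $src verified"
  else
      echo "checksum mismatch for $src" | mail -s "backup verify FAILED" you@example.com
  fi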

All of the commands invoked by the script should be parameters so that the 
commands can be easily changed in the future (e.g., cp / tar / rsync, sha-256 
or whatever, ccrypt or whatever, etc.) 

Then the master script (actually probably scripts, e.g. one or more each for 
hourly, daily, weekly, ... backups) would be invoked by cron (or maybe using 
the at command? -- my computers run 24/7 unless they crash, but for others, at 
or something similar might be a better choice) and would invoke that subroutine 
/ command for each file, directory, or partition to be backed up, specifying 
the commands to use, what files to back up, where to back them up, encrypted or 
not, compressed or not, tarred or not, etc.
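
For the cron side, entries along these lines would do it (hypothetical
paths; installed with crontab -e):

  # m  h  dom mon dow  command
  0    *  *   *   *    /home/user/bin/backup-hourly.sh
  30   2  *   *   *    /home/user/bin/backup-daily.sh
  45   3  *   *   0    /home/user/bin/backup-weekly.sh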

In other words, instead of a configuration file, the system would just use bash 
scripts with the appropriate commands, and invoked at the appropriate time by 
cron (or with all backup commands in one script with backup times specified 
with at or similar).

Aside: even if Amanda (for example) will always exist, I don't really want to 
learn anything about it or any other program that might cease to be 
maintained in the future.