Re: Question about Running rsnapshot
Hi,

On 05/05/18 07:45, Martin McCormick wrote:
> I just realized that I goofed when I wrote the name of
> the application that combines multiple drives into one large
> drive. I meant mhddfs, for example:
>
> mhddfs /rsnapshot1,/rsnapshot2 /var/cache/rsnapshot -o mlimit=100M >/dev/null 2>&1

Okay, that explains it. It seems that the underlying file systems are
independent from each other in most respects. The virtual presentation
is interesting, but I wouldn't use this setup for backups, even though
it might be useful for other kinds of workload.

https://romanrm.net/mhddfs

Cheers
A.
Re: Question about Running rsnapshot
On 04-05-2018 16:52, Martin McCormick wrote:
> The backup file system resides on a pair of 256 GB usb
> drives which are ganged together into 1 large drive using mmddfs

You can't have a hard link between files on different drives[0].
mmddfs is probably copying instead of linking, even when it receives an
ln call, if it determines that the files must end up on different
disks.

For your use case, LVM seems more adequate: you'll have only one
filesystem, even if at the disk level it's stored on two disks.

[0] Actually, they must be in the same filesystem.

-- 
Eduardo M KALINOWSKI
edua...@kalinowski.com.br
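[As a rough sketch of the LVM approach suggested above: the device names
/dev/sdb1 and /dev/sdc1 are placeholders for the two USB drives, and
these commands destroy whatever is on them, so adjust before running.]

```shell
# Combine two drives into one logical volume with a single ext4
# filesystem on top, so hard links work across the whole space.
pvcreate /dev/sdb1 /dev/sdc1                      # mark both partitions as LVM physical volumes
vgcreate backup_vg /dev/sdb1 /dev/sdc1            # group them into one volume group
lvcreate -l 100%FREE -n rsnapshot backup_vg       # one logical volume spanning both disks
mkfs.ext4 /dev/backup_vg/rsnapshot                # a single filesystem, so one inode space
mount /dev/backup_vg/rsnapshot /var/cache/rsnapshot
```

Unlike mhddfs, the two disks here are invisible below the filesystem
layer, so rsnapshot's hard links behave normally.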
Re: Question about Running rsnapshot
I am replying to two messages at once.

Andrew McGlashan writes:
> Is it this:
>
> https://www.microchip.com/SWLibraryWeb/product.aspx?product=Memory%20Disk%20Drive%20File%20System

I just realized that I goofed when I wrote the name of the application
that combines multiple drives into one large drive. I meant mhddfs, for
example:

mhddfs /rsnapshot1,/rsnapshot2 /var/cache/rsnapshot -o mlimit=100M >/dev/null 2>&1

The file systems used on these drives are Linux file systems and are
mounted ext4.

Greg Wooledge writes:
> Two (or more) hardlinks to the same file can only exist within a file
> system. If /a and /b are separate file systems, then it is completely
> impossible for /a/file and /b/file to be hardlinks to each other.

That's what confuses me right now, as this obviously works: I can go to
all of those backup directories and cat that file, or any of the several
thousand other files in the backup, and read their contents.

I had a look at rsnapshot, which is a Perl script, and the line that
makes the link is:

print_cmd("ln $srcpath $destpath");

Maybe I am confused about what produces a hard link, but I thought that
did. It ends up looking like all the blocks of that file are in that
directory, when in fact we are reading blocks that were only written
once and only seem to be in multiple file systems.

Martin
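[A plain `ln` call like the one rsnapshot makes does create a hard link,
which is easy to verify in a scratch directory:]

```shell
# Within one filesystem, `ln` makes a second name for the same inode:
# both names print the same inode number and the link count becomes 2.
dir=$(mktemp -d)
echo "hello" > "$dir/original"
ln "$dir/original" "$dir/link"       # same call rsnapshot issues
ls -i "$dir/original" "$dir/link"    # same inode number for both names
stat -c '%h' "$dir/original"         # prints 2: the hard-link count
rm -r "$dir"
```

On an mhddfs mount this can silently behave differently, because the
two names may land on different member filesystems.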
Re: Question about Running rsnapshot
On 05/05/18 05:52, Martin McCormick wrote:
> Andrew McGlashan writes:
>> Have you got your backup areas on different file systems?
>
> I do. The backup file system resides on a pair of 256 GB usb
> drives which are ganged together into 1 large drive using mmddfs

Is it this:

https://www.microchip.com/SWLibraryWeb/product.aspx?product=Memory%20Disk%20Drive%20File%20System

If so, that looks like a FAT-type file system, and FAT doesn't have
inodes the way ext4 (and similar) does.

A.
Re: Question about Running rsnapshot
On Fri, May 04, 2018 at 02:52:05PM -0500, Martin McCormick wrote:
> As I write this, I am beginning to realize that maybe only
> hard links in the same directory structure will reference the
> same inode and that hard links spanning multiple directory trees
> can be different but contain metadata that copy the map of the
> original file which was the one that was on inode 16.

Two (or more) hardlinks to the same file can only exist within a file
system. If /a and /b are separate file systems, then it is completely
impossible for /a/file and /b/file to be hardlinks to each other.
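[One way to see the "same file system" requirement concretely: every
file carries a device number alongside its inode number, and `ln` only
succeeds when source and destination share a device; otherwise it fails
with EXDEV ("Invalid cross-device link").]

```shell
# stat's %d format prints the device a file lives on; two paths can
# only be hard links to each other if their device numbers match.
dir=$(mktemp -d)
touch "$dir/a" "$dir/b"
stat -c '%d %n' "$dir/a" "$dir/b"    # same device number for both paths
rm -r "$dir"
```

An inode number is only unique per device, which is why comparing bare
inode numbers across filesystems is meaningless.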
Re: Question about Running rsnapshot
Andrew McGlashan writes:
> Hi,
>
> Have you got your backup areas on different file systems?

I do. The backup file system resides on a pair of 256 GB USB drives
which are ganged together into 1 large drive using mmddfs and then
mounted on /var/cache/rsnapshot.

The very first backup I took was the full backup, and that particular
file showed an inode number of 16. The remaining two backups are
halfday.0 and halfday.1. Each of those has a working version of that
same file; one has an inode number of 11 while the other has an inode
number of 6. Each of these backups is a different tree, with the vast
majority of files being hard links to the first backup, which copied
the contents of the working drive to the backup drive.

As I write this, I am beginning to realize that maybe only hard links
in the same directory structure will reference the same inode, and that
hard links spanning multiple directory trees can be different but
contain metadata that copies the map of the original file, which was
the one on inode 16. If that is correct, then thank you for helping my
slow mind to think, since I was confused and expected all the hard
links to have an inode of 16.

The purpose behind this exercise is to make sure that the backups I am
creating are good, as the first (full) backup used about 25% of the
total space and the other two backups hardly added any more. I was able
to read the contents of the file on each of the subsequent two backups,
so it definitely is working.

Martin McCormick
Re: Question about Running rsnapshot
Hi,

On 05/05/18 03:40, Martin McCormick wrote:
> rsnapshot hard-links files that haven't changed to save space. I
> am doing two half-day backups and a daily each day. Shouldn't
> the inode number as in ls -i filename stay the same for all the
> backups? There is a daily.0 file plus a halfday.0 and a
> halfday.1 backup and all 3 have different inode numbers for the
> same file which hasn't changed since 2008. Doing ls -l on that
> file does show 3 links to it. On the working drive, there is
> only 1 which is what it should be.

Have you got your backup areas on different file systems?

4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 daily.0/server/root/etc/passwd
4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 daily.1/server/root/etc/passwd
4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 daily.2/server/root/etc/passwd
4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 daily.3/server/root/etc/passwd
4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 daily.4/server/root/etc/passwd
4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 daily.5/server/root/etc/passwd
4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 daily.6/server/root/etc/passwd
4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 hourly.0/server/root/etc/passwd
4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 hourly.10/server/root/etc/passwd
4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 hourly.11/server/root/etc/passwd
4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 hourly.12/server/root/etc/passwd
4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 hourly.13/server/root/etc/passwd
4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 hourly.14/server/root/etc/passwd
4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 hourly.15/server/root/etc/passwd
4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 hourly.1/server/root/etc/passwd
4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 hourly.2/server/root/etc/passwd
4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 hourly.3/server/root/etc/passwd
4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 hourly.4/server/root/etc/passwd
4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 hourly.5/server/root/etc/passwd
4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 hourly.6/server/root/etc/passwd
4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 hourly.7/server/root/etc/passwd
4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 hourly.8/server/root/etc/passwd
4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 hourly.9/server/root/etc/passwd
4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 monthly.0/server/root/etc/passwd
4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 monthly.1/server/root/etc/passwd
4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 monthly.2/server/root/etc/passwd
4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 monthly.3/server/root/etc/passwd
4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 monthly.4/server/root/etc/passwd
4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 monthly.5/server/root/etc/passwd
4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 monthly.6/server/root/etc/passwd
4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 monthly.7/server/root/etc/passwd
4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 monthly.8/server/root/etc/passwd
4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 weekly.0/server/root/etc/passwd
4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 weekly.1/server/root/etc/passwd
4720525 -rw-r--r-- 35 root root 1226 Oct 12 2016 weekly.2/server/root/etc/passwd

Cheers
A.
Question about Running rsnapshot
rsnapshot hard-links files that haven't changed to save space. I am
doing two half-day backups and a daily each day. Shouldn't the inode
number, as in ls -i filename, stay the same for all the backups? There
is a daily.0 backup plus a halfday.0 and a halfday.1 backup, and all 3
have different inode numbers for the same file, which hasn't changed
since 2008. Doing ls -l on that file does show 3 links to it. On the
working drive, there is only 1, which is what it should be.

Today, daily.0 represents the full backup of the system, and tomorrow
daily.1 will be a renamed version of today's daily.0 directory. df -h
looks good in that disk usage appears to be the equivalent of one full
backup even with 3 intervals of backup now, so I am curious as to why
the inode numbers for hard links to the same file are different.

Thanks.

Martin McCormick
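[For reference, the behavior being asked about can be reproduced on a
single filesystem with `cp -al`, which builds a hard-linked copy of a
tree much the way rsnapshot rotates its snapshot directories:]

```shell
# On one filesystem, a hard-linked snapshot copy shares inodes with
# the original: ls -i prints the same inode number in both trees.
dir=$(mktemp -d)
mkdir "$dir/daily.0"
echo "unchanged since 2008" > "$dir/daily.0/file"
cp -al "$dir/daily.0" "$dir/daily.1"   # -l: hard-link instead of copying data
ls -i "$dir"/daily.*/file              # same inode number in both snapshots
rm -r "$dir"
```

When the inode numbers differ across snapshots, as described above,
that is a sign the names do not actually share one inode, which on an
mhddfs mount can happen because the snapshots live on different member
filesystems.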