Hello Joe,

Thanks for your reply. But think about real-world users: /home is not always on a separate disk partition, and neither are /tmp, /var/tmp, or /usr/tmp. Think about an OpenVZ VPS, or a disk with everything on / (as on most cloud servers). Rsync is used on a lot of production servers as a better tool for file backups, and in the case of a hosting server we can't always trust every hosting user on the machine.

Also, leave /etc/shadow aside for a moment and say there are two users, /home/foo and /home/fun, and the user fun creates a hard link to /home/foo/joomla/configuration.php, which contains the database credentials of user foo's Joomla site. He may create hard links like this to every folder and file inside /home. Then simply requesting a restore will bring those files back in a form he can read, and he can wipe out everything. Let me show you how a user can create a hard link:
----------
dom2i...@dom2.inhouse.co.in [~/public_html]# id
uid=507(dom2inho) gid=508(dom2inho) groups=508(dom2inho)
dom2i...@dom2.inhouse.co.in [~/public_html]# ll /etc/shadow
--w------- 2 root root 1344 Aug 12 14:30 /etc/shadow
dom2i...@dom2.inhouse.co.in [~/public_html]# ln /etc/shadow ./shadow
dom2i...@dom2.inhouse.co.in [~/public_html]# ll shadow
--w------- 3 root root 1344 Aug 12 14:30 shadow
dom2i...@dom2.inhouse.co.in [~/public_html]#
--------

The issue is that hard links cannot be created across filesystems, but with rsync we can copy the linked file to a remote folder and change its ownership. If we then restore it with rsync from that remote location back into the user's home, it comes back as a regular file instead of the original hard link. So my humble request is: if rsync had an option to exclude hard links, it would be a good feature. The check could be something like the following:

--------------
#include <sys/types.h>
#include <sys/stat.h>
#include <stdio.h>
#include <stdlib.h>

int
main(int argc, char *argv[])
{
    struct stat sb;

    if (argc != 2) {
        fprintf(stderr, "Usage: %s <pathname>\n", argv[0]);
        exit(EXIT_FAILURE);
    }

    if (stat(argv[1], &sb) == -1) {
        perror("stat");
        exit(EXIT_FAILURE);
    }

    printf("File type:                ");

    switch (sb.st_mode & S_IFMT) {
    case S_IFREG:
        printf("regular file\n");
        /* Detect hard links only for regular files: a link count
         * greater than 1 means more than one name points at this
         * inode. */
        if (sb.st_nlink > 1)
            printf("This file is a hard link: %s\n", argv[1]);
        break;
    default:
        printf("unknown?\n");
        break;
    }

    exit(EXIT_SUCCESS);
}
------------------
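
For example, a hypothetical run against the hard link created in the transcript above (the source file name and the path are only illustrative):

--------
$ gcc -o hardlink-check hardlink-check.c
$ ./hardlink-check /home/dom2inho/public_html/shadow
File type:                regular file
This file is a hard link: /home/dom2inho/public_html/shadow
--------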

Thanks for your replies, Joe and Paul.


On Tuesday 13 August 2013 01:33 PM, Joe wrote:
I'm going to give this one more shot and then wait for the experts to
weigh in.

I'll stick with your example of /etc/shadow, but this applies to any
secured file on the system.

On my system /etc/shadow is 640 (by default), so, as a normal user, I
can't even see it (other than to see that it exists) to do anything with it.

Probably any user who can see it to link to it could copy it too. Once
copied, it could be decrypted at leisure.
The point is to secure it to begin with, then any normal backup should
not be an issue.

Since my /home is on its own partition, I can't even attempt a hard link:
bigbird@ramdass:~/sandbox$ ln /etc/shadow .
ln: failed to create hard link `./shadow' => `/etc/shadow': Invalid
cross-device link

My /tmp is in the same partition as /etc, but
bigbird@ramdass:~/sandbox$ ln /etc/shadow /tmp
ln: failed to create hard link `/tmp/shadow' => `/etc/shadow': Operation
not permitted

As a regular user, my home directory is pretty much the only place I
have write access to (aside from /tmp which works the same way for most
practical purposes.)

My simplistic understanding of the Linux file system is that each file
consists of device storage which is mapped in a single inode (unless
there is some fancy stuff to handle extremely large files). The inode
number is then recorded in a file table entry somewhere - and that's
essentially what a hard link is. If there is more than one hard link
to the file (additional file table entries with the same inode number in
them), then a counter in the inode (*not* in the file table entry, etc.)
is incremented for each additional one.

Each of these "files" or hard links is just a file table entry that
points to the inode. No one of them is distinguished as the "original"
one as opposed to one created subsequently. There's no useful way to
tell them apart other than their differing positions in the file
table/directory structure and the fact that they can have different names.

The counter is in the inode so that it can be decremented as file table
entries are deleted. If it gets to zero, then there is no file table
entry pointing to that inode and the system can then release the device
storage that was mapped/reserved in the inode so that it can be reused.
This is one of the things fsck checks for when repairing damaged file
tables.
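
A quick way to see this (a minimal sketch, nothing to do with rsync itself, and the two paths you pass it are up to you): stat() reports the same inode number and the same link count for two names that are hard links to each other.

--------------
#include <sys/stat.h>
#include <stdio.h>
#include <stdlib.h>

/* Compare two pathnames: if stat() reports the same device and inode
 * number for both, the two names are hard links to one and the same
 * inode, and st_nlink is the shared link count stored in that inode. */
int
main(int argc, char *argv[])
{
    struct stat a, b;

    if (argc != 3) {
        fprintf(stderr, "Usage: %s <path1> <path2>\n", argv[0]);
        exit(EXIT_FAILURE);
    }
    if (stat(argv[1], &a) == -1 || stat(argv[2], &b) == -1) {
        perror("stat");
        exit(EXIT_FAILURE);
    }

    if (a.st_dev == b.st_dev && a.st_ino == b.st_ino)
        printf("same inode %lu, link count %lu\n",
               (unsigned long) a.st_ino, (unsigned long) a.st_nlink);
    else
        printf("different files\n");

    exit(EXIT_SUCCESS);
}
--------------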

In short, 1) There is really no such thing as a file with more than one
link to it. It's the inode and there's only one for each allocated set
of device storage. 2) If a file is secure to start with, nobody without
elevated security permissions should be able to access it insecurely,
including linking to it or copying it.

If someone you don't trust completely has access to elevated permissions
on your system, then backup is the least of your problems.

Joe

On 08/13/2013 03:00 AM, Sherin A wrote:
On Tuesday 13 August 2013 12:23 PM, Joe wrote:
Is there any way at all to say which is the original file and which is
the hard link? I'll bet there isn't, although I'm not an internals guy
at all. If so, this would be impossible to do. The inode is the
"original", but all the file table entries to it are hard links (if
they're not symlinks.)

I guess the question is, what do you really want to accomplish?

The fact that more than one hard link exists probably means it really
does need to be backed up - or that the hard link shouldn't be there in
the original file system.

Joe

On 08/13/2013 01:11 AM, Sherin A wrote:
Can someone create a patch for excluding "hard link regular files"
from copying? Maybe something like a command flag, rsync
--no-hardlink-copy ....

Hello Jose,

   I think it is possible to check whether a file is a regular file and
whether it has more than one link (you can check this with the stat
system call).

  The situation is that we have an rsync command on a server which
copies the files of local users to a remote server/filesystem, and can
also restore them; it is a simple backup. But if a user creates a hard
link to /etc/shadow from his home directory and then requests a
restore, he can read the shadow file and decrypt it.

  So if there is an option to avoid hard links during the copy process,
it will add extra security. It only needs the following check
condition:

  1) Check whether the file about to be copied is a regular file with
more than one link (a rough sketch follows below).
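
  Roughly, the check could look like this standalone predicate (a
sketch only; skip_multiply_linked is a made-up name, not rsync's
internal API):

--------------
#include <sys/stat.h>

/* Hypothetical predicate for a --no-hardlink-copy style option:
 * returns 1 if path is a regular file whose inode has more than one
 * name (so the copy should skip it), 0 otherwise or on stat error. */
static int
skip_multiply_linked(const char *path)
{
    struct stat sb;

    if (lstat(path, &sb) == -1)
        return 0;               /* let the caller report the error */
    return S_ISREG(sb.st_mode) && sb.st_nlink > 1;
}
--------------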

  Also, we have checked a lot of other third-party software that uses
rsync, all of which is exposed to this same exploit.

Let me know if you need a PoC.



--
--------------------------------------
Regards
Sherin A
http://www.sherin.co.in/

