Re: Can't hardlink in different dirs. (BUG#826)

1999-12-08 Thread Alexander Viro



On 9 Dec 1999, Kjetil Torgrim Homme wrote:

> [Alexander Viro]
> 
> > Huh? If attacker can link something outside of chroot jail you
> >   are _already_ screwed - he can just access it directly.
> 
> A local user can make restricted files available for anonymous FTP if
> he has write access somewhere inside the jail.  A bit far-fetched, I
> admit.  A more plausible scenario is a user jailed in order to have
> access to special resources in a safe manner, and an accomplice on the
> outside giving him access to additional (setuid) programs by linking
> them into the jail.

... or opens it and passes to attacker's process via anonymous AF_UNIX
socket.
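
(A minimal user-space sketch of that trick, not from the original mail: a
jailed process opens a file it can reach and hands the open descriptor to an
accomplice over an AF_UNIX socket with SCM_RIGHTS, after which the outside
process can read it regardless of the jail boundary.  The file name and
socket path below are made up for illustration.)

#include <fcntl.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <sys/un.h>
#include <unistd.h>

static int send_fd(int sock, int fd)
{
        struct msghdr msg;
        struct iovec iov;
        struct cmsghdr *cmsg;
        char ctrl[CMSG_SPACE(sizeof(int))];
        char dummy = 'x';

        memset(&msg, 0, sizeof(msg));
        iov.iov_base = &dummy;
        iov.iov_len = 1;
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = ctrl;
        msg.msg_controllen = sizeof(ctrl);

        cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

        return sendmsg(sock, &msg, 0);          /* kernel duplicates the fd */
}

int main(void)
{
        struct sockaddr_un sun;
        int sock = socket(AF_UNIX, SOCK_STREAM, 0);
        int fd = open("secret", O_RDONLY);      /* reachable inside the jail */

        memset(&sun, 0, sizeof(sun));
        sun.sun_family = AF_UNIX;
        strcpy(sun.sun_path, "/tmp/handoff");   /* rendezvous socket, hypothetical */
        if (sock >= 0 && fd >= 0 &&
            connect(sock, (struct sockaddr *)&sun, sizeof(sun)) == 0)
                send_fd(sock, fd);              /* the accomplice recvmsg()s it */
        return 0;
}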

> >   > Think chmod() (by the admin after the "rogue" link()).
   
> > s/admin/luser with root/. That's what find(1) is for. Blind
> >   recursive _anything_ in places you don't control is asking for
> >   trouble.
> 
> Huh?  I'm thinking:

>  A creates a file which will contain a list of all students and their
>    grades ("umask 002" rules :-)
>  B links to the file
>  A realizes he forgot to protect the directory
>  A fills in the list
>  B reads the list at leisure, and makes a fortune on blackmail.  Or
>    perhaps on private lessons.
> 
> The nlink is easily overlooked.

I don't see where the admin is mentioned in that scenario, but anyway: if you
are creating a file with sensitive information - why the heck would you
make it world-readable in the first place? chmod on a directory might be a
good idea, but chmod 600 on the file is exactly what you need if you want
to stop other people from touching that file. What's so complex about
the idea that permissions on a file are there to protect the file's contents?

The only case when the admin is involved (aside from applying the clue-by-four
to participants of the story) is when the admin uses root for such data, i.e.
uses root for no good reason. IMO "luser with root" is perfectly accurate
in that case.



Re: Can't hardlink in different dirs. (BUG#826)

1999-12-08 Thread Kjetil Torgrim Homme

[Alexander Viro]

>   Huh? If attacker can link something outside of chroot jail you
>   are _already_ screwed - he can just access it directly.

A local user can make restricted files available for anonymous FTP if
he has write access somewhere inside the jail.  A bit far-fetched, I
admit.  A more plausible scenario is a user jailed in order to have
access to special resources in a safe manner, and an accomplice on the
outside giving him access to additional (setuid) programs by linking
them into the jail.

>   > Think chmod() (by the admin after the "rogue" link()).
>   
>   s/admin/luser with root/. That's what find(1) is for. Blind
>   recursive _anything_ in places you don't control is asking for
>   trouble.

Huh?  I'm thinking:

 A creates a file which will contain a list of all students and their
   grades ("umask 002" rules :-)
 B links to the file
 A realizes he forgot to protect the directory
 A fills in the list
 B reads the list at leisure, and makes a fortune on blackmail.  Or
   perhaps on private lessons.

The nlink is easily overlooked.
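
(A tiny sketch, not from the original mail, of the check this scenario argues
for: before trusting directory permissions alone, look at st_nlink to see
whether somebody already holds another hard link to the file.  The file name
is made up.)

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
        struct stat st;

        /* If the link count is above 1, chmod on the directory is not
           enough; chmod the file itself. */
        if (stat("grades.txt", &st) == 0 && st.st_nlink > 1)
                fprintf(stderr, "warning: %lu hard links exist\n",
                        (unsigned long) st.st_nlink);
        return 0;
}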

>   > Hardlinks are seldom used by ordinary users, anyway, since they
>   > can't cross device boundaries.
>
>   Absolute BS. Subtrees from the homedir rarely span over
>   several filesystems.

Of course, but I own or have write access to all the files in my
homedir, so that doesn't matter.  Our users are spread out on 108(!)
different partitions.  The odds that two students cooperating on some
project are on the same partition are minuscule.

I think the advantages outweigh the disadvantages, but it should be a
mount option, of course.


Kjetil T.



Re: Can't hardlink in different dirs. (BUG#826)

1999-12-08 Thread Alexander Viro



On 9 Dec 1999, Kjetil Torgrim Homme wrote:

> [Alexander Viro]
> 
> > Again, until you've removed your link _other_ links do not
> >   matter.  And once you've removed it, it will not be used to create
> >   new ones anyway.  I still don't see anything you could buy by
> >   prohibiting link().
> 
> Think chroot(). 

Huh? If attacker can link something outside of chroot jail you are
_already_ screwed - he can just access it directly.

> Think chmod() (by the admin after the "rogue"
> link()).

s/admin/luser with root/. That's what find(1) is for. Blind
recursive _anything_ in places you don't control is asking for trouble.
And I'd rather live without "Linux-only admins" if they have the slightest
chance to get root on any other box. E.g. on a box with an older version of
the Linux kernel.
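
(A sketch, not from the mail, of the kind of find(1) sweep meant here -
roughly "find /home -xdev -type f -perm -4000 -links +1" - done with nftw():
report setuid files carrying extra links, i.e. candidates for a "rogue"
link() made before the admin's chmod.  The starting directory is an
assumption.)

#define _XOPEN_SOURCE 500
#include <ftw.h>
#include <stdio.h>
#include <sys/stat.h>

static int check(const char *path, const struct stat *st,
                 int type, struct FTW *ftw)
{
        if (type == FTW_F && (st->st_mode & S_ISUID) && st->st_nlink > 1)
                printf("suspect setuid file, %lu links: %s\n",
                       (unsigned long) st->st_nlink, path);
        return 0;
}

int main(void)
{
        /* FTW_PHYS: do not follow symlinks; FTW_MOUNT: stay on one fs */
        return nftw("/home", check, 20, FTW_PHYS | FTW_MOUNT);
}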

> I think the change in semantics is reasonable.  Hardlinks
> are seldom used by ordinary users, anyway, since they can't cross
> device boundaries.

Absolute BS. Subtrees from the homedir rarely span over several
filesystems.



Re: Can't hardlink in different dirs. (BUG#826)

1999-12-08 Thread Kjetil Torgrim Homme

[Alexander Viro]

>   Again, until you've removed your link _other_ links do not
>   matter.  And once you've removed it, it will not be used to create
>   new ones anyway.  I still don't see anything you could buy by
>   prohibiting link().

Think chroot().  Think chmod() (by the admin after the "rogue"
link()).  I think the change in semantics is reasonable.  Hardlinks
are seldom used by ordinary users, anyway, since they can't cross
device boundaries.


Kjetil T.



Re: Oops with ext3 journaling

1999-12-08 Thread Stephen C. Tweedie

Hi,

On Wed, 8 Dec 1999 17:28:49 -0500, "Theodore Y. Ts'o" <[EMAIL PROTECTED]>
said:

> Never fear, there will be a very easy way to switch back and forth
> between ext2 and ext3.  A single mount command, or at most a single
> tune2fs command, should be all that it takes, no matter how the
> journal is stored.

Absolutely.  I am 100% committed to this.  Apart from anything else,
this is the mechanism which will allow for incompatible revisions of
the ext3 journal format: you will always be able to mount a journaled
ext3 partition as ext2, and then remount as the new ext3, if you want
to upgrade ext3 partitions to a new, incompatible format (which should
not happen after final release, but there will be at least one such
incompatible format revision required in the next month or so).

Any such format changes will be limited to the journal format: there
will be no journaling changes which prevent backing the filesystem
revision back down to ext2.  Ever.

--Stephen



Re: Oops with ext3 journaling

1999-12-08 Thread Theodore Y. Ts'o

   Date:   Mon, 6 Dec 1999 22:52:18 +0100
   From: Pavel Machek <[EMAIL PROTECTED]>

   > No, and I'm pretty much convinced now that I'll move to having a
   > private, hidden inode for the journal in the future.

   Please don't do that.  The current way of switching between ext2 and ext3
   is very nice.  If someone wants to shoot themselves in the foot...

Never fear, there will be a very easy way to switch back and forth
between ext2 and ext3.  A single mount command, or at most a single
tune2fs command, should be all that it takes, no matter how the journal
is stored.

- Ted



Re: Minimal fs module won't unmount

1999-12-08 Thread Erez Zadok

In message <[EMAIL PROTECTED]>, Malcolm Beattie writes:
> I sent this to linux-kernel 10 days ago but got zero responses so I'll
> try here in case I get luckier.

I don't recall seeing your message from 10 days ago.  Maybe it didn't reach
others either.

> I'm writing a little "fake" filesystem module which, when mounted on a
> mountpoint, makes the root of the new filesystem be a "magic" symlink.
> It all works fine except that the filesystem won't unmount. strace
> shows that oldumount is returning EINVAL. What the "magic symlink"
> does isn't important here and the following cut-down version displays
> the same problem. Is it something to do with the fact that the core fs
> code expects the root of the new filesystem to be an ordinary
> directory or am I missing something else? Here's the cut-down module
> which simply makes follow_link appear to be your cwd. You can compile
> (under 2.2 or 2.3, though 2.3 is untested) by
> 
> cc -c -D__KERNEL__ -Wall -Wstrict-prototypes -O2 -fomit-frame-pointer -pipe 
>-fno-strength-reduce -m486 -DCPU=486 -DMODULE -DMODVERSIONS -include 
>/usr/src/linux/include/linux/modversions.h -I/usr/src/linux/include nullfs.c
> 
> (modifying arch-specific options as necessary) and then doing
> # insmod ./nullfs.o
> # mkdir /tmp/nullcwd
> # mount -t null none /tmp/nullcwd
> # ls -l /tmp/nullcwd
> lrwxrwxrwx   1 root     root            0 Nov 30 12:21 /tmp/nullcwd -> foo
> # ls -l /tmp/nullcwd/
> ...listing of your current working directory...
> # umount /tmp/nullcwd
> umount: none: not found
> umount: /tmp/foo: not mounted
> How can I get it to umount properly?

I've not looked at your code, but you might want to see what I do in my
wrapfs/lofs during mount and unmount.  Usually the main reason why something
won't unmount is that you're holding some resources (inodes, dentries, etc.),
in which case you get EBUSY.

If you're getting EINVAL, the question is where?  Is your code being invoked
at all, or is the VFS returning this EINVAL itself?  If your code isn't called, then
search the VFS (starting w/ do_umount) to find what code path could return
you an EINVAL.  I personally found out that it's faster (and more fun :-) to
debug VFS code myself by sticking printf's at certain places and building a
test kernel with that.
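
(One cheap way - a sketch, not Erez's code - to answer the "is my code being
invoked at all?" question first: drop a printk into the module's own entry
points, e.g. a drop-in variant of the null_put_super from the nullfs module
in Malcolm's message below, and watch the kernel log while the umount fails.)

static void null_put_super(struct super_block *sb)
{
        printk(KERN_DEBUG "nullfs: put_super reached\n");
        MOD_DEC_USE_COUNT;
}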

BTW, just to avoid any potential problems, mount w/ the real mnt point name
instead of 'none'.

Erez.



Minimal fs module won't unmount

1999-12-08 Thread Malcolm Beattie

I sent this to linux-kernel 10 days ago but got zero responses so I'll
try here in case I get luckier.

I'm writing a little "fake" filesystem module which, when mounted on a
mountpoint, makes the root of the new filesystem be a "magic" symlink.
It all works fine except that the filesystem won't unmount. strace
shows that oldumount is returning EINVAL. What the "magic symlink"
does isn't important here and the following cut-down version displays
the same problem. Is it something to do with the fact that the core fs
code expects the root of the new filesystem to be an ordinary
directory or am I missing something else? Here's the cut-down module
which simply makes follow_link appear to be your cwd. You can compile
(under 2.2 or 2.3, though 2.3 is untested) by

cc -c -D__KERNEL__ -Wall -Wstrict-prototypes -O2 -fomit-frame-pointer -pipe 
-fno-strength-reduce -m486 -DCPU=486 -DMODULE -DMODVERSIONS -include 
/usr/src/linux/include/linux/modversions.h -I/usr/src/linux/include nullfs.c

(modifying arch-specific options as necessary) and then doing
# insmod ./nullfs.o
# mkdir /tmp/nullcwd
# mount -t null none /tmp/nullcwd
# ls -l /tmp/nullcwd
lrwxrwxrwx   1 root     root            0 Nov 30 12:21 /tmp/nullcwd -> foo
# ls -l /tmp/nullcwd/
...listing of your current working directory...
# umount /tmp/nullcwd
umount: none: not found
umount: /tmp/foo: not mounted
How can I get it to umount properly?

-- nullfs.c --
#define NULL_ROOT_INO 1
#define NULL_SUPER_MAGIC 0x04ab

#include <linux/module.h>
#include <linux/version.h>
#include <linux/fs.h>
#include <linux/sched.h>
#include <linux/locks.h>
#include <linux/stat.h>
#include <linux/mm.h>
#include <asm/uaccess.h>

static struct dentry *
null_follow_link(struct dentry *dentry, struct dentry *base, unsigned int follow)
{
return dget(current->fs->pwd);
}

static int null_readlink(struct dentry *dentry, char *buffer, int buflen)
{
if (buflen > 4)
buflen = 4;
copy_to_user(buffer, "foo", buflen);
return buflen;
}

static struct file_operations null_file_operations = {
0   /* all of them */
};

static struct inode_operations null_inode_operations = {
&null_file_operations,  /* fops */
0,  /* create */
0,  /* lookup */
0,  /* link */
0,  /* unlink */
0,  /* symlink */
0,  /* mkdir */
0,  /* rmdir */
0,  /* mknod */
0,  /* rename */
null_readlink,  /* readlink */
null_follow_link,   /* follow_link */
0   /* and all the rest are zero */
};

static void null_put_inode(struct inode *inode)
{
if (inode->i_count == 1)
inode->i_nlink = 0;
}

static int null_statfs(struct super_block *sb, struct statfs *buf, int bufsiz)
{
struct statfs tmp;

tmp.f_type = NULL_SUPER_MAGIC;
tmp.f_bsize = PAGE_SIZE/sizeof(long);
tmp.f_blocks = 0;
tmp.f_bfree = 0;
tmp.f_bavail = 0;
tmp.f_files = 0;
tmp.f_ffree = 0;
tmp.f_namelen = NAME_MAX;
return copy_to_user(buf, &tmp, bufsiz) ? -EFAULT : 0;
}

static void null_read_inode(struct inode * inode)
{
if (inode->i_ino != NULL_ROOT_INO)
printk("nullfs: bad inode number %lu\n", inode->i_ino);

inode->i_mode = S_IFLNK | S_IRUGO | S_IWUGO | S_IXUGO;
inode->i_uid = 0;
inode->i_gid = 0;
inode->i_nlink = 1;
inode->i_size = 0;
inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME;
inode->i_blocks = 0;
inode->i_blksize = 1024;
inode->i_op = &null_inode_operations;
}

static void null_put_super(struct super_block *sb)
{
MOD_DEC_USE_COUNT;
}

static struct super_operations null_sops = { 
null_read_inode,
0,  /* write_inode */
null_put_inode,
0,  /* delete_inode */
0,  /* notify_change */
null_put_super,
0,  /* write_super */
null_statfs,
0,  /* remount_fs */
0,  /* clear_inode */
0   /* umount_begin */
};

static struct super_block *null_read_super(struct super_block *sb, void *data, 
  int silent)
{
struct inode *inode;

MOD_INC_USE_COUNT;
lock_super(sb);
sb->s_blocksize = 1024;
sb->s_blocksize_bits = 10;
sb->s_magic = NULL_SUPER_MAGIC;
sb->s_op = &null_sops;

inode = iget(sb, 1);
if (!inode)
goto out_no_root;
#if LINUX_VERSION_CODE < 0x020300
sb->s_root = d_alloc_root(inode, 0);
#else
sb->s_root = d_alloc_root(inode);
#endif
if (!s
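
(The message is truncated above.  Purely as a sketch - not Malcolm's original
code - the usual 2.2-style tail of such a module, i.e. the error path,
file_system_type registration and module init/cleanup, might look like the
following; the structure name and messages are assumptions.)

        if (!sb->s_root)
                goto out_no_root;
        unlock_super(sb);
        return sb;

out_no_root:
        printk("nullfs: get root inode failed\n");
        iput(inode);
        sb->s_dev = 0;
        unlock_super(sb);
        MOD_DEC_USE_COUNT;
        return NULL;
}

static struct file_system_type null_fs_type = {
        "null",                 /* name used with mount -t */
        0,                      /* fs_flags */
        null_read_super,        /* read_super */
        NULL                    /* next */
};

int init_module(void)
{
        return register_filesystem(&null_fs_type);
}

void cleanup_module(void)
{
        unregister_filesystem(&null_fs_type);
}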

Re: Web FS Q

1999-12-08 Thread Erez Zadok

In message <[EMAIL PROTECTED]>, "David Bialac" writes:
> For fun (and because I think it might be a useful feature), I'm working 
> on a filesystem that allows a website to be mounted as a local 
> filesystem.  I'm starting to dive in, and successfully have the kernel 
> recognizing that webfs exists, so it's now time to write some socket 
> code.  Amongst the things I want to put into this system is caching of 
> server data locally, specifically on the local filesystem.  The 
> question I have is, can one filesystem ask to write to another?  I 
> don't see anything in there that seems to attempt to do this, so I 
> need to be sure it is possible.

As others have already said, this isn't a new idea, and there are several
alternatives you should look at first.  Also there are issues wrt mapping
the HTTP protocol to a file-system interface that you should be aware of.  I
believe this was discussed again in linux-fsdevel and the freebsd-fs mailing
lists just in the past 4-6 weeks.

That said, there's nothing wrong with doing such a project for fun.  But if
you can find something that hasn't been done before, that would be even better.

If you think that stackable file systems could help your project, see my
stackable f/s templates work

http://www.cs.columbia.edu/~ezk/research/software/

> Why this is not as stupid as it sounds:  Imagine the internet-enabled 
> appliance scenario: today, if say a DVD manufacturer has a glitch in 
> their DVD player, the only fix is to take it in for repair.  If the 
> device was internet-enabled, and further read its software off the web, 
> it could conceivably update software on the fly without the inconvenience 
> of the user going without his player.  Another scenario: you could save 
> your files to a website run anywhere, then download them anywhere.

This idea somewhat matches some of the ideas that were discussed in the
Usenix '94 "unionfs" paper: a way to merge a readonly f/s with a writable
f/s, the latter including patches and updates to the readonly stuff (which
may come from a cdrom).

> David Bialac
> [EMAIL PROTECTED]

PS. I don't see a problem writing *loadable* kernel code.  It doesn't make
the core kernel bigger, only run-time kernel memory consumption increases.
Kernel modules aren't a solution for every application.  If speed is not a
concern, user-level file servers are easier to write and debug.  Otherwise I
personally think that all file systems should be in the kernel (loadable or
statically compiled) for performance reasons.

Erez.



Re: Oops with ext3 journaling

1999-12-08 Thread Erez Zadok

In message <[EMAIL PROTECTED]>, Pavel Machek writes:
> Hi!
> 
> > No, and I'm pretty much convinced now that I'll move to having a
> > private, hidden inode for the journal in the future.
> 
> Please don't do that.  The current way of switching between ext2 and ext3 is
> very nice.  If someone wants to shoot themselves in the foot...
>   Pavel

IMHO, as a long-term solution, ext3 should have as few ways to shoot oneself
in the foot as possible.  Hackers usually won't do "stupid" things (at least
not unintentionally :-), but hordes of Joe-users will.

> I'm really [EMAIL PROTECTED] Look at http://195.113.31.123/~pavel.  Pavel
> Hi! I'm a .signature virus! Copy me into your ~/.signature, please!

Erez.



Re: Can't hardlink in different dirs. (BUG#826)

1999-12-08 Thread Jan Kara

> Hi!
> 
> > But the fact that we don't have a working revoke() is the more important
> > problem. Forget local attacks. What about telnet to port 80, type GET
> > /~user/bigassgif.gif, and hit ^]^Z so the transfer will never finish? rm
> > needs some teeth for such situations.
> 
> You don't need revoke() to do this. Just > ~user/bigassgif.gif; rm
> ~user/bigassgif.gif will work in all cases.  Is anybody adding an option to
> rm to do exactly this?
  By this you will get rid of the problem with data blocks, but not with inodes.
I agree that data blocks are more important, but the problem will still be
there...
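
(A quick user-space sketch - not from the mail - of that distinction: after
the "> file; rm file" trick the data blocks are gone, yet the inode itself
survives for as long as somebody keeps the file open.  The file name is made
up.)

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
        struct stat st;
        int fd = open("bigassgif.gif", O_RDONLY);

        if (fd < 0)
                return 1;
        truncate("bigassgif.gif", 0);   /* the "> file" step: frees data blocks */
        unlink("bigassgif.gif");        /* the "rm" step: drops the last name */
        fstat(fd, &st);
        /* st_nlink is now 0, but the inode is not released until close() */
        printf("links=%lu size=%ld blocks=%ld (inode still allocated)\n",
               (unsigned long) st.st_nlink, (long) st.st_size,
               (long) st.st_blocks);
        close(fd);
        return 0;
}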

Honza.



Re: Web FS Q

1999-12-08 Thread Alexander Viro



On Tue, 7 Dec 1999, David Bialac wrote:

> For fun (and because I think it might be a useful feature), I'm working 
> on a filesystem that allows a website to be mounted as a local 
> filesystem.  I'm starting to dive in, and successfully have the kernel 
> recognizing that webfs exists, so it's now time to write some socket 
> code.  Amongst the things I want to put into this system is caching of 
> server data locally, specifically on the local filesystem.  The 
> question I have is, can one filesystem ask to write to another?  I 
> don't see anything in there that seems to attempt to do this, so I 
> need to be sure it is possible.

It's called CODA and it already exists. No need to reinvent the wheel for
the 1001st time.

> Why this is not as stupid as it sounds:  Imagine the internet-enabled 
> appliance scenario: today, if say a DVD manufacturer has a glitch in 
> their DVD player, the only fix is to take it in for repair.  If the 
> device was internet-enabled, and further read its software off the web, 
> it could conceivably update software on the fly without the inconvenience 
> of the user going without his player.  Another scenario: you could save 

 You've seen how many Linux-based DVD players? And if you've
seen such a beast, what changes by putting the thing into the kernel
instead of using wget(1)?  IOW, _that_ rationale is pure
hogwash.

> your files to a website run anywhere, then download them anywhere.

It's spelled 'FTP'. And it doesn't need to be in the kernel either.
It has been used for that purpose since the '70s.

-- 
What next, JabbaScript in the kernel? I don't think so.