Re: Security implications of btrfs receive?

2016-09-13 Thread Austin S. Hemmelgarn

On 2016-09-12 16:25, Chris Murphy wrote:

On Mon, Sep 12, 2016 at 5:24 AM, Austin S. Hemmelgarn
 wrote:


After device discovery, specify UUID= instead of a device node.


Oh yeah good point, -U --uuid is also doable. I'm not sure what the
benefit is of using sysfs to delete a device's node after umounting.
If the umount isn't successful or the accounting for all mount points
isn't right, there is no warning for device delete, it's gone and it's
like physically yanking the device out of the port. Seems dangerous
for udisksd to use that?
Unbinding the device node from the driver forces the driver to flush all 
commands and then stop talking to the device.  In essence, this is 
insurance that everything that got written to the device is actually on 
the device.  They're also trying to emulate the Windows behavior of 
making the disk disappear when you tell the system to prep it for 
physical disconnection.
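For context, the sysfs knob being discussed looks like this (device name is illustrative; this needs root, and as noted it is as abrupt as pulling the cable if anything is still mounted):

```shell
# sdX is a placeholder for the real device name
umount /mnt/usb                          # detach the filesystem first
sync                                     # force any remaining dirty pages out
echo 1 > /sys/block/sdX/device/delete    # ask the SCSI layer to flush and drop the device
```

After the delete, the device node disappears until the hardware is re-plugged or the bus is rescanned.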





I personally feel it's more important that we fix the whole UUID issue
first.  If we fix that, it is likely to at least make things better in this
particular case as well.


Fair enough.


The problem with trying to get this fixed upstream
in userspace is that udisks is essentially deprecated because of the work on
storaged (which will almost certainly depend on systemd), so you'll almost
certainly get nothing fixed until someone decides to fork it and maintain it
themselves like happened for ConsoleKit.

As far as what udisks is doing wrong, based on what you've said and minimal
experimentation on my systems, the issue is that it's not identifying the
array as one filesystem.  They mount by device node to try and avoid the
UUID issues (because they affect every filesystem to some degree), but
because of that they see a bunch of filesystems.


By trying to avoid the UUID issues, they end up with another problem.
In this case though, it's really only a problem for BTRFS, and it 
technically is a result of our design, not theirs, as what we've done 
with integrating the volume management into the filesystem isn't 
something that's ever really been used on removable media before.

I guess right now the only sure fire options they have:

a. Don't automount any Btrfs. (This is what I've recommended.)
b. pass --uuid as well as device= for each device in the array as
discovered by BTRFS_IOC_FS_INFO and BTRFS_IOC_DEV_INFO: while also not
mounting by device node alone for Btrfs.
It's important to differentiate between auto-mounting (udisks default), 
on-demand mounting (what autofs and BSD's amd actually do), and manual 
mounting as a regular user (the other operational mode for udisks).  In 
this case, both operational modes for udisks are broken with BTRFS, but 
the autofs/amd method works as long as you configure them right.  On my 
system for example, I have an autofs directory set up to do on-demand 
mounting by device node, and it works perfectly fine with multi-disk 
BTRFS filesystems, because the kernel handles the bind mounting of the 
already mounted FS correctly.
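A minimal sketch of that on-demand setup (paths and the map key are made up; the autofs map syntax is standard):

```
# /etc/auto.master (illustrative mount point and map file)
/media/auto  /etc/auto.usb  --timeout=60

# /etc/auto.usb -- one entry per filesystem; repeated lookups against an
# already-mounted multi-device btrfs are handled by the kernel as bind mounts
backup  -fstype=btrfs  :/dev/disk/by-label/backup
```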


--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Security implications of btrfs receive?

2016-09-12 Thread Chris Murphy
On Mon, Sep 12, 2016 at 5:24 AM, Austin S. Hemmelgarn
 wrote:
>
> After device discovery, specify UUID= instead of a device node.

Oh yeah good point, -U --uuid is also doable. I'm not sure what the
benefit is of using sysfs to delete a device's node after umounting.
If the umount isn't successful or the accounting for all mount points
isn't right, there is no warning for device delete, it's gone and it's
like physically yanking the device out of the port. Seems dangerous
for udisksd to use that?



> I personally feel it's more important that we fix the whole UUID issue
> first.  If we fix that, it is likely to at least make things better in this
> particular case as well.

Fair enough.

> The problem with trying to get this fixed upstream
> in userspace is that udisks is essentially deprecated because of the work on
> storaged (which will almost certainly depend on systemd), so you'll almost
> certainly get nothing fixed until someone decides to fork it and maintain it
> themselves like happened for ConsoleKit.
>
> As far as what udisks is doing wrong, based on what you've said and minimal
> experimentation on my systems, the issue is that it's not identifying the
> array as one filesystem.  They mount by device node to try and avoid the
> UUID issues (because they affect every filesystem to some degree), but
> because of that they see a bunch of filesystems.

By trying to avoid the UUID issues, they end up with another problem.
I guess right now the only sure fire options they have:

a. Don't automount any Btrfs. (This is what I've recommended.)
b. pass --uuid as well as device= for each device in the array as
discovered by BTRFS_IOC_FS_INFO and BTRFS_IOC_DEV_INFO: while also not
mounting by device node alone for Btrfs.
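Option (b) could look roughly like this from the command line, where `btrfs filesystem show` wraps the same FS_INFO/DEV_INFO ioctls; the device nodes and UUID below are made up:

```shell
# Enumerate the members of the array first
btrfs filesystem show /dev/sdb1
# Then mount by filesystem UUID, hinting every member explicitly
mount -t btrfs \
  -o device=/dev/sdb1,device=/dev/sdc1 \
  -U 8a2d5c9e-1f3b-4c6d-9e0a-7b5f2d4c8a10 /mnt/array
```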



-- 
Chris Murphy


Re: Security implications of btrfs receive?

2016-09-12 Thread Austin S. Hemmelgarn

On 2016-09-09 14:58, Chris Murphy wrote:

On Thu, Sep 8, 2016 at 5:48 AM, Austin S. Hemmelgarn
 wrote:

On 2016-09-07 15:34, Chris Murphy wrote:



I like the idea of matching WWN as part of the check, with a couple of
caveats:
1. We need to keep in mind that in some environments, this can be spoofed
(Virtualization for example, although doing so would require source level
modifications to most hypervisors).
2. There needs to be a way to forcibly mount in the case of a mismatch, as
well as a way to update the filesystem to match the current WWN's of all of
its disks.  I also specifically think that these should be separate
options, the first is useful for debugging a filesystem using image files,
while the second is useful for external clones of disks.
3. single device filesystems should store the WWN, and ideally keep it
up-to-date, but not check it.  They have no need to check it, and single
device is the primary use case for a traditional user, so it should be as
simple as possible.
4. We should be matching on more than just fsuuid, devuuid, and WWN, because
just matching those would allow a second partition on the same device to
cause issues.


Probably a different abstraction is necessary: WWN is appropriate
where member devices are drives; but maybe it's an LVM UUID in other
cases, e.g. where there's LVM snapshots. I'm not sure how DRBD devices
are uniquely identified, but that'd also be in the "one of these"
list.
We'd need an abstracted serial identifier of some sort.  On a flat or 
partitioned device, it should include the WWN.  The same should probably 
happen for any 1:1 mapped devices (dm-crypt for example).  The 
potentially problematic part is that for this to be secure, we need to 
include info about the entire storage stack, which in turn means that if 
the storage stack changes, we need to update things.






It is also kinda important to see things like udisks and storaged as
user agents, ensuring they have a way to communicate with the helper
so things are mounted and umounted correctly as most DE's now expect
to just automount everything. I still get weird behaviors on GNOME
with udisks2 and multiple device Btrfs volumes with current upstream
GNOME stuff.


DE's expect the ability to automount things as a regular user, not
necessarily that it has to happen.  I'm not all that worried personally
about automounting of multi-device filesystems, largely because the type of
person that desktop automounting primarily caters to is not likely to
have a multi-device filesystem to begin with.


It should work better than it does because it works well for LVM and
mdadm arrays.

I think what's going on is the DE's mounter (udisksd) tries to mount
each Btrfs device node, even though those nodes make up a single fs
volume. It issues multiple mount and umount commands for that one
array. This doesn't happen with LVM and mdadm because an array has one
node. That's not true with Btrfs, it has one or many, depending on
your point of view. There's no way to mount just an fs volume UUID as
far as I know.
After device discovery, specify UUID= instead of a device 
node.  This is actually how a large number of Linux distros mount all of
their statically configured filesystems, including the root filesystem.
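What that mechanism looks like in practice (the UUID below is a made-up example):

```
# /etc/fstab -- mounting by filesystem UUID rather than a device node
UUID=3f5f8f52-7a14-4b7c-9a2e-0c2f6c1d9b01  /  btrfs  defaults  0 0
```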




For that matter, the primary
(only realistic?) use for multi-device filesystems on removable media is
backups, and the few people who are going to set things up to automatically
run backups when the disks get plugged in will be smart enough to get things
working correctly themselves, while anyone else is going to be running the
backup manually and can mount the FS by hand if they aren't using something
like autofs.


Yeah, I am that person but it's the DE that's getting confused, and
then confusing me with its confusion, so it's bad UX. GNOME automounts
a Btrfs raid1 by showing two disk icons with the exact same name, and
gets confused upon ejecting either with the GUI eject button or via
the CLI. So we can say udisks is doing something wrong, but what, and
is there anything we can do to make it easier for it to do the right
thing seeing as Btrfs is so different?
I personally feel it's more important that we fix the whole UUID issue 
first.  If we fix that, it is likely to at least make things better in 
this particular case as well.  The problem with trying to get this fixed 
upstream in userspace is that udisks is essentially deprecated because 
of the work on storaged (which will almost certainly depend on systemd), 
so you'll almost certainly get nothing fixed until someone decides to 
fork it and maintain it themselves like happened for ConsoleKit.


As far as what udisks is doing wrong, based on what you've said and 
minimal experimentation on my systems, the issue is that it's not 
identifying the array as one filesystem.  They mount by device node to 
try and avoid the UUID issues (because they affect every filesystem to 
some degree), but because of that they see a bunch of filesystems.

Re: Security implications of btrfs receive?

2016-09-10 Thread Chris Murphy
On Fri, Sep 9, 2016 at 12:58 PM, Chris Murphy  wrote:

>
> It should work better than it does because it works well for LVM and
> mdadm arrays.
>
> I think what's going on is the DE's mounter (udisksd) tries to mount
> each Btrfs device node, even though those nodes make up a single fs
> volume.

I updated this bug but it looks like it's going to a maintainer who's
not reading these mails.
https://bugs.freedesktop.org/show_bug.cgi?id=87277#c3

Anyway the problem is pretty bad as I describe in that bug. It pretty
much will always cause some kind of Btrfs corruption, which does get
fixed if everything is raid1. The main flaw is that it uses sysfs to
delete the device node before umounting the file system, so it causes
any multiple device Btrfs array that's automounted to become degraded.
There's something of a silver lining so long as the metadata at least
is raid1, which it probably ought to be, rather than single, dup, or
raid0.

In any case, it's far worse and more dangerous than LVM or mdadm raid
which don't exhibit this behavior. So of the available short term
options I see, udisks2 (and now storaged) needs to black list Btrfs
from automounting. It's just not smart enough to make sure it's not
doing really wrong things.
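One way to implement that blacklist without patching udisks2 would be a local udev rule, sketched here on the assumption that the deployment honors the documented UDISKS_IGNORE device property:

```
# /etc/udev/rules.d/99-no-btrfs-automount.rules
# Hide btrfs members from udisks2 so nothing gets auto-mounted
SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="btrfs", ENV{UDISKS_IGNORE}="1"
```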



-- 
Chris Murphy


Re: Security implications of btrfs receive?

2016-09-09 Thread Chris Murphy
On Thu, Sep 8, 2016 at 5:48 AM, Austin S. Hemmelgarn
 wrote:
> On 2016-09-07 15:34, Chris Murphy wrote:

> I like the idea of matching WWN as part of the check, with a couple of
> caveats:
> 1. We need to keep in mind that in some environments, this can be spoofed
> (Virtualization for example, although doing so would require source level
> modifications to most hypervisors).
> 2. There needs to be a way to forcibly mount in the case of a mismatch, as
> well as a way to update the filesystem to match the current WWN's of all of
> its disks.  I also specifically think that these should be separate
> options, the first is useful for debugging a filesystem using image files,
> while the second is useful for external clones of disks.
> 3. single device filesystems should store the WWN, and ideally keep it
> up-to-date, but not check it.  They have no need to check it, and single
> device is the primary use case for a traditional user, so it should be as
> simple as possible.
> 4. We should be matching on more than just fsuuid, devuuid, and WWN, because
> just matching those would allow a second partition on the same device to
> cause issues.

Probably a different abstraction is necessary: WWN is appropriate
where member devices are drives; but maybe it's an LVM UUID in other
cases, e.g. where there's LVM snapshots. I'm not sure how DRBD devices
are uniquely identified, but that'd also be in the "one of these"
list.




>> It is also kinda important to see things like udisks and storaged as
>> user agents, ensuring they have a way to communicate with the helper
>> so things are mounted and umounted correctly as most DE's now expect
>> to just automount everything. I still get weird behaviors on GNOME
>> with udisks2 and multiple device Btrfs volumes with current upstream
>> GNOME stuff.
>
> DE's expect the ability to automount things as a regular user, not
> necessarily that it has to happen.  I'm not all that worried personally
> about automounting of multi-device filesystems, largely because the type of
> person that desktop automounting primarily caters to is not likely to
> have a multi-device filesystem to begin with.

It should work better than it does because it works well for LVM and
mdadm arrays.

I think what's going on is the DE's mounter (udisksd) tries to mount
each Btrfs device node, even though those nodes make up a single fs
volume. It issues multiple mount and umount commands for that one
array. This doesn't happen with LVM and mdadm because an array has one
node. That's not true with Btrfs, it has one or many, depending on
your point of view. There's no way to mount just an fs volume UUID as
far as I know.


>For that matter, the primary
> (only realistic?) use for multi-device filesystems on removable media is
> backups, and the few people who are going to set things up to automatically
> run backups when the disks get plugged in will be smart enough to get things
> working correctly themselves, while anyone else is going to be running the
> backup manually and can mount the FS by hand if they aren't using something
> like autofs.

Yeah, I am that person but it's the DE that's getting confused, and
then confusing me with its confusion, so it's bad UX. GNOME automounts
a Btrfs raid1 by showing two disk icons with the exact same name, and
gets confused upon ejecting either with the GUI eject button or via
the CLI. So we can say udisks is doing something wrong, but what, and
is there anything we can do to make it easier for it to do the right
thing seeing as Btrfs is so different?


Here's some 2 to 6 year old bugs related to this:
https://bugs.freedesktop.org/show_bug.cgi?id=87277
https://bugzilla.gnome.org/show_bug.cgi?id=746769
https://bugzilla.gnome.org/show_bug.cgi?id=608204


-- 
Chris Murphy


Re: Security implications of btrfs receive?

2016-09-09 Thread Austin S. Hemmelgarn

On 2016-09-09 12:33, David Sterba wrote:

On Wed, Sep 07, 2016 at 03:08:18PM -0400, Austin S. Hemmelgarn wrote:

On 2016-09-07 14:07, Christoph Anton Mitterer wrote:

On Wed, 2016-09-07 at 11:06 -0400, Austin S. Hemmelgarn wrote:

This is an issue with any filesystem,

Not really... any other filesystem I'd know (not sure about ZFS) keeps
working when there are UUID collisions... or at least it won't cause
arbitrary corruptions, which then in the end may even be used for such
attacks as described in that thread.

Even other multi-device containers (LVM, MD) don't at least corrupt
your data like it allegedly can happen with btrfs.




 it is just a bigger issue with
BTRFS.

No corruption vs. possible arbitrary data corruption and leakage seems
to be more than "just bigger".
I'd call it unacceptable for a production system.

So is refusing to boot.  In most cases, downtime is just as bad as data
corruption.




  Take a system using ext4, or XFS, or almost any other Linux
filesystem, running almost any major distro, create a minimum sized
partition on the disk for that filesystem type, and create a
filesystem
there with the same UUID as the root filesystem.  Next time that
system
reboots, things will usually blow up (XFS will refuse to mount, ext4
and
most other filesystems will work sometimes and not others).

Well but that's something completely different.
It would be perfectly fine if, in case of a UUID collision, the system
simply denies mounting/assembly (actually that's one of the solutions
others and I've proposed in the aforementioned thread).

But it's not acceptable if the system does *something* in such
situation,... or if such fs/container is already mounted/active and
another device with colliding UUID appears *then*, it's neither
acceptable that the already active fs/container wouldn't continue to
work properly.

And that seems to my experience just how e.g. LVM handles this.

"Not booting" is not really an issue in terms of data corruption.


At least I'm pretty sure to remember that one of the main developers
(was it Qu?) acknowledged these issues (both in terms of accidental
corruption and security wise) and that he was glad that these issues
were brought up and that they should be solved.



It hasn't, because there's not any way it can be completely
fixed.

Why not? As it was laid out by myself and others, the basic solution
would be:
- Refuse any mounting in case UUID collisions are detected.
- Generally don't do any auto-rebuilds or e.g. RAID assemblies unless
  specifically allowed/configured by the user (as this might also be
  used to extract data from a system).
- If there are any collisions (either by mounting or by processes like
  rebuilds/added devices) require the user to specify uniquely which
  device he actually wants (e.g. by path).
- And in case a filesystem is already mounted and UUID collisions
  happens then (e.g. a dd clone gets plugged in), continue to use the
  already active device... just as e.g. LVM does.
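The policy proposed above can be sketched in a few lines; this is an illustrative model only (all names and data structures are hypothetical, not btrfs code): refuse assembly on collisions unless the user names devices explicitly, and keep already-mounted devices pinned by path rather than by UUID.

```python
# Sketch of the proposed UUID-collision mount policy (hypothetical model).
from collections import Counter

def choose_devices(scanned, requested_paths=None, mounted=None):
    """scanned: list of (device_path, fsuuid) pairs from device discovery.
    requested_paths: devices the user named explicitly (disambiguation).
    mounted: fsuuid -> device path already in active use.
    Returns the device paths to use, or raises on an unresolved collision."""
    mounted = mounted or {}
    by_uuid = Counter(u for _, u in scanned)
    result = []
    for path, uuid in scanned:
        if uuid in mounted:
            # A filesystem with this UUID is already active: keep using
            # the device it was mounted from, ignore the newcomer.
            result.append(mounted[uuid])
        elif by_uuid[uuid] > 1:
            if requested_paths and path in requested_paths:
                result.append(path)  # user disambiguated explicitly
            else:
                raise RuntimeError(
                    f"UUID collision for {uuid}; refusing to assemble")
        else:
            result.append(path)
    return sorted(set(result))
```

The key property is the `mounted` branch: a colliding device appearing after mount can never displace the active one, which is the LVM-like behavior being asked for.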


  This
particular case is an excellent example of why it's so hard to
fix.  To
close this particular hole, BTRFS itself would have to become aware
of
whether whoever is running an ioctl is running in a chroot or not,
which
is non-trivial to determine to begin with, and even harder when you
factor in the fact that chroot() is a VFS level thing, not a
underlying
filesystem thing, while ioctls are much lower level.

Isn't it simply enough to:
- know which blockdevices with a btrfs and with which UUIDs there are
- let userland tools deny any mount/assembly/etc. actions in case of
  collisions
- do the actual addressing of devices via the device path (so that
  proper devices will continue to be used if the fs was already mounted
  when a collision occurs)
?

That's not the issue being discussed in this case.  The ultimate issue
is of course the same (the flawed assumption that some arbitrary bytes
will be globally unique), but the particular resultant issues are
different.  The problem being discussed is that receive doesn't verify
that subvolume UUID's it has been told to clone from are within the area
it's been told to work in.  This can cause an information leak, but not
data corruption, and is actually an issue with the clone ioctl in
general.  Graham actually proposed a good solution to this particular
problem (require an open fd to a source file containing the blocks to be
passed into the ioctl in addition to everything else), but it's still
orthogonal to the symptoms you're talking about.


And further, as I've said, security wise auto-assembly of multi-device
seems always prone to attacks at least in certain use cases, so for the
security conscious people:
- Don't do auto-assembly/rebuild/etc. based on scanning for UUID
- Let the user choose to do this manually via specifying the devices
  (via e.g. path).
  So a user could say something like
  mount -t btrfs -o 
device=/dev/disk/by-path/pci-\:00\:1f.2-ata-1,device=/dev/disk/by-path/pci-\:00\:2f.2-ata-2
 

Re: Security implications of btrfs receive?

2016-09-09 Thread Austin S. Hemmelgarn

On 2016-09-09 12:18, David Sterba wrote:

On Wed, Sep 07, 2016 at 07:58:30AM -0400, Austin S. Hemmelgarn wrote:

On 2016-09-06 13:20, Graham Cobb wrote:

Thanks to Austin and Duncan for their replies.

On 06/09/16 13:15, Austin S. Hemmelgarn wrote:

On 2016-09-05 05:59, Graham Cobb wrote:

Does the "path" argument of btrfs-receive mean that *all* operations are
confined to that path?  For example, if a UUID or transid is sent which
refers to an entity outside the path will that other entity be affected
or used?

As far as I know, no, it won't be affected.

Is it possible for a file to be created containing shared
extents from outside the path?

As far as I know, the only way for this to happen is if you're
referencing a parent subvolume for a relative send that is itself
sharing extents outside of the path.  From a practical perspective,
unless you're doing deduplication on the receiving end, this
shouldn't be possible.


Unfortunately that is not the case.  I decided to do some tests to see
what happens.  It is possible for a receive into one path to reference
and access a subvolume from a different path on the same btrfs disk.  I
have created a bash script to demonstrate this at:

https://gist.github.com/GrahamCobb/c7964138057e4e092a75319c9fb240a3

This does require the attacker to know the (source) subvolume UUID they
want to copy.  I am not sure how hard UUIDs are to guess.

Oh, I forgot about the fact that it checks the whole filesystem for
referenced source subvolumes.


What if the stream is verified first? Ie. look if there are the
operations using subvolumes not owned by the user.

I think that extending the ioctl to require proof of access to the 
source being cloned from would be a better approach to this, as this is 
an issue with the ioctl in general, it's just discussion of send/receive 
that brought this up.  I'm actually kind of surprised that this didn't 
get noticed before, seeing as it's a pretty significant and not all that 
difficult to use information leak.  Ideally, this needs to be decided 
before the VFS layer clone ioctl gets finalized.
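David's pre-verification idea could be sketched like this; the parsed-stream representation is entirely hypothetical (a real implementation would walk the binary send stream), but it shows the shape of the check: reject any clone op whose source subvolume is outside the set the receiving user may reference.

```python
# Sketch of verifying a send stream before applying it (hypothetical
# representation: each op is a dict such as {"type": "clone", "src_uuid": ...}).

def verify_stream(ops, allowed_subvols):
    """Return the list of offending clone-source UUIDs; empty means the
    stream only references subvolumes the user is allowed to read."""
    bad = []
    for op in ops:
        if op.get("type") == "clone" and op.get("src_uuid") not in allowed_subvols:
            bad.append(op["src_uuid"])
    return bad
```

A receiver using this would refuse the whole stream (or fall back to full copies) whenever `verify_stream` returns a non-empty list.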



Re: Security implications of btrfs receive?

2016-09-09 Thread David Sterba
On Wed, Sep 07, 2016 at 03:08:18PM -0400, Austin S. Hemmelgarn wrote:
> On 2016-09-07 14:07, Christoph Anton Mitterer wrote:
> > On Wed, 2016-09-07 at 11:06 -0400, Austin S. Hemmelgarn wrote:
> >> This is an issue with any filesystem,
> > Not really... any other filesystem I'd know (not sure about ZFS) keeps
> > working when there are UUID collisions... or at least it won't cause
> > arbitrary corruptions, which then in the end may even be used for such
> > attacks as described in that thread.
> >
> > Even other multi-device containers (LVM, MD) don't at least corrupt
> > your data like it allegedly can happen with btrfs.
> >
> >
> >
> >>  it is just a bigger issue with
> >> BTRFS.
> > No corruption vs. possible arbitrary data corruption and leakage seems
> > to be more than "just bigger".
> > I'd call it unacceptable for a production system.
> So is refusing to boot.  In most cases, downtime is just as bad as data 
> corruption.
> >
> >
> >>   Take a system using ext4, or XFS, or almost any other Linux
> >> filesystem, running almost any major distro, create a minimum sized
> >> partition on the disk for that filesystem type, and create a
> >> filesystem
> >> there with the same UUID as the root filesystem.  Next time that
> >> system
> >> reboots, things will usually blow up (XFS will refuse to mount, ext4
> >> and
> >> most other filesystems will work sometimes and not others).
> > Well but that's something completely different.
> > It would be perfectly fine if, in case of a UUID collision, the system
> > simply denies mounting/assembly (actually that's one of the solutions
> > others and I've proposed in the aforementioned thread).
> >
> > But it's not acceptable if the system does *something* in such
> > situation,... or if such fs/container is already mounted/active and
> > another device with colliding UUID appears *then*, it's neither
> > acceptable that the already active fs/container wouldn't continue to
> > work properly.
> >
> > And that seems to my experience just how e.g. LVM handles this.
> >
> > "Not booting" is not really an issue in terms of data corruption.
> >
> >
> > At least I'm pretty sure to remember that one of the main developers
> > (was it Qu?) acknowledged these issues (both in terms of accidental
> > corruption and security wise) and that he was glad that these issues
> > were brought up and that they should be solved.
> >
> >
> >> It hasn't, because there's not any way it can be completely
> >> fixed.
> > Why not? As it was laid out by myself and others, the basic solution
> > would be:
> > - Refuse any mounting in case UUID collisions are detected.
> > - Generally don't do any auto-rebuilds or e.g. RAID assemblies unless
> >   specifically allowed/configured by the user (as this might also be
> >   used to extract data from a system).
> > - If there are any collisions (either by mounting or by processes like
> >   rebuilds/added devices) require the user to specify uniquely which
> >   device he actually wants (e.g. by path).
> > - And in case a filesystem is already mounted and UUID collisions
> >   happens then (e.g. a dd clone gets plugged in), continue to use the
> >   already active device... just as e.g. LVM does.
> >
> >>   This
> >> particular case is an excellent example of why it's so hard to
> >> fix.  To
> >> close this particular hole, BTRFS itself would have to become aware
> >> of
> >> whether whoever is running an ioctl is running in a chroot or not,
> >> which
> >> is non-trivial to determine to begin with, and even harder when you
> >> factor in the fact that chroot() is a VFS level thing, not a
> >> underlying
> >> filesystem thing, while ioctls are much lower level.
> > Isn't it simply enough to:
> > - know which blockdevices with a btrfs and with which UUIDs there are
> > - let userland tools deny any mount/assembly/etc. actions in case of
> >   collisions
> > - do the actual addressing of devices via the device path (so that
> >   proper devices will continue to be used if the fs was already mounted
> >   when a collision occurs)
> > ?
> That's not the issue being discussed in this case.  The ultimate issue 
> is of course the same (the flawed assumption that some arbitrary bytes 
> will be globally unique), but the particular resultant issues are 
> different.  The problem being discussed is that receive doesn't verify 
> that subvolume UUID's it has been told to clone from are within the area 
> it's been told to work in.  This can cause an information leak, but not 
> data corruption, and is actually an issue with the clone ioctl in 
> general.  Graham actually proposed a good solution to this particular 
> problem (require an open fd to a source file containing the blocks to be 
> passed into the ioctl in addition to everything else), but it's still 
> orthogonal to the symptoms you're talking about.
> >
> > And further, as I've said, security wise auto-assembly of multi-device
> > seems always prone to attacks at least in certain use cases, so for the
>

Re: Security implications of btrfs receive?

2016-09-09 Thread David Sterba
On Wed, Sep 07, 2016 at 07:58:30AM -0400, Austin S. Hemmelgarn wrote:
> On 2016-09-06 13:20, Graham Cobb wrote:
> > Thanks to Austin and Duncan for their replies.
> >
> > On 06/09/16 13:15, Austin S. Hemmelgarn wrote:
> >> On 2016-09-05 05:59, Graham Cobb wrote:
> >>> Does the "path" argument of btrfs-receive mean that *all* operations are
> >>> confined to that path?  For example, if a UUID or transid is sent which
> >>> refers to an entity outside the path will that other entity be affected
> >>> or used?
> >> As far as I know, no, it won't be affected.
> >>> Is it possible for a file to be created containing shared
> >>> extents from outside the path?
> >> As far as I know, the only way for this to happen is if you're
> >> referencing a parent subvolume for a relative send that is itself
> >> sharing extents outside of the path.  From a practical perspective,
> >> unless you're doing deduplication on the receiving end, this
> >> shouldn't be possible.
> >
> > Unfortunately that is not the case.  I decided to do some tests to see
> > what happens.  It is possible for a receive into one path to reference
> > and access a subvolume from a different path on the same btrfs disk.  I
> > have created a bash script to demonstrate this at:
> >
> > https://gist.github.com/GrahamCobb/c7964138057e4e092a75319c9fb240a3
> >
> > This does require the attacker to know the (source) subvolume UUID they
> > want to copy.  I am not sure how hard UUIDs are to guess.
> Oh, I forgot about the fact that it checks the whole filesystem for 
> referenced source subvolumes.

What if the stream is verified first? Ie. look if there are the
operations using subvolumes not owned by the user.


Re: Security implications of btrfs receive?

2016-09-08 Thread Austin S. Hemmelgarn

On 2016-09-07 15:34, Chris Murphy wrote:

On Wed, Sep 7, 2016 at 1:08 PM, Austin S. Hemmelgarn
 wrote:


I think I covered it already in the last thread on this, but the best way I
see to fix the whole auto-assembly issue is:
1. Stop the damn auto-scanning of new devices on hot-plug.  The scanning
should be done on mount or invoking something like btrfs dev scan, not on
hot-plug.  This is the biggest current issue, and is in theory the easiest
thing to fix.  The problem here is that it's udev sources we need to change,
not our own.
2. Get rid of the tracking in the kernel.  If a filesystem isn't mounted or
requested to be mounted, then the kernel has no business worrying about what
what devices it's on.  If the filesystem is mounted, then the only way to
associate new devices should be from userspace.
3. When mounting, the mount helper should be doing the checking to verify
that the UUID's and everything else are correct.  Ideally, the mount(2) call
should require a list of devices to use, and mount should be doing the
discovery.  This is at odds with how systemd handles BTRFS mounts, but
they're being stupid with that too (the only way to tell for certain if a FS
will mount is to try to mount it, if the mount(2) call succeeds, then the
filesystem was ready, regardless of whether or not userspace thinks the
device is).
4. The kernel should be doing a better job of validating filesystems. It
should be checking that all the devices agree on how many devices there
should be, as well as checking that they all have correct UUID's. This is
technically not necessary if item 3 is implemented, but is still good
practice from a hardening perspective.
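For reference on item 1, the hot-plug scanning comes from udev's bundled
btrfs rule, which runs the `btrfs ready` builtin (and thereby the kernel's
device-registration path) for every block device that probes as btrfs. The
rule looks roughly like this (from systemd's 64-btrfs.rules; exact contents
vary by version):

```
SUBSYSTEM!="block", GOTO="btrfs_end"
ACTION=="remove", GOTO="btrfs_end"
ENV{ID_FS_TYPE}!="btrfs", GOTO="btrfs_end"
# let the kernel know about this btrfs filesystem, and check if it is complete
IMPORT{builtin}="btrfs ready $devnode"
# mark the device as not ready to be used by the system
ENV{ID_BTRFS_READY}=="0", ENV{SYSTEMD_READY}="0"
LABEL="btrfs_end"
```

In practice, item 1 amounts to disabling or masking this rule, so devices
are only registered by an explicit `btrfs device scan` or at mount time.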


It'd be nice to leverage WWN when available, cross referencing all WWN
with volume UUID and device UUID. There are all sorts of policies that
can make use of this, not least of which is "fail on mount whenever
all three expected IDs are not present" but also this crazy hunt for
the actual physical device to replace when all we know from btrfs fi
show is that a device is missing, and inferring what devid it is from
the devids that are still listed. I'd like to see a WWN for what's
missing. devid is useless, devuuid is useless, a serial number is
better than nothing but WWN fits the use case by design.
I like the idea of matching WWN as part of the check, with a few 
caveats:
1. We need to keep in mind that in some environments, this can be 
spoofed (Virtualization for example, although doing so would require 
source level modifications to most hypervisors).
2. There needs to be a way to forcibly mount in the case of a mismatch, 
as well as a way to update the filesystem to match the current WWNs of 
all of its disks.  I also specifically think that these should be 
separate options: the first is useful for debugging a filesystem using 
image files, while the second is useful for external clones of disks.
3. Single-device filesystems should store the WWN, and ideally keep it 
up-to-date, but not check it.  They have no need to check it, and 
single-device is the primary use case for a traditional user, so it 
should be as simple as possible.
4. We should be matching on more than just fsuuid, devuuid, and WWN, 
because just matching those would allow a second partition on the same 
device to cause issues.
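A rough sketch of the policy in caveats 2 through 4 (every name here is
hypothetical; btrfs stores no such identity record today): compare a
recorded per-device identity map against what the hardware currently
reports, with explicit override and rewrite flags instead of silent
acceptance:

```python
# Hypothetical mount-time identity check; not real btrfs code.
# recorded/observed map devuuid -> (wwn, partuuid); the extra partuuid
# guards against a second partition on the same disk (caveat 4).

def identity_check(recorded, observed, single_device=False,
                   force=False, rewrite=False):
    """Return (mount_ok, record_to_store)."""
    if rewrite:
        # Caveat 2b: explicitly adopt the current hardware identities,
        # e.g. after cloning all disks onto replacement drives.
        return True, observed
    if single_device:
        # Caveat 3: store and refresh the identity, but never enforce it.
        return True, observed
    if recorded == observed:
        return True, recorded
    # Caveat 2a: a forced mount proceeds despite the mismatch but does
    # not overwrite the stored record (useful when debugging image files).
    return force, recorded
```

The two overrides are deliberately separate, matching the distinction
drawn above between debugging a filesystem image and adopting cloned disks.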


But yeah I think we kinda need some other ducks in a row, the
mechanisms of discovery, and the continuum of faultiness (occasionally
flaky, to flat out vanished).

It is also kinda important to see things like udisks and storaged as
user agents, ensuring they have a way to communicate with the helper
so things are mounted and umounted correctly as most DE's now expect
to just automount everything. I still get weird behaviors on GNOME
with udisks2 and multiple device Btrfs volumes with current upstream
GNOME stuff.
DE's expect the ability to automount things as a regular user, not 
necessarily that it has to happen.  I'm not all that worried personally 
about automounting of multi-device filesystems, largely because the type 
of person desktop automounting primarily caters to is not likely to have 
a multi-device filesystem to begin with.  For that 
matter, the primary (only realistic?) use for multi-device filesystems 
on removable media is backups, and the few people who are going to set 
things up to automatically run backups when the disks get plugged in 
will be smart enough to get things working correctly themselves, while 
anyone else is going to be running the backup manually and can mount the 
FS by hand if they aren't using something like autofs.



Re: Security implications of btrfs receive?

2016-09-07 Thread Zygo Blaxell
On Wed, Sep 07, 2016 at 08:07:59PM +0200, Christoph Anton Mitterer wrote:
> Even other multi-device containers (LVM, MD) at least don't corrupt
> your data like it allegedly can happen with btrfs.

LVM and MD also check sequence numbers and timestamps.  You can't just
guess the UUID, you need a UUID *and* some other values that change
every time an array is activated.

They don't change enough for security purposes--it's still possible to
intentionally spoof them--but they do prevent accidents like dd copies
of hard drives or LVM snapshots.  In this case, only one of the copies
will increment its sequence number, and after that the other copies will
not be permitted to join the array any more.

BTRFS could use transids for this.  It currently seems to accept the
last device to present the desired device UUID without checking to see
if the transid is consistent with the other devices, or if there are
other devices with the correct UUID and transid.  More can be done here.
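An MD-style version of that check could look like the following sketch
(field names are illustrative, not the real btrfs superblock layout):
before letting devices join, require a matching fsuuid, a consistent
newest transid, and no duplicate devuuid:

```python
# Illustrative assembly check: refuse devices whose transaction
# generation (transid) lags the rest, instead of accepting the last
# device that happens to present the right device UUID.

def check_assembly(devices):
    """devices: list of dicts with 'path', 'fsuuid', 'devuuid', 'transid'.
    Returns the set of device paths allowed to join, or raises."""
    if not devices:
        raise ValueError("no devices")
    if len({d["fsuuid"] for d in devices}) != 1:
        raise ValueError("devices belong to different filesystems")
    # A stale copy (e.g. a dd clone that missed later commits) shows an
    # older transid; like an MD event counter, it must not rejoin silently.
    newest = max(d["transid"] for d in devices)
    stale = [d["path"] for d in devices if d["transid"] != newest]
    if stale:
        raise ValueError("stale or cloned devices: %s" % ", ".join(stale))
    # Duplicate devuuids mean a clone is present even if transids match.
    devuuids = [d["devuuid"] for d in devices]
    if len(devuuids) != len(set(devuuids)):
        raise ValueError("duplicate device UUID: possible dd clone")
    return {d["path"] for d in devices}
```

As the message notes, this prevents accidents rather than deliberate
spoofing: an attacker who can forge the UUID can forge the transid too.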





Re: Security implications of btrfs receive?

2016-09-07 Thread Chris Murphy
On Wed, Sep 7, 2016 at 1:08 PM, Austin S. Hemmelgarn
 wrote:

> I think I covered it already in the last thread on this, but the best way I
> see to fix the whole auto-assembly issue is:
> 1. Stop the damn auto-scanning of new devices on hot-plug.  The scanning
> should be done on mount or invoking something like btrfs dev scan, not on
> hot-plug.  This is the biggest current issue, and is in theory the easiest
> thing to fix.  The problem here is that it's udev sources we need to change,
> not our own.
> 2. Get rid of the tracking in the kernel.  If a filesystem isn't mounted or
> requested to be mounted, then the kernel has no business worrying about what
> what devices it's on.  If the filesystem is mounted, then the only way to
> associate new devices should be from userspace.
> 3. When mounting, the mount helper should be doing the checking to verify
> that the UUID's and everything else are correct.  Ideally, the mount(2) call
> should require a list of devices to use, and mount should be doing the
> discovery.  This is at odds with how systemd handles BTRFS mounts, but
> they're being stupid with that too (the only way to tell for certain if a FS
> will mount is to try to mount it, if the mount(2) call succeeds, then the
> filesystem was ready, regardless of whether or not userspace thinks the
> device is).
> 4. The kernel should be doing a better job of validating filesystems. It
> should be checking that all the devices agree on how many devices there
> should be, as well as checking that they all have correct UUID's. This is
> technically not necessary if item 3 is implemented, but is still good
> practice from a hardening perspective.

It'd be nice to leverage WWN when available, cross referencing all WWN
with volume UUID and device UUID. There are all sorts of policies that
can make use of this, not least of which is "fail on mount whenever
all three expected IDs are not present" but also this crazy hunt for
the actual physical device to replace when all we know from btrfs fi
show is that a device is missing, and inferring what devid it is from
the devids that are still listed. I'd like to see a WWN for what's
missing. devid is useless, devuuid is useless, a serial number is
better than nothing but WWN fits the use case by design.

But yeah I think we kinda need some other ducks in a row, the
mechanisms of discovery, and the continuum of faultiness (occasionally
flaky, to flat out vanished).

It is also kinda important to see things like udisks and storaged as
user agents, ensuring they have a way to communicate with the helper
so things are mounted and umounted correctly as most DE's now expect
to just automount everything. I still get weird behaviors on GNOME
with udisks2 and multiple device Btrfs volumes with current upstream
GNOME stuff.


-- 
Chris Murphy


Re: Security implications of btrfs receive?

2016-09-07 Thread Austin S. Hemmelgarn

On 2016-09-07 14:07, Christoph Anton Mitterer wrote:

On Wed, 2016-09-07 at 11:06 -0400, Austin S. Hemmelgarn wrote:

This is an issue with any filesystem,

Not really... any other filesystem I'd know (not sure about ZFS) keeps
working when there are UUID collisions... or at least it won't cause
arbitrary corruptions, which then in the end may even be used for such
attacks as described in that thread.

Even other multi-device containers (LVM, MD) at least don't corrupt
your data like it allegedly can happen with btrfs.




 it is just a bigger issue with
BTRFS.

No corruption vs. possible arbitrary data corruption and leakage seems
to be more than "just bigger".
I'd call it unacceptable for a production system.
So is refusing to boot.  In most cases, downtime is just as bad as data 
corruption.




  Take a system using ext4, or XFS, or almost any other Linux
filesystem, running almost any major distro, create a minimum sized
partition on the disk for that filesystem type, and create a
filesystem
there with the same UUID as the root filesystem.  Next time that
system
reboots, things will usually blow up (XFS will refuse to mount, ext4
and
most other filesystems will work sometimes and not others).

Well but that's something completely different.
It would be perfectly fine if, in case of a UUID collision, the system
simply denies mounting/assembly (actually that's one of the solutions
others and I've proposed in the aforementioned thread).

But it's not acceptable if the system does *something* in such
situation,... or if such fs/container is already mounted/active and
another device with colliding UUID appears *then*, it's neither
acceptable that the already active fs/container wouldn't continue to
work properly.

And that seems to my experience just how e.g. LVM handles this.

"Not booting" is not really an issue in terms of data corruption.


At least I'm pretty sure to remember that one of the main developers
(was it Qu?) acknowledged these issues (both in terms of accidental
corruption and security wise) and that he was glad that these issues
were brought up and that they should be solved.



It hasn't, because there's not any way it can be completely
fixed.

Why not? As it was laid out by myself and others, the basic solution
would be:
- Refuse any mounting in case UUID collisions are detected.
- Generally don't do any auto-rebuilds or e.g. RAID assemblies unless
  specifically allowed/configured by the user (as this might also be
  used to extract data from a system).
- If there are any collisions (either by mounting or by processes like
  rebuilds/added devices) require the user to specify uniquely which
  device he actually wants (e.g. by path).
- And in case a filesystem is already mounted and UUID collisions
  happens then (e.g. a dd clone gets plugged in), continue to use the
  already active device... just as e.g. LVM does.


  This
particular case is an excellent example of why it's so hard to
fix.  To
close this particular hole, BTRFS itself would have to become aware
of
whether whoever is running an ioctl is running in a chroot or not,
which
is non-trivial to determine to begin with, and even harder when you
factor in the fact that chroot() is a VFS level thing, not an
underlying
filesystem thing, while ioctls are much lower level.

Isn't it simply enough to:
- know which blockdevices with a btrfs and with which UUIDs there are
- let userland tools deny any mount/assembly/etc. actions in case of
  collisions
- do the actual addressing of devices via the device path (so that
  proper devices will continue to be used if the fs was already mounted
  when a collision occurs)
?
That's not the issue being discussed in this case.  The ultimate issue 
is of course the same (the flawed assumption that some arbitrary bytes 
will be globally unique), but the particular resultant issues are 
different.  The problem being discussed is that receive doesn't verify 
that subvolume UUIDs it has been told to clone from are within the area 
it has been told to work in.  This can cause an information leak, but not 
data corruption, and is actually an issue with the clone ioctl in 
general.  Graham actually proposed a good solution to this particular 
problem (require an open fd to a source file containing the blocks to be 
passed into the ioctl in addition to everything else), but it's still 
orthogonal to the symptoms you're talking about.


And further, as I've said, security wise auto-assembly of multi-device
seems always prone to attacks at least in certain use cases, so for the
security conscious people:
- Don't do auto-assembly/rebuild/etc. based on scanning for UUID
- Let the user choose to do this manually via specifying the devices
  (via e.g. path).
  So a user could say something like
  mount -t btrfs -o 
device=/dev/disk/by-path/pci-\:00\:1f.2-ata-1,device=/dev/disk/by-path/pci-\:00\:2f.2-ata-2
 /foo
  in order to be sure that just these devices would be *tried* to be
  used for the RAID1 btrfs, and not the one an attacker might have
  plugged into the USB.

Re: Security implications of btrfs receive?

2016-09-07 Thread Christoph Anton Mitterer
On Wed, 2016-09-07 at 11:06 -0400, Austin S. Hemmelgarn wrote:
> This is an issue with any filesystem,
Not really... any other filesystem I'd know (not sure about ZFS) keeps
working when there are UUID collisions... or at least it won't cause
arbitrary corruptions, which then in the end may even be used for such
attacks as described in that thread.

Even other multi-device containers (LVM, MD) at least don't corrupt
your data like it allegedly can happen with btrfs.



>  it is just a bigger issue with 
> BTRFS.
No corruption vs. possible arbitrary data corruption and leakage seems
to be more than "just bigger".
I'd call it unacceptable for a production system.


>   Take a system using ext4, or XFS, or almost any other Linux 
> filesystem, running almost any major distro, create a minimum sized 
> partition on the disk for that filesystem type, and create a
> filesystem 
> there with the same UUID as the root filesystem.  Next time that
> system 
> reboots, things will usually blow up (XFS will refuse to mount, ext4
> and 
> most other filesystems will work sometimes and not others).
Well but that's something completely different.
It would be perfectly fine if, in case of a UUID collision, the system
simply denies mounting/assembly (actually that's one of the solutions
others and I've proposed in the aforementioned thread).

But it's not acceptable if the system does *something* in such
situation,... or if such fs/container is already mounted/active and
another device with colliding UUID appears *then*, it's neither
acceptable that the already active fs/container wouldn't continue to
work properly.

And that seems to my experience just how e.g. LVM handles this.

"Not booting" is not really an issue in terms of data corruption.


At least I'm pretty sure to remember that one of the main developers
(was it Qu?) acknowledged these issues (both in terms of accidental
corruption and security wise) and that he was glad that these issues
were brought up and that they should be solved.


> It hasn't, because there's not any way it can be completely
> fixed.
Why not? As it was laid out by myself and others, the basic solution
would be:
- Refuse any mounting in case UUID collisions are detected.
- Generally don't do any auto-rebuilds or e.g. RAID assemblies unless
  specifically allowed/configured by the user (as this might also be
  used to extract data from a system).
- If there are any collisions (either by mounting or by processes like
  rebuilds/added devices) require the user to specify uniquely which
  device he actually wants (e.g. by path).
- And in case a filesystem is already mounted and UUID collisions
  happens then (e.g. a dd clone gets plugged in), continue to use the
  already active device... just as e.g. LVM does.
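The four rules above can be condensed into a small resolver sketch
(illustrative only; this is not how btrfs or LVM actually implement it):

```python
# Hypothetical device-resolution policy for one filesystem UUID.

def resolve_device(scanned, fsuuid, active=None, explicit_path=None):
    """scanned: dict of devpath -> fsuuid from a device scan.
    active: devpath already in use for this fsuuid, if mounted.
    explicit_path: device the user named explicitly, if any."""
    candidates = sorted(p for p, u in scanned.items() if u == fsuuid)
    if active is not None:
        # Rule 4: a colliding device appearing later never displaces
        # the device already belonging to the mounted filesystem.
        return active
    if explicit_path is not None:
        # Rule 3: an explicit path wins, but must actually match.
        if explicit_path not in candidates:
            raise LookupError("explicit device does not carry this fsuuid")
        return explicit_path
    if len(candidates) != 1:
        # Rules 1-2: refuse ambiguous or absent matches outright;
        # never auto-pick among colliding devices.
        raise LookupError("0 or >1 devices carry fsuuid %s; "
                          "specify a device path" % fsuuid)
    return candidates[0]
```

The key property is that a collision can only ever produce a refusal or
an explicit user choice, never a silent substitution.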

>   This 
> particular case is an excellent example of why it's so hard to
> fix.  To 
> close this particular hole, BTRFS itself would have to become aware
> of 
> whether whoever is running an ioctl is running in a chroot or not,
> which 
> is non-trivial to determine to begin with, and even harder when you 
> factor in the fact that chroot() is a VFS level thing, not an
> underlying 
> filesystem thing, while ioctls are much lower level.
Isn't it simply enough to:
- know which blockdevices with a btrfs and with which UUIDs there are
- let userland tools deny any mount/assembly/etc. actions in case of
  collisions
- do the actual addressing of devices via the device path (so that
  proper devices will continue to be used if the fs was already mounted
  when a collision occurs)
?

And further, as I've said, security wise auto-assembly of multi-device
seems always prone to attacks at least in certain use cases, so for the
security conscious people:
- Don't do auto-assembly/rebuild/etc. based on scanning for UUID
- Let the user choose to do this manually via specifying the devices
  (via e.g. path).
  So a user could say something like
  mount -t btrfs -o 
device=/dev/disk/by-path/pci-\:00\:1f.2-ata-1,device=/dev/disk/by-path/pci-\:00\:2f.2-ata-2
 /foo
  in order to be sure that just these devices would be *tried* to be
  used for the RAID1 btrfs, and not the one an attacker might have
  plugged into the USB.


> That said, nobody's really done any work on mitigating the issues 
> either, although David Sterba has commented about having interest in 
> fixing issues caused by crafted filesystem images, so hopefully
> things will start moving more in the direction of proper security.
Well that's good to hear... it's IMO one of the bigger potential issues
in btrfs, next to the ongoing stability problems[0] and still not
really working RAID.

Anyone working on this should probably have a look at the thread I've
mentioned, cause there are really several tricky ways one could exploit
this... to me especially any auto-(i.e. based on scanning for UUIDs)-
assembly/rebuilding and that like seemed to pose quite a big surface.


Cheers,
Chris.


[0] Though I must say that I personally have nev

Re: Security implications of btrfs receive?

2016-09-07 Thread Austin S. Hemmelgarn

On 2016-09-07 12:10, Graham Cobb wrote:

On 07/09/16 16:20, Austin S. Hemmelgarn wrote:

I should probably add to this that you shouldn't be accepting
send/receive data streams from untrusted sources anyway.  While it
probably won't crash your system, it's not intended for use as something
like a network service.  If you're sending a subvolume over an untrusted
network, you should be tunneling it through SSH or something similar,
and then using that to provide source verification and data integrity
guarantees, and if you can't trust the systems you're running backups 
for, then you have bigger issues to deal with.


In my personal case I'm not talking about accepting streams from
untrusted sources (although that is also a perfectly reasonable question
to discuss).  My concern is if one of my (well managed and trusted but
never perfect) systems is hacked, can the intruder use that as an entry
to attack others of my systems?
Probably not directly.  They could get data from the other systems 
backups, but unless something's seriously broken, there should be no way 
for them to inject data into those backups.  IOW, it should be about the 
same level of security you would get from doing this with tar based 
backups with an individual user for each system and world readable files.


In particular, I never trust my systems which live on the internet with
automated access to my personal systems (without a human providing
additional passwords/keys) although I do allow some automated accesses
the other way around.  I am trying to determine if sharing
btrfs-send-based backups would open a vulnerability.
As far as I can tell, the only vulnerability would be through 
information leaks.  In essence, this should in theory only be usable as 
a side-channel to obtain information to then use for a real attack 
(assuming of course that you have proper at-rest encryption for any 
sensitive data on your system).


There are articles on the web suggesting that centralised
btrfs-send-based backups are a good idea (using ssh access with separate
keys for each system which automatically invoke btrfs-receive into a
system-specific path).  My tests so far suggest that this may not be as
secure as the articles imply.
While just doing that in general is not all that secure, if you make 
sure each system-specific path is on its own filesystem (this can be 
done with thin provisioning if you are short on space), then that should 
eliminate most of the security issues.  If you're feeling really 
adventurous and don't entirely trust the stability of the code involved, 
you could instead force it to run a script which invokes btrfs-receive 
in a per-instance dedicated sandbox constrained to that filesystem.
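A configuration sketch of that setup (the paths, key material, and wrapper
script are all hypothetical): pin each client's key to a forced command,
and have the wrapper refuse to run unless the per-host destination is its
own mounted filesystem:

```
# ~backup/.ssh/authorized_keys -- one pinned entry per client host:
command="/usr/local/bin/receive-backup hostA",restrict ssh-ed25519 AAAA... backup@hostA

# /usr/local/bin/receive-backup (hypothetical wrapper):
#!/bin/sh
set -eu
dest="/backups/$1"
# Separate filesystem per host: clone operations referencing foreign
# subvolume UUIDs cannot resolve, because the receiving filesystem
# simply does not contain those subvolumes.
mountpoint -q "$dest" || { echo "backup fs for $1 not mounted" >&2; exit 1; }
exec btrfs receive "$dest"
```

The forced command ignores whatever the client asked to run, so a
compromised client can only write into its own backup filesystem.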




Re: Security implications of btrfs receive?

2016-09-07 Thread Graham Cobb
On 07/09/16 16:06, Austin S. Hemmelgarn wrote:
> It hasn't, because there's not any way it can be completely fixed.  This
> particular case is an excellent example of why it's so hard to fix.  To
> close this particular hole, BTRFS itself would have to become aware of
> whether whoever is running an ioctl is running in a chroot or not, which
> is non-trivial to determine to begin with, and even harder when you
> factor in the fact that chroot() is a VFS level thing, not an underlying
> filesystem thing, while ioctls are much lower level.

Actually, I think the btrfs-receive case might be a little easier to fix
and, in my quick reading of the code before doing a test, I thought it
even did work this way...

I think the fix would be to require that the action of cloning disk
blocks required user-mode to provide an FD which has read access to the
source blocks.  There is little problem with allowing the ioctl to
identify the matching subvolume for a UUID but it should require user
mode to open the source file for read access using a path before
allowing any blocks to be cloned.  That way, any VFS-level checks would
be done.  And, what is more, btrfs-receive could do a file path check to
make sure the file being cloned from is within the path that was
provided on the command line.



Re: Security implications of btrfs receive?

2016-09-07 Thread Graham Cobb
On 07/09/16 16:20, Austin S. Hemmelgarn wrote:
> I should probably add to this that you shouldn't be accepting
> send/receive data streams from untrusted sources anyway.  While it
> probably won't crash your system, it's not intended for use as something
> like a network service.  If you're sending a subvolume over an untrusted
> network, you should be tunneling it through SSH or something similar,
> and then using that to provide source verification and data integrity
> guarantees, and if you can't trust the systems you're running backups
> for, then you have bigger issues to deal with.

In my personal case I'm not talking about accepting streams from
untrusted sources (although that is also a perfectly reasonable question
to discuss).  My concern is if one of my (well managed and trusted but
never perfect) systems is hacked, can the intruder use that as an entry
to attack others of my systems?

In particular, I never trust my systems which live on the internet with
automated access to my personal systems (without a human providing
additional passwords/keys) although I do allow some automated accesses
the other way around.  I am trying to determine if sharing
btrfs-send-based backups would open a vulnerability.

There are articles on the web suggesting that centralised
btrfs-send-based backups are a good idea (using ssh access with separate
keys for each system which automatically invoke btrfs-receive into a
system-specific path).  My tests so far suggest that this may not be as
secure as the articles imply.

In any case, I think this is a topic worth investigating further, if any
graduate student is looking for a PhD topic!



Re: Security implications of btrfs receive?

2016-09-07 Thread Austin S. Hemmelgarn

On 2016-09-07 07:58, Austin S. Hemmelgarn wrote:

On 2016-09-06 13:20, Graham Cobb wrote:

Thanks to Austin and Duncan for their replies.

On 06/09/16 13:15, Austin S. Hemmelgarn wrote:

On 2016-09-05 05:59, Graham Cobb wrote:

Does the "path" argument of btrfs-receive mean that *all* operations
are
confined to that path?  For example, if a UUID or transid is sent which
refers to an entity outside the path will that other entity be affected
or used?

As far as I know, no, it won't be affected.

Is it possible for a file to be created containing shared
extents from outside the path?

As far as I know, the only way for this to happen is if you're
referencing a parent subvolume for a relative send that is itself
sharing extents outside of the path.  From a practical perspective,
unless you're doing deduplication on the receiving end, this
shouldn't be possible.


Unfortunately that is not the case.  I decided to do some tests to see
what happens.  It is possible for a receive into one path to reference
and access a subvolume from a different path on the same btrfs disk.  I
have created a bash script to demonstrate this at:

https://gist.github.com/GrahamCobb/c7964138057e4e092a75319c9fb240a3

This does require the attacker to know the (source) subvolume UUID they
want to copy.  I am not sure how hard UUIDs are to guess.

Oh, I forgot about the fact that it checks the whole filesystem for
referenced source subvolumes.


By the way, this is exactly the same whether or not the --chroot option
is specified on the "btrfs receive" command.

Which makes sense, chroot only affects the VFS layer, not the underlying
FS.


The next question this raises for me is whether this means that
processes in a chroot or in a container (or in a mandatory access
controls environment) can access files outside the chroot/container if
they know the UUID of the subvolume?  After all, btrfs-receive uses
IOCTLs that any program running as root can use.

Yes, it does mean that, but if you want proper security you should be
using a real container system (or better yet, a virtual machine), not
just a chroot.  Chroot by itself is not a security mechanism, it's a
safety belt and testing tool.  If you actually want to secure something,
chroot should only be part of it, together with isolating it to its own
filesystem, and using cgroups and namespaces.
I should probably add to this that you shouldn't be accepting 
send/receive data streams from untrusted sources anyway.  While it 
probably won't crash your system, it's not intended for use as something 
like a network service.  If you're sending a subvolume over an untrusted 
network, you should be tunneling it through SSH or something similar, 
and then using that to provide source verification and data integrity 
guarantees, and if you can't trust the systems you're running backups 
for, then you have bigger issues to deal with.



Re: Security implications of btrfs receive?

2016-09-07 Thread Austin S. Hemmelgarn

On 2016-09-07 10:41, Christoph Anton Mitterer wrote:

On Tue, 2016-09-06 at 18:20 +0100, Graham Cobb wrote:

they know the UUID of the subvolume?


Unfortunately, btrfs seems to be pretty problematic when anyone knows
your UUIDs...
This is an issue with any filesystem, it is just a bigger issue with 
BTRFS.  Take a system using ext4, or XFS, or almost any other Linux 
filesystem, running almost any major distro, create a minimum sized 
partition on the disk for that filesystem type, and create a filesystem 
there with the same UUID as the root filesystem.  Next time that system 
reboots, things will usually blow up (XFS will refuse to mount, ext4 and 
most other filesystems will work sometimes and not others).  Windows and 
OS X actually have nearly as many problems like this too, they just 
happen in different circumstances (I've actually demonstrated to people 
the process of crashing a system by hot-plugging a disk before the boot 
drive at the right moment during startup, and that works on Windows and 
OS X, and under some circumstances on Linux too).

Look for my thread "attacking btrfs filesystems via UUID collisions?"
in the list archives.
From accidental corruptions to intentional attacks, a lot seemed to be
quite possible... and I do not even talk about general attacks that any
such systems that may do auto-assembly/auto-rebuilds (e.g. of multi-
device containers/fs) or similar likely suffer from (I've also
discussed this in the thread back then at some of the later mails).

Don't think anything has happened in that area yet (or ever?)... at
least I wouldn't have noticed that these security showstoppers would
have got fixed.
It hasn't, because there's not any way it can be completely fixed.  This 
particular case is an excellent example of why it's so hard to fix.  To 
close this particular hole, BTRFS itself would have to become aware of 
whether whoever is running an ioctl is running in a chroot or not, which 
is non-trivial to determine to begin with, and even harder when you 
factor in the fact that chroot() is a VFS level thing, not an underlying 
filesystem thing, while ioctls are much lower level.


That said, nobody's really done any work on mitigating the issues 
either, although David Sterba has commented about having interest in 
fixing issues caused by crafted filesystem images, so hopefully things 
will start moving more in the direction of proper security.




Re: Security implications of btrfs receive?

2016-09-07 Thread Austin S. Hemmelgarn

On 2016-09-07 10:44, Christoph Anton Mitterer wrote:

On Wed, 2016-09-07 at 07:58 -0400, Austin S. Hemmelgarn wrote:

if you want proper security you should
be
using a real container system


Won't these probably use the same filesystems?
That depends on how it's set up.  Most container software doesn't really 
care (but as this shows, they really should at least warn about this), 
but has no issues running a container off of it's own filesystem, so it 
ends up being an administrator decision in the end (as it arguably 
should be).



Re: Security implications of btrfs receive?

2016-09-07 Thread Christoph Anton Mitterer
On Wed, 2016-09-07 at 07:58 -0400, Austin S. Hemmelgarn wrote:
> if you want proper security you should
> be 
> using a real container system

Won't these probably use the same filesystems?


Cheers,
Chris.



Re: Security implications of btrfs receive?

2016-09-07 Thread Christoph Anton Mitterer
On Tue, 2016-09-06 at 18:20 +0100, Graham Cobb wrote:
> they know the UUID of the subvolume?

Unfortunately, btrfs seems to be pretty problematic when anyone knows
your UUIDs...
Look for my thread "attacking btrfs filesystems via UUID collisions?"
in the list archives.
From accidental corruptions to intentional attacks, a lot seemed to be
quite possible... and I do not even talk about general attacks that any
such systems that may do auto-assembly/auto-rebuilds (e.g. of multi-
device containers/fs) or similar likely suffer from (I've also
discussed this in the thread back then at some of the later mails).

I don't think anything has happened in that area yet (or ever?)... at
least I haven't noticed these security showstoppers getting fixed.

Cheers,
Chris.



Re: Security implications of btrfs receive?

2016-09-07 Thread Austin S. Hemmelgarn

On 2016-09-06 13:20, Graham Cobb wrote:

> Thanks to Austin and Duncan for their replies.
>
> On 06/09/16 13:15, Austin S. Hemmelgarn wrote:
>> On 2016-09-05 05:59, Graham Cobb wrote:
>>> Does the "path" argument of btrfs-receive mean that *all* operations are
>>> confined to that path?  For example, if a UUID or transid is sent which
>>> refers to an entity outside the path will that other entity be affected
>>> or used?
>> As far as I know, no, it won't be affected.
>>> Is it possible for a file to be created containing shared
>>> extents from outside the path?
>> As far as I know, the only way for this to happen is if you're
>> referencing a parent subvolume for a relative send that is itself
>> sharing extents outside of the path.  From a practical perspective,
>> unless you're doing deduplication on the receiving end, this
>> shouldn't be possible.
>
> Unfortunately that is not the case.  I decided to do some tests to see
> what happens.  It is possible for a receive into one path to reference
> and access a subvolume from a different path on the same btrfs disk.  I
> have created a bash script to demonstrate this at:
>
> https://gist.github.com/GrahamCobb/c7964138057e4e092a75319c9fb240a3
>
> This does require the attacker to know the (source) subvolume UUID they
> want to copy.  I am not sure how hard UUIDs are to guess.
Oh, I forgot that it checks the whole filesystem for referenced source 
subvolumes.


> By the way, this is exactly the same whether or not the --chroot option
> is specified on the "btrfs receive" command.

Which makes sense, chroot only affects the VFS layer, not the underlying FS.


> The next question this raises for me is whether this means that
> processes in a chroot or in a container (or in a mandatory access
> controls environment) can access files outside the chroot/container if
> they know the UUID of the subvolume?  After all, btrfs-receive uses
> IOCTLs that any program running as root can use.
Yes, it does mean that, but if you want proper security you should be 
using a real container system (or better yet, a virtual machine), not 
just a chroot.  Chroot by itself is not a security mechanism; it's a 
safety belt and testing tool.  If you actually want to secure something, 
chroot should only be part of it, together with isolating it to its own 
filesystem and using cgroups and namespaces.



Re: Security implications of btrfs receive?

2016-09-06 Thread Graham Cobb
Thanks to Austin and Duncan for their replies.

On 06/09/16 13:15, Austin S. Hemmelgarn wrote:
> On 2016-09-05 05:59, Graham Cobb wrote:
>> Does the "path" argument of btrfs-receive mean that *all* operations are
>> confined to that path?  For example, if a UUID or transid is sent which
>> refers to an entity outside the path will that other entity be affected
>> or used?
> As far as I know, no, it won't be affected.
>> Is it possible for a file to be created containing shared
>> extents from outside the path?
> As far as I know, the only way for this to happen is if you're
> referencing a parent subvolume for a relative send that is itself
> sharing extents outside of the path.  From a practical perspective,
> unless you're doing deduplication on the receiving end, this
> shouldn't be possible.

Unfortunately that is not the case.  I decided to do some tests to see
what happens.  It is possible for a receive into one path to reference
and access a subvolume from a different path on the same btrfs disk.  I
have created a bash script to demonstrate this at:

https://gist.github.com/GrahamCobb/c7964138057e4e092a75319c9fb240a3

This does require the attacker to know the (source) subvolume UUID they
want to copy.  I am not sure how hard UUIDs are to guess.
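For what it's worth, blind guessing shouldn't be the worry if the UUIDs are genuinely random; a back-of-the-envelope sketch (assuming version-4 random UUIDs, which is what the btrfs tools appear to generate, though I haven't verified that):

```shell
# A random (version-4) UUID has 122 unpredictable bits.  Even probing a
# billion UUIDs per second, the average time to hit one specific UUID
# is 2^121 / 10^9 seconds -- on the order of 10^19 years:
awk 'BEGIN { printf "%.3g years\n", 2^121 / (10^9 * 365 * 24 * 3600) }'
```

So the practical risk is leakage rather than guessing: anyone who can see the output of the relevant btrfs commands, or capture a send stream, learns the UUIDs, so they have to be treated as identifiers, not secrets.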

By the way, this is exactly the same whether or not the --chroot option
is specified on the "btrfs receive" command.

The next question this raises for me is whether this means that
processes in a chroot or in a container (or in a mandatory access
controls environment) can access files outside the chroot/container if
they know the UUID of the subvolume?  After all, btrfs-receive uses
IOCTLs that any program running as root can use.

Graham


Re: Security implications of btrfs receive?

2016-09-06 Thread Austin S. Hemmelgarn

On 2016-09-05 05:59, Graham Cobb wrote:
> Does anyone know of a security analysis of btrfs receive?
I'm not a developer, and definitely not a security specialist, just a 
security-minded sysadmin who has some idea what's going on, but I can at 
least try to answer this.


> I assume that just using btrfs receive requires root (is that so?).  But
> I was thinking of setting up a backup server which would receive
> snapshots from various client systems, each in their own path, and I
> wondered how much the security of the backup server (and other clients'
> backups) was dependent on the security of the client.
In an ideal situation, I'd suggest setting this up with everything 
running on behalf of the client in its own container (probably using 
LXC, but lmctfy or similar would work too).  You do need root for 
receive to work (because it has to set ownership of files), but you can 
still sandbox it to a reasonable degree.
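A minimal sketch of that layout (hostnames and paths here are hypothetical; send runs as root on the client, receive as root within the client's own path on the server):

```shell
# On the backup server: one subvolume per client keeps receive paths separate.
btrfs subvolume create /mnt/backups/client1

# On the client: send a read-only snapshot to the server's receive path.
# -p makes the stream incremental relative to the previous snapshot.
btrfs subvolume snapshot -r /home /snapshots/home.new
btrfs send -p /snapshots/home.prev /snapshots/home.new | \
    ssh backup-server 'btrfs receive /mnt/backups/client1'
```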


> Does the "path" argument of btrfs-receive mean that *all* operations are
> confined to that path?  For example, if a UUID or transid is sent which
> refers to an entity outside the path will that other entity be affected
> or used?

As far as I know, no, it won't be affected.

> Is it possible for a file to be created containing shared
> extents from outside the path?
As far as I know, the only way for this to happen is if you're 
referencing a parent subvolume for a relative send that is itself 
sharing extents outside of the path.  From a practical perspective, 
unless you're doing deduplication on the receiving end, this 
shouldn't be possible.

> Is it possible to confuse/affect
> filesystem metadata which would affect the integrity of subvolumes or
> files outside the path or prevent other clients from doing something
> legitimate?
>
> Do the answers change if the --chroot option is given?
This will give you no more or less security than a chroot would give on 
the system.  Pretty much, if there aren't any bugs, this will prevent 
things from getting written outside of the path specified.

> I am confused
> about the -m option -- does that mean that the root mount point has to
> be visible in the chroot?
In this case, 'root' refers to the top level of the filesystem that the 
path resides in.  So, if you have the filesystem you're putting the data 
on mounted at /mnt/backups, and the data for this particular client goes 
in /mnt/backups/client, the appropriate value for the -m option would be 
/mnt/backups.
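Concretely, for that layout the invocation would look something like this (paths hypothetical):

```shell
# Filesystem mounted at /mnt/backups; this client's data lives under
# /mnt/backups/client.  -m names the root mount point of the destination
# filesystem, and --chroot confines the receiving process to the target path.
btrfs receive --chroot -m /mnt/backups /mnt/backups/client
```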


> Lastly, even if receive is designed to be very secure, it is possible
> that it could trigger/use code paths in the btrfs kernel code which are
> not normally used during normal file operations and so could trigger
> bugs not normally seen.  Has any work been done on testing for that (for
> example tests using malicious streams, including ones which btrfs-send
> cannot generate)?
In general, most of what btrfs receive does is just regular file 
operations (open, write, chown, chmod, rename, and similar things); it 
only really gets into the ioctls when dealing with shared extents and 
when initially setting up the target subvolume, and there really isn't 
anything run on the kernel side that wouldn't be done during normal 
filesystem operation (except possibly some of the extent-sharing 
things).  I haven't done any testing myself (I don't use send-receive 
except for cloning VMs, so I don't have much incentive to worry about 
it), but my guess would be that a bogus data stream is most likely to 
crash the receive process in userspace, and has relatively little risk 
of trashing the filesystem beyond what would normally happen for 
something like a bogus rsync transfer.


> I am just wondering whether any work has been done/published on this area.
There isn't any that I know of personally, but in theory it shouldn't be 
hard to test.



Re: Security implications of btrfs receive?

2016-09-05 Thread Duncan
Graham Cobb posted on Mon, 05 Sep 2016 10:59:30 +0100 as excerpted:

> Lastly, even if receive is designed to be very secure, it is possible
> that it could trigger/use code paths in the btrfs kernel code which are
> not normally used during normal file operations and so could trigger
> bugs not normally seen.  Has any work been done on testing for that (for
> example tests using malicious streams, including ones which btrfs-send
> cannot generate)?

As a btrfs user and list regular (not a dev) I'll only answer this part, 
as it's the part I know an answer to. =:^)

Btrfs in general is not fuzz- or malicious-content resistant, yet.  In 
general, btrfs is considered stabilizing, but not yet fully stable, and 
fuzzer-related bug reports, as others, are taken and worked on, but the 
emphasis has been primarily on getting things working and bugs fixed in 
general, not yet on security hardening of any sort, so no claims as to 
btrfs hardening or resistance to malicious content can be made at this 
point, except that it's known to be pretty soft in that regard ATM.

As I said, fuzz-generated bugs are accepted and fixed, but I don't know 
that the intent is to ever "harden" btrfs in that regard, more to simply 
make it resilient to corruptions in general.

There has been, for instance, some discussion of attacks by simply 
leaving maliciously engineered btrfs thumb drives around to be inserted 
and automounted, but the attitude seems to be once they have physical 
access to plug them in, hardening is an exercise in futility, so the 
object isn't to prevent that attack vector, but rather, to make btrfs 
more resilient to normal (as opposed to deliberate) corruption that may 
occur, including that which is easiest to /find/ by fuzzing, but which 
may "just happen" in the real world, as well.

Of course that's not in the specific scope of receive, but I'd put it in 
the same boat.  IOW, treat potential send clients much as you would 
people with accounts on the machine.  If you wouldn't trust them with a 
basic shell account, don't trust their send-streams either.

Meanwhile, the stabilizing but not fully stable and mature status also 
means backups are even more strongly recommended than they would be with 
a fully stable filesystem.  Which means, to the extent that backups can 
mitigate the issue, they'd certainly be prudent and may to that extent 
solve the practical issue.  However, as always, if it's not backed up and 
you lose it, you've simply lost the low-value data that wasn't of enough 
value to you to be worth the hassle of backup, defined by your actions as 
such, regardless of any words claiming the contrary.  To the extent that 
you can trust your people as much as your backups, great, but not having 
those backups really /is/ defining that data as not worth the hassle, 
regardless of whether it's lost to malicious attack or to hardware/
software/wetware bug.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman
