It turns out we don't properly roll back in-core btrfs_device state on
umount. We zero out ->bdev, ->in_fs_metadata and that's about it. In
particular, we don't zero out ->generation, and this can lead to us
refusing a mount -- a non-NULL fs_devices->latest_bdev is essential, but
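In userspace terms, the intended roll-back looks something like this (a hypothetical struct mirroring the fields named above, not the kernel's btrfs_device):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical mirror of the in-core device state named above;
 * this is not the kernel's struct btrfs_device. */
struct dev_state {
	void *bdev;
	int in_fs_metadata;
	unsigned long long generation;
};

/* On umount, reset everything mount/scan derived -- including
 * generation, which the current code forgets to clear. */
static void dev_state_reset(struct dev_state *d)
{
	d->bdev = NULL;
	d->in_fs_metadata = 0;
	d->generation = 0;
}
```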
Currently btrfs_device is allocated ad-hoc in a few different places,
and as a result not all fields are initialized properly. In particular,
readahead state is only initialized in device_list_add (at scan time),
and not in btrfs_init_new_device (when the new device is added with
'btrfs dev
In the spirit of btrfs_alloc_device, add a helper for allocating and
doing some common initialization of the btrfs_fs_devices struct.
Signed-off-by: Ilya Dryomov idryo...@gmail.com
---
fs/btrfs/volumes.c | 71 +++-
1 file changed, 53 insertions(+),
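A minimal userspace sketch of such a helper (illustrative names; the kernel version would use kzalloc, INIT_LIST_HEAD and mutex_init):

```c
#include <assert.h>
#include <stdlib.h>
#include <pthread.h>

/* Illustrative stand-in for struct btrfs_fs_devices. */
struct fs_devices_sketch {
	pthread_mutex_t device_list_mutex;
	unsigned long long num_devices;
};

/* One place that allocates and fully initializes the struct, so
 * no caller can forget a lock or a list again. */
static struct fs_devices_sketch *alloc_fs_devices_sketch(void)
{
	struct fs_devices_sketch *fs = calloc(1, sizeof(*fs));

	if (!fs)
		return NULL;
	pthread_mutex_init(&fs->device_list_mutex, NULL);
	return fs;
}
```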
Hello,
This patch series does two things:
- adds allocation helpers for struct btrfs_device and struct
btrfs_fs_devices so that all the list_heads, spinlocks, etc. are
properly initialized and the code for that is in one place;
- fixes a bug in the umount sequence, which, under certain
find_next_devid() knows which root to search, so it should take an
fs_info instead of an arbitrary root.
Signed-off-by: Ilya Dryomov idryo...@gmail.com
---
fs/btrfs/volumes.c | 16
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/fs/btrfs/volumes.c
To get the name of the file from a pathname, let's use the kbasename()
helper. It allows us to simplify the code a bit.
Signed-off-by: Andy Shevchenko andriy.shevche...@linux.intel.com
---
fs/btrfs/send.c | 17 ++---
1 file changed, 6 insertions(+), 11 deletions(-)
diff --git a/fs/btrfs/send.c
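For reference, the kernel's kbasename() (include/linux/string.h) is essentially a one-liner; a userspace equivalent:

```c
#include <assert.h>
#include <string.h>

/* Userspace equivalent of the kernel's kbasename(): return the
 * part of the path after the last '/', or the whole string if
 * there is no '/'. */
static const char *kbasename(const char *path)
{
	const char *tail = strrchr(path, '/');

	return tail ? tail + 1 : path;
}
```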
On Thu, Aug 08, 2013 at 11:44:54AM -0400, Josef Bacik wrote:
On Thu, Aug 08, 2013 at 06:48:05AM -0700, Christoph Hellwig wrote:
On Thu, Aug 08, 2013 at 09:02:07AM -0400, Josef Bacik wrote:
This won't work, try having 1 subvolumes with dirty inodes and do
sync then
go skiing,
On Fri, Aug 09, 2013 at 11:35:33PM +0200, Kai Krakow wrote:
Josef Bacik jba...@fusionio.com schrieb:
So I guess the reason that ZFS does well with that workload is that
ZFS is using smaller blocks, maybe just 512B?
Yeah I'm not sure what ZFS does, but if you are writing over a block
We have logic to see if we've already created a parent directory by checking to see
if an inode inside of that directory has a lower inode number than the one we
are currently processing. The logic is that if there is a lower inode number
then we would have had to have made sure the directory was created
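A toy model of that check (hypothetical helper, not send.c code): the directory is assumed created if any inode inside it is numbered below the current send progress.

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the send heuristic: any child inode numbered below
 * the current progress implies the directory was already created. */
static int dir_assumed_created(const unsigned long long *children,
			       size_t n, unsigned long long cur_ino)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (children[i] < cur_ino)
			return 1;
	return 0;
}
```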
On Sun, Aug 11, 2013 at 09:53:01PM +0300, Emil Karlson wrote:
Greetings
Send fails for me unexpectedly:
I get:
ERROR: rename o262-5-0 -> snapshots failed. No such file or directory
reproducer ( http://users.tkk.fi/~jkarlson/files/test4.txt ):
for i in 1 2; do
mkdir /mnt/$i
On Mon, 12 Aug 2013 10:59:52 -0400, Josef Bacik wrote:
On Sun, Aug 11, 2013 at 09:53:01PM +0300, Emil Karlson wrote:
Greetings
Send fails for me unexpectedly:
I get:
ERROR: rename o262-5-0 -> snapshots failed. No such file or directory
reproducer (
Hi,
We decided to give BTRFS a try. We find it very flexible and generally
fast. However last week we had a problem with a Marvell controller in
AHCI and one BTRFS formatted hard drive. We isolated the problem by
relocating the disk to an Intel controller (SATA controller: Marvell
Technology Group
On 08/12/2013 18:57, Todor Ivanov wrote:
Hi,
We decided to give BTRFS a try. We find it very flexible and generally
fast. However last week we had a problem with a Marvell controller in
AHCI and one BTRFS formatted hard drive. We isolated the problem by
relocating the disk to an Intel controller
On Mon, Aug 12, 2013 at 05:16:04PM +0200, Stefan Behrens wrote:
On Mon, 12 Aug 2013 10:59:52 -0400, Josef Bacik wrote:
On Sun, Aug 11, 2013 at 09:53:01PM +0300, Emil Karlson wrote:
Greetings
Send fails for me unexpectedly:
I get:
ERROR: rename o262-5-0 -> snapshots failed. No such
We have logic to see if we've already created a parent directory by checking to see
if an inode inside of that directory has a lower inode number than the one we
are currently processing. The logic is that if there is a lower inode number
then we would have had to have made sure the directory was created
This is a regression test for a problem we had where we'd assume we had created
a directory if it only had subvols inside of it. This was happening because
subvols would have lower inode numbers than our current send progress because
their inode numbers are based off of a different counter.
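The failure mode, with made-up numbers: because subvolume numbers come from a separate counter, a brand-new subvol can sit below the current progress and make the lower-inode-number check misfire.

```c
#include <assert.h>

/* Toy model of the misfire: child_ino < cur_ino is taken as
 * "parent dir already created", but a fresh subvol's number
 * (drawn from a different counter) can be low even inside a
 * directory that was never sent. The numbers used in the test
 * are illustrative, not real on-disk values. */
static int looks_already_created(unsigned long long child_ino,
				 unsigned long long cur_ino)
{
	return child_ino < cur_ino;
}
```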
On 8/12/13 2:13 PM, Josef Bacik wrote:
This is a regression test for a problem we had where we'd assume we had
created
a directory if it only had subvols inside of it. This was happening because
subvols would have lower inode numbers than our current send progress because
their inode
Eric pointed out that btrfs will happily allow you to delete the default subvol.
This is obviously a problem since the next time you go to mount the file system
it will freak out because it can't find the root. Fix this by adding a check to
see if our default subvol points to the subvol we are
We were allowing users to delete their default subvolume, which is problematic.
This is a regression test to make sure we don't let that happen in the
future. Thanks,
Signed-off-by: Josef Bacik jba...@fusionio.com
---
tests/btrfs/003 | 64
The handler for the ioctl BTRFS_IOC_FS_INFO was reading the
number of devices before acquiring the device list mutex.
This could lead to inconsistent results because the device list
and the number of devices counter (amongst other counters
related to the device list) are updated in
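The shape of the fix, modeled in userspace with a pthread mutex (the kernel code would take the device list mutex): read the counter only while holding the same lock the updaters take.

```c
#include <assert.h>
#include <pthread.h>

/* Userspace model of the FS_INFO fix: snapshot the counter only
 * under the same mutex that protects device-list updates, so the
 * value is consistent with the list it describes. */
struct dev_list_model {
	pthread_mutex_t lock;
	unsigned long long num_devices;
};

static unsigned long long read_num_devices(struct dev_list_model *dl)
{
	unsigned long long n;

	pthread_mutex_lock(&dl->lock);
	n = dl->num_devices;
	pthread_mutex_unlock(&dl->lock);
	return n;
}
```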
On 8/12/13 2:40 PM, Josef Bacik wrote:
We were allowing users to delete their default subvolume, which is
problematic.
This test is a regression test to make sure we don't let that happen in the
future. Thanks,
Signed-off-by: Josef Bacik jba...@fusionio.com
---
tests/btrfs/003 |
Hello all,
About a week or so ago I noticed that the [btrfs-ino-cache] process was
appearing in 'top' on each reboot and the disk was spinning like crazy
for about five minutes or so. Quite often this caused X to fail to
start because all I/O was busy with caching.
Even after letting it calm
I'm hitting a btrfs Kernel BUG running a snapshot stress script with
linux-3.11.0-rc5.
I'm running with lzo compression, autodefrag, and the partition is
formatted with 16k leafsize/inodesize.
[ 72.170431] device fsid 8a6be667-d041-4367-80f7-e4cb42356e85 devid
1 transid 4 /dev/sda7
[
On 06/08/2013 01:04, David Sterba wrote:
On Mon, Jul 15, 2013 at 01:30:52PM +0800, Anand Jain wrote:
This patch adds --mapper option to btrfs device scan and
btrfs filesystem show cli, when used will look for btrfs
devs under /dev/mapper and will use the links provided
under the /dev/mapper.
dima posted on Tue, 13 Aug 2013 10:28:59 +0900 as excerpted:
About a week or so ago I noticed that the [btrfs-ino-cache] process was
appearing in 'top' on each reboot and the disk was spinning like crazy
for about five minutes or so. Quite often this caused X to fail to
start because all I/O
Josef Bacik posted on Mon, 12 Aug 2013 15:39:35 -0400 as excerpted:
Fix this by adding a check to see if our default subvol points to the
subvol we are trying to delete, and if it does not allowing it to
happen.
Umm... not to be a grammar policeman, but...
That last sub-sentence REALLY (!!)
On 08/13/2013 01:09 PM, Duncan wrote:
dima posted on Tue, 13 Aug 2013 10:28:59 +0900 as excerpted:
About a week or so ago I noticed that the [btrfs-ino-cache] process was
appearing in 'top' on each reboot and the disk was spinning like crazy
for about five minutes or so. Quite often this caused