The first patch fixes a bug where the inode cache is rebuilt repeatedly.
The next two patches fix problems that surface once the first one is applied.
Liu Bo (3):
Btrfs: avoid building inode cache repeatedly
Btrfs: don't build inode cache for orphan root
Btrfs: fix EEXIST error when creating new file in subvolume
We check for orphan roots when mounting btrfs, but orphan roots are roots
that are already dead and about to be freed, so don't start building the
inode cache for them; otherwise we'll get an ugly crash.
Acked-by: Miao Xie
Signed-off-by: Liu Bo
---
fs/btrfs/inode-map.c | 4 +++-
1 file changed
While creating a subvolume/snapshot, we don't use the inode cache to allocate
the inode id for the root dir ".", so the inode cache never marks that id as
used; when we later create a new file, the allocator hands out the same id
again and the creation fails with -EEXIST.
Reviewed-by: Miao Xie
Signed-off-by: Liu Bo
---
fs/btrfs/inode-map.c |
The inode cache is similar to the free space cache and in fact shares the same
code; however, we don't load the inode cache until we're about to allocate an
inode id. There is then a case where we only commit the transaction during
other operations, such as snapshot creation; we now update fs roots' generation
t
Hello Michael,
> I built the new btrfs-progs 3.12 recently. I note that the version
> information doesn't seem to match this:
>
> # ./btrfs --version
> Btrfs v0.20-rc1-358-g194aa4a
>
> Regardless, I was trying to use btrfs send (which worked in the older
> btrfs), and failed. Here's an e
I built the new btrfs-progs 3.12 recently. I note that the version
information doesn't seem to match this:
# ./btrfs --version
Btrfs v0.20-rc1-358-g194aa4a
Regardless, I was trying to use btrfs send (which worked in the older
btrfs), and failed. Here's an example:
# ./btrfs send -v
There are actually more. Like this one:
http://iohq.net/index.php?title=Btrfs:RAID_5_Rsync_Freeze
It seems to be the exact same issue as I have, as I too can't do high
speed rsyncs writing to the btrfs array without blocking (reading is
fine).
Best regards,
Hans-Kristian Bakke
On 16 December 2013 00:39,
Torrents are really only one of the things my storage server gets hammered
with. It also does a lot of other IO-intensive stuff. I actually run
enterprise storage drives in a Supermicro server for a reason; even if
it is my home setup, consumer stuff just doesn't cut it with my storage
abuse :)
It runs KVM virtua
Chris Murphy wrote:
> On Dec 14, 2013, at 4:19 PM, Hans-Kristian Bakke wrote:
>
> > # btrfs fi df /storage/storage-vol0/
> > Data, RAID10: total=13.89TB, used=12.99TB
> > System, RAID10: total=64.00MB, used=1.19MB
> > System: total=4.00MB, used=0.00
> > Metadata, RAID10: total=21.00GB, used=17.5
Leonidas Spyropoulos posted on Sun, 15 Dec 2013 20:28:05 + as
excerpted:
> Oh, so the df report from btrfs doesn't show the total as 'free'! But it
> means how much space the filesystem allocated so far.
Yes.
Btrfs allocates in chunks, 256 MiB at a time for metadata (but on a
single device,
Hans-Kristian Bakke posted on Sun, 15 Dec 2013 15:51:37 +0100 as
excerpted:
> # Regarding torrents and preallocation I have actually turned
> preallocation on specifically in rtorrent thinking that it did btrfs a
> favour like with ext4 (system.file_allocate.set = yes). It is easy to
> turn it off
Greetings all and pardon my intrusion,
I'm just a basic home user running a file server for mostly large
media files (write once, read many). Some dynamic working files, but
a much smaller portion. The OS will not run on BTRFS, nor will it do
much in the way of running services other than what's
On Dec 15, 2013, at 10:40 AM, Gene Czarcinski wrote:
> On 12/14/2013 01:43 PM, Chris Murphy wrote:
>> On Dec 14, 2013, at 2:57 AM, Gene Czarcinski wrote:
>>> Since I run Fedora with anaconda I use kickstart installs and can easily
>>> repeat an install since it included almost everything I wa
On Sun, Dec 15, 2013 at 8:24 PM, Hugo Mills wrote:
> On Sun, Dec 15, 2013 at 08:20:19PM +, Leonidas Spyropoulos wrote:
>> Hey all,
[..]
>> Anyone can explain me the Data row of the above output ? It used to be
>> 19.19GB and now it's 10.00GB. It's like the partition shrunk!? The
>> balance ope
On Sun, Dec 15, 2013 at 08:20:19PM +, Leonidas Spyropoulos wrote:
> Hey all,
>
> Just did a btrfs balance on a single device. Before the balance
> operation here is the df result:
>
> inglor@tiamat ~$ btrfs fi df /home
> Data: total=19.19GB, used=9.34GB
> System, DUP: total=32.00MB, used=4.00KB
Hey all,
Just did a btrfs balance on a single device. Before the balance
operation here is the df result:
inglor@tiamat ~$ btrfs fi df /home
Data: total=19.19GB, used=9.34GB
System, DUP: total=32.00MB, used=4.00KB
Metadata, DUP: total=896.00MB, used=227.98MB
Then I issued a balance operation rel
On 12/14/2013 01:43 PM, Chris Murphy wrote:
On Dec 14, 2013, at 2:57 AM, Gene Czarcinski wrote:
Since I run Fedora with anaconda I use kickstart installs and can easily
repeat an install since it included almost everything I want installed. And
then I have a post-install script I run to pi
Thank you for your very thorough answer Duncan.
Just to clear up a couple of questions.
# Backups
The backups I am speaking of are backups of data on the btrfs filesystem
to another place. The btrfs filesystem sees this as large reads at
about 100 mbit/s, at the time for about a week continuously. In
Hello Philipp,
In order to reproduce your problem, I compiled btrfs on kernels 3.2 and 3.12.
# mkfs.btrfs -f /dev/sda2
# mount /dev/sda2 /mnt
# dd if=/dev/zero of=/mnt/data bs=1M count=1024
# umount /mnt
The above operations were done on 3.2, and then I switched my system to
3.12 and tried to mount /d
Hans-Kristian Bakke posted on Sun, 15 Dec 2013 03:35:53 +0100 as
excerpted:
> I have done some more testing. I turned off everything using the disk
> and only did defrag. I have created a script that gives me a list of the
> files with the most extents. I started from the top to improve the
> frag
While running btrfs/004 from xfstests, after 503 iterations, dmesg reported
a deadlock between tasks iterating inode refs and tasks running delayed inodes
(during a transaction commit).
It turns out that iterating inode refs implies doing one tree search and
releasing all nodes in the path except th
On Sun, Dec 15, 2013 at 3:34 AM, Shilong Wang wrote:
> 2013/12/14 Filipe David Manana :
>> On Sat, Dec 14, 2013 at 3:13 PM, Filipe David Manana
>> wrote:
>>> On Sat, Dec 14, 2013 at 3:08 PM, Shilong Wang
>>> wrote:
2013/12/14 Filipe David Manana :
> On Sat, Dec 14, 2013 at 2:56 PM, Sh