Our free_space cluster currently only uses rb_next to find the next
free_space entry by iterating the rbtree; there is no search involved,
so it is more efficient to iterate a list than an rbtree.
This is a straightforward change that converts the rbtree to a list.
Signed-off-by: Liu Bo
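The win is purely in iteration cost: rb_next() may have to climb back up through parent pointers, while a list walk is one pointer dereference per entry. A minimal userspace sketch of the list side, re-creating just enough of the kernel's list.h (the macro names mirror include/linux/list.h, but this is not the actual btrfs code, and the free_space struct below is a hypothetical stand-in for the real entry type):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal userspace re-creation of the kernel's circular doubly-linked
 * list (names mirror include/linux/list.h; not the kernel header). */
struct list_head {
	struct list_head *next, *prev;
};

#define LIST_HEAD_INIT(name) { &(name), &(name) }
#define LIST_HEAD(name) struct list_head name = LIST_HEAD_INIT(name)

static void list_add_tail(struct list_head *entry, struct list_head *head)
{
	entry->prev = head->prev;
	entry->next = head;
	head->prev->next = entry;
	head->prev = entry;
}

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Hypothetical stand-in for the real btrfs free_space entry. */
struct free_space {
	unsigned long offset;
	struct list_head list;
};

/* Walking the list is one pointer dereference per entry, whereas
 * rb_next() may climb parent pointers at each step. */
unsigned long sum_offsets(struct list_head *head)
{
	unsigned long sum = 0;
	struct list_head *pos;

	for (pos = head->next; pos != head; pos = pos->next)
		sum += container_of(pos, struct free_space, list)->offset;
	return sum;
}
```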
Several issues reported by Coverity: minor resource leaks and two bugfixes.
David Sterba (5):
btrfs-progs: check, fix path leak in error branch
btrfs-progs: fi show, don't leak canonical path
btrfs-progs: check, missing parens around compound block in find_normal_file_extent
Resolves-coverity-id: 1260250
Signed-off-by: David Sterba dste...@suse.cz
---
cmds-check.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/cmds-check.c b/cmds-check.c
index e74b116c0c43..71e4f4f3a13b 100644
--- a/cmds-check.c
+++ b/cmds-check.c
@@ -2839,7 +2839,7 @@
Resolves-coverity-id: 1127098
Signed-off-by: David Sterba dste...@suse.cz
---
cmds-filesystem.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/cmds-filesystem.c b/cmds-filesystem.c
index 1c1d34ae8ca2..a3cf114fb6ac 100644
--- a/cmds-filesystem.c
+++ b/cmds-filesystem.c
@@
Resolves-coverity-id: 1260247
Signed-off-by: David Sterba dste...@suse.cz
---
inode-item.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/inode-item.c b/inode-item.c
index 993f3091e335..522d25a433ac 100644
--- a/inode-item.c
+++ b/inode-item.c
@@ -89,7 +89,7 @@ int
Resolves-coverity-id: 1260252
Signed-off-by: David Sterba dste...@suse.cz
---
cmds-filesystem.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/cmds-filesystem.c b/cmds-filesystem.c
index 80875fffddfe..1c1d34ae8ca2 100644
--- a/cmds-filesystem.c
+++ b/cmds-filesystem.c
@@
Resolves-coverity-id: 1260248
Signed-off-by: David Sterba dste...@suse.cz
---
cmds-check.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/cmds-check.c b/cmds-check.c
index 71e4f4f3a13b..d2d218a88589 100644
--- a/cmds-check.c
+++ b/cmds-check.c
@@ -2160,9 +2160,10 @@ static
Hi,
let me announce the release of btrfs-progs version 3.18. There are
updates to UI and several enhancements of check/repair. About 100
commits from 14 contributors, thank you all!
Tarballs: https://www.kernel.org/pub/linux/kernel/people/kdave/btrfs-progs/
Git:
I think you are missing crucial info on the on-disk layout that BTRFS
implements. While a traditional RAID1 has a rigid layout with
fixed and easily predictable locations for all data (exactly on two
specific disks), BTRFS allocates chunks as needed on ANY two disks.
Please research into this to
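To make the contrast concrete: btrfs RAID1 places each chunk on the pair of devices with the most unallocated space, rather than on fixed mirror pairs. A rough model of that selection step (a hypothetical helper for illustration only; the real allocator lives in the kernel's volumes.c and is considerably more involved):

```c
#include <assert.h>

/* Rough model of btrfs RAID1 chunk placement: each new chunk goes to
 * the two devices with the most unallocated space, so mirrors are not
 * pinned to fixed disk pairs the way traditional RAID1 mirrors are.
 * Hypothetical sketch, not the actual allocator. */
void pick_two_devices(const unsigned long long *free_bytes, int ndev,
		      int out[2])
{
	int i;

	out[0] = out[1] = -1;
	for (i = 0; i < ndev; i++) {
		if (out[0] < 0 || free_bytes[i] > free_bytes[out[0]]) {
			out[1] = out[0];
			out[0] = i;
		} else if (out[1] < 0 || free_bytes[i] > free_bytes[out[1]]) {
			out[1] = i;
		}
	}
}
```

As devices fill at different rates, successive chunks land on different pairs, which is why a 2-disk failure can hit data that a rigid-layout RAID1 would have kept safe.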
On Tuesday, 30 December 2014, 17:34:39, David Sterba wrote:
Hi,
Hi David,
let me announce the release of btrfs-progs version 3.18. There are
updates to UI and several enhancements of check/repair. About 100
commits from 14 contributors, thank you all!
Tarballs:
On 12/29/2014 4:53 PM, Chris Murphy wrote:
Get drives supporting configurable or faster recoveries. There's
no way around this.
Practically available right now? Sure. In theory, no.
This is a broken record topic honestly. The drives under
* filesystem usage - give an overview of fs usage in a way that's more
* device usage - more detailed information about per-device allocations
* same restrictions as for 'fi usage'
Interesting.
Used these to create a filesystem, with btrfs-progs v3.17.3:
# mkfs.btrfs -O
On 12/29/2014 7:20 PM, ashf...@whisperpc.com wrote:
Just some background data on traditional RAID, and the chances of
survival with a 2-drive failure.
In traditional RAID-10, the chance of surviving a 2-drive failure
is 66% on a 4-drive array,
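That 66% figure falls out of a one-line calculation: once the first drive dies, only its mirror partner is fatal, so (n - 2) of the remaining (n - 1) drives can fail safely. A quick sketch (my own helper for illustration, not from any posted patch):

```c
#include <assert.h>
#include <math.h>

/* Chance that an n-drive RAID-10 (mirrored pairs) survives a second
 * drive failure: after the first failure only the dead drive's mirror
 * partner is fatal, so (n - 2) of the remaining (n - 1) drives are
 * safe to lose. */
double raid10_two_drive_survival(int n)
{
	return (double)(n - 2) / (double)(n - 1);
}
```

For 4 drives this gives 2/3 (the 66% above); larger arrays improve the odds, e.g. 4/5 = 80% for 6 drives.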
On Tue, Dec 30, 2014 at 10:38:38PM +0100, Tomasz Chmielewski wrote:
* filesystem usage - give an overview of fs usage in a way that's more
* device usage - more detailed information about per-device allocations
* same restrictions as for 'fi usage'
Interesting.
Used
Phillip Susi wrote:
I'm wondering which of the above the BTRFS implementation most
closely resembles.
Unfortunately, btrfs just uses the naive raid1+0, so no 2- or 3-disk
raid10 arrays, and no higher-performing offset layout.
Jose Manuel Perez Bethencourt wrote:
I think you are missing
On Tue, Dec 30, 2014 at 1:46 PM, Phillip Susi ps...@ubuntu.com wrote:
On 12/29/2014 4:53 PM, Chris Murphy wrote:
Get drives supporting configurable or faster recoveries. There's
no way around this.
Practically available right now? Sure. In
On Wed, Dec 17, 2014 at 08:07:27PM -0800, Robert White wrote:
[...]
There are a number of pathological examples in here, but I think there
are justifiable correct answers for each of them that emerge from a
single interpretation of the meanings of f_bavail, f_blocks, and f_bfree.
One gotcha is
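For readers following along, these are the POSIX statvfs fields the thread is arguing about; a small userspace probe that prints all three, scaled to bytes (a sketch of the standard interface, not of btrfs's statfs implementation):

```c
#include <assert.h>
#include <stdio.h>
#include <sys/statvfs.h>

/* Print the three statvfs fields under discussion, scaled to bytes.
 * f_blocks: total blocks on the fs (in f_frsize units);
 * f_bfree:  free blocks, including any root-reserved space;
 * f_bavail: blocks an unprivileged user can actually allocate. */
int report_space(const char *path)
{
	struct statvfs s;

	if (statvfs(path, &s) != 0)
		return -1;
	printf("total: %llu bytes\n",
	       (unsigned long long)s.f_blocks * s.f_frsize);
	printf("free:  %llu bytes\n",
	       (unsigned long long)s.f_bfree * s.f_frsize);
	printf("avail: %llu bytes\n",
	       (unsigned long long)s.f_bavail * s.f_frsize);
	return 0;
}
```

The gap between f_bfree and f_bavail is exactly where filesystems hide reserved or unallocatable space, which is what makes the "correct" values for btrfs debatable.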
Hi all,
While surfing the Redhat BZ, a lot(at least 5 I found in one month)
users report bugs in btrfs about
kernel warning in btrfs_abort_transaction().
And most of them (about 3 or more) are caused by disconnected usb device.
So I'm considering not to warn on some cases if we know its
o removed an unnecessary INIT_LIST_HEAD after LIST_HEAD
o merged a declare + INIT_LIST_HEAD pair into one LIST_HEAD
Signed-off-by: Gui Hecheng guihc.f...@cn.fujitsu.com
---
fs/btrfs/free-space-cache.c | 5 +
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git
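For context, LIST_HEAD(x) declares a list head and initializes it in one statement, so a separate declaration followed by INIT_LIST_HEAD() is redundant. A userspace re-creation of the two macros to show the equivalence (names mirror include/linux/list.h; these are not the kernel headers, and the variable names are illustrative, not the ones in free-space-cache.c):

```c
#include <assert.h>

/* Userspace re-creations of the two kernel macros involved
 * (include/linux/list.h); not the kernel headers themselves. */
struct list_head {
	struct list_head *next, *prev;
};

#define LIST_HEAD_INIT(name) { &(name), &(name) }
#define LIST_HEAD(name) struct list_head name = LIST_HEAD_INIT(name)

static inline void INIT_LIST_HEAD(struct list_head *list)
{
	list->next = list;
	list->prev = list;
}

/* The patch's transformation: the two-step form (declare, then
 * INIT_LIST_HEAD) and the one-liner LIST_HEAD both produce an empty
 * list whose head points at itself. */
int list_empty_demo(void)
{
	struct list_head two_step;	/* old form: declare... */
	LIST_HEAD(one_liner);		/* new form: declare + init */

	INIT_LIST_HEAD(&two_step);	/* ...then initialize */
	return two_step.next == &two_step && one_liner.next == &one_liner;
}
```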
On 12/30/2014 06:17 PM, ashf...@whisperpc.com wrote:
I believe that someone who understands the code in depth (and that
may also be one of the people above) should determine exactly how BTRFS
implements RAID-10.
I am such a person. I had a similar
On 12/30/2014 06:58 PM, Chris Murphy wrote:
Practically available right now? Sure. In theory, no.
I have no idea what this means. Such drives exist, you can buy them
or not buy them.
I was referring to the "no way around this" part. Currently
I have a well tested and working fine Centos5-Xen system. Accumulated
cruft from various development efforts make it desirable to redo the
install. Currently a RAID-10 ext4 filesystem with LVM and 750G of
storage. There's a hot spare 750 drive in the system.
I'm thinking of migrating the
Hi Dave,
Original Message
Subject: should I use btrfs on Centos 7 for a new production server?
From: Dave Stevens g...@uniserve.com
To: Btrfs BTRFS linux-btrfs@vger.kernel.org
Date: 31 December 2014 11:29
I have a well tested and working fine Centos5-Xen system. Accumulated
cruft
Hello,
I have a well tested and working fine Centos5-Xen system. Accumulated cruft
from various development efforts make it desirable to redo the install.
Currently a RAID-10 ext4 filesystem with LVM and 750G of storage. There's a
hot spare 750 drive in the system.
I'm thinking of
Hello,
I have a well tested and working fine Centos5-Xen system. Accumulated cruft
from various development efforts make it desirable to redo the install.
Currently a RAID-10 ext4 filesystem with LVM and 750G of storage. There's a
hot spare 750 drive in the system.
I'm thinking
On 12/30/14 10:03 PM, Wang Shilong wrote:
Hello,
I have a well tested and working fine Centos5-Xen system.
Accumulated cruft from various development efforts make it
desirable to redo the install. Currently a RAID-10 ext4 filesystem
with LVM and 750G of storage. There's a hot spare 750
On 12/30/14 10:06 PM, Wang Shilong wrote:
Hello,
I have a well tested and working fine Centos5-Xen system. Accumulated cruft
from various development efforts make it desirable to redo the install.
Currently a RAID-10 ext4 filesystem with LVM and 750G of storage. There's a
hot spare
On Wed, Dec 31, 2014 at 1:04 PM, Eric Sandeen sand...@redhat.com wrote:
On 12/30/14 10:06 PM, Wang Shilong wrote:
I used CentOS 7 btrfs myself, just doing some tests... it crashed easily.
I don't know how much effort Red Hat puts into btrfs for the 7 series.
Maybe using SUSE Enterprise for btrfs will