Hi btrfs folks,
I'm afraid I have a newbie question... but I can't sort it out. It's
just about adding a disk to a btrfs volume and not getting the correct
amount of GB in the end...
I have a btrfs volume which already consists of two different devices
and which is mounted on /samples. Its total
On Fri, Jul 26, 2013 at 09:05:03AM +0200, Axelle wrote:
Hi btrfs folks,
I'm afraid I have a newbie question... but I can't sort it out. It's
just about adding a disk to a btrfs volume and not getting the correct
amount of GB in the end...
I have a btrfs volume which already consists of
On Fri, Jul 26, 2013 at 01:19:40AM +0100, Pete wrote:
Dear All,
Have I anything to be concerned about?
I have got some error messages on booting. The scenario was that I
had installed some ram and I suspect that I had disturbed a cable as
one disk was not visible. I could not mount the
Unfortunately this test takes 6 minutes on my SSD equipped test box since
it runs all possible single/dup/raid0/raid1/raid10/mixed profiles, one
round with the '-f' option to 'btrfs replace start' and one round
without this option. The cancellation is tested only once and with the
dup/single profile
On Thu, 25 Jul 2013 15:13:40 -0400, Josef Bacik wrote:
A user reported a panic when running with autodefrag and deleting snapshots.
This is because we could end up trying to add the root to the dead roots list
twice. To fix this check to see if we are empty before adding ourselves to
the
Hugo,
thanks.
On 07/26/2013 08:47 AM, Hugo Mills wrote:
Looks like it. I'd recommend a scrub to check for any other out of
date data on the affected drive. I've done pretty much the same thing
as this myself, and a scrub, though scary in the amount of noise it
made, fixed everything
On Fri, Jul 26, 2013 at 11:38:47AM +0200, Stefan Behrens wrote:
On Thu, 25 Jul 2013 15:13:40 -0400, Josef Bacik wrote:
A user reported a panic when running with autodefrag and deleting snapshots.
This is because we could end up trying to add the root to the dead roots list
twice. To fix
We can end up with inodes on the auto defrag list that exist on roots that are
going to be deleted. This is extra work we don't need to do, so just bail if
our root has 0 root refs. Thanks,
Signed-off-by: Josef Bacik <jba...@fusionio.com>
---
fs/btrfs/file.c | 5 +
1 file changed, 5
Hi Hugo,
Thanks for your answer, but I'm afraid I still don't get it.
RAID-0 requires at least two devices.
Well, I have three devices, so that's more than enough isn't it?
Or do you mean I should be adding two devices at a time?
If you balance this
configuration, you'll use up the first 93.13
On Fri, Jul 26, 2013 at 04:35:59PM +0200, Axelle wrote:
Hi Hugo,
Thanks for your answer, but I'm afraid I still don't get it.
RAID-0 requires at least two devices.
Well, I have three devices, so that's more than enough isn't it?
Or do you mean I should be adding two devices at a time?
On 7/26/13 4:28 AM, Stefan Behrens wrote:
Unfortunately this test takes 6 minutes on my SSD equipped test box since
it runs all possible single/dup/raid0/raid1/raid10/mixed profiles, one
round with the '-f' option to 'btrfs replace start' and one round
without this option. The cancellation is
ls -l will show the nblocks for the directory, and this made it into the golden
output for 314. The problem is nblocks is 0 for btrfs directories because we're
awesome, which makes us fail this test. So filter out the total blah line of
ls -l so btrfs can pass this test too. Thanks,
On 7/26/13 10:45 AM, Josef Bacik wrote:
ls -l will show the nblocks for the directory, and this made it into the golden
output for 314. The problem is nblocks is 0 for btrfs directories because we're
awesome, which makes us fail this test. So filter out the total blah line of
ls -l so
There's some 250+ lines here that are easily encapsulated into their own
function. I don't change how anything works here, just create and document
the new btrfs_clone() function from btrfs_ioctl_clone() code.
Signed-off-by: Mark Fasheh <mfas...@suse.de>
---
fs/btrfs/ioctl.c | 232
The range locking in btrfs_ioctl_clone is trivially broken out into its own
function. This reduces the complexity of btrfs_ioctl_clone() by a small bit
and makes that locking code available to future functions in
fs/btrfs/ioctl.c
Signed-off-by: Mark Fasheh <mfas...@suse.de>
---
fs/btrfs/ioctl.c |
Hi,
The following series of patches implements in btrfs an ioctl to do
offline deduplication of file extents.
To be clear, offline in this sense means that the file system is
mounted and running, but the dedupe is not done during file writes,
but after the fact when some userspace software
This patch adds an ioctl, BTRFS_IOC_FILE_EXTENT_SAME which will try to
de-duplicate a list of extents across a range of files.
Internally, the ioctl re-uses code from the clone ioctl. This avoids
rewriting a large chunk of extent handling code.
Userspace passes in an array of file, offset pairs
We want this for btrfs_extent_same. Basically readpage and friends do their
own extent locking but for the purposes of dedupe, we want to have both
files locked down across a set of readpage operations (so that we can
compare data). Introduce this variant and a flag which can be set for
On 07/26/2013 12:30 PM, Mark Fasheh wrote:
Hi,
The following series of patches implements in btrfs an ioctl to do
offline deduplication of file extents.
To be clear, offline in this sense means that the file system is
mounted and running, but the dedupe is not done during file writes,
but
I have a 4 disk RAID1 setup that fails to {mount,btrfsck} when disk 4
is connected.
With disk 4 attached btrfsck errors with:
btrfsck: root-tree.c:46: btrfs_find_last_root: Assertion
`!(path->slots[0] == 0)' failed
(I'd have to reboot in a non-functioning state to get the full output.)
I can
+static struct page *extent_same_get_page(struct inode *inode, u64 off)
+{
+	struct page *page;
+	pgoff_t index;
+	struct extent_io_tree *tree = BTRFS_I(inode)->io_tree;
+
+	index = off >> PAGE_CACHE_SHIFT;
+
+	page = grab_cache_page(inode->i_mapping, index);
+	if
On Wed, 2013-06-19 at 12:15 -0700, Joe Perches wrote:
Don't emit OOM warnings when k.alloc calls fail when
there is a v.alloc immediately afterwards.
Converted a kmalloc/vmalloc with memset to kzalloc/vzalloc.
Hey Jiri.
What's your schedule for accepting or rejecting
these sorts of
Replace list_for_each_entry() by list_for_each_entry_safe() in next
functions:
- lock_stripe_add()
- __btrfs_close_devices()
Signed-off-by: Azat Khuzhin <a3at.m...@gmail.com>
---
fs/btrfs/raid56.c  | 4 ++--
fs/btrfs/volumes.c | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)
diff