Hi,
Any comment on this patch?
Without it, btrfs will always fail for generic/387.
Thanks,
Qu
At 09/07/2016 08:17 PM, Wang Xiaoguang wrote:
The following test script can reveal this bug:
dd if=/dev/zero of=fs.img bs=$((1024*1024)) count=100
dev=$(losetup --show -f fs.img)
mkdir -p /mnt/mn
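(The snippet is cut off here; a plausible continuation, assumed rather than taken from the original mail, would format and mount the loop device before exercising the bug:)

mkfs.btrfs -f $dev    # assumed continuation, not from the original mail
mount $dev /mnt/mn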
On 04.01.2017 00:43 Hans van Kranenburg wrote:
> On 01/04/2017 12:12 AM, Peter Becker wrote:
>> Good hint, this would be an option and I will try this.
>>
>> Regardless of this, curiosity has gripped me and I will try to
>> figure out where the problem with the low transfer rate lies.
>>
>> 2017-01
On 01/04/2017 12:12 AM, Peter Becker wrote:
> Good hint, this would be an option and I will try this.
>
> Regardless of this, curiosity has gripped me and I will try to
> figure out where the problem with the low transfer rate lies.
>
> 2017-01-04 0:07 GMT+01:00 Hans van Kranenburg:
>> On 01/
(Resending this reply; the first attempt bounced with an invalid-email-address message.)
On Tue, Jan 03, 2017 at 01:00:45PM -0800, Liu Bo wrote:
> On Fri, Nov 11, 2016 at 04:39:45PM +0800, Wang Xiaoguang wrote:
> > This issue was revealed by modifying BTRFS_MAX_EXTENT_SIZE (128MB) to 64KB.
> > When modifying B
Good hint, this would be an option and I will try this.
Regardless of this, curiosity has gripped me and I will try to
figure out where the problem with the low transfer rate lies.
2017-01-04 0:07 GMT+01:00 Hans van Kranenburg:
> On 01/03/2017 08:24 PM, Peter Becker wrote:
>> All invocations are
On 01/03/2017 08:24 PM, Peter Becker wrote:
> All invocations are justified, but not relevant in (offline) backup
> and archive scenarios.
>
> For example, you have multiple versions of append-only log files or
> append-only DB files (each more than 100GB in size), like this:
>
>> Snapshot_01_01_20
Will include other fields, if this gets accepted.
Signed-off-by: Lakshmipathi.G
---
btrfs-corrupt-block.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/btrfs-corrupt-block.c b/btrfs-corrupt-block.c
index 16680df..64376ca 100644
--- a/btrfs-corrupt-block.c
+++ b/btrfs-corrupt-block.c
As I understand the duperemove source code (I have been working on /
trying to improve this code for 5 or 6 weeks now, in multiple areas),
duperemove does the hashing and calculation before it calls extent_same.
Duperemove stores everything in a hashfile and reads it back. After all
files are hashed and duplicates are detected, the
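For readers unfamiliar with that last step: the dedupe call duperemove
issues is the BTRFS_IOC_FILE_EXTENT_SAME ioctl. A minimal sketch of one
such call follows (file names and offsets are illustrative, not
duperemove's actual code):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/btrfs.h>  /* BTRFS_IOC_FILE_EXTENT_SAME */

int main(void)
{
    int src = open("file-a", O_RDONLY);
    int dst = open("file-b", O_RDWR);
    if (src < 0 || dst < 0) {
        perror("open");
        return 1;
    }

    /* one args struct plus one destination record */
    struct btrfs_ioctl_same_args *args =
        calloc(1, sizeof(*args) + sizeof(struct btrfs_ioctl_same_extent_info));
    args->logical_offset = 0;         /* offset in the source file */
    args->length = 1024 * 1024;       /* dedupe one 1MB extent */
    args->dest_count = 1;
    args->info[0].fd = dst;
    args->info[0].logical_offset = 0; /* offset in the destination file */

    if (ioctl(src, BTRFS_IOC_FILE_EXTENT_SAME, args) < 0)
        perror("BTRFS_IOC_FILE_EXTENT_SAME");
    else
        printf("status=%d, deduped=%llu bytes\n", args->info[0].status,
               (unsigned long long)args->info[0].bytes_deduped);
    free(args);
    return 0;
}

Note that the kernel reads and byte-compares both ranges itself before
sharing the extent, which is a large part of the per-call cost discussed
elsewhere in this thread.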
On Fri, Nov 11, 2016 at 04:39:45PM +0800, Wang Xiaoguang wrote:
> This issue was revealed by modifying BTRFS_MAX_EXTENT_SIZE (128MB) to 64KB.
> With BTRFS_MAX_EXTENT_SIZE reduced to 64KB, the fsstress test often
> gets these warnings from btrfs_destroy_inode():
> WARN_ON(BTRFS_I(inode)->o
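For context, the number of reserved extents for a delalloc range is
derived from BTRFS_MAX_EXTENT_SIZE, roughly as in this sketch
(simplified from the kernel's accounting, not the actual patch):

#include <stdint.h>

#define BTRFS_MAX_EXTENT_SIZE (128ULL * 1024 * 1024) /* stock value */

/* One outstanding extent is accounted per BTRFS_MAX_EXTENT_SIZE chunk,
 * so shrinking the constant to 64KB multiplies the count and makes any
 * split/merge accounting bug far more likely to trip the WARN_ON in
 * btrfs_destroy_inode(). */
static uint64_t count_max_extents(uint64_t len)
{
    return (len + BTRFS_MAX_EXTENT_SIZE - 1) / BTRFS_MAX_EXTENT_SIZE;
}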
On 2017-01-03 15:20, Peter Becker wrote:
I think I understand. The resulting key question is how I can improve
the performance of the extent_same ioctl.
I tested it with the following results:
environment:
2 files, called "file", each 100GB in size, duperemove nofiemap option
set, 1MB extent size.
duperem
-- Forwarded message --
From: Austin S. Hemmelgarn
Date: 2017-01-03 20:37 GMT+01:00
Subject: Re: [markfasheh/duperemove] Why blocksize is limit to 1MB?
To: Peter Becker
On 2017-01-03 14:21, Peter Becker wrote:
>
> All invocations are justified, but not relevant in (offline) back
I think I understand. The resulting key question is how I can improve
the performance of the extent_same ioctl.
I tested it with the following results:
environment:
2 files, called "file", each 100GB in size, duperemove nofiemap option
set, 1MB extent size.
duperemove output:
[0x1908590] (13889/72654) Try t
On 2017-01-03 13:16, Janos Toth F. wrote:
On Tue, Jan 3, 2017 at 5:01 PM, Austin S. Hemmelgarn wrote:
I agree on this point. I actually hadn't known that it didn't recurse into
sub-volumes, and that's a pretty significant caveat that should be
documented (and ideally fixed, defrag doesn't need
All invocations are justified, but not relevant in (offline) backup
and archive scenarios.
For example, you have multiple versions of append-only log files or
append-only DB files (each more than 100GB in size), like this:
> Snapshot_01_01_2017
-> file1.log .. 201 GB
> Snapshot_02_01_2017
-> file1
On Tue, Jan 3, 2017 at 5:01 PM, Austin S. Hemmelgarn wrote:
> I agree on this point. I actually hadn't known that it didn't recurse into
> sub-volumes, and that's a pretty significant caveat that should be
> documented (and ideally fixed, defrag doesn't need to worry about
> cross-subvolume stuff
Yes. /mnt/file.txt is a mandatory argument, and -h/-b/-f are optional
arguments. But the issue is that at least one of these optional arguments
is required. If we run:
btrfs-debugfs /mnt/file.txt
it doesn't produce any output at all. From time to time, I run 'btrfs-debugfs
/path/to/file' and wonder why no output r
On Tue, Jan 03, 2017 at 08:53:54AM +0800, Qu Wenruo wrote:
>
>
> At 01/03/2017 12:47 AM, David Sterba wrote:
> > On Fri, Dec 30, 2016 at 09:00:36AM +0800, Qu Wenruo wrote:
> >> Hi, please fetch the following branch for next branch:
> >> https://github.com/adam900710/linux.git fujitsu_for_next
> >
On Mon, Dec 19, 2016 at 07:09:06PM +0800, Anand Jain wrote:
> As of now, writes smaller than 64k for non-compressed extents and 16k
> for compressed extents inside EOF are considered candidates
> for auto defrag; put them together in one place.
>
> Signed-off-by: Anand Jain
Reviewed-by: David Ste
On Sat, Dec 03, 2016 at 03:39:54PM -0500, Zygo Blaxell wrote:
> I got tired of seeing "16.00EiB" whenever btrfs-progs encounters a
> negative size value, e.g. during resize:
>
> Unallocated:
> /dev/mapper/datamd18  16.00EiB
>
> This version is much more useful:
>
> Unallocated:
> /dev/map
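For the curious, the bogus 16.00EiB is just unsigned wraparound: a small
negative s64 reinterpreted as u64 lands just below 2^64 bytes = 16 EiB.
A tiny demonstration (not btrfs-progs code):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int64_t delta = -9LL * 1024 * 1024 * 1024;       /* e.g. 9GiB over-committed */
    uint64_t raw = (uint64_t)delta;                  /* 2^64 - 9GiB */
    printf("%.2fEiB\n", (double)raw / (1ULL << 60)); /* prints 16.00EiB */
    return 0;
}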
On Wed, Dec 21, 2016 at 03:42:07PM +0800, Anand Jain wrote:
> Both BTRFS_IOC_DEFRAG and BTRFS_IOC_DEFRAG_RANGE call the same
> function- btrfs_ioctl_defrag(), however BTRFS_IOC_DEFRAG does
> not support any argument, so check that and return not supported
> if provided. This has valid impact at the
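For reference, the difference between the two ioctls as seen from
userspace: BTRFS_IOC_DEFRAG takes no argument structure, while
BTRFS_IOC_DEFRAG_RANGE takes a struct btrfs_ioctl_defrag_range_args.
A minimal sketch of the range variant (path and threshold values are
illustrative):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/btrfs.h>

int main(void)
{
    int fd = open("/mnt/file", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct btrfs_ioctl_defrag_range_args range;
    memset(&range, 0, sizeof(range));
    range.start = 0;
    range.len = (__u64)-1;            /* whole file */
    range.extent_thresh = 256 * 1024; /* leave extents >= 256K alone */

    if (ioctl(fd, BTRFS_IOC_DEFRAG_RANGE, &range) < 0)
        perror("BTRFS_IOC_DEFRAG_RANGE");
    return 0;
}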
On Wed, Dec 21, 2016 at 03:42:08PM +0800, Anand Jain wrote:
> Since btrfs_defrag_leaves() does not support extent_root,
> remove its corresponding call. The user can use the file
> based defrag to defrag extents as of now.
>
> Signed-off-by: Anand Jain
Reviewed-by: David Sterba
Oh right, btrfs
On 2017-01-03 09:21, Janos Toth F. wrote:
So, in order to defrag "everything" in the filesystem (everything that
can be defragged or potentially needs it) I need to run:
1: a recursive defrag starting from the root subvolume (to pick up all
the files in all the possible subvolumes and directories)
2: a no
On Fri, Dec 16, 2016 at 03:17:33PM +0100, Philippe Loctaux wrote:
> cleaned up the file with checkpatch
^^^
Sorry, this is an example of what should not be done. Checkpatch can
detect lots of things that once were valid or tolerated but are not
today. There are mi
On Tue, Jan 03, 2017 at 08:58:44PM +0530, Lakshmipathi.G wrote:
> Sorry about the misleading subject line. This patch is for missing
> optional arguments.
>
> Before the patch:
> $ ./btrfs-debugfs /mnt/file.txt # Does nothing and silently fails.
>
> After the patch:
> $ ./btrfs-debugfs /mnt/fil
Sorry about the misleading subject line. This patch is for missing
optional arguments.
Before the patch:
$ ./btrfs-debugfs /mnt/file.txt # Does nothing and silently fails.
After the patch:
$ ./btrfs-debugfs /mnt/file.txt
No arguments passed. Type 'btrfs-debugfs -h' for usage.
Cheers,
Lak
This is what I see when no arguments are passed:
$ ./btrfs-debugfs
usage: btrfs-debugfs [-h] [-b] [-f] path [path ...]
btrfs-debugfs: error: too few arguments
And that's exactly the same output as with this patch applied. Am I missing
something?
So, in order to defrag "everything" in the filesystem (everything that
can be defragged or potentially needs it) I need to run:
1: a recursive defrag starting from the root subvolume (to pick up all
the files in all the possible subvolumes and directories)
2: a non-recursive defrag on the root subvolume + (
On 2016-12-30 15:28, Peter Becker wrote:
Hello, I have an 8 TB volume with multiple files of hundreds of GB each.
I am trying to dedupe this because the first hundred GB of many files are identical.
With a 128KB blocksize and the nofiemap and lookup-extents=no options, it will
take more than a week (only dedup
Signed-off-by: Lakshmipathi.G
---
btrfs-debugfs | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/btrfs-debugfs b/btrfs-debugfs
index dfb8853..70419fa 100755
--- a/btrfs-debugfs
+++ b/btrfs-debugfs
@@ -392,7 +392,9 @@ parser.add_argument('-f', '--file', action='store_const',
const=1, help='get fil
args
Thanks for the comments.
We are in the midst of making defrag better. For now, the -r option picks
up the files of the specified dir; there is no way to defrag a whole subvol
tree without scripting, something like this (untested sketch):
If /mnt is mounted with subvolid=5 (the default):
for s in $(btrfs subvolume list /mnt | awk '{print $NF}')
do
    btrfs filesystem defrag -r "/mnt/$s"
done
On 03.01.2017 00:02, Jeff Mahoney wrote:
> On 1/2/17 4:55 AM, Andrei Borzenkov wrote:
>> I am trying to understand what exactly is trimmed in the case of btrfs. Using
>> an installation in QEMU, I see that the host file size is about 9GB, the allocated
>> size in the guest approximately matches it, and the used space in the guest is 7.