It's just a proof of concept, and I hope to see feedback and ideas about it.
---
During deduplication,
Btrfs produces extent and file fragmentation.
But this can be optimized by computing which inode's data is placed closest
to the beginning of the HDD.
This allows us to reach:
1. Performance boost on HDD (beginning of
Commit 8eb934591f8b ("btrfs: check unsupported filters in balance
arguments") adds a jump to exit label out_bargs in case the argument
check fails. At this point in addition to the bargs memory, the
memory for struct btrfs_balance_control has already been allocated.
Ownership of bctl is passed to b
copy-paste error corrected
On Wed, Oct 21, 2015 at 12:40 AM, Henk Slager wrote:
> I had a similar issue some time ago, around the time kernel 4.1.6 was
> just there.
> In case you don't want to wait for new disk or decide to just run the
> filesystem with 1 disk less or maybe later on replace 1 of
I had a similar issue some time ago, around the time kernel 4.1.6 had
just come out.
In case you don't want to wait for a new disk, or decide to just run the
filesystem with 1 disk less, or maybe later on replace 1 of the still
healthy disks with a double/bigger sized one and use a current/older
kernel+tool
Thanks for the follow-up Duncan, that makes sense. I assumed I was
doing something wrong.
I downloaded the devel branch of btrfs-progs and got it running
before I saw the need for a kernel patch and decided to wait.
For anyone following this later, I needed to use the following to get
the mis
Austin S Hemmelgarn posted on Tue, 20 Oct 2015 15:48:07 -0400 as
excerpted:
> FWIW, my assessment is based on some testing I did a while back (kernel
> 3.14 IIRC) using a VM. The (significantly summarized of course)
> procedure I used was:
> 1. Create a basic minimalistic Linux system in a VM (in
On 10/20/2015 15:59 -0400, Austin S Hemmelgarn wrote:
>> .
>> With a 32-bit checksum and a 4k block (the math is easier with
>> smaller numbers), that's 4128 bits, which means that a random
>> single bit error will have an approximately 0.24% chance of
>> occurring i
Kyle Manna posted on Tue, 20 Oct 2015 10:24:48 -0700 as excerpted:
> Hi all,
>
> I have a collection of three (was 4) 1-2TB devices with data and
> metadata in a RAID1 mirror. Last night I was struck by the Click of
> Death on an old Samsung drive.
>
> I removed the device from the system, rebo
On 2015-10-20 19:24, Kyle Manna wrote:
> Hi all,
[...]
> How do I remove the missing device? I tried the `btrfs device delete
> missing /mnt` but was greeted with "ERROR: missing is not a block
> device". A quick look at that btrfs-progs git repo shows that
> `stat("missing")` is called, which of
On 2015-10-20 15:20, Duncan wrote:
Austin S Hemmelgarn posted on Tue, 20 Oct 2015 09:59:17 -0400 as
excerpted:
It is worth clarifying also that:
a. While BTRFS will not return bad data in this case, it also won't
automatically repair the corruption.
Really? If so I think that's a bug in BTR
Hi Kyle,
On 10/20/2015 07:24 PM, Kyle Manna wrote:
I removed the device from the system, rebooted and mounted the volume
with `-o degraded` and the file system seems fine and usable. I'm
waiting on a replacement, drive but want to remove the old drive and
re-balance in the meantime.
This won'
On 2015-10-20 14:54, Duncan wrote:
But tho I'm a user not a dev and thus haven't actually checked the source
code itself, my belief here is with Russ and disagrees with Austin, as
based on what I've read both on the wiki and seen here previously, btrfs
runtime (that is, not during scrub) actuall
Austin S Hemmelgarn posted on Tue, 20 Oct 2015 09:59:17 -0400 as
excerpted:
>>> It is worth clarifying also that:
>>> a. While BTRFS will not return bad data in this case, it also won't
>>> automatically repair the corruption.
>>
>> Really? If so I think that's a bug in BTRFS. When mounted rw I
james harvey posted on Tue, 20 Oct 2015 00:16:15 -0400 as excerpted:
> Background -
>
> My fileserver had a "bad event" last week. Shut it down normally to add
> a new hard drive, and it would no longer post. Tried about 50 times,
> doing the typical everything non-essential unplugged, tryi
Hi all,
I have a collection of three (was 4) 1-2TB devices with data and
metadata in a RAID1 mirror. Last night I was struck by the Click of
Death on an old Samsung drive.
I removed the device from the system, rebooted and mounted the volume
with `-o degraded` and the file system seems fine and
A few more enhancements; I'd like to see all changes to the balance filters
merged into one major release. Thanks.
Changelog (v3):
* I've noticed that we can enhance the 'usage' filter the same way, so do it
to be consistent with the rest
* the flags for all the new filters have been renamed and
Similar to the 'limit' filter, we can enhance the 'usage' filter to
accept a range. The change is backward compatible, the range is applied
only in connection with the BTRFS_BALANCE_ARGS_USAGE_RANGE flag.
We don't have a use case yet, the current syntax has been sufficient. The
enhancement should p
From: Gabríel Arthúr Pétursson
Balance block groups which have the given number of stripes, defined by
a range min..max. This is useful to selectively rebalance only chunks
that do not span enough devices; this applies to RAID0/10/5/6.
Signed-off-by: Gabríel Arthúr Pétursson
[ renamed bargs members,
The 'limit' filter is underdesigned, it should have been a range for
[min,max], with some relaxed semantics when one of the bounds is
missing. Besides that, using a full u64 for a single value is a waste of
bytes.
Let's fix both by extending the use of the u64 bytes for the [min,max]
range. This c
2015-10-20 17:56 GMT+03:00, Filipe Manana :
> On Tue, Oct 20, 2015 at 2:29 PM, Timofey Titovets
> wrote:
>> For performance reason, leave data at the start of disk, is preferable
>> while deduping
>
> Have you made any performance tests to verify that?
Nope, I didn't run any performance test, at no
Janos Toth F. posted on Mon, 19 Oct 2015 10:39:06 +0200 as excerpted:
> I was in the middle of replacing the drives of my NAS one-by-one (I
> wished to move to bigger and faster storage at the end), so I used one
> more SATA drive + SATA cable than usual. Unfortunately, the extra cable
> turned ou
On Tue, Oct 20, 2015 at 2:29 PM, Timofey Titovets wrote:
> For performance reason, leave data at the start of disk, is preferable
> while deduping
Have you made any performance tests to verify that?
> It's might sense for the reasons:
> 1. Spinning rust - start of the disk is much faster
> 2. Bt
On 2015-10-20 09:15, Russell Coker wrote:
On Wed, 21 Oct 2015 12:00:59 AM Austin S Hemmelgarn wrote:
https://www.gnu.org/software/ddrescue/
At this stage I would use ddrescue or something similar to copy data from
the failing disk to a fresh disk, then do a BTRFS scrub to regenerate
the missing
reada is using -1 instead of the -ENOMEM defined macro to specify that
a buffer allocation failed. Since the error number is propagated, the
caller will get a -EPERM which is the wrong error condition.
Also, update the caller to return the exact value from
reada_add_block.
Smatch tool warning:
Hello,
This is a resend of this patch series. It was posted on September 24 [0]
These two patches fix instances where -1 is used to specify a buffer
allocation fail, instead of using -ENOMEM.
Patch 1/2 is already reviewed by David Sterba.
Best regards,
Luis
[0]: https://lkml.org/lkml/2015/9/24
check-integrity is using -1 instead of the -ENOMEM defined macro to
specify that a buffer allocation failed. Since the error number is
propagated, the caller will get a -EPERM which is the wrong error
condition.
Also, the smatch tool complains with the following warnings:
btrfsic_process_superbloc
For performance reasons, leaving data at the start of the disk is preferable
while deduping.
This might make sense for the following reasons:
1. Spinning rust - the start of the disk is much faster
2. Btrfs can deallocate empty data chunks from the end of the fs - i.e. it compacts the fs
Signed-off-by: Timofey Titovets
---
fs/btrf
On Wed, 21 Oct 2015 12:00:59 AM Austin S Hemmelgarn wrote:
> > https://www.gnu.org/software/ddrescue/
> >
> > At this stage I would use ddrescue or something similar to copy data from
> > the failing disk to a fresh disk, then do a BTRFS scrub to regenerate
> > the missing data.
> >
> > I wouldn'
On 2015-10-20 00:45, Russell Coker wrote:
On Tue, 20 Oct 2015 03:16:15 PM james harvey wrote:
sda appears to be going bad, with my low threshold of "going bad", and
will be replaced ASAP. It just developed 16 reallocated sectors, and
has 40 current pending sectors.
I'm currently running a "btr
On 2015-10-19 23:13, james harvey wrote:
Wanted to see if there's active development on N-Way (traditional) RAID-1.
By this, I mean that RAID-1 across "n" disks traditionally means "n"
copies of data, but btrfs currently implements RAID-1 as "2" copies of
data. So, unlike traditional RAID-1, lo
Btrfs has now changed to delete subvolumes/snapshots asynchronously,
which means that after umount, if we've already deleted 'ext2_saved',
rollback can still be completed, which should not be possible.
So this adds a regression test for it.
Signed-off-by: Liu Bo
---
tests/btrfs/012 | 12
1 fil
The current design of btrfs quota doesn't support "quota limit after
compression" after
commit e2d1f92399af ("btrfs: qgroup: do a reservation in a higher level.")
So remove it to make things clear.
Signed-off-by: Liu Bo
---
Documentation/btrfs-qgroup.asciidoc | 4
cmds-qgroup.c
On Mon, Oct 19, 2015 at 03:41:04PM +0200, David Sterba wrote:
> On Mon, Oct 19, 2015 at 07:37:52PM +0800, Eryu Guan wrote:
> > Coverity reports assigning value from "err" to "ret", but that stored
> > value is overwritten by check_extent_refs() before it can be used.
>
> If you fix a coverity issu
Hi, Anna Schumaker
This patchset compiles OK for x86 and x86_64 targets,
but fails on arm when compiling the btrfs dir, with the following error message:
:1304:2: warning: #warning syscall copy_file_range not implemented
[-Wcpp]
Reproduce:
merge commands:
cd /mnt/big1/linux
git fetch -q --all