>Should quota support generally be disabled during balances?
If this is true and quota impacts balance throughput, there should at least
be an alert message like "Running balance with quota enabled will affect
performance" or similar before starting.
Cheers,
Lakshmipathi.G
February 4, 2017 1:07 AM, "Goldwyn Rodrigues" wrote:
> On 02/03/2017 06:30 PM, Jorg Bornschein wrote:
>
>> February 3, 2017 11:26 PM, "Goldwyn Rodrigues" wrote:
>>
>> Hi,
>>
>> I'm currently running a balance (without any filters) on a 4 drives raid1
>>
On 02/03/2017 06:30 PM, Jorg Bornschein wrote:
> February 3, 2017 11:26 PM, "Goldwyn Rodrigues" wrote:
>
>>> Hi,
>>>
>>> I'm currently running a balance (without any filters) on a 4 drives raid1
>>> filesystem. The array
>>> contains 3 3TB drives and one 6TB drive; I'm
February 3, 2017 11:26 PM, "Goldwyn Rodrigues" wrote:
>> Hi,
>>
>> I'm currently running a balance (without any filters) on a 4 drives raid1
>> filesystem. The array
>> contains 3 3TB drives and one 6TB drive; I'm running the rebalance because
>> the 6TB drive recently
>>
On 02/03/2017 04:13 PM, j...@capsec.org wrote:
> Hi,
>
>
> I'm currently running a balance (without any filters) on a 4 drives raid1
> filesystem. The array contains 3 3TB drives and one 6TB drive; I'm running
> the rebalance because the 6TB drive recently replaced a 2TB drive.
>
>
> I
Hey Qu
On Fri, 2017-02-03 at 14:20 +0800, Qu Wenruo wrote:
> Great thanks for that!
You're welcome. :)
> I also added missing error message output for other places I found,
> and
> updated the branch, the name remains as "lowmem_tests"
>
> Please try it.
# btrfs check /dev/nbd0 ; echo $?
On Thu, 02 Feb 2017 13:01:03 +0100, Marc Joliet wrote:
> On Sunday 28 August 2016 15:29:08 Kai Krakow wrote:
> > Hello list!
>
> Hi list
>
> > It happened again. While using VirtualBox the following crash
> > happened, btrfs check found a lot of errors which it couldn't
> >
Hi,
I'm currently running a balance (without any filters) on a 4-drive raid1
filesystem. The array contains three 3TB drives and one 6TB drive; I'm running the
rebalance because the 6TB drive recently replaced a 2TB drive.
I know that balance is not supposed to be a fast operation, but this
This adds some extra documentation to the btrfs-receive manpage that
explains some of the security-related aspects of btrfs-receive. The
first part covers the fact that the subvolume being received is writable
until the receive finishes, and the second covers the current lack of
sanity checking
On 2017-02-03 14:17, Graham Cobb wrote:
On 03/02/17 16:01, Austin S. Hemmelgarn wrote:
Ironically, I ended up having time sooner than I thought. The message
doesn't appear to be in any of the archives yet, but the message ID is:
<20170203134858.75210-1-ahferro...@gmail.com>
Ah. I didn't
On 03/02/17 16:01, Austin S. Hemmelgarn wrote:
> Ironically, I ended up having time sooner than I thought. The message
> doesn't appear to be in any of the archives yet, but the message ID is:
> <20170203134858.75210-1-ahferro...@gmail.com>
Ah. I didn't notice it until after I had sent my
On Thu, Feb 02, 2017 at 06:34:06PM +0100, Jan Kara wrote:
> Allocate struct backing_dev_info separately instead of embedding it
> inside superblock. This unifies handling of bdi among users.
Looks good.
Reviewed-by: Liu Bo
Thanks,
-liubo
>
> CC: Chris Mason
On Fri, Feb 03, 2017 at 02:50:42PM +0100, Jan Kara wrote:
> On Thu 02-02-17 11:28:27, Liu Bo wrote:
> > Hi,
> >
> > On Thu, Feb 02, 2017 at 06:34:02PM +0100, Jan Kara wrote:
> > > Provide helper functions for setting up dynamically allocated
> > > backing_dev_info structures for filesystems and
From: Goldwyn Rodrigues
new_len is not used in delete_extent_records().
Signed-off-by: Goldwyn Rodrigues
---
cmds-check.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/cmds-check.c b/cmds-check.c
index 84e1d99..9fb85e4 100644
---
On 2017-02-03 10:44, Graham Cobb wrote:
On 03/02/17 12:44, Austin S. Hemmelgarn wrote:
I can look at making a patch for this, but it may be next week before I
have time (I'm not great at multi-tasking when it comes to software
development, and I'm in the middle of helping to fix a bug in
On 03/02/17 12:44, Austin S. Hemmelgarn wrote:
> I can look at making a patch for this, but it may be next week before I
> have time (I'm not great at multi-tasking when it comes to software
> development, and I'm in the middle of helping to fix a bug in Ansible
> right now).
That would be great,
On Mon 30-01-17 09:12:10, Michal Hocko wrote:
> On Fri 27-01-17 11:40:42, Theodore Ts'o wrote:
> > On Fri, Jan 27, 2017 at 10:37:35AM +0100, Michal Hocko wrote:
> > > If this ever turn out to be a problem and with the vmapped stacks we
> > > have good chances to get a proper stack traces on a
On 02/03/2017 03:36 PM, Timofey Titovets wrote:
>
> ➜ python-btrfs git:(master) ✗ tar -vxf python-btrfs-5-1-any.pkg.tar.xz
> .PKGINFO
> .BUILDINFO
> .MTREE
> usr/
> usr/lib/
> usr/lib/python3.6/
> usr/lib/python3.6/site-packages/
> usr/lib/python3.6/site-packages/btrfs-5-py3.6.egg-info/
>
2017-02-03 17:27 GMT+03:00 Hans van Kranenburg :
> On 02/03/2017 03:18 PM, Timofey Titovets wrote:
>> 2017-02-03 15:57 GMT+03:00 Hans van Kranenburg
>> :
>>> On 02/03/2017 12:25 PM, Timofey Titovets wrote:
Thank you for your
On 02/03/2017 03:18 PM, Timofey Titovets wrote:
> 2017-02-03 15:57 GMT+03:00 Hans van Kranenburg
> :
>> On 02/03/2017 12:25 PM, Timofey Titovets wrote:
>>> Thank you for your great work:
>>> JFYI Packaged in AUR:
>>>
2017-02-03 15:57 GMT+03:00 Hans van Kranenburg :
> On 02/03/2017 12:25 PM, Timofey Titovets wrote:
>> Thank you for your great work:
>> JFYI Packaged in AUR:
>> https://aur.archlinux.org/packages/python-btrfs-heatmap/
>
> Hey, thanks.
>
> Just wondering... what is
On Thu 02-02-17 11:28:27, Liu Bo wrote:
> Hi,
>
> On Thu, Feb 02, 2017 at 06:34:02PM +0100, Jan Kara wrote:
> > Provide helper functions for setting up dynamically allocated
> > backing_dev_info structures for filesystems and cleaning them up on
> > superblock destruction.
>
> Just one concern,
This adds some extra documentation to the btrfs-receive manpage that
explains some of the security-related aspects of btrfs-receive. The
first part covers the fact that the subvolume being received is writable
until the receive finishes, and the second covers the current lack of
sanity checking
On Fri, Feb 03, 2017 at 11:16:51AM +0100, Juergen 'Louis' Fluk wrote:
> Dear all,
>
> the RAID controller underneath our 32T BTRFS container had a sudden reset,
> and after rebooting BTRFS drops to readonly after some list of messages.
>
> I did recovery + btrfs-zero-log + recovery (using a LVM
On 02/03/2017 12:25 PM, Timofey Titovets wrote:
> Thank you for your great work:
> JFYI Packaged in AUR:
> https://aur.archlinux.org/packages/python-btrfs-heatmap/
Hey, thanks.
Just wondering... what is that btrfs.py you refer to in...
On 2017-02-03 04:14, Duncan wrote:
Graham Cobb posted on Thu, 02 Feb 2017 10:52:26 + as excerpted:
On 02/02/17 00:02, Duncan wrote:
If it's a workaround, then many of the Linux procedures we as admins
and users use every day are equally workarounds. Setting 007 perms on
a dir that
Thank you for your great work:
JFYI Packaged in AUR:
https://aur.archlinux.org/packages/python-btrfs-heatmap/
--
Have a nice day,
Timofey.
Hi.
I came across this thread:
https://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg55161.html
Exploring the possibility of adding test scripts around this area using dump-tree
& corrupt-block, but I am unable to figure out how to get the parity of a file or
find its location. The dump-tree output gave,
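The dump-tree output is cut off above. As a rough sketch of how the numbers
from the CHUNK_ITEM that dump-tree prints could be turned into a parity
location: divide the offset inside the chunk by (stripe length * number of
data stripes) to get the full stripe number, then apply the parity rotation.
The rotation rule and the sample numbers below are assumptions for
illustration only and should be verified against __btrfs_map_block() in
fs/btrfs/volumes.c:

/* Hedged sketch: locate the parity stripe for a logical address, assuming
 * simple round-robin parity rotation per full stripe (an assumption!).
 * chunk_offset, stripe_len and num_stripes come from the CHUNK_ITEM that
 * dump-tree prints; the numbers here are made up. */
#include <stdio.h>

int main(void)
{
        unsigned long long logical      = 142606336ULL; /* example value */
        unsigned long long chunk_offset = 137363456ULL; /* example value */
        unsigned long long stripe_len   = 65536ULL;     /* 64K, typical */
        int num_stripes = 3;                            /* RAID5 over 3 devices */

        unsigned long long data_stripes = num_stripes - 1;
        unsigned long long full_stripe_nr =
                (logical - chunk_offset) / (stripe_len * data_stripes);

        /* Assumed rotation: parity moves back one slot per full stripe. */
        int parity_index = (int)((num_stripes - 1) - full_stripe_nr % num_stripes);

        printf("full stripe %llu, parity on chunk stripe index %d\n",
               full_stripe_nr, parity_index);
        return 0;
}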
Dear all,
the RAID controller underneath our 32T BTRFS container had a sudden reset,
and after rebooting BTRFS drops to readonly after a series of messages.
I did recovery + btrfs-zero-log + recovery (using an LVM snapshot), yet
the error persists. From "transid verify failed" I understand that
Austin S. Hemmelgarn posted on Thu, 02 Feb 2017 07:49:50 -0500 as
excerpted:
> I think (although I'm not sure about it) that this:
> http://www.spinics.net/lists/linux-btrfs/msg47283.html is the first
> posting of the patch series.
Yes. That looks like it. Thanks.
--
Duncan - List replies
Graham Cobb posted on Thu, 02 Feb 2017 10:52:26 + as excerpted:
> On 02/02/17 00:02, Duncan wrote:
>> If it's a workaround, then many of the Linux procedures we as admins
>> and users use every day are equally workarounds. Setting 007 perms on
>> a dir that doesn't have anything immediately
When scrubbing a RAID5 filesystem which has recoverable data corruption (only one
data stripe is corrupted), scrub will sometimes report more csum errors
than expected, and sometimes even unrecoverable errors.
The problem can be easily reproduced by the following steps:
1) Create a btrfs with
When dev-replace and scrub are run at the same time, dev-replace can be
canceled by scrub. This happens quite commonly in btrfs/069.
The backtrace would be like:
general protection fault: [#1] SMP
Workqueue: btrfs-endio-raid56 btrfs_endio_raid56_helper [btrfs]
RIP: 0010:[] []
Unlike mirror-based profiles, RAID5/6 recovery needs to read out the
whole full stripe, and without proper protection this can easily cause
race conditions.
Introduce two new functions, lock_full_stripe() and unlock_full_stripe(),
for RAID5/6, which use an rb_tree of mutexes for full stripes, so
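The rest of that description is cut off here. As a minimal sketch of the idea
(struct and field names are illustrative, not necessarily those in the actual
patch): each locked full stripe gets an rb_tree entry keyed by its logical
start, carrying a mutex plus a reference count, so concurrent scrub/recovery
of the same full stripe serialize on that mutex.

/* Sketch only; names are illustrative, not taken from the actual patch. */
#include <linux/types.h>
#include <linux/rbtree.h>
#include <linux/mutex.h>

struct full_stripe_lock {
        struct rb_node node;    /* keyed by the full stripe's logical start */
        u64 logical;
        u64 refs;               /* how many holders reference this entry */
        struct mutex mutex;     /* serializes scrub/recovery on this stripe */
};

struct full_stripe_locks_tree {
        struct rb_root root;    /* all currently locked full stripes */
        struct mutex lock;      /* protects insertion, lookup and removal */
};

/* lock_full_stripe(): find or insert the entry covering a given logical
 * address, bump refs, then take entry->mutex so only one thread rebuilds
 * or scrubs that full stripe at a time.
 * unlock_full_stripe(): release entry->mutex, drop refs, and free the
 * entry once the last holder is gone. */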
In the following situation, scrub will calculate wrong parity and overwrite the
correct one:
RAID5 full stripe:
Before
|     Dev 1     |     Dev 2     |     Dev 3     |
| Data stripe 1 | Data stripe 2 | Parity Stripe |
------------------------------------------------- 0
| 0x (Bad) |
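The rest of the diagram is cut off above. As a self-contained illustration of
the failure mode being described (plain XOR parity, made-up values; this is not
the btrfs code):

/* RAID5 parity is the XOR of the data stripes.  If scrub recomputes parity
 * from a still-corrupted data stripe before repairing it, the rewritten
 * parity is wrong even though the on-disk parity was fine. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
        uint8_t d1_good = 0xaa, d2 = 0x55;
        uint8_t parity_on_disk = d1_good ^ d2;  /* correct parity */

        uint8_t d1_bad = 0x00;                  /* data stripe 1 corrupted */
        uint8_t recomputed = d1_bad ^ d2;       /* built from bad data */

        printf("correct parity   : 0x%02x\n", (unsigned)parity_on_disk);
        printf("recomputed parity: 0x%02x (would overwrite the good one)\n",
               (unsigned)recomputed);
        return 0;
}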
Before this patch, btrfs raid56 keeps the raid56 rbio around even after all its
IO is done.
This may save some time allocating rbios, but it can cause a deadly
use-after-free bug in the following case:
Original fs: 4-device RAID5
Process A | Process B
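The interleaving of the two processes is cut off in this excerpt. As a generic
illustration of this class of bug (not the btrfs code): a cache that keeps a
raw pointer without holding a reference turns into a use-after-free once the
last reference is dropped.

#include <stdio.h>
#include <stdlib.h>

struct rbio_like {
        int refs;
        int data;
};

static struct rbio_like *cached;        /* "kept for reuse", holds no reference */

static void put_rbio(struct rbio_like *r)
{
        if (--r->refs == 0)
                free(r);                /* cached still points at this memory */
}

int main(void)
{
        struct rbio_like *r = calloc(1, sizeof(*r));
        if (!r)
                return 1;
        r->refs = 1;
        cached = r;                     /* cache it to avoid reallocating later */
        put_rbio(r);                    /* all IO done, last reference dropped */
        /* Any later lookup that trusts `cached` would now dereference freed
         * memory -- the use-after-free described above. */
        printf("object freed, but the cache still holds its stale address\n");
        return 0;
}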
This patchset can be fetched from my github repo:
https://github.com/adam900710/linux.git raid56_fixes
It's based on v4.10-rc6 and none of the patches has been modified since first
appearing on the mailing list.
The patchset fixes the following bugs:
1) False alert or wrong csum error number when