[please cc linux-btrfs@vger.kernel.org for btrfs specific tests]
On Mon, Mar 13, 2017 at 04:37:16PM -0500, Goldwyn Rodrigues wrote:
> From: Goldwyn Rodrigues
>
> If we create and delete files within the qgroup limits, qg->reserved
> (allocations before commits) over-inflates
When using -t option to output trees not in root tree (chunk/root/log
root), then we output the tree twice.
Fix it
Signed-off-by: Qu Wenruo
---
v2:
None
---
cmds-inspect-dump-tree.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git
In btrfs-dump-tree, we output any existing log tree, however we don't
output the log root tree, which records all root items for log trees.
This makes it confusing for anyone who wants to know where the log tree
comes from.
Signed-off-by: Qu Wenruo
---
v2:
Check if
+struct device_checkpoint {
+	struct list_head list;
+	struct btrfs_device *device;
+	int stat_value_checkpoint;
+};
+
+static int add_device_checkpoint(struct list_head *checkpoint,
Could we have another structure instead of list_head to record device
checkpoints?
The list_head is
At 03/13/2017 09:26 PM, Stefan Priebe - Profihost AG wrote:
Am 13.03.2017 um 08:39 schrieb Qu Wenruo:
At 03/13/2017 03:26 PM, Stefan Priebe - Profihost AG wrote:
Hi Qu,
Am 13.03.2017 um 02:16 schrieb Qu Wenruo:
But wasn't this part of the code identical in V5? Why does it only
happen
At 03/13/2017 09:26 PM, Stefan Priebe - Profihost AG wrote:
Am 13.03.2017 um 08:39 schrieb Qu Wenruo:
At 03/13/2017 03:26 PM, Stefan Priebe - Profihost AG wrote:
Hi Qu,
Am 13.03.2017 um 02:16 schrieb Qu Wenruo:
At 03/13/2017 04:49 AM, Stefan Priebe - Profihost AG wrote:
Hi Qu,
while
At 03/14/2017 12:21 AM, Anand Jain wrote:
Thanks for the review..
On 03/13/2017 05:05 PM, Qu Wenruo wrote:
At 03/13/2017 03:42 PM, Anand Jain wrote:
The objective of this patch is to clean up barrier_all_devices()
so that the error checking is in a separate loop independent of
the
On Mon, Mar 13, 2017 at 10:58:29PM +0100, Kai Krakow wrote:
> Am Sat, 28 Jan 2017 15:50:38 -0500
> schrieb Matt McKinnon :
>
> > This same file system (which crashed again with the same errors) is
> > also giving this output during a metadata or data balance:
>
> This looks
Am Sat, 28 Jan 2017 15:50:38 -0500
schrieb Matt McKinnon :
> This same file system (which crashed again with the same errors) is
> also giving this output during a metadata or data balance:
This looks somewhat similar to the err=-17 that I am experiencing when
using
From: Edmund Nadolski
Define the SEQ_NONE macro to replace (u64)-1 in places where said
value triggers a special-case ref search behavior.
Signed-off-by: Edmund Nadolski
Reviewed-by: Jeff Mahoney
---
fs/btrfs/backref.c | 16
From: Edmund Nadolski
Replace hardcoded numeric values for __merge_refs 'mode' argument
with descriptive constants.
Signed-off-by: Edmund Nadolski
Reviewed-by: Jeff Mahoney
---
fs/btrfs/backref.c | 15 +--
1 file changed, 9
From: Edmund Nadolski
This series replaces several hard-coded values with descriptive
symbols.
Edmund Nadolski (2):
btrfs: provide enumeration for __merge_refs mode argument
btrfs: replace hardcoded value with SEQ_NONE macro
fs/btrfs/backref.c | 31
Same here. I have been using BTRFS for a 'scratch' disk since about 2014.
The disk has had quite some abuse and no issues yet.
I don't use compression, snapshots or any fancy features.
I have recently moved all of the root filesystem to BTRFS with 5x SSD
disks set up in RAID1 and everything is
On Sat, Mar 11, 2017 at 05:03:26PM -0600, Goldwyn Rodrigues wrote:
> - int reserved;
> + long reserved;
> - reserved = atomic_xchg(&root->qgroup_meta_rsv, 0);
> + reserved = atomic64_xchg(&root->qgroup_meta_rsv, 0);
atomic64_xchg returns 'long long' so u64 should be used.
--
To unsubscribe
On Sun, Jan 29, 2017 at 08:14:55PM +0530, Lakshmipathi.G wrote:
> Simple script to verify non-raid filesystem conversion.
The cli (command line interface) tests are supposed to cover the common
use cases from the point of view of option combinations etc., not really
verifying the result. It would be good
On Mon, Jan 30, 2017 at 06:07:01PM +0530, Lakshmipathi.G wrote:
> >
> > Yes, the owner is the number of the tree.
> >
> > DATA_RELOC_TREE is -9, but then unsigned 64 bits.
> >
> -9 + 2**64
> > 18446744073709551607L
> >
> > So the result is a number that's close to the max or 64 bits.
> >
> >
On Wed, Feb 15, 2017 at 02:44:05PM +0530, Lakshmipathi.G wrote:
> On Wed, Feb 15, 2017 at 09:36:03AM +0800, Qu Wenruo wrote:
> >
> >
> > >+ # Corrupt superblock checksum
> > >+dd if=/dev/zero of=$TEST_DEV seek=$superblock_offset bs=1 \
> > >+count=4 conv=notrunc &> /dev/null
>
On Fri, Feb 17, 2017 at 04:03:41AM +0530, Lakshmipathi.G wrote:
> On Thu, Feb 16, 2017 at 09:05:02AM +0800, Qu Wenruo wrote:
> >
> >
> > At 02/16/2017 04:50 AM, Lakshmipathi.G wrote:
> > >Signed-off-by: Lakshmipathi.G
> >
> > Looks good to me.
> >
> > Reviewed by:
On Fri, Feb 17, 2017 at 04:01:59AM +0530, Lakshmipathi.G wrote:
> Test script to recover damaged primary superblock using backup superblock.
>
> Signed-off-by: Lakshmipathi.G
Thanks for the test, a few comments below.
> ---
>
On Fri, Mar 03, 2017 at 12:13:45PM +0800, Qu Wenruo wrote:
> In btrfs-dump-tree, we output any existing log tree, however we don't
> output the log root tree, which records all root items for log trees.
>
> This makes it confusing for anyone who wants to know where the log tree
> comes from.
>
>
Thanks for the review..
On 03/13/2017 05:05 PM, Qu Wenruo wrote:
At 03/13/2017 03:42 PM, Anand Jain wrote:
The objective of this patch is to clean up barrier_all_devices()
so that the error checking is in a separate loop independent of
the loop which submits and waits on the device
On Sat, Mar 11, 2017 at 05:04:52PM +0100, Holger Hoffstätte wrote:
> I'm on Gentoo and wanted to update Docker to 17.03.0, which failed when it
> couldn't build the btrfs driver due to a missing header.
> This worked fine on another machine the other day, so I dug in and found that
> the only difference
Am 13.03.2017 um 08:39 schrieb Qu Wenruo:
>
>
> At 03/13/2017 03:26 PM, Stefan Priebe - Profihost AG wrote:
>> Hi Qu,
>>
>> Am 13.03.2017 um 02:16 schrieb Qu Wenruo:
>>>
>>> At 03/13/2017 04:49 AM, Stefan Priebe - Profihost AG wrote:
Hi Qu,
while V5 was running fine against the
On Mon, Mar 13, 2017 at 12:27 PM, Goldwyn Rodrigues wrote:
>
>
> On 03/13/2017 07:14 AM, Filipe Manana wrote:
>> On Sat, Mar 11, 2017 at 11:02 PM, Goldwyn Rodrigues wrote:
>>> From: Goldwyn Rodrigues
>>>
>>> We are facing the same problem
On 03/13/2017 07:14 AM, Filipe Manana wrote:
> On Sat, Mar 11, 2017 at 11:02 PM, Goldwyn Rodrigues wrote:
>> From: Goldwyn Rodrigues
>>
>> We are facing the same problem with EDQUOT which was experienced with
>> ENOSPC. Not sure if we require a full
On Sat, Mar 11, 2017 at 11:02 PM, Goldwyn Rodrigues wrote:
> From: Goldwyn Rodrigues
>
> We are facing the same problem with EDQUOT which was experienced with
> ENOSPC. Not sure if we require a full ticketing system such as ENOSPC, but
> here is a fix. Let me
On 2017-03-13 07:52, Juan Orti Alcaine wrote:
2017-03-13 12:29 GMT+01:00 Hérikz Nawarro :
Hello everyone,
Is it safe today to use btrfs for home storage? No raid, just secure
storage for some files and create snapshots from it.
In my humble opinion, yes. I'm running a
2017-03-13 12:29 GMT+01:00 Hérikz Nawarro :
> Hello everyone,
>
> Is it safe today to use btrfs for home storage? No raid, just secure
> storage for some files and create snapshots from it.
>
In my humble opinion, yes. I've been running a RAID1 btrfs at home for 5
years and I
Hello everyone,
Is it safe today to use btrfs for home storage? No raid, just secure
storage for some files and create snapshots from it.
Thanks.
> On Fri, Mar 03, 2017 at 10:55:09AM +0200, Elena Reshetova wrote:
> > Now when new refcount_t type and API are finally merged
> > (see include/linux/refcount.h), the following
> > patches convert various refcounters in the btrfs filesystem from atomic_t
> > to refcount_t. By doing this we
On Mon, Mar 13, 2017 at 10:22:04AM +0800, Qu Wenruo wrote:
>
>
> At 03/10/2017 08:23 PM, Hugo Mills wrote:
> > Does anyone recall seeing this oops before? Is it something that
> >can be fixed with a newer kernel? (I'm on a USB stick for this, so a
> >new kernel is a major undertaking, and I'd
At 03/13/2017 03:42 PM, Anand Jain wrote:
The objective of this patch is to clean up barrier_all_devices()
so that the error checking is in a separate loop independent of
the loop which submits and waits on the device flush requests.
The idea itself is quite good, and we do need it.
By
Qu,
patch 4/4 added a cleanup for barrier_all_devices() and introduced
a new function, check_barrier_error(), which will simplify integration
with the per-chunk device check.
[PATCH 4/4] btrfs: cleanup barrier_all_devices() to check dev stat
flush error
Thanks, Anand
On 03/09/2017 09:34
Patchset can be fetched from my github:
https://github.com/adam900710/linux.git qgroup_fixes
The new base is v4.11-rc1, with only minor conflicts, all easy to handle.
This pull request includes:
1) Fix for inode_cache mount option
Although no one really cares about the inode_cache mount option, it
[BUG]
For the following case, btrfs can underflow qgroup reserved space
at error path:
(Page size 4K, function name without "btrfs_" prefix)
Task A                 | Task B
-----------------------|-------
Buffered_write [0, 2K) |
Introduce a new parameter, struct extent_changeset, for
btrfs_qgroup_reserve_data() and its callers.
The extent_changeset is used in btrfs_qgroup_reserve_data() to record
which range it reserved in the current reservation, so it can free it in
the error path.
The reason we need to export it to callers is,
Quite a lot of qgroup corruption happens due to wrong timing of calling
btrfs_qgroup_prepare_account_extents().
Since the safest timing is calling it just before
btrfs_qgroup_account_extents(), there is no need to separate these 2
functions.
Merging them will make code cleaner and less bug prone.
Introduce the following trace points:
qgroup_update_reserve
qgroup_meta_reserve
These trace points are handy to trace qgroup reserve space related
problems.
Also export the btrfs_qgroup structure: since we now pass it directly to
trace points, that structure needs to be exported.
[BUG]
The easiest way to reproduce the bug is:
--
# mkfs.btrfs -f $dev -n 16K
# mount $dev $mnt -o inode_cache
# btrfs quota enable $mnt
# btrfs quota rescan -w $mnt
# btrfs qgroup show $mnt
qgroupid rfer excl
0/5 32.00KiB
For btrfs_qgroup_account_extent(), modify it to exit earlier for
non-fs extents.
This will also reduce the noise in trace_btrfs_qgroup_account_extent
event.
Signed-off-by: Qu Wenruo
---
fs/btrfs/qgroup.c | 41 +++--
1 file changed,
[BUG]
Under the following case, we can underflow qgroup reserved space.
Task A                           | Task B
---------------------------------|-------
Quota disabled                   |
Buffered write                   |
|- btrfs_check_data_free_space() |
Newly introduced qgroup reserved space trace points are normally nested
into several common qgroup operations.
However, some other trace points are not well placed to co-operate with
them, causing confusing output.
This patch re-arranges trace_btrfs_qgroup_release_data() and
btrfs_qgroup_release/free_data() only returns 0 or a negative error
number (ENOMEM is the only possible error).
This is normally good enough, but sometimes we need the accurate byte
count it freed/released.
Change it to return the number of bytes actually released/freed instead
of 0 for success.
And
At 03/13/2017 03:26 PM, Stefan Priebe - Profihost AG wrote:
Hi Qu,
Am 13.03.2017 um 02:16 schrieb Qu Wenruo:
At 03/13/2017 04:49 AM, Stefan Priebe - Profihost AG wrote:
Hi Qu,
while V5 was running fine against the openSUSE-42.2 kernel (based on
v4.4).
Thanks for the test.
V7 results
Now when counting the number of error devices we don't need to count
them separately during send and wait, because the device error counted
during send is more of a static check.
Also kindly note that as of now there is no code which would set
dev->bdev = NULL unless the device is missing. However I
The only error for which the write dev flush (send) can fail is
ENOMEM; as that is not a device-specific error but rather a
system-wide issue, we should stop further iterations and propagate
the -ENOMEM error to the caller.
Signed-off-by: Anand Jain
---
The objective of this patch is to clean up barrier_all_devices()
so that the error checking is in a separate loop independent of
the loop which submits and waits on the device flush requests.
By doing this it helps to further develop patches which would tune
the error-actions as needed.
Here
As of now, the REQ_PREFLUSH bio used to flush the dev cache completes
only through the btrfs_end_empty_barrier() callback, and that is where
the dev stat flush errors (BTRFS_DEV_STAT_FLUSH_ERRS) are accounted, so
remove the accounting from btrfs_end_bio().
Signed-off-by: Anand Jain
---
fs/btrfs/volumes.c | 3
These patches provide cleanup of the function barrier_all_devices().
Anand Jain (4):
btrfs: REQ_PREFLUSH does not use btrfs_end_bio() completion callback
btrfs: Communicate back ENOMEM when it occurs
btrfs: cleanup barrier_all_devices() unify dev error count
btrfs: cleanup
Hi Qu,
Am 13.03.2017 um 02:16 schrieb Qu Wenruo:
>
> At 03/13/2017 04:49 AM, Stefan Priebe - Profihost AG wrote:
>> Hi Qu,
>>
>> while V5 was running fine against the openSUSE-42.2 kernel (based on
>> v4.4).
>
> Thanks for the test.
>
>> V7 results in an OOPS for me:
>> BUG: unable to handle
At 03/13/2017 03:29 PM, Anand Jain wrote:
On 03/09/2017 09:34 AM, Qu Wenruo wrote:
Introduce a new function, btrfs_check_rw_degradable(), to check if all
chunks in btrfs are OK for degraded rw mount.
It provides the new basis for accurate btrfs mount/remount and even
runtime degraded mount
On 03/09/2017 09:34 AM, Qu Wenruo wrote:
Introduce a new function, btrfs_check_rw_degradable(), to check if all
chunks in btrfs are OK for degraded rw mount.
It provides the new basis for accurate btrfs mount/remount and even
runtime degraded mount check, rather than the old one-size-fits-all method.