On Tue, Oct 25, 2016 at 10:17:19PM -0600, Andreas Dilger wrote:
> On Oct 25, 2016, at 4:44 PM, Omar Sandoval wrote:
> >
> > On Tue, Oct 25, 2016 at 02:41:44PM -0400, Josef Bacik wrote:
> >> With anything that populates the inode/dentry cache with a lot of one time
> >> use
On Oct 25, 2016, at 4:44 PM, Omar Sandoval wrote:
>
> On Tue, Oct 25, 2016 at 02:41:44PM -0400, Josef Bacik wrote:
>> With anything that populates the inode/dentry cache with a lot of one time
>> use
>> inodes we can really put a lot of pressure on the system for things we
Unfortunately, low memory mode is right here.
If btrfs-image dumped the image correctly, your extent tree is really
screwed up.
And how badly is it screwed up?
It only contains the basic block group info.
It's almost empty, without any really useful EXTENT_ITEM/METADATA_ITEM.
You can check it by
On Tue, Oct 25, 2016 at 6:33 PM, Linus Torvalds
wrote:
>
> Completely untested. Maybe there's some reason we can't write to the
> whole thing like that?
That hack boots and seems to work for me, but doesn't show anything.
Dave, mind just trying that oneliner?
On Tue, Oct 25, 2016 at 5:27 PM, Dave Jones wrote:
>
> DaveC: Do these look like real problems, or is this more "looks like
> random memory corruption" ? It's been a while since I did some stress
> testing on XFS, so these might not be new..
Andy, do you think we could
At 10/25/2016 10:09 PM, David Sterba wrote:
On Thu, Oct 13, 2016 at 05:22:26PM +0800, Qu Wenruo wrote:
Kernel clear_cache mount option will only rebuild the free space cache if
the used space of that chunk has changed.
So it won't ensure that any corrupted free space cache gets cleared.
So add a new
Hi,
I'm currently trying to recover from a disk failure on a 6-drive Btrfs
RAID10 filesystem. A "mount -o degraded" auto-resumes a current
btrfs-replace from a missing dev to a new disk. This eventually triggers
a kernel panic (and the panic seemed faster on each new boot). I
managed to cancel
On Wed, Oct 26, 2016 at 09:01:13AM +1100, Dave Chinner wrote:
> On Tue, Oct 25, 2016 at 02:41:44PM -0400, Josef Bacik wrote:
> > With anything that populates the inode/dentry cache with a lot of one time
> > use
> > inodes we can really put a lot of pressure on the system for things we don't
> >
On Tue, Oct 25, 2016 at 02:41:44PM -0400, Josef Bacik wrote:
> With anything that populates the inode/dentry cache with a lot of one time use
> inodes we can really put a lot of pressure on the system for things we don't
> need to keep in cache. It takes two runs through the LRU to evict these
Hello,
On Tue, Oct 25, 2016 at 02:41:43PM -0400, Josef Bacik wrote:
> Now that we have metadata counters in the VM, we need to provide a way to kick
> writeback on dirty metadata. Introduce super_operations->write_metadata.
> This
> allows file systems to deal with writing back any dirty
Hello,
On Tue, Oct 25, 2016 at 02:41:42PM -0400, Josef Bacik wrote:
> Btrfs has no bounds except memory on the amount of dirty memory that we have
> in
> use for metadata. Historically we have used a special inode so we could take
> advantage of the balance_dirty_pages throttling that comes
On 10/25/2016 03:03 PM, Tejun Heo wrote:
Hello, Josef.
On Tue, Oct 25, 2016 at 02:41:41PM -0400, Josef Bacik wrote:
These are counters that constantly go up in order to do bandwidth calculations.
It isn't important what the units are in, as long as they are consistent between
the two of them,
Hello, Josef.
On Tue, Oct 25, 2016 at 02:41:41PM -0400, Josef Bacik wrote:
> These are counters that constantly go up in order to do bandwidth
> calculations.
> It isn't important what the units are in, as long as they are consistent
> between
> the two of them, so convert them to count bytes
On Tue, Oct 25, 2016 at 02:41:40PM -0400, Josef Bacik wrote:
> The only reason we pass in the mapping is to get the inode in order to see if
> writeback cgroups is enabled, and even then it only checks the bdi and a super
> block flag. balance_dirty_pages() doesn't even use the mapping. Since
>
With anything that populates the inode/dentry cache with a lot of one time use
inodes we can really put a lot of pressure on the system for things we don't
need to keep in cache. It takes two runs through the LRU to evict these one use
entries, and if you have a lot of memory you can end up with
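The two-pass eviction described above can be sketched as a toy model in userspace C (this is only an illustration of a referenced-bit LRU, not the kernel's actual list_lru code): an entry that was touched once carries a referenced bit, the first scan merely clears that bit, so a one-time-use entry is only freed on the second scan.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of a referenced-bit LRU: not the kernel's list_lru,
 * just an illustration of why eviction takes two passes. */
struct entry {
	bool referenced;	/* set when the entry was last touched */
	bool evicted;
};

/* One LRU scan over a single entry: returns true if it was evicted. */
static bool scan(struct entry *e)
{
	if (e->referenced) {
		e->referenced = false;	/* first pass: second chance only */
		return false;
	}
	e->evicted = true;		/* second pass: actually evict */
	return true;
}
```

A one-time-use inode starts out with the referenced bit set from its single use, so scan() has to run twice before the entry is gone, which is the pressure the cover letter is talking about.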
(Sending again as 5/5 got eaten and I used the wrong email address for Dave.)
(Dave, again I apologize; for some reason our email server hates you, so I
didn't get your previous responses again, and didn't notice until I was looking
at the patchwork history for my previous submissions, so I'll
These are counters that constantly go up in order to do bandwidth calculations.
It isn't important what the units are in, as long as they are consistent between
the two of them, so convert them to count bytes written/dirtied, and allow the
metadata accounting stuff to change the counters as well.
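The point about units can be shown with a minimal sketch (the function name and sampling scheme here are assumptions for illustration, not the patch itself): once both counters are in bytes, page-cache writeback and metadata writeback with a different block size can feed the same bandwidth estimate consistently.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: estimate bandwidth from two samples of a
 * monotonically increasing byte counter.  Counting bytes (rather than
 * pages) lets callers with different block sizes update the same
 * counter without skewing the result. */
static uint64_t bandwidth_bps(uint64_t bytes_then, uint64_t bytes_now,
			      uint64_t elapsed_secs)
{
	if (elapsed_secs == 0)
		return 0;
	return (bytes_now - bytes_then) / elapsed_secs;
}
```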
Now that we have metadata counters in the VM, we need to provide a way to kick
writeback on dirty metadata. Introduce super_operations->write_metadata. This
allows file systems to deal with writing back any dirty metadata we need based
on the writeback needs of the system. Since there is no
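A minimal sketch of what such a hook could look like; the struct and function names and the signature (a byte budget in, bytes written out) are assumptions for illustration, not the actual patch:

```c
#include <assert.h>

/* Hypothetical sketch of a per-superblock metadata writeback hook.
 * The field name write_metadata follows the cover letter; everything
 * else here is invented for the example. */
struct toy_super_operations {
	/* Ask the filesystem to write back up to nr_bytes of dirty
	 * metadata; returns how many bytes were actually written. */
	long (*write_metadata)(void *sb, long nr_bytes);
};

static long toyfs_write_metadata(void *sb, long nr_bytes)
{
	long *dirty = sb;	/* toy: sb is just a dirty-byte count */
	long written = nr_bytes < *dirty ? nr_bytes : *dirty;

	*dirty -= written;
	return written;
}
```

The idea is that the VM's writeback machinery only decides *how much* to write based on system-wide pressure; the filesystem decides *what* dirty metadata that budget is spent on.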
The only reason we pass in the mapping is to get the inode in order to see if
writeback cgroups is enabled, and even then it only checks the bdi and a super
block flag. balance_dirty_pages() doesn't even use the mapping. Since
balance_dirty_pages*() works on a bdi level, just pass in the bdi and
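The shape of that refactor can be sketched with toy types (names are illustrative): the old entry point only ever dereferenced the mapping to reach the bdi, so the new one takes the bdi directly and the mapping-based form becomes a trivial wrapper.

```c
#include <assert.h>

/* Illustrative stand-ins for the real structures. */
struct toy_bdi { int dirty_pages; };
struct toy_mapping { struct toy_bdi *bdi; };

/* New-style entry point: works on the bdi level directly. */
static void balance_dirty_pages_bdi(struct toy_bdi *bdi)
{
	if (bdi->dirty_pages > 0)
		bdi->dirty_pages--;	/* stand-in for throttling work */
}

/* Old-style entry point: all it ever used from the mapping was the bdi. */
static void balance_dirty_pages_mapping(struct toy_mapping *mapping)
{
	balance_dirty_pages_bdi(mapping->bdi);
}
```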
Btrfs has no bounds except memory on the amount of dirty memory that we have in
use for metadata. Historically we have used a special inode so we could take
advantage of the balance_dirty_pages throttling that comes with using pagecache.
However as we'd like to support different blocksizes it
On 2016-10-25 05:04, Qu Wenruo wrote:
At 10/25/2016 01:54 AM, none wrote:
So do you mean lowmem is also low CPU?
Not sure, but lowmem is high IO.
And by design, it won't cause a deadlock unless there is a looping tree
block. But that will be detected by check_tree_block().
So, it just
George Chlipala posted on Tue, 25 Oct 2016 09:30:34 -0500 as excerpted:
> We had a major failure in one drive in a RAID56 BTRFS volume and that
> drive is no longer accessible. How can I replace the drive without
> mounting the filesystem? I have tried using a degraded mount but I
> receive
On Mon, Oct 24, 2016 at 10:43:32AM +0800, Qu Wenruo wrote:
> Ebs and pointers are allocated, but if any of the allocations fails, we
> should free the already-allocated memory.
>
> Reported-by: David Sterba
> Resolves-Coverity-CID: 1296749
> Signed-off-by: Qu Wenruo
We had a major failure in one drive in a RAID56 BTRFS volume and that
drive is no longer accessible. How can I replace the drive without
mounting the filesystem? I have tried using a degraded mount but I
receive the following error messages
ROOT [prometheus:/root] # mount -v -t btrfs -o
Fixed and applied.
On Thu, Oct 13, 2016 at 05:22:26PM +0800, Qu Wenruo wrote:
> Kernel clear_cache mount option will only rebuild the free space cache if
> the used space of that chunk has changed.
>
> So it won't ensure that any corrupted free space cache gets cleared.
>
> So add a new option "--clear-space-cache v1|v2" to
On Tue, Oct 25, 2016 at 06:56:07PM +0800, Wang Xiaoguang wrote:
> Signed-off-by: Wang Xiaoguang
> ---
> V1: Just one small code cleanup; if you think it's not appropriate to
> make an individual patch for it, please ignore it :)
No, that's fine, cleanups are
On Fri, Oct 21, 2016 at 05:05:07PM +0800, Wang Xiaoguang wrote:
> This issue was found when I tried to delete a heavily reflinked file;
> when deleting such files, other transaction operations will not have a
> chance to make progress. For example, start_transaction() will be blocked
> in
On Tue, Oct 25, 2016 at 10:11:04AM +0800, Qu Wenruo wrote:
> Remove various BUG_ON in raid56 write routine, including:
> 1) Memory allocation error
>Old code allocates memory when it needs new memory in a loop, and
>catches the error using BUG_ON().
>New code allocates memory in a
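The pattern being described can be sketched in userspace C (this is an illustration of the technique, not the raid56 code itself): allocate everything up front, and on any failure unwind what was already allocated and return an error instead of calling BUG_ON().

```c
#include <assert.h>
#include <stdlib.h>

/* Sketch: fill nr pointers with allocations of the given size.
 * On any failure, free what was already allocated and return an
 * error instead of BUG_ON()-ing in the middle of the loop. */
static int alloc_pointers(void **ptrs, int nr, size_t size)
{
	int i;

	for (i = 0; i < nr; i++) {
		ptrs[i] = malloc(size);
		if (!ptrs[i])
			goto cleanup;	/* was: BUG_ON(!ptrs[i]) */
	}
	return 0;

cleanup:
	while (--i >= 0) {
		free(ptrs[i]);
		ptrs[i] = NULL;
	}
	return -1;			/* stand-in for -ENOMEM */
}
```

Moving the allocations out of the write path this way also means the error is reported to the caller while the state is still consistent, instead of crashing the box.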
Signed-off-by: Wang Xiaoguang
---
V1: Just one small code cleanup; if you think it's not appropriate to
make an individual patch for it, please ignore it :)
---
fs/btrfs/extent-tree.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git
hi,
On 10/24/2016 01:47 AM, Stefan Priebe - Profihost AG wrote:
Hello list,
just wanted to report that my ENOSPC errors are gone. Thanks to Wang for
his great patches.
But the space_info corruption still occurs.
On every umount I see:
[93022.166222] BTRFS: space_info 4 has 208952672256
hi,
On 10/19/2016 10:23 PM, David Sterba wrote:
On Mon, Oct 17, 2016 at 05:01:46PM +0800, Wang Xiaoguang wrote:
[..]
int btrfs_set_extent_delalloc(struct inode *inode, u64 start, u64 end,
- struct extent_state **cached_state);
+
This issue was found when I tried to delete a heavily reflinked file;
when deleting such files, other transaction operations will not have a
chance to make progress. For example, start_transaction() will be blocked
in wait_current_trans(root) for a long time; sometimes it even triggers
soft lockups, and
hi,
On 10/25/2016 03:00 AM, Liu Bo wrote:
On Fri, Oct 21, 2016 at 05:05:07PM +0800, Wang Xiaoguang wrote:
This issue was found when I tried to delete a heavily reflinked file;
when deleting such files, other transaction operations will not have a
chance to make progress. For example,