On May 11 2016, Nikolaus Rath wrote:
> Hello,
>
> I recently ran btrfsck on one of my file systems, and got the following
> messages:
>
> checking extents
> checking free space cache
> checking fs roots
> root 5 inode 3149867 errors 400, nbytes wrong
> root 5 inode 3150237
On Jun 01 2016, Nikolaus Rath wrote:
> Hello,
>
> For one of my btrfs volumes, btrfsck reports a lot of the following
> warnings:
>
> [...]
> checking extents
> bad extent [138477568, 138510336), type mismatch with chunk
> bad extent [140091392, 140148736), type mismatch with
From: Jeff Mahoney
In order to provide an fsid for trace events, we'll need a btrfs_fs_info
pointer. The most lightweight way to do that for btrfs_work structures
is to associate it with the __btrfs_workqueue structure. Each queued
btrfs_work structure has a workqueue
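The preview above is cut off, but the visible part outlines the idea: hang one btrfs_fs_info pointer off the __btrfs_workqueue so that every queued btrfs_work can reach it through its queue instead of carrying its own copy. A minimal user-space sketch of that ownership chain (the struct and function names here are illustrative stand-ins, not the kernel's types):

```c
#include <stddef.h>

/* Per-filesystem context, standing in for btrfs_fs_info (sketch only). */
struct fs_context {
    const char *fsid;
};

/* Stand-in for __btrfs_workqueue: one fs pointer serves every queued
 * work item, which is what makes attaching it here the lightweight
 * option -- the cost is one pointer per queue, not per work item. */
struct workqueue {
    struct fs_context *fs;
};

/* Stand-in for btrfs_work: it reaches the fs info through its queue. */
struct work_item {
    struct workqueue *wq;
    void (*fn)(struct work_item *);
};

/* A trace point can now resolve the fsid from any queued work item. */
static const char *work_fsid(const struct work_item *w)
{
    return w->wq->fs->fsid;
}
```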
From: Jeff Mahoney
When using trace events to debug a problem, it's impossible to determine
which file system generated a particular event. This patch adds a
macro to prefix standard information to the head of a trace event.
The extent_state alloc/free events are all that's
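A hedged sketch of what such a prefix macro can look like in plain C (the buffer, struct, and macro names below are invented for illustration; the real patch works with the kernel's tracepoint machinery, not snprintf):

```c
#include <stdio.h>
#include <string.h>

/* One shared line buffer standing in for the trace ring buffer. */
static char tracebuf[256];

struct fs_info {
    char fsid[40];
};

/* Illustrative macro: prefix the filesystem's fsid onto every trace
 * message so events from different mounted filesystems can be told
 * apart when reading one interleaved trace log. */
#define trace_fs(fs, fmt, ...) \
    snprintf(tracebuf, sizeof(tracebuf), "%s: " fmt, \
             (fs)->fsid, __VA_ARGS__)
```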
Hi, David,
On 2016/06/09 22:13, David Sterba wrote:
On Thu, Jun 09, 2016 at 10:23:15AM +0900, Tsutomu Itoh wrote:
When open in btrfs_open_devices fails, only the following message is
displayed, so the user cannot tell why the open failed.
# btrfs check /dev/sdb8
From: Liu Bo
This decides whether a balance can reduce the number of data block
groups and, if it can, reports the objectid of the block group to
pass to '-dvrange'.
With this, you can run
'btrfs balance start -c mnt' or 'btrfs balance start --check-only mnt'
Chris Murphy posted on Thu, 09 Jun 2016 11:39:23 -0600 as excerpted:
> Yeah but somewhere there's a chunk that's likely affected by two losses,
> with a probability much higher than for conventional raid10 where such a
> loss is very binary: if the loss is a mirrored pair, the whole array and
>
On Mon, Jun 6, 2016 at 11:00 AM, Hugo Mills wrote:
> On Mon, Jun 06, 2016 at 09:43:19AM -0400, Andrew Armenia wrote:
>> On Mon, Jun 6, 2016 at 5:17 AM, David Sterba wrote:
>> > On Thu, Jun 02, 2016 at 09:50:15PM -0400, Andrew Armenia wrote:
>> >> This patch
On Wed, Jun 8, 2016 at 5:10 PM, Hans van Kranenburg wrote:
> Hi list,
>
>
> On 05/31/2016 03:36 AM, Qu Wenruo wrote:
>>
>>
>>
>> Hans van Kranenburg wrote on 2016/05/06 23:28 +0200:
>>>
>>> Hi,
>>>
>>> I've got a mostly inactive btrfs filesystem inside a virtual
On Thu, Jun 9, 2016 at 5:38 AM, Austin S. Hemmelgarn wrote:
> On 2016-06-09 02:16, Duncan wrote:
>>
>> Austin S. Hemmelgarn posted on Fri, 03 Jun 2016 10:21:12 -0400 as
>> excerpted:
>>
>>> As far as BTRFS raid10 mode in general, there are a few things that are
>>> important
On 09.06.2016, at 17:20, Duncan <1i5t5.dun...@cox.net> wrote:
> Are those the 8 TB SMR "archive" drives?
No, they are Western Digital Red drives.
Thanks for the detailed follow-up anyway. :)
Half a year ago, when I evaluated hard drives, in the 8 TB category there were
only the Hitachi 8 TB
From: Filipe Manana
> With commit 56f23fdbb600 ("Btrfs: fix file/data loss caused by fsync after
> rename and new inode") we got a simple fix for a functional issue when the
following sequence of actions is done:
at transaction N
create file A at directory D
at transaction N
From: Filipe Manana
When we attempt to read an inode from disk, we end up always returning an
-ESTALE error to the caller regardless of the actual failure reason, which
can be an out of memory problem (when allocating a path), some error found
when reading from the
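The pattern Filipe describes -- propagating the underlying error instead of collapsing every failure into -ESTALE -- can be sketched in plain C. All names here are hypothetical stand-ins for kernel internals:

```c
#include <errno.h>
#include <stddef.h>

/* Hypothetical stand-in for path allocation; NULL simulates OOM. */
static void *alloc_path(int simulate_oom)
{
    static int dummy;
    return simulate_oom ? NULL : &dummy;
}

/* Before the fix, every failure here collapsed into -ESTALE, hiding
 * e.g. an allocation failure from the caller. The fix is to return
 * the real error code instead. */
static int read_inode_from_disk(int simulate_oom)
{
    void *path = alloc_path(simulate_oom);
    if (!path)
        return -ENOMEM;   /* previously: return -ESTALE; */
    /* ... read the inode item via the path ... */
    return 0;
}
```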
On 09.06.2016 at 16:52, Duncan wrote:
> Fugou Nashi posted on Sun, 05 Jun 2016 10:12:31 +0900 as excerpted:
>
>> Hi,
>>
>> Do I need to worry about this?
>>
>> Thanks.
>>
>> Linux nakku 4.6.0-040600-generic #201605151930 SMP Sun May 15 23:32:59
>> UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
> There's
Hans van Kranenburg posted on Thu, 09 Jun 2016 01:10:46 +0200 as
excerpted:
> The next question is what files these extents belong to. To find out, I
> need to open up the extent items I get back and follow a backreference
> to an inode object. Might do that tomorrow, fun.
>
> To be honest, I
boli posted on Wed, 08 Jun 2016 20:55:13 +0200 as excerpted:
> Recently I had the idea to replace the 6 TB HDDs with 8 TB ones ("WD
> Red"), because their price is now acceptable.
Are those the 8 TB SMR "archive" drives?
I haven't been following the issue very closely, but be aware that there
On Thu, Jun 9, 2016 at 10:52 AM, Duncan <1i5t5.dun...@cox.net> wrote:
> Fugou Nashi posted on Sun, 05 Jun 2016 10:12:31 +0900 as excerpted:
>
>> Hi,
>>
>> Do I need to worry about this?
>>
>> Thanks.
>>
>> Linux nakku 4.6.0-040600-generic #201605151930 SMP Sun May 15 23:32:59
>> UTC 2016 x86_64
Fugou Nashi posted on Sun, 05 Jun 2016 10:12:31 +0900 as excerpted:
> Hi,
>
> Do I need to worry about this?
>
> Thanks.
>
> Linux nakku 4.6.0-040600-generic #201605151930 SMP Sun May 15 23:32:59
> UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
There's a patch for it that didn't quite make 4.6.0,
On 06/09/2016 03:07 PM, Austin S. Hemmelgarn wrote:
On 2016-06-09 08:34, Brendan Hide wrote:
Hey, all
I noticed this odd behaviour while migrating from a 1TB spindle to SSD
(in this case on a LUKS-encrypted 200GB partition) - and am curious if
this behaviour I've noted below is expected or
On Wed, Jun 8, 2016 at 5:36 AM, Jeff Mahoney wrote:
> The test for !trans->blocks_used in btrfs_abort_transaction is
> insufficient to determine whether it's safe to drop the transaction
> handle on the floor. btrfs_cow_block, informed by should_cow_block,
> can return blocks
On Mon, Jun 06, 2016 at 12:01:23PM -0700, Liu Bo wrote:
> Thanks to fuzz testing, we can pass an invalid bytenr to an extent buffer
> via alloc_extent_buffer(). An unaligned eb can have more pages than it
> should have, which ends up leaking the extent buffer or leaving corrupted
> content in the extent buffer.
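The validation this implies can be sketched as a standalone check (the function name is hypothetical; the real patch does this inside the alloc_extent_buffer() path):

```c
#include <errno.h>

/* Reject an extent-buffer start offset (bytenr) that is not aligned
 * to the sector size, before any pages are attached. Since sectorsize
 * is a power of two, a mask test is enough. */
static int check_eb_alignment(unsigned long long bytenr,
                              unsigned long sectorsize)
{
    if (bytenr & (sectorsize - 1))
        return -EINVAL;
    return 0;
}
```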
On Thu, Jun 09, 2016 at 10:23:15AM +0900, Tsutomu Itoh wrote:
> When open in btrfs_open_devices fails, only the following message is
> displayed, so the user cannot tell why the open failed.
>
> # btrfs check /dev/sdb8
> Couldn't open file system
>
> This patch adds
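The kind of improvement described -- reporting the underlying errno instead of a bare "Couldn't open file system" -- might look like this in plain C (the helper name is hypothetical, not the btrfs-progs function):

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Format an open-failure message that includes the reason, so the
 * user can distinguish e.g. a permission problem from a missing
 * device. */
static int format_open_failure(char *buf, size_t len,
                               const char *dev, int err)
{
    return snprintf(buf, len, "Couldn't open file system on %s: %s",
                    dev, strerror(err));
}
```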
On 2016-06-09 08:34, Brendan Hide wrote:
Hey, all
I noticed this odd behaviour while migrating from a 1TB spindle to SSD
(in this case on a LUKS-encrypted 200GB partition) - and am curious if
this behaviour I've noted below is expected or known. I figure it is a
bug. Depending on the situation,
Hey, all
I noticed this odd behaviour while migrating from a 1TB spindle to SSD
(in this case on a LUKS-encrypted 200GB partition) - and am curious if
this behaviour I've noted below is expected or known. I figure it is a
bug. Depending on the situation, it *could* be severe. In my case it
On 2016-06-09 02:16, Duncan wrote:
Austin S. Hemmelgarn posted on Fri, 03 Jun 2016 10:21:12 -0400 as
excerpted:
As far as BTRFS raid10 mode in general, there are a few things that are
important to remember about it:
1. It stores exactly two copies of everything, any extra disks just add
to the
On 06/09/2016 10:52 AM, Marc Haber wrote:
On Thu, Jun 09, 2016 at 01:10:46AM +0200, Hans van Kranenburg wrote:
So, instead of being the cause, apt-get update causing a new chunk to be
allocated might as well be the result of existing ones already filled up
with too many fragments.
The next
On Wed, Jun 08, 2016 at 08:53:00AM -0700, Mark Fasheh wrote:
> On Wed, Jun 08, 2016 at 01:13:03PM +0800, Lu Fengqi wrote:
> > Only in the case of a different root_id or a different object_id did
> > check_shared identify the extent as shared. However, if an extent was
> > referred to by different offsets of
On Thu, Jun 09, 2016 at 01:10:46AM +0200, Hans van Kranenburg wrote:
> So, instead of being the cause, apt-get update causing a new chunk to be
> allocated might as well be the result of existing ones already filled up
> with too many fragments.
>
> The next question is what files these extents
Hi,
Deepa Dinamani writes:
> drivers/usb/gadget/function/f_fs.c | 2 +-
> drivers/usb/gadget/legacy/inode.c | 2 +-
for drivers/usb/gadget:
Acked-by: Felipe Balbi
--
balbi
Austin S. Hemmelgarn posted on Fri, 03 Jun 2016 10:21:12 -0400 as
excerpted:
> As far as BTRFS raid10 mode in general, there are a few things that are
> important to remember about it:
> 1. It stores exactly two copies of everything, any extra disks just add
> to the stripe length on each copy.