On Thu, Apr 06, 2017 at 11:28:01AM -0500, Eric Sandeen wrote:
> On 4/6/17 11:26 AM, Theodore Ts'o wrote:
> > On Wed, Apr 05, 2017 at 10:35:26AM +0800, Eryu Guan wrote:
> >>
> >> Test fails with ext3/2 when driven by the ext4 driver, fiemap changed
> >> after umount/mount cycle, then changed back to
Andrei Borzenkov posted on Sun, 02 Apr 2017 09:30:46 +0300 as excerpted:
> On 02.04.2017 03:59, Duncan wrote:
>>
>> 4) In fact, since an in-place convert is almost certainly going to take
>> more time than a blow-away and restore from backup,
>
> This caught my eye. Why? In-place convert just
Interesting. That's the first time I'm hearing this. If that's the
case, I feel like it's a stretch to call it RAID10 at all. It sounds a
lot more like basic replication, similar to Ceph, only Ceph understands
failure domains and can therefore be configured to handle device
failure (albeit at a
Roman Mamedov posted on Mon, 03 Apr 2017 13:41:07 +0500 as excerpted:
> On Mon, 3 Apr 2017 11:30:44 +0300 Marat Khalili wrote:
>
>> You may want to look here: https://www.synology.com/en-global/dsm/Btrfs
>> . Somebody forgot to tell Synology, which already supports btrfs in all
>>
[BUG]
Cycling the mount of a btrfs filesystem can cause fiemap to return different results.
Like:
# mount /dev/vdb5 /mnt/btrfs
# dd if=/dev/zero bs=16K count=4 oflag=dsync of=/mnt/btrfs/file
# xfs_io -c "fiemap -v" /mnt/btrfs/file
/mnt/test/file:
EXT: FILE-OFFSET BLOCK-RANGE TOTAL FLAGS
0: [0..127]:
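As an aside, comparing the extent table printed by `xfs_io -c "fiemap -v"` before and after a umount/mount cycle can be automated; here is a minimal Python sketch (the helper names are hypothetical, not part of this thread, and the parser assumes xfs_io's usual `fiemap -v` column layout):

```python
import re

# Roughly matches one extent row of `xfs_io -c "fiemap -v"` output, e.g.
#    0: [0..127]:        256..383           128   0x1
# Hole rows print "hole" in the block-range column and omit FLAGS,
# so the trailing flags group is optional.
EXTENT_RE = re.compile(
    r"^\s*(\d+):\s+\[(\d+)\.\.(\d+)\]:\s+(\S+)\s+(\d+)(?:\s+(\S+))?"
)

def parse_fiemap(output):
    """Return a list of (index, file_start, file_end, flags) per extent row."""
    extents = []
    for line in output.splitlines():
        m = EXTENT_RE.match(line)
        if m:
            extents.append(
                (int(m.group(1)), int(m.group(2)), int(m.group(3)), m.group(6))
            )
    return extents

def layout_changed(before, after):
    """True if the file-offset ranges or flags differ across the remount."""
    return parse_fiemap(before) != parse_fiemap(after)
```

Feeding it the captured output from before and after the remount makes the kind of difference reported above easy to assert in a test.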
Hi Chris,
I've followed your advice and converted the system chunk to raid10. I
hadn't noticed it was raid0 and it's scary to think that I've been
running this array for three months like that. Thank you for saving me
a lot of pain down the road!
Also thank you for the clarification on the
On Thu, Apr 6, 2017 at 6:47 PM, John Petrini wrote:
> sudo btrfs fi df /mnt/storage-array/
> Data, RAID10: total=10.72TiB, used=10.72TiB
> System, RAID0: total=128.00MiB, used=944.00KiB
> Metadata, RAID10: total=14.00GiB, used=12.63GiB
> GlobalReserve, single:
Okay so I came across this bug report:
https://bugzilla.redhat.com/show_bug.cgi?id=1243986
It looks like I'm just misinterpreting the output of btrfs fi df. What
should I be looking at to determine the actual free space? Is Free
(estimated): 13.83TiB (min: 13.83TiB) the proper metric?
Simply
At 04/07/2017 12:07 AM, Filipe Manana wrote:
On Wed, Mar 22, 2017 at 2:37 AM, Qu Wenruo wrote:
At 03/09/2017 10:05 AM, Zygo Blaxell wrote:
On Wed, Mar 08, 2017 at 10:27:33AM +, Filipe Manana wrote:
On Wed, Mar 8, 2017 at 3:18 AM, Zygo Blaxell
At 04/07/2017 12:28 AM, Eric Sandeen wrote:
On 4/6/17 11:26 AM, Theodore Ts'o wrote:
On Wed, Apr 05, 2017 at 10:35:26AM +0800, Eryu Guan wrote:
Test fails with ext3/2 when driven by the ext4 driver, fiemap changed
after umount/mount cycle, then changed back to original result after
sleeping
Hello List,
I have a volume that appears to be full despite having multiple
Terabytes of free space available. Just yesterday I ran a re-balance
but it didn't change anything. I've just added two more disks to the
array and am currently in the process of another re-balance but the
available space
At 04/07/2017 12:02 AM, Filipe Manana wrote:
On Thu, Apr 6, 2017 at 2:28 AM, Qu Wenruo wrote:
Btrfs allows an inline file extent if and only if:
1) It's at offset 0
2) It's smaller than min(max_inline, page_size)
Although we don't specify if the size is before
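The two conditions quoted above can be modeled as a toy predicate (a sketch only; the `max_inline` and page-size values below are assumed defaults for illustration, not taken from the thread):

```python
PAGE_SIZE = 4096    # assumed 4KiB page size
MAX_INLINE = 2048   # hypothetical value of the max_inline mount option

def can_inline(file_offset, size, max_inline=MAX_INLINE, page_size=PAGE_SIZE):
    """Model of the two conditions above: an inline file extent must
    start at file offset 0 and be smaller than min(max_inline, page_size)."""
    return file_offset == 0 and size < min(max_inline, page_size)
```

Note that with these assumed values the page size caps the limit whenever `max_inline` is raised above it.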
On 05/04/17 08:04, Marat Khalili wrote:
> On 04/04/17 20:36, Peter Grandi wrote:
>> SATA works for external use, eSATA works well, but what really
>> matters is the chipset of the adapter card.
> eSATA might be sound electrically, but mechanically it is awful. Try to
> run it for months in a
On Mon, Apr 03, 2017 at 11:52:11PM -0700, Christoph Hellwig wrote:
> > + if (unaligned_io) {
> > + /* If we are going to wait for other DIO to finish, bail */
> > + if ((iocb->ki_flags & IOCB_NOWAIT) &&
> > +       atomic_read(&inode->i_dio_count))
> > +
On Thu, Apr 06, 2017 at 04:21:50PM +0200, David Sterba wrote:
> On Wed, Apr 05, 2017 at 02:04:19PM -0700, Liu Bo wrote:
> > When doing directIO repair, we have this oops
> >
> > [ 1458.532816] general protection fault: [#1] SMP
> > ...
> > [ 1458.536291] Workqueue: btrfs-endio-repair
On 4/6/17 11:26 AM, Theodore Ts'o wrote:
> On Wed, Apr 05, 2017 at 10:35:26AM +0800, Eryu Guan wrote:
>>
>> Test fails with ext3/2 when driven by the ext4 driver, fiemap changed
>> after umount/mount cycle, then changed back to original result after
>> sleeping some time. An ext4 bug? (cc'ed
On Wed, Apr 05, 2017 at 10:35:26AM +0800, Eryu Guan wrote:
>
> Test fails with ext3/2 when driven by the ext4 driver, fiemap changed
> after umount/mount cycle, then changed back to original result after
> sleeping some time. An ext4 bug? (cc'ed linux-ext4 list.)
I haven't had time to look at
From: Filipe Manana
Normally we don't have inline extents followed by regular extents, but
there's currently at least one harmless case where this happens. For
example, when the page size is 4Kb and compression is enabled:
$ mkfs.btrfs -f /dev/sdb
$ mount -o compress
On Wed, Mar 22, 2017 at 2:37 AM, Qu Wenruo wrote:
>
>
> At 03/09/2017 10:05 AM, Zygo Blaxell wrote:
>>
>> On Wed, Mar 08, 2017 at 10:27:33AM +, Filipe Manana wrote:
>>>
>>> On Wed, Mar 8, 2017 at 3:18 AM, Zygo Blaxell
>>> wrote:
On Thu, Apr 6, 2017 at 2:28 AM, Qu Wenruo wrote:
> Btrfs allows an inline file extent if and only if:
> 1) It's at offset 0
> 2) It's smaller than min(max_inline, page_size)
> Although we don't specify if the size is before compression or after
> compression.
> At
On Thu, Apr 06, 2017 at 05:05:16PM +0800, Qu Wenruo wrote:
> [BUG]
> Cycling the mount of a btrfs filesystem can cause fiemap to return different results.
> Like:
> # mount /dev/vdb5 /mnt/btrfs
> # dd if=/dev/zero bs=16K count=4 oflag=dsync of=/mnt/btrfs/file
> # xfs_io -c "fiemap -v" /mnt/btrfs/file
>
On Thu, Apr 06, 2017 at 03:20:43PM +0100, Filipe Manana wrote:
> On Thu, Apr 6, 2017 at 3:18 PM, Eryu Guan wrote:
> > On Tue, Apr 04, 2017 at 07:34:29AM +0100, fdman...@kernel.org wrote:
> >> From: Filipe Manana
> >>
> >> For example NFS 4.2 supports
On Wed, Apr 05, 2017 at 02:04:19PM -0700, Liu Bo wrote:
> When doing directIO repair, we have this oops
>
> [ 1458.532816] general protection fault: [#1] SMP
> ...
> [ 1458.536291] Workqueue: btrfs-endio-repair btrfs_endio_repair_helper [btrfs]
> [ 1458.536893] task: 88082a42d100
On Thu, Apr 6, 2017 at 3:18 PM, Eryu Guan wrote:
> On Tue, Apr 04, 2017 at 07:34:29AM +0100, fdman...@kernel.org wrote:
>> From: Filipe Manana
>>
>> For example NFS 4.2 supports fallocate but it does not support its
>> KEEP_SIZE flag, so we want to skip tests
On Tue, Apr 04, 2017 at 07:34:29AM +0100, fdman...@kernel.org wrote:
> From: Filipe Manana
>
> For example NFS 4.2 supports fallocate but it does not support its
> KEEP_SIZE flag, so we want to skip tests that use fallocate with that
> flag on filesystems that don't support
On Thu 06-04-17 11:12:02, NeilBrown wrote:
> On Wed, Apr 05 2017, Jan Kara wrote:
> >> If you want to ensure read-only files can remain cached over a crash,
> >> then you would have to mark a file in some way on stable storage
> >> *before* allowing any change.
> >> e.g. you could use the lsb.