Austin S. Hemmelgarn posted on Fri, 07 Apr 2017 07:41:22 -0400 as
excerpted:
> 2. Results from 'btrfs scrub'. This is somewhat tricky because scrub is
> either asynchronous or blocks for a _long_ time. The simplest option
> I've found is to fire off an asynchronous scrub to run during
>> -	trace_seq_printf(s, "#%-5u inner/outer(us): %4llu/%-5llu ts:%ld.%09ld",
>> +	trace_seq_printf(s, "#%-5u inner/outer(us): %4llu/%-5llu ts:%lld.%09ld",
>> 		field->seqnum,
>> 		field->duration,
>>
On Fri, 7 Apr 2017 17:57:00 -0700
Deepa Dinamani wrote:
> struct timespec is not y2038 safe on 32 bit machines
> and needs to be replaced by struct timespec64
> in order to represent times beyond year 2038 on such
> machines.
>
> Fix all the timestamp representation in
struct timespec is not y2038 safe on 32 bit machines
and needs to be replaced by struct timespec64
in order to represent times beyond year 2038 on such
machines.
Fix all the timestamp representation in struct trace_hwlat
and all the corresponding implementations.
Signed-off-by: Deepa Dinamani
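A minimal sketch of the conversion this patch describes (the field and
format names follow the hwlat tracer, but treat the details here as
illustrative rather than the exact hunk):

#include <linux/time64.h>

/* On 32 bit machines struct timespec carries a 32 bit tv_sec that
 * overflows in 2038; struct timespec64 carries 64 bit seconds on all
 * architectures. */
struct hwlat_entry {
	unsigned int		seqnum;
	u64			duration;
	struct timespec64	timestamp;	/* was: struct timespec */
};

/* tv_sec is now an s64, so the trace format switches from %ld to %lld */
trace_seq_printf(s, "#%-5u inner/outer(us): %4llu/%-5llu ts:%lld.%09ld",
		 field->seqnum, field->duration,
		 (long long)field->timestamp.tv_sec,
		 field->timestamp.tv_nsec);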
CURRENT_TIME is not y2038 safe.
Replace it with ktime_get_real_ts64().
Inode time formats are already 64 bits wide and
accommodate time64_t.
Signed-off-by: Deepa Dinamani
---
fs/ufs/ialloc.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git
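The pattern behind this change, sketched (assumed shape, not the exact
fs/ufs/ialloc.c hunk):

#include <linux/timekeeping.h>

/* CURRENT_TIME expanded to a struct timespec read whose tv_sec is a
 * 32 bit long on 32 bit machines. ktime_get_real_ts64() fills a struct
 * timespec64 instead, and because the on-disk inode time fields are
 * already 64 bits wide the full value fits. */
static void stamp_birthtime(__u64 *sec, __u32 *nsec)
{
	struct timespec64 ts;

	ktime_get_real_ts64(&ts);
	*sec  = ts.tv_sec;
	*nsec = ts.tv_nsec;
}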
CURRENT_TIME_SEC is not y2038 safe.
Replace use of CURRENT_TIME_SEC with ktime_get_real_seconds
in segment timestamps used by the GC algorithm, including the
segment mtime timestamps.
Signed-off-by: Deepa Dinamani
Reviewed-by: Arnd Bergmann
---
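A one-line sketch of the replacement described above (the helper name is
illustrative; the real hunks touch the f2fs segment structures):

#include <linux/timekeeping.h>

/* before: mtime = CURRENT_TIME_SEC.tv_sec;   a 32 bit long on i386
 * after:  mtime = ktime_get_real_seconds();  a y2038-safe time64_t
 * Where only whole seconds are stored, no timespec is needed at all. */
static inline u64 segment_mtime_now(void)
{
	return ktime_get_real_seconds();
}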
CURRENT_TIME macro is not y2038 safe on 32 bit systems.
The patch replaces all the uses of CURRENT_TIME by
current_time() for filesystem times, and ktime_get_*
functions for others.
struct timespec is also not y2038 safe.
Retain timespec for timestamp representation here as lustre
uses it
All uses of CURRENT_TIME_SEC and CURRENT_TIME macros have
been replaced by other time functions. These macros are
also not y2038 safe.
And, all their use cases can be fulfilled by y2038 safe
ktime_get_* variants.
Signed-off-by: Deepa Dinamani
Acked-by: John Stultz
CURRENT_TIME macro is not y2038 safe on 32 bit systems.
The patch replaces all the uses of CURRENT_TIME by
current_time().
This is also in preparation for the patch that transitions
vfs timestamps to use 64 bit time and hence makes them
y2038 safe. current_time() is also planned to be
btrfs_root_item maintains the ctime for root updates.
This is not part of vfs_inode.
Since current_time() uses struct inode* as an argument
as Linus suggested, this cannot be used to update root
times unless we modify the signature to use inode.
Since btrfs uses nanosecond time granularity, it
CURRENT_TIME_SEC is not y2038 safe. current_time() will
be transitioned to use 64 bit time along with vfs in a
separate patch.
There is no plan to transition CURRENT_TIME_SEC to use
y2038 safe time interfaces.
current_time() returns timestamps according to the
granularities set in the inode's
All uses of the current_fs_time() function have been
replaced by other time interfaces.
And, its use cases can be fulfilled by current_time()
or ktime_get_* variants.
Signed-off-by: Deepa Dinamani
Reviewed-by: Arnd Bergmann
---
include/linux/fs.h | 1 -
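A hedged sketch of the current_time() pattern that recurs through these
patches (the caller here is illustrative):

#include <linux/fs.h>

/* current_time() truncates the timestamp to the filesystem's time
 * granularity (sb->s_time_gran), reached through the inode as Linus
 * suggested; it replaces both the granularity-unaware CURRENT_TIME
 * macro and the superblock-based current_fs_time(). */
static void touch_timestamps(struct inode *inode)
{
	/* was: inode->i_mtime = inode->i_ctime = CURRENT_TIME; */
	inode->i_mtime = inode->i_ctime = current_time(inode);
	mark_inode_dirty(inode);
}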
CURRENT_TIME macro is not y2038 safe on 32 bit systems.
The patch replaces all the uses of CURRENT_TIME by
current_time() for filesystem times, and ktime_get_*
functions for authentication timestamps and timezone
calculations.
This is also in preparation for the patch that transitions
vfs
CURRENT_TIME is not y2038 safe.
The macro will be deleted and all the references to it
will be replaced by ktime_get_* APIs.
struct timespec is also not y2038 safe.
Retain timespec for timestamp representation here as ceph
uses it internally everywhere.
These references will be changed to use
struct timespec is not y2038 safe.
Audit timestamps are recorded in string format into
an audit buffer for a given context.
These mark the entry timestamps for the syscalls.
Use y2038 safe struct timespec64 to represent the times.
The log strings can handle this transition as strings can
hold up to
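A sketch of the idea (audit_log_format() is the real helper; the
surrounding function is an assumption, not the actual kernel hunk):

#include <linux/audit.h>
#include <linux/timekeeping.h>

/* The timestamp is rendered into the record as a decimal string, so
 * widening tv_sec to 64 bits only changes the format specifier from
 * %lu to %llu; the record layout itself is unchanged. */
static void log_entry_stamp(struct audit_buffer *ab, unsigned int serial)
{
	struct timespec64 t;

	ktime_get_real_ts64(&t);
	audit_log_format(ab, "audit(%llu.%03lu:%u): ",
			 (unsigned long long)t.tv_sec,
			 t.tv_nsec / 1000000, serial);
}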
The series contains the last unmerged uses of CURRENT_TIME,
CURRENT_TIME_SEC, and current_fs_time().
The series also deletes these APIs.
All the patches except [PATCH 9/12] and [PATCH 10/12] are resend patches.
These patches fix new instances of CURRENT_TIME.
cifs and ceph patches have been
[ ... ]
>>> I've got a mostly inactive btrfs filesystem inside a virtual
>>> machine somewhere that shows interesting behaviour: while no
>>> interesting disk activity is going on, btrfs keeps
>>> allocating new chunks, a GiB at a time.
[ ... ]
> Because the allocator keeps walking forward every
OK, I'm going to revive a year-old mail thread here with some
interesting new info:
On 05/31/2016 03:36 AM, Qu Wenruo wrote:
>
>
> Hans van Kranenburg wrote on 2016/05/06 23:28 +0200:
>> Hi,
>>
>> I've got a mostly inactive btrfs filesystem inside a virtual machine
>> somewhere that shows
Commit 2dabb3248453 ("Btrfs: Direct I/O read: Work on sectorsized blocks")
introduced this bug when iterating over bio pages in dio read's endio hook,
and it could end up with a segmentation fault in the dio reading task.
So the reason is 'if (nr_sectors--)', which makes the code assume that
there is one
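A standalone illustration of the off-by-one being described here (plain
userspace C, not the actual btrfs endio code):

#include <stdio.h>

/* 'if (nr_sectors--)' tests the value before decrementing, so when the
 * sector just processed was the last one (nr_sectors == 1) the branch
 * is still taken and the code walks one sector past the end. */
static int sectors_touched_buggy(int nr_sectors)
{
	int touched = 0;
	do {
		touched++;		/* "process" one sector */
	} while (nr_sectors--);		/* post-decrement: one extra pass */
	return touched;
}

static int sectors_touched_fixed(int nr_sectors)
{
	int touched = 0;
	do {
		touched++;
		nr_sectors--;
	} while (nr_sectors > 0);	/* stop once nothing remains */
	return touched;
}

int main(void)
{
	/* prints "buggy: 3, fixed: 2"; the third, nonexistent sector is
	 * what the dio reading task faulted on */
	printf("buggy: %d, fixed: %d\n",
	       sectors_touched_buggy(2), sectors_touched_fixed(2));
	return 0;
}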
Hi to all who answered,
thanks for your help, and please excuse my late answer. I didn't see
your answers because of a misconfigured GMail filter for that
list.
The filesystem contains backups of some other filesystems (it's on an
external storage device which is mirrored by RAID 1). So, if the
On Mon, Apr 03, 2017 at 10:21:08PM +0200, Christian Brauner wrote:
> Signed-off-by: Christian Brauner
> ---
> tests/misc-tests/018-recv-end-of-stream/test.sh | 12 ++--
> 1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git
On 2017-04-07 13:05, John Petrini wrote:
The use case actually is not Ceph, I was just drawing a comparison
between Ceph's object replication strategy vs BTRFS's chunk mirroring.
That's actually a really good comparison that I hadn't thought of
before. From what I can tell from my limited
The use case actually is not Ceph, I was just drawing a comparison
between Ceph's object replication strategy vs BTRFS's chunk mirroring.
I do find the conversation interesting, however, as I work with Ceph
quite a lot but have always gone with the default XFS filesystem
on OSDs.
On 2017-04-07 12:58, John Petrini wrote:
When you say "running BTRFS raid1 on top of LVM RAID0 volumes" do you
mean creating two LVM RAID-0 volumes and then putting BTRFS RAID1 on
the two resulting logical volumes?
Yes, although it doesn't have to be LVM, it could just as easily be MD
or even
When you say "running BTRFS raid1 on top of LVM RAID0 volumes" do you
mean creating two LVM RAID-0 volumes and then putting BTRFS RAID1 on
the two resulting logical volumes?
On 2017-04-07 12:28, Chris Murphy wrote:
On Fri, Apr 7, 2017 at 7:50 AM, Austin S. Hemmelgarn
wrote:
If you care about both performance and data safety, I would suggest using
BTRFS raid1 mode on top of LVM or MD RAID0 together with having good backups
and good
On 2017-04-07 12:04, Chris Murphy wrote:
On Fri, Apr 7, 2017 at 5:41 AM, Austin S. Hemmelgarn
wrote:
I'm rather fond of running BTRFS raid1 on top of LVM RAID0 volumes,
which while it provides no better data safety than BTRFS raid10 mode, gets
noticeably better
On Fri, Apr 7, 2017 at 7:50 AM, Austin S. Hemmelgarn
wrote:
> If you care about both performance and data safety, I would suggest using
> BTRFS raid1 mode on top of LVM or MD RAID0 together with having good backups
> and good monitoring. Statistically speaking,
On Fri, Apr 07, 2017 at 08:10:48AM -0700, Randy Dunlap wrote:
> On 04/07/17 08:08, Randy Dunlap wrote:
> > On 04/07/17 01:27, Stephen Rothwell wrote:
> >> Hi all,
> >>
> >> Changes since 20170406:
> >>
> >
> > on i386:
> >
> > ERROR: "__udivdi3" [fs/btrfs/btrfs.ko] undefined!
> >
> >
On Fri, Apr 7, 2017 at 5:41 AM, Austin S. Hemmelgarn
wrote:
> I'm rather fond of running BTRFS raid1 on top of LVM RAID0 volumes,
> which while it provides no better data safety than BTRFS raid10 mode, gets
> noticeably better performance.
This does in fact have better
On 4/7/17 10:42 AM, Darrick J. Wong wrote:
> On Fri, Apr 07, 2017 at 01:02:58PM +0800, Eryu Guan wrote:
>> On Thu, Apr 06, 2017 at 11:28:01AM -0500, Eric Sandeen wrote:
>>> On 4/6/17 11:26 AM, Theodore Ts'o wrote:
On Wed, Apr 05, 2017 at 10:35:26AM +0800, Eryu Guan wrote:
>
> Test
On Fri, Apr 07, 2017 at 01:02:58PM +0800, Eryu Guan wrote:
> On Thu, Apr 06, 2017 at 11:28:01AM -0500, Eric Sandeen wrote:
> > On 4/6/17 11:26 AM, Theodore Ts'o wrote:
> > > On Wed, Apr 05, 2017 at 10:35:26AM +0800, Eryu Guan wrote:
> > >>
> > >> Test fails with ext3/2 when driving with ext4
On 04/07/17 08:08, Randy Dunlap wrote:
> On 04/07/17 01:27, Stephen Rothwell wrote:
>> Hi all,
>>
>> Changes since 20170406:
>>
>
> on i386:
>
> ERROR: "__udivdi3" [fs/btrfs/btrfs.ko] undefined!
>
> Reported-by: Randy Dunlap
>
or when built-in:
fs/built-in.o: In
On Fri, Apr 07, 2017 at 06:34:28AM -0500, Goldwyn Rodrigues wrote:
>
>
> On 04/06/2017 05:54 PM, Darrick J. Wong wrote:
> > On Mon, Apr 03, 2017 at 11:52:11PM -0700, Christoph Hellwig wrote:
> >>> + if (unaligned_io) {
> >>> + /* If we are going to wait for other DIO to finish, bail */
>
On 04/07/17 01:27, Stephen Rothwell wrote:
> Hi all,
>
> Changes since 20170406:
>
on i386:
ERROR: "__udivdi3" [fs/btrfs/btrfs.ko] undefined!
Reported-by: Randy Dunlap
--
~Randy
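For context, a hedged sketch of what an "__udivdi3 undefined" link error
on i386 generally means (the offending btrfs expression itself isn't
shown in these snippets, and the function below is made up for
illustration):

#include <linux/math64.h>

/* On 32 bit targets gcc compiles a plain 64 bit division into a call to
 * libgcc's __udivdi3, which the kernel does not link against; 64 bit
 * divides in kernel code go through do_div()/div_u64() instead. */
static u64 avg_bytes_per_item(u64 total_bytes, u32 nr_items)
{
	/* return total_bytes / nr_items;  links on 64 bit, but emits a
	 * __udivdi3 call on i386 */
	return div_u64(total_bytes, nr_items);
}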
Hello btrfs-list,
today a strange behaviour appeared during the btrfs balance process.
I started a btrfs balance operation on the /home subvolume
that contains, as children, all the subvolumes for the home directories
of the users, each subvolume with its own quota.
A short time after the start
On 2017-04-07 09:28, John Petrini wrote:
Hi Austin,
Thanks for taking the time to provide all of this great information!
Glad I could help.
You've got me curious about RAID1. If I were to convert the array to
RAID1 could it then sustain a multi drive failure? Or in other words
do I actually
Hi Austin,
Thanks for taking the time to provide all of this great information!
You've got me curious about RAID1. If I were to convert the array to
RAID1 could it then sustain a multi drive failure? Or in other words
do I actually end up with mirrored pairs or can a chunk still be
mirrored to
On Mon, Mar 13, 2017 at 03:52:16PM +0800, Qu Wenruo wrote:
> + /*
> + * TODO: To also modify reserved->ranges_reserved to reflect
No new TODOs in the code please.
> + * the modification.
> + *
> + * However as long as we free qgroup
On Mon, Mar 13, 2017 at 03:52:15PM +0800, Qu Wenruo wrote:
> @@ -3355,12 +3355,14 @@ static int cache_save_setup(struct
> btrfs_block_group_cache *block_group,
> struct btrfs_fs_info *fs_info = block_group->fs_info;
> struct btrfs_root *root = fs_info->tree_root;
> struct inode
On 2017-04-06 23:25, John Petrini wrote:
Interesting. That's the first time I'm hearing this. If that's the
case, I feel like it's a stretch to call it RAID10 at all. It sounds a
lot more like basic replication similar to Ceph, only Ceph understands
failure domains and therefore can be configured
On 04/06/2017 05:54 PM, Darrick J. Wong wrote:
> On Mon, Apr 03, 2017 at 11:52:11PM -0700, Christoph Hellwig wrote:
>>> + if (unaligned_io) {
>>> + /* If we are going to wait for other DIO to finish, bail */
>>> + if ((iocb->ki_flags & IOCB_NOWAIT) &&
>>> +
On Mon, Mar 13, 2017 at 03:52:10PM +0800, Qu Wenruo wrote:
> [BUG]
> The easiest way to reproduce the bug is:
> --
> # mkfs.btrfs -f $dev -n 16K
> # mount $dev $mnt -o inode_cache
> # btrfs quota enable $mnt
> # btrfs quota rescan -w $mnt
> # btrfs qgroup show $mnt
> qgroupid rfer
On Fri, Apr 7, 2017 at 1:28 AM, Qu Wenruo wrote:
>
>
> At 04/07/2017 12:02 AM, Filipe Manana wrote:
>>
>> On Thu, Apr 6, 2017 at 2:28 AM, Qu Wenruo wrote:
>>>
>>> Btrfs allows inline file extent if and only if
>>> 1) It's at offset 0
>>> 2) It's
On Fri, Apr 7, 2017 at 2:07 AM, Qu Wenruo wrote:
>
>
> At 04/07/2017 12:07 AM, Filipe Manana wrote:
>>
>> On Wed, Mar 22, 2017 at 2:37 AM, Qu Wenruo
>> wrote:
>>>
>>>
>>>
>>> At 03/09/2017 10:05 AM, Zygo Blaxell wrote:
On Wed, Mar
On Fri, Apr 7, 2017 at 9:51 AM, Eryu Guan wrote:
> On Tue, Apr 04, 2017 at 07:34:29AM +0100, fdman...@kernel.org wrote:
>> From: Filipe Manana
>>
>> For example NFS 4.2 supports fallocate but it does not support its
>> KEEP_SIZE flag, so we want to skip tests
On Tue, Apr 04, 2017 at 07:34:29AM +0100, fdman...@kernel.org wrote:
> From: Filipe Manana
>
> For example NFS 4.2 supports fallocate but it does not support its
> KEEP_SIZE flag, so we want to skip tests that use fallocate with that
> flag on filesystems that don't support