while
> > adjusting balloon seems to be causing a lockdep warning (see attached)
> > when running gce-xfstests on a Google Compute Engine e2 VM. I was not
> > able to trigger it using kvm-xfstests, but the following command:
> > "gce-xfstests -C 10 ext4/4k generic/476
On Mon, Jan 08, 2024 at 04:50:15PM -0500, Theodore Ts'o wrote:
> Hi, while doing final testing before sending a pull request, I merged
> in linux-next, and commit 5b9ce7ecd7: virtio_balloon: stay awake while
> adjusting balloon seems to be causing a lockdep warning (see attached)
> > when running gce-xfstests on a Google Compute Engine e2 VM. I was not
> > able to trigger it using kvm-xfstests, but the following command:
> > "gce-xfstests -C 10 ext4/4k generic/47
On 09.01.24 06:50, David Stevens wrote:
On Tue, Jan 9, 2024 at 6:50 AM Theodore Ts'o wrote:
Hi, while doing final testing before sending a pull request, I merged
in linux-next, and commit 5b9ce7ecd7: virtio_balloon: stay awake while
adjusting balloon seems to be causing a lockdep warning
On Tue, Jan 9, 2024 at 6:50 AM Theodore Ts'o wrote:
>
> Hi, while doing final testing before sending a pull request, I merged
> in linux-next, and commit 5b9ce7ecd7: virtio_balloon: stay awake while
> adjusting balloon seems to be causing a lockdep warning (see attached)
Hi, while doing final testing before sending a pull request, I merged
in linux-next, and commit 5b9ce7ecd7: virtio_balloon: stay awake while
adjusting balloon seems to be causing a lockdep warning (see attached)
when running gce-xfstests on a Google Compute Engine e2 VM. I was not
able to trigger
Don't simply disable local interrupt delivery of the CPU hardware IRQ; that still races with the
region inside signal_irq_work(), including
intel_breadcrumbs_disarm_irq()/intel_breadcrumbs_arm_irq().
RT complains about a might-sleep inside signal_irq_work() because spin_lock() will
be invoked after disabling interrupts.
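A minimal sketch of the pattern being objected to, using a hypothetical lock rather than the actual i915 breadcrumbs code: on PREEMPT_RT, spin_lock() is backed by a sleeping rt_mutex, so taking it after hard interrupts have been disabled by hand trips the might-sleep check, whereas letting spin_lock_irqsave() manage the interrupt state does not.

#include <linux/spinlock.h>
#include <linux/irqflags.h>

static DEFINE_SPINLOCK(signal_lock);    /* hypothetical stand-in for the breadcrumbs irq lock */

static void broken_on_rt(void)
{
        local_irq_disable();            /* hard IRQs off by hand */
        spin_lock(&signal_lock);        /* sleeping lock on RT -> might-sleep splat */
        /* arm/disarm/signal work would go here */
        spin_unlock(&signal_lock);
        local_irq_enable();
}

static void rt_friendly(void)
{
        unsigned long flags;

        /* Let the locking primitive manage interrupt state; on RT this
         * leaves hard IRQs enabled and relies on the rt_mutex for exclusion. */
        spin_lock_irqsave(&signal_lock, flags);
        /* arm/disarm/signal work would go here */
        spin_unlock_irqrestore(&signal_lock, flags);
}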
>> iwl_opmode_register+0x71/0xe0 [iwlwifi]
> >> iwl_mvm_init+0x34/0x1000 [iwlmvm]
> >> do_one_initcall+0x5b/0x300
> >> do_init_module+0x5b/0x21c
> >> load_module+0x1dae/0x22c0
> >> __do_sys_finit_module+0xad/0x110
> >> do_syscall_64+0x33/0x80
> >> entry_SYSCALL_64_after_hwframe+0x44/0xae
> >>
> >> [ ... lockdep output trimmed ]
> >>
> >> Fixes: 25edc8f259c7106 ("iwlwifi: pcie: properly implement NAPI")
> >> Signed-off-by: Jiri Kosina
> >> ---
> >>
> >> v1->v2: Previous patch was not refreshed against current code-base, sorry.
> >>
> >> drivers/net/wireless/intel/iwlwifi/pcie/rx.c | 3 ++-
> >> 1 file changed, 2 insertions(+), 1 deletion(-)
> >
> >
> > Thanks, Jiri! Let's take your patch since you already sent it out.
> >
> > Kalle, can you please take this directly to wireless-drivers.git?
> >
> > Acked-by: Luca Coelho
>
> Ok but I don't see this either in patchwork or lore, hopefully it shows
> up later.
>
Is that intended to have a subject like...?
iwlwifi: don't call netif_napi_add() with rxq->lock held (was Re:
Lockdep warning in iwl_pcie_rx_handle())
- Sedat -
[1]
https://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers.git/commit/?id=295d4cd82b0181dd36b145fd535c13d623d7a335
On Wed, 3 Mar 2021, Kalle Valo wrote:
> > ... i believe you want to drop the "(was ...") part from the patch
> > subject.
>
> Too late now, it's already applied and pull request sent. Why was it
> there in the first place?
Yeah, it was, but I don't think it's a big issue :) So let it be.
BTW,
Jiri Kosina writes:
> On Wed, 3 Mar 2021, Kalle Valo wrote:
>
>> Patch applied to wireless-drivers.git, thanks.
>
> Thanks, but ...
>
>> 295d4cd82b01 iwlwifi: don't call netif_napi_add() with rxq->lock
>> held (was Re: Lockdep warning in iwl_pcie_rx_han
On Wed, 3 Mar 2021, Kalle Valo wrote:
> Patch applied to wireless-drivers.git, thanks.
Thanks, but ...
> 295d4cd82b01 iwlwifi: don't call netif_napi_add() with rxq->lock held (was
> Re: Lockdep warning in iwl_pcie_rx_handle())
... i believe you want to drop the "(was ...
d/0xbf0 [iwlmvm]
> _iwl_op_mode_start.isra.4+0x42/0x80 [iwlwifi]
> iwl_opmode_register+0x71/0xe0 [iwlwifi]
> iwl_mvm_init+0x34/0x1000 [iwlmvm]
> do_one_initcall+0x5b/0x300
> do_init_module+0x5b/0x21c
> load_module+0x1dae/0x22c0
> __do_sys_finit_module+0xa
On Tue, 2 Mar 2021, Kalle Valo wrote:
> > Thanks, Jiri! Let's take your patch since you already sent it out.
> >
> > Kalle, can you please take this directly to wireless-drivers.git?
> >
> > Acked-by: Luca Coelho
>
> Ok but I don't see this either in patchwork or lore, hopefully it shows
> up la
"Coelho, Luciano" writes:
> On Tue, 2021-03-02 at 11:34 +0100, Jiri Kosina wrote:
>> From: Jiri Kosina
>>
> > We can't call netif_napi_add() with rxq->lock held, as there is a potential
>> for deadlock as spotted by lockdep (see below). rxq->lock is not
>> protecting anything over the netif_napi_
On Tue, 2021-03-02 at 11:34 +0100, Jiri Kosina wrote:
> From: Jiri Kosina
>
> We can't call netif_napi_add() with rxq->lock held, as there is a potential
> for deadlock as spotted by lockdep (see below). rxq->lock is not
> protecting anything over the netif_napi_add() codepath anyway, so let's
> d
On Tue, 2021-03-02 at 10:27 +0100, Jiri Kosina wrote:
> On Mon, 1 Mar 2021, Johannes Berg wrote:
>
> > > I am getting the splat below with Linus' tree as of today (5.11-rc1,
> > > fe07bfda2fb). I haven't started to look into the code yet, but apparently
> > > this has been already reported by He
From: Jiri Kosina
We can't call netif_napi_add() with rxq->lock held, as there is a potential
for deadlock as spotted by lockdep (see below). rxq->lock is not
protecting anything over the netif_napi_add() codepath anyway, so let's
drop it just before calling into NAPI.
==
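A minimal sketch of the fix described in that commit message, with generic names rather than the real iwlwifi structures (and the four-argument netif_napi_add() of kernels from that era): do the setup that needs rxq->lock under the lock, drop it, and only then register the NAPI context.

#include <linux/netdevice.h>
#include <linux/spinlock.h>

struct rx_queue {                       /* hypothetical stand-in for the driver's rxq */
        spinlock_t lock;
        struct napi_struct napi;
};

static int rx_poll(struct napi_struct *napi, int budget)
{
        return 0;                       /* poll body elided */
}

static void rx_init_sketch(struct net_device *dev, struct rx_queue *rxq)
{
        spin_lock(&rxq->lock);
        /* queue state setup that genuinely needs the lock */
        spin_unlock(&rxq->lock);

        /* Register the NAPI context only after rxq->lock has been dropped,
         * so netif_napi_add() never runs under the spinlock. */
        netif_napi_add(dev, &rxq->napi, rx_poll, NAPI_POLL_WEIGHT);
}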
On Mon, 1 Mar 2021, Johannes Berg wrote:
> > I am getting the splat below with Linus' tree as of today (5.11-rc1,
> > fe07bfda2fb). I haven't started to look into the code yet, but apparently
> > this has been already reported by Heiner here:
> >
> > https://www.spinics.net/lists/linux-wire
Hi Jiri,
> I am getting the splat below with Linus' tree as of today (5.11-rc1,
> fe07bfda2fb). I haven't started to look into the code yet, but apparently
> this has been already reported by Heiner here:
>
> https://www.spinics.net/lists/linux-wireless/msg208353.html
>
> so before I sta
ght I'd ask whether this has been root-caused elsewhere
> already.
>
> Thanks.
After reverting 25edc8f259c7106 ("iwlwifi: pcie: properly implement
NAPI"), I don't see the lockdep warning any more (*), so it seems to be
culprit (or at least related). CCing Johannes.
Hi,
I am getting the splat below with Linus' tree as of today (5.11-rc1,
fe07bfda2fb). I haven't started to look into the code yet, but apparently
this has been already reported by Heiner here:
https://www.spinics.net/lists/linux-wireless/msg208353.html
so before I start digging deep i
From: Mark Brown
[ Upstream commit 14a71d509ac809dcf56d7e3ca376b15d17bd0ddd ]
With commit eaa7995c529b54 (regulator: core: avoid
regulator_resolve_supply() race condition) we started holding the rdev
lock while resolving supplies, an operation that requires holding the
regulator_list_mutex. This
concurrent access to the seqcount lock when it's used
for read and initialization.
Commit d5c8238849e7 ("btrfs: convert data_seqcount to seqcount_mutex_t")
does not mention a particular problem being fixed so revert should not
cause any harm and we'll get the lockdep warning fixed
Hi,
On 2/2/21 10:32 AM, Pavel Machek wrote:
> Hi!
>
>>> Is it a regression? AFAIK it is a bug that has been there
>>> forever... My original plan was to simply wait for 5.12, so it gets
>>> full release of testing...
>>
>> It may have been a pre-existing bug which got triggered by libata
>> chang
Hi!
> > Is it a regression? AFAIK it is a bug that has been there
> > forever... My original plan was to simply wait for 5.12, so it gets
> > full release of testing...
>
> It may have been a pre-existing bug which got triggered by libata
> changes?
Fixes tag suggests it is rather old.
> I don'
Hi,
On 1/27/21 11:01 PM, Pavel Machek wrote:
> Hi!
>
> Booting a 5.11-rc2 kernel with lockdep enabled inside a virtualbox vm
> (which still
> emulates good old piix ATA controllers) I get the below lockdep splat
> early on during boot:
>
> This seems to be led-class rela
Hi!
> >>> Booting a 5.11-rc2 kernel with lockdep enabled inside a virtualbox vm
> >>> (which still
> >>> emulates good old piix ATA controllers) I get the below lockdep splat
> >>> early on during boot:
> >>>
> >>> This seems to be led-class related but also seems to have a (P)ATA
> >>> part to
Hi,
On 1/13/21 9:59 AM, Hans de Goede wrote:
> Hi,
>
> On 1/12/21 11:30 PM, Pavel Machek wrote:
>> Hi!
>>
>>> Booting a 5.11-rc2 kernel with lockdep enabled inside a virtualbox vm
>>> (which still
>>> emulates good old piix ATA controllers) I get the below lockdep splat early
>>> on during boot
e resolution down to immediately before we do the
> set_supply() and drop it again once the allocation is done.
Applied to
https://git.kernel.org/pub/scm/linux/kernel/git/broonie/regulator.git
for-next
Thanks!
[1/1] regulator: Fix lockdep warning resolving supplies
commit: 14a71d509a
With commit eaa7995c529b54 (regulator: core: avoid
regulator_resolve_supply() race condition) we started holding the rdev
lock while resolving supplies, an operation that requires holding the
regulator_list_mutex. This results in lockdep warnings since in other
places we take the list mutex then th
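The report boils down to a classic two-lock ordering inversion. A generic sketch (hypothetical mutexes, not the regulator code) of the shape lockdep complains about; the applied fix avoids it by narrowing the window in which the rdev lock is held around set_supply(), as described above.

#include <linux/mutex.h>

static DEFINE_MUTEX(list_mutex);        /* plays the role of regulator_list_mutex */
static DEFINE_MUTEX(dev_lock);          /* plays the role of the rdev lock */

static void registration_path(void)
{
        mutex_lock(&list_mutex);
        mutex_lock(&dev_lock);          /* order: list_mutex -> dev_lock */
        mutex_unlock(&dev_lock);
        mutex_unlock(&list_mutex);
}

static void resolve_supply_path(void)
{
        mutex_lock(&dev_lock);
        mutex_lock(&list_mutex);        /* order: dev_lock -> list_mutex (inverted) */
        mutex_unlock(&list_mutex);
        mutex_unlock(&dev_lock);
}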
Hi,
On 1/12/21 11:30 PM, Pavel Machek wrote:
> Hi!
>
>> Booting a 5.11-rc2 kernel with lockdep enabled inside a virtualbox vm (which
>> still
>> emulates good old piix ATA controllers) I get the below lockdep splat early
>> on during boot:
>>
>> This seems to be led-class related but also seems
Hi!
> Booting a 5.11-rc2 kernel with lockdep enabled inside a virtualbox vm (which
> still
> emulates good old piix ATA controllers) I get the below lockdep splat early
> on during boot:
>
> This seems to be led-class related but also seems to have a (P)ATA
> part to it. To the best of my knowl
Hi All,
Booting a 5.11-rc2 kernel with lockdep enabled inside a virtualbox vm (which
still
emulates good old piix ATA controllers) I get the below lockdep splat early on
during boot:
This seems to be led-class related but also seems to have a (P)ATA
part to it. To the best of my knowledge this
On 12/14/20 11:58 PM, Xiaoguang Wang wrote:
> hi,
>
>> On 11/28/20 5:13 PM, Pavel Begunkov wrote:
>>> On 28/11/2020 23:59, Nadav Amit wrote:
Hello Pavel,
I got the following lockdep splat while rebasing my work on 5.10-rc5 on the
kernel (based on 5.10-rc5+).
I did not
hi,
On 11/28/20 5:13 PM, Pavel Begunkov wrote:
On 28/11/2020 23:59, Nadav Amit wrote:
Hello Pavel,
I got the following lockdep splat while rebasing my work on 5.10-rc5 on the
kernel (based on 5.10-rc5+).
I did not actually confirm that the problem is triggered without my changes,
as my iouri
On 11/28/20 5:13 PM, Pavel Begunkov wrote:
> On 28/11/2020 23:59, Nadav Amit wrote:
>> Hello Pavel,
>>
>> I got the following lockdep splat while rebasing my work on 5.10-rc5 on the
>> kernel (based on 5.10-rc5+).
>>
>> I did not actually confirm that the problem is triggered without my changes,
>>
> On Nov 28, 2020, at 4:13 PM, Pavel Begunkov wrote:
>
> On 28/11/2020 23:59, Nadav Amit wrote:
>> Hello Pavel,
>>
>> I got the following lockdep splat while rebasing my work on 5.10-rc5 on the
>> kernel (based on 5.10-rc5+).
>>
>> I did not actually confirm that the problem is triggered withou
On 28/11/2020 23:59, Nadav Amit wrote:
> Hello Pavel,
>
> I got the following lockdep splat while rebasing my work on 5.10-rc5 on the
> kernel (based on 5.10-rc5+).
>
> I did not actually confirm that the problem is triggered without my changes,
> as my iouring workload requires some kernel chang
Hello Pavel,
I got the following lockdep splat while rebasing my work on 5.10-rc5 on the
kernel (based on 5.10-rc5+).
I did not actually confirm that the problem is triggered without my changes,
as my iouring workload requires some kernel changes (not iouring changes),
yet IMHO it seems pretty cl
There's a potential deadlock with the following cycle:
wfs_lock --> device_links_lock --> kn->count
Fix this by simply dropping the lock around a list_empty() check that's
just exported to a sysfs file. The sysfs file output is an instantaneous
check anyway and the lock doesn't really add any prot
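A rough sketch of that kind of change, with hypothetical names rather than the actual driver-core attribute: the sysfs show callback only reports an instantaneous list_empty() snapshot, so it can read the list head without taking the lock that participates in the cycle.

#include <linux/device.h>
#include <linux/list.h>
#include <linux/sysfs.h>

static LIST_HEAD(pending_links);        /* hypothetical list behind the sysfs attribute */

static ssize_t pending_links_show(struct device *dev,
                                  struct device_attribute *attr, char *buf)
{
        /* Lockless on purpose: a single racy-but-harmless read is fine for an
         * informational sysfs file, and taking the lock here is what created
         * the wfs_lock -> device_links_lock -> kn->count cycle. */
        return sysfs_emit(buf, "%u\n", !list_empty(&pending_links));
}
static DEVICE_ATTR_RO(pending_links);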
On Mon, Aug 31, 2020 at 11:07 PM Peng Fan wrote:
>
> > Subject: Re: Lockdep warning caused by "driver core: Fix sleeping in invalid
> > context during device link deletion"
> >
> > On Wed, Aug 26, 2020 at 10:17 PM Saravana Kannan
> > wrote:
>
> Subject: Re: Lockdep warning caused by "driver core: Fix sleeping in invalid
> context during device link deletion"
>
> On Wed, Aug 26, 2020 at 10:17 PM Saravana Kannan
> wrote:
> >
> > On Thu, Aug 20, 2020 at 8:50 PM Dong Aisheng
> wrote:
> >
On Wed, Aug 26, 2020 at 10:17 PM Saravana Kannan wrote:
>
> On Thu, Aug 20, 2020 at 8:50 PM Dong Aisheng wrote:
> >
> > Hi ALL,
> >
> > We met the below WARNING during system suspend on an iMX6Q SDB board
> > with the latest linus/master branch (v5.9-rc1+) and next-20200820.
> > v5.8 kernel is ok
On Thu, Aug 20, 2020 at 8:50 PM Dong Aisheng wrote:
>
> Hi ALL,
>
> We met the below WARNING during system suspend on an iMX6Q SDB board
> with the latest linus/master branch (v5.9-rc1+) and next-20200820.
> v5.8 kernel is ok. So i did bisect and finally found it's caused by
> the patch below.
> R
> From: Saravana Kannan
> Sent: Saturday, August 22, 2020 2:28 AM
>
> On Thu, Aug 20, 2020 at 8:50 PM Dong Aisheng
> wrote:
> >
> > Hi ALL,
> >
> > We met the below WARNING during system suspend on an iMX6Q SDB board
> > with the latest linus/master branch (v5.9-rc1+) and next-20200820.
> > v5.8
On Thu, Aug 20, 2020 at 8:50 PM Dong Aisheng wrote:
>
> Hi ALL,
>
> We met the below WARNING during system suspend on an iMX6Q SDB board
> with the latest linus/master branch (v5.9-rc1+) and next-20200820.
> v5.8 kernel is ok. So i did bisect and finally found it's caused by
> the patch below.
> R
Hi ALL,
We met the below WARNING during system suspend on an iMX6Q SDB board
with the latest linus/master branch (v5.9-rc1+) and next-20200820.
v5.8 kernel is ok. So i did bisect and finally found it's caused by
the patch below.
Reverting it can get rid of the warning, but I wonder if there may be
.c
@@ -100,6 +100,15 @@ static DEFINE_MUTEX(ashmem_mutex);
static struct kmem_cache *ashmem_area_cachep __read_mostly;
static struct kmem_cache *ashmem_range_cachep __read_mostly;
+/*
+ * A separate lockdep class for the backing shmem inodes to resolve the lockdep
+ * warning about the race b
em.c
@@ -95,6 +95,15 @@ static DEFINE_MUTEX(ashmem_mutex);
static struct kmem_cache *ashmem_area_cachep __read_mostly;
static struct kmem_cache *ashmem_range_cachep __read_mostly;
+/*
+ * A separate lockdep class for the backing shmem inodes to resolve the lockdep
+ * warning about the race b
*ashmem_area_cachep __read_mostly;
static struct kmem_cache *ashmem_range_cachep __read_mostly;
+/*
+ * A separate lockdep class for the backing shmem inodes to resolve the lockdep
+ * warning about the race between kswapd taking fs_reclaim before inode_lock
+ * and write syscall taking inode_lock and then
lass for the backing shmem inodes.
> >
> > [1]: https://lkml.kernel.org/lkml/0b5f9d059aa20...@google.com/
> >
> > Signed-off-by: Suren Baghdasaryan
> > ---
>
> Once Eric's nits are resolved:
>
> Reviewed-by: Joel Fernandes (Google)
Thanks Joel!
On Wed, Jul 15, 2020 at 10:45 PM Suren Baghdasaryan wrote:
>
> syzbot report [1] describes a deadlock when write operation against an
> ashmem fd executed at the time when ashmem is shrinking its cache results
> in the following lock sequence:
>
> Possible unsafe locking scenario:
>
> CPU0
"Qian Cai", "Eric Sandeen"
Sent: Monday, July 13, 2020 12:41:12 PM
Subject: Re: [PATCH v6] xfs: Fix false positive lockdep warning with sb_internal
& fs_reclaim
On Tue, Jul 07, 2020 at 03:16:29PM -0400, Waiman Long wrote:
Depending on the workloads, the following circular lo
"Qian Cai", "Eric Sandeen"
>
> Sent: Monday, July 13, 2020 12:41:12 PM
> Subject: Re: [PATCH v6] xfs: Fix false positive lockdep warning with
> sb_internal & fs_reclaim
>
> On Tue, Jul 07, 2020 at 03:16:29PM -0400, Waiman Long wrote:
> > Depending on the worklo
- Original Message -
From: "Darrick J. Wong"
To: "Waiman Long"
Cc: linux-...@vger.kernel.org, linux-kernel@vger.kernel.org, "Dave Chinner"
, "Qian Cai" , "Eric Sandeen"
Sent: Monday, July 13, 2020 12:41:12 PM
Subject: Re: [PATC
On Wed, Jul 15, 2020 at 8:30 PM Eric Biggers wrote:
>
> On Wed, Jul 15, 2020 at 07:45:27PM -0700, Suren Baghdasaryan wrote:
> > syzbot report [1] describes a deadlock when write operation against an
> > ashmem fd executed at the time when ashmem is shrinking its cache results
> > in the following
On Wed, Jul 15, 2020 at 07:45:27PM -0700, Suren Baghdasaryan wrote:
> syzbot report [1] describes a deadlock when write operation against an
> ashmem fd executed at the time when ashmem is shrinking its cache results
> in the following lock sequence:
>
> Possible unsafe locking scenario:
>
>
te lockdep class for the backing shmem inodes to resolve the lockdep
+ * warning about the race between kswapd taking fs_reclaim before inode_lock
+ * and write syscall taking inode_lock and then fs_reclaim.
+ * Note that such race is impossible because ashmem does not support write
+ * syscalls oper
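A rough sketch of that approach, not necessarily the exact ashmem hunk (the helper name here is made up): after the backing shmem file is created, its inode's i_rwsem is moved into a dedicated lockdep class, so the write-vs-reclaim ordering, which cannot actually occur for ashmem, is no longer chained with ordinary inode locks.

#include <linux/err.h>
#include <linux/fs.h>
#include <linux/lockdep.h>
#include <linux/mm.h>
#include <linux/shmem_fs.h>

static struct lock_class_key backing_shmem_inode_class;

static struct file *setup_backing_file(const char *name, loff_t size)
{
        struct file *vmfile = shmem_file_setup(name, size, VM_NORESERVE);

        if (IS_ERR(vmfile))
                return vmfile;

        /* Dedicated class: lockdep stops linking this i_rwsem with the inode
         * locks taken by regular write() paths under fs_reclaim. */
        lockdep_set_class(&file_inode(vmfile)->i_rwsem, &backing_shmem_inode_class);
        return vmfile;
}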
On Tue, Jul 07, 2020 at 03:16:29PM -0400, Waiman Long wrote:
> Depending on the workloads, the following circular locking dependency
> warning between sb_internal (a percpu rwsem) and fs_reclaim (a pseudo
> lock) may show up:
>
> ==
> WARNING: po
On Tue, Jul 07, 2020 at 03:16:29PM -0400, Waiman Long wrote:
> One way to avoid this splat is to add GFP_NOFS to the affected allocation
> calls by using the memalloc_nofs_save()/memalloc_nofs_restore() pair.
> This shouldn't matter unless the system is really running out of memory.
> In that parti
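For reference, a minimal sketch of the memalloc_nofs_save()/memalloc_nofs_restore() pattern mentioned here (a generic example, not the actual xfs freeze path): every allocation made between the two calls is implicitly treated as GFP_NOFS, so it cannot recurse into filesystem reclaim and therefore cannot add the fs_reclaim edge to the lock chain.

#include <linux/sched/mm.h>
#include <linux/slab.h>

static void *alloc_without_fs_reclaim(size_t size)
{
        unsigned int nofs_flags;
        void *p;

        nofs_flags = memalloc_nofs_save();
        p = kmalloc(size, GFP_KERNEL);          /* effectively GFP_NOFS in this scope */
        memalloc_nofs_restore(nofs_flags);

        return p;
}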
Looks good,
Reviewed-by: Christoph Hellwig
Depending on the workloads, the following circular locking dependency
warning between sb_internal (a percpu rwsem) and fs_reclaim (a pseudo
lock) may show up:
==
WARNING: possible circular locking dependency detected
5.0.0-rc1+ #60 Tainted: G
On 7/2/20 9:07 PM, Dave Chinner wrote:
On Wed, Jul 01, 2020 at 08:59:23PM -0400, Waiman Long wrote:
Suggested-by: Dave Chinner
Signed-off-by: Waiman Long
---
fs/xfs/xfs_super.c | 12 +++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_s
On Wed, Jul 01, 2020 at 08:59:23PM -0400, Waiman Long wrote:
> Suggested-by: Dave Chinner
> Signed-off-by: Waiman Long
> ---
> fs/xfs/xfs_super.c | 12 +++-
> 1 file changed, 11 insertions(+), 1 deletion(-)
>
> diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
> index 379cbff438bc..d
On 6/18/20 7:04 PM, Dave Chinner wrote:
On Fri, Jun 19, 2020 at 08:58:10AM +1000, Dave Chinner wrote:
On Thu, Jun 18, 2020 at 01:19:41PM -0400, Waiman Long wrote:
diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
index 379cbff438bc..1b94b9bfa4d7 100644
--- a/fs/xfs/xfs_super.c
+++ b/fs/xfs/x
On 6/19/20 9:21 AM, Christoph Hellwig wrote:
I find it really confusing that we record this in current->flags.
per-thread state makes total sense for not dipping into fs reclaim.
But for annotating something related to memory allocation passing flags
seems a lot more descriptive to me, as it is a
I find it really confusing that we record this in current->flags.
per-thread state makes total sense for not dipping into fs reclaim.
But for annotating something related to memory allocation passing flags
seems a lot more descriptive to me, as it is about particular locks.
On Fri, Jun 19, 2020 at 08:58:10AM +1000, Dave Chinner wrote:
> On Thu, Jun 18, 2020 at 01:19:41PM -0400, Waiman Long wrote:
> > diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
> > index 379cbff438bc..1b94b9bfa4d7 100644
> > --- a/fs/xfs/xfs_super.c
> > +++ b/fs/xfs/xfs_super.c
> > @@ -913,11
On Thu, Jun 18, 2020 at 01:19:41PM -0400, Waiman Long wrote:
> diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
> index 379cbff438bc..1b94b9bfa4d7 100644
> --- a/fs/xfs/xfs_super.c
> +++ b/fs/xfs/xfs_super.c
> @@ -913,11 +913,33 @@ xfs_fs_freeze(
> struct super_block *sb)
> {
>
On 6/18/20 11:20 AM, Darrick J. Wong wrote:
On Thu, Jun 18, 2020 at 11:05:57AM -0400, Waiman Long wrote:
Depending on the workloads, the following circular locking dependency
warning between sb_internal (a percpu rwsem) and fs_reclaim (a pseudo
lock) may show up:
===
On Thu, Jun 18, 2020 at 11:05:57AM -0400, Waiman Long wrote:
> Depending on the workloads, the following circular locking dependency
> warning between sb_internal (a percpu rwsem) and fs_reclaim (a pseudo
> lock) may show up:
>
> ==
> WARNING: po
On Thu, Jun 18, 2020 at 10:45:05AM +1000, Dave Chinner wrote:
> On Wed, Jun 17, 2020 at 01:53:10PM -0400, Waiman Long wrote:
> > fs/xfs/xfs_log.c | 9 +
> > fs/xfs/xfs_trans.c | 31 +++
> > 2 files changed, 36 insertions(+), 4 deletions(-)
> >
> > diff --git
On 6/17/20 8:45 PM, Dave Chinner wrote:
On Wed, Jun 17, 2020 at 01:53:10PM -0400, Waiman Long wrote:
fs/xfs/xfs_log.c | 9 +
fs/xfs/xfs_trans.c | 31 +++
2 files changed, 36 insertions(+), 4 deletions(-)
diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c
On Wed, Jun 17, 2020 at 01:53:10PM -0400, Waiman Long wrote:
> fs/xfs/xfs_log.c | 9 +
> fs/xfs/xfs_trans.c | 31 +++
> 2 files changed, 36 insertions(+), 4 deletions(-)
>
> diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c
> index 00fda2e8e738..33244680d0d4
On Mon, Jun 15, 2020 at 04:53:38PM -0400, Waiman Long wrote:
> On 6/15/20 12:43 PM, Darrick J. Wong wrote:
> > On Mon, Jun 15, 2020 at 12:08:30PM -0400, Waiman Long wrote:
> > > Depending on the workloads, the following circular locking dependency
> > > warning between sb_internal (a percpu rwsem)
On Mon, Jun 15, 2020 at 09:43:51AM -0700, Darrick J. Wong wrote:
> Also: Why not set PF_MEMALLOC_NOFS at the start of the freeze call
> chain?
Because there's no guarantee that we are always going to do this
final work in the freeze syscall context? i.e. the state is specific
to the context in whi
On 6/15/20 12:43 PM, Darrick J. Wong wrote:
On Mon, Jun 15, 2020 at 12:08:30PM -0400, Waiman Long wrote:
Depending on the workloads, the following circular locking dependency
warning between sb_internal (a percpu rwsem) and fs_reclaim (a pseudo
lock) may show up:
===
On Mon, Jun 15, 2020 at 12:08:30PM -0400, Waiman Long wrote:
> Depending on the workloads, the following circular locking dependency
> warning between sb_internal (a percpu rwsem) and fs_reclaim (a pseudo
> lock) may show up:
>
> ==
> WARNING: po
On 10.06.20 at 10:19, Michał Mirosław wrote:
> Dear Developers,
>
> I found a lockdep warning in dmesg some after doing 'mdadm -S' while
> also having btrfs mounted (light to none I/O load). Disks under MD and
> btrfs are unrelated.
Huhz, I think that's
Dear Developers,
I found a lockdep warning in dmesg some after doing 'mdadm -S' while
also having btrfs mounted (light to none I/O load). Disks under MD and
btrfs are unrelated.
Best Regards,
Michał Mirosław
==
WARNING: possibl
o that we avoid lockdep false positives
* by doing GFP_KERNEL allocations inside sb_start_intwrite().
+*
+* To prevent false positive lockdep warning of circular locking
+* dependency between sb_internal and fs_reclaim, disable the
+* acquisition of the fs_recl
accounting and setting up
>* GFP_NOFS allocation context so that we avoid lockdep false positives
>* by doing GFP_KERNEL allocations inside sb_start_intwrite().
> + *
> + * To prevent false positive lockdep warning of circular locking
> + * dependency
llocation context so that we avoid lockdep false positives
* by doing GFP_KERNEL allocations inside sb_start_intwrite().
+ *
+ * To prevent false positive lockdep warning of circular locking
+* dependency between sb_internal and fs_reclaim, disable the
+* ac
From: "Michael S. Tsirkin"
[ Upstream commit 01c3259818a11f3cc3cd767adbae6b45849c03c1 ]
When we fill up a receive VQ, try_fill_recv currently tries to count
kicks using a 64 bit stats counter. Turns out, on a 32 bit kernel that
uses a seqcount. sequence counts are "lock" constructs where you nee
From: "Michael S. Tsirkin"
[ Upstream commit 01c3259818a11f3cc3cd767adbae6b45849c03c1 ]
When we fill up a receive VQ, try_fill_recv currently tries to count
kicks using a 64 bit stats counter. Turns out, on a 32 bit kernel that
uses a seqcount. sequence counts are "lock" constructs where you nee
From: "Michael S. Tsirkin"
[ Upstream commit 01c3259818a11f3cc3cd767adbae6b45849c03c1 ]
When we fill up a receive VQ, try_fill_recv currently tries to count
kicks using a 64 bit stats counter. Turns out, on a 32 bit kernel that
uses a seqcount. sequence counts are "lock" constructs where you nee
From: "Michael S. Tsirkin"
Date: Thu, 7 May 2020 03:25:56 -0400
> When we fill up a receive VQ, try_fill_recv currently tries to count
> kicks using a 64 bit stats counter. Turns out, on a 32 bit kernel that
> uses a seqcount. sequence counts are "lock" constructs where you need to
> make sure th
When we fill up a receive VQ, try_fill_recv currently tries to count
kicks using a 64 bit stats counter. Turns out, on a 32 bit kernel that
uses a seqcount. sequence counts are "lock" constructs where you need to
make sure that writers are serialized.
In turn, this means that we mustn't run two tr
From: "Michael S. Tsirkin"
Date: Tue, 5 May 2020 20:01:31 -0400
> - u64_stats_update_end(&rq->stats.syncp);
> + u64_stats_update_end_irqrestore(&rq->stats.syncp);
Need to pass flags to this function.
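A minimal sketch of the corrected pairing (illustrative, not the full virtio-net change): u64_stats_update_begin_irqsave() hands back the saved flags and u64_stats_update_end_irqrestore() must receive those same flags, which is the point of the comment above; on 32-bit kernels this also keeps the underlying seqcount writer serialized against interrupts.

#include <linux/u64_stats_sync.h>

struct rx_stats {                       /* simplified stand-in for the per-queue stats */
        struct u64_stats_sync syncp;
        u64 kicks;
};

static void count_kick(struct rx_stats *stats)
{
        unsigned long flags;

        flags = u64_stats_update_begin_irqsave(&stats->syncp);
        stats->kicks++;
        u64_stats_update_end_irqrestore(&stats->syncp, flags);
}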
On 2020/5/6 8:01 AM, Michael S. Tsirkin wrote:
When we fill up a receive VQ, try_fill_recv currently tries to count
kicks using a 64 bit stats counter. Turns out, on a 32 bit kernel that
uses a seqcount. sequence counts are "lock" constructs where you need to
make sure that writers are serialize