Hi Dan,

> On 23 Jun 2025, at 11:13, Dan Carpenter <[email protected]> wrote:
> 
> On Mon, Jun 23, 2025 at 10:36:46AM +0000, Miguel Luis wrote:
>> Keeping it logged...
>> 
>>> On 18 Jun 2025, at 14:33, Miguel Luis <[email protected]> wrote:
>>> 
>>> Hi!
>>> 
>>> I'm trying to track down a hard lockup with smatch which produces the 
>>> following
>>> stacktrace:
>>> 
>>> [   43.028758] Kernel panic - not syncing: Hard LOCKUP
>>> [   43.029701] CPU: 92 UID: 0 PID: 0 Comm: swapper/92 Not tainted 
>>> 6.15.0-rc4+ #11 VOLUNTARY
>>> [   43.031264] Hardware name: linux,dummy-virt (DT)
>>> [   43.032156] Call trace:
>>> [   43.032641]  show_stack+0x20/0x48 (C)
>>> [   43.033358]  dump_stack_lvl+0x38/0xf8
>>> [   43.034082]  dump_stack+0x18/0x30
>>> [   43.034740]  panic+0x3a8/0x408
>>> [   43.035349]  nmi_panic+0x48/0xc0
>>> [   43.035987]  watchdog_hardlockup_check+0x2dc/0x360
>>> [   43.036924]  watchdog_buddy_check_hardlockup+0x78/0xa8
>>> [   43.037924]  watchdog_timer_fn+0xa0/0x3b8
>>> [   43.038712]  __run_hrtimer+0x224/0x3b0
>>> [   43.039455]  __hrtimer_run_queues+0xb4/0x148
>>> [   43.040292]  hrtimer_interrupt+0x138/0x320
>>> [   43.040932]  arch_timer_handler_phys+0x4c/0x78
>>> [   43.041695]  handle_percpu_devid_irq+0xb0/0x2a0
>>> [   43.042584]  handle_irq_desc+0x54/0x90
>>> [   43.043328]  generic_handle_domain_irq+0x24/0x48
>>> [   43.044197]  gic_handle_irq+0x74/0x100
>>> [   43.044907]  call_on_irq_stack+0x24/0x30
>>> [   43.045653]  do_interrupt_handler+0xa8/0xb8
>>> [   43.046444]  el1_interrupt+0x34/0x60
>>> [   43.047126]  el1h_64_irq_handler+0x18/0x38
>>> [   43.047821]  el1h_64_irq+0x70/0x78
>>> [   43.048410]  default_idle_call+0xac/0x270 (P)
>>> [   43.049126]  cpuidle_idle_call+0x184/0x1e0
>>> [   43.049729]  do_idle+0xb8/0x108
>>> [   43.050195]  cpu_startup_entry+0x40/0x50
>>> [   43.050870]  secondary_start_kernel+0xe0/0x108
>>> [   43.051713]  __secondary_switched+0xc0/0xc8
>>> [   43.052480] SMP: stopping secondary CPUs
>>> [   43.053253] Kernel Offset: disabled
>>> [   43.053775] CPU features: 0x0000,080001c0,02138ca1,554ffea7
>>> [   43.054628] Memory Limit: none
>>> [   43.055234] ---[ end Kernel panic - not syncing: Hard LOCKUP ]---
>>> 
>>> I was able to successfully ( I believe ) build all the requirements with:
>>> 
>>> Setting CONFIG_DYNAMIC_DEBUG=n per [1]
>>> 
>>> And only build the kernel Image to reduce the build time with:
>>> 
>>> make clean && make CHECK="../smatch/smatch -p=kernel --info --file-output" 
>>> C=1 -j $(nproc) Image || make -j $(nproc) Image
>>> export WLOG="warns.txt"
>>> find -name *.c.smatch -exec cat {} \; -exec rm {} \; > $WLOG
>>> find -name *.c.smatch.sql -exec cat {} \; -exec rm {} \; > $WLOG.sql
>>> find -name *.c.smatch.caller_info -exec cat {} \; -exec rm {} \; > 
>>> $WLOG.caller_info
>>> 
>> 
>> 
>> s/>/>>/
>> 
>> To append to file instead.
>> 
> 
> Those files are only used once...  Keeping the old data seems like it
> might cause a problem.

I agree, and based on that this is the script I’m using:

```
# env
export WLOG="smatch_warns.txt"

# clear everything
rm -rf "${WLOG}"* smatch_db.sqlite

# build
make clean && time make CHECK="../smatch/smatch -p=kernel --info --file-output" \
    C=1 -j "$(nproc)" Image || time make -j "$(nproc)" Image

# squash (quote the globs so the shell doesn't expand them before find does)
find . -name '*.c.smatch' -exec cat {} \; -exec rm {} \; >> "$WLOG"
find . -name '*.c.smatch.sql' -exec cat {} \; -exec rm {} \; >> "$WLOG.sql"
find . -name '*.c.smatch.caller_info' -exec cat {} \; -exec rm {} \; >> "$WLOG.caller_info"

# create DB
../smatch/smatch_data/db/create_db.sh -p=kernel "$WLOG.sql"
```
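
As a quick sanity check after the last step (assuming sqlite3 is on the PATH; not something from your mail), I list the tables to confirm the DB actually got built:

```
# Confirm create_db.sh produced a populated database.
sqlite3 smatch_db.sqlite '.tables'
```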

>  One thing you can do is:
> 
> kchecker --info drivers/file.c | tee out
> ~/smatch/smatch_data/db/reload_partial.sh out
> 
> It will reload just the one file.

Thanks for the tip!
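
For repeated runs I’ll probably wrap that into a small helper. A sketch of what I mean; the path to reload_partial.sh is an assumption from my layout:

```
# Re-check a single file and reload its results into the DB.
# Adjust the reload_partial.sh path to wherever smatch is checked out.
recheck() {
    local src="$1"
    kchecker --info "$src" | tee out &&
        ~/smatch/smatch_data/db/reload_partial.sh out
}
# usage: recheck drivers/file.c
```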

>  It doesn't update everything,
> but it updates the caller_info and return_states which is what we
> mostly care about.
> 

And thanks for the hint. :)
I’m creating SQL views which I believe should simplify 'smdb.py' a bit, or at
least help use cases that want to work directly on top of the db.
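
For illustration, a minimal sketch of the kind of view I mean, assuming the default smatch_db.sqlite layout; the caller_info column names below are an assumption and should be verified against the real schema first:

```
# Sketch only: a convenience view over caller_info. Column names are
# assumed; check them with: sqlite3 smatch_db.sqlite '.schema caller_info'
sqlite3 smatch_db.sqlite <<'EOF'
CREATE VIEW IF NOT EXISTS v_caller_info AS
SELECT file, function, type, parameter, key, value
FROM caller_info;
EOF
```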

> regards,
> dan carpenter

Regards
Miguel
