Since 5.10.140 contained several changes to the kernel I/O code that could be the cause of my issue, I downloaded the vanilla source for the 5.10.139 kernel and built it using my Debian kernel config from /boot. I installed the resulting kernel and kernel headers, and DKMS built the required ZFS modules. Upon rebooting, all my ZFS pools are working as expected.
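For reference, the rebuild described above went roughly like this. This is a sketch, not my exact history: the tarball URL follows kernel.org's layout, and the /boot config filename is an assumption (use whichever config matches your running Debian kernel).

```shell
# Fetch and unpack the vanilla 5.10.139 source (URL per kernel.org convention)
cd /usr/src
wget https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.10.139.tar.xz
tar xf linux-5.10.139.tar.xz
cd linux-5.10.139

# Reuse the running Debian config; the exact filename in /boot is an
# assumption here -- check `ls /boot/config-*` on your system.
cp /boot/config-5.10.0-18-amd64 .config
make olddefconfig                 # accept defaults for any new options

# Build Debian packages (image + headers, so DKMS can rebuild ZFS)
make -j"$(nproc)" bindeb-pkg
sudo dpkg -i ../linux-image-5.10.139*.deb ../linux-headers-5.10.139*.deb
```

Installing the headers package alongside the image is what lets DKMS rebuild the ZFS modules automatically for the new kernel.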
In particular, the main pool that consistently showed six missing drives (A1-A6) under 5.10.140 now shows all drives as online, just as it does under 5.10.136-1.

In total, this system has 48 3.5" 7200 RPM SATA drives, two 1.92TB Samsung enterprise SATA SSDs, and three NVMe SSDs. The impacted drives are 16TB 3.5" drives housed in two 4U DS4246 JBOD enclosures and attached to a Dell R730xd server via an LSI 9207-8e HBA running P20 firmware in IT mode. I'm running ZFS 2.1.5 from bullseye-backports. Note that SMART data for the impacted drives is normal with no bad sectors, and the only change I made was booting into a different kernel; otherwise the system is running all the updates from the 11.5 point release.

I will try to bisect 5.10.140 tomorrow to determine more precisely which commit(s) are causing my issue.

        NAME                                             STATE     READ WRITE CKSUM
        data                                             ONLINE       0     0     0
          raidz2-0                                       ONLINE       0     0     0
            A1                                           ONLINE       0     0     0
            A2                                           ONLINE       0     0     0
            A3                                           ONLINE       0     0     0
            A4                                           ONLINE       0     0     0
            A5                                           ONLINE       0     0     0
            A6                                           ONLINE       0     0     0
            A7                                           ONLINE       0     0     0
            A8                                           ONLINE       0     0     0
            A9                                           ONLINE       0     0     0
            A10                                          ONLINE       0     0     0
            A11                                          ONLINE       0     0     0
            A12                                          ONLINE       0     0     0
        special
          mirror-1                                       ONLINE       0     0     0
            nvme-HP_SSD_EX920_1TB_HBSE48481800144-part1  ONLINE       0     0     0
            nvme-HP_SSD_EX920_1TB_HBSE48481800847-part1  ONLINE       0     0     0
        logs
          mirror-2                                       ONLINE       0     0     0
            nvme-HP_SSD_EX920_1TB_HBSE48481800144-part2  ONLINE       0     0     0
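The bisect I plan for tomorrow would look roughly like this. A sketch only: the stable-tree URL and tag names follow the linux-stable conventions, and the per-step test procedure (reboot and check the pool) is my assumption of how each candidate kernel gets judged.

```shell
# Bisect the 5.10.139..5.10.140 range in the linux-stable tree
git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
cd linux
git bisect start
git bisect bad v5.10.140     # drives A1-A6 missing on this kernel
git bisect good v5.10.139    # all drives online on this kernel

# At each step git checks out a candidate commit; build and install it
# (e.g. `make -j"$(nproc)" bindeb-pkg` as before), reboot into it, and
# inspect `zpool status`, then mark the result:
#   git bisect good    # pool imports cleanly
#   git bisect bad     # drives missing again
# Repeat until git reports the first bad commit, then clean up:
#   git bisect reset
```

Since the 5.10.139..5.10.140 delta is small, this should converge in only a handful of build-and-reboot cycles.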