Date: Wed, 28 May 2025 10:44:03 +0200 (CEST)
From: 6b...@6bone.informatik.uni-leipzig.de
Message-ID: <44be64a4-0839-3857-8e30-60155b67a...@6bone.informatik.uni-leipzig.de>
| The problem recurs at irregular intervals. I can only observe the problem
| on the server that uses a large iSCSI drive. The server is the iSCSI
| initiator. Therefore, I suspect it's an iSCSI problem.

The underlying problem might be, but the one you gave the stack trace for
wasn't. Either some earlier crash might have caused the problem, or there
is some corruption - either caused by an iSCSI problem, or simply there
and not being fixed by your fsck.

Note that the problem being reported there is not necessarily the one
you're repairing with your fsck - a crash like that (any crash) will often
leave file system damage that needs to be repaired, on any filesystem at
all - you might just be repairing damage caused by the recent crash, and
ignoring the underlying cause.

Unless something is corrupting the iSCSI transfers in an unusual way
(duplicate allocated inodes wouldn't be the usual result I'd expect from
the kind of issue that would typically cause), I suspect the real problem
may be elsewhere.

Run fsck -f (the -f is important) on *all* of your filesystems, local and
remote, starting with the one identified in the panic message, if you have
that data. But in any case, on all of them.

And yes, I know how tiresome that can be; I have filesystems which can
take more than an hour to fsck, and I'm currently making even bigger ones.
Fortunately, if you have sufficient RAM to avoid slowing things down even
more by paging, you can run fscks in parallel on different drives (I
wouldn't run two on the same drive (set) at the same time - that would
just cause head contention. And I know heads would be involved, ie: not
SSDs, as SSDs aren't (yet) going to build a big enough filesystem,
compared to their access speed, for this to really matter).

kre
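Since the per-drive parallelism described above is easy to get wrong, here is a minimal shell sketch of that scheme. The device names (/dev/wd0a and friends) are placeholders, not taken from the original report - substitute the list from your own /etc/fstab. The dry-run guard means the script only prints the commands it would run until you explicitly set DRYRUN=0.

```shell
# Run fsck -f on every filesystem, parallelizing across drives but
# serializing within each drive to avoid head contention.
# DRYRUN=1 (the default) prints the commands instead of executing them.
DRYRUN=${DRYRUN:-1}

run() {
    if [ "$DRYRUN" -eq 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# One background job per physical drive: partitions on the same drive
# are checked one after another; different drives proceed in parallel.
# /dev/wd0a, /dev/wd0e, /dev/wd1a are hypothetical example devices.
( run fsck -f /dev/wd0a; run fsck -f /dev/wd0e ) &
( run fsck -f /dev/wd1a ) &
wait
```

Keeping each drive's partitions inside one subshell is what enforces the "no two fscks on the same spindle" rule while still letting the drives overlap.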