Hi, (cc-ing bug-xorr...@gnu.org and the reporter of the problem.)
I now have the Guix ISO which fails when created by grub-mkrescue with an
HFS+ tree. My newest theory is that the limit is not about the number of
files in a single directory, but about the total number of files in the
tree and the lengths of their names.

I wonder how to describe this limit to the users of grub-mkrescue. Maybe:

  grub-mkrescue for platforms I386_EFI, X86_64_EFI, and POWERPC_IEEE1275
  has a limit on the number of files multiplied by their average name
  length. Beginning with about 300,000 files of usual name length, expect
  a xorriso error

    libisofs: FAILURE : HFS+ map nodes aren't implemented

  plus a rather misleading error message

    libisofs: FAILURE : Too much files to mangle, cannot guarantee unique file names

  The limit can only be avoided by suppressing the xorrisofs option
  -hfsplus. This can be done by using the xorriso script
  frontend/grub-mkrescue-sed.sh with MKRESCUE_SED_MODE="mbr_only" or
  MKRESCUE_SED_MODE="gpt_appended". Another method is to add these two
  as the last arguments of grub-mkrescue:

    -- -hfsplus off

  in order to leave xorriso's mkisofs emulation mode and to disable HFS+
  production by a generic xorriso command.

I still hope for a clarifying comment by Vladimir Serbinenko.

----------------------------------------------------------------------

Reasoning:

Riddling over what is overflowing in hfsplus.c, I found a description of
HFS+ at

  https://developer.apple.com/library/archive/technotes/tn/tn1150.html

Since hfsplus.c reports that it needs "map nodes", I assume the affected
structure is the "header node", which contains a "map record". Map nodes
would then be data structures which contain further map records. So for
now I believe the overflow is in the "B-tree Map Record" of the header
node:

  "It is a bitmap that indicates which nodes in the B-tree are used and
  which are free. The bits are interpreted in the same way as the bits
  in the allocation file."

The number in target->hfsp_nnodes which causes the overflow is 35487.
This is not the number of files, 434,920.
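For concreteness, the two workarounds could be invoked like this. This is
only a sketch: output.iso, the source directory iso_root, and the path to
grub-mkrescue-sed.sh are placeholders to be adapted.

```shell
# Workaround 1 (sketch): run grub-mkrescue through xorriso's
# grub-mkrescue-sed.sh frontend via grub-mkrescue's --xorriso= option.
# The script path and ISO/directory names are placeholders.
MKRESCUE_SED_MODE="mbr_only" \
grub-mkrescue -o output.iso iso_root \
  --xorriso=/path/to/xorriso/frontend/grub-mkrescue-sed.sh

# Workaround 2: append "-- -hfsplus off" as the last arguments, which
# leaves xorriso's mkisofs emulation mode and disables HFS+ production.
grub-mkrescue -o output.iso iso_root -- -hfsplus off
```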
The number in target->hfsp_nleafs is 869842, which is (434920 + 1) * 2.
The loop which accumulates target->hfsp_nnodes iterates over these
869842 leafs.

I guess that the 869842 leafs are planned to be stored in allocation
blocks of target->hfsp_cat_node_size, which I now know is 4096. Each
leaf occupies 50 to 200 bytes in its allocation block. This roughly
matches the ratio 35487 * 4096 / 869842 = 167.

So it seems to be about the number of files and the sum of their name
lengths. The size limit gets exceeded by about 19,525,632 bytes, or
(estimated by the above ratio) about 116,920 files.

Have a nice day :)

Thomas

_______________________________________________
Grub-devel mailing list
Grub-devel@gnu.org
https://lists.gnu.org/mailman/listinfo/grub-devel
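P.S.: The arithmetic above can be re-checked with a few lines of shell.
All input numbers come from this report; the 30720-node ceiling is my own
inference (a 4096-byte header node leaves roughly 3840 bytes of map
record, i.e. 30720 bits), chosen because it reproduces the reported
19,525,632-byte excess exactly.

```shell
# Numbers reported above:
nfiles=434920                     # files in the Guix tree
nleafs=$(( (nfiles + 1) * 2 ))    # target->hfsp_nleafs = 869842
node_size=4096                    # target->hfsp_cat_node_size
nnodes=35487                      # target->hfsp_nnodes at overflow time

# Average bytes per leaf record (the "roughly 167" ratio):
ratio=$(( nnodes * node_size / nleafs ))

# Inferred ceiling: bits in the header node's map record
# (assumption: about 3840 usable bytes * 8 = 30720 mappable nodes).
map_limit=30720
excess_bytes=$(( (nnodes - map_limit) * node_size ))
excess_files=$(( excess_bytes / ratio ))   # the mail rounds this to 116,920

echo "$nleafs $ratio $excess_bytes $excess_files"
```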