Lola Kilo and I were debugging a layer for CRC32 and found that the LayerLengthEv is a NoCacheEvaluatable. So even if the schema gives a constant, e.g. dfdlx:layerLength="1240", layerLengthEv.optConstant is still None: a NoCacheEvaluatable forces optConstant to None precisely to prevent constant optimization. The result is that the LayerLengthEv is re-evaluated at runtime on every use.
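To make the behavior concrete, here is a hypothetical, much-simplified sketch (not the actual Daffodil Evaluatable code; class shapes and signatures are assumed) of a cached evaluatable that can expose a compile-time constant versus a no-cache variant that unconditionally reports "not constant":

```java
import java.util.Optional;
import java.util.function.Supplier;

// Simplified stand-in for an evaluatable expression. When the expression
// folded to a constant at schema compile time, optConstant() is non-empty
// and evaluate() can skip recomputation.
class Evaluatable<T> {
    protected final Supplier<T> compute;
    private final Optional<T> constant;

    Evaluatable(Supplier<T> compute, T maybeConstant) {
        this.compute = compute;
        this.constant = Optional.ofNullable(maybeConstant);
    }

    Optional<T> optConstant() { return constant; }

    T evaluate() { return optConstant().orElseGet(compute); }
}

// A no-cache variant: even when the underlying expression is constant,
// optConstant() is always empty, so every evaluate() call recomputes.
class NoCacheEvaluatable<T> extends Evaluatable<T> {
    NoCacheEvaluatable(Supplier<T> compute, T maybeConstant) {
        super(compute, maybeConstant);
    }

    @Override
    Optional<T> optConstant() { return Optional.empty(); }
}
```

With this shape, a layer length of 1240 held by the no-cache variant still reports an empty optConstant and runs the supplier on every call, which matches what we observed.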
I am not sure I understand the entire rationale for LayerLengthEv being a NoCacheEvaluatable. I can think of one reason: so that people can't allocate mutable things based on that length and store them in the layer transformer class instance. That instance is a shared runtime structure, so storing per-parse state there wouldn't be thread safe. If layer authors are forced to evaluate the layerLengthEv every time, their code will naturally allocate any length-dependent data structure at runtime rather than at layer compile time. In other words, it's a hack to trick people into being more likely to write thread-safe code.

Based on this (and other experience with it) I think the layer API is in serious need of revisiting. We really should NOT just cast existing practice into JAPI/SAPI; we should rethink it so that, for example, a thread-local is created from the layer transformer instance. Comments?
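For concreteness, here is a rough sketch of the kind of thing I mean (hypothetical class and method names, not the current layer API): the shared transformer instance owns a ThreadLocal, so each parsing thread lazily gets its own mutable buffer while the transformer object itself stays safely shareable.

```java
// Hypothetical layer transformer: one instance is shared across threads,
// but the mutable work buffer lives in a ThreadLocal, so concurrent
// parses never share buffer state.
class Crc32LayerTransformer {
    private final ThreadLocal<byte[]> buffer;

    Crc32LayerTransformer(int layerLength) {
        // Allocated lazily, once per thread, sized from the layer length.
        this.buffer = ThreadLocal.withInitial(() -> new byte[layerLength]);
    }

    // Each calling thread sees its own buffer; repeated calls on the
    // same thread return the same array.
    byte[] workBuffer() { return buffer.get(); }
}
```

With something like this baked into the API, the thread-safety property would come from the structure of the API itself rather than from forcing re-evaluation of the length expression.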