I'm thinking the layer framework needs to allocate the layer
transformer instance in a thread-local manner, so that if it
constructs mutable state, that state will be thread-local as well.
One downside to this is that layer transformers which are already
thread-safe unnecessarily incur some overhead, both the ThreadLocal
lookup itself and the memory for each new transformer instance created.
I'm not sure how much overhead that really is or whether it matters,
but it may be something to consider.
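To make that concrete, here's a rough sketch of the kind of thing I mean,
in Scala. The names here (LayerTransformer, PerThreadTransformers) are
hypothetical, not the actual API:

    // hypothetical stand-in for the real layer transformer type
    trait LayerTransformer {
      def wrapLayerDecoder(in: java.io.InputStream): java.io.InputStream
    }

    // each thread lazily constructs its own instance, so any mutable
    // state the transformer builds up is confined to that thread
    final class PerThreadTransformers(mkTransformer: () => LayerTransformer) {
      private val perThread: ThreadLocal[LayerTransformer] =
        ThreadLocal.withInitial(() => mkTransformer())
      def get(): LayerTransformer = perThread.get()
    }

The overhead I'm worried about is the per-thread construction plus the
ThreadLocal lookup on every get().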
I also wonder how other systems handle this kind of thing. For example,
I'm somewhat familiar with Apache NiFi and I don't think it uses
ThreadLocals or allocates multiple instances of Processors. And I don't
think it does anything to actually ensure that Processors are thread-safe.
I think NiFi just documents that XYZ functions must be thread safe, and
it's up to Processor implementations to not do anything like store
state. And if they do need to store state or something, then they need
to use ThreadLocals or some thread-safe data structure for that state.
And in fact this is what the Daffodil NiFi processor does. It has a
mutable cache of compiled parsers so it doesn't need to recompile them
all the time, and that cache is kept in a thread-safe data structure.
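The shape of that pattern is roughly this (a hypothetical sketch, not the
processor's actual code; error checking omitted):

    import java.util.concurrent.ConcurrentHashMap
    import org.apache.daffodil.japi.{ Daffodil, DataProcessor }

    // compile each schema at most once and share the resulting
    // DataProcessor across threads; the concurrent map provides the
    // thread safety
    object CompiledParserCache {
      private val cache = new ConcurrentHashMap[String, DataProcessor]()

      def get(schemaPath: String): DataProcessor =
        cache.computeIfAbsent(schemaPath, path => {
          val pf = Daffodil.compiler().compileFile(new java.io.File(path))
          pf.onPath("/")
        })
    }

The compiled DataProcessor itself is immutable and safe to share; only the
cache around it needs to be thread-safe.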
On 2023-08-16 12:39 PM, Mike Beckerle wrote:
In the CRC32 layer, the "optimization" was really just laziness. The use
case we have only needs constant, known, explicit lengths, so we wanted to
hard-wire that and not bother to support or test any other cases.
But had that worked, I would bet we would have made the mistake of
pre-allocating and saving a buffer of that size, resulting in
non-thread-safe code.
But it turns out there's a base class that handles all this for us, and we
also get the ability for the explicit length to be computed from an
expression, for free.
I'm thinking the layer framework needs to allocate the layer transformer
instance in a thread-local manner, so that if it constructs mutable state,
that state will be thread-local also. The natural thing to do is to look
at the layerLength information and allocate a buffer in a member of the
class. It's just the first and most obvious thing to do, and it's dead
wrong for thread safety.
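To illustrate with hypothetical classes (not real layer code; short reads
ignored for brevity), the difference is:

    // the "obvious" but thread-unsafe version: one buffer, sized at
    // construction, shared by every thread using this instance
    class SharedBufferLayer(layerLength: Int) {
      private val buf = new Array[Byte](layerLength) // shared mutable state
      def readLayer(in: java.io.InputStream): Array[Byte] = {
        in.read(buf, 0, layerLength) // two threads can clobber each other here
        buf
      }
    }

    // allocating per call keeps the buffer private to each invocation
    class PerCallBufferLayer {
      def readLayer(in: java.io.InputStream, layerLength: Int): Array[Byte] = {
        val buf = new Array[Byte](layerLength)
        in.read(buf, 0, layerLength)
        buf
      }
    }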
On Wed, Aug 16, 2023 at 9:48 AM Steve Lawrence <[email protected]> wrote:
I don't remember why NoCacheEvaluatable was used, but I suspect
you're right that it was done so implementations are less likely to mess
things up. Note that NoCacheEvaluatable is also used for
ImplicitLengthEv. I'm not sure why it's used there either. Maybe
LayerLengthEv just copied from that?
Note that using NoCacheEvaluatable still doesn't really ensure
thread-safe implementations. An implementation could just as easily
allocate a buffer based on the first layer length it gets and then
store that buffer for reuse later (with some logic to grow it if a
future layer length is bigger). That wouldn't be thread safe regardless
of whether caching is disabled.
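For example, something like this hypothetical sketch would still compile
and "work", but is racy as soon as two threads share the instance:

    // lazily allocate from the first length seen, grow if a later one
    // is bigger -- NoCacheEvaluatable does nothing to prevent this
    class ReusedBufferLayer {
      private var buf: Array[Byte] = Array.emptyByteArray
      def readLayer(in: java.io.InputStream, layerLength: Int): Array[Byte] = {
        if (buf.length < layerLength)
          buf = new Array[Byte](layerLength) // grow on demand, then keep it
        in.read(buf, 0, layerLength)         // racy: buf is shared across threads
        buf
      }
    }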
So I don't think we should be relying on NoCacheEvaluatable to ensure
thread-safe implementations. API changes seem reasonable if we want to
enforce that, or maybe we just need to improve the documentation about
what parts of the layer API are thread-safe and when things can and
cannot be shared.
That said, I'm curious what optimizations the CRC32 layer is trying to
do with a constant layer length?
On 2023-08-15 05:49 PM, Mike Beckerle wrote:
Lola Kilo and I were debugging a layer for CRC32 and found that the
LayerLengthEv is a NoCacheEvaluatable, so even if the schema gives a
constant like dfdlx:layerLength="1240", layerLengthEv.optConstant is
None. That is because it is a NoCacheEvaluatable, which insists it is
not constant in order to prevent constant optimization. The result is
that the LayerLengthEv is evaluated at runtime every time.
I am not sure I understand the entire rationale for LayerLengthEv to be a
NoCacheEvaluatable.
I can think of one reason why: so that people can't allocate mutable
things based on that length and store them in the layer transformer class
instance. That instance is a shared runtime structure, so storing mutable
state there wouldn't be thread safe. If they are forced to evaluate the
layerLengthEv every time, then their code will naturally allocate any data
structure based on the length at runtime rather than at layer compile time.
In other words, it's a hack to trick people into being more likely to
write
thread-safe code.
Based on this (and other experience with it) I think the layer API is in
serious need of revisiting.
We really should NOT just cast existing practice into JAPI/SAPI, but
should rethink it so that, for example, a thread-local is created from
the layer transformer instance.
Comments?