These counterexamples are interesting.
For the one with a sequence of optional elements, that suggests to me
that perhaps there is a stack of saved errors. Or, really, that we hang
them temporarily on the Infoset data structure.
So the error from the elem2 attempt is kept around until a later point
in the parse determines whether it needs to be reported.
That doesn't seem unreasonable to me, but here are some counterexamples
where I think this approach won't help; maybe something to consider:
1) Imagine a simple schema that parses a single byte with no points of
uncertainty, and the input data is two bytes. In this case, there
will be no saved error to explain the leftover byte.
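A minimal sketch of such a schema (element name and properties are illustrative, not from the thread): a single binary byte with implicit length, so nothing in the schema accounts for a second byte of input.

```xml
<!-- Illustrative single-byte schema: parses exactly one unsigned byte
     with no points of uncertainty. Given two bytes of input, one byte
     remains unconsumed and is reported as "left over data". -->
<xs:element name="oneByte" type="xs:unsignedByte"
            dfdl:representation="binary"
            dfdl:lengthKind="implicit"/>
```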
Not unorthodox, Larry. I just used similar ideas in a fairly large binary data
schema. I agree with you that this adds too much complexity, though.
My schema has a mode controlled by a variable called
"captureUnrecognizedData". It defaults to false, but if set to true, then data
that fails to parse gets captured instead of failing the parse.
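A sketch of how such a mode might be wired up. Apart from the variable name captureUnrecognizedData, the element name, occurs expression, and namespace prefixes here are illustrative assumptions, not the actual schema from the thread:

```xml
<!-- Externally settable flag that enables capture mode; defaults to false. -->
<dfdl:defineVariable name="captureUnrecognizedData"
                     type="xs:boolean"
                     external="true"
                     defaultValue="false"/>

<!-- Optional trailing element that is only attempted when the flag is true.
     The occursCount expression gates it on the variable's value. -->
<xs:element name="unrecognizedData" type="xs:hexBinary"
            dfdl:lengthKind="delimited"
            minOccurs="0" maxOccurs="1"
            dfdl:occursCountKind="expression"
            dfdl:occursCount="{ if ($ex:captureUnrecognizedData) then 1 else 0 }"/>
```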
In some very unorthodox use of DFDL & Daffodil, I needed to ensure that I was
able to get XML output, even from files that contained extra data after the
last piece of parsable data.
I accomplished this by adding a "dataBlob" element that consumed everything up
to the end of the file:
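A sketch of what such an element might look like (the thread does not show the actual definition, so the properties here are assumptions): a delimited hexBinary with no terminator scans to the end of the data, capturing any trailing bytes.

```xml
<!-- Illustrative trailing catch-all: a delimited hexBinary with no
     terminator consumes all remaining bytes up to end of data, so
     extra data ends up in the infoset instead of causing an error. -->
<xs:element name="dataBlob" type="xs:hexBinary"
            dfdl:lengthKind="delimited"
            minOccurs="0"/>
```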
Please comment on this idea.
The problem is that users write a schema, then get "left over data" when they
test it. The schema works. The schema is, as far as DFDL and Daffodil are
concerned, correct. It just doesn't express what you intended it to
express. It IS a correct schema, just not for your data.
My guess is you are running into DAFFODIL-2431, which was fixed in
version 3.1.0:
https://issues.apache.org/jira/browse/DAFFODIL-2431
- Steve
On 4/14/22 11:23 AM, Attila Horvath wrote:
ALCON
I suspect this unparse bug w/ Daffodil 2.4 has since been fixed. Parsing is
successful. Details as follows:
daffodil unparse --validate=on -s ... -r ... -o ...
!! An unexpected exception occurred. This is a bug! !!