Hello all.

   Sorry, this is going to be a bit long.

   It took me some time to process the input, but I am still
a bit uncomfortable with this:

> John Larmouth wrote:
> <...>
> However, if you do not use version brackets, then each
> extension addition becomes a single addition group
> (implicitly in its own version brackets), and tail-end
> additions can then be omitted, because you can "pretend"
> you are an earlier version.

   The problem is in the definition of the word "omit".

   With PER encoding, once we have encoded all root components,
we still need to encode:
- the addition bitmap length as a small non-negative number
- the addition bitmap itself
- each addition marked present in the above bitmap as an open
type (each ExtensionAdditionGroup considered as a SequenceType
and then encoded as an open type)
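   (To fix ideas, here is a toy Python model of those three steps.
This is only the logical structure, not real PER bit packing or
alignment, and the names are mine:)

```python
# Toy model of the PER extension-addition part described above:
# NOT real PER bit packing, just the logical structure.

def encode_extension_part(additions):
    """additions: list of (present, open_type_bytes) pairs, one per
    ExtensionAddition / ExtensionAdditionGroup."""
    bitmap = [1 if present else 0 for present, _ in additions]
    parts = [
        ("bitmap-length", len(bitmap)),  # small non-negative number
        ("bitmap", bitmap),              # one bit per addition
    ]
    # each addition marked present in the bitmap is then encoded
    # as an open type
    for (_, enc), bit in zip(additions, bitmap):
        if bit:
            parts.append(("open-type", enc))
    return parts
```

   (So `encode_extension_part([(True, b"A"), (False, None),
(True, b"C")])` produces a length of 3, the bitmap [1, 0, 1], and
two open types.)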

   Does "omit" mean "truncate the bitmap (and bitmap length, of
course) and do not encode the corresponding open types" or does
it mean "set each corresponding bit of the bitmap to 0 and do not
encode the corresponding open types"?
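   (For concreteness, the two readings differ like this, with the
bitmap modelled as a plain list of bits and the helper names being
mine:)

```python
def omit_by_truncation(bitmap, pretended_len):
    """First reading: shorten the bitmap (and its encoded length)
    and do not encode the corresponding open types."""
    return bitmap[:pretended_len]

def omit_by_zeroing(bitmap, pretended_len):
    """Second reading: keep the full bitmap length, but clear every
    bit beyond the pretended version (again, no open types for
    the cleared bits)."""
    return [bit if i < pretended_len else 0
            for i, bit in enumerate(bitmap)]
```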

   The first definition implies that mandatory ExtensionAdditions
always have their bit in the bitmap set to 1, and it leads to no
decoding problem (and you can skip the rest of the message ;) ...
but please do tell me if it was the first definition).

   However, the second definition (and from my previous question
it seems that this is what Mr. Larmouth meant) raises more
difficulties.
   The problem in that case is that pretending to be an earlier
version leads to rather strange decoding behaviours when combined
with optional and default components.
   Consider the following extension of a SequenceType:
>  ...,
>  toto   Toto   OPTIONAL,      --v1
>  [[
>  tata   Tata   OPTIONAL,
>  titi   Titi   DEFAULT
>  ]],                          --v2
>  tutu   Tutu                  --v3
>  }
   In this simple example, suppose the encoder pretends to be
v2. We know that it is pretending, because the addition bitmap is
still 3 bits long. So, is it pretending to be v2, v1 or v0? In
some cases (absent optionals, defaults carrying their default
values) there is no way to tell. Hence there is no way to know
whether "titi" is to be considered not-known-by-the-encoder or
should take its default value, and this may impose unneeded
strain on the application which uses the decoder, for this
information may be relevant to it.
   (In addition to this, there are now numerous ways to encode
the same value with the same "faked version", depending on the
version of the encoder.)
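   (To make the ambiguity concrete, here is a toy decoder-side
check for the example above, with one bit each for toto, the v2
group and tutu; the helper name is mine:)

```python
def plausible_pretended_versions(bitmap):
    """Under the 'zero the bits' reading, return every version the
    encoder could be pretending to be, given the 3-bit addition
    bitmap (toto, [[tata, titi]], tutu)."""
    toto, group, tutu = bitmap
    if tutu:
        return ["v3"]          # tutu present: a genuine v3 encoding
    versions = ["v2"]          # tutu zeroed: at least v2 is plausible
    if not group:
        versions.append("v1")  # group zeroed too: v1 is plausible
        if not toto:
            versions.append("v0")
    return versions
```

   (An all-zero bitmap is compatible with pretending v2, v1 or v0
at once, which is exactly the problem.)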

   Wouldn't it be preferable to force the bit corresponding to a
mandatory ExtensionAddition to 1 in the bitmap? (The bit is still
needed by previous versions that do not know anything about this
field.)
   It would still be possible to pretend to be an earlier
version, but then the encoder would have to encode the whole
extension list accordingly (that is, with the right bitmap size,
and no 0 bit for mandatory ExtensionAdditions, since a real
earlier version would never have done that).
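   (The proposed rule could be checked mechanically. A sketch for
the example above, names mine: a pretending encoder truncates the
bitmap, and a decoder rejects a 0 bit on a mandatory addition:)

```python
# One flag per addition bit in the example:
# toto is OPTIONAL, the v2 group holds only optional/default
# components, tutu is mandatory.
MANDATORY = [False, False, True]

def valid_under_proposal(bitmap):
    """A (possibly truncated) bitmap is valid only if every
    mandatory ExtensionAddition it still covers has its bit set
    to 1."""
    return all(bit == 1
               for bit, mandatory in zip(bitmap, MANDATORY)
               if mandatory)
```

   (So a 3-bit bitmap with tutu's bit at 0 is rejected, while a
2-bit bitmap, i.e. honestly pretending to be v2, is accepted.)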

   What do you think of this all?

   Thanks a lot,

   Benoit Poste.
