> >>> -  if (st->buffer.maxval - st->early_change == st->dict.size)
> >>> +  if (st->buffer.maxval - st->early_change + 2 == st->dict.size)
> >>>      {
> >>>        if (!lzw_buffer_inc_bitsize (&st->buffer))
> >>>          {
> >
> > I found a very good explanation for a "+1" at this point (a second
> > +1 is of course still missing).
>
> I have found the second +1
>
> It is caused by mixing up st->dict.size with table entry numbers:
> ISO-32000 says: "The first output code that is 10 bits long shall be
> the one following the creation of table entry 511"
> (that is the entry numbered 511, not dict.size == 511)
>
> If we create a table entry in function lzw_dict_add with:
>     d->table[d->size++] = *s;
>
> dict.size is already 511 after creating entry 510. So we have to test
> against st->dict.size - 1 (or add 1 to maxval).
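
To convince myself of the arithmetic, I put together a small toy
simulation (an untested sketch, not the real filter code; it ignores
CLEAR-code resets and the 12-bit cap, and the names just mirror the
snippets above):

#include <stdio.h>

int
main (void)
{
  int early_change = 1;              /* PDF EarlyChange default       */
  int bitsize = 9;
  int maxval = (1 << bitsize) - 1;   /* largest 9-bit code: 511       */
  int size = 258;                    /* 256 literals + CLEAR + EOD    */

  while (size < 2100)
    {
      int entry = size;              /* index of the entry created    */
      size++;                        /* d->table[d->size++] = *s;     */

      /* The patched test; algebraically the same as comparing
         maxval - early_change + 1 against size - 1, i.e. against
         the entry number instead of dict.size.                       */
      if (maxval - early_change + 2 == size)
        {
          printf ("entry %d created -> next code is %d bits\n",
                  entry, bitsize + 1);
          bitsize++;
          maxval = (1 << bitsize) - 1;
        }
    }
  return 0;
}

With early_change = 1 this prints transitions after entries 511, 1023
and 2047, matching the sentence quoted from ISO-32000; with
early_change = 0 they move to 512, 1024 and 2048.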
If that is the case, then it's probably better to test against
dict.size - 1.

As you probably know already, digesting this algorithm is pretty
tough :-) It would be great if you could document all these findings
directly in the source code, so that one can follow both the flow and
the comments when reading it.

Cheers!

--
Aleksander
