Never mind -- I think I answered my own question. Although I don't understand the Huffman algorithm well enough to know whether this is algorithmically possible, a naive analysis of the code shows that it calls PUT_BITS up to 128 times per block, and the "size" argument in each of those calls can theoretically be as high as 16. So 2048 bits = 256 bytes (twice the size of the unencoded block) appears to be the worst-case size for a Huffman-encoded block. Thus, 256 seems like the best value for BUFSIZE, to be 100% sure that this sort of thing cannot possibly happen again in the future.

On 11/22/14 2:06 PM, roucaries bastien wrote:
On Sat, Nov 22, 2014 at 6:58 PM, DRC <dcomman...@users.sourceforge.net> wrote:
I can readily reproduce the failure with the supplied test case, but what
I'm tripping on right now is understanding why a Huffman-encoded block can
grow so much larger than the size of the source block (128 bytes).  While
this test case is very unusual, there may be others out there, and I want to
understand what the worst case is for Huffman encoding.  That would
determine the appropriate value for BUFSIZE.  Generally speaking,
libjpeg-turbo only needs to use the local Huffman buffer when the buffer
supplied by the destination manager is nearly exhausted -- that is, when
libjpeg-turbo suspects that the encoded Huffman data for a given block
would overrun the destination manager's buffer.  But we don't want to make
the local Huffman buffer too big, or else it might hurt performance (since
it introduces an extra memcpy() for all of the bytes that are encoded into
the local buffer).  Hence the desire to understand exactly how big a
Huffman-encoded block can grow in theory.

Could you describe exactly what you are doing (mathematically)?

Bastien



On 11/22/14 12:43 AM, Bernhard Übelacker wrote:

Hello,
I created a minimal test case of around 200 lines.

It uses a file containing the intercepted scanlines from the calls to
jpeg_write_scanlines.

The Exif marker is also read from such a file.
(Without this Exif marker, the stack smash does not happen...)

The partial output file is byte-for-byte identical to the one generated
by ImageMagick before it crashes.

The number of calls to encode_mcu_huff and the stack also seem to be the
same.

Kind regards,
Bernhard



Place all three files in the same directory and open a shell there.
(I only set the breakpoint to see how often it is hit.)


$ bunzip2 jpeg_write_marker.bin.bz2
$ bunzip2 jpeg_write_scanlines.bin.bz2
$ gcc -g -O0 -fstack-protector-all test-768369.c -ljpeg
$ gdb --args ./a.out
...
(gdb) b encode_mcu_huff
Breakpoint 1 (encode_mcu_huff) pending.
(gdb) ignore 1 10000
Will ignore next 10000 crossings of breakpoint 1.
(gdb) run
Starting program:
/home/bernhard/data/entwicklung/2014/debian/libjpegturbo/a.out
*** stack smashing detected ***:
/home/bernhard/data/entwicklung/2014/debian/libjpegturbo/a.out terminated
...
(gdb) info break
Num     Type           Disp Enb Address            What
1       breakpoint     keep y   0x00007ffff7b8c190 in encode_mcu_huff at
jchuff.c:593
          breakpoint already hit 9842 times
          ignore next 158 hits

(gdb) bt
#0  0x00007ffff7811107 in __GI_raise (sig=sig@entry=6) at
../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x00007ffff78124e8 in __GI_abort () at abort.c:89
#2  0x00007ffff784f044 in __libc_message (do_abort=do_abort@entry=2,
fmt=fmt@entry=0x7ffff793f6ab "*** %s ***: %s terminated\n") at
../sysdeps/posix/libc_fatal.c:175
#3  0x00007ffff78d2147 in __GI___fortify_fail
(msg=msg@entry=0x7ffff793f693 "stack smashing detected") at
fortify_fail.c:31
#4  0x00007ffff78d2110 in __stack_chk_fail () at stack_chk_fail.c:28
#5  0x00007ffff7b96553 in encode_mcu_huff (cinfo=0x7fffffffdd70,
MCU_data=0x602720) at jchuff.c:641
#6  0x00007ffff7b89717 in compress_output (cinfo=0x7fffffffdd70,
input_buf=<optimized out>) at jccoefct.c:381
#7  0x00007ffff7b89006 in jpeg_finish_compress (cinfo=0x7fffffffdd70) at
jcapimin.c:183
#8  0x0000000000401196 in main () at test-768369.c:205






