On Tue, Feb 11, 2014 at 02:17:04PM +0100, Richard Biener wrote:
> > this is an interesting regression from GCC 4.5 present on all active
> > branches and triggered by recent SRA improvements.  For the attached
> > testcase, we have an unchecked conversion of a 3-byte slice of an
> > array of 4 bytes to a record type containing a 3-byte bit-field; an
> > unchecked conversion in this case directly translates into a
> > VIEW_CONVERT_EXPR.  Then SRA scalarizes the bit-field and turns the
> > original VCE into a VCE of the 3-byte slice to the bit-field type,
> > which is a 32-bit integer with precision 24.
> >
> > But the expansion of VCE isn't prepared for this and generates a full
> > 32-bit read, which reads 1 additional byte and doesn't mask it
> > afterwards, resulting in a wrong value for the scalarized bit-field.
> >
> > Proposed fix attached, tested on x86-64/Linux, OK for the mainline?
>
> Hmm.  The intent was of course to only allow truly no-op converts via
> VIEW_CONVERT_EXPR - that is, the size of the operand type and the
> result type should be the same.  So isn't SRA doing it wrong when
> creating the VIEW_CONVERT_EXPR of a 3-byte type to a 4-byte type?
>
> The verification we do in tree-cfg.c:verify_types_in_gimple_reference
> hints that we _do_ have even grosser mismatches - so reality may trump
> the desired design here.
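To make the failure mode above concrete, here is a minimal C model of
the wrong code (the actual testcase is Ada and not reproduced here;
values and byte order are illustrative, little-endian assumed):

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  int main (void)
  {
    uint8_t a[4] = { 0x11, 0x22, 0x33, 0xff };  /* 4-byte array */
    uint32_t v;

    /* Buggy expansion: a full 32-bit read of the 3-byte slice
       a[0..2], which picks up one byte beyond the slice and is not
       masked afterwards.  */
    memcpy (&v, a, 4);
    printf ("unmasked: 0x%08x\n", v);            /* 0xff332211 */

    /* Correct value for a 32-bit integer with precision 24: the byte
       outside the slice must be masked off.  */
    printf ("masked:   0x%08x\n", v & 0xffffff); /* 0x00332211 */
    return 0;
  }
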
I thought we only allow VCE if the bitsize of both types is the same.
If you have different bitsizes, then supposedly a VCE to an integer of
the same bitsize, followed by zero/sign extension or truncation,
followed by another VCE should be used.

	Jakub
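For illustration, a C sketch of that lowering for this case, with the
3-byte slice as the source and the 3-byte bit-field as the destination
(C has no 24-bit integer type, so the same-bitsize VCE is modeled by a
memcpy into the low bytes of a 32-bit temporary, little-endian assumed;
the struct and names are made up, not GCC internals):

  #include <stdint.h>
  #include <string.h>

  struct rec { uint32_t f : 24; };  /* record with a 3-byte bit-field */

  static struct rec
  convert_slice (const uint8_t slice[3])
  {
    /* Step 1: VCE of the 3-byte slice to a same-bitsize (24-bit)
       integer, modeled by copying the 3 bytes into the low part of a
       32-bit temporary.  */
    uint32_t tmp = 0;
    memcpy (&tmp, slice, 3);

    /* Step 2: zero extension of the 24-bit value to 32 bits - the
       explicit masking step the buggy VCE expansion omitted.  */
    tmp &= 0xffffff;

    /* Step 3: another VCE to the destination type, here simply a
       store into the bit-field.  */
    struct rec r = { tmp };
    return r;
  }

The point of the three-step form is that the extension/truncation is
explicit in the IL, instead of being left implicit in the expansion of
a size-changing VCE.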