rjmccall added inline comments.

================
Comment at: clang/lib/Sema/SemaExpr.cpp:1304
+    RHSTy = ResultTy;
+  }
+
----------------
ebevhan wrote:
> rjmccall wrote:
> > leonardchan wrote:
> > > rjmccall wrote:
> > > > ebevhan wrote:
> > > > > rjmccall wrote:
> > > > > > Hmm.  So adding a signed integer to an unsigned fixed-point type 
> > > > > > always converts the integer to unsigned?  That's a little weird, 
> > > > > > but only because the fixed-point rule seems to be the other way.  
> > > > > > Anyway, I assume it's not a bug in the spec.
> > > > > `handleFixedPointConversion` only calculates the result type of the 
> > > > > expression, not the calculation type. The final result type of the 
> > > > > operation will be the unsigned fixed-point type, but the calculation 
> > > > > itself will be done in a signed type with enough precision to 
> > > > > represent both the signed integer and the unsigned fixed-point type. 
> > > > > 
> > > > > Though, if the result ends up being negative, the final result is 
> > > > > undefined unless the type is saturating. I don't think there is a 
> > > > > test for the saturating case (signed integer + unsigned saturating 
> > > > > fixed-point) in the SaturatedAddition tests.
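> > > > > 
> > > > > Something like this is the case I mean (just a sketch, assuming Clang's 
> > > > > `-ffixed-point` mode and the TR 18037 `uhr` literal suffix):
> > > > > 
> > > > > ```
> > > > > _Sat unsigned short _Fract f = 0.5uhr;
> > > > > int i = -1;
> > > > > // computed as -0.5 in the wide signed intermediate, then
> > > > > // saturated to 0.0uhr on conversion to the unsigned result
> > > > > f = i + f;
> > > > > ```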
> > > > > `handleFixedPointConversion` only calculates the result type of the 
> > > > > expression, not the calculation type.
> > > > 
> > > > Right, I understand that, but the result type of the expression 
> > > > obviously impacts the expressive range of the result, since you can end 
> > > > up with a negative value.
> > > > 
> > > > Hmm.  With that said, if the general principle is to perform the 
> > > > operation with full precision on the original operand values, why are 
> > > > unsigned fixed-point operands coerced to their corresponding signed 
> > > > types *before* the operation?  This is precision-limiting if the 
> > > > unsigned representation doesn't use a padding bit.  That seems like a 
> > > > bug in the spec.
> > > > Hmm. With that said, if the general principle is to perform the 
> > > > operation with full precision on the original operand values, why are 
> > > > unsigned fixed-point operands coerced to their corresponding signed 
> > > > types *before* the operation? This is precision-limiting if the 
> > > > unsigned representation doesn't use a padding bit. That seems like a 
> > > > bug in the spec.
> > > 
> > > Possibly. When the standard mentions converting between signed and unsigned 
> > > fixed point, the only requirement involved is conservation of magnitude 
> > > (the number of integral bits, excluding the sign):
> > > 
> > > ```
> > > when signed and unsigned fixed-point types are mixed, the unsigned type 
> > > is converted to the corresponding signed type, and this should go without 
> > > loss of magnitude
> > > ```
> > > 
> > > This is just speculation, but I'm under the impression that not as much 
> > > "attention" was given to unsigned types as to signed types, since "`In 
> > > the embedded processor world, support for unsigned fixed-point data types 
> > > is rare; normally only signed fixed-point data types are supported`", but 
> > > they were kept anyway since unsigned types are widely used.
> > > 
> > > ```
> > > However, to disallow unsigned fixed-point arithmetic from programming 
> > > languages (in general, and from C in particular) based on this 
> > > observation, seems overly restrictive.
> > > ```
> > > 
> > > I also imagine that if the programmer needs more precision, the correct 
> > > approach would be to cast up to a type with a higher scale. The standard 
> > > seems to make an effort to expose as much about the underlying fixed-point 
> > > types as possible:
> > > 
> > > ```
> > > it should be possible to write fixed-point algorithms that are 
> > > independent of the actual fixed-point hardware support. This implies that 
> > > a programmer (or a running program) should have access to all parameters 
> > > that define the behavior of the underlying hardware (in other words: even 
> > > if these parameters are implementation-defined).
> > > ```
> > > 
> > > So the user would at least know that unsigned types may not have the same 
> > > scale as their corresponding signed types if the hardware handles them 
> > > with different scales.
> > > 
> > > Also added a test for signed integer + unsigned saturating fixed-point.
> > As long as we maintain the same typing behavior, does the standard permit 
> > us to just Do The Right Thing here and do the extended arithmetic with the 
> > original unsigned operand?  I'm sure there are some cases where we would 
> > produce a slightly different value than an implementation that does the 
> > coercion before the operation, but that might be permitted under the 
> > standard, and as you say, it would only affect situations that the 
> > standard doesn't seem to have given much thought to.
> The coercion of unsigned to signed is likely done for pragmatic reasons; if 
> you have a signed and unsigned type of the same rank, operating on them 
> together won't require any 'extra bits'. This means that if your hardware has 
> native support for the operations, you won't end up in a situation where you 
> really need 33 or 34 bits (for a 32 bit value) to actually perform the 
> operation. This is one of the benefits of these kinds of implicit conversions.
> 
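> (As a concrete sketch, with assumed layouts: take a 32-bit signed _Accum 
> stored as s16.15 and an unpadded 32-bit unsigned _Accum stored as 16.16. 
> Operating on them at full precision needs 16 fractional bits plus 17 
> integral bits plus a sign, i.e. 34 bits, so an emulation has to fall back 
> to wider arithmetic:)
> 
> ```
> /* full-precision mixed add at a common scale of 2^16 */
> static long long add_full_precision(int a_s16_15, unsigned b_16_16) {
>     long long wa = (long long)a_s16_15 * 2; /* rescale s16.15 -> s16.16 */
>     long long wb = (long long)b_16_16;      /* already at scale 2^16 */
>     return wa + wb;                         /* s17.16: needs 34 bits */
> }
> ```
> 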
> It probably makes the implementation a bit easier as well, since you don't 
> have to consider cases where the two sides have different signedness. The loss 
> of the extra bit of precision from the unsigned operand is probably not 
> considered very important. As Leonard said, if you wanted higher 
> precision you could simply use the next rank of types. ...Of course, they seem 
> to have forgotten that that exact issue with differing signedness also 
> applies to operations with fixed-point and integer operands.
> 
> Personally, I think the spec could be a bit more restrictive in how it 
> handles these conversions, as the way they work now makes it really hard to 
> generate code sensibly for machines that have native support for these kinds 
> of operations; in many cases you'll end up needing a couple of bits too many 
> or a couple of bits too few to fit in a sensible register size, and ensuring 
> that the scales match what your hardware can handle is also a challenge.
> The coercion of unsigned to signed is likely done for pragmatic reasons; if 
> you have a signed and unsigned type of the same rank, operating on them 
> together won't require any 'extra bits'. This means that if your hardware has 
> native support for the operations, you won't end up in a situation where you 
> really need 33 or 34 bits (for a 32 bit value) to actually perform the 
> operation. This is one of the benefits of these kinds of implicit conversions.

Yeah, I can see that.  I think it really depends on the target.  If you have 
hardware support, and you're using an unpadded representation for unsigned 
fixed-point values, then preserving the extra bit on mixed-signedness 
operations will sometimes prevent you from using the hardware optimally.  On 
the other hand, if you don't have hardware support, preserving the extra bit is 
usually easier than not: e.g., IIUC, multiplying a 32-bit signed _Fract with a 
32-bit unpadded unsigned _Fract means just doing a 64-bit multiplication and 
dropping the low 32 bits, whereas doing the conversion first actually requires 
extra operations to ensure that the low bit doesn't contribute to the result.  
And a target with hardware support is probably going to use a padded format for 
unsigned fixed-point precisely so that it can take advantage of the hardware.
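
Concretely (a sketch, not necessarily what the patch would emit; layouts 
assumed: a signed _Fract stored as s.31 with scale 2^31, an unpadded 
unsigned _Fract stored as .32 with scale 2^32):

```
#include <stdint.h>

/* Full precision: (a / 2^31) * (b / 2^32) = (a * b) / 2^63. The raw
 * product fits in int64_t since |a| <= 2^31 and b < 2^32; rescaling
 * to the signed result's 2^31 scale just drops the low 32 bits.
 * (Assumes arithmetic right shift for negative values.) */
static int32_t mul_full_precision(int32_t a_s31, uint32_t b_u32) {
  int64_t prod = (int64_t)a_s31 * (int64_t)(uint64_t)b_u32;
  return (int32_t)(prod >> 32);
}

/* Converting the unsigned operand to signed first has to shift out
 * its lowest fractional bit before the multiply: an extra operation,
 * and a loss of precision when there is no padding bit. */
static int32_t mul_convert_first(int32_t a_s31, uint32_t b_u32) {
  int32_t b_s31 = (int32_t)(b_u32 >> 1);          /* .32 -> s.31 */
  int64_t prod = (int64_t)a_s31 * (int64_t)b_s31; /* scale 2^62 */
  return (int32_t)(prod >> 31);
}
```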

> ...Of course, they seem to have forgotten that that exact issue with 
> differing signedness also applies to operations with fixed-point and integer 
> operands.

Right.


Repository:
  rC Clang

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D53738/new/

https://reviews.llvm.org/D53738


