On Fri Feb 23 20:38:59 2024 +0000, Zebediah Figura wrote:
What if the ambiguous int type is actually an ambiguous **number**
type -- meaning that it also covers floats and doubles -- and is represented internally as a double? That would explain why there is no loss of accuracy for that value. On the other hand, that seems unlikely given that overflowing the bounds always yields INT_MIN; e.g. "return (2147483647 + 1) - 1;" yields INT_MIN. Note also that float literals have their own "ambiguous" type: 1.0f is float, 1.0h is half, and 1.0 is ambiguous.
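To make the two hypotheses concrete (this is a Python sketch of the arithmetic, not HLSL; 32-bit wrap-around is just one plausible int-backed model, and the compiler's actual overflow behavior may differ):

```python
# Doubles represent every integer up to 2**53 exactly, so if the
# "ambiguous number" literal were backed by a double, there would be
# no loss of accuracy at 2**31 and the expression would round-trip:
x = (2147483647.0 + 1.0) - 1.0
assert x == 2147483647.0  # exact, no INT_MIN

# By contrast, a 32-bit two's-complement int model (hypothetical
# wrap-around shown here) already diverges at the first addition:
wrap = ((2147483647 + 1 + 2**31) % 2**32) - 2**31
assert wrap == -2147483648  # INT_MIN
```

Since the observed result is INT_MIN rather than 2147483647, the double-backed hypothesis looks doubtful.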
Similarly, (3 / 2) * 3 yields 3, not 4, which I think pretty clearly proves the value is converted to integer, at the very least after every step.
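The division example can be sketched the same way (again in Python, purely as an illustration of the two arithmetic models):

```python
# If each step is integer arithmetic, the division truncates first,
# so (3 / 2) * 3 == 1 * 3 == 3:
int_result = (3 // 2) * 3   # Python's // models truncating int division here
assert int_result == 3

# If the intermediate were kept as a double, it would be 1.5 * 3 == 4.5,
# which would round or truncate to 4 on final conversion:
dbl_result = (3 / 2) * 3
assert dbl_result == 4.5
```

The observed result 3 is only consistent with truncation happening at each step.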