What if the ambiguous int type is actually an ambiguous **number** type -- meaning that it also includes floats and doubles -- and it is represented internally as double? That would explain why there is no loss of accuracy for that value.
That seems unlikely, given that overflowing the int range always yields INT_MIN. E.g. "return (2147483647 + 1) - 1;" yields INT_MIN, whereas a double can represent the intermediate value 2147483648 exactly, so double-backed arithmetic would give back 2147483647.
Note also that floats have their own "ambiguous" type. 1.0f is float; 1.0h is half; 1.0 is ambiguous.