On Fri Feb 9 14:00:44 2024 +0000, Matteo Bruni wrote:
So, what's happening is that UINT_MAX can't be represented exactly as a float, and the closest representable value happens to be UINT_MAX + 1? If so, I'd expect the same to happen in the implementation (including native), and I'd rather have the test lean the same way, e.g. by explicitly casting UINT_MAX to float instead.
Yes, float can't represent UINT_MAX exactly, so the cast really yields `UINT_MAX + 1`. I pushed a version with an explicit cast, thanks.
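For reference, a minimal standalone C snippet (not part of the patch) showing the rounding behaviour discussed above:

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* UINT_MAX (0xffffffff) needs 32 significant bits, but a float
     * mantissa only holds 24, so the cast rounds to the nearest
     * representable value: 2^32, i.e. UINT_MAX + 1. */
    float f = (float)UINT_MAX;

    printf("UINT_MAX        = %u\n", UINT_MAX);
    printf("(float)UINT_MAX = %.1f\n", (double)f);        /* 4294967296.0 */
    printf("equal to 2^32?    %d\n", f == 4294967296.0f); /* 1 */
    return 0;
}
```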