I did some additional testing, and it's more complicated than that.
I think I alluded to this elsewhere, but the native compiler has a concept of an "ambiguous" integer type, which is neither int nor uint. You can tell this by doing something like
```
float func(int x) { return 1; }
float func(uint x) { return 2; }

float4 main() : SV_TARGET { return func(1); }
```
Ambiguous ints are automatically promoted when they appear in an expression with other types, and can of course be implicitly cast, but an expression consisting only of ambiguous ints is still ambiguous.
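To illustrate, a minimal sketch using the func() overloads above (the helper names are mine; the comments just restate the behaviour described here):

```
/* Mixing an ambiguous int with a uint promotes the whole expression,
 * so this call resolves to the uint overload. */
float resolved() { return func(1 + 1u); }

/* An expression built only from ambiguous ints is still ambiguous,
 * so this call fails to resolve, just like func(1) above. */
float still_ambiguous() { return func(1 + 2); }
```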
Different integer constants have different types:
uint: 1u, 0x1, 01, 0x0, 4294967295u, 4294967296u, 4294967296lu
int: 3000000000, -3000000000, 2147483648, -2147483648
ambiguous: 1, 0, 00, 000, -0, -1, 2147483647, -2147483647, 4294967296l, 1 + 2, 1 / 2
empty: 4294967296
Note "empty": that last token lexes as if it were a space. Fun!
uint adds with overflow, as you'd expect. Int and ambiguous int behave like an integer type while in bounds, but an expression that would overflow (in either direction) yields INT_MIN. Note that, contrary to Francisco's hypothesis, they do *not* act like a float type: expressions like 2147483645 + 1 are accurate to the nearest integer. Also, "1 / 2" equals zero even when converted to float.
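A sketch of the kind of expression I'm talking about; the comments just restate the observations above:

```
float4 main() : SV_TARGET
{
    /* The addition itself is integer-accurate: it folds to 2147483646,
     * i.e. it is not evaluated in float precision. */
    int a = 2147483645 + 1;
    /* An expression that would overflow (in either direction) yields
     * INT_MIN instead. */
    int b = 2147483647 + 1;
    /* "1 / 2" is zero even when the result is converted to float. */
    float c = 1 / 2;
    return float4(a, b, c, 0);
}
```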
Of course the rules are different for sm6; I haven't checked those yet.