For example, for a truncation from 64 to 16 bits, you should still set the destination type to a 32-bit integer, I think?
Truncation from 64-bit is not a `NOP`.
Minimum precision is a bit messy in DXIL. While FXC emits 32-bit types and flags them as requiring only 16 bits of precision, DXC emits them as 16-bit types which are allowed to be implemented as 32-bit. Transparently remapping them is apparently acceptable. We must remap them in SPIR-V, since minimum-precision types work much the same there as in TPF.
`(uint32)(uint16)x == x` won't compile without SM 6.2 and `-enable-16bit-types`, and with `min16uint` it is allowed to always be `true`.
The one problem with the DXIL scheme is the lack of signedness. If I use a negative constant, e.g. `min16int i = -1`, it appears in DXIL as `0xffff` with no signedness information, so there's no way to tell whether it should be sign-extended when we implement it in 32 bits. Windows drivers apparently always sign-extend, which can break shaders written for SM < 6.