Casts from minimum precision types are emitted as nop, but the result value type must be set to the cast result type.
From: Conor McCarthy <cmccarthy@codeweavers.com>
Casts from minimum precision types are emitted as nop, but the result value type must be set to the cast result type.
---
 libs/vkd3d-shader/dxil.c | 3 +++
 1 file changed, 3 insertions(+)
diff --git a/libs/vkd3d-shader/dxil.c b/libs/vkd3d-shader/dxil.c
index beb9ae574..218a2e22f 100644
--- a/libs/vkd3d-shader/dxil.c
+++ b/libs/vkd3d-shader/dxil.c
@@ -3700,6 +3700,9 @@ static void sm6_parser_emit_cast(struct sm6_parser *sm6, const struct dxil_recor
     if (handler_idx == VKD3DSIH_NOP)
     {
         dst->u.reg = value->u.reg;
+        /* Set the result type for casts from 16-bit min precision. */
+        if (type->u.width != 16)
+            dst->u.reg.data_type = vkd3d_data_type_from_sm6_type(type);
         return;
     }
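The pattern the patch applies is small but easy to miss in the diff: the nop path copies the source register wholesale, so the destination inherits the source's data type unless it is overridden explicitly. Below is a minimal standalone C sketch of that pattern; the names (`struct reg`, `enum data_type`, `cast_as_nop`) are hypothetical and not vkd3d's actual definitions.

```c
#include <stdio.h>

enum data_type { TYPE_UINT16, TYPE_UINT32 };

struct reg
{
    unsigned int id;
    enum data_type data_type;
};

/* Emit a cast that needs no instruction: reuse the source register, but
 * give the result the cast's target type, analogous to setting
 * dst->u.reg.data_type after dst->u.reg = value->u.reg. */
static struct reg cast_as_nop(struct reg src, enum data_type dst_type)
{
    struct reg dst = src;      /* register copy: without the next line, the
                                * destination would keep the source's type */
    dst.data_type = dst_type;
    return dst;
}

int main(void)
{
    struct reg r16 = { .id = 7, .data_type = TYPE_UINT16 };
    struct reg r32 = cast_as_nop(r16, TYPE_UINT32);

    printf("register %u: type %d -> type %d\n", r32.id,
            (int)r16.data_type, (int)r32.data_type);
    return 0;
}
```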
Giovanni Mascellani (@giomasce) commented about libs/vkd3d-shader/dxil.c:
     if (handler_idx == VKD3DSIH_NOP)
     {
         dst->u.reg = value->u.reg;
+        /* Set the result type for casts from 16-bit min precision. */
+        if (type->u.width != 16)
Shouldn't you set the type even when `type->u.width == 16`? For example, for a truncation from 64 to 16 bits, you should still set the destination type to a 32-bit integer, I think?
More generally, I am a bit confused by how you handle 16-bit types. Looking at `sm6_map_cast_op()`, it seems that you're just transparently remapping 16-bit types to their corresponding 32-bit types (either floating point or integer). For example, it seems that for a `uint32 x` the expression `(uint32)(uint16)x == x` always holds, which is of course not the case if proper 16-bit integers are implemented (even if stored in half of a 32-bit register). The same happens for 16-bit floating point values. Wouldn't it be advisable to just reject any program using 16-bit types until we have proper support? Are there programs using them?
> For example, for a truncation from 64 to 16 bits, you should still set the destination type to a 32-bit integer, I think?
Truncation from 64-bit is not a `NOP`.
Minimum precision is a bit messy in DXIL. While FXC emits 32-bit types and flags them as only requiring 16 bits of precision, DXC emits them as 16-bit types, which are allowed to be implemented as 32-bit. Transparently remapping them is apparently acceptable. We must remap them in SPIR-V, since they work much the same there as in TPF.
`(uint32)(uint16)x == x` won't compile without SM 6.2 and `-enable-16bit-types`, and with `min16uint` it is allowed to always be `true`.
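To illustrate what that remapping implies (a standalone C sketch, not vkd3d code): if a min-precision "16-bit" value actually occupies a full 32-bit register and the truncating cast emits no instruction, the round trip preserves all 32 bits, which `min16uint` semantics permit but true 16-bit types would not.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t x = 0x12345678u;

    /* Proper 16-bit semantics: truncation drops the upper bits. */
    uint32_t strict = (uint32_t)(uint16_t)x;  /* 0x00005678 */

    /* Min-precision-as-32-bit semantics: the cast is a nop, so the
     * value is unchanged. */
    uint32_t remapped = x;                    /* 0x12345678 */

    printf("strict 16-bit semantics:  %#010x\n", strict);
    printf("min-precision remapping:  %#010x\n", remapped);
    printf("(uint32)(uint16)x == x:   %s\n", remapped == x ? "true" : "false");
    return 0;
}
```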
The one problem with the DXIL scheme is the lack of signedness. If I use a negative constant, e.g. `min16int i = -1`, it appears in DXIL as 0xffff without signedness, so there's no way to tell if it should be sign-extended when we implement it in 32 bits. Windows drivers apparently always sign-extend, which can break shaders written for SM < 6.
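A small standalone C sketch of the ambiguity described above (illustrative only, and assuming two's complement): the raw constant 0xffff yields different 32-bit values depending on whether the implementation sign-extends or zero-extends.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t raw = 0xffff;  /* e.g. min16int i = -1, as it appears in DXIL */

    int32_t sign_extended = (int32_t)(int16_t)raw;  /* -1: what Windows drivers
                                                     * apparently do */
    int32_t zero_extended = (int32_t)raw;           /* 65535 */

    printf("sign-extended: %d\n", sign_extended);
    printf("zero-extended: %d\n", zero_extended);
    return 0;
}
```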
> Truncation from 64-bit is not a `NOP`.
Oh, right, I got confused.
> Minimum precision is a bit messy in DXIL. While FXC emits 32-bit types and flags them as only requiring 16 bits of precision, DXC emits them as 16-bit types, which are allowed to be implemented as 32-bit. Transparently remapping them is apparently acceptable. We must remap them in SPIR-V, since they work much the same there as in TPF.
Ok, thanks for the explanation.
This merge request was approved by Giovanni Mascellani.
This merge request was approved by Henri Verbeet.