Okay, I went and double-checked. Though I didn't test exhaustively, it looks like d3d9 will always convert to float, while d3d11 will simply bit-cast; at least, R32G32B32A32_UINT seems to behave the same as R32G32B32A32_FLOAT.
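For concreteness, here's the difference a shader reading the attribute as float would observe for the integer value 1. This is just a host-side C illustration of conversion vs. bit-cast, not a claim about any specific driver path:

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    uint32_t u = 1;
    float bitcast, converted;

    /* d3d11-style bit-cast: reinterpret the raw bits. */
    memcpy(&bitcast, &u, sizeof(bitcast));
    /* d3d9-style conversion: numeric cast to float. */
    converted = (float)u;

    printf("bit-cast:  %g\n", bitcast);   /* 1.4013e-45 (a denormal) */
    printf("converted: %g\n", converted); /* 1 */
    return 0;
}
```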
I feel like you can make a reasonable argument for any of three options: always defaulting d3d9 attributes to float; defaulting BLENDINDICES to UINT and the rest to float; or having no default at all and giving them some VKD3D_SHADER_COMPONENT_UNKNOWN or VOID type.
I feel like it's very unlikely that any d3d8/9 application will ever deviate from the "blendindices is uint, everything else is float" pattern, but if you think it's more sensible to default everything to float and just add that interface anyway, I can take that approach...
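Something like this minimal sketch is what I have in mind for the second option. The helper name and the standalone enum are purely illustrative; in practice this would use the component types from vkd3d_shader.h:

```c
#include <strings.h>  /* strcasecmp() */

/* Stand-in for the component types discussed above; the real
 * definitions live in vkd3d-shader's public header. */
enum component_type
{
    COMPONENT_VOID,   /* no default at all */
    COMPONENT_UINT,
    COMPONENT_FLOAT,
};

/* Hypothetical helper: pick a default component type for a d3d8/9
 * vertex attribute from its semantic name.  BLENDINDICES defaults
 * to UINT, everything else to float. */
static enum component_type default_attribute_type(const char *semantic_name)
{
    if (!strcasecmp(semantic_name, "BLENDINDICES"))
        return COMPONENT_UINT;
    return COMPONENT_FLOAT;
}
```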