I'm going off memory, but I think it's legal in d3d9 to use any data type in the vertex declaration. I think it's legal even for d3d11; i.e. there's no need to actually match the shader data type. In practice this means we hit Vulkan validation errors, which we'd need yet more interface data to avoid, and I believe these validation errors also matter (i.e. the driver will just bit-cast the data if the type doesn't match).
So if my memory is correct in this respect, the vertex attribute type doesn't exactly matter. On the other hand, we can (as always) provide reasonable guesses, and UINT is the best reasonable guess for BLENDINDICES, so it does kind of seem like the most sensible thing to me.
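Something like the following on the Vulkan side, say; this is only a sketch, and the location, binding and offset values are made-up placeholders rather than anything from the actual code:

    #include <vulkan/vulkan.h>

    /* BLENDINDICES stored as four unsigned bytes, described with a UINT
     * format so that it matches a shader input declared as an unsigned
     * integer vector.  If the shader declares the input as float instead,
     * the base types mismatch and the validation layers complain. */
    static const VkVertexInputAttributeDescription blend_indices_attr =
    {
        .location = 2,                        /* placeholder */
        .binding = 0,                         /* placeholder */
        .format = VK_FORMAT_R8G8B8A8_UINT,
        .offset = 12,                         /* placeholder */
    };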
Wrt wined3d, I haven't checked, but GL may implicitly do a proper cast?
Direct3D 9 and the OpenGL of that time didn't have proper integer attributes; you could send integer data to the driver/GPU, but in the shader it would be visible as floating point data. The GPUs of that time generally just didn't have the capability of doing e.g. integer multiplication. Note also that most of the d3d9 vertex declaration data types are either floating-point formats or normalised formats, although e.g. D3DDECLTYPE_UBYTE4 does exist.
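For illustration, a typical d3d9 skinning declaration might look something like this; the offsets and the choice of UBYTE4N for the weights are just examples, not taken from any particular application. Only the BLENDINDICES element carries integer data, and even that reaches a SM 1-3 vertex shader as floating point:

    #include <d3d9.h>

    /* {stream, offset, type, method, usage, usage_index} */
    static const D3DVERTEXELEMENT9 decl_elements[] =
    {
        {0,  0, D3DDECLTYPE_FLOAT3,  D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION,     0},
        {0, 12, D3DDECLTYPE_UBYTE4,  D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_BLENDINDICES, 0},
        {0, 16, D3DDECLTYPE_UBYTE4N, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_BLENDWEIGHT,  0},
        D3DDECL_END()
    };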
The OpenGL interface is perhaps illustrative here; compare the behaviour of e.g. glVertexAttrib4iv() with the behaviour of glVertexAttribI4iv(). The GL spec refers to the values specified by the latter as "pure integers".
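The same distinction exists on the vertex-array path, which is presumably what matters for wined3d draws. A sketch, assuming a GL 3.0+ context with a loader such as epoxy, and with a made-up attribute index and vertex layout; in practice you'd use one call or the other, not both:

    #include <epoxy/gl.h>    /* assumed loader; any GL 3.0+ loader would do */

    static void bind_blend_indices(GLuint index)
    {
        /* Legacy attribute: the GL converts the four unsigned bytes to
         * floats (unnormalised here), so the shader sees a vec4.  This is
         * the d3d9-era behaviour. */
        glVertexAttribPointer(index, 4, GL_UNSIGNED_BYTE, GL_FALSE, 16, (const void *)12);

        /* "Pure integer" attribute (GL 3.0+): no conversion is applied, and
         * the shader input must be declared as ivec4/uvec4. */
        glVertexAttribIPointer(index, 4, GL_UNSIGNED_BYTE, 16, (const void *)12);
    }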
I think what you're saying implies that FLOAT would be the correct type for BLENDINDICES.