I think that what you're saying implies that FLOAT would be the correct type for BLENDINDICES.
I think so, but that then forces us to add that extra interface, whereas we could otherwise (always? usually?) get away with guessing UINT for blend indices and FLOAT for everything else. Which is annoying :-/
We could add a compilation option for "pure integer" BLENDINDICES, which would perhaps make the issue go away in most cases in practice, but in principle I don't think it would prevent an application from e.g. using D3DDECLTYPE_UBYTE4 texture coordinates.