http://bugs.winehq.org/show_bug.cgi?id=7284
--- Comment #31 from Alexander Dorofeyev <alexd4@inbox.lv> 2009-03-28 18:48:46 ---
I found this paper:
http://http.download.nvidia.com/developer/Papers/2005/FP_Specials/FP_Special...
That seems to suggest 0 * +Inf = NaN.
But it also mentions that this applies to hardware/implementations that support floating point operations and floating point special numbers like Inf and NaN. I'm not sure, but maybe in the old d3d8 days there was no floating point support, and thus no NaNs or Infs, so divide by zero was basically clamped to a maximum value like 1.0 or something like that?
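For reference, here's a minimal C snippet (just an illustration on an ordinary IEEE 754 CPU, not Wine code and not what old GPU hardware necessarily did) showing the 0 * +Inf = NaN behavior the paper describes:

#include <math.h>
#include <stdio.h>

int main(void)
{
    float zero = 0.0f;
    float inf  = INFINITY;   /* +Inf, from C99 <math.h> */
    float r    = zero * inf; /* IEEE 754: 0 * Inf is an invalid operation -> NaN */

    printf("0 * +Inf = %f (isnan: %d)\n", r, isnan(r));

    /* A hypothetical pre-float shader unit without Inf/NaN could instead
       clamp, e.g. treat 1/0 as the largest representable value. */
    return 0;
}

On IEEE-compliant hardware this prints nan; hardware without special-number support would have to do something else, like the clamping guessed at above.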