Hi
While testing more variant functions I tried this on Windows:
double dVal;
HRESULT ok;
OLECHAR test1[] = {'&', 'H', '8', '0', '0', '0', '0', '0', '0', '0', '\0'};
ok = VarR8FromStr(test1, LANG_NEUTRAL, NUMPRS_STD, &dVal);
The result in dVal was -2147483648. But a double should have no problem
holding the "real" value 2147483648, so why did it come out negative?
Is it because the source was written as a hex number? Are all hex numbers
treated as signed when converted to int/real, or is it just because the
32nd bit is set? The documentation wasn't very informative on this.
(The funny thing though was this remark in my VC6 help; it's not
in the online version of MSDN anymore:
"Passing into this function any invalid and, under some circumstances, NULL pointers will result in unexpected termination of the application. For more information about handling exceptions, see Programming Considerations."
...now I understand many things :)
Thanks
bye Fabi