Hi
While testing more variant functions I tried this on Windows:
double dVal;
OLECHAR test1[] = {'&', 'H', '8', '0', '0', '0', '0', '0', '0', '0', '\0'};
HRESULT ok = VarR8FromStr(test1, LANG_NEUTRAL, NUMPRS_STD, &dVal);
The result in dVal was -2147483648. But a double shouldn't have any problem holding the "real" value 2147483648. So why has it become negative? Is it because the source was a hex number? Are all hex numbers automatically treated as signed when converted to int/real? Or is it just because of the 32nd bit? The documentation wasn't very informative.
(The funny thing, though, was this remark in my VC6 help; it's not in the online version of MSDN anymore: "Passing into this function any invalid and, under some circumstances, NULL pointers will result in unexpected termination of the application. For more information about handling exceptions, see Programming Considerations.")
...now I understand many things :)
Thanks
bye Fabi
Fabian Cenedese wrote:
> The result in dVal was -2147483648. But a double shouldn't have any
> problem holding the "real" value 2147483648. So why has it become
> negative? Is it because the source was a hex number? Or is it just
> because of the 32nd bit?
I'm not sure I understood your question properly. A signed two's-complement 32-bit variable can hold the numbers -2^31 to 2^31 - 1. That's just how the encoding works. Was that your question?
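For example, this little plain-C snippet shows the reinterpretation I mean (the int just stands in for the 32-bit signed variable, and this is only a guess at what the parser does internally, not the actual oleaut32 code; converting an out-of-range value to int is strictly implementation-defined, but on two's-complement machines it gives exactly this result):

#include <stdio.h>

int main(void)
{
    unsigned int raw  = 0x80000000u;  /* the eight parsed hex digits            */
    int          asI4 = (int)raw;     /* reinterpreted as a signed 32-bit value */
    double       dVal = (double)asI4; /* only then widened to double            */

    printf("%d -> %f\n", asI4, dVal); /* -2147483648 -> -2147483648.000000      */
    return 0;
}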
Shachar
On February 23, 2004 08:41 am, Shachar Shemesh wrote:
> Fabian Cenedese wrote:
>> (The funny thing, though, was this remark in my VC6 help; it's not in
>> the online version of MSDN anymore: "Passing into this function any
>> invalid and, under some circumstances, NULL pointers will result in
>> unexpected termination of the application.")
Now does that mean "invalid numbers" or "invalid pointers"? Are they just saying that they hadn't included IsBadReadPtr in the argument checking?
> A signed two's-complement 32-bit variable can hold the numbers -2^31 to
> 2^31 - 1. That's just how the encoding works. Was that your question?
Because there is nothing explicitly in there about 32-bit representations. It goes from a string of characters to (excuse my ignorance) a 64-bit double-precision value. So are there restrictions on what character strings can be passed in? E.g. what would it do with "&H80000000"?
> Now does that mean "invalid numbers" or "invalid pointers"? Are they just
> saying that they hadn't included IsBadReadPtr in the argument checking?
I don't know, that was the only remark. But the same remark is included in all VarXXFromYY functions.
>> A signed two's-complement 32-bit variable can hold the numbers -2^31 to
>> 2^31 - 1. That's just how the encoding works. Was that your question?
> Because there is nothing explicitly in there about 32-bit representations.
> It goes from a string of characters to (excuse my ignorance) a 64-bit
> double-precision value. So are there restrictions on what character
> strings can be passed in? E.g. what would it do with "&H80000000"?
My original question was whether this conversion works implicitly with a signed hex string, even though a hex string could represent anything. Strings with more than 8 significant characters are rejected (DISP_E_OVERFLOW), so I guess these conversions assume at most a signed long. That's how I will implement them then.
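Something like this is what I have in mind (just a plain C sketch of my guess at the behaviour; the helper name, the SKETCH_ macros and the parsing loop are my own, not the real oleaut32 code, only the numeric values of the error codes are the real ones):

#include <stdio.h>
#include <wchar.h>

#define SKETCH_S_OK                0x00000000UL
#define SKETCH_DISP_E_OVERFLOW     0x8002000AUL /* value of the real DISP_E_OVERFLOW     */
#define SKETCH_DISP_E_TYPEMISMATCH 0x80020005UL /* value of the real DISP_E_TYPEMISMATCH */

/* Convert an "&H..." string to a double the way I assume VarR8FromStr does:
 * at most eight significant hex digits, read into a 32-bit value that is
 * then treated as signed before being widened to double. */
static unsigned long hex_to_r8(const wchar_t *s, double *out)
{
    unsigned int value = 0;
    int digits = 0;

    if (s[0] != L'&' || (s[1] != L'H' && s[1] != L'h'))
        return SKETCH_DISP_E_TYPEMISMATCH;

    for (s += 2; *s; s++)
    {
        int d;
        if (*s >= L'0' && *s <= L'9')      d = *s - L'0';
        else if (*s >= L'A' && *s <= L'F') d = *s - L'A' + 10;
        else if (*s >= L'a' && *s <= L'f') d = *s - L'a' + 10;
        else return SKETCH_DISP_E_TYPEMISMATCH;

        if (digits || d)       /* leading zeros don't count as significant       */
            digits++;
        if (digits > 8)        /* more than 8 significant digits overflowed here */
            return SKETCH_DISP_E_OVERFLOW;
        value = value * 16 + (unsigned int)d;
    }

    *out = (double)(int)value; /* bit 31 acts as the sign bit, as observed;
                                * implementation-defined in ISO C, two's
                                * complement in practice                          */
    return SKETCH_S_OK;
}

int main(void)
{
    double d;
    if (hex_to_r8(L"&H80000000", &d) == SKETCH_S_OK)
        printf("%f\n", d);     /* prints -2147483648.000000 */
    return 0;
}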
Thanks
bye Fabi