On Tue, 22 Jan 2002, Alexandre Julliard wrote:
> "Dimitrie O. Paun" <dimi@cs.toronto.edu> writes:
> > With all due respect Alexandre, I can't understand your point. When does
> > the _semantics_ of the function differ based on the string encoding???
>
> Functions that take strings usually do something with them, so this is
> part of the function semantics, and it differs between ASCII and Unicode.
> A fundamental part of that is making sure that all characters are
> preserved correctly (no lossy W->A->W round-trip), and it's precisely the
> thing that will never get tested with the TCHAR stuff.
And this is precisely where I cannot understand you. When we test stuff, we need to worry mainly about two things:
  1. the function's semantics
  2. W->A->W conversions, etc.
(1) divides nicely into:
  1.A functions which don't care much about what the string is, it just gets passed around -- this case would benefit from TCHAR
  1.B functions which deal with characters (lengths, positions, etc.) -- these are defined in terms of characters, so again TCHAR is OK
That is, TCHAR is beneficial in testing the semantics of the function, save a few freak cases.
(2) is orthogonal to (1). Now it seems that you consider that having encoding-independent tests would completely miss this case, whereas I find myself at exactly the opposite end.
Truth be told, I am not 100% in the "TCHAR" camp either. I need to see some code to be convinced either way, as you probably do too. So let me produce some code so that we can argue on something more concrete.
That being said, I am truly interested to understand your reasoning. From previous experience, you have solid reasons for arguing something, and given that I fail completely to see your side of the story, I must be missing something rather important.
> If all you want is to call the function with some random string, then you
> don't really need to call both A and W since they usually use the same
> code anyway.
But then we would have failed to test the (very important) W->A->W aspect, and it seems you are contradicting your previous statement...
-- Dimi.