"Nigel Liang" ncliang@gmail.com wrote:
+static const WCHAR UNICODE_PATH[] = {'c',':','\\','w',0x00ef,0x00f1,0x00eb,
+                                     't',0x00e8,'s','t','\0','\0'}; /* "c:\winetest" */
The name above is definitely not "c:\winetest". Also, if you need a double '\0' termination, state that explicitly.
Yep, you are right, it is actually "c:\wïñëtèst". I will fix it in the next try.
That won't work; the actual Unicode characters have different codes. Personally, I don't see why you need non-ASCII characters to test Unicode APIs.
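For instance (just a sketch, not part of the patch), an all-ASCII wide string exercises the W entry points just as well:

/* sketch: an all-ASCII wide-character path that really is "c:\winetest";
 * the second '\0' gives the double termination some shell APIs expect */
static const WCHAR UNICODE_PATH[] = {'c',':','\\','w','i','n','e','t','e','s','t',
                                     '\0','\0'}; /* "c:\winetest" */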
(GetVersion() & 0x80000000) is a much shorter way to detect Win9x, but if you need to test whether the platform supports Unicode, that's the wrong check; have a look at how the other tests do it.
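Something along these lines (a sketch; the helper name is made up):

#include <windows.h>

/* sketch: cheap Win9x/Me detection -- GetVersion() sets the high bit there.
 * Note this only identifies the platform; it says nothing about whether
 * the W (Unicode) APIs actually work. */
static BOOL on_win9x(void)
{
    return (GetVersion() & 0x80000000) != 0;
}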
I see that the way it is done in gdi32/tests/font.c is to look for functions that exist only on Win2k or later. Is that the right way to do it, or is there another place I should be looking?
You have to check actual functionality, not the presence of particular APIs; see, for instance, the is_win9x/isWin9x checks in the user32 tests.
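The general idea (a sketch only, not copied verbatim from the user32 tests) is to call a W function and check whether it is just a stub -- on Win9x the W stubs fail with ERROR_CALL_NOT_IMPLEMENTED:

#include <windows.h>

/* sketch: detect Win9x by probing real Unicode functionality rather than
 * the version number or the mere presence of an export */
static BOOL detect_win9x(void)
{
    SetLastError(0xdeadbeef);
    CreateFileW(NULL, GENERIC_READ, 0, NULL, OPEN_EXISTING, 0, NULL);
    return GetLastError() == ERROR_CALL_NOT_IMPLEMENTED;
}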