On Friday 13 September 2002 02:29, David Laight wrote:
Argl, why does this code use the buffer size constants instead of sizeof(variable)!? I suggest we specify the buffer length constant only *once*, namely at creation of the buffer. Not doing so can be very harmful if we later decide to change the buffer length and then manage to forget one or more of the length constants...
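For illustration, a minimal sketch (not code from the patch under discussion; GetCurrentDirectoryA is just a convenient example of an API taking a byte-sized buffer):

    #include <windows.h>

    void example(void)
    {
        char buffer[MAX_PATH];

        /* Fragile: repeats the constant; breaks silently if the
           declaration is later changed to some other size. */
        GetCurrentDirectoryA(MAX_PATH, buffer);

        /* Robust: the length is derived from the buffer itself. */
        GetCurrentDirectoryA(sizeof(buffer), buffer);
    }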
However I don't think saying something like sizeof(bufferW) is a clear winner here, as we're not interested in the size of bufferW in bytes (or, to be pedantic, in items of type char), but rather in the number of elements in bufferW. To get the number of elements we'd have to use sizeof(bufferW) / sizeof(WCHAR), which is a bit long-winded, but I suppose it could be wrapped in a macro.
The 'usual' definition is sizeof bufferW / sizeof *bufferW, sometimes encapsulated in a NELEM (or nelem) macro:

    #define nelem(x) (sizeof (x) / sizeof *(x))
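Used with a WCHAR buffer it reads naturally; a minimal sketch (GetWindowsDirectoryW is just a handy example of an API that takes its buffer size in WCHARs, not bytes):

    #include <windows.h>

    #define nelem(x) (sizeof (x) / sizeof *(x))

    void example(void)
    {
        WCHAR bufferW[MAX_PATH];

        /* sizeof(bufferW) alone would pass the size in bytes,
           twice the number of WCHAR elements actually available. */
        GetWindowsDirectoryW(bufferW, nelem(bufferW));
    }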
Thanks, I'd totally forgotten about the dereferencing trick to get the size of an element!
Then the constant only appears once. Indeed you have to ask whether MAX_PATH is an enforced system constraint or just wishful thinking. Certainly the NetBSD kernel doesn't enforce it (although some shells enforce it, or other arbitrary limits, on the length of $PWD).
I was under the impression that MAX_PATH is a Windows limitation rather than a limitation of the UNIX system that WINE is running on. Since Windows doesn't promise to permit anything longer than MAX_PATH, we gain nothing by allowing for more in WINE, hence the use of MAX_PATH to size the buffers.
But I wonder about the wacky \\?\ pathnames that Windows supports, and whether WINE can cope with them...
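For reference, those are the extended-length paths that bypass the MAX_PATH limit entirely; a minimal sketch of what they look like in use (the path itself is made up):

    #include <windows.h>

    void example(void)
    {
        /* The \\?\ prefix tells the Windows API to skip normal path
           parsing and the MAX_PATH length check. */
        HANDLE h = CreateFileW(L"\\\\?\\C:\\some\\very\\long\\path\\file.txt",
                               GENERIC_READ, FILE_SHARE_READ, NULL,
                               OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h != INVALID_HANDLE_VALUE) CloseHandle(h);
    }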
David
Regards,
M.Beach