There was a bit of a philosophical discussion on #winehackers about the merits of creating tests that may be exercising undefined or unimportant behavior. Windows behaves one way, Wine behaves another, and the tests measure this delta, but it's unknown whether closing it will actually improve any real-world application.
There's always regression potential with any code change; on balance, however, adding more (fixed) tests should reduce that risk.
Are the tests themselves evidence that a change is needed? More broadly, should we resist any change without a particular (real-world) application in hand that needs it?
Or should we err on the side of testable behavior, because somewhere out there a Windows developer may have written an app the same way the test author did?
Thanks, Scott Ritchie
A test that passes on Windows and fails on Wine is not sufficient to motivate a change to Wine.
I think we normally consider the following factors:

* Is the change correct? If not, we should consider leaving Wine alone, though I can imagine there being other compelling reasons, in rare circumstances, to knowingly make Wine less like Windows.

* Is a Windows application likely to need this? If we already know of such an application, that's a strong indicator that we should make the change, but even then it's not automatic.

* Would a Windows application that relies on this behavior be considered "broken"? Is it possible that newer releases of Windows, different Windows versions, or the phase of the moon could cause this application to break? Is such an application relying on implementation details that it (and we) shouldn't care about? If so, the actual presence of an application that relies on the Windows behavior is not so important.

* Does this make the code simpler or more complex? If it makes the code simpler and more correct at the same time (without breaking anything), we should do it regardless of whether we know of an application that needs it. But complexity has a cost in maintenance, so we need some positive justification before we do something that makes the code more complex (though not necessarily a specific application).

* Will this change become more difficult to make if we continue to develop Wine without it? If it will, and there's a reasonable chance we'll need it, I will try to fix it before doing any further development that would make it more difficult for me.

* Is the change likely to break something else? I've seen regressions caused by parts of Wine that use other parts of Wine incorrectly but happen to work because Wine is broken in two ways (until one of them is fixed). We can't always anticipate these, but when we can, we should try to fix things in an order that doesn't result in a regression.

* Can we do this correctly within the environment where Wine must function? Sometimes Unix, X, or the other systems we interact with make it impossible for us to fix a bug, or cause problems if we do that are worse than the bug we fix.
Like the fair use principles, none of this is an absolute rule. You just have to look at each individual case. Usually, though, I think it will be obvious which ones apply.
On Thu, 25 Aug 2011, Vincent Povirk wrote:
[...]
> * Is a Windows application likely to need this?
I'd add a couple of factors that pertain to this:

* Is the behavior documented in the MSDN? If yes then applications are more likely to rely on it.

* Does the behavior correspond to a known usage pattern? If yes, then even if not documented in the MSDN, applications are likely to depend on it. Two examples (sketched in code after this list):
  - APIs that take an 'LPSTR output_buffer, DWORD *buffer_size' pair of parameters. If they allow the programmer to pass 'NULL, &size' where size=0 as parameters to determine the required buffer size, then you can expect applications to make use of it even if the MSDN forgot to document it.
  - APIs that take output parameters and will simply not fill them if the pointer is NULL instead of crashing.
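To make these two patterns concrete, here is a minimal C sketch of how an application might use them. SomeQueryA and SomeInfoA are hypothetical functions standing in for any real API of the same shape; the exact return-value conventions vary from one real API to the next.

#include <windows.h>
#include <stdlib.h>

/* Hypothetical APIs illustrating the two patterns above; they stand in
 * for any real functions of the same shape. */
BOOL WINAPI SomeQueryA(LPSTR buffer, DWORD *size);
BOOL WINAPI SomeInfoA(DWORD *major, DWORD *minor);

static char *get_value(void)
{
    DWORD size = 0;
    char *buffer;

    /* Pattern 1: call with a NULL buffer and size=0.  Many such APIs
     * fail here but store the required size in *size, whether or not
     * the MSDN documents it, so applications come to depend on it. */
    SomeQueryA(NULL, &size);
    if (!size) return NULL;

    if (!(buffer = malloc(size))) return NULL;

    /* Second call with a buffer of the reported size. */
    if (!SomeQueryA(buffer, &size))
    {
        free(buffer);
        return NULL;
    }
    return buffer;
}

static DWORD get_major_version(void)
{
    DWORD major = 0;

    /* Pattern 2: pass NULL for outputs the caller doesn't need; APIs
     * that check for NULL instead of crashing invite this usage. */
    SomeInfoA(&major, NULL);
    return major;
}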
Of course the first thing to test is that these are actually supported across a broad swath of the more recent Windows versions.
On 08/26/2011 12:52 AM, Francois Gouget wrote:
> On Thu, 25 Aug 2011, Vincent Povirk wrote:
> [...]
>> * Is a Windows application likely to need this?
>
> I'd add a couple of factors that pertain to this:
>
> * Is the behavior documented in the MSDN? If yes then applications are more likely to rely on it.
>
> * Does the behavior correspond to a known usage pattern? If yes, then even if not documented in the MSDN, applications are likely to depend on it. Two examples:
>   - APIs that take an 'LPSTR output_buffer, DWORD *buffer_size' pair of parameters. If they allow the programmer to pass 'NULL, &size' where size=0 as parameters to determine the required buffer size, then you can expect applications to make use of it even if the MSDN forgot to document it.
>   - APIs that take output parameters and will simply not fill them if the pointer is NULL instead of crashing.
Incidentally, this is basically what the series of tests I wrote that prompted this discussion checks. They're currently at "pending" in the patch tracker.
http://msdn.microsoft.com/en-us/library/bb759980
> Of course the first thing to test is that these are actually supported across a broad swath of the more recent Windows versions.
Do you think testbot handles that nicely?
Thanks, Scott Ritchie
On Fri, 26 Aug 2011, Scott Ritchie wrote:
[...]
>> - APIs that take an 'LPSTR output_buffer, DWORD *buffer_size' pair of parameters. If they allow the programmer to pass 'NULL, &size' where size=0 as parameters to determine the required buffer size, then you can expect applications to make use of it even if the MSDN forgot to document it.
Just a word of caution here though (not related to Scott's patch, which I did not read, but prompted by a case I saw recently): if either Wine or Windows requests a larger buffer than strictly necessary, fixing Wine, or making it match Windows exactly, is not a high priority.
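One way to keep a test robust against that is to assert only a lower bound on the reported size rather than an exact value. A minimal fragment in the style of Wine's conformance tests, reusing the hypothetical SomeQueryA from earlier and a hypothetical 'expected' string:

    DWORD size = 0;

    SomeQueryA(NULL, &size);
    /* Windows (or Wine) may ask for a larger buffer than strictly
     * necessary, so check a lower bound instead of an exact match. */
    ok(size >= (DWORD)lstrlenA(expected) + 1, "got size %u\n", size);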
[...]
>> Of course the first thing to test is that these are actually supported across a broad swath of the more recent Windows versions.
>
> Do you think testbot handles that nicely?
Yes.