There was a bit of a philosophical discussion on #winehackers about the merits of writing tests for functions whose behavior may be undefined or unimportant. Windows behaves one way, we behave another, and the tests measure this delta, but it's unknown whether closing that gap would actually improve any real-world application.
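For concreteness, a test of this kind usually pins down an edge case with ok() and marks Wine's current divergence with todo_wine. The function and expected values below are purely hypothetical; this is just a sketch of the pattern, not a claim about real Windows behavior:

/* Hypothetical sketch of a "delta-measuring" conformance test.
 * ok() records pass/fail against the behavior observed on Windows;
 * todo_wine marks a check that is known to fail in Wine today. */
#include <windows.h>
#include "wine/test.h"

static void test_null_argument(void)
{
    DWORD ret;

    SetLastError(0xdeadbeef);
    ret = GetModuleFileNameA(NULL, NULL, 0);  /* edge case: NULL buffer, zero size */

    /* Suppose manual testing on Windows showed ret == 0 with a specific
     * last error; Wine might currently differ, so that check is marked. */
    ok(ret == 0, "expected 0, got %lu\n", ret);
    todo_wine
    ok(GetLastError() == ERROR_INVALID_PARAMETER,
       "unexpected last error %lu\n", GetLastError());
}

START_TEST(hypothetical)
{
    test_null_argument();
}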
Any code change carries some regression potential; on balance, though, adding more (fixed) tests should reduce that risk.
Are the tests themselves evidence that a change is needed? More broadly, should we resist any change without a particular (real-world) application in hand that needs it?
Or should we err on the side of matching testable behavior, because somewhere out there a Windows developer may have written an app the same way the test author did?
Thanks,
Scott Ritchie