Scott Ritchie wrote:
> There was a bit of a philosophical discussion on #winehackers about the merits of creating tests for functions that might be testing undefined or unimportant behavior. Windows behaves one way, we behave another, the tests measure this delta, but it's unknown if this will actually improve a real world application.
Vincent Povirk wrote an excellent reply. I hope somebody will turn that into a Wiki page.
> More broadly, should we resist any change without a particular (real-world) application in hand that needs it?
This is stupid because you're always late, like a fireman: a) somebody who wrote the bug report is waiting for the bug to be fixed ASAP; b) it takes a lot of effort to analyse huge and cryptic log files to find out what the particular delta is.
> Or should we err on the side of testable behavior,
What you test is what you know. It's not erring. If you don't know what the behaviour should be, you can't add a FIXME pointing out that Wine deviates from native behaviour.
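In the conformance tests, that knowledge is what lets you mark the delta explicitly. A minimal sketch using todo_wine from wine/test.h; the lstrlenA(NULL) quirk is real on Windows, but the premise that Wine deviates from it is assumed purely for illustration:

    #include <windows.h>
    #include "wine/test.h"

    static void test_known_delta(void)
    {
        /* Documented behaviour: plain string length. */
        ok(lstrlenA("wine") == 4, "got %d\n", lstrlenA("wine"));

        /* Windows quirk: lstrlenA(NULL) returns 0 instead of crashing.
         * If Wine deviated here, wrapping the check in todo_wine would
         * record the delta without turning the daily runs red; the
         * todo_wine is dropped once Wine is fixed. */
        todo_wine
        ok(lstrlenA(NULL) == 0, "got %d\n", lstrlenA(NULL));
    }

    START_TEST(delta)
    {
        test_known_delta();
    }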
When I asked about testing depth w.r.t. the MCI a few years ago, Paul Vriens advised to test to the bones. IMHO that's good advice.
I recommend that beginners start writing tests *first*. Doing so, you learn a lot about the component Wine purports to mimic. You can then even start to predict failures!
Somehow I believe I'm a very untypical Wine contributor. I'm not bug driven. Quite to the contrary, my thesis is that once you've tested a component to the bones *and* replicated the observable behaviour, there will be few opportunities left for people to report bugs.
The basic alternative is: if you have free time now, you can either try to understand obscure logs of some component that you're not exactly familiar with and that is more or less well documented, perhaps single-stepping your way through the debugger, *OR* you can spend time shedding light on said component via tests, mostly working with logs you control, because you're working with the full source of both your test and the Wine code.
That's what I was doing with the audio parts of the MCI before WinMM was rewritten to use mmdevapi. Now that's what I'm doing with mmdevapi. After that, should I still have the energy, I'll look at WinMM audio.
Other benefit: tests get executed daily, whereas an application can be made to work in wine-X yet break again in wine-X+1 until somebody notices.
Regards, Jörg Höhle
On 08/26/2011 12:45 PM, Joerg-Cyril.Hoehle@t-systems.com wrote:
> When I asked about testing depth w.r.t. the MCI a few years ago, Paul Vriens advised to test to the bones. IMHO that's good advice.
It is good advice. But that doesn't mean one has to submit tests that don't make sense.
> Somehow I believe I'm a very untypical Wine contributor. I'm not bug driven.
I'm not bug driven either,
> Quite to the contrary, my thesis is that once you've tested a component to the bones *and* replicated the observable behaviour,
but here I heartily disagree. You don't test or replicate all the observable behaviour; that is way, way too broad. An extreme case would be to try to test how much time/CPU cycles a function call takes and to replicate that in Wine. In general that doesn't matter, but it is observable and testable.
What you do want to test is:
- the de jure API (as documented), and
- the de facto API (as used by applications and thus kept "broken" across Windows versions).
So if an application relies on the fact that a function call will crash with an invalid combination of arguments, then Wine will have to crash too; but it doesn't matter whether the segfault happens at address 0x00000002 or 0x00000001.
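A hedged sketch of how tests commonly encode these cases; the broken() values are invented, and the crashing call is shown with the usual "if (0)" guard so it is documented but never executed:

    #include <string.h>
    #include <windows.h>
    #include "wine/test.h"

    static void test_contract(void)
    {
        char buf[16];
        DWORD ret;

        /* De jure: the documented contract for a missing variable. */
        SetLastError(0xdeadbeef);
        ret = GetEnvironmentVariableA("WINE_TEST_NO_SUCH_VAR", buf, sizeof(buf));
        ok(ret == 0, "got %u\n", ret);
        ok(GetLastError() == ERROR_ENVVAR_NOT_FOUND, "got %u\n", GetLastError());

        /* De facto: where Windows versions disagree, broken() accepts the
         * older behaviour so one test passes everywhere, e.g. (invented):
         *   ok(ret == 5 || broken(ret == 4), "got %u\n", ret); */

        /* A crash must be reproduced, but not its exact fault address: */
        if (0) /* crashes on native */
            strcpy(NULL, "boom");
    }

    START_TEST(contract)
    {
        test_contract();
    }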
> there will be few opportunities left for people to report bugs.
You underestimate the ingenuity of application developers to misuse the API. Or you have found a way to solve the halting problem ;)
> The basic alternative is: if you have free time now, you can either try to understand obscure logs of some component that you're not exactly familiar with and that is more or less well documented, perhaps single-stepping your way through the debugger, *OR* you can spend time shedding light on said component via tests, mostly working with logs you control, because you're working with the full source of both your test and the Wine code.
It is not an either-or but a "do both", for the project in general that is. People are free to scratch their itch as long as they don't make the code a mess.
bye michael
Hi,
Michael Stefaniuc wrote:
> What you do want to test is:
> - the de jure API (as documented), and
> - the de facto API (as used by applications
You plead for operational profile testing. I say that the API "as used by applications" is unknown to me because the 100000 apps out there don't send me email about their API usage. So I'm left with extremely few apps to test, and my judgement.
My judgement tells me to complete your list:
- the expected corner cases of the API or the programming language, e.g.:
  + buffer sizes 0 or even < 0 with C
  + reference counts with COM
  + class inheritance with C++
For instance, Scott Ritchie's example is not unusual:
* If lpszStr is NULL, returns how long a formatted string would be.
This is a very common pattern. Probably MSDN forgot to mention it. Expect apps to use that when they find out it works (see the sketch below).
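A sketch of how such corner cases might be probed for the very function from Scott's patches; every expected value below is an assumption to be verified against native behaviour, not an established fact:

    #include <string.h>
    #include <windows.h>
    #include <shlwapi.h>
    #include "wine/test.h"

    static void test_corner_cases(void)
    {
        char buf[64];
        int len, len0;

        /* The size-query pattern: NULL output buffer returns the length
         * the formatted string would need. */
        len = StrFromTimeIntervalA(NULL, 0, 90000, 5);
        ok(len > 0, "got %d\n", len);

        /* Buffer size 0 with a real buffer: assumed to report the same
         * length and to leave the buffer untouched. A size < 0 would
         * wrap to a huge UINT here and is worth probing separately. */
        strcpy(buf, "canary");
        len0 = StrFromTimeIntervalA(buf, 0, 90000, 5);
        ok(len0 == len, "got %d, expected %d\n", len0, len);
        ok(!strcmp(buf, "canary"), "buffer was modified: %s\n", buf);
    }

    START_TEST(interval)
    {
        test_corner_cases();
    }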
What I don't understand about Scott's patches is why there are half a dozen of them. One patch for the code and one for the tests about StrFromTimeInterval would do; possibly even join them.
> An extreme case would be to try to test how much time/CPU cycles a function call takes and to replicate that in Wine. In general that doesn't matter, but it is observable and testable.
I have audio in mind, so I don't care about "in general". With audio, it matters, as seen in the MS compatibility kit: http://technet.microsoft.com/en-us/library/cc766308%28WS.10%29.aspx See IgnoreMCISTOP. Complete the list further:
- the timing of the audio API,
- the timing of events (e.g. mmdevapi EVENTCALLBACK), and
- the timing and invoking thread of callbacks (bugs #3930, #27795).
Actually, I've been thinking about performing timing tests of the mmdevapi Start and Stop methods. One noteworthy point is that I *may* perform such tests to validate my beliefs about how mmdevapi works (e.g. is it so fast as if it solely flips a bit polled by the timer job?), but that does not necessarily imply that I'll add tests to the source. I've performed many more tests than you'll eventually find in tests/.
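For what it's worth, a sketch of how such a timing probe might look (shared mode, default render device; the expectation that Start returns almost immediately is exactly the belief under test, not a known fact):

    #define COBJMACROS
    #include <initguid.h>
    #include <windows.h>
    #include <mmdeviceapi.h>
    #include <audioclient.h>
    #include "wine/test.h"

    static void test_start_latency(void)
    {
        IMMDeviceEnumerator *devenum;
        IMMDevice *dev;
        IAudioClient *ac;
        WAVEFORMATEX *wfx;
        LARGE_INTEGER freq, t0, t1;
        HRESULT hr;

        hr = CoCreateInstance(&CLSID_MMDeviceEnumerator, NULL, CLSCTX_ALL,
                              &IID_IMMDeviceEnumerator, (void**)&devenum);
        ok(hr == S_OK, "got %08x\n", hr);
        hr = IMMDeviceEnumerator_GetDefaultAudioEndpoint(devenum, eRender,
                                                         eConsole, &dev);
        ok(hr == S_OK, "got %08x\n", hr);
        hr = IMMDevice_Activate(dev, &IID_IAudioClient, CLSCTX_ALL, NULL,
                                (void**)&ac);
        ok(hr == S_OK, "got %08x\n", hr);
        hr = IAudioClient_GetMixFormat(ac, &wfx);
        ok(hr == S_OK, "got %08x\n", hr);
        hr = IAudioClient_Initialize(ac, AUDCLNT_SHAREMODE_SHARED, 0,
                                     500 * 10000 /* 500 ms */, 0, wfx, NULL);
        ok(hr == S_OK, "got %08x\n", hr);
        CoTaskMemFree(wfx);

        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&t0);
        hr = IAudioClient_Start(ac);
        QueryPerformanceCounter(&t1);
        ok(hr == S_OK, "got %08x\n", hr);
        /* If Start merely flips a bit polled by the timer job, this should
         * be far below one mixer period (~10 ms); that is the hypothesis
         * to validate, not an assertion to hard-code. */
        trace("Start took %.3f ms\n",
              (t1.QuadPart - t0.QuadPart) * 1000.0 / freq.QuadPart);

        IAudioClient_Stop(ac);
        IAudioClient_Release(ac);
        IMMDevice_Release(dev);
        IMMDeviceEnumerator_Release(devenum);
    }

    START_TEST(latency)
    {
        CoInitializeEx(NULL, COINIT_MULTITHREADED);
        test_start_latency();
        CoUninitialize();
    }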
Regards, Jörg Höhle
On 08/30/2011 04:05 AM, Joerg-Cyril.Hoehle@t-systems.com wrote:
> For instance, Scott Ritchie's example is not unusual:
> * If lpszStr is NULL, returns how long a formatted string would be.
> This is a very common pattern. Probably MSDN forgot to mention it. Expect apps to use that when they find out it works.
In this particular case, MSDN does mention it; however, it doesn't mention all the testable behavior, such as exactly when the buffer isn't modified. This is why, once we test exhaustively like this, our own API docs end up being better than MSDN itself.
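Pinning down "exactly when" is usually done with a sentinel fill; a sketch, again with assumed rather than authoritative expectations:

    #include <string.h>
    #include <windows.h>
    #include <shlwapi.h>
    #include "wine/test.h"

    static void test_buffer_untouched(void)
    {
        char buf[64];
        int i, len;

        /* Fill the buffer with a sentinel, call, then check exactly which
         * bytes were written: everything past the returned length and its
         * terminator is assumed to still hold the sentinel. */
        memset(buf, 0xcc, sizeof(buf));
        len = StrFromTimeIntervalA(buf, sizeof(buf), 90000, 5);
        ok(len > 0 && !buf[len], "got len %d\n", len);
        for (i = len + 1; i < (int)sizeof(buf); i++)
            if (buf[i] != (char)0xcc) break;
        ok(i == (int)sizeof(buf), "byte %d was modified\n", i);
    }

    START_TEST(sentinel)
    {
        test_buffer_untouched();
    }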
Incidentally, the tests themselves are very informative example code, and if we were trying to create a resource for developers like MSDN they'd probably have a place there.
> What I don't understand about Scott's patches is why there are half a dozen of them. One patch for the code and one for the tests about StrFromTimeInterval would do; possibly even join them.
A little birdie told me that Alexandre likes small incremental patches, so I provided each test in the small increment in which I noticed the need for it. It does seem like the sort of thing I should combine in the future, though.
Thanks, Scott Ritchie
On Fri, Aug 26, 2011 at 6:00 AM, Michael Stefaniuc <mstefani@redhat.com> wrote:
> It is good advice. But that doesn't mean one has to submit tests that don't make sense.
It seems like a simple comment describing the undocumented behavior would help. I wrote some tests years ago (the details are not important because I don't remember them) based upon something in MSDN and found that I could pass some extra foo and undocumented magic would result.
My point is, I wish now that I had written down, in the form of a one- or two-line comment, what I saw, so that in the future, if I or someone else were tracking down a bug in that component, that little note might save hours of effort.
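Even something this small next to the test would do; the API name and the observation are invented purely to show the shape of such a note:

    /* Undocumented: passing cbSize = 0 makes FooFormatA return the
     * required length instead of failing with ERROR_INVALID_PARAMETER.
     * Observed on XP SP3 and Win7; MSDN does not mention it (checked 2011-08). */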
Thanks