Hi Jörg
Failure to flag an error in such cases has known drawbacks:
For several months, Austin did not notice that his BSD boxes had lost sound. wave.ok simply reports "5 tests ok, 1 skipped" in joyful green, whereas it executed many more tests half a year ago.
All those vmware boxes configured without sound, and thus skipping all but 5 wave.ok tests, have for months *not* provided valuable feedback to Wine. So they were actually *not* executing tests when they could have. It's only when Paul ran one of my tests that he realized that sound was not configured on one of his vmware boxes; he promptly configured the sound device, and the tests passed.
The picture on test.winehq.org is a disaster: none of the w95, w98, nt4, 2k, 2k3, Vista, 2k8 or w7 machines have performed winmm:wave tests! They all have no sound configured. Mostly useless.
A lot of those VMware boxes are mine. They come in two flavors. Some of them (starting with esx-) are ESX VMs; ESX is VMware's data center product, which simply doesn't support sound. The others (starting with gvg-) are Server VMs, which do support sound. The reason I didn't configure sound on them is that I'm simply not interested in sound. I'll consider enabling sound, but to be honest I'm a bit put off by your aggressive tone.
Anyway, the end result is that there will always be ESX VMs that can't run sound tests.
Best regards, Ge.
Ge,
to be honest I'm a bit put off by your aggressive tone.
Please accept my apologies. It was not meant to be aggressive. I was certainly disappointed, having submitted tests that I knew worked on 3 real machines (w95, w2k, xp), to then turn to test.winehq and see everything red.
BTW, dsound is also affected: both dsound and winmm have had little testing coverage despite the appealing green color.
I'm simply not interested in sound
Please weigh providing as much test data as possible to the Wine project against your lack of interest in sound issues. BTW, the default sound tests make no noise.
Anyway, the end result is that there will always be ESX VMs that can't run sound tests.
Note taken.
The question remains:
a) turn sound tests red on machines without sound (with 1 error only);
b) turn them blue (clearly marking skipped tests);
c) turn them green -- status quo (for dsound.ok and wave.ok).
Regards, Jörg Höhle.
Hi Joerg,
BTW, dsound is also affected: both dsound and winmm have had little testing coverage despite the appealing green color.
Your concern about coverage is a valid one.
Please weigh providing as much test data as possible to the Wine project against your lack of interest in sound issues.
I'm with Ge on this one: I don't much care whether Wine is able to play sound. I care in an abstract sense, but not more than I'd like to see green tests ;-) Is that silly or misguided of me? Perhaps. On the other hand, I think there's real value in stating that "the tests should pass on all Windows boxes," even if their configuration is invalid or strange. We're not there yet, but any failing test impedes that goal.
To put it another way: when the test failures seem intractable, it's hard to get motivated to fix them. For example, if there are some tests no one understands, or that fail on systems few people run, or if the number of test failures is simply too large, it's hard to get motivated to chip away at the issues. Fortunately we have a few cheerleaders like Paul who get us motivated to work on them every now and again.
On the other hand, when there are few test failures, there can be a little incentive to see whether a few days' effort will produce passing tests on at least one file, in order to see it magically become green. There has been for me, at least.
So, to answer your question:
a) turn sound tests red on machines without sound (with 1 error only);
b) turn them blue (clearly marking skipped tests);
c) turn them green -- status quo (for dsound.ok and wave.ok).
I'm in favor of either b or c, though I lean toward c. The reason is that not all tests can run on all systems. For example, you mentioned you saw a failure on a Win95 box with a broken sound card. If the sound card is broken and other test results are invalid, surely a skip is appropriate there: it doesn't indicate an error in the tests. Green vs. red tells you at a glance what the pass rate is, though it doesn't say anything about the skip rate.
I think what you're asking for is a separate metric for coverage and pass rate. I'm in favor of that. --Juan
2009/11/6 Joerg-Cyril.Hoehle@t-systems.com:
Ge,
to be honest I'm a bit put off by your aggressive tone.
Please accept my apologies. It was not meant to be aggressive. I was certainly disappointed, having submitted tests that I knew worked on 3 real machines (w95, w2k, xp), to then turn to test.winehq and see everything red.
BTW, dsound is also affected: both dsound and winmm have had little testing coverage despite the appealing green color.
I'm simply not interested in sound
Please weigh providing as much test data as possible to the Wine project against your lack of interest in sound issues. BTW, the default sound tests make no noise.
Anyway, the end result is that there will always be ESX VMs that can't run sound tests.
Note taken.
The question remains:
a) turn sound tests red on machines without sound (with 1 error only);
b) turn them blue (clearly marking skipped tests);
c) turn them green -- status quo (for dsound.ok and wave.ok).
309 err = mciSendString("open waveaudio!tempfile.wav alias mysound", buf, sizeof(buf), NULL);
310 ok(!err,"mci open waveaudio!tempfile.wav returned error: %d\n", err);
311 if(err) {
312     skip("tempfile.wav was not found (not saved), skipping\n");
313     return;
314 }
Here, you are doing both (generating an error and skipping)!
Also, how do you know that this is due to tempfile.wav being missing (the skip is invoked for any error case)? Especially given the line:
mci.c:261: Tests skipped: Please install audio driver. Tests will fail
in the test results.
Shouldn't this be something like:
err = mciSendString("open waveaudio!tempfile.wav alias mysound", buf, sizeof(buf), NULL);
if(err == MCIERR_INVALID_FILE) {
    skip("tempfile.wav was not found (not saved?), skipping\n");
    return;
}
ok(!err, "mci open waveaudio!tempfile.wav returned error: %s\n", dbg_mcierr(err));
I think that it is safe to say that if there is no audio driver present, all subsequent mci tests will fail and should also be skipped (ideally with just the single warning, e.g. by returning FALSE from that function and wrapping the other tests in an if).
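Something like this is what I have in mind (a rough sketch; open_waveaudio, test_play and test_record are placeholder names, not the real test functions):

static BOOL open_waveaudio(void)
{
    char buf[512];
    MCIERROR err = mciSendString("open waveaudio!tempfile.wav alias mysound",
                                 buf, sizeof(buf), NULL);
    if (err) {
        skip("cannot open waveaudio: %d\n", (int)err); /* the single warning */
        return FALSE;
    }
    return TRUE;
}

START_TEST(mci)
{
    if (open_waveaudio()) { /* wrap the dependent tests in an if */
        test_play();
        test_record();
    }
}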
 27 static const char* dbg_mcierr(MCIERROR err)
...
108     default: return "unknown error";
This is not really helpful, it should contain the error number.
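E.g. something along these lines (a sketch; the static buffer is just one way to do it, and sprintf needs <stdio.h>):

static const char* dbg_mcierr(MCIERROR err)
{
    static char buf[32];
    switch (err) {
    /* ... the named cases as before ... */
    default:
        sprintf(buf, "unknown error %u", (unsigned)err);
        return buf;
    }
}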
mci.c:239: Test failed: mci set format tag pcm returned error: 282
This (and the other error numbers) does not mean much to me. There is a dbg_mcierr function that was created; shouldn't that be used here (as the error is returned from mciSendString)?
- Reece
Reece Dunn wrote:
The question remains:
a) turn sound tests red on machines without sound (with 1 error only);
b) turn them blue (clearly marking skipped tests);
c) turn them green -- status quo (for dsound.ok and wave.ok).
310 ok(!err,"mci open waveaudio!tempfile.wav returned error: %d\n", err);
311 if(err) {
312     skip("tempfile.wav was not found (not saved), skipping\n");

Here, you are doing both (generating an error and skipping)!
On purpose. I wanted an error flagged to point people's attention to the issue of machines without sound, so that it can be discussed.
I think that it is safe to say that if there is no audio driver present, all subsequent mci tests will fail and should also be skipped (ideally with just the single warning, e.g. by returning FALSE from that function and wrapping the other tests in an if).
My current line of thought is that broken and skip are somewhat *bad* things, because by not performing tests, we don't learn anything about the behaviour in "unusual" circumstances.
The relevant question becomes: when is a configuration unusual? When can it be ignored for the purpose of Wine and when is it relevant?
- I don't care that tests fail on my father-in-law's machine with a broken sound card (or rather, the card may be ok, but somehow sndblast.dll doesn't manage to install it). Yet testing on that machine was very revealing (black box testing with broken configurations tells you a lot about what's inside, like scientists learn a lot from patients with a local brain disease).
- Ge Geldorp's pointing out that some vmware engines do not support sound at all means that Windows without sound is not unusual -- unlike what I initially thought.
My conclusion is that we need tests to specifically find out how MS-Windows behaves on such machines, so that Wine can mimic it. That might be written entirely without skip() or broken(), because it becomes expected behaviour.
if (wave_GetNumDevs() == 0) {
    skip("no sound, no tests\n");
    return;
}

is *not* the answer IMHO. Turning all lights green is not a primary goal, it's a secondary/subordinated one.
What's the purpose of skip()? Nothing but generate blue colour in test.winehq (use trace(); return; if all you need is print something and avoid other tests). What's the purpose of the blue colour?
- A call for action?
  + "please run again in the english locale so I can perform locale-dependent tests";
  + "please install sound so that you'll know whether it works on BSD";
  + "please install Gecko so I can perform more tests"
- ...?
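The trace() alternative I mean would look like, e.g. (a sketch, using the real waveOutGetNumDevs as the availability check):

if (waveOutGetNumDevs() == 0) {
    trace("no sound devices, not testing playback\n");
    return;
}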
Shouldn't this be something like:
if(err == MCIERR_INVALID_FILE) {
    skip("tempfile.wav was not found (not saved?), skipping\n");
No. If any error happens in open, we can't continue. Perhaps you mean:
if (err) {
    ok(err==MCIERR_FILE_NOT_FOUND, "...\n"); /* the expected situation */
    skip("tempfile.wav was not found (not saved?), skipping\n");
a dbg_mcierr function that was created; shouldn't that be used here
I didn't retrofit it into all messages after I wrote it. Will do so later.
Regards, Jörg Höhle.
On 11/09/2009 11:27 AM, Joerg-Cyril.Hoehle@t-systems.com wrote:
Reece Dunn wrote:
The question remains:
a) turn sound tests red on machines without sound (with 1 error only);
b) turn them blue (clearly marking skipped tests);
c) turn them green -- status quo (for dsound.ok and wave.ok).
310 ok(!err,"mci open waveaudio!tempfile.wav returned error: %d\n", err);
311 if(err) {
312     skip("tempfile.wav was not found (not saved), skipping\n");

Here, you are doing both (generating an error and skipping)!
On purpose. I wanted an error flagged to point people's attention to the issue of machines without sound, so that it can be discussed.
I think that it is safe to say that if there is no audio driver present, all subsequent mci tests will fail and should also be skipped (ideally with just the single warning, e.g. by returning FALSE from that function and wrapping the other tests in an if).
My current line of thought is that broken and skip are somewhat *bad* things, because by not performing tests, we don't learn anything about the behaviour in "unusual" circumstances.
There are so many variables on systems that these were introduced some time ago. The general feeling is that we try to have succeeding tests on as many boxes as possible. Currently we are already having a hard time achieving that on clean/correct (so we think) boxes. Once the majority of these clean boxes are showing up green we can start thinking of adding tests for broken configurations.
You are of course free to have your tests succeed for every possible combination out there, but the key word is 'succeed' here.
So if you find that we have test failures for a non-admin user with a Transylvanian locale with no soundcard it still should show up green (with or without the blue borders).
What's the purpose of skip()? Nothing but generate blue colour in test.winehq (use trace(); return; if all you need is print something and avoid other tests). What's the purpose of the blue colour?
- A call for action?
- "please run again in the english locale so I can perform locale-dependent tests";
- "please install sound so that you'll know whether it works on BSD"
- "please install Gecko so I can perform more tests"
- ...?
In my opinion the main purpose for skip() is to not run tests for legitimate reasons. If the tests are written so that they can only be run (aka succeed) when a sound card is present you skip() these tests on systems without a soundcard. skip() should not be used to get everything green (the same is in theory true for broken()).
A good example of using skip() is when functions are just not available on a platform (like most W-calls on 9x boxes). We also have some tests that will only succeed when the user is an administrator. It's perfectly valid to use skip() if we find the user isn't.
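The usual shape of that pattern is something like this (a sketch; SomeFunctionW and some.dll are placeholders, not real names):

static BOOL (WINAPI *pSomeFunctionW)(LPCWSTR); /* hypothetical W-call */

static void test_some_function(void)
{
    /* look the export up at run time, so the test still loads on Win9x */
    pSomeFunctionW = (void*)GetProcAddress(GetModuleHandleA("some.dll"),
                                           "SomeFunctionW");
    if (!pSomeFunctionW)
    {
        skip("SomeFunctionW is not available\n");
        return;
    }
    /* ... the tests using pSomeFunctionW ... */
}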
broken() is a little bit different in that respect: we usually mark platforms/boxes as broken() when 99% (or at least the majority) of other configurations pass these tests.
Regards, Jörg Höhle.
Hi,
Paul Vriens wrote:
In my opinion the main purpose for skip() is to not run tests for legitimate reasons.
Alas, this sentence and the examples you give do not explain to me why one should use skip instead of e.g. trace + return. What is the added value? What is one to do about that blue color or the mention in the log file?
My principle is: do not produce output if there's no consumer. Refuse to write a memo if nobody is going to read it!
Why, on w95, should I use skip to mark the impossibility to call UTF-16 xyzW functions? Why not simply use if (W_isAvailable) { do_more_tests }? We already have files that end up performing a varying number of tests, depending on simple if-then-else statements, without use of skip.
That's different from the localisation example: I could switch the locale, then re-run the tests. This is again the skip=="call for action" idea I mentioned previously.
BTW, I've sent two patches today to stop errors on machines with sound. The third one (the localisation issue -- milliseconden, you already know it) is written but currently at the end of my patch queue. That should bring the number of errors down to 0 on every machine with sound.
The final one, about how to deal with machines with no sound, yet trying to test something instead of skipping everything, I still have to think about.
Regards, Jörg Höhle
2009/11/9 Joerg-Cyril.Hoehle@t-systems.com:
Hi,
Paul Vriens wrote:
In my opinion the main purpose for skip() is to not run tests for legitimate reasons.
Alas, this sentence and the examples you give do not explain to me why one should use skip instead of e.g. trace + return. What is the added value? What is one to do about that blue color or the mention in the log file?
My principle is: do not produce output if there's no consumer. Refuse to write a memo if nobody is going to read it!
Why, on w95, should I use skip to mark the impossibility to call UTF-16 xyzW functions? Why not simply use if (W_isAvailable) { do_more_tests }? We already have files that end up performing a varying number of tests, depending on simple if-then-else statements, without use of skip.
That's different from the localisation example: I could switch the locale, then re-run the tests. This is again the skip=="call for action" idea I mentioned previously.
BTW, I've sent two patches today to stop errors on machines with sound. The third one (the localisation issue -- milliseconden, you already know it) is written but currently at the end of my patch queue. That should bring the number of errors down to 0 on every machine with sound.
The final one, about how to deal with machines with no sound, yet trying to test something instead of skipping everything, I still have to think about.
Skipping is a standard way of saying "we are trying to test functionality A, but the host machine/configuration does not support it." This is not an error (as it is a valid configuration/setup on Windows machines), so producing errors only adds to the noise.
- Reece
2009/11/9 Joerg-Cyril.Hoehle@t-systems.com:
Reece Dunn wrote:
The question remains:
a) turn sound tests red on machines without sound (with 1 error only);
b) turn them blue (clearly marking skipped tests);
c) turn them green -- status quo (for dsound.ok and wave.ok).
310 ok(!err,"mci open waveaudio!tempfile.wav returned error: %d\n", err);
311 if(err) {
312     skip("tempfile.wav was not found (not saved), skipping\n");

Here, you are doing both (generating an error and skipping)!
On purpose. I wanted an error flagged to point people's attention to the issue of machines without sound, so that it can be discussed.
What has this to do with the issue of machines without sound?
The ok statement is saying that "mci open" has failed (for *any* reason). The skip (activated on *all* error codes) is saying that tempfile.wav is not found, even for cases such as err == MCIERR_CANNOT_LOAD_DRIVER!
I think that it is safe to say that if there is no audio driver present, all subsequent mci tests will fail and should also be skipped (ideally with just the single warning, e.g. by returning FALSE from that function and wrapping the other tests in an if).
My current line of thought is that broken and skip are somewhat *bad* things, because by not performing tests, we don't learn anything about the behaviour in "unusual" circumstances.
The point I am making is that:
1. The error and skip statements are saying the same thing;
2. But the skip message is misleading.
That is, it is valid to test MCI in the case when there is no sound device, but the tests are being confused by a lack of tempfile.wav.
For example:

mci.c:310: Test failed: mci open waveaudio!tempfile.wav returned error: 275
mci.c:312: Tests skipped: tempfile.wav was not found (not saved), skipping

If I have counted the offsets correctly, 275 is MCIERR_DEVICE_NOT_READY, *not* that tempfile.wav was not found.
The relevant question becomes: when is a configuration unusual? When can it be ignored for the purpose of Wine and when is it relevant?
Answer: no configuration is unusual.
- I don't care that tests fail on my father-in-law's machine with a broken sound card (or rather, the card may be ok, but somehow sndblast.dll doesn't manage to install it).
The point is, if the machine has a broken soundcard (or in Wine does not have an available sound driver, or is broken due to PulseAudio on Ubuntu), the tests should still pass (or be skipped) as is appropriate.
Yet testing on that machine was very revealing (black box testing with broken configurations tells you a lot about what's inside, like scientists learn a lot from patients with a local brain disease).
Sure. But (a) the tests should still succeed (and doing so, document the behaviour for both these cases) and (b) should succeed or fail *for the right reason*.
- Ge Geldorp's pointing out that some vmware engines do not support sound at all means that Windows without sound is not unusual -- unlike what I initially thought.
Sure.
My conclusion is that we need tests to specifically find out how MS-Windows behaves on such machines, so that Wine can mimic it. That might be written entirely without skip() or broken(), because it becomes expected behaviour.
But please make sure that they are failing in a way that is expected.
It also seems to me that there should be some tests that test opening an mci stream on a file that we know not to exist (e.g. this-file-does-not-exist.wav) so that those failure cases can be properly documented.
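E.g. (a sketch; whether MCIERR_FILE_NOT_FOUND is the right expected code on every platform is exactly what such a test would document):

err = mciSendString("open waveaudio!this-file-does-not-exist.wav alias nofile",
                    buf, sizeof(buf), NULL);
ok(err == MCIERR_FILE_NOT_FOUND,
   "open nonexistent file: expected MCIERR_FILE_NOT_FOUND, got %d\n", (int)err);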
if (wave_GetNumDevs() == 0) {
    skip("no sound, no tests\n");
    return;
}

is *not* the answer IMHO. Turning all lights green is not a primary goal, it's a secondary/subordinated one.
You have the basic API and parameter tests that can be done. You have the playback tests. And you have the playback tests for non-existent files.
In the above example, you have the case of what happens when there are no devices available. Here, I would argue that skipping at that point does not make sense. However, at the mciSendString("open ...") call, when you receive an MCIERR_CANNOT_LOAD_DRIVER (or whatever the error code is that is returned), *then* you skip (saying "unable to load the mci driver").
What's the purpose of skip()? Nothing but generate blue colour in test.winehq (use trace(); return; if all you need is print something and avoid other tests). What's the purpose of the blue colour?
- A call for action?
+ "please run again in the english locale so I can perform locale-dependent tests"; + "please install sound so that you'll know whether it works on BSD" + "please install Gecko so I can perform more tests"
- ...?
No, skip is saying that the tests cannot be run as the environment does not support the functionality we require. Do you fail the Win9x tests just because they don't support *W API variants? Or Windows environments that don't have DirectX 7, 8, 9 or 10 installed?
Shouldn't this be something like:
if(err == MCIERR_INVALID_FILE) {
    skip("tempfile.wav was not found (not saved?), skipping\n");
No. If any error happens in open, we can't continue. Perhaps you mean:
if (err) {
    ok(err==MCIERR_FILE_NOT_FOUND, "...\n"); /* the expected situation */
    skip("tempfile.wav was not found (not saved?), skipping\n");
I meant MCIERR_FILE_NOT_FOUND, but the point was not to error in this case. Therefore, something like this:
err = mciSendString("open ....");
switch (err) {
case MCIERR_FILE_NOT_FOUND:
    skip("tempfile.wav was not found (not saved?)");
    return;
case MCIERR_CANNOT_LOAD_DRIVER:
case MCIERR_...:
    skip("unable to open the mci sound driver (%s) - is sound enabled?", dbg_mcierr(err));
    return;
default:
    ok(!err, "mciSendString('open') failed, expected NOERROR, got %s", dbg_mcierr(err));
}
But then, as you said, any failure here means that you cannot proceed. However, the main cases are:
1. tempfile.wav is not present;
2. sound is not present on the system.
The other errors (MCIERR_UNRECOGNIZED_COMMAND, MCIERR_HARDWARE, MCIERR_OUT_OF_MEMORY) are either:
1. situations that cannot occur in the given test;
2. unexpected error cases (i.e. situations that we do not expect on a normal run).
Therefore, the above code (with the correct error values) should be sufficient. If we find that other situations need correcting (due to failing tests), they can be looked at then.
- Reece
Reece,
I think we reached agreement on:
a) Tests should "usually" succeed.
b) Machines without sound are usual (e.g. in virtual server environments).
c) I'll modify the MCI tests to succeed on machines without sound.
But for some other points, I have trouble following you.
The relevant question becomes: when is a configuration unusual? When can it be ignored for the purpose of Wine and when is it relevant?
Answer: no configuration is unusual.
Huh?
The point is, if the machine has a broken soundcard (or in Wine does not have an available sound driver, or is broken due to PulseAudio on Ubuntu), the tests should still pass (or be skipped) as is appropriate.
This seems wrong (in general) to me, even though Paul somehow expresses an opinion similar to yours. I say you cannot write a program meant to run in a completely unknown environment, doing comprehensive tests and not stumbling upon strange behaviour. The program is specifically designed to detect and flag deviant behaviour. All that can be achieved in a program is to be prepared for a known set of scenarios (a broken soundcard is such a scenario), aka use-cases. Outside these known use-cases, the tests throw an error. Otherwise there's no point in ok() at all.
I agree that for the known use-cases (these explicitly include configurations without sound), the tests should succeed.
What can be done is what you say:
If we find that other situations need correcting (due to failing tests), they can be looked at then.
... then decide whether the cause of the failing test is "unusual" or not.
I feel that I shall stop discussing that issue. If the consensus is good enough, we need not take this further to eventually discover that the sole disagreement is on the exact meaning of some English words.
It also seems to me that there should be some tests that test opening an mci stream on a file that we know not to exist (e.g. this-file-does-not-exist.wav) so that those failure cases can be properly documented.
Good idea, esp. that w95 returns different error codes for foo.wav and bar\foo.wav!
The point I am making is that:
- The error and skip statements are saying the same thing;
- But the skip message is misleading.
Indeed, sorry for that. There's a dependency. The tests depend on record working to generate the file to be played, instead of including one in git. So they fail to test machines that can play but not record. Obviously I'll have to do something else in the yet to be written MIDI tests.
BTW, this raises the question whether Wine should include a tiny wave file for completeness. Invoking the MCI Sound command always plays some sound.
If I have counted the offsets correctly 275 is MCIERR_DEVICE_NOT_READY, *not* that tempfile.wav was not found.
Wrong. 275 - 256 is 19, which you look up in mmsystem.h:

#define MCIERR_FILE_NOT_FOUND (MCIERR_BASE + 19)
default:
    ok(!err, "mciSendString('open') failed, expected NOERROR, got %s", dbg_mcierr(err));
}
In the default case, if (err) you must return (with or without skip message). You cannot continue if open fails. So it summarizes to:

if (err) {
    /* list of situations that were analysed as "usual": */
    ok(err==MCIERR_FILE_NOT_FOUND || err==MCIERR_CANNOT_LOAD_DRIVER || err==xyz ...);
    skip("Cannot test playback, cause %s", dbg_mcierr(err));
    return;
}
case MCIERR_CANNOT_LOAD_DRIVER:
2. sound is not present on the system.
Perhaps there's a misconception? That's not how MS-Windows systems without sound drivers present themselves. Ge's, Paul's and my father-in-law's machines all include the MCI (and waveaudio.dll etc.). It's only at WAVE_Open time that failure is detected, e.g. at Cue, Record or Play, and the error is MCIERR_WAVE_INPUTSUNSUITABLE, for instance. (That's not entirely true; it does not explain the setformat pcm failure.) I.e. no mciwave present at all is, so far, an "unusual" situation, and generates an error.
Regards, Jörg Höhle.
On 11/09/2009 07:25 PM, Joerg-Cyril.Hoehle@t-systems.com wrote:
Reece,
I think we reached agreement on: a) Tests should "usually" succeed.
I didn't read that in any of the emails. Tests should always succeed. The broken() and skip() will help in achieving that.
The point is, if the machine has a broken soundcard (or in Wine does not have an available sound driver, or is broken due to PulseAudio on Ubuntu), the tests should still pass (or be skipped) as is appropriate.
This seems wrong (in general) to me, even though Paul somehow expresses an opinion similar to yours. I say you cannot write a program meant to run in a completely unknown environment, doing comprehensive tests and not stumbling upon strange behaviour.
I'm now busy, for example, with the eventlog stuff. I create loads of tests and run them on my pretty clean boxes (W95 up to Win7). Only when they pass on all boxes do I send the patches. If test.winehq.org shows failures for these new tests, they have to be dealt with.
I agree that not everybody has this multitude of test boxes but there are always people around willing to test new tests on their Windows boxes (real and/or virtual).
Regards, Jörg Höhle.
2009/11/9 Paul Vriens paul.vriens.wine@gmail.com:
On 11/09/2009 07:25 PM, Joerg-Cyril.Hoehle@t-systems.com wrote:
Reece,
I think we reached agreement on: a) Tests should "usually" succeed.
I didn't read that in any of the emails. Tests should always succeed. The broken() and skip() will help in achieving that.
I used "usually" succeed to avoid handling out-of-memory cases and other similar cases.
- Reece
2009/11/9 Joerg-Cyril.Hoehle@t-systems.com:
Reece,
I think we reached agreement on:
a) Tests should "usually" succeed.
b) Machines without sound are usual (e.g. in virtual server environments).
c) I'll modify the MCI tests to succeed on machines without sound.
But for some other points, I have trouble following you.
Yes.
The relevant question becomes: when is a configuration unusual? When can it be ignored for the purpose of Wine and when is it relevant?
Answer: no configuration is unusual.
Huh?
You asked "when is a configuration unusual?" That is, are the following unusual: 1/ using a non-English locale (e.g. German or Polish); 2/ using a non-Western locale (e.g. Chinese or Japanese); 3/ using classic theme in XP or later; 4/ using 120DPI or higher; 5/ using a high-contrast theme; 6/ not having DirectX installed; 7/ having a Windows install on a drive other than C:\ 8/ ...
My point is that none of these (and other configurations/setups in Windows) should be considered unusual. They may (or may not) be the default, but they are possible.
From the perspective of Wine (and the Wine tests), if it is possible in Windows (on VMware or natively), then it should be handled properly in the tests and not fail them.
The point is, if the machine has a broken soundcard (or in Wine does not have an available sound driver, or is broken due to PulseAudio on Ubuntu), the tests should still pass (or be skipped) as is appropriate.
This seems wrong (in general) to me, even though Paul somehow expresses an opinion similar to yours. I say you cannot write a program meant to run in a completely unknown environment, doing comprehensive tests and not stumbling upon strange behaviour.
Write the tests initially for the known valid environments, then fix up issues as they are seen on test.winehq.org (for example, if some of the comctl32/user32 tests fail because they are run in 120DPI instead of 96DPI).
You cannot cater for a completely unknown environment, but those on test.winehq.org are not unknown.
Also, it is not feasible to always do comprehensive tests (e.g. on Win9x when *W functions are not present). The point is to do what you can in those cases.
The program is specifically designed to detect and flag deviant behaviour. All that can be achieved in a program is to be prepared for a known set of scenarios (a broken soundcard is such a scenario), aka use-cases. Outside these known use-cases, the tests throw an error. Otherwise there's no point in ok() at all.
You have a set of defined behaviour for the mci API when there is a sound card present. Good -- the tests should cover that.
You also have a set of defined behaviour for the mci API when a sound card/driver is not present. Good -- the tests should handle this (e.g. by defining the set of valid return values).
I agree that for the known use-cases (these explicitly include configurations without sound), the tests should succeed.
Yes.
What can be done is what you say:
If we find that other situations need correcting (due to failing tests), they can be looked at then.
... then decide whether the cause of the failing test is "unusual" or not.
No -- look at *why* the test is failing. This will either be that the test is doing something wrong (e.g. not creating a test file), or that it is not covering a valid situation.
The cause being "unusual" or not has no relevance. Either the test is wrong, or Windows is broken.
The point I am making is that:
1. The error and skip statements are saying the same thing;
2. But the skip message is misleading.
Indeed, sorry for that. There's a dependency. The tests depend on record working to generate the file to be played, instead of including one in git. So they fail to test machines that can play but not record.
Sure.
Obviously I'll have to do something else in the yet to be written MIDI tests.
BTW, this raises the question whether Wine should include a tiny wave file for completeness. Invoking the MCI Sound command always plays some sound.
Is it possible to generate the wav file programmatically, without using the record functionality?
If I have counted the offsets correctly 275 is MCIERR_DEVICE_NOT_READY, *not* that tempfile.wav was not found.
Wrong. 275 - 256 is 19, which you look up in mmsystem.h:

#define MCIERR_FILE_NOT_FOUND (MCIERR_BASE + 19)
Ah yes.
default:
    ok(!err, "mciSendString('open') failed, expected NOERROR, got %s", dbg_mcierr(err));
}
In the default case, if (err) you must return (with or without skip message). You cannot continue if open fails. So it summarizes to:

if (err) {
    /* list of situations that were analysed as "usual": */
    ok(err==MCIERR_FILE_NOT_FOUND || err==MCIERR_CANNOT_LOAD_DRIVER || err==xyz ...);
    skip("Cannot test playback, cause %s", dbg_mcierr(err));
    return;
}
case MCIERR_CANNOT_LOAD_DRIVER:
2. sound is not present on the system.
Perhaps there's a misconception? That's not how MS-Windows systems without sound drivers present themselves. Ge's, Paul's and my father-in-law's machines all include the MCI (and waveaudio.dll etc.). It's only at WAVE_Open time that failure is detected, e.g. at Cue, Record or Play, and the error is MCIERR_WAVE_INPUTSUNSUITABLE, for instance. (That's not entirely true; it does not explain the setformat pcm failure.) I.e. no mciwave present at all is, so far, an "unusual" situation, and generates an error.
In the example I gave, don't look at the exact error codes -- look more at the situations (tempfile.wav does not exist, sound not available).
Also,
if (err==MCIERR_FILE_NOT_FOUND || err==...) {
    skip("Cannot test playback (err=%s)", dbg_mcierr(err));
    return;
}
ok(!err, ...);
The point here is that tempfile.wav not existing, sound not available, or a broken sound driver should not cause the ok line to fail. These are valid situations that mean that the playback tests cannot be run.
Now, arguably the tests should be modified so that tempfile.wav always exist. This would then address a chunk of the test failures that are happening on all machines at present.
IIUC, that file is being generated through the record API of mci? Isn't there a way to programmatically create the WAV/PCM data through the use of math.h functions?
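Something along these lines might do (a sketch; the function name, the 11025 Hz rate and the 440 Hz tone are arbitrary choices; it relies on the little-endian layout of all Windows targets, and DWORD, WORD and WAVE_FORMAT_PCM come from the headers mci.c already includes):

#include <stdio.h>
#include <math.h>

static void create_tempfile_wav(const char *name)
{
    enum { rate = 11025, nsamples = rate }; /* one second of 8-bit mono */
    struct {
        char riff[4]; DWORD filelen; char wave[4];
        char fmt[4]; DWORD fmtlen;
        WORD tag, channels; DWORD samplerate, bytespersec;
        WORD blockalign, bitspersample;
        char data[4]; DWORD datalen;
    } hdr = { {'R','I','F','F'}, 36 + nsamples, {'W','A','V','E'},
              {'f','m','t',' '}, 16,
              WAVE_FORMAT_PCM, 1, rate, rate, 1, 8,
              {'d','a','t','a'}, nsamples };
    FILE *f = fopen(name, "wb");
    DWORD i;

    if (!f) return;
    fwrite(&hdr, sizeof(hdr), 1, f);
    for (i = 0; i < nsamples; i++) /* 440 Hz sine, centered on 0x80 */
        fputc((int)(127.5 + 127.0 * sin(2 * 3.14159265 * 440.0 * i / rate)), f);
    fclose(f);
}

The play tests would then call create_tempfile_wav("tempfile.wav") up front, and the dependency on record disappears.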
- Reece