Greetings! Hereby I ask people who have access to real Windows machines to help me gather information about the behaviour of cross-compiled conformance tests on the different platforms. Please go to http://afavant.elte.hu/~wferi/wine for details. If you can build the tests in the Wine tree with MSVC, then packaging the compiled binaries would also be a valuable addition. You will also find a packaging script on the page. Thanks for your time, Feri.
Ferenc Wagner wrote:
Greetings!
Hereby I ask people who have access to real Windows machines to help me gather information about the behaviour of cross-compiled conformance tests on the different platforms.
Please go to http://afavant.elte.hu/~wferi/wine for details.
If you can build the tests in the Wine tree with MSVC, then packaging the compiled binaries would also be a valuable addition. You will also find a packaging script on the page.
Thanks for your time, Feri.
On Windows 2003 server, one of the tests actually created a GPF (cannot write to memory address 0x00000000). In any case - attached are the results. I'll probably only get around to compiling them on VC Sunday. Shachar -- Shachar Shemesh Open Source integration consultant Home page & resume - http://www.shemesh.biz/ Tests from build 20030829 advapi32.dll:registry (test 1) registry.c:97: Test failed: value set to 'xxxxxxxxxx' instead of 'Te' registry.c:98: Test failed: data set to 'xxxxxxx' instead of 'foobar' registry.c:112: Test failed: data set to 'xxxxxxx' instead of 'foobar' registry: 56 tests executed, 0 marked as todo, 3 failures. advapi32.dll:registry done comctl32.dll:dpa (test 2) dpa: 4 tests executed, 0 marked as todo, 0 failures. comctl32.dll:dpa done dsound.dll:dsound (not compiled) dsound.dll:propset (not compiled) gdi32.dll:generated (test 3) generated: 3892 tests executed, 0 marked as todo, 0 failures. gdi32.dll:generated done kernel32.dll:alloc (test 4) alloc: 58 tests executed, 0 marked as todo, 0 failures. kernel32.dll:alloc done kernel32.dll:atom (test 5) atom: 229398 tests executed, 0 marked as todo, 0 failures. kernel32.dll:atom done kernel32.dll:codepage (test 6) codepage: 2 tests executed, 0 marked as todo, 0 failures. kernel32.dll:codepage done kernel32.dll:console (test 7) console: 275 tests executed, 0 marked as todo, 0 failures. kernel32.dll:console done kernel32.dll:directory (test 8) directory: 51 tests executed, 0 marked as todo, 0 failures. kernel32.dll:directory done kernel32.dll:drive (test 9) drive: 160 tests executed, 0 marked as todo, 0 failures. kernel32.dll:drive done kernel32.dll:environ (test 10) environ: 39 tests executed, 0 marked as todo, 0 failures. kernel32.dll:environ done kernel32.dll:file (test 11) file.c:265: Test failed: couldn't create file "testfi/" (err=123) file.c:524: Test failed: CopyFileA: unexpected error 80 file.c:556: Test failed: CopyFileW: unexpected error 80 file.c:585: Test failed: CREATE_NEW should fail if file exists and last error value should be ERROR_SUCCESS file.c:611: Test failed: CREATE_NEW should fail if file exists and last error value should be ERROR_SUCCESS file.c:623: Test failed: DeleteFileA(NULL) returned ret=0 error=3 file.c:627: Test failed: DeleteFileA("") returned ret=0 error=3 file.c:639: Test failed: DeleteFileW(NULL) returned ret=0 error=3 file.c:643: Test failed: DeleteFileW("") returned ret=0 error=3 file.c:799: Test failed: FindFirstFile on Root directory should Fail file.c:806: Test failed: Bad Error number 2 file.c:679:Current offset = 0015 file.c:708:Current offset = 0015 file: 487281 tests executed, 0 marked as todo, 11 failures. kernel32.dll:file done kernel32.dll:format_msg (test 12) format_msg: 58 tests executed, 0 marked as todo, 0 failures. kernel32.dll:format_msg done kernel32.dll:generated (test 13) generated.c:542: Test failed: TYPE_ALIGNMENT(*(LPWIN32_STREAM_ID)0) == 8 (expected 4) generated: 842 tests executed, 0 marked as todo, 1 failure. 
kernel32.dll:generated done kernel32.dll:locale (test 14) locale.c:155: Test failed: GetTimeFormat got '' instead of '4' locale.c:156: Test failed: GetTimeFormat: got 1 instead of 2 locale.c:182: Test failed: GetTimeFormat got '8.@:56AM' instead of '8.@:56.@:AM' locale.c:183: Test failed: GetTimeFormat: got 9 instead of 12 locale.c:191: Test failed: GetTimeFormat got '' instead of '3' locale.c:192: Test failed: GetTimeFormat: got 1 instead of 2 locale.c:467: Test failed: GetDateFormat got '5/4/2002' instead of '5/4/02' locale.c:468: Test failed: GetDateFormat: got 9 instead of 7 locale.c:490: Test failed: GetDateFormat check DATE_YEARMONTH with null format expected ERROR_INVALID_FLAGS got return of '10' and error of '0' locale: 193 tests executed, 0 marked as todo, 9 failures. kernel32.dll:locale done kernel32.dll:path (test 15) path.c:514: Test failed: GetLongPathNameA: wrong return code, 112 instead of 48 path.c:904:TMP=C:\DOCUME~1\ADMINI~1.SUN\LOCALS~1\Temp path.c:915:TMP=C:\WINDOWS path.c:925:TMP=C:\ path.c:935:TMP=C: path: 1733 tests executed, 0 marked as todo, 1 failure. kernel32.dll:path done kernel32.dll:pipe (test 16) pipe.c:591:test 1 of 4: pipe.c:593:test 2 of 4: pipe.c:595:test 3 of 4: pipe.c:511:test_NamedPipe_2 starting pipe.c:458:exercizeServer starting pipe.c:469:Client connecting... pipe.c:480:connect failed, retrying pipe.c:236:alarmThreadMain pipe.c:250:serverThreadMain1 start pipe.c:268:Server calling ConnectNamedPipe... pipe.c:469:Client connecting... pipe.c:271:ConnectNamedPipe returned. pipe.c:276:Server reading... pipe.c:487:Client writing... pipe.c:278:Server done reading. pipe.c:282:Server writing... pipe.c:284:Server done writing. pipe.c:490:Client reading... pipe.c:289:Server done flushing. pipe.c:291:Server done disconnecting. pipe.c:268:Server calling ConnectNamedPipe... pipe.c:495:Client closing... pipe.c:469:Client connecting... pipe.c:271:ConnectNamedPipe returned. pipe.c:276:Server reading... pipe.c:487:Client writing... pipe.c:278:Server done reading. pipe.c:282:Server writing... pipe.c:284:Server done writing. pipe.c:490:Client reading... pipe.c:289:Server done flushing. pipe.c:291:Server done disconnecting. pipe.c:268:Server calling ConnectNamedPipe... pipe.c:495:Client closing... pipe.c:469:Client connecting... pipe.c:271:ConnectNamedPipe returned. pipe.c:276:Server reading... pipe.c:487:Client writing... pipe.c:278:Server done reading. pipe.c:282:Server writing... pipe.c:284:Server done writing. pipe.c:490:Client reading... pipe.c:289:Server done flushing. pipe.c:291:Server done disconnecting. pipe.c:268:Server calling ConnectNamedPipe... pipe.c:495:Client closing... pipe.c:469:Client connecting... pipe.c:271:ConnectNamedPipe returned. pipe.c:276:Server reading... pipe.c:487:Client writing... pipe.c:278:Server done reading. pipe.c:282:Server writing... pipe.c:284:Server done writing. pipe.c:490:Client reading... pipe.c:289:Server done flushing. pipe.c:291:Server done disconnecting. pipe.c:268:Server calling ConnectNamedPipe... pipe.c:495:Client closing... pipe.c:469:Client connecting... pipe.c:271:ConnectNamedPipe returned. pipe.c:276:Server reading... pipe.c:487:Client writing... pipe.c:278:Server done reading. pipe.c:282:Server writing... pipe.c:284:Server done writing. pipe.c:490:Client reading... pipe.c:289:Server done flushing. pipe.c:291:Server done disconnecting. pipe.c:268:Server calling ConnectNamedPipe... pipe.c:495:Client closing... pipe.c:469:Client connecting... pipe.c:271:ConnectNamedPipe returned. pipe.c:276:Server reading... 
pipe.c:487:Client writing... pipe.c:278:Server done reading. pipe.c:282:Server writing... pipe.c:284:Server done writing. pipe.c:490:Client reading... pipe.c:289:Server done flushing. pipe.c:291:Server done disconnecting. pipe.c:268:Server calling ConnectNamedPipe... pipe.c:495:Client closing... pipe.c:469:Client connecting... pipe.c:271:ConnectNamedPipe returned. pipe.c:276:Server reading... pipe.c:487:Client writing... pipe.c:278:Server done reading. pipe.c:282:Server writing... pipe.c:284:Server done writing. pipe.c:490:Client reading... pipe.c:289:Server done flushing. pipe.c:291:Server done disconnecting. pipe.c:268:Server calling ConnectNamedPipe... pipe.c:495:Client closing... pipe.c:469:Client connecting... pipe.c:271:ConnectNamedPipe returned. pipe.c:276:Server reading... pipe.c:487:Client writing... pipe.c:278:Server done reading. pipe.c:282:Server writing... pipe.c:284:Server done writing. pipe.c:490:Client reading... pipe.c:289:Server done flushing. pipe.c:291:Server done disconnecting. pipe.c:268:Server calling ConnectNamedPipe... pipe.c:495:Client closing... pipe.c:501:exercizeServer returning pipe.c:458:exercizeServer starting pipe.c:469:Client connecting... pipe.c:480:connect failed, retrying pipe.c:301:serverThreadMain2 pipe.c:319:Server calling ConnectNamedPipe... pipe.c:469:Client connecting... pipe.c:322:ConnectNamedPipe returned. pipe.c:327:Server reading... pipe.c:487:Client writing... pipe.c:329:Server done reading. pipe.c:332:Server writing... pipe.c:334:Server done writing. pipe.c:490:Client reading... pipe.c:319:Server calling ConnectNamedPipe... pipe.c:495:Client closing... pipe.c:469:Client connecting... pipe.c:322:ConnectNamedPipe returned. pipe.c:327:Server reading... pipe.c:487:Client writing... pipe.c:329:Server done reading. pipe.c:332:Server writing... pipe.c:334:Server done writing. pipe.c:490:Client reading... pipe.c:319:Server calling ConnectNamedPipe... pipe.c:495:Client closing... pipe.c:469:Client connecting... pipe.c:322:ConnectNamedPipe returned. pipe.c:327:Server reading... pipe.c:487:Client writing... pipe.c:329:Server done reading. pipe.c:332:Server writing... pipe.c:334:Server done writing. pipe.c:490:Client reading... pipe.c:319:Server calling ConnectNamedPipe... pipe.c:495:Client closing... pipe.c:469:Client connecting... pipe.c:322:ConnectNamedPipe returned. pipe.c:327:Server reading... pipe.c:487:Client writing... pipe.c:329:Server done reading. pipe.c:332:Server writing... pipe.c:334:Server done writing. pipe.c:490:Client reading... pipe.c:319:Server calling ConnectNamedPipe... pipe.c:495:Client closing... pipe.c:469:Client connecting... pipe.c:322:ConnectNamedPipe returned. pipe.c:327:Server reading... pipe.c:487:Client writing... pipe.c:329:Server done reading. pipe.c:332:Server writing... pipe.c:334:Server done writing. pipe.c:490:Client reading... pipe.c:319:Server calling ConnectNamedPipe... pipe.c:495:Client closing... pipe.c:469:Client connecting... pipe.c:322:ConnectNamedPipe returned. pipe.c:327:Server reading... pipe.c:487:Client writing... pipe.c:329:Server done reading. pipe.c:332:Server writing... pipe.c:334:Server done writing. pipe.c:490:Client reading... pipe.c:319:Server calling ConnectNamedPipe... pipe.c:495:Client closing... pipe.c:469:Client connecting... pipe.c:322:ConnectNamedPipe returned. pipe.c:327:Server reading... pipe.c:487:Client writing... pipe.c:329:Server done reading. pipe.c:332:Server writing... pipe.c:334:Server done writing. pipe.c:490:Client reading... pipe.c:319:Server calling ConnectNamedPipe... 
pipe.c:495:Client closing... pipe.c:469:Client connecting... pipe.c:322:ConnectNamedPipe returned. pipe.c:327:Server reading... pipe.c:487:Client writing... pipe.c:329:Server done reading. pipe.c:332:Server writing... pipe.c:334:Server done writing. pipe.c:490:Client reading... pipe.c:319:Server calling ConnectNamedPipe... pipe.c:495:Client closing... pipe.c:501:exercizeServer returning pipe.c:539:test_NamedPipe_2 returning pipe.c:597:test 4 of 4: pipe.c:61:test_CreateNamedPipe starting pipe.c:162:test_CreateNamedPipe returning pipe.c:599:all tests done pipe: 286 tests executed, 0 marked as todo, 0 failures. kernel32.dll:pipe done kernel32.dll:process (test 17) tests/process.c: 1 tests executed, 0 marked as todo, 0 failures. tests/process.c: 1 tests executed, 0 marked as todo, 0 failures. tests/process.c: 1 tests executed, 0 marked as todo, 0 failures. tests/process.c: 1 tests executed, 0 marked as todo, 0 failures. tests/process.c: 1 tests executed, 0 marked as todo, 0 failures. tests/process.c: 1 tests executed, 0 marked as todo, 0 failures. tests/process.c: 1 tests executed, 0 marked as todo, 0 failures. tests/process.c: 1 tests executed, 0 marked as todo, 0 failures. tests/process.c: 1 tests executed, 0 marked as todo, 0 failures. tests/process.c: 1 tests executed, 0 marked as todo, 0 failures. tests/process.c: 1 tests executed, 0 marked as todo, 0 failures. tests/process.c: 1 tests executed, 0 marked as todo, 0 failures. tests/process.c: 1 tests executed, 0 marked as todo, 0 failures. tests/process.c: 1 tests executed, 0 marked as todo, 0 failures. process.c:443: Test failed: StartupInfoA:lpTitle expected 'kernel32_crosstest.exe process ', got 'C:\Documents and Settings\Administrator.SUNTEST\Desktop\tests\kernel32_crosstest.exe' process.c:595: Test failed: StartupInfoA:lpTitle expected 'kernel32_crosstest.exe process ', got 'C:\Documents and Settings\Administrator.SUNTEST\Desktop\tests\kernel32_crosstest.exe' process.c:928: Test failed: StartupInfoA:lpDesktop expected '(null)', got 'WinSta0\Default' process.c:929: Test failed: StartupInfoA:lpTitle expected '(null)', got 'C:\Documents and Settings\Administrator.SUNTEST\Desktop\tests\kernel32_crosstest.exe' process.c:976: Test failed: StartupInfoA:lpDesktop expected '(null)', got 'WinSta0\Default' process.c:977: Test failed: StartupInfoA:lpTitle expected '(null)', got 'C:\Documents and Settings\Administrator.SUNTEST\Desktop\tests\kernel32_crosstest.exe' process.c:1059: Test failed: StartupInfoA:lpDesktop expected '(null)', got 'WinSta0\Default' process.c:1060: Test failed: StartupInfoA:lpTitle expected '(null)', got 'C:\Documents and Settings\Administrator.SUNTEST\Desktop\tests\kernel32_crosstest.exe' process.c:1095: Test failed: Wrong cursor position process: 343 tests executed, 0 marked as todo, 9 failures. kernel32.dll:process done kernel32.dll:profile (test 18) profile: 25 tests executed, 0 marked as todo, 0 failures. kernel32.dll:profile done kernel32.dll:thread (test 19) thread: 113 tests executed, 0 marked as todo, 0 failures. kernel32.dll:thread done msvcrt.dll:file (test 20) file: 23 tests executed, 0 marked as todo, 0 failures. msvcrt.dll:file done msvcrt.dll:scanf (test 21) scanf: 12 tests executed, 0 marked as todo, 0 failures. msvcrt.dll:scanf done netapi32.dll:access (test 22) access.c:110: Test failed: Bad Network Path: rc=0 access: 23 tests executed, 0 marked as todo, 1 failure. netapi32.dll:access done netapi32.dll:apibuf (test 23) apibuf: 15 tests executed, 0 marked as todo, 0 failures. 
netapi32.dll:apibuf done netapi32.dll:wksta (test 24) wksta.c:133: Test failed: Invalid level wksta.c:143: Test failed: access violation wksta.c:148: Test failed: STATUS_ACCESS_VIOLATION wksta: 22 tests executed, 0 marked as todo, 3 failures. netapi32.dll:wksta done ntdll.dll:env (test 25) env: 611 tests executed, 0 marked as todo, 0 failures. ntdll.dll:env done ntdll.dll:error (test 26) error.c:81: Test failed: STATUS_SMARTCARD_CERT_REVOKED (c0000389): got -2146892975, expected 1266 (or MID_NOT_FOUND) error.c:81: Test failed: STATUS_ISSUING_CA_UNTRUSTED (c000038a): got -2146892974, expected 1267 (or MID_NOT_FOUND) error.c:81: Test failed: STATUS_REVOCATION_OFFLINE_C (c000038b): got -2146892973, expected 1268 (or MID_NOT_FOUND) error.c:81: Test failed: STATUS_PKINIT_CLIENT_FAILURE (c000038c): got -2146892972, expected 1269 (or MID_NOT_FOUND) error.c:81: Test failed: STATUS_SMARTCARD_CERT_EXPIRED (c000038d): got -2146892971, expected 1270 (or MID_NOT_FOUND) error: 813 tests executed, 0 marked as todo, 5 failures. ntdll.dll:error done ntdll.dll:generated (test 27) generated: 1730 tests executed, 0 marked as todo, 0 failures. ntdll.dll:generated done ntdll.dll:large_int (test 28) large_int: 704 tests executed, 0 marked as todo, 0 failures. ntdll.dll:large_int done ntdll.dll:path (test 29) path.c:144: Test failed: Wrong result (6,6)/(0,0) for c:\nul:: path.c:144: Test failed: Wrong result (4,6)/(0,0) for c:NUL .... path.c:144: Test failed: Wrong result (4,6)/(0,0) for c:nul . . : path.c:144: Test failed: Wrong result (4,6)/(0,0) for c:prn:aaa path.c:144: Test failed: Wrong result (4,6)/(0,0) for c:nul:aaa path: 140 tests executed, 0 marked as todo, 5 failures. ntdll.dll:path done ntdll.dll:rtl (test 30) rtl: 600107 tests executed, 0 marked as todo, 0 failures. ntdll.dll:rtl done ntdll.dll:rtlbitmap (test 31) rtlbitmap: 352 tests executed, 0 marked as todo, 0 failures. ntdll.dll:rtlbitmap done ntdll.dll:rtlstr (test 32) rtlstr.c:1599: Test failed: (test 32): RtlIntegerToUnicodeString(32768, 2, [out]) assigns string "1000000000000000", expected: "1000000000000000" rtlstr.c:1602: Test failed: (test 32): RtlIntegerToUnicodeString(32768, 2, [out]) string has Length 38, expected: 32 rtlstr.c:1599: Test failed: (test 33): RtlIntegerToUnicodeString(65535, 2, [out]) assigns string "1111111111111111", expected: "1111111111111111" rtlstr.c:1602: Test failed: (test 33): RtlIntegerToUnicodeString(65535, 2, [out]) string has Length 38, expected: 32 rtlstr.c:1602: Test failed: (test 83): RtlIntegerToUnicodeString(32768, 2, [out]) string has Length 38, expected: 32 rtlstr.c:1602: Test failed: (test 84): RtlIntegerToUnicodeString(32768, 2, [out]) string has Length 38, expected: 32 rtlstr.c:245: Test failed: pRtlInitUnicodeString(&uni, 0) sets Length to 65532, expected 33920 rtlstr.c:248: Test failed: pRtlInitUnicodeString(&uni, 0) sets MaximumLength to 65534, expected 33922 rtlstr: 2800 tests executed, 0 marked as todo, 8 failures. ntdll.dll:rtlstr done ntdll.dll:string (test 33) ntdll.dll:string done oleaut32.dll:olefont (test 34) olefont: 4 tests executed, 0 marked as todo, 0 failures. 
oleaut32.dll:olefont done oleaut32.dll:safearray (test 35) safearray.c:234: Test failed: SAC(20,1,[1,0]), result 8, expected 0 safearray.c:243: Test failed: SAGE for vt 20 returned elemsize 8 instead of expected 0 safearray.c:264: Test failed: copy of SAC(20,1,[1,0]), result 8, expected 0 safearray.c:267: Test failed: SAGE for vt 20 returned elemsize 8 instead of expected 0 safearray.c:234: Test failed: SAC(21,1,[1,0]), result 8, expected 0 safearray.c:243: Test failed: SAGE for vt 21 returned elemsize 8 instead of expected 0 safearray.c:264: Test failed: copy of SAC(21,1,[1,0]), result 8, expected 0 safearray.c:267: Test failed: SAGE for vt 21 returned elemsize 8 instead of expected 0 safearray: 963 tests executed, 0 marked as todo, 8 failures. oleaut32.dll:safearray done oleaut32.dll:vartest (test 36) vartest.c:1762:======== Testing VarUI1FromXXX ======== vartest.c:1850:======== Testing VarUI2FromXXX ======== vartest.c:1937:======== Testing VarUI4FromXXX ======== vartest.c:2023:======== Testing VarI1FromXXX ======== vartest.c:2089:======== Testing VarI2FromXXX ======== vartest.c:2134:======== Testing VarI4FromXXX ======== vartest.c:2178:======== Testing VarR4FromXXX ======== vartest.c:2225:======== Testing VarR8FromXXX ======== vartest.c:2245:======== Testing VarDateFromXXX ======== vartest.c:2316:======== Testing VarBoolFromXXX ======== vartest.c:2372:======== Testing VarBSTRFromXXX ======== vartest.c:2605:======== Testing Hi-Level Variant API ======== vartest.c:2692:======== Testing different VARTYPES ======== vartest: 1875 tests executed, 0 marked as todo, 0 failures. oleaut32.dll:vartest done rpcrt4.dll:rpc (test 37) rpc.c:124: ** Uuid Conversion and Comparison Tests ** rpc: 901 tests executed, 0 marked as todo, 0 failures. rpcrt4.dll:rpc done shell32.dll:generated (test 38) generated: 380 tests executed, 0 marked as todo, 0 failures. shell32.dll:generated done shell32.dll:shlfileop (test 39) shlfileop: 121 tests executed, 0 marked as todo, 0 failures. shell32.dll:shlfileop done shlwapi.dll:clist (test 40) clist: 237 tests executed, 0 marked as todo, 0 failures. shlwapi.dll:clist done shlwapi.dll:generated (test 41) generated: 25 tests executed, 0 marked as todo, 0 failures. shlwapi.dll:generated done shlwapi.dll:path (test 42) path.c:93: Test failed: Expected ?query=x&return=y, but got query=x&return=y path: 33 tests executed, 0 marked as todo, 1 failure. shlwapi.dll:path done shlwapi.dll:shreg (test 43) shreg.c:171: Test failed: (44,43) shreg.c:195: Test failed: (44,43) shreg.c:205: Test failed: () shreg.c:206: Test failed: () shreg.c:207: Test failed: (44,43) shreg.c:218: Test failed: (44,43) shreg.c:240: Test failed: didn't open dest shreg: 39 tests executed, 0 marked as todo, 7 failures. shlwapi.dll:shreg done urlmon.dll:generated (test 44) generated: 6 tests executed, 0 marked as todo, 0 failures. urlmon.dll:generated done user32.dll:class (test 45) class: 83 tests executed, 0 marked as todo, 0 failures. user32.dll:class done user32.dll:generated (test 46) generated: 2115 tests executed, 0 marked as todo, 0 failures. user32.dll:generated done user32.dll:listbox (test 47) listbox.c:165: Testing single selection... listbox.c:167: ... with NOSEL listbox.c:169: Testing multiple selection... listbox.c:171: ... with NOSEL listbox: 48 tests executed, 0 marked as todo, 0 failures. 
user32.dll:listbox done user32.dll:sysparams (test 48) sysparams.c:1113:strict=0 sysparams.c:214:testing SPI_{GET,SET}BEEP sysparams.c:350:testing SPI_{GET,SET}MOUSE sysparams.c:455:testing SPI_{GET,SET}KEYBOARDSPEED sysparams.c:487:testing SPI_ICONHORIZONTALSPACING sysparams.c:530:testing SPI_{GET,SET}SCREENSAVETIMEOUT sysparams.c:564:testing SPI_{GET,SET}SCREENSAVEACTIVE sysparams.c:601:testing SPI_{GET,SET}KEYBOARDDELAY sysparams.c:634:testing SPI_ICONVERTICALSPACING sysparams.c:685:testing SPI_{GET,SET}ICONTITLEWRAP sysparams.c:718:testing SPI_{GET,SET}MENUDROPALIGNMENT sysparams.c:754:testing SPI_{GET,SET}DOUBLECLKWIDTH sysparams.c:785:testing SPI_{GET,SET}DOUBLECLKHEIGHT sysparams.c:817:testing SPI_{GET,SET}DOUBLECLICKTIME sysparams.c:869:testing SPI_{GET,SET}MOUSEBUTTONSWAP sysparams.c:895:testing SPI_GETFASTTASKSWITCH sysparams.c:910:testing SPI_{GET,SET}DRAGFULLWINDOWS sysparams.c:948:testing SPI_{GET,SET}WORKAREA sysparams.c:992:testing SPI_{GET,SET}SHOWSOUNDS sysparams.c:1042:testing SPI_{GET,SET}DESKWALLPAPER sysparams: 412 tests executed, 0 marked as todo, 0 failures. user32.dll:sysparams done user32.dll:win (test 49) win.c:91:main window 000A019E main2 000D031A desktop 00010010 child 000E0316 win.c:107:created child 001402FC win.c:120:created child of desktop 001502FC win.c:130:created child of child 001602FC win.c:140:created top-level 001702FC win.c:150:created owned top-level 001802FC win.c:160:created popup 001902FC win.c:170:created owned popup 001A02FC win.c:180:created top-level owned by child 001B02FC win.c:186:created popup owned by desktop 001C02FC win.c:192:created popup owned by child 001D02FC win.c:198:created WS_CHILD popup 001E02FC win.c:204:created owned WS_CHILD popup 001F02FC win.c:209:testing parent changes win.c:221:created child 002002FC win.c:247:created top-level 002102FC win.c:264:created popup 002202FC win.c:281:created child 002302FC win.c:298:created top-level 002402FC win.c:307:created owned popup 002502FC win.c:323:created owner 002602FC and popup 00070318 win.c:337:created owner 00080318 and popup 002702FC win.c:345:created owner 00090318 and popup 002802FC win: 373 tests executed, 0 marked as todo, 0 failures. user32.dll:win done user32.dll:wsprintf (test 50) wsprintf: 4 tests executed, 0 marked as todo, 0 failures. user32.dll:wsprintf done wininet.dll:generated (test 51) generated: 344 tests executed, 0 marked as todo, 0 failures. 
wininet.dll:generated done wininet.dll:http (test 52) http.c:90:Starting with flags 0x10000000 http.c:92:InternetOpenA <-- http.c:95:InternetOpenA --> http.c:101:InternetConnectA <-- http.c:78:Callback 00CC0004 0xdeadbeef INTERNET_STATUS_HANDLE_CREATED(60) 0022EB40 4 http.c:104:InternetConnectA --> http.c:108:HttpOpenRequestA <-- http.c:78:Callback 00CC0008 0xdeadbead INTERNET_STATUS_HANDLE_CREATED(60) 0022EDB8 4 http.c:122:HttpOpenRequestA --> http.c:126:HttpSendRequestA --> http.c:133:HttpSendRequestA <-- http.c:78:Callback 00CC000C 0xdeadbead INTERNET_STATUS_RESOLVING_NAME(10) 0103FB04 15 http.c:78:Callback 00CC000C 0xdeadbead INTERNET_STATUS_NAME_RESOLVED(11) 0103FAF4 15 http.c:78:Callback 00CC000C 0xdeadbead INTERNET_STATUS_CONNECTING_TO_SERVER(20) 0103FB9C 15 http.c:78:Callback 00CC000C 0xdeadbead INTERNET_STATUS_CONNECTED_TO_SERVER(21) 0103FD44 15 http.c:78:Callback 00CC000C 0xdeadbead INTERNET_STATUS_SENDING_REQUEST(30) 00000000 0 http.c:78:Callback 00CC000C 0xdeadbead INTERNET_STATUS_REQUEST_SENT(31) 0103FCE0 4 http.c:78:Callback 00CC000C 0xdeadbead INTERNET_STATUS_RECEIVING_RESPONSE(40) 00000000 0 http.c:78:Callback 00CC000C 0xdeadbead INTERNET_STATUS_RESPONSE_RECEIVED(41) 0103FD64 4 http.c:78:Callback 00CC000C 0xdeadbead INTERNET_STATUS_REDIRECT(110) 0103FBF4 33 http.c:78:Callback 00CC000C 0xdeadbead INTERNET_STATUS_RESOLVING_NAME(10) 0103FB04 15 http.c:78:Callback 00CC000C 0xdeadbead INTERNET_STATUS_NAME_RESOLVED(11) 0103FAF4 15 http.c:78:Callback 00CC000C 0xdeadbead INTERNET_STATUS_CONNECTING_TO_SERVER(20) 0103FB9C 15 http.c:78:Callback 00CC000C 0xdeadbead INTERNET_STATUS_CONNECTED_TO_SERVER(21) 0103FD44 15 http.c:78:Callback 00CC000C 0xdeadbead INTERNET_STATUS_SENDING_REQUEST(30) 00000000 0 http.c:78:Callback 00CC000C 0xdeadbead INTERNET_STATUS_REQUEST_SENT(31) 0103FCE0 4 http.c:78:Callback 00CC000C 0xdeadbead INTERNET_STATUS_RECEIVING_RESPONSE(40) 00000000 0 http.c:78:Callback 00CC000C 0xdeadbead INTERNET_STATUS_RESPONSE_RECEIVED(41) 0103FD64 4 http.c:78:Callback 00CC000C 0xdeadbead INTERNET_STATUS_REQUEST_COMPLETE(100) 0103FDAC 8 http.c:140:Option 0x17 -> 1 72 http.c:144:Option 0x22 -> 1 http://www.winehq.org/site/about http.c:149:Option 0x16 -> 1 HTTP/1.1 200 OK Accept-Ranges: bytes Transfer-Encoding: chunked Date: Sat, 30 Aug 2003 11:35:49 GMT Content-Type: text/html; charset=ISO-8859-1 Expires: Sat, 30 Aug 2003 11:35:49 GMT Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Server: Apache/2.0.40 (Red Hat Linux) X-Powered-By: PHP/4.2.2 Last-Modified: Sat, 30 Aug 2003 11:35:49 GMT Pragma: no-cache Via: 1.1 netcache (NetCache NetApp/5.3.1R3D1) http.c:154:Option 0x22 -> 1 http://www.winehq.org/site/about http.c:158:Option 0x5 -> 0 http://www.winehq.org/site/about (12150) http.c:163:Option 0x1 -> 1 text/html; charset=ISO-8859-1 http.c:166:Entery Query loop http.c:183:ReadFile -> 1 573 http.c:183:ReadFile -> 1 1904 http.c:172: Test failed: InternetQueryDataAvailable failed http.c:78:Callback 00CC000C 0xdeadbead INTERNET_STATUS_REQUEST_COMPLETE(100) 0103FDAC 8 http.c:78:Callback 00CC000C 0xdeadbead INTERNET_STATUS_CLOSING_CONNECTION(50) 00000000 0 http.c:78:Callback 00CC000C 0xdeadbead INTERNET_STATUS_CONNECTION_CLOSED(51) 00000000 0 http.c:78:Callback 00CC000C 0xdeadbead INTERNET_STATUS_HANDLE_CLOSING(70) 0022EDEC 4 http.c:78:Callback 00CC0008 0xdeadbeef INTERNET_STATUS_HANDLE_CLOSING(70) 0022EDEC 4 http.c:90:Starting with flags 0x0 http.c:92:InternetOpenA <-- http.c:95:InternetOpenA --> http.c:101:InternetConnectA <-- http.c:78:Callback 00CC0004 
0xdeadbeef INTERNET_STATUS_HANDLE_CREATED(60) 0022EB3C 4 http.c:104:InternetConnectA --> http.c:108:HttpOpenRequestA <-- http.c:78:Callback 00CC0008 0xdeadbead INTERNET_STATUS_HANDLE_CREATED(60) 0022EDB4 4 http.c:122:HttpOpenRequestA --> http.c:126:HttpSendRequestA --> http.c:78:Callback 00CC000C 0xdeadbead INTERNET_STATUS_SENDING_REQUEST(30) 00000000 0 http.c:78:Callback 00CC000C 0xdeadbead INTERNET_STATUS_REQUEST_SENT(31) 0022ECDC 4 http.c:78:Callback 00CC000C 0xdeadbead INTERNET_STATUS_RECEIVING_RESPONSE(40) 00000000 0 http.c:78:Callback 00CC000C 0xdeadbead INTERNET_STATUS_RESPONSE_RECEIVED(41) 0022ECC8 4 http.c:78:Callback 00CC000C 0xdeadbead INTERNET_STATUS_REDIRECT(110) 0022EBE0 33 http.c:78:Callback 00CC000C 0xdeadbead INTERNET_STATUS_CONNECTING_TO_SERVER(20) 0022EBF0 15 http.c:78:Callback 00CC000C 0xdeadbead INTERNET_STATUS_CONNECTED_TO_SERVER(21) 0022EBCC 15 http.c:78:Callback 00CC000C 0xdeadbead INTERNET_STATUS_SENDING_REQUEST(30) 00000000 0 http.c:78:Callback 00CC000C 0xdeadbead INTERNET_STATUS_REQUEST_SENT(31) 0022ECDC 4 http.c:78:Callback 00CC000C 0xdeadbead INTERNET_STATUS_RECEIVING_RESPONSE(40) 00000000 0 http.c:78:Callback 00CC000C 0xdeadbead INTERNET_STATUS_RESPONSE_RECEIVED(41) 0022ECC8 4 http.c:133:HttpSendRequestA <-- http.c:140:Option 0x17 -> 1 72 http.c:144:Option 0x22 -> 1 http://www.winehq.org/site/about http.c:149:Option 0x16 -> 1 HTTP/1.1 200 OK Accept-Ranges: bytes Transfer-Encoding: chunked Date: Sat, 30 Aug 2003 11:35:50 GMT Content-Type: text/html; charset=ISO-8859-1 Expires: Sat, 30 Aug 2003 11:35:50 GMT Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Server: Apache/2.0.40 (Red Hat Linux) X-Powered-By: PHP/4.2.2 Last-Modified: Sat, 30 Aug 2003 11:35:50 GMT Pragma: no-cache Via: 1.1 netcache (NetCache NetApp/5.3.1R3D1) http.c:154:Option 0x22 -> 1 http://www.winehq.org/site/about http.c:158:Option 0x5 -> 0 http://www.winehq.org/site/about (12150) http.c:163:Option 0x1 -> 1 text/html; charset=ISO-8859-1 http.c:166:Entery Query loop http.c:183:ReadFile -> 1 573 http.c:183:ReadFile -> 1 1904 http.c:183:ReadFile -> 1 4344 http.c:183:ReadFile -> 1 4344 http.c:183:ReadFile -> 1 4069 http.c:78:Callback 00CC000C 0xdeadbead INTERNET_STATUS_HANDLE_CLOSING(70) 0022EDE8 4 http.c:78:Callback 00CC0008 0xdeadbeef INTERNET_STATUS_HANDLE_CLOSING(70) 0022EDE8 4 http.c:242:read 0x0000cf04 bytes http: 30 tests executed, 0 marked as todo, 1 failure. wininet.dll:http done winmm.dll:wave (test 53) wave.c:257:found 1 WaveOut devices wave.c:281: 0: "Creative Sound Blaster PCI" 5.10 (1:100): channels=65535 formats=bffff support=002c wave.c:295:Testing invalid 2MHz format wave.c:428:found 1 WaveIn devices wave.c:452: 0: "Creative Sound Blaster PCI" 5.10 (1:101): channels=65535 formats=bffff wave.c:466:Testing invalid 2MHz format wave: 428 tests executed, 0 marked as todo, 0 failures. 
winmm.dll:wave done ws2_32.dll:sock (test 54) sock.c:688: **** STARTING TEST 0 **** sock.c:326:simple_server (a68) starting sock.c:335:simple_server (a68) ready sock.c:340:simple_server (a68): waiting for client sock.c:385:simple_client (5d0): starting sock.c:388:simple_client (5d0): server ready sock.c:385:simple_client (520): starting sock.c:388:simple_client (520): server ready sock.c:400:simple_client (5d0) connected sock.c:400:simple_client (520) connected sock.c:421:simple_client (5d0) exiting sock.c:340:simple_server (a68): waiting for client sock.c:421:simple_client (520) exiting sock.c:368:simple_server (a68) exiting sock.c:690: **** TEST 0 COMPLETE **** sock.c:688: **** STARTING TEST 1 **** sock.c:326:simple_server (e84) starting sock.c:335:simple_server (e84) ready sock.c:340:simple_server (e84): waiting for client sock.c:439:event_client (e04): starting sock.c:439:event_client (638): starting sock.c:441:event_client (e04): server ready sock.c:461:event_client (e04) connected sock.c:499:event_client (e04): all data sent - shutdown sock.c:340:simple_server (e84): waiting for client sock.c:520:event_client (e04): all data received sock.c:532:event_client (e04): close event sock.c:550:event_client (e04) exiting sock.c:441:event_client (638): server ready sock.c:461:event_client (638) connected sock.c:499:event_client (638): all data sent - shutdown sock.c:368:simple_server (e84) exiting sock.c:520:event_client (638): all data received sock.c:532:event_client (638): close event sock.c:550:event_client (638) exiting sock.c:690: **** TEST 1 COMPLETE **** sock: 128 tests executed, 0 marked as todo, 0 failures. ws2_32.dll:sock done winspool.drv:info (test 55) info: 10 tests executed, 0 marked as todo, 0 failures. winspool.drv:info done
Hello Ferenc,

Saturday, August 30, 2003, 4:50:02 AM, you wrote:

FW> Greetings!
FW> Hereby I ask people who have access to real Windows machines
FW> to help me gather information about the behaviour of cross-
FW> compiled conformance tests on the different platforms.

I've got Win2k SP4 and the kernel test process just hangs on my PC, after generating a number of GPFs :(

-- Best regards, Oleg mailto:xolegpro(a)rbcmail.ru
Hello Oleg,

Monday, September 1, 2003, 8:29:57 AM, you wrote:

FW>> Hereby I ask people who have access to real Windows machines
FW>> to help me gather information about the behaviour of cross-
FW>> compiled conformance tests on the different platforms.

OP> I've got Win2k SP4 and the kernel test process just hangs on my PC, after
OP> generating a number of GPFs :(

I've found that the process test loops here (on Win2k SP4):

    /* get all startup events up to the entry point break exception */
    do
    {
        ok(WaitForDebugEvent(&de, INFINITE), "reading debug event");
        ContinueDebugEvent(de.dwProcessId, de.dwThreadId, DBG_CONTINUE);
        if (de.dwDebugEventCode != EXCEPTION_DEBUG_EVENT) dbg++;
    } while (de.dwDebugEventCode != EXIT_PROCESS_DEBUG_EVENT);

It will not receive EXIT_PROCESS_DEBUG_EVENT, only EXCEPTION_DEBUG_EVENT first with ExceptionCode = STATUS_BREAKPOINT, then with ExceptionCode = STATUS_ACCESS_VIOLATION continually.

-- Best regards, Oleg mailto:xolegpro(a)rbcmail.ru
It will not receive EXIT_PROCESS_DEBUG_EVENT, only EXCEPTION_DEBUG_EVENT first with ExceptionCode = STATUS_BREAKPOINT, then with ExceptionCode = STATUS_ACCESS_VIOLATION continually.
It's likely the child process seg faults somewhere, and the test doesn't handle it. Why the child seg faults is another story... A+ -- Eric Pouech
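For illustration only, here is a minimal sketch (not the actual Wine test code) of how such a loop could avoid spinning on a faulting child: anything other than the initial breakpoint is passed back as unhandled, so the fault terminates the child and EXIT_PROCESS_DEBUG_EVENT eventually arrives.

    /* Hypothetical sketch, not taken from the Wine tree.  Continuing an access
     * violation with DBG_CONTINUE resumes the child at the same faulting
     * instruction, so it faults forever; DBG_EXCEPTION_NOT_HANDLED lets the
     * fault propagate and kill the child instead. */
    #include <windows.h>

    static void drain_debug_events(void)
    {
        DEBUG_EVENT de;
        DWORD status;

        do
        {
            if (!WaitForDebugEvent(&de, INFINITE)) break;
            status = DBG_CONTINUE;
            if (de.dwDebugEventCode == EXCEPTION_DEBUG_EVENT &&
                de.u.Exception.ExceptionRecord.ExceptionCode != EXCEPTION_BREAKPOINT)
                status = DBG_EXCEPTION_NOT_HANDLED;
            ContinueDebugEvent(de.dwProcessId, de.dwThreadId, status);
        } while (de.dwDebugEventCode != EXIT_PROCESS_DEBUG_EVENT);
    }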
Oleg Prokhorov wrote:
Hello Ferenc,
Saturday, August 30, 2003, 4:50:02 AM, you wrote:
FW> Greetings!
FW> Hereby I ask people who have access to real Windows machines
FW> to help me gather information about the behaviour of cross-
FW> compiled conformance tests on the different platforms.
I've got Win2k SP4 and the kernel test process just hangs on my PC, after generating a number of GPFs :(
Hangs also on WinME. Jakob
Jakob Eriksson <jakob(a)solidform.se> writes:
Oleg Prokhorov wrote:
I've got Win2k SP4 and the kernel test process just hangs on my PC, after generating a number of GPFs :(
Hangs also on WinME.
Interestingly enough, 2k SP3 seems to more or less pass it. By the way, could you send me ME test results? :) Just kill through the hangs! Thanks, Feri.
Ferenc Wagner wrote:
Jakob Eriksson <jakob(a)solidform.se> writes:
Oleg Prokhorov wrote:
I've got Win2k SP4 and the kernel test process just hangs on my PC, after generating a number of GPFs :(
Hangs also on WinME.
Interestingly enough, 2k SP3 seems to more or less pass it. By the way, could you send me ME test results? :) Just kill through the hangs!
Killing on ME results in ME lock up. ------------ Comment: Crashes on kernel32_crosstest.exe console /jakob ============================================================== Tests prepared Aug 31 2003 Operating system version: 4.90 - 9x family WinME .............................................................. Running 'C:\Windows\TEMP\winertest\msvcrt_crosstest.exe': file: 23 tests executed, 0 marked as todo, 0 failures. file: 23 tests executed, 0 marked as todo, 0 failures. -------------------------------------------------------------- .............................................................. Running 'C:\Windows\TEMP\winertest\msvcrt_crosstest.exe': scanf: 12 tests executed, 0 marked as todo, 0 failures. scanf: 12 tests executed, 0 marked as todo, 0 failures. -------------------------------------------------------------- .............................................................. Running 'C:\Windows\TEMP\winertest\msvcrt_crosstest.exe': Usage: C:\WINDOWS\TEMP\WINERT~1\MSVCRT~1.EXE test_name Valid test names: file scanf scanf -------------------------------------------------------------- .............................................................. Running 'C:\Windows\TEMP\winertest\user32_crosstest.exe': class: 0 tests executed, 0 marked as todo, 0 failures. class: 0 tests executed, 0 marked as todo, 0 failures. -------------------------------------------------------------- .............................................................. Running 'C:\Windows\TEMP\winertest\user32_crosstest.exe': generated: 2115 tests executed, 0 marked as todo, 0 failures. generated: 2115 tests executed, 0 marked as todo, 0 failures. -------------------------------------------------------------- .............................................................. Running 'C:\Windows\TEMP\winertest\user32_crosstest.exe': listbox.c:165: Testing single selection... listbox.c:167: ... with NOSEL listbox.c:169: Testing multiple selection... listbox.c:171: ... with NOSEL listbox: 48 tests executed, 0 marked as todo, 0 failures. listbox: 48 tests executed, 0 marked as todo, 0 failures. -------------------------------------------------------------- .............................................................. Running 'C:\Windows\TEMP\winertest\user32_crosstest.exe': sysparams.c:1113:strict=0 sysparams.c:214:testing SPI_{GET,SET}BEEP sysparams.c:350:testing SPI_{GET,SET}MOUSE sysparams.c:455:testing SPI_{GET,SET}KEYBOARDSPEED sysparams.c:487:testing SPI_ICONHORIZONTALSPACING sysparams.c:530:testing SPI_{GET,SET}SCREENSAVETIMEOUT sysparams.c:564:testing SPI_{GET,SET}SCREENSAVEACTIVE sysparams.c:601:testing SPI_{GET,SET}KEYBOARDDELAY sysparams.c:634:testing SPI_ICONVERTICALSPACING sysparams.c:685:testing SPI_{GET,SET}ICONTITLEWRAP sysparams.c:718:testing SPI_{GET,SET}MENUDROPALIGNMENT sysparams.c:754:testing SPI_{GET,SET}DOUBLECLKWIDTH sysparams.c:785:testing SPI_{GET,SET}DOUBLECLKHEIGHT sysparams.c:817:testing SPI_{GET,SET}DOUBLECLICKTIME sysparams.c:869:testing SPI_{GET,SET}MOUSEBUTTONSWAP sysparams.c:895:testing SPI_GETFASTTASKSWITCH sysparams.c:910:testing SPI_{GET,SET}DRAGFULLWINDOWS sysparams.c:948:testing SPI_{GET,SET}WORKAREA sysparams.c:992:testing SPI_{GET,SET}SHOWSOUNDS sysparams.c:998:SPI_{GET,SET}SHOWSOUNDS not supported on this platform sysparams.c:1042:testing SPI_{GET,SET}DESKWALLPAPER sysparams.c:1050:SPI_{GET,SET}DESKWALLPAPER not supported on this platform sysparams: 349 tests executed, 0 marked as todo, 0 failures. sysparams: 349 tests executed, 0 marked as todo, 0 failures. 
-------------------------------------------------------------- .............................................................. Running 'C:\Windows\TEMP\winertest\user32_crosstest.exe': sysparams: 349 tests executed, 0 marked as todo, 0 failures. -------------------------------------------------------------- .............................................................. Running 'C:\Windows\TEMP\winertest\user32_crosstest.exe': wsprintf: 2 tests executed, 0 marked as todo, 0 failures. wsprintf: 2 tests executed, 0 marked as todo, 0 failures. -------------------------------------------------------------- .............................................................. Running 'C:\Windows\TEMP\winertest\user32_crosstest.exe': Usage: C:\WINDOWS\TEMP\WINERT~1\USER32~1.EXE test_name Valid test names: class generated listbox sysparams win wsprintf wsprintf -------------------------------------------------------------- .............................................................. Running 'C:\Windows\TEMP\winertest\kernel32_crosstest.exe': alloc.c:208: Test failed: GlobalReAlloc failed to convert FIXED to MOVEABLE: error=87 alloc.c:213: Test failed: Converting from FIXED to MOVEABLE didn't REALLY work alloc.c:238: Test failed: Discarded memory we shouldn't have alloc.c:239: Test failed: GlobalUnlock Failed. alloc: 58 tests executed, 0 marked as todo, 4 failures. alloc: 58 tests executed, 0 marked as todo, 4 failures. -------------------------------------------------------------- .............................................................. Running 'C:\Windows\TEMP\winertest\kernel32_crosstest.exe': atom.c:55:WARNING: Unicode atom APIs are not supported on this platform atom: 163850 tests executed, 0 marked as todo, 0 failures. atom: 163850 tests executed, 0 marked as todo, 0 failures. -------------------------------------------------------------- .............................................................. Running 'C:\Windows\TEMP\winertest\kernel32_crosstest.exe': codepage: 2 tests executed, 0 marked as todo, 0 failures. codepage: 2 tests executed, 0 marked as todo, 0 failures. -------------------------------------------------------------- .............................................................. Running 'C:\Windows\TEMP\winertest\kernel32_crosstest.exe':
Jakob Eriksson <jakob(a)vmlinux.org> writes:
Ferenc Wagner wrote:
By the way, could you send me ME test results? :) Just kill through the hangs!
Killing on ME results in ME lock up.
:) Sorry, I was not clear enough. May I ask you to run my package? (http://afavant.elte.hu/~wferi/wine) ME is still missing from the collection. Thanks for your time and best wishes, Feri. PS: It was a very good tip to strip the executables. The zip shrank to almost a tenth of its original size!
Greetings again! Thank you very much for answering my call. The 9 reports for 5 different Windows versions gave some insight into the situation. So now I am heading in the following direction:
1. Native build tests. As a first step, I provide a collection of source/project/desktop files which I think are necessary for an MSVC build. Could somebody test it for me? (Get it from http://afavant.elte.hu/~wferi/wine.)
2. A simple batch file could be provided to zip up the results, if I knew which compression program to use and how. E.g., (win)zip cannot be found on the XP I have access to. Should I include one in the archive? Or as a separate link? Ideas? Anybody?
3. What features/views should the result page have? What to do with different results for the same Windows version? For native/cross builds? What to do with different releases? Surely some backlog must be kept for comparison or reference. How do you use it?
4. I need some guidance for integrating it into the WineHQ page. Design ideas (or improvements, if the current can be considered 'design') are also welcome.
Waiting for your comments, Feri.
On Tue, 2 Sep 2003, Ferenc Wagner wrote:
2. A simple batch file could be provided to zip up the results, if I knew which compression program to use and how. E.g., (win)zip cannot be found on the XP I have access to. Should I include one in the archive? Or as a separate link? Ideas? Anybody?
The winetests shell is coming along nicely; I don't think it's worth duplicating effort. It will do all the decompression, sending of results, cleanup.
3. What features/views should the result page have? What to do with different results for the same Windows version? For native/cross builds? What to do with different releases? Surely some backlog must be kept for comparison or reference. How do you use it?
The current matrix that you have is interesting. We should include Wine as one of the Windows versions. The numbers in the cells are rather confusing though. I suggest the cells be coloured green, yellow and red. The colour is based on the following:
-- green: all tests received passed
-- yellow: some tests received failed
-- red: all tests received failed
The yellow and red cells should be clickable and a popup should detail the problems.
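As a rough illustration (a hypothetical helper, assuming the report parser already knows per-cell pass/fail counts; it is not code from any existing script), the proposed classification boils down to:

    /* Hypothetical sketch: map a cell's pass/fail counts to the proposed colour.
     * A cell with no results at all would need separate treatment (e.g. grey). */
    static const char *cell_colour(unsigned int passed, unsigned int failed)
    {
        if (failed == 0) return "green";   /* all tests received passed  */
        if (passed == 0) return "red";     /* all tests received failed  */
        return "yellow";                   /* some tests received failed */
    }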
4. I need some guidance for integrating it into the WineHQ page. Design ideas (or improvements, if the current can be considered 'design') are also welcome.
We will need to integrate your scripts with the new mailing list. Please note that people may send results inline in the email, as text/plain attachments, or as encoded MIME attachments. Ideally we should be able to extract them out of the email regardless of how it was included/attached. For bonus points we should gunzip/unzip them if they were compressed. :) But all this is low priority because 99.99% of them will be sent by winetests I suspect, so we know exactly how they will be included. -- Dimi.
"Dimitrie O. Paun" <dimi(a)intelliware.ca> writes:
On Tue, 2 Sep 2003, Ferenc Wagner wrote:
2. A simple batch file could be provided to zip up the results, if I knew which compression program to use and how. E.g., (win)zip cannot be found on the XP I have access to. Should I include one in the archive? Or as a separate link? Ideas? Anybody?
The winetests shell is coming along nicely; I don't think it's worth duplicating effort. It will do all the decompression, sending of results, cleanup.
Maybe you misunderstood me. The question is about collecting the MSVC compiled test binaries. Or is that covered, too?
We should include Wine as one of the Windows versions.
Nobody submitted that...
I suggest the cells be coloured green, yellow and red. The colour is based on the following:
-- green: all tests received passed
-- yellow: some tests received failed
-- red: all tests received failed
What is in the cells?
The yellow and red cells should be clickable and a popup should detail the problems.
Now an error cell is clickable if there is some output. Even passed tests can produce some traces, although rarely.
Ideally we should be able to extract them out of the email regardless of how it was included/attached.
Not a serious problem. We have munpack and file and a handful of possibilities.
But all this is low priority because 99.99% of them will be sent by winetests I suspect, so we know exactly how they will be included.
Maybe the report formats should converge then. I find mine quite convenient, and have a parser for that... :) Feri.
On Tue, 2 Sep 2003, Ferenc Wagner wrote:
The winetests shell is coming along nicely; I don't think it's worth duplicating effort. It will do all the decompression, sending of results, cleanup.
Maybe you misunderstood me. The question is about collecting the MSVC compiled test binaries. Or is that covered, too?
Yes, the winetests.exe should contain the compiled test binaries as well.
We should include Wine as one of the Windows versions.
Nobody submitted that...
Right, I did not realize. To make it clearer, maybe we should control the columns manually, rather than dynamically. We know what we expect, and an empty column will also give an indication of what tests we're missing.
I suggest the cells be coloured green, yellow and red. The colour is based on the following:
-- green: all tests received passed
-- yellow: some tests received failed
-- red: all tests received failed
What is in the cells?
That's a good question. Maybe number of errors? Number of tests we received? I dunno...
The yellow and red cells should be clickable and a popup should detail the problems.
Now an error cell is clickable if there is some output. Even passed tests can produce some traces, although rarely.
Right, we are on the same wavelength. It's just that I find the output a bit too heavy, with the 3 numbers in there. A minor point though. But having less in the cells will also make them smaller, and we can fit more on the screen. This will be even better when most of them are green :)
Ideally we should be able to extract them out of the email regardless of how it was included/attached.
Not a serious problem. We have munpack and file and a handful of possibilities.
But all this is low priority because 99.99% of them will be sent by winetests I suspect, so we know exactly how they will be included.
Maybe the report formats should converge then. I find mine quite convenient, and have a parser for that... :)
For sure. The two efforts are very much complementary, and we should pool our efforts. winetests will handle just the client part, so if you need the output in a different format, just say so. -- Dimi.
Dimitrie O. Paun wrote:
On Tue, 2 Sep 2003, Ferenc Wagner wrote:
The winetests shell is comming along nicely, I don't think it's worth dupicating effort. It will do all the decompression, sending of results, cleanup.
Maybe you misunderstood me. The question is about collecting the MSVC compiled test binaries. Or is that covered, too?
Yes, the winetests.exe should contain the compiled test binaries as well.
Am I missing something here? Currently, winetests.exe contains cross-compiled tests, not MS Visual C compiled ones. *)
For sure. The two efforts are very much complementary, and we should pool our efforts. winetests will handle just the client part, so if you need the output in a different format, just say so.
Exactly. I will look into the output format. :-) regards, Jakob *) Which is not a problem IMHO, but I could see the benefit of having the tests compiled with a Microsoft compiler too - maybe some of the subtler bugs could be found that way, what do I know.
On Wed, 3 Sep 2003, Jakob Eriksson wrote:
Am I missing something here? Currently, winetests.exe contains cross-compiled tests, not MS Visual C compiled ones. *)
Well, we should handle this through the Makefiles. Also the output should specify how the tests were compiled, it may help with error tracking. If we need, we can create 2 versions of the winetests.exe: one with MinGW compiled tests, one with MSVC. But do we really need to bother? -- Dimi.
"Dimitrie O. Paun" <dimi(a)intelliware.ca> writes:
On Wed, 3 Sep 2003, Jakob Eriksson wrote:
Am I missing something here? Currently, winetests.exe contains cross-compiled tests, not MS Visual C compiled ones. *)
Well, we should handle this through the Makefiles. Also the output should specify how the tests were compiled, it may help with error tracking. If we need, we can create 2 versions of the winetests.exe: one with MinGW compiled tests, one with MSVC. But do we really need to bother?
I am afraid so. The Dsound test does not compile with MinGW, while others compile but do not run (DPA_Create or similar). Testing MinGW is nice, but not enough. Feri.
On Wed, 3 Sep 2003, Ferenc Wagner wrote:
Well, we should handle this through the Makefiles. Also the output should specify how the tests were compiled, it may help with error tracking. If we need, we can create 2 versions of the winetests.exe: one with MinGW compiled tests, one with MSVC. But do we really need to bother?
I am afraid so. The Dsound test does not compile with MinGW, while others compile but do not run (DPA_Create or similar). Testing MinGW is nice, but not enough.
OK, here is what I say:
-- We should note in the test report how the tests were compiled.
-- The build process should support compiling them through MinGW, so that people under Linux can build winetests (for their own testing purposes).
-- I don't think we need to _distribute_ two binaries. If the MSVC one works better, the person designated to build and publicly post the winetests.exe should build the tests through MSVC. Once we fix everything so that MinGW works just as well, we might as well switch to that.
How does that sound? -- Dimi.
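To sketch the first point, one hypothetical way for a test binary to record which compiler built it is to check the usual predefined macros. This is illustration only, not code from the Wine tree:

    /* Hypothetical sketch: print the compiler in the test output. */
    #include <stdio.h>

    static const char *compiler_id(void)
    {
    #if defined(_MSC_VER)
        return "MSVC";
    #elif defined(__MINGW32__)
        return "MinGW";
    #elif defined(__GNUC__)
        return "gcc";
    #else
        return "unknown";
    #endif
    }

    int main(void)
    {
        printf("Tests built with: %s\n", compiler_id());
        return 0;
    }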
"Dimitrie O. Paun" <dimi(a)intelliware.ca> writes:
On Wed, 3 Sep 2003, Ferenc Wagner wrote:
If we need, we can create 2 versions of the winetests.exe: one with MinGW compiled tests, one with MSVC. But do we really need to bother?
I am afraid so. The Dsound test does not compile with MinGW, while others compile but do not run (DPA_Create or similar). Testing MinGW is nice, but not enough.
I don't think we need to _distribute_ two binaries. If the MSVC one works better, the person designated to build and publicly post the winetests.exe should build the tests through MSVC. Once we fix everything so that MinGW works just as well, we might as well switch to that.
How does that sound?
That sounds nice, but I have got the feeling that it is not only Wine's fault that the tests do not compile. MinGW also has its problems, and although new versions are definitely better, they are arguably not perfect (dsound misses some uuid features, urlmon not present). Their import libraries will probably be good enough for us one day, but it is not our department. Or did I misunderstand something? Feri.
On Wed, 3 Sep 2003, Ferenc Wagner wrote:
That sounds nice, but I have got the feeling that it is not only Wine's fault that the tests do not compile. MinGW also has its problems, and although new versions are definitely better, they are arguably not perfect (dsound misses some uuid features, urlmon not present). Their import libraries will probably be good enough for us one day, but it is not our department. Or did I misunderstand something?
No, you didn't. But things are not that bad -- as you say, we can use MSVC until MinGW gets fixed. And when that happens, we need not wait for wide adoption, since it's only the builder of winetests.exe that needs to upgrade. I think we've beaten this poor horse to death. :) Let's get the basic stuff in; we can worry about details like this a little later... -- Dimi.
Dimitrie O. Paun wrote:
-- I don't think we need to _distribute_ two binaries. If the MSVC one works better, the person designated to build and publicly post the winetests.exe should build the tests through MSVC. Once we fix everything so that MinGW works just as well, we might as well switch to that.
How does that sound?
Great, except if MSVC tests are the ones that work and the only ones that are used, there is little incentive to fix mingw. I think we really have to look at mingw and Wine as part of a greater whole. *) Having two winetests.exe or one compound winetests.exe with both sets of tests in it helps not only track down bugs in Wine, but also helps mingw become more MSVC conformant. I could easily extend winetests.exe to also include the MSVC compiled tests and run them, for instance, before the mingw compiled ones. regards, Jakob *) Each benefitting the other.
On Wed, 3 Sep 2003, Jakob Eriksson wrote:
Great, except if MSVC tests are the ones that work and the only ones that are used, there is little incentive to fix mingw. I think we really have to look at mingw and Wine as part of a greater whole. *)
I don't think the current situation justifies bundling 2 sets of executables in winetests.exe -- it makes the download much bigger and lengthens the test run. There are a few problems; let's submit some patches to MinGW and be done with it. When there is a release that works for us, we switch to it, and so we make sure it will not break in the future. -- Dimi.
"Dimitrie O. Paun" <dimi(a)intelliware.ca> writes:
On Tue, 2 Sep 2003, Ferenc Wagner wrote:
maybe we should control the columns manually, rather than dynamically. We know what we expect, and an empty column will also give an indication of what tests we're missing.
And waste precious space... But it is the way to go if we find a way to cope with several results per Windows version.
I suggest the cells be coloured green, yellow and red.
I dropped yellow, the cell contains the number of errors, and is always clickable. Have a look. But if you want to put it on WineHQ, the color scheme may have to change. Anyway, what I really care about is functionality. Does it have everything we need? Is it convenient to use? Feri.
On Wed, 3 Sep 2003, Ferenc Wagner wrote:
And waste precious space... But it is the way to go if we find a way to cope with several results per Windows version.
Not really wasting, it shows we are _missing_ important tests. I would suggest we collapse the tests for the same version into the same column. The columns should be: Win95, Win98, NT3, NT4, Win2K, XP, Server2K3, Wine. Various smaller versions/service packs/etc. of the above should be collapsed into the same column. Hence my colouring suggestion: if all versions fail, things are not good (red), if some pass we aren't in such a bad shape after all (yellow), if all pass we're golden (er, green :)).
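For instance (a hypothetical helper with made-up version strings; not part of the existing scripts), collapsing service packs could be as simple as matching on the base version:

    /* Hypothetical sketch: map a detailed version string to a column name,
     * so that e.g. "2000 SP3" and "2000 SP4" both land in the Win2K column. */
    #include <string.h>

    static const char *column_for(const char *version)
    {
        if (strstr(version, "2003")) return "Server2K3";
        if (strstr(version, "XP"))   return "XP";
        if (strstr(version, "2000")) return "Win2K";
        if (strstr(version, "NT 4")) return "NT4";
        if (strstr(version, "98"))   return "Win98";
        if (strstr(version, "95"))   return "Win95";
        return "Other";
    }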
I suggest the cells be coloured green, yellow and red.
I dropped yellow, the cell contains the number of errors, and is always clickable. Have a look. But if you want to put it on WineHQ, the color scheme may have to change.
Looks good, we can worry about the color scheme later.
Anyway, what I really care about is functionality. Does it have everything we need? Is it convenient to use?
It is getting there. Some comments:
1. What is the "20030829" link in the column? I'd move it out of there to somewhere before the table; the columns are busy as they are, and it's the same across all of them.
2. What is the "version" link for? Maybe that should list all the various versions that were tested for that OS.
3. What does the (1) mean? What is it good for?
4. What do "failed" and "N/C" mean? Maybe they should be links as well, explaining the problem.
5. I really think we should have empty columns for OSes we did not receive tests for.
6. For "green" tests, it would be nice if we open a small popup instead of going to another page, it's just too little information.
7. We should eventually link back to a master page listing all the previous results. Maybe a "prev"/"next" link on the page would be nice as well. Kinda tricky to implement next though... Maybe through some CGI, something like results?dir=next&curr=20030829, this one looks it up, and does a redirect.
8. The "dll:test" thing should maybe be a hyperlink to the cvsweb archive or better yet to the LXR-based tree so that you get the test when you click on it.
9. Very nice! It looks really good!
-- Dimi.
"Dimitrie O. Paun" <dimi(a)intelliware.ca> writes:
1. What is the "20030829" link in the column?
That is a link to the binaries giving this result. Now it is all the same everywhere, since we have one build only. If we had a "latest results" page, it could be different in each column. If we can recruit people to run the tests (which seems necessary anyway), then we will never need this and can drop it.
2. What is the "version" link for. Maybe that should list all the various versions that were tested for that OS.
This shows the version string of the testing OS. I moved that into the titles of the architecture names, and will add it to the reporter data, too.
3. What does the (1) mean? What is it good for?
It is meant to be the above. Since I found no meaningful way to collapse different tests for the same OS, it is always 1. It could be transformed into an optional "alternate results" link (perhaps test-by-test), since we will (hopefully) have a principal tester. Now I show the number of reports instead of the reporter data if there is more than one report.
4. What does "failed" and "N/C" mean? Maybe they should be links as well, explaining the problem.
N/C: not compiled. Can be made a link. failed: the test did not finish. If there is any output, it is a link. If the tester supplies extra data, that can be presented, too.
6. For "green" tests, it would be nice if we open a small popup instead of going to another page, it's just too little information.
I added titles and, if that's not enough, some JavaScript, which I do not find particularly elegant. Could do separate popups, but is any of these any better than the titles?
Maybe a "prev"/"next" link on the page would be nice as well.
Or rather an orthogonal view: same test for different builds.
8. The "dll:test" thing should maybe be a hyperlink to the cvsweb archive or better yet to the LXR-based tree so than you get the test when you click on it.
For snapshot based test binaries I can put in an extra link. I would rather use the test names for the above task. Sorry for misspelling your name, Dimi. If only I had not thought about it! Too late... Feri.
On September 5, 2003 09:28 pm, Ferenc Wagner wrote: I'd like to say that the latest result looks _really_ good. I hope we can work out the last few (minor) things and get this integrated into WineHQ real soon. A few comments:
-- The "Open popup" is a nice idea, but a bit confusing. When I opened it, I didn't know what it was for; I tried typing stuff in there. We should have a caption explaining that you should hover over test results to see stuff in there. Maybe we can disable the edit boxes so people can't type in there.
-- The "2000 SP3" column I think should be named just 2000, and we should collapse the various SPx in there.
I added titles and, if that's not enough, some JavaScript, which I do not find particularly elegant. Could do separate popups, but is any of these any better than the titles?
Titles are pretty cool, thanks!
Maybe a "prev"/"next" link on the page would be nice as well.
Or rather an orthogonal view: same test for different builds.
That's what I mean. In fact, I think the page should contain the results of only one version, like 20030829. The prev/next would let you cycle through different builds. This way, if we haven't already received results for, say Win ME for build 20030714, it will show up as "annoy" (by the way, the annoy link is broken, it should bring up a page explaining that we're missing that test, and what we need to fix that).
For snapshot based test binaries I can put in an extra link. I would rather use the test names for the above task.
Yes, that's what I had in mind too -- use the test name, just make that a link to the source of the test, so if something fails, a developer can click on the link and see exactly the code that fails. Don't forget to include the version number as well in the link; don't just point to HEAD as that is always changing.
Sorry for misspelling your name, Dimi. If only I had not thought about it! Too late...
No problem, thanks for noticing though! :) -- Dimi.
Ferenc Wagner wrote:
Maybe you misunderstood me. The question is about collecting the MSVC compiled test binaries. Or is that covered, too?
No... let's talk about that. I can see the benefit of running MSVC compiled tests too. Maybe we can include them too in winetests.exe somehow.
Maybe the report formats should converge then. I find mine quite convenient, and have a parser for that... :)
This we could probably fix. Maybe I could do the output exactly like yours. As soon as I have run your tests.zip on Windows ME to see what results.txt looks like I will get back to you on this. Jakob
Jakob Eriksson <jakob(a)vmlinux.org> writes:
I can see the benefit of running MSVC compiled tests too. Maybe we can include them too in winetests.exe somehow.
Two different builds should suffice, IMHO.
Maybe I could do the output exactly like yours.
That would be nice. I am not too rigid, though. Have a look at runtests.bat, and reproduce that. The second (blank) line in the result is not necessary, that is a peculiarity of ver, which does not seem to return enough information prior to XP anyway. I thought about distributing my own ver-sion. The build-tag at the end of the first line can contain anything. If you could arrange that crashing tests die silently (ie. without silly dialog boxes), that would be really great! Feri.
Ferenc Wagner wrote:
Jakob Eriksson <jakob(a)vmlinux.org> writes:
That would be nice. I am not too rigid, though. Have a look at runtests.bat, and reproduce that. The second (blank) line in the result is not necessary, that is a peculiarity of ver, which does not seem to return enough information prior to XP anyway. I thought about distributing my own ver-sion. The build-tag at the end of the first line can contain anything.
Ok, I have now tested runtests.bat (though not on ME as promised, because ... well, a long and sad story). First - can I leave out the "of 55" tests? It is kind of difficult for me to calculate how many tests there are in advance. Of course, I can do it - but it feels better if I know it is for a good cause. So, what I am proposing is something similar to this flow, had it been done as a BAT:
echo kernel32.dll:atom (test begin)>>results.txt
kernel32_crosstest.exe atom>>results.txt
echo kernel32.dll:atom done>>results.txt
Also - those not compiled, how about just not including them in the report? This I am more flexible about though.
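(In C, roughly the same flow could look like the sketch below: write the begin/done markers around each test and let the child append its output to results.txt. This is only an illustration of the idea, assuming a plain console runner; it is not the actual winetests.exe code, and run_one() is a made-up helper.)

  #include <windows.h>
  #include <stdio.h>
  #include <string.h>

  static void run_one(HANDLE out, const char *exe, const char *dll, const char *test)
  {
      char line[256], cmdline[MAX_PATH + 64];
      DWORD written;
      STARTUPINFOA si = { sizeof(si) };
      PROCESS_INFORMATION pi;

      sprintf(line, "%s:%s (test begin)\r\n", dll, test);
      WriteFile(out, line, (DWORD)strlen(line), &written, NULL);

      sprintf(cmdline, "%s %s", exe, test);
      si.dwFlags    = STARTF_USESTDHANDLES;
      si.hStdInput  = GetStdHandle(STD_INPUT_HANDLE);
      si.hStdOutput = out;                  /* child appends to results.txt */
      si.hStdError  = out;
      if (CreateProcessA(NULL, cmdline, NULL, NULL, TRUE, 0, NULL, NULL, &si, &pi))
      {
          WaitForSingleObject(pi.hProcess, INFINITE);
          CloseHandle(pi.hThread);
          CloseHandle(pi.hProcess);
      }

      sprintf(line, "%s:%s done\r\n", dll, test);
      WriteFile(out, line, (DWORD)strlen(line), &written, NULL);
  }

  int main(void)
  {
      SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, TRUE };   /* inheritable handle */
      HANDLE out = CreateFileA("results.txt", FILE_APPEND_DATA, FILE_SHARE_READ,
                               &sa, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);

      run_one(out, "kernel32_crosstest.exe", "kernel32.dll", "atom");
      CloseHandle(out);
      return 0;
  }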
If you could arrange that crashing tests die silently (ie. without silly dialog boxes), that would be really great!
Hm... I don't know. Is that possible? I mean, it is Windows itself that puts up these dialogs. Can Windows be instructed to shut up and log to a file instead or something? I don't know how to do it, I agree it would be great. Pointers or ideas from anyone listening would be most appreciated. regards, Jakob BTW, is "ver-sion" a pun? Good one. :-)
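(One Win32 mechanism that could help here -- a sketch only, not a claim about how winetests.exe ended up doing it: SetErrorMode() suppresses the system fault and critical-error dialogs, and the setting is inherited by child processes unless CreateProcess is called with CREATE_DEFAULT_ERROR_MODE.)

  #include <windows.h>

  int main(void)
  {
      /* No GPF or critical-error dialogs for this process and its children. */
      SetErrorMode(SEM_FAILCRITICALERRORS | SEM_NOGPFAULTERRORBOX | SEM_NOOPENFILEERRORBOX);

      /* ... spawn the individual *_crosstest.exe processes from here ... */
      return 0;
  }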
Jakob Eriksson <jakob(a)vmlinux.org> writes:
First - can I leave out the "of 55" tests? It is kind of difficult for me to calculate how many tests there are in advance. Of course, I can do it - but it feels better if I know it is for a good cause.
You know, it was not that handy for me, either. I do not need that, but the numbering does not make too much sense without the total -- all the fuss happened to provide sort of a "progress bar" for Dimitry. You will do it, won't you? :)
So, what I am proposing is something similar to this flow, had it been done as a BAT:
echo kernel32.dll:atom (test begin)>>results.txt
kernel32_crosstest.exe atom>>results.txt
echo kernel32.dll:atom done>>results.txt
All right. Instead of "(test begin)" simply put "starts", for consistency.
Also - those not compiled, how about just not including them in the report? This I am more flexible about though.
Not really the same. Tests can also be missing because the tester had to remove them, and I would like to discriminate in this respect. See XP_robi for example.
If you could arrange that crashing tests die silently (ie. without silly dialog boxes), that would be really great!
Hm... I don't know. Is that possible?
No idea. But the best would be to recover the error code and/or message somehow, and include it in the report. Like waitpid() on POSIX. If the tester has to click a couple of times, then so be it... It is surely impossible from a batch script, but since you have a C framework, let us try to make the most out of it!
BTW, is "ver-sion" a pun? Good one. :-)
Heh, I think it is. Glad you like it. Feri.
Ferenc Wagner wrote:
Jakob Eriksson <jakob(a)vmlinux.org> writes:
First - can I leave out the "of 55" tests? It is kind of difficult for me to calculate how many tests there are in advance. Of course, I can do it - but it feels better if I know it is for a good cause.
You know, it was not that handy for me, either. I do not need that, but the numbering does not make too much sense without the total -- all the fuss happened to provide sort of a "progress bar" for Dimitry. You will do it, won't you? :)
For the sake of Dimi's continued mental health, OK then. :-)
So, what I am proposing is something similar to this flow, had it been done as a BAT:
echo kernel32.dll:atom (test begin)>>results.txt
kernel32_crosstest.exe atom>>results.txt
echo kernel32.dll:atom done>>results.txt
All right. Instead of "(test begin)" simply put "starts", for consistency.
ok.
Also - those not compiled, how about just not including them in the report? This I am more flexible about though.
Not really the same. Tests can also be missing because the tester had to remove them, and I would like to discriminate in this respect. See XP_robi for example.
ok. I am not sure how to solve this right now, but I will figure something out. Can't be that hard.
Hm... I don't know. Is that possible?
No idea. But the best would be to recover the error code and/or message somehow, and include it in the report. Like waitpid() on POSIX. If the tester has to click a couple of times, then so be it... It is surely impossible from a batch script, but since you have a C framework, let us try to make the most out of it!
Sure, I have no idea how at this time though... regards, Jakob
Jakob Eriksson <jakob(a)vmlinux.org> writes:
the best would be to recover the error code and/or message somehow, and include it in the report. Like waitpid() on POSIX. If the tester has to click a couple of times, then so be it...
Sure, I have no idea how at this time though...
Check this out: http://msdn.microsoft.com/library/en-us/dllproc/base/getexitcodeprocess.asp Feri.
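(A minimal sketch of that waitpid()-like idea: wait for the test process and read its exit code with GetExitCodeProcess. The command line is just an example, and a crashed test typically comes back with an NTSTATUS-style code such as 0xC0000005, which the report could record instead of a dialog box.)

  #include <windows.h>
  #include <stdio.h>

  int main(void)
  {
      STARTUPINFOA si = { sizeof(si) };
      PROCESS_INFORMATION pi;
      char cmd[] = "kernel32_crosstest.exe atom";   /* example test invocation */
      DWORD code;

      if (!CreateProcessA(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
          return 1;
      WaitForSingleObject(pi.hProcess, INFINITE);
      if (GetExitCodeProcess(pi.hProcess, &code))
      {
          if (code >= 0xC0000000)                   /* looks like an exception code */
              printf("test crashed with exception 0x%08lx\n", code);
          else
              printf("test exited with code %lu\n", code);
      }
      CloseHandle(pi.hThread);
      CloseHandle(pi.hProcess);
      return 0;
  }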
On August 29, 2003 05:50 pm, Ferenc Wagner wrote:
Please go to http://afavant.elte.hu/~wferi/wine for details.
Hi Feri, This stuff looks great -- maybe it's time we integrate it into WineHQ... A few comments on the current state of affairs:
-- for the ME case, how can we have some results (up to kernel32.dll:codepage) and then have no results? Doesn't that mean that they failed?
-- maybe we need a small legend at the top of the page, below "Main summary for build..." Something like: Legend: <span class="pass">Tests pass in all reports</span> <span class="fail">Tests fail in all reports</span> <span class="mixed">Tests fail in some reports</span>
-- It would be nice to make the test names links to the test file. For example, assuming these test results were for the 20030813 build of wine, kernel32:alloc would be a link to: http://cvs.winehq.com/cvsweb/wine/dlls/kernel/tests/alloc.c?only_with_tag=Wi... This is tricky though, as it is not clear how to figure this thing out. Maybe the solution is to have a mode for tests where all they do is print their filename, like so: kernel32.alloc:dlls/kernel/tests/alloc.c (a rough sketch follows after this message).
-- Nice touch for the column span in "Unit tests for ... were not compiled...". Cool!
-- How do you assign the name to different reports for the same OS? Like in:
Win98
Win98 joel
Win98 kevin
Does the script do this automatically, or does it require manual intervention? It would be nice for it to be automatic, so when we install it at WineHQ, "it just runs" :)
-- Also, in the "XXX differences" section, shouldn't we have the exact version & ServicePack displayed between the OS name and the reporter link, as we are dealing with only one instance?
-- A few links at the top of the page would be nice: 1. To the main Testing page, something like /site/testing 2. To the next/prev set of results, like so /?tests="20030813"&dir="next" For this we need a bit of help from Jer to implement; it's not possible to implement it properly as a static link in the page.
Anyway, all these are minor nits and can be implemented later -- I think the important stuff is to get this integrated into WineHQ. We also need a "Testing" page, but that's another story :) Good stuff! -- Dimi.
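(For the "print their filename" mode, a purely illustrative sketch: the --source flag, the output format and the reliance on __FILE__ are assumptions for the example, not an existing option of the Wine test framework.)

  #include <stdio.h>
  #include <string.h>

  int main(int argc, char **argv)
  {
      if (argc > 1 && !strcmp(argv[1], "--source"))
      {
          /* __FILE__ is whatever path the compiler was given, e.g.
           * "dlls/kernel/tests/alloc.c" when built from the top of the tree. */
          printf("kernel32.alloc:%s\n", __FILE__);
          return 0;
      }
      /* ... otherwise run the actual tests ... */
      return 0;
  }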
"Dimitrie O. Paun" <dpaun(a)rogers.com> writes:
-- for the ME case, how can we have some results (up to kernel32.dll:codepage) and then have no results? Doesn't that mean that they failed?
No, this means that when the console test hung the tester killed the DOS box and thus did not run further tests. Jakob might implement a timeout or we could explain more.
-- It would be nice to make the test names links to the test file. [...] This is tricky though, as it is not clear how to figure this thing out.
Yes, it must be included in the output. I gave it a try, winsock and kernel32 are tricky, for example.
-- How do you assign the name to different reports for the same OS?
It is the name of the directory the data comes from. In principle, testers could provide their tags.
-- Also, in the "XXX differences" section, shouldn't we have the exact version & ServicePack displayed between the OS name and the reporter link, as we are dealing with only one instance?
Sure, but I do not have the information. Noted, though.
-- A few links at the top of the page would be nice: 1. To the main Testing page, something like /site/testing
I am not sure what this main Testing page is...
We also need a "Testing" page, but that's another story :)
I was on holiday last week and will possibly leave again, but will surely come back, just wait... Feri.
On Mon, 22 Sep 2003, Ferenc Wagner wrote:
"Dimitrie O. Paun" <dpaun(a)rogers.com> writes:
-- for the ME case, how can we have some results (up to kernel32.dll:codepage) and then have no results? Doesn't that mean that they failed?
No, this means that when the console test hung the tester killed the DOS box and thus did not run further tests. Jakob might implement a timeout or we could explain more.
Right, this was my point: it's more of a failure than not having run the test (displayed as "."). Maybe we should say "timeout" for these?
-- It would be nice to make the test names links to the test file. [...] This is tricky though, as it is not clear how to figure this thing out.
Yes, it must be included in the output. I gave it a try, winsock and kernel32 are tricky, for example.
Yes, I've noticed, it's cool (however, as you say, some of the links are wrong also).
-- How do you assign the name to different reports for the same OS?
It is the name of the directory the data comes from. In principle, testers could provide their tags.
How do you make sure they don't collide? If we are to install this on WineHQ and have it run automatically, I guess we'll have to create temp dirs. At which point, that stuff is useless anyway, so I think we should just drop it altogether. That is, instead of having stuff like:
Win95 Win95dimi Win95joe etc.
<rest of the table .... >
we should just drop that from the column header, and say
Win95:
<rest of the table .... >
-- Also, in the "XXX differences" section, shouldn't we have the exact version & ServicePack displayed between the OS name and the reporter link, as we are dealing with only one instance?
Sure, but I do not have the information. Noted, though.
How come? I thought Jakob includes a dump of the OS version structure, like so:
Operating system version:
dwMajorVersion=5
dwMajorVersion=0
dwBuildNumber=2195
dwPlatformId=2
szCSDVersion=Service Pack 3
wServicePackMajor=3
wServicePackMinor=0
wSuiteMask=0
wProductType=1
wReserved=30
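(For reference, a dump like that can be produced with GetVersionEx and an OSVERSIONINFOEX structure, roughly as below; the field names come from the Win32 structure, but the exact output layout of Jakob's reporter is only assumed here.)

  #include <windows.h>
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      OSVERSIONINFOEXA ver;

      /* OSVERSIONINFOEX needs NT4 SP6 or later; older systems only
       * fill in the plain OSVERSIONINFO fields. */
      memset(&ver, 0, sizeof(ver));
      ver.dwOSVersionInfoSize = sizeof(ver);
      if (!GetVersionExA((OSVERSIONINFOA *)&ver))
          return 1;

      printf("Operating system version:\n");
      printf("dwMajorVersion=%lu\n", ver.dwMajorVersion);
      printf("dwMinorVersion=%lu\n", ver.dwMinorVersion);
      printf("dwBuildNumber=%lu\n", ver.dwBuildNumber);
      printf("dwPlatformId=%lu\n", ver.dwPlatformId);
      printf("szCSDVersion=%s\n", ver.szCSDVersion);
      printf("wServicePackMajor=%u\n", (unsigned)ver.wServicePackMajor);
      printf("wServicePackMinor=%u\n", (unsigned)ver.wServicePackMinor);
      printf("wSuiteMask=%u\n", (unsigned)ver.wSuiteMask);
      printf("wProductType=%u\n", (unsigned)ver.wProductType);
      printf("wReserved=%u\n", (unsigned)ver.wReserved);
      return 0;
  }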
-- A few links at the top of the page would be nice: 1. To the main Testing page, something like /site/testing
I am not sure what this main Testing page is...
We will need a Testing status page on WineHQ, no? We need a master page which explains where to get the latest tests, what you need to do, links to the results, etc. Not done yet, but needed :)
I was on holiday last week and will possibly leave again, but will surely come back, just wait...
Cool, I can't wait to have this done and integrated in WineHQ. -- Dimi.
Dimitrie O. Paun wrote:
No, this means that when the console test hung the tester killed the DOS box and thus did not run further tests. Jakob might implement a timeout or we could explain more.
Just to let you know I'm not gone or anything: I have been moving (within Sweden) and also quit my job, so things have been busy. I'm still trolling the list though and will resume development of the tester app. regards, Jakob
"Dimitrie O. Paun" <dimi(a)intelliware.ca> writes:
On Mon, 22 Sep 2003, Ferenc Wagner wrote:
"Dimitrie O. Paun" <dpaun(a)rogers.com> writes:
-- for the ME case, how can we have some results (up to kernel32.dll:codepage) and then have no results? Doesn't that mean that they failed?
No, this means that when the console test hung the tester killed the DOS box and thus did not run further tests. Jakob might implement a timeout or we could explain more.
Right, this was my point: it's more of a failure, than not having run the test (displayed as "."). Maybe we should say "timeout" for these?
For the console test, yes. I just did not care, because I was promised a better run.
-- How do you assign the name to different reports for the same OS?
It is the name of the directory the data comes from. In principle, testers could provide their tags.
How do you make sure they don't collide?
We could put up a little cgi which asks for a tag and makes sure it is unique. Or simply append a number if the submission is done by email. I do not expect too many concurrent submissions anyway...
that stuff is useless anyway, so I think we should just drop it altogether.
I see your point, but would like to make sure it is easy to pinpoint a given submission. Names are useful for that. But I have a real problem here: which results to put in the main summary if there are many reports for a version? The one submitted first? The one with the most/least successes? Or maybe a mixture? Now it is the one with no tag, which is practically the first submission.
-- Also, in the "XXX differences" section, shouldn't we have the exact version & ServicePack displayed between the OS name and the reporter link, as we are dealing with only one instance?
Sure, but I do not have the information. Noted, though.
How come? I thought Jakob includes a dump of the OS version structure, like so:
Yes, he does. But except for one result submitted by Jakob, all the results are from my .bat-driven zip file. Feri.
On Sat, 27 Sep 2003, Ferenc Wagner wrote:
For the console test, yes. I just did not care, because I was promised a better run.
In other words, he is supposed to detect a hung test, and report an error?
We could put up a little cgi which asks for a tag and makes sure it is unique.
This is overkill, for sure.
Or simply append a number if the submission is done by email. I do not expect too many concurrent submissions anyway...
For sure we need to append a number, but my point is that once we generate that name dynamically, it's ugly and it shouldn't be displayed. As for who submitted it, we already have that, so we're good.
But I have a real problem here: which results to put in the main summary if there are many reports for a version? The one submitted first? The one with the most/least successes? Or maybe a mixture? Now it is the one with no tag, which is practically the first submission.
What about the max number of errors in all tests?
How come? I thought Jakob includes a dump of the OS version structure, like so:
Yes, he does. But except for one result submitted by Jakob, all the results are from my .bat-driven zip file.
In other words, for stuff that includes that info, you parse it out? Maybe you should stop distributing the .bat file; we need to figure out how well the two things work together, no? -- Dimi.
"Dimitrie O. Paun" <dimi(a)intelliware.ca> writes:
On Sat, 27 Sep 2003, Ferenc Wagner wrote:
For the console test, yes. I just did not care, because I was promised a better run.
In other words, he is supposed to detect a hung test, and report an error?
Now I lost you here. Who is he? The setting is this: if a test simply crashes, the framework goes on and the failure will be obvious. If a test hangs, the user can kill it and we have the same. Or the timeout can kill it and maybe note this in the log. This all depends on the framework, but we only have serious trouble if the user kills the framework itself, which happened in the case of the ME tests.
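(A sketch of the timeout idea mentioned above: wait a bounded time for the test process and kill it if it hangs, noting that in the log. The five-minute limit and the hard-coded command line are arbitrary choices for the example, not what the framework actually does.)

  #include <windows.h>
  #include <stdio.h>

  int main(void)
  {
      STARTUPINFOA si = { sizeof(si) };
      PROCESS_INFORMATION pi;
      char cmd[] = "kernel32_crosstest.exe console";  /* the test that hung on ME */

      if (!CreateProcessA(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
          return 1;
      if (WaitForSingleObject(pi.hProcess, 5 * 60 * 1000) == WAIT_TIMEOUT)
      {
          TerminateProcess(pi.hProcess, 1);            /* kill the hung test */
          printf("kernel32.dll:console: timeout, test killed\n");
      }
      CloseHandle(pi.hThread);
      CloseHandle(pi.hProcess);
      return 0;
  }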
Or simply append a number if the submission is done by email.
For sure we need to append a number, but my point is that once we generate that name dynamically, it's ugly and it shouldn't be displayed.
An occasional 1, 2 or 3 at the end of your tag? Why not?
As for who submitted it, we already have that, so we're good.
But it cannot possibly be displayed (a name is too wide and you may have submitted several tests), while it seems useful for quick visual identification, if you want to concentrate on a single test run. Or do you simply not want to do this? I do.
But I have a real problem here: which results to put in the main summary if there are many reports for a version? The one submitted first? The one with the most/least successes? Or maybe a mixture? Now it is the one with no tag, which is practically the first submission.
What about the max number of errors in all tests?
Possible, but it would destroy the consistency of the links in the column. May be worth it, though.
How come? I thought Jakob includes a dump of the OS version structure, like so:
Yes, he does. But except for one result submitted by Jakob, all the results are from my .bat-driven zip file.
In other words, for stuff that includes that info, you parse it out? Maybe you should stop distributing the .bat file; we need to figure out how well the two things work together, no?
Oh yes, I have not got any more reports for a while, and do not intend to distribute newer versions of my package. We had some discussion with Jakob on the format already, but it is not finished yet. Especially not regarding the master file... Feri.
On September 29, 2003 05:57 pm, Ferenc Wagner wrote:
In other words, he is supposed to detect a hung test, and report an error?
Now I lost you here. Who is he?
Jakob, that is, winetests.exe. It should read, "winetests.exe is supposed to detect a hung test, kill it, and report an error?" But yeah, we seem to be on the same wavelength here.
For sure we need to append a number, but my point is that once we generate that name dynamically, it's ugly and it shouldn't be displayed.
An occasional 1, 2 or 3 at the end of your tag? Why not?
Sure, if you think that's helpful. I just thought that it's a bit redundant to have columns named like this: win95_1 win95_2 win95_3 win95_4 ... the numbers just duplicate their position...
As for who submitted it, we already have that, so we're good.
But it cannot possibly be displayed (a name is too wide and you may have submitted several tests)
Of course you can't display it in the column name, but you can get to it by clicking on the "reporter". Bottom line, I don't have a beef with it; if you find it useful, include it, others might find it useful as well.
What about the max number of errors in all tests?
Possible, but it would destroy the consistency of the links in the column. May be worth it, though.
I guess you mean the links to results.txt. Yeah, those should go, they would be confusing. We can make them links to the OS's home page or something. Or maybe a historical overview table with test results only for that OS... :) -- Dimi.