http://bugs.winehq.org/show_bug.cgi?id=31279
--- Comment #8 from Charles Davis cdavis@mymail.mines.edu 2012-07-22 19:47:30 CDT --- (In reply to comment #7)
Created attachment 41120 [details] New log with +tid,... from Age of Empires II
Here.
Interesting. It looks like the address it's trying to read is before the IDT even starts:
003e:trace:int:emulate_instruction mov idt,xxx (0x7c2f0008, base=0x7c31b000, limit=0x1006) at 0x54287f
Why would that be? There are two others before it:
003e:trace:int:emulate_instruction mov idt,xxx (0x7c2f0008, base=0x7c2f0000, limit=0x1004) at 0x54287f
003e:trace:int:emulate_instruction mov idt,xxx (0x7c2f0018, base=0x7c2f0000, limit=0x1004) at 0x542884
so for some reason, we're pulling the rug from under SECDRV.
*smacks head* Now I know why!
get_idtr() is implemented like so:
static inline struct idtr get_idtr(void)
{
    struct idtr ret;
#ifdef __GNUC__
    __asm__( "sidtl %0" : "=m" (ret) );
#else
    [...]
#endif
    return ret;
}
So on i386, it reads the actual IDTR. But I'm running this on a multi-core system--where (at least on Mac OS) each CPU has its own IDT! Therefore, when it executes the SIDT instruction, it gets a different result depending on which CPU it happens to be running on.
When SECDRV reads the IDTR, it caches the value somewhere, and calculates the address of each descriptor from that cached value. Wine, however, re-reads the IDTR every time, on the assumption that SIDT is cheap. Thus, when the scheduler happens to pick a different CPU for SECDRV to run on, the value returned by get_idtr() differs from SECDRV's cached value. Wine thinks that SECDRV's access to the #DB and #BP descriptors is out-of-bounds and refuses to emulate the read. The access violation then propagates and SECDRV crashes.
This suggests two possible fixes. One is to force the WineDevice process--or at least the thread running the NTOSKRNL loop--onto one CPU. The other is to cache the IDTR ourselves on the first read, so we always use the same IDT base and limit when checking whether a MOV reads from the IDT.