On 04/07/11 10:24, Alexandre Julliard wrote:
> Piotr Caban <piotr@codeweavers.com> writes:
>> @@ -1617,9 +1617,18 @@ BOOL virtual_handle_stack_fault( void *addr )
>>          BYTE vprot = view->prot[((const char *)page - (const char *)view->base) >> page_shift];
>>          if (vprot & VPROT_GUARD)
>>          {
>> +            struct _TEB *teb = NtCurrentTeb();
>>              VIRTUAL_SetProt( view, page, page_size, vprot & ~VPROT_GUARD );
>> -            if ((char *)page + page_size == NtCurrentTeb()->Tib.StackLimit)
>> -                NtCurrentTeb()->Tib.StackLimit = page;
>> +            if ((char *)page - page_size == teb->DeallocationStack)
>> +                teb->Tib.StackLimit = page;
>> +            else if ((char *)addr > (char *)teb->Tib.StackLimit + page_size &&
>> +                     teb->Tib.StackLimit == (char *)teb->DeallocationStack + page_size &&
>> +                     (view = VIRTUAL_FindView( (char *)teb->Tib.StackLimit, 0 )))
>> +            {
>> +                vprot = view->prot[((char *)teb->Tib.StackLimit - (char *)view->base) >> page_shift];
>> +                VIRTUAL_SetProt( view, teb->Tib.StackLimit, page_size, vprot | VPROT_GUARD );
>> +                teb->Tib.StackLimit = (char *)teb->Tib.StackLimit + page_size;
>> +            }
> Why do you want to look up the view again?
It's not needed (I thought it might not be valid for the whole stack). I'll send a fixed version.
Also, the commit message should say "growing" instead of "shrinking"; it was meant to point out that, without this patch, it's possible for StackLimit to change more than once.
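
For readers coming to this thread cold: the function being patched, virtual_handle_stack_fault(), follows the usual guard-page scheme for growing the stack, where touching the guard page below the committed part of the stack clears its guard bit and moves Tib.StackLimit down one page. Below is a minimal, self-contained sketch of that pre-patch pattern only; the thread_stack struct and the set_page_protection() helper are hypothetical stand-ins for the TEB fields and VIRTUAL_SetProt() used by the real code, not Wine's actual implementation.

#include <stdint.h>

#define PAGE_SIZE   0x1000u
#define VPROT_GUARD 0x01u

/* Hypothetical stand-in for the TEB fields the real code uses
 * (Tib.StackLimit and DeallocationStack). */
struct thread_stack
{
    char *stack_limit;        /* lowest committed, non-guard stack page */
    char *deallocation_base;  /* lowest address of the whole stack allocation */
};

/* Hypothetical stand-in for VIRTUAL_SetProt(): apply new flags to one page.
 * A real implementation would call mprotect()/VirtualProtect() here. */
static void set_page_protection( void *page, unsigned int vprot )
{
    (void)page;
    (void)vprot;
}

/* Pre-patch pattern: clear the guard bit on the faulting page, and if that
 * page sits directly below the current stack limit, grow the stack by
 * moving the limit down one page. */
static void handle_stack_guard_fault( struct thread_stack *ts, void *addr, unsigned int vprot )
{
    char *page = (char *)((uintptr_t)addr & ~(uintptr_t)(PAGE_SIZE - 1));

    set_page_protection( page, vprot & ~VPROT_GUARD );

    if (page + PAGE_SIZE == ts->stack_limit)
        ts->stack_limit = page;
}

As the reply above notes, the point of the patch is that without it StackLimit can end up changing more than once while the stack grows; the quoted hunk reworks that bookkeeping.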