It does make a bit of difference.
The compositor doesn't fail to track shape-exposed pixels, but it can't predict what we are going to shape-expose ahead of time. The app sets a transparency colour key for a layered window and then paints it, with some parts painted in that colour. Both shaping and putting the image onscreen then happen in x11drv_surface_flush(), and XPutImage() happens before we let X know about our desired shape, so those "transparent" pixels get painted.
FWIW, this is a regression from redesigning this part: previously, update_surface_region() was called before XPutImage() in x11drv_surface_flush(), and that ordering changed.
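Roughly, the ordering being described looks like the sketch below. This is not the actual Wine code; the struct, its fields, and the function names other than XPutImage() and update_surface_region() are illustrative assumptions, just to show "shape first, then image":

```c
#include <X11/Xlib.h>

/* Illustrative container; field names are assumptions, not Wine's
 * real window surface struct. */
struct surface_sketch
{
    Display *display;
    Window   window;
    GC       gc;
    XImage  *image;
    unsigned int width, height;
};

/* Hypothetical prototype standing in for Wine's update_surface_region(),
 * which applies the shape derived from the colour key. */
static void update_surface_region( struct surface_sketch *surface );

/* Flush ordering sketch: tell X about the desired shape before pushing
 * the pixels, so the colour-keyed "transparent" pixels never get painted. */
static void flush_sketch( struct surface_sketch *surface )
{
    update_surface_region( surface );              /* shape first */
    XPutImage( surface->display, surface->window, surface->gc,
               surface->image, 0, 0, 0, 0,
               surface->width, surface->height );  /* then the image */
}
```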