https://bugs.winehq.org/show_bug.cgi?id=50235
--- Comment #1 from Matteo Bruni <matteo.mystral@gmail.com> ---
Created attachment 68778 --> https://bugs.winehq.org/attachment.cgi?id=68778
Hack; the game will run out of memory in a matter of seconds, though.
This bug is affected by https://source.winehq.org/git/wine.git/commit/8b3cc57df0b791ca6c33f383f9358e... in that reverting the original regression patch no longer fixes the issue.
What I think is happening is, basically, a WAW hazard: the game DISCARD-maps constant buffers very often, drawing between maps. The Nvidia driver reuses previously freed BO names when creating a new buffer with glGenBuffers(), so we end up with certain BO IDs "matching" multiple buffers / buffer contents over a short period of time. I suspect DISCARD maps (either explicit ones with GL_MAP_INVALIDATE_BUFFER_BIT, or just the first and only map of a new buffer) are optimized in the driver not to wait for in-flight draws. The driver then sometimes finds itself with "too new" data inside BO x: data destined for a following draw.
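To make the suspected hazard concrete, here is a toy model (a sketch of my reading of the driver behavior, not actual driver code): BO names get recycled by glGenBuffers(), DISCARD-style writes land immediately without waiting for queued draws, and each queued draw reads the BO contents only when the queue is flushed. The ToyDriver class and all names in it are invented for illustration.

```python
# Toy model of the suspected WAW hazard: recycled BO names plus
# non-synchronizing DISCARD maps let an already-queued draw observe
# data written for a *later* draw.

class ToyDriver:
    def __init__(self):
        self.free_names = []        # names recycled after glDeleteBuffers()
        self.next_name = 1
        self.storage = {}           # name -> current contents
        self.queue = []             # pending draws, each referencing a BO name

    def gen_buffer(self):
        # Reuse a previously freed name, as the Nvidia driver appears to do.
        if self.free_names:
            return self.free_names.pop()
        name = self.next_name
        self.next_name += 1
        return name

    def delete_buffer(self, name):
        self.storage.pop(name, None)
        self.free_names.append(name)

    def discard_map_write(self, name, data):
        # DISCARD map: write immediately, without waiting for draws
        # that are still sitting in the queue.
        self.storage[name] = data

    def draw(self, name):
        self.queue.append(name)     # the BO is actually read later, at flush

    def flush(self):
        # Each queued draw reads whatever its BO name holds *now*.
        return [self.storage[n] for n in self.queue]

drv = ToyDriver()
a = drv.gen_buffer()
drv.discard_map_write(a, "frame-1 constants")
drv.draw(a)                         # intended to see "frame-1 constants"
drv.delete_buffer(a)
b = drv.gen_buffer()                # same BO name as 'a' again
drv.discard_map_write(b, "frame-2 constants")
drv.draw(b)
print(drv.flush())                  # the first draw sees "too new" data
```

The first queued draw ends up reading the frame-2 constants, because by flush time the recycled BO name holds the newer write.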
Updating a buffer with glBufferData() is optimized similarly, in that those calls never synchronize the driver command stream; instead, each buffer is "versioned" so as to avoid ambiguity at draw time. Before 8b3cc57df0b791ca6c33f383f9358e1613206b84 we were destroying and immediately recreating the BO, which got assigned the same ID, and that, I gather, didn't break the driver's internal BO version tracking. I guess the original regression patch, replacing glBufferData() with glBufferStorage(), broke the game because it effectively bypassed the glBufferData() update tracking in the driver (the driver doesn't track glBufferStorage() updates because those aren't a thing - the storage defined by glBufferStorage() is immutable and can't be reallocated, which is also the reason why we do glDeleteBuffers() + glGenBuffers()). At the same time, going back to glBufferData() now doesn't fix the issue either, because there's additionally aliasing at the BO ID level.
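The versioning idea above can be sketched as a second toy model (again an assumption about the driver's internals, not real code): glBufferData() bumps a per-name version so in-flight draws keep the old contents, while glBufferStorage() has no update to track, so a reused name silently aliases to the newest contents. The VersionedDriver class is hypothetical.

```python
# Toy model of per-buffer "versioning" of glBufferData() updates,
# and why untracked glBufferStorage() reallocation breaks it.

class VersionedDriver:
    def __init__(self):
        self.version = {}           # name -> current version
        self.data = {}              # (name, version) -> contents

    def buffer_data(self, name, contents):
        # glBufferData(): never synchronizes; instead, bump the buffer's
        # version so draws already submitted keep the old contents.
        v = self.version.get(name, 0) + 1
        self.version[name] = v
        self.data[(name, v)] = contents

    def buffer_storage(self, name, contents):
        # glBufferStorage(): immutable storage, so there is no "update"
        # for the driver to track - no version bump happens, even if the
        # name was recycled by glDeleteBuffers() + glGenBuffers().
        v = self.version.get(name, 0)
        self.data[(name, v)] = contents

    def draw(self, name):
        # A draw captures (name, version) at submission time.
        return (name, self.version.get(name, 0))

    def execute(self, draw_key):
        return self.data[draw_key]

drv = VersionedDriver()
drv.buffer_data(1, "old constants")
d1 = drv.draw(1)
drv.buffer_data(1, "new constants")     # versioned: d1 keeps the old data
print(drv.execute(d1))                  # -> "old constants"

drv2 = VersionedDriver()
drv2.buffer_storage(1, "old constants")
d2 = drv2.draw(1)
drv2.buffer_storage(1, "new constants") # recycled name, no version bump
print(drv2.execute(d2))                 # -> "new constants": aliased
```

With versioned glBufferData() updates the in-flight draw still reads its old contents; with glBufferStorage() reuse of the same name, it reads the "too new" data.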
This is arguably a driver bug, but our current buffer usage pattern is (temporarily, I assume) really weird, so I'm not surprised we're breaking driver optimizations that usually work fine. I think we should either do something more sensible with buffer updates (even though we're technically in code freeze) and fix the regression, or otherwise revert everything up to 77f0149a6c99fc0289d12a788f630519d7dc49d3.