http://bugs.winehq.org/show_bug.cgi?id=11674
--- Comment #315 from Stefan Dösinger stefan@codeweavers.com 2013-09-04 14:38:27 CDT --- What's the point of a standardized API like OpenGL if I have to detect the implementation and use different codepaths to satisfy undocumented constraints to make it work properly?
But I'd prefer to discuss this over a beer rather than on the backs of our users. I could imagine using a GL extension that allows the driver to communicate its preferred update method. That should be more solid than parsing the vendor string and the __GL_NV_THREADED_OPTIMIZATIONS setting.
What is a 'proper' way to use buffer_storage? My rough plan is to create buffers with MAP_READ_BIT | MAP_WRITE_BIT | MAP_PERSISTENT_BIT and use glMapBufferRange to access the buffer. MAP_READ_BIT would be optional and depend on the d3d flags the application sets. I'm not sure about MAP_COHERENT_BIT. If I don't set it and call MapBufferRange / UnmapBuffer, do I still have to call MemoryBarrier, or does UnmapBuffer take care of it? I don't care whether changes made while the buffer is mapped are picked up; I'll only ever draw from buffers mapped with MAP_UNSYNCHRONIZED.
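For reference, a rough sketch of the kind of code I have in mind (not actual wined3d code; the helper names, the buffer size and the GLEW loader are just for illustration). It leaves MAP_COHERENT_BIT out and uses glMemoryBarrier with GL_CLIENT_MAPPED_BUFFER_BARRIER_BIT as the conservative way to make client writes visible to the GL before drawing; whether an Unmap alone would also be sufficient is exactly the question above.

#include <string.h>
#include <GL/glew.h> /* any GL loader works, GLEW is just an example */

#define VB_SIZE (4 * 1024 * 1024) /* placeholder buffer size */

static GLuint vb;
static void *vb_ptr;

static void create_persistent_vb(void)
{
    /* GL_MAP_READ_BIT would be ORed in here depending on the d3d flags,
     * and possibly GL_CLIENT_STORAGE_BIT (see below). No GL_MAP_COHERENT_BIT. */
    const GLbitfield storage_flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT;

    glGenBuffers(1, &vb);
    glBindBuffer(GL_ARRAY_BUFFER, vb);
    /* Immutable storage; the READ/WRITE/PERSISTENT/COHERENT bits used when
     * mapping later must also be set here. */
    glBufferStorage(GL_ARRAY_BUFFER, VB_SIZE, NULL, storage_flags);

    /* Map once and keep the pointer for the lifetime of the buffer. Note
     * that GL_MAP_UNSYNCHRONIZED_BIT may not be combined with
     * GL_MAP_READ_BIT, so a readable mapping would have to drop it. */
    vb_ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, VB_SIZE,
            GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT
            | GL_MAP_UNSYNCHRONIZED_BIT);
}

static void upload_vertices(const void *data, size_t size, size_t offset)
{
    memcpy((char *)vb_ptr + offset, data, size);
    /* Without GL_MAP_COHERENT_BIT, my reading of the GL 4.4 /
     * ARB_buffer_storage wording is that client writes become visible to
     * subsequent commands after this barrier; draws from the still-mapped
     * buffer would be issued after this point. */
    glMemoryBarrier(GL_CLIENT_MAPPED_BUFFER_BARRIER_BIT);
}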
I might also use CLIENT_STORAGE to preserve address space, but that is a lower priority.