On Sat, 27 Oct 2018 at 01:11, Stefan Dösinger stefandoesinger@gmail.com wrote:
Am 2018-10-26 um 17:23 schrieb Matteo Bruni:
There isn't much of a difference for buffers, admittedly. The docs suggest that resource updates might follow a slightly different policy between the two, which I'm not sure is really testable. Textures are going to stay as they were, which means, among other things, that we need to keep WINED3DUSAGE_SCRATCH around.
D3DUSAGE_SYSTEMMEMORY buffers might do more useful things with D3DLOCK_DISCARD / D3DLOCK_NOOVERWRITE. It might be worth extending test_map_synchronisation to cover different buffer pools.
I don't think it really works that way, but you do raise an interesting point. My current understanding (with all the usual "but it depends" caveats) of the various pool/usage combinations is the following:
- DEFAULT pool buffers are conceptually in VRAM. They're uploaded on creation and (in d3d9) never modified afterwards.
- DEFAULT pool + DYNAMIC, WRITEONLY usage buffers are conceptually in GTT/GART memory. They're expected to be updated often, and you want to use DISCARD/NOOVERWRITE to do so.
- MANAGED pool buffers can bounce around VRAM/GTT/system memory as the runtime/driver sees fit. You don't want to use these.
- SYSTEMMEMORY pool buffers are in system memory. They're uploaded for each draw, not unlike "user memory" draws. You don't really want to use these either; MSDN claims you'd use them when you're concerned about GTT memory usage.
As for the point you raise about map synchronisation, one implication of the above is that mapping SYSTEMMEMORY buffers never blocks, aside perhaps from the draw-time upload. One test I'd find interesting would be to compare the performance characteristics of draws of various sizes out of huge MANAGED and SYSTEMMEMORY buffers.