While I agree that we should store the amount we report to the application in a centralized place, to be sure we stay coherent, I wonder whether going to the trouble of reporting the exact amount we have is really needed (the memory management of GL and D3D9 differs anyway, so the memory usage pattern may be completely different between the two worlds)...
Lionel
Well, tracking approximate usage is quite important, because some games (and possibly applications) will allocate textures until they run out of video memory. Under OpenGL we would have to run out of system memory and die if we didn't track memory usage.
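To illustrate the kind of central bookkeeping I mean, here's a minimal sketch; the names and the single global counter are hypothetical, not anything in the tree today:

    /* Hypothetical sketch: one central counter that every allocation
     * and free goes through, so the amount we report to the app and
     * the amount we actually track can never disagree. */
    #include <stddef.h>

    static size_t total_vidmem;  /* reported "card" size, set at startup */
    static size_t used_vidmem;   /* running total of tracked allocations */

    static void vidmem_init(size_t total)
    {
        total_vidmem = total;
        used_vidmem = 0;
    }

    /* Returns nonzero on success; on failure the caller reports an
     * out-of-video-memory error instead of eating system RAM. */
    static int vidmem_alloc(size_t bytes)
    {
        if (used_vidmem + bytes > total_vidmem) return 0;
        used_vidmem += bytes;
        return 1;
    }

    static void vidmem_free(size_t bytes)
    {
        used_vidmem -= bytes;
    }

    /* What a GetAvailableTextureMem-style query would hand back. */
    static size_t vidmem_available(void)
    {
        return total_vidmem - used_vidmem;
    }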
As an example, let's say I'm a game and I've got a working set of 1000 textures, all mipmapped. If I know that the system only has 32 MB of memory, then I can drop the high-level mipmaps so that all 1000 textures fit into RAM. If I don't know how much memory I have, then I'm going to keep the high-level mipmaps, which may push some of the textures into swap space.
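To make that concrete, here's a hedged sketch of the budgeting a game might do; the 256x256 RGBA textures and the helper are purely illustrative, not taken from any real engine:

    /* Hypothetical sketch: drop top mip levels until 1000 textures
     * fit in the reported amount of video memory. */
    #include <stdio.h>
    #include <stddef.h>

    /* Bytes used by a square RGBA texture's mip chain, skipping the
     * largest skip_levels levels (level 0 = full size). */
    static size_t mip_chain_bytes(unsigned top_size, unsigned skip_levels)
    {
        size_t bytes = 0;
        unsigned size = top_size >> skip_levels;
        while (size >= 1)
        {
            bytes += (size_t)size * size * 4;  /* 4 bytes per RGBA texel */
            size >>= 1;
        }
        return bytes;
    }

    int main(void)
    {
        const size_t vidmem = 32u * 1024 * 1024;  /* reported 32 MB */
        const unsigned textures = 1000;
        unsigned skip = 0;

        /* Keep dropping the largest level until the set fits. */
        while (textures * mip_chain_bytes(256, skip) > vidmem && skip < 8)
            skip++;

        printf("skip %u top level(s): working set = %zu bytes\n",
               skip, textures * mip_chain_bytes(256, skip));
        return 0;
    }

With these made-up numbers the full-resolution set would need about 350 MB, so the game would drop the top two levels to get the working set under 32 MB; without a memory figure it has no reason to drop anything.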
It would be nice to have a semi-automatic system instead of making the user set their video card size by hand, but I don't see why people who want to run 3D Studio MAX on a 2 MB card shouldn't be able to, just because their card's only got 2 MB of RAM.
It's not a 'huge' amount of work to find out how much memory a graphics card has under Linux; it should just be a few lines to interface with the kernel module and read a register on the card. But this may not be portable: there are a lot of graphics cards out there, and kernel modules have a habit of changing.
It's also very easy to retrieve the correct AGP memory (and other stats) under Linux, and I don't see why we shouldn't report correct information when we can (unless it's the amount of space free on the C drive when I have c:\program files as a symlink).
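For reference, a minimal sketch of pulling those AGP stats out of the agpgart module via /dev/agpgart and the AGPIOC_INFO ioctl; error handling is bare-bones, and the aperture size is only a rough proxy for usable memory:

    /* Minimal sketch: query AGP aperture info from agpgart. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/agpgart.h>

    int main(void)
    {
        agp_info info;
        int fd = open("/dev/agpgart", O_RDWR);
        if (fd < 0) { perror("open /dev/agpgart"); return 1; }

        /* The device must be acquired before it answers queries. */
        if (ioctl(fd, AGPIOC_ACQUIRE) < 0)
        { perror("AGPIOC_ACQUIRE"); close(fd); return 1; }

        if (ioctl(fd, AGPIOC_INFO, &info) == 0)
            printf("aperture: %zu MB, pages total/used: %zu/%zu\n",
                   info.aper_size, info.pg_total, info.pg_used);
        else
            perror("AGPIOC_INFO");

        ioctl(fd, AGPIOC_RELEASE);
        close(fd);
        return 0;
    }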