Hi,
I am now thinking about rewriting the video rendering with DirectDraw. It seems that native quartz.dll uses DirectDraw, not GDI, to draw video frames in the output window.
DirectDraw sounds like the right way to implement quartz video output, but I am concerned about the performance of our current DirectDraw implementation. Our DirectDraw is implemented in software, on top of GDI, so you just end up with one more abstraction layer in between. The DDraw code can in principle use OpenGL for hardware acceleration, but that path is far from complete.
One thing I am worried about is the pixel format of the surfaces. The video data is 32bpp, but my X configuration runs the output window at 16bpp. However, nowhere in the ddraw trace of native quartz can I find any reference to a CreateSurface call with 32bpp, so it seems that native quartz does the depth conversion itself. The question is: if I create a 32bpp surface (in order to lock it and write the 32bpp frame data into it), can I then blit it to the 16bpp output window and let DirectDraw take care of the depth conversion? And if so, can that be any faster than writing the depth conversion myself?
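Concretely, the path I have in mind would look roughly like the sketch below (untested, error handling omitted; it assumes an already initialized IDirectDraw7, a primary surface clipped to the output window, and a decoded 32bpp frame in system memory):

/* Sketch only: create a 32bpp system-memory surface, write a frame into it,
 * and Blt it to the primary. Whether ddraw converts depths in that Blt is
 * exactly the open question. */
#include <windows.h>
#include <ddraw.h>
#include <string.h>

IDirectDrawSurface7 *CreateFrameSurface(IDirectDraw7 *ddraw, DWORD w, DWORD h)
{
    DDSURFACEDESC2 desc;
    ZeroMemory(&desc, sizeof(desc));
    desc.dwSize = sizeof(desc);
    desc.dwFlags = DDSD_CAPS | DDSD_WIDTH | DDSD_HEIGHT | DDSD_PIXELFORMAT;
    desc.ddsCaps.dwCaps = DDSCAPS_OFFSCREENPLAIN | DDSCAPS_SYSTEMMEMORY;
    desc.dwWidth = w;
    desc.dwHeight = h;
    desc.ddpfPixelFormat.dwSize = sizeof(DDPIXELFORMAT);
    desc.ddpfPixelFormat.dwFlags = DDPF_RGB;           /* plain XRGB8888 */
    desc.ddpfPixelFormat.dwRGBBitCount = 32;
    desc.ddpfPixelFormat.dwRBitMask = 0x00ff0000;
    desc.ddpfPixelFormat.dwGBitMask = 0x0000ff00;
    desc.ddpfPixelFormat.dwBBitMask = 0x000000ff;

    IDirectDrawSurface7 *surface = NULL;
    ddraw->CreateSurface(&desc, &surface, NULL);
    return surface;
}

void PresentFrame(IDirectDrawSurface7 *surface, IDirectDrawSurface7 *primary,
                  const BYTE *frame, DWORD pitch, DWORD w, DWORD h, RECT *dst)
{
    /* Lock the 32bpp surface and copy the decoded frame into it. */
    DDSURFACEDESC2 desc;
    ZeroMemory(&desc, sizeof(desc));
    desc.dwSize = sizeof(desc);
    surface->Lock(NULL, &desc, DDLOCK_WAIT | DDLOCK_WRITEONLY, NULL);
    for (DWORD y = 0; y < h; y++)
        memcpy((BYTE *)desc.lpSurface + y * desc.lPitch, frame + y * pitch, w * 4);
    surface->Unlock(NULL);

    /* Blt from the 32bpp surface to the (16bpp) primary; if ddraw handles
     * mismatched depths, the conversion would happen here. */
    primary->Blt(dst, surface, NULL, DDBLT_WAIT, NULL);
}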
You cannot create a 16 bit front buffer on a 32 bit desktop, except if you switch to exclusive mode (and thus fullscreen). The really proper way to handle this is to create an overlay surface in your video's input format and overlay the primary (= the screen) with it. The problem is that wine does not support overlay surfaces yet. There is an overlay implementation via Xv in CodeWeavers' Picasa wine tree (downloadable at www.google.com). I was about to integrate it into wine, but some problems with accessing Xv from wined3d arose, and the overall idea was to wait for the effort to make wined3d use WGL instead of GLX, which has more or less stalled on the windowed GL rendering problems.
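For reference, the overlay path would look roughly like this once wine supports it (sketch only, no error handling; YUY2 is just an example input format, and the driver has to expose overlay support at all):

#include <windows.h>
#include <ddraw.h>

IDirectDrawSurface7 *CreateOverlay(IDirectDraw7 *ddraw, DWORD w, DWORD h)
{
    DDSURFACEDESC2 desc;
    ZeroMemory(&desc, sizeof(desc));
    desc.dwSize = sizeof(desc);
    desc.dwFlags = DDSD_CAPS | DDSD_WIDTH | DDSD_HEIGHT | DDSD_PIXELFORMAT;
    desc.ddsCaps.dwCaps = DDSCAPS_OVERLAY | DDSCAPS_VIDEOMEMORY;
    desc.dwWidth = w;
    desc.dwHeight = h;
    desc.ddpfPixelFormat.dwSize = sizeof(DDPIXELFORMAT);
    desc.ddpfPixelFormat.dwFlags = DDPF_FOURCC;        /* the video's input format */
    desc.ddpfPixelFormat.dwFourCC = MAKEFOURCC('Y','U','Y','2');

    IDirectDrawSurface7 *overlay = NULL;
    ddraw->CreateSurface(&desc, &overlay, NULL);
    return overlay;
}

/* Lock and fill the overlay like any other surface, then position it over
 * the primary; the hardware (or Xv underneath) does the scaling and color
 * conversion. */
void ShowOverlay(IDirectDrawSurface7 *overlay, IDirectDrawSurface7 *primary,
                 RECT *src, RECT *dst)
{
    overlay->UpdateOverlay(src, primary, dst, DDOVER_SHOW, NULL);
}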
The other idea would be to create a Direct3D device (with the back buffer automatically being in the desktop bit depth), plus a few (dynamic) textures in the video's format and size. Then upload the video frames into the textures, draw quads onto the back buffer, and flip. This way the OpenGL hardware does the color conversion and filtering for you. The drawback is that you need hardware acceleration.
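The per-frame core of that idea, sketched with d3d9 (it assumes a windowed device whose present parameters asked for D3DFMT_UNKNOWN, i.e. the desktop format, and a dynamic X8R8G8B8 texture created up front with CreateTexture(w, h, 1, D3DUSAGE_DYNAMIC, D3DFMT_X8R8G8B8, D3DPOOL_DEFAULT, ...); no error handling):

#include <d3d9.h>
#include <string.h>

struct Vertex { float x, y, z, rhw, u, v; };

void DrawFrame(IDirect3DDevice9 *device, IDirect3DTexture9 *tex,
               const BYTE *frame, UINT pitch, UINT w, UINT h)
{
    /* Upload the decoded frame into the dynamic texture. */
    D3DLOCKED_RECT lr;
    tex->LockRect(0, &lr, NULL, D3DLOCK_DISCARD);
    for (UINT y = 0; y < h; y++)
        memcpy((BYTE *)lr.pBits + y * lr.Pitch, frame + y * pitch, w * 4);
    tex->UnlockRect(0);

    /* Pre-transformed quad; the GPU (OpenGL under wined3d) does the format
     * conversion and filtering while texturing. */
    Vertex quad[4] = {
        { 0.0f,     0.0f,     0.0f, 1.0f, 0.0f, 0.0f },
        { (float)w, 0.0f,     0.0f, 1.0f, 1.0f, 0.0f },
        { 0.0f,     (float)h, 0.0f, 1.0f, 0.0f, 1.0f },
        { (float)w, (float)h, 0.0f, 1.0f, 1.0f, 1.0f },
    };

    device->BeginScene();
    device->SetTexture(0, tex);
    device->SetTextureStageState(0, D3DTSS_COLOROP, D3DTOP_SELECTARG1);
    device->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    device->SetFVF(D3DFVF_XYZRHW | D3DFVF_TEX1);
    device->DrawPrimitiveUP(D3DPT_TRIANGLESTRIP, 2, quad, sizeof(Vertex));
    device->EndScene();
    device->Present(NULL, NULL, NULL, NULL);
}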
Obviously, if you can use Direct3D, and Direct3D uses OpenGL, you could just as well use OpenGL directly, which saves you a layer that is not exactly cheap and gives you more control over OpenGL. The advantage of using Direct3D would be that an overlay path and a Direct3D path could share some code, but I don't know how much.
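For comparison, the direct OpenGL equivalent of that per-frame loop is roughly this (assuming a GL context is already current on the output window, and that the texture was created once with glTexImage2D at the video size and with GL_TEXTURE_MIN_FILTER set to GL_LINEAR):

#include <GL/gl.h>

void DrawFrameGL(GLuint tex, const void *frame, int w, int h)
{
    /* Upload the new frame; GL_BGRA matches the 32bpp XRGB layout. */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_BGRA, GL_UNSIGNED_BYTE, frame);

    /* Textured quad over the whole viewport; the driver converts to the
     * window's depth and filters when scaling. */
    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f,  1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f,  1.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f, -1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f, -1.0f);
    glEnd();

    /* Then SwapBuffers(hdc) or glXSwapBuffers(), depending on the path. */
}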