Interesting.
I ran similar tests on a VM with NVIDIA GPU there: https://testbot.winehq.org/JobDetails.pl?Key=144923&f101=task.log#k101
ChoosePixelFormat really doesn't care much about the desired format, and seems to always return a 32-bit pixel format. Does it even allow selecting an R10G10B10 pixel format? It doesn't on NV. I can see some small variations depending on whether a depth buffer / double buffering is requested. If a depth buffer is requested and the depth bits cannot be matched exactly, they are simply maximized.
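For reference, a minimal sketch of the kind of probe behind those observations: request a 30-bit (R10G10B10) format with a small depth buffer and print what ChoosePixelFormat / DescribePixelFormat actually hand back. The specific attribute values are just examples, not taken from the test job.

```c
#include <stdio.h>
#include <windows.h>

int main(void)
{
    HDC hdc = GetDC(NULL);
    PIXELFORMATDESCRIPTOR req = {0}, got = {0};
    int fmt;

    req.nSize = sizeof(req);
    req.nVersion = 1;
    req.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    req.iPixelType = PFD_TYPE_RGBA;
    req.cColorBits = 30;   /* ask for R10G10B10 */
    req.cDepthBits = 16;   /* ask for a small depth buffer */

    fmt = ChoosePixelFormat(hdc, &req);
    if (fmt && DescribePixelFormat(hdc, fmt, sizeof(got), &got))
        printf("requested %d color / %d depth bits, got format %d: %d color / %d depth bits\n",
               req.cColorBits, req.cDepthBits, fmt, got.cColorBits, got.cDepthBits);

    ReleaseDC(NULL, hdc);
    return 0;
}
```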
The results on NVIDIA look somewhere in between AMD and Intel: the format list is partially sorted according to the filters, with exact matches ordered first and the rest of the list appended after, kept in its original order. The formats are also always filtered to drop any non-window format.
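In other words, the NVIDIA ordering looks like a stable partition. Roughly like the sketch below, where the "exact match" criterion is simplified to color and depth bits only; that simplification is an assumption on my part, the real driver check is certainly richer.

```c
#include <windows.h>

/* Rough model (not driver code) of the ordering seen on NVIDIA: exact matches
 * float to the front, everything else keeps its original relative order, and
 * non-window formats are dropped entirely. Returns the new format count. */
static UINT reorder_like_nv(HDC hdc, const int *formats, UINT count, int *out,
                            const PIXELFORMATDESCRIPTOR *req)
{
    PIXELFORMATDESCRIPTOR pfd;
    UINT i, pass, n = 0;

    /* pass 0 collects exact matches, pass 1 appends the rest */
    for (pass = 0; pass < 2; pass++)
        for (i = 0; i < count; i++)
        {
            BOOL exact;
            if (!DescribePixelFormat(hdc, formats[i], sizeof(pfd), &pfd)) continue;
            if (!(pfd.dwFlags & PFD_DRAW_TO_WINDOW)) continue;  /* drop non-window formats */
            exact = pfd.cColorBits == req->cColorBits && pfd.cDepthBits == req->cDepthBits;
            if (exact == (pass == 0)) out[n++] = formats[i];
        }
    return n;
}
```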
Overall it seems to me that using the host pixel format list, in its ID order, then implementing an AMD-like sort on top is probably safe. That will differ a bit from what Intel GPUs do, but we can probably assume that it won't matter, or handle the problems if/when they appear.
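Concretely, I'd imagine something like the sketch below: enumerate the host formats in ID order, drop non-window formats, score each one against the request, and sort with the format ID as the final tie-break so unmatched entries keep their host ID order. The scoring function here is only a guess at what "AMD-like" means, not the actual driver logic.

```c
#include <stdlib.h>
#include <windows.h>

struct ranked_format
{
    int id;       /* host pixel format ID, the list is built in ID order */
    int score;    /* lower is better; 0 means exact match on the scored attributes */
};

/* hypothetical scoring: exact matches get 0, otherwise the sum of attribute distances */
static int score_format(const PIXELFORMATDESCRIPTOR *pfd, const PIXELFORMATDESCRIPTOR *req)
{
    return abs((int)pfd->cColorBits - (int)req->cColorBits)
         + abs((int)pfd->cDepthBits - (int)req->cDepthBits)
         + abs((int)pfd->cStencilBits - (int)req->cStencilBits);
}

static int compare_ranked(const void *a, const void *b)
{
    const struct ranked_format *fa = a, *fb = b;
    if (fa->score != fb->score) return fa->score - fb->score;
    return fa->id - fb->id;   /* tie-break on ID keeps the host order */
}

/* build the list in host ID order, score each entry, then sort */
static UINT build_and_sort(HDC hdc, const PIXELFORMATDESCRIPTOR *req,
                           struct ranked_format *out, UINT max)
{
    PIXELFORMATDESCRIPTOR pfd;
    int id, count = DescribePixelFormat(hdc, 1, sizeof(pfd), &pfd);
    UINT n = 0;

    for (id = 1; id <= count && n < max; id++)
    {
        if (!DescribePixelFormat(hdc, id, sizeof(pfd), &pfd)) continue;
        if (!(pfd.dwFlags & PFD_DRAW_TO_WINDOW)) continue;  /* keep only window formats */
        out[n].id = id;
        out[n].score = score_format(&pfd, req);
        n++;
    }
    qsort(out, n, sizeof(*out), compare_ranked);
    return n;
}
```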