I don't buy the logic about going back and forth. The draft is meant to be easy to follow. To understand the whole thing, you just need to find `myint_IIncIntVtbl`, check the generated implementation (which is nothing more than a direct call to the actual implementation), then go to that implementation and you’re done.
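To make that navigation path concrete, here is a minimal sketch of the pattern as I understand it (all names besides `myint_IIncIntVtbl` are hypothetical, and the interface is reduced to a single method):

```c
/* Minimal sketch: the generated vtbl entry is nothing more than a
 * direct call to the actual implementation, so from the vtbl you are
 * one jump away from the real code. */
#include <assert.h>

struct IIncInt;

/* vtbl layout for the hypothetical IIncInt interface */
struct IIncIntVtbl
{
    int (*inc)(struct IIncInt *iface, int value);
};

struct IIncInt
{
    const struct IIncIntVtbl *vtbl;
};

/* the actual implementation */
static int myint_inc(struct IIncInt *iface, int value)
{
    (void)iface;
    return value + 1;
}

/* generated thunk: a direct call to the implementation above */
static int myint_IIncInt_inc(struct IIncInt *iface, int value)
{
    return myint_inc(iface, value);
}

static const struct IIncIntVtbl myint_IIncIntVtbl =
{
    myint_IIncInt_inc,
};
```

So following the code is: vtbl entry, generated thunk, implementation, done.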
This is just one point, and not the most critical one. I'm also speaking from experience: so far, as soon as functions are exported and used across sources, it's *always* more steps to navigate than when they are defined directly in the same source (or its headers).
With your proposal, you’d have to go to the vtbl, find the implementation, and then wonder about the `*_funcs` struct. From there, you’d go to its declaration, then to the implementation, find the `*_FUNCS_INIT` macro, check that macro to figure out the implementation name, and finally go there. I don’t see that as an improvement. In fact, I think that’s actual indirection, and it’s something we should avoid.
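For comparison, here is a sketch of how I read the proposal, with the extra hops spelled out (the `*_funcs` struct and `*_FUNCS_INIT` macro names are illustrative, not the actual ones):

```c
/* Sketch of the proposed layout: the vtbl entry dispatches through a
 * *_funcs struct whose initializer is built by a *_FUNCS_INIT macro,
 * so reaching the real implementation takes several extra lookups. */
#include <assert.h>

struct IIncInt;
struct IIncIntVtbl { int (*inc)(struct IIncInt *iface, int value); };
struct IIncInt { const struct IIncIntVtbl *vtbl; };

/* function table the implementation is expected to fill in */
struct myint_funcs
{
    int (*inc)(int value);
};

/* the macro hides the mapping from slot name to implementation name */
#define MYINT_FUNCS_INIT(prefix) { prefix##_inc }

static int impl_inc(int value) { return value + 1; }

static const struct myint_funcs myint_funcs = MYINT_FUNCS_INIT(impl);

/* vtbl entry dispatches through the funcs struct, not directly */
static int myint_IIncInt_inc(struct IIncInt *iface, int value)
{
    (void)iface;
    return myint_funcs.inc(value);
}

static const struct IIncIntVtbl myint_IIncIntVtbl = { myint_IIncInt_inc };
```

Here, starting from the vtbl, you still have to find the `*_funcs` struct, its initializer, and expand the macro in your head before you know the implementation is `impl_inc`.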
Sure, but you can also see this as showing how much boilerplate we can strip and have generated; nothing forces you to use these macros if you prefer to declare the class function table yourself (and similarly, you can use any of the generated functions freely if you prefer, or need to build some exotic class that differs from what we generate). The macro's one advantage is that it prevents you from missing a function: you get a compilation error instead, which is much better than a link error because LSP tools can display it directly while you write the code.
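To illustrate why the macro catches a missing function at compile time (again with hypothetical names): the generated init macro references every implementation by name, so forgetting one slot makes the expansion fail where it is used, rather than at link time.

```c
/* Sketch: the generated *_VTBL_INIT macro names every slot. If, say,
 * myint_dec were not defined, the macro expansion below would fail to
 * compile (undeclared identifier) instead of failing at link time. */
#include <assert.h>

struct IIncInt;
struct IIncIntVtbl
{
    int (*inc)(struct IIncInt *iface, int value);
    int (*dec)(struct IIncInt *iface, int value);
};
struct IIncInt { const struct IIncIntVtbl *vtbl; };

/* generated: one entry per method, derived from the prefix */
#define IINCINT_VTBL_INIT(prefix) { prefix##_inc, prefix##_dec }

static int myint_inc(struct IIncInt *iface, int value) { (void)iface; return value + 1; }
static int myint_dec(struct IIncInt *iface, int value) { (void)iface; return value - 1; }

static const struct IIncIntVtbl myint_IIncIntVtbl = IINCINT_VTBL_INIT(myint);
```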
Also, although it is static in the current state, the class function table could quite easily be made dynamic (it could even be optional, driven by some IDL keyword), making it possible to have multiple flavors of one class sharing the same boilerplate code. You could also override only some of the functions, similarly to how we do it with some interfaces.
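A quick sketch of what I mean by overriding only some functions (purely illustrative names; the real table would of course have more slots): copy the default table and patch the slots you care about.

```c
/* Sketch: build a flavor of the class by copying the default table
 * and overriding a single slot, keeping the rest of the boilerplate. */
#include <assert.h>

struct IIncInt;
struct IIncIntVtbl { int (*inc)(struct IIncInt *iface, int value); };
struct IIncInt { const struct IIncIntVtbl *vtbl; };

static int default_inc(struct IIncInt *iface, int value) { (void)iface; return value + 1; }
static const struct IIncIntVtbl default_vtbl = { default_inc };

/* flavor that increments by two instead */
static int by_two_inc(struct IIncInt *iface, int value) { (void)iface; return value + 2; }

static struct IIncIntVtbl make_by_two_vtbl(void)
{
    struct IIncIntVtbl vtbl = default_vtbl; /* start from the default */
    vtbl.inc = by_two_inc;                  /* override one slot */
    return vtbl;
}
```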