The whole point of abort() is that, rude as it is, it's generally the least rude option. It's far safer and more polite than things like (a) pretending to succeed but giving incorrect output; (b) crashing further from the root cause, e.g. inside the library user's code.
I don't think that's a decision a library can reliably make.
I find this statement bizarre. I cannot think of an instance where I would ever want a library to yield incorrect results or corrupt memory instead of crashing.
Well, the statement is more in reply to the claim that aborting is generally the least bad option than to the latter part of that quote. I don't think we ever want memory corruption, no, but I also don't think those three options are the only ones we have.
Maybe I'm missing something, or still failing to read between the lines, but the only other option I see is to add more paranoid checks and try to handle any internal inconsistencies by gracefully bailing out. I see this as adding an inordinate amount of work and it's really not something I want to do.
Like, to answer your question, no, I'm not working from a specific case. But if you'd like an example, then just take the first couple of assert()s in hlsl.c. To gracefully handle that kind of inconsistency, we need to change the single-line assert to a multi-line "if (condition) { ERR("..."); return false; }". Then we need to add a way for that function to report errors and propagate them to the caller, and so on, making several functions which should be infallible now fallible. It adds work, LoC, and mental burden as you think about all the new failure modes you need to handle and how to unwind from them, and it confuses the reader, who now has to wonder why we're trying to handle a failure that can't actually happen. Then take that example and multiply it by a hundred.
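To make that concrete, here's a rough sketch of the transformation, with made-up names rather than the actual hlsl.c code (struct node, node_type(), node_type_checked(), and the fprintf() stand-in for ERR() are all invented for illustration):

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical IR node; stands in for whatever hlsl.c actually checks. */
struct node
{
    unsigned int type; /* 0 means "not yet typed" in this sketch */
};

/* Invariant checked in place; the function stays infallible. */
static unsigned int node_type(const struct node *node)
{
    assert(node->type);
    return node->type;
}

/* The "graceful" version grows a failure path, and every caller (and
 * their callers) now has to check it and unwind. */
static bool node_type_checked(const struct node *node, unsigned int *type)
{
    if (!node->type)
    {
        fprintf(stderr, "Node has no type.\n"); /* stand-in for ERR() */
        return false;
    }
    *type = node->type;
    return true;
}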
Anyone who's programmed with the Win32 API should know what a pain it is to deal with the possibility of failure from functions that have no reason to fail. Either you have to just give up and propagate that failure to the caller, unwinding everything in the process, or you waste time researching before you decide that there is no real reason for failure and just ignore it. Neither option is pleasant, and they both make the code harder to read. Even dealing with allocation failure is a pain and a half, and it's why I find it rather easy to be sympathetic to some modern languages' decision to simply abort() on allocation failure.
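As a sketch of what I mean, using nothing beyond standard C (duplicate_string() and xstrdup() are made-up names, not anything in vkd3d or Win32):

#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper: the allocation makes it fallible, so every caller
 * has to check the return value, clean up, and propagate the failure. */
static bool duplicate_string(const char *src, char **out)
{
    if (!(*out = malloc(strlen(src) + 1)))
        return false;
    strcpy(*out, src);
    return true;
}

/* The abort()-on-failure style keeps the interface infallible, which is
 * roughly what those languages do for allocation failure. */
static char *xstrdup(const char *src)
{
    char *ret;

    if (!(ret = malloc(strlen(src) + 1)))
        abort();
    strcpy(ret, src);
    return ret;
}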
To take it to an absurd extreme, if we want a library to always fail gracefully rather than crashing, then we should check every pointer before we dereference it. Obviously that's not realistic and you have to draw the line somewhere, but once you've decided that, why can't an intentional unreachable() be part of that compromise?
I hate to be arguing so vehemently, especially when I'm not even sure I'm understanding your position correctly, but this is one issue where I find that I do feel quite strongly.
In the HLSL compiler, for example, sm4_base_type() calls vkd3d_unreachable() for unexpected/unhandled types, and HLSL_SAMPLER_DIM_BUFFER/D3D_SVT_RWBUFFER is one of those.
Good catch, thanks. This is why I hate the "default" case.
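For illustration, here's roughly how a default arm hides this kind of problem; the enum, the function names, and the numeric values are all made up, not the real sm4_base_type() code:

#include <stdlib.h>

/* Hypothetical enum standing in for the sampler-dimension values. */
enum dim
{
    DIM_1D,
    DIM_2D,
    DIM_BUFFER, /* the value that ended up unhandled */
};

/* With a default arm, -Wswitch stays quiet when DIM_BUFFER is forgotten,
 * and the mistake only shows up at run time as an unreachable() abort. */
static unsigned int type_with_default(enum dim dim)
{
    switch (dim)
    {
        case DIM_1D: return 1;
        case DIM_2D: return 2;
        default:
            abort(); /* stand-in for vkd3d_unreachable() */
    }
}

/* Without a default, forgetting a case gets flagged at compile time by
 * -Wswitch; the unreachable() only guards truly out-of-range values. */
static unsigned int type_without_default(enum dim dim)
{
    switch (dim)
    {
        case DIM_1D: return 1;
        case DIM_2D: return 2;
        case DIM_BUFFER: return 3;
    }
    abort(); /* stand-in for vkd3d_unreachable() */
}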
As it happens we shouldn't be outputting non-numeric types into the RDEF at all. We already skip top-level ones, but we don't skip structs (or arrays?). So that whole case should end up going away.