On Thu, Apr 7, 2022, 10:56 AM Elaine Lefler elaineclefler@gmail.com wrote:
On Wed, Apr 6, 2022 at 6:02 AM Jinoh Kang jinoh.kang.kr@gmail.com wrote:
So that's some complicated code which isn't actually better than a straightforward uint64_t loop. I think that's the reason I prefer seeing intrinsics - granted, I have a lot of experience reading them, and I understand they're unfriendly to people who aren't familiar - but they give you assurance that the compiler actually works as expected.
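For concreteness, here's a toy illustration of the trade-off (an illustrative sketch only, not the actual patch; assumes an x86-64 target with SSE2). The plain-C loop leaves vectorization up to the compiler, while the intrinsic version pins the codegen:

```c
#include <emmintrin.h> /* SSE2 intrinsics */
#include <stddef.h>
#include <stdint.h>

/* Plain C: the compiler MAY auto-vectorize this loop,
 * but nothing guarantees it across compilers/versions. */
static void copy_u64(uint64_t *dst, const uint64_t *src, size_t qwords)
{
    for (size_t i = 0; i < qwords; i++)
        dst[i] = src[i];
}

/* Intrinsics: guaranteed to emit 16-byte vector loads/stores
 * (MOVDQU) on any SSE2-capable compiler. `blocks` counts
 * 16-byte chunks, not bytes. */
static void copy_sse2(void *dst, const void *src, size_t blocks)
{
    char *d = dst;
    const char *s = src;
    for (size_t i = 0; i < blocks; i++, d += 16, s += 16)
        _mm_storeu_si128((__m128i *)d,
                         _mm_loadu_si128((const __m128i *)s));
}
```

Both compile to the same thing under a good optimizer; the point is only that the second version behaves the same under a bad one.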
I think writing assembly directly is still best for performance, since
we can control instruction scheduling that way.
IME, it's almost impossible to hand-write ASM that outperforms a compiler. You might have to rearrange the C code a bit, but well-written intrinsics are usually just as good (within margin of error) as ASM, and much more flexible.
Perhaps I've misphrased myself here. Note that "direct assembly" != "completely hand-written assembly." It would be a bold claim that a *human* could outperform a compiler at machine-code optimization in the first place. I said we should stick to assembly because instruction scheduling is more predictable across compilers that way, *not* because a human could do better at scheduling. We can take the assembly output from the best of the compilers and do whatever we please with it. (That's how it's usually done anyway!) This brings the optimization to much older and/or less capable compilers, since we're no longer relying on the performance of the user's compiler. Note that Wine still supports GCC 4.x. Also, future compiler regressions could affect the performance of the optimized code (as Jan puts it).
llvm-mca simulates the CPU pipeline and shows how well your code would perform on a given superscalar microarchitecture. Perhaps we can use that as well.
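A quick way to try it, assuming clang and llvm-mca are installed (the source file name here is just a placeholder):

```shell
# Compile a candidate implementation to assembly and feed it to llvm-mca,
# simulating a specific microarchitecture. -timeline shows per-instruction
# scheduling; swap -mcpu to compare targets.
clang -O2 -S memcpy_sse2.c -o - | llvm-mca -mcpu=skylake -timeline
```

Comparing the reported block throughput for each candidate is a cheap first filter before benchmarking on real hardware.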
When writing high-performance code it's usually necessary to try multiple variations in order to find the fastest path. You can't easily do that with ASM, and it leads to incomplete optimization.
Yeah, we can write the first version in C with intrinsics, compare the output of several compilers, and choose the best one.
For instance, I was able to write a C memcpy function that outperforms msvcrt's hand-written assembly, for the following reasons:
- The ASM version has too many branches for small copies, and branch misprediction is a huge source of latency.
- The ASM version puts the main loop in the middle of the function, leading to a very large jump when writing small data. GCC puts it at the end, which is better (basically, a large copy can afford to eat the lag from the jump, but a small copy can't).
- When copying large data it's better to force alignment on both src and dst, even though you have to do some math.
- Stores should be done with MOVNTDQ rather than MOVDQA. MOVNTDQ avoids cache evictions, so you don't have to refill the entire cache after memcpy returns. Note that this would have been an easy one-line change even in ASM, but I suspect the developer was too exhausted to experiment.
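As a rough sketch of the last two points combined (hypothetical code, not the actual patch; assumes x86-64 with SSE2), forcing destination alignment and then using non-temporal streaming stores looks like this:

```c
#include <emmintrin.h> /* SSE2: _mm_loadu_si128, _mm_stream_si128, _mm_sfence */
#include <stddef.h>
#include <stdint.h>

/* Copy n bytes, bypassing the cache on stores.
 * Head loop forces dst to 16-byte alignment (required by MOVNTDQ);
 * loads stay unaligned (MOVDQU), which is cheap on modern CPUs. */
static void stream_copy(void *dst, const void *src, size_t n)
{
    uint8_t *d = dst;
    const uint8_t *s = src;

    /* Head: byte copy until dst is 16-byte aligned. */
    while (n && ((uintptr_t)d & 15)) { *d++ = *s++; n--; }

    /* Body: unaligned loads, non-temporal (streaming) stores. */
    while (n >= 16) {
        __m128i v = _mm_loadu_si128((const __m128i *)s);
        _mm_stream_si128((__m128i *)d, v); /* MOVNTDQ */
        d += 16; s += 16; n -= 16;
    }
    _mm_sfence(); /* order streaming stores before returning */

    /* Tail: remaining bytes. */
    while (n--) *d++ = *s++;
}
```

One caveat worth noting: streaming stores only pay off for copies larger than the cache, since the destination won't be cached if the caller reads it right back; a real implementation would pick MOVNTDQ vs MOVDQA based on size.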
These improvements (except code placement for I-cache utilization) have nothing to do with compiler optimization. The programmer can make these mistakes either way (ASM or C).
So I'm strongly opposed to ASM unless a C equivalent is completely impossible. The code that a developer _thinks_ will be fast and the code that is _actually_ fast are often not the same thing. C makes it much easier to tweak.
Or rather harder to tweak, since code arrangement is not something the programmer has control over in C.