Giovanni Mascellani (@giomasce) commented about libs/vkd3d/resource.c:
+static inline bool atomic_compare_exchange_desc_object_address(union desc_object_address volatile *head,
+        union desc_object_address cmp, union desc_object_address xchg, struct vkd3d_desc_object_cache *cache)
+{
+#ifdef __x86_64__
+    if (cache->have_128_bit_atomics)
+        return vkd3d_atomic_compare_exchange_128(&head->value, cmp.value, xchg.value);
+    else
+        return vkd3d_atomic_compare_exchange_pointer(&head->address.u.object, cmp.address.u.object,
+                xchg.address.u.object);
+#elif defined(__APPLE__)
+    /* TODO: solve ABA problem. */
+    return vkd3d_atomic_compare_exchange_pointer(&head->address.u.object, cmp.address.u.object,
+            xchg.address.u.object);
+#else
+    return vkd3d_atomic_compare_exchange_64(&head->value, cmp.value, xchg.value);
+#endif /* __x86_64__ */
Hmm, I think this leaves all non-x86_64 architectures broken, and x86_64 too in case `cmpxchg16b` is not supported. I think in these cases we should fall back to guarding the linked list with a mutex.
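To illustrate what such a fallback could look like, here is a minimal sketch of a mutex-guarded compare-exchange. The union layout, the `tag` generation counter, and the function name are simplified, hypothetical stand-ins for vkd3d's `union desc_object_address`, not the actual implementation:

```c
/* Hypothetical sketch: emulate the 128-bit CAS on the free-list head with a
 * mutex. Field names are simplified stand-ins for vkd3d's real union. */
#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>

union desc_object_address
{
    struct
    {
        void *object;
        uint64_t tag; /* generation counter that defeats the ABA problem */
    } address;
};

static pthread_mutex_t desc_list_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Compare-and-swap performed under a mutex: correct on any architecture,
 * at the cost of serialising all accesses to the list head. */
static bool locked_compare_exchange_desc_object_address(union desc_object_address *head,
        union desc_object_address cmp, union desc_object_address xchg)
{
    bool ret = false;

    pthread_mutex_lock(&desc_list_mutex);
    if (head->address.object == cmp.address.object && head->address.tag == cmp.address.tag)
    {
        *head = xchg;
        ret = true;
    }
    pthread_mutex_unlock(&desc_list_mutex);
    return ret;
}
```

Since every reader and writer of the head would have to take the same mutex, this trades the lock-free fast path for portability; platforms with working 128-bit atomics would keep the existing code path.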