On Wed, Aug 3, 2011 at 10:56 AM, Michael Mc Donnell michael@mcdonnell.dk wrote:
On Tue, Aug 2, 2011 at 2:26 PM, Henri Verbeet hverbeet@gmail.com wrote:
On 1 August 2011 14:08, Michael Mc Donnell michael@mcdonnell.dk wrote:
With the bitfields I'm not sure about stuff like endianness. My gut feeling would be to use bitmasks and shifts to unpack a DWORD instead, but bitfields certainly look nicer. Beyond that, endianness is a somewhat academic consideration with an API that's available on x86 only. So I'd say keep the bitfields.
Ok, good. That'll keep me from the pain of doing bit twiddling on a little-endian machine :-)
You can't do that, even on x86. Bitfield memory layout is undefined.
It is *technically* undefined, but all the compilers I have tested do the same thing. The little test I've attached has been run with Visual Studio on x86, GCC on x86 and x86-64, llvm/clang on x86, and suncc on a SPARC-Enterprise-T5220, and they all behave the same way. The raw byte ordering on the SPARC is different because it's big-endian, but the bit-fields still work the same way, in that they extract the correct values. Do you know of any common compiler and architecture combinations where it does not work?
I've removed the support for DEC3N and UDEC3 so it doesn't block the patch.
Oops, here is the patch where I have actually removed it :-)