05 April 2010

Implications of ECMA-335, Partition I, 12.1.2 and 12.1.3

I’ve been spending quite some time this weekend refactoring parts of the MOSA compiler and fixing things small and large, and I’ve stumbled once again over our memory model. I was refactoring our internal representation to make load and store operations explicit and broke almost all of our tests at once. Fixing them was pretty easy, except for the smaller types... Section 12.1.2 states:

„Loading from 1- or 2-byte locations (arguments, locals, fields, statics, pointers) expands to 4-byte values.“

Ouch. We’ve gone through a lot of trouble to ensure correct arithmetic on all types and have been badly missing the point: all smaller integral types are handled as 4-byte values on the evaluation stack.
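
To make the consequence concrete, here is a small C# snippet (an illustration of the rule only, not MOSA code): operands of 1- and 2-byte types are widened to 32-bit values on the evaluation stack, so the narrow result only reappears when an explicit conversion is emitted.

    using System;

    class StackWidening
    {
        static void Main()
        {
            byte a = 200, b = 100;

            // Both operands are widened to 4-byte values on the evaluation
            // stack before the add, so the result is an int (300) and no
            // 8-bit wrap-around happens by itself.
            int sum = a + b;

            // Only an explicit conversion (conv.u1) truncates back to a byte.
            byte truncated = (byte)sum; // 44

            Console.WriteLine("{0} {1}", sum, truncated);
        }
    }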

The next step was to change the CIL load instructions to correctly reflect this fact, and fortunately we already had the appropriate instructions in the IR. The current state of work is that most of our tests are passing again, but not all yet. Then I started wondering about the floating-point specification. The section on floating-point values (12.1.3) states:

„The supported storage sizes are float32 and float64. Everywhere else (on the evaluation stack, as arguments, as return types, and as local variables) floating-point numbers are represented using an internal floating-point type. In each such instance, the nominal type of the variable or expression is either float32 or float64, but its value can be represented internally with additional range and/or precision. The size of the internal floating-point representation is implementation-dependent, can vary, and shall have precision at least as great as that of the variable or expression being represented. An implicit widening conversion to the internal representation from float32 or float64 is performed when those types are loaded from storage. The internal representation is typically the native size for the hardware, or as required for efficient implementation of an operation.“

So for floating-point types we have exactly one stack type, F, but the implementation is free to choose the precision of its operations as long as it is at least as great as that of the storage type. Since we’ve spent a great deal of time on single-precision arithmetic, I’m inclined to keep the reduced-precision operations there. Any opinions?
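
For anyone wondering why the choice is observable at all, here is a tiny C# illustration (again, not MOSA code) of how the width of the internal F representation shows up near the edge of float32 precision:

    using System;

    class FloatPrecision
    {
        static void Main()
        {
            float a = 16777216f; // 2^24, where float32 stops representing every integer
            float b = 1f;

            // Computed at float32 width the addition is lost to rounding;
            // with a wider internal representation (float64, or x87 80-bit)
            // the sum is exact.
            float narrow = a + b;                 // 16777216
            double wide = (double)a + (double)b;  // 16777217

            Console.WriteLine("{0} {1}", narrow, wide);
        }
    }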

I’ll continue fixing this in the next couple of days.

04 April 2010

Why the Switch Fixture has crashed on I2/U2 and others...

The tests are run by .NET calling through a function-pointer delegate using the stdcall calling convention. This calling convention is similar to cdecl; one of the similarities is that EBX is a callee-saved register, so the callee must preserve it and restore it before returning to the caller. We didn’t do that and thus corrupted the state of the .NET runtime on Windows. The bug is fixed and a commit will follow soon.
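
For reference, a rough sketch of what such a call-through looks like on the managed side (the names TestEntryPoint, Run and compiledMethod are illustrative, not the actual test fixture code):

    using System;
    using System.Runtime.InteropServices;

    class DelegateCallSketch
    {
        // The attribute tells the marshaler to use stdcall for the native call,
        // so the generated thunk expects EBX/ESI/EDI/EBP to be preserved by the callee.
        [UnmanagedFunctionPointer(CallingConvention.StdCall)]
        delegate int TestEntryPoint(short value);

        static int Run(IntPtr compiledMethod, short argument)
        {
            // Wrap the raw pointer to the compiled code in a callable delegate.
            var entry = (TestEntryPoint)Marshal.GetDelegateForFunctionPointer(
                compiledMethod, typeof(TestEntryPoint));

            // If the compiled code clobbers EBX without restoring it, the managed
            // caller's state is corrupted and the runtime can crash unpredictably.
            return entry(argument);
        }
    }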