MOSA Development Blog Entries

 MOSA Dev Blogs News Feed 
Wednesday, August 17, 2011  |  From MOSA Dev Blogs

Gotta say, this one went pretty smooth. MOSA is now able to compile and handle delegates. Meaning that we're now able to use constructs like this:

delegate bool MethodType(int x);

public bool SomeMethod(int x)
{
        System.Console.WriteLine(x);
        return true;
}

public void Foo()
{
        MethodType method = this.SomeMethod;
        method(42);
}


Wednesday, August 17, 2011  |  From MOSA Dev Blogs

After 3 years of development we were finally able to release version 1.0 of the MOSA (Managed Operating System Alliance) project. Over the past months we've been concentrating on two features:

- Support for generics

- Exception handling

Adding these two features marks an important step in the development of our compiler component and led to the decision to call it version 1.0. First of all because of the features themselves: both have been pretty difficult to implement with only two developers at hand. Second, to make it a birthday release for the three-year anniversary. All in all, I'm glad that we managed to get this release out in the open, and I'd like to take the opportunity to thank Phil Garcia (tgiphil) for working so hard and Michael Fröhlich (grover) for all his work beforehand. I'm proud to be part of this project and anxious to see where further versions will lead.


Monday, April 5, 2010  |  From MOSA Dev Blogs

I've been spending quite some time this weekend refactoring parts of the MOSA compiler and fixing things small and large, and I've stumbled once again over our memory model. I was refactoring our internal representation in order to make load and store operations explicit and broke almost all of our tests at once. Fixing them was pretty easy, except for the smaller types... Section 12.1.2 of the CLI specification states:

"Loading from 1- or 2-byte locations (arguments, locals, fields, statics, pointers) expands to 4-byte values."

Ouch. We've gone through a lot of trouble to ensure correct arithmetic on all types and have been badly missing the point: all smaller integral types are handled as 4-byte values on the evaluation stack.
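To illustrate (this is just a made-up fragment, not one of our test cases), C# itself already reflects this rule: arithmetic on the small types happens at 32 bits, and you have to truncate the result back explicitly.

static void WideningExample()
{
        byte a = 200;
        byte b = 100;
        // The addition is performed as a 4-byte (int32) operation on the
        // evaluation stack, so assigning back to a byte needs an explicit cast.
        int wide = a + b;            // 300 - no truncation needed
        byte narrow = (byte)(a + b); // 44  - 300 truncated to 8 bits
        System.Console.WriteLine(wide);
        System.Console.WriteLine(narrow);
}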

The next step was to change the CIL load instructions to correctly reflect this fact, and fortunately we already had the appropriate instructions in the IR. So the current state of work is that most of our tests are passing again, but not all yet. Then I started wondering about the floating point specification. Looking at the section for floating point values (12.1.3), it states:

"The supported storage sizes are float32 and float64. Everywhere else (on the evaluation stack, as arguments, as return types, and as local variables) floating-point numbers are represented using an internal floating-point type. In each such instance, the nominal type of the variable or expression is either float32 or float64, but its value can be represented internally with additional range and/or precision. The size of the internal floating-point representation is implementation-dependent, can vary, and shall have precision at least as great as that of the variable or expression being represented. An implicit widening conversion to the internal representation from float32 or float64 is performed when those types are loaded from storage. The internal representation is typically the native size for the hardware, or as required for efficient implementation of an operation."

So for floating point types we have exactly one stack type, F, but the implementation is free to choose the precision of its operations as long as it is at least as great as the storage size of the floating point type. Since we've spent a great deal of time on single precision arithmetic, I'm inclined to keep the reduced precision operations there. Any opinions?

I’ll continue fixing this in the next couple of days.


Sunday, April 4, 2010  |  From MOSA Dev Blogs

The tests are run by .NET calling through a function pointer delegate using the stdcall calling convention. This calling convention is similar to cdecl. One of the similarities is that the EBX register must be saved by the callee and restored before it returns to the caller. We didn’t do that and thus corrupted the state of the .NET runtime on Windows. Bug fixed and commit following soon.
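For anyone wondering what "calling through a function pointer delegate" means on the .NET side, here is a rough sketch (the type and method names are made up, not our actual test fixture):

using System;
using System.Runtime.InteropServices;

// The unmanaged function pointer is declared as stdcall, so the callee
// (the MOSA-compiled test method) must preserve EBX, ESI, EDI and EBP.
[UnmanagedFunctionPointer(CallingConvention.StdCall)]
delegate int TestEntryPoint(int argument);

static class TestRunner
{
        static int RunTest(IntPtr compiledCode, int argument)
        {
                // Wrap the raw pointer to the MOSA-compiled code in a delegate
                // and call it like any other managed method.
                TestEntryPoint entry = (TestEntryPoint)Marshal.GetDelegateForFunctionPointer(
                        compiledCode, typeof(TestEntryPoint));
                return entry(argument);
        }
}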

Saturday, March 27, 2010  |  From MOSA Dev Blogs

After the static object allocation, I've finished the next step for MOSA. The compiler now emits mtable (virtual method table) records for compiled types and is able to properly call virtual functions. The test to check these is in CallVirtFixture. I'll add a couple more tests there to check for proper hiding, base class calls and other things - the current code should hopefully handle all of those cases sufficiently well.

In order to accomplish this I’ve had to add a fake System.Object implementation to the existing tests, as those classes wouldn’t compile anymore - the linker couldn’t create the vtable for them due to the 4 virtual methods every object inherits from System.Object: ToString, GetHashCode, Equals and Finalize.

The good thing about this is of course that we can now use virtual functions and overrides to write OO kernels; the downside is that every kernel has to provide at least a fake implementation of System.Object.

I’ve added a fake System.Object to the existing HelloWorld kernel.
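For reference, such a fake System.Object only has to declare the inherited virtual slots so the linker can build a vtable for it. A rough sketch (not necessarily what HelloWorld actually contains, and it assumes the kernel supplies its own System namespace types such as String):

namespace System
{
        // Minimal stand-in so the linker can emit a vtable containing the
        // virtual slots every object inherits. The bodies can stay trivial.
        // A real kernel corlib would also provide the Finalize slot,
        // usually written as a destructor.
        public class Object
        {
                public virtual bool Equals(object obj) { return this == obj; }
                public virtual int GetHashCode() { return 0; }
                public virtual string ToString() { return null; }
        }
}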

Let’s see what Phil and Simon can come up with in Hello World, now that this is out of the way.

Monday, March 15, 2010  |  From MOSA Dev Blogs

After improving our test situation over the weekend, I've started making good on a promise I gave a long time ago: allocate static objects at compile time for core kernel services.

The issue with writing a managed operating system - or any operating system - is memory management and moving to the OS's own way of working at boot time. The core problem is that there are assumptions about objects which can't be met easily while memory management is still being initialized on the CPU(s) the OS will run on. MOSA faces this problem too, with one addition: writing non-OO code in an OO language feels broken.

Classical operating systems solve this by having a reduced set of services while booting and initializing the OS services later in an OS-specific fashion. We could've done this too, but why go for common ground if there's new ground to explore?

So what does this feature do? This feature detects all dynamic memory allocations happening in static constructors and allocates memory for the allocated objects at compile time in the bss segment of an executable. It replaces the call to new with a load of the address of the data segment location, making the position of the object fixed in memory relative to its load address. This allows core OS services to be written using C# classes right from the start and allows them to be used as such.

There are some limitations though: The allocated object must be fixed in size, it must not have a complex structure and the field used to store the object must have the exact same type as used for the new operator. No casts allowed.
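To illustrate (the class name here is hypothetical, not taken from HelloWorld), a static constructor like the following qualifies, because the allocated type has a fixed size and the field type matches the new expression exactly:

public class PageAllocator
{
        // The field type must match the allocated type exactly - no casts.
        private static PageAllocator instance;

        static PageAllocator()
        {
                // With --enable-static-alloc, this allocation is resolved at
                // compile time: space is reserved in the bss segment and the
                // call to new is replaced with a load of that fixed address.
                instance = new PageAllocator();
        }
}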

There are probably further limits to this feature, but I haven't figured them out yet. It will certainly be interesting to explore our HelloWorld kernel with OO features using the new MOSA compiler.

Oh and before I forget: The feature must be turned on explicitly on the command line. Use --enable-static-alloc (or the shorter --sa) to enable it.

I've updated the HelloWorld project's CMOS and Boot classes to take advantage of this mechanism.

Thursday, March 11, 2010  |  From MOSA Dev Blogs

Over the past couple of days I’ve made changes to MOSA to support generics. The first and primary change was to support a scheduled compilation model.

Let me explain this: Up to these changes the MOSA compiler would just scan an assembly, locate all types and compile each method contained in them. The only exceptions to this rule were native methods, generic types and generic methods.

The revised compilation scheduler stage

However, skipping generics doesn't work anymore - you want those generic types and methods compiled too. The issue you face is how to compile these methods and types without knowing their usage. So I basically kept the current compilation scheduler, but added the capability for the pipeline to schedule additional types and methods in the scheduler. In order to do this, there's a new assembly compilation stage: the ICompilationSchedulerStage. This stage performs the type lookups that used to be done by the MethodCompilerBuilderStage.

The compilation scheduler now maintains a schedule of methods and types to compile and executes these in order. Once all scheduled methods and types are compiled, the entire assembly including all of its generic usages has been compiled.
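Conceptually the scheduler is just a work list. A much-simplified sketch of the loop (the type names are placeholders, not the actual MOSA classes):

using System;
using System.Collections.Generic;

// Placeholder stand-ins for MOSA's real type-system classes.
public class TypeRef { public List<MethodRef> Methods = new List<MethodRef>(); }
public class MethodRef { }

// Much-simplified illustration of the scheduling loop, not the actual stage.
public class CompilationScheduler
{
        private readonly Queue<TypeRef> work = new Queue<TypeRef>();
        private readonly HashSet<TypeRef> seen = new HashSet<TypeRef>();

        // Stages call this whenever they discover a new type to compile,
        // e.g. a closed generic instantiation such as List<int>.
        public void Schedule(TypeRef type)
        {
                if (seen.Add(type))
                        work.Enqueue(type);
        }

        // Driven until no further types or methods are discovered.
        public void Run(Action<MethodRef> compileMethod)
        {
                while (work.Count > 0)
                {
                        TypeRef type = work.Dequeue();
                        foreach (MethodRef method in type.Methods)
                                compileMethod(method);   // compiling may schedule more types
                }
        }
}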

Thursday, March 11, 2010  |  From MOSA Dev Blogs

After about 3 hours of patching and fixing the last issues, I finally completed the first successful test of generics in MOSA. The following C# fragment compiles successfully and passes all tests:

static class Test
{
        private static T GenericMethod<T>(T value)
        {
                return value;
        }

        public static bool TestCallGenericMethodWith(int value)
        {
                return value == GenericMethod(value);
        }
}

It may not look like much, but this test lays the groundwork for all other generics support and has moved forward not only the compiler, but also the assembly loader and the MOSA runtime, and brought some other additions like cleaner error messages from mosacl.

I'll finish this test case tomorrow and push my changes to GitHub; I'll probably need some help with that from fellow MOSA contributors.

Good night.

Tuesday, March 9, 2010  |  From MOSA Dev Blogs

Let mosacl speak for itself:

Mini:Bin Michael$ mono --debug mosacl.exe -a x86 -o MQT.exe -f PE Mosa.QuickTest.exe
MOSA AOT Compiler, Version 0.6 'Tanigawa'
Copyright 2009 by the MOSA Project. Licensed under the New BSD License.
Copyright 2008 by Novell. NDesk.Options is released under the MIT/X11 license.

Output file: MQT.exe
Input file(s): /Users/Michael/Projects/MOSA/MOSA-Project/Bin/Mosa.QuickTest.exe
Architecture: Mosa.Platforms.x86.Architecture
Binary format: Portable Executable File Linker
Boot format: Not bootable
Is executable: True

Compiling ...
Scheduling type Mosa.QuickTest.App for compilation.
Compiling type Mosa.QuickTest.App
Compiling method Main
Loaded generic type Mosa.QuickTest.Generic`1
Scheduling type Mosa.QuickTest.Generic`1<System.Int32> for compilation.
Compiling type Mosa.QuickTest.Generic`1<System.Int32>
Compiling method .ctor
Compiling method get_Value


Unresolved symbols.
        System.Object..ctor()


Mini:Bin Michael$

Sunday, March 7, 2010  |  From MOSA Dev Blogs

After hacking on MOSA for two days and trying to figure out what I was thinking way back, here's one of the first runs with generic types:

        mini:Bin Michael$ mono --debug mosacl.exe -o MQT.exe --Architecture=x86 --format=PE Mosa.QuickTest.exe
        MOSA AOT Compiler, Version 0.6 'Tanigawa'
        Copyright 2009 by the MOSA Project. Licensed under the New BSD License.
        Copyright 2008 by Novell. NDesk.Options is released under the MIT/X11 license.

        Output file: MQT.exe
        Input file(s): /Users/Michael/Projects/MOSA/MOSA-Project/Bin/Mosa.QuickTest.exe
        Architecture: Mosa.Platforms.x86.Architecture
        Binary format: Portable Executable File Linker
        Boot format: Not bootable
        Is executable: True

        Compiling ...
        Scheduling type Mosa.QuickTest.App for compilation.
        Compiling type Mosa.QuickTest.App
        Compiling method Main
        Scheduling type System.Collections.Generic.List`1<System.Int32> for compilation.
        Scheduling type Mosa.QuickTest.Generic`1<System.String> for compilation.
        Compiling type System.Collections.Generic.List`1<System.Int32>
        Compiling method .ctor

Unfortunately this is as far as it gets right now, but I’m sure I’ll find the reason for this soon.

I had to rewrite a whole bunch of stuff to get this far, including the way types are scheduled for compilation by mosacl. The surprising part is that generics can be AOT-compiled under certain circumstances.

Saturday, March 6, 2010  |  From MOSA Dev Blogs

Yesterday I tried to install Mono 2.6.1 on my Mac mini to do some work on my pet project MOSA. Unfortunately things didn't work out as smoothly as I had thought, as nant greeted me with the following error message:

        Failed to initialize the 'Mono 3.5 Profile' (mono-3.5) target framework.

Furthermore it stated:

        'pkg-config' failed to start.

I tried all sorts of things from reinstalling, to cleaning, to trying to pull mono from Macports and even compiling Mono myself. Nothing has helped, every attempt was stopped by some error at some point.

Then I started to do some digging and tried running pkg-config from Terminal - sure enough, it wasn't there. It was, however, in my /opt/local/bin folder, so running

        /opt/local/bin/pkg-config --modversion mono

resulted in:

        Package mono was not found in the pkg-config search path.
        Perhaps you should add the directory containing `mono.pc'
        to the PKG_CONFIG_PATH environment variable
        No package 'mono' found

OK, so again I stumbled - looking further I found another pkg-config, installed by Mono itself. Running the following two commands in Terminal:

        export PATH=/Library/Frameworks/Mono.framework/Versions/2.6.1/bin:$PATH
        pkg-config --modversion mono

produced:

        Package glib-2.0 was not found in the pkg-config search path.
        Perhaps you should add the directory containing `glib-2.0.pc'
        to the PKG_CONFIG_PATH environment variable
        Package 'glib-2.0', required by 'Mono', not found

Again I stumbled. Looking for glib-2.0.pc I found it in /opt/local/lib. So I added it to the PKG_CONFIG_PATH and ran it again:

        export PKG_CONFIG_PATH=/opt/local/lib/pkgconfig:$PKG_CONFIG_PATH
        pkg-config --modversion mono

Now I finally got the response:

        2.6.1

So I’ve finally got pkg-config to work and running nant worked too - I can now start contributing some code again...

Tuesday, December 29, 2009  |  From MOSA Dev Blogs

The MOSA Project website (www.mosa-project.org) is no longer hosted at CodePlex. Instead we are hosting our own website using Retrospectiva, an open-source project management tool. It has more advanced features than CodePlex could provide us, namely support for SVN or Git as source code repositories. We hope you like it better!

Also, if you have not already noticed, we are using a new logo design.




Monday, October 5, 2009  |  From MOSA Dev Blogs

Wow, that was a pretty long server downtime. I apologize for that, but there were some private reasons why the server has been down for such a long time.

But that doesn't mean we haven't been productive since then. We began working on a rewrite of the compiler to let it use the Flyweight pattern. We expect this pattern to have a large impact on compilation performance and to speed things up in general.
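To give a rough idea of what the Flyweight pattern means in a compiler (a generic sketch, not the actual MOSA design): instead of every instruction node carrying its own behaviour, a single shared, stateless instruction object is reused for all occurrences, and the per-occurrence operands live in a separate node.

using System.Collections.Generic;

// Shared, stateless flyweight: one instance per opcode, reused everywhere.
public abstract class Instruction
{
        public abstract void Emit(InstructionNode node);
}

public sealed class AddInstruction : Instruction
{
        public static readonly AddInstruction Instance = new AddInstruction();
        private AddInstruction() { }

        public override void Emit(InstructionNode node)
        {
                // emit platform code for node.Operands here
        }
}

// Extrinsic state: the per-occurrence data that cannot be shared.
public class InstructionNode
{
        public Instruction Instruction;               // points at the shared flyweight
        public List<object> Operands = new List<object>();
}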


Also, we now have a small, experimental build server: http://think-in-co.de:8111

Tuesday, July 14, 2009  |  From MOSA Dev Blogs

Thanks to tgiphil (Phil Garcia) we are now able to read the multiboot structure handed over by GRUB. This way we can finally figure out how much memory is available and test our first basic memory management routines.
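For those unfamiliar with it: the memory information sits right at the start of the multiboot information structure that GRUB passes in EBX. A hedged sketch of what we read (field and helper names are ours, and only the first few fields are shown):

using System.Runtime.InteropServices;

// First fields of the multiboot information structure handed over by GRUB
// (layout per the Multiboot specification; this is only a partial sketch).
[StructLayout(LayoutKind.Sequential)]
public struct MultibootInfo
{
        public uint Flags;      // bit 0 set => MemLower/MemUpper are valid
        public uint MemLower;   // KB of memory below 1 MB
        public uint MemUpper;   // KB of memory above 1 MB
        // ... further fields (boot device, command line, module list, ...)
}

public static class Memory
{
        // Rough total in KB, assuming the flags bit for memory info is set.
        public static uint TotalKilobytes(ref MultibootInfo info)
        {
                return 1024 + info.MemUpper;   // low 1 MB plus memory above it
        }
}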


Tuesday, July 14, 2009  |  From MOSA Dev Blogs

A few weeks ago I got sick of always running Mosa inside a VM, so I tried to write the image to a USB drive using dd and plugged it into my EeePC 1000H.


Long story short:


Tuesday, July 14, 2009  |  From MOSA Dev Blogs

First of all: sorry for the long downtime, folks. Due to some technical and some personal reasons, the server has been down for at least 2 months now. A lot happened during these days, especially on MOSA, and I'll make sure to catch up and post many articles on it now.

But to begin with, these will come without the Japanese translation, as that would take too long right now.

Monday, June 15, 2009  |  From MOSA Dev Blogs

While most of the development and testing of MOSA is done using virtualization software, MOSA does indeed boot on real hardware too. At the moment we support booting MOSA from either a CD or USB key. We created a short one minute video showing MOSA being written to a USB Drive and the same PC booted with it: 

Booting MOSA from USB Drive

Here are the detailed instructions for writing MOSA to a USB Drive:

1. Download the dd utility for windows from http://www.chrysocome.net/dd.

2. Extract the dd.exe executable to the mosa\trunk\Setup\Boot\build directory.

3. Open a command prompt window and change directory to the build directory.

4. Connect the USB key you wish to ERASE and install MOSA onto. (WARNING: Data on the USB drive will be lost!)

5. Determine the device path for the USB key.

Type "dd --list" to display a list all the block devices on your system. Find the one for the USB Key you just connected. Be careful, if you select or mistype the wrong device, you can corrupt your hard drive or other storage devices. Unless you understand these steps completely, do not proceed.

6. Type "dd of=\\?\Device\HarddiskX\PartitionX if=bootimage.img bs=512 --progress" and substitute the of= parameter with the device path found in the previous step.

7. Wait until all the blocks are written to the USB key before disconnecting it.

8. Now boot a PC or laptop with the USB Key connected!




Sunday, May 10, 2009  |  From MOSA Dev Blogs

The first challenge to incorporating the Mono Class Libraries into MOSA is how to replace the external method calls in the class libraries with our own custom implementation. There are two basic approaches to this challenge. In part one of this blog series, we will explore the one used by other managed open source projects and its unique challenges.


The most common approach is to replace the call to the external method with a call to another custom method instead. This approach seems to be the simplest and most elegant solution - as you only have to supply the CIL code for the custom method and then have the compiler call the replacement method.


Ideally the replacement method would be implemented similar to this:




namespace InternalSystem
{
        [TargetNamespace("System")]
        public class SomeMonoClass
        {
                public int NewMethod()
                {
                        return this.X * this.Y;
                }
        }
}

Of course, this method will not compile because the member variables X and Y are not visible to the C# compiler. One possible solution is to create stubs for these member variables so they exist while the code is compiled to CIL. Here we add those two member variables:

 



namespace InternalSystem
{
        [TargetNamespace("System")]
        public class SomeMonoClass
        {
                int X;
                int Y;

                public int NewMethod()
                {
                        return this.X * this.Y;
                }
        }
}

Now the code compiles. And given some special handling in the CIL-to-native compiler, the external method call will be redirected to this replacement method instead. Creating stubs to expose these "hidden" member methods and variables seems like a fair compromise. But look at this example:

 



namespace InternalSystem
{
        [TargetNamespace("System")]
        public class SomeMonoClass
        {
                int x;
                int y;

                public ClassC NewMethod(ClassA a, ClassB b)
                {
                        return new ClassC(this.x, a.X, b.Y);
                }
        }
}

Now additional stubs are required to represent the ClassA, ClassB, and ClassC classes, plus stubs for each of their member methods and variables, before this code will compile. Do you see where this is going? Given a rich framework with many dependencies between classes, like the .NET Framework, this approach can result in the creation of hundreds of stubs!

 

MOSA will approach this problem in another way, which I will be blogging about next.



Sunday, May 3, 2009  |  From MOSA Dev Blogs

For the MOSA project, we needed a tool to create image files for various virtual machine software, such as Microsoft Virtual PC, Microsoft Virtual Server, VMware, and VirtualBox, as well as emulators such as QEMU and Bochs.

The use of virtualization software helps us facilitate building and testing the MOSA AOT/JIT compiler, the reference operating system, and the CIL-based runtime libraries.

So we needed a virtual disk image tool that could:

1. Create disk images in various virtual disk formats: IMG, VHD and VDI.

2. Create a Master Boot Record (MBR) and partitions in the disk image.

3. Create FAT12 or FAT16 partitions (or EXT partitions).

4. Incorporate a multi-stage boot loader with support for the Multiboot Specification.

5. Add files to the file system.

6. Execute on Windows or Linux platforms.

After an exhaustive search across various open source operating systems, I was not able to find a single tool that could do all of this. Usually this required a whole tool set of applications and scripts to create a single boot image, such as GRUB, LILO, mkfatfs, fdisk, and Syslinux, and generally these tools and their related scripts were specific to either Linux or Windows.

So I decided to roll my own boot image creation tool in C#, incorporating all these features. One of the good things about having written parts of the device drivers, driver system, and file system support for MOSA is that I already had code for creating, reading, and writing the MBR, partitions, and FAT12/16. Implementing basic support for the virtual disk formats was relatively straightforward after reading the specifications and understanding the Cylinder-Head-Sector (CHS) addressing legacy. The most difficult part was incorporating the Syslinux boot record and boot loader binaries. Those binary executables require very specific in-place modifications based on the virtual disk geometry and file system.

The result is a tool simply called "CreateBootImage.exe". It runs on both Windows and Linux (with Mono), and is now available as part of the MOSA project.

Here’s how to use the tool:

The tool accepts two command line arguments:

CreateBootImage.exe <configuration file> <image file>

The configuration file contains a list of options, one per line, with arguments separated by a tab. The following options are supported:

Option      Arguments              Description
-volume     Volume Name            Set the volume name for the first partition
-blocks     # of Blocks            Set the number of 512-byte blocks
-fat12                             Use FAT12 file system
-fat16                             Use FAT16 file system
-fat32                             Use FAT32 file system (untested)
-vhd                               Create image in VHD (Virtual PC/Virtual Server) format
-vdi                               Create image in VDI (VirtualBox) format
-img                               Create image in IMG format
-syslinux                          Patch disk image for Syslinux
-mbr        Filename               Use file for Master Boot Record
-boot       Filename               Use file for Boot Record
-file       Filename, Destination  Include file in file system (destination optional)
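
For example, a configuration that creates a FAT12-formatted IMG image patched for Syslinux might look like this (the file names are purely illustrative, and each option and its arguments are separated by a tab):

-volume	MOSABOOT
-blocks	51200
-fat12
-img
-syslinux
-mbr	mbr.bin
-boot	ldlinux.bin
-file	ldlinux.sys
-file	mosa.bin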

The tool can create disk images for the following emulators:

Emulator               File Format
Virtual PC 2004/2007   .VHD
Virtual Server         .VHD
VMware                 .VHD
VirtualBox             .VDI
QEMU                   .IMG
Raw Image              .IMG


