When I was a teenager, I only had access to a Commodore 64 at home. I envied my friends who had a PC or even an Amiga for their seemingly unlimited RAM and crisp, high resolution text displays. Glorious 80 columns! At one point I wrote a Bash-like shell on my C64 with a software-rendered 80-column display, where each character had to fit into 4*8 pixels. Considering the constraints, it looked surprisingly good on my monochrome monitor, but it was slow, and I barely had any RAM left for programs to load because of all the space the shell functionality used up.
Later I got an Amiga 500 and I was happy with it text-wise: it could display a proper 640*200 screen with 80 columns of text. It was great for logging into a remote server over telnet using a modem (back then it wasn't considered harmful), where I had an account and could use email and IRC. Writing assembly and C code on the Amiga was great, and its 1 MB of RAM felt like tons. For years I nurtured a dream of getting an Amiga 1200 with a hard disk and writing a whole OS for it in assembly. That dream was never realized. When I got a proper PC, I wrote programs in C. Making an OS didn't seem feasible due to the complexity of the CPU and the hardware itself. Sure, I wrote some assembly code with BIOS interrupt calls, but making anything serious as bare metal code is pretty hard on a PC, especially these days.
A few years ago I thought the Raspberry Pi would be great for such aspirations, but while the ARM CPU itself is OK to code for, the hardware is way too modern and, at the same time, surprisingly badly documented. Sure, making the status LED blink was doable, but I hear even that ship has sailed with the Pi 4. Putting text on the screen is hard because you have to work in graphics mode (good luck even setting up a frame buffer with those badly documented mailboxes), accessing the SD card is hard, writing a whole FAT32 file system driver is really hard, and writing a USB driver for keyboard handling is super hard. Using the GPIO pins to access the Pi via serial port is doable, but then you are still just using a terminal program to access a box, and for every change in the kernel code you need to copy data to the SD card and reboot, or use QEMU for everything, and... Small wonder that most "Let's make an OS for an ARM SBC" projects die at the stage of a blinking LED.
Some dreams come true
My dream computer would use a 32-bit CPU that is similar to ARM but less complicated to work with, somewhere between the 6502 and ARMv6. Lots of RAM, up to 4 GB thanks to the 32-bit address space, though realistically I would never need that much. Easy access to the hardware through hardware I/O registers. A unified memory architecture, if it ever had video and audio. A smart disk controller device that would expose the file system at a high level, its firmware handling the complicated block-level tasks. And a serial console port, so the system can be accessed with a terminal program.
Since nobody is going to make it for me, and I'm just a software guy, I decided to make my own computer as a virtualized entity. It's called Ariel.
I created a small virtual machine in C# that runs a RISC CPU I designed, along with some virtual hardware devices I made. The ARM-like CPU is something that would work in real life, it could be implemented as an FPGA core, and the devices are either realistic or plausible. The VM runs code at over 12 MIPS (million instructions per second) on my PC, which is comparable to a speedy Intel 486, and that's without any serious optimization. Plenty of power for my needs; I would be fine with just 1 MIPS. This is also just a number: I believe the Ariel CPU packs a bigger punch than a 486 when it needs to run useful code.
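The heart of such a VM is a fetch-decode-execute loop. As a rough illustration, here is a toy version in C with an invented encoding (8-bit opcode, two 4-bit register fields, 16-bit immediate); the real Ariel ISA is my own design and isn't published, so none of these opcodes or field layouts are the actual ones.

```c
#include <stdint.h>
#include <string.h>

/* Toy 32-bit RISC core sketch: 16 registers, fixed-width 32-bit
   instructions. The encoding and opcode numbers are hypothetical,
   not the real Ariel ISA. */

enum { OP_HALT = 0, OP_LOADI = 1, OP_ADD = 2 };

typedef struct {
    uint32_t r[16];   /* general-purpose registers */
    uint32_t pc;      /* program counter, in words */
    int halted;
} Cpu;

/* Run instructions from `prog` until OP_HALT. */
void cpu_run(Cpu *cpu, const uint32_t *prog)
{
    memset(cpu, 0, sizeof *cpu);
    while (!cpu->halted) {
        uint32_t ins = prog[cpu->pc++];           /* fetch  */
        uint32_t op  = ins >> 24;                 /* decode */
        uint32_t rd  = (ins >> 20) & 0xF;
        uint32_t rs  = (ins >> 16) & 0xF;
        uint32_t imm = ins & 0xFFFF;
        switch (op) {                             /* execute */
        case OP_LOADI: cpu->r[rd] = imm;                     break;
        case OP_ADD:   cpu->r[rd] = cpu->r[rd] + cpu->r[rs]; break;
        case OP_HALT:  cpu->halted = 1;                      break;
        }
    }
}
```

The C# version follows the same shape, just with the full instruction set and the hardware devices hanging off the memory bus.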
I wrote my own assembler for it, along with a Visual Studio Code extension. It works like a charm; in some respects it's even better than Retro Assembler. Ariel boots from a ROM image that is really simple so far; later the ROM will load the Kernel from the file system. The RAM size is configurable, up to 4095 MB. The top 1 MB of the address space is the Hardware Space, and its top 64 KB is the Hardware I/O Space, holding the hardware registers that give me access to the devices. The VM emulates the serial console port over a TCP socket, so I can connect to it via Telnet on a specific port, even over the Internet.
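A memory map like this boils down to a simple address decoder on every load and store. Here is a sketch of that decoding in C, assuming "top" means the top of the 32-bit address space; the exact base constants are my inference from the sizes above, not published values.

```c
#include <stdint.h>

/* Classify an address against the memory map described above:
   the last 1 MB of the 4 GB address space is Hardware Space,
   and its last 64 KB is the Hardware I/O Space. The base
   addresses here are assumptions derived from those sizes. */

#define HW_SPACE_BASE 0xFFF00000u  /* 4 GB - 1 MB  */
#define HW_IO_BASE    0xFFFF0000u  /* 4 GB - 64 KB */

typedef enum { REGION_RAM, REGION_HW, REGION_HW_IO, REGION_UNMAPPED } Region;

Region classify(uint32_t addr, uint32_t ram_bytes)
{
    if (addr >= HW_IO_BASE)    return REGION_HW_IO;   /* device registers */
    if (addr >= HW_SPACE_BASE) return REGION_HW;      /* hardware space   */
    if (addr < ram_bytes)      return REGION_RAM;     /* configurable RAM */
    return REGION_UNMAPPED;                           /* hole in between  */
}
```

With RAM capped at 4095 MB, RAM can grow right up to the Hardware Space without ever overlapping it.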
What is it good for?
- Primarily it's for me. Making my own CPU and hardware devices, albeit virtually, is a great challenge and a lot of fun for an engineer. I enjoy it immensely; I've barely played any video games since I started this project.
- Writing an assembler for it was, and still is, a really enjoyable pastime.
- I can write my own ROM for it that already starts the computer and allows for terminal communication.
- I can write my own OS for it, finally.
- I can write interpreters or compilers for various languages like Basic and Python, and I can even write a small C compiler, because compilers are fun to make. Making them generate x86 or ARM code is hard, and realistically nobody else would use them, so I might as well generate code for my own CPU.
- I can write a C# -> IL -> Ariel Assembly compiler so I can write complex applications for it in Visual Studio, if I choose to do so.
- Later I can add a video device to it and even audio.
- Others who would like to dabble in writing bare metal assembly, ROM code, or even a kernel could use it and enjoy working on it, without the endless frustration of dealing with real modern hardware.
- I plan to put it on GitHub when it's a bit more mature, so others could contribute, or even modify it for their own needs if they want to.
- Somebody could just make a ROM that starts up a Basic interpreter, like in the Commodore 64. I might be that somebody, some day.
If you're interested in this project, would like to try it later, or perhaps even write some code for Ariel, let me know what you think on Twitter. I would appreciate it. I'll follow up on how the project is going when there's more to share.
It's been a while since I posted about the Ariel Virtual Computer project. A lot has happened to it since then, and also not so much. Unfortunately, I've fallen into the trap that keeps many projects from ever getting from the prototyping stage (if there was one) to completion: scope creep. That's the technical term for piling more and more features onto a project, compared to the modest, down-to-earth initial design plan, until things get out of hand. I'm not completely lost, though, but it's time for me to take a step back and make some level-headed decisions about what's next.
The virtual machine (or emulator, same thing) that runs behind the scenes was more or less done back then; I've just fiddled around with it to cut out some unnecessary capabilities. This part of the project is surprisingly solid. If you'd like to look at some of the CPU details, I created a PDF file that I use as a cheat sheet while writing assembly code. Now it just needs single-precision floating point instructions, and I'll draw the line there: no support for double-precision. It's not the easiest decision though, the struggle is real.
Things started going south when I decided to create my own solution for a text mode display. Initially the emulator was written as a console application, and I could connect to it using a terminal emulator, a telnet client, and so on, to see its output. But this felt too limiting, and what modern computer doesn't have some kind of display anyway...
Since the CPU handler itself lives in a separate library, I started working on a new front-end. But what should it be? I chose Windows Forms on .NET 6.0, because why not try to get on with the times, right? Or should it be WPF? Or UWP? It doesn't help that pretty much every Windows desktop development environment you can think of is in the following state: Is it dead? Yes. Well, no, not yet, but... There is a new thing on the horizon called Project Reunion. I mean WinUI. I mean Windows App SDK. They have probably renamed it since I started writing this paragraph. In that world, a Hello World takes from 30 to 200 megabytes and uses either a handful or a hundred individual files, depending on the build settings you chose, and then it tends not to work at all. Thumbs up, guys.
I made a bare minimum application that can display text in a RichTextBox, but it had weird screen refresh problems and, worse, I realized it simply can't use an image as its background, nor can it have a transparent background. For that you need WPF or UWP. Fine, I'm not going to be scared away by this; I want that background image so much that I wrote my own bitmap-based character matrix rendering control. It's beautiful and works really nicely... unless the control is too big, like when it's full screen on a 4K display. Then the update becomes laggy. The problem isn't with the bitmap creation or the font rendering itself, there's no excessive CPU usage; it just gets mangled somewhere in the bowels of Windows, outside my control.
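The per-cell work in a character matrix renderer like that is cheap, which is why the slowness has to be in the compositing, not the drawing. As a sketch of that inner loop, here's one 8x8 glyph expanded into a 32-bit pixel buffer; the control itself is C#, and this glyph format (one byte per row, most significant bit on the left) is a common convention I'm assuming, not necessarily the one I used.

```c
#include <stdint.h>

/* Expand one 8x8 1-bit glyph into a 32-bit pixel buffer.
   Glyph format assumed: one byte per row, MSB = leftmost pixel. */
void draw_glyph(uint32_t *pixels, int stride,   /* buffer width in pixels */
                int x, int y,                   /* top-left of the cell   */
                const uint8_t glyph[8],
                uint32_t fg, uint32_t bg)
{
    for (int row = 0; row < 8; row++) {
        uint8_t bits = glyph[row];
        for (int col = 0; col < 8; col++)
            pixels[(y + row) * stride + (x + col)] =
                (bits & (0x80u >> col)) ? fg : bg;   /* set or background */
    }
}
```

Even a 120x60 matrix is only a few hundred thousand pixel writes per frame, well within budget on any modern machine.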
I tried switching between .NET 6 and .NET 4.8, thinking the issue was with the new framework, but they are equally bad. I optimized the heck out of it; it's much better than it would be by default, but still not good enough. To add insult to injury, if I compile the application in x86 mode then it's fine in most cases, but in x86-64 mode the screen update gets even worse. That was a great time to figure that out, given that debugging an application in Visual Studio is still best done in x86 mode, so the problem only ever showed up in the Release version. I also had trouble setting up a steady timer that can handle a 60 FPS update for generating Vertical Blank interrupts and dealing with the screen refresh, but I solved that issue.
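One generic way to get a drift-free 60 Hz tick, regardless of how jittery the underlying timer callbacks are, is to accumulate elapsed real time and fire as many vblank ticks as fit, carrying the remainder forward. This is a standard fixed-timestep pattern, not necessarily the exact fix I used in the emulator:

```c
#include <stdint.h>

/* ~16.67 ms per frame at 60 Hz (integer nanoseconds) */
#define VBLANK_PERIOD_NS (1000000000ull / 60)

typedef struct { uint64_t acc_ns; } VBlankClock;

/* Given `elapsed_ns` of real time since the last call, return how
   many Vertical Blank interrupts should be raised. The remainder
   stays in the accumulator, so no time is ever lost to rounding. */
uint32_t vblank_ticks(VBlankClock *c, uint64_t elapsed_ns)
{
    c->acc_ns += elapsed_ns;
    uint32_t ticks = (uint32_t)(c->acc_ns / VBLANK_PERIOD_NS);
    c->acc_ns %= VBLANK_PERIOD_NS;   /* carry the leftover forward */
    return ticks;
}
```

The caller samples a monotonic clock on every timer callback; even if a callback arrives late, the missed vblanks are delivered on the next call instead of silently dropped.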
Right now I feel like I wasted a lot of time on this whole text display solution. I should probably just make it work with a RichTextBox control and live with a single background color.
It also doesn't help that I can't decide on a fixed character matrix size. I have various old computers and displays with different resolutions that I'd potentially like to run it on. Will it be my old Apple Cinema Display with a mechanical keyboard? Or this Dell 27" 4K display when I replace it? Or my old 13" Surface Book? Setting the matrix width to 120 and the height to something more flexible is a good solution; it's all configurable in the settings, and programs can read the matrix size from hardware registers. Of course, it would be so much better to have a fixed size; it would work just fine in windowed mode, but I'd like it to look pretty in full screen mode, where I can pretend that my hardware is running my own operating system.
Oh, about the OS... I decided on an MS-DOS-style OS with a Linux Bash-like shell and a flexible display size, but no multitasking. It will just load the selected application and run it; the app can call OS Kernel functions to handle the screen, input, storage and other OS-level things without touching the hardware registers, and when it's done, it gives control back to the shell. I've made the system load a ROM image and start it up, and the ROM loads the OS Kernel and executes it. You can type in the shell, so far so good. It doesn't load and execute programs yet, but I'm working on the Kernel functions that programs can call.
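The usual shape for this kind of kernel interface is a dispatch table: the program requests a service by number, and the kernel routes it to a handler. The function numbers, names, and string argument below are invented for illustration; the real Ariel kernel ABI would more likely pass arguments in CPU registers.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical kernel function numbers, for illustration only. */
#define KFN_PUTS 0
#define KFN_EXIT 1

static char screen[80];    /* stand-in for the text display         */
static int  running = 1;   /* cleared when the program hands back
                              control to the shell                  */

static int32_t kfn_puts(const char *s)
{
    strncpy(screen, s, sizeof screen - 1);
    return 0;
}

static int32_t kfn_exit(const char *unused)
{
    (void)unused;
    running = 0;           /* shell regains control here */
    return 0;
}

typedef int32_t (*KernelFn)(const char *);
static const KernelFn kernel_table[] = { kfn_puts, kfn_exit };

/* Entry point the running program traps into. */
int32_t kernel_call(uint32_t fn, const char *arg)
{
    if (fn >= sizeof kernel_table / sizeof kernel_table[0])
        return -1;         /* unknown function: error, don't crash */
    return kernel_table[fn](arg);
}
```

The nice property of this design is that applications never touch hardware registers directly, so the hardware side can keep changing underneath without breaking programs.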
Should I add a graphics mode? It would be nice, but then I have the same problem as with the character matrix size: what resolution would it use? 1920 x 1080 would be ideal for showing it off on YouTube, but what about all the other weird resolutions and aspect ratios my displays use? For the time being I won't add any graphics mode, and I'll see much later whether I need it. And what about sound? Of course I would love to write scene demos for it, but is it worth the hassle? Not really...
Mostly I want this to be a playground for making compilers and other nerdy tools that only need a text display. At least that's a good, solid principle that can guide my decision making about the system itself.
Take my advice: make a reasonable plan for your project and stick to it. You can always add more features later.