The Nintendo Virtual Boy on a real screen
One day, like many other people, mister [Wuff] wondered why the Virtual Boy didn't have a stock TV output, and whether it was possible to add one. A bit of research revealed that someone had already worked on such a project and made a proof-of-concept video (here). A link to the project page can be found on projectvb.com, but at the time of writing, the page is blank.
A few days later, [Wuff] pointed me to a cheap Virtual Boy set on eBay from Japan. I bought it along with two games, and received it a month later.
While waiting for the package, I did some research with what I could get my paws on. First of all, I had to learn how to generate VGA signals with the only FPGA dev board I had, an Altera DE-1. Thanks to these pages, it only took me half a night. I also needed to find the best video standard to stick to.
The VB's resolution is 384x224. I also needed an FPGA board small enough to fit inside the VB's casing. The decision was easy:
First step: Make the screen sync to the signal (wake up, black screen), and make it detect the mode: 640x480@60Hz. Problems encountered: none, except lots of counting and timing checks on the scope.
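The 640x480@60Hz sync generation boils down to counting pixel clocks against a handful of constants. A quick sanity check of those numbers, using the usual VESA values (not taken from the project's actual HDL):

```python
# Rough check of the 640x480@60Hz VGA timing the sync generator produces.
# These are the standard values, assumed here, not read from the project.
PIXEL_CLOCK = 25_175_000  # Hz

# visible, front porch, sync pulse, back porch (pixels for H, lines for V)
H = (640, 16, 96, 48)
V = (480, 10, 2, 33)

h_total = sum(H)   # pixel clocks per scanline
v_total = sum(V)   # scanlines per frame
refresh = PIXEL_CLOCK / (h_total * v_total)

print(h_total, v_total, round(refresh, 2))  # 800 525 59.94
```

Lots of counting indeed: the FPGA just runs two counters against these totals and compares them to the porch boundaries to place the sync pulses.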
Second step: Center the 768x448 doubled-up VB frame in the VGA frame using counters and comparisons, and output a 0.7V signal on the red line while the frame is drawn. An internal flag signal is used to gate the video output to draw the black borders.
Third step: Make three 2-bit DACs with resistors and output an RGB XOR pattern (shown on the left), displayed at the VB's frame size. Problems encountered: used very cheap "5%" resistors and got unequal levels on each color component. Bought good 1% ones.
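One way to size such a 2-bit resistor DAC, assuming 3.3V FPGA outputs and the monitor's standard 75Ω termination (the actual resistor values used in the project aren't specified):

```python
# Sizing a 2-bit resistor DAC for one VGA color line: a sketch, assuming
# 3.3V FPGA output levels and the VGA-standard 75 ohm termination.
V_OUT = 3.3      # FPGA pin high level
V_FS = 0.7       # VGA full-scale video level
R_LOAD = 75.0    # monitor termination to ground

# Two resistors per line, MSB = R and LSB = 2R. With both bits high we
# want V_FS: V_FS = V_OUT * S / (S + 1/R_LOAD), where S = 1/R + 1/(2R).
S = V_FS / (R_LOAD * (V_OUT - V_FS))
R_MSB = 1.5 / S
R_LSB = 2 * R_MSB

def level(code):
    """Output voltage for a 2-bit code (pins drive 0V or 3.3V)."""
    b1, b0 = (code >> 1) & 1, code & 1
    return V_OUT * (b1 / R_MSB + b0 / R_LSB) / (S + 1 / R_LOAD)

print(round(R_MSB), round(R_LSB))              # 418 836
print([round(level(c), 3) for c in range(4)])  # [0.0, 0.233, 0.467, 0.7]
```

This also shows why the cheap resistors caused trouble: the output steps are only as evenly spaced as the MSB/LSB resistors stay at an exact 1:2 ratio, so 5% parts skew the levels visibly.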
Fourth step: Convert two captured VB frames from an emulator to 2BPP and write them to flash. Read them back and display them alternately on the red and blue VGA lines. The 3D separation is too pronounced to work with anaglyph glasses, so let's forget about that idea. Also, I totally forgot the VB's display scans the image vertically...
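The conversion to 2BPP is straightforward bit packing. A sketch of the idea (the pixel order and bit packing here are assumptions, not the project's exact flash layout):

```python
# Packing emulator screenshot data down to 2 bits per pixel,
# 4 pixels per byte. Bit order is an assumption for illustration.
def to_2bpp(pixels):
    """pixels: iterable of 0-255 grayscale values, length multiple of 4.
    Quantizes each pixel to 2 bits and packs 4 pixels per byte, with
    the first pixel in the two most significant bits."""
    out = bytearray()
    for i in range(0, len(pixels), 4):
        byte = 0
        for p in pixels[i:i + 4]:
            byte = (byte << 2) | (p >> 6)  # keep the top 2 bits
        out.append(byte)
    return bytes(out)

row = [0, 64, 128, 255]    # one sample per brightness step
print(to_2bpp(row).hex())  # '1b' = 0b00011011
```

A full 384x224 frame packs down to 21504 bytes this way, small enough to fit two of them in a modest flash chip.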
Fifth step, back when I thought the SDRAM on the DE-1 was fast enough to handle both reads and writes: realize that SDRAM isn't at all like SRAM, use the core generator to make the controller, use it to copy the flash contents over and read them back for the video output. Problems encountered: phase shifting and self-refresh crap. I don't want to remember this.
I stayed on that fifth step for the video output part; I continued after I received the VB...
Almost blind information gathering
As it's now known thanks to iFixit's VB teardown, the display isn't made of one big or two small LCD or OLED screens like today's movie or VR glasses, but of two of what they call Scanned LED Arrays (SLA).
The following information comes from multiple sources (patents, pictures).
At the time of its invention, a company named RTI (Reflection Technology, Inc.) had a leading role in designing miniature displays using this technology. Nintendo, who were in need of an inexpensive, light-emitting display with very short rise and fall times, succeeded in getting an exclusive license to use RTI's custom-made displays. They chose to use only the red color, as it was (and still is) the cheapest visible LED wavelength to produce. That choice also divided the die size by at least 3, further reducing the total cost.
The display's principle is pretty simple in theory: a tiny 224-LED strip (45μm spacing), resembling today's linear CCD arrays for scanners, is placed in front of a magnifying lens and an oscillating mirror which reflects the LEDs' light at a 90-degree angle to the viewer's eye. A special servo circuit makes the mirror oscillate at a steady 50Hz thanks to infrared barriers, and generates the appropriate signals for another circuit to send the image data for a given column to the LED array. The mirror's motion is fast enough for the human eye to see a full rectangular image. This method is similar to the one used in CRT TVs, but instead of a single dot moving in X and Y, a column of LEDs is only scanned in X.
The "LED UNIT" part is shown on the left. The transparent plastic piece over the array is just to protect it, the actual moving lens for focus adjustment is in the display assembly (and way bigger).
To reduce the number of connections needed to the graphics chips, RTI devised a solution using a shift register and a latch integrated into the LED array's die. This is described in one of the VB's patents.
The pixels are therefore serially pushed into the shift register with multiple data lines and a clock signal. To avoid the visual artifacts caused by the pixels moving through the register, a latch sits between it and the LEDs.
Before the LEDs there is also a luminance control circuit, which doesn't work with current control (too slow, too big) but with pulse width modulation, as indicated by the name of the input signals CLKA/B/C. The fact that there are 3 signals for brightness contradicts information from numerous websites stating that only 4 levels of brightness are possible within the same frame (2 bits). This is explained further in the patent.
This schematic explains the brightness control quite well: D0 and D1 are the pixel data (2 bits, as expected), the CLK* signals are used to gate the pixel data in different ways to the LED's transistor.
The output to the transistor according to the pixel data is as follows (the intermediate rows are filled back in from the patent's gating scheme and Teleroboxer's values):

| D1 | D0 | CLK* | Brightness | Teleroboxer max values |
|----|----|-------|------------|------------------------|
| 0 | 0 | none | 0% | 0µs |
| 0 | 1 | A | 25% | 1.6µs |
| 1 | 0 | B | 50% | 3.2µs |
| 1 | 1 | A+B+C | 100% | 1.6+3.2+1.6 = 6.4µs |
In short, there are 4 brightness levels indeed. But each level can be set in software.
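The mapping from 2-bit pixel data to LED on-time can be sketched like this, assuming the gating described in the patent and Teleroboxer's BRT* values (in microseconds):

```python
# LED on-time per 2-bit pixel value, per the CLK* gating scheme:
# 01 -> CLKA, 10 -> CLKB, 11 -> CLKA+CLKB+CLKC.
# BRT* values are Teleroboxer's, other games set their own.
BRTA, BRTB, BRTC = 1.6, 3.2, 1.6

def on_time(d1, d0):
    """Microseconds the LED stays lit during one column."""
    if (d1, d0) == (0, 0):
        return 0.0
    if (d1, d0) == (0, 1):
        return BRTA
    if (d1, d0) == (1, 0):
        return BRTB
    return BRTA + BRTB + BRTC

times = [on_time(d >> 1, d & 1) for d in range(4)]
print(times)  # [0.0, 1.6, 3.2, 6.4]
```

Note that since the BRT* registers are software-controlled, nothing forces the four on-times to stay evenly spaced, which is exactly why using the pixel data alone is only an approximation.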
Using only the pixel data for brightness on the FPGA might work, but it won't be very accurate, especially if some games use CLK* tricks to dim or fade the display.
To simulate the LEDs' brightness, it would be necessary to measure how long each dot is lit during the display of one column. For this, the output of OG needs to be sampled at regular, tight intervals to see if it gives a 1 (lit LED) or a 0 (turned off). That way, a "brightness counter" can be kept for each pixel state (D0,D1: 0~3), giving accurate brightness values which could then be mapped to the pixel data.
The VB's documentation states that the BRT* registers (corresponding to the CLK* signals) must not be changed during display, so it might not be necessary to run the brightness counter every column?
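The brightness counter idea above can be sketched as follows; the 100ns sampling period is an arbitrary choice for illustration, not a measured constraint:

```python
# Sketch of the "brightness counter": sample the LED drive signal at a
# fixed rate during one column and count how many samples each pixel
# value spends lit. SAMPLE_NS is an assumed sampling period.
SAMPLE_NS = 100

def count_samples(lit_us):
    """Number of samples during which a dot is seen lit, for a dot
    that stays on for lit_us microseconds."""
    return int(lit_us * 1000 / SAMPLE_NS)

# With Teleroboxer's CLK* timings, each pixel value (0..3) yields a
# distinct, proportional count that maps back to an output brightness:
counts = [count_samples(t) for t in (0.0, 1.6, 3.2, 6.4)]
print(counts)  # [0, 16, 32, 64]
```

The counts are proportional to the real on-times, so even CLK* fade tricks would be captured, at the cost of a fast sampling clock and four small accumulators.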
I chose to ignore all this for this first version and just use the pixel data.
Thanks to iFixit's gigantic die shot of the LED array, it was possible to guess part of the ribbon cable pinout (the numbering is offset by 2 on the picture since some signals were unrouted and I had no photo of the back). In red is the 5V common power rail for all the LEDs. Grounds are in blue. I was probably wrong about the orange trace, which could have been either the shift clock or the latch signal (routed to 2x2 halves).
Close inspection of the wire bonding shows that to increase the LED density, they chose to have an "even" and an "odd" die, each controlling every other LED.
Furthermore, a clear delimitation after each group of 4 LEDs can be seen, hinting that the pixel data is clocked in 4 pixels at a time. The pink rectangles mark the suspected data lines: 4 pixels * 2 bits/pixel, so 8 of them for each side. The remaining signals were the clock and latch ones, plus an interesting direct connection to either 5V or GND depending on the side, most likely to select the shift direction.
Hands-on action!
The first thing I did when I received the VB was to power it up, see if the warning screen showed up and... tear it down!
I first thought that the VPU had dedicated buses for each of the SLAs, but by inspecting the pixel clock, I noticed that the bursts were happening at 100Hz instead of 50. This meant that the pixel data bus was instead shared between both displays and a select signal was used to indicate which burst was for which one (further called CS for "Chip Select").
The brightness signals CLKA/B/C are also shared, along with the pixel clock and clear signals.
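Splitting the shared bus back into two per-display streams is then just a matter of routing each burst by CS. A sketch (the CS polarity here is an assumption):

```python
# Demultiplexing the shared pixel bus by the CS signal: bursts alternate
# between the two displays, CS says which one a burst belongs to.
# CS polarity (0 = left, 1 = right) is assumed for illustration.
def demux(bursts):
    """bursts: list of (cs, burst_data) tuples.
    Returns the left and right display streams."""
    left, right = [], []
    for cs, data in bursts:
        (left if cs == 0 else right).append(data)
    return left, right

l, r = demux([(0, 'L0'), (1, 'R0'), (0, 'L1'), (1, 'R1')])
print(l, r)  # ['L0', 'L1'] ['R0', 'R1']
```

For a 2D output, the FPGA only needs to keep the bursts of one display and ignore the other, which is what CS makes possible.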
The guessed pinout was pretty close to reality, except for 2 unused power signals and the order of the pixel data. Here's what I got on the scope:
The shift clock is very similar to a regular LCD clock signal with 2 levels of bursts for H and V. That's why I'm calling it DOTCLK.
Pixels are shifted in 8 at a time (16-bit data bus) on the rising edge of DOTCLK, in top-down order. To form a 224-pixel-high column, 28 pulses are needed. The data is then latched to the LEDs and the CLK* brightness signals are asserted (not shown).
These 5.6µs bursts are repeated 384 times to form a complete image. These image bursts happen every 10ms giving data to both displays at 50Hz.
Notice that contrary to what the patent specifies, the latch signal is actually a clear signal. What probably goes on is that after a clear, a counter is reset to zero. This counter is clocked by DOTCLK, and when it reaches 28, the data is automatically latched and sent to the brightness circuit.
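The numbers read off the scope all fit together, which can be checked quickly:

```python
# Sanity-checking the scope measurements: 16-bit bus = 8 pixels x 2 bits
# per DOTCLK edge, 28 pulses per column, 384 columns per image, one
# image burst every 10ms, alternating between the two displays.
PIXELS_PER_PULSE = 8      # 16 data lines / 2 bits per pixel
PULSES_PER_COLUMN = 28
COLUMNS = 384
BURST_US = 5.6            # duration of one column burst
IMAGE_PERIOD_MS = 10      # one image burst every 10ms

column_height = PIXELS_PER_PULSE * PULSES_PER_COLUMN  # LEDs per column
dotclk_mhz = PULSES_PER_COLUMN / BURST_US             # shift clock rate
active_ms = COLUMNS * BURST_US / 1000                 # data time per image
per_display_hz = 1000 / (2 * IMAGE_PERIOD_MS)         # displays alternate

print(column_height)        # 224, matching the VB's vertical resolution
print(round(dotclk_mhz))    # 5 (MHz)
print(round(active_ms, 2))  # 2.15 (ms of actual pixel data per image)
print(per_display_hz)       # 50.0 (Hz refresh for each display)
```

So each display only receives data for about a fifth of its 20ms frame period; the rest of the time the bus idles or serves the other display.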
Ribbon cable pinout, numbered from the main board connector:
| Group | Signals |
|-------|---------|
| Pixel data | P01, P20, P21, P40, P41, P60, P61 |
| Power | LEDs 5V, Logic 5V |
| Shift reg. | Shift clock, Clear |
| Brightness control | CLKA, CLKB, CLKC |
| Ground | GND, GND |
| Pixel data | P11, P10, P31, P30, P51, P50, P71, P70 |
| Power | 1.8V |
For one complete group of 8 pixels, as the pixels are interleaved, the pin map is: 5-4-28-29-7-6-26-27-9-8-24-25-11-10-22-23.
Not sure why the 2V and 1.8V reference or power lines were routed to the connector (they're not used on the LED array boards, which run on 5V).
Now for the rest of the video output steps:
Sixth step: Realize after a few hours that the 133MHz SDRAM is simply not fast enough to handle writes at 4.3MHz and prioritized reads at 20MHz with totally different addresses. Took mood stabilizers, scrapped everything SDRAM-related and hoped that the internal RAM blocks on the 2C20 were large enough. 384*224*2 bits per pixel = 172032 bits are needed for a framebuffer, and the FPGA has a max of 239616. Great!
Generated a simple dual-port (one write port, one read port) 10752*16-bit-word RAM block and everything worked perfectly, with the flash being copied at 5MHz and the data read back at 20MHz.
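The framebuffer arithmetic from the two steps above checks out:

```python
# Checking the block RAM budget for the framebuffer:
# 384x224 pixels at 2 bits each, stored as 16-bit words.
WIDTH, HEIGHT, BPP = 384, 224, 2
WORD_BITS = 16

total_bits = WIDTH * HEIGHT * BPP   # bits needed for one frame
words = total_bits // WORD_BITS     # depth of the generated RAM block
fpga_max_bits = 239616              # Cyclone II EP2C20 block RAM total

print(total_bits, words, total_bits <= fpga_max_bits)  # 172032 10752 True
```

One frame fits with room to spare, but note there's no space for a second full framebuffer, which matters again below for the anaglyph idea.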
Seventh step: Find a suitable board with the needed capacities that would fit inside the VB...
[Grapsus] reminded me of the Xula boards from XESS. Xilinx instead of Altera, but what could go wrong when switching from the second to the first market share leader? Aside from downloading a 4GB-fatter IDE.
XESS offers two breadboard-sized FPGA boards: the Xula and the Xula2. I hoped the first one had enough block RAM, since the Xula2 has features I didn't need at all and costs twice as much. I knew the number of gates wasn't going to be an issue, since the logic is very simple.
XC3S200: 200,000 gates, 27648 bytes of block RAM with 21504 needed, excellent! Luckily the anaglyph mode was a bad idea anyway: it would have required two separate framebuffers, so twice the amount of RAM, which wouldn't have been possible with this board either without heavy tearing in the output.
"Ported" the Verilog code from the DE-1 to VHDL for the Xula (I just wanted to learn) without any difficulties. The board's documentation and examples are very helpful, and Xilinx's IDE is fine, just slow as a braindead turtle.
The most frustrating part of the project was soldering the wires to the ribbon cable connector. As I'm not equipped with the most adequate tools for fine-pitch soldering, getting rid of bridges or bad connections was a real pain, as fixing one wire made the adjacent ones come loose.
When it was finally done and tested, I bent the wires straight over the ribbon cable and poured a large amount of hot glue over the connector's black plastic piece (not over the solder points!).
Then I had to make voltage dividers for level shifting (5V to 3.3V) on each line, as the FPGA isn't 5V-tolerant. That's all 16 pixel data bits, plus CS, DOTCLK and CLEAR, the last of which I didn't end up using.
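Sizing those dividers is simple ratio math; here's a sketch with example E-series values (the actual resistors used aren't documented):

```python
# 5V -> 3.3V resistor divider for each data line. The 1k/2k pair is an
# example choice, not necessarily what was soldered in the VB.
V_IN = 5.0

def divider(r_top, r_bottom):
    """Output voltage of a resistive divider driven with V_IN."""
    return V_IN * r_bottom / (r_top + r_bottom)

print(round(divider(1000, 2000), 2))  # 3.33
```

One thing to keep in mind with this approach: the divider's source resistance plus the FPGA pin capacitance forms an RC low-pass, so with a 5MHz DOTCLK the resistor values shouldn't be chosen too large.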
When everything ended up working fine, I made a dirty resistor board and added the possibility to change the hue with a momentary switch, as it just required 4 more resistors for the green and blue lines.
I also wired the Xula board to the 9V line from the joypad, since I didn't know how much extra load the internal step-down converter could take. Checked the current consumption while I was at it, with sound at average volume:
180mA originally, 220mA modded, which is about 20% more. Anyway, this mod is meant to make the thing playable on a TV, so there's no point in using batteries...
The second worst part was cramming everything inside the VB's casing. While I had the thing opened, I thought there was plenty of room.
But considering the size of the level-shifting board, the speakers and the screw stands, I really had no choice but to take advantage of the long wires from the ribbon cable connector to go all the way around and put both boards between the display and the joypad connector board. Luckily everything fit and no wires came loose.
I did get cold sweats when I powered the VB up and saw that some lines were missing in the display. It was caused by a short from the Xula's pins touching the metal frame. Put some thick tape over it and it was fixed. Fwweee! I didn't have to go back to those tiny solder points!
I melted holes for the VGA connector and the color switch right next to the original connectors, to have all the wires on the same side. Looks kinda cool if you don't look too closely.
Everything's set back up again and... IT WORKS!
Anaglyph does NOT work: