A Place of Ideas

4D Perspective Rendering — A Retrospective

The idea of higher dimensions is naturally intriguing: reaching beyond the three dimensions our universe provides to explore a world that could exist, but doesn't.

For really high dimensions, that kind of abstract exploration is the only option. But down in four dimensions, we want more. We want to see it. Experience it. Hence, a video game.

Rendering in 4D

But what does that look like? How do we take a four-dimensional world, and put it on a two-dimensional screen?

The most common approach is to slice the four-dimensional world, and render only a three-dimensional piece of it. This is the approach taken by the games Miegakure and 4D Golf.

This works well. I can't speak for Miegakure, but 4D Golf is surprisingly intuitive.

But this approach also comes with downsides. Most importantly, you can only see a small section of the 4D world at once. Essentially, you have no peripheral vision. This limits your ability to fully understand the space you're in.

So let's consider an alternative.

Perspective Rendering

In an ordinary, 3D game, the three-dimensional world is rendered onto a two-dimensional screen. But the math involved generalizes perfectly. So a four-dimensional world can be rendered onto a three-dimensional screen.
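
To see how directly the math generalizes, here is a minimal sketch of the perspective divide (the function name and conventions are mine, purely for illustration). The formula for projecting 3D to 2D is the same formula that projects 4D to 3D; only the number of coordinates changes.

```python
import numpy as np

def perspective_project(points, focal=1.0):
    """Project N-dimensional points down to (N-1) dimensions by
    dividing by depth along the last axis. The same divide used
    for 3D -> 2D rendering works unchanged for 4D -> 3D.

    points: array of shape (num_points, N); the last coordinate
    is the depth in front of the camera (assumed positive).
    """
    points = np.asarray(points, dtype=float)
    depth = points[:, -1:]  # distance along the view axis
    return focal * points[:, :-1] / depth

# 3D -> 2D: a point 2 units away at (1, 1) lands at (0.5, 0.5)
print(perspective_project([[1.0, 1.0, 2.0]]))

# 4D -> 3D: identical formula, one more coordinate
print(perspective_project([[1.0, 1.0, 1.0, 2.0]]))
```

Points twice as deep land twice as close to the center, in any dimension; that is all "perspective" means here.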

This approach is commonly seen when you look at pictures of mathematical objects like tesseracts. But it's also the approach taken by this maze game.

The game certainly takes some getting used to. But once I got used to it, seeing four mutually perpendicular passageways at the same time was a real revelation.

But this game has problems. In particular, I really don't like the "line art" drawing style, where only the edges of shapes are drawn.

The Goal

My goal was to make a four-dimensional game, using perspective rendering, that does not render everything as line art.

After many tries, over a number of years, I have declared this project a failure. Let's see what went wrong.

Design Difficulties

Rendering proceeds in two steps: projecting from 4D to 3D, then from 3D to 2D. Interestingly, it's the second step that causes problems.

Think about the result of rendering an ordinary 3D scene. The screen ends up divided into a number of colored or textured regions. If you're looking down a hallway, the rendered image contains a square depicting the far end of the hallway, surrounded by four trapezoids depicting the walls.

So if you render a 4D scene, you end up with a 3D screen, divided into colored or textured sections. Such a thing isn't exactly easy to draw.

First of all, the user needs to be able to see the entire 3D screen. So when we project from 3D to 2D, nothing can be fully opaque. Everything must let light through, either via partial transparency or via holes to look through.
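
To see why full opacity is fatal, consider standard back-to-front "over" compositing (a toy grayscale version; the helper is hypothetical, not from any particular renderer). An opaque front layer erases everything behind it, which on a 3D screen means erasing most of the scene.

```python
def composite(layers):
    """Composite (color, alpha) layers back-to-front with the
    standard 'over' operator. Colors are grayscale floats for
    simplicity; alpha = 1.0 means fully opaque.
    """
    result = 0.0
    for color, alpha in layers:  # ordered back to front
        result = alpha * color + (1.0 - alpha) * result
    return result

# An opaque front layer hides the back layer entirely:
print(composite([(1.0, 1.0), (0.2, 1.0)]))  # -> 0.2

# With partial transparency, the back layer still shows through:
print(composite([(1.0, 1.0), (0.2, 0.5)]))  # -> 0.6
```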

To make things worse, the colored regions are volumes in the 3D screen, so we need a fog effect. Rendering the fog isn't really the problem; the problem is interpreting the layers of fog once rendered.
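
The fog effect itself is simple enough to sketch: accumulate color along each viewing ray through the colored volumes using Beer-Lambert absorption. This is a generic toy version (names and the grayscale simplification are mine), not the renderer from any of the games above.

```python
import math

def fog_along_ray(segments):
    """Accumulate fog color along a ray passing through homogeneous
    colored volumes, nearest segment first. Each segment is a
    (color, density, length) triple. Returns the blended color
    reaching the eye and the remaining transmittance.
    """
    color = 0.0
    transmittance = 1.0
    for seg_color, density, length in segments:
        # Beer-Lambert: fraction of light absorbed in this segment
        absorbed = 1.0 - math.exp(-density * length)
        color += transmittance * absorbed * seg_color
        transmittance *= 1.0 - absorbed
    return color, transmittance

# One unit of white fog at density 1 contributes about 63% of its
# color; the rest of the ray shines through the remaining 37%.
print(fog_along_ray([(1.0, 1.0, 1.0)]))
```

The hard part, as noted above, isn't computing this; it's that a human looking at several overlapping layers of tinted fog struggles to tell which volume contributed what.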

Alternatively, we could render only the bounding faces between the volumes. But then it's not clear how to color them. I tried just taking the color of the region behind, but that didn't work well. I suppose we could give up on color entirely, but that doesn't seem great either.

Implementation Difficulties

TODO

Partial Solution

TODO