Game Dev and the Rockstar Illusion

June 23, 2025

 


At some point, many aspiring game developers ask the same question:

"Should I pursue game development as a career?"

It’s a fair question — and one I’ve heard dozens of times over the years. My usual answer goes something like this:

 Game development, for many of us, is the modern-day rockstar path. The huge potential upsides suspend people’s disbelief. Sheer optimism draws people into making life-changing career decisions on flimsy grounds.

And I still stand by that.


The Seductive Myth

There’s a dream attached to game development that’s hard to shake.

Make a hit indie game. Build a loyal community. Quit your day job. Maybe even go viral and rake in millions. We've all seen it happen. Stardew Valley. Undertale. Minecraft. Those stories are real — and they’re incredibly inspiring.

But here’s the thing: they’re not the rule. They’re the outliers. The exceptions. The lottery wins.


The Reality Check

Making software is difficult.

Making successful software? Even harder.

Now try making a successful game in one of the most oversaturated creative markets on the planet.

It’s not just about writing code or drawing sprites. It’s game design, storytelling, marketing, community building, testing, patching, supporting — usually with limited time, resources, or income. Even with passion and dedication, a great game can vanish in the noise of the marketplace.

This isn’t meant to scare you off — but it is meant to snap the illusion. Because game development isn’t a shortcut to fame or fortune. It’s work. Deep, complex, and often unpredictable work.


Why Do It Then?

Because you love it. Because it fascinates you. Because making something interactive — something playable — is uniquely satisfying.

For many of us, that’s reason enough. But the key is understanding that passion alone isn’t a business model. The most sustainable developers I know treat game dev like a long game. They build skills slowly. They wear many hats. They take breaks. They fail, adapt, and keep going.


Career vs. Calling

You can make a career in game development — but go in with your eyes open.

Ask yourself:

  • Do I enjoy the process, not just the outcome?
  • Am I okay with uncertainty and iteration?
  • Can I build skills that work outside games too?
  • Am I doing this because I love it — or because I want to “make it”?

If you’re honest with yourself about those answers, you’ll save a lot of time and heartache.


Final Thought

Chasing the dream isn’t wrong — just don’t buy into the fantasy wholesale.

Game development is an incredible field, but it’s not a guaranteed golden ticket.

Build your foundation. Grow your skills. Be curious, be resilient — and enjoy the ride.



    From Polygons to Photorealism: The Evolution—and Plateau—of 3D Game Graphics

    May 12, 2025

     


    The Era of Clever Constraints

    In the early days of 3D gaming, hardware limitations forced developers to be clever. The visuals that defined an era—whether low-poly models, billboarding sprites, or pre-rendered backdrops—weren't just stylistic choices, but practical necessities. Every polygon had a cost. Every lighting trick was a compromise. But out of those limitations grew a golden age of innovation.

    Turning Limitations Into Innovation

    Take a look at games like Quake, Tomb Raider, or Final Fantasy VII. Each used the best of what the hardware of the time could handle, and did so in strikingly different ways. Quake leaned into full 3D environments with software-based lighting and gritty realism. Tomb Raider featured angular characters and blocky worlds that became iconic not despite their limitations, but because of them. Final Fantasy VII sidestepped real-time rendering entirely for much of its world, instead presenting lush pre-rendered scenes and letting players move 3D models across them. These games didn’t just work around limitations—they turned them into defining characteristics.

    Leaps in Generational Power

    With each new generation of graphics cards and consoles, we saw significant leaps. The move from software to hardware acceleration, the introduction of hardware T&L (Transform and Lighting), then programmable shaders—each of these brought clear visual benefits. By the early 2000s, games like Half-Life 2 and Doom 3 pushed lighting, animation, and physics into new territory. Visuals didn't just get better—they evolved.

    The Plateau of Realism

    But something changed in the last decade. As we neared photorealism, the rate of visual evolution began to slow. High-fidelity rendering techniques—global illumination, subsurface scattering, ray tracing—deliver spectacular results, but they come with steep performance costs. The problem is, these improvements aren't always obvious to the average player. When a scene already looks real, doubling the polygon count or pushing texture resolution to 8K yields diminishing returns. What once felt like huge jumps between generations now feels more like refinements.

    Style Over Specs

    Games like The Last of Us Part II, Red Dead Redemption 2, and Cyberpunk 2077 have reached a level where stylization and direction matter more than sheer rendering muscle. Once you're able to render a believable scene, it becomes less about adding detail and more about how you use the tools. It’s a creative turning point: technology is no longer the main limiting factor. Imagination is.

    A Full-Circle Moment

    Interestingly, the industry is coming back around to where it started. We're seeing renewed interest in stylized graphics, from indie darlings like Hades and Tunic to AAA experiments like Hi-Fi Rush. Developers are embracing non-photorealistic styles not because they have to, but because they can. Limitations no longer dictate style—they inform it. And in doing so, we're seeing a broader range of visual expression than ever before.

    Looking Ahead

    As 3D hardware continues to improve, we may well see new breakthroughs. But the era of obvious, generational leaps in visuals is behind us. The future lies not in chasing realism, but in harnessing the freedom to create something uniquely beautiful, meaningful, and memorable.


    🎮 Join the Discussion

    What do you think—have we reached the peak of 3D visual evolution, or is there still another revolution waiting to happen?

    Share your thoughts in the comments below or connect with me on X.com

    If you enjoyed this article, consider subscribing or checking out some of my other posts on the evolution of game development and technology.



    Breaking the 3D Barrier: A Technical Journey Through Early Home Computer Graphics

    April 20, 2025

     


    When people talk about the first true 3D video games, titles like Quake or Super Mario 64 often take the spotlight. These games marked a significant leap forward with fully texture-mapped environments, perspective-correct projection, and dynamic camera control. But this view tends to overshadow a rich decade of 3D experimentation on much less capable machines. To truly appreciate the origins of 3D gaming, we need to roll back to the 8-bit and 16-bit eras, where pushing polygons was a brutal, low-level grind, limited by memory, math, and raw CPU cycles.

    The 8-bit Foundations

    Writing a 3D game on an 8-bit system like the Commodore 64 or ZX Spectrum was a challenge of epic proportions. These systems had CPUs like the 6510 or Z80, clocked at just a few megahertz. RAM was sparse—64K or less—and there were no floating-point units or hardware multipliers. Everything had to be done in software, often with 8-bit math stretched to fake 16- or 32-bit results.

    Take an 8-bit multiplication with a 16-bit result on a 6510:

    ; Multiply two 8-bit values -> 16-bit result (shift-and-add)
    ; Inputs:  mcandLo = multiplicand, multiplier = multiplier
    ; Output:  resultHi:resultLo  (mcandLo/mcandHi are destroyed)
            lda #0
            sta resultLo        ; clear the 16-bit result
            sta resultHi
            sta mcandHi         ; high byte of the shifted multiplicand
            ldx #8              ; 8 multiplier bits to process
    loop:
            lsr multiplier      ; shift next multiplier bit into carry
            bcc skipAdd         ; bit clear, nothing to add this round
            clc
            lda resultLo        ; result += shifted multiplicand (16-bit add)
            adc mcandLo
            sta resultLo
            lda resultHi
            adc mcandHi
            sta resultHi
    skipAdd:
            asl mcandLo         ; multiplicand <<= 1 (16-bit shift)
            rol mcandHi
            dex
            bne loop
    

    This was the sort of routine you'd write just to multiply two numbers. Think about trying to do vector math, matrix transformations, or projection in real time with this.

    Yet somehow, developers managed. Games like Elite delivered wireframe 3D on systems with no right to be doing 3D at all. How? Precomputed tables, integer-only math, and incredibly tight assembly code.
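
    To make the lookup-table idea concrete, here's a small C sketch (the formats and names are my own, not drawn from any particular game): a 256-entry sine table in 8.8 fixed point, built once with floating-point math and then used at runtime with nothing but integer lookups.

    #include <math.h>
    #include <stdint.h>
    
    /* 8.8 fixed point: the low 8 bits are the fractional part. */
    #define FIX_ONE 256
    
    /* 256 angle steps per full turn, so an angle fits in one byte. */
    int16_t sine_table[256];
    
    /* Built offline (or at startup); after this, no floating point
       is needed at runtime.                                        */
    static void build_sine_table(void)
    {
        const double pi = 3.14159265358979323846;
        for (int i = 0; i < 256; i++)
            sine_table[i] = (int16_t)(sin(i * 2.0 * pi / 256.0) * FIX_ONE);
    }
    
    /* cos(a) is just sin(a + a quarter turn), i.e. sin(a + 64). */
    static int16_t fix_sin(uint8_t angle) { return sine_table[angle]; }
    static int16_t fix_cos(uint8_t angle) { return sine_table[(uint8_t)(angle + 64)]; }

    On the real machines a table like this would simply be baked into the program's data, so a sine costs one indexed load.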

    The 16-bit Leap

    The 16-bit era (Amiga, Atari ST, early PCs) brought huge improvements: faster CPUs (Motorola 68000, x86), more RAM, and display hardware with enough colors and resolution to make real-time rendering viable.

    But even then, rendering filled polygons was tough. A scene with more than a handful of polygons meant serious compromises. Occlusion was mostly just backface culling and manual z-sorting. Overdraw was rampant. Color palettes were limited.

    The 68000, for example, was a joy compared to 8-bit CPUs—32-bit registers, hardware multiply instructions, and a stack no longer confined to a single 256-byte page. But it still had no floating-point unit, so most games still ran on fixed-point math.
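
    As a rough sketch of what fixed-point math looks like in C (the 16.16 layout here is just one common convention, not tied to any particular engine):

    #include <stdint.h>
    
    /* 16.16 fixed point: 16 integer bits, 16 fractional bits. */
    typedef int32_t fix16;
    #define FIX16_ONE (1 << 16)
    
    static fix16 fix16_from_int(int n) { return (fix16)n * FIX16_ONE; }
    static int   fix16_to_int(fix16 f) { return (int)(f >> 16); }
    
    /* The product of two 16.16 numbers carries 32 fractional bits,
       so widen, multiply, and shift back down by 16.               */
    static fix16 fix16_mul(fix16 a, fix16 b)
    {
        return (fix16)(((int64_t)a * (int64_t)b) >> 16);
    }

    Period code had no 64-bit type to lean on, which is why engines kept their operands narrow: multiplying, say, an 8.8 sine by a 16-bit coordinate lets a single 16x16-to-32 hardware multiply do the job.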

    3D Rotations

    Rotating a point in 3D requires matrix multiplication. In floating point, it’s simple. On an 8- or 16-bit system? You’re manually calculating each axis:

    // Rotate point (x, y, z) around Y axis
    cosA = cos(angle)
    sinA = sin(angle)
    
    x' = x * cosA + z * sinA
    z' = -x * sinA + z * cosA
    y' = y
    

    On a machine with no hardware multiply or sin/cos functions, these had to be:

  • Converted to fixed-point (e.g., 8.8 or 16.16)
  • Stored in lookup tables
  • Operated on using hand-written multiply routines

    A single rotation could take dozens or hundreds of cycles, and you needed one per vertex.
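
    Put together, the rotation above ends up looking something like this C sketch (the 8.8 table format and 256-step angle are assumptions carried over from the earlier table example):

    #include <stdint.h>
    
    /* 8.8 fixed-point sine table, 256 steps per turn, as built earlier. */
    extern int16_t sine_table[256];
    
    typedef struct { int16_t x, y, z; } Vec3;   /* plain integer coordinates */
    
    /* Rotate p around the Y axis by angle (0..255 = one full turn):
         x' =  x*cos + z*sin
         z' = -x*sin + z*cos
         y' =  y                                                      */
    static Vec3 rotate_y(Vec3 p, uint8_t angle)
    {
        int32_t s = sine_table[angle];
        int32_t c = sine_table[(uint8_t)(angle + 64)];   /* cos = sin(a + 90 deg) */
        Vec3 out;
    
        /* 16x16 -> 32-bit products, then >> 8 to drop the 8.8 scale. */
        out.x = (int16_t)((p.x * c + p.z * s) >> 8);
        out.z = (int16_t)((p.z * c - p.x * s) >> 8);
        out.y = p.y;
        return out;
    }

    Even in this form that is four multiplies per vertex, and on a CPU with no multiply instruction each one is a routine like the 6510 loop shown earlier.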

    Rasterization

    Once vertices were transformed into screen space, you had to fill triangles. On older machines, scanline rasterization was the norm. But drawing a flat-shaded triangle means:

  • Sorting the vertices
  • Interpolating edges
  • Drawing each horizontal scanline pixel by pixel

    for y from top to bottom:
        leftX = interpolateEdge1(y)
        rightX = interpolateEdge2(y)
        drawHorizontalLine(leftX, rightX, y)

    Without hardware blitting or even fast memory access, this could chew up frame time fast. Fixed-point interpolation was the only way to keep things moving at all.
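
    Here's a condensed C sketch of that whole process (sort, interpolate, fill) using 16.16 fixed-point edge stepping. The screen size, framebuffer layout, and function names are assumptions for illustration, not a reconstruction of any specific engine.

    #include <stdint.h>
    
    #define SCREEN_W 320
    #define SCREEN_H 200
    
    static uint8_t framebuffer[SCREEN_H][SCREEN_W];   /* 256-color frame buffer */
    
    typedef struct { int x, y; } Point;
    
    static void swap_pt(Point *a, Point *b) { Point t = *a; *a = *b; *b = t; }
    
    /* Draw one clipped horizontal span. */
    static void hline(int x0, int x1, int y, uint8_t color)
    {
        if (y < 0 || y >= SCREEN_H) return;
        if (x0 > x1) { int t = x0; x0 = x1; x1 = t; }
        if (x0 < 0) x0 = 0;
        if (x1 >= SCREEN_W) x1 = SCREEN_W - 1;
        for (int x = x0; x <= x1; x++)
            framebuffer[y][x] = color;
    }
    
    /* Flat-shaded triangle: sort the vertices by y, then walk the long
       edge (a->c) and the two short edges in 16.16 fixed point, filling
       one horizontal span per scanline.                                */
    static void fill_triangle(Point a, Point b, Point c, uint8_t color)
    {
        if (a.y > b.y) swap_pt(&a, &b);
        if (b.y > c.y) swap_pt(&b, &c);
        if (a.y > b.y) swap_pt(&a, &b);
        if (a.y == c.y) return;                       /* zero-height triangle */
    
        int32_t x_long  = a.x * 65536;                /* long edge a->c */
        int32_t dx_long = (c.x - a.x) * 65536 / (c.y - a.y);
    
        /* Upper half: edges a->b and a->c. */
        if (b.y > a.y) {
            int32_t x_short  = a.x * 65536;
            int32_t dx_short = (b.x - a.x) * 65536 / (b.y - a.y);
            for (int y = a.y; y < b.y; y++) {
                hline(x_long >> 16, x_short >> 16, y, color);
                x_long  += dx_long;
                x_short += dx_short;
            }
        }
    
        /* Lower half: edges b->c and a->c (also covers the final scanline
           of flat-bottomed triangles).                                   */
        {
            int32_t x_short  = b.x * 65536;
            int32_t dx_short = (c.y > b.y) ? (c.x - b.x) * 65536 / (c.y - b.y) : 0;
            for (int y = b.y; y <= c.y; y++) {
                hline(x_long >> 16, x_short >> 16, y, color);
                x_long  += dx_long;
                x_short += dx_short;
            }
        }
    }

    On a machine without a blitter, the inner hline loop is where all the time goes, which is why so much effort went into unrolled byte- and word-writing routines.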

    Projection

    Perspective projection takes 3D points and maps them to 2D screen coordinates:

    screen_x = (x / z) * focal_length + screen_center_x
    screen_y = (y / z) * focal_length + screen_center_y
    

    That divide-by-z is dangerous. Division was slow, and precision was poor in integer math. Most systems used reciprocal tables and multiplication instead:

    invZ = reciprocal(z)  ; precomputed
    screen_x = (x * invZ) >> shift
    

    Clipping points behind the camera or too close was another performance headache.
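
    A hedged C sketch of the reciprocal-table trick (the table size, fixed-point format, and near-plane value are illustrative assumptions): folding the focal length into the table turns each projected coordinate into one lookup, one multiply, and one shift.

    #include <stdint.h>
    
    #define FOCAL_LENGTH 256          /* projection scale, in pixels        */
    #define SCREEN_CX    160
    #define SCREEN_CY    100
    #define Z_NEAR       16           /* reject points closer than this     */
    #define Z_MAX        1024         /* table covers z in [Z_NEAR, Z_MAX)  */
    
    /* recip_tab[z] ~= (FOCAL_LENGTH * 256) / z, i.e. focal/z in 8.8 fixed. */
    static uint16_t recip_tab[Z_MAX];
    
    static void build_recip_table(void)
    {
        for (int32_t z = Z_NEAR; z < Z_MAX; z++)
            recip_tab[z] = (uint16_t)((FOCAL_LENGTH * 256L) / z);
    }
    
    /* Project a 3D point to the screen with no runtime divide.
       Returns 0 if the point is too close or too far to project safely. */
    static int project(int32_t x, int32_t y, int32_t z, int *sx, int *sy)
    {
        if (z < Z_NEAR || z >= Z_MAX)
            return 0;                                 /* must be clipped      */
    
        int32_t scale = recip_tab[z];                 /* ~ focal/z, 8.8 fixed */
        *sx = (int)((x * scale) >> 8) + SCREEN_CX;
        *sy = (int)((y * scale) >> 8) + SCREEN_CY;
        return 1;
    }

    The table costs memory and precision, but it replaces a slow division with a fast multiply, the classic space-for-time trade of the era.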

    Texture Mapping

    Flat shading was hard enough, but texture mapping introduced per-pixel perspective correction, interpolation, and memory access. On early PCs, this became possible with chunky 256-color framebuffers and fast CPUs.

    The math alone was rough:

    for each pixel in scanline:
        u = uStart + (du * x)
        v = vStart + (dv * x)
        color = texture[u >> 8][v >> 8]
        framebuffer[x][y] = color
    

    Add in perspective correction and you're now doing:

    interpolate u/z, v/z, and 1/z, then divide to recover u and v — per pixel!
    

    This was borderline insane without a 486 or better.
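
    Here's a rough C sketch of that inner loop (written with floats for clarity; period code used fixed point or, later, the FPU, and the names and texture size are assumptions). Perspective correction means interpolating u/z, v/z, and 1/z across the span and dividing at every pixel:

    #include <stdint.h>
    
    #define TEX_SIZE 64                       /* 64x64 texture, power of two */
    
    extern uint8_t texture[TEX_SIZE][TEX_SIZE];
    extern uint8_t framebuffer[200][320];
    
    /* Draw one perspective-correct textured span on scanline y.
       uoz, voz, ooz are u/z, v/z, 1/z at x0; the d* values are their
       per-pixel steps (all three are linear in screen space).         */
    static void textured_span(int y, int x0, int x1,
                              float uoz, float voz, float ooz,
                              float duoz, float dvoz, float dooz)
    {
        for (int x = x0; x < x1; x++) {
            float z = 1.0f / ooz;             /* the per-pixel divide */
            int u = (int)(uoz * z) & (TEX_SIZE - 1);
            int v = (int)(voz * z) & (TEX_SIZE - 1);
            framebuffer[y][x] = texture[v][u];
    
            uoz += duoz;                      /* step the linear terms */
            voz += dvoz;
            ooz += dooz;
        }
    }

    A common compromise was to do the true divide only every 8 or 16 pixels and interpolate affinely in between, which hides most of the warping at a fraction of the cost.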

    Raycasting — The Shortcut to 3D

    When true 3D was too expensive, devs used raycasting—most famously in Wolfenstein 3D. It simulated 3D by casting a ray per screen column into a 2D map:

    for each column:
        ray = shoot_ray(player_pos, angle)
        distance = find_wall_hit(ray)
        wall_height = screen_height / distance
        draw_column(column, wall_texture, wall_height)
    

    With grid-based maps, DDA (Digital Differential Analyzer) stepping could walk through cells efficiently. You only drew vertical slices of wall textures, which made it fast enough to run smoothly even on a 386.
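
    Here's a compact C sketch of that DDA step (map layout, types, and names are illustrative; real engines used fixed point rather than doubles). Instead of inching the ray forward, it hops from one grid boundary to the next until it lands in a wall cell:

    #include <math.h>
    
    #define MAP_W 24
    #define MAP_H 24
    
    /* 0 = empty cell, nonzero = wall. Assumed to be bordered by walls
       so the loop below always terminates.                            */
    extern int world_map[MAP_H][MAP_W];
    
    /* Cast one ray from (px, py) in direction (dx, dy); return the
       perpendicular distance to the first wall hit.                  */
    static double cast_ray(double px, double py, double dx, double dy)
    {
        int map_x = (int)px, map_y = (int)py;
    
        /* How far the ray travels to cross one whole cell in x or in y. */
        double delta_x = (dx == 0.0) ? 1e30 : fabs(1.0 / dx);
        double delta_y = (dy == 0.0) ? 1e30 : fabs(1.0 / dy);
    
        int step_x = (dx < 0) ? -1 : 1;
        int step_y = (dy < 0) ? -1 : 1;
    
        /* Distance from the start point to the first x / y grid line. */
        double side_x = (dx < 0) ? (px - map_x) * delta_x : (map_x + 1.0 - px) * delta_x;
        double side_y = (dy < 0) ? (py - map_y) * delta_y : (map_y + 1.0 - py) * delta_y;
    
        int hit_vertical;
        do {
            if (side_x < side_y) {            /* next boundary is vertical   */
                side_x += delta_x;
                map_x  += step_x;
                hit_vertical = 1;
            } else {                          /* next boundary is horizontal */
                side_y += delta_y;
                map_y  += step_y;
                hit_vertical = 0;
            }
        } while (world_map[map_y][map_x] == 0);
    
        /* Perpendicular distance (avoids the fish-eye effect). */
        return hit_vertical ? side_x - delta_x : side_y - delta_y;
    }

    The wall height then falls straight out of screen_height / distance, as in the pseudocode above.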

    Raycasting had limits—no sloped surfaces, no rooms-over-rooms—but the performance was unbeatable.

    Conclusion — Hardware Set the Rules

    Looking back, it’s clear that the evolution of 3D games wasn’t just about better ideas. It was about better hardware. Every generation lifted the ceiling a little higher:

  • 8-bit machines taught us how to fake it with wireframes
  • 16-bit brought filled polygons and fixed-point engines
  • 32-bit (and early PCs) opened the door to real-time texture-mapped worlds

    Modern devs live in a world of GPUs, shaders, and gigabytes of RAM—but it all started with hand-coded multiply routines and clever math hacks. The next time you rotate a mesh with a single API call, spare a thought for the programmers who had to write `lsr`, `rol`, and `adc` loops just to make a cube spin.