Remember when a whole generation of kids kick-started the games industry by digging through 8-bit hardware manuals?

November 18, 2025

 


Remember when a whole generation of kids kick-started the games industry by digging through 8-bit hardware manuals?

I do.

Back then, we made things simply because coding was how you made things.

Nobody cared how it looked on a CV. Nobody waited for permission, or worried about “best practices” or “the perfect engine.” We experimented. We shared ideas. We read magazines (ask your parents what a magazine is!). We broke things and fixed them. We layered idea upon idea until the impossible suddenly wasn’t.

People called it talent.

It wasn’t.

It was curiosity, consistency, and a methodical approach to building skills.

Today? We’re drowning in information. Thousands of tutorials, hundreds of languages, endless opinions echoing in your head: Do this. Don’t do that. You must learn this first. No, learn that.

It’s no wonder beginners freeze before they even start.

But here’s the truth:

If you want to learn to code, pick a language — any language — and give it a shot.

The specific language matters far less than people claim.

Once you understand the basics in one, those skills transfer. Moving to another becomes easier. Concepts repeat. Patterns reappear. You build momentum.

Start small.

Be proud of the little victories.

And if your first attempt doesn’t stick? That’s normal. Try again later. You’ll be surprised how much your brain held onto.

Where you start isn’t where you’ll finish — and that’s the whole point.

Just keep going!

#LearnToCode #CodingJourney #ProgrammingMotivation #GameDevBeginners #CodeNewbies #StartCoding #KeepCoding #ProgrammingLife #RetroCoding #OldSchoolComputing #IndieDevLife #GameDevCommunity #SoftwareDevelopment #DeveloperMindset #ProgrammingBasics #CodingTips #CodeEveryday #STEMEducation #TechInspiration #GamedevHistory #8bitComputing #CreativeCoding #BuildInPublic #FutureDevelopers


You Know the Problem with Programming Forums…?

November 03, 2025

 


You know the problem with programming forums?

It’s not the questions. It’s not even the bugs. It’s the people.

Everywhere you go, there’s a small but mighty group of seasoned coders who really get it — the ones who’ve wrestled with obscure compiler errors, traced logic bugs through 200 lines of nested loops, and still have the mental energy left to explain it to someone else (usually with more patience than the situation deserves).

These are the quiet heroes of the community — they drop by, solve your issue with a three-line code sample, and vanish into the digital mist like some kind of stack-trace samurai.

And then… there’s the other group.

The overconfident few who, thanks to the magic of the Dunning–Kruger effect, have somehow achieved total certainty with only partial understanding. They appear out of nowhere, ready to tell you how your code should work — and they’ll do it with such conviction that you start questioning your own sanity.

It’s a fascinating phenomenon:

The less someone knows, the more aggressively they’ll argue about it.

They’ll cite “best practices” they half-remember, invent terminology mid-sentence, and copy–paste code that looks like it was written during a caffeine overdose at 3 a.m. in 1998.

And because confidence sounds like competence online, the myth spreads faster than a memory leak in an infinite loop.

Meanwhile, the real experts quietly bow out of the conversation, sensing that no amount of logic will change the outcome. They’ve seen it before. They know the cycle: ask → argue → ego → silence → repeat.

Now don’t get me wrong — every programmer starts somewhere, and we’ve all been wrong (some of us spectacularly so). But there’s a big difference between learning and lecturing. The first builds communities. The second just fills them with noise.

Maybe the cure is humility — or at least a sense of humour about how we’re all wrong sometimes. If we could just admit that once in a while, maybe the next generation of coders wouldn’t have to scroll through ten paragraphs of “expert advice” before finding the one post that actually fixes the problem.

Until then, I’ll be here, sipping coffee, watching the cycle repeat — and quietly hoping someone, somewhere, finally invents a compiler for forum advice.

Let’s Write a Lexer in PlayBASIC

October 12, 2025

 


Introduction

Welcome back, PlayBASIC coders!

In this live session, I set out to build something every programming language and tool needs — a lexer (or lexical scanner). If you’ve never written one before, don’t worry — this guide walks through the whole process step by step.

A lexer’s job is simple: it scans through a piece of text and classifies groups of characters into meaningful types — things like words, numbers, and whitespace. These little building blocks are called tokens, and they form the foundation for everything that comes next in a compiler or interpreter.
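As a quick illustration of the idea (shown in Python here, since a regex makes it a one-liner; the PlayBASIC version below builds the same thing by hand), a tokenizer just carves a string into runs of letters, digits, whitespace, and everything else:

```python
import re

# One regex alternation per token class: letters, digits, whitespace, anything else.
# Each match in the result is one token.
tokens = re.findall(r"[A-Za-z]+|[0-9]+|\s+|\S", "x = 42")
print(tokens)  # ['x', ' ', '=', ' ', '42']
```

The hand-rolled scanner we build below does exactly what this regex does, one character class at a time.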

So, let’s dive in and build one from scratch in PlayBASIC.


Starting with a Simple String

We begin with a test string — just a small bit of text containing words, spaces, and a number:

s$ = "   1212123323      This is a message number"
Print s$

This gives us something to analyze. The plan is to loop through this string character by character, figure out what each character represents, and then group similar characters together.

In PlayBASIC, strings are 1-indexed, which means the first character is at position 1 (not 0 like in some other languages). So our loop will run from 1 to the length of the string.
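For comparison, here is the same walk in a 0-indexed language like Python (assuming, as the comparisons further down suggest, that we want the character *code* at each position, which is what comparing against `Asc()` implies):

```python
s = "   1212123323      This is a message number"

# Python strings are 0-indexed, so the scan runs 0 .. len(s)-1.
# ord() gives the character code, comparable to checking against Asc() in PlayBASIC.
codes = [ord(ch) for ch in s]
```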


Stepping Through Characters

The core of our lexer is a simple `For/Next` loop that moves through each character:

For lp = 1 To Len(s$)
    ThisCHR = Mid(s$, lp)
Next

At this stage, we’re just reading characters — no classification yet.

The next question is: how do we know what type of character we’re looking at?


Detecting Alphabetical Characters

We start by figuring out if a character is alphabetical. The simplest way is by comparing ASCII values:

If ThisCHR >= Asc("A") And ThisCHR <= Asc("Z")
    ; Uppercase
EndIf

If ThisCHR >= Asc("a") And ThisCHR <= Asc("z")
    ; Lowercase
EndIf

That works, but it’s messy to write out in full every time. So let’s clean it up by rolling it into a helper function:

Function IsAlphaCHR(ThisCHR)
    State = (ThisCHR >= Asc("a") And ThisCHR <= Asc("z")) Or _
            (ThisCHR >= Asc("A") And ThisCHR <= Asc("Z"))
EndFunction State

Now we can simply check:

If IsAlphaCHR(ThisCHR)
    Print Chr$(ThisCHR)
EndIf

That already gives us all the letters from our string — but one at a time.

To make it more useful, we’ll start grouping consecutive letters into words.


Grouping Characters into Words

Instead of reacting to each character individually, we look ahead to find where a run of letters ends. This is done with a nested loop:

If IsAlphaCHR(ThisCHR)
    For ChrLP = lp To Len(s$)
        If Not IsAlphaCHR(Mid(s$, ChrLP)) Then Exit
        EndPOS = ChrLP
    Next
    ThisWord$ = Mid$(s$, lp, (EndPOS - lp) + 1)
    Print "Word: " + ThisWord$
    lp = EndPOS
EndIf

Now our lexer can detect whole words — groups of letters treated as a single unit.

That’s the first real step toward tokenization.
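The look-ahead idea is the same in any language. A minimal Python sketch of it (the function name `scan_word` is my own, not from the session) advances a second index while characters stay alphabetic, then slices out the run:

```python
def scan_word(s, i):
    """Advance from index i while characters are alphabetic.
    Returns (word, index just past the word) - the same idea
    as the nested For/Next loop with EndPOS."""
    j = i
    while j < len(s) and s[j].isalpha():
        j += 1
    return s[i:j], j
```

Calling `scan_word("abc 123", 0)` yields `("abc", 3)`, so the outer loop can resume scanning from position 3.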


Detecting Whitespace

The next type of token is whitespace — spaces and tabs.

We’ll build another helper function:

Function IsWhiteSpace(ThisCHR)
    State = (ThisCHR = Asc(" ")) Or (ThisCHR = 9)
EndFunction State

Then use the same nested-loop pattern:

If IsWhiteSpace(ThisCHR)
    For ChrLP = lp To Len(s$)
        If Not IsWhiteSpace(Mid(s$, ChrLP)) Then Exit
        EndPOS = ChrLP
    Next
    WhiteSpace$ = Mid$(s$, lp, (EndPOS - lp) + 1)
    Print "White Space: " + Str$(Len(WhiteSpace$))
    lp = EndPOS
EndIf

Now we can clearly see which parts of the string are spaces and how many characters each whitespace block contains.


Detecting Numbers

Finally, let’s detect numeric characters using another helper:

Function IsNumericCHR(ThisCHR)
    State = (ThisCHR >= Asc("0")) And (ThisCHR <= Asc("9"))
EndFunction State

And apply it just like before:

If IsNumericCHR(ThisCHR)
    For ChrLP = lp To Len(s$)
        If Not IsNumericCHR(Mid(s$, ChrLP)) Then Exit
        EndPOS = ChrLP
    Next
    Number$ = Mid$(s$, lp, (EndPOS - lp) + 1)
    Print "Number: " + Number$
    lp = EndPOS
EndIf

Now we can identify three types of tokens:

Words (alphabetical groups)

Whitespace (spaces and tabs)

Numbers (digits)


Defining a Token Structure

Up to this point, our program just prints what it finds.

Let’s store these tokens properly by defining a typed array.

Type tToken
    TokenType
    Value$
    Position
EndType
Dim Tokens(1000) As tToken

We’ll also define some constants for readability:

Constant TokenTYPE_WORD        = 1
Constant TokenTYPE_NUMERIC     = 2
Constant TokenTYPE_WHITESPACE  = 4

As we detect tokens, we add them to the array:

Tokens(TokenCount).TokenType = TokenTYPE_WORD
Tokens(TokenCount).Value$    = ThisWord$
Tokens(TokenCount).Position  = lp
TokenCount++

Do the same for whitespace and numbers, and our lexer now builds a real list of tokens as it runs.
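With all three classifiers in place, the whole loop can be mirrored in a short Python sketch (illustrative only; the `classify` helper and `Token` class names are mine, but the constants match the PlayBASIC ones above):

```python
from dataclasses import dataclass

WORD, NUMERIC, WHITESPACE = 1, 2, 4  # mirrors the TokenTYPE_ constants

@dataclass
class Token:
    type: int
    value: str
    position: int

def classify(ch):
    """Map one character to a token class (0 = unclassified)."""
    if ch.isalpha():  return WORD
    if ch.isdigit():  return NUMERIC
    if ch in " \t":   return WHITESPACE
    return 0

def lex(s):
    """Scan s, grouping consecutive same-class characters into tokens."""
    tokens, i = [], 0
    while i < len(s):
        t = classify(s[i])
        j = i + 1
        while j < len(s) and classify(s[j]) == t:
            j += 1
        tokens.append(Token(t, s[i:j], i))
        i = j
    return tokens
```

Running `lex("ab 12")` produces a word token, a whitespace token, and a number token, each tagged with its starting position.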


Displaying Tokens by Type

To visualize the result, we can print each token in a different colour:

For lp = 0 To TokenCount - 1
    Select Tokens(lp).TokenType
        Case TokenTYPE_WORD:       c = $00FF00 ; green
        Case TokenTYPE_NUMERIC:    c = $0000FF ; blue
        Case TokenTYPE_WHITESPACE: c = $000000 ; black
        Default:                   c = $FF0000
    EndSelect

    Ink c
    Print Tokens(lp).Value$
Next

When we run this version, we see numbers printed in blue, words in green, and whitespace appearing as black gaps — exactly how a simple syntax highlighter or compiler front-end might visualize tokenized text.


Wrapping Up

And that’s it — our first lexer!

It reads through a line of text, classifies what it finds, and records each token type for later use.

The same process underpins many systems:

Compilers use it as the first step in parsing code.

Adventure games might use it to process typed player commands.

Expression evaluators or script interpreters rely on it to break down formulas and logic.

The big takeaway? A lexer doesn’t have to be complicated.

This simple approach — scanning text, detecting groups, and tagging them — is the heart of it. Once you understand that, you can expand it to handle symbols, punctuation, operators, and beyond.
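As a sketch of that expansion (Python again, with illustrative names), handling operators really is just one more branch in the classifier; the scan-and-group loop itself never changes:

```python
OPERATORS = set("+-*/=<>")

def char_class(ch):
    """Return a token class name for one character (names are illustrative)."""
    if ch.isalpha():    return "word"
    if ch.isdigit():    return "number"
    if ch in " \t":     return "space"
    if ch in OPERATORS: return "operator"
    return "other"

def tokenize(s):
    """Group consecutive characters of the same class into (class, text) pairs."""
    out, i = [], 0
    while i < len(s):
        kind = char_class(s[i])
        j = i
        while j < len(s) and char_class(s[j]) == kind:
            j += 1
        out.append((kind, s[i:j]))
        i = j
    return out
```

Note one subtlety this sketch glosses over: grouping runs of operators (`>=` as one token versus `>` then `=`) is a real design decision once you go beyond single-character operators.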

If you’d like to see more about extending this lexer or turning it into a parser, let me know in the comments — or check out the full live session on YouTube.

Links:

  • PlayBASIC.com
  • Learn basic game programming (on Amazon)
  • Learn to code for beginners (on Amazon)