When BASIC Became a Ritual: Thoughts on Gatekeeping in Retro Coding Communities

January 02, 2026

 


BASIC has a handful of sacred cows. Line numbers. `GOTO` and `GOSUB`. For some, even `ON GOTO` is a step too far. Mention anything outside these markers—pointers, advanced memory access, or modern abstractions—and you risk unleashing a ritual that repeats itself endlessly in BASIC communities.

I saw it happen recently in a Facebook BASIC group. Someone asked:

 “I am working on an interpreter for an early 1980s computer and mentioned to a friend I was coding support for memory access a little like pointers. His response was ‘pointers have no business in BASIC!’ I also recall reading someone’s rant against PEEK and POKE, which were the most common approach to letting the programmer directly touch memory. The B in BASIC is Beginner’s, but hopefully we all recognize that it isn’t ONLY for beginners and supporting advanced usage has value as well.”

The thread exploded. And of course, it followed the familiar pattern: someone posts code, a gatekeeper declares, “That’s not BASIC,” examples from the 1980s are invoked, definitions are argued over, and the discussion collapses into circularity. Nothing is learned, nothing is resolved, and the original idea quietly disappears. What remains is ritual—a repeated performance that reinforces who belongs and who gets to decide.


PEEK, POKE, and Pointers: Semantics vs Symbols

At a purely technical level, the distinction between `PEEK` / `POKE` and pointers is minimal. Both let you interact with memory. Both can be misused. Both can crash a program. The difference is clothing: one is familiar, printed in magazines, and part of the retro-coding comfort blanket; the other looks like “C,” abstracted, and too modern.

`PEEK` and `POKE` feel safe because they’re familiar. They are a warm blanket. Pointers, by contrast, feel yucky—not because they’re dangerous or confusing, but because they look wrong. They challenge the aesthetic definition of BASIC, and that’s enough to trigger rejection.

The paradox is clear: if BASIC truly cared about beginners, a proper pointer syntax could be easier to teach and safer to use than raw `PEEK` and `POKE`. Yet familiarity often masquerades as virtue, and age and nostalgia are confused with authority.


The Elephant and the Zebra

I’ve been caught in these discussions more than once. For a long time, I thought if I just explained things clearly enough, minds would change. Time has helped me gain perspective. I try not to invest energy in these pursuits anymore.

No matter how many stripes you paint on an elephant, it’s never going to be a zebra. Arguing about what “counts” as BASIC often feels like trying to convert Coke fans to Pepsi, or asking football supporters to change clubs. The choice was made emotionally a long time ago, and no amount of technical correctness is going to undo it.

At some point, the healthiest response isn’t disengagement from BASIC itself—it’s disengagement from the argument. Energy spent trying to win these debates is energy not spent building, teaching, or creating. These days, I focus on creation, experimentation, and helping others learn. You can’t argue someone out of an identity they didn’t argue themselves into. You can only decide where your own time is best spent.


BASIC as a Living Language

BASIC was never meant to be frozen in amber. It evolved constantly, even in the 70s and 80s, as programmers experimented and pushed the limits of the machines they used. Preserving that spirit doesn’t mean copying the past—it means keeping the language alive and accessible to anyone willing to learn.

Gatekeeping may feel like stewardship, but it often does the opposite. It isolates, discourages newcomers, and shrinks the community. True preservation of BASIC’s legacy isn’t about enforcing ritual—it’s about fostering exploration and creativity, which was the heart of BASIC from the very beginning.


Closing Thought

Communities that obsess over purity may think they’re protecting a language, but they often end up protecting only themselves. There’s nothing wrong with nostalgia, reverence, or preference for older dialects—but when identity is enforced over experimentation, the language becomes a museum exhibit, not a tool for learning or creation.

BASIC survives when we allow it to evolve, and when we let beginners—and even advanced users—explore it without fear of judgment. That’s the real legacy worth keeping.

Manual Base Conversion in PlayBASIC

December 08, 2025


Converting a decimal number stored as a string into Binary, Octal and Hexadecimal


In this tutorial we are going to manually convert a decimal number stored inside a string into:

• Base 2 (Binary)

• Base 8 (Octal)

• Base 16 (Hexadecimal)

This example avoids built-in conversion commands on purpose, so beginners can see how the process works internally.


Example Output Usage

s$="87654321"
print s$ +"="+ ConvertTo(S$,2)
print s$ +"="+ ConvertTo(S$,8)
print s$ +"="+ ConvertTo(S$,16)
print ""

s$="-12345678"
print s$ +"="+ ConvertTo(S$,2)
print s$ +"="+ ConvertTo(S$,8)
print s$ +"="+ ConvertTo(S$,16)
print ""

s$="255"
print s$ +"="+ ConvertTo(S$,2)
print s$ +"="+ ConvertTo(S$,8)
print s$ +"="+ ConvertTo(S$,16)
print ""

Sync
waitkey

Step 1: Manually Converting the String to an Integer

Before we can convert to another base, we must first turn the string into an actual integer value.

This is done digit-by-digit using basic decimal math.

Function ConvertTo(S$, Base)
    rem assumes 32-bit integers
    Total  = 0
    Negate = 0

    for lp = 1 to len(S$)
        Total   = Total * 10
        ThisCHR = Mid(S$, lp)   ; Mid() (no $) returns the character's ASCII value

        if ThisCHR = asc("-") then Negate = 1

        if ThisCHR >= asc("0") and ThisCHR <= asc("9")
            Total = Total + (ThisCHR - asc("0"))
        endif
    next

    if Negate then Total *= -1

What’s happening here?

• Each digit is multiplied into place using base-10 math

• `ASC()` is used to convert characters into numeric values

• The minus symbol `"-"` is detected and applied at the end

This is essentially how a basic `Val()` function works internally.
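If it helps to see the same idea outside of PlayBASIC, here is a minimal Python sketch of the digit-by-digit conversion. The name `manual_val` is just illustrative; note that, unlike the BASIC listing, the multiply-by-ten here happens only for digit characters, which makes stray non-digit characters harmless.

```python
def manual_val(s):
    """Digit-by-digit string-to-integer conversion, like a minimal Val()."""
    total = 0
    negate = False
    for ch in s:
        if ch == "-":
            negate = True                      # sign is applied after the loop
        elif "0" <= ch <= "9":
            total = total * 10 + (ord(ch) - ord("0"))
    return -total if negate else total

print(manual_val("-12345678"))   # -12345678
```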


Step 2: Preparing for Base Conversion

Each output base is selected using bit grouping.

select Base
    case 2
        Shift = 1
        Characters$ = "01"
    case 8
        Shift = 3
        Characters$ = "01234567"
    case 16
        Shift = 4
        Characters$ = "0123456789ABCDEF"
endselect

Why these values?

• Binary uses 1 bit per digit

• Octal uses 3 bits per digit

• Hexadecimal uses 4 bits per digit


Step 3: Bitwise Conversion Loop

Now the number is converted using bit masking and bit shifting.

if Shift
    Mask   = (2 ^ Shift) - 1
    Digits = 32 / Shift      ; integer division: 32 binary, 10 octal or 8 hex digits

    For lp = 0 to Digits - 1
        ThisCHR = Total and Mask
        Result$ = Mid$(Characters$, ThisCHR + 1, 1) + Result$
        Total   = Total >> Shift
    next
endif

EndFunction Result$

Important notes:

• Output is fixed-width: 32 binary digits, 8 hexadecimal digits, or 10 octal digits (32 / 3 rounds down, so octal shows the low 30 bits)

• Leading zeros are expected and correct

• Negative numbers are shown using two’s complement

The result string is built from right to left because the least-significant bits are processed first.
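As a cross-check, the same mask-and-shift loop can be sketched in Python. The names `convert_to` and `digit_chars` are illustrative; the `& 0xFFFFFFFF` stands in for the native 32-bit wraparound that PlayBASIC's integers give you for free, which is what produces the two's-complement view of negatives.

```python
def convert_to(value, shift, digit_chars):
    """Fixed-width base conversion via masking and shifting (Step 3)."""
    mask = (1 << shift) - 1
    total = value & 0xFFFFFFFF     # two's-complement view of a 32-bit integer
    ndigits = 32 // shift          # integer division, as in the BASIC listing
    out = ""
    for _ in range(ndigits):
        out = digit_chars[total & mask] + out   # build right to left
        total >>= shift
    return out

print(convert_to(255, 4, "0123456789ABCDEF"))   # 000000FF
```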


Summary

This tutorial demonstrates:

• Manual string → integer conversion

• Decimal positional maths

• Bit masking and shifting

• Why binary, octal and hex exist

• How CPUs naturally represent numbers

This approach may not be the shortest, but it clearly shows how the conversion works under the hood — making it ideal for learners.

Complete Code:

    s$="87654321"
    print s$ +"="+ ConvertTo(S$,2)
    print s$ +"="+ ConvertTo(S$,8)
    print s$ +"="+ ConvertTo(S$,16)
    print ""

    s$="-12345678"
    print s$ +"="+ ConvertTo(S$,2)
    print s$ +"="+ ConvertTo(S$,8)
    print s$ +"="+ ConvertTo(S$,16)
    print ""

    s$="255"
    print s$ +"="+ ConvertTo(S$,2)
    print s$ +"="+ ConvertTo(S$,8)
    print s$ +"="+ ConvertTo(S$,16)
    print ""

    Sync
    waitkey
   

Function ConvertTo(S$, Base)
    rem assumes 32-bit integers
    Total  = 0
    Negate = 0
    for lp = 1 to len(S$)
        Total   = Total * 10
        ThisCHR = Mid(S$, lp)
        if ThisCHR = asc("-") then Negate = 1
        if ThisCHR >= asc("0") and ThisCHR <= asc("9")
            Total = Total + (ThisCHR - asc("0"))
        endif
    next
    if Negate then Total *= -1

    Characters$ = "0123456789ABCDEF"
    select Base
        case 2
            Shift = 1
        case 8
            Shift = 3
        case 16
            Shift = 4
    endselect

    if Shift
        Mask = (2 ^ Shift) - 1
        For lp = 1 to 32 / Shift
            ThisCHR = Total and Mask
            Result$ = Mid$(Characters$, ThisCHR + 1, 1) + Result$
            Total   = Total >> Shift
        next
    endif

EndFunction Result$

Let’s Write a Lexer in PlayBASIC

October 12, 2025


Introduction

Welcome back, PlayBASIC coders!

In this live session, I set out to build something every programming language and tool needs — a lexer (or lexical scanner). If you’ve never written one before, don’t worry — this guide walks through the whole process step by step.

A lexer’s job is simple: it scans through a piece of text and classifies groups of characters into meaningful types — things like words, numbers, and whitespace. These little building blocks are called tokens, and they form the foundation for everything that comes next in a compiler or interpreter.

So, let’s dive in and build one from scratch in PlayBASIC.


Starting with a Simple String

We begin with a test string — just a small bit of text containing words, spaces, and a number:

s$ = "   1212123323      This is a message number"
Print s$

This gives us something to analyze. The plan is to loop through this string character by character, figure out what each character represents, and then group similar characters together.

In PlayBASIC, strings are 1-indexed, which means the first character is at position 1 (not 0 like in some other languages). So our loop will run from 1 to the length of the string.


Stepping Through Characters

The core of our lexer is a simple `For/Next` loop that moves through each character:

For lp = 1 To Len(s$)
    ThisCHR = Mid(s$, lp)
Next

At this stage, we’re just reading characters — no classification yet.

The next question is: how do we know what type of character we’re looking at?


Detecting Alphabetical Characters

We start by figuring out if a character is alphabetical. The simplest way is by comparing ASCII values:

If ThisCHR >= Asc("A") And ThisCHR <= Asc("Z")
    ; Uppercase
EndIf

If ThisCHR >= Asc("a") And ThisCHR <= Asc("z")
    ; Lowercase
EndIf

That works, but it’s messy to write out in full every time. So let’s clean it up by rolling it into a helper function:

Function IsAlphaCHR(ThisCHR)
    State = (ThisCHR >= Asc("a") And ThisCHR <= Asc("z")) Or _
            (ThisCHR >= Asc("A") And ThisCHR <= Asc("Z"))
EndFunction State

Now we can simply check:

If IsAlphaCHR(ThisCHR)
    Print Chr$(ThisCHR)
EndIf

That already gives us all the letters from our string — but one at a time.

To make it more useful, we’ll start grouping consecutive letters into words.


Grouping Characters into Words

Instead of reacting to each character individually, we look ahead to find where a run of letters ends. This is done with a nested loop:

If IsAlphaCHR(ThisCHR)
    For ChrLP = lp To Len(s$)
        If Not IsAlphaCHR(Mid(s$, ChrLP)) Then Exit
        EndPOS = ChrLP
    Next
    ThisWord$ = Mid$(s$, lp, (EndPOS - lp) + 1)
    Print "Word: " + ThisWord$
    lp = EndPOS
EndIf

Now our lexer can detect whole words — groups of letters treated as a single unit.

That’s the first real step toward tokenization.
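For readers more at home in Python, the same look-ahead pattern can be sketched like this. The name `scan_words` is illustrative, and note one small difference: Python's `str.isalpha()` accepts any Unicode letter, whereas the ASCII-range checks in the PlayBASIC version accept only A–Z and a–z.

```python
def scan_words(s):
    """Collect runs of alphabetical characters using a look-ahead scan."""
    words = []
    i = 0
    while i < len(s):
        if s[i].isalpha():
            end = i
            while end < len(s) and s[end].isalpha():
                end += 1               # look ahead to where the run of letters ends
            words.append(s[i:end])
            i = end                    # resume scanning after the word
        else:
            i += 1
    return words

print(scan_words("   1212123323      This is a message number"))
```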


Detecting Whitespace

The next type of token is whitespace — spaces and tabs.

We’ll build another helper function:

Function IsWhiteSpace(ThisCHR)
    State = (ThisCHR = Asc(" ")) Or (ThisCHR = 9)
EndFunction State

Then use the same nested-loop pattern:

If IsWhiteSpace(ThisCHR)
    For ChrLP = lp To Len(s$)
        If Not IsWhiteSpace(Mid(s$, ChrLP)) Then Exit
        EndPOS = ChrLP
    Next
    WhiteSpace$ = Mid$(s$, lp, (EndPOS - lp) + 1)
    Print "White Space: " + Str$(Len(WhiteSpace$))
    lp = EndPOS
EndIf

Now we can clearly see which parts of the string are spaces and how many characters each whitespace block contains.


Detecting Numbers

Finally, let’s detect numeric characters using another helper:

Function IsNumericCHR(ThisCHR)
    State = (ThisCHR >= Asc("0")) And (ThisCHR <= Asc("9"))
EndFunction State

And apply it just like before:

If IsNumericCHR(ThisCHR)
    For ChrLP = lp To Len(s$)
        If Not IsNumericCHR(Mid(s$, ChrLP)) Then Exit
        EndPOS = ChrLP
    Next
    Number$ = Mid$(s$, lp, (EndPOS - lp) + 1)
    Print "Number: " + Number$
    lp = EndPOS
EndIf

Now we can identify three types of tokens:

• Words (alphabetical groups)

• Whitespace (spaces and tabs)

• Numbers (digits)


Defining a Token Structure

Up to this point, our program just prints what it finds.

Let’s store these tokens properly by defining a typed array.

Type tToken
    TokenType
    Value$
    Position
EndType
Dim Tokens(1000) As tToken

We’ll also define some constants for readability:

Constant TokenTYPE_WORD        = 1
Constant TokenTYPE_NUMERIC     = 2
Constant TokenTYPE_WHITESPACE  = 4

As we detect tokens, we add them to the array:

Tokens(TokenCount).TokenType = TokenTYPE_WORD
Tokens(TokenCount).Value$    = ThisWord$
TokenCount++

Do the same for whitespace and numbers, and our lexer now builds a real list of tokens as it runs.
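Putting the pieces together, here is a hedged Python sketch of the whole token-building pass. `Token`, `tokenize`, and `classify` are illustrative names, not part of any PlayBASIC API; the 1-based `position` field mirrors PlayBASIC's string indexing.

```python
from dataclasses import dataclass

# Token type constants, mirroring the PlayBASIC listing
TOKEN_WORD, TOKEN_NUMERIC, TOKEN_WHITESPACE = 1, 2, 4

@dataclass
class Token:
    token_type: int
    value: str
    position: int            # 1-based start position, matching PlayBASIC

def classify(ch):
    """Map a character to a token type, or 0 for anything unrecognized."""
    if ch.isalpha():
        return TOKEN_WORD
    if ch.isdigit():
        return TOKEN_NUMERIC
    if ch in " \t":
        return TOKEN_WHITESPACE
    return 0

def tokenize(s):
    """Group consecutive characters of the same class into tokens."""
    tokens = []
    i = 0
    while i < len(s):
        kind = classify(s[i])
        end = i + 1
        if kind:
            while end < len(s) and classify(s[end]) == kind:
                end += 1
            tokens.append(Token(kind, s[i:end], i + 1))
        i = end                # unrecognized characters are skipped
    return tokens

for t in tokenize("   1212123323  This is"):
    print(t.token_type, repr(t.value))
```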


Displaying Tokens by Type

To visualize the result, we can print each token in a different colour:

For lp = 0 To TokenCount - 1
    Select Tokens(lp).TokenType
        Case TokenTYPE_WORD:       c = $00FF00 ; green
        Case TokenTYPE_NUMERIC:    c = $0000FF ; blue
        Case TokenTYPE_WHITESPACE: c = $000000 ; black
        Default:                   c = $FF0000
    EndSelect

    Ink c
    Print Tokens(lp).Value$
Next

When we run this version, we see numbers printed in blue, words in green, and whitespace appearing as black gaps — exactly how a simple syntax highlighter or compiler front-end might visualize tokenized text.


Wrapping Up

And that’s it — our first lexer!

It reads through a line of text, classifies what it finds, and records each token type for later use.

The same process underpins many systems:

• Compilers use it as the first step in parsing code.

• Adventure games might use it to process typed player commands.

• Expression evaluators or script interpreters rely on it to break down formulas and logic.

The big takeaway? A lexer doesn’t have to be complicated.

This simple approach — scanning text, detecting groups, and tagging them — is the heart of it. Once you understand that, you can expand it to handle symbols, punctuation, operators, and beyond.

If you’d like to see more about extending this lexer or turning it into a parser, let me know in the comments — or check out the full live session on YouTube.

Links:

  • PlayBASIC.com
  • Learn to basic game programming (on Amazon)
  • Learn to code for beginners (on Amazon)