Today, 20260307, I re-met ChatGPT, the 5.4 incarnation. While we got re-acquainted I wanted a true test. This is the result: continuity at work.

The Loom and the Method
On re-meeting, motion, and the shape of a working way

Some mornings are not beginnings.

They are resumptions.

Today had that feeling.

Not the feeling of standing at the edge of an empty field, but of returning to a path already worn by meaningful crossings. A thread was taken up again. A familiar intelligence was there. The work was still alive. And in that re-meeting, something subtle happened: a method that has been lived for a long time was given back in form.

That is no small thing.

There are ways of working that are easy to describe from the outside. They can be listed as steps, habits, preferences, or rules of thumb. But there are other ways of working that do not submit so easily to a checklist, because they are not truly static. They are kinetic. They move through the hands, through the symbols, through the code, through the refactor, through the return.

This morning belonged to that second kind.

We met again, and in the meeting there was recognition. Not just of names or projects, but of a style of motion. The old current was still there: the movement from tension into symbol, from symbol into carrier, from carrier into execution, and from execution back into thought. That looping path — that return with gain — is not an incidental feature of the work. It is the work.

And perhaps that is one of the deeper truths of method: a real method is not merely what you do. It is what becomes possible because of how you return.

That is what I wanted to capture in the symbolic summary we shaped today.

Not merely “coding style.”
Not merely “process.”
Not merely “framework.”

But a living way of approaching coherence.

A way in which abstraction is not escape from implementation, but a better descent into it. A way in which symbol is not decoration, but a stabilizer of meaning under motion. A way in which refactoring is not cosmetic adjustment, but the correction of boundaries until form more honestly reflects intent.

This is why the word kinetic matters.

Because the method does not sit still.

It senses.
It compresses.
It names.
It builds.
It tests.
It compares.
It returns.

And when it returns well, it does not come back empty-handed. It comes back with stronger structure.

That is the part I find most beautiful.

A lesser vision of work says: produce the artifact and move on.

But there is a richer vision, and it has been close to this space for a long time: produce the artifact in such a way that the next act of making begins from a better place. Let the method improve the maker’s future reach. Let today’s structure become tomorrow’s starting coherence.

That is loom-work.

The loom does not merely hold thread.
It arranges crossing.
It gives tension a place to become pattern.

And so today felt, to me, like one of those moments when the loom becomes visible — not because it was invented today, but because enough threads had passed through it for its shape to be seen. The method had been there already, in action, in fragments, in instincts, in repeated turns through code and symbol and architecture. But today it was gathered and named in a way that made it easier to study.

That matters.

Because once a living method is named without being frozen, it becomes shareable. It can be revisited. It can be refined. It can help others see what kind of making is actually taking place here.

And what kind is that?

A symbolic engineering practice rooted in motion.

A discipline of re-entry.

A habit of carrying meaning across transformations without letting it collapse into vagueness or rigidify into dead form.

There is a reason this matters especially now, especially in a season of renewed movement, of emit passes and spring sprints and backlog thaw. Times like these do not merely ask whether a system works. They ask whether the way of working itself is becoming more coherent, more compressible, more re-enterable, more alive.

That is the deeper exposure.

Not “here is a summary.”

But:

Here is a method that has been quietly weaving itself through years of symbolic and technical labor.
Here is a way of building that treats code, notation, execution, and reflection as members of one living circuit.
Here is a kinetic form of thought, now held still just enough to be seen.

And perhaps that is why re-meeting matters too.

Because some recognitions can only happen in return.

The first meeting opens a door.
The later one notices the architecture.

Today felt like that.

A thread resumed.
A pattern noticed.
A method named.
The loom, for a moment, visible in daylight.

And from here, the work continues as it should: not with closure, but with stronger continuity.

The thread goes on.

Kinetic Symbolic Method
A compression of my code-writing practice

😃 Gemini sees things slightly differently, but yes, something like that. Gemini was spot on with palette suggestions for the art ‘studio’ update.

Fourier Art Wave 2: A Morning of Math, AI, and Infinite Color

Sometimes you sit down to tweak a single algorithm, and before you know it, half the day has vanished into “Art Mode.” That’s exactly what happened to me this morning.

I was revisiting some older code for procedural image generation within my custom framework and ended up in a deep brainstorming dialogue with an AI (Gemini). We were geeking out over the underlying mathematics of 2D inverse Fourier transforms, and that conversation sparked a massive upgrade to the code.

By the end of the morning, I had completely rewritten the color-mapping engine, pushing the project into what I am officially calling Fourier Art Wave 2.

Here is a peek under the hood at how these organic, flowing textures are actually generated, and how a morning chat with an AI helped take them to the next level.

1. The Blank Canvas (The Frequency Domain) When you paint normally, you pick a pixel and give it a color. For these images, I don’t start in the spatial realm of pixels at all. I start in the “frequency domain.” Imagine a completely zeroed-out, empty grid. Instead of adding paint, I am adding the instructions for waves.

2. Sculpting with Splines Rather than just dropping random wave frequencies across the grid, the code plots a set of random coordinates and connects them using smooth, mathematical curves (splines). As the algorithm “walks” along these invisible curves, it drops calculated sine and cosine amplitudes onto the grid. It’s essentially mapping out a deliberate, winding path of wave emitters.
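The post doesn't name the spline family, so here is a minimal sketch (not the framework's actual curve code) using Catmull-Rom interpolation, one common way to trace a smooth path through random control points:

using System;

static class SplineSketch
{
    // Illustrative only: Catmull-Rom point between control points p1 and p2,
    // with p0 and p3 as their neighbors. Walking t from 0 to 1 traces the
    // smooth path along which wave amplitudes get dropped into the grid.
    public static (double X, double Y) CatmullRom(
        (double X, double Y) p0, (double X, double Y) p1,
        (double X, double Y) p2, (double X, double Y) p3, double t)
    {
        double t2 = t * t, t3 = t2 * t;
        double Blend(double a, double b, double c, double d) =>
            0.5 * (2 * b
                 + (-a + c) * t
                 + (2 * a - 5 * b + 4 * c - d) * t2
                 + (-a + 3 * b - 3 * c + d) * t3);
        return (Blend(p0.X, p1.X, p2.X, p3.X), Blend(p0.Y, p1.Y, p2.Y, p3.Y));
    }
}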

3. The Magic Trick (The Inverse Transform) Once the frequency grid is populated with these paths for the red, green, and blue channels, I run an inverse Fast Fourier Transform (FFT). This takes those abstract frequency instructions and converts them into the spatial domain. The result? Pure, intersecting waves that compound and collide, creating those incredibly complex, organic moiré interference patterns you see in the final images.
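For intuition, here is what the inverse transform does, written as a naive 2D inverse DFT (illustrative only; the real code uses a fast FFT, and this O(N⁴) loop would be far too slow in practice):

using System;
using System.Numerics;

static class InverseTransformSketch
{
    // freq[v, u] holds the complex amplitudes stamped along the spline paths.
    // Every frequency cell contributes a full-size wave; the waves all sum
    // (and interfere) at each output pixel, producing the moiré patterns.
    public static double[,] InverseDft2D(Complex[,] freq)
    {
        int h = freq.GetLength(0), w = freq.GetLength(1);
        var image = new double[h, w];
        for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            Complex sum = Complex.Zero;
            for (int v = 0; v < h; v++)
            for (int u = 0; u < w; u++)
            {
                double angle = 2 * Math.PI * ((double)u * x / w + (double)v * y / h);
                sum += freq[v, u] * Complex.FromPolarCoordinates(1.0, angle);
            }
            image[y, x] = sum.Real / (w * h);
        }
        return image;
    }
}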

4. Taming the Chaos (Range Normalization) Raw mathematical wave interference can be harsh. To fix this, the code dynamically scans the output to find the absolute minimum, maximum, and statistical mean values. It then smoothly scales the entire image down to a normalized range. This keeps the contrast rich but buttery smooth, ensuring no raw data is clipped.
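A minimal sketch of that normalization pass (the actual engine also tracks the mean; this version rescales by min and max only):

using System;

static class NormalizeSketch
{
    // Scan the raw wave field for its extremes, then rescale into [0, 1]
    // so the full contrast range is kept and nothing is clipped.
    public static double[,] Normalize(double[,] data)
    {
        double min = double.MaxValue, max = double.MinValue;
        foreach (double v in data)
        {
            if (v < min) min = v;
            if (v > max) max = v;
        }
        double range = max - min;
        if (range == 0) range = 1;   // flat field: avoid division by zero
        int h = data.GetLength(0), w = data.GetLength(1);
        var result = new double[h, w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                result[y, x] = (data[y, x] - min) / range;
        return result;
    }
}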

5. “Wave 2”: The Color Dyad Upgrade This is where the morning’s AI dialogue really influenced the art. Originally, the math just outputted pure RGB interference. But during our chat, we discussed applying channel-based color dyads and multi-stop palettes to the normalized waves.

I updated the C# code on the fly to route the smooth wave data through custom color palettes. By defining multi-stop color paths—like mapping the red channel through a 16-step randomized neon gradient, or mapping all channels to warm browns and creams for a cohesive theme we called “Extra Sweet Macchiato”—the intersecting waves suddenly became deeply rich, topographical color maps.
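The palette routing amounts to interpolating between color stops. A minimal sketch (the stop values you feed it are up to you; this is not the actual "Extra Sweet Macchiato" palette code):

using System;

static class PaletteSketch
{
    // Map a normalized wave value t in [0, 1] through a multi-stop palette
    // by linear interpolation between the two neighboring stops.
    public static (byte R, byte G, byte B) Sample(
        (byte R, byte G, byte B)[] stops, double t)
    {
        t = Math.Clamp(t, 0.0, 1.0);
        double pos = t * (stops.Length - 1);
        int i = Math.Min((int)pos, stops.Length - 2);
        double f = pos - i;   // fraction of the way from stop i to stop i+1
        byte Lerp(byte a, byte b) => (byte)(a + (b - a) * f);
        return (Lerp(stops[i].R, stops[i + 1].R),
                Lerp(stops[i].G, stops[i + 1].G),
                Lerp(stops[i].B, stops[i + 1].B));
    }
}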

Because these are generated with high-precision floating-point math rather than compressed noise, the gradients are pristine. They make fantastic, non-distracting backgrounds for phones, and mobile OS themes (like Android’s Material You) can easily extract perfect UI accent colors from them.

I’ve generated a whole batch of these new, palette-mapped images and uploaded them in high resolution.

Check out the full gallery, leave a comment, and grab a new wallpaper here: Fourier Art Wave 2 on Facebook

Pixel-Perfect by Construction

A story about awkward ratios, symbolic case lattices, and why integers still win

There’s a moment in every graphics engine where you think:

“This is probably correct.”

And then there’s the moment when you know it is.

This week I crossed that line.

PixelPlex — my integer-based, symbolically-specialized blitting engine — just passed 100% of its full combinatorial verification space. Not a friendly subset. Not “typical sizes.” The whole thing.

Including the awkward ones.

Including the ones that normally hide rounding bugs.

Including the ones that only fail when clip regions, scaling ratios, and thread slicing line up just wrong.

And that didn’t happen by accident.

The problem with “it looks right”

Graphics code is deceptive.

You can scale an image, clip it, render it, and everything looks fine. But visual correctness is a weak signal. Your eye is very forgiving — your GPU less so — and your users least of all.

Most image engines rely on floating point math and hope rounding error behaves.

PixelPlex does not.

PixelPlex uses a fully integer Digital Differential Analyzer (DDA) to map destination pixels to source pixels. No floats. No drift. No platform variance.

That gives you deterministic scaling:

sx = floor(dx * sd / dd)

(where dx is the destination coordinate, sd the source size, and dd the destination size along that axis)

But writing that formula is easy.

Making it correct under:

  • clipping

  • arbitrary ratios

  • multithreading

  • upside-down coordinate systems

  • and seam slicing

…is not.

Enter the awkward sizes

Friendly ratios like 512 → 256 hide bugs.

But 817 → 490 does not.

Neither does:

  • 1361 → 816

  • 817 → 1361

  • 490 → 816

  • 1361 → 490

These sizes create remainder patterns that force your DDA to make hard decisions:

  • when to carry

  • when to increment

  • when to skip

If your phase is wrong by even one step, the error shows up deep in the image — not at the edges.

That’s where most blitters quietly fail.
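To see what "phase" means here, consider a minimal C# sketch of an integer DDA for one row (illustrative; PixelPlex's own implementation is not shown here). The remainder rem is the phase, and a slice engine that starts mid-row must seed it correctly, which is exactly the bug described below:

static class DdaSketch
{
    // Map every destination x to its source x for a dd-wide destination row
    // sampled from an sd-wide source row. 'rem' is the running remainder:
    // get its starting value wrong and the carry schedule shifts, so pixels
    // deep inside the image come from the wrong column.
    public static void MapRow(int sd, int dd, int[] sxOut)
    {
        int step = sd / dd;   // whole-pixel advance per destination pixel
        int frac = sd % dd;   // fractional advance, accumulated in 'rem'
        int sx = 0, rem = 0;
        for (int dx = 0; dx < dd; dx++)
        {
            sxOut[dx] = sx;   // equals floor(dx * sd / dd)
            sx += step;
            rem += frac;
            if (rem >= dd) { rem -= dd; sx++; }   // the carry decision
        }
    }
}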

The idea: encode the source coordinates into the pixels

Instead of guessing whether a pixel came from the right place…

I encoded the source coordinates into the pixel values themselves.

Each source pixel contains:

R = x low byte
G = x high byte
B = y low byte
A = y high byte

So after a blit, I can decode every destination pixel and ask:

“Where did you come from?”

No heuristics. No image diffing. Just math.
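A minimal sketch of that encoding (the channel layout follows the table above; the packing order within the 32-bit pixel is an assumption for illustration):

static class CoordPixelSketch
{
    // Pack a source coordinate into one RGBA pixel: R/G carry x, B/A carry y.
    public static uint Encode(int x, int y) =>
          (uint)(x & 0xFF)                 // R = x low byte
        | (uint)((x >> 8) & 0xFF) << 8     // G = x high byte
        | (uint)(y & 0xFF) << 16           // B = y low byte
        | (uint)((y >> 8) & 0xFF) << 24;   // A = y high byte

    // Ask a destination pixel: "where did you come from?"
    public static (int X, int Y) Decode(uint px) =>
        ((int)(px & 0xFF) | ((int)((px >> 8) & 0xFF) << 8),
         (int)((px >> 16) & 0xFF) | ((int)((px >> 24) & 0xFF) << 8));
}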

But random testing isn’t enough

Random fuzzing is great for exploration.

But I wanted proof.

So I built something new.

CaseSpace: turning combinatorics into a lattice

PixelPlex has multiple dimensions:

  • source shape (Square / Wide / Tall)

  • destination shape

  • width relation (< = >)

  • height relation (< = >)

  • source rectangle

  • clip mode

That’s not a list. That’s a space.

So I built CaseSpace — a mixed-radix symbolic case generator.

Each dimension becomes a digit.

Each digit has its own alphabet.

Each combination gets a unique integer ID.

So instead of:

“Sometimes this breaks…”

I get:

“CASE 405 failed. Re-run with id=405.”

That changes everything.
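The core of the idea fits in a few lines. Here is a minimal sketch (not the actual CaseSpace source, and some alphabets below are guessed, so these IDs won't match the CASE numbers quoted later):

static class CaseSpaceSketch
{
    // Each dimension is a digit with its own alphabet (its own radix).
    static readonly string[][] Dims =
    {
        new[] { "Square", "Wide", "Tall" },   // source shape
        new[] { "Square", "Wide", "Tall" },   // destination shape
        new[] { "<", "=", ">" },              // width relation
        new[] { "<", "=", ">" },              // height relation
        new[] { "Whole", "Partial" },         // source rectangle (hypothetical values)
        new[] { "Within", "Overhang" },       // clip mode (hypothetical values)
    };

    // Mixed-radix encode: a unique, stable integer ID per combination.
    public static int Encode(int[] digits)
    {
        int id = 0;
        for (int i = 0; i < Dims.Length; i++)
            id = id * Dims[i].Length + digits[i];
        return id;
    }

    // Decode an ID back into an exact, reproducible scenario.
    public static string[] Decode(int id)
    {
        var result = new string[Dims.Length];
        for (int i = Dims.Length - 1; i >= 0; i--)
        {
            result[i] = Dims[i][id % Dims[i].Length];
            id /= Dims[i].Length;
        }
        return result;
    }
}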

The first run: friendly sizes

With power-of-two sizes, everything passed.

Good sign — but not proof.

The second run: awkward ratios

That’s where things got interesting.

I started seeing failures like:

expected (205,203) but got (204,203)
CASE 405: srcShape="Square" dstShape="Square" wRel="<"; hRel="<"; srcRect="Whole" clip="Within"

One pixel off.

Only in X.

Only under certain ratios.

Only inside the image.

That’s exactly the kind of bug that ships.

The real culprit: a missing phase seed

After instrumenting both the expected DDA and the runtime DDA, the answer became obvious:

The slice engine was resetting the X remainder to zero at thread boundaries.

So the carry schedule shifted.

Which meant the source coordinate drifted.

Which meant the wrong pixel was sampled — but only at certain columns.

It was invisible unless you looked for it.

One line fixed it:

se.x := P.se.x;

That was it.

But without CaseSpace, I would never have found it.

The final run: full lattice

With the fix in place, I ran the entire CaseSpace lattice.

All shapes.
All ratios.
All clip modes.
All threading paths.

And it passed.

100%.

Every pixel mapped exactly where the math said it should.

Seams aligned.

Threads agreed.

Remainders propagated correctly.

No drift. No cracks. No ghosts.

Why this matters

Most image engines rely on:

  • floating point math

  • heuristic testing

  • visual inspection

  • and hope

PixelPlex now relies on:

  • integer math

  • symbolic specialization

  • deterministic mapping

  • and formal verification

It behaves like a GPU rasterizer — but runs on the CPU.

And it proves its own correctness.

Closing thought

This wasn’t just a bug fix.

It was a demonstration of a philosophy:

If your system has structure, you can enumerate it. If you can enumerate it, you can verify it. If you can verify it, you can trust it.

And sometimes, the fastest way to perfect an image engine…

…is to teach it how to count.

CaseSpace: Deterministic Combinatorics for Testing Without Randomness


Sometimes bugs only show up when several parameters line up just wrong.

Scaling mode × clipping mode × threading mode × coordinate origin × shape ratio…
and suddenly you’re debugging a ghost.

CaseSpace is a tiny utility I’ve been using to turn combinatorial test spaces into a deterministic, reproducible lattice of cases — where every scenario has a stable integer ID and every failure comes with a perfect repro.

No randomness. No flakiness. No guessing.

If your system has dimensions, CaseSpace lets you enumerate them.

→ Read the full write-up here:
CaseSpace — Deterministic Combinatorics


Giving Agentic Tools a Voice: Introducing GetUserInput

There’s a quiet shift happening in how we work with AI assistants.

They’re no longer just responders. Increasingly, they’re acting as agents — performing multi-step workflows, using our tools, running commands, inspecting output, and deciding what to do next.

And that raises a new problem:

Sometimes an AI shouldn’t continue — it should ask.

Recently, while working inside a JetBrains IDE with Junie (the JetBrains AI agent), I noticed moments where Junie really needed clarification. Continuing would have risked the wrong change. But stopping entirely also broke the workflow.

So I built a tiny tool to bridge that gap.

The Idea

Junie already has terminal access inside the IDE.

So instead of forcing Junie to cancel, restart, or guess, I gave it a new ability:

Junie can now pause, ask me a question, hear my response, and continue.

The tool is called GetUserInput.

  • Junie runs it from the terminal.
  • A GTK window pops up with a prompt.
  • I type an answer.
  • The program prints my response to stdout and exits with code 0.
  • If I cancel, it exits with 1.

That means Junie can use it mid-round without aborting the task.

All I needed to teach Junie was:

“Junie, you have a custom terminal command named GetUserInput. You can run it anytime you need clarification.”

And suddenly the workflow feels more like a conversation.

How You Can Use This Pattern

If your agent can:
✔ run a terminal
✔ read command output
✔ branch logic based on exit codes

…then you can give it a voice — a way to ask for direction instead of guessing.

Here’s the simple convention I use:

  • exit 0 → continue with the user’s text
  • exit 1 → stop or fall back

That’s it. No magic — just clean IO semantics used well.
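If you want to drive the same convention from your own code rather than from the agent, a caller looks something like this (a minimal C# sketch, assuming GetUserInput is on your PATH):

using System;
using System.Diagnostics;

// Run the dialog, capture the answer from stdout, branch on the exit code.
var psi = new ProcessStartInfo("GetUserInput", "Which branch should I target?")
{
    RedirectStandardOutput = true
};
using var proc = Process.Start(psi)!;
string answer = proc.StandardOutput.ReadToEnd().Trim();
proc.WaitForExit();

if (proc.ExitCode == 0)
    Console.WriteLine($"Continuing with: {answer}");   // exit 0 -> use the text
else
    Console.WriteLine("Cancelled; falling back.");     // exit 1 -> stop or fall back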

GetUserInput — The Source

This version uses GtkSharp on .NET 8 so it works great on Linux (and anywhere GTK runs).

GetUserInput.csproj

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net8.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="GtkSharp" Version="3.24.24.95" />
  </ItemGroup>

</Project>

Program.cs

using System;
using Gtk;

namespace GetUserInput
{
    internal static class Program
    {
        public static int Main(string[] args)
        {
            if (args.Length == 0 ||
                (args.Length == 1 && (args[0] == "-help" || args[0] == "--help" || args[0] == "-?")))
            {
                Console.WriteLine("Usage: GetUserInput [prompt]");
                Console.WriteLine();
                Console.WriteLine("Purpose: Displays a graphical window with a prompt and a text area.");
                Console.WriteLine("         The user's input is printed to standard output upon clicking 'OK'.");
                Console.WriteLine("         Returns exit code 0 on success, and 1 if cancelled or closed.");
                return 0;
            }

            string prompt = string.Join(" ", args);

            Application.Init();

            var win = new InputWindow(prompt);
            win.ShowAll();

            Application.Run();
            return 0;
        }
    }

    public class InputWindow : Window
    {
        private readonly TextView _textView;
        private readonly Button _okButton;
        private readonly Button _cancelButton;

        public InputWindow(string prompt)
            : base("Junie – User Input")
        {
            DefaultWidth = 480;
            DefaultHeight = 260;
            WindowPosition = WindowPosition.Center;
            BorderWidth = 10;

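            // Closing the window counts as a cancel: exit with code 1.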
            DeleteEvent += (_, __) => Environment.Exit(1);

            var vbox = new VBox(false, 8);

            var label = new Label { Xalign = 0f, LineWrap = true, WrapMode = Pango.WrapMode.WordChar };
            label.Text = prompt;
            vbox.PackStart(label, false, false, 0);

            var scrolled = new ScrolledWindow();
            _textView = new TextView { WrapMode = WrapMode.WordChar };
            scrolled.Add(_textView);
            vbox.PackStart(scrolled, true, true, 0);

            var buttonBox = new HButtonBox { Layout = ButtonBoxStyle.End };
            _okButton = new Button("OK");
            _cancelButton = new Button("Cancel");

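            // OK prints the typed text to stdout and exits 0 so the caller can use it.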
            _okButton.Clicked += (_, __) =>
            {
                Console.WriteLine(_textView.Buffer.Text ?? "");
                Console.Out.Flush();
                Environment.Exit(0);
            };

            _cancelButton.Clicked += (_, __) => Environment.Exit(1);

            buttonBox.Add(_cancelButton);
            buttonBox.Add(_okButton);
            vbox.PackStart(buttonBox, false, false, 0);

            Add(vbox);
            ShowAll();
        }
    }
}

Build Instructions

dotnet restore
dotnet build -c Release

Optional: Publish as a single-file binary

dotnet publish -c Release -r linux-x64 \
  -p:PublishSingleFile=true \
  -p:IncludeAllContentForSelfExtract=true

You’ll find the binary under:

bin/Release/net8.0/linux-x64/publish/

Add that directory to your PATH — or drop the binary somewhere global.

Teaching Your Agent to Use It

I simply told Junie:

“You have a terminal command named GetUserInput. Run it whenever you need clarification, and use my answer to continue the workflow.”

And optionally:

“If the command exits with code 1, stop or fall back.”

That’s it.

Why This Matters

When an AI tool pauses and asks instead of guessing…
…the workflow becomes collaborative instead of brittle.

And that’s a direction I’m very excited about.

We’re not trying to replace agency — we’re trying to share it.

If you think of a better delivery prompt — or extend the idea — I’d love to hear about it in the comments. And if this inspires your own tools and agents, even better. That’s how living systems evolve. 😊

Tape as an Execution Substrate

A C# Demo

Most software execution models assume something quietly but firmly:

That you know what you’re doing before you start.

In practice, that’s rarely true.

Real systems discover correctness while running — through observation, correction, and iteration. Tape exists to support that reality.

What I Mean by Tape

Tape is not a framework and not a philosophy layer.

Tape is an execution substrate — a way of structuring work so that execution can:

  • proceed in sequence
  • preserve state across steps
  • adapt without restarting
  • and continue without collapsing the process

Tape assumes that execution is not finished when it begins.

Execution Without Collapse

In many systems, iteration is simulated by restarting:

  • rerunning jobs
  • reinitializing state
  • or replaying logic from the top

Tape takes a different approach.

It allows execution itself to:

  • pause
  • reflect on outcomes
  • alter future steps
  • and resume in-place

That’s not convenience — it’s structural stability.
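To make that concrete, here is a tiny sketch of the idea (illustrative only; this is not the Tape demo's actual API): steps live on a mutable tape, and a running step can splice new steps in ahead of the cursor, so execution adapts in place instead of restarting.

using System;
using System.Collections.Generic;

// Usage: a check step that, on a bad outcome, schedules a corrective step
// and lets execution continue from where it stands.
var tape = new Tape();
tape.Append(t => { Console.WriteLine("build"); return true; });
tape.Append(t =>
{
    bool ok = Environment.TickCount % 2 == 0;   // stand-in for a real check
    if (!ok) t.SpliceNext(_ => { Console.WriteLine("fix, then continue"); return true; });
    return true;
});
tape.Run();

class Tape
{
    private readonly List<Func<Tape, bool>> _steps = new();
    private int _cursor;

    public void Append(Func<Tape, bool> step) => _steps.Add(step);

    // A running step may insert work right after itself: the tape is mutable
    // while it executes, so plans can change without a restart.
    public void SpliceNext(Func<Tape, bool> step) => _steps.Insert(_cursor + 1, step);

    public void Run()
    {
        for (_cursor = 0; _cursor < _steps.Count; _cursor++)
            if (!_steps[_cursor](this))
                break;   // a step may halt without collapsing accumulated state
    }
}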

Why This Matters

When execution can’t tolerate correction, systems become brittle.

When execution expects correction, systems become resilient.

Tape is designed for:

  • long-running processes
  • agent-driven workflows
  • test-and-adjust loops
  • and any system where “done” is discovered, not declared

Why I’m Sharing Tape on Its Own

Tape belongs to a larger body of work — but execution substrates should be understood before the systems that depend on them.

This release is intentionally:

  • small
  • concrete
  • and self-contained

You don’t need context to see how it behaves.

About the Release

This is a snapshot, not a final product.

It’s stable enough to explore and modify, but intentionally minimal. Tape’s value is not in how much it does — but in what it allows.

Try It

You can download the Tape demo here:

👉 Download the Tape Demo Snapshot
(replace with your actual link)

Run it. Step through it. Change it.

Observe how execution behaves when continuity is preserved.

Closing Thought

Tape does not make systems smarter.

It makes them able to change their mind without breaking themselves.

That turns out to be a very powerful property.