The Beginning is Near

After five years of essentially non-stop development on the Mythryl programming language, I can finally say we’re approaching the possibility of a real paradigm shift that will change the entire way we think about programming.

I’m going to get a bit technical here; this is not a publicity release, it’s meant for the bitheads in our audience. Here we go.

We’re moving away from a pure control-flow-based programming paradigm to a mixed paradigm in which we think in terms of control flow (“where is the program counter now, and where do I want it to be?”) at the low level (within an individual thread), but we start to think in terms of data flow at the high level: In essence each thread becomes a node in a Petri net which “fires” when (sufficient) input arrives on its IN edges, after which it thinks a bit and then fires out one or more results on its OUT edges.
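To make that concrete, here is a minimal sketch of one such node. It is written in Go rather than Mythryl — Go’s goroutines and channels happen to express the idea compactly, while Mythryl’s actual concurrency primitives differ — and all the names are illustrative, not anything from the Mythryl libraries:

```go
package main

import "fmt"

// One dataflow node: it blocks until a value is available on each
// of its IN edges, then "fires", computing and emitting a result
// on its OUT edge.
func node(in1, in2 <-chan int, out chan<- int) {
	for {
		a, ok1 := <-in1
		b, ok2 := <-in2
		if !ok1 || !ok2 { // an upstream edge has shut down
			close(out)
			return
		}
		out <- a + b // the "thinks a bit" step; here just addition
	}
}

func main() {
	in1 := make(chan int)
	in2 := make(chan int)
	out := make(chan int)
	go node(in1, in2, out)
	go func() {
		in1 <- 1
		in2 <- 2
		close(in1)
		close(in2)
	}()
	fmt.Println(<-out) // prints 3, once input has arrived on both IN edges
}
```

Note that nothing in main reasons about where the node’s program counter is; it only wires up edges and feeds them data.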

This is actually a very natural division of layers in multiple ways:

At the human-cognitive level, you may have noticed that when you ask a hacker how quicksort works you’ll usually get text pseudocode on the whiteboard (with maybe some datastructure pointers-and-arrows graphics) but when you ask how gcc works, you’ll usually get boxes and arrows with maybe some text inside the boxes and labelling the arrows.

What is happening here is that programming-in-the-small can be efficiently understood by humans in terms of

“The program counter jumps to this line, and then to that line, and …”,

but that programming-in-the-large can only be efficiently understood by humans in terms of data flow:

“The input text flows from the disk into the lexer,
emerging as a stream of tokens, which flows into the parser,
emerging as a stream of parsetrees, which flows into the typechecker,
emerging as a stream of (symboltable, intermediate code) pairs,
which flows into the static-single-assignment module,
emerging as a stream of SSA graphs, which flows through the optimizer modules,
finally flowing into the code generator module, from which it
emerges as object code files.”

At this level of description the “program counter” concept merely gets in the way.
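As a toy illustration of that stance, here is a two-stage pipeline in the same spirit — again sketched in Go for brevity, with made-up stage bodies standing in for real compiler phases. Each stage is its own small thread, and the large-scale structure of the program is nothing but the wiring of channels between stages:

```go
package main

import (
	"fmt"
	"strings"
)

// Each stage is a small, single-purpose thread; the program's
// large-scale structure is just the wiring of channels (edges)
// between stages (nodes).
func lexer(src string, tokens chan<- string) {
	for _, tok := range strings.Fields(src) { // toy "lexing": split on whitespace
		tokens <- tok
	}
	close(tokens) // signal end of the token stream
}

func parser(tokens <-chan string, trees chan<- string) {
	for tok := range tokens {
		trees <- "(" + tok + ")" // stand-in for real parsetree building
	}
	close(trees)
}

func main() {
	tokens := make(chan string)
	trees := make(chan string)
	go lexer("let x = 1", tokens)
	go parser(tokens, trees)
	for tree := range trees { // the downstream consumer
		fmt.Println(tree)
	}
}
```

Adding a typechecker or optimizer stage to this pipeline means adding one more node and one more edge, with no changes to the stages already wired in.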

Doing dataflow computation at the low level is very inefficient. I once tried to design a hardware-level dataflow machine, and wound up with immense respect for the von Neumann architecture, which solves many problems so efficiently and unobtrusively that you don’t really even realize that they exist until you try building an alternate design.

Conversely, doing high-level programming in terms of control flow is unnatural and difficult: One winds up thinking “Oh jeez, the program counter is in the code generator right now but now I need it in the C pre-processor — do I have to stick a jump in codegen just to make cpp wake up? Ick!”

Which is to say, thinking in terms of a single hardware program counter at high levels of program structure just tends to result in poor separation of concerns.

So while I expect that we will continue to think in terms of control flow and program counters at the intra-thread level for the foreseeable future, I expect that threads will mostly become pretty simple, small and single-purpose (as the unix “tools” philosophy would approve of), while most of the structure and complexity of our programs moves into a dataflow paradigm in which we think of threads as nodes in a graph, where the relevant consideration is the flow of data along the edges of the graph, being absorbed, transformed and then re-emitted by the thread nodes.

We will develop tools, terminologies and methodologies specific to this new dataflow layer of our software systems, and they will be pervasively visual in nature, driven by visual display of graphs and interactive editing and de/composition of those graphs. (Not that program text will go away, even at this level; command languages for doing systematic global edits on threadgraphs will continue to need the abstraction that symbolic language alone provides. But the subjective programming experience will change from primarily verbal to primarily visual.)

There has been much (justified!) discussion of Steve Jobs as visionary of late, but to me the real visionary was and remains Alan Kay. In 1970 Alan Kay looked at IBM mainframes and said, “Someday that compute power will fit in something the size and weight and resolution of a magazine, and we will have to program and use them, and IBM’s Job Control Language won’t be up to the job — how can we cope?”

In his “The Reactive Engine” thesis he in essence invented the iPad and the whole GUI paradigm we now take for granted, and object-oriented programming in the form of Smalltalk to make it all run.

A decade later, Steve Jobs visited Xerox PARC, saw the result of Alan Kay’s vision actually running, recognized that here was something “insanely great”, and went forth and preached it as gospel and introduced it to the millions. That was unquestionably an act of genius in its own right, but the fact remains that it was Alan Kay who conjured up this vision out of whole cloth — a leap of intellect probably unmatched before or since in computing, perhaps in all of human history.

Even today, you can read Alan Kay’s early papers and they still seem cutting-edge, apart from the gap between the technology available then and now. I recommend the exercise, in particular his thesis and the original Smalltalk-72 manual.

In particular — returning to the immediate subject at hand — Alan Kay at the time compared the subjective experience of programming in Fortran or Pascal to the experience of pushing blocks of wood around on a table: The data were passive, and stuff happened only to the extent that you explicitly programmed every detailed change. (About the same time, others were somewhat more picturesquely comparing such programming to “kicking a dead whale down the beach with your bare feet”.)

By contrast, Alan Kay compared the subjective experience of programming in Smalltalk to working with trained circus seals: They each know their tricks, and all you have to do is tell them when to perform.

I love the analogy — thirty years later it sticks vividly in my mind — and it provides a wonderful aspiration for where we would like to go with our programming languages, but I think Alan was giving in a bit to wishful thinking and hyperbole at the time. By the time wildman Smalltalk-72 evolved into sedate buttoned-down Smalltalk-80 and then decayed into Objective-C and C++ and Java, an “object” had fallen from being the truly autonomous agent that Alan so vividly envisioned to being nothing more than a C function invoked via runtime vtable lookup.

This wasn’t for lack of trying; with the hardware and software technologies available in the 1970s, there was simply no other way to make OOP viable for doing real-world stuff, and Alan Kay very much wanted to do practical stuff now, not merely sketch things for future generations to build.

But today we live ten years beyond the science-fiction-colored world of “2001: A Space Odyssey”, with literally gigabytes of RAM available on commodity machines — multitouch palmtops even! — together with multicore processors clocked at literal gigahertz and software technologies Alan Kay could only dream of. Today we are in a position to actually realize Alan Kay’s fevered 1970 visions of possible computing futures.

That is what I see happening with Mythryl multithread programming: We are entering a world in which the elementary unit of computation is in fact the active agent that Alan Kay envisioned: An autonomous thread ready at all times to respond to asynchronous input by performing an arbitrary computation and generating asynchronous output in its turn.
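Here is one way that agent shape might look, again sketched in Go rather than Mythryl, with invented channel names standing in for whatever real inputs and outputs the agent serves:

```go
package main

import "fmt"

// An autonomous agent: a thread that sits ready at all times,
// reacting to whichever asynchronous input arrives first and
// emitting output asynchronously in its turn.
func agent(requests <-chan string, quit <-chan bool, replies chan<- string) {
	for {
		select {
		case req := <-requests:
			replies <- "handled: " + req // arbitrary computation goes here
		case <-quit:
			close(replies)
			return
		}
	}
}

func main() {
	requests := make(chan string)
	quit := make(chan bool)
	replies := make(chan string)
	go agent(requests, quit, replies)
	requests <- "ping"
	fmt.Println(<-replies) // prints "handled: ping"
	quit <- true
	for range replies { // wait for the agent to shut down cleanly
	}
}
```

The agent is not invoked like a subroutine; it is always alive, waiting on its IN edges, which is exactly the distinction between a circus seal and a block of wood.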

I believe this to be arguably the biggest paradigm change in the computing world since stored programs: Bigger than High Level Programming (compilers), bigger than Structured Programming (if-then-else), bigger than Object Oriented Programming, bigger than Graphical User Interfaces.

All of those were based on the same old control-flow programming paradigm of getting things done by pushing the program counter from here to there. In essence, today the mainstream computing world is still programming in the same sequence-of-instructions-controlled-by-GOTOs paradigm in which ENIAC was programmed back in the 1940s.

But with Mythryl multithread programming we now are entering a whole new world where programmers get things done by wiring up data-flow graphs, and we have the challenge of starting over from scratch to re-invent computing.

The lazy man might view this with dread — lots more work to do.

But the true adventurer will view this with delight — an entire new world to explore! This is the computing equivalent of Columbus discovering the New World. (Although we may hope that this Columbus will not deny its existence to his dying day.)

This is the sort of paradigm shift in which names are made, fields founded, and worlds changed. Personally, I’ve been working monomaniacally toward this moment for several decades; I’m almost quivering with anticipation.

– Cynbe
