Squishable Plane

Cynbe has been thinking about learning to fly r/c (radio control) model planes again. He started some years ago, but at that time he gave it up in favor of other, more pressing things on his plate. While we’re in Santa Cruz, things are feeling stable enough that he wants to try again — plus, being outside in the Cruz is glorious any time of year (though lots of folks born in Austin say they feel that way, too).

The hardest part of learning to fly r/c is the beginning, when it’s only too easy to crash and destroy your model plane. Cynbe is the methodical one in our marriage, and after carefully researching online, he came up with a model called the Great Planes U-Can-Do EP. This trainer seems to have been around since 2003; the last mention of it on a discussion board is in 2005, but it’s still being manufactured and is in stock at the usual places. Superficially it resembles most trainer foamies — except that, unlike any other plane we’ve seen, it is waaay flexible, even floppy…actually, um, squishy. Literally: this plane is made of such soft foam that you can bend it double. Here’s Cynbe demonstrating by bending a wingtip. Yes, it pops right back when he lets go.

Cynbe bends a wingtip of the U-Can-Do EP

That line down the middle of the cockpit area is a zipper. That’s right, the fuselage is so flexible that it unzips like a purse, and inside are the battery, receiver, and motor control. The wing is not flat; it has a normal airfoil cross-section. It is actually built up just like a balsa wing would be, with longitudinal spars and ribs and a skin, but they’re all made of super-flexible foam. You can also bend the fuselage double with no discernible ill effects. Here’s a better view of the squishy wing, from Johan Sundqvist’s page.

Johan Sundqvist’s U-Can-Do EP, majorly bent but totally unharmed

I visited various forums and found some folks who hated this plane. Their complaints ranged from tirades about the control horns breaking at the slightest shock, to the motor being hopelessly underpowered, to the fuselage ripping apart during normal maneuvers. We’ve adopted Beginner’s Mind for this project; I want to know what’s wrong with the design and what we can do to make it better, because if it does what Great Planes claims it does, it should be a big help indeed for the rank amateurs among us who are willing to learn by doing. So far, we’ve assembled the airframe; in the next few days we’ll install the electronics. Then we’ll see what this puppy can actually do.

July 16, 1945: Denial

Cynbe sez:

Over the last few years I’ve been slowly growing a thesis that the most important truths in a given society are those which are so important that not only does nobody do or say or think about them, but nobody is even aware of them — the truths which are buried so deep beneath layers of denial as to bear comparison with something like Jungian racial memories.

The most important date in human history by far was July 16, 1945. There isn’t even any also-ran to compare with it; that date stands alone like Olympus Mons on an infinite plain. Yet how many people today can even identify that date?

The Trinity plutonium implosion bomb tower. Image: nnsa.energy.gov

Human survival for the last ten million years has rested solely on one weak reed: That humanity’s best efforts to exterminate itself and to destroy the resources upon which it depends for survival have always been defeated by sheer inability to do the job — that humanity always survived in the end thanks to the Angel of Impotence swooping in at the last moment to save it.

On July 16, 1945 that weak reed broke.

The bright light of the first artificial dawn illuminated a stark choice: Either humanity eradicated war, or war eradicated humanity.

Without a vote, without any discussion, without any announcement, humanity with a single vast shared understanding agreed that war was just too jolly to give up.

On that day, humanity drove off a cliff at top speed, waving cheerily, and everything since then has been just waiting for the impact at the bottom of the cliff.

While pulling layer after layer after layer of denial over that decision and that consequence. Whee! Are we having fun yet?

The real significance of nuclear power, which nobody dare ever think about, much less mention, is that nuclear weapons and nuclear power plants together make the perfect military storm. They go together like love and marriage, like a horse and carriage, like the two components of a binary nerve gas.

The problem with nuclear weapons is that they have just a few pounds of radioactives. As Hiroshima and Nagasaki demonstrated, you get a little bang, everything and everyone in a hundred square miles or so gets vaporized, and then you rebuild. No muss, no fuss. Not much different from Xerxes sacking Athens in 480 BC.

Conversely, the problem with nuclear power plants is that they just don’t have the peak power output to effectively spread their radioactives. As Chernobyl demonstrated, the core just melts down into a puddle, a few kilograms of radioactives escape, you put a fence around a few thousand square miles of land for a few centuries, and life goes on. No muss, no fuss. We destroy more land than that every year via abusive agriculture and desertification. Yawn.

But a nuclear weapon targeted on a nuclear power plant — sublime beauty!

A nuclear power plant together with its associated fuel dumps can contain thousands of tons of radioactives — and one little teeny A-bomb, the kind so small and harmless that the SALT treaties don’t even bother mentioning them, much less counting them — one negligible backpack or artillery-shell nuke is quite sufficient to vaporize those thousands of tons of radioactives, allowing them to settle out downwind for tens of thousands of miles and get efficiently incorporated into the food chain, thereafter to concentrate steadily until reaching top predators like tuna, eagles — and humans.

Scientific American once ran an article showing that one nuke on one midwestern nuclear powerplant would be sufficient — given normal prevailing winds — to take out essentially the entire East Coast of North America. (Try that with a coal plant!)

But we’re not going to be talking about just one nuclear power plant entering the stratosphere via mushroom cloud. When the killing frenzy hits, the first thing humans do is blow up every factory and power plant in sight. One of the first things the Americans did in the Korean War was to blow up the biggest dam in the country, with loss of civilian life fully comparable to Dresden, Hiroshima or Nagasaki. The US has over one hundred civilian nuclear targets (um, “reactors”) sitting waiting to play their final roles. The fallout plumes from all of them combined would color the Northern Hemisphere dead black.

(Then comes nuclear winter — ten years of no crops on a planet with a two-month supply of food. What fun! There’s nothing quite like freezing to death while starving for lack of radioactive food. Looking on the bright side, very few people will live long enough to come down with cancers — count your blessings! But I digress.)

Nuclear Winter. Image: Sebastian Knight/Stockphoto.com.

Beyond that, there are dozens to hundreds of reactors specifically designated to be blown up in case of war: No major modern warship is complete without a nuclear reactor, and warships exist specifically to blow each other up — that is their exact design purpose. So blowing up nuclear reactors is firmly established as a central focus of the next developed-nations war.

By common consent, the nuclear-powered aircraft carriers will be the first to go; they are sitting ducks for modern torpedoes and missiles. Modern submariners insist that there are only two kinds of warships: submarines and targets.

To what shall we equate one destroyed nuclear aircraft carrier? A hundred Chernobyls? A thousand? A million? After all, Chernobyl was on land and the molten core buried itself in the local bedrock, after which a heroic cast of thousands assembled a concrete sarcophagus. A nuclear aircraft carrier blown to bits will be dumping its reactor core into the ocean, not into rock, and nobody is going to be building any kind of containment structure around the wreck in the middle of a war as it sits glowing and boiling satanically on the seabed.

Once the aircraft carriers have been eliminated, the real fun starts: the patient, silent game of nuclear-powered hunter-killer attack submarines stalking and killing other nuclear-powered submarines — both attack and ballistic-missile.

The aircraft carriers will likely get taken out by conventional weapons, since they will go at the start of the war, when tempers are cool and everyone is still having a good time and respecting the Laws of War and such. But common sense suggests, and war games conclude, that as the war progresses and at least one side starts feeling desperate, the gloves come off and the nuclear weapons come out of the box; it is a good guess that the last few nuclear submarines taken out will be vaporized by tactical nuclear weapons, not merely blown up by conventional explosives. Their tons of radioactives, including any hoarded MIRVed ICBMs, will thus be injected into the stratosphere for all to enjoy within days, rather than spread over the seabed to enter the food chain over months, decades and centuries.

(Archaeologists already use the global radioactives layer from the 1960s as a standard dating horizon; any intelligences active on Earth millennia from now will find the World War III horizon an even better reference.)

(By the way, wars end when one power is no longer able to participate — when the war machine crumbles. In the past, the war machine was built of humans, and as a rule of thumb became inoperative when casualties reached about ten percent. For example, in World War II German resistance collapsed after losing about six million from a population of about seventy million [1]. With robotics, however, we may soon reach a point at which it is possible for a country to continue warring even after the loss of essentially the entire population, exclusive of military leaders in bunkers. That would add an extra piquancy to war missing in previous eras — losing countries would be extirpated as completely as Sybaris. Should some scattered humans survive WW III, perhaps “America” will replace “Sybaris” as the word for glory and luxury lost without trace.)

What’s that you say? Nukes have made nuclear war unthinkable?

Funny you should say that. Alfred Nobel said the same thing about dynamite: He felt no guilt about selling high-explosive based weapons all over the world because they made war too terrible for any sane leader to contemplate.

Maybe so, but the Great War arrived on schedule all the same, and then had to be renamed World War I to make room for the sequel.

Lewis Richardson demonstrated in a classic 1948 paper [2] that interstate conflicts obey a power law: for each tenfold increase in fatalities, the number of wars decreases by a factor of about 2.5, along a nice smooth curve. (Nobody has a good theory explaining this.)
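(My gloss, not Richardson’s notation: if N(f) is the number of wars with roughly f fatalities, a factor-of-2.5 drop per factor-of-10 rise works out to

    N(f) ∝ f^(−log₁₀ 2.5) ≈ f^(−0.4)

— one straight line of slope −0.4 on log-log paper, with no kink anywhere between barroom shootouts and world wars.)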

What does this tell us?

It tells us that local wars and big wars and world wars are not different phenomena. They are all the same thing. A world war is just a local war that got out of hand. Sometimes you shoot an Austrian Archduke and nothing happens. Sometimes you shoot an Austrian Archduke and you get a pleasant little Balkan war where only expendable Serbs and Croats and such die while everyone else tut-tuts. And sometimes you get a world war where people you know and love die. By the billion. You just never know.

You say our survival for half a century on a planet groaning under uncounted (literally) tens of thousands of nuclear weapons demonstrates that we would never actually use them?

Actually, the history of the era demonstrates quite the opposite: that we’ve survived to date through sheer dumb luck. If Khrushchev had been as belligerent as JFK, you’d be dead now. On well over a dozen known occasions (and how many unknown ones?) the world teetered on the very brink of accidentally initiated thermonuclear war [3] — just about everyone alive today owes their lives to various unsung heroes who at the crucial moment disobeyed orders and prevented armageddon. In a rational world they’d be world celebrities with statues in front of the UN. In a world order focused on denial, they are persecuted nonentities — traitors who dared derail destiny. (Can you name a single one of them?)

You say American early warning systems are now far too comprehensive to allow accidental nuclear war? You don’t get it. Recent studies show roughly 200 weapons are sufficient to trigger nuclear winter. Russia and the United States may wind up not being involved in WW III at all. North Korea and China could do it themselves without American involvement. Or France and South Africa. Or Israel and Iran. Or India and Pakistan. Are you sure Pakistan’s nuclear early warning system will work with perfection throughout a civil war? And those of every other nuclear power? Forever? As steadily more countries join the nuclear club each decade, having seen both Iraq and Libya obliterated for the crime of failing to have a credible nuclear deterrent?

Humanity faces only one significant problem today.

Yet nobody you know even recognizes July 16, 1945.

Have a nice day.

– Cynbe

Notes
=====

[1] http://en.wikipedia.org/wiki/World_War_II_casualties

[2] Richardson, Lewis F. 1948.
Variation of the frequency of fatal quarrels with magnitude.
Journal of the American Statistical Association 43 (244): 523–46.

[3] http://www.neatorama.com/2007/02/20/close-calls-in-the-nuclear-age/

http://peacemagazine.org/archive/v13n1p20.htm

http://www.globalissues.org/issue/67/nuclear-weapons

http://www.brightstarsound.com/world_hero/insight.html

http://www.pbs.org/wgbh/nova/missileers/falsealarms.html

http://www.peace.ca/20mishaps.htm

Model Aircraft Videos

We’ve wanted to fly a video camera for a long time, but we settled for watching our friend Knut’s excellent video work. Then we were in Fry’s the other day on a general parts run and saw what looked like a great camera for initial experiments: the Midland XTC-100. It’s tiny, light, relatively inexpensive, and we felt we could afford to lose it in a crash, if that’s what was in the cards. This vid is our first attempt, using a Parkmaster 3D to loft the camera, which is simply duct-taped to the fuselage, looking downward between the landing gear. The flying site is Lighthouse Field, Santa Cruz, California. Sorry about the unsteadiness; apparently the camera’s added weight and drag, small as they are, make a noticeable difference in the Parkmaster’s flying characteristics. Those simple-looking foamies must be tuned to a fare-thee-well.

The Midland camera is designed to be a helmet cam; it has a 140 degree field of view, comes with a variety of mounts, and runs on two AAA batteries. I’m lashing up a one-chip 3 volt regulator so I can eliminate the batteries (and their weight) and run the camera from the plane’s LiPo battery. The weight of two AAA batteries isn’t much, but, as I’ve discovered, in the context of a foamie like the Parkmaster it’s considerable.

As the plane flies out over the Monterey Bay National Marine Sanctuary, which includes all of the coastline of Santa Cruz, you’ll see an extensive kelp forest just offshore, a signature of this stretch of coast. Kelp forests provide shelter and breeding grounds for fish, as well as a cafeteria for sea otters, who love to browse on the abundant marine life and then roll up in the kelp for a nap; the kelp keeps them from drifting while they snooze. Also visible in the water flyover is Steamer Lane, one of Santa Cruz’s well-known surfing sites. The kelp offers protection for surfers, too, since great white sharks — a rare but not unknown danger along these shores — won’t enter kelp forests.

Flying the Modded XTC-100 Video Camera

I finished the modded XTC-100 this evening, having lopped off the case and batteries and reconfigured the optical block and circuit board. By virtue of the 3-volt regulator, it now gets its DC power by plugging into any unused socket on the receiver. I expected noise on the power bus to be visible in the image, but clearly things are different in the XXIst Century, what with brushless motors and suchlike futuristic tech.

Cynbe thought the most stable platform for making videos would be the Radian Pro, which he loves to fly anyway. We actually tried the unmodified XTC-100 on the Radian and got some decent video, but the graceful Radian was clearly wallowing under the load. Also, there was no good way to mount the camera on the glider’s belly without some serious hacking. So I wound up taping it to the top of the fuselage facing straight forward, which produced videos of less than satisfactory quality. We quickly abandoned that approach.

The modded XTC-100, total weight less than 1 ounce. The plastic case is made from the box the camera comes in. The 3-volt regulator, near the top right of the image, is wrapped in blue masking tape. Since the total current draw is roughly 2.5 mA, the regulator dissipates negligible heat.
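(A sanity check on that heat claim, assuming a plain linear regulator and a 3-cell LiPo at a nominal 11.1 V — the pack voltage is my assumption, not stated above:

    P = (Vin − Vout) × I ≈ (11.1 V − 3 V) × 2.5 mA ≈ 20 mW

— a completely harmless trickle, even in a package that tiny.)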

And so, on to better things. The completed modded camera weighs less than an ounce. Taped to the wing close to the fuselage, this isn’t even enough weight to disturb the lateral balance. Making videos with extremely light cameras on large, stable foamies like the Radian may be our personal solution to a whole bunch of technical problems in other, larger and more complex projects — prebuilt foam model aircraft make simple, inexpensive development platforms.

Cynbe with the Radian Pro, showing the modded XTC-100 taped to the wing. Duct-taping a camera to a wing is uglissimo, but a great way to experiment since it peels off easily.

If you need more detailed comments on cracking the XTC-100 camera and modding its PCB, drop me a line. Next: The XTC-200, which is HD. (I thought it prudent to destroy a cheaper camera first, to get the bugs ironed out.)

And incidentally: Naio at the SCCMA flying field in Morgan Hill. The first time he’s seen Seriously Big Aircraft.

The Beginning is Near

After five years of essentially non-stop development on the Mythryl programming language, I can finally say we’re approaching the possibility of a real paradigm shift that will change the entire way we think about programming.

I’m going to get a bit technical here; this is not a publicity release, it’s meant for the bitheads in our audience. Here we go.

We’re moving away from a pure control-flow based programming paradigm to a mixed paradigm in which we think in terms of control flow (“where is the program counter now, and where do I want it to be?”) at the low level (within an individual thread), but we start to think in terms of data flow at the high level: In essence each thread becomes a node in a Petri-net-like graph which “fires” when (sufficient) input arrives on its IN edges, after which it thinks a bit and then fires out one or more results on its OUT edges.
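To make that concrete, here is a minimal sketch of such a node in OCaml’s CML-style Event library — standing in for Mythryl, whose syntax differs, but the idea is identical: a node is just a thread that blocks until input arrives on its IN channel, computes, and fires the result out its OUT channel.

    (* A dataflow node: loops forever, waiting on in_ch, firing on out_ch.     *)
    (* Build with: ocamlfind ocamlopt -package threads.posix -linkpkg node.ml  *)
    let node (f : 'a -> 'b)
             (in_ch  : 'a Event.channel)
             (out_ch : 'b Event.channel) : Thread.t =
      Thread.create
        (fun () ->
           while true do
             let x = Event.sync (Event.receive in_ch) in   (* wait to "fire" *)
             Event.sync (Event.send out_ch (f x))          (* emit result    *)
           done)
        ()

Wiring two such nodes together is just a matter of sharing a channel: the OUT edge of one node is the IN edge of the next.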

This is actually a very natural division of layers in multiple ways:

At the human-cognitive level, you may have noticed that when you ask a hacker how quicksort works you’ll usually get text pseudocode on the whiteboard (with maybe some datastructure pointers-and-arrows graphics), but when you ask how gcc works, you’ll usually get boxes and arrows, with maybe some text inside the boxes and labeling the arrows.

What is happening here is that programming-in-the-small can be efficiently humanly understood in terms of

“The program counter jumps to this line, and then to that line, and …”,

but that programming-in-the-large can only be efficiently humanly understood in terms of data flow:

“The input text flows from the disk into the lexer,
emerging as a stream of tokens, which flows into the parser,
emerging as a stream of parsetrees, which flows into the typechecker,
emerging as a stream of (symboltable, intermediate code) pairs,
which flows into the static-single-assignment module,
emerging as a stream of SSA graphs, which flows through the optimizer modules,
finally flowing into the code generator module, from which it
emerges as object code files.”

At this level of description the “program counter” concept merely gets in the way.
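Continuing the OCaml sketch from above — the stage functions here (lex, parse, typecheck) are hypothetical placeholders, not entry points of any real compiler — the high-level program is literally just the wiring:

    (* The pipeline as a dataflow graph: three nodes, four channels. *)
    let compiler lex parse typecheck =
      let chars  = Event.new_channel () in
      let tokens = Event.new_channel () in
      let trees  = Event.new_channel () in
      let typed  = Event.new_channel () in
      ignore (node lex       chars  tokens);
      ignore (node parse     tokens trees);
      ignore (node typecheck trees  typed);
      (chars, typed)   (* feed source in one end, read results out the other *)

No program counter appears anywhere in that description — which is exactly the point.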

Doing dataflow computation at the low level is very inefficient. I once tried to design a hardware-level dataflow machine, and wound up with immense respect for the von Neumann architecture, which solves many problems so efficiently and unobtrusively that you don’t really even realize that they exist until you try building an alternate design.

Conversely, doing high-level programming in terms of control flow is unnatural and difficult: One winds up thinking “Oh jeez, the program counter is in the code generator right now but now I need it in the C pre-processor — do I have to stick a jump in codegen just to make cpp wake up? Ick!”

Which is to say, thinking in terms of a single hardware program counter at high levels of program structure just tends to result in poor separation of concerns.

So while I expect that we will continue to think in terms of control flow and program counters at the intra-thread level for the foreseeable future, I expect that threads will mostly become pretty simple, small and single-purpose (as the unix “tools” philosophy would approve), while most of the structure and complexity of our programs moves into a dataflow paradigm in which we think of threads as nodes in a graph, where the relevant consideration is the flow of data along the edges of the graph, being absorbed, transformed and then re-emitted by the thread nodes on the graph.

We will develop tools, terminologies and methodologies specific to this new dataflow layer of our software systems, and they will be pervasively visual in nature, driven by visual display of graphs and interactive editing and de/composition of those graphs. (Not that program text will go away, even at this level; command languages for doing systematic global edits on threadgraphs will continue to need the abstraction that symbolic language alone provides. But the subjective programming experience will change from primarily verbal to primarily visual.)

There has been much (justified!) discussion of Steve Jobs as visionary of late, but to me the real visionary was and remains Alan Kay. In 1970 Alan Kay looked at IBM mainframes and said, “Someday that compute power will fit in something the size and weight and resolution of a magazine, and we will have to program and use them, and IBM’s Job Control Language won’t be up to the job — how can we cope?”

In his thesis, “The Reactive Engine”, he in essence invented the iPad and the whole GUI paradigm we now take for granted, and object-oriented programming in the form of Smalltalk to make it all run.

A decade later, Steve Jobs visited Xerox PARC and saw the result of Alan Kay’s vision actually running and recognized that here was something “insanely great”, and went forth and preached it as gospel and introduced it to the millions, and that was unquestionably an act of genius in its own right, but the fact remains that it was Alan Kay who conjured up this vision out of whole cloth — a leap of intellect probably unmatched before or since in computing, perhaps not in human history.

Even today, you can read Alan Kay’s early papers and they seem cutting edge except for the actual technology available then and now. I recommend the exercise, in particular his thesis and the original Smalltalk-72 manual.

In particular — returning to the immediate subject at hand — Alan Kay at the time compared the subjective experience of programming in Fortran or Pascal to the experience of pushing blocks of wood around on a table: The data were passive, and stuff happened only to the extent that you explicitly programmed every detailed change. (About the same time, others were somewhat more picturesquely comparing such programming to “kicking a dead whale down the beach with your bare feet”.)

By contrast, Alan Kay compared the subjective experience of programming in Smalltalk to working with trained circus seals: They each know their tricks, and all you have to do is tell them when to do them.

I love the analogy — thirty years later it sticks vividly in my mind — and it provides a wonderful aspiration for where we would like to go with our programming languages, but I think Alan was giving in a bit to wishful thinking and hyperbole at the time. By the time wildman Smalltalk-72 evolved into sedate buttoned-down Smalltalk-80 and then decayed into Objective C and C++ and Java, an “object” had fallen from being the truly autonomous agent that Alan so vividly envisioned to being nothing more than a C function invoked via runtime vtable lookup.

This wasn’t for lack of trying; with the hardware and software technologies available in the 1970s, there was simply no other way to make OOP viable for doing real-world stuff, and Alan Kay very much wanted to do practical stuff now, not merely sketch things for future generations to build.

But today we live ten years beyond the science-fiction-colored world of “2001: A Space Odyssey”, with literally gigabytes of RAM available on commodity machines — multitouch palmtops even! — together with multicore processors clocked literally at gigahertz, together with software technologies Alan Kay could only dream of, and today we are in a position to actually realize Alan Kay’s fevered 1970 visions of possible computing futures.

That is what I see happening with Mythryl multithread programming: We are entering a world in which the elementary unit of computation is in fact the active agent that Alan Kay envisioned: An autonomous thread ready at all times to respond to asynchronous input by performing an arbitrary computation and generating asynchronous output in its turn.

I believe this to be arguably the biggest paradigm change in the computing world since stored programs: Bigger than High Level Programming (compilers), bigger than Structured Programming (if-then-else), bigger than Object Oriented Programming, bigger than Graphical User Interfaces.

All of those were based on the same old control-flow programming paradigm of getting things done by pushing the program counter from here to there. In essence, the mainstream computing world today is still programming in the same sequence-of-instructions-controlled-by-GOTOs paradigm in which ENIAC was programmed back in the 1940s.

But with Mythryl multithread programming we now are entering a whole new world where programmers get things done by wiring up data-flow graphs, and we have the challenge of starting over from scratch to re-invent computing.

The lazy man might view this with dread — lots more work to do.

But the true adventurer will view this with delight — an entire new world to explore! This is the computing equivalent of Columbus discovering the New World. (Although we may hope that this Columbus will not deny its existence to his dying day.)

This is the sort of paradigm shift in which names are made, fields founded, and worlds changed. Personally, I’ve been working monomaniacally toward this moment for several decades; I’m almost quivering with anticipation.

– Cynbe

Bandwidth via Parallelism, or Composability via Concurrency?

(In an interesting article, Paul Graham remarked: “It would be great if a startup could give us something of the old Moore’s Law back, by writing software that could make a large number of CPUs look to the developer like one very fast CPU. There are several ways to approach this problem. The most ambitious is to try to do it automatically: to write a compiler that will parallelize our code for us. There’s a name for this compiler, the sufficiently smart compiler, and it is a byword for impossibility. But is it really impossible? Is there no configuration of the bits in memory of a present day computer that is this compiler?”)

Cynbe replied in an email that, AFAIK, never got delivered. To fill out the discussion, here it is:

Mostly spot-on, as usual. I do have one comment:

You are close to the mark here, but not quite on it. (I speak as
someone who has been hoeing this garden for several decades, and
hoeing this row for the last five years: I’ve been productizing
the SML/NJ compiler — the last great thing out of Bell Labs before
it Lucented and tanked — under the rubric “Mythryl”.)

The real issue is more about composability via concurrency
than about bandwidth via parallelism.

The latter is not unimportant by any means, but at the end of the
day the overwhelming majority of software will run fine on one modern
CPU (as far as bandwidth goes) and the majority of the remainder will
do so if a handful of hotspots are parallelized by hand at no particularly
great proportional coding effort. There are a few application classes
where CPU is indeed a scarce resource, but they are almost without
exception “embarrassingly parallel”. This is something which I think
none of us really expected back in the late 1970s and early 1980s when
parallel computing started receiving serious academic attention.

But the really critical issues are software security, reliability, maintainability,
and development costs in time-to-ship terms — the ones that the
regular doubling of *software* system size is exposing as the critical
Achilles heel of the industry.

Mostly-functional programming a la Ocaml, SML and (ahem) Mythryl is
really the only light on the horizon here. I can say this with some
confidence because it takes 15+ years for anything to make it from
the lab to industry, and if you look around there just isn’t anything
else out there ready to make the jump.

Mostly-functional programming is pre-adapted to the multicore/manycore
world because it has only about 1% as many side-effects as conventional
imperative code, which translates directly into only 1% as many race
conditions, hung-lock bugs etc etc: To the extent that the need to
transparently use multiple cores actually does drive the industry as
you project, it will drive the industry to mostly-functional
programming. (Pure-functional? A bridge too far. But let’s not open
that discussion.)

Cheng’s 2001 CMU thesis “Scalable realtime parallel garbage collection
for symmetric multiprocessors” — http://mythryl.org/pub/pml/ — shows
how to make this sort of computation transparently scalable to the
multicore/manycore world, and John H Reppy’s recent Manticore work
(http://manticore.cs.uchicago.edu/) shows how to put clean lockless
concurrent programming on top of that to yield just the sort of
automatically-parallel coding system you are thinking of.

But the real significance of all this is not the way it will allow
a few large apps to scale transparently to the upcoming 128-core (etc)
desktop chips, but the way it will let vanilla apps scale to ever-doubling
lines-of-code scales without becoming completely incomprehensible and
unmaintainable.

You have probably noticed that if you ask a hacker how quicksort works
you get back pseudocode on the whiteboard, but if you ask how the
linux kernel works you get boxes and arrows.

This is not an accident; it is because the imperative push-the-program-
counter-around-with-a-stick model of computing simply doesn’t scale.

What does scale is concurrent programming based on coarse-grain
dataflows-between-modules. That is what the boxes-and-arrows on
the whiteboard tell you, and that is what a scalable software
paradigm has to implement.

Reppy’s Manticore CML stuff on top of Cheng’s scalable-realtime-gc
on top of basic mostly-functional programming gives one exactly
that. (Give or take a few obvious fixes like substituting message-queues
for synchronous message-slots.)
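
For the curious, that message-queue fix is a few lines in any threaded
language. Here is an illustrative sketch in OCaml thread primitives —
the names are mine, not Manticore’s. A synchronous slot makes the
sender rendezvous with the receiver; a mailbox lets the sender fire
and forget:

    (* An unbounded mailbox: asynchronous send, blocking receive. *)
    type 'a mailbox = {
      q        : 'a Queue.t;
      lock     : Mutex.t;
      nonempty : Condition.t;
    }

    let mailbox () = { q        = Queue.create ();
                       lock     = Mutex.create ();
                       nonempty = Condition.create () }

    let post mb x =                    (* send: never blocks *)
      Mutex.lock mb.lock;
      Queue.push x mb.q;
      Condition.signal mb.nonempty;
      Mutex.unlock mb.lock

    let take mb =                      (* receive: blocks until nonempty *)
      Mutex.lock mb.lock;
      while Queue.is_empty mb.q do
        Condition.wait mb.nonempty mb.lock
      done;
      let x = Queue.pop mb.q in
      Mutex.unlock mb.lock;
      x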

That gives it a very real opportunity to move the industry a quantum
leap beyond the 1970±5-era software technology represented by C++/Java.
And someone a very real opportunity to make a mint, which was your
original point in the post.

Which is why I’ve spent the last five years on that. 🙂

Anyhow, just my $0.02.

Life is Good!

— Cynbe

The Book That Nearly Killed Me

OKAY, I said, taking a deep breath: after thirty-two years, it’s about time. Hell, it’s way past time. Maybe.

The short version is this: Long ago, in a parallel universe that for lack of a better term we’ll call radical lesbian separatism, there lived five young women. These women shared a collective vision, a vision of parthenogenetics — women giving birth to women without sperm, forever — in other words, a world in which men, as they understood the concept, didn’t exist.

Bear with me.

Three of the women began to develop a series of practices designed to facilitate the physiological part of their collective dream. The other two set off on a related quest which they felt was a necessary concomitant. This quest had to do with a greater, overarching dream. It was based on the idea that “men” and “women” were not facets of a race composed of two “sexes”, but were in fact two separate species, who had happened to be thrown together at some time in the distant past and found it possible to breed with each other. The two races had originally had no customs in common, but through various exigencies, one had conquered and enslaved the other. As is frequent in such situations, the one had forced the other to learn their language and erased all evidence of the original one. It was believed that because the slaves had lost all memory of their original language, they had also lost their sense of self, power, and destiny.

So the two young women set out to recover the lost language: the lost, originary language of Women.

And they did.

Are you still with me, you? Yes, I know, sounds like a tired, superannuated scifi plot. And, shit, I wish it were just that, because then the whole thing — passionate, joyous, heartbreaking — could simply fade away. The waters would close over it, and life would just go on. But it hasn’t. It won’t. No matter what I do, the damn thing won’t let go of its hold, precisely because it wasn’t fiction. We lived it. All of it.

As my mother put it so many times: Why couldn’t I have just become a doctor?

Along about 1980, when I couldn’t sleep for all the dreaming, I gathered my journals from the preceding years and wrote them out in the form of a novel. Anyone who writes fiction can tell you that, as a writer, that’s just about the worst thing you can possibly do. Unless you happen to be Marcel Proust or have super powers, you lose all objectivity and control over the material. But I desperately needed to get the accursed thing out of my system, so I did it anyway — except that, in my case, instead of throwing the fucking manuscript in a bottom drawer and sealing it up forever, I made the fatal mistake of stuffing it into an envelope and sending it to — wait for it — DAW Books.

I sent it with no representation, no agent, no credentials, right over the transom. I knew it would end up in the slush pile. And I figured that that would be the end of it. But, oh, no. No such luck. A few weeks later, my phone rings. It is some ungodly hour of the morning. I wake from a deep and troubled slumber, knock the phone off the bed table, fumble around on the floor, and finally get it to my ear. At the other end of the line I hear Betsy Wollheim, DAW’s Editor-In-Chief, say, “I want you to know we think your book is powerful and gripping and important, and when we publish it we’re going to make it a leader.” Dimly I realize what’s going on: it’s nine o’clock in New York, which is six a.m. in my Scotts Valley bedroom, which is why it’s pitch black outside and why there’s a slight aura of unreality to the whole thing. But the conversation goes on for about half an hour, and at the end of it I am left with no doubt that Betsy is perfectly serious.

Wow! Yay! What every young, aspiring writer wants to hear about their first novel! Right?

Wrong. Because, you see, now The Madness sets in, and I discover that I am in Hell. Because what Don Wollheim (who was alive at the time) and Betsy want now — and you knew this was coming, didn’t you? — is the rewrite. And it’s then that I really begin to understand what it means to not be able to sleep for all the dreaming.

Because I can’t rewrite the book.

God knows I tried. I tried so fucking hard to rewrite that book. But I could never manage to tear myself away from the fact that it was a journal. The love, the passion, the sense of limitless possibility and high adventure that I’d shared with these vibrant, engaged friends and companions and lovers would not let go of me long enough for me to take even a single step back, examine the thing as a work of fiction — even with Betsy yelling at me “Even if it’s true, it’s fiction” — and do what any writer needs to do in order to coax fact into fiction and fiction into publishable shape. I simply couldn’t do it. I failed miserably. I took the thing back. I gave up. Finito.

Years passed. Decades. I published loads of other stuff. Some of it was good, I think. Now it’s 2012. Life went on. I’m older. Most of the Five are dead; one is missing and presumed dead. Unless you consider programming, a new language did not change the world. And, as seems somehow fitting, I’m still left with the weight of that exalting, humiliating tale. An originary, empowering language; remembering the future as an act of liberation: many other writers who are far better than I am have tried to tell that story. Maybe now I can tell my version. Maybe now, in the age of self-publishing, the best thing I can do with the accursed manuscript is simply to put it on line. Maybe if I know it’s out there, even if no one reads it, it may help to exorcise the ghosts of those tumultuous years.

So.

Just a few words about the work. Firstly, please forgive the love scenes. That’s what sex was really like for a twenty-something Trans geek — embarrassingly naive, but very much of its time. Secondly, as with any journal, Ktahmet/Remember has no narrative arc, in the customary sense. (On the other hand, it clearly has something rather like a narrative arc, or I doubt that Don would’ve bought it.) Since it’s episodic, and mainly concerned with the interactions of the characters, it doesn’t really matter in what order I make the chapters available. So let’s start in the middle, and see what happens. I’ll try to post another one every few days.

Sandy’s Rules for Media Interviews

There’s never been a shortage of hate speech about which, from time to time, various trans persons who seem worth quoting are asked for soundbites.  Now there appears to be another book making the rounds, only this time we can add to the mix cadres of radical trans activists who are fighting back — noisily and insistently.

I’m asked to comment on that sort of thing, and you may be, too; and please notice that it’s easier than ever to be taken out of context, and even bent a bit to favor the agendas of whatever media outlet is doing the story.  So, with the looming teapot of a new radical feminist attack on trans issues in mind, let’s try to keep the tempest as calm, reasonable and orderly as possible.

1.  Even those reporters with the best intentions will, almost reflexively, go for the prurient quote.  “When did you first realize you were…” is usually first.  “When did you have (blockers, hormones, surgery, etc., etc.)?” is usually next.  Or it may alternate with “When did you (transition, de-transition)?”, or, in the worst case, “Do you have (a vagina, a penis, breasts, no breasts, muscles, scales, fins, long green sucker-covered tendrils)?”  As people like Janet Mock, who have much more visible speaking positions, are pointing out, when you answer those questions you are merely pandering to the audience’s prurient interests.  It contributes nothing of value to the discussion and distracts from the real and urgent issues, which are about raising the awareness of the general public regarding trans issues — in particular, showing that trans people are just people — and working toward freedom from oppression, from stereotyping, from bigotry and hate.

2.  Except in the very rare instance that your interviewer is writing a long-form piece, you’re going to be soundbited, so try to think ahead and get everything you’d like to say into your very first sentence.

3.  Except in the very rare instance that the article happens to be about you, what you say will probably be reduced to a few words among many sentences belonging to others, so make every single word count.

4.  Think about how your words can be taken out of context, and do your best to head it off.  Some years back I was asked by a TV reporter when I “first felt that I was a woman”.  I replied that in my specific case, I felt there was something odd going on at around the age of five and that that was more or less classic, but it was important to keep in mind that there are huge variations in individual persons’ senses of themselves and the way they articulate that.  The soundbite they ran had me say, in total:  “I felt I was a woman at the age of five.”  Epic fail on my part in my attempt to head off that kind of essentializing, but I’ll keep on trying.

I’ll add to this list from time to time, and comments are welcome (on my FB page, please; I don’t have time to moderate this blog).

Peace,

–Sandy