
ON BEING TRANS, AND UNDER THE RADAR: TALES FROM THE ACTLAB

Sandy

(Note: This is a transcription of a talk I gave in the latter part of the XXth Century. Consequently various parts sound terribly dated. At the time this talk was given, those parts were cutting-edge, and I present them here for historical value as much as anything else.)

Thanks to ISEA and to the dedicated staff at Switch for allowing me this opportunity to say something about Transpractices, and I’d particularly like to direct this to those of us who are teacher-practitioners, which is to say, artists who teach for the love of teaching. I’d also like to frame this in terms of some experiences I’ve had in relation to my own and my culture’s concepts of gender.

There are lots of misconceptions about how gendered identity works, so let’s clarify as best we can. Firstly, in the case of Transgender there is nothing transitory about Trans. The fact is that if you are Trans, then, to pervert Simone de Beauvoir, one is neither born a woman nor becomes one; Trans is all there is. In the same way that Brenda Laurel pointed out that if you’re designing an immersive world then for a participant in it the simulation is all there is, if you start out as Trans, then Trans is all there is. If you are lucky, you discover that you don’t become assimilated into one of the existing categories because you can’t.

Realizing that you can’t may be a powerful strategy. Gloria Anzaldua described this as mestiza consciousness — a state of belonging fully to none of the possible categories. She talked about being impacted between discourses, about being everywhere but being at home nowhere. She spoke in terms of cultural impaction or what in geology is called subinclusion, though her words apply as fully to our experience.

To survive the Borderlands
you must live sin fronteras
be a crossroads
(11)

At around the same time Anzaldua wrote those words, other people were realizing that existing languages were frequently inadequate to contain new forms of thought and experience.10 These people invented new languages to allow new modes of thought: languages with names like Laadan, Kesh, Ktah’meti…”fictional” (in the best sense) languages meant to enable new practices. Laadan, for instance, was a language in which truth values were encoded right into speech, so an utterance had to contain information about whether the speaker had actually witnessed the event being described, or whether the information had come to them second or third hand or in some other way. Laadan addressed itself to a specific set of problems, namely that language is a vehicle for deception, and it set out to make that impossible. And so forth, for other needs in other situations.

Why invent new words? Because your language is a container for your concepts, and because political power inevitably colonizes language for its value as a technology of domination. Recent neologist practices come from poetic (and poststructuralist) attempts to pull and tear at existing language in order to make visible the fabric of oppression that binds any language together, as well as to create new opportunities for expression. New words and new practices are related — as in Harry Partch’s musical instruments, which enabled new musical grammars.

Language itself follows a cycle that repeats roughly once every thirty years, or once a human generation. Let’s call that cycle the arc of death by naming. Here’s what it looks like:

(Stone takes out her black, dog-eared notebook, digs out a worn pencil, and scribbles for a moment. Then she holds the notebook up…)

Figure 1: The arc of death by naming.

So here’s the drill: Someone wakes up one day and makes something vital and interesting, something that doesn’t fit into any existing definition. Maybe the geist is grinning that day and so lots of people do the new thing simultaneously. Then others notice resonances with their own stuff, triggering a natural human desire to associate. They begin talking about it over coffee or dope or whatever, and their conversations inspire them. Gradually others begin to join in the discussion.

The conversation gathers energy, the body of work grows and becomes more vital and interesting. Elsewhere, people doing similar things find resonances in their own work.

Resonant groups begin to meet together to share conversations. Conferences start to happen.

Students, always alert for innovation, pick up on the new thing and talk about it.

The new thing begins to show up in the media and thereby in popular culture.

Forms of capital take notice. The thing can be marketed, or can assist other things in being marketed. A workforce is needed to produce and market it. This requires that the thing possess stable qualities over a sufficiently long period of time to maximize the marketing potential. One of the stable qualities necessary for this is a name. (Of course the other way to do this is to track trends and market them as they emerge, which rapidly exhausts the power of language inherent in a trend.)

Capital mobilizes educational institutions to produce the necessary workforce, adopting the name perceived as being most useful for this purpose. One fine day someone wakes up and says “Hey, I can get a job with that stuff.” A year or two later, sleek, suited people with hungry eyes are stalking the corridors at MLA, sniffing out the recruiters. The dance is on.

Those of us who were around in the 1980s saw this all the way through to the end with Cultural Studies, from a gleam in the eyes of innovative students to the first time Routledge catalogued a book as “Cultural Studies”. I think many of us felt with some justifiable pride that we’d finally arrived, only to realize an instant later that our discipline had come of age and died…though, as things have turned out, it’s a quite serviceable zombie.

I don’t think it’s an accident that Cultural Studies began as an oppositional practice. I do think that virtually all worthwhile new work starts out as oppositional practice.8 The people who were practitioners of what became Cultural Studies, though they might not have known it, were trying to open space for new kinds of discourses. Because existing discourses take place in a force field of political power, there is never any space between them for new discourses to grow. New discourses have to fight their way in against forces that attempt to kill them off. Those forces fight innovation with weapons like “That’s collage, not photography”, or “That isn’t legitimate scholarship.”

If you’re nervous about suddenly and unexpectedly finding that your discourse has been hijacked14, and that now there are shelves down at the Barnes Ignoble filled with books with your field’s name on them, and that a bunch of universities are suddenly hiring people into an area or concentration with your discourse’s name on it, you might want to reinvent your discourse’s name as frequently as you can. This, of course, is known as nomadics, and it comes from a fine and worthy tradition. Marcos (Novak, one of the sponsors of this session) is such a one; he has been reinventing and renaming almost continually the discourses he creates, right up to and including Transvergence. Watching him makes me tired; I wish I had his stamina and his inexhaustibility.

My approach has been more or less to keep the same name for the discourse I inhabit most frequently. However, I long ago realized that if I’m not involved in an oppositional practice I’ll shrivel up and die. My oppositional practices are Transgender and the ACTLab. The term Transgender has been around for roughly ten years, and the ACTLab has been around for nearly fourteen years. Transgender as a discourse has survived by steadily becoming more inclusive. I don’t think many folks who were around at the inception of the term would recognize it if they saw it now, but it’s certainly become more interesting.9

Okay, that’s the Trans-fu teaser. I’m going to take off at a tangent now and talk about the ACTLab instead, because for me the ACTLab is an embodiment of Trans practices writ large — it was, and remains, absolutely grounded in oppositional practices. As I said, while I do a lot of dog and pony shows about our program I’ve never talked about how it works. Because I want this screed to speak directly to the people who are most likely going to go out there and try to figure out how and where their own oppositional practices will allow them entry into a reactionary and hostile academy, I’m going to concentrate on writing out a sort of punch list — ACTLab For Dummies, the nuts and bolts of an academic program specializing in Trans-fu in the sense that Marcos meant it.

Cut to the Chase

ACTLab stands for Advanced Communication Technologies Laboratory. The name began as a serviceable acronym for the New Media program of the department of radio-television-film (RTF) at the University of Texas at Austin. People used to ask us if we taught acting. We don’t.

The ACTLab is a program, a space, a philosophy, and a community.

Before I tell you about those things I’ll mention that if you intend to build a Trans-fu program of your own, it’s crucial to know that the ACTLab would be impossible without our primary defense system: The Codeswitching Umbrella.

The Codeswitching Umbrella

Conceptually, the ACTLab operates on three principles(5):

Refuse closure;
Insist on situation;
Seek multiplicity.

In reality you can’t build a program on that, because (a) refusing closure is the very opposite of what academic programs are supposed to be about, and (b) insisting on situation means, among other things, questioning where your funding comes from. Also, if time and tide have shown you that your students do their very best work when you offer them almost no structure but give them endless supplies of advice and encouragement, you have precipitated yourself into a surefire confrontation with people who are paid to enforce structure and who, in order to collect their paychecks, are responsible to people who understand nothing but structure. We all have to get along: the innovators, the bean counters, and me. So to make this smooth and easy for everyone, we actlabbies live beneath the Codeswitching Umbrella.

(Stone whips out her notebook and scribbles frantically. Then she holds it up…)

Figure 2: The codeswitching umbrella. The umbrella is opaque, hiding what’s beneath from what’s above and vice versa. The umbrella is porous to concepts, but it changes them as they pass through; thus “When’s lunch?” below the umbrella becomes “Lunch is at noon sharp” above the umbrella. In this conceptual model the lowest subbasement level you can descend to, epistemically speaking, is the ACTLab, and the highest level you can ascend to, epistemically speaking, is Texas. Your mileage (and geography) may vary. This particular choice of epistemic top and bottom expresses certain power relationships which are probably obvious, but then again, you can never be too obvious about power relationships.

The codeswitching umbrella translates experimental, Trans-ish language into blackboxed, institutional language. Thus when people below the umbrella engage in deliberately nonteleological activities, what people above the umbrella see is organized, ordered work. When people below the umbrella produce messy, inarticulate emergent work, people above the umbrella see tame, recognizable, salable projects. When people below the umbrella experience passion, people above the umbrella see structure.

When you think about designing your program, you’ll realize that institutional expectations chiefly concern structure. Simultaneously, you’ll be aware that novel work frequently emerges from largely structureless milieux. Structureless exploration is part of what we call play. A major ACTLab principle is to teach people to think for themselves and to explore for themselves in an academic and production setting. We use play as a foundational principle and as a homonym for exploration. How we deploy the concept of play is not just by means of ideas or words; it’s built right into the physical design of the ACTLab studio.

The ACTLab studio

Figure 3: The ACTLab studio. Legend: (A) Seminar table (the dotted line shows where the two rectangular tables join); (B) Pods; (C) Thrust stage; (D) Screen; (E) Video projector; (F) Theatrical lighting dimmerboard and plugboard; (G) Workstation connected to projector and sound system; (H) Wall-o-computers, which I could claim is there merely to break the interactivity paradigm but which in fact is there because we ran out of space for more pods; (I) Doors; (J) Quadraphonic speakers (these are halfway up the walls, or about twenty feet off the ground); (K) Couch. Actually the couch is a prop which can be commandeered by production students doing shoots, so it’s sometimes there and sometimes not. Although my lousy drawing shows the space as rectangular (well, my notebook is clearly rectangular), the room is actually a forty-foot cube with the lighting grid (not shown) about twenty feet up.

It’s hard to separate the ACTLab philosophy from the studio space, and vice versa. They are co-emergent languages. The ACTLab studio is the heart of our program and in its semiotics it embodies the ACTLab philosophy. At the center of the space is the seminar table. The table is a large square around which we can seat about twenty people if we all squeeze together, and around fifteen if we don’t. The usual size of an ACTLab class is about twenty, so in practice fifteen people sit at the table and five or so sit in a kind of second row. Because the “second row” includes a couch, people may jockey for what might look like second-class studentship but isn’t.

Ideally the square table would be a round table, because the philosophy embodied in the table is there to deprivilege the instructor. Most other classrooms at UT consist of the usual rows of seats for students, all bolted to the floor and facing the instructor’s podium. We needn’t emphasize that this arrangement already incorporates a semiotics of domination. Whatever else is taught in such a room, the subtle instruction is obedience. There’s nothing obvious about using a round table instead, but the seating arrangement delivers its subtle message to the unconscious and people respond. The reason the table is square instead of round is that we need to be able to move it out of the way in order to free up the floor space for certain classes that incorporate movement and bodywork. In order to make this practical the large square table is actually two rectangular tables on wheels which, when separated, happen to fit neatly between the computer pods; or, if we happen to need the pods at the same time as the empty floor, the two tables can be rolled out the door and stored temporarily in the hall.

The computers in the studio are arranged in three “pods”, each of which consists of five workstations that face each other. When you look up from the screen, instead of looking at a wall you find yourself looking at other humans. Again, even in small ways, the emphasis is on human interaction. (As we grew, there wasn’t room to have all the workstations arranged as pods, so newer computers still wind up arranged in rows along the wall. We’ll fix that later…)

Let’s get the sometimes vexing issue of computers and creativity out of the way. Although we use computers in our work, we go through considerable effort to place them in proper perspective and deemphasize the solve-everything quality they seem to acquire in a university context. In particular we go to lengths to disrupt closure on treating computers as the wood lathes and sewing machines of our time; that is, as artisanal tools within a trade school philosophy, tools whose purpose and deployment are exhaustively known and which are meant to dovetail within an exhaustive recipe for a fixed curriculum. While we’re not luddites, our emphasis is far more on flexibility, initiative, creativity, and group process, in which students learn to select appropriate implements from a broad range of alternatives and also to make the best possible use of whatever materials may come to hand. In this way we seek to foster the creative process, rather than specific modes of making.

Besides the seminar table and the inescapable computers, the ACTLab studio features a large video projection system, quadraphonic sound system, thrust stage, and theatrical lighting grid. These are meant to be used individually or in combination, and emphasize the intermodal character of ACTLab work. We treat them as resources and augmentations.

As with our physical plant, ACTLab faculty and staff are meant to be resources. Thinking Trans-fu here, as I encourage us to always do, we shouldn’t take the idea or definition of resources uncritically, without some close examination of how “resource” means differently inside and outside the force fields of institutional expectations and nomadic programs.

Rethinking resource paradigms

If you design your Trans-fu program within a traditional institutional structure you may find that the existing structure, by its nature, can severely limit your ability to acquire and maintain resources such as computers. It can do this by the way it imposes older paradigms that determine how resources should, usually in some absolute sense, be acquired and maintained. To build an effective program, think outside these resource-limiting paradigms from the outset, and instead rethink resource paradigms just like curricular paradigms — that is, as oppositional practices.

For example, when we established the ACTLab in 1993 there was no technology infrastructure in our college, no coordinated maintenance or support. Then, to our dismay, we found that by virtue of preexisting practices of hiring staff with preexisting skills and preexisting assumptions about the “right way” to build those infrastructures, the university, perhaps unintentionally, perpetuated hugely wasteful resource paradigms. At most institutions this happens in the guise of solid, reliable procedures that are “proven” and “known to work”, and it’s also possible that it is fallout from sweetheart deals between manufacturers and the institution.13

For example, in its existing resource paradigms our institution assumes a radical disjunct between acquisition, support, and use. Because the paradigms themselves are designed by the acquisitors and supporters, users are at the bottom of that pile, and while in theory the entire structure exists to make equipment available to users, in practice the actual users are almost afterthoughts in the acquisition-support-user triad. Contrariwise, the Trans-fu approach is to see the triad as embedded in old-paradigm thinking, and to virally penetrate and dissolve the boundaries between the three elements of the triad. For example, if you have designed your program well, you will find that it attracts students with a nice mix of skills, including hardware skills. In the ACTLab we’ve always had a critical mass of hardware geeks, which is to say that a small but significant percentage of our student population has the skills to assemble working computers from cheap and plentiful parts. It’s helpful here to recall one of the definitive Trans principles, which of course we share with all distributed systems: a distributed system interprets chronic inefficiency as damage, and routes around it. Trans resource paradigms share this quality as well. A student population with hardware skills is a priceless resource for program building, because it gives one the unique ability to work around an institution’s stunningly inefficient acquisition and maintenance procedures. Doing so reveals another aspect of running under the radar, which is that you can do it in plain sight and still be invisible.

Let’s run the numbers. At our institution, if you follow the usual materiel acquisition path, before you can purchase significant hardware the university’s tech staff must accept it as worthy of their attentions and agree to support it. For our university’s technical support staff to be willing to support a piece of hardware, it must be purchased from a large corporate vendor as a working item and then maintained only by the technical staff. The institution intends the requirement for a large corporate vendor to assure that the vendor will still be there a few years down the road to provide support to the institution’s in-house tech staff as necessary. This sets the base acquisition cost for your average PC at roughly US$2500, and requires you to sign on to support a portion of a chain of tech staff at least three deep — which includes salaries, office space, and administrative overhead. On average this adds an additional $1000 per year to the real cost of owning a computer. We don’t pay these costs directly, but the institution factors them into the cost of acquiring equipment. Also, because the commercial computer market changes so fast, after about a year this very same computer is worth perhaps half of what it cost.

On the other hand, if we make building computers part of our curriculum, and build our own computers from easily available parts, we can deploy a perfectly serviceable machine for something in the vicinity of $600. By building it from generic parts, we put most of our dollars into the hardware itself, instead of having to pay a portion of the first cost to finance a brand-name manufacturer’s advertising campaign. After a year of use such a box has depreciated hardly at all. When it breaks, instead of supporting a large and cumbersome tech support infrastructure we simply throw it away. If on average such a box lasts no more than a year, we have already saved more than enough in overhead to replace it with an even better one.
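
For the curious, the same arithmetic in a few lines of OCaml (any language would do). The dollar figures are the rough, late-1990s estimates quoted above, not measured costs; the three-year horizon is arbitrary and the function names are made up for the sketch:

    (* Back-of-the-envelope comparison using the rough figures quoted above. *)
    let vendor_cost years =          (* ~$2500 up front plus ~$1000/year support overhead *)
      2500 + 1000 * years

    let self_built_cost years =      (* ~$600 per box; worst case, one replacement box per year *)
      600 * years

    let () =
      let years = 3 in
      Printf.printf "vendor path over %d years:     $%d\n" years (vendor_cost years);
      Printf.printf "self-built path over %d years: $%d\n" years (self_built_cost years)

Over three years that works out to $5500 against $1800, which is the point of the paragraph above: the support overhead alone more than pays for annual replacement.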

This nomadic Trans-fu philosophy of taking responsibility for one’s own hardware, thinking beyond traditional institutional support structures, and folding hardware acquisition into course material, results in a light, flexible resource paradigm that gives you the ability to change hardware priorities quickly and makes hardware turnover fast and cheap — the very antithesis of the usual institutional imperatives of conservation.

ACTLab curriculum

When you build your Trans-fu program you will be considering what your prime directive should be. The ACTLab Prime Directive is Make Stuff. By itself, Make Stuff is an insufficient imperative; to be institutionally viable we have to translate that into a curriculum. The purpose of a curriculum is to familiarize a student with a field of knowledge. Some people believe you should add “in a logical progression” to that. With New Media the “field” is always changing; the objects of knowledge themselves are in motion, and you have to be ready to travel light. However, when we observe the field over the course of the last fifteen years, it’s easy to see that although this is true, some things remain constant. For the purposes of this discussion I’m going to choose innovation, creativity, and play. The ACTLab curriculum has a set of specifics, but our real focus is on those three things.2

As we discuss curriculum I want to remind you of the necessity to keep in mind the nomadic imperative of Trans-fu program building: refuse closure. When you want to focus on innovation, creativity, and play, one of your most important (and difficult) tasks is to keep your curricular content from crystallizing out around your course topics. We might call this approach cyborg knowledges. In A Manifesto for Cyborgs, Donna Haraway notes that cyborgs refuse closure. In constructing ACTLab course frameworks, refusing closure is a prime directive: we endeavor to hold discourses in productive tension rather than allowing them to collapse into univocal accounts. This is a deliberate strategy to prevent closure from developing on the framework itself, to make it difficult or impossible for an observer to say “Aha, the curriculum means this”. Closure is the end of innovation, the point at which something interesting and dynamic becomes a trade, and we’re not in the business of teaching a trade.

Still, we have to steer past the rocks of institutional requirements, so we need to have a visible structure that fulfills those requirements. Once again we look to the codeswitching umbrella. Above the umbrella we’ve organized our primary concerns into a framework on which we hang our visible course content. The content changes all the time; the framework does not. For our purposes it’s useful to describe the curriculum in terms of a sequence, but this is really part of the codeswitching umbrella; in practice it makes no difference in which order students take the courses, and in fact it wouldn’t matter if they took the same course eight times, because as the semester progresses the actual content will emerge interactively through the group process. The real trick, then, the actual heart of our pedagogy, is to nurture that group process, herd it along, keep it focused, active, and cared for.(3)

Figure 4: The ACTLab Prime Directive, which is based on messy creation and the primacy of play, filters through the codeswitching umbrella to reappear as a disciplined, ordered topic list. In this instance the umbrella performs much the same numbing function as PowerPoint. (Also notice that my drawing improves with practice; this is a much better umbrella than the one in Figure 2.)

There are eight courses in the ACTLab sequence, and we consider that students have acquired proficiency when they have completed six. We try to rotate through all eight courses before repeating any, but sometimes student demand influences this and we repeat something out of sequence.

The ACTLab pedagogy requires that graduate and undergraduate students work together in the same courses. We find that the undergrads benefit from the grads’ discipline and focus, and the grads benefit from the undergrads’ irreverence and boisterous energy.

As I mentioned, our curricular philosophy is about constructing dynamic topic frameworks which function by defining possible spaces of discourse rather than by filling topic areas with facts.12 With this philosophy the role of the students themselves is absolutely crucial. The students provide the content for these frameworks through active discussion and practice. The teacher acts more as a guide than as a lecturer or a strong determiner of content.

For an instructor this can be scary in practice, because it depends so heavily on the students’ active engagement. For that reason, during the time that the program is ramping up and acquiring a critical mass of students who are proficient in the particular ways of thinking that we require, we spend a significant amount of time at the beginning of each semester doing “boot camp” activities, in which we demonstrate through practice that we don’t reward rote learning, question-answer loops, or authority responses, but do reward independent thinking, innovation, and team building.

I’ll list the titles of the eight active ACTLab topics here, but keep in mind that in a Trans-fu curriculum model like the ACTLab’s the courses do not form a sequence or progression. There is no telos, in the traditional sense. That’s why we call them clusters. Their purpose is to create and sustain a web of discourses from which new work can emerge. Like the Pirates’ Code, the titles are mere guidelines, actually:

Weird Science

Death

Trans

Performance

Blackbox

Postmodern Gothic

Soundscapes

When Cultures Collide

Because the course themes are not the point of the exercise, we’re always thinking about what other themes might provide interesting springboards for new work. Themes currently in consideration, which may supplement or replace existing themes, depending upon how they develop, are:

Dream


Delirium

(You probably noticed that some of these topics read like headers for a Neil Gaiman script. I think a topic list based on Gaiman’s Endless would be killer:))

A brief expansion of each topic might look like this:

Weird Science: Social and anthropological studies of innovation, the boundaries between “legitimate” science and fakery, monsters and the monstrous, physics, religion, legitimation, charlatanism, informatics of domination.

Death: Cultural attitudes toward death, cultural definitions of death, political battles over death, cultural concepts of the afterworld, social and critical studies of mediumship, “ghosts” and spirits, zombies in film and folklore; the spectrum of death, which is to say not very dead, barely dead, almost dead, all but dead, brain dead, completely dead, and undead.

Trans: Transformation and change, boundary theory, transgender, gender and sexuality in the large and its relation to positionality and flow; identity.

Performance: Performance and the performative, performance as political intervention, history and theory of theatre, masks, puppetry, spectacle, ritual, street theatre.

Blackbox: Closure, multiplicity, theories of discourse formation, studies and practices of innovation, language.

Postmodern Gothic: Theories and histories of the gothic, modern goth, vampires, monsters and the monstrous, genetic engineering, gender and sexuality, delirium, postmodernity, cyborgs.

Soundscapes: Theory and practice of audio installation, multitrack recording, history of “music” (in the sense described by John Cage). (Note that unlike all the other topics this one is very specific, and because its specificity tends to limit the range of things the students feel comfortable doing, it’s likely going to be retired or at least put into a secondary rotation like songs that fall off the Top 40.)

When Cultures Collide: Language and episteme, cultural difference, subaltern discourses, orientalism, mestiza consciousness. (This topic has a multiple language component and we had a nice deal with one of the language departments by which we shared a foreign (i.e., not U.S. English) language instructor. With visions of Gloria Anzaldua dancing in our heads we began the semester conducting class in two languages: full immersion, take-no-prisoners bilinguality. Three weeks into the semester he suddenly got a great gig at another school, and the course sank like an iceberged cruiser. Since then I’ve been wary about repeating it, and I suspect that — unless I can find an instructor who will let me keep their firstborn as security — it’s not likely to come up in the rotation again any time soon.)

So that’s what I consider to be a totally useless description of the ACTLab curriculum, useless because the curriculum itself is a prop for the Trans-fu framework which underlies it. Remember the curriculum is above the codeswitching umbrella. And therefore it’s important to those necessary deceptions that make a Trans-fu program look neat, presentable, and legit.

ACTLab pedagogy

When we discuss Trans-fu pedagogy I always emphasize the specifically situated character of the ACTLab’s pedagogical imperatives. I think the default ACTLab pedagogy was heavily influenced by having come of academic age in an environment in which our mentors were, or at least were capable of giving us the illusion that they were, all very comfortable with their own identities and accomplishments. There was no pressure to show off or to make the classroom a sounding board for our own egos. This milieu allowed me the freedom to develop the ACTLab’s predominant pedagogical mode. It primarily consists of not-doing. Ours is the mode of discussion, and you can’t create discussion with a lecture. Those are incompatible modes. You may encourage questions, but questions aren’t discussion, and though questions may take you down the road to discussion it’s better to bushwhack your way directly there.

Other faculty have their own teaching methods, some of which are more constructivist, but I encourage people to teach mostly by eye movement and body position.7 Usually class starts off with someone showing a strange video they found, or a soundbyte, or an odd bit of text, or someone will haul out a mysterious hunk of tech and slap it on the table. Most times that’s enough, but in the event that it isn’t, the instructor may ask a question.6 For the rest of the time, when it’s appropriate to encourage responses, the instructor will turn her body to face the student. In this situation the student naturally tends to respond directly to the instructor, and when that happens, the instructor moves her eyes to look at another student. This leads the responder’s eyes to focus on that student too, whereupon the instructor looks away, breaking eye contact but leaving the students in eye contact with each other.

That’s a brief punch list of what I consider the minimum daily requirement for building a useful and effective program based on Trans-fu principles. The ACTLab’s pedagogical strategies, together with semiotically deprivileging the instructor through the design and position of chairs, tables and equipment in the studio, plus refusing closure, encouraging innovation, and emphasizing making, help us create the conditions that define the ACTLab. I think it’s obvious by now that all of this can be very easily rephrased to emphasize that the ACTLab, like any entity that claims a Trans-fu positionality, is itself a collection of oppositional practices. Like any oppositional practice, we don’t just live under the codeswitching umbrella; we also live under the institutional radar, and to live under the radar you have to be small and lithe and quick. We cut our teeth on nomadics, and although we’ve had some hair-raising encounters with people who went to great lengths to stabilize the ACTLab identity, we’re still nomadic and still about oppositional practices.

I’m writing this for ISEA at this time because, although we’ve been running a highly successful and unusual program based on a cluster of oppositional practices for almost fourteen years now, a new generation of innovators are about to begin their own program building and I’ve never said anything publicly about the details that make the ACTLab tick. I’m also aware that our struggles tend to be invisible at a distance, which can make it appear that we simply pulled the ACTLab out of a hat — which I think is a dangerous misconception for people to have if they intend to model their programs to any extent on what we’ve done. People who’ve been close up for extended periods of time tend to be more sanguine about the nuts and bolts of putting such a program together. Some very brave folks who have crouched under the radar with us have taken the lessons to heart, gone out and started Trans-fu programs of their own. Some of them have been very successful. Usually other programs simply adapt some of our ideas for their own purposes, but very few have actually made other programs like the ACTLab happen. Recently we’ve talked about how people who’ve never had the opportunity to participate in our community would think about New Media as an oppositional practice, and what, with a little encouragement, they might decide to do about it. I’d like to close by saying that however you think about it or talk about it, at the end of the day the only thing that matters is taking action. I don’t care what you call it — New Media, Transmedia, Intermedia, Transitivity, Post-trans-whazzat-scoobeedoomedia — whatever. The important thing is to do it. Music is whatever you can coax out of your instrument. Make the entering gesture, take brush in hand, pick up that camera. We’re waiting for you.

————————————

Acknowledgements

The ACTLab and the RTF New Media program exist because a lot of people believed and were willing to work to make it happen and keep it happening. A much too short list includes Vernon Reed, Honoria Starbuck, Rich MacKinnon, Smack, Drew Davidson, Harold Chaput, Brenda Laurel, Joseph Lopez, and Brandon Wiley. Special thanks to my colleague Janet Staiger for her unfailing support, and to my husband and colleague Cynbe, who from his seat on the fifty-yard line has seen far too much of the joys and tears of building this particular program. And, of course, to Donna Haraway, the ghost in our machine.

The ACTLab and the RTF New Media program also exist because along the way some very nasty shit happened — involving jealousy, envy, bribery, betrayal, abuse, deception, brute force thuggery, malfeasance in high places, destruction — but, by some miracle, we survived, and that which didn’t kill us made us stronger. And that is also a valuable thing. So thanks also to those who wished us ill and tried to bring us down; you know who you are, so I won’t embarrass you.

Notes

1. Some people add “digital” to this mix in some fashion or other; hence “digital art”, “digital media”, et cetera. Digital-fu is unquestionably part of the field, but there is a tendency on the part of just about everyone to expand the digital episteme beyond any limit and give it way too much importance as some godzilla-like ubertech. Digital is great, but it isn’t everything, and it will never be everything. A useful parallel might be the history of electricity. When the use of electricity first became widespread, all sorts of quack devices sprang up. At one time you could go to an arcade and, for a quarter, you got to hold onto a pair of brass handles for two minutes while the machine passed about fifty volts through you. Hell, why not, it was that electricity stuff, so it must be great. That’s technomania — look, it’s new, it’s powerful, it’s mysterious, so it must be good for everything. Electricity has become invisible precisely because of its ubiquity, in the way of all successful tech; it hegemonizes discourses and in the process it becomes epistemic wallpaper. No question electricity profoundly affects our lives, but, pace McLuhan, absent purpose electricity itself is merely a powerful force. What we do with it, how we shape its power, is key, and the same thing is true with “digital”.

2. This emphasis produces a continual tug of war with the department in which we work, because the emphasis of undergraduate courses is supposed to be on learning a trade, and learning a trade is the last thing on our minds.

3. There’s neither time nor space here to devote to describing how we foster group process in the practical setting of a semester-long course. I give the topic the attention it deserves in the book from which this article is excerpted. (Don’t go looking for the book; it’s in progress, and if the Goddess in Her infinite sarcasm smiles on us all it’ll be out next year.)

4. In passing let me note that this aspect of the ACTLab philosophy has been subject to some serious high-caliber shelling on the part of the department’s administration. In fact we had to stop for a year or two while we slugged it out with the administration, but it’s back now and likely to stay.

5. The three principles come from The War of Desire and Technology at the Close of the Mechanical Age (MIT Press, 1994), in which you may find other useful things. Bob Prior will be very happy if you go out and buy a copy, and he may even let me write another one.

6. From a methodological perspective the next question is, of course, what happens if nobody responds. That’s rare, but it does occur. When it happens I count aloud to twenty, thereby deconstructing the pedagogy (Methods 101: Count silently to twenty before moving things along) and sometimes making someone nervous enough to blurt out a remark, though actlabbies are a notoriously tough group when faced with that situation. However, on very rare occasions even that strategy fails, in which event I send everyone home. It is our incredible good fortune to have classes filled with people who come there to do the work, and if the room is dead there’s usually a good reason, even though I may not know precisely what it is.

7. One of the most perplexing and hilarious encounters I’ve had in my years with the department was when I came up for tenure and they sent a traditional constructivist to evaluate my teaching skills. His report read, virtually in its entirety: “Dr. Stone just sat there for three hours.”

8. Chela Sandoval, now at UCSB, was referring to the same thing when she theorized oppositional consciousness in the late ’80s.

9. I don’t want to get into it here, but something about the fact that there are now jobs in academia in something called Transgender Studies makes me nervous. As the philosopher Jimmy Buffett asserts, I don’t want that much organization in my life.

10. The phenomenon of artificial languages created de novo essentially as oppositional practices seemed to be associated in particular with the emergence of second wave American feminism in the 1970s, though work remains to be done on understanding the phenomenon. In regard to academic jargons, the dense jargon of poststructuralism was particularly intended as an oppositional practice, a tool to undermine the academic use of language as power; and it did so in a particularly viral way. Unfortunately, the distance between “true” poststructuralist jargon and what was nothing more than a strategy for concealing mediocre work inside nearly incomprehensible rhetoric proved to be much too small for comfort.

11. From Gloria Anzaldua, Borderlands/La Frontera. San Francisco: Spinsters/Aunt Lute 1987.

12. The extent to which the ACTLab community considers naming our discipline to be both crucial and deadly and therefore to be approached with irony and humor may best be exemplified by our response to a request from the Yale School of Architecture. Those nice folks in New Haven wanted me to go up there and describe our program and in particular to tell them what we called our discipline. To prepare for this, each person in the ACTLab wrote a single random syllable on a scrap of paper. We put the scraps in a box, and then, to the accompaniment of drumming and tribal vocalizations, I solemnly withdrew two slips of paper and read them out. Based on this result, I proceeded to Yale and explained to them that our discipline was called Fu Qui. John Cage would have instantly recognized what we were doing.

13. Those deals may be either benign (economies of scale that have side effects of limiting purchase options) or malignant (pork). In a place like Texas you don’t want to be noticed questioning pork, which is another reason that it’s nice to be down under the radar, being lithe and quick and running silent, not attracting attention.

14. I think we need a term for discourse hijacking. Disjacking?

Why Mythryl?

Cynbe’s life work was the programming language Mythryl. There will be a lot more about Mythryl in future; for now, in honour of his passing, here’s an excerpt from the Mythryl documentation.
________________________________________________________________

In the introductory material I state

Mythryl is not just a bag of features like most programming languages;
It has a design with provably good properties.

Many readers have been baffled by this statement. This is understandable enough; like the first biologists examining a platypus and pronouncing it a fake, the typical contemporary programmer has never before encountered an engineered programming language and is inclined to doubt that such a thing truly exists, that one is technically possible, or indeed to wonder what it could even mean to engineer a programming language. Could designing a programming language possibly involve anything beyond sketching a set of features in English and telling compiler writers to go forth and implement? If so, what?

My goal in this section is to show that it is not only meaningful but in fact both possible and worthwhile to truly engineer a programming language.

Every engineering discipline was once an art done by seat of the pants intuition.

The earliest bridges were likely just trees felled across streams. If the log looked strong enough to bear the load, good enough. If not, somebody got wet. Big deal.

Over time bridges got bigger and more ambitious and the cost of failure correspondingly larger. Everyone has seen the film of Galloping Gertie, the Tacoma Narrows bridge, being shaken apart by a wind-excited resonant vibration. The longest suspension bridges today have central spans of up to two kilometers; nobody would dream of building them based on nothing more than “looks strong enough to me”.

We’ve all seen films of early airplanes disintegrating on their first take-off attempt. This was a direct consequence of seat of the pants design in the absence of any established engineering framework.

The true contribution of the Wright brothers was not that they built the first working airplane, but rather that they laid the foundations of modern aeronautical engineering through years of research and development. With the appropriate engineering tools in hand, building the aircraft itself was a relatively simple exercise. The Wright Flyer was the first controllable, workable airplane because the Wright brothers did their homework while everyone else was just throwing sticks, cloth and wire together and hoping. Sometimes hoping just isn’t enough.

Large commercial aircraft today weigh hundreds of tons and carry hundreds of passengers; nobody would dream of building one without first conducting thorough engineering analysis to ensure that the airframe will withstand the stresses placed upon it. Airplanes no longer fall out of the sky due to simple inadequacy of airframe design.

Airplanes do however fall out of the sky due to inadequacy of flight software design. Software today is still an art rather than an engineering discipline. It ships when it looks “good enough”. Which means it often is not good enough — and people die.

Modern bridges stand up, and modern airplanes stay in the sky, because we now have a good understanding of the load bearing capacity of materials like steel and aluminum, of their typical failure modes, and of how to compute the load bearing capacity of engineered structures based upon that understanding.

If we are to reach the point where airliners full of passengers no longer fall out of the sky due to software faults, we need to have a similarly thorough understanding of software systems.

Modern software depends first and foremost on the compiler. What steel and concrete are to bridge design, and what aluminum and carbon composites are to airframe design, compilers are to software design. If we do not understand the load bearing limits of steel or aluminum we have no hope of building consistently reliable bridges or airframes. So long as we do not understand what our compilers are doing, we have no hope of building consistently reliable software systems, and people will continue to die every year due to simple, preventable software faults in everything from radiological control software to flight control software to nuclear reactor control software to car control software.

Our minimal need is to know what meaning a compiler assigns to a given program. So long as we have no way of agreeing on the meaning of our programs, as software engineers we have lost the battle before the first shot is fired. Only when we know the precise semantics assigned to a given program by our compiler can we begin to develop methodologies to validate required properties of our software systems.

I do not speak here of proving a program “correct”. There is no engineering analysis which concludes with “and thus the system is correct”. What we can do is prove particular properties. We can prove that a given program will not attempt to read values from outside its address space. We can prove that a given program will always eventually return to a given abstract state. We can prove that a given program will always respond within one hundred milliseconds. We can prove that a given program will never enter a diverging oscillation. We can prove that a given program will never read from a file descriptor before opening it and will always eventually close that file descriptor. We can prove that certain outputs will always stand in given relationships to corresponding inputs. Given time, tools and effort, we can eventually prove enough properties of a flight control program to give us reasonable confidence in trusting hundreds of lives to it.
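
As a small, concrete illustration of what proving one such property can look like in an ML-family language, here is a sketch in OCaml; it is hypothetical and is not taken from the Mythryl sources. By hiding the raw file handle behind an abstract type, “never read from a file before opening it” becomes something the typechecker enforces rather than something we hope for. (The “always eventually close it” half needs heavier machinery, such as linear or affine types, and is beyond this sketch.)

    (* Hypothetical sketch: the only way to obtain a value of type `opened` is
       through open_readonly, so "read before open" is a compile-time error
       rather than a runtime one. *)
    module Safe_file : sig
      type opened
      val open_readonly : string -> opened
      val read_line     : opened -> string option
      val close         : opened -> unit
    end = struct
      type opened = in_channel
      let open_readonly path = open_in path
      let read_line ch = try Some (input_line ch) with End_of_file -> None
      let close ch = close_in ch
    end

    let () =
      let f = Safe_file.open_readonly "/etc/hostname" in
      (match Safe_file.read_line f with
       | Some line -> print_endline line
       | None      -> ());
      Safe_file.close f
      (* Safe_file.read_line 42  -- rejected by the typechecker: an int is not an `opened` *)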

Traditional programming language “design” does not address the question of the meaning of the language. In an engineering sense, traditional programming languages are not designed at all. A list of reasonable-sounding features is outlined in English, and the compiler writer is then turned loose to try to produce something vaguely corresponding to the text.

The first great advance on this state of affairs came with Algol 60, which for the first time defined clearly and precisely the supported syntax of the language. It was then possible for language designers and compiler writers to agree on which programs the compiler should accept and which it should reject, and to develop tools such as YACC which automate significant parts of the compiler construction task, dramatically reducing the software fault frequency in that part of the compiler. But we still had no engineering-grade way of agreeing on what the accepted programs should actually be expected to do when executed.

The second great advance on this state of affairs came with the 1990 release of The Definition of Standard ML, which specified formally and precisely not only the syntax but also the semantics of a complete usable programming language. Specifying the syntax required a hundred phrase structure rules spread over ten pages. Specifying the semantics required two hundred rules spread over another thirty pages. The entire book ran to barely one hundred pages including introduction, exposition, core material, appendices and index.

As with the Wright brothers’ first airplane, the real accomplishment was not the artifact itself, but rather the engineering methodology and analysis underlying it. Languages like Java and C++ never had any real engineering analysis, and it shows. For example, the typechecking problem is undecidable for both of those languages, which is mathematical jargon for saying that the type system is so broken that it is mathematically impossible to produce an entirely correct compiler for either of them. This is not a property one likes in a programming language, and it is not one intended by the designers of either language; it is a simple consequence of the fact that the designers of neither language had available to them an engineering methodology up to the task of testing for and eliminating such problems. Like the designers of the earliest airplanes, they were forced to simply glue stuff together and pray for it to somehow work.

The actual engineering analysis conducted for SML is only hinted at in the Definition. To gain any real appreciation for it, one must read the companion volume Commentary on Standard ML.

Examples of engineering goals set and met by the design of SML include:

Each valid program accepted by the language definition (and thus eventually by the compiler) should have a clearly defined meaning. In Robin Milner’s famous phrase, “Well typed programs can’t go wrong.” No segfaults, no coredumps, no weird clobbered-stack behavior.

Each expression and program must have a uniquely defined type. In mathematical terminology, the type system should assign a unique most general (“principal”) type to each syntactically valid expression and program.

It must be possible in principle to compute that type. In mathematical terminology, the problem of computing the principal type for an expression or program must be decidable. This is where Java and C++ fall down.

In general it is excruciatingly easy for the typechecking problem to become undecidable because one is always stretching the type system to accept as many valid expressions as possible.

Any practical type system must err on the side of safety, of rejecting any program which is not provably typesafe, and will consequently wind up throwing out some babies with the bathwater, rejecting programs which are in fact correct because the type system was not sophisticated enough to realize their correctness. One is always trying to minimize the number of such spuriously rejected programs by being just a little more accommodating, and in the process creeping ever closer to the precipice of undecidability. The job of the programming language type system designer is to teeter on the very brink of that precipice without ever actually falling over it.

It must be possible in practice to compute that type with acceptable efficiency. In modern praxis that means using syntax-directed unification-driven analysis to compute principal types in time essentially linear in program size. (Hindley-Milner-Damas type inference.)
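
A concrete taste of the last few paragraphs, using OCaml as a stand-in for SML (both use Hindley-Milner-Damas inference, so the behaviour shown is the same); the function names are merely illustrative examples of mine:

    (* Principal types: with no annotations, the compiler infers the most
       general type an expression can have. *)
    let compose f g x = f (g x)
    (* inferred: ('a -> 'b) -> ('c -> 'a) -> 'c -> 'b *)

    let pair x y = (x, y)
    (* inferred: 'a -> 'b -> 'a * 'b *)

    (* And a baby thrown out with the bathwater: this function is perfectly
       sensible, but plain Hindley-Milner rejects it, because a lambda-bound
       argument such as `f` is never generalized and so cannot be used at two
       different types in one body:

         let both f = (f 1, f "one")    <- type error: f cannot be both
                                           int -> _ and string -> _

       Accepting programs like this takes higher-rank polymorphism, exactly the
       kind of extension that edges a type system toward the precipice of
       undecidability described above. *)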

There must be a clear phase separation between compile-time and run-time semantics — in essence, between typechecking and code generation on the one hand and runtime execution on the other. Only then is it possible to write compilers that generate efficient code, and only then is it possible to give strong compile-time guarantees of typesafety.

The type system must be sound: The actual value computed at runtime (i.e., specified by the dynamic semantics) must always possess the type assigned to it by the compiletime typechecker (i.e., the static semantics).

The runtime semantics must be complete, assigning a value to every program accepted as valid by the compiletime typechecker.

The design process for SML involved explicitly verifying these properties by informal and formal proofs, repeatedly modifying the design as necessary until these properties could be proved. This intensive analysis and revision process yielded a number of direct and indirect benefits, some obvious, some less so:

Both the compiletime and runtime semantics of SML are precise and complete. There are no direct or indirect conflicting requirements, nor are there overlooked corners where the semantics is unspecified.
This sort of analysis is arduous and lies at the very limits of what is possible at the current state of the art. Consequently there was a powerful and continuing incentive to keep the language design as spare and clean as humanly possible. The original 1990 design was already very clean; the 1997 revision made it even simpler and cleaner by removing features which had since been found to be needlessly complex.
The analysis explicitly or implicitly explored all possible interactions between the different language parts; each was revised until it interacted smoothly with all other parts in all possible contexts. It was this process which took an initial bag of features and welded them into a coherent design. It is the lack of this process which has left other contemporary languages still an uncoordinated bag of features rife with unanticipated corner cases.
The analysis process exposed initially unanticipated design consequences and concomitant design choices, allowing explicit consideration of those design choices and selection of the most promising choice. Other contemporary languages have discovered these design consequences only in the field when the size of the installed base prevented a design change. For example it was not initially anticipated that every assignment into a Java array would require a type check; this unexpected cost will handicap Java forever. The undecidability of Java and C++ typechecking is a similarly unexpected and unpleasant design misfeature discovered too late to be correctable.
The analysis process made clear which language features were semantically clean and which introduced pervasive semantic complexities. For example:
The original Definition handling of equality introduced special cases throughout the semantic rules and proofs; more recent research such as the Harper Stone semantics for the language has addressed this by finding a simpler, more natural treatment.
The original Definition treatment of type generativity was via an imperative-flavored mechanism which proved resistant to analysis; the more recent Harper Stone semantics has addressed this via a clean type-theoretic treatment more amenable to analysis.
The original Definition reconciliation of type polymorphism with the imperative features of assignment and exceptions proved needlessly complex; the 1997 revision adopted the simplified “value restriction” approach now universal in ML-class languages (a small illustration follows this list).
The analysis process identified problematic areas in which the semantic consequences of particular features was not clearly understood; these features were left out of the design, forestalling possible unpleasant discoveries later. For example, inclusion of higher order functors was postponed pending deeper understanding of them.
Conversely, the analysis identified some generalizations of the language as being in fact unproblematic, allowing certain language features which initially looked suspect to be included, either in the Standard itself or in common extensions.
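
As a concrete illustration of the value-restriction point above, here is a minimal sketch of the rule at work. It is shown as OCaml toplevel input for convenience (OCaml's typechecker enforces a relaxed variant of the same SML 97 rule); the binding names are invented for the example.

    (* The right-hand side is a syntactic value (a function abstraction),
       so the binding is generalized to the fully polymorphic 'a -> 'a. *)
    let id = fun x -> x

    (* The right-hand side is a function application, not a syntactic value,
       so it is NOT generalized: the toplevel reports a weak type such as
       '_weak1 list -> '_weak1 list, pinned down at the first use. *)
    let one_shot_map = List.map (fun x -> x)

    (* Likewise for a freshly allocated mutable cell.  Generalizing this
       binding to 'a list ref would let us store an int and later read it
       back as a string -- exactly the unsoundness the restriction forestalls. *)
    let cell = ref []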

SML was the first realistic general-purpose programming language to enjoy rigorous, engineering-grade design analysis comparable to what we routinely do for a proposed bridge or airframe. SML/NJ is the reference implementation of SML, constructed with the active assistance of the SML language designers. Mythryl inherits this theoretical foundation and this codebase, and adapts them to production use in the open source tradition.

Further reading

The definitive work on the SML language is

The Definition of Standard ML (Revised)
Robin Milner, Mads Tofte, Robert Harper, David MacQueen
MIT Press 1997 ISBN 0-262-63181-4

The definitive work on the SML language design analysis process is

Commentary on Standard ML
Robin Milner, Mads Tofte
MIT Press 1991 ISBN 0-262-63137-7

You will find the former very slow going without the latter to illuminate it!

If you are new to this style of operational semantics, you may find useful background introductory material in:

Types and Programming Languages
Benjamin C. Pierce
MIT Press 2002 ISBN 0-262-16209-1

If the above leaves you hungering for more, you might try

Advanced Topics in Types and Programming Languages
Benjamin C. Pierce (editor)
MIT Press 2005 ISBN 0-262-16228-8

Some more recent works on ML semantics:

Understanding and Evolving the ML Module System
Derek Dreyer 2005 262p (thesis)
http://reports-archive.adm.cs.cmu.edu/anon/usr/anon/home/ftp/usr0/anon/1996/CMU-CS-96-108.ps

A Type System for Higher-Order Modules
Dreyer, Crary + Harper 2004 65p
http://www.cs.cmu.edu/~dreyer/papers/thoms/toplas.pdf

Singleton Kinds and Singleton Types
Christopher Stone 2000 174p (thesis)
http://reports-archive.adm.cs.cmu.edu/anon/usr/anon/home/ftp/usr0/ftp/2000/CMU-CS-00-153.ps

Comments and suggestions to: bugs@mythryl.org

Mindfulness Is All, But It Isn’t Enough

Studio engineering is about mindfulness, pure and simple. All you have to do is keep an eagle eye on the levels, the limiting, the transport, the sends and returns, the headphone mix, the monitors, the producer, and the artists. If you are being seconded, the second or assistant engineer may be a seasoned studio veteran or an apprentice, and if it’s the latter then you have to keep an eye on them too. At the same time, if the producer is inexperienced, they are probably sitting next to you, fingers nervously twitching, because they can see that there’s nothing to engineering besides knob-twiddling and they think they can do that perfectly well. It doesn’t occur to them to ask themselves how a pianist might feel if the producer barged into the studio, shoved the pianist aside, and began to play the piano part for them. So part of your job is to smoothly and gracefully head off those impulses so that they continue to keep their hands off your instrument. Sometimes the artists themselves sit next to you, and that creates conflicts: part of the engineer’s obligation is to respect the artist’s wishes, and when the artist wishes to put their hands all over the board you have an obligation to let them and correct the damage later.

Sometimes, however, in the interests of doing your job properly you can be devious. Jimi Hendrix liked to get the sound he wanted by turning up the gain on various instrument tracks. This would have been perfectly fine, but he couldn’t tell the difference between the monitor controls and the recording controls, and thereby hangs a tale.

Let me pause here to explain a psychological phenomenon. An important part of recording is simply making sure you have enough signal and not too much. You do this by setting the gain — the amount of signal going to the recorder — at a level which the equipment is designed to accept. Normally this is indicated by some sort of meter. If you set it too low, the normal background noise that your equipment produces will be noticeable; if you set it too high, it will exceed the recorder’s ability to accept it and it will become distorted. So in setting up a recording you adjust all the instrument levels to be as loud as reasonable, which is pretty much equally loud across all tracks. But when you hear a group of instruments playing on a recording, the levels of the individual instruments aren’t equal at all. Some stand out, others are background, and the balance between them is important for the listener’s enjoyment. The vocal, for instance, may be in front of the accompaniment; there may be violins floating in the background, a guitar a bit more prominent.

To make this work, in practice you have a completely separate set of controls for what you hear in the control room, so that you can create that balance and simulate what the final mix may sound like without disturbing the levels going to the recorder. You can, and in fact you should, meddle around with the monitor mix. You may need, say, to hear whether the rhythm guitar is playing precisely on the beat, in which case you push the rhythm guitar level way, way up, above the other instruments — not on the recorder, on the monitor.

Inexperienced folks invariably do this by making the rhythm guitar louder, but after a while they want to hear things in balance again, so they make the other instruments loud enough to match, and lo and behold, everything is back the way it was…except all the controls are now closer to maximum than they were before. This will continue until all the knobs are at ten (or eleven, if you’re Spinal Tap), there’s no more up to push things to, and the client looks at you in bewildered frustration and says “Fix it.” Whereupon you pull them all down and reset the mix, and then the whole thing happens again. And again.

One of the first things you learn as an engineer is to resist the impulse to turn something up and instead turn everything else down. Do this judiciously, and the knobs stay somewhere around the center of their travel (which is called Design Center), which is where they belong. And yet, after you’ve watched a few clients, you realize that the temptation to turn things up rather than down is damn near irresistible. It’s built into the human psyche somehow. We want more. Of everything.

Unfortunately it’s not your job to educate the client on what they should do when it conflicts with what they naturally want to do. So you get creative. Mindfulness, in fact, is not enough. First and foremost you are a prestidigitator. You succeed through misdirection.

In Jimi’s case, Gary Kellgren caused a special control box, covered with knobs identical to the ones on the board, to be created especially for him, so that he could adjust the mix to the sound he liked. A fat cable emerged from the box and disappeared into the console. The box did nothing. The knobs weren’t even wired up inside. But it made Jimi happy. He would tweak and poke and somehow, through the miracle of suggestion, he got what he wanted. Meanwhile Gary no longer had to tear his hair out because Jimi had hopelessly screwed up his record levels.

When I took over, it was patent that unless you maintained an iron resolve Jimi would be all over the board whenever he felt like it. But I was overcome with shame at having to put one over on the artist, so I sat Jimi down and made a solemn pact with him. I built a fence down the middle of the console — a literal physical barrier, made of marker tape and popsicle sticks. On my side I had the record controls for the night’s instruments. I patched everything else — all the monitor controls, the effects, toys, everything — to the other side. If Jimi’s body language so much as suggested he was about to reach over the barrier and touch my controls, I slapped him. But we were using a Datamix One console with about twenty-four faders plus submixes and masters, which meant that during the overdub phase I had three or four faders and Jimi had around thirty. It was enough. He gazed out over his expanse of controls and was happy; and furthermore, they did stuff. When he turned knobs, shit happened. Occasionally earsplitting shit happened, because Jimi was nearly deaf from standing in front of his two ten-foot Marshall stacks being massaged by 120dB of top-quality blast. At those times I grabbed my handy earplugs and jammed them in, trusting to one of his entourage to plead with him to embrace reason.

Regardless of these mild contretemps, Jimi could hear when he needed to hear, and his musical judgment was impeccable. I’ll tell you more about that later.

(To be continued…)

Flying the Radian

We flew the new Radian Pro BNF at Lighthouse Field, using the Spektrum DX8 transmitter. Binding was flawless, and the Radian is beautiful…it floats like the proverbial butterfly. A bit too much so, perhaps; I'm going to have to learn how to bring it down much closer to us than I've so far been able to do. It's so light in the air that I keep overshooting.

Someone asked if that's a large cross in the background. It isn't; it's the Lighthouse Field sign seen end-on.

Cynbe launches the Radian at Lighthouse Field

(Of course, this isn’t all fun: our r/c airplanes and gliders are test platforms for the control systems I use in my art installation work, like Sandy’s Fan Club. Cynbe and I need to continually experiment with new kinds of electronics to stay current with, well, the state of the art of my art…)

The Beginning is Near

After five years of essentially non-stop development on the Mythryl programming language, I can finally say we’re approaching the possibility of a real paradigm shift that will change the entire way we think about programming.

I’m going to get a bit technical here; this is not a publicity release, it’s meant for the bitheads in our audience. Here we go.

We're moving away from a pure control-flow-based programming paradigm to a mixed paradigm in which we think in terms of control flow ("where is the program counter now, and where do I want it to be?") at the low level (within an individual thread), but we start to think in terms of data flow at the high level: In essence each thread becomes a node in a Petri-net-style graph which "fires" when (sufficient) input arrives on its IN edges, after which it thinks a bit and then fires out one or more results on its OUT edges.

This is actually a very natural division of layers in multiple ways:

At the human-cognitive level, you may have noticed that when you ask a hacker how quicksort works you’ll usually get text pseudocode on the whiteboard (with maybe some datastructure pointers-and-arrows graphics) but when you ask how gcc works, you’ll usually get boxes and arrows with maybe some text inside the boxes and labelling the arrows.

What is happening here is that programming-in-the-small can be efficiently humanly understood in terms of

“The program counter jumps to this line, and then to that line, and …”,

but that programming-in-the-large can only be efficiently humanly understood in terms of data flow:

“The input text flows from the disk into the lexer,
emerging as a stream of tokens, which flows into the parser,
emerging as a stream of parsetrees, which flows into the typechecker,
emerging as a stream of (symboltable, intermediate code) pairs,
which flows into the static-single-assignment module,
emerging as a stream of SSA graphs, which flow through the optimizer modules,
finally flowing into the code generator module from which it
emerges as object code files.”

At this level of description the “program counter” concept merely gets in the way.
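
To make the contrast concrete, here is a minimal sketch of the threads-as-dataflow-nodes idea: three small, single-purpose threads wired together by channels, each firing when input arrives, transforming it, and re-emitting it downstream. It is written in OCaml rather than Mythryl, purely so the sketch is runnable with a widely available toolchain; OCaml's Event module follows the same CML-style channel design mentioned later in this archive, and the node names (source_node, square_node, sink_node) are invented for the example.

    (* A toy three-node dataflow pipeline.  Each node is an ordinary thread
       that blocks until input arrives on its IN channel, transforms it,
       and re-emits on its OUT channel.  One way to build it:
       ocamlfind ocamlopt -package threads.posix -linkpkg pipeline.ml -o pipeline *)

    (* Node 1: source -- emits a stream of "tokens" (here just 1..5),
       then 0 as an end-of-stream marker. *)
    let source_node out =
      for i = 1 to 5 do Event.sync (Event.send out i) done;
      Event.sync (Event.send out 0)

    (* Node 2: transformer -- fires whenever input arrives, squares it,
       and passes it downstream.  The program counter lives here, inside
       one small single-purpose thread; the large-scale structure is the
       wiring between channels. *)
    let square_node inp out =
      let rec loop () =
        let tok = Event.sync (Event.receive inp) in
        Event.sync (Event.send out (tok * tok));
        if tok <> 0 then loop ()
      in
      loop ()

    (* Node 3: sink -- consumes the transformed stream until the marker. *)
    let sink_node inp =
      let rec loop () =
        match Event.sync (Event.receive inp) with
        | 0 -> ()
        | v -> Printf.printf "got %d\n" v; loop ()
      in
      loop ()

    let () =
      let c1 = Event.new_channel () and c2 = Event.new_channel () in
      let a = Thread.create source_node c1 in
      let b = Thread.create (fun () -> square_node c1 c2) () in
      let c = Thread.create sink_node c2 in
      Thread.join a; Thread.join b; Thread.join c

Note that the per-thread code is still ordinary control flow; only the large-scale structure (which channel feeds which node) is described as dataflow.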

Doing dataflow computation at the low level is very inefficient. I once tried to design a hardware-level dataflow machine, and wound up with immense respect for the von Neumann architecture, which solves many problems so efficiently and unobtrusively that you don't really even realize they exist until you try building an alternate design.

Conversely, doing high-level programming in terms of control flow is unnatural and difficult: One winds up thinking "Oh jeez, the program counter is in the code generator right now but now I need it in the C pre-processor — do I have to stick a jump in codegen just to make cpp wake up? Ick!"

Which is to say, thinking in terms of a single hardware program counter at high levels of program structure just tends to result in poor separation of concerns.

So while I expect that we will continue to think in terms of control flow and program counters at the intra-thread level for the foreseeable future, I expect that threads will mostly become pretty simple, small and single-purpose (as the Unix "tools" philosophy would approve of), while most of the structure and complexity of our programs moves into a dataflow paradigm in which we think of threads as nodes in a graph, where the relevant consideration is the flow of data along the edges of the graph, being absorbed, transformed and then re-emitted by the thread nodes on the graph.

We will develop tools, terminologies and methodologies specific to this new dataflow layer of our software systems, and they will be pervasively visual in nature, driven by visual display of graphs and interactive editing and de/composition of those graphs. (Not that program text will go away, even at this level; command languages for doing systematic global edits on threadgraphs will continue to need the abstraction that symbolic language alone provides. But the subjective programming experience will change from primarily verbal to primarily visual.)

There has been much (justified!) discussion of Steve Jobs as visionary of late, but to me the real visionary was and remains Alan Kay. In 1970 Alan Kay looked at IBM mainframes and said, "Someday that compute power will fit in something the size and weight and resolution of a magazine, and we will have to program and use them, and IBM's Job Control Language won't be up to the job — how can we cope?"

In his thesis, "The Reactive Engine", he in essence invented the iPad and the whole GUI paradigm we now take for granted, along with object-oriented programming in the form of Smalltalk to make it all run.

A decade later, Steve Jobs visited Xerox PARC, saw the result of Alan Kay's vision actually running, and recognized that here was something "insanely great"; he went forth and preached it as gospel and introduced it to the millions, and that was unquestionably an act of genius in its own right. But the fact remains that it was Alan Kay who conjured up this vision out of whole cloth — a leap of intellect probably unmatched before or since in computing, perhaps in all of human history.

Even today, you can read Alan Kay’s early papers and they seem cutting edge except for the actual technology available then and now. I recommend the exercise, in particular his thesis and the original Smalltalk-72 manual.

In particular — returning to the immediate subject at hand — Alan Kay at the time compared the subjective experience of programming in Fortran or Pascal to the experience of pushing blocks of wood around on a table: The data were passive, and stuff happened only to the extent that you explicitly programmed every detailed change. (About the same time, others were somewhat more picturesquely comparing such programming to “kicking a dead whale down the beach with your bare feet”.)

By contrast, Alan Kay compared the subjective experience of programming in Smalltalk to working with trained circus seals: they each know their tricks, and all you have to do is tell them when to perform.

I love the analogy — thirty years later it sticks vividly in my mind — and it provides a wonderful aspiration for where we would like to go with our programming languages, but I think Alan was giving in a bit to wishful thinking and hyperbole at the time. By the time wildman Smalltalk-72 evolved into sedate buttoned-down Smalltalk-80 and then decayed into Objective C and C++ and Java, an “object” had fallen from being the truly autonomous agent that Alan so vividly envisioned to being nothing more than a C function invoked via runtime vtable lookup.

This wasn’t for lack of trying; with the hardware and software technologies available in the 1970s, there was simply no other way to make OOP viable for doing real-world stuff, and Alan Kay very much wanted to do practical stuff now, not merely sketch things for future generations to build.

But today we live ten years beyond the science-fiction-colored world of "2001: A Space Odyssey", with literally gigabytes of RAM available on commodity machines — multitouch palmtops, even! — together with multicore processors clocked at gigahertz and software technologies Alan Kay could only dream of, and today we are in a position to actually realize Alan Kay's fevered 1970 visions of possible computing futures.

That is what I see happening with Mythryl multithread programming: We are entering a world in which the elementary unit of computation is in fact the active agent that Alan Kay envisioned: An autonomous thread ready at all times to respond to asynchronous input by performing an arbitrary computation and generating asynchronous output in its turn.

I believe this to be arguably the biggest paradigm change in the computing world since stored programs: Bigger than High Level Programming (compilers), bigger than Structured Programming (if-then-else), bigger than Object Oriented Programming, bigger than Graphical User Interfaces.

All of those were based on the same old control-flow programming paradigm of getting things done by pushing the program counter from here to there. In essence, today the mainstream computing world is still programming in the same sequence-of-instructions-controlled-by-GOTOs paradigm in which ENIAC was programmed back in the 1950s.

But with Mythryl multithread programming we now are entering a whole new world where programmers get things done by wiring up data-flow graphs, and we have the challenge of starting over from scratch to re-invent computing.

The lazy man might view this with dread — lots more work to do.

But the true adventurer will view this with delight — an entire new world to explore! This is the computing equivalent of Columbus discovering the New World. (Although we may hope that this Columbus will not deny its existence to his dying day.)

This is the sort of paradigm shift in which names are made, fields founded, and worlds changed. Personally, I’ve been working monomaniacally toward this moment for several decades; I’m almost quivering with anticipation.

– Cynbe

Bandwidth via Parallelism, or Composability via Concurrency?

(In an interesting article, Paul Graham remarked: “It would be great if a startup could give us something of the old Moore’s Law back, by writing software that could make a large number of CPUs look to the developer like one very fast CPU. There are several ways to approach this problem. The most ambitious is to try to do it automatically: to write a compiler that will parallelize our code for us. There’s a name for this compiler, the sufficiently smart compiler, and it is a byword for impossibility. But is it really impossible? Is there no configuration of the bits in memory of a present day computer that is this compiler? “)

Cynbe replied in an email that, AFAIK, never got delivered. To fill out the discussion, here it is:

Mostly spot-on, as usual. I do have one comment:

You are close to the mark here, but not quite on it. (I speak as
someone who has been hoeing this garden for several decades, and
hoeing this row for the last five years: I’ve been productizing
the SML/NJ compiler — the last great thing out of Bell Labs before
it Lucented and tanked — under the rubric “Mythryl”.)

The real issue is more about composability via concurrency
than about bandwidth via parallelism.

The latter is not unimportant by any means, but at the end of the
day the overwhelming majority of software will run fine on one modern
CPU (as far as bandwidth goes) and the majority of the remainder will
do so if a handful of hotspots are parallelized by hand at no particularly
great proportional coding effort. There are a few application classes
where CPU is indeed a scarce resource, but they are almost without
exception "embarrassingly parallel". This is something which I think
none of us really expected back in the late 1970s and early 1980s when
parallel computing started receiving serious academic attention.

But the really critical issues are software security, reliability, maintainability,
and development costs in time-to-ship terms — the ones that the
regular doubling of *software* system size is exposing as the critical
Achilles heel of the industry.

Mostly-functional programming a la OCaml, SML and (ahem) Mythryl is
really the only light on the horizon here. I can say this with some
confidence because it takes 15+ years for anything to make it from
the lab to industry, and if you look around there just isn’t anything
else out there ready to make the jump.

Mostly-functional programming is pre-adapted to the multicore/manycore
world because it has only about 1% as many side-effects as conventional
imperative code, which translates directly into only 1% as many race
conditions, hung-lock bugs, etc.: To the extent that the need to
transparently use multiple cores actually does drive the industry as
you project, it will drive the industry to mostly-functional
programming. (Pure-functional? A bridge too far. But let’s not open
that discussion.)

Cheng's 2001 CMU thesis "Scalable realtime parallel garbage collection
for symmetric multiprocessors" — http://mythryl.org/pub/pml/ — shows
how to make this sort of computation transparently scalable to the
multicore/manycore world, and John H. Reppy's recent Manticore work
(http://manticore.cs.uchicago.edu/) shows how to put clean lockless
concurrent programming on top of that to yield just the sort of
automatically-parallel coding system you are thinking of.

But the real significance of all this is not the way it will allow
a few large apps to scale transparently to the upcoming 128-core (etc)
desktop chips, but the way it will let vanilla apps scale to ever-doubling
lines-of-code scales without becoming completely incomprehensible and
unmaintainable.

You have probably noticed that if you ask a hacker how quicksort works
you get back pseudocode on the whiteboard, but if you ask how the
linux kernel works you get boxes and arrows.

This is not an accident; it is because the imperative push-the-program-
counter-around-with-a-stick model of computing simply doesn’t scale.

What does scale is concurrent programming based on coarse-grain
dataflows-between-modules. That is what the boxes-and-arrows on
the whiteboard tell you, and that is what a scalable software
paradigm has to implement.

Reppy’s Manticore CML stuff on top of Cheng’s scalable-realtime-gc
on top of basic mostly-functional programming gives one exactly
that. (Give or take a few obvious fixes like substituting message-queues
for synchronous message-slots.)
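
A minimal sketch of that last fix, shown in OCaml for convenience rather than Mythryl: a buffered message queue built from a mutex, a condition variable and an ordinary FIFO, in place of a synchronous message-slot. The mqueue type and its operations are invented for the example, not any particular library's API.

    (* Requires the threads library (Mutex and Condition). *)
    type 'a mqueue = { q : 'a Queue.t; m : Mutex.t; c : Condition.t }

    let create () =
      { q = Queue.create (); m = Mutex.create (); c = Condition.create () }

    (* send never blocks: the message is buffered and any waiting receiver is woken. *)
    let send mq x =
      Mutex.lock mq.m;
      Queue.push x mq.q;
      Condition.signal mq.c;
      Mutex.unlock mq.m

    (* receive blocks only while the buffer is empty. *)
    let receive mq =
      Mutex.lock mq.m;
      while Queue.is_empty mq.q do Condition.wait mq.c mq.m done;
      let x = Queue.pop mq.q in
      Mutex.unlock mq.m;
      x

With a synchronous CML-style channel, send would block until a receiver arrived; buffering decouples sender and receiver, at the cost of potentially unbounded queue growth.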

That gives it a very real opportunity to move the industry a quantum
leap beyond the 1970±5-era software technology represented by C++/Java.
And someone a very real opportunity to make a mint, which was your
original point in the post.

Which is why I’ve spent the last five years on that. 🙂

Anyhow, just my $0.02.

Life is Good!

— Cynbe