Musings On Auto-Mation

Ctein

Many of you have read about Tesla’s imminent wide release of their “Full Self Driving” (snerk) software to the general ownership. Consequently, I’ve been reading many articles and watching a metric ton of YouTube videos created by the current (small) group of beta testers: a kind of virtual driver’s training class. You can decide if this is a case of anticipating Santa or getting to know one’s enemy.

Of course, I may not ever see FSD in any form in my Model X. I’ve not decided if FSD falls into the category of Zeno’s Software or Godot’s Software. I guess it’ll depend upon whether the ever-slipping deadline (up to four years and counting) represents a converging or diverging series. Your guess is as good as mine. I figure if I’m lucky I’ll get the “beta” in November. Maybe. (Side note: some components of the Tesla software have been in “beta” release for years. Beta is no guarantee of there ever being a final release when it comes to Tesla.)

Unfortunately for my sanity, I also find myself reading some of the comments accompanying these articles and videos. Always a bad idea. Note to self (and everyone): do NOT read online comments if you wish to retain a favorable view of humanity. Ever!

The conversations inevitably degenerate into flame wars between the Tesla Fanbois and the Deniers, with neither side showing particular understanding of the science or technologies involved. I hope, with this column, to introduce a modest degree of enlightenment.

The current approach to autonomous vehicles, used by all the players, is some flavor of machine learning, a.k.a. “Artificial Intelligence.” (But not really; that’s a computer-hype marketing term. These programs bear as much resemblance to real artificial intelligence as Tesla’s FSD does to real self-driving cars.) There are many different approaches to machine learning, many different architectures. Designing the best one for a given problem is, in itself, something of an art. So is successfully implementing it.

Will this approach, in fact, lead to self-driving vehicles? At this point, we have no idea. On the one hand, machine learning is capable of generating some truly astonishing programs. The Topaz image processing programs I’ve touted in several columns are based on AI-derived algorithms. The results they produce are sometimes extraordinarily, even unbelievably good. There are times when I honestly can’t tell if the programs are simply making shit up, because it is such utterly convincing shit. Other times, they are meh. (That’s an important point, and I’ll get back to it.)

In areas of scientific imaging and image construction, the results have been, on occasion, even more unbelievable. Algorithms for doing synthetic microscopy or holographic rendering that run a thousand to a million times faster than anything a human being ever came up with. Machine learning can be utterly amazing.

But… to achieve some real level of automotive autonomy, it’s going to have to be. Tesla’s current Autopilot, which is probably the best commercially available system, barely hits a low Level III on the autonomy scale. That is, you don’t have to pay CONSTANT attention to drive safely, but you do have to pay very frequent attention, because the car will make mistakes. I’ve discussed that in previous columns. Depending upon where I’m driving, I have to wrest control away from Autopilot (lest it crash my car) once every 30 to 500 miles. (Yes, awfully variable results.)

To get to any useful level of autonomy, the car has to do a whole lot better than that! What would I consider useful? Well, if I could ignore the car from the time I get on Interstate 280 in Daly City until I transfer from Highway 85 to Highway 17, I would be very happy! That’s 50-60 minutes of time I could spend reading, or writing, or whatever. Call it a very low Level IV capability.

(Oh sure, I’d love to have a car that would take me from my house to, say, Santa Cruz all by itself and even let me nap, if I so desired. But let’s get real!)

That’ll require a thousand-to-ten-thousand-fold improvement in software reliability before I’m going to trust it with my life. If not my life, my $85,000 car!
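For the curious, here’s the back-of-the-envelope arithmetic behind that figure, sketched as a few lines of Python. The 30-to-500-mile intervention rates are mine, from my own driving above; the target of roughly one intervention per half-million miles is purely my assumption about what “trust it with my life” might mean, somewhere in the general neighborhood of human crash statistics, and not an official number from anybody.

    # Rough sanity check on the "thousand-to-ten-thousand-fold" claim.
    # Current figures are my own observed intervention rates (see above);
    # the target is an assumed comfort threshold, not an official number.
    current_miles_per_intervention = (30, 500)   # my observed range
    target_miles_per_intervention = 500_000      # assumed "trust it" threshold

    for miles in current_miles_per_intervention:
        factor = target_miles_per_intervention / miles
        print(f"one intervention per {miles} miles -> ~{factor:,.0f}x improvement needed")

With those assumptions, the good end of my range needs about a 1,000-fold improvement and the bad end closer to 17,000-fold. That’s the ballpark I’m talking about.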

Is current AI technology up to the task? Nobody knows… Although everyone is betting tens of billions of dollars on it.

Here’s something we do know, though. Despite what you may read on the Internet, it’s not primarily a sensor issue! It’s a judgment issue! Watch a bunch of the YouTube videos and you’ll see what I’m talking about. Rarely is a driver having to take over from the car because of a problem that radar, or lidar, or stereo vision would solve. Yes, one can argue that, in theory, this, that, or the other combination of sensor technologies ought to be better. But the real-world results, so far, say that isn’t where the fundamental problems lie.

(In fact, I was thinking of making this column about several such cases I encountered on the road in recent travels, where no sensor technology or combination thereof would’ve handled the problem. Instead, I decided to go more general. Maybe I’ll write about those next time. Or not. I’m fickle.)

One of the common misconceptions out there is that this can be solved on a case-by-case basis. The programmer’s version of, “Doctor, it hurts when I do that.” “Well, then don’t do that!” Problem is, machine learning algorithms really HATE to be told what to do in the specific. The way they work is you give them a goal, then you hand them a gazillion individual cases to look at (lessons, if you will), and they trial-and-error their way to that goal. Of course, it’s not a random walk; the magic of these architectures is that they’re really good at figuring out algorithmic tricks that get to that goal.
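If you’ve never seen what “hand it a goal and a pile of cases” looks like in code, here’s a deliberately tiny sketch: supervised learning by gradient descent on a one-parameter model. It bears the same relation to a production driving stack that a paper airplane bears to a 747, and none of it is anybody’s actual code.

    import random

    # The "gazillion cases": (input, desired output) pairs. The hidden
    # relationship here is simply y = 3x, plus a little noise.
    cases = [(x, 3 * x + random.uniform(-0.1, 0.1)) for x in range(100)]

    w = 0.0                  # the model's single adjustable parameter
    learning_rate = 1e-4

    # The "goal" is to make the prediction error small; the program
    # trial-and-errors its way there by nudging w after every case.
    for _ in range(50):      # fifty passes over the lessons
        for x, y in cases:
            error = w * x - y
            w -= learning_rate * error * x

    print(f"learned w = {w:.2f}  (nobody ever told it the rule was 'multiply by 3')")

The point is that nowhere did anyone write “multiply by three”; the rule emerges from the examples. Scale that idea up by a few hundred million parameters and you have the flavor of the thing.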

But they do it through a generalized interpretation of all those gazillion cases. Oh sure, you can give them a certain number of specific rules; you have to! But get too specific and give them too many, and you either end up with contradictory answers that produce very unpredictable behavior, or the algorithm becomes so focused on satisfying the specific rules that the overall results degrade.

In other words, you can’t fix all those driver overrides by just piling on the IF-THEN-ELSE or CASE statements, like you would in conventional programming. Do that and you’ll end up with a result a lot like HAL 9000.
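For the non-programmers, here’s a caricature of what that kind of case-by-case patching looks like. Every rule, name, and threshold below is hypothetical; the point is only how quickly the patches start stepping on each other.

    # A caricature of patching driving behavior one complaint at a time.
    # All rules and numbers are made up for illustration.
    def target_speed(situation, speed_limit):
        # Patch 1: stop for anything in the lane.
        if situation["object_in_lane"]:
            return 0
        # Patch 2 (added later): don't stop for plastic bags.
        # Unreachable -- Patch 1 already returned. Reorder the two, and
        # you now sail through every real obstacle that got mis-tagged
        # as a bag. Either way, somebody's override didn't get fixed.
        if situation["object_in_lane"] and situation["object_is_plastic_bag"]:
            return speed_limit
        # Patch 3: slow down in rain... and so on, forever.
        if situation["raining"]:
            return speed_limit * 0.8
        return speed_limit

    print(target_speed({"object_in_lane": True, "object_is_plastic_bag": True,
                        "raining": False}, 65))   # prints 0: the bag patch never fires

Conventional code can be patched like this because it only ever does exactly what it’s told. A learned model doesn’t have an obvious place to bolt those patches onto, and the more of them you force in, the more you distort what it learned from everything else.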

Somehow, someway, you have to make sure that those gazillion cases you feed the program are an accurate and sufficiently complete representation of what a car is going to encounter in real life.

In truth, it’s impossible. When you get it wrong, bad things can happen. Like image-recognition software that brands people of color as gorillas, because a predominantly white-techbro culture didn’t notice that their training cases were insufficiently diverse.

Or systems used by the police misidentifying 20% of the California legislature as known criminals (sure, insert your obvious jokes here). That likely goes to a second problem with these systems, called “underdetermination,” and it seems to be inherent in known architectures. It’s like this:

Suppose you do come up with what appears to be a perfectly real-world-representative data set to feed your baby AI. Depending on exactly what starting point you give it, it’ll come to different conclusions about the best fit to that data. Those answers are local minima in a very large, very-many-dimensional solution space; they are like little individual gravitational wells that the program settles into, depending upon which one it started off “closest” to. You can easily get a half-dozen very different algorithmic solutions, all of which do extremely well on your teaching set.
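Here’s a toy you can actually run to see those gravitational wells. The bumpy loss function below stands in for the fit-to-the-teaching-set score; it’s a made-up one-dimensional example, nothing like the million-dimensional real thing, but the behavior is the same: plain gradient descent started from different random points settles into different minima.

    import math, random

    # A deliberately bumpy "how well am I fitting the data" score.
    def loss(w):
        return math.sin(5 * w) + 0.1 * w * w

    def gradient(w):
        return 5 * math.cos(5 * w) + 0.2 * w

    for seed in range(6):
        w = random.Random(seed).uniform(-4, 4)   # random starting point
        for _ in range(2000):
            w -= 0.01 * gradient(w)              # settle downhill
        print(f"seed {seed}: ended at w = {w:+.2f}, loss = {loss(w):.3f}")

Six starting points, several different resting places, all of them perfectly respectable “answers” as far as the training score is concerned. In a real network the same thing happens across millions of dimensions, and the resulting models can behave very differently once they leave the training data behind.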

Set those solutions loose in the real world and they will be wildly different in their accuracy. It doesn’t matter how good you thought your teaching set was. There is no way you can produce a finite set that accurately and proportionately represents every possible variable and combination of variables where the program might decide to look for a pattern that it can home in on.

The only way you end up with a really robust and reliable algorithm is to take it out into the real world and pile on more gazillions upon gazillions of test cases from everyday life. You keep your fingers crossed and hope and pray that the algorithm continues to improve and doesn’t merely level out at some unsatisfactory level.

That’s what happened to Tesla’s Autopilot software. It hit its limits, and no matter how much more data the programmers threw at it and how much they tweaked the algorithms, they got diminishing returns. After years of effort and billions of driver-miles worth of data, they’d reached the end of that road (so to speak). It was back to Square One for a complete rewrite. Hence, FSD.

No way to figure out if that’s gonna happen to your AI without putting wheels on the road. In fact, it’s pretty much guaranteed it will if you don’t.

But wait, there’s more. Remember what I said about HAL 9000? Well, Tesla found itself in a similar situation. They discovered that when there was a conflict between the information being provided by their different sensor systems, it made the FSD beta unhappy. You’d think that more kinds of sensory input would be better… and they are if you’ve got a human brain with a billion times the computing power of anything we have in silicon. But these things are DUMB and easily confused.

As a result, Tesla went back to the drawing board and rewrote the FSD software to use ONLY the visual camera systems and ignore what the radar said. The result was a whole lot more reliable. Less information produced better answers. (Which, y’know, is sometimes true of human beings, too!)
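Here’s a hypothetical sketch of why “more sensors” can make a dumb system flakier rather than smarter. None of the numbers, thresholds, or logic below comes from Tesla; it’s just the simplest possible fusion rule colliding with a classic radar failure mode (ghost returns off overpasses and road signs).

    # Naive fusion: trust whichever sensor reports the nearer obstacle.
    # All distances and thresholds are invented for illustration.
    BRAKE_DISTANCE_M = 30

    def should_brake(camera_dist_m, radar_dist_m):
        return min(camera_dist_m, radar_dist_m) < BRAKE_DISTANCE_M

    # The camera steadily tracks a car about 80 m ahead; the radar throws
    # in occasional ghost returns at ~25 m (say, an overpass up ahead).
    readings = [(80, 80), (79, 25), (78, 78), (77, 24), (76, 76)]
    for camera, radar in readings:
        action = "BRAKE" if should_brake(camera, radar) else "cruise"
        print(f"camera {camera} m, radar {radar} m -> {action}")

A human brain weighs the two stories and shrugs off the ghost; this thing slams the brakes every other frame. Drop the radar and the flakiness goes away, along with whatever genuine information the radar was contributing. That, roughly, is the trade Tesla made.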

Now maybe if we had computing systems that were a million times faster, they could have done a better job of understanding that conflicting information. Or maybe not. We won’t know that we’ve hit an algorithmic wall until we hit it. Hopefully that will be on the other side of Level IV… but maybe not.

Meanwhile, I (im)patiently await this rough software, its hour come round at last, to slouch its way to my Tesla.

More of Ctein’s ruminations at https://ctein.com