Around July 10, Tesla began sending out another beta update to its suite of semi-automated driving tools, collectively and misleadingly known as Full Self-Driving (FSD), with this latest version called Beta 9.0. Drivers who have qualified to participate in the public-streets beta testing of the system have been uploading videos of their experiences, and while it’s certainly impressive in many ways, it’s also clear there’s still a long way to go, and there are even some downright dangerous behaviors. Let’s take a look at some of these videos.
This update is especially significant because it marks the first FSD update designed to rely solely on camera input. Tesla has stopped installing radar transceivers in the Model 3 and Model Y, and is transitioning to Tesla Vision, its all-camera semi-automated system.
Other significant changes include an all-new visualization system with much more detail, and through this visualizer you can see that the system is often able to detect specific types of vehicles, such as distinguishing between cars, SUVs, pickup trucks, people on scooters or bikes, and so on. That’s very cool.
Also in these visualizations you can see that brake lights are now recognized, a big improvement, though it does not appear yet that the system is able to identify turn signals.
In many ways, Beta 9 is quite impressive, especially when you think about the overall context and how far we’ve come since Stanley, a modified Volkswagen Touareg, won the 2005 DARPA Grand Challenge.
There are a number of videos showing the system performing the driving task (remember, it’s still a Level 2 system, so the human in the seat must remain ready to take over at any moment, which does happen even in the videos where it’s performing very well) with a great deal of skill and accuracy, like this one:
There was at least one disengagement there, but overall, FSD Beta 9 seemed to work well. Light and weather conditions were close to ideal, but there are some videos of it running at night, too:
That one has lots of narrow roads with parked cars on either side, but fairly minimal moving traffic, so it’s not terribly challenging; the daytime one above showed many more dynamic elements for the system to deal with.
Okay, those are some positive examples. Let’s get to the not-so-positive stuff now, which is arguably more important than the demonstrations of it generally working well, because unlike other AI systems that may be used to identify photos of people holding hams or to compose plausible-sounding blog posts about taillights, when something goes wrong in an AI-controlled car, there’s real danger, up to and including the deaths of people outside and inside the car.
With that in mind, let’s see where FSD Beta 9 screws up. We’ll start with this very well-documented video of a drive in excellent weather and visibility in a challenging, dense urban environment, San Francisco. There are a number of genuinely alarming behaviors in this video, which the owner was kind enough to flag with timestamps (listed below the video):
0:00 Introduction
0:13 First Mistake
0:40 Discussing new User Interface
2:12 New Brake Light Detection
2:34 Disengagement
3:12 Car Hits Bush
3:38 Wrong Lane!
4:30 Unprotected Right
4:50 Left Turn Fail
5:25 Near Collision
6:12 Pedestrians in Street
6:57 Maneuvers Static Object in Road
7:45 My Opinion on Beta 9 vs Beta 8.2
9:12 Disengagement
9:25 Disengagement
10:15 Update to Green Light Reaction
10:30 Double Parked Vans
10:51 Outro
Some of these are very vivid examples of the differences in human and machine driving, like the part where the car scrapes through an overgrown bush on a median, much to the displeasure of the owner, who seems to prefer his paint unscratched.
This is one of those situations that a human would have no trouble detecting and avoiding, but the Tesla, following its hard lane-keeping rules, is completely unaware of the branches and plows right through them. This could be a good example of the difference between AI-focused bottom-up reasoning (follow sensors and rules) and human-style top-down reasoning (I know what branches are, and I want to avoid them).
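To make that distinction a little more concrete, here’s a minimal, purely hypothetical sketch in Python, and to be clear, it does not resemble Tesla’s actual software in any way; the function name, obstacle classes, and thresholds are all invented for illustration. It just shows how a bottom-up planner that only reacts to a whitelist of recognized obstacle classes will happily drive through anything its classifier never labels:

```python
# Hypothetical, illustrative only: a rule-based lane-keeping planner that
# only avoids objects its perception layer has explicitly classified.
KNOWN_OBSTACLES = {"car", "truck", "pedestrian", "cyclist"}

def plan_lateral_offset(detections, lane_center=0.0):
    """Return a lateral offset in meters from the lane center.

    `detections` is a list of (label, lateral_position) tuples standing in
    for a perception stack's output. Anything whose label isn't in
    KNOWN_OBSTACLES is effectively invisible to this planner.
    """
    offset = lane_center
    for label, lateral_pos in detections:
        if label in KNOWN_OBSTACLES and abs(lateral_pos) < 1.5:
            # Nudge away from a recognized obstacle intruding near the lane.
            offset += -0.3 if lateral_pos > 0 else 0.3
    return offset

# A bush overhanging the lane edge gets no recognized label, so the planner
# never steers around it, while a car in the same spot triggers a nudge away.
print(plan_lateral_offset([("unclassified_vegetation", 1.0)]))  # 0.0 -> drives through
print(plan_lateral_offset([("car", 1.0)]))                      # -0.3 -> steers away
```

The specifics don’t matter; the point is that a bottom-up system can only avoid what its perception layer has a category for, while a human’s top-down model of “that’s a bush and I don’t want it scraping my paint” never enters into it.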
More concerning is the strange left-turn failure, where the car starts in the wrong lane for the turn, executes a really sudden and confusing jerk into another lane, and then immediately afterwards almost turns right into a parked car. It’s bad.
What makes these errors especially unsettling is that it’s not remotely clear why the car failed to execute the turn properly; there really wasn’t anything noticeably different from other left turns it executed just fine, which is sort of an unpleasant reminder of how much of a black box these kinds of systems can be.
The whole video is worth watching, and note that the driver is clearly a Tesla fan who wants the system to work well; no one here is actually trying to confuse it.
Another very earnest Tesla fan did a city drive in Seattle at night, and while the car generally performed fairly well, there was one pretty significant oversight:
Yeah, somehow, Tesla Vision seems to be blind to the massive concrete columns that support Seattle’s Monorail. A driver who trusted this system enough not to pay the attention they’re supposed to (but, realistically, often don’t) could have been driven right smack into one of those columns. As the video creator notes, the visualization didn’t show the columns at all.
The driver asks if the absence of the radar system is why the car can’t perceive the monorail pylons or a roadside planter, but it’s really not clear at all what’s going on.
The system doesn’t quite seem to understand the difference between flashing red and flashing yellow traffic signals, either, as the driver had to push the accelerator to get the car to go at the ones it encountered.
Another issue noted by beta testers has to do with unprotected left turns, which this very dedicated tester tries multiple times in an attempt to understand just what the car is perceiving and doing:
The results aren’t conclusive, but it doesn’t seem that FSD v9 is really looking for the oncoming traffic approaching from the right, but, again, it’s hard to be sure.
In looking through these videos (and there are many, lots of which are being collected on Twitter via the FSDBetaBot account), the consensus seems to be that people very much like the new visualizations and brake light detection, and are impressed that the system is camera-only, but overall the improvements to driving behavior don’t seem especially notable.
Again, what’s been achieved is impressive, no question; it’s a technological marvel.
It is also in no way close to being a full Level 5 system that can go anywhere with no supervision, or even a Level 4 system, as it absolutely still needs a human to monitor it, constantly. It has no safe means of getting out of harm’s way if it gets into a situation it can’t handle, and as such will always need a vigilant human ready to take control at any moment.
I still feel this is an inherently flawed and potentially dangerous model. Then there’s still the very questionable choice of testing beta software controlling a 4,000-pound car on public streets full of people who never once agreed to participate in any beta testing of a robot car.
No one is going to see Level 5 Tesla robotaxis by the end of the year, and while there’s certainly progress happening, the significant failures the system has already demonstrated in the hands of eager testers in the short time it’s been available should serve as a wake-up call about the actual state of FSD.
I’m familiar enough with reality to know that this won’t be the case for Tesla’s hardcore fans, but, you know, like they say, edge cases are tricky.