
Tesla reportedly eyes brakes in fatal Model S crash

Written by Mariella Moon

Tesla is considering two possible scenarios that would explain the fatal Model S crash in Florida, and according to Reuters and The New York Times, neither is about Autopilot. During a meeting with the US Senate Commerce Committee, the automaker reportedly presented two theories. First is the possibility that the car’s automatic emergency braking system’s camera and radar didn’t detect the incoming truck at all. The other theory is that the braking system’s radar saw the truck but thought it was part of a big structure, such as a bridge or a building. It’s programmed to ignore huge structures to prevent false braking, after all.

If you’ll recall, the Model S in this incident collided with a tractor trailer while Autopilot was on. Since the company’s semi-autonomous driving system is a fairly new technology, both the National Transportation Safety Board (NTSB) and the Securities and Exchange Commission are investigating the incident. According to NTSB’s preliminary results, the car was speeding when it crashed into the bigger vehicle.

It’s worth noting that the automaker considers its braking system a separate entity from Autopilot, which is in charge of steering and changing lanes. Tesla has always denied that the accident was caused by Autopilot, though it ended up breaking things off with the company that made its image recognition hardware. A statement Tesla released in June only said that “Neither [the feature] nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied.”

  • A DVD player was in use, from what I read; I didn’t notice if it said it was Harry Potter. So he was definitely relying completely on a system that’s in beta and, as you write, semi-autonomous.

  • A report I heard stated the guy was watching Harry Potter while the car was driving. Why is no one commenting on what the driver was doing? Had he been paying attention to what was going on, he would probably still be alive. They call the Autopilot semi-autonomous. That means it shouldn’t be fully on its own. And I’ll be willing to bet Tesla makes this incredibly clear in their documentation. So really, it’s mostly the guy’s own fault at these early stages of this tech.

  • ??? If a person decides to head towards a wall at a high rate of speed, no one should question his actions and let that person die! Are you playing god or something???

  • Yeah, this explanation doesn’t really make sense. Auto-braking should be triggered by any obstruction.

    My guess is that what they meant to say (or what Mariella should have written) is that the system didn’t identify the truck as an obstruction because it thought it was a building far enough away that it should be ignored. It’s a matter of perspective and parallax; objects far away may appear the same size as objects nearby – a building a mile away might be the same size as your finger held at arm’s length. If the system were only using cameras to detect objects, it might have confused the truck for a building much farther away.
    It’s also worth pointing out that, as other websites have noted, the radar systems may not have worked because they detect objects closer to the ground than this truck – the truck was sideways to the Tesla in this accident, and the trailer is about four feet off the ground. To the radar, there was nothing there. To the camera, it may have looked like a bridge that was much farther away.
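The parallax point in the comment above can be sketched with a bit of trigonometry: an object’s apparent (angular) size depends only on the ratio of its width to its distance, so a camera without depth information can’t tell a nearby trailer from a distant building that subtends the same angle. This is a minimal illustration, and the specific widths and distances below are made-up numbers, not figures from the accident.

```python
import math

def angular_size_deg(width_m: float, distance_m: float) -> float:
    """Angle (in degrees) subtended by an object of a given width
    seen face-on from a given distance."""
    return math.degrees(2 * math.atan(width_m / (2 * distance_m)))

# Hypothetical numbers: a 16 m trailer seen from 30 m away...
trailer = angular_size_deg(16, 30)

# ...versus a 160 m building seen from 300 m away. The width-to-distance
# ratio is identical, so the subtended angle is identical too.
building = angular_size_deg(160, 300)

print(trailer, building)  # both the same angle
```

Since both objects fill the same angle in the image, only a depth cue (stereo, radar, motion parallax over time) can separate the two interpretations.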
  • So if a person decides to head towards a wall at a high rate of speed the emergency braking system won’t kick in because a building is too big? And tractor-trailers too? Then what good is it to have the system? It needs to be all or none because people can’t select where their next accident is going to be or what it will be up against and the expectations are that these systems are going to work regardless.

  • This is why you need to be in the seat and watching everything all the time. Why didn’t the driver see the truck? Put on the brakes, etc.? If you’re too close to something in front of you, why would you let that continue? There are too many unanswered questions. Is relying mostly on a camera enough? Is the name Autopilot the wrong thing to call it, as it makes people take unnecessary risks?

    This is all new territory. There clearly are going to be issues early on. I think at some point, cars will need to talk to each other to know what’s going on around them. You could have car trains when doing this. It would make things safer. That’s not going to happen any time soon. It’s going to be a slow transitional process to go from people steering cars to computers doing most of the driving.
  • I agree, but Tesla already has a lot built into the car to do just that. Perhaps camera capture would be a nice addition, as well as a microphone.

  • Being in the nascent times of the ‘computers with wheels’ cars… wouldn’t it be a good idea for the federal agencies to make it mandatory to have a ‘black box’ as in aircraft?

    I think that it would have to register all data and the last 30 minutes of ‘in-car conversations.’
    If they make them mandatory now, car makers won’t have to recall previous vehicles.
