And yet, as Robert Lowell wrote, “No rocket goes as far astray as man.” In recent months, as the outrages at Twitter and elsewhere began to multiply, Musk seemed determined to squander much of the good will he had built up over his career. I asked Slavik, the plaintiffs’ attorney, whether the recent shift in public sentiment against Musk made his job in the courtroom any easier. “I think at least there are more people who are skeptical of his judgment at this point than were before,” he said. “If I were on the other side, I’d be worried about it.”
Some of Musk’s most questionable decisions, though, begin to make sense if seen as a result of a blunt utilitarian calculus. Last month, Reuters reported that Neuralink, Musk’s medical-device company, had caused the needless deaths of dozens of laboratory animals through rushed experiments. Internal messages from Musk made it clear that the urgency came from the top. “We are simply not moving fast enough,” he wrote. “It is driving me nuts!” The cost-benefit analysis must have seemed clear to him: Neuralink had the potential to cure paralysis, he believed, which would improve the lives of millions of future humans. The suffering of a smaller number of animals was worth it.
This form of crude long-termism, in which the sheer size of future generations gives them added ethical weight, even shows up in Musk’s statements about buying Twitter. He called Twitter a “digital town square” that was responsible for nothing less than preventing a new American civil war. “I didn’t do it to make more money,” he wrote. “I did it to try to help humanity, whom I love.”
Autopilot and F.S.D. represent the culmination of this approach. “The overarching goal of Tesla engineering,” Musk wrote, “is maximize area under user happiness curve.” Unlike with Twitter or even Neuralink, people were dying as a result of his decisions — but no matter. In 2019, in a testy email exchange with the activist investor and steadfast Tesla critic Aaron Greenspan, Musk bristled at the suggestion that Autopilot was anything other than lifesaving technology. “The data is unequivocal that Autopilot is safer than human driving by a significant margin,” he wrote. “It is unethical and false of you to claim otherwise. In doing so, you are endangering the public.”
I wanted to ask Musk to elaborate on his philosophy of risk, but he didn’t reply to my interview requests. So instead I spoke with Peter Singer, a prominent utilitarian philosopher, to sort through some of the ethical issues involved. Was Musk right when he claimed that anything that delays the development and adoption of autonomous vehicles was inherently unethical?
“I think he has a point,” Singer said, “if he is right about the facts.”
Musk rarely talks about Autopilot or F.S.D. without mentioning how superior it is to a human driver. At a shareholders’ meeting in August, he said that Tesla was “solving a very important part of A.I., and one that can ultimately save millions of lives and prevent tens of millions of serious injuries by driving just an order of magnitude safer than people.” Musk does have data to back this up: Since 2018, Tesla has released quarterly safety reports to the public, which show a consistent advantage to using Autopilot. The most recent one, from late 2022, said that Teslas with Autopilot engaged were one-tenth as likely to crash as a regular car.
That is the argument that Tesla has to make to the public and to juries this spring. In the words of the company’s safety report: “While no car can prevent all accidents, we work every day to try to make them much less likely to occur.” Autopilot may cause a crash now and then, but without that technology, the overall toll would be higher.