Those of us residing in West LA have lived alongside Waymo “robotaxis” since early 2024. For those who don’t live in LA, Waymos are fully autonomous vehicles you can summon via an app, similar to Uber, and they’ll take you to your destination—without a human driver.
Truthfully, it’s pretty unnerving. These ghostly, self-driving vehicles, eerily smooth in their movements, glide through our streets, their cameras and spinning sensors bristling from every corner of the car, stopping at intersections with algorithmic precision. No driver, no hesitation—just cold, calculated efficiency.
Waymo is a project of Google’s parent company, Alphabet, and it may very well represent the future of personal transportation—a world where AI, not humans, takes the wheel. In theory, this sounds like a good thing. Computers don’t text while driving, they don’t get distracted, they never drink, and they certainly don’t experience road rage.
But there’s a problem. While AI can follow traffic laws perfectly, what happens when the unexpected occurs? Just last week, I watched a Waymo car—caught in a traffic snarl on a narrow side street—struggle helplessly to execute a U-turn, boxed in by cars ahead and behind. And that was in a situation where no one was in danger.
Now imagine something far more critical—a child suddenly running out into the street. A human driver might instinctively make a moral calculation: swerve into a parked car to avoid the child or slam the brakes and risk being rear-ended. But can an AI ever be programmed to make a moral decision? Should a machine really be entrusted with life-or-death choices?
The Waymo experiment is just one facet of a much larger debate raging in the worlds of medicine, law, and military ethics—how much decision-making can we safely outsource to artificial intelligence? It’s not a theoretical question; it’s a real and urgent dilemma with implications unfolding in real-time.
From self-driving taxis to AI-powered sentencing algorithms in courtrooms to autonomous drones in war zones, we increasingly hand over critical decisions to machines. Proponents argue that AI is more objective, efficient, and immune to human error. It can process vast amounts of data without bias, fatigue, or hesitation, operating strictly within the guidelines it has been given. But critics warn that morality isn’t just about data—it’s also about judgment.
Take, for example, the development of AI-controlled weaponry. Militaries worldwide are exploring whether autonomous drones should be allowed to fire without human approval. But is it ethical for a machine to decide who lives and who dies? Isn’t that a step too far?
Or consider the healthcare industry, where AI is already used to determine which patients receive organ transplants or critical care resources. Should a machine—guided by cold, detached algorithms—have the power to decide who gets a ventilator and who doesn’t?
It goes without saying that these dilemmas are not new. History is filled with moments where technological advancements or rigid systems clashed with human judgment—and the consequences were dire.
One example is the Flash Crash of 2010, when automated stock-trading algorithms triggered a sudden, inexplicable market plunge. The machines were fine—they followed their programmed logic flawlessly, executing trades at lightning speed. But the result was utter chaos. Prices crashed in minutes, wiping out billions. Only once human traders intervened was order restored.
Or consider airplane autopilot systems—invaluable for modern aviation but potentially deadly when pilots rely on them too much. The 2013 crash of Asiana Airlines Flight 214 was attributed in part to pilots who kept trusting the automation as the approach went wrong, rather than taking manual control and relying on their own intuition.
Even in military history, the Cold War nearly ended in catastrophe in 1983 when a Soviet early-warning system falsely detected an incoming American nuclear attack. The system did exactly what it was programmed to do—it signaled that a nuclear response was required.
But one man, Lieutenant Colonel Stanislav Petrov of the Soviet Air Defence Forces, chose to ignore the computer’s warning, relying on his gut instinct instead of blind faith in technology. He was right. The “attack” was a false alarm.
Had it not been for Petrov, a machine would have started World War III. No matter how advanced technology becomes, it can never fully replace human judgment.
Which brings us to one of the most fascinating decision-making tools in Jewish history—a concept embedded in Parshat Tetzaveh.
Amidst the detailed descriptions of the High Priest’s garments, we find one of the Torah’s most enigmatic artifacts: the Urim VeTummim. This mysterious tool, placed within the Choshen (breastplate) of the Kohen Gadol, was used to determine major national decisions.
When consulted, letters on the Choshen would illuminate in a divine display—but crucially, the High Priest had to interpret them. The Urim VeTummim wasn’t an oracle that dictated absolute answers; it required human wisdom to decipher and apply its message.
One striking case of misinterpretation occurred when the Israelites consulted it before waging war against the tribe of Benjamin (Judges 20). The response seemed to grant divine approval for battle, yet they suffered two crushing defeats before finally emerging victorious.
Did they misunderstand the message? Did the Urim VeTummim signal approval for war but not guarantee success? Or was the answer contingent on factors they had failed to consider—such as whether they had adequately prepared? The failure suggests that divine guidance still requires human judgment.
This detail is critical. Even when God Himself provided insight, it was never meant to override human decision-making. The Urim VeTummim was not a replacement for leadership; it was a tool to assist it.
In a sense, the Urim VeTummim was the closest thing in Jewish history to an AI-powered decision-making device—yet it still required human intuition. This reality has profound implications for today’s world. AI can calculate risk, probability, and strategy, but it cannot weigh compassion, mercy, justice, or other human factors that can’t be reduced to algorithms.
The Urim VeTummim reminds us that even when divine guidance is available, human judgment is irreplaceable. Which means that no matter how intelligent machines become, some decisions must always remain in human hands.