Who's Behind the Wheel? The Ethics of Autonomous Vehicle Startups

By Reyna Hurand | Friday, 15 September 2023 | Feature

Imagine you're riding in a self-driving car. Suddenly, an accident ahead is unavoidable. Should the car swerve left, killing you but saving a group of pedestrians? Or should it swerve right, sacrificing the pedestrians to save you? Who decides?

Driverless cars are fast approaching, bringing promises of safety and convenience. But they also force us to confront impossible ethical dilemmas like this. The engineers programming autonomous vehicles will shape how they respond in a variety of situations.

So, who's really behind the wheel if no human sits in the driver's seat? As startups race to roll out AVs, tough questions need answers. Society must grapple with the moral trade-offs inherent to automation — because the choices AV companies make have life-or-death consequences.

[Image: A driverless car stopping for pedestrians.]

Understanding Autonomous Vehicle Technology

Before diving into the ethical landscape, it's crucial to grasp how autonomous vehicle technology works. AVs combine advanced sensors, intricate software, and a constant stream of data. Light detection and ranging (LiDAR) sensors, radar, and cameras work together to interpret an ever-changing environment, from discerning a pedestrian crossing the road to detecting a cyclist zipping by.

Beyond the hardware lies the true marvel: software algorithms that harness the power of AI and machine learning. These algorithms process massive amounts of visual and spatial data to identify pedestrians, read street signs, detect other vehicles, and more. The on-board computer uses this analysis to plan driving routes, adjust speed, change lanes, and perform the other basic tasks of a human driver. 
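
To make that pipeline concrete, here is a deliberately simplified Python sketch of the final planning step: fused sensor detections go in, a target speed comes out. Every type, threshold, and number below is a hypothetical illustration, not how any production AV stack actually works.

```python
from dataclasses import dataclass
from enum import Enum


class ObjectType(Enum):
    PEDESTRIAN = "pedestrian"
    CYCLIST = "cyclist"
    VEHICLE = "vehicle"


@dataclass
class DetectedObject:
    kind: ObjectType
    distance_m: float         # fused estimate from LiDAR, radar, and cameras
    closing_speed_mps: float  # how fast the gap is shrinking (m/s)


def plan_speed(current_speed_mps: float, objects: list[DetectedObject]) -> float:
    """Choose a target speed from fused sensor detections (toy logic)."""
    target = current_speed_mps
    for obj in objects:
        # Time until the gap closes, if it is closing at all.
        ttc = obj.distance_m / obj.closing_speed_mps if obj.closing_speed_mps > 0 else float("inf")
        # Hard stop for nearby vulnerable road users.
        if obj.kind in (ObjectType.PEDESTRIAN, ObjectType.CYCLIST) and obj.distance_m < 15:
            return 0.0
        # Otherwise slow down when a collision is imminent.
        if ttc < 3.0:
            target = min(target, current_speed_mps * 0.5)
    return target


# A cyclist 12 m ahead triggers a full stop.
print(plan_speed(13.0, [DetectedObject(ObjectType.CYCLIST, 12.0, 4.0)]))  # -> 0.0
```

Even in this toy version, notice how many judgment calls hide in plain sight: why 15 meters, why 3 seconds, why treat cyclists differently from vehicles? Those answers come from people, not physics.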

Some AV systems can now make left turns across oncoming traffic, navigate complex intersections, and deal with challenging weather conditions. More advanced AVs collect data during real-world driving to continually improve their algorithms through experience.

But under all the tech, there's still an element of human judgment. Someone had to decide how these algorithms should respond in each situation, and that decision is where the ethical questions about the values embedded in this technology begin.

Key Players

Many major players in the auto industry are developing autonomous driving tech, eager to capitalize on a market estimated to reach over $600 billion by 2030. For example:

  • Waymo, a subsidiary of Alphabet Inc. (Google's parent company), is considered a frontrunner in the autonomous vehicle industry. They've already launched Waymo One, a ride-hailing service that uses autonomous vehicles in specific regions.

  • Cruise, backed by General Motors, focuses primarily on developing autonomous technologies for urban areas. After testing their vehicles extensively in San Francisco, they now operate a public ride-hailing service there using their autonomous vehicles.

  • Tesla CEO Elon Musk has been promising fully autonomous vehicles since 2016. Despite receiving criticism for exaggerated claims and safety concerns, Tesla continues beta testing its "Full Self-Driving" upgrade.

  • Amazon-owned startup Zoox is developing bi-directional AVs designed for taxi and delivery services in urban areas.

  • Aurora was founded by former leaders from Google, Tesla, and Uber to develop self-driving technology for trucks and other commercial vehicles.

  • Nuro creates custom self-driving delivery vehicles and has partnerships with companies like FedEx, Domino's, and Kroger.

As the world of autonomous vehicles expands and evolves, there's more at play than just cutting-edge technology. In equipping these vehicles with the capability to drive, we're also handing them the responsibility to make choices, and those choices carry real-world consequences that mirror the morals their developers embed.

So, the real question becomes: whose set of ethics are we installing into these machines? To better understand the weight of this question, we turn to a study that tackles this exact issue: the Moral Machine Experiment.

The Moral Machine Experiment: A Deep Dive

Imagine a scenario where an autonomous vehicle must decide between saving its passengers or avoiding a pedestrian. How does it choose?

The "Moral Machine" experiment grappled with this very question, offering a glimpse into the complex world of machine ethics. The choices offered in the experiment drew parallels with the well-known trolley problem — a philosophical conundrum where one must decide between sacrificing one person to save five others or doing nothing and letting the five perish.

Designed by MIT, the Moral Machine was a deep dive into the ethical dilemmas autonomous vehicles might face. Millions participated from 233 countries and territories, offering an astounding 40 million decisions on potential vehicle dilemmas. The scope was global, the stakes high, and the results were eye-opening.

Here are some key findings from the experiment:

  • Value of Life: Across cultures, respondents showed a preference for saving human lives over animals, sparing the lives of many over a few, and prioritizing the young over the elderly.

  • Intervention over Inaction: There was a general preference for action (i.e., swerving to prevent harm) over inaction (i.e., continuing ahead even if it results in harm).

  • Law- and Rule-Abiding: Participants generally preferred to spare those who were obeying the law (like pedestrians crossing on a green light) over those breaking it (like jaywalkers).

  • Valuation of Status: In some scenarios, participants were presented with characters of different perceived social statuses (e.g., a business executive vs. a homeless person). Results varied across cultures, with some societies showing a preference for saving those with higher status.

  • Moral Paradox: Despite a general preference for saving younger lives, when asked about regulating AVs, respondents were torn. They preferred riding in AVs that would protect them at all costs but believed others should buy AVs that save younger people first, demonstrating a moral vs. self-preservation conflict.

  • Utilitarian Approach: In scenarios where the choice was between saving more lives versus fewer, there was a general inclination towards the utilitarian approach, which involves actions that maximize the overall good.

The results also showed a fascinating diversity in global moral preferences:

  • Participants in Eastern countries, such as Japan and Saudi Arabia, often chose to spare the law-abiding — for instance, those crossing the street with a green light.

  • Western countries like the US and Germany leaned towards inaction, letting the autonomous vehicle continue on its predefined course.

  • Latin American participants from countries like Mexico showed a preference to spare the young, fit, and those perceived as having a higher societal status.

While these are vast generalizations, what's clear is that there's no one-size-fits-all answer to these dilemmas. Different cultures, backgrounds, and personal experiences shape our ethical compass.
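
To see how quickly survey results like these could turn into engineering decisions, consider a hypothetical sketch: if a developer encoded the Moral Machine's aggregate preferences as numeric weights, a planner could rank outcomes by score. Every name and number below is invented for illustration; no AV company has published such a function.

```python
# Hypothetical weights loosely inspired by the Moral Machine's aggregate
# preferences (spare humans, spare many, spare the young, spare the lawful).
# This only shows how easily value judgments become tunable parameters.
ETHICS_WEIGHTS = {
    "human": 10.0,      # humans over animals
    "per_person": 1.0,  # many over few
    "young": 2.0,       # young over elderly
    "lawful": 1.5,      # law-abiding over jaywalking
}


def outcome_score(people: list[dict]) -> float:
    """Estimate the moral cost of harming this group; lower is better."""
    score = 0.0
    for person in people:
        cost = ETHICS_WEIGHTS["per_person"]
        if person.get("human", True):
            cost += ETHICS_WEIGHTS["human"]
        if person.get("young"):
            cost += ETHICS_WEIGHTS["young"]
        if person.get("lawful"):
            cost += ETHICS_WEIGHTS["lawful"]
        score += cost
    return score


# Swerving harms one jaywalking adult; staying harms two lawful children.
swerve = outcome_score([{"young": False, "lawful": False}])
stay = outcome_score([{"young": True, "lawful": True}] * 2)
print("swerve" if swerve < stay else "stay")  # -> swerve
```

Change one weight, say the penalty for jaywalking, and the "right" answer can flip; that is exactly why the question of who sets these values matters so much.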

Recommended: For a deeper dive into global ethical preferences surrounding autonomous vehicles, try our visualization tool to explore the results of the Moral Machine experiment.

Startups' Dilemma

As autonomous vehicle technology advances, startups in this space face mounting pressure. It's not just about creating a vehicle that drives itself but one that aligns with societal moral expectations. The "Moral Machine" experiment underscores that these expectations can vary dramatically across cultures and demographics.

Should these startups program a universally accepted set of ethics into their vehicles? Or should they regionalize, calibrating their vehicles based on predominant local ethical preferences?

One of the standout conclusions from the MIT experiment is not necessarily what decision the machine should make but rather who gets to decide what the machine decides. It's an invitation for a broader dialogue that includes not just engineers and developers but also the public, policymakers, ethicists, and philosophers.

Autonomous Vehicle Startups Must Address Ethics

As AV companies race for dominance, they bear great responsibility in programming morality into our transportation future. Here are some ways startups can proactively address the ethics of self-driving vehicles:

  • Practice responsible AI principles like transparency, accountability, and safety during development. Engineers should document ethical algorithm design decisions.

  • Establish independent advisory boards with philosophers, ethicists, and other experts to provide guidance.

  • Be open with regulators and the public about safety protocols, testing procedures, and decision-making capabilities. Clearly communicate limitations.

  • Use simulations and sandbox environments to thoroughly test AVs in ethical edge cases before real-world trials.

  • Audit algorithms for demographic biases and other issues using techniques like crowdsourced datasets (see the sketch below).
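
As a concrete illustration of the auditing idea in the last bullet, here is a minimal Python sketch that compares pedestrian-detection recall across demographic groups in a labeled test set. The record format, group labels, and fairness threshold are all assumptions made for this example.

```python
from collections import defaultdict


def recall_by_group(records: list[dict]) -> dict[str, float]:
    """Compute pedestrian-detection recall per demographic group.

    Each record is assumed to look like {"group": "child", "detected": True}.
    """
    hits: dict[str, int] = defaultdict(int)
    totals: dict[str, int] = defaultdict(int)
    for rec in records:
        totals[rec["group"]] += 1
        hits[rec["group"]] += int(rec["detected"])
    return {group: hits[group] / totals[group] for group in totals}


# Hypothetical audit data: the detector misses children more often than adults.
audit = (
    [{"group": "adult", "detected": True}] * 95
    + [{"group": "adult", "detected": False}] * 5
    + [{"group": "child", "detected": True}] * 80
    + [{"group": "child", "detected": False}] * 20
)

rates = recall_by_group(audit)
MAX_ALLOWED_GAP = 0.05  # hypothetical fairness threshold
gap = max(rates.values()) - min(rates.values())
print(rates)                                        # {'adult': 0.95, 'child': 0.8}
print("FAIL" if gap > MAX_ALLOWED_GAP else "PASS")  # -> FAIL
```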

Transportation laws, regulations, and insurance policies will need major revisions for an autonomous future. But AV startups can't wait for policy to be perfected before innovating. By prioritizing safety and ethics from the start, they can lead the way in developing morally responsible autonomous driving technology.


About the Author

TRUiC's team of researchers, writers, and editors dedicates hours to ensuring startupsavant.com's articles are actionable and accessible for both startup founders and startup enthusiasts. From launching a startup to growing your venture, you can trust our information to be up to date and reliable.
