Self-driving cars are basically cars that do not need a driver. You just need to tell them where to go, and they will take you there! Self-driving cars are also known as autonomous cars or driverless cars and they use a combination of sensors, cameras, radars, artificial intelligence (AI), and machine learning to travel between destinations without a human operator.
There are various levels of automation of self-driving cars, commonly described using the SAE's six levels. These are as follows:
- Level 0 (No Automation): the human driver does all the driving.
- Level 1 (Driver Assistance): the car assists with a single task at a time, such as cruise control.
- Level 2 (Partial Automation): the car can steer and control speed together, but the driver must supervise at all times.
- Level 3 (Conditional Automation): the car drives itself under certain conditions, but the driver must take over when requested.
- Level 4 (High Automation): the car drives itself entirely within limited areas or conditions, with no driver input needed there.
- Level 5 (Full Automation): the car drives itself anywhere, under any conditions, with no human driver at all.
In this blog, we will be focussing on level 5 automation, as this is where cars are truly “self-driving”. This is also where things are most exciting, and where the strangest scenarios arise.
Self-driving cars are complex systems that use an array of technologies, including artificial intelligence (AI), machine learning, sensors, and advanced control systems, to operate without human intervention.
There are three main stages in the functioning of self-driving cars: input, processing and output. In this sense, the function of a car can be compared to you choosing what clothes to wear in the morning!
Input involves the car gathering information about its surroundings. This is the part where you are looking at the various options for clothes that you can wear. Similarly, the car is “looking” at its surroundings, understanding the details of what is there in front, behind and beside it.
Processing involves the car using the information it gathered in the input phase to determine what to do. This is the part where you have looked at the options for clothes to wear in the morning, and are now using this information to think about what to wear, considering many factors such as what you want to do in the day, where you want to go, who you want to meet, and so on. When doing this “thinking”, you are essentially “processing” the information you gained in the input phase. Similarly, in this phase, the car “thinks” about what to do based on the information gained in the input phase. This normally involves performing many calculations with the data and also involves the use of Artificial Intelligence. The main goal here is to determine what to do in the output phase.
The final phase is the output phase. This is where the car actually “acts” based on what it decided to do in the processing phase. By this time, you have decided what clothes to wear. In the output phase, you will actually pick up the clothes and put them on. This is what the previous two phases have been leading up to. It is where the car actually accelerates, slows down, steers, or stops. Keep in mind that all three of these phases are happening continuously at all times. The car is always gathering data, processing it, and acting on it.
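To make these three phases concrete, here is a minimal Python sketch of the sense-think-act loop. Everything in it is a hypothetical stand-in: `read_sensors`, `decide` and `apply_controls` are placeholder names rather than any real car’s API, and a real vehicle would run far more sophisticated logic inside each one.

```python
import time

def read_sensors():
    # Input phase: gather raw data about the surroundings.
    # (Placeholder; a real car reads from camera, LIDAR and radar drivers.)
    return {"camera": "...", "lidar": "...", "radar": "..."}

def decide(sensor_data):
    # Processing phase: turn the sensor data into a driving decision.
    # (Placeholder; a real car runs perception, prediction and planning here.)
    return {"steering": 0.0, "throttle": 0.2, "brake": 0.0}

def apply_controls(decision):
    # Output phase: send the decision to the actuators.
    print(f"Actuating: {decision}")

# All three phases repeat continuously, many times per second.
while True:
    decision = decide(read_sensors())
    apply_controls(decision)
    time.sleep(0.02)  # e.g. a 50 Hz control loop
```

The important thing is the shape of the loop: the car never stops sensing, thinking and acting.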
Sensors are the main hardware used by self-driving cars in the input phase. This is what a car uses to “see” and “feel” its surroundings. The car won’t be able to drive itself if it has no idea where it is and what is around it! This is done through sensors, which are devices that allow the car to get a sense of its surroundings. Self-driving cars are equipped with several different types of sensors. Some of the major ones include:
- Cameras: capture images of the road, helping the car recognise lane markings, traffic lights, signs, pedestrians and other vehicles.
- Radar: uses radio waves to measure the distance and speed of surrounding objects, and works well even in rain or fog.
- LIDAR: fires pulses of laser light to measure distances precisely and build a detailed 3D map of the surroundings.
- Ultrasonic sensors: detect objects at very short range, which is especially useful for parking.
- GPS: tells the car where in the world it is, so it can navigate to its destination.
Now that the car has gathered data using its sensors, it needs to use this data to determine what to do. But first, in order to make it easier to process the data, the data from all the sensors is combined in a process called Sensor Fusion. For example, the car may have detected a tree in front of it using its camera. It may also have detected, using LIDAR, that there is something in front of it that is 50 metres away. By combining these two pieces of information, it can deduce that there is a tree 50 metres away from the front of the car. The car is thus able to gain a more comprehensive understanding of its surroundings.
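As a toy illustration of this idea, here is a sketch that merges the tree example above, using made-up detection types of my own invention. Real sensor fusion relies on far more sophisticated techniques (such as Kalman filters), but the core idea of matching up observations from different sensors is the same.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CameraDetection:
    label: str          # what the camera thinks it sees, e.g. "tree"
    bearing_deg: float  # direction of the object relative to the car

@dataclass
class LidarDetection:
    distance_m: float   # how far away the object is
    bearing_deg: float

def fuse(cam: CameraDetection, lid: LidarDetection,
         max_bearing_gap: float = 5.0) -> Optional[dict]:
    # If both sensors point in (roughly) the same direction, assume they
    # saw the same object and combine the camera's label with LIDAR's range.
    if abs(cam.bearing_deg - lid.bearing_deg) <= max_bearing_gap:
        return {"label": cam.label, "distance_m": lid.distance_m}
    return None  # the two sensors saw different things

# The camera sees a tree straight ahead; LIDAR measures something 50 m ahead.
print(fuse(CameraDetection("tree", 0.0), LidarDetection(50.0, 0.5)))
# -> {'label': 'tree', 'distance_m': 50.0}
```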
The use of Artificial Intelligence is a key element of the processing phase. Artificial Intelligence allows a computer to learn in a way similar to how the human brain learns new things. This allows the car to learn as it operates, understanding the best actions to take in a variety of situations. For example, let’s say the car sees a pedestrian at the side of a road. It needs to determine whether it should wait for the pedestrian to cross, or whether the pedestrian will wait for it to cross. After experiencing this scenario multiple times, artificial intelligence lets the car learn patterns that help it determine more accurately what to do. For example, it may learn that if the pedestrian waves for the car to move forward, the car should move and not wait. A large number of these scenarios would be provided to the car before it is sold so that it can learn to drive as effectively and safely as possible before hitting the road. Naturally, much of this would be done using recorded data and simulations with virtual cars and pedestrians rather than real ones.
The use of Artificial Intelligence to learn in this way is called Machine Learning. Machine Learning is particularly used in perception, prediction, planning, and control tasks. Let’s explore these in more detail:
- Perception: identifying and classifying the objects around the car, such as pedestrians, vehicles, lanes and traffic signs, from the raw sensor data.
- Prediction: anticipating what those objects will do next, for instance whether a pedestrian is about to step onto the road.
- Planning: deciding on the car’s route and manoeuvres, such as when to change lanes, overtake or slow down.
- Control: executing the plan by issuing precise steering, acceleration and braking commands.
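To show where each of these tasks sits in the pipeline, here is a minimal sketch with hypothetical placeholder functions standing in for the real machine-learning models:

```python
def perceive(sensor_data):
    # Perception: identify objects around the car (placeholder output).
    return [{"type": "pedestrian", "position_m": 12.0}]

def predict(objects):
    # Prediction: estimate where each object will be shortly.
    return [{**obj, "predicted_position_m": obj["position_m"] - 1.0}
            for obj in objects]

def plan(predictions):
    # Planning: choose a manoeuvre given the predicted positions.
    if any(p["predicted_position_m"] < 15.0 for p in predictions):
        return "slow_down"
    return "maintain_speed"

def control(manoeuvre):
    # Control: translate the plan into actuator commands.
    if manoeuvre == "slow_down":
        return {"throttle": 0.0, "brake": 0.3}
    return {"throttle": 0.2, "brake": 0.0}

commands = control(plan(predict(perceive({"camera": "..."}))))
print(commands)  # -> {'throttle': 0.0, 'brake': 0.3}
```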
Think of the output in self-driving cars as the final command centre that actually gets the wheels rolling! This process is predominantly controlled by unassuming pieces of hardware known as actuators. Like a relay runner passing the baton, actuators take the digital decisions made by the car's AI and translate them into physical movement. These hidden devices control functions like accelerating, braking, and steering. So, in the future, if you see a self-driving car smoothly gliding around a corner or coming to a gentle halt, remember to tip your hat to the humble actuator!
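As a toy picture of what an actuator does, here is a hypothetical `Actuator` class: it receives a digital command and nudges its physical position towards it each tick, since real hardware cannot jump instantly from one state to another.

```python
class Actuator:
    def __init__(self, name: str):
        self.name = name
        self.position = 0.0  # current physical position, from 0.0 to 1.0

    def step(self, command: float, rate: float = 0.1) -> None:
        # Move a fraction of the way towards the commanded position.
        command = min(max(command, 0.0), 1.0)
        self.position += rate * (command - self.position)

brake = Actuator("brake")
for _ in range(5):
    brake.step(1.0)  # the AI commands full braking
    print(f"{brake.name} at {brake.position:.2f}")  # eases towards 1.00
```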
Ethics is an interesting yet important factor to consider when programming the function of self-driving cars. However, it is important for us to understand concepts in ethics before we can apply them to self-driving cars.
The Trolley Problem is a popular thought experiment in ethics. Imagine you are inside a trolley and the brakes fail. The trolley is rushing ahead towards five unsuspecting folks on the track. You have the option to pull a lever which will change the direction of the trolley to a different track. However, as you probably expected, it isn’t that simple. There is one other person standing on that other track. What do you do? Do you actively participate in the death of one person? Or do you do nothing and let five people die? Obviously, there is no one right answer. However, there are two ethical theories that present two different ways to look at this problem: Deontology and Utilitarianism.
Deontology is the ethical theory that specific actions are morally right or wrong based on the action itself regardless of the consequences of that action. A deontologist might argue, for example, that lying is always wrong even if it would result in someone’s benefit. In the context of the Trolley Problem, a deontologist would likely argue that you should not pull the lever. This is because they are likely to define murder as being an action that is morally wrong. By pulling the lever, you are actively participating in the killing of one person, an action that a deontologist would deem as being morally wrong even if it leads to a better overall outcome. The deontologist emphasises the moral value of the act itself, not its consequences.
Utilitarianism is, in a way, the opposite of Deontology. It concerns the consequences of an action rather than the action itself. This ethical theory holds that the morally right action is the one that produces the most good or happiness for the most people. For example, let’s say your friend asks you to taste a cake they baked and you absolutely hate it. They then ask you about how it was. A deontologist would argue that you should respond truthfully since, according to them, lying is morally wrong. However, a utilitarian would argue that you should lie and say that you liked it since it would, as a consequence, lead to a greater overall happiness for your friend. In the context of the Trolley Problem, a utilitarian would likely argue that you should pull the lever. Even though this action would result in one person's death, it would prevent five others from dying, leading to a net decrease in harm or increase in happiness. This is because, in utilitarianism, it is the consequences of an action that determine its moral value, not the action itself.
Reading the previous section, you might be thinking to yourself: “What does this have to do with self-driving cars? I came here to learn about cool tech, not this boring ethics stuff!” However, ethics is, in fact, a very important factor that needs to be considered when programming self-driving cars. Say the brakes of the car with one passenger fail and there are five people crossing the road in front of it. The car can either drive over and kill these people, saving the passenger, or it can crash into a barrier, killing the passenger and saving the five people. What should it do? This probably reminds you of the Trolley Problem we discussed above, only it is a scenario that can very possibly occur in the case of a real self-driving car.
The car can be programmed to make decisions based on deontology, judging specific actions as right or wrong in themselves. For example, the car can be programmed to never let any harm come to the passenger no matter what happens. In this case, the car would deem harming the passenger as being “morally wrong”. If this is the case, the car would likely kill the five people in the above scenario.
I think that there are a few ways to program the car to operate using deontological philosophies. One way could be to simply categorise each possible action as being “right” or “wrong”. Then, the car would simply need to be programmed to never perform the actions that are “wrong”. This would be very simple to program. However, this method poses challenges. For example, we may program both killing the passenger and killing pedestrians as being “wrong”. However, if a scenario like the above occurs, this programming would be meaningless as one of the “wrong” actions has to be performed. In this case, the car would likely not know what to do, and so is likely to do nothing and kill the five pedestrians.
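A quick sketch of this categorical approach, using hypothetical action names, shows the deadlock directly:

```python
# Hard-coded deontological rules: these actions are always "wrong".
FORBIDDEN = {"harm_passenger", "harm_pedestrians"}

def choose_action(possible_actions):
    # Perform any action that is not forbidden.
    allowed = [a for a in possible_actions if a not in FORBIDDEN]
    return allowed[0] if allowed else None  # None = deadlock

# The brake-failure scenario: every available action is "wrong".
print(choose_action(["harm_passenger", "harm_pedestrians"]))  # -> None
```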
Another possibility is to assign a “morality” value to each action. This would be a numerical value to indicate how good or bad each action is. Bad actions would be given negative values and good actions would be given positive values. The more “good” an action is, the higher the value it will be given, and the more “bad” an action is, the lower the value it will be given. Then, in a given scenario, the car would be programmed to perform the best possible action: the one with the highest morality score. This would allow the car to handle scenarios with multiple possible actions. However, it poses the additional challenge of assigning a numerical value to each action. This is subjective, as there is no definitive way to quantify the relative “goodness” or “badness” of specific actions. For example, is it worse to kill the passengers in the car or pedestrians? The values assigned in answer to this question would determine the outcome of the above scenario.
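The scored variant might look like the following sketch. The numbers here are deliberately made up, which is exactly the subjectivity problem just described: someone has to decide them.

```python
# Hypothetical, hand-assigned morality values for each action.
MORALITY_SCORES = {
    "brake_safely": +10,
    "harm_passenger": -80,
    "harm_pedestrians": -100,
}

def best_action(possible_actions):
    # Perform the action with the highest morality score.
    return max(possible_actions, key=lambda a: MORALITY_SCORES[a])

# With these particular values, the car sacrifices the passenger.
print(best_action(["harm_passenger", "harm_pedestrians"]))  # -> harm_passenger
```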
All in all, programming the car to operate using a deontological viewpoint has some advantages. It provides clear and strict rules, making it easier to program into AI systems. It involves adhering to traffic laws and safety regulations, leaving little room for ambiguity. It would also offer consistency, as the car would always follow the same rules, which can make the AI’s decisions more predictable and transparent.
However, there would also be certain disadvantages to this sort of programming. There is a lack of flexibility, as the rigid rules may not allow for the most beneficial outcome in every situation. For example, if the car is required to always stay in its lane, it may not be able to avoid an obstacle that suddenly appears. Also, just because an action is deemed to be “morally right”, it may not result in the best possible outcome, perhaps causing greater harm than was necessary.
If deontology is not your cup of tea, you can also program self-driving cars to operate based on utilitarian philosophies! Well, it isn’t really up to “you” specifically, it’s mostly up to the companies who will manufacture and program these cars.
I think a utilitarian method of programming would involve assigning a numerical value to each possible consequence that can occur from any given action. Bad consequences would have negative values and good consequences would have positive values. There are likely to be far too many to program manually, so a human would likely program some and then leave AI to determine the values of the rest. Then, when deciding what to do, the car would use AI to predict all of the consequences of each possible action. It would then sum the values of the consequences for each action to get a “morality score” for each action. Then, similar to the deontological programming, the action that will be performed is the action with the highest morality score.
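Here is a sketch of that idea, again with hypothetical names and made-up values. The key difference from the deontological version above is that the values attach to predicted consequences, not to the actions themselves:

```python
# Hypothetical values for consequences, not actions.
CONSEQUENCE_VALUES = {"death": -100, "injury": -40, "safe": +10}

def action_score(predicted_consequences):
    # Sum the values of every consequence the AI predicts for an action.
    return sum(CONSEQUENCE_VALUES[c] for c in predicted_consequences)

def best_action(actions_to_consequences):
    # Perform the action whose predicted consequences score highest.
    return max(actions_to_consequences,
               key=lambda a: action_score(actions_to_consequences[a]))

# The brake-failure scenario: predicted outcomes for each possible action.
scenario = {
    "swerve_into_barrier": ["death"],            # the one passenger dies
    "continue_ahead": ["death"] * 5 + ["safe"],  # five pedestrians die
}
print(best_action(scenario))  # -> swerve_into_barrier
```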
This is different from the deontological method, as the morality score of each action is calculated from the consequences of that action and is not hard-coded based on the action itself. In the case of the above scenario, the problem of ranking the killing of pedestrians against the killing of passengers would not arise. The death of any human, as a consequence, could simply be assigned a very low morality value. In this case, the car would likely kill the passenger in the above scenario, as this would result in the death of fewer people, meaning this action would have a higher morality score.
Utilitarian programming has its advantages. It aims to maximise overall welfare or happiness. By choosing the option that causes the least harm or greatest benefit, it may result in fewer injuries or deaths in accidents overall. Also, it is flexible to situations as it doesn't bind the AI to a specific set of unyielding rules. It can adapt its decisions based on the unique conditions of each situation.
On the contrary, utilitarian programming also has some disadvantages. It, again, aims to quantify the “goodness” or “badness” of each consequence, which is very subjective. Also, accurately predicting the outcomes of every potential action is incredibly complex, if not impossible, given the number of variables in every situation. For example, we can consider the above scenario where the car can either wait for a pedestrian to cross or go ahead. Determining the consequences of each action would involve predicting whether the pedestrian will cross the road. This involves predicting human nature, which is incredibly complex as humans act with their own free will, sometimes even performing actions on impulse alone. Without accurately knowing the outcomes, it would be impossible to apply a utilitarian philosophy as it is solely based on the outcomes of a specific action.
In conclusion, I do not think we currently have the means to get self-driving cars to predict every outcome of every action. This means that utilitarian programming is currently impossible, so, for the first iterations, we should focus on deontological programming, as this is simpler and more achievable. However, I think the end goal should still be utilitarian programming. Research should continue on how the outcomes of an action can be accurately predicted quickly, with cars making use of utilitarianism once the technology is ready. This is because, in my opinion, it is the outcomes of an action that are most important. I think that no action is inherently “good” or “bad”; it is the consequences of that action that define its “goodness” or “badness”. Ultimately, the goal should be to minimise harm and maximise happiness!