Helen

Phase 1, Post 3: Self-Driving Cars


Can you imagine a world without cars? Perhaps not. Cars have been around for so long, and have had so many upgrades in recent years, that we use them more than we even realize. If you’re running late or just don’t want to take public transportation, you might use a rideshare app; if you order something, it might get delivered by car; and if you’re moving, chances are a car will be transporting your things. We may not value drivers, or even our own ability to drive, as much as we should, and soon we might value them even less. Driverless cars are being talked about more and more, and as the technology improves and becomes more widely available, the idea of a world without drivers is becoming a reality.

Although advancements in technology are making significant strides for humankind, there are also setbacks and controversies that come with them. Autonomous vehicles are a great example of this. Everyone knows of Tesla’s, GM’s, and Aptiv’s “self-driving” cars. When the idea of these cars was introduced to the general public, many people were as excited as they were terrified of what Artificial Intelligence was becoming. On one hand, the driver no longer needs to be fully attentive, because the car, being a computer, would know how to drive safely and “perfectly.” On the other hand, the general public’s worst fears about AI were becoming reality.

When on the road, a person is constantly making decisions: “Do I need to take this exit?”, “The light is yellow, I should slow down.”, and even hitting the brakes quickly if someone or something suddenly crosses the path of the vehicle. These are decisions based on the law, but ultimately based on the driver’s morals. This reminds me of the classic trolley problem. Does the driver of the trolley kill the five people tied up on the track, or make a turn and kill only one person? There is a lot to consider when making this decision: “Do I know any of these people?”, “Is there a mother on the tracks?”, “Could one of them hold the cure to cancer?”, etc. If it is so hard for a human to answer this question, how would an AI make the decision based purely on logic? Autonomous vehicles are programmed to follow a set of rules, so they would make decisions based on these programmed “morals” (a toy sketch of this idea follows below).

Perhaps the AI is following the rules and laws set forth by society, but there is also an unwritten set of rules that all humans accept and keep. This is known as social contract theory: a law-abiding citizen will conduct themselves in a moral way that benefits and protects others in society. However, even with laws and social contracts and everything else meant to keep society running smoothly, we still encounter issues with them every day. Yes, autonomous vehicles might make some everyday tasks easier, but just as they may solve a lot of life’s problems, they also come with a whole list of potential issues that need to be taken into account.
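To make the “programmed morals” idea concrete, here is a minimal toy sketch in Python of how a fixed, prioritized list of rules could stand in for a driver’s judgment. It is purely illustrative and assumes nothing about how any real autonomous vehicle (Tesla’s, GM’s, or Aptiv’s) is actually built; every name and rule in it is hypothetical.

```python
# Toy sketch only: a hypothetical rule-based "decision maker" for a car.
# Not based on any real autonomous-vehicle system.

from dataclasses import dataclass

@dataclass
class Situation:
    light: str            # "green", "yellow", or "red"
    obstacle_ahead: bool  # someone or something in the vehicle's path
    exit_needed: bool     # the planned route takes the upcoming exit

def decide(s: Situation) -> str:
    # Rules are checked in priority order; the first match wins.
    # The ordering itself is the programmed "moral" of the system.
    if s.obstacle_ahead:
        return "brake"        # protecting people outranks everything else
    if s.light in ("red", "yellow"):
        return "slow_down"    # obeying traffic law comes next
    if s.exit_needed:
        return "take_exit"    # only then follow the route plan
    return "continue"

print(decide(Situation(light="yellow", obstacle_ahead=False, exit_needed=True)))
# -> "slow_down": the law-based rule outranks the routing rule
```

The point of the sketch is the ordering: whoever writes the rules decides in advance that braking for an obstacle outranks obeying a yellow light, which outranks following the route. That hard-coded ranking is exactly the kind of “moral” the trolley problem shows to be so difficult to settle.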
