As we move toward greater technological capability and increasingly defer judgment and decision-making to artificial intelligence, some difficult ethical questions will arise.
A recent article in MIT Technology Review highlights how self-driving cars will be programmed to make tradeoffs in difficult situations. The image to the left illustrates the type of situation in which a self-driving car may have to deliberately choose to kill one person in order to save many.
It gets even murkier when we weigh one adult against one child, a cyclist against a driver, a passenger against a pedestrian. A substantial new body of research in practical ethics and applied philosophy will emerge, and companies such as Google will be looking to it for guidance.