>>14733102
What about a situation where I've got a gun and I'm about to kill someone, and the robot's ONLY choices are to kill me or to let the other person die?
The First Law of Robotics states that a robot may not injure a human being or, through inaction, allow a human being to come to harm, but no matter what it does (or doesn't do, in this case), a human is going to be harmed.
How does probability come into it? Say there's a 70% chance of me killing the other person, and a 60% chance that the robot can stop me without killing me.
What, then, if I'm about to kill two humans instead of one?
Logically, it should kill me to save the most humans, right? But what about a situation where allowing a small group of people to live will cause many more to die in the future?
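One toy way to see how the probabilities could matter: suppose (my assumption, nothing Asimov ever specifies) that the First Law collapses to "minimize expected deaths," that a failed non-lethal takedown leaves each victim dying with that 70% chance independently, and then just compare the expected body count of each option. A minimal sketch in Python, using the made-up numbers from above:

# Toy expected-deaths comparison for the standoff scenario.
# Assumes the robot naively minimizes expected deaths; the
# probabilities are the hypothetical ones from the post.

P_KILL = 0.7  # chance the gunman kills each victim if not stopped
P_STOP = 0.6  # chance a non-lethal takedown succeeds

for victims in range(1, 6):
    lethal = 1.0  # kill the gunman: one certain death, victims saved
    # non-lethal attempt: fails 40% of the time, and then each
    # victim dies with probability P_KILL
    nonlethal = (1 - P_STOP) * P_KILL * victims
    choice = "spare the gunman" if nonlethal < lethal else "kill the gunman"
    print(f"{victims} victim(s): kill={lethal:.2f} "
          f"vs spare={nonlethal:.2f} -> {choice}")

Under those numbers the robot shouldn't kill the gunman until four or more victims are at stake (0.28 expected deaths per victim only exceeds 1.0 at four), which is exactly the kind of counterintuitive flip that makes the "obvious" answer not so obvious.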
There are so many "what if" questions, and much of Asimov's robot fiction was built around exactly these sorts of scenarios.