2006-11-28

Scott Adams proposes a thought experiment that can make clear the illusory nature of free will: imagine a baby-sitting robot whose every response is determined by its software. The implications of the thought experiment go beyond free will, though. Just as we would agree that the baby-sitting robot does not have free will, we also would not impute personhood to it. If the baby-sitting robot is a valid model of a moist robot (a human), then it makes no sense to impute personhood to a moist robot whose responses are likewise determined by the operation of software. That would mean there is no such thing as a person, or a soul. Such things are further illusions created by the operation of the robot's thought processor (commonly called "consciousness").

Morality is usually presented as a fixed, objective standard external to the robots. This thought experiment suggests that it is actually just a way robots attempt to reprogram each other. We might tell the baby-sitting robot that gently rocking the baby is "good" while killing it would be "bad," but that just reflects our preferences. Replace the baby with something we don't feel as strongly about, say, another robot; or replace it with something we feel negatively about, say, a cockroach, and suddenly killing the creature in the crib is "good." Good and evil are just what a given robot admires or fears, what aids or threatens the robot's survival. Since each robot may admire or fear different things, good and evil wind up being relative to the perspective of the robot in question.

For the arms dealer or terrorist, war in Iraq is a good thing. For the Iraqi man on the street, war in Iraq is a bad thing.
