I'm not in a position to contradict anyone about anything.


Scott Adams proposes a thought experiment involving a baby-sitting robot that can make clear the illusory nature of free will. The implications of the thought experiment go beyond free will, though. Just as we would agree that the baby-sitting robot does not have free will, we also would not impute personhood to it. If the baby-sitting robot is a valid model of moist robots (humans), then it makes no sense to impute personhood to a moist robot whose responses are determined by the operation of software. That would mean there is no such thing as a person, or a soul; such things are further illusions created by the operation of the robot's thought processor (commonly called "consciousness").

Morality is usually presented as a fixed, objective standard external to the robots. This thought experiment suggests, however, that it's actually just a way robots attempt to reprogram each other. (We might tell the baby-sitting robot that gently rocking the baby is "good" while killing it would be "bad", but that just reflects our preferences. Replace the baby with something we don't feel as strongly about, say, another robot. Or replace it with something we feel negatively about, say, a cockroach. Suddenly killing the creature in the crib is good!) Good and evil are just what a given robot admires or fears, what aids or threatens the robot's survival. Since each robot may admire or fear different things, good and evil wind up being relative to the perspective of the robot in question.

For the arms dealer or terrorist, war in Iraq is a good thing. For the Iraqi man on the street, war in Iraq is a bad thing.


If I claim to know "The Truth", that seems to me pretty good evidence that I don't.


On 2006 Nov 06, at 15:52, Sally wrote:
You can enter names to be included on a DVD that will be flown to Mars. You can print a certificate too. It just takes a minute.

The Planetary Society is now giving everybody a chance to be a part of this exciting mission:


Billy responded:

I dunno. Sounds like a potential galactic privacy issue to me. Do they want SSNs?

"Anything is possible if your system has bad memory."
- Roland Dreier


Is life about a search for Truth?

How do you know when you've found it?

What if there is no "Truth"? Or if there is, what if it's impossible to know?

What if the best we can do is to formulate models that may or may not reflect Truth, with no way for us to tell which? We can test how well a given model predicts observations (this is my understanding of the scientific process), but even if a model predicts observations perfectly, that doesn't mean it's "true" in the sense of accurately reflecting reality. There's no way to tell whether it accurately reflects reality (I'm not even sure the phrase "reflects reality" means anything), only how well it predicts observations.

For example, if my model tells me the earth is flat and I therefore predict that if you sail west too far, you'll fall off and never come back, my model may produce accurate predictions -- people sail west, something happens to them, and they never come back. Does that make my model true?

If I think my beliefs reflect Truth, I'll cling to them no matter what anyone else says, and I may be motivated to try to convince others of my Truth.

If I think my beliefs are models whose usefulness lies in their ability to predict observations, I might be more inclined to keep my mind open to new models that might produce better predictions. I might be less inclined to evangelize others.