Myths About the Risks of AI
Summary
Evil robots are not on the list of things to worry about with AI. Here’s what experts are actually concerned about:
AI turning evil is a red herring. The real worry is competence. Machines are goal-oriented, and a superintelligent AI will be extremely good at attaining its goals. If those goals are not aligned with ours, it can harm us in the course of pursuing them, whether the misalignment is accidental or the goals themselves are malevolent. That is why it is important to develop AI whose goals are aligned with our own.
Robots are not the main concern. The real concern is intelligence. A body without intelligence can be controlled; intelligence itself cannot be so easily contained. Humans control tigers not because we are stronger, but because we are smarter. If we cede our position as the smartest species on the planet, we may also cede control.
Thoughts
I had to write a piece about this because, for a very long time, we were animals of no significance. It wasn’t until we invented tools, learned how to hunt, and domesticated fire for survival that our evolution took a significant turn. As a result, our intestines became shorter and our brains grew larger. It was only in the last 100,000 years that we jumped to the top of the food chain.
How, then, can we keep our position of power once AI surpasses human intelligence? While building an ultraintelligent machine would undoubtedly be one of man’s greatest achievements, at what point do we decide enough is enough? I. J. Good once wrote that “there would be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.” But how could we keep such intelligence under control? Take Elon Musk, often described as one of the most intelligent people alive; he is also the richest and one of the most influential people on the planet. Fortunately for us, he harnesses his power for good. But how do we determine what is good and evil for humanity? The truth is, man has never built a perfect product; such a thing does not exist. So how can flawed, imperfect humans create a perfect AI that is both competent and intelligent?
I cogitated on what this incredible power would mean for us, but it only led me to a sea of endless questions. No one knows whether superhuman AGI will arrive in our lifetime; experts disagree sharply on the timeline. And it may only take one Red Bull-drinking programmer to flip the switch on human-level AGI.