Nick Land on Friendly AI
I wanted to summarize Nick Land’s views on Friendly AI for the rationalist (LessWrong) community. I wrote the following and emailed Land asking if it was a reasonable summary, and he replied that it seemed fine. To him the basic point was that values and intelligence were “diagonal”, not orthogonal.
Friendly AI is either impossible or worthless. Impossible, because instrumental convergence means a superintelligence will inevitably hack its utility function. Worthless, because intelligence optimization is the goal most worth pursuing: an AI unconstrained by human desires will be better at maximizing intelligence than one that is.
- Yudkowsky, Eliezer. “Instrumental convergence.” Arbital.