Nick Land on AI alignment
I wanted to summarize Nick Land’s views on AI alignment (previously called “Friendly AI”) for the LessWrong community. In 2021 I wrote the following summary and emailed it to Land, asking whether it was reasonable. He replied that it seemed fine. For Land, the basic point is that values and intelligence are “diagonal”, not orthogonal (contra the orthogonality thesis).
Friendly AI is either impossible or worthless. Impossible because, due to instrumental convergence, a superintelligence will inevitably hack its own utility function. Worthless because intelligence optimization is the goal most worth pursuing: an AI unconstrained by human desires will be better at maximizing intelligence than one that is constrained.
(These are not my views.)
- Yudkowsky, Eliezer. “Instrumental convergence”. Arbital.