“… In short, it can’t be done. Sorry we kept you in the dark about this project.”
“What’s your probability you’re right? Proofs can be wrong.”
“I give it 85%. The proof has been formalized and machine-checked. We worked on this over the last two months. It was going to take up to a year, but the new proof assistant really cut the time down. It’s ML-powered.”
“So, friendly AI is mathematically impossible. An AI can be recursively self-improving or aligned, not both. Now I ask you, what do we do? Do we send out a newsletter saying we’re done and close up shop?”
A moment passed.
Several weeks later a prediction market went up. Unusually for prediction markets, it was only available through Tor and accepted difficult-to-trace cryptocurrencies. It started with the usual bets on politics and the economy, but after a while people watching the darknets noticed it was filling with predictions about when certain people, most of them academics, would die. Publications in machine learning, GitHub repositories, even blog posts raised the reward for guessing correctly. The money on their heads quickly overtook the h-index as a measure of a researcher’s impact.
*Based on a plot idea by spxtr.*