This year, I’m donating to MIRI. Here’s a quick summary of the classic argument:
- Artificial General Intelligence is possible and reasonably probable in the medium term.
- Such AI would be very powerful.
- Without careful steps to prevent it, such an AI is likely to be unfriendly, which would be very bad. Unfriendly AIs do not hate us, but we are made of atoms they can use for purposes other than our own.
- A friendly AI dedicated to promoting our values would be a very good thing.
- Donating to MIRI is one of the best ways to work toward this, as they are the only organization fully focused on this one issue.
Even ignoring the risk of unfriendly AI, I think friendly AI may be one of the best ways of preventing runaway value drift from destroying all value in the future.