Updated: Aug 23, 2021
Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates, and many other big names in science and technology have recently expressed concern in the media and through open letters about the risks posed by AI, joined by many leading AI researchers.
The idea that the quest for strong artificial intelligence would ultimately succeed was long thought of as science fiction, centuries or more away. However, thanks to recent breakthroughs, many AI milestones that experts considered decades away just five years ago have now been reached, leading many experts to take seriously the possibility of superintelligence in our lifetime. While some experts still estimate that human-level artificial intelligence is centuries away, most AI researchers at the 2015 Puerto Rico conference guessed it would arrive before 2060. Since it may take decades to complete the necessary safety research, it is prudent to start that research now.
Since AI has the potential to become smarter than any human, there is no reliable way to predict how it will behave. We cannot draw much on past technological advances, because we have never created anything, intentionally or unintentionally, capable of outsmarting us. The best example of what we might face may be our own evolution. Humans now control the planet not because we are the strongest, fastest, or largest, but because we are the smartest. If we are no longer the smartest, can we be sure we will remain in control?
As long as we win the race between the growing power of technology and the wisdom with which we manage it, our civilization will flourish. When it comes to AI, the FLI's position is that the best way to win this race is not to impede the former, but to accelerate the latter by supporting AI safety research.