In Nick Bostrom's reading on 'Superintelligence', he explores the prospect of superintelligence emerging from artificial intelligence and highlights some of the benefits of digital intelligence. He distinguishes three forms of superintelligence: speed superintelligence (a system that can do everything a human intellect can, but much faster), collective superintelligence (a system that aggregates a large number of smaller intellects and outperforms any human cognitive system across many domains) and quality superintelligence (a system that is at least as fast as a human mind and qualitatively far smarter).
The thought of a machine outperforming humans, when we are supposed to be the dominant and most intelligent species on Earth, boggles my mind. Perhaps this fear of artificial intelligence taking over us is caused by the sci-fi films I have watched, which make me rather resistant to certain artificial intelligence technologies that are being created or have already been created.
In the Royal Society's paper 'Portrayals and perceptions of AI and why they matter', it mentions how exaggerations in the media (e.g. the over-emphasis of humanoid representations) of the potential consequences of AI research could greatly affect public confidence in and perceptions of AI, especially when the public is misinformed or skewed in its perceptions of AI. Bostrom talks about the advantages of AI, such as its enormous storage capacity compared to the limited memory of the human mind, as well as the speed at which it processes information. While these are clear benefits of AI, they led me to wonder:
1. How much agency are humans willing to give to machines before we lose total control over the information we hold in our minds?
2. Is it even possible for AI to take over humans, or will humans forever remain the dominant species?