Week 7 Reflections
In his reading on ‘Superintelligence’, Nick Bostrom explores the prospect of superintelligence emerging from artificial intelligence and highlights some of the benefits of digital intelligence. He differentiates three forms of superintelligence: speed superintelligence (a system that can do everything a human intellect can, but much faster), collective superintelligence (a system that combines a large number of smaller intellects and whose performance across many domains outstrips that of any current human cognitive system) and quality superintelligence (a system that is at least as fast as a human mind and qualitatively smarter).
The thought of a machine outperforming humans, when we are supposed to be the dominant and most intelligent species on Earth, simply boggles my mind. Perhaps this fear of artificial intelligence taking over us humans comes from the sci-fi films I have watched, which make me rather resistant to certain AI technologies that are being created or have already been created.
In the Royal Society’s paper ‘Portrayals and perceptions of AI and why they matter’, it mentions how media exaggerations of the potential consequences of AI research (e.g. the over-emphasis on humanoid representations) can greatly affect public confidence in and perceptions of AI, especially when the public is misinformed or holds skewed perceptions of AI. Bostrom discusses the advantages of AI, such as storage capacity/memory space that is enormous compared to that of the limited human mind, as well as the speed at which it processes information. While all of these are clear benefits of AI, they raised a few questions for me:
1. How much agency are humans willing to give to machines before we lose total control over the information that we hold in our minds?
2. Is it even possible for AI to take over humans, or will humans forever remain the dominant species?
Week 5 Reflections
Managing Opacity: Information Visibility and the Paradox of Transparency in the Digital Age by Cynthia Stohl et al.
I found it interesting to see how they differentiate between the terms ‘visibility’ and ‘transparency’, which I used to think were one and the same. They describe the relationship between the two as not necessarily linear; it can be a curvilinear one as well.
Even when an organisation makes all of its information available and easily accessible to the public, and does so with legal approval, that does not guarantee transparency; what I found intriguing was the transparency paradox mentioned in the article. Information can be hidden in plain sight when receivers intentionally or unintentionally ignore relevant or important information due to information overload (inadvertent opacity). Organisations can also intentionally conceal important information through the way it is framed, which gives the impression of transparency when, in reality, the opposite is true.
This made me wonder: if there is such a thing as a transparency paradox, how can organisational conduct be monitored more effectively (instead of merely observing the three attributes of visibility)?
The Software Arts by Warren Sack
In this reading, Warren Sack positions computing as a form of art, essentially calling programmers and computer scientists artists. He recounts the history of computing, but instead of putting science and technology at the forefront, he places the arts there, crediting the arts for computing’s evolution. I found this reading intriguing because we have always viewed computing as something scientific or mathematical in nature; I had never looked at it the way Sack does, from an arts and humanities perspective, which was rather refreshing for me.