I recently saw Blade Runner 2049 for the first time. Released in 2017, it is a sequel to the original Blade Runner that explores issues around Artificial Intelligence (A.I.). In thinking about A.I. and its role in our lives, movies like Blade Runner tend to come to mind because they not only represent what could happen as A.I. development progresses; these neo-noir, science-fiction movies also reveal humanity's fears of the doom and gloom of an inevitable Singularity. However, doom and gloom are not the only possible outcomes of A.I. development.
According to Margaret Boden (2018), "Artificial Intelligence seeks to make computers do the sorts of things minds can do, which involves psychological skills such as perception, association, prediction, planning and motor control" (p. 104). She goes further to define the "main aims" of A.I. as both technological and scientific (p. 104). One concept of heated debate, though, is whether A.I. is really intelligence or just programming. This debate plays out in practical settings like using A.I. to diagnose and treat medical issues. A.I.'s perceived weakness in real-life situations is that it cannot explain why it made the decisions it made. However, I wonder how well doctors can explain the same diagnoses. If A.I. processing is meant to imitate the human brain's processing, then I wonder whether a doctor's explanation would actually be similar to the A.I.'s. Specifically, A.I. should be able to list the rules it followed to make a diagnosis just as a doctor would list the same process of elimination for his or her diagnosis. I know the algorithms of artificial intelligence are far more complicated than I have described them, but the same could be said for the human brain's processing. What A.I. cannot account for is a doctor's decision based on aspects beyond book knowledge, like the senses and gut responses. But, at the end of the day, no one really can know whether A.I. is truly intelligent or not. In fact, Boden makes the same claim: "Since genuine intelligence involves understanding, that's another reason why no one knows whether our hypothetical AI would really be intelligent" (p. 120).
So, is this a problem? No, I don't think we can know, or must know, how closely artificial intelligence mirrors human intelligence. It is and will be a form of intelligence. Humans' need to know with 100% certainty seems to be an attempt to control the future of A.I. development. I know Stephen Hawking described A.I. as a future threat to humanity that we cannot ignore, but I wonder why. Is the only path of A.I. development a threatening one? I don't think A.I. needs to, or can, replace the entire human being. Instead, it would redefine the roles of humans and computers. We can look to science fiction again to see other possibilities for A.I. development that do not put the existence of humanity at risk. For example, the Enterprise computer on Star Trek: The Next Generation, as well as Data, represent different stages of A.I. that work hand in hand with the humans on board. First, the computer aboard the Enterprise contains a massive amount of information, but it isn't just an information storage unit. It can do more. Many times, Captain Picard asks for information from the computer, then asks it to extrapolate, or draw a conclusion about that information, for him. Sometimes the computer complies, and sometimes there isn't enough information to extrapolate from. The computer possesses a higher processing function than my current PC in that it can make associations, make predictions, and plan courses of action, but it isn't as advanced as the character Data. Data is a created artificial life form with an intelligence definitely superior to that of his colleagues on board. Data is unique in that he has been created to be both computer and consciousness. It is this version of A.I. that seems to create the most problems in the minds of humans today, but if we look at the intricacies of the Data character, it is clear that Data poses no real threat to the other life forms on board. He works with and for Starfleet, but has a personal life of his own.
What Data and the Enterprise computer are both missing is human emotion. But I wonder if human emotion is required for this form of intelligence.
Artificial Intelligence is already here, and it will continue to advance. How we perceive A.I. is just as important as how we use it. If we are careful to temper our responses to and interactions with A.I., then maybe we can avoid the doom and gloom of the Singularity.
Boden, Margaret A. Artificial Intelligence: A Very Short Introduction. Oxford University Press, 2018.