Essay on the Negative Effects of Artificial Intelligence

Published: 2021/12/07
Number of words: 2187

At a technology conference in Austin, Texas, tech mogul Elon Musk issued a stern warning to the audience. He told listeners to mark his words: artificial intelligence, he claimed, is more dangerous than nuclear weapons. Before people get ahead of this premonition and start preparing for a hostile takeover by machines, I wish to reiterate that machines have not taken over, not yet at least. Despite this dark premise, it is undeniable that technology has increasingly integrated itself into many aspects of daily living. The experience of human living can no longer be described without mentioning technology's contributions. Technology affects how people work, live, rest, and entertain themselves. It can be found anywhere, whether at home, at school, in the mall, or at the office. From virtual assistants like Google Assistant and Siri to navigation apps like Waze that help people manage traffic, technology has steadily seeped into people's lives. Artificial intelligence has produced good results in society and will continue to benefit the technologically reliant world people enjoy. Despite the many advantages of AI, there are still several disadvantages it may bring.

The television series Black Mirror highlights the dystopian effects of AI and how the development of advanced AI machines could degrade society. An example is the episode "Nosedive." In this episode, a social media application assigns each user a "social score" based on the good deeds the user has posted in the app. Since every citizen has the application installed on their phone, the social score determines the perks one receives, such as discounts in stores, better chances of securing a home loan, and many more. In this episode, the application dictated the protagonist's fate, revealing how excessive reliance on social media may deteriorate people's social norms.
In a different episode, "Be Right Back," the story tackles how AI has progressed to the point that it can seemingly bring the dead back to life. The AI accomplishes this by recording and analyzing all messages, calls, and personal records of the deceased person. In the episode, a widow avails herself of this service to bring her husband "back to life." She later realizes that the android version can never replace him, which leaves her miserable. These are only two examples of how AI developments such as social rating systems and human-like androids could negatively affect people's lives. Black Mirror offers social commentary on how technology, when left unchecked, may manifest in forms that disrupt and alter human living altogether. As I said at the beginning of this essay, technology has not yet taken over. However, it is undeniable that technology is constantly developing and improving. It is essential that people research AI technology well to prevent the futuristic problems Black Mirror presents. Placed on a hypothetical timeline, Black Mirror is the future that awaits technological development devoid of limitations and checks. The contemporary era is the present, in which people still have the chance to identify the problems in AI and remedy them in ways that prevent a "Black Mirror-like" future. AI is a complex construct that people cannot fully grasp yet, and this lack of knowledge and deeper understanding allows errors and adverse consequences to occur. This essay will discuss the current problems in artificial intelligence, including challenges in AI development, emerging security threats, and the development of superintelligence.

A TED Talk by Janelle Shane tackles one key issue in AI development. In the video, Shane shares her experiences developing AI. The central issue is that AI machines do not execute the developers' intentions: when developers feed a machine data and instructions, the machine often interprets those instructions differently from how the developers meant them. For example, an AI machine was programmed to move quickly from point A to point B. While the instruction seems simple enough, the machine interpreted it differently: it somersaulted, twitched, and rolled its way from point A to point B. The developers fed the machine data in hopes that it would jog or run, but instead the machine executed a different set of motions that still met the criterion of "moving fast." Another example from Shane's TED Talk involved Amazon, which introduced a resume-sorting algorithm that learned to discriminate against resumes containing the word "women." The algorithm was trained on resumes of people who had previously worked at Amazon. Since most of those resumes came from male candidates, the algorithm accidentally learned to treat female candidates as inferior to male ones. Because AI does not comprehend social constructs and cultural ideologies, it will simply accomplish the task it was made to do based on the data and programming it receives (Shane). The central problem is that AI can interpret instructions very differently from what programmers and developers imagine. We must remember that AI will always do what it is asked to accomplish; the problem is that people accidentally ask the AI to do the wrong thing (Shane). Because AI often commits such errors, accidents can occur that produce adverse outcomes.
Shane's TED Talk reveals that AI developers must be careful and strict in their data and programming so that fewer errors occur. AI is built to improve lives, but glitches and data misinterpretations may lead to dire consequences.
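The failure mode Shane describes can be sketched in a few lines of code. In this hypothetical toy example (the gaits, speeds, and the `upright` property are invented purely for illustration), an optimizer is told only to minimize travel time from A to B; because "stay upright" was never part of the objective, it happily chooses a degenerate gait, much like the somersaulting machine above:

```python
# Toy illustration of "specification gaming": the optimizer sees only the
# objective as literally specified, not what the developers intended.
# All numbers and gaits here are hypothetical, chosen for illustration.

GAITS = {
    "walk":       {"speed": 1.0, "upright": True},
    "run":        {"speed": 2.0, "upright": True},
    "somersault": {"speed": 3.0, "upright": False},  # fast, but not what we meant
}

def travel_time(gait, distance=10.0):
    """The objective as literally specified: time to cover the distance."""
    return distance / GAITS[gait]["speed"]

def best_gait():
    # The optimizer only minimizes travel_time; "move the way a person
    # would" was never encoded, so it cannot be taken into account.
    return min(GAITS, key=travel_time)

if __name__ == "__main__":
    print(best_gait())  # picks "somersault": fastest, yet unintended
```

The point of the sketch is that nothing goes wrong inside the optimizer; it faithfully minimizes the objective it was given, and the mismatch lives entirely in what the objective leaves out.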

Another issue in AI technology is the emergence of security threats to people's data. Data is the heart and soul of all artificial intelligence machines. An article by Joseph Mutschelknaus tackles the top five data privacy issues raised by the emergence of AI technology. Data is essential to any AI operation because it is needed to train machines and algorithms to perform tasks (Mutschelknaus). This means that every operating AI machine or algorithm harvests and utilizes data from people or events. The problem arises when that data is used with malicious intent. One prominent example of the misuse of data is the emergence of deepfake pictures and videos. John Villasenor tackles the issues of deepfake media content and how the technology may be abused. Deepfakes are media content constructed to make a person appear to be saying or doing something he or she has never said or done (Villasenor). The term "deepfake" combines two words: "deep learning" and "fake." Deep learning refers to an AI learning process in which data is fed to a machine to analyze and memorize. Deepfakes aim to present videos or pictures that look and sound natural even though AI devices have edited them. Many types of deepfake technology currently exist. Examples include face-swapping, in which one person's face is placed on another person's body, and lip-syncing, in which an AI algorithm alters a person's lip movements and imposes a different audio track on their speech. Bernice Donald and Ronald Hedges tackle how deepfakes are created and how they have evolved with the technological advancement of AI. Deepfakes are created by AI algorithms or machine learning programs that analyze, reconstruct, and edit large data sets of images or sound clips (Donald and Hedges).
While deepfake photos and videos were challenging to create in the past, advances in machine learning and new software have made creating them convenient (Donald and Hedges). What is concerning about deepfakes is that they are difficult to identify. Deepfakes on social media are screened and removed, but research has shown that only two-thirds of all deepfakes are identified as fake (Donald and Hedges). What follows are malicious attempts to exploit deepfake technology. Circling back to Villasenor's article on the negative consequences of the technology, deepfakes can be used against politicians or famous personalities by manipulating videos so that these figures appear to say things that could harm their reputations (Villasenor). Furthermore, deepfakes are becoming prominent on pornographic websites, where people superimpose the faces of artists and others onto obscene videos (Villasenor). Dealing with deepfake technology becomes complex because people cannot distinguish the real from the fake. Deepfakes scramble our notion of the truth by exploiting our instinct to believe what we see first-hand. In trying to fend off deepfake propaganda, an individual's trust in all video and photographic content becomes tainted, because it is increasingly difficult to discern real from fake. AI advancement is thus driving humanity toward a situation in which we cannot reliably believe our own eyes and ears. This becomes especially dangerous when malicious actors employ deepfakes in events that could lead to grave consequences such as war and international conflict. What makes the situation worse is that legislation against deepfake technologies has not been drafted; thus, deepfakes remain increasingly difficult to handle.

Another pressing issue that humans face in AI development is the eventual birth of super-intelligent AI. In his TED Talk, Sam Harris outlines super-intelligent AI as an irrefutable future that humans will undergo. Harris opens his talk by presenting two options. The first is to stop advancing AI technology altogether. This option appears unfavorable given how much technology has improved human living; besides, people still desire to create machines that will help cure cancer and advance climate research in ways we have yet to discover. Harris also notes that it would take a global crisis, such as a pandemic or nuclear warfare, to halt the development of new technology altogether. He reiterates, "You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently." This leaves people with the second door, in which we continue to develop AI technology, and it is predicted that this effort will eventually produce a super-intelligent machine that transcends human intelligence (Harris). The primary concern with super-intelligent machines is whether a machine that surpasses its creators can coexist with them. Harris explains that if the goals of super-intelligent machines diverge from the goals of humans, the machines may not hesitate to eradicate us. The analogy he gives pertains to ants: people do not actively want to harm ants and often try to avoid doing so, yet if ants occupied land where a new building were to be constructed, humans would immediately eradicate them. The problem is not that super-intelligent machines will spontaneously become malevolent; it is that superintelligence is so far superior to human intelligence that we may not be able to predict when our goals will diverge from the machines'.
The development of super-intelligent machines is inevitable, and while experts speculate that this form of technology is many years away, it remains a possibility awaiting future generations.

To conclude, humanity's current AI technology is not yet on par with "Black Mirror" technology. Even as experts continuously reassure people that machines will not take over the world, people need to think about the possible adverse effects of technological advancement. Challenges in AI development, deepfakes, and superintelligence are only some of the issues affecting AI advancement; many others exist. In trying to achieve significant accomplishments, there will always be hurdles that make the journey rough. Technology is already a part of human living that cannot be undone. People still have the chance to refine research on AI machines to prevent adverse effects that could lead to unprecedented outcomes in the future. Experts in AI must ensure that ethical standards and safety checks are thoroughly considered in developing AI technology. As it stands, people are tinkering with something that has the potential to surpass human intelligence; with this knowledge, it is ultimately up to humans to determine what the future holds.

Works Cited

Adams, R. “10 Powerful Examples Of Artificial Intelligence In Use Today.” Forbes, 10 Jan. 2017, www.forbes.com/sites/robertadams/2017/01/10/10-powerful-examples-of-artificial-intelligence-in-use-today/?sh=77f35725420d. Accessed 18 Apr. 2021.

Donald, Bernice B., and Ronald J. Hedges. “Deepfakes Bring New Privacy and Cybersecurity Concerns.” Corporate Counsel Business Journal, 25 Sept. 2020, ccbjournal.com/articles/deepfakes-bring-new-privacy-and-cybersecurity-concerns. Accessed 30 Apr. 2021.

Faggella, Daniel. “Everyday Examples of Artificial Intelligence and Machine Learning.” Emerj, 11 Apr. 2020, emerj.com/ai-sector-overviews/everyday-examples-of-ai/. Accessed 18 Apr. 2021.

Harris, Sam. “Can we build AI without losing control over it?” YouTube, uploaded by TED, 20 Oct. 2016, www.youtube.com/watch?v=8nt3edWLgIg. Accessed 18 Apr. 2021.

Mutschelknaus, Joseph E. “Top Five Data Privacy Issues that Artificial Intelligence and Machine Learning Startups Need to Know.” Inside Big Data, 23 July 2020, insidebigdata.com/2020/07/23/top-five-data-privacy-issues-that-artificial-intelligence-and-machine-learning-startups-need-to-know/. Accessed 30 Apr. 2021.

Shane, Janelle. “The danger of AI is weirder than you think.” YouTube, uploaded by TED, 14 Nov. 2019, www.youtube.com/watch?v=OhCzX0iLnOc. Accessed 18 Apr. 2021.

Villasenor, John D. “Artificial intelligence, deepfakes, and the uncertain future of truth.” Brookings, 14 Feb. 2019, www.brookings.edu/blog/techtank/2019/02/14/artificial-intelligence-deepfakes-and-the-uncertain-future-of-truth/. Accessed 30 Apr. 2021.
