Artificial intelligence (AI) has become a staple of public consideration and discussion. AI has developed rapidly over the last fifteen years, but its growth in the 2020s has been unprecedented. The last half decade, often called the AI boom, has raised many ethical questions about AI use. Much remains unknown, and this uncertainty is a genuine point of contention for many, especially when corporate profit and human livelihoods may be at stake.
Artificial intelligence is used daily and provides concrete benefits, such as streamlining and automation, as well as advances in medicine and science driven by the efficient, complex data analysis AI systems can perform. AI can improve accessibility and efficiency, but it remains a tool that can be used for good or ill. Humans need to monitor this tool to ensure ethics and transparency are always prioritized.
In the absence of effective regulation, AI has amplified social and political polarization. The emergence of deepfakes has been a key driver of AI’s role in this polarization. According to a definition provided by the Department of Homeland Security, “deepfakes, an emergent type of threat falling under the greater and more pervasive umbrella of synthetic media, utilize a form of artificial intelligence/machine learning (AI/ML) to create believable, realistic videos, pictures, audio, and text of events which never happened.”
AI deepfakes advanced quickly as they entered the mainstream, which only adds to the concern and danger surrounding the technology. Technological development is generally beneficial, but not all technologies carry the same capabilities or potential for harm. At the current pace of development, actors with misguided or malicious interests who gain access to these tools pose a grave threat. Deepfakes will only become more accessible and realistic, and they already blur the line between authentic evidence and fabricated illusions. Pew Research Center data consistently show declining trust in news media and growing concern over “fake news.” Content can be tailored to target groups that will like and share it because it emotionally aligns with those groups’ feelings and beliefs. Deepfakes can spread rapidly across online networks, reinforcing disinformation, echo chambers, and in-group identities.
A further concern is the reverse effect emerging as deepfakes become so prevalent. When presented with real footage that is problematic or damaging, in-groups, politicians, or influential figures can invoke the existence of AI and deepfakes to dismiss it as “fake,” a kind of get-out-of-jail-free card played to discredit genuine evidence by claiming it was modified or fabricated. The consequence is an environment saturated with distrust, in which communities are perpetually on guard and quick to attack when they see fit. It would be a world governed by skepticism, and it is not hard to guess how that would affect democracy.
Social media platforms act as breeding grounds for AI-generated content and deepfakes, because the sheer volume of such material being posted can outpace platforms’ verification systems. That volume will only increase as AI production and distribution become faster and cheaper. Even if social media had excellent verification systems, AI content and deepfakes enter circulation so quickly and easily that much of the harm would remain. Research shows that once a false narrative is internalized, later corrections rarely neutralize its impact, meaning the damage has already been done (see Westbrook, Wegener & Susmann, 2023; Chan & Albarracín, 2023).
Regulations and standards must be developed to verify legitimate media and to govern malicious or deceptive content. One solution may lie in the technology itself: deploying AI tools to detect and counter AI-generated fakes, AI versus AI. As this environment increasingly becomes the new normal, digital media literacy should also be a crucial focus. People who are better equipped to evaluate the validity of the information they encounter will help combat polarization in the digital era of AI and deepfakes.

