
The Rise of Artificial Media and the AI Trust Gap

  • Writer: Shreyas Sundararaman
  • May 11, 2025
  • 3 min read

AI Is Reshaping Media 

Artificial intelligence now drives many forms of media. Deepfakes, AI-generated influencers, and automated news writing are changing how content is made and shared. While these tools offer creative possibilities, they raise serious concerns about misinformation, transparency, and public trust.


Deepfakes Fuel Mistrust

AI-generated media often misleads the public. A fake image of Pope Francis went viral, prompting him to warn leaders at the G7 summit about deepfake risks.

Stanford’s JSK Journalism Program reports that these tools erode visual trust and confuse facts. Public fear is not just theoretical — it affects real-world events, including elections and public safety. The Harvard Business Review (HBR) outlines how deepfakes have disrupted elections in countries like Bangladesh and Moldova. Social media platforms have cut back on human moderation, allowing disinformation to spread faster. Despite governments’ efforts to regulate, global enforcement is uneven, leaving many users exposed to deceptive content (HBR, 2024).


The AI Trust Gap 

The HBR introduces the concept of the “AI trust gap” — the difference between what AI can do and what people are willing to trust it to do. Trust is undermined by risks like disinformation, safety failures, lack of transparency, and bias. These risks often outweigh performance gains and limit adoption across sectors like healthcare, finance, and media.

AI tools, especially large language models (LLMs), sometimes produce hallucinations or unpredictable outputs. Transparency remains low due to the “black box” nature of many AI systems, where even developers can’t fully explain how decisions are made. This erodes confidence, especially in sensitive fields like medicine and law (HBR, 2024).


Virtual Influencers Blur Reality 

Computer-generated personas like Lil Miquela and Aitana López front major campaigns. Brands favor them for being controllable, but many users don’t know these influencers aren’t real. This raises ethical concerns about manipulation and unrealistic expectations, especially among young audiences.

AI’s role in creating human-like content complicates notions of authenticity. The HBR emphasizes that public skepticism grows when people can’t tell what’s real — a core issue contributing to the trust gap.


AI Is Also Writing the News 

Media outlets have used AI to publish articles under fake names. In late 2023, Sports Illustrated was exposed for publishing undisclosed AI-generated content filled with errors. NewsGuard reports that AI-generated news sites have grown from 49 to over 600 in a matter of months. These practices undermine journalistic integrity and public trust.


Safety, Bias, and Ethics Remain Unsolved 

Even high-performing AI systems face risks. Experts warn that safety failures — from mislabeling medical scans to errors in autonomous vehicles — could have serious consequences. Bias also persists due to flawed training data and ambiguous definitions of fairness. The HBR notes that ethical dilemmas vary by culture, making global standards difficult to enforce.

AI companies often fail to implement effective transparency practices. The industry’s incentives work against openness, with few firms releasing training data or submitting to external audits. Voluntary factsheets are often unaudited and inconsistent (HBR, 2024).


Final Thoughts 

Closing the AI trust gap requires more than better algorithms. The HBR suggests three key steps:

  • Identify key risks for each application and actively work to reduce them.

  • Keep humans in the loop, especially in high-stakes decisions. 

  • Train users and developers to recognize and address trust issues. 


AI adoption must include human oversight. Systems should be labeled clearly, and platforms must enforce quality control. Regulations can help but won’t eliminate the risks. Human vigilance is the most reliable safeguard.

Artificial media is here to stay, but trust is not guaranteed. As HBR states, even a single AI failure can damage confidence across an entire system. Responsible development, clear labeling, and human supervision are essential for AI’s future in media. The industry has invested billions in training machines. Now, it must invest in training people to use them wisely.



References:


“After Viral Deepfake, Pope Francis Is Highlighting the Use of AI at G7 Summit.” KSBY News, 4 Apr. 2024, https://www.ksby.com/politics/disinformation-desk/after-viral-deepfake-pope-francis-is-highlighting-the-use-of-ai-at-g7-summit. Accessed 12 Apr. 2025.


“AI-Generated Image of Pope Francis in a Puffer Jacket.” The New York Times, 6 Apr. 2023, https://static01.nyt.com/images/2023/04/06/business/00AI-POPE/00AI-POPE-articleLarge.png?quality=75&auto=webp&disable=upscale. Accessed 12 Apr. 2025.


Chakravorti, Bhaskar. “AI’s Trust Problem.” Harvard Business Review, 3 May 2024, https://hbr.org/2024/05/ais-trust-problem.


Harwell, Drew. “AI’s Threat to 2024: Fake News, Phony People and Chaos.” The Washington Post, 17 Dec. 2023.


Perlman, Merrill. “Seeing Is No Longer Believing: Artificial Intelligence’s Impact on Photojournalism.” John S. Knight Journalism Fellowships, Stanford University, 25 Apr. 2023, https://jsk.stanford.edu/news/seeing-no-longer-believing-artificial-intelligences-impact-photojournalism. Accessed 12 Apr. 2025.
