Saturday 30 March 2024 - 19:44
Story Code : 418579

China turns to AI in propaganda mocking the ‘American Dream’

Chinese state media’s A Fractured America series shows how AI is beginning to shape Beijing’s influence campaigns.
China is using artificial intelligence (AI) to spread its message abroad


Taipei, Taiwan – “The American Dream. They say it’s for all, but is it really?”

So begins a 65-second, AI-generated animated video that touches on hot-button issues in the United States ranging from drug addiction and imprisonment rates to growing wealth inequality.

As storm clouds gather over an urban landscape resembling New York City, the words “AMERICAN DREAM” hang in a darkening sky as the video ends.

The message is clear: Despite its promises of a better life for all, the United States is in terminal decline.

The video, titled American Dream or American Mirage, is one of a number of segments aired by Chinese state broadcaster CGTN – and shared far and wide on social media – as part of its A Fractured America animated series.

Other videos in the series contain similar titles that invoke images of a dystopian society, such as American workers in tumult: A result of unbalanced politics and economy, and Unmasking the real threat: America’s military-industrial complex.

Besides their strident anti-American message, the videos all share the same AI-generated hyper-stylised aesthetic and uncanny computer-generated audio.

CGTN and the Chinese embassy in Washington, DC did not respond to requests for comment.

The Fractured America series is just one example of how artificial intelligence (AI), with its ability to generate high-quality multimedia with minimal effort in seconds, is beginning to shape Beijing’s propaganda efforts to undermine the United States’ standing in the world.

Henry Ajder, a UK-based expert in generative AI, said while the CGTN series does not attempt to pass itself off as genuine video, it is a clear example of how AI has made it far easier and cheaper to churn out content.

“The reason that they’ve done it in this way is, you could hire an animator, and a voiceover artist to do this, but it would probably end up being more time-consuming. It would probably end up being more expensive to do,” Ajder told Al Jazeera.

“This is a cheaper way to scale content creation. When you can put together all these various modules, you can generate images, you can animate those images, you can generate just video from scratch. You can generate pretty compelling, pretty human-sounding text-to-speech. So, you have a whole content creation pipeline, automated or at least highly synthetically generated.”
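
To make Ajder’s point concrete, the pipeline he describes can be wired together from commodity, openly available tools. The short Python sketch below is purely illustrative: it assumes pyttsx3 for offline text-to-speech, Hugging Face’s diffusers library for image generation and moviepy for assembly, none of which are named in the reporting, and it is not a description of the tools actually behind CGTN’s videos.

# Illustrative sketch only: a minimal "script to narrated clip" pipeline.
# The libraries, model name and file names below are assumptions made for
# this example, not tools attributed to CGTN or any other actor.
import pyttsx3
from diffusers import StableDiffusionPipeline
from moviepy.editor import AudioFileClip, ImageClip

script = "The American Dream. They say it's for all, but is it really?"

# 1. Synthesise a voiceover from the script.
tts = pyttsx3.init()
tts.save_to_file(script, "narration.wav")
tts.runAndWait()

# 2. Generate a stylised background image from a text prompt.
image_model = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")
frame = image_model("storm clouds gathering over a city skyline, dramatic animated style").images[0]
frame.save("frame.png")

# 3. Combine the still image and the audio into a short video segment.
audio = AudioFileClip("narration.wav")
clip = ImageClip("frame.png").set_duration(audio.duration).set_audio(audio)
clip.write_videofile("segment.mp4", fps=24)

The specific tools matter less than the structure: each stage (script, voice, imagery, assembly) can be automated end to end, which is what makes the low-cost, high-volume content creation Ajder describes possible.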

China has long exploited the enormous reach and borderless nature of the internet to conduct influence campaigns overseas.

China’s enormous internet troll army, known as “wumao”, became known more than a decade ago for flooding websites with Chinese Communist Party talking points.

Since the advent of social media, Beijing’s propaganda efforts have turned to platforms like X and Facebook and online influencers.

As the Black Lives Matter protests swept the US in 2020 following the killing of George Floyd, Chinese state-run social media accounts expressed their support, even as Beijing restricted criticism of its record of discrimination against ethnic minorities like Uyghur Muslims at home.



In a report last year, Microsoft’s Threat Analysis Center said AI has made it easier to produce viral content and, in some cases, more difficult to identify when material has been produced by a state actor.

Chinese state-backed actors have been deploying AI-generated content since at least March 2023, Microsoft said, and such “relatively high-quality visual content has already drawn higher levels of engagement from authentic social media users”.

“In the past year, China has honed a new capability to automatically generate images it can use for influence operations meant to mimic US voters across the political spectrum and create controversy along racial, economic, and ideological lines,” the report said.

“This new capability is powered by artificial intelligence that attempts to create high-quality content that could go viral across social networks in the US and other democracies.”

Microsoft also identified more than 230 state media employees posing as social media influencers, with the capacity to reach 103 million people in at least 40 languages.

Their talking points followed a similar script to the CGTN video series: China is on the rise and winning the competition for economic and technological supremacy, while the US is heading for collapse and losing friends and allies.

As AI models like OpenAI’s Sora produce increasingly hyperrealistic video, images and audio, AI-generated content is set to become harder to identify and to spur the proliferation of deepfakes.

Astroturfing, the practice of creating the appearance of a broad social consensus on specific issues, could be set for a “revolutionary improvement”, according to a report released last year by RAND, a think tank that is part-funded by the US government.

The CGTN video series, while at times using awkward grammar, echoes many of the complaints shared by US citizens on platforms such as X, Facebook, TikTok, Instagram and Reddit – websites that are scraped by AI models for training data.

Microsoft said in its report that while the emergence of AI does not make the prospect of Beijing interfering in the 2024 US presidential election more or less likely, “it does very likely make any potential election interference more effective if Beijing does decide to get involved”.

The US is not the only country concerned about the prospect of AI-generated content and astroturfing as it heads into a tumultuous election year.

By the end of 2024, more than 60 countries will have held elections impacting 2 billion voters in a record year for democracy.

Among them is democratic Taiwan, which elected a new president, William Lai Ching-te, on January 13.

Taiwan, like the US, is a frequent target of Beijing’s influence operations due to its disputed political status.

Beijing claims Taiwan and its outlying islands as part of its territory, although Taiwan functions as a de facto independent state.

In the run-up to January’s election, more than 100 deepfake videos of fake news anchors attacking outgoing Taiwanese President Tsai Ing-wen were attributed to China’s Ministry of State Security, the Taipei Times reported, citing national security sources.

Much like the CGTN video series, the videos lacked sophistication, but showed how AI could help spread misinformation at scale, said Chihhao Yu, the co-director of the Taiwan Information Environment Research Center (IORG).

Yu said his organisation had tracked the spread of AI-generated content on LINE, Facebook, TikTok and YouTube during the election and found that AI-generated audio content was especially popular.

“[The clips] are often circulated via social media and framed as leaked/secret recordings of political figures or candidates regarding scandals of personal affairs or corruption,” Yu told Al Jazeera.

Deepfake audio is also harder for humans to distinguish from the real thing, compared with doctored or AI-generated images, said Ajder, the AI expert.

In a recent case in the UK, where a general election is expected in the second half of 2024, opposition leader Keir Starmer was featured in a deepfake audio clip appearing to show him verbally abusing staff members.

Such a convincing misrepresentation would have previously been impossible without an “impeccable impressionist”, Ajder said.

“State-aligned or state-affiliated actors who have motives – they have things they are trying to potentially achieve – now have a new tool to try and achieve that,” Ajder said.

“And some of those tools will just help them scale things they were already doing. But in some contexts, it may well help them achieve those things, using completely new means, which are already challenging for governments to respond to.”

 
Reporter : Editorial of The Iran Project
https://theiranproject.com/vdcd5j0onyt0fk6.em2y.html