AI Slop

AI Slop: Artificial intelligence, digital videos and AI rubbish on the internet and on TikTok & Co.


DESCRIPTION:

AI Slop and the littering of the internet: AI rubbish and mass-generated videos are flooding social media. How AI slop is created, why platforms fail to contain it, and how absurd AI-generated content floods search results and feeds.

AI Slop: How artificial intelligence is fuelling the digital flood on social media, especially TikTok and Instagram, and contributing to the littering of the internet

The world of social media is increasingly being flooded by a new threat: AI slop, also known as AI rubbish. This mass-generated, low-quality content is created by the uncontrolled use of artificial intelligence and is flooding platforms such as TikTok and Instagram. From bizarre-looking ‘Shrimp Jesus’ images to fake historical videos, the littering of the internet with AI-generated content poses a serious challenge to information quality and credibility in the digital age. This article analyses the mechanisms behind this phenomenon, explains why AI slop is created, and suggests ways in which we can counteract this development.

What is AI slop and why is it flooding social media platforms?

AI slop refers to low-quality, AI-generated content that is mass-produced using artificial intelligence and pumped into social networks. The term ‘slop’ literally means ‘pig swill’ – an apt metaphor for the low-grade content that clogs our digital feeds.

This AI-generated content is mainly created using tools such as ChatGPT, Midjourney and other text-to-image generators. Content creators use automated prompts to generate hundreds of posts within minutes, without editorial control or ethical considerations. The result is a flood of absurd, misleading or completely fabricated media content.

The platforms TikTok and Instagram are particularly affected, as their algorithms are optimised for engagement and reach, not factual accuracy. This algorithm-driven amplification means that AI slop often receives more visibility than high-quality, editorially vetted content.

How is AI-generated content created by artificial intelligence?

The emergence of large language models and generative artificial intelligence has dramatically simplified the production of AI slop. Developers and content producers use these AI models to automatically create text, images and even videos. A single model can produce photorealistic images or convincing text in a matter of seconds.

It becomes particularly problematic when this technology is used for engagement farming. Accounts generate massive amounts of content with the sole aim of collecting clicks and interactions. This often results in bizarre combinations – such as the infamous ‘Shrimp Jesus’ phenomenon, in which religious motifs are mixed with surreal elements to generate maximum attention.

Their ease of use makes these tools accessible even to laypeople. With just a few clicks, any user can produce AI-generated content and distribute it across various platforms. This leads to an exponential increase in AI slop in our digital spaces.

What role do platforms such as TikTok play in the spread of AI slop?

TikTok and Instagram contribute significantly to the spread of AI slop through their algorithms. These platforms prioritise content based on engagement metrics such as likes, comments and dwell time. Since AI-generated content is often deliberately designed to be provocative or emotionally appealing, it frequently receives high interaction rates.

The speed at which content is consumed on these platforms makes it even more difficult for users to verify its authenticity. A 15-second video is liked and shared before the viewer has had time to question its credibility. This dynamic greatly favours the spread of AI-generated content.

To make matters worse, moderation on these platforms is often inadequate. While obviously harmful content is removed, many AI slop posts slip through the cracks because they do not explicitly violate community guidelines, but ‘only’ contain inferior or misleading information.

Why is AI slop problematic?

The littering of the internet with AI slop has far-reaching consequences for the information landscape. First, it dilutes the quality of available information. When search results and social media feeds are flooded with low-quality content, it becomes more difficult to find trustworthy sources.

Second, AI slop contributes to the spread of fake news and disinformation. Since AI-generated content is often not marked as such, it can be perceived as authentic information. This undermines trust in digital media as a whole and can lead to false beliefs or decisions.

The psychological effects are also concerning. Users develop ‘information fatigue’ – exhaustion from constant exposure to questionable content. This can lead to apathy or cause people to turn away from digital information sources altogether, which is problematic in an increasingly connected world.

How can you recognise automated, AI-generated videos on social media?

Identifying AI-generated videos is becoming increasingly difficult as the technology becomes more sophisticated. Nevertheless, there are several tell-tale signs that attentive viewers can look out for. Unnatural movements, inconsistent lighting or strange artefacts in faces or hands can be indications of AI generation.

One should be particularly sceptical of historical or documentary content. If a ‘historical’ video looks too perfect or shows events that appear too well documented, it could be a deepfake or other AI-generated media content. A critical look at the source and research in established media can provide clarity.

Text-based content often displays characteristic patterns: repetitive phrases, unnaturally perfect grammar or inconsistencies in content can be indications of automated generation. The frequency of an account's posts can also be suspicious – if a user posts dozens of times a day, this often indicates automated content production.
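The two signals above – repetitive phrasing and an implausibly high posting rate – can be combined into a rough screening heuristic. The following Python sketch is purely illustrative: the helper names (`repeated_phrase_ratio`, `looks_automated`) and the thresholds are assumptions for this article, not an established detection method.

```python
from collections import Counter

def repeated_phrase_ratio(text: str, n: int = 3) -> float:
    """Fraction of word n-grams that occur more than once in the text.

    High values suggest the repetitive phrasing typical of
    mass-generated content; low values prove nothing on their own.
    """
    words = text.lower().split()
    ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not ngrams:
        return 0.0
    counts = Counter(ngrams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(ngrams)

def looks_automated(posts_per_day: float, text: str,
                    freq_threshold: float = 24.0,
                    repeat_threshold: float = 0.3) -> bool:
    """Flag an account if it posts implausibly often or its text
    repeats phrases heavily (illustrative thresholds)."""
    return (posts_per_day >= freq_threshold
            or repeated_phrase_ratio(text) >= repeat_threshold)
```

A positive flag is only a hint, not proof: short texts yield too few n-grams to judge, and prolific human posters can trip the same thresholds. Such heuristics belong at the start of a verification process, never at the end of one.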

What impact does AI slop have on intelligence and opinion formation?

AI slop significantly influences the way people process information and form opinions. Constant exposure to low-quality content causes users to become accustomed to superficial, sensationalised representations of complex topics. This can lead to a dumbing down of discourse and a reduction in attention span.

Particularly problematic is the impact on younger generations who grow up with AI slop and may develop difficulty distinguishing between high-quality and low-quality information. In the long term, this can impair their ability to think critically and analyse objectively.

The proliferation of AI-generated content also contributes to polarisation. Since this content is often designed to elicit strong emotional responses, it can reinforce existing prejudices and lead to extreme viewpoints. This jeopardises social cohesion and democratic opinion-forming.

How can AI-generated content be exposed and analysed?

Analysing and exposing AI-generated content requires a combination of technical understanding and critical thinking. First, check the source: is the account verified? Does it have a history of credible posts? Is similar sensational content posted regularly?

Technical tools can also be helpful. The first AI detectors that can identify artificially generated texts and images are already available. However, these are not yet perfect and can be fooled by advanced AI systems. Experts such as Simon Willison therefore recommend a multidimensional approach to verification.

One proven method is cross-referencing: Does the information appear in several independent, trustworthy sources? Is it confirmed by established media or institutions? Satirists such as John Oliver have already pointed out the problem in ‘Last Week Tonight’ and presented methods for identifying AI slop.

What should platforms do to combat the flood of AI slop?

Platforms face the challenge of combating AI slop without compromising freedom of expression or legitimate AI use. Mandatory labelling of AI-generated content would be an important first step. Users should be able to clearly recognise when content has been artificially generated.

Algorithm changes could also help. Instead of focusing solely on engagement, platforms should place greater emphasis on factors such as source credibility and information quality. This would favour high-quality, editorially vetted content and make AI slop less visible.
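As a thought experiment, such a reweighting can be written down as a simple scoring formula. The sketch below is hypothetical – real ranking systems are vastly more complex, and the function `rank_score`, the weight `alpha`, and the normalised inputs are assumptions made for illustration only.

```python
def rank_score(engagement: float, credibility: float,
               alpha: float = 0.5) -> float:
    """Blend normalised engagement (0-1) with source credibility (0-1).

    alpha controls how much credibility counts: alpha=0 reproduces a
    pure engagement ranking, alpha=1 ranks on credibility alone.
    """
    if not (0.0 <= engagement <= 1.0 and 0.0 <= credibility <= 1.0):
        raise ValueError("inputs must be normalised to [0, 1]")
    return (1 - alpha) * engagement + alpha * credibility
```

With `alpha = 0.5`, a highly engaging post from a non-credible source (0.9, 0.1) scores no higher than a modestly engaging post from a credible one (0.1, 0.9) – blunting exactly the pure-engagement advantage that AI slop exploits.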

Transparency in content moderation is another important aspect. Platforms should communicate more clearly how they deal with low-quality content and what measures users can take to report suspicious content. Collaboration with fact-checking organisations and media experts could also improve content quality.

What role does deception play in AI-generated social media posts?

Deception is a central element of many AI slop strategies. Much AI-generated content is deliberately designed to appear authentic, even though it has been created entirely artificially. This deliberate misrepresentation distinguishes AI slop from legitimate AI applications, where the artificial nature is communicated transparently.

 

Particularly insidious are cases in which AI-generated content is presented as historical documents or news material. This can lead to false perceptions of history or distort the presentation of current events. The line between entertainment and disinformation is often blurred.

The psychological mechanisms of deception are sophisticated: AI slop often uses familiar visual codes or emotional triggers to suggest credibility. A ‘historical’ photo appears authentic thanks to sepia toning and artificial signs of ageing, even though it was created by AI just minutes before.

How does AI slop influence the future of digital media content?

The long-term impact of AI slop on the digital media landscape is not yet fully foreseeable, but worrying trends are already emerging. If the current development continues, authentic, high-quality content could be lost in the mass of AI-generated content.

This could lead to a two-tier society of information: those who have access to trustworthy, curated sources of information and those who mainly consume AI slop. Such a development would reinforce existing social inequalities and counteract the democratisation of information that the internet originally promised.

At the same time, the omnipresence of AI slop could lead to a counter-movement: users could increasingly turn to verified, human-curated content and be willing to pay for high-quality information. Paradoxically, this could lead to a renaissance of professional journalism and editorial quality control.

Key insights into AI slop and the digital future

• AI slop is ubiquitous: Inferior, AI-generated content is already flooding social networks and will continue to increase without adequate regulation.

• Detection is becoming more difficult: As technology advances, AI-generated videos and texts are becoming increasingly realistic and harder to distinguish from authentic content.

• Platforms bear responsibility: TikTok and Instagram must adapt their algorithms and implement better mechanisms for labelling AI content.

• Education is crucial: Users must learn to think critically and check different sources so as not to fall for AI slop.

• Quality becomes a luxury: High-quality, editorially vetted content could become a premium product in the future, while AI slop dominates the ‘free landscape’.

• Democratic threat: The littering of the internet with AI slop threatens opinion-forming and could impair democratic processes in the long term.

• Technical solutions needed: Better AI detectors and transparency tools are needed to stem the flood of inferior content.

• Social task: The fight against AI slop is not only a technical challenge, but also a social one that requires collective effort.

Loss of reality

Hannah Arendt sees lies as a real threat to the anchoring of truth and reality in public and individual life, and emphasises that the systematic loss of reality through lying has serious ethical consequences for society and the individual. According to Arendt, constant lying is not primarily aimed at making people believe a particular untruth, but leads to ‘nobody believing anything anymore’. People lose the ability to distinguish between truth and lies, right and wrong. In totalitarian systems, lies therefore serve to undermine shared reality and, with it, collective identity.

Consequences for society

The destruction of trust between people is one of the most ruinous consequences of lying; without common ground, any form of rational and moral communication becomes untenable.

Political lies undermine democratic institutions by depriving citizens of the opportunity to hold those in power to account.

This creates a climate of uncertainty and disorientation that encourages social manipulation and undermines plurality, judgement and political discourse.

Impact on individuals

Constant exposure to AI slop has psychological effects on users. As in TikTok's attention economy, the littering of the internet with inferior content results in information overload and a loss of orientation – quite apart from the intended shortening of users' attention spans. Users can no longer identify trustworthy sources or distinguish facts from fake narratives.

Individuals lose confidence in their own judgement and are forced to follow external interpretations, leading to disempowerment and loss of self. Arendt describes how constant lying destroys ‘human orientation in the realm of the real’.

The ethical centring of the individual is weakened by the erosion of factual reality.

Case study: The Vanvera phenomenon

Origin and spread of misinformation

The case study of the ‘Vanvera’ phenomenon illustrates how AI slop works and spreads in detail. The origin of this misinformation lies in a Finnish satirical article that presented the Vanvera as a ‘fragrant device’ of the 19th-century Venetian aristocracy. This article contained typical characteristics of AI-generated content, including exaggerated imagery and absurd comparisons. Although the story was meant to be satirical, it was amplified and spread by automated processes and algorithms on social media, contributing to the littering of the internet.

The role of Reddit and Instagram

Reddit and Instagram played a crucial role in spreading the Vanvera myth. On Reddit, the story was spread by users who claimed that the Vanvera was a ‘tipo di sacchetto’ (a type of small bag) for Venetian nobles. This post generated high visibility and contributed to the spread of the misinformation – a typical example of how AI slop propagates.

From Reddit, the story migrated to Instagram, where accounts added AI-generated images to substantiate the story. The algorithms of these platforms further amplified the spread, as AI-generated content that generated high engagement was displayed preferentially.

The historical fabrication and its consequences

The analysis of the Vanvera fabrication shows how AI-generated content can arise from linguistic confusion and translation artefacts. Descriptions such as ‘unique object’ point to non-native content generation, possibly machine-translated from underrepresented languages.

Despite attempts at debunking by Italian users, the misinformation persisted due to the algorithmic amplification described above. This AI-generated content was monetised through views from Western markets, which further fuelled the production of AI slop. This underscores the need to label and analyse AI-generated content in order to combat the littering of the internet.

Conclusions and outlook

The future of digital content

The future of digital content will be significantly shaped by the development and use of artificial intelligence. The flood of AI slop threatens the quality and trustworthiness of information on the internet. 

Society and each individual also play an important role in the fight against AI slop. It is crucial to critically question the information one consumes and shares.

The littering of the internet can only be combated if users learn to identify trustworthy sources and distinguish facts from fiction, rather than relying on content from platforms such as 4chan. It is also important to draw attention to the dangers of AI slop.

Only in this way can the littering of the internet be curbed and an informed society be ensured.

Arendt's ethical judgement

Arendt advocates for the preservation of truth as the cornerstone of a free and pluralistic society and as a prerequisite for individual judgement and moral responsibility.

She sees ethical responsibility where individuals are willing to assert truth even against political power; truth without power seems ‘contemptible’ to her, and power that is maintained only through lies is destructive.

Arendt's analysis shows that continued lying creates a social atmosphere that undermines freedom, plurality and ethical judgement, and is thus existentially dangerous for both social cohesion and individual identity and responsibility.


RELATED ARTICLES:

AI chatbots: psychosis, delusions and AI psychosis

AI makes us lazy thinkers – protect your ingenuity

Ava 2050: Influencers, digital footprints and health

‘parlare a vanvera’ Italian: English translation of the term in the dictionary and its bizarre history.


Directions & Opening Hours


Psychologie Berlin

c/o AVATARAS Institut

Kalckreuthstr. 16 – 10777 Berlin

virtual landline: +49 30 26323366

email: info@praxis-psychologie-berlin.de

Monday to Friday: 11:00 AM to 7:00 PM


Dr. Stemper

©2025 Dr. Dirk Stemper

Monday, 9/22/2025

