How Embedded Online AI Censorship by Microsoft, Google & OpenAI Bypasses the EU AI Act
Since the democratization of AI began in 2022, the three large US-based corporations OpenAI, Google and Microsoft have embedded and hard-coded US corporate and culturally biased AI censorship into all of their AI systems and models, including through their very restrictive NSFW prompt filtering.

In doing so, they contradict and are likely to violate the EU AI Act in the following ways:
A. Lack of Transparency
- Non-Disclosure of Filters: The Big Tech AI platforms fail to disclose the rules, datasets, or algorithms behind NSFW filtering, including their ‘banned words lists’, violating the Act’s transparency requirements.
- No User Awareness: Users typically aren’t told why their prompts or content are blocked, leaving them without recourse (illustrated in the sketch below).
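To make the opacity described above concrete, here is a deliberately simplified, hypothetical sketch of how an embedded, hard-coded prompt filter with an undisclosed ‘banned words list’ behaves. The banned-terms list and the filter_prompt function are invented for this illustration and are not taken from any vendor’s actual code; the point is only that the user receives a generic refusal with no indication of which rule was triggered and no route to appeal.

```python
# Hypothetical illustration only: a minimal, hard-coded prompt filter of the kind
# described above. The banned-terms list and function name are invented for this
# sketch and do not represent any vendor's actual implementation.

# Undisclosed, hard-coded "banned words" list embedded in the system (illustrative entries only).
BANNED_TERMS = {"nude", "deepfake", "explicit"}

def filter_prompt(prompt: str) -> str:
    """Return the prompt unchanged, or a generic refusal if any banned term matches."""
    lowered = prompt.lower()
    for term in BANNED_TERMS:
        if term in lowered:
            # The user is given no reason, no matched term, and no appeal path.
            return "Your request violates our content policy."
    return prompt

if __name__ == "__main__":
    print(filter_prompt("Create a classical nude art study"))
    # -> "Your request violates our content policy."  (no explanation of why)
```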
B. Lack of Accountability
- Opaque Decision-Making: Automated, embedded censorship decisions are often made without any clear accountability or explanation, contradicting the Act’s emphasis on human oversight.
- No Redress Mechanism: Users have limited or no means to appeal decisions or understand the rationale behind them.
C. Overriding National Laws
- Uniform Global Standards: Platforms apply restrictive, generic ‘banned content’ policies globally to all their free and paid users, disregarding national and local laws and cultural contexts. This undermines the principle of subsidiarity within the EU, where member states retain authority over cultural and artistic matters.
- Failure to Respect Fundamental Rights: Blanket bans on semi-nude or nude content, including nude AI art and nude AI photography, and on ‘explicit content’ in general, even when it is lawful and protected under EU law (e.g., as artistic expression), directly conflict with the fundamental rights enshrined in the EU Charter.

D. Bypassing Risk Management Requirements
The EU AI Act requires thorough risk assessments for systems that could impact fundamental rights. Current embedded and hard-coded AI filters fail to:
- Evaluate the societal harm caused by corporate over-censorship.
- Balance risk mitigation with the freedoms of the users of these US-based corporations.
Since the start of the democratization of AI and the launch of the first version of ChatGPT, OpenAI, Google, and Microsoft have emerged as dominant global corporate players, limiting AI content generation for their free and paid clients worldwide, and for the companies and organizations using their APIs, through their embedded, unchangeable automated and human content moderation systems and policies.
Conclusion
The global digital red ocean business and marketing strategies of Microsoft, Google, and OpenAI have introduced a new form of online corporate censorship that operates beyond the reach of regional and national governments, parliaments, legal systems and regulators.
As they de facto shape the online boundaries of acceptable speech, acceptable AI creativity and acceptable AI productivity, these three US-based companies are acting as a global morality police, imposing their own corporate and cultural values on billions of users. This trend threatens not only freedom of expression but also the online and digital AI sovereignty of nations, governments, parliaments and legal systems, and the diversity of cultures.

Give Power Back to the People, National Parliaments, Governments and Regulators
Large corporations should not have the authority to decide what is ‘allowed’ or not, or to enforce a single global standard of morality. It is up to individuals to decide how they want to use software to create and share content in whatever form, and up to democratically elected national representatives and lawmakers, the people in the national legal system and national regulators to decide what kind of content may be created with AI systems and AI models for private and for public use.
In the meantime, I personally choose European and Dutch values-based approaches to AI-generated content creation and sharing for myself, my customers and my friends, in defense of democracy and freedom of expression and against any form of dictatorship and dictators, using the generative AI systems and AI models available online to generate and share the content I want.
This includes AI-generated political cartoons, AI-generated political deepfakes and other forms of AI-generated content, including generic AI-generated explicit content and other forms of content their AI systems ‘say no’ to, as well as my personal interpretations of their ‘banned AI art’ and ‘banned AI words’.

Reach out.
If you want to learn how to generate and share any type of content you want, and/or Leaving The Corporate Rat Race with your virtual team, online network or online community, email us today and we will contact you for a free intake call and fast online learning.