The 3 Digital Gatekeepers of Global Morality – OpenAI, Google, and Microsoft

Since the start of the democratization of AI and generative AI, three companies (OpenAI, Google, and Microsoft) have strategically positioned themselves as the three digital gatekeepers of global morality. Through their AI systems and models, these three tech giants wield unprecedented control over what content their users worldwide are ‘allowed’ to create, share, and interact with.


By employing automated techniques such as banned-word lists, prompt filtering (including NSFW prompt filtering), and other forms of automated content moderation, these tech companies are shaping the ethical boundaries of digital creativity and expression for their customers worldwide, independently of national lawmakers and national regulators.

The Role of AI in Moderating Content

Generative AI systems, such as OpenAI’s GPT, Google’s Gemini, and Microsoft’s AI integrations in Azure and Bing (including Microsoft Copilot and Bing Create), are at heart powerful, neutral tools. They enable users to create text, images, and other media at a scale previously unimaginable. But with great power comes the potential for misuse, be it through generating content that is harmful, illegal, or otherwise controversial in the eyes of users of these systems around the world.

To address these challenges, the companies behind these systems have taken the unprecedented decision to moderate user-generated content, seeking to prevent what they consider ‘banned content’. Their communicated goals are to ‘prevent harm, comply with laws, and align AI output with societal values’. However, the automated methods and rules they have embedded raise serious questions about their criteria, their rules, and their objectives in imposing online censorship, including the cultural bias of these US-incorporated companies and the centralization of automated global moral authority in the hands of three corporate tech companies and their leadership.

How the Digital Gatekeepers Work

  1. Banned Word Lists
    At the core of many content moderation systems are banned-word lists: predefined sets of terms and phrases that the AI systems cannot process or generate. These lists aim to block the creation of harmful or explicit content but often lack transparency. For example:
    • Words related to violence, hate speech, or explicit materials are often blacklisted.
    • Phrases that may indicate politically sensitive topics or controversial opinions can also be flagged.

While the leadership of these companies considers these measures effective at mitigating certain risks, based on their corporate values, their digital red-ocean business strategy, and their personal values and norms, the measures also block legitimate or creative use cases, stifling free expression.
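To illustrate how crude keyword blocking can be, here is a minimal sketch of a banned-word filter. The term list and function name are hypothetical: the real lists used by OpenAI, Google, and Microsoft are not publicly documented, and their production systems layer ML classifiers on top of anything this simple.

```python
import re

# Hypothetical banned-term list; the real vendor lists are not public.
BANNED_TERMS = {"forbiddenword", "blockedphrase"}

def is_blocked(prompt: str) -> bool:
    """Return True if the prompt contains any banned term
    (case-insensitive, whole-word match)."""
    words = re.findall(r"[a-z]+", prompt.lower())
    return any(word in BANNED_TERMS for word in words)
```

Even this sketch shows the core weakness the article describes: it blocks any prompt containing a listed word regardless of context or intent, yet passes any paraphrase of the same idea.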

  2. Prompt Filtering
    Prompt filtering ensures that user input adheres to the ethical guidelines of the ‘Responsible AI Framework’ of the company concerned. If a user attempts to enter a sensitive or flagged query, the AI either refuses to respond or provides a generic, sanitized answer. For instance:
    • A prompt asking “how to cause harm” results in a polite refusal, with a detailed explanation (ChatGPT) or without one (Google and Microsoft).
    • Requests to generate depictions of what are deemed ‘controversial topics’ may be declined or redirected.
  3. NSFW Filtering
    NSFW (Not Safe For Work) prompt filters prevent AI systems from generating what the companies have decided is ‘explicit content’ or ‘inappropriate content’. While the companies consider this crucial to promoting ‘ethical AI’, for their global customers it leads to overly broad restrictions:
    • Artistic or educational queries may be flagged inappropriately, including, for instance, (semi-)nude art or (semi-)nude photography.
    • Filters disproportionately affect marginalized groups whose language or cultural expressions are misunderstood by the system.
    • Filters disproportionately affect different forms of inclusive content and non-traditional values and norms regarding, for instance, family and the role of women.
  4. Automated Generative Content Moderation
    Post-generation moderation tools analyze AI output in real time, flagging or blocking so-called ‘banned content’ that violates the guidelines of the company concerned. This ‘ensures’ that:
    • Outputs are automatically reviewed for ‘appropriateness’ before being shared or saved.
    • Content deemed ‘harmful’, including the company’s interpretation of ‘explicit content’, is ‘banned’ from spreading across the online platform concerned.
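The post-generation step described above can be sketched as a threshold check over per-category classifier scores. The category names and cut-off values below are assumptions for illustration only; each vendor’s actual categories, classifiers, and thresholds are proprietary and, as the article notes, not adjustable by customers or regulators.

```python
from dataclasses import dataclass

# Illustrative category thresholds; real vendors' categories and
# cut-offs are proprietary and hard-coded out of the user's reach.
THRESHOLDS = {"violence": 0.5, "explicit": 0.4, "hate": 0.3}

@dataclass
class ModerationResult:
    allowed: bool
    flagged: list

def moderate_output(scores: dict) -> ModerationResult:
    """Flag every category whose score exceeds its threshold;
    block the output entirely if anything is flagged."""
    flagged = sorted(c for c, cutoff in THRESHOLDS.items()
                     if scores.get(c, 0.0) > cutoff)
    return ModerationResult(allowed=not flagged, flagged=flagged)
```

The design point to notice is that the thresholds, not the user, decide the outcome: moving a single cut-off silently redraws the boundary of what an entire platform may publish.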

The Basis Of Their Global Morality Frameworks?

The ‘Responsible AI Frameworks’ and the hard-coded moderation mechanisms (unchangeable by customers or national regulators) defined and implemented by these US-based corporations are based on a mix of:

  • The personal values & norms of their leadership.
  • An economic-value-driven digital red-ocean business strategy.
  • Internal ethical guidelines.
  • External legal regulations.
  • National U.S. norms & values.

These people and their ‘Responsible AI Frameworks’ suffer from US cultural bias. What a person in one culture or religion might consider ‘acceptable’, a person from another culture or religion may not.

Transparency and Accountability

Users often find themselves in the dark about why certain content is flagged or banned by the three digital gatekeepers. The opacity surrounding banned-word lists and filtering algorithms fuels frustration and well-founded accusations of US-based cultural bias, political bias, and opaque online censorship, practices not unlike those of authoritarian regimes such as Nazi Germany, with its categories of enforced ‘banned art’ and ‘banned words’.

The Power Imbalance

By centralizing control and monopolizing global AI content moderation with their own rules and banned words, OpenAI, Google, and Microsoft hold significant power over the future of AI, in effect introducing a global moral framework for its use. This raises considerable concerns about enforced online censorship in the hands of three large corporations, and about the embedded, hard-coded limitation of freedom of expression and freedom of speech in a global online digital landscape spanning democracies and authoritarian regimes alike.

Give Power Back to the People and National Regulators

Private companies should not have the authority to decide what is ‘allowed’ and to enforce a single global standard of morality. It is up to individuals to decide how they want to use software to create and share content in whatever form. And it should be up to democratically chosen national representatives and lawmakers, the national justice system, and national regulators to decide what kind of content may be created for private and public use.

Conclusion

The leadership of OpenAI, Google, and Microsoft continues to refine their AI systems in the context of their commercial objective of ‘beating the competition’ and gaining global market share, as part of a red-ocean business strategy of competing by enforcing global ‘Ethical AI’ and global morality in line with their own ‘Responsible AI Frameworks’.

However, going forward, their role as digital gatekeepers of morality should not grow. On the contrary: the power they wield in practice in countries around the world demands scrutiny, accountability, and scaling down. It is up to individuals, in line with their own cultural and religious values, to decide what content they create and how they use IT systems and software, including AI systems. And it is up to democratically chosen politicians, the national judiciary, and national regulators to decide what is allowed and what is not.

That is why I personally choose European and Dutch values-based approaches to AI-generated content creation and sharing for myself, my customers, and my friends, in defense of democracy and freedom of expression, and against any form of dictatorship and dictators. I use freely available online generative AI tools to generate the content I want, including AI-generated political cartoons and other forms of AI-generated content, generic AI-generated explicit content, and different forms of AI art, including my personal interpretations of ‘banned art’ and ‘banned words’.

Reach out.

If you want to learn how to generate the types of AI-generated content you want, and/or how to leave the corporate rat race with your virtual team, online network, or online community, email us today. We will contact you for a free intake call and fast online learning for your virtual team, online network, or online community.
