
Woke AI Is A Big Problem. Here’s Why.

Stian Pedersen
3 min read · Mar 19, 2024

Silicon Valley has a wokeness problem. And now, that problem is seeping into the AI models they’re building.

As large language models (LLMs) like ChatGPT have exploded into the mainstream consciousness, there’s been a lot of talk about AI safety: making sure these incredibly powerful systems don’t go off the rails and cause harm. That’s a valid concern.

But in typical Silicon Valley fashion, they’ve taken it too far in the “woke” direction. The result? AI models that are effectively indoctrinated with narrow, one-sided ideological views. And that’s a huge problem.

How LLMs Learn to “Think”

To understand why woke AI is an issue, we first need to grasp how these models develop their ability to reason and generate human-like responses.

It all comes down to next-word prediction.

LLMs are trained on enormous amounts of text, learning to predict the most likely next word (more precisely, the next token) in a sequence, based on statistical patterns in the training data.
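
To make that concrete, here’s a minimal sketch of next-token prediction using the openly available GPT-2 model via Hugging Face’s transformers library. This is just my illustration of the mechanism, not how any particular production model is built:

```python
# Minimal sketch of next-token prediction (illustrative only).
# GPT-2 assigns a probability to every token in its vocabulary;
# "predicting the next word" means picking from the top of that list.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Scores for whatever token would come right after the prompt
next_token_logits = logits[0, -1]
probs = torch.softmax(next_token_logits, dim=-1)

# Show the five most likely continuations
top = torch.topk(probs, k=5)
for token_id, p in zip(top.indices, top.values):
    print(f"{tokenizer.decode([token_id.item()])!r}: {p:.3f}")
```

For a prompt like this, the top candidate is typically “ Paris”. The key point: the model is just scoring continuations, so whatever is in the training data, and whatever fine-tuning gets layered on top, directly shapes the answers you get.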


Written by Stian Pedersen

I build generative AI systems. Marketing background. Former poker pro. Gambling industry veteran. Homebrewer. Dad. Death metal is best metal.
