Troubling trend of woke AI is a big threat to free speech

The troubling trend of woke AI is a big threat to free speech. Big Tech’s AI is biased both by its employees and by the datasets it uses. The result endangers the future of American ingenuity.

Sep 14, 2023 - 12:06

Have you ever seen the YouTube video of the young boy at Christmas unwrapping a Nintendo 64 and completely freaking out with excitement? "Nintendo Sixty-FOOOOOOOOOOOUR!" Utter bliss. And that kid was me! My peak experiences as a kid always coincided with groundbreaking technology launches.

The big N64 moment for me as an adult came when I first saw OpenAI’s ChatGPT perform, with its dazzling human-like responses. I knew instantly that we had something special and radically different on our hands.

But my elation quickly turned to concern as I realized how easily artificial intelligence (AI) can be used for nefarious purposes. Who controls the AI, and what is their agenda?

As someone who has spent 1,000+ hours in 2023 building with ChatGPT, I would like to offer my perspective on how I look behind the curtain to the puppeteer while the rest of society stares at the puppet.

Big technology firms like Apple, Facebook (Meta), Google (Alphabet) and Microsoft are poised to dominate this new market. We are watching a data monopoly congeal in front of our eyes, and we need to pay careful attention to the decisions these firms are making.

An ugly truth about large language models, or LLMs — the technology behind ChatGPT — is that they are susceptible to manipulation. But first, let’s break down the secret sauce that is ChatGPT.

To prepare this AI for the main course, we need a few key ingredients to cook with — the algorithm itself and a mechanism for collecting feedback to improve the model, known as Reinforcement Learning from Human Feedback, or RLHF.

Let’s say you ask ChatGPT for some help: "How do I write an article for Fox News on AI?" In a laboratory setting, the computer generates an initial response to the question, and if the answer is written poorly, a human rater flags it as poor.

Then the AI tries again and seeks the rater’s approval for the revised version. In the data-science biz we call this a "reward function," which is used to optimize the model’s policy.
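The feedback loop described above can be sketched in a few lines of Python. This is a toy illustration under loudly hypothetical assumptions — the rater functions, scoring rules and names below are my own stand-ins, not OpenAI’s actual implementation — but it shows how a rater’s subjective preferences become the reward signal that steers the model.

```python
# Toy sketch of the RLHF feedback loop (hypothetical scoring, NOT
# OpenAI's implementation): raters score candidate responses, and the
# "policy" step keeps the response the reward function ranks highest.

def rate(response: str) -> float:
    """Stand-in for a human rater: rewards on-topic, substantial drafts."""
    score = 0.0
    if "Fox News" in response:
        score += 1.0
    if "AI" in response:
        score += 1.0
    if len(response.split()) >= 8:
        score += 0.5
    return score

def biased_rate(response: str) -> float:
    """The same rater with a subjective bias baked in: it downgrades any
    draft mentioning Fox News -- the scenario this article worries about."""
    score = rate(response)
    if "Fox News" in response:
        score -= 5.0
    return score

def pick_best(candidates: list[str], rater) -> str:
    """The 'reward function' in action: the rater's scores pick the winner."""
    return max(candidates, key=rater)

drafts = [
    "Here is an article.",
    "Here is a draft Fox News article about AI, covering bias and data.",
]

print(pick_best(drafts, rate))         # neutral rater prefers the on-topic draft
print(pick_best(drafts, biased_rate))  # biased rater flips the outcome
```

Swap in a different rater and the "best" answer changes — which is exactly why who rates the feedback matters.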

This raises some important questions: How does one become a feedback rater? What if a rater has a woke ideology, and their subjective bias is used to downgrade responses that the opposing perspective would consider valid?

I’m not the only one who worries about AI manipulation. Sam Altman, CEO of OpenAI, the company behind ChatGPT, had this to say: "There will be no one version of GPT that the world ever agrees is unbiased… The bias I am most nervous about is the bias of the human feedback raters." Me too, Sam. Me too! 

Sam further reflects on selecting the right people for RLHF: "This is the part that we understand the least about. … We are now trying to figure out how we are going to select those people. How we will like verify that we get a representative sample ... but we don’t have this functionality built out yet." If the CEO of ChatGPT doesn’t know how to solve the human bias issue, we have a real problem, Houston. 

The larger models need careful tuning of their policies, and that is just a fact of life. I saw a new AI white paper, published on September 1, that piqued my interest. This groundbreaking research can be accessed through Cornell University’s arXivLabs and was drafted by the Google Research team.

The paper introduces a new idea called Reinforcement Learning from AI Feedback (RLAIF): it’s not just humans we’re dealing with anymore, because an AI now makes these policy determinations for us. That raises a fundamental question about the ethics of this use of AI. Only time will tell, and it’s a challenge I’m watching closely.
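To make that shift concrete, here is a minimal sketch in the spirit of the RLAIF idea — again hypothetical, with the "AI judge" reduced to a trivial keyword scorer standing in for a learned preference model. The point is structural: the feedback signal now comes from another model, with no human in the loop.

```python
# Minimal RLAIF-style sketch (hypothetical): the rater is itself an AI,
# so whatever preferences it encodes propagate into the trained model
# without any human review.

def ai_judge(response: str) -> float:
    """Stand-in for an AI preference model scoring a candidate response."""
    preferred_terms = ("helpful", "accurate", "balanced")
    return float(sum(term in response.lower() for term in preferred_terms))

def rlaif_pick(candidates: list[str]) -> str:
    """Select whichever candidate the AI judge scores highest."""
    return max(candidates, key=ai_judge)

drafts = [
    "A one-sided rant about the topic.",
    "A helpful, accurate and balanced overview of the topic.",
]
print(rlaif_pick(drafts))
```

Whatever values are baked into the judging model get amplified at scale — that is the ethical question the paper leaves open.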

AI bias is starting to smolder, and where there’s smoke, there’s usually fire. The companies making LLMs have a few other tricks up their sleeves to perpetuate their agendas — for example, the dataset that feeds the algorithm.

What data is being used to train the algorithm? Who curates this dataset? And… is it possible that the dataset powering the AI’s knowledge base is somehow massaged to emphasize a particular agenda of the parent company?

This wouldn’t concern me if it weren’t for the fact that Big Tech controls nearly all the data, which means these firms have both a competitive advantage in LLM performance and a liberal-leaning political agenda. If this isn’t a recipe for disaster, I don’t know what is.

We need to look at options to mitigate this advantage and level the playing field, up to and including breaking up these tech companies under monopoly regulation. My position is simple: the corporations making the AI algorithms that are vital for our future should not be the ones designing and implementing the RLHF mechanisms.

However, it’s important to have a balanced perspective on the issue. The tech firms have the resources to pay for the expensive computing costs and sit on treasure troves of data. More data means a smarter AI. So, why shouldn’t they be free to use their data assets as they see fit?

From my perspective, humans have a responsibility to ensure that we are not inadvertently or maliciously crafting AI biases. What AI can achieve for humanity is indeed stupendous, and it represents the future of American ingenuity and prosperity.

This is why I am asking you to reflect on these issues and form your own opinions. Then use your resources and influence to ensure that we are not leaving the fate of our society in the hands of Big Tech as it produces the next generation of woke AIs.
