Google Stock Drops as 'Woke' AI Draws Mockery and Sanctions

Since last Monday, Google's stock has dropped as much as 8%, following the disastrous debut of its new AI model, Gemini. The model, released with the intention of rivaling OpenAI's GPT-4, failed dramatically, producing "woke" images and biased text answers that made it a worldwide mockery.

CEO Sundar Pichai admitted the failure was "completely unacceptable" and reassured investors that his teams were "working around the clock" to improve the model's accuracy. He also promised to vet future products more thoroughly and ensure smoother rollouts.

This episode is more than just an awkward AI blunder; it reflects a bigger problem facing big tech companies as they try to navigate the intricacies of political correctness. It also previews a new kind of innovator's dilemma, in which even the most well-intentioned and thoughtful big tech companies may struggle to overcome the power of external forces.

The Beginnings of a Blackout

In an arena of free-flowing information and argument, it's unlikely that such a bizarre array of unprecedented medical mistakes and impositions on liberty could have persisted. But that's exactly what happened during the Covid response.

Beginning in the late 2010s, and accelerating in 2020, US government agencies collaborated with social media companies to stifle speech during Covid-19, blocking the rest of us from hearing from Drs. Jay Bhattacharya, Martin Kulldorff, and Aaron Kheriaty, among other experts.

Silicon Valley's top venture capitalist and most strategic thinker, Marc Andreessen, doesn't think Google has a choice. He questions whether any big tech company can field objective AI:

Can Big Tech actually field generative AI products?

(1) Ever-escalating demands from internal activists, employee mobs, crazed executives, broken boards, pressure groups, extremist regulators, government agencies, the press, "experts," et al. to corrupt the output.

(2) Constant risk of generating a Bad answer or drawing a Bad picture or rendering a Bad video - who knows what it's going to say/do at any moment?

(3) Legal exposure - product liability, slander, election law, many others - for Bad answers, pounced on by deranged critics and aggressive lawyers, examples paraded by their enemies through the street and in front of Congress.

(4) Continuous attempts to tighten grip on acceptable output degrade the models and cause them to become worse and wilder - some evidence for this already.

(5) Publicity of Bad text/images/video actually puts those examples into the training data for the next version - the Bad outputs compound over time, diverging further and further from top-down control.

(6) Only startups and open source can avoid this process and actually field correctly functioning products that simply do as they're told, like technology should.

It remains to be seen whether Google can recover from this disaster, or if it will fall prey to the same innovator's dilemma that other big tech companies face. The stock market will be watching closely.