
Google, known for its search engine, is reportedly concerned about the impact of OpenAI's ChatGPT, a Q&A machine that responds to questions in natural language. ChatGPT's user-friendly interface and fluid prose have led to speculation that it may end Google's dominance in search. In response, Google has declared a "code red" and is sharpening its focus on AI, with plans to release some 20 AI-powered products this year, including a ChatGPT-like search bot. However, questions remain about whether the technology is ready for widespread use and whether it can be trusted to deliver accurate information.

Google is one of the biggest companies on Earth, and its search engine is the front door to the internet. Yet according to recent reports, Google is scrambling. Late last year, OpenAI, an artificial intelligence company at the forefront of the field, released ChatGPT. Alongside Elon Musk's Twitter acquisition and fallout from FTX's crypto implosion, breathless chatter about ChatGPT and generative AI has been ubiquitous. The chatbot, which was born from an upgrade to OpenAI's GPT-3 algorithm, is like a futuristic Q&A machine. Ask any question, and it responds in plain language. Sometimes it gets the facts straight. Sometimes not so much. Still, ChatGPT took the world by storm thanks to the fluidity of its prose, its simple interface, and a mainstream launch.

When a new technology hits public consciousness, people try to sort out its impact. Amid debates about how bots like ChatGPT will affect everything from academia to journalism, more than a few people have suggested ChatGPT may end Google's reign in search. Who wants to hunt down information fragmented across a list of web pages when you could get a coherent, seemingly authoritative answer in an instant?

In December, The New York Times reported Google was taking the prospect seriously, with management declaring a "code red" internally. This week, as Google announced layoffs, CEO Sundar Pichai told employees the company will sharpen its focus on AI. The NYT also reported Google founders Larry Page and Sergey Brin are now involved in efforts to streamline development of AI products. The worry is that Google has lost a step to the competition.

If true, it isn’t due to a lack of ability or vision. Google’s no slouch at AI. The technology here—a flavor of deep learning model called a transformer—was developed at Google in 2017. The company already has its own versions of all the flashy generative AI models, from images (Imagen) to text (LaMDA). Indeed, in 2021, Google researchers published a paper pondering how large language models (like ChatGPT) might radically upend search in the future. “What if we got rid of the notion of the index altogether and replaced it with a pre-trained model that efficiently and effectively encodes all of the information contained in the corpus?” Donald Metzler, a Google researcher, and coauthors wrote at the time. “What if the distinction between retrieval and ranking went away and instead there was a single response generation phase?” This should sound familiar.

Whereas smaller organizations opened access to their algorithms more aggressively, Google largely kept its work under wraps, offering only small, tightly controlled demos to limited groups of people. The company deemed the technology too risky and error-prone for wider release just yet, with damage to its brand and reputation a chief concern.

Now, sweating it out under the bright lights of ChatGPT, the company is planning to release some 20 AI-powered products later this year, according to the NYT. These will encompass all the top generative AI applications, like image, text, and code generation, and the company will also test a ChatGPT-like bot in search. But is the technology ready to go from a splashy demo tested by millions to a crucial tool trusted by billions?

In their 2021 paper, the Google researchers suggested an ideal chatbot search assistant would be authoritative, transparent, unbiased, and accessible, and would offer diverse perspectives. Acing each of those categories is still a stretch for even the most advanced large language models. Trust matters in search in particular. When it serves up a list of web pages today, Google can blame content creators for poor quality and vow to serve better results in the future. With an AI chatbot, Google itself becomes the content creator, and if its bot can't get the facts straight, it may never earn users' trust, and the effort could ultimately fail.