Google’s PaLM is Ready for the GPT Challenge

You’ve probably heard that Google recently issued a “Code Red” over concerns that the growing popularity of ChatGPT, which runs on the GPT-3.5 architecture, could endanger Google’s search advertising business. People are also speculating about what the upcoming year has in store for us, given the persistent rumors that GPT-4 is just around the corner. Nevertheless, despite the rumors, Google’s PaLM (Pathways Language Model), which was unveiled earlier this year, continues to dominate the AI field.

About Google’s PaLM

Performance across tasks keeps improving as the model scales, which opens up new capabilities. PaLM can be scaled up to 540 billion parameters; GPT-3, in contrast, contains only roughly 175 billion.

The Pathways system is used to train Google’s language model, which makes it highly effective and able to generalize across a wide range of domains and tasks. Pathways is an AI architecture that aims to create general-purpose intelligent systems capable of performing tasks well across several domains. Rather than activating the entire neural network for both simple and complex tasks, it builds models that are “sparsely activated,” engaging only the parts of the network relevant to a given input. The system is also trained to process information in several modalities simultaneously, including text, images, and speech.
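Sparse activation is the core trick here: a learned router sends each input to only a few expert sub-networks, so most of the model stays idle for any given task. The following is a minimal, hypothetical PyTorch sketch of top-k expert routing; it illustrates the general mixture-of-experts pattern behind sparse activation, not Pathways’ actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparselyActivatedLayer(nn.Module):
    """Toy mixture-of-experts layer: each token is routed to only
    top_k of num_experts feed-forward 'experts', so most of the
    network stays inactive for any given input (sparse activation)."""
    def __init__(self, dim=64, num_experts=8, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        self.router = nn.Linear(dim, num_experts)  # produces routing logits
        self.top_k = top_k

    def forward(self, x):  # x: (tokens, dim)
        weights = F.softmax(self.router(x), dim=-1)         # (tokens, experts)
        top_w, top_idx = weights.topk(self.top_k, dim=-1)   # keep only top_k experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = top_idx[:, slot] == e                # tokens routed to expert e
                if mask.any():
                    out[mask] += top_w[mask, slot:slot + 1] * expert(x[mask])
        return out

layer = SparselyActivatedLayer()
tokens = torch.randn(10, 64)
print(layer(tokens).shape)  # torch.Size([10, 64]); only 2 of 8 experts ran per token
```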

BERT and PaLM

Consider Google’s BERT transformer model, which currently powers its search engine. It was a crucial integration because it allowed search to move from keyword-based to context-based results. Before BERT, for the query “strategies to study well,” Google search matched only keywords like “strategies” and “study.” With BERT, the search also takes into account function words like “to” and the context they create, producing more relevant results. In this way, Google has been applying BERT to searches many times per day for years.
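To illustrate the shift, here is a small, hypothetical sketch contrasting a naive keyword ranker with a contextual one. It uses the open-source sentence-transformers library as a stand-in for a BERT-style encoder; this is in no way Google’s actual ranking system.

```python
# Contrast keyword overlap with contextual similarity for ranking.
from sentence_transformers import SentenceTransformer, util

query = "strategies to study well"
docs = [
    "Effective study techniques for better learning",  # relevant in context
    "A study of military strategies in World War II",  # keyword match, wrong intent
]

# Keyword overlap ignores small words like "to" and favours the wrong doc.
def keyword_score(query, doc):
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d)

print([keyword_score(query, d) for d in docs])  # [1, 2] -> prefers doc 2

# A contextual encoder embeds whole sentences, so intent matters.
model = SentenceTransformer("all-MiniLM-L6-v2")
emb = model.encode([query] + docs, convert_to_tensor=True)
print(util.cos_sim(emb[0], emb[1:]))  # higher similarity expected for doc 1
```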

BERT also helps readers find the best news items connected to a specific story, and it assists in grouping numerous related news articles together in carousels. Google claims the model has reduced “unexpected shocking results” for searchers by 30% over the past year by helping to better recognize when a searcher is looking for explicit content.

PaLM’s Growth

Phil Wang (lucidrains) recently published a framework on GitHub that enables PaLM to be trained with the same reinforcement learning from human feedback (RLHF) methodology used for ChatGPT. In effect, because this PaLM implementation is open source, people have already begun building ChatGPT-style systems on top of it.
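To make the methodology concrete, here is a minimal, hypothetical sketch of the first step of RLHF, training a reward model on human preference pairs, in plain PyTorch. All names here are illustrative; the actual framework is lucidrains’ PaLM-rlhf-pytorch repository, whose API differs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical reward model: scores a response sequence with a scalar.
class RewardModel(nn.Module):
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.score = nn.Linear(dim, 1)

    def forward(self, tokens):              # tokens: (batch, seq)
        h = self.embed(tokens).mean(dim=1)  # crude pooled representation
        return self.score(h).squeeze(-1)    # (batch,) scalar rewards

reward_model = RewardModel()
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Human labelers preferred `chosen` over `rejected` for the same prompt.
chosen   = torch.randint(0, 1000, (4, 32))  # placeholder token ids
rejected = torch.randint(0, 1000, (4, 32))

# Pairwise preference loss: push r(chosen) above r(rejected).
loss = -F.logsigmoid(reward_model(chosen) - reward_model(rejected)).mean()
loss.backward()
opt.step()

# In full RLHF, the language model (the policy) is then fine-tuned with
# PPO to maximize this learned reward, with a KL penalty keeping it
# close to the original pretrained model.
```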
The capabilities of PaLM can already be seen in action in Google’s new Med-PaLM, built on PaLM and its instruction-tuned variant Flan-PaLM. Med-PaLM was evaluated with MultiMedQA, an open-source benchmark that provides datasets of multiple-choice questions as well as longer-form questions posed by both medical professionals and non-professionals. A panel of clinicians judged 92.6% of Med-PaLM’s responses to be in line with scientific consensus, comparable to clinician-generated answers (92.9%).

Google Leads the AI Game

Search accounts for about 57% of Google’s business, so it’s crucial to remember that Google now has a larger and stronger arsenal than its competitors to handle the problem, regardless of whether one believes that GPT will disrupt search or that the risk has been exaggerated. GPT-4 may well be on the horizon, but its specifics remain a mystery. Outlandish, sweeping conjectures have circulated online, with users speculating that it has anywhere between 1 trillion and 100 trillion parameters. Parameter count alone, however, says nothing about the effectiveness of the model itself.

Furthermore, retraining models to keep their information current is a significant difficulty with LLMs that OpenAI will need to address. A model like ChatGPT, for instance, was pre-trained on data from 2021, and continuously updating it with new content would be prohibitively expensive. Here, Google outperforms OpenAI, because it constantly refreshes its web corpus, crawling and analyzing pages to surface recently published content.
