
Alphabet (GOOGL) Ups Generative AI Game With Gemini Integration


Alphabet’s (GOOGL - Free Report) Google continues to power its various products with its most advanced and powerful large language model (“LLM”), Gemini. This, in turn, is intensifying the generative AI battle with peers like Microsoft (MSFT - Free Report) and Amazon (AMZN - Free Report).

The latest upgrade of Android Studio’s coding bot to Gemini Pro is a testament to this push. The advancement enables developers to ask coding-related questions directly inside the IDE.

Users will be able to add generative AI-powered features to their apps seamlessly by accessing the Gemini API starter template through Android Studio.
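For a sense of how simple the underlying call is, here is a minimal sketch of a Gemini API request using the Python google-generativeai package (the Android starter template itself is Kotlin-based); the “gemini-pro” model name and the prompt are illustrative assumptions, not values confirmed by the article.

# Minimal sketch of a Gemini API call, assuming the google-generativeai
# package and an API key from Google AI Studio; "gemini-pro" is an
# assumed model name used only for illustration.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-pro")

# Ask a coding-related question, the kind of query the Android Studio
# integration is designed to handle.
response = model.generate_content(
    "How do I read a JSON file from app assets in Kotlin?"
)
print(response.text)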

Google aims to deliver enhanced answer quality across code completion, debugging, finding relevant resources and writing documentation.

Notably, Google is rolling out Gemini in Android Studio to more than 180 countries, starting with the Android Studio Jellyfish version.

In addition to Android Studio, Google made Gemini 1.5 Pro available in a public preview on Vertex AI.

Gemini 1.5 Pro can process context windows ranging from 128,000 tokens up to 1 million tokens. That is four times the amount of data Anthropic’s flagship model, Claude 3, can process and eight times the maximum input that OpenAI’s GPT-4 Turbo can take.
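To illustrate how that long context is typically used, below is a minimal sketch of calling Gemini 1.5 Pro through the Vertex AI Python SDK; the project ID, region and preview model ID are placeholder assumptions rather than confirmed values.

# Minimal sketch of a Gemini 1.5 Pro call on Vertex AI, assuming the
# google-cloud-aiplatform SDK; project, location and the exact preview
# model ID are placeholder assumptions.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-gcp-project", location="us-central1")
model = GenerativeModel("gemini-1.5-pro-preview-0409")  # assumed model ID

# The large context window allows an entire document to be passed in one call.
with open("long_report.txt", encoding="utf-8") as f:
    document = f.read()

response = model.generate_content(
    [document, "Summarize the key findings in five bullet points."]
)
print(response.text)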

The above-mentioned endeavors are expected to drive Google’s momentum among various developers and enterprises.

Alphabet Inc. Price and Consensus chart | Alphabet Inc. Quote

Growing Generative AI Efforts

Alphabet remains well-poised to capitalize on the growth opportunities in the booming generative AI market on the back of the above-mentioned endeavors. Currently, Alphabet carries a Zacks Rank #3 (Hold). You can see the complete list of today’s Zacks #1 Rank (Strong Buy) stocks here.

A Fortune Business Insights report shows that the global generative AI market is expected to reach $667.96 billion by 2030, registering a CAGR of 47.5% between 2023 and 2030.
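As a rough consistency check on that forecast, the implied 2023 base can be backed out from the standard CAGR formula using only the figures cited above.

# Back out the implied 2023 market size from the cited 2030 forecast and CAGR.
target_2030 = 667.96   # $ billion, per the Fortune Business Insights report
cagr = 0.475           # 47.5% compound annual growth rate
years = 2030 - 2023

implied_2023 = target_2030 / (1 + cagr) ** years
print(f"Implied 2023 market size: ${implied_2023:.1f}B")  # roughly $44B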

Apart from integrating Gemini Pro into various products, Google has also introduced several open-source tools to support generative AI projects and infrastructure.

The new tools include MaxDiffusion, a collection of reference implementations of various diffusion models; JetStream, a new engine for running generative AI models; and MaxText, a collection of text-generating AI models targeting tensor processing units (TPUs) and NVIDIA’s (NVDA - Free Report) GPUs in the cloud.

MaxText now offers Gemma 7B, OpenAI’s GPT-3, Llama 2 and models from AI startup Mistral. Further, Google collaborated with AI startup Hugging Face to create Optimum TPU.
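The MaxText and JetStream internals are beyond the scope of this note, but the Gemma 7B weights mentioned above are commonly loaded through the Hugging Face transformers library; the sketch below assumes the gated "google/gemma-7b" checkpoint is already accessible and that the accelerate package is installed.

# Minimal sketch of loading Gemma 7B via Hugging Face transformers; the
# "google/gemma-7b" model ID and local GPU availability are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer(
    "Explain what a tensor processing unit is.", return_tensors="pt"
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))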

Additionally, the company launched an enterprise-focused AI code completion and assistance tool called Gemini Code Assist.

Competitive Scenario

Given the upbeat scenario in the generative AI space, not only Google but also Microsoft and Amazon are flexing their muscles to bolster their generative AI capabilities.

Microsoft continues to make strong efforts to boost its generative AI capabilities. Its integration of GPT-4 into its search engine Bing and browser Edge to deliver a ChatGPT-like experience to users remains noteworthy.

Microsoft Azure offers the Azure OpenAI Service, which enables the seamless application of LLMs and generative AI techniques across various use cases. Microsoft recently launched the latest copilot templates on Azure OpenAI Service, which allow retailers to build personalized shopping experiences and support store operations.
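For illustration, a GPT-4 deployment on the Azure OpenAI Service is typically called as sketched below with the openai Python SDK; the endpoint, key, API version and deployment name are placeholders, not details from the article.

# Minimal sketch of a chat completion against an Azure OpenAI deployment;
# endpoint, key, API version and the "gpt-4" deployment name are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",
    api_key="YOUR_AZURE_OPENAI_KEY",
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="gpt-4",  # name of the Azure deployment, not the raw model ID
    messages=[
        {"role": "user", "content": "Write a short product description for a running shoe."}
    ],
)
print(response.choices[0].message.content)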

In addition, Microsoft and OpenAI are reportedly collaborating on an ambitious data center project to address the pressing need for advanced infrastructure capable of handling complex generative AI data and workloads.

Meanwhile, Amazon’s AWS is riding on Amazon Bedrock, which has given it a breakthrough in the generative AI space. Amazon Bedrock offers seamless access to high-performing foundation models from leading AI companies through an API.
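That API access pattern is sketched below with the boto3 SDK; the region and the Claude 3 model ID are assumptions included only to show the shape of the call.

# Minimal sketch of invoking a foundation model through Amazon Bedrock
# with boto3; region and model ID are placeholder assumptions.
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Write a one-line product tagline."}],
})

response = client.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # assumed model ID
    body=body,
)
result = json.loads(response["body"].read())
print(result["content"][0]["text"])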

Its latest partnership with NVIDIA to power its portfolio with generative AI technology is another positive. Per the terms, the NVIDIA Blackwell GPU platform will be available on AWS, which will help speed up inference workloads for resource-intensive, multi-trillion-parameter language models.
