Broadcom (AVGO) Thrives in Custom AI Explosion

Key Takeaways

After Marvell's (MRVL) disappointing custom silicon guidance last week, there's caution on AVGO's 22X sales valuation.
But Broadcom is succeeding wildly, with AI sales up 220% last year, propelled by XPUs and Ethernet.
Will the insatiable demand for generative and agentic AI tokens push AVGO back to 40% topline growth?
Broadcom (AVGO) reports its July quarter (Q3 FY'25) on Thursday afternoon, and investors are eager to hear about the growth story after smaller competitor Marvell (MRVL) last week offered disappointing sales guidance in the custom AI silicon market.
Broadcom is a premier designer, developer and global supplier of a broad range of semiconductor, networking, enterprise software and security solutions. Broadcom’s category-leading product portfolio serves critical markets including cloud, data center, networking, broadband, wireless, storage, industrial and enterprise software.
The company delivered a fantastic FY'24 (ended October) with 44% annual sales growth to a record $51.6 billion, as infrastructure software revenue grew to $21.5 billion on the successful integration of VMware. Semiconductor revenue was a record $30.1 billion, driven by AI revenue of $12.2 billion. AI revenue, which grew 220% year over year, was propelled by the company's leading AI XPUs and Ethernet networking portfolio.
The Tomahawk 6 Ethernet switch (launched with 102.4 Tbps capacity) and Jericho4 Ethernet fabric router are driving revenue growth by enabling extreme-scale AI cluster networking across and between data centers.
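To put that 102.4 Tbps figure in context, a rough division shows how many high-speed ports a single switch chip of that capacity could serve. This is back-of-the-envelope math only; actual port configurations depend on the SerDes setup:

```python
# Back-of-the-envelope port math for a 102.4 Tbps switch chip.
# Real configurations vary; this just divides aggregate capacity by port speed.
switch_capacity_gbps = 102_400  # 102.4 Tbps expressed in Gbps

ports_800g = switch_capacity_gbps // 800      # 128 ports at 800G
ports_1600g = switch_capacity_gbps // 1_600   # 64 ports at 1.6T

print(ports_800g, ports_1600g)
```

Either configuration is enough to fan out an entire rack of accelerators through one chip, which is why a single switch generation can anchor so much networking revenue.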
But now shares trade at a staggering 22.3 times this year's topline estimate of $62.7 billion, and the bottom-line profit estimate for this year puts the P/E at 45X. So there's a lot riding on this week's report.
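For readers who want to check the math behind those multiples, here is the arithmetic as a quick sketch. The sales estimate and multiples come from the figures above; the implied market cap and earnings are simple derivations, not reported numbers:

```python
# Implied figures derived from the valuation multiples cited above.
sales_estimate_b = 62.7      # this year's topline estimate, $ billions
price_to_sales = 22.3        # cited price-to-sales multiple
pe_ratio = 45.0              # cited price-to-earnings multiple

implied_market_cap_b = price_to_sales * sales_estimate_b
implied_earnings_b = implied_market_cap_b / pe_ratio

print(f"Implied market cap: ${implied_market_cap_b:,.0f}B")   # ~$1,398B
print(f"Implied net income: ${implied_earnings_b:,.1f}B")     # ~$31.1B
```

In other words, the market is pricing in roughly $1.4 trillion of value against about $31 billion of expected profit, which is why the growth trajectory matters so much.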
Broadcom Growth in the AI Economy
The top five areas of Broadcom's business that investors and Wall Street analysts are closely watching for growth amid the AI buildout are AI networking; custom AI silicon/accelerators (XPUs); Ethernet/open networking; VMware-driven infrastructure software; and hyperscaler/cloud partnerships. Let's take a look at each area...
AI Networking: Broadcom’s Ethernet-based AI networking segment is the biggest single driver of AI-related growth, bolstered by surging demand for Tomahawk and Jericho switches and routers, especially among hyperscale customers scaling out large AI clusters. Networking revenue was up 170% year-over-year and represented 40% of AI revenue in Q2 2025, according to Daniel Newman and his research team at The Futurum Group.
Custom AI Silicon and Accelerators (XPUs): The company’s custom silicon solutions -- AI accelerators, XPUs, and supporting semiconductors (e.g., advanced interconnects) -- are a critical area as hyperscalers seek differentiated compute capabilities for AI training and inference workloads. Management specifically highlighted expected acceleration in XPU demand into late 2026 owing to growth in AI inference workloads.
Ethernet/Open Networking Protocols: Broadcom is seen as the key leader in Ethernet networking for AI data centers, providing standardized, high-bandwidth, low-latency connectivity adopted by hyperscalers building massive clusters. The open protocol approach is viewed as a distinct market advantage, enabling scale and vendor-agnostic deployments, according to Beth Kindig and her research team at the I/O Fund.
VMware-Driven Infrastructure Software: With the acquisition of VMware, Broadcom’s infrastructure software segment is another growth pillar. Growth in VMware Cloud Foundation (VCF) and enterprise adoption of its hybrid and multi-cloud management solutions are watchpoints for sustained, recurring software revenue supporting the broader data center AI transformation.
Hyperscaler and Cloud Customer Penetration: Large cloud players (Google, Meta, Microsoft, Amazon, Tencent, etc.) are central customers for both Broadcom’s networking/data-center silicon and infrastructure software. Wall Street is watching how quickly and deeply Broadcom can expand content and wallet share within these accounts as global AI investment accelerates.
What to Watch for in Q3 and Beyond
Broadcom delivered record revenue of $15 billion (+20% year over year) in its April-ending Q2, reported in June, on continued momentum in AI semiconductor solutions and VMware. But this was in line with consensus expectations, as was the EPS mark of $1.58 (+43.6%).
"Q2 AI revenue grew 46% year over year to over $4.4 billion, driven by robust demand for AI networking," said Hock Tan, President and CEO of Broadcom. "We expect growth in AI semiconductor revenue to accelerate to $5.1 billion in Q3, delivering ten consecutive quarters of growth, as our hyperscale partners continue to invest."
Broadcom’s fastest AI revenue growth comes primarily from its Tomahawk and Jericho Ethernet switches and routers, custom AI accelerators (XPUs) for hyperscaler customers, and related high-bandwidth networking solutions. These product lines form the backbone of next-generation AI data center networks and are seeing remarkable demand from cloud giants building large-scale AI clusters.
It makes sense that networking chips would be such a big driver of growth: as GPU and custom silicon clusters reach into the hundreds of thousands of chips, they require more connectivity to scale up the rack, scale across the data center, and scale out between data centers.
Custom accelerators require ultra-high-density, low-latency AI networking, and this is where Broadcom excels, with its expertise in networking, not just chips. So we expect to see these areas growing at solid double-digit rates again in the Q3 that ended in July, as inference demand from the big models soared, generating hundreds of billions of tokens to serve complex tasks like computational biology and chemistry, software code, legal analysis, and multi-agent workflows.
Call Me Slightly Cautious
But after Marvell's failure to gain momentum in the custom silicon market -- even with big customers like Microsoft and Amazon, which are looking for ways to build their own systems instead of buying everything from NVIDIA -- I'm concerned that Broadcom's growth may not support the current valuation.
And this is coming from one of the biggest AI bulls you'll meet, as we look at $600 billion in capex being invested this year in the five-year datacenter transformation. I could be wrong, though, so here's the "super bull" case from Dany Kitishian, writing for Klover.AI in July, on the architectural shift toward custom solutions...
Broadcom’s custom silicon approach enables hyperscalers to optimize hardware for their LLMs and specialized workloads, pushing performance and efficiency further than commodity solutions allow. As inference workloads are projected to account for up to 70% of all AI compute by 2027, the market for custom solutions will dwarf that of general-purpose GPUs.
Broadcom’s custom AI accelerators (XPUs/ASICs) are purpose-built for inference workloads and deliver superior performance-per-watt and cost efficiency compared to merchant GPUs such as NVIDIA’s.
Beth Kindig shared similar ideas in her June report on Broadcom, where she described the opportunity AVGO has vs. the pricier offerings from NVIDIA (NVDA). AI inference demand is accelerating Broadcom's custom silicon plans because hyperscalers and AI platform providers need high-performance, cost-effective chips tailor-made for serving billions of AI requests from end users -- requirements that general-purpose GPUs struggle to meet in terms of efficiency, scale, and price.
So "custom" doesn't imply "pricey." And while NVIDIA chief Jensen Huang often reiterates that custom XPU and ASIC solutions frequently get planned and designed but never make it out of the foundry as plans and designs change, there is clearly a push by big customers for more flexibility at better cost.
Token Demand = Energy Demand
In early 2024, Chris Zeoli of the Datagravity newsletter recognized that Broadcom was at the epicenter for hyperscalers looking for alternative custom architectures. In December he explained how "a strategic partnership with Google on TPUs and others on XPUs, coupled with networking, is counterbalancing NVIDIA's rise."
Broadcom’s long-term, multi-generational custom accelerator and XPU programs, particularly for Google (TPUs), Meta (MTIA), ByteDance, and potentially Apple/OpenAI, represent a high-growth future for Broadcom.
The more nodes in AI architectures, the more capacity, the more demand, and the more tokens created -- and this burns lots of energy. So custom chips are essential not only for hyperscaler differentiation in AI training and inference at scale, but also for managing energy efficiency.
What Is an AI Token?
To picture a "token," or a sequence of them, imagine a word, data point, phrase, or fact connected across billions of transistors in an AI neural network.
Each token is represented as a high-dimensional numerical vector, a data structure whose learned weights capture the token's meaning and context.
The large-scale, parallel processing of AI models, which involves trillions of calculations to process tokens, requires tens of billions of transistors working together in GPUs, XPUs, and other AI chips.
Here's how the "processing pipeline" for tokens works...
Tokenization: The input data (e.g., a text prompt) is broken down into a sequence of tokens.
Embedding: The model looks up each token's vector, or embedding, in a large table. The embedding is a long list of numbers -- a multi-dimensional vector.
Matrix multiplication: Billions or trillions of transistors on a GPU perform the parallel mathematical calculations required to process these token embeddings through the AI's neural network.
Prediction: The model iteratively predicts the next token in the sequence until it generates the final output.
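The four steps above can be sketched in a toy program. Everything here -- the vocabulary, the embedding table, and the weight matrix -- is made up for illustration; real models use vocabularies of roughly 100,000 tokens, embeddings with thousands of dimensions, and billions of learned weights:

```python
# Toy illustration of the four-step token pipeline described above.
# Vocabulary, embeddings, and weights are all invented for demonstration.

vocab = {"the": 0, "cat": 1, "sat": 2, "<eos>": 3}
inv_vocab = {i: w for w, i in vocab.items()}

# Step 1 - Tokenization: split the prompt into token ids.
def tokenize(text):
    return [vocab[w] for w in text.lower().split()]

# Step 2 - Embedding: look up each token's vector in a table.
embeddings = [
    [0.1, 0.3],   # "the"
    [0.7, 0.2],   # "cat"
    [0.4, 0.9],   # "sat"
    [0.0, 0.0],   # "<eos>"
]

def embed(token_ids):
    return [embeddings[t] for t in token_ids]

# Step 3 - Matrix multiplication: project the last token's vector
# through a (made-up) weight matrix to get one score per vocab entry.
weights = [  # 2 input dimensions x 4 vocabulary scores
    [0.2, 0.8, 0.1, 0.0],
    [0.5, 0.1, 0.9, 0.3],
]

def scores(vector):
    return [sum(v * w for v, w in zip(vector, col))
            for col in zip(*weights)]

# Step 4 - Prediction: pick the highest-scoring next token.
def predict_next(text):
    vecs = embed(tokenize(text))
    s = scores(vecs[-1])
    return inv_vocab[s.index(max(s))]

print(predict_next("the cat"))  # here: "cat"
```

A production model repeats step 4 in a loop, feeding each predicted token back in until it emits an end-of-sequence token -- which is why longer, more complex answers consume so many more tokens.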
Why Are AI Tokens -- and Costs -- Exploding?
There's pushback in much of the mainstream press that AI systems are becoming an "over-thinking" sinkhole for customers, including many small and medium-sized companies, who end up spending far more than they expected to streamline their systems and workflows.
Christopher Mims recently wrote a piece in The Wall Street Journal titled "Cutting-Edge AI Was Supposed to Get Cheaper. It's More Expensive Than Ever."
But here was a good rebuttal from Aaron Levie, CEO of Box, on X yesterday...
Because the cost of AI tokens has gone down, we can now afford to use far more of them for increasingly complex tasks. The key point, then, is not that "AI is getting more expensive"; it's that because AI is getting cheaper and more capable, we're using more of it to solve problems better.
For almost every like-for-like task, we're just using way more tokens to complete the task to deliver far better output. Whether it's writing code, answering a healthcare question, or analyzing a contract, we're using far more AI today to perform that work because we need the additional points of performance. Getting a 99% correct answer when working with a legal contract is *very* different from a 90% correct answer, and it's easily worth the 10X to 100X increase in tokens.
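Levie's tradeoff is easy to illustrate with hypothetical numbers (the prices and token counts below are invented for the sketch, not actual model pricing): if per-token prices fall 10X but a task consumes 100X more tokens, spend per task still rises 10X, and the buyer accepts that because the output is far better.

```python
# Hypothetical illustration of "cheaper tokens, higher total spend."
old_price_per_m = 30.00   # $ per 1M tokens (invented figure)
new_price_per_m = 3.00    # 10X cheaper (invented figure)

old_tokens = 2_000        # a short one-shot answer
new_tokens = 200_000      # a multi-step agentic workflow, 100X more tokens

old_cost = old_tokens / 1_000_000 * old_price_per_m
new_cost = new_tokens / 1_000_000 * new_price_per_m

print(f"old task: ${old_cost:.2f}, new task: ${new_cost:.2f}")  # $0.06 vs $0.60
```

Multiplied across billions of daily requests, that dynamic is exactly what drives the token (and energy, and networking) demand the rest of this article describes.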
Levie's full post is worth reading, but the main idea is that a company like Broadcom is uniquely positioned to excel in this early-innings AI environment, where tokens will multiply exponentially, systems will get "smarter," and costs will come down. If we get a 10-15% pullback in AVGO shares into or after Thursday's report, I would be a long-term buyer.
Broadcom (AVGO) Thrives in Custom AI Explosion
Key Takeaways