
Q2 2023 Hyperscaler Earnings Review

Headwinds from workload optimization may have peaked in Q2. After nearly a year of negative drag on revenue growth for the hyperscalers, it appears that cloud infrastructure customers are shifting their focus away from optimizing existing workloads and back towards deploying new ones. At least that is what Amazon's CEO told us in the opening statement of their earnings release. While the hyperscalers delivered revenue growth rates that decelerated from the prior year, they are now showing an uptick sequentially from Q1.

This makes sense. After rapidly spinning up new cloud workloads during Covid, established enterprises and start-ups alike postponed the normal post-launch clean-up cycles that address right-sizing server instances, tuning queries and rationalizing data retention. In an environment of flush IT budgets and pressure to ship fast, it's easy for engineering teams to defer this work and let the technical debt accumulate.

As budgets contracted and digital transformation project pace slowed down in 2022, engineering teams were encouraged to shift cycles back to paying down their technical debt. With pressure to generate cost savings, a lot of that work focused on optimizing existing application workloads. Engineering teams were able to review cloud resource utilization, reset capacity allocations, tune database queries, clean up prolific logging and refactor code to run more efficiently. All normal engineering best practices.

Except that they happened all at once. As this optimization work digested the oversized backlog of Covid technical debt, it created an unusually large reduction in resource consumption over a shorter period. This translated into less revenue for the hyperscalers and software infrastructure vendors that are consumption based. A normal application workload tuning exercise can reduce resource utilization by 20-30% or more. Excessively over-provisioned capacity might even be cut by 50%, simply by downsizing server instances. The hyperscalers rightfully make this easy to execute as a consequence of elasticity, but lower resource consumption means less revenue.

While this tuning is standard practice, in the post Covid catch-up period, it created a larger than normal reduction in spend. Enterprises were increasing consumption in some areas by launching new cloud workloads and digital transformation projects (although probably at a slower rate), but the immediate drop in cost for existing workloads created a large negative headwind to revenue growth. This caused the prior cadence of sequential quarterly increases in hyperscaler spending to slow down rapidly and even go negative briefly.

The optimization catch-up cycle can only last a finite amount of time. Eventually, technical debt is worked off and Covid workloads are tuned and right-sized. The effort to capture savings has diminishing returns, or as Microsoft's CFO said in April, workloads can't be optimized forever. With the latest earnings commentary from the hyperscalers, it appears the catch-up period is wrapping up. Hyperscaler spend by enterprises will be increasingly driven by new activity, with the negative drag created by post-launch optimization returning to its pre-Covid levels.

Where the steady state growth rate lands remains an open question. Data points during Covid were clearly inflated, as both large enterprises and rapidly growing start-ups threw money at their cloud migration and digital transformation projects. We can assume that this rate of new investment has slowed down from the peak, but is still robust. Over the past 12 months, negative consumption trends from optimization have obfuscated the post-Covid steady state. When optimization stabilizes, investors will get a clear idea of what the steady state growth rate will be.

Unfortunately, that growth rate won’t be easily comparable to the pre-Covid period as another force has injected a new variable, which is investment in AI. As enterprises consider their AI strategy, they are spinning up IT projects to harness their data to create digital services powered by new AI models. These efforts are generating demand for AI specific services from the hyperscalers. There would be some spillover into demand for generalized software infrastructure services as well.

As growth rates for the hyperscalers potentially level out and even inflect back upwards, investors can expect a similar pattern to follow for various software infrastructure providers. Logically, if enterprises are consuming more compute and storage resources on the hyperscalers to support their new AI-driven applications, then a similar increase in demand should materialize for adjacent software services, like databases, monitoring, delivery, orchestration and application security. Serving content at scale to users worldwide over the Internet still requires these things.


In this post, I will review the general trends in cloud infrastructure and how they are impacting the providers. Then, I will look at results from the Big 3 hyperscalers. I’ll wrap up by trying to project what path demand trends may take going into 2024, particularly understanding that AI will become an even larger influence over the next year.

Background

On the earnings call for their March 2023 quarter, Microsoft's CFO commented that "at some point, workloads just can't be optimized much further." She is correct. Cloud workload optimization refers to the customer process of reviewing the cost of various infrastructure services and identifying opportunities to reduce it by making better use of the resources available. Hyperscalers generally offer multiple levers for cost reduction, like instance sizes, upfront commitments, access latency, software versions and more.

During Covid boom times, it was common to over-provision server instances. Code was often rushed to production, with adequate bug checking, but limited performance tuning. As enterprise IT teams are now contending with budget pressures, they are trying to support the same workloads for less cost. By right-sizing server instances, refactoring code and rationalizing data retention policies, they have been reducing resource consumption and lowering the cost for each workload.

In some cases, these cost savings can reach 50% or more. As an example on AWS, simply downsizing an EC2 instance from 2xlarge to large can cut monthly costs by 75%. Committing to a one year contract rather than month-to-month can save another 20% and a 3 year commitment can reduce costs by 40%. These types of changes can be executed very quickly, usually within a quarter. As a consumption business, the negative impact on hyperscaler revenue would be immediate.
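
To make the compounding concrete, here is a rough back-of-the-envelope sketch. The $1,000 baseline is an illustrative placeholder rather than an actual AWS list price, and the discount percentages simply mirror the rough figures above.

```python
# Illustrative sketch of how optimization steps compound.
# The $1,000 baseline is a placeholder, not an actual AWS list price.
baseline_monthly = 1000.0             # hypothetical on-demand cost of a 2xlarge instance

downsized = baseline_monthly * 0.25   # 2xlarge -> large is roughly 1/4 the resources (~75% savings)
one_year_commit = downsized * 0.80    # ~20% discount for a one-year commitment
three_year_commit = downsized * 0.60  # ~40% discount for a three-year commitment

for label, cost in [("On-demand 2xlarge", baseline_monthly),
                    ("Downsized to large", downsized),
                    ("Downsized + 1-yr commit", one_year_commit),
                    ("Downsized + 3-yr commit", three_year_commit)]:
    savings = 1 - cost / baseline_monthly
    print(f"{label:<26} ${cost:>8,.2f}/mo  ({savings:.0%} below baseline)")
```

Under these illustrative assumptions, stacking the changes cuts a single workload's bill by 80-85% in one pass, which is why the revenue impact on a consumption-based vendor shows up within a quarter.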

However, workload optimization has diminishing returns. The most significant changes are logically front-loaded. As each quarter passes, the revenue impact should decrease. This would explain the current situation. Enterprises are still optimizing, but the opportunities to make large reductions in cloud spending are passing, as they have mostly worked off their Covid backlog. Going forward, reductions in spend on existing cloud workloads should slow, allowing new workloads to contribute more of the growth in enterprise spend.

Optimizations started back in the second half of 2022 and continued through Q4 and into Q1 2023. As we now receive Q2 earnings reports from the hyperscalers, management is still referencing the effect, but the impact on revenue growth rates appears to be decreasing. This is evidenced by the leveling out of annual growth rates and even a sequential reacceleration of growth from Q1 to Q2.

Looking forward, management teams are hesitant to predict the end of optimization. In many ways, enterprises will always optimize workloads to some extent. The right question is whether the negative drag of optimization efforts is increasing, decreasing or remaining at steady state. I think it has reached steady state transitioning from Q1 to Q2 and will start decreasing through the remainder of 2023. Amazon, for one, has called out this inflection starting in Q2.

This means that revenue growth rates for hyperscalers can be primarily driven by the expansion of existing workloads (from more usage) and the introduction of new ones. New workloads are created by enterprises starting digital transformation projects or migrating existing applications to the cloud. This type of work has continued through the optimization period, likely with some projects postponed to accommodate one-time budget resets. I think the frequency of project delays will decrease also, as macro headwinds and interest rates stabilize.

Another factor to consider relative to hyperscaler utilization has been demand from the start-up community. As interest rates dropped to zero during Covid, VC firms poured money into start-ups offering all kinds of new digital-based services. These start-ups were encouraged to get product to market as quickly as possible, generating rapid scale-ups on hyperscaler infrastructure.

As this funding dropped in 2022 (whether measured by investment amounts or IPOs), so did the cloud infrastructure demand generated by the start-ups. In many cases, the existing start-ups went through their own optimization exercises, similarly tuning performance and right-sizing their resource consumption to match the new reality of their growth rates. Like enterprises, though, these start-ups will start to expand again, and will begin increasing their cloud infrastructure spend.

At some point, VC funds should start investing again. We are already seeing this in the AI space. AI services require other software infrastructure services beyond model training, if they are packaged as an application delivered over the Internet. As these start-ups scale up their products for broad use, they should create a new tailwind for hyperscaler and software services demand. Software infrastructure providers, like in application security, monitoring and operational databases, have also called out AI start-ups among their new customer additions.

AI Headwind or Tailwind for Broader Software Infrastructure?

Looking more broadly at software infrastructure service consumption and non-AI hyperscaler spend, one wildcard is the near term IT budget allocation between direct AI services and everything else. Most hyperscalers have rolled out AI specific offerings at this point. These supplement their existing cloud infrastructure services that allow enterprises to host applications, data processing and storage on the hyperscalers.

Enterprises and start-ups are flocking towards AI services, pursuing opportunities to launch new AI-driven products for their customers, partners and suppliers. These efforts are already showing up in hyperscaler earnings results. Microsoft reported that AI spend contributed 1 point of Azure's annual revenue growth in the prior quarter. They expect that contribution to grow to 2 points in the current quarter.

This is great news for the companies directly in the AI infrastructure value chain. The hyperscalers obviously benefit, as do the companies providing hardware to the hyperscalers that enable AI processing. These include the AI chip manufacturers, data storage and networking providers.

If IT budgets are set for 2023, it is possible that enterprises will shift some funds from digital transformation and cloud migration projects into new AI initiatives. This might result in less spend for traditional cloud infrastructure services, versus specific AI offerings. For the hyperscalers, it doesn’t matter too much, as they generate revenue from both. For vendors of services that wrap around AI-driven applications (operational databases, security, delivery, monitoring, etc.), it might.

AI services delivered as standard Internet-based applications would still consume these services, but if enterprises shift more spend towards building their AI models initially, there may be less remaining budget to fund the ongoing digital transformation and cloud migration projects. As technology leaders at enterprises prioritize their efforts based on input from corporate leadership, they may choose to delay some projects on the standard cloud migration rail in favor of investment in AI processing.

In fact, Arista Networks has a term for this. They break out hyperscaler spend on their networking gear into two buckets – AI networking and classic cloud networking. The idea is that hyperscaler infrastructure spending in the past (2022 and earlier) has been to support “classic cloud” workloads, like those enabling digital transformation and cloud migration of enterprise applications. AI networking refers to network gear specifically earmarked to enable clustering of large numbers of GPUs and storage for AI processing.

During the past couple of years, we have enjoyed a significant increase in cloud capex to support our cloud titan customers for their ever-growing needs, tech refresh, and expanded offerings. Each customer brings a different business and mix of AI networking and classic cloud networking for their compute and storage clusters. One specific cloud titan customer has signaled a slowdown in CapEx from previously elevated levels. Therefore, we expect near-term cloud titan demand to moderate with spend favoring their AI investments. We do project, however, that we will grow in excess of 30% annually versus our prior analyst day forecast of 25% in 2023.

Arista Networks, Q2 2023 Earnings Call

Arista Networks leadership signaled that their cloud titan customers appear to be considering their mix of AI networking and classic cloud networking investments. Given the large demand for AI services, they are favoring spend to support AI investment near term. Arista still expects the net benefit to be positive for their total revenue, as they raised their annual growth target to 30% from 25% for 2023.

This signals that vendors selling hardware and services directly targeted at AI processing will likely be better positioned near term than those catering to "classic cloud" workloads. For companies like Arista and Nvidia, this should drive increased demand. For businesses supporting "classic cloud" deployments, there may be a temporary dip in demand as enterprise investments shift towards AI.

Like Arista, many "classic cloud" software infrastructure providers can be utilized by AI workloads as well. AI workloads require large amounts of clean and secure data, a standard application stack if they will be accessed over the Web, as well as monitoring and security. These demands should generate consumption of cloud database, data streaming, application security and monitoring services, but perhaps not as much as a "classic cloud" application.

Over the long run, I think that AI will generate other cost savings in enterprises, freeing up more budget for cloud infrastructure services. As developer productivity increases, enterprises will launch applications faster, generating more demand for cloud services to host them all. These effects should drive an acceleration in demand for “classic cloud” services, offset by savings in engineering headcount. Developers will still be hired – enterprises just won’t need as many.

The independent providers of “classic cloud” infrastructure outside of the hyperscalers can benefit from demand for their services from AI-specific companies as well. Many of these companies are fast-growing AI start-ups, flush with VC investment and a mandate to grow quickly. These companies have rapidly emerged in new customer highlights for independent software infrastructure providers in monitoring (DDOG), operational databases (MDB) and application security (NET). It’s probable that a reduction in spend from enterprises is backfilled by new demand from AI customers.

What does this all mean? After the majority of one-time optimization work is complete (and we can assume that the biggest cost reductions were front-loaded), then growth in cloud spend should return to a normal level. Normal will likely reflect growth before Covid, discounting for decay from the law of large numbers. Most importantly, spend growth will no longer have a large headwind from the one-time optimization catch-up.

Another factor to consider is that enterprises will likely start to realize internal cost savings as a consequence of their AI efforts. This should free up more budget to pay for them. Microsoft’s CEO hinted at this in response to an analyst question on their latest earnings call. The analyst referenced data showing that developers are experiencing a 40%-50% productivity improvement from GitHub Copilot and wondered if that same boost would extend to other Copilot efforts in Microsoft 365, Sales and Services.

I think what you’re also referencing is now there’s good empirical evidence and data around the GitHub Copilot and the productivity stats around it. And we’re actively working on that for M365 Copilot, also for things like the role-based ones like Sales Copilot, our Service Copilot. We see these business processes having very high productivity gains. And so, yes, over the course of the year, we will have all of that evidence.

And I think at the end of the day, as Amy referenced, every CFO and CIO is also going to take a look at this. I do think for the first time — or rather, I do think people are going to look at how can they complement their OpEx spend with essentially these Copilots in order to drive more efficiency and, quite frankly, even reduce the burden and drudgery of work on their OpEx and their people and so on.

Microsoft CEO, Q4 FY2023 Earnings Call

In sum, these contributors should allow the hyperscalers to enjoy consistent revenue growth again, in line with secular trends of digital transformation and cloud migration. They should additionally benefit from incremental spend on AI services, as enterprises and start-ups alike invest separately in those. The same trends would apply to the independent software infrastructure providers in observability, data processing, delivery and application security. After the hyperscaler earnings were reported, particularly with Amazon, the stocks in these companies jumped. As independent software providers report their most recent quarterly results over the next few weeks, we should get further updates on how these trends are playing out.


Sponsored by Cestrian Capital Research

Cestrian Capital Research provides extensive investor education content, including a free stocks board focused on helping people become better investors, webinars covering market direction and deep dives on individual stocks in order to teach financial and technical analysis.

The Cestrian Tech Select newsletter delivers professional investment research on the technology sector, presented in an easy-to-use, down-to-earth style. Sign-up for the basic newsletter is free, with an option to subscribe for deeper coverage.

Software Stack Investing members can subscribe to the premium version of the newsletter with a 33% discount.

Cestrian Capital Research’s services are a great complement to Software Stack Investing, as they offer investor education and financial analysis that go beyond the scope of this blog. The Tech Select newsletter covers a broad range of technology companies with a deep focus on financial and chart analysis.


Hyperscaler Results

With that background, let’s look at how the hyperscalers performed. I will focus on the cloud business for each company and the revenue component of that. Amazon, Google and Microsoft have other aspects of their businesses that aren’t material to this discussion. Additionally, their reporting of the cloud hosting component of their business is usually limited to revenue performance, with varying degrees of transparency about future expectations.

Hyperscaler YTD Stock Chart, Koyfin

Coming into this quarter’s earnings, all three hyperscaler company stocks have enjoyed nice appreciation so far into 2023. Amazon stock had the most appreciation, up 59% YTD prior to earnings on August 3rd. Microsoft stock was up 46% YTD before earnings on July 25th. Alphabet was third, up 38% prior to earnings on the 25th as well.

The stock performance following their earnings reports was mixed, as Microsoft disappointed and lost 3.9% the following day. Alphabet performed well on the back of strong advertising spend, gaining 5.6% the day after earnings. The market had to wait a week for Amazon’s results, which turned out to be worth it. Both their primary business and AWS beat expectations, driving an 8.3% stock price jump the next day.

Hyperscaler Quarterly Revenue, Author’s Table. Q3 reflects estimate provided by Microsoft.

For their cloud infrastructure businesses, the three companies delivered revenue growth slightly above their prior guidance and street expectations. After a couple of quarters of rapid drops in growth rates, the deceleration appears to be moderating. For Amazon AWS and Google Cloud, which provide the actual revenue value, we can see that sequential growth even picked up from the prior quarter. This implies that revenue growth rates are leveling out. For Amazon, we also know that growth improved as Q2 progressed and the same trends continued into July.

Given that all three companies showed improving sequential growth, but still referenced ongoing workload optimization headwinds, my interpretation is that enterprises have already addressed the big tuning opportunities. They are now working through smaller refinements or have largely worked off the deferred tech debt from their Covid workloads. Technology leaders would prioritize the low-hanging optimization fruit first, which would explain why the largest impact was seen earlier in the cycle.

The other factor helping to improve hyperscaler growth rates is the contribution from AI spending. The hyperscalers all developed products that specifically support the desire of enterprises to begin leveraging their internal data to generate new service offerings using AI. AI specific start-ups are generally hosting on the hyperscalers as well. As these start-ups receive a surge of VC funding, they represent new spending for AI services.

Microsoft reported that AI services contributed about 1 point of Azure's revenue growth in the prior quarter. They expect that to double to 2 points in the September quarter. The other hyperscalers don't break out AI spend, but their management teams provided commentary indicating that AI is driving consumption across their service offerings.

With that, let’s briefly review what each hyperscaler reported.

Microsoft

As part of their Q3 (ended March 2023) earnings report, Microsoft guided for Azure revenue growth of 26-27% in constant currency for Q4 (ending June 2023). They had just delivered 31% annual revenue growth, which was down 700 bps from the 38% growth in their Q2 (ended December 2022). For the Q4 quarter just reported on July 25th, they actually achieved 27% revenue growth, a 0.5 point beat at the midpoint. This included a contribution from AI services equal to 1 point of growth.

For the current Q1 quarter ending in September, the market was looking for 25% growth. Management guided just above that to a range of 25-26%, with 2 points of that growth coming from AI. They expect the growth trends from last quarter to carry forward into the current quarter.

While Microsoft doesn't report the exact revenue amount for Azure, the deceleration in annual growth appears to be slowing. Two quarters ago, the deceleration was 700 bps from 38% to 31%, then 400 bps from 31% to 27% in the June quarter, and potentially just 100 bps from 27% to 26% in the current quarter. This isn't completely apples-to-apples, as AI is a new contributor. However, the shrinking deceleration implies that sequential revenue growth is leveling out and possibly even increasing.

Management also provided a signal about the overall size of Azure revenue. The CEO commented that Microsoft Cloud revenue was $110B over the prior 12 months and that Azure passed 50% of that for the first time. This implies that Azure’s annual trailing revenue total was $55B, which compares to $85.4B for AWS over the last 4 quarters.

The sales pipeline for Azure is healthy. Microsoft’s CFO commented that Azure received a record number of $10M+ contracts in the prior quarter. Additionally, the average annualized value for large long-term Azure contracts was the highest ever. This was driven by customer demand for both traditional cloud services and new AI offerings.

Management highlighted the continued strength of their Azure AI offering, which is enjoying rapid adoption with 11,000 customers. They cited several examples of enterprises incorporating ChatGPT features into their own product offerings. Mercedes is using ChatGPT through Azure OpenAI to improve its in-car voice assistant for 900,000 vehicles in the United States. Financial services company Moody's built its own internal copilot to improve the productivity of its 14,000 employees.

Alphabet

Alphabet bundles revenue for Google Cloud Platform (GCP) into a line item called Google Cloud. This includes their workforce application business (Google Workspace, formerly G Suite) in addition to their hyperscaler services. For Q1 (ended March 2023), Google Cloud delivered $7.454B of revenue, up 28.1% annually and 1.9% sequentially. In Q2, reported on July 25th, they increased Cloud revenue to $8.031B. This roughly matched the prior annual growth rate at 28.0%, but jumped sequentially by 7.7%. This highlights why sequential growth rates are currently a better directional indicator than annual growth comparisons.

I particularly liked the jump in sequential revenue growth, which is much higher than the prior two quarters. While we don’t know the exact contribution from Google Cloud Platform, management typically comments on the relative growth rate of GCP versus Google Cloud overall. During the earnings call, they again confirmed that growth of GCP was higher than the 28% annual growth rate of Google Cloud.

While management highlighted the strong results for GCP, they did cite ongoing optimization of spending from enterprises. They also didn’t provide any forward guidance on GCP performance, which is standard. Management did discuss at length the strength of adoption of their AI product offerings. They claim that their cloud infrastructure is a leading platform for training and serving generative AI models, with more than 70% of Generative AI unicorns on Google Cloud, including Cohere, Jasper, Typeface and others.

GCP offers a wide variety of AI supercomputer options for customers ranging from Google TPUs and advanced Nvidia GPUs to new A3 AI supercomputers powered by Nvidia’s H100. Google AI is experiencing strong demand for their more than 80 available AI models, both open source and third party. The number of customers consuming these has grown 15x from April to June.

Examples include Priceline for trip planning, Carrefour for creation of full marketing campaigns and Capgemini to streamline hundreds of internal business processes. HSBC uses their anti-money laundering AI service to flag financial crime risk. What I like about these examples is the breadth of use cases, extending into many facets of enterprise operations. This goes far beyond simple chat agents powered by ChatGPT.

Amazon

Saving the best for last, Amazon signaled the most upbeat momentum for cloud infrastructure spend. Not only did the overall business deliver a strong beat, but AWS performed better than expected. Investors will recall that AWS grew 16% y/y in Q1 (ended March 2023). The CFO mentioned on the earnings call that the annual growth rate had dipped to 11% y/y for the month of April (first month of Q2). If this deceleration continued, the overall growth rate for Q2 might have dropped to 10% or lower.

This led analysts to expect 10% annual growth for Q2 from AWS. The actual rate was 12.2% annually, which implies revenue growth re-accelerated after April. In fact, the q/q revenue growth rate was 3.7% in Q2, recovering from the -0.1% drop in Q1. This stabilization of growth rates was supported by management commentary.

So, again, if we rewind to our last conference call, we had seen 16% AWS revenue growth in Q1, and the growth rates had been dropping during the quarter. And what I mentioned was that April was running about 500 basis points lower than Q1.

What we’ve seen in the quarter is stabilization and you see the final 12% growth. So, while that is 12%, there’s a lot of cost optimization dollars that came out and a lot of new workloads and new customers that went in.

What we’re seeing in the (current) quarter is that those cost optimizations, while still going on, are moderating, and many may be behind us in some of our large customers. And now we’re seeing more progression into new workloads, new business. So, those balanced out in Q2. We’re not going to give segment guidance for Q3.

But what I would add is that we saw Q2 trends continue into July. So, generally feel the business has stabilized, and we’re looking forward to the back end of the year in the future because, as Andy said, there’s a lot of new functionality coming out. So, optimistic and starting to see some good traction with our customers’ new volumes.

Amazon CFO, Q2 2023 Earnings Call
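
As a rough arithmetic check on that re-acceleration: the quarter started at about 11% y/y in April but finished at 12.2% y/y overall, so May and June must have run hotter than April. The sketch below backs out their implied average, under the simplifying assumption (mine, not a disclosed figure) that each month of the year-ago quarter contributed roughly equal revenue.

```python
# Rough check: if April grew ~11% y/y and the full quarter grew 12.2% y/y,
# what did May and June average? Assumes each month of the year-ago quarter
# contributed roughly equal revenue, which is a simplification.
april_growth = 0.11
quarter_growth = 0.122

may_june_avg = (3 * quarter_growth - april_growth) / 2
print(f"Implied average y/y growth for May and June: {may_june_avg:.1%}")  # about 12.8%
```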

The statement from the press release that “growth stabilized as customers started shifting from cost optimization to new workload deployment” provides a good summary for the drivers of the Q2 results relative to Q1. As I discussed in my Q1 review of hyperscaler results (and Microsoft’s CFO hinted at), enterprise workload optimization had to decrease at some point. There is only a finite amount of optimization that can be performed and the impact is usually front-loaded. This effect was largely a catch-up on technical debt that had been accumulated during the Covid spending surge.

All the while, enterprises have been continuing to deploy new workloads and other lagging companies are starting their cloud migration journey. These effects generated incremental consumption of cloud infrastructure resources, but it was being offset by the negative impact of optimization. As optimization curtails, overall revenue growth rates will be primarily driven by new deployments again.

Amazon leadership also wanted to emphasize the opportunity for AI to drive more demand within AWS. They highlighted a "slew of generative AI releases" that provide cost savings and ease of use relative to model training, LLM customization and code generation. AWS wants to democratize access to generative AI for their customers by providing access to any LLM of choice and simplifying the requirements to get started. They are also emphasizing security and privacy, so that proprietary enterprise data isn't leaked out through model training.

Investment Plan

While the three hyperscalers continued to reference ongoing customer workload optimization as part of their calendar Q2 earnings reports, the magnitude of the negative impact appears to be decreasing. This was evident from their revenue growth rates and management commentary. Both AWS and Google Cloud delivered sequential revenue growth rates that, when annualized, exceed their year-over-year growth rates. This implies a reacceleration of growth.
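
As a minimal sketch of that comparison, the calculation below annualizes the sequential rates cited earlier in this post (3.7% for AWS, 7.7% for Google Cloud) and sets them against the reported year-over-year rates.

```python
# Compare annualized sequential (q/q) growth to reported year-over-year growth.
# Figures are the calendar Q2 2023 rates cited earlier in this post.
reported = {
    "AWS":          {"qoq": 0.037, "yoy": 0.122},
    "Google Cloud": {"qoq": 0.077, "yoy": 0.280},
}

for name, g in reported.items():
    annualized = (1 + g["qoq"]) ** 4 - 1   # compound the quarterly rate over four quarters
    print(f"{name}: q/q {g['qoq']:.1%} -> annualized {annualized:.1%} vs y/y {g['yoy']:.1%}")
```

If those sequential rates simply held, AWS would be tracking toward roughly 16% annual growth and Google Cloud toward roughly 35%, both above their reported year-over-year rates.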

This quarter, Amazon leadership provided the strongest signal that the impact from enterprise workload optimization is largely behind us, emphasizing that customers are shifting their focus to launching new workloads. The CFO even added that the Covid optimization catch-up work may already be behind some of their largest customers.

Investors will recall that last quarter Microsoft's CFO expressed a similar view that the optimization surge would eventually end. More specifically, she asserted that optimization can't continue forever, implying that customers are mostly through the adjustments that have the largest negative impact on revenue. She also noted that workload optimization had started about a year ago, so comparables will get easier in the second half of 2023.

This underscores a point that I raised in my review of the hyperscaler results from Q4. Optimization of a cloud workload tends to be a one-time exercise. Once the resources dedicated to an application workload are reset to match actual utilization, there isn’t a reason to keep reducing that allocation. The magnitude of an optimization exercise can be large. Simple server instance downsizing and longer commitments can reduce workload costs by 50% or more. The impact on quarterly revenue will also be immediate, as a consequence of the hyperscaler consumption models.

Once optimization is complete, however, growth in utilization will once again be driven by increases in application usage. Enterprises will continue their cloud migration and digital transformation work, generating new consumption of infrastructure resources. While this continued to a lesser extent over the past year, its contribution to revenue growth was offset by a large deficit from workload optimization.

Additionally, enterprises are pursuing initiatives to incorporate AI into new offerings for their customers, employees and suppliers. This drives more hyperscaler resource consumption, both of AI services and the broader rails of software infrastructure. These efforts will go beyond core AI model generation, cascading out towards inference and all standard cloud application support services (data, storage, security, monitoring, etc.). As Microsoft’s CEO said, AI services will also “spin the Azure meter.”

Last quarter, I tried to visualize these various effects in the diagram below. While there are a lot of moving parts and certainly unknowns, I think we can make some assumptions that explain the surge in hyperscaler revenue growth during the Covid period (2020-2021) and then the marked deceleration in growth rates we have been witnessing over the last few quarters since mid-2022. If we assume this has been driven by a cycle of over-provisioning and optimization, the trends in hyperscaler growth rates make sense. Further, if optimization will taper off and AI workloads ramp up, then we can extrapolate the likely curve of revenue growth over the next year or two.

Projected Hyperscaler Growth Influences over Time, Author’s Graphic (Updated for Q2)

The trends I anticipated in Q1 have largely played out in Q2. We are witnessing an inflection in the blended revenue growth rates, as optimization effects are bottoming and should diminish going forward. While the growth rate of overall classic cloud infrastructure is slowly decreasing due to the law of large numbers, spend on new AI offerings is starting to register and will likely backfill a portion of the decay (if not all of it).

I mean, even the workloads themselves, AI is just going to be a core part of a workload in Azure versus just AI alone. In other words, if you have an application that’s using a bunch of inference, let’s say, it’s also going to have a bunch of storage, and it’s going to have a bunch of other compute beyond GPU inferencing, if you will. I think over time, obviously, I think every app is going to be an AI app. That’s, I think, the best way to think about this transformation.

Microsoft CEO, Q2 FY2023 Earnings Call, January 2023

For investors in those companies that provide supporting infrastructure around the hyperscalers, like observability, data transport and management, security and delivery, we should see similar trends play out. The optimization and new workload patterns for these companies should follow the behavior of the hyperscalers.

Additionally, a new wave of AI-driven innovation won’t just benefit the vendors of the core AI inputs, like chip manufacturers and the hyperscalers. Any service hosted in the cloud and delivered over the Internet will consume the same software infrastructure resources that contributed to the last wave of Internet growth, whether Web2, mobile apps or remote work.

Further, new AI application investment won’t require incremental budget from most enterprises over time. The costs can be offset by the productivity gains for their knowledge workers. Enterprise departments will find that their employees can accomplish more with AI-driven software services and co-pilots. They will therefore require less headcount to complete the same amount of work. Payroll costs will decrease, providing savings to be invested in more sophisticated software services, whether digital assistants, workflow automation or system-to-system coordination.

Finally, the creators of software applications, namely developers, will become several times more productive. They will deliver new digital experiences faster. The result will be more applications that then consume greater cloud infrastructure resources. That additional expense will be further offset by the need for fewer developer resources.

For me, all of this implies a guardedly optimistic outlook for the independent providers of cloud infrastructure and associated software services. We will likely see stabilization of revenue growth rates for the software infrastructure companies that will be reporting their quarterly results over the next month. This will be driven by the same moderation of cost optimization impact being experienced by the hyperscalers.

Unless the macro picture takes a significant additional step downward, I expect demand to pick back up in the second half of 2023, providing a real opportunity for re-acceleration of growth going into 2024. This could provide a favorable set-up for many of the software infrastructure stocks that have been beaten down over the last year. For those willing to stomach some volatility over the next quarter or two, you might be well rewarded 6-12 months from now.

Further Reading

  • Peer analyst Muji over at Hhhypergrowth recently published a review of the hyperscalers’ latest performance. He included some more useful growth data and additional insights. This content is behind a paywall, but well worth the expense.
  • I recently started using the charting tools from Koyfin. I have found their product to be very helpful in conducting research and charting. Readers can check out Koyfin for themselves and receive a 15% discount on a subscription by using this link.

NOTE: This article does not represent investment advice and is solely the author’s opinion for managing his own investment portfolio. Readers are expected to perform their own due diligence before making investment decisions. Please see the Disclaimer for more detail.

4 Comments

  1. Michael Orwin

    Thanks for the article! How do you expect AI to spread between cloud, near-edge and extreme edge? From earnings calls, Qualcomm seem confident about demand for on-device AI. Cloudflare’s CEO (as I’m sure you know!) said some applications needing low latency, like in a driverless car, need to be at the extreme edge, but he seemed confident about Cloudflare’s revenue from near-edge AI. Microsoft seem confident about demand for AI in the cloud. I suppose some day there’ll be AI in everything, just like we already have silicon chips in kitchen appliances, but I’ve no idea when. I’m thinking on-device AI might be lost business for software infrastructure companies, but as a non-expert I could be way-off. Mostly I’m guessing there’ll be plenty of demand for AI in at least cloud and near-edge, but supply of the right silicon might be a constraint until production catches up.

    • poffringa

      Hi – good question. I could see a split something like this for overall AI processing:
      – Central cloud: 60%
      – On device: 25%
      – On edge: 15%

      Central cloud would likely always be the largest component, as that is where modeling would start and logically would need the most data centralized into one location. With the rise of autonomous devices, we should see an increase in AI performed on the devices themselves. This would probably be isolated to inference and decision making specific to that device, where latency or disconnection cannot be tolerated.

      Finally, I see the edge being useful as the space in the middle, performing two functions. First, it could serve as an interim AI processing layer for use cases that benefit from proximity but can tolerate some latency. Second, if multiple devices are interacting within a local physical space, the edge node could provide AI processing to coordinate those devices. This could address use cases where autonomous devices need to be orchestrated or communicate with each other.

      • Michael Orwin

        Thanks!

  2. Nadal

    Thank you very much.