Coming out of 2022, the catchphrase of the hyperscalers' Q4 earnings reports was "customer workload optimization". This referred to the process by which enterprise customers scrub their cloud infrastructure bills for savings. This exercise introduced a headwind to revenue growth, offsetting the positive impact of new customer cloud migration projects and digital transformation efforts. This effect drove deceleration in Q4 revenue growth rates for AWS, Microsoft Azure and GCP.
To further dampen sentiment, all three hyperscalers reported continued weakness in Q1 and avoided direct predictions as to when optimization might end. Given that they only provide guidance one quarter forward, management could remain vague around the expected revenue trajectory for the full year 2023.
While discouraging, the hyperscalers largely delivered results that were “better than feared”. Microsoft stock even rallied 8% the day after earnings. AMZN initially surged on an overall Q1 beat, but pulled back on commentary reflecting ongoing revenue growth pressure in AWS expected for Q2. Hyperscaler results also dragged notable software infrastructure companies along with them, with stocks like DDOG, MDB and SNOW surging and then dropping as each hyperscaler reported.
Investors did receive some hints in the Q1 report commentary about the potential trajectory of optimization going forward. The Microsoft team asserted that optimization impact will end eventually, while Amazon reiterated their long-term view that AWS has an enormous market to pursue. When the impact of optimization efforts does taper off, hyperscaler revenue growth can revert to being driven by new customer workloads from cloud migration and digital transformation projects.
New AI-driven services should provide another tailwind. In Microsoft’s case, they attributed a point of the incremental Azure revenue growth for Q2 to AI. AI services delivered over the Internet or through mobile devices will require the same software infrastructure rails as existing online experiences. These include the hyperscalers for core compute and storage, as well as other application services across the software stack, like unstructured data storage, transactional databases, security, monitoring and content acceleration.
For those independent software infrastructure providers that mirror the utilization of the hyperscalers, an inflection in hyperscaler growth could provide a bottom in their deceleration. If the “optimization debt” accrued over the past two years is indeed getting paid off, then these companies can return to a more predictable cadence of revenue growth in the near future. Their growth profile will be more balanced between new customer activity and optimization of spend from existing customers.
In this post, I will examine the Q1 results from the hyperscalers. Embedded within the results is a view of how enterprise optimization efforts are progressing and demand for new workloads. I will also try to interpret how AI may drive incremental demand for software infrastructure.
Optimization Update and AI Tailwinds
In my prior post on the Q4 hyperscaler results, I provided an in-depth explanation of optimization mechanics. I’ll review some of that here. The general idea is best summed up by the comment from Microsoft’s CFO that “at some point, workloads just can’t be optimized much further.”
Backing up a little, optimization in this context refers to the process of reviewing a customer’s billing patterns for various cloud infrastructure services and identifying opportunities to reduce costs by making better use of the resources available. All of the hyperscalers offer multiple options for each resource type. Typical variances that can affect cost include instance size, time commitment, access latency (hot or cold storage), software version and many more.
By closely examining a customer's actual usage of these resources, it is possible to find savings by switching to less expensive configurations or simply downsizing a server instance that had been over-provisioned. Over-provisioning was common during the Covid boom times, as businesses projected that the surge in activity would continue at the same rate. As it hasn't in many categories, servers and databases don't need as much memory and CPU as before. The 8 CPU instance that had been running at 50% utilization but is now at 20% can safely be downsized to 4 CPUs.
While a simple exercise, this would cut the monthly cost by roughly half. Other optimization efforts can be a bit more complex, but the ease of change speaks to the broader theme and also reflects the potential impact. If this server instance normally generates $100 of revenue in a month, the next month that revenue will be $50. This example can help frame the outsized deceleration in revenue growth currently being experienced across the hyperscalers.
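The right-sizing arithmetic above can be sketched in a few lines of Python. The per-vCPU hourly rate is an assumed, illustrative figure, not any hyperscaler's actual pricing:

```python
# Illustrative right-sizing math; the hourly rate is a made-up figure,
# not actual AWS/Azure/GCP pricing.
HOURLY_RATE_PER_VCPU = 0.05  # dollars per vCPU-hour (assumed)
HOURS_PER_MONTH = 730

def monthly_cost(vcpus: int) -> float:
    """Monthly bill for an instance whose cost scales with vCPU count."""
    return vcpus * HOURLY_RATE_PER_VCPU * HOURS_PER_MONTH

before = monthly_cost(8)   # over-provisioned: running at ~20% utilization
after = monthly_cost(4)    # right-sized: same workload, ~40% utilization
print(f"before=${before:.0f}/mo, after=${after:.0f}/mo, "
      f"savings={1 - after / before:.0%}")  # the bill is roughly halved
```

Because consumption billing is monthly, the provider's revenue from this customer drops by half the very next cycle, which is what makes the aggregate optimization headwind so visible in quarterly growth rates.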
The silver lining is that these types of optimizations are usually applied once. Also, the changes with the greatest impact would logically be prioritized first. I think the low hanging fruit, like this example, started getting addressed back in 2H2022. This continued in Q1 and will likely persist through Q2. As we get into the second half of 2023 (roughly a year after it started), the situation may improve.
That seemed to be the perspective from Microsoft leadership, where they implied that a calendar year was needed to address the majority of optimization. The AWS team echoed the optimization effect as well, and also implied that it would be an ongoing effort for their sales support teams. I agree with the subtext that hyperscalers in general should be more proactive in helping customers manage their resources and avoid ratcheting up large cloud bills like they did during Covid. This bias towards cost controls as a customer-friendly strategy has been discussed by other software infrastructure providers as well, like Snowflake.
Enacting savings recommendations requires varying amounts of work. Some changes are simple configuration options available in the console, or require only a short call with the sales rep and an updated contract. Others require a maintenance window or an upgrade to a new service (with pre/post testing). In any case, teams generally front-load the items that generate the most savings, so the largest reductions in spend should land near term, with the impact tapering off as teams move down to the lower-value recommendations. While the hyperscalers anticipate that optimization will continue for a couple more quarters, I expect the bulk of the revenue impact started in Q4, carried through Q1 and will persist into Q2.
The Microsoft leadership team appeared to reinforce this view. Looking forward, they implied that optimization should start tapering off and have a less pronounced effect later in 2023. As the Microsoft CFO said, at some point, workloads can’t be optimized further. I would add that they can always be optimized, but there are diminishing returns and the magnitude of the revenue impact will be smaller.
The Amazon team shared that these customer optimization exercises do not involve turning off software applications, switching to open source, “repatriating” to on-premise or delivering less functionality. Rather, they focus on finding savings based on actual usage patterns. When IT budgets are flush and enterprises are rushing to deliver new features, they invest little time in reviewing usage and right-sizing their resource allocations.
As IT budgets have come under pressure, IT leaders and DevOps teams have naturally looked for savings. The current macro environment has encouraged this. Recently, the optimization effect has been compounded by the proliferation of tools and consultants who specialize in helping enterprises find savings. Many of these emerged in the last year. Whether they were the result of macro deterioration or just the natural evolution of the cloud market is hard to say, but in either case, the impact of optimization is exacerbated by both macro pressure and the ease of finding savings.
After the majority of one-time optimization work is complete (and we can assume that the biggest cost reductions were front-loaded), then growth in cloud spend should return to a normal level. Normal will likely reflect growth before Covid, discounting for decay from the law of large numbers. Most importantly, spend growth will no longer have a large headwind from the one-time optimization catch-up.
AI as New Tailwind for Cloud Infrastructure
Investors are being inundated with references to AI and are left to interpret what impact this may have on cloud infrastructure spending. They also hear lots of discussion from software providers that they have been using ML and AI all along. So, what is new and how does this change things? Haven’t we seen this before?
If we focus just on generative AI and LLMs, the primary change (for the better) is how humans can interact with these new digital experiences and the value they can expect from them. Specifically:
- Interface. The method for interaction between human and machine (and in the future machine-to-machine) is evolving from point, click, select on a screen (web browser or mobile app) to natural language queries and task instruction. This increases the efficiency of the interface by an order of magnitude or more.
- Value Realization. The efficiency of the interface saves time. Additionally, LLMs and other AI-enabled models are more powerful and far-reaching than even a few years ago. The latest models can harness more data and uncover deeper insights than before. This means that the output of interactions with AI systems creates more value than previously in a shorter time. The ability to specify a task for an AI agent to execute is far more productive for the human operator than searching for information and taking the actions themselves.
I think that both of these factors will drive a large increase in consumption of new AI-enabled digital experiences. Existing software applications will be re-imagined and redesigned to make use of the improvements in interaction, efficiency and effectiveness. Both public consumers and internal enterprise employees should benefit.
This process should resemble the scramble to launch new mobile applications in the early 2010s, but likely at an even larger scale. Mobile apps increased usage of software infrastructure because humans could access those applications from anywhere. Instead of interacting for an hour a day while seated at their computer, consumers could engage over many hours as they moved through the day. Additionally, new hand gestures (touch / swipe) made the interface more efficient.
Yet, mobile apps didn’t make the base software applications more effective. Most mobile apps involved reproducing a similar experience to that exposed in a web browser. With AI-enabled applications, though, we get both benefits. The interface is more efficient and the effectiveness of the application is much greater. Combined, these two factors should generate more usage (more interaction, more data scope, more processing).
Similar to the introduction of mobile apps, AI-enabled applications will begin to consume more software infrastructure resources. Having witnessed the introduction of mobile apps for several consumer Internet businesses from 2010-2015, I can tell you that the back-end infrastructure requirements increased by 2-3x over just browser interfaces. The increased usage of mobile required more data processing, compute, delivery, security, authentication, etc.
I think the added efficiency of AI-enabled applications will have a similar effect. It will take some time, though. While the earliest mobile apps popped up around 2010, we didn't hit critical mass until 2012-2013, and then a long tail of new apps followed. While AI-enabled experiences appear to be seeing faster adoption, uptake by a critical mass of the Global 2000 will likely take a few years. Big Tech companies will lead, as they did back then with mobile.
The next question is how enterprises will pay for this. Assuming the current pressure on IT budgets persists, I think that incremental investment in new software services for AI will be funded by savings from increases in employee productivity. We are all hearing of examples of how AI services allow information workers in a variety of fields to be more productive by making use of LLMs and generative AI. These range from content producers to lawyers to marketers to managers.
If these employees are more productive, then enterprises will need fewer of them. This reduction of department headcount will free up budget to pay for the software that drives this productivity. Whether it is $20/month for ChatGPT or many of the other new commercial software services, a $100k annual all-in cost per corporate information worker (salary, benefits, space, etc.) will pay for a lot of software.
While they haven’t admitted it, I suspect that the FAANG tech giants are already doing this. They have announced layoffs to rightsize their employee base and reduce costs. I wouldn’t be surprised if a portion of these savings are being re-invested in the creation of new AI-driven software services for internal use. The tech giants are well-known for dog-fooding their own technologies to create operational advantages over mainstream incumbents in various industries. They are often the tip of spear.
Investors received a hint of this trend in comments in a non-hyperscaler earnings report, namely Meta (Facebook). On their earnings call, an analyst asked if applying new AI services internally would allow Meta to reduce headcount growth going forward. The CFO deftly side-stepped the question (due to sensitivity around AI replacing jobs), but I think the implication stands.
Analyst: Then the second one for Susan just on as we’re thinking about hiring expectations for ‘24 and beyond, I think there’s been some ink in the press about potential 1% to 2% hiring going forward. Can you just talk to us about that a little bit? And how does using AI internally factor in your thoughts about long term hiring?
CFO: And you also asked about using AI internally and how that factors into our thoughts on long term hiring. We certainly don’t have enough visibility yet into how AI will make our workforce more productive, but it’s something we’re excited about and I think we will have more clarity on that as more tools begin getting developed to enhance employee productivity across the industry.
Meta Q1 Earnings Call, April 2023
As these productivity enhancing AI services ramp up, I do think the rate of hiring will slow down at Big Tech and then the Global 2000 for traditional knowledge workers. Obviously, enterprises will be careful with this messaging, as they don’t want to fuel the “AI is replacing jobs” narrative, but I think the writing is on the wall.
In some cases, they aren’t avoiding it. IBM just announced that they are pausing hiring for information worker roles that they think could be replaced by AI at some point. This is projected to impact about 7,800 workers over several years.
Hiring in back-office functions — such as human resources — will be suspended or slowed, Krishna said in an interview. These non-customer-facing roles amount to roughly 26,000 workers, Krishna said. “I could easily see 30% of that getting replaced by AI and automation over a five-year period.”
Bloomberg Article, May 2023
When fewer high cost information workers are required to accomplish the same output, that savings can offset the cost of additional software automation. IBM’s CEO intends to invest more in AI and automation to address these corporate functions. That implies a shift of more corporate budget to IT and associated software services.
As another catalyst, developer productivity will increase significantly. With AI-enabled coding assistants, developers have reported anecdotal improvements of 2-3x in productivity. This makes sense, as a lot of development time can be occupied in time-consuming, but relatively mundane, tasks. These can be automated through AI tools. Tasks include not just code suggestions, but creation of unit tests, configuration scripts, code review, security checks, API interface discovery, etc.
Higher developer productivity will result in more applications. Fewer developers will be required, again generating cost savings to invest back into more software services that enable their productivity. More applications will require more software infrastructure to host them. This cost will be borne by savings in headcount. Enterprise developer teams might be 50% smaller in the future. We could see enterprise IT and other knowledge worker departments shift from having the majority of expense (let's say 90% now) applied to headcount to a more even mix of AI software services (digital assistants) and employee salaries.
Bringing this back to today, I think we investors are in a difficult spot. We are on the backside of the Covid-driven surge in digital transformation investment. Optimization is the catchphrase of the day, which really means reducing costs by wringing out excessive spend and making better use of existing cloud infrastructure resources. This is creating a large headwind to growth in cloud resource consumption, literally injecting spend reduction as workload resource clusters are downsized. This is overshadowing any growth from ongoing digital transformation workloads and cloud migrations.
In parallel, AI is introducing new usage patterns, but is at the beginning stages. Microsoft, likely the most immediate beneficiary, estimates that a percentage point of Azure's annual revenue growth for Q2 will be associated with AI. Most of that probably comes from OpenAI. As AI applications fan out into many use cases, I could see that contribution becoming much larger over time.
Additionally, as workload optimization tapers off, the positive growth from new projects and cloud migrations will regain its dominance in impact over any remaining optimization headwind. Amazon leadership said they will work with customers proactively to keep cloud spend in check, but it will still increase. Applying ongoing optimization forward will result in a better outcome anyway, as enterprises will avoid the current cycle of over-provisioning and then cutting back.
Sponsored by Cestrian Capital Research
Cestrian Capital Research provides extensive investor education content, including a free stocks board focused on helping people become better investors, webinars covering market direction and deep dives on individual stocks in order to teach financial and technical analysis.
The Cestrian Tech Select newsletter delivers professional investment research on the technology sector, presented in an easy-to-use, down-to-earth style. Sign-up for the basic newsletter is free, with an option to subscribe for deeper coverage.
Software Stack Investing members can subscribe to the premium version of the newsletter with a 33% discount.
Cestrian Capital Research’s services are a great complement to Software Stack Investing, as they offer investor education and financial analysis that go beyond the scope of this blog. The Tech Select newsletter covers a broad range of technology companies with a deep focus on financial and chart analysis.
Hyperscaler Results
With that background, let’s look at how each hyperscaler performed. I will focus on the cloud business for each company and the revenue component of that. Amazon, Google and Microsoft have other aspects of their businesses that aren’t material to this discussion. Additionally, their reporting of the cloud hosting component of their business is usually limited to revenue performance, with varying degrees of transparency about future expectations.
Overall, the three hyperscalers continued to reference the optimization of customer workloads as a headwind to revenue growth. To varying degrees, we started to see hints that this optimization might be moderating or at least has an end in sight. This signal was strongest with Azure, where the Microsoft CFO discussed how workload optimization had started about a year ago and that comparables will be easier as we get into the second half of 2023. More specifically, she asserted that optimization can’t continue forever, implying that customers are mostly through the adjustments that have the largest negative impact on revenue.
This underscores a point that I raised in my review of the hyperscaler results from Q4. Optimization of a cloud workload tends to be a one-time exercise. Once the resources allocated to an application workload are reset to match actual utilization, there isn’t a reason to keep reducing that allocation. The magnitude of an optimization exercise can be large. I have observed savings on some workloads reaching 50% cost reduction in some cases with start-ups that I advise. The impact can also be immediate, as a consequence of the hyperscaler consumption models for customers on a monthly billing cycle.
Once optimization is complete, then growth in utilization will once again be driven by increases in application usage. Optimization has acted as a headwind to revenue growth over the last couple of quarters. As optimization can create an actual negative adjustment to spend, it goes beyond just registering a slower growth rate. The negative adjustment can offset growth from new customer workloads as well as any expansion by existing customers.
In reporting their results, none of the hyperscalers implied that customers going through optimization were considering shutting down application workloads completely or moving them off of the cloud. The Amazon team did discuss how some of their customers were applying the savings towards new digital transformation and cloud migration projects. These new projects naturally take time to spin up before they generate substantial load, which further exacerbates the impact of these shifts on revenue growth.
Once this optimization headwind subsides, growth will return to being driven by incremental usage from new and existing customers. I think this is why the market reacted so positively to the results from Azure. The implication is that in future quarters, optimization will stop being a strong headwind, allowing all cloud services based on consumption to swing back to growth. Investors witnessed not just a positive move in MSFT stock the day after their results, but also outsized gains in the higher level software infrastructure services, like Datadog, Snowflake, MongoDB and others.
With that said, I don’t think we will immediately snap back to the heyday of 2021 spending. There will likely still be some long tail of workload optimization, as well as delays in starting new digital transformation projects. Until the macro picture becomes clearer, enterprises may hesitate to make large investments in IT projects. However, I think it’s fair to assume that the rapid deceleration in annual growth rates will bottom soon and we could return to consistent sequential growth.
To summarize the revenue growth rates for the three hyperscalers, I created the table above covering the last two years. You can see the revenue growth deceleration start in 2022 and proceed through Q1 2023. As a highlight, Google Cloud (which includes GCP) logged its slowest sequential growth in two years at 1.9%. AWS even registered negative sequential growth in Q1 (although a little bit of seasonality may be at play). Azure doesn't provide a total revenue value, but Q1 showed the largest q/q decrease in the annual growth rate (a 7 point drop in constant currency).
Amazon and Microsoft leadership also provided commentary around estimates for the current quarter (Q2 2023 calendar year). For Azure, they project annual revenue growth of 26% – 27% in constant currency. Assuming the high end, that represents another 4 points of annual deceleration, but better than the 7 point drop from Q4 to Q1.
For AWS, management commented that they are seeing workload optimization continue into Q2 with “April revenue growth rates about 500 basis points lower than what we saw in Q1.” This implies an annual growth rate of 11%. If projected against all of Q2, AWS revenue would land at about $21.9B, which would actually represent a slight re-acceleration sequentially to 2.6%. However, the unknown is whether that downward trend for April continues through the full quarter, or levels out at 11%.
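That projection is easy to reproduce. Using AWS revenue of roughly $19.7B in Q2 2022 and $21.4B in Q1 2023 (figures rounded from Amazon's reported results), an 11% annual growth rate implies:

```python
# Back-of-envelope AWS Q2 2023 projection (revenue in $B, rounded
# from Amazon's reported results).
q2_2022 = 19.74    # AWS revenue, Q2 2022
q1_2023 = 21.35    # AWS revenue, Q1 2023
q1_growth = 0.16   # AWS y/y growth in Q1 2023

implied_growth = q1_growth - 0.05          # April running ~500 bps lower
q2_2023 = q2_2022 * (1 + implied_growth)   # ~$21.9B
sequential = q2_2023 / q1_2023 - 1         # ~2.6% q/q
print(f"implied Q2 revenue: ${q2_2023:.1f}B, sequential: {sequential:.1%}")
```

The sketch also shows the sensitivity: if April's rate keeps sliding through the quarter rather than leveling out at 11%, the implied Q2 figure drops accordingly.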
For Azure, leadership also added that about 1 point of their Q2 growth is associated with AI services. While 1% sounds small, it is about 4% of the total revenue increase for Q2 (assuming 26%-27% growth). Given that the run rate of Microsoft Cloud is over $100B annually, that could account for a large amount of revenue. Microsoft leadership doesn't break out the contribution from Azure, but I think we can assume it is more than half.
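As a rough check on those proportions, using the midpoint of Azure's guidance and the disclosed Microsoft Cloud run rate (the 50% Azure split is my assumption, not a Microsoft disclosure):

```python
ai_points = 1.0         # growth points Microsoft attributed to AI
growth_points = 26.5    # midpoint of Azure's 26%-27% cc guidance
share_of_increase = ai_points / growth_points   # ~3.8% of the Q2 increase

ms_cloud_run_rate = 100.0   # Microsoft Cloud annual run rate, $B (disclosed as $100B+)
azure_share = 0.5           # assumption: Azure is about half of Microsoft Cloud
ai_revenue = ms_cloud_run_rate * azure_share * ai_points / 100  # ~$0.5B/yr
print(f"AI share of the Q2 increase: {share_of_increase:.1%}; "
      f"implied AI revenue: ~${ai_revenue:.1f}B annually")
```

Even under these conservative assumptions, one growth point already maps to roughly half a billion dollars of annualized revenue.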
Looking forward, this still leaves investors speculating about the likely trajectory for cloud infrastructure revenue growth through 2023. Examining the broader commentary from the Azure and AWS teams provides some directional trends. The view can be summarized by the following three points:
- The demand for cloud infrastructure resources, whether cloud migrations or new application workloads, is continuing. Enterprises are not stopping their investment in cloud. They may also be delaying spending commitments or trying to break investments into smaller chunks currently.
- Workload optimization is creating a negative growth headwind against the benefit of new customer activity. This optimization work will end and may taper off in the second half of 2023. This would result in revenue growth primarily being driven by customer expansion going forward.
- AI services will drive new demand for cloud infrastructure from enterprises. This will go beyond the core model generation, cascading out towards inference and all standard cloud application support services (data, storage, security, monitoring, etc.). As Microsoft's CEO said, AI will "spin the Azure meter."
While I dislike pasting in long quotes from earnings calls, here are two that I think are very relevant to this discussion. They underscore the points I summarize above.
Microsoft CEO: First, optimizations do continue. In fact, we are focused on it. We incent our people to help our customers with optimization because we believe in the long run that the best way to secure the loyalty and long-term contracts with customers when they know that they can count on a cloud provider like us to help them continuously optimize their workload. That’s sort of the fundamental benefit of public cloud, and we are taking every opportunity to prove that out with customers in real time.
The second thing I’d say is, we do have new workloads started because if you think about it, during the pandemic, it was all about new workloads and scaling workloads. But pre pandemic, there was a balance between optimizations and new workloads. So what we’re seeing now is the new workloads start in addition to highly intense optimization driven that we have.
The third is perhaps more of a relative statement because of some of the work we’ve done in AI even in the last couple of quarters, we are now seeing conversations we never had, whether it’s coming through you and just OpenAI’s API, right? If you think about the consumer tech companies that are all spinning essentially Azure meters, because they have gone to open AI and are using their API. These were not customers of Azure at all.
Second, even Azure OpenAI API customers are all new, and the workload conversations, whether it’s B2C conversations in financial services or drug discovery on another side, these are all new workloads that we really were not in the game in the past, whereas we now are. So those are the three comments that I’d make, both in terms of absolute macro, but more importantly, I think, what is our relative market position and how it’s being changed.
CFO: Maybe the one thing I would add to those comments is, we’ve been through almost a year where that pivot that Satya talked about from we’re starting tons of new workloads, and we’ll call that the pandemic time, to this transition post, and we’re coming to really the anniversary of that starting. And so to talk to your point, we’re continuing to see optimization. But at some point, workloads just can’t be optimized much further. And when you start to anniversary that, you do see that it gets a little bit easier in terms of the comps year-over-year. And so you even see that in a little bit of our guidance, some of that impact from a year-over-year basis.
Microsoft Q3 FY2023 Earnings Call, April 2023
And the AWS leadership team shared a similar perspective.
Amazon CEO: We’ve spent a fair bit of time analyzing what we’re seeing, and I’ve spent a good chunk of time myself looking as well, and we like the fundamentals of what we’re seeing in AWS. The new customer pipeline looks strong. The set of ongoing migrations of workloads to AWS is strong. The product innovation and delivery is rapid and compelling, and people sometimes forget that 90-plus percent of global IT spend is still on-premises.
If you believe that equation is going to flip, which we do, it’s going to move to the cloud and having the cloud infrastructure offering with the broadest functionality by a fair bit, the best securing operational performance, and the largest partner ecosystem bodes well for us moving forward. But we’re not close to being done investing in AWS. Our recent announcement on large language models and generative AI and the chips and managed services associated with them is another recent example. And in my opinion, few folks appreciate how much new cloud business will happen over the next several years from the pending deluge of machine learning that’s coming.
….
And so, if you believe that equation is going to flip, it’s mostly moving to the cloud. And I also think that there are a lot of folks that don’t realize the amount of consumption right now that’s going to happen and be spent in the cloud with the advent of large language models and generative AI. I think so many customer experiences are going to be reinvented and invented that haven’t existed before. And that’s all going to be spent in my opinion, on the cloud.
Amazon Q1 2023 Earnings Call, April 2023
I tried to visualize these various effects in the diagram below. While there are a lot of moving parts and certainly unknowns, I think we can make some assumptions that explain the surge in hyperscaler revenue growth during the Covid period (2020-2021) and then the marked deceleration in growth rates we have been witnessing over the last couple of quarters. If we assume this has been driven by a cycle of over-provisioning and optimization, the trends in hyperscaler growth rates make sense. Further, if optimization will taper off and AI workloads ramp up, then we can extrapolate the likely curve of revenue growth over the next 1-2 years.
These assumptions are speculative, but I think reflect the major influences we have witnessed on hyperscaler growth trends and might expect going forward. Investors can tweak slopes of any lines to match their own perspective.
My assumption is that cloud infrastructure demand has been growing at fairly steady state over time, with a gradual decay in the growth rate to reflect larger numbers. AI specific infrastructure started registering utilization in late 2022, with Azure projecting 1 point of their Q2 annual growth attributable to AI. During Covid-19, enterprises rushed to stand-up new digital transformation projects and start-ups flush with VC money rapidly introduced new workloads with a lot of capacity allocated in anticipation of rapid growth. This resulted in over-provisioning of cloud resources, inflating the overall growth rates of revenue for cloud service providers.
As we moved into the post-Covid period and associated macro tightening, companies began optimizing their cloud workloads. This created a reduction in utilization of cloud resources, pulling down overall revenue growth rates by offsetting the benefit of new cloud infrastructure demand. As optimization tapers off, hyperscaler revenue growth rates should return to the steady state (with a gradual deceleration in rates over a longer period).
Finally, the incremental infrastructure tied to new AI-driven applications should create new demand for workloads beyond the trajectory of standard cloud migration and digital transformation. As both AWS and Azure leadership implied, this would be additive. Growth won't return to the inflated rates of the Covid period, but should settle at a higher level than today.
Investor Take-Aways
While investors may find the persistence of deceleration in hyperscaler revenue growth frustrating, I see light at the end of the tunnel for a couple of reasons. First, at a high level, leadership commentary was not as dismal as in Q4. Microsoft Azure performed better than expected, with guardedly positive remarks about demand trends looking forward. The AWS team was less specific, but was still optimistic about the long-term demand trends for cloud infrastructure. They also assuaged concerns that enterprises were simply done with the cloud.
Second, while workload optimization is currently creating a headwind to growth, we received some hints that its impact on revenue growth is likely to taper off. No specific timeline was given, but the Azure team implied that the second half of 2023 would look better.
Finally, all leadership teams raised AI as a new catalyst for cloud infrastructure demand generation. AI will drive a wave of software infrastructure investment and increased resource consumption. This will be similar to mobile app adoption in the prior decade, but will likely have a more significant impact on software infrastructure resource consumption, as AI increases both the efficiency of user interactions and the effectiveness with which work gets done.
Going back a quarter, Microsoft’s CEO projected the impact of AI on broader Azure utilization. In response to an analyst question about trying to quantify the potential contribution to Azure revenue from AI initiatives, Microsoft’s CEO pointed out that new AI-driven applications and capabilities do more than just increase usage of specialized compute for model creation and inferencing. Those applications will consume other resources, including storage and supporting services around application delivery.
I mean, even the workloads themselves, AI is just going to be a core part of a workload in Azure versus just AI alone. In other words, if you have an application that’s using a bunch of inference, let’s say, it’s also going to have a bunch of storage, and it’s going to have a bunch of other compute beyond GPU inferencing, if you will. I think over time, obviously, I think every app is going to be an AI app. That’s, I think, the best way to think about this transformation.
Microsoft CEO, Q2 FY2023 Earnings Call, January 2023
This implies that cloud infrastructure like Azure will see an increase in overall resource utilization if AI-infused applications really take off. For investors in those companies that provide supporting infrastructure around the hyperscalers, like observability, data transport and management, security and delivery, we may see a renewed surge in demand. The point is that a new wave of AI-driven innovation won’t just benefit the vendors of the core AI inputs, like chip manufacturers.
Any service hosted in the cloud and delivered over the Internet will consume the same software infrastructure resources that contributed to the last wave of Internet innovation, whether Web2, mobile apps or remote work. This would benefit not just the hyperscalers, but the basket of other independent software infrastructure service providers around databases, analytics, security, delivery, monitoring, etc.
Further, new AI application investment shouldn't require incremental budget from most enterprises. I think the costs can be offset by the productivity gains for knowledge workers. Enterprise departments will find that their employees can accomplish more with AI-driven software services and assistants. They will therefore require less headcount to complete the same amount of work. Payroll costs will decrease, providing savings to be invested in more sophisticated software services, whether digital assistants, workflow automation or system-to-system coordination.
As FAANG companies are announcing layoffs under the guise of cost cutting, I suspect they are already making these investments. These companies have often been the first to make use of the same technologies internally that they enable for external customers. This embrace of software automation for efficiency gains has largely made them a competitive force. These cutting-edge internal approaches are usually adopted by the Global 2000 next.
Finally, the creators of software applications, namely developers, will become several times more productive. They will create new digital experiences more quickly. The result will be more applications that then consume more cloud infrastructure resources. That additional expense will be further offset by the need for fewer developer resources.
For me, all of this implies a guardedly optimistic outlook for providers of cloud infrastructure and associated software services. We will likely see further softening of revenue growth for the software infrastructure companies that will be reporting their quarterly results over the next 1-2 months. This will be driven by many of the same cost optimization headwinds being experienced by the hyperscalers.
However, unless the macro picture takes a significant additional step downward, I expect demand to pick back up in the second half of 2023, providing a real opportunity for re-acceleration of growth going into 2024. This could provide a favorable set-up for many of the software infrastructure stocks that have been beaten down over the last year. For those willing to stomach volatility in the first half of 2023, you might be rewarded 6-12 months from now.
Further Reading
- Our partners at Cestrian Tech Select published several posts on the hyperscalers, including a review of Microsoft’s results with detailed financials and technical analysis.
- Peer analyst Muji over at Hhhypergrowth recently published a detailed review of AI and its implications for software development. He drew a number of conclusions regarding potential impact on software infrastructure providers at all levels of the software stack. While this is behind a paywall, I think it is necessary reading for investors.
NOTE: This article does not represent investment advice and is solely the author’s opinion for managing his own investment portfolio. Readers are expected to perform their own due diligence before making investment decisions. Please see the Disclaimer for more detail.
Thanks for your thoughts and work as always Peter!
Thanks for another highly informative article.
Will greater developer productivity reduce the advantage of software companies like Cloudflare that already have a fast pace of rolling out new products, or does composability matter more than AI-enhanced productivity?
Interesting question – I think that developer productivity will rise across the board. So, all companies would benefit equally. Of course, as you imply, faster developer coding isn’t the only factor in supporting a rapid product development cadence. Architecture matters as well.
There’s a news piece out about IBM planning to use AI to replace 7,800 jobs in five years.
Yes – I saw that. I added a link to the article and a quote from it to the blog post.
Thanks for your reply. I expect tech companies will generally be careful about the messaging, as you said, but I expect your main point was that tech companies would use AI to keep headcount lower than otherwise, and would generally lead the trend. IBM’s announcement looks like very quick confirmation.
A small correction (sorry): I said “IBM’s announcement” but maybe it just came out in an interview.
There’s an article on the SemiAnalysis site titled ‘Google “We Have No Moat, And Neither Does OpenAI” / Leaked Internal Google Document Claims Open Source AI Will Outcompete Google and OpenAI’.
It’s about how the open source community has built on model weights leaked from Meta. A technique called LoRA (low rank adaptation) enables fast iteration. There’s still a quality gap between the open source large language models and Google’s models, but it’s closing, even though the open source models are smaller.
I expect that greater efficiency will lead to greater demand, but as a non-expert I can’t predict exactly where the greater demand will hit, or the effect of any regulation (e.g. to AI art with no restrictions).
Thank you very much, it’s very useful
Another thorough and excellent review, Peter; thank you.
A slightly off-topic question, please…
Are you following the burgeoning success of Vector databases? Vector DBs seem to be a new thing, although I recognize they really are not; they have been around for several years. Nonetheless, I see the positive business flows towards Pinecone in particular but also Chroma and Weaviate. What do you think: Are vector databases more enduring than just one more flash in the (data stack’s) pan?
Thank you.
Hi – Vector databases are experiencing a surge in interest because they address a specific use case: storing and rapidly accessing the output of AI models (called embeddings). These are represented as long arrays of numeric coordinates, i.e. vectors. A vector database can calculate relationships between vectors very efficiently, something other database types (relational, document, etc.) cannot do well. However, a vector database would not be suitable for other application workloads. So, vector database usage should increase as AI models proliferate, but they won't replace other types of databases. My thesis is that AI-enabled applications will increase software usage across the board, benefitting all types of databases.
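To make the comparison concrete, here is a minimal sketch of the core operation a vector database performs: ranking stored embeddings by similarity to a query vector. The toy 3-dimensional vectors and document names are purely illustrative — real embeddings from an AI model have hundreds or thousands of dimensions, and real vector databases use specialized indexes to do this at scale.

```python
import math

# Toy "embeddings": in practice these come from an AI model and have
# hundreds or thousands of dimensions; 3 dimensions here for illustration.
store = {
    "doc_a": [0.9, 0.1, 0.0],
    "doc_b": [0.1, 0.9, 0.1],
    "doc_c": [0.8, 0.2, 0.1],
}

def cosine_similarity(u, v):
    """Measure how closely two vectors point in the same direction (1.0 = identical)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def nearest(query, k=2):
    """Return the names of the k stored vectors most similar to the query."""
    ranked = sorted(store, key=lambda name: cosine_similarity(query, store[name]),
                    reverse=True)
    return ranked[:k]

# A query vector pointing mostly along the first axis matches doc_a and doc_c.
print(nearest([1.0, 0.0, 0.0]))
```

This brute-force scan is fine for a handful of vectors; the value a dedicated vector database adds is doing the same lookup efficiently across billions of embeddings, which is exactly the workload that grows as AI applications proliferate.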