Investing analysis of the software companies that power next generation digital businesses

Q4 2022 Hyperscaler Earnings Review

Over the last two weeks, we received earnings results from the three hyperscalers – AWS, Google Cloud Platform and Microsoft Azure. Additionally, several software companies reported, providing another view of trends in software infrastructure and developer tooling. If that wasn’t enough, various economic reports and a Fed meeting were mixed in. I won’t cover the macro developments, beyond noting how they influenced market performance during the period and amplified the market’s reaction to earnings.

Overall, we received mixed signals. On one hand, the hyperscalers exhibited a deceleration in revenue growth, as customers used this period to “optimize” cloud usage and stretch out projects. On the other hand, several software companies reported revenue with less deceleration and surprisingly linear growth projections looking forward. A couple of explanations might be drawn from this.

First, while software infrastructure company sales are correlated with usage of cloud resources, certain companies may be more insulated from the effects of resource optimization. In some cases, pricing models based on host or service count have less elasticity than the options available for hyperscaler optimization, like server downsizing, reserved instances and cold storage. Additionally, a few software infrastructure companies have been experiencing ongoing optimization over a longer period, going back to early 2022.

Second, the hyperscalers issue limited forward projections. They generally report revenue performance for just the prior quarter and sprinkle in a little verbal commentary about the current quarter. None of them project growth for the full year, which is particularly important now as most software companies are providing their preliminary full year 2023 guidance.

This leaves investors with a confounding dilemma. They could extrapolate the hyperscalers’ Q1 commentary, which reflects further deceleration, downward through all of 2023. Or, they could follow the lead of the software companies’ full year estimates and factor in some re-acceleration going into the second half of the year. A deeper look into the mechanics of workload optimization may provide insight into this apparent divergence.


Optimization

Since the hyperscalers all referenced customer optimization of cloud workloads as the primary contributor to revenue growth deceleration, let’s spend a little time discussing what that means. This is important for investors to understand, as the impact of optimization is in many cases more acute for the hyperscalers than it is for most software companies. This has to do with the plethora of options the hyperscalers offer to adjust resource utilization and commitment levels.

Optimization in this context refers to the process of reviewing a customer’s billing patterns for various cloud infrastructure services and identifying opportunities to reduce costs by making better use of the resources available. All of the hyperscalers offer multiple options for each resource type. Typical variables that affect cost include instance size, time commitment, access latency (hot or cold storage), software version and more. By closely examining a customer’s actual usage of these resources, it is possible to find savings by switching to less expensive options.

In many cases, switching to a new option is a simple configuration change or a minor maintenance operation, allowing nearly instant turn-around. Customers who embark on an optimization exercise can generally identify and execute changes that generate significant cost savings very quickly. These savings can immediately offset any planned expansion or new usage patterns.

To be clear, these optimization exercises do not involve turning off software applications, switching to open source, “repatriating” to on-premise or delivering less functionality. Rather, they focus on finding savings based on actual usage patterns. Over the years, the hyperscalers have introduced a lot of different size, configuration and pricing options, which can be fairly convoluted. When IT budgets are flush and enterprises are rushing to deliver new features, they invest little time in reviewing usage and right-sizing their resource allocations.

As IT budgets have come under pressure, IT leaders and DevOps teams will naturally look for savings. The current macro environment has obviously encouraged this. Recently, the optimization effect has been compounded by the proliferation of tools and consultants who specialize in helping enterprises find savings. Many of these emerged in the last year. Whether they were the result of macro deterioration or just the natural evolution of the cloud market is hard to say, but in either case, the impact of optimization is exacerbated by both macro pressure and the ease of finding savings.

To provide an illustrative example, I just went through this process with a start-up in the InsurTech space that finally acknowledged that their AWS spend was growing faster than they wanted. Over the past two years, spend had more than doubled. While this was easier to ignore as the business rapidly expanded and new funding was flowing, renewed scrutiny on overall budgets brought this into focus. Rather than sort out the savings themselves, the start-up engaged a third party provider of managed cloud services.

The cloud consultants were able to import the start-up’s AWS bill from the prior 3 months, run analytics on it and spit out a number of recommendations. Each recommendation involved a change to the configuration of a particular resource on AWS, a description of the work required and the potential savings. The start-up’s IT team was then able to review the recommendations, prioritize them and start enacting those that had the highest benefit for the lowest effort. The end result was about a 30% reduction in spend over the course of a month.
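For the technically inclined, here is a minimal sketch of the kind of billing analysis that gets automated in this step, using the AWS Cost Explorer API via boto3. The three-month window mirrors the engagement above, but the grouping and ranking choices are my own illustrative assumptions, not the consultant’s actual tooling.

```python
# Illustrative sketch: pull the last ~3 months of AWS spend grouped by service,
# the starting point for the kind of analysis the consultants performed.
import boto3
from datetime import date, timedelta

ce = boto3.client("ce")  # AWS Cost Explorer

end = date.today().replace(day=1)                 # first day of the current month
start = (end - timedelta(days=90)).replace(day=1)  # roughly three full months back

response = ce.get_cost_and_usage(
    TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Aggregate spend by service across the period and rank the largest line items,
# which is where optimization recommendations tend to be concentrated.
totals = {}
for period in response["ResultsByTime"]:
    for group in period["Groups"]:
        service = group["Keys"][0]
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        totals[service] = totals.get(service, 0.0) + amount

for service, amount in sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(f"{service:<45} ${amount:>12,.2f}")
```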

Here is a short list of the primary recommendations:

  • EC2 Rightsizing. Based on historical utilization patterns (mostly CPU), downsize to a smaller EC2 instance. An example would be to switch from a 2xLarge instance to Large, or a Medium to a Small. These changes can cut cost for that instance by 50% or more (see the sketch after this list for how candidates might be identified).
  • Long Term Commitments. AWS offers different pricing for 1 year and 3 year commitments, with varying amounts paid upfront. The longer the term and the more paid upfront, the greater the monthly savings. In one example, the start-up could save 24% by switching to a 1 year commitment and 41% for a 3 year commitment on the same resource versus paying month-to-month.
  • RDS Rightsizing. The start-up uses PostgreSQL on RDS. Based on query volume and CPU utilization, they could downsize to smaller database instances. If they enacted all recommendations, the savings would be 50%.
  • RDS Reservations. RDS databases can also benefit from a longer term commitment. The savings would be 36% by committing to one year.
  • EBS Upgrade. By upgrading their EBS volumes from GP2 to GP3 (the newer generation), the savings would be 20%.
  • Cold Storage. Some S3 buckets were used very infrequently. These could be moved to colder storage, with the savings depending on the acceptable time delay for retrieval. Logs retained for audits, but not needed in real time, could be moved to very cold storage.
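As referenced in the EC2 Rightsizing item above, here is a minimal sketch of how downsizing candidates might be identified from utilization data, using CloudWatch metrics via boto3. The 20% average and 50% peak CPU thresholds and the two-week lookback are illustrative assumptions, not AWS defaults or the consultant’s actual criteria.

```python
# Illustrative sketch: flag running EC2 instances whose CPU utilization over the
# past two weeks is low enough that a smaller instance size would likely suffice.
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(days=14)

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=start,
            EndTime=end,
            Period=3600,                      # hourly datapoints
            Statistics=["Average", "Maximum"],
        )
        datapoints = stats["Datapoints"]
        if not datapoints:
            continue
        avg_cpu = sum(dp["Average"] for dp in datapoints) / len(datapoints)
        peak_cpu = max(dp["Maximum"] for dp in datapoints)
        # Thresholds are illustrative assumptions for this sketch.
        if avg_cpu < 20 and peak_cpu < 50:
            print(f"{instance_id} ({instance['InstanceType']}): "
                  f"avg CPU {avg_cpu:.1f}%, peak {peak_cpu:.1f}% -> rightsizing candidate")
```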

As one example, here are the recommendations for the EC2 instances that are associated with just one micro-service. The costs are monthly. As you can see, the recommended changes are primarily associated with reducing instance sizes within EC2. The smaller size has a lower monthly charge. In this case, the cost reduction would be about 36% if all recommendations were enacted.

Recommended changes to EC2 instances for one micro-service
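To make the arithmetic behind a table like this concrete, here is an illustrative calculation of the monthly cost delta from downsizing. The instance roles, counts and hourly prices are hypothetical placeholders (within an EC2 family, on-demand pricing roughly doubles with each size step), not the start-up’s actual resources or current AWS list prices.

```python
# Illustrative only: instance names, counts and hourly prices are hypothetical.
HOURS_PER_MONTH = 730

current = [
    # (role, instance type, count, assumed $/hour)
    ("api",       "m5.2xlarge", 2, 0.384),
    ("workers",   "m5.xlarge",  2, 0.192),
    ("cache",     "r5.large",   2, 0.126),
    ("scheduler", "m5.large",   2, 0.096),
]

recommended = [
    ("api",       "m5.xlarge",  2, 0.192),   # downsize one step
    ("workers",   "m5.large",   2, 0.096),   # downsize one step
    ("cache",     "r5.large",   2, 0.126),   # kept for memory headroom
    ("scheduler", "m5.large",   2, 0.096),   # already small; kept for redundancy
]

def monthly_cost(instances):
    return sum(count * price * HOURS_PER_MONTH for _, _, count, price in instances)

before, after = monthly_cost(current), monthly_cost(recommended)
print(f"Current:     ${before:,.2f}/month")
print(f"Recommended: ${after:,.2f}/month")
print(f"Reduction:   {100 * (before - after) / before:.0f}%")   # ~36% in this sketch
```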

The start-up had initially selected larger instances, assuming they might need the additional capacity at some point. While they were growing quickly, no one looped back to revisit these sizes. However, with some pressure on the budget in general, the team could evaluate which of these instances to reduce. In the end, the team didn’t reduce all of them, as expected growth would require the additional capacity for some. Because some changes required a maintenance window, they front-loaded the ones that generated the most savings.

Additionally, it’s worth mentioning that further savings can be realized on most AWS services by making a longer term commitment. If the enterprise is confident that the service or product will be needed for a longer period, they can commit to a minimum usage level during that time. Usually, the hyperscalers offer discounts for 1 year and 3 year commitments. Making these types of changes would immediately lower that quarter’s revenue for the hyperscaler, but add to RPO.
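As a rough sketch of those mechanics, using the 24% and 41% discounts cited earlier and a placeholder monthly on-demand figure:

```python
# Illustrative sketch of the revenue mechanics of a reserved commitment.
# The $10,000/month on-demand figure is a placeholder; the 24% and 41% discounts
# come from the 1-year and 3-year examples cited above.
on_demand_monthly = 10_000

for years, discount in [(1, 0.24), (3, 0.41)]:
    committed_monthly = on_demand_monthly * (1 - discount)
    total_commitment = committed_monthly * 12 * years
    print(f"{years}-year commitment: ${committed_monthly:,.0f}/month recognized "
          f"(was ${on_demand_monthly:,.0f}), ~${total_commitment:,.0f} added to RPO")
```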

A simple analogy for investors might be the optimization of one’s mobile phone bill. If you wanted to save money, you may notice that you are paying for a usage plan that is much larger than you need. Or, you added a service that you don’t use. You would contact the mobile phone provider and make changes to your plan to reduce costs. This would be done once. From that point forward, the mobile phone company would generate consistent revenue from you.

If your usage later increased, they would capture that additional revenue. In parallel, many other customers might be joining their network, providing new revenue. If many existing customers were optimizing all at once, the new revenue would be offset by those reductions. After the surge of one-time optimizations tapered off, though, revenue growth would kick back in.

Zooming out to a high level, we can assume that many enterprises are going through this exercise now. IT teams (or their consultants) are scrubbing their hyperscaler bills for potential savings. They are then enacting changes to reduce their spend on existing services. That is acting as a counterweight to any new uses from digital transformation or cloud migration projects. That effect is compounding what might be some reduction in growth due to less economic activity.

For investors, there are several take-aways. First, the heavy optimization happening now is one-time in nature. Enterprises likely accumulated a lot of “optimization debt” over the last two years of rapid expansion. They are now taking advantage of the slowdown to go through all their existing cloud services and perform this optimization exercise. Going forward, they will likely be more thoughtful about resource utilization and won’t have a large backlog of optimization work to do.

Second, enactment of savings recommendations requires varying amounts of work. Some are simple configuration options available in the console or require a short call with the sales rep and an updated contract. Others require a maintenance window or upgrade to a new service (and pre/post testing). In any case, teams will front-load the items that generate the most savings, which means we will see the highest reductions of spend near term, with a tapering off of optimization impact as teams move to the lower cost savings recommendations. While hyperscalers anticipate that optimization will continue for a couple more quarters, the revenue impact should be front-loaded with enterprises harvesting the high-saving, low hanging fruit first. I expect the bulk of that to occur in Q4 and Q1.

Third, optimization exercises will most acutely impact hyperscaler resource consumption, where there are clear savings opportunities from downsizing instance, database and storage capacity. Usage of the next layer upwards of software infrastructure resources will feel some pressure, but likely not as much. This is because optimization generally reduces the allocation to a cloud resource, but does not eliminate it. Software infrastructure services like observability, security, developer tooling and others that base utilization on the number of hosts or services would not experience the same relative magnitude of reduction.

For example, if a software or security service were deployed per server host, right-sizing that host to a smaller instance type to save money with the hyperscaler would still result in the same host count. Granted, in some cases a server tier could be consolidated into fewer hosts, but many micro-services already run a minimal number of hosts for redundancy. In the example with the start-up micro-service I provided above, they could save 36% on their hyperscaler bill by downsizing every host, but they would still maintain the same number of hosts for that service.
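A toy illustration of that asymmetry, with a placeholder per-host rate and the hyperscaler figures carried over from the earlier illustrative EC2 sketch:

```python
# Illustrative: a service billed per host is unaffected by instance downsizing.
# The per-host rate and host counts are placeholders, not any vendor's actual pricing.
hosts = 8                      # same micro-service before and after rightsizing
per_host_monthly = 23          # hypothetical observability charge per host

hyperscaler_before, hyperscaler_after = 1_165, 745   # from the illustrative EC2 sketch above
observability_before = observability_after = hosts * per_host_monthly

print(f"Hyperscaler bill:   ${hyperscaler_before} -> ${hyperscaler_after}  (-36%)")
print(f"Per-host software:  ${observability_before} -> ${observability_after}  (0%)")
```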

Fourth, if the easy macro environment contributed to the build-up of excess capacity, then we can expect a similar effect as the current macro pressure abates. It may not be as pronounced as what we experienced during Covid, but it will be noticeable. Software still represents a competitive tool for enterprises. They will have to keep investing to maintain parity with their peers. The start-up that I referenced plans to keep investing in cloud infrastructure as it grows, but was happy to have realized a 30% savings upfront in this environment.

In terms of timing, we started hearing about optimization in Q2-Q3 of 2022. For Q4 and the initial look at Q1 of 2023, optimization has reached a crescendo. As it is a one-time exercise, but one that can take a quarter or two for enterprise IT teams to work through the high impact items, I think we can expect to see the bulk of this effect in the first half of 2023.

After the majority of one-time optimization work is complete (and we can assume that the biggest cost reductions were front-loaded), then growth in cloud spend should return to a normal level. Normal will likely reflect growth before Covid, discounting a bit for the law of large numbers. Most importantly, spend growth will no longer have a large headwind from a one-time optimization catch-up.

Considering the start-up example I referenced, they had the potential for 36% cost reduction. If they enacted part of the recommendations and achieved 25%-30% in savings, that would immediately reduce their monthly bill by the same amount and result in that much less revenue to AWS. If most enterprises are going through this exercise now, it explains the anemic sequential revenue growth. Enterprises are still introducing new usage through digital transformation projects and cloud migrations. The problem is that optimization is negating the impact of this growth. As optimization finishes the high impact changes and shifts to the long tail of smaller savings, then cloud revenue growth rates should re-accelerate.


Sponsored by Cestrian Capital Research

Cestrian Capital Research provides extensive investor education content, including a free stocks board focused on helping people become better investors, webinars covering market direction and deep dives on individual stocks in order to teach financial and technical analysis.

The Cestrian Tech Select newsletter delivers professional investment research on the technology sector, presented in an easy-to-use, down-to-earth style. Sign-up for the basic newsletter is free, with an option to subscribe for deeper coverage.

Software Stack Investing members can subscribe to the premium version of the newsletter with a 33% discount.

Cestrian Capital Research’s services are a great complement to Software Stack Investing, as they offer investor education and financial analysis that go beyond the scope of this blog. The Tech Select newsletter covers a broad range of technology companies with a deep focus on financial and chart analysis.


Hyperscaler Results

With that background, let’s briefly look at how each hyperscaler performed. I will focus on the cloud business for each company and the revenue component of that. Amazon, Google and Microsoft have other aspects of their business that aren’t material to this discussion. Additionally, their reporting of the cloud hosting component of their business is usually limited to revenue performance, with varying degrees of transparency.

Overall, the three hyperscalers referenced ongoing customer optimization of workloads as a headwind to their revenue growth. They provided limited projections for how long this behavior would persist and avoided any projections beyond the next quarter or two. As I discussed, this is a different situation than the smaller software infrastructure companies, which, where their fiscal year aligns with the calendar year, provided revenue guidance through the end of 2023.

Microsoft Azure

Coming into their quarterly report on January 24th (Q2 ended December 31st), we expected Azure to grow by 37% in constant currency. This was based on comments during their last (Q1 earnings) earnings call in which the CFO stated that they anticipated Q2 growth to be about 5% below Q1’s 42%. Analysts projected a little less than this.

When the company reported actual growth of 38% (in constant currency), the market was elated. Guidance for the next quarter is always held for the conference call and shared verbally. That proved to be disappointing. Regarding the next quarter (Q3), the CFO commented that growth in Q2 had been trending downward and exited the quarter in the “mid-30s”.

Azure and other cloud services revenue grew 31% and 38% in constant currency. As noted earlier, growth continued to moderate, particularly in December, and we exited the quarter with Azure constant-currency growth in the mid-30s. 

Microsoft Q2 Earnings Call, January 2023

The real bombshell occurred when the CFO estimated that the full quarter’s revenue growth for Q3 (ending March) would represent four to five points of further deceleration off of the prior quarter’s exit rate. It required an analyst question to clarify that this referred to the mid-30s exit rate, versus the just reported 38% growth. This implies an expected growth rate somewhere in the range of 30-32%, depending on how you want to interpret mid-30s. Analysts were expecting roughly 33-34% growth for Q3, so the guide missed by a couple of points.

With this revelation, the markets soured on both MSFT stock and many related cloud infrastructure stocks, pulling them down after hours. The stock actually recovered much of those losses the following day, but that is likely more related to some macro expectation shifting and a little bit of “it could have been worse”.

In terms of the drivers of the slowdown, the leadership team attributed the revenue growth deceleration to optimization of existing cloud resources and a little hesitancy to start new projects. They see the current focus being on the optimization cycle and then application of those savings into new cloud workloads. The new workloads will take some time to ramp up and existing workloads should start to grow consumption again following the optimization effort.

(About optimization) One is it absolutely starts with workloads that they have at scale, just because of the visibility one has on what’s driving, essentially, the consumption meters. And there’s real guidance that we ourselves entered in the product to say, here are things that you could do to optimize your billing. And so, that’s sort of what is the fundamental thing. When we say, do more with less, and how can we help, that’s sort of the first place customers go to.

And then the next piece really, I think, is going to be about how do they take the optimization that they get, and the savings they get in one workload, and what new project starts. And that’s where there’s a reprioritization. When should we start a new project? Those are the two things that are happening simultaneously. They don’t perfectly match, but one of the things is they’re looking to back some savings on some workloads, and then start.

That’s where, I think, a little bit of what has to happen is a cycle time where the optimization cycle finishes, the projects start, and then the projects ramp. And I think that that’s what, at least on the cloud consumption side, you’re seeing.

Microsoft Q2 Earnings Call, January 2023

When asked to project a timeframe for the optimization, Microsoft’s CEO responded that it wouldn’t take two years to optimize (the length of heightened spend during Covid), and was more likely a focus for “this year”. Further, as the optimization cycle progresses, he expects new projects to start, which take some time to ramp.

While a year sounds like a long time for optimization, as I mentioned previously, I expect teams to front-load the changes with the biggest savings impact. This implies that the greatest headwind to revenue growth from these step-downs in existing workload billing would be concentrated in Q4 and Q1, tapering into Q2.

AWS

Coming a week later on February 2nd, the earnings report for AWS largely reflected similar trends as Azure. AWS revenue for Q4 was $21.38B, up 20.2% annually and 4.1% sequentially over Q3. This represents a deceleration in annual growth from 27.5% in Q3, but roughly linear sequential growth. For the current quarter, reflecting trends through January, management commented that they are seeing year/year growth in the “mid-teens”. If that annual growth rate holds for the full quarter, then we would see no sequential growth for Q1.

The management team offered similar commentary on the growth slowdown in AWS, attributing it to customers looking to “thoughtfully identify opportunities to reduce costs and optimize their work.” At the same time, they announced a number of customer wins and new cloud migrations, continuing the enterprise “multi-decade shift to the cloud.”

Starting back in the middle of the third quarter of 2022, we saw our year-over-year growth rates slow as enterprises of all sizes evaluated ways to optimize their cloud spending in response to the tough macroeconomic conditions. As expected, these optimization efforts continued into the fourth quarter. Some of the key benefits of being in the cloud compared to managing your own data center are the ability to handle large demand swings and to optimize costs relatively quickly, especially during times of economic uncertainty. Our customers are looking for ways to save money, and we spend a lot of our time trying to help them do so.

This customer focus is in our DNA and informs how we think about our customer relationships and how we will partner with them for the long term. As we look ahead, we expect these optimization efforts will continue to be a headwind to AWS growth in at least the next couple of quarters. So far in the first month of the year, AWS year-over-year revenue growth is in the mid-teens. That said, stepping back, our new customer pipeline remains healthy and robust, and there are many customers continuing to put plans in place to migrate to the cloud and commit to AWS over the long term.

Amazon Q4 Earnings Call, February 2023

The Amazon leadership team provided a little more specific timeline around the effects of optimization. They described the trend as starting noticeably in Q3 2022 and expect it to last for the next couple of quarters. This would align with the Microsoft CEO’s expectation that optimization would require about a year of effort. If we project optimization forward for two quarters, then Amazon’s guidance indicates it would persist into Q3 2023.

In terms of the type of optimization, they provided a few details. It largely reflects what I had described. Customers are switching to lower cost products, changing the storage configuration for data and even reducing the frequency that some systems are online. Just as cloud hosting provides near-instant elasticity to accommodate rapid expansion, that same elasticity allows resource consumption to be decreased quickly when needed.
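As one example of a storage configuration change, here is a minimal sketch of an S3 lifecycle rule that transitions infrequently accessed data to colder tiers, using boto3. The bucket name, prefix and day thresholds are hypothetical; actual retention windows depend on audit and access requirements.

```python
# Illustrative sketch: transition infrequently accessed log data to colder S3
# storage classes via a lifecycle rule.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-audit-logs",          # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-audit-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [
                    # Rarely read after 30 days: move to an infrequent-access tier.
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    # Retained only for audits after 90 days: move to deep archive.
                    {"Days": 90, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)
```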

The challenge right now is that all enterprises are exercising these optimizations at once. If they are all able to realize 20%-30% savings, similar to my start-up example, then it is no surprise that sequential growth is slowing and might even stall from Q4 to Q1. However, as I mentioned, these optimization efforts are generally a one time exercise. There wouldn’t be repeated cycles of optimization – once a server is downsized to a smaller instance, it’s unlikely that it would be downsized by the same magnitude again in the future, and more likely upsized as normal growth resumes.

Leadership did reiterate the overall customer demand pipeline, describing it as “healthy and robust” with customers continuing to plan cloud migrations and commit to AWS over the long term. This verbiage mirrors some of the conclusions I could draw from the optimization exercise I observed with the InsurTech start-up. Optimization is resulting in an immediate reduction in monthly spend, with a good portion of those savings being realized by making a long term commitment (1 year or 3 years) for ongoing consumption of those resources versus month-to-month.

Near the end of the call, the CFO offered some potentially optimistic comments when asked how long the optimization cycle might persist. He drew a parallel to 2020 and posited that demand might actually accelerate after this optimization cycle, as companies realize they can generate more value from cloud resources by paying closer attention to costs.

And whether there’s short-term belt tightening in the infrastructure expense by a lot of companies, I think the long-term trends are still there. And I think the quickest way to save money is to get to the cloud, quite frankly. So there’s a lot of long-term positive in tough economic times. Saw that in 2020 when volumes for customers shifted very quickly.

It led to a resurgence after that and probably acceleration of people’s journeys to the cloud, and we’ll just have to see if that happens again with what we’re seeing today.

Amazon CFO, Q4 Earnings Call, February 2023

This somewhat reminds me of comments that the Snowflake leadership team has made regarding periodic optimization of their platform performance. While those cycles result in a near term reduction in utilization as customers can do more work with less spend, the longer term outcome is that they choose to move more workloads onto the Data Cloud, as the cost relative to their on-premise workloads is lower. The Amazon CEO made a rather shocking claim “that 90% to 95% of the global IT spend remains on-premises.” He went on to say that he expects this to shift to majority being in the cloud and that he sees their enterprise customers making this transition over time.

Google Cloud Platform

Google rounded out the earnings reports from the hyperscalers, announcing their Q4 results the same day as Amazon. Google doesn’t provide exact revenue amounts for GCP. Rather, they bundle it into Google Cloud revenue, which also includes Google Workspace, their cloud-based collaboration tools for enterprises, with offerings like Gmail, Docs, Drive, Calendar and Meet. On the call, management shared that GCP grew faster than Google Cloud overall, but we don’t know by how much or the relative revenue split.

Google Cloud generated revenue of $7.315B, up 32.0% annually and 6.5% sequentially over Q3. Analysts were looking for $7.4B in revenue. In Q3, Google Cloud’s annual growth was 37.6% and sequential growth was 9.4%. Given that GCP is growing faster, its growth rate is significantly higher than AWS and likely roughly inline with Azure. GCP’s revenue run rate is much smaller than AWS, however, likely around one-fourth.

Google leadership didn’t offer much additional color on usage trends within GCP. They reiterated the same theme as the other hyperscalers that GCP experienced “slower growth of consumption as customers optimized GCP costs, reflecting the macro backdrop.” Looking forward, they remain excited about the long-term opportunity and the trajectory of the business. They further asserted that enterprises and governments are increasingly engaging Google Cloud to address their digital transformation initiatives across verticals and geographies.

Software Companies

Mixed in with the hyperscaler earnings results, investors received quarterly reports from a few relevant software infrastructure and service providers. While these companies are experiencing pressure on demand due to the macro environment, the impact does not appear to be as pronounced. Additionally, these smaller companies provide full year guidance, on top of firm numbers for the current quarter. This is in contrast to the hyperscalers, which provide directional revenue growth guidance for the current quarter and no full year projections.

This provides an important data point; if the hyperscalers offered full year guidance for 2023, it would give some indication of when they expect revenue growth to bottom out and even pick back up. For the software companies, Q1 guidance didn’t show as large of an expected drop off in revenue growth. Further, for those providing full year guidance aligned with the 2023 calendar year, the full year revenue growth target is roughly inline with Q1 if we assume a reasonable beat and raise cadence as the year progresses.

This potentially provides a positive view regarding the demand curve for cloud and software infrastructure for the year, supporting a pattern of lower demand in first half, with possible acceleration in the second half. To understand this dynamic, we can look back to the optimization effect. While enterprises can reduce their spend on these software services, the magnitude of the reduction is generally more pronounced and immediate for the hyperscalers.

Additionally, software providers offer less opportunity to get out of whack with spend in the first place. While it varies by deployment model, software infrastructure can’t be over-provisioned as easily as compute or storage. There isn’t an XL instance that really should have been a Small. Granted, there will be the same efforts to reduce spend wherever possible, but I think an optimization effort applied to a software service would result in a 10% reduction, versus the examples I discussed earlier of 30%-50% for core compute and storage at the extreme end.

Optimization also appears to be a more regular activity with some software providers. Snowflake talked about the impact of workload optimization going back to early 2022. Datadog’s customers performed a large optimization in Q2 2020 when Covid hit, and management discussed how customers regularly right-size their usage, just not all at once. Additionally, products billed on a per host or service basis would not be impacted by right-sizing of the host instance type or storage allotment.

Let’s take a look at a few of the examples from software infrastructure companies that we have received thus far. For each, my focus will be on their revenue performance and guidance, versus a full analysis of the quarter.

Confluent

Confluent reported Q4 revenue of $168.7M for growth of 40.7% annually and 11.2% sequentially. This beat their prior guidance of $161M-$163M for 35.1% growth by over 500 bps (note a large beat here, not roughly inline or lower than expectations, like with the hyperscalers). This compares to Q3 revenue of $152M, increasing 48% annually and 9.3% sequentially. So, some deceleration, but better than expected, and healthy sequential growth.

Looking forward to Q1, they provided guidance for $166M-$168M, representing 32.4% annual growth and roughly flat revenue sequentially. If we apply the same beat from Q4 to Q1, then we can assume revenue could reach almost $174M for annual growth of about 38% and several points of sequential growth.
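For those who want to see the arithmetic, here is the back-of-envelope projection, with the year-ago base derived from the guided growth rate rather than taken directly from Confluent’s filings:

```python
# Back-of-envelope sketch of the "apply the prior quarter's beat" projection.
q4_actual = 168.7                 # $M, reported
q4_guide_midpoint = 162.0         # $M, midpoint of the prior $161M-$163M guidance
beat = q4_actual / q4_guide_midpoint - 1          # ~4.1%

q1_guide_midpoint = 167.0         # $M, midpoint of $166M-$168M
q1_guided_growth = 0.324          # 32.4% annual growth at the guide
q1_year_ago = q1_guide_midpoint / (1 + q1_guided_growth)   # implied year-ago base

q1_projected = q1_guide_midpoint * (1 + beat)     # assume a similar beat repeats
print(f"Prior beat: {beat:.1%}")
print(f"Projected Q1 revenue: ${q1_projected:.1f}M")
print(f"Implied Q1 annual growth: {q1_projected / q1_year_ago - 1:.1%}")
print(f"Implied sequential growth vs Q4: {q1_projected / q4_actual - 1:.1%}")
```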

For the full year, the projection is more interesting. The preliminary estimate for the full year is for $760M-$765M in revenue for 30.1% growth at the midpoint. Management’s prior guidance from Q3 was for a preliminary range of $760M-$770M, so this was only lowered by $2.5M at the midpoint. As this is likely conservative given the environment, there is opportunity for Confluent to raise this value as the year progresses.

For comparison, their initial FY2022 (last year) guidance was for 39.7% growth and they ended the year with 51.0% growth, roughly an 11 point raise over the course of the year. This implies that FY2023 could end with 40-41% revenue growth on the high end, which is inline with what they delivered for Q4 and slightly above the projected outcome for Q1 with a typical beat.

Of course, management could have to lower this as the year progresses, but I would expect this guidance was issued very deliberately with the macro environment in mind. If management expected the deceleration to continue through the year, the preliminary guidance would have landed closer to 20%, not nearly inline with the Q1 revenue growth rate. This implies that management expects revenue growth to level out or possibly reaccelerate in the second half of the year.

The Confluent leadership team commented on the demand environment. They discussed how macro is causing customers to apply additional scrutiny on budgets and delaying deals past the quarter end. They mentioned that this activity started in June 2022 and has continued. More deals took longer to close than expected and some expansions were slower than in the past. However, many of the deals that were pushed out of Q4 are still active in the pipeline and some have closed. Additionally, growth in customer counts with large ARR is trending nicely. This is in spite of a transition in the CRO role.

Dynatrace

Dynatrace exhibited a similar pattern. Because their fiscal year is offset, this was their Q3 report (still ending December 2022). Therefore, they only provided results for the just-completed quarter and guidance for the next one. Yet, we see similar linearity in revenue growth. The Q4 guide was roughly inline with Q3’s revenue growth rate if we assume a beat of the same magnitude.

Specifically, Dynatrace reported Q3 revenue of $297.5M for 29% growth in constant currency. This beat their prior guidance for a range of $283M-$286M, which would have represented 24%-25% growth in constant currency. This means that they beat their own guidance by 450 bps.

For Q4, they estimated revenue of $304M – $307M for growth of 24% – 25% in constant currency. While this represents 4.5 points of deceleration, if we factor in a beat of the same 450 bps magnitude, the expected growth rate should be about the same as Q3 at 29%. Additionally, the preliminary revenue estimate calls for 2.7% sequential growth, which would increase to about 7.2% with the same sized beat as they delivered in Q3. Analysts had expected only $291.9M in revenue for Q4, about 4-5% below the company’s guidance. Dynatrace’s guide for the next quarter exceeded analyst estimates by a large margin, in contrast to the hyperscalers, whose current quarter commentary fell short of expectations.

In terms of the demand environment, management acknowledged that IT budgets are coming under increased scrutiny and that sales cycles are extending. With that said, they reported a record number of new logo additions in the quarter. Their sales teams have gotten better at forecasting and building longer sales cycles into their closing expectations.

Dynatrace also appears to be benefitting from more consistency in their demand pipeline. In response to an analyst reference to Microsoft Azure’s experience of further deceleration in spend as the quarter progressed and over the first month of the current quarter, Dynatrace leadership asserted that their demand was roughly consistent through the prior quarter and into the current one. This may reflect the dynamic I discussed earlier in which optimization can more acutely impact hyperscaler resources through downsizing and extended commitments in the near term, versus software infrastructure services that accrue spend by host or service count.

As Dynatrace shares many of the same dynamics in the observability space as peers, their results may portend well for other providers like Datadog. On the earnings call, Dynatrace leadership even commented that demand for Dynatrace was enhanced by the enterprise optimization exercises. They claim that observability tools provide useful inputs to optimization, by indicating utilization rates, helping DevOps teams assess whether a cloud resource has excess capacity at a deeper level. Datadog recently launched a Cloud Cost Management tool.

Another aspect of some software infrastructure services to consider is that they can sell into on-premise installations. This obviously differs from the hyperscalers, which effectively compete with on-premise hosting. An enterprise choosing to keep a large application workload in an on-premise data center would still use an observability solution like Dynatrace (or New Relic, Datadog, etc.) to monitor it.

Atlassian

Similar to Dynatrace, Atlassian has an offset fiscal year. They just reported Q2 results (ending December). They also delivered a beat on revenue and set full year guidance above analyst expectations, implying an annual revenue growth rate roughly equal to that for Q2.

For Q2, they reported revenue of $872.7M, for annual growth of 26.7% and 8.2% sequentially. This beat the analyst estimates for $849.5M and the company’s prior guidance for $835M to $855M, or 22.7% annual growth at the midpoint. In Q1, they had delivered $807.4M for 31.5% annual growth. So, we are seeing some annual growth deceleration, but a nice sequential increase.

Looking forward to Q3, the revenue estimate is for $890M-$910M, with the midpoint slightly above the analyst estimate of $898.7M. This would represent 21.5% growth annually and 3.1% sequential growth. Assuming a similar beat of around 400 bps, the expected result would be about 25.5% annual growth with healthy sequential growth of 6-7%.

For the full year (FY2023), Atlassian hadn’t issued a preliminary target in Q1. As part of the Q2 report, they set a revenue growth target for about 25% year/year, which would correspond to about $3.50B. This was above analyst expectations for $3.46B. This implies $920M in revenue for Q4, which is 2.2% higher than their current forecast for Q3.
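The implied Q4 figure falls out of simple subtraction, using the approximate $3.50B target:

```python
# Back-of-envelope sketch of the Q4 revenue implied by Atlassian's new FY2023 target.
fy2023_target = 3500.0            # $M, approximate (~25% growth target)
q1, q2 = 807.4, 872.7             # $M, reported
q3_guide_midpoint = 900.0         # $M, midpoint of $890M-$910M

implied_q4 = fy2023_target - (q1 + q2 + q3_guide_midpoint)
print(f"Implied Q4 revenue: ${implied_q4:.0f}M")
print(f"Sequential growth vs Q3 guide: {implied_q4 / q3_guide_midpoint - 1:.1%}")
```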

ServiceNow

I covered ServiceNow’s results already in a prior post. Suffice it to say that ServiceNow is showing little deceleration, with growth rates projecting rough linearity through the year, if we account for the typical beat and raise cadence.

Specifically, they delivered 27.5% revenue growth in Q4 in constant currency, beating their prior range of 26% – 27% by 1%. For Q1, they project a range of 25% to 25.5% growth in constant currency. Applying a 1% beat, the Q1 actual growth rate would represent about 1% deceleration from Q4. For the full year, their preliminary guidance is for 23% growth at the midpoint. Assuming this is conservative and allows for some raises as the year progresses, the guide implies roughly linear growth through the year.

Investor Take-Aways

The hyperscalers all reflected significant deceleration in revenue growth as part of their Q4 earnings reports (December end) and commentary heading into Q1. Because they don’t provide full year estimates, investors are left to project the same growth trend line through the year. Generally, the hyperscalers attributed the revenue growth reductions to customer optimization of existing workloads. As I discussed, optimization involves the exercise of customers generating savings by downsizing underutilized server resources, committing to longer contracts and moving infrequently accessed data to cold storage.

These optimization exercises can drive significant savings in a short period of time. In the example I provided, a start-up was able to cut their AWS spend by roughly 30% within a quarter. Some optimization efforts involved fairly straightforward configuration changes and others required a short maintenance window. In all cases, the company front-loaded the changes with the highest savings impact.

As we evaluate the performance of the hyperscalers, investors have to take the nature of this optimization into account. While the impact is very acute currently, I believe it is both a one-time exercise and front-loaded. Over the past two years, enterprise IT teams and start-ups enjoyed expanded budgets, where shipping new features was prioritized over spend management.

As the macro environment has shifted and IT budgets have come under pressure, DevOps teams are taking advantage of this period to rapidly optimize their existing cloud workloads. With new tools and consulting services available to identify savings, this optimization is even easier. The hyperscalers themselves are even cooperating, building billing optimization guidance directly into their products. These exercises are creating a large headwind to revenue growth, as cloud bills rapidly get cut. Some of the savings are being applied to new workloads, but these will take some time to ramp up utilization.

While the situation looks dismal in the current quarter, I expect growth rates to level out in the second half of the year and sequential growth to pick back up again. This assumption is based on the observation that workload optimization is generally a one-time exercise and front-loaded. Teams will quickly work through the low hanging fruit, reducing the headwind of smaller bills for existing workloads. Further, new workloads will start to ramp up, increasing cloud spend again. Given how acutely optimization efforts can lower existing spend, it’s actually surprising the hyperscalers are showing any sequential growth (unless new workloads are backfilling the decrease).

The other indicator providing some optimism for the second half of the year comes from the earnings results of the few software infrastructure companies that have reported thus far. Dynatrace, Confluent, ServiceNow and Atlassian provided revenue growth projections further out in 2023 than the hyperscalers. These imply smaller deceleration in first half of the year, and then leveling of growth rates or even slight re-acceleration in the back half.

In addition to the tailing off of optimization efforts, these software service providers may be less impacted than the hyperscalers for a few reasons. In some cases, their pricing models are better protected from downsizing as they are based on host count or service. Enterprises have an easier time reducing a large instance to a small, rather than turning off an application entirely. In the case of observability providers, like Dynatrace, they claim that their service even helps the resource optimization effort.

Additionally, the software providers appear to have been experiencing ongoing spend management efforts, referencing them earlier in 2022 than the hyperscalers. Confluent traced this back to June 2022. Snowflake started discussing optimization even earlier as part of their own efforts working with customers to take advantage of performance improvements. Datadog talked about customer reductions in log retention back in Q2 2022.

These factors may help explain why the software infrastructure providers reporting thus far aren’t projecting revenue growth going to zero in 2023. If we also acknowledge that macro pressure may abate at some point, then easy comps in 2023 may provide the opportunity for re-acceleration of growth again in 2024.

In addition to the dynamics around spend optimization and the start of new workloads, we have to consider the potential impact of AI as a driver of new cloud resource consumption. In response to an analyst question about how to quantify the potential contribution to Azure revenue from AI initiatives, Microsoft’s CEO pointed out that new AI-driven applications and capabilities do more than just increase usage of specialized compute for model creation and inferencing. Those applications will consume other resources, including storage and supporting services around application delivery.

I mean, even the workloads themselves, AI is just going to be a core part of a workload in Azure versus just AI alone. In other words, if you have an application that’s using a bunch of inference, let’s say, it’s also going to have a bunch of storage, and it’s going to have a bunch of other compute beyond GPU inferencing, if you will. I think over time, obviously, I think every app is going to be an AI app. That’s, I think, the best way to think about this transformation.

Microsoft CEO, Q2 Earnings Call, January 2023

This implies that cloud infrastructure like Azure will see an increase in overall resource utilization if AI-infused applications really take off. For investors in those companies that provide supporting infrastructure around the hyperscalers, like observability, data transport and management, security and delivery, we may see a renewed surge in demand. The point is that a new wave of AI-driven innovation won’t just benefit the vendors of the core AI inputs, like chip manufacturers. Any service hosted in the cloud and delivered over the Internet will consume the same software infrastructure resources that contributed to the last wave of Internet innovation, whether Web2, mobile apps or remote work.

We will likely see further softening of revenue growth for the software infrastructure providers that will be reporting their quarterly results over the next 1-2 months. This will be driven by some of the same cost optimization headwinds being experienced by the hyperscalers. However, unless the macro picture takes a significant additional step downward, I expect demand to pick back up in the second half of 2023, providing a real opportunity for re-acceleration of growth going into 2024. As investors, we try to find opportunities to buy stocks 6-12 months ahead of potential inflection points. For those willing to stomach some volatility in the first half of this year, you might be well rewarded in the back half.

NOTE: This article does not represent investment advice and is solely the author’s opinion for managing his own investment portfolio. Readers are expected to perform their own due diligence before making investment decisions. Please see the Disclaimer for more detail.

6 Comments

  1. Robert

    Having such a large allocation to NET (at least from what I remember), how do you think their Q4 is going to look? Will they benefit from increased cybersecurity needs in the current tense geopolitical environment?

    • poffringa

      Hi Robert – Yes, I still have a large allocation to NET. Long term, I think Cloudflare is well positioned and is growing across many product categories. As you point out, they can benefit from a number of tailwinds, including the security threat landscape, growth in edge compute, cost sensitivity for various IT services and the need for fast/reliable network connectivity for application delivery. The near term performance could be volatile, though, as we have macro pressuring IT budgets in general and expectations are high for Cloudflare’s performance, as reflected by its premium valuation. I am hopeful that Cloudflare is able to deliver a strong Q4 with favorable forward guidance. The $5B in revenue goal within 5 years is bold. If they reiterate that projection, then I think the stock will perform well over the next several years.

  2. Michael Orwin

    Thanks for the article and the clear explanations around optimizing cloud usage.

  3. Michael Orwin

    Do Snowflake and Datadog make it easier for a business to switch cloud provider and get the compute and storage they need at the best price?

    • poffringa

      Hi Michael – Snowflake and Datadog are independent from the hyperscalers, as you know. This allows an enterprise to pursue a multi-cloud strategy and still retain the same observability provider and data warehouse. That represents a big advantage for Snowflake and Datadog, relative to similar offerings from any of the hyperscalers. So, to your point, these services from the independent companies do help reduce “lock in” with any one hyperscaler. Granted, moving from one hyperscaler to another would still be an exercise, but having these higher level services on a neutral party makes the migration easier. I would throw MongoDB, Confluent, Elastic into the same pool. Cloudflare is also well-positioned, more as the independent front door for any Internet service and to facilitate data migration between services on the different hyperscalers.

      • Michael Orwin

        Thanks!
        The ability to reduce lock-in seems particularly good now businesses are looking to optimize their cloud spend.