After languishing for most of 2023, software and security infrastructure stocks finished the year with an impressive run. This inflection started with Q3 results as the hyperscalers telegraphed a moderation of the optimization headwinds that had been plaguing the sector since 2022. After spending the prior 12 months wringing out savings from their cloud workloads, enterprise IT teams began reaching the end of their optimization exercises. These were largely catch-up efforts from delayed post-launch tuning during the Covid spending surge, as well as right-sizing of workload resources that had been over-provisioned in expectation of maximum growth.
This curtailing of optimization removes a major headwind to revenue growth for the hyperscalers and the downstream software infrastructure companies. Revenue growth can return to being predominantly driven by positive influences, like the creation of new cloud workloads and expansion of usage for existing ones. Drafting off the hyperscaler trends, software infrastructure companies generally reported better-than-expected Q3 results, sharing similar commentary as the hyperscalers around less pronounced optimization and a recovery towards normal spending patterns. They were quick to point out that they still feel macro pressure – it just isn’t getting worse.
This narrative helped several of the independent software infrastructure providers revisit their 52-week stock price highs coming into 2024. Beneficiaries included SNOW, DDOG, NET, ESTC and MDB, among others. Cybersecurity companies fared even better with CRWD, PANW and ZS surpassing 100% gains for 2023.
The big question for investors is how these trends will project through 2024. With valuations for these companies back at new highs, further gains may be limited, even if the recovery persists. The key determinant will be whether revenue growth re-accelerates for these companies and beats the expectations already baked into 2024 estimates. A few of these companies are still parked at a low valuation (for fair reasons). If they can outperform expectations in 2024, a re-rating of their stock is possible. Those companies provide an even more favorable set-up for investors.
Looking across the spectrum of software infrastructure, I think data management providers are uniquely positioned to benefit from multiple tailwinds in 2024. They will have the combined benefit of continued optimization moderation and a resumption of expanding IT budgets. This should accelerate cloud migration work at the same time that optimization of prior workloads is diminishing, amplifying the incremental contribution of new customer applications and expansion of existing ones.
Additionally, AI provides an incremental source of enterprise demand. Driven by a desire to capitalize on value creation through the application of AI to their proprietary data, enterprises will raise their data management requirements. They will want to harness additional data sources, increase retention, perform more operations and reduce lag. As consumption-based businesses, most data infrastructure companies should benefit from this new sense of urgency.
These factors will all combine to drive higher demand for companies that provide analytics, data storage, processing, distribution and retrieval. Potential beneficiaries within data infrastructure include independent providers like Snowflake (SNOW), MongoDB (MDB) and Confluent (CFLT), among others. In this post, I will explore these themes for software infrastructure companies, stemming from the rapid rise of generative AI as a driver of cost savings and new capabilities for enterprises. As part of this, I will analyze the set-up for these three companies and the potential for stock appreciation in 2024.
Background
As I discussed in a prior post summarizing hyperscaler Q1 2023 results, workload optimization is normally an exercise with a finite timeframe and diminishing returns. Teams will usually prioritize the most impactful changes first. The negative effect of aggressive optimization can be acute for cloud-based workloads with a consumption-based revenue model. Teams can quickly downsize an over-provisioned server resource, adjust data retention settings, cut out unnecessary services and tune long-running queries.
Optimization changes can reduce consumption service charges for a workload by 50% or more in a short amount of time (like days). As revenue accrues to the provider based on resource usage, these drastic reductions in consumption would decrease the revenue generated by a proportional amount. This effect helps explain how revenue growth rates could drop very quickly for software infrastructure providers over the course of 2022 and 2023. While they still had demand growth, a lot of that was offset by these reductions in spend from existing customers as they optimized.
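To see how this plays out mechanically, consider a toy cohort model (the numbers are purely illustrative, not tied to any one provider):

```python
# Toy cohort model: how optimization can mask underlying demand growth.
# All numbers are purely illustrative.
base = 100.0              # last year's consumption revenue from existing customers
new_demand = 0.40         # +40% from new workloads and usage expansion
optimization = -0.30      # -30% from optimizing existing workloads

reported_growth = new_demand + optimization
print(f"Underlying demand growth: {new_demand:.0%}")      # 40%
print(f"Reported revenue growth:  {reported_growth:.0%}") # only 10%
```

In other words, a provider can have healthy underlying demand and still report sharply decelerating growth while the optimization wave works through the base.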
The causes of this pullback were clear in hindsight. In a world of unconstrained budgets during Covid, enterprise IT teams were able to defer the normal post-launch tuning and right-sizing of their new cloud workloads. Additionally, many digital native companies emerged in categories like food delivery, telehealth and employee productivity services. All of these companies started on the cloud and were pressured to grow as rapidly as possible. In a growth at all cost environment, ballooning cloud spend was rationalized by optimistic market size estimates.
These forces all created tailwinds for those companies providing software infrastructure services from 2020-2021. In 2022, a reversal started. Macro conditions worsened as inflation spiked and interest rates rose, not to mention a new war in Europe. Fearing the worst, enterprise IT budgets tightened. Start-up funding slowed to a drip. And those rapidly growing digital natives saw their demand normalize.
This drove IT teams to prioritize activities that reduced the cost of their operations. They reviewed every cloud workload, identifying opportunities for savings. These efforts manifested in a number of ways. For application workloads, teams downsized server instances to match utilization, extended commitments for lower rates, reduced logging and disabled optional add-on services. On the database management side, they reset cluster resources, shortened data retention, lowered the frequency of recurring jobs, tuned inefficient queries and eliminated data sources.
At the beginning of this cycle, IT teams would generally address those optimization tasks which they expected to generate the most return. Effectively, this front-loaded the savings. For providers with consumption businesses, this front-loaded the impact on revenue. Considering that enterprises could quickly cut the monthly charge for hosting a cloud-based workload in half through extreme measures, it’s a wonder the hyperscalers and software infrastructure companies showed any revenue growth at all.
Of course, not every enterprise started their optimization exercises at the same time, so the effect was staggered. Also, while they would generally front-load the low hanging fruit, some optimization projects required more work to execute, like consolidating two workloads onto a single database or refactoring the code to use less resources. These types of projects might require a couple of quarters to complete. I would estimate that most enterprises could finish the bulk of their “catch up” optimization work within 12 months.
Given that the optimization cycle appeared to start in mid 2022, it would be expected to run through 2023. A 12 month period and the assumption that most optimization impact would be front-loaded aligns with the anecdotal commentary from the infrastructure providers on their earnings calls. In Q2 2023, Microsoft’s CFO marked the beginning of the moderation by commenting that “optimization can’t last forever.” By Q3, AWS’s leadership noted they were seeing optimization headwinds stabilize and even mitigate a little.
We will get Q4 results shortly. I suspect that optimization will have an even less pronounced effect this quarter. There will likely still be some optimization impact as the final workloads from the Covid over-provisioning are cleaned up. Going forward, there will always be some level of normal workload optimization. It will just not be clumped together in a single large “catch-up” exercise.
This has the effect of eliminating the large negative headwind to revenue growth that the optimization catch-up period created. Revenue will return to being driven by new cloud workloads and usage expansion for existing ones. Given that we are further along in the overall cloud migration process than when 2020 started, the steady state of cloud infrastructure consumption growth will likely be lower (law of large numbers). But, it won’t be wiped out by all the negative usage resets from a wave of optimization.
As optimization impact moderates and growth returns to normal enterprise expansion driven by cloud migration and digital transformation, two factors will provide further upside. First, generative AI is prompting the creation of new digital services from enterprises. These have been largely in the experimentation phase up to now, but will start moving to production soon. Most AI-enhanced digital services will be delivered over the Internet. These will represent new application and data workloads that consume similar resources as standard applications, like observability, security, delivery and data storage.
AI is just going to be a core part of a workload in Azure versus just AI alone. If you have an application that’s using a bunch of inference, it’s also going to have a bunch of storage, and it’s going to have a bunch of other compute beyond GPU inferencing. I think over time, every app is going to be an AI app.
Microsoft Q4 FY2023 Earnings Call
Second, generative AI is launching a number of new coding co-pilots that help developers become more productive. As developer productivity increases, they will naturally address the enterprise’s backlog of digital transformation projects more quickly. I think this will result in a pull-forward of resource consumption for application workload hosting. As projects are completed rapidly, we should see an increase in application workloads moving into the production environment. This pull-forward might last for a year or so, before the next IT budget cycle allows enterprises to adjust the number of developers needed to maintain steady state.
Higher developer productivity should reduce the number of developers needed on IT teams at most enterprises. We might not see lay-offs, but rather a slowdown in the rate of new hires. As less budget is needed for developer salaries, funding for application workload hosting can become an increasing portion of the IT budget. This means budget will shift from paying developers to paying software infrastructure providers and cybersecurity companies.
Finally, if interest rates eventually come down, we should see an increase in funding for start-ups from VCs. While AI companies have been the beneficiaries over the past year, I think this will expand outward to a broader range of software-enabled businesses. AI techniques will be applied to mainstream industries, enabling the creation of brand-new services. These will still be digital in nature, likely consuming similar services from software infrastructure companies to deliver the actual user experience to end customers. The recovery in VC funding may take longer to materialize than the other factors, but should start to provide a tailwind to IT spending later in 2024 and going into 2025.
Individual Companies
As this is an investing blog, let’s examine how these trends may benefit specific companies in the software infrastructure space. The most relevant in my coverage area are Snowflake (SNOW), MongoDB (MDB) and Confluent (CFLT). There are likely other beneficiaries, which investors can consider on their own. For each of these companies, I will discuss how their product offering aligns against the demand environment for 2024 and the emerging opportunity to piggy-back on the growth of AI services.
We can also look at analyst expectations for 2024 and whether those appear achievable. For all software companies, an important upcoming input will be the preliminary view of 2024 revenue estimates. This information has the potential to move the stock significantly, particularly if the initial estimate is much lower (or higher) than analyst expectations.
Snowflake
Snowflake was one of the first independent software infrastructure providers to talk about optimization effects in early 2022. This was harder to identify as a new trend at that point, because Snowflake traditionally plans on some level of optimization as their underlying hyperscaler infrastructure becomes more efficient. Additionally, the Holiday season can temper usage, as employees are on vacation and are not submitting ad hoc queries. At least, that was the explanation coming out of Q4 FY2022 (January 2022 end).
As 2022 progressed, it became evident that customers were indeed scaling back their Snowflake consumption. Annual revenue growth declined from well over 100% in 2021 to 32% in Q3 FY2024, which ended in October 2023. Over this period, the annual revenue growth rate decreased by 5-10 percentage points (or more) with each quarterly report. This was increasingly attributed to customers cutting back on consumption to reduce costs. They used multiple techniques to accomplish this, including importing less data, running queries at a lower frequency and limiting retention periods. These all had the impact of consuming fewer Snowflake credits than expected and lowering quarterly spend.
What was notable in the most recent Q3 FY2024 report on 11/29/2023 was the flattening of the annual growth rate and even acceleration on a sequential basis. This appeared to mark the bottoming of the long slide in revenue growth rates since 2021. Specifically, Q3 product revenue was $698.5M, up 34% y/y and 9.1% sequentially. This compares to Q2 results of $640.2M for 37% annual growth and 8.5% sequential growth. While the Q3 annual growth rate was about 3 percentage points lower than Q2’s (a smaller step-down than in prior quarters), the sequential growth rate ticked up. Further, if we annualize the 9.1% sequential growth rate, we get 41.7% annual growth. This is higher than the 34% rate just delivered, implying that annual revenue growth could re-accelerate towards 40%.
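For readers who want to check the annualization math, it is simple compounding of the sequential rate over four quarters:

```python
# Compound the quarterly sequential growth rate over four quarters
# to get the implied annualized rate.
q2_product_rev = 640.2   # $M, Q2 FY2024
q3_product_rev = 698.5   # $M, Q3 FY2024

sequential = q3_product_rev / q2_product_rev - 1   # ~9.1% q/q
annualized = (1 + sequential) ** 4 - 1             # compounds to ~41.7%

print(f"Sequential growth: {sequential:.1%}")        # 9.1%
print(f"Implied annualized rate: {annualized:.1%}")  # 41.7%
```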
That re-acceleration would represent a welcome inflection. Other growth metrics reflected a similar trend. After adding a low of 28 customers over $1M in product revenue during Q2, this count jumped back up to 35 in Q3. Looking towards Q4, the company projected product revenue of $716M – $721M, representing 29%-30% annual growth. Analysts were expecting a lower value, reflecting about 25%-26% annual growth. Given that the Q3 beat was several percentage points above the company’s prior guidance, the actual annual growth rate for Q4 could reach 35%, representing a slight acceleration.
After the results were announced, SNOW stock popped by 7% the next day. In the subsequent weeks, it revisited its 52-week highs, breaking $200 again, before receding in the first few days of 2024. Looking at all of 2023, SNOW finished the year up 38.6%.
While this momentum appears positive, the big test for SNOW stock will come in early March, when the company reports Q4 results. As part of that, they will issue their guidance for the current fiscal year. Analysts have modeled an expected revenue growth rate of 30.2% for FY2025 (calendar year 2024). This is below the 35.2% annual growth expectation to wrap up FY2024.
As Snowflake management issues the preliminary guide for FY2025, the initial estimate may be lower than this target. The market would likely react negatively to a perceived “miss” of the full year revenue estimate. This is in spite of the fact that the actual growth delivered by year end could be higher. This temporary shortfall may present an opportunity for investors.
Product Tailwinds
As I discussed, investors can expect some benefit to Snowflake just from the moderation of excessive optimization and stabilization of customer consumption. Many of Snowflake’s large customers are picking up their data warehouse migrations again and expanding usage of existing workloads. They have also worked through their delayed consumption tuning exercises – refactoring expensive data queries, eliminating unneeded data retention and generally improving their utilization. The expected effect is a reduction of the big negative drag on growth and resumption of normal resource consumption increases.
Beyond these broader industry factors, Snowflake will benefit from the contribution of a number of emerging product offerings. These generally represent brand new revenue streams that would be incremental to the ongoing consumption recovery of their existing platform offerings. As an example, Snowpark is already estimated to have hit a $70M revenue run rate and is growing rapidly.
Snowflake management claims they have more new products coming to market in 2024 than any year prior. These include the following highlights:
- Snowpark (GA now). On the Q3 earnings call, management called out Snowpark consumption growth of 47% quarter over quarter. Consumption in October was up over 500% compared to the prior year. Management also shared that Snowpark has a revenue run rate of about $70M. Compared to the expected full year product revenue total of $2.65B, this would already represent about 2.6%. Further, they shared that over 35% of Snowflake customers are using Snowpark on a weekly basis (as of September 2023). For a sense of what Snowpark code looks like, see the short sketch after this list.
- Container Services. Going beyond running code within the Snowpark runtime, Container Services allows customers to drop their entire application and configuration into Snowflake’s environment by porting the container itself. This is an interesting move by Snowflake, which will ultimately make migrations of data processing applications easier, as no rewrite is needed. For AI-driven processes, customers can port their LLM of choice as well and perform fine-tuning on their data directly within Snowflake. The customer benefit is that the application runs next to the data and within the security controls of the Snowflake instance. On the earnings call, management shared that more than 70 customers are already using Container Services in preview mode, with “many more waiting in line.”
- Unistore (Private preview). Unistore represents Snowflake’s offering to support operational database use cases. This is based on Hybrid Tables, which enable single-row database transactions. Database operations can be read and write. The reads are relatively fast – targeting a 10ms response time. This performance would be appropriate for data applications that can tolerate a little latency. Snowflake management claims they already have 10 customers using Unistore in production environments. The product should be transitioning into public preview soon.
- Streamlit (GA). Streamlit is an open-source library that turns Python scripts into shareable web apps in minutes. The big benefit for data analysts is that no front-end development experience is necessary, providing design options through a WYSIWYG interface. This product was based on Snowflake’s acquisition of the company Streamlit, announced in early 2022. By that point, Streamlit had become the leading framework for Python-based data application development. It enjoyed an 80% adoption rate within the Fortune 50 and had hundreds of thousands of developers. Snowflake brought the framework into their development environment, allowing Snowflake users to produce simple data-driven apps directly from their data with the Python scripts they had already created to retrieve the data. Streamlit is being used by many Snowflake customers to quickly produce sophisticated dashboards on top of their Snowflake data, much like they had done in the past with commercial tools like Tableau. The benefit to Snowflake is that they can capture some share of the dashboard market and generate more underlying utilization of Snowflake to render the data.
- Streaming Dynamic Tables (Public preview). Snowflake has also been supporting streaming data ingestion for several years through Snowpipe Streaming. This supports standard ingestion frameworks and integrates with popular streaming sources. The Snowflake team has been focused on reducing the latency of the streaming load. At this point, data landed in Snowflake can be accessed within a few seconds, versus minutes previously. Snowpipe Streaming is supported by Dynamic Tables, which allow users to perform data transformations as data is being piped into Snowflake. Data can be joined and aggregated across multiple sources into one Dynamic Table. As those data sources update, the Dynamic Table will refresh to reflect the latest results. Dynamic Tables entered public preview early in 2023. Approximately 1,500 customers are using the feature and initial adoption is outpacing expectations.
- Iceberg Tables (Private preview). Where customers need to manage data outside of the Snowflake platform, they can leverage Iceberg Tables as Snowflake’s universal open format. At their Summit user conference, Snowflake announced the extension of this open format to imbue governance concepts into the management of the data. Customers will be able to designate whether their Iceberg Tables inherit governance from Snowflake (managed) or allow another engine to handle access controls (unmanaged). Native and External Tables are being consolidated as part of this. Where Snowflake needs to connect to external data sources, they make that possible through Iceberg Tables, with the bias towards bringing that data onto the platform eventually. All of Snowflake’s supporting services around deep governance, collaboration and fast processing just work better this way.
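For a sense of why Snowpark drives consumption, here is a minimal sketch using the Snowpark Python API. The connection parameters, table and column names are placeholders of my own; the point is that the transformations execute inside Snowflake’s engine and burn credits just like SQL.

```python
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

# Connection parameters are placeholders; in practice these would come
# from a config file or secrets manager.
session = Session.builder.configs({
    "account": "<account>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "<database>",
    "schema": "<schema>",
}).create()

# A Snowpark DataFrame is a lazily evaluated query plan; nothing runs
# until an action like show() or collect() is called.
orders = session.table("ORDERS")                     # hypothetical table
revenue_by_region = (
    orders.filter(col("ORDER_DATE") >= "2023-01-01")
          .group_by("REGION")
          .agg(sum_(col("AMOUNT")).alias("TOTAL_REVENUE"))
)

# Execution happens inside Snowflake's compute, consuming credits --
# which is why Snowpark adoption shows up directly in product revenue.
revenue_by_region.show()
```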
These capabilities were all discussed during an Investor Day event at Snowflake’s annual user conference, Snowflake Summit, in June 2023. More recently, during Snowday in November, the Snowflake team announced Snowflake Cortex, a fully managed service that provides access to industry LLMs, homegrown AI models and vector search functionality. This allows customers to easily build custom LLM-powered apps within the Snowflake environment, using Serverless Functions powered by Snowflake Cortex. By running AI models against their own data within the Snowflake environment, customers can maintain flexibility and control over their data, without having to worry about exporting it to another system or having sensitive data exposed in another provider’s AI model.
With Snowflake Cortex, Snowflake users have access to a set of serverless functions that can accelerate typical tasks for analytics and AI app development. With little code, analysts can access specialized ML and LLM models tuned for specific tasks. They can also leverage more general purpose models for prompt engineering and in-context learning. Since these are fully hosted and managed by Snowflake Cortex, users always have access to them directly, obviating the need to set up their own infrastructure. They can also rely on Snowflake’s unified governance framework to seamlessly secure and manage access to their data.
As part of this, Snowflake is providing access to a wide array of LLM-based models for customers to use. These include models for working with unstructured data (sentiment detection, summarization, translation), machine learning models (classification, forecasting, anomaly detection) and general purpose use cases like completion and SQL generation. This also includes Document AI, which was announced at Summit and supports content extraction from proprietary document formats, like PDFs and screenshots. These provide a nice foundation for basic models, which can be supplemented by third-party LLMs.
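Based on Snowflake’s Snowday materials, these Cortex functions are exposed directly in SQL, so an analyst could call them from a worksheet or a Snowpark session. A hypothetical sketch (the table name is made up, and exact signatures may shift while the service is in preview):

```python
# Hypothetical sketch: calling Cortex task-specific functions from a
# Snowpark session (reusing the session from the earlier example).
# Function names follow Snowflake's announcements; signatures may
# change while Cortex is in preview.
reviews = session.sql("""
    SELECT
        review_text,
        SNOWFLAKE.CORTEX.SENTIMENT(review_text) AS sentiment_score,
        SNOWFLAKE.CORTEX.SUMMARIZE(review_text) AS summary
    FROM product_reviews   -- made-up table name
    LIMIT 100
""")
reviews.show()
```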
Competitive Focus and Market Opportunity
Comparing Snowflake to the broader competitive landscape, they have the following strategic points of differentiation. Competing solutions offer similar capabilities, but Snowflake generally has a more complete offering and focused on these areas earlier in its product development cycle.
- Governance. One of the big selling points for storing all of an enterprise’s permanent data in Snowflake is the ability to apply a common governance model over it all. Snowflake administrators can grant users access to enterprise data at a very granular level. Access can be easily revoked, and because no copies are made, there is no need to ask partners to destroy data. Partners can be invited to collaborate on data sets in secure clean rooms. As AI workloads layer on top of sensitive enterprise data, governance controls can ensure that sensitive data isn’t incorporated into public models.
- Data Collaboration. Enabling secure data sharing between companies, with granular governance and frictionless management layered in. This effort was started in 2018 and has been the catalyst for Snowflake’s growth in data sharing and enabling industry ecosystems for customers. By providing customers with seamless mechanisms to distribute data securely to their industry partners, Snowflake is building strong network effects. These can’t be easily duplicated by competitors who are either on a single cloud (hyperscalers) or offer rudimentary solutions to data sharing that still create copies (Databricks). Strictly governed data sharing without copies will be even more critical as enterprises seek to enhance LLMs and foundation models with their proprietary data.
- Bring Apps to the Data. Allow developers to build applications directly over a customer’s data set within the Snowflake environment. This represents the next driver of Snowflake’s growth. The rationale is simple. It is more expensive and less secure for an enterprise application provider to maintain their own copy of an enterprise’s data in their cloud hosting environment. Rather than trusting your CRM (or other SaaS application variant) provider to keep your data secure and pass on any cost efficiencies, why not allow CRM app developers to host their solutions within Snowflake’s environment on top of your enterprise data housed in one location? This is the crux of Snowflake’s strategy to “disrupt” application development. While early, the value proposition makes sense.
Of these capabilities, I think that data sharing provides the greatest competitive advantage. That is because it promotes network effects. Not only is Snowflake’s implementation of data sharing better than those of competitors (no copies of data made), but each new connection brings another company into the Snowflake ecosystem. Snowflake’s data sharing makes data distribution within industry segments so secure and seamless that distributors often encourage new data consumers to simply create a Snowflake account to receive data.
Two companies can exchange data without requiring complex APIs or rudimentary file transfer processes. More importantly, the scope of the data can be limited to just what is needed with a fixed duration. The recipient can’t “keep” a copy of the data after the partnership ends. The same benefit applies to customer data for a CRM app, employee data for HRM and every other SaaS enterprise app derivative.
While data sharing is often ignored by analysts and investors, it continues to surface as one of Snowflake’s stickiest features. In Q3, 28% of all Snowflake customers maintained active data sharing relationships, up from 22% a year ago. Among large $1M+ customers, 73% share data, which is up from 67% a year ago. The fact that large customers have such a high take rate on data sharing emphasizes the significance of this capability.
With a renewed focus on maintaining control over an enterprise’s proprietary data as an input to AI model training/inference, strong governance of data is even more important. Snowflake has built granular controls into their data sharing methodology. Most importantly, data is not shared by making a copy, unlike some competitive solutions. Access to data for any partner can be immediately revoked without having to request that the partner “delete their copies”.
Snowflake management cites data sharing as a strong source of new customer engagements. As network participants build data processing into their ecosystem of industry distribution, their investment in Snowflake’s platform grows, increasing the switching costs. These create strong incentives to utilize Snowflake, even if a competing platform has a better capability in a particular function of data analytics or machine learning.
Snowflake management estimates that their addressable market for existing workload solutions is about $248B. This includes use cases around analytics, data engineering, collaboration, machine learning, data science and supporting cybersecurity. Snowflake’s long term aspiration is to allow enterprises to store the vast majority of their data of record on the Snowflake Data Cloud. While individual applications will still have their own transactional databases, Snowflake sees the opportunity for all of that data to eventually move into the Data Cloud for permanent storage, analysis, model training and distribution.
This aspirational addressable market is likely much larger than the $248B targeted by existing solutions, essentially encompassing the value of all enterprise data management (maybe $1T+?). I know that many investors and analysts worry about competitive offerings. However, I think the market for this vision of data management would support multiple players.
While Snowflake (and their competitors) would love for enterprises to consolidate all their data onto the Data Cloud, I don’t think this is a realistic outcome. Similar to the case where we have multiple hyperscalers, many enterprise customers choose to utilize more than one provider. As Databricks is often raised as a threat to Snowflake, the reality is that many enterprises use both Snowflake and Databricks solutions. Each vendor has strengths in specific product areas that make a full consolidation unlikely. A large enterprise CIO or CTO would prefer to hedge their bets and keep more than one vendor (I know I would). Additionally, in the largest enterprises, different departments might have their vendor of choice. The data science or engineering teams may prefer the versatility of Databricks, while the business analysts and lines of business would like the pre-packaged ease of use found in Snowflake.
On the hyperscaler front, Snowflake’s partnerships are improving. Many Snowflake bears have raised the competitive threat from hyperscaler products, like Microsoft Fabric and Azure Synapse or even Google BigQuery and Amazon Redshift. Yet, Snowflake is still growing. Over the last couple of years, Snowflake’s co-selling relationships with the hyperscalers have even been improving.
As an example, Snowflake has traditionally generated the majority of its revenue from AWS, with just a little from Microsoft Azure and none from Google Cloud Platform. Because of this, many analysts concluded that Snowflake’s relationship with Azure was antagonistic. In June 2023, however, Snowflake and Microsoft expanded their operating partnership. Beyond product integrations to support generative AI, the two companies agreed to focus on their business relationship. Snowflake will increase its Azure spend commitment and Microsoft committed to jointly support go-to-market efforts.
The key part of the agreement in my opinion is the joint sales efforts. This has been a significant component of Snowflake’s success with AWS and I am happy to see it extended to Microsoft. The benefit for AWS has always been that they still make money on compute, storage and networking when customers choose Snowflake’s platform on AWS (as opposed to Microsoft or GCP). While Microsoft would prefer to win the customer’s entire business, I suspect that like AWS, they realize getting some revenue from Snowflake is better than none at all. Forcing a customer into an analytics solution that they don’t like could create an unnecessary competitive loss.
On the Q3 earnings call, Snowflake management shared the relative percentages of their revenue contribution from each of the hyperscalers. This was a new revelation, as prior disclosures were summarized as “large” and “small”. Specifically, AWS constitutes 76% of Snowflake’s business (we assume revenue), with 21% attributed to Azure and the remaining 3% for GCP.
AWS, by far, is our biggest, followed by Azure, and then GCP. GCP is up to 3% right now. Microsoft Azure is the fastest-growing one, but AWS is still 76% of our business with Microsoft being 21%.
As I said, GCP is 3%. I will tell you one of the reasons why GCP is not as big is it’s just so much more expensive for our customers to operate in GCP than it is in AWS and Azure. And as a result, our salespeople are really not inclined to do much in GCP.
Snowflake Q3 FY2024 Earnings Call
Interestingly, the CFO shared that Microsoft is the fastest growing partner at this point, which is likely being driven by the expanded partnership agreement. I was surprised that Azure was at 21% – this is larger than I expected and apparently growing the fastest. This undermines the perspective that Microsoft is taking share from Snowflake – it sounds like the opposite.
Also, the explanation regarding the cost of Snowflake on GCP is insightful as the driver for the small relative percentage. There is still likely some sort of competitive issue. It’s notable that Google doesn’t have a similar co-selling agreement in place. I actually think this closed posture will hurt GCP in the long run. They already dropped below Azure in annual growth rate in the Q3 earnings reports (22.5% for GCP versus 28% for Azure). I expect that trend will continue.
Investment Opportunity
Snowflake currently sells at a P/S ratio of about 25. This is high for revenue growth in the low 30% range. A target of 27% FCF margin for the full year provides some support for the valuation, but on an earnings basis, Snowflake clocks in at a forward Non-GAAP P/E ratio of 258. If we assume that annual revenue growth gets to 35% in calendar year 2024, then they reach $3.77B for a forward P/S of 17.7.
The long term target for adjusted FCF margin is 30%, which implies Snowflake could generate about $1B in FCF for FY2025 (CY2024). That would bring the Price/FCF ratio down to about 67 from about 105 now. Hopefully, more profit falls to the bottom line this year. Analysts currently have modeled $1.13 in Non-GAAP EPS for FY2025 (starts Feb 2024). That would bring the P/E ratio down to 180, which is still pretty high.
If Snowflake hits all of these targets in the current fiscal year (FY2025) and the macro environment doesn’t deteriorate (and with lower interest rates), SNOW should be able to maintain roughly the same valuation metrics. Even with the rapid deceleration in revenue growth over the past year, SNOW has maintained a P/S ratio in the low 20’s. This is grounded in the perceived durability of their growth, which hinges on an FY2029 product revenue target of $10B.
If Snowflake delivers $3.77B in FY2025 and they maintain their P/S ratio of 24, it implies the market cap could increase from $67B to about $90B in a year, driving the stock up about 34% from here or a price target of $270. I acknowledge that is an optimistic view. In SNOW’s case, it is still below the all-time-high price of around $400 hit in November 2021. The primary risk to this target would be further deceleration in the revenue growth rate below 30%.
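For transparency, here is the scenario arithmetic behind that target. All inputs are the figures cited above; this is a what-if, not a forecast.

```python
# Scenario arithmetic behind the SNOW price target above.
market_cap_now = 67.0    # $B, approximate current market cap
fy2025_revenue = 3.77    # $B, assumes ~35% revenue growth in CY2024
ps_multiple = 24         # P/S ratio SNOW has recently commanded

implied_cap = fy2025_revenue * ps_multiple   # ~$90B
upside = implied_cap / market_cap_now - 1    # ~34-35%, in line with a ~$270 target

print(f"Implied market cap: ${implied_cap:.0f}B")  # $90B
print(f"Implied upside: {upside:.0%}")             # ~35%
```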
MongoDB
MongoDB experienced a more pronounced recovery in 2023 than Snowflake. MDB started the year under $200 a share and more than doubled as the year progressed, topping out above $440 before dropping back to the $380 range after Q3 FY2024 earnings on December 5th. While I thought the Q3 report was okay, the market pushed MDB stock down about 12% the following day. The stock had enjoyed a strong run-up in advance of the report, though, so an in-line result was bound to be sold off.
The challenge for MongoDB currently is the sequential growth rate of its revenue, which is affected by continued pressure on its cloud business (due to lingering optimization) and the volatility associated with revenue recognition on its packaged software product Enterprise Advanced. This is causing large beats one quarter, followed by a weak guide for the next. As MongoDB is still transitioning its product mix from software licenses (and ASC 606 revenue recognition) to pure consumption, total revenue on a quarterly basis can jump around, driven by the closure of any large Enterprise Advanced deals.
Financial Results
For Q3, MongoDB reported $432.9M in total revenue, which was up 2.1% sequentially and about 30% annually. This beat the company’s prior guidance for a range of $400M-$404M (which was down $20M from Q2) by $30.9M. That represented a huge revenue beat and without forward guidance would have likely driven the stock price up. But, management guided for revenue of $429M-$433M in Q4, which would be down slightly at the midpoint from Q3. If this is the actual revenue result, annual growth would drop to 19.3%, and the flat sequential trajectory would point to further deceleration.
Under the covers, MongoDB’s cloud-based offering, Atlas, grew faster than overall revenue at 36% y/y. It made up 66% of total revenue in Q3, up from 63% in Q2 and 63% a year ago. Atlas revenue growth in Q3 decelerated slightly from 38% in Q2. On a sequential basis, it was up 7.0%, which was a decrease from the 11.5% sequential jump in Q2.
Similar to Snowflake, MongoDB’s revenue growth has been decreasing since early 2022. The decline in Atlas revenue growth has been more pronounced, falling from 85% in January of 2022 (Q4 FY2022) to 36% nearly two years later. The rate of decline was precipitous over calendar year 2022 and early 2023. The last three quarters have seen that drop moderate, easing from 40% in Q1 FY2024 to 38% in Q2 and most recently 36% in Q3.
Similar to Snowflake, we may be approaching the trough of MongoDB Atlas annual growth deceleration. If we assume MongoDB beats the top end of their revenue estimate for Q4 by a similar amount as in Q3, they might actually deliver $460M-$465M in total revenue. If Atlas makes a similar contribution to total revenue of 66%, then Atlas’s annual revenue growth will hover in the mid-30% range.
The other variable is MongoDB’s software license revenue from their Enterprise Advanced product. This represents a traditional software distribution, which is hosted by the customer, not MongoDB’s cloud. As a software package, MongoDB has to recognize the revenue contribution according to ASC 606, which causes lumpiness due to the front-loading of license revenue.
For a typical EA deal, 25% of the total contract value has to be recognized upfront, with the remainder amortized monthly for the full term. For a full year contract, this means that 44% of the total contract value is recognized as revenue in the first quarter it is active. For a 3 year contract, 31% is recognized in the first quarter. This is opposed to a consumption model, where costs would roughly accrue each month at a fairly linear rate.
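A quick sketch of that arithmetic, as a simplified model of the 606 treatment described above:

```python
# Simplified ASC 606 model for an Enterprise Advanced deal:
# 25% of total contract value is recognized upfront as the license
# component, and the remaining 75% is amortized monthly over the term.
def first_quarter_pct(term_months: int) -> float:
    upfront = 0.25
    monthly = 0.75 / term_months
    return upfront + monthly * 3   # upfront plus three months of amortization

print(f"1-year deal: {first_quarter_pct(12):.0%}")  # ~44% of TCV in quarter one
print(f"3-year deal: {first_quarter_pct(36):.0%}")  # ~31% of TCV in quarter one
```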
While we would expect the percent of revenue that EA contributes to continue declining over time as more enterprises move to the cloud, it will not be zero. Because of the nature of their business, some enterprises will choose to self-host indefinitely. This will always add some variability to MongoDB’s revenue guidance and cause management to skew conservatively. Over time, this impact will become less and less, with Atlas consumption driving more revenue.
The other challenge for MongoDB’s Atlas revenue revolves around its consumption model. Because MongoDB’s product usage is so closely tied to the utilization of the customer’s application, changes in the customer business environment can impact MongoDB’s revenue generation. This is why management has been calling out seasonal aspects to Atlas revenue generation in recent quarters. These mirror some of the patterns that Snowflake experiences, like a dip in utilization over the end of year Holiday period because fewer employees are working. In MongoDB’s case, they attribute the seasonal dip to the general consumer being on vacation and offline (whether Holidays or the summer period).
Atlas revenue grows based on three primary variables:
- The number of customers using it. This is measured by customer count.
- Penetration within each customer. This is driven by the number of application workloads within each customer and the percentage of those with MongoDB as the back-end database. Growth in share of database spend isn’t measured directly, but would be reflected by growth of $100k ARR customers and NRR.
- Utilization of those customer applications backed by MongoDB. This utilization is determined by the customer’s underlying business. If their customer facing applications are in high demand, then Atlas utilization would be larger. If the customer’s business decreases, then less Atlas utilization would occur.
The third variable is outside of MongoDB’s control, and is based on the success of the customer’s business. Coming out of Covid, a number of digital natives like Coinbase and Instacart experienced a rapid decline in their business. This had the knock-on effect of contracting their consumption of MongoDB Atlas very quickly. It’s one thing to experience spending pressure due to customers optimizing their cloud infrastructure utilization. It’s worse if that resource optimization is exacerbated by a sudden drop in business activity for those digital-native customers.
MongoDB’s revenue growth is affected by optimization efforts plus the additional impact of any drop in the customer’s core business. Other software and security infrastructure providers are less impacted by changes in traffic patterns on their customers’ applications. Investors need to keep this volatility in mind as they evaluate MongoDB’s revenue growth expectations. This was a headwind in 2022 and 2023, as higher interest rates slowed business activity. Looking forward to 2024-2025, we could see expansion of business activity (and VC investment) again, which may turn this business growth effect into a tailwind.
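A toy model makes the interaction of these three drivers concrete (all inputs are illustrative, not MongoDB disclosures):

```python
# Toy model: Atlas revenue as the product of the three drivers above.
# All inputs are illustrative, not MongoDB disclosures.
customers = 1_000             # customers on Atlas
workloads_per_customer = 3    # MongoDB-backed apps per customer
spend_per_workload = 50_000   # annual consumption per workload, driven
                              # by the customer's own business activity

revenue = customers * workloads_per_customer * spend_per_workload
print(f"Revenue: ${revenue / 1e6:.0f}M")   # $150M

# 10% growth in each driver compounds to ~33% revenue growth -- and a
# drop in end-customer activity cuts into growth the same way.
print(f"Compound growth: {1.10 ** 3 - 1:.0%}")   # ~33%
```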
Moving to profitability, MongoDB has been making noticeable progress. Like other software infrastructure companies, they took feedback about profitability to heart coming out of the Covid period of stretched valuations and negative operating margins. In Q3, MongoDB almost doubled their estimate for Non-GAAP operating income of $41M-$44M by delivering $78.5M. This represents an operating margin of 18.1%. More impressively, operating income almost quadrupled from $19.8M a year ago.
This drove Non-GAAP net income of $79.1M, or $0.96 per share. Analysts were looking for $0.50, after the company guided to a range of $0.47 – $0.50. This means that MongoDB effectively doubled their estimated net income per share. A year ago, non-GAAP net income was $18.7M, or $0.23 per share. If we annualize the Q3 net income, then the PEG ratio yields a relatively low 0.3. That level of growth in net income isn’t sustainable, but helps emphasize how quickly MongoDB is shifting to profitability.
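The PEG arithmetic, for those following along (the share price is approximate, and annualizing a single quarter is admittedly crude):

```python
# Rough PEG calculation from Q3 FY2024 figures.
price = 380.0           # approximate MDB share price after the Q3 sell-off
q3_eps = 0.96           # Non-GAAP EPS, Q3 FY2024
q3_eps_prior = 0.23     # Non-GAAP EPS, Q3 FY2023

pe = price / (q3_eps * 4)                            # ~99 on annualized Q3 EPS
eps_growth_pct = (q3_eps / q3_eps_prior - 1) * 100   # ~317% y/y

peg = pe / eps_growth_pct                            # ~0.3
print(f"P/E: {pe:.0f}, EPS growth: {eps_growth_pct:.0f}%, PEG: {peg:.2f}")
```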
MongoDB leadership projected a full year FY2024 (calendar year 2023) Non-GAAP EPS in a range of $2.89 – $2.91. At the midpoint, this yields a P/E ratio of 142. For the next calendar year in 2024 (FY2025), analysts have projected $3.23 in Non-GAAP EPS. That would bring the forward P/E ratio to 128. Given MongoDB’s outperformance on earnings the last few quarters, I wouldn’t be surprised if MongoDB delivers $4.00 in calendar 2024.
This would bring the PE down to about 100. This is still high, relative to more mature software companies (Microsoft is at 38). But, investors can now have a conversation about P/E ratios, which wasn’t even possible in the past. Some software infrastructure peers are even getting P/E ratios close to a normal range. For 2023, Datadog is projected to deliver $1.53 of Non-GAAP EPS for a P/E ratio of 85. This will probably drop to 70 in 2024.
Even on a GAAP basis, MongoDB is demonstrating nice progress, reducing operating loss from $82.9M a year ago to $45.2M this Q3. The GAAP operating margin improved from -24.8% to -10.4%. Net loss per share was $0.41, versus $1.23 a year ago. These improvements are being driven by better gross margins (77% versus 74% a year ago), operating cost management and savings through volume discounts. Looking forward, these trends will likely continue.
Shifting to customer activity, additions were mixed in Q3, but generally positive. MongoDB’s total customers increased by 1,400 in Q3, which is the smallest increase in two years. However, this count was impacted by a one-time clean-up of 350 customers. These represented cases where duplicates existed, like two customer accounts that were part of a single large company. Also, the team decided to adjust the threshold for being included as a paying customer. Some customers were spending minimally and dropped out of the paying customer count. Adding back in this adjustment, the quarterly increase would have been 1,750, which is still on the low side of additions over the past year, but better.
With 46,400 total customers, though, I think MongoDB has plenty of room to grow spend from each customer. As it turns out, this is where MongoDB excelled in Q3. They added a record 117 customers with annualized revenue over $100k in the quarter, a nice jump from the dip to 94 in Q2 and the most additions ever.
This supported management’s comment that the net ARR expansion rate remained over 120% in the quarter. MongoDB publishes another metric, which measures the percentage of subscription revenue contributed by their Direct Sales customers. This was 88% in Q3, which has been consistent for the past four quarters and is up from 85% two years ago. Direct Sales customers make up 6,900 of total customers or about 14.9% of total.
MongoDB defines these as “customers that were sold through our direct sales force and channel partners.” Effectively, these customers have a relationship with MongoDB through the GTM team. The remaining customers would be self-serve Atlas users. Given that the Direct Sales segment contributes the vast majority of revenue, it is a critical component. These customers would also be the primary drivers of the 120%+ net ARR expansion rate. As MongoDB’s projected annual revenue growth drops into the 20% range, a net expansion rate of 120% implies existing customers alone are adding roughly 20 points of growth, effectively providing a floor under overall revenue growth.
Product Strategy and Updates
This ability to garner a larger share of revenue from their biggest customers is a major aspect of MongoDB’s product strategy. This theme was emphasized during the Investor Event in June 2023. The goal is to gain a foothold in a customer and then incrementally add more workloads over time. Workloads represent software applications (whether Internet based or internal) that use MongoDB as their backing data store.
During the Investor Session, MongoDB leadership reported the following stats for large customer penetration:
- 64 of Fortune 100 (64% of total)
- 192 of Fortune 500 (38% of total)
- 457 of Global 2000 (23% of total)
The leadership team then compared the current MongoDB revenue from their Fortune 100 and Fortune 500 customer segments to industry analyst estimates of total database spend for those cohorts. MongoDB makes up roughly 2% of the total database budget currently (per IDC estimates of Fortune 100/500 data management spend).
As data management can include a lot of functions outside of MongoDB’s core offering in application databases, I don’t think 100% of this spend is a reasonable target. But 10% would be, representing roughly a 5x increase from here.
To accomplish this, the MongoDB product team has been expanding the types of workloads that MongoDB could support. This started with the company’s founding as a document-oriented database. Over time, the team added support for more adjacent data types, like key value, geospatial, time series, graph, text search and most recently vector search. These were applied to different application workloads – web apps, mobile, analytics and now AI.
With Atlas, customers could access all of this functionality on their favorite hyperscaler, including AWS, Azure and GCP. Like the hyperscalers, costs accrue based on consumption. Also, like the cloud providers, MongoDB’s cloud service allows customers to skip provisioning their own hardware and running a data center. Atlas provides all of the cloud infrastructure needed, with the experts at MongoDB managing the system. Like the cloud offerings from other open source projects, this ability to outsource the database operations becomes a real value-add over just licensing the software.
This value proposition was reflected in the growth of Atlas over the past couple of years. Looking forward, MongoDB’s strategy is to continue to expand the use cases and data workloads that the product can support. At a base level, this spans all the traditional data types mentioned above that support common web, mobile and data applications for their transactional needs. Gaining more workloads means improving support for the various data types or providing tools to migrate legacy databases into MongoDB. These were all the focus of product development since 2020.
Growth in this segment hinges on landing more customers, and more importantly, addressing more application workloads within the larger ones. This is where the vendor consolidation argument works in MongoDB’s favor. As Internet and mobile usage exploded over the last 10 years and software infrastructure was partitioned into microservices, a plethora of point solutions emerged to service different data types. Individual vendors supported one data type each, whether relational, time series, graph, search, wide column, etc.
Over time, supporting all of these different database implementations became unwieldy for enterprise software teams. They had to maintain experts in each vendor’s solution, creating a lot of duplication and staffing overhead. If one platform could support multiple data types through a common interface, that vendor would create significant efficiencies for the software team. As IT budgets were pressured, this consolidation bias became more pronounced. By supporting multiple data types, MongoDB stands to benefit.
Growth in this area for MongoDB is driven by the factors discussed earlier – land more customers with an initial use case, expand the number of workloads over time and then add support for new workload types to the product offering. Going forward, this growth vector continues to benefit from the same drivers. More customer workloads yield more consumption and then more revenue.
With the explosion of interest in AI, the MongoDB team looked for opportunities to participate in this trend. For MongoDB, this boils down to two primary efforts:
- Modernize data infrastructure. As enterprises recognize the value to be gained by creating new AI-enabled products and services from their data, they also realize that a modern data infrastructure makes it easier to aggregate, cleanse and feed their proprietary data into AI models. An emphasis on data infrastructure improvements would likely accelerate the timelines for planned database migrations off of legacy solutions.
- Support appropriate use cases within AI workflows. Outside of building and powering AI models themselves, tangential components of the overall AI data workflow offer opportunities for data infrastructure companies. This was the genesis for MongoDB’s new Vector Search offering. While MongoDB would not be appropriate for core AI model training and inference, it can provide a metadata store and enable vector search for enterprises pursuing an AI strategy.
Over the last 6 months, the MongoDB team has accelerated their support for AI-enabled applications. During their Investor Session in June 2023, the product team provided a number of updates around MongoDB’s strategy to leverage AI. While the team sees an opportunity to directly power aspects of the AI workflow and data infrastructure, they also predict that AI itself will expand MongoDB’s market opportunity.
Before I discuss MongoDB’s specific AI offerings, a couple of comments from the leadership team about AI’s general impact on the company stood out. They argue that every new technology paradigm over the past 50 years has generated an incremental step change in the number of applications being built.
A simple example to consider is the rise of mobile devices and their development platforms. Up until smartphones emerged in the late 2000’s, the vast majority of consumer applications were Internet web sites. After 2010, however, a whole new wave of mobile apps built momentum and eventually increased the number of applications by a factor of 2-3x. These applications required software infrastructure to be supported, including data sources. That drove an increase in utilization for database providers.
AI stands to generate another wave of applications that leverage AI-powered capabilities to support a new set of consumer services. We are already seeing new applications that leverage chat agents powered by ChatGPT, as well as co-pilots for a variety of tasks from coding to copy writing. These are all unique applications, generally requiring a standard database in addition to their AI specific data model.
Further, AI will make it easier for developers to build new applications and modernize legacy ones. As legacy applications are quickly rebuilt with the help of coding co-pilots, they will need a data store. MongoDB has a good chance of being selected as the database for these modern applications, as the development team usually can make new infrastructure selections as part of a legacy application refactoring.
The transition to MongoDB as the operational database is also facilitated with MongoDB’s Relational Migrator product, which is now generally available. Relational Migrator is a tool that helps customers migrate relational workloads to MongoDB. It performs three primary functions:
- Designs an appropriate MongoDB schema, by using an existing relational schema as the source.
- Migrates data from the legacy database, either continuously or as a one-time snapshot. Supported source databases include Oracle, SQL Server, MySQL, PostgreSQL and Sybase ASE.
- Generates code artifacts to access the MongoDB data set. This reduces the time required to update legacy application code.
For AI-specific workloads, MongoDB has added a couple of capabilities. The most exciting is Vector Search, which is now in General Availability (after being introduced in June 2023). Atlas Vector Search allows customers to search unstructured data and create vector embeddings with machine learning models from providers like OpenAI and Hugging Face. These can then be stored and indexed in Atlas for various retrieval use cases. Vector search can support AI-related functions like retrieval augmented generation (RAG), semantic search, recommendation engines and dynamic personalization.
A key benefit of MongoDB’s implementation of vector search support is that it supplements their other database functions as the core data and metadata stores. These are all delivered through a single platform, promoting the consolidation of AI data workloads. This simplifies the infrastructure set up for developers and provides a single interface for access.
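To give a flavor of the developer experience, here is a hedged sketch of an Atlas Vector Search query through PyMongo’s aggregation pipeline. The connection string, database, collection, index and field names are hypothetical, and in practice the query vector would come from an embedding model such as those from OpenAI or Hugging Face.

```python
from pymongo import MongoClient

client = MongoClient("<atlas-connection-string>")   # placeholder URI
collection = client["shop"]["products"]             # hypothetical db/collection

# In practice the query vector comes from an embedding model; a
# 1536-dim float list is typical for OpenAI's text-embedding models.
# A placeholder is used here to keep the sketch self-contained.
query_vector = [0.0] * 1536

results = collection.aggregate([
    {
        "$vectorSearch": {
            "index": "product_embeddings",   # hypothetical index name
            "path": "embedding",             # field storing the document vectors
            "queryVector": query_vector,
            "numCandidates": 100,            # breadth of the approximate search
            "limit": 5,
        }
    },
    # Because vector search runs in the same database as the documents,
    # results can be shaped with ordinary aggregation stages.
    {"$project": {"name": 1, "price": 1, "_id": 0}},
])

for doc in results:
    print(doc)
```

This single-platform flow is the consolidation argument in miniature: the vectors, the metadata and the application data all live in one system behind one query interface.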
Despite only becoming available in public preview during 2023, MongoDB’s vector search is already gaining popularity. Retool, a leading development platform for business software, conducted a survey late in 2023 to assess the state of AI and how it is being used in the enterprise. They published a number of findings in a report released in November. The survey polled over 1,500 technology workers including software developers, business and engineering leaders, executives, product managers and designers.
One of the survey questions was about usage of vector databases. While fewer than 20% of respondents reported using a vector database at this point, MongoDB Vector Search ranked very highly. It was the second most popular vector database and scored the highest NPS rating (customer satisfaction). This is pretty impressive given that other providers have had a product in market for longer, or are primarily designed as a vector database. This could represent a very positive indicator for MongoDB’s potential in this market.
Investment Opportunity
MongoDB has experienced rapid growth over its history. From its IPO in 2017 through 2022, revenue grew by 8x. For the most recent full year of FY2023 (calendar 2022), revenue grew by an impressive 47% to $1.28B. In their Q3 report from December 2023, the company increased its FY2024 (calendar 2023) projection by $54M (more than the $31M beat) to a range of $1.654B to $1.658B, for 29.0% year/year growth. Obviously, this represents a big step down from 47% growth in the prior year, but generally aligns with the growth deceleration for other software infrastructure companies over the same period.
The current fiscal year, FY2024, ends on January 31st. As part of their Q4 earnings report, MongoDB will guide for revenue in FY2025 (calendar 2024). Analysts currently have modeled $2.032B in revenue, for 22.4% growth. It's tough to judge how realistic this is. MongoDB just reported 30% annual growth in Q3, but the sequential rate slowed substantially, and the Q4 estimate currently calls for 19.3% annual growth on total revenue roughly equal to Q3. So, an expected slowdown to 22% is a reasonable estimate at this point.
The MongoDB leadership team tends to guide conservatively. They may provide a preliminary revenue estimate for FY2025 that is below the analyst expectation. In this case, the stock would drop, providing what might be a good entry point.
Optimistically, I think that MongoDB could deliver 25% revenue growth next fiscal year (FY2025 or calendar year 2024). We could also see lower growth in the first half of the year and then higher growth in the second half. My target is based on a few factors. First, I think that software infrastructure spending by enterprises will generally increase in 2024. The severe cost optimization efforts of the last 18 months should moderate, as enterprises have realized the big savings at this point and there will be less pressure to cut costs. This will provide a tailwind to all software infrastructure companies.
Second, I think that data infrastructure providers will see an even larger uptick in demand, as enterprises shift from figuring out their AI use cases to implementing some of them. This will cause IT leadership to evaluate their data stacks and trigger modernization where needed to make their proprietary data more available to AI processes. This should unlock any database migration projects that were on hold. Additionally, new AI-driven applications will require a data store, which would represent an incremental workload. MongoDB’s new offerings, like vector search, might be appealing.
The other factor that may provide outsized support for MDB stock would be continued rapid growth in profitability measures. As I discussed earlier, MongoDB quadrupled their operating income in Q3, growing it from $19.8M a year ago to $78.5M. This drove an operating margin of 18.1% and Non-GAAP EPS of $0.96 (twice the company's guide). For the year, the company raised Non-GAAP EPS guidance to a range of $2.89 to $2.91, implying a P/E ratio of about 142.
That is a little high relative to other software infrastructure peers, but not outlandish. SNOW currently sits at the high end with 258, while DDOG is around 73, CRWD about 100, ZS at 97 and NET at 154. Large cap technology companies are obviously lower, with ORCL at 31 and MSFT at 38.
For FY2025 (calendar year 2024), analysts currently have modeled Non-GAAP EPS of $3.23 for MongoDB. This would bring the forward P/E down to 128. Since that represents just over 10% EPS growth from FY2024, I think this estimate is low. From FY2023 to FY2024, MongoDB is on track to increase Non-GAAP EPS from $0.81 to $2.91, an increase of about 260%.
I don’t think we will see anywhere near that rate in FY2025 (calendar 2024), but wouldn’t be surprised by 30-40% EPS growth. That would bring FY2025 EPS to about $4.00 and the P/E ratio to 103. This outperformance would be driven by 25% revenue growth and further realization of operating leverage.
Like Snowflake, MongoDB stock has benefitted from high valuation multiples, relative to peers and its own performance. This is also driven by the perception that MongoDB’s addressable market is huge, where durable revenue growth in the 20-30% range would be possible for many years. If MongoDB delivers 25% revenue growth in FY2025, total revenue will hit $2.075B. With a market cap of about $29.7B, this brings the P/S ratio down to about 14.3.
Over the last 6 months, MDB stock has carried a P/S ratio of 18-20. If MongoDB revenue growth accelerates in the second half of 2024 and approaches 30%, the improved profitability metrics would likely support holding the P/S ratio in that range. This would imply market cap grows to about $40B at the high end, for a 35% gain from here. That might get the stock back to $550, which is just below its high in November 2021. This is possible, but a lot has to go well for MongoDB this year.
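As a quick sanity check on that scenario, the arithmetic can be laid out explicitly. The inputs below are the estimates discussed above (the raised guidance midpoint, my 25% growth assumption and the approximate market cap), not reported figures.

```python
# Back-of-envelope check of the MDB bull case sketched above.
fy2024_revenue = 1.656e9      # midpoint of raised FY2024 guidance
assumed_growth = 0.25         # my optimistic FY2025 growth assumption
market_cap = 29.7e9           # approximate market cap today

fy2025_revenue = fy2024_revenue * (1 + assumed_growth)
print(f"FY2025 revenue: ${fy2025_revenue / 1e9:.2f}B")           # ~$2.07B
print(f"Forward P/S:    {market_cap / fy2025_revenue:.1f}")      # ~14.3

# If the multiple holds near the high end of the recent 18-20 P/S range:
target_cap = fy2025_revenue * 19.3
print(f"Implied market cap: ${target_cap / 1e9:.1f}B")           # ~$40B
print(f"Implied upside:     {target_cap / market_cap - 1:.0%}")  # ~35%
```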
Confluent
Of these three companies, Confluent stock has been the most volatile. It ended 2023 about where it started. At the beginning of 2023, CFLT traded around $22 and even dipped below $20 a few times in January. Following the company’s strong Q1 report, it briefly topped $40 a share. Then, the dismal guidance in the Q3 report brought it down to $15 in early November. Since then, it has recovered somewhat to the low $20’s as 2023 ended. Currently, CFLT is around $22, similar to a year ago.
The huge drop in stock price following the Q3 report was primarily about forward guidance. Management attributed the growth slowdown to a few issues – the behavior of a couple of large customers, macro pressure and the compensation plan for the GTM team. These issues raised questions about the durability of Confluent's growth going forward. Given the magnitude of the drop in share price after earnings (over 40%), the market seemingly interpreted these issues as existential.
I have a less dire outlook. First, in spite of the behavior of a few customers, I see no change in product alignment, market opportunity or competitive position. The transition in sales incentives towards customer usage better aligns with how Confluent Cloud should be sold and mirrors the approach taken by peers. As I’ll discuss, I think these issues are addressable, but may require a couple of quarters to play out. In the interim, I think CFLT stock presents an appealing entry point for investors who can stomach a little execution risk and be patient.
While the rest of software infrastructure has appreciated nicely over the last 3 months, CFLT hasn't. Yet, it would benefit from the same tailwinds that are driving renewed interest in stocks like DDOG, NET, SNOW, CRWD and ZS, which are pushing past 52 week highs. These potential tailwinds for CFLT are moderated optimization cycles, less pressure on IT budgets and excitement around the spillover from AI spending.
Financial Results
For Q3, Confluent actually beat their revenue estimate issued with the Q2 results. That called for $193.5M to $195.5M in revenue or 28.2% annual growth. The actual Q3 revenue result was $200.2M, which represented 32.0% growth. Confluent Cloud revenue was up a healthy 61% y/y and 9.5% sequentially, reaching $91.6M. While this was up $8.0M quarter/quarter, this result was slightly below management’s target for $92.2M. Confluent Cloud made up 46% of total revenue in Q3, up from 38% a year ago and 44% in Q2.
While the annual revenue growth rate stayed above 30% in Q3, the sequential growth rate ticked down to 5.8% from 8.4% in Q2. This was caused by the lower than expected spend increase for Confluent Cloud in Q3. Management attributed the slowdown in revenue growth to a couple of factors:
- Company-specific circumstances with two large digital-native customers, which accounted for 50% of the slowdown. One online gaming company moved workloads back to their own data center, as part of a larger migration from cloud back to self-managed infrastructure. Their plan is to run Kafka using the open source package for now, but the Confluent sales team is in discussions for a Confluent Platform license. The second customer slowed their ramp of new consumption because they are being acquired. I consider both of these to be isolated issues. Some analysts on the call tried to tie the online gaming company's migration to a larger cloud repatriation story, but I don't see it. There will always be a couple of examples of companies that claim they saved large sums by migrating cloud infrastructure to an on-premise environment, but this is rare. Having managed both types of deployment, running your own data centers only works economically for the very largest of technology companies (e.g. Facebook).
- Ongoing macro pressure, including the war in Israel, which is a top 10 country for Confluent. I suspect that Confluent is still feeling some pressure on IT budgets, similar to other software infrastructure companies. Also, I believe that these optimization efforts affected different software and data infrastructure providers in varying cycles. It may be that large vendors (like hyperscalers, Snowflake, etc.) were prioritized for cost savings first, then IT teams moved on to smaller providers. Regardless, the further we proceed into 2024, I think the majority of enterprises will have finished their large post-Covid optimization catch-up.
- Sales team compensation currently skews towards large upfront deals versus focusing on consumption. This is a big one. Confluent will shift sales compensation to make cloud revenue, rather than bookings or committed spend, the primary goal for Confluent Cloud engagements. This was a planned transition, which Confluent is now accelerating. The new model rewards only consumption, not the advance sale of commitments or credits. The distinction is subtle, but it incentivizes the sales team to ensure the customer is using the cloud solution and to assist the customer in expanding usage to more use cases. This approach matches how other consumption-based cloud services are sold, like MongoDB, Snowflake, Datadog and the hyperscalers.
The GTM transition had started in 2023, but tied only 10-15% of a salesperson's compensation to consumption. Beginning in Q1, 100% of cloud sales compensation will be determined by incremental consumption and new logo acquisition.
First, beginning in Q1 of FY ’24, we’re shifting from our current model where 10% to 15% of cloud sales compensation is based on consumption to a model where 100% of cloud sales compensation is based on incremental consumption and new logo acquisition. We will keep the vast majority of our customer revenue under committed contract as we do today. However, we will not be attempting to get commitments ahead of the usage. Rather committed amounts will be customer driven as customers choose to commit in exchange for greater discounts.
Confluent Q3 FY2023 Earnings Call
This will drive a few outcomes for Confluent Cloud:
- Get usage activated sooner. Instead of focusing on landing as large a commitment in advance as possible, salespeople will be incentivized to get the customer on the Confluent Cloud platform quickly with initial use cases. Large commitments place undue pressure on the customer to map out use cases and predict utilization far in advance. Large contracts have to be negotiated across multiple layers of management, causing more delays in getting the customer activated. Consumption models promote shorter sales cycles and less risk for customers, because they are pay-as-you-go.
- Add more workloads over time. Once a customer is actively using the Confluent Cloud platform, salespeople can work with them to facilitate new workloads, product extensions and upsells. The sales team will be more proactive about presenting new features and working collaboratively to unlock additional use cases. This will help maintain Confluent’s already high NRR of 140% for the cloud offering.
- Hunt for more customers. Confluent’s customer growth metrics have been declining, as salespeople focused on extracting large commitments from a smaller number of big companies already using Kafka. The new sales plan will incentivize bringing on incremental customers, as each one will start with smaller commitments. This naturally forces the sales team to spread out across more target companies in order to reach their quota.
While these issues cut into Q3 revenue somewhat, they had a deeper impact on forward guidance. For Q4, management guided to a range for total revenue of $204M-$205M, up 21.2% annually and 2.1% sequentially. This represents a sizable drop from the 32% growth just delivered, and from the 28.2% preliminary guide for Q3. Further, they are calling for cloud revenue of $97.5M, a sequential increase of $5.9M, down from the $8.0M of sequential growth just delivered.
While this guidance disappointed the market after the release, the big impact to share price emerged when the CFO provided preliminary guidance for FY2024 during the call. He said that they expect the same dynamics that impacted the Q4 results to carry forward into the FY2024 revenue result. This makes sense, as the lower performance in Q4 would reduce the starting point for Q1. They expect full year revenue growth for FY2024 to be 22%. This would be down from the 31.2% that is currently projected for FY2023’s result.
CFLT stock dropped about 42% the day after the earnings release. Shifting from revenue growth in the low 30% range to the low 20's forced a re-rating of the stock. The P/S ratio dropped from 11.3 before earnings to 6.5 afterwards. This market reaction was a bit overdone, but understandable. Where I think the opportunity lies for CFLT in 2024 is for revenue growth to re-accelerate in the second half of the year. Combined with a marked inflection to positive operating and FCF margins, we could see a new catalyst for the stock.
If we make a few assumptions about the recovery from the negative impact drivers in Q3, I can see how Confluent's growth could return to its pre-Q3 state over the next few quarters. These assumptions are that the two big customer adjustments were one-offs, that the overall macro picture improves and that the recalibration of the GTM team's incentives has the desired effect. That is a lot of assumptions, but I think they are reasonable.
Later in the Q3 earnings call, the CFO even supported the idea for a recovery by postulating that Confluent could return to 30% revenue growth by end of 2024. This assumes the previous issues discussed improve, which I think is likely. Further, it recognizes that Confluent has a few products in the pipeline, like stream processing with Flink, which introduce new revenue streams in 2024.
And as I shared in the prepared remarks, there are a couple of headwinds heading into 2024 and which we spoke about a couple of customers, macro and the consumption transformation. And as you think about the shape for next year, we feel that the consumption transformation will have a bigger impact from an adjustment perspective in the first half of the year than the second half of the year. So that’s one data point.
And I think Jay and I touched on it, exiting Q4 of next year, we’ll have a few tailwinds, and I’ll just reiterate a couple of them. The first one is the consumption transformation will be behind us, and what that means is how our customers want to consume Confluent’s product and services will be very aligned with how we are going to market. So that’s number one. Number two, with respect to the product unlocks, of course, we’ll have Flink which will be in GA, coupled with other unlocks on the DSP side, that’s going to be FedRAMP exiting Q4 ’24, and there’s going to be tailwinds from AI.
So overall, we feel that there are a decent amount of tailwinds exiting 2024, which gives us confidence that we’ll get back to a 30% (growth rate). It’s just timing.
Confluent Q3 FY2023 Earnings Call
As with MongoDB, the profitability story for Confluent is moving in a positive direction. In Q3, they reported non-GAAP operating margin of -5.5%, which was up from -27.8% a year ago, or a 2200 bps improvement. The prior estimate for Q3 coming out of Q2 earnings was for -10% operating margin, so Confluent beat this by about 450 bps.
Total gross margins reached 76.4%, up 540 bps over the prior year. For just the software subscription business (excluding services), gross margins were 80.1%, up 320 bps, even with the continued shift to cloud. These improvements were a result of better unit economics and optimization of the cloud business, and drove Confluent's first quarter of positive Non-GAAP net income per share at $0.02, up from ($0.13) a year ago. Even on a GAAP basis, there was improvement from ($0.41) to ($0.30). FCF margin followed the same trend, improving 24 percentage points to -6.5%.
Looking forward, Confluent expects Q4 results to continue the momentum, in spite of the revenue hit. They are projecting Non-GAAP operating margin in a range of 0-1% and $0.05 of Non-GAAP income per share. For the full year of 2024, they are calling for at least break-even Non-GAAP operating margin and FCF margin, which would be a large improvement from the likely FY2023 outcome. This factors in some expected pressure on operating margins from the shift to the new sales incentive plan.
As the CFO hinted, I think the headwinds exiting Q4 2023 are addressable. Additionally, a more favorable IT spending environment should provide another tailwind. I think Confluent can return to a revenue growth rate of 30% as they exit 2024. This would combine with improving profitability measures to inflect FCF and operating margins to positive.
Optimistically, with increased revenue growth, Non-GAAP operating margin could hit 5-10% later in 2024. Similar to peers in the software space that just made the transition to positive margins, CFLT's valuation multiple should get a little boost, on top of the re-rating from getting revenue growth back on track. To illustrate the potential lift from earnings, simply annualizing the projected Q4 Non-GAAP EPS of $0.05 would bring the P/E ratio down to 111. EPS of $0.10 in a quarter would mean a P/E of 55.
Product Strategy and Updates
Coming out of the challenges from 2023, I think Confluent has several potential product and industry tailwinds that may feed the upside case for 2024. My baseline assumption is that general software infrastructure spending patterns will improve in 2024. As investors know, early in 2023, software infrastructure consumption trends were negatively impacted by a pullback in IT spend and one-time optimization exercises.
Workload optimization followed the spending surge during Covid, when many digital-native companies spun up new cloud workloads with more capacity than was needed and postponed the normal post-launch refactoring for better performance. As IT budgets came under pressure in 2022, it was easy for them to loop back on this consumption optimization by downsizing clusters, reining in data retention and turning off unneeded activity.
Providers of software infrastructure services saw their revenue streams come under pressure. While enterprises still launched new workloads (but at a slower pace) and expanded usage in some areas, there was a large negative revision to consumption in existing workloads as they underwent abnormal optimization adjustments. Negative impact from optimization masked the continued benefit from cloud migration.
However, this outsized optimization reset was largely one-time in nature, following the Covid-fueled pull forward of cloud spending. As the optimization projects finish, their negative influence on consumption is abating. Enterprises would also front-load the largest expected savings, causing the most noticeable impact to surface in the first half of 2023. As we moved into the second half of 2023, the hyperscalers and software vendors began reporting that these effects were moderating. Yes, there was still budget scrutiny, but the large headwinds from optimization lessened.
I expect this moderation to continue into 2024. I don't think we will see a return to the Covid-surge in spending, but revenue growth will be driven more by normal cloud migration and digital transformation projects. Additionally, the VC market should improve over the next year or two, allowing more spend (outside of AI companies) to flow towards software infrastructure providers. A lot of the demand surge during Covid, on top of enterprise digital transformation, came from a flood of start-ups spinning up their own application workloads.
For data infrastructure providers, like Snowflake, MongoDB, Elastic and Confluent, the effects will follow the same trend. Snowflake has already seen the moderation of optimization and appears poised for steady revenue growth going forward (possibly even some acceleration). As a large component of many enterprise IT budgets, I suspect optimization projects focused on Snowflake first. Smaller providers like MongoDB, Elastic and Confluent would logically have started their optimization cycles after the savings were squeezed out of the larger vendors.
Specific to Confluent, leadership commented that they saw continued pressure on spending through Q3. The online gaming company moving their workloads back on-premise is a case of the same budget pressure. There, the IT team was forced to make drastic cuts to their budget. Likely having some experience with on-premise hosting and open source, they made the unusual move of pulling everything back into a self-hosted deployment mode. I think less pressure on IT budgets going forward will prevent companies from needing to take this drastic approach to cost-cutting.
Additionally, the cost advantage of using the open source version of Kafka instead of Confluent's commercial product has diminished substantially since the launch of the Kora architecture as part of Confluent Cloud. The Confluent team claims that the Confluent Cloud data streaming platform with Kora is 10x better than self-hosted open source Kafka. Some of this claim is subjective, but Kora introduced much higher elasticity, resiliency and storage capability than is available from self-hosted open source Kafka. This performance advantage of Kora would also extend to a comparison with look-alike hyperscaler solutions.
Overall, 2024 should bring a new tailwind for software infrastructure providers in the form of less negative pressure on budgets. It should be easier to attract new customers and faster to close deals, with more inclination among customers to expand usage. As a data infrastructure provider, Confluent should get a similar benefit to other industry players.
The second tailwind for Confluent has to do with AI. Enterprises are actively looking for ways to harness LLMs and other AI capabilities to offer new products, lower costs and improve employee efficiency. I think that much of 2023 and even early 2024 will be spent on experiments and prototypes for these new AI applications. As they determine which services generate value, enterprises will be ready to ramp up usage of these new applications.
Launching new software services in production and growing usage will drive incremental demand for consumption of data infrastructure services. These applications will require more data, more processing, more movement and additional monitoring. Providers with cloud-based consumption models should see the usage meters start to tick up again from this scale up. Vendors who are deployed in this way include all the data infrastructure providers that I just mentioned.
Specific to Confluent, management makes the argument that AI workloads benefit from real-time data flows. We can conceive that an LLM chatbot deployed by an enterprise for customer service would be more effective if it had access to the latest activity by any particular customer. This increases the need for more real-time data flows from a larger number of sources.
During their Investor Day in June 2023, Confluent leadership used Expedia as an example of this exact use case. They discussed how a customer service application would generally have to deal with cases where a customer’s flight was delayed or their baggage was lost. Without access to the customer’s latest travel data, the customer service bot would be ineffective. This raises the priority for enterprises to find ways to distribute customer data more broadly and to maintain updates in near real-time.
Confluent is well-positioned to serve the need for real-time data flows in fine-tuning and inference for AI models. During the Investor Day session, the CEO discussed how traditional machine learning models could be built from enterprise data on a one-time basis using bulk data loading methods. The inference function was minimal, requiring little real-time data to be effective.
With the advent of pre-trained models from 3rd parties, the role of inference and fine tuning has increased. Inference is dependent on enterprise data loaded in real-time to bring business context to the pre-trained model. Confluent leadership believes this shift in architecture will drive more interest in and demand for real-time data distribution platforms, like Confluent. As enterprises want to bring their new AI-services online, that should correspond with heavier use of an existing real-time data distribution platform (like Confluent) or offer a sales opportunity if they need to license one.
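To make this architecture concrete, here is an illustrative sketch (not Confluent's implementation) of the consuming side: a service reads customer-activity events from a Kafka topic with the `confluent-kafka` Python client and refreshes whatever store the inference path reads. The topic name, event shape and update function are hypothetical.

```python
# Illustrative sketch: keeping an LLM's retrieval context fresh by
# consuming customer-activity events from Kafka as they occur.
import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "<broker-address>",   # placeholder
    "group.id": "ai-context-updater",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["customer-activity"])      # hypothetical topic

def update_customer_context(event: dict) -> None:
    # Stand-in for writing the latest activity into whatever store
    # (vector index, feature store, cache) the inference path reads.
    print(f"refresh context for customer {event['customer_id']}")

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None or msg.error():
            continue
        # e.g. {"customer_id": 42, "type": "flight_delayed", ...}
        event = json.loads(msg.value())
        update_customer_context(event)
finally:
    consumer.close()
```

With a flow like this in place, the customer service bot in the Expedia example would see a flight delay within seconds of the event being produced, rather than after the next batch load.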
On the product side, the big incremental revenue opportunity for Confluent has to do with their new stream processing offering. This was initiated by their acquisition of Immerok, announced in January 2023. Immerok was founded by leading contributors to Apache Flink, the most popular open source stream processing project. Subsequently, the team has been working on integrating Flink into the Confluent platform as a core offering. This product was released to public preview in September and should be ready for GA in 2024.
The excitement around Flink at Current was palpable. Flink sessions were among the highest-rated and most attended sessions at the entire conference, highlighting the hunger Kafka users have for Flink. That’s why we’re so pleased with the launch of Flink public preview in Confluent Cloud.
Since the announcement, we’ve seen incredible uptake with hundreds of customers opting into the preview and trying out our cloud-native and serverless Flink offering. The addition of Flink strengthens our position as the only complete data streaming product. We bring Flink together with the connectors that capture streaming data that stream itself in Kafka and the governance capabilities to manage streaming data across an organization. Each of these capabilities strengthens the other, and the combination comprise what we believe will be the most important data platform in a modern company.
Confluent Q3 FY2023 Earnings Call
The demand expectation for Confluent’s Flink offering is supported in my view by other data infrastructure providers releasing new products to help their customers with stream processing. Notably, MongoDB introduced Atlas Stream Processing in June 2023 and Snowflake has emphasized the versatility of Snowpipe Streaming as a mechanism to process streaming data in advance of loading it into Snowflake.
While one interpretation is that this new competition could present an issue for Confluent’s offering, I actually view them as complementary and indicative of overall customer interest. As stream processing becomes a larger workload within a typical data infrastructure configuration, I expect that Confluent’s product will see heightened demand. AI workflows further underscore the usefulness for stream processing, as real-time data distribution needs methods to query, filter and reformat data for inference and fine-tuning.
Confluent has the advantage of providing both the data streaming and stream processing solutions within the same platform. This creates efficiencies for data access, shared governance models and a common interface. Many innovative companies already use both Apache Kafka and Flink, so the need for combining them had already been established.
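As a sketch of what stream processing adds on top of the raw stream, the PyFlink example below runs a continuous aggregation over a Kafka topic of order events. Note that Confluent Cloud exposes this capability as managed Flink SQL; this self-hosted PyFlink form (with hypothetical topic and field names, and requiring the Kafka connector jar on the classpath) is only meant to illustrate the shape of the computation.

```python
# Minimal sketch of a continuous query over a Kafka stream: filter and
# aggregate events as they arrive, rather than after they land in a
# warehouse. Topic and field names are illustrative.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Source: a Kafka topic of order events (needs the Kafka connector jar).
t_env.execute_sql("""
    CREATE TABLE orders (
        customer_id BIGINT,
        amount      DOUBLE,
        ts          TIMESTAMP(3),
        WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'orders',
        'properties.bootstrap.servers' = '<broker-address>',
        'format' = 'json'
    )
""")

# Continuous aggregation: per-customer spend in 1-minute tumbling windows.
result = t_env.sql_query("""
    SELECT customer_id,
           TUMBLE_START(ts, INTERVAL '1' MINUTE) AS window_start,
           SUM(amount) AS spend
    FROM orders
    GROUP BY customer_id, TUMBLE(ts, INTERVAL '1' MINUTE)
""")
result.execute().print()
```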
After Flink transitions to GA in 2024, its revenue will be net new for Confluent, increasing year/year growth. Confluent's leadership has commented in the past that they expect the stream processing business to eventually be as large as core data streaming. Each quarter of 2024 that the stream processing offering grows, its usage will add more incremental revenue to the year/year comparison. This effect should peak in Q4 2024, with a potentially easy compare to Q4 2023.
Finally, there are a few other smaller positive influences on Confluent's growth path in 2024. Management mentioned getting FedRAMP certification later in the year, which will unlock new government spending. Confluent also has a few other product add-ons that are small but ramping nicely, which could introduce additional net new revenue in 2024. These include Stream Governance, Connectors and Stream Sharing.
Investment Opportunity
Looking forward to FY2024, analysts have modeled revenue for Confluent of $940.8M, which represents 22.3% growth over the current estimate of $769.1M in revenue for 2023. I think Confluent can beat this and deliver $960M in revenue, representing a full year growth rate of about 25%. This is based on the assumption of improving demand and sell-through, driven by the factors I have discussed. Additionally, new products like stream processing will ramp, introducing an incremental revenue source. Annual revenue growth by quarter through 2024 should start in the low 20% range in Q1 and Q2, then ramp into the high 20's and hit 30% by Q4.
On the earnings side, analysts have modeled FY2024 generating $0.17 of Non-GAAP EPS, representing a forward P/E ratio of 130. If Confluent can continue driving the optimization efficiencies discussed and outperform on revenue, they might get this up to $0.25 to $0.30 for the year. That would rapidly pull the P/E ratio down to 73 at the top end for the current price per share of about $22. A Q4 EPS of $0.10 would bring the annualized run rate for P/E down to 55.
I think the combination of a Q4 exit revenue growth rate of 30% and a rapidly declining P/E ratio based on a favorable Q4 annualized EPS run rate would support a re-rating of the stock back to a P/S ratio of 12 (helped by lower interest rates). This implies an optimistic market cap of $11.5B, an increase of about 66% from today's $6.8B. CFLT's market cap was about this high in July 2023, so it's not an outlandish assumption about 1.5 years later. A return to that market cap level would bring the stock price up to $37, which is still below its current 52 week high of $41.
Investment Plan
I have discussed three companies in this post, all of which I think stand to benefit from improving tailwinds in 2024. The negative pressure on IT budgets and focus on post-Covid workload optimization should abate, allowing revenue growth for data infrastructure providers to return to normal cloud migration and digital transformation drivers. This by itself has already benefited software infrastructure companies and stands to continue. Expected cuts in interest rates should encourage new investment in IT capital projects as well.
AI should create another catalyst for data infrastructure companies. Enterprises spent much of 2023 figuring out their AI strategy and testing potential applications. As these prototypes get released to production and start to scale, they will drive increased demand for data services, including distribution, processing and storage. If proprietary data creates value for unique enterprise AI services, then more high quality data is better.
A desire for better data will cause enterprises to increase investment in their data infrastructure – modernizing it, moving it to the cloud, reducing data latency, adding new sources and retaining data longer. These factors should drive incremental utilization for data infrastructure providers that provision services on a consumption model. A boost in revenue should result.
These trends should benefit all three of the companies I discussed, as Snowflake, MongoDB and Confluent can each generate more revenue from increased utilization. Additionally, all three are ramping up new products that will introduce incremental revenue streams, further compounding the year/year revenue comparisons.
From an investment perspective, SNOW and MDB have factored some of this upside already into their share price, as their valuations are on the high side. With that said, I think they could still see another 30% in share appreciation in 2024 if revenue growth re-accelerates and profitability measures keep improving. Seeing measurable, positive Non-GAAP EPS is also adding support to the share price, as forward P/E ratios can be calculated for these companies and are dropping under 100 in some cases.
For CFLT, I think the most upside potential in 2024 exists as a consequence of its beaten down share price. Because of the issues that surfaced with the Q3 report, there is more execution risk. However, I think the challenges encountered are not existential and are addressable in the first half of 2024. This could result in the situation where Confluent is back to 30% revenue growth with positive EPS. The stock’s valuation multiple could re-rate back to prior levels, bringing the share price near its prior highs in the $30 range. This would generate at least a 50% return for investors who can accept a higher execution risk.
I have positions in all three stocks. I would add to CFLT over the course of 2024 if it appears that their financial performance is recovering in the ways I have laid out. For SNOW and MDB, I expect a reasonable return and plan to simply maintain my positions.
NOTE: This article does not represent investment advice and is solely the author’s opinion for managing his own investment portfolio. Readers are expected to perform their own due diligence before making investment decisions. Please see the Disclaimer for more detail.
Happy new year Peter! Welcome back 💚 🥃
Hi Peter, it's great to have you back. Your work is just fantastic and I really appreciate it! I'm wondering if you publish your portfolio on the SSI site any more? I notice that your Common Stock profile is no longer available. Thank you.
Thanks, Peter. Very interesting read.
Peter, it’s great to have another article from you.
If a workload needs data from different places like on-prem, clouds, and from 3rd parties, are any software infrastructure companies particularly well placed to help, and is that likely to be significant for their prospects?
Hi Michael – Yes, the three data infrastructure companies that I mentioned all provide the benefit of working across the hyperscalers, so moving data between those installations would avoid incurring incremental transfer cost from the hyperscalers. If we expand the data movement to include on-premise, clouds and 3rd parties, then I think Cloudflare is a good solution with R2. That allows data storage for a low cost, and the movement between nodes is free.
Much thanks!
Awesome post!
Does GenAI change the influence developers have on how the IT budget is spent?
Interesting question. Developers would have influence over any tooling decisions associated with their AI workflows and systems.
and thanks again!
As others have said…great to have you back, Peter! Love the way you analyze each company and their stock from numerous angles, making for a comprehensive and invaluable discussion.
Welcome back Peter and great to hear from you. Look forward to your future notes and any updates.
Best,
Aaron.
“Most AI-enhanced digital services will be delivered over the Internet. These will represent new application and data workloads that consume similar resources as standard applications, like observability, security, delivery and data storage.”
Hi Peter – great insights as always. Regarding the quote above, what's holding you back from investing in security names? For example, CRWD was among the best performing stocks in 2023.
Good question. Thanks for the feedback.
Related to the quote, the security I am referring to as a direct potential beneficiary of AI is application security, as opposed to enterprise workforce and asset security. Application security providers include companies like Cloudflare, Akamai and Fastly. Data security has become a feature wrapped into most data platforms, like Snowflake, Databricks, the hyperscalers, etc.
In terms of enterprise security companies, I agree that they outperformed in 2023. The demand for these security providers should continue going forward, but I think that a lot of that growth has been priced in at this point, after such a big run. I am still looking at them and would consider a position in CRWD/PANW/ZS if there were a pullback.
Hi Peter,
Hyperscalers, SNOW and Databricks have their own data streaming offerings. Could you help us understand how CFLT's data streaming differs from those companies' products (aside from being cloud agnostic)?
Your work is really fantastic and I appreciate you sharing it with us!
Hi – the big difference and advantage of Confluent's implementation of data streaming on the cloud lies in their Kora platform. This represented a complete rewrite of the underlying data processing engine in Apache Kafka to make it much more reliable, performant and elastic. This is unique and proprietary to Confluent. The hyperscalers generally are just hosting basic Apache Kafka, or have created their own look-alike data streaming engines that are much less capable.
Thank you so much for your reply, Peter.
Nice article. I’m also curious about your thoughts on ESTC. And what do you think about the interest in vector databases that generative AI will generate?
Hi – I generally like Elastic. Their natural support for vector search is a nice benefit. I think the only issue is that Elastic has spent a lot of time making the case that they are a suitable solution for observability and security. Now, they are throwing in AI and RAG. It does dilute the GTM effort.
I have one question regarding "data is not shared by making a copy". Both Amazon and Databricks say the same thing on this matter. It's really Snowflake's strong and important point, in my view. So, I would just like to clear up my confusion: this was a capability Snowflake had that others couldn't match, right?
Thank you very much as always.
It's nuanced, but Snowflake has the purest form of data sharing without allowing the recipient to create a full copy of the dataset. The other providers can say that they don't make a copy, per se, but it is possible to download the full dataset in their implementations. Only Snowflake allows the recipient to access the data in a controlled environment where they cannot make a full copy of it. However, the downside is that this requires the recipient to be on the Snowflake platform (which helps with network effects). Arguably, this competitive advantage for Snowflake has become less distinct over time.
Hi Peter,
Thank you very much for your reply.
Regarding the revenue guidance issue: at the Barclays Global Technology Conference (Dec. 07, 2023), Mike Scarpelli said:
In a consumption model like Snowflake's, customers can ramp very quickly, but they can also slow down if they choose. And so, I think a lot of the new products we have coming out next year could drive a reacceleration in our business. I am not forecasting that right now – I need more time to see how these things play out before I change any guidance. I will say a lot of these new things coming out next year will be a headwind to product margin expansion. And there's Unistore – the economics on that are not as good as our core product, because of the way the system works to get the performance. We store the data twice; there's cost we need to take out, and we will get there. It's going to take about six months.
What do you think? Is this one of the reasons for their reduced guidance?
I think their revenue guidance is conservative and that new products will be additive this year. The comments about margins are fair – like double storage of data to make Unistore viable. I think the market will be more focused on the incremental revenue from Unistore and other new products than a slight margin hit.
Hi Peter – could you please share your thoughts on SNOW earnings, and on Iceberg in particular? Seems their revenue deceleration is huge and they’ve withdrawn their long term forecast of 10b revenue. What’s your investing plan on snow? Thanks.
I thought the Q4 report was okay. They delivered 33% product revenue growth and a huge acceleration in RPO. Total customers ticked up nicely sequentially and management shared some impressive growth for their largest customers (8 of the top 10 customers increased spend sequentially, and a $250M deal was signed). Even the deceleration of NRR (a lagging indicator) slowed.
The issue, of course, was with revenue guidance. I think they are sandbagging in order to provide support for the new CEO. I think Snowflake could end the year with 30% revenue growth, which was the original analyst estimate. This could set up a nice cadence of beat/raise as the year progresses. I like the CEO change, as it brings a technologist with both enterprise and start-up experience to run the company. Also, I am sure he will do everything possible to be successful.
If SNOW stock drops further, I will likely up my allocation.
Thanks Peter for your insights as always. Could you please briefly comment on whether Iceberg will meaningfully take share from $SNOW and impact their path to $10B annual revenue? And $MDB just reported a rather big Q4 revenue beat but awful guidance (in my opinion). Are you thinking the same – that they're sandbagging – and will you up your allocation should the stock drop further? Finally, $GTLB appears to be closely related to the current AI boom. Given the recent drop in stock price, do you plan to start a position, or what's holding you back from owning $GTLB?
Hi Chris,
I don’t think Iceberg will take share from Snowflake, but it does offload some of the revenue from storage. This might have some limited near term impact, but in the end, provides an easier onramp to Snowflake for other data workloads. So, their support of the Iceberg format is good in the broader sense.
Regarding MDB, I agree. They had a pretty strong Q4, with weak guidance. I think they are being conservative and could end the year with revenue growth in the mid-20% range. I would add again under $350.
I think GitLab is getting interesting. They are riding a number of tailwinds right now. My hesitation has been their contention that they can dislodge entrenched providers in categories like observability and application security with their broader offering. But developer productivity and tooling have been elevated by GenAI. I might start a position if the stock keeps dropping.
Thanks Peter for your comments. What's your opinion on HashiCorp? I haven't seen your coverage on them. Given the huge drop in their share price over the past few years, what's holding you back from opening a position? Their focus on cloud infrastructure should make them an AI beneficiary, right?
I have looked at HashiCorp, but never got around to covering them. I worry at this point that their offering is a little too diluted. Their products touch a lot of areas and are still largely associated with open source. I think their offering is too far removed in the software stack to see a meaningful tailwind from AI. That said, if the stock price drops too much, it may be interesting. There is an argument that AI lowers software development costs so much that we have a surge in new applications coming to production, all of which will need hosting.
Hi Peter – Thank you so much for this valuable insight. With these tailwinds, have you considered companies like Informatica or Dynatrace? And is there any reason you've decided not to own Elastic or GitLab? Thanks so much.
Sure – thanks. Regarding your questions:
– Informatica: Maybe. I have always viewed them as basic ETL plumbing and haven’t followed their results closely. But, the argument that AI investment would raise usage of data distribution and priority for data infra improvements does favor them.
– Elastic: Same reasoning, but I have struggled with their ability to compete effectively in observability and security. With a clear use case to support AI through vector search, they may realize a new growth driver.
– Dynatrace: I like Datadog better in observability. More innovation.
– GitLab: Struggled with their overall platform strategy and confidence around extensions into observability and security. But AI has changed the landscape and makes developer tooling and productivity more important for investment, as the ROI is clearer. Might open a position.
Excellent, thanks so much
Thank you. Have you considered Oracle? There are a few data scientists who think this is an overlooked company with these tailwinds, although I’ve been mixed on them through my experience. Thanks
Hi – Funny, I wrote about Oracle a little over a year ago and had a small position for some time. I closed that out with a nice gain. https://softwarestackinvesting.com/oracle-cloud-in-hypergrowth/
I thought their recent quarterly report was strong. Their product positioning makes sense to me. I like how they are partnering with the hyperscalers (like with Azure) and messaging towards the multi-cloud vendor approach. They also have a strong relationship with Nvidia and aren’t trying to compete by designing their own AI chips. Finally, they have an extensive installed base of Oracle databases in the Global2000, which should help with GTM.
Hi, Peter,
I would love to get your thoughts (either as a pithy reply here or a dedicated post) on DBOS, which seems to be an up-and-coming technology solution.
"Like most applications, databases usually run on top of operating systems, which manage the resources they need to get the job done. But DBOS implements a database _inside_ the operating system to manage the complex array of states it needs to know to run software effectively. And it is designed to scale across multiple nodes, which means users would not need to bother with Kubernetes to orchestrate Linux containers across multiple operating environments."
More info at this press release
https://www.prnewswire.com/news-releases/technology-pioneer-mike-stonebraker-raises-8-5m-to-launch-dbos-and-radically-transform-cloud-computing-302086000.html
Thank you for your reply!