Investing analysis of the software companies that power next generation digital businesses

Datadog Q3 2021 – Land and Expand (Squared)

Datadog (DDOG) delivered another impressive earnings report on November 4th. After a strong Q2, Datadog is showing no signs of slowing down as we finish out the year. They beat expectations for Q3 on the top and bottom line, further accelerating revenue growth from Q2 to a staggering 75% year/year increase. Further, they raised projections for Q4 and the full year, with the initial revenue target for the current quarter implying higher revenue growth than Q3's initial guidance. The stock surged 12% the day after earnings, bringing DDOG's 2021 performance to nearly a double from its $98 price at the close of 2020. DDOG is well past the $150 target for 2021 that I set at the beginning of the year. It now occupies the second largest allocation in my personal portfolio.

These results followed Datadog’s annual user conference, Dash, which was packed with product announcements. Like revenue growth, Datadog’s product development cadence is accelerating, with more product releases this year than in 2020. Product expansion spanned both extensions to existing categories and entry into whole new markets. This product growth further underscores the large opportunity for Datadog. They are taking the typical land and expand model to the next level, by upselling existing customers and then continually delivering new product extensions for them to adopt in the future. In a prior blog post, I discussed how an ever-expanding addressable market can elongate revenue growth durability. The same arguments can be made for Datadog.

I’ll list the most significant takeaways from the quarter below, with more details provided in the sections that follow. Additionally, readers can review my prior coverage of Datadog, which provides a useful foundation for Datadog’s investment thesis and the progression of events in 2021.

  • Revenue growth continues to accelerate. This may be the most significant outcome of the earnings report. After delivering 67% revenue growth in Q2, Datadog’s revenue growth stepped up to 75% in Q3, beating their original estimate from Q2 earnings for 60% growth by about 1500 basis points. Looking forward to Q4, a similar beat would deliver revenue growth above Q3 and approach 80% annualized.
  • The full year outlook was also raised, to represent growth of about 65%. Keep in mind that the original estimate coming out of Q4 2020 was for 38% growth. At that point, several analysts concluded that Datadog’s growth story was lagging. Datadog is now projecting growth a full 27 percentage points above that initial rate, and the Q3 raise added about 9 points to the prior full year estimate of 56% set with Q2 earnings. This implies that the growth rate exiting the year (Q4) will hit 70% or more. This shows strong momentum as Datadog approaches $1B in annual revenue and could provide a nice set up for continued elevated growth in 2022.
  • Billings, total RPO and current RPO growth rates were all higher than revenue growth, further supporting elevated revenue growth going forward. Total RPO increased by a staggering 127% year/year, up from 103% in the prior quarter, reflecting Datadog’s ability to sign more multi-year deals.
  • Operating leverage at scale is continuing, demonstrating that Datadog can drive significant revenue outperformance without unduly impacting profitability. Non-GAAP operating margin ticked up again to 16% versus 13% in the prior quarter. Even on a GAAP basis, Datadog is approaching profitability at just -2% operating margin. FCF margin was 21.1% and represented a doubling of FCF from a year ago. 
  • R&D spend increased 83% year/year and now makes up 31% of revenue. This exceeds S&M spend at 24% of revenue. This crossover, with R&D spend exceeding S&M, is unique among peers. Datadog is able to keep investing heavily in R&D while holding S&M spend relatively low, without impacting revenue growth.
  • Total customer growth of 34%, combined with DBNRR over 130%, provides another tailwind to revenue growth. Datadog is continuing a high pace of new customer additions, with those customers then increasing their spend over time. 70% of Q3’s year/year revenue growth came from customer spend expansion.
  • Large customers (over $100k ARR) now make up 82% of total revenue, up from 80% last quarter. The number of these large customers was 1,800 in Q3, up 66% from a year ago and 11.8% sequentially.
  • Introduced a series of new product releases and extensions as part of their annual Dash conference. This was on top of several product announcements made earlier in the year. Datadog is entering adjacent markets and extending their coverage in existing ones.
  • Product release velocity is accelerating. As part of an Investor Meeting during Dash, management highlighted the increasing number of product introductions each year. More products provide customers with additional ways to increase their spend, allowing for a sustained high net expansion rate.

Clearly, Datadog is firing on all cylinders. They are exhibiting a combination of rapid product delivery and strong sales execution. This builds confidence that Datadog can continue to dominate the observability category and extend the general concept of observability into adjacent IT functions, like developer workflows, security and business intelligence. Further, while these categories have incumbent providers and next generation competitors, Datadog is outpacing competitors across all business measures.

Let’s dive into the numbers and then talk about the product landscape.

Financial Summary

Top Line Growth

Q3 revenue was $270.5M, up 74.8% annually and 15.8% sequentially. This beat analyst estimates for $247.8M, or 60.2% growth, and the company’s prior estimate from Q2 for $246M – $248M. Q3 revenue growth accelerated over Q2’s rate of 66.8% annually. Datadog beat their own guidance by $21M in Q2 and $23M in Q3, which translates into about 15 percentage points of annualized revenue growth in each case. This gives investors a fairly consistent predictor of actual revenue delivery for the following quarter (applicable to the Q4 guide in this case).

With this in mind, looking forward to Q4, more acceleration is implied. Management’s preliminary revenue estimate is for a range of $290M – $292M, or 63.9% annual and 7.6% sequential growth at the midpoint. If we apply the same pattern of outperformance from Q2 and Q3 to Q4’s estimate, we can assume that Datadog will beat by about $25M. That would result in Q4 revenue of $316M, for annual growth of 78% and sequential growth of 16.8%.

Looking to 2022, we don’t have projections from the company yet; those will come with the Q4 report early next year. Analysts have $1.397B modeled for 2022 at this point, for growth of 40.5% over the 2021 outlook. However, sequential revenue growth provides some hints of what’s to come. Datadog has delivered sequential growth in a fairly consistent range of 16-17% over the past two quarters, and the implied sequential growth for Q4 (after a typical beat) is at the same level. If we apply Q1 2021’s sequential growth of just 12% to my Q4 estimate of $316M, we get a Q1 2022 target of $354M, for 78% annual growth. That growth rate would be in line with my Q4 projection.
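For readers who want to trace the arithmetic, here is a minimal sketch of the extrapolation above. The ~$25M beat and the 12% sequential rate are the assumptions discussed in this section, and Q1 2021’s reported revenue of roughly $198.5M is used for the year/year comparison.

```python
# Back-of-the-envelope projection using the figures discussed above.
# Assumptions: Q4 guide midpoint of ~$291M, a ~$25M beat (consistent with
# the Q2/Q3 pattern), and 12% sequential growth in Q1 2022 (matching Q1 2021).

q3_2021 = 270.5                   # reported Q3 2021 revenue, $M
q4_guide_midpoint = 291.0         # midpoint of the $290M - $292M guide, $M
assumed_beat = 25.0               # similar to the ~$21M / ~$23M beats in Q2 / Q3

q4_2021 = q4_guide_midpoint + assumed_beat
print(f"Projected Q4 2021: ${q4_2021:.0f}M "
      f"({q4_2021 / q3_2021 - 1:.1%} sequential)")

q1_2021 = 198.5                   # Datadog's reported Q1 2021 revenue, $M (approx)
q1_2022 = q4_2021 * 1.12          # apply Q1 2021's ~12% sequential growth rate
print(f"Projected Q1 2022: ${q1_2022:.0f}M "
      f"({q1_2022 / q1_2021 - 1:.0%} year/year)")
```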

While we can expect revenue growth to eventually slow down at scale, Datadog appears to have significant momentum going into 2022 and could deliver revenue growth in the range of 50-60% for full year 2022. I think Datadog will deliver a little over $1B in revenue for 2021, representing about 65% y/y growth. For 2022, I think they could hit $1.5B, for another year of roughly 50% growth, if they can maintain the same level of absolute customer additions and spend expansion.

Beyond the consistency of sequential revenue growth, other sales metrics further support the view for continued elevated revenue growth going into 2022. While management emphasizes that revenue growth is a better indicator of business trends, billings and RPO growth provide additional forward-looking signals. For Q3, billings were $309M, up 98% year/year. This compares to 69% billings growth in Q2. Total RPO (Remaining Performance Obligations) was $719M in Q3, up 127% year/year, driven by strong sales activity, increased contract duration and an easier compare to Q3 2020. This compares to total RPO of $583M for growth of 103% in Q2. Current RPO growth was about 100% annually in Q3, versus 80% in Q2.

On the earnings call, Datadog management shared that they experienced another quarter of record ARR addition, crossing a milestone of $1B in total ARR in Q3. This growth was distributed across products, with all major product lines experiencing record ARR addition. Even Datadog’s oldest product offering, Infrastructure Monitoring, is still delivering accelerating ARR growth.

Log Management and APM product suites combined remained in hypergrowth mode, exceeding $500M in ARR together in Q3. This is up from $400M in combined ARR in Q2, representing more than 25% sequential growth. These growth rates in newer product offerings bode well for the sustainability of future revenue growth. As Datadog keeps launching new products and entering adjacent markets, it provides confidence that these additional product lines are based on real customer demand and that management has a track record of successfully predicting how to drive future product revenue.

Profitability

While revenue growth is accelerating, Datadog continues to drive operating leverage. Q3 Non-GAAP gross margin was 78%, down slightly from 79% a year ago but up from 76% in Q2. These are all in line with Datadog’s long term target for gross margin in the high 70% range. Leadership did mention that they expect it to tick up slightly in Q4 due to incremental efficiencies in cloud costs.

Non-GAAP operating income was $44.0M, for an operating margin of 16.3%. This was up from $30.9M and 13.2% last quarter. For Q3 2020, operating income was $13.8M and operating margin was 8.9%. Interestingly, for investors peeved by the use of Non-GAAP measures, Datadog was nearly break-even on a GAAP basis. They recorded -$4.9M of GAAP operating loss for -2% operating margin. The estimate for Q4 is to deliver $38M – $40M of Non-GAAP operating income. This represents a doubling over the original estimate for Q3 of $18M – $20M. Non-GAAP EPS was $0.13, more than doubling analyst estimates for $0.06 in Q3.

For the full year, leadership raised the Non-GAAP operating income target to a range of $133M – $135M. This is up from the Q2 estimate for $87M – $93M and triple the initial 2021 estimate for $35M – $45M set with Q4 2020 earnings. Datadog’s long term operating margin target is still for 20-25%.

Operating cash flow was $67.4M in Q3, up from $36.3M a year ago. Free cash flow was $57.1M for a FCF margin of 21.1%. This represents a doubling from Q3 2020 free cash flow of $28.6M. Datadog’s rule of 40 value is now in the range of 91 (op margin based) to 96 (FCF margin based). This is far above competitors.
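For reference, the Rule of 40 values cited above are simply the sum of the revenue growth rate and a profitability margin. A quick sketch of the calculation:

```python
# Rule of 40: year/year revenue growth plus a profitability margin (both in %).
revenue_growth = 74.8        # Q3 y/y revenue growth
non_gaap_op_margin = 16.3    # Q3 non-GAAP operating margin
fcf_margin = 21.1            # Q3 free cash flow margin

print(f"Rule of 40 (operating margin basis): {revenue_growth + non_gaap_op_margin:.0f}")  # ~91
print(f"Rule of 40 (FCF margin basis):       {revenue_growth + fcf_margin:.0f}")          # ~96
```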

In terms of functional areas, Datadog continues their investment into R&D while enjoying leverage in S&M spend. Non-GAAP S&M spend increased 30% year/year in Q3, while R&D spend increased 83% over 2020. This compares to a 33% year/year increase in Q2 for S&M and 85% for R&D. R&D spend now exceeds S&M spend by 30%, or $83.9M vs. $64.6M. That is unheard of among SaaS peers, who almost uniformly spend more on S&M than R&D.

  • R&D = 31% (versus 30% in Q3 2020 and 30% in Q2)
  • S&M = 24% (versus 32% in Q3 2020 and 26% in Q2)
  • G&A = 6% (versus 8% in Q3 2020 and 7% in Q2)

These spending trends provide a critical insight into the power of Datadog’s strong product development and go-to-market flywheel. Increased investment in R&D creates new product offerings that are highly relevant for customers. New module adoption drives incremental product subscriptions and increased spend per customer (NRR). This expansion requires little engagement from the sales team, generating sales efficiency. In many cases, customers can subscribe to additional products through a self-service motion. Datadog can grow sales spend more slowly as a consequence, freeing up more incremental dollars for R&D.

From their 10-Q:

Our customers often significantly increase their usage of the products they initially buy from us and expand their usage to other products we offer on our platform. We grow with our customers as they expand their workloads in the public and private cloud.

Datadog, Q3 2021, 10-Q

This emphasizes the two low cost revenue generators for Datadog from existing customers:

  • Spend expansion by adding on additional product modules with stand-alone pricing.
  • Spend growth simply because of increased customer utilization. As many of Datadog’s customers are rapidly growing digital innovators, they are experiencing revenue growth themselves. This makes it easy to justify an expanding IT spend, as digital operations grow.

For high growth digital natives, increases in the customer company’s overall digital revenue will drive a larger IT budget. With observability generally making up 5-10% of IT spend, Datadog can achieve a high net expansion rate simply from the customer’s own business growth.

Customer Activity

Datadog continued its strong “land and expand” motion in Q3. As I hinted at above, this comprises two factors. First, Datadog must attract new customers. They accomplish this through direct sales, marketing efforts, conferences and subsidized trial usage. Second, existing customers increase their spend over time, usually starting with 1-2 products and adding more each year. This product module adoption, coupled with their own utilization growth from continued digital transformation investments, combines to grow spend for each customer annually. In their Q3 10-Q, Datadog disclosed that approximately 70% of the increase in revenue was attributable to growth from existing customers, with the remaining 30% coming from new customers. This is consistent with past quarters.

New customers provide the baseline for the future expansion motion. Datadog had over 17,500 total customers at the end of Q3. This was up 1,100 from 16,400 in Q2 and up from 13,100 a year ago, representing 33.4% annual and 6.7% sequential growth. In Q2, they added a record 1,200 new customers and Q1 brought on 1,000, so Q3’s addition of 1,100 appears to be in line. Given the larger denominator, though, the annualized growth percentage is ticking down slightly. This is understandable at Datadog’s scale, but is something to monitor over time. Datadog’s long term growth will be optimal if customer additions remain above 25-30% annually. Management feels there is a “substantial” opportunity to continue to grow their customer base, particularly in international markets.

Datadog Total Customer Growth, Q3 2021

Datadog regularly reports on the number of “large” customers, which they define as those spending more than $100k in ARR. These now represent 82% of Datadog’s ARR, up from 80% last quarter. Therefore, growth in these large customers is a critical contributor to overall revenue growth. In Q3, Datadog reported 1,800 of these customers, up 66% from 1,082 in Q3 2020. They reported 1,610 of these customers in Q2, meaning Q3 added 190, or 11.8% sequential growth. This count has more than doubled in 2 years.

Datadog Large Customer Growth, Q3 2021

Datadog measures the increase in customer spend using the dollar-based net retention rate. They haven’t reported the actual value since their IPO, but disclose that it is above 130%. Once again in Q3, DBNRR was over 130%, which has now been the case for 17 consecutive quarters. This means that on average customers increase their spend on Datadog by 30% or more each year.

Because Datadog makes it easy for customers to activate additional products, customers often start with a couple and then gradually add more modules over time. Datadog management has been reporting values for the percent of customers that have adopted two or more and four or more products for the last two years.

If we examine these values and extrapolate to total number of customers, the high growth in product add-ons becomes apparent. Not only are the percentages of customers using multiple products increasing, but also the total customer count grows, which magnifies the absolute number of customers in each category and the rate of change annually.

Datadog Multiple Product Customer Adoption Metrics, Q3 2021

For example, in Q3, the number of customers using two or more products increased by 45% annually, while the total number of customers increased by 34%. Penetration of customers with two or more products is approaching 80%, which may represent a ceiling, as new customers usually start with 1-2 products. For customers using four or more products, the absolute number doubled year/year, showing that rapid expansion is continuing into a larger number of products adopted by each customer. As Datadog continues to add more products to the platform, we will likely get more visibility into customers adopting a higher number of products. In Q1, Datadog disclosed that “hundreds” of customers were using 6 or more products.

The combination of these factors is driving high revenue growth. As total annualized customer growth remains above 30% and DBNRR (dollar-based net retention rate) remains over 130%, I think we can expect total revenue growth to continue to exceed 50% annually. Conceptually, this makes sense. Datadog is adding more than 30% new customers each year and existing customers increase their spend by an average of over 30%. Of course, these growth percentages should moderate over the long term, as the absolute numbers become very large.
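To make that reasoning concrete, here is a toy decomposition using the DBNRR floor and the 10-Q’s disclosure that roughly 70% of the revenue increase came from existing customers. This is an illustrative lower bound, not Datadog’s actual cohort accounting.

```python
# Toy decomposition of annual revenue growth. Illustrative only -- this is
# not Datadog's cohort math, just the directional logic described above.
prior_year_revenue = 100.0                     # normalize last year's revenue

dbnrr = 1.30                                   # "over 130%" net retention (a floor)
expansion = prior_year_revenue * (dbnrr - 1)   # >= 30 from existing customers

# Per the Q3 10-Q, ~70% of the revenue increase came from existing customers,
# so the expansion dollars imply the size of the total increase.
total_increase = expansion / 0.70
new_customer_contribution = total_increase - expansion   # the other ~30%

print(f"Implied floor on revenue growth: {total_increase / prior_year_revenue:.0%}")
# Prints ~43%. Since 130% is only a disclosed floor and actual expansion runs
# higher, reported growth of ~75% comfortably exceeds this lower bound.
```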

The growth in multiple product adoption is enabled by Datadog’s platform strategy and rapid product development cadence. They continue to add new products to the platform suite, each of which addresses unique use cases and adds value incrementally. Their pace of product development has been accelerating over the years, at least as measured by the number of released offerings that contribute to revenue. While 2017-2018 added one new product each year, 2019-2020 added 3-4 per year.

Up until the Dash user event in late October 2021, Datadog had already released 3 new products into GA and 2 into beta this year. Once a product is released to GA, it is included on the pricing page. Datadog now has 13 top-level products listed on the pricing page, with some top-level products containing multiple add-ons that carry their own pricing. This total is up from 9 products at the end of 2020.

As I will elaborate in the product development section below, Dash delivered many new product offerings to the platform. Three products were promoted to the pricing page: one is stand-alone and two are folded into existing top-level products. Four new products were brought to beta status. These would be candidates for a future GA release and pricing in the first half of 2022.

These additions to the platform from Dash, plus what was introduced earlier in 2021, represent continued acceleration of the product development delivery cadence. Datadog is simply bringing more product to customers each year. These represent a combination of extensions in existing product categories, and entry into whole new categories. Both of these have the end effect of increasing Datadog’s addressable market. Go-to-market efficiencies then ensure that Datadog can gradually fill out each market that it enters. This motion helps sustain a high DBNRR for a longer period of time, as new product adoption from customers backfills the eventual decay in older product offering growth.

Product Development

Datadog increases revenue by driving expansion in three dimensions. This isn’t necessarily unique to Datadog, but they have refined their approach to deliver very consistent high growth at scale. The three dimensions are landing new customers, building new products based on perceived demand and growing adoption of new product subscriptions by existing customers. ARR growth is powered by the combination of these three. As long as Datadog expands along these dimensions, investors can expect their high rate of revenue growth to continue.

Datadog’s Revenue Growth Model, Author’s Diagram

Customer additions and sustained annual growth in spend for existing customers are the two foundational elements of a typical SaaS company’s “land and expand” model. Datadog adds another dimension to the expand contributor, which I think elevates them into a higher echelon of growth. Not only do they encourage existing customers to increase the number of products that they utilize, but Datadog continually adds new relevant products for them to consider. This magnifies the “expand” component of their growth model, which we could effectively consider to be something like “land and (expand x expand)” or expand squared. Datadog not only increases spend for existing customers by 30% or more each year (DBNRR over 130%), but they have been increasing the number of monetized product offerings on their platform by about the same annual rate.

Obviously, the incremental value of additional product subscriptions is captured by the net expansion rate. However, what I think is unique is that sustained growth in new product offerings makes that expansion rate much more durable. With most SaaS companies, the net expansion rate eventually diminishes, as customer adoption of products in each segment becomes saturated. However, if existing customers always have a new set of relevant products to consider, they can expand their spend for a longer period. This effect likely explains why Datadog has been able to maintain a DBNRR above 130% for the past 4 years.

Rather than launching high risk, overly optimistic bets, Datadog’s product additions represent logical extensions of practical functionality that customers need. These consist of both incremental offerings that round out a full observability suite and entries into adjacent market segments that leverage their core capabilities in data collection, processing and visualization. As an example, Infrastructure, APM, Logging, RUM, Database Monitoring, Network Monitoring, etc. all fall within the observability product category. Each of these additions represents a defensible product offering for software infrastructure and application monitoring. If an enterprise delivering a digital experience at scale doesn’t have one of these solutions in place, they generally should. While Datadog leadership contends that many new deals are “greenfield”, they are often replacing an in-house or open source solution that requires customer resources to maintain. Customer expansion across the observability suite is justifiable spend, and CTOs already expect to allocate 5-10% of their cloud infrastructure budget to observability.

The same argument extends to Datadog’s new product categories. Infrastructure and application security, developer workflows, incident response and operational intelligence all represent functions that the same cohort of digital operators should address. As with observability, they may already be performing these functions with an existing internal solution. Or, they have a point solution from another vendor for just that function. With Datadog’s single agent, it is very easy for operators to activate Datadog’s solution in these new categories. Datadog’s scope of data collection often includes many of the signals necessary to provide valuable insights for operators responsible for these other product categories.

There are several benefits to this type of consolidation of vendor offerings:

  • Simplification. Each monitoring solution requires an agent. Having fewer agents running on servers improves performance and reduces configuration management overhead.
  • Shared View. Insights from application and infrastructure activity are automatically shared across all teams. DevOps, SecurityOps, product managers, business analysts and developers are all looking at the same data and responding to alerts in one interface. This is really important to appreciate, as it avoids the common inefficiency of switching between tools and the issue of speculating over which alerting system to trust (“my monitoring system doesn’t show an issue”).
  • Cost. While Datadog charges for each product offering separately, as customers scale overall spend, they can activate volume discounts. Since the customer is likely already allocating spend to another vendor, consolidation can lower some cost and reduce vendor management.

In order to capitalize on these advantages of a common platform and to sustain revenue growth, a high pace of product creation is needed to keep the expansion flywheel turning. While many SaaS companies, and some of Datadog’s peers, will add new products at a predictable annual pace, Datadog’s rate of new product introductions is accelerating. In the same way that we investors salivate over companies that are accelerating their annual revenue growth rates quarter over quarter, I get excited about an accelerating rate of product development. I believe the net effect on the company’s valuation is commensurate. For more background, interested readers can see my prior blog post on the power of expanding TAMs.

Datadog is demonstrating this trend. While they spent their first 5 years with just one product, Infrastructure, they have been adding an increasing number of products each year since 2017. This pace was highlighted in a slide during Datadog’s Investor Meeting in late October. I think it is noteworthy that the design of the graph resembles what we are used to seeing with financial metrics, like revenue growth or customer additions moving up and to the right. To my mind, this framing is intentional on leadership’s part and underscores their commitment to this third dimension of the company’s growth trajectory.

Datadog Product Release History, Investor Meeting, October 2021

This chart also reflects the benefit of increased spending on R&D. Datadog has been hiring engineers at a rapid clip over the past year. According to the 10-Q, on a GAAP basis, R&D spend increased by 100% year/year in Q3. The majority of that incremental spend went to headcount. This provides the fuel to continue launching new product offerings. These are often introduced first as a private beta for existing customers and then opened up to GA several months later.

Because of the background of their founders, Datadog is well-tuned to the needs of their customers, achieving product/market fit quickly. New products predictably ramp up in adoption, roughly proportional to their time in market. Long time products like Infrastructure, APM and Logging generate the majority of ARR at this point, but newer products like Synthetics, Network Monitoring, RUM and even Security are gradually adding customers and contributing to ARR. From their smaller base, the newer products experience hypergrowth, offsetting the gradual slowdown in growth contribution from the older products. This motion makes Datadog’s high revenue growth more durable over time.

A useful visual for the number of products in GA that are monetized with customers is Datadog’s pricing page. On it, they display a “honeycomb” grid of their top-level product offerings. Each cell is clickable to return more details on a product and pricing levels. Some product categories have sub-offerings with their own incremental pricing. As an example, RUM has the core real user monitoring offering within it, but also now has the recently launched Session Replay tucked into the same top-level product offering with its own usage-based pricing.

Datadog Product Graphic, Pricing Page

The honeycomb grid now has 13 top-level product offerings. This is up from 9 at the end of 2020 and 12 just before the Dash user conference. That represents 44% growth in 2021. While I realize that isn’t a recognized industry KPI, I think it provides a strong signal for investors that a high DBNRR is durable. Within these 13 listings, several of the top-level products offer multiple subscription options, each representing a relevant upsell opportunity.

Adding a new product subscription is a self-service process for the customer. They can generally activate the new product module within the admin section of their customer dashboard. Provisioning and billing are automatic. Datadog sales personnel do not need to be involved in the upsell. This drives sales efficiency, supporting the gradual reduction in S&M spend as a percentage of revenue that I highlighted earlier, and keeps sales resources focused on landing new customers.

Dash Conference

Datadog held their annual Dash user conference in late October, which generated a flurry of product announcements. A number of beta products were promoted to GA and several brand new product offerings were introduced into a beta mode. These products spanned not just observability, but expanded Datadog’s foothold into new categories like security and developer workflows. They even introduced a whole new category of cloud cost management, which represents a product segment that has supported a number of stand-alone companies offering point solutions for cloud cost optimization. Given Datadog’s access to all of a customer’s cloud application and infrastructure activity data, this product introduction represents a natural extension.

Product Announcements at Dash, Datadog Investor Meeting, October 2021

The chart above from the Investor Meeting during Dash highlights all of the product announcements. As you can see, there were a lot, and this was on top of several major product announcements earlier in 2021. Let’s spend a little time going through these offerings. I will provide a short summary of each product’s impact, with links to relevant materials for in-depth study. Beyond the major releases represented in the graphic, there were also numerous other minor releases. Full details are provided in a summary blog post from Datadog.

CI Visibility

In July, Datadog announced the beta release of a new product that represents their foray into applying observability to developer workflows. Their first step is CI Visibility, which provides visibility into the development organization’s CI/CD (Continuous Integration / Continuous Delivery) workflows. As we saw with Security, I think this represents the first of several products that target a “shift left” towards development activities and fold them into the broader observability trend. I wouldn’t be surprised to see a suite of products built up over time that target developers.

CI/CD bridges the gap between development and operations activities, by adding automation to the building, testing and deployment of software applications. Before the CI/CD movement, these processes were handled manually by Build and Release Engineers. Over time, engineering teams realized that many of these processes could be scripted and made repeatable, allowing them to scale and remove human error. This laid the groundwork for modern day DevOps practices that involve continuous development, testing, integration, and deployment of software applications.

The tie with observability is natural because the only way to scale these automated processes and reach the end goal of frequent releases is to have full stack infrastructure and application monitoring in place. That way, if a software release triggers a production outage, the monitoring system will identify the cause and can connect that to the last software release. The CI/CD practice forms the basis of modern day DevOps operations, because it represents the membrane between development and operations.

Datadog’s CI Visibility provides deep insight into the performance of an organization’s CI pipelines, making it easy to identify issues. With the rise of micro-services and the breaking up of the application monolith, engineering organizations often have many different code projects progressing through the CI/CD process in parallel. Keeping these jobs running smoothly can be a major undertaking for a large engineering organization. CI Visibility monitors this build, test and deploy activity and will flag jobs or functional tests that fail frequently. This insight can help DevOps personnel make adjustments to improve the success rate. That might mean adding more error-catching code around the build steps or updating tests to improve stability.

One component of the product, CI Pipeline Visibility, generates key performance metrics to help understand which pipelines, build stages, or jobs are run the most. It also tracks how often they fail, and how long they take to complete. Datadog visualizes this information in a customizable out-of-the-box Pipelines dashboard. This provides DevOps teams with a high-level overview of performance across all pipelines, build stages, and jobs. Teams can track these trends to identify where to focus troubleshooting efforts.
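As a rough sketch of the kind of aggregation such a dashboard performs, the snippet below computes failure rates and median durations from raw pipeline run records. This is illustrative logic only, with hypothetical pipeline names, not Datadog’s implementation or API.

```python
from collections import defaultdict
from statistics import median

# Hypothetical pipeline run records; in a real setup these would come from
# CI provider events rather than a hard-coded list.
runs = [
    {"pipeline": "checkout-service", "status": "success", "duration_s": 312},
    {"pipeline": "checkout-service", "status": "failed",  "duration_s": 451},
    {"pipeline": "checkout-service", "status": "success", "duration_s": 298},
    {"pipeline": "search-api",       "status": "failed",  "duration_s": 127},
    {"pipeline": "search-api",       "status": "failed",  "duration_s": 119},
]

by_pipeline = defaultdict(list)
for run in runs:
    by_pipeline[run["pipeline"]].append(run)

# Surface the pipelines that run most often, fail most often and take longest.
for name, records in by_pipeline.items():
    failure_rate = sum(r["status"] == "failed" for r in records) / len(records)
    med_duration = median(r["duration_s"] for r in records)
    print(f"{name}: {len(records)} runs, {failure_rate:.0%} failure rate, "
          f"median duration {med_duration}s")
```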

The other component, CI Testing Visibility, allows teams to easily monitor tests across all builds to surface common errors and visualize test performance over time to spot regressions. In the Testing Visibility page, operators can see each service’s test suites along with the corresponding branch, duration, and number of fails, passes, and skips. Datadog also tracks tests that both pass and fail on the same commit, surfacing flaky behavior that was previously unseen in the default branch.

Datadog Testing Visibility automatically instruments each test so operators can trace them from end to end without spending time reproducing test failures. For example, an operator can debug a flaky test by drilling into the test trace for more information. Using the flame graph, the user can easily find the point(s) of failure in a complex integration test. Clicking on a span with an error, they can examine the stack trace along with related error messages to understand what caused the test to fail in that instance.

CI Visibility leverages capabilities gained from a recent Datadog acquisition. In August 2020, Datadog announced that they had acquired Undefined Labs, which provides testing and observability capabilities for developer workflows before pushing to production. As Datadog’s products have traditionally been focused on the production environment, this acquisition extended their visibility into pre-production development and test cycles (shift left). Undefined Labs’ Scope tool is integrated into CI/CD platforms, like CircleCI and Jenkins, to enable developers to automatically take advantage of monitoring and testing capabilities within their existing workflows. Scope allows developers to execute unit tests, measure performance, tie test failures back to source code and view summarized results in a consolidated dashboard.

Screenshot from Undefined Labs Web Site

The Undefined Labs acquisition aligned Datadog more closely with developers during the design and test phases of software development, as opposed to just the post-release monitoring of the production environment. Datadog’s plan back in 2020 for absorbing the Undefined Labs product set was to sunset their existing products and rebuild them on the Datadog platform, so that full integration is realized from the start. The engineering team has been actively engaged in this work over the last year. The release of CI Visibility represents the first functional incorporation of the Undefined Labs capabilities into the Datadog platform.

Connecting pre-production with production represents a smart strategy by Datadog and provides significant value for engineering teams, as it combines DevOps with developer activities. Not only does this yield another source of product revenue, but it also represents a significant advantage over other observability and security vendors, addressing a broader set of DevSecOps use cases by adding code-level insights.

CI Visibility is now GA for all customers. This represents a top-level product on the pricing page. Drilling into the pricing details, it consists of two sub-products, each with pricing. Pipeline Visibility provides a view of performance for pipelines, builds and jobs. It costs $20 per user per month if billed annually. The other sub-product is Testing Visibility, which enables granular tracking of automated test performance. This is also being offered for $20 per user per month. It would make sense for most customers to subscribe to both services to monitor the effectiveness of their CI/CD processes.

Session Replay

This is a new product offering that was released to GA immediately. It falls under the Real User Monitoring top-level product and extends basic browser-based user monitoring to provide product managers with much deeper insight into user behavior. The top-level RUM offering measures the end-to-end experience of users on a company’s web and mobile applications, logging load times for various UI components and third-party service requests. This is performed from the user’s perspective, often by measuring activity directly from their browser or mobile device. As APM is performed on the server-side, RUM provides a useful view of the digital experience from the client-side. Issues with content delivery, UI rendering or third party services can manifest within the end user’s device. These kinds of issues create the same level of user frustration as an outage on the server side would.

However, oftentimes the “waterfall” view of all components in a page load lacks the context of what the user was actually doing on the page. As web pages have become increasingly interactive, using Javascript-powered rich UI elements, it can be difficult to recreate the user’s actual actions when an error is reported. Developers or product managers can waste a lot of time trying to reconstruct the sequence of clicks taken by the user.

Session Replay eliminates this guess work by providing a video-like replay of the user’s actual activity on the web page. It shows the movement of their mouse, mouse-overs, button clicks and keyboard entry. As they say, “a picture is worth a thousand words”. This view is invaluable for developers to identify edge cases or unexpected behaviors that are generating the errors being logged by basic user monitoring. Equally valuable is the insight it can provide product managers to review the usability of each web page, identifying points where the UI or steps required in a workflow are not clear. This allows product managers to optimize the UX to reach the desired business outcome faster, by sampling user session replays.

Session Replay is available as an add-on to Real User Monitoring. It is priced at $1.80 per thousand sessions per month. This is 4x the cost of basic browser-based user monitoring, which is fair given the amount of incremental data that must be stored per user session to recreate all their interactions. A customer would likely subscribe to both the basic user monitoring for summary metrics and session replay for detailed troubleshooting.
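To make the pricing relationship concrete, here is a quick cost sketch. The ~$0.45 per thousand sessions for basic browser monitoring is simply inferred from the “4x” relationship above, so treat it as an approximation rather than a quoted price.

```python
# Illustrative monthly cost for browser RUM plus Session Replay at a given
# traffic level. The basic RUM rate is inferred from the 4x relationship
# described above, not taken from a price sheet.
sessions_per_month = 2_000_000

session_replay_per_1k = 1.80                    # published Session Replay price
basic_rum_per_1k = session_replay_per_1k / 4    # implied ~$0.45 per 1k sessions

rum_cost = sessions_per_month / 1000 * basic_rum_per_1k
replay_cost = sessions_per_month / 1000 * session_replay_per_1k
print(f"Basic RUM:      ${rum_cost:,.0f}/month")     # $900
print(f"Session Replay: ${replay_cost:,.0f}/month")  # $3,600
```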

Funnel Analysis

Funnel analysis is a new feature within Real User Monitoring. It provides users with the ability to leverage the detailed data captured by RUM to reconstruct key customer workflows on a web site or mobile app. These usually represent the completion of a series of steps, made up of a sequence of web pages. These sequences mirror real-world experiences like account creation, e-commerce shopping and check-out.

Product managers will find this new capability particularly valuable, as it provides insight into how users are navigating through an application’s UI. This information can be used to flag sources of friction for further investigation. Combined with Session Replay, the product manager can get both a macro summary of user progression to each step in the funnel and a micro view of any user’s actual session through replay. This enables the product manager to make UI changes to improve funnel conversion.
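Conceptually, a funnel view reduces to counting how many sessions reach each successive step and computing step-over-step conversion. A minimal sketch with hypothetical step names (not Datadog’s data model):

```python
# Minimal funnel-conversion calculation over hypothetical session data.
# Each session lists the steps a user completed, in order.
sessions = [
    ["home", "product", "cart", "checkout"],
    ["home", "product"],
    ["home", "product", "cart"],
    ["home"],
]

funnel = ["home", "product", "cart", "checkout"]
reached_prev = len(sessions)
for step in funnel:
    reached = sum(step in s for s in sessions)
    conversion = reached / reached_prev if reached_prev else 0.0
    print(f"{step:<9} reached by {reached} sessions "
          f"({conversion:.0%} of previous step)")
    reached_prev = reached
```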

Datadog Funnel Analysis, Datadog Blog Post

Funnel Analysis represents a noteworthy product addition because it isn’t a stand-alone paid product. It represents a “free” extension to the existing Real User Monitoring product. In order to maintain competitiveness with peer offerings, Datadog must continue to expand the core capabilities of each of its products.

Network Device Monitoring

In 2019, Datadog introduced Network Performance Monitoring. This gives customers with cloud-based infrastructure a view of network traffic and bottlenecks between their server instances and other services. In this case, the network devices themselves are managed by the cloud infrastructure provider and are not directly monitored by the Datadog system. Customers that maintain an on-prem or hybrid infrastructure, however, didn’t have an option to monitor the performance of the network devices they own.

Network Device Monitoring addresses this gap. Customers can add Datadog monitoring directly onto the network equipment they own, like firewalls, routers and switches. Users can view device-level health and performance metrics across all layers of network activity from the device perspective. This allows operators to quickly isolate and troubleshoot network-wide outages. For example, if many devices suddenly lose connectivity, it becomes easy to trace the problem back to an upstream switch or router that may be experiencing an issue.

Network Device Monitoring provides a detailed list of every network device in a customer’s installation. For each device, the user can view the state of all its network interfaces, uptime, tags and metadata, as well as inbound/outbound network throughput. It also provides a view of bandwidth utilization by interface, making it easy for operators to identify changes over time. If network utilization increases over time on a particular interface, users can investigate causes and consider network configuration changes to accommodate more bandwidth. Error and packet drop metrics can help isolate problematic network connections.

This device level data also enhances the view into potential security issues. Failed network device logins, sudden changes in traffic or a new error type might indicate malicious behavior. As the Datadog platform is shared across DevSecOps, security teams have access to the full range of application and infrastructure activity data. This provides a holistic view to combine signals to identify potential threats. Security operators can also automate their checks for malicious behavior by configuring anomaly monitors, which use machine learning to map typical network behavior and alert on deviation from expected patterns.
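A simplified version of this kind of anomaly monitor is a rolling baseline with a deviation threshold. The sketch below uses a plain z-score against trailing samples; Datadog’s actual anomaly detection algorithms (seasonal, machine-learning based) are considerably more sophisticated.

```python
from statistics import mean, stdev

# Trailing throughput samples for a network interface (Mbps), then a spike.
samples = [110, 118, 105, 112, 121, 109, 115, 108, 119, 113, 420]

window, latest = samples[:-1], samples[-1]
baseline, spread = mean(window), stdev(window)
z_score = (latest - baseline) / spread

if abs(z_score) > 3:      # flag anything more than 3 standard deviations out
    print(f"Anomaly: {latest} Mbps vs baseline {baseline:.0f} Mbps "
          f"(z-score {z_score:.1f})")
```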

Network Device Monitoring is offered as part of the Network Monitoring top-level product. This includes the existing Network Performance Monitoring product for $5 per host per month. Network Device Monitoring is now available for $7 per device per month. What I think is most interesting about Network Device Monitoring is that it provides Datadog with a solution for customers who provision and manage their own network equipment. These are generally on-premise or hybrid cloud configurations, a market which Datadog previously did not address.

Datadog Apps

Datadog Apps provides an additional level of customization to the previously launched Datadog Marketplace. With Apps, developers can build and share UI elements that utilize data from other popular third-party services and display that information in a summarized view within the Datadog user interface. This supports Datadog’s intent to keep all user attention within the Datadog interface and reduces some of the switching between tools to fully manage a product environment.

Datadog Apps Example Screenshot, Blog Post

The initial release of Datadog Apps included widgets built by launch partners from a number of popular services. Some examples are Embrace, Fairwinds, Harness, Rookout, Shoreline, LaunchDarkly and PagerDuty. Specific to the latter two, the PagerDuty App provides a custom widget that lists all of a customer’s monitored business services. It displays active incidents associated with them, ordered by severity. As DevOps teams address and resolve issues, they can make updates to the PagerDuty status directly from the Datadog dashboard.

For LaunchDarkly, the App adds feature flag display and control to the Datadog interface. This is particularly convenient, as operators can activate a feature flag and then immediately monitor the application impact, all within the same view. As an example use case, if a new feature might impact database performance, the operator can activate the feature flag and immediately track database resource utilization. If a problem arises, they can immediately turn off the feature and conduct further investigation with the development team.

Datadog Apps doesn’t have a pricing model associated with it. The intent is to make the Datadog toolset the central location for DevSecOps personnel, by allowing them to monitor and control activity for related third-party services in one place. Apps are available to existing Datadog customers.

Online Archives

Online Archives is a new capability being introduced in limited availability for existing customers, which supplements Datadog’s Log Management product set. Online Archives provides customers with another option to keep log data available for querying, but without the higher cost of indexing it. Indexing is necessary to support real-time queries for immediate display of performance graphs and generating alerts. The value of indexed data diminishes quickly over time for application performance troubleshooting, as the focus window is usually on what is happening currently, or at least over the past week.

There are use cases, though, where querying data over a longer period is useful, generally on an ad hoc basis. These queries are usually associated with historical investigations, like researching a security incident, compliance audit or prior service outage. In these cases, the data needs to be available, but a longer query response time is acceptable. This log data can be stored in a state where it isn’t indexed, but can be readily accessed.

The Online Archive state provides this capability. Individual log streams can be designated for archiving over any time period up to 15 months, which is usually long enough for most annual compliance reviews or security audits. Operators can perform queries over this data in the same location as they perform searches on indexed logs. While Datadog hasn’t published pricing for Online Archives, they imply the cost will be about the same as a month’s worth of indexed log data. If that is the case, Online Archives would represent an upsell.

Observability Pipelines

Observability Pipelines is being offered through a private beta to existing customers. This represents another product for companies that run their own server and network infrastructure and want full control over data collection for observability. In this case, the customer is basically installing a copy of Datadog’s data collection and processing services to run on their own hardware or on a cloud instance that they control. This configuration would appeal to customers with sensitive data or governance requirements that prohibit them from allowing a SaaS company to access their data.

The customer will be able to install Datadog agents on their systems, and then route the data collected to any desired service, whether inside or outside of their organization. This data can even be routed to Datadog or other observability providers for visualization. After data is collected, the user can determine what data transformations are desired, including filtering, sampling, enrichment and even encryption. The data can be ported into security monitoring tools as well.

Datadog hasn’t published pricing for this product yet. Once it moves out of beta into GA, we should get a sense for the incremental revenue contribution.

Application Security

Application Security had been announced previously, but was highlighted at Dash and is now available in private beta. The service provides protection against application-level threats by identifying and blocking attacks that target code-level vulnerabilities. Examples are SQL injections and cross-site scripting (XSS) exploits. These typically involve the user interface and data inputs associated with an application. The hacker will try to manipulate user input processing routines to gain access to data from other users. This type of activity can be prevented by actively monitoring the inputs and outputs of the web or mobile application, and its underlying APIs that provide data exchange between the client and the server.
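To ground the SQL injection example: the vulnerability arises when user input is concatenated directly into a query string, and the standard mitigation is a parameterized query. The snippet below illustrates the vulnerability class itself, not Datadog’s detection logic, which watches these inputs at runtime.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "' OR '1'='1"   # classic injection payload

# Vulnerable: concatenating user input into the SQL string returns every row.
vulnerable = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'").fetchall()

# Safe: a parameterized query treats the payload as a literal string value.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()

print("vulnerable query returned:", vulnerable)   # leaks all rows
print("parameterized query returned:", safe)      # returns nothing
```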

This capability leverages the acquisition of Sqreen, announced in February and closed in April.  Sqreen is a SaaS-based security platform that enables enterprises to detect, block and respond to application level attacks. To do this, they provide a solution for runtime application protection (RASP). In addition to RASP, Sqreen’s solution includes a web application firewall (WAF). Security issues in the application layer are challenging to manage, as the owner needs to allow legitimate traffic, while blocking nefarious activity.

At the time of the acquisition, Datadog leadership signaled that they would be taking the technology behind Sqreen and incorporating that into the Datadog platform. With the beta release of Application Security, we have the output of that integration. Prior to the acquisition, Sqreen claimed to have over 800 customers. These likely offer some cross-sell opportunities for Datadog.

Sqreen Architecture, Web Site

These capabilities push Datadog further forward into active application protection and deeper into the security space. By having the Datadog agent on every infrastructure host observing activity at a granular level, Datadog can easily turn on these new security capabilities without requiring additional deployment by their customers. Having the agent on every device allows for more active monitoring of security-related context. It also offers the ability to take action to prevent further damage once malicious behavior is detected.

Universal Service Monitoring

This new offering provides basic monitoring of all services accessed by an application, providing full visibility across a complex software installation. With the proliferation of third-party services and micro-services, it is likely that applications may have a dependency on a service that hasn’t been instrumented for use with Datadog’s platform. This might be for a variety of reasons, including being run by a third-party or written in a language that is not compatible with Datadog’s agent. Regardless of the explanation, Universal Service Monitoring will provide DevOps personnel with a full view of the performance of all services accessed by an application.

When activated, Universal Service Monitoring examines all HTTP(S) requests by accessing the kernel network stack. It can collect a basic set of performance metrics, referred to in the industry as golden signals. These represent request counts, error rates and latency. This data can then be incorporated into the standard Service Map, which provides a global view of all services being accessed across a software installation. Operators can set alerts, measure service level objectives and track releases using this data. This allows operators to stay ahead of new service activations, getting visibility into them even before they can be instrumented for use with other, more granular Datadog services like Distributed Tracing.
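For concreteness, the golden signals being described are simple aggregates over request records. A minimal sketch with hypothetical data:

```python
from statistics import quantiles

# Hypothetical request records observed for an uninstrumented service.
requests = [
    {"status": 200, "latency_ms": 42},
    {"status": 200, "latency_ms": 55},
    {"status": 500, "latency_ms": 310},
    {"status": 200, "latency_ms": 61},
    {"status": 404, "latency_ms": 18},
]

request_count = len(requests)
error_rate = sum(r["status"] >= 500 for r in requests) / request_count
p95_latency = quantiles([r["latency_ms"] for r in requests], n=20)[-1]  # 95th pct

print(f"requests: {request_count}, error rate: {error_rate:.0%}, "
      f"p95 latency: {p95_latency:.0f}ms")
```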

Universal Service Monitoring is being introduced as a private beta. Once moved to GA, it would justify an incremental pricing scheme, as a supplement to infrastructure monitoring or APM.

Cloud Cost Management

Last but not least, Datadog introduced a new product for an adjacent market that represents a thoughtful extension which leverages data already available to Datadog. This is cloud utilization cost management. As cloud infrastructure has become popular over the last decade, enterprises have been challenged to manage and optimize their escalating costs. Oftentimes, without good visibility, a cloud deployment may include a lot of wasted spend. For example, compute resources may be underutilized or databases over-provisioned.

A number of stand-alone companies emerged over the last several years focused on just this problem. They perform analysis of an enterprise’s cloud footprint and then make recommendations to reduce cost. Yet, many of these services rely on the same utilization data that Datadog is already collecting, like host metadata, CPU rates, storage utilization, throughput, etc. For Datadog to offer this kind of cost management represents a fairly straightforward extension into a recognized budget line item.

Datadog’s Cloud Cost Management solution provides a single view of operational data and cloud service costs. Finance teams can even get access to the view in order to attribute costs to the correct departments internally. Developers can get real-time feedback on the incremental hosting cost of new application features. Finally, managers can use the data to identify opportunities for cost optimization by refactoring certain software components or right-sizing the infrastructure resources for an application.
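The core mechanic here is joining the utilization metrics Datadog already collects with per-resource cost and flagging waste. A toy right-sizing check, with hypothetical host names and prices:

```python
# Toy right-sizing check: join utilization with hourly cost and flag hosts
# whose average CPU suggests they are over-provisioned. Hosts and prices
# below are hypothetical.
HOURS_PER_MONTH = 730

hosts = [
    {"name": "web-1",   "avg_cpu": 0.62, "cost_per_hour": 0.192},
    {"name": "batch-3", "avg_cpu": 0.07, "cost_per_hour": 0.384},
    {"name": "db-2",    "avg_cpu": 0.48, "cost_per_hour": 0.768},
]

for host in hosts:
    monthly_cost = host["cost_per_hour"] * HOURS_PER_MONTH
    if host["avg_cpu"] < 0.15:   # arbitrary under-utilization threshold
        print(f"{host['name']}: {host['avg_cpu']:.0%} avg CPU, "
              f"${monthly_cost:,.0f}/month -> candidate to downsize "
              f"(~${monthly_cost / 2:,.0f}/month potential savings)")
```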

Cloud Cost Management is being introduced as a private beta for existing customers. Given that this is already a known industry service, it will presumably represent a new product offering with pricing when eventually moved into GA.

Future Development

Datadog is expanding their platform footprint so quickly that investors may be left to wonder what is next. The continued increase in R&D investment implies that Datadog will maintain this pace. R&D spend is growing faster than revenue, hitting 100% year/year growth in Q3 on a GAAP basis, and now exceeds S&M spend by about 30% on a non-GAAP basis. Leadership remains optimistic about other areas for future expansion.

As we speculate about other market opportunities, I think we can consider two themes. The first is observability itself. The idea of observability is not constrained to just infrastructure and application monitoring. In fact, the early origins of the term observability weren’t associated with modern software application infrastructure at all. Rather, it derived from control theory and captures the idea that the internal state of any system can be determined by measuring its outputs. A system can be made observable if it can be sufficiently instrumented such that measurement of external outputs predicts internal behavior with sufficient accuracy.

With this general definition, observability can be applied (and originally was) to other domains outside of software infrastructure. Any process that relies on repeatability, quality controls and predictable outcomes can be a candidate for observability solutions. That might apply to manufacturing lines, telecommunications, health care or even political campaigns. Within a business context, a sales funnel, marketing campaign, customer service program or recruiting effort can be made observable. In all these cases, the business process runs through a system of inputs and outputs. If we instrument it correctly and measure the appropriate signals, then its internal state becomes more clear. More importantly, the likelihood of reaching an expected business outcome becomes more predictable.

The term does apply neatly to the software infrastructure and code that drive modern software applications. If DevOps teams can identify and instrument the right measures of application performance, they can reliably predict the stability of the system (a digital experience) as a whole. This concept gave rise to the use of observability to describe software infrastructure monitoring. And, because of overuse, most industry participants assume the two are inextricably linked.

I realize this is a bit abstract, but the distinction is important for investors to appreciate. I make this point for two reasons. First, if another technology provider announces an offering for observability, that does not automatically mean they are targeting the core market of Datadog and other infrastructure monitoring vendors. They could very well be intending to make another system observable. As the term observability is applied to different contexts, investors can keep this in mind to avoid a knee jerk reaction around competitive announcements.

Second, and along the same lines, Datadog can take the concept of observability and apply it to other contexts. The most obvious would be in digital business operations. Specifically, a move into business analytics, helping digital businesses identify and measure the signals that would indicate a healthy digital operation. In addition to tying these measures to application uptime and responsiveness, they could correlate them to other desired business outcomes, like sell-through rates, marketing funnels and customer satisfaction. These types of business functions can be made “observable” and Datadog has the methodology and data processing platform to do that.

The new product offering Funnel Analysis likely represents a first step in this direction. This offering would be primarily used by product managers to measure the effectiveness of a digital experience through a sequence of user steps. Session Replay provides some supporting context here as well. Given that Datadog already has the majority of this instrumentation in place, it is probable they will leverage these analytics to provide business operations personnel with deeper insight, whether through business intelligence solutions or product analytics. They could even expand into newer categories, like digital optimization, with negligible incremental data collection.
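One plausible on-ramp for business-level signals today is Datadog’s custom metrics support via DogStatsD. The sketch below uses the open source datadogpy client; the metric name, tags and the notion of wiring them into funnel-style reporting are my own assumptions for illustration, not a description of how Funnel Analysis is actually implemented (which, per the above, leans on instrumentation Datadog already has in place).

```python
from datadog import initialize, statsd

# Assumes a local Datadog Agent listening for DogStatsD traffic on the default port.
initialize(statsd_host="127.0.0.1", statsd_port=8125)

def record_funnel_step(step: str, user_segment: str) -> None:
    """Emit a business-level signal as a custom metric (names/tags are hypothetical)."""
    statsd.increment(
        "shop.funnel.step",
        tags=[f"step:{step}", f"segment:{user_segment}"],
    )

# Called from application code as users move through the experience.
record_funnel_step("add_to_cart", "returning")
record_funnel_step("purchase", "returning")
```

Because the agent and client libraries are already deployed, the marginal effort of emitting business signals like these is small, which is the crux of the expansion argument.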

The other theme that offers expansion opportunities for Datadog is the superset of DevSecOps itself. I explored this in depth in my Q1 review, but it’s worth revisiting. While Datadog’s guiding mission has been to assist with the convergence of Development, Security and Operations functions, that does not mean they have to remain within the intersection of those three spheres. Expanding from that core to provide tooling for each function more broadly opens up a lot of incremental product territory. While I think their offerings will still be grounded in the delivery of digitally enabled businesses built on modern software applications, as these businesses evolve, they will need more tooling to execute along each of these distinct functions.

Considering that “observability” can be applied to other business functions and that the superset of DevSecOps encompasses many parts of the back-office of a business, Datadog’s product development pipeline can continue to explore new areas. We can infer some indicators by looking at each of these functions in isolation.

  • Development. The process of designing, building, testing and releasing a digital experience.
  • Security. The process of monitoring the digital experience for threats, mitigating them and controlling access to sensitive information.
  • Operations. Running the digital experience, ensuring that it is meeting user expectations, achieving desired business outcomes and consuming company resources as expected.

If we extrapolate these general definitions of each function within DevSecOps and consider how to make them observable, it implies many additional product opportunities for Datadog. I created the diagram below in my Q1 review and updated it for Q2. For Q3, I have updated it again to reflect recent product additions and other possible directions. Items in bold represent new product categories entered within roughly the last 12 months.

Author’s Diagram, Updated from Q2 version with new additions. Bolded items reflect previous product launches.

This exercise makes the growth opportunities for Datadog more clear. In the past, investors might have been tempted to put Datadog in the box of application and infrastructure monitoring. This led to the assumption that the market opportunity was limited. With more entrants, competitors would be fighting over increasingly smaller slices of market share. However, Datadog’s continued expansion into areas outside of traditional application and infrastructure monitoring has made their growth ambitions clear. Their advantage is that these expansions all leverage data they are already collecting, adding incremental layers of analysis on top.

The addition of application and workload security was an obvious extension. Datadog has stepped into operational orchestration with Incident Management. The leadership team has mentioned future opportunities in business analytics on analyst calls. CI Visibility and the Undefined Labs acquisition provide a beachhead into developer processes and tooling. Cloud Cost Management is an interesting step toward bringing the Finance team into the cloud infrastructure management mix. It represents another example of Datadog applying observability to other business processes, in this case resource optimization and cost control.

Competitive Landscape

I have covered Datadog’s competitive position fairly extensively in past blog posts. These included my original write-up on Datadog and subsequent quarterly updates. Interested readers can track competitive activity through those posts. In the interest of brevity, I won’t repeat all of that here. Rather, I will provide some broad observations relative to competitive positioning and highlight notable activities.

At a high level, the competitive dynamic hasn’t changed for Datadog. If anything, their continued outperformance in revenue growth, operating leverage improvement and product development acceleration is creating more distance between Datadog and the publicly-traded modern observability providers. Included in the list of peers are Dynatrace, Splunk, New Relic and Elastic. In all cases, for the most recent set of quarterly results, Datadog exceeds peer performance across the operating metrics that I consider important. A snapshot is represented below (spend metrics are Non-GAAP).

Company | Rev Growth | S&M Spend | R&D Spend | R&D Increase
DDOG    | 75%        | 24%       | 31%       | 83%
DT      | 34%        | 33%       | 14%       | 33%
ESTC*   | 50%        | 39%       | 24%       | 22%
NEWR    | 18%        | 41%       | 20%       | 15%
SPLK*   | 23%        | 51%       | 30%       | 38%
Comparison of Observability Providers, Author’s Calculations. *ESTC and SPLK represent prior quarter.

As discussed, Datadog accelerated year/year revenue growth in Q3 by 800 basis points (8 percentage points). This sequential acceleration was greater than that of any peer. Additionally, Datadog increased R&D spend by the largest percentage. While comparisons of product development velocity are somewhat subjective, if we assume each company has optimized their R&D output, then Datadog should continue to grow their product reach the fastest. Finally, Datadog has the highest sales efficiency, given that S&M spend represents the lowest percentage of revenue. This implies that spend expansion from existing customers is driving the majority of revenue growth with little incremental sales effort.

Splunk pre-announced their Q3 results on November 15th with a press release about their CEO transition. For the upcoming report, they expect to deliver 19% year/year revenue growth, which represents a deceleration from the previous quarter. Cloud ARR growth will be 75% year/year, slightly above the previous quarter. Since cloud makes up only about 36% of total revenue, we need to see where cloud revenue growth lands once cloud crosses into the majority of total revenue. Presumably, some portion of that cloud growth comes from existing customers shifting spend rather than net-new business, so that transition point will provide a truer view of Splunk’s overall growth potential.

Elastic accelerated revenue growth last quarter by 600 basis points, from 44% in Q4 to 50% in the most recent quarter. Cloud revenue grew by 89% year/year and now makes up 32% of total revenue. Like Splunk, Elastic is undergoing a transition to cloud-hosted offerings, which are growing much faster than overall revenue. Their cloud revenue growth accelerated by 12 percentage points in the most recent quarter, from 77% year/year in Q4. This is a nice increase, showing the appeal of Elastic’s platform. I will be watching Elastic’s growth closely as cloud contributes a greater percentage of overall revenue. I have covered Elastic in previous blog posts and appreciate the breadth and extensibility of their platform.

At this point, we could consider the observability market to be commoditizing a bit. The result is that observability vendors are increasingly evaluated on the breadth of their platform offering. Point solutions that address just log analysis, APM, network monitoring or synthetics are no longer tenable. This explains some of the recent acquisitions, where the acquired company addressed just a couple of use cases. Since most point offerings look the same, vendors are judged on the number of feature solutions they support, with the broadest offering generally winning. This development favors the larger vendors, which have the R&D budget or balance sheet to add all features through internal development or acquisitions.

Going forward, I expect this to persist. The leading vendors will continue to expand their offerings to check more of the boxes on a CTO/CIO observability wish list. Given that most observability solutions have some set-up cost (agent installation, data collection, operator training, alert configuration, etc.), consolidating on fewer providers is more efficient where features are similar. It also simplifies issue investigation by reducing the number of monitoring systems to check, since toggling between multiple monitoring tools is not efficient for DevSecOps personnel. These trends further support the hegemony of the largest vendors and create a formidable barrier to entry for new entrants. It’s hard to win market share with just a couple of point solutions.

This explains why Datadog (and its competitors) have been rushing to build out so many new offerings. With its outsized investment in R&D and well-tuned product development process, Datadog is in a favorable position. This is now primarily an execution game and Datadog is in the pole position. They are expanding their platform faster than any of the major competitors. They are also growing revenue faster. And they continue to improve their competitive position in each category.

Datadog has the right foundation. Their founders lived the DevOps problem before starting Datadog and are still running the company. They intuitively know what products will meet customer demand at the right point in time. Datadog’s start in infrastructure ensured their agent landed on every component of a customer’s software installation. And their focus on digital disruptors provides a high growth customer base to fuel rapid expansion.

From this core, Datadog has constructed a finely-tuned flywheel of product development and go-to-market functions. They apply the concept of land and expand with military precision, with a clear understanding of the drivers and watch-outs. They have also leaned into product innovation heavily, constructing a product offering that can be easily extended into adjacent markets with a pricing model that ties incremental value to cost. New product offerings cleanly replace inefficient in-house efforts or displace point solutions.

Leadership Changes

Speaking of Datadog’s founders, several of Datadog’s competitors announced leadership changes over the past quarter. All companies eventually have to replace their founder or CEO, but in these cases, the changes seem disruptive and introduce execution risk. Because the CEO sets performance expectations for the whole company, momentum can stall, at the very least, while the organization re-aligns. Witness Fastly’s CEO transition in 2020, which is still playing out.

Splunk

On November 15th, Splunk announced that its current CEO, Doug Merritt, would be leaving the company. Graham Smith, Chairman of the Board of Directors, immediately stepped in as interim CEO. Merritt will remain with the company in “an advisory role to ensure a smooth transition.”

Merritt had been CEO for the last 6 years and grew the company about 10x in recurring revenue over that time. The Board will be looking for a new CEO who can scale operations for a multi-billion dollar company. The suddenness of this change was surprising and coincided with a fairly significant deceleration in revenue growth for the upcoming quarterly report. The market responded by pushing the stock down 18% that day. SPLK is now down 22.5% year to date.

Dynatrace

Also on November 15th, Dynatrace announced that long-time CEO John Van Siclen plans to retire in December. In their case, the Dynatrace Board of Directors already had a replacement lined up in Rick McConnell, currently President and GM of the Security Technology Group at Akamai. Van Siclen will remain as a consultant to the company through May 31, 2022 to facilitate the CEO transition. Van Siclen has led Dynatrace since 2008, from a $5M start-up to approaching $1B in annual recurring revenue, and took the company public in the summer of 2019.

I can understand the motivation for this change. After 13 years, it is fair for a CEO to retire, and the Board already had a replacement lined up. However, I am not very excited about the replacement. McConnell has held President roles over the Product, Web and Security divisions at different stages during the past 10 years at Akamai. Yet, in that decade, Akamai has only grown revenue by 3.5x. I would argue that in 2011, Akamai was a leading provider of CDN, DDoS mitigation and security solutions. Now, they are considered a legacy provider that is hardly innovating.

Both Splunk and Dynatrace were already losing ground to Datadog. Leadership changes of this significance make it more difficult to rally the team to “catch up”. I anticipate these transitions may further slow product and go-to-market momentum for these competitors.

Crowdstrike and Humio

In prior blog posts, I discussed Crowdstrike’s acquisition of log management vendor, Humio, in February 2021. Humio provides a highly scalable streaming log management platform, which has applications for security monitoring and observability. The platform can ingest any type of log data, like system logs, metrics, traces, etc. and rapidly aggregate them for visualization. 

Near term, Humio enhances Crowdstrike’s back-end data processing and provides more data sources to inform the effectiveness of their security solution. Humio brings log management technology to allow Crowdstrike to rapidly ingest logs from many sources and surface insights for security monitoring. The initial planned application is to enhance the performance and reach of Crowdstrike’s EDR product to address what the industry is now calling eXtended Detection and Response (XDR). XDR essentially builds on the same capabilities enabled by EDR, but casts a wider net of data collection across a customer’s infrastructure.

Crowdstrike is also making Humio’s log analysis and management capabilities available to customers for other use cases. These traditionally fall into the observability space, with a focus on system and application logs. Crowdstrike emphasized the highly scalable data ingestion and rapid query times, due to Humio’s internal architecture (index-free). Users can log any type of data, both structured and unstructured, and benefit from advanced compression to keep costs low.

Humio’s capabilities could be leveraged across multiple product categories. Within security, Humio could compete with other SIEM solutions. It has also been applied to basic end user behavior analytics (EUBA). During Crowdstrike’s Investor Product Briefing in October, leadership provided some updates on Humio sales activity.

Crowdstrike Investor Briefing, October 2021

They highlighted three recent customer wins for Humio. Interestingly, all of them represented displacements of the open source Elasticsearch package (ELK) or Elastic itself. This is understandable, as Elasticsearch is the most directly comparable product set for large scale logging use cases on a stand-alone basis. Splunk would also fall into this category, where a company might be using one of the two vendors just for log management and analysis for security and/or observability use cases. For these customer wins, leadership claims that Humio significantly increased data ingestion capacity and reduced query times.

I think Crowdstrike can continue to pick up log management business, particularly where those logs are being ingested to perform security monitoring. As it relates to Datadog, I think Humio and Crowdstrike are far from offering a full-featured application observability platform. Humio is primarily used for log analysis at this point, lacking depth in APM, infrastructure monitoring, RUM, Synthetics, Databases, network, etc. It’s possible these will become product focus areas in the future.

For now, Humio represents a solid addition to Crowdstrike’s platform for heavy log analysis with a security focus and is likely being used to enhance Crowdstrike’s own data processing pipelines. Access to new log data sources is enhancing their EDR offering, evolving it toward the newer XDR label that seems to be gaining traction in the industry. Humio should also continue to generate incremental revenue opportunities from customers seeking a better point solution for log management.

Paradigm Shifts

Finally, we always need to watch out for complete paradigm shifts that might evaporate a company’s market share, separate from a direct competitive threat. In this regard for Datadog, I am watching activity in the “metaverse” and Web3. I have discussed these potential exogenous threats in other blog posts and on the Colossus Business Breakdowns podcast. These two areas seem to be gaining more momentum as 2021 progresses. While I don’t see a material risk to Datadog’s business at this point, these are the types of disruptions that can slowly (and then quickly) erode a software provider’s leadership.

In a typical innovator’s dilemma, the industry leader will focus on their current stable of customers and known competitors. In the background, however, their industry may be evolving to a different business model or method of engaging with end users. The reinvented version of the industry may no longer rely on the legacy leader’s services, resulting in a slow erosion of spend while the leader stays focused on the needs of its current large customers. Once the risk becomes apparent, though, it can be too late for the legacy leader to innovate quickly enough to address the new market dynamic.

As it relates to the metaverse and Web3, it’s possible that a “native” solution for observability within these experiences provides more relevant capabilities than Datadog can deliver, or builds direct relationships with the developers working on them. If Datadog (or any software provider) ignores these opportunities, it is possible they get shut out of an important future market.

For the metaverse, the risk is that the platform provider rolls their own observability solution. At a high level, a metaverse provider is delivering a platform on which developers build new experiences and applications for the users of that metaverse. This platform may be constructed in such a way that observability of these metaverse applications is modeled or accomplished differently from the current practice among web apps. Or, the metaverse platform provider decides to sell their own observability solution, locking out third-parties. This would be akin to the early days of the hyperscalers, who aggressively rolled their own versions of common developer services before today’s specialists emerged.

In the same way, Web3 applications may make traditional, centralized cloud infrastructure observability providers less relevant. While developer expectations for dapp performance, error monitoring and security will likely be similar to Web2 considerations, the underlying infrastructure has shifted, at least for those portions of the decentralized application that interface with the blockchain. Blockchain-based networks inherently run on distributed infrastructure, making them harder to instrument in a consistent way. Performance data could be collected through remote service calls and APIs. However, a dapp hosting solution may bundle a monitoring service into their platform, or dapp developers may pursue other methods for achieving observability.
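As a sense of what “remote service calls and APIs” might look like in practice, here is a minimal sketch that polls a node endpoint over standard Ethereum JSON-RPC and derives a couple of health signals. The endpoint URL is a placeholder and the metric names are my own; this is an illustration of the instrumentation surface, not any vendor’s actual approach.

```python
import time
import requests

# Placeholder endpoint; replace with a real node or provider RPC URL.
RPC_URL = "https://example-node.invalid/v2/<api-key>"

def sample_node_health(rpc_url: str) -> dict:
    """Poll a node via standard Ethereum JSON-RPC and derive simple health signals."""
    payload = {"jsonrpc": "2.0", "method": "eth_blockNumber", "params": [], "id": 1}
    start = time.monotonic()
    resp = requests.post(rpc_url, json=payload, timeout=5)
    latency_ms = (time.monotonic() - start) * 1000
    resp.raise_for_status()
    block_number = int(resp.json()["result"], 16)  # result is a hex string
    return {"rpc.latency_ms": latency_ms, "chain.head_block": block_number}

# Signals like these could be forwarded to any metrics backend and alerted on,
# for example a stalled chain head or rising RPC latency.
```

The fact that this kind of telemetry can be collected directly at the hosting or provider layer is precisely why a monitoring service bundled into a dapp platform is such a plausible outcome.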

We are already seeing start-ups labeled as the “xxx of Web3”. Often, this is a reference to AWS or other cloud infrastructure providers. The same analogy could emerge for a doppelgänger of Datadog, being the “Datadog of Web3”. An example of the “AWS of blockchain” is Alchemy. They provide the leading platform for developers to build applications that access blockchain networks.

After launching in August 2020, Alchemy has rapidly grown its customer base. They power transactions across several blockchain vertical industries, including exchanges and DeFi projects. They have also become a technology provider for almost all major NFT platforms, including MakersPlace, OpenSea and Nifty Gateway. Finally, Fortune 500 companies launching blockchain-based offerings utilize their services, including stalwarts like Adobe and PWC. Alchemy just raised a $250M Series C round of funding at a valuation of $3.5B, from a who’s who of VC firms.

Alchemy Web Site

Alchemy provides a platform for blockchain developers. Like AWS, it includes a number of services that developers can consume to support managing blockchain-based applications. The primary service is called the Supernode, which provides an API-based software infrastructure facade that mirrors most functionality needed from a blockchain network node. This allows developers to focus on building their dapp, without needing to run their own node.

Alchemy Supernode Architecture, Supernode Documentation

In addition to the core Supernode functionality, Alchemy is building services to support application developers. These currently include Build, Notify and Monitor. Alchemy Build provides tools for visualizing request flows, debugging errors, transaction state monitoring and composing JSON-RPC calls. These are all recognized developer activities for building Web2 applications, but Alchemy Build applies the Web3 context.

Alchemy Notify captures important Web3 events and communicates them to the application user. Examples of notifications include mined transactions, dropped transactions, transfers to a user’s address and gas price threshold monitoring. Collecting this type of information from the blockchain and communicating it to the correct user requires a lot of overhead for the individual dapp developer. Yet, it can be critical functionality needed for a useful blockchain-based application.
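For illustration, a dapp backend consuming such notifications would typically expose a small webhook endpoint. The sketch below is a generic Flask handler; the payload field names and event types are purely hypothetical stand-ins, not Alchemy Notify’s documented schema.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhooks/notify", methods=["POST"])
def handle_notification():
    event = request.get_json(force=True)
    # Field names below are illustrative assumptions, not Alchemy's documented schema.
    event_type = event.get("type", "unknown")   # e.g. "MINED_TRANSACTION"
    tx_hash = event.get("txHash")
    address = event.get("toAddress")
    if event_type == "MINED_TRANSACTION":
        # A real dapp would push an in-app or email notification to the user who
        # owns `address`; here we simply log the event.
        app.logger.info("Tx %s for %s was mined", tx_hash, address)
    return jsonify({"received": True}), 200

if __name__ == "__main__":
    app.run(port=8080)
```

The value of a managed notification service is that it removes this polling and plumbing work from the individual dapp developer, which is exactly the overhead described above.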

Finally, Alchemy Monitor is a comprehensive suite of dashboards and alerts for app health, performance, and user behavior. These capabilities resemble those offered by observability tools. They include data collection and reporting of application usage metrics (like APM), user insights (like Google Analytics), error monitoring and alerting on utilization thresholds. Use of these tools by developers improves the user experience with minimal overhead, as they are already integrated into the Supernode infrastructure.

We can see overlap with current Web2 providers of software infrastructure services, like observability, end user behavior analytics (EUBA) and customer service. These developments from Alchemy will bear watching, both as the Web3 ecosystem grows and infrastructure providers jockey for position. Datadog, like other Web2 infrastructure providers, will need to monitor this activity very closely and ensure they are pursuing opportunities as developer mindshare and apps grow in the Web3 ecosystem. While Datadog hasn’t announced an offering directly targeting Web3 at this point, they are presumably aware of it. Peer Web2 infrastructure providers, like Cloudflare, are beginning to evaluate the space and make some initial product offerings that assist developers.

As Web2 still has a long tail and Datadog is rapidly expanding their product reach, I think Datadog’s growth will continue for several years. However, if Web3 gains traction and expands its use cases beyond the current niche applications, then Datadog will need to form a strategy. In the generic sense, providing observability solutions should translate into Web3. But, they might risk playing catch up in building name recognition among a new wave of developers who skipped right over Web2.

One additional consideration is that Alchemy is positioning themselves as the AWS equivalent for Web3 and layering on value-add services, like monitoring, alerting, user analytics and debugging tools. This resembles the early days of AWS, where many of their core add-on services initially grew without much competition. They had monitoring, communications, security, identity, database and data warehouse offerings. Later, Datadog, Twilio, Crowdstrike, Okta, MongoDB, Snowflake and others pulled back much of this spend through specialization. A full-featured platform like Alchemy risks the same product dilution by trying to address too many categories.

Also, as an interesting side note, it appears that Alchemy’s Supernode infrastructure is hosted on AWS in the US East region. I’m not sure how they can position themselves as the “AWS for blockchain” when they are hosted on AWS. They likely have plans to stand up their own data centers or PoPs (and may have done so already). However, a centralized storage and compute topology may concede an advantage to edge network providers that host their own globally distributed compute and storage (like Cloudflare). For now, though, Alchemy appears to be a leading choice for developers in the Web3 software infrastructure space.

Final Thoughts and Investment Plan

Datadog is firing on all cylinders. They are continuing to accelerate revenue growth and improve operating leverage. They are funneling gross profit into R&D investment, driving an increasing number of product releases each year. More relevant products make customer expansion easier, lowering sales overhead for existing customers and shifting focus to landing new ones. These forces all combine to support sustained ARR growth at an elevated level.

Following Datadog’s Q4 2020 report in February 2021, the stock was trading around $100. Given what appeared to me to represent a favorable set-up for 2021, I set a price target of $150 for end of this year. I also significantly increased the allocation to DDOG in my personal portfolio. Following a dip in price in the Spring, I further layered into my DDOG investment. Datadog now represents my second largest holding with a basis of $94. The stock recently posted an ATH near $200, and then settled near $180 with the recent sell-off in high growth stocks.

Putting aside market concerns around valuations and the potential for a sector wide correction, I am still optimistic about Datadog’s continued trajectory going into 2022. I think they are well positioned to sustain high revenue growth. This is based on a flywheel of strong sales execution, expanding customer activity and an ever-increasing addressable market driven by rapid product development. They enjoy a favorable competitive position as they round out existing market segments and launch product offerings in new categories that leverage their core strength of delivering easy observability at scale.

NOTE: This article does not represent investment advice and is solely the author’s opinion for managing his own investment portfolio. Readers are expected to perform their own due diligence before making investment decisions. Please see the Disclaimer for more detail.

Additional Reading

  • Peer analyst Muji at Hhhypergrowth has some great coverage of Datadog, including an overview of Dash and the Q3 earnings report. Some of his content is behind a subscription, but that is well worth the cost.
  • Presentations from Datadog Dash 2021 are available on YouTube. I recommend at least watching the Keynote.
  • During Dash, Datadog leadership held a Virtual Investor Meeting. It provides a view of Dash for investors and looks at the road ahead.

13 Comments

  1. Gabrielle Z

    Excellent piece as always. DDOG has always been my fav stock. One trend I kept noticing is the rise of hyperscalers where AWS etc start to offer a “good enough” tool to developers. Curious if you see that as a threat to DDOG. If so, what would be the timeline?

    Also I have a general question on OpenTelemetry and its impact on broad observability tools like DDOG and SPLK – would it lead to price competition or lack of differentiation?

    Thank you so much and happy thanksgiving!!

    • poffringa

      Thanks for the feedback. First, I don’t see an emerging threat from the hyperscalers for Datadog or other observability providers. Datadog has built such a complete solution and raised the bar for expectations by modern DevOps teams that it would take an inordinate amount of investment for AWS or the others to catch up. Plus, I don’t think that would make sense at this point, as there are too many market segments where that would be the case (databases, data warehouse, identity, CPaaS, etc.). The point being that the hyperscalers need to carefully pick their battles. Given the size and number of the specialists, AWS is no longer able to bulldoze them all. It would make more sense to focus on a segment closer to their core functionality than observability.

      OpenTelemetry provides an open standard for defining how to instrument applications, with a focus on metrics, logs and traces. This doesn’t disrupt observability tools – if anything, it helps standardize data collection. It is true that enterprises could take OpenTelemetry packages and stitch together their own observability solutions for applications, but that DIY option has always existed. The benefit of Datadog and other modern observability providers is twofold. First, customers don’t need to invest their own development resources in reproducing functionality available from off-the-shelf SaaS solutions. Second, OpenTelemetry primarily focuses on APM. There is a suite of advanced observability tools offered by Datadog and others that goes far beyond what a DIY team could meaningfully produce on their own (think Session Replay). So, I don’t think OpenTelemetry changes the underlying value proposition for observability specialists.

  2. Paul Dickwin

    I spoke to Olivier in the early 2010s at re:Invent. He said that, every year, his greatest fear was that AWS would unveil a competitor to Datadog at re:Invent. Cloudwatch always existed, and AWS made several improvements to it over the years, but an actual competitor was never revealed. Cloudwatch was always very far off from what Datadog offered. Obviously, even if AWS did reveal a competitor, it is no longer a problem for Datadog due to the “completeness” of the Datadog product today.

    This 4 year old article points out a good question about AWS:

    https://www.nextplatform.com/2017/02/03/will-aws-move-stack-real-applications/

    The answer is that for products with huge markets, AWS does not have the in-house expertise to do it well. Just like you mentioned with Okta, MDB, and Snowflake, AWS tried, but their product was subpar. It was more of a half-assed effort and they did not have the in-house knowledge and team to make a best of breed product. Regardless, this still brings in hundreds of millions in revenue for AWS, but this revenue is not nearly on the same growth trajectory as the companies mentioned.

    • poffringa

      Fully agree. To AWS’ credit, they created this ecosystem for other specialists to thrive, but it was unrealistic to expect that they could dominate every category. I think the “AWS could do that” threat is well understood at this point. And there are still emerging opportunities as the Web (whether 2 or 3) evolves.

  3. Sharkey

    I love reading your articles as they truly paint a picture for the non-IT reader. Once again, a great article. I did notice while you are extremely confident in their revenue growth ability, you have not revised your 2024 target price of $215….do you see DDOG as close to fully valued at this point?

    • poffringa

      Thanks for the feedback. I was planning to update the price target once we had some more insight into 2022 expectations from the company. I am modeling 50% revenue growth for 2022, for about $1.5B in revenue, and an informal price target of $240 within 12-28 months. However, I would like to corroborate my view with management’s initial estimate for 2022 growth, which will come with the Q4 report in February 2022.

      Regarding fully valued – the way I think about this is if a company can maintain a high rate of revenue growth with increasing profitability, then the valuation multiple should remain in a predictable range. In the case of Datadog, revenue growth may decelerate somewhat in 2022, but still remain relatively high. Also, if profitability trends continue, then op/FCF margins will be higher, even with lower revenue growth. If the P/S ratio drops to 50 by end of 2022 (from 63 now), then market cap would be roughly $75B by end of 2022. That represents an increase of about 33% from the current market cap of $56.5B. That implies a price target of $240. Of course, this will vary if Datadog outperforms or the market as a whole resets valuation multiples.

  4. Mark Stucker

    I just wanted to thank you for all the great free info you share. I love how you update your personal holdings every week. I also love how you share the podcasts you are on. I check your website once a week. I have increased my position in DDOG and NET in the last 15 months and I am reaping the results. Your conviction about the prospects for these stocks is definitely one reason for my increased purchases. I also like how current your info is. I heard you rave about Fastly on a podcast but when the investment thesis changes, you articulate that so well, so once again, at Thanksgiving I am thankful to you!

    • poffringa

      Thanks for the kind words. I am so happy to hear that my coverage has helped positively impact your portfolio. I also appreciate you recognizing that we can’t get all long term calls correct. While a company (like Fastly) can have a great deal of potential, they still need to execute. When I recognize that, I have to put my appreciation for the technology aside and judge based on actual performance. Have a great Holiday season. Thanks, again.

    • Albert

      I AGREE

  5. Kirill

    Hello Peter,
    Could you comment on the latest DOCU report?
    Thank you,
    Kirill

  6. Michael Orwin

    1) Thanks, Peter, for another highly informative article.

    2) When I read, “It shows the movement of their mouse, mouse-overs, button clicks and keyboard entry.” (about Session Replay), I thought, is that bad for password security? It was probably a daft thought, because they must have thought about it, but anyway, I emailed Datadog’s Investor Relations, and the reply had a link to a blog post titled “Use Datadog Session Replay to view real-time user journeys”. There’s a section titled “Configure privacy options” about not getting private info like credit card numbers. I got the reply in less than two hours, which I think is very good, especially on a Saturday.

    • poffringa

      Thanks, Michael. That is a useful addition relative to Session Replay and a good point. Agreed that the turn-around on your inquiry is impressive.

  7. Michael Orwin

    “A full featured platform like Alchemy risks the same product dilution” (as for AWS) “by trying to address too many categories.”

    As described on this site, AWS also had the disadvantage that businesses competing with Amazon didn’t want to use AWS. So far as I know, nothing like that is an issue for Alchemy. Have I got that right, and does it matter much?