
Datadog (DDOG) Stock Analysis

Datadog is a leader in monitoring solutions for cloud-scale applications. They are experiencing significant growth in a large addressable market and are a core beneficiary of digital transformation initiatives. While competitive offerings have recently stepped up, Datadog still enjoys impressive customer expansion. Their product development velocity is breakneck, doubling the number of paid solutions in the last year. Looking forward, they have additional opportunities in adjacent markets. The company is led by two technical co-founders. I will dig into the company’s history, financials, product portfolio, addressable market and competitive landscape. This will set a foundation and investment framework which investors can use to monitor Datadog’s progress going forward.

History and Technology Foundation

Datadog was founded in 2010 and launched its first commercial tool in 2012. The company started with infrastructure monitoring, and then expanded into APM, logging, user activity, network monitoring, and most recently security. According to its S-1, Datadog provides a “monitoring and analytics platform for developers, IT operations teams and business users in the cloud age.” In 2018, they claimed to be the first company to address the “three pillars of observability” in one toolset, with solutions for metrics, traces and logs.

While the term “observability” seems to have gained a lot of popularity recently, it was first introduced around 7 years ago. One of the first references to it was made by Twitter on their engineering blog in 2013. As Twitter grew rapidly and experienced well-publicized outages, they migrated their back-end architecture from a single monolithic application to a set of distributed services. When a site-impacting issue arose, Twitter engineers needed to troubleshoot quickly to identify the root cause. This led to the creation of a dedicated Observability team, whose responsibilities were twofold – 1) to create a central system to capture, store, query and visualize performance data from all their disparate services and 2) when issues arose, to quickly analyze all this data to identify which service behavior was the root cause. The combination of these two functions allowed the Observability team to rapidly diagnose and fix site issues, eventually sunsetting the “fail whale” image that plagued Twitter for so long.

Many old school system administrators will recognize facets of observability as system monitoring. This is true – part of observability involves monitoring. A monitoring process collects relevant performance data from systems and services, reflecting their current state of operation. Performance data can encompass a broad range of indicators, from CPU levels to query response times. It can also include basic up/down checks for service availability. The key is to pick indicators that are strongly correlated to the overall health of the system, meaning that if the indicator has a big deviation from its norm, the system is probably not functioning. All of this data is collected as a “time series” – an ordered sequence of value and timestamp pairs. Time series data takes the form of metrics.
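
To make this concrete, here is a minimal sketch of a time series sample. The names and values are illustrative, not Datadog’s actual schema:

```python
from dataclasses import dataclass
import time

@dataclass
class MetricPoint:
    """One sample in a time series: a metric name, a value and a timestamp."""
    name: str         # e.g. "system.cpu.utilization"
    value: float      # e.g. 87.5 (percent)
    timestamp: float  # Unix epoch seconds

# A time series is just an ordered sequence of these samples.
cpu_series = [
    MetricPoint("system.cpu.utilization", v, time.time() + i * 10)
    for i, v in enumerate([41.0, 43.5, 39.8, 97.2])
]
```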

Another segment of monitoring involves log analysis. Every software service generates some sort of log of activity. This can include system logs (events at the operating system level), security logs (resource access events) or application logs (events related to service activities, like a web server log recording every page request).

The third aspect of monitoring is traces. Traces provide a view into how a request progresses through an application’s code. A common trace would be a customer request for a certain web page on a site. The trace would show each step in processing the request – from the web server code, to queries of the database, to calls to any outside services, like Facebook Auth or handling a credit card transaction. The trace shows each step in the process and how long it takes. This generates what is called a “waterfall” view, which allows a system operator to quickly follow the trace and see where a bottleneck might exist.
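
To illustrate the idea, here is a toy sketch of how the spans in a trace compose into a waterfall view. The operation names and timings are hypothetical, not any vendor’s actual format:

```python
from dataclasses import dataclass

@dataclass
class Span:
    """One step in a distributed trace, with start/end times in milliseconds."""
    operation: str
    start_ms: float
    end_ms: float

    @property
    def duration_ms(self) -> float:
        return self.end_ms - self.start_ms

# A trace for a single page request; nested calls begin after their parents.
trace = [
    Span("web.request /checkout", 0, 480),
    Span("db.query orders", 20, 350),            # the obvious bottleneck
    Span("http.call payment-gateway", 360, 460),
]

# Render a crude text "waterfall": offset each bar by its start time.
for s in trace:
    bar = " " * int(s.start_ms / 10) + "#" * max(1, int(s.duration_ms / 10))
    print(f"{s.operation:32s} {bar} ({s.duration_ms:.0f} ms)")
```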

These types of monitoring have had tools available for a long time, long before Datadog was founded in 2010. Open source projects initially helped system operators perform monitoring, and then commercial entities popped up to make this easier. Personally, I have managed teams that used Nagios, New Relic, Splunk, Pingdom, Graphite and the ELK stack. Some of these solutions go back to the late 1990s.

However, two main issues existed with these disparate monitoring solutions before the Observability movement. First, each monitoring tool was tailored to a specific use case, like New Relic for application tracing, Splunk for log analysis or Graphite for infrastructure performance metrics. If a user-impacting issue arose, like the web site going down, system operators would have to frantically toggle between each of these monitoring tools to find the root cause. In this scenario, the operator has to determine if the site is down because calls to the database spiked, the web server’s CPU is too high, or that some application error is being triggered in the web logs, among possible causes. All of these examples were tracked in different monitoring tools, before full observability.

The second challenge of monitoring is the time required to troubleshoot and resolve an issue (Mean Time to Repair – MTTR). Monitoring simply reflects the state of the services, but doesn’t provide context for “why” the system isn’t behaving as expected. As a simple example of useful context, automated ML processes can analyze past behavior of a metric and provide expected ranges for normal operation. Then, if the metric suddenly drops out of range, that might indicate an issue. Or, a log monitor can count the frequency of errors and issue an alert if there is a sudden spike in a new error type. This context allows system operators to quickly zero in on the likely cause of a service disruption, versus having to manually scan all recent activity in each of their monitoring tools.
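
A heavily simplified sketch of that kind of context follows. Production systems use far more sophisticated models, but the core idea – derive an expected range from history and flag departures – can be shown with a mean-and-deviation band:

```python
from statistics import mean, stdev

def expected_range(history, k=3.0):
    """Derive a normal operating band from past samples: mean +/- k standard deviations."""
    mu, sigma = mean(history), stdev(history)
    return mu - k * sigma, mu + k * sigma

history = [42.1, 40.8, 44.0, 41.5, 43.2, 39.9, 42.7]  # past CPU % samples
low, high = expected_range(history)

latest = 97.3
if not (low <= latest <= high):
    print(f"ALERT: value {latest} outside expected range [{low:.1f}, {high:.1f}]")
```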

Observability delivers the combination of monitoring and context in one tool. It ensures that all relevant service performance data is captured and visualized, as well as provides the necessary context to sort through the data rapidly to identify aberrant behavior for troubleshooting.

Full observability of systems emerged as a necessity in the last five years, as the following trends converged in digital enterprises:

  • More applications. As part of digital transformation efforts, companies are creating many new stand-alone applications for customers, in order to deliver differentiated service. These applications are often custom built by in-house developers. Once these applications are adopted by customers, their uptime becomes business critical.
  • Cloud hosting and virtualization. The migration of software workloads to the cloud means that applications are hosted on ephemeral servers and new hosting containers. The notion of a server is no longer tied to a single physical device in a company’s data center, which reboots once a year. On the cloud, server instances get swapped in and out continuously, often on each release or through auto-scaling. Also, the one-to-one parity of server to machine no longer holds. A physical server can host many individual containers or virtual machines. This constantly changing infrastructure makes monitoring much more complex.
  • Micro-services. Large, monolithic applications are being broken up into discrete services. New applications are designed around a service-oriented architecture. Each individual service will often have its own application servers, database, message queues, etc. This dramatically expands the surface area that needs to be monitored and increases the dependency tree by an order of magnitude.
  • DevOps. The realm of service monitoring and issue resolution is no longer siloed into the system operations team. Developers are increasingly expected to take operational ownership of their application performance. This dramatically increases expectations for tooling and the requirement for context, as developers aren’t familiar with all the performance parameters in the lower layers of the server stack. Templates and pre-set indicators by service type are needed to enable a broader audience of DevOps.

Datadog entered this rapidly evolving space at the right time with the right approach. They capitalized on the observability trend by offering both monitoring and context. They did this in one solution across the areas that matter – metrics, logs, traces, and for all relevant systems – applications, server infrastructure, network, third party services in all types of hosting configurations. They were the first company to offer a complete solution across these dimensions in one consolidated view (a single pane of glass).

By rolling up these use cases into one solution and marketing the “three pillars of observability”, Datadog was able to leapfrog the leading point solution providers, like New Relic for tracing and Splunk for logging. These companies are scrambling to broaden their observability scope, while Datadog quickly captures market share. The only other company that emerged with a holistic solution on a parallel path is Elastic. We’ll dive into all these offerings in the competition section.

Going forward, the trends driving the need for observability are not likely to slow down. Putting aside the current COVID-19 situation, companies of all types will continue their digital transformation initiatives as customer expectations for higher levels of service increase. This will create more custom applications, hosted in a variety of environments and supported by distributed services.

In addition, as digital operations become the core of every type of company, the system operations team isn’t the sole consumer of observability solutions. Developers were first to be pulled into the mix and given more responsibility over application performance and reliability as part of the DevOps movement. Then, product managers, business analysts, customer service and even department heads became dependent on tracking the availability of their customer applications. This has dramatically expanded the audience for observability.

All of this means that Datadog has a huge opportunity in front of it. Their market-leading solution addresses the needs outlined above and puts them in the pole position. Gartner calls this market IT Operations Management (ITOM) and estimates it will reach $37B in spend by 2023.

Unfortunately, this market is fairly saturated with incumbents offering point solutions, who are not remaining idle. Datadog has sprinted ahead with their full observability platform, but we should expect the competitive noise to increase. Interestingly, Datadog claims that most enterprise deals do not displace other competitors, beyond home-grown or open source solutions. While this is hard for me to imagine, at least for the digital first leaders, it will certainly change over time. Datadog points to adjacent opportunities beyond IT Operations Management as other growth trajectories in the future. They recently announced a security solution, as an example. Success will be determined by how well Datadog executes over the next few years to maintain their lead in observability and eventually find other avenues for growth. We will examine the product roadmap in a bit, but first, let’s take a look at Datadog’s financial performance.

Financial Overview

DDOG went public on Sept 19, 2019. The IPO priced at $27 and the stock closed at $37.55 that day. Its highest daily close since then was $50.01 on Feb 12, 2020. Currently, DDOG trades around $36, representing a drop of about 30% from its ATH. As with many software stack companies, the COVID-19 situation has depressed the price due to overall macro concerns.

DDOG Stock Chart, YCharts

Q4 and FY 2019 Earnings Report

On February 13, 2020, Datadog released earnings results for Q4 and full year 2019. The earnings results significantly exceeded expectations, but the stock closed down 3.0% at $47.03 the next day. At about this time, the market began reacting to the COVID-19 situation and many software companies experienced stock price drops.

Here are some highlights from the earnings release (EPS is Non-GAAP):

  • Q4 2019 Revenue grew 84.4% year/year to $113.6M, versus the analyst estimate of $102.4M. The original estimate would have represented annual growth of 66.2%, so Datadog outperformed by about 18% on an annualized basis. Q3 2019 revenue growth was 87.7%, a slight sequential deceleration, but impressive that they are staying above 80%.
  • Q4 2019 EPS of $0.03, which beat the analyst estimate of ($0.02) by $0.05.
  • Q4 Non-GAAP operating income was $7.0 million, yielding an operating margin of 6.1%.
  • Q4 FCF was $10.9 million, yielding a FCF margin of 9.6%.
  • FY 2019 Revenue grew 83% year/year to $362.8M. FY 2018 Revenue growth was 97% year/year.
  • FY 2019 EPS was ($0.01).
  • FY 2019 operating loss was $5.4M, yielding an operating margin of -1.5%.
  • FY 2019 free cash flow of $0.8 million, representing a FCF margin of 0.2%.
  • Q1 FY 2020 Revenue guidance of $117-119M versus $108.6M consensus. At the midpoint, this would represent year/year growth of 68.5%. Note that Q4 year/year growth was 84.4%, and revenue outperformed guidance by 18% on a year/year basis.
  • Q1 FY 2020 EPS guidance of $(0.07) – $(0.03) versus $(0.10) estimated. Raise of $0.05 at the midpoint.
  • Q1 operating loss between $7.0M and $5.0M, representing an operating margin of -5.1% at the midpoint.
  • FY 2020 Revenue guidance of $535-545M versus $503.9M consensus. At the midpoint, this represents year/year growth of 49%.
  • FY 2020 EPS guidance of ($0.07) – ($0.03) versus ($0.10) consensus.
  • FY 2020 operating loss between $(30.0) million and $(20.0) million, representing an operating margin of -4.6% at the midpoint.
  • At 2019 year end, Datadog had 858 customers with ARR of $100,000 or more, representing an increase of 89% from 453 at end of 2018 and up 130 in Q4 alone. Over 70% of total ARR is generated by customers spending more than $100k annually.
  • Had 50 customers with ARR of $1 million or more at end of 2019, representing an increase of 72% from 29 at end of 2018 and 12 at the end of 2017.
  • Average ARR of enterprise customers at the end of 2019 was about $230k, an increase from $160k at the end of 2018. Average ARR of mid-market customers at the end of 2019 was about $170k, an increase from $110k at the end of 2018.
  • Q4 gross margin was 78%. This compares to a gross margin of 76% in Q3 and 75% in the year ago period. Improvement in gross margin was attributed to efficiencies gained in cloud hosting operations.
  • R&D expense was $31.6 million, or 28% of revenue, consistent with the year ago period. Sales and marketing expense was $39.3 million, or 35% of revenue, down from 46% in the year ago period. The change in Q4 was pronounced partly due to the outperformance on the revenue line. Also, management noted the greatest leverage came from marketing expenses, which experienced a lower growth rate year/year. G&A expense was $10.4 million, or 9% of revenues, slightly higher than 8% a year ago.
  • Cash and cash equivalents were $777.9M as of December 31, 2019.
  • DBNER was over 130%, as has been the case in each of the past 10 quarters.

On a Rule of 40 basis for Q4, Datadog passes with flying colors. Revenue growth of 84.4% + FCF margin of 9.6% = 94. This is one of the highest numbers among software stack companies that I track.

DDOG has an enterprise value of about $10B currently. Based on 2019 revenue, its EV/Revenue ratio is about 27.5. Looking forward to FY 2020, the EV/R ratio drops to about 18.5. If we assume Datadog outperforms on FY 2020 revenue by about 10% and achieves $600M (versus the current estimate of $540M), this puts the forward EV/R ratio at 16.7. Outperformance of this magnitude might be expected under normal circumstances. With COVID-19, this may be a stretch, as I don’t think DDOG will see significant tailwinds from that situation. Regardless, let’s assume a forward EV/Revenue ratio of 17-18. This is pretty high, but conceivable, given Datadog’s exceptional revenue growth and Rule of 40 calculation.
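
For readers who want to reproduce the arithmetic behind those multiples, here is the simple calculation (figures are the approximations quoted above):

```python
ev = 10_000               # enterprise value, $M (approximate)
rev_2019 = 362.8          # FY 2019 revenue, $M
rev_2020_guide = 540.0    # FY 2020 guidance midpoint, $M
rev_2020_beat = 600.0     # assumed ~10% outperformance, $M

print(f"Trailing EV/R:  {ev / rev_2019:.1f}")        # ~27.6
print(f"Forward EV/R:   {ev / rev_2020_guide:.1f}")  # ~18.5
print(f"Beat-case EV/R: {ev / rev_2020_beat:.1f}")   # ~16.7
```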

Finding peers in the software stack space growing at these rates is difficult. Alteryx (AYX) had 75% year/year revenue growth in Q4 with non-GAAP operating margin of 33%. Its current forward EV/Revenue ratio is 9.4, but AYX is also currently down about 44% from its ATH.

Analyst Coverage

For analyst recommendations made in 2020, DDOG has 4 buy ratings and 4 neutral ratings. Several of these updates with lower price targets were made recently, as a result of anticipated headwinds due to the COVID-19 situation. DDOG stock is currently trading at about $36. Average price target is $48.62.

Date | Analyst | Rating | Price Target
3/27 | Goldman Sachs | Neutral | Lowered from $48 to $45
3/26 | Mizuho | Buy | $46
3/26 | Morgan Stanley | Equal Weight | Lowered from $50 to $42
3/18 | RBC | Hold | $43
3/17 | Barclays | Buy | $42
2/14 | Jefferies | Hold | Raised from $36 to $45
2/14 | Rosenblatt | Buy | Raised from $50 to $61
2/14 | Needham | Buy | Raised from $54 to $65
DDOG Analyst Ratings, YCharts and MarketBeat

Of the analyst feedback, here is positive commentary from Needham right after Q4 earnings.

Needham raises their DDOG tgt to $65 from $54. Analyst Jack Andrews offered, “DDOG reported a very strong 4Q19, with all key metrics well above consensus. Our key takeaway is that DDOG’s product innovation and marketing message focused on delivering a “unified observability platform” is clearly resonating with broad-based growth. To illustrate some metrics supporting our view, in 2019 ~65% of DDOG’s new customers started with two or more products, compared with ~25% in 2018. For existing customers, ~60% are using two or more products, up from ~25% in 2018. Most significantly, ~25% of customers are now utilizing DDOG’s three pillars of observability (infrastructure, APM, logs), up from ~5% in 2018. Given our view that DDOG is attacking a largely greenfield market driven by secular tailwinds, this strong product/market fit should enable high growth rates over a multi-year horizon. Reiterate BUY, raise PT to $65.”

Briefing.com, Feb 14, 2020

And here is commentary from Jefferies, acknowledging solid performance, but raising concern over valuation.

Jefferies analyst Brent Thill raised his price target on shares of Datadog to $45 from $36 after the company’s Q4 metrics suggested to him that its competitive positioning is intact, with net new ARR growth supported by new customers as well as expansions within the base. While he likes Datadog’s fundamentals, he remains on the sidelines on valuation, although he said he would be more constructive on a meaningful pullback. Thill keeps a Hold rating on Datadog shares.

The Fly, Feb 24, 2020

Product Overview

Datadog provides a SaaS platform that automates comprehensive system monitoring and troubleshooting for modern digital operations teams. The platform is simple to use with pre-packaged integrations, customizable drag-and-drop dashboards, real-time data visualizations and prioritized alerting. The platform is deployed in a self-service installation process, allowing new users to quickly derive value without specialized training. Datadog is used by organizations of all sizes, across a wide range of industries.

Datadog started with infrastructure monitoring in 2012, which was largely left to open source solutions. Other commercial offerings in the monitoring space focused on application performance monitoring (New Relic) and log analysis (Splunk). As the migration to cloud, micro-services and virtualized containers began making monitoring exponentially more complex, the surface area of discrete software components to track increased rapidly. This sprawl applied to infrastructure monitoring as much as APM and logging.

Datadog had the benefit of starting as these trends were accelerating, allowing them to design extensibility, flexibility and enormous scale into their architecture from the beginning. They tackled ephemeral cloud instances, containerization and micro-services upfront, which laid the groundwork for their future expansion into other areas. In parallel, New Relic remained tied to a model based on physical servers for some time.

The other advantage of starting with infrastructure monitoring, versus APM and logging, was that the Datadog agents needed to be deployed on just about every device in the data center or cloud instance. Engineering organizations will generally monitor every component of infrastructure, at least for basic availability, while not every infrastructure component needs an APM agent or log analysis. This is an important consideration. At the time, New Relic would generally be deployed only on application servers, as that is where the traces were generated. Similarly, due to pricing by log volume, only application and other high value logs would be forwarded to Splunk for ingestion.

Datadog S-1 Filing, August 2019

After establishing their beachhead in infrastructure monitoring and refining the solution there, Datadog eventually expanded into the higher-order layers of the stack. They released an APM solution in 2017 and added log analysis in 2018. Datadog’s approach to log analysis was revolutionary, as they introduced the concept of “Logging without Limits”. This refers to the idea that all log data can be ingested and stored by the Datadog solution. A portion of those logs can be designated for persistence and detailed analysis, which represents the volume for which the customer is charged. Competitive solutions at the time would charge for total volume of data ingested, whether actually analyzed or not.

After launching the “big 3” of observability in 2018, Datadog continued expanding. In 2019, they released solutions for synthetics and real user monitoring (RUM). Synthetics allows for the simulation of actual interactions with a web application, often through a browser. The Datadog operator can configure a monitor that completes a common web site interaction, like sign-up or check-out, and records the result. These actions are then scheduled to run periodically from test servers across the globe. The results are recorded and if a test action fails, an alert can be generated.

RUM allows for real user interactions with a company’s software application to be captured. These might be actions that have business value, like clicking on a play button, conducting a product search or adding an item to a shopping cart. This data can be aggregated, summarized in dashboard views and monitored for consistency. If, for example, new user registrations suddenly drop off, an alert could be sent to the product and engineering teams to investigate. This circumstance might not necessarily be flagged by infrastructure monitoring, log analysis or APM, as it could be caused by a bug in the last release.

In the latter part of 2019, Datadog launched network performance monitoring (NPM). This provides an observability solution for network traffic. As servers and services are distributed across cloud data centers, issues with network traffic flow could similarly create application availability issues. Wrapping network monitoring into Datadog’s observability solution is smart, as it further consolidates the customer’s technology organization around a single toolset.

Finally, in the Q4 earnings report, Datadog announced the beta release of a security monitoring solution. This effectively brings the security team, along with development and operations, into the observability mix. This solution adds security context to data already being collected in logs and infrastructure monitoring, which security personnel can query and graph. Also, it allows developers and operations personnel to be more security aware.

Datadog S-1 Filing, August 2019

All of these solutions run on top of the same Datadog core platform. The base layer is data ingestion. Early on, Datadog built a flexible data model that could represent a broad set of data types. This data model is used to store inputs from all kinds of sources, like logs, performance metrics, user activity and application traces. It is necessarily extensible. The data structure is also efficient to minimize storage space.

The ingestion engine is designed to be very fast and scalable, as it needs to accommodate enormous data flows, yet be able to support real-time log tailing and dashboard updates. It also enables functions to categorize, aggregate and summarize data, as well as tag with metadata.

The common application components of the platform allow for sharing of functionality across the different data types, whether metrics, logs or traces. These include search, visualization, analysis, alerting and collaboration. On top of all data flows, Datadog inserted a machine learning layer. This allows the system to identify common ranges and patterns for monitored data, so that abnormal behavior can be quickly flagged for operators.

With that overview, let’s take a look at each of the product offerings in more detail.

Infrastructure

As mentioned previously, infrastructure monitoring was Datadog’s first product offering. Infrastructure monitoring collects a wide range of relevant data points about the performance of any type of device at the layers below the application. These include physical hardware components and operating system activities. Examples of infrastructure data points are CPU levels, memory usage, disk access and storage. This data is collected, categorized, aggregated and displayed in charts. Operators can configure dashboards with canned or customized collections of interesting charts for monitoring.

As infrastructure data points are processed, machine learning establishes expected ranges, so that operators can be alerted if a data point suddenly spikes. An easy example would be a server’s CPU hitting 100% utilization and staying there for over 30 seconds.
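
That kind of sustained-threshold check is easy to sketch. The following is a simplified illustration of the logic, not Datadog’s implementation:

```python
def sustained_breach(samples, threshold=100.0, min_seconds=30, interval=10):
    """Return True if `samples` (taken every `interval` seconds) stay at or
    above `threshold` for at least `min_seconds` in a row."""
    needed = min_seconds // interval
    run = 0
    for value in samples:
        run = run + 1 if value >= threshold else 0
        if run >= needed:
            return True
    return False

cpu = [85.0, 100.0, 100.0, 100.0, 92.0]  # CPU % sampled every 10 seconds
print(sustained_breach(cpu))  # True: three consecutive samples at 100%
```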

Infrastructure monitoring can span all locations utilized by the customer organization, whether public cloud, private cloud or hybrid environments. The entire infrastructure can be consolidated into a single view, even large ecosystems comprised of thousands of individual hosts or containers. The operator can use the top-level view to understand the overall topology of the infrastructure, and then quickly drill down to individual components when needed.

APM

Datadog released their APM product in 2017. This represented a natural first extension, as the application generally rests on top of the services provided by the system infrastructure. By building their infrastructure solution for the dynamic and ephemeral nature of cloud hosting, Datadog solved a major problem with existing APM solutions, like New Relic. These were designed around the assumption that servers were generally static instances tied directly to physical hardware in a data center. New Relic didn’t adapt well (at least initially) to context in the cloud, where each customer release could generate a new set of application servers.

Datadog’s APM solution provides full visibility into the health and functioning of applications regardless of their deployment environment. The application tracing solution is distributed by default, allowing it to work across disparate micro-services, hosts, containers and even serverless computing functions.  APM traces are correlated back to infrastructure monitoring context, enabling rapid troubleshooting in one view.

This represented a real leap forward for DevOps teams, who were increasingly expected to work together to support customer application performance. If a service-level issue surfaced in an application, the APM solution would reflect it in the trace. In the previous world, a developer would be left to figure out why an application bottleneck existed without a view into infrastructure monitoring. With Datadog’s solution, the cause of the bottleneck, like a database crash, would be immediately obvious and explain why the application trace showed a time-out at the database request.

The Datadog APM solution can quickly assemble a “service map”. This is a visual representation of the application processing flow through various internal and external services. This distributed tracing further enables operators to visualize the cause of application issues, by seeing the connections between infrastructure nodes and the volume of traffic flows between them. This also helps to understand the real-time performance health of each service.

Datadog’s APM solution supports all common development frameworks and languages, including Java, Python, Go, Ruby, .NET, Node.js and PHP. This is critical, as microservices allow engineering teams to select the best development framework for the needs of each microservice. Similar to infrastructure monitoring, a “watchdog” service analyzes application performance data and applies anomaly detection to flag unusual trends and alert operators of potential issues ahead of them occurring.

Log Management

Datadog’s log management product was released in 2018. The solution ingests data from any log source and then breaks apart the log entries, categorizes field data and attaches metadata. The output can be viewed in a real-time flow or aggregated into charts by metric type. The solution can create indexes of data values, which users can query. The data ingestion process also passes data through a machine-learning enabled pattern detection engine that looks for sustained variations and alerts operators pro-actively of issues.
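
The “breaks apart and categorizes” step is essentially structured parsing. Here is a minimal sketch for one common case, a web-server access log in the combined format (the pattern is simplified and the log line is made up):

```python
import re

# A simplified pattern for the common Apache/Nginx combined log format.
LINE = '203.0.113.9 - - [27/Mar/2020:10:15:32 +0000] "GET /checkout HTTP/1.1" 500 1024'
PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]+" (?P<status>\d{3}) (?P<bytes>\d+)'
)

fields = PATTERN.match(LINE).groupdict()
print(fields["status"], fields["path"])  # '500' '/checkout' - worth alerting on
```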

Introduced as part of the log management solution was the concept of “Logging without Limits”. This represented a direct challenge to Splunk, which was notorious for generating spiraling costs for customers through incremental fees for data volumes. Datadog addressed this issue by allowing all log data to be ingested and users to control costs by limiting the granularity of data analysis.

In Logging without Limits, all log data is ingested by default and receives base level parsing and enrichment. This base data set can be tailed live or aggregated into metrics, without incremental cost. High value log data can be selected for indexing and alerting, which is metered by volume and desired length of persistence. All logs, whether indexed or not, are archived in the desired cloud vendor’s bulk storage solution, like AWS S3, as part of the base service. If a log is archived, but months later needs detailed analysis for an investigation, it is easy to pull the log out of storage, process it and create indexes just for the period of interest. Datadog calls this process rehydration, after which the data can be fully queried for troubleshooting. This scenario of pulling up old log files is common for security audits, where the issue in question occurred months earlier. Incremental processing costs only apply to the rehydrated data set, not all data in the archives.

This new approach to controlling data ingestion and processing levels was a major disruption for the monitoring space and likely explains a lot of the growth Datadog experienced in the last couple of years. As we examine competitive offerings, we will see that other players (like Splunk) are adjusting to this new approach, but certainly lost ground in the interim.

Synthetics

With the launch of logging, Datadog completed coverage for the “three pillars of observability”. This represented a significant milestone, as they were the first major commercial product to pull all three of these disparate monitoring use cases into a single view. In 2019, Datadog continued their rapid product development cadence and expanded into other use cases.

The next logical step was to extend monitoring to the end-user experience. It’s important to monitor activity on the server side through infrastructure, logging and APM, but if the end-user is experiencing issues on their end, then the customer experience could still be interrupted. This led Datadog to add user experience monitoring to their product suite.

The first product was Synthetics, which focuses on simulating interactions with the application’s user interface in order to identify issues from the customer’s perspective. For example, a web site might have a simple sign-up form that gathers the customer’s name and email. That form could generate an error when submitted due to a bug in the most recent code release. That error would not surface in APM or infrastructure monitoring, and might not be captured in a log entry. Yet, it is creating a significant impediment to user engagement.

Synthetics provides an engineering organization with the ability to recreate step-by-step user interactions with a web property, including button clicks, form fields and page element interactions on multiple browser versions. These simulations can be set up for major user interaction flows on a site, like sign-up, product browsing, shopping cart and check-out. The simulations are built in a very human-friendly way, by simply “recording” a typical user session on the site itself. This makes creation and maintenance of these synthetic checks very easy.

Once a set of synthetic tests is created, the DevOps team can then schedule them to be run periodically from test servers in data centers across the world on different device types (desktop, mobile, etc.) and browser versions (Chrome, IE, Firefox, etc.). If a test fails, the error is recorded and reported in the Synthetics dashboard for operators to investigate.

The Datadog Synthetics solution also includes AI-powered logic that can automatically update an existing test if it notices that the UI has changed. This was a big problem with prior incarnations of synthetic testing – that a minor update to the UI (like the name of a submit button) would break the test. The Datadog solution can infer that the submit button is still in place and simply updates the test to reflect the new button name.

With the rise of APIs necessary to deliver data and logic to disconnected applications, like mobile apps, a form of synthetic testing was needed for API endpoints. In this case, there isn’t a UI to exercise, but rather simulated API request/response pairs. The Datadog Synthetics solution can be applied to APIs as well as browser-based UIs, allowing DevOps teams to surface issues that would impact disconnected apps.
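
In spirit, an API synthetic test boils down to something like the following sketch. The endpoint is hypothetical, and real products layer on scheduling, multi-region execution and alerting:

```python
import time
import urllib.request

def api_check(url, max_latency_s=2.0, expected_status=200):
    """A single synthetic API test: request an endpoint, then verify the
    status code and response time against thresholds."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=max_latency_s) as resp:
            latency = time.monotonic() - start
            return resp.status == expected_status and latency <= max_latency_s
    except Exception:
        return False  # connection error or timeout counts as a failed check

# Scheduled from many regions, a False result here would raise an alert.
print(api_check("https://api.example.com/health"))  # hypothetical endpoint
```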

Synthetic testing isn’t a capability invented by Datadog. There were point solutions, like Gomez and Selenium, that supported this capability. However, Datadog further consolidated the monitoring toolset by pulling this into their suite. It makes sense, as errors recorded by Synthetic monitoring could then be rapidly investigated. Most importantly, DevOps teams could identify whether the broken sign-up form is due to a recent code release, or that the backing database is down.

RUM

Datadog announced the availability of Real User Monitoring (RUM) in July 2019. RUM provides detailed analysis and summary graphs of actual user activity on front-end applications. This ranges from generic session data, like page views and unique visits, to business-specific metrics, like clicks and sales. For each user session, detailed activity data is available, allowing operators to view all steps of each user’s interaction.

For troubleshooting of issues, the user session data is useful to recreate how a user arrived at a particular error condition. Sometimes, users take unexpected paths through an application and then report an error. Being able to trace their steps allows QA and operations personnel to gather useful incremental context. Additionally, like all Datadog solutions, core infrastructure, logging and APM data are available in the same view as RUM, making it very easy to discern if a user behavior issue is tied to some underlying failure.

As an expanded use case, this detailed user session data is also useful for product managers and business managers as they seek to understand how users interact with new features or promotions, as an input to their product development cycle.

Network

As part of the Q4 earnings report, Datadog announced that Network Performance Monitoring (NPM) was generally available. NPM provides the ability to visualize and monitor network traffic flows across cloud-based or hybrid hosting environments. Dependencies are mapped through the infrastructure stack, from data center locations up to servers and finally the applications run on them. This provides additional context for network flows. Other network monitoring solutions previously would just label network nodes with location-specific names, requiring the operator to mentally map the application components in each location.

The Datadog network monitoring solution can quickly assemble a top-level view of an organization’s entire infrastructure. From this view, operators can drill down into individual locations and software components to examine the traffic flows between them. Intelligent, contextual labeling makes understanding traffic flows much easier.

Besides tracking network traffic levels, the tool reports retransmit rates between nodes. Retransmits occur when the connection between two nodes is degraded and the TCP protocol doesn’t receive confirmation of packet transmission. These are strong indicators of a network issue.

Finally, traffic nodes are organized by security group. This provides useful context for security audits – as network topologies are usually designed to ensure that certain application resources only have access to sensitive resources like databases. If the network monitor showed egress traffic to an unexpected resource, this might indicate an opportunity to tighten controls or possibly a breach. A common indicator of a breach is high network traffic passing in unexpected ways, like from a database directly to a perimeter device.

Security

Datadog’s newest product offering, Security Monitoring, was announced in November 2019. The solution is currently in beta. The product’s vision is to provide security teams with the same level of visibility into infrastructure, network and applications that DevOps teams have. This will provide them with the raw information needed to do their jobs. Previously, security teams would often fly “blind” with no real view of the internal network topology, applications or dependencies, generating many requests for information to the DevOps teams. This also allows all teams to take a more active role in monitoring infrastructure for possible security issues, versus relying solely on the security team to find breaches through a separate set of tools.

The security monitoring product leverages the existing flow of data from infrastructure, network, application and security devices to identify potential threats. A base set of threat detection rules is provided by Datadog to surface widespread attacker techniques. Security teams can then create and fine-tune their own rules to further refine coverage.

These rules are applied at the point of data ingestion prior to indexing, so security rule processing doesn’t incur significant incremental log analysis costs (Logging without Limits still applies). This is important, as security analysis requires detailed examination of a broad set of system logs and network traffic indicators.
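
As a rough illustration of what such a threat detection rule looks like at ingestion time, consider a classic brute-force check. This sketch is hypothetical, not one of Datadog’s shipped rules:

```python
from collections import deque

class BruteForceRule:
    """Flag a source IP producing too many failed logins in a short window.
    Applied as events are ingested, before any indexing."""
    def __init__(self, max_failures=5, window_s=60):
        self.max_failures, self.window_s = max_failures, window_s
        self.failures = {}  # ip -> deque of failure timestamps

    def observe(self, ip, timestamp, outcome):
        if outcome != "failure":
            return False
        q = self.failures.setdefault(ip, deque())
        q.append(timestamp)
        while q and timestamp - q[0] > self.window_s:
            q.popleft()  # drop failures that fell out of the window
        return len(q) >= self.max_failures

rule = BruteForceRule()
for t in range(5):  # five failed logins within five seconds
    hit = rule.observe("198.51.100.7", t, "failure")
print(hit)  # True: the rule fires on the fifth failure
```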

Other Features

Across all product offerings, the Datadog platform includes a number of common features, which make the tools easier for DevOps teams to share information, generate notifications and connect to third-party services.

Dashboards

Data from all sources can be easily aggregated into high-resolution charts and graphs that update in real-time. Users can assemble these individual components into dashboards that provide a single, contextual view. From the top-level dashboard, users can drill into components by host, device, metric or custom tags. Data can be sliced in many ways and summary calculation logic added, like rates, ratios or averages. Operators can easily customize the view through drag-and-drop interfaces or even in code.

Having access to all data sources in one consolidated interface is a real game changer. Operators can achieve complete system observability as they explore infrastructure, logs, user activity, security and network performance together.

Collaboration

Datadog’s collaboration features allow DevOps team members to annotate graphs and kick off conversations with others. This can be very helpful as part of the troubleshooting process, as different team members can provide input in a shared interface, allowing the team to know what indicators have been investigated and potential causes that have been ruled out.

Team members can be tagged and communications forwarded to whatever team collaboration tool is being used – Slack, email, PagerDuty, etc. Also, operators can access historical data, in the event that a similar issue occurred previously. That way, the operator assigned to the current issue knows who addressed the prior issue and how.

Alerts

Once teams set up all their monitoring sources and dashboards, it isn’t efficient to continuously stare at them in order to be aware of problems. Alerts allow operators to set thresholds for any metric, log entry frequency, APM trace or user action. Then, when the threshold is exceeded for some set period of time, an alert will be generated.

The alerting system can generate a notification through the most popular channels used by DevOps teams – including email, Slack, PagerDuty, ServiceNow, Zendesk and Microsoft Teams. It can also open a ticket in the bug tracking system for lower priority issues.

The alerting system also supports logic to allow operators to only trigger an alert if multiple indicators combine in a certain way. This helps reduce noise. Machine learning can be applied against indicators to create alerting thresholds automatically. This allows the system to intelligently associate data patterns with past outages and apply that logic to new data in real-time.
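
The multi-indicator logic reduces to a simple conjunction of conditions. A minimal sketch, with made-up thresholds:

```python
def composite_alert(error_rate, p95_latency_ms,
                    error_threshold=0.05, latency_threshold_ms=500):
    """Trigger only when BOTH indicators breach, reducing noise from a
    single metric blipping on its own."""
    return error_rate > error_threshold and p95_latency_ms > latency_threshold_ms

print(composite_alert(0.08, 250))  # False: errors are up but latency is fine
print(composite_alert(0.08, 900))  # True: both signals agree something is wrong
```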

Integrations

The Datadog platform includes integrations with over 400 third party services. These represent both common software components for data ingestion and services for outgoing notifications. The coverage spans popular SaaS and cloud providers, automation tools, bug tracking, databases and all other standard infrastructure components.

Datadog users need only select the desired integration component to access the appropriate agent to install or can set up an API connector.

Datadog S-1 Filing, August 2019

Pricing

Datadog pricing is segmented by the six products that are currently in GA. Pricing generally accumulates on a usage basis, whether by host, by quantity of log events, test runs or user sessions. For some products, there are add-on options or advanced features.

For per host charges, a host is defined as any physical or virtual OS instance. Multiple containers are allowed within each host, up to a point (10 or 20 per host, depending on the plan).

Here is a short summary of pricing options for each product (a rough bill sketch follows the list):

  • Infrastructure: The infrastructure service is charged on a per host basis. There is a free tier, which allows usage up to 5 hosts. This is a good bootstrap option to hook start-ups. The first paid plan is called “Pro”, which covers all the basic infrastructure monitoring features, including standard metrics, canned dashboards, alerts and 15-month data retention. The “Enterprise” plan costs $23/host/month and adds advanced features, like machine learning, anomaly detection, forecast modeling and live processes.
  • Log Management: There is a charge of $0.10 per GB of ingested data. This receives the basic functions of parsing, enrichment, metric generation and archiving. The operator can tail this data, build summary graphs and direct to an index. In order to examine the raw data for analysis, it must be retained for a period of time. The cost of retention varies based on the retention period and volume of data. Retention costs range from $1.06 for 3 days to $4.10 for 60 days per million events. Logs rehydrated from archived storage incur the retained charge as well.
  • APM: Distributed tracing and APM costs $31 per host per month. App analytics capabilities can be layered on for $1.70 per million analyzed spans.
  • Network: Network traffic monitoring costs $5 per host per month.
  • Synthetics: The cost for synthetic tests varies based on whether they are browser-based or API tests. API tests are $5 per 10,000 test runs per month. Browser tests are significantly more expensive at $12 per 1,000 tests per month. This makes sense, as API tests are much easier to program and run.
  • RUM: Real User Monitoring costs $15 per 10,000 user sessions per month.
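
To see how these rates combine, here is a rough monthly bill sketch for a hypothetical 50-host shop, using the list prices above. The usage figures are invented, the 15-day retention rate is an assumed value between the listed 3-day and 60-day rates, and actual bills depend on plan and commitments:

```python
hosts = 50
log_gb = 2_000            # GB of logs ingested per month
indexed_events_m = 100    # millions of events indexed for 15 days
api_test_runs = 200_000
user_sessions = 1_500_000

bill = {
    "infrastructure (Enterprise)":  hosts * 23,
    "APM":                          hosts * 31,
    "network":                      hosts * 5,
    "log ingestion":                log_gb * 0.10,
    "log retention (assumed $2/M)": indexed_events_m * 2.00,
    "API synthetics":               api_test_runs / 10_000 * 5,
    "RUM":                          user_sessions / 10_000 * 15,
}
for item, cost in bill.items():
    print(f"{item:30s} ${cost:>9,.2f}")
print(f"{'total':30s} ${sum(bill.values()):>9,.2f}")
```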

Datadog’s pricing structure is fair. Charging on a per host and usage basis allows companies to start small and scale their use as they grow. This price structure is likely the major driver of the high net expansion rate. It is easy to see how a company could start with one service and then add more over time. Similarly, as a company’s traffic grows, they would scale up their Datadog spend.

Product Development Velocity

Starting in 2018 with the release of the log management solution, the pace of product development at Datadog has accelerated. Log management was a major feature addition (following APM in 2017), along with their revolutionary release of Logging without Limits. Then, in 2019, they added Synthetics, RUM, Network monitoring and a serverless solution. Starting in 2020, the focus has shifted to the Security product.

Through this rapid development and release velocity, Datadog has sprinted ahead of competitive offerings and solidified their position as a leader in offering a comprehensive observability solution. Their pace of product development parallels that of Elastic, which we will explore in the competition section.

In the Q4 2019 earnings report, Datadog allocated 28% of revenue to R&D, which was consistent with the prior year. Based on management commentary, they plan to continue their oversized investment in R&D going forward.

Datadog has relied on acquisitions to drive some of its newer features, primarily Log monitoring and Synthetics. The major acquisitions listed are:

  • Mortar Data – Feb 2015. Mortar data provided a highly scalable data processing pipeline. This appears to have been incorporated into the Datadog core data ingestion function.
  • Logmatic.io – Sept 2017. Added log monitoring and analytics capabilities to the Datadog platform. This rounded out the 3 Pillars of Observability with the formal launch of the Datadog Log Management solution in early 2018.
  • Madumbo – Feb 2019. Madumbo built a solution for automated UI testing and monitoring. They also provided a solution for easy test configuration, by recording operator interactions with the site versus scripting tests. This became the foundation for the Synthetics product launched later in 2019.

Newer product extensions have been built in house, although in Datadog’s S-1, they mention continually looking out for acquisition opportunities. With $778M cash on the balance sheet, we can expect some opportunistic acquisitions to occur, but probably not as heavily as has been seen by other large software vendors, like Salesforce.

Openness and Platform Approach

Datadog open sourced the agent software that runs on customer hosts and automatically transmits data to Datadog. The agent source code is available on GitHub, under an Apache 2.0 license, which is permissive and allows broad re-use. Open sourcing the agent software is smart, as it allows customers to inspect the software that they are installing on their systems. Additionally, the license allows them to customize it if desired.

Datadog also provides an open API, for developers to programmatically exchange data with the Datadog platform. This API is geared towards allowing developers to create their own custom monitors for forwarding data to Datadog and generating notifications from the platform. It isn’t meant to be used to create completely new applications, using the Datadog platform’s data processing and visualization capabilities as a foundation. This might be a future product direction.

API coverage is pretty extensive – essentially allowing a developer to trigger any interaction with the Datadog product suite through code rather than the UI. This includes creating, updating and retrieving data. The scope of features includes comments, dashboards, events, graphs, hosts, logs, metrics, monitors, synthetics, users and roles.

The API documentation is comprehensive and provides code samples using cURL, Python and Ruby. Request and response payloads are formatted in JSON. These are all standard.

Datadog also provides a set of developer tools. These are geared towards interacting with the platform by using or creating code libraries to integrate with third party services or generate data through custom agents. Another useful developer feature is custom metrics. If a metric is not submitted from one of the 400+ existing Datadog integrations, it can be created. The primary use case for custom metrics is the collection of business-specific KPIs, unique to one’s business performance. For example, if you ran a video conferencing solution, you might want to track how many participants are on each video call.
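
As a sketch of what submitting such a custom metric looks like, here is an example using Datadog’s Python client (datadogpy). The metric name and tags are hypothetical, the keys are placeholders, and the exact call signature should be checked against the current API documentation:

```python
import time
from datadog import initialize, api  # Datadog's Python client (datadogpy)

# Placeholder credentials; real values come from your Datadog account.
initialize(api_key="YOUR_API_KEY", app_key="YOUR_APP_KEY")

# Submit a business-specific KPI: participants on a video call right now.
api.Metric.send(
    metric="video.call.participants",  # hypothetical custom metric name
    points=[(int(time.time()), 12)],   # (timestamp, value) pairs
    tags=["room:standup", "region:us-east"],
)
```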

Total Addressable Market

Gartner describes the market for Datadog solutions as IT Operations Management (ITOM). They define ITOM as software that is “intended to represent all the tools needed to manage the provisioning, capacity, performance and availability of computing, networking and application resources — as well as the overall quality, efficiency and experience of their delivery.” Gartner divides the ITOM market into three smaller categories – delivery automation, experience management and performance analysis. They provide magic quadrants for application release orchestration (ARO), application performance monitoring (APM), IT Service Management (ITSM), and network performance monitoring and diagnostics (NPMD). 

Looking at each of these categories helps clarify where Datadog currently fits and reveals possible future expansion vectors:

  • Application Release Orchestration (ARO): ARO tools provide capabilities for deployment automation, development pipeline management and release coordination. They enable enterprises to scale application release activities across multiple teams. Datadog does not currently have an offering in this space. Players are GitLab, Atlassian, Red Hat Ansible, Chef Automate, CloudBees and cloud providers.
  • Application Performance Monitoring (APM): APM suites facilitate monitoring of digital experiences, application tracing and diagnostics and artificial intelligence for IT Ops (AIOps) for applications. As discussed, Datadog has solutions in this category, including APM, RUM and Synthetics, which are all overlaid with machine learning capabilities (AIOps).
  • IT Service Management (ITSM): ITSM tools enable IT operations teams (infrastructure mainly) to better support the production environment. They also facilitate the tasks and workflows involved in delivery of quality IT services within the enterprise. These tools are used by IT service desks and have ticket handling and workflow capabilities. Players in this space are ServiceNow, Atlassian, BMC Helix, SAP and some cloud providers. Datadog does not currently have a direct offering in this space, but it could represent an expansion opportunity.
  • Network Performance Monitoring and Diagnostics (NPMD): NPMD tools enable IT operations teams to understand the behavior of their network in response to traffic demands and hardware utilization. Users of NPMD tools are primarily trying to detect application issues and perform capacity planning. Datadog offers a solution in this area with its recently launched Network monitoring product. Other providers include Cisco, SolarWinds, ThousandEyes and cloud providers.

As part of the August 2019 S-1 filing, Datadog stated that “our platform currently addresses a significant portion of the IT Operations Management market.” According to Gartner, this market will generate $37B in spend by 2023. Datadog believes this estimate is primarily for legacy on-premise and private cloud environments and does not include the full opportunity for multi-cloud and hybrid cloud, which Datadog also addresses. However, Datadog solutions only directly address two of the four sub-categories in ITOM.

Datadog estimated their current market opportunity in 2019 to be about $35B. They calculated this figure by determining the total number of companies globally with 200 or more employees, segmenting them into two groups (large or mid-sized) based on employee count and then multiplying by average ARR per customer type for each product as of June 2019. Obviously, this estimate would be larger in 2020, as Datadog has launched a few new products since then, like Network and Security.

Given that Datadog’s estimated revenue for FY 2020 is $540M, they currently occupy a small percentage of the total opportunity. On the recent earnings call, Datadog leadership claimed that most new deals are greenfield, in that they are displacing home-grown or open source solutions at major enterprises. Even where there are incumbents, I think Datadog’s comprehensive and integrated observability solution across infrastructure, APM, logs, synthetics, RUM, network and security has given them a leg up in any competitive displacements as well.

Beyond competitive analysis, which we will explore next, I think Datadog’s long term growth will be determined by their ability to continue expanding into adjacent market categories. The recent addition of Security monitoring is a good example. Looking at the other categories in the ITOM space as defined by Gartner also provides insight into future growth. Application Release Orchestration would be a natural next step, as Datadog already has deep penetration within DevOps and release orchestration simply precedes post release monitoring. ITSM is a huge category and would involve adding a ticketing system of some sort. This might be enabled through an acquisition. Finally, analytics and business intelligence tooling would be another interesting extension.

Competition

The observability market has many players. Like Datadog with infrastructure monitoring, most of the larger vendors started in one product segment and then expanded into the others over time. Examples include New Relic in APM and Splunk in log monitoring.

Gartner primarily categorizes observability under the label of APM. Of Datadog’s core products, they roughly align with industry analyst categories in these ways:

  • Datadog Infrastructure, logging, synthetics and RUM: Gartner APM, Forrester IASM (Intelligent Application and Service Monitoring)
  • Datadog Network monitoring: Gartner NPMD (Network Performance Monitoring and Diagnostics)
  • Datadog Security monitoring: Gartner SIEM (Security Information and Event Management), Gartner Endpoint Protection

In Gartner’s June 2019 ITOM market share analysis report, they noted an important trend in the APM space of providers expanding their capabilities beyond core market segments: “For example, APM vendors adding log and infrastructure monitoring, particularly for modern architectures that involve containerized-applications, serverless functions and cloud frameworks.”

Gartner lists a number of “legacy” vendors in their analysis of ITOM and specifically APM. These include IBM, Oracle, Broadcom, SolarWinds and Riverbed. These vendors started in and primarily focus on infrastructure monitoring. However, they also call out a subset of vendors who exhibit “extensive cloud native visibility” – Datadog, New Relic, Splunk, ScienceLogic and VMware.

Gartner also recognizes a “convergence” of offerings across multiple IT Infrastructure Monitoring (ITIM) segments. They recognize the following:

  • ITIM tools incorporating APM capabilities: Datadog, Splunk
  • APM vendors offering ITIM capabilities: AppDynamics, Dynatrace, New Relic
  • Domain-agnostic AIOps vendors that capture raw data starting to offer ITIM capabilities: Elastic, Splunk

In terms of actual Magic Quadrant and Forrester Wave reports around these particular segments, we have the Gartner APM report from March 2019 and the Forrester Wave IASM report from April 2019. I assume there will be updates to these forthcoming.

Gartner Magic Quadrant for APM, March 2019

Datadog is conspicuously missing from the quadrant, but is included along with Splunk and Elastic as an Honorable Mention. Gartner stipulates that Honorable Mention vendors “address some APM use cases, but do not meet all of the functional and/or business requirements to be included in this research.” I suspect these vendors were not evaluated due to requirements around the number of global customers and the revenue threshold directly attributable to APM in 2018. This will likely change in the next report.

Forrester did include Datadog in their Forrester Wave report for Intelligent Application and Service Monitoring, along with Dynatrace, New Relic and Splunk.

Forrester Wave IASM Report, Q2 2019

Forrester made an interesting observation in their report about what is really differentiating leading solutions.

Previous generations of monitoring tools often focused on specific silos within the application or infrastructure environment. Vendors that can provide strong root-cause analysis (RCA) and remediation, digital customer experience (CX) measurement capabilities, and ease of deployment across the customer’s whole environment position themselves to successfully deliver intelligent application and service monitoring.

Forrester Wave IASM Report, Q2 2019

These requirements obviously favor Datadog’s approach. Forrester also provided this commentary on Datadog’s solution:

  • Addresses both traditional IT operations and newer DevOps use cases.
  • The unified dashboard allows operators to remain “in-context” while troubleshooting issues.
  • Integrations for collaboration and notification tools are supplied natively through APIs.
  • Customers noted that Datadog gives them far greater visibility than previous tools.
  • Pure SaaS solution resonates with IT leaders, as they don’t need to allocate staff to internal tool management and support.

Given these reports from Gartner and Forrester, Datadog is favorably positioned as a leader. Based on their reviews and my experience in the industry, I think the relevant publicly traded competitors to Datadog are Elastic, New Relic, Splunk and Dynatrace. These are viewed as rapidly progressing, independent companies that will likely lead the addressable market going forward.

Let’s briefly take a look at each:

Elastic (ESTC)

I performed a detailed analysis of Elastic’s product offering and strategy in a prior post. Founded in 2012, Elastic provides solutions built around the open source ELK stack (Elasticsearch, Logstash and Kibana). While the original use case for Elasticsearch was site and product search functionality, the addition of Logstash, Kibana and Beats, along with Elastic’s flexible data model, enabled the ingestion and visualization of any type of data. This quickly extended to server logs and metric graphing for IT infrastructure monitoring use cases. Elastic officially added an APM solution in 2018 to round out their observability offering.

Elastic is interesting because the whole platform is open source, including agents, data processing and UI. Elastic protects commercial features under an Elastic-specific license (to prevent cloud competitors from simply hosting the open source package). All code can be reviewed in GitHub repos. Developers can submit pull requests, but Elastic employees are the sole approvers.

In addition to observability, Elastic has solutions for enterprise search and security. Elastic solutions are utilized by a broad set of Global 2000 companies and progressive internet-first leaders. These include a variety of use cases, like driver search at Uber, profile matching at Tinder, network monitoring at Sprint, log processing at SoftBank and security operations at Indiana University. Another strength of the Elastic platform is the single vendor argument – IT managers and engineering leaders might prefer a platform that can address a variety of use cases, minimizing complexity and training time.

In this way, Elastic is positioning themselves as a generic platform, on which developers can build custom solutions for any “search” related use case. Elastic does build and market their own solutions for market segments, like observability, which are sold pre-packaged to customers. However, any customer is free to build on top of the core platform to create their own customized solution.

Investors should consider how this positioning relates to Datadog and the addressable market. Elastic is different from other competitors due to its open and extensible platform. In addition to selling their point solutions, they also market broadly to the developer ecosystem looking to build custom solutions. At minimum, having more visibility into how the platform works gives buyers confidence around potential security or performance concerns. Datadog currently keeps the back-end of its platform closed and only open-sources the agent code.
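
To make that concrete, here is a minimal sketch of a custom Agent check, the kind of extension Datadog’s open-sourced agent code enables. It assumes the open source Datadog Agent (v6+) with the datadog_checks_base package installed; the metric name, tag and value are hypothetical.

```python
# Sketch of a custom Datadog Agent check. The metric name, tag and
# hardcoded value are illustrative placeholders.
from datadog_checks.base import AgentCheck


class QueueDepthCheck(AgentCheck):
    def check(self, instance):
        # A real check would read this value from the monitored service;
        # hardcoded here to keep the sketch minimal.
        depth = 42
        self.gauge(
            "myapp.queue.depth",        # hypothetical metric name
            depth,
            tags=["service:checkout"],  # hypothetical tag
        )
```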

This will be important to watch over time. I think in the near term, Datadog will continue to grow rapidly, as their packaged solutions are purpose-built and appeal to IT managers who just want to plug in a best of breed solution and go. However, for developer-led IT organizations where there is a desire to tinker or extend the base solution, Elastic might offer an appealing alternative. As we turn next to examining New Relic, investors will note their recent move to a true “platform” posture.

New Relic (NEWR)

One of the pioneers in APM, New Relic was founded in 2008 by Lew Cirne and went public in Dec 2014. They claim to have over 17,000 customers, ranging from the Fortune 500 to start-ups distributed globally, including over 50% of the Fortune 100.

New Relic’s initial product focused heavily on application performance metrics and tracing. It was the first SaaS-based product for APM, with a lightweight software agent that was easily integrated into most development frameworks. I personally led developer and ops teams that used New Relic heavily from 2012 – 2015. It was extremely valuable and represented a big step up from self-managed open source tools at the time.

However, after 2015, New Relic seemed to lose momentum in product development, continuing to focus on a limited slice of APM, while competitors (like Datadog and Elastic) began rolling out full observability solutions that included not just metrics and tracing, but also log management and infrastructure. New Relic gradually began adding these features as well, but they felt bolted on and not fully integrated.

This changed with the release of the New Relic One Platform in September 2019. First, this officially added all segments of a full observability solution to the New Relic offering. These included logs, metrics, traces, synthetics, user and infrastructure monitoring with AI capabilities across hosted, cloud, virtualized and serverless environments. This brings the coverage of New Relic’s offerings in line with Datadog’s.

Second, and perhaps more significant, a major feature of the platform is openness. The platform provides programmability at its core, with the intent to allow customers and partners to build new applications on top of the New Relic platform. Applications can connect observability data with business data of any type, addressing new use cases in customer service and communications. Developers can leverage React.js and GraphQL to build the front-end of their applications and use the New Relic One Platform as their back-end through APIs. They have already published a set of open source applications that customers are free to extend under the permissive Apache license.
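
As a rough illustration of that APIs-as-back-end model, here is a minimal Python sketch that posts a GraphQL query to New Relic’s NerdGraph endpoint. The endpoint is the documented one, but the API key is a placeholder and the query is a trivial example, not a blueprint for a full New Relic One application.

```python
# Minimal NerdGraph (New Relic GraphQL API) request. The API key below
# is a placeholder; a real user key is required.
import requests

NERDGRAPH_URL = "https://api.newrelic.com/graphql"
API_KEY = "NRAK-..."  # placeholder

query = """
{
  actor {
    user { name email }
  }
}
"""

resp = requests.post(
    NERDGRAPH_URL,
    headers={"API-Key": API_KEY, "Content-Type": "application/json"},
    json={"query": query},
)
resp.raise_for_status()
print(resp.json())
```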

Forbes recently published an article with a favorable review of the move. CEO Lew Cirne provided a useful summary as part of the press release.

“New Relic One is now the world’s first observability platform that is open, enabling customers to bring in agent-based and open telemetry data so they have no blind spots; connected, allowing customers to see relationships between their systems so they can act more quickly and effectively; and programmable, empowering customers to build entirely new applications on top of New Relic to fuel their business. It’s not a platform unless you can build apps on it,” said Cirne. “I’m so inspired by the creativity and ingenuity I’ve seen from our early-access customers and partners building observability apps on New Relic One. I can’t wait to see what our developer community will unleash for our global customers.”

New Relic Press Release, Sept 2019

This represents a huge bet on New Relic’s part. I applaud the expansion of solutions to address all aspects of observability. The openness of the New Relic One platform will likely appeal to developers. The key consideration relative to Datadog (like Elastic) will be whether customers value the openness of the platform over having pre-built best-of-breed observability features.

Splunk (SPLK)

Splunk was one of the earliest providers of monitoring solutions, starting with log event processing, search and visualization. The company was founded in 2003 and went public in 2012. As a result of their longevity, Splunk has deep relationships with major enterprises. They claim 92 of the Fortune 100 as customers. Given their early focus on log analysis and search, many companies would utilize Splunk alongside an application tracing and monitoring solution like New Relic or Dynatrace. This separation was tolerated for a while, but DevOps teams soon demanded consolidation, wanting to monitor the entire technology footprint from a single toolset. This gap, of course, gave rise to the observability movement and opened the door for Datadog’s consolidated solution.

As a result, Splunk went through a difficult period after 2015 as they tried to counteract two competitive challenges. First was their pricing model, in which companies were charged based on the volume of ingested data, regardless of the granularity of processing needed. This resulted in rapid cost escalation as companies grew application usage or log volume. I personally saw my Splunk costs grow 3x in a couple of months in 2016 due to this, forcing us to pull some log types out of ingestion and move to sampling by only sending logs from a subset of application servers. This pain point later gave rise to Datadog’s “Logging without Limits” offering. Second, as other APM solutions included logging (like Datadog and Elastic), it became difficult to demand premium pricing for just a data processing solution.

In response to these factors, in Sept 2019, Splunk announced changes to their data pricing models with their “Data-to-Everything Pricing” program. This provides predefined pricing tiers, which are more flexible and can range up to unlimited data volumes under one fee structure. Alternatively, customers can choose infrastructure-based pricing, which charges based on the compute power required to run Splunk Enterprise.
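
A hedged back-of-envelope comparison shows why the choice of model matters. Every rate below is a hypothetical placeholder; actual Splunk pricing is negotiated and not public at this granularity.

```python
# Hypothetical comparison of ingest-based vs. infrastructure-based
# pricing. None of these rates are real Splunk list prices.
daily_ingest_gb = 500
price_per_gb_day = 150        # hypothetical $/yr per GB/day of licensed ingest
ingest_based = daily_ingest_gb * price_per_gb_day

vcpus = 64
price_per_vcpu = 900          # hypothetical $/yr per vCPU running Splunk Enterprise
infra_based = vcpus * price_per_vcpu

print(f"Ingest-based:         ${ingest_based:,}/yr")   # $75,000/yr
print(f"Infrastructure-based: ${infra_based:,}/yr")    # $57,600/yr
# Takeaway: ingest pricing scales with data volume, infrastructure
# pricing with compute. A team ingesting verbose logs it rarely
# queries may fare much better on the infrastructure model.
```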

In August 2019, Splunk announced the acquisition of SignalFx, described as “a SaaS leader in real-time monitoring and metrics for cloud infrastructure, microservices and applications.” Combined with Splunk’s existing log monitoring solution, this effectively rounded out the core features of a modern observability platform by adding application tracing and infrastructure monitoring. By October of 2019, Splunk announced the integration of SignalFx and Splunk Cloud to deliver full observability. This integration is characterized as deep linking between the two systems, which implies it would be seamless to the user but acknowledges that the two platforms are separate.

Splunk also offers an incident management solution through VictorOps, which was an acquisition announced in June 2018. The VictorOps toolset allows DevOps teams to coordinate the activities associated with incident resolution, such as on-call management, notifications/escalations and event communication. These services are similar to PagerDuty’s on-call management and modern incident response solutions. I reviewed PagerDuty’s solution in a prior post. Having incident management in its portfolio provides a slight leg up for Splunk. However, I don’t think this is significant for now, as IT organizations are used to having separate solutions for incident response. Splunk is even keeping the VictorOps branding for now. Nonetheless, incident management might be a logical extension for Datadog in the future.

Dynatrace (DT)

Dynatrace has been around for some time in various forms, although it has gone through several transformations. It started in 2005 as an Austrian company called dynatrace software. Then, Compuware acquired dynatrace in 2011, after acquiring Gomez in 2009. This group of solutions became Compuware’s APM product line. In 2014, private equity firm Thoma Bravo acquired Compuware, which brought in the APM assets. In July 2019, Dynatrace (DT) was spun out as a stand-alone publicly traded company, although Thoma Bravo still retains the majority of the equity.

Dynatrace’s roots and ownership by a private equity firm give me a mixed reaction. On one hand, pulling this asset out of the Compuware umbrella was obviously a good call, given Dynatrace’s success and competitiveness as a stand-alone company. On the other hand, private equity firms in general don’t have a great reputation, and I am concerned that this might dissuade talent from joining. Also, while Thoma Bravo has gained some accolades for their “buy and build” strategy, their track record appears mixed. Reviewing the list of past transactions, I see companies that were leaders at time of acquisition, but have since fallen behind in innovation and visibility. Examples are Riverbed, Qlik, Barracuda Networks, Imperva and SolarWinds.

Regardless, Dynatrace offers a solid product suite with APM, tracing, infrastructure monitoring, logs, synthetics, RUM, business analytics and AIOps. The solution includes extensive integrations, with over 300 monitoring agents listed on their site. Dynatrace has broad penetration in large, more traditional, enterprises. They claim over 2,200 enterprise customers, with Kroger, Experian, Carnival, Daimler, Air Canada, Samsung, Dish, Porsche, KeyBank and SAP as featured customers. I don’t see any “internet first” consumer brands (like Airbnb, Evernote, Buzzfeed, Draftkings, Coinbase, Wayfair at Datadog) or platform plays (like Salesforce, Zendesk, Twilio, Pagerduty at Datadog) as customers.

I don’t have personal experience with the Dynatrace platform, beyond using Gomez a long time ago. My sense is that it is a complete solution, but that the pace of technology innovation and product development velocity will lag Datadog and other players. This is based on a review of their product development press releases and company blog (no engineering specific blog). Also, there doesn’t appear to be programmability or openness in their platform. I don’t see any API spec, developer docs or community features, nor references to open source code.

Growth Rates

Comparing revenue growth rates of the different competitors from the most recent quarterly report might help provide some insight into their relative traction in the market:

| Company   | Qtr Revenue | Y/Y Growth |
| --------- | ----------- | ---------- |
| Datadog   | $113M       | 84%        |
| Elastic   | $113M       | 60%        |
| New Relic | $153M       | 23%        |
| Splunk    | $791M       | 27%        |
| Dynatrace | $143M       | 25%        |

Datadog Competitor Last Quarter Revenues and Growth Rates

Customer Adoption

Underlying Datadog’s rapid revenue growth is an impressive velocity in customer additions and expansions. New customers are landing with at least one, and increasingly several, products. Once a company is a customer, they tend to expand their usage, as evidenced by Datadog’s high DBNER. Expansion within a customer is along two dimensions – addition of more of the six core products, each with its own fee schedule, and growth in usage within each product, as fees accumulate by host or data volume.
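
A toy model makes the two expansion dimensions concrete. The per-unit prices below are hypothetical placeholders, not Datadog’s actual rate card.

```python
# Toy model of land-and-expand: more products, and more usage per product.
MONTHS = 12

def arr(products: dict) -> float:
    """products maps name -> (units, hypothetical $ per unit per month)."""
    return sum(units * price * MONTHS for units, price in products.values())

year1 = {"infrastructure": (100, 15)}   # lands with one product, 100 hosts
year2 = {
    "infrastructure": (150, 15),        # usage grows within the product...
    "apm": (50, 31),                    # ...and new products are added
    "logs": (40, 10),                   # (units here might be GB ingested)
}

print(f"Year 1 ARR: ${arr(year1):,.0f}")  # $18,000
print(f"Year 2 ARR: ${arr(year2):,.0f}")  # $50,400 -> net retention well above 100%
```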

Some customer specific highlights from the Q4 earnings report elucidate this motion:

  • Added about 1,000 net new customers in Q4, which is a record and almost twice the number added in Q4 2018. Datadog ended Q4 with about 10,500 total customers, up from 7,700 a year ago, representing year/year growth of 36%.
  • About 60% of customers are using two or more products, which is an increase from about 25% a year ago. That translates into over 6,000 customers using multiple products, which is more than some point solution vendors have for a single product.
  • Penetration is relatively even across enterprise, mid-market and SMB segments.
  • About 25% of customers are using all three pillars of observability combining infrastructure, APM and logs, which is up from 5% a year ago. This is especially impressive considering that Log monitoring has been in the market for less than two years.
  • Approximately 65% of new logo deals had two or more products, up from only about 25% in 2018. This demonstrates the pent-up demand for the integrated platform.
  • Q4 saw record international ARR adds and strong momentum with international growth outpacing that of the aggregate business. International is a growth area for the future and sales teams are still ramping outside the U.S.

Datadog has an impressive customer list. Their customers include high-traffic consumer sites, like Airbnb, Peloton, Evernote, Buzzfeed, Draftkings, Coinbase, Nasdaq, Hulu and Wayfair. Having demanding use cases across media, e-commerce, finance and services provides a strong testimonial for the use of the platform. However, even more impressive is how a number of clients are platforms themselves, providing software-enabled services to thousands of other customers. Examples of these software platform providers include PagerDuty, Twilio, Dropbox, Salesforce and Zendesk. Requirements for observability solutions for platform providers would be an order of magnitude greater than even the largest individual consumer sites, so this provides a strong testament to the value of the Datadog platform.

Datadog S-1 Filing, August 2019

In the S-1 filing, Datadog provided a graph depicting growth in ARR by customer cohort in the year they started with Datadog. You can see the impressive compounding of total spend as customers expand over the years. They shared a statistic that the ARR from their top 25 customers had increased by a median multiple of 33.9x, as measured from the ARR generated in each such customer’s first month as a customer.

Datadog S-1 Filing, August 2019

Developer Motion

Datadog appears to have reasonable engagement with the developer community, but not at the level of a full-blown platform solution. In this context, what I mean by a platform solution is one where developers are actively engaged and invited to use the company’s product to build new software application solutions. Some examples of platform companies that cater to the developer ecosystem are Twilio, Elastic, Fastly and Salesforce.

An interesting example of a company that recently expanded from offering packaged products to promoting a platform is Okta. On April 1st at Oktane Live, they announced a new set of platform services that enable developers in customer organizations to address new application use cases that have an identity component.

Enterprises require an independent platform approach to centralize identity; one that offers APIs and SDKs to customers and partners to drive development, customization, and meet future use cases. By building Okta Platform Services with a modular, service-oriented architecture, Okta, along with its customers and partners, can quickly create new features to speed innovation for everyone in the ecosystem.

Okta press release, April 1, 2020

Opening up its platform for developers to build new applications that require generic capabilities around event processing, monitoring and alerting might represent a future opportunity for Datadog.

Along these lines, Datadog doesn’t appear to host user or developer conferences, like Twilio Signal, Salesforce Dreamforce, Zendesk Relate or Elastic’s Elastic{ON} mini-conferences. Obviously, the large in-person conference isn’t feasible these days, but many of the above examples are moving their conference motion online. Datadog doesn’t appear to have started these, yet their growth thus far doesn’t seem to have suffered. If they evolve to a broader platform strategy, then this developer evangelism will be necessary.

Datadog does support community activity around open source contributions to create new service integrations, agent software and custom events. As mentioned previously, the agent software is all open-sourced, which does provide a nice level of transparency. I found an interesting blog post in which an active member of the Datadog community who made several code contributions was eventually hired.

Datadog has a basic blog with four categories – The Monitor, Engineering, Pup Culture and Community. The Monitor, which appears to cover capabilities, is pretty active. I was able to find a couple of posts a week. The Engineering blog, and other blog categories, garner less activity, maybe a post a month.

Datadog’s Twitter account is useful. Each day, they post a couple of items highlighting new capabilities or uses of the toolset. Some announcements are recycled (like most companies), but there is generally something new and interesting every day or two.

Datadog is also investing in building out their Partner channel. As software companies expand their sales efforts and product scope, having system integrators (SIs) and global IT consulting companies as advocates for your product helps drive business, particularly from the large Global 2000 companies. Usually these companies focus on software solutions which require a lot of customization (think Salesforce), but some are tapped for generalized digital transformation efforts. One component of digital transformation is often improvement in IT operations, of which application monitoring is a part.

In January, Datadog announced the launch of the Datadog Partner Network, a new program to expand Datadog’s support for channel partners. The program is available for managed service providers (MSPs), system integrators (SIs), resellers and referral partners. The Partner Network will provide marketing materials, training, opportunity tracking and a partner locator service. Members of the Partner Network will have access to training and accreditation programs for Datadog products so that they can provide implementation services to their customers.

Leadership

Datadog has two technical co-founders who still run the company. Having a technical founder involved in the early development of the product who is still running the business is one of my most important criteria in selecting software stack companies for investment. This is because they deeply understand the problem space, have a “builder” mentality and can establish credibility with the internal technology organization. Datadog checks all the boxes here.

Datadog was founded by Olivier Pomel and Alexis Le-Quoc in 2010. The two met while working at Wireless Generation, a provider of digital educational services for K-12 programs. After Wireless Generation was acquired by News Corp, the two set out to create a product that could reduce the friction they experienced between developer and sys-admin teams, who were often working at cross-purposes.

  • Olivier Pomel – Co-founder and CEO. Prior to founding Datadog, Olivier was the VP, Technology at Wireless Generation in NYC. He grew the engineering team to 100, before the company was acquired by News Corp. Before Wireless Generation, he held software engineering positions at IBM and several internet startups. He is a hands-on engineering leader with an MS degree in computer science.
  • Alexis Le-Quoc – Co-founder and CTO. Like Olivier, Alexis worked at Wireless Generation. He was the Director of Operations where he was responsible for the company’s highly scalable infrastructure. Alexis was part of the original DevOps movement and has been active in presenting at conferences and meet-ups. Prior to Wireless Generation, he was a software engineer at IBM, Neomeo and Orange. Alexis also has an MS degree in computer science.
  • David Obstler – CFO. David joined Datadog in Oct 2018. Prior to joining Datadog, he was the CFO at TravelClick for 4 years. He previously held CFO roles at OpenLink Financial, MSCI Inc., RiskMetrics Group and Pinnacor. He also held investment banking positions at JP Morgan, Lehman Brothers and Goldman Sachs. He has an MBA from Harvard.
  • Dan Fougere – CRO. Dan joined Datadog in Feb 2017. Prior to that, he ran the sales organization for Medallia (MDLA), which provides a SaaS-based customer experience management platform. Before Medallia, he held sales leadership positions at BMC, BladeLogic and Actuate.

Overall, the leadership team appears strong. I particularly like the continued active engagement of the two technical founders. Other members of the team appear to have relevant experience for their roles. Also of note is that Dev Ittycheria, the CEO of MongoDB, is on the Board of Directors.

Take-aways

Datadog (DDOG) offers a lot for an investor to get excited about. The Q4 and FY 2019 earnings report was outstanding. In considering DDOG for an investment, I think these factors stand out:

  • Revenue growth is exceptional. I like that it stayed above 80% year/year for both Q3 and Q4. The initial estimate for Q1 2020 growth of 68.5% also represents a high mark, particularly given the annual run rate is over half a billion dollars. DDOG tends to outperform revenue projections as well, but this may be tempered by the COVID-19 situation.
  • Q4 profitability was refreshing – the company ended the quarter with a 6.1% operating margin. For the full year of 2019, operating margin was -1.5%, so profitability gradually improved through the year. For FY 2020, projected operating margin is -4.6%, but outperformance might push it back up, as was the case in Q4.
  • The growth in customer spend over $100k and $1M ARR is significant. This, along with continued strength in DBNER, underscores Datadog’s strong expansion motion.
  • Large market opportunity to pursue, estimated at more than $35B in 2019. And Datadog offers market leading solutions. The CFO said on the recent earnings call – “We believe we are at the very early stages of a multi-billion dollar market opportunity and we feel very good about our ability to build a large and successful company over time.”
  • Lots of cash on the balance sheet for continued operations or acquisitions.
  • The product development velocity is impressive. Datadog continues to crank out new product offerings that represent significant, monetizable additions. Of the six products that currently generate fees, three were launched in the past year, and Security was just released in Beta.
  • The usability and sophistication of the product design is far ahead of competitive offerings at this point. DevOps teams often remark how easy it is to install Datadog and the significant time savings it generates. After toggling between multiple monitoring tools, they love having a single, comprehensive tool for troubleshooting issues. Datadog was the first to bring all system visibility into one view.
  • Having two technical co-founders running the company will ensure that the product vision and delivery pace continues.

While Datadog has a lot of momentum currently, there are a few areas to watch.

  • The competitive set has advanced significantly in the past year. Both Splunk and New Relic launched full observability solutions to market to their broad customer bases. Elastic continues to address the observability space with viable point solutions and the superset of “all things search”. The rebirth of New Relic as a programmable platform also creates noise and may appeal to customers who value the ability to tinker with their solutions. I think Datadog is secure in 2020 with their head start, but I worry about how the competitive landscape will evolve as we look to 2021. The key consideration will be whether Datadog’s exceptional revenue growth is maintainable as look-alike solutions fight for share.
  • Along those lines, Datadog does command a premium valuation currently. Based on 2019 revenue, its EV/Revenue ratio is 27.5. Looking forward to 2020 revenue estimates, it becomes 18.5. This is higher than other software companies, but not outlandish. Valuation will likely continue to be a sticking point for analysts.
  • While the addressable market estimate of $35B is large, it will be interesting to see if Datadog can expand further into adjacent segments. Security was a natural next step. Release orchestration, incident response and/or IT service desk would be possible extensions. Further product expansions grow the size of the market and help justify the premium valuation.

Related to COVID-19 impact on DDOG, I think the net effect will be neutral. Datadog’s revenue primarily stems from activity (hosts, logs, user actions, tests, network traffic) generated by its customers’ applications. Looking at the list of featured customers, some are getting a boost from COVID-19 (like Peloton) and some are suffering (like Airbnb). Datadog revenue goes up or down based on application usage by its customers, so investors can judge COVID-19 impact based on their projections for the customers’ businesses. In terms of new software sales, business continuity is a high priority (like security), but if enterprises have existing workable solutions, they might postpone an upgrade.

Investment Plan

I am not issuing a long term price target or purchase recommendation for DDOG stock at this point. As a rule of thumb, I refrain from doing so on companies within one year of their IPO. With that said, I am generally bullish on DDOG for 2020. I think their revenue trajectory is strong and will continue through this year, as competitive offerings are playing catch-up. Based on Q4 earnings, new product launches from Splunk and New Relic haven’t impacted Datadog’s expansion. Going forward, though, they could turn Datadog’s greenfield customer sales into bake-offs. The other wild card will be if Datadog continues their rapid product expansion into adjacent markets and leapfrogs incumbents there.

For 2020, I think this means that DDOG can end the year with about the same EV/Revenue ratio (27.5 on 2019 revenue). I think they will outperform on their 2020 revenue estimate of $540M, but not as much as the 18% in Q4. Let’s assume $560M to be conservative. That yields an end of year EV of $15.4B, or about a 50% gain over current value. That translates into a share price of about $54 by end of year. This takes an optimistic view of the COVID-19 situation, at minimum assuming IT spending bounces back in Q3-Q4 2020.
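
For transparency, here is the arithmetic behind that target in a short sketch. The current EV and implied share count are approximations backed out of the figures above, not precise market data.

```python
# Back-of-envelope math for the year-end scenario described above.
# current_ev and shares_out are approximations implied by the post's
# own figures (27.5x on 2019 revenue; ~$54 target), not market data.
ev_to_rev = 27.5           # EV/Revenue multiple, held constant
rev_2020 = 560e6           # assumed FY 2020 revenue
current_ev = 10.0e9        # ~27.5 x FY 2019 revenue of ~$363M
shares_out = 285e6         # implied share count (approximate)

year_end_ev = ev_to_rev * rev_2020          # $15.4B
gain = year_end_ev / current_ev - 1         # ~54%, i.e. "about a 50% gain"
price = year_end_ev / shares_out            # ~$54/share

print(f"Year-end EV: ${year_end_ev/1e9:.1f}B, gain: {gain:.0%}, price: ${price:.0f}")
```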

I will continue to monitor DDOG over the next two quarters and provide updates. After the one year anniversary of the IPO lapses and the observability market dynamics become more clear, I may make a recommendation and open a position.

11 Comments

  1. David

    First off, thanks for updating your current holdings. Much appreciated.

    Second, this is a great review of DDOG. I’ve been reluctant to buy shares due to a very high price/sales ratio. But your review has definitely made me put this one on the radar screen.

    Finally, it would be awesome if you could put in a review of Livongo LVGO, an interesting cloud software firm in the healthcare industry.

    Much appreciated,
    Dave

    • poffringa

      Hi Dave,

      No problem on updating my holdings and thanks for the feedback on the DDOG post. Glad that helped.

      I took a quick look at Livongo. While I agree it is an interesting company, I can’t provide useful coverage of it. The reason is that Livongo is in the healthcare industry and the target users are the patients themselves. I focus my coverage on the software companies that provide the technologies used by companies like Livongo. I can’t offer much incremental perspective on the prospects for Livongo itself, as I don’t understand all the nuances around modern healthcare and whether their AI-enabled solutions are good for patients, doctors and insurance companies. If I were a former physician, I might have some insight to offer. Sorry.

      Looks like Seeking Alpha has some interesting articles on the company written by analysts who probably have a better grasp of the industry than I do.

      Thanks,
      Peter

  2. Henry

    Thanks! It’s really helpful!

  3. Kurt

    I read your very informative articles on ESTC and DDOG both of which I own. I wanted to get your opinion on my thought process:
    1. ESTC strengths – being open source, which allows for customization, mind share, and new products that can be built on it; weakness – no market-leading solutions per Gartner/Forrester, only mentioned in Gartner for APM.
    2. DDOG strengths – ease of use, unlike ESTC which needs more work; so it may appeal to a broader and wider set of customers.
    Both companies can help reduce vendor sprawl as they can do logging, monitoring, and security (ESTC can do search, and endpoint sec. too). DDOG is ranked in Forrester and mentioned in Gartner.
    ESTC has search as its DNA; DDOG has monitoring as its DNA.
    Observability is the largest rev segment for ESTC, search smaller. ESTC’s search customers may like the one platform to get observability, but if a customer is not using ESTC for search, I’m not sure about them buying it just for observability. I wonder if their observability customers started off with only search. If not, my thesis won’t be correct.
    Overall, mainly due to its ease of use and rapid adoption of multiple pillars, it seems DDOG has a bigger chance to grow faster and into a much bigger company than ESTC, provided they can hold off NEWR and SPLK.

    • poffringa

      Thanks for your feedback. I generally agree with your thought process on product fit and opportunities. Here are a few other comments:
      – ESTC does have a number of examples where the customer started with a search use case and then expanded into other areas. This also happens from observability into security, but that is expected.
      – ESTC platform is programmable, meaning that customers can make feature changes to best match their use case. This happens mostly around search, but is also possible with observability and some customers have taken advantage of this to create a new solution to meet a monitoring use case that doesn’t have a direct product counterpart. DDOG offers APIs for custom data ingestion and retrieval, but isn’t inherently programmable. You touched on this – just wanted to reinforce the point.
      – The programmability question is probably most acute when comparing these two – does a customer prefer the ability to customize a less good product, or just want a best of breed design out of the box?

      In terms of your investment thesis, we probably need to look at valuation and time horizon for the investment. At the simplest level (you can do much deeper analysis):
      – ESTC: Current price of $54, P/S ratio of 11. Latest quarterly revenue growth of 60%. All time high price of $102. Hit $72 after earnings on Feb 26.
      – DDOG: Current price of $38, P/S ratio of 27. Latest quarterly revenue growth of 84%. All time high price of $50. Hit $47 after earnings on Feb 13.

      For remainder of 2020, assuming the Covid-19 situation improves business outlook in second half, I could see DDOG return to ATH price plus some, representing a 50% gain. Their revenue momentum should continue this year and justify the high P/S ratio. For ESTC, though, I could see it return to the post earnings price of $72, and then even retest the ATH. This would represent a greater than 50% gain.

      Going into 2021, I have concerns about the competitive landscape for both as SPLK and NEWR are pushing hard on observability and even security (for SPLK). I think ESTC is better positioned for this, given they are a generalized, programmable platform for “search”. But, DDOG is rapidly expanding into other adjacencies and could make a play into Ops, like incident response or release orchestration. I slightly prefer ESTC, mostly because I like programmable platforms.

      Hope this helps. I think both companies are safe investments for at least this year.

      • Kurt

        Thanks for your reply and love the discussion. I have been in ESTC since early last year. Its valuation and relative strength are an anomaly for its growth rate. I feel it should have a P/S at least similar to SMAR. I think the market is somewhat confused because of these issues:
        1. Overhang of lawsuits with Amazon
        2. Poorer operating margins and FCF when compared to faster peers
        3. Perception that AWS makes more $ off their hosted OS version. AWS has not really accepted ESTC the way it has accepted MDB, for instance. I agree the Open Distro thing has not taken off and has slowed down ESTC growth
        4. Feeling that DDOG will eat its lunch – ESTC the stock has struggled since the DDOG IPO
        5. There was the concern of slowing billings last Q, but that was addressed in the most recent Q.
        Another question for you – in your response you mentioned SPLK and NEWR are pushing hard. Is it easy for a company to replace ESTC or DDOG with, say, SPLK? Is there any moat in terms of stickiness for these companies?

        • poffringa

          Yes – thanks for the additional color. I agree about the valuation comparison of ESTC and SMAR.

          On SPLK and NEWR, I called those out because they both achieved full observability solutions in the last 6 months. Prior to that, each lacked one of the “3 pillars of observability” (logs for NEWR, traces for SPLK). This adds more noise to the competitive landscape for new deals. As a CIO/CTO, I would want to see demos from more than one provider. DDOG still has a more refined and cleanly integrated product.

          In terms of switching observability solutions, there is some friction, but I would call it low. I have swapped out monitoring tools a couple of times at past companies at scale. The flip side of the easy installation and deployment of modern observability solutions is that a team can run two of them in parallel. They would watch both for a period and, once they feel the replacement is stable, just remove the monitoring agents of the one being replaced. Pretty straightforward. Switching costs would roughly be: lost monitoring history (though running in parallel mitigates this somewhat), new agent testing and roll-out, DevOps team familiarization and any custom configurations the team had made.

          • nilvest

            Hi Pierre,
            You have hands down the best analysis I have seen on the names you cover. Thanks a lot for sharing so much insight in a way that a non-SW person can also understand.

            Great discussion with Kurt above on ESTC / DDOG and cost of switching.
            Question – is it fair to think ESTC appeals more to IT groups with a team of developers, vs DDOG appealing more to IT groups with fewer developers?
            Would that translate to ESTC customers being more invested in their use of ESTC and less likely to switch out, because they have some custom work done and may not want to reinvent the wheel with another supplier?

            How would you qualify SPLK, NEWR and MDB for the same comparison (effort put into customization -> increasing switching cost)?

          • poffringa

            Hi There,

            Thanks for the feedback. I am glad the blog is helping you. My intent is exactly what you hint at – to share some of my technology perspective for investing in software companies.

            Regarding your question about ESTC / DDOG, you are hitting on a key point in evaluating SaaS companies. There seems to be an emerging trend of software companies with best-of-breed point solutions extending their platforms to expose programmable services for developers either to customize their company’s use of the product or to build whole new independent solutions. I covered this in a more recent blog post, Considering Platform Plays. I explored how OKTA, NEWR and ZEN recently expanded their offerings to include programmability, and considered a few under the radar opportunities for ZM, DDOG and COUP. I think you would find it interesting.

            Applied to ESTC/DDOG, I think ESTC would appeal to organizations with a strong developer mindset that want the ability to “hack” on the solution or address an edge case that falls between search, observability and security. Elastic’s generalized “search” platform enables this, along with their open source posture. DDOG, on the other hand, offers a superior product out of the box, based on my own experience and their remarkable growth. Many DevOps shops are happy to just plug that in and go. DDOG is currently riding the wave of launching the first fully integrated and complete observability solution, as compared to NEWR, SPLK and even DT, which are now playing catch-up.

            Whatever programmability work that a technology organization does on an open software platform inherently makes the solution more sticky, as the engineering team would need to port that work to another platform to consider a switch. Of course, if that work is done in a common dev framework versus the vendor’s proprietary language, that makes the migration easier, but there are still differences in core APIs/libraries, data access, etc. that the team would need to account for. So, yes, I think this makes ESTC stickier once it’s in an organization’s infrastructure for those engineering shops so inclined.

            For the other companies you mention:

            – SPLK: I am not aware of full programmability built into their solutions at this point. They do cover the “3 pillars of observability” now, which could generate some noise for DDOG, DT, NEWR, etc. Switching off of SPLK to another solution would require migration of agents and testing (mentioned in prior comment). SPLK does support some programmability in how queries are built in their language, so that work would be lost, along with history. But, switching is still relatively low impact.

            – NEWR: Switching costs for NEWR were low previously. As mentioned in the Platform Plays post, they recently launched a programmable platform solution as part of New Relic One. That implies work done by DevOps organizations to build custom “apps” inside the New Relic interface would make their platform stickier.

            – MDB: MongoDB falls somewhere in the middle. An engineering team’s use of MongoDB is mostly as a database appliance, versus something that they would look to modify. I could see some cases where they might want to customize ancillary functions. These enhancements would more likely come from the open source community, but are still value-add to MongoDB. You can see examples in MongoDB’s GitHub.

            Switching costs for MongoDB revolve around changing out the back-end data store. That is generally very involved. Most organizations switch their database as part of a new greenfield project or breaking a monolithic legacy application into services. I think this motion favors MongoDB and other modern database solutions, as there are still a lot of legacy DB installations (Oracle, etc.) out there and monolithic apps that run on a single, large relational cluster. I think MDB will benefit over the next 5-10 years, as a percentage of new application builds use the solution, as well as monolithic re-architecture efforts that pick MongoDB as the backing store for a single service.

            Many investors don’t realize that a large engineering organization might use several different flavors of database solutions for different use cases – relational for structured data, non-relational for unstructured, graph databases for node/edge relationships, etc. For example, I could build a shopping cart service with MongoDB on the back-end, while the actual purchase data is stored in MySQL. Regardless, once a new app or service is built on MongoDB, it would likely be there for a while. That’s not to say it’s permanent, though. I see more “retiring” of services than direct replacements.

            Hope that helps.

            Thanks,
            Peter

  4. Aananth

    Great article – comprehensive, insightful and well articulated. Thanks a ton.

  5. Sonal

    Peter,

    This was the first blog I’ve read on your site. I’m from the s/w industry myself and have been trying to understand what DDOG was doing so I could decide whether I wanted to invest in it. Your explanation made it crystal clear. It also answered so many of my questions around DDOG’s moat, growth etc. In other words, the perfect balance of technical deep dive and financial growth estimates. I’m bookmarking your site. Will definitely be on my visit often list.