Early in the evolution of cloud computing infrastructure, the cloud providers were rapidly expanding their offerings. For a while, it seemed they would leave no room for independent providers in a land grab to address every segment of software infrastructure. As the landscape has matured and enterprises increasingly implement a multi-cloud strategy, it has become clear that independent providers can not only co-exist, but thrive, in this environment. Examples include Datadog for observability, Twilio for communications, MongoDB for databases and Fastly for CDN.
This blog post examines the history of cloud service providers and the evolution of their offerings. As cloud vendors have defined broad categories of software services, they have left openings for nimble, focused independent software vendors to leverage the same cloud infrastructure to deliver substantially better product offerings in some segments. From this, we can draw observations about why they are succeeding and what they need to continue doing. Investors occasionally raise competitive concerns for independent software providers that cloud vendors will choose at some point to crush them. I posit that threat has passed in many categories. This post seeks to help investors understand what has changed and how to reason about the risks going forward for their favorite independent software company investments.
History of Cloud Services
The AWS platform was first launched in 2002. Initially, it was intended for use internally by Amazon to host its retail site and operations infrastructure. It was designed for modularity and openness from the beginning, through standardization, automation and extensive reliance on web services for common functions. This was also about the time that configuration management (automated provisioning and management of servers) began proliferating, making this effort more feasible. Interestingly, the original engineers who designed AWS published an internal paper in which they described the design concepts behind the software infrastructure. They only mentioned the notion of selling access to outside parties in a comment near the end.
In November 2004, the first public service from AWS was introduced. This was the Simple Queue Service, SQS, which is still available today. SQS provides a distributed message queueing service, which allows programmatic sending of messages over the Internet. This provides a means of handling asynchronous communications between systems. For example, an e-commerce site could use SQS to send messages with customer purchase data to an outside fulfillment center.
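The asynchronous pattern that SQS enables can be sketched with Python's standard-library queue as a local stand-in for the hosted service. The order fields below are hypothetical; a real integration would call an SQS client library (such as boto3) with a queue URL instead of a local object.

```python
import json
import queue

# Local stand-in for an SQS queue; a real producer would call an SQS
# client's send_message with a queue URL instead of putting to this object.
fulfillment_queue = queue.Queue()

def send_purchase(order_id, items):
    """Producer: the e-commerce site enqueues purchase data as JSON."""
    fulfillment_queue.put(json.dumps({"order_id": order_id, "items": items}))

def process_next_purchase():
    """Consumer: the fulfillment center dequeues and decodes one message."""
    return json.loads(fulfillment_queue.get())

send_purchase("ORD-1001", [{"sku": "WIDGET-9", "qty": 2}])
order = process_next_purchase()
```

Because the producer and consumer share only the queue, either side can be taken offline or scaled independently, which is the property that makes a hosted queue useful between separate systems.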
AWS was fully launched in March 2006, as an integrated suite of core services offered as an environment for other developers to build and host their Internet applications. The initial version of AWS consisted of S3 (cloud storage), EC2 (virtualized compute) and SQS. Amazon began quickly expanding the set of services offered on AWS, trying to keep up with the increasing demands of the developers using it. As an additional driver, by 2010, all of Amazon’s retail sites had migrated to run on AWS. Outside companies began utilizing AWS for hosting as well. Netflix famously started using AWS in 2008 and then declared their intention to go 100% cloud in 2010. They finally completed the cloud migration in 2016.
As you would expect, as usage of AWS grew, it required more and more infrastructure services to address all the common components associated with building modern internet applications. These started as layers around the core compute and storage functions and evolved to include 175 fully featured services today. The pace of these releases has been extraordinary, with the number of new services launched each year accelerating over time. Here are rough counts of new service launches by year, taken from Jerry Hargrove’s blog post on AWS History.
There are now so many services, that Amazon had to start grouping them into categories. A user can see all the services by navigating to the AWS home page and viewing the Explore our Products section. The Product tab also provides an expandable menu.
There are now 25 different categories of services. Most categories have many individual service offerings within them. The biggest category is Machine Learning with 25 offerings. A few categories only have one product offering, like Robotics, Quantum, AR/VR and Satellite. Other categories with more than 10 product offerings are:
- Management and Governance: 20
- Security, Identity and Compliance: 18
- Developer Tools: 14
- Database: 12
- Analytics: 12
- IoT: 12
- Networking and Content Delivery: 11
- Compute: 11
- Storage: 10
- Media Services: 10
This “shelf space” approach to provisioning services makes sense. Any time a developer seeks a new service to plug into their application, AWS wants theirs to be the default, because AWS can then charge incrementally for usage. Since its beginning, the AWS pricing model has been based on granular usage. To their credit, AWS introduced the notion of charging per hour of compute or block of storage used, allowing customers to optimize their spend by turning off servers when not in use. As new services were introduced, the AWS team applied the same usage-based pricing model, at whatever usage increment made sense for the context of that service. DynamoDB (key-value database) is charged by number of reads, writes and storage. Cognito (identity management) is charged by monthly active users. SES (email service) is charged in 1,000-email increments, plus size of attachments.
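The usage-based model can be illustrated with a toy cost calculator. The per-unit rates below are made up for illustration and are not actual AWS prices:

```python
def monthly_cost(reads, writes, storage_gb,
                 read_rate=0.25e-6,    # hypothetical $ per read
                 write_rate=1.25e-6,   # hypothetical $ per write
                 storage_rate=0.25):   # hypothetical $ per GB-month
    """The bill scales with granular usage rather than a flat license fee."""
    return reads * read_rate + writes * write_rate + storage_gb * storage_rate

# A light month costs little; a heavy month costs proportionally more.
light = monthly_cost(reads=1_000_000, writes=100_000, storage_gb=10)
heavy = monthly_cost(reads=100_000_000, writes=10_000_000, storage_gb=1_000)
```

The same shape applies whether the unit is a read, a monthly active user or a block of 1,000 emails; only the increment changes per service.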
This incremental contribution to revenue for AWS became the main driver of the explosion of services. As AWS was becoming an ever larger component of Amazon’s overall revenue and profitability, the AWS team continued pushing to create ever more services to charge for. Many services were introduced in an experimental form. If they received traction, AWS would invest further in building them out. If adoption was low, they would slow down development and promote them less. Interestingly, given the large number of services started, not very many services have been completely shut down. This reflects the fact that keeping a service on life support can be low cost, while still generating some recurring revenue.
As an example, the Simple Notification Service (SNS) was once promoted as a means of sending SMS messages to customers, similar to Twilio’s Programmable SMS product. Now, SNS is labeled as being for “Managed message topics for pub/sub” and is categorized under Application Integration, rather than Customer Engagement. The SMS capability is still available, but reference to it is buried in the product page.
With the growing visibility of AWS, other large tech companies saw the potential for cloud hosting and wanted to ensure they weren’t left out. Microsoft announced Azure in 2008 and formally released the service in 2010. Azure now advertises 600 individual services. The Google Cloud Platform (GCP) was initially launched as App Engine in 2008, which offered basic Platform-as-a-Service functionality for running code. It has similarly expanded its suite of services, but in a more methodical way.
Together, these 3 large cloud service providers constituted 58% of the cloud infrastructure market as of Q1 2020. A long tail of other providers makes up the remaining 42%, including Alibaba, IBM, Oracle and Rackspace. The relative market share of the big 3 providers has also shifted over the last 3 years (data is from 2017 to 2020):
- AWS: Decreased from 34% to 32%
- Azure: Increased from 11% to 18%
- GCP: Increased from 6% to 8%
Most of the growth for Azure and GCP has come at the expense of the smaller providers, but AWS has seen share decrease over the last couple of years as well. Microsoft Azure has experienced the most growth. This can be attributed to a few factors – improvements in their platform, leveraging their existing penetration among the Global 2000 and bundling of IT products. Additionally, competitive fears among enterprises in retail, banking and health care have driven some customers to Azure out of concern that Amazon would examine their application activity to inform their own forays into those industries.
Another data point on the relative market share trajectory for the large cloud infrastructure providers came from a JP Morgan survey conducted in June 2020. They asked 130 CIOs at large enterprises what percentage of IaaS spending was on each cloud provider currently and what percentage they would expect in 3 years. The results were telling – AWS was projected to drop by 4%, while GCP would grow by 4%. Azure already had the largest share among this group and was projected to increase slightly.
Corroborating the JP Morgan survey, Goldman Sachs conducted a similar survey in December 2019 of 100 IT executives at Global 2000 companies about their planned IT spending. Of these executives, 56 were using Azure for cloud infrastructure versus 48 using AWS at the time of the survey. They were then asked to look forward 3 years. As with the JP Morgan survey, the trend toward share expansion for Azure and GCP continued. In 3 years, 66 of the IT executives expect to be using Azure, with 64 using AWS and 30 using GCP. What is striking about these numbers is that they reveal a move towards multi-cloud deployments. For 100 respondents to cast these 160 votes, many must be using more than one cloud vendor. As I will discuss in a bit, multi-cloud is becoming the Achilles heel of cloud vendor hegemony in several categories of software services.
Differing Attitudes towards Independent Providers
As the cloud providers were getting started, they rapidly expanded their service offerings, adding on any new service that might be needed by developers. In most cases, they built proprietary versions of the service with unique protocols and API interfaces. In some cases, they took open source projects and sold hosting of them. For a couple, this was in direct competition with a commercial entity that funded the majority of the development and had built a business around that open source project. AWS has been the most aggressive in this practice, causing some analysts to describe it as “strip mining”. A well publicized article in the NY Times highlighted this practice in December 2019 and cited “strip mining” examples with Elastic, MongoDB, MariaDB, Redis Labs and Confluent. AWS responded that these allegations were “off-base”.
In response, some of these companies took actions to discourage the behavior. MongoDB modified their software license to force any company offering hosting of the solution to also open source all of their underlying software. This effectively discouraged AWS from using any future version of MongoDB, as they weren’t willing to open source other proprietary aspects of AWS. They still offer a document-oriented database product in DocumentDB, which is pinned to the older version 3.6 of MongoDB (as compared to MongoDB’s latest version of 4.4). In Elastic’s case, they pivoted to an “open core” model. All their software is open source, in the sense that the source code can be viewed. However, more advanced features are covered by a proprietary license, which prohibits hosting by anyone except Elastic. In response, AWS forked the Elasticsearch project and created Open Distro for Elasticsearch, in an effort to engage the open source community against Elastic. Success of this has been limited. Currently, there are 32 contributors to Open Distro, most of whom are AWS employees. This compares to 149 contributors on the Elastic project. There is also an active court case regarding trademark infringement by AWS in their use of the Elasticsearch name.
This isn’t meant to be a criticism of AWS. In the heyday of cloud growth from 2015 onward, they were scrambling to keep up with the increasing volume of requests for disparate services and were likely rapidly plugging holes. Also, up until 2018, AWS enjoyed a significant lead over Azure and GCP in core infrastructure coverage, reliability and capabilities, leaving potential customers with little choice. I remember managing a large AWS installation in 2016, where our sales leadership was prodding the technology organization to migrate the cloud footprint to Azure because our customers in retail (Kroger, Target, Albertsons) were concerned about their data sitting on Amazon servers. At that time, Azure was rapidly expanding its footprint and experiencing legitimate stability issues with underlying services like storage. Our team had real concerns with the migration and experienced at least one outage due to instability. Today, that is no longer a problem. Both Azure and GCP have advanced and solidified their offerings to the point where comparisons over core system availability are no longer meaningful. New cloud customers have real choice.
If AWS is the most competitive with the independent software providers, Google Cloud Platform (GCP) is the most accommodating. They launched a revamped Partner Program in 2019 with the goal to help software providers “have a good experience with Google at all stages” and a focus on creating the best value for their customers. For example, MongoDB enjoys a deep integration with GCP products and unified billing for GCP customers. MongoDB, Elastic and Confluent were named Technology Partner of the Year in their respective categories. The Fastly CTO was recently featured on the GCP Podcast, Google’s weekly podcast covering news about the Google Cloud Platform. If GCP were intent on limiting the exposure of these partners, they certainly wouldn’t feature one on their weekly podcast.
When asked about competition on the Q1 Earnings Call on June 4th, MongoDB’s CEO even called out Google for their approach.
Google, as we’ve talked about in the past, it doesn’t have a competitive product and the partnership there is very strong. They named us one of the technology partners of the year and the sales teams do a lot of joint planning, joint account planning and work in various — in Europe and in North America and other parts of the world. And so that business is growing quickly, but frankly our business to all the cloud providers are growing quickly. So we feel quite good — we feel very good about our value proposition.
MongoDB CEO, Q1 earnings call
If AWS and GCP are at two ends of the spectrum, Azure is in the middle. They will offer competitive products to the independent providers, but haven’t engaged in extensive re-purposing of open source projects. Since Microsoft has traditionally been a commercial software vendor, they leaned more towards building their own solutions for cloud services. Similar to GCP, Microsoft manages an accommodating partner program. This allows independent software providers to promote their products on the Microsoft Azure Marketplace. This program provides unified billing for joint customers and makes it easy for Azure customers to locate and provision solutions from the independent providers. For example, MongoDB Atlas, Elastic Cloud, Twilio SendGrid and Confluent have offerings available for purchase on the Marketplace. In fact, at a recent analyst event, Elastic’s CFO mentioned that an Azure Cloud executive, Scott Guthrie, spoke at Elastic’s sales team kick-off.
This spectrum of differing approaches towards the independent software providers is interesting. For GCP, I think it is a strategy to catch-up with the other cloud providers and follows their “do no harm” philosophy. Azure is aligning with Microsoft’s broader sales strategy to offer a full bundle of products to large, traditional enterprises. By including alternatives to their own products in a bundle, their sales teams can still manage the relationship with the customer. It will be interesting to see if AWS takes a more conciliatory approach as well to independent software providers, particularly if their market share continues to decline.
Multi-Cloud
When cloud hosting was first gaining traction from 2010 – 2015, companies would begin their cloud migration with a single provider, usually AWS. This was the case with Netflix, which went all-in on AWS. However, as Azure and GCP have dramatically expanded the reach and reliability of their core compute and storage capabilities, enterprises are hedging their cloud deployment. This was implied by the Goldman Sachs survey mentioned above. A 2019 Gartner cloud adoption survey showed that of those companies on the public cloud, 81% were using more than one cloud service provider. The cloud ecosystem is expanding beyond the scope of a single cloud service provider for most large enterprise customers. This has been enabled by the commoditization of core capabilities amongst AWS, Azure and GCP. Enterprise IT organizations are spanning multiple cloud providers for a few reasons:
- Reliability. With ultra-high expectations for uptime, large enterprises want to ensure that they have a presence on more than one cloud provider in the event of a system-wide outage. While most cloud providers operate out of multiple regions for resiliency, network or software infrastructure issues could still impact the availability of a provider. Business continuity plans are increasingly requiring a multi-cloud configuration. SLAs for cloud vendors vary by provider and even service, but generally don’t exceed 99.99% uptime. While this sounds high, it still allows for almost an hour of downtime a year. The SLA for AWS Compute targets 99.99% but only gives a full refund if availability dips below 95% in a month, which translates into a day and a half of downtime.
- Negotiation Posture. If a cloud vendor knows that a customer is exclusively deployed on their infrastructure, they feel the customer is susceptible to lock-in. This leads to less leverage for the customer in negotiations for bulk price reductions at contract renewal. Large customers of every cloud vendor are assigned a sales rep, who has latitude around pricing of long term spend commitments. Also, cloud vendors offer other benefits, like escalated support and architecture consulting. These are easier for the customer to access if the vendor feels they are competing for spend.
- Features. The cloud vendors are developing specializations and customers may want to take advantage of particular services within each cloud provider. This optionality for customers has emerged as an outcome of breaking up their monolithic applications into stand-alone services. Services can be deployed on different cloud providers, if desired. For example, GCP has a strong practice around AI/ML, while Azure has invested heavily in IoT.
- Geographic Distribution. Some cloud providers have a stronger presence in certain geographic regions. For global enterprises, data localization requirements might argue for a different hosting provider by country.
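The downtime math behind the Reliability point above is easy to check; even a 99.99% SLA permits meaningful outage time:

```python
def allowed_downtime_hours(availability_pct, period_hours):
    """Maximum downtime a given availability target permits over a period."""
    return period_hours * (1 - availability_pct / 100)

# 99.99% over a 365-day year: roughly 53 minutes of allowed downtime.
yearly = allowed_downtime_hours(99.99, 365 * 24)

# A 95% refund threshold over a 30-day month: 36 hours, i.e. a day and a half.
refund_threshold = allowed_downtime_hours(95.0, 30 * 24)
```

This is why business continuity plans treat a single provider's SLA as insufficient: the headline number sounds high, but the permitted downtime is measured in hours, not seconds.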
A bias towards utilization of multiple cloud vendors creates an advantage for the independent software providers. This is because common software services, like database access, email generation, identity management or CDN, tend to be accessed through an API interface. The structure of the API endpoints, organization of the data, callback workflow and business logic are generally unique to each software service. This means that a custom software application created by an enterprise, like an e-commerce site, customer service tool or media distribution service, would need to code to that service’s API interface in order to access it.
For example, an organization might decide to utilize a document-oriented database for one of their microservices, like shopping cart or user profile data. If they hosted that application in separate geographic regions on both AWS and Azure, they would have to utilize the document-oriented database service provided by each. For Azure, they could use CosmosDB. On AWS, they could use DocumentDB. These are slightly different implementations, however. The APIs, operations and data types supported by DocumentDB are different from those supported by CosmosDB.
They do both offer compatibility with MongoDB version 3.6. However, the latest version of MongoDB is 4.4 (as of June 2020 in beta). There have been numerous improvements to the MongoDB engine between 3.6 and 4.4. Instead of maintaining a software application with separate API interfaces for CosmosDB and DocumentDB or pinning the API to an old version of MongoDB, the enterprise could simply utilize MongoDB Atlas. Atlas is MongoDB’s cloud hosted service and offers the latest version of MongoDB. It is available on AWS, Azure and GCP. In this case, the enterprise’s engineering team would only need to maintain one set of code to interface to the document-oriented database. Even if they were only deployed on one cloud provider currently, a future migration or expansion to another cloud provider would involve no code changes.
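The portability argument can be sketched in a few lines. The class below is an in-memory stand-in for a hosted document database client, and the connection URIs are hypothetical; the point it illustrates is that the application code stays identical no matter which cloud hosts the cluster:

```python
from dataclasses import dataclass, field

@dataclass
class InMemoryDocumentStore:
    """Local stand-in for a cloud-hosted document database client."""
    uri: str
    docs: dict = field(default_factory=dict)

    def insert_one(self, doc):
        self.docs[doc["_id"]] = doc

    def find_one(self, _id):
        return self.docs.get(_id)

def save_cart(store, user_id, items):
    # Identical application logic regardless of which cloud hosts the cluster.
    store.insert_one({"_id": user_id, "items": items})

# Same code, different deployment target -- only the connection URI differs.
aws_store = InMemoryDocumentStore("mongodb+srv://cluster0.aws.example.net")
azure_store = InMemoryDocumentStore("mongodb+srv://cluster0.azure.example.net")
for store in (aws_store, azure_store):
    save_cart(store, "u42", ["sku-1", "sku-2"])
```

A future migration or expansion to another cloud provider changes a configuration value, not the codebase, which is exactly the lock-in avoidance enterprises are after.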
I realize this explanation is a bit technical, but it is an important point and applies to all independent software service providers. Large enterprise customers want to avoid technology lock-in where possible. If an alternative to a cloud vendor supplied service is available that works across all cloud providers, it only makes sense to utilize it. This argument applies across all types of services. The first types of services that benefitted from this realization were those that could be hosted completely independently, like payments (Stripe, Adyen), communications (Twilio, Bandwidth) and CDN (Akamai, Fastly, Cloudflare). Now, core infrastructure services like databases (MongoDB Atlas), search (Elastic Cloud) and identity management (Okta) can run within cloud provider regions on their hardware. Usage of these independent services still generates revenue for the cloud service providers for compute and storage, so supporting them is beneficial.
Why are Independent Providers Successful?
In terms of identifying the independent software providers that benefit from specialization in a category, the landscape is broad. Investors could look at any category and find both public and private companies vying for attention. I’ll provide some examples of software stack providers that I follow below, but this list isn’t meant to be exhaustive:
- NoSQL Database: MongoDB (MDB), DataStax
- Search: Elastic (ESTC)
- Observability: Datadog (DDOG), Splunk (SPLK), Dynatrace (DT), New Relic (NEWR)
- CDN / Edge Compute: Fastly (FSLY), Cloudflare (NET)
- Identity Management: Okta (OKTA), Ping Identity (PING)
- Communications: Twilio (TWLO), Bandwidth (BAND)
- Data Streaming: Confluent
- Data Warehouse: Snowflake
Previously, I touched on how multi-cloud by default favors the independent providers. The CEOs of several independent software companies refer to this as neutrality. They claim that enterprise IT organizations are increasingly showing preference for a neutral provider when possible, due to concerns around cloud vendor lock-in. Beyond neutrality, I believe there are several other factors that benefit the independent software providers and will continue to drive their growth going forward.
Focus
By focusing on a particular niche, the independent software provider is able to go very deep into their technology solution. Rather than standing up a service that largely duplicates what is available on the market currently, they dedicate resources to innovation and addressing harder problems. Having hundreds or thousands of product development personnel (design, engineering, test, analytics) spending every day iterating on a feature set within a single category should ultimately result in a better product. Customer feedback will be targeted and largely unfiltered, as it isn’t obfuscated by an account manager covering many categories, and often benefits from a direct connection between engineering teams in the customer and provider organizations.
As one example mentioned previously, Elastic has almost 5 times the number of code committers on the Elastic GitHub project than AWS has on its sponsored version of Open Distro Elasticsearch. Yes, AWS could decide that search infrastructure is more strategic to them and dedicate more resources, but I question whether these would be as productive or creative as Elastic engineers who have chosen to work on the Elastic open source project from the beginning. Also, each cloud vendor would need to duplicate this effort due to multi-cloud and there are only so many search specialists to go around.
As another example of the benefit of focus, Fastly has spent many years building a better CDN. As I explained in a prior post, they took a different approach in designing their CDN infrastructure, questioning every conventional architectural decision, with an eye towards improving performance and customer flexibility. This approach was applied towards their POPs, internal network routing and storage mechanics. Now, they are applying the same rigor towards building a new edge compute platform that enables distributed serverless processes in a fast, secure and compact way. To design their solution, they skipped over the conventional serverless solutions on the market, like containers or the V8 Engine, to build their own runtime on top of WebAssembly. For distributed storage, they are similarly considering a new approach, leveraging CRDTs to ensure eventual consistency. The CTO recently talked about how his job has evolved into managing a small team of top engineers conducting something akin to basic research in finding new ways to deliver ground-breaking solutions around serverless and edge compute. There is even recent evidence that Amazon is using Fastly’s CDN for their own sites, versus their internal CloudFront product.
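The CRDTs mentioned above are data structures whose replicas can accept writes independently and still converge when their states are merged. A minimal example is a grow-only counter, sketched here (the replica names are hypothetical, not actual Fastly POPs):

```python
class GCounter:
    """Grow-only counter CRDT: each replica increments only its own slot;
    merge takes the per-replica max, so replicas converge regardless of
    the order in which they exchange state (eventual consistency)."""

    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}

    def increment(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def merge(self, other):
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

    def value(self):
        return sum(self.counts.values())

# Two replicas take writes independently, then sync in either direction.
a = GCounter("pop-nyc")
b = GCounter("pop-lon")
a.increment(2)
b.increment(3)
a.merge(b)
b.merge(a)
```

Merging is commutative and idempotent, so replicas can exchange state in any order, any number of times, and still agree — the property a distributed edge storage layer needs when nodes cannot coordinate every write.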
Not to be outdone, Cloudflare (NET) is announcing some significant enhancements to their serverless solutions this week. In the blog post from Cloudflare’s CEO, he points out several advantages of Cloudflare’s serverless solution over that offered by the cloud providers. It is possible (and likely) that cloud vendors will incrementally improve their serverless solutions, but again, I posit that the independent providers like Cloudflare and Fastly will always stay one step ahead.
While I concede that cloud vendors will continue to drive innovation in the areas that they consider strategic, I find it hard to believe they can duplicate that effort across hundreds of services. Additionally, each of the large cloud vendors would need to reproduce the same level of effort in order to maintain parity due to the requirements of multi-cloud. Therefore, I believe that independent service providers will continue to deliver a more complete product and better customer experience in certain categories over time.
Network Effects
As a service reaches critical mass, it begins to benefit from network effects. By examining the behavior of users within their services, providers can glean insights that make the product better. This can take the form of fewer bugs, more granular logic or more connections with outside systems. Observing customer behavior can also influence future product enhancements or spawn brand new products.
Okta provides a good example of the power of network effects. As the largest independent provider of identity management solutions, their platform benefits from a continuous feedback cycle of more data from customers, integrations, devices and use cases. By harnessing all of these inputs, Okta is able to continually refine the efficacy of their identity solutions. Specifically, they can examine anomalous behavior from a group of users and rapidly adjust security rules to require an additional layer of authentication. Crowdstrike realizes the same benefits for their EDR/EPP solutions. These network effects contribute to an ever-increasing moat, making it harder for look-alike solutions from cloud vendors to compete.
Talent
Talented employees will generally gravitate towards the independent software providers. This is primarily because greater financial opportunity and career growth exist there. Most of the upside has been realized at Amazon, Microsoft or Google at this point. With market caps over $1T and employee counts exceeding 100k, a new employee risks getting lost in the crowd. It is easier to imagine the stock of DDOG, FSLY, OKTA, TWLO, NET, etc. continuing to increase in multiples of valuation. More importantly for ambitious employees, smaller, rapidly growing companies offer more career advancement and a sense that their contribution directly impacts the company’s success.
The cloud vendors do attract talent at the entry level and in senior management. For college grads unfamiliar with the technology landscape, a starter job at a cloud giant represents a great environment to cut their teeth. However, once they have built up a few years of experience, they tend to leave. At the top ranks, where the pay packages are lucrative, the cloud vendors can also attract talent. It’s the broad middle where the innovation happens and this tends to favor the smaller, independent companies.
Product Development Velocity
As a consequence of focus, network effects and talent, the product development velocity is generally higher at the independent software providers. Because clear incentives exist, the team will usually push hard to get releases completed. Also, the impact of a successful release on the independent software company’s success is more evident. It can even move the stock price. At the cloud vendors, product releases are recognized, but one team’s contribution can make up a small part of a long list of feature announcements.
As an example, Elastic pushes a major release every 1-2 months. While these are labeled as point releases, they often include a broad set of features. Elastic Stack version 7.8 was released on June 18. Improvements spanned their enterprise search, observability and security solutions as well as the unified Cloud offering. This followed a pretty large release in version 7.7 on May 17. For comparison, the AWS Elasticsearch release feed has 67 entries over 5 years, most of which are announcements of new availability zones, compatibility with older Elastic core releases and smaller features.
As another example, Datadog is rapidly expanding their product suite. Over the course of 2019, they launched 3 new, monetized products with Synthetics, RUM and Network monitoring. They also announced the beta of a security product and extended all monitoring to the serverless compute environment. It’s no surprise that the cloud vendors haven’t mustered a comparable product offering in observability.
Investor Take-aways
Sometimes I hear investors immediately discount an independent software provider with the assertion that a particular cloud vendor will “crush them”. I think this bias was appropriate earlier in the adoption cycle for cloud hosting, when the cloud vendors were rapidly expanding their offerings and there were clear differences in the capabilities of each. As the cloud provider landscape has matured and multi-cloud deployments are favored by most enterprises, I think the pendulum has swung towards independent service providers in a number of categories. Their neutrality, focus, network effects and engineering talent provide a foundation upon which to deliver a rapid pace of product innovation and capabilities expansion. I believe this will only continue, allowing these independent software providers to grow into significant contributors to the overall software ecosystem.
Hopefully, this post helped investors better understand the cloud hosting landscape and how large cloud vendors and independent service providers can co-exist. I have provided some examples of independent companies that have thrived in this environment and built rapidly growing businesses in individual categories. I have also surfaced several themes that appear to contribute to their success. Investors can use this framework to evaluate their own selections of independent software provider companies for their portfolio. Many of these names are covered on this blog with individual deep-dive analysis available.
Excellent post. Very informative and insightful.
In addition, I would also like to point out that most big cloud vendors are on government radar for antitrust and monopolistic practices, which will discourage them from trying to crush independent software providers.
Thanks for all the hard work on this excellent article, giving a great historic and future perspective of the cloud and its impact on enterprise software.
I am not close enough to the business, but I understand that there are also software solutions that help companies view and integrate services across the differing cloud services, such as VMware, Datadog, etc. The fact that so many companies are cross-platform would mean that these services should be much more in demand. Any thoughts on who the big winners are in this category?
Thanks for the feedback. My understanding is that there are software solutions like you mention for managing multiple cloud deployments and even balancing those with a private data center (hybrid cloud). Those services tend to operate more at the infrastructure, rather than application, level. So, I am not well-versed enough in the various providers to give a meaningful recommendation.
Datadog, however, does operate at the application level, by providing “observability”. Essentially, that represents a combination of gathering performance metrics, log data and processing traces from applications and the hardware hosting them. This has also been called application monitoring in the past. I have published a number of posts about observability providers. Datadog (DDOG) is the leader in that space for now, with Splunk (SPLK), Dynatrace (DT) and New Relic (NEWR) vying for share. Elastic (ESTC) also offers solutions for observability, but represents a broader platform that addresses many use cases around data processing and search.
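To make the "three pillars" of observability concrete, here is a minimal, purely illustrative Python sketch (standard library only — this is not Datadog's actual API or agent) showing how an application might emit a metric, structured log lines, and a trace identifier around one unit of work, with the trace id correlating all three:

```python
import logging
import time
import uuid
from contextlib import contextmanager

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("checkout-service")

# In a real system, these records would ship to an observability backend
# (e.g. via an agent); here we just collect them in memory.
metrics = []

@contextmanager
def traced(operation):
    """Wrap a unit of work: emit start/end logs, a duration metric, and a trace id."""
    trace_id = uuid.uuid4().hex  # correlates the logs and metrics for this request
    start = time.perf_counter()
    logger.info("start %s trace_id=%s", operation, trace_id)
    try:
        yield trace_id
    finally:
        duration_ms = (time.perf_counter() - start) * 1000
        metrics.append({"metric": f"{operation}.duration_ms",
                        "value": duration_ms,
                        "trace_id": trace_id})
        logger.info("end %s trace_id=%s duration_ms=%.2f",
                    operation, trace_id, duration_ms)

with traced("process_order"):
    time.sleep(0.01)  # simulated work

print(metrics[0]["metric"])  # → process_order.duration_ms
```

An observability platform's value comes from collecting these three streams at scale and letting an operator pivot between them (e.g. from a latency spike in a metric to the traces and logs of the slow requests).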
Many thanks, Peter. As always, comprehensive and well written.
Thanks for the great post
My only concern with the independent companies that are niche players is just the nature of being niche. Yes, they are not sporting a $1 trillion valuation, but their growth and optionality may also be much more limited.
What makes you comfortable that they haven't already nearly saturated their markets at this point?
Thanks for the feedback. That’s a fair question. Most of the independents I mentioned have annual revenue amounts that represent a small percentage of their estimated addressable market. Granted, TAM estimates are often inflated and in some cases require displacement of incumbents, but there does seem to be an order of magnitude of growth remaining for most. Also, annual revenue growth rates are still high, even in the latest quarter. I would expect to see a dramatic slowdown in growth if markets were saturating (putting aside the current COVID-19 induced headwinds).
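To illustrate that reasoning with back-of-envelope math (the figures below are hypothetical placeholders for illustration, not numbers from any company's filings): a provider at a $500M annual revenue run-rate against a $30B TAM estimate has penetrated well under 2% of it, and even discounting the TAM estimate several-fold still leaves substantial headroom.

```python
# Hypothetical figures for illustration only; substitute real revenue
# run-rates and TAM estimates from company filings.
annual_revenue = 500e6   # $500M run-rate
tam_estimate = 30e9      # $30B estimated addressable market

penetration = annual_revenue / tam_estimate
print(f"{penetration:.1%}")  # → 1.7%

# Even if the TAM estimate is inflated 3x, penetration stays low:
conservative_tam = tam_estimate / 3
print(f"{annual_revenue / conservative_tam:.1%}")  # → 5.0%
```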
Thanks Peter,
Informative and understandable for a non-techie, as always. My thoughts turn to Nutanix and how they fit into this picture. Given the multi-vendor situation mentioned, it would seem to me that they might have a future, unless I am fundamentally misunderstanding the nature of their business, which is entirely possible.
Thanks for the feedback. I am not that familiar with Nutanix. My understanding is that they provide an infrastructure management platform that allows enterprises to duplicate a cloud-like environment within their private data centers. This would be useful for those enterprise customers that need to keep some infrastructure on premise (either for compliance purposes or to leverage an existing hardware investment). In theory, Nutanix would compete with cloud vendors, in that they give enterprises a reason to not move onto the cloud.
As part of their solution, they do seem to duplicate many of the basic cloud infrastructure services, like virtualization, compute, storage, networking, provisioning and administration. Where their offering gets a little murky for me is in the way they provide managed software services on top of that, specifically those that enable application development. For example, they offer big data and database management capabilities. For these, both MongoDB and Elastic Stack appear to be options that their platform supports. But I can't tell whether they support only the open-source versions of these products, or allow the customer to bring their enterprise license for the latest version of both.
So, Peter, I have completed the self-assigned task of reading *all* your SSI posts since inception. My brain hurts from ingesting so much technical information that is W A Y beyond my knowledge and understanding — but you have the rare ability to explain even the most arcane items clearly and compellingly for all readers. I come away knowing far more than I ever did about all the topics you cover. I look forward to your future commentaries.
I imagine you know already all about Snowflake
Not public yet, but it filed recently for its IPO. As IPOs go, Snowflake will likely be the hottest of the hot, but it is Snowflake's technology solutions that cause me to ask: any enduring interest…?
Thank you!
You are welcome. I appreciate the feedback. Yes – I am watching Snowflake closely and will likely publish coverage after they IPO. They are interesting on a number of levels. Their core offering is a next-gen data warehouse, but they seem to have future plans to expand into other data processing segments, which might encroach on existing players in analytics and observability.
Peter, on July 27 Cloudflare announced a beta of Workers Unbound. It seems from my reading of the announcement that its technology will still be much slower than Fastly’s compute@edge beta – 5 milliseconds to load for NET vs 35 microseconds for FSLY – but my technical expertise is limited. Does it seem to you too that Workers Unbound does nothing to bridge that particular gap? Thanks very much.
Just posted a new article with my commentary on the new Workers product announcements.
This was a fantastic post. I love how you’ve framed all these companies and their roles in larger ecosystem. I’d love to hear your insights into the evolving field of cyber security for cloud if you ever had the time or inclination. Thanks again for your valuable work.
Just found your blog from a reference on another investor site. Stellar analysis! I work in this space and can very much relate to your analysis. We switched from AWS Redshift to Snowflake recently and it is a great product (comparable to GCP's BigQuery). Looking forward to your analysis on it.
Do you observe or foresee any change in AWS's strategy to keep market share?
Thanks for the feedback. On Snowflake, yes, I am watching them closely. They are clearly on a path to disrupt the data warehouse market and may extend their offerings into other generalized data processing workloads (observability, analytics, IoT, etc.).
On AWS’s strategy, I imagine they will adjust to collaborate more openly with the independent software providers. Given that GCP and Azure do, and are experiencing faster growth, AWS would likely consider a similar approach. Multi-cloud almost forces this. In terms of examples, one that popped up recently was that Amazon Kinesis Data Firehose now supports streaming data directly to MongoDB Atlas and Datadog.