Investing analysis of the software companies that power next generation digital businesses

Cloudflare Q3 2021 – Helping Build a Better Internet

In my prior post on Cloudflare in October, I highlighted several themes that are changing how compute, data storage and network connectivity are being consumed by developers and leveraged by the digital enterprises supporting them. This included a summary of product deliverables coming out of Birthday Week in September. Since then, Cloudflare has continued their trajectory of rapid product development, by completing two more Innovation Weeks packed with new announcements. They also reported Q3 earnings in early November, delivering the same consistent high revenue and customer growth to which investors have become accustomed.

In the last month, we have also seen a significant change in valuations for high growth software infrastructure companies. Cloudflare stock is down 38% from the peak price it reached following earnings. As the macro environment shifts, we may see more volatility. The market will try to reconcile what a “fair” valuation is for high growth companies, incorporating the removal of substantial government stimulus. On one hand, multiples are still above historic norms. On the other, software infrastructure in particular is demonstrating durable revenue growth rates over a longer period than other sectors. The hyperscalers provide an easy reference point, growing at 40% y/y and higher with annual run rates between $20B and $60B.

As I have discussed about software infrastructure leaders, the combination of customer additions, broadening product reach and consistent annual spend expansion is allowing these companies to extend the “law of large numbers” to a point further in the future. Compounding of high revenue growth rates over many years eventually pulls down even excessive valuation multiples. This may explain why the market has assigned a premium to Cloudflare, at least for the time being. The prospect of becoming a fourth cloud provider and the connective fabric of the Internet certainly lends some rationale to the perceived opportunity.

In this post, I review Cloudflare’s latest product announcements, analyze their Q3 quarterly results and draw conclusions about the durability of their growth going forward. I also discuss why I think Cloudflare is uniquely positioned to execute on this broader opportunity. Their network-first mindset has created architectural advantages to address challenges in application performance, data distribution, security and compliance. The Internet, after all, is nothing without a network.

Quarterly Results

Cloudflare announced Q3 earnings on November 4th. At a high level, Cloudflare delivered more of the same, maintaining the 50+% revenue growth rate that we have become accustomed to, with some notable highlights. The market responded by keeping the stock around its elevated valuation level, initially spiking to an all-time high and then settling into a slight loss by the end of the day. NET’s P/S ratio remained above a lofty 100. The fact that the stock maintained its price range at the time indicates that the market was satisfied with the results.

However, in the subsequent weeks since earnings, NET stock has been hit with a sell-off that has spanned all high growth stocks. Since the earnings release, the stock is down 33%. Further, from its closing peak of $217.25 on November 18th, it is now down 38%. The P/S ratio has dropped from a peak of 113 to 70 currently. I will provide my perspective on valuations later in the article. Suffice it to say that near term, we are on shaky ground, as the market tries to reconcile a new valuation multiple for NET in an environment of fiscal tightening and higher interest rates. While these will certainly impact the stock price, I think that the fundamentals and future growth potential for Cloudflare have not changed. Following announcements from their two recent Innovation Weeks, I think the opportunity is even greater.

Top-line Growth

Q3 revenue growth came in at 50.9% year/year. This is down slightly from Q2’s 52.8%, though revenue was up 13.1% sequentially. Also, as noted on the earnings call, accounting for a $1.9M customer renewal in Q3 2020, annual growth would have been 53.5%. Regardless, actual growth beat the company’s prior guidance by 600 basis points and analyst estimates for 45% growth. Looking forward to Q4, the preliminary revenue estimate would represent 46.5% annual growth and 7% sequential growth. This guidance is above the original Q3 2021 estimate of 44.9% growth, implying a slight acceleration of revenue growth in Q4 if Cloudflare delivers a similar-sized beat. Given that Q4 is traditionally a strong quarter, we could see more out-performance.

Cloudflare Q3 2021 Investor Presentation

If nothing else, Cloudflare’s revenue growth is consistent. They increase revenue by about 50% annually on average, with some quarters slightly higher and some dipping below that level. For the full year of 2021, I anticipate they will deliver about $655M in revenue, for growth of 52%. This is slightly above 2020’s growth of 50.2%.

As I discussed in a prior post on TAM expansion, it is conceivable that Cloudflare could continue this consistent, but elevated, revenue growth pace for many years. They have orchestrated an operating model which delivers new product offerings at a rapid pace. That is supplemented by a strong go-to-market effort that continually adds new customers and expands their spend each year. These factors combine to keep driving high revenue growth.

While the “law of large numbers” could temper this growth over time, if we look at the hyperscalers, we really don’t see this effect even at high scale. Azure grew 48% y/y in Q3 at a $35B-$40B run rate. AWS grew 39% y/y at a $64B run rate. Given that Cloudflare’s annual revenue hasn’t crossed $1B, the durability of hyperscaler growth provides evidence that Cloudflare could sustain a high rate of growth for a while.

Profitability Measures

A big upside surprise was in profitability and operating leverage. In Q3, Cloudflare increased Non-GAAP gross margin to 79.2%, up from 78.0% in Q2 and 77.3% a year ago. This gradual improvement in gross margin is being driven by Cloudflare’s continued optimization of their infrastructure and use of idle network capacity. All software products are built on top of Cloudflare’s own compute and data storage platform, called Workers. By having a serverless architecture at its core, it is easier for Cloudflare to scale up its capacity as usage increases. Additionally, because Cloudflare operates its own data centers on top of commodity hardware, they can drive more cost savings than if they were running on top of another company’s infrastructure (like the hyperscalers).

Combining strong revenue growth with these operating efficiencies, Cloudflare delivered its first positive Non-GAAP operating income of $2.2M, or a 1.3% operating margin, in Q3. This is an improvement of over 500 basis points from the -4.0% operating margin a year ago. It also led to positive Non-GAAP net income of $1.4M for the first time. This transition to Non-GAAP break-even was originally targeted for 2H2022 during the IPO in September 2019. Obviously, Cloudflare has pulled this goal in substantially through outperformance.

Going forward, Cloudflare leadership reiterated their long-term operating margin target of 20%. However, they indicated they are not in any hurry to get there. On the earnings call, leadership shared that they will “hover just below or just above breakeven likely for years to come.” This is reflected in their guidance for Q4, which calls for Non-GAAP income from operations of $(1.0)M to $0.0M.

Instead of continuing to drive operating margins upwards in the near term, Cloudflare will invest this gross profit into Sales and Marketing and Research and Development. Given the large opportunity in front of the company, I am okay with this approach and, in fact, prefer it. They have demonstrated the ability to realize operating leverage. Having hit breakeven, I like the idea of reinvesting any further improvement in profitability back into growth. Of course, that does set the expectation that Cloudflare will deliver consistent or higher revenue growth from this point forward, if they are pulling back from making further improvements in operating margins.

As I have discussed previously, Cloudflare has demonstrated a brisk pace of product development and new releases. They are entering adjacent product markets quarterly, which is rapidly expanding their TAM. Based on what I have seen over the past year, their new releases are thoughtful extensions that reflect customer demand and leverage building blocks previously added to the platform. The fact that many of these products are built on top of Workers increases their efficiency. Cloudflare is an innovation machine, extending their reach into more and more product segments.

At the same time, revenue growth hasn’t reflected this acceleration of product releases over the past year. Cloudflare has maintained annualized revenue growth between 50% to 54% during the same period. If product releases are accelerating, why aren’t we seeing a tick up in revenue growth rates? I think there are two explanations. First, we could observe that new products are likely helping to offset any slowdown in more mature offerings, like CDN or DDOS. This may explain why Cloudflare has been able to maintain revenue growth of 50%, while peer Fastly is falling back into a range of 30% growth, as they remain primarily in their core markets (content delivery, streaming media, bot mitigation, application security).

Second, their enterprise sales motion has taken time to ramp up. Cloudflare is relatively new to enterprise sales, versus peers in network and security services, like Zscaler and Crowdstrike. However, they are investing heavily and presumably are building commensurate capabilities. Coupled with an ever expanding product offering, it’s likely that Cloudflare can start driving outsized revenue growth from these larger customers. Progress is already being reflected in customer activity metrics around large customer growth, DBNR and average spend per customer. I’ll delve into these metrics further below.

Due to their gross margin efficiencies, Cloudflare grew its Non-GAAP gross profit by 54.8% year/year, which was a bit above revenue growth. Even with the substantial improvement in operating margin, Cloudflare still increased their allocations to S&M and R&D year/year. These increases were roughly in proportion to revenue growth. On a GAAP basis, R&D spend increased by 51.4% year/year and S&M spend increased by 53.4%. For a high growth company, I like to see increases in this spend that are proportional to revenue growth. Looking forward, if Cloudflare will maintain operating margin at roughly break-even, then these levels of spend increase should be sustainable or even increase slightly.

  • R&D = 27.2% of revenue (versus 27.1% in Q3 2020)
  • S&M = 49.9% of revenue (versus 49.0% in Q3 2020)
  • G&A = 16.6% of revenue (versus 18.8% in Q3 2020)

Additionally, scale is reflected in G&A spend, which only increased 16.6% year/year on a GAAP basis, reducing its relative proportion of revenue by over 2 percentage points between 2020 and 2021. You can see a full summary view of these changes over the past several years in the following slide from the Q3 investor presentation. The relative percentages are reported on a Non-GAAP basis.

Cloudflare Q3 2021 Investor Presentation

Cloudflare continues to hire at a brisk pace. They ended Q3 with 2,240 employees. This is up 9.3% sequentially from Q2 and 32.0% year/year. The CEO often highlights how competitive hiring at Cloudflare can be, with acceptance rates below 5% of applicants.

Customer Activity

As we consider Cloudflare’s investment into Sales and Marketing and emphasis on building out an enterprise sales motion, we are starting to see the benefits play out in customer activity metrics. Specifically, growth in large customers (which Cloudflare defines as those spending over $100k in annualized revenue) was 71% in Q3, increasing from 736 a year ago to 1,260 this year. Cloudflare added 172 customers of this size over Q2’s total of 1,088, for sequential growth of 15.8%. To demonstrate the acceleration, Q2 added 143 of these customers over Q1, representing sequential growth of 15.1%.

Cloudflare Q3 2021 Investor Presentation

While Cloudflare is rapidly growing large customer counts, they are maintaining strong growth in total paying customers. They ended Q3 with 132.4k paying customers, which is up 31.1% annually from 101.0k in Q3 2020. This number increased 4.5% sequentially from 126.7k in Q2. Q2 experienced strong sequential growth of 6.3%, so Q3’s incremental growth in total paying customers was down slightly. This can jump around a bit, but we will want to ensure that Cloudflare continues to increase the total number of paying customers by 25% annually. Still, with a paying customer base well over 100k, they have a large stable of customers to upsell into the large customer category over time. The relative emphasis on large customers likely reflects their shift toward enterprise selling, in order to generate more revenue from existing paying customers.

This increase in average customer spend year/year for existing customers is captured by Cloudflare’s dollar-based net retention rate (DBNR), which measures the change in customer spend over a one year period. This is calculated by identifying the cohort of paying customers a year ago and then measuring spend for the same cohort a year later. Cloudflare’s value includes customers that have churned, meaning their revenue for the current period would just be $0. This value has been ticking up gradually for Cloudflare, increasing to 124% in Q3 2021, up from 116% a year ago. It was 124% in Q2 and 123% in Q1.

This metric means that the cohort of customers increased their spend with Cloudflare by 24% year/year on average. As Cloudflare increases their product breadth and focuses on large customers, we should see this value continue to increase slowly over time. For comparison, Datadog reports this value as being above 130%, Zscaler at 128% and Crowdstrike above 120%. With the cross-sell opportunity for Cloudflare’s emerging enterprise customer cohort, I think they can continue to increase this value over time.
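
To make the mechanics concrete, here is a minimal sketch of the net retention calculation described above, using entirely hypothetical spend figures. The cohort is fixed as of a year ago, churned customers stay in the cohort at $0, and the ratio of current spend to base-period spend yields the retention rate.

```typescript
// Illustrative only: compute a dollar-based net retention figure for a cohort.
// Spend numbers are hypothetical; churned customers remain in the cohort at $0,
// mirroring the methodology described above.
interface CohortCustomer {
  spendYearAgo: number;  // annualized spend in the base period
  spendCurrent: number;  // annualized spend today (0 if churned)
}

function dollarBasedNetRetention(cohort: CohortCustomer[]): number {
  const base = cohort.reduce((sum, c) => sum + c.spendYearAgo, 0);
  const current = cohort.reduce((sum, c) => sum + c.spendCurrent, 0);
  return (current / base) * 100;
}

// Example: two expanding customers and one churned customer.
const cohort: CohortCustomer[] = [
  { spendYearAgo: 100_000, spendCurrent: 140_000 },
  { spendYearAgo: 50_000, spendCurrent: 60_000 },
  { spendYearAgo: 30_000, spendCurrent: 0 }, // churned, counted as $0
];

console.log(dollarBasedNetRetention(cohort).toFixed(1)); // "111.1"
```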

Cloudflare Key Business Metrics, Supplemental Financial Information from Q4 2019 to Q3 2021

In Cloudflare’s Supplemental Financial Information report for Q3 2021, they shared these metrics from Q4 2019 (leftmost column) to Q3 2021 (rightmost column). What is nice to see is acceleration in growth for both large customers and DBNR from 2019 to the current period. Even overall paying customer growth has increased over the two-year period.

This view is also supported by the increase in average customer spend. On the Q3 earnings call, Cloudflare leadership reported that their average customer now spends over $100k annually, which is up from an average of $72k when they went public in September 2019. This also underscores the success in selling to larger customers, as we can assume that some customers are spending many times $100k.

Several of the customer wins highlighted on the earnings call reflect exactly this. I’ll provide a sample of the call-outs below to illustrate the opportunity to substantially increase customer spend.

| Company | Deal Size | Products Used |
| --- | --- | --- |
| F500 Pharma | $600k expansion, $2M total contract | Cloudflare Gateway / Zero Trust Suite |
| F500 Manufacturer | $500k 2-year contract | Cloudflare One for 50k employees |
| EU Software Corp | $600k 3-year contract | Zero Trust Suite |
| F500 Retailer | $200k annually | Workers / Durable Objects |
| MidEast Financial | $600k 3-year contract | Variety |
| US Airline | $180k annually | Cloudflare One |
| Social Network | $1M annually (Oct) | Zero Trust |
| Video Conferencing | $8M annually (Oct) | DDOS, Zero Trust |

Cloudflare Q3 2021 Deal Highlights

Going back a few quarters, we see a similar pattern. Contract sizes are getting very large and Cloudflare will likely begin reporting $1M+ customers at some point. Additionally, it’s worth noting that relatively new product offerings are starting to be included in customer highlights. Recall that Cloudflare One was launched in October 2020. Durable Objects was introduced in September 2020 and went GA just a month ago on November 15th. This is now being referenced in new customer wins with sizable contract value.

These references provide some anecdotal evidence that Cloudflare’s rapid product development pipeline is generating business impact. I think we can expect the same effect for new products launched this year. As I discussed in a prior post, Birthday Week in September introduced a number of new products with significant addressable markets. Highlights included Email Security, R2 Storage, RTC, Live Video Streaming and even Web3 Gateways. I suspect that customer win announcements in future quarters will begin to reference these products, following the same progression as Cloudflare One and Zero Trust.

The combination of enterprise customer upsell with an increased set of products to offer them will drive a powerful expansion motion. As an example, Cloudflare added email security to their product offering during Birthday Week in September, in response to customer requests. Many enterprise customers either have an existing email security product, or would need one. It would be very easy for these customers to supplement their Cloudflare implementation with email security. Their transition would be particularly seamless if they are already connected to the Cloudflare network through Cloudflare One.

In addition to growth in new products, Cloudflare’s legacy products are still seeing demand. I was on a due diligence call recently with a rapidly growing start-up that provides e-commerce services for retailers in the EU. They highlighted the lightning-fast load times of their customers’ web sites as a competitive advantage. For their tech stack, the team discussed their application delivery architecture that leverages a Jamstack design. In this model, a pre-rendered HTML page is deployed onto a CDN, which then uses JavaScript and remote APIs to dynamically retrieve content. They chose Cloudflare as the content delivery solution (over AWS CloudFront), and will likely migrate the back-end APIs to Workers over time.

As another example, DDOS is a very mature market, with vendor offerings at least 10 years old. Yet, when security and network continuity are necessary at extremely high scale, Cloudflare continues to shine. While DDOS protection was one of Cloudflare’s first products, the recent proliferation of attacks is increasing demand. Leading voice provider Bandwidth (BAND) suffered a notable outage in late September due to an ongoing DDOS attack. The attack was unique in both the volume of traffic and the variety of protocols utilized to target VoIP services, relative to standard web site attacks that primarily traverse HTTP.

This attack was unique, and not only was it massive in its size, it was specifically on a dimension using UDP packets and fragmented UDP packets which had never been seen before. We’re only seeing very recently in our space with voice, you need signaling and media and UDP fragments we can’t just block wholesale in the way that you can with an HTTP attack. So, we had a very good robust solution for attacks that had been seen in the past.

So, when the attack started, we had a network-based best of breed, awesome solution by a great vendor in the industry and it worked well for the first 48 hours of the running gun battle. After that, there was a different dimension to the attack on a different protocol, different ports, different origins and you’re talking about attacks originating in a different nation state, transiting through a partner that doesn’t know any better and then hitting all your IP ranges with different flavors of traffic in different ways. And so, we migrated from the original defense that we had set up that was working and used Cloudflare thereafter. They were superlative in working with us, rallying with us.

We shared, in real time aspects of the vector of attack or the changing dimension and they would adjust with us their solution in real time and it was a combined effort and one that should be celebrated and I think resulted in many others in the voice industry probably becoming Cloudflare customers.

Bandwidth (BAND) CEO, Q3 2021 Earnings Call

This is an amazing testament to Cloudflare from a completely unrelated source. Not only did they displace an existing vendor, but are likely gaining more customers as a result of their success in handling this unique attack.

Product Development

In my prior post on Cloudflare, I covered Birthday Week which took place in late September. That product week introduced a number of new offerings, including Email Security, R2 Storage, Cloudflare for Offices, Real Time Communications and Web3 Gateways. Each of these releases represents a substantial increase in capabilities to drive penetration in existing product markets or entry into completely new ones. For the new markets, I optimistically estimated that Cloudflare added about $100B of incremental TAM. The deliverables for Birthday Week 2021 eclipsed those of Birthday Week 2020 by a large margin.

As we consider Cloudflare’s elevated (yes, I know it’s unusually high) valuation, the pace of innovation and perception of future TAM are the only factors that would explain its premium relative to peers. Measured purely on a revenue growth basis, Cloudflare’s roughly 50% annual growth wouldn’t justify a P/S over 100. Peer Snowflake recently delivered 110% annual revenue growth and has maintained a similar P/S ratio. Even Datadog just delivered a phenomenal quarter with 75% revenue growth and increasing profitability. Yet, its P/S ratio is much lower, bouncing around 60.

I can’t attribute all of Cloudflare’s premium to market short-sightedness. There has to be some reason behind the willingness of so many retail and institutional investors to “overpay” for NET. While I won’t defend the absolute valuation, I think Cloudflare deserves a premium for several reasons, all of which revolve around their product and architectural advantages. Some of these also provide a competitive moat, which will support durable growth over a longer period, in spite of overlapping offerings from other software infrastructure providers.

  • Perception of TAM. As part of Birthday Week, CEO Matthew Prince revealed aspirations for Cloudflare to become the fourth public cloud. As Cloudflare continues adding product extensions and entering new markets, investors can only conclude that it will be a much bigger company in the future. This goes far beyond its origins as a CDN and DDOS provider.
  • Re-use of Platform Primitives. As the Cloudflare engineering team builds out the platform, they think in terms of re-usable platform primitives and composability. Each new service becomes a building block for a more sophisticated product offering in the future. This reduces development time for each new innovation, as Cloudflare software engineers can pull production-ready components off the internal shelf to assemble into the next product offering. Workers is the cornerstone of most new application offerings. Durable Objects provides the mapping layer for R2. RTC uses Cloudflare for Teams to manage user access permissions.
  • Pace of Product Innovation. As a consequence of primitive re-use and Cloudflare’s culture, the pace of product development is accelerating. Each year, Cloudflare is increasing the amount of innovation occurring. A simple comparison of Birthday Week 2020 to 2021 illustrates this fact. Cloudflare’s product footprint today is significantly broader than that of December 2020. This implies that Cloudflare will have an even more expansive product offering in 2022. Cloudflare management often mentions that new customer wins are influenced by the perception that Cloudflare is an innovator. Additionally, their Innovation Weeks provide a large source of organic incoming leads. During Birthday Week in September, they reported a 10x increase in organic customer requests.
  • Free User Base. Cloudflare has around 4M total customers, of which only about 3% are paying. While this sounds like it would create terrible unit economics, Cloudflare actually leverages these free users for significant value. First, these users generate relatively low traffic levels. Since many of Cloudflare’s costs are fixed, servicing the free users doesn’t create much incremental cost for Cloudflare.
    • In exchange for free usage, the Cloudflare product team uses the free tier for testing of new product offerings. These users can tolerate some bugs or service issues. Also, they provide valuable feedback to help improve the new product offering. This harnessing of the free user tier saves Cloudflare material costs in QA resources, which normally would be needed to stress test new products.
    • Free users are often individuals managing their own personal hosting or network connectivity on Cloudflare. These individuals usually have a professional job at a company where they can make a technology purchase recommendation. Familiarity with Cloudflare products brings Cloudflare into the vendor consideration without requiring sales and marketing spend.
    • Free users provide the network traffic to justify new network peering relationships. It can be difficult for a new market entrant to access an ISP’s customers through a peering relationship. However, with millions of free users, it is easy for Cloudflare to demonstrate to a small country that many of its citizens are already Cloudflare users.
    • Free user traffic provides valuable data to optimize traffic routing, security response and new product ideas. Cloudflare’s security products are made more effective as traffic flows increase.
  • Owned PoPs and Network Infrastructure. Cloudflare owns and operates all of its 250+ PoPs. It also controls the network traffic between PoPs and has access to all of its equipment down to the hardware level. This allows them to fully optimize network performance and their compute/storage layers. This insulates Cloudflare from potential competitors. Only a couple of other companies operate a large network of global PoPs (Zscaler, Fastly, Akamai), outside of the hyperscalers. Other software service providers (runtimes, distributed data, security services) are layered over hyperscaler data centers. This limits their control, customization and footprint.
  • Commodity Equipment. Every server in a Cloudflare PoP is alike. It is built on commodity hardware, assembled just for Cloudflare to their specifications. Every server runs all Cloudflare services. This means they don’t need to provision separate server hardware for network, compute and data. It also means that every PoP can offer the same set of product offerings in parallel.

As it relates to becoming the “fourth public cloud”, I don’t think Cloudflare has aspirations to displace AWS, Azure or GCP. These hyperscalers have enormous amounts of capital invested in providing solutions for large scale, centralized compute and storage. They operate huge data centers in key locations across the globe and maintain high speed, private network connections between them. This kind of centralized compute and storage will always have a use for powering software applications and big data processing.

Cloudflare is taking a different tack, focusing on network connectivity. The networking of compute nodes is arguably what makes the modern Internet useful. Cloudflare aspires to provide the underlying “fabric” of the Internet, which means delivering fast, reliable, dynamic and secure connectivity between hyperscalers, private data centers, enterprises and end users. Cloudflare provides onramps to the network, enhanced with dynamic traffic routing and security. This is enabled by a globally distributed network of over 250 PoPs located within 50ms of 95% of the world’s population. With Cloudflare for Offices, the number of locations will balloon into the thousands.

This also means that Cloudflare is somewhat insulated from changes in how digital experiences are financed, built and delivered. Even as Web3 and blockchain have the potential to disrupt some “Web2” incumbents in areas such as data storage and distributed compute, this next generation of applications still runs on the Internet. They require a reliable and secure network to connect all of their decentralized nodes. Cloudflare is already creating new products that cater to the evolving Web3 developer ecosystem. They provide services to some of the leading entities in the crypto space, including a number of the largest exchanges.

While Cloudflare is foundationally a networking company with a globally distributed mesh of PoPs, they are also well-positioned to build software services on top of it. In order to make the network more useful, Cloudflare offers developers a platform that includes compute resources and data storage, encapsulated in the Workers product. This isn’t an end unto itself, but rather the means. With an easy to use compute and data storage platform, developers can build the next wave of distributed applications that take advantage of Cloudflare’s network to deliver digital experiences to end users. This doesn’t necessarily displace AWS EC2, RDS or S3. Rather, it provides another way to build digital experiences delivered over the Internet.

A useful analogy for investors might be to consider a highway transportation system for shipping. This largely exists to connect large population centers (cities) to each other and to provide access to the segment of the population that lives in rural areas. Within the cities, there are many large factories and warehouses (equivalent to data centers owned by the hyperscalers). Each city may have different combinations of the hyperscalers with a presence in it. The hyperscalers may also have their own private roads between their data centers, which are not available to the public.

In this analogy, Cloudflare’s aspiration is to maintain the public roadways linking the cities. Their PoPs are like service stations and shipping hubs located around the cities and along the roadways. PoPs also provide entry onto the roadways, providing local access to anyone in a rural area. In order to direct, enhance and protect the shipping activity being conducted across the roadways, Cloudflare PoPs house compute resources and data storage. These might be used to support activities like staging supplies close to the cities (CDN), checking the contents and permissions of roadway traffic (Zero Trust), making routing decisions for traffic or even altering the shipping contents to best meet the needs of the cities (Workers). In this analogy, a lot of activity is still concentrated in the cities. This would all be handled by the data centers maintained by the hyperscalers.

With this set-up, Cloudflare is ending the year with two final Innovation Weeks. These were announced as part of the Q3 earnings call and transpired over the following month. Like prior product weeks, both were packed with announcements and further demonstrated Cloudflare’s rapid pace of product development.

Full Stack Week

Cloudflare’s Full Stack Week ran from November 15 – 19th. It started on Sunday with the usual summary blog post of what we could expect. Full Stack Week focused on providing developers with capabilities and tooling to build distributed applications on Cloudflare’s network. Cloudflare’s compute capabilities are unique in that application code is deployed to every one of Cloudflare’s 250+ PoPs. The code runs in a serverless mode, in parallel, for any user accessing it across the globe. While centralized hyperscaler application deployments direct user requests to the single data center housing that application’s code, any Cloudflare PoP can field a user’s request. Cloudflare’s network will usually select the geographically closest one. This makes for a much faster and more reliable response. It also allows for extreme scalability, as surges in traffic are distributed across many small data centers, versus all getting directed to one and relying on auto-scaling within it.
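
As a simple illustration of this model, here is a minimal sketch of a Worker written in module syntax. The route, the response shape and the use of the `request.cf.colo` field (which Cloudflare populates with the code of the PoP serving the request) are my own illustration; the point is that this single handler is what runs, in parallel, at every PoP.

```typescript
// Minimal Worker sketch: the same handler is deployed to every Cloudflare PoP
// and runs serverlessly wherever the request lands. Route and payload are illustrative.
export default {
  async fetch(request: Request & { cf?: { colo?: string } }): Promise<Response> {
    const { pathname } = new URL(request.url);

    if (pathname === "/api/hello") {
      // request.cf carries Cloudflare-provided metadata, including the PoP (colo) code.
      const body = { message: "hello", servedFrom: request.cf?.colo ?? "unknown" };
      return new Response(JSON.stringify(body), {
        headers: { "content-type": "application/json" },
      });
    }

    return new Response("Not found", { status: 404 });
  },
};
```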

However, a distributed, serverless runtime introduces more complexity to support conventional application development patterns. Data storage is decentralized, creating challenges in strong consistency and state management. A distributed runtime requires a new set of developer tools for writing, testing and deploying code. It also introduces new patterns for monitoring applications and tracking errors. Finally, Cloudflare can’t reproduce every third-party service and needs to support integrations with common application functions like payment gateways or code repositories.

There was a lot packed into Full Stack Week. For a full list of everything that transpired, readers can check out the summary page including all CloudflareTV clips and blog posts. I will cover the most impactful announcements from the week below.

Data Storage Improvements

One of the key considerations for making a full-featured developer platform is to support multiple types of data storage. In a conventional, centralized hosting environment, this can include access to a key-value store, unstructured data, an object store, a relational database and other types of non-relational data stores. Cloudflare has been rapidly building out their data storage capabilities along similar lines. They started with a simple KV store, then added Durable Objects and most recently R2. They also supported integrations with two distributed data storage networks, Fauna and Macrometa.

The architectural advantage for Cloudflare and other distributed network compute providers (Fastly, Akamai) is that the runtime can be located in every PoP, providing close geographic proximity to every user on the planet. Since the runtime is serverless and multi-tenant, code runs in parallel from every location, versus being centralized into one or more data centers. This architecture provides a huge advantage over a centralized topology in terms of performance and scale.

However, the use cases for “edge” compute (edge meaning not centralized) will be extremely limited without data storage. In this case, edge compute would be relegated to a few stateless use cases, like A/B testing or routing decisions. With a simple key-value store, the developer could add some basic user personalization use cases, like a shopping cart or authentication. However, the developer still wouldn’t be able to address the entire range of fully-featured, multi-user applications that rely on a distributed, persistent data store. It would be like building a very fast car that has only one passenger seat and no trunk. This is why Cloudflare continues to iterate on data storage options, while other distributed serverless options seem satisfied with just a KV store.
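
For reference, here is roughly what that “basic personalization” tier looks like in practice: a sketch of a Worker reading a shopping cart from Workers KV. The `SESSIONS` namespace binding and the cart key format are hypothetical.

```typescript
// Sketch of the kind of basic personalization a KV store enables at the edge.
// SESSIONS is a hypothetical KV namespace binding configured in wrangler.toml.
export default {
  async fetch(request: Request, env: { SESSIONS: KVNamespace }): Promise<Response> {
    const sessionId = request.headers.get("x-session-id") ?? "anonymous";

    // Eventually-consistent read: fine for a cart preview, not for inventory counts.
    const cart = (await env.SESSIONS.get(`cart:${sessionId}`, "json")) ?? [];

    return new Response(JSON.stringify({ sessionId, cart }), {
      headers: { "content-type": "application/json" },
    });
  },
};
```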

With Full Stack Week, Cloudflare introduced two new data storage integrations, with MongoDB and Prisma. MongoDB provides a document-oriented database that is popular with developers. Through its cloud-based Atlas offering, developers can create database clusters on any hyperscaler and share data between them. By importing MongoDB’s Realm SDK into the code running on a Worker, developers can connect to and query data stored on a MongoDB Atlas cluster. Realm is commonly used for facilitating data connections for mobile apps. By running that same data access from Worker instances on the closest Cloudflare PoP, Cloudflare can help developers deliver a faster experience for a web site that has a MongoDB back-end.
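
A rough sketch of what that integration looks like follows, based on the pattern Cloudflare and MongoDB describe: the Realm Web SDK is imported into a Worker, authenticates against Atlas, and queries a collection. The app ID, API key secret, and database and collection names are all hypothetical.

```typescript
import * as Realm from "realm-web";

// Hypothetical identifiers: REALM_APP_ID and REALM_API_KEY would be configured
// as Worker secrets, and "store"/"products" are illustrative names.
export default {
  async fetch(request: Request, env: { REALM_APP_ID: string; REALM_API_KEY: string }) {
    const app = new Realm.App({ id: env.REALM_APP_ID });

    // Authenticate against MongoDB Realm with an API key, then query Atlas.
    const credentials = Realm.Credentials.apiKey(env.REALM_API_KEY);
    const user = await app.logIn(credentials);
    const client = user.mongoClient("mongodb-atlas");
    const products = client.db("store").collection("products");

    const featured = await products.find({ featured: true });
    return new Response(JSON.stringify(featured), {
      headers: { "content-type": "application/json" },
    });
  },
};
```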

Prisma provides an object-relational mapping (ORM) tool that abstracts a lot of the complexity of connecting to databases. Developers can build their applications faster and make fewer errors by utilizing an ORM. Prisma can interface with most popular databases, including MySQL, PostgreSQL, SQL Server and SQLite. In this implementation, those relational databases would be hosted on a cloud provider. Similar to MongoDB, Prisma provides a client that can be embedded into a Worker script to facilitate data model creation and access.
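
Below is an illustrative sketch of the Prisma client API that sits behind such an integration. The `user` model and its fields are assumed to be defined in a schema.prisma file, and wiring this into a Worker relies on Prisma’s proxy tooling rather than a direct database socket, so treat this purely as the shape of the developer experience rather than a Workers-specific implementation.

```typescript
import { PrismaClient } from "@prisma/client";

// Illustrative only: `user` and `createdAt` are assumed to exist in the schema.
const prisma = new PrismaClient();

async function recentUsers() {
  // The ORM generates typed query methods from the schema, so this call is
  // checked at compile time against the data model.
  return prisma.user.findMany({
    where: { createdAt: { gte: new Date(Date.now() - 7 * 24 * 60 * 60 * 1000) } },
    orderBy: { createdAt: "desc" },
    take: 10,
  });
}

recentUsers().then((users) => console.log(users.length));
```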

As another data enhancement during Full Stack Week, Cloudflare brought Durable Objects to GA. Durable Objects was announced a little over a year ago and moved to open beta in March 2021. Durable Objects is a very powerful data storage primitive that brings strong consistency to applications. While the existing KV store provides a very fast cache, it doesn’t guarantee consistency when multiple users write to a key at once. Durable Objects provides a way to store a data value that can be referenced by multiple users, yet remain strongly consistent. This is useful for data values like the number of tickets remaining for a popular concert, where it’s important that requests are processed in the order they are received and all clients retrieve the correct value.

Since bringing Durable Objects to open beta, Cloudflare has added several new capabilities. First, they made object access much more scalable. Durable Objects can now serve hundreds of thousands of requests per second across multiple objects, and hundreds of requests per second on a single object. They added Jurisdictional Restrictions, which allows developers to tag a Durable Object with a geographic region. Cloudflare will keep that object’s data within the physical region, adhering to data localization requirements. Finally, they introduced a caching layer and protections against race conditions. These further support scalability.
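
To ground the ticket-counter example above, here is a minimal sketch of a Durable Object and the Worker that routes to it. The `TICKETS` binding, object name, routes and starting inventory are hypothetical, and the jurisdiction comment simply mirrors the Jurisdictional Restrictions feature described above.

```typescript
// Sketch of the "tickets remaining" example as a Durable Object.
export class TicketCounter {
  constructor(private state: DurableObjectState) {}

  async fetch(request: Request): Promise<Response> {
    // Requests to a single object are processed in order, so two buyers
    // cannot both claim the last ticket.
    let remaining = (await this.state.storage.get<number>("remaining")) ?? 1000;
    if (new URL(request.url).pathname === "/buy" && remaining > 0) {
      remaining -= 1;
      await this.state.storage.put("remaining", remaining);
    }
    return new Response(JSON.stringify({ remaining }));
  }
}

export default {
  async fetch(request: Request, env: { TICKETS: DurableObjectNamespace }) {
    // One object per concert; every PoP routes to the same instance.
    const id = env.TICKETS.idFromName("concert-123");
    // With Jurisdictional Restrictions, an EU-pinned object could instead be
    // created via env.TICKETS.newUniqueId({ jurisdiction: "eu" }).
    return env.TICKETS.get(id).fetch(request);
  },
};
```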

Lastly, Cloudflare provided a hint about a future service that could become a very substantial product offering. Transferring data from a remote data store to a client application or between compute nodes requires a secure network connection. Often, developers are challenged to set these up, ensuring that open connections over the Internet are protected, reliable and secure. Setting up private, secure network connections is what Cloudflare already does with their Tunnels product, which uses cloudflared to create a secure network tunnel between a client and Cloudflare’s network.

Cloudflare Data Connectors, Cloudflare Blog

What would be interesting is applying these capabilities to launch a new product that provides data connectors, allowing developers to connect an application running in any hosting environment to a remote data store somewhere else on the Internet. Just like Cloudflare for Teams provides this secure connectivity for users to their enterprise applications and data centers, Cloudflare could apply the technology to construct a secure tunnel from any application runtime to its remote data store. This has large implications for future distributed applications, like IoT, autonomous devices and decentralized apps (Dapps). The Cloudflare team teased some of the implications in a blog post.

Just as Cloudflare started by providing security, performance, and reliability for customer’s websites, we’re excited about a future where Cloudflare manages database connections, handles replication of data across cloud providers and provides low-latency access to data globally.

Our position in the network layer of the stack makes providing performance, security benefits and extremely reduced egress costs to global databases all possible realities. To do so, we’ll repurpose the HTTP to TCP proxy service that we’ve currently built and run it for developers as a connection pooling service, managing connections to their databases on their behalf.

Finally, our network makes caching data and making it accessible globally at low latency possible. Once we have connections back to your data, making it globally accessible in Cloudflare’s network will unlock fundamentally new architectures for distributed data.

Cloudflare Blog, November 2021

Investors can think of this like SASE and Zero Trust, but applied to secure transmission of data between applications over the Internet. Beyond just facilitating a secure connection, Cloudflare could offer caching and other value add services. These capabilities would also be useful for enterprise customers who need to move data between data centers or hyperscalers, in theory reducing egress costs by sending the data over Cloudflare’s network. Finally, combining localized data caching and processing with routing logic built on Workers, Cloudflare could begin to reproduce capabilities offered by popular data queuing, pub-sub and even data fabric services. This would be akin to products like RabbitMQ or Kafka, but over the open Internet versus on an internal network.

Worker Services

On Day 2, Cloudflare introduced what I think could have the largest long-term impact on the opportunity for Cloudflare and distributed applications in general. They introduced Worker Services, which provides another large building block for developers assembling distributed applications. The current incarnation of Workers revolves around single scripts, where all functionality is bundled into one instance with one externally facing interface and one codebase. This is akin to the legacy software application convention of a “monolith”, where engineering departments all worked in the same code base and had to coordinate releases across multiple teams. This design worked okay for small teams, but quickly bogged down the development pace as the organization grew.

These constraints of a large monolithic codebase sparked the advent of micro-services. A micro-services architecture involves breaking the monolith into functional parts, each with its own smaller, self-contained codebase, hosting environment and release cycle. In a typical Internet application, like an e-commerce app, functionality like user profile management, product search, check-out and communications can all be separate micro-services. These communicate with each other through open API interfaces, often over HTTP.

With Worker Services, Cloudflare is adding support for a micro-services architecture. Every Worker script can now be deployed into its own runtime environment. This includes easily configurable environment variables, hostnames, DNS and SSL. Additionally, services can be deployed into multiple environments for development, testing and production, supporting a standard continuous integration and deployment pipeline. Services are also versioned on each deployment, making it easy to rollback to the prior version if the latest instance introduces production issues.

Beyond easing developer operations and adding scalability, Services adds secure “composability” to the mix. Each service can easily reference and invoke another, through a new convention called a Service Binding. A service binding enables developers to send HTTP requests to another service, without those requests going over the Internet. That allows developers to invoke other Workers directly from the code in a running Service. Service bindings open up a whole new level of composability. 

This provides an easy way for developers to leverage functionality in existing services as they build new applications, saving time and speeding up cycles. Because of Cloudflare’s globally distributed, serverless architecture, these services are effectively self-referencing, meaning that all Worker instances run all services, so a call from one service to another happens within the same runtime. This is a very powerful concept which has huge implications for security and scalability.

Cloudflare Blog Post, November 2021
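
Here is a minimal sketch of what a Service Binding looks like from the calling Worker’s side. The `AUTH` binding, the internal hostname and the verification route are all hypothetical; the key point is that `env.AUTH.fetch()` invokes the other Worker directly rather than making a round trip over the public Internet.

```typescript
// Sketch of a gateway Worker calling another Worker through a Service Binding.
// AUTH is a hypothetical binding to a separate "auth-service" Worker, declared
// in wrangler.toml (services = [{ binding = "AUTH", service = "auth-service" }]).
export default {
  async fetch(request: Request, env: { AUTH: Fetcher }): Promise<Response> {
    // This looks like an HTTP call, but it never leaves Cloudflare's runtime:
    // the bound service is invoked directly, with no hop over the public Internet.
    const authResponse = await env.AUTH.fetch(
      new Request("https://auth.internal/verify", {
        headers: { authorization: request.headers.get("authorization") ?? "" },
      })
    );

    if (authResponse.status !== 200) {
      return new Response("Unauthorized", { status: 401 });
    }
    return new Response("Hello from the gateway service");
  },
};
```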

The reason for this advantage has to do with the standard way of hosting a classic service-oriented architecture. That involves provisioning a set of servers in one cloud location, loading the micro-service code, setting up environment variables and configuring security permissions to protect the service from public traffic. Calls between two services will travel over the network, either within a private data center, or across the Internet. In either case, that network hop introduces latency and the potential for errors through dropped packets.

With Cloudflare Services, all these steps are automated and network risks are mitigated. Each service can reference another service within the same virtual environment, eliminating the need to make HTTP calls over the Internet. Service Bindings make it easy to tie services together and minimize latency as the “travel time” is reduced to the Worker instance itself. Combined with a backplane of distributed data through Durable Objects, each service can maintain its own data store, or share data with other services across an application.

Taken to their next logical step, Services have other implications. They resemble smart contracts on a distributed blockchain network, with the same advantages of composability, resilience and security. The only difference is in access permissions, where most developers would restrict access to their applications or user audience. However, it is conceivable that toggling access permissions to “everyone” would achieve a similar outcome.

Of course, because Cloudflare engineers use the Workers platform internally to build new product offerings, they will realize the most productivity benefits from Services and composability. This notion of building “primitives” for developers is very powerful. It further highlights how Cloudflare is dogfooding their own platform and helps explain the rapid product development pace. If Cloudflare engineers can re-use their own software infrastructure components to build new features, then delivery cycles will be dramatically sped up.

Services are available currently for any developer with a Workers account. Cloudflare already wrapped existing scripts with the services convention and linked them to a production environment. The developer dashboard now includes the configuration parameters and other controls to deploy Services in whatever combinations desired.

Developer Tooling

There were a few releases tied to enhanced tooling, which I will briefly highlight below.

  • Wrangler 2.0. Wrangler is Cloudflare’s tool for managing the developer environment. This version provides some new capabilities for deploying Worker code, debugging and running Worker instances in a local environment.
  • Images. Cloudflare Images provides a single service for developers and content producers to store, resize, optimize and serve images for their applications. With Full Stack Week, Cloudflare added support for the AVIF format (better compression), blurring (to preview an image without displaying full resolution), support for custom domains (to improve download performance) and integration with their Stream product.
  • More Unbound. When Cloudflare introduced their Workers Unbound offering in July 2020, the idea was to extend the usability of Workers by lengthening allowed execution times. They are expanding those constraints to address even broader use cases. Developers can now run scripts for up to 15 minutes of execution time. This is suitable for heavier batch processing, like data analytics. They have also increased the number of scripts per account to 100, which can be magnified further by leveraging Services. Finally, Cloudflare has dropped all data egress costs from the Workers Unbound package. This implies that developers could perform heavy data aggregation functions locally on Cloudflare Workers and then forward the processed data sets to a central data warehouse to aggregate and mine. I think this has implications for future potential products, like supporting data queues or ETL pipelines.
  • Stream Player Improvements. Cloudflare announced several incremental improvements to their video stream player to improve the video experience and allow customers to add customizations. These improvements included deep links, custom colors, localized captions and a new Embed tab.
  • Native Stripe Support. Prior to Full Stack Week, developers could integrate with Stripe to process payments for their distributed applications, but had to do so through Stripe’s API interface. In most hosted JavaScript environments, developers can simply import the Stripe SDK and make calls out to Stripe services directly from their code. Cloudflare added native support for Stripe’s SDK to the Workers environment. This allows developers to simply configure their Stripe API key within Workers directly and call out to Stripe for payment processing.
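
As a brief sketch of the Stripe pattern described in the last bullet above, the snippet below creates a payment intent from inside a Worker. The `STRIPE_API_KEY` secret and the amount are hypothetical; pointing the SDK at a fetch-based HTTP client is what makes it work in the Workers runtime.

```typescript
import Stripe from "stripe";

// Illustrative only: STRIPE_API_KEY is a hypothetical Worker secret.
export default {
  async fetch(request: Request, env: { STRIPE_API_KEY: string }): Promise<Response> {
    const stripe = new Stripe(env.STRIPE_API_KEY, {
      apiVersion: "2020-08-27",
      // Workers have no Node HTTP stack, so the SDK is pointed at fetch().
      httpClient: Stripe.createFetchHttpClient(),
    });

    const paymentIntent = await stripe.paymentIntents.create({
      amount: 2000, // $20.00 in cents
      currency: "usd",
    });

    return new Response(JSON.stringify({ clientSecret: paymentIntent.client_secret }), {
      headers: { "content-type": "application/json" },
    });
  },
};
```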

Cloudflare Pages Goes Full Stack

Cloudflare Pages was launched about a year ago. In its first incarnation, Pages provided developers with a convenient way to host the front-end of their applications. This aligned with an emerging architectural approach to delivering web applications, referred to as Jamstack (JavaScript, APIs and Markup). In this design, the UI of the web app is decoupled from the back-end. The UI consists of a light, pre-rendered HTML file that provides the skeleton for the web page to display in a browser. That HTML is then manipulated by JavaScript, which makes calls out to remote APIs to return data and services.

The benefits of this architectural approach revolve around ease of use and scale. First, it simplifies the work of developers, who can focus on either the UI or the back-end APIs. This decoupling streamlines the deployment process for changes on either side, as they operate independently. Second, by pre-rendering the front-end HTML and supporting files, and hosting them on a CDN, users get a much faster response. Since a CDN PoP is likely geographically very near a user’s location, the browser will begin rendering the initial web page frame almost instantly. Also, this delivery system is highly scalable, as heavy traffic can be distributed across the entire footprint of the CDN, versus being directed to a single data center.

Cloudflare’s initial release of Pages only addressed the front-end of the application. They made it easy to create, distribute and host the content needed to render the web page in the user’s browser. The back-end APIs were hosted elsewhere (these could be Workers, but they weren’t automatically integrated). With the Full Stack release, Cloudflare is rounding out the other components of the Jamstack paradigm, primarily delivering support for the back-end in one package.

All components of a Jamstack application can now be addressed by a developer in a single framework and hosting environment on Cloudflare’s platform. This encompasses the front-end, APIs and data storage, all deployed with a single commit. The developer can easily test in an isolated staging environment and then perform a single merge to deploy to production. Existing Workers scripts can be referenced within the Pages projects as Functions. The developer can also easily configure Pages to access their Cloudflare data stores, like KV, Durable Objects and soon R2, simply by adding that namespace to the project. Finally, Cloudflare significantly expanded their integration with source control providers, by adding a partnership with GitLab. This allows any developer using GitLab for source control to easily deploy their Pages projects onto Cloudflare’s platform.
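
For a sense of how this fits together, here is a sketch of a Pages Function that would live alongside the static front-end in the same repository (for example at functions/api/profile.ts) and read from a bound KV namespace. The file path, the `PROFILES` binding and the response shape are my own illustration.

```typescript
// Sketch of a Pages Function: deployed with the static front-end in the same commit.
// PROFILES is a hypothetical KV namespace bound to the Pages project.
interface Env {
  PROFILES: KVNamespace;
}

export async function onRequestGet(context: { request: Request; env: Env }) {
  const { request, env } = context;
  const userId = new URL(request.url).searchParams.get("user") ?? "demo";

  // Back-end logic and data access live next to the static assets,
  // so the front-end, API and storage all ship together.
  const profile = await env.PROFILES.get(`profile:${userId}`, "json");

  return new Response(JSON.stringify(profile ?? { userId, plan: "free" }), {
    headers: { "content-type": "application/json" },
  });
}
```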

Full Stack Pages is available now as an open beta. Cloudflare is offering the incremental features to existing customers for no additional cost. After the product goes to GA, use of the full stack components would move the customer to the Workers Bundled plan, which would represent an upsell. Looking forward, the Cloudflare team will continue adding capabilities to the developer environment, including integrated logging, analytics and support for popular JavaScript frameworks such as NextJS, NuxtJS, React Server Components and Remix. These will expand the appeal to developers already working on these frameworks, but add Cloudflare’s distributed network as the runtime target for back-end services.

This is your Availability Zone

Before I wrap up Full Stack Week, I think it’s worth underscoring what these developments mean and more importantly, the direction they are taking Cloudflare. Cloudflare’s commitment to keep its compute and storage solutions fully distributed is a critical technology differentiator from any of the hyperscalers and most other software infrastructure providers. This means that every script or service runs in parallel across the entire Workers platform on every Cloudflare PoP. Developers don’t need to indicate which region or availability zone that they want their code or data to live in. They simply write their code, push it to Cloudflare’s network and it runs everywhere in parallel. This approach provides huge benefits in response times, scale and operational overhead.

The concept of “serverless” really comes to bear in this architectural model. For most hosting providers, serverless means that the developer doesn’t need to worry about configuring the server hosts on which their code runs. But, that code is still tied to a data center in a geographic region. Making that code run in multiple regions at once becomes an exercise in selecting every location, redeploying the code to each, considering how to distribute data, what other services are needed nearby, etc. An application designed for distributed compute and data is possible, but it has to be deliberate and introduces a lot of operational overhead.

For Cloudflare, the scope of serverless rises to another level. Developers not only don’t have to worry about the servers on which their code runs, but they also don’t have to consider where it runs in order to be globally responsive and scalable. That provides a huge advantage for developers to easily scale their applications and achieve resiliency without any infrastructure planning overhead. Because of Cloudflare’s inherent design to support every service on every PoP server, everything is fully parallelized, without any forethought. Workers are serverless and ubiquitous.

Cloudflare Network, Dec 2021

The same concept applies to data. Cloudflare’s data storage products are designed to work across all locations. In the same serverless fashion, the developer can simply query the data store without worrying about where the data is located. Cloudflare provides a seamless data backplane. Of course, the developer can constrain the geographic boundaries for data with Jurisdictional Restrictions, but that is a refinement versus the default. That distinction is important – most other data storage providers require location to be selected (implicitly or explicitly) first.

With the improvements made in Full Stack Week, Cloudflare is incrementally adding capabilities, but deliberately adhering to their architectural vision. For example, it would have been trivial to roll out Durable Objects or R2 pinned to a single PoP. But, to make those data services available to code running in parallel in any one of 250+ PoPs with reasonable performance is an order of magnitude more complicated. The same applies to support for conventional relational data stores, like MySQL or PostgreSQL. Providing a secure network tunnel to connect from any PoP to any remote cloud data service requires a much more flexible and scalable design.

In the early days of the Workers product, its lead engineer wrote “We believe the true dream of cloud computing is that your code lives in the network itself.” It shouldn’t be constrained by default to a single hyperscaler or private data center. It should run everywhere in parallel. Cloudflare’s network is the computer. With 250+ global PoPs and thousands more locations to be added with Cloudflare for Offices, that computer will be available everywhere on the planet in near real-time. As the world moves to always-on autonomous devices and services, this hyper-local, global compute fabric will become a foundational requirement.

CIO Week

A few short weeks after Full Stack Week, Cloudflare rounded out the year of innovation with CIO Week, starting on December 5th. This week featured a number of new capabilities that further demonstrate to CIOs that Cloudflare provides the network that can handle all their enterprise connectivity and security needs.

As they typically do, the week kicked-off with a blog post describing Cloudflare’s vision and linking it to the theme for that week. At a high level, Cloudflare seeks to “help build an Internet that’s faster, more secure, more reliable, more private, and programmable”. They are positioning themselves as a platform versus a set of services, with the intent to allow companies to build the next generation of network services and distributed applications on top of Cloudflare. This differentiates Cloudflare from some of the other network and security providers, which focus primarily on selling packaged solutions without the ability to make them programmable. Cloudflare, on the other hand, will provide out-of-the-box services to interested customers, but also offers development building blocks through Workers, data storage and dynamic network routing. This expands the set of use cases that Cloudflare can address.

Cloudflare recognizes that enterprises are becoming more distributed every day. Their employees work remotely all over the globe. Business applications are available from a variety of distinct SaaS companies. Their software infrastructure is hosted on premise, in data centers and on the cloud. They ship data between fleets of devices, log collection points, data clouds and partners. All of these functions rely on a network to communicate. More frequently, this network has become the Internet, effectively making the Internet the corporate network.

With this evolution of enterprise network services, Cloudflare is perfectly positioned. Their globally distributed edge network of PoPs can provide the network backplane to handle all enterprise traffic. Because they have full network control, deep packet inspection and scalable compute at every node, Cloudflare can ensure that corporate network activity is secure, reliable, performant and compliant. With increasingly complex user privacy requirements, data localization is a critical component that provides significant differentiation.

This network has 100 Tbps of capacity across more than 250 nodes, each with significant compute and storage capacity. Every server in every PoP can run all Cloudflare services in parallel, dramatically simplifying capacity planning. The Cloudflare network processes over 28M requests per second and routes traffic across over 10,000 interconnects. They also host 25M of the world’s web sites and run one of the largest public DNS resolvers. All of this activity provides the Cloudflare team with critical insights to optimize network traffic and stay ahead of emerging threats. They can also mitigate the largest denial of service attacks, which are becoming increasingly sophisticated.

With this backdrop, let’s briefly examine the highlights from CIO Week.

New Firewall Capabilities and a Trip to Hawaii

Cloudflare kicked off the week with several product announcements targeted at helping CIOs migrate their corporate network from a dependence on firewall hardware to Cloudflare’s software-based network. This revolves around Cloudflare One, which is Cloudflare’s Zero Trust network-as-a-service product. It dynamically connects users to enterprise resources, like data centers, offices and corporate apps (self-hosted and SaaS). It layers on identity-based security controls which are delivered from the nearest PoP, significantly enhancing performance.

Cloudflare One Service Layers, Cloudflare Web Site

Beyond reinforcing the capabilities of the existing Cloudflare One offering, the team announced several new features. I won’t delve into these too deeply, but they generally improve the privacy and granularity of controls for administrators. First, Cloudflare added more options to determine what information is logged by Cloudflare Gateway and which users can review it. This is accomplished through role-based dashboard access and selective logging of events. These controls will limit which users can see PII in log data. For everyone else, it will be redacted.

They also added support for IPsec as a mechanism to on-ramp traffic onto Cloudflare’s network. Since many customers already use IPsec tunnels for their connections over the Internet, an enterprise can now create a single IPsec tunnel into Cloudflare’s network. From that connection, Cloudflare can then route the enterprise’s traffic to all the other resources needed.

The Magic Firewall team extended packet inspection and enabled more complex matching logic by adding eBPF support. Extended Berkeley Packet Filter (eBPF) allows engineers to insert packet processing programs that execute within the Linux kernel. This provides engineers with the flexibility of familiar programming paradigms and the speed of in-kernel execution. Because Cloudflare’s firewalls are running as software on their PoP nodes, this programming capability allows the team to iterate very quickly and roll out new security features. This differs from hardware firewalls, which require the application of software patches to make updates.

Finally, to create an incentive for customers to begin migrating their corporate network traffic to Cloudflare, they have created tools and resources for customers to easily import policies from legacy hardware firewalls into Cloudflare’s cloud-native service. Customers who deprecate hardware firewalls may qualify for discounts and be entered for a chance to win a trip to Oahu, Hawaii. This contest represents a veiled dig at Palo Alto Networks.

Logging Solutions

Cloudflare announced that customers could begin storing log data from Cloudflare services on the new R2 data store, introduced during Birthday Week. I thought this announcement was very interesting from a strategic perspective. As investors will recall, R2 provides a distributed object store, which can handle any type of unstructured data. It is comparable to Amazon’s S3, even providing a similar API interface. R2 will be offered to customers at a lower price point than S3.

This CIO Week announcement will allow customers to aggregate into R2 all of the event data generated by their Cloudflare products. These include CDN, Magic Firewall, Magic Transit, Spectrum and others. This log data can be stored in R2 for as long as customers need it to meet their compliance requirements. Activation is simply a click in the Admin panel.

Cloudflare’s intent with this announcement is very interesting. They claim to be creating the building blocks for customers to perform log analysis and forensics of their Cloudflare security products directly on the Cloudflare platform. This presumably means they will start assembling features that resemble a SIEM tool.

Storage on R2 adds to our existing suite of logging products. Storing logs on R2 fills in gaps that our customers have been asking for: a cost-effective solution to store logs for any of our products for any period of time.

Cloudflare Blog Post, December 2021

Beyond the potential to expand into other infrastructure logging and security monitoring use cases, this move underscores Cloudflare’s commitment to R2. Network events from Cloudflare products would likely be some of the most verbose logs in software infrastructure. Cloudflare claims to generate over 10M events per second of log data. Assuming most of that will get routed to R2, it implies an enormous volume of data storage will be needed. This doesn’t relegate R2 to some side project that provides a little bit of application storage for Worker scripts. Rather, it signals that R2 could become one of the largest data stores on the Internet.
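As a thought experiment on what those “building blocks for log analysis” might look like, here is a minimal sketch of a Worker that lists and summarizes a day’s worth of firewall log objects stored in an R2 bucket. The bucket binding (FIREWALL_LOGS), key prefix and object layout are all hypothetical – nothing here is an actual Cloudflare logging API.

```typescript
// Hypothetical sketch: summarize one day's firewall log objects sitting in R2.
// Binding name and key layout are invented for illustration only.
export interface Env {
  FIREWALL_LOGS: R2Bucket;
}

export default {
  async fetch(_request: Request, env: Env): Promise<Response> {
    // List objects written under today's date prefix, e.g. "magic-firewall/2021-12-09/".
    const prefix = `magic-firewall/${new Date().toISOString().slice(0, 10)}/`;
    const listing = await env.FIREWALL_LOGS.list({ prefix });

    // Summarize what is stored without downloading the full objects.
    const summary = listing.objects.map((obj) => ({
      key: obj.key,
      sizeBytes: obj.size,
      uploaded: obj.uploaded.toISOString(),
    }));

    return new Response(JSON.stringify({ count: summary.length, objects: summary }), {
      headers: { "content-type": "application/json" },
    });
  },
};
```

A real SIEM-style offering would obviously need indexing, querying and alerting on top of this, but the sketch shows why keeping the logs on the same platform that generated them is attractive: the analysis code runs right next to the data, in any PoP.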

Some of Cloudflare’s future product plans are implied in this job description for a Distributed Systems Engineer – Data. It is targeted at software engineers interested “to take on the challenges of working with data at an incredible scale”. Reading the job description, Cloudflare is planning to build a highly scalable global data pipeline. The initial use case is for the collection and distribution of network event logs. Combined with Workers, though, the capabilities could be extended to other data pipeline use cases like message queues, ETL, data streaming and data fabrics.

You will be responsible for designing, building, and scaling one of the biggest global data pipelines to overcome network delays and partitions. The pipeline uses Go, Kafka, ClickHouse, Flink and PostgreSQL to store and analyze in excess of 10 million events per second (and growing fast!).

Cloudflare Job Description, December 2021

Smaller Enhancements

There were a few other releases, which I will briefly highlight below.

  • Support for UDP. With more and more users adopting Cloudflare’s Zero Trust platform, the most common request from customers has been support for UDP-based traffic. Modern protocols like QUIC take advantage of UDP’s lightweight architecture. UDP is utilized by communications services like video streaming and VoIP. Cloudflare is making support for UDP and internal DNS available for customers of the Zero Trust product suite.
  • Security Center. Cloudflare launched the beta version of a single view for customers to map their attack surface, identify potential security risks and mitigate them with a few clicks. The tool is available within the existing Cloudflare Admin panel. It surfaces potential security risks and vulnerabilities within a customer’s infrastructure. The user can activate infrastructure scans on a scheduled basis. These will summarize a customer’s hosting footprint from an attack surface point of view and highlight potential security risks. The customer can follow recommended actions to address each risk. This opens the door for Cloudflare to expand into other risk detection capabilities in network security, enterprise security and brand security.
  • Magic Firewall Enhancements. Cloudflare announced a set of updates to Magic Firewall, adding new security and visibility features that are critical for the effectiveness of modern cloud firewalls. To improve security, they have added threat intelligence integration and geo-blocking. For enhanced visibility, they have added packet captures at the edge, providing a way to see packets arriving in near real-time.
  • New Partnerships. In order to solidify their position in the emerging cloud security ecosystem, Cloudflare has established partnerships with leading cyber insurance and incident response companies. For insurance, initial partners include At-Bay, Coalition, and Cowbell Cyber. For incident response, they are partnering with CrowdStrike, Mandiant and SecureWorks. These partnerships help Cloudflare customers quickly identify responders who are already familiar with Cloudflare’s infrastructure. Additionally, Cloudflare has joined the Microsoft 365 Networking Partner Program (NPP). Cloudflare One recently qualified for the NPP by demonstrating that on-ramps through Cloudflare’s network help optimize user connectivity to Microsoft.
  • Clientless Web Isolation. Cloudflare introduced a beta for a clientless version of web isolation. This technology provides customers with a new on-ramp for Browser Isolation that natively integrates Zero Trust Network Access (ZTNA) with the zero-day, phishing and data-loss protection benefits of remote browsing for users on any device. All without needing to install any software or configure any certificates on the endpoint device. Clientless web isolation will be available as a capability of Cloudflare for Teams subscribers who have added Browser Isolation to their plan.

Zaraz Acquisition

On December 8th, Cloudflare announced the acquisition of Zaraz. Zaraz provides a cloud-hosted service that remotely processes activity scripts for website operators. They have a commercial product and a set of active customers, including Instacart, BetterHelp, Razorpay and Y Combinator. They have integrations with the most popular data analytics, advertising, marketing analytics and CRM tools. Because Zaraz was built on the Cloudflare platform, the service will be immediately available to Cloudflare Enterprise customers or as an add-on to others.

Without going into too much technical detail, most websites and mobile apps include third-party scripts that perform some level of activity tracking and data collection. Examples include Google Analytics, Hubspot, Mixpanel, Facebook, Twitter and Google Ads. Normally, these scripts are implemented in JavaScript and inserted into a web page’s HTML near the top of the page. The intent is that these scripts run asynchronously in the background, while the browser focuses on processing the code associated with the source web page.

However, this is not always the case. As web sites add an increasing number of these third-party scripts, their pages quickly bloat and exhibit performance, security and privacy issues. Recent studies show that the average web site loads over 20 of these individual scripts (compared to just a handful of its own first-party scripts). While initially a well-intentioned way of providing an easy integration for developers through simple JavaScript includes, this method of processing has become problematic in the following ways:

  • Performance. Each third-party script begins loading as soon as the browser reaches its include tag. While browsers can fetch resources in parallel, they limit how many network requests can be in flight at once, and script execution on the main thread is largely serialized. As more scripts are loaded onto the page (often pulling in additional scripts of their own), page execution can become bogged down. If one of these scripts is unresponsive or can’t make its network connection (remote service down), it can block the whole page from loading.
  • Security. By including a script in a web page, the developer is delegating some level of authority to the company providing the script. While most known third-parties are not malicious, they often include JavaScript libraries from yet another source. This cascading dependency loading creates opportunities for hackers to inject malicious code somewhere in the software chain that could exfiltrate user data.
  • Maintainability. If issues with page execution arise, they present a debugging nightmare for the main site’s developers. Third-party scripts are often minified, making them very hard to decipher. Following the dependency tree to the unresponsive source can require excessive digging.
  • Privacy / Compliance. Many CRM, CDP and analytics services collect some level of user data. This can sometimes infringe upon what is considered private or sensitive customer data. Even where the third-party, like a CRM tool, has been given permission to store this kind of data, they may not be considering jurisdictional compliance requirements. For example, if a third-party script is collecting email addresses, it may not respect the GDPR requirement that the data needs to remain in an EU data center, versus being shipped to a central location.

Zaraz addresses these concerns by executing the third-party scripts on their server-side infrastructure in the cloud, versus allowing them to run in the browser. They accomplish this by working with the third-party providers to create a version of their script for Zaraz’s environment. Then, all scripts are invoked through a single interface in the web page, provided by Zaraz.

The back-end is implemented on Cloudflare Workers, running code that Zaraz maintains. This approach allows Zaraz to consolidate common functions, run more code in parallel, distribute network calls and keep user data in one place. These all improve the performance of the page. For security and privacy, Zaraz can control the activities of the third-party scripts (including cascading loads), limit the distribution of sensitive data and maintain geographic compliance restrictions. Implemented on this level, Zaraz solves most of the issues raised by running these scripts separately on the browser side.

Cloudflare Blog Post, Standard Script Processing on Browser vs. Zaraz Approach
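To illustrate the pattern (this is my own simplified sketch, not Zaraz’s actual implementation), the page makes a single call to one endpoint, and a Worker fans the event out to the third-party vendors server-side. The vendor URLs, event shape and sanitization step below are all hypothetical.

```typescript
// Simplified sketch of server-side tag processing on Workers (not Zaraz's code).
// The browser sends one event to this endpoint; the Worker fans it out to vendors.
export default {
  async fetch(request: Request, _env: unknown, ctx: ExecutionContext): Promise<Response> {
    const event = (await request.json()) as { type?: string; url?: string };

    // Strip anything sensitive before it ever leaves Cloudflare's network.
    const sanitized = { type: event.type, url: event.url };

    // Fire-and-forget: vendor calls run at the edge, off the browser's critical path.
    ctx.waitUntil(
      Promise.allSettled([
        fetch("https://analytics.example.com/collect", {
          method: "POST",
          body: JSON.stringify(sanitized),
        }),
        fetch("https://ads.example.com/pixel", {
          method: "POST",
          body: JSON.stringify(sanitized),
        }),
      ])
    );

    return new Response(null, { status: 204 });
  },
};
```

On the browser side, the dozens of vendor script tags collapse into a single lightweight call to this endpoint, which is where the performance, security and compliance benefits come from.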

This acquisition brings Cloudflare into a new market for tag management and the parts of CDP solutions which facilitate data collection. Given the benefits in terms of page speed and security, this is a service that would generate revenue for Cloudflare, either as an add-on or value-added bundled service. It is a natural product extension. The acquisition is also strategic on a few levels:

  • Built on Workers. The Zaraz infrastructure is built on Cloudflare’s Workers platform. This represents the first acquisition by Cloudflare of a company that built a commercial solution on its development platform. Other software infrastructure providers, like Snowflake and Elastic, have benefitted from this source of platform usage and funnel of potential acquisitions for new capabilities.
  • Edge delivered. Cloudflare is unique in supporting this capability due to their globally distributed network of compute. As performance is a critical consideration for a solution like this, having their owned and operated PoPs within milliseconds of every global Internet user is a critical competitive advantage.
  • Brings customers. Zaraz already has some customers of its service, including Instacart. Given Instacart’s interest in this service, we can presume it will appeal to other forward-thinking web application providers for whom performance and privacy are critical. This may provide an entry point into the full suite of Cloudflare products.

When new customers started using Zaraz, we noticed a pattern: the best teams we worked with chose Cloudflare, and some were also moving parts of their backend infrastructure to Workers.

Zaraz CTO, December 2021

As Zaraz was first building their infrastructure to address these customer issues, they evaluated many options. First, they considered running their own servers in data centers across the Internet, but quickly ruled this out due to capital requirements and overhead. Next, they looked at leveraging a CDN with the serverless AWS Lambda product providing the back-end code processing. This approach led them to Cloudflare for CDN, with some logic being processed by Workers. As they began utilizing Workers (and the Workers platform continued to evolve), they discovered that they could run the whole back-end software infrastructure just on Workers. Zaraz’s CTO published a blog post about why they made these design choices and how their infrastructure evolved.

Initially we thought we would need to create Docker containers that would run around the globe and would use their own HTTP server, but then a friend from our Y Combinator batch said we should check out Cloudflare Workers.

We planned to let Workers handle the requests coming from users’ browsers, and then use an AWS Lambda for the heavy lifting of actually processing data and sending it to third-party vendors.

As we took a deeper look, we found Workers answering demand after demand from our list, and learned we could even do the most complicated things inside Workers. The Lambda function started doing less and less, and was eventually removed. Our little Node.js proof-of-concept was easily converted to Workers.

Zaraz CTO, Cloudflare Blog Post, December 2021

The blog post highlights several advantages of Workers that were beneficial to Zaraz. These would also apply to other providers considering similar options in the future.

  • The Zaraz team is able to run JavaScript natively through the V8 engine on Workers. Most third-party JavaScript code is easily ported to Workers.
  • The serverless and stateless environment for Workers addresses data storage and security concerns. No data is stored in between requests, limiting the attack surface for hackers and greatly reducing Zaraz’s risk.
  • Zaraz was able to significantly improve performance and scale by running all their code in parallel across a geo-distributed network of nodes. Cloudflare’s multi-tenant architecture supports this parallelism natively.

Zaraz represents a very exciting acquisition for Cloudflare. Looking forward, it implies that more companies could be built that use Cloudflare infrastructure for their core operations. Zaraz is the first. The Cloudflare leadership team thinks there will be more. “We truly believe that the next billion-dollar company can and will be built on Cloudflare Workers,” said Matthew Prince, Cloudflare CEO.

A word on Valuations and Where We Go from Here

Cloudflare, along with other high-growth software providers, currently enjoys a historically high valuation. This has dropped considerably over the past month, from an all-time high P/S ratio above 110 to about 70 now. This has been largely driven by macro concerns, whether attributed to monetary tightening, high inflation or fears of renewed economic slowdown. The problem we investors face near term is that the rapid drop in valuation multiples provides no real confidence in what the “new range” will be going forward. Some analysts refer to the valuation multiples in 2019, using pre-COVID ranges as a base. Others go back further to 2015-2016, where software company multiples for decent growers were in the 10-20 range. For context, after Cloudflare’s IPO in September 2019, the stock peaked at a P/S of 23.

I think valuations could settle somewhere in between, but I am not making investment decisions based on these kinds of macro projections. I realize some investors may find that approach short-sighted and put forth strategies around trimming at the top, moving into cash or applying hedges. Those strategies may produce better returns than mine and readers can certainly consider those kinds of plays.

My approach is to simplify the calculus to just what is within the company’s scope. Do they address a large and growing TAM? Is their product development velocity higher than competitors? Do they effectively land new customers and then realize operating leverage by expanding spend across their product suite? Do they enjoy strong unit economics and the ability to generate significant operating income (and free cash flow) if they so choose? Is the company led by a CEO and executive team that were either founders or have deep experience in the problem space?

My thesis is that these factors will lead to durable revenue growth rates over time. And over many years, compounding revenue growth will overcome any near-term adjustment to valuation multiples. Once the valuation “reset” occurs, the stock price should generally scale in proportion to revenue growth from that point forward. Of course, this requires the company to continue to execute consistently.

That is why one of the most important criteria I look for is companies that can maintain a high, or even the same, revenue growth rate over several years. A continual flow of new customer additions, spend expansion annually by existing customers and an ever growing suite of product offerings can all combine to keep the revenue flywheel turning. Leading software infrastructure companies like Datadog, Snowflake, MongoDB, Cloudflare and others are successfully leveraging this approach.

One thing that I think has changed recently with some of the best software companies is the durability of revenue growth. Most software companies exhibit revenue growth degradation as the magnitude of their revenue increases. This effect is termed the “law of large numbers”. It provides analysts with a pretty reliable model for projecting a decrease in a company’s annual revenue growth by 5-10% for each year in the future.

However, we are seeing some companies in software infrastructure maintain high revenue growth for a longer period of time and not succumb to the law of large numbers as quickly. We can just look to the hyperscalers for evidence, as some put up nearly 50% annual revenue growth at run rates over $20B. Companies that can maintain a consistent high revenue growth rate will benefit from powerful compounding. For example, 50% annual revenue growth over 5 years results in an absolute increase in revenue of roughly 7.6x. Over 10 years, the multiplier exceeds 50.
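For readers who want to sanity-check the compounding math, here is a quick back-of-the-envelope calculation. The 50% rate and the 5- and 10-year horizons are just the illustrative figures from the paragraph above, not a forecast.

```typescript
// Compounded revenue multiplier after n years at a constant annual growth rate.
const multiplier = (growthRate: number, years: number): number =>
  (1 + growthRate) ** years;

console.log(multiplier(0.5, 5).toFixed(1));  // "7.6"  -> ~7.6x revenue after 5 years
console.log(multiplier(0.5, 10).toFixed(1)); // "57.7" -> ~58x revenue after 10 years
```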

I think Cloudflare is one of those companies that can maintain consistent elevated revenue growth for a longer period than expected. Their new product introductions not only round out the features available in their core platform, but can capture market share from providers in adjacent markets that are already generating revenue. Let’s take R2 as an example. While most of the focus in the R2 product launch has been on its swipe at AWS’ S3 product, it is also fair to recognize that other providers have made a business out of selling less expensive distributed object storage that competes with S3.

Most notable of these is Backblaze (BLZE), which went public in November 2021 and claims about $65M in ARR. They offer two cloud data storage products, one for computer back-ups and the other for object storage. The B2 Cloud Storage product directly competes with S3, offering a lower cost solution. For this product, they claim >60% y/y growth and 130% NRR. They have over 500k total customers, with many recognizable names using the cloud storage product.

In one of Cloudflare’s featured developer blog posts, a start-up offering a consumer service around file sharing between devices discussed the software stack that the service is built upon. The development team uses Workers as the core back-end processing engine, but integrates with a number of third-party services for specific application functions. These include Stripe for payments, Google Firebase for authentication and communications, and Backblaze B2 Cloud Storage for file storage.

Looking towards building an updated version of their service, the founder and lead developer discussed how some of these third-party services could now be moved to the Cloudflare platform. They can integrate directly with Stripe from Workers after Full Stack Week, versus passing that through a third-party subscription platform called RevenueCat. Authentication and communications functions could be migrated from Firebase to Durable Objects and WebSockets. And most significantly, R2 could replace the Backblaze B2 cloud service for file storage.

Using Workers to manage notifications and signaling, the initial handshake process has been reworked, and I am looking to move the real-time communication from Firebase to Durable Objects and WebSockets. And of course, I can’t wait to try out R2 when that becomes available.

Cloudflare Blog post, November 2021

I raise this point not as a knock on the new IPO for Backblaze, but to demonstrate that R2 and other Cloudflare services already have independent service providers generating revenue in competition with the hyperscalers. There is already a market that Cloudflare can immediately tap. As customers build distributed applications on the Workers platform, Cloudflare will be able to offer these supporting services in a premium bundle or as a stand-alone upsell.

As Cloudflare continues adding new products and services in every direction, these offerings will generate incremental revenue. That will likely backfill a slowdown in their larger and older products. Not every product offering will be successful, which is why Cloudflare’s breakneck product development velocity is so important. If the Cloudflare product team takes enough shots, several of them will hit the mark each year. As they do, Cloudflare should be able to replicate the enduring growth of the hyperscalers and outlast the law of large numbers.

Investment Take-aways

In the simplest terms, we have to applaud Cloudflare for its consistency. Delivering 50% revenue growth over several years is a feat in itself. Similarly, if we project the same growth trajectory over the next 5-10 years, then compounding will make for a much bigger company (approximately 8x to over 50x). If Cloudflare finishes 2021 with just over $650M in revenue, a 50x increase would be roughly $32B. While that sounds like a lot, it is about in line with the run rates for AWS, Azure and GCP, which grew by 39% to 48% annually last quarter. So, this extended growth rate at scale appears conceivable.

Even with the current valuation multiple at an excessive 70 P/S, that level of revenue compounding will bring the valuation multiple down significantly over time. This is likely the assumption that the market is making at this point. Cloudflare’s grand vision of becoming the fourth public cloud, coupled with the rapid expansion of its product footprint, is fueling a narrative that revenue growth will be durable. This assumption drives a premium in the stock price as 10-year financial models start to look reasonable. The Cloudflare story is as much about consistent execution as it is about potential.

At the same time, many investors are growing impatient, expecting that all these new product releases should be contributing to revenue at some point and that revenue growth should start to accelerate. If revenue growth isn’t accelerating, then the new products look more like experiments than actual business drivers. This is a fair perspective and also captures the reality that any slowdown in revenue growth (even a temporary one) would have an outsized impact on valuation. We know what happens to a stock’s valuation once it appears to be a “decel” story.

I don’t expect revenue growth to accelerate significantly from here and would be fine with roughly 50% revenue growth on average going forward. As the total revenue amount becomes larger, it will be necessary for new product growth to contribute incrementally to fill the gap as more mature offerings slow down. We see this effect with other rapidly growing software infrastructure companies that keep adding products to backfill the eventual slowdown of their largest and oldest offerings. Datadog comes to mind here, as new offerings are in hypergrowth while early offerings like infrastructure monitoring are slowing down.

Cloudflare is showing evidence that new offerings do eventually contribute to revenue growth. It’s not immediate, but does appear to ramp up after about a year in GA. The fact that most of the highlighted new customer wins in Q3 included Cloudflare One / Zero Trust offerings supports this point. At the end of 2022, we will likely see even more contribution from Zero Trust and new significant deals that include Email Security, Workers/R2, RTC or even Web3 Gateways.

As long as Cloudflare keeps innovating and entering new markets, they should be able to maintain their growth rates. The reason I can justify this, relative to other competitors or providers in adjacent markets, has to do with Cloudflare’s unique position from an architecture and infrastructure point of view. By focusing on the network as the foundation for all their product offerings, they can bring fresh insights and capabilities to many aspects of security, compute, data processing and content delivery that would be out of reach for both hyperscalers and potential competitors that sit on someone else’s data centers.

Additionally, while many of their new products and market entries appear to be bare-bones offerings with limited adoption, I think use and feedback from individual developers and forward-thinking start-ups is exactly what is needed at this point in Cloudflare’s evolution. Throughout their history, Cloudflare products have followed this bottom-up approach, gaining traction by addressing a mundane but under-served segment of the market and focusing initially on free users. Over time, they build up to commercial relationships with start-ups, then mid-sized companies and finally enterprises.

“We had to prove the value for companies running their applications on this new infrastructure platform. There were skeptics, particularly amongst competitors… They said at first… this is never gonna take off. Then, they said, well it’s only for start-ups. And then, when enterprises really started to use it, they said it’s certainly not for mission critical workloads.”

You might assume the quote above is from Cloudflare. Actually, this was part of Adam Selipsky’s (new head of AWS) keynote at AWS re:Invent 2021 just a couple of weeks ago. He was describing the skepticism and reaction from the industry when cloud computing was just getting off the ground and application workloads were predominantly in private data centers. While I don’t think Cloudflare will fully displace the hyperscalers, I do think a similar evolution is happening as it relates to the view that applications can be built on top of a fully distributed network of compute and storage.

Just like with my contrived highway system analogy, the ability to inspect and route traffic between cities and provide onramps for everyone in between will lead to many new service offerings that Cloudflare will be best positioned to address. Distributed compute, data storage (and transit), security, scalability (caching) all naturally extend from this configuration. The only viable alternatives would be other players that run their own network of PoPs and have relationships for carriage across the globe. That really only includes a handful of companies, most of which are no longer innovating or don’t offer programmability.

Given all this, I am comfortable holding my outsized position in NET stock for the remainder of 2021. This is now 26% of my personal portfolio and is my second largest holding. My basis is $46, having built the position primarily in the Fall of 2020. That entry was timed with an exit from Fastly, which was exhibiting slowing growth and execution issues. This highlights my prior point about Datadog, where product potential has to be measured against actual go-to-market delivery. In Fastly’s case, I like their architectural choices and the Compute@Edge platform, but the organization seems to be struggling to deliver consistent product development and sales growth. We will see if the situation improves in 2022.

With my NET holdings up over 75% in 2021, I am maintaining my position for now, deferring capital gains taxes and reflecting my desire to hold this company for the long term. Like with Datadog, the accelerating rate of product delivery makes me think that the Cloudflare of 2022 will again be a much bigger company than it is today. The expansion of their platform across so many vectors makes me bullish that Cloudflare is well-positioned to grow into a significantly larger infrastructure provider over the next 5-10 years.

In November 2020, I initiated coverage of NET and set a 2024 price target of $180. They have already surpassed that mark. As market valuations find their base and we get some updates on Cloudflare’s performance in 2022, I will adjust this price target.

NOTE: This article does not represent investment advice and is solely the author’s opinion for managing his own investment portfolio. Readers are expected to perform their own due diligence before making investment decisions. Please see the Disclaimer for more detail.

Additional Reading / Research

  • Cloudflare Full Stack Week – Summary Page. Includes a roll-up of blog posts and CloudflareTV episodes.
  • Cloudflare CIO Week – Summary Page.
  • Hhhypergrowth’s extensive coverage of Cloudflare – See Flare Ups, What are Edge Networks, A Cloudflare Deep Dive and more. Subscribe to the Premium Service for extensive coverage of all things software infrastructure.

15 Comments

  1. Trond

    Thank you, this is now on top of my to-read list for the Christmas break!

    Just wanted to quickly share two blog posts related to Cloudflare in case you haven’t seen them already, one positive and the other perhaps a bit less so.

    This is a fresh “real world” performance test between Workers vs Compute@Edge:

    https://barstool.engineering/a-real-world-comparison-between-cloudflare-workers-and-fastly-compute-edge/

    My takeaway is, when also considering the recent performance blog posts by Cloudflare and Fastly, that both companies have performant platforms and neither one seems to have a clear upper hand over the other in that area. All the other things you have so well explained many times would seem to be much more important from an investment perspective. Though I must admit that after reading so much last year about how Fastly’s technology is “superior” or “best”, especially performance-wise, it’s reassuring to see that Cloudflare is doing just fine against them, to say the least. And also considering some individual hints here and there – like applying eBPF/XDP based protections globally almost instantly – these are very clear indicators they are at least mostly second to none.

    The second case emerged a few weeks ago but its follow-up is very recent. Some initially dubbed this a Workers issue, but to me it seems to have been more of a higher-level, let’s say, process issue, and both parties (Badger and Cloudflare) seem to have found something to improve since then:

    https://badger.com/technical-post-mortem
    https://badger.com/news/badger-security-upgrades

    Although especially the first post might paint Cloudflare or Workers in a somewhat negative light, the second seems to confirm that Badger is still a customer and that they were able to apply several new security measures provided by the Cloudflare platform to prevent such issues going forward.

    Thank you very much for a very educational year, wishing you happy year’s end and new year!

    • poffringa

      No problem – glad you found it helpful and thanks for the feedback. I agree with your points relative to performance. I think Fastly’s faster, more efficient runtime would provide an advantage if they also offered the same level of supporting services around it. Raw speed only matters if the runtime is usable – language support, data storage options and tooling all become just as important. With what is primarily suited for stateless application use cases, Compute@Edge becomes very limited.

      Regarding the security incident at BadgerDAO, I agree with your take-away as well. I think the design of the Workers API permissions management should have been more stringent, but this also wasn’t a “hack” (meaning the attacker didn’t break into the Workers platform itself). The attacker made use of the bad design to trick BadgerDAO into granting permissions to their new account. So, as you said, both parties could have done better. I agree that it is encouraging that BadgerDAO is remaining as a customer. Also, this reveals another major customer of the Workers platform, leveraging it for real application use cases.

    • AH

      Trond – sounds like it would be prudent to invest in both Fastly and Cloudflare for years to come

  2. YY

    Thank you for yet another excellent post! Having followed Cloudflare for quite some time, a few high-level thoughts come to mind:

    Looking at the first nine months of 2021, Fastly’s R&D spend was ~$92m, and Cloudflare’s was $127.6m. Obviously, NET spends more, but does the difference explain all the product announcements by Cloudflare? Is FSLY that inefficient in its R&D spending? Is NET that efficient? Looking at both companies’ breadth and amount of new product announcements, one would expect NET to be spending multiples of what FSLY does, but it is far from being the case. What is your view on that? Clearly, Cloudflare’s execution is much better, but is it THAT much better given the spending level?

    Similarly, the CapEx spend at NET is double FSLY’s, but it is tiny compared to the spending it would take to become a significant player in the cloud infrastructure space. Looking at how many billions the leading cloud infrastructure providers spend, NET’s spend (~$63m in 9 months) is a drop in the ocean. As you mentioned, NET probably doesn’t plan to overtake AWS or Azure, but the high valuation does imply that Cloudflare can become a significant number four here. Cloudflare is investing less than 1% of the capex that each of the big three spend on cementing their market position. Is that enough to become number four in this space?

    I’m just wondering how Cloudflare does so much with so little relative spending on R&D and Capex. Any insight on that would be much appreciated.

    • poffringa

      Thanks for the feedback and detailed questions. On R&D spend, I think there are a couple of possible explanations. I talked about some of these in a prior post on product agility, using Cloudflare as the example. Here are a few points to consider:
      – Cloudflare decided to re-use the V8 Engine for its Workers product. While this has limited the performance relative to Compute@Edge, it does simplify their development. Fastly likely has to invest significant development resources in building out the supporting capabilities for the Lucet runtime and keeping it up to date. For example, to bring JavaScript support to Lucet/WASM, Fastly likely had to commit its own resources, whereas Cloudflare gets those types of features for free.
      – Cloudflare makes heavy use of its own infrastructure for building new products. In this post, I talked about concepts like primitives and composability. For example, the mapping functionality of R2 is based on Durable Objects. That reduced the development time for R2.
      – Cloudflare organizes around many small product teams that are empowered to take Cloudflare’s infrastructure and apply it to solve a wide array of problems. This is likely how whole new product vectors, like RTC, email security and Cloudflare One were launched. Again, because the platform provides the developers with so many building blocks, they likely do more snapping together than writing code from scratch to launch something new.

      On CapEx, I think that simply has to do with their relative size in terms of revenue. At a $660M revenue run rate, Cloudflare is about 1% the size of AWS ($64B run rate).

      • YY

        Thank you very much for the detailed answer. Very useful insights.

      • AH

        Embedded in your comments is a bullish case made for Fastly, with the fastest edge computing platform working through their sales and marketing issues this year

        CloudFlare moved from 220 levels to 100, will see when there is consolidation

  3. udit

    Thank you so much for the post and all your work!

    I wanted to bring up the revenue model and get your thoughts on it.

    Q4-2020 Earnings call
    “Less than 5% of our revenue comes from usage-based products.”

    Q1-2021 Earnings call
    “Since very little of our revenue is usage based, our success with this metric (DBNR) is driven by our success selling our broad platform to our customers.
    We saw particular strength in the quarter from Cloudflare One, which unifies Cloudflare for infrastructure and Cloudflare for Team solutions into a platform that we believe represents the future of enterprise networking.”

    Their pricing model also reflects this
    https://www.cloudflare.com/plans/#overview
    Application services, Zero Trust services, Network services don’t have usage based pricing.

    Developer platform such as Cloudflare workers has “$0.15/million requests per month” which is based on usage.

    From this post
    “The durability of hyperscaler growth provides evidence that Cloudflare could sustain a high rate of growth for a while.”
    “The combination of enterprise customer upsell with an increased set of products to offer them will drive a powerful expansion motion”.

    Is it a bit concerning as they then don’t grow with their customers usage and need to come up with new products and new customers? The new products being usage based such as workers, workers KV is encouraging though.

    • poffringa

      Thanks for the feedback. I don’t really see a problem. Their DBNR metric captures this concept of incremental spend from existing customers over a one year period. That was a respectable 124% last quarter, which means that customers from a year ago spent 24% more on Cloudflare products this year. Putting aside new customer adds, Cloudflare’s spend expansion opportunity occurs across two dimensions:
      1. Customers increase their usage of existing product subscriptions. Most of the pricing plans will generate more revenue as the customer increases their dependence on each product type. For example, Zero Trust Services are still calculated based on the number of users and include different plans with increasing thresholds and feature access. The Network Services (Magic Transit, WAN, Firewall) all have custom pricing billed annually. That basically implies a renewal annually based on usage. And, as you noted, the Developer Platform (Workers) is all usage based.
      2. Existing customers increase their spend by adopting more products. Email security (introduced in October) would be an example of a new product that will generate incremental spend with existing customers. The same applies to Durable Objects, R2, RTC, Browser Isolation, Zaraz, etc.

      This mode of growing spend with existing customers appears very similar to that of Datadog. While customers sign monthly or annual commitments, their cost assumes certain levels of usage. If the customer wants to increase their usage, they will update the subscription contract. That results in an increase in DBNR.

  4. Michael Orwin

    Thanks, Peter, for another very informative article.

    Can Cloudflare let an organization bypass the public internet?

    I suppose that for a BtoC business, customers will get to the website via public internet, often via a google search, and that can’t be improved on, but might it be possible in other areas? I thought about it when reading a Cloudflare article about an internet protocol, “What is BGP? | BGP routing explained”. At the end it links to “Protecting Cloudflare Customers from BGP Insecurity with Route Leak Detection” (03/25/2021). I expect the solution it describes is effective, but it’s geared to alerting administrators, who then need to contact the service providers listed in the alert. I’d rather have seen something like “How to enable work-from-home without relying on BGP”, but maybe that’s not feasible, unless Web3 Gateways are more secure and you can work-from-home in the metaverse, or something.

    I’m not a techie, so sorry if that doesn’t make sense. Cloudflare’s Learning Center explains things well, and I didn’t want to risk describing BGP myself.

    • poffringa

      Hi Michael – Yes, that is the premise behind Magic WAN. It is geared towards enterprises, allowing the employees to connect to corporate resources (SaaS apps, company data centers, offices, document stores) without having their network traffic transit over the open Internet. Rather, the enterprise connects their network to Cloudflare’s network at any one of their global PoPs. From the entry point, employee traffic is routed over Cloudflare’s network. The benefits to the enterprise are better performance and security.

      • Michael Orwin

        Thanks, Peter!

  5. Sachiv Mehta

    Amazing to re-read this after NET’s speedy fall in the past 3 weeks. Much easier to ignore the noise after delving into the product and development rollout strength of the company, as well as what to watch out for in the segment space. Thanks again for the great work!

  6. udit

    Hey Peter,
    I wanted to bring up the developments in SpaceX StarLink Satellites
    broadband internet system and how this relates to Cloudflare.

    “Starlink internet works by sending information through the vacuum of space, where it travels much faster than in fiber-optic cable and can reach far more people and places.
    Because Starlink satellites are in a low orbit, this enables Starlink to deliver services like online gaming that are usually not possible on other satellite broadband systems.”

    Google and Microsoft have already entered into partnerships with StarLink.

    Although Starlink is in Beta testing, this left me wondering how all this relates to and may affect Edge computing?

    a) Does Starlink become the new Edge.

    Space lasers would improve the Starlink network by allowing it to exchange data in between the satellites in orbit, rather than beaming it back-and-forth to the ground. SpaceX tested two of the Starlink satellites in orbit that are equipped with the intersatellite links.

    b) Does this have implications on some of the advantages of Cloudflare such as Smart routing?

    c) Does this open up the field for other players to enter now, such that Cloudflare may lose its competitive advantage of having a global network?

    Here is one tweet I found from the Cloudflare CEO roughly related to this https://bit.ly/3GmUDEf

    Appreciate your thoughts. Thanks.

    • poffringa

      Hi – I think it’s hard to say how StarLink might impact Cloudflare. The Tweet you provided seems to imply that Cloudflare would simply peer with StarLink and utilize their network, like their other peers. They might also place their PoPs on StarLink’s network infrastructure, assuming that is possible. Right now, StarLink appears to be a network, without providing compute, data storage or other security services layered over it. Until they do, they really wouldn’t be in a position to disrupt or compete with Cloudflare. They are worth watching though.