Investing analysis of the software companies that power next generation digital businesses

Cloudflare Serverless Week Unpacked

Cloudflare (NET) kicked off Serverless Week with a blog post from their CEO on Sunday, July 26. The event ran this past week and highlighted a number of enhancements to Cloudflare’s serverless edge compute product. These included reduced cold start times, additional language support, improved developer tooling and lower price points. They also unveiled a new offering called Workers Unbound, which removes prior restrictions on CPU usage to allow for long running processes. These are all very exciting for serverless edge compute adoption and represent a step up in capabilities for Cloudflare. In this post, I will dig into the changes announced and what these imply for usage of Cloudflare’s Workers product. I will also try to draw parallels to Fastly’s Compute@Edge offering, based on what we can glean thus far (still in beta). At a high level, this progress from Cloudflare provides further momentum for the migration of application processing out of central data centers (whether cloud or private) to the network edge. This trend should benefit both FSLY and NET, as leading independent providers of serverless edge compute solutions.

The Kick-off

Serverless Week ran from July 27 – 31. It was kicked off by a blog post on Sunday morning (July 26) from Cloudflare’s CEO. The post provided some philosophy around the CEO’s view of the evolution of the serverless market and how Cloudflare’s offerings fit into that. It also included teasers for most of the week’s announcements. I won’t delve deeply into the philosophy – readers can check out the blog post themselves. It provides the CEO’s perspective on the choices developers face when considering a serverless edge compute environment. He presents them as a hierarchy in which “Speed < Consistency < Cost < Ease of Use < Compliance.” My opinion is that these factors need to be weighed relative to the use case. For any technology decision, there are trade-offs and nuance. As a CTO, I would answer “it depends.” Investors can read the post and draw their own conclusions.

Regardless, acceptance of the philosophy has little bearing on the actual improvements Cloudflare announced, and there were many. First, the CEO highlighted some impressive stats on usage of Cloudflare’s existing Workers product, which represents their serverless offering. Over the last two years that Workers has been generally available, hundreds of thousands of developers have used it, including 20,000 new developers in the last quarter. The product now constitutes more than 10% of all requests on the Cloudflare network. Of large customers, approximately 20% are using Workers as part of their deployment.

The blog post set the stage for the announcement of a series of improvements to the Workers product line. The formal release of a new product offering, Workers Unbound, was announced in a press release and additional blog post on Monday morning. Workers Unbound is available as a private beta, which requires completion of a sign-up form and review by Cloudflare personnel for acceptance. In a segment on Cloudflare TV announcing the product later on July 27, the speakers mentioned that they are starting Workers Unbound as a private beta so that they can monitor the performance of the new capabilities and address any technical challenges that arise. They also plan to roll out changes to the support infrastructure around the new product, including updates to analytics reports, the alert system and profiling. They are targeting General Availability of Workers Unbound for later this year. In the meantime, Cloudflare will continue to support the existing Workers offering, which will be labelled Workers Bundled. On a separate Cloudflare TV segment on Thursday, the product marketing manager for Workers mentioned that they had received 700 beta program requests thus far, which is a promising sign. Finally, further blog posts during the week highlighted a few features that have already been rolled out across all Workers offerings, separate from the Unbound beta.

For a general background on Cloudflare, you can review a Hhhypergrowth article from February that provides an informative explanation of all of Cloudflare’s offerings.

Specific to Serverless Week, Cloudflare’s blog posts and Monday’s press release delivered a list of product announcements. As the week progressed, we received more information about each announcement, both through blog posts and Cloudflare TV segments. For interested investors, the Cloudflare Blog and Cloudflare TV provide useful sources of information about Cloudflare’s full product offering. I like the transparency and timeliness of their communication. Below, I provide a summary list of the announcements from Serverless Week and then will explore each one in detail.

  • Reduced Cold Start Times
  • Removed Runtime Execution Constraints
  • Lowered Costs
  • Additional Language Support
  • Security Enhancements
  • Tooling and Network Infrastructure
  • Data Localization

Reduced Cold Start Times

In the press release and the CEO’s blog post, Cloudflare announced that they have effectively eliminated cold start times for Workers. For readers not familiar with the concept of cold start times, this represents the time it takes for a serverless process to begin executing code from a clean memory state. For serverless solutions offered by AWS, GCP and Azure, this time can range from 100ms to several seconds depending on the type of container, language used, etc. This lag is generally too long to allow these serverless solutions to address requests in which a human is waiting on the response (synchronous); they are better suited for asynchronous workloads, like sending an email after a purchase transaction is complete.

Cloudflare’s prior incarnation of Workers, introduced in 2018, delivered a significant improvement over the cloud vendor solutions, achieving a 5ms cold start by using the V8 Engine. V8 was originally developed at Google as part of the Chrome browser to execute JavaScript code in isolation within each browser tab. This was extended to run on the server-side and packaged into V8 Isolates, which provide a secure sandbox environment for executing code. It offers a way to execute JavaScript and WebAssembly code in a runtime sandbox that is compact and fast. It accomplishes this by shedding the overhead of a full container and virtualized environment, the approach generally used by the cloud vendors. Cloudflare’s solution represented at least a 100x improvement in cold start times, allowing Cloudflare Workers to be used for synchronous workloads, as the faster start time made the processing delay less noticeable.

In this week’s press release and the CEO’s kick-off blog post for Serverless Week, Cloudflare claimed to have reduced the cold start time to “0 nanoseconds”. I am not sure why they included the nanoseconds time unit here. Perhaps a little marketing hyperbole.

So, this week, we’ll be announcing that Workers now supports zero nanosecond cold starts. Since, unless someone invents a time machine, it’s impossible to take less time than that, we’re confident that Workers now has the fastest cold starts of any serverless platform.

Cloudflare CEO, July 26, 2020

While this sounds amazing on the surface, and grabbed investor interest, there is a little more to the story. My initial assumption was that they rewrote their serverless compiler and runtime to optimize performance to the extreme, so that the cold start time could effectively be measured as 0 nanoseconds (perhaps in picoseconds and then rounded).

However, this was not the case. Workers Unbound still runs on V8 Isolates, so the cold start time is still 5ms. The Cloudflare team accomplished the cold start reduction by front-running the cold start process while the initial HTTPS connection (secure web connection) preceding it was being negotiated between the user’s client and the web server in Cloudflare’s POP. The system takes hints from the request header to determine the customer code that it will need to run, loads that in parallel into an Isolate and has it ready for the connection once the TLS negotiation is complete. This is a smart optimization that removes the cold start time from the end-to-end processing flow for a full request/response sequence, but it did not make the cold start itself take zero time. This was revealed in an interview with The Register.

“We did something pretty clever,” said Prince. “The first thing that has to happen when you connect is the TLS handshake. The very first request as part of the handshake, we use that as a hint there’s going to be a request. During the time that handshake happens, we pre-warm the Worker so it loads instantly. Unless someone invents a time machine, we don’t think anyone will have a faster start time.”

Cloudflare CEO, Interview with The Register, July 27, 2020

I realize this is a bit technical and probably semantic, but pre-warming the Worker does not mean that the cold start time has been optimized to 0 nanoseconds. It means that the V8 Isolate for the Worker is instantiated in advance of the request needing it. This sequence was confirmed in a subsequent Cloudflare blog post on July 30 from the engineer who worked on it.

With our newest optimization, when Cloudflare receives the first packet during TLS negotiation, the “ClientHello,” we hint the Workers runtime to eagerly load that hostname’s Worker. After the handshake is done, the Worker is warm and ready to receive requests. Since it only takes 5 milliseconds to load a Worker, and the average latency between a client and Cloudflare is more than that, the cold start is zero. The Worker starts executing code the moment the request is received from the client.

Cloudflare Blog, July 30, 2020

The blog post included a sequence diagram showing the steps in the new end-to-end process. I added my annotations in red.

Cloudflare Blog Post Image, Author’s Annotations

As you can see, there is still a cold start for the Worker, which I labelled as (A). It just isn’t noticeable as lag at the client’s browser, as that is still handling the TLS negotiation. Once the Worker has been loaded following the pre-warm, it sits in memory waiting for the actual request. I labelled this time as (B).

This distinction is important. While the end user would experience a reduction in overall end-to-end request processing time, the server on which the V8 Isolate runs will still need to incur the overhead of the Worker warm-up and the waiting period (A and B above). This has implications for scalability and hardware utilization. The Worker warm-up process would still require 5ms of processing from the CPU and occupy approximately 3MB of memory while the V8 Isolate waits for the actual request to arrive.
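The arithmetic behind the “zero” claim can be sketched as follows. If the ~5ms Worker load fully overlaps the TLS handshake (which typically takes one or more network round trips), the cold start visible to the client rounds down to zero, even though the server-side work still happens:

```javascript
// Sketch of the pre-warming arithmetic. If the Worker load overlaps the
// TLS handshake, only the portion of the load that outlasts the handshake
// is visible to the client as added latency. Figures are illustrative.
function visibleColdStartMs(workerLoadMs, tlsHandshakeMs) {
  return Math.max(0, workerLoadMs - tlsHandshakeMs);
}

visibleColdStartMs(5, 30); // typical client: handshake dominates, 0ms visible
visibleColdStartMs(5, 2);  // very-low-latency client: 3ms still visible
```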

Also, as highlighted in the blog post, this front-loading benefit has two constraints:

  • It works for secure traffic (HTTPS) only. Granted, this constitutes 95% of Cloudflare’s worker traffic, but is worth mentioning.
  • More importantly, since the system uses the host name as the hint, it would only work for Worker scripts tied directly to the web site’s base URL, like https://example.com. It couldn’t accommodate multiple scripts, each tied to their own distinct URL pattern. This is common in larger installations, where the customer wouldn’t want to pack all their processing code into a single script tied to the root host name. REST APIs, for example, break code up into distinct actions by URL resource, like https://example.com/api/product/getproductdetails or https://example.com/api/user/profile. That structure would break this pre-warming benefit. The blog post mentioned this caveat near the end.

For now, this is only available for Workers that are deployed to a “root” hostname like “example.com” and not specific paths like “example.com/path/to/something.” We plan to introduce more optimizations in the future that can preload specific paths.

Cloudflare Blog Post, July 30, 2020

In a Cloudflare TV segment on Thursday with the blog post’s author, he also confirmed the limitation of extended paths. He mentioned that they might be able to infer the full path in the future by examining traffic request history. However, that would be difficult for API requests, which are commonly used by Internet companies to expose back-end functionality for mobile apps and partners.
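The root cause of the limitation is what is visible at TLS time. The ClientHello exposes only the hostname (via SNI), not the URL path, so a routing table like the hypothetical one below can only be pre-warmed for its root entry:

```javascript
// Hypothetical routing table (names are illustrative). At ClientHello time
// only the SNI hostname is known, so path-scoped Workers cannot be resolved
// for pre-warming; only a root-hostname Worker can.
const routes = {
  'example.com': 'root-worker',
  'example.com/api/product/*': 'product-worker', // path invisible during TLS
  'example.com/api/user/*': 'user-worker',       // path invisible during TLS
};

function preWarmCandidate(sniHostname) {
  // An exact hostname match is the only hint available from the handshake.
  return routes[sniHostname] || null;
}

preWarmCandidate('example.com');      // 'root-worker' can be warmed
preWarmCandidate('shop.example.com'); // no root Worker registered: null
```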

Putting aside these details, this enhanced performance still compares very well with the serverless solutions from the big cloud vendors, which require several hundred ms or even full seconds for cold starts and have much larger memory footprints. It is possible the cloud vendors may try to duplicate this pre-warming optimization, but the benefit would be lower, as TLS negotiation generally doesn’t take long enough to hide their much longer cold start times.

For comparison, Fastly has published that their Lucet compiler and runtime can produce a cold start time of 35 microseconds on average. Also, the Lucet runtime requires just a few kilobytes of memory overhead. This was a consequence of Fastly’s decision to build their own runtime, optimized for speed and resource footprint. These values are about 100x smaller than comparable values for the V8 Engine. In theory, that means Fastly would be able to pack more Lucet runtimes onto each server in their POPs, resulting in more efficient hardware utilization and scalability. Additionally, some synchronous request types, like API stitching, would benefit from the reduction of response time.
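A back-of-envelope calculation shows why footprint matters for packing density. It uses the ~3MB-per-Isolate figure from the discussion above; the 64GB server and the exact per-Lucet figure are assumptions for illustration only:

```javascript
// Back-of-envelope density sketch. The ~3MB-per-Isolate figure comes from
// the discussion above; the 64GB server and ~32KB Lucet figure are
// assumed for illustration only.
function maxResidentInstances(serverRamGB, perInstanceKB) {
  return Math.floor((serverRamGB * 1024 * 1024) / perInstanceKB);
}

const isolatesPerServer = maxResidentInstances(64, 3 * 1024); // ~3MB each
const lucetsPerServer = maxResidentInstances(64, 32);         // ~32KB each
// Memory alone would allow roughly 100x more Lucet instances per server.
```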

Runtime Execution Constraints

The current incarnation of Workers (now called Workers Bundled) limits any running worker process to 50ms of CPU time. The reasoning was to ensure that a long-running worker process doesn’t block other workers queued up to execute. With the release of Workers Unbound, Cloudflare will gradually relax this constraint, with a goal of allowing up to 15 minutes of runtime. Cloudflare had previously removed this limit for some customers on request, but with Workers Unbound, they will make the capability available for all developer instances.

As an additional enhancement, Workers Unbound removes CPU throttling, which had previously been employed to run more worker processes in parallel. Each worker now gets full access to the CPU while it is running.

In the previously mentioned segment on Cloudflare TV announcing Workers Unbound on July 27, the lead engineer discussed how this change would be rolled out gradually in increments, so that they could observe usage behavior and the impact on overall performance of the POPs. This makes sense as the Cloudflare team would want to make sure that incoming Worker requests don’t get queued up waiting for server resources. They will also want to monitor the utilization of the host servers and determine if more capacity will be needed.

Once this extension is available, it does open up new applications for Cloudflare’s edge serverless offering. The lead engineer discussed several potential use cases, including machine learning, video processing, inference and analytics. These have heavier processing requirements, which wouldn’t normally be feasible in 50ms. Some of their existing customers have already addressed uses like these with the limit removed.

This is an exciting enhancement to the Cloudflare Worker platform, by expanding the potential use cases beyond quick running synchronous requests. As an example, for IoT workloads, Workers could be employed to pre-process large chunks of raw data in order to summarize it before forwarding to a permanent data store. Similarly, incoming video streams could be reformatted to reduce size locally and then sent to storage.
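As a concrete sketch of the IoT pre-processing idea (the function and data below are hypothetical), a longer-running Worker could reduce a large batch of raw sensor readings to a small digest before forwarding it to a permanent store:

```javascript
// Hypothetical edge pre-processing step: collapse a batch of raw sensor
// readings into a summary so only the digest travels to central storage.
function summarizeReadings(readings) {
  const sum = readings.reduce((acc, r) => acc + r, 0);
  return {
    count: readings.length,
    min: Math.min(...readings),
    max: Math.max(...readings),
    mean: sum / readings.length,
  };
}

summarizeReadings([12, 7, 19, 4]); // → { count: 4, min: 4, max: 19, mean: 10.5 }
```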

In comparison to Fastly, we don’t have information on how they are handling runtime limits in Compute@Edge. Most of the use cases they have discussed publicly thus far are focused on short-lived, synchronous web transactions, like API response stitching, personalization, content targeting, authentication, video manifest file rewrites, etc. These longer-running use cases may be something Fastly addresses in the future, or they may simply leave this segment of the market to other providers.

Lower Costs

Cloudflare announced some pretty significant cost savings for running workloads on Workers Unbound, relative to serverless solutions provided by the large cloud vendors. The Cloudflare team conducted experiments in which they ran a sample workload on AWS, Azure and GCP serverless products and then examined their bill. They compared this to what Workers Unbound would cost for the same workload. The savings are detailed below:

  • 75% less expensive than AWS Lambda@Edge
  • 24% less expensive than Microsoft Azure Functions
  • 52% less expensive than Google Cloud Functions

Also, Cloudflare customers are not charged for adjacent usage fees like DNS requests or API Gateway (AWS specific). For AWS, Cloudflare provided a detailed break-out of cost comparison between Workers Unbound, AWS Lambda and Lambda@Edge.

Pricing Comparison, Cloudflare Blog, July 2020
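The published percentages imply arithmetic of the following shape; the dollar figures here are placeholders, not actual vendor rates:

```javascript
// Savings calculation of the kind behind the published percentages.
// Dollar amounts are placeholders, not real vendor pricing.
function savingsPct(vendorCost, unboundCost) {
  return Math.round((1 - unboundCost / vendorCost) * 100);
}

savingsPct(100, 25); // a workload costing $100 elsewhere and $25 on Unbound: 75% savings
```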

In the interview with The Register on July 27, the Cloudflare CEO explained how they were able to achieve these lower costs relative to the cloud providers. These savings are not dependent on cutting margins; rather, they are the result of efficiency in resource utilization and operating costs.

Ninety per cent of the savings, said Prince, come from building a sandboxing platform based on Isolates that is more efficient with underlying computing resources than VMs or containers. The other 10 per cent, he said, comes from lower operating costs, a consequence of a symbiotic and mutually beneficial relationship with ISPs around the world that provide access to their data center infrastructure.

The Register, July 27, 2020

For comparison, we do not know what the costs of Fastly’s Compute@Edge product will be, so we can’t draw a direct conclusion. However, we do have a directional input based on the CEO’s comments regarding utilization. Given that Fastly’s Lucet runtime uses fewer server resources than V8 Isolates, it is possible Fastly could offer their service at a lower price, or at a similar price point with less CapEx spend required for POP hardware.

Language Support

An exciting aspect of Cloudflare’s new serverless capabilities is additional language support. As part of the announcements, Workers is adding the ability to run code written in Python, Scala, Kotlin, Reason and Dart. This is in addition to the prior languages of JavaScript, C, C++, Rust and Go.

Cloudflare Workers Language Support, Cloudflare Blog Post

Because of the use of the V8 Engine, Workers can run any language that can be compiled to JavaScript. This list is pretty extensive, including most popular developer languages. The dependency is the completeness and reliability of that language’s JavaScript compiler, many of which are open source. For example, Cloudflare was able to offer Scala support by leveraging the open source Scala.js compiler. To add a new language, the Cloudflare team would need to vet the compiler and incorporate it into their build tool, Wrangler. There are also some language-specific functions they would need to account for in each case. Finally, Cloudflare provides a basic template for developers to use to get started with a new language project.
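For a sense of what Workers code looks like, here is a minimal script in the service-worker style JavaScript syntax that Workers supports. The routing logic is factored into a plain function so it can run outside the Workers runtime; the route paths are hypothetical:

```javascript
// Minimal Worker-style script. The request handling is a plain function so
// it can be exercised outside the Workers runtime; route paths are made up.
function handleRequest(url) {
  const path = new URL(url).pathname;
  if (path === '/hello') {
    return { status: 200, body: 'Hello from the edge' };
  }
  return { status: 404, body: 'Not found' };
}

// Inside the Workers runtime, the handler is wired to traffic like this:
if (typeof addEventListener === 'function') {
  addEventListener('fetch', (event) => {
    const r = handleRequest(event.request.url);
    event.respondWith(new Response(r.body, { status: r.status }));
  });
}
```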

The addition of more languages extends the usability of the Workers platform to a broader set of developers. The average developer builds proficiency in a couple of programming languages. Learning a new one is possible and sometimes required in large engineering shops, but does take time. For smaller engineering teams or individual developers, they tend to centralize on just a few languages in building their applications. By supporting a large number of languages within the Workers environment, Cloudflare is expanding the appeal of the product.

To get a sense for the number of languages out there and their relative popularity, the industry analyst firm RedMonk recently published their 2020 Programming Language Rankings. This is determined through a combined measurement of activity in GitHub (number of projects) and Stack Overflow (amount of discussion). These provide a good proxy for popularity of a certain language based on engagement on those two resources.

RedMonk 2020 Programmer Languages Rankings, Author’s Annotations

I listed the Top 20 languages from RedMonk’s survey above and added a green box around those that Cloudflare Workers currently supports. As you can see, this represents good coverage. Also, some of the languages listed wouldn’t be relevant to port, like CSS, which is used for styling web pages. The objective is to reach critical mass in language coverage, so that at least one supported language falls within the couple that an average developer knows.

For comparison, Fastly currently supports fewer languages: WebAssembly itself, plus Rust, C and C++. This constraint is due to the use of WebAssembly as the core of the Lucet runtime. While JavaScript is the most popular language, Fastly has chosen not to support it directly due to performance limitations. Instead, Fastly is close to supporting AssemblyScript, a TypeScript-like language that compiles to WebAssembly, and Go is also being evaluated.

This surfaces one of the important trade-offs between the Cloudflare and Fastly serverless platforms, as a consequence of their design decisions. By building their runtime on V8, Cloudflare was able to quickly leverage that technology to rapidly roll out their Workers product. With this, they get access to all the languages with JavaScript compiler support. The downside is performance and resource utilization.

Fastly decided to forgo V8 and build their own compiler and runtime in Lucet based on WebAssembly to squeeze out more performance. They did this in collaboration with other open source organizations, like Mozilla and Red Hat, as part of the Bytecode Alliance. Lucet has been open-sourced. I think we can assume Fastly will build commercial services around it (open core model), like distributed state management, in their edge compute environment. As a result, Lucet can only run languages that compile to WebAssembly, which as we see, is a smaller set.

Related to language support specifically, Cloudflare and Fastly seem to be pursuing slightly diverging strategies. Cloudflare appears to be targeting all developers by making the Workers platform compatible with as many languages as possible. This would appeal to individual developers and small teams with limited resources that might not have developers proficient in multiple languages. Fastly’s approach seems targeted at enterprises, where a large engineering team would likely have a few developers dedicated to just writing code for the edge compute environment. In this case, they would be okay with the constraint of a specific set of highly performant languages, like Rust or AssemblyScript.

As an anecdotal data point, this seems to be acceptable to Shopify, at least based on a use case discussed by a development manager at a WebAssembly SF presentation in May 2020. Shopify is building an isolated code execution engine on their merchant platform which allows partners to create custom logic that extends basic Shopify merchant functionality. The Shopify team is referring to this capability as “synchronous extensibility” and sees broad use for the technology. This capability is built on the Lucet runtime and the code modules are written in AssemblyScript.

Security

The Workers platform has a variety of preventative measures designed to limit its vulnerability to security threats. This is a result of multitenancy, which means the platform allows multiple customers to run their code on the same server. Also, because edge compute requires a customer’s code to be run within any POP on a provider’s network, this problem is magnified. Centralized serverless providers might have the option to create a custom hosting environment for each customer. But for distributed serverless edge compute, this isn’t an option. Therefore, the risk of security vulnerabilities allowing for data exploits between customers is particularly acute. This problem exists for any edge compute provider, including both Cloudflare and Fastly.

Unfortunately, there is no silver bullet to eliminate the risk of a security vulnerability. Exploits have become extremely sophisticated, as the newest line of speculative execution attacks, like Spectre, has revealed. In this case, attackers use elaborate techniques to gain access to data caches on microprocessors, from which they can extract private data generated by the execution of adjacent code from another user.

Cloudflare published a lengthy blog post on July 29th from their lead architect detailing their security architecture and the methods they employ to mitigate the chance of a vulnerability being exploited by an attacker. These include process isolation, handling of API requests, environment patches, disabling certain commands, obfuscating timers and active monitoring. By running customer code on their servers in a controlled environment, Cloudflare can actively monitor for suspicious behavior and rapidly isolate a misbehaving process. Additionally, all source code uploaded to the platform is subject to review, which helps proactively identify potential hacks.
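One of the listed mitigations, timer obfuscation, can be sketched in simplified form. Speculative-execution attacks need a high-resolution clock to measure cache timing; coarsening the clock removes that signal. The approach and resolution below are illustrative, not Cloudflare’s exact implementation:

```javascript
// Simplified sketch of timer obfuscation (not Cloudflare's actual code).
// Returning time rounded to a coarse tick denies sandboxed code the
// sub-millisecond resolution that cache-timing attacks depend on.
function makeCoarseNow(resolutionMs) {
  return () => Math.floor(Date.now() / resolutionMs) * resolutionMs;
}

const coarseNow = makeCoarseNow(100);
// Two back-to-back reads can no longer resolve sub-100ms differences.
const first = coarseNow();
const second = coarseNow();
```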

For comparison, Fastly’s Compute@Edge platform is vulnerable to the same types of security exploits. They have taken very similar approaches to mitigating these risks, including isolation, disabling certain commands, timer obfuscation and monitoring. For interested investors, I suggest watching a few video presentations on the subject, including a recent one at NS1 in June 2020, the RSA Conference in February 2020 and Code Mesh LDN in February 2020.

I think a detailed review of these types of attacks and the mitigation techniques used by each company goes beyond the scope of this article. Both serverless platforms go to great lengths to ensure the integrity and security of their offerings, including engaging with third-party auditors. Because the breadth of possible exploits is unknown, as is how those exploits might apply to either architecture design or set of mitigation techniques, I don’t think it is productive to speculate on whether any serverless platform is truly secure.

Tooling and Network Infrastructure

Cloudflare announced several enhancements to help developers create their code, debug issues, rapidly distribute changes and manage capacity. The primary developer tool is Wrangler, which allows developers to provision, debug and deploy their Cloudflare Workers. Based on extensive feedback from users over several years, it has matured into a full-featured development environment.

Cloudflare recently launched Wrangler Dev, which provides a development environment that appears local to the developer, but runs on the edge network, so that there is no difference between results on the local environment and production. For observability, Cloudflare launched Workers Metrics earlier this year, which provides basic performance metrics for the worker and error notices. Developers can also tail production logs with “wrangler tail”, to see debugging messages as they occur.

Additionally, Workers Unbound manages auto-scaling on the developer’s behalf: more Worker instances come online automatically with increased request load. Other serverless providers require the user to manually set auto-scaling parameters; with Workers Unbound, the developer doesn’t need to worry about it.

Finally, code updates are distributed to the live Cloudflare network within 15 seconds globally. On other serverless platforms, like AWS Lambda@Edge, this can take up to 5 minutes. If the code change is addressing a bug or service issue, that delay can be costly. For scheduled deployments, the developer needs to verify the code is working in production, so a long delay creates idle time.

For comparison, Fastly provides a similar set of developer tools for Compute@Edge. The primary code editor and deployment manager is Terrarium, which is also hosted on Fastly’s network. This product is newer than Wrangler, so likely still has some rough edges. Fastly hasn’t provided detail on auto-scaling, although presumably, more requests would just be handled as they came into POPs, since each Lucet runtime is spun up and destroyed on each request. Regarding code distribution, Fastly has published that the global deploy time for code is 13 seconds on average. For observability, Fastly and Datadog recently presented how developers can utilize Datadog to monitor their Fastly edge performance.

Data Localization

Cloudflare announced on July 15 that their network had expanded to 206 POPs covering over 100 countries. In the CEO’s kick-off blog post, he talked about compliance and increasing requirements from countries worldwide for data sovereignty. That means a country wants its citizens’ data to be processed and stored locally in that country. This requirement poses a problem for centralized serverless providers, in that they would need to have a presence in each country and a mechanism for keeping the data local.

With its distributed serverless platform spanning over 100 countries, Cloudflare already has the foundation to support this requirement. Workers can process a user’s request in the POP located in that country. Any data generated could be kept in that POP as well. Cloudflare plans to build tools for developers to provide granular controls over data storage to address data sovereignty requirements in the future.

The Fastly network consists of 72 POPs, distributed across 26 countries. As an aside, readers shouldn’t focus on the number of POPs, as Fastly has discussed that more POPs isn’t necessarily better. They purposely built fewer, but larger, POPs to maximize cache hit rates. However, having a local presence in fewer countries could create a constraint for data locality requirements. While only a few jurisdictions (the EU, China, Brazil, India) currently have or are considering localization requirements, this will be a development to monitor.

Cloudflare and Fastly

In my commentary above about each Cloudflare feature enhancement, I tried to inject some comparison to how Fastly addresses the same areas. These kinds of technology comparisons can be very nuanced and subject to change, so investors shouldn’t nitpick through them in an effort to draw blanket conclusions. I think both companies are taking an interesting approach to delivering edge compute solutions and are well positioned to capitalize on what could be a large shift in compute processing to the edge. TechHQ published an article this month about edge computing with a particularly relevant stat.

Today, 90% of all data is created and processed inside traditional centralized data centers or clouds. That is beginning to change. According to Gartner, by 2025, 75% of data is going to be processed at the Edge.

Tech HQ, July 21, 2020

So, I think there will be a large market for both of these independent, nimble edge compute providers to pursue. With that said, in looking at their approaches to serverless edge compute thus far, there do seem to be a few themes emerging.

  • Target Customer Segment. Fastly appears to be focused on enterprise customers primarily, while Cloudflare is more broadly inclusive of individual developers and SMBs. This difference is evidenced by customer counts. Fastly has only paying customers and reported 1,837 total in Q1. Cloudflare allows both free and paid use and reported 2.8M total customers, which includes 13% of the Fortune 1,000. Cloudflare’s open philosophy was evidenced in one of the blog posts from Serverless Week about security: “We wanted Workers to be a platform that is accessible to everyone — not just big enterprise customers who can pay megabucks for it.”
  • Types of Workloads. By removing the constraints on Worker processing time with Workers Unbound, Cloudflare is allowing the product to address compute-heavy workloads that are longer running. Examples given were machine learning, analytics and video processing. As discussed above, this has implications for optimization of the delivery platform and the number of runtimes that can be squeezed onto each server. We don’t know how Fastly is addressing this in their Compute@Edge product, but I suspect they will initially focus on shorter-running, synchronous processes that complete while a human is waiting on the response. These include API stitching, authentication, personalization and video routing. Both approaches will likely have large markets.
  • Breadth of Adoption. Along the lines of target customer segment, Cloudflare is biased towards a broad offering that appeals to as many developers as possible. The primary indicator here is the number of languages supported for their serverless offering. Cloudflare’s design and the continued use of the V8 engine allow for any language that can be compiled to JavaScript. This includes many popular languages like Python, as well as JavaScript itself. This is a conscious trade-off of inclusiveness versus performance. Fastly thus far has drawn the line at languages that can be compiled to WebAssembly, and explained this philosophy in a blog post about supporting JavaScript. This limits the offering to a smaller set of specialized developers. On the surface, that sounds like an issue, but most large enterprise engineering organizations would have a few engineers dedicated to edge development who would be capable of doing that work in performant languages.
  • Alignment Against Cloud Vendors. Cloudflare’s commentary, particularly around pricing and performance, is clearly meant to distinguish them from the serverless offerings of the big cloud vendors. The blog posts and pricing comparisons target AWS Lambda directly and also provide parallels to Azure Functions and Google Cloud Functions. I will be interested to see how Fastly positions their product marketing for Compute@Edge relative to solutions from the cloud vendors. In the end, both are targeting the same spend that the cloud vendors would like to capture.
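To make the programming model concrete, here is a minimal sketch of what a Cloudflare Worker looks like in the Service Worker style JavaScript API: a handler intercepts each HTTP request at the edge POP and can respond directly, without a round-trip to an origin server. The route and response contents below are purely illustrative, not drawn from any of the announcements.

```javascript
// Routing logic kept as a pure function so it is easy to reason about
// (and test) outside of the Workers runtime. The paths are hypothetical.
function routeRequest(pathname) {
  if (pathname === '/api/hello') {
    return { status: 200, body: JSON.stringify({ message: 'hello from the edge' }) };
  }
  return { status: 404, body: 'not found' };
}

// In an actual Worker, the handler is registered like this. The `fetch`
// event and the `Response` type are provided by the Workers runtime;
// the guard lets this sketch also load in other environments.
if (typeof addEventListener === 'function' && typeof Response === 'function') {
  addEventListener('fetch', (event) => {
    const { pathname } = new URL(event.request.url);
    const { status, body } = routeRequest(pathname);
    event.respondWith(new Response(body, { status }));
  });
}
```

Because the handler runs inside a lightweight V8 isolate rather than a container, many such scripts can share a single process on each edge server, which is central to the cold-start and density claims discussed earlier.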

Investor Take-aways

With these announcements from Serverless Week, Cloudflare is demonstrating that they are serious about enabling software development at the edge. On the Q1 earnings call, a lot of attention was given to the new Cloudflare for Teams product, which provides security for enterprise workforces. I wondered if Cloudflare was pivoting towards a security focus. With this renewed momentum around serverless, they clearly want to compete in multiple segments.

I think these improvements to Cloudflare’s serverless offering raise the bar and reflect confidence in the market opportunity for edge compute. As mentioned, I think many new applications for distributed edge compute are emerging and this will become a large category of cloud spending over the next decade. Both Cloudflare and Fastly are fielding meaningful product offerings to leverage this trend. They will compete effectively with the cloud vendors for edge compute spend as more companies spin up workloads outside of central data centers.

I have covered Fastly’s emerging story since January and have an allocation of my personal portfolio invested in the name. I hadn’t initiated coverage of Cloudflare to date, as my tendency is towards companies with a significant developer product focus. A large portion of Cloudflare’s offerings appear to target enterprise security, particularly with the roll-out of Teams. That’s not an area I feel comfortable evaluating, as my background is in software development.

However, with this latest move by Cloudflare to double-down on serverless, my interest is piqued. I will continue to monitor their progress and may initiate a position in the future. In the meantime, I hope this article helped investors better understand the technical nuances of Cloudflare’s latest announcements in the serverless edge compute sphere.

12 Comments

  1. James Colucci

    Thank you, Peter. Your analysis is most appreciated.

  2. Simon

    First of all, thank you for making your analysis available to everyone. Very helpful. I noticed on the ZEN call that mid-size companies were a drag while large enterprise was more robust. Do you have any thoughts on how that may play out in the CDN space between FSLY and NET? Thank you.

    • poffringa

      Sure – no problem. On enterprise spend, TEAM made a similar observation. Hard to say how that translates directly to FSLY and NET. In theory, FSLY skews towards larger companies as customers, while NET generates revenue from the full spectrum (individuals, SMB, enterprise).

  3. Gal

    Hi Peter, thank you very much.
    One question if I may: would you please explain the function of containers and isolates and the difference between them? I read the post in the “isolates” link, but I have a poor tech background and it didn’t really help me. Many thanks.

    • poffringa

      Sure – I think a couple of videos provide a good explanation in context:
      Secure Sandboxing, Fastly at RSA, Feb 2020 (watch first 10 mins)
      WebAssembly on the Server, Cloudflare, Mar 2019

      • Gal

        much obliged

  4. Joe

    Thanks again Peter
    Great review

    Wonder your thoughts re comments from Akamai

    Tom Leighton — Chief Executive Officer

    Yes. We’re the largest provider of edge computing services by far. We have been doing it for close to 20 years. And the idea that this is somehow something new is just not true.

    Most of our customers are using our edge computing capabilities for a variety of applications to A/B testing for how users like their site, to do things locally about what content actually gets delivered to the user, what ads get delivered to the user, keeping track of how a user goes through a site. The security services use an extensive compute power at the edge. We don’t break it out as a separate revenue item, but if you use the definitions, we see that a lot of folks in the analyst community are using — I would say, already, it’s over a $2 billion business for us. We don’t report it that way.

    The suggestion is that this is all hype, while Fastly has already captured half of Akamai’s market cap.

    In a way this reminds me of Borders talking down Amazon as a competitor in 2003. His comments really give the impression that these new IPOs are just trying to create hype for something that Akamai has been doing forever without fanfare.

    • poffringa

      Hi – thanks for the feedback. I agree with you that, as the incumbent, Akamai would dismiss the opportunity for Cloudflare or Fastly in this space. I have used the Akamai platform in the past. While it could perform some of the basic routing logic he describes, it wasn’t inherently programmable. And, yes, by definition, to perform DDoS mitigation they need to conduct traffic analysis in their POPs, which you could call edge compute.

      If you look at their serverless edge compute solution, called EdgeWorkers, it is in beta. It appears to be built on the V8 Engine like Cloudflare, but isn’t as extensive in language support (just JS) or tooling.

  5. Miles

    Analysis with this breadth and depth is so hard to find. Another highly illuminating article. Thanks so much SSI.

  6. Trond

    Thank you for another excellent analysis and deep dive! Your articles like this are definitely top-notch and to me they seem to be unique in the whole investing universe, as they uniquely combine profound technical evaluation (*) with insightful views on business opportunities.

    I am sure that both FSLY and NET have great years ahead, but one particular aspect I am following with great interest is the selection of target customer segment, which you also brought up. While I don’t expect it to be a crucial factor determining their success, it reminds me of what we saw in the Linux market during the past decade: Canonical (the maker of Ubuntu) was offering something for everyone and gained a huge following everywhere, causing it to become the default platform for many projects and the initial platform for many emerging technologies. So they had lots of users and captured a great deal of mindshare, and this might be what happens with Cloudflare’s approach as well. However, despite Canonical’s (Ubuntu’s) popularity, their business wasn’t stellar, as Red Hat’s RHEL (Red Hat Enterprise Linux) came to dominate the enterprise market where the money is being made (individuals, hobbyists, and SMBs don’t help you grow into a multi-billion company). So Fastly’s customer target segment strategy is perhaps akin to Red Hat’s approach to the Linux market.

    I don’t think this comparison is in any way indicative of what will happen with FSLY and NET business-wise in the future (it’s only a small piece in a much larger puzzle), but to me this is interesting from both a technical and a developer mindshare & platform popularity point of view which, if it were to become a business-differentiating factor, would then be something to pay more attention to as an investor when future competing platforms emerge.

    Thanks again for the great writeup!

    *) I almost wrote “technical analysis” here which in the usual investing context discussion is probably as far from your technical evaluation as any alternative approach could be 🙂

  7. Max Margenau

    Thank you, Peter, for this rundown on NET. As thorough and insightful an analysis as ever.

  8. Niraj

    Thank you so much. I am truly happy I found you here.

    Appreciate your generous and deep insightful sharing.