Fastly (FSLY) has experienced an incredible run over the past several weeks. The share price has more than doubled since the company released Q1 earnings on May 6, primarily attributable to the major increase in guidance for Q2 and the rest of the year. There have been other surprises as well, like the observation that Amazon.com has been using Fastly POPs for content delivery. A lot of investor excitement is also pinned on the upcoming release of Compute@Edge, which represents a significant extension of Fastly’s current CDN offerings. Since the product is in beta, it isn’t clear how sizable the revenue impact will be. Given this, I thought it would be worth spending some time examining how Fastly has approached building new technologies in the past and what this might mean for their future edge compute offering. I also wanted to share my understanding of the technical underpinnings of the platform and how these differ from other serverless offerings currently on the market. Whether Fastly’s surge represents a one-time COVID-19-driven bump or the start of sustained long-term usage growth remains to be seen. With this information, investors can decide for themselves whether the Fastly story is hype or their edge compute platform represents the beginning of something fundamentally disruptive.
Fastly’s Roots and Bad CDNs
Since its founding, there has been something different about Fastly’s mindset. Ingrained in the team’s DNA is a drive to address hard technical problems with innovative, often more difficult solutions in order to create a better experience for their customers. This was applied to the CDN business originally and is now manifesting in the design of a fast, compact, secure, globally distributed serverless compute platform.
Fastly was founded by Artur Bergman in 2011, as a direct consequence of his frustration with existing CDN solutions while serving as CTO of Wikia, a popular hosting service for wikis. At the time, CDN offerings were dominated by entrenched players like Akamai (AKAM) and Limelight (LLNW), along with emerging CDN products from cloud vendors, such as Amazon CloudFront. As the story goes, Artur became frustrated with the capabilities of CDNs around 2010. He complained about needing technical support to make any adjustments to his CDN configuration and long delays while changes rolled out. Existing solutions lacked programmability, a trait highly valued by the engineering teams who were increasingly getting pulled into discussions about application performance and uptime through the DevOps movement. With a strong hands-on technical background, Artur decided that he could build a better solution and did so.
Out of this Fastly was born, with an intent to find better ways to enable modern software engineering organizations to optimize the delivery of their internet applications. This was grounded in an avoidance of the status quo and the easy path. Fastly’s Chief Product Architect recently said, “Fastly has a long history of looking at problems from first principles and being unafraid to undertake difficult projects if we know they will benefit our customers.” This approach reminds me of Eric Yuan at Zoom Video (ZM) and his journey to build a better video conferencing solution. Back in 2011, it would have been easy to question why Eric thought he could improve on existing video conferencing offerings from WebEx and Skype, in what many would consider a largely commoditized space. Yet, Zoom’s success can be attributed to a highly focused and passionate drive to deliver an incrementally better experience for their users. I posit that Fastly is doing the same for content delivery and distributed compute infrastructure, with a laser focus on what is better for the developer.
Applied to Network Design
While Fastly was initially getting off the ground as a cash-strapped start-up, they applied this approach to the design of their content delivery network. They could have taken the common path of deploying network and server hardware in a standard configuration at many POPs across the globe. To route traffic across the internet, they could have purchased expensive off-the-shelf routers. Instead, the team stepped back, examined the problems they were trying to solve and designed a different approach to their content delivery network. It didn’t hurt that they needed to preserve cash, which pushed the team towards addressing common functionality with their own custom software modules versus commercial products. This software-driven design has paid dividends many times over, giving the team the flexibility to evolve their network design as content delivery use cases change.
As the first problem, the Fastly team examined approaches to handling network traffic into and out of their POPs. The standard configuration was to purchase border routers with enough memory to store the entire internet routing table. Switches were less expensive, but didn’t offer the memory or compute necessary to handle full internet routing. In 2013, Arista Networks began selling a switch that allowed users to run their own software on it. This provided the best solution for the Fastly team. They proceeded to write their own distributed routing agent, called Silverton, which orchestrates route configuration within Fastly POPs. Silverton peers with the BGP daemon, BIRD, which interfaces with the outside internet. This combination allows Fastly’s customized switches to function like a far more expensive border router, saving hundreds of thousands of dollars for every POP they deployed. It also pushes external routing logic down to the host level, reducing load on the switches and providing more fine-grained control over traffic routing per user request. By custom designing Silverton, the Fastly team gained full control over routing management within the POP, allowing them to push network path selection up to the application level. This has subsequently been iterated upon to selectively override route selection for certain types of content and use cases, resulting in better content delivery performance than existing solutions that generally apply a single routing rule at the border router to all cases.
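To give a flavor of the moving parts here, below is a minimal BIRD configuration for a single BGP session. This is purely illustrative (a private ASN and documentation-range addresses, nothing from Fastly’s actual deployment, and Silverton itself isn’t public); an agent like Silverton would sit on top of sessions like this, deciding how learned routes get programmed onto hosts and switches.

```
# bird.conf -- minimal BGP peering sketch (illustrative only)
router id 192.0.2.10;

protocol device { }

# The prefix this POP originates
protocol static {
    route 198.51.100.0/24 reject;
}

# One upstream peering session; BIRD learns the internet routing
# table here, and a routing agent decides what to do with it
protocol bgp upstream1 {
    local as 64500;
    neighbor 203.0.113.1 as 64496;
    import all;                       # accept routes from the peer
    export where source = RTS_STATIC; # announce only our own prefix
}
```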
As the next problem, the Fastly team examined the assumption that having many small POPs geographically dispersed across the globe results in the best overall delivery times for user content. This was the approach taken by legacy CDNs, who had invested in hundreds or thousands of small POPs, spread down to the city level. This “strength in numbers” philosophy is still presented as an advantage today by legacy players, as it is easy to make the argument that more POPs are better for performance. Actually, this is not generally the case, as the amount of content that can be cached at each small POP is limited. So, a significant percentage of user requests will need to be sent back to the origin servers, which increases access times for those requests by a factor of 10x or more. For example, if the cache miss percentage increases by just 10 percentage points, it can raise overall average access times by 50% or more: with a hypothetical 20ms cache hit and a 200ms origin round trip, moving from a 5% to a 15% miss rate pushes the average response time from 29ms to 47ms, over 60% slower. Fastly realized that having fewer POPs with much more storage space would actually result in faster average delivery times. In a blog post from 2016, one of Fastly’s co-founders gave the analogy of convenience stores versus supermarkets. While a convenience store is generally closer to a person’s home, it has a limited set of items for sale. If the person drives a few more miles to the supermarket, they can get all of their groceries in one trip. In this analogy, convenience stores represent the approach taken by the legacy CDNs with many local POPs, and the supermarkets represent Fastly’s approach with fewer, larger ones.
This is why Fastly still has a limited number of POPs across the globe (72 total as of June), versus thousands advertised by competitors. Fastly POPs are thoughtfully placed at network crossroads, where they provide proximity to geographic regions, but have the storage capacity to enable much higher hit ratios for user requests.
Additionally, Fastly chose to utilize SSDs (relatively new at the time) to store the cached data. SSDs are more expensive than standard hard drives, but offer dramatically faster retrieval times. Based on Fastly’s research, a typical hard drive could perform approximately 450 IOPS when reading and 300-400 IOPS when writing (in a test with 4KB files). The SSDs Fastly used, however, executed somewhere in the region of 75,000 IOPS reading and 11,500 writing. At 4KB per operation, that is roughly the difference between 1.8 MB/s and 300 MB/s of random reads. This design choice further reduced Fastly’s average response times by making data retrieval within the POPs extremely fast (furthering the supermarket analogy, like having all your groceries sitting at the front door). At the time, each server in a POP had 384 GB of RAM and 6 TB of SSD space (made up of 12x 500 GB drives), and each CPU had 25 MB of L3 cache. A typical POP had 32 machines spec’ed like this.
But Fastly didn’t stop there. They also wrote a custom storage engine for their POP servers that bypasses the file system to squeeze every last drop of performance out of the SSDs, cramming as much data onto them as possible. They employ various algorithms to keep commonly used data in the 384 GB of RAM, making retrieval even faster. For some assets, such as “Like” or “Share” buttons that never change and are requested millions of times a second, they go even further by serving them directly out of the processor’s L3 cache.
As a third problem with existing CDNs, Fastly examined the experience for developers and engineers within customer organizations who were responsible for delivering the application. These individuals were frustrated by a lack of tools and controls to manage changes to the content they were delivering. Common complaints were that legacy providers required technical support personnel to roll out changes and that purging content could take hours. Additionally, customers had limited ability to programmatically cache dynamic content or run custom logic on their CDN’s edge nodes to evaluate user requests. In response to this, Fastly added programmability to content control through the Varnish Configuration Language (VCL). Varnish is an open source web accelerator designed for high performance delivery. Fastly uses a customized version of Varnish 2.1, and Fastly engineers have continued to contribute to the general open source project.
Fastly’s customization of Varnish extended the basic capabilities to work on Fastly’s global distribution network. It powers useful features for users, such as instant purge of content (which is necessary to enable caching of dynamic content), reverse proxies, real-time performance monitoring and custom cache policy definitions. Customers can create their own VCL scripts, upload them to Fastly’s POPs and then activate/deactivate them on the fly. This gives customers fine-grained control over the deployment of their custom caching rules. This is particularly important for production releases, as mistakes need to be rolled back quickly. Competitive offerings at the time required maintenance windows and long roll-out/roll-back periods. If a configuration contained a bug, it could take hours for the fix to roll out. I experienced this pain personally on legacy CDNs from 2005-2010, while managing software engineering for CNET.com and CBSNews.com.
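To make this concrete, here is a small sketch of the kind of VCL a customer might write. It is a generic illustration (the paths and key name are made up, not from any actual customer) of two common patterns: bypassing the cache for personalized API calls, and tagging responses with a surrogate key so a whole group of objects can be purged instantly.

```
sub vcl_recv {
  # Personalized API responses should never be served from cache
  if (req.url ~ "^/api/") {
    return(pass);
  }
}

sub vcl_fetch {
  # Tag article pages with a surrogate key; purging the "articles" key
  # later invalidates every tagged object across all POPs at once
  if (req.url ~ "^/articles/") {
    set beresp.http.Surrogate-Key = "articles";
    set beresp.ttl = 1h;
  }
}
```

Because scripts like this can be uploaded and activated in seconds, a bad rule can be rolled back just as quickly, which is exactly the pain point the legacy providers failed to address.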
I realize this commentary is a bit technical for some investors. The point is to illustrate the general approach to every design problem taken by the Fastly team. As software engineers at heart and CDN outsiders, they customized almost all aspects of their content delivery solution to be programmable, performant and cost efficient. It is this approach that helped Fastly differentiate themselves from competitors at the time and disrupt the legacy CDN industry.
This strategy appears to be working. Fastly has accumulated a list of customers that represents the most technically savvy, discerning buyers at internet-first companies that value strong engineering and high performance. These include Shopify, Stripe, Spotify, Slack, Twitter, Pinterest, GitHub, etc. Within these companies, software engineers work closely with their counterparts at Fastly to push the envelope on optimal solutions for super fast delivery.
This raises a question for investors. Are CDN services at risk of becoming a commodity? I would posit that for some use cases they aren’t. There is still room for differentiation based on features and performance. In Fastly’s case, their customized network, POP design and programmability ensure their solutions are preferred by digital-first companies. I think this flight to quality will continue, particularly as software-driven customer experiences become a more critical competitive advantage for every enterprise with a consumer-oriented product. Fastly continues to innovate in this space and provides reasons for software engineers to prefer their solution. As Dan Rayburn of Frost & Sullivan recently wrote, “Customers will pay for a better level of performance.”
Distributed Compute
Okay, enough about legacy CDN. While there will continue to be a large market for smart CDN capabilities, there is an equally large opportunity in enabling fast, secure distributed compute. Fastly offers programmability within the context of content management at the edge through the Varnish Configuration Language. But VCL isn’t a general-purpose programming language. It lacks basic constructs like loops, pointers, etc.
However, a few years ago, Fastly saw another opportunity to enable a full-featured development environment within its POP infrastructure. This would allow developers to create autonomous logic to supplement their core applications. As Fastly is often the entry point for user requests to its customers’ infrastructure (in order to handle intelligent content delivery and traffic routing), the Fastly team sees potential to improve processing capabilities at these entry points on the “edge” and reduce the need to perform all logic in centralized data centers.
As an example, Khan Academy published a post on their engineering blog in May 2020, describing how they were able to rapidly scale to 2.5x usage over a two-week period in March. The author describes how all client requests are passed through Fastly first, in order to improve performance and reduce traffic sent to the back-end GCP compute infrastructure in the origin data center. Besides caching static content with Fastly, Khan Academy also extensively caches common queries, user preferences and session data. This all speeds up the user experience by eliminating the round trip back to centralized servers.
Fastly’s bet, which we would assume is informed by customer requests, is that more application logic could be performed in Fastly’s distributed POPs. This would have the benefit of proximity to users and be much more responsive. It also has security benefits in certain cases. If this trend plays forward, increasing slices of data processing will relocate from centralized data centers to distributed compute platforms like Fastly’s.
Fastly’s solution for this distributed compute is called Compute@Edge and was announced in November 2019. Compute@Edge is currently in beta with a controlled set of customers, likely including a subset of the ones listed above. The CEO gave an update on Compute@Edge during the Baird conference call in June, discussing a number of use cases from beta customers. Personalization is a common theme, where user authentication can be performed at the edge and authorized content can be quickly assembled and returned. He also highlighted other opportunities around machine learning inference, IoT data stream summarization and potentially gaming. He mentioned some interesting concepts around applying Compute@Edge to security use cases, surfaced by customers, that Fastly hadn’t considered. The plan is to continue the beta program through the remainder of 2020 and then transition to GA next year. At that point, Compute@Edge would begin generating revenue.
I think the label of edge compute has confused some investors and led to many interpretations of where the edge is located. I agree this isn’t clear on the surface, and Fastly may be purposely leaving it open-ended. At the Velocity Conference in October 2017, the CTO of Fastly gave the keynote, entitled “Edge Compute – The Missing Pieces”. He started his talk by saying the term “edge compute” is a misnomer. It’s actually about pushing logic that has traditionally been performed at the origin out to the “branches and leaves” of the network. More specifically, traditional cloud-based applications centralize logic in a data center, which communicates directly with the user’s device, regardless of where the user is located geographically. Granted, some static content can be delivered locally by a CDN, but actual business logic, and even cached data storage, is typically handled at the data center. Between the data center and the user’s device are numerous network hops. Fastly theorizes that some of the application logic could be pushed out of the data center and processed closer to the user’s device.
Given this vision, he stated that “edge compute” is really about providing a development environment for enabling large-scale, coordination-free distributed systems. These can run autonomously, anywhere outside of the origin. Providing the development platform to enable this type of distributed compute is Fastly’s vision for Compute@Edge. This goes far beyond traditional CDN. Building an environment that enables fast, secure, distributed compute at high scale anywhere on the globe is a much harder problem.
Because this distributed logic runs outside of the data center and is generally processed while a human is waiting on the other end, it must meet the following goals:
- Fast. Edge compute would be a step in the synchronous processing flow from the human user to the origin. Any processing that edge compute injects must be lightning fast, so that it doesn’t add noticeable delay to the response.
- Isolated. Due to the distributed nature of edge compute, logic for multiple customer applications runs in parallel. It would not be feasible or cost efficient to dedicate hardware to each customer’s usage. Therefore, every request must be processed in a completely isolated sandbox, with no memory sharing or residue. The majority of security vulnerabilities (like Spectre or Heartbleed) rely on reading memory that should have been off-limits.
- Compact. In order to minimize cost and maximize scale, it is desirable to run as many of these isolated sandboxes as possible on a single physical machine. Therefore, their compute and memory footprint should be as small as possible.
- Developer Friendly. While the requirements above are absolutes, in order to be usable, the solution should mirror common development frameworks and practices. A subset of popular programming languages should be supported. A mechanism for storing data should be provided. Finally, standard tools for testing, deployment and application monitoring should be available. Observability is a major requirement, as issue troubleshooting is compounded by having edge compute logic in the control flow. Is the bug or latency generated in the edge compute code or at the origin?
Early on, the Fastly team decided that the best way to address the requirements above was by adopting the serverless model. This means that Fastly is not continuously running servers with customer code loaded up in a web container waiting for user requests. Serverless is a bit of a misnomer. A server still processes code in response to a user request. It’s just that the runtime is not activated (in the ideal case) until the user request comes in, versus the normal model of keeping a server running continuously with active threads waiting for incoming requests.
However, serverless was originally intended to handle asynchronous workloads. These are jobs that run when a human isn’t waiting on the response and can be kicked off after the user’s request has been completed. Examples are generating a confirmation email for a purchase or updating state in multiple downstream systems. Because of this context, most serverless solutions spin up a virtualized container with the full runtime environment to process the first serverless request and then persist that to handle any subsequent requests.
However, this approach won’t work for distributed, synchronous, shared processing. The primary issues that the Fastly team identified are listed below.
- Cold Start Speed. The time required to spin up a full virtualized container is generally over 100ms. If processing requires this long to start (putting aside time to run the logic), a human on the other end will begin to notice the lag. This is why serverless approaches were traditionally not applied to synchronous processing.
- Security. The vast majority of security vulnerabilities involve memory sharing. This means that the active processing thread tries to access memory space outside of what is allowed. This has been the basis for most exploits, like Spectre. In order to circumvent cold start times, many serverless providers will keep the same runtime container up to handle subsequent requests. This can leave active memory or residue that an exploit can access. For example, PII from the previous user request might still be in memory values on the next request.
- Resource Overhead. In order to provide the virtualized container or native code runtime, most serverless providers require a large memory footprint for the environment. This limits the number of runtimes that can be hosted on a physical machine, and encourages re-use (which again creates security risks).
As Fastly thought about designing their edge compute solution, they applied similar discipline to maximize performance, security and developer experience, even if that required more engineering work. True to form, Fastly didn’t take the standard path to address these issues. Rather than relying on existing technologies for serverless compute, like re-usable containers, Fastly decided to leverage WebAssembly (Wasm), a portable compilation target that lets code written in several popular languages run very fast on the target computer architecture.
WebAssembly was spawned as a browser technology and has recently been popularized for server-side processing. The common method of compiling and running WebAssembly is to use the Chromium V8 engine. This results in much faster cold start times than full virtualized containers, but they are still limited to about 3-5 milliseconds. It also has a smaller, but non-trivial, memory footprint. Considering this approach still too slow, Fastly decided to build their own compiler and runtime, optimized for performance, security and compactness.
This resulted in the Lucet compiler and runtime, which Fastly has been working on behind the scenes since 2017. Lucet compiles WebAssembly to fast, efficient native assembly code, and enforces safety and security. Fastly open sourced the code and invites input from the community. This was done in collaboration with the Bytecode Alliance, an open source community dedicated to creating secure software foundations, building on standards such as WebAssembly. Founding members of the Bytecode Alliance are Mozilla, Red Hat, Intel and Fastly. Lucet consumes standard WebAssembly, but compiles it with Fastly’s own purpose-built compiler. It also includes a heavily optimized, stripped-down runtime environment, on which the Fastly team spends the majority of their development cycles. (A sketch of the compile-and-run workflow follows the list below.)
With Lucet, Fastly’s beta solution addresses the three issues with serverless processing listed above.
- Speed. Cold start times of 35 microseconds have been reported. This is roughly 100 times faster than other approaches, like the V8 engine, which requires 3-5 milliseconds (3,000 to 5,000 microseconds) to start.
- Security. Because of super fast cold start times, each request is processed in an isolated, self-contained memory space; there is only one request per runtime. The compiler also ensures that the code’s memory access is restricted to a well-defined structure. Lucet’s deliberate memory management improves speed as well, since memory locations are fixed, removing the overhead of indirect look-ups.
- Compact. Using WebAssembly, the runtime has been stripped down to a tiny footprint. The Fastly team deliberately left out any capability that wasn’t needed. In this state, the runtime resembles a minimal operating system kernel and is packaged as a compiled code module that can be swapped onto the processor stack in one operation. This minimizes start-up time and allows thousands of runtimes to spin up in parallel on a single server. The Lucet runtime occupies just a few kilobytes of memory, versus at least 3MB for the V8 engine.
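For a tangible sense of the workflow, here is roughly what compiling and running a guest program under the open-source Lucet toolchain looked like, per the project’s published instructions (the command names are from the Lucet repository at the time and may have since changed):

```rust
// hello.rs -- a trivial guest program destined for the Lucet runtime
fn main() {
    println!("Hello from a Lucet sandbox!");
}

// Build and run (shell commands, shown here as comments):
//   rustup target add wasm32-wasi
//   rustc hello.rs --target wasm32-wasi -o hello.wasm
//   lucetc-wasi hello.wasm -o hello.so   # ahead-of-time compile Wasm to native code
//   lucet-wasi hello.so                  # execute inside the sandboxed runtime
```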
Beyond the core compiler and runtime enabled by Lucet for fast, compact, secure serverless compute, the Fastly team is working on other supporting functionality to round out the usability of Compute@Edge for developers and to broaden the set of addressable use cases. Several of these tools are included as part of Fastly Labs.
- Developer Tools. Fastly provides Terrarium, which is a code editor and deployment management tool. It supports code creation, testing and production distribution.
- Language Support. Lucet currently supports Rust, TypeScript, C and C++. The team recently announced intent to support AssemblyScript. They also published a blog post explaining why they aren’t immediately supporting JavaScript, as it would compromise the speed requirement. However, they plan to provide support for many of the features of JavaScript through the AssemblyScript implementation. The Fastly team will continue to add more languages in the future, but will hold the line on runtime speed and security. (A sketch of a Rust handler follows this list.)
- Operating System Interface. While Lucet runs WebAssembly code in a secure sandbox, there are potential use cases that would benefit from having access to system resources. These might include files, the filesystem, sockets, and more. Lucet supports the WebAssembly System Interface (WASI) — a new proposed standard for safely exposing low-level interfaces to operating system facilities. The Lucet team has partnered with Mozilla on the design, implementation, and standardization of this system interface.
- Observability. Once distributed compute code is running in production on Fastly POPs, having the ability to monitor the performance of the environment is necessary. This allows DevOps personnel to quickly troubleshoot issues, particularly where it isn’t clear whether the problem is introduced on the edge or in the origin. Fastly recently announced full support for granular metrics, logging and tracing of their serverless compute runtimes. Logs can be streamed to a number of log analysis solutions, like Datadog, Splunk and Elasticsearch. Or, a developer can access log entries in real-time from their CLI. For tracing, the Fastly runtime honors tracing parameter formats and makes those available to third-party monitoring solutions. Interestingly, the press release included a quote from a VP at Datadog highlighting their partnership.
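Pulling these pieces together, a Compute@Edge request handler written in Rust looks roughly like the sketch below. This follows the shape of Fastly’s published Rust SDK (the fastly crate); the beta-era API was still evolving, so treat the exact names, and the backend label "origin_0", as illustrative.

```rust
use fastly::http::{Method, StatusCode};
use fastly::{Error, Request, Response};

// Entry point, invoked once per request in a fresh, isolated sandbox
#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
    match (req.get_method(), req.get_path()) {
        // Answer simple requests entirely at the edge, no origin round trip
        (&Method::GET, "/hello") => Ok(Response::from_status(StatusCode::OK)
            .with_body_text_plain("Hello from the edge!\n")),
        // Forward everything else to a configured origin backend
        _ => Ok(req.send("origin_0")?),
    }
}
```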
Also of note is the acquisition of Tesuto, announced on June 17. Tesuto is a virtual network emulation platform, which allows users to clone any network and test planned configuration changes in a controlled sandbox. This will be applied to Fastly’s platform to facilitate automated testing and deployment of network configuration changes, helping users avoid unnecessary downtime. It adds another capability in support of fast, secure distributed compute, by extending programmability and testing to the network configuration itself.
Another aspect of the Tesuto acquisition that is noteworthy is the addition of network engineering talent to the Fastly team. Since their founding, Fastly has been attracting thought leaders in language design, networking and compute infrastructure, who often sit on the standards boards that are mapping out future protocols for internet technologies.
Before co-founding Tesuto, Jay Sakata co-founded EdgeCast Networks, a content delivery network that provided web accelerations for some of the world’s most demanding web properties and was acquired by Verizon in 2013. Fellow Tesuto co-founder Chris Bradley worked alongside Sakata at EdgeCast Networks as the principal engineer and brings more than 20 years of experience building and managing network-focused applications, with a special focus on anti-DDoS software. Tesuto’s third co-founder Hossein Lotfi worked on Google’s data center fabrics and SD-WAN before co-founding Tesuto.
Fastly Press Release, June 17, 2020
With their advanced coding and runtime environment in place, Fastly will be able to address a number of synchronous processing use cases anywhere between the user and the central data center. As mentioned above, during the Baird conference, the CEO discussed a few interesting use cases from beta customers. A recent Fastly user survey revealed that 64% of beta program participants already have an application in mind for Compute@Edge. In the observability press release, Fastly highlights a few use cases “that range from content transformation at the edge, to identity enforcement, to enterprise-wide data loss prevention and data protection.” One beta customer discussed is RVU, the UK’s leading resource for comparison sites and apps in home and financial services, such as Uswitch, Money, and Bankrate.
One of the most exciting uses of this WebAssembly-based technology is an example I saw from Shopify. A development manager presented in May 2020 to WebAssembly SF, a San Francisco user group for Wasm. He described how Shopify is planning to allow partners and merchants to create custom code modules that can run quickly and safely on Shopify’s core platform, providing unique functionality not included in Shopify’s standard set of merchant configurations. The example given is the real-time calculation of a product discount, based on custom logic defined by the merchant. In this case, the Shopify platform offers merchants a basic set of discount logic, like “buy 1, get 1 free”, but doesn’t offer more granular discounting rules that each merchant might want to customize. The new code/runtime environment allows the partner to create their own discount rules and run them within the Shopify environment in a fast, compact, secure manner. Here, the code is written in AssemblyScript and then compiled and loaded into a compact runtime using Lucet, the technology created by Fastly. See the example below of a demo merchant store, in which the discount rule is applied on the fly while rendering the check-out page.
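To illustrate the shape of such a module, here is a hypothetical discount function. The Shopify demo used AssemblyScript; Rust is shown here for consistency with the earlier examples, and the function name and calling convention are invented for illustration. Compiled to Wasm, the host platform would call the exported function for each line item, entirely inside the sandbox.

```rust
// Hypothetical merchant discount module, compiled to WebAssembly
// (e.g., with --target wasm32-unknown-unknown and --crate-type cdylib).
// All names here are illustrative, not Shopify's actual interface.

#[no_mangle]
pub extern "C" fn discount_cents(unit_price_cents: u64, quantity: u64) -> u64 {
    let subtotal = unit_price_cents * quantity;
    // Custom merchant rule: 20% off when buying three or more of an item
    if quantity >= 3 {
        subtotal / 5
    } else {
        0
    }
}
```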
The development manager mentioned that they evaluated other serverless runtimes for this project, including V8 engine, Google Cloud Functions and AWS Lambda, but none met their requirements for speed. With Lucet, he said they can process 1,000 requests / second on a single worker. He is “super thrilled with the performance.”
Traditionally, Shopify merchants and app partners use APIs and their own hosting environments to deliver content into the Shopify e-commerce flow. This can result in a disjointed user experience. The new capability allows these partners to seamlessly inject their own logic directly into the shopping experience within a merchant’s store. The Shopify team refers to this capability as “synchronous extensibility” and sees broad use for the technology. If this injected code runs on every product page or check-out request, it could generate a lot of traffic. In fact, their long-term goal with the technology is to move from enabling discrete functions to full-featured applications that run safely within the Shopify platform.
It’s not clear if Shopify is enabling this capability as part of Fastly’s Compute@Edge beta program or repurposing Fastly’s open-source Lucet compiler and runtime. Regardless, this use case provides evidence for the versatility of the solution Fastly is building. If we consider every type of online platform that offers extensibility through third-party apps and APIs, this approach to customization would achieve the same result but in a more responsive, secure and seamless way.
With all that said, there are some risks to Fastly’s investment in this technology. While the use cases are exciting for engineers, the market size for this solution is unproven. Fastly has first mover advantage, but it’s likely that other providers will move quickly to duplicate the approach. Even so, what Fastly is doing with Compute@Edge represents a fresh and innovative approach to serverless compute, one that dramatically expands its use cases by allowing it to address synchronous requests.
Other Considerations
Cloud Vendor Competition
One risk to the Fastly investment thesis is that the cloud vendors will try to dominate distributed serverless edge compute, as they have with origin compute. Amazon, Microsoft and Google certainly have ample resources, and they already offer serverless products. Yet Fastly’s purpose-built serverless compute runtime is both faster and more secure. This isn’t to say that the cloud vendors couldn’t revamp their serverless solutions and borrow some of the approaches used by Fastly (and they likely will), but for the time being Fastly is ahead in this area.
Also, the desire to use a cloud-neutral provider for multiple software services categories is being embraced by enterprises trying to avoid cloud vendor lock-in. Most large IT organizations are pursuing a multi-cloud strategy. Commentary from other independent software company CEOs, like those at Okta, Datadog, MongoDB, Twilio and Elastic, continues to reinforce the notion that customers prefer a best-of-breed, cloud-neutral solution where available. For Fastly, this means providing content delivery and distributed serverless infrastructure to customers who host in multiple data centers. Fastly’s solutions are designed to be autonomous, meaning they have no dependency on an origin server or a particular cloud vendor’s implementation in order to function.
Other Independents
Of other companies offering CDN and distributed serverless compute solutions, Cloudflare’s Workers product is the most progressive and is already available in the market. They are big supporters of Wasm and often present alongside Fastly at conferences and meet-ups, like WebAssembly SF. Currently, Cloudflare utilizes the V8 engine to power Cloudflare Workers, their serverless edge compute solution. As discussed above, V8 isolates represent a big improvement over cloud vendor solutions, with cold start times in the 3-5ms range, but are slower than Fastly’s published 35 microsecond start-up for the Lucet runtime. From a security point of view, Cloudflare’s V8 isolates also improve on the persisted containers used by the cloud vendors, though memory is more discretely provisioned and controlled in Fastly’s solution. Finally, the memory footprint for V8 isolates is about 3MB, versus several kilobytes for Lucet.
Given that Cloudflare has a talented team and is well aware of Fastly’s work in serverless, I would expect them to roll out a comparable solution at some point in the future. Cloudflare and Fastly will likely emerge as the independent leaders in enabling distributed serverless compute. Akamai and Limelight also offer serverless solutions, but these lag both Cloudflare and Fastly on the criteria discussed above.
Investor Take-aways
While Fastly is benefiting from investor excitement currently, there are no guarantees around the eventual size of the market for Fastly’s Compute@Edge solution. Cutting-edge technology innovation doesn’t always translate into market penetration and competitive advantage. My purpose here is to explain why I think Fastly’s fundamental approach to addressing long-standing challenges in CDN and distributed serverless infrastructure is unique. My investment thesis is that, like Zoom, Fastly’s intense focus on a limited set of difficult problems, with the goal of creating noticeably better outcomes for developers and DevOps teams, will ultimately drive investment returns over the long term. Personally, I think Fastly is fundamentally bringing a better architecture to distributed, serverless compute outside of the data center. While nascent, similar to Shopify’s planned usage, I can think of many future applications for this technology. Hence, I am bullish on FSLY and expect continued price appreciation over the long term.
Personally, I have a large portion of my portfolio allocated to FSLY. This started earlier in the year and I added just after Q1 earnings for a current cost basis of $32. I formally recommended the stock on May 19, when it was trading for $38.50, and set a five year price target of $155. My original allocation has grown significantly due to price appreciation. I plan to maintain an oversized portion of my portfolio allocated to FSLY, with some periodic trimming to maintain parity with other holdings. While I will remain in FSLY, investors can draw their own conclusions regarding an investment at this point. Hopefully, this article has at least provided a better understanding of their technology foundation and approach to engineering problems.
As a postscript, here is a full set of videos that might help investors with the Fastly story and other offerings. Some were referenced above, but this provides a complete list.
- Edge Compute: The Missing Pieces, Velocity Conf Keynote, Fastly CTO, Nov 2017
- Lucet: Safe WebAssembly Outside the Browser, Fastly CTO, Feb 2020
- Rust, WebAssembly and the Future of Serverless, Cloudflare engineer, Jan 2020
- Beyond the Browser with Serverless WebAssembly, WebAssembly SF, Invitae engineer and author, Nov 2019
- Building and Scaling the Fastly Network, Fastly Altitude, Fastly Network Lead, 2015
- Secure Sandboxing in a Post-Spectre World, RSA Conference, Fastly CTO and Security Architect, Feb 2020
- Infinite Parallel Universes: State at the Edge, QCon, Fastly Tech Lead and author of Go kit, April 2020
- Making Commerce Extensible with WebAssembly, WebAssembly SF, Shopify Development Manager, May 2020
- Evolving WASI with code generation, WebAssembly SF, Fastly Engineer, Dec 2019
- WebAssembly on the Server, WebAssembly SF, Cloudflare Lead, Mar 2019
Very technical but helpful for a layman.
Amazing story of FSLY, now I understand so much better. Thank you for taking the time and going into such details so we all can invest with a better education. Much appreciated.
Thanks for the detailed deep dive. Very very informative and explains it very lucidly.
My concern is the moat that Fastly has for the Compute@Edge platform. Both Lucet and Terrarium, which Fastly developed under the hood, are open source under the Apache License. So it would imply any of their competitors, large and small, can integrate the same technologies with their offerings. Of course, other advantages remain: their superior POPs based on modern architecture, cloud agnosticism, first mover advantage, integration with observability tools (Datadog, Splunk, Elastic), the Tesuto acquisition and a great talented team. I am trying to understand how strong the Compute@Edge moat is and its shelf life. It seems too important a market for the cloud titans to leave it to Fastly and Cloudflare. Is there something else I am missing?
No problem – thanks for the feedback. That is a fair concern. Lucet and Terrarium were likely open-sourced to build adoption for the approach, versus the status quo of leaving all compute at origin. Plus, the Shopify use case is important as that represents another twist on potential applications for this technology, which may not have manifested if the code was obfuscated and closed. I think Fastly’s approach is smart as the bigger risk would be lack of adoption for edge compute, given that the use cases are nascent. I suspect that Fastly will pursue an open core model, building commercial products and services around the runtime.
Potential customers and new competitors could try to roll their own solution, but would have to invest in POPs to distribute it. Cloud vendors do represent a risk, but they have offered both CDN and serverless offerings for a while, yet didn’t innovate too much there. Also, I am seeing a split in cloud vendor strategies recently. GCP appears to be embracing best of breed software providers as opposed to AWS trying to take share of every adjacent software service. It will be interesting to see where edge compute falls. Regardless, assuming a market emerges for distributed edge compute, then I think Fastly is well positioned to capture a sizable slice (expanding their relatively small revenue run rate). I would posit that Fastly’s moat will continue to be their innovation, engineering talent, neutrality and focus (arguably like ZM). Existing relationships with digital first companies will keep an active feedback loop for future opportunities. Finally, there is value in having the majority of Lucet committers working for Fastly, even if the code is open source. Top engineering talent is gravitating towards the independents, as they offer more upside.
Thanks for the reply. Much appreciated !
Another excellent article with a depth of analysis that stands out. Really nice work. How do 5G rollouts affect the CDN realm, if at all?
Thanks – I haven’t really considered the impact of 5G adoption. If anything, it would provide a tailwind as more content could be distributed and more immersive software apps could be built.
Feels like the edge Wasm stuff will take a couple of years to turn into meaningful revenue. So once hype dies down, the stock price will be driven more by revenue growth of existing products. Cloudflare has had Workers in production for a while, so should have a very good sense of the real world applications customers are looking to solve and what the possible market size is. While it’s good to see they are continuing to invest in it, they don’t seem to talk about Workers much vs their other products. My sense is that it’s a long-term play rather than an immediate larger market. Also have to admit that the use cases still feel pretty opaque to me, and I’m very technical. I can see why some very large sites might use this to optimize, but not sure it’s a mass market developer need.
If edge serverless really does have broad applications, firms like Cloudflare and Fastly will become head-on competitors. Right now their products are in the same space but with different focuses.
Btw, both have strong modern engineering teams. I have long watched what Cloudflare has been doing and been impressed. Have recently heard of some good people joining Fastly. One note: it’s a bit concerning that Fastly doesn’t have many engineering jobs posted. Fast growing companies normally hire all the time. I’m in Toronto and Shopify is relentless in its hiring.
Note, I own both stocks right now.
I don’t know if you take requests, but do you have any thoughts on Dynatrace (DT)?
The CEO said at the William Blair Conference on June 9 that their annual guidance was consciously pessimistic, but over April and May they have been ‘pleasantly surprised’ by business trends that do not justify that pessimism. The stock is very expensive (I have no position).
http://www.wsw.com/webcast/blair56/dt/index.aspx
Super insightful. Thanks!
Saw from the newsletter that you are considering analyzing a new company. I would recommend that you look at Agora.io — recently IPO’ed, strong COVID tailwinds, a lot of other parallels to Zoom (shared CEO history from webex, split R&D across US/China, cost conservatism etc)
Thanks so much for this incredible deep dive on Fastly
I wonder if you could provide some insight as to the potential size of this market based on other similar companies and based on the growth of internet traffic in general. How big could Fastly get in your most optimistic scenario?
Thanks. For size of market and Fastly’s share, I would consider these factors:
– Fastly’s own TAM estimate of $35B from the most recent Q1 investor deck.
– Edge Compute (2021): I would look at the total value of spend for compute at cloud vendors and then assume that some percentage, maybe 5% over the first couple years, could be moved outside of centralized data centers. Fastly would conceivably win some percentage of this – let’s say 10-20%.
– New Digital Transformation Spend. We can assume that the acceleration of digital transformation due to COVID will drive new experiences online. This would represent additional CDN / distributed compute spend.
Difficult to come up with a total, since there are a lot of factors and assumptions. As part of my review of FSLY’s Q1 results, I projected $1B in revenue for Fastly by 2024. That might be conservative.
You are very generous with the knowledge you share, thank you for this!
This blog is a game changer. I feel so much more knowledgable and empowered as an investor because of your research into these companies. Please keep the updates coming. Thank you.
Great read, nice level of technical discussion, thanks.