Investing analysis of the software companies that power next generation digital businesses

Snowflake Updates – Q4 FY2022

Snowflake reported Q4 FY2022 earnings on March 2nd. The results for the quarter were strong across all operational measures. However, planned platform performance optimizations are driving a near-term decline in revenue recognition for the next couple of quarters. These were taken into account in setting Q1 and FY2023 revenue projections. The market’s knee-jerk reaction was to view this as an indication of revenue deceleration and future execution risk, and the stock dropped 15% the next day. After a brief dip to a 52-week low on March 14th, the stock has recovered to about a 16% post-earnings dip currently. Notably, SNOW is down 34% since the beginning of 2022 and 45% from its peak in November 2021.

In spite of the market’s reaction, I think Snowflake’s trajectory is well on track. I view the platform optimization less as a revenue headwind and more as a solidification of market share. While Snowflake’s FY2023 (this calendar year) revenue growth may well decelerate into the 80% range, I think this report and several other factors build confidence in the durability of revenue growth above 50% for several more years. That compounding growth is supported by an enormous addressable market, which is expanding at an incredible rate. Add to this Snowflake’s increasing product offerings and adoption of data sharing, which serve to drive more consumption of the core compute and storage engine.

I recently had the honor of delivering a guest lecture at London Business School for their Master’s in Finance program (ranked #1 in the world). As part of their coursework, students are examining the emergence of digitally-driven companies and how to assess their value in the public markets. I provided perspective on trends in the data infrastructure space and what signals are important to watch beyond analysis of financial statements. The talk focused on a case study of Snowflake and why I think they are well-positioned to maintain their leadership in this large category. I will incorporate some of that content into this post.

What Happened

My last post on Snowflake in October 2021 provides a useful overview of the company, their product offerings and competitive position. Most of that remains relevant and I won’t rehash it all here. I will focus on what we learned in the last earnings report and what I think is important to keep in mind going forward. I will touch on a few of the relevant metrics and the pressure points from the Q4 earnings report. These set the stage for the next section in which I discuss why Snowflake is still on track for the larger opportunity.

I also have to acknowledge Snowflake’s leadership, CEO Frank Slootman and CFO Mike Scarpelli. These two have worked together successfully at three different companies. They intimately understand how to assess a market, position their company’s offering, form a strategy, build a high-performance organization and execute. If the leadership team were untested founders, I would be more concerned about potential risks associated with their performance optimization strategy and distribution of savings to customers. With these two at the helm, though, I have confidence that they have a cogent strategy.

To set the stage, let’s take a minute to review the good, the interesting and the bad from the earnings report. Then, in subsequent sections, I will explore the broader themes of Snowflake’s position and why they will continue to excel. Additionally, as with any analysis, I appreciate the ongoing macro concerns and downward pressure on valuation multiples for high growth software companies. I can’t control for that, and will provide some context later in the post that might help investors reconcile some of the shift in valuing software companies. At the end of the day, I try to focus on a company’s market opportunity, product offering and competitive positioning, and what those imply for the durability of revenue growth. Regardless of where valuation multiples land, my thesis is that the compounding of high, and more importantly durable, revenue growth with improving operational leverage will ultimately drive stock price over time.

The Good

These metrics provide reassurance that the Snowflake story continues to be on track.

Revenue Growth. Snowflake delivered nearly $384M in revenue in Q4. This was up 101% y/y and 14.8% sequentially. That represents a deceleration from Q3’s results of 109.5% y/y and 22.8% sequentially. While on the surface that is concerning, there are a few mitigating factors.

  • Q3 outperformance was unusual and management told us not to expect a repeat. Going back a quarter, Q2 delivered 104% y/y and 18.9% sequential growth.
  • Q4 was impacted by the post-Covid extended holiday season and a burst in Omicron cases. These two combined to reduce the amount of human-driven consumption in December-January. Management shared that 30% of usage is generated by ad hoc analysis, versus the 70% associated with automated workloads.
  • The platform optimizations, specifically warehouse scheduling, were rolled out in a limited fashion in January. This allowed pre-purchased utilization credits to handle more queries, resulting in less consumption. This reduced revenue recognition by $2M, which would have provided another 1.0 percentage points of annual growth and 0.6 points sequentially (a quick check of this arithmetic follows the list).
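
To see how that $2M lands on the growth rates, here is a quick back-of-the-envelope check. This is a sketch using reported figures; the prior-period revenue values are implied by the stated growth rates rather than pulled from the filings.

    # Rough check of the $2M optimization impact on Q4 growth rates.
    # Prior-period revenues are backed out of the reported growth rates.
    q4_revenue = 384.0                 # $M, as reported
    q4_prior_year = q4_revenue / 2.01  # implied by 101% y/y growth
    q3_revenue = q4_revenue / 1.148    # implied by 14.8% sequential growth

    adjusted = q4_revenue + 2.0        # add back the $2M optimization impact
    yoy = (adjusted / q4_prior_year - 1) * 100
    seq = (adjusted / q3_revenue - 1) * 100
    print(f"adjusted y/y growth: {yoy:.1f}%")   # ~102.0%, vs 101% reported
    print(f"adjusted sequential: {seq:.1f}%")   # ~15.4%, vs 14.8% reported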

Non-Revenue Indicators. While forward revenue projections were not as high as desired (consumption driven), metrics associated with demand for Snowflake services were strong.

  • RPO now stands at $2.6B, up 99% y/y. This represents an increase of $842M over Q3, or 46.7% sequential growth.
  • Q4 was the strongest bookings quarter to date, with $1.2B in new bookings for an increase of 106% y/y.

I think it is important to recognize that the demand for Snowflake isn’t slowing down, as indicated by these sales figures. This is evidence to me of customers’ willingness to allocate substantial IT budget to the services that Snowflake provides. Granted, revenue recognition will be lower in the near term, as customers can accomplish planned work with less credit consumption. However, I don’t think that means future budgets will be adjusted downward to reflect the lower cost. Rather, customers will find ways to grow their consumption to meet their budget. This may be through migration of more workloads or data sources to Snowflake. For a CIO, coming in significantly under budget is as bad as being over budget, because the next budget planning session will be second-guessed by the CFO.

Profitability Measures. As valuations for high-growth software companies are scrutinized, demonstrated operating leverage and cash flow generation are rightfully being assigned larger weightings in multiple premium calculations. A year ago, companies with high growth could largely ignore the cost of that revenue generation. In the current environment, a miss on profitability weighs on a stock as much as revenue growth underperformance. With that in mind, Snowflake demonstrated that it can drive profitability and revenue growth in parallel.

  • Non-GAAP gross margin was 75%, in line with Q3 and up 500 bps from a year ago.
  • Non-GAAP operating income was $18.1M for a 4.7% operating margin. This is up from $8.5M or 3% in Q3. More significantly, operating margin improved by 28.8 percentage points from a year ago, when operating income was -$46.0M or -24.1%.
  • Free cash flow was $70.7M for an 18.4% FCF margin.
  • On an adjusted basis, free cash flow was $102M for a 26.6% adjusted free cash flow margin. This is up from $21.5M or 6% in Q3. A year ago, adjusted free cash flow was $17.3M for a margin of 9.1%. Leadership did mention that Q4 and Q1 are usually the highest cash flow quarters, but these numbers are significant.
  • For FY2023, they expect this profitability to continue. The baseline for operating margin was set at 1%, with a 15% adjusted free cash flow margin.

Interestingly, this cash flow generation is rapidly bringing down valuation multiples based on profitability. At the very least, the ratio is no longer near infinity. At today’s enterprise value and using Q4’s annualized FCF of about $280M, we get an EV to FCF multiple of about 228. Yes, this is high, but cash flow metrics grew faster year/year than revenue, so we can expect this ratio to continue to compress.
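
For transparency, here is that multiple as a two-line calculation. The enterprise value is my approximation at the time of writing, and annualizing a seasonally strong Q4 overstates the run rate somewhat.

    enterprise_value = 64_000   # $M, approximate EV (see later in the post)
    annualized_fcf = 280.0      # $M, Q4 FCF of $70.7M annualized and rounded
    print(round(enterprise_value / annualized_fcf, 1))   # ~228.6x EV/FCF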

Customer Activity. Customer related metrics implied continued momentum as well. I like to focus on the combination of growth in total customer counts and the net expansion rate, as an indicator of revenue growth durability. Also, the size of customer spend is an important signal for the breadth of the addressable market and whether a company is approaching saturation any time soon.

  • Total customers were 5,944 at the end of Q4, up 44% y/y and 9.7% sequentially. This represents a slight acceleration from Q3 sequential growth of 8.5%.
  • Snowflake reported 184 customers with trailing 12-month revenue over $1M, up from 148 in Q3. This represents 24.3% sequential growth, versus 27.6% last quarter. This value was 77 in Q4 FY21, representing 139% y/y growth. During the earnings call, leadership mentioned that many more customers are on the “cusp” of $1M.
  • NRR ticked up again to 178%, compared to 173% last quarter and 168% a year ago.
  • Leadership mentioned that 6 of their 10 largest customers increased their spend y/y by a higher rate than revenue growth (which was 101%). That is a significant call-out, as spend for these customers is already large. This speaks to the size of the market and elasticity of spend.
  • Fortune 500 customer count was 241 at the end of Q4, up 24% y/y and by 14 sequentially. Global 2000 customers were 488, up 21% y/y and by 21 sequentially. Given the low relative penetration of the Global 2000, there is still growth available here. Also, management called out that many of their large customers are digital natives that are not in the Global 2000.
  • They closed 7 new $30M+ (TCV) deals in Q4, versus just 1 of these a year ago. These will drive revenue through consumption over the next 2-3 quarters. This is on top of the $100M three year deal highlighted in Q3.

I think the size of deals is interesting. It underscores the large addressable market. Snowflake’s deal sizes are much larger than we see in other software infrastructure categories.

The Interesting

These data points provide signals that could drive future positive performance, but don’t directly impact financials.

Data Sharing Metrics. As I discussed in my prior post on Snowflake, I think that data sharing creates strong network effects to draw in new customers for the Data Cloud. In the Q4 report, we received an update on progress with data sharing. Leadership revealed that the total number of stable edges increased by 130% during the year. At the end of FY2022, 18% of customers had at least one stable edge, up from 13% at the end of FY2021. Applying these percentages to the total customer counts at each point in time reveals that the absolute number of customers with a stable edge almost doubled year/year. Data sharing remains a popular feature for customers.

Employee Growth. Snowflake ended FY2022 with 3,992 employees. This was up 60% from the end of FY2021, an addition of about 1,500 employees. For FY2023, they are projecting to hire an additional 1,500 employees, representing an increase of 37.5%. These additions over the past year were fairly evenly distributed among S&M, R&D, G&A and cost of revenue. Hiring the same absolute number of employees year/year should provide some margin improvement. It does create expectations for improvements in productivity in order to maintain the same growth trajectory.

Areas for Concern

These areas fueled the negative reaction around the earnings report and provide the basis for monitoring as the year progresses. They also represent risks to the investment thesis for this year.

Revenue Guidance. The main driver of the stock price decline after earnings was forward revenue projections for Q1 and the full year. Q4 provided our first view of expectations for FY2023. For Q1, Snowflake estimated product revenue growth year/year of 79-81%. This was just slightly above analyst estimates for 78.5% y/y growth in total revenue. For comparison, the preliminary guide for Q4 issued in the Q3 report called for 94-96% growth. On the surface, this implies a 15 percentage point deceleration in annual revenue growth.

However, management also disclosed that they expect a $10M impact from platform optimizations for the full Q1 quarter (as opposed to the $2M in Q4). If we back out this $10M projection, product revenue would have been estimated to grow by about 84% y/y, which softens the deceleration somewhat. Also worth considering is that Q1 has 3 fewer days than other quarters. Snowflake didn’t call this out, but other consumption-oriented software companies, like Elastic, have flagged it as a Q1 impact. This is more relevant for evaluating sequential growth than annual.

For the full year, the impact of the platform optimization is more pronounced. The preliminary estimate for full year product revenue calls for growth of 65-67% y/y. Coming off of 106% growth in the prior year, this represents a concerning level of deceleration. However, management disclosed that they expect a net $97M impact from platform improvements. Backing this out implies growth would have been 74.3%.
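
Both of those adjustments are straightforward to reproduce. In the sketch below, the prior-period bases ($213.8M of product revenue in Q1 FY2022 and $1.14B for full-year FY2022) are taken from Snowflake’s earlier reports, and I use the guidance midpoints. Small differences from the figures above come from rounding of the base values.

    # Backing the optimization impact out of FY2023 product revenue guidance.
    q1_fy22 = 213.8                  # $M product revenue, Q1 FY2022
    fy22 = 1140.0                    # $M product revenue, FY2022

    q1_guide = q1_fy22 * 1.80        # ~$385M at the 80% guidance midpoint
    q1_adj = q1_guide + 10           # add back the $10M optimization impact
    print((q1_adj / q1_fy22 - 1) * 100)    # ~84.7% y/y, the "84%" above

    fy23_guide = fy22 * 1.66         # ~$1.89B at the 66% guidance midpoint
    fy23_adj = fy23_guide + 97       # add back the net $97M impact
    print((fy23_adj / fy22 - 1) * 100)     # ~74.5% y/y, close to the 74.3%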

Going back a year to Q4 FY2021, the preliminary estimate for FY2022 product revenue growth was 81-84%. Product revenue growth landed at 106%, for a beat of about 24 percentage points over the guided midpoint. Applying the same pattern to the FY2023 product revenue growth estimate of 66% year/year, we should land somewhere between 80-90% actual growth, taking into account the normal beat/raise cadence. Nonetheless, this implies a deceleration from FY2022 growth.

NRR Guidance. While NRR of 178% in Q4 is impressive, management shared that they don’t expect NRR to remain above 170%. As NRR depends on the increase in spend from existing customers, it will be impacted by the platform efficiency roll-out. A dropping NRR would coincide with lower revenue recognition and reinforce the deceleration narrative.

However, the CFO stated that he expected NRR to remain above 150% for “quite some time”. Depending on the time horizon, we could anticipate durable revenue growth above 50% a year for a few more years. The high NRR combined with an elevated rate of total customer adds almost ensures high revenue growth levels. Compounding annually at a high growth rate will increase absolute revenue substantially. At 80% revenue growth, FY2023 should land around $2.19B. Add another 60% in FY2024 and we are up to $3.51B, putting the EV/S ratio at 18 in two years. I delve into future revenue projections later in the post, but use these estimates to illustrate the impact of durable elevated revenue growth.
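
As a quick illustration of that compounding math (revenue figures in billions, starting from FY2022’s roughly $1.22B of total revenue, with the current enterprise value assumed at about $64B):

    fy22_total = 1.219            # $B total revenue, FY2022
    fy23 = fy22_total * 1.80      # ~$2.19B at 80% growth
    fy24 = fy23 * 1.60            # ~$3.51B at another 60% growth
    ev = 64.0                     # $B, approximate current enterprise value
    print(round(fy23, 2), round(fy24, 2), round(ev / fy24, 1))  # EV/S ~18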

Higher Level Themes

The expected slowdown in consumption rates due to the platform efficiency will likely drive revenue growth deceleration this year. The magnitude of that remains to be seen. In subsequent media appearances, Snowflake leadership implied that the preliminary projections for FY2023 were conservative, referencing FY2022’s outperformance versus what was projected at the beginning of that year (an initial estimate of 81-84% growth versus an actual result of 106%). Similarly, they were confident that passing cost reductions on to customers was a good strategy, which would result in higher utilization from customers over the long term.

Of course, anything can happen. If product revenue growth trends to the low side of these projections, then the deceleration story will be magnified. In a tougher macro environment, customers might decide to just pocket the savings. They may not move incremental workloads or data to the Snowflake platform.

However, I think there are a number of factors that will continue to provide durable tailwinds for Snowflake. These will drive both elevated revenue growth rates and continued improvements in operating leverage over time. These factors revolve around the size of the market, sales efficiencies with hyperscalers, product workload expansion and the network effects of data sharing. I will detail these in the sections below.

Market Size

As we think about the long term durability of growth for any company, the foundation for the opportunity is usually grounded in the size of the addressable market. If a provider can maintain their market share, they will benefit from a natural tailwind as the market continues to grow. Further, for a large market with many legacy players, a disrupter can ride the long tail of technology upgrades and cloud migrations. Therefore, they can benefit in two dimensions – legacy upgrades and brand new workloads. If the provider is expanding their product offerings, a third expansion dimension emerges, representing more utilization in the same market, but from adjacent product categories.

In Snowflake’s case, they are benefiting from a number of market tailwinds. These revolve around the rate of data creation and the increasing importance of mining that data in order for enterprises to maintain their competitive advantage. Back in 2019, industry analysts at IDC predicted that over half of all GDP worldwide would be driven by products and services from digitally transformed businesses. They have since increased this percentage in some geographic areas, like APAC, to 65%.

By 2023, IDC predicts that the global economy will finally reach “digital supremacy” with more than half of all GDP worldwide driven by products and services from digitally transformed enterprises.

IDC, FutureScape Report, 2019

While that is a broad categorization, I think we will see this environment translate into higher demand for data processing through a number of industry trends. The IDC quotation underscores the impact from digital transformation. Businesses are increasingly using direct digital channels to reach customers. Each of these new digital channels generates extensive data footprints for consumers that didn’t exist in the physical space. The amount of data collected about a consumer is many times higher when they engage on a digital channel than walking into a physical retail location.

Author’s Slide, London Business School Digital Investing Course, March 2022

Amplifying the creation of data is the shift to first party data collection. Due to privacy concerns, the availability of rich consumer profiles from third party sources is diminishing. This forces enterprises to take over consumer data collection and mining themselves, building their own customer data sets and analytics capabilities for targeting and insights. They then need to continuously mine this data to improve services for customers through personalization and better communication.

We should also keep in mind that many modern businesses are digital natives that don’t even have a physical counterpart. Examples abound in staffing (Fiverr, Upwork), finance (Coinbase, SoFi), fitness (Peloton, Mirror) and education (Roblox). A digitally driven fitness experience, like Peloton, will maintain a much larger data footprint about its users than the local gym. While your gym knows that you have a monthly membership and records each visit, it doesn’t track every piece of equipment that you use, for how long, how much you exert yourself, how you are doing against other gym members, and so on. Similarly, the fitness trackers that we wear every day create huge amounts of data that didn’t exist before.

This explosion in first party data collection is further illustrated by the most recent Customer Data Platform report published by Twilio Segment. Segment provides a mechanism for enterprises to collect data from multiple sources, aggregate it and then feed it to downstream systems for analysis and permanent storage. Given their role in the ecosystem and position as the largest CDP platform, their data provides insight into broader trends in data analytics.

Twilio Segment, Customer Data Platform Report, 2022

In the report, they shared a graph showing the increase in API calls processed by the Segment platform over the past four years. Each data point represents the number of API calls for the prior month. We see a steady increase in data generation in the years before Covid, as well as the step up in utilization as the pandemic hit. While the sharp acceleration in growth may not continue as we move out of Covid lockdowns, I expect the general velocity of data creation to continue. As Segment collects first party data that is streamed to data warehouses for analytics jobs (among other destinations), I think it provides a strong correlation to demand generation for Snowflake.

This growth in data creation is also being met by an increase in expectations for use of it. Enterprises are moving beyond simple BI visualizations towards mining customer data for insights, recommendations and personalization. As applications become “smarter”, consumer expectations have increased for digital businesses to anticipate their needs and deliver differentiated experiences. This is putting pressure on enterprises to do more with their customer data. If the last 10 years were focused on whether companies had a feature-complete web site or a mobile app, the next 10 years will be about how well they can leverage the data they collect from these sources to improve service, efficiency and convenience for customers.

All of this data collection and processing has also driven a rise in companies sharing data between partners and ingesting curated data sets from independent providers to further enrich their processing. Data sharing activities within an industry ecosystem expand the data sets, providing more data to draw insights and train machine learning models. These also require more processing. Layering in curated data sets for everything from demographics, consumer spending trends, economics and even weather data further increases utilization.

In another useful IDC report from December 2021, they published predictions for the Future of Industry Ecosystems. This focused on trends in data sharing and collaborative workflows within industry segments. Their premise is that digital transformation is rapidly evolving industry ecosystems from groups of independent participants to open value chains of technology, industry and supplier partners that are increasingly willing to share data to drive innovation, reduce costs, improve efficiency and deliver a better consumer outcome.

Prediction 1: By 2022, organizations that share data, applications, and operations with their industry ecosystem will realize a revenue increase of 3 percentage points higher than nonparticipants.

Prediction 4: By 2025, 80% of industry ecosystem participants will leverage their own product, asset, and process digital twins to share data and insight with other participants.

Prediction 7: By 2022, 50% of the Fortune 500 will manage the value of shared ecosystem data via KPIs of improved operational productivity, ongoing customer engagement, and skills enhancement.

Prediction 10: By 2026, on average, 30% of Global 2000 company revenue will derive from industry ecosystem shared data, applications, and operations initiatives with partners, industry entities, and business networks.

IDC FutureScape: Worldwide Future of Industry Ecosystems 2022 Predictions

Finally, data will not just be generated by human interactions with digitally transformed businesses. As smart devices and industrial automation initiatives proliferate, the amount of data generated should further increase. Applied Materials published an interesting slide in their Investor Meeting from April 2021. On it, they project the growth in data generation looking forward, separating human activity from categories of IoT applications. They represent 2018 as the point of “crossover”, in which the volumes of data created by machines began to exceed those by direct human activity. You can see the projections going forward, where the vast majority of data creation will be generated by machines, which are not bound by population growth.

Applied Materials, Investor Meeting, April 2021

All of this data will need to be collected, cleansed, processed and mined. While much of it will be pre-processed locally on edge compute infrastructure, a large remainder will be sent to centralized data lakes and warehouses for data mining and machine learning in aggregate. These workloads would leverage technologies from companies like Snowflake, further driving utilization and storage demands.

Platform Efficiencies and Market Share

On the Q4 earnings call, Snowflake management revealed that the platform efficiencies being rolled out this quarter are resulting in a 10-20% improvement in workload performance on average. In a simple example, if a customer could run 100 queries for every Snowflake consumption credit before the optimization, now they can run 110 to 120 per credit. In the near term, this will immediately allow manual and automated customer jobs to accomplish the same work for less consumption of credits. Since Snowflake revenue is recognized on consumption and not contract sales, the immediate impact would materialize in revenue recognition, since fewer pre-sold credits will be consumed.
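
The consumption math is worth making explicit. For a fixed workload, a 10-20% improvement in queries per credit translates into roughly 9-17% fewer credits consumed, which is the revenue recognition impact before any new workloads are added. A sketch with the illustrative numbers from above:

    # Credits consumed for a fixed workload as efficiency improves.
    queries = 100_000                          # fixed workload size
    for queries_per_credit in (100, 110, 120):
        credits = queries / queries_per_credit
        print(queries_per_credit, round(credits, 1))
    # 100 -> 1000.0 credits (before optimization)
    # 110 -> 909.1 credits (~9% less consumption)
    # 120 -> 833.3 credits (~17% less consumption)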

In projecting FY2023 revenue, management accounted for this impact by assuming a reduction of $160M of usage credits. However, they added back in about $63M of incremental usage that they assume will be generated by new customer data and workloads in response to the lower processing price point. This is based on evidence from prior efficiency roll-outs that customers generally increase utilization about 6 months after a cost reduction. They also mentioned it could happen sooner, since the improvement is so significant.

I think management’s assumption that lowering cost will result in more customer utilization is reasonable. This is based on a few factors. First, they have evidence from prior performance improvements. They have rolled out similar optimizations in years past, and those haven’t impacted revenue performance significantly thus far. As noted, in FY2022 Snowflake still grew revenue by over 100%.

Additionally, these changes aren’t performed in a vacuum. The Snowflake sales team is in constant contact with customers, who have indicated that they would move incremental workloads onto Snowflake at a lower price point. This makes sense to me, as I can imagine many lower value, infrequent data processing jobs that currently run on some sort of in-house, on-premise solution. With a sunk cost in legacy, on-premise solutions, a CIO would logically extend the duration of a technology upgrade to spread costs over a longer period.

Snowflake’s CFO reinforced this point at the JMP Technology Conference recently. In discussing the performance optimizations, he said they have hundreds of legacy on-premise data warehouse migrations that are not complete. Many customers start these migrations with their most important analytics workloads. As Snowflake becomes cheaper, they move the lower value workloads over as well. There is a long tail of workloads to migrate before the legacy solution is shut down.

As another example from the earnings call, the Snowflake CFO referenced having hundreds of customers with large Teradata migrations. These represent pre-sold contracts for consumption (RPO) where revenue hasn’t been realized. How can Snowflake accelerate the revenue recognition for these contracts? They can make it more compelling for customers to move over analytics workloads. Snowflake already has a relationship with these customers, so we can only assume that this is in response to real customer demand.

Our customers are always looking at price performance. And when they compare our price performance versus running it, whether in another cloud or running it on-prem in their existing data warehouses, they make the move to move more things into it. As a reminder, we have landed hundreds of customers to do these big, on-prem Teradata migrations.

I think we’ve only completed — we’re completely shut down a little over 30 of those. It’s maybe in the mid-30s now. There’s piles of other workloads that they plan on moving. And when customers see the price performance, they will accelerate the movement of those other workloads to us.

Snowflake Earnings Call, Q4 FY2022

Not only should cost reductions draw in more workloads and data, they should also preserve market share. We need only look to the hyperscalers for evidence that this strategy works. AWS has reduced prices 107 times since 2006. Yet, they just delivered nearly 40% revenue growth at a $71B run rate, accelerating slightly from the prior quarter. More significantly, they maintained market share through all of these reductions and are still larger than the number two and three competitors combined.

I think Snowflake is following AWS’s playbook here. They are leading in market share and are using their size and scale to maintain it by passing on cost reductions. This makes it harder for new market entrants to compete and, in the case of AWS, explains why they have maintained leadership for 15 years. As the market for data services is still growing rapidly, Snowflake can afford to take small hits in the short term on revenue recognition to ensure they maintain share of an enormous market over the long term. This is a recipe for durable revenue growth.

For more evidence that Snowflake is maintaining its market share, we can go back to the Segment Customer Data Platform report for 2022. In it, they ranked the growth in usage of data sources and data destinations by their customers. The methodology was to count the number of customers using a particular connector as a data source or destination, which they simplify into the label of “app”. This data would correlate closely to new customer relationships with each app.

Twilio Segment, Customer Data Platform Report, 2022

According to Segment’s report collected from data across their 25,000 customers, Snowflake had the largest increase in customer connections of any third party. This was also by the greatest margin, leading competitor Google BigQuery by 22%. As Segment is used heavily to collect and distribute first party data for digitally enabled enterprises, it reinforces Snowflake’s pole position in data analytics workloads.

Hyperscaler Relationships

A significant change in posture has occurred with the major cloud vendors over the past year. As competition shifts to cooperation, the relationship with the cloud vendors is inflecting from a headwind to a tailwind. As recently as a couple of years ago, analysts and investors were rightfully concerned about competition from the cloud vendors for smaller independent software infrastructure companies across a variety of segments. This was not just in data services, but included security, identity, communications and even monitoring. Both open and closed systems were at risk. Companies based on an open source project, like MongoDB, Elastic and Confluent, experienced the most direct encroachment, while closed offerings like Datadog, Twilio, Crowdstrike, Okta and Snowflake were faced with competitive offerings from cloud vendors to differing extents.

Author’s Slide, London Business School Digital Investing Course, March 2022

However, in the past year, this bias has inflected significantly towards cooperating with the independent providers. I think this shift is being driven by the growth in deployments onto hyperscaler infrastructure. In Snowflake’s case, their data cloud is deployed onto the infrastructure provided by AWS, Azure and GCP. So, as Snowflake utilization scales, the cloud vendor can generate significant revenue from the underlying compute and storage.

This is appealing to the cloud vendors because it is essentially low-cost operating income. While they might prefer to sell their own solutions for data processing to end customers, those products require staff to maintain the software and provide customer service and sales support. I imagine the operating margin for co-selling an independent’s service can be more favorable for the cloud provider than investing in building their own competing products, at least in some cases.

AWS seems to be embracing this co-selling approach the most deliberately. For Snowflake, management revealed that $700M of the $1.2B in new bookings for Q4 was generated through co-selling with the hyperscalers. This is up from $500M in Q3 (40% sequentially). Of that $700M, Snowflake’s CFO said the “vast majority” was with AWS, $0 with GCP and the balance with Azure.

For AWS, this makes sense. While they may not generate revenue for Redshift (their data warehouse product), they get to cross-sell SageMaker into the account, which is their machine-learning platform that is well integrated with Snowflake. Additionally, AWS gets the benefit of the customer’s other workloads and services outside of Snowflake’s usage. This strategy seems to be employed by AWS to win business over other cloud vendors, particularly GCP. On a recent analyst call, the Snowflake CFO mentioned many competitive wins over GCP through their co-sell with AWS.

This may be part of a larger strategy by AWS, as the fruits of co-selling were highlighted by other independent software infrastructure providers in their recent quarterly updates. Datadog, Crowdstrike, MongoDB and others called out co-selling with AWS as driving revenue upside. Faced with competition from Azure and GCP, I think AWS is using these partnerships as a strategy to grow market share. Based on Q4 results from the hyperscalers, this seems to be working. Of the three, only AWS accelerated year/year revenue growth. It’s likely that we see Azure and GCP adopt a similar approach.

The reason this co-selling is important is that it now provides a tailwind for Snowflake, versus a headwind. Co-selling should result in more wins for Snowflake, if only because it removes competitors. It also increases sales productivity, as leads come for free from the hyperscaler marketplace or enterprise sales team. In an environment of skilled labor shortages and increasing costs for top sales people, this will help maintain operating leverage. The incremental contribution from the hyperscaler co-selling relationships drove record bookings in Q4. As this continues with AWS and picks up with the other hyperscalers, it will provide more support for durable revenue growth beyond this year.

Product Expansion Will Drive More Consumption

Snowflake has been expanding the platform to enable new workloads to be run on top of the core compute and storage engine. One avenue for that is to allow engineers to create applications that directly access data in Snowflake. This is accomplished by adding a runtime code environment around the data processing engine. This way, engineers don’t have to export data out of Snowflake to another runtime environment. This is important because that external runtime represents an infrastructure cost for the customer.

Snowpark Runtime, Snowflake Web Site

Snowpark is the primary vehicle for enabling this. Introduced in November 2020, Snowpark provides data engineers with an environment to create their own data management applications that run on top of the core data processing engine.  It supports several common developer languages, including Java, Scala and most recently Python. The Python offering was introduced in a private preview mode. This makes sense as Python is the most popular language for data mining and processing.

Snowpark can be applied to many types of workloads – data cleansing and prep, feature engineering and data applications. Data applications are particularly exciting as they fill an immediate need and are fairly straightforward to build. The benefit to Snowflake is that these applications generate more utilization of compute/storage, which drives up revenue through consumption. As Snowflake continues to optimize for high concurrency and low latency, they will expand towards more use cases in data-rich applications. 
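
For a sense of what this looks like in practice, here is a minimal sketch using Snowpark’s Python DataFrame API, which was in private preview at the time of writing. The connection parameters, table and column names are hypothetical.

    from snowflake.snowpark import Session
    from snowflake.snowpark.functions import avg, col

    # Hypothetical connection parameters for illustration only.
    session = Session.builder.configs({
        "account": "<account>", "user": "<user>", "password": "<password>",
        "warehouse": "<warehouse>", "database": "<database>", "schema": "<schema>",
    }).create()

    # The transformation executes inside Snowflake's engine, so no data
    # leaves the platform and the work consumes Snowflake compute credits.
    orders = session.table("ORDERS")
    result = (orders
              .filter(col("STATUS") == "SHIPPED")
              .group_by("REGION")
              .agg(avg("AMOUNT").alias("AVG_ORDER")))
    result.show()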

The Streamlit acquisition announced with the earnings report will further accelerate data application build-outs. Streamlit provides a runtime environment for data access scripts and a simple drag-and-drop toolset to create rich user interfaces. This is built on an open source framework that is very popular with developers. Over 1.5M apps are currently powered by Streamlit and it is used by tens of thousands of developers.

Prior to the acquisition, the Streamlit team was considering providing a hosted runtime for their service and had approached the cloud vendors for the infrastructure. However, that approach lacked sophisticated data governance controls, which would have required significant development work to add. By joining with Snowflake, they can deploy their runtime on the big three cloud vendors and leverage Snowflake’s extensive privacy, security and governance capabilities.

I think data applications will represent a large demand driver for Snowflake going forward. On the Q4 earnings call, the CPO mentioned that the Python private preview is already over-subscribed by customers eager to take advantage of the capability. Having the ability to launch data applications without engaging the main development team in an organization will be a boon to productivity for data teams. This should unlock a lot of pent-up internal demand and further drive net expansion for existing customers.

Additionally, on a recent analyst call, the CFO pointed out that add-ons like Snowpark and Streamlit don’t require the sale of a new module or licensing arrangement. They can be used by customers for “free”, as Snowflake generates revenue on the underlying increase in compute from the additional workloads. These runtime environments are also compelling because they replace costs spent elsewhere on cloud infrastructure to host data applications separately.

Data Sharing and Industry Ecosystems

In a prior post on Snowflake, I focused extensively on the competitive advantages that are drawn from data sharing. These are enhanced by clean rooms, which allow two participants to share data, but only from a relevant overlap. As an example, a retailer may want to offer a promotion to their customers on a media provider. The two can utilize a clean room to determine the intersection of their customers based on a common attribute like an email address. Then, only the customer list in common is shared. The rest of the data from each participant is kept secret.

This capability would not be possible with the legacy method of data exchange, which involved data dumps or API queries. In those cases, one party would be privy to the full data set. To get around this, the partners could contract with a neutral third party to find the intersection, but this adds latency and cost to the process.

With clean rooms, Snowflake customers can create Secure User Defined Functions (UDFs) in SQL or JavaScript that are run on the shared data sets. Only the output of the functions is made available to both parties. Generally, this would represent an intersection of two data sets. Besides the efficiency of data sharing, clean rooms provide another capability to encourage participants to move data onto the Snowflake platform.
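
Conceptually, the agreed function computes something like the following, except that inside a clean room neither party can see the other’s input set, only the output. This is a plain Python illustration of the matching logic, not Snowflake’s implementation.

    import hashlib

    def match_key(email: str) -> str:
        # Hash a normalized email so the match key isn't the raw address.
        return hashlib.sha256(email.strip().lower().encode()).hexdigest()

    # Each party's full customer list stays private to that party.
    retailer = {match_key(e) for e in ["a@example.com", "b@example.com"]}
    media_co = {match_key(e) for e in ["b@example.com", "c@example.com"]}

    overlap = retailer & media_co   # the only output either party sees
    print(len(overlap))             # 1 customer in common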

Author’s Slide, London Business School Digital Investing Course, March 2022

Data sharing and clean rooms create network effects. As more enterprises leverage Snowflake and offer data distribution through data sharing and clean rooms with their partners, new participants will feel a pull into the Snowflake network. This was highlighted by a quote from the Head of Data Solutions at ADP on a podcast, “Do you guys have a Snowflake account?  Can you just distribute data to us that way?” He cited this as a common question from new customers of ADP’s financial data feeds.

Snowflake has three other strategic initiatives that encourage enterprises to join the Snowflake Data Cloud. Each of these generates revenue for Snowflake by driving more utilization of the underlying storage and compute platform, versus introducing a separate monetization model.

  • Data Marketplace. The Snowflake data marketplace allows companies to sell curated data sets to Snowflake customers directly. Distributing the data on Snowflake’s platform leverages the inherent data sharing capabilities, eliminating the normal overhead of ingesting a data feed or querying a set of APIs. Snowflake customers can easily import data sets from Data Marketplace providers and then combine them with their own data. Some popular participants selling data feeds are Factset, Equifax and AccuWeather.
  • Industry Ecosystems. Snowflake has focused on eight industry segments to build networks of participants. Financial services and media are the largest, with healthcare and retail described as large opportunities. These are primarily a sales and marketing effort, in which Snowflake has built subject matter expertise around each category. Sales engineers familiar with common data problems within the industry segment can help customers utilize Snowflake to address them and collaborate with other ecosystem participants. This adds value for participation in the network, versus remaining outside it. Snowflake also layers on standards and best practices relevant to an industry segment, like the SOX compliant standard for data sharing in the financial industry called the Cloud Data Management Capabilities (CDMC) assessment.
  • Powered By Program. Snowflake is leveraging their investment in infrastructure to allow other companies to run their businesses on top of the Snowflake platform. Instead of building their own data management solution, new companies are launching targeted services on top of Snowflake. As an example, Observe runs their whole observability business on Snowflake’s Data Cloud. BlackRock and Adobe extended their product offerings in data services by distributing on top of Snowflake.

In fact, as I was writing this, Snowflake formally announced the Healthcare and Life Sciences Data Cloud. This data cloud will allow organizations to securely aggregate, process and share data between participants. Snowflake protects sensitive data, allowing companies to meet compliance requirements and industry regulations. The goal is to help providers improve patient outcomes, optimize healthcare delivery and accelerate clinical research. Initial participants include Snowflake customers Anthem, Health Catalyst, IQVIA, Komodo Health, Novartis, and Spectrum Health.

What I like best about the press release was a graphic that provided an updated view of all data sharing relationships across the Snowflake Data Cloud. We have seen versions of this graphic in the past. In it, each dot represents a customer and each line is a data sharing relationship. This web of data sharing is becoming more complete on each iteration.

Data Sharing Relationships, As of January 31, 2022

In their Q4 earnings report, Snowflake management provided a few metrics around data sharing and industry ecosystems. They reported that 18% of customers now have at least one stable edge, up from 13% a year ago. Snowflake leadership sets a high bar for considering a data sharing relationship actively used, referred to as a stable edge. To qualify as a stable edge, the two parties must consume 40 or more credits of Snowflake usage each day over a 6 week period for the data sharing relationship.

What is most impressive about this statistic is revealed if we calculate the absolute number of customers involved. This takes into account the fact that Snowflake increased the total number of customers by 44% over the comparable period. Along those lines, 1,070 of Snowflake’s 5,944 customers have a stable edge. This is almost twice as many as the 538 customers with a stable edge (13% of 4,139) a year ago. This underscores the power of network effects associated with Snowflake’s data sharing capability.
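
The arithmetic behind that doubling, using the reported percentages and customer counts:

    # Absolute customers with a stable edge, derived from reported figures.
    fy22 = 0.18 * 5944    # ~1,070 customers at end of FY2022
    fy21 = 0.13 * 4139    # ~538 customers at end of FY2021
    print(round(fy22), round(fy21), round(fy22 / fy21, 2))   # 1070, 538, ~1.99x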

Additionally, Snowflake announced that listings in the Data Marketplace grew by 195% last year. There are now more than 1,100 data listings from over 230 individual providers. The Powered By program has 285 companies participating. These companies have built their business operations on top of the Snowflake platform, and we can expect them to generate significant utilization going forward.

Labor Shortage

One other trend to keep in mind has to do with the ongoing shortage in high skilled employees. For Snowflake’s customers, this would apply to software engineers that would be necessary to set up infrastructure, maintain open source solutions or build low value internal applications. As engineers with these skill sets are in short supply, enterprises are more likely to focus their development resources on building high value customer-facing applications. DevOps personnel charged with managing the runtime environments will be more inclined to utilize a packaged software service from a third party than cobble together an open source solution that requires significant configuration and maintenance.

This implies that enterprise IT teams will prefer a packaged service in areas that are easy for them to outsource. For observability as an example, this would mean paying Datadog to monitor applications and logs, instead of setting up an open source alternative like Prometheus, Jaeger or Fluentd. The same argument would apply to using a cloud-hosted service for MongoDB (like Atlas) or Kafka (like Confluent) versus trying to manage open source alternatives themselves.

This resource optimization bias should benefit Snowflake. Data teams will want to focus their resources on extracting insights from the data or running machine learning models. They will be less inclined to set up multiple open source packages to configure their modern data stack. Granted, some companies will have the resources to do all of that themselves, but I think that approach will represent the exception and the purview of the digital natives. On the other side, I think that many large enterprises will leverage solutions like Snowflake to handle the infrastructure of data management and delivery at scale.

Investor Take-Aways

I think all of these factors contribute to a reasonable expectation for high durable revenue growth going forward from Snowflake. While the near term impact on revenue recognition due to platform optimizations resulted in a guidance disappointment for FY2023, it will allow Snowflake to maintain market share and stave off new entrants trying to compete on price. With a large share of the addressable market, Snowflake will benefit from the explosion in data growth. Just by maintaining market share, Snowflake gains an automatic tailwind.

Yet, Snowflake isn’t relying only on market growth. They are expanding into adjacent categories by offering data services on top of their core compute and storage engine. Data applications represent a large opportunity here. As new capabilities are built on the Data Cloud, customers can consume them without having to go through a procurement process. Snowflake captures their revenue from the underlying utilization of compute and storage resources.

Data sharing and industry ecosystems are driving network effects that increase the value of joining the Snowflake network for each new participant. This will attract more customers to Snowflake, continuing to feed the long tail of spend expansion. Data sharing is supplemented by the Data Marketplace, which enhances customer data sets by providing curated data feeds directly in the environment. The storage associated with this data and the compute to process it all generate consumption of usage credits for customers.

Finally, the relationship with the hyperscalers is shifting from being competitive to cooperative. AWS has been credited with the most co-selling activity with Snowflake. GCP has the least. As AWS is the largest cloud provider and has a $71B run rate, peeling off some of this business for Snowflake will provide a tailwind to revenue growth. Bookings in Q4 hit a new record, with a large portion of that attributed to hyperscaler co-selling. This acceleration in contribution from the hyperscalers should help offset the natural deceleration in revenue growth that we would typically see as a result of large numbers. This again speaks to the durability of high revenue growth beyond 2022.

Based on these factors, we can try to project revenue growth and some level of profitability for Snowflake looking forward several years. We received guidance for a multi-year forward model out to FY2029 at the Investor Day event in June 2021. I thought their model was conservative and plugged in my own estimates as part of my October post. The updated table is shown below, with actual FY2022 results in parentheses.

As a measure of profitability, in FY2022, Snowflake delivered a FCF margin of 7% for the full year and 12% on an adjusted basis. For FY2023, the preliminary estimate is for 15% adjusted FCF margin. Given that adjusted and actual FCF margin should continue to increase, we can model 25% FCF margin over the long term. This may well be conservative, and 30% FCF margin at that scale is likely.

We have evidence that 30% FCF margin is achievable based on results from heavy data processing software infrastructure peers that also run on hyperscaler infrastructure. Datadog delivered 33% FCF margin in their Q4 report at a $1.3B run rate, and 24% FCF margin for the full year of 2021. Crowdstrike just reported 30% FCF margin for the full fiscal year of 2022 on total revenue of $1.45B.

Fiscal Year   Product Revenue   Growth Rate   25% FCF
2022          $1.1B ($1.14B)    98% (106%)
2023          $2.05B            80%
2024          $3.49B            70%
2025          $5.58B            60%
2026          $8.37B            50%           $2.09B
2027          $11.7B            40%           $2.92B
2028          $15.8B            35%           $3.95B
2029          $20.6B            30%           $5.15B
Author’s projections for product revenue growth and FCF. Values in parentheses are actual FY2022 results, versus the author’s prior estimates from October 2021.
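
For readers who want to adjust the assumptions, the table reduces to a few lines of code. The growth rates are my estimates, not company guidance, and FCF is modeled at 25% of product revenue starting in FY2026; the output matches the table to within rounding.

    # Author's projection model: product revenue compounding at declining
    # growth rates, with FCF at 25% of product revenue from FY2026 onward.
    growth = {2023: 0.80, 2024: 0.70, 2025: 0.60, 2026: 0.50,
              2027: 0.40, 2028: 0.35, 2029: 0.30}
    revenue = 1.14   # $B, actual FY2022 product revenue
    for year, rate in growth.items():
        revenue *= 1 + rate
        fcf = round(revenue * 0.25, 2) if year >= 2026 else None
        print(year, round(revenue, 2), fcf)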

Investors can play around with this model, but I think the growth rate projections are reasonable given Snowflake’s unusually high and durable net expansion rate. Also, during the Investor Day event, leadership stressed that the original model did not make any assumptions for new product offerings or lines of business. I think it is fair to assume that Snowflake will increase their reach into other markets and create additional revenue streams from new offerings, as we have seen with Snowpark, Streamlit and data applications.

Total revenue is usually about 6% higher than product revenue, as it also includes professional services. This would bring FY2029 total revenue to $21.8B and FCF to $5.46B. At a 30% FCF margin, free cash flow increases to $6.5B. If we use a 50x multiplier from FCF to enterprise value, we get to $325B in EV by FY2029. For comparison, NOW has a trailing EV/FCF ratio of 64 with revenue growth of 29% on $6B in total revenue. SNOW’s current EV is $64B, implying about a 5x increase in valuation from today’s prices over the next 7 years for a price target of about $1,120, as we approach the end of this decade.

In my October review, I applied a P/S ratio of 50 to end of calendar year 2024 (FY2025) revenue. That calculation yielded a $940 price target for end of 2024. Looking again at FY2025, I expect $5.91B in total revenue (1.06 x the updated product revenue target of $5.58B). Using a lower EV/S ratio of 35 for a profitable company growing revenue at 60%, we get a target EV of $207B. The lower EV/S ratio is in line with other 60% revenue growth companies, like CRWD and ZS. Current EV for SNOW is $64B, yielding an implied increase of 3.2x. This provides an updated price target of $720 by end of 2024, or about 3 years from now.
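
Both targets collapse to a few lines of arithmetic, with the current enterprise value assumed at about $64B:

    ev_now = 64.0                       # $B, approximate current EV

    # FY2029 target: total revenue = product revenue + ~6% services.
    total_2029 = 20.6 * 1.06            # ~$21.8B
    ev_2029 = total_2029 * 0.30 * 50    # 30% FCF margin at 50x EV/FCF, ~$325-330B
    print(round(ev_2029 / ev_now, 1))   # ~5.1x upside by FY2029

    # FY2025 target (end of calendar 2024): EV/S of 35 on total revenue.
    total_2025 = 5.58 * 1.06            # ~$5.91B
    ev_2025 = total_2025 * 35           # ~$207B
    print(round(ev_2025 / ev_now, 1))   # ~3.2x upside by end of 2024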

Of course, investors with a different view of appropriate valuation metrics can apply their own multiples. I understand that valuation multiples in 2021 were unusually high. Many would argue that current valuations for software companies are still significantly higher than they should be, if compared to multiples during the past 5-10 years. However, having worked in the software industry during that period, I can say that confidence was much lower back then in the long term growth prospects for Internet infrastructure and the ability to achieve durable revenue growth rates above 50% at scale. I think the companies that provide the foundation for digital experiences and big data processing have much more potential in the next decade than was anticipated in the decade past.

While it might be hard to imagine a software infrastructure company with a $20B revenue run rate, Snowflake’s unique position at the nexus of the explosion in data creation, processing and sharing make that conceivable. I think it is quite possible that the total market for big data management reaches $1T by the end of this decade. For Snowflake to occupy 2% of that TAM is a reasonable assumption (likely more).

I realize that a $1T market size may sound far-fetched, but I would have laughed at anyone who told me 8 years ago that AWS would be a $71B business by 2022 or that cloud computing as a whole would reach $482B. In 2014, the total market for cloud computing and hosting infrastructure was about $60B. That represents an 8x increase over the past 8 years. Looking forward 8 years, we could see the same. Data storage and processing infrastructure would occupy a big slice of that total market.

Given all these factors, I am comfortable maintaining my 18% allocation to SNOW in my personal portfolio. With a leading position in a large addressable market, I think SNOW can continue a high trajectory of revenue growth. Compounding over many years should address valuation concerns and allow for gradual upside. Our next big product event will occur in mid-June at Snowflake Summit. I am looking forward to seeing what exciting developments Snowflake has planned.

NOTE: This article does not represent investment advice and is solely the author’s opinion for managing his own investment portfolio. Readers are expected to perform their own due diligence before making investment decisions. Please see the Disclaimer for more detail.


Other Resources

Here are links to some other online resources that readers might find useful:

  • Peer analyst Muji over at Hhhypergrowth has written extensively about Snowflake and platforms for data & analytics. This is must-read content for investors. Like me, he goes deep on these big data concepts and the broader landscape of technology solutions.
  • Go back and listen to Snowflake Investor Day from 2021. This will provide background for the larger strategy and the set up for an update in June 2022.
  • Catch up on podcast episodes from the Rise of the Data Cloud. These are useful as they provide context from a customer’s perspective.
  • Read the IDC FutureScape report on Worldwide Future of Industry Ecosystems 2022 Predictions. This provides additional background on why data sharing and industry ecosystems could provide a real tailwind for Snowflake.

26 Comments

  1. Nick Ellis

    Simply excellent writing! 👏

  2. Martin

    Thanks for another insightful article on SNOW. Just saw your latest portfolio. A little bit surprised NET has become the highest weight in your portfolio. I know it has very decent revenue growth of about 54%, but is it so good to take the #1 position in a portfolio? 🙂

    • poffringa

      Thanks for the feedback. First, NET is the highest allocation because the stock price has outperformed during the past year. I invested roughly the same amounts of money in DDOG, NET and SNOW over a year ago. In that time, NET has appreciated over 100%, while DDOG is up 55% and SNOW is up 2%. However, you could ask why I don’t trim NET and add to another position. While NET is growing revenue more slowly than DDOG or SNOW, I like the consistency. They deliver 50%+ revenue growth year after year. I think of all those companies, Cloudflare is most likely to continue at this rate for the longest time. This is a result of their very aggressive product release pace and the breadth of markets they can reach. I think the market has assigned NET a premium valuation for this reason. It’s the confidence in the consistency of revenue growth rates over time, versus high revenue growth now that is likely to decelerate at some point.

  3. MBH

    Hi Peter.

    Thank you for your detailed insights. Being so richly valued, a lot of execution risk is offset by the very good duo of Slootman and Scarpelli. But Slootman's track record says he doesn't stay longer than 7 years if you count Data Domain and NOW, which means he isn't going to be around Snowflake after FY2025-2026. Any thoughts? My thesis would be broken if these two left, as I said, given the high valuation.

    • poffringa

      Hi – thanks for the feedback. That is a fair point that there is an execution dependency on Slootman/Scarpelli. Granted, Slootman came out of retirement for the Snowflake opportunity, but he isn’t a young founder who could stay with the company for 20 years. I don’t know how to account for that risk, but agree it is something to watch. Maybe Bill McDermott of ServiceNow comes over…

      • MBH

        Hi Peter, One more question (I am new to this so forgive my uneducated question):
        To calculate a $940 price target for the end of 2024, you used an EV/S multiple, but for the end-of-2028 price target you are using TTM EV/FCF? Is it because the FY2025 FCF margin is still not at 30%?

        BTW, is there a site where you get TTM EV/FCF for all public companies, or are you calculating it manually?

        Thank you for your time! MBH

        • poffringa

          Hi – yes, I think it’s reasonable to assume the 30% FCF margin target for 2028. This is likely not achievable by 2024. For the 2024 estimate, I found it easier to use EV/S. I use YCharts for my source data.
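          For readers following along, here is a minimal sketch of the mechanics behind the two approaches. Only the 30% FCF margin target comes from the discussion above; every other input is a made-up placeholder, not a projection from the post:

          ```python
          # Hypothetical inputs -- placeholders for illustration, NOT the
          # actual model from the post.
          revenue_2024 = 4.0e9     # assumed forward revenue for the EV/S approach
          ev_s_multiple = 30       # assumed EV/S multiple

          revenue_2028 = 15.0e9    # assumed revenue for the EV/FCF approach
          fcf_margin = 0.30        # the 30% FCF margin target discussed above
          ev_fcf_multiple = 60     # assumed EV/FCF multiple

          # EV/S approach: enterprise value = projected revenue x sales multiple.
          ev_2024 = revenue_2024 * ev_s_multiple

          # EV/FCF approach: derive FCF from the margin target, then apply the
          # cash flow multiple. This only makes sense once the FCF margin has
          # matured, which is why it fits 2028 better than 2024.
          fcf_2028 = revenue_2028 * fcf_margin
          ev_2028 = fcf_2028 * ev_fcf_multiple

          print(f"EV 2024: ${ev_2024 / 1e9:.0f}B | EV 2028: ${ev_2028 / 1e9:.0f}B")
          ```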

  4. Dominique

    Fantastic update. Thank you.

  5. Michael Orwin

    1) Thanks for another highly informative article.

    2) I’ve heard that according to Gartner, 85% of AI initiatives fail. I haven’t seen the original version, and search results have “machine learning”, “data science” or “big data” instead of AI. Anyway, are there any likely implications for Snowflake?

    • poffringa

      No problem, Michael. Thanks for the feedback. That statistic sounds directionally correct. A lot of companies embark on AI or machine learning projects with very loose goals or understanding of how to leverage the technology. Or, they mine their data extensively, but can’t uncover any real insights that move the business (and justify the cost of the project).

      I think Snowflake is insulated from this problem because their focus is on enabling core analytics. Those are the data outcomes associated with known questions, like business metrics (what were sales last month?) or deterministic insights (customers of type A prefer product B). Machine learning, on the other hand, can be very powerful when successful, but starts without knowing whether there is even an improvement to be made over known methods. Snowflake provides the data source for these types of exercises, but isn't the technology running the machine learning models and experiments.
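      To make the "known question" idea concrete, a business metric like last month's sales can be expressed directly as a deterministic query. Here is a minimal sketch using the snowflake-connector-python package; the sales table and connection parameters are placeholders I've invented for illustration:

      ```python
      import snowflake.connector

      # Placeholder credentials -- substitute real account details.
      conn = snowflake.connector.connect(
          user="YOUR_USER",
          password="YOUR_PASSWORD",
          account="YOUR_ACCOUNT",
      )

      try:
          cur = conn.cursor()
          # A deterministic "known question": what were sales last month?
          # Assumes a hypothetical `sales` table with `amount` and `sale_date`.
          cur.execute("""
              SELECT SUM(amount) AS total_sales
              FROM sales
              WHERE sale_date >= DATEADD(month, -1, DATE_TRUNC('month', CURRENT_DATE))
                AND sale_date <  DATE_TRUNC('month', CURRENT_DATE)
          """)
          print(cur.fetchone()[0])
      finally:
          conn.close()
      ```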

      • Martin

        Following this discussion of AI, definitely wish we could see your analysis of UPST one day, if possible.
        Thanks again for your excellent articles!

  6. dmg

    I have more than a few friends, Peter, who attended B school and subsequently became investors; each states emphatically that what they learned in B school did not prepare them for becoming a successful investor.

    London Business School seems to recognize this gap, so they seek [guest] lecturers who have been successful first in business and now as investors – two entirely different and not necessarily complementary career paths. And LBS found you…
    “I recently had the honor to deliver a guest lecture to the London Business School for their Master’s in Finance program (ranked #1 in the world)…”

    You put the lie to George Bernard Shaw’s oft-quoted quip… Well, perhaps not. LBS recognizes what we all (your readers on SSI) discovered long ago. And a third career is born: Peter Offringa, pedagogue.

    You strike me as a fellow not prone to humblebragging, so I will clamber onto the roof and shout…

    CONGRATULATIONS!

    • poffringa

      Hi David – Thank you for underscoring the opportunity to address London Business School. It was truly an honor. I was happy to help provide some perspective into software company investing, besides strictly dissecting the financials.

  7. Priya

    Thanks much for a super informative writeup. +1 on dmg’s comments – The London Business School students are lucky to get these insights early.

    What are your thoughts on Databricks (often mentioned as a serious Snowflake competitor) and how their products compare and compete with Snowflake? Thanks again.

    • poffringa

      Snowflake and Databricks currently focus on different segments of the market, but are moving towards each other. Snowflake is primarily grounded in analytics processing, where business analysts examine data to report on what has happened. This is about understanding past behaviors, trends and business activity. The data is primarily in a structured format, organized in a database in a way that is optimized for certain query patterns. Databricks enables data scientists to address predictive data inquiries, which are future-facing. The data source is generally unstructured, pulled from a data lake. Predictive analytics of this kind usually involves machine learning models.

      In many organizations, the two can co-exist. However, each is expanding into the space occupied by the other. Databricks allows SQL jobs to be run against the data lake (the lakehouse), while Snowflake is enabling predictive analytics and machine learning to be run on its data sets and now supports unstructured data. It's too early to say who will win at this point. I think Snowflake has a larger market currently, but predictive analytics will consume more resources in the future.
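      As a small illustration of that overlap, a SQL job against lakehouse data in Databricks can be sketched with PySpark. The storage path and column names below are hypothetical:

      ```python
      from pyspark.sql import SparkSession

      # Minimal sketch: run an analytical SQL job directly against data
      # lake files. The bucket path is a made-up placeholder.
      spark = SparkSession.builder.appName("lakehouse-sql-sketch").getOrCreate()

      # Register raw lake files (e.g., Parquet) as a queryable view.
      events = spark.read.parquet("s3://example-bucket/events/")
      events.createOrReplaceTempView("events")

      # The same kind of analytical SQL a data warehouse would run.
      top_products = spark.sql("""
          SELECT product_id, COUNT(*) AS views
          FROM events
          GROUP BY product_id
          ORDER BY views DESC
          LIMIT 10
      """)
      top_products.show()
      ```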

  8. Prashant

    Excellent writing!

  9. sanket Rathore

    Superb article. I share the conviction in SNOW and fully agree that this may be one of the best investments for the next decade if you get in at a good price (which is now).

  10. Nick

    Thank you very much for such a well-written article. One question: based on your valuation figures, I believe that you don't take into consideration the number of fully diluted shares outstanding, which is 359M, without even considering dilution from the Streamlit acquisition. Why is that?

    • poffringa

      Thanks for the feedback. I try to keep my valuation models as simple as possible. I realize that a complete model would include consideration for share counts. When trying to value high growth companies in the past, however, I haven’t found share count modeling to be a very useful indicator. That’s my opinion. Investors are welcome to use my revenue and FCF projections and then layer in their own share count projections. The Streamlit acquisition should have minimal dilutive impact, given that the purchase price was $800M in cash and stock.

  11. Syed

    Hi Peter,
    Thanks again for an insightful update. Do you know if your LBS talk was recorded, and whether we can access/watch/listen to it?

    • poffringa

      Thanks, Syed. The recording is for internal use by LBS students. However, I can share the slide deck. You can email analysis@softwarestackinvesting.com to request a copy.

  12. Onder

    Hi there, many thanks for the excellent work. I wonder if you see any potential hot competitors that could melt the snow?

    • poffringa

      Thanks – I think competition for Snowflake could come from three areas. First, while the hyperscalers are generally cooperative with Snowflake, GCP is not. Second, companies with a foundation in data lakes and machine learning could increasingly move towards Snowflake's core workloads in structured, analytical data processing; Databricks comes to mind here. Third, the legacy data warehouse providers (Oracle, Teradata, IBM) are not standing still, and might improve their feature sets enough to delay migrations to Snowflake.

  13. Paul Dickwin

    This is certainly not within my circle of competence, but I would love to see an analysis of HubSpot. There’s probably something there. It seems to have all of the properties of a great hypergrowth company. My company started using it and I became more and more interested in it. It could be the next Salesforce.

  14. graliontorile

    Hi, I think your site might be having browser compatibility issues. When I look at your website in Safari, it looks fine, but when opening in Internet Explorer, it has some overlapping. I just wanted to give you a quick heads up! Other than that, fantastic blog!

    • poffringa

      Thanks – it uses a standard WordPress theme, so I’m not sure how much control I have over that. I will take a look nonetheless. Thanks for the feedback.