CapitalG’s Jill Chase: Investing In and Navigating the Complexities of the AI/ML Ecosystem
Thesis Briefs, CapitalG, Jill Chase, AI/ML, BYOML, MLaaS, GPT-3, Adept, Canva, Character.AI, Cohere.ai, Copy.ai, Figma, Jasper, OpenAI

Terra Nova Insights Team,

ABSTRACT

Research and innovation have been happening in AI/ML for decades, but we've passed a turning point: recent advances have unlocked transformational consumer engagement. The result is a massive influx of new companies in the AI/ML space spanning a variety of delivery models. Jill (Greenberg) Chase, investor at CapitalG, assesses the feasibility of several business models amid a massive wave of funding into the space. She predicts large, standalone businesses will be built if they can generate true technical differentiation and proprietary workflows.

KEY POINTS FROM JILL CHASE'S POV

Why is everyone suddenly paying so much attention to AI/ML?

  • The past few years have seen notable developments that have enabled a step-function change in the power of machine learning: improved hardware, access to capital and compute, new model architectures, and the proliferation of data have created an inflection point for AI/ML use cases. “First,” says Chase, “there’s more hardware being purposefully built to be performant for machine learning. Second, both excitement in the venture landscape and access to scalable compute via cloud have increased access to capital and compute power. While it's still expensive, prices have come down and will continue to do so. Third, new model architectures, especially transformer models, have enabled more performant and cost-effective innovations.” Finally, she says, the proliferation of data is improving training and model specificity.

What interesting business models might pop up that leverage transformational AI?

  • The true power of this next generation of AI is that it can be leveraged to transform effectively all software, including:
    • Consumer software that powers novel interactions between consumers and AI. “Imagine things like an intelligent chatbot helping you shop,” says Chase, “or things like Character AI, built by Noam Shazeer, who came out of Google Brain. Consumers are interacting with AI in a completely different way than before.”
    • Software development applications that have tremendous value in light of expensive and hard-to-find software engineers. “This is a category,” Chase says, “where I think it's incredibly clear that if you can build a better, more performant, more cost-effective model that helps with either autocomplete or code assistance, there's a tremendous unlock in terms of ROI and cost.”
    • Enterprise software features that quickly multiply existing tech-stack value. “It's pretty clear that with the advent of more open-source machine learning models, large companies will depend on AI features to make their products magnitudes more powerful,” says Chase. Examples could include customer support, or AI features appended onto apps like Canva or Figma.

The AI/ML space is bifurcated by delivery model. Providers will deliver software via either 'Build Your Own ML (BYOML)' or 'Machine Learning as a Service (MLaaS)'

Jill Chase

  • Chase likes to think of the AI/ML space as bifurcated by delivery model. Providers will deliver software via either what she calls “Build Your Own ML (BYOML)” or “Machine Learning as a Service (MLaaS)”:
    • BYOML offerings provide the tools that ML engineers need to create their own machine learning applications. A whole value chain of services exists across BYOML, from scoping, model selection, and data gathering to training and model management. This category has generated many companies over roughly the past three to five years, each typically targeting a specific part of the value chain but with the long-term vision of becoming a platform for machine learning engineers.
    • MLaaS offerings abstract away infrastructure and simply offer the output of machine learning. Chase sees three sub-categories:
      1. Model APIs (e.g. Cohere and GPT-3) where organizations build the large foundation model themselves and then offer it up via API for others to build end applications on top.
      2. End applications (e.g. Jasper and Copy.ai), where organizations build on top of the API players and focus their differentiation on fine-tuning models or creating compelling user workflows.
      3. Full-stack applications (e.g. Character and Adept), where organizations build their own foundation models and the user application on top. According to Chase, the more end-use oriented the models and training data are, the better the end product will be. (A brief illustrative sketch of how these layers differ follows this list.)
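To make the layering concrete, here is a minimal, hypothetical Python sketch of the difference between the model-API layer and the application layer. The endpoint URL, response schema, model behavior, and prompt template are illustrative assumptions, not any specific vendor's API; a full-stack player, in Chase's framing, would additionally own and train the model sitting behind the endpoint.

```python
import requests

# Hypothetical hosted-model endpoint; real providers (OpenAI, Cohere, etc.)
# each have their own URLs, auth schemes, and client libraries.
API_URL = "https://api.example-model-provider.com/v1/generate"
API_KEY = "YOUR_API_KEY"  # placeholder credential

def call_model_api(prompt: str, max_tokens: int = 200) -> str:
    """Model-API layer: the provider owns the foundation model;
    callers just send a prompt and pay per token."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt, "max_tokens": max_tokens},
        timeout=30,
    )
    resp.raise_for_status()
    # Response schema is illustrative; real APIs differ.
    return resp.json()["text"]

def draft_marketing_copy(product: str, audience: str) -> str:
    """Application layer: differentiation lives in the workflow and
    prompt / fine-tuning choices, not in the underlying model."""
    prompt = (
        f"Write three short marketing taglines for {product}, "
        f"aimed at {audience}. Keep each under 12 words."
    )
    return call_model_api(prompt)

if __name__ == "__main__":
    print(draft_marketing_copy("a collaborative whiteboard app", "design teams"))
```

In this sketch the application-layer company is a price taker on every `call_model_api` invocation, which is exactly the cost-structure concern Chase raises below; a full-stack provider would replace that call with inference against a model it trained itself.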

What are some of the potential roadblocks?

  • There is a difference between amazing technology and an amazing business model. Chase worries that with the influx of capital funding novel technology at high valuations, there isn’t quite as much emphasis on generating a sustainable business model as there should be. “One result of the recent influx of capital and excitement in this space is that a lot of very early companies are getting heavily funded,” she says. “That doesn't have to be a problem, but it does pose challenges. Given how expensive compute is, this is a category where figuring out the right business model is hard and really matters - the cost structure really differs from that of a SaaS application. I worry that, with this massive influx of capital coming from all angles, there's going to be some businesses that forget to set up a compelling business model up front, and that's going to be really hard to go back and change.”

Given how expensive compute is, this is a category where figuring out the right business model is hard and really matters — the cost structure really differs from that of a SaaS application

Jill Chase


IN THE INVESTOR’S OWN WORDS

Jill (Greenberg) Chase, via email correspondence

There is no doubt that the technical innovation in the AI/ML space over the last decade has led to massive opportunity for real consumer and enterprise use cases in the next decade. Though this revolution is the result of decades of research and innovation, it’s still extremely early in its application — that’s what makes the space so fun for founders and investors.

We believe the space can be thought of in two categories, largely bifurcated by delivery model. We call these two categories "Build your own ML (BYOML)" and "Machine Learning as a Service (MLaaS)." BYOML companies focus on building tools for machine learning engineers to build their own models and applications, typically selling to large enterprises. MLaaS companies are abstracting away infrastructure and giving consumers and companies the outputs of the models themselves.

While there are pros and cons to each approach, I believe that big businesses will be built in both categories. They will fundamentally differ, however, in how you build them, who their end customers are, and the economics of their business models.


MORE Q&A

Q: Is there a critical threshold that differentiates full stack companies from improved application use-case providers?

A: The true full-stack providers have two advantages (if the unit economics work): first, they can train a more performant and cost-effective model because they can train on more specific data for their end-use case; second, they get deeper data flywheel advantages because they can collect data from usage and immediately feed it back into making the model more performant.

That said, training these models from scratch is both hard and expensive, so it’s critical that the effort be applied to a use case that generates enough ROI for end customers to make the business model work. Today, the application use-case providers are much faster to market because they can rely on existing models and focus on compelling workflows, but they may face challenges with unit economics and differentiation over time.

What will likely happen is this: the minute these players gather enough proprietary data from their platform to train their own small model on that specific data, they will become full stack.
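To ground that point, here is a minimal, hypothetical sketch of what "training their own small model on that specific data" could look like: supervised fine-tuning of a small open causal language model on a corpus of proprietary prompt/completion pairs collected from product usage. The base model, file path, data format, and hyperparameters are illustrative assumptions, and the Hugging Face Trainer is only one common way to do this.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Illustrative choices: a small open model standing in for "their own small model",
# and a hypothetical JSONL file of {"prompt": ..., "completion": ...} records.
BASE_MODEL = "gpt2"
DATA_FILE = "proprietary_pairs.jsonl"

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

dataset = load_dataset("json", data_files=DATA_FILE, split="train")

def to_text(example):
    # Concatenate prompt and completion into a single training string.
    return {"text": example["prompt"] + "\n" + example["completion"]}

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=512)

tokenized = dataset.map(to_text).map(
    tokenize, remove_columns=dataset.column_names + ["text"]
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned-small-model",
        per_device_train_batch_size=4,
        num_train_epochs=3,
    ),
    train_dataset=tokenized,
    # Causal-LM objective: labels are copies of the input ids,
    # and the model applies the next-token shift internally.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```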

Q: Is it possible for players at the application layer to gain enough differentiation through fine tuning or collecting proprietary data that they can emerge as full stack providers?

A: Right now, we have model API players, application layer companies — who are doing some fine tuning on the models — and full-stack players who are building their own large language model and the UI on top.

My prediction is that the best application-layer companies will differentiate on one, or ideally both, of the following: (1) how they collect their data and then fine-tune their models, or (2) workflows that are particularly sticky and hard to switch from.

This has to happen because, from a differentiation perspective, it's obviously not interesting if there are six companies that are all building a UI on the exact same GPT-3 model. Differentiation is especially critical because pricing pressure on the application will be a huge challenge with the substantial cost coming from inference. If you don't own both the back-end infrastructure and model, you are a bit of a price taker.

The reality is that the space is still so early, the technology so transformative, and the opportunity so game-changing, that we’re all still seeing how things evolve.

Because so much of AI/ML is open source, there's going to be a lot of value accrual for tech-savvy incumbents. If consumers want an AI appendage to existing tech-stack tools, incumbents will be several steps ahead.

Jill Chase


WHAT ELSE TO WATCH FOR

  • End-user preferences will be crucial in determining whether value in the space accrues to incumbents or new, independent companies. “Because so much of AI/ML is open source, there's going to be a lot of value accrual for tech-savvy incumbents,” says Chase. While the space should be big enough for large independent companies to be built, there are several categories where the end consumer will dictate whether new companies or incumbents have the advantage. “If end consumers want a 100% solution, there's an opportunity for building a large company. If consumers want 80%, or an appendage to existing tech-stack tools, incumbents will be several steps ahead. AI software engineering is an example of an area where a large independent company could be built because developers care about the best model, the best AI code completion or the best AI software assistant, whether that's with GitHub or not.”

STARTUPS MENTIONED IN THIS BRIEF

Adept, Canva, Character.ai, Cohere.ai, Copy.ai, Figma, Jasper, OpenAI


The 2022 EVC List honors the top 50 rising stars in venture capital. Terra Nova’s Thesis Brief series showcases each investor’s insights and category expertise.

Related Posts

Costanoa's John Cowgill: Overlooked Bets On AI in the Enterprise

John Cowgill, Partner at Costanoa Ventures, points to some of the more overlooked and contrarian business models within this closely watched ecosystem. As the space matures, competition between emerging companies and quick-to-adopt incumbents will determine the winners...