Canaan’s Rayfe Gaspar-Asaoka: Why The Next-Gen Tech Stack Will Be Built On AI
As the well-deserved hype around AI/ML applications drives R&D and funding to all-time highs, there are new opportunities for players building infrastructure. Rayfe Gaspar-Asaoka, partner at Canaan, assesses the technological drivers that led to this inflection point and why foundational models like GPT-3, Stable Diffusion, and others are opening the gates to multi-layered, AI-driven tech stacks.
KEY POINTS FROM RAYFE GASPAR-ASAOKA'S POV
Why is AI/ML infrastructure such an important category moving forward?
Foundation models have democratized access to AI/ML, and there will be a flood of new use cases and applications. The pace of research, development, and funding interest for AI/ML is increasing in stride. “Mega-tech companies like Google, Microsoft, Meta, and Amazon are leveraging their massive datasets and compute capabilities to publish large, fully trained models that companies and startups can take and tailor in order to build next generation applications on top of – flipping the way companies use AI/ML,” says Gaspar-Asaoka. Rather than building from scratch with limited resources and costly unit economics, startups can leverage foundation models to kickstart development, similar to what cloud services like AWS and Azure did to speed up software development.
AI/ML will be built into every tech stack as businesses look to develop robust offerings in increasingly crowded markets. “I see the space of AI/ML becoming ubiquitous in the same sense that people today do not think about companies as cloud companies, but rather that the application above is scalable, fast, and efficient,” says Gaspar-Asaoka. Similarly, AI/ML will permeate every application and business in the future. “It will be a fundamental part of the technology stack that is ingrained as something that end users expect as part of their baseline experience with a product or application,” he says.
What are the business models, applications, or use cases that might be attached to this category?
Foundation models will enable the creation of ‘scary good’ content for things like advertising, social media, code generation, and more. There’s already a lot of buzz around applications ranging from writing and content creation from vendors like Jasper, to code generation with GitHub’s Copilot, to video editing and creator tools via Runway, to note taking and meeting summarization from players like Fireflies and Mem.ai. “The inherent flywheel of more users equals more data equals better models equals more users suggests this space will favor largely product-led, bottoms-up business models,” says Gaspar-Asaoka.
Decision making power will shift from executives to developers, enabling new dynamics and GTM strategies. AI/ML models are continuously running, and need to be tweaked, modified, and re-deployed within an application on an ongoing basis – placing developers in a prime position to discover and implement the infrastructure tools and platforms that will best suit their environment. “Companies building infrastructure for AI/ML will need to think about how to engage the developer and get on their radar, because they will ultimately be the biggest advocates of your product and a key part of the GTM strategy,” he says.
Companies building infrastructure for AI/ML will need to think about how to engage the developer and get on their radar.
What are some of the potential roadblocks?
AI/ML research has largely been a cooperative, open-source space for community-driven innovation, and a reversal of this sentiment could hinder interoperability and speed. “The biggest benefit to AI/ML right now is the open and transparent ecosystem that exists today. I think this is also one of the biggest areas of potential risk as well. If the large tech companies decide to move up the stack and own more of the applications built on top of modern AI/ML, then there is a risk that they become a more walled garden ecosystem, creating fragmentation, a lack of interoperability, and ultimately reducing the speed of innovation in AI/ML,” says Gaspar-Asaoka.
IN THE INVESTOR’S OWN WORDS
I am most interested in continuing to double down on investments in the field of artificial intelligence and machine learning, especially at the infrastructure layer for developers, ML engineers, and data scientists.
We are at a critical inflection point in terms of the fundamental capabilities of AI/ML. Historically, AI/ML was largely restricted to well-defined, structured problems with lots of data. For example, consider building a recommendation algorithm for an e-commerce shop. There was a clear objective – to recommend a specific product or good – and you were largely limited by the scale of the dataset. The more users and traffic flowed to the website, the better the dataset for training a bespoke, custom AI/ML model for your needs.
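To make that contrast concrete, here is a minimal, hypothetical sketch of the kind of well-defined, data-limited problem described above (the product names and a co-occurrence-counting approach are purely illustrative, not any specific production recommender):

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories: more site traffic means more rows
# here, and directly better co-occurrence statistics - the data-scale
# limitation described above.
orders = [
    ["laptop", "mouse"],
    ["laptop", "mouse", "keyboard"],
    ["keyboard", "monitor"],
    ["laptop", "keyboard"],
]

# Count how often each ordered pair of products is bought together.
co_counts = Counter()
for order in orders:
    for a, b in combinations(sorted(set(order)), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(product, k=2):
    """Return the k products most often co-purchased with `product`."""
    scores = {b: n for (a, b), n in co_counts.items() if a == product}
    return [p for p, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:k]]

print(recommend("laptop"))  # ['mouse', 'keyboard']
```

The clear objective (recommend a product) and the direct dependence on dataset size are exactly what made this class of problem tractable for earlier, pre-foundation-model AI/ML.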
Today, large tech companies like OpenAI, Google, Microsoft, Amazon, Meta, and Stability AI have taken some of the most cutting-edge research in AI, such as transformers, and used their scale advantages in compute resources and data to pre-train very large AI/ML models, releasing a set of foundational models like GPT-3, Whisper, BERT, Meta's OPT-175B, and Stable Diffusion.
Specifically, I think foundational models, including large language models (LLMs), have set the stage for the next wave of innovation across the entire enterprise stack, from end-user applications to core developer infrastructure. I see this as the third wave of AI innovation: from traditional ML, to deep learning, and now to foundation models. Each wave has become increasingly generalizable to a broader array of use cases and problems.
Q: What do other market participants or observers misunderstand about these categories?
A: One of the biggest misconceptions is that AI/ML is magic – that you can just plug it in and it works. While AI has become more generalizable and applicable to a wider set of use cases, there is still a large amount of upfront tuning and implementation work needed, as well as ongoing monitoring and maintenance to ensure that your AI/ML models behave correctly and continue to perform over time.
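The ongoing monitoring mentioned here can be as simple as tracking a model's rolling accuracy and flagging drift. A minimal, hypothetical sketch, where the window size and threshold are illustrative assumptions rather than values from any real system:

```python
from collections import deque

class DriftMonitor:
    """Flag when a model's rolling accuracy drops below a threshold.

    The window size and threshold are illustrative assumptions, not
    values from any particular production system.
    """
    def __init__(self, window=100, threshold=0.8):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def record(self, prediction, actual):
        # Store whether this prediction was correct.
        self.window.append(prediction == actual)

    def accuracy(self):
        return sum(self.window) / len(self.window) if self.window else 1.0

    def drifted(self):
        # Only alert once the window has enough samples to be meaningful.
        return (len(self.window) == self.window.maxlen
                and self.accuracy() < self.threshold)

monitor = DriftMonitor(window=10, threshold=0.8)
for pred, actual in [(1, 1)] * 8 + [(1, 0)] * 2:
    monitor.record(pred, actual)
print(monitor.drifted())  # False: rolling accuracy is exactly 0.8
```

A check like this, wired into a redeployment pipeline, is one small example of the "ongoing monitoring and maintenance" that keeps a deployed model performing over time.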
Q: What can you say about the timing?
A: The timing is now. I think every business needs to think about how it can leverage AI/ML in its application today. It is a top priority because there are compounding effects to AI/ML, especially regarding usage data and model quality. Laggards in the space that don’t act in the near term are going to be left behind.
The companies that build the necessary infrastructure to support next-generation AI businesses will be large companies with truly horizontal, sticky platforms.
WHAT ELSE TO WATCH FOR
The most exciting opportunities will be in infrastructure layers such as security, data management, and monitoring. Foundation models like OpenAI’s GPT-3 are just one piece of the infrastructure stack, according to Gaspar-Asaoka. AI/ML continues to evolve as it trains on new and updated data, which will necessitate an entire ecosystem of adjacent infrastructure services, including monitoring, security, and data management, as well as developer pipeline tools that allow for rapid redeployments. “Whereas I see the applications being built on AI/ML becoming highly verticalized and use-case specific, the companies that build the necessary infrastructure to support these next-generation businesses will be large companies with truly horizontal, sticky platforms,” says Gaspar-Asaoka.