
What to Watch in AI

Ten companies AI founders and investors are keeping a close eye on.

Artwork by Shira Inbar

Brought to you by A.Team

The world’s best product builders are not for hire.

The most important resource in an advanced economy is talent. But great talent is hard to find. Increasingly, the world’s top product builders are no longer looking for full-time jobs.

If you want to build great things without increasing full-time headcount, use A.Team. The company’s platform has brought together some of the world’s most talented engineers, designers, and product managers as cohesive teams.

Post a summary of your business's most pressing problem and get matched with an exceptional team in days. Once your team is deployed, the A.Team platform makes it easy to manage contracts, payment, and scale resources up or down.

As Mario wrote of A.Team: “The company has created a platform powerful enough to solve the difficult problems McKinsey advises on but is equipped to build solutions. It makes the best of Silicon Valley – the energy, experimentation, iteration, and invention – accessible to the rest of the corporate world.”

To start building transformative products with your dream team, sign up here.

You can listen to an audio version of The Generalist on Spotify or Apple Podcasts.
ACTIONABLE INSIGHTS

If you only have a few minutes to spare, here’s what investors, operators, and founders should know about the most exciting AI startups. 

  • Predict the future. We all enjoy playing Nostradamus every once in a while. But what if you could forecast the future with a good chance of being right? One contributor suggests AI is making this increasingly possible thanks to companies like Kumo. The predictive intelligence platform allows users to “query the future,” forecasting how customers may react to new products or business changes.
  • The augmented expert. Marketers already leverage large language models (LLMs) to create copy, blog posts, and more. But how can we ensure AI’s power reaches the domains of law and medicine? Two submissions highlight companies making inroads, helping doctors and lawyers operate more efficiently and effectively. 
  • Cloud studios. Once upon a time, if you wanted to capture 3D footage, you needed bespoke, expensive hardware. Recent innovations mean that’s no longer the case. With nothing more than a smartphone, users can now capture photorealistic 3D imagery that can be used for gaming assets, e-commerce product shots, or creative projects. Increasingly, the power of a megastudio is accessible from anywhere. 
  • The power of product savvy. Simply leveraging AI is not enough; you must also wrap the technology in a compelling product. One company highlighted for this ability is Sana, an enterprise knowledge management platform. According to one contributor, Sana leverages AI subtly but effectively, demonstrating how it can be woven into a cohesive product. 
  • Science speaks. The world is full of vernaculars humans have not been able to decipher. That includes the “languages” of life sciences like biology and chemistry. Companies like Enveda translate the scientific realm's vocabulary, grammar, and semantics – and allow humans to “speak back” by developing novel chemistries.

The artificial intelligence renaissance has officially hit warp speed. It has not even been five months since our last edition of this series, yet the space has radically shifted. Back in the quaint days of mid-November, ChatGPT had yet to launch, let alone GPT-4. Bing was still a brainless search laggard, and to all within the Grand Googleplex, the word “Bard” provoked no fever dreams or night terrors. 

Increasingly, this is how our world seems to work: in fits and starts, in spurts of mind-torching acceleration and gorgeous contraptions, in obsolescences that occur over a long lunch break.

Our “What to Watch” series attempts to stay on top of technology’s rapidly advancing sectors. While no one can hope to know every innovation in AI (this is an energetic organism, moving in a hundred directions at once), we hope that this surfaces compelling companies and trends before the broader market has noticed. To do that, we’ve asked some of AI’s most impressive investors and founders to select the companies and trends they’re monitoring closely. (In a fun coincidence, one investor independently picked a contributing founder’s company.)

These are their picks. 

Note: Long-time readers will know we intentionally don’t preclude investors from mentioning companies they’ve backed. The benefits of increased knowledge and “skin in the game” outweigh the risk of facile book-talking. Across contributors, we do our utmost to select for expertise, originality, and thoughtfulness.

Harvey: Your legal assistant

There is enormous innovation at multiple layers of the AI stack. For example, various teams are building interesting new bespoke models for coding (Magic), image generation, and core multimodal language models (OpenAI, Anthropic, Google, and others). Similarly, there is an explosion of tooling companies, from LangChain to LlamaIndex to Chroma. Some companies are hybrid across these boundaries (like ChatGPT from OpenAI), while others are building standalone applications.

Common characteristics of applications that are seeing fast adoption include:

  1. Focusing on what’s new. Apps that leverage the unique advantages of LLMs or other models often see strong uptake. Builders are asking themselves: What can this technology uniquely do that prior tech cannot?
  2. Reducing drudgery. Applications that replace repetitive human labor or core workflows with light machine intelligence are compelling. Reducing or eliminating painful manual work holds obvious attraction. 
  3. Augmenting the human. Full automation may not be possible in many instances. As a result, some applications have taken a “human in the loop” approach, focusing on augmenting user capability rather than taking it over completely. Humans are used to correct hallucinations or provide a qualitative view on accuracy or wording.

Like GitHub’s Copilot, Harvey possesses all three characteristics. But rather than assisting with code generation, Harvey focuses on the legal world. Harvey helps lawyers perform tasks in due diligence, litigation, research, and compliance. It is off to a good start: the firm has already landed deals with behemoths like PwC and Allen & Overy.

Amidst a sea of AI startups, Harvey stands out for a few key reasons:

  1. The right team. Harvey’s founders understand the problem from both a use case and technology perspective. Co-founder Winston Weinberg practiced antitrust litigation at O’Melveny, while CEO Gabriel Pereyra worked as a research scientist at DeepMind.
  2. A thoughtful approach. Harvey has taken a differentiated approach to assessing what legal teams need. Weinberg and Pereyra focused on a few key early problems in due diligence, seeing strong customer feedback. With iteration, they’ve achieved deep user engagement.
  3. Speed and focus. After choosing their problem set, Harvey’s founders have built rapidly to serve a specific customer need and use case. Harvey engaged tightly with specific customers like Allen & Overy to ensure it was building against real customer pain. This uncovered more use cases to deepen their workflows.

The legal and compliance world is a great example of a domain that will be remade with AI – for everything from litigation to drafting insurance claims to filing with the courts on behalf of human clients.

And it won’t stop there. Specialized AI assistants – and applications that can synthesize data and provide answers – have considerable potential in medicine (see: medPaLM), finance, marketing, sales, accounting, and beyond.  

Elad Gil and Vince Hankes, Partner at Thrive Capital

Kumo: Know the future

I met Jure Leskovec at Stanford University’s NVIDIA auditorium. The computer science professor was giving a lecture as part of his “Mining Massive Datasets” course to some of the brightest young engineers on the planet. The graduate students hung on his every word, as did I. Jure is a rare combination of technically brilliant and precisely articulate. Part of what was so compelling about Jure’s talk was his provocative claim: that in the years to come, AI-powered systems would be able to predict the future.

Today, the AI revolution that Jure theorized is becoming real. Modern businesses extract powerful insights from massive trails of data. Everything from customer transactions (sales figures and support tickets) to internal operations (finance figures and administration) to external signals (web traffic and social media) can be turned into useful knowledge.

Jure, Vanja Josifovski (formerly CTO at Pinterest and Airbnb), and Hema Raghavan (ex-Head of Growth AI at LinkedIn) came together to build Kumo.

Using Kumo, companies can query their future just as they might rely on a database to search their past. Instead of analyzing what happened last year, Kumo allows customers to see what may happen next year. The impact of such a product may be profound: businesses are no longer limited to analyzing past events; they’re better able to anticipate new opportunities. Users will still want to trace data that shows what went wrong, but they will use Kumo to see what can go right.

For example, a traditional customer relationship management database houses information like customer names, account numbers, and transaction history. Kumo, by contrast, provides access to a database that predicts how much a given customer will spend over the next year, which new products are most likely to help them, and the key factors that may cause the customer to leave for a competitor.

Kumo uses Graph Neural Networks (GNNs) to identify patterns and relationships in a company’s data. GNNs have powerful predictive capabilities and are ideal for analyzing complex, interconnected data that cannot be easily represented using traditional statistical or machine learning (ML) techniques. 
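
To make the idea concrete, here is a minimal sketch of how a GNN can learn a customer-level prediction (say, churn) from relational data expressed as a graph. It is purely illustrative – not Kumo’s implementation – and assumes PyTorch and PyTorch Geometric; the toy graph, features, and labels are invented for the example.

import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# Toy graph: six customer nodes with four features each; edges are interactions.
x = torch.randn(6, 4)
edge_index = torch.tensor([[0, 1, 2, 3, 4, 5],
                           [1, 0, 3, 2, 5, 4]], dtype=torch.long)
y = torch.tensor([0, 1, 0, 1, 0, 1])  # invented labels: 1 = churned, 0 = retained
data = Data(x=x, edge_index=edge_index, y=y)

class ChurnGNN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(4, 16)   # aggregate information from neighboring nodes
        self.conv2 = GCNConv(16, 2)   # output logits: churn vs. no churn

    def forward(self, data):
        h = F.relu(self.conv1(data.x, data.edge_index))
        return self.conv2(h, data.edge_index)

model = ChurnGNN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(100):
    optimizer.zero_grad()
    loss = F.cross_entropy(model(data), data.y)
    loss.backward()
    optimizer.step()

print(model(data).argmax(dim=1))  # predicted outcome for every node in the graph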

The application of GNNs will drive massive transformations over the coming decade. Companies will overhaul their operations to orient around future customer behavior. Accurately predicting such behavior allows companies to offer customized product recommendations, tailored promotions, and targeted communication. Predictive analytics will also be applied across a broader range of use cases – everything from fraud detection and product design to planning and forecasting.

Since partnering with Kumo, I have witnessed first-hand what an ambitious, brilliant, and determined team working at the forefront of AI can deliver.

Konstantine Buhler, Partner at Sequoia Capital

ReflexAI: Training frontline support

Many founders are building applications on top of LLMs just because they became such powerful platforms over the past year. But I’ve never been interested in tourists, entrepreneurs who’ve started a company for the sake of starting a company, or because they spotted an exciting trend. I love builders who’ve been building for a while, who’ve uncovered a unique insight, and who realize they can’t do anything but start a particular company. 

That’s the flavor of builders that Sam Dorison and John Callery are, the co-founders of ReflexAI. As leaders at The Trevor Project, an organization that does critical work in suicide prevention among LGBTQ youth, they started tinkering with OpenAI’s early models, such as GPT-2, in 2019. They realized the potential to use these models to help train full-time agents and part-time volunteers in crisis conversations and spent a couple of years building software to do that. Their Crisis Contact Simulator was named one of TIME’s best inventions of 2021, training thousands of counselors to better support kids with their mental health, especially in times of need, saving many lives. And then, in 2022, as the world realized the power of GPT-3, Sam and John understood that there was a bigger opportunity: to take their learnings from The Trevor Project and apply them to build AI-powered support tools to train, develop, and empower frontline teams across organizations and companies. 

ReflexAI is moving quickly, with early launch partners like Google.org and the Department of Veterans Affairs. Via simulations, their software helps train agents to have challenging conversations and gives actionable feedback to help them improve how they interact over time. It’s built by a mission-driven team that has been the customer and has a multi-year track record of working with these models (including GPT-4, to which they had early access from OpenAI). They are uniquely positioned to solve a tough problem. And I can’t wait to see their impact on high-stakes call center operations and beyond.
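
For readers curious what the underlying pattern looks like, below is a hedged sketch of the general “simulated conversation plus feedback” loop – not ReflexAI’s actual system. It assumes the OpenAI Python SDK and an API key; the prompts and rubric are invented for illustration.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SIMULATED_CALLER = (
    "You are role-playing a caller in distress for a training simulation. "
    "Stay in character and respond briefly to the trainee's last message."
)

def caller_reply(transcript: list[dict]) -> str:
    """Generate the simulated caller's next turn in the conversation."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": SIMULATED_CALLER}] + transcript,
    )
    return response.choices[0].message.content

def coach_feedback(transcript: list[dict]) -> str:
    """Review the trainee's turns and return concrete coaching suggestions."""
    rubric = (
        "Review the trainee's messages for active listening, empathy, and "
        "appropriate safety language. Give two specific, actionable suggestions."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": rubric},
            {"role": "user", "content": str(transcript)},
        ],
    )
    return response.choices[0].message.content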

Nikhil Basu Trivedi, co-founder and General Partner of Footwork

Together: A decentralized cloud for AI

AI is arguably having its “Linux moment.” Today, there is an unsettled debate on whether open or closed AI models will take the lead in the market, a parallel of the Microsoft Windows versus Linux debate that started in the late 1990s. The answer ultimately became “both,” but Linux’s open model dominated high-end computation. That’s different from the result of the competition in mobile operating systems, where the walled garden of Apple iOS took the lead over Google’s more extensible Android. 

Many organizations have focused on developing the open model and data ecosystem for AI over the past few years, including Hugging Face, Meta, Runway, and Stability, alongside research organizations such as EleutherAI, CarperAI, LAION, and many academic institutions. They are helped by companies that simultaneously invest in open and closed AI ecosystems. Google, Microsoft, Nvidia, and others have significantly contributed to open source ecosystems through models and frameworks such as TensorFlow, Jax, DeepSpeed, Megatron, and so forth – even while they develop proprietary products.

While models and data in AI have opened up, large-scale compute in the field still relies on just a few large cloud providers. Does compute have to be proprietary? Bitcoin, Ethereum, and other crypto networks proved that decentralized pooling of large shared compute resources is possible. What if we could recreate these scaled networks but for higher-value workloads such as LLM training and inference?

Together is attempting to do exactly that. The startup is building a decentralized cloud combining data, models, and computation to enable researchers, developers, and companies to leverage the latest advances in artificial intelligence.

One challenge to opening up compute is the rising cost of training foundation models. We have seen figures rise from tens of millions of dollars to hundreds of millions, if rumors are to be believed. Costs may soon crest $1 billion. Critically, these costs may act as a barrier to entry, skewing the industry towards centralization, since only a few large model and compute providers can afford to operate. It’s an echo of how the high-end semiconductor market has evolved.

Together could ensure a vibrant open-source ecosystem by reducing access and cost barriers, enabling a wider range of companies, research institutions, and individuals to contribute. Starting with GPT-JT and OpenChatKit (an open-source ChatGPT), the Together Cloud has demonstrated that users can train foundation models on commodity, heterogeneous hardware with network bandwidths 100x slower than traditional data centers. The last decade of technology relied on cloud services; the AI revolution may be built on providers like Together.
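
One way to train over networks that are 100x slower is simply to communicate less often. Below is a minimal sketch of local SGD, a standard communication-reduction technique – shown here for illustration only, not as Together’s actual method – assuming PyTorch with torch.distributed already initialized across the participating workers.

import torch
import torch.distributed as dist

def local_sgd_round(model, optimizer, data_iter, loss_fn, local_steps=32):
    # Each worker takes many optimizer steps on its own shard of data...
    for _ in range(local_steps):
        inputs, targets = next(data_iter)
        optimizer.zero_grad()
        loss_fn(model(inputs), targets).backward()
        optimizer.step()
    # ...and parameters are averaged across workers only once per round,
    # cutting network traffic by roughly a factor of `local_steps`.
    world_size = dist.get_world_size()
    with torch.no_grad():
        for param in model.parameters():
            dist.all_reduce(param.data, op=dist.ReduceOp.SUM)
            param.data /= world_size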

Below is an excerpt from Import AI, which analogizes the importance of Together and the open-source ecosystem:

Together and LAION and Eleuther all represent One Big Trend; a desire for a decentralized AI ecosystem where open source models are trained by disparate groups on increasingly distributed compute. There’s echoes of “The Cathedral and the Bazaar” here, where the builders of cathedrals (DeepMind, OpenAI, et al) have access to large amounts of compute and centralized teams, while the people of the Bazaar (Eleuther, LAION, etc) have access to fewer resources but a larger collective intelligence enabled by bottom-up experimentation. One of these approaches will be first to build something we’d all call superintelligence and the political ramifications of which approach is more successful will be vast.

Brandon Reeves, General Partner of Lux Capital

PostEra: Rapid drug discovery

AI is taking off in drug discovery. Companies are racing to produce built-for-purpose, AI-designed drugs with optimized binding or function. These organizations rely on models that search the entire, near-infinite space of possible molecule structures. This process offers exciting possibilities and raises new challenges. For example, as AI gets better at predicting structures with desirable drug properties, increasingly, the question will become: how do we make them?

In driving a small molecule drug toward clinical trials, the bulk of the time, effort, and money is not spent on the initial discovery of the drug. Instead, it’s spent on lead optimization, in which medicinal chemists take the initial hit structure and iteratively design, synthesize, and test variants. Chemists do this to find versions with increased potency and specificity, and less toxicity. This “design-make-test” optimization cycle can take years and cost millions per drug program.

Most time and money are spent on the “make” phase within the design-make-test cycle. Each of the hundreds of different variations of the molecule must be individually synthesized by a highly-trained synthetic chemist, with each variation taking a week or more. This stymies rapid iteration and acts as a global bottleneck, slowing promising drugs from reaching clinics.

PostEra is one company addressing the challenges in this space. Co-founded by Aaron Morris and Cambridge professor Alpha Lee, PostEra gained early prominence in 2020 thanks to its “COVID Moonshot” project. The initiative used PostEra’s ML platform to prioritize antiviral ideas from over 400 scientists worldwide. It was a powerful example of AI and crowdsourcing working together, and it succeeded in identifying promising candidates for further development.

PostEra’s drug discovery platform, dubbed “Proton,” takes a holistic approach to lead optimization by more explicitly incorporating synthetic pathway prediction into the generative ML. It focuses on removing the bottleneck of chemical synthesis in the design cycle for faster iteration and testing more molecules. Proton leverages the company’s “Manifold” software system, which suggests practical synthetic routes for arbitrary chemical structures. Manifold is based on their team’s early work using language models to predict the results of chemical reactions. PostEra uses its platform for its internal pipeline and a strategic partnership with Pfizer.
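
To illustrate the “language models for chemistry” idea the team built on, here is a small sketch of the common framing of reaction prediction as translation: reactant SMILES strings in, product SMILES out. It is not PostEra’s code; it assumes RDKit for parsing SMILES, and the tokenizer and example reaction are simplified for illustration.

import re
from rdkit import Chem

def canonical(smiles: str) -> str:
    """Normalize a SMILES string so the same molecule always reads the same way."""
    return Chem.MolToSmiles(Chem.MolFromSmiles(smiles))

def tokenize(smiles: str) -> list[str]:
    """Split a SMILES string into the 'words' a sequence model would consume."""
    pattern = r"(\[[^\]]+\]|Br|Cl|[BCNOPSFIbcnops]|[=#()+\-\\/@.\d])"
    return re.findall(pattern, smiles)

# Esterification as a toy example: acetic acid + ethanol -> ethyl acetate.
reactants = [canonical("CC(=O)O"), canonical("CCO")]
source = ".".join(reactants)        # what the model reads
target = canonical("CCOC(C)=O")     # what a trained seq2seq model would predict

print(tokenize(source), "->", tokenize(target))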

As AI becomes increasingly important in drug discovery, companies like PostEra may play a vital role in helping these new medicines reach the market.

Viswa Colluru, CEO, and David Healey, VP of Data Science at Enveda Biosciences 

Pathway Medical: The augmented doctor 

With GPT-4 hot off the press, 2023 promises to be an exciting year for applied AI, with many entrepreneurs racing to build the next “LLM for ‘X.’” But while “moving fast and breaking things” might work in some industries, it doesn’t in healthcare.

The healthcare industry is among the slowest adopters of new technologies, and perhaps rightly so. An AI model with 70% accuracy is annoying when it’s predicting email text; when it informs decisions that impact patient outcomes, it’s unacceptable.

If we want to realize the full impact of modern AI technologies in healthcare sooner, we need to address the foundation of any AI model: the data. Unfortunately, access to high-quality, structured data remains challenging in many healthcare applications. And it’s no longer simply about big data, in which more data is uniformly better – it’s about smart data and having access to high-quality information in formats relevant to industry-specific use cases.

That’s where a company like Pathway Medical has an edge over newcomers: it has already aggregated and built the large pools of smart data needed to fuel domain-specific LLMs in healthcare.

Pathway is an AI-first clinical decision support tool that has spent years building a vast and structured medical knowledge graph, vetted by experts to ensure reliability. By leveraging advanced language models with this best-in-class data, Pathway aims to generate trustworthy output, free of hallucinations, anchored in well-referenced and verified information.

The result is something like a smart assistant for doctors. Using Pathway, medical professionals can seamlessly read relevant medical guidelines, receive patient-specific advice, and explore differential diagnoses. Pathway refers to itself as doctors’ “instant second opinion.”
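
The general pattern for anchoring a model in verified sources is retrieval-augmented generation: fetch vetted passages first, then instruct the model to answer only from them, with citations. The sketch below shows that pattern in miniature – it is not Pathway’s implementation, and search_guidelines is a hypothetical lookup over a vetted knowledge base – using the OpenAI Python SDK.

from openai import OpenAI

client = OpenAI()

def search_guidelines(question: str, k: int = 3) -> list[dict]:
    """Hypothetical: return the k most relevant vetted guideline passages,
    each as {"source": ..., "text": ...}."""
    raise NotImplementedError

def answer(question: str) -> str:
    passages = search_guidelines(question)
    context = "\n\n".join(f"[{p['source']}] {p['text']}" for p in passages)
    instructions = (
        "Answer using ONLY the passages below. Cite the bracketed source for "
        "every claim. If the passages do not contain the answer, say so."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": instructions},
            {"role": "user", "content": f"{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content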

As companies like Pathway evolve, we hope to see the vast potential of AI in healthcare come to fruition, transforming how clinicians access and interact with critical information. This should help streamline education and decision-making processes, ultimately improving patient outcomes and elevating the standard of care.

Therence Bois, co-founder and COO of Valence Discovery

Luma: 3D for everyone

When I started writing this, Luma was a small startup that raised a modest seed round in 2021. I’ve since had to edit it. On March 20, Luma announced a Series A led by Amplify Partners. That doesn’t surprise me at all.

Neural Radiance Fields, better known as “NeRFs,” are a technology that, in simple terms, allows you to transform photos taken from any device into fully-fledged 3D models. Unlike previous 3D scanning technologies, NeRFs require no specialized hardware (such as LiDAR sensors). The output is considerably higher quality than anything we’ve seen before, with far higher visual fidelity and photorealism. Light, shadow, and reflection are all possible with NeRFs.
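
At its core, a NeRF is a small neural network that maps a 3D position and viewing direction to a color and a density; images are rendered by accumulating those values along each camera ray, and the network is trained until its renders match the input photos. Here is that core idea in miniature – a simplified PyTorch sketch, not Luma’s implementation, omitting details like positional encoding and hierarchical sampling.

import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + 3, hidden), nn.ReLU(),   # input: (x, y, z) + view direction
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                  # output: RGB color + density
        )

    def forward(self, xyz, view_dir):
        out = self.mlp(torch.cat([xyz, view_dir], dim=-1))
        rgb = torch.sigmoid(out[..., :3])
        sigma = torch.relu(out[..., 3])
        return rgb, sigma

def render_ray(model, origin, direction, near=2.0, far=6.0, n_samples=64):
    """Volume rendering along one camera ray: sample points, query the network,
    and composite colors weighted by how opaque each sample is."""
    t = torch.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction        # (n_samples, 3) sample positions
    dirs = direction.expand(n_samples, 3)
    rgb, sigma = model(points, dirs)
    delta = t[1] - t[0]
    alpha = 1.0 - torch.exp(-sigma * delta)          # opacity of each sample
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10]), dim=0)[:-1]
    weights = alpha * trans                          # contribution of each sample
    return (weights[:, None] * rgb).sum(dim=0)       # final pixel color

# Training (not shown) compares rendered pixel colors against the input photos
# and backpropagates the difference through the MLP.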

Luma is at the vanguard of deploying this technology. The startup’s app lets customers capture photorealistic 3D from their smartphones. These images can then be used as game assets, e-commerce product shots, or artistic creations.

Why is this such a big deal?

With VR on our doorstep and AR likely less than a decade away, demand for photorealistic 3D assets should increase rapidly. Additionally, it’s reasonable to think that these revolutions may require improvements in 3D capture to be fully enabled. In the past, that process has been hard, expensive, and sometimes impossible. It isn’t anymore. 

Even before VR and AR take flight, products like Luma can unlock exciting new use cases. An Etsy merchant can easily capture a 3D model of the table they’re selling. A sole developer can create a game set in a photorealistic world with little more than a smartphone. If you can photograph it, it can become 3D. Luma’s tagline says it best: “3D, finally for everyone!”

Considering how NeRFs will combine with other technologies to create tomorrow’s media is fascinating. In just a few years, you might watch a feature-length movie set inside a Luma-generated 3D model, with actors generated by Midjourney, scripted by ChatGPT, and voiced by ElevenLabs.

Eiso Kant, founder and CEO of Athenian

Coactive: Decoding visual data

From social media videos to smartphone pictures, visual content dominates our daily lives and is increasing at an unprecedented pace. Despite its ubiquity, visual content is the most challenging form of information to analyze as it is typically unstructured. Missing out on the insights in visual formats is a loss for data-driven organizations. It’s also part of a broader issue: according to MIT, 80% of enterprise data is unstructured, trapped in audio, video, and web server logs. 

Coactive brings structure to unstructured data through machine learning. It helps data-driven teams derive insights from visual content like images and videos. Coactive is interesting because businesses have a massive opportunity to organize and analyze multimedia content, especially for industries with high volumes of visuals. Examples include retail, social media, medical imaging, gaming, and autonomous vehicles. 

On the technological level, Coactive brings unstructured data into the world of SQL so analysts can annotate, search, query, and model it. Today’s most popular use cases are search, recommendations, trust and safety, and data analytics. 

With the product, customers upload raw images or videos directly into Coactive’s platform through an API or secure data lake connection. The visual data is then embedded and indexed by Coactive’s platform with minimal manual supervision or labeling. It’s then made available through Coactive’s fully hosted image search API and SQL interface for users to gather insights and run queries and searches. Coactive carefully developed its UI/UX to make it easy for both citizen users and data scientists to leverage the platform and derive value from it.

For example, a fashion brand can upload large sets of visual images and videos and define concepts and categories within seconds rather than days. This allows the brand to better understand how customers interact with their products in near real-time.
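
The underlying pattern – embed once, index, then query by similarity – can be sketched in a few lines. The code below is illustrative only, not Coactive’s API: embed_image and embed_text are hypothetical stand-ins for any model that maps images and text into a shared vector space (a CLIP-style encoder, for instance).

import numpy as np

def embed_image(path: str) -> np.ndarray:
    """Hypothetical image encoder (e.g. a CLIP-style model)."""
    raise NotImplementedError

def embed_text(query: str) -> np.ndarray:
    """Hypothetical text encoder mapping queries into the same vector space."""
    raise NotImplementedError

def build_index(paths: list[str]) -> np.ndarray:
    """Embed every image once, up front, so later queries never touch raw pixels."""
    return np.stack([embed_image(p) for p in paths])

def search(query: str, index: np.ndarray, paths: list[str], k: int = 5):
    """Rank images by cosine similarity to a free-text concept, e.g.
    'customer wearing our red jacket outdoors'."""
    q = embed_text(query)
    scores = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
    top = np.argsort(-scores)[:k]
    return [(paths[i], float(scores[i])) for i in top]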

For several years, Bessemer has been tracking breakthroughs in machine learning infrastructure and evolutions in business intelligence. Given these trends, enterprises will not only generate an explosion of visual content but also need to make sense of it.

Ethan Kurzweil, Partner at Bessemer Venture Partners

Sana: Knowledge capture for companies

The generative AI hype is real. Twitter is overrun with mind-expanding tech demos in every modality, from text to images to video and beyond. It’s almost incomprehensible how good LLMs are at certain tasks, and it does legitimately feel like the first exciting platform since mobile.

But in all the excitement, it’s important to remember that, as with any other great technology, AI is only as good as its UX when it makes contact with reality. Right now, we’re seeing a lot of cool point solutions that demonstrate magical features such as transcription, summarization, creative writing, image/video generation, and coding.

Most of these will likely become features in existing platforms (GitHub, Microsoft, Notion, and Intercom are already aggressively adding AI to their products) rather than standalone products as they fail to answer the boring questions: who is going to use this every day and in what context? Will they pay for it? Is this a feature or a product? Is it defensible, and can you quickly build a large business and category around it? 

I think many companies that will win in the coming years were started a few years ago by founders who realized the immense potential of LLMs and generative AI before it was obvious. They have a headstart in building platforms – not just tools – and have honed their product sense and customer intuition by interfacing with the real world. They understand that utility eats novelty for breakfast, even if it’s a slower grind.

One of the companies I’m excited about is Sana, an AI-powered learning platform for the enterprise. The product is both a traditional learning management system where you can create courses and run live sessions, and a knowledge management platform that creates a “company brain” that can be queried directly by integrating with platforms like Google Workspace, Notion, GitHub, and so on.

The platform is beautiful, has real-time collaboration, and is way ahead of the market in terms of core SaaS functionality. But it’s also clearly designed to be AI-first in a subtle but effective manner. You can use AI to make an entire course from scratch (text, images, quizzes) or as a co-pilot that helps you complete your work faster by automating information retrieval and content creation. 

Long term, you can imagine a generative learning system that has the full context of everything and everyone in the company and can train the workforce both on-demand and in a structured fashion with coursework. It could monitor the workforce to identify knowledge gaps, create a short, personalized course, and deliver it directly to the individual in real time. The potential for increasing productivity is enormous.

The challenge for Sana is inventing an AI-first product that can easily adapt to an enterprise workflow. People don’t like too much change at once. The real art is in sequencing over time: hook them with a familiar but 10x better experience today, and move them slowly but steadily to an entirely new solution that is 100x better.

Learning & Development (L&D) is a category traditionally overlooked by VCs – it’s a cost center with low budgets and rarely a C-level topic. We haven’t yet seen a $10 billion-plus company that defines itself as an L&D platform.

But there is a clear secular trend that I think might change that: learning is no longer just about compliance and HR. It’s about ensuring salespeople are product experts, developers can onboard quickly, and everyone understands the company strategy.

The lines between knowledge management, productivity, and L&D are increasingly blurring, opening up opportunities. And the faster the world moves, the more important it becomes for companies to upskill their workforce and disseminate internal information at Twitter speed.

From the execution and traction I’ve seen from Sana so far, they have a good shot at not just building a great company but evolving the entire L&D category into something very different from what it is today.

Victor Riparbelli, co-founder and CEO of Synthesia

Enveda: Speaking in chemistry

What to watch in AI right now? Fucking. Everything. 

We’re amidst a singular moment in the history of humanity. We’ve invented an alien intelligence and are (sheepishly) grappling with the near-term tech and venture implications. But if I had to answer your question with the non-obvious (LLMs! Open source! Chained models! Verticalized apps atop GPT-X!), I’d point to LLMs’ ability to learn all languages, not just those decipherable by our particular variant of evolved human cerebrum. 

ChatGPT and GPT have broadly captured the zeitgeist because they mirror to us the known unknown. Novel ideas, heretofore never seen limericks, enumerated lists in language we can immediately understand. Novel, until now, unknown (never before uttered or written) concepts in known human language. But why stop there? Pointing LLMs at the internet unlocks our collective digital trove of knowledge but remains in the realm of what we could have otherwise done, written, or uttered…with enough time and resources. (Come back to me with GPT-10’s proof of P != NP.) 

Why not learn entirely new languages, latent in our universe, yet never before spoken by humans? 

Much, if not all, of the life sciences can be “cast” as languages with unique syntax and alphabets, grammars, and ultimately semantics or meaning. Let’s use biology and chemistry as examples. In terms of syntax, biology uses the G-A-T-C letters of DNA or amino acid sequences, while chemistry relies on any number of representations, including SMILES strings or mass spectrometry. Regarding semantics, you could point to protein structure and function for biology, and metabolite structure and chained reactions for chemistry. In short, each life science has a vernacular, uniquely its own.

Finding terminology to distinguish between the science and the language of that science makes it easier to grok: we can think of Biology and Chemistry as the languages of their respective domains. We’ve always been surrounded by these languages, expressed and spoken fluently by the physical world around us. We’re just now – with the advent of LLMs – on the cusp of being able not just to observe but understand and speak back.

Enveda, where my firm Dimension led the Series B, is doing fascinating work in this space. It teaches computers Chemistry, the language of chemistry. Combining next-generation mass spectrometry (a syntactic representation of chemical space) with LLMs, Enveda goes from generally indiscernible syntactic gobbledygook to a well-defined grammar and finally to the semantics of chemical structure and properties. 
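
As a toy illustration of what “syntax” means here, a mass spectrum can be turned into a sentence-like sequence of discrete tokens that a language model could read. This is a simplification for intuition only, not Enveda’s method; the binning scheme and example peaks are invented.

def spectrum_to_tokens(peaks, bin_width=1.0, intensity_levels=4):
    """peaks: list of (m/z, relative intensity in [0, 1]) pairs from a spectrum."""
    tokens = []
    for mz, intensity in sorted(peaks):
        mz_bin = int(mz // bin_width)                    # discretize the mass axis
        level = min(int(intensity * intensity_levels),   # discretize the intensity
                    intensity_levels - 1)
        tokens.append(f"mz{mz_bin}_i{level}")
    return tokens

# An invented fragment pattern becomes a short, model-readable "sentence".
print(spectrum_to_tokens([(43.05, 0.9), (71.08, 0.4), (99.10, 0.15)]))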

Why does this matter?

  1. We can suddenly read the functional lego pieces resulting from billions of years of evolution. (What are the planet’s naturally occurring chemistries, and why have biological processes evolved under intense selective pressure to produce them and not others?)
  2. We can better understand human disease by analyzing metabolites inside our cells, the building blocks for DNA, RNA, and proteins.
  3. We can speak back, using this new tool – language – to design novel chemistries (known in the industry as “small molecules”) as therapies.

Zavain Dar, founder and Managing Partner at Dimension Capital 

The Generalist’s work is provided for informational purposes only and should not be construed as legal, business, investment, or tax advice. You should always do your own research and consult advisors on these subjects. Our work may feature entities in which Generalist Capital, LLC or the author has invested.