I spoke with 10 VC investors in healthcare-AI. These are the takeaways.
Navigating the Medical-AI Investment Landscape: What Worries Investors
For the past three months I've talked with a range of tech and healthcare investors from Silicon Valley, New York, and Europe. My goal was to take the pulse of medical-AI investing, and to understand the worries that come with it, in the face of the most transformative technology humans have ever developed.
I've summarized the themes below. The perspectives and apprehensions come from 10 leading VCs and AI experts, and I've shared my own take on each topic.
Fears About AI Safety in Healthcare
Among the VCs and investors I spoke with, safety was a consistent concern. Even as they recognized the immense potential of AI to improve patient outcomes and streamline healthcare processes, there was a palpable fear of unintended consequences from AI deployment in medicine, and of the subsequent risk to their investments. The potential for misdiagnosis, algorithmic bias, and privacy breaches weighed heavily. Many felt the safest investable path is to fund healthcare AI for non-medical products, sidestepping the hardest parts of the problem for now. There is comfort in applying AI to back-office work such as dictation and documentation.
My Take: AI for back-office and dictation tasks may be safer at this early stage, but these uses do not address the true bottlenecks of healthcare. You don't even need AI to solve these problems. Cognitively complex tasks are the use cases best suited to the newest AI and LLM models. It should also be encouraging that frameworks for regulatory guidance are rapidly emerging, just in time. My conversations with legal experts in the area also suggest that highly regulated industries like healthcare already have preexisting principles broad enough, with the requisite incentives and disincentives, to ensure safe AI applications even in the absence of brand-new regulations.
Worries About AI Hallucinations
The next common theme was apprehension about AI hallucinations: the possibility that AI systems could generate false or misleading information, with disastrous consequences in critical decision-making scenarios. Clear regulations, rigorous testing, and continuous monitoring were cited as essential to mitigate this risk and improve the reliability of AI systems.
My take: There isn't much published research on this topic specifically, but the sense in the research community is that hallucinations may stem from limitations of current neural-network architectures, which are not yet designed to handle negative or inhibitory signals. From the prototypes we've developed at our company (currently in stealth), I've learned methods that are quite effective at significantly reducing hallucination risk. We've also explored architectures designed to present their references and source material alongside their answers. This appears to be a solvable problem.
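To make the "present its references" idea concrete, here is a minimal sketch of one widely used public approach, retrieval-grounded answering with citations: the system retrieves passages from a vetted corpus and answers only from those passages, returning the source of each. This illustrates the general pattern, not our company's method; the toy corpus, the keyword-overlap scoring, and the `answer_with_sources` helper are all hypothetical.

```python
# Minimal sketch of retrieval-grounded answering with citations (hypothetical).
# Idea: constrain answers to a vetted corpus and always surface the sources,
# so every claim traces back to a document a human can verify.

CORPUS = {  # toy vetted corpus: source id -> text
    "uptodate:sepsis-2023": "Early broad-spectrum antibiotics reduce sepsis mortality.",
    "cdc:flu-2024": "Annual influenza vaccination is recommended for most adults.",
}

def retrieve(question: str, k: int = 2) -> list[tuple[str, str, float]]:
    """Rank corpus passages by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = []
    for source_id, text in CORPUS.items():
        overlap = len(q_words & set(text.lower().split()))
        scored.append((source_id, text, overlap / (len(q_words) or 1)))
    scored.sort(key=lambda item: item[2], reverse=True)
    return scored[:k]

def answer_with_sources(question: str) -> str:
    """Answer strictly from retrieved passages, citing each source id."""
    hits = [h for h in retrieve(question) if h[2] > 0]
    if not hits:
        # Refuse rather than hallucinate when nothing supports an answer.
        return "No supporting source found; declining to answer."
    lines = [f"- {text} [source: {source_id}]" for source_id, text, _ in hits]
    return "Supported findings:\n" + "\n".join(lines)

print(answer_with_sources("Do antibiotics help in sepsis?"))
```

The key design choice is the refusal branch: when retrieval finds nothing, the system says so instead of guessing, trading coverage for verifiability.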
Cautious Embrace of AI
Despite the investments pouring into AI companies, the VCs and investors I spoke with admitted they are cautious about fully embracing AI. While they may have an investment thesis for AI, they struggle to understand its technical limitations given the field's complexity. All 10 admitted they had not read a single article published in a scholarly or peer-reviewed journal; instead they rely primarily on the expert opinions of their peer networks and on summaries in business publications.
This is not surprising. The field is complex, the mathematics is not straightforward, and there are a limited number of experts. Layer on the complexity of applying all this to healthcare, and from these investors' perspective it becomes hard to quantify risk.
My take: Anyone building, investing in, or regulating the AI industry would be wise to start from the first principles of AI. Do the fundamental concepts of computing allow for cognitively capable machines? Review the mathematics: is it sound? Is mimicking biological principles a sensible way to develop machines? Track the rate of scholarly research: is it demonstrating major breakthroughs? Is hardware keeping pace with the compute demands of large-model deployment? What is the life cycle of translation from research to production-grade deployment? It is now on the order of months, not years. We have not seen this pace of progress in computing capability before. (A rough sketch of how one might track the research rate follows below.)
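As one concrete way to track the rate of scholarly research, here is a rough sketch that queries the public arXiv API for monthly paper counts in an AI category. The category choice and date windows are my own illustrative assumptions, and raw paper counts are only a crude proxy for progress.

```python
# Rough sketch: count monthly cs.AI submissions via the public arXiv API,
# one crude proxy for the pace of AI research. Category and date ranges
# are illustrative assumptions, not a vetted methodology.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ARXIV_API = "http://export.arxiv.org/api/query"
TOTAL_RESULTS = "{http://a9.com/-/spec/opensearch/1.1/}totalResults"

def monthly_count(category: str, start: str, end: str) -> int:
    """Count arXiv papers in `category` submitted in [start, end).

    `start`/`end` use arXiv's submittedDate format, e.g. "202401010000".
    """
    query = f"cat:{category} AND submittedDate:[{start} TO {end}]"
    url = ARXIV_API + "?" + urllib.parse.urlencode(
        {"search_query": query, "start": 0, "max_results": 1}
    )
    with urllib.request.urlopen(url, timeout=30) as resp:
        feed = ET.parse(resp).getroot()
    return int(feed.find(TOTAL_RESULTS).text)

# Compare two months to eyeball the trend (illustrative windows).
jan = monthly_count("cs.AI", "202401010000", "202402010000")
jun = monthly_count("cs.AI", "202406010000", "202407010000")
print(f"cs.AI submissions: Jan 2024 = {jan}, Jun 2024 = {jun}")
```

A more careful version would also track citations, benchmark results, and code releases rather than submission volume alone.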
AI Investing in Regulated Industries
VCs and investors in medical-AI are both excited and cautious. Caution is logical, but what indicators offset the apprehension? I argue the central leading indicator for financial forecasting in AI is the pace of research, given how quickly AI now moves from research to production. Second, highly regulated industries may ironically lower investment risk. The rules and laws of healthcare, particularly the regulations of medical practice built around licensure, define scope of work (i.e., MD vs. RN vs. MA). That actually gives us useful parameters by which to architect AI products.
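To illustrate what "licensure as architectural parameters" might look like in practice, here is a minimal sketch, entirely my own hypothetical, that encodes scope-of-work rules as an explicit permission table and gates an AI agent's proposed actions through it.

```python
# Hypothetical sketch: encode scope-of-work rules (derived from licensure)
# as explicit permissions, and gate every AI-proposed action through them.
from enum import Enum

class Role(Enum):
    MD = "physician"
    RN = "registered nurse"
    MA = "medical assistant"

# Illustrative permission table; real scope-of-work rules vary by jurisdiction.
ALLOWED_ACTIONS = {
    Role.MD: {"diagnose", "prescribe", "order_labs", "document"},
    Role.RN: {"triage", "administer_meds", "document"},
    Role.MA: {"record_vitals", "document"},
}

def gate_action(supervising_role: Role, proposed_action: str) -> str:
    """Allow an AI-proposed action only if it falls within the supervising
    clinician's licensed scope; otherwise escalate for human review."""
    if proposed_action in ALLOWED_ACTIONS[supervising_role]:
        return f"ALLOW: {proposed_action} (within {supervising_role.value} scope)"
    return f"ESCALATE: {proposed_action} exceeds {supervising_role.value} scope"

print(gate_action(Role.RN, "document"))   # ALLOW
print(gate_action(Role.RN, "prescribe"))  # ESCALATE
```

The design choice worth noting is that out-of-scope actions escalate to a human rather than failing silently, mirroring how supervision already works in clinical practice.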