What You Should Know:
– Hippocratic AI, a company developing a safety-focused large language model (LLM) for healthcare, has raised $53M in Series A funding, bringing its total funding to $120M. The round was co-led by Premji Invest and General Catalyst, with participation from SV Angel and Memorial Hermann Health System, as well as existing investors Andreessen Horowitz (a16z) Bio + Health, Cincinnati Children’s, WellSpan Health, and Universal Health Services (UHS).
– The company also launched its first product for phase-three safety testing: a staffing marketplace where healthcare providers can use AI agents to handle low-risk, non-diagnostic patient tasks.
Prioritizing Safety in AI Development
Hippocratic AI emphasizes safety as a core value. The company’s name references the Hippocratic Oath, the foundational statement of medical ethics built on the principle of “do no harm.” This focus on safety is reflected in a rigorous multi-phase testing process and the transparent publication of results. The company’s LLM is trained on evidence-based content and uses a “constellation architecture,” in which multiple models work together to improve accuracy and minimize errors. Human supervision is also built in where necessary.
Addressing Healthcare Staffing Shortages
The company’s initial product aims to address the critical shortage of healthcare professionals such as nurses, social workers, and nutritionists. Its AI agents can handle tasks including chronic disease management, post-discharge follow-up, and wellness surveys. However, the agents will not interact with patients unsupervised until phase-three safety testing is complete.
Testing and Transparency
The Series A round was co-led by Premji Invest and General Catalyst, with participation from prominent healthcare organizations such as Memorial Hermann Health System. Notably, the round was oversubscribed, allowing Hippocratic AI to select investors who share its safety-first values and its willingness to prioritize safety over short-term profits.
The company has already completed the first two phases of safety testing, in which more than 1,000 nurses and 130 physicians interacted with the AI agents and assessed them on various safety measures. The results, published in a paper titled “Polaris: A Safety-focused LLM Constellation Architecture for Healthcare,” show promising outcomes. For instance, nurses rated the AI’s ability to educate patients about their condition at 89.82%, compared with an average score of 80.64% for human nurses.
Phase-three testing will involve a significantly larger group of nurses, physicians, and healthcare partners, who will assess the safety and effectiveness of the AI agents in real-world settings.
“When we started the company, we prioritized safety as our top value. This is why we named the company after the physician’s Hippocratic Oath and made the tagline ‘Do no harm,’” said Munjal Shah, co-founder and CEO of Hippocratic AI. “This has been our guiding principle since the company’s founding. Our focus on safety testing our product in multiple phases and transparent publication of the results for everyone to see is the next down payment in this safety-first journey. Our selection of partners who align with our values and have the patience to let us pursue safety over revenue and profits further underscores our commitment to these values.”