At Quadrivia, we're dedicated to developing Qu, our clinical AI assistant, to be the most effective and trustworthy AI assistant in healthcare. But how do we ensure that Qu is safe and reliable, and that it delivers exceptional patient experiences?
We've created a rigorous, hands-on Manual Clinical Quality Assurance Framework to guide our development and continuous improvement of Qu.
We're building Qu to conduct outbound patient calls, offering seamless support while producing clear, precise transcripts and summaries for healthcare providers. To ensure Qu consistently meets high standards, robust evaluation is essential, and manual testing is a critical part of our approach.
Here’s how our Manual Clinical Quality Assurance Framework works:
Our clinical team manually simulates realistic patient interactions, ensuring Qu is prepared for the wide range of conversations it will encounter in real clinical environments.
Every interaction Qu has is evaluated through clearly defined checkpoints across three critical clinical domains.
Our scoring system is straightforward and actionable.
Following these detailed evaluations, Qu receives an overall performance rating.
These ratings help our teams quickly pinpoint areas for Qu's ongoing improvement.
Our framework also includes targeted checks across critical quality dimensions:
Speech & Voice Quality
Clinical Safety
Call Summary Quality
User Experience & Technical Reliability
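As an illustration, the four quality dimensions above could be tracked per call in a simple structured record. This is a hypothetical sketch, not Quadrivia's actual rubric: the `CallEvaluation` class, the individual check names, and the pass/fail scoring are all assumptions introduced here for clarity.

```python
from dataclasses import dataclass, field

# The four dimensions named in the framework above.
DIMENSIONS = [
    "Speech & Voice Quality",
    "Clinical Safety",
    "Call Summary Quality",
    "User Experience & Technical Reliability",
]

@dataclass
class CallEvaluation:
    """Hypothetical per-call QA record: dimension -> {check name: passed?}."""
    call_id: str
    checks: dict = field(default_factory=dict)

    def add_check(self, dimension: str, name: str, passed: bool) -> None:
        # Record one manual check result under its quality dimension.
        self.checks.setdefault(dimension, {})[name] = passed

    def failures(self) -> list:
        """Return (dimension, check) pairs that failed, for triage."""
        return [
            (dim, name)
            for dim, results in self.checks.items()
            for name, passed in results.items()
            if not passed
        ]
```

A reviewer might use it like this: record each checkpoint as it is scored, then pull the failure list to decide what needs follow-up.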
Patient safety is paramount. We use the Agency for Healthcare Research and Quality (AHRQ) Harm Classification Scale to comprehensively assess potential patient safety risks, categorizing severity from 'No Harm' to 'Death.' This enables Quadrivia to proactively manage and mitigate safety concerns.
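An ordered severity scale like this lends itself to a simple triage rule. The sketch below is illustrative only: the text above names just the 'No Harm' and 'Death' endpoints, so the intermediate levels follow the published AHRQ harm scale but should be treated as an assumption, and the escalation threshold is hypothetical.

```python
from enum import IntEnum

class HarmSeverity(IntEnum):
    """AHRQ-style harm severity, ordered from least to most severe.

    Only NO_HARM and DEATH are stated in the framework text; the
    intermediate levels are assumed from the AHRQ harm scale.
    """
    NO_HARM = 0
    MILD_HARM = 1
    MODERATE_HARM = 2
    SEVERE_HARM = 3
    DEATH = 4

def requires_escalation(
    severity: HarmSeverity,
    threshold: HarmSeverity = HarmSeverity.MODERATE_HARM,
) -> bool:
    """Hypothetical triage rule: escalate findings at or above a threshold."""
    return severity >= threshold
```

Because `IntEnum` members compare numerically, findings can be sorted or thresholded directly, which is what makes the ordered scale actionable.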
Through this robust manual Quality Assurance Framework, Quadrivia is committed to keeping Qu at the forefront of safe, effective, patient-focused clinical AI, delivering the highest standards of healthcare innovation.