
Intelligence, Designed for Accountability

I investigate the operational gap between algorithmic performance and human adoption.

By observing how AI functions within high-stakes, regulated workflows, I design systems that embed explainability, accountability, and contextual clarity directly into decision-making environments.

HCP Innovation 

I research why AI fails at the moment of clinical decision and how to fix it.
My work designs context-aware, explainable decision intelligence that reduces uncertainty and preserves clinician autonomy.

Designing Trust in Healthcare AI

Healthcare AI has reached a paradoxical moment. While predictive models continue to improve in accuracy and sophistication, real-world adoption remains constrained. The limiting factor is not technical capability—it is trust.

Trustworthy AI

AI in Life Sciences isn’t struggling because of weak models: the technology can already classify safety cases, predict trial risk, identify patient cohorts, and generate narratives. The barrier is trust in how those outputs are produced and defended.

Ambient Health AI

Ambient Health AI is a passive, voice-enabled health detection system that listens for symptoms spoken naturally in everyday environments and intelligently routes them to clinicians. 

VisiGuard AI

This solution creates a more trustworthy, safe, and human-centred digital experience by combining real-time sensing, age detection, and well-being interventions.

AI as a Proposing Middle Layer

My research redefines pharmacovigilance by using AI to make human safety decisions explainable and defensible—without removing medical authority.

Judgment-Calibrated Longitudinal Narrative Intelligence

Systems and Methods for Judgment-Calibrated Longitudinal Patient Narrative Intelligence in AI-Driven Healthcare

  • Behance
  • LinkedIn
  • Dribbble