 Trustworthy AI

AI in Life Sciences isn’t struggling because of weak models. The technology can already classify safety cases, predict trial risk, identify patient cohorts, and generate narratives. The real barrier I uncovered through research in Pharmacovigilance, Clinical Operations, and RWE is the absence of a Decision Accountability Layer: AI systems present outputs, but they rarely capture who made the final decision, how AI influenced it, what evidence justified it, or how uncertainty was handled.

In regulated environments, these elements matter far more than the predictions themselves, because it is decisions, not algorithms, that are audited. Studying XAI literature, PV research, regulatory guidance, and real workflow patterns showed me that explainability alone is insufficient. What’s missing is a UX framework that makes decision ownership explicit, preserves AI-to-human lineage, requires justification, and embeds audit-readiness into the workflow. My ongoing work focuses on designing this Decision Accountability Layer for AI-assisted safety systems and beyond.
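To make the concept concrete, here is a minimal sketch of what a single record in such a layer might capture, assuming a Python data model; every field name here is hypothetical and illustrative, not taken from any real pharmacovigilance product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

# Hypothetical sketch of a single record in a Decision Accountability Layer.
# Field names are illustrative; they do not describe any existing PV system.

@dataclass
class AISuggestion:
    model_id: str             # which model and version produced the output
    output: str               # e.g. a proposed seriousness classification
    confidence: float         # model-reported confidence, 0.0 to 1.0
    evidence_refs: list[str]  # pointers to the source documents behind the output

@dataclass
class DecisionRecord:
    case_id: str                  # the safety case or record being decided
    ai_suggestion: AISuggestion   # what the AI proposed, preserved verbatim
    decided_by: str               # the accountable human reviewer
    decision: str                 # the final human decision
    ai_influence: str             # "accepted", "modified", or "overridden"
    justification: str            # required free-text reasoning
    escalated_to: Optional[str] = None  # set when uncertainty triggers escalation
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```

A reviewer would never see this structure directly; the point of the sketch is that the workflow cannot close a case without populating decided_by, ai_influence, and justification, which is exactly the accountability that current tools leave implicit.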

Why I Chose This Problem

I focused on the Decision Accountability gap because my research consistently showed a disconnect between AI output and human responsibility in regulated life sciences workflows. AI works, but adoption fails because accountability is not designed into the product.
This is a system problem, not a UI problem—exactly the kind of challenge where design creates organizational value.

Why I Designed a “Decision Accountability Layer”

  • Problem Solved:
    Medical reviewers re-check AI outputs manually because existing PV tools show what AI decided but not how humans should decide with it.
    This creates trust issues, compliance risk, and double work.

  • Design Reasoning:
    A structured accountability layer:

    • Makes the decision moment explicit

    • Forces clarity on who decided what and why

    • Captures reasoning, uncertainty, and evidence

    • Creates audit-ready lineage (AI → human → outcome); a sketch of this lineage follows below

This shifts AI from “smart helper” to “trusted collaborator.”
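As referenced in the list above, here is a minimal sketch of how audit-ready lineage could be recorded, assuming an append-only, hash-chained event trail; the actors, actions, and payloads are hypothetical examples, not outputs of an existing system.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative sketch of an append-only lineage trail (AI -> human -> outcome).
# Hash-chaining is an assumed mechanism for audit-readiness, not a description
# of any existing pharmacovigilance product.

def append_event(trail: list[dict], actor: str, action: str, payload: dict) -> dict:
    """Append one lineage event, linked to the previous event's hash."""
    prev_hash = trail[-1]["hash"] if trail else None
    event = {
        "actor": actor,          # e.g. "model:pv-classifier-v2" or "reviewer:jdoe"
        "action": action,        # "ai_suggested", "human_decided", "outcome_recorded"
        "payload": payload,      # decision content, justification, evidence refs
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
    }
    event["hash"] = hashlib.sha256(
        json.dumps(event, sort_keys=True).encode()
    ).hexdigest()
    trail.append(event)
    return event

# Example: one case's lineage, from AI suggestion to audited outcome.
trail: list[dict] = []
append_event(trail, "model:pv-classifier-v2", "ai_suggested",
             {"classification": "serious", "confidence": 0.81})
append_event(trail, "reviewer:jdoe", "human_decided",
             {"decision": "serious", "ai_influence": "accepted",
              "justification": "Hospitalisation confirmed in source narrative."})
append_event(trail, "system", "outcome_recorded",
             {"submitted_to": "regulatory_report", "status": "closed"})
```

Because each event embeds the hash of the one before it, a retroactive edit to the trail is detectable; the design choice in this sketch is that auditability comes from the structure of the record itself rather than from reviewer discipline.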


“I don’t design interfaces. I design systems that make AI trustworthy, accountable, and operable in the real world. Every layout, component, and flow in this project exists because it solves a specific trust, evidence, or accountability gap revealed in research.”

The thinking behind this project follows one thread:
strategy → reasoning → clarity → impact.
