Reflections from Stanford Health AI Week

At Stanford University’s Health AI Week, our CEO Amber Nigam joined an important conversation on the responsible use of AI in healthcare, specifically: when should AI be used, and when shouldn’t it? The panel, composed of clinicians, technologists, and policy experts, didn’t fall into the common trap of treating AI as a universal remedy. Instead, it wrestled with nuance: the contexts where AI adds value, and the ones where it risks creating noise, harm, or false confidence. Amber offered three principles that anchor basys.ai’s work and thinking. They’re deceptively simple but, in practice, profoundly clarifying:
1. Start with First Principles: AI is not a foregone conclusion
The best AI applications don’t begin with AI. They begin with a problem so stubborn or systemic that existing solutions (manual workflows, business rules, traditional automation) simply can’t scale. In Prior Authorization and Utilization Management, for example, the issue isn’t just inefficiency. It’s interpretive complexity: thousands of evolving policies, nuanced medical contexts, and often-misaligned payer-provider priorities. These are problems where logic trees fail and where AI, especially when combined with clinician insight, can establish adaptable, transparent decision-making frameworks. But that clarity comes only from starting with a clean sheet, not from trying to retrofit AI where it doesn’t belong.
2. Align Incentives or Don’t Bother
Healthcare is notorious for misaligned incentives. AI can worsen that or fix it. Used well, AI can be the scaffolding for shared clinical standards across stakeholders. It can help transform opaque processes like prior auth from black boxes into systems that are interpretable and negotiable. But that only happens if the technology is built with trust infrastructure in mind. That means designing systems that respect clinician time, surface clinical intent, and create accountability across parties. AI must not reinforce siloed logic. It should be the connective tissue.
3. Clinician Collaboration Is Not a Checkbox
One of the most dangerous assumptions in healthcare AI is that we already know what clinicians want. Real collaboration isn’t a UX survey or a post-hoc advisory board. It’s structured, ongoing, and embedded. It means co-designing logic with medical directors. It means making sure AI decisions are auditable and can defer to clinical expertise when needed. And it means building a product that doesn’t just fit into a clinical workflow; it earns its place there.
What We Took Away

AI’s role in healthcare is not about replacing humans. It’s about restructuring the relationships between humans, institutions, and data. The most powerful insight from the session? The real question isn’t “can AI do this?” but “should it?” That tension between possibility and prudence is where the most important work happens. We’re grateful to Stanford for convening the kind of dialogue that doesn’t chase hype but insists on substance. Because getting this right isn’t just about innovation. It’s about integrity.