I spent the last week at SAIL 2025, an intimate conference in Puerto Rico that brought together clinicians, patient advocates, academics, hospital executives, and many others. The sunsets were beautiful, and the conversations were candid. Because the conference operates under the Chatham House Rule, I’ve compiled some de-identified quotes – both from the stage and from smaller discussions – that capture the complexities and contradictions of the current state of medical AI.
On clinical innovation:
“The default state of ML models is profound irrelevance. There are millions of papers out there that haven’t made the slightest difference. There is no real clinical impact without clinical collaboration.”
“Doctors pick the wrong problems. They bring their own biases. Corralling the cats of clinical collaborators stalls innovation. We can’t afford to wait for them.”
“We’re adopting AI right now in the clinic based on the promise of tomorrow rather than the proven results of today, and that could be a big problem soon.”
“We’re in the third wave of AI and medicine: the HITECH Act in 2009, the wave of health AI in 2016, and now this. We have two options: 1) become exhausted and disillusioned, or 2) be realistic about how AI can actually help us become more efficient.”
On ethics, transparency, and regulation:
“Who does transparency in models, in data, and in code help? It helps Google, Microsoft, Apple, and other incumbents. Requiring AI transparency stifles innovation.”
“At a time when trust in institutions and clinicians is at an all-time low, we need trust in our models, and that means AI transparency is essential.”
“Truthfully, I don’t care if a model is biased statistically. I care about how it’s used and how it impacts patients operationally, and therefore how it affects health disparities.”
“We’re in a schizophrenic time of AI regulation where we’re getting a huge surge in promising RCTs but also widely used AI scribes are completely unregulated.”
On patient-centered care:
“If we want to include patient advocates in the process to center the patients, we should be paying them a fair wage. Don’t make them volunteer their time.”
“EHR randomized trials should be opt-out. By coming to an academic hospital, you’re consenting to helping the research process.”
“A big blind spot for AI models – and the FDA – is that they care only about metrics like survival. Patients care way more about quality of life.”
On health system challenges:
“We pose this problem as ‘build vs. buy’ but the truth is that it’s usually ‘buy vs. nothing.’ A lot of community hospitals don’t have the resources to develop their own models or comprehensively evaluate other models.”
P1: “Why are AI scribes so expensive? It doesn’t make any sense.” P2: “Completely informally, we’re seeing big increases in RVUs and numbers of patients seen. Huge benefits all around.”
“If I were a CEO of a health system, I’d get rid of the entire EHR and start over. Why do we collect so much of this data? Does all of it really get used?”
“One of the most thriving AI use cases in medicine is insurance claims. We see your AI drafted insurance letter and raise you an AI generated insurance denial.”
On current technical limitations:
“There is a lot of work on agent-based AI models working together. So far, I haven’t seen anything compelling motivating those setups with careful ablations or meaningful evaluations.”
“LLM-generated reviews are bad and not constructive. Maybe in the next 5 years.”