I signed up when I saw it. It had Leo Celi’s name. I didn’t check the time. Days before, Leo messaged me on WhatsApp to ask if I could be a coordinator. He said this was to make sure I attended the entire conference. It started at 9 am in Boston, which meant 10 pm for me last night, a Thursday night, after a 6-hour afternoon clinic. It turns out Sebastian was more than capable of handling the coordination by himself, so I sat back as a Zoom participant. In the end, I only lasted until panel discussion 3, past 4 am Manila time. I thought I’d write this now (the morning after) because for sure my non-AI-middle-aged brain will forget what I meant by my notes hahaha.

I really enjoyed the panel discussions. There were some mic echoes in the Zoom room and I wasn’t able to catch the panelists’ names, but here’s what struck me most in each of the panel discussions.
PANEL 1
The panelists were asked to select, from a pile of pictures, one that they associated with “healing.” Each panelist selected one and explained why. The moderator said this is something they would sometimes do as a museum activity. Then the moderator asked the panelists how they would have replied if they had been asked to define “healing” without any pictures. It would have been very different! So I got curious about this “third thing” concept that the moderator quoted. I googled (nope, I didn’t use Perplexity) and found this: “The Third Thing in Medical Education” by Gaufberg and Batalden.

And I thought, why not use this “third thing” to revise my syllabus for my upcoming medical informatics elective next week? I’ve been tossing some ideas around with Claude Opus 4.5. I’m excited!
PANEL 2
There was a nun on the panel (sorry, I didn’t catch her name) and she asked many thought-provoking questions. Here’s one – it’s not about asking whether AI has bias but WHERE the bias is hiding. She said systems do NOT declare their biases; bias is embedded in the design. Paraphrasing here, but whoa! Another panelist opined that when systems work, NO ONE asks if they are ethical. Gut punch right there!
Someone said, “knowledge always carries responsibility.” I run AI workshops. What responsibility do I carry? That was enough to keep me awake! I also heard this from a panelist: we teach our students how to use the tools but NOT how to interrogate them. I was nodding my head! I need to do this more in my next AI workshops.
One panelist asked: can we train students to shape AI for the good of all? Much work to be done there. One of the recurring themes at this conference was re-imagining medical education. It’s not just about integrating AI into the curriculum, but really taking a hard look at what, how, and why we teach, because the world is changing. Merely teaching students how to use the tools is going to fall short, I realize. Medical educators need to imbue them with the agency to SHAPE AI.
PANEL 3
I’m sure I missed a lot of the discussion as I was nodding off to sleep despite coffee. One panelist discussed how our current medical education values how much knowledge one can hold. But knowledge is not going to be the measure of success given the “knowledge” (quotes mine) AI has! In a previous presentation, I had talked about the media hype over this or that AI passing the medical board exam. I protested that being a physician is NOT JUST about knowledge. Medical educators know this and have known it for a long time. There are other dimensions beyond knowledge that make a physician someone who truly helps her patients. In the UP College of Medicine, we talk about the six-star physician. More than a decade ago, I wrote about the five-star physician here. To this, Dean Charlotte Chiong added “scientist with national fervor.”
Another panelist talked about how the focus of AI should be on EQUITY and not EFFICIENCY. I definitely agree with this! And that in itself is worth another blog post.

