Current AI systems pose significant risks because they are overconfident: in healthcare especially, they present false information with the same certainty as accurate information, encouraging a dangerous automation bias in which humans over-rely on seemingly confident systems. We propose embedding "curiosity and humility" into AI architecture: designing systems that actively assess their own limitations, provide calibrated confidence estimates, issue explicit warnings when operating outside their training data, and implement escalation protocols that engage human experts when uncertainty is high. Rather than pursuing algorithmic certainty, we advocate transparent human-AI partnerships that amplify the human capacity for ethical reasoning and compassionate care, measuring success not only by accuracy but by how well systems promote thoughtful collaboration and equitable outcomes across diverse populations. The goal is an AI that acts as a humble partner, one that knows when to pause, question its outputs, and defer to human insight, rather than replacing human judgment with overconfident automation.
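As an illustration only, the sketch below shows one way the calibration, out-of-distribution warning, and escalation ideas from the abstract might be wired together around an existing model. All names, thresholds, and components (humble_predict, the calibration map, the OOD detector) are hypothetical assumptions for the example, not the speakers' implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable


class Route(Enum):
    """How a prediction should be handled downstream."""
    AUTO_OK = auto()   # confident and in-distribution: safe to surface
    WARN = auto()      # surface, but with an explicit uncertainty warning
    ESCALATE = auto()  # defer to a human expert before acting


@dataclass
class HumblePrediction:
    label: str
    confidence: float  # calibrated probability, not the raw model score
    ood_score: float   # higher = further from the training distribution
    route: Route
    rationale: str


def humble_predict(
    features: dict,
    model: Callable[[dict], tuple[str, float]],  # hypothetical base model
    calibrate: Callable[[float], float],         # e.g. a fitted temperature/Platt map
    ood_score_fn: Callable[[dict], float],       # hypothetical OOD detector
    min_confidence: float = 0.85,                # illustrative thresholds
    max_ood_score: float = 0.5,
) -> HumblePrediction:
    """Wrap a base model with calibration, OOD warnings, and escalation."""
    label, raw_score = model(features)
    confidence = calibrate(raw_score)
    ood = ood_score_fn(features)

    if ood > max_ood_score:
        route, why = Route.ESCALATE, "input looks unlike the training data"
    elif confidence < min_confidence:
        route, why = Route.WARN, "calibrated confidence below the action threshold"
    else:
        route, why = Route.AUTO_OK, "confident and in-distribution"

    return HumblePrediction(label, confidence, ood, route, why)


if __name__ == "__main__":
    # Stub components standing in for a real diagnostic model and detectors.
    prediction = humble_predict(
        features={"age": 54, "marker_x": 2.3},
        model=lambda f: ("condition_a", 0.91),
        calibrate=lambda s: s * 0.8,   # toy calibration map
        ood_score_fn=lambda f: 0.7,    # pretend this case is unusual
    )
    print(prediction.route.name, "-", prediction.rationale)
```

The design choice here mirrors the abstract's framing: the wrapper never hides uncertainty behind a single score, and its default behaviour when the input looks unfamiliar is to pause and hand the case to a human rather than act automatically.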