Artificial intelligence (AI) is embedded in nearly every aspect of our lives. Health care is no exception. At Stanford Medicine Children’s Health, we design AI to provide faster, safer, and more personalized care for all.
Ensuring the responsible use of AI
Your safety is our priority. That’s why, before any new AI tool is used, it must meet our high standards. We conduct a thorough review of each tool, from how it is built, to how it performs once in use, to how it sustains that performance over time, so we can ensure that it remains safe, effective, and reliable.
This review, which we call the Responsible AI Life Cycle, is conducted by a team of experts that includes academics, doctors, nurses, lawyers, computer scientists, executive leaders, and ethicists—with added insights from our Patient and Family Advisory Council. They evaluate each tool for bias, privacy protection, ethical concerns, and other important considerations. AI is designed to support the work of doctors and nurses. Your child’s care continues to be led by a clinical team that makes the final decisions.
Patient-centered, always
AI is a tool to support your care team, not a substitute for it. We adopt technologies that help our clinicians focus more on patients—improving communication, enabling more time for meaningful interactions, and enhancing decision-making.
Safe, responsible innovation
Before any new AI tool is used in our care delivery system, it undergoes rigorous clinical evaluation and oversight. We only implement technologies that meet the highest standards for safety, effectiveness, and reliability.
Your privacy matters
Protecting your personal health information is nonnegotiable. We adhere to strict data privacy and security protocols to ensure that your information stays safe.
Clear communication
We’re committed to taking the mystery out of AI and helping you understand how it is used whenever you encounter it during your care.