As artificial intelligence (AI) technologies continue to emerge and develop, they are beginning to be incorporated into the delivery of healthcare. AI systems are already being used to detect cancers and other conditions. They can interpret information in electronic health records and identify patients at risk for hospital-acquired infections or match them with available clinical trials. These technologies hold significant potential to improve healthcare delivery and health outcomes, but that potential comes with particular risks. The same features that make AI systems useful can also make them difficult to oversee. Undetected biases in the datasets used to train AI systems can cause them to produce systematically biased outputs. Further, when problematic or unexpected outputs are detected, the complexity of a system's internal operations can make it difficult to explain why they occurred, and intellectual property protections can further limit scrutiny.
To explore the ethical dimensions of AI systems in healthcare, the Center for Practical Bioethics, in collaboration with Cerner and other community partners, is conducting a workshop in August of 2019. Attendees will include a wide range of stakeholders involved in both the development and implementation of healthcare AI. While the workshop will include some short speaker sessions, the bulk of the event will consist of collaborative discussion exercises aimed at identifying specific ways that developers and users can evaluate adherence to an ethical framework for healthcare AI.
After the event, emergent themes and considerations will be identified, and CPB and partners will begin developing best practices. Attendees of the workshop, along with other interested community members, will be invited to contribute to the resulting report and to identify AI projects that could benefit from implementing the recommendations. For more information about the project, contact Matthew Pjecha at the Center for Practical Bioethics: email@example.com