Invited Speakers


Robert R. Hoffman, Ph.D.

Emeritus Senior Research Scientist, IHMC
Title: Old Lessons Uncovered, New Lessons Discovered: Capturing Knowledge for XAI
Abstract

Spanning the early 1980s to the present, work on Intelligent Tutoring Systems (ITSs) revealed many requirements and challenges that are manifest today in the topic of Explainable AI. Also spanning those decades was the work on knowledge capture for Expert Systems, which likewise revealed many requirements and challenges that resurface today in Explainable AI. The nexus formed by these endeavors is the following: an XAI system is a form of ITS, but the thing being explained is the AI itself. An XAI system must have within it a model of the knowledge of the user, and also a model of the pedagogical process. Clarity about the challenges arises from a consideration of the cognitive processes involved in how people explain and try to make sense of complex systems. Clarity about system requirements arises from a re-consideration of how large-scale AI research programs are conceived and designed. This presentation will focus on a set of Principles of Explanation that are specific to XAI, and a set of general Policy Recommendations that pertain to all research intended to evaluate the performance of human-AI work systems.

Bio

Hoffman is a recognized world leader in cognitive systems engineering and Human-Centered Computing. He is a Senior Member of the Association for the Advancement of Artificial Intelligence, a Senior Member of the Institute of Electrical and Electronics Engineers, a Fellow of the Association for Psychological Science, a Fellow of the Human Factors and Ergonomics Society, and a Fulbright Scholar. His Ph.D. is in experimental psychology from the University of Cincinnati. His postdoctoral associateship was at the Center for Research on Human Learning at the University of Minnesota. He served on the faculty of the Institute for Advanced Psychological Studies at Adelphi University. He has been Principal Investigator, Co-Principal Investigator, Principal Scientist, Senior Research Scientist, or Principal Author on over 60 grants and contracts, including alliances of university and private-sector partners. He has been a consultant to numerous government organizations. He has been recognized internationally in the fields of psychology, remote sensing, human factors engineering, intelligence analysis, weather forecasting, and artificial intelligence for his research on the psychology of expertise, the methodology of cognitive task analysis, human-centering issues for intelligent systems technology, and the design of macrocognitive work systems. His current work focuses on Explainable AI.


Dr. Jane Pinelis

Chief AI Engineer of the Applied Information Sciences Branch at JHU/APL
Title: Justified Confidence in Highly Consequential AI: Assuring AI-Enabled Systems for the Department of Defense
Abstract

As the implementation and adoption of Artificial Intelligence (AI) technologies continue to expand within the U.S. Department of Defense (DoD), the critical need for AI Assurance becomes increasingly apparent. Successful integration of AI into defense operations demands robust mechanisms to measure and ensure the effectiveness, reliability, security, and ethical use of these advanced systems. This talk delves into the multifaceted challenges of AI Assurance within the DoD, addressing key aspects such as algorithmic transparency, accountability, and the mitigation of potential risks. By examining best practices, emerging technologies, and policy frameworks, this presentation aims to shed light on the strategic imperatives required to build a resilient AI ecosystem that supports the DoD's mission while maintaining integrity in the application of AI.

Bio

Dr. Jane Pinelis currently serves as the Chief AI Engineer of the Applied Information Sciences Branch at Johns Hopkins University’s Applied Physics Laboratory (JHU/APL). She leads a diverse team of testers and analysts in rigorous test and evaluation (T&E) for JAIC capabilities, as well as the development of T&E-specific products and standards that will support testing of AI-enabled systems across the DoD.

Dr. Pinelis holds a BS in statistics, economics, and mathematics, an MA in statistics, and a Ph.D. in statistics, all from the University of Michigan, Ann Arbor. With a background in defense and national security spanning over 15 years, Dr. Pinelis has held a variety of key positions of responsibility. Prior to her current role, she served as the inaugural Chief of AI Assurance at the Chief Digital and AI Office (CDAO) and the Joint Artificial Intelligence Center (JAIC) at the Department of Defense, where she oversaw the Test and Evaluation and Responsible AI directorates.

Her career has largely focused on operational test and evaluation, both in support of the service operational testing commands and at the OSD level. Her leadership roles included managing the Test Science team that supports the Director of Operational Test and Evaluation (DOT&E), and, during her assignment at the Marine Corps Operational Test and Evaluation Activity, leading the design and analysis of the highly publicized study on the effects of integrating women into combat roles in the Marine Corps.