CS4340 Trustworthy and Responsible Artificial Intelligence

More and more critical activities are performed by or with the help of automation under the banner of "artificial intelligence". Finance, housing, medicine, communication, criminal justice, and combat systems all have been or stand to be revolutionized by this change. Yet such systems have often suffered significant failures, producing unanticipated, undesired, or unfair effects, and these technologies are not subject to the same level of safety and reliability engineering as other engineered systems. This course examines the need for, and approaches to, developing and sustaining automated systems in trustworthy, responsible, and ethical ways for everyday and national security applications. The course frames these technologies as complex sociotechnical systems whose trustworthiness is affected by technical capabilities, by interaction with both the users and the subjects of the system's behaviors, and by context such as operative law, policy, and doctrine. Specific focus is given to the tools, techniques, and processes that can establish assurance within such systems, and to the ways in which assurance does and does not support inquiry into the ethical dimensions of the system's behaviors. In turn, these methods and tools support the development of systems consistent with ethical principles and core values, such as the DoD AI Principles.

The course uses a combination of lectures, discussions of readings, a quarter-long course project, and detailed analysis of failure and success cases to demonstrate the current capabilities and limitations of engineering computational automation. Students will be exposed to cutting-edge research and practice in creating trustworthy and responsible AI.


Prerequisites

Familiarity with programming at the level of CS2020 and with the basics of artificial intelligence/machine learning at the level of CS3310 or CS3315, or permission of the instructor.

Lecture Hours


Lab Hours


Course Learning Outcomes

After completing this course, students will be able to:

  • Recognize common failure modes with ethically salient consequences in computing systems.
  • Assess risks related to the use of computational automation for a variety of purposes.
  • Prescribe and design interventions that mitigate risks from automation, including recommending when and why automation is inappropriate for a given problem.
  • Describe computational tools as components of complex sociotechnical systems, and explain the value of this approach for establishing confidence in the functional and ethical dimensions of system behavior and outcomes.