Active Grants (Completed grants can be viewed here)
Collaborative Research: Developing a Quantification System for Robot Moral Agency. Air Force Office of Scientific Research (AFOSR). In collaboration with Dr. Elizabeth Phillips (George Mason University, Lead institution) and Dr. Tom Williams (Colorado School of Mines).
Measuring or quantifying moral agency is of critical importance for human-robot interaction research. Although there have been some recent attempts to develop scales that might achieve this goal, these approaches do not align with the philosophical literature on machine moral agency and, moreover, mistake agency (which we argue to be an ontological state of being) for a psychological construct. In this work, we thus seek to develop a tool for quantifying moral agency that better aligns with the philosophical literature, which offers rigorous frameworks for conceptualizing machine moral agency. Specifically, we aim to create new methods for quantifying moral agency in which researchers (1) separately assess the core constructs of moral agency: capacity for moral action, interactivity, autonomy, and adaptability (the MIAA scales), and (2) logically combine the outputs of those scales. We will draw upon experimental psychological approaches to construct measure development and merge them with techniques rooted in mathematical logic and philosophical theory for determining robots’ ontological status as moral agents. We will also demonstrate, in empirical studies, the usefulness of the MIAA scales for assessing the moral agency of artificial agents, together with the logical procedures for combining the four constructs measured with the scales.
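To make the intended two-step logic concrete, here is a minimal illustrative sketch in Python of how separately measured MIAA scores might be logically combined into a binary judgment of moral agency. The data structure, threshold, and conjunction rule are assumptions for illustration only, not the project's actual procedure (which is still under development).

```python
# Hypothetical sketch of combining MIAA scale outputs. The names,
# threshold, and conjunction rule below are illustrative assumptions,
# not the project's actual procedure.
from dataclasses import dataclass

@dataclass
class MIAAScores:
    moral_action: float   # capacity for moral action, e.g., on a 1-7 scale
    interactivity: float
    autonomy: float
    adaptability: float

def is_moral_agent(scores: MIAAScores, threshold: float = 4.0) -> bool:
    """Treat moral agency as an ontological status: a robot either
    satisfies all four constructs or it is not a moral agent.
    The single shared threshold is a placeholder assumption."""
    return all(
        value >= threshold
        for value in (scores.moral_action, scores.interactivity,
                      scores.autonomy, scores.adaptability)
    )

# Example: adaptability falls below the threshold, so the conjunction fails.
print(is_moral_agent(MIAAScores(5.2, 4.8, 4.1, 3.6)))  # False
```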
Exploring the Eco-ethical Identity for Responsible AI Research among Faculty and Students. ICTAS Diversity & Inclusion Seed Investment Grant, Virginia Tech.
This ICTAS Diversity & Inclusion Seed Investment grant will allow me and my research group in Engineering Education to build direct faculty-to-faculty research partnerships with Dr. Megan Kenny Feister (Communication Program) and Dr. Sean Ferguson (Environmental Science & Resource Management) at California State University Channel Islands (CSUCI). Our cross-institutional research collaboration will focus on how faculty and students in AI-related fields make sense of the ecological impacts of AI technologies, and how these impacts relate to and/or challenge their professional identity. Despite the transformative benefits AI technologies will bring to us and future generations, researchers and organizations globally have grown concerned about the broader ecological impacts of AI technologies, including AI’s role in climate change. It has therefore become imperative for engineering educators to think creatively and critically about how to cultivate an “eco-ethical identity” among AI researchers, one that is responsive and reflexive to the ecological impacts of AI technologies.
Towards a Global Ethics of Artificial Intelligence (AI): Identifying and Comparing AI Ethical Perspectives across Sectors and Cultures. 4-VA Initiative + Virginia Tech Engineering Faculty Organization - Opportunity (EFO-O) Seed Investment Grant.
Approaches to AI ethical governance have been largely prescriptive in nature, concerned with outlining what should or should not be done, based on abstract Western ethical values and principles. Studies have shown that these guidelines rarely have an impact on decision-making in the field of AI and machine learning. As a collaborative effort between Virginia Tech (VT) and James Madison University (JMU), this project aims to maximize resources and leverage faculty expertise at the two institutions. This project seeks to expand current research in AI ethics, using interdisciplinary insights and methods from anthropology, digital humanities, computational social science, moral and cultural psychology, and human-technology interaction to promote more culturally diverse perspectives on AI ethics and in AI research. More specifically, this project will employ empirical methods to identify and compare how AI ethical issues are perceived across cultures that are leading contributors to global AI research (e.g., the United States, Europe, and China).
Formulating effective ethics policies for AI faces at least two daunting challenges. First, policymakers are often behind the curve when it comes to designing ethics policies responsive to advances in AI. There has been a disconnect between ethics policies and guidelines and actual research and practice in AI (Hagendorff, 2020). Effective AI ethics policymaking will thus benefit from the expertise and critical participation of AI professionals. It also requires that the professional curriculum for future AI scientists and engineers be informed by the expertise and concerns of working professionals. Second, in professional education, scientists and engineers have limited opportunities to learn how to contribute their expertise to science and technology policymaking. Students in AI-related professional programs are often unaware of career pathways through which their expertise can inform policymaking, or through which they can transition from AI-related fields to policy analysis or advocacy. To address these concerns, this project explores how scientists and engineers working in AI-related fields (e.g., machine learning, cybersecurity, robotics, and data analytics), particularly those who have received policy training as part of their professional education, (1) perceive ethics policy topics and skills central to their professional work, and (2) have developed careers that either focus on AI policy or are responsive to policy concerns.
Collaborative Research: Growing a Community of Compassionate Higher Education Teachers in Science, Technology, Engineering, and Mathematics (STEM). Templeton Foundation. See project website.
Our project aims to develop a community of compassionate teachers who are dedicated to bringing a loving mindset into their classrooms. This proposal targets STEM higher education teachers, but we hypothesize that shifts in teachers’ classroom attitudes and practices will also affect student character development. In the future, insights from this project can be extended to education in other fields and at other levels, particularly K-12 education. We are interested in exploring the following lines of inquiry: (1) What does a character of love (heart) mean in the STEM classroom in higher education, and in what ways might it be expressed so as to benefit students and teachers? (2) How can we grow a character of love in STEM teachers in higher education, and how is this shaped by their beliefs and practice? (3) How can a character of love be nurtured in STEM higher education teachers?
Teaching with a Heart will use workshops and subsequent community building among participants to help teachers become aware of their beliefs and attitudes about their teaching roles, reframe these beliefs and attitudes in a positive way, and incorporate a character of love into their teaching practice in STEM higher education.
ER2: Collaborative Research: Responsible Engineering across Cultures: Investigating the Effects of Culture and Education on Ethical Reasoning and Dispositions of Engineering Students. See NSF webpage.
The goal of this project is to identify the educational interventions with the greatest effects on the ethical reasoning and dispositions of engineering students, to determine whether these effects differ among cultural and national groups, and to establish if/how these interventions should be modified to respond effectively to cultural and national differences. To do so, researchers from the Colorado School of Mines, University of Pittsburgh, Delft University of Technology, and Shanghai Jiao Tong University will implement mixed-method, quasi-experimental, longitudinal, and cross-sectional research to: (1) determine the effects of culture and foreign language on the ethical perspectives of first-year engineering students; (2) assess the relative effects of culture and education on these perspectives over four years; and (3) use engineering ethics assessment tools across cultures and countries to examine their cross-cultural validity. Findings from this project will be essential for developing educational interventions that effectively respond to the globalized environments of contemporary engineering practice. They will also contribute to the development of more inclusive engineering education by identifying perspectives potentially marginalized in the reigning paradigms. Finally, this project has implications for the development of responsible research education at the graduate level: although graduate student bodies in STEM fields have become increasingly international, limited work has focused on developing culturally responsive ethics curricula for graduate students from diverse backgrounds.
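As one concrete (and much simplified) illustration of the kind of cross-cultural validity check such assessment tools require, the sketch below compares a scale’s internal consistency (Cronbach’s alpha) across two hypothetical respondent groups. The data, group names, and item counts are fabricated placeholders; the project’s actual analyses would be far broader (e.g., measurement invariance testing across languages and countries).

```python
# Simplified illustration of one ingredient of cross-cultural validity
# checks: comparing a scale's internal consistency (Cronbach's alpha)
# across groups. All data and group names below are fabricated.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x scale-items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
for group in ("cohort_A", "cohort_B"):  # hypothetical cultural groups
    latent = rng.normal(size=(200, 1))                        # shared trait
    scores = latent + rng.normal(scale=0.8, size=(200, 5))    # 5 scale items
    print(group, round(cronbach_alpha(scores), 2))
```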
CHS: Small: Collaborative Research: Role-Based Norm Violation Response in Human-Robot Teams, National Science Foundation (PI: Tom Williams, Co-PI: Qin Zhu). See NSF webpage.
Robots may need to carefully decide when and how to reject commands given to them if the actions required to carry out those commands are not morally permissible. Most previous work on this topic takes a norm-based ethical approach, in which a robot operates under a set of rules describing what states or actions are morally wrong and uses those rules to explain its actions. In contrast, this project explores a role-based perspective, in which the robot reasons about the relationships it holds with others, the roles it plays in those relationships, and whether the actions requested of it are benevolent with respect to those roles and relationships. Specifically, the researchers will develop a framework that allows robots to reason in this way and to generate explanations of their actions based on this reasoning. The researchers will then explore how role-based and norm-based command rejections compare in terms of how they affect human-robot teamwork, and design algorithms that allow robots to automatically decide what type of rejection to generate based on their context. These algorithms and explanations will be evaluated in two very different contexts with different types of relationships, roles, and rules: with civilian undergraduates at the Colorado School of Mines, and with Air Force cadets at the US Air Force Academy. This work will not only increase robots' ability to behave ethically and act as good teammates, but will also advance moral philosophy by providing experimental evidence for the relative importance and effectiveness of different tenets of role-based moral philosophy.
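For illustration only, the sketch below shows one way such a context-sensitive choice between rejection types might look in code. The features, names, and decision rule are invented assumptions, not the algorithms the project will actually develop or evaluate.

```python
# Illustrative-only sketch of context-sensitive command rejection,
# loosely inspired by the project description; the enum, features,
# and decision rule are invented for illustration.
from enum import Enum, auto

class RejectionStyle(Enum):
    NORM_BASED = auto()   # "That action would violate rule X."
    ROLE_BASED = auto()   # "As your teammate, doing that would not be benevolent."

def choose_rejection_style(relationship_salience: float,
                           norm_clarity: float) -> RejectionStyle:
    """Toy decision rule: appeal to the relationship when it is more
    salient than the applicable norm, otherwise cite the norm itself.
    Both features and the comparison are placeholder assumptions."""
    if relationship_salience > norm_clarity:
        return RejectionStyle.ROLE_BASED
    return RejectionStyle.NORM_BASED

# Example: a strong teammate relationship and a vague norm favor
# a role-based rejection.
print(choose_rejection_style(relationship_salience=0.8, norm_clarity=0.4))
```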