Exploring the Eco-ethical Identity for Responsible AI Research among Faculty and Students. ICTAS Diversity & Inclusion Seed Investment Grant, Virginia Tech.
This ICTAS Diversity & Inclusion Seed Investment grant will allow me and my research group in Engineering Education to build direct faculty-to-faculty research partnerships with Dr. Megan Kenny Feister (Communication Program) and Dr. Sean Ferguson (Environmental Science & Resource Management) at California State University Channel Islands (CSUCI). Our cross-institutional research collaboration will focus on how faculty and students in AI-related fields make sense of the ecological impacts of AI technologies and how these impacts relate to, and/or challenge, their professional identities. Despite the transformative benefits AI technologies will bring to us and future generations, researchers and organizations around the world have recently grown concerned about the broader ecological impacts of these technologies, including AI’s role in climate change. It has therefore become imperative for engineering educators to think creatively and critically about how to cultivate an “eco-ethical identity” among AI researchers, one that is responsive and reflexive to the ecological impacts of AI technologies.
CHS: Small: Collaborative Research: Role-Based Norm Violation Response in Human-Robot Teams, National Science Foundation (PI: Tom Williams, Co-PI: Qin Zhu). See NSF webpage.
Robots may need to carefully decide when and how to reject commands given to them, if the actions required to carry out those commands are not morally permissible. Most previous work on this topic takes a norm-based ethical approach, where a robot would operate under a set of rules describing what states or actions are morally wrong, and use those rules to explain its actions. In contrast, this project explores a role-based perspective, in which the robot reasons about the relationships it holds with others, the roles it plays in those relationships, and whether the actions requested of it are benevolent with respect to those roles and relationships. Specifically, the researchers will develop a framework that allows robots to reason in this way and to generate explanations of their actions based on this reasoning. The researchers will then explore how role-based and norm-based command rejections compare in terms of how they affect human-robot teamwork, and design algorithms to allow robots to automatically decide what type of rejection to generate based on their context. These algorithms and explanations will be evaluated in two very different contexts with different types of relationships, roles, and rules: with civilian undergraduates at the Colorado School of Mines, and with Air Force cadets at the US Air Force Academy. This work will not only increase robots' ability to behave ethically and act as good teammates, but will also advance moral philosophy by providing experimental evidence for the relative importance and effectiveness of different tenets of role-based moral philosophy.
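The contrast between the two approaches can be pictured with a brief sketch. The Python snippet below is purely illustrative, not the project's actual framework or algorithms: the command, roles, duties, and forbidden actions are hypothetical examples chosen only to show how a norm-based rejection appeals to a rule, while a role-based rejection appeals to the robot's relationships and roles.

```python
# Illustrative sketch only: contrasts norm-based and role-based command
# rejection. All names, roles, and norms below are hypothetical examples.
from dataclasses import dataclass


@dataclass
class Command:
    requester: str   # e.g., "teammate"
    action: str      # e.g., "delete_logs"


# Norm-based reasoning: check the requested action against explicit rules.
FORBIDDEN_ACTIONS = {"delete_logs", "block_exit"}


def norm_based_response(cmd: Command) -> str:
    if cmd.action in FORBIDDEN_ACTIONS:
        return f"I cannot {cmd.action}: that action violates a rule I must follow."
    return f"Executing {cmd.action}."


# Role-based reasoning: check whether the requested action is consistent with
# (benevolent toward) the roles the robot plays in its relationships.
ROLE_DUTIES = {
    "teammate": {"assist", "report_status"},
    "safety_monitor": {"report_hazard", "halt_operation"},
}


def role_based_response(cmd: Command, robot_roles: list[str]) -> str:
    for role in robot_roles:
        if cmd.action in ROLE_DUTIES.get(role, set()):
            return f"As your {role}, I will {cmd.action}."
    return (f"I should not {cmd.action}: doing so would not be consistent with "
            f"my role(s) as {', '.join(robot_roles)} in this team.")


if __name__ == "__main__":
    cmd = Command(requester="teammate", action="delete_logs")
    print(norm_based_response(cmd))                              # cites a rule
    print(role_based_response(cmd, ["teammate", "safety_monitor"]))  # cites roles
```

In this toy example the same command is rejected either way, but the explanation differs: the norm-based response points to a violated rule, while the role-based response points to the robot's relationships and the duties attached to them, which is the distinction the project investigates empirically.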
Formulating effective ethics policies for AI faces at least two daunting challenges. First, policymakers are often behind the curve when it comes to designing ethics policies responsive to advances in AI. There has been a disconnect between ethics policies and guidelines and actual research and practice in AI (Hagendorff, 2020). Effective AI ethics policymaking will thus benefit from the expertise and critical participation of AI professionals. It also requires that the professional curriculum for future AI scientists and engineers be informed by the expertise and concerns of working professionals. Second, in professional education, scientists and engineers have limited opportunities to learn how to contribute their expertise to science and technology policymaking. Students in AI-related professional programs are often unaware of the career pathways through which their expertise can inform policymaking, or through which they might transition from AI-related fields to policy analysis or advocacy. To address these concerns, this project explores how scientists and engineers working in AI-related fields (e.g., machine learning, cybersecurity, robotics, and data analytics), particularly those who have received policy training as part of their professional education, (1) perceive the ethics policy topics and skills central to their professional work, and (2) have developed careers that either focus on AI policy or are responsive to policy concerns.
Towards a Global Ethics of Artificial Intelligence (AI): Identifying and Comparing AI Ethical Perspectives across Sectors and Cultures. 4-VA Initiative + Virginia Tech Engineering Faculty Organization - Opportunity (EFO-O) Seed Investment Grant.
Approaches to AI ethical governance have been largely prescriptive in nature, concerned with outlining what should or should not be done, based on abstract Western ethical values and principles. Studies have shown that these guidelines rarely have an impact on decision-making in the field of AI and machine learning. As a collaborative effort between Virginia Tech (VT) and James Madison University (JMU), this project aims to maximize resources and leverage faculty expertise at the two institutions. The project seeks to expand current research in AI ethics, using interdisciplinary insights and methods from anthropology, digital humanities, computational social science, moral and cultural psychology, and human-technology interaction to bring more culturally diverse perspectives to AI ethics and AI research. More specifically, the project will employ empirical methods to identify and compare how AI ethical issues are perceived across cultures that are leading contributors to global AI research (e.g., the United States, Europe, and China).
Convergence Accelerator Phase I (RAISE): Toward Fair, Ethical, Efficient, and Trustworthy Crowdsourcing Platforms to Support Crowdworkers in Jobs of the Future, National Science Foundation (PI: Chuan Yue, Co-PIs: Ben Gilbert and Qin Zhu). See NSF webpage.
The broader impact/potential benefit of this Convergence Accelerator Phase I project is multifaceted. Crowdsourcing has created a vast and rapidly growing online labor market. However, today's crowdsourcing platforms cannot adequately support crowdworkers, job requesters, or the healthy growth of this important online labor market, due to four major problems concerning fairness, ethics, efficiency, and trustworthiness. This project is a convergence of research and development from multiple intellectually distinct disciplines, including Computer Science, Economics & Business, and Humanities & Social Sciences. By pairing fundamental research with rapid development advances through partnerships with crowdsourcing platform providers, this project will deliver techniques that can be used to create fair, ethical, efficient, and trustworthy crowdsourcing platforms to support American crowdworkers. It will also enable job requesters, including researchers, companies, and government or humanitarian aid organizations, to receive high-quality and trustworthy task submissions so that they can confidently conduct their studies and make important decisions. This project will actively involve students from underrepresented groups, including female and minority students. It will train students in conducting research and producing high-quality deliverables, and it will widely disseminate its results through activities such as publishing research papers and promoting the wide use of the deliverables.
Mines Open Education Resources (OER) Grant: Value-based, Engaged & Self-reflective Ethics Pedagogies (VESEPs) in STEM
In science and engineering ethics education, there has been increasing interest in cultivating students’ moral sensitivity and capacity for self-reflection. However, dominant ethics learning modules are not well aligned with this learning objective on the “self-dimension” of professional education: most existing modules focus on applying either codes of ethics or ethical theories while overlooking the role of personal values in professional decision-making. With the support of this grant, we hope to develop a collection of ethics learning tools that help students critically examine the development of their own values, actively engage in meaningful ethics discussions, and develop self-reflective capabilities.
With the support of this grant, we plan to develop around 10 value-based, engaged, and self-reflective ethics learning tools that can be easily integrated into classes in technical and humanities disciplines that aim to cultivate students’ moral sensitivity and self-reflection. For each ethics learning tool, we will include: (1) a “ready-to-use” student worksheet with full student instructions; (2) a teaching guide that explains how to set up and implement the activity, including the contexts in which the tool can be used (e.g., classroom, field session, senior design class); and (3) a grading rubric to help instructors better assess students’ moral development and learning experience. We hope to share these ethics learning tools on the website of the National Academy of Engineering’s Ethics Center for Engineering and Science, where they will be reviewed by the Center’s editors; the tools will therefore potentially benefit students and instructors beyond the Mines campus. We will also work with our library colleagues to share these tools on a single website.
Mines Open Education Resources (OER) Grant: Ethics, society, and technology: A Confucian perspective
This project focuses on developing open education resources (OER) for Mines undergraduate and graduate classes that are interested in incorporating Confucian ethics into the classroom. We have published an essay on the fundamental teachings of Confucian ethics and their applications in understanding the complex relationships between ethics, society, and technology. The essay also includes a few pedagogical tools for enriching and evaluating students' experience of learning Confucian ethics of technology. We hope to publish this material in an open-access format.