Infographic Title: A Guide for Students: Should I Use AI?
A blue and green decision flowchart designed to help students determine whether it is appropriate to use an AI chatbot in various circumstances. The infographic reads:
Header: "Why do you want to use an AI chatbot? I want it to..."
Additional considerations if the answer is yes and using AI would be appropriate for your circumstance:
Key Vocabulary:
NOTE: This policy is listed in the 2023-24 Saltire and Academic Catalog.
The use of Artificial Intelligence tools presents both opportunities and challenges in the academic realm. Like any tool, they can be used properly or improperly; some uses always run counter to the educational mission of the institution and constitute cheating. Namely, submitting as one's own work essays, homework assignments, or exam answers that were written entirely or in part by Artificial Intelligence tools, without proper citation, constitutes an academic integrity violation.
Individual faculty members may permit Artificial Intelligence tools based on the standards of their academic disciplines and the learning goals of their particular courses. Faculty are required to make clear in course syllabi whether, under what conditions, and for what purposes Artificial Intelligence tools are permitted, as well as include specific citation guidelines appropriate to a particular course or assignment.
Students are required to follow each professor's guidelines on the use and documentation of Artificial Intelligence tools. Students may be asked to state what program was used, how it was used, and the date it was used. Failure to follow a professor's guidelines on Artificial Intelligence tools will constitute an academic integrity violation.
The resources listed below discuss bias in AI. Bias can be built into AI tools when algorithms learn from data and text that contain errors or distortions that reinforce inequalities in society.
Black Artists See Clear Bias in A.I. New York Times, 05 July 2023
Towards a standard for identifying and managing bias in artificial intelligence
Chatbots can unintentionally generate plausible-sounding answers that are false. The New York Times reported in November 2023 that these "hallucinations" can occur in 3% to 30% of generative AI queries. See the article links below for more information.
More specifically, when ChatGPT is asked to generate citations, it may create references to sources that do not exist. For example, a real author might be attached to a made-up journal, or an actual title may be paired with the wrong publication details or dates.
Hallucinations by ChatGPT and other generative models are unintentional. But AI images, audio, and text can also be created with the deliberate intention of spreading false information. See the links below for more information:
Chatbots and other forms of AI use large amounts of processing power. As the technology expands, carbon emissions may rise. The articles here discuss these concerns and potential solutions.
Aligning artificial intelligence with climate change mitigation A research paper from Nature Climate Change about measuring and minimizing greenhouse gas emissions from AI and machine learning.
Many AI tools function at the expense of underpaid workers in the United States and around the world.
The links below include groups that are studying the ethical implementation of AI technology. Other groups on this list are working to change AI policy or are pioneering more ethical uses of the technology. Some of the links included here are publications by or about these organizations.
HAI: Human-Centered Artificial Intelligence The Stanford Institute for Human-Centered Artificial Intelligence (HAI) works to advance AI research, education, policy and practice to improve the human condition.
The AI Index Report: Measuring Trends in Artificial Intelligence Stanford University, Human-Centered Artificial Intelligence
RAISE: Responsible AI for Social Empowerment and Education An initiative at MIT to innovate learning and education in the era of AI.
Latimer A large language model trained with diverse histories and inclusive voice.
The Black GPT: Introducing The AI Model Trained With Diversity And Inclusivity In Mind An October 2023 article from People of Color in Tech about Latimer.
UNESCO Recommendations on the Ethics of Artificial Intelligence